GreenTouch release tools and technologies to significantly reduce mobile networks' energy consumption

Mobile Phone

Mobile industry consortium GreenTouch today released tools and technologies which, they claim, have the potential to reduce the energy consumption of communication networks by 98%.

The world is now awash with mobile phones.

According to Ericsson’s June 2015 mobility report [PDF warning], the total number of mobile subscriptions globally in Q1 2015 was 7.2 billion. By 2020, that number is predicted to increase by another 2 billion, to 9.2 billion subscriptions.

Of those 7.2 billion subscriptions, around 40% are associated with smartphones, and this number is increasing daily. In fact, the report predicts that by 2016 the number of smartphone subscriptions will surpass those of basic phones, and smartphone numbers will reach 6.1 billion by 2020.

Number of connected devices

When you add to that the number of connected devices now on mobile networks (M2M, consumer electronics, laptops/tablets/wearables), we are looking at roughly 25 billion connected devices by 2020.

That’s a lot of data being moved around the networks. And, as you would expect, that number is increasing at an enormous rate as well. There was 55% growth in data traffic between Q1 2014 and Q1 2015, and a 10x growth in smartphone traffic is expected between 2014 and 2020.

So how much energy is required to shunt all this data to and fro? Estimates cite ICT as being responsible for 2% of the world’s energy consumption, with mobile networking making up roughly half of that. With the number of smartphones set to more than double globally between now and 2020, that figure too is shooting up.

Global power consumption by telecommunications networks

Fortunately, five years ago an industry organisation called GreenTouch was created by Bell Labs and other stakeholders in the space, with the objective of reducing mobile networking’s energy footprint. In fact, the goal of GreenTouch when it was created was to come up with technologies to reduce the energy consumption of mobile networks 1,000x by 2015.

Today, June 18th in New York, they announced the results of their last five years’ work: they have come up with ways for mobile companies to reduce their consumption, not by the 1,000x they were aiming for, but by 10,000x!

The consortium also announced

research that will enable significant improvements in other areas of communications networks, including core networks and fixed (wired) residential and enterprise networks. With these energy-efficiency improvements, the net energy consumption of communication networks could be reduced by 98%.
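
One way to relate these headline figures is to convert an efficiency factor into a percentage saving. The sketch below is illustrative arithmetic only, not from the announcement; it assumes traffic stays constant:

```python
# Converting an N-fold efficiency improvement into the percentage
# reduction in energy use (illustrative; assumes constant traffic).
def percent_reduction(factor):
    return (1 - 1 / factor) * 100

mobile_access = round(percent_reduction(10_000), 2)  # 99.99
# The 98% net figure across all network types corresponds to a ~50x cut:
all_networks = round(percent_reduction(50), 2)       # 98.0
```

This shows why the net figure (98%) is lower than the mobile-only factor: the 10,000x improvement applies only to one part of the overall network.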

And today GreenTouch also released two tools for organisations and stakeholders interested in creating more efficient networks, GWATT and Flexible Power Model.

They went on to announce some of the innovations behind this huge potential reduction in mobile energy consumption:


  • Beyond Cellular Green Generation (BCG2) — This architecture uses densely deployed small cells with intelligent sleep modes and completely separates the signaling and data functions in a cellular network to dramatically improve energy efficiency over current LTE networks.
  • Large-Scale Antenna System (LSAS) — This system replaces today’s cellular macro base stations with a large number of physically smaller, low-power and individually-controlled antennas delivering many user-selective data beams intended to maximize the energy efficiency of the system, taking into account the RF transmit power and the power consumption required for internal electronics and signal processing.
  • Distributed Energy-Efficient Clouds – This architecture introduces a new analytic optimization framework to minimize the power consumption of content distribution networks (the delivery of video, photo, music and other larger files – which constitutes over 90% of the traffic on core networks) resulting in a new architecture of distributed “mini clouds” closer to the end users instead of large data centers.
  • Green Transmission Technologies (GTT) – This set of technologies focuses on the optimal tradeoff between spectral efficiency and energy efficiency in wireless networks, optimizing different technologies, such as single user and multi-user MIMO, coordinated multi-point transmissions and interference alignment, for energy efficiency.
  • Cascaded Bit Interleaving Passive Optical Networks (CBI-PON) – This advancement extends the previously announced Bit Interleaving Passive Optical Network (BiPON) technology to a Cascaded Bi-PON architecture that allows any network node in the access, edge and metro networks to efficiently process only the portion of the traffic that is relevant to that node, thereby significantly reducing the total power consumption across the entire network.

Now that these innovations have been released, mobile operators hoping to reduce their energy costs will be looking closely at how they can integrate these new tools and technologies into their networks. For many, realistically, the first opportunity to architect them in will be with the rollout of 5G networks post-2020.

Mobile phone mast

Having met (and exceeded) its five year goal, what’s next for GreenTouch?

I put this question to GreenTouch Chairman Thierry Van Landegem on the phone earlier in the week. He replied that the organisation is now looking to set a new, bold goal. They are looking at the energy efficiency of areas such as cloud, network virtualisation and the Internet of Things, and will likely announce their next objective early next year.

I can’t wait to see what they come up with next.

Mobile phone mast photo Pete S


Schneider Electric – focussed on making organisations more efficient

Schneider Influencer Summit

We were invited to attend this year’s Schneider Electric Influencer Summit and jumped at the chance. Why? Schneider Electric is a fascinating company with fingers in lots of pies, and we were keen to learn more about it.

Schneider Electric was founded in 1836, so the company is coming up on 180 years old. Schneider reported revenue of almost €23.5bn in 2013, of which €1.9bn was profit, and employs in the order of 152,000 people globally. So, not an insignificant organisation.

The Influencer Summit coincided with the opening of its Boston One campus, Schneider Electric’s new facility in Andover. This site is now Schneider’s main R&D lab, as well as its North American HQ. Situating its main R&D labs in its HQ says a lot about how Schneider views the importance of research and development. In fact, at the event Schneider EVP and North American CEO Laurent Vernerey reported that Schneider devotes 4-5% of sales to R&D annually.

At the influencer event, we discovered the breadth of Schneider’s portfolio went far beyond what we were aware of. Not only are they heavily involved in electrical automation, control and distribution systems, but they also help make highly energy efficient data centres (they bought APC back in 2007), they have building management solutions, a cybersecurity suite (developed especially for critical infrastructure), water management solutions, a smart cities business, a weather forecasting arm (with a staff of 80 meteorologists!), and a strong services division. See, fingers in lots of pies!

Schneider Electric, as its name suggests, was traditionally more of a hardware company, but with the move to the digitisation of infrastructure, that has changed fundamentally, and Schneider is now very much a software company as well as a hardware one. Of the 20,000 employees in North America, 1,200 are software engineers.

This digitisation of infrastructure is happening at an ever increasing pace, helped by the constantly falling price of electronics and sensors. If it costs a mere $2.50 to put an SoC on a piece of infrastructure, why wouldn’t you do it? Particularly when adding the SoC makes the device IP addressable. Now it can report back on its status in realtime. As Schneider CMO Chris Hummel said, “connected systems will fundamentally change everything”.

Addressing potential security issues associated with making critical infrastructure IP addressable, Schneider said that connected devices are more secure than disconnected ones, because they can be monitored and everything that’s done to them can be tracked.

With that in mind, it is not surprising that Schneider is a member of the Industrial Internet Consortium.

While it is always instructive to hear a company’s executives talking about their organisation, it is far more interesting to hear their customers speak. And this event didn’t disappoint on that score. The customer speaker in this case was Todd Isherwood, the Energy Efficiency and Alternative Energy project manager for the City of Boston. Todd discussed how the City of Boston, with 15,000 employees, 2,700 utility accounts and a $50m electricity spend, was working with Schneider Electric on its journey to becoming a more sustainable city.

Boston launched its Greenovate Boston campaign and passed its Building Energy Reporting and Disclosure Ordinance (BERDO). This Ordinance requires Boston’s large- and medium-sized buildings to report their annual energy and water use to the City of Boston, after which the City makes the information publicly available. All of which will have helped Boston achieve its ranking as the most energy-efficient city in the US.

The biggest takeaway from the event though, was that Schneider Electric is, at its core, hugely interested in helping organisations become more efficient. And seemingly for all the right reasons. That’s not something you can say about many companies. And because of that, we’ll be watching Schneider with great interest from here on out.

Disclosure – Schneider Electric paid my travel and accommodation expenses to attend this event.


Technology for Good – episode twenty one with Gary Barnett

Welcome to episode twenty one of the Technology for Good hangout. In this week’s episode we had industry analyst Gary Barnett as a guest on the show. As well as being a fellow industry analyst, Gary is an old friend, so we had a lot of fun discussing this week’s crop of stories. We had some connectivity issues at times during the hangout, unfortunately, but that didn’t stop us from having a very interesting discussion about topics as diverse as climate, energy efficiency, and communications.

Here are the stories that we discussed in this week’s show:

World’s energy systems vulnerable to climate impacts, report warns
Peak Coal: Why the Industry’s Dominance May Soon Be Over


Cable TV boxes become 2nd biggest energy users in many homes
Microsoft Supercharges Bing Search With Programmable Chips


Amazon strikes back, launching speedy solid-state block storage one day after Google


Google’s Balloon Internet Experiment, One Year Later
Facebook has built its own switch
Antitheft Technology Led to a Dip in iPhone Thefts in Some Cities, Police Say
Google and Microsoft add phone kill switch
Fire Phone against the world: can Amazon take on iOS and Android


Ahead of Apple’s HealthKit, WebMD app now tracks health & fitness data from connected accessories


Harley-Davidson’s First Electric Motorcycle Surprisingly Doesn’t Suck


Microsoft launches new startup accelerator in Redmond, focusing on home automation and the ‘Internet of Things’
Swiss based encrypted email service, brought to you by CERN and MIT scientists.


Autodesk’s Farnborough office going for LEED certification

Autodesk UK recently moved offices to a facility in Farnborough. In their previous offices they had occupied several floors, so they set out to find offices where all their staff could be on the same floor, and yet have plenty of access to light. They also wanted to drastically reduce their footprint, so they took great care to make the office as green as possible (given that it was a retrofit, not a new build), and they have applied for LEED Gold certification for the office.

I visited with Autodesk in Farnborough last week and I was extremely impressed with the steps they have taken, as well as with the pride Autodesk rightfully show for the ongoing benefits of this project.

Some of the highlights:

The construction
  • 94% of construction waste recycled/diverted from landfill
  • All energy/water consumption measured and monitored on site
  • The site was registered with the Considerate Constructors Scheme and achieved a score of 34 (85%)

Low energy lighting in Autodesk UK office

  • High percentage of FSC timber sourced
  • All new furniture contains a high percentage of recycled/recyclable content. Old furniture was re-used where possible, and all unused items were diverted from landfill (e.g. donated to charity)
  • All paints, sealants and adhesives have been sourced with a low Volatile Organic Compound (VOC) content, to minimise chemicals and maximise occupant well-being
  • New materials were selected with high recycled content
  • A high proportion of new materials were manufactured within 500 miles
  • Secure bicycle racks, lockers, shower and changing facilities are provided for cyclists
  • 10% of parking spaces are allocated to car sharers
  • Water consumption has been reduced by at least 20% through the installation of water-efficient taps, shower fittings, WCs and urinals
  • Occupancy sensors have been installed on more than 90% of the lighting load, and daylight controls on more than 50% – meaning lights are not left on, or lighting areas, unnecessarily
  • Air conditioning equipment has been zoned to provide control suited to solar exposure and to ensure employee comfort
  • Recycling facilities have been built into the layout to ensure recycling wherever possible
  • All new electrical appliances are Energy Star rated
  • Desks have been located to maximise natural daylight and external views
The company also has a 6-seater TelePresence suite to reduce the amount of business travel its employees need to do. And Autodesk facilitates employees who wish to work from home – so much so that around 50% of staff take advantage of this – reducing Autodesk’s property footprint and the number of commute miles its workers undertake.

Autodesk’s Singapore office was awarded LEED Platinum certification earlier this year – with any luck when I’m in Singapore in November I’ll get a chance to check it out!

Full disclosure – Autodesk is not a GreenMonk client and the trip to visit Autodesk’s Farnborough facilities was undertaken entirely at GreenMonk’s expense.

Image credit Tom Raftery


Data Center War Stories talks to the Green Grid EMEA Tech Chair Harqs (aka Harkeeret Singh)

And we’re back this week with another instalment in our Data Center War Stories series (sponsored by Sentilla).

In this episode I talked to the Green Grid’s EMEA Tech Chair, Harqs (also known as Harkeeret Singh).

The Green Grid recently published a study on the RoI of energy efficiency upgrades for data center cooling systems [PDF warning]. I asked Harqs to come on the show to discuss the practical implications of this study for data center practitioners.

Here’s the transcript of our conversation:

Tom Raftery: Hi everyone, welcome to GreenMonk TV, this is the Data Centers War Stories series sponsored by Sentilla. The guest on the show today is Harkeeret Singh aka Harqs. And Harqs is the EMEA Tech Chair of Green Grid. Harqs welcome to the show.

Harqs: Thank you Tom.

Tom Raftery: So Harqs, as the Tech Chair of the Green Grid in the EMEA region, you get a larger overview of what goes on in data centers than some of the previous interviewees we’ve had on the show.

We have been talking about kind of the war stories, the issues that data centers have come across and ways they have resolved them. I imagine you would have some interesting stories to tell about what can go on in data centers globally. Do you want to talk a bit about some of that stuff?

Harqs: The Green Grid undertook a study which implemented a lot of the good practices that we all talk about, in terms of improving air flow management, increasing temperature, and putting in variable frequency drives. And what we did is, after each initiative, we measured the benefit from an energy consumption perspective.

And Kerry Hazelrigg from the Walt Disney Company led the study, in a data center in the southeast of the US. And we believe that it is representative of most data centers, probably pre-2007, so we think there is a lot of good knowledge in here which others can learn from.

So I am going to take you through some of the changes that were made and some of the expectations, but also some of the findings, some of which we weren’t expecting. So starting with — there were five different initiatives. The first initiative was implementing the variable speed drives. And what we found was, they installed 14 new CRAC and CRAH units which had variable frequency drives as standard, and then they retrofitted 24 existing CRAHs, taking out the fixed speed drives and putting in variable speed drives.

The expectation was we would find a reduction in energy consumption and fan horsepower. And also there was the potential of looking at always providing cooling to the right place in the data center. And once we put those in, we found that they didn’t actually introduce any hotspots, which was a positive thing. But some of the things were a little different from what we expected: the PUE didn’t reflect the savings. That was because there were external factors, things like the external weather, which impacted the PUE figure as well. So you need to bear that in mind as you make changes. You need to look at the average across the year.

The other issue that we found was that putting in the variable speed drives introduced harmonics to the power systems. And that came through the monitoring tools, and so they put in filtering to help resolve those harmonics.

The last issue was also around skills, so they had to train the data center staff on using variable frequency drives and actually maintaining them. This was the biggest power saving, it was a third of the overall saving, and the saving in total was 9.1% of energy consumption. That saved something in the order of $300,000 in terms of real cash, and the PUE went down from 1.87 to 1.64 by doing these five initiatives.

The second initiative was actually putting in the air flow management. So things like the blanking panels and the floor grommets, putting in the cold tiles where they are supposed to be, and that was for around 7 inch cabinets. And the findings were that that reduced the cold aisle temperature, because you have less mixing, and also increased the temperature on the hot aisle, in terms of the temperatures going back to the CRAH. So that was interesting.

We saw that being a key enabler to actually increasing temperature, so you have colder cold aisles and hotter hot aisles because of the reduced mixing. There weren’t any energy savings for this piece in itself, but the airflow management activity is an enabler, in that it allows you to then do some optimization and also to increase temperature without risk.

The third activity was relocating the sensors that the CRAHs work off, away from the return to the CRAC and CRAH – which is what most data centers use today – to sensors on the front of the cabinets. So actually moving from return air to supply, and that’s the specification that ASHRAE provides: that’s what we should be controlling, the temperature and the humidity of the air going into the servers. They themselves say they don’t really care what the temperature is coming out of the back of the servers. Well, the rest of us do, from the point of view of making sure that it’s not too hot for our data center operators.

So what we did was move those sensors to the front of the cabinets, and that optimized the fan speeds and actually started to raise the temperature of the cold air that was required by the servers. It did take them a little while getting the locations right, moving them around as much as possible and looking at the CFD to make sure they were optimizing and putting them in the right place eventually. And that was a small improvement, but it was again another enabler for increasing temperature. So there was only a few percent improvement by doing that, but what it does is, when you start to look at increasing temperature, you are increasing temperature at the right point.

Tom Raftery: So how much did it increase temperature by — was it like from 20 to 25 or… —

Harqs: That’s a good question, as the next initiative they did was increasing the temperature, so I was just about to come to that. So they went from 18°C, which was their original set point, and they took it up to 22°C. Now obviously that’s still in the middle of the ASHRAE standard, so there is still more scope there to become better. But it wasn’t just increasing the temperature in the room, it was actually increasing the temperature of the chiller plant, which is where the biggest savings were. So if you increase the temperature of the room, that then allows you to increase the temperature of your chiller plant.

And that’s what they did – they increased the set point of their chiller plant from 6.7°C to just under 8°C. And what they found was, there were significant savings due to the reduction in compressor and condenser fan power. And – I’m going to do this in degrees F, because they calculated in degrees F – they went from 44 to 46°F. For every degree F they increased the set point of the chiller, they found that reduced just over 50 kilowatts of chiller energy consumption.

Now in terms of other people’s data centers, your mileage may vary depending on the configuration and where you are, but that’s what their significant saving was. By doing it this way – putting the air flow management in place, then increasing the temperature in the room, then increasing the set points of the chiller plant – they found that it made no significant impact on the data center in terms of hot spots or anything like that. So there is no detrimental impact to the data center from doing this. Obviously the saving of the energy was a positive, and saved real money.

Tom Raftery: Alright, Harqs, that was great, thanks a million for coming on the show.
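
As an aside, the chiller numbers Harqs quotes lend themselves to a quick back-of-the-envelope calculation. The ~50 kW per degree F figure is from the study; continuous operation and the $0.10/kWh tariff below are my own assumptions, so treat the dollar figure as purely illustrative:

```python
# Rough sketch of the saving from raising the chiller set point 44°F -> 46°F.
KW_SAVED_PER_DEG_F = 50      # "just over 50 kilowatts" per °F, per the study
setpoint_rise_f = 46 - 44    # degrees F
HOURS_PER_YEAR = 8760        # assumes the chiller runs continuously
TARIFF_USD_PER_KWH = 0.10    # assumed electricity price, not from the study

power_saved_kw = KW_SAVED_PER_DEG_F * setpoint_rise_f      # 100 kW
energy_saved_kwh = power_saved_kw * HOURS_PER_YEAR         # 876,000 kWh/year
annual_saving_usd = energy_saved_kwh * TARIFF_USD_PER_KWH  # ≈ $87,600/year
```

Even at a modest assumed tariff, a two-degree set-point rise on its own is worth tens of thousands of dollars a year, which squares with the study's overall $300,000 figure for all five initiatives combined.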


ArcelorMittal FCE roll out Organisational Risk Management software to unify maintenance processes

Organisational risk management (ORM) is the new hotness in the sustainability field. It is receiving increasing attention, as SAP’s Jeremiah Stone mentioned when I talked to him at SAP’s Sapphire/TechEd event last week. One assumes that it is receiving this increasing attention at least partly because that’s where the customer dollar is focussed right now.

What exactly is ORM? Organizational risk management is “risk management at the strategic level”, according to the Software Engineering Institute – think of it as kind of an amalgam of the traditional Environment, Health and Safety (EH&S) and Governance, Risk and Compliance (GRC) sectors.

How does this fit into the sustainability agenda? Well, risk mitigation is all about reducing the risk of an adverse event occurring – one that hurts either people or the company’s reputation (or often both!). It does this by mandating procedures and processes with verifiable sign-offs, and by scheduling maintenance and raising alerts when equipment goes out of tolerance. Properly scheduled maintenance of machinery will ensure it not only runs more safely, but often also stays more fuel efficient. This can mean significant energy savings in organisations which use a lot of power.

While at SAP’s TechEd/Sapphire last week I spoke with Edwin Heene, who works with ArcelorMittal and is responsible for the rollout of their ORM software solution. I had a quick chat with him to camera about the project and why ArcelorMittal embarked on it.

Here’s a transcript of our conversation:

Tom Raftery: Hi everyone, welcome to GreenMonk TV. We are at the SAP Sapphire event in Madrid, and with me I have Edwin Heene from ArcelorMittal. Edwin, you have been involved in – or rather, are involved in at the moment – the rollout of an organizational risk management project for ArcelorMittal. Can you tell us a little bit about that?

Edwin Heene: So in ArcelorMittal Flat Carbon Europe we are doing a global organizational standardization project in maintenance, which also includes the safety processes, and this is something that we are doing in several countries – namely eight countries in Flat Carbon Europe.

Tom Raftery: So maybe we should give it a bit of context first. Who are ArcelorMittal? You are a large steel company, but could you just give us a little bit of background about the company first?

Edwin Heene: ArcelorMittal is the largest steel producing company in the world, covering about 6% of annual production. It had about 260,000 employees in 2010. And it has a presence in Flat Carbon Europe, because that’s the segment where I work. Flat Carbon Europe covers eight different countries, and we have about 35 locations.

Tom Raftery: Okay, so as I mentioned at the start, you are in the middle of this organizational risk management software rollout. Can you talk to us a little bit about that?

Edwin Heene: So, in 2008 we selected the SAP solution to support us with this harmonization, and there we found out that there was a good supporting tool for operational risk management and safety processes – namely the PWCM (Work Clearance Management) solution.

Tom Raftery: Okay, and you brought SAP in to help you in a kind of collaborative role in developing the application for yourselves.

Edwin Heene: Yeah, because in Flat Carbon Europe we already had a number of plants that are at a good level of safety, managing the safety risks and so on in the operational part with some legacy systems. With the decision to go to one common system, the SAP system, we had to convince the other people, who already had a good supporting IT tool, to move over to SAP.

And there we found out that there were some gaps still in the supporting SAP PWCM solution. So we had a number of meetings with Jeremiah Stone from SAP, who is leading co-innovation programs in SAP, and there we decided, in order to close these gaps and provide these functionalities in the standard SAP environment, to step into a co-innovation program with SAP.

Tom Raftery: Okay and why roll it out — what was the reason behind the rollout of the application?

Edwin Heene: The reason behind the harmonization and standardization program in Flat Carbon Europe is, first of all, to improve the maintenance processes – in effect, implementing all the best practices that we have in the several plants. You have a best practice in every single plant; we absorb that into one common business model and implement that in all the different plants. Through this implementation of best practices you get business results, operational results, in every single plant – benefiting from being in a large group and learning from each other, learning from the best practice of another plant.

Tom Raftery: Excellent, Edwin, that’s been great, thanks a million.

Edwin Heene: Thank you.


Japan achieves its 15% energy reduction goal


I wrote a post a number of weeks back where I talked about how TEPCO were using realtime data to help manage energy demand in Japan. Towards the end of the piece I speculated on whether or not Japan would be able to maintain the effort through August – their hottest month.

You will remember that after the March earthquake, Japan had to shut down all but 15 of its 54 nuclear power plants. This forced the Japanese government to issue an order on July 1st obliging large-scale users of electricity (>500kW) to cut their consumption by 15%. They also asked households and small businesses to do likewise, but the cut was not legally binding on them.

Well, according to the New York Times, Japan made it through the month of August, and so successful was the effort that this month, ahead of schedule, the government lifted all restrictions on power use. This despite the nuclear power stations not being turned back on.

This is an amazing success story and goes to show how, when a people are properly motivated (in this case with a sense of national pride and unity), they can achieve the seemingly impossible.

The downside of this story is that, in the absence of nuclear power, Japan is now burning far more fossil fuels to meet its energy requirements. Hopefully, they’ll transition away from fossil fuels and onto renewables to make up for the shortfall in their generation needs.

Photo credit Tom Raftery


Carbon Disclosure Project’s emissions reduction claims for cloud computing are flawed

data center

The Carbon Disclosure Project (CDP) is a not-for-profit organisation which takes in greenhouse gas emissions, water use and climate change strategy data from thousands of organisations globally. This data is voluntarily disclosed by these organisations and is CDP’s lifeblood.

Yesterday the CDP launched a new study, Cloud Computing – The IT Solution for the 21st Century, a very interesting report which

delves into the advantages and potential barriers to cloud computing adoption and gives insights from the multi-national firms that were interviewed

The study, produced by Verdantix, looks great on the surface. They have talked to 11 global firms that have been using cloud computing for over two years and they have lots of data on the financial savings made possible by cloud computing. There is even reference to other advantages of cloud computing – reduced time to market, capex to opex, flexibility, automation, etc.

However, when the report starts to reference the carbon reductions potential of cloud computing it makes a fundamental error. One which is highlighted by CDP Executive Chair Paul Dickinson in the Foreword when he says

allowing companies to maximize performance, drive down costs, reduce inefficiency and minimize energy use – and therefore carbon emissions

[Emphasis added]

The mistake here is presuming a direct relationship between energy and carbon emissions. While this might seem like a logical assumption, it is not necessarily valid.

If I have a company whose energy retailer is selling me power generated primarily by nuclear or renewable sources for example, and I move my applications to a cloud provider whose power comes mostly from coal, then the move to cloud computing will increase, not decrease, my carbon emissions.
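
The point can be made concrete with a minimal sketch. The carbon-intensity figures below are hypothetical round numbers (kg CO2 per kWh), not values from the CDP report:

```python
# Emissions depend on energy used *and* the grid's carbon intensity.
INTENSITY_KG_PER_KWH = {
    "low_carbon_grid": 0.05,   # hypothetical nuclear/renewables-heavy mix
    "coal_heavy_grid": 0.90,   # hypothetical coal-dominated mix
}

def emissions_kg(energy_kwh, grid):
    return energy_kwh * INTENSITY_KG_PER_KWH[grid]

on_premise = emissions_kg(100_000, "low_carbon_grid")  # ≈ 5,000 kg CO2
cloud = emissions_kg(50_000, "coal_heavy_grid")        # ≈ 45,000 kg CO2
# Half the energy, nine times the emissions.
```

In other words, halving energy use while moving to a dirtier grid can still multiply emissions severalfold, which is exactly why energy savings cannot simply be restated as carbon savings.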

The report goes on to make some very aggressive claims about the carbon reduction potential of cloud computing. In the executive summary, it claims:

US businesses with annual revenues of more than $1 billion can cut CO2 emissions by 85.7 million metric tons annually by 2020


A typical food & beverage firm transitioning its human resources (HR) application from dedicated IT to a public cloud can reduce CO2 emissions by 30,000 metric tons over five years

But because these are founded on an invalid premise, the report could just as easily have claimed

US businesses with annual revenues of more than $1 billion can increase CO2 emissions by 85.7 million metric tons annually by 2020


A typical food & beverage firm transitioning its human resources (HR) application from dedicated IT to a public cloud can increase CO2 emissions by 30,000 metric tons over five years

This wouldn’t be an issue if the cloud computing providers disclosed their energy consumption and emissions information (something that the CDP should be agitating for anyway).

In fairness to the CDP, they do refer to this issue in a sidebar on a page of graphs when they say:

Two elements to be considered in evaluating the carbon impact of the cloud computing strategies of specific firms are the source of the energy being used to power the data center and energy efficiency efforts.

However, while this could be taken to imply that the CDP have taken data centers’ energy sources into account in their calculations, they have not. Instead, they rely on models extrapolating from US data center PUE information [PDF] published by the EPA. Unfortunately, the PUE metric the EPA used is itself controversial.
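PUE itself is a simple ratio: total facility power divided by the power delivered to the IT equipment. One reason it is criticised is that it says nothing about how efficiently the IT load itself is used, and can even move the wrong way. A minimal sketch, with illustrative numbers:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_load_kw

# A facility drawing 1,500 kW overall for 1,000 kW of IT load:
print(pue(1500, 1000))  # 1.5

# The quirk: halve the IT load (say, by consolidating servers) while
# fixed overheads (cooling, lighting) stay at 500 kW, and PUE gets
# WORSE even though total consumption fell from 1,500 kW to 1,000 kW.
print(pue(1000, 500))  # 2.0
```

A data centre that cuts its overall consumption can thus report a worse PUE, which is one reason extrapolating carbon savings from PUE figures alone is shaky ground.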

For a data-centric organisation like the CDP to come out with baseless claims of carbon reduction benefits from cloud computing may be at least partly explained by the fact that the expert interviews carried out for the report were with HP, IBM, AT&T and CloudApps – all of whom are cloud computing vendors.

The main problem, though, is that cloud computing providers still don’t publish their energy and emissions data. This is an issue I have highlighted on this blog many times in the last three years. Until cloud providers become fully transparent with their energy and emissions information, it won’t be possible to state definitively that cloud computing can help reduce greenhouse gas emissions.

Photo credit Tom Raftery


Computer storage systems rapidly taking on the energy efficiency challenge

In the video above, Dave Wright, founder and CEO of SolidFire, makes the point that, what with ARM-based servers, OpenCompute, etc., there have been a lot of recent breakthroughs in making the compute side of servers more efficient, but very little innovation has happened with storage systems. Predictably, he’s gone after storage modernisation with his new company SolidFire, offering SSD-based enterprise storage solutions.

My laptop

One of the biggest advantages of SSDs as server storage is that they are incredibly fast, so you get an immediate performance win. I first found this when I changed my laptop to one with an SSD instead of a normal HDD. The drive is far faster, but because the SSD generates very little heat, there is no requirement for a fan. This makes the laptop cooler (no laptop burn), quieter, and it has far longer battery life. Samsung affirmed this in a server context when I talked to them earlier this year: because SSDs don’t need the power-hungry fans that dissipate the heat created by spinning drives, the reduced power draw and heat generation is a big win in a data centre environment.

SolidFire are far from alone in this field. Just last week, FlashSoft announced that they had secured $3m in first-round funding to develop Flash virtualisation software for enterprises. Their nifty software runs on servers with hybrid storage (some SSD, some HDD): it identifies regularly accessed data (hot data) and caches it on SSD, while moving less frequently accessed data to spinning disks. Keeping regularly accessed data cached on SSD greatly increases the performance of the storage.
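The idea behind this kind of tiering can be sketched in a few lines: count accesses per block and keep the hottest blocks on the SSD tier. This is only an illustration of the general technique, with made-up names and a toy policy, not FlashSoft’s actual algorithm:

```python
from collections import Counter

class TieredStore:
    """Toy hot/cold tiering: the N most-accessed blocks live on 'SSD'."""
    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity
        self.access_counts = Counter()

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def tier_of(self, block_id):
        # The ssd_capacity most-accessed blocks are the "hot" set
        hot = {b for b, _ in self.access_counts.most_common(self.ssd_capacity)}
        return "SSD" if block_id in hot else "HDD"

store = TieredStore(ssd_capacity=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    store.record_access(block)

print(store.tier_of("a"))  # SSD  (3 accesses: hot)
print(store.tier_of("c"))  # HDD  (1 access: cold)
```

Real implementations track access recency and frequency far more cleverly and migrate data asynchronously, but the placement decision is the same in spirit.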

The hybrid model is one way of getting over the cost differential between HDDs and SSDs. SolidFire have a different approach – they don’t go for the hybrid model. Instead, their all-SSD model uses a combination of data compression, de-duplication and thin provisioning to reduce the amount of space required for storage.
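De-duplication, one of the techniques mentioned, is easy to illustrate: store each unique block once, keyed by a hash of its content, and keep only references for repeats. A toy sketch of the general idea, not SolidFire’s implementation:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical blocks stored only once."""
    def __init__(self):
        self.blocks = {}   # content hash -> block data (physical storage)
        self.refs = []     # logical layout: ordered list of hashes

    def write(self, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # store only if unseen
        self.refs.append(digest)              # always record the reference

store = DedupStore()
for block in [b"alpha", b"beta", b"alpha", b"alpha"]:
    store.write(block)

print(len(store.refs))    # 4 logical blocks written...
print(len(store.blocks))  # ...but only 2 unique blocks physically stored
```

The more duplication in the workload (virtual machine images are a classic case), the bigger the saving in physical flash, which is what narrows the cost gap with spinning disk.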

A performance-enhancing tactic regularly employed with HDDs is to use only a small amount of the available space on the outer part of the disk for your storage. The outer edge of the platter moves fastest under the head, giving you faster read/write speeds. However, this is hugely inefficient, as most of the disk remains unused.

SolidFire do away with the need for any HDDs at all, making your storage far more efficient, while in FlashSoft’s hybrid model you can do away with the requirement for faster-spinning SAS drives and instead go for slower, cheaper SATA drives without taking a performance hit. Both solutions reduce your energy and cooling needs.

Then out of Japan comes news that, in response to the demand for energy efficiency there (after the earthquake earlier this year closed nuclear power plants), Nexsan have come up with new power-managed storage systems with built-in MAID (Massive Array of Idle Disks) capable of supporting any combination of SATA/SAS/SSD drives. Because MAID allows disks to be spun down when not in use, Nexsan are claiming up to 85% savings in energy usage for their systems.
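The MAID principle is essentially an idle timer per drive: if a disk has seen no I/O for a set period, spin it down, and spin it back up (at a latency cost) on the next request. A toy sketch of that logic; the timeout value and class name are arbitrary, not Nexsan’s design:

```python
class MaidDrive:
    """Toy MAID logic: spin a drive down after an idle timeout."""
    def __init__(self, idle_timeout_s=300):
        self.idle_timeout_s = idle_timeout_s
        self.last_io_at = 0.0
        self.spinning = True

    def tick(self, now_s):
        # Called periodically: spin down if idle for too long.
        if self.spinning and now_s - self.last_io_at > self.idle_timeout_s:
            self.spinning = False

    def io(self, now_s):
        # A request spins the drive back up (in reality, seconds of latency).
        self.spinning = True
        self.last_io_at = now_s

drive = MaidDrive(idle_timeout_s=300)
drive.io(0)
drive.tick(100)        # 100s idle, within the timeout
print(drive.spinning)  # True
drive.tick(400)        # 400s idle > 300s timeout
print(drive.spinning)  # False: spun down, drawing far less power
drive.io(500)          # a new request wakes it
print(drive.spinning)  # True
```

The energy saving comes entirely from how much of the time the data is cold, which is why MAID suits archival and backup workloads far better than hot transactional ones.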

It is certainly true that SSDs have a shorter lifetime than HDDs, but even this has been given a boost with the recent announcement from IBM that their new Phase Change Memory (PCM) chips will be faster, cheaper and longer-lasting than today’s SSDs.

So while Dave, above, feels that there isn’t much innovation happening in the efficiency of storage, I would respectfully differ and say these are very exciting times to be looking into storage energy efficiency!

Photo credit Tom Raftery


3 easy steps to see if your Cloud solution is energy efficient


I’ve written a number of posts questioning whether Cloud Computing is Green or Energy Efficient, but to be a little more helpful, here is a simple test you can do to see if your Cloud-delivered applications are yielding energy efficiency gains for you:

  1. Have you moved some of your applications to a Cloud provider? If “Yes”, go to step 2 (if no, then cloud is not saving you energy)
  2. Do you know what the energy consumption* of that application was before moving it to the cloud? If “Yes”, go on to step 3 (if no, then you have no way to tell if your Cloud solution is saving you energy)
  3. Do you know the energy consumption of your application after it has moved to the Cloud? If “Yes”, subtract the step 3 figure from the step 2 figure; if the answer is positive, then Cloud is saving you energy (if no, then you have no way to tell if your Cloud solution is saving you energy)

*Obviously, the units of energy consumption in steps 2 and 3 need to be the same for this to work. To make sure they are, try contacting your Cloud provider before moving your applications to the Cloud and asking them what their method for measuring energy consumption is. If they tell you (more than likely they won’t) you can match your measurement units in step 2 to theirs.
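The three steps above amount to a one-line calculation that can only be completed when both figures are actually available. A sketch, with illustrative kWh numbers:

```python
def cloud_energy_saving(before_kwh, after_kwh):
    """The 3-step test: return the energy saved by moving to the cloud,
    or None if either figure is unknown (i.e. undisclosed).
    A positive result means the Cloud is saving you energy."""
    if before_kwh is None or after_kwh is None:
        return None  # steps 2 or 3 fail: no way to tell
    return before_kwh - after_kwh

print(cloud_energy_saving(12_000, 9_000))  # 3000: cloud saves energy
print(cloud_energy_saving(12_000, None))   # None: provider won't say
```

The second call is the one that matters: without the provider’s figure, the test simply cannot be run.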

Unfortunately, as Cloud Computing providers are not yet publishing energy consumption information, for now this only works as a thought experiment. However, with coming regulatory requirements around reporting of energy consumption, Cloud providers may be forced to reveal this information.

It is only when Cloud providers detail their energy consumption information that we will be able to say whether Cloud Computing is energy-efficient, or not.

Photo credit kevindooley