Efficiency and Ecological Responsibility of Cloud Computing (including water footprint)


Unfortunately, the provider for this webinar requires a login to listen to the discussion. If you don’t wish to register, my username is [email protected] and my password is 000000

Mark Thiele from Switch recently invited me to participate in a webinar he was moderating on the Efficiency and Ecological Responsibility of Cloud Computing, which took place yesterday evening.

Also participating in the discussion were Harkeeret Singh (aka Harqs), Global Head of Energy & Sustainable IT at Thomson Reuters, and Jason Hoffman, CTO and founder of Joyent.

The discussion started off by asking whether or not cloud computing is efficient, and the panel was fairly unanimous in deciding that it is not. The main point I made here is that because cloud providers are not publishing energy information, it is not possible to say whether or not cloud computing is energy efficient.

At around 15 minutes into the conversation, we shifted to asking whether or not cloud computing is green. There was a good discussion on this, one of the main points raised being that efficient is not necessarily the same as green. Also brought up was how the plummeting cost of cloud computing is leading to an explosion in consumption – in itself not very green. As a counterpoint, Harqs noted that lower costs are beneficial to start-ups in developing countries.

Then, 33 minutes into the conversation, we started discussing cloud computing’s impact on water. One point I raised is that if you run a 25kW rack for one hour, the water footprint from electricity production is (see the quick calculation after this list):

  • 0.1 litres if the electricity comes from wind
  • 2.5 litres if the electricity comes from solar
  • 45 litres if the electricity comes from coal and
  • 55 litres if the electricity comes from nuclear (and this doesn’t include the considerable water footprint of uranium mining).
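
To put those figures on a per-kWh basis, here’s a quick back-of-the-envelope calculation in Python. The litres-per-kWh factors are simply the numbers above divided by the 25kWh the rack consumes in an hour; scaling to a year of continuous operation makes the differences stark:

```python
# Water footprint of running a 25 kW rack, by generation source.
# Per-kWh factors are derived from the litres-per-hour figures
# above (total litres / 25 kWh consumed in one hour).

RACK_POWER_KW = 25

# Litres of water per kWh of electricity generated
WATER_L_PER_KWH = {
    "wind": 0.1 / 25,     # 0.004 L/kWh
    "solar": 2.5 / 25,    # 0.1   L/kWh
    "coal": 45 / 25,      # 1.8   L/kWh
    "nuclear": 55 / 25,   # 2.2   L/kWh (excludes uranium mining)
}

def rack_water_footprint(hours: float, source: str) -> float:
    """Litres of water embodied in the electricity a 25 kW rack draws."""
    return RACK_POWER_KW * hours * WATER_L_PER_KWH[source]

# One year of continuous operation:
for source in WATER_L_PER_KWH:
    litres = rack_water_footprint(24 * 365, source)
    print(f"{source:8s}: {litres:10,.0f} litres/year")
# wind:        876 | solar:  21,900 | coal: 394,200 | nuclear: 481,800
```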

Nuclear power plants use phenomenal amounts of water. From the Union of Concerned Scientists report [PDF] on this:

the typical 1,000 MWe nuclear power reactor with a 30°F ΔT needs approximately 476,500 gallons per minute. If the temperature rise is limited to 20°F, the cooling water need rises to 714,750 gallons per minute. Some of the new nuclear reactors being considered are rated at 1,600 MWe. Such a reactor, if built and operated, would need nearly 1,144,000 gallons per minute of once-through cooling for a 20 degree temperature rise.

Actual circulating water system flow rates in once-through cooling systems are 504,000 gpm at Millstone Unit 2 (CT); 918,000 gpm at Millstone Unit 3 (CT); 460,000 gpm at Oyster Creek (NJ); 311,000 gpm at Pilgrim (MA); and 1,100,000 gpm at each of the two Salem reactors (NJ).
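
Those figures follow directly from the thermodynamics. Here is a minimal sanity check in Python: the cooling water has to carry away the plant’s waste heat, so the required flow is waste power divided by (density × specific heat × temperature rise). The ~33% thermal efficiency is my assumption, not part of the UCS report:

```python
# Rough sanity check of the UCS flow figures: a once-through cooling
# system must carry away the reactor's waste heat, so the required
# volumetric flow is Q = P_waste / (rho * c_p * dT).
# Assumed (not from UCS): ~33% thermal efficiency, i.e. a 1,000 MWe
# plant rejects roughly 2,000 MW of heat to the cooling water.

RHO = 1000.0               # kg/m^3, density of water
CP = 4186.0                # J/(kg*K), specific heat of water
M3S_TO_GPM = 264.172 * 60  # m^3/s -> US gallons per minute

def cooling_flow_gpm(mwe: float, dt_fahrenheit: float,
                     thermal_efficiency: float = 0.33) -> float:
    """Once-through cooling flow needed to absorb a plant's waste heat."""
    p_waste = mwe * 1e6 * (1 / thermal_efficiency - 1)  # watts rejected
    dt_kelvin = dt_fahrenheit * 5 / 9
    q_m3s = p_waste / (RHO * CP * dt_kelvin)
    return q_m3s * M3S_TO_GPM

print(f"{cooling_flow_gpm(1000, 30):,.0f} gpm")  # ~460,000   (UCS: 476,500)
print(f"{cooling_flow_gpm(1000, 20):,.0f} gpm")  # ~690,000   (UCS: 714,750)
print(f"{cooling_flow_gpm(1600, 20):,.0f} gpm")  # ~1,110,000 (UCS: 1,144,000)
```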

And that level of water consumption has big biodiversity effects – imagine the large water intakes required for a nuclear plant taking in one million gallons of water per minute. These intakes don’t just take in water, they also take in any life forms in that water, and unsurprisingly none of them survive the trip through a nuclear power plant. And then there are the thermal pollution effects of the warmer water discharged from the power plant outlets.

Towards the end of the discussion, Jason asked if making this data available to end users would be a clear differentiator for Joyent. I responded that it would be, because a) there is a demand for this information, and b) having seen how Greenpeace successfully went after Facebook for its disregard for the footprint of its cloud computing infrastructure (and in their latest report are now targeting Apple, Amazon and Microsoft), nobody wants to be the next company in Greenpeace’s sights.

Harqs added that any company pursuing such a policy should open-source it so everyone could contribute to the development of constantly improving reporting standards.
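
To make that concrete, a per-customer report under such an open standard might look something like the following. This is a purely hypothetical sketch – all the field names are my own invention, and no such schema existed at the time of the webinar:

```python
# A purely hypothetical example of what a per-customer energy report
# from a cloud provider might contain. Field names are invented for
# illustration only; this is not a real or proposed standard.

customer_energy_report = {
    "provider": "example-cloud",
    "customer_id": "acme-corp",
    "period": "2012-03",
    "energy_kwh": 1840.0,        # metered energy attributed to the customer
    "pue": 1.18,                 # facility overhead multiplier
    "generation_mix": {          # grid mix behind the facility
        "coal": 0.21, "gas": 0.633, "renewables": 0.157,
    },
    "co2e_kg": 912.0,            # estimated emissions for the period
    "water_litres": 3310.0,      # estimated water footprint
}
```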

The highlight of the webinar for me was at 47:30 when Jason committed to doing just that.

All in all, a superb discussion with a fantastic outcome. I hope you enjoy it as much as I did.


The Switch SuperNAP data centre – one of the most impressive I’ve been in

Switch SuperNAP data centre

If you were going to build one of the world’s largest data centres, you wouldn’t intuitively site it in the middle of the Nevada desert, but that’s where Switch sited their SuperNAP campus. I went on a tour of the data centre recently while in Las Vegas for IBM’s Pulse 2012 event.

The data centre is impressive. And I’ve been in a lot of data centres (I’ve even co-founded and been part of the design team of one in Ireland).

The first thing that strikes you when visiting the SuperNAP is just how seriously they take security. They have outlined their various security layers in some detail on their website, but nothing prepares you for the reality of it. As a simple example, throughout our entire guided tour of the data centre floor space we were followed by one of Switch’s armed security officers!

The data centre itself is just over 400,000 sq ft in size, with plenty of room on the campus to build out two or three more similarly sized data centres should the need arise. And although the data centre is one of the world’s largest, at 1,500 Watts per square foot it is also quite dense. This facilitates racks of 25kW, and during the tour we were shown cages containing 40 x 25kW racks being handled with apparent ease by Switch’s custom cooling infrastructure.
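
Those density figures are easy to sanity-check with a quick back-of-the-envelope calculation, using only the numbers above:

```python
# Quick sanity check on the density figures quoted above.

watts_per_sqft = 1_500
rack_kw = 25
racks_per_cage = 40

# Floor-area "budget" a single 25 kW rack implies at this density:
sqft_per_rack = rack_kw * 1000 / watts_per_sqft
print(f"{sqft_per_rack:.1f} sq ft per 25 kW rack")  # ~16.7 sq ft

# A cage of 40 such racks is a full megawatt of IT load:
cage_load_mw = racks_per_cage * rack_kw / 1000
print(f"{cage_load_mw:.1f} MW per 40-rack cage")    # 1.0 MW
```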

Switch custom cooling infrastructure

Because Switch wanted to build out a large-scale, dense data centre, they had to custom-design their own cooling infrastructure. They use a hot aisle containment system, with the cold air coming in from overhead and the hot air drawn out through the top of the contained aisles.

The first immediate implication of this is that no raised floors are required in this facility. It also means that when walking around the data centre, you are walking in the data centre’s cold aisle. As part of the design of the facility, the T-SCIFs (Thermal Separate Compartment In Facility – heat containment structures) are where the contained hot aisles’ air is extracted, and the external TSC600 quad-process chiller systems generate the cold air externally for delivery to the data floor. This design means there is no need for any water piping within the data room, which is a nice feature.
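
The need for full containment at these densities falls straight out of the airflow arithmetic. A rough sketch, using the standard sea-level approximation CFM ≈ 3.16 × watts / ΔT°F – these are illustrative numbers of mine, not Switch’s published figures:

```python
# Why 25 kW racks force a containment design: the sheer airflow
# required to carry the heat away. Standard sea-level approximation:
#   CFM ~= 3.16 * watts / delta_T_F
# (Illustrative numbers, not Switch's published figures.)

def required_airflow_cfm(watts: float, delta_t_f: float) -> float:
    """Cubic feet per minute of air needed to absorb a heat load."""
    return 3.16 * watts / delta_t_f

for dt in (20, 25, 30):
    cfm = required_airflow_cfm(25_000, dt)
    print(f"25 kW rack, {dt}F rise: {cfm:,.0f} CFM")
# At a 20F rise that is ~4,000 CFM per rack; with uncontained aisles
# much of that cold air would bypass or recirculate, which is why the
# hot aisles are sealed and the exhaust extracted overhead.
```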

Through an accident of history (involving Enron!), the SuperNAP is arguably the best-connected data centre in the world, a fact Switch can use to their clients’ advantage when negotiating connectivity pricing. Consequently, connectivity in the SuperNAP is some of the cheapest available.

As a result of all this, the vast majority of enterprise cloud computing providers have a base in the SuperNAP. So does the 56 petabyte eBay Hadoop cluster – yes, 56 petabytes!

US electricity generation

Given that I have regularly bemoaned cloud computing’s increasing energy and carbon footprint on this blog, you won’t be surprised to know that one of my first questions to Switch was about their energy provider, NV Energy.

According to NV Energy’s 2010 Sustainability Report [PDF], coal makes up 21% of the generation mix and gas accounts for another 63.3%. While 84% of electricity generation from fossil fuels sounds high, the 21% figure for coal is low by US standards, as the graph on the right details.
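
Plugging that mix into rough, commonly cited emission factors (approximate values I’m assuming here – roughly 1.0 kg CO2 per kWh for coal and 0.45 kg for gas – not figures from the report) gives a ballpark grid carbon intensity:

```python
# Rough carbon intensity implied by NV Energy's published 2010 mix.
# Emission factors are approximate textbook values, not figures
# taken from the sustainability report itself.

generation_mix = {"coal": 0.21, "gas": 0.633, "other": 0.157}
kg_co2_per_kwh = {"coal": 1.0, "gas": 0.45, "other": 0.0}  # treat other as ~zero

intensity = sum(share * kg_co2_per_kwh[src]
                for src, share in generation_mix.items())
print(f"~{intensity:.2f} kg CO2 per kWh")  # ~0.49 kg/kWh

# For one 25 kW rack running continuously for a year:
annual_kwh = 25 * 24 * 365
print(f"~{annual_kwh * intensity / 1000:,.0f} tonnes CO2/year per rack")  # ~108
```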

Still, it is a long way off the 100% of electricity from renewables that Verne Global’s new data centre has.

Apart from the power generation profile, which, in fairness to Switch, is outside their control (and could be considerably worse), the SuperNAP is, by far, the most impressive data centre I have ever been in.

Photo credit: Switch