I attended Google’s European Data Center Summit earlier this week and it was a superb event. The quality of the speakers was tremendous and the flow of useful information was non-stop.
The main take-home from the event is that data centers still waste a considerable amount of energy – and that this is often easy to fix.
Some of the talks showed exotic ways to cool your data center. DeepGreen, for example, chose to situate itself beside a deep lake so that it could use the lake’s cold water for much cheaper cooling. Others use river water, and Google mentioned its new facility in Finland, which uses seawater for cooling. Microsoft mentioned its Dublin facility, which uses air-side economisation (i.e. it simply brings in air from outside the building) and so is completely chiller-less. This is a 300,000 sq ft facility.
IBM’s Dr Bruno Michel did remind us that it takes ten times more energy to move a compressible medium like air than it does to move a non-compressible one like water – but then, not all data centers have the luxury of a deep lake nearby!
Both Google and UBS, the global financial services company, gave hugely practical talks about simple steps to reduce your data center’s energy footprint.
Google’s Director of Operations, Joe Kava, talked about a retrofit project in which Google dropped the PUE of five of its existing data centers from 2.4 down to 1.5. They did this with an investment of $25k per data center, and the project yielded annual savings of $67k each!
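As a quick sanity check on those figures (using only the numbers quoted in the talk), the payback period works out to well under half a year:

```python
# Rough payback calculation for Google's retrofit figures (per data center).
investment = 25_000        # one-off retrofit cost, USD
annual_savings = 67_000    # reported annual savings, USD

payback_months = investment / annual_savings * 12
first_year_roi = (annual_savings - investment) / investment * 100

print(f"Payback period: {payback_months:.1f} months")   # ~4.5 months
print(f"First-year ROI: {first_year_roi:.0f}%")         # ~168%
```

At roughly four and a half months to break even, it is hard to argue against this kind of retrofit.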
What kind of steps did they take? They were all simple steps which didn’t incur any downtime.
The first step was to do extensive modelling of the airflow and temperatures in their facilities. With this as a baseline, they then went ahead and optimised the perforated tile layout. The next step was to get the server owners to buy into the new, expanded ASHRAE limits – this allowed Google to nudge the CRAC setpoint up from its existing 22°C to 27°C, with significant savings accruing from the reduced cooling load from this step alone.
Further steps were to roll out cold-aisle containment and movement-sensitive lighting. The cold aisles were ‘sealed’ at the ends using strip doors (aka meat locker sheets). This was all quite low-tech, done with no downtime, and again yielded impressive savings.
Google achieved further efficiencies by simply adding some intelligent rules to their CRACs so that they turned off when not needed and came back on only when required.
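Google didn’t detail the rules themselves, but the basic idea can be sketched as a simple on/off hysteresis controller – the setpoint and deadband values below are my own illustrative assumptions, not figures from the talk:

```python
# Illustrative sketch only -- not Google's actual control rules.
# A minimal hysteresis controller: the CRAC runs only when the
# cold-aisle temperature drifts above the setpoint band.

SETPOINT_C = 27.0      # setpoint at the top of the expanded ASHRAE range
DEADBAND_C = 1.5       # hysteresis band to avoid rapid on/off cycling

def crac_should_run(temp_c: float, currently_running: bool) -> bool:
    """Turn on above setpoint + deadband, off below setpoint - deadband."""
    if temp_c > SETPOINT_C + DEADBAND_C:
        return True
    if temp_c < SETPOINT_C - DEADBAND_C:
        return False
    return currently_running  # inside the band: keep the current state

# Example: unit starts once the aisle warms past 28.5C, stops below 25.5C.
print(crac_should_run(29.0, False))  # True
print(crac_should_run(26.0, True))   # True (inside band, stays on)
print(crac_should_run(25.0, True))   # False
```

The deadband is the important part of the design: without it, a CRAC sitting right at the setpoint would cycle on and off constantly, which wastes energy and wears out the unit.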
UBS’ Mark Eichenberger echoed a lot of this in his own presentation. UBS has a global fleet of data centers whose average age is 10 years, with some as old as 30. Again, simple, non-intrusive steps like cold-aisle containment and movement-sensitive lighting are saving UBS 2m Swiss Francs annually.
Google’s Chris Malone had other tips: if you are at the design phase, try to minimise the number of AC<->DC conversion steps in the power path, and look for energy-efficient UPSs.
Finally, for the larger data center owners, eBay’s Dean Nelson made a very interesting point. When he looked at all of eBay’s apps, he saw they were all in Tier 4 data centers. He realised that 80% of them could reside in Tier 2 data centers, and by moving them there he cut eBay’s opex and capex by 50%.
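A back-of-envelope calculation (my own assumption that costs scale uniformly across workloads – this wasn’t in the talk) shows what those two numbers together imply about the relative cost of the tiers:

```python
# Assumption (mine, not eBay's): total cost scales linearly with the
# fraction of workloads in each tier, with Tier 4 cost normalised to 1.0.
moved_fraction = 0.80     # share of apps moved to Tier 2
total_cost_after = 0.50   # total cost after the move, vs all-Tier-4 baseline

# Solve: (1 - moved) * 1.0 + moved * tier2_ratio = total_after
tier2_ratio = (total_cost_after - (1 - moved_fraction)) / moved_fraction

print(f"Implied Tier 2 cost: {tier2_ratio:.0%} of Tier 4")  # ~38%
```

Under that simple model, Tier 2 capacity would cost roughly a third of Tier 4 – which is plausible, given how much redundancy a Tier 4 rating demands.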
As a co-founder of the Cork Internet eXchange data center, it was great to see the decisions we made back then around cold-aisle containment and highly energy-efficient UPSs being vindicated.
Even better, though, was that so much of what was talked about at the summit concerned relatively easy but highly effective retrofits that can make existing data centers far more energy efficient.