
Power Usage Effectiveness (PUE) is a poor data center metric

Problems with PUE

Power Usage Effectiveness (PUE) is a widely used metric which is supposed to measure how efficient data centers are. It is the unit of data center efficiency regularly quoted by all the industry players (Facebook, Google, Microsoft, etc.).
However, despite its widespread usage, it is a very poor measure of data center energy efficiency or of a data center’s Green credentials.

Consider the example above (which I first saw espoused here) – in the first row, a typical data center has a total draw of 2MW of electricity for the entire facility, of which 1MW goes to the IT equipment (servers, storage and networking equipment). This results in a PUE of 2.0.

If the data center owner then goes on an efficiency drive and reduces the IT equipment energy draw by 0.25MW (by turning off old servers, virtualising, etc.), then the total draw drops to 1.75MW (ignoring any reduced requirement for cooling from the lower IT draw). This causes the PUE to increase to 2.33.

When lower PUEs are considered better (1.0 being the theoretical ideal), this is a ludicrous situation.
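
To make the arithmetic explicit, here is a minimal sketch in Python using the figures from the example above (and assuming, as the example does, that the facilities draw stays fixed):

    def pue(total_facility_mw, it_mw):
        # Power Usage Effectiveness: total facility power divided by IT power
        return total_facility_mw / it_mw

    print(pue(2.0, 1.0))    # 2.0   -- before the efficiency drive
    print(pue(1.75, 0.75))  # ~2.33 -- after cutting 0.25MW of IT load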

Then consider that not only is PUE a poor indicator of a data center’s energy efficiency, it is also a terrible indicator of how Green a data center is, as Romonet’s Liam Newcombe points out.

Problems with PUE

Consider the example above – in the first row, a typical data center with a PUE of 1.5 uses an average energy supplier with a carbon intensity of 0.5kg CO2/kWh, resulting in emissions of 0.75kg CO2 per kWh of energy delivered to the IT equipment.

Now look at the situation with a data center with a low PUE of 1.2 but sourcing energy from a supplier who burns a lot of coal, for example. Their carbon intensity of supply is 0.8kg CO2/kWh resulting in an IT equipment carbon intensity of 0.96kg CO2/kWh.

On the other hand, look at the situation with a data center with a poor PUE of 3.0. If its energy supplier uses a lot of renewables (and/or nuclear) in its generation mix, it could easily have a carbon intensity of 0.2kg CO2/kWh or lower. At 0.2kg CO2/kWh, the IT equipment’s carbon emissions come to 0.6kg CO2/kWh.

So the data center with the lowest PUE by a long shot has the highest carbon footprint, while the data center with the ridiculously high PUE of 3.0 has by far the lowest. And that takes no account of the water footprint of the data center or of its energy supplier (nuclear power has an enormous water footprint).
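
The arithmetic behind those three cases is simply the PUE multiplied by the supplier’s carbon intensity; a quick sketch in Python:

    def it_carbon_intensity(pue, grid_kg_co2_per_kwh):
        # kg of CO2 emitted per kWh delivered to the IT equipment
        return pue * grid_kg_co2_per_kwh

    print(round(it_carbon_intensity(1.5, 0.5), 2))  # 0.75 -- typical PUE, average supplier
    print(round(it_carbon_intensity(1.2, 0.8), 2))  # 0.96 -- low PUE, coal-heavy supplier
    print(round(it_carbon_intensity(3.0, 0.2), 2))  # 0.6  -- high PUE, low-carbon supplier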

The Green Grid is doing its best to address these deficiencies, coming up with other useful metrics such as Carbon Usage Effectiveness (CUE) and Water Usage Effectiveness (WUE).
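
As I understand the Green Grid’s definitions, both are normalised by the IT energy rather than the total facility energy, along these lines (a minimal sketch in Python, not an official reference implementation):

    def cue(total_co2_kg, it_energy_kwh):
        # Carbon Usage Effectiveness: kg of CO2 attributable to the site's
        # total energy use, per kWh consumed by the IT equipment
        return total_co2_kg / it_energy_kwh

    def wue(water_litres, it_energy_kwh):
        # Water Usage Effectiveness: litres of water used on site
        # per kWh consumed by the IT equipment
        return water_litres / it_energy_kwh

Both use the same denominator as PUE (the IT energy), but the numerator measures carbon and water impacts rather than just electrical overhead.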

Now, how to make these the standard measures for all data centers?

The images above are from the slides I used in the recent talk I gave on Cloud Computing’s Green Potential at a Green IT conference in Athens.

Comments

  1. John says:

    “If the data center owner then goes on an efficiency drive and reduces the IT equipment energy draw by 0.25MW (by turning off old servers, virtualising, etc.), then the total draw drops to 1.75MW (ignoring any reduced requirement for cooling from the lower IT draw). This causes the PUE to increase to 2.33.”

    I get your point about the limitations of using PUE as a single measure of datacenter greenness, but the example you used seems to discount the point of PUE entirely, in that it’s a ratio of how much power is lost in the rest of the infrastructure before the power does “useful” work. Decreasing these overheads is in my opinion worthwhile, regardless of the carbon footprint of the power source.

    In the case you mentioned decreasing the power draw of the equipment should also reduce the overall power draw of the data center i.e. the PUE should remain constant. Whether this happens in practice is an interesting question that didn’t seem to be addressed in this post, but I doubt that a reduction in power draw from the equipment would inevitably result in a corresponding increase in the PUE figure as you’d stated.

    I’m happy to be educated if you think I’m wrong.

    John

  2. The post author says:

    John,

    thanks for your interest and for commenting on this post.

    You said

    I’m happy to be educated if you think I’m wrong.

    – ok, cool, I do think you are wrong so let me try to explain why…

    I obviously explained this poorly because, in fact, you are right about the part of your assertion where you say

    decreasing the power draw of the equipment should also reduce the overall power draw of the data center

    It does.

    In the hypothetical datacenter I used as an example, the total power draw of the data center before efficiency is 2MW. 1MW for IT equipment, and 1MW for facilities (cooling, lighting etc.).

    This yields a PUE of 2MW / 1MW = 2.0.

    After the efficiency drive, the draw for the IT equipment drops to 0.75MW. The facilities draw remains at 1MW so the total draw for the data center is now 1.75MW.

    This yields a PUE of 1.75MW / 0.75MW = 2.3333.

    I hope this is more clear and apologies for the confusion.

  3. John says:

    No apologies required. My assumption is that if you reduce the power draw from the equipment, the facilities draw will decrease in a corresponding way, keeping the PUE ratio more or less the same. You assert that the facilities draw remains the same, which doesn’t seem right to me, but I’ve not got any hard data to support my assumption. I’ll do some more research, but the more I think about it, the overheads probably have some fixed and some variable components. I might check with some people I know who run datacenters for a living to get their feedback on how variable the overheads are in their datacenters.

  4. The post author says:

    Ah, right John, now I see where you are coming from.

    Well, first off, I should mention that I am co-founder and director of the CIX data center in Cork, Ireland (http://www.cix.ie). Since I moved to Spain in 2008 I’m not as involved in the day-to-day running of the data center as I used to be, but I still know quite a bit about the design, build and running of data centers from personal experience.

    Having said that, there is validity to what you say – a reduction in the IT draw will possibly lead to a reduction in the facilities draw. I say possibly because it depends on the equipment used in the cooling infrastructure and how variable its draw is.

    Let’s say this is a data center with new cooling equipment which has a very variable draw – then, yes, the total facilities draw will drop as well, but not by an equivalent amount. You may get a drop of 0.05MW to 0.1MW at the very most.

    Taking the larger drop of 0.1MW, the facilities draw falls to 0.9MW, so this will lead to a PUE of 1.65MW / 0.75MW = 2.2.

    The PUE still goes in the wrong direction.
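
    To put some rough numbers on the fixed-versus-variable point, here is an illustrative sketch in Python (the 0.6MW fixed / 0.4-per-MW-of-IT variable split is purely hypothetical, chosen only to match the figures above):

        def pue(it_mw, fixed_facilities_mw=0.6, variable_per_it_mw=0.4):
            # Facilities draw modelled as a fixed part plus a part that scales with the IT load
            facilities_mw = fixed_facilities_mw + variable_per_it_mw * it_mw
            return (it_mw + facilities_mw) / it_mw

        print(round(pue(1.0), 2))   # 2.0 -- before the efficiency drive
        print(round(pue(0.75), 2))  # 2.2 -- after: the fixed part doesn't shrink, so PUE still rises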