
Data centers as energy exporters, not energy sinks!

Photo credit: alexmuse ("Temp alert")

I have been asking a question of various manufacturers recently and not getting a satisfactory answer.

The question I have been asking is: why don't we make heat-tolerant servers? My thinking is that if we had servers capable of working in temperatures of 40C, then data centers wouldn't expend as much energy trying to cool the server rooms.

This is not as silly a notion as it might first appear. I understand that semiconductor performance degrades rapidly as temperature increases; however, if you had hyper-localised liquid cooling which ensured that the chip's temperature stayed at, say, 20C, then the rest of the server could safely run at a higher temperature, no?
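To put rough numbers on that hunch, here is a quick back-of-the-envelope sketch; the load and overhead figures are purely illustrative assumptions on my part, not measurements from any data center or vendor.

```python
# Back-of-the-envelope sketch of cooling overhead savings.
# All numbers below are illustrative assumptions, not vendor figures.

IT_LOAD_KW = 500          # assumed IT load of a modest server room
ROOM_AC_OVERHEAD = 0.50   # assumed cooling overhead with room-level air conditioning
LIQUID_OVERHEAD = 0.10    # assumed overhead with direct-to-chip liquid cooling
HOURS_PER_YEAR = 24 * 365

def annual_cooling_kwh(it_load_kw: float, overhead: float) -> float:
    """Energy spent on cooling alone over a year, given a cooling overhead ratio."""
    return it_load_kw * overhead * HOURS_PER_YEAR

room_ac = annual_cooling_kwh(IT_LOAD_KW, ROOM_AC_OVERHEAD)
liquid = annual_cooling_kwh(IT_LOAD_KW, LIQUID_OVERHEAD)

print(f"Room AC cooling energy: {room_ac:,.0f} kWh/year")
print(f"Liquid cooling energy:  {liquid:,.0f} kWh/year")
print(f"Potential saving:       {room_ac - liquid:,.0f} kWh/year")
```

Under those assumed figures the saving runs into millions of kWh a year, which is why the question seemed worth asking.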

When I asked Intel, their spokesperson Nick Knupffer responded by saying:

Your point is true – but exotic cooling solutions are also very expensive + you would still need AC anyway. We are putting a lot of work into reducing the power used by the chips in the 1st place, that equals less heat. For example, our quad-core Xeon chips go as low as 50W of TDP. That combined with better performance is the best way of driving costs down. Lower power + better performance = less heat and fewer servers required.

He then went on to explain Intel's new hafnium-infused high-k metal gate transistors:

It is the new material used to make our 45nm transistors – gate leakage is reduced 100 fold while delivering record transistor performance. It is part of the reason why we can deliver such energy-sipping high performance CPU’s.

At the end of the day – the only way of reducing the power bill is by making more energy efficient CPU’s. Even with exotic cooling – you still need to get rid of the heat somehow, and that is a cost.

He is half right! Sure, getting the chip's power consumption down is important and will reduce the server's heat output, but as a director of a data center I can tell you what will happen: more servers will be squeezed into the same data center space, doing away with any potential reduction in data center power requirements. Parkinson's law meets data centers!
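To make that Parkinson's-law point concrete, here is a rough sketch; the per-server figures are my own illustrative assumptions, not data from Intel or anyone else. If the facility's power and cooling envelope stays fixed, more efficient servers just mean more of them, and the total heat dumped into the room is unchanged.

```python
# Rebound-effect sketch: lower per-server power just means more servers
# fit into the same facility power envelope. Numbers are illustrative.

FACILITY_POWER_BUDGET_KW = 1000   # assumed fixed power/cooling envelope

old_server_kw = 0.40              # assumed per-server draw before the efficiency gain
new_server_kw = 0.25              # assumed per-server draw after the efficiency gain

old_count = int(FACILITY_POWER_BUDGET_KW / old_server_kw)
new_count = int(FACILITY_POWER_BUDGET_KW / new_server_kw)

print(f"Servers before: {old_count}, heat into the room: {old_count * old_server_kw:.0f} kW")
print(f"Servers after:  {new_count}, heat into the room: {new_count * new_server_kw:.0f} kW")
# The heat output (and the AC bill) stays pinned at the facility budget;
# only the amount of compute delivered per kW changes.
```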

No, if you want to take a big-picture approach, you reduce the chips' power consumption and then cool those chips directly with a hyper-localised solution so the server room doesn't need to be cooled. That way, the cooling goes only where it is required.

IBM's Steven Sams, Vice President of Global Site and Facilities Services, sent me a more positive answer:

We’ve actually deployed this in production systems in 3 different product announcements this year

New z Series mainframes actually have a closed coolant loop inside the system that takes coolant to the chips, letting us crank up the performance without causing the chip to slide off as the solder melts. New high performance Unix servers, System p….. actually put out 75,000 watts of heat per rack….. but again the systems are water cooled with redundant coolant distribution units at the bottom of the rack. The technology is pretty sophisticated and I've heard that each of these coolant distribution units has 5X the capacity to dissipate heat of our last water cooled mainframe in the 1980s. The cooling distribution unit for that system was about 2 meters wide by 2 meters high by about 1 meter deep. The new units are about 10 inches by 30 inches.

The new webhosting servers, iDataPlex, use Intel and AMD microprocessors and jam a lot of technology into a rack that is about double the width but half the depth. To ensure that this technology does not use all of the AC in a data center, the systems are installed with water cooled rear door heat exchangers… i.e. a car radiator at the back of the rack. These devices actually take out 110% of the heat generated by the technology, so the outlet temperature is actually cooler than the air that comes in the front. A recent study by a west coast technology leadership consortium, at a facility provided by Sun Microsystems, actually showed that this rear door heat exchanger technology is the most energy efficient of all the alternatives they evaluated with the help of the Lawrence Berkeley National Laboratory.

Now that is the kind of answer I was hoping for! If this kind of technology became widespread for servers, the vast majority of the energy data centers burn on air conditioning would no longer be needed.

However, according to the video below, which I found on YouTube, IBM are going way further than I had imagined. They announced their Hydro-Cluster Power 575 series supercomputers in April. The plan is to allow data centers to capture the heat from the servers and export it as hot water for swimming pools, cooking, hot showers, etc.
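For a rough sense of scale, here is a quick calculation sketch using the 75,000 watts per rack figure Steven quoted above; the inlet and outlet water temperatures are assumptions of mine, not anything IBM has stated.

```python
# Rough sketch: how much hot water could one rack's waste heat produce?
# Uses the 75 kW/rack figure quoted above; temperatures are assumptions.

SPECIFIC_HEAT_WATER = 4186    # J/(kg*K)
RACK_HEAT_W = 75_000          # W, from the System p figure quoted above
T_IN_C = 15                   # assumed cold water inlet temperature
T_OUT_C = 55                  # assumed hot water outlet temperature

delta_t = T_OUT_C - T_IN_C
mass_flow_kg_s = RACK_HEAT_W / (SPECIFIC_HEAT_WATER * delta_t)   # Q = m * c * dT
litres_per_hour = mass_flow_kg_s * 3600                          # ~1 kg of water per litre

print(f"One rack could heat roughly {litres_per_hour:,.0f} litres of water "
      f"per hour from {T_IN_C}C to {T_OUT_C}C")
```

Under those assumptions a single rack could supply on the order of 1,600 litres of hot water an hour, which makes the swimming pool idea sound a lot less fanciful.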

This is how all servers should be plumbed.

Tremendous – data centers as energy exporters, not energy sinks. I love it.

Comments

  1. Ludovic says

    Tom,

    You should ask for a retrospective briefing on mainframes… Some buildings in the '70s were built to use the computer room as a big boiler room, heating all the buildings. According to a story I've heard, when they removed the mainframe, they had to install heating.

    The annoying thing is that we seem to be going backward with computing technology: installing fat clients everywhere, served on one end by inadequate transmission systems (have you ever thought about all those kW wasted on TCP/IP stacks?) and on the other end by web “farms” (a bunch of cheap LinTel servers, all working at utilisation rates 50 points lower than mainframes).

    Think of Google for instance: all requests are sent to multiple nodes for speed, there’s huge spare capacity, etc, etc, etc… Very energy inefficient…

  2. Tom says

    Great point Ludovic – mainframes were before my time (in IT!) so I know very little about them. The little I do know definitely makes today’s servers seem hugely inefficient.

  3. says

    I have to post a product-positioning comment, but we are the most granular “hyper-localised solution”: 6kW using air (426W/sqft) or 15kW using water (1071W/sqft), and those are our half racks. Our internal air loop cycles hundreds of times a minute. You are the first person to put their finger on it, thank you. I have been slugging it out in a garage for a long time.
    http://www.youtube.com/ellipticalmobile

  4. says

    I am not sure about your idea of servers not needing to be cooled. Even so, you would still have to take some alternative approach to cooling them.