Facebook open sources building an energy efficient data center

Facebook's new custom-built Prineville Data Centre

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first. As far as I know, no one had done this before and, to be honest, no one has replicated it since. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing the bringing online of their new data centre, but they are open sourcing its design, specifications and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the Open Compute Project and they have released a fact sheet [PDF] with details on their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area of Oregon, 3,200 ft above sea level with mild temperatures) will mean they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling, to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the return warm air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs, each serving six racks (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces power losses in the utility-to-server chain from an industry average of 11-17% down to Prineville's 2%.
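To see how those conversion stages compound, here's a rough sketch. The stage efficiencies below are illustrative assumptions on my part, not Facebook's published figures:

```python
def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a power delivery chain is the product
    of each conversion stage's efficiency."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Illustrative: a traditional chain with transformer, centralised
# double-conversion UPS, PDU transformer and server PSU stages...
traditional = chain_efficiency([0.98, 0.94, 0.99, 0.95])
# ...versus a simplified 480V/277V chain that skips most conversions.
simplified = chain_efficiency([0.99, 0.99])

print(f"traditional losses: {(1 - traditional) * 100:.1f}%")
print(f"simplified losses: {(1 - simplified) * 100:.1f}%")
```

With these made-up stage figures the traditional chain loses around 13% (inside the 11-17% industry range quoted above) while the simplified chain loses about 2%.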

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6F (27C) – the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run at 85F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data center takes them from 99.999% uptime to 99.9999% uptime.
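The parts-count argument follows from how availabilities combine in series. A quick sketch, with made-up component figures purely for illustration:

```python
def series_availability(component_availabilities):
    """A chain of components in series is only up when every
    component is up, so availabilities multiply."""
    a = 1.0
    for c in component_availabilities:
        a *= c
    return a

# Illustrative only: ten series components in the power/cooling path
# versus five after removing conversion stages, each at 99.9999%.
ten_parts = series_availability([0.999999] * 10)
five_parts = series_availability([0.999999] * 5)

# Expected downtime per year in minutes (~525,960 minutes/year).
print(f"10 parts: {(1 - ten_parts) * 525960:.1f} min/yr downtime")
print(f"5 parts:  {(1 - five_parts) * 525960:.1f} min/yr downtime")
```

Halving the number of series components roughly halves the expected downtime, which is the intuition behind the extra nine.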

New Server design
Facebook also designed custom servers for their data centres. The servers contain no paint, logos, stickers, bezels or front panel. They are designed to be bare bones (using 22% fewer materials than a typical 1U server) and for ease of serviceability (snap-together parts instead of screws).

The servers are 1.5U tall to allow for larger heat sinks and larger (slower-turning and consequently more efficient) 60mm fans. These fans take only 2-4% of the energy of the server, compared to 10-20% for typical servers. The heat sinks are all spread across the back of the motherboard so that none of them receives pre-heated air from another heat sink, reducing the work required of the fans.

The server power supply accepts both 277V AC power from the electrical distribution system and 48V DC from the UPS in the event of a utility power failure. These power supplies have a peak efficiency of 94.5% (compared to a more typical 90% for standard PSUs) and they connect directly to the motherboard, simplifying the design and reducing airflow impedance.

Open Compute
Facebook relied heavily on open source in creating their site. Now, they say, they want to make sure the next generation of innovators don’t have to go through the same pain as Facebook in building out efficient data centre infrastructure. Consequently, Facebook is releasing all of the specification documentation which it gave to its suppliers for this project.

Some of the schematics and board layouts for the servers belong to the suppliers, so they are not currently being published. Facebook did say they are working with their suppliers to see if they will release them (or portions of them), but they haven't reached agreement on this just yet.

Asked directly about their motivations for launching Open Compute, Facebook's Jay Park came up with this classic reply:

… it would almost seem silly to do all this work and just keep it closed

Asking Facebook to unfriend coal
Greenpeace started a campaign to pressure Facebook into using more renewable energy resources, because Pacific Power, the energy supplier Facebook will be using for its Prineville data center, produces almost 60% of its electricity from burning coal.

Greenpeace being Greenpeace, they created a highly viral campaign, using the Facebook site itself and the usual cadre of humorous videos, to pressure Facebook into sourcing its electricity from more renewable sources.

When we asked Facebook about this in our briefing, they did say that their data centre efforts are built around many more considerations than just the source of the energy that comes into the data centre. They then went on to maintain that they are impressed by Pacific Power's commitment to moving towards renewable sources of energy (the utility is targeting 2,000MW of power from renewables by 2013). And they concluded by contending that the efficiencies they have achieved in Prineville more than offset the use of coal which powers the site.

Facebook tell us this new custom data centre at Prineville has a PUE of 1.07, which is very impressive.
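For context, PUE is simply the ratio of total facility power to the power actually delivered to the IT equipment; 1.0 would mean zero overhead. A minimal sketch, with illustrative load figures (not Facebook's actual numbers):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

# At PUE 1.07, a 10 MW IT load carries only 0.7 MW of cooling and
# distribution overhead; an industry-typical PUE of 1.5 carries 5 MW.
print(pue(10700, 10000))  # 1.07
print(pue(15000, 10000))  # 1.5
```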

They have gone all out on innovating their data centre and the servers powering their hugely popular site. More than that though, they are launching the Open Compute Project giving away all the specs and vendor lists required to reproduce an equally efficient site. That is massively laudable.

It is unfortunate that their local utility has such a high proportion of coal in its generation mix, besmirching an otherwise great energy and sustainability win for Facebook. The good thing, though, is that as the utility adds to its portfolio of renewables, Facebook's site will only get greener.

For more on this, check out the discussions on Techmeme.

You should follow me on Twitter here

Photo credit Facebook's Chuck Goolsbee


Ad infinitum's InSite helping companies save energy


Continuing my series of chats with companies in the data center energy management space, I spoke recently to Philip Petersen, CEO of UK-based ad infinitum.

Their product, called InSite, like those of most of the others in this space I have spoken to, is a server-based product, front-ended by a browser.

InSite pulls data directly from devices (like power strips, distribution board meters, and temperature and humidity sensors) and stores it in a PostgreSQL database. Having an SQL database makes it that much easier to integrate with other systems, both for pulling in data and for sharing information. This is handy when InSite is connected to a Building Management System (BMS): it allows organisations to see what proportion of a building's power is going to keep the Data Center running, for example. And because InSite can poll servers directly, it can be used to calculate the cost of running server-based applications (such as Exchange, Notes, SQL Server, SAP, etc.).
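That application-cost calculation is essentially an integration of polled power readings against an electricity tariff. Here's a minimal sketch of the idea; the polling interval, sample values and tariff are my assumptions, not InSite's internals:

```python
def energy_cost(power_samples_w, interval_s, tariff_per_kwh):
    """Integrate polled power readings (watts) over time to get kWh,
    then price it at the tariff. Simple rectangle-rule integration,
    as a monitoring tool might do between polls."""
    kwh = sum(power_samples_w) * interval_s / 3600.0 / 1000.0
    return kwh, kwh * tariff_per_kwh

# Illustrative: a server polled every 5 minutes for an hour,
# drawing roughly 250W, priced at EUR 0.15/kWh.
samples = [250, 248, 252, 251, 249, 250, 250, 252, 248, 250, 251, 249]
kwh, cost = energy_cost(samples, 300, 0.15)
print(f"{kwh:.3f} kWh costing {cost:.4f}")
```

Sum the same readings per application or per rack and you get the kind of cost-per-application figure described above.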

I asked Philip about automation and he said that while InSite has an inbuilt Automation Engine, it hasn't been deployed because "no client that we have spoken to has wanted to do that yet". Demand for automation will come, he said, but right now companies are looking for more basic stuff – they often just want to see what's actually going on, so that they can decide on the best way to respond.

InSite's target customers are your typical medium to large organisations (ones likely to have significant IT infrastructures) as well as co-lo operators. Unlike some of the other companies in this space though, Ad infinitum were able to share some significant customer wins – Tiscali's UK Business Services, Equinix and Cisco's UK Engineering labs.

In fact, Cisco have published a Case Study referencing this solution on their website [PDF], detailing how Cisco were able to achieve a 30% reduction in IT equipment power consumption and a 50% drop in their cooling costs!

It’s hard to argue with a significant customer win like that!

You should follow me on Twitter here

Photo credit JohnSeb


More info please IBM…

IBM Green Data Center in Second Life

Speaking of data centers, I was delighted to read this morning of a partnership between IBM and Indian bank Kotak.

According to the release, IBM is helping the bank consolidate its server rooms into one data center and Kotak will save:

over US$1.2 million in operational efficiency and reduced energy costs over the next five years

I'd like to see some of the calcs behind those figures – $1.2m over five years sounds low to me unless it is a modest data center.

Intriguingly, the release refers to:

a chilled water-based cooling and an automatic floor pressurization system

If that is water-cooled servers (as opposed to water-cooled air handling units) then this is nice. I'd love to know what an "automatic floor pressurization system" is. Anyone know? My guess is that it is something for maintaining underfloor airflow integrity, but if so, it sounds like traditional air-cooled servers, not water-cooled 🙁

Hello? Anyone from IBM have any more info on this?


How to build a hyper Energy-efficient Data Center

I am speaking next week at a virtual conference called “bMighty – A Deep Dive on IT Infrastructure for SMBs” – apologies in advance for the state of the website(!)

My talk is titled “How to build a hyper Energy-efficient Data Center” and is based on the CIX data center which I helped develop (and am still a director of).

This is the slide deck I will be presenting there.


Data centers as energy exporters, not energy sinks!

Temp alert
Photo Credit alexmuse

I have been asking a question of various manufacturers recently and not getting a satisfactory answer.

The question I was asking was: why don't we make heat-tolerant servers? My thinking was that if we had servers capable of working at temperatures of 40C, then data centers wouldn't expend as much energy trying to cool the server rooms.

This is not as silly a notion as it might first appear. I understand that semiconductor performance dips rapidly as temperature increases; however, if you had hyper-localised liquid cooling which ensured that the chip's temperature stayed at 20C, say, then the rest of the server could safely sit at a higher temperature, no?

When I asked Intel, their spokesperson Nick Knupffer responded by saying:

Your point is true – but exotic cooling solutions are also very expensive + you would still need AC anyway. We are putting a lot of work into reducing the power used by the chips in the 1st place, that equals less heat. For example, our quad-core Xeon chips go as low as 50W of TDP. That combined with better performance is the best way of driving costs down. Lower power + better performance = less heat and fewer servers required.

He then went on to explain Intel's new hafnium-infused high-k metal gate transistors:

It is the new material used to make our 45nm transistors – gate leakage is reduced 100 fold while delivering record transistor performance. It is part of the reason why we can deliver such energy-sipping high performance CPU’s.

At the end of the day – the only way of reducing the power bill is by making more energy efficient CPU’s. Even with exotic cooling – you still need to get rid of the heat somehow, and that is a cost.

He is half right! Sure, getting the chip's power consumption down is important and will reduce the server's heat output, but as a director of a data center I can tell you what will happen: more servers will be squeezed into the same data center space, doing away with any potential reduction in data center power requirements. Parkinson's law meets data centers!

No, if you want to take a big-picture approach, you reduce the chips' power consumption and then cool those chips directly with a hyper-localised solution so the server room doesn't need to be cooled. That way, the cooling goes only where it is required.

IBM's Steven Sams, Vice President of Global Site and Facilities Services, sent me a more positive answer:

We’ve actually deployed this in production systems in 3 different product announcements this year

New z series mainframes actually have a closed coolant loop inside the system that takes coolant to the chips to let us crank up the performance without causing chip to slide off as the solder melts. New high performance unix servers system P….. actually put out 75,000 watts of heat per rack….. but again the systems are water cooled with redundant coolant distribution units at the bottom of the rack. The technology is pretty sophisticated and I’ve heard that each of these coolant distribution units has 5 X the capacity to dissipate heat of our last water cooled mainframe in the 1980’s. The cooling distribution unit for that system was about 2 meters wide by 2 meters high by about 1 meter deep. The new units are about 10 inches by 30 inches.

The new webhosting servers iDataplex use Intel and AMD microprocessors and jam a lot of technology into a rack that is about double the width but half the depth. To ensure that this technology does not use all of the AC in a data center, the systems are installed with water cooled rear door heat exchangers… ie a car radiator at the back of the rack. These devices actually take out 110% of the heat generated by the technology so the outlet temp is actually cooler than the air that comes in the front. A recent study by a west coast technology leadership consortium at a facility provided by Sun Microsystems actually showed that this rear door heat exchanger technology is the most energy efficient of all the alternatives they evaluated with the help of the Lawrence Berkeley national laboratory.

Now that is the kind of answer I was hoping for! If this kind of technology became widespread for servers, the vast majority of the energy data centers burn on air conditioning would no longer be needed.

However, according to the video below, which I found on YouTube, IBM are going way further than I had imagined. They announced their Hydro-Cluster Power 575 series supercomputers in April. They plan to allow data centers to capture the heat from the servers and export it as hot water for swimming pools, cooking, hot showers, etc.

This is how all servers should be plumbed.

Tremendous – data centers as energy exporters, not energy sinks. I love it.


Rackspace's customers 'won't pay a premium' for Green products?

Photo Credit ignescent_infidel

Jon Brodkin wrote a piece in ComputerWorld UK about a survey of Rackspace's customers which seems to suggest that they 'won't pay a premium' for Green products. Jon goes on to extrapolate that they:

found some results suggesting businesses are losing interest in green technology.

There are a number of problems with this assumption. First off, you have to realise that Rackspace don't do co-lo; they only do managed hosting. So, if I am an IT manager, I can't put my equipment, no matter how energy-efficient, in a Rackspace Data Center – I have to use their equipment. What is not clear from the piece Jon wrote is what 'premium' the Rackspace customers were being asked to pay.

Again, if I am an IT manager, I can choose to buy, for example, Dell's PowerEdge™ Energy Smart 2950 III (SV22952), which is cheaper but slightly less powerful than their standard PowerEdge™ 2950 (SV22951). Realistically, the only reason I am going to do this is if it is going to save me money.
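That buying decision comes down to simple lifetime arithmetic. A sketch with hypothetical purchase prices, power draws and tariff (not Dell's actual figures):

```python
def three_year_cost(purchase_price, avg_power_w, tariff_per_kwh,
                    years=3, pue=1.5):
    """Purchase price plus electricity over the period. The PUE factor
    accounts for the cooling/overhead power the facility also burns
    for every watt of server load."""
    hours = years * 8760
    kwh = avg_power_w / 1000.0 * hours * pue
    return purchase_price + kwh * tariff_per_kwh

# Hypothetical figures: a standard server vs. an energy-smart model
# that costs less up front and draws 70W less on average.
standard = three_year_cost(2200, 350, 0.12)
energy_smart = three_year_cost(2000, 280, 0.12)
print(f"standard: ${standard:.0f}, energy smart: ${energy_smart:.0f}")
```

With these made-up numbers the energy-smart box wins on both purchase price and running cost; the interesting cases are where the efficient model costs more up front and the electricity saving has to pay back the difference.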

As James said previously – the wrong people are paying the electricity bill in companies currently (no pun intended):

IT doesn't pay for its electricity. No, seriously, go to your FM manager or IT manager and ask who pays to power your IT properties. The vast majority of IT systems get a free ride on electricity bills, which is one reason it's taken so long to fully consider IT carbon costs.

When that changes (and it will) watch IT managers suddenly become extremely interested in the energy ratings of their servers.

Going back to the Rackspace survey, fundamentally I think Rackspace are taking the wrong approach. What they should be doing is increasing prices across the board to reflect their own increased energy bill – except for those customers who choose to be hosted on energy-efficient servers. If Rackspace took that route, suddenly you'd see an about-face in the number of their customers who are apparently losing interest in green technology!

[Disclosure: I am a co-founder and director of Cork Internet eXchange (CIX), an energy efficient data center based in Cork, Ireland. CIX charges all customers separately for their electricity usage.]


IBM reckons Green is where economic and ecological concerns converge

I love this ad. It demonstrates that not only has IBM a sense of humour but also that they have the right story – today, with soaring energy prices, Green is where economic and ecological concerns converge.

Last year IBM announced Project Big Green. This was a commitment by IBM to re-direct $1 billion USD per annum across its businesses to increase energy efficiency! Serious money by anyone’s standards.

This isn't just some philanthropic gesture on IBM's part. By making this investment, the company expects to save more than five billion kilowatt-hours per year. IBM anticipates doubling the computing capacity of the eight million square feet of data center space it operates within the next three years, without increasing power consumption or its carbon footprint. In other words, double the compute power with no new data centers and no added carbon footprint!

This year, IBM have gone even further! As an extension of Project Big Green, they have announced 'modular data centers', similar to Sun's S20 product. They come in three sizes, and IBM claims they are

designed to achieve the world’s highest ratings for energy leadership, as determined by the Green Grid, an industry group focused on advancing energy efficiency for data centers and business compute ecosystems.

I'd love to see comparable metrics between the S20 and IBM's modular data centers.

However, the take home message today is that IBM is committing serious resources to its Green project. Not because they care deeply for the planet (I’m sure they do) but because they care deeply about the bottom line and with increasing energy costs, there is now a sweet convergence between doing the right thing for the planet and for the shareholder!


IPv6: Towards a Greener Internet

As you probably know by now, we're very interested in the idea of what might constitute a green API or protocol, so my interest was piqued when I received a link via Twitter from @Straxus (Ryan Slobojan).

The Aon Scéal? (that's "Any news?" in Irish) blog by Alastair McKinstry points to this piece by Yves Poppe, which argues that IPv6 could save 300 Megawatts.

Easy to forget that most mobile devices used by Times Square revelers were behind IPv4 NATs and that always-on applications such as Instant Messaging, Push e-mail, VoIP or location based services tend to be electricity guzzlers. It so happens that applications that we want always to be reachable have to keep sending periodic keepalive messages to keep the NAT state active. Why is that so? The NAT has an inactivity timer whereby, if no data is sent from your mobile for a certain time interval, the public port will be assigned to another device.

You cannot blame the NAT for this inconvenience; after all, its role in life is to redistribute the same public addresses over and over. If it detects you stopped using the connection for a little while, too bad, you lose the routable address and it goes to someone else. And when the next burst of data communication comes, guess what? It doesn't find you anymore. Just think of a situation where we would lose our cell phone number every time it is not in use and get a new one reassigned each time.

Nokia carried out the original study – good work, Nokia researchers! Another way of looking at the saved energy, which I think we'd all vote for, is potentially longer battery life for our mobile access devices. I am sure the folks at Nortel, who are so enthusiastically driving the green agenda for competitive advantage, would be interested in this research, and quite honestly it's one of the first arguments I have heard that makes me think: ah yes, IPv6, let's pull the trigger. There are some good skeptical arguments in the comments here, but on balance I can definitely see the value of the initial research. It's surely worth further study.
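The energy cost of those keepalives is easy to ballpark. A sketch with assumed figures – the radio wake-up energy and NAT timeout below are illustrative guesses, not numbers from the Nokia study:

```python
def keepalive_energy_mwh_per_day(interval_s, wake_energy_mj):
    """Daily energy spent keeping a NAT binding alive: one radio
    wake-up per keepalive interval. Returns milliwatt-hours per day
    (1 mWh = 3600 mJ)."""
    wakeups_per_day = 86400 / interval_s
    return wakeups_per_day * wake_energy_mj / 3600.0

# Assumed: a 3G radio wake-up costing ~2 J, and a NAT binding that
# times out after 30 seconds, forcing a keepalive every half minute.
ipv4_nat = keepalive_energy_mwh_per_day(30, 2000)
print(f"{ipv4_nat:.0f} mWh/day just on keepalives")
```

With these guesses that's a meaningful slice of a phone battery every day, spent doing nothing useful; on end-to-end IPv6 the keepalives (and that drain) simply go away.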

While writing this article I also came across the rather excellent Green IT/Broadband blog. The author clearly believes in our Bit Miles concept, even if he doesn’t call it that.

Governments around the world are wrestling with the challenge of how to reduce carbon dioxide emissions. The current preferred approaches are to impose “carbon” taxes and implement various forms of cap and trade or carbon offset systems. However another approach to help reduce carbon emission is to “reward” those who reduce their carbon footprint rather than imposing draconian taxes or dubious cap and trade systems. It is estimated that consumers control or influence over 60% of all CO2 emissions. As such, one possible reward system of trading “bits and bandwidth for carbon” is to provide homeowners with free fiber to the home or free wireless products and other electronic services such as ebooks and eMovies if they agree to pay a premium on their energy consumption which will encourage them to reduce emissions by turning down the thermostat or using public transportation. Not only does the consumer benefit, but this business model also provides new revenue opportunities for network operators, optical equipment manufacturers, and eCommerce application providers.

European IPv6 Day, hosted by the EU, is on the 30th of May. Come to think of it, the guy I should talk to about green IP is Vint Cerf of Google.


Finnair: Awesomeness by Carbon Calculator (never say never)

Just the other day I said we wouldn't be covering Carbon Calculators unless they ran on AMEE. Wrong. This afternoon I got a link from Joseph Simpson at MovementDesign and it got me thinking. I have no idea why a thinktank dedicated to the future of movement wouldn't just blog the link rather than sending it to me, but that's the web for you. Wired has a story about Finnair. Wired gives them props for not being defensive about emissions, but that's not what jumped out at me. What I like is that Finnair is showing customers the potential carbon impacts of different journeys through different hubs.

It's a simple application, but it's pretty cool. Just load in your departure and arrival city, and the calculator returns the total distance of your trip, the amount of fuel used per passenger, and the amount of CO2 generated by that fuel. To calculate the per-passenger number, Finnair looks at typical load factors for their different flight segments (long-haul flights tend to be 85% full, leisure flights 95%, etc.), and also takes into account what type of plane is being flown on each route, since fuel efficiency varies depending on the model. And, with typical Scandinavian thoroughness, Finnair has designed the calculator so that you're able to see how emissions are impacted by connections at various Finnair hub cities.

It's that last function which interests me most, in some respects. Now if we could just get Finnair to integrate with AMEE at the back end and Dopplr, the travel serendipity platform, at the front end for trip-planning, then we'd be cooking with… uh… a wind-powered oven. Exciting times. I would love to know what the implications are for trips through different hubs. I am pretty sure Heathrow, with its circling and fuel-burning on the ground, is just awful. Computers and augmented intelligence are going to redefine travel in the new energy era.
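A back-of-the-envelope version of that per-passenger and hub arithmetic: the fuel burn, seat count and load factor below are made-up figures, not Finnair's data, though the 3.16 kg of CO2 per kg of jet fuel is the standard ICAO conversion factor:

```python
def co2_per_passenger_kg(fuel_burn_kg, seats, load_factor):
    """Jet fuel yields about 3.16 kg of CO2 per kg burned (ICAO
    factor); divide the flight's emissions across occupied seats."""
    passengers = seats * load_factor
    return fuel_burn_kg * 3.16 / passengers

# A hypothetical long-haul leg burning 60 tonnes, 300 seats, 85% full...
direct = co2_per_passenger_kg(60_000, 300, 0.85)
# ...versus a routing via a hub that adds roughly 10% to the fuel burn.
via_hub = co2_per_passenger_kg(66_000, 300, 0.85)
print(f"direct: {direct:.0f} kg, via hub: {via_hub:.0f} kg per passenger")
```

Comparing those two numbers per routing is, in essence, what the hub-comparison feature lets travellers do.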