
Facebook open sources building an energy-efficient data center

Facebook's new custom-built Prineville Data Centre

Back in 2006 I was the co-founder of a data centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time I was also heavily involved in social media, so I had the crazy idea: if we were going to build this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first. As far as I know, no-one had done it before, and no-one has replicated it since. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing that their new data centre is online, they are open sourcing its design and specifications, and even naming their suppliers, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the Open Compute Project and they have released a fact sheet [PDF] with details on their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area, 3,200 ft above sea level with mild temperatures) means they will be able to take advantage of a lot of free cooling. Where they can't use free cooling, they will utilise evaporative cooling to cool the air circulating in the data centre room. This means they won't have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the warm return air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs serving 6 racks (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces the power lost in the utility-to-server chain from an industry average of 11-17% down to 2% at Prineville.
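
To put those loss figures in context, here is a minimal back-of-the-envelope sketch. The 1 MW utility draw is a made-up number purely for illustration; only the loss percentages come from Facebook's briefing:

```python
# Back-of-the-envelope comparison of utility-to-server losses,
# using the loss figures quoted above (illustrative only).

def power_delivered(utility_kw, loss_fraction):
    """Power that actually reaches the servers after distribution/UPS losses."""
    return utility_kw * (1 - loss_fraction)

utility_kw = 1000  # hypothetical 1 MW drawn from the grid

for label, loss in [("industry average (best case)", 0.11),
                    ("industry average (worst case)", 0.17),
                    ("Prineville", 0.02)]:
    print(f"{label}: {power_delivered(utility_kw, loss):.0f} kW reaches the servers")
```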

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6F (27C) – the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run at 85F – this will save enormously on cooling costs. And they claim that the reduction in the number of parts in the data center means they go from 99.999% uptime to 99.9999% uptime.

New server design
Facebook also designed custom servers for their data centres. The servers have no paint, logos, stickers, bezels or front panel. They are designed to be bare-bones (using 22% fewer materials than a typical 1U server) and for ease of serviceability (snap-together parts instead of screws).

The servers are 1.5U tall to allow for larger heat sinks and larger (slower turning and consequently more efficient) 60mm fans. These fans take only 2-4% of the energy of the server, compared to 10-20% for typical servers. The heat sinks are all spread across the back of the motherboard so none of them receives pre-heated air from another heat sink, reducing the work required of the fans.

The server power supply accepts both 277V AC power from the electrical distribution system and 48V DC from the UPS in the event of a utility power failure. These power supplies have a peak efficiency of 94.5% (compared to a more typical 90% for standard PSUs) and they connect directly to the motherboard, simplifying the design and reducing airflow impedance.

Open Compute
Facebook relied heavily on open source in creating their site. Now, they say, they want to make sure the next generation of innovators don’t have to go through the same pain as Facebook in building out efficient data centre infrastructure. Consequently, Facebook is releasing all of the specification documentation which it gave to its suppliers for this project.

Some of the schematics and board layouts for the servers belong to the suppliers, so they are not being published for now. Facebook did say they are working with their suppliers to see if they will release them (or portions of them), but they haven't reached agreement on this just yet.

Asked directly about their motivations for launching Open Compute, Facebook's Jay Park came up with this classic reply:

… it would almost seem silly to do all this work and just keep it closed

Asking Facebook to unfriend coal
Greenpeace started a campaign to pressure Facebook into using more renewable energy because Pacific Power, the utility supplying Facebook's Prineville data center, produces almost 60% of its electricity from burning coal.

Greenpeace, being Greenpeace, created a very viral campaign, using the Facebook site itself and the usual humorous videos, to pressure Facebook into sourcing its electricity from more renewable sources.

When we asked Facebook about this in our briefing, they did say that their data centre efforts are built around many more considerations than just the source of the energy that comes into the data centre. They went on to say that they are impressed by Pacific Power's commitment to moving towards renewable sources of energy (the utility is targeting 2,000MW of power from renewables by 2013). And they concluded by contending that the efficiencies they have achieved in Prineville more than offset the coal which powers the site.

Conclusion
Facebook tell us this new custom data centre at Prineville has a PUE of 1.07, which is very impressive.
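
For anyone unfamiliar with the metric, PUE is simply total facility power divided by the power that reaches the IT equipment, so a PUE of 1.07 means only 7% overhead for cooling, power distribution, lighting and so on. A quick illustrative calculation (the 1 MW IT load below is a made-up figure, not Facebook's):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 1000   # hypothetical 1 MW of IT load
overhead_kw = 70    # cooling, distribution losses, lighting, ...
print(pue(it_load_kw + overhead_kw, it_load_kw))  # -> 1.07
```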

They have gone all out on innovating their data centre and the servers powering their hugely popular site. More than that though, they are launching the Open Compute Project giving away all the specs and vendor lists required to reproduce an equally efficient site. That is massively laudable.

It is unfortunate that their local utility has such a high proportion of coal in its generation mix, besmirching an otherwise great energy and sustainability win for Facebook. The good thing though is that as the utility adds to its portfolio of renewables, Facebook's site will only get greener.

For more on this check out the discussions on Techmeme

You should follow me on Twitter here

Photo credit Facebook's Chuck Goolsbee


Sentilla thinks of data centers as data factories!

Data center

If you have been following this blog, you’ll know I have been profiling Data Center efficiency companies over the last few weeks. This week I take a look at Sentilla.

I talked to Sentilla’s CTO and co-founder, Joe Polastre, the other day and Joe told me that Sentilla came out of Berkeley where they had been looking at data analytics problems around large, potentially incomplete or inaccurate, streaming datasets. The challenge was how to turn that into a complete picture of what’s going on so people could make business decisions.

Sentilla takes an industrial manufacturing approach to data centers – in manufacturing you have power going in one side, and products and (often) waste heat coming out the other. In the same way, in data centers you have power going in one side and, coming out the other, the product (compute cycles) and waste heat. To optimise your data center you need to get the maximum data/compute (product) output with the minimum power in and the least waste heat generated. Sentilla thinks of data centers as data factories!

Unlike most of the data center people I have talked to, Sentilla don’t talk so much about energy savings. Instead they emphasise maximising performance – getting the most out of your existing data centers, your existing storage, your existing servers, your existing power supply. By far the greatest saving from deploying Sentilla, Joe claimed, is not from the energy savings. That pales in comparison to the capital deferment savings gained from being able to delay the building of extra data center facilities by however many years, he said.

So how does Sentilla help?

Well, Sentilla analyses the energy profile of every asset in the data center, whether metered or not, and makes recommendations to improve the planning and management of data center operations. I highlighted the "whether metered or not" bit because this is an important differentiator for Sentilla – they have developed and patented what they call "virtual meters". These are algorithms which look at the work a device is doing and, based on models Sentilla have built up, measurements they have done and published benchmarks, compute how much power that equipment is using.
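
Sentilla haven't published the algorithms behind their virtual meters, but conceptually the idea boils down to something like the sketch below – take idle and peak power figures from a model or benchmark of that hardware, and interpolate based on the utilisation the device reports. The function and numbers here are my own illustration, not Sentilla's code:

```python
def estimate_power_watts(idle_watts, peak_watts, utilisation):
    """Very rough virtual-meter estimate: interpolate between idle and peak
    power based on reported utilisation (0.0 - 1.0)."""
    return idle_watts + (peak_watts - idle_watts) * utilisation

# Hypothetical legacy server modelled at 180 W idle and 300 W at full load,
# currently reporting 35% utilisation via the management database.
print(estimate_power_watts(180, 300, 0.35))  # -> ~222 W
```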

The reason this is so important is that the most inefficient equipment in the data center is not the new kit (which is likely to already be metered) but the legacy devices. These are the ones which need to be most carefully managed, and the ones where the greatest performance gains for the data center can be made. And because Sentilla can pull usage information from management databases like Tivoli, it doesn't need to poll every piece of equipment in the data center (with the increased network traffic and data that would generate).

Also, because Sentilla has its virtual meters, it is a software-only product and can therefore be rolled out very quickly.

The other nice feature Sentilla has is that it can identify the energy utilisation of virtualised servers. This is important because, with the increasing ease of deployment of virtual servers, under-utilised VMs and VM clutter are starting to become issues for data centers. VM clutter isn't just an issue for energy reasons – there are also implications for software licensing, maintenance and SLA requirements.

I asked Joe about whether Sentilla is a SaaS product and he said that while they have a SaaS version of the product, so far most of Sentilla’s clients prefer to keep their data in-house and they haven’t gone for the SaaS option.

Finally I asked about pricing and Joe said that Sentilla is priced on a subscription basis and, apparently, it is priced such that, for any modest-sized data center, for every $1 you put into Sentilla you get $2 back. Or put another way, Joe said, deploying Sentilla will generally mean that you reclaim around 18-20% of your power capacity.

Disclosure: Sentilla are a client (but this post is not part of their client engagement)

You should follow me on Twitter here

Photo credit The Planet


Power Assure automates the reduction of data center power consumption

Data centre

If you’ve been following this blog in the last couple of weeks you’ll have noticed that I have profiled a couple of data centre energy management companies – well, today it is the turn of Power Assure.

The last time I talked to Power Assure was two years ago, when they were still very early stage. At that time I talked to co-founder and CTO Clemens Pfeiffer; this time I spoke with Power Assure's President and CEO, Brad Wurtz.

The spin Power Assure put on their energy management software is that not only does their Dynamic Power Management solution provide realtime monitoring and analytics of power consumption across multiple sites, their Dynamic Power Optimization application also automatically reduces power consumption.

How does it do that?

Well, according to Brad, clients put an appliance in each of the data centres they are interested in optimising (Power Assure's target customers are large organisations with multiple data centres – government, financial services, healthcare, insurance, telcos, etc.). The appliance uses the management network to gather data – data may come from devices (servers, PDUs, UPSs, chillers, etc.) directly or, more frequently, from multiple existing databases (e.g. a Tivoli db, a BMS, an existing power monitoring system and/or an inventory system) – and performs data centre analytics on those data.

Data centre

The optimisation module links into existing systems management software to measure and track energy demand on a per-application basis in realtime. It then calculates the amount of compute capacity required to meet the service level agreements of that application and adds a little headroom. From that compute requirement it knows the number of servers needed, so it communicates with the load balancer (or hypervisor, depending on the data centre's infrastructure) and adjusts the size of the server pool to meet the required demand.

Servers removed from the pool can be either power capped or put in sleep mode. As demand increases, the servers can be brought fully online and the load balancer re-balanced so the enlarged pool can meet the new level of demand. This is the opposite of the smart grid demand response concept – it is supply-side management – matching your energy consumption (supply) to the demand for compute resources.
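
In rough pseudo-code terms, the sizing step Brad described amounts to something like this – a minimal sketch with made-up workload numbers, not Power Assure's actual logic:

```python
import math

def servers_required(demand_rps, per_server_rps, headroom=0.1):
    """Servers needed to meet current demand plus a little headroom."""
    return math.ceil(demand_rps * (1 + headroom) / per_server_rps)

pool_size = 40  # servers currently behind the load balancer (hypothetical)
needed = servers_required(demand_rps=12_000, per_server_rps=500)  # -> 27

# Surplus servers are taken out of the pool and power-capped or put to sleep;
# if demand rises they are woken and added back before the pool is re-balanced.
surplus = max(pool_size - needed, 0)
print(f"keep {needed} active, idle {surplus}")
```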

A partnership with Intel means that future versions will be able to turn off and on individual components or cores to more precisely control power usage.

The software is agentless and, interestingly given the customer profile Brad outlined (pharmas, financial institutions, governments, etc.), it is delivered as SaaS – customers log in to view and manage their power consumption data.

The two case studies on their site make for interesting reading and show reductions in power consumption of 56-68%, which are not to be sneezed at.

The one client referred to in the call is NASA, with whom Power Assure are involved in a data centre consolidation programme. Based on the work they have done with Power Assure, Brad informed me that NASA now expects to be able to consolidate their current 75 data centres significantly. That'll make a fascinating case study!

You should follow me on Twitter here

Photo credit cbowns


Viridity’s new President and CEO Arun Oberoi speaks to GreenMonk

Viridity EnergyCheck Screen Shot

We all know data centres are massive consumers of energy, but just how much? European data centre consumption was 50 terawatt hours (TWh) in 2008, according to a recent article in the Guardian. This will rise to 100TWh by 2020, roughly the same as the electricity consumption of Portugal.

I mentioned on here just before Christmas that data center energy management company Viridity had named Arun Oberoi as their new President and CEO. Arun has an impressive CV which is outlined in Viridity’s press release about the appointment.

I had the opportunity to chat with Arun recently and he talked to me about Viridity’s solutions.

Data centre with cold aisle containment

As Arun put it, the world has done a great job of mapping dependencies to IT Services in the Enterprise Management world but very little has been done so far on bridging the physical world (think power, space and cooling) to the logical world. These are resources which are becoming very expensive but whose ability to be measured and managed has been hampered by the traditional separation of roles between facilities and IT, for example.

Three areas Viridity can help companies with, according to Arun, are:

  1. Power and cost savings
  2. Sustainability – emissions reduction and
  3. Mapping physical to logical to ensure optimisation of resources and managing data centre physical constraints (which, unlike IT, can’t be virtualised!)

Viridity's software takes the data from many, often disparate, sources and provides analysis and trending information to allow managers to decide how best to reduce their electricity and space costs. The next version will have automation built in to enable even greater savings!

In an ideal world this would mean that European data centre consumption might only rise to 60 terawatt hours (TWh) by 2020, instead of the projected 100TWh. However, Parkinson's Law teaches us that data centres expand to fill the power available to run them!

Photo credit Tom Raftery


SAP’s Palo Alto energy efficiency and CO2 reductions

Cisco Telepresence

As mentioned previously, I was in Santa Clara and Palo Alto last week for a couple of SAP events.

At these events SAP shared some of its carbon reduction policies and strategies.

According to SAP Chief Sustainability Officer Peter Graf, the greatest bang for buck SAP is achieving comes from the deployment of telepresence suites. With video conferencing technologies SAP is saving €655 per ton of CO2 avoided. This is hardly surprising given Cisco themselves claim to have saved $790m in travel expenditure from their telepresence deployments!

Other initiatives SAP mentioned were the installation of 650 solar panels on the roof of building 2, which provide around 5-6% of SAP's Palo Alto energy needs. This means that on sunny days the SAP Palo Alto data centre can go completely off-grid. The power from the solar panels is not converted to AC at any point – instead it is fed directly into the data centre as DC – thereby avoiding the normal losses incurred in the DC->AC->DC conversion for computer equipment. Partnerships with OSIsoft and Sentilla ensure that the data centre runs at optimum efficiency.

SAP also rolled out 337 LED lighting systems. These replaced fluorescent lighting tubes and because the replacement LED lights are extremely long-life, as well as low energy, there are savings on maintenance as well as electricity consumption.

Coulomb electric vehicle charging station at SAP HQ in Palo Alto

SAP has placed 16 Coulomb level two electric vehicle charging stations around the car parks in its facility. These will allow employees who purchase electric vehicles to charge their cars free of charge (no pun!) while they are at work. SAP has committed to going guarantor on leases for any employees who plan to purchase electric vehicles. We were told to watch out for a big announcement from SAP in January in the electric vehicle space!

In total, SAP has invested $2.3m in energy efficiency projects at its Palo Alto campus. This will lead to savings of $665,000 per annum, a payback of under four years, and an annual CO2 emissions reduction of over 807 tons.
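
Those figures check out as a simple payback calculation:

```python
investment = 2_300_000     # $2.3m invested in efficiency projects
annual_saving = 665_000    # $665,000 saved per annum
print(investment / annual_saving)  # ~3.46 years, i.e. payback in under four years
```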

This may sound like small potatoes but SAP intends to be both an exemplar and an enabler – so they want to be able to 'walk the walk, as well as talk the talk'.

One of the points SAP constantly makes in briefings is that while their own CO2 emissions amounted to 425,000 tons for 2009, the CO2 emissions of their customer base associated with running SAP software are 100 times that, and the total CO2 emissions of their customer base are 100 times that again! Consequently SAP sees itself as potentially having sway over a large portion of the world's carbon emissions. SAP hopes to be able to use this influence to help its client companies significantly reduce their emissions – and to use its software to report on those same reductions!

Two questions I forgot to ask SAP on the day were:

  1. if they were getting any rebates from their utility (PG&E) for the energy reductions, and
  2. if the car charging stations were being run from the solar panels (and if so, were they also running DC-DC directly)?

NightWatchman Server Edition v2.0 helps tackle virtual server sprawl

Grassy server room!

Photo via Tom Raftery

In a bid to help companies tackle server sprawl, 1E launched v2.0 of its NightWatchman Server Edition yesterday.

1E’s NightWatchman software comes in two flavours – the desktop edition to allow for central administration of the power management of laptops and desktops (including Macs) and the server edition.

The power consumption of desktop computers, which are often only used 8 hours a day (and may need to be woken once a month at 3am for an update), is relatively straightforward to manage. The power management of servers, on the other hand, is quite a bit more complex. Servers are, by definition, supposed to be accessible at all times, so you can't shut them down, right?

Well, yes and no.

Not all servers are equal. Up to 15% of servers globally are powered on but simply doing nothing. This equates to roughly $140 billion in power costs and produces 80 million tons of carbon dioxide annually.

NightWatchman helps in a number of ways. First, its agent-based software quickly identifies servers whose CPU utilisation is simply associated with their own management and maintenance processes (i.e. the server is unused). These servers can be decommissioned or repurposed.

NightWatchman goes further though: it uses its Drowsy Server technology to dynamically drop CPU and fan speeds on servers when they are not under pressure, and ramp them back up as soon as the server starts to be used again. 1E estimates an average 12% energy saving per server from Drowsy Server alone.
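
1E haven't said how Drowsy Server is implemented under the hood, but the underlying idea is the same one exposed by ordinary CPU frequency scaling interfaces – drop the clock (and consequently fan) speed when the machine is quiet, and restore it when load returns. A rough illustration of the concept using the Linux cpufreq interface (my own sketch, not 1E's code):

```python
CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"

def set_governor(governor):
    """Switch CPU0's frequency scaling governor (requires root)."""
    with open(CPUFREQ, "w") as f:
        f.write(governor)

def drowsy_policy(cpu_utilisation):
    # Illustrative threshold only: slow the CPU down when the server is quiet,
    # let it ramp back up as soon as real work arrives.
    set_governor("powersave" if cpu_utilisation < 0.05 else "ondemand")
```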

This latest release of NightWatchman Server Edition addresses virtualisation and virtual server sprawl.

Because virtualisation software vendors have made virtualisation such a trivial task, there is now a growing issue of virtual server sprawl. NightWatchman v2.0 can now identify unused virtual servers so they can be deleted or put to work, freeing up server resources (and software licenses!).

Even more interesting, though, is that 1E are publishing, in the Customer Spotlight section of their blog, a series of posts from CSC detailing the journey of installing and using NightWatchman Server Edition within CSC to reduce energy consumption.

It is one thing to hear from the vendor just how good their product is. It is another thing completely to have someone like CSC detail the rollout of the software across their North American server infrastructure. This is a blog I will be following with interest.

You should follow me on Twitter here.


Should Facebook's investors be worried that the site is sourcing energy for its new data center from coal?

Mountain-top removal

Photo credit The Sierra Club

Should Facebook's investors be worried that the site is sourcing energy for its new data center from primarily coal-fired power?

Facebook is the fourth largest web property (by unique visitor count) and well on its way to becoming the third. It is valued in excess of $10 billion and its investors include Russian investment company DST, Accel Partners, Greylock Partners, Meritech Capital and Microsoft.

Facebook announced last month that it would be locating its first data center in Prineville, Oregon. The data center looks to be all-singing, all-dancing on the efficiency front and is expected to have a Power Usage Effectiveness (PUE) rating of 1.15. So far so good.

However, it soon emerged that Facebook are purchasing the electricity for their data center from Pacific Power, a utility owned by PacifiCorp whose primary power-generation fuel is coal!

Sourcing power from a company whose generation comes principally from coal is a very risky business and if there is anything that investors shy away from, it is risk!

Why is it risky?

Coal has significant negative environmental effects, from its mining through to its burning to generate electricity: contaminating waterways, destroying ecosystems, generating hundreds of millions of tons of waste products (including fly ash, bottom ash and flue gas desulfurisation sludge containing mercury, uranium, thorium, arsenic and other heavy metals) and emitting massive amounts of radiation.

And let’s not forget that coal burning is the largest contributor to the human-made increase of CO2 in the air [PDF].

The US EPA recently ruled that:

current and projected concentrations of the six key well-mixed greenhouse gases–carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6)–in the atmosphere threaten the public health and welfare of current and future generations.

Note the wording “the public health and welfare of current and future generations”

Who knows what regulations the EPA will introduce in the coming months and years to control CO2 emissions from coal-fired power plants – and the knock-on effects these will have on costs.

Now think back to the litigation associated with asbestos – the longest and most expensive tort in US history. When you then note that climate change litigation is gaining ground daily, the decision to go with coal as a primary power source starts to look decidedly shaky.

Then Greenpeace decided to wade in with a campaign and Facebook page to shame Facebook into reversing this decision. Not good for the company image at all.

Finally, when you factor in the recent revolts by investors in Shell and BP over decisions likely to land those companies in hot water down the road for pollution, the investors in Facebook should be asking some serious questions right about now.


Green Numbers round-up 10/30/2009

Posted from Diigo. The rest of my favorite links are here.


More info please IBM…

IBM Green Data Center in Second Life

Speaking of data centers, I was delighted to read this morning of a partnership between IBM and Indian bank Kotak.

According to the release, IBM is helping the bank consolidate its server rooms into one data center and Kotak will save:

over US$1.2 million in operational efficiency and reduced energy costs over the next five years

I’d like to see some of the calcs behind those data – $1.2m over five years sounds low to me unless it is a modest data center.

Intriguingly, the release refers to:

a chilled water-based cooling and an automatic floor pressurization system

If that is water-cooled servers (as opposed to water-cooled air handling units) then this is nice. I'd love to know what an "automatic floor pressurization system" is. Anyone know? My guess is that it is something for maintaining underfloor airflow integrity, but if so, then it sounds like traditional air-cooled servers, not water-cooled 🙁

Hello? Anyone from IBM have any more info on this?


How to build a hyper Energy-efficient Data Center

I am speaking next week at a virtual conference called “bMighty – A Deep Dive on IT Infrastructure for SMBs” – apologies in advance for the state of the website(!)

My talk is titled “How to build a hyper Energy-efficient Data Center” and is based on the CIX data center which I helped develop (and am still a director of).

This is the slide deck I will be presenting there.