The Switch SuperNAP data centre – one of the most impressive I’ve been in

Switch SuperNAP data centre

If you were going to build one of the world’s largest data centres, you wouldn’t intuitively site it in the middle of the Nevada desert, but that’s where Switch located its SuperNAP campus. I went on a tour of the data centre recently while in Las Vegas for IBM’s Pulse 2012 event.

The data centre is impressive. And I’ve been in a lot of data centres (I’ve even co-founded and been part of the design team of one in Ireland).

The first thing which strikes you when visiting the SuperNAP is just how seriously they take their security. They have outlined their various security layers in some detail on their website but nothing prepares you for the reality of it. As a simple example, throughout our entire guided tour of the data centre floor space we were followed by one of Switch’s armed security officers!

The data centre itself is just over 400,000 sq ft in size with plenty of room within the campus to build out two or three more similarly sized data centres should the need arise. And although the data centre is one of the world’s largest, at 1,500 Watts per square foot it is also quite dense as well. This facilitates racks of 25kW and during the tour we were shown cages containing 40 x 25kW racks which were being handled with apparent ease by Switch’s custom cooling infrastructure.

Switch custom cooling infrastructure

Because SuperNAP wanted to build out a large scale dense data centre, they had to custom design their own cooling infrastructure. They use a hot aisle containment system with the cold air coming in from overhead and the hot air drawn out through the top of the contained aisles.

The first immediate implication of this is that no raised floors are required in this facility. It also means that, walking around the data centre, you are walking in the data centre’s cold aisle. As part of the design of the facility, the T-SCIFs (thermally separate compartments in facility – heat containment structures) are where the contained hot aisles’ air is extracted, and the external TSC600 quad-process chiller systems generate the cold air externally for delivery to the data floor. This design means there is no need for any water piping within the data room, which is a nice feature.

Through an accident of history (involving Enron!) the SuperNAP is arguably the best connected data centre in the world, a fact they can use to the advantage of their clients when negotiating connectivity pricing. And consequently, connectivity in the SuperNAP is some of the cheapest available.

As a result of all this, the vast majority of enterprise cloud computing providers have a base in the SuperNAP. So does the 56 petabyte eBay Hadoop cluster – yes, 56 petabytes!

US electricity generation

Given that I have regularly bemoaned cloud computing’s increasing energy and carbon footprint on this blog, you won’t be surprised to know that one of my first questions to Switch was about their energy provider, NV Energy.

According to NV Energy’s 2010 Sustainability Report [PDF], coal makes up 21% of the generation mix and gas accounts for another 63.3%. While 84% of electricity generation from fossil fuels sounds high, the 21% figure for coal is low by US standards, as the graph on the right details.

Still, it is a long way off the 100% of electricity from renewables that Verne Global’s new data centre has.

Apart from the power generation profile, which in fairness to Switch is outside their control (and could be considerably worse), the SuperNAP is, by far, the most impressive data centre I have ever been in.

Photo Credit Switch

GreenMonk TV talks flywheel UPSs with Active Power

I attended the 2011 DataCenterDynamics Converged conference in London recently, and while there I chatted to a number of people in the data center industry about where the industry is going.

One of these was Active Power‘s Graham Evans. Active Power makes flywheel UPSs, so we talked about the technology behind these and how they are now becoming a more mainstream option for data centers.

Tom Raftery: Hi everyone, welcome to GreenMonk TV. We are at the DCD Converged Conference in London, and with me I have Graham Evans from Active Power. Graham, you guys make the spinning UPSs.

Graham Evans: That’s right, yes, the flywheel UPSs – kinetic energy. So behind us here we have our powerhouse. What we found with the flywheel UPS is that because of its high density, and the fact that it doesn’t need cooling and is really suited to a containerized environment, we’ve put it in a powerhouse to show the guys at DCD the benefits that we can provide from a systems perspective.

Tom Raftery: So what the flywheel UPS does is it takes in electricity, while the electricity is running, spins wheels really, really fast and then if there is a break it uses that kinetic energy to keep the system up.

Graham Evans: Not quite. The flywheel itself is spinning all the time as an energy storage device. The UPS system is primarily conditioning the power. So as the power comes through – it’s a parallel online system – all of the kilowatts flow through to the load and our converters regulate that power to make sure you get UPS-grade output through to your critical load. All the while the flywheel is spinning, it’s sat there as a kinetic energy store, ready to bridge the gap when it’s required to do so.

Active Power flywheel UPS

An Active Power flywheel UPS

So if the voltage or mains fails on the input, the flywheel itself changes state instantaneously from a motor to a generator, and we extract that kinetic energy through some converters to support the load and start our diesel engine, which then becomes the primary power source through to the system.

Tom Raftery: And you’ve got the diesel engine in fact built into the system here – it’s on the right hand side as we are looking at it. So there is a reason that you have your own diesel engine fitted in there.

Graham Evans: Yes. We are not tied to one particular diesel engine manufacturer, so what we do is design the complete system as a critical power solution. The thought really is, from a client point of view, that we can be flexible in terms of their requirements. We can size the engine to support the UPS load only, or maybe we can pick up some mechanical loads as well. We make some enhancements to the diesel, so we have our own diesel controller to start the diesel quickly. We have our own product we call GenSTART, which allows us to have a UPS-backed starter mechanism, so we can use that UPS power to start the engine.

Tom Raftery: And that’s important because the flywheel doesn’t stay up as long as, say, a battery bank.

Graham Evans: It’s important because the types of loads that we are supporting need that quick power restoration. From a UPS point of view we need to restore or keep power instantaneously – that’s the job of a UPS, a no-break power supply – but we also find with mechanical loads, certainly in high density datacenter environments, that we need to restore the short-break mechanical loads very quickly. The system you see here is able to do that. We continuously support the UPS load and we can bring on the cooling load ten seconds afterwards. So it’s a very fast starting, very robust system.

Tom Raftery: And the whole flywheel UPS idea is a relative newcomer to the datacenter environment?

Graham Evans: Not especially. I think it feels like that sometimes, but we have been around for 15 years as a business and we have 3,000-plus installations worldwide. Certainly we are not as commonplace as some other technologies, but we are probably one of the fastest growing companies globally. So, no, not brand new – 15 years in business – but yeah, the concept’s really taken off and it’s been really successful for us.

Tom Raftery: Cool. Graham, that’s been fantastic, thanks for coming on the show.

Graham Evans: No problem, thank you, cheers.

Power Usage Effectiveness (PUE) is a poor data center metric

Problems with PUE

Power Usage Effectiveness (PUE) is a widely used metric which is supposed to measure how efficient data centers are. It is the unit of data center efficiency regularly quoted by all the industry players (Facebook, Google, Microsoft, etc.).
However, despite its widespread usage, it is a very poor measure of data center energy efficiency or of a data center’s Green credentials.

Consider the example above (which I first saw espoused here) – in the first row, a typical data center has a total draw of 2MW of electricity for the entire facility, of which 1MW goes to the IT equipment (servers, storage and networking equipment). This results in a PUE of 2.0.

If the data center owner then goes on an efficiency drive and reduces the IT equipment energy draw by 0.25MW (by turning off old servers, virtualising, etc.), then the total draw drops to 1.75MW (ignoring any reduced requirement for cooling from the lower IT draw). This causes the PUE to increase to 2.33.

When lower PUEs are considered better (1.0 is the theoretical minimum, and best, value), this is a ludicrous situation.
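The arithmetic behind this perverse incentive is easy to check. Here is a minimal sketch in Python, using the megawatt figures from the example above expressed in kW:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    return total_facility_kw / it_kw

# Before the efficiency drive: 2MW total, of which 1MW goes to IT.
before = pue(2000, 1000)

# After turning off old servers: IT draw falls by 0.25MW, and the total
# falls by the same amount (ignoring any knock-on cooling savings).
after = pue(2000 - 250, 1000 - 250)

print(f"PUE before: {before:.2f}")  # 2.00
print(f"PUE after:  {after:.2f}")   # 2.33 -- less energy used, yet a worse PUE
```

The facility's total consumption fell by 12.5%, yet its headline metric got worse.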

Then consider that not only is PUE a poor indicator of a data center’s energy efficiency, it is also a terrible indicator of how Green a data center is, as Romonet’s Liam Newcombe points out.

Problems with PUE

Consider the example above – in the first row, a typical data center with a PUE of 1.5 uses an average energy supplier with a carbon intensity of 0.5kg CO2/kWh resulting in carbon emissions of 0.75kg CO2/kWh for the IT equipment.

Now look at the situation with a data center with a low PUE of 1.2 but sourcing energy from a supplier who burns a lot of coal, for example. Their carbon intensity of supply is 0.8kg CO2/kWh resulting in an IT equipment carbon intensity of 0.96kg CO2/kWh.

On the other hand look at the situation with a data center with a poor PUE of 3.0. If their energy supplier uses a lot of renewables (and/or nuclear) in their generation mix they could easily have a carbon intensity of 0.2kg CO2/kWh or lower. With 0.2 the IT equipment’s carbon emissions are 0.6kg CO2/kWh.

So the data center with the lowest PUE by a long shot has the highest carbon footprint, while the data center with the ridiculously high PUE of 3.0 has by far the lowest carbon footprint. And that takes no account of the water footprint of the data center (nuclear power has an enormous water footprint) or of its energy supplier.
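The three scenarios above reduce to a single multiplication: IT carbon intensity = PUE × the grid supplier's carbon intensity. A quick sketch:

```python
def it_carbon_intensity(pue: float, grid_kg_co2_per_kwh: float) -> float:
    """kg of CO2 emitted per kWh delivered to the IT equipment."""
    return pue * grid_kg_co2_per_kwh

# The three scenarios from the example above.
scenarios = [
    ("typical   (PUE 1.5, average grid)",    1.5, 0.5),
    ("efficient (PUE 1.2, coal-heavy grid)", 1.2, 0.8),
    ("poor      (PUE 3.0, low-carbon grid)", 3.0, 0.2),
]

for name, pue_value, grid in scenarios:
    print(f"{name}: {it_carbon_intensity(pue_value, grid):.2f} kg CO2/kWh")
# 0.75, 0.96 and 0.60 respectively -- the worst PUE yields the lowest footprint.
```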

The Green Grid is doing its best to address these deficiencies by coming up with other useful metrics such as Carbon Usage Effectiveness (CUE) and Water Usage Effectiveness (WUE).

Now, how to make these the standard measures for all data centers?

The images above are from the slides I used in the recent talk I gave on Cloud Computing’s Green Potential at a Green IT conference in Athens.

Data Center War Stories talks to the Green Grid EMEA Tech Chair Harqs (aka Harkeeret Singh)

And we’re back this week with another instalment in our Data Center War Stories series (sponsored by Sentilla).

In this episode I talked to the Green Grid’s EMEA Tech Chair, Harqs (also known as Harkeeret Singh).

The Green Grid recently published a study on the RoI of energy efficiency upgrades for data center cooling systems [PDF warning]. I asked Harqs to come on the show to discuss the practical implications of this study for data center practitioners.

Here’s the transcript of our conversation:

Tom Raftery: Hi everyone, welcome to GreenMonk TV, this is the Data Center War Stories series sponsored by Sentilla. The guest on the show today is Harkeeret Singh, aka Harqs. Harqs is the EMEA Tech Chair of the Green Grid. Harqs, welcome to the show.

Harqs: Thank you Tom.

Tom Raftery: So Harqs, as the Tech Chair of the Green Grid in the EMEA region, you get a larger overview of what goes on in data centers than some of the previous interviewees we’ve had on the show.

We have been talking about kind of the war stories, the issues that data centers have come across and ways they have resolved them. I imagine you would have some interesting stories to tell about what can go on in data centers globally. Do you want to talk a bit about some of that stuff?

Harqs: The Green Grid undertook a study which implemented a lot of the good practices that we all talk about in terms of improving air flow management, increasing temperature, and putting in variable frequency drives. And what we did after each initiative was measure the benefit from an energy consumption perspective.

And Kerry Hazelrigg from the Walt Disney Company led the study in a data center in the southeast of the US. We believe that it is representative of most data centers, probably pre-2007, so we think there is a lot of good knowledge in here which others can learn from.

So I am going to take you through some of the changes that were made and some of the expectations, but also some of the findings, some of which we weren’t expecting. There were five different initiatives. The first initiative was implementing the variable speed drives. They installed new CRAC units and CRAH units which had variable frequency drives as standard – there are 14 of them – and then they retrofitted 24 existing CRAHs, taking out the fixed speed drives and putting in the variable speed drives.

The expectation was we would find a reduction in energy consumption and fan horsepower, and also that there was potential to improve the delivery of cooling to the right place in the data center. And once we put those in, we found that they didn’t actually introduce any hotspots, which was a positive thing. But some things were a little different from what we expected: the PUE didn’t reflect the savings. That was because there were external factors, things like the external weather, which impacted the PUE figure as well. So you need to bear that in mind as you make changes – you need to look at the average across the year.

The other issue that we found was that by putting in variable speed drives they introduced harmonics to the power systems. That came through the monitoring tools, and so they put in filtering to help resolve those harmonics.

The last issue was also around skills, so they had to train the data center staff on using variable frequency drives and actually maintaining them. This was the biggest power saving – it was a third of the overall saving, and the saving in total was 9.1% of energy consumption. That saved something in the order of $300,000 in terms of real cash, and the PUE went down from 1.87 to 1.64 by doing these five initiatives.

The second initiative was actually putting in the air flow management – things like the blanking panels and the floor grommets, putting in the cold tiles where they are supposed to be, and that was for around 7 inch cabinets. The findings were that that reduced the cold aisle temperature, because you have less mixing, and also increased the temperature on the hot aisle in terms of the temperatures going back to the CRAH. So that was interesting.

We saw that as a key enabler to actually increasing temperature, so you have colder cold aisles and hotter hot aisles because of less mixing. There weren’t any energy savings for this piece in itself, but the airflow management activity is an enabler in that it allows you to then do some optimization and also to increase temperature without risk.

The third activity was relocating the sensors that the CRAHs worked off, away from the return to the CRAC and CRAH – which is what most data centers use today – to sensors on the front of the cabinets. So actually moving from return air to supply air, and that’s the specification that ASHRAE provides: that’s what we should be controlling, the temperature and the humidity of the air going into the servers. They themselves say they don’t really care about what the temperature is coming out of the back of the servers. Well, the rest of us do, from the point of view of making sure that it’s not too hot for our data center operators.

So what we did was move those sensors to the front of the cabinets, and what that did was optimize the fan speeds and actually start to raise the temperature of the cold air that was required by the servers. It did take them a little while getting the locations right – moving them around as much as necessary, using CFD to make sure they were optimizing and putting them in the right place eventually. That was a small improvement, but it was again another enabler for increasing temperature. So there was only a few percent improvement by doing that, but when you start to look at increasing temperature, you are increasing temperature at the right point.

Tom Raftery: So how much did it increase temperature by – was it like from 20 to 25, or…?

Harqs: That’s a good question, as the next initiative was increasing the temperature – so I was just about to get to that. They went from 18°C, which was their original set point, and took it up to 22°C. Now obviously that’s still in the middle of the ASHRAE standard, so there is still more scope there to become better. But it wasn’t just increasing the temperature in the room; it was actually increasing the temperature of the chiller plant, which is where the biggest savings were. If you increase the temperature of the room, that then allows you to increase the temperature of your chiller plant.

And they increased the set point of their chiller plant from 6.7°C to just under 8°C. What they found was that there were significant savings due to the reduction in compressor and condenser fan power. And – I’m going to do this in degrees F because they calculated in degrees F – they went from 44°F to 46°F. For every degree F they increased the set point of the chiller, they found that it reduced chiller energy consumption by just over 50 kilowatts.

Now in terms of other people’s data centers, your mileage may vary depending on the configuration and where you are, but that’s what their significant saving was. By doing it this way – putting the air flow management in place first, then increasing the temperature in the room and the set points of the chiller plant – they found that there was no significant impact on the data center in terms of hotspots or anything like that. So there was no detrimental impact to the data center by doing this. Obviously saving the energy was a positive and saved real money.
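To put the 50kW-per-degree figure in rough annual terms, here is my own back-of-the-envelope estimate (not from the Green Grid study). It assumes the saving applies continuously all year and uses a hypothetical electricity price of $0.10/kWh:

```python
# Back-of-the-envelope annualisation of the chiller set point saving.
kw_saved_per_deg_f = 50       # from the study: just over 50kW per degree F
degrees_raised = 2            # chiller set point raised from 44F to 46F
hours_per_year = 24 * 365
price_per_kwh = 0.10          # assumed electricity price, USD

kwh_saved = kw_saved_per_deg_f * degrees_raised * hours_per_year
dollars_saved = kwh_saved * price_per_kwh
print(f"~{kwh_saved:,} kWh/year, roughly ${dollars_saved:,.0f}/year")
```

Even under these rough assumptions, a two-degree chiller set point change is worth close to a million kWh a year in this facility.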

Tom Raftery: Alright, Harqs that was great, thanks a million for coming on the show.

Data Center War Stories – Maxim Samo from UBS

Back at the start of the summer I mentioned that Sentilla had asked us to run a series of chats with data center practitioners to talk about their day-to-day challenges.

This was very much a hands-off project from Sentilla – I sourced all the interviewees, conducted all the interviews and am publishing the results without any Sentilla input or oversight. This had advantages and disadvantages – obviously, from an independence perspective, this was perfect, but from a delivery point of view it was challenging. It turns out that data center practitioners are remarkably camera shy, so it took far longer than anticipated to produce this series. However, I’m finally ready with the first episode, with more to come every week in the coming weeks.

This first episode in the series is with Maxim Samo, who is responsible for managing the European data center operations of global financial services firm UBS.

Here’s a transcript of our conversation:

Tom Raftery: Hi everyone and welcome to GreenMonk’s Data Center War Stories series, sponsored by Sentilla. This week’s guest is Maxim Samo, who works for the Swiss financial services company UBS. Maxim, do you want to start off by telling us what you do for UBS and what the size of your datacenter fleet is?

Maxim Samo: Yeah, I run the Swiss and European operation for UBS. At the moment we have five datacenters in Switzerland and three outside of Switzerland spread around Europe, the total size or capacity probably being around six megawatts.

Tom Raftery: Okay, and what kind of age is the fleet? Is it, you know, from the last five years or 10, or – it’s obviously a variety; you didn’t build all eight in one go.

Maxim Samo: Right, they were built anywhere between 1980 and 2004. There are a couple of colos that are probably newer than that, but yeah.

Tom Raftery: So if they were built starting in 1980, I assume that this is one of the reasons why you think more in terms of power as opposed to space, because they weren’t optimized around power at that time, I’m sure.

Maxim Samo: Oh, not at all, exactly. They were built with a density of around 300 watts per square meter or even less, right – I mean, they were mainframe datacenters. We did some refurbishments in there, and as a matter of fact one of those datacenters is undergoing a major renovation right now to increase the amount of power that we can put in there.

Tom Raftery: Power is obviously one of the more pressing issues you guys are running up against, but what are the other kinds of issues you have in the datacenters in day-to-day operations?

Maxim Samo: So the way our datacenters are built, in Europe at least within UBS, is that we don’t have big data halls, but rather a number of smaller rooms within the datacenter building. And in order to be cost effective, we don’t have every single network available in all the rooms; we don’t have every single storage device and storage network – in terms of production storage or test and development storage – available in all the rooms.

So some of our constraints are around that: not only do we have to manage the capacity, but we have to figure out which rooms the servers go into, and then try to get adequate forecasts of how much the business and the developers want to put into which datacenter rooms, and try to juggle the capacity around that.

Tom Raftery: We are calling the show Datacenter War Stories. So, have you come across any interesting problems in the last number of years, or resolved any interesting issues that you hit up against?

Maxim Samo: So, in terms of war stories, one thing is we are going to have an interesting issue with switching the electrical network of the datacenter that is undergoing renovation, and we are currently looking at the options for how we can do that.

One option would be that we put both UPSs into utility bypass, run off the utility, and then switch over the network – where of course you have the risk of a power blip coming through, which takes down your datacenter. So, in order to mitigate that we are also talking about a full-scale shutdown of the datacenter, which right now is being received very well by the people involved, so that’s going to be an interesting one.

So we had, actually very recently, a very funny case. What we do is conduct black start tests – a black start test is when you almost, like, pull the plug and see what happens, right? So you literally cut off the utility network, your UPS will carry the power, the diesel generators will start, and you make sure everything works smoothly.

The last time we did this test – that was a week ago, on the weekend – when the diesel generators started they created so much smoke that a pedestrian out on the street actually called the fire department. We had the fire department come in, and a lot of people were panicking and asking what is going on, do we have a fire in the datacenters – like, no, we just tested our diesel generators. That was a very funny instance.

I can’t really remember a war story in terms of the datacenter going down; luckily that has not happened for a very long time. We did have a partial failure at one point, where a pretty big power switch within the switchgear failed and brought down one side of the power.

However, since most of our servers and IT equipment are dual power attached, it did not have any impact on production.

Tom Raftery: Great, that?s been fantastic. Max, thanks for coming on the show.

Maxim Samo: All right, thanks Tom.

Disclaimer – Maxim asked me to mention that any views he expressed in this video are his own, and not those of his employer, UBS AG.

Carbon Disclosure Project’s emissions reduction claims for cloud computing are flawed

data center

The Carbon Disclosure Project (CDP) is a not-for-profit organisation which takes in greenhouse gas emissions, water use and climate change strategy data from thousands of organisations globally. This data is voluntarily disclosed by these organisations and is CDP’s lifeblood.

Yesterday the CDP launched a new study, Cloud Computing – The IT Solution for the 21st Century, a very interesting report which

delves into the advantages and potential barriers to cloud computing adoption and gives insights from the multi-national firms that were interviewed

The study, produced by Verdantix, looks great on the surface. They have talked to 11 global firms that have been using cloud computing for over two years and they have lots of data on the financial savings made possible by cloud computing. There is even reference to other advantages of cloud computing – reduced time to market, capex to opex, flexibility, automation, etc.

However, when the report starts to reference the carbon reduction potential of cloud computing, it makes a fundamental error – one which is highlighted by CDP Executive Chair Paul Dickinson in the Foreword when he says

allowing companies to maximize performance, drive down costs, reduce inefficiency and minimize energy use – and therefore carbon emissions

[Emphasis added]

The mistake here is presuming a direct relationship between energy and carbon emissions. While this might seem like a logical assumption, it is not necessarily valid.

If I have a company whose energy retailer is selling me power generated primarily by nuclear or renewable sources for example, and I move my applications to a cloud provider whose power comes mostly from coal, then the move to cloud computing will increase, not decrease, my carbon emissions.

The report goes on to make some very aggressive claims about the carbon reduction potential of cloud computing. In the executive summary, it claims:

US businesses with annual revenues of more than $1 billion can cut CO2 emissions by 85.7 million metric tons annually by 2020


A typical food & beverage firm transitioning its human resources (HR) application from dedicated IT to a public cloud can reduce CO2 emissions by 30,000 metric tons over five years

But because these are founded on an invalid premise, the report could just as easily have claimed

US businesses with annual revenues of more than $1 billion can increase CO2 emissions by 85.7 million metric tons annually by 2020


A typical food & beverage firm transitioning its human resources (HR) application from dedicated IT to a public cloud can increase CO2 emissions by 30,000 metric tons over five years

This wouldn’t be an issue if the cloud computing providers disclosed their energy consumption and emissions information (something that the CDP should be agitating for anyway).

In fairness to the CDP, they do refer to this issue in a sidebar on a page of graphs when they say:

Two elements to be considered in evaluating the carbon impact of the cloud computing strategies of specific firms are the source of the energy being used to power the data center and energy efficiency efforts.

However, while this could be taken to imply that the CDP has taken data centers’ energy sources into account in its calculations, it has not. Instead it relies on models extrapolating from US datacenter PUE information [PDF] published by the EPA. Unfortunately, the PUE metric which the EPA used is itself controversial.

For a data-centric organisation like the CDP to come out with baseless claims of carbon reduction benefits from cloud computing may be at least partly explained by the fact that the expert interviews carried out for the report were with HP, IBM, AT&T and CloudApps – all of whom are cloud computing vendors.

The main problem though, is that cloud computing providers still don’t publish their energy and emissions data. This is an issue I have highlighted on this blog many times in the last three years and until cloud providers become fully transparent with their energy and emissions information, it won’t be possible to state definitively that cloud computing can help reduce greenhouse gas emissions.

Photo credit Tom Raftery

Learnings from Google’s European Data Center Summit

Google's EU Data Center Summit conference badge

I attended Google’s European Data Center Summit earlier this week and it was a superb event. The quality of the speakers was tremendous and the flow of useful information was non-stop.

The main takeaway from the event is that a considerable amount of energy is still being wasted by data centers – and that this is often easy to fix.

Some of the talks showed exotic ways to cool your data center. DeepGreen, for example, chose to site itself beside a deep lake so that it could use the lake’s cold water for much cheaper cooling. Others used river water, and Google mentioned their new facility in Finland where they are using seawater for cooling. Microsoft mentioned their Dublin facility, which uses air-side economisation (i.e. it just brings in air from outside the building) and so is completely chiller-less. This is a 300,000 sq ft facility.

IBM’s Dr Bruno Michel did remind us that it takes ten times more energy to move a compressible medium like air than it does to move a non-compressible one like water – but then, not all data centers have the luxury of a deep lake nearby!

Google's Joe Kava addressing the European Data Center Summit

Both Google and UBS, the global financial services company, gave hugely practical talks about simple steps to reduce your data center’s energy footprint.

Google’s Director of Operations, Joe Kava (pic on right) talked about a retrofit project where Google dropped the PUE of five of its existing data centers from 2.4 down to 1.5. They did this with an investment of $25k per data center and the project yielded annual savings of $67k each!
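Those figures imply a remarkably short payback period, which is worth checking with a quick calculation:

```python
# Payback period for Google's retrofit, using the per-data-center
# figures quoted in Joe Kava's talk.
investment = 25_000      # retrofit cost per data center, USD
annual_saving = 67_000   # annual savings per data center, USD

payback_months = investment / annual_saving * 12
print(f"Payback period: ~{payback_months:.1f} months")  # ~4.5 months
```

A retrofit that pays for itself in under five months is about as easy a business case as facilities work gets.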

What kind of steps did they take? They were all simple steps which didn’t incur any downtime.

The first step was to do lots of modelling of the airflow and temperatures in their facilities. With this as a baseline, they then went ahead and optimised the perforated tile layout. The next step was to get the server owners to buy into the new expanded ASHRAE limits – this allowed Google to nudge the setpoint for the CRACs up from the existing 22°C to 27°C, with significant savings accruing from the reduced cooling required by this step alone.

Further steps were to roll out cold aisle containment and movement sensitive lighting. The cold aisles were ‘sealed’ at the ends using Strip Doors (aka meat locker sheets). This was all quite low-tech, done with no downtime and again yielded impressive savings.

Google achieved further efficiencies by simply adding some intelligent rules to their CRACs so that they turned off when not needed and came on only if/when needed.

UBS’s Mark Eichenberger echoed a lot of this in his own presentation. UBS has a fleet of data centers globally whose average age is 10 years; some are as old as 30. Again, simple, non-intrusive steps like cold-aisle containment and movement-sensitive lighting are saving UBS 2m Swiss Francs annually.

Google’s Chris Malone had other tips. If you are at the design phase, try to minimise the number of AC/DC conversion steps in the electricity supply and look for energy-efficient UPSs.

Finally, for the larger data center owners, eBay’s Dean Nelson made a very interesting point. When he looked at all of eBay’s apps, he saw they were all in Tier 4 data centers. He realised that 80% of them could reside in Tier 2 data centers, and by moving them there he cut eBay’s opex and capex by 50%.

Having been a co-founder of the Cork Internet eXchange data center, it was great to hear the decisions we made back then around cold aisle containment and highly energy-efficient UPSs being vindicated.

Even better though was that so much of what was talked about at the summit was around relatively easy, but highly effective retrofits that can be done to existing data centers to make them far more energy efficient.

You should follow me on Twitter here
Photo credit Tom Raftery

Data center war stories sponsored blog series – help wanted!

Data center work

Sentilla are a client company of ours. They have asked me to start a discussion here around the day-to-day issues data center practitioners are coming up against.

This is a very hands-off project from their point of view.

The way I see it happening is that I’ll interview some DC practitioners, either via Skype video or over the phone; we’ll have a chat about DC stuff (war stories, day-to-day issues, that kind of thing), and I’ll record the conversations and publish them here along with transcriptions. They’ll be short discussions – simply because people rarely listen to or watch rich media longer than 10 minutes.

There will be no ads for Sentilla during the discussions, and no mention of them by me – apart from an intro and outro simply saying the recording was sponsored by Sentilla. Interviewees are free to mention any solution providers and there are no restrictions whatsoever on what we talk about.

If you are a data center practitioner and you’d like to be part of this blog series, or simply want to know more, feel free to leave a comment here, or drop me an email to [email protected]

Photo credit clayirving

Top 10 Data Center blogs

Data center air and water flows

Out of curiosity, I decided to see if I could make a list of the top 10 data center focussed blogs. I did a bit of searching and found about thirty blogs related to data centers (who knew they were so popular!). I went through the thirty blogs and eliminated them based on arbitrary criteria I made up on the spot – post frequency, off-topic posts, etc. – until I came up with a list I felt was the best. Then I counted them and lo! I had exactly 10 – phew, no need to eliminate any of the good ones!

So without further ado – and in no particular order, I present you with my Top 10 Data Center blogs:

What great data center blogs have I missed?

The chances are there are superb data center blogs out there which my extensive 15 seconds of research on the topic failed to uncover. If you know of any, feel free to leave them in the comments below.

Image credit Tom Raftery

Facebook open sources building an energy efficient data center

Facebook's new custom-built Prineville Data Centre

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first – as far as I know, no-one had done it before, and no-one since has replicated it. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing the bringing online of their new data centre, but they are open sourcing its design, specifications and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the Open Compute Project and they have released a fact sheet [PDF] with details on their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area of Oregon, 3,200 ft above sea level with mild temperatures) will mean they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling, to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the return warm air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs, each serving six racks (around 180 Facebook servers), Facebook were able to redesign the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces power losses in the utility-to-server chain from an industry average of 11-17% down to Prineville’s 2%.
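To put those loss figures in context, here is a rough sketch of how much power actually reaches the servers for a given utility feed (the 1 MW feed is purely illustrative; the loss percentages are the ones quoted above):

```python
# Power delivered to servers under different utility-to-server loss rates.
utility_feed_kw = 1_000  # illustrative 1 MW utility feed

scenarios = [
    ("industry best case (11% loss)", 0.11),
    ("industry worst case (17% loss)", 0.17),
    ("Prineville (2% loss)", 0.02),
]

for label, loss_fraction in scenarios:
    delivered_kw = utility_feed_kw * (1 - loss_fraction)
    print(f"{label}: {delivered_kw:,.0f} kW reaches the servers")
```

On a 1 MW feed, that is 90–150 kW of extra useful power compared to a typical facility – capacity recovered purely by removing conversion stages.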

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6°F (27°C) – the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run at 85°F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data center means they go from 99.999% uptime to 99.9999% uptime.
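That extra ‘nine’ is easier to appreciate as minutes of downtime per year (my arithmetic, not Facebook’s) – it is the difference between roughly five minutes and half a minute annually:

```python
# Convert uptime percentages into minutes of downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, uptime in [("five nines (99.999%)", 0.99999),
                      ("six nines (99.9999%)", 0.999999)]:
    downtime_min = (1 - uptime) * MINUTES_PER_YEAR
    print(f"{label}: {downtime_min:.2f} minutes of downtime per year")
```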

New Server design
Facebook also designed custom servers for their data centres. The servers contain no paint, logos, stickers, bezels or front panel. They are designed to be bare bones (using 22% fewer materials than a typical 1U server) and for ease of serviceability (snap-together parts instead of screws).

The servers are 1.5U tall to allow for larger heat sinks and larger (slower turning and consequently more efficient) 60mm fans. These fans only take 2-4% of the energy of the server, compared to 10-20% for typical servers. The heat sinks are all spread across the back of the motherboard so that none of them receives pre-heated air from another heat sink, reducing the work required of the fans.

The server power supply accepts both 277V AC power from the electrical distribution system and 48V DC from the UPS in the event of a utility power failure. These power supplies have a peak efficiency of 94.5% (compared to a more typical 90% for standard PSUs) and they connect directly to the motherboard, simplifying the design and reducing airflow impedance.
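That 4.5-point efficiency gain sounds small, but it nearly halves the waste heat each PSU dumps into the room. A quick sketch (the 300 W per-server load is my assumption for illustration; the efficiency figures are the ones quoted above):

```python
# Waste heat per server at different PSU efficiencies.
server_load_w = 300  # assumed DC-side load of one server (illustrative)

for label, efficiency in [("typical PSU (90%)", 0.90),
                          ("Open Compute PSU (94.5%)", 0.945)]:
    input_w = server_load_w / efficiency
    waste_w = input_w - server_load_w
    print(f"{label}: draws {input_w:.1f} W, wastes {waste_w:.1f} W as heat")
```

Multiplied across tens of thousands of servers, that halved PSU loss also halves the cooling load those losses create.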

Open Compute
Facebook relied heavily on open source in creating their site. Now, they say, they want to make sure the next generation of innovators don’t have to go through the same pain as Facebook in building out efficient data centre infrastructure. Consequently, Facebook is releasing all of the specification documentation which it gave to its suppliers for this project.

Some of the schematics and board layouts for the servers belong to the suppliers, so they are not currently being published. Facebook did say they are working with their suppliers to see if they will release them (or portions of them), but no agreement has been reached just yet.

Asked directly about their motivations for launching Open Compute, Facebook’s Jay Park came up with this classic reply:

… it would almost seem silly to do all this work and just keep it closed

Asking Facebook to unfriend coal
Greenpeace started a campaign to pressure Facebook into using more renewable energy, because Pacific Power, the utility supplying Facebook’s Prineville data center, generates almost 60% of its electricity from burning coal.

Greenpeace being Greenpeace, they created a very viral campaign – using the Facebook site itself, and the usual cadre of humorous videos – to pressure Facebook into sourcing its electricity from more renewable sources.

When we asked Facebook about this in our briefing, they did say that their data centre efforts are built around many more considerations than just the source of the energy that comes into the data centre. They went on to say they are impressed by Pacific Power’s commitment to moving towards renewable sources of energy (the utility is targeting 2,000MW of renewable capacity by 2013). And they concluded by contending that the efficiencies achieved in Prineville more than offset the coal which powers the site.

Facebook tell us this new custom data centre at Prineville has a PUE of 1.07, which is very impressive.
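For reference, PUE is simply total facility power divided by IT power, so a PUE of 1.07 means only 7% overhead goes on cooling, power distribution and everything else. A quick comparison (the 10 MW IT load is purely illustrative; the 1.5 figure is the post-retrofit PUE Google reported, quoted earlier, and is close to typical industry values):

```python
# Overhead power implied by different PUE values.
# PUE = total facility power / IT equipment power.
it_load_kw = 10_000  # illustrative 10 MW of IT load

for pue in (1.07, 1.5):
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw
    print(f"PUE {pue}: {total_kw:,.0f} kW total, {overhead_kw:,.0f} kW non-IT overhead")
```

At the same IT load, the 1.07 facility spends roughly a seventh of the overhead power that a 1.5 facility does.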

They have gone all out on innovating their data centre and the servers powering their hugely popular site. More than that though, they are launching the Open Compute Project giving away all the specs and vendor lists required to reproduce an equally efficient site. That is massively laudable.

It is unfortunate that their local utility has such a high proportion of coal in its generation mix, besmirching an otherwise great energy and sustainability win for Facebook. The good thing, though, is that as the utility adds to its portfolio of renewables, Facebook’s site will only get greener.

For more on this check out the discussions on Techmeme


Photo credit Facebook’s Chuck Goolsbee