post

Schneider Electric – focussed on making organisations more efficient

Schneider Influencer Summit

We were invited to attend this year’s Schneider Electric Influencer Summit and jumped at the chance. Why? Schneider Electric is a fascinating company with fingers in lots of pies, and we were keen to learn more.

Schneider Electric was founded in 1836, so the company is coming up on 180 years old. Schneider reported revenue of almost €23.5bn in 2013, of which €1.9bn was profit, and employs in the order of 152,000 people globally. So, not an insignificant organisation.

The Influencer Summit coincided with the opening of its Boston One campus, Schneider Electric’s new facility in Andover. This site is now Schneider’s main R&D lab, as well as its North American HQ. Situating its main R&D labs in its HQ says a lot about how Schneider views the importance of research and development. In fact, at the event Schneider EVP and North American CEO Laurent Vernerey reported that Schneider devotes 4-5% of sales to R&D annually.

At the influencer event, we discovered the breadth of Schneider’s portfolio went far beyond what we were aware of. Not only are they heavily involved in electrical automation, control and distribution systems, but they also help make highly energy efficient data centres (they bought APC back in 2007), they have building management solutions, a cybersecurity suite (developed especially for critical infrastructure), water management solutions, a smart cities business, a weather forecasting arm (with a staff of 80 meteorologists!), and a strong services division. See, fingers in lots of pies!

Schneider Electric, as its name suggests, was traditionally more of a hardware company, but with the move to the digitisation of infrastructure, that has changed fundamentally, and Schneider is now very much a software company as well as a hardware one. Of the 20,000 employees in North America, 1,200 are software engineers.

This digitisation of infrastructure is happening at an ever increasing pace, helped by the constantly falling price of electronics and sensors. If it costs a mere $2.50 to put an SoC on a piece of infrastructure, why wouldn’t you do it? Particularly when adding the SoC makes the device IP addressable. Now it can report back on its status in realtime. As Schneider CMO Chris Hummel said, “connected systems will fundamentally change everything”.
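As a rough, hypothetical sketch of what that realtime status reporting might look like, here is a minimal Python example of a connected device assembling a JSON status payload. The device ID and field names are made up for illustration; a real system would push this to a monitoring service rather than print it:

```python
import json
import time

def build_status_report(device_id: str, temperature_c: float, ok: bool) -> str:
    """Assemble the kind of JSON status payload an IP-addressable
    device might report back in realtime (field names illustrative)."""
    payload = {
        "device_id": device_id,
        "timestamp": int(time.time()),
        "temperature_c": temperature_c,
        "status": "ok" if ok else "fault",
    }
    return json.dumps(payload)

print(build_status_report("transformer-042", 41.5, True))
```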

Addressing potential security issues associated with making critical infrastructure IP addressable, Schneider said that connected devices are more secure than disconnected devices, because they can be monitored and everything that’s done to them can be tracked.

With that in mind, it is not surprising that Schneider is a member of the Industrial Internet Consortium.

While it is always instructive to hear a company’s executives talking about their organisation, it is always far more interesting to hear their customers speak. And this event didn’t disappoint on that score. The customer speaker in this case was Todd Isherwood, the Energy Efficiency and Alternative Energy project manager for the City of Boston. Todd discussed how the City of Boston, with 15,000 employees, 2,700 utility accounts and a $50m electricity spend was working with Schneider Electric on its journey to becoming a more sustainable city.

Boston launched its Greenovate Boston campaign and passed its Building Energy Reporting and Disclosure Ordinance (BERDO). This Ordinance requires Boston’s large- and medium-sized buildings to report their annual energy and water use to the City of Boston, after which the City makes the information publicly available. All of which will have helped Boston achieve its ranking as the most energy efficient city in the US.

The biggest takeaway from the event though, was that Schneider Electric is, at its core, hugely interested in helping organisations become more efficient. And seemingly for all the right reasons. That’s not something you can say about many companies. And because of that, we’ll be watching Schneider with great interest from here on out.

Disclosure – Schneider Electric paid my travel and accommodation expenses to attend this event.

post

Technology for Good – episode twenty one with Gary Barnett

Welcome to episode twenty one of the Technology for Good hangout. In this week’s episode we had industry analyst Gary Barnett as a guest on the show. As well as being a fellow industry analyst, Gary is an old friend, so we had a lot of fun discussing this week’s crop of stories. We had some connectivity issues at times during the hangout unfortunately, but that didn’t stop us from having a very interesting discussion about topics as diverse as climate, energy efficiency, and communications.

Here are the stories that we discussed in this week’s show:

Climate
World’s energy systems vulnerable to climate impacts, report warns
Peak Coal: Why the Industry’s Dominance May Soon Be Over

Efficiency

Cable TV boxes become 2nd biggest energy users in many homes
Microsoft Supercharges Bing Search With Programmable Chips

Cloud

Amazon strikes back, launching speedy solid-state block storage one day after Google

Communications

Google’s Balloon Internet Experiment, One Year Later
Facebook has built its own switch
Antitheft Technology Led to a Dip in iPhone Thefts in Some Cities, Police Say
Google and Microsoft add phone kill switch
Fire Phone against the world: can Amazon take on iOS and Android

Health

Ahead of Apple’s HealthKit, WebMD app now tracks health & fitness data from connected accessories

Transportation

Harley-Davidson’s First Electric Motorcycle Surprisingly Doesn’t Suck

Start-ups

Microsoft launches new startup accelerator in Redmond, focusing on home automation and the ‘Internet of Things’
Swiss-based encrypted email service, brought to you by CERN and MIT scientists.

post

Autodesk’s Farnborough office going for LEED certification

Autodesk UK recently moved offices to a facility in Farnborough. In their previous offices they had occupied several floors, so they set out to find offices where all their staff could be on the same floor, and yet have plenty of access to light. Also, they wanted to drastically reduce their footprint, so they took great care to make the office as green as possible (given that it was a retrofit, not a new build), and they have applied for LEED Gold certification for the office.

I visited with Autodesk in Farnborough last week and I was extremely impressed with the steps they have taken, as well as with the pride Autodesk rightfully show for the ongoing benefits of this project.

Some of the highlights:
The construction
94% of construction waste recycled/diverted from landfill
All energy/water consumption measured and monitored on site
The site was registered with the Considerate Constructors Scheme and achieved a score of 34 (85%)

Low energy lighting in Autodesk UK office

Materials
High percentage of FSC timber sourced
All new furniture contains high % recycled/recyclable content. Re-used old furniture items where possible and ensured all unused items were diverted from landfill (e.g. donated to charity)
All paints, sealants and adhesives have been sourced with a low Volatile Organic Compound (VOC) content, to minimise chemicals and maximise occupant well-being
Selected new materials with high recycled content
A high proportion of new materials have been manufactured within 500 miles

Office
Secure bicycle racks, lockers, shower and changing facilities are provided for cyclists
10% of parking spaces are allocated to car sharers
Water consumption has been reduced by at least 20% through the installation of water-efficient taps, shower fittings, WCs and urinals.
Occupancy sensors have been installed on the lights for more than 90% of the lighting load and daylight controls on more than 50% – meaning that lights are not left on or are lighting areas unnecessarily.
Air conditioning equipment has been zoned in order to provide control to suit requirements for solar exposure and ensuring employee comfort
Recycling facilities have been built into the layout to ensure recycling wherever possible
All new electrical appliances are Energy Star rated
Desks have been located to try and maximise natural daylight and external views

The company also has a 6-seater TelePresence suite to reduce the amount of business travel its employees need to do. And Autodesk facilitates employees who wish to work from home – so much so that around 50% of their staff take advantage of this – reducing Autodesk’s property footprint and the number of commute miles its workers undertake.

Autodesk’s Singapore office was awarded LEED Platinum certification earlier this year – with any luck when I’m in Singapore in November I’ll get a chance to check it out!

Full disclosure – Autodesk is not a GreenMonk client and the trip to visit Autodesk’s Farnborough facilities was undertaken entirely at GreenMonk’s expense.

Image credit Tom Raftery

post

Data Center War Stories talks to the Green Grid EMEA Tech Chair Harqs (aka Harkeeret Singh)

And we’re back this week with another instalment in our Data Center War Stories series (sponsored by Sentilla).

In this episode I talked to the Green Grid’s EMEA Tech Chair, Harqs (also known as Harkeeret Singh).

The Green Grid recently published a study on the RoI of energy efficiency upgrades for data center cooling systems [PDF warning]. I asked Harqs to come on the show to discuss the practical implications of this study for data center practitioners.

Here’s the transcript of our conversation:

Tom Raftery: Hi everyone, welcome to GreenMonk TV, this is the Data Centers War Stories series sponsored by Sentilla. The guest on the show today is Harkeeret Singh aka Harqs. And Harqs is the EMEA Tech Chair of Green Grid. Harqs welcome to the show.

Harqs: Thank you Tom.

Tom Raftery: So Harqs, as the Tech Chair of the Green Grid in the EMEA region, you get a larger overview of what goes on in data centers than some of the previous interviewees we’ve had on the show.

We have been talking about kind of the war stories, the issues that data centers have come across and ways they have resolved them. I imagine you would have some interesting stories to tell about what can go on in data centers globally. Do you want to talk a bit about some of that stuff?

Harqs: The Green Grid undertook a study which implemented a lot of the good practices that we all talk about in terms of improving air flow management, increasing temperature, and putting in variable frequency drives. And what we did is, after each initiative, we measured the benefit — in terms of — from an energy consumption perspective.

And Kerry Hazelrigg from the Walt Disney Company led the study in a data center in the southeast of the US. And we believe that is representative of most data centers, probably pre-2007, so we think there is a lot of good knowledge in here which others can learn from.

So I am going to take you through some of the changes that were made and some of the expectations, but also some of the findings, some of which we weren’t expecting. So starting with — there were five different initiatives; the first initiative was implementing the variable speed drives. They installed new CRAC and CRAH units which had variable frequency drives as standard (there were 14 of them), and then they retrofitted 24 existing CRAHs, taking out the fixed speed drives and putting in the variable speed drives.

The expectation was we would find a reduction in energy consumption and fan horsepower. And also there was the potential of improving the delivery of cooling to the right places in the data center. And once we put those in, we found out that they didn’t actually introduce any, any hotspots, which was a positive thing. But some of the things were a little different from what we expected: the PUE didn’t reflect the savings. That was because there were external factors, things like the external weather, which impacted the PUE figure as well. So you need to bear that in mind as you make changes. You need to look at the average across the year.

The other issue that we found was by putting in variable speed drives they found it introduced harmonics to the power systems. And that came through the monitoring tools and so they are putting — they put in filtering to help resolve those harmonics.

The last issue was also around skills, so they had to train the data center staff on using variable frequency drives and actually maintaining them. This was the biggest power saving; it was a third of the overall saving, and the saving in total was 9.1% of energy consumption. That saved something in the order of $300,000 in terms of real cash, and the PUE went down from 1.87 to 1.64 by doing these five initiatives.
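As a sanity check on those PUE figures: at a constant IT load, a drop from 1.87 to 1.64 corresponds to roughly a 12% cut in total facility energy. A quick Python sketch with an assumed, purely illustrative IT load; the reported 9.1% differs because, as noted in the interview, IT load and weather vary across the year:

```python
def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    # PUE = total facility energy / IT equipment energy
    return it_energy_kwh * pue

it_load = 1_000_000  # assumed annual IT energy in kWh (illustrative)
before = facility_energy_kwh(it_load, 1.87)
after = facility_energy_kwh(it_load, 1.64)
saving_pct = 100 * (before - after) / before
print(f"{saving_pct:.1f}% facility energy saved at constant IT load")  # 12.3%
```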

The second initiative was actually putting in the air flow management. So things like the blanking panels and the floor grommets, putting the cold tiles where they are supposed to be, and that was for around 7 inch cabinets. The findings were that that reduced the cold aisle temperature, because you have less mixing, and also increased the temperature on the hot aisle in terms of the temperatures going back to the CRAH. So that was interesting.

We saw that being a key enabler to actually increasing temperature, so you keep cold air in the cold aisles and hot air in the hot aisles without them mixing. There weren’t any energy savings for this piece in itself, but the airflow management activity is an enabler in that it allows you to then do some optimization and also to increase temperature without risk.

The third activity was relocating the sensors that the CRAHs worked off, away from the return to the CRAC and return to the CRAH, which is what most data centers use today, to actually aligning that to sensors on the front of the cabinets. So actually moving from return air to supply air, and that’s the specification that ASHRAE provides; that’s what we should be controlling, the temperature and the humidity of the air going into the servers. They themselves say they don’t really care about what the temperature is coming out of the back of the servers. Well, the rest of us do from a — making sure that it’s not too hot for our data center operators.

So what we did was move those sensors to the front of the cabinets, and what that did was optimize the fan speeds and actually start to raise the temperature of the cold air that was required by the servers. It did take them a little while getting the locations right, so moving them around as much as necessary and looking at the CFD to make sure they were optimizing and eventually putting them in the right place. And that was a small improvement, but it was the — again, another enabler for increasing temperature. So there was only a few percent improvement by doing that, but what it does is, when you start to look at increasing temperature, you are increasing temperature at the right point.

Tom Raftery: So how much did it increase temperature by — was it like from 20 to 25 or… —

Harqs: That’s a good question, as the next initiative they did was increasing the temperature, so I was just about to — so they went from 18°C, which was what their original set point was, and they took it up to 22°C. Now obviously that’s still in the middle of the ASHRAE standard, so there is still more scope there to become better. But it wasn’t just increasing the temperature in the room, it was actually increasing the temperature of the chiller plant which — where the biggest savings were; so if you increase the temperature of the room, that then allows you to increase the temperature of your chiller plant.

And that’s — they increased the set point of their chiller plant from 6.7°C to just under 8. And what they found was, there were significant savings due to the reduction in compressor and condenser fan power. And (I’m going to do this in degrees F because they calculated in degrees F) they went from 44 to 46°F. For every degree F they increased the set point of the chiller, they found that reduced just over 50 kilowatts of chiller energy consumption.

Now in terms of other people’s data centers, they are also — your mileage may vary depending on the configuration and where you are, but that’s what their significant saving was. By doing it this way, where they put the air flow management in place and then increased the temperature in the room and increased the set points of the chiller plant, they found that actually there was — that made no significant impact on the data center in terms of hot spots or anything like that. So there is no detrimental impact to the data center by doing this. Obviously the saving of the energy was a positive and saved real money.
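The chiller arithmetic above can be sketched in a few lines of Python. The 50kW-per-degree-F figure is the study’s own observation, and as Harqs says your mileage may vary; the annualised figure below additionally assumes the saving is sustained year-round, which is an optimistic simplification:

```python
def chiller_saving_kw(setpoint_before_f: float, setpoint_after_f: float,
                      kw_per_deg_f: float = 50.0) -> float:
    """Chiller power saved by raising the set point, at ~50 kW per degree F
    (the study's observed figure; configuration and climate dependent)."""
    return (setpoint_after_f - setpoint_before_f) * kw_per_deg_f

saved_kw = chiller_saving_kw(44, 46)  # the study's 2 degree F raise
annual_kwh = saved_kw * 24 * 365      # if sustained all year
print(saved_kw, annual_kwh)  # 100.0 876000.0
```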

Tom Raftery: Alright, Harqs that was great, thanks a million for coming on the show.

post

ArcelorMittal FCE roll out Organisational Risk Management software to unify maintenance processes

Organisational risk management (ORM) is the new hotness in the sustainability field. It is receiving increasing attention, as SAP’s Jeremiah Stone mentioned when I talked to him at SAP’s Sapphire/TechEd event last week. One assumes that it is receiving this increasing attention at least partly because that’s where the customer dollar is focussed right now.

What exactly is ORM? Organizational risk management is “risk management at the strategic level” according to the Software Engineering Institute – think of it as kind of an amalgam of the traditional Environment, Health and Safety (EH&S) and Governance, Risk and Compliance (GRC) sectors.

How do these fit into the sustainability agenda? Well, risk mitigation is all about reducing the risks of an adverse event occurring – one that either hurts people or the company reputation (or often both!). It does this by mandating procedures and processes with verifiable sign-offs. It also does this by scheduling maintenance and raising alerts when equipment goes out of tolerance. Properly scheduled maintenance of machinery not only ensures it runs more safely, but often also means it stays more fuel efficient. This can mean significant energy savings in organisations which use a lot of power.
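As a toy illustration of the out-of-tolerance alerting just described (the thresholds, reading and alert wording are all invented for the example):

```python
def check_tolerance(reading: float, low: float, high: float) -> str:
    """Compare a sensor reading against its tolerance band and return
    either "ok" or an alert prompting scheduled maintenance."""
    if low <= reading <= high:
        return "ok"
    return "alert: schedule maintenance"

# e.g. a pump bearing temperature with a 60-75 degree C tolerance band
print(check_tolerance(71.0, 60.0, 75.0))  # ok
print(check_tolerance(82.5, 60.0, 75.0))  # alert: schedule maintenance
```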

While at SAP’s TechEd/Sapphire last week I spoke with Edwin Heene, who works with ArcelorMittal and is responsible for the rollout of their ORM software solution. I had a quick chat with him to camera about the project and why ArcelorMittal embarked on it.

Here’s a transcript of our conversation:

Tom Raftery: Hi, everyone welcome to GreenMonk TV. We are at the SAP Sapphire event in Madrid, and with me I have Edwin Heene from ArcelorMittal. Edwin you have been involved in the organizational risk management project rollout or are involved in at the moment for ArcelorMittal. Can you tell us a little bit about that?

Edwin Heene: So in ArcelorMittal Flat Carbon Europe we are doing a global organizational standardization project in maintenance, including also the safety processes, and this is something that we do in several countries – namely eight countries in Flat Carbon Europe.

Tom Raftery: So maybe we should give it a bit of context first. Who are ArcelorMittal? You are a large steel company, but could you just give us a little bit of background about the company first?

Edwin Heene: ArcelorMittal is the largest steel producing company in the world, covering about 6% of annual world production. It has about 260,000 employees as of 2010. And I’ll speak about Flat Carbon Europe, because that’s the segment where I work in. It is covering eight different countries, and we have about 35 locations in Flat Carbon Europe.

Tom Raftery: Okay, so as I mentioned in the start you are in the middle of this organizational risk management software rollout, can you talk to us a little bit about that?

Edwin Heene: So in 2008 we selected the SAP solution to support us with this harmonization, and there we found out that there was a good supporting tool for operational risk management and safety processes, namely the PWCM – Work Clearance Management – solution.

Tom Raftery: Okay, and you brought SAP in to help you in a kind of collaborative role in developing the application for yourselves.

Edwin Heene: Yeah, because in Flat Carbon Europe we already had a number of plants that were at a good level of safety, managing the safety risks and so on in the operational part with some legacy systems. With the decision to go to one common system, the SAP system, we had to convince the other plants, which already had a good supporting IT tool, to move over to SAP.

And therefore we found out that there were still some gaps in the supporting SAP PWCM solution. So we had a number of meetings with Jeremiah Stone from SAP, who is leading co-innovation programs in SAP, and there we decided, in order to close these gaps and provide these functionalities in the standard SAP environment, to step into a co-innovation program with SAP.

Tom Raftery: Okay and why roll it out — what was the reason behind the rollout of the application?

Edwin Heene: The reason behind the harmonization and standardization program in Flat Carbon Europe is first of all to improve the maintenance processes, in effect implementing all the best practices that we have in several plants. You have a best practice in every single plant; absorb that into one common model, one common business model, and implement that in all the different plants. Throughout this implementation of best practices you have business results, operational results, in every single plant, benefiting from being in a large group and learning from each other, learning from the best practice of another group.

Tom Raftery: Excellent, Edwin, that’s been great, thanks a million.

Edwin Heene: Thank you.

post

Japan achieves its 15% energy reduction goal

Candle

I wrote a post a number of weeks back where I talked about how TEPCO were using realtime data to help manage energy demand in Japan. Towards the end of the piece I speculated on whether or not Japan would be able to maintain the effort through August – their hottest month.

You will remember that after the March earthquake, Japan had to shut down all but 15 of its 54 nuclear power plants. This forced the Japanese government to issue an order on July 1st obliging large-scale users of electricity (>500kW) to cut their consumption by 15%. They also asked households and small businesses to do likewise, but the cut was not legally binding on them.

Well, according to the New York Times, Japan made it through the month of August, and so successful were they that this month, ahead of schedule, the government lifted all restrictions on power use. This despite the nuclear power stations not being turned back on.

This is an amazing success story and goes to show how, when a people are properly motivated (in this case with a sense of national pride and unity), they can achieve the seemingly impossible.

The downside of this story is that in the absence of nuclear power, Japan is now burning far more fossil fuels to meet its energy requirements. Hopefully, they’ll transition away from fossil fuels and onto renewables to make up the shortfall in their generation needs.

Photo credit Tom Raftery

post

Carbon Disclosure Project’s emissions reduction claims for cloud computing are flawed

data center

The Carbon Disclosure Project (CDP) is a not-for-profit organisation which takes in greenhouse gas emissions, water use and climate change strategy data from thousands of organisations globally. This data is voluntarily disclosed by these organisations and is CDP’s lifeblood.

Yesterday the CDP launched a new study, Cloud Computing – The IT Solution for the 21st Century, a very interesting report which

delves into the advantages and potential barriers to cloud computing adoption and gives insights from the multi-national firms that were interviewed

The study, produced by Verdantix, looks great on the surface. They have talked to 11 global firms that have been using cloud computing for over two years and they have lots of data on the financial savings made possible by cloud computing. There is even reference to other advantages of cloud computing – reduced time to market, capex to opex, flexibility, automation, etc.

However, when the report starts to reference the carbon reductions potential of cloud computing it makes a fundamental error. One which is highlighted by CDP Executive Chair Paul Dickinson in the Foreword when he says

allowing companies to maximize performance, drive down costs, reduce inefficiency and minimize energy use – and therefore carbon emissions

[Emphasis added]

The mistake here is presuming a direct relationship between energy and carbon emissions. While this might seem like a logical assumption, it is not necessarily valid.

If I have a company whose energy retailer is selling me power generated primarily by nuclear or renewable sources for example, and I move my applications to a cloud provider whose power comes mostly from coal, then the move to cloud computing will increase, not decrease, my carbon emissions.
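A back-of-the-envelope example in Python makes the point; the energy figures and grid intensities below are invented for illustration, not drawn from the report:

```python
def emissions_kg(energy_kwh: float, kg_co2_per_kwh: float) -> float:
    """Carbon emissions are energy multiplied by the grid's carbon intensity."""
    return energy_kwh * kg_co2_per_kwh

# On-premise on a low-carbon grid vs a cloud provider on a coal-heavy grid
on_prem = emissions_kg(100_000, 0.05)  # 100,000 kWh at 0.05 kg CO2/kWh
cloud = emissions_kg(70_000, 0.9)      # 30% less energy, at 0.9 kg CO2/kWh
print(on_prem, cloud)  # 5000.0 63000.0 - emissions rise despite using less energy
```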

The report goes on to make some very aggressive claims about the carbon reduction potential of cloud computing. In the executive summary, it claims:

US businesses with annual revenues of more than $1 billion can cut CO2 emissions by 85.7 million metric tons annually by 2020

and

A typical food & beverage firm transitioning its human resources (HR) application from dedicated IT to a public cloud can reduce CO2 emissions by 30,000 metric tons over five years

But because these are founded on an invalid premise, the report could just as easily have claimed

US businesses with annual revenues of more than $1 billion can increase CO2 emissions by 85.7 million metric tons annually by 2020

and

A typical food & beverage firm transitioning its human resources (HR) application from dedicated IT to a public cloud can increase CO2 emissions by 30,000 metric tons over five years

This wouldn’t be an issue if the cloud computing providers disclosed their energy consumption and emissions information (something that the CDP should be agitating for anyway).

In fairness to the CDP, they do refer to this issue in a sidebar on a page of graphs when they say:

Two elements to be considered in evaluating the carbon impact of the cloud computing strategies of specific firms are the source of the energy being used to power the data center and energy efficiency efforts.

However, while this could be taken to imply that the CDP have taken data centers’ energy sources into account in their calculations, they have not. Instead they rely on models extrapolating from US datacenter PUE information [PDF] published by the EPA. Unfortunately, the PUE metric which the EPA used is itself controversial.

That a data-centric organisation like the CDP would come out with baseless claims of carbon reduction benefits from cloud computing may be at least partly explained by the fact that the expert interviews carried out for the report were with HP, IBM, AT&T and CloudApps – all of whom are cloud computing vendors.

The main problem though, is that cloud computing providers still don’t publish their energy and emissions data. This is an issue I have highlighted on this blog many times in the last three years and until cloud providers become fully transparent with their energy and emissions information, it won’t be possible to state definitively that cloud computing can help reduce greenhouse gas emissions.

Photo credit Tom Raftery

post

Computer storage systems rapidly taking on the energy efficiency challenge

In the video above, Dave Wright, founder and CEO of SolidFire, makes the point that, what with ARM-based servers, OpenCompute, etc., there have been a lot of breakthroughs recently on the computing side of servers to make them more efficient, but very little innovation has happened with storage systems. Predictably he’s gone after storage modernisation with his new company SolidFire, offering SSD-based enterprise storage solutions.

My laptop

One of the biggest advantages of SSDs as storage for servers is that they are incredibly fast, so you get an immediate performance win. I first found this when I changed my laptop to one with an SSD instead of a normal HDD. The drive is far faster, but because the SSD doesn’t generate heat, there is no requirement for a fan. This makes the laptop cooler (no laptop burn), quieter, and gives it a far longer battery life. Samsung affirmed this in a server situation when I talked to them earlier this year. Because SSDs don’t generate the heat that spinning drives do, they don’t need power-hungry fans to cool them, and the reduced power requirement and heat generation is a big win in a data centre environment.

SolidFire are far from being alone in this field. Just last week FlashSoft announced that they had secured $3m in first round funding to develop Flash virtualisation software for enterprises. They have nifty software which runs on servers with hybrid storage (some SSD and some HDD). Their software identifies regularly accessed data (hot data) and caches this in SSD, while moving less frequently accessed data to spinning disks. Having regularly accessed data in a cache on SSD greatly increases the performance of the storage.
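The hot/cold tiering idea can be sketched as a simple LRU-style cache in Python. To be clear, this is a toy illustration of the general technique, not FlashSoft’s actual algorithm:

```python
from collections import OrderedDict

class TieredStore:
    """Toy sketch of hybrid SSD/HDD tiering: recently accessed ("hot")
    blocks live in a fixed-size SSD cache, everything else stays on HDD."""

    def __init__(self, ssd_capacity: int):
        self.ssd = OrderedDict()  # hot tier, kept in LRU order
        self.hdd = {}             # cold tier, authoritative copy
        self.capacity = ssd_capacity

    def write(self, block: str, data: bytes):
        self.hdd[block] = data

    def read(self, block: str) -> bytes:
        if block in self.ssd:                 # SSD hit: fast path
            self.ssd.move_to_end(block)
        else:                                 # miss: promote from HDD
            self.ssd[block] = self.hdd[block]
            if len(self.ssd) > self.capacity:
                self.ssd.popitem(last=False)  # evict least recently used
        return self.ssd[block]

store = TieredStore(ssd_capacity=2)
for b in ("a", "b", "c"):
    store.write(b, b.encode())
store.read("a"); store.read("b"); store.read("a"); store.read("c")
print(sorted(store.ssd))  # ['a', 'c'] - the hottest blocks sit on SSD
```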

The hybrid model is one way of getting over the issue of the cost differential between HDDs and SSDs. SolidFire have a different approach – they don’t go for the hybrid model. Instead, their all-SSD model uses a combination of data compression, de-duplication and thin provisioning to reduce the amount of space required for storage.

A performance-enhancing tactic regularly employed with HDDs is to use only a small amount of the available space on the outside of the disk for your storage. The outside of the disk spins fastest, giving you faster read/write access. However, this is hugely inefficient, as most of the disk remains unused.

SolidFire do away with the need for any HDDs at all, making your storage far more efficient, while in FlashSoft’s hybrid model you can do away with the requirement for faster-spinning SAS drives and instead go for slower, cheaper SATA drives without taking a performance hit. Both solutions reduce your energy and cooling needs.

Then out of Japan comes news that in response to requirements for energy efficiency there (due to the earthquake earlier this year closing nuclear power plants), Nexsan have come up with new power managed storage systems with in-built MAID capable of supporting any combination of SATA/SAS/SSD drives. Because MAID allows disks to be spun down when not in use, Nexsan are claiming up to 85% savings in energy usage for its systems.

It is certainly true that SSDs have a shorter lifetime than HDDs, but even this has been given a boost with the recent announcement from IBM that their new Phase Change Memory (PCM) chips will be faster, cheaper and longer lasting than today’s SSDs.

So while Dave, above, feels that there isn’t much innovation happening in the efficiency of storage, I would respectfully differ and say these are very exciting times to be looking into storage energy efficiency!

Photo credit Tom Raftery

post

3 easy steps to see if your Cloud solution is energy efficient

Cloud

I’ve written a number of posts questioning whether Cloud Computing is Green or Energy Efficient, but to be a little more helpful, here is a simple test you can do to see if your Cloud Computing delivered applications are yielding energy efficiency gains for you:

  1. Have you moved some of your applications to a Cloud provider? – if “Yes” go to step 2 (if no, then cloud is not saving you energy)
  2. Do you know what the energy consumption* of that application was before moving it to the cloud? – if “Yes”, go on to step 3 (if no, then you have no way to tell if your Cloud solution is saving you energy)
  3. Do you know the energy consumption of your application after it has moved to the Cloud? – if “Yes”, subtract the figure from step 3 from the figure from step 2; if the answer is positive, then Cloud is saving you energy (if no, then you have no way to tell if your Cloud solution is saving you energy)

*Obviously, the units of energy consumption in steps 2 and 3 need to be the same for this to work. To make sure they are, try contacting your Cloud provider before moving your applications to the Cloud and asking them what their method for measuring energy consumption is. If they tell you (more than likely they won’t) you can match your measurement units in step 2 to theirs.
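The whole test boils down to one subtraction, with one important wrinkle: if either measurement is missing, there is simply no answer. A sketch of the check (the kWh figures in the example are hypothetical):

```python
# The three-step cloud energy-efficiency test as a function.
# Example figures are hypothetical, not from any real provider.

def cloud_energy_saving(before_kwh, after_kwh):
    """Energy saved by moving an application to the cloud.

    Both figures must be in the same units over the same period.
    Returns None if either measurement is missing: without both
    numbers, you have no way to tell.
    """
    if before_kwh is None or after_kwh is None:
        return None
    return before_kwh - after_kwh  # positive means the cloud is saving energy

print(cloud_energy_saving(1200, 900))   # 300 kWh saved
print(cloud_energy_saving(1200, None))  # None: provider published nothing
```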

Unfortunately, as Cloud Computing providers are not yet publishing energy consumption information, for now this only works as a thought experiment. However, with coming regulatory requirements around reporting of energy consumption, Cloud providers may be forced to reveal this information.

It is only when Cloud providers detail their energy consumption information that we will be able to say whether Cloud Computing is energy-efficient, or not.

Photo credit kevindooley

post

Facebook open sources building an energy efficient data center

Facebook's new custom-built Prineville Data Centre

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first: as far as I know, no-one had done it before and, to be honest, no-one has replicated it since. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing that their new data centre is online, they are also open sourcing its design and specifications, and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the OpenCompute project and they have released a fact sheet [PDF] with details on their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area, 3,200 ft above sea level with mild temperatures) means they will be able to take advantage of a lot of free cooling. Where they can't use free cooling, they will utilise evaporative cooling to cool the air circulating in the data centre room. This means they won't have any chillers on-site, which will be a significant saving in capital costs, maintenance and energy consumption. And in the winter, they plan to take the warm return air from the servers and use it to heat their offices!

By moving from centralised UPS plants to localised 48V UPSs each serving six racks (around 180 Facebook servers), Facebook were able to redesign the electricity supply system, doing away with some of the conversion steps and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This reduces power losses in the utility-to-server chain from an industry average of 11-17% down to Prineville's 2%.
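To put those loss figures in perspective, here is a rough comparison for an assumed 1MW utility draw (the 14% figure is simply the mid-point of the quoted industry range):

```python
# How much power actually reaches the servers after distribution
# losses. The 1MW draw and 14% mid-point are illustrative assumptions.

def delivered_power(utility_kw, loss_fraction):
    """Power reaching the servers after utility-to-server chain losses."""
    return utility_kw * (1 - loss_fraction)

utility_kw = 1000  # assumed 1 MW drawn from the utility
typical = delivered_power(utility_kw, 0.14)     # mid-point of 11-17%
prineville = delivered_power(utility_kw, 0.02)  # Facebook's reported 2%
print(round(typical), round(prineville))  # 860 980: 120kW less wasted
```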

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6F (27C) – the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run at 85F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data center means they go from 99.999% uptime to 99.9999% uptime.

New Server design
Facebook also designed custom servers for their data centres. The servers contain no paint, logos, stickers, bezels or front panel. They are designed to be bare bones (using 22% fewer materials than a typical 1U server) and for ease of serviceability (snap-together parts instead of screws).

The servers are 1.5U tall to allow for larger heat sinks and larger (slower-turning and consequently more efficient) 60mm fans. These fans take only 2-4% of the energy of the server, compared to 10-20% for typical servers. The heat sinks are all spread across the back of the motherboard, so none of them receives pre-heated air from another heat sink, reducing the work required of the fans.

The server power supply accepts both 277V AC power from the electrical distribution system and 48V DC from the UPS in the event of a utility power failure. These power supplies have a peak efficiency of 94.5% (compared to a more typical 90% for standard PSUs) and they connect directly to the motherboard, simplifying the design and reducing airflow impedance.

Open Compute
Facebook relied heavily on open source in creating their site. Now, they say, they want to make sure the next generation of innovators don’t have to go through the same pain as Facebook in building out efficient data centre infrastructure. Consequently, Facebook is releasing all of the specification documentation which it gave to its suppliers for this project.

Some of the schematics and board layouts for the servers belong to the suppliers, so they are not currently being published. Facebook did say they are working with their suppliers to see if they will release them (or portions of them), but no agreement has been reached just yet.

Asked directly about their motivations for launching Open Compute, Facebook's Jay Park came up with this classic reply:

… it would almost seem silly to do all this work and just keep it closed

Asking Facebook to unfriend coal
Greenpeace started a campaign to pressure Facebook into using more renewable energy sources, because Pacific Power, the energy supplier Facebook will be using for its Prineville data center, produces almost 60% of its electricity from burning coal.

Greenpeace being Greenpeace, they created a very viral campaign, using the Facebook site itself and the usual cadre of humorous videos etc., to pressure Facebook into sourcing its electricity from more renewable sources.

When we asked Facebook about this in our briefing, they did say that their data centre efforts are built around many more considerations than just the source of the energy coming into the data centre. They went on to maintain that they are impressed by Pacific Power's commitment to moving towards renewable sources of energy (the utility is targeting 2,000MW of power from renewables by 2013). And they concluded by contending that the efficiencies they have achieved in Prineville more than offset the coal used to power the site.

Conclusion
Facebook tell us this new custom data centre at Prineville has a PUE of 1.07, which is very impressive.
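For context, PUE (Power Usage Effectiveness) is total facility energy divided by the energy delivered to the IT equipment, so a PUE of 1.07 means only 7% overhead goes on cooling, power distribution and the rest. The figures below are illustrative, chosen to reproduce the reported number:

```python
# PUE = total facility energy / energy delivered to IT equipment.
# A perfect facility would score 1.0; industry averages at the time
# were widely reported around 1.5-2.0. Example figures are illustrative.

def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: lower is better, 1.0 is the floor."""
    return total_facility_kwh / it_kwh

print(round(pue(1070, 1000), 2))  # 1.07: 70kWh overhead per 1000kWh of IT load
```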

They have gone all out on innovating their data centre and the servers powering their hugely popular site. More than that though, they are launching the Open Compute Project giving away all the specs and vendor lists required to reproduce an equally efficient site. That is massively laudable.

It is unfortunate that their local utility has such a high proportion of coal in its generation mix, besmirching an otherwise great energy and sustainability win for Facebook. The good thing, though, is that as the utility adds to its portfolio of renewables, Facebook's site will only get greener.

For more on this check out the discussions on Techmeme

You should follow me on Twitter here

Photo credit FaceBook’s Chuck Goolsbee