
If utilities don’t step up their customer communications, they risk their considerable smart grid investments

Smart meter

Smart grids don’t come cheap.

They are typically projects costing in the order of hundreds of millions of dollars (or euros, or pounds, or whatever your currency of choice). Just think: the most fundamental piece of the smart grid, the smart meter, alone costs in the order of $100. Factor in the costs of installation, etc., and you are looking at over $200 per smart meter. So if you have around one million customers, it’s going to cost you roughly $200m just for the smart meter rollout.
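To put numbers on that, here is the back-of-the-envelope arithmetic (a sketch using the rough per-meter figures above, not quoted prices):

```python
# Back-of-the-envelope smart meter rollout cost.
# Illustrative figures from the estimates above, not quoted prices.
meter_cost = 100        # smart meter hardware alone, USD (rough estimate)
installed_cost = 200    # hardware plus installation etc., USD per meter
customers = 1_000_000   # a mid-sized utility

print(f"Hardware alone: ${meter_cost * customers / 1e6:,.0f}m")      # -> $100m
print(f"Installed cost: ${installed_cost * customers / 1e6:,.0f}m")  # -> $200m
```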

Given that they are so costly to implement, you’d think utility companies would do everything possible to protect these projects from failure – not so, according to the latest smart grid research from Oracle.

For the report, Oracle surveyed 150 North American C-level utility executives about their vision and priorities for smart grids over the next ten years. The findings are both interesting and disturbing.

It is interesting, but not too surprising, that when asked to select their top two smart grid priorities for the next ten years, the executives put improving service reliability (45%) and implementing smart metering (41%) at the top of the list.

What is worrying though is that while 71% of utilities say securing customer buy-in is key to successful smart grid roll-outs, only 43% say they are educating their customers on the value proposition of smart grids. This is hugely problematic because, as I have written previously, customer push-back can go a long way towards derailing smart grid projects.

And those who are educating their customers, how are they doing it?

Well, according to the report, 76% of utilities use postal communications to reach their customers, and 72% use their own website. Only 20% use social media (and who knows how well those 20% are using their social media channels).

Tellingly, the report also mentions that only 38% of utility customers take advantage of energy conservation programs when they are made available. There are a number of reasons for this:

  1. the savings from these programs often require work on the part of the customer for no immediately visible benefit
  2. the savings are typically small (or, put another way, energy is still too cheap) and
  3. because of the extremely poor job utility companies have done on communications to date, their customers don’t trust them, or their motivations. There is no quick fix for this. It will take time, and a significant improvement in how utility companies converse with their customers, before they start to be trusted

I have written lots of times over the years about the need for utilities to improve their communications.

Utilities have a lot of work to do rolling out their smart grids – but if they don’t step up their customer communications, they risk their considerable smart grid investments.

Photo credit Tom Raftery


Facebook open sources the building of an energy-efficient data centre

Facebook's new custom-built Prineville Data Centre

Back in 2006 I was a co-founder of a data centre in Cork called Cork Internet eXchange. When building it out, we decided to design and build it as a hyper energy-efficient data centre. At the time I was also heavily involved in social media, so I had the crazy idea: if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first: as far as I know, no-one had done it before and, to be honest, no-one has replicated it since. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing that their new data centre is coming online, they are also open sourcing its design and specifications, and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the Open Compute Project and they have released a fact sheet [PDF] with details on their new data centre and server design.

I received a pre-briefing from Facebook yesterday, where they explained the innovations that went into making their data centre so efficient – and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area of Oregon, 3,200 ft above sea level with mild temperatures) will mean they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the return warm air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs, each serving six racks (around 180 Facebook servers), Facebook were able to redesign the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces power losses in the utility-to-server chain from an industry average of 11-17% down to Prineville’s 2%.
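To see what that loss difference means in practice, a quick sketch (the feed size is a made-up example of mine; the loss percentages are the figures from the briefing):

```python
# Power lost between the utility feed and the servers, for a
# hypothetical 10 MW utility feed (example figure, not Facebook's).
feed_kw = 10_000

for label, loss in [("industry average, worst", 0.17),
                    ("industry average, best", 0.11),
                    ("Prineville", 0.02)]:
    lost = feed_kw * loss
    print(f"{label}: {feed_kw - lost:,.0f} kW delivered, {lost:,.0f} kW lost")
```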

Finally, Facebook have significantly increased the operating temperature of the data centre to 80.6F (27C) – the upper limit of the ASHRAE standards. They also confided that they expect to run their next data centre, currently being constructed in North Carolina, at 85F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data centre means they go from 99.999% uptime to 99.9999% uptime.
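For context, here is what those two uptime figures translate into as allowable downtime per year:

```python
# What "five nines" vs "six nines" means in downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

for label, availability in [("99.999%", 0.99999), ("99.9999%", 0.999999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} uptime -> {downtime:.1f} minutes of downtime per year")
# 99.999%  -> ~5.3 minutes/year
# 99.9999% -> ~0.5 minutes/year (about 32 seconds)
```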

New server design
Facebook also designed custom servers for their data centres. The servers contain no paint, logos, stickers, bezels or front panel. They are designed to be bare bones (using 22% fewer materials than a typical 1U server) and for ease of serviceability (snap-together parts instead of screws).

The servers are 1.5U tall to allow for larger heat sinks and larger (slower turning and consequently more efficient) 60mm fans. These fans take only 2-4% of the energy of the server, compared to 10-20% for typical servers. The heat sinks are all spread across the back of the motherboard so that none of them receives pre-heated air from another heat sink, reducing the work required of the fans.

The server power supply accepts both 277V AC power from the electrical distribution system and 48V DC from the UPS in the event of a utility power failure. These power supplies have a peak efficiency of 94.5% (compared to a more typical 90% for standard PSUs) and they connect directly to the motherboard, simplifying the design and reducing airflow impedance.
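A quick sketch of what that 4.5-point efficiency gain is worth per server (the 450W wall draw is a hypothetical figure of mine, not a published Facebook number):

```python
# Heat wasted in the power supply at each efficiency, for a
# hypothetical server drawing 450 W at the wall (made-up figure;
# the efficiencies are the ones quoted above).
input_w = 450

for label, eff in [("typical PSU", 0.90), ("Open Compute PSU", 0.945)]:
    print(f"{label}: {input_w * (1 - eff):.0f} W lost as heat")
# ~45 W vs ~25 W: about 20 W saved per server, before counting the
# cooling energy no longer needed to remove that heat.
```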

Open Compute
Facebook relied heavily on open source in creating their site. Now, they say, they want to make sure the next generation of innovators don’t have to go through the same pain as Facebook in building out efficient data centre infrastructure. Consequently, Facebook is releasing all of the specification documentation which it gave to its suppliers for this project.

Some of the schematics and board layouts for the servers belong to the suppliers, so they are not being published for now. Facebook did say they are working with their suppliers to see if they will release them (or portions of them), but they haven’t reached agreement on this just yet.

Asked directly about their motivations for launching Open Compute, Facebook’s Jay Park came up with this classic reply:

… it would almost seem silly to do all this work and just keep it closed

Asking Facebook to unfriend coal
Greenpeace started a campaign to pressure Facebook into using more renewable energy, because Pacific Power, the utility supplying Facebook’s Prineville data centre, generates almost 60% of its electricity from burning coal.

Greenpeace being Greenpeace, they created a very viral campaign, using the Facebook site itself and the usual cadre of humorous videos, to pressure Facebook into sourcing its electricity from more renewable sources.

When we asked Facebook about this in our briefing, they did say that their data centre efforts are built around many more considerations than just the source of the energy coming into the data centre. They went on to say that they are impressed by Pacific Power’s commitment to moving towards renewable sources of energy (the utility is targeting 2,000MW of renewable capacity by 2013). And they concluded by contending that the efficiencies they have achieved in Prineville more than offset the coal which powers the site.

Conclusion
Facebook tell us this new custom data centre at Prineville has a PUE of 1.07, which is very impressive.
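For anyone unfamiliar with the metric: PUE is total facility energy divided by the energy actually delivered to the IT equipment, so a PUE of 1.07 means only around 7% overhead. A small sketch for comparison (the 2.0 comparison point is roughly what industry surveys of the time reported as an average, not a figure from Facebook):

```python
def pue_overhead(pue: float) -> float:
    """Fraction of total facility energy that does NOT reach the IT kit.
    PUE = total facility energy / IT equipment energy."""
    return 1 - 1 / pue

print(f"Prineville, PUE 1.07: {pue_overhead(1.07):.1%} overhead")  # ~6.5%
print(f"Typical DC, PUE 2.0:  {pue_overhead(2.0):.1%} overhead")   # ~50% (assumed average)
```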

They have gone all out on innovating their data centre and the servers powering their hugely popular site. More than that though, they are launching the Open Compute Project giving away all the specs and vendor lists required to reproduce an equally efficient site. That is massively laudable.

It is unfortunate that their local utility has such a high proportion of coal in its generation mix, besmirching an otherwise great energy and sustainability win for Facebook. The good thing, though, is that as the utility adds to its portfolio of renewables, Facebook’s site will only get greener.

For more on this, check out the discussions on Techmeme

You should follow me on Twitter here

Photo credit Facebook’s Chuck Goolsbee


Cloud Energy Consumption: Google, Twitter and the Systems Vendors

Yesterday Tom posed a question: just how green is cloud computing? We have been frankly disappointed by cloud computing providers’ reticence to start publishing numbers on energy consumption. We know for sure that energy is a big deal when it comes to the huge data centers the likes of Facebook are building: these firms are siting data centers next to rivers to take advantage of hydro-electric power and, in Google’s case, even looking at building their own wind farms.

Some of you may remember the huge fuss when Alex Wissner-Gross, a researcher at Harvard University, estimated how much energy the net consumed – an estimate which became a Sunday Times story framing Google searches in terms of kettles boiled. The story claimed:

performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle – or about 7g of CO2 per search

Perhaps surprisingly, Google responded to debunk the story:

In terms of greenhouse gases, one Google search is equivalent to about 0.2 grams of CO2.
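The two figures are easy enough to sanity-check yourself. A rough sketch (the grid carbon intensity is my assumption, roughly a typical grid average of the time; the physics constants are standard):

```python
# Sanity-checking the kettle comparison against Google's 0.2 g figure.
water_kg = 1.0           # a full 1-litre kettle
specific_heat = 4186     # J/(kg.K) for water
delta_t = 80             # 20C tap water -> 100C boil
grid_g_per_kwh = 500     # assumed g CO2 per kWh of electricity

kwh = water_kg * specific_heat * delta_t / 3.6e6  # ~0.093 kWh
kettle_g = kwh * grid_g_per_kwh                   # ~47 g CO2

print(f"One kettle boil: ~{kettle_g:.0f} g CO2")
print(f"Searches per boil at 0.2 g each: ~{kettle_g / 0.2:.0f}")  # ~230, not 2
```

On Google’s number, a single boiled kettle equates to a couple of hundred searches, not two.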

The story petered out, which is something of a shame. A real, open debate, with shared figures, bringing in all of the main players, would clearly benefit us all. With that in mind I was pleased to see Raffi Krikorian, tech lead of the Twitter API team, choose to talk about power per tweet at the company’s Chirp developer conference last week:

In summary, Raffi estimated that the energy consumed is around 100 joules per tweet.

Before jumping to the conclusion that Twitter is more efficient than Google, it’s important to note that Raffi’s estimates, unlike Google’s, don’t include the power of the PC in the equation. You should also watch the video of his presentation – for the simple reason that Raffi seems to channel Jay-Z in his presenting: the guy’s body language is straight out of a hip hop video.
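That caveat aside, Raffi’s figure is worth converting into more familiar units (the carbon intensity is again an assumption on my part):

```python
# Converting Raffi's 100 joules per tweet into kWh and grams of CO2.
joules_per_tweet = 100
kwh_per_tweet = joules_per_tweet / 3.6e6   # 1 kWh = 3.6 MJ
grid_g_per_kwh = 500                       # assumed g CO2 per kWh

print(f"{1 / kwh_per_tweet:,.0f} tweets per kWh")                # 36,000
print(f"~{kwh_per_tweet * grid_g_per_kwh:.3f} g CO2 per tweet")  # ~0.014 g
# Server side only - remember, the PC's share isn't counted here.
```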

I discussed Twitter’s “disclosure” with my colleague Tom this morning. He questioned its value because it’s an estimate rather than a measurement. He has a point. It may be, however, that Raffi is just the man to take this debate to the next level. He is clearly deeply technical, can think at the level of the isolated API and is, finally, a sustainability advocate of note: I first heard of him through his seminal How Valentine’s Day Causes Global Warming riff.

We need to encourage competition on the basis of power efficiency.

I’d like to close with a call to action. Surely it’s time for the major web players to get together with Dell, HP and IBM to agree standards so we can move from estimates to measurements of cloud energy consumption, perhaps using AMEE ($client) as a back end for standard benchmarks. You can’t have sustainability through obscurity. Open data is key to working through the toughest environmental challenges.


Should Facebook’s investors be worried that the site is sourcing energy for its new data center from coal?

Mountain-top removal

Photo credit The Sierra Club

Should Facebook’s investors be worried that the site is sourcing energy for its new data center primarily from coal-fired power?

Facebook is the fourth largest web property (by unique visitor count) and well on its way to becoming the third. It is valued in excess of $10 billion, and its investors include Russian investment company DST, Accel Partners, Greylock Partners, Meritech Capital and Microsoft.

Facebook announced last month that it would be locating its first data center in Prineville, Oregon. The data center looks to be all-singing, all-dancing on the efficiency front and is expected to have a Power Usage Effectiveness (PUE) rating of 1.15. So far so good.

However, it soon emerged that Facebook are purchasing the electricity for their data center from Pacific Power, a utility owned by PacifiCorp, whose primary power-generation fuel is coal!

Sourcing power from a company whose generation comes principally from coal is a very risky business and if there is anything that investors shy away from, it is risk!

Why is it risky?

Coal has significant negative environmental effects, from its mining through to its burning to generate electricity: contaminating waterways, destroying ecosystems, generating hundreds of millions of tons of waste products (including fly ash, bottom ash and flue gas desulfurisation sludge containing mercury, uranium, thorium, arsenic and other heavy metals) and emitting significant amounts of radiation.

And let’s not forget that coal burning is the largest contributor to the human-made increase of CO2 in the air [PDF].

The US EPA recently ruled that:

current and projected concentrations of the six key well-mixed greenhouse gases–carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6)–in the atmosphere threaten the public health and welfare of current and future generations.

Note the wording: “the public health and welfare of current and future generations”

Who knows what regulations the EPA will introduce in the coming months and years to control CO2 emissions from coal-fired power plants – or the knock-on effects these will have on costs.

Now think back to the litigation associated with asbestos – the longest and most expensive tort in US history. When you then note that climate change litigation is gaining ground daily, the decision to go with coal as a primary power source starts to look decidedly shaky.

Then Greenpeace decided to wade in with a campaign and Facebook page to shame Facebook into reversing this decision. Not good for the company’s image at all.

Finally, when you factor in the recent revolts by investors in Shell and BP over decisions likely to land those companies in hot water down the road for pollution, the investors in Facebook should be asking some serious questions right about now.