Learnings from Google’s European Data Center Summit

Google's EU Data Center Summit conference badge

I attended Google’s European Data Center Summit earlier this week and it was a superb event. The quality of the speakers was tremendous and the flow of useful information was non-stop.

The main take-home from the event is that data centers still waste a considerable amount of energy – and that this is often easy to fix.

Some of the talks showed exotic ways to cool your data center. DeepGreen, for example, chose to situate itself beside a deep lake so that it could use the lake’s cold water for much cheaper cooling. Others used river water, and Google mentioned their new facility in Finland, where they are using seawater for cooling. Microsoft mentioned their Dublin facility, which uses air-side economisation (it simply brings in air from outside the building) and so is completely chiller-less. This is a 300,000 sq ft facility.

IBM’s Dr Bruno Michel did remind us that it takes ten times more energy to move a compressible medium like air than it does to move a non-compressible one like water – but then, not all data centers have the luxury of a deep lake nearby!

Google's Joe Kava addressing the European Data Center Summit

Both Google and UBS, the global financial services company, gave hugely practical talks about simple steps to reduce your data center’s energy footprint.

Google’s Director of Operations, Joe Kava (pic on right), talked about a retrofit project where Google dropped the PUE of five of its existing data centers from 2.4 down to 1.5. They did this with an investment of $25k per data center, and the project yielded annual savings of $67k each!
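To put those numbers in context, a quick back-of-the-envelope calculation (the figures are from the talk; the arithmetic and the 1 MW example load are mine) shows how fast a retrofit like that pays for itself, and what a PUE drop means for a fixed IT load:

```python
# Payback on the retrofit: $25k invested per data center,
# $67k saved per data center per year (figures from the talk).
retrofit_cost = 25_000       # $ per data center
annual_savings = 67_000      # $ per data center per year
payback_months = retrofit_cost / annual_savings * 12
print(f"Payback: {payback_months:.1f} months")  # ~4.5 months

# PUE is total facility power divided by IT power, so for a fixed
# IT load the facility's total draw scales with PUE.
def facility_power_kw(it_load_kw, pue):
    return it_load_kw * pue

before = facility_power_kw(1000, 2.4)  # hypothetical 1 MW IT load
after = facility_power_kw(1000, 1.5)
print(f"Overhead cut: {before - after:.0f} kW")  # 900 kW less to pay for
```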

What kind of steps did they take? They were all simple steps which didn’t incur any downtime.

The first step was extensive modelling of the airflow and temperatures in their facilities. With this as a baseline, they then went ahead and optimised the perforated tile layout! The next step was to get the server owners to buy into the new expanded ASHRAE limits – this allowed Google to nudge the setpoint for the CRACs up from the existing 22°C to 27°C, with significant savings accruing from the reduced cooling load from this step alone.

Further steps were to roll out cold aisle containment and movement sensitive lighting. The cold aisles were ‘sealed’ at the ends using Strip Doors (aka meat locker sheets). This was all quite low-tech, done with no downtime and again yielded impressive savings.

Google achieved further efficiencies by simply adding some intelligent rules to their CRACs so that they turned off when not needed and came on only if/when needed.
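The kind of rule involved can be sketched as simple hysteresis control – to be clear, this is a generic illustration, not Google’s actual logic, and the deadband value is invented:

```python
# Generic hysteresis rule for a CRAC unit: stay off until the zone
# exceeds the setpoint, then run until it is comfortably below it.
# The 27C setpoint is from the post; the 1.5C deadband is invented.
SETPOINT_C = 27.0
DEADBAND_C = 1.5

def crac_should_run(zone_temp_c, currently_running):
    if currently_running:
        # keep cooling until we are below setpoint minus the deadband
        return zone_temp_c > SETPOINT_C - DEADBAND_C
    # otherwise stay off until the zone actually warms past the setpoint
    return zone_temp_c > SETPOINT_C

print(crac_should_run(27.5, currently_running=False))  # True: too warm
print(crac_should_run(26.0, currently_running=True))   # True: keep going
print(crac_should_run(25.0, currently_running=True))   # False: done
```

The deadband stops the unit short-cycling on and off around the setpoint.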

UBS’ Mark Eichenberger echoed a lot of this in his own presentation. UBS has a fleet of data centers globally whose average age is 10 years, and some are as old as 30. Again, simple, non-intrusive steps like cold-aisle containment and movement-sensitive lighting are saving UBS 2m Swiss Francs annually.

Google’s Chris Malone had other tips. If you are at the design phase, try to minimise the number of AC/DC conversion steps in the power path, and look for energy-efficient UPSs.

Finally, for the larger data center owners, eBay’s Dean Nelson made a very interesting point. When he looked at all of eBay’s apps, he saw they were all in Tier 4 data centers. He realised that 80% of them could reside in Tier 2 data centers, and by moving them there he cut eBay’s opex and capex by 50%.

As a co-founder of the Cork Internet eXchange data center, it was great to hear the decisions we made back then around cold aisle containment and highly energy-efficient UPSs being vindicated.

Even better though was that so much of what was talked about at the summit was around relatively easy, but highly effective retrofits that can be done to existing data centers to make them far more energy efficient.

You should follow me on Twitter here
Photo credit Tom Raftery


Ad Infinitum’s InSite helping companies save energy


Continuing my series of chats with companies in the data center energy management space, I spoke recently to Philip Petersen, CEO of UK-based ad infinitum.

Their product, called InSite, like most of the others in this space I have spoken to, is a server-based product front-ended by a browser.

InSite pulls data directly from devices (power strips, distribution-board meters, temperature and humidity sensors) and stores it in a PostgreSQL database. Having an SQL database makes it much easier to integrate with other systems, both for pulling in data and for sharing information. This is handy when InSite is connected to a Building Management System (BMS): it allows organisations to see what proportion of a building’s power is going to keep the data center running, for example. And because InSite can poll servers directly, it can be used to calculate the cost of running server-based applications (such as Exchange, Notes, SQL Server, SAP, etc.).
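InSite’s schema isn’t public, so the table and column names below are invented, and sqlite stands in for PostgreSQL purely to keep the sketch self-contained – but it shows the kind of one-query answer an SQL store makes easy:

```python
import sqlite3

# Hypothetical readings table: one row per metered source with its
# latest power draw. sqlite is used here only so the sketch runs
# standalone; the same SQL works against PostgreSQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (source TEXT, kw REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)", [
    ("building_main", 400.0),  # building distribution-board meter
    ("dc_pdu_a", 90.0),        # data-center power strips / PDUs
    ("dc_pdu_b", 110.0),
])

# What share of the building's power keeps the data center running?
(dc_kw,) = db.execute(
    "SELECT SUM(kw) FROM readings WHERE source LIKE 'dc_%'").fetchone()
(bldg_kw,) = db.execute(
    "SELECT kw FROM readings WHERE source = 'building_main'").fetchone()
print(f"Data center share of building power: {dc_kw / bldg_kw:.0%}")  # 50%
```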

I asked Philip about automation and he said that while InSite has an inbuilt Automation Engine, it hasn’t been deployed because “no client that we have spoken to has wanted to do that yet”. Demand for automation will come, he said, but right now companies are looking for more basic stuff – they often just want to see what’s actually going on, so that they can decide on the best way to respond.

InSite’s target customers are your typical medium to large organisations (ones likely to have significant IT infrastructures) as well as co-lo operators. Unlike some of the other companies in this space though, Ad Infinitum were able to share some significant customer wins – Tiscali’s UK Business Services, Equinix and Cisco’s UK Engineering labs.

In fact, Cisco have published a case study on their website referencing this solution [PDF], describing how Cisco were able to achieve a 30% reduction in IT equipment power consumption and a 50% drop in their cooling costs!

It’s hard to argue with a significant customer win like that!


Photo credit JohnSeb


RF Code and their wireless environmental sensors coming to Europe

RF Code PDU Tag

To go along with the data center energy efficiency posts I have been writing in the last few weeks, I talked to RF Code earlier this week to find out what they have been up to recently. RF Code make wireless sensing devices which are proving quite popular lately in data centers.

I was speaking to Chad Riseling, RF Code’s VP of Worldwide Sales, and he told me that a large portion of their 2010 growth came from their wireless environmental monitoring solutions. RF Code has wireless tags to monitor humidity, temperature, leak detection, PDU and CDU power usage (for certain vendors, so far), (rack) door status and dry contact status.

The wireless sensors RF Code sell are roughly the size of a box of matches. They run off a battery rated to last around three years (which starts alerting you about low battery status three months before it is depleted), and they have a range of mountings, including a peel-and-stick option, to facilitate easy deployment almost anywhere in a data center.

If you are wondering why a wireless solution is such a big deal, well think about the wireless internet network in your own home and how that has changed how you browse the net. Now you can access it anywhere in your home. Similarly, wireless sensors in a data center don’t need any extra cables to be rolled out for deployment and so can be installed quickly and relatively ubiquitously.

According to Chad, in a small 3,000 square foot data center you could have up to 10,000 sensors being read by 3-4 readers, with the data handed off to the software stack, called Sensor Manager. Sensor Manager can be used to track the data or, if companies have already invested in BMS or service management software, the data can be integrated with that.

One nice touch from RF Code is that, in an homage to the Puppy Dog sales technique, they sell Starter Packs containing sensors, readers and management software (enough to get you going, in other words) for as little as $2,995. If you are happy with the starter pack, you can simply buy more tags, readers, etc. to build out your solution.

Yesterday RF Code announced that they are launching a European Channel Program to grow beyond their current, predominantly US-based, market. Cool.


Photo copyright RF Code


Sentilla thinks of data centers as data factories!

Data center

If you have been following this blog, you’ll know I have been profiling Data Center efficiency companies over the last few weeks. This week I take a look at Sentilla.

I talked to Sentilla’s CTO and co-founder, Joe Polastre, the other day and Joe told me that Sentilla came out of Berkeley where they had been looking at data analytics problems around large, potentially incomplete or inaccurate, streaming datasets. The challenge was how to turn that into a complete picture of what’s going on so people could make business decisions.

Sentilla takes an industrial manufacturing approach to data centers – in manufacturing you have power going in one side, and products and (often) waste heat coming out the other. In the same way, in data centers you have power going in one side and, coming out the other, the product (compute cycles) and waste heat. To optimise your data center you need to get the maximum compute (product) output with the minimum power in and the least waste heat generated. Sentilla thinks of data centers as data factories!

Unlike most of the data center people I have talked to, Sentilla don’t talk so much about energy savings. Instead they emphasise maximising performance – getting the most out of your existing data centers, your existing storage, your existing servers, your existing power supply. By far the greatest saving from deploying Sentilla, Joe claimed, is not the energy saving. That pales in comparison to the capital deferment gained from being able to delay the building of extra data center facilities by however many years, he said.

So how does Sentilla help?

Well, Sentilla analyses the energy profile of every asset in the data center, whether metered or not, and makes recommendations to improve the planning and management of data center operations. I highlighted the “whether metered or not” bit because this is an important differentiator for Sentilla – they have developed and patented what they call “virtual meters”. These are algorithms which look at the work a device is doing and, based on models Sentilla have built up, measurements they have done, and published benchmarks, compute how much power that equipment is using.
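Sentilla’s virtual-meter algorithms are patented and unpublished, but the simplest well-known model of server power is a linear interpolation between idle and peak draw against utilisation. A sketch of that generic model – not Sentilla’s actual method, and with invented example wattages:

```python
# Linear power model: a device draws its idle power plus a
# utilisation-proportional share of the spread to its peak draw.
# This is the generic published model, not Sentilla's patented one.
def virtual_meter_watts(utilisation, idle_w, peak_w):
    """Estimate draw for a device at `utilisation` between 0.0 and 1.0."""
    return idle_w + (peak_w - idle_w) * utilisation

# e.g. a legacy 1U server that idles at 200 W and peaks at 300 W,
# currently running at 30% utilisation:
print(virtual_meter_watts(0.3, idle_w=200, peak_w=300))  # 230.0
```

Note how high the idle floor is relative to peak on older hardware – which is exactly why the legacy, unmetered kit is where the biggest gains hide.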

The reason this is so important is that the most inefficient equipment in the data center is not the new stuff (which is likely to already be metered) but the legacy devices. These are the ones which need to be most carefully managed, and the ones where the greatest performance gains for the data center can be made. And because Sentilla can pull usage information from management databases like Tivoli, Sentilla doesn’t need to poll every piece of equipment in the data center (with the increased network traffic and data that would generate).

Also, because Sentilla has its virtual meters, it is a software-only product and can therefore be rolled out very quickly.

The other nice feature Sentilla has is that it can identify the energy utilisation of virtualised servers. This is important because, with the increasing ease of deployment of virtual servers, under-utilised VMs and VM clutter are starting to become issues for data centers. VM clutter isn’t just an issue for energy reasons – there are also implications for software licensing, maintenance and SLA requirements.

I asked Joe about whether Sentilla is a SaaS product and he said that while they have a SaaS version of the product, so far most of Sentilla’s clients prefer to keep their data in-house and they haven’t gone for the SaaS option.

Finally, I asked about pricing, and Joe said that Sentilla is priced on a subscription basis and, apparently, priced such that for any modest-sized data center, for every $1 you put into Sentilla you get $2 back. Put another way, Joe said, deploying Sentilla will generally mean that you reclaim around 18-20% of your power capacity.

Disclosure: Sentilla are a client (but this post is not part of their client engagement)


Photo credit The Planet


Power Assure automates the reduction of data center power consumption

Data centre

If you’ve been following this blog in the last couple of weeks you’ll have noticed that I have profiled a couple of data centre energy management companies – well, today it is the turn of Power Assure.

The last time I talked to Power Assure was two years ago, when they were still very early stage. At that time I talked to co-founder and CTO Clemens Pfeiffer; this time I spoke with Power Assure’s President and CEO, Brad Wurtz.

The spin that Power Assure put on their energy management software is that, not only do they offer their Dynamic Power Management solution, which provides realtime monitoring and analytics of power consumption across multiple sites, but their Dynamic Power Optimization application automatically reduces power consumption.

How does it do that?

Well, according to Brad, clients put an appliance in each of the data centres they are interested in optimising (Power Assure’s target customer base is large organisations with multiple data centres – government, financial services, healthcare, insurance, telcos, etc.). The appliance uses the management network to gather data – data may come from devices (servers, PDUs, UPSs, chillers, etc.) directly or, more frequently, from multiple existing databases (i.e. a Tivoli db, a BMS, an existing power monitoring system and/or an inventory system) – and performs data centre analytics on those data.

Data centre

The optimisation module links into existing systems management software to measure and track energy demand on a per-application basis in realtime. It then calculates the amount of compute capacity required to meet the service level agreements of that application and adds a little bit of headroom. From that compute requirement it knows the number of servers needed, so it communicates with the load balancer (or hypervisor, depending on the data centre’s infrastructure) and adjusts the size of the server pool to meet the required demand.

Servers removed from the pool can be either power-capped or put in sleep mode. As demand increases, the servers can be brought fully online and the load balancer re-balanced so the enlarged pool can meet the new level of demand. This is the opposite of the smart grid demand response concept – this is supply-side management: matching your energy supply to the demand for compute resources.
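The pool-sizing step can be sketched in a few lines – the demand figure, per-server capacity and headroom factor below are all illustrative, not Power Assure’s:

```python
import math

# From measured demand and per-server capacity, size the active pool,
# keeping "a little bit of headroom" as Brad described. All numbers
# here are invented for illustration.
def servers_needed(demand_rps, per_server_rps, headroom=1.2):
    """Servers that must stay in the load-balancer pool to meet SLAs."""
    return math.ceil(demand_rps * headroom / per_server_rps)

pool = servers_needed(demand_rps=9_000, per_server_rps=500)
print(pool)  # 22 servers stay live; the rest can sleep or be power-capped
```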

A partnership with Intel means that future versions will be able to turn off and on individual components or cores to more precisely control power usage.

The software is agentless and, interestingly, given the customer profile Brad outlined (pharmas, financial institutions, governments, etc.), it is SaaS-delivered – customers log in to view and manage their power consumption data.

The two case studies on their site make for interesting reading and show reductions in power consumption of 56% to 68%, which are not to be sneezed at.

The one client referred to in the call is NASA, with whom Power Assure are involved in a data centre consolidation program. Based on the work they have done with Power Assure, Brad informed me that NASA now expects to be able to consolidate its current 75 data centres significantly. That’ll make a fascinating case study!


Photo credit cbowns


JouleX Energy Manager – I’m impressed!

JouleX Energy Manager Dashboard

After mentioning in this post that JouleX had recently updated their Energy Manager product to version 2.5, and noting all the extra functionality that brought, I was curious to find out a little more about them.

I talked to Tim McCormick, JouleX’ VP Marketing and Sales and Mark Davidson, their Sustainability Officer.

I was intrigued to discover that the company was founded in 2009 by former execs from Internet Security Systems, a security company which had been bought by IBM.

Having moved on from ISS, instead of building another app to scan networks looking for threats and vulnerabilities, they created software to go out over the network and sniff out the power consumption information of all devices on the network! They call their solution JouleX Energy Manager (JEM).

What exactly does JEM do? Well, this is where it gets interesting.

JEM pulls in energy information from all kinds of devices. In the office environment it grabs energy data from computers, printers, VoIP phones, hubs, switches, access points, you name it – anything with an IP address. The same goes for the data centre environment. However, where it really starts to stand out is when it hooks into facilities’ machinery: JEM can grab energy utilisation figures from access control systems, PDUs, video cameras, CRACs, lighting, even HVAC systems.

Even more interestingly, JEM can harvest all this energy utilisation data without needing to install any software agents, or to deploy any smart IP devices/PDU’s or wireless sensors. Nor does it require any changes to be made to the network, or the security of the network. Quite an achievement.

So what does JEM do with all this information?

JouleX Mobile settings screen

Well, as you’d imagine, JEM has quite a comprehensive analytics engine which slices and dices that info by energy cost, energy usage, CO2e, device, manufacturer, date, time, location, business unit – any way you want to look at it. What-if analyses allow you to check out the savings from policies before rolling them out, and JEM can even calculate the energy ROI of new equipment versus legacy kit, allowing you to validate purchase decisions before buying.

Finally, JEM also has an events-based policy engine which looks at data feeds and implements policies based on events or thresholds. Combined with their JouleX Mobile phone app, the event could be: turn off all the devices in Tom’s office (printer, scanner, VoIP phone, computer, lights, wireless access point, etc.) when Tom is more than 500m from the building (using the phone’s inbuilt GPS), and turn them all back on when Tom returns.
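The geofence half of that rule is easy to sketch – the building coordinates below are made up, and this is my illustration of the threshold test rather than JouleX’s implementation:

```python
import math

# Haversine distance between the phone's GPS fix and the building;
# office devices stay on only while the phone is within 500 m.
# OFFICE coordinates are invented for the example.
def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

OFFICE = (51.8986, -8.4756)  # hypothetical building location
RADIUS_M = 500

def office_devices_on(phone_lat, phone_lon):
    return distance_m(phone_lat, phone_lon, *OFFICE) <= RADIUS_M
```

In practice the policy engine would consume the phone’s position as an event feed and fire the power-off/power-on actions when this test changes value.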

In data centres, under-utilised servers can have the power to their CPUs reduced, cutting their energy consumption (JouleX call this Load Adaptive Computing), and organisations can even take advantage of JouleX’ ability to interface with ADR and OpenADR to reduce energy use and sell the unused electricity back to their utility.

In the next version of JEM, JouleX will roll out Load Adaptive Networking – this will scale back the power utilisation of routers, switches and other networking equipment when they are not in use – an area which, to date, has been very poorly addressed.

JouleX have a very comprehensive application here. They have stellar customers and partners. I have a feeling this is a company we’ll be hearing a lot more about.

Photo credit Tom Raftery


Viridity’s new President and CEO Arun Oberoi speaks to GreenMonk

Viridity EnergyCheck Screen Shot

We all know data centres are massive consumers of energy, but just how much? European data centre consumption was 50 terawatt-hours (TWh) in 2008, according to a recent article in the Guardian. This will rise to 100TWh by 2020, roughly the same as the electricity consumption of Portugal.

I mentioned on here just before Christmas that data center energy management company Viridity had named Arun Oberoi as their new President and CEO. Arun has an impressive CV which is outlined in Viridity’s press release about the appointment.

I had the opportunity to chat with Arun recently and he talked to me about Viridity’s solutions.

Data centre with cold aisle containment

As Arun put it, the world has done a great job of mapping dependencies to IT Services in the Enterprise Management world but very little has been done so far on bridging the physical world (think power, space and cooling) to the logical world. These are resources which are becoming very expensive but whose ability to be measured and managed has been hampered by the traditional separation of roles between facilities and IT, for example.

Three areas Viridity can help companies with, according to Arun, are:

  1. Power and cost savings
  2. Sustainability – emissions reduction and
  3. Mapping physical to logical to ensure optimisation of resources and managing data centre physical constraints (which, unlike IT, can’t be virtualised!)

Viridity’s software takes data from many, often disparate, sources and provides analysis and trending information to allow managers to decide how best to reduce their electricity and space costs. The next version will have automation built in, to enable even greater savings!

In an ideal world this would mean that European data centre consumption might rise to only 60 terawatt-hours (TWh) by 2020, instead of the projected 100TWh. However, Parkinson’s Law teaches us that data centres expand to fill the power available to run them!

Photo credit Tom Raftery