
Ad Infinitum’s InSite helping companies save energy

Servers

Continuing my series of chats with companies in the data center energy management space, I spoke recently to Philip Petersen, CEO of UK-based Ad Infinitum.

Their product, InSite, is, like most of the others in this space I have spoken to, a server-based product front-ended by a browser.

InSite pulls data directly from devices (power strips, distribution board meters, temperature and humidity sensors) and stores it in a PostgreSQL database. Having an SQL database makes it that much easier to integrate with other systems, both for pulling in data and for sharing information. This is handy when InSite is connected to a Building Management System (BMS): it allows organisations to see, for example, what proportion of a building’s power is going to keep the Data Center running. And because InSite can poll servers directly, it can be used to calculate the cost of running server-based applications (such as Exchange, Notes, SQL Server, SAP, etc.).
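
To make the benefit of the SQL back end concrete, here is a minimal sketch of the kind of query such a store makes possible. The table and column names are hypothetical – InSite’s actual schema hasn’t been published – and the tariff is an assumption:

```python
# A minimal sketch of querying a PostgreSQL energy store with psycopg2.
# The power_readings table and its columns are hypothetical, as is the tariff.
import psycopg2

TARIFF_PER_KWH = 0.12  # assumed electricity price

conn = psycopg2.connect("dbname=insite user=readonly")
cur = conn.cursor()

# Average draw per device over the last 24 hours.
cur.execute("""
    SELECT device_id, AVG(watts) AS avg_watts
    FROM power_readings
    WHERE recorded_at > now() - interval '24 hours'
    GROUP BY device_id
""")

for device_id, avg_watts in cur.fetchall():
    kwh_per_day = float(avg_watts) * 24 / 1000.0
    print(f"{device_id}: {kwh_per_day:.1f} kWh/day, "
          f"costing ~{kwh_per_day * TARIFF_PER_KWH:.2f} per day")

cur.close()
conn.close()
```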

I asked Philip about automation and he said that while InSite has an inbuilt Automation Engine, it hasn’t been deployed because “no client that we have spoken to has wanted to do that yet”. Demand for automation will come, he said, but right now companies are looking for more basic things – they often just want to see what’s actually going on, so that they can decide on the best way to respond.

InSite’s target customers are your typical medium to large organisations (ones likely to have significant IT infrastructures) as well as co-lo operators. Unlike some of the other companies in this space though, Ad Infinitum were able to share some significant customer wins – Tiscali’s UK Business Services, Equinix and Cisco’s UK Engineering labs.

In fact, Cisco have published a Case Study on the Cisco.com website referencing this solution [PDF], describing how they were able to achieve a 30% reduction in IT equipment power consumption and a 50% drop in their cooling costs!

It’s hard to argue with a significant customer win like that!

You should follow me on Twitter here

Photo credit JohnSeb


Microsoft System Center Configuration Manager R3 has Power Management functionality built-in

Microsoft recently released a beta of the R3 version of its System Center Configuration Manager 2007.

Microsoft Corporate VP Brad Anderson introduced it at the Microsoft Management Summit 2010, saying:

The most significant change to the System Center Configuration Manager in R3 is the new power management set of strategies.

By way of background, Brad talked about how an increasing number of RFPs being received by Microsoft were requesting information on what Microsoft was doing to reduce its footprint. According to Brad, reducing your energy footprint is now a business imperative, not just a way of saving the company money.

System Center Configuration Manager 2007 R3 config screen

Microsoft’s System Center Configuration Manager allows systems administrators to centrally control all kinds of policies on client servers and PCs on a network. Everything from what appears in the Start menu right through to security management policies can be deployed using this software (aside – as a sysadmin of a small co. back in the early 00s I used the config manager to set people’s wallpaper on their PCs to an HTML version of the co. phone book!).

The ability to control the energy policies of client PCs is hugely important because that’s where the greatest number of CPUs is in most organisations. The Ford Motor Company, for example, recently announced that by rolling out 1E’s NightWatchman PC energy management application it was going to save

$1.2 million and reduce its carbon footprint by 16,000-25,000 metric tons annually

1E are a Microsoft partner, and their NightWatchman product goes significantly further with PC power management, according to Microsoft’s Rob Reynolds, Director of Product Planning for System Center, who briefed me on the new System Center Configuration Manager. Config manager will only put PCs into sleep mode, for example, whereas NightWatchman can shut them down completely, and NightWatchman has significant power management controls for XP clients which config manager is missing.

The new software gives you:

  • The ability to see and set how and where power is being used
  • The ability to see what your user activity looks like
  • A set of recommendations on policy to show you how to reduce your power consumption, and
  • Tracking and reporting on how much carbon you have prevented from being released as a result of your power management capabilities (a rough sketch of that arithmetic follows below)
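
That carbon figure is straightforward arithmetic over the kilowatt-hours avoided. Here is a rough, illustrative sketch; the fleet size, per-PC savings and grid emission factor are all assumptions, not Microsoft’s numbers:

```python
# Illustrative only: fleet size, per-PC savings and the grid emission
# factor below are assumptions, not Microsoft's numbers.
PCS = 10_000                 # managed desktop fleet (assumed)
WATTS_SAVED_PER_PC = 60      # draw avoided while a PC sleeps (assumed)
HOURS_ASLEEP_PER_DAY = 14    # nights and weekends, averaged (assumed)
KG_CO2_PER_KWH = 0.5         # typical grid emission factor (assumed)

kwh_per_year = PCS * WATTS_SAVED_PER_PC * HOURS_ASLEEP_PER_DAY * 365 / 1000
tonnes_co2 = kwh_per_year * KG_CO2_PER_KWH / 1000

print(f"{kwh_per_year:,.0f} kWh avoided -> ~{tonnes_co2:,.0f} tonnes CO2 per year")
```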

On the server front, Rob outlined a scenario where, based on reduced demand (overnight, say), virtual machines can be re-provisioned onto fewer hosts and some of the servers can then be put into a low-power state. Then, as demand picks up once more (the following morning), the servers in low-power mode can be woken back up and the virtual machines moved back onto them.
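
As a thought experiment, the overnight half of that scenario might look something like the sketch below. Everything in it (the Host class, the packing heuristic) is a hypothetical stand-in, not Configuration Manager’s actual interface:

```python
# Hypothetical sketch of the overnight consolidation step. The Host class
# and the packing heuristic are illustrative stand-ins, not Configuration
# Manager's actual interface.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float = 1.0                      # normalised host capacity
    vms: list = field(default_factory=list)    # (vm_name, load) pairs

    @property
    def load(self):
        return sum(load for _, load in self.vms)

def consolidate(hosts, headroom=0.8):
    """Migrate VMs off lightly loaded hosts onto busier ones where they
    fit, then return the hosts that are now empty and can be idled."""
    for donor in sorted(hosts, key=lambda h: h.load):
        for vm in list(donor.vms):
            _, load = vm
            # Only move "uphill": the target must end up busier than the
            # donor would be, which stops VMs ping-ponging between hosts.
            candidates = [h for h in hosts
                          if h is not donor
                          and h.load > donor.load - load
                          and h.load + load <= headroom * h.capacity]
            if not candidates:
                break          # this VM has nowhere to go; donor stays up
            target = max(candidates, key=lambda h: h.load)
            donor.vms.remove(vm)
            target.vms.append(vm)
    return [h for h in hosts if not h.vms]

hosts = [Host("a", vms=[("web1", 0.3)]),
         Host("b", vms=[("web2", 0.2)]),
         Host("c", vms=[("db1", 0.4)])]
for idle in consolidate(hosts):
    print(f"{idle.name}: empty, can be put into a low-power state")
# Come morning, run the reverse: wake the idle hosts and spread the
# VMs back out as demand picks up.
```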

While many products such as NightWatchman already exist with this functionality, having it built into Configuration Manager will now put it within easy reach of all Microsoft customers, and that can only be a good thing.


Greenmonk’s Inaugural Cool Award: Fujitsu Siemens


A couple of months ago I got the chance to chat with Fujitsu Siemens Computers’ chief technology officer (CTO), Dr Joseph Reger, who leads the company’s sustainability initiatives. We went over a fair amount of ground, but one thing that stuck with me was a new technology that came to the market last month – monitors that consume zero power when on standby. Let me just say that again – computer monitors that consume zero watts on standby. When the monitor is not in use, its DC power shuts down completely.
Anyone who has checked the power consumption of their electronic devices, using a Kill-a-Watt monitor, for example, knows just how greedy devices on standby can be (TVs and set-top boxes = bad news). And we have a lot of them in every home and office. According to FSC’s press release:

“Reducing European Union-wide power consumption through the adoption of electrical goods that use zero watts in standby mode would save an estimated 35 Terawatt hours per year according to the German Federal Institute for Materials Research and Testing (Bundesanstalt für Materialforschung) – while the EU Stand-by Initiative estimates that stand-by power accounts for about 10 percent of the electricity use in homes and offices of the EU Member States.”

In other words, standby power is a problem very much worth solving. This is innovation at work and I commend the engineers at FSC for their efforts. Now if they can just apply the same technology to every other device I use…
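
For a sense of scale at the household level, here is a quick back-of-envelope calculation; the wattage, device count and tariff are all assumptions for illustration:

```python
# Back-of-envelope standby waste for one household. All figures assumed.
STANDBY_WATTS = 5     # typical TV or set-top box on standby (assumed)
DEVICES = 6           # devices per household left on standby (assumed)
HOURS_PER_DAY = 20    # hours spent on standby rather than in use (assumed)
EUR_PER_KWH = 0.15    # assumed tariff

kwh_per_year = STANDBY_WATTS * DEVICES * HOURS_PER_DAY * 365 / 1000
print(f"~{kwh_per_year:.0f} kWh/year, about "
      f"{kwh_per_year * EUR_PER_KWH:.0f} euro per household per year")
```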

When I first heard about the SCENICVIEW ECO monitor, I thought it had to be worth an award. So I thought, why not award it? We need to work out what the Cool Award means (Greenmonk probably needs a logo for it, for example), but for now I would just like to say well done FSC – and congratulations. You are worthy winners of the first Greenmonk Cool Award for finding ways to lower global carbon emissions and energy consumption.


Data centers as energy exporters, not energy sinks!

Temp alert
Photo Credit alexmuse

I have been asking a question of various manufacturers recently and not getting a satisfactory answer.

The question I was asking was: why don’t we make heat-tolerant servers? My thinking was that if we had servers capable of working in temperatures of 40C, then data centers wouldn’t expend as much energy trying to cool their server rooms.

This is not as silly a notion as might first appear. I understand that semiconductor performance dips rapidly as temperature increases; however, if you had hyper-localised liquid cooling which ensured that the chip’s temperature stayed at 20C, say, then the rest of the server could safely be at a higher temperature, no?

When I asked Intel, their spokesperson Nick Knupffer responded by saying:

Your point is true – but exotic cooling solutions are also very expensive + you would still need AC anyway. We are putting a lot of work into reducing the power used by the chips in the 1st place, that equals less heat. For example, our quad-core Xeon chips go as low as 50W of TDP. That combined with better performance is the best way of driving costs down. Lower power + better performance = less heat and fewer servers required.

He then went on to explain about Intel’s new hafnium-infused high-k metal gate transistors:

It is the new material used to make our 45nm transistors – gate leakage is reduced 100 fold while delivering record transistor performance. It is part of the reason why we can deliver such energy-sipping high performance CPU’s.

At the end of the day – the only way of reducing the power bill is by making more energy efficient CPU’s. Even with exotic cooling – you still need to get rid of the heat somehow, and that is a cost.

He is half right! Sure, getting the chips’ power consumption down is important and will reduce a server’s heat output, but as a director of a data center I can tell you what will happen in this case: more servers will be squeezed into the same data center space, doing away with any potential reduction in data center power requirements. Parkinson’s law meets data centers!

No, if you want to take a big-picture approach, you reduce the power consumption of the chips and you then cool those chips directly with a hyper-localised solution, so the server room doesn’t need to be cooled. This way the cooling goes only where it is required.
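
To put rough numbers on that argument, the standard metric here is PUE (Power Usage Effectiveness): total facility power divided by IT power. The figures in the sketch below are assumptions, not measurements from any vendor mentioned in this post:

```python
# Illustrative PUE comparison. PUE = total facility power / IT power.
# The load and PUE figures are assumptions, not vendor measurements.
IT_LOAD_KW = 500          # total server (IT) load (assumed)
PUE_ROOM_COOLED = 2.0     # room air conditioning (assumed)
PUE_DIRECT_COOLED = 1.3   # hyper-localised liquid cooling (assumed)

def total_power(it_kw, pue):
    return it_kw * pue    # facility power implied by a given PUE

saving_kw = (total_power(IT_LOAD_KW, PUE_ROOM_COOLED)
             - total_power(IT_LOAD_KW, PUE_DIRECT_COOLED))
print(f"Overhead avoided: {saving_kw:.0f} kW")   # 350 kW on these assumptions
```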

IBM’s Steven Sams, Vice President of Global Site and Facilities Services, sent me a more positive answer:

We’ve actually deployed this in production systems in 3 different product announcements this year

New z series mainframes actually have a closed coolant loop inside the system that takes coolant to the chips to let us crank up the performance without causing the chips to slide off as the solder melts. New high performance unix servers system P….. actually put out 75,000 watts of heat per rack….. but again the systems are water cooled with redundant coolant distribution units at the bottom of the rack. The technology is pretty sophisticated and I’ve heard that each of these coolant distribution units has 5 X the capacity to dissipate heat of our last water-cooled mainframe in the 1980s. The cooling distribution unit for that system was about 2 meters wide by 2 meters high by about 1 meter deep. The new units are about 10 inches by 30 inches.

The new webhosting servers, iDataPlex, use Intel and AMD microprocessors and jam a lot of technology into a rack that is about double the width but half the depth. To ensure that this technology does not use all of the AC in a data center, the systems are installed with water-cooled rear door heat exchangers… i.e. a car radiator at the back of the rack. These devices actually take out 110% of the heat generated by the technology, so the outlet temp is actually cooler than the air that comes in the front. A recent study by a west coast technology leadership consortium, at a facility provided by Sun Microsystems, actually showed that this rear door heat exchanger technology is the most energy efficient of all the alternatives they evaluated with the help of the Lawrence Berkeley National Laboratory.

Now that is the kind of answer I was hoping for! If this kind of technology became widespread for servers, the vast majority of the energy data centers burn on air conditioning would no longer be needed.

However, according to the video below, which I found on YouTube, IBM are going way further than I had imagined. They announced their Hydro-Cluster Power 575 series supercomputers in April. They plan to allow data centers to capture the heat from the servers and export it as hot water for swimming pools, cooking, hot showers, etc.

This is how all servers should be plumbed.
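
To get a feel for the scale of what a data center could export, here is a rough calculation using the 75,000 watts per rack figure from the IBM quote above; the temperature rise is an assumption:

```python
# How much hot water could one 75 kW rack produce? The rack figure is
# from the IBM quote above; the temperature rise is assumed.
RACK_HEAT_KW = 75             # heat output per rack (from the quote)
HOURS = 24
SPECIFIC_HEAT_KJ = 4.186      # kJ to raise 1 kg of water by 1 degree C
TEMP_RISE_C = 30              # e.g. mains water at 10C heated to 40C (assumed)

heat_kj_per_day = RACK_HEAT_KW * HOURS * 3600       # kW -> kJ over a day
litres_per_day = heat_kj_per_day / (SPECIFIC_HEAT_KJ * TEMP_RISE_C)
print(f"~{litres_per_day:,.0f} litres of hot water per rack per day")
```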

Tremendous – data centers as energy exporters, not energy sinks. I love it.