
Spectacular HomeCamp feedback!


Home Camp – What the community says from chris dalby on Vimeo.

HomeCamp was the first of what I hope will be a series of unconferences around energy hacking or, as they say on the website:

Home Camp is an unconference about using technology to monitor and automate the home for greener resource use and to save costs

The first HomeCamp was held in London last Saturday, November 29th, and based on Andrew Whitehouse’s write-up and Chris Dalby’s live videos, the day was a phenomenal success.

The video above also gives some flavour of what delegates took away from the day.

I’m really sorry I couldn’t make it along, but I do hope to make the next one, which will be in March ’09.

[Disclosure: RedMonk were sponsors of HomeCamp]


GreenMonk talks Sustainability with IBM’s Stan Litow

IBM

Photo Credit ChicagoEye

[audio:http://media.libsyn.com/media/redmonk/StanLitowPodcast.mp3]

My guest on this podcast is Stan Litow. Stan is IBM’s VP for Corporate Affairs and Corporate Citizenship.

IBM recently issued their 2008 Corporate Responsibility Report. It is an extremely interesting, very comprehensive overview of IBM’s work in this space. You can download the entire report here (PDF warning!).

Having gone through the report, I was interested to discuss it with Stan. He graciously agreed to come on the show and gave a fascinating look at some of the thinking behind IBM’s initiatives in this space.

Download the entire interview here
(20.3mb mp3)


The network is the computer

RJ45 ethernet connector
Photo Credit Olivander

I see a news item on CNET this morning about IBM which says:

IBM is set to debut a technology at the VMWorld conference in Las Vegas that executives say reduces storage costs by up to 80 percent.

That’s a pretty big claim, and if it were anyone else I’d have a hard time believing it, but IBM bring a lot of credibility, as well as resources, to the table when making an announcement like this.

The CNET story goes on to quote the IBM release (which is, as yet, not available on the IBM News site):

Based on an algorithm developed by IBM Research, VSO [Virtual Storage Optimizer] dramatically reduces the large physical storage requirements associated with storing virtual images. The solution also allows organizations to streamline operations by creating new desktop images in mere seconds or minutes, a process which previously could take up to 30 minutes – a 75% reduction in the time required to create and deploy new virtual machines. This represents a tremendous operational savings for clients, and allows them to realize more immediate returns on their investments.

If this is the case, then the main barriers to virtualization (cost and complexity) are no more, and we may well see a move to massively more efficient computing, which can only be a good thing!
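
IBM haven’t published how VSO works under the hood, but the usual way to shrink the storage footprint of many near-identical virtual machine images is block-level deduplication: store each unique block once and treat every image as a list of references to blocks. Here is a toy sketch of that general idea in Python; it is purely illustrative and is not IBM’s implementation.

```python
# Toy block-level deduplication for virtual machine images.
# Purely illustrative of the general idea; this is not IBM's VSO algorithm.
import hashlib
import os

BLOCK_SIZE = 4096  # bytes per block

class DedupStore:
    def __init__(self):
        self.blocks = {}  # sha256 digest -> block bytes, each unique block stored once

    def add_image(self, image_bytes: bytes) -> list[str]:
        """Store an image; return the list of block digests that reconstruct it."""
        recipe = []
        for i in range(0, len(image_bytes), BLOCK_SIZE):
            block = image_bytes[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # identical blocks are stored only once
            recipe.append(digest)
        return recipe

store = DedupStore()
base_image = os.urandom(BLOCK_SIZE * 100)                  # a 100-block "golden" image
clone = base_image[:-BLOCK_SIZE] + os.urandom(BLOCK_SIZE)  # a clone differing in one block

store.add_image(base_image)
store.add_image(clone)

# Two 100-block images, but only 101 unique blocks actually stored:
print(len(store.blocks))
```

On that model, provisioning a new desktop image is mostly a matter of writing a new recipe that points at blocks which already exist, which would go some way towards explaining the seconds-versus-30-minutes provisioning times quoted above.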

John Gage, one of the founders of Sun, coined the phrase “The network is the computer” and it sure looks like that vision is now coming to pass.


IBM’s Vik Chandra on how software can help reduce your carbon footprint

IBM Green Data Center in Second Life
The IBM Green Data Center in Second Life

[audio:http://media.libsyn.com/media/redmonk/IBM-VikChandraPodcast.mp3]

Episode 2 of the GreenMonk Podcasts – 27 mins 27 secs

My guest on this podcast is IBM’s Vik Chandra. According to IBM, Vik

is currently responsible for Market Management and Strategy for IBM software offerings that enable organizations to reduce their energy consumption and environmental impact. IBM’s software group offers middleware from its Tivoli, Rational, WebSphere, Lotus and Information Management brands.

I was interested to know how Vik felt software could help companies reduce their carbon footprint, so I invited him to come on the show to discuss this and also to answer questions I solicited from readers of this site.

Here are the questions I asked Vik, and the approximate times at which I asked them:

It is easy to see how more efficient hardware can help reduce a company’s energy use, but how is software helping companies reduce their carbon footprint? – 00:20

Demand response – the ability to have devices adjust their settings dynamically in response to pricing signals from utilities and the like – has recently been gaining a lot of attention. Is this something IBM are looking into? – 03:23

Questions from readers:

Chris Dalby
Are there any plans to expand the Current Cost craze that has hit Hursley? With rising energy and utility costs in general, are there plans to help companies intelligently manage and automate their energy infrastructure using MQTT? – 05:57


Alan in Belfast
As CPU/core speeds increase, software has become more and more processor-hungry, driving up heat, fan use, power draw, etc. Energy-efficient machines – even Eee PC 1000s! – start to alter the processor speed to keep power demands down. Are IBM serious about de-bloating their software to make it more lightweight? And do they have any feel for whether that could make a 1% difference or a 20% difference to desktop/laptop/server power usage? – 08:14

Is it more efficient to build features into hardware or software? A lot of the enterprise monitoring software that gets installed to instrument PCs/servers runs continuously. Would it be better to make lighter hardware modules to do the same? Will there come a day when a Linux-on-a-chip (or similar) is embedded in PCs/servers as a more energy-efficient way of performing these tasks? (Bring back the PIC chip!) – 10:28

Jim Spath
We’re moving toward more virtualization, currently running IBM AIX on Power5 LPARs, starting to run virtual CPUs, memory, storage and I/O. What are the limiting factors for software licensing in such a landscape? It seems we save money on hardware but pay more for software that could run in different frames.
I think Linux is a partial answer, but there are corporate concerns with having multiple OS images, not to mention uneasiness about GNU and BSD license models. – 14:23

Jim Hughes
I see plenty of power management software going into desktop and laptop PCs (clock slowing, fans that run only when necessary etc.), but precious little into servers.

As many enterprises appear to be shuffling ever more equipment into noisy, overheating server rooms, surely power (and noise) management should be a big issue here.

Are IBM ignoring servers because they’re hidden away from all but the long-suffering sys admins? – 17:01

Ed Gemmell
Of the $1 billion IBM said they would invest in Green IT, how much has already been invested (can we see it in the financials?) and how much of that has been in software? What do you have to show for the $1 billion so far? – 21:31

Uldis Bojārs
It would be interesting to learn more about IBM’s experience and the lessons learned from enterprise use of new social media and collaboration tools such as microblogging and 3D virtual worlds. – 25:58

Download the entire interview here
(25.1mb mp3)


Build carbon software efficiently (practice what you preach!)

motion gears -team force
Photo Credit ralphbijker

I have been having some very interesting conversations with people in the carbon software sector these last couple of weeks.

The first was with Michael Meehan of Carbonetworks (which I blogged about here), and we discussed their offering, which is a “carbon strategy platform”. From my blog post about Carbonetworks:

The app at its most basic helps companies understand what their carbon footprint is, and then helps the companies translate that into a financial bottom line. The app helps companies see what options they have to reduce their carbon footprint and helps them create a carbon strategy from a managerial perspective on how to proceed in the carbon market.

Then I talked to Stefan Guertzgen, Marketing Director for Chemicals, and Franz Hero, VP of the Chemical Industry Business Unit, both at SAP. They were talking about the SAP Environmental Compliance application which, in their words:

enables companies to gather information on the use of energy, in all its forms, throughout the enterprise, identify areas for energy reduction, monitor the implementation of energy excellence projects, and make the results available throughout the enterprise

Earlier this week I was talking to Kevin Leahy, a director in IBM’s IT Optimization Business Unit, about IBM’s House of Carbon, for which they have also developed carbon reporting software for their client base.

Finally, yesterday I was speaking to Gavin Starks, founder and CEO of AMEE. We have talked about AMEE several times before on this blog. AMEE is an open-source, neutral platform for

measuring the Energy Consumption of everything… aggregates “official” energy metrics, conversion factors and CO2 data from over 150 countries… is a common platform for profiling and transactions (there’s a transaction engine at the core of AMEE)

Noticing a common thread here? Guys, stop re-inventing the wheel.

IBM and SAP (and anyone else thinking of embarking on carbon software): STOP NOW! It has already been done, and done well, by companies with open APIs (and open data, in AMEE’s case).

Get on the phone to Carbonetworks and AMEE and, instead of building another carbon app, use their already comprehensive infrastructures and APIs to get a jump-start and bring best-of-breed carbon software to market efficiently!
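
To make the ‘use their APIs’ argument concrete, the consuming side of such an integration looks roughly like this: fetch an emission factor from the carbon data service and apply it to your own metered consumption, rather than curating the data yourself. The endpoint, parameters and response format below are hypothetical placeholders for illustration; they are not AMEE’s or Carbonetworks’ actual APIs.

```python
# Sketch of consuming a carbon-data API rather than rebuilding one.
# The URL, parameters and response shape are hypothetical placeholders,
# not the actual AMEE or Carbonetworks APIs.
import json
from urllib import parse, request

def fetch_emission_factor(category: str, country: str) -> float:
    """Fetch a kg-CO2-per-kWh emission factor from a (hypothetical) carbon data service."""
    query = parse.urlencode({"category": category, "country": country})
    url = f"https://api.example-carbon-service.com/v1/factors?{query}"
    with request.urlopen(url) as resp:
        return json.load(resp)["kg_co2_per_kwh"]

def footprint_kg_co2(kwh_consumed: float, factor_kg_per_kwh: float) -> float:
    """Apply the emission factor to metered electricity consumption."""
    return kwh_consumed * factor_kg_per_kwh

# fetch_emission_factor targets a placeholder URL, so demonstrate the local step
# with an assumed factor: 12,000 kWh at an assumed 0.5 kg CO2/kWh is 6,000 kg CO2.
print(footprint_kg_co2(12_000, 0.5))
```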


Any questions for Vik Chandra?

Questions
Photo Credit oberazzi (Tim O’Brien)

We have started a podcast series here on GreenMonk. As part of the process, when I can, I will be posting ahead of time who I will be interviewing. This will give readers an opportunity to have me put questions on their behalf during the podcast.

The first such interview will take place next Wednesday, August 13th, and the interviewee will be IBM’s Vik Chandra. According to IBM, Vik

is currently responsible for Market Management and Strategy for IBM software offerings that enable organizations to reduce their energy consumption and environmental impact. IBM’s software group offers middleware from its Tivoli, Rational, WebSphere, Lotus and Information Management brands. Core capabilities include service management from Tivoli, application servers and runtime infrastructure from WebSphere, database, information management and business intelligence from Information Management, collaboration from Lotus and software development and delivery from Rational.

We will be discussing ways in which IBM software can be used by companies to reduce their carbon footprint.

If you have any questions/suggestions you’d like me to put to Vik in the podcast, please leave them in a comment to this post or email them to [email protected] before Wed August 13th at 2pm GMT.


Supercomputers can be Green – who knew?

ibm supercomputer
Photo Credit gimpbully

According to Wikipedia, most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

The IBM Roadrunner supercomputer, for example, is a cluster of 3,240 computers, each with 40 processing cores, while NASA’s Columbia is a cluster of 20 machines, each with 512 processors.

If servers and data centers are considered the bad boys of the IT energy world, then supercomputers must be raving psychopaths, right? Well, not necessarily.

The findings of the Green500 List, an independent ranking of the most energy-efficient supercomputers in the world, show that this is far from the case. In fact, in their June 2008 list they report that:

The first sustained petaflop supercomputer – Roadrunner, from DOE Los Alamos National Laboratory – exhibits extraordinary energy efficiency.

Roadrunner, the top-ranked supercomputer in the TOP500, is ranked #3 on the Green500 List. This achievement further reinforces the fact that energy efficiency is as important as raw performance for modern supercomputers and that energy efficiency and performance can coexist.

Other interesting findings from the list are:

  1. The top three supercomputers surpass the 400 MFLOPS/watt milestone for the first time.
  2. Energy efficiency hits the mainstream – the energy efficiency of a commodity system based on Intel’s 45-nm low-power quad-core Xeon is now on par with that of the IBM BlueGene/L (BG/L) machines, which debuted in November 2004.
  3. Each of the supercomputers in the top ten of this edition of the Green500 List has a higher FLOPS/watt rating than the previous #1 Green500 supercomputer (the previous list was four months ago, in February).
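
For anyone wondering what the 400 MFLOPS/watt milestone actually measures: it is simply sustained floating-point throughput divided by power draw. A quick sketch, using made-up numbers rather than real Green500 measurements:

```python
# Energy efficiency as ranked by the Green500: sustained MFLOPS per watt.
# The figures below are illustrative assumptions, not real Green500 data.

def mflops_per_watt(sustained_gflops: float, power_kw: float) -> float:
    """Convert sustained GFLOPS and power in kW to MFLOPS per watt."""
    mflops = sustained_gflops * 1_000  # 1 GFLOPS = 1,000 MFLOPS
    watts = power_kw * 1_000           # 1 kW = 1,000 W
    return mflops / watts

# A hypothetical system sustaining 200 TFLOPS on a 500 kW power draw:
print(mflops_per_watt(sustained_gflops=200_000, power_kw=500))  # 400.0 MFLOPS/watt
```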

IBM come out of this list as Big Green – out of the first 40 ranked systems, 39 are IBM-based. That is an incredible commitment to Green which can’t be argued with, and for which IBM deserves due credit.

And speaking of Green, it is great to see a supercomputer based in Ireland, the Irish Centre for High-End Computing’s Schrödinger supercomputer, coming in joint 4th place on the list of Green computers.

What makes this even more interesting is that many supercomputers are used in climate modelling and for research into Global Warming.

It is counterintuitive that supercomputers would be highly energy-efficient, but it is precisely because they consume so much power that a lot of research is going into reducing supercomputers’ power requirements, thereby cutting their running costs. Once again, a case of the convergence of ecology and economics (or green and greenbacks!).


Data centers as energy exporters, not energy sinks!

Temp alert
Photo Credit alexmuse

I have been asking a question of various manufacturers recently and not getting a satisfactory answer.

The question I was asking was: why don’t we make heat-tolerant servers? My thinking was that if we had servers capable of working at temperatures of 40°C, then data centers wouldn’t need to expend as much energy cooling the server rooms.

This is not as silly a notion as it might first appear. I understand that semiconductor performance dips rapidly as temperature increases; however, if you had hyper-localised liquid cooling which ensured that the chip’s temperature stayed at, say, 20°C, then the rest of the server could safely be at a higher temperature, no?
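
To put a rough number on that intuition: with a cold plate on the die, the chip temperature is roughly the coolant temperature plus the chip’s power multiplied by the cold plate’s thermal resistance, and the surrounding air barely enters into it. The figures below (coolant temperature, chip power, thermal resistance) are illustrative assumptions, not vendor specifications.

```python
# Rough junction-temperature model for a directly liquid-cooled chip:
#   T_die ~ T_coolant + P_chip * R_thermal
# All numbers below are illustrative assumptions, not vendor figures.

def die_temperature_c(coolant_c: float, chip_power_w: float, r_thermal_c_per_w: float) -> float:
    """Approximate die temperature for a cold-plate-cooled chip."""
    return coolant_c + chip_power_w * r_thermal_c_per_w

# A 50 W chip on a cold plate with ~0.1 °C/W thermal resistance, fed with 15 °C
# coolant, sits near 20 °C regardless of whether the room air is at 40 °C:
print(die_temperature_c(coolant_c=15.0, chip_power_w=50.0, r_thermal_c_per_w=0.1))  # 20.0
```

In other words, the die tracks the coolant loop rather than the room air, which is exactly what would allow the rest of the server room to run warmer.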

When I asked Intel, their spokesperson Nick Knupffer responded by saying:

Your point is true – but exotic cooling solutions are also very expensive + you would still need AC anyway. We are putting a lot of work into reducing the power used by the chips in the 1st place, that equals less heat. For example, our quad-core Xeon chips go as low as 50W of TDP. That combined with better performance is the best way of driving costs down. Lower power + better performance = less heat and fewer servers required.

He then went on to explain about Intel’s new hafnium-infused high-k metal gate transistors:

It is the new material used to make our 45nm transistors – gate leakage is reduced 100 fold while delivering record transistor performance. It is part of the reason why we can deliver such energy-sipping high performance CPU’s.

At the end of the day – the only way of reducing the power bill is by making more energy efficient CPU’s. Even with exotic cooling – you still need to get rid of the heat somehow, and that is a cost.

He is half right! Sure, getting the chips’ power consumption down is important and will reduce the server’s heat output, but as a director of a data center I can tell you what will happen in this case: more servers will be squeezed into the same data center space, doing away with any potential reduction in data center power requirements. Parkinson’s law meets data centers!

No, if you want to take a big-picture approach, you reduce the power consumption of the chips and then cool those chips directly with a hyper-localised solution, so the server room doesn’t need to be cooled. That way the cooling goes only where it is required.

IBM’s Steven Sams, Vice President of Global Site and Facilities Services, sent me a more positive answer:

We’ve actually deployed this in production systems in 3 different product announcements this year

New z Series mainframes actually have a closed coolant loop inside the system that takes coolant to the chips to let us crank up the performance without causing the chip to slide off as the solder melts. New high performance Unix servers (System p) actually put out 75,000 watts of heat per rack, but again the systems are water cooled with redundant coolant distribution units at the bottom of the rack. The technology is pretty sophisticated and I’ve heard that each of these coolant distribution units has 5 times the capacity to dissipate heat of our last water cooled mainframe in the 1980s. The cooling distribution unit for that system was about 2 meters wide by 2 meters high by about 1 meter deep. The new units are about 10 inches by 30 inches.

The new webhosting servers, iDataPlex, use Intel and AMD microprocessors and jam a lot of technology into a rack that is about double the width but half the depth. To ensure that this technology does not use all of the AC in a data center, the systems are installed with water cooled rear door heat exchangers, i.e. a car radiator at the back of the rack. These devices actually take out 110% of the heat generated by the technology, so the outlet temp is actually cooler than the air that comes in the front. A recent study by a west coast technology leadership consortium, at a facility provided by Sun Microsystems, actually showed that this rear door heat exchanger technology is the most energy efficient of all the alternatives they evaluated with the help of the Lawrence Berkeley National Laboratory.

Now that is the kind of answer I was hoping for! If this kind of technology became widespread for servers, the vast majority of the energy data centers burn on air conditioning would no longer be needed.

However, according to the video below, which I found on YouTube, IBM are going way further than I had imagined. They announced their Hydro-Cluster Power 575 series supercomputers in April. They plan to allow data centers to capture the heat from the servers and export it as hot water for swimming pools, cooking, hot showers, etc.

This is how all servers should be plumbed.

Tremendous – data centers as energy exporters, not energy sinks. I love it.


IBM reckons Green is where economic and ecological concerns converge

I love this ad. It demonstrates not only that IBM has a sense of humour but also that they have the right story – today, with soaring energy prices, Green is where economic and ecological concerns converge.

Last year IBM announced Project Big Green. This was a commitment by IBM to re-direct $1 billion USD per annum across its businesses to increase energy efficiency! Serious money by anyone’s standards.

This isn’t just some philanthropic gesture on IBM’s part. By making this investment the company expects to save more than five billion kilowatt hours per year. IBM anticipates that it will double the computing capacity of the eight million square feet of data center space it operates within the next three years, without increasing power consumption or its carbon footprint. In other words, they expect to double their compute power without adding data centers or increasing their carbon footprint!
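
To get a sense of the scale of those five billion kilowatt hours, here is a quick back-of-the-envelope calculation; the electricity price is an assumption purely for illustration, not an IBM figure.

```python
# Back-of-the-envelope scale check on IBM's projected savings.
# The $0.10/kWh electricity price is an illustrative assumption, not an IBM figure.

saved_kwh_per_year = 5_000_000_000   # "more than five billion kilowatt hours per year"
assumed_price_per_kwh = 0.10         # USD, assumed purely for illustration
hours_per_year = 365 * 24            # 8,760

annual_saving_usd = saved_kwh_per_year * assumed_price_per_kwh
avg_power_avoided_mw = saved_kwh_per_year / hours_per_year / 1_000  # kW -> MW

print(f"~${annual_saving_usd / 1e6:.0f} million per year")           # ~$500 million
print(f"~{avg_power_avoided_mw:.0f} MW of continuous load avoided")  # ~571 MW
```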

This year, IBM have gone even further! As an extension of Project Big Green, they have announced ‘modular data centers’, similar to Sun’s S20 product. They come in three sizes, and IBM claims they are

designed to achieve the world’s highest ratings for energy leadership, as determined by the Green Grid, an industry group focused on advancing energy efficiency for data centers and business compute ecosystems.

I’d love to see comparable metrics between the S20 and IBM’s modular data centers.

However, the take-home message today is that IBM is committing serious resources to its Green project. Not because they care deeply for the planet (I’m sure they do), but because they care deeply about the bottom line, and with increasing energy costs there is now a sweet convergence between doing the right thing for the planet and doing the right thing for the shareholder!