
Technology for Good – episode nine

Welcome to episode nine of the Technology for Good hangout. In this week’s show we had special guest John Clark, Worldwide Manager of Smart Buildings for IBM. Given the week that was in it, with Google’s announcement of Android Wear and Twitter’s eighth birthday, there were plenty of stories about social networks and wearable devices.

Here are the stories we discussed in the show:

Climate

Wearables

Health

Open Source

Twitter

Internet of Things

Misc


The Internet of Things is bringing Electricity 2.0 that much closer

One of the reasons I started working with GreenMonk back in 2008 was that James heard my Electricity 2.0 vision, and totally bought into it.

The idea, if you’re not familiar with it, was that as smart grids are deployed, homes will become more connected, devices more intelligent, and home area networks would emerge. This would allow the smart devices in the home (think water heaters, clothes dryers, dish washers, fridges, electric car chargers, etc.) to listen to realtime electricity prices, understand them, and adjust their behaviour accordingly. Why would they want to do this? To match electricity demand to its supply, thereby minimising the cost to their owner, while facilitating the safe incorporation of more variable suppliers onto the grid (think renewables like solar and wind).
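
To make the idea concrete, here is a minimal sketch in Python of what a price-aware appliance could look like. The price feed, threshold and timings are entirely made up for illustration – a real device would read prices from a smart meter or a utility broadcast over the home area network.

```python
import random
import time

PRICE_THRESHOLD = 15.0  # cents/kWh; above this, defer flexible loads

def get_realtime_price():
    """Stand-in for a realtime price signal from the utility.
    Here we simulate one; a real appliance would read it from a
    smart meter or a price feed on the home area network."""
    return random.uniform(5.0, 30.0)

def run_dishwasher_cycle():
    print("Price is low – starting the wash cycle now.")

def smart_dishwasher(poll_interval_s=60):
    """Defer a flexible load until electricity is cheap – matching
    demand to supply is exactly the demand response idea."""
    while True:
        price = get_realtime_price()
        print(f"Current price: {price:.1f} c/kWh")
        if price <= PRICE_THRESHOLD:
            run_dishwasher_cycle()
            return
        time.sleep(poll_interval_s)  # wait and check again

if __name__ == "__main__":
    smart_dishwasher(poll_interval_s=1)
```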

That was 2008/2009. Fast forward to the end of 2013 and we see that smart meters are being deployed in anger, devices are becoming more intelligent, and home area networks are becoming a reality. The Internet of Things is now a thing (witness the success of devices like Nest’s Thermostat and Protect, the Philips Hue, and Belkin’s WeMo devices). Also, companies like GridPoint, Comverge and EnerNOC are making demand response (the automatic reduction of electricity use) more widespread.

We’re still nowhere near realising the vision of utility companies broadcasting prices in realtime and home appliances listening in and adjusting their behaviour accordingly, but we are quite a bit further down that road.

One company with a large part to play in filling in some of the gaps is GE. GE supplies much of the software and hardware used by utilities in the generation, transmission and distribution of electricity. This will need to be updated to allow the realtime transmission of electricity prices. But GE is also a major manufacturer of white goods – the dishwashers, fridges, clothes dryers, etc. which will need to be smart enough to listen out for pricing signals from utilities. These machines will need to be simple to operate but smart enough to adjust their operation without too much user intervention – like the Nest Thermostat. And sure enough, to that end, GE have created their Connected Appliances division, so they too are thinking along these lines.

Another indication that we are headed in the right direction is energy management company Schneider Electric’s recently announced licensing agreement with ioBridge, an Internet of Things connectivity company.

Other big players such as Intel, IBM and Cisco have announced big plans in the Internet of Things space.

The example in the video above of me connecting my Christmas tree lights was a trivial one, obviously. But it was deliberately so. Back in 2008, when I was first mooting the Electricity 2.0 vision, connecting Christmas tree lights to the Internet and controlling them from a phone wouldn’t have been possible. Now it is a thing of nothing. With all the above companies working on the Internet of Things in earnest, we are finally closing in on Electricity 2.0.

Full disclosure – Belkin sent me a WeMo Switch + Motion to try out.


HP joins ranks of microserver providers with Redstone

Redstone server platform

The machine in the photo above is HP’s newly announced Redstone server development platform.

Capable of fitting 288 servers into a 4U rack enclosure, it packs a lot of punch into a small space. The servers are system-on-a-chip designs based on Calxeda ARM processors, but according to HP, future versions will include “Intel® Atom™-based processors as well as others”.

These are not the kind of servers you deploy to host your blog and a couple of photos. No, these are the kinds of servers deployed by the literal shedload by hosting or cloud companies to get the maximum performance for the minimum energy hit. This has very little to do with these companies developing a sudden green conscience; rather, it is the rising energy cost of running server infrastructure that is the primary motivator here.
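
To see the scale of the incentive, here is a rough back-of-envelope comparison of the annual electricity bill for 288 conventional 1U servers versus 288 low-power SoC nodes. Every figure in it (the wattages and the electricity price) is an assumption for illustration, not a vendor specification.

```python
# Back-of-envelope energy cost comparison. All figures are
# illustrative assumptions, not vendor specifications.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12  # assumed electricity price in $/kWh

def annual_energy_cost(per_server_watts, num_servers):
    """Annual cost in dollars of running num_servers continuously."""
    kwh = per_server_watts * num_servers * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

# Assume a conventional 1U x86 server draws ~300 W under load, while
# an ARM system-on-a-chip node draws ~5 W (order-of-magnitude guesses).
print(f"${annual_energy_cost(300, 288):,.0f}")  # 288 conventional servers
print(f"${annual_energy_cost(5, 288):,.0f}")    # 288 SoC nodes in one 4U box
```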

This announcement is part of a larger move by HP (called Project Moonshot), designed to advance HP’s position in the burgeoning low-energy server marketplace.

Nor is this anything very new or unique to HP. Dell have been producing microservers for over three years now. In June and July of this year (2011) they launched the third generations of their AMD- and Intel-based PowerEdge microservers respectively.

And it’s not just Dell: SeaMicro has been producing Atom-based microservers for several years now. Their latest server, the SM10000-64, contains 384 processors per system in a 10U chassis with a very low energy footprint.

And back in April of this year, Facebook announced its Open Compute initiative to open-source the development of vanity-free, low-cost compute nodes (servers). These are based on Intel and AMD motherboards, but don’t be surprised if there is a shift to Atom in Open Compute soon enough.

This move towards the use of more energy efficient server chips, along with the sharing of server resources (storage, networking, management, power and cooling) across potentially thousands of servers is a significant shift away from the traditional server architecture.

It will fundamentally change the cost of deploying and operating large cloud infrastructures. It will also drastically increase the compute resources available online, but the one thing it won’t do, as we know from Jevons’ Paradox, is reduce the amount of energy used in IT. Paradoxically, it may even increase it!

Photo credit HP


Corporate Social Responsibility – tech companies reviewed!

Corporate Social Responsibility

According to its Wikipedia definition, Corporate Social Responsibility (CSR)

is a concept whereby organizations consider the interests of society by taking responsibility for the impact of their activities on customers, suppliers, employees, shareholders, communities and other stakeholders, as well as the environment. This obligation is seen to extend beyond the statutory obligation to comply with legislation and sees organizations voluntarily taking further steps to improve the quality of life for employees and their families as well as for the local community and society at large.

Companies are now starting to report on their Corporate Social Responsibility initiatives in greater numbers. Drivers for this include the rise in ethical consumerism, socially responsible investing, employee recruitment and loyalty, changing laws and regulations, increased scrutiny and transparency and risk mitigation.

According to the Sustainable Investment Research Analyst Network’s (SIRAN) 2008 report (pdf warning):

  • 86 of the S&P 100 companies now have corporate sustainability websites, compared to 58 in mid-2005, an increase of 48 percent;
  • 49 of the leading U.S. companies produced a sustainability report in 2007, an increase of 26 percent from 39 in 2005.

In an attempt to define standards and make these reports cross-comparable, the Global Reporting Initiative has come up with a sustainability reporting framework. According to Wikipedia:

The GRI Guidelines are the most common framework used in the world for reporting. More than 1000 organizations from 60 countries use the Guidelines to produce their sustainability reports.

A quick search of tech sites reveals:
IBM’s stellar Corporate Responsibility site – IBM’s site has a ton of good information and a downloadable CSR report (pdf) which includes the Global Reporting Initiative (GRI) index. If there is a tech company with a better CSR site than this, please tell me; I haven’t found it yet!

From the Dell site you can see Dell has been producing sustainability reports going back to 1998 (called environment reports back then). The 2008 CSR report (pdf) is linked to from the company’s Values page and is a really good example of how to do these reports well.

SAP’s Sustainability site is pretty bare bones (and though Google found it, I couldn’t find a link to it on the corporate website!). Having said that, their Sustainability report (pdf), linked to from their Sustainability site, is very good for a first effort. It includes a GRI index, and while SAP admit that the report is prepared to GRI Application Level C, they give a commitment to producing a “report to GRI B+ standard externally assured and audited in second quarter 2009”.

Cisco’s CSR site includes a great five-minute video on CSR from Cisco CEO John Chambers and some of his CSR-related staff. Unfortunately the video is not embeddable and is all rights reserved, or I would embed it here 🙁 Cisco’s CSR 2008 report is available in a Flash interactive version or the more traditional (and easier to consume) pdf version! Again, this report has a GRI index included.

Sun’s excellent CSR site includes a podcast, lots of great links to relevant information and its superb 2008 CSR report (pdf) – again with the GRI index data.

Oracle also has a good CSR site. Oracle’s site links to its 2008 Corporate Citizenship report (pdf), but the report doesn’t include a GRI index.

HP’s Global Citizenship site looks good until you check out their CSR report – it dates to financial year 2007 (which ended October 31, 2007). In its defense, it does include a GRI index, but guys, come on, 2007?

Neither Intel nor AMD have reports for 2008. But while Intel have a very comprehensive downloadable pdf report on their CSR initiatives for 2007, the AMD offering consists of a disappointing four tables of performance indicators across the last few years.

If you are looking for Microsoft’s CSR report, you will find it buried on their Corporate Citizenship site under Resource Center -> Awards and Reports -> the Reports tab. The most recent report is dated 2007-08. It is a five-page document of mostly images; there is no mention whatsoever of GRI, there is no executive involvement, and in comparison to previous years’ reports, it looks like Microsoft’s limited focus on CSR has waned completely.

Having said that, at least Microsoft has produced a report! Apple didn’t even do that. When As You Sow recently tabled a shareholder resolution that would require Apple to publish a corporate social responsibility (CSR) report, the company issued a proxy filing asking shareholders to vote against the resolution, saying that the publication would be an unnecessary expense that would “produce little added value.”

That said, at least Apple have a section on their site dedicated to their environmental efforts; Amazon don’t even appear to do that. Their filed reports page makes no effort to include any reports on environmental stewardship or corporate citizenship, although given the story which came out before Christmas about Amazon’s shocking employment practices, that can hardly be any surprise.

Ironically, Google’s CSR efforts are supremely difficult to find! They do have a corporate website dedicated to their Green Initiatives but, like Apple, they don’t have any CSR report (that I could find!).

Who’d I miss? Who is better? Who is worse?

Original photo by ATIS547


Supercomputers can be Green – who knew?

ibm supercomputer
Photo Credit gimpbully

According to Wikipedia, most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

The IBM Roadrunner supercomputer, for example, is a cluster of 3,240 computers, each with 40 processing cores, while NASA’s Columbia is a cluster of 20 machines, each with 512 processors.

If servers and data centers are considered the bad boys of the IT energy world, then supercomputers must be raving psychopaths, right? Well, not necessarily.

The findings of the Green500 List, an independent ranking of the most energy-efficient supercomputers in the world, show that this is far from the case. In fact, in their June 2008 listings they report that:

The first sustained petaflop supercomputer – Roadrunner, from DOE Los Alamos National Laboratory – exhibits extraordinary energy efficiency.

Roadrunner, the top-ranked supercomputer in the TOP500, is ranked #3 on the Green500 List. This achievement further reinforces the fact that energy efficiency is as important as raw performance for modern supercomputers and that energy efficiency and performance can coexist.

Other interesting findings from the list are:

  1. The top three supercomputers surpass the 400 MFLOPS/watt milestone for the first time (see the quick calculation after this list);
  2. Energy efficiency hits the mainstream – the energy efficiency of a commodity system based on Intel’s 45-nm low-power quad-core Xeon is now on par with IBM BlueGene/L (BG/L) machines, which debuted in November 2004; and
  3. Each of the supercomputers in the top ten of this edition of the Green500 List has a higher FLOPS/watt rating than the previous #1 Green500 supercomputer (the previous list was published four months ago, in February).
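
For context, the MFLOPS/watt figure in the first point is simply sustained floating-point throughput divided by power draw. Using the approximate public figures for Roadrunner’s Linpack run (around 1.026 petaflops sustained at roughly 2.35 MW – treat both as ballpark numbers), the arithmetic looks like this:

```python
# MFLOPS/watt = sustained floating-point ops per second / power draw.
# Both figures below are approximate public numbers for Roadrunner.
sustained_flops = 1.026e15  # ~1.026 petaflops sustained on Linpack
power_watts = 2.35e6        # ~2.35 MW power draw

mflops_per_watt = (sustained_flops / 1e6) / power_watts
print(f"{mflops_per_watt:.0f} MFLOPS/watt")  # ~437, past the 400 milestone
```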

IBM come out of this list as Big Green – out of the first 40 ranked systems, 39 are IBM-based. That is an incredible commitment to Green which can’t be argued with, and for which IBM deserves due credit.

And speaking of Green, it is great to see a supercomputer based in Ireland, the Irish Centre for High-End Computing’s Schrödinger supercomputer, coming in joint 4th place on the list of Green computers.

What makes this even more interesting is that many supercomputers are used in climate modelling and for research into Global Warming.

It is counterintuitive that supercomputers would be highly energy-efficient, but it is precisely because they consume so much power that a lot of research is going into reducing their power requirements, thereby cutting their running costs. Once again, a case of the convergence of ecology and economics (or green and greenbacks!).


Data centers as energy exporters, not energy sinks!

Temp alert
Photo Credit alexmuse

I have been asking a question of various manufacturers recently and not getting a satisfactory answer.

The question I was asking was: why don’t we make heat-tolerant servers? My thinking being that if we had servers capable of working in temperatures of 40°C, then data centers wouldn’t expend as much energy trying to cool the server rooms.

This is not as silly a notion as it might first appear. I understand that semiconductor performance dips rapidly as temperature increases; however, if you had hyper-localised liquid cooling which ensured that the chip’s temperature stayed at 20°C, say, then the rest of the server could safely be at a higher temperature, no?

When I asked Intel, their spokesperson Nick Knupffer responded by saying:

Your point is true – but exotic cooling solutions are also very expensive + you would still need AC anyway. We are putting a lot of work into reducing the power used by the chips in the first place – that equals less heat. For example, our quad-core Xeon chips go as low as 50W of TDP. That, combined with better performance, is the best way of driving costs down. Lower power + better performance = less heat and fewer servers required.

He then went on to explain Intel’s new hafnium-infused high-k metal gate transistors:

It is the new material used to make our 45nm transistors – gate leakage is reduced 100-fold while delivering record transistor performance. It is part of the reason why we can deliver such energy-sipping, high-performance CPUs.

At the end of the day, the only way of reducing the power bill is by making more energy-efficient CPUs. Even with exotic cooling, you still need to get rid of the heat somehow, and that is a cost.

He is half right! Sure, getting the chips’ power consumption down is important and will reduce the servers’ heat output, but as a director of a data center I can tell you what will happen in this case: more servers will be squeezed into the same data center space, doing away with any potential reduction in data center power requirements. Parkinson’s law meets data centers!

No, if you want to take a big-picture approach, you reduce the power consumption of the chips and then you cool those chips directly with a hyper-localised solution so the server room doesn’t need to be cooled. This way the cooling goes only where it is required.
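
To put rough numbers on that argument, consider the standard Power Usage Effectiveness (PUE) metric: total facility power divided by IT equipment power. The loads and PUE values below are my own assumptions for illustration, but they show how much room-level air conditioning can add to the bill.

```python
# PUE = total facility power / IT equipment power.
# Cooling is usually the biggest non-IT contributor.
IT_LOAD_KW = 500.0  # assumed IT load for a mid-sized server room

def total_facility_kw(pue):
    """Total power drawn by the facility at a given PUE."""
    return IT_LOAD_KW * pue

conventional = total_facility_kw(2.0)   # assumed: room-level air conditioning
direct_cooled = total_facility_kw(1.2)  # assumed: hyper-localised cooling
print(f"Overhead saved: {conventional - direct_cooled:.0f} kW")  # 400 kW
```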

IBM’s Steven Sams, Vice President, Global Site and Facilities Services, sent me a more positive answer:

We’ve actually deployed this in production systems in 3 different product announcements this year.

New z Series mainframes actually have a closed coolant loop inside the system that takes coolant to the chips, to let us crank up the performance without causing the chips to slide off as the solder melts. New high-performance Unix servers, System p….., actually put out 75,000 watts of heat per rack….. but again the systems are water-cooled, with redundant coolant distribution units at the bottom of the rack. The technology is pretty sophisticated and I’ve heard that each of these coolant distribution units has 5X the capacity to dissipate heat of our last water-cooled mainframe in the 1980s. The cooling distribution unit for that system was about 2 meters wide by 2 meters high by about 1 meter deep. The new units are about 10 inches by 30 inches.

The new iDataPlex web-hosting servers use Intel and AMD microprocessors and jam a lot of technology into a rack that is about double the width but half the depth. To ensure that this technology does not use all of the AC in a data center, the systems are installed with water-cooled rear-door heat exchangers… i.e. a car radiator at the back of the rack. These devices actually take out 110% of the heat generated by the technology, so the outlet temperature is actually cooler than the air that comes in the front. A recent study by a west coast technology leadership consortium, at a facility provided by Sun Microsystems, actually showed that this rear-door heat exchanger technology is the most energy-efficient of all the alternatives they evaluated with the help of the Lawrence Berkeley National Laboratory.

Now that is the kind of answer I was hoping for! If this kind of technology became widespread for servers, the vast majority of the energy data centers burn on air conditioning would no longer be needed.

However, according to the video below, which I found on YouTube, IBM are going way further than I had thought. They announced their Hydro-Cluster Power 575 series supercomputers in April. They plan to allow data centers to capture the heat from the servers and export it as hot water for swimming pools, cooking, hot showers, etc.
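
Taking the 75,000 watts of heat per rack quoted above, a quick bit of physics (Q = m × c × ΔT) suggests how much usable hot water a single such rack could supply. The inlet and outlet temperatures are my own assumptions for illustration.

```python
# How much water can 75 kW of waste heat warm up?  Q = m * c * dT
HEAT_WATTS = 75_000  # per-rack heat output quoted above
C_WATER = 4186       # J/(kg*K), specific heat capacity of water
DELTA_T = 30         # assumed: heating mains water from 15°C to 45°C

kg_per_second = HEAT_WATTS / (C_WATER * DELTA_T)
litres_per_hour = kg_per_second * 3600  # 1 kg of water ≈ 1 litre
print(f"{litres_per_hour:.0f} litres of hot water per hour")  # ~2150
```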

This is how all servers should be plumbed.

Tremendous – data centers as energy exporters, not energy sinks. I love it.