post

Facebook and eBay’s data centers are now vastly more transparent

eBay's Digital Service Efficiency dashboard

At the end of last week, Facebook announced a new way of reporting PUE and WUE for its data centers.

This comes hot on the heels of eBay’s announcement of its Digital Service Efficiency dashboard – a single screen reporting the cost, performance and environmental impact of customer buy and sell transactions on eBay.

These dashboards are a big step forward in making data centers more transparent about the resources they consume, and about how efficiently (or otherwise) those data centers are running.

Even better, both organisations are working to make their dashboards a standard, so that their data centers can be compared directly with those of other organisations using the same dashboard.

Facebook Prineville Data Center dashboard

There are a number of important differences between the two dashboards, however.

To start with, Facebook’s data is near-realtime (updated every minute, albeit with a 2.5 hour delay in the data), whereas eBay’s data is updated only once a quarter – nowhere near realtime.

Facebook also includes environmental data (external temperature and humidity), as well as options to review the PUE, WUE, humidity and temperature data for the last 7 days, the last 30 days, the last 90 days or the last year.

On the other hand, eBay’s dashboard is, perhaps unsurprisingly, more business-focussed, giving metrics like revenue per user ($54), transactions per kWh (45,914), the number of active users (112.3 million), etc. Facebook makes no mention anywhere of its revenue, its user numbers or its transactions per kWh.

eBay pulls ahead on the environmental front because it reports its Carbon Usage Effectiveness (CUE) in its dashboard, whereas Facebook completely ignores this vital metric. As we’ve said here before, CUE is a far better metric for measuring how green your data center is.
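
For anyone unfamiliar with the three metrics, they are all ratios against the energy drawn by the IT equipment itself: PUE is total facility energy over IT energy, WUE is litres of water per IT kWh, and CUE is kg of CO2e per IT kWh. Here is a minimal sketch in Python – the facility numbers are invented purely for illustration, they are not Facebook's or eBay's:

```python
# Illustrative only - the numbers below are made up, not Facebook's or eBay's.
def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

def wue(site_water_litres, it_kwh):
    """Water Usage Effectiveness: litres of water used per kWh of IT energy."""
    return site_water_litres / it_kwh

def cue(co2e_kg, it_kwh):
    """Carbon Usage Effectiveness: kg of CO2e emitted per kWh of IT energy."""
    return co2e_kg / it_kwh

it_kwh = 1_000_000        # energy used by the IT equipment over the period
total_kwh = 1_090_000     # everything the facility drew, cooling and losses included
water_litres = 280_000
co2e_kg = 600_000         # depends entirely on the utility's generation mix

print(f"PUE {pue(total_kwh, it_kwh):.2f}")            # 1.09
print(f"WUE {wue(water_litres, it_kwh):.2f} L/kWh")   # 0.28
print(f"CUE {cue(co2e_kg, it_kwh):.2f} kg CO2e/kWh")  # 0.60
```

The last line is why CUE matters: two facilities with identical PUE can have wildly different carbon footprints depending on how their electricity is generated.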

Facebook does get some points for reporting its carbon footprint elsewhere, but not for these data centers. This was obviously decided at some point in the design of its dashboards, and one has to wonder why.

The last big difference between the two is in how they are trying to get their dashboards more widely used. Facebook say they will submit the code for theirs to the Open Compute repository on GitHub. eBay, on the other hand, launched theirs at the Green Grid Forum 2013 in Santa Clara. They also published a PDF solution paper, which is a handy backgrounder, but nothing like the equivalent of dropping your code onto GitHub.

The two companies could learn a lot from each other on how to improve their current dashboard implementations, but more importantly, so could the rest of the industry.

What are IBM, SAP, Amazon and the other cloud providers doing to provide these kinds of dashboards for their users? GreenQloud has offered this to its users for ages, and now Facebook and eBay have zoomed past them too. Once Facebook contributes its codebase to GitHub, the cloud companies will have one less excuse.

Image credit nicadlr

post

Data Center War Stories talks to SAP’s Jürgen Burkhardt

And we’re back this week with the second instalment in our Data Center War Stories series (sponsored by Sentilla).

This second episode in the series is with Jürgen Burkhardt, Senior Director of Data Center Operations, at SAP‘s HQ in Walldorf, Germany. I love his reference to “the purple server” (watch the video, or see the transcript below!).

Here’s a transcript of our conversation:

Tom Raftery: Hi everyone, welcome to GreenMonk TV. Today we are doing a special series called the Data Center War Stories. This series is sponsored by Sentilla, and with me today I have Jürgen Burkhardt. Jürgen, if I remember correctly your title is Director of Data Center Operations for SAP, is that correct?

Jürgen Burkhardt: Close. Since I am 45, I am Senior Director of Data Center Operations, yes.

Tom Raftery: So Jürgen, can you give us some idea of the size, scale and function of your data center?

Jürgen Burkhardt: All together we have nearly 10,000 square meters of raised floor. We are running 18,000 physical servers and now more than 25,000 virtual servers out of this location. The main purpose is first of all to run the production systems of SAP – the usual stuff, FI, BW, CRM et cetera – and all the support systems. So if you log on to the SAP Service Marketplace, our service marketplace, that system is running here in Walldorf/Rot; whatever you see from sap.com is running here to a large extent. We are running the majority of all development systems here, all training systems, and the majority of demo and consulting systems worldwide at SAP.

We have more than 20 megawatts of computing power here. I mentioned the 10,000 square meters of raised floor. We have more than 15 petabytes of usable central storage, a backup volume of 350 terabytes a day and more than 13,000 terabytes in our backup library.

Tom Raftery: Can you tell me what the top issues are that you come across day to day in running your data center – what are the big-ticket items?

Jürgen Burkhardt: So one of the biggest problems we clearly have is the topic of asset management and the whole logistics process. If you have so many new servers coming in, you clearly need a very, very sophisticated process which allows you to find what we call the purple server – where is it, where is that special server? What is it used for? Who owns it? How long has it been in use? Do we still need it? All those kinds of questions are very important for us.

And this is also very important from an infrastructure perspective. We have so much stuff out there that if we start moving servers between locations, or if we try to consolidate racks, server rooms and whatsoever, it’s absolutely required for us to know exactly where something is, who owns it, what it is used for, etcetera, etcetera. And this is really one of the major challenges we have currently.

Tom Raftery: Are there any particular stories that come to mind, things or issues that you’ve hit on and had to scratch your head over and resolve, that you want to talk about?

Jürgen Burkhardt: I think most people have a problem with their cooling capacity. Even though we are running a very big data center and have a lot of capacity, on the other side there was room for improvement. So what we did is we implemented a cold aisle containment system by ourselves.

So there are solutions available that you can purchase from various companies. What we did is, first of all, we measured our power and cooling capacity and consumption in great detail, and on the basis of that we figured out a concept to do it by ourselves.

So the first important thing – today I think it’s standard – is that we had to change our rack positions, especially in the data center which is ten years old and which has now also got the premium certificate. In that data center the rack positions were back-front, back-front, back-front, and we had thousands of servers in there.

So what we are now doing – and have already done to some extent in that data center – is changing the rack positions to front-front to implement the cold aisle containment system. IT did that together with facility management. So we had a big project running to move servers, shut down racks, and turn the racks in whole rows front to front, and then, together with some external companies, we built the containment systems using a very normal, easy method – buying the stock in the next supermarket, more or less. Where we have implemented it, that increased the cooling capacity by 20%.

Tom Raftery: Is there anything else you want to mention?

Jürgen Burkhardt: Within the last three to four years we have crashed against every limit you can imagine for the various types of devices which are available on the market, because of our growth in size. The main driver for our virtualization strategy is the low utilization of our development and training servers. So what we are currently implementing is more or less a corporate cloud.

A few years ago, when we had some cost-saving measures, our board said, you know what, we have a nice idea – we shut down everything which has a utilization below 5%. And we said, well, that might not be a good idea, because in that case we would have to shut down more or less everything. The reason is, if you imagine a server with an SAP system running on it and a database for development purposes, and maybe a few developers logging in, from a CPU utilization point of view you hardly see it, you understand.
So the normal consumption of the database and the system itself creates most of the load, and the little bit of development by the developers is really not worth mentioning. And even if they sometimes run some test cases, it’s not really a lot. The same is true for training – during the training sessions there is a high load on the systems.

But on the other side these systems are utilized maybe 15% or 20% maximum, because the training starts on Monday and runs from 9:00 to 5:00, and some trainings only go for two or three days. So there is very low utilization. And that was the main reason for us to say we need virtualization, we need it desperately – and we have achieved a lot of savings with that now, and we are currently already live with our corporate cloud.

And we are now migrating more and more of our virtual systems, and also the physical systems which are being migrated to virtualization, into the corporate cloud – with a fully automated self-service system and a lot of functionality which allows us, and also the customers themselves, to park systems, unpark systems and create masters. This is very interesting and this really gives us savings in the area of 50% to 60%.

Tom Raftery: Okay Jürgen that’s been fantastic, thanks a million for coming on the show.

post

Data center war stories sponsored blog series – help wanted!

Data center work

Sentilla are a client company of ours. They have asked me to start a discussion here around the day-to-day issues data center practitioners are coming up against.

This is a very hands-off project from their point of view.

The way I see it happening is that I’ll interview some DC practitioners either via Skype video, or over the phone, we’ll have a chat about DC stuff (war stories, day-to-day issues, that kind of thing), I’ll record the conversations and publish them here along with transcriptions. They’ll be short discussions – simply because people rarely listen to/watch rich media longer than 10 minutes.

There will be no ads for Sentilla during the discussions, and no mention of them by me – apart from an intro and outro simply saying the recording was sponsored by Sentilla. Interviewees are free to mention any solution providers and there are no restrictions whatsoever on what we talk about.

If you are a data center practitioner and you’d like to be part of this blog series, or simply want to know more, feel free to leave a comment here, or drop me an email to [email protected]

You should follow me on Twitter here
Photo credit clayirving

post

Top 10 Data Center blogs

Data center air and water flows

Out of curiosity, I decided to see if I could make a list of the top 10 data center focussed blogs. I did a bit of searching around and found about thirty blogs related to data centers (who knew they were so popular!). I went through the thirty blogs and eliminated them based on arbitrary things I made up on the spot – post frequency, off-topic posts, etc. – until I came up with a list I felt was the best. Then I counted them and lo! I had exactly 10 – phew, no need to eliminate any of the good ones!

So without further ado – and in no particular order, I present you with my Top 10 Data Center blogs:

What great data center blogs have I missed?

The chances are there are superb data center blogs out there which my extensive 15 seconds of research on the topic failed to uncover. If you know of any, feel free to leave them in the comments below.

Image credit Tom Raftery

post

Facebook open sources building an energy-efficient data center

Facebook's new custom-built Prineville Data Centre

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first. As far as I know, no-one had done this before and, to be honest, no-one since has replicated it. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing the bringing online of their new data centre, but they are open sourcing its design, specifications and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the Open Compute Project and they have released a fact sheet [PDF] with details on their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area of Oregon, 3,200 ft above sea level, with mild temperatures) means they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the warm return air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs, each serving six racks (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces the power losses in the utility-to-server chain from an industry average of 11-17% down to Prineville’s 2%.
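
To put that loss figure in context, here is a back-of-the-envelope sketch in Python of what trimming distribution losses from, say, 14% (the middle of the industry range quoted above) to 2% means over a year. The IT load and all the other numbers are my own illustrative assumptions, not Facebook's:

```python
# Rough illustration of distribution-loss savings; figures are invented, not Facebook's.
it_load_mw = 6.0               # hypothetical IT load delivered to the servers
hours_per_year = 24 * 365

def utility_draw_mw(it_load_mw, loss_fraction):
    """Power drawn from the utility so that it_load_mw survives the distribution chain."""
    return it_load_mw / (1 - loss_fraction)

legacy = utility_draw_mw(it_load_mw, 0.14)       # mid-point of the 11-17% industry range
low_loss = utility_draw_mw(it_load_mw, 0.02)     # Prineville-style 2% loss

saved_mwh_per_year = (legacy - low_loss) * hours_per_year
print(f"Legacy draw:  {legacy:.2f} MW")          # ~6.98 MW
print(f"2% loss draw: {low_loss:.2f} MW")        # ~6.12 MW
print(f"Energy saved: {saved_mwh_per_year:,.0f} MWh per year")   # ~7,500 MWh
```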

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6F (27C) – the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run it at 85F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data center takes them from 99.999% uptime to 99.9999% uptime.

New Server design
Facebook also designed custom servers for their data centres. The servers contain no paint, logos, stickers, bezels or front panels. They are designed to be bare bones (using 22% fewer materials than a typical 1U server) and for ease of serviceability (snap-together parts instead of screws).

The servers are 1.5U tall to allow for larger heat sinks and larger (slower turning and consequently more efficient) 60mm fans. These fans take only 2-4% of the energy of the server, compared to 10-20% for typical servers. The heat sinks are all spread across the back of the motherboard so that none of them receives pre-heated air from another heat sink, reducing the work required of the fans.

The server power supply accepts both 277V AC power from the electrical distribution system and 48V DC from the UPS in the event of a utility power failure. These power supplies have a peak efficiency of 94.5% (compared to a more typical 90% for standard PSUs) and they connect directly to the motherboard, simplifying the design and reducing airflow impedance.

Open Compute
Facebook relied heavily on open source in creating their site. Now, they say, they want to make sure the next generation of innovators don’t have to go through the same pain as Facebook in building out efficient data centre infrastructure. Consequently, Facebook is releasing all of the specification documentation which it gave to its suppliers for this project.

Some of the schematics and board layouts for the servers belong to the suppliers, so they are not currently being published. Facebook did say they are working with their suppliers to see if they will release them (or portions of them), but they haven’t reached agreement on this just yet.

Asked directly about their motivations for launching Open Compute, Facebook’s Jay Park came up with this classic reply:

… it would almost seem silly to do all this work and just keep it closed

Asking Facebook to unfriend coal
Greenpeace started a campaign to pressure Facebook into using more renewable energy resources, because Pacific Power, the energy supplier Facebook will be using for its Prineville data center, produces almost 60% of its electricity from burning coal.

Greenpeace being Greenpeace, they created a very viral campaign, using the Facebook site itself and the usual cadre of humorous videos etc., to press Facebook into thinking about sourcing its electricity from more renewable sources.

When we asked Facebook about this in our briefing, they did say that their data centre efforts are built around many more considerations than just the source of the energy that comes into the data centre. They then went on to maintain that they are impressed by Pacific Power’s commitment to moving towards renewable sources of energy (the utility is targeting 2,000MW of power from renewables by 2013). And they concluded by contending that the efficiencies they have achieved in Prineville more than offset the use of coal which powers the site.

Conclusion
Facebook tell us this new custom data centre at Prineville has a PUE of 1.07, which is very impressive.

They have gone all out on innovating their data centre and the servers powering their hugely popular site. More than that though, they are launching the Open Compute Project giving away all the specs and vendor lists required to reproduce an equally efficient site. That is massively laudable.

It is unfortunate that their local utility has such a high proportion of coal in its generation mix, besmirching an otherwise great energy and sustainability win for Facebook. The good thing, though, is that as the utility adds to its portfolio of renewables, Facebook’s site will only get greener.

For more on this check out the discussions on Techmeme

You should follow me on Twitter here

Photo credit Facebook’s Chuck Goolsbee

post

Sentilla thinks of data centers as data factories!

Data center

If you have been following this blog, you’ll know I have been profiling Data Center efficiency companies over the last few weeks. This week I take a look at Sentilla.

I talked to Sentilla’s CTO and co-founder, Joe Polastre, the other day and Joe told me that Sentilla came out of Berkeley where they had been looking at data analytics problems around large, potentially incomplete or inaccurate, streaming datasets. The challenge was how to turn that into a complete picture of what’s going on so people could make business decisions.

Sentilla takes an industrial manufacturing approach to data centers – in manufacturing you have power going in one side, and products and (often) waste heat coming out the other. In the same way, in data centers you have power going in one side, and coming out the other side you have the product (compute cycles) and waste heat. To optimise your data center you need to get the maximum data/compute (product) output for the minimum power in and the least waste heat generated. Sentilla thinks of data centers as data factories!

Unlike most of the data center people I have talked to, Sentilla don’t talk so much about energy savings. Instead they emphasise maximising performance – getting the most out of your existing data centers, your existing storage, your existing servers, your existing power supply. By far the greatest saving from deploying Sentilla, Joe claimed, is not the energy saving. That pales in comparison to the capital deferment savings gained from being able to delay the building of extra data center facilities by however many years, he said.

So how does Sentilla help?

Well, Sentilla analyses the energy profile of every asset in the data center, whether metered or not, and makes recommendations to improve the planning and management of data center operations. I highlighted the “whether metered or not” bit because this is an important differentiator for Sentilla – they have developed and patented what they call “virtual meters”. These are algorithms which look at the work a device is doing and, based on models Sentilla have built up, measurements they have done, and benchmarks which are publicly available, compute how much power that equipment is using.

The reason this is so important is that the most inefficient equipment in the data center is not the new stuff (which is likely to already be metered) but the legacy devices. These are the ones which need to be most carefully managed, and the ones where the greatest performance gains for the data center can be made. And because Sentilla can pull usage information from management databases like Tivoli, Sentilla doesn’t need to poll every piece of equipment in the data center (with the increased network traffic and data that would generate).
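
Sentilla haven't published the internals of their patented virtual meters, but the general idea – estimate a device's draw from a power model plus whatever utilisation data the management systems already hold – can be sketched in a few lines. Everything below (the linear model, the device names, the field names) is my own illustrative assumption, not Sentilla's implementation:

```python
# Illustrative sketch of a "virtual meter": estimate power from utilisation data
# already held in a management database. This is NOT Sentilla's algorithm - just
# the general idea, with invented model parameters.

POWER_MODELS = {
    # device model -> (idle watts, watts at full load), e.g. taken from vendor
    # specs, lab measurements or published benchmark results.
    "legacy-1u-2006": (180.0, 320.0),
    "modern-1u-2010": (70.0, 210.0),
}

def estimated_watts(model: str, cpu_utilisation: float) -> float:
    """Linear interpolation between idle and peak draw for a utilisation in [0, 1]."""
    idle_w, peak_w = POWER_MODELS[model]
    return idle_w + (peak_w - idle_w) * cpu_utilisation

# Records as they might come from an inventory/management database (e.g. Tivoli).
inventory = [
    {"host": "dev-db-014", "model": "legacy-1u-2006", "cpu": 0.04},
    {"host": "web-207",    "model": "modern-1u-2010", "cpu": 0.55},
]

for server in inventory:
    watts = estimated_watts(server["model"], server["cpu"])
    print(f'{server["host"]}: ~{watts:.0f} W estimated')
```

Even with these made-up numbers, the point comes through: the legacy box idling at 4% utilisation still comes out at roughly 185W, which is exactly the kind of machine a virtual meter is meant to surface.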

Also, because Sentilla has its virtual meters, it is a software-only product and can therefore be rolled out very quickly.

The other nice feature Sentilla has is that it can identify the energy utilisation of virtualised servers. This is important because, with the increasing ease of deployment of virtual servers, under-utilised VMs and VM clutter are starting to become issues for data centers. VM clutter isn’t just an issue for energy reasons – there are also implications for software licensing, maintenance and SLA requirements.

I asked Joe about whether Sentilla is a SaaS product and he said that while they have a SaaS version of the product, so far most of Sentilla’s clients prefer to keep their data in-house and they haven’t gone for the SaaS option.

Finally I asked about pricing and Joe said that Sentilla is priced on a subscription basis and, apparently, it is priced such that for any modest sized data center, for every $1 you put into Sentilla, you get $2 back. Or put another way, Joe said, deploying Sentilla will generally mean that you reclaim around 18-20% of your power capacity.

Disclosure: Sentilla are a client (but this post is not part of their client engagement)

You should follow me on Twitter here

Photo credit The Planet

post

Power Assure automates the reduction of data center power consumption

Data centre

If you’ve been following this blog in the last couple of weeks you’ll have noticed that I have profiled a couple of data centre energy management companies – well, today it is the turn of Power Assure.

The last time I talked to Power Assure was two years ago and they were still very early stage. At that time I talked to co-founder and CTO, Clemens Pfeiffer, this time I spoke with Power Assure’s President and CEO, Brad Wurtz.

The spin that Power Assure put on their energy management software is that not only do they offer their Dynamic Power Management solution, which provides realtime monitoring and analytics of power consumption across multiple sites, but their Dynamic Power Optimization application automatically reduces power consumption.

How does it do that?

Well, according to Brad, clients put an appliance in each of the data centres they are interested in optimising (Power Assure’s target customer base is large organisations with multiple data centres – government, financial services, healthcare, insurance, telcos, etc.). The appliance uses the management network to gather data – data may come from devices (servers, PDUs, UPSs, chillers, etc.) directly or, more frequently, it gathers data from multiple existing databases (i.e. a Tivoli db, a BMS, an existing power monitoring system and/or an inventory system) and performs data centre analytics on those data.

Data centre

The optimisation module links into existing systems management software to measure and track energy demand on a per-application basis in realtime. It then calculates the amount of compute capacity required to meet the service level agreements of that application and adds a little bit of headroom. From that compute requirement it knows the number of servers needed, so it communicates with the load balancer (or hypervisor, depending on the data centre’s infrastructure) and adjusts the size of the server pool to meet the required demand.

Servers removed from the pool can be either power-capped or put into sleep mode. As demand increases, the servers can be brought fully online and the load balancer re-balanced so the enlarged pool can meet the new level of demand. This is the opposite of the smart grid demand response concept – this is supply-side management – matching your energy consumption (supply) to the demand for compute resources.
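
Power Assure haven't published the internals of their optimisation engine, but the supply-side idea Brad describes – work out how many servers the current demand actually needs, add some headroom, and park the rest – can be roughly sketched like this. All the numbers and names below are my own assumptions, not Power Assure's:

```python
import math

# Rough sketch of supply-side capacity matching; illustrative assumptions only.
REQUESTS_PER_SERVER = 500      # sustained req/s one server can handle within its SLA
HEADROOM = 0.20                # keep 20% spare capacity on top of measured demand
MIN_POOL = 2                   # never shrink the active pool below this

def target_pool_size(current_requests_per_sec: float) -> int:
    """Servers needed to meet current demand plus headroom."""
    needed = current_requests_per_sec * (1 + HEADROOM) / REQUESTS_PER_SERVER
    return max(MIN_POOL, math.ceil(needed))

def rebalance(active_servers: int, demand_rps: float) -> str:
    target = target_pool_size(demand_rps)
    if target < active_servers:
        return f"park {active_servers - target} servers (sleep or power-cap)"
    if target > active_servers:
        return f"wake {target - active_servers} servers and rebalance the load"
    return "pool already right-sized"

print(rebalance(active_servers=40, demand_rps=9000))   # overnight lull -> park 18 servers
```

In a real deployment the per-server capacity figure would come from measured application profiles rather than a constant, and the parked servers are the ones that get power-capped or put to sleep as described above.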

A partnership with Intel means that future versions will be able to turn off and on individual components or cores to more precisely control power usage.

The software is agentless and interestingly, given the customer profile Brad outlined (pharmas, financial institutions, governments, etc.), customers log in to view and manage their power consumption data because it is SaaS delivered.

The two case studies on their site make for interesting reading and show reductions in power consumption of 56%–68%, which are not to be sneezed at.

The one client referred to on the call is NASA, with whom Power Assure are involved in a data centre consolidation program. Based on the work they have done with Power Assure, Brad informed me that NASA now expects to be able to consolidate their current 75 data centres significantly. That’ll make a fascinating case study!

You should follow me on Twitter here

Photo credit cbowns

post

Friday Morning Green Numbers round-up 02/12/2010

Green numbers

Photo credit Unhindered by Talent

Here is this Friday’s Green Numbers round-up:

  • Iberdrola Renovables SA, the world’s largest operator of wind parks, agreed to buy Spain’s largest wind farm from Gamesa Corporacion Tecnologica SA.

    Renovables, based in Valencia, paid Gamesa 320 million euros ($441 million) for 244 megawatts of power capacity in Andevalo, Spain

    tags: iberdrola, iberdrola renovables, gamesa, Wind farm, greennumbers

  • IBM recently ran a ‘Jam’ – an online discussion – on environmental sustainability and why it is important for CIOs, CEOs and CFOs to address it. The Jam involved thousands of practitioners and subject matter experts from some 200 organisations. It focused primarily on business issues and practical actions.

    Take a look at the check list below and it becomes rapidly apparent, C-level management need to tackle the issue before it is foisted upon them.

    IBM’s Institute for Business Value will fully analyse the 2080 Jam contributions, but this is the essential CIO checklist derived from comments made during the Eco-Jam.

    tags: ibm, ecojam, eco jam, cio, greennumbers

  • Data centers are, thankfully, getting a lot of attention when it comes to making them more efficient. Considering that roughly 60% of the electricity used at a data center goes to keeping the servers cool, focusing on smart cooling tactics is essential. HP has taken this to heart and has opened its first wind-cooled data center, and it’s the company’s most efficient data center to date.

    In this piece, HP claims that their data center is the world’s first wind-cooled data center – I’m not sure just how valid this is as I have heard BT only do wind-cooled data centers!

    tags: hp, bt, data center, datacenter, wind cooled, air cooled, greennumbers

  • “Sir Richard Branson and fellow leading businessmen will warn ministers this week that the world is running out of oil and faces an oil crunch within five years.

    The founder of the Virgin group, whose rail, airline and travel companies are sensitive to energy prices, will say that the “coming crisis could be even more serious than the credit crunch”.

    “The next five years will see us face another crunch – the oil crunch. This time, we do have the chance to prepare. The challenge is to use that time well,” Branson will say.”

    tags: richard branson, oil crunch, peak oil, virgin, greennumbers

  • “Fertile soil is being lost faster than it can be replenished making it much harder to grow crops around the world, according to a study by the University of Sydney.

    The study, reported in The Daily Telegraph, claims bad soil mismanagement, climate change and rising populations are leading to a decline in suitable farming soil.

    An estimated 75 billion tonnes of soil is lost annually with more than 80 per cent of the world’s farming land “moderately or severely eroded”, the report found.

    Soil is being lost in China 57 times faster than it can be replaced through natural processes, in Europe 17 times faster and in America 10 times faster.

    The study said all suitable farming soil could vanish within 60 years if quick action was not taken, leading to a global food crisis.”

    tags: greennumbers, soil, topsoil, soil fertility

  • In response to an environmental lawsuit filed against the oil giant, Chevron has fortified its defenses with at least twelve different public relations firms whose purpose is to debunk the claims made against the company by indigenous people living in the Amazon forests of Ecuador. According to them, Chevron dumped billions of gallons of toxic waste in the Amazon between 1964 and 1990, causing damages assessed at more than $27 billion.

    tags: chevron, ecuador, greennumbers, amazon rainforest, amazon, toxic waste, pollution

  • Indian mobile phone and commodity export firm Airvoice Group has formed a joint venture with public sector body Satluj Jal Vidyut Nigam to build 13GW of solar and wind capacity in a sparsely populated part of Karnataka district in south west India.

    The joint venture is budgeting to invest $50 billion over a period of 10 years, claiming it to be the largest single renewable energy project in the world.

    tags: greennumbers, india, airvoice, solar, wind, renewables, karnataka, renewable energy

  • Using coal for electricity produces CO2, and climate policy aims to prevent greenhouse gases from hurting our habitat. But it also produces SOx and NOx and particulate matter that have immediate health dangers.

    A University of Wisconsin study was able to put an economic value on just the immediate health benefits of enacting climate policy. Implications of incorporating air-quality co-benefits into climate change policymaking found coal is really costing us about $40 per each ton of CO2.

    tags: greennumbers, coal, sox, nox, particulate matter, greenhouse gases, health

Posted from Diigo. The rest of my favorite links are here.

post

Sun’s Mark Monroe on energy efficient data centers

Sun made an announcement the other day about the opening of its new Broomfield data center.

It sounded like they had done a superb job, so I asked Sun’s Director of Sustainable Computing, Mark Monroe, to come on and tell us a little more about the project.

Some of the highlights of Sun’s announcement were:

  • Greater space efficiency: A scalable, modular datacenter based on the Sun Pod Architecture led to a 66 percent footprint compression, reducing the 496,000 square feet of the former StorageTek campus in Louisville, Colo. to 126,000 square feet;
  • Reduced electrical consumption: By 1 million kWh per month, enough to power 1,000 homes in Colorado;
  • Reduced raised floor datacenter space: From 165,000 square feet to less than 700 square feet of raised floor datacenter space, representing a $4M cost avoidance;
  • Greener, cleaner architecture: Including flywheel UPS that eliminates lead and chemical waste by removing the need for batteries, and a non-chemical water treatment system, saving water and reducing chemical pollution;
  • Enhanced scalability: Incorporated 7 MW of capacity that scales up to 40 percent higher without major construction;
  • Innovative cooling: The world’s first and largest installation of the Liebert advanced XD cooling system, with dynamic cooling controls capable of supporting rack loads up to 30kW, and a chiller system 24 percent more efficient than ASHRAE standards;
  • Overall excellence: Recognized with two Ace awards for Project of the Year from the Associated Contractors of Colorado, presented for excellence in design, execution, complexity and environmental application.
post

More info please IBM…

IBM Green Data Center in Second Life

Speaking of data centers, I was delighted to read this morning of a partnership between IBM and Indian bank Kotak.

According to the release, IBM is helping the bank consolidate its server rooms into one data center and Kotak will save:

over US$1.2 million in operational efficiency and reduced energy costs over the next five years

I’d like to see some of the calculations behind those figures – $1.2m over five years sounds low to me unless it is a modest data center.

Intriguingly, the release refers to:

a chilled water-based cooling and an automatic floor pressurization system

If that is water-cooled servers (as opposed to water-cooled air handling units) then this is nice. I’d love to know what an “automatic floor pressurization system” is. Anyone know? My guess is that it is something for maintaining underfloor airflow integrity, but if it is that, then it sounds like traditional air-cooled servers, not water-cooled 🙁

Hello? Anyone from IBM have any more info on this?