post

Facebook and ebay’s data centers are now vastly more transparent

ebay's digital service efficiency

Facebook announced at the end of last week a new way to report PUE and WUE for its data centers.

This comes hot on the heels of ebay's announcement of its Digital Service Efficiency dashboard – a single screen reporting the cost, performance and environmental impact of customer buy and sell transactions on ebay.

These dashboards are a big step forward in making data centers more transparent about the resources they are consuming, and about the efficiency, or otherwise, of the data centers themselves.

Even better, both organisations are working to make their dashboards a standard, thus making their data centers directly comparable with those of other organisations using the same dashboard.

Facebook Prineville Data Center dashboard

There are a number of important differences between the two dashboards, however.

To start with, Facebook’s data is in near-realtime (updated every minute, with a 2.5 hour delay in the data), whereas ebay’s data is updated once a quarter – nowhere near realtime.

Facebook also includes environmental data (external temperature and humidity), as well as options to review the PUE, WUE, humidity and temperature data for the last 7 days, the last 30 days, the last 90 days and the last year.

On the other hand, ebay’s dashboard is, perhaps unsurprisingly, more business focussed, giving metrics like revenue per user ($54), the number of transactions per kWh (45,914), the number of active users (112.3 million), etc. Facebook makes no mention anywhere of its revenue data, user data or its transactions per kWh.

ebay pulls ahead on the environmental front because it reports its Carbon Usage Effectiveness (CUE) in its dashboard, whereas Facebook completely ignores this vital metric. As we’ve said here before, CUE is a far better metric for measuring how green your data center is.
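
For anyone who wants to see how these ratios relate to each other, here is a minimal Python sketch of the three Green Grid metrics. The sample figures are purely illustrative assumptions, not Facebook’s or ebay’s actual numbers:

```python
# Green Grid data center metrics - all three share the IT energy denominator.
# The sample inputs below are illustrative assumptions only.

def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy (1.0 is ideal)."""
    return total_facility_kwh / it_kwh

def wue(site_water_litres, it_kwh):
    """Water Usage Effectiveness: litres of water used per kWh of IT energy."""
    return site_water_litres / it_kwh

def cue(total_co2_kg, it_kwh):
    """Carbon Usage Effectiveness: kg of CO2 emitted per kWh of IT energy."""
    return total_co2_kg / it_kwh

it_kwh = 1_000_000           # hypothetical annual IT energy
facility_kwh = 1_450_000     # hypothetical total facility energy
water_litres = 2_200_000     # hypothetical annual site water use
co2_kg = facility_kwh * 0.5  # hypothetical grid intensity of 0.5 kg CO2/kWh

print(f"PUE: {pue(facility_kwh, it_kwh):.2f}")
print(f"WUE: {wue(water_litres, it_kwh):.2f} L/kWh")
print(f"CUE: {cue(co2_kg, it_kwh):.3f} kg CO2/kWh")
```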

Facebook does get some points for reporting its carbon footprint elsewhere, but not for these data centers. This was obviously decided at some point in the design of its dashboards, and one has to wonder why.

The last big difference between the two is in how they are trying to get their dashboards more widely used. Facebook says it will submit the code for its dashboard to the Open Compute repository on GitHub. ebay, on the other hand, launched theirs at the Green Grid Forum 2013 in Santa Clara. They also published a PDF solution paper, which is a handy backgrounder, but nothing like the equivalent of dropping your code into GitHub.

The two companies could learn a lot from each other on how to improve their current dashboard implementations, but more importantly, so could the rest of the industry.

What are IBM, SAP, Amazon, and the other cloud providers doing to provide these kinds of dashboards for their users? GreenQloud has had this for its users for ages, and now Facebook and ebay have zoomed past them too. When Facebook contributes its codebase to GitHub, the cloud companies will have one less excuse.

Image credit nicadlr

post

The Switch SuperNAP data centre – one of the most impressive I’ve been in

Switch SuperNAP data centre

If you were going to build one of the world’s largest data centres you wouldn’t intuitively site it in the middle of the Nevada desert, but that’s where Switch sited its SuperNAP campus. I went on a tour of the data centre recently when in Las Vegas for IBM’s Pulse 2012 event.

The data centre is impressive. And I’ve been in a lot of data centres (I’ve even co-founded and been part of the design team of one in Ireland).

The first thing which strikes you when visiting the SuperNAP is just how seriously they take their security. They have outlined their various security layers in some detail on their website but nothing prepares you for the reality of it. As a simple example, throughout our entire guided tour of the data centre floor space we were followed by one of Switch’s armed security officers!

The data centre itself is just over 400,000 sq ft in size, with plenty of room within the campus to build out two or three more similarly sized data centres should the need arise. And although the data centre is one of the world’s largest, at 1,500 Watts per square foot it is also quite dense. This facilitates racks of 25kW, and during the tour we were shown cages containing 40 x 25kW racks which were being handled with apparent ease by Switch’s custom cooling infrastructure.

Switch custom cooling infrastructure

Because SuperNAP wanted to build out a large scale dense data centre, they had to custom design their own cooling infrastructure. They use a hot aisle containment system with the cold air coming in from overhead and the hot air drawn out through the top of the contained aisles.

The first immediate implication of this is that there are no raised floors required in this facility. It also means that when walking around the data centre, you are walking in the data centre’s cold aisle. And as part of the design of the facility, the T-SCIFs (thermal separate compartment in facility – heat containment structures) are where the contained hot aisles’ air is extracted, and the external TSC600 quad-process chiller systems generate the cold air externally for delivery to the data floor. This form of design means that there is no need for any water piping within the data room, which is a nice feature.

Through an accident of history (involving Enron!) the SuperNAP is arguably the best connected data centre in the world, a fact they can use to the advantage of their clients when negotiating connectivity pricing. And consequently, connectivity in the SuperNAP is some of the cheapest available.

As a result of all this, the vast majority of enterprise cloud computing providers have a base in the SuperNAP. So does ebay’s 56 petabyte Hadoop cluster – yes, 56 petabytes!

US electricity generation

Given that I have regularly bemoaned cloud computing’s increasing energy and carbon footprint on this blog, you won’t be surprised to know that one of my first questions to Switch was about their energy provider, NV Energy.

According to NV Energy’s 2010 Sustainability Report [PDF], coal makes up 21% of the generation mix and gas accounts for another 63.3%. While 84% of electricity generation coming from fossil fuels sounds high, the 21% figure for coal is low by US standards, as the graph on the right details.

Still, it is a long way off the 100% of electricity from renewables that Verne Global’s new data centre has.

Apart from the power generation profile, which, in fairness to Switch, is outside their control (and could be considerably worse), the SuperNAP is, by far, the most impressive data centre I have ever been in.

Photo Credit Switch

post

Colt Technology and Verne Global’s dual renewably sourced data centre in Iceland

Icelandic landscape

Iceland’s a funny place. Despite the name, it never actually gets very cold there. The average low temperature in Winter is around -5C (22F) and the summer highs average around 13C (55.5F). Iceland is also unusual in that 100% of its electricity production comes from renewable energy (about 70% from hydro and 30% from geothermal).

I have written here several times about the high carbon cost of cloud computing, so when I received an invitation from Colt Technology to view the new data centre they had built with Verne Global in Iceland, I nearly bit their hand off!

The data centre is built on an 18-hectare (approximately 45-acre) complex west of Reykjavik, just beside Keflavík Airport. The site is geologically stable and according to Verne:

The facility is situated on the site of the former Naval Air Station Keflavik, a key strategic NATO base for over 50 years and chosen for its extremely low risk of natural disaster. Located well to the west of all of Iceland’s volcanic activity, arctic breezes and the Gulf Stream push volcanic effects away from the Verne Global site and toward Western Europe.

Cold aisle contained racks in Verne Global's Iceland facility

The data centre was built using Colt Telecom’s modular data centre design. This is essentially a data centre in a box! The modular data centre is built by Colt and shipped to site (in this case, it was literally put on a ship and shipped to site – but the modules fit on a standard wide-load 18-wheeler), where it is commissioned. In the case of the Verne Global data centre, the build took just 4 months because of the modular nature of the Colt solution, instead of the more typical 18 months. Also, modularity means it will be relatively straightforward to add extra capacity to the site, while keeping up-front data centre development costs down.

The data centre has an impressive number of configurable efficiency features built in. In the Verne Global facility, cold aisle containment is used, and it is a wise choice in this environment. The facility uses only outside air for cooling (no chillers), so it makes sense to vent the hot air from the servers into a room being cooled by outside air. In winter, if the outside air is too cold, it can be mixed with hot air from the servers before entering the underfloor space to cool the servers.
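
To make that mixing idea a bit more concrete, here is a rough sketch of the air-blending arithmetic involved. The temperatures and the simple energy-balance rule are my own illustrative assumptions, not Verne Global’s actual control logic:

```python
def return_air_fraction(outside_temp_c, return_temp_c=35.0, supply_setpoint_c=18.0):
    """
    Fraction of hot server return air to blend with outside air so the mixed
    supply air hits the setpoint (simple energy balance, equal specific heats).
    The temperatures here are illustrative assumptions, not Verne Global's.
    """
    if outside_temp_c >= supply_setpoint_c:
        return 0.0  # outside air alone is warm enough - no recirculation needed
    # f*T_return + (1-f)*T_outside = T_supply  =>  f = (T_s - T_o) / (T_r - T_o)
    fraction = (supply_setpoint_c - outside_temp_c) / (return_temp_c - outside_temp_c)
    return min(fraction, 1.0)

for t in (-10, 0, 10, 20):
    print(f"Outside {t:>3}C -> mix in {return_air_fraction(t):.0%} return air")
```

In other words, the colder it gets outside, the larger the share of server exhaust that gets recirculated into the supply air.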

The underfloor space is kept free of plenums and obstructions to allow an unimpeded flow of air from the variable speed fans – this minimises the work the fans need to do, increasing their efficiency.

From an energy perspective, though, what makes the site unique is that it sources its electricity from dual renewable sources (hydro and geothermal). Iceland is in quite a unique situation with its abundance of cheap renewable power. Energy is so cheap in Iceland that aluminium smelting plants locate themselves there to take advantage of the power. These plants require roughly 400-500MW of constant power, so adding even 10 large data centres to the grid there would hardly be noticed on the system!

Another unique aspect of Icelandic electricity is that because it is renewably sourced, its pricing is predictable (unlike fossil fuels). In fact, the Icelandic electricity provider, Landsvirkjun, offers contracts with guaranteed pricing for up to 12 years. Also, the Icelandic grid ranks 2nd in the world for reliability and has the most competitive pricing in Europe (currently offering $43/MWh for 12 years as a public offering – with better private offerings potentially available).

Speaking to Verne Global’s Lisa Rhodes while in Iceland, I was told that because Verne had guaranteed energy pricing from Landsvirkjun for the next 20 years, it would be able to pass this on to Verne’s hosting customers and, in fact, she claimed that hosting in the Verne facility would cost 50-60% of the cost of hosting on the east coast of the US.

On the connectivity front, Colt announced that they were putting a Colt POP in the Verne facility, so it is connected directly into the Colt backbone.

Also, the Emerald Express fibre-optic cable, which is due to be commissioned late this year, has been designed to support 100 x 100Gbps on each of its six fibre pairs, which should easily meet any connectivity requirements Iceland has well into the future.

Interestingly, one of Verne Global’s first customers is GreenQloud – a company offering green public compute cloud services (an AWS EC2 and S3 drop-in replacement). With this, can we finally say that cloud computing can be green? Unfortunately, with cloud’s propensity to promote consumption of services, no, but at least with GreenQloud, your cloud can have a vastly reduced carbon footprint.

Full disclosure – I had my travel and accommodation paid for to visit this facility. And Colt has a POP in CIX, the data centre I co-founded in Cork before joining GreenMonk.

post

Facebook hires Google’s former Green Energy Czar Bill Weihl, and increases its commitment to renewables

Christina Page, Yahoo & Bill Weihl, Google - Green:Net 2011

Google has had an impressive record in renewable energy. They invested over $850m in renewable energy projects covering geothermal, solar and wind energy. They entered into 20 year power purchase agreements with wind farm producers, guaranteeing to buy their energy at an agreed price for twenty years – giving the wind farms an income stream with which to approach investors about further investment, and giving Google certainty about the price of its energy for the next twenty years. A definite win-win.

Google also set up RE < C – an ambitious research project looking at ways to make renewable energy cheaper than coal (unfortunately this project was shelved recently).

And Google set up a company called Google Energy to trade energy on the wholesale market. Google Energy buys renewable energy from renewable producers and when it has an excess over Google’s requirements, it sells this energy and gets Renewable Energy Certificates for it.

All hugely innovative stuff and all instituted under the stewardship of Google’s Green Energy Czar, Bill Weihl (on the right in the photo above).

However, Bill, who left Google in November, is now set to start working for Facebook this coming January.

Facebook’s commitment to renewable energy has not been particularly inspiring to date. They drew criticism for the placement of their Prineville data center because, although it is highly energy efficient, it sources its electricity from PacifiCorp, a utility which mines 9.6 million tons of coal every year! Greenpeace mounted a highly visible campaign calling on Facebook to unfriend coal using Facebook’s own platform.

The campaign appears to have been quite successful – Facebook’s most recent data center announcement was about its latest facility in Lulea, Sweden. The data center, when it opens in 2012, will source most of its energy from renewable sources, and the northerly latitude of Lulea means it will have significant free cooling at its disposal.

Then in December of this year (2011) Facebook and Greenpeace issued a joint statement [PDF] where they say:

Facebook is committed to supporting the development of clean and renewable sources of energy, and our goal is to power all of our operations with clean and renewable energy.

In the statement Facebook commits to adopting a data center siting policy which states a preference for clean and renewable energy and crucially, they also commit to

Engaging in a dialogue with our utility providers about increasing the supply of clean energy that power Facebook data centers

So, not only will Facebook decide where its future data centers will be located based on the availability of renewable energy, but it will also encourage its existing utility providers to increase the amount of renewables in their mix. This is a seriously big deal as it increases the demand for renewable energy from utilities. As more and more people and companies demand renewable energy, utilities will need to source more renewable generation to meet this demand.

And all of this is before Google’s former Green Energy Czar officially joins Facebook this coming January.

If Bill Weihl can bring to Facebook the kind of innovation and enthusiasm he engendered at Google, we could see some fascinating energy announcements coming from Facebook in the coming year.

Photo credit Jaymi Heimbuch

post

GreenMonk TV talks flywheel UPSs with Active Power

I attended the 2011 DataCenterDynamics Converged conference in London recently and at it I chatted to a number of people in the data center industry about where the industry is going.

One of these was Active Power’s Graham Evans. Active Power makes flywheel UPSs, so we talked about the technology behind these and how they are now becoming a more mainstream option for data centers.

Tom Raftery: Hi everyone, welcome to GreenMonk TV, we are at the DCD Converge Conference in London and with me I have Graham Evans from Active Power. Graham you guys, you make the spinning UPSs.

Graham Evans: That’s right, yes, the flywheel UPSs – kinetic energy. So behind us here we have our powerhouse. So what we found with the flywheel UPS is, because of its high density environment, the fact it doesn’t need cooling, the fact that it is really suited to a containerized environment, we’ve put it in a powerhouse to show the guys in DCD the benefits that we can provide from a systems perspective.

Tom Raftery: So what the flywheel UPS does is it takes in electricity, while the electricity is running, spins wheels really, really fast and then if there is a break it uses that kinetic energy to keep the system up.

Graham Evans: Not quite, so the flywheel itself is spinning all the time as an energy storage device. The UPS system is primarily conditioning the power. So as the power comes through it’s a parallel online system, all of the kilowatts flow through to the load and our converters regulate that power to make sure you get UPS grade output through to your critical load. At the same time the flywheel is spinning it’s sat there as a kinetic energy store ready to bridge the gap when it’s required to do so.

Active Power flywheel UPS

An Active Power flywheel UPS

So voltage or mains fails on the input, the flywheel itself changes state instantaneously from a motor to a generator and we extrapolate that kinetic energy through some converters to support the load, start our diesel engine, and that then becomes the primary power source through to the system.

Tom Raftery: And you’ve got the diesel engine in fact built into the system here – it’s on the right hand side as we are looking at it here. So there is a reason that you have your own diesel engine fitted in there.

Graham Evans: Yes, so we are not tied to one particular diesel engine manufacturer, so what we do as a complete system is designed as a critical power solution. So the thought really is, from a client point of view, we can be flexible in terms of their requirements. We can size the engine to support the UPS load only or maybe we can pick up some mechanical loads as well. We make some enhancements to the diesel, so we have our own diesel controller to start the diesel quickly. We have our own product we call GenSTART, which allows us to have a UPS-backed starter mechanism to the system so we can use that UPS power to start it.

Tom Raftery: And that’s important because the flywheels don’t stay up as long as, say, a battery bank.

Graham Evans: It’s important because the types of loads that we are supporting need that quick power restoration, so from a UPS point of view we need to restore or keep power instantaneously – that’s the job of a UPS, a no-break power supply – but we also find with mechanical loads, certainly in high density datacenter environments, that we need to restore the short-break mechanical loads very quickly. So the system you see here is able to do that. We continuously support the UPS load and we can bring on the cooling load ten seconds afterwards. So very fast starting, very robust system.
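
As an aside, the arithmetic behind that “bridge the gap” role is straightforward kinetic energy: E = ½Iω². Here is a rough sketch with hypothetical flywheel figures – not Active Power’s actual specs – showing why the ride-through is measured in seconds rather than minutes, and why the diesel start has to be so quick:

```python
import math

# Back-of-the-envelope flywheel ride-through. All figures are hypothetical,
# purely to illustrate the arithmetic - they are not Active Power's specs.

moment_of_inertia = 15.0        # kg*m^2, hypothetical rotor
rpm = 7700                      # hypothetical spin speed
omega = rpm * 2 * math.pi / 60  # convert rpm to rad/s

stored_energy_j = 0.5 * moment_of_inertia * omega ** 2  # E = 1/2 * I * w^2

load_kw = 250                   # hypothetical critical load
# Upper bound on ride-through; real systems only draw down to a minimum speed.
ride_through_s = stored_energy_j / (load_kw * 1000)

print(f"Stored energy: {stored_energy_j / 1e6:.1f} MJ")
print(f"Ride-through at {load_kw} kW: roughly {ride_through_s:.0f} seconds")
```

A few tens of seconds is plenty to start a diesel, but nothing like the minutes of autonomy a battery string gives – hence the emphasis on fast, reliable engine starting.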

Tom Raftery: And the whole flywheel UPS idea is a relative newcomer to the datacenter environment?

Graham Evans: Not especially – I think it feels like that sometimes, but we have been around for 15 years as a business and we have 3000-plus installations worldwide. Certainly we are not as commonplace as some other technologies, but we are probably one of the fastest growing companies globally. So, yeah, not brand new – 15 years in business – but yeah, the concept’s really taken off and it’s been really successful for us.

Tom Raftery: Cool. Graham, that’s been fantastic, thanks for coming on the show.

Graham Evans: No problem, thank you, cheers.

post

GreenMonk TV talks data center standardisation with Schneider Electric

I attended the 2011 DataCenterDynamics Converged conference in London recently and at it I chatted to a number of people in the data center industry about where the industry is going.

The first of these was Paul-François Cattier of Schneider Electric who talked about the need for standardisation of infrastructure in the data centers to speed up the time to build.

Tom Raftery: Hi everyone, welcome to GreenMonk TV. We’re at DCD London Converge and with me I have Paul-François Cattier, who is VP Data Center Solutions for Schneider Electric. Paul welcome to the show.

Paul-François Cattier: Thank you Tom.

Tom Raftery: So Paul you guys made a bit of an announcement at the show here, can you talk to us a little bit about that?

Paul-François Cattier: Yes, we announced what we call the way to bring standardization and modular IT into the data centre, to bring energy efficiency to the data centre. So basically, what we think is that today’s data centre industry is still very immature – in its infancy – and we need to bring it to a stage of maturity to be more efficient.

Tom Raftery: So tell me why do you think a modular infrastructure is better for data centres?

Paul-François Cattier: It’s not really the modular infrastructure that is better for energy efficiency in the data centre, it is really the standardization that allows this modular IT. In fact, what we need to bring into the data centre, I think, to bring it to maturity, is standardization, and to be able to standardize the data centre you need to find a point of granularity of modularity where you bring the standardization. A lot of these different data centres have been serving different business needs, but this level of standardization will allow a lot of CapEx and OpEx efficiency in your data centre, and you know that most of the OpEx of the data centre is in the energy.

Tom Raftery: So talk to me about standardization. What exactly are you talking about when you say standardization, standardization of?

Paul-François Cattier: Most data centres today are designed as unique designs. So each time it’s a very long process to design a data centre because it’s a unique design – you have 20 to 24 months between deciding to do a data centre and coming to the completion of the data centre. So with standardization you are using subsystems that are completely standardized – manufacturer-built, manufacturer-tested, with repeatable performance – and you are using these bricks, or these blocks, or this Lego if you want, of standardized subsystems to build your data centre.

Schneider Electric Power and Cooling modules

By doing this, you can really build your data centre in three to four months with optimized performance, and ensure, thanks to this standardization, that the management needed to really tie the data centre’s physical infrastructure and its energy consumption to the effective IT of your data centre is enabled, because as the data centre is very much built in standardized modules, it’s very easy to acquire this type of management system. It’s like your GPS in your car: if you would like to develop it yourself, you will spend maybe 20 years before being able to use your own GPS designed by you.

What you do is share the R&D to have an excellent GPS system that is sold to many, many customers to spread out the cost. So this is what standardization and modularity will bring into the data centre world.

Tom Raftery: How far away do you reckon we are from that becoming the norm? I mean you talk about the data centre industry currently being quite immature. When do you reckon we will be that much further along that this will be the norm?

Paul-François Cattier: Well, we have a long way to go. Today 80% of newly built data centres are still built in a very traditional way that is totally inefficient in terms of CapEx use, in terms of OpEx use and in terms of energy efficiency, and you know that we at Schneider Electric are working across the whole market to bring energy efficiency into this market. And really we believe that standardization and modularity enable the management aspect of the challenge to be addressed, and allow this energy efficiency – and when you save 1 kilowatt at the plug, you save most of the time 3 kilowatts at the generation plant.

Tom Raftery: Great. Well, François, that’s been fantastic, thanks for coming on the show.

Paul-François Cattier: Thank you very much Tom.

post

Power Usage Effectiveness (PUE) is a poor data center metric

Problems with PUE

Power Usage Effectiveness (PUE) is a widely used metric which is supposed to measure how efficient data centers are. It is the unit of data center efficiency regularly quoted by all the industry players (Facebook, Google, Microsoft, etc.).
However, despite its widespread usage, it is a very poor measure of data center energy efficiency or of a data center’s Green credentials.

Consider the example above (which I first saw espoused here) – in the first row, a typical data center has a total draw of 2MW of electricity for the entire facility, of which 1MW goes to the IT equipment (servers, storage and networking equipment). This results in a PUE of 2.0.

If the data center owner then goes on an efficiency drive and reduces the IT equipment energy draw by 0.25MW (by turning off old servers, virtualising, etc.), then the total draw drops to 1.75MW (ignoring any reduced requirement for cooling from the lower IT draw). This causes the PUE to increase to 2.33.

When lower PUEs are considered better (1.0 is the theoretical minimum), this is a ludicrous situation.
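
The arithmetic is simple enough to check for yourself – here is the example from the table above as a few lines of Python:

```python
def pue(total_facility_mw, it_mw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_mw / it_mw

# Typical data center from the example above.
it_load = 1.0   # MW drawn by the IT equipment
overhead = 1.0  # MW for cooling, power distribution, lighting, etc.
print(f"Before: PUE = {pue(it_load + overhead, it_load):.2f}")  # 2.00

# Virtualise / decommission 0.25 MW of IT load; overhead left unchanged
# (ignoring any reduced cooling requirement, as in the example above).
it_load -= 0.25
print(f"After:  PUE = {pue(it_load + overhead, it_load):.2f}")  # 2.33

# The facility now draws 0.25 MW less in total, yet its PUE looks worse.
```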

Then consider that not only is PUE a poor indicator of a data center’s energy efficiency, it is also a terrible indicator of how Green a data center is, as Romonet’s Liam Newcombe points out.

Problems with PUE

Consider the example above – in the first row, a typical data center with a PUE of 1.5 uses an average energy supplier with a carbon intensity of 0.5kg CO2/kWh resulting in carbon emissions of 0.75kg CO2/kWh for the IT equipment.

Now look at the situation with a data center with a low PUE of 1.2 but sourcing energy from a supplier who burns a lot of coal, for example. Their carbon intensity of supply is 0.8kg CO2/kWh resulting in an IT equipment carbon intensity of 0.96kg CO2/kWh.

On the other hand look at the situation with a data center with a poor PUE of 3.0. If their energy supplier uses a lot of renewables (and/or nuclear) in their generation mix they could easily have a carbon intensity of 0.2kg CO2/kWh or lower. With 0.2 the IT equipment’s carbon emissions are 0.6kg CO2/kWh.

So, the data center with the lowest PUE by a long shot has the highest carbon footprint. While the data center with the ridiculously high PUE of 3.0 has by far the lowest carbon footprint. And that takes no consideration of the water footprint of the data center (nuclear power has an enormous water footprint) or its energy supplier.
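
Again, the second table is trivial to reproduce: the IT equipment’s carbon intensity is simply the PUE multiplied by the carbon intensity of the electricity supply.

```python
def it_carbon_intensity(pue, grid_kg_co2_per_kwh):
    """kg of CO2 emitted per kWh delivered to the IT equipment."""
    return pue * grid_kg_co2_per_kwh

examples = [
    ("Typical DC, average supplier", 1.5, 0.5),
    ("Low PUE, coal-heavy supplier", 1.2, 0.8),
    ("High PUE, renewable/nuclear supplier", 3.0, 0.2),
]

for name, pue, grid in examples:
    print(f"{name}: {it_carbon_intensity(pue, grid):.2f} kg CO2/kWh")
```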

The Green Grid is doing its best to address these deficiencies by coming up with other useful metrics such as Carbon Usage Effectiveness (CUE) and Water Usage Effectiveness (WUE).

Now, how to make these the standard measures for all data centers?

The images above are from the slides I used in the recent talk I gave on Cloud Computing’s Green Potential at a Green IT conference in Athens.

post

Learnings from Google’s European Data Center Summit

Google's EU Data Center Summit conference badge

I attended Google’s European Data Center Summit earlier this week and it was a superb event. The quality of the speakers was tremendous and the flow of useful information was non-stop.

The main take-home from the event is that there is a considerable amount of energy still being wasted by data centers – and that this is often easy to fix.

Some of the talks showed exotic ways to cool your data center. DeepGreen, for example, chose to situate itself beside a deep lake, so that it could use the lake’s cold water for much cheaper cooling. Others used river water, and Google mentioned their new facility in Finland where they are using seawater for cooling. Microsoft mentioned their Dublin facility, which uses air-side economisation (i.e. it just brings in air from outside the building) and so is completely chiller-less. This is a 300,000 sq ft facility.

IBM’s Dr Bruno Michel did remind us that it takes ten times more energy to move a compressible medium like air than it does to move a non-compressible one like water – but then, not all data centers have the luxury of a deep lake nearby!

Google's Joe Kava addressing the European Data Center Summit

Both Google and UBS, the global financial services company, gave hugely practical talks about simple steps to reduce your data center’s energy footprint.

Google’s Director of Operations, Joe Kava (pic on right) talked about a retrofit project where Google dropped the PUE of five of its existing data centers from 2.4 down to 1.5. They did this with an investment of $25k per data center and the project yielded annual savings of $67k each!
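
Those figures are worth dwelling on – a quick back-of-the-envelope calculation, using only the numbers quoted above, shows the payback period is measured in months:

```python
# Figures quoted by Google's Joe Kava for the five retrofitted data centers.
investment = 25_000       # USD per data center, one-off
annual_savings = 67_000   # USD per data center, per year

payback_months = investment / annual_savings * 12
print(f"Simple payback: {payback_months:.1f} months per data center")

# PUE fell from 2.4 to 1.5. For an unchanged IT load that is a
# (2.4 - 1.5) / 2.4 = 37.5% cut in total facility energy.
reduction = (2.4 - 1.5) / 2.4
print(f"Implied total energy reduction: {reduction:.1%}")
```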

What kind of steps did they take? They were all simple steps which didn’t incur any downtime.

The first step was to do lots of modelling of the airflow and temperatures in their facilities. With this as a baseline, they then went ahead and optimised the perforated tile layout! The next step was to get the server owners to buy into the new expanded ASHRAE limits – this allowed Google to nudge the setpoint for the CRACs up from the existing 22C to 27C, with significant savings accruing from the reduced cooling required from this step alone.

Further steps were to roll out cold aisle containment and movement sensitive lighting. The cold aisles were ‘sealed’ at the ends using Strip Doors (aka meat locker sheets). This was all quite low-tech, done with no downtime and again yielded impressive savings.

Google achieved further efficiencies by simply adding some intelligent rules to their CRACs so that they turned off when not needed and came on only if/when needed.
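
Conceptually, those rules are as simple as an on/off decision with a deadband – something along these lines, where the 27C setpoint comes from the earlier step but the deadband is my own illustrative assumption, not Google’s actual control logic:

```python
def crac_should_run(cold_aisle_temp_c, currently_running,
                    setpoint_c=27.0, deadband_c=1.5):
    """
    Simple on/off rule with a deadband so units don't short-cycle.
    The 27C setpoint is the one mentioned above; the deadband is an
    illustrative assumption, not Google's actual control logic.
    """
    if cold_aisle_temp_c > setpoint_c:
        return True   # aisle too warm - switch the unit on
    if cold_aisle_temp_c < setpoint_c - deadband_c:
        return False  # comfortably below setpoint - switch the unit off
    return currently_running  # inside the deadband - keep the current state

running = False
for temp in (24.0, 26.5, 28.0, 26.5, 25.0):
    running = crac_should_run(temp, running)
    print(f"{temp}C -> CRAC {'on' if running else 'off'}")
```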

UBS’ Mark Eichenberger echoed a lot of this in his own presentation. UBS has a fleet of data centers globally whose average age is 10 years, and some are as old as 30. Again, simple, non-intrusive steps like cold-aisle containment and movement sensitive lighting are saving UBS 2m Swiss Francs annually.

Google’s Chris Malone had other tips. If you are at the design phase, try to minimise the number of AC<->DC conversion steps for the electricity, and look for energy efficient UPSs.

Finally, for the larger data center owners, eBay’s Dean Nelson made a very interesting point. When he looked at all of eBay’s apps, he saw they were all in Tier 4 data centers. He realised that 80% of them could reside in Tier 2 data centers, and by moving them there, he cut eBay’s opex and capex by 50%.

Having been a co-founder of the Cork Internet eXchange data center, it was great to hear the decisions we made back then around cold aisle containment and highly energy efficient UPSs being vindicated.

Even better though was that so much of what was talked about at the summit was around relatively easy, but highly effective retrofits that can be done to existing data centers to make them far more energy efficient.

You should follow me on Twitter here
Photo credit Tom Raftery

post

Data center war stories sponsored blog series – help wanted!

Data center work

Sentilla are a client company of ours. They have asked me to start a discussion here around the day-to-day issues data center practitioners are coming up against.

This is a very hands-off project from their point of view.

The way I see it happening is that I’ll interview some DC practitioners, either via Skype video or over the phone; we’ll have a chat about DC stuff (war stories, day-to-day issues, that kind of thing), and I’ll record the conversations and publish them here along with transcriptions. They’ll be short discussions – simply because people rarely listen to or watch rich media longer than 10 minutes.

There will be no ads for Sentilla during the discussions, and no mention of them by me – apart from an intro and outro simply saying the recording was sponsored by Sentilla. Interviewees are free to mention any solution providers and there are no restrictions whatsoever on what we talk about.

If you are a data center practitioner and you’d like to be part of this blog series, or simply want to know more, feel free to leave a comment here, or drop me an email to [email protected]

You should follow me on Twitter here
Photo credit clayirving

post

Top 10 Data Center blogs

Data center air and water flows

Out of curiosity, I decided to see if I could make a list of the top 10 data center focussed blogs. I did a bit of searching around and found around thirty blogs related to data centers (who knew they were so popular!). I went through the thirty blogs and eliminated some based on arbitrary criteria I made up on the spot, like post frequency, off-topic posts, etc., until I came up with a list I felt was the best. Then I counted them and lo! I had exactly 10 – phew, no need to eliminate any of the good ones!

So without further ado – and in no particular order, I present you with my Top 10 Data Center blogs:

What great data center blogs have I missed?

The chances are there are superb data center blogs out there which my extensive 15 seconds of research on the topic failed to uncover. If you know of any, feel free to leave them in the comments below.

Image credit Tom Raftery