Thursday, 1 July 2010
Damn the laws of thermodynamics! There are power losses all over the data centre which contribute to poor energy utilisation. These losses present themselves as heat. If it wasn’t for energy losses we would have perpetual motion data centres.
Loss of energy in data centres has focused the minds of many in the industry, especially since the advent of the 21st century, high-density computing and our obsession with global warming. And out of the desire to produce the most energy-efficient data centre has come one profound acronym: PUE.
A bunch of smart industry dudes from The Green Grid got around a table with the aim of getting a handle on how to measure energy efficiency, so the data centre industry could tell whether it was going forward or backward. Out of this, The Green Grid derived the next best thing to E=mc², that is, PUE: power usage effectiveness. (PUE is another of those damn three-letter IT acronyms. Boring! If you and I got a dollar for every time it's quoted at data centre forums we would probably be billionaires by now. But, please, can we get some cool-looking equation instead?)
PUE is used to place a value on a data centre’s energy utilisation efficiency. It is the ratio of the total power drawn by the data centre to the power that actually reaches the IT equipment.
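To put a rough number on it (the figures below are invented purely for illustration), the ratio works out like this:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical data centre: 1000 kW drawn at the meter,
# of which only 625 kW reaches the servers and storage.
print(pue(1000, 625))  # 1.6
```

A PUE of 1.6 means that for every watt doing useful work on silicon, another 0.6 watts is being spent on the UPS, cooling and the rest of the back end.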
Power usage by the facility infrastructure—such as the UPS, the mechanical plant, etc.—and its losses contribute nothing to crunching data, which is where the real productivity lies for any business. If, at the end of the day, the silicon circuit that is doing the processing of 1s and 0s is only consuming a small percentage of the power in the data centre, and the rest is being gobbled up by all of the backend power distribution and cooling, then the data centre—from an energy perspective—is pretty inefficient. Something to be ashamed of in today’s energy-conscious world.
We can’t expect that every electron that flows into a data centre is sucked up by the logic of the servers or storage to produce answers and results for your Facebook page, or your web query, or your spreadsheet. Because of this, we are witnessing a technological battle in the data centres of this century. This battle looks to provide ever-smarter solutions to solve the energy losses of power and cooling equipment in data centres.
The perfect world would be to expend all of the electrical energy on the silicon or, expressed in PUE terms, for the data centre’s energy efficiency to have a value of ‘1’. Mind you, the heat produced by the electronics (logic) is wasted energy, but as long as the sub-atomic world of protons, neutrons and electrons moves there will always be heat (unless we reach the cosmological heat death, but that’s some time off).
If the holy grail of PUE was ‘0’ rather than ‘1’, then data centres would be switched off. A PUE any higher than ‘1’ and we are experiencing power losses to support the electronics in the servers and storage, etc.
I’ve been pondering how one could get a data centre to the PUE value of ‘1’ and I think that the solution is to get dirty and cheap. (This is a flight of pure fantasy, not measuring up to the data centre designers’ holy book of TIA-942.)
This data centre would be a basic shed or warehouse fitted with cabinets holding predominantly simple shelving. There would be no sophisticated power train and no cooling plant. It would be stacked to the gunnels with high-end laptops—laptops like the ones you and I use every day. They aren’t cooled by special infrastructure, they run off the mains power, they get knocked around, they survive in dirty environments, they are powered up and down continuously, and they just live on. My laptop lives in a domestic environment and has survived endless days in temperatures above 40 degrees Celsius.

This dirty and cheap data centre would have a central DC supply connected to common DC buses (getting rid of the individual power packs each laptop would otherwise need). The environment would be very domestic. The storage and networking equipment would also reside in this environment, and it could have protection against power outages, probably from a diesel rotary UPS. The laptops, of course, have their own battery backup, which would let them ride through short-term disruptions. The whole idea of ‘A’ and ‘B’ feeds, for power or even networking, would not be a design premise. Yep, it’s cheap.
There would be some losses in the power train, but the PUE would get very close to ‘1’. The intention is to keep it simple and just compute. Do away with UPSs as we know them, and do away with any complex cooling solutions other than maybe air circulated by fans. It wouldn’t be sexy, it wouldn’t be a showcase, but it would be as close as we could get to the pure dream of spending energy only on silicon.
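Just to sketch what “very close to ‘1’” might mean—with numbers invented entirely for the sake of the example—suppose the shed’s only overhead were about 5% lost in the DC conversion, busbars and circulation fans:

```python
# Invented figures for the dirty-and-cheap shed: 1000 kW of laptop load,
# plus roughly 50 kW lost in DC conversion, busbars and circulation fans.
it_load_kw = 1000
overhead_kw = 50

pue = (it_load_kw + overhead_kw) / it_load_kw
print(pue)  # 1.05
```

Compare that with the 1.5-plus figures of a conventionally engineered facility, and the dirty-and-cheap approach starts to look less like a joke.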
As I say, it’s pure fantasy, a dream. But that is the only way to reach PUE nirvana … unless you want to sit on top of a mountain contemplating your data centre’s navel.