Metrics have become commonplace in the data centre industry.
At a recent AFCOM Data Centre World Conference, metrics were defined as:
… parameters or measures of quantitative assessment used for measurement, comparison or to track performance.
Pre-circa 2005, the use of metrics to measure any aspect of a legacy computer room would have been rare.
Facility performance, facility design planning, operational capacity management and the objective comparison of data centres were unheard of.
These earlier environments were invariably the legacy of a computer room paradigm in which the sciences of facility and IT infrastructure were still very crude.
Post-2005, quantitative assessment can now be used to analyse the relationships between meaningful metrics.
The modern data centre has developed very specific, data-centric facility design and IT infrastructure solutions.
Because the two are so closely integrated, a data centre science has emerged in which quantitative assessment can be used to measure, compare and track relationships of:
- data hall power density (total IT kW/‘white space’ wfsm²)
- data hall space efficiency (wfsm²/rack units)
- power usage effectiveness, PUE (total facility power load/IT power load)
- rack power density (total available IT power/total number of racks)
- monthly recurring operational cost (MRC) to power ($/kW), which in effect derives ‘the cost to compute’
to mention but a few.
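As a rough sketch of how these ratios fall out of the same handful of inputs, the snippet below works through the calculations for a hypothetical data hall; every figure in it is assumed purely for illustration, not drawn from any real facility.

```python
# 'Table napkin' metric calculations for a hypothetical data hall.
# All input figures below are assumed for illustration only.

white_space_m2 = 1_000              # usable data hall ('white space') floor area, m²
total_it_load_kw = 1_200            # total critical IT power load, kW
total_facility_load_kw = 1_800      # total facility power draw (IT + cooling + losses), kW
rack_count = 300                    # number of racks installed in the hall
rack_units_total = rack_count * 42  # total rack units, assuming 42U racks
monthly_opex_usd = 540_000          # monthly recurring operational cost (MRC), $

pue = total_facility_load_kw / total_it_load_kw           # power usage effectiveness
power_density_kw_m2 = total_it_load_kw / white_space_m2   # data hall power density
space_per_ru_m2 = white_space_m2 / rack_units_total       # data hall space efficiency
rack_density_kw = total_it_load_kw / rack_count           # average rack power density
mrc_per_kw = monthly_opex_usd / total_it_load_kw          # $/kW: the 'cost to compute'

print(f"PUE:                {pue:.2f}")
print(f"Power density:      {power_density_kw_m2:.2f} kW/m²")
print(f"Space efficiency:   {space_per_ru_m2:.3f} m² per rack unit")
print(f"Rack power density: {rack_density_kw:.1f} kW per rack")
print(f"Cost to compute:    ${mrc_per_kw:,.0f} per kW per month")
```

Swap in a facility's own space, power, rack and cost figures and these few lines become a first-pass capacity and cost model.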
Many parameters and measurements once thought to be disparate and unrelated now have meaningful relationships through metrics.
This development has allowed data centre consultants, engineers and operational management to express more clearly to their senior managers — people who do not speak the language of data centres — everything from technical design to commercial issues, such as the business case to develop a new data centre.
The mathematics to derive these metrics can be simple, and is sometimes referred to as ‘table napkin’ maths.
This can be used to derive powerful metrics that determine the outcome of IT infrastructure capacity management decisions, express power efficiencies worth millions in capital expenditure or operational cost, and measure the cost to compute, to mention a few.
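For example (the figures are purely illustrative): a facility drawing 1,800 kW in total to support a 1,200 kW IT load has a PUE of 1,800/1,200 = 1.5. Improving that to 1.3 would cut the total draw to roughly 1,560 kW, and at an assumed tariff of $0.10 per kWh the 240 kW saved is worth in the order of $210,000 a year in operational cost.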
Have these data centre metrics reached their limit?
It would seem not. Whenever someone, somewhere has a light-bulb moment, a new metric is discovered that throws fresh light on the data centre.
Who would have thought that by taking just two quantities, space (m²) and power (kW), and comparing them in different ways, one could derive metrics that allow a data centre to be designed on a table napkin (well, not really, but it could start on a single sheet of paper).
Never underestimate the power of data centre metrics: they have added considerably to the new data centre’s pedigree.
They have taken the guesswork out of its design, both technically and financially.
They have contributed immeasurably to determining efficiency in space and power, thus reducing operational costs, and they have removed much of the subjectivity from judging data centre performance.
We take them for granted nowadays, and quote them often, but the various metrics have placed data centre development on a very firm footing.