In his book "Just Six Numbers: The Deep Forces That Shape The Universe", Sir Martin Rees discusses six numbers that, once you know their values, tell you all sorts of things about the universe: the way it has to look and behave. What physicists often call the universe's "texture" (which of course is crunchy).
I read this book a number of years ago, and it has shaped the way I think, even if, on one reading, I did not completely understand it. I am not a physicist, so I have to re-read such books from time to time to re-remember the details and nuances. Even so, just knowing this idea of "a few basic numbers" has changed my point of view, and one area it has influenced is how I think about data center design. You cannot design a data center, much less a Green one, without a certain basic understanding of some numbers. Robert Heinlein said (in "Time Enough For Love") that (paraphrasing) anything that cannot be done in math is opinion.
Data centers designed by opinion are unlikely to work well, so I have worked over the years to try and make sure I have the math right. Along the way I have discovered certain rules of thumb that are industry standards, but they may not apply, and knowing whether they do or not has a huge impact on the ultimate cost and efficiency of the DC.
What I would like to do here is to talk about some of those very basic numbers, with a focus on the idea that "If you can't measure it, you can't manage it".
Today's Number: Diversity Factor
When I was designing / building a data center for R&D a few years ago, I also learned to use the same words as the trades you are working with. What I was calling the "Maximum Rating of the Power Supply" was causing blank looks from the building engineers. I finally learned that the name they like is "Nameplate rating", or just "Nameplate". I resisted it at first: the nameplate was on the front, and said "Dell" or something. No ratings. On the back, somewhere on the power supply, was a label with all sorts of numbers in tiny tiny print. The one of interest was "Watts", although sometimes that had to be inferred from "Volts" and "Amps" (volts times amps equals watts: maybe that should have been today's number, as it is key).
Side note: Never use V*A=W on an auto-sensing or auto-ranging power supply, because the volts and amps are ranges, and if you multiplied together the largest numbers in each range, your number would be way too high. Today it is common to have a power supply that can plug in to 100 volts or 208 volts or even 240 volts without setting a switch: it auto-senses the power. But at higher voltages it pulls fewer amps, so multiplying together the tops of the ranges is not useful. Power supplies can also be more or less efficient at different voltages: it used to be that they were more efficient at 208v than 110v, for example, but not always, and not so much on modern gear, where it seems to make no hugely significant difference.
Side side note: Sometimes you cannot find wattage at all. I have had to look up the specifications, find maximum BTU instead, and reverse that in... but that is another post.
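The reversal itself is just a unit conversion: one watt of electrical power becomes roughly 3.412 BTU per hour of heat, so dividing runs it backwards. A minimal sketch (the 2046 BTU/hr figure is a made-up spec-sheet number):

```python
BTU_PER_HOUR_PER_WATT = 3.412  # 1 watt dissipates about 3.412 BTU/hr

def btu_hr_to_watts(btu_per_hour):
    """Back out a wattage from a spec sheet's maximum BTU/hr figure."""
    return btu_per_hour / BTU_PER_HOUR_PER_WATT

# Hypothetical spec sheet listing 2046 BTU/hr maximum heat output:
print(round(btu_hr_to_watts(2046)))  # about 600 watts
```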
If you know the nameplate wattage of every server, you still do not know how much power all those servers will consume. Here is the key point of today's entry. Consider the following: a server can be big or small. It can have few memory slots or many, few disks or many, few CPUs or many. Different CPU models in the same external model. Low-power memory as an option.
For us, as an R&D shop, we often-but-not-always buy servers that are on the low end of the configuration. If a server has 24 DIMM slots, 16 drive bays, and 2 CPU sockets, it would not be unusual to have half of the DIMM sockets populated, 2 of the drive bays, and one CPU socket (with 8 cores or something). The CPU might be the 95 watt model rather than the 130 watt one. Yet the power supply is sized to power *everything* that might be installed when the server is fully configured. Vendors don't want to add to the confusion of buying a server by offering 4 different wattages based on configuration, and more parts drives up costs too. Nope: big power supply for you, most of the time.
Add to this that older servers used less efficient power supplies, as low as 75% efficient. What mix of efficiencies sits there in that pile of servers?
Add to this that disks use far more power spinning up than once they settle down, that smaller disks use less power to spin up than big ones, and that disks seeking use more power than disks idling. Solid state disks use less power than spinning disks, but which generation of those matters too.
Look at that pile of computers again and ask: from the nameplate ratings, how can you tell anything about how much power they are actually consuming? The answer is you can't, and that is where Diversity Factor (DF hereafter) comes in.
Industry Standard Diversity Factor (DF)
Industry standard DF is to multiply the total of all the nameplate ratings by .6 (or even, in one reference I found, .67). If my server pile has two million watts total nameplate, then that DF says my data center has to be designed to deal with 1.2 million watts of power (1.34 megawatts at .67, but no one seems to use that one anymore, so that is the last we'll speak of it today). You would be very, very safe in doing that if all you were worried about was making sure you had enough cooling in the data center.
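The arithmetic is one multiply, but as a sketch of the rule of thumb (using the two-million-watt pile from above):

```python
def design_load_watts(total_nameplate_watts, diversity_factor=0.6):
    """Apply a diversity factor to the summed nameplate ratings.

    0.6 is the industry-standard DF; 0.67 is the older rule of
    thumb mentioned above.
    """
    return total_nameplate_watts * diversity_factor

# Two million watts of total nameplate:
print(design_load_watts(2_000_000))        # 1.2 million watts at DF .6
print(design_load_watts(2_000_000, 0.67))  # 1.34 million watts at DF .67
```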
If you can't measure it, then this keeps you from having a server roast, but it may not save your job as a data center designer, and it may not make the stockholders happy either. My measured DF across the R&D data centers is .439. In the scenario here, that means 880,000 watts, not 2 million or 1.2 million. Any DC built to those higher numbers does have one thing going for it: a long future as a data center, because it is over-built. But unless you designed the data center in zones where you can power up HVAC only as you need it, it will not be a very Green data center. HVAC sitting around idling still uses a lot of power: 40% of the power going into a data center probably goes to running the HVAC. More if you overbuilt like this.
If your company could have used the extra money spent on the DC overbuild for something else, they might also be miffed at the designer. That is a lot of new Mac laptops.
If you are in a really large data center with separate power meters, where even the HVAC is metered separately from the lights and the racks, then you are lucky, and also unusual. The more common case is power measured at the building, which includes the offices and break rooms and whatnot.
Newer in-rack PDUs can be bought with monitoring intelligence. That is the other extreme, in fact: you know how much power is being consumed right down to the plug, and you can pull those readings back and monitor them with something like Nlyte and know exactly what is going on at all times.
Most data center PDUs (the big ones on the DC floor, not the in-rack, power-strip-looking ones) and UPSs have power metering built into them, though, and the UPS is where I measure mine (Nlyte can see those sources too). The only things on my UPSs are the servers and network switches, not the HVAC, so as long as I have all of those items in my nameplate list, I know the nameplate total, the actual aggregate consumption, and can compute my DF.
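Put together, the computation is just measured draw divided by nameplate total. A sketch with a hypothetical (and tiny) inventory and a made-up UPS reading:

```python
# Hypothetical nameplate inventory, watts per device:
nameplate_watts = {
    "db01": 1100,
    "web01": 750,
    "web02": 750,
    "switch-a": 400,
}

ups_metered_watts = 1317  # hypothetical aggregate draw read at the UPS

total_nameplate = sum(nameplate_watts.values())
diversity_factor = ups_metered_watts / total_nameplate

print(total_nameplate)               # 3000 watts of nameplate
print(round(diversity_factor, 3))    # 0.439
```

The same division works whether the inventory is four devices or four thousand; the hard part is keeping the nameplate list complete, which is what the next post is about.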
Diversity factor is key to designing and building a power-efficient data center. Know it. It will affect the texture of what is to come (which is still crunchy).
Next time: Getting Your Nameplate Number