
Green IT

Steve Carl

By The Numbers

Posted by Steve Carl Nov 22, 2010

In his book "Just Six Numbers: The Deep Forces That Shape The Universe", Sir Martin Rees discusses six numbers whose values, once you know them, tell you all sorts of things about the universe: the way it has to look and behave, what physicists often call the universe's "texture" (which of course is crunchy).

 

I read this book a number of years ago, and it has shaped the way I think, even if I did not completely understand it on one reading. I am not a physicist, so I have to re-read such books from time to time to re-remember details and nuances. Even so, just knowing this idea of "a few basic numbers" has changed my point of view, and one area it has influenced is how I think about data center design. You cannot design a data center, much less a Green one, unless you have a basic understanding of some numbers. Robert Heinlein wrote (in "Time Enough For Love") that, paraphrasing, anything that cannot be done in math is opinion.

 

Data centers designed by opinion are unlikely to work well, so I have worked over the years to make sure I have the math right. Along the way I have discovered certain rules of thumb that are industry standards, but they may not apply to your situation, and knowing whether they do has a huge impact on the ultimate cost and efficiency of the DC.

 

What I would like to do here is to talk about some of those very basic numbers, with a focus on the idea that "If you can't measure it, you can't manage it".

 

Today's Number: Diversity Factor

 

When I was designing and building a data center for R&D a few years ago, I also learned about using the same words as the trades you are working with. What I was calling the "maximum rating of the power supply" was getting blank looks from the building engineers. I finally learned that the name they like is "nameplate rating", or just "nameplate". I resisted it at first: the nameplate was on the front, and said "Dell" or something. No ratings. On the back, somewhere on the power supply, was a label with all sorts of numbers in tiny print. The one of interest was "Watts", although sometimes that had to be inferred from "Volts" and "Amps" (volts times amps equals watts: maybe that should have been today's number, as it is key).
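To make that concrete, here is a minimal sketch of the volts-times-amps arithmetic; the label values are made up for illustration, so substitute whatever is printed on the real power supply label.

```python
# Hypothetical values read off the back-of-the-server label.
volts = 115
amps = 6.5

# Volts times amps gives the wattage when the label doesn't state it directly.
watts = volts * amps
print(watts)   # 747.5 W
```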

 

Side note: you never want to use V*A=W on an auto-sensing or auto-ranging power supply, because the volts and amps are given as ranges, and if you multiplied the largest numbers in each range together, you would come out way high. Today it is common to have a power supply that can plug into 100 volts or 208 volts or even 240 volts without setting a switch: it auto-senses the voltage. But at higher voltages it pulls fewer amps, so multiplying the range maximums together is not useful. Power supplies can also be more or less efficient at different voltages: it used to be that they were more efficient at 208v than 110v, for example, but that is not always true, and on modern gear it seems to make no hugely significant difference.
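As a rough illustration of why the range maximums mislead (the label values below are hypothetical): an auto-ranging supply pulls fewer amps at the higher voltage, so an honest estimate pairs values from the same operating point rather than taking the top of both ranges.

```python
# Hypothetical auto-ranging label: "100-240 V, 3.0-6.5 A".
naive_watts = 240 * 6.5      # 1560 W -- multiplying both range maximums: way high

# At 240 V the supply draws near the low end of the amp range.
watts_at_240v = 240 * 3.0    # 720 W -- a far more plausible nameplate figure
print(naive_watts, watts_at_240v)
```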

 

Side side note: sometimes you cannot find the wattage at all. I have had to look up the specifications, find only a maximum BTU figure, and work backwards from that... but that is another post.
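For the impatient, that reverse calculation uses the standard conversion of roughly 3.412 BTU/hr per watt; the BTU figure below is made up for illustration.

```python
# Hypothetical maximum heat output from a spec sheet.
btu_per_hour = 2500

# One watt dissipates about 3.412 BTU/hr, so divide to get back to watts.
watts = btu_per_hour / 3.412
print(round(watts))   # ~733 W
```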

 

Even if you know the nameplate wattage of every server, you still do not know how much power all those servers will actually consume. Here is the key point of today's entry. Consider the following: a server can be big or small; it can have few memory slots or many; few disks or many; few CPUs or many; different CPU models inside the same external model; low-power memory as an option.

 

For us, as an R&D shop, we often (but not always) buy servers that are on the low end of the configuration. If a server has 24 DIMM slots, 16 drive bays, and 2 CPU sockets, it would not be unusual for us to populate half of the DIMM slots, 2 of the drive bays, and one CPU socket (with 8 cores or something). The CPU might be the 95 watt model rather than the 130 watt one. Yet the power supply is sized to power *everything* that might be installed when the server is fully configured. Manufacturers don't want to add to the confusion of buying a server by offering 4 different wattages based on configuration, and more parts drive up costs too. Nope: big power supply for you, most of the time.

 

Add to this that older servers used less efficient power supplies, some as low as 75% efficiency. What mix of efficiencies sits there in that pile of servers?

 

Add to this that disks use far more power spinning up than they do once they settle down, that smaller disks use less power to spin up than big ones, and that disks seeking use more power than disks idling. Solid state disks use less power than spinning disks, but which generation of those you have matters too.

 

Look at that pile of computers again and ask: from the nameplate ratings, how can you tell anything about how much power they are actually consuming? The answer is that you can't, and that is where Diversity Factor (DF hereafter) comes in.

 

Industry Standard Diversity Factor (DF)

 

The industry standard DF is to multiply the total of all the nameplate ratings by .6 (or even, in one reference I found, .67). If my server pile has two million watts of total nameplate, then that DF says my data center has to be designed to handle 1.2 million watts of power (1.34 megawatts at .67, but no one seems to use that figure anymore, so that is the last we'll speak of it today). You would be very, very safe in doing that if all you were worried about was making sure you had enough cooling in the data center.
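Here is that rule of thumb as a minimal sketch, using the two-million-watt example from this post:

```python
# Industry rule of thumb: design load = total nameplate wattage * 0.6.
total_nameplate_watts = 2_000_000
industry_df = 0.6

design_load_watts = total_nameplate_watts * industry_df
print(design_load_watts)   # 1,200,000 W
```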

 

If you can't measure it, this rule keeps you from having a server roast, but it may not save your job as a data center designer, and it may not make you happy as a stockholder either. My measured DF across the R&D data centers is .439. In the scenario here, that means 880,000 watts, not 2 million or 1.2 million. Any DC built to those higher numbers does have one thing going for it: a long future as a data center, because it is over-built. But unless you designed the data center in zones where you can power up HVAC only as you need it, it will not be a very Green data center. HVAC sitting around idling still uses a lot of power: probably 40% of the power going into the data center goes to running the HVAC, and more if you overbuilt like this.
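The gap between the rule of thumb and a measured DF is what you end up overbuilding; a quick sketch with the numbers above:

```python
# Compare the 0.6 rule of thumb against the measured DF of 0.439 from this post.
total_nameplate_watts = 2_000_000

rule_of_thumb_load = total_nameplate_watts * 0.6     # 1,200,000 W
measured_load = total_nameplate_watts * 0.439        # 878,000 W (call it 880,000)

overbuild_watts = rule_of_thumb_load - measured_load
print(round(overbuild_watts))   # ~322,000 W of capacity paid for but never used
```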

 

If your company could have used the extra money spent on the DC overbuild for something else, they might also be miffed at the designer. That is a lot of new Mac laptops.

 

Measure It

 

If you are in a really large data center with separate power meters, and even have the HVAC metered separately from the lights and the racks, then you are lucky and also unusual. The more common case is that power is measured at the building, and includes the offices and break rooms and whatnot.

 

Newer in-rack PDUs can be bought with monitoring intelligence. That is the other extreme, in fact: you know how much power is being consumed right down to the plug, and you can pull those readings back into something like Nlyte and know exactly what is going on at all times.

 

Most data center PDUs (the big ones on the DC floor, not the in-rack, power-strip-looking ones) and UPSs have power metering built into them, though, and the UPS is where I measure mine (Nlyte can see those sources too). The only things on my UPSs are the servers and network switches, not the HVAC, so as long as I have all of those items in my nameplate list, I know the nameplate total, I know the actual aggregate consumption, and I can compute my DF.
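Put together, that calculation is just the measured aggregate divided by the nameplate total. A minimal sketch, with hypothetical numbers standing in for a real nameplate list and UPS reading:

```python
# Hypothetical per-server nameplate ratings (watts) and a UPS meter reading.
nameplate_watts = [750, 750, 1100, 495, 870]
measured_aggregate_watts = 1740   # from the UPS metering

diversity_factor = measured_aggregate_watts / sum(nameplate_watts)
print(round(diversity_factor, 3))   # ~0.439, in line with the figure above
```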

 

Diversity factor is key to designing and building a power efficient data center. Know it. It will affect the texture of what is to come (which is still crunchy).

 

Next time: Getting Your Nameplate Number


Welcome to my new Green IT blog here at BMC. I have been writing "Adventures in Linux" here since 2005, and will continue to do so; however, I have a new role inside BMC as the "Green IT spokesperson", so I added this blog as a way to also be the Green IT writer-person.

 

I have to admit that it was tempting to call this blog "Adventures in Green IT" but that seemed too derivative...

 

I do expect it to be an adventure, however: Green IT is a pretty broad topic, and it covers all sorts of things currently happening in the data center space: internal and external clouds, IaaS, data center consolidation, virtualization... on and on. All of these are implemented with new technologies, new power densities, changes in standards like ASHRAE, and so forth.

 

Green IT is one of those terms that, like "Cloud", can be interpreted many different ways depending on where it is approached from, and as I peel back that onion layer by layer, I will be talking about it here. I have also seen, as I have designed data centers, that standards like ASHRAE have to be interpreted to fit the current situation: I'll be talking about that in an early post.

 

How did I get here? What do I know about this Green IT thing?

 

Over my 21+ years here at BMC I have been many things: Mainframe Systems Programmer for VM, Production support, R&D Support, manager of Operations, and various similar roles. In every case I have been involved in the data center: Designing and building them, consolidating them, moving them, upgrading them to new standards, and so forth.

 

In the monolithic Mainframe days, this was in a way a lot easier. IBM had great physical planning guides, and you could look at them, look at your facility, and pretty much know what you had to do.

 

These days, and especially in our R&D environment, we have such a mix of vintages, models, manufacturers, system sizes, virtualized and non-virtualized machines, SAN and NFS support devices, and all other sorts of heterogeneity that designing and building a data center is far more challenging.

 

That is of course the fun part. The fun is tempered by the simple notion that "If you can't measure it, you can't manage it". Very BSM'ish, and more than just "a good idea". Really: you don't want to get into designing a data center, with all the capital that involves, and not have a fully square-rooted idea of where you are going with it. Not just what it will hold now (i.e. what the CMDB, the single source of truth, holds), but what its limits are, and what it will take to change them if you need to react to future circumstances.

 

Being green is also about more than just saving power, or making sure all your dead computers are disposed of in the correct manner. There is also the very simple fact that being green can save money. Here in the US, when our money was all green, there was a play on words in there... in any case, one of my early posts will be about that aspect of it.

 

It is going to be fun to see how this all develops. I am looking forward to the new challenges of the job, and the new conversations here.
