
Green IT


Blades and Going Big


Before I dive into Sun / Oracle blades, a blade overview. A very generic, non-vendor specific overview.


Way back in the day, when blades first arrived on the scene, they did not have the density to replace regular rack-mount systems. Not in compute, and even less in memory. To even begin to be a serious candidate for virtualization, you have to have RAM, and lots of it.


Early blades just did not have that many DIMM sockets, and the density you could put in a DIMM slot was low. Later blades let you get to higher-density DIMMs, but to get near what you'd need, the DIMMs cost a king's ransom. Totally making these numbers up, but it was something like this: if a 1 GB DIMM was 10 USD, a 2 GB DIMM was $25, a 4 GB $100, an 8 GB $500, and so on.


Point only this: The price went up way faster than the density.
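To see that point in numbers, take the made-up prices above and compute dollars per GB. This is illustrative only, using the invented figures from the text, not real street prices:

```python
# Illustrative only: the made-up DIMM prices from the text, showing that
# the cost per GB rises with density instead of staying flat.
dimm_prices_usd = {1: 10, 2: 25, 4: 100, 8: 500}  # GB -> price

def price_per_gb(prices):
    """Return {density_gb: dollars_per_gb} to expose the scaling."""
    return {gb: price / gb for gb, price in prices.items()}

for gb, per_gb in sorted(price_per_gb(dimm_prices_usd).items()):
    print(f"{gb} GB DIMM: ${per_gb:.2f}/GB")
```

If density and price scaled together, every row would print the same dollars-per-GB figure; instead it climbs from $10/GB to $62.50/GB.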


Today's units have more DIMM slots, better memory management in the chipset (more slots per socket), use more commodity-priced DIMMs like DDR3, and so forth. Today's blade lets you get to useful RAM amounts.


As noted in "Go Big to Get Small", we are a "Virtual First" shop. That has already helped us make huge inroads in reducing power and CO2 emissions. Taking this to the next level required a vendor-by-vendor look at the various compute solutions, to see what we could use for a big increase in virtualization, as well as for re-hosting older virtualization solutions to the latest / greatest / densest footprints.


Blades have come of age, by and large. With some vendors we could actually still be denser in rack mount, but there is also value in standardizing form factor and process.


With the massive virtual server density, we wanted to make sure we were not introducing choke points outside the blades, so everything needed to be able to connect up with 10 Gb Ethernet and 8 Gb Fibre Channel for SAN. The SAN would need to be fast.


To enable fast re-hosting, and HA, everything needed to be able to boot from SAN. A blade in a chassis had to be interchangeable with any other blade like it. Carry the same workload. Boot from the same place. No internal disks in the blades.


So, fast network, fast disk access on a SAN, lots of RAM, boot from SAN, interchangeable, backwards compatible.


About that last one: As I have mentioned innumerable times over the years, we are an R&D shop. Not just working on the latest and greatest, but also copies of all sorts of older versions of things. We try, like any software company, to limit how far back we have to go, because the iterations-and-versions math is crazy. It's not just the OS, but the application or applications, various dependencies like databases... on and on. Still, worst case, you'd at least like the option of going back more than one version on something.

As a general rule then, the less backwards compatible something is, the less useful it is in this shop.


Rule Of Thumb for Blades (This generation of tech...)


ROTs are useful things, and as blades go, here is one of mine: 128 GB of RAM per socket, assuming that socket has 8 cores and generally performs for its designated workload the way a Xeon 7000 does in AMD64 space.


I base that on watching our BCO data for VMware and other x86 virtualization environments. At that ratio, and with our workload (your workload is not my workload), in general 80% of the workload runs out of RAM before it runs out of CPU, and 20% of the workload runs out of CPU before it runs out of RAM. By run out, I mean we are well into the 90th percentiles. We are starting to clip the peaks.


Pre-Nehalem, when all the virtual assists were still young, and the cores-per-socket were lower, the rule was 64 GB per socket, and for a pre-T4 chipped Sun it was more like 32 GB per socket.
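If you like your ROTs executable, here is the sizing rule above as a sketch. The generation labels are my shorthand for the eras described in the text, not product names:

```python
# Rule-of-thumb RAM sizing per socket, by rough hardware generation.
# Labels are shorthand for the eras described above, not official names.
GB_PER_SOCKET = {
    "pre-nehalem": 64,    # early virtualization-assist era, fewer cores
    "nehalem-plus": 128,  # ~8-core Xeon-class sockets
    "pre-t4-sparc": 32,   # older Sun T-class chips
}

def target_ram_gb(generation, sockets):
    """RAM to aim for in a blade: sockets times the per-socket rule of thumb."""
    return GB_PER_SOCKET[generation] * sockets

print(target_ram_gb("nehalem-plus", 1))  # a 1-socket blade -> 128 GB
print(target_ram_gb("nehalem-plus", 2))  # a 2-socket blade -> 256 GB
```

Remember the caveat from the text: your workload is not my workload, so the constants are where you would plug in your own BCO-style data.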


That brings me to...


Sun Blade 6000


In an Oracle press release a few years back, I was quoted talking about how we use T class servers to save power. Reading that reminds me just how long we have been on this journey to save power, CO2, floor space, as well as modernize where possible the server footprint.


Sun's current blade offering is the 6000. The new announcements about the T5 chipset, specifically the T5-1B, were of great interest to me because the T4-1B was head and shoulders faster in virtualization workloads than what came before it. However, its Achilles' heel is that the blade only has 1 socket. Most of the time, that means putting more than 128 GB in a blade is not going to be fully utilized.


The T5 is, on paper, 2.3 times faster than a T4, and from what I have read of the tech specs, I think that we are set for 256 GB per blade with this generation. You could put 256 GB in a T4-1B; it was just not clear how well used it would be. The CPU would bottleneck first.


All I have in house as of this writing are T4-1Bs. But I do have a few of them:




For HA reasons, there are two chassis. Each chassis has 5 T4-1Bs. Each T4-1B has an 8 Gb FC card in it. The 10 Gb Ethernet is shared because of a switch-like device on the back called the NEM.


Bottom Line: This set of two will replace 250 Sun systems when we are done.


All kinds. V240s. V100s. V880s and V890s. Ultra 5s. On and on. A list out of history and time.


The secret sauce is that we can virtualize Solaris 8, 9, 10, and 11. Solaris 8 and 9 have to run in zones rather than LDOMS (Yea yea: Oracle VM Server for SPARC.).




I promised numbers last post, so here goes:


We have retired 250 computers from all over the line. A Netra nameplates at 108 watts. A V240 pulls about 550 nameplate. A V880, 2,900. Big spread. We mostly had V100s through V280s running around, but one or three of just about everything came off the line. A bunch of E450s.


The average watts was 474, and the average U was 4.


I was very careful to call that nameplate, because a computer does not use all of the power its power supply is rated to put out. According to measurements I have done comparing the UPS readings to known nameplate values, we use about .57 - .6 of it. This power factor is lower than the planning number you normally use for such things, which is .67. But that leads to me claiming less savings, which is fair. I am using averages too, so going low-side keeps it real.


- 250 Suns @ 474 watts = 118,500 watts; rounded down, call it 118,000 watts, or 118 kW

- Apply a power factor of .6 and that means about 70,000 watts of gear was replaced by those two chassis and their 10 blades. I rounded down at every turn, to make this as low as reasonably possible.
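For the spreadsheet-averse, the arithmetic above looks like this, with the same inputs and the same round-down-at-every-turn approach:

```python
# Reproduce the savings arithmetic, rounding down at each step as the text does.
import math

retired = 250
avg_nameplate_w = 474
power_factor = 0.6  # measured .57-.6 against UPS readings; using the low side

nameplate_total_w = retired * avg_nameplate_w                     # 118,500 W
nameplate_total_w = math.floor(nameplate_total_w / 1000) * 1000   # -> 118,000 W
actual_w = nameplate_total_w * power_factor                       # 70,800 W
actual_w = math.floor(actual_w / 10000) * 10000                   # -> 70,000 W
print(f"replaced load: {actual_w:,} watts")
```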


Here is a place the Sun chassis is very different from other blade vendors'. Rather than having six power supplies like Dell or IBM, for example, it has two. Either one can supply the whole chassis, so it's a doozy: 5,740 watts each. AC input power to the chassis is 6,272 watts. See the spec sheet for details.


We have two of them, each half full, so about the same power consumption as one chassis with a bit of overhead. Call it 7,000 watts.


The Magic Number


We have now arrived at a number I have seen popping up over and over in this. Giving away the end of this series here, but: 70,000 watts versus 7,000 watts. 10 to 1. We see that number emerge time and again for us. Partly it is a function of the age of the gear we are retiring. I see it over and over though.


That is 10 times less CO2, because CO2 is directly related to wattage.




The average server was 4U, and we got rid of 250 of them. 1,000 U now sits in 20 U. Way more than 10 to 1. 1,000 U racked tightly (say 40 U per rack) is 25 racks, now sitting in one rack. The reality was far more organic (read: not that dense, and full of historical reasons for why it was the way it was), and I'll get to what we have hauled out of the DC later.
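The space math above as a sketch. The 20 U "after" figure assumes two chassis at roughly 10 U each, which matches the chassis described earlier:

```python
# Rack-space consolidation math from the text.
servers_retired = 250
avg_height_u = 4
u_per_rack = 40   # tightly racked
u_after = 20      # assumed: two blade chassis at ~10 U each

total_u = servers_retired * avg_height_u      # 1,000 U retired
racks_before = total_u / u_per_rack           # 25 racks' worth
ratio = total_u / u_after                     # "way more than 10 to 1"
print(total_u, racks_before, ratio)
```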


Shelling out from Sun for a second to talk about space: phase one of this project involves taking 37,000 square feet of DC and making it into 11,000 square feet, with enough capacity in that DC to go after other DCs in later phases. This time it was 3 to 1. But now we are set for absorbing other, smaller DCs, and later, even shrinking this one again. We are after 10 to 1. Power, CO2, and space.


Next time: Another blade server.

Go Big to Get Small

Posted by Steve Carl Apr 8, 2013

In "Not All Electrons are Created Equal" I talked about different ways to pump electrons up the wire so that they can charge your smartphone and light up your life. Yeah, OK, I was talking about power usage in the DC, and about why the way electrons are pumped matters, but it comes to the same thing. No matter what you use the power for, the way it is moved along the wires differs depending on where you are, and that directly impacts your power bill and your CO2 footprint.


Data centers are well known as one of the largest single areas of power consumption, using over 1% of the worldwide total for that one functional area. 1% may not seem like much, but again: one thing! Just data centers. And it adds up to terawatt-hours of energy. The good news is that it is not growing as much as it was thought it would, and the reason for that is simple. For us, it comes down to one thing: virtualization.


It's easy to read the above-linked articles, see some very large-scale trend data, and find out that Google is (surprise... not) the most power-efficient builder of data centers. But what is real, and what can we, as smaller data center operators, do?


BMC is the 8th-largest publicly traded ISV, and of course that means we have data centers. Not American Express-sized data centers, but a fair number of them, peanut-butter-spread across the world. Partly that is history: as we acquired companies, we did not necessarily move the data center / lab to a central location. That is disruptive, hard work, and requires planning to make sure that the products are still being developed and growing. Partly it's the fact that the speed of light still has not been upgraded, despite years of trying. My geek moment here: virtual particles affecting the speed of light. But I digress.


Regardless of virtual particles and purchase models, many small data centers are not an opportunity for scale, and so over the last ten years we have reduced our North American data center physical footprint by about 40%. All targets of opportunity.


No big central project.


The Project


All that changed when Scott Crowder, a new (at the time) VP in IT, walked into the place, looked around, shook his head, and said in so many words, "This needs to change". His concepts are tried and true; if you are a mainframe person they do not even sound revolutionary: "Buy Big". Don't get a hundred little disk bays and hook them up all over the place: get three or four really big, enterprise-class arrays and put them in large, central places. Ditto servers: we don't want amber waving fields of PCs and rack-mount servers: we want density. Blades are ready now. You can jam all sorts of capacity into 10 or 11 U. Get rid of the old stuff. Move your reliability up. Reduce the average age of the gear. Have everything in one place so that support is all in one place.


The "Internal Cloud" or the "Glass House": Whatever you want to call it. It's back. The swings between centralized and decentralized never stopped. By any name, we are headed back to centralized. The variation this time is all the stuff at the edge: The tablets and smart phones and 24/7 high speed connectivity.


Nothing new about centralizing at all... except that it is completely hard to do. It takes being willing to stick to your guns, and also, in an R&D environment, making sure you are not throwing out the baby with the bath water. At the end of the day you want to be sure that R&D can still do their job. In a perfect world, they would never even know you were making changes, but unfortunately it does not work that way: You have to partner, and study, and discuss and adapt.


"Buy Big" became a project: "Go Big to Get Small". The goal was more than reliability, or centralization of support. It was power reduction. Big power reduction. 67% reduction, over a three year project. At the end of the project, the first servers bought would just be coming off support, and feed back into the next opportunity cycle.




Goals like these don't work without vision and without support at the highest levels. If anyone and their cat can buy computers and bring them into the DC, you have lost before you even started. Budget and buying authority have to be centralized, and the people doing those things have to understand the goals. Get the design. And understand when there have to be exceptions.


Exceptions are Exceptions


There are times when a product (actually, for R&D, a product's server) cannot be virtualized. If we develop and sell a product that does bare metal provisioning (we do), then you have to develop and test on bare metal. Your build machine can be in a VM though. All that matters for a build machine is that code be compiled into binaries as quickly as possible. These days the overhead of the virtual world is so low that this is not a problem.


In that example, there are exceptions. Some servers are not virtualized because of what they do. If we had no customers doing bare metal provisioning anymore... but that is not the case.


What about high I/O use cases? Again: many virtual technologies have gotten far better at I/O than they used to be, with the ability to dedicate I/O devices to virtual machines when required. Or you can move virtual I/O outboard of the server. Co-process it with dedicated virtualization infrastructure. Cache I/O in specialized RAM. Variations abound. Sometimes there is just so much I/O, hitting the disks or SSDs so hard, that a VM is just not the right thing to do. The cost of making the VM work exceeds just giving the process a real server to run that load on.


These are exceptions. Corner cases that require analysis to be sure that we are doing the right thing. This can't be a rule with more exceptions than adherence. This is not like the English language's so-called rule "I before E except after C". Maybe. Maybe not.


Scalability is another case: if you want repeatable numbers, virtualization is not an easy place to get them. When you tell someone that the numbers were created in a virtual world, and then performance data was used to massage them back into real-world numbers, most people will have quit listening and perhaps started to suspect your honesty. It's not that it can't be done. It can. It's just not easy, and you have to be utterly trusted before you even start that conversation.


Maybe 5-10% of the servers we have need to stay "real" for whatever reason. 90% or so do not: it's a target-rich environment.


Regardless of Platform


You may want to virtualize more than you actually can. One exception would be the case where something does not virtualize. Maybe nothing was ever developed to virtualize it. Maybe it's just too old.


We have one of most everything ever made in computerdom, and versions of things going back a decade or more. We still have a VAX-chip-based server, for example, even though VMS was ported to Alpha, and then to Itanium (and hopefully to AMD64 [x86-64] soon...).


Some things virtualize in a limited way: AIX 5.2 and 5.3, Solaris 8 and 9. In these four cases they can be virtualized, but:


  1. Patches have to be applied to get them to a virtualizable level
  2. They run in "Zones" (WPARs, in IBM-speak) rather than a more isolated LDOM / LPAR.


On the plus side, they use fewer system resources and are faster this way.


When doing "Virtual First", in an R&D environment, the words / terms / actual underlying code may change, but the concepts are the same. For the five major environments, there are viable solutions:


  • IBM: LPAR and WPAR
  • Mainframe: z/VM and LPAR
    • Amdahl UTS: We miss you...
  • Sun / Oracle: LDOM and Zone
  • HP: IVM's and Containers (Of very special interest here to me: HP 9000 Containers. Means I can reduce my use of PA-RISC based systems)
  • AMD64(X86-64): VMware, Virtual Box, KVM, Xen, Parallels...
    • I cannot footnote a blog, so worth noting here that AMD and Intel both have microcode assists for virtualization. Borrowing from the mainframe concept of the SIE instruction, and building upon the Intel-compatible 64-bit architecture pioneered by AMD, most AMD64-compatible virtualization solutions are low overhead and near native in execution speed.


Storage Story


Storage is every bit as virtualizable as servers these days, though what that means is slightly different. You can:


  • Thin Provision
  • Data De-dupe
  • Mix and match RAID levels, including across some RAID numbers/types that the standards committee never heard of.
  • Tier (as in response time tiering: hot data on fast devices)


Even small storage solutions have some of these features now. But we don't want small arrays. We want big, central enterprise class arrays. Arrays that grow. Arrays that perform. Arrays that do not go down.




As things move back together and start to scale up, we start to recall what the classic problems of this style of environment are. A blade chassis going down can mean thousands of system images offline, and half of R&D sitting around looking at each other, wondering when the server will be back.


That is, if you were silly enough to roll out something that had no ability to recover to another chassis.


Same thing when it comes to storage, only more so. Centralize all the little disks to the big enterprise-class array, and if it goes down, everything hooked to it is gone too. That could be petabytes going offline at once.


Oh, yes. It's all coming back now.


Decentralized may be more expensive, not very power efficient, and prone to frequent failures, but at least the risk was spread, and it was limited. The data probably was not on backups out there someplace either, but the exposure was contained. Lots of small problems over time, rather than one big one if you did not do it right.


Speeds and Feeds


Next time out, more numbers. Yay numbers! A look at the vendors and their products and the power and CO2 we are looking to save.

Not All Electrons are Created Equal

When I was at the Uptime Institute's annual conference a few years back, the title of this post was a sort of catchphrase everyone was using. It made sense to me then. Of course it takes more to actually create an electron than anything we do with our power generation technology, but that is getting lost in the weeds of technical reality versus a handy concept. "Not all electrons are created equal" is a nifty turn of phrase, and it contains the key concept: it matters how you push electrons up the wire.


Later, when I was researching the CO2 emissions of each of our major R&D data centers, the point came home for me. The first thing was simply just how much CO2 we are talking about in the first place.


Pounds of CO2 per KW / Hour


When I am doing all the math around a data center, the first thing I almost always mess up is watts versus kilowatts. The second is whether I have already converted from pounds to tons or not. Every input you get from the various data sources comes expressed in some locally logical way, and keeping everything in the same unit has led to stupid mistakes over and over. Most recently, when talking to the above-mentioned Uptime Institute, I dropped something in as pounds and expressed it as tons, but forgot to add the divisor to the spreadsheet to convert it. Delusions of grandeur, to be sure. To compound it, in another place I was trying to express pounds but labeled it as tons.


This is how you lose Mars orbiters.
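One way to avoid losing Mars orbiters is to keep everything in one base unit and convert only at the edges. A trivial sketch, assuming short tons of 2,000 pounds each:

```python
# Keep all CO2 math in pounds; convert to tons only when presenting.
LBS_PER_SHORT_TON = 2000.0

def lbs_to_tons(lbs):
    """Convert pounds to short tons, at the display edge only."""
    return lbs / LBS_PER_SHORT_TON

def tons_to_lbs(tons):
    """Convert short tons back to the base unit, pounds."""
    return tons * LBS_PER_SHORT_TON

print(lbs_to_tons(118_000))  # 59.0 short tons
```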


So, back to basics for a second: how much CO2 is created to generate one kilowatt for one hour? That turns out to be not as straightforward as one would hope. Looking at the US Department of Energy website, they list the following values for the major carbon sources:



Pounds CO2 per kWh:

Coal         2.095 - 2.117
Petroleum    1.915 - 1.969
Gas          1.314 - 1.321



I try to imagine what a pound of CO2 looks like. It is easy enough to picture dry ice, and what a pound of that looks like, but as a gas it expands out to a far larger volume. Dry ice is about 1.5 grams per milliliter (a milliliter is 1.0 × 10⁻⁶ cubic meters), but CO2 gas at 20 C and 1 ATM is only about 1.8 kilograms per cubic meter. Big difference. Big volume.
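To put a number on that expansion, here is a rough ideal-gas estimate, assuming pure CO2 at 20 C and 1 ATM and a dry ice density of about 1.5 g/mL:

```python
# Rough estimate: volume of one pound of CO2 as a gas (ideal gas law,
# 20 C, 1 atm) versus as solid dry ice (~1.5 g/mL).
R = 0.082057          # L*atm / (mol*K)
T_K = 293.15          # 20 C in kelvin
P_ATM = 1.0
CO2_G_PER_MOL = 44.01
G_PER_LB = 453.59
DRY_ICE_G_PER_ML = 1.5

mass_g = 1 * G_PER_LB
mols = mass_g / CO2_G_PER_MOL
gas_liters = mols * R * T_K / P_ATM                 # roughly 250 L of gas
solid_liters = mass_g / DRY_ICE_G_PER_ML / 1000.0   # roughly 0.3 L of dry ice
print(f"gas: {gas_liters:.0f} L, solid: {solid_liters:.2f} L")
```

Call it an expansion of roughly 800-fold from solid to gas, which is why a "pound of CO2" takes up so much more room than the dry ice picture suggests.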


Each of those sources listed above therefore makes quite a difference, not just in the weight but in the sheer volume of space that CO2 expands into. Ultimately, of course, it is diluted into the atmosphere and thought of as a percentage of the gas mix, expressed as parts per million (PPM), which is the way NOAA reports things. According to NOAA, we saw about a 3.3 ppm increase in atmospheric CO2 over the last year (2012-2013). It seems like there is a lot of air up there, but that is partly perspective. Here is another one:


Looks pretty thin there.


Carbon as an element is only about .5% of the total makeup of our planet. CO2 gets some size leverage because each carbon atom joins up with two of its oxygen buddies, and while carbon is only number 6 on the periodic table, its two cohorts in the molecule in question are number 8, for an extra bump in size. This is how burning small things to make kilowatts creates a relatively large poundage of CO2 output... but I digress.


Other Power Sources


Just so it is clear that I have not skipped over this or failed to give it some thought: how power is generated goes beyond the three carbon-based methods above. Hydro, wind, wave, geothermal, solar, and even nuclear are all generally considered zero-CO2-emitting power sources. As my post here at Green IT about concrete points out, though, these methods of power generation are clearly not completely zero. It costs something in CO2 to make a dam or a wind tower or whatever. It costs more CO2 to move the generation technology from the factory to the place where it will be used (and concrete for a dam, for example, is very CO2-intensive to create, and very CO2-intensive to move about).


Entropy means that all power generation sources will need to be maintained, which of course means more CO2 somewhere along the way.


Overall though, those construction and maintenance emissions are such a tiny fraction of the power being created that they round down to near zero-ish.


To be fair, the same thing holds for a coal fired plant, or a gas generation plant. CO2 was used to create it too, unless the manufacturer is 100 percent on a zero emission power source.


Zero Emissions in Other Ways


There are other ways to generate electricity that have good CO2 math. Things like putting a methane collection dome over a garbage dump, and burning that to create power. Not zero CO2, but better than letting methane hover about. CO2 is a greenhouse gas, but methane (CH4) is a 20 times more powerful one.


Plant something organic, like a tree; burn it; and plant more to replace it, so that you are always absorbing as much CO2 as you are emitting. Net zero emissions. Go negative, even, by absorbing more than you release. I just listened to a very interesting TED talk about ways to reclaim desert and offset the CO2 we are emitting... using grasslands and animal herds the other way around from how we have been.


Point is: It matters how power is generated when discussing the CO2 we put into (and leave in) the atmosphere.


BMC does not build cars or houses. We build software. Our power usage is tied to the data centers used to create that software. The data center is our manufacturing plant, so how that data center's electrons are being shoved into it is key to how much CO2 we are indirectly (via the power plant) placing into the atmosphere.


Where and How


This is 2009 DOE data, but I just looked at Texas for 2010 (the latest on the web site) and it is not that different other than that Texas passed 10,000 Megawatts of wind generated power recently (July 2012 report). It is the first state to do so. I have been out in West Texas many times to see those farms. Amazing.


Where and therefore how power is generated adds up to quite a bit of difference in the total amount of CO2 the power generation adds to the atmosphere. Here are four examples:




Total Net Electricity Generation
9,928 thousand MWh
Petroleum-Fired - 00.03%
3 thousand MWh
Natural Gas-Fired - 27.8%
2,768 thousand MWh
Coal-Fired - 38%
3,778 thousand MWh
Nuclear - 28%
2,782 thousand MWh
Hydroelectric - 05.6%
552 thousand MWh
Other Renewables - 00.3%
29 thousand MWh

There are a number of interesting solar projects in the works in AZ, like Solana and Agua Caliente, that will be changing this mix with time. Also interesting is that PG&E in California has a contract to buy the power generated by Agua Caliente, which explains these numbers:




Total Net Electricity Generation
16,861 thousand MWh
Petroleum-Fired - .04%
7 thousand MWh
Natural Gas-Fired - 36.4%
6,144 thousand MWh
Coal-Fired - 1%
184 thousand MWh
Nuclear - 19.3%
3,269 thousand MWh
Hydroelectric - 26%
4,379 thousand MWh
Other Renewables - 15%
2,528 thousand MWh




Total Net Electricity Generation
39,989 thousand MWh
Petroleum-Fired - .015%
6 thousand MWh
Natural Gas-Fired - 47.6%
19,061 thousand MWh
Coal-Fired - 35%
14,142 thousand MWh
Nuclear - 9%
3,613 thousand MWh
Hydroelectric - .25%
100 thousand MWh
Other Renewables - 6.5%
2,617 thousand MWh





Total Net Electricity Generation
3,869 thousand MWh
Petroleum-Fired - .49%
19 thousand MWh
Natural Gas-Fired - 62%
2,429 thousand MWh
Coal-Fired - 18%
697 thousand MWh
Nuclear - 12.6%
489 thousand MWh
Hydroelectric - 2%
82 thousand MWh
Other Renewables - 2.7%
106 thousand MWh


Putting It Together


It is interesting to note above just how little power comes, in those states, from petroleum-fired generation. I picked those four because in North America, that is where BMC has a good-sized R&D data center, and therefore where we, at some scale, consume power / generate CO2. How much CO2 per kWh, you ask? Based on the major carbon-intensive sources:




- Coal 38.00%

- Gas 27.00%


1.149 pounds CO2 per kWh




- Coal 1.00%

- Gas 36.40%


0.4942 pounds CO2 per kWh




- Coal     35.00%

- Gas 50.00%


1.385 pounds CO2 per kWh




- Coal 18.00%

- Gas 62.00%


1.184 pounds CO2 per kWh
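The figures above can be reproduced, near enough, as a weighted sum of the DOE per-fuel factors from earlier. A sketch, using midpoints of the DOE ranges; the exact factors behind the published numbers may differ slightly, and petroleum's tiny share is ignored, as the text does:

```python
# Blended pounds of CO2 per kWh from a state's coal/gas generation mix.
# Factors are midpoints of the DOE ranges quoted earlier in this post.
CO2_LB_PER_KWH = {"coal": 2.1, "gas": 1.32}

def mix_lb_per_kwh(coal_share, gas_share):
    """Weighted CO2 intensity for a grid mix (shares as fractions of total)."""
    return (coal_share * CO2_LB_PER_KWH["coal"]
            + gas_share * CO2_LB_PER_KWH["gas"])

# e.g. the 38% coal / 27% gas mix, and the 1% coal / 36.4% gas mix:
print(round(mix_lb_per_kwh(0.38, 0.27), 3))
print(round(mix_lb_per_kwh(0.01, 0.364), 3))
```

The results land within a few hundredths of the figures above, which is close enough for relative state-to-state comparison.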


The 2011 data should be out soon from the DOE, so I'll need to re-do these numbers. It is still interesting to me as a relative comparison to see how much different this is from state to state.




As different as these are, another thing that matters is the ability to conserve power usage, or perhaps, looked at another way, to use power more intelligently. If I use 1 kWh in Texas, and 2 kWh in Massachusetts to achieve the same end result, then I emitted more CO2 overall in Massachusetts, despite the lower CO2 per kWh there.
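That trade-off in numbers, using two of the per-kWh factors computed above as illustrative values (total CO2 is simply kWh used times pounds per kWh):

```python
# Efficiency can outweigh a cleaner grid: total CO2 = kWh * (lb CO2 / kWh).
def total_co2_lbs(kwh, lbs_per_kwh):
    return kwh * lbs_per_kwh

higher_carbon_grid = total_co2_lbs(1, 1.385)  # 1 kWh on the dirtier mix
lower_carbon_grid = total_co2_lbs(2, 1.184)   # 2 kWh on the cleaner mix
print(higher_carbon_grid, lower_carbon_grid)  # 1.385 vs 2.368 pounds
```

Using twice the energy on the cleaner grid still emits roughly 70% more CO2.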


That is where I'm going to go next post.


When I was looking at a new home being built not too long ago, I was surprised to see how small the air ducts were. I asked the person showing me the house, "Why are the air ducts so small? This is Texas! Air conditioning is the only way you can live here." They answered, "Oh, those are the new high pressure ducts," as if that explained everything. I guess it did explain everything for them, but just being the new thing was meaningless to me.


I found this article over at toolbase about them. As the house in question was in Houston, the bit about this style of HVAC removing 30% more humidity suddenly made it all come into focus. Houston, Texas was built directly on top of a swamp, and one of its two founders (John Allen, of The Allen Brothers) died within a year of moving there, of malaria or perhaps yellow fever. Houston is a damp place. Having your HVAC be better at removing water? Big win.


When I think about the differences in data center raised floors, I wonder about similar issues. What are the considerations for how tall the floor should be? In the case of the data center I have in Houston, it has an 18-inch raised floor and is set up to dissipate 38 watts per square foot. I have another data center on a six-inch raised floor, and it dissipates 76 watts a square foot. And as mentioned previously in this series, I have seen brand-new data centers on 36-inch raised floors, set up for 250 watts per square foot.




Thinking about the reasons and design considerations for all of this, I looked at a number of web sites documenting HVAC design and giving formulas for computing how big a duct needs to be to handle a given flow of air. Intrinsic to that is noise. The math mostly works out that if you want a certain level of airflow, the smaller the duct, the faster the air has to move (and, as with the home HVAC example, the higher the pressure; that is why the person showing me the tiny ducts called it the 'high pressure' system, while the industry appears to call it the 'high velocity' system). That seems intuitively obvious. If you look at the link above about high velocity ductwork for homes, there is sound-suppressing tubing in the design. This all makes perfect sense.


Moving air quickly and with pressure requires tighter (less leaky) ducts and specially designed fans, and there is more friction in the system, causing losses. Small ducts are therefore not as energy efficient.
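The core relationship here is just v = Q / A: for a given airflow, velocity is inversely proportional to the duct's cross-sectional area. A sketch, with made-up airflow numbers:

```python
# Air velocity in a round duct: v = Q / A, where Q is airflow (CFM)
# and A is cross-sectional area (square feet).
import math

def duct_velocity_fpm(cfm, diameter_in):
    """Feet-per-minute velocity for a round duct of the given diameter."""
    area_sqft = math.pi * (diameter_in / 12 / 2) ** 2
    return cfm / area_sqft

# The same (illustrative) 400 CFM through a 10-inch vs a 5-inch duct:
print(round(duct_velocity_fpm(400, 10)))  # roughly 730 ft/min
print(round(duct_velocity_fpm(400, 5)))   # 4x faster in the smaller duct
```

Halving the diameter quarters the area, so the same airflow has to move four times as fast: hence the noise, pressure, and friction losses described above.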


In theory then, I could figure out how much air I would need to pump in a 6 inch raised floor to meet any given heat dispersion number, but at some point it is going to get very silly. The high velocity air will be whistling between the seams of the floor tiles, it will be noisy, power inefficient, and who knows? I might get the pressure up high enough to make floor tiles start popping up like geysers. Probably not, but that would be funny. Once.


For raised flooring (assuming the room is tall enough) the major expense is in the tiles and the grid, not in the pipes that extend the grid/tiles away from the slab. Those are little more than metal pipes, and it more or less costs the same in labor to bolt/glue the pipes to a floor no matter if the pipes are 6, 18, 36, or even 72 inches long. The bolt/glue and plate are the same.


So, when building new, why not go tall if you can? The taller the floor, the less you have to worry about figuring out the airflow. The more future proof it is in case you need to increase the airflow. Make it 72 inches, and you can walk around down there. With a tall floor you can deal with having some cables running around under the floor and not impeding airflow significantly.


Preexisting Floor


I am looking at something already in place: a 16,000-square-foot room with a 250 PSI concrete subfloor, raised 18 inches and set up for 38 watts per square foot. I can add more air handlers to the space and decrease the available floor space, but I think I only need 10,000 square feet, so that is not an issue. The question becomes: what makes sense? How many CRAHs can I install? How many watts per square foot can I deal with in a room like this?


One other part of the equation is that the room was designed for water-cooled mainframes back in 1993. I have all sorts of chilled water and a dedicated 450-ton chiller up on the roof.


Cold Math


450 tons of dedicated chiller. 16,000 square feet. That means I can drop about 1.5 megawatts into that space. The most I can do in terms of heat dissipation without adding more chilled water is about 100 watts per square foot. Drop the space to 10,000 square feet, and that becomes about 160 watts a square foot.
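The cold math, spelled out. This assumes the standard conversion of roughly 3.517 kW of heat removal per ton of cooling:

```python
# Chiller capacity to heat density: 1 ton of cooling ~ 3.517 kW removed.
KW_PER_TON = 3.517

def watts_per_sqft(chiller_tons, sqft):
    """Sustainable heat density (W/sqft) for a given chiller and floor area."""
    return chiller_tons * KW_PER_TON * 1000 / sqft

print(round(watts_per_sqft(450, 16000)))  # ~99 W/sqft over the full room
print(round(watts_per_sqft(450, 10000)))  # ~158 W/sqft over 10,000 sqft
```

450 tons works out to about 1.58 megawatts of heat removal, which is where the "about 1.5 megawatts" and the roughly 100 and 160 watts-per-square-foot figures come from.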


Not bad, but not 250 watts a square foot. Still, if all I need is 160 watts or less, this seems promising.




The room I am looking at has a dropped ceiling. With cowls on the CRAHs, I can fairly easily set up hot / cold aisles and control my cold air supply and warm air return, keeping them properly separated. I can drop the power and networking in from overhead rails, keeping the underfloor fairly free of air dams. So, is 18 inches enough for 160 watts per square foot, without turning the floor into a whistling, windy mess?


Easily. I have a data center on a six inch floor, at 76 watts per square foot, and it works fine, making no obvious noise beyond the usual white noise of data center fans. 18 inches is three times that height, and we are not looking at three times the heat dissipation. The plenum should not carry so much velocity / pressure that the CRAHs have to work a great deal harder to pump the air around. Since I have 16,000 square feet and only need 10,000 of it, I can also put any new CRAHs right next to the cold aisles consuming that air, keeping the airflow paths short.




I used my real world data center here by way of example. Looking out across the vast landscape of empty, low watts-per-square-foot data centers, I wonder how many of those are in similar situations. How many were designed for water cooled mainframes and have fairly tall floors built to handle the bulk of things like 370 channel cables? High PSI floors designed to hold mainframe motor-generator units? How many data centers sitting empty have good bones?


Next: I am going to try to figure out a cost model. How much should I pay per square foot for a new data center, versus getting an old data center cheap and refitting it?


I know a couple of builders. All have told me (speaking generally, not about data centers in particular) that retrofitting is where the work will be for the next few years, but that they would much rather build new than retrofit. I of course asked why, and the simple answer was “It’s just easier.”


Having been in a few buildings that have been retrofitted for HVAC in New York, and seeing both good and bad examples of that kind of thing, it is easy to see, even as a layman, what the issues might be.


For a data center with good infrastructure, it is not quite as bad as all that, at least until you hit design limits that you have to work around. What about new? Are there issues there that are less than obvious?


Everything is New


As I started to consider the issues that surround building a new data center, I wondered about some things. One example I had in mind was the debate around the hybrid car. Clearly it uses far more energy and resources to build something from the ground up. From the machines that dig and flatten the ground, through the energy and CO2 used to create and shape the raw materials, to the shipping and the power used to bolt, staple, hammer and glue the whole thing together, a brand new building has to be far more CO2 intensive than a retrofit.


I keep hearing anecdotal references to all the spare or empty data centers lying about, and every time I ask why they are that way, the answer is power density. It’s an ‘old’ data center and can’t handle the power density of today’s equipment. I talked about this a bit in the first post.


New, modern data centers that can handle the densities we are considering here are being built all over the place, and as I toured one such facility recently I was struck by just how much concrete was being used. A quick bit of the Google told me that concrete is used more than twice as much as the next most common material in a new building, which I assume is iron, for things like rebar (reinforcing bars) in the concrete.


CO2 and Concrete


According to the Portland Cement Association, concrete is the second most used ‘thing’ in the world after water. Since it takes a great deal of heat to make the stuff we all love to drive on, cement production is the third largest source of greenhouse gases in the US. According to Scientific American, one ton of cement makes one ton of CO2, more or less.


To put that in terms of electricity: generating one kilowatt-hour ranges anywhere from basically no CO2 (wind, water, solar, not counting construction costs) to about 2.1 pounds for coal. At that rate, one ton of cement generates about the same CO2 as a megawatt-hour of coal-fired electricity.


If a cubic foot of concrete weighs in at 145 pounds, there are 13.7 cubic feet of concrete per ton, and therefore 1 ton of CO2 emitted to create those 13.7 cubic feet of concrete. There are 27 cubic feet in a cubic yard, the common measure for concrete in construction, so that works out to 2 tons of CO2 per cubic yard, in round numbers.


I have been working these numbers to get to some idea of how much CO2 just went into making the concrete for the theoretical new data center. I picked concrete because it is the majority material in the new building.


How much concrete is in the floor?


When I was looking at a new data center, a few things struck me about it. First was how much bigger the building was than the actual data center. The loading dock, offices, halls, ramps, conference rooms, UPS room, and such easily doubled the size of the building over the core data center.


Second was just how thick the floor was. I know that this can be changed with various building strategies, such as the way the floor is pre-stressed, the type and design of rebar, and so on. But still, this was a six foot slab! That means every square foot of data center floor sits on six cubic feet of concrete, or two cubic yards for every nine square feet. Just the floor. Not counting the walls or anything else. A 10,000 square foot data center floor (about what I think I need) is then about 2,200 cubic yards of concrete. The other 10,000 square feet don’t need that depth of floor (other than the loading dock), and I could not see what was used, but I am guessing no more than a foot on average. Another 370 cubic yards or so.


Concrete in Walls


Bear with me....


The walls were two stories high, four inches thick, and surrounded the perimeter and the DC.


For the perimeter that is two 100 foot long walls and two 200 foot long walls, each 30 feet tall: 600 linear feet.


For the DC that is four 100 foot long walls, or 400 linear feet.


1000 linear feet of 4 inch thick, 30 foot tall concrete wall. In concrete terms that is the same as 1000 linear feet of 10 foot tall, foot-thick wall: 1000 x 10 x 1 = 10,000 cubic feet / 27 = 370 cubic yards.


So, my low estimate as a non-building engineer is that my theoretical 10,000 square foot data center, based on what I have seen of the way data centers are built, would have around 3,000 cubic yards of concrete in it.
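Pulling the whole estimate into one place, using the dimensions observed above and the rough 2 tons of CO2 per cubic yard figure from earlier (all of this is back-of-envelope, not engineering):

```python
# Rough concrete volume and CO2 tally for the theoretical building.
CU_FT_PER_CU_YD = 27

dc_floor      = 10_000 * 6 / CU_FT_PER_CU_YD             # 6 ft slab under the DC
support_floor = 10_000 * 1 / CU_FT_PER_CU_YD             # ~1 ft slab elsewhere
walls         = 1_000 * 30 * (4 / 12) / CU_FT_PER_CU_YD  # 4 in thick, 30 ft tall

total_yd3 = dc_floor + support_floor + walls   # about 3,000 cubic yards
co2_tons  = total_yd3 * 2                      # at ~2 tons CO2 per cubic yard
print(round(total_yd3), round(co2_tons))
```

Which, at the worst-case figure per cubic yard, is on the order of 6,000 tons of CO2 for the concrete alone.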


And part of that CO2, depending on the type of concrete, gets re-absorbed. Use a carbon negative material (see Caveats) and you could be carbon negative even after you counted everything else that went into making the building: steel, glass, transportation, and all sorts of other things.


On the other hand, almost none of that is incurred on a retrofit. It was incurred back when the data center was first built.


Whiny Caveats


This is tricky math, because of a number of things.


  • Cement is not concrete. Concrete is what we make things like data centers and roads out of, but it is filled with sand, rocks, and other aggregates. Cement just holds it all together, and makes up somewhere between 7 and 15% of the total, depending on the type and aggregates used.
  • There are alternate forms of cement and concrete coming online that produce far less CO2.
  • There are similarly new types of concrete being developed that, over their life span, are actually carbon negative. They absorb more CO2 than was used to create them.
  • There is a concept/process being proposed for using waste heat to create the cement.
  • Even normal, everyday cement created by current practices re-absorbs about 60% of the CO2 that was used to create it.
  • Cement and concrete are not, it seems, huge CO2 producers per ton. It is just that we use so many tons of them. This in some ways matches the story of CO2 itself. CO2 is not nearly the greenhouse gas that methane (for example) is; it is just that we liberate so much of it in order to generate the annual 98 quadrillion Btu of energy we used in 2010: 19,108,767,320 metric tons of CO2 in that year, just in the USA alone.
  • I am trying to figure out numbers for a data center build based on pure observation. I cross checked this against a calculator I found for a standard office building of the same size, and that came up with about 1000 cubic yards of concrete rather than my 3000. With all the things in the design that could change the depth of the data center floor alone, it seems I’m in the ballpark.


This Affects Others


Balanced against all of the above is that building a new DC is not only easier, because it can be exactly how you want it to be, it also gives a lot of people jobs building it. Retrofitting might be cheaper and generate less CO2, but it will only pay the trades that make and install the new HVAC, ducting, PDUs, UPSs, gensets, and whatnot.


There is also the fact that the new building will not only be more power dense, but because it is built now, under today’s building codes, it will be cleaner, safer, and use less power over its operating life. It would be built to the newest LEED and ASHRAE parameters, and it could potentially be built someplace where free cooling is an option at least part of the year.


Finally, what struck me about this exercise was this: the energy used (CO2 created) to build a building is but a tiny fraction of the power that goes to run the computers it will house. That may seem intuitively obvious to some, but I was surprised at the numbers.


Next: To explore a retrofit

Build, or Retrofit?

Posted by Steve Carl Jan 13, 2012

Over a number of years, we have reduced the number of R&D labs we have in North America to about a third of what we used to have. The reasons for the consolidations were driven not by Green IT but simple math around "It's easier to support fewer, bigger labs."


I have a very large data center, with these specifications:


  • 250PSI floor load for 16,000 square feet
  • 18 inch raised floor for 16,000 square feet
  • 60 PSI floor for 10,000 square feet
  • 6 inch raised floor for those 10,000 square feet
  • Much greener water cooled CRAH’s rather than fan coil CRAC’s
  • It has dedicated chill water, and the ability to “borrow” chill water from the base HVAC chillers in an emergency.
  • It was built in 1992/1993, and was meant to hold four water cooled mainframes of the day    
    • It has one air cooled mainframe in it now      
  • State of the art back then at 38 watts per square foot.   


The DC has “good bones”. The basic structure is sound, and there is plenty of room to add things like additional generators and UPS capacity. This DC has been the target of the DC consolidation and has absorbed computers from a number of other locations. When people walk through it now they ask “Why is this so empty?” and the answer is “We can’t put anything we have left any closer together than this or it gets hot.”


The next obvious question: why not add more cooling? Before I talk about that, a quick look at why, even after all the consolidation (more servers in fewer data centers), I have empty space.


Virtualization and Space


In a word, we have empty space because of virtualization. Above and beyond P2V and new requests being supplied from virtualized resources, when we moved a data center as part of consolidation, we looked for ways to virtualize before we moved, to cut down on physical equipment moving about. A virtual machine can move from place to place via a network (slowly) or an external hard drive (quickly: nothing beats the bandwidth of the mail).


As we moved to higher and higher levels of virtualization, we got back more and more floor space. The same large floor space is now used by servers that have a higher physical density and are more power dense.


Example: Seventy PC class systems acting as servers, each with 250 watt power supplies (burning around 150-200 watts of that) would be replaced with one Dell R810, and an 1100 watt power supply (using 800 or so watts of that). The R810 takes up 2U in a rack, the seventy PC class systems were loaded on technical furniture that covered the same floor space as nine racks, all full.


Extending that, we can put twenty Dell R810‘s in that same 42U rack (leaving room for a top of rack switch), at about 16,000 watts actual usage, and displace 1400 PC style systems and 210,000 watts of actual usage. I’ll have to roll in some SAN storage to that equation (and this post isn’t exactly about these numbers anyway), so it’s more or less a 13-to-1 reduction in terms of wattage. Let’s be really conservative and use 10-to-1 to account for the SAN storage.
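Spelled out, the consolidation arithmetic looks like this (the per-box wattages are the rough actual-draw estimates from the example, not nameplate):

```python
# Consolidation math: seventy old PC-class systems per R810.
pc_watts, r810_watts = 150, 800   # assumed actual draw per box
r810s_per_rack = 20               # twenty 2U R810s plus a top-of-rack switch in 42U

rack_watts = r810s_per_rack * r810_watts        # 16,000 W for the full rack
pcs_displaced = r810s_per_rack * 70             # 1,400 PC-class systems
pc_watts_displaced = pcs_displaced * pc_watts   # 210,000 W displaced

print(pc_watts_displaced / rack_watts)          # a bit over 13-to-1
```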


Cell Size


When you count the aisles front and back, plus other ramps and hallways, a modern single rack has a “cell size” AKA “Work Cell Size” of anywhere between 16 and 30 square feet per rack. Depends on the shape of the room, if it has internal support columns or not, local codes for number of fire exits, and the needed paths to them. I usually use 20 square feet per rack as a nice round number that is fairly aggressive in terms of density.


Technical furniture like what we have (largely Ergotron 3000's and Wrightline) has about a 45 square foot cell size in the same hot/cold aisle style layout. In theory it would take 56 of those units, and therefore about 2,500 square feet, to hold the same number of OS images as that one 42U rack. A 100-to-1 space reduction.


That is an extreme example, because those PC’s being retired are fifteen or more years old. It is the best possible reduction. As the gear targeted for retirement becomes newer, the advantage shrinks. Still, with a 100-to-1 margin for this example, you can come forward quite a few generations, and even make rack mounted gear the thing being replaced, and there is still a space advantage. If I were replacing seventy Dell 1850‘s, at 1U each, that is still a 4-to-1 space reduction. That would be 23,000 watts in 80 square feet, or about 280 watts per square foot, in round numbers.


Watts per Cell Size


Our standard 42U rack, at 16,000 watts and 20 square feet, comes in at 800 watts per square foot. The technical furniture held 5000 watts (25 systems) in 45 square feet, for 111 watts per square foot. Round numbers, a bit over a 7-to-1 relationship.
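As a quick sketch of the cell-size math, using the wattages and cell sizes above:

```python
# Watts per square foot by "work cell" size (rack versus technical furniture).
def watts_per_sq_ft(watts, cell_sq_ft):
    return watts / cell_sq_ft

rack = watts_per_sq_ft(16_000, 20)        # 800 W/sq ft for a loaded 42U rack
furniture = watts_per_sq_ft(5_000, 45)    # ~111 W/sq ft for technical furniture
print(rack, furniture, rack / furniture)  # the ratio lands a bit over 7-to-1
```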


Reality Check


I have this big, mostly empty data center that can currently dissipate 38 watts per square foot. That heat density is less than what I could do with even the relatively low density technical furniture, does not approach what I could do with racks of R810's and the like, and is nothing like what I could potentially do with blades.


So, I not only have space, I need space. I need to spread things out.


Or retrofit the data center with more HVAC


Or move to a data center with better watt-per-square foot capacity


And Therein Lies the Question


There is a cost per square foot. If you own the building and it has been depreciated for years, it may not be a high one, but it is there.


There is also tiering to consider: This is R&D gear. Does everything need Tier 3 or Tier 4 infrastructure?


There is a cost to build a modern data center. Hot/cold aisles with ducted, forced air get you to about 250 watts per square foot. Far above my expansive data center's 38 watts per square foot, but actually potentially low. What if I went with blade servers instead of rack servers? My 16KW per rack can turn into nearly 40KW per rack. That means specialized in-rack cooling, which in turn means an expensive data center per square foot, just a very small one.


But I have a big data center that’s empty. Should I build something new, migrate everything to it, and dense-ify it to the next level? The same ideas I mentioned here about the R810s and Blades apply to all the UNIX equipment as well. I can virtualize and dense-ify at varying levels across all the major vendors, and the question about the size of the data center, and what to do about it remains.


How Much Raised Floor is Enough?


If I can retrofit the current data center with updated cooling and raise it to 250 watts per square foot, that has the potential to be not only less expensive, but less disruptive: changes can happen over longer periods of time. What are the parameters? How far can I go with a six inch raised floor? An 18 inch raised floor? I looked at a new data center being built at 250 watts per square foot recently, and it was running on a 36 inch raised floor, with all the power and wiring dropped in from the top: all the underfloor space was a common cooled-air input plenum and nothing else.


Carbon And Calcium


What about the carbon cost of building a new data center? Lots of energy is used, and lots of concrete, and making concrete is very CO2 intensive. It does not matter if I build it or someone else does and I just rent it. It’s still going to be more carbon intensive than retrofitting an existing structure.


Next Time


Those are the general questions that frame the issue for me today. There are no doubt others, but from a Green IT / data center design point of view, this is a good starting point. Next post, a closer look at concrete.

Thinking about Green VDI

Posted by Steve Carl Jul 26, 2011

In my last Green IT post, "Low Hanging Virtualization Fruit", I mentioned that I thought VDI would be a great way to save power and therefore reduce CO2. I thought it would be interesting to walk that intuition through some actual assumptions (similar to what I did in "Watts Up" for virtualization).


It all starts with the average number of computers a single person uses professionally. I know I am way above the curve on that one, with four, but I am not that unusual for an R&D person. High-count users like me are balanced by the mobile users that have one or two systems, one of them a laptop.


So, here is assumption number one. My typical person in this model will have two computers. One desktop, one laptop. The desktop will stay at their primary workplace (office or home) and the laptop will follow them around wherever they are. In today's world it is also possible that the laptop is really a tablet computer such as a Motorola Xoom or Apple iPad.


Since this is about VDI, I am going to assume that the laptop / tablet will be one of the VDI access devices, and not be replaced, and therefore not considered for power reductions. As a power consumer, a laptop is already less than a desktop. My M4500 has a 130 watt power supply, and I have run it off a 70 watt unit. My Macbook has a 65 watt power supply, and so uses probably 50 or less.


Just the Desktop Then


The desktop in the model will have a 275 watt power supply (like my Dell 745 does) and a graphics card that can drive dual monitors. For average use I will assume the data center diversity factor planning standard of 0.6, or 165 watts when in use.


I will also assume that the desktop has two monitors, and even go so far as to assume that they are at least LCD of some kind. My E197FP's use 40 watts each, but they have Cold Cathode backlights. A quick look at the current model U2211H shows it using almost half that, at 22 watts. I presume most of that difference to be the LED backlight technology. So, next assumption: Half the population will have LED, half CC, or on average about 30 watts per screen. Times two, for 60 watts total. Add this to the desktop number and we have 225 watts per person per desktop. Assume older PC's with CRT's and this number gets much bigger very quickly.


Cross check: I found this table over at the University of Pennsylvania web site, and it quotes the 745 as using 145 watts to boot, and then using between 111 and 133 watts when active, but this is in a single monitor configuration and includes a low power monitor, so I think my 165 number is probably OK. Maybe a little high, but I'll be extremely conservative estimate-wise in other areas so I think I'll run with this.


That is the "while being used" number. When in standby, both the screens and the desktop use far less power (2 watts each, panels and desktop computer alike, for a sleeping total of 6 watts).


How many hours a week is it using 225 watts, and how many only 6 watts? I don't know anyone that puts in only 40 hours a week, and I also don't know many people that let their desktop system go to sleep. Their laptops? Sure. Their desktops? Hardly ever, as near as I can tell. Some people still run screen savers... on LCD's. I let the monitors go to sleep of course, but to be able to remotely access the desktops, they can not be asleep. The uPenn data says that idle, the desktop uses 111 watts, and that includes the Dell UltraSharp monitor at 2 watts, so call it 110 watts idle in round numbers. Some people in the populace will let the desktop sleep, which brings the average down a bit. Call it 100 watts inactive then. Adjust this number whichever way needed to account for policies that enforce sleeping on inactive desktops: since a fully hibernated desktop uses about 2 watts, if the majority of the PC's are set that way, it hugely changes the assumptions.


Another thing that would influence the assumptions is the general age of the desktops. My Dell 745 is middling old at about 4 - 5 years. The older the desktops you are looking to replace, the more power they will use, and the less well they will sleep when idle.


How many desktops am I after here? We have more than 6200 people: let's see what replacing 5000 desktops with thin clients looks like.


Thin Client


How much power does the desktop-replacement thin client use? I looked at the specs of the Wyse V50LE, and it appears to use 15 watts when connected to 2 monitors. A quick survey of other options makes that appear fairly typical for a dual monitor thin client, so we'll go with that. The monitors would be the same as before: I'll assume most of the LCD's are just moved over, that there are still two of them, and so we are using 60 watts when active and 4 when not for the screens. That is about 75 watts when active, and 6 watts when not. The not-active wattage is a guess for the thin client, as it is not specified, so I assume it is the same as the desktop. It is probably less, but I am trying to go worst case here.
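With those wattages, the per-seat arithmetic can be sketched like this, using the duty cycle the model assumes (35 active hours and 133 inactive hours per week, with the 6 watt sleeping figure as the most conservative inactive case):

```python
# Weekly energy per seat: desktop versus thin client, same duty cycle.
ACTIVE_HRS, INACTIVE_HRS = 35, 133

def weekly_kwh(active_w, inactive_w=6):
    return (active_w * ACTIVE_HRS + inactive_w * INACTIVE_HRS) / 1000

desktop = weekly_kwh(225)      # about 8.7 kWh per seat per week
thin    = weekly_kwh(75)       # about 3.4 kWh per seat per week

annual_savings = (desktop - thin) * 5_000 * 52
print(round(annual_savings))   # roughly 1.36 million kWh per year, 5,000 seats
```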


Disk Space


How much stuff is on the average desktop hard drive? How big does the internal disk really need to be? I imagine that this, more than most anything, is widely variable. There are pack rats that keep everything ever written since the dawn of time, and those people that keep asking me for a copy of something I sent them last week, because they already deleted it. I try not to take it personally: They do that to everyone.


My fully patched Windows XP guest has a 10GB disk, and is full at all times. My Win7 desktop has 80GB in use. My Win7 guest (oddly) has 98GB in use. Win7 looks like it uses about 9 or 10 times what XP does. For the model I'll assume that the virtual disk is 100GB, but that 20GB of that is not unique, so that only 80GB needs to be considered for end user space per desktop. This is probably high, but keeps me firmly on the conservative estimate side of things.


5000 times 80GB is 400 terabytes. Wow. We have a ton of capacity at the edge of the networks. That number is probably high, and with hardware thin provisioning, virtual storage, in-storage data de-dup, etc., there are all kinds of ways to reduce that footprint. Once again, to try and paint this as black as possible: what is the power for 400 TB? While we are at it, let's make this usable TB, and account for RAID and hot sparing of disks.


My sample design for this will be Dell EqualLogic: a PS6500E, PS6500X, and PS6500XV, tiered together so that hot data resides on the 15K disks of the XV, medium-access data on the 10K disks of the X, and lightly referenced data on the mass storage of the E. Take 80% of each one's capacity and, rounding down, that is 105TB total, using about 2200 watts: 21 watts per TB, or 8,400 watts for 400TB.
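The storage power math, with the capacities and wattage as assumed above:

```python
# Storage power for the tiered three-array design sketched in the text.
usable_tb = 105          # ~80% of the combined E / X / XV capacity, rounded down
array_watts = 2_200      # combined draw of the three arrays

watts_per_tb = array_watts / usable_tb   # ~21 W per usable TB
print(round(watts_per_tb * 400))         # ~8,400 W for the full 400 TB
```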


This is probably way high. There are all sorts of solutions inside VDI to save disk space, not even counting what modern storage can do: Virtual Bridges VERDE's use of local disks where available, for example, or VMware's linked clones, or Xen's Provisioning Server.


Nothing Is Free: A Host


Of course, the thin client works because there is a host someplace in a glass house / internal / external cloud running the actual virtual desktop. On that server there are bunches of other virtual desktops similar to the one running here. Similar, but not the same. In our experimentation with VDI, we saw that things like memory reuse / overcommit were lower than when running a bunch of similar servers on the same host. On the other hand, a desktop OS does not typically use as much RAM as a server OS.


My typical desktop OS assumption will be a 2GB VM with a single CPU. History tells me that using dual CPUs in a virtual environment only makes sense if the VM has a history of using more than one real processor a great deal of the time. If it does not, then the overhead of dispatching the second virtual processor is actually a performance decrease rather than an increase.


This is a fairly large VM: My desktop running Win7 only has 2GB, and runs fairly well most of the time. Windows XP can not even use 4GB without being 64 bit, due to some oddities in the way the memory management and other historic design issues work. I actually use my 2GB desktop as a virtual desktop when I am on the road, and it works quite well, so I think this is a good average design point for this model.


How many VDI's per host then? A survey of the sizing data out there from various vendors is pretty widely disparate on that number. It is also clear that there are heavy users and light users (See this VMware doc and this one from Parallels), and that unlike a typical server virtualization engagement where RAM is the first virtual host bottleneck, CPU and I/O seem to be the places that the hosts get hung up for virtual desktop. Going dense on the RAM probably means spending some time optimizing the I/O subsystem to be sure that the host can handle the maximum number of virtual desktops.


Then there is the way a given VDI solution's architecture influences those variables, such as Virtual Bridges' Leaf and Branch design.


Clearly, how many hosts it will take is going to be the loosest part of this estimate. Still, looking at the literature, and thinking about my general observations of the current VMware data from our BCO tool, I am going to drive a stake in the ground and say that a Dell R810 with 256GB of RAM and an optimized I/O subsystem should be able to handle a mix of around 200 heavy and light users. That may be high; that may be low. I admit it is something that would need a lot of validation, and it would really depend on what the VDI solution is, how you implement it, and how many people are remote. That means we'd need about 25 hosts, at about 1100 watts per host nameplate, or about 660 watts steady state after booting.
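That stake-in-the-ground host sizing, in code form (the 200 users per host and the 60%-of-nameplate steady state are the loose assumptions above, not measured figures):

```python
# VDI host count and steady-state draw for the model.
seats, seats_per_host = 5_000, 200

hosts = -(-seats // seats_per_host)   # ceiling division: 25 hosts
steady_watts = 1_100 * 0.6            # ~660 W steady state per host

print(hosts, hosts * steady_watts)    # 25 hosts, ~16,500 W total
```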


Power Costs and CO2 Emissions


How your power is generated, and where you are in the world, directly impacts how much CO2 is behind the electrons being pumped to your data center, as well as how much you pay for that electron pressure. Here is some data for four locations where we have data centers that I will use:


(Table: tons of CO2 per kWh, and cost per kWh, for the four locations.)


These are averages, and only useful for comparison. For example, Austin Texas is working on moving 25% of the city power to wind generated over the next few years, so a DC in Austin will have a lower CO2 footprint than a DC in, say, Dallas or Houston or Amarillo. Yes: They have things other than cows in Amarillo.


The Numbers


That should line up all the assumptions for the model. I'll assume this is being studied over a three year period, since that is the shortest amount of time anyone would probably keep a desktop before replacing it. The reality is probably better than that, given that the average PC life-cycle has lengthened.


Power of the Desktop


Number of Desktops: 5,000
Watts per Desktop (active): 225
Kilowatts total (active): 1,125
Kilowatt hours / week (active): 39,375
Watts per Desktop (inactive): 6
Kilowatts total (inactive): 30
Kilowatt hours / week (inactive): 3,990
Hours per week (active): 35
Hours per week (inactive): 133
Total kWh / week: 43,365
Total kWh / year: 2,254,980
Total kWh / 3 years: 6,764,940



Power of the Thin Client


Number of Thin Clients: 5,000
Watts per Thin Client (active): 75
Kilowatts total (active): 375
Kilowatt hours / week (active): 13,125
Watts per Thin Client (inactive): 6
Kilowatts total (inactive): 30
Kilowatt hours / week (inactive): 3,990
Hours per week (active): 35
Hours per week (inactive): 133
Total kWh / week: 17,115
Total kWh / year: 889,980
Total kWh / 3 years: 2,669,940


External SAN Storage Power for VDI hosts


400 TB @ 21 watts / TB: 8,400 watts
Hours per week: 168
Total kWh / week: 1,411
Total kWh / year: 73,382
Total kWh / 3 years: 220,147


Here is where it all comes together: How much money in power, and how much CO2 can be saved in this model, VDI over desktop?



For each of the four locations, CO2 followed by power cost:

3 yrs Desktop: 5,213,563 / $1,316,578 | 12,121,376 / $961,059 | 14,611,058 / $1,159,390 | 12,490,608 / $1,716,404
3 yrs Thin Client: 336,521 / $84,981 | 782,400 / $62,034 | 943,102 / $74,835 | 806,233 / $110,789
3 years VDI hosts: 209,892 / $53,004 | 487,992 / $38,691 | 588,223 / $46,676 | 502,857 / $69,100
3 years storage: 108,797 / $27,474 | 252,949 / $20,055 | 304,904 / $24,194 | 260,654 / $35,818

Total VDI solution: 655,209 / $165,459 | 1,523,341 / $120,780 | 1,836,229 / $145,705 | 1,569,744 / $215,707

VDI Savings over Desktop: 4,558,354 / $1,151,118 | 10,598,035 / $840,279 | 12,774,828 / $1,013,685 | 10,920,864 / $1,500,696


{Slightly updated: Found spreadsheet error on VDI hosts for MA. Fixed. Reuploaded SS}


One thing is for sure: I don't think I'd try to roll this out to 5000 people all at once. The assumptions here are large. The design / vendor choices influence this hugely. For fun I messed around with this model seeing how more and less efficient desktops affected the bottom line, and the answer of course was "in a big way". All my numbers are here, and I attached the OpenDoc format spreadsheet if you want to plug in your own numbers to see how your assumptions move the data around.


Finally, there are a bunch of good reasons other than being Green to use VDI. Moving your desktops inside your data center is a big one: it is much easier to back up your data assets when they are inside the glass house / internal cloud. Security is of course huge: lost laptops are not nearly as upsetting if, by losing one, all the data stays right where it was and nothing goes missing. Being able to pick up, move from device to device and place to place, and resume where you were in your work is a nice productivity enhancer.


But this is a Green IT blog, so we won't talk about those things....


So, there you are. You are standing in your garden of servers. You have picked all the low hanging fruit. You have virtualized most everything that can be. What next?


I have been thinking about this for a while. I have had a number of conversations with people about Green IT over the last year, and it is pretty clear that how much money you can save on power, and how much you can reduce your CO2 emissions as a result, really depends on where you are in your adoption curve, and what kind of servers you have.


In our case, go back seven years or so to our proof of concept time. We bought a few small (by today's standards) servers and rolled a number of VM's into them to see how it all worked. Even with the immaturity of the products back then, there was enough stability and enough cost savings that we kept buying more servers. Then we went to a "virtual first" policy: everything new was to go into virtual infrastructure unless a valid reason could be given for why it *had* to be real hardware. We are an R&D shop, so there were plenty of reasons, but still we were able to capture a large percentage of the new requests as virtual rather than real fulfillment.


As older servers failed due to age, we replaced them with virtual machines. It was a no-brainer at so many levels. As chip technology advanced for the servers, overhead dropped for the VM's. A VM would be far faster than a (for example) 10 year old computer.


Note here that I have avoided mentioning any specific technology. X86/AMD64 chips matured more quickly, but every vendor's chipsets and servers (CoolThreads from Sun/Oracle, Power from IBM, and HP's IVM) were moving forward down the virtualization road. The road the mainframe blazed all those decades ago with VM/370.


We have now more than halved our power consumption for R&D since we started this, dropping two megawatts to a current 1.2 megawatts. Each successive drop is getting harder now. 10,000 X86 class systems have been melted down and are now spinning around as someone someplace's car wheels.


What next?


There are still opportunities, but the 80/20 rule is starting to kick in. We are getting close to the end of the easy 80%. Time to step back and rethink, and look for new opportunities.


I currently see two places to go next. I am sure there are others, and I am keeping my eyes and ears open.


First is HVAC. Most of our data centers are fairly old. They do not have variable speed motors in the CRAH's. They are not organized well to keep hot and cold air from mixing. If we assume that cooling takes an additional 50% of the power the computers themselves consume, then that 1.2 megawatts is requiring another 600 kilowatts of power. I can get 300 KW of that back by modernizing the data center infrastructure itself.
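
Here is that cooling-overhead arithmetic as a quick sketch. The 50% overhead ratio is the assumption from above; the 25% "modernized" ratio is just my illustrative stand-in for what variable speed motors and better airflow might achieve:

```python
# Cooling overhead: HVAC power as a fraction of the IT load it cools.
def cooling_watts(it_load_kw, overhead_ratio):
    """KW spent on HVAC to cool a given IT load."""
    return it_load_kw * overhead_ratio

it_load_kw = 1200                            # the current 1.2 megawatts
old_hvac = cooling_watts(it_load_kw, 0.50)   # 600 KW today
new_hvac = cooling_watts(it_load_kw, 0.25)   # hypothetical modernized DC
print(old_hvac - new_hvac)                   # 300.0 KW recovered
```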


The other is virtualizing the desktop. This is where the real low hanging fruit is. I roughly estimate that we use 4.2 megawatts right now to power all the desktops in the office and at people's homes. The reason is simple: we are still using traditional desktops and laptops. Even with a full technology refresh to current gear across the entire enterprise, that won't drop more than 500 KW or so. To get the full power savings we'd also have to move to thin clients: 15-50 watts rather than 80-350 watts (depending on desktops, tablets, laptops, CRT, LCD, LED backlights, etc).
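
For illustration only, a tiny estimator for that kind of fleet math. The device counts below are made up; the per-device wattages come from the ranges above:

```python
# Fleet power estimator: sum of (device count x average watts), in MW.
def fleet_megawatts(devices):
    """devices: iterable of (count, avg_watts) -> total megawatts."""
    return sum(count * watts for count, watts in devices) / 1_000_000

traditional = [(6000, 250), (6000, 120)]  # hypothetical desktops + laptops
thin = [(12000, 30)]                      # same fleet on thin clients
print(fleet_megawatts(traditional))       # 2.22 MW
print(fleet_megawatts(thin))              # 0.36 MW
```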


Of course, that puts more servers into the data center, but not nearly enough to overrun the power savings. That also puts a lot of work happening at the edge squarely back into the data center. Handy for backups, DR, security, etc.


Then there are apps and SaaS: If low power edge devices can get to everything they need via one of these routes, the potential is there to save power in the Megawatts again.


The problem is that all that power is spread out all across the globe. Literally thousands of power bills each seeing a tiny bit of it.


Posted by Steve Carl Mar 1, 2011

Elementary particles that is.


In the Green IT community you often hear that "Not all Electrons are Created Equal". I like that concept because of the irony: Electrons are not really being created by power generation. Yeah: I'm a geek.


It is not that one can not create them: it is just that you need something like the Large Hadron Collider or similar high powered scientific alchemy to do it. Creating electrons requires a lot more power than is derived, but it is handy for finding out very basic things about how particles... maybe even the vibrating strings inside particles... are put together. See Brian Greene for details. Besides, at our current level of technology, an electron just sitting there creates no usable power. It has to be moving around in wires, creating electrical "pressure"... You know what? Not important.


How those electrons that power all that computer gear in your data center are “Created/shoved” is quite important though, no matter what the semantics are.




In my last post, "Watts Up?", I wrote about the power savings of virtualization. Power savings is of course not only a way to save a lot of money, it is also a way to reduce the amount of carbon (in the form of carbon dioxide) that is emitted into the atmosphere.


No matter how it is measured, CO2 is on the rise. The "longest record of direct measurements" of CO2 is at Mauna Loa, but ice core samples and other records show the recent historical rise of the gas as well.


And it is not just any gas either. While not the most potent of the "greenhouse" gases (methane, water vapor, and of course refrigerants are all stronger; more complete lists exist), it certainly is one. CO2 is something that plants love too, in the correct amounts. The problem is not that it is a greenhouse gas, or that it is in the atmosphere. The problem is the same one that the yeast have: too much of a good thing.


If you have ever made beer or wine, you understand my reference: start with a vapor sealed container full of liquid maltose or some other long chain carbohydrate. Add yeast. The yeast go to work happily breaking the long chains apart, and living off the energy that releases. In the process they fill their environment with the byproduct of their particular sugar metabolism: alcohol. The container being a small jar (relative to Earth), the alcohol sooner or later concentrates to a level that kills the yeast. Maybe 4 or 5% for beer yeast. 12 or 13% for wine yeast. Either way, sooner or later, the yeast fill their world with something that kills them. They don't stop eating the yummy sugar until it does.


We are a lot smarter than yeast in general. We know when we are doing bad things to ourselves and we change what we are doing. We know that taking two atoms of oxygen and binding them together with one atom of carbon (times millions of molecules) is something we do every time we breathe. It's not just a hobby either.


Our friends the plants like that CO2 thing about us and they reciprocate, up to a point. They “inhale” the CO2, add in some sunlight and some handy green chemicals they have on hand, and break the CO2 apart, absorbing the carbon for house building (both them and perhaps later, us), and releasing the oxygen back to us for future breathing. Carbon cycle 101. All those plants-of-history made the atmosphere tolerable to life forms like us by reducing the total percentage of CO2 in the air. It has been millions of years since the CO2 was as high as it is today.


More Power


All of which brings me (eventually) to my main point here. Like breathing, we are not going to stop using electricity if we can avoid it. But we can be smart about where our electricity comes from. Here is the starting place:



Pounds CO2 per kWh, by source:


  • Coal: 2.095 - 2.117
  • Oil: 1.915 - 1.969
  • Natural Gas: 1.314 - 1.321


To create one kilowatt of electricity for one hour, carbon is being liberated from where it was sequestered away and mixed in with oxygen to create literally pounds of CO2. The poundage goes up way faster than seems obvious at first. I only put in this small amount of carbon: how did it turn into pounds? It's part of why the carbon dioxide numbers add up so fast.


Recipe: take one carbon atom. Atomic number 6, atomic weight 12ish. Bind on two oxygen atoms. Atomic number 8, atomic weight 16ish each. One newly created molecule: atomic number 6+8+8 = 22, atomic weight 12+16+16 = 44.


That one little carbon is now 3.67 times heavier than it was before it got married.
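
The same recipe, worked in a few lines of code using the "ish" weights above:

```python
# One carbon atom picks up two oxygens and becomes one CO2 molecule.
C, O = 12, 16            # atomic weights, roughly ("12ish" and "16ish")
co2 = C + 2 * O          # molecular weight of CO2
print(co2)               # 44
print(round(co2 / C, 2)) # 3.67 times heavier than the lone carbon
```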


It's in the Mix


No state or country generates all of its power using the same mix of power sources as all the others. Some power sources emit CO2 principally when they are being built. Cement creation emits CO2, about 1 ton of CO2 per ton of cement, though concrete also absorbs some CO2 over its lifetime, offsetting part of the emissions.


Solar and Wind and Hydro are all CO2-mostly-when-being-built kinds of power sources. Nuclear uses a lot of concrete, but also produces hazardous waste of a completely different kind.


Coal is not only the largest emitter of CO2 of the carbon based power sources, but it also can, depending upon the age of the power station, the regulations in place when it was built or retrofitted, and the source of the coal, emit mercury, sulphur, and soot.


Given all those things, it matters where the power feeding your data center originates. For the US, this web site is very useful:


Based on the information (from 2009) there, I calculated the pounds of CO2 per kWh, given the mix of power sources, for three states where we have large R&D data centers:


Location                        Source   Lbs per kWh   Share     Contribution (lbs per kWh)
Texas (9% Nuclear)              Coal     2.1           35.00%    0.735
California (19.3% Nuclear)      Coal     2.1           1.00%     0.021
Massachusetts (12.6% Nuclear)   Coal     2.1           18.00%    0.378




That is quite a spread in just three states. In 2009 nationally, coal was 30% of the mix, gas was 29%, and oil (not present in any of the states I looked at) was 19%. 11% was nuclear, and 11% was renewable, so that the average CO2 emitted per kWh in the US is 1.368 pounds.
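
The arithmetic behind the table is just a weighted sum: each source contributes its pounds of CO2 per kWh times its share of generation. A minimal sketch, using the coal rows:

```python
# Contribution of one source to a state's pounds-of-CO2-per-kWh figure.
def contribution(lbs_per_kwh, share):
    return lbs_per_kwh * share

# The coal rows from the table above (2.1 lbs/kWh for coal).
for state, share in [("Texas", 0.35), ("California", 0.01),
                     ("Massachusetts", 0.18)]:
    print(state, round(contribution(2.1, share), 3))
```

A full state figure would sum `contribution()` over every source in that state's mix.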


That puts Texas right on the national average, and the other two below it, with California *well* below it.


Where a data center is located can have a dramatic impact on the CO2 that the data center is responsible for. Just as "Free Cooling" and "Air Side Economizers" only work in certain locations and/or at certain times of the year, where the data center sits matters.

Watts Up?

Posted by Steve Carl Feb 14, 2011

In all the news and (let us admit it) hype around virtualization, one thing about it is generally accepted as true: Virtualization saves power. If you spend less on power, you therefore also save money, and emit less CO2.


It's intuitively obvious. But I wondered: based on our current technology, what does that actually look like? Is it also measurably obvious? I have been talking here about some of the false paths intuition can take one down... How much are we really saving? What is the ROI?


I want to build this out carefully, and show assumptions and factors being used so that it should be reversible into other situations and platforms. This is a bit dry, and has math and stuff, but there is a pot of gold at the other end!


First of all: What is being replaced?


The Source


The root cause of why virtualization works is that computers are fantastically underused most of the time. My Linux laptop here, the one I am writing this on, has 4 processors and 8 GB of RAM. Writing this blog, with email up, Firefox up, a virtual machine running MS Windows, and a few things like weather widgets and inbox monitors, uses about 8% CPU on average. 58% of the memory is in use, and most of that is dedicated to the virtual machine.


If the VM was not up, I would be using far less of everything. This laptop has a 130 watt power supply. My Linux PC over there (see it? The one on the left...) has 6 GB of RAM that is 20% in use (the VM is not up), and a CPU averaging about 5%. It has a 305 watt power supply rated at 76% efficiency. The older PC's we have in some of the R&D data centers that are acting like servers have up to 450 watt power supplies.


Figuring out how much you are using, and of what resource, is always the first problem in virtualization if you have to do the ROI ahead of buying the infrastructure, which of course you do. You can not just jump up and down with money in your hand and sing about how it will save money to virtualize. They will not believe you. They might lock you up.


If you have no idea at all how much power your systems might be using, you can add up all the wattages of all the power supplies, and multiply that by .6 to get in the ballpark. I talk about this diversity factor in my "By the Numbers" post.
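
A sketch of that ballpark method; the power supply ratings in the example are hypothetical:

```python
# No-measurement ballpark: total nameplate PSU ratings x 0.6 diversity.
def ballpark_watts(psu_ratings, diversity=0.6):
    return sum(psu_ratings) * diversity

# Hypothetical trio: two old 450 W boxes and one 305 W desktop.
print(ballpark_watts([450, 450, 305]))   # 723.0 watts
```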


Here is where having a good performance and capacity planning strategy pays off: If you know how utilized these systems are, and how much power they use, you can figure all this out in the tool, or at least in a spreadsheet using data from the tool. Money people like spreadsheets.


The Target


You know what you have; now you need to decide where you want to go, both hypervisor and hardware-wise. It matters what chipsets you use for this: for example in X86 space, the latest ones from AMD and Intel have many features that help bring virtual machine performance to nearly native levels of execution. One or two generations back, AMD was ahead in the virtualization assist department. Power 7 is better at virtualization than Power 6. T3 is better than T2 because it has more threads and memory slots. Etc.


For X86 we use a mix of hypervisors (KVM, Xen, VMware, Hyper-V, etc) and servers (Dell, Cisco UCS, Sun X series, etc) here because we do R&D and we support a wide range of platforms. Almost any virtualization one does will end up saving power. Your exact numbers depend on your choices, and on whether you are able to convince people that fewer, larger systems use less power than many smaller ones, and that the up-front acquisition costs have short or at least medium term cost recovery in them.


For ease of internal pricing and provisioning, we classify our virtual machines into several categories:


  • Small: 1 VCPU, 2 GB RAM, 40 GB of disk space
  • Medium: 1 / 4 / 60
  • Large: 2 / 8 / 100
  • X-Large: 4 / 8 / 180


We can also do custom versions but these are the standard sizes. Each one has a different cost allocation, so R&D can look at their budget and then pick what sizes they need from there.


What is key here is knowing how many of any given size of these we can fit on a VM server.


The Target: RAM It


Most of the time, unless you are doing something obviously CPU intensive like analyzing seismic data or crunching SETI results, the key is RAM. Buy as much of it as you can, then buy some more. For this example, we'll use the Dell R810 with 256GB of RAM.


The R810 is a nice green server. Two redundant 1100 watt power supplies. 2U rack space. Can go up to 512 GB of RAM, although that means using very expensive DIMM's, so 256GB is a good compromise between price and density (Please future people reading this: This was state of the art in 2011. Try not to laugh at our puny memory configs. We know that you'll have that in your Android phones soon...).


Memory is always our limiting factor. On average over our 10,000+ R&D VM's the CPU will be at about 50% and the memory over 80% utilized. That makes it easy to figure out how many VM's of any given size will fit on a server. For our example R810:


  • Small: 124
  • Medium: 62
  • Large: 31
  • Extra Large: 31


Knowing this, I need one other data point to figure out my first pass at watts per VM: What will the average virtual machine size be? Not everyone will buy the same size VM. Depends on what they are doing, and what OS they are running and whether there is an RDB inside there and all sorts of similarly unpredictable things.


Again, here is where that capacity planning pays off.


Our numbers of interest here are 1.4 Virtual CPU's and 3.3 GB of RAM on average per VM. Allowing some RAM for the hypervisor, that means I can run 75 VM's of average size on our target R810.


Taking into account the diversity factor, on average computers here consume 314 watts each (523 watts * .6 diversity factor), or a total of 23,550 watts for 75 of them. Even if the R810 were using all 1100 watts (which it isn't), it is easy to see that the power reductions look promising. Intuition may be right after all.
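
Here is that packing and power math in a few lines of Python. The 8 GB held back for the hypervisor is a stand-in number for "allowing some RAM for the hypervisor":

```python
# RAM-limited VM packing, then the standalone fleet it replaces.
server_ram_gb = 256
hypervisor_gb = 8          # assumed RAM reserved for the hypervisor
avg_vm_ram_gb = 3.3        # our measured average VM size

vms_per_server = int((server_ram_gb - hypervisor_gb) / avg_vm_ram_gb)
print(vms_per_server)      # 75 average-sized VMs per R810

avg_standalone_watts = round(523 * 0.6)       # 314 W, diversity-factored
print(vms_per_server * avg_standalone_watts)  # 23550 watts replaced
```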


The Net of it


This is also about 5 Ethernet ports per R810 rather than 75: 4 regular Ethernet ports, plus a management port. We'll use one to go off into a private network for the ISCSI storage, one for VC management, and 2 for general VM traffic. If you were using 24 port switches, this drops 4 switches down to 2: one for the public network, and one for private ISCSI traffic. A network switch only uses about 100 watts though, so that reduction is only about 400 watts down to 200. Not huge. Of course the second R810, and the third, and the fourth don't need new network switches either.


The power reductions are not dramatic, but the capital outlay is. I bring it up only to drop it from the discussion, since I am also not going to look at the floor space reductions, the DC size reductions, the fewer lights it will take inside the new smaller DC, etc.


Storage Story


A stand alone server powers not just its memory and CPU with its internal power supply, but its internal disk as well. We could put disks in our example R810: easily enough to hold 75 VM's, but that does not scale out. A real VM deployment of any size is going to need external, sharable disks.


I need a watts per GB, and an average number of GB per VM, to get to the next step of this story. Since we are talking about ISCSI and Dell gear here, I'll keep it in that range and figure out the watts per GB for EqualLogic. Our standard config for that is:


  • PS6000X: Quantity 1, for faster storage, 511 watts (computed from max BTU per hour rating)
  • PS6000E: Quantity 2, for less accessed data. 456 watts x 2 = 912 watts (computed from max BTU per hour rating)


Total GB in RAID 50 with hot spares: 31,200. Total watts: 1,423. Watts per GB: .046
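
The same watts-per-GB math in code, applied to the standard sizes:

```python
# Storage watts per GB for the EqualLogic shelves listed above.
total_gb = 31_200
total_watts = 511 + 2 * 456        # PS6000X plus two PS6000E = 1423 W
watts_per_gb = round(total_watts / total_gb, 3)
print(watts_per_gb)                # 0.046

for name, gb in [("Small", 40), ("Medium", 60),
                 ("Large", 100), ("X-Large", 180)]:
    print(name, round(gb * watts_per_gb, 2))  # e.g. Small -> 1.84
```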


I can now use the standard and average sizes to compute that:


  • Small: 40GB: 1.84 watts
  • Medium: 60GB: 2.76 watts
  • Large: 100 GB: 4.6 watts
  • Extra Large: 180GB: 8.3 watts
  • Average VM GB allocation: 58.3 GB: 2.7 watts.


Reminder at this point: when I use averages here, these are our averages, derived from our performance data. Caution too: these VM sizes are ones we picked based off our studies of what our internal customers needed. Your sizes and mileage may vary, but the technique for figuring out this ROI stays the same.


Keeping it cool


We are being fair to the stand alone server in the last section because its power supply had to power its storage. Seemed only right since this section will not be a happy one for the stand alone server. 




The stand alone machine would more than likely prefer we forgot about it.


A watt of power generates about 3.4 BTU per hour of heat. 23,550 watts for the stand alone servers is 80,070 BTU per hour that needs to be cooled back out of the room. The R810 running 75 VM's plus storage is going to be about 1,200 watts, or 4,080 BTU per hour.


So, what is a good number for how many watts of HVAC is needed per BTU to be dealt with?


It varies. A lot.


Are you doing hot and cold aisle isolation? Are your HVAC units maintained? How new are they? Do you have any option to use outside air to cool your DC? Is free cooling an option?


On the other hand, these stand alone servers and this virtual host are sitting in the same DC, and whichever number we find will be used for both, so it stays fair even if not 100% accurate for any given situation.


Common wisdom is that data center HVAC takes an additional 50-60% of whatever the power draw of the DC is. If the DC is using 100 KW, then the HVAC is using another 50-60 KW. Let's go with the lower number for this, to assume that the data center is slightly more modern, and is using more efficient HVAC. I tried to get a look at the power nameplates for our 10 and 20 ton units here, but they are hidden away against the wall, or we would have a real number to use.


50% makes for easy math though.


  •     75 Standalone Servers: 23,550 watts + 11,775 HVAC watts = 35,325 watts (about 35 KW).  
  •     1 R810 with 75 VM's + storage: 1,200 watts + 600 HVAC watts = 1,800 watts (about 2 KW)  
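
The heat and cooling math behind those bullets, sketched out:

```python
# A watt of IT load is about 3.4 BTU/hr of heat, and the HVAC is
# assumed (per the 50% figure above) to draw another half on top.
def with_hvac(it_watts, hvac_ratio=0.5):
    return it_watts * (1 + hvac_ratio)

standalone_watts = 23_550
print(standalone_watts * 3.4)       # 80070.0 BTU/hr to pump back out
print(with_hvac(standalone_watts))  # 35325.0 watts, about 35 KW
```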


Round Numbers and the Pot of Gold


The stand alone servers are using almost twenty times as much power, and that directly maps to almost twenty times as much money and CO2. This does not even count that 75 servers would need three or four racks versus a 2U slot plus 12U for the shared storage. What does that look like in terms of money and CO2?


Money today. Rates vary from country to country and coast to coast. Here in the US it is ranging from 8.2 to 16 cents per kilowatt-hour for our offices. Park your DC next to a hydroelectric dam like Google did and you can probably do better.


Non-leap-years have 8760 hours in them, and we'll look at a three year lifecycle (26,280 hours), so:



3 Years Gold... err.. ROI

(The table here compared the standalone power price range, the VM price range, and the cost savings, at 8.2 and 16 cents per kilowatt-hour. Per 1 KW, three years is 26,280 kWh: about $2,155 at the low rate and $4,205 at the high one.)

Note that I used the *full* rating of the virtualization server's power for this, and applied a diversity factor to the physical servers, giving another slight advantage to the physical over the virtual... and still it came out this way. This is only one server, so it does not matter that much, but think about this same math applied across 10,000 real machines that later became virtual machines. Multiply the above numbers by about 133... big power reductions. Big 3 year ROI.
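
The three year arithmetic, sketched so you can plug in your own rates. The 1.8 KW virtual figure is an assumption built from the roughly 1,200 watt server-plus-storage load and the 50% cooling adder:

```python
# Three-year power cost: kW x 26,280 hours x dollars per kWh.
HOURS_3YR = 3 * 8760                # 26,280 non-leap hours

def three_year_cost(kw, dollars_per_kwh):
    return kw * HOURS_3YR * dollars_per_kwh

standalone_kw = 35.325              # 75 standalone servers, HVAC included
vm_kw = 1.8                         # assumed: 1.2 kW of gear + 50% cooling

for rate in (0.082, 0.16):
    saved = three_year_cost(standalone_kw, rate) - three_year_cost(vm_kw, rate)
    print(f"${saved:,.0f} saved at {rate * 100:g} cents/kWh")
```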


Also note that I used an all-Dell example here: The math applies to Cisco UCS or IBM X series or HP DL's... You can even use the same approach for LDOMS and LPARS and IVM's.


You just have to plug in the wattages and figure out how many VM's of a typical size can run on a given model.


Yeah: Just that. OK: Did I mention that having a capacity planning capability is key here yet?


Next Time: What about the Carbon?

Spaced Out

Posted by Steve Carl Feb 4, 2011

In my last post I mentioned something about using intuition in a data center design. In that context it was about how the DC is cooled in general: The idea being that how one, as a human being, goes about cooling oneself is not necessarily the best way to cool down a room full of computers.


I have an item I wanted to bring up, related to the same general idea about human intuition:


    Rack mounted servers cool front to back.


That's it. Pretty simple. Open up a server and you'll find a number of fans, and they are all pulling air in the front, and shoving it out the back.


Not top to bottom. There was a time when networking gear did that, but hopefully you do not have any of that around any more. Even if the server had top-looking vents in it like these:





These are not servers that need to be spaced out. They are not top-cooling. Here is where the natural human reaction gets you into trouble. These look like top vents. These look like you can not put anything else directly on them. But you can. In fact, I am not even sure why these vents are there. Here are two of these same model mounted with no spacing:




Not only does that blue handle not block all the air flow, look at all the holes across the back. The fans will be able to keep the air moving.


Because of that odd design, and the human reaction to it, some of the servers were mounted like this:



1/3 of a U between each server. Now the air not only flows front to back, it flows back to front, and hot air leaks out into the cold aisle.


Since this picture was taken, blanking plates were installed below the systems, but nothing can be installed between them because the smallest blanking plate is 1U. I guess we could stuff socks in there, but the last post's experiment was probably as far as I should take unusual air containment exercises.


Also note that above the top server there is a 2/3 of a U gap. No blanking plate for that either, and as heat rises and hits the ceiling, it then pushes out more at the top of the rack than at the bottom. That is literally the worst place to not be able to put a blanking plate.


A different person racked up the same server types this way, in the rack right next to this one:



This rack has proper blanking plates all around it now, and it does not leak hot air back into the cold aisle like the one right next to it does.


These servers are the same age, and have been kept at the same temperature all their lives. The ones mounted right next to each other have not had any heat related failures of any sort, but because of the air leakage I have less efficient HVAC.


This is not to say that you can not install with spaces. On 1 U servers, inserting 1 U blank spots between each one makes it far easier to do wire management, and therefore have clean airflow across the back. It also brings more of the servers up higher in the rack where they are easier to work on. There are 1U blanking plates to put between each one, so you can keep the hot air where it should be.


The Green IT intuition should be this: keep your hot air hot, and your cold air cold.

Isolation Experimentation

Posted by Steve Carl Feb 2, 2011

Pick up almost any publication about Green IT: any magazine article, web article, white paper, or blog (including this one) and you will learn that one of the easiest ways to get your data center a bit more green is to isolate your hot air from your cold air. Even a magazine like "Processor", which you would think would have more to do with... well... processors... has, in their January 14th issue, an entire section devoted to Green IT. Green IT is hot (we're coming for you, Cloud!). And sure enough, in an article titled "10 Things You Can Do Right Now To Be More Green", the number one and two items have to do with not mixing your hot and cold air.


If you are the Green IT person wherever you are, then you might think this advice, and the fact that it is *everywhere*, is obvious and maybe even oversubscribed. As the Green IT person here at BMC, though, I have found that there is actually nothing obvious about it. In some ways, it goes against the natural tendency. If you keep a room cool, that is good, and putting in big fans and moving air around is also good, at least at some lizard brain level. That's how we stay cool as people, after all. Never mind that computers don't sweat... hopefully.


I had the recent opportunity to put the "don't mix your hot/cold air" concept to a test. People thought there was a mad scientist loose in the building for a bit, and perhaps there was. It was informative and educational for some who thought I was being a bit pedantic for insisting that hot and cold air be kept in their own spaces.


The key efficiency one is after with isolation is this: HVAC prefers hot air on the intake side. When one can drive a temperature differential of about 30 degrees, the HVAC can run as much as 50% more efficiently. That maps either to less power for cooling, more cooling capacity available in the case of a failure of some HVAC component, or more equipment in the same amount of cooling. Any way you slice it, you are saving power, CO2, and money.


The learning opportunity came, as they so often do, in the middle of failure.


Setting the Scene


One of my R&D data centers is fairly small. 60 tons of HVAC, split between two 20 ton units, and two 10 ton units. Over time this DC has been refit twice, with HVAC added, and airflow re-arranged each time. Originally it was three smaller data centers that were physically right next to each other, but they were made, over time, into one larger data center.


The history of all that means that the HVAC were installed at different times, and are therefore of different ages. The failure was twofold: The same 10 ton unit failed twice, separated by about 3 weeks, for two different reasons.


The history of this DC also means that it was not really arranged optimally for hot / cold isolation. Racks were in rows, but in some areas hot backs of racks sprayed directly into cold aisles. There were no rack blanking plates. There were holes in the rows where racks used to be but were pulled out of service.


It stayed cold because there was enough HVAC in the room, but everyone kept telling everyone else that no more gear could go into the room, or it would overheat. Even professional HVAC people kept telling us that the room was running at capacity, though since they had no data about the actual installed gear that was based solely on walking around and feeling the heat of the room.


Even a professional's intuition can be wrong. My measurements of the power in the room, and my study of the gear in the CMDB, told me that we had *theoretical* capacity left in there, even though when I walked around in the room my body told me that the room was getting too warm.


The First Failure


The first failure of the 10 ton unit was a fan coil. It would not hold refrigerant any more. Nothing to do but replace it. That would take over a day, as the part had to be ordered. In the meantime the room overheated, with the cold aisle running well into the 96+ degree Fahrenheit range, and that was with 4 or 5 tons of supplemental HVAC (portable cooling units) brought in and arrayed in a sort of surreal, Robbie-the-Robot sculpture garden, their fat white arms all pointed at the servers.


Science Project


This event provided the inspiration for some changes to the room, and we ordered blanking plates for all the racks. That alone would be a huge improvement, but I was curious how far it could be taken. How far could I drive the temperature difference? Could I get to 30 degrees difference in the room? To test that, we used some tall cardboard to block and re-direct hot air into returns rather than into the cold aisle. A sheet of plastic went over the empty rack slots. Even cardboard slats went on the tops of the racks, to continue the hot air chimney up closer to the ceiling. Hot air does indeed rise, but it also likes to spread.


It was, in a word, ugly. Anyone who cared more about form than function would have been driven screaming from the room. Some interpreted the presence of all that new "air management" as indicating a new, unknown-to-them problem. One used the cardboard as a canvas to express latent artistic talents. Some looked, shook their heads, and sighed.




"The Experiment" was a success. It did in fact drive a 27 degree F difference from the hottest place in the hot aisle to the coldest place in a cold aisle: 76F to 103F (measured with a digital thermometer, left to stabilize in each location for at least 5 minutes). It was not uniform: the average difference was maybe 20 degrees. That is still pretty good for an impromptu science project, done with parts found laying around the data center.


Failure, Part Two


I was getting ready to take the beast apart. The point had been made, and the data collected.


The 10 ton unit failed again. It was unexpected. It was hard to believe: it had just been fixed, hadn't it? How many parts are there to fail in that thing?


This time it was the fan. This time, because all the blanking plates and other isolation were still in place, the cold aisle spiked to only 87 degrees F, and that with only 1 ton of supplemental air. All the other "Robbies" had been moved to other exhibits.


The increased efficiency of the remaining 50 tons of HVAC was not enough to completely offset the failed unit, but it was enough to keep the room from getting nearly as hot. The air isolation was far from perfect: Easily seen in the picture. It was made with cardboard! It leaked like a sieve. It was just way better than it was before.


Side note: the first failure led to 12 failed disks in various servers across the data center; that is to say, 12 *more* disks than would have failed in that same time frame anyway. Disks fail all the time, but not at the observed "hot" rate. The heat increase had claimed its victims. The second HVAC failure had no additional, above-and-beyond-normal disk failures.


Clearly, part of this lower failure rate the second time around was that the marginal disks had just been replaced, so it was less likely there would be as big a spike in failures. Be that as it may, I am fairly sure part of it was that the servers and their disks stayed cooler.


Back to Normal... Mostly


The cardboard is down: It would not do to block a sprinkler head. The lessons are learned and the results are tabulated. Two conversations afterwards in particular showed the value of the air management experiment.


One was with an Enterprise Architect. Another with a former data center manager. This sums it up; they said, in essence: "I thought the room was out of HVAC, and that we could not put anything else in there. Now I see. Now I feel it, when I walk around the room. The cold aisle is colder!... and boy is that hot aisle hot!" People walk around looking at the roof now, going "Hey! I bet if we moved that return we'd get more isolation!"


This experiment proved the Green IT math works. The greener data center designs we do in the future will take these lessons into account. As a side effect, it underlined for me that isolation of airflow has consequences for how the fire suppression system would also need to be designed.


Just by being in flight when the second failure happened, the experiment probably saved us some server repairs, and therefore outages, along the way. The law of unintended consequences is not always against you.


The experience validated another assumption of mine, and that was the subject of my last post: the higher heat in the ASHRAE standards only works if you have gear built *after* they came into being. Our gear is a whole range of models and types, required because we support such an array of platforms. We'll have to keep that cold aisle a bit colder for a while longer yet. That makes it even more important from a Green IT point of view not to mix the hot and the cold air. It is important both to support our customers and to run our DC's as good citizens of the planet.

Running Hot and Cold

Posted by Steve Carl Dec 15, 2010

It is a really good idea to run your data center warmer these days. Except when it isn't.


Data centers need capacity planning, just like the computers that run in them. One size does not fit all. There are standards like ASHRAE that give great ideas and guidance, but at the end of it all, building a green data center requires more than just blind adherence to some standards. At the same time, physics underlies it all. There are some things you can do, and others that are ... Suboptimal.


Example: the most recent set of ASHRAE standards indicates that it is good to run your data center warmer than in the past. The range of allowable humidity is higher too.


If I were to blindly follow that, I would find computers failing all over the place. The reason is simple: no one told the older computers about it being OK to run warmer. A natural consequence of BMC's heterogeneous and deep platform support is that we have quite a number of computers designed and built before the new standards were set. They like it cold.


It is even more complicated than that though. You knew it had to be.


Fans and Power


Fans use power as the cube of their rotational speed. Not the square. Not linear. Airflow is linear.
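The fan affinity laws behind that statement can be sketched in a few lines of Python. The base numbers here are made up for illustration, not taken from any vendor's spec sheet:

```python
def fan_scaling(speed_ratio, base_airflow_cfm, base_power_watts):
    """Fan affinity laws: airflow scales linearly with fan speed,
    while power scales with the cube of fan speed."""
    airflow = base_airflow_cfm * speed_ratio
    power = base_power_watts * speed_ratio ** 3
    return airflow, power

# Hypothetical server fan: 100 CFM at 10 W at normal speed.
# Doubling the speed doubles the airflow but takes 8x the power.
print(fan_scaling(2.0, 100, 10))  # (200.0, 80.0)
```

That cube is why every server in a room spinning its fans up to maximum produces such a dramatic jump on the power meter.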


The best example of that I have seen was when I was building a new data center in San Jose. The DC was going into an existing structure, but we had more or less gutted the building and custom built out the space to meet our needs. I had loaded up a fair number of servers into the shiny new data center, as we had certified and commissioned the new DC before we had started moving the people into the adjacent office space.


Building management had scheduled the fire marshal to come over one evening to look at the fire panel for the occupied space. Someone tried the alarm on the panel, which was connected incorrectly to the UPS, and the UPS dropped the line power to the building. I guess this was a good thing, as we found out about the incorrect configuration of the fire panel / UPS.


There was no backup generator, so the data center HVAC went off line, but because the UPS was working, the servers stayed up and running. I happened to have been in the UPS room looking at the UPS control screens because it was a new model that I was not familiar with, and I was learning how to drive the various displays. Set up SMTP and SNMP, etc.


From the UPS room, I could hear the sounds of the data center, and therein began a howl. Slowly at first, but building to a nearly deafening roar, every cooling fan in every server came online, and sped up in increments to its maximum speed. I had no idea how quiet the room had been till it wasn't.


I watched the power drain on the UPS with interest. It increased from 160 KVA to 270 KVA. To drive the fans at their maximum speed was chewing power at an incredible rate. Even for all that, and for the outage not being all that long... about 20 minutes... I had two older computers lose hard drives from the heat.


The point here is that ASHRAE says that it is OK to warm your cold aisles, and it is to some degree, but what degree that is will depend utterly on what type of computers you have, and at what temperature they are going to start cranking up their fans to stay cool.


It is not more power efficient to spend less on A/C if you are spending more on making fans spin faster. Where that inflection point is I can not say. Data required. I know for us it is not very far away from the old ASHRAE standards.


Too Much of a Good Thing


Your data center A/C units, be they CRAC's or CRAH's, like temperature differential. Give or take an elephant, 30 degrees F is good. In hot/cold aisles, with no air mixing, that means that if your cold aisle is 68 degrees, your hot aisle is 98 degrees. No one is going to want to spend much time in the hot aisle, and when they do they will be wearing Hawaiian shirts and Bermuda shorts. By keeping the air from mixing, your A/C can be as much as 50% more efficient. Since DC HVAC is 40% of the power bill (and therefore contributes hugely to your CO2 emissions), running the HVAC at maximum efficiency is paramount to a Green DC.
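The aisle arithmetic is simple enough to sketch. The 30 degree delta-T below is the rule of thumb from this post, not a hard spec, and temperatures are in degrees F:

```python
def hot_aisle_temp_f(cold_aisle_f, delta_t_f=30):
    """With good hot/cold aisle isolation, return air temperature is
    roughly the cold aisle supply temperature plus the delta-T the
    servers impose across themselves."""
    return cold_aisle_f + delta_t_f

print(hot_aisle_temp_f(68))  # 98: Hawaiian shirt territory
print(hot_aisle_temp_f(80))  # 110: hazard pay territory
```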


Some people like to go to warmer climates, especially during the winter, but there are limits. Increase the temperature in the cold aisle to the ASHRAE maximum (assuming all your gear is new enough to be able to run at that temperature) and your cold aisle is at 80 degrees. People are not utterly uncomfortable at that temperature, though they will probably be thinking about putting on the shorts and tennis shoes. The hot aisle is another story. At 110 degrees, there will have to be hazard pay to go in there. Next to the fire extinguisher will be hydration stations.


To drive that kind of temperature differential also requires the servers be racked at a density that can generate that level of heat. Lots of ifs and caveats here, and so once again you have to know exactly what kinds of servers and densities are even possible with your specific mix of servers. In some cases, our gear is old enough that even though the power supplies are not very efficient, the server is so large that I can not pack them together that close. In the case of the old Tandems, I can't easily rack them at all...



Too much HVAC is inefficient and a poor use of power and therefore carbon-footprint intensive. Not enough and your servers run too hot and fail. It is fairly easy to figure out how much A/C is the right amount, as noted in my two previous posts here ("By The Numbers" and "Whats in a Name(plate)"), but not addressed there is the idea of failure, in the sense of what to do when you lose an HVAC unit of some sort. Things fail. Entropy is law.


A recent case for us was where a 10 ton CRAC failed in one of the R&D DC's. There were three other units in the DC: two 20 ton units, and another 10 ton. The problem was that there was not enough cooling left in the surviving units to deal with the heat load until the 10 ton could be repaired. 


In theory there should have been one more 20 ton unit available: powered down, but piped into the common plenum so that it could assume the workload of the largest possible single failure. Alternatively (and what we did) about 10 tons of heat load had to be powered off till the HVAC technical crew had a chance to get the unit repaired. In this case, parts had to be ordered, and there was a multi-day wait. Supplemental air was brought in. Not pretty.




We had some latitude in how this was dealt with because the room is normally cooled to about 70 F in the cold aisle. That extra 10 degrees bought us time, and meant that we did not have to power down quite as many systems. We could, for a short time, run warmer.


For this lab, an idle 20 ton unit would be a hugely expensive investment relative to the room's workload. A spare HVAC unit may make sense when it is 5% of the total room or something, but not when it is 30%. Then it is expensive insurance.


The fans ran faster while the 10 ton was being repaired, and we used more power because of that for the duration, but it was a short duration.



I am not saying that one should pay no heed to ASHRAE: far from it. I am saying that in the effort to both design and run a green data center, understand that ASHRAE issues guidelines, not rules of nature like the second law of thermodynamics. Apply them knowledgeably to your particular set of servers, and also to the future plans for the data center.


The lessons of BSM are clear: you can not manage what you can not measure (in this case, the potential heat load), and to manage effectively (i.e., to be efficient) not only saves you money, it makes your DC greener.

What's in a Name(plate)?

Posted by Steve Carl Dec 2, 2010

In my last post, "By the Numbers", I talked about the Diversity Factor, and why it is important to know your real one. The DF is in turn based off your "Nameplate" power rating. I talked a little about this, but I think it is worth a deeper dive: while it is straightforward most of the time, it is not always. Getting this wrong can be a disaster, either in a new Green data center design, or, in the case of moving something to a Co-Lo, in having to go back and re-adjust the parameters of your contract at a disadvantage.


The Nameplate is a label attached somewhere on the power supply. It may not be visible from the back, and you may not easily be able to pull out the power supply to find it. Most computers have just one kind of power supply per model, but there can be sub-models and variations that mess with any assumptions you might make here. In some recent models I have even seen two listed power supply options, one of them intended to be more power efficient, supporting things like taking the server into a powered-off state during times of inactivity, and then powering back up without a command from a KVM or remote control, or even someone standing there pushing the button. Rather, it powers up as software determines that load in the cluster is increasing, and the server's RAM and CPU need to get on duty.


I also noted in my last post that most power supplies today are "World" power. They can deal with 50 or 60 hertz A/C, and voltages ranging from 100 through 250, all without giving you blue smoke. In the technical world, it is considered bad to let the blue smoke out of the computer, because you can never get it back inside. That same auto-ranging capability means that you have to know the wattage of the power supply by explicit vendor statement. It has to say something somewhere about 1100 watts or 2000 watts or whatever. Volts times Amps does not get you there.


So what if there is no nameplate, or if there is no easy way to get at the nameplate because the server is powered up and the users would be grumpy about you powering it down to find the nameplate?


Also, what if you can see the nameplate, but the server has more than one power supply? Is it active / active? N+1? N*2? Active / passive? No way to tell from the nameplate.


The Google is of course your friend.


Searching the Vendor Sites


Some vendors are better than others about keeping the data about their computers online. Others are very aggressive about removing data on older systems they have discontinued. Kudos here go to Sun (pre-Oracle: I am watching to see if they maintain this level of goodness), and special mention to Dell. IBM is a problem for really old systems, both because things can be spread out quite a bit, and because they use the word "Power" in their server names, leading to many false trails. HP's (and therefore Compaq's and Digital's) doc is very good. Or very bad. Or very missing, depending on what you are looking for. Cisco is pretty good, though the different generations are documented in different places in different docs.

Some old stuff is just not findable.


When trying to find out something like nameplate wattage, these keywords in various combinations are what I have found the most useful:


  • Model Name (like Enterprise 250 or 7015-r40)
  • The vendor name
  • watts
  • power (except for IBM where this is nearly useless)
  • specification / specs / "technical specification"
  • "power supply"
  • Searching used computer depots for replacement power supplies
  • Maximum BTU


About that last one: You can reverse into wattage from BTU. Make sure you use the Maximum BTU rating to keep everything in Maximum Wattage until you apply the DF. You only want to apply the DF once, and to the right number.


The conversion is 3.414 BTU's (per hour) per watt. 680 maximum BTU's in a specification sheet means the maximum wattage of that power supply is about 200 watts.
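As a quick sketch of that reversal, using the 3.414 conversion factor from this post:

```python
BTU_PER_HOUR_PER_WATT = 3.414  # BTU/hr dissipated per watt drawn

def max_watts_from_btu(max_btu_per_hour):
    """Reverse a spec sheet's maximum BTU/hr rating back into
    maximum (nameplate) watts."""
    return max_btu_per_hour / BTU_PER_HOUR_PER_WATT

print(max_watts_from_btu(680))  # about 199.2, i.e. the ~200 watt supply above
```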


Again: Maximum rating. Keep it all the same. Some vendors will give you an estimate of the median power usage, and they may give it in watts or BTU's. Doesn't matter. They have no idea what your config is, or how it fits into your overall data center, and what your measured diversity factor is. Might be handy to know for figuring out potential hotspots if it looks like the median is close to the max. But track it separately and do not confuse them.




BMC's ADDM is another way to find out the nameplate of your servers, and to do it in an automated fashion. I have recently learned how to do some very basic things with ADDM, and the part I really need for designing and maintaining my R&D data centers is this: ADDM can not only discover everything on my network, it also has a database (called the HRD, or Hardware Reference Data) of servers and other gear, with over 1,000 entries. That lets it enter into the CMDB not only all the other information about the server (OS, patch level, disk config, network config, etc.), but also update the MaxPower entry in the Atrium CMDB with a server's max watts rating. Then it is a simple matter of pulling the data out of the CMDB by rack, row, room, or whatever, and having your max wattage right there.


In addition to wattage, you have to know the server's size, in Rack Units (EIA U), to figure out rack configuration. ADDM's database has that too, and can populate the CMDB with that as well.


This is only the tip of the ADDM iceberg of course. It does way more than just populate power and system size in the CMDB.


In combination with the diversity factor, you now have everything you need to figure out how you want to set up the servers in new configurations and densities.

By The Numbers

Posted by Steve Carl Nov 22, 2010

In his book "Just Six Numbers: The Deep Forces That Shape The Universe", Sir Martin Rees discusses six numbers whose values, once known, tell you all sorts of things about the universe: the way it has to look and behave. What physicists often call the universe's "texture" (which of course is crunchy).


I read this book a number of years ago, and it has shaped the way I think, even if, upon one reading, I did not completely understand it. I am not a physicist, so I have to re-read such books from time to time to re-remember details and nuances. Even so, just knowing this idea of "a few basic numbers" has changed my point of view, and one area it has influenced is how I think about data center design. You can not design a data center, much less a Green one, unless you have a certain basic understanding of some numbers. Robert Heinlein said (in "Time Enough For Love") that (paraphrasing) anything that could not be done in math was opinion.


Data centers designed by opinion are unlikely to work well, so I have worked over the years to try and make sure I have the math right. Along the way I have discovered certain rules of thumb that are industry standards, but they may not apply, and knowing whether they do or not has a huge impact on the ultimate cost and efficiency of the DC.


What I would like to do here is to talk about some of those very basic numbers, with a focus on the idea that "If you can't measure it, you can't manage it".


Today's Number: Diversity Factor


When I was designing / building a data center for R&D a few years ago, I also learned about using the same words as the trades that you are working with. What I was calling the "Maximum Rating of the Power Supply" was causing blank looks from the building engineers. I finally learned that the name they like is "Nameplate rating" or just "Nameplate". I resisted it at first: the nameplate was on the front, and said "Dell" or something. No ratings. On the back, somewhere on the power supply, was a label, and it had all sorts of numbers in tiny tiny print. The one of interest was "Watts", although sometimes that had to be inferred from "Volts" and "Amps" (volts times amps equals watts: maybe that should have been today's number, as it is key).


Side note: You will never see anyone using V*A=W on an auto-sensing or auto-ranging power supply, because the volts and amps are ranges, and if you multiplied together the largest numbers in the ranges, you'd be way high on your number. Today it is common to have a power supply that can plug in to 100 volts or 208 volts or even 240 volts without setting a switch: it auto-senses the power. But at higher voltages it pulls fewer amps, so multiplying together the tops of the ranges is not useful. Power supplies can also be more or less efficient at different voltages: it used to be that they were more efficient at 208v than 110v, for example, but not always, and not so much on modern gear, where it seems to make no hugely significant difference.


Side side note: Sometimes you can not find wattage. I have had to look up specifications and find instead maximum BTU and reverse that in... but that is another post.


Even if you know the nameplate wattage of every server, you still do not know how much power all those servers will consume. Here is the key point of today's entry. Consider the following: a server can be big or small. It can have few memory slots or many, few disks or many, few CPU's or many. Different CPU models in the same external model. Low power memory as an option.


For us, as an R&D shop, we often-but-not-always buy servers that are on the low end of the configuration. If it had 24 DIMM slots, 16 drive bays, and 2 CPU sockets, it would not be unusual to have half of the DIMM sockets populated, 2 of the drive bays, and one CPU socket (with 8 cores or something). The CPU might be the 95 watt model rather than the 130 watt one. Yet the power supply is sized to power *everything* that might be installed when the server is fully configured. The vendors don't want to add to the confusion of buying a server by having 4 different wattages based off configuration, and more parts drives up costs too. Nope: big power supply for you most of the time.


Add to this that the older servers used less efficient power supplies. As low as 75% efficiency. What mix of what efficiencies sit there in that pile of servers?


Add to this that when disks power up they use way more power than when they settle down, that smaller disks use less power to spin up than big ones, and that disks seeking use more power than disks idling. Solid state disks use less power than spinning disks, but which generation of those matters too.


Look at that pile of computers again and ask: from the nameplate ratings, how can you tell anything about how much power they are actually consuming? The answer is you can't, and that is where the Diversity Factor (DF hereafter) comes in.


Industry Standard Diversity Factor (DF)


Industry standard DF is to multiply the total of all the nameplate ratings by .6 (or even, in one reference I found, .67).  If my server pile has two million watts total nameplate, then that DF says my data center has to be designed to deal with 1.2 million watts of power (1.34 megawatts at .67, but no one seems to use that one anymore so that is the last we'll speak of it today) . You would be very very safe in doing that if all you were worried about was making sure you had enough cooling in the data center.


If you can't measure it, then this keeps you from having a server roast, but it may not keep your job as a data center designer, and it may not keep the stockholders happy either. My measured DF across the R&D data centers is .439. That means, in the scenario here, about 880,000 watts, not 2 million or 1.2 million. Any DC built to those higher numbers does have one thing going for it: a long future as a data center, because it is over-built. But unless you designed the data center in zones where you can power up HVAC only as you need it, it will not be a very Green data center. HVAC sitting around idling still uses a lot of power: 40% of the power going into the data center is probably to run the HVAC. More if you overbuilt like this.
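The design-load arithmetic in this scenario works out like this. It is a minimal sketch using the numbers from the example above:

```python
def design_load_watts(total_nameplate_watts, diversity_factor):
    """Design capacity = sum of nameplate ratings x diversity factor.
    Apply the DF once, and only to the maximum (nameplate) numbers."""
    return total_nameplate_watts * diversity_factor

nameplate_total = 2_000_000  # two million watts of total nameplate

print(design_load_watts(nameplate_total, 0.6))    # 1200000.0: industry rule of thumb
print(design_load_watts(nameplate_total, 0.439))  # 878000.0: the measured DF
```

That gap between 1.2 MW and roughly 880 kW is the overbuild you would be paying to cool.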


If your company could have used the extra money spent on the DC overbuild for something else, they might also be miffed at the designer. That is a lot of new Mac laptops.


Measure It


If you are in a really large data center with separate power meters, and even have the HVAC metered separately from the lights and the racks, then you are lucky and also unusual. The more common case is where the power is measured at the building, and includes the offices and break rooms and whatnot.


Newer in-rack PDU's can be bought with monitoring intelligence. That is the other extreme in fact: You know how much power is being consumed right down to the plug, and you can pull those back and monitor them with something like Nlyte and know exactly what is going on at all times.


Most data center PDU's (the big ones on the DC floor, not the in-rack, power strip looking ones) and UPS's have power metering built into them though, and the UPS is where I measure mine (Nlyte can see those sources too). The only things on my UPS's are the servers and network switches, not the HVAC, so as long as I have all of those items in my nameplate list, I know the nameplate total, the actual aggregate consumption, and can compute my DF.
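Computing the DF itself is then just the ratio of what the UPS meter reports to the summed nameplate list. The numbers below are hypothetical, matching the earlier two-megawatt scenario:

```python
def diversity_factor(measured_load_watts, total_nameplate_watts):
    """DF = actual aggregate draw (read at the UPS or PDU meters)
    divided by the total nameplate rating of everything behind them."""
    return measured_load_watts / total_nameplate_watts

# A UPS reporting 878 kW of load against a 2 MW nameplate inventory:
print(round(diversity_factor(878_000, 2_000_000), 3))  # 0.439
```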


Diversity factor is key to designing and building a power efficient data center. Know it. It will affect the texture of what is to come (which is still crunchy).


Next time: Getting Your Nameplate Number
