
Steve Carl

Build, or Retrofit?

Posted by Steve Carl Jan 13, 2012


Over a number of years, we have reduced the number of R&D labs we have in North America to about a third of what we used to have. The consolidations were driven not by Green IT but by the simple math of "it's easier to support fewer, bigger labs."

 

I have a very large data center, with these specifications:

 

  • 250 PSI floor load for 16,000 square feet
  • 18 inch raised floor for 16,000 square feet
  • 60 PSI floor load for 10,000 square feet
  • 6 inch raised floor for those 10,000 square feet
  • Much greener water-cooled CRAHs rather than fan-coil CRACs
  • It has dedicated chilled water, and the ability to “borrow” chilled water from the base HVAC chillers in an emergency.
  • It was built in 1992/1993, and was meant to hold four water-cooled mainframes of the day
    • It has one air-cooled mainframe in it now
  • State of the art back then at 38 watts per square foot.

 

The DC has "good bones": the basic structure is sound, and there is plenty of room to add things like additional generators and UPS capacity. This DC has been the target of the data center consolidation and has absorbed computers from a number of other locations. When people walk through it now they ask “Why is this so empty?” and the answer is “We can’t put anything we have left any closer together than this or it gets hot.”

 

The next obvious question: why not add more cooling? Before I talk about that, a quick look at why, even after all the consolidation (more servers in fewer data centers), I have empty space.

 

Virtualization and Space

 

In a word, we have empty space because of virtualization. Above and beyond P2V and new requests being supplied from virtualized resources, when we moved a data center as part of consolidation, we looked for ways to virtualize before we moved, to cut down on the physical equipment moving about. A virtual machine can move from place to place via a network (slowly) or an external hard drive (quickly: nothing beats the bandwidth of the mail).

 

As we moved to higher and higher levels of virtualization, we got back more and more floor space. The same large floor space is now used by servers that are both physically denser and more power-dense.

 

Example: seventy PC-class systems acting as servers, each with a 250 watt power supply (burning around 150-200 watts of that), would be replaced with one Dell R810 with an 1100 watt power supply (using 800 or so watts of that). The R810 takes up 2U in a rack; the seventy PC-class systems were loaded onto technical furniture that covered the same floor space as nine racks, all full.

 

Extending that, we can put twenty Dell R810s in that same 42U rack (leaving room for a top-of-rack switch), at about 16,000 watts of actual usage, and displace 1,400 PC-style systems and 210,000 watts of actual usage. I’ll have to roll some SAN storage into that equation (and this post isn’t exactly about these numbers anyway), so more or less it’s a 13-to-1 reduction in terms of wattage. Let’s be really conservative and use 10-to-1 to account for the SAN storage.
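For anyone who wants to play with those ratios, here is a minimal back-of-the-envelope sketch of the same arithmetic in Python. The wattage figures are the estimates above (150 watts per PC matches the 210,000 watt total), not measurements of any particular fleet.

# Back-of-the-envelope consolidation math, using the estimates above.
PC_WATTS = 150          # low end of the 150-200 W per-PC estimate; matches the 210,000 W total
R810_WATTS = 800        # observed draw per Dell R810, not the 1100 W nameplate rating
PCS_PER_R810 = 70       # one R810 replaces roughly seventy PC-class boxes
R810S_PER_RACK = 20     # 2U each in a 42U rack, leaving room for a top-of-rack switch

pcs_displaced = PCS_PER_R810 * R810S_PER_RACK    # 1,400 PC-class systems
old_watts = pcs_displaced * PC_WATTS             # 210,000 W
new_watts = R810S_PER_RACK * R810_WATTS          # 16,000 W

print(f"{pcs_displaced} PCs at ~{old_watts:,} W -> one rack at ~{new_watts:,} W")
print(f"Raw reduction: about {old_watts / new_watts:.0f}-to-1")   # ~13-to-1
print("Call it 10-to-1 once SAN storage is rolled in")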

 

Cell Size

 

When you count the aisles front and back, plus other ramps and hallways, a modern single rack has a “cell size”, AKA a “work cell size”, of anywhere between 16 and 30 square feet per rack. That depends on the shape of the room, whether it has internal support columns, local codes for the number of fire exits, and the needed paths to them. I usually use 20 square feet per rack as a nice round number that is fairly aggressive in terms of density.

 

Technical furniture like what we have (largely Ergotron 3000s and Wrightline) is about a 45 square foot cell size, in the same hot/cold aisle style layout. In theory it would take 56 of those units, and therefore about 2,500 square feet, to hold the same number of OS images as that one 42U rack. A 100-to-1 space reduction.
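The floor-space arithmetic, as a minimal sketch using the cell sizes above (the 25 systems per furniture unit comes from the watts-per-cell numbers later in this post):

# Floor-space comparison: technical furniture vs. one 42U rack of R810s.
RACK_CELL_SQFT = 20            # the aggressive round number per rack from above
FURNITURE_CELL_SQFT = 45       # Ergotron/Wrightline unit, same hot/cold aisle layout
SYSTEMS_PER_FURNITURE = 25     # PC-class systems per furniture unit

os_images = 1400               # what one rack of twenty R810s hosts (70 per R810)

furniture_units = -(-os_images // SYSTEMS_PER_FURNITURE)   # ceiling division -> 56 units
furniture_sqft = furniture_units * FURNITURE_CELL_SQFT     # 2,520 sq ft, call it 2,500

print(f"{furniture_units} furniture units need {furniture_sqft:,} sq ft "
      f"vs. {RACK_CELL_SQFT} sq ft for the rack")
print(f"Roughly {furniture_sqft / RACK_CELL_SQFT:.0f}-to-1; "
      "call it 100-to-1 in round numbers")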

 

That is an extreme example, because the PCs being retired are fifteen or more years old. It is the best possible reduction. As the gear being targeted for retirement becomes newer, the advantage shrinks. Still, with a 100-to-1 margin in this example, you can come forward quite a few generations, and even make rack-mounted gear rather than a PC on technical furniture the thing being replaced, and there is still a space advantage. If I were replacing seventy Dell 1850s, at 1U each, that is still a 4-to-1 space reduction. That would be 23,000 watts in 80 square feet, or 280 watts per square foot, in round numbers.

 

Watts per Cell Size

 

Our standard 42U rack, at 16,000 watts and 20 square feet, comes in at 800 watts per square foot. The technical furniture held 5,000 watts (25 systems) in 45 square feet, for 111 watts per square foot. In round numbers, an 8-to-1 relationship.
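That is just division, but here is the same comparison as a sketch, in case you want to plug in your own gear:

# Power density per work cell, using the figures above.
def watts_per_sqft(watts, cell_sqft):
    """Power density of one work cell, in watts per square foot."""
    return watts / cell_sqft

rack = watts_per_sqft(16_000, 20)       # 42U rack of R810s -> 800 W/sq ft
furniture = watts_per_sqft(5_000, 45)   # 25 PC-class systems -> ~111 W/sq ft

print(f"Rack: {rack:.0f} W/sq ft, furniture: {furniture:.0f} W/sq ft")
print(f"About {rack / furniture:.1f}-to-1; call it 8-to-1 in round numbers")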

 

Reality Check

 

I have this big empty data center that can currently dissipate 38 watts per square foot. That means the heat/wattage density the data center can handle is less than what I could potentially do with the relatively low-density technical furniture, and it does not approach what I could do with racks of R810s and the like, which in turn is nothing like what I could potentially do with blades.
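To put a number on that: at 38 watts per square foot, a full 16,000 watt rack effectively claims far more floor than the 20 square foot cell it physically occupies. A minimal sketch of that division (the 250 watts per square foot figure anticipates the modern hot/cold aisle number discussed below):

# How much floor one rack effectively claims when room cooling is the limit.
RACK_WATTS = 16_000    # one 42U rack of R810s, from the earlier example

# W/sq ft: this data center today, a modern hot/cold aisle build, the rack's own cell density
for room_density in (38, 250, 800):
    sqft_needed = RACK_WATTS / room_density
    print(f"At {room_density:>3} W/sq ft, a {RACK_WATTS:,} W rack needs "
          f"~{sqft_needed:,.0f} sq ft of floor to dissipate its heat")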

 

So, I not only have space, I need space. I need to spread things out.

 

Or retrofit the data center with more HVAC

 

Or move to a data center with better watt-per-square-foot capacity

 

And Therein Lies the Question

 

There is a cost per square foot. If you own the building and it's been depreciated for years, it may not be a high cost, but it is there.

 

There is also tiering to consider: This is R&D gear. Does everything need Tier 3 or Tier 4 infrastructure?

 

There is a cost to build a modern data center. Hot/cold aisles with ducted and forced air get you to about 250 watts per square foot. That is far above my expansive data center's 38 watts per square foot, but it is actually potentially low. What if I went with blade servers instead of rack servers? My 16 kW per rack can turn into nearly 40 kW per rack. That means specialized in-rack cooling, which in turn means a more expensive data center per square foot, just a very small one.
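To see why 40 kW racks force the cooling into the rack itself, compare the per-cell density against what room-level air can remove. A rough sketch, reusing the 20 square foot cell assumption from above:

# Why blades push toward in-rack cooling: density per 20 sq ft work cell.
CELL_SQFT = 20          # the work-cell assumption from earlier
ROOM_LIMIT = 250        # W/sq ft for a hot/cold aisle, ducted forced-air room

for label, rack_watts in (("R810 rack", 16_000), ("blade rack", 40_000)):
    density = rack_watts / CELL_SQFT
    print(f"{label}: {density:,.0f} W/sq ft in its cell, "
          f"about {density / ROOM_LIMIT:.0f}x what room air can handle")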

 

But I have a big data center that’s empty. Should I build something new, migrate everything to it, and dense-ify it to the next level? The same ideas I mentioned here about the R810s and blades apply to all the UNIX equipment as well. I can virtualize and dense-ify at varying levels across all the major vendors, and the question about the size of the data center, and what to do about it, remains.

 

How Much Raised Floor is Enough?

 

If I can retrofit the current data center with updated cooling and raise it to 250 watts per square foot, that has the potential to be not only less expensive but also less disruptive. Changes can happen over longer periods of time. What are the parameters? How far can I go with a six inch raised floor? An 18 inch raised floor? I looked at a new data center being built to 250 watts per square foot recently, and it was running on a 36 inch raised floor, with all the power and wiring being dropped in from the top: all the underfloor space was a common cooled-air input plenum and nothing else.

 

Carbon And Calcium

 

What about the carbon of building a new data center? Lots of energy is used, and lots of concrete, and making concrete is very CO2-intensive. It does not matter whether I build it or someone else does and I just rent it. It’s still going to be more carbon-intensive than retrofitting an existing structure.

 

Next Time

 

Those are the general questions that frame the issue for me today. There are no doubt others, but from a Green IT / data center design point of view, this is a good starting point. Next post: a closer look at concrete.
