
My apologies for the massive delay in this series. Flu and Phase II took over my personal and professional lives for a while. The good news is that we are now going after our next 100 KW of power reduction. The bad news, of course, is that I was not done talking about the last 100+ KW reduction.

 

I know GBtGS is a fairly long series, so I will try not to assume here that you have read all of it. If you have read it, there may be some repeats of information. Please bear with me. I'll try to keep that to a minimum too.

 

The DC

 

The goal of this phase of the GBtGS was to take 37,000 square feet of DC down to 11,000 square feet. Some more detail: the 16,000 square foot part was designed with an 18" raised floor and 250 PSI floor loading. The 11,000 square foot part has a 6" raised floor and 72 PSI floor loading. Both spaces were originally 38 watts / SF, though over the years the 11,000 SF floor had been upgraded in some areas to 50 watts / SF. Still nothing compared to a modern DC's 250 watts per square foot.

 

In the last post there are some pictures and discussion about cleaning the air dams of cables out from under the 6" floor. That work is key to the success of this phase of the densification. Not only are servers going to be closer together, they are going to be virtualized onto blade chassis, and by virtue of that, much more dense. I have fairly beaten virtualization to death in this series' previous posts, so enough said about that.

 

In the old DC design, airflow was never really much of a consideration. Quite the opposite: it was utterly ignored. If a place got hot, it was easy to just move a few things around. We had the luxury of space. Further, this was originally a mainframe-only DC, so airflow just did not matter. The chilled water did the work.

 

I did not have a thermal camera or spreadsheets full of fluid dynamics equations to figure out a new design with. I did have good basic design principles, such as "Don't mix your hot and cold air". I had imaged the Austin DC with a thermal camera, so I knew what kinds of things to watch out for. I also had my skin: I could stand in the DC, feel the airflow, feel the heat, see and feel the leaks, and come up with remediation.

 

All of that led to this 7,000 SF DC:

 

[Image: production.PNG, floor plan of the 7,000 SF production DC]

 

and this adjacent / connected 4,000 SF DC:

 

[Image: rnd.PNG, floor plan of the adjacent 4,000 SF DC]

 

Those two rooms connect at the double doors, and total out to 11,000 square feet. The 7,000 SF room is about 50 watts / SF. The 4,000 SF room is 38 watts / SF. Both are on a 6" raised floor.
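Just to put those densities in perspective, here is a quick back-of-the-envelope check of the combined power envelope of the two rooms. This is only a sketch that multiplies the areas by the watts / SF figures above; treating those figures as design capacities is my assumption, not something from the original design documents.

```python
# Back-of-the-envelope power envelope for the two consolidated rooms.
# Areas and watts/SF are the figures quoted above; treating watts/SF
# as a design capacity is an assumption for illustration only.

rooms = {
    "production (7,000 SF)": (7000, 50),   # (square feet, watts per SF)
    "adjacent (4,000 SF)":   (4000, 38),
}

total_kw = 0.0
for name, (sqft, watts_per_sf) in rooms.items():
    kw = sqft * watts_per_sf / 1000.0
    total_kw += kw
    print(f"{name}: {kw:.0f} KW envelope")

print(f"Combined envelope: roughly {total_kw:.0f} KW")
```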

 

You'll see lots of white space here in both DC's. I opened up the cold aisles so things could flow, which dropped the air resistance where I wanted the cold air to arrive. I always had in mind being able to add positive-pressure floor tiles to pull the air where I needed it, but it turned out that the big, wide cold aisles did the trick.

 

The 7,000 SF space has a drop ceiling that serves as the hot air return plenum, so I can use virtual chimneys to take the air up and then back to the CRAC's. Because the drop ceiling is the return path, hot and cold air don't mix as much. That is a good thing too, because I was not able to move everything in the room around the way I would have liked, and you can see there are a lot of weird airflows. At 50 watts per SF, the room is literally colder. There are two places in the room where a brand-new hot / cold aisle layout could be implemented, and that allowed higher-density server installations, including one area full of M1000e's.

 

In the 4,000 SF DC you'll see that the NOC takes up 25% of the room, there on the left, with a small mainframe right below it. That layout allowed me to densify the racks in the center of the room, at least as far as being able to fill the 42U racks full of gear. The three CRAC's can only vent along the centerline of the space.


I was not able to use the drop ceiling as a plenum in this DC, so airflow had to be managed purely by design of the rows.

 

Other parts of the room contained production gear in odd orientations, and networking cabinets with side-blowing airflow, so air whooshed about in small circular paths. Not great, but enough density was achieved elsewhere in the space that after virtualization and tight-stacking, 37,000 square feet fit into 11,000, with room, power, and HVAC to spare.

 

Genset and UPS

 

In 1993 the DC's were fed by one 750 KVA UPS, backed by a genset. We grew and grew, and by 1998 we had added a second 750 KVA UPS. We then segmented the workload so that important things were on the older UPS, and things that could go down for short periods were on the newer UPS, which had no genset behind it.

 

The goal here was to get the workload down to less than what that older UPS/Genset could manage, and to completely abandon the newer UPS. We would leave it behind, just like the 16,000 square foot floor.
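As a rough sanity check on that goal, here is a minimal sketch of the headroom math involved. The 750 KVA rating is from above; the power factor and the comfort ceiling are my assumptions, and the example load is a hypothetical value, not our actual metered number.

```python
# Quick check of whether a consolidated load fits on a single UPS.
# The 750 KVA rating is from the post; the power factor and the
# target maximum loading are assumptions for illustration.

UPS_KVA = 750
POWER_FACTOR = 0.9        # assumed UPS output power factor
MAX_LOADING = 0.8         # assumed comfort ceiling (80% of capacity)

def fits_on_ups(load_kw: float) -> bool:
    """True if load_kw fits under the UPS's usable KW with headroom."""
    usable_kw = UPS_KVA * POWER_FACTOR * MAX_LOADING
    return load_kw <= usable_kw

if __name__ == "__main__":
    print(fits_on_ups(400))   # hypothetical post-consolidation load
```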

 

I tracked the overall drop in usage with a keen eye. It looked like this:

 

[Image: power-reduction.PNG, chart of the overall power drop]

 

Mission accomplished and then some. We not only fit on the single UPS, we were not even going to be straining it.

 

Even better from the Green IT point of view, the virtualization efforts had saved us 205 KW in one year.

 

CO2 and Green

 

I mentioned in "Not all Electrons..." that in Texas, because of the general mix of ways power is generated, on average about 1.4 pounds of CO2 is created for every KWh consumed. That means about 239 pounds of CO2 per hour less are going into the air because of our DC. That is 5,725 pounds less per day, or 2,089,728 pounds less per year.
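For anyone who wants to reproduce that arithmetic, here is a minimal sketch. The 1.4 pounds of CO2 per KWh factor is the one quoted above; the reduction value in the example is a placeholder that happens to land near the per-hour figure above, not an official measurement.

```python
# Back-of-the-envelope CO2 savings from a sustained power reduction.
# Uses the ~1.4 lbs CO2 per KWh Texas grid average quoted in the post;
# the reduction value in the example is a placeholder for illustration.

CO2_LBS_PER_KWH = 1.4
HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

def co2_savings(reduction_kw: float) -> dict:
    """Return pounds of CO2 avoided per hour, day, and year for a steady KW reduction."""
    per_hour = reduction_kw * CO2_LBS_PER_KWH   # a KW sustained for 1 hour is 1 KWh
    return {
        "lbs_per_hour": per_hour,
        "lbs_per_day": per_hour * HOURS_PER_DAY,
        "lbs_per_year": per_hour * HOURS_PER_DAY * DAYS_PER_YEAR,
    }

if __name__ == "__main__":
    savings = co2_savings(170)   # placeholder: ~170 KW sustained reduction
    for period, lbs in savings.items():
        print(f"{period}: {lbs:,.0f}")
```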

 

Because this was a retrofit of an existing DC, we added no CO2 due to new construction either.

 

There is another green being saved here of course. The rent of the floor we left. The maintenance on all the infrastructure of the floor. The cost of the power. Total it all up, and we are about a million USD to the good. In other words, this project utterly paid for itself. In one year.
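If you want to do the same kind of tallying for your own facility, the sketch below shows the shape of it. Apart from the 205 KW and 16,000 SF figures from earlier in the post, every dollar figure in it is a hypothetical placeholder; the only point is that rent, maintenance, and power all stack in the same direction.

```python
# Rough annual savings tally for a DC consolidation.
# The dollar rates below are hypothetical placeholders for illustration;
# they are not the actual numbers behind this project.

HOURS_PER_YEAR = 8760

power_reduction_kw = 205        # sustained KW no longer drawn (from the post)
power_cost_per_kwh = 0.08       # hypothetical utility rate, USD per KWh
abandoned_sqft = 16000          # the 16,000 SF floor we left behind (from the post)
rent_per_sqft_year = 20.0       # hypothetical rent on that floor, USD/SF/yr
maintenance_savings = 100000    # hypothetical infrastructure maintenance, USD/yr

power_savings = power_reduction_kw * HOURS_PER_YEAR * power_cost_per_kwh
rent_savings = abandoned_sqft * rent_per_sqft_year

total = power_savings + rent_savings + maintenance_savings
print(f"Power:       ${power_savings:,.0f}/yr")
print(f"Rent:        ${rent_savings:,.0f}/yr")
print(f"Maintenance: ${maintenance_savings:,.0f}/yr")
print(f"Total:       ${total:,.0f}/yr")
```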

 

Retrofitting the space saved us a lot of time and money, and got the DC ready for the next phase of the GBtGS project, a phase we are well on the way to executing. At the end of this next phase, we'll be down another 100 KW, and positioned for even more consolidation from other data centers around North America.

 

I'll wrap up this series in the next post. GBtGS has entered Phase II though, so there will be more to talk about soon.