
Or; "How to get There from Here"

 

We live in interesting times. The popular press loves to call this the "Post-PC Era". The PC is, in theory, dead or irrelevant. Canonical's Ubuntu, Gnome 3, and Microsoft are all grafting tablet UIs onto their desktop systems, forcing people like me to run to alternative desktops like MATE and Cinnamon, or to stay on Windows 7 (the new XP?). I rather believe the idea voiced in one of those links: that Linux Mint's rise to dominance in the Linux desktop world is at least in part due to the fact that they created Cinnamon as an answer to the sub-optimal UX that is Gnome 3 / Unity. My humble opinion only: I hear there are some that like the new UX. Still, Distrowatch has Mint at number 1 as I look now.

 

I don't know that this will go anywhere, but there is even a proposal to switch the default desktop of Fedora to Cinnamon. I hope they do, and I hope the Gnome folks figure this out eventually. From the looks of Windows 8.1, it appears MS is starting to realize they should not have gone down the same road those other two projects paved. Not everything is a phone. Not everything is a tablet. But this is not an "Adventures" post.

 

My thinking on all this, when viewed from the paradigm of the data center, is that we are going to a tiered architecture. There is the great big central glass house (cloudy, public or private), the mid-tier, powerful device, and the edge device. In the DC, that's the central router, the distribution layer, and the edge switches. In computing terms, it's the server, the desktop, and the tablet / phone / convertible devices. Nothing doomed. Nothing dramatic. Just a shift in usage to match the need, and to put the right device into use for the right workload.

 

Datacenter Envy

 

For that first tier: the central glass house. The middle of the cloud. Whatever you want to call it, the perfect world is high density. Maximum energy management / efficiency. High availability. Everything where you can monitor it. Standardization of parts. On and on.

 

Public clouds are designed from the ground up to be dense/hot/efficient like this. Private clouds probably should be too, and for the same reasons.

 

Dell-wall.jpg

 

We'll never live in that Nirvana of Data Center existence. One of our big customers might be an all Dell shop. Another an all IBM one. Many more are historical mixes of other vendors. It does not take long before we are in the place of supporting all of it in our data centers. So rather than that small, hot-as-the-sun Data Center, we have the much less small Data Center with at least one of everything.

 

That does not mean we cannot look at the small, hot Data Center as an ideal and try to move towards it: that we cannot do better than we have in the past. So far in this series I have detailed our approaches to the server and storage sides of that. Now: the room it all sits in.

 

Keeping up with the Times

 

When we started this project, we looked at all the cool kids, in their fancy Tier 4 Co-Lo's, and then looked out across our huge data centers, un-densely packed and scattered across the globe. There was a lot of low hanging fruit. The problem is we could not just take everything we had and jam it into a modern data center. Even jammed into tall racks, it would take lots and lots and lots of space. It made no fiscal sense. Nothing changed about the CO2 just because it was jammed closer together in that nice new Co-Lo and / or modern DC we had built. If it uses a couple of megawatts spread out, it uses a couple of megawatts jammed together, assuming nothing else changed. The watts per square foot go up. The total watts, and the total required HVAC, do not. For every 3,500 watts of gear, you need an effective ton of HVAC, no matter how densely or un-densely it is arranged. Spread out, you are using lots of square footage, but you do not need a lot in the way of airflow management. Jam it together, and you have to work a little harder to get the hot air where it needs to be.
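
For what it's worth, here is a rough back-of-envelope sketch of that point in Python. The ~3,500 watts per ton of HVAC figure comes from the paragraph above; the load and floor sizes below are made-up numbers for illustration only, not our actual rooms.

    # Cooling load tracks total watts, not how the gear is arranged.
    # The ~3,500 W per ton figure is from the text; loads and floor
    # sizes here are purely illustrative.

    def hvac_tons(total_watts, watts_per_ton=3500):
        """Tons of cooling needed for a given IT load."""
        return total_watts / watts_per_ton

    def watts_per_sq_ft(total_watts, square_feet):
        """Density rises as the floor shrinks; the total load does not."""
        return total_watts / square_feet

    load = 2_000_000                          # a couple of megawatts of gear
    print(hvac_tons(load))                    # ~571 tons, spread out or jammed together
    print(watts_per_sq_ft(load, 16_000))      # ~125 W/sq ft in a big room
    print(watts_per_sq_ft(load, 4_000))       # ~500 W/sq ft in a small, hot one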

 

None of that counts the absolute inefficiency that is mixing your hot and cold air. Yikes. Old MF DC's didn't care about that because the MF heat was carried off by the chill water pipes: it never entered the room air at all.

 

Some of my early thinking about redesigning versus building DC's was in the "Build or Retrofit?" series ( [1], [2], [3]). Where we landed, decision-wise, was that, no matter what the future might hold (Co-Lo or build our own modern DC), where we were now had to be updated first. We had to do all the work to shrink the footprint and the power in the current space before we could design our future perfect. We did not want to take all the old stuff to the new place. We did not want to move everything, build out a huge cage, and then shrink it by a factor of 10 down the road. Get smaller and smaller, hotter and hotter, till we had a tiny cage that glowed in the dark. We wanted to start with the small, hot cage!

 

Looking at what we had to work with, internally there were all sorts of DC's, all sorts of sizes, with all sorts of capabilities. But there was one DC that stood out. The one that had the "Good Bones".  It had central UPS. It had a GenSet. Its own dedicated Chiller. Lots of available chill water. It was right next to lots of BMC employees, so accessibility was optimal. Even a lights-out DC has people going in and out of it.

 

Even it was too big though.

 

Mr Peabody and his boy, Sherman

 

Time to get into the WABAC (Wayback) machine and see why this place is what it is.

 

It's 1992. We have outgrown our current building. Our water cooled mainframe needs to be replaced with something better. Stronger. Faster. We used VM/XA to virtualize most everything. Aside: I was, among other things, a VM System Programmer. Virtualization was nothing new even back then. It just did not have the cool factor it does these days.

 

That one mainframe could appear to be many tens of MVS, VM, and VSE images, but we needed more. The plan was to take the 600 and make it a 720. Think of it as a second full size MF. We looked at the growth rate. We decided it was time to move the people and the data center to a new place, and design it to meet all our needs. It was the 1992 dream house (opened in 1993), and it had everything: a 16,000 square foot primary DC floor; a 7,000 square foot DC on the floor above to house communications gear and the operations area; massive chill water pipes, and enough cooling for four water cooled mainframes as big or bigger than the 720. Heat dissipation of 38 watts a square foot in the room, plus the chill water pipes.
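
To put that room rating in perspective, a quick sketch of what it works out to, using the ~3,500 watts per ton figure from earlier. This is back-of-envelope only, not the actual design spec.

    # 38 W/sq ft over the 16,000 sq ft primary floor:
    room_load = 38 * 16_000        # 608,000 watts of air-cooled room load
    print(room_load / 3_500)       # roughly 174 tons of HVAC, before anything
                                   # the chill water loops carried off directly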

 

In times before that we had lived in a DC without redundant power, and later one that had an old UPS, but no genset. This new one was beyond awesome.

 

Then, in 1993, we bought Patrol, and all these little computers needed a place to sit. No problem. We had a big, empty data center. We never bought another water cooled mainframe, as the air cooled CMOS based units became all the rage.

 

Then we connected to the Internet, and now we had more little computers for things like SMTP, firewalls, WWW, etc. (Personal Aside: This was when I learned UNIX and Linux.)

 

Twenty years passed, and that DC from the early 1990's became the little DC that could. In 1999 we added another 4000 square feet because we had so much gear coming into the DC. That 38 watts per square foot was still holding well, so we went with that in the new space.

 

The mainframe operators' area was removed, and a NOC was installed. Worldwide control of everything, using Patrol and Mainview. The old operator area was re-deployed as server space. More servers. We passed what 38 watts a square foot could handle, so we added CRACs.

 

We added another UPS, which was fun because now we had power outages again for some of the gear that was not on the old UPS: the new one had no genset behind it. A long enough power outage would bring things down.

 

Cowlings were added to the CRAC's in the original 7,000 square feet, and air conditioning was added to get to nearly 50 watts a square foot.

 

The raised floor is only six inches deep, and it had cables under it from 1993: 360 and 370 channel cables....

 

20121102_110742.jpg

 

.... Serial cables. Later generations of cables layered over this like sedimentary rock. Ethernet and fiber looked tiny compared to an MF bus and tag cable set, but there were hundreds of them.

 

From There to Here

 

The 20 year old data center had at least one more act in it. Under all those layers of cables were the good bones. Maybe not high heat density bones, but good ones. A solid building. UPS. Genset. Chill water. It was far better than the labs in other places that had been built out of storage areas and had in-rack UPS and supplemental air units. The next best DC had good airflow management, modern wiring, etc., but no genset to back up the UPS.

 

It was going to take some work, even as a temporary place to hold things. Consolidate things. Get ready for the next generation DC, whatever that may be. A year ago we started Phase One of the Go Big to get Small project, and that meant fixing the DC's sins of the past. Hey: they seemed like good ideas at the time.

 

It also meant getting rid of 16,000 square feet of DC (an entire floor), and making it all fit in the remaining 11,000 square feet, with room and power to spare to be able to absorb other DC's that needed modernization just as badly, if not worse.

 

We gave ourselves a year to swizzle every platform into new footprints of gear. To redesign the airflow and re-lay out the room. Pull all the underfloor cables damming the air from flowing to where it was needed. Increase the density of the computing, but keep it spread out enough to live in the watts per square foot we had. The new/old DC would be more efficient because it would mix hot and cold air less. It would get more for the cooling dollar. Hold more gear for the same amount of power. We would not just reduce CO2 because we were virtualizing and densifying; we would also reduce it because the room itself would be more efficient.

 

We had to do all this without delaying any products or causing any outages to production things like the network or the virtualization infrastructure. We did not want to spend a great deal of money on the DC redesign because it was only going to be a stepping stone to the future perfect: knowing there is no future perfect, because things change. Technology changes. Needs change. No matter what we design now or in the next year, it more than likely won't work for whatever comes twenty years from now.

 

As I write this, I am also getting ready to start Phase 2 of the project. I have an 11,000 square foot DC with room, power, and HVAC to spare, and I have other DC's that need to move in here. Go through the process. Shrink. Use less space and power. Get us ready for the DC of the future: the one that is half this size, handles 250 watts a square foot or more, and holds what, in 2001, was over 70,000 square feet of data centers and labs.
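
Taking "half this size" literally as roughly 5,500 square feet, here is a hedged sketch of what that future target implies, again using the ~3,500 watts per ton rule of thumb. These are illustrative numbers, not a design.

    # A future room at half the current 11,000 sq ft, running 250 W/sq ft:
    future_sq_ft = 11_000 / 2              # ~5,500 sq ft
    future_load  = future_sq_ft * 250      # ~1.4 megawatts of gear
    print(future_load / 3_500)             # ~393 tons of HVAC for the small, hot cage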

 

Next time: Numbers and pictures of the new / old DC.