

So, there you are, standing in your garden of servers. You have picked all the low-hanging fruit. You have virtualized nearly everything that can be virtualized. What next?

 

I have been thinking about this for a while. I have had a number of conversations with people about Green IT over the last year, and it is pretty clear that how much money you can save on power, and how much you can cut your CO2 emissions as a result, really depends on where you are on your adoption curve and what kind of servers you have.

 

In our case, go back seven years or so to our proof-of-concept days. We bought a few small (by today's standards) servers and rolled a number of VMs onto them to see how it all worked. Even at that early stage of product maturity, there was enough stability and enough cost savings that we kept buying more servers. Then we went to a "virtual first" policy: everything new went into the virtual infrastructure unless a valid reason could be given for why it *had* to be real hardware. We are an R&D shop, so there were plenty of reasons, but we were still able to fulfill a large percentage of new requests virtually rather than on real hardware.

 

As older servers failed due to age, we replaced them with virtual machines. It was a no-brainer on so many levels. As server chip technology advanced, overhead dropped for the VMs: a VM would be far faster than, for example, a 10-year-old physical computer.

 

Note here that I have avoided mentioning any specific technology. X86/AMD64 chips matured more quickly than the rest, but every vendor's chipsets and servers (CoolThreads from Sun/Oracle, Power from IBM, and HP's IVM) were moving down the virtualization road, the road the mainframe blazed all those decades ago with VM/370.

 

We have now more than halved our R&D power consumption since we started this, dropping two megawatts to a current 1.2 megawatts. Each successive drop is getting harder now. Ten thousand X86-class systems have been melted down and are now spinning around as the wheels of someone's car, someplace.

 

What next?

 

There are still opportunities, but the 80/20 rule is starting to kick in. We are getting close to the end of the easy 80%. Time to step back, rethink, and look for new opportunities.

 

I currently see two places to go next. I am sure there are others, and I am keeping my eyes and ears open.

 

First is HVAC. Most of our data centers are fairly old. They do not have variable-speed motors in the CRAHs (computer room air handlers), and they are not laid out to keep hot and cold air from mixing. If we assume that cooling the computers takes half as much power as running them, then that 1.2 megawatts of compute load requires another 600 kilowatts for cooling. I can get perhaps 300 kW of that back by modernizing the data center infrastructure itself.
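To make that arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. The 50% cooling overhead and the 50% recovery fraction are my working assumptions, not measured numbers:

```python
# Back-of-the-envelope cooling math. The overhead and recovery
# fractions are rough assumptions, not measured values.

it_load_kw = 1200          # current R&D compute load: 1.2 MW
cooling_overhead = 0.50    # assume cooling burns ~50% of the IT load
recovery_fraction = 0.50   # assume modernization (variable-speed CRAH
                           # motors, hot/cold aisle separation) recovers
                           # about half of the cooling load

cooling_kw = it_load_kw * cooling_overhead        # 600 kW of cooling
recoverable_kw = cooling_kw * recovery_fraction   # ~300 kW recoverable

print(f"Cooling load today: {cooling_kw:.0f} kW")
print(f"Recoverable:        {recoverable_kw:.0f} kW")
```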

 

The other is virtualizing the desktop. This is where the real low-hanging fruit is now. I roughly estimate that we currently use 4.2 megawatts to power all the desktops in the office and at people's homes. The reason is simple: we are still using traditional desktops and laptops. Even a full technology refresh to current gear across the entire enterprise would not drop more than 500 kW or so. To get the full power savings, we would also have to move to thin clients, which use 15-50 watts rather than 80-350 watts (depending on desktop, tablet, laptop, CRT, LCD, LED backlight, etc.).
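Here is a rough sketch of that estimate, again in Python. The per-seat midpoints and the implied seat count are illustrative assumptions; only the 4.2 megawatt total and the watt ranges come from the paragraph above:

```python
# Rough fleet-level thin-client estimate. The seat count is inferred
# from midpoints, not an actual inventory figure.

fleet_total_kw = 4200            # ~4.2 MW for all desktops today
fat_client_w = (80 + 350) / 2    # midpoint of the 80-350 W range
thin_client_w = (15 + 50) / 2    # midpoint of the 15-50 W range

# Implied number of seats if the fleet averages the fat-client midpoint
seats = fleet_total_kw * 1000 / fat_client_w       # ~19,500 seats

thin_total_kw = seats * thin_client_w / 1000       # ~635 kW
savings_kw = fleet_total_kw - thin_total_kw        # ~3.6 MW saved

print(f"Implied seats:       {seats:.0f}")
print(f"Thin-client fleet:   {thin_total_kw:.0f} kW")
print(f"Potential savings:   {savings_kw:.0f} kW")
```

Even before adding back the server-side load, savings on that scale dwarf the 500 kW or so that a conventional hardware refresh would buy.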

 

Of course, that puts more servers into the data center, but not nearly enough to overrun the power savings. It also moves a lot of work happening at the edge squarely back into the data center, which is handy for backups, DR, security, and so on.

 

Then there are apps and SaaS: if low-power edge devices can get to everything they need via one of these routes, the potential is there to save power in the megawatts again.

 

The problem is that all that power is spread out across the globe: literally thousands of power bills, each seeing a tiny bit of it.