Share This:

Today's post is about another of the UNIX blades, and it is our answer to how to get the same 10-to-1 space and power reductions that we have already seen on the Sun/Oracle and IBM blades. Technically we'll also run VMS as well as HP-UX on the new blade.


We just got the HP blade, so everything I discuss here is slightly theoretical. I have no reason to suspect that we will not achieve exactly the same things we have with the other blade solutions, but I do not want to present this as a fait accompli. It's a work in progress, but I wanted to post about it so it was clear we did not have a hole in the design / strategy.


800-Pound Gorilla in the Room


I would be less than honest if I did not discuss the Itanium chipset right up front. One reason we only just got this blade was that we were waiting for the Poulson / 9500-series chipset. Itanium does not get frequent updates (there were nearly three years between the Tukwila/9300 and the Poulson/9500), so it lags the commodity offerings from AMD and Intel in terms of speed, and its RAS advantages are largely gone these days as well: things like memory controllers that can take bad DIMMs offline are things AMD64 chipsets can now do too.


Intel is going to merge Itanium and Xeon, according to ExtremeTech, and that's a logical move that should help HP keep NonStop, HP-UX and VMS on updated platforms, though I personally wish they would just release AMD64 versions of all three and be done with it. The linked article says there are still niches for the IA64 instruction set architecture around mathematical precision, so perhaps the merged architecture will bring those to the broader AMD64 (x86-64) world.


In previous posts about the blades, I discussed my rule of thumb for sizing a blade (it would work for a rack mount too). This is a point-in-time rule of thumb, and I am always questioning it and checking it against our BCO data. Basically, for a Xeon Nehalem / Sandy Bridge class processor (8 cores per socket), we put two sockets (16 cores) and 256 GB of RAM in each blade for our virtual environments (virtual is the vast majority of our R&D environment).


That rule of thumb has worked very well for the IBM and Sun environments. Still, I was nervous about applying it to Itanium. We knew Poulson was bringing Itanium back up closer to the AMD64 chips, and more in line with SPARC and POWER, but how close? This would be our first one.


We took a slight risk (or is that RISC?) and configured each BL860c i4 with two 8-core sockets and 256 GB of RAM. The beauty of a blade is that if this does not perform at near parity with the other environments, we can add another blade configured differently to compensate. We'll already have the blade chassis!


Beauty Shots






Two BL860c i4 blades: Plenty of room to grow / adapt.





What we like in a blade: redundant switches!


What We Are After


What we are after is more or less the same as what we were after in the Sun / Oracle and IBM environments: lots of very old desktop and rackmount HP servers, running old releases of HP-UX. VMS is a possibility too, though it is a secondary goal to the HP-UX footprint right now. Here is a small sample of the model table (it's much larger than this):


HP Model                     Nameplate Watts
9000 712/100 Workstation     110
9000 712/60 Workstation      110
9000 712/80 Workstation      110
9000 A400                    1765
9000 A500                    900
9000 B1000                   500
9000 D380                    930
9000 K200 Server             1225
9000 K200 Server             515
9000 K210 Server             1225
9000 K210 Server             515
9000 L2000                   2015
9000 L2000 36                2015
9000 rp2450 Server           880
9000 rp2470 Server           880
9000 rp3410 Server           536
9000 rp3440                  1350
9000 rp3440 Server           600
9000 rp5450                  930
9000 rp5470                  1200
9000 rp7400 Server           3000
9000 rp7410 Server           3000
9000 rp8400 Server           1336
9000 rp8400 Server           1936
9000 rp8400 Server           2436
9000 rp8400 Server           2986
9000 rp8420 Server           1111
9000 rp8420 Server           2141
9000 rp8420 Server           2812
9000 rp8420 Server           3489
9000 Server rp7420           2015


That is a small sample. You can see from the table the variance in nameplate wattage between models, and even within some models. I also chose that part of the list because it is all PA-RISC based. We have lots of Itanium HP gear too, but the real challenge will be the PA-RISC based stuff.


That was another attraction of the new technology: HP has introduced a container that can run PA-RISC workloads on Itanium-based servers. At the core of it is something called "Aries".


This "feels" a bit like using Zones to run Solaris 8 and 9 workloads, or WPARs to run AIX 5.2 and 5.3 workloads, except it's more like Apple's Rosetta, which allowed Apple's Intel-based computers to run binaries created for its PowerPC-based computers. I used Rosetta quite a bit and it worked extremely well. I hope that Aries is the same or even better.


We now have to learn how to bring images from old HP computers over to the new blade server. For Itanium, it will be either picking up an IVM and setting it down in the new place, or a P2V into an IVM. For PA-RISC workloads it will hopefully be a P2V from PA-RISC into an Aries container.


In theory then, even with binary translation of PA-RISC to Itanium IA64 going on, the Poulson-class hardware should run things better and faster than some of the ancient gear it is coming from.


That is our bet, and based on our success with Zones and WPARs, we have a great deal of optimism about it.




Look at the back of the c7000, and across the bottom you will see the six power cords-to-be. These connect to six 2400-watt power supplies. At 208 V, each is good for a maximum of 2450 watts (2692 peak) at 91% efficiency.


The BL860c i4 blade is nominally 500 watts and can peak at 595. We are booting from SAN, so we'll never quite reach that peak: no internal disks to spin up. There are only two blades (1000 watts) and a maximum of 8 (4000 watts), so even with the 10 Gb Ethernet and 8 Gb Fibre Channel switches, running 3+3 on the power supplies, this blade chassis never uses more than 7350 watts, and most of the time much less than that. I only have to replace 6 servers like the rp8400 to get back the power that the 2 blades will use. We plan to replace sixty-nine servers of all types in the first wave.
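As a quick sanity check on those numbers, here is the back-of-the-envelope arithmetic (all wattages are the nameplate/maximum figures quoted in this post, not measured draw):

```python
# Chassis power ceiling: 3+3 redundant supplies means 3 carry the load.
PSU_MAX_W = 2450               # each "2400 W" supply maxes out at 2450 W at 208 V
chassis_max_w = 3 * PSU_MAX_W
print(chassis_max_w)           # 7350 W -- the chassis ceiling

# Two BL860c i4 blades at their nominal rating.
BLADE_NOMINAL_W = 500
print(2 * BLADE_NOMINAL_W)     # 1000 W

# The smallest rp8400 nameplate figure from the model table above.
RP8400_W = 1336
print(6 * RP8400_W)            # 8016 W -- six rp8400s more than cover a full chassis
```

Even the most conservative rp8400 nameplate figure makes the trade lopsided in the blades' favor.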


If you have read the other posts, you should be feeling déjà vu by now: isn't that starting to look like a 10-to-1 reduction in power?


EIA-U Who?


How much space do sixty-nine HP servers, a wide assortment from the last 15 years, take? If they were all 2 U, that is 138 U. All being replaced by 10 U, with 6 empty slots. Technically 12, since there are half-height slots, but nothing we would buy is half height; the Itanium blades we are interested in are all full height.


So: a 10-to-1 rack-height reduction, and if these were stuffed into racks at 30 U each, then four-plus racks become one 1/3-full rack. Three of these chassis in one rack, fully populated, would give a theoretical maximum wattage of 21,000. With a rack cell size (the space a rack plus its physical access paths takes) of 20 square feet, that's over 1000 watts per square foot! Our DC is nowhere near that, but then we are nowhere near needing to retire the theoretical 720 physical HP servers that would be, either.
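The space and density arithmetic works out the same way (a sketch assuming the 2 U average per legacy server and a rounded 7000-watt budget per fully populated chassis):

```python
legacy_servers = 69
legacy_u = legacy_servers * 2      # 138 U at an average of 2 U per server
chassis_u = 10                     # one c7000 enclosure
print(legacy_u / chassis_u)        # 13.8 -- better than 10-to-1 in rack height

racks_before = legacy_u / 30       # ~4.6 racks at 30 U of servers each
rack_watts = 3 * 7000              # three fully populated chassis in one rack
cell_sq_ft = 20                    # rack plus its physical access paths
print(rack_watts / cell_sq_ft)     # 1050 W per square foot, theoretical max
```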


P2V Density


I am after sixty-nine servers. I have two blades. Is that realistic? Experience with virtualization says that memory is the bottleneck 80 percent of the time in our shop. This configuration gives, on average, about 7 GB of RAM per physical server being retired. Is that enough?


Oh yes. Look at how old the stuff going away is. Some of those servers have 256 MB, and many have 1 or 2 GB. Looking over at the IBM and Sun environments, we are seeing around 50-to-1-per-blade consolidation rates. Part of that is that Zones and WPARs are extremely efficient at sharing resources. Our original planning was 20-to-1, and we have gone way past that.
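The memory math behind that "oh yes", using the figures above:

```python
blade_ram_gb = 256                 # per BL860c i4 blade
blades = 2
servers_to_retire = 69
ram_per_server = blade_ram_gb * blades / servers_to_retire
print(round(ram_per_server, 1))    # 7.4 GB per retired server, on average
```

Against servers carrying 256 MB to 2 GB each, 7 GB apiece is a comfortable margin.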


All the HP has to be is about three-fifths as efficient as either of those two platforms, and we are good to go.


If not: We have empty blade slots.



Next time: AMD64 Blades