
When IBM announced the new Pure line of systems, our attention went straight to the compute nodes. Even from a distance in the announcement glossies, it was clear that this was a blade system, and not the one that had been around for the last decade. The original announcements had no real details about that tantalizing new blade chassis. Pure was being sold as a whole ecosystem.


But it was that enigmatic blade chassis that caught the eye. It looked like a brand new blade design, and it was.


I was looking at the BladeCenter H for our shop as part of the Go Big to Get Small project, and frankly, I was worried about it. It was at least a ten-year-old design, and had probably been designed to last about ten years before a replacement came out. The compute nodes were managed differently than the networking switches, and the word was that the power envelope was about done: that no new, significantly higher-density blades were really going to be possible in that footprint.


This concern was more or less verified when Power 7+ came out and no Power 7+ blades were going to be issued for the BladeCenter. It was indeed done after its ten years of life. You know, in computers, a design that lasts ten years is actually pretty impressive...


I pondered the advisability of perhaps just going 740 or 750 rack mount rather than BladeCenter. We had a great deal of recent success retiring five racks' worth of Power 4 systems into two 740s with Power 6 chips. It went against the general idea of blade-ifying all the architectures, but in truth, if the chip architecture did not have a viable blade design, there was no point in being pedantic about it. The real goals were footprint reduction. Power reduction. CO2 reduction. If it took a rack mount design to do it because the blade was not there, then... so what?


It was a close thing. When the Pure showed up I was days away from ordering hardware.


To get the PureFlex carved out from the rest of the Pure system required IBM to do a special config. From their point of view, I was undoing all the goodness of Pure. From mine, I was getting the tasty bit, and keeping the rest of my infrastructure as *my* infrastructure. Nothing against the storage or the switches that Pure contained: they just were not what we use.


Power 7 and Power 7+


That first year, the two Power-chipped blades were the p260 and the p460. Both had Power 7 chips. As of this writing, you can get Power 7+ versions. A p260 was one slot wide. A p460 was two. Shades of the Thin and Wide nodes of the IBM SP2!


The p260 had half the CPU sockets, half the I/O, and half the DIMM slots of a p460. The one and only thing I can now see that could be done with one p460 that you could not do with two p260s is run a dual VIO server. The I/O daughter cards on the p260 only supported one VIO instance.


ROT


I mentioned my blade sizing rule of thumb in the post about the Sun blade: 128 GB per CPU socket, all things being approximately equal. A few years from now that will be a horribly dated rule, but it has worked for the last year or so. I thought that through with this new blade design for a while before deciding it was probably still good. I have not re-examined the assumptions in light of the Power 7+, but for the Power 7, the memory bus speeds were far slower than on the Intel blades in the same chassis. PureFlex Xeon blades ran at 1600 MHz, and the Power 7 at 1066 MHz.
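Stated as arithmetic, the rule of thumb is just memory = sockets x 128 GB. A minimal sketch, using the socket counts for these blades (two for a p260, four for a p460):

```python
# Rule-of-thumb blade sizing: 128 GB of RAM per CPU socket.
# The constant is the ROT from this post; revisit it as bus speeds change.

GB_PER_SOCKET = 128

def rot_memory_gb(sockets: int) -> int:
    """Target RAM for a blade with the given number of CPU sockets."""
    return sockets * GB_PER_SOCKET

# Socket counts per blade type, per this post.
blades = {"p260": 2, "p460": 4}
for name, sockets in blades.items():
    print(f"{name}: {sockets} sockets -> {rot_memory_gb(sockets)} GB")
```

That yields 256 GB for a p260 and 512 GB for a p460, which is how the chassis in this post were configured.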


After much internal discussion about how much an R&D workload needs a dual VIO server, it was decided that (going forward) a new chassis would start with a p460, and that builds and other critical things (NIS, NIM, etc.) would run on that blade. The rest of the blades would be either p260 or p460, depending on the criticality of the workload.


The very first one was bought with two p260s because at the time we did not yet understand the thing about dual VIO servers.


[Photo: DSC04284.jpg]


And there, in that picture, is the story so far: the two 740s that retired the five racks, and their close personal friend, the PureFlex.


Note that in the picture there are three blades. One is an Intel blade, running a closed Linux OS on x86, called the FSM. The FSM is the new fancy Pure system manager, and is based on a hugely enhanced Systems Director. You don't manage these Power 7 blades with an HMC, unfortunately. The FSM may be *better*, but if you are already an HMC shop, this is a new thing you have to learn.


I have occasionally wondered about the withdrawal of the SDMC, and the move back to the HMC, and how it might relate to the FSM. It seems like perhaps the SDMC overlapped the FSM, and so it was decided not to have three different ways of managing this stuff. Who knows? Not me. And if none of that paragraph's acronymic pondering made sense, then maybe it really doesn't matter. The way it is now is that the PureFlex is managed by the FSM, and the rack-mounted world by the HMC.


Chassis


The chassis of the PureFlex is 10U and has 14 slots. The FSM can currently manage three chassis, so you only lose the slot in the first one. The second and third chassis have all 14 slots available, and soon the FSM will probably handle more than three chassis. Architecturally, it seems a conservative decision by IBM to make sure everything scales linearly.


As it related to our idea about using p460s, the first chassis would need to have at least one p260 and six p460s to be full. You could not put a seventh p460 in the first chassis, but you can in chassis two and three.


Each p260 has two sockets, and therefore 256 GB of RAM. Our p460-equipped chassis have double each of those, per blade. But the blade is twice as big, so the density is the same. In math terms, 2 x (p260) = 1 x (p460).
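The slot and density arithmetic above can be sketched as follows. The slot widths, socket counts, and RAM figures are the ones from this post; the packing shown is the first-chassis case, where the FSM occupies one of the 14 slots.

```python
# Sketch of the chassis math: slot packing plus the p260/p460 density
# equivalence. Figures per blade type are from this post.

CHASSIS_SLOTS = 14

# (slots occupied, CPU sockets, GB of RAM) per blade type
P260 = (1, 2, 256)
P460 = (2, 4, 512)

def totals(blades):
    """Sum slots, sockets, and RAM across a list of blades."""
    slots = sum(b[0] for b in blades)
    sockets = sum(b[1] for b in blades)
    ram = sum(b[2] for b in blades)
    return slots, sockets, ram

# First chassis: FSM (1 slot) + six p460s + one p260 fills all 14 slots.
fsm_slots = 1
slots, sockets, ram = totals([P460] * 6 + [P260])
assert fsm_slots + slots == CHASSIS_SLOTS

# Density equivalence: two p260s match one p460 in slots, sockets, and RAM.
assert totals([P260, P260]) == totals([P460])
```

The one thing the assertions cannot capture is the exception already noted: dual VIO only works on the p460.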


Except for dual VIO.


Around on the back side we have six 2500-watt (nameplate) power supplies, the Chassis Management Module (CMM) or modules (there can be two), and four slots for things like network switches and Fibre Channel switches. We wanted full redundancy, so we filled the slots from the start with two 10 Gb Ethernet switches and two 8 Gb Fibre Channel switches (16 Gb is available, so already pretty future-ready).


The "north-south" communications of the chassis are done via the network, so having 10 Gb is a big plus for a multi-chassis setup.


All the details are in this Redbook linked here.


When you compare the PureFlex to the Sun, you can see the IBM design is newer by virtue of more sockets per blade, and by having FC switches shared amongst the blades rather than an FC card per blade like the Sun. The T5-1B blades did not change any of that (even if they upped the single-socket speeds enough that perhaps it does not matter as much).


And, HP and Dell blades have shared switching like that too... but those are other posts.


Density of Compute


At the end of the day, what we are after is the ability to retire physical systems into virtual images. It's an R&D workload, so putting fifty LPARs onto a 256 GB p260 is possible. Some of the systems being replaced don't even have a GB of RAM. Going virtual, onto this blade, are things like 7026-H70s with 750-watt power supplies. 7043-260s with 640 watts nameplate. 7046-B50s with 140.


At the end of the day, it's another 10-to-1 win. Ten times less power. Ten times less space. Ten times less CO2. And everything has moved from Power 3, 4, 5, or 6 to Power 7. It's virtual. It's faster.
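As a rough sanity check on that 10-to-1 claim: the nameplate wattages are the ones quoted above, but the "ten of each" mix of retired boxes and the idea that a blade's share of chassis power is the six supplies' nameplate split evenly across 14 slots are illustrative assumptions, not measurements (actual draw is well below nameplate, and the supplies are there for redundancy).

```python
# Back-of-the-envelope check on the "10 to 1" power claim.
# Retired-box nameplate wattages are from this post; the per-blade
# share of chassis power is an assumption for illustration only.

retired_watts = {
    "7026-H70": 750,
    "7043-260": 640,
    "7046-B50": 140,
}

chassis_nameplate = 6 * 2500              # six 2500 W supplies, nameplate
per_blade_share = chassis_nameplate / 14  # naive even split across 14 slots

# Say one blade absorbs ten of each of these boxes as LPARs or WPARs.
ten_old_boxes = 10 * sum(retired_watts.values())

print(f"ten of each retired box: {ten_old_boxes} W nameplate")
print(f"one blade's chassis share: {per_blade_share:.0f} W")
print(f"ratio: {ten_old_boxes / per_blade_share:.1f}x")
```

Even with these crude numbers the ratio comes out past 10x, and a blade hosting fifty LPARs would do better still.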


Compatibility


I mentioned before in the Sun blade post that backwards compatibility is critical. We are not just a heterogeneous shop; we have workloads running in R&D on all sorts of releases inside any given vendor. To run AIX on the PureFlex you have to be on AIX 5.2 or higher, and there is a pile of caveats. AIX 5.2 and 5.3 only run in WPARs (think Sun Zones), and if you are P2V'ing a physical system to a WPAR, there are required minimum patch levels that may be a problem if you are trying to stay as backward compatible as possible. Hopefully no production shop is backlevel, even if still on 5.2.


Hopefully.


AIX 5.1 and before? No choice there but to stay on physical hardware. Still: 5.2 came out in 2002 and was EOL in 2009. And it can run in a WPAR on the PureFlex, so that is pretty good backwards compatibility. For Go Big to Get Small, it is a chance to try to get everything at least up to AIX 5.2.


But Wait, There's More!


If you want to run i/Series on the PureFlex, you can. Not as far back as AIX, but i 6.1 and 7.1 are there.


For the record, i as the name of the series and the OS? Really? Because everyone I know still calls it AS/400 or i/Series. Not just "i" or the very slightly better "IBM i".


But whatever you want to call it, it runs on the Power 7 blades. We are setting that up now, and will be retiring a 520 or two when we do.


Next time: Another Blade!