
[Image: blades-small.jpg]

Some useful blades....

------

Before I move on to things like Storage or DC design, I wanted to take a post to look back at all the blades I have discussed or at least alluded to. By now most of what we are up to should be obvious, but what the heck: Never hurts to underline a point.

One of the main points is this twofer:

 

  • We think virtualization is the key to successful DC consolidation, and these days that is true across all major platforms.
  • Also of a “these days” nature: blades are ready for virtualization duty.

 

Density

 

Are blades the densest, most virtualization-concentrated option a given vendor has? Two examples from the UNIX side:

 

  • Our Sun blades are T4-1B's. That is one socket per blade, and there are no two or four socket blades from Sun.
  • Each 10U chassis holds 10 blades, so 10 sockets, and at 128 GB per blade, 1280 GB in 10U.
  • Now, after the most recent product announcements, we would get T5-1B's, with 256 GB per blade, or 2560 GB per 10 sockets / 10U.
  • The T5-8 is 8 sockets in 8U, with enough DIMM sockets to yield the same memory density per socket as the blade solution.

 

So, more or less, running Sun blades is the same density as the Sun rack mount solution. The upside of the blades is that the chassis can be upgraded a blade at a time. Need 4? Buy 4. Have empty slots left for future needs. In our case a new blade acquisition can be the newer, denser T5-based blades.
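
For the arithmetic-inclined, here is a minimal back-of-envelope sketch of those numbers in Python. The chassis figures come straight from the bullets above; the T5-8 memory figure is inferred from the "same memory density per socket" point rather than taken from a spec sheet.

    # Back-of-envelope density math for the Sun options discussed above.
    # GB per rack unit is the figure of merit when packing a 42U rack.

    def density(sockets, gb_ram, rack_units):
        """Return (GB per rack unit, GB per socket) for a given box."""
        return gb_ram / rack_units, gb_ram / sockets

    options = {
        "T4-1B chassis (10 x 1-socket blades, 128 GB each, 10U)": (10, 10 * 128, 10),
        "T5-1B chassis (10 x 1-socket blades, 256 GB each, 10U)": (10, 10 * 256, 10),
        "T5-8 rack mount (8 sockets, 8U, inferred 256 GB/socket)": (8, 8 * 256, 8),
    }

    for name, (sockets, gb, ru) in options.items():
        gb_per_ru, gb_per_socket = density(sockets, gb, ru)
        print(f"{name}: {gb_per_ru:.0f} GB/U, {gb_per_socket:.0f} GB/socket")

The T5 blade chassis and the T5-8 both land at 256 GB per rack unit, which is the "same density, but upgradeable a blade at a time" point in a nutshell.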

 

Advantage to the Sun Blade solution.

 

  • Our IBM PureFlex has the FSM, taking up one of the 14 blade slots. That leaves 13 slots for compute nodes.
  • Each slot can hold a two socket / 256 GB Power 7 based server.
  • That's 26 sockets and 3,328 GB in the first chassis. Later chassis don't have the FSM sucking up a blade slot, so a bit more capacity there.
  • We use a lower density DIMM for cost reasons, and because we think 256 GB of RAM to 2 sockets is the right ratio in a virtual environment. You could double that RAM number with the higher density DIMM.
    • Side note: We wish the HMC could manage these blades. Save us a slot, and we are not using any of the advanced features the FSM gives us. But we knew it was a Pure solution when we bought it, and it was worth it to get to the latest blade tech.

  • New acquisitions would now be Power 7+ based blades, and, like the Sun, we can make all the new blades Power 7+, and later Power 8, etc.

 

IBM has some seriously huge Power 7 based systems. The biggest is the 795. In one rack footprint, it will house 32 sockets / 256 cores, and 16 TB of RAM. Using the same DIMMs we use, it would be 8 TB.

 

That's cool if you are after that much power in a single image, but it is also a full 42U rack, so we need to triple the blade count to compare:

  • Blade Chassis 1 (with FSM): 26 sockets / 3328 GB RAM
  • Blade Chassis 2 (14 available slots): 28 sockets / 3584 GB RAM
  • Blade Chassis 3 (14 available slots): 28 sockets / 3584 GB RAM
  • Three chassis, 30U in a 42U rack: 82 sockets / 10,496 GB RAM
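
If you want to check the tally, here is the same math as a short Python sketch, using the 2-socket / 256 GB compute node configuration described earlier and the lower-density DIMM figure for the 795:

    # Tally of the 42U comparison above: three PureFlex chassis (the first
    # gives up a slot to the FSM) versus one Power 795 frame.

    NODE_SOCKETS, NODE_GB = 2, 256        # one half-width Power 7 compute node

    chassis_nodes = [13, 14, 14]          # the FSM eats a slot in chassis 1
    sockets = sum(n * NODE_SOCKETS for n in chassis_nodes)
    ram_gb = sum(n * NODE_GB for n in chassis_nodes)
    print(f"Three chassis (30U): {sockets} sockets, {ram_gb:,} GB RAM")
    # -> Three chassis (30U): 82 sockets, 10,496 GB RAM

    # The 795 in the same rack footprint, at the DIMM density we actually buy:
    print(f"Power 795 (42U): 32 sockets, {8 * 1024:,} GB RAM")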

 

I could choose a 48U rack, slide in another chassis, and go even further, but it is clear the blade solution has higher socket and RAM density. Once again, upgrades can be done a blade at a time, and as new processor and memory densities come out, new blades can incorporate them.

 

Our conclusion: Blades are the way to go these days. Having watched them for years, they have finally arrived, offering the most computing density with the most flexible deployment options available today. Perfect for that little private cloud you have always wanted to build.

 

Other Blade Advantages

 

Being a heterogeneous shop (we are an R&D shop, after all), we have no real reason to take advantage of one of the main selling points of blades: the ability to run more than one processor architecture on different blades inside the same chassis.

 

We use Dell and Cisco for our AMD64 blades, but if we were, say, an all-IBM shop, we could have a mix of Xeon and Power 7 blades in the PureFlex. Quick aside: To some degree we already take advantage of the multi-OS nature of the IBM solution, since we can run AIX, i Series, and Linux on Power on the same chassis.

 

Sun has Xeon blades for its blade chassis. HP has Xeon blades for its blade chassis. We also plan to run VMS on the Itanium blades we already have, alongside HP-UX.

 

Between the four chassis types (Sorry Dell and Cisco: Lumping you together here) we can run numerous operating environments. KVM. Xen. VMware. Linux. Linux on Power. AIX. i Series. Solaris. HP-UX. VMS. Windows. Hyper-V.

 

If you are more dedicated to one of those vendors than another, you can (for example) get both your Red Hat KVM on AMD64 and Red Hat KVM on Power (Red Hat announced KVM on Power at the 2013 summit: had to slip it in...) from the same vendor on the same chassis. Or, in the more likely scenario, run AIX on Power and an AMD64 operating system of some kind on the same chassis. If you are a shop trying to sole source, it's handy.

 

It's something that Dell or Cisco or any of the pure AMD64 blade vendors can not match, although there are enough OS's for AMD64 to go around, so I doubt they are losing any sleep over it.

 

Commonalities

 

You can not put hundreds upon hundreds of operating environments into this density without also having redundant bandwidth. Take VMware for example: We have something like 15,000 VM's of all types, spread across the globe on the VMware servers. When they talk to each other inside their host server, this all happens at virtual network speeds. When they talk outside the host, it goes at whatever speed the server is hooked up at. A single Dell chassis with 16 fully configured blades in theory will run 700-800 VM's.

 

When they chat with each other inside that blade, it all happens at virtual network speeds. When they talk across blades (the east / west communication), it happens across the internal switches in the back of the chassis. Still extremely fast.

 

When they chat outside the chassis, they hop to the Top of Rack (TOR) switch, and move on out from there. With that many OS images running around inside one chassis, the need to talk outside the chassis will be frequent. We went with 10 Gb Ethernet, and have an eye on 40 Gb should the need emerge.
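
To put a very rough number on why the uplinks matter, here is an illustrative budget. The uplink count is purely an assumption for the sketch; the VM count is the theoretical 700-800 per chassis figure mentioned above.

    # Rough north/south bandwidth budget for one heavily loaded chassis.

    uplinks_per_chassis = 4               # assumed: two redundant pairs of 10 GbE
    uplink_gbps = 10
    vms = 750                             # middle of the 700-800 estimate

    aggregate_gbps = uplinks_per_chassis * uplink_gbps
    per_vm_mbps = aggregate_gbps * 1000 / vms
    print(f"Aggregate uplink: {aggregate_gbps} Gb/s")
    print(f"If every VM talked off-chassis at once: ~{per_vm_mbps:.0f} Mb/s each")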

 

All the vendors support 10 Gb. No problem. We had to retool the TOR and core capabilities to handle it, but it only makes sense. This is not an area to scrimp on.

 

Similarly, we went 8 Gb on the Fibre Channel, and will watch to see if there is any need to go to 16. Everything is boot-from-SAN, so all the disk I/O happens here. There are no internal disks on the chassis to take any of the I/O load off.

 

Being fast at I/O is also a big deal inside the VM. The biggest knock VM's get is that they are bad at I/O, and that is because, in the not-too-distant past, VM's were bad at I/O. The fastest way to get your customers fighting your "Virtual First" policy is to be really bad at virtual I/O in a customer's I/O-intensive environment.

 

10U seems to be the consensus for how tall a blade enclosure should be.

 

Differences

 

Every one of these blade environments has its own particular set of management tools. The Suns have Ops Center. IBM has the FSM. Etc. So on. So forth. Each technology has different remote access (though this is also a similarity, since they all have remote access of some kind).

 

Most have redundant switches for Ethernet and Fibre Channel. Some add InfiniBand or QBR options. But the Sun does not have a common / redundant Fibre Channel switch in the backplane: rather, each blade has its own FC cards. Side note: We had hoped the new T5 blades would change this, but they did not. The good news is that the T5 blades go into the chassis we already have. The bad news is we still have to buy Express cards on a per-blade basis, which consumes far more ports on the central FC switching.

 

Most have six power supplies and run 3+3, but some, like the Sun, have two and run 1+1. More FC connections to the Sun, but fewer AC power cords...

 

Most have two-socket blade options, but the Sun has only one socket per blade. Some go all the way to four sockets per blade.

 

They all may have settled on 10U, but how many blades fit in that chassis varies. For Sun, it's 10, with no half-height option. For IBM, it's 13 half-width nodes plus the FSM in the first chassis, and then 14 half-width or seven full-width blades from there on. Etc.

 

We will not get the exact same number of VM's per socket either: different virtualization technologies vary in what they can do here. With KVM, I can over-commit the RAM more than with VMware. That changes the per-socket VM counts, since 80% of the AMD64 workload is RAM constrained, not CPU constrained. With Sun's Zones / Containers we can really ramp up the virtualization counts because of the way RAM is shared, but the reverse is true when using LDOMs. IBM is the same way when looking at WPARs versus LPARs. Etc.
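
To illustrate how the over-commit ratio moves the per-blade numbers, here is a minimal sketch. The over-commit ratios and the average VM size are assumptions for the example, not our measured values.

    # Why RAM over-commit changes VM counts when the workload is RAM
    # constrained rather than CPU constrained.

    HOST_RAM_GB = 256          # one two-socket blade as configured above
    AVG_VM_RAM_GB = 4          # assumed average VM memory footprint

    for hypervisor, overcommit in [("VMware (conservative, assumed ratio)", 1.2),
                                   ("KVM (more aggressive, assumed ratio)", 1.5)]:
        vms = int(HOST_RAM_GB * overcommit / AVG_VM_RAM_GB)
        print(f"{hypervisor}: ~{vms} VMs per blade, ~{vms // 2} per socket")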

 

No matter what the virtualization tech is, the result is more or less the same: Higher density than if we were using rack mounted servers. Much higher density than the fleet of gear we just retired.

 

10-to-1 and a Two-Thirds Reduction in Space / Power

 

I have said it many times before, but to repeat myself: We are an R&D shop. That means many things, among them Heterogeneity. We get to be virtual first... right up until we have a workload that can not be virtualized.

 

Here are some reasons we can not virtualize something:

 

  • The platform is too old: No virtualization solution exists. Example: VMS version 7.x
  • Repeatability: The rock that virtualization dashes itself upon: if you need to know how long something takes, you have to do it on real hardware. Anything else produces numbers no one would believe. VM, on the mainframe, has had this problem for going on five decades, depending on how you count the early work at Cambridge.
  • Scalability: Same thing as above.
  • I/O: Maybe. Maybe not. With the right hardware, you can dedicate I/O to important VM's, and cut out the virtual middleman. But that is expensive in terms of hardware footprint, and starts reducing some of the goodness you get from virtualization. This is a case-by-case call. SAN matters too, and I'll get to that in another post.
  • A BMC product needs access "Under the hood". For example, bare metal provisioning like Blade Logic. Can't test that only on a virtual solution. At some point, you have to run on the real iron.
  • Workload exceeds certain size parameters. Right now, if it's bigger than 16 GB, has more than 4 CPU's, and needs terabytes and terabytes of local storage for whatever reason, I would take a long hard look at it to see if it should be real or virtual (there is a rough sketch of that rule of thumb after this list). With KVM solutions like RHEV you can, in theory, run absolutely monstrous VM's (2 terabytes of RAM in one VM? Nice to know it can do it, but nothing I will need soon). But why consume nearly an entire host just because you can?
    • With blades I can just slide that end user onto a dedicated blade, and if they stop using it, I can turn it back into a virtualization host. Most blades can scale to whatever real need an end user might have, and even if they want less, we'll buy a big one so we don't waste the slot later.
  • Portable demo environment because the Internet is not available
    • That may be virtual. Probably is. But it may be on a laptop, not on a Blade based server in the DC.
    • We see this use case less and less, as the Internet continues to insert more of its tubes all over the place and people figure out secure ways to access it. The second someone can get to the Interwebs, they can be on the internal Blade based cloud.
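
As promised above, here is the size rule of thumb encoded as a rough sketch. The thresholds come from the "exceeds certain size parameters" bullet; the workload shape and the field names are just illustrative.

    # Rough encoding of the "take a long hard look" rule of thumb.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        ram_gb: int
        cpus: int
        local_storage_tb: float
        needs_bare_metal: bool = False          # e.g. bare metal provisioning tests
        needs_repeatable_timing: bool = False   # performance / scalability runs

    def needs_review_for_physical(w: Workload) -> bool:
        """True when a request deserves a hard look before we virtualize it."""
        if w.needs_bare_metal or w.needs_repeatable_timing:
            return True
        return w.ram_gb > 16 and w.cpus > 4 and w.local_storage_tb >= 1

    print(needs_review_for_physical(Workload(8, 2, 0.2)))    # False: easy virtualization candidate
    print(needs_review_for_physical(Workload(64, 8, 4.0)))   # True: maybe a dedicated blade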

 

Here are some reasons I will inevitably be offered that are not real reasons not to virtualize on a blade:

 

  • I want a real machine.
  • I work from home
  • It needs to be inside network x.y.z
    • This one may still have some truth, but with SDN it is going to die as a valid reason soon, if it has not already.
  • VM's are too slow.
    • If your VM's are too slow, then the hosts need work. My Windows 7 environment is almost solely a VM these days, and it's like being on a real machine. I get to it from my Mac or Linux desktops. No issues. And it's nice not having to run it locally.
  • I need physical access to my machine so I can:
    • Insert media
    • Reboot it
  • My real machine is cheaper than your virtual one
    • If hardware were the only factor in price, that would probably be true, but support costs are always a huge multiple of the real hardware cost.

 

There are valid reasons to keep real, physical rack mount servers around, and that reduces the amount of space and power we can recover. I have mentioned the 10-to-1 reduction number for both power and space over and over in this series, and from that it would be easy to assume I expect, overall, to get that kind of reduction. That, for example, my 38,000 square foot DC will become 3,800 square feet instead.

 

Maybe. Someday. Right now we have a more modest goal: a two-thirds reduction in space and power. That is nothing to make fun of. It equals real carbon reductions and real cost savings. We get there because we can leverage the 10-to-1 reductions we get from blades and virtualization to offset the hardware we have to keep around at full size. Over time those reasons will fade. We won't have to support a VMS 7 system anymore, and so once everything is on 8.2 or later, it can all be virtual.
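
To show how a 10-to-1 reduction on only part of the estate blends out to roughly a two-thirds reduction overall, here is a minimal sketch. The 75% "virtualizable" share is an illustrative assumption, not our actual inventory split.

    # Blending the 10-to-1 consolidation ratio with the gear that stays physical.

    virtualizable_fraction = 0.75      # assumed share of the floor we can consolidate
    consolidation_ratio = 10           # the 10-to-1 blade / virtualization number

    remaining = (1 - virtualizable_fraction) + virtualizable_fraction / consolidation_ratio
    print(f"Floor space remaining: {remaining:.0%}")       # about a third remains
    print(f"Overall reduction:     {1 - remaining:.0%}")   # roughly two-thirds

    # The 38,000 square foot example from above:
    print(f"38,000 sq ft becomes roughly {38_000 * remaining:,.0f} sq ft")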

 

The low-hanging fruit still allows us, with a strong investment in time, design, people, and infrastructure, to get on the road to that two-thirds reduction goal. With each success we learn something, and we earn a reputation for having a new way of doing things that works. That helps with those who resist the change and just want to keep everything under their desk.

 

So: Enough on Blades. Next time, Storage.