
Last post I finished up on the UNIX blade servers by talking about HP's blades. That clears the deck to get into what is actually the much larger footprint blade space: AMD64 blades. We have Cisco UCS blades as well, but I am going to cover the Dell M1000e here.

 

The Wide Wide World of AMD64

 

On Power chips you can run well supported Linux or AIX. On Sparc chips you can run Solaris, plus an assortment of special purpose OS's (most of which I honestly had never heard of before today). On Itanium you can run HP-UX, NonStop, and VMS, and some Linux and BSD versions are still available, though Red Hat and Microsoft famously announced they would drop Itanium support back in 2009.

 

No chip architecture supports as many different operating systems as AMD64. Intel's new Haswell line looks to keep that going, with better performance and substantially better power savings.

 

It's not just the entire line of every MS OS there is, plus pretty much every version of Linux and BSD. Solaris continues to have X86 versions. Android has been ported (admittedly, Android is Linux). On and on. The big kids in the room as far as servers go are currently Linux and MS Windows Server, of course.

 

Apple's OS X is on this architecture too (though tightly tied to Intel chipsets for now: I know of no AMD chips that commercially run OS X). Even though it is an AMD64 design, Apple limits OS X to running only on their own hardware, so we cannot leverage the AMD64 blades to support that OS. (Note: we *do* support OS X with a number of products. iOS too, but those are different posts.)

 

Just in terms of total numbers, we have well north of ten thousand OS images running on AMD64-based hardware. We are moving as much of that as we can, worldwide, to blades.

 

The reasons are the same as they were for UNIX: we are after 10-to-1 power reductions, significant CO2 reductions that follow power reductions, 10-to-1 space reductions, and a modernization of our entire data center footprint.

 

Side Note: It will be extremely interesting to watch ARM in this space. ARM has just about as many OS's running on it as AMD64 does these days, and the ARM folks are very interested in creating low power server solutions. If that should "become a thing", then I can see another chipset entering the data center in quantity. The good news here is that it will already be a low power solution, and therefore fit into our overall goals, though it remains to be seen how it will be packaged. Maybe a rack shelf full of smart phones with cracked screens? According to some, we are still a year or so away from adding ARM-based servers to the DC.

 

If nothing else, the commercial success of Linux and Windows means we would have *lots* of AMD64. The more of our customers that run those platforms, the more R&D we do on those platforms. Only makes sense.

 

M1000e

 

The M1000e chassis that holds the blades should, by now, "sound" familiar. Up in the front you can stick blades, and blades can be half or full height: eight full height, or 16 half height, per chassis. We have a mix:

 

[Photo: one of our M1000e chassis, loaded with a mix of blades]
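As a quick capacity-planning sanity check, the slot math is simple: a full height blade takes the space of two half height slots, so any mix has to fit within the 16 half height bays. A minimal sketch of that accounting (my own illustration, not a Dell tool):

```python
# Rough slot accounting for an M1000e chassis: 16 half height bays,
# and a full height blade occupies two of them.
HALF_HEIGHT_BAYS = 16

def chassis_fits(full_height: int, half_height: int) -> bool:
    """True if the requested blade mix fits in a single chassis."""
    return full_height * 2 + half_height <= HALF_HEIGHT_BAYS

print(chassis_fits(8, 0))    # True:  all full height
print(chassis_fits(0, 16))   # True:  all half height (the M620 target)
print(chassis_fits(4, 8))    # True:  a mix, like ours
print(chassis_fits(4, 10))   # False: over-subscribed
```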

 

There is a midplane that everything connects into.

 

Around back there are power supplies (six 2700 watt units) and slots for a plethora of switches. I do mean a metric ton of options. A total of six slots are there to slide all those switch options into, so everything can be fully redundant (three redundant fabrics of whatever type you need).

 

For our needs, the 8 Gb Fiber Channel and 10 Gb Ethernet options, matching what we have on the UNIX blades, are available. There are switches from Brocade, Dell (PowerConnect), and Cisco.
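One way to picture the redundancy is as three fabrics, each backed by a pair of those six I/O slots. The slot labels below follow Dell's usual A/B/C convention, and which fabric carries which traffic is just my illustration of a configuration like ours, not output from any tool:

```python
# Three redundant fabrics, each spanning two of the six I/O module slots.
io_fabrics = {
    "A": {"slots": ("A1", "A2"), "carries": "10 Gb Ethernet"},
    "B": {"slots": ("B1", "B2"), "carries": "8 Gb Fiber Channel"},
    "C": {"slots": ("C1", "C2"), "carries": "spare / whatever comes next"},
}

for name, fabric in io_fabrics.items():
    slots = " + ".join(fabric["slots"])
    print(f"Fabric {name}: {slots} -> {fabric['carries']}")
```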

 

M620

 

The M620 is a beautiful blade for virtualization. You can run 2 sockets, 8 cores each, and 256 GB of RAM (it actually supports way more than that...). It matches our R.O.T. (rule of thumb) about the ratio of processor power to RAM perfectly. It supports all the major AMD64 OS's, including Red Hat Enterprise Virtualization, Citrix XenServer, and of course VMware ESX.
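The back-of-the-envelope math on why it fits (my arithmetic; the per-core ratio is simply what falls out of these numbers, not an official figure):

```python
# Per-blade and per-chassis math for the M620 as we configure it.
sockets_per_blade = 2
cores_per_socket = 8
ram_gb_per_blade = 256
blades_per_chassis = 16        # half height M620s

cores_per_blade = sockets_per_blade * cores_per_socket   # 16 cores
ram_gb_per_core = ram_gb_per_blade / cores_per_blade     # 16 GB per core

print(f"Per blade:   {cores_per_blade} cores, {ram_gb_per_blade} GB RAM "
      f"({ram_gb_per_core:.0f} GB per core)")
print(f"Per chassis: {cores_per_blade * blades_per_chassis} cores, "
      f"{ram_gb_per_blade * blades_per_chassis:,} GB RAM")
```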

 

Unlike some of the blades, it does all that in a half height blade, so that we can slot in 16 servers in one 10U space. Since it is boot from SAN, there are no internal hard drives (as we configure it: you can put two in) to mess with.

 

Crammed for Power

 

One of the ongoing things we looked at with something this dense is how to wire it up. Six power supplies, each rated at 2700 watts. Connected to 208V (US) power, that's 13 amps per plug. The power supplies can run in anything from a 3+3 to a 5+1 configuration, depending on how much power the servers installed in the chassis need. Hooking them to C20 plugs and backing the PDUs with 30 amps each gets it done.

 

That math may seem screwy, but it's not. Not for this configuration.

 

For the M620, consider the 115 watt per socket, 2.4 GHz E5-2665 as the processor.

 

In our case, with switches and 16 M620 blades, we are looking at about 5,300 watts total per chassis, max. Probably less, because virtualization is memory intensive, and that workload profile uses less of the power hungry CPU.

 

That easily fits in the 3+3 configuration of the power supplies, so the total draw across the entire chassis is only about 25 amps, and we'll have at least two 30 amp circuits feeding it. In one install I am looking at six different 30 amp circuits feeding three M1000e chassis: the ultimate in redundant power, with six different feeds to six different power supplies. Well... 18 different power supplies, but on a per-chassis basis it's only six.
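Running the amperage numbers from the last few paragraphs (my arithmetic, assuming the 208 V feeds described above):

```python
# Chassis power math at 208 V (US).
psu_watts = 2700
line_volts = 208
chassis_watts_max = 5300       # estimated max: 16 M620s plus switches

amps_per_psu = psu_watts / line_volts            # ~13.0 A per plug
chassis_amps = chassis_watts_max / line_volts    # ~25.5 A total draw
side_capacity_3_3 = 3 * psu_watts                # 8,100 W per redundant side

print(f"Per power supply: {amps_per_psu:.1f} A")
print(f"Whole chassis:    {chassis_amps:.1f} A, fed by two 30 A circuits")
print(f"3+3 headroom:     {side_capacity_3_3:,} W available per side "
      f"vs {chassis_watts_max:,} W needed")
```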

 

For that we get 256 cores and 4,096 GB of RAM. Per chassis. Three of them in a 42U rack, and we are still only at about 16,000 watts.

 

Sure, that's about 800 watts a square foot (20 SF cell size), but still: That is a lot of virtualization capacity in a very small space.
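And the rack-level density math behind those numbers (same assumptions):

```python
# Rack density: three M1000e chassis in a 42U rack, 20 square foot cell.
chassis_watts_max = 5300
chassis_per_rack = 3
cell_sq_ft = 20

rack_watts = chassis_watts_max * chassis_per_rack   # ~15,900 W ("about 16,000")
watts_per_sq_ft = rack_watts / cell_sq_ft           # ~795 W/sq ft ("about 800")

print(f"Rack draw: ~{rack_watts:,} W")
print(f"Density:   ~{watts_per_sq_ft:.0f} W per square foot")
```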

 

Since they are all boot from SAN, any given blade can easily be replaced. It's just a compute node in the array of M620 servers.

 

Mix and Match / Real and Virtual

 

Looking at the picture above you can see we have things other than the M620 in those chassis. The nice thing about a blade chassis is that, even if you are not going to use all of it for virtualization, you can still replace a standard rack mount server with a blade server.

 

We have reasons at BMC to need real hardware. We are a virtual-first shop, but not to the point of being unable to develop the products that need real hardware to test against. Performance. Scalability. Development against real iron. Whatever the reason might be, we can slide X86 hardware designed to meet a need into the chassis, and it instantly gets access to the high speed SAN and network of the chassis. It still takes less space. It still uses less power.

 

These same things are true for all the UNIX blades too, of course: this was just the first time in this series I had a picture with an obvious difference to talk about.

 

The other thing that is true is that if a blade was bought for one thing and that reason goes away, it is easy to repurpose it, virtually, inside the chassis. All the remote support features are there to make reconfiguring / repurposing a matter of minutes. Between being a blade and having BladeLogic... it's a snap.

 

High Speed Network

 

Mentioned before, but worth repeating: the integrated switches give us high speed, redundant switching for both Fiber Channel, at 8 Gb, and Ethernet, at 10 Gb. Like all the other blades I have talked about in this series, when we are ready for 40 Gb Ethernet or 16 Gb Fiber Channel, I only have to change the switches and possibly the mezzanine cards.

 

The Sun blade is a slight exception there: its NEM is already capable of 40 Gb Ethernet, but there is no shared FC switch, so I have to replace lots of Express cards on each blade to take it to the next speed. 8 Gb is the fastest FC card for it at the time of this writing (just looked to be sure...), but IBM, HP, and Dell are all 16 Gb FC capable already.

 

Blade Summary

 

Before I start in on storage, that last paragraph is a good place to stop, because it is probably worth a post to go over the entire blade field we have. As noted there, the Sun blades differ from all the others we have in how they do some things, and of course many of the blades have very similar features that are nonetheless worth talking about. But that is a full post, and it's for next time.