
-by Steve Carl, Senior Technologist, R&D Support


I was out on vacation last week when Cisco announced their new Unified Computing System (UCS), but reading through the trades this week, it appears to have created quite a stir. BMC was present at that party with a full line of tools ready to go.


There were many comparisons of UCS to the idea that this is a new mainframe. Appropriate, as far as it goes. These are not the same technology, so a direct comparison would be suspect if you got too technical about it. One example: the mainframe is still the king of the I/O hill, as near as I can see. The consolidation of the servers and the network into the chassis has obvious mainframe-like features. If you assume that the mainframe does not respond, then it is easy to see this solution growing to overtake the mainframe in a few generations. The mainframe won't stand still, of course. But I digress...


When I looked over the design and technical specs for UCS that are currently publicly available, I was struck by what such a device means to the concept of "Cloud Computing". My last couple of posts ("Convergence" and "Clear to Partly Cloudy") established my thinking on the current trend of the data center back towards a centralized, glass-house style data center. They also laid out my position, in so many words, that "Cloud Computing" is a concept, not a specific technology, at least not yet.


"Cloud Computing"


  • The Network is the computer
  • Compute clouds have the advantage of standardization and centralization so that economies of scale, and economies of power and location can be leveraged. Build the compute clouds where the power is plentiful, and inexpensive, such as next to a hydroelectric dam. Service the client cloud via standard TCP/IP networking.
  • Compute clouds tend to leverage fast provisioning and virtualization to provide their service. They do not have to, but they tend to.
  • The compute / storage resource in the cloud can be anything: A Mainframe, a rack of blade servers, a grid of laptops: Anything that can talk on the network and respond to transactions from the client cloud.
  • The Cloud metaphor is reversible: Not only is the computer / storage resource a cloud, but all the clients of the Compute / Storage resource are also a cloud. Maybe a swarm.




The mainframe community has told anyone who would listen for years and years that the cost of the mainframe does not tell you everything you need to know about running a computing resource. They were not lying. There is always the semi-elusive concept of Total Cost of Ownership: up-front costs versus TCO. Power, lights, real estate, taxes, and how many people it takes to support the thing. One of the barriers to adoption of compute clouds that I have seen is that they have to be built: you cannot just take down the currently running servers, reconfigure them for virtualization and rapid provisioning, and hope that the folks who were using that application don't mind waiting. There is always some sort of swing capacity required to make the leap into the new world. Of course, you can rent that from folks like Amazon with their EC2 offering as well, so it does not have to be something that you install inside your glass house.


Or maybe it does: Not everyone is comfortable with the idea of running their most critical apps and data someplace other than right where they can see it and control it. With that in mind, think about what the UCS solution is or can be: The core of an in-house Cloud. It is not just about virtualization of the server or rapid provisioning: it has Cisco's latest and greatest Nexus networking built in. Nexus has all sorts of nifty new things about it, but in some ways it is a clean break with the Cisco gear of the past. Deciding to go to Nexus means looking for a way to cleanly and non-disruptively inject it into your data center.


To get to the TCO benefits that UCS offers requires that one be ready to take the plunge into the new world. This is not a bad thing, but you need to know going in that this is more than just a P2V of some server or other. This is taking that application and enabling it for the whole new world of modern computing. It's the same leap in concept and power as Web-enabling a CICS application! To leverage the new thing sometimes means an up-front cost in order to derive the long-term benefit.


The savings are very real: In one new data center that we are moving to we are also taking the Nexus plunge, and the reason was that the capabilities to virtualize the network meant a real reduction in the amount of hardware we required to support the data center. That reduction translated into a reduction in Capex and Opex. Less up-front gear. Less power and support costs over the life of the gear.


Hardware Vendors


I don't really have any idea how Dell, Sun, HP, or IBM will respond to the UCS announcements. It is clear that they will need to. What all four have that UCS does not is a storage story. In particular, HP and IBM have in-band virtualization. All currently rely on network vendors like Cisco, or perhaps Extreme, for their network fabric. I have to wonder if we'll see an alliance with a Cisco competitor from one of the four in the near term.


Dell has EqualLogic, and that is a perfect fit for the currently missing part of UCS. UCS provides fat pipes into the storage, but in all my poking around, I cannot find a storage product integrated into UCS at this time. EqualLogic's iSCSI virtual storage farm is a perfect fit for UCS's fat 10G network storage pipes. Dell will probably be very happy they have this.


HP has in-band virtualization and the StorageWorks line: they should be able to make that fit. Ditto IBM. IBM's SVC is a terrific fit for UCS, allowing you to leverage all your current storage investments in the new UCS environment.


None of them will be happy with the idea that they have been cut out of the server side of this, especially as all have blade server offerings. IBM has a blade that runs both X86 and Power. Sun has UltraSPARC and X86. It will be hugely interesting to see how all that plays out. If UCS has a problem, it is that it is X86-centric, and while X86 appears to be ever so slowly winning the CPU architecture war, it is far from over. I never would have predicted Itanium would still be around, for example. And X86 needs to be very wary of ARM. But I digress....


Not having the storage integrated actually works for me, because I need to support all the virtual environments (Power, Sun LDOM, HP): If I want to standardize my virtual storage back end (and I do), then having the storage decoupled from the rest of the solution is a "Good Thing". For now.




Hey Steve! Isn't this "Adventures in Linux"? Yes: good point. Hopefully it is obvious that UCS is a place where virtualized Linux servers will run. Nothing makes for better TCO than a cloud running Linux, I always say. OK, I lie. I just said that for the first time. Gotta start sometime.


What is also interesting is how much Linux there is enabling all this to work. IBM SVCs run Linux as their core OS. Ditto Dell's EqualLogic. I have found no details yet about how UCS itself runs its control applications, but I do know that BMC's BladeLogic is in there for the provisioning, and I know that running it under Linux is a possibility. I will be watching with interest to see how that works out. I also hope to get my hands on one of these in the lab, so that I can see how the rubber meets the road. In all honesty, I only know what I have read so far. Example: a single fully popped UCS has 320 servers and 384 GB of RAM per server. A single UCS virtual server cluster is then 122,880 GB! That is a pretty good-sized VMware farm.
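A quick back-of-the-envelope check of those numbers (the server count and per-server RAM are figures as reported in the trade coverage, not something I have verified on hardware):

```python
# Sanity check of the quoted UCS capacity figures.
servers_per_ucs = 320     # blades in a fully populated UCS, as reported
ram_per_server_gb = 384   # GB of RAM per blade, as reported

total_ram_gb = servers_per_ucs * ram_per_server_gb
print(total_ram_gb)       # total RAM across one UCS cluster, in GB
```

That multiplies out to the 122,880 GB cited above.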


You could make a fair sized Cloud with that.




When it started to appear that, for single CPUs (now cores), the speed of light and quantum mechanics were going to put an end to the doubling of speed and halving of price every 18-24 months, it also became clear that parallelism, rather than sheer single-threaded speed, was going to have to step in to keep Moore's Law on track. What virtualization adds is the ability to drive usage of computing resources up: virtual computers run more images on fewer processors, virtual networking runs more connections over fewer wires, and virtual storage runs more data in less capacity. Virtualization is a cheap way to parallelize what are essentially single-threaded apps / processes.


It seems like a black hole: computers getting ever smaller, more dense, doing more in less footprint. Blade servers have finally matured to the place where you can cram enough memory and CPUs onto a single blade to make it a useful VMware server. Now UCS ups that ante by densely packing in the network. Storage will almost certainly follow. When will it all fit on the head of a pin, or in my Netbook?


The savings are very real. I documented a while back what just our prototype VMware work saved us in just one data center ("Virtually Greener"), and that was 15 months ago. We have saved far more power and space than that since then. At the same time, our OS server image footprint has increased, to keep pace with all the requirements R&D has had.


What would all that look like with UCS coming online? How small (and how hot per square foot) will my next data center be?


The postings in this blog are my own and don't necessarily represent BMC's opinion or position.