
Introducing a new Open Source tool from BMC, available at Sourceforge, for managing Linux guests under VM on the mainframe.

 

http://vmlmat.wiki.sourceforge.net/

 

If you have been following this blog, you know that my background is the mainframe. I started as a VM system programmer, and VM is still near and dear to my heart. When MF Linux became a reality, it was, to me, a natural thing that it would be a guest OS on VM, the same way VM has hosted MF operating systems since VM was invented over four decades ago (and the research that went into creating it stretches back even further).

 

Linux on the MF is just a logical progression of all the things that came before it on the mainframe... things like UTS and AIX/370. It is also a natural step for Linux, the ultimate in multi-platform OSes.

 

However...

 

Linux on the mainframe, like any other technology deployment of humankind, is not without considerations and issues. If you have worked with VMware, the x86 spiritual baby brother of VM, you might know what some of those issues for Linux on the mainframe are: things like server sprawl, and the tendency for many end users to treat the resources as essentially infinite.

 

There is also a cultural issue. If you have programmers working on Linux, be it web apps or anything else, and they are less than four decades of age, they probably learned Linux on a laptop, desktop, or small server in the cube next to them (shades of "departmental computing"). To them, for the most part, Linux is Linux: it does not really matter where it runs, and they most certainly do *not* want to learn anything about the MF in order to access their Linux there.

 

That means that they don't want to call the data center and have the operator autolog their Linux VM. They don't want to learn how to log into VM, IPL the Linux system, and then do a #CP SET RUN ON followed by #CP DISC. No TN3270 stuff. No green screen. It is not the way of the GUI world, and very much not the way of the Web 2.0 world. CMS, it usually seems, is only for us VM'ers. I know my track record in teaching the mainframe to people who started out in computing on Linux, UNIX, or MS Windows is not good. It is not zero, but it is not the next generation of MF people either. The MF is just too different, and its attractions are often not obvious at first glance. Sure, XEDIT is the best text editor *ever*, but it takes a while to come to appreciate it....
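For those who have never seen it, the manual bring-up we are sparing people looks roughly like this (a sketch only; the guest name and boot device are made up, and details vary by shop):

```
LOGON LINUX01        log on to the Linux guest's VM userid
IPL 200              boot Linux from its boot disk
#CP SET RUN ON       keep the guest running after we disconnect
#CP DISC             disconnect, leaving Linux up
```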

In general, Linux users are used to owning the resource, and being able to boot it whenever they want (during development anyway). There is a sort of security in owning the "Big Red Button", in being the master of your own computer's destiny.

 

This all stands at odds with a great deal of the culture of the MF: the ultimate in data center glass houses. To solve the problems above, often all the Linux VM's are just autologged when the MF is IPL'ed, and they run whether they are needed or not. This would not be a problem (at least not nearly as much of one) if these were all CMS VM's, but Linux in a VM turns out to have a few design points that operate against the "just IPL it and let it run" way of working.

The main one is memory management. Think of the way Linux allocates memory, which is more or less "use everything I can". What is not programs is in-memory cache. It is a great design, and it makes perfect sense if you are running natively on real hardware: why let the extra memory go to waste? Why not use it to speed things up? The idea is not even original to Linux; UNIX before it had it as a central design precept. The idea was that disks are way slow, so if there was spare memory lying about, use it to cache the I/O and speed up the programs. Let the I/O happen when it could. This design point is still valid today, as disks are still extremely slow relative to RAM.
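You can see this on any Linux guest by peeking at /proc/meminfo: what is not "free" is largely buffers and cache that Linux has put to work, so from the hypervisor's seat nearly all of the guest's memory looks busy. For example (standard kernel interface; values are in kB and will differ on every system):

```shell
# Show how much "free" memory Linux has actually put to work as cache.
# On a busy guest, MemFree is tiny while Buffers + Cached are large.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```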

 

As a guest on the MF, though, that means that to the host OS... to VM... it looks like the guest memory is 100% busy all the time, and the way VM likes to page out unused guest memory, so that only active memory is in core, is defeated. The solution is to keep virtual memory trimmed to just what the guest actually needs to do its job... at least it was before VMLMAT.

 

We do R&D on MF Linux. In the pre-VMLMAT days, at any given time, using the above autologging method, we had over 100 Linux VM's running, and VM was running out of real memory. VM would look around, find the least recently referenced pages, and page / swap them out to DASD. But Linux would then reference those pages, and back in they would come. Paging / swapping is fine when what is being moved is rarely referenced, but when pages go out and come right back in over and over, it is called thrashing. There are limits to what you can do to tune this, and we were at them.

 

Many of the VM's were idle in reality, but we in the support group had no way of knowing what was being actively used, what was up for reference, and what was up simply because it had been autologged.

 

Sure, things like the Build systems need to be up all the time, but those were just a handful of the total systems we are talking about here. We needed a way to make it so that end users could, without knowing *anything* about VM, bring their own Linux VM's up and down.

 

Enter VMLMAT.


VMLMAT

 

Virtual Machine / Linux Management and Archiving Tool.

 

As systems programmers, we are of course not marketers. Our name for our tool is descriptive, not beautiful. The VM part tells you right away this has nothing to do with LPARs or MVS or DOS/VSE. This is a VM tool, and that only makes sense: where else but the best hypervisor on the planet can you manipulate the guests without them knowing what you are up to?

 

The Linux part lets one know we are not dealing with CMS here. The Management and Archiving part is descriptive of function. That's just the way we roll. Since this is Open Source, maybe someone can contribute a snazzier name some day.

 

VMLMAT serves standards-compliant HTML from an Apache web server: this is the way the Linux users interact with the program. The system programmer has a few things to do on the install that do not relate to the Web interface, but the idea is that Linux users are used to doing things via the web browser. Moreover, we did not want to assume anything about where the MF Linux user was starting from: it could be AIX, Solaris, Linux, or MS Windows (perhaps with some sort of X loaded onto it). Given that platform diversity, standard HTML would have been a requirement even if we were not already a pretty standards-driven group.

 

Now when we IPL VM, just the Build and Packaging Linux guests are autologged, and the end users can go to the web interface and IPL their Linux whenever they want. Or bring it down. It seems a simple thing, but that one feature saved us an MF upgrade. Only the Linuxii we need... that we are actually actively using... are up at any given time. 100+ Linux VM's up and running at the same time dropped to one fifth that number.

 

That is just the tip of the VMLMAT iceberg, though. Here is another nifty feature: disk space savings.


VMLMAT and DASD

 

MF disks are very expensive things. One of the primary criticisms of Linux on the MF is that Linux normally runs on commodity-priced hardware, and by running it on MF gear all that price advantage is lost. Many saw, of course, that there were advantages: I/O pathing and monster transaction capabilities, best-in-the-business HA, and so forth. Perfect for the production Linux environment. Not so perfect for all the possible iterations of a development environment.

 

Since we in R&D Support do the care and feeding of all sorts of Linuxii internally on the MF... everything from the very first bootable kernels, before Red Hat or SUSE were making MF versions of their distros, up to the latest and greatest stuff... we had a real diversity issue. All those VM's were sitting around.

 

Everyone needed a separate VM for each version they were working on, plus a set of the minus-one versions of code, plus the betas and alphas for announced code. This was not server sprawl from the "it's all free" point of view I mentioned earlier, but it was expensive server sprawl nonetheless.

 

VMLMAT takes a different route to this. We do not store all the versions on the MF DASD at all. Everything is archived to inexpensive NAS. If you have been following "Adventures", I have been writing about that inexpensive NAS for a while now. Now you know one of the things it holds: Our MF Linux images.

 

VMLMAT can package up any VM via tar and store it off to the NAS. Even better, it can restore that archive to ANY OTHER virtual machine: VMLMAT unrolls the archive, and then personalizes it to match the VM it is being restored to.
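The mechanics can be sketched roughly like this (a toy illustration only, not VMLMAT's actual code: the real tool drives this from REXX on the VM side, and every path and guest name below is made up):

```shell
#!/bin/sh
# Sketch of the archive-and-personalize idea using scratch directories
# to stand in for the guest's root filesystem and the NFS-mounted NAS.
set -e

NAS=/tmp/nas-demo          # stand-in for the NAS archive area
GUEST=/tmp/linux01-root    # stand-in for guest LINUX01's mounted root fs
mkdir -p "$NAS" "$GUEST/etc"
echo "linux01" > "$GUEST/etc/HOSTNAME"

# 1. Archive: tar the guest image off to inexpensive NAS storage.
tar -czf "$NAS/linux01.tar.gz" -C "$GUEST" .

# 2. Restore: unroll the same archive for a *different* VM...
TARGET=/tmp/linux02-root
mkdir -p "$TARGET"
tar -xzf "$NAS/linux01.tar.gz" -C "$TARGET"

# 3. Personalize: rewrite the identity files in /etc for the target VM.
sed -i 's/linux01/linux02/' "$TARGET/etc/HOSTNAME"
cat "$TARGET/etc/HOSTNAME"    # now reads: linux02
```

The real personalization step touches more than a hostname file (network settings and so on), but the pattern is the same: one generic archive, stamped with a specific VM's identity at restore time.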

 

We then leverage that in many ways:

  1. When a new release comes out, we install it for the first time and then archive it. Now everyone can use that new version without anyone in tech support ever getting involved.

  2. If applications need to be added, the base archive can be installed and updated with, say, Oracle. Now it can be archived again, and anyone who needs that release of Linux with that release of Oracle can leverage the work.

  3. Business Continuance / Disaster Recovery: Now we can take the NAS archives and replicate them wherever and however we like, and get back all this work, and it is simple and easy.

 

Only the currently being used copy of Linux is up on the MF DASD. The rest is spinning far less expensively out on the NAS. Multiply by the number of Linux VM's and the number of Linux versions and the number of application setups and that is a bunch of DASD being saved.

 

With the Web interface to VMLMAT, the end user is now an empowered individual. They can bring up their VM, shut it down, change it, install stuff, archive their changes, share their changes with other teams, and leverage other teams' work. Say they are running RH AS 5 with Oracle, and they want SUSE with MySQL for their next test. They can archive the current work, retrieve SUSE with MySQL from the archive, and start testing. The whole backup and restore takes less than 30 minutes in our shop. The new SUSE VM has the same name and same IP as it had before: VMLMAT took care of editing all the relevant files in /etc so that the VM name never changes. Just the Linux version.

 

At no point did a system programmer get involved in that transaction. We have seen a workload that was frankly swamping Ron drop to the point where, rather than working crazy hours to keep up, he puts in a few hours a week on MF Linux maintenance (like creating new archives of new releases of Linux when they come out) and then moves on to other things. Not only was the end user enabled, but we got back most of an employee for other tasks.

There is even more, and I'll write about that in future posts, but for now I want to tell you how anyone can get VMLMAT if they want it. Did I mention that BMC is Open Sourcing it yet?


Sourceforge and BSD

 

VMLMAT is available to anyone who is interested at Sourceforge:

http://vmlmat.wiki.sourceforge.net/

 

VMLMAT is licensed under BSD, one of the most open and permissive of open source licenses, and the one that we at BMC have chosen to use when we release Open Source projects. Our little site was just set up yesterday, and we are still knocking around learning how to use Sourceforge, so bear with us. We are loading up the documentation on the Wiki, and the tarball for the current 1.1.0 version is there. We'll be checking it all into SVN soon, but till then this is the way to get it.

 

VMLMAT is created entirely out of Open Source projects like Apache, PHP, and Samba, and the VM portions are the very VM-standard REXX. No C or assembler was harmed in the creation of the tool. The HTML is scanned and certified as being 100% open-standards compliant. As we have it written, it leverages NFS to archive Linux images to NAS.
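As an example of that NFS wiring, the archive share would be mounted on the archiving guest with an /etc/fstab line along these lines (the server name, export path, and mount point here are all hypothetical):

```
# Mount the NAS archive area over NFS
nas01:/export/vmlmat   /mnt/archive   nfs   rw,hard,intr   0 0
```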

 

Ron did a slideshow style presentation to walk through some of the features of VMLMAT, and it is loaded to Sourceforge as well.

 

I'll be your host on the project, along with the BMC internal author of VMLMAT, Ron Michael.

 

Ron also has a blog at TalkBMC called "Open for Mainframe", and he has a bunch of posts in various stages of readiness for posting there about VMLMAT. We'll also start cross-posting with the blogs over at Sourceforge soon, so stay tuned for that.

 

In the meantime, I am jazzed. VMLMAT is a simple concept and an amazing tool, and it has been saving us all sorts of time and money. I am thrilled to be able to share it with anyone else out there who is interested in two of my great geek-loves: VM and Linux. Further, we knew going in that VMLMAT was created to meet our real-world requirements and our R&D Support environment, so, knowing that it might be Open Sourced, Ron created it to be easily enhanced with new features. For example, we chose to use Active Directory for end-user authentication (via Samba) rather than maintaining a separate user / password file or perhaps keeping it in LDAP. The code is modular so that anyone can come in and write their own module to match their internal needs, and hopefully they will contribute that back so that VMLMAT can grow to meet a broader set of real-world, Linux-on-the-mainframe management challenges.

 

As Rachel Maddow says: "One More Thing:" Please do not confuse VMLMAT with BMC's VM Cloning Tool from a number of years ago. It shares no design and no code with that tool.