I am going to go out on a limb and guess that if anyone in your shop is using Linux, they started using it on a PC of some kind: probably an x86 box like a laptop or desktop. If Linux was studied in school for something like a compiler or OS architecture course, it was probably taught on small, desktop-class gear. The point being, they are used to having control of their system.
In a small development group there is probably a re-purposed PC, a small rack-mounted server, or a VMware virtual machine, and everyone accesses the Linux server via X or SSH from the Linux laptop or desktop in their office. When someone wants to reboot, they send out an email, an IM, a tweet, or just shout up and down the hall that the server is going down.
The traditional MF shop is the opposite of this. It only makes sense: the MF is too expensive not to be centrally managed. I often think that ITIL was developed by people with a MF background, because a large number of the disciplines it discusses are things we had to do in the MF world long before they had formal names. Change Control, for example: make a change to the MF, and you have the potential to blow up a whole bunch of people or a whole lot of batch work, all at one time. This tends to make people grumpy.
As one who has lived in both worlds, I have seen the culture shock of someone coming from the small Linux platform to the MF. Something like: "What do you mean I have to call the data center to get my VM recycled? Are you nuts? Just get me a 1U server and let me control it. I don't need these people in my way!" It is just not intuitive to them that they should not have control of their own Linux, unless it is the production web/app server or something. There is a sense of ownership: that is my Linux server.
This is compounded, of course, by the fact that with the general Hossness (TM) of the MF and the best OS on the planet, z/VM, there can be hundreds of Linux VMs. Thousands, even. Now we get a real clash of cultures (the folks who want to own their compute resources vs. the central data center) and a real stress on time and resources (all the behind-the-scenes work it takes to keep the shop running and up to date vs. being responsive to the end user). If one runs this Linux shop the way the MF shop has done it in the past, then a system programmer makes all the changes to the guest operating systems, a system operator brings up and shuts down those same guest OSes, and the "owner" just uses them for whatever they need to do. And oddly, no one is happy.
The sysprog, or the sysprog team, are stress cadets because someone forgot to add more hours to the day to do all the work.
The operators are really, really tired of all the reboot phone calls and emails, and they are equally unhappy when they get a complaint that a reboot did not happen within five minutes, even though at the time they were running backups over on MVS, mounting a tape, retrieving an archive from the vault, or something similar.
The end user is not happy because this is not nearly as good as just having their own computer. They could install what they wanted, when they wanted, and reboot all day long, and no one would say anything about it.
Part of the problem here is that, for the most part and to the end user, Linux is Linux regardless of the platform it runs on. If it does not look different, why must their behavior towards it be different?
VMLMAT solves this culture clash by returning control of the individual Linux VMs to the end users who "own" them. It is not exactly the same thing as having a physical big red button, or the server tucked under the desk, but via a standards-compliant web page (any browser should work) they can shut down, restart, or rebuild their Linux to any version of Linux in the repository. This is something like VMware's VI Client, except that it adds provisioning to the mix; and where VMware has snapshots, VMLMAT has archives. Not the same thing, though they can fill the same role, more or less. Now the operators are focused on the things that need their physical presence, and the sysprogs are working on more technical things, such as putting the latest version of Linux into the archive so that all can benefit from it.
Install once, run many (tm).
And of course, the reboots are happening right when they are requested. Everyone is much happier.
VMLMAT takes the paradigm of real machine ownership a bit further. In the real world, one person may have several Linux machines, a group of people working together might share just one, or a group of people may have a group of machines. In database-speak, that is a one-to-many, many-to-one, or many-to-many relationship. VMLMAT implements all of these, so there can be individual machine ownership or group machine ownership, except that the 'machines' are really Linux virtual machines.
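For readers who think better in code than in database-speak, here is a minimal sketch of that ownership model. This is not VMLMAT's actual implementation; the class and VM names are hypothetical illustrations of owners (users or groups) mapped to VMs in a many-to-many relationship:

```python
# Hypothetical sketch of the owner <-> VM relationship described above.
# Owners can be individual users or groups; each side can map to many
# of the other (one-to-many, many-to-one, or many-to-many).

from collections import defaultdict

class OwnershipRegistry:
    """Tracks which owners may control which Linux virtual machines."""

    def __init__(self):
        self._vms_by_owner = defaultdict(set)   # owner -> {vm, ...}
        self._owners_by_vm = defaultdict(set)   # vm -> {owner, ...}

    def grant(self, owner, vm):
        """Record that this owner controls this VM."""
        self._vms_by_owner[owner].add(vm)
        self._owners_by_vm[vm].add(owner)

    def may_control(self, owner, vm):
        """True if this owner is allowed to reboot/rebuild this VM."""
        return vm in self._vms_by_owner[owner]

# One person with several VMs, a group sharing a VM, and an overlap:
reg = OwnershipRegistry()
reg.grant("alice", "LINVM01")
reg.grant("alice", "LINVM02")        # one owner, many VMs
reg.grant("dev-team", "LINVM03")     # a group owning one VM
reg.grant("alice", "LINVM03")        # ...making it many-to-many

print(reg.may_control("alice", "LINVM03"))   # True
print(reg.may_control("bob", "LINVM01"))     # False
```

The point of the two mirrored dictionaries is that the self-service web page can answer both questions cheaply: "which VMs may this user touch?" and "who should be allowed to reboot this VM?"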
The time savings are real too: Ron noted in his "Beyond Linux Cloning..." post that VMLMAT returned 30 hours a week to him. He got back the time he used to spend working tickets to make various changes to users' Linux VMs to match their current requirements, and the end users were no longer waiting on him to have time to get to their tickets either. Everyone was more productive. A huge win-win.
Our savings may be a bit overstated: we are an R&D environment, and we make changes all the time. We have test versions and test versions and test versions. We need something at base level, another with a particular set of patches installed, another with a different set of patches, another totally bleeding-edge current, plus alphas and betas... Still, I think the savings are there for everyone.