


In my last post I mentioned using SUSE's Studio product to build an appliance to test Evolution with. I did not go into detail then because I wanted to come back to that particular technology stack and talk about it as a separate-from-Evolution subject. If you read that last post, you will know that the reason I used Studio was to create a quick appliance to test the state of the art for MAPI connections to MS Exchange 2007. That is hardly everything that could conceivably be done with SUSE Studio, though perhaps it is a good example of the ability to quickly create a one-off test system.

 

For my use, I wanted the ability to build a LiveCD of OpenSUSE 11.1 with all the latest versions of all the related packages. I wanted it to be a LiveCD so that I could download it, boot it on the Dell D620 laptop, and run a quick test. It took about 10 minutes on my first pass to assemble all the things I wanted, and then another 10 or so while Studio created the CD image. I then downloaded and burned that and ran my test.

 

The disk image was small: just over 400 MB. Studio had left out everything I did not need for the test, although if I were to do it again I would probably go back and add a few more Gnome packages so that I would have a more complete, familiar desktop. That is not required for the test, but if I wanted to show it to someone else, I would want the look and feel to be more "Gnome regular".

 

This showed the beauty of the tool for quickly building a one-off appliance that I only needed once. Studio is far bigger than that, though.

 

Lookie

 

One interesting thing is that you can customize the look and feel. Add in your own EULA. Add in your own software packages beyond whatever is provided standard on the Distro. It is easy to see how this would be a nice thing to have for a trade show. And since it works with both OpenSUSE and SLES/SLED, you could conceivably build an appliance that could be converted / licensed for production usage.

 

Related to that is the fact that neither KDE nor Gnome is really the "premier" desktop for SUSE. This desktop war that isn't (a war) has gotten a fair amount of electronic ink recently, when SUSE announced that they were going to set the SUSE default desktop to KDE in the next release. The furor was apparently about which radio button comes pre-selected, and SUSE has said that both desktops are equally supported on the distro.

 

In Studio, you start with JeOS: Just enough OS. The kernel and some bits. You add X and Gnome or KDE on top of that if you need it. You can also pick a "Server" mode, which lines up more server-related packages, but no GUI (unless you click to add it).

 

Also nifty is that, on the very first screen, while you are picking the GUI, you can choose 32 or 64 bit right from the get-go. My appliance was 32 bit in order to keep things small and simple, but given how many things need to be tested under 64 bit, I see how this could be very handy.

 

Next, add some software: whatever you like from the Distro.

 

Getting Soft

 

The screens take you by the hand. The workflow is easy and intuitive. You go to the software tab, and here you can add repositories and packages. The updates repository is already there so you get the latest stuff, but if you need to test an older version, you can remove it.

 

The search dialog, in this day and age of Googling everything, is the easiest way to find and install more packages. Pre-reqs, co-reqs, and so forth are added automatically, in a very apt-get kind of way. A panel on the left of the web page tells you how much space the image will take and how many packages you have.

 

Because the starting point is JeOS, if you want OpenOffice, you have to add it. If you want Firefox, you have to add it. Many things that one might take for granted as being present in a regular Distro download are not there by default. Easy to add. Easy to update.

 

Being Templative

 

All that is really happening is that a template for a system is being built. Even after you build and download your appliance, and SUSE cleans up the disk space of the appliance image, the template stays, and can be updated and changed at any time. Forget Firefox? Oops. Just go back and add it, and build the image again. As Sookie Stackhouse says: "Easy Peasy".

 

The secret sauce here is that under the covers, this is all KIWI based.
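For the command-line inclined, the same kind of image can be built locally with KIWI itself. What follows is only a rough sketch of the classic two-step kiwi workflow from that era; the description path and output directory are placeholders, and the exact options varied between KIWI versions:

# Step 1: assemble an unpacked root tree from an appliance description (config.xml plus overlay files)
kiwi --prepare /path/to/my-appliance-description --root /tmp/my-appliance-root

# Step 2: turn that root tree into a bootable image; the image type (LiveCD ISO, VMware disk,
# USB/OEM disk, Xen disk) is taken from the <type> element in the description's config.xml
kiwi --create /tmp/my-appliance-root -d /tmp/my-appliance-image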

 

More than Just a Pretty Package

 

Studio then asks you configuration questions. Things like whether you want assigned IP addresses or DHCP. Firewall Y/N. Color. Runlevel. Whether you want to configure MySQL (the only application it appears able to pre-configure at this time, but maybe I just did not load any others that it can deal with into my test image).

 

Once you have it all tweaked out, the next dialog lets you add files. Need to put some PDFs in the home directory? A test database? Here is your chance. I did not do it, but it appears to be a simple, browser-style upload.

 

Build it. Download it. Test it. Tweak it. All very clean and easy.

 

LiveCDs are not the only supported output format, either. USB / hard disk images, VMware virtual disks, and Xen virtual disks are also options.

 

What is Not in the Studio

 

For all its beauty and ease of use, Studio has some drawbacks, at least for our use.

 

  1. No mainframe SUSE image support. The packages are all x86 and x86-64.
    1. We have *lots* of SUSE on the mainframe. SUSE has something like 80% of the mainframe Linux market at this writing. It would be nice to have....
  2. No OS support other than SUSE.
    1. Sure: one would expect that. But the tool is so easy and so nice that one wishes it could be used for *everything*.
    2. In our heterogeneous world, we would like to standardize our OS build procedures as much as possible. It is not clear to me that building such a customized version of SUSE would be a good idea, since what we support with our products is the standard versions of the Distros.

 

For free though, this is a great tool, and handy to keep in the SUSE Linux System Programmer's toolbox.



I mentioned in my Enterprise Desktops: Linux, OS.X, and Win7 post that I never expected to see OS.X pass Linux in the race to MS Exchange compatibility.

 

OS.X 10.6, codenamed "Snow Leopard", got there first.

 

As a Linux maven, this has been a hard loss to accept, but as I also have a Mac, it has been an easy loss to accept... Yes: I am feeling very split-brain about it all.

 

Just to be sure, I loaded up Ubuntu 9.10 Alpha 4, and updated to the very bleeding edge, to see if Gnome 2.28 / Evo 2.28 and its built-in MAPI support was going to catch up, or even be close. It has not. It is not even close yet. When I try to enter the server name or IP address in the setup dialog, it just crashes, and it does not even ask if I want to report the problem. It's an Alpha, so I can not really criticize it. I was just hoping. I was just looking for a glimmer of MS Exchange 2007 interoperability light.
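For anyone who wants to repeat the experiment, the update-and-install steps are nothing exotic. A quick sketch, assuming the 9.10 alpha repositories are already enabled and using the package names as they appear in the Ubuntu archives:

sudo apt-get update                            # refresh the 9.10 alpha package lists
sudo apt-get dist-upgrade                      # pull everything up to the current bleeding edge
sudo apt-get install evolution evolution-mapi  # Evolution plus the MAPI plugin
dpkg -l | grep -i mapi                         # confirm which evolution-mapi / libmapi versions landed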

 

To be even more sure, I loaded up SUSE Linux Enterprise Desktop 11 (SLED 11) and applied all the maintenance. I can enter the MS Exchange server by name rather than address, but the GAL (Global Address List) does not work, and calendaring hangs. I am told some have working calendars, so this does appear to be variable, but it does not work with my calendar, as built up over the years, so I assume it will not work for others either.

 

I also built a SLED 11 appliance with SUSE Studio (very cool) and had the same results.

 

Last try: I downloaded OpenSUSE 11.2 Milestone 6 and installed it, but that does not have MAPI in it at all yet.

 

OpenSUSE 11.2 and the GA of Ubuntu 9.10 are still months away, and I have no idea if full MAPI is going to make it even then. The forums I watch about the subject have been very quiet about MAPI status. The Wiki has:

 

 

But the last updates there are severely out of date. I scoured the forums, and Googled with fervent hope, but at the end of the day, OS.X was there with fully functional MS Exchange support, and Linux is not there yet.

 

Nope. This round goes to OS.X. That is not to say that the support for Exchange in OS.X is perfect yet. I found a bug with scheduling meetings this morning. I have not seen any public discussion of this problem yet either, but then 10.6 is brand new, so there may not have been time. It appears to be an issue with the Global Address List (GAL) looking up the name.

 

I am also having another problem, but this appears to be a MS bug. The 'affinity server' is, after 3 days of steady use, suddenly rejecting my password. It is my password though, and I can not seem to convince the affinity server that it is OK. Whatever this little issue is, it locks out my Mac from email, but Linux (using IMAP) and Win7 (using whatever RPC's and MAPI bits Outlook 2007 uses) are both still able to access the Inbox.

 

There is an easy "work around" though: look them up in the address book, and then drag and drop them onto the appointment. In retrospect this is probably what Apple thought people would do anyway, rather than trying to do direct adds in the meeting itself. It's kind of funny: the meeting invite is sent the second the person is dropped onto the meeting, rather than when the edit of the meeting is finished. But it works, and very well.

 

All of this does not even count the fact that MS will release Outlook for the Mac too, so that there will be two ways to access the Exchange server on a Mac. Outlook does not arrive till the end of 2010 though, so the built-in MS Exchange 2007 support in OS.X will have plenty of time to mature and have a great deal of uptake.

 

The reason that this all works is probably that Apple did not take the MAPI/RPC route with 10.6. They are using Web-based APIs. I traced out a conversation with MS Exchange just to verify this was true. In this regard it seems like the MS Exchange support in 10.6 is a bit like what the Exchange Connector support used to be in Evolution... except that was WebDAV-based, and MS Exchange 2007 dropped WebDAV in favor of these new APIs.
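Repeating that trace is easy enough. A minimal sketch, assuming a Mac whose network interface is en0 and a placeholder Exchange host name; since the Exchange Web Services traffic normally rides over HTTPS, this mostly shows which host and port Mail is talking to rather than the payload itself:

# Watch the conversation between Mail.app and the Exchange server
sudo tcpdump -i en0 -n host exchange.example.com and port 443

# If the autodiscover / web services endpoints happen to answer over plain HTTP in your shop,
# -A will show the requests in ASCII (not typical, but handy in a lab)
sudo tcpdump -i en0 -n -A -s 0 host exchange.example.com and port 80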

 

This is also why 10.6 only supports MS Exchange 2007 and not 2003 and earlier. When MAPI / RPC support is finally fully working in Linux / Evolution, it will have that over 10.6: MAPI / RPC means that Evolution will more than likely be able to talk to any version of MS Exchange all the way back to 5.5. But then Outlook will arrive in the Mac stack at the end of 2010, and probably negate that advantage, unless MS releases a Web-API-only version of Outlook. They might... you never know.

 

The Mac I am using for all this is a 3.5 year old unit, and 10.6 has also had the side effect of making the unit feel like it has had a new processor installed. The system has a 2.1 GHz Core processor (not Core 2) and 2 GB of RAM, and while it has never felt slow, it now "feels" every bit as fast as my Macbook with 4 GB of RAM and a 2.4 GHz Core 2 processor. I used the word "feels" there very intentionally, since I have not done actual objective measurements. Still, Safari seems to snap open, Filezilla seems to transfer things with great speed, and so on. The mail.app is quick, and the interface clean. Emails are sent quickly.

 

Does all of this mean the Mac is now "Enterprise ready"?

 

I have read this question over and over in the trades, along with endless (and endlessly vapid, IMHO) 10.6 / Win7 "Shootouts" and "Death Matches" and other similar cruft.

 

The answer is of course "Yes". Unless it is "No" in your shop.

 

MS Exchange is at something like 50% market share in the email server space, so having this support is critical *if* you are in a place that uses MS Exchange. If you are in a place that uses some other email server, or maybe has it SaaS'ed out to Google Apps or something, then you were already ready to use a Mac in the Enterprise. Whether or not you do is probably more about the size of your organization, the enlightenment of your IT department, and so forth. I was talking to one person recently whose IT department had a very cool hardware standard for their laptops: they gave folks a budget and they bought whatever they wanted to schlep around. If they bought a Windows-based unit, it had to be locked down with a corporate software stack, but OS.X or Linux were not nearly as restricted.

 

Right after I was told about this, I got curious about what I could buy for their stated budget. I have done this a couple of times in the past, but I wanted to be sure the numbers had not changed much. According to a couple of vendors' online configurators, I could get a Mac for about the same price when configured the same way. And I got the Macbook unibody to boot. To be sure, I could not buy a 500 dollar Mac laptop or anything: I was comparing 13.3 inch screened, 1033 MHz bussed, fast, large-disked, corporate units only. Combine this with what, at least for me, has been a high level of reliability / durability / schlep-ability, and I can see why some would want to bring their Macbooks into their office settings, rather than their normal habitats like graphics studios and print shops and Hollywood offices and other parts of the creative world.

 

In the very strict confines of an MS-infrastructure-only shop, Macs were historically harder to use: same as Linux. Also like Linux, Macs now have coping mechanisms. Examples:

 

  • Office Apps:
    • OpenOffice (I have had NeoOffice for years): I just loaded up 3.1.1, and it has had no problems with MS-formatted documents
    • iWork:
      • Pages opens MS formatted stuff as well, and usually with high fidelity.
      • Ditto Keynote for PowerPoint.
      • Numbers: I have had slightly less luck with Numbers. The problem is, as always, macros, although it also does not like outlined and sorted spreadsheets. Numbers is the new kid on the iWork block, and it is a great spreadsheet on its own: it is just not fully MS compatible. Yet.
  • Browsers:
    • Firefox
    • Opera
    • Chrome
    • Seamonkey
      • I like the Composer HTML editor. NVU stopped at 1.0 and its child KompoZer often goes stale (although I see some movement over in KompoZer, and I am using both Composer and KompoZer on this post on 10.6, to see what is what. KompoZer is buggy and feature-full, and Composer is solid and feature-few. Sigh.)
    • And of course, Safari 4.

 

... and so forth: OS.X has benefited greatly from the Open Source world, to be sure.

 

And Of Course, with Web 2.0+ All This Matters Less Anyway

 

As the screams around the Internet reverberate every time Gmail has a multi-minute outage, it is clear that a huge part of the world now uses online infrastructure rather than dedicated infrastructure installed on the computer or in a personal data center. Out there in Cloudland, you need a computer to access the cloud, and it matters not if it is a Mac, Linux (or some variant / embedded version of it), BSD, Solaris, AIX, HP-UX, or something else. All that matters is whether you have a good standards-compliant browser available for your platform. That was the idea behind the Netbook, and my Dell Mini-9 came with a 2 GB SSD hard drive: enough to run Ubuntu and a browser, and it works extremely well.

 

The more standard (as in Open Standards) things are, the less the client platform matters. The trend is that people using one platform will be able to communicate with those on all the other platforms, and never know if they are talking to someone like them or not like them, computer-choice-wise.

 

That is good for Linux.

 

Or, looked at another way: I can tweet from anywhere. And anything. Change "tweet" to be whatever you need it to be.



In a fairly early post I did at "TalkBMC", I wrote about One Laptop Per Child (OLPC) and the possible future consequences of such a project. I called the post "The Linux Inflection Point". Even though I posted that on April 13th, 2006 (a Thursday....), I think its main points hold up fairly well.

 

What has not quite come to pass, and what OLPC was hoping for, is that their little XO-1 would be 100.00 USD or less by now, and that therefore it would be more widely adopted than it is. They had the same problem predicting the future as all would-be Cassandras: the future not only marches to the beat of its own drummer, it has eddies and counter-currents that make it utterly impossible to predict even with the best information. Who would have thought that 40 years after we put two men on the moon and safely returned them, we would barely be in space at all anymore?

 

OLPC's problems are many, and exactly what they are is a point of much debate and opinion. Some think that, more than anything else, they ran afoul of not being ready to sell the units to individuals rather than to governments. Others think the unit was too threatening a technology to be allowed to succeed.

 

OLPC Unintended Consequences

 

The little AMD Geode-chipped, Linux-based XO-1 was surely, especially back then, a counter-cultural device. Of the many stories around its creation, one is about the falling out with Intel over the CPU. It makes sense that, as a hardware reference platform designed to be as inexpensive and durable as was technologically possible, the XO-1 did not need many different motherboards and competing chipsets. While Linux would not really care that much one way or the other, the underlying design would get more expensive, and they were having enough trouble getting down to the 100 USD price point. When I bought mine during the first Give One Get One program, it came to 188 or 198 USD per unit... nearly 400.00 for the two of them. The current G1G1 program, being run at Amazon.com, has them for 199.00 today: three years, and the price has not budged.

 

The Intel Classmate, Intel's answer to the XO-1, is also around that price point: when researching it for this article, the range was 200.00-549.00 USD, depending on features. None of these hit the 100.00 USD per unit that was the hoped-for design goal, based on the prediction that, as we moved forward in time and various sub-components became more and more commodity-priced, the total would be nearing 100.00. That is not what happened. We took a left turn. We got "Netbooks" instead.

 

Are Netbooks Notebooks?

 

Short answer: yes. But it is a silly question. So are Laptops. The "L" in OLPC is "Laptop" after all... and the little XO-1 was arguably the first "netbook".

 

Microsoft found itself in a very unhappy place when the OLPC, the Classmate, and then later the wave of Netbooks came flooding out. The same ideas and tech and OS behind the OLPC and the Intel Classmate were making a new class of computers called "Netbooks". Aside: the Classmate has a Windows option, but also has several versions of Linux available for it.

 

One of the funniest recent wars of words was over the label "Netbook". Microsoft has had to revive Windows XP, and drop its price to around 10 USD per unit, to be able to get installed on these inexpensive "Netbooks". In the process, Microsoft has been telling all who will listen that a Netbook is just a small Notebook computer. At the core of this is more than semantics: it all comes back to which version of MS Windows one can legally run on the Netbook. MS puts limits around the amount of RAM and various other parts of the computer in order for it to qualify for their special upcoming Windows 7 "starter edition". The only way MS can kill XP is to have a Netbook OS. These limits will not stand. They can not. Already MS had to back off on their plan to artificially limit the number of running processes on the Netbook edition because of the howls of protest, not to mention threats to just put Linux back on as the primary OS.

 

Now MS faces Android and Chrome OS, both Linux-based OSes that come from the company that, more than any other, made the idea of the "Net" part of the Netbook possible: all the Netbook needs is enough screen, enough RAM, and a big enough keyboard to get to the Internet. From there on, the 'Net-based apps like Google Docs, Gmail, etc. are all you need to get your job done, whatever it may be. IE's market share continues to fall, and in the newest battle of the browsers we are seeing that the key is how well one can run a Net-based application. That means being fast at Javascript, which means Chrome or Firefox or Seamonkey. The IE-only ActiveX web pages of the world are becoming fewer and fewer... and that is a good thing for all of us.

 

So, while of course a Netbook is just a small Notebook, and there are a lot of us that just call them big and little laptops, the Netbook is not going to stay pinned in that category any more than MS was able to hold to limiting processes. OLPC bet that commoditization would continue to drive the price point down, but what we have seen in the last three years is that the bottom arrived at 200 USD, and instead the capabilities at that price point increased. A case in point is the much-loved Dell Mini-9 Netbook. When it was introduced, there were very few SSD disk options in terms of size (and they only use SSD "disks"), and all were small. Mine was a 2GB SSD unit that I paid (you guessed it) 200.00 USD for. I have since doubled the RAM to 2GB, and increased the SSD to 32GB, with an SSD unit that runs 4 times faster. I had a choice of a fairly affordable 64 GB unit and a less affordable 128 GB SSD. Either way, all of these prices had fallen, and the speed had increased, if anything, faster than Moore's Law would have predicted.

 

200.00 seems like a pretty hard wall to get through right now, but what we are getting for that money keeps getting better and better. And I saw our local Microcenter is now selling Acer 15 inch, full-size laptops for 299.00... with more screen, CPU, memory, and disk capacity than a Netbook. Not as portable, to be sure, but still.

 

32 GB SSD on a Netbook?

 

The first thing many folks did with a netbook, of course, was to turn it back into a computing device like what they already had in some other size. I installed Linux Mint on my Acer Aspire One, and loaded it up with Firefox and Chrome, but also OpenOffice and Seamonkey (for the offline HTML editor) and so forth. The problem now is the same as the problem three years ago: the Internet is not everywhere here yet. A Netbook has to be able to work offline to be useful. Which means enough local memory and storage to run applications.

 

Apple learned this with the iPhone quickly enough. People saw the device, and the first thing they wanted was not to run web-based apps, but locally based ones. Apple being Apple then created the App Store, and had 1.5 billion app downloads in just over a year!

 

The iPhone is more or less 100% Internet connected, and still people wanted local apps. Either people have trouble changing paradigms, or people don't yet trust the Internet. I'm in that latter group, although more from the point of view that I want what I want and I want it now. I don't want to take a chance that the Internet will not be available when I want to do something like, say for example, write a blog post.

 

And Then Came Apple

 

It would be silly to deny that the iPhone has been a real inflection point in the smart phone market. The iPhone, especially the new 3GS, is as much a sub-netbook form factor netbook as it is a phone. I tend to think of mine mostly as an email and web browser platform that can also make phone calls.

 

The rumors are running fast and deep right now that Apple is going to get into the Netbook category of computing devices, and that when they do, the price point is going to shift *up*, towards 700 or 800 USD rather than the current 250.00-400.00 (my estimate, based on shopping at Fry's and Microcenter). Being Apple, it is expected that they will do what they have done over and over: redefine the category. An Apple laptop does not actually cost more than any other on a feature-by-feature basis; it is just that they do not make a unit with the same specs as the lower-cost laptops.

 

The best guide to what an Apple netbook will be like is probably to look at how different an Apple iPhone was when compared to the other smart phones of just over two years ago. What Netbooks will look like two years from now can probably also be glimpsed by looking at the smart phones of today. The influence of the iPhone is everywhere, even if it has not yet been matched feature for feature.

 

That is just me acting Cassandra-ey. I need to be careful. Heinlein says that Cassandra did not get half the kicking around she deserved. :) To be clear though: I have no inside info here: I just think it likely there will be an Apple Netbook, and that it will look nothing like my Acer Aspire One or Dell Mini 9. That it will do and be things I want, and that I will get one because it will do and be things I never thought about till I saw it.

 

The Foggy Future

 

The future then is as foggy as ever. Web 2.0. Cloud Computing. Browser based apps. Throw in Apple. And Google. And Microsoft's inevitable counter moves. Lots of things will change, and the low end of the computer market is going to be a rich place to be.

 

I worry though that it will be mostly features and functions that the financially better-off countries can afford. OLPC may have started off a whole revolution that is still unfolding, but the *need* is still there for the world's kids to learn about technology and computers and to use modern learning tools to speed their educations along. To make it easier for teachers to teach. Only 600,000 XO-1's are out there right now, meaning that not only is the computer-to-child ratio still utterly wrong, but in fact they are few enough that they may not be getting to, or staying with, the children for whom they were intended.

 

The good news is that, despite its troubles, OLPC is still a going concern. The XO-1.5 is a simpler version of the original XO, with fewer parts and an even better wireless radio (my XO-1 already pulls in signal better than any other computer I have). That should map to an even higher level of sturdiness: something the XO-1 was no slouch at. Come 2010, the XO-2 should arrive, using less power (about 1 watt!) with a target price point of 75 USD. Meanwhile, even though it is more expensive, the Intel Classmate has been through three revisions in the same time that the XO-1 has been through, more or less, half a revision. I am sure that they, and all the other netbook makers, are going to have to respond to this new hardware and price point.

 

I hope that Amazon still has the G1G1 program for the XO-2. I'll be wanting a copy, and to send a copy to a child somewhere else on the planet that needs one.

 

Can I have Sugar With That?

 

One of the better bits of news, to me, came out of the challenges of building the hardware, the OS, and the user interface all at once. At the end of the day, this attempt to control so many aspects of the XO-1 strained the resources of the project, and it did not really leverage the Open Source community as it could have. That has been rectified.

 

Sugar is the simple-to-use, multi-cultural, kid-oriented user interface originally designed for the XO-1. It is now spun off, and is at sugarlabs.org. To be honest, I wish my XO-1 were running Gnome or some other more familiar X desktop: I realize that is because I use Linux nearly every day, and am familiar with the usage paradigm of computers as it has developed over the last 40 or so years. I have resisted the temptation to install something like Fedora 11, and am currently running the latest 8.2.1 release. It looks like I may have to make the jump to Fedora or Ubuntu in the near future though. OLPC is getting out of the OS business, letting the distros deal with the platform support. That is as it should be. Again, this is a far better way to leverage the Open Source community.

 

Soon I will have to choose not just a distro, but which user interface. Sugar will be one of those choices, but I may decide to move to something else, if for no other reason than curiosity. This *is* Linux. I can always put it back the other way.

 

Kids who have never seen a computer pick up an XO-1 and understand Sugar immediately. With it spun off, not only will more people find it easier to participate in its development, more platforms other than the XO-1 will be able to use it. In fact, I count 8 Linux distros, plus documented ways to use it on OS.X and even MS Windows.

 

I really love this. Now the goal of helping the children of the world is less tied to the politics and maneuverings, the technologies and the missteps of the companies/foundations of the world. At the same time, the disruptive change that was the XO-1 is still there, still innovating.

Of particular interest to me is the idea that the XO line of computers is meant not to be speedy or feature-rich, but to be a rugged platform that can ingest power from a wide variety of sources and last for years; ultimately they are not really about the computer itself but about what they represent. How they can help kids.

 

It occurs to me that the space program might like these XO's. The way they are designed meets the design goals of traveling in space. The less power something uses, the better, when you are standing on Mars and the sun is farther away. The more reliable and rugged, the fewer spare parts you have to carry around. They would need different keyboards though....

 

It will be interesting to see how these efforts, among many other influences, will drive the consumer choices we have. How Apple and others will respond. I am not even going to try to predict it, other than to say Linux will be in there somewhere.

 

- The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Steve Carl

A quick look at Fedora 11

Posted by Steve Carl Jun 25, 2009


In last week's post I mentioned Fedora 11. Here is a slightly deeper dive.

 

The reason I was personally looking at Fedora 11 is that I wanted to see what the very latest MAPI setup in Linux looked like. Fedora is not only the most recent release of the major distros: Fedora also prides itself on being the most bleeding edge of the Distros. Fedora makes no pretense about being an enterprise desktop, or even useful as a daily use platform. Fedora is about being out on the edge and testing the latest and greatest... and that is before you even get to Rawhide (Fedora's development channel), where one is riding the most leading, ragged edge of Linux. Lean forward a bit (into Rawhide) and you can watch them writing the code that will flow into your Linux computer's veins.

 

In theory then, since Fedora just released, and since it is so edgy, if there is new Gnome / Evolution / MAPI stuff integrated, it should be here.

 

Not so much.

 

F11, Evolution, and MAPI

 

First the packages:

 

[steve@f11-steve ~]$ rpm -qa | grep -i evolution
evolution-2.26.2-1.fc11.i586
..
evolution-mapi-0.26.1-1.fc11.i586

 

My Ubuntu 9.04 daily driver looks like this:

 

steve@bock:~$ dpkg -l | grep -i mapi
ii  evolution-mapi     0.26.0.1-0ubuntu2     Evolution extension for MS Exchange 2007 ser
..
ii  libmapi0               1:0.8-2ubuntu1          Client library for the MAPI protocol
ii  libmapiadmin0     1:0.8-2ubuntu1          Administration client library for the MAPI (
..

 

Evolution is at 2.26.0 as well.

 

Point releases can mean a great deal sometimes, but in this case, I can see no difference between the MAPI functionality of F11's 2.26.2 (evolution-mapi is at 0.26.1...) and Ubuntu's 2.26.0. Both need to be given the server IP address rather than the name just to hook up to the Exchange server and load the Inbox. Neither can reply to email. Fedora can't even send email if you type in a valid address, in fact. No calendar. No address book (GAL).

 

I do not know what went into that last .2 that Fedora put on the Evolution / MAPI packages. It does not make MAPI viable yet, though.

 

Fedora is Not Meant to be an Enterprise Desktop

 

I think I should stop here and reiterate that Fedora is not an enterprise desktop. Fedora makes no claims that it is, and RedHat, the corporate sponsor of Fedora, will tell you that they take the technology developed and tested in Fedora and roll it into their RedHat line of products when and if it is supportable. No one would claim Linux MAPI support is ready for prime time, I think. If you want a simple thing like Flash or MP3 playback, you have to modify the Distro. It is easy to do, and resources like the Unofficial Fedora FAQ take you through it. It is not made more stable and more supportable that way though.

 

I mention this here because, even though I know better, I have a tendency to think of the big three Linux Distros as Ubuntu (and its kin like Mint), OpenSUSE, and Fedora. I might even be forgiven that because those are in fact the top three over at Distrowatch as I type this. The truth is that of those three, only Ubuntu can be considered for Enterprise use, since you can buy support for it from Canonical.

 

OpenSUSE works well enough, and integrates with enough management tools, that I think one could make a case for it as an Enterprise desktop, though Novell will most likely tell you that the real Enterprise desktop is their supported SLED.

 

I was looking at Fedora for pretty much the exact reason it exists: I wanted to do a technology evaluation of MAPI. Since I was there though, how about the rest of it? Anything interesting going on in Fedora 11?

 

Fedora 11

 

I downloaded the LiveCD from one of the install mirrors: I like to be sure that the OS looks like it will work on the system before I install it. That means that the installer is not exactly the same as the one used on the older-style boot-and-install disks. It is a simple process to get started once the LiveCD is booted: just click the install icon. Then the fight starts.

 

I suppose if I had let it just take over the boot disk and lay it out however it wanted, it might have gone better, but this system also has Vista Service Pack 2 on it, and I needed it to dual boot. The back half of the disk is set aside for Linux, and that should be all it needs. It took three installs before I had one that would stay installed. It kept forgetting the disk layouts. It would boot once, but if I installed a new kernel or something, it would not reboot, and a quick look at the disk showed that the partitions were not as I had set them. They were not gone either. Vista was never affected. But the system was not bootable.

 

All of it appears to revolve around the fact that the LiveCD uses Ext4 as the default file system for '/'... but the boot loader can not yet boot from an ext4 file system, so there had to be a special 200 MB '/boot' set up as Ext3. This meant that my standard dual boot config did not work. I could not do a Windows | / | swap | /home layout. Having more than four partitions means an extended partition or LVM. I tried an extended partition, but that appeared to fail, so I finally ended up with an LVM config:

 

[steve@f11-steve ~]$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_fed11steve-lv_root
                      10077504   3655204   6320024  37% /
/dev/sda2               198337     21964    166133  12% /boot
/dev/mapper/vg_fed11steve-LogVol02
                      81787616    407736  77225308   1% /home

 

/dev/sda1 is Vista still..
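For reference, a layout like the one above could be reproduced by hand with the standard LVM tools. This is only a sketch: it assumes the physical volume lands on /dev/sda3 (the partition after Vista and /boot), and the sizes only approximate what the installer chose:

pvcreate /dev/sda3                           # mark the partition as an LVM physical volume
vgcreate vg_fed11steve /dev/sda3             # volume group, named as the installer did
lvcreate -L 10G -n lv_root vg_fed11steve     # logical volume for '/'
lvcreate -L 80G -n LogVol02 vg_fed11steve    # logical volume for '/home'
mkfs.ext4 /dev/vg_fed11steve/lv_root         # ext4 for '/'
mkfs.ext4 /dev/vg_fed11steve/LogVol02        # ext4 for '/home'
mkfs.ext3 /dev/sda2                          # '/boot' stays ext3 so the boot loader can read it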

 

Once I was able to stay up past a simple reboot, I updated everything with "sudo yum update" (after I used "visudo" to add myself to the '/etc/sudoers' file of course).
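In concrete terms, that bit of post-install housekeeping looked roughly like this (the sudoers line shown is the standard form; substitute your own user name):

su -c 'visudo'            # as root, add a line like:  steve  ALL=(ALL)  ALL
sudo yum update           # then pull in all current Fedora 11 updates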

 

The Scenery

 

Once up and logged in, the view is that of a clean, simple Gnome 2.26 desktop. No messing around with adding the Mint or SUSE tweaks that make Gnome look more Windows-y. On this Dell 745 with its ATI card (lspci says: 01:00.0 VGA compatible controller: ATI Technologies Inc RV516 [Radeon X1300/X1550 Series]), desktop effects were not enabled by default. When enabled via System / Preferences / Appearance, it was a pretty reduced set of effects, and none of the ones I care about. Wobbly windows: meh.

 

I used Yum to install Compiz-control-center so I could get control over which effects were on. I wanted Expo and Windows Preview. I also loaded up something called OpenGL Desktop. When I try to use the latter, I get a nasty error about not being able to save my preferences, so while Compiz is up, it is not doing what I want it to:

 

[Screenshot: error dialog about not being able to save preferences]
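The Compiz pieces themselves came straight from the Fedora repositories. A rough sketch of the hunt; the package name for the CompizConfig Settings Manager (ccsm) is from memory and may differ on your release:

yum search compiz              # see what compiz-related packages Fedora 11 offers
sudo yum install compiz ccsm   # compiz itself plus the CompizConfig Settings Manager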

 

It has been a while since I had loaded up Thunderbird. Since Evolution was no more useful than what I had on Ubuntu already, I decided to see how Thunderbird had changed. F11 ships:

 

[steve@f11-steve ~]$ rpm -qa | grep -i thunderbird
thunderbird-lightning-1.0-0.3.20090302hg.fc11.i586
thunderbird-3.0-2.3.beta2.fc11.i586

 

I added Lightning to get a calendar going. I was sort of sorry, as it would not let me dismiss any alarms for meetings. For fun, I installed the same on Ubuntu, and it worked fine over there, so I assume it was because F11 was shipping the beta, and this was a bug that had not been dealt with yet. One of what was turning out to be the many bugs not dealt with yet.

 

I used to live in Fedora. I loved it because it taught me so very much about Linux. Great forums and general information on the Internet, and because it is totally open source, everything is there to see. I must be getting old, because these days, after using Ubuntu and Mint, Fedora's rawness is something I have to remind myself is a more or less intentional act.

 

Interesting Fedora 11 Happenings

 

There are two Fedora efforts under way that have my interest. One is that there is going to be a Fedora 11 spin against the mainframe. Here was the recent announcement about this on the Linux-390 list:

 

Hello,

The Fedora s390x team is pleased to announce a first preview of Fedora 11 for s390x
in form of a prebuilt hercules image and as a tarball which can be unpacked on a
free DASD of your z9 or z10. We currently have ~11600 binary packages of Fedora
11/s390x and are working on getting real boot images.

Hercules images with instructions can be downloaded from
http://secondary.fedoraproject.org/pub/alt/spins/S390/

Individual packages are available at
  https://mirrors.fedoraproject.org/mirrorlist?repo=rawhide&arch=s390x

More info will be added in the next few days at
  https://fedoraproject.org/wiki/Architectures/s390x

If you're interested, please join our mailing list at
  https://admin.fedoraproject.org/mailman/listinfo/fedora-s390x
or our IRC channel #fedora-s390x on freenode.net


Regards

     Karsten Hopp, s390x secondary arch maintainer <fedora-s390x@lists.fedoraproject.org>

 

As a mainframer (if not a currently active one) I thought that was very, very cool. The other thing I found interesting, as an owner of OLPC's XO-1, is that there is now a Fedora 11 install for it:

 

http://dev.laptop.org/~cjb/rawhide-xo/

 

My XO-1 is in a practically unusable state at the moment from all my experimenting with it, so this looks like a way to get it back into a functional state. Not only that, but to move to a Gnome desktop from Sugar. I get why Sugar exists, and for kids that have never used a computer before I think it is brilliant. It drives me nuts. I sense Fedora 11 in my XO-1's future....



<I’m back! Had to go move an R&D data center from one place to another. Took a while...>

 

Read through any of my recent posts about Linux and MAPI and a picture of hope should develop: that in the very near future, even in a shop that runs Microsoft infrastructure like MS Exchange, there will soon be new choices.

 

This does not even address the idea that one can feasibly use Google Mail and Calendar for everything that MS Exchange does now: I have a friend who, in setting up a new shop, went that way rather than choosing to build their own email infrastructure or go with a more traditional outsourced email solution like hosted Lotus Notes or MS Exchange.

 

It is also not really my way to criticize companies or products here. I do not think using a forum like this is appropriate for that, and I think constructive comments are more useful. I have stated over the years my reasons for preferring Linux, and if you go far enough back in my posts, I wrote a series that is the true core of it: heterogeneity. In summary, a computer ecosystem, like desktop computers, is more vulnerable to attack when it is homogeneous, and I saw that demonstrated during the Code Red and Nimda virus outbreaks, when only MS Windows computers were affected but everything else was working fine... and in fact I was using Linux to build software disks full of stuff for cleaning the viruses off the MS Windows computers.

 

This is not to say that Linux or OS.X can not get a computer worm or virus. Anything created by people can be hacked by people. Cross-platform attacks are an order of magnitude harder to create though. Shoot: These days most malware targets particular releases of MS Windows, such that Windows XP might be affected, but that same thing attacking Windows 2000 or NT fails.

 

Barriers Dropping

 

The big barrier to entry for using either OS.X or Linux as an Enterprise desktop has always been MS Exchange and its closed / undocumented protocols. As I have written here, the EU has changed that by forcing Microsoft (among other things) to document how MS Exchange “talks” to Outlook via MAPI and something like 85 other Remote Procedure Calls (RPC’s). When I say MAPI hereafter, I am including all the requisite interactions between server and client, even though it is not technically accurate to just call it MAPI.

 

This is of course different than using POP or IMAP protocols. MS Exchange supports them, but these protocols are for email only. Contacts, Tasks, and Calendars are “safely” locked away on the MS Exchange server where only those that speak MAPI and the related RPC’s can have full access.

 

Rather than having to slowly read wire traces and figure out how it all works (the way Samba was created: it can be done), there is, for the first time, documentation about how to interact with MS Exchange. I have written here about work under way in Linux to take advantage of those protocols. Now it has been revealed at Apple's Worldwide Developers Conference that OS.X 10.6, shipping in September of 2009, will also have MS Exchange compatibility. Around that same time, Windows 7 will go GA.

 

Windows Vista Service Pack 3

 

I have tested Windows 7 quite a bit: in my role as a senior technologist, I can not really have a favorite platform. One of the secret sauces of BMC is that we support a wide range of platforms. Oops... I probably should not have let that slip.

 

As a technologist, I also have and use Vista and XP and so forth. I have to say that I do not understand the positive buzz for Windows 7 relative to Vista. I also do not understand why Vista was treated so poorly. All of it seems to lose sight of history. Windows XP was a suboptimal place to be until Service Pack 2 came out. Ditto Windows 2000 and Windows NT and Windows 98. Vista was no better and no worse out of the gate than those. It had problems, but my Vista Service Pack 2 install is now pretty stable, and does not have the speed problems that Vista and Vista SP1 had. Throw another three years of development on top of Vista, and you arrive at Vista Service Pack Three, A.K.A. Windows 7. We have been here before. Windows 98 Second Edition anyone?

 

Here is another thing I do not understand: I recently read one pundit saying that Windows 7 and OS.X were now just two flavors of the same user interface. Huh? I use OS.X all the time. I'm writing this post with my Macbook. I do not see the resemblance. By that logic all dogs and cats and horses and cows are just various looks on the exact same animal.

 

Just because OS.X and Win7 both have compositing video interfaces does not make them the same, any more than Compiz on Linux makes it the same thing as Windows or OS.X. Sure, you can theme up Linux or Windows to make them look a lot like OS.X, but they are not the same. OS.X and Linux are more alike, given OS.X's BSD roots, but there are still enough differences that no theme will cover them up.

 

Nor is it hard to jump back and forth between Linux, OS.X, and MS Windows. When you are looking at a composited GUI, and using a keyboard and mouse to interact, there are bound to be similarities in the usage paradigm. There is always some adapting: I have to get used to my older Macbook Pro not having all the trackpad gestures that my Macbook has for example.

 

Therein lies the point of confusion, I believe. The way we humans interact with computers follows a fairly simple usage paradigm. Till we have voice control or mind / computer interfaces, all computer desktops follow from the current technology: keyboards, pointing devices, and displays. Regardless of platform, people want to write code in languages they know and love: Perl, Java, C++, Python, and so forth. All of this leads by necessity to there being some similarity in how one interacts with a computer platform, no matter which one it is.

 

Windows 7 is not a bad place to spend time. It runs OpenOffice, Firefox and Chrome well. The new super-command-prompt, A.K.A. Windows PowerShell, is more in line with what xterm/konsole/gnome-terminal have been for years. It would have been nice to just have bash....

 

Win7 with Aero is nice to look at. Some of the compositing eye candy now does useful things in addition to just being chrome. Its hardware requirements are in reach of most current gear, although, like Vista before it, forget about running it on something more than about three or four years old. Not gonna work well. It is possible Win7 is getting good press in part because the hardware of three additional years finally caught up to Aero and Vista. That, and the UAC prompt has been tamed a bit.

 

Win7 without Aero (in the case of something like a low-end video card or a virtual machine) is pretty much like XP, but with all the menus jumbled about in some way that might make sense to someone someplace; I just use the search bar to find things anymore. The hardware activation stuff is a major pain: change the video RAM, reactivate the Win7 guest.

 

Key for me after Nimda and Code Red is that after years of work (http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2002/01/17/BU102125.DTL), Win 7 is less vulnerable to black hat attack than any of the predecessor versions of MS Windows.

 

OS.X 10.6

 

The choice of what makes a new release versus what makes a new point release is often very arbitrary. OS.X 10.6 and Windows 7 have a great deal in common on that point. The new OS.X, according to everything we have read, is going to be mostly focused on internal differences. Full 64 bit exploitation. New dispatcher called “Grand Central” that will allow OS.X to work better on multi-core systems (and one would think, something that the server version will need more than the desktop edition). Big focus on security loopholes. Not much new in the user interface.

 

Just as Win7 could be thought of as Vista SP3, OS.X 10.6 could be considered more of a point release of 10.5. One OS.X pundit thought that was in fact the entire point of the new release's code name: Snow Leopard follows Leopard. The way that the 10.6 release is priced also seems to echo that: 29 USD rather than 129 USD.

 

Except for the part about MS Exchange. The new 10.6 version will run as a native client of MS Exchange. Email, calendaring, etc. from OS.X with no third party software. If that works, then that is huge. That means my main office desktop is going to be OS.X or Linux. No more Windows virtual machines to get to my Calendar. No more webmail calendar interface that is intentionally low-function to try and get people to use IE. OS.X as a native MS Exchange client is enough for me to call it a new release. It is enough that I will buy it on day one. The fact that it will make my existing hardware feel like it is running faster will be a bonus.

 

Linux

 

As I write about here in “Adventures” quite a bit, MS Exchange client function is also coming to Linux. Very, very slowly. What I never expected was to see OS.X pass Linux in something like this while Linux stood still: Linux has always been the OS platform that has worked the hardest to get along with everyone else. On Linux I can load up HFS drivers so I can read and write to non-journaled Mac disks. I can load up Macutils so I can format and repair Mac disks. I can load up Samba and NTFS and get along with MS Windows disks and Active Directory. Linux is always the kid trying hard to please everyone. Yet, as I write this, the MAPI functionality I have in Linux right now is more or less the same as what I had 6 months ago. It is there, but it is not usable. I am trying to load up Fedora 11 to see if that will change anything: Ubuntu 9.04, Mint 7, and OpenSUSE 11.1 all work at more or less the same level as far as MS Exchange access is concerned. I can read email. I can send email as long as I type in the email address. I can not reply to email because all the email addresses in the RFC822 headers are munged. No server-side group calendaring. No server-side contacts. Yet.

 

I use the word “try” about Fedora there because, unlike OpenSUSE or Ubuntu on the exact same system, Fedora does not want to install at all. It does not like the disk format. ‘/boot’ has to be ext3 but ‘/’ has to be ext4. It really, really wants to install everything in logical volumes, not hard partitions. I will get it installed, sooner or later, but it sure feels like a step back in time. Fedora prides itself on being the most bleeding edge Distro going, and that is why I hope the MAPI functionality is better than what I have seen before in Ubuntu or OpenSUSE, but its installer is not up to the other distros’ standards. A friend of mine described it as “fragile”, and now I see what he means. OpenSUSE 11.1, looking at the same system, picks a disk layout exactly like I would have done manually.

 

Like Fedora going in eventually, MS Exchange MAPI support will be in Linux eventually. When it works, you’ll know it here! My guess is that OS.X will beat it by at least 6 months. I could be wrong. Knowing OS.X is getting ready to pass them might set a few coding fires.

 

One last thing on this point: I have said it before in other posts, but it bears repeating here. This is all about MAPI. If you have Exchange 2000 or 2003, you are good to go on Linux. You still have the WebDAV access mode that MS eliminated in Exchange 2007, so the “Evolution Connector” plug-in still works, and you still have everything. Email, calendars, contacts, task lists, out of office settings... the works.

 

MS Exchange 2010

 

As if to acknowledge that choice of desktop client has entered the workplace (or perhaps that eliminating WebDAV came off as a bit surly in the marketplace), one of the new features of MS Exchange 2010 is going to be fully enabling the web client so that, like Google Mail, full feature functionality is available to everyone, regardless of platform. One will not have to run IE to see advanced/more fully featured webmail functions.

 

MS’s Outlook Webmail will finally be Web 2.0-ish. Reportedly. I have not had a chance to try it yet...

 

If it does work as advertised, and I can use Firefox or Safari or Opera to access a fully featured Webmail, then that will probably go further toward cementing MS Exchange’s market share in the data center than any of the exclusionary things that have preceded it.

 

At the same time, the ability to have diversity on the desktop will go a long way to containing future computer worms and viruses.



-by Steve Carl, Senior Technologist, R&D Support

 

I last wrote about this topic on February 1st, 2009. Not much has changed in the last two months. Ubuntu 9.04 has raced towards GA (it ships tomorrow as I write this), and therein lies not just my hope, but the hope of many, many others.

 

Ubuntu 9.04 looks pretty solid in most ways. I have it running on a desktop, a laptop, and a netbook. I have been testing it daily since its Alpha 3 release. It is fast. It is stable. On my Acer Aspire One netbook, it runs very well in both classic and Ubuntu Netbook Remix (UNR) modes. The UNR USB boot image is terrific for testing what will work on any netbook without installing first.
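Writing the UNR image to a USB stick is a one-liner if you are comfortable with dd. A sketch only: the image file name is approximate, and /dev/sdX must be replaced with the USB stick's actual device, since getting that wrong will overwrite the wrong disk:

sudo dd if=ubuntu-9.04-netbook-remix-i386.img of=/dev/sdX bs=4M   # write the UNR image to the stick
sync                                                              # flush buffers before pulling the stick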

 

For all its goodness, if you are a Microsoft Exchange 2007 shop, this is probably not your production desktop. Yet.

 

The issue is not Ubuntu. The issue is MAPI. OpenSUSE 11.1 and Fedora 10/11 both have more or less the same version of the Evolution MAPI support that Ubuntu will make available at GA tomorrow. The problem is not in the distro: it is in the Evolution-MAPI plugin and the underlying OpenChange MAPI access code. Not that it is really that project's fault either: this is brand-new code that just is not fully baked yet.

 

In fact, when you think about it another way, it is amazing that MAPI is coming to Linux at all, even if it is not here yet in any useful sense. This one protocol, and its related RPC's have been hugely difficult for anyone to implement before now, for a raft of reasons. HP was getting close with OpenMail years ago, but allowed that work to be derailed.

 

For the office system, the motivation to get Ubuntu up and tested was to get to Gnome 2.26 and Evolution 2.26, because that is where MAPI support in Evolution is supposed to debut. Ubuntu does not ship the MAPI plugin on the install disk, but you can install "evolution-mapi" from the "Universe" repository (a quick sketch of that follows below). The bad news is that MAPI is not ready for prime time. See https://bugs.launchpad.net/bugs/338982 for details. As a measure of the interest in this feature in the Ubuntu community, here are the dups of that bug at this writing:

 

Bug #202287 Bug #333855 ,  Bug #337785 ,  Bug #340399 ,  Bug #340500 ,  Bug #341184 ,  Bug #342251 ,  Bug #342363 ,  Bug #344864 ,  Bug #345228 ,  Bug #345753 ,  Bug #346046 ,  Bug #346326 ,  Bug #347037 ,  Bug #348309 ,  Bug #348458 ,  Bug #348621 ,  Bug #349148 ,  Bug #351991 ,  Bug #352230 ,  Bug #352327 ,  Bug #352450 ,  Bug #353029 ,  Bug #353044 ,  Bug #353063 ,  Bug #353204 ,  Bug #353538 ,  Bug #354101 ,  Bug #356260 ,  Bug #356681 ,  Bug #357874 ,  Bug #357962 ,  Bug #358040 ,  Bug #358221 ,  Bug #360509 ,  Bug #361521 ,  Bug #361751

 

Lots of people want Linux MAPI working, it would appear. Evolution MAPI won't be ready when Ubuntu 9.04 GA's. Ubuntu is not going to hold up a release for a feature that is not in their base code. Expect the post-9.04 patch stream for Evolution and OpenChange to be fairly busy. And even when this bug gets fixed, there is still much missing functionality.
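For the record, getting the plugin onto a 9.04 system is simple, assuming the Universe repository is enabled in Software Sources:

sudo apt-get update                  # refresh package lists (Universe must be enabled)
sudo apt-get install evolution-mapi  # the MAPI plugin for Evolution 2.26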

 

Right now, if you enter your Exchange 2007 server's IP address, rather than its name (or its Cluster IP alias), you can get to the place where you can see your Exchange inbox from Evolution via MAPI. Again: that is no mean feat. In one sense it is a marvel to think about the fact that you are seeing your MS Exchange 2007 inbox via MAPI! However cool this is conceptually and historically, it is not compelling if you are looking to use Linux all day long as your main workstation. If nothing else, you can already see your inbox with IMAP (if it is enabled, and apparently some shops disable it by default for some odd reason). Worse, since MAPI has not yet implemented the Global Address List (GAL), the email addresses in the "from:" and "cc:" fields are often useless. Not an issue when you use IMAP.

 

Then there is the speed, which is not yet blazingly fast. Sure, MAPI and its related RPC's are chatty on the wire, but as it stands now, Evolution is slower than MS Outlook when running MAPI, and that, long term, will not fly.

 

The real point of running Evolution to access MS Exchange is calendar access: it is so one can replace MS Outlook with something else. As it stands today, with the IP address work-around, all one has is the Inbox and tasks. Click on an email with a calendar invite, and Evolution freezes. All of it. Even if you have a second account defined to a different server, you are locked out. Evo is now only standing up because it has been nailed there.

 

For now, if you are on MS Exchange 2007, it's IMAP or the Web for the Linux desktop. If your shop is MS Exchange 2003 or 2000, you can still use the Exchange-Connector to access the WebDAV protocol, and have full functionality from Evolution. Inbox, Calendar, tasks, Out-of-Office settings, etc.


The Web

 

The good news is that, while we are waiting for OpenChange and Evolution to get MAPI fully dialed in (I am guessing another 6-12 months), the new Outlook Web Access "Light" client is not too bad. I like it better than Exchange 2000 and 2003's. Much better in fact.

 

Exchange 2010 is also coming, and it looks like it will be a foot race to see if it arrives sooner than a working MAPI stack on Linux. The Web interface on 2010 is going to be high function enough that if MAPI never arrives on Linux, I may not care. Like so many other things I use, the Web clients are getting good enough that I do not need a local app anymore. See Gmail for details.

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.



-by Steve Carl, Senior Technologist, R&D Support

 

BMC is one of the largest VMware shops anywhere. We have nearly 9000 virtual machines running in our ESX server farms alone. Our growth trajectory will have us break 10,000 VMs before the end of the summer. That is just VMware, which is not the only virtual player in our shop. We are even bigger users of virtualization than that, with the granddaddy-of-all-virtualization VM on the mainframe, VirtualBox, Parallels, AIX LPARs, Sun LDOMs, HP's VSE (not to be confused with IBM's DOS/VSE...) and so forth.

 

Not all that long ago, our worldwide "real" server count for R&D was a large number: well north of 10,000 real, physical computers. BMC grew, more products came online (entire product categories even)... and the real hardware footprint has shrunk to about half what it was three years ago. Ditto the data center space. The current R&D DC move I am working on takes over 7,000 jam-packed square feet down to 5,000 square feet... and leaves room to absorb another 1,000 square foot lab later. In this one lab, we have leveraged virtualization to more than halve the number of real servers.

 

Converting older real, physical machines to the virtual world (P2V) is part of that virtual growth, but so are new requests for new environments. Think of the latter as "Real Server Avoidance". The impact is huge in terms of BMC becoming, among other things, a greener company. It is not just real coffee mugs in the kitchen (rather than Styrofoam cups) and recycling the Diet Dr Pepper cans (contrary to popular belief, not all software development is powered by Mountain Dew). We use less power than we did before. Much less power.

 

Conservatively speaking, if we used 100 fewer watts per virtualized server than for a real computer doing the same things, then that alone would be 900,000 watts! 900k watts here, 900k watts there, pretty soon we are talking real carbon footprint reduction. 100 watts is a very lowball figure: even new computers with high efficiency power supplies, like a Dell 1950, use well more than 200 watts at static, post-boot load. The 1950's power supply is max rated (Nameplate rating) at 670 watts. Depending on your local code, when planning a data center it is assumed that somewhere between 40% and 60% of the Nameplate rating is the wattage used once the server has settled down after booting.
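
Here is the back-of-the-envelope math, done the lazy sysadmin way at a shell prompt. The 9000 VM count and the 100 watt per server figure are my rough estimates from above, not measured numbers:

echo $(( 9000 * 100 ))      # 900000 watts saved at 100 watts per virtualized server
echo $(( 670 * 40 / 100 ))  # 268 watts: low end of the Nameplate derating for a Dell 1950
echo $(( 670 * 60 / 100 ))  # 402 watts: high end of the Nameplate derating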

 

At some point I'll probably develop a tighter number than 100 watts per server savings, but it will do for now. I'll have to go add up the real wattages of every server we decommissioned, and then add up the wattages of all the virtual servers, in order to get a better estimate, and that would take a while, given the number of servers we are talking about here!

 

I talked about this saving-power-with-virtualization topic a while back ("Virtually Greener"), and in that post I was focused only on what we had done in Houston. This is company wide, and clearly we have come pretty far down the road from where we were only 1.5 years ago. Here is what I noted about Houston's power savings back then:

 

"That means 80 Kilowatts or 80,000 Watts have "left the building". 80 KW reduction is 160 pounds of CO2 reduction each and every hour they are off (assuming Coal as the power feedstock). 3,840 pounds per day. 1,401,600 pounds per year. Half those numbers for natural gas as the power generation feedstock"

 

So, using those same numbers, and expanding the scope from Houston to all of the R&D data centers worldwide, we are now talking about 11.25 times those amounts: 1,800 pounds of CO2 an hour, 43,200 pounds of CO2 a day, and 15,768,000 pounds of CO2 a year that we are now *not* adding to our shared atmosphere. Remember that is *low* because of the estimate: the numbers are really better than that. Maybe twice as good even.
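
Scaled at the shell, with the same coal-feedstock assumption as the quote above:

echo "160 * 11.25" | bc              # 1800.00 pounds of CO2 per hour
echo "160 * 11.25 * 24" | bc         # 43200.00 pounds per day
echo "160 * 11.25 * 24 * 365" | bc   # 15768000.00 pounds per year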

 

Call me corny, but this makes me happy. I am visiting our corporate headquarters in Houston as I write this, and it is early in April. Just barely Spring. It is hot. I am glad we are doing what we can to not make it hotter.

 

P2C

 

With all this virtualization, and the addition of BladeLogic to our corporate tool chest, we have created quite a change in our internal R&D compute capabilities. We have a compute cloud. We have gone Physical to Cloud (P2C: TM). While

 

  • Virtualization is not a cloud, and
  • Provisioning is not a cloud, and
  • Image Management is not a cloud, and
  • Performance and Capacity Planning (PCP) is not a cloud, and
  • Configuration Management is not a cloud, and
  • Service Request Management is not a cloud ....

 

.... add all these things together, put them into service in fewer, more regionally consolidated data centers connected with point-to-point network clouds, and you pretty much have, by any definition of Cloud Computing, an internal Compute Cloud. One with more OS images than before, more capabilities than before, faster turnaround than ever, and that is using far less of our planet's shared resources.

 

In my list above I noted some of the common Cloud Computing building blocks. In particular, I think the key enablers for the Cloud concept are Virtualization and Provisioning. You could reasonably argue that neither is required: that it is only about having a computing resource available via the network, and I would not argue with you. That in fact has been an underlying theme of my last few posts. A good example of a Computing Cloud that is not virtualized is the recent information we just got about how Google designs their data centers. Fascinating stuff. No virtualization in sight, but clearly a Compute Cloud.

 

Virtualization and Provisioning are tools that make delivery faster. Make availability easier. In point of fact, you would not need many of the things on my list to build a cloud, as long as you were keeping the operations fairly small.

 

The bigger it gets (ignoring cases like Google where a single task scales beyond the size of a single computer), the more important each of those tools becomes, and if you are planning ahead, you will be ready with the tool set *before* you actually need it. Performance and Capacity Planning is a great example of a tool that gets more important as the virtual server farm grows. I was recently using our BPA / CME tool set to create standard configurations for our next set of server purchases, for example. I am ready with data from BPA to show that we need to put more memory into our configs than we have to date. When you are talking about a server farm with hundreds of servers, even if they replaced thousands of servers and you are already saving serious CO2 emissions and expense money, it is still a serious investment.

 

The other thing one has to be careful of when building one of these Compute Clouds is, of course, virtual server sprawl. When it is cheap and easy to deploy new vservers to meet new requirements, the tendency is to leave servers running till someone tells you they are not needed anymore. More often than not, no one will tell you that. Everyone is looking at the next project, not the last one. One does not want to undo the goodness of P2C by having way more server farm than current-plus-part-of-peak-plus-growth demands.

 

The Linux Connection


I have not felt particularly constrained recently to keep my blog just about Linux. Partly this is because as a technologist my role is much wider than just Linux. Partly it is because my biggest project recently has been designing, building, and getting ready to consolidate five R&D data centers down into one, smaller data center.

 

This does not mean "Adventures" will never have Linux stuff in it again. In fact, the next post I am planning is pretty much pure Linux, with an update about where MAPI is in Evolution.

 

The other part of it is just that Linux is not something people think about anymore and ask "Will it make it?". Red Hat's last quarter alone should be proof of that. Linux is ubiquitous. It's at the core of VMware. It's embedded in the lights-out management cards. It's in the netbooks, fast-boot BIOSes, and the SaaS bits of Cloud Computing. It is where virtualization is often first developed, and first deployed. It is the core of supercomputing, the "L" in the LAMP stack, which provides so much of the Internet. It is seriously challenging OS X in the smartphone market. It is making inroads in Real-Time, where VMS has been king for so long. No one ever asks me if BMC supports Linux anymore. They just assume we do.

 

It's everywhere, and now we are starting to just assume its presence. My wife, long a holdout because of her love of OS X, even runs Ubuntu 9.04 on her Dell Mini 9. It is everywhere, and in every thing. The question becomes not "Should we run this on Linux?" but "Is there any reason *not* to run this on Linux?"

 

At some point, "Adventures" is going to probably be, at least in part, about finding Linux in all the places it is hiding around us.

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Steve Carl

Cloudy Terminology

Posted by Steve Carl Apr 9, 2009


In my last three posts (Clouds in the Glass House, Clear to Partly Cloudy, and Convergence) I have spent a fair amount of time talking about the current latest-and-greatest computing paradigm: clouds. My position has been somewhat counter to the current fashion trends. If you have read the other posts, hopefully I have been clear on my central point that "Cloud Computing" is not a thing, but a concept or a paradigm. It is a way of thinking of a certain set of technologies being used, and not even in one particular way.

 

The more I think about "Cloud Computing" the more I think the term fits the concept: it is every bit as fuzzy, ill-defined, and nebulous as a real cloud. Ironically, unlike other trends of the past, and unlike a cloud, "Cloud Computing" is *not* vaporware either. It is real, and there are actual products behind it. That is because in many cases it is an old product or concept with the serial numbers filed off, and in others it is just a collection of products meant to be used in a "Cloudy" way. A great example here is provisioning / rapid provisioning. It was there before the cloud, but it is key to a cloud's basic concepts, at least as most people think about and discuss "Cloud Computing". Same thing with Software as a Service (SaaS), although it would be more accurate to think of "Cloud Computing" as delivering the platform for SaaS.

 

Thin Provisioning

 

I have to say that I dislike the term "Thin Provisioning" as it has been applied to storage, usually virtual storage. "Hey: has your provisioning lost weight?"

 

If:

 

  • "Bringing provisions" in the traditional sense was something like "I packed a lunch"  and ...
  • In the server sense is that "I bought and set up a server so you can use it"  and ...
  • "Rapid Provisioning" means "We used automation to rapidly configure or reconfigure the server to meet your current needs" ....

 

 

... Then I somehow have trouble with how that term leaped technical meanings into block storage. Seems like "Thin Provisioning" would be a better term for "You asked for something, so I gave you less".

 

Sure, in storage thin provisioning does kind of mean that: "You asked for 100GB, and I gave you 5GB with the ability to invisibly grow to 100GB should it turn out you really need it." But the OS thinks it has a 100GB LUN there, and will happily use it all, so the reality to the OS is that it has a 100GB LUN. End of story.

 

The term I like better is "Storage Overcommitment" (and yes, I know that term is usually used to discuss RAM in a virtual system) because that is the real win of Virtual Storage. In the real world, you can set up 100 LUNs, each with only 5GB in use at setup time. Because the LUN is virtualized, and only allocates blocks as they are used, the actual on-disk storage of such a setup would be 500GB plus a little overhead, even though the LUNs presented to the hosts total 10,000 GB. You can then provision this storage on a device with *more* than the 500GB, but way less than the 10,000 GB, and *overcommit* the storage.
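
The shell math for that example, just to make the overcommit ratio obvious:

echo $(( 100 * 100 ))      # 10000 GB presented to the hosts
echo $(( 100 * 5 ))        # 500 GB actually written to disk at setup time
echo $(( 10000 / 500 ))    # a 20 to 1 overcommit, before anyone grows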

 

You monitor and watch it of course. Some hosts will actually end up using more than their starting allocation, but in the real world there are many computers that are only using a tiny fraction of their built-in storage capacity: my mom has a 120GB disk in her iMac, and even with all her music and pictures she is not using more than 25% of that.

 

This does not even count the possibility in "Storage Overcommitment" (SAN / WAN storage overcommitment? DASD overcommitment? Block storage overcommitment?) of data deduplication. If all 100 of those hosts were the same OS, say Ubuntu 9.04, your storage could just keep pointers to the same data items. All of the common parts of the OS and applications therein would be stored once, with pointers, rather than a separate copy for each. Now you have the *first* host's 5GB of storage, but every host after that only has its unique elements stored: maybe 2 or 3 GB each. Pictures and music and /var and /etc mostly get separate storage, but the kernel and the .deb archives and whatnot are only stored once. If half the hosts are running Oracle, then the Oracle binaries are also only stored once, but the database is stored uniquely... unless it is the same database.... On and on....

 

It is this set of features that to me makes "Thin Provisioning" a very weak term, and not at all descriptive. Nothing can be done about it though: it is the common usage. I suppose some would take my term and think it means that the storage is clingy and needy and jealous. "Have you been seeing other SANs?!?! I thought we had a commitment!!!!"

 

What's in a Word?

 

I hit a terminology road bump like this recently. I was designing a new R&D data center, and when talking to the contractors about the total heat load of the room, I kept referring to the maximum rating of the power supplies for the computers. They were very confused until we hit upon the term "Nameplate". Thereafter I used that term, because there was no understanding that "maximum rating", "maximum output", UL rating, or any other terms like that were the same thing. I thought that was unique until I was talking to a different vendor about wattage, and they also were confused briefly till I hastily added the term "Nameplate" to the conversation. The funny thing was that it was not really a conceptual problem with the idea that a computer does not *use* the max rating of its power supply very often: we were in fact talking about what the local code was when it came to UPS, air conditioning, amperages required to feed racks, and so forth, and what the ratio of the max rating to the settled, post-boot rating is. (I did not say "static rating": that would have been confusing too.) All were clear on the concept, but had hung their understanding of the system on a term that had, when you think about it, no meaningful relationship to what we were discussing. "Nameplate". Nothing in that term screams wattage at me. In fact, when I think nameplate, I think the name on the front of the asset, and it may not even be a computer. Dodge Ram is a nameplate. Honda Fit is a nameplate. Has nothing to do with computers.

 

I guess this is why ITIL is so important. People have to speak the same language and use the same terms even if the terms are not conceptually accurate. I was listening in to an architect talk about designing a building once, and they kept talking about "Programming" a room. Really? The room runs programs? How cool is that? I'll bet the room's OS is Linux....

 

Next time: Virtualization update

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Steve Carl

Clouds in the Glass House

Posted by Steve Carl Mar 24, 2009


-by Steve Carl, Senior Technologist, R&D Support

 

I was out on vacation last week when Cisco announced their new Unified Computing System (UCS), but it appears in reading through the trades this week that it has created quite a stir. BMC was present at that party with a full line of tools ready to go.

 

There were many comparisons of UCS to the idea that this was a new mainframe. Appropriate as far as it goes. These are not the same technology, so direct comparison would be suspect if you got too technical about it. One example: the mainframe is still the king of the I/O hill, as near as I can see. The consolidation of the servers and the network into the chassis has obvious mainframe-like features. If you assume that the mainframe does not respond, then it is easy to see this solution growing to overcome the mainframe in a few generations. The mainframe won't stand still of course. But I digress...

 

As I looked over the design and technical specs for UCS that are currently publicly available, I was struck by what such a device means to the concept of "Cloud Computing". My last couple of posts ("Convergence" and "Clear to Partly Cloudy") established my thinking on the current trend of the data center back towards a centralized, glass house style of data center. They also laid out my position, in so many words, that "Cloud Computing" is a concept, not a specific technology, at least not yet.

 

"Cloud Computing"

 

  • The Network is the computer
  • Compute clouds have the advantage of standardization and centralization so that economies of scale, and economies of power and location can be leveraged. Build the compute clouds where the power is plentiful, and inexpensive, such as next to a hydroelectric dam. Service the client cloud via standard TCP/IP networking.
  • Compute Clouds tend to leverage fast provisioning and virtualization to provide their service. They do not have to, but they do tend to.
  • The compute / storage resource in the cloud can be anything: A Mainframe, a rack of blade servers, a grid of laptops: Anything that can talk on the network and respond to transactions from the client cloud.
  • The Cloud metaphor is reversible: Not only is the computer / storage resource a cloud, but all the clients of the Compute / Storage resource are also a cloud. Maybe a swarm.

 

TCO

 

The mainframe community has told anyone that would listen for years and years that the cost of the mainframe does not tell you everything you need to know about running a computing resource. They were not lying. There is always this semi-elusive concept of Total Cost of Ownership. Up-front costs versus TCO. Power, lights, real estate, taxes, and how many people it takes to support the thing. One of the barriers to adoption I have seen of compute clouds is that they have to be built: You can not just take down the current running servers and reconfigure them for virtualization and rapid provisioning, and hope that the folks that were using that application don't mind waiting. There is always some sort of swing-capacity required to make the leap into the new world. Of course, you can rent that from folks like Amazon with their EC2 offering as well, so it does not have to be something that you install inside your glass house.

 

Or maybe it does: Not everyone is comfortable with the idea of running their most critical apps and data someplace other than right where they can see it and control it. With that in mind, think about what the UCS solution is or can be: The core of an in-house Cloud. It is not just about virtualization of the server or rapid provisioning: it has Cisco's latest and greatest Nexus networking built in. Nexus has all sorts of nifty new things about it, but in some ways it is a clean break with the Cisco gear of the past. Deciding to go to Nexus means looking for a way to cleanly and non-disruptively inject it into your data center.

 

Getting to the TCO benefits that UCS offers requires that one be ready to make the plunge into the new world. This is not a bad thing, but you need to know going in that this is more than just a P2V of some server or other. This is taking that application and enabling it for the whole new world of modern computing. It's the same leap in concept and power as Web-enabling a CICS application or something! Leveraging the new thing sometimes means an up-front cost in order to derive the long term benefit.

 

The savings are very real: In one new data center that we are moving to we are also taking the Nexus plunge, and the reason was that the capabilities to virtualize the network meant a real reduction in the amount of hardware we required to support the data center. That reduction translated into a reduction in Capex and Opex. Less up-front gear. Less power and support costs over the life of the gear.

 

Hardware Vendors

 

I don't really have any idea how Dell, Sun, HP or IBM will respond to the UCS announcements. It is clear that they will need to. What all four have that UCS does not is a storage story. In particular, HP and IBM have in-band virtualization. All currently rely on network vendors like Cisco or perhaps Extreme for their network fabric. I have to wonder if we'll see an alliance with a Cisco competitor from one of the four in the near-term future.

 

Dell has EqualLogic, and that is a perfect fit for the currently missing part of UCS. UCS provides fat pipes into the storage, but in all my poking around, I can not find a storage product integrated into UCS at this time. EqualLogic's iSCSI virtual storage farm slots right into UCS's fat 10G network storage pipes. Dell will probably be very happy they have this.

 

HP has in-band virtualization and the StorageWorks line: they should be able to make that fit. Ditto IBM. IBM's SVC is a terrific fit for UCS, allowing you to leverage all your current storage investments in the new UCS environment.

 

None of them will be happy with the idea that they have been cut out of the server side of this, especially as all have blade server offerings. IBM has a blade that runs both X86 and Power. Sun has UltraSPARC and X86. It will be hugely interesting to see how all that plays out. If UCS has a problem, it is that it is X86-centric, and while X86 appears to ever so slowly be winning the CPU architecture war, it is far from over. I never would have predicted Itanium would still be around, for example. And X86 needs to be very wary of ARM. But I digress....

 

Not having the storage integrated actually works for me, because I need to support all the virtual environments (Power, Sun LDOM, HP): If I want to standardize my virtual storage back end (and I do), then having the storage decoupled from the rest of the solution is a "Good Thing". For now.

 

Linux

 

Hey Steve! Isn't this "Adventures in Linux"? Yes: good point. Hopefully it is obvious that UCS is a place where virtualized Linux servers will run. Nothing makes for better TCO than a cloud running Linux, I always say. OK. I lie. I just said that for the first time. Gotta start sometime.

 

What is also interesting is how much Linux there is in enabling all this to work. IBM SVCs run Linux as their core OS. Ditto Dell's EqualLogic. I have found no details about how UCS itself runs its control applications yet, but I do know that BMC's BladeLogic is in there for the provisioning, and I know that it can run under Linux. I will be watching with interest to see how that works out. I also hope to get my hands on one of these in the lab, so that I can see how the rubber meets the road. In all honesty, I only know what I have read so far. Example: a single fully populated UCS has 320 servers and 384 GB of RAM per server. A single UCS virtual server cluster is then 122,880 GB of RAM! That is a pretty good sized VMware farm.
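
A quick shell sanity check on that RAM figure:

echo $(( 320 * 384 ))   # 122880 GB of RAM in a fully populated UCS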

 

You could make a fair sized Cloud with that.

 

Moore

 

When it started to appear that, for single CPUs (now cores), the speed of light and quantum mechanics were going to put an end to the doubling of speed and halving of price every 18-24 months, it also became clear that parallelism, rather than single-threaded sheer speed, was going to have to step in to keep Moore's Law on track. What virtualization adds is the ability to drive usage of computing resources up: virtual computing runs more images on fewer processors, virtual networking runs more connections over fewer wires, and virtual storage packs more data into less capacity. Virtualization is a cheap way to parallelize what are essentially single-threaded apps / processes.

 

It seems like a black hole: computers getting ever smaller, more dense, doing more in less footprint. Blade servers have finally matured to the place where you can cram enough memory and CPUs onto single blades to make them useful VMware servers. Now UCS ups that ante by dense-packing the network in too. Storage will almost certainly follow. When will it all fit on the head of a pin, or in my netbook?

 

The savings are very real. I documented a while back what just our prototype VMware work saved us in just one data center ("Virtually Greener"), and that was 15 months ago. We have saved far more power and space than that since then. At the same time, our OS server image footprint has increased, to keep pace with all the requirements R&D has had.

 

What would all that look like with UCS coming online? How small (and how hot per square foot) will my next data center be?

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Steve Carl

Clear to Partly Cloudy

Posted by Steve Carl Mar 3, 2009



-by Steve Carl, Senior Technologist, R&D Support

The Cloud Computing Metaphor

It sometimes seems to me that the cleverest thing Gartner ever did was to create the concept of the hype curve, or Hype Cycle. I like the term Hype Curve better because a hype cycle sounds like a song by Queen. [I want to ride my hype cycle, I want to ride my hype cycle!... great. I'm going to have that playing in my head the rest of the day]. For Cloud Computing you can argue the time frames for speed of adoption, because not everything moves at the same pace, but Gartner firmly places "Cloud Computing" in the initial "Technology Trigger" section, climbing the curve for all it is worth.

 

Yes: that is right. It has not even achieved the tip-top height of the Hype yet. You are going to hear about it even more! Not even counting this post (though I prefer to not believe that this post will be hype).

Convergence

In my last post, I talked about how we have been here before, and that post was largely meant to step back a bit from the day-to-day execution of technology and look at the concepts behind it. When looked at conceptually, the technical side of the Cloud is not all that different from the old glass house.

 

Another concept the Cloud reminds me of is "Web 2.0": it is the same kind of term. In and of itself it is not a discrete technology, but rather a critical mass of various other technologies assembled and used in a particular way. Even that is not 100% accurate though, because cloud computing is not just one thing. Maybe "set of ways"?

I rather like the idea, voiced in the comments of whurley's InfoWorld Cloud Computing column, that the idea for Cloud Computing came from someone seeing a network diagram (for perhaps the first time) and suddenly having a vision of what the concept of the network was.

 

At its very core, Cloud Computing is a very old concept, captured in Sun's saying that "The Network Is The Computer." For Cloud Computing this is more true than ever.

Another Day, Another Cycle

If you think about a computing cloud, and what it actually is under the covers, there is a data center someplace (or set of places) with computers in it running transactions and servicing requests from the network. Where the data center is is masked by the network: as long as it is close enough to you that there are no significant speed of light delays, then you have no idea where it exactly is, or what it exactly is made from. Your transactions could be running on Linux or BSD or some other OS, on real hardware or as a virtual machine, and that Linux-or-other-OS could be on X86 or a mainframe. You do not know or care as an end user. You just want it to work when you access it.

 

What connects one to this data center is the network, and under that is the protocol of the Internet, TCP/IP. Where did that come from? Work done years and years ago by the Defense Department (DARPA), with a goal that the network should be resilient: survive war taking out parts of the physical wires by routing dynamically around the damage, with protocols that knew they were inherently unreliable so that they dealt with re-queuing and retransmission.

 

Perfect for a Cloud.

 

Many point to the commodity, rent-a-cycle nature as being a differentiator between the Cloud and other concepts, but wasn't that what "Utility Computing" was about, at least in part?

 

I can not help but think of the very first time I wrote a computer program. On a Telex machine, connected to a modem, accessing a shared IBM S/360 installed someplace where that modem went. I know the mainframe was shared, because one of the other people that used that computer had access to an account from a different school and could see and print out their programs and data.

 

Our school rented time on that computer, and so did other schools. The mysterious computer on the other end of the network was chopped up into bits and rented by the TIP and the kilobyte. At the end of every school year, the student accounts were "re-provisioned", so that the next wave of students started with a clean slate. Yes, it was incredibly dated by today's standards, but it was conceptually not that different. A question of degree, not of concept.

Pendulum

We have been here before. The pendulum swings back and forth between centralized and decentralized computing, but here is where I think one of the major differences of the Cloud might be: It is potentially both centralized and decentralized.

 

One reason that people wanted to run screaming from the glass house was issues around span of control. The folks in the data center forgot who the customer was, and started acting like they were the reason that the computer was there. This was enhanced by things like the BOFH series of comedy bits, wherein a computer operator torments the end users for his personal pleasure. I will freely admit that this is not exclusively a problem of IT people forgetting who their customer is: over my many years I have also been on the receiving end of customers who think IT's primary function is fall guy and punching bag. Let us just say that it is a troubled relationship that requires constant work from everyone.

 

Cloud computing gives people the option to not tolerate being treated poorly by their internal IT staff. Say "No" too many times to the end user's requests, and if they have a corporate card, they'll just end-run you by buying the computing resources they need elsewhere. This is not that different than when folks started buying PCs and departmental servers in order to be in control of their own destiny, and it points to the same possible set of problems. If people are sticking things in the cloud because they can, then at some point no one knows what is where or who owns it.

 

My first thought about data in the cloud is not that the data is not secure, although that is certainly possible, but that data put in the cloud is unique, and that all it would take is the corporate card funding it being canceled to have the unique and possibly critical data disappear back into the disk cloud from whence it came, to be provisioned over the top of. Yesterday's critical data location is re-provisioned and is now occupied by today's shopping list.

 

Don't forget: Milk. Eggs. Backup data... Oops.

 

A truism in decentralized computing is that yesterday's experiment or escape from central support is today's crisis, because something critical got lost someplace. Looked at another way: just because it is in the cloud someplace does not mean that BSM and ITIL principles do not apply. If anything, it is the reverse.

Saving Money in the Cloud

It is very easy to see how a centralized large data center can be more cost efficient than anything that a small group of people / small business could build themselves. With scale comes leverage for discounts from the entire supply chain. By being able to choose the location of the data center, it can be placed near inexpensive power. This is probably a bigger cost factor than most think about, but super-critical to keeping costs down. A data center next to a hydroelectric dam is both cheaper and greener than one near a coal-fired power plant.

 

You can also make sure that there are qualified people nearby to support the data center. By using large commodity servers or even the mainframe, virtualization, provisioning, and so forth, computers can be run at 80%-plus utilization rather than the more common 2% of the end-user PC. That is not just good for the company providing the Cloud, but less expensive for the end user, and less impact on the planet as a whole. Triple win.

 

None of that concept is new, or unique to cloud computing. It is just adapting an old paradigm to the current generation of computer gear.

 

IBM has had, for years, the capability to provide variable capacity on demand in many of their computers. Stick a mainframe on the data center floor, and enable (for example) 10 processors for the daily workload. Now quarter close hits, and suddenly you need 20 CPUs. No problem. They are actually already in the mainframe, and can be enabled on the fly for a price. At the end of the quarter, disable them and you are back to 10 CPUs till you need them again.

 

The Cloud can do this as well. As long as every single customer of a particular Cloud provider does not go computer crazy all at once, the Cloud provider should have reactive capacity if it is well run... Here is the usual balancing act scaled up. Try to run at 80% capacity to make efficient use of resources, but have enough in reserve to deal with the workload bubbles. How big that "enough" is is based on careful measurement, modeling, historical data, etc. Golly: Sounds like the mainframe and Best/1 (OK: Called BPA now...) all over again.

Not Saving Money in the Cloud

Fair warning though: Most of the numbers I have seen about saving money in the Cloud quote a number around 30%. That may seem like a lot, but it would be easy to lose that savings quickly without Cloud purchasing discipline.

 

Think for a moment about the many, many spam ads for free things like laptops. I looked into that a bit last night, and it turns out that it is a matter of terminology: even when there was a real laptop or some other nifty thing to be had, it required going through and doing things like accepting offers that were not free: offers that had registrations, shipping, handling, where you gave away personal information, and you spent a great deal of your personal time sorting through all of it trying to get to the not-so-free laptop/device for the least amount of money.

 

Now consider the distributed systems swing of the pendulum: It was always less expensive than the mainframe. Except when it was more expensive. It had lower up-front costs, but it also had hidden costs all over the place. Support contracts, and time tracking hundreds of computers rather than just a few. Virus outbreaks and the time spent dealing with those. Lost data because it was not on backups cost what exactly?

 

The Cloud has the same issues if it is not approached as just another computing resource: if all one can see is zero up-front costs, then how long till everyone with a corporate card is charging up some tasty Cloud? How long till that 30% is long gone because the right-hand department and the left-hand department are doing the same thing in two different clouds from two different vendors?

 

The Cloud will save you money right up until it doesn't, and we have been there before.

Rapid Provisioning

It is pretty easy to see how one key to success in the Cloud would be the ability to rapidly provision operating systems and applications. If you look over many of the offerings from many of the Cloud vendors, that is exactly what they have. Rapid provisioning is a success factor for a Cloud offering, but it is not a defining feature of the Cloud.

 

Rapid provisioning is one of the holy grails of the in-house data center as well, and, full disclosure here, something we offer via our Server Configuration Automation née BladeLogic tool. If rapid provisioning is the key factor one is after with the cloud then it is possible with provisioning tools to create in-house computing clouds.

 

Taking the example from the previous section, with rapid provisioning there is a solution to the 80%/20% problem. Assuming that there are computers that are set up to create a progression to production, then there are probably Alpha, Beta, pre-GA, and maybe other levels or flavors of test / QA / customer support resources either in or near the data center. Don't want to take a chance on your critical data leaving your premises and being manipulated in the Cloud? Create your own cloud inside your own enterprise. Take the idea of Cloud Computing, and implement it internally. In fact, take the whole attitude of the Cloud: that the customer is the person paying, and that the computing resource is a tool: a means to an end, not the end itself. <Craig Ferguson> I knoowwww! </Craig Ferguson>... weird for a computer geek to think about it that way sometimes.

API or not to API

It does not really take any special APIs to create a computing cloud. TCP/IP and all the things that run on it are enough: see Google Docs, or any other AJAX-based application, for details. When I wrote this entry, I mostly used Google Docs to create the HTML, and for me, it was off in the Cloud. I don't know where the actual HTML is being stored: which of many of Google's servers have it. For all I know, I am being shuffled back and forth between multiple computers, statelessly on my end, and statefully on their end. This is not to say that one can not create APIs to enable the Cloud, like Microsoft's Azure. When Cloud-enabling legacy applications, APIs can be downright handy, it seems. Be that as it may, having or not having an API is not a defining feature of the cloud, and in fact the APIs run over .... TCP/IP.

Cloud Defined

It appears to me that the definition of Cloud Computing is a very fuzzy, nebulous, cloud-like thing. At its simplest, all that is required is two network-enabled devices, and a network between them. The network is the cloud, just like in the classic diagrams. The network still, after all these years, is the computer. Probably why Sun is big into Cloud Computing.

 

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.
Steve Carl

Convergence

Posted by Steve Carl Feb 18, 2009


-by Steve Carl, Senior Technologist, R&D Support

 

What goes round comes round. What is old is new. Here we go again. Haven't we been here before?

 

I have been doing several things lately that all intersect. One of them is designing a new data center for R&D labs. Another is looking at one of our VMware server farms that was an early proof of concept for the whole server virtualization thing, and trying to figure out where it needs to go *next*. Another is looking at Cloud Computing: where it is now, and where it is headed.

 

At the center of almost all of those things is virtualization technology, and that is where I started my career, on the mainframe back in 1980. VM/SP it was called back then. At the center of the current X86 version of virtualization is another old friend of mine, Linux.

 

Quick terminology note: the mainframe OS called VM created VMs. Yep. They called the OS what it did. Kind of like people being named "Smith". VM over the years has had many flavors: VM/370, VM/SP, VM/XA, VM/ESA, and the current z/VM. VM system programmers just called it VM mostly, and knew by context if the discussion was about the OS or the Guest Operating Systems. A Guest OS is a virtual machine, running as a guest of the host OS... named VM. I'll try and be contextually clear here. VM created virtual mainframe hardware that the Guest OS used without knowing it was not real. Mostly.

 

VMware, Xen, and others are slightly less confusing here because the Virtual Machine tag is reserved for the virtualized, or guest, OS only. When you talk about the host OS, it has a different name. Don't even get me started on bare metal virtualization, or hypervisors. Not going there.

Hardware Assist

VM on the mainframe predates me by a good bit. Depending on how you interpret a few things, computer virtualization started in either the late 1950's or mid 1960's. The exact start date is not really that important to this post. What is important is that the early experiments turned up the fact that virtualization worked better when the hardware assisted in its own virtualization. To keep memory from being bashed, trashed, and generally abused, there were features added to allow indexing and translation assistance.

While the implementations are technically different, they are not that conceptually different from what AMD did with Pacifica (AKA AMD-V) or Intel did with Vanderpool (AKA VT-x). It was really a very old lesson.

 

As the mainframe evolved, the technology that it used changed and evolved until we reached the place where almost all the virtualization formerly being done by the VM operating system started being done in the hardware via the SIE (Start Interpretive Execution) microcode.

 

We have not quite reached that place in X86 land yet, but AMD and Intel keep adding more and more hardware features to their processors to support OS virtualization. The Nehalem generation of processors from Intel adds Extended Page Tables (EPT), for example. Wow... where have I seen that before? Oh yeah... I remember. AMD is adding I/O Memory Management to increase virtual machine isolation. Think I have seen that before too....

 

Don't take my tone wrong: these are great ideas, and my point here only is that we have been here before, and so if you want to know where we are going, just have a look at today's Z10 mainframe because the hardware features it has now to assist in virtualization will appear sooner or later, in some form, in X86 space.

Shared Memory

Back in the day, when we were running a large number of virtual machines under VM on the mainframe (oh.. wait. We still do that...), the thing we needed first, before anything else, was RAM. That is as true today, if not more so.

 

One of the things that was discovered early on about virtualization was that if you do shared memory wrong, all you get is a computer that beats itself to death trying to manage memory. Thrashing. Doing nothing but paging. I was not there, but think that is more than likely why some of the very first hardware assists for virtualization back in the early 1960's came in the form of memory management. I was here when Intel and AMD added hardware assists for memory, so I think my theory has legs.

 

In our use of VMware internally, we find that more often than not the number of virtual machines that we can deploy on any given ESX server is more a function of RAM, not CPU, being the bottleneck. Even with features like memory over-commit, it is common for us to see in our BMC Performance Assurance data a two to one ratio of memory usage to CPU usage. It would be easy to assume from that data point that the speed of the CPU has outstripped the other components of the general computer.

 

When CPUs were not the fastest thing in the box, IBM spent a great deal of time and engineering trying to make sure that they only did work that was high value. I/O was spun off to dedicated I/O processors. Memory was used for the I/O processors to communicate the actual data the CPU needed to work on next. In general, the mission was 'keep the CPUs busy'. All this custom engineering made for expensive computers, and so no mainframe data center worth their salt would buy new capacity before they needed it, and an underused mainframe was considered a failure on the part of the people that designed and recommended that configuration.

 

Aside: When I was at NASA as a subcontractor, I heard a quite believable story about a mainframe system programmer who wrote a program that was a CPU soaker. After a new upgrade, he would make it use more CPU, and keep end user response time more or less the same. When load increased, they would dial back the soak task, and things would return to normal. When there was no soak left in the soak task, it was time for a new mainframe.

 

Whether that story is true or not, that fact it was told says something about the way that the resources of the mainframe were viewed.

 

The constraint for us with VMware is RAM. With 256 GB of RAM I can virtualize twice as many virtual machines as with 128 GB of RAM, without adding any CPU resources. But the cost of the 8GB SIMMs to do that often more than doubles the cost of the system... It was often literally less expensive to buy two ESX servers with 128GB of RAM than one ESX server with 256GB of RAM. Boy will that set of numbers look stupid in a few years... but the point will still be the same.

 

Factor in the three-year ROI, and that is not true of course. Power, air conditioning, rack space, network and KVM connections all more or less double with two ESX servers instead of one. The problem in tough economic times is to take the three-year ROI into account. The good news is that being green, using less of this planet's resources, is also a good thing, and it makes the discussion about buying fewer, more expensive computers feel a lot like the ones we used to have back in the heydays of the mainframe.

 

For a project I am working on I did a three-year ROI on two 256GB VMware servers (Dell R900s, but the same held true for Dell R905s) versus four 128GB ESX servers (also Dell R900s). Not even counting the VMware licenses for the CPU sockets, the costs came back with the 128GB config costing 25% *more* over the three years of the server. It is even better than that, because I rounded down all the power costs in my model, did not include taxes on the purchase or the power, and did not count the VMware per-socket CPU license charges. The real number is probably closer to 50% in the real world, but I wanted this estimate to be utterly fiscally low-ball. Under-promise, over-deliver, just like Mr. Scott on Star Trek.
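
The shape of that comparison, sketched at the shell. Every number below is a made-up placeholder, not a figure from my actual model; the only point is that capex plus three years of power for four small boxes overtakes two big ones:

BIG_COST=20000             # hypothetical price of one 256GB R900
SMALL_COST=14000           # hypothetical price of one 128GB R900
WATTS=400                  # assumed settled draw per server
CENTS_PER_KWH=10           # hypothetical power rate
HOURS=$(( 24 * 365 * 3 ))  # three years of run time
POWER_DOLLARS=$(( WATTS * HOURS * CENTS_PER_KWH / 1000 / 100 ))  # power cost per server
echo "Two 256GB servers:  $(( 2 * BIG_COST   + 2 * POWER_DOLLARS )) dollars"
echo "Four 128GB servers: $(( 4 * SMALL_COST + 4 * POWER_DOLLARS )) dollars"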

 

[Geek Points for the Star Trek reference!!!!]

I/O and Channels

The mainframe is still the king of the I/O mountain, and at the core of that is the way the I/O subsystem on the hardware is designed. There are lots of I/O channels, they can load balance, and more than one can connect to any given I/O device, for not just load balancing but redundancy. It further allows virtual machines to start I/O on any given channel to any given device in a totally shared and transparent but still isolated way. Now add that the Z10 can have 1024 I/O channels on a single mainframe. X86 is not anywhere near this yet. Not even close.

 

Not being near and not being close is not the same thing as not knowing where they need to head over time though. Have a look at the Virtensys web site for example. Clearly they have in mind the same thing that the mainframe did: Decouple the I/O from the processor, and share it. The picture doesn't have 1024 I/O channels in it: There is a reason why the mainframe costs more for starters. Of course you could argue that makes the MF the perfect convergence platform at the core of server consolidation... and you would be right.

Virtual Networking

VM on the mainframe has virtual network switches interconnecting virtual machines. VMware has the same thing in ESX. TCP/IP allows all sorts of possibilities for tunneling other protocols inside it. Stepping back a second, it seems like whoever has the least expensive, fattest pipe can become the transport du jour for all the other I/O in the shop. iSCSI, FCoE... you name it.

 

In a reverse of the way things normally happen, distributed systems brought networking to the mainframe. Well... not exactly. The mainframe had its own way of networking before (SNA), but it did not survive "contact with the enemy". Mainframe people just had trouble getting their heads around the idea that the protocol did not guarantee the packet would get where it was sent. But I digress.

 

The mainframe had virtual network I/O long, long before it was cool. We used to lash virtual machines together into virtual networks with Virtual Channel-to-Channel (VCTC) adapters. There was real hardware that let one mainframe talk to another over its high speed channels (high speed then...). It was a complicated sort of flipping of transmit and receive, except that the mainframe channels were parallel, not serial. VM could do that same trick virtually, and then two guest OSes could converse not at hardware speed (4.5 Megabytes a second on a fully spiffed S/370 set of cables), but at *memory* speeds.

 

If you buy a large VMware server, it can have inside it a virtual network where a large number of its virtual machines converse with each other without ever touching a real wire. We have been here before, and it was good then too.

Virtual Disks

UNIX and other platforms have long had the concept of a virtual disk. The hardware design made this vary, but at the very base of this was the disk slice. Take a disk, and instead of using the whole thing as one disk address, write a partition table on it, and have it contain more than one disk image. PATA had, for example, four primary partitions available by design. In Linux terms, hda became hda1, hda2, hda3, and hda4. When that was not enough, PATA layered on the "extended" partition, so that one of the four disk slices could be sub-divided into slices again.
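
If you want to see those slices on a Linux box, two old standbys will do it (the device name here is just an example):

cat /proc/partitions      # the kernel's view: hda, hda1, hda2, and so on
/sbin/fdisk -l /dev/hda   # the partition table, primary and extended slices (run as root)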

 

VM (the MF OS) has always had the idea of a "mini-disk". Unlike with a partition table, how the disk was divided up was not written onto the disk being sliced. The disk was blissfully ignorant and just stored the data written to it where it was told to by VM. The disk slicing was defined in the VM directory. MF disks mostly came in the Count Key Data flavor (with a few exceptions), which meant that they were subdivided into "cylinders". Different disk models have different numbers of cylinders. The smallest minidisk is one cylinder, meaning that a MF disk could contain literally thousands of mini-disks. The limit was how many cylinders the hardware presented to the OS.

 

VM would then take these minidisks and assign them to VMs, or Guests. The Guest OS would have no idea that the minidisk was anything other than a real, if smaller, disk.

In another echo of the past, I have seen various Linux recommendations being made to use disk labels rather than device addresses. In Linux, this is mostly in /etc/fstab. The mainframe has used disk labels forever. The directory that defines the minidisks is keyed off the disk labels, in fact. The reason is the same too: with labels, the underlying disk hardware address can change and it will not affect the operations of the system. Handy for system recovery and such.
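
On the Linux side that looks something like this (the device and label names are examples, not a real box of mine):

/sbin/e2label /dev/hda2 data01    # stamp a label onto an ext2/ext3 filesystem (as root)
# then in /etc/fstab, mount by label instead of by device address:
# LABEL=data01   /data   ext3   defaults   1 2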

 

Disk slice limitations were a real problem for UNIX and other OSes. Logical Volume Managers sprang up to deal with that by abstracting the real underlying hardware from the applications on the OS. LUNs became VLUNs. A VLUN could be part of a disk, a disk, or many disks. In this the mainframe was exceeded for a while: aggregating minidisks would be a long time coming. The first time I thought it was *easy* was in fact the convergence of Linux and VM, where Linux creates VLUNs over the top of VM-supplied minidisks.

Driving Utilization Up and System Count Down

As noted before, when I started in the mainframe biz, it was just generally accepted that you did not run your MF at less than 80%. If you did, you had overbought your capacity. At the same time, there had to be headroom for peaks: things like quarter close, and billing runs and stuff like that which made your averages and your peaks both things your capacity planner / system programmer took into account when figuring out what kind of mainframe they needed to buy next time. Not counting the CPU soaker person. That story ends with them being fired.

 

One of the reasons that people loved distributed systems is that they could buy all these little computers with tons of spare capacity and just not worry about that anymore. Of course we now understand that the freedom came with a cost: OS license counts, applications license counts, and amber waving fields of systems that needed to be replaced every three years or so. This contrasted with the more structured world of the glass house where those things were planned for and dealt with by small staffs of people that understood all the issues around the troubles that come from things like hardware and OS upgrades.

 

There may not have been capacity planners in the distributed world, but now everyone was a desktop admin.

 

All those computers sitting about and doing nothing when someone was not sitting in front of them. Even when they are sitting in front of them, I am watching the CPU meter as I type this. The computer is not even noticing the keystrokes. I have two browsers open, email, and a couple other applications and the CPU is looking out the window and is frankly rather bored. Memory is at 60% though. When I need the CPU, I will spike it to 100% for a few seconds, but then it will return to its lackadaisical state. The average computer runs at about 2% CPU usage most of the time. With enough memory, this CPU should be able to handle 40 or 50 people doing the same things I am right now.

 

The mainframe stayed as busy as possible by buying fewer CPU's, offloading its I/O, and leveraging RAM and disk storage (VM's paging / swapping subsystem is a thing of beauty).

For the end user, the so-called "green screen" sat connected to a terminal controller, and the terminal controller buffered together the I/O of about 32 users, and batched it up to the mainframe when it needed to. These days you can do more or less the same thing with a web browser, AJAX, and a Linux server. That Linux server can be running with a bunch of other Linux servers at the same time on an ESX, Xen, or other virtualized server inside the data center. Work is offloaded onto my computer, and batched back to the server as needed. As I am writing this in Google Docs, the server is off someplace in a server cloud: I know not where.

Feet on the Ground

Cloud Computing. Seems like another name for "Central Data Center" in so many ways. What are the concepts that drive the cloud?

 

First off, people sometimes act like the idea that they are in one place, but their data is in another, is new. They get jazzed about the idea that a modern computer screen looks so much better and is so much higher resolution and has all the nifty colors and nice icons and all. All that is true of course: we use tons of computer cycles and RAM to make all the pretty screens. We'll be using a lot more whenever we get to the place we can talk to the computers.

 

I was talking to a banker the other day, and he was talking about how they are supposed to open new accounts. There is this pretty GUI-based thing, and using it requires patience, especially when it crashes. All he wants to do is open an account, and the person is waiting right there! When the GUI widget crashes, he goes to a green screen hidden away somewhere in the back... I am guessing an IBM terminal, but he did not know. May have been ASCII. There he flies through a series of screens, typing in codes and data, and in a minute or so, all is done.

 

That terminal accesses a computer someplace else. He does not know where. It is "out there". Sure, it is an internal-to-the-bank cloud, but it is still mysterious and magical. Of course it helps that he knows how to use the green screen. A newcomer to the bank would more than likely not know the old system, and would have to be patient waiting for the new system to either catch up, or maybe even come back up.

 

Is the magic of the cloud the protocols? Does anyone really care where the computer is? Of course not. What they care about is whether or not someone can steal their data, steal their identity, or embarrass them.

 

I am not saying there is nothing new under the sun... or is that Sun, here? The fact that there are over a billion wireless phones on the planet, and that you can hold a full browser in the wireless palm (or is that iPhone?) of your hand, is clearly new. It is nothing that science fiction did not think about for years, but now it is real. All those itty bitty computers in our hands would be useless without the cloud of computers behind them. On my iPhone, I have no idea where the computer is that makes Google Maps work, but I am fairly sure it is not the same one that makes my Twitter app work. The protocols and the interfaces, and the speeds and feeds, have all changed, but conceptually, how different is that from the green screen on the desk that somehow accesses the customer data and sets up the account? If the bank has web banking, that customer can now go and access the exact same information from their iPhone.

 

I'll have more to say about clouds in the future, but as it relates to this post I think I have beaten it enough. Probably not into submission. The cloud crowd are pretty loud and proud.

 

[More geek points for alliteration!]

Looking Forward

One of the areas I am looking into right now is storage virtualization: making disk farms into blocks of storage, abstracting the volumes at a layer above that, and spreading I/O as far and wide as the application at hand requires.
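On a plain Linux box, LVM gives a small-scale taste of the same idea: the physical disks get pooled, and the volume handed to the application is an abstraction striped across them. A minimal sketch, with device names and sizes that are just examples:

# pool two spare disks into a volume group
pvcreate /dev/sdb /dev/sdc
vgcreate vg_demo /dev/sdb /dev/sdc
# carve out a logical volume striped across both, so I/O is spread over the spindles
lvcreate -i 2 -I 64 -L 100G -n lv_app vg_demo
mkfs.ext3 /dev/vg_demo/lv_app

The application sees /dev/vg_demo/lv_app and never needs to know, or care, where the blocks actually live. The big arrays just do this on a vastly larger scale.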

 

Even at that, I have been here before. The mainframe had a bit of disk storage back in the 90's called the Iceberg, from what was then STK. The 'Berg was a genuinely disruptive technology: it took fixed-block SCSI disks and virtualized them into Count Key Data volumes. The mainframe no longer knew where the actual blocks were. IBM later came out with the RVA, then the Shark, and now the DS line, and all of this virtualization has continued and increased. There is, as far as I am aware, no such thing as a *real* Count Key Data disk anymore.

 

The IBM SVA, Xsigo Systems VP*, or HP SVSP do the same thing for disk arrays from a wide range of disk vendors, or, if you prefer, there are devices like the ones from Compellent (to name but one) where the disk back end is provided, but utterly virtualized.

 

Deja vu. I feel like I have been here before. Not exactly: there are differences. Still, I wonder what is going to get mined from the mainframe next. It is not like the mainframe is sitting still, waiting to be overtaken, either. Like TCP/IP, it has absorbed ideas, concepts, and even operating systems like Linux. UTS would be proud.

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.



Update on the new MAPI functionality coming soon to Evolution, with help from the OpenChange project

In my last post, I made a reference to queuing up and testing the new MAPI connector for Evolution under OpenSUSE 11.1. I have done that now.

First off, I added the repository to YaST using the information at the Evolution Wiki. The repo file looks like this:
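(The baseurl below is only a placeholder standing in for the one on the Evolution Wiki; what matters is the general shape of a zypper / YaST repo file, and roughly how you could add it from the command line instead of the GUI.)

[evolution-mapi]
name=Evolution MAPI preview packages
enabled=1
autorefresh=1
baseurl=http://example.com/path/to/openSUSE_11.1/
type=rpm-md

# or, skipping the GUI entirely, roughly the equivalent with zypper:
zypper ar -f http://example.com/path/to/openSUSE_11.1/ evolution-mapi
zypper ref
zypper in evolution-mapi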

 


After adding the repo to YaST, updating the repositories, and then installing the MAPI bits, my system now had this on it:

steve@indiapaleale:~> rpm -qa | grep -i mapi

 

evolution-mapi-debuginfo-0.25.6-3.1
evolution-mapi-0.25.6-3.1
evolution-mapi-lang-0.25.6-3.1

steve@indiapaleale:~> rpm -qa | grep -i openchange

 

openchange-0.8-3.1

steve@indiapaleale:~> rpm -qa | grep -i samba4

 

samba4-4.0-19.1
libtdb1-samba4-1.1.3-14.1
libtalloc1-samba4-1.2.0-14.1
samba4-libs-4.0-14.1

The MAPI code was updated at the end of January, going from OpenChange .7 to .8, with all the related packages similarly revving.

 

MAPI-Claus is Coming to Town

 

The fact that MAPI has been added to Evolution so quickly is really very impressive. It is not an easy thing to do. Having the doc (as noted last time) certainly helps. That being said, this is called .8 for a reason. It is working against my MS Exchange 2007 server, but not without significant issues.

 

I want to stress right here and now that I am not testing this for prime time readiness, nor does the project make any claims that this is anything but Alpha level code. Use this at your own risk!! Really!!

 

I lost email with this. Really really. This is pre-GA.

 

I had tested .5, .6, and .7 of this MAPI code, and it never worked for me at all. I was testing it against MS Exchange 2003 though, so two factors changed: I moved to MS Exchange 2007, and Evolution MAPI .8 at the same time. Now it works. Sort of.

Here is what works:

  • I can send* and receive email.

(* sending works as long as the email address is fully qualified: many things in the Inbox cannot be replied to because they do not have valid email addresses)

Here is what kind of works:

  • I can see part of my calendar. I have not figured out the rhyme or reason to why Evolution can only see a small subset of my total calendar. Using today as an example, I can see one of seven meetings.

 

Flat-out does not work yet:

  • Expunge really does Expunge. Everything. Entire Inbox, whether or not a note has actually been deleted. It does ask if you really want to do it first though.
  • GAL: The Global Address List is still a work in progress. MAPI dropped the WebDAV / LDAP version of the code in favor of a protocol called NSPI (Name Service Provider Interface). It is not done yet, and not included in the current binaries.
  • No setting any rules / filters
  • No Exchange special stuff, like "Out of Office" autoreplies. Not even a screen to work with them yet.

 

Stability is not there either. I can walk away from the computer to go to lunch, and come back to find I have to restart all of the Evolution processes because it has hung. No messages. No tracebacks.
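Restarting, for what it is worth, means killing everything and starting over; in the 2.2x series that usually comes down to something like:

evolution --force-shutdown
evolution &

That shuts down Evolution and its backend processes so the restart really is a fresh one.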

In related news, when I updated my Ubuntu 9.04 Alpha system this morning, it pulled down Evolution 2.25.90, so it looks like Evolution 2.26 is getting close. There is no MAPI available yet in the repository though, so it is still a race to see what goes first, 2.26, or MAPI support for 2.26.

I am encouraged. A version of this will ship with Gnome 2.26, and I assume that it will be revved very frequently after that for a while, but still: after all these years, MAPI / RPC support for MS Exchange is clearly headed to Linux. When it gets there, a *huge* wall to corporate Linux desktops will have fallen.



We upgraded to MS Exchange 2007 before Linux got MAPI/RPC for MS Exchange. Back to the drawing board.

I have, for years, with varying degrees of stability, been able to access my Exchange-based Calendar and email from Linux via Novell's (really, Ximian's) Evolution product. I have written about all that at length here.

 

No more. Welcome MS Exchange 2007. Goodbye WebDAV. Microsoft's grand experiment in email open standards is over, and where Exchange 2000 and Exchange 2003 were accessible via the WebDAV protocol, Exchange 2007 drops this.

 

I do not know why. It was not because it did not work.

 

WebDAV was part of how MS created Web access to the MS Exchange Inbox, Contacts, and Calendars. E2007 replaces that with a heavy and a light client. The heavy client only works with IE, and is all ActiveX stuff as near as I can tell. The 'light' client appears to be mostly an HTML effort, and works with Safari and Firefox, among others. The light client is noticeably faster than the old light client was, and is cleaner and brighter to look at. It reminds me more than anything else of the Yahoo webmail interface.

 

It is serviceable, and will have to do for now, because with WebDAV removed, all I can access from Evolution is the Inbox, via IMAP. That is not insignificant either: IMAP is faster than the old MS Exchange connector was; clearly a lighter protocol. I also have Win7 and Outlook 2007 if I need them.

 

It Could be Worse: MAPI / RPC *is* coming to Linux. Slowly.

 

MS kept their access protocols carefully undocumented, non-Open-Standard, and in fact kind of catch-as-catch-can. Need a new feature the MAPI protocol was not envisioned for? No big deal: add a Remote Procedure Call. In addition to the MAPI protocol itself, when Outlook and MS Exchange talk there are apparently 150 or so RPCs involved.

 

There is nothing about any of this that would keep any host platform that can talk TCP/IP from talking to MS Exchange. Neither MAPI nor RPCs are the exclusive realm of MS Windows. What has kept them exclusive has been the lack of documentation. If you wanted to implement an email client with Calendaring, Contacts, and Tasks that talked to MS Exchange the same way Outlook does, you had to capture the wire conversations and figure out how they worked and what they were doing.

 

This can be done. It is tedious and time consuming, but it is how the Samba project figured out SMB. What WebDAV did for projects like KDE's Kontact and Gnome's Evolution was make it far easier to figure things out. The wire protocol was WebDAV: they could see the mailstore, the Contacts, and other objects on the Exchange server. They still had to figure out the interactions, but because the protocol was readable, it was far easier than starting at zero, as one would have with MAPI and undocumented RPCs (and we are talking about the undocumented MAPI here, not the documented SMAPI from years ago).

 

Even as relatively easy as WebDAV made things, Evolution was never all that stable (at least when using the Exchange Connector; some point releases were better than others, and it also depended on the Distro in odd ways that I have documented here in the past), and KDE never called their MS Exchange / WebDAV effort anything but experimental. My experience was that while you could read your calendar with Kontact, you could never add events to it.

 

The EU has changed all this. MS has been told that if they want to do business there, they have to document things like MAPI and the RPCs they have kept so under wraps for all this time. They have. In fact, MS also worked with Novell to get Silverlight going on Linux (the so-called 'Moonlight' project) so people could watch the Obama Inauguration on the Internet with Linux.

 

Now both KDE and Gnome are working with OpenChange to get support for MS Exchange into their projects. The first MAPI / RPC support is set for Evolution 2.26, due in March with the rest of Gnome 2.26. It will apparently implement a subset of the RPCs required to get started at a basic level with MS Exchange server access: some 80 or so of the 150 RPCs MS has documented. In support of this, OpenChange released a new library of fixes and new feature/function on January 20th, 2009.

 

I have an OpenSUSE 11.1 / Gnome 2.24 based system set up and ready to test the new libraries as soon as I get a spare moment from my regular day job. That link also has repos for Fedora 9, 10, and OpenSUSE 11.0. I am also tracking Ubuntu 9.04 since it should ship with Gnome 2.26.

KDE is farther behind on this than Gnome, but then they never really had WebDAV working as well either. This article documents KDE's current status. In related news, after the setback that was the KDE 4.0 release, it looks like KDE is starting to get their mojo back in general. KDE 4.2 is supposed to be much better, and by the time the MAPI / RPC support is added they should be well on their way to being a fully viable desktop again. Not that they stopped being one, as long as you stayed in the 3.x tree. But 4.x should be back to having all the feature/function of the 3.x tree, with the new underlying architectural improvements in place. It was painful, but it looks like the environment is nearly back. Just in time for Gnome to have a spasm of architecture changes, no doubt.

 

Aside: I have no real problem with what KDE did when they moved to the new 4.x series. I get that they had to make some underlying changes to position themselves for the future. I just think that 4.0 and 4.1 were still Betas. I have not yet tried 4.2 to see what it looks like: I will as soon as I have a chance. If nothing else, I will be tracking how KDE adds in the MAPI / RPC functionality. I like having options. It is probably telling that KDE-centric distros like PCLinuxOS have chosen to stay with the 3.x tree so far. The exact quote, in reference to their upcoming PCLinuxOS 2009 release:

 

"We decided to use kde3-5-10 as our default desktop as the we could not achieve a similar functionality from kde4. We will however offer KDE4 as an alternative desktop environment available from the repo once we stabilize it."

 

Waiting Is....

 

Geek points! I got in a "Stranger in a Strange Land" reference! In this case, it is not Martian patience; it is just that there is no choice. MAPI support is coming soon, but it is not here yet, and it is getting here far faster than it otherwise would have, since the various projects have access to the actual protocols this time around. It still will take some time. I fully expect that Evolution 2.26.0 will be followed by a series of point releases while all the bugs in this brand new feature set get worked out.

 

The funny thing about all this is that it is probably only a short-term concern anyway, before all the angst about these protocols fades from relevance. Cloud Computing, Google Gears, SaaS, Linux-based Netbooks, and all the other current technology heading us away from these paradigms cannot help but have an impact here.



Literally. Where the VMLMAT tool goes, and what it does going forward are defined by the people that use it, just as what it is so far was defined by the needs of the people that created it.

http://vmlmat.wiki.sourceforge.net

 

Background, in case this is the first post you have ever read about VMLMAT: VMLMAT is not a product of BMC's R&D organization. As we are organized internally, R&D Support is actually in the IT department, and VMLMAT was designed and built by Ron Michael, the man charged internally with supporting all of R&D's Linux images on the mainframe. It is expressly written for the needs of our internal environment, and some of those parameters are:

 

  • All mainframe Linux images run under z/VM. While we have LPARs, we do not use them for Linux.

  • We made an arbitrary design decision to use Active Directory as our repository for users and passwords. VMLMAT stores the metadata about what each user can do, and which Linux images they own, but not the actual userids / passwords themselves.

  • Our R&D community needed to treat mainframe Linux as "Just Another Linux": the same as the Linux on the x86 server, the one in VMware, or the one on their desktop or laptop. As developers, they know Linux, but they may not know much, or even anything, about z/VM or the mainframe.

  • Related to that: we had no idea what platform the R&D people would be using when they were managing their Linux images. It could be AIX, Solaris, Linux, HP-UX, OS X, Tru64, VMS, MS Windows from 2000 through 2008...

  • With terrific z/VM system programmers in the support organization, we were not looking to do anything that either got in their way or added any z/VM system management features.

  • We have no MF-attached SAN devices, therefore no MF Linux running on FCP devices.

 

Because of Whurley and the new direction he represented for BMC, and discussions with same, we knew from a fairly early point that VMLMAT might become one of BMC's first Open Source projects. However, that did not translate into making VMLMAT the be-all, end-all of Linux-on-the-mainframe management tools. We needed VMLMAT to do certain things internally, and that was all we could justify, from a time / cost point of view, of Ron's efforts. All of VMLMAT's features are funded by IT, therefore they had to meet IT's needs, and in this case that meant R&D's needs. The good news is that Ron knew this might go Open Source, and so he was very judicious about the code's documentation, both inline and via the Wiki. This serves us well internally for anyone that might come along behind Ron, and it serves the Open Source community as well, since they were going to get clean, well documented code from the start.

 

Being well documented was key: we knew that there were missing features, and we knew we might not be the ones adding them. We made the leap of faith that any submissions back to the project from the community at large that were based off this codebase stood a better chance of being equally well documented and written. We will not accept anything into the main code tree that is not so (or that we cannot easily make so).

 

Taking my list above in order then, here are the implications about what VMLMAT could be made to do in future:

 

  • VMLMAT can be thought of, at least in part, as a cold provisioning tool. It takes virtual machines and makes them into any version of Linux in the archive. It can also be used to store particular configurations so that they can be promoted from Alpha to Beta to pre-GA / release candidate, and then to production. But if production is in an LPAR... not so much. Not yet, anyway. We think adding LPAR support is very do-able, though.

    • Related to that: VMLMAT is a stand-alone tool right now, but what if you already have a provisioning tool in house? Something like BladeLogic? Could VMLMAT be made to interoperate? Again, we think the answer is yes.

  • David Boyes, over on the VM Linux mailing list, made a great suggestion: that VMLMAT use PAM rather than our Samba code, so that authentication could be made pluggable, and it would then not matter what the target shop uses for userid / password authentication. LDAP, NIS, NIS+, AD... whatever. (There is a small sketch of what that could look like just after this list.)

  • In a production environment, not everyone and their cat should have full access to the production versions of Linux. VMLMAT already has a pretty good security model for this, since it allows one-to-many, many-to-one, and many-to-many groupings of users to Linux VMs. But I can see how some shops might want to tie this in to something like RACF or VM:Secure. For our part, we would just like to store all of this in something like PostgreSQL or MySQL in the future, rather than in locked-down flat files.

  • Our choice for the user interface was Web 2.0-ish (even though it is pure HTML right now). We assumed everyone would have a web browser, even if we did not know which web browser it might be. Other shops may prefer a Curses or an X interface. We have no internal demand for that, and I can not see us adding those other interfaces right now. What I can see us doing is getting far more sophisticated over time with the web interface. It is currently designed to be about as fast and simple as it can be: real Web 2.0 Open Standards technology (such as AJAX) could make it much more active in nature. Whatever is done here, Open Standards will be at the core. If we learned nothing else from the web browser wars of the 1990's, it is that using proprietary browser extensions is a "very bad idea" (tm).

  • There is an API in z/VM called the System Management API, or SMAPI, not to be confused with Microsoft's Simple Messaging API, also called SMAPI. Other than the name, and both being APIs, they are not even similar. Via VM's SMAPI, VMLMAT could be extended to "talk" to the host OS and have it do things like create and destroy VMs, add mini-disks, change memory parameters, add Diag permissions... whatever is needed. While VMLMAT has steered pretty clear of most of the features of BMC's original cloning tool, such features would put it squarely back in that realm. We don't need such things, so they would more than likely not be added unless something extraordinary happened, or they were added by the community at large.

    • Such features could also be added by having VMLMAT make calls to whatever the shop's systems management tool is. This is the way the BMC cloning tool went about it. VMLMAT shares no code with that tool though, so adding such a feature set would more than likely be a community endeavor.

  • A key design point of VMLMAT was device and file system independence. The way that systems are archived and restored does not care about the underlying device. It does not care if the file system is EXT2, EXT3, or (probably, but not tested) XFS, JFS, or Reiser. We take whatever device VM gives us, and we are glad to have it. Thus should it ever be. While we have never tested it, adding SAN devices should not be that hard: mostly a matter of dealing with the device names themselves.

    • Related to that: we built the archive feature to match a resource that we had, namely our spiffy Linux NAS servers. Other shops might prefer to archive to FCP devices, CIFS, or even to MF DASD, leveraging at least the fact that the archives are gzipped and still saving space. All of this should be easy to add. (See the second sketch after this list for the basic idea.)

    • Further related to that: we internally use only VM-presented mini-disks as "raw" devices for Linux. No LVM or EVMS layer. Since VMLMAT is device independent, adding support for a logical volume manager (or several) should not be that difficult. In fact, installing Red Hat without LVM has gotten more and more difficult, so this is something we may have to add for internal reasons.
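
As for the PAM suggestion above, this is roughly what a pluggable setup could look like. The service name and module stack here are made up purely for illustration; VMLMAT does not ship anything like this today:

# /etc/pam.d/vmlmat  -- hypothetical service file, for illustration only
# try LDAP (which is how AD would be reached) first, then fall back to local accounts
auth     sufficient   pam_ldap.so
auth     required     pam_unix.so try_first_pass
account  sufficient   pam_ldap.so
account  required     pam_unix.so

Swap pam_ldap for pam_krb5, pam_winbind, or whatever the shop already runs, and VMLMAT would not have to care where the passwords live.

And for the archive bullets: conceptually, a device- and file-system-independent archive amounts to streaming a block device through gzip to wherever the storage happens to be. The device names and paths below are examples only, not VMLMAT's actual code:

# archive one minidisk to the NAS, compressed
dd if=/dev/dasdb1 bs=1M | gzip > /mnt/nas/archives/devbox-dasdb1.img.gz
# and restore it later
gunzip -c /mnt/nas/archives/devbox-dasdb1.img.gz | dd of=/dev/dasdb1 bs=1M

Point the output at an FCP LUN, a CIFS mount, or DASD instead of the NAS and the idea is the same.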

 

Those are just some of the obvious things we have talked about (and we have talked about other, more esoteric things still), and to some degree they are based purely on our projections of what others' needs might be. I freely admit that I have no idea of all the possible things others might need or want from a tool such as VMLMAT. We tend to have a very BSM mindset internally for some reason, so that may color what we think about when we consider what we might do to VMLMAT.

 

In its current form, it has more than paid us back for the time Ron spent creating it. How it might be extended to be of similar help to others is an "Open" question.


Mint 6 RC1

Posted by Alena Hitzemann Bowen Nov 17, 2008


Looking at the release candidate of Mint 6 to see how well it works as an enterprise desktop.

I recently wrote up a post on my personal blog about installing Mint 6 RC1 on my Acer Aspire One. This is a followup to that one, with the focus shifted from personal to professional use.

Evolution

I noted in a previous post that I had very good success with Ubuntu 8.10 and Evolution 2.24. Since Mint 6 is based off Ubuntu 8.10, I would expect the results to be similar. There is room for doubt, though, because as I noted in my personal blog, Mint 6 does act differently than Ubuntu 8.10 about a few things. For sanity, I did a comparison between the packages I have installed on the Ubuntu 8.10 system and the new Mint 6 system. Here is Ubuntu 8.10:

 

ii  evolution                                   2.24.1-0ubuntu2
ii evolution-common                            2.24.1-0ubuntu2
ii evolution-data-server                       2.24.1-0ubuntu1
ii evolution-data-server-common                2.24.1-0ubuntu1
ii evolution-dbg                               2.24.1-0ubuntu2
ii evolution-exchange                          2.24.1-0ubuntu1
ii evolution-exchange-dbg                      2.24.1-0ubuntu1
ii evolution-plugins                           2.24.1-0ubuntu2
ii evolution-rss                               0.1.0-1ubuntu2
ii evolution-webcal                            2.24.0-0ubuntu1
rc libcamel1.2-13                              2.24.0-0ubuntu3
ii libcamel1.2-14                              2.24.1-0ubuntu1
ii libebackend1.2-0                            2.24.1-0ubuntu1
ii libebook1.2-9                               2.24.1-0ubuntu1
ii libecal1.2-7                                2.24.1-0ubuntu1
ii libedata-book1.2-2                          2.24.1-0ubuntu1
ii libedata-cal1.2-6                           2.24.1-0ubuntu1
ii libedataserver1.2-11                        2.24.1-0ubuntu1
ii libedataserverui1.2-8                       2.24.1-0ubuntu1
ii mail-notification-evolution                 5.4.dfsg.1-1build1
ii nautilus-sendto                             1.1.0-0ubuntu1

 

Here are the ones from Mint 6 RC1

 

ii  evolution                                 2.24.1-0ubuntu2
ii  evolution-common                          2.24.1-0ubuntu2
ii  evolution-data-server                     2.24.1-0ubuntu1
ii  evolution-data-server-common              2.24.1-0ubuntu1
ii  evolution-data-server-dbg                 2.24.1-0ubuntu1
ii  evolution-dbg                             2.24.1-0ubuntu2
ii  evolution-exchange                        2.24.1-0ubuntu1
ii  evolution-plugins                         2.24.1-0ubuntu2
ii  evolution-plugins-experimental            2.24.1-0ubuntu2
ii  evolution-webcal                          2.24.0-0ubuntu1
ii  libcamel1.2-14                            2.24.1-0ubuntu1
ii  libebackend1.2-0                          2.24.1-0ubuntu1
ii  libebook1.2-9                             2.24.1-0ubuntu1
ii  libecal1.2-7                              2.24.1-0ubuntu1
ii  libedata-book1.2-2                        2.24.1-0ubuntu1
ii  libedata-cal1.2-6                         2.24.1-0ubuntu1
ii  libedataserver1.2-11                      2.24.1-0ubuntu1
ii  libedataserverui1.2-8                     2.24.1-0ubuntu1
ii  libevolution3.0-cil                       0.17.5-0ubuntu1
ii  mail-notification-evolution               5.4.dfsg.1-1build1
ii  nautilus-sendto                           1.1.0-0ubuntu1
ii  openoffice.org-evolution                   1:2.4.1-11ubuntu2

 

Not that this makes a difference, but Ubuntu is installed on a Dell 745 desktop, and Mint 6 is on a Dell D620 laptop. Evolution is not an application that should care about such things, though. The Mint and Ubuntu packages match in all their core parts: Mint does not change anything from Ubuntu, so I expect that Mint will work just as well as Ubuntu in the office.
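If you want to run the same kind of package comparison yourself, something along these lines does the trick; the file names are just examples:

dpkg -l | awk '/^ii/ {print $2, $3}' | sort > ubuntu-pkgs.txt
# run the same command on the other box, copy the file over, then:
diff ubuntu-pkgs.txt mint-pkgs.txt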

 

Mint does change one thing about Evolution: they do not install it by default. Thunderbird is the email client of choice for Mint. Hard to argue with, except that I need Evolution and the Exchange connector. Ubuntu 8.10 installs Evo, but not the "evolution-exchange" package. Either way, I have to tweak the install with Synaptic or apt-get in order to have my MS Exchange 2003 resources available on my Linux desktop.
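The tweak itself is small; from a terminal it comes down to something like this on either distro:

sudo apt-get update
sudo apt-get install evolution evolution-exchange

Synaptic does exactly the same thing with a few more mouse clicks.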

 

Evolution works exactly the same in both places. It has the same problems too, such as having trouble figuring out what the mail folder index should look like if I do a mass delete in one place. The other instance of Evolution often never sees the delete correctly, and loses track of what is in the INBOX folder. I wrote about this back in February, and nothing has changed. It is very annoying, but not life threatening. I just delete the mail folder index, and everything re-syncs from MS Exchange. It would be nice if there were a resync button, or, even better, if it would detect that it had lost sync and fix it itself. Probably all of this is moot, though, since focus appears to be on MAPI Exchange server access for Evolution 2.26.

 

I should note that in the comments section of my post about Ubuntu 8.10 there is a comment titled "Non-crashing evolution? I don't believe it", posted by hyrcan, which says that they have not been able to get Evolution to work for them against MS Exchange 2003. I have no explanation. I have done nothing special, installed nothing special, nor am I aware of our MS Exchange admins doing anything special to make it work better. There is a clear difference in success, but I have no idea why. I would be more than happy to work through a triage effort to see if we can figure that out, though.


OpenOffice.org

 

Both Ubuntu 8.10 and Mint 6 RC1 ship OOo 2.4.1, with the addition that it can read and write Office 2007 formatted documents. This is because they reached ahead and grabbed the Go-oo patch set, so 2.4.1 from Ubuntu 8.10 and Mint 6 has one of the big new features of 3.0 included. I have not seen many Office 2007 documents yet, but I am glad I can already deal with them.

I was disappointed enough about 3.0 not being in Ubuntu that I went ahead and added a repository and installed it. I did not do this on Mint, though. 2.4.1 is more stable, and I am thinking about backing 3.0 off Ubuntu. The whole reason they did not put 3.0 in Ubuntu is here:

 

http://brainstorm.ubuntu.com/idea/14433/

Developer comments
Unfortunately, since the final release of OpenOffice 3 was delayed, there was not enough testing time to include it by default in Intrepid.
OpenOffice 3.0.1, to be released on Dec. 2, is a bugfix only release and should prove to be much more stable than the current release. This release will be available on the backport repository.
More infos:
http://www.tectonic.co.za/?p=3447

 

Mint 6 appears to have followed the same path that Ubuntu chose and stayed away from OOo 3 for now, even though it shipped enough after both the Ubuntu 8.10 and OOo 3 releases that they could have included it had they thought it wise.


Active Content

 

I have never really talked about things like Flash and media players as things an office desktop should or has to do. I'd be willing to bet that there are many IT departments that keep such things very locked down. On the other hand, in the Web 2.0, active-content world we live in, being able to access active content or watch short movies (say, internal training programs or the like) is probably required. This was always one of the reasons I liked Mint so well: it made content a no-brainer. Flash was already installed. Much of the non-free, non-Open-Source stuff that so many Linux distros (like Fedora) avoid like the plague is installed and ready to go.

 

Turns out Ubuntu has made real strides there as well. As a test (and I hope the IT guys don't swoop in on me), I played the new Star Trek trailer on both the Ubuntu and Mint machines. It worked on both, but it loaded faster on Mint. This is cool, because the ST trailer is in QuickTime format. I did not do anything special. It just worked.


Hardware Support

 

Ubuntu 8.10 works extremely well on the Dell 745 desktop, and Mint 6 works extremely well on the Dell D620 laptop. Each has its own challenges. The Dell desktop has an Nvidia graphics card and two monitors. The laptop is... well... a laptop. Wireless works out of the box; the chipset is the Intel Corporation PRO/Wireless 3945ABG. Intel and Atheros are my two favorite wireless vendors, because their stuff usually just works under Linux.
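That chipset string is straight out of lspci, by the way; something like this will show what wireless part a given box has before you commit to an install:

lspci | grep -i -E 'network|wireless'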

 

Both systems enabled Compiz by default, and it works in both places without issue, even though the laptop has the relatively graphics-challenged Intel Corporation Mobile 945GM/GMS. I say it is graphics-challenged, but Compiz works without any issues at all, so I guess it is good enough!

Volume up/down buttons on the laptop are enabled by default, and that is always very nice to see. Those special laptop buttons are often orphaned.

Mint 6, even in its RC version, appears to just work at work.
