Quite literally a stack. Of identical computer systems. Weekend hardware retirements net four-of-a-kind systems for a new round of Enterprise Linux Desktop testing

 

There are some advantages to being in my job. Not only do I get to see all the cool new hardware, "play" with the systems of my youth (VAX 7000 anyone?), and see the same diversity of software as hardware, but I get to sort through the hardware discard pile. The discard pile has been pretty tall lately too, what with all the machines we are able to retire because of new technology like X86 virtualization. Sometimes a computer gem or two appears in that pile that gets hauled back to my office and then late one night after everyone has gone home, gets Linux installed on it.

 

He Started It!!

 

I moderately recently wrote a post about the possibility of using Mepis as an Enterprise Linux Desktop in an MS Windows infrastructure based shop. I thought I was being fairly clear about the ground rules of that particular evaluation, especially the "Needs to work with MS Exchange to work here" part. I said nothing about its suitability in places where one is lucky enough to *not* have to deal with undocumented and arcane MS Windows protocols. My thesis is that until Web 2.0 is able to abstract the end user away from the MS-created protocols or the special way MS creates non-standard versions of standards like Kerberos or WebDAV, a successful Enterprise Linux desktop will have to be able to deal with them directly.

 

 

"Off Label Mepis" was not universally popular with the Mepians of the world, especially over at MepisLovers. In the discussion of my post at MepisLovers one of the factors called into question about my testing method was that I had done it in a virtual machine. Every time I read a comment like that it is like going back thirty years to the early days of VM on the mainframe and frequently having OS/VS2, DOS/VSE, or MVS people tell me that VM was not a good place to test things. What is old is new again. Still, it is within the realm of the possible that things like timing issues of a VM [I.E. the way a virtual machine does not really know what is happening in real time, since all it sees are the times it is being dispatched] can affect a test. I have never had that happen on a test of Evolution against MS Exchange from a VM before, but anything is possible.

 

 

I repeated the test on real hardware, and my results did not vary on the key point that Evolution did not work against our MS Exchange server. That pretty much was what I expected, but it didn't take me that long to set it up to verify it so it was worth doing. I try not to ever dismiss a criticism if it might have any validity.

 

 

Ever since that comment I have been thinking it would be nice to have some standard hardware to be able to compare one version of Linux to another *at the same time*. Serial OS loads for a comparison are a pain in the stern. What if I want to go back and check a different thing or forgot to test something? Easy on a VM. A pain to reload and repatch on real hardware, even as fast as stuff like Ubuntu or Mint load these days.

 

 

I do have a small collection of old laptops: Compaq M300's. These are interesting to test with, especially for Linux on a laptop type things. But they are slow and have different amounts of memory. I talked about these units a while back, when I was first comparing Kubuntu and Ubuntu.

 

What came into my possession was four identical Dell GX260's. They had all sorts of advantages over the M300s:

 

  • Small Desktop form factor. The GX series came in three case sizes, and these were the smallest. These computers are old enough that there is no picture of them on the Dell website, though.
  • Stackable: Little feet line up with indentations in the case of the unit below. Stacked four tall, they make an almost perfect cube.
  • Moderately fast for my normal level of test gear: 2 GHz Pentium 4, 512 MB RAM, 20 GB hard drives.

 

 

Also very nice is that my old production desktop system was a substantially similar DX340, so things I do with the 260's are mostly comparable to what I do on the 340 [same 2.0 GHz Pentium 4 CPU], other than that the 340 has 1.2 GB of RAM and an 80 GB hard drive.

 

Virtualization Strikes Again

 

 

It is worth noting that the four GX260's came into my Linux-loving arms *because* of the success in our R&D labs of virtualization. I talked about this a bit in "Virtually Greener". The particular lab these came out of has gone from just over 250 computers pre-virtualization to today's 120 computers: more than a 50% reduction in the lab. These four plus a couple of others are the only ones that were re-deployed in other missions. The rest have been sent to the great computer recycler in the sky. Or maybe New Jersey. Someplace.

 

 

These four desktop systems had been sitting side by side on a shelf in a 19" rack, their small form factor actually working pretty well in that regard. They had been running various levels of MS Windows Server for testing things. Now they are going to show up over at Linux Counter.

 

The Six

 

 

Add in my current production desktop, a Dell 745, and I have six different systems to run six different versions of Linux *at the same time*. The 745 is currently running Mint 4.0, and will go either to Mint 5.0 or Ubuntu 8.04 in the near future. I have been testing 8.04 on my personal laptops (IBM X30, Acer 5610) for a while now, and it is very impressive.

 

  • The 340 has been running PCLinuxOS 2007 since I last posted about it here. I donated money to that project to get access to the faster servers and more recent / additional packages and updates, so it is fully set up and tweaked out the way I like it.
  • 260 number one: Ubuntu 8.04 beta (LiveCD)
  • 260 two: Fedora 9 Alpha (LiveCD)
  • 260 three: OpenSUSE 11.0 Alpha (LiveCD)
  • 260 four: Mandriva 2008.1

 

 

The 260's and 340 are hooked to an Avocent switch, a Dell 1280x1024 17 inch LCD panel (172FP), a Sun USB keyboard, and a Dell USB mouse. I was able to get all but one of them running correctly at 1280x1024, but Mandriva and OpenSUSE had to be told to use that resolution, since both preferred 1024x768.
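For distros of this vintage that guess the resolution wrong, the fix is either the distro's own display tool (XFdrake on Mandriva, SaX2 on OpenSUSE) or a hand edit of the Screen section in /etc/X11/xorg.conf. A minimal sketch of the kind of edit involved follows; the Identifier, Device, and Monitor names here are examples and will vary by install:

    Section "Screen"
        Identifier   "Default Screen"
        Device       "Configured Video Device"
        Monitor      "Dell 172FP"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
            Modes "1280x1024" "1024x768"
        EndSubSection
    EndSection

Restart X after the change and the desktop should come up at 1280x1024, falling back to 1024x768 only if the driver rejects the higher mode.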

 

 

All were installed on the entire hard drive. All use GRUB. Poor LILO. Seems its fortunes have passed.

 

 

The one 'running correctly' holdout is Fedora. It works fine off the LiveCD, but gave several problems on the install. One is that there can be no swap space defined while installing it. It is a documented problem that Fedora knows about, so I assume the next release or two will fix it. The other is that once installed it will not boot at all. Just won't. When I want to look at Fedora 9, I just run it on the LiveCD for now.

 

 

Only Mandriva is an official release, so I will not make any judgments here about the relative anything about these OS's, other than to say Ubuntu as a Beta is farther down the road to GA readiness, and was dead easy to install, but the updates are still coming fast and furious in 'Update Manager', so it clearly is not done quite yet. It is less than a week away from GA as I write this in mid-April. Fedora 9 is set for Mid-May, and OpenSUSE 11 mid June.

 

I wanted to get this config, and what I am planning to do with it, set down here in this post, so that I can refer back to this test setup as these releases come to GA'ness over the next few months and look at them on a more level playing field. As always, I will be trying to figure out the big question: which of these desktops work as Enterprise Linux desktop OS's (whether they were designed to or not).

 

 

Finally, you might have noticed Mepis is not among the test stuff. If I had one more 260 computer, it might have been. But probably not. Mepis, according to the folks in the MepisLovers forum, is waiting for KDE 4 to add all the bits and pieces required to support MS Exchange, eschewing the Gnome stuff that is already there. I do not know if that is accurate or just the opinion of the poster, but until I see a hint someplace that Mepis or KDE 4 has made some moves such that they interact better with the MS infrastructure I have to deal with here at the office, I will probably not spend any more time on it.

 

And now for something completely different..

 

 

I just wanted to insert a quick note here, in case anyone was wondering what has happened to my posting rate recently. The answer is:

 

  1. BladeLogic
  2. End of quarter, end of fiscal year
  3. Reviews

 

 

I have been involved in the activities around bringing the latest member of BMC's family into the fold. The BladeLogic acquisition has been hugely exciting, but it has kept me pretty busy.

 

 

Then, we not only closed a quarter but closed a fiscal year, and at times like that I take off my R&D Support hat, put on my Production IT hat, and help where I can.

 

 

Finally, this is review time for my team, and writing reviews takes me a great deal of time and effort... writing time that I don't spend writing here.

 

That's my story, and I'm sticking to it.

Wrap up of the migration from the Tru64 TruCluster mission critical NAS server to the CentOS 5 Linux NAS server

 

 

This post wraps up a topic I have been posting about on and off here for a while: the new mission-critical NAS server cluster based on CentOS 5. Previous posts in this series, starting August 29th of 2007:

 

  1. Tru64 NAS Server Replacement Project
  2. NFS, GFS, nodirplus / readdirplus, and Tru64 updates
  3. CentOS 5 NAS Cluster
  4. CentOS 5 HA Cluster Speeds and Feeds
  5. Kernel Hackage
  6. One Week Later
  7. Bug 431253
  8. GFS or NFSD?

 

 

We are not quite done with the migration of all the file systems off of the Tru64 TruCluster. Its original ~4.5 Terabytes have been slowly absorbed by the new Linux cluster. We have been very cautious. We wanted to make sure that we introduced change in a controlled manner, in case we had any more of those HP-UX client type issues lurking in the woodwork. Dan Goetzman, chief NAS abuser, did find another one, and only this week too. More on that below.

 

Semantics

 

We also have the fact that we are still running our modified version of the CentOS 5 OS. Neither RedHat nor CentOS has closed the issue we opened (See post "Bug 431253" above), and I think that is a smoking gun waiting to shoot some folks in the toes. Here is why I think that: The file open / close semantics used to "live" inside the code provided by each file system. Ext3 file open / close code could therefore be slightly (or even very) different from GFS or XFS or some other file system, since each file system was written at different times and places by different people for different reasons, and in some cases, like XFS or JFS, for operating systems other than Linux. XFS comes to us from SGI, therefore Irix, and JFS is from IBM / AIX.

 

Recent kernels have provided the file access semantics internally. An installable file system is not required to use them, but they are available to all. The file system maintainers have started to move from the code inside each of the various file system types to routines in the kernel. It makes sense: Why maintain this common code in all these different places?

 

 

GFS went 'there' (to using the kernel file access routines) first, and it is our belief that this is where the HP-UX client issue was introduced. The kernel routines (written by a subset of people who more than likely did not write all the internal routines contained in all the different file systems) don't work 100% the same way as those buried in the file system code. This might be a bit of an understatement.

 

Since Dan's reading on the subject leads him to believe that the other FS types are going to migrate to letting the kernel handle the semantics, that was/is going to put everyone in the same boat. The broken HP NAS client boat. So that the metaphor is not too mixed, the smoking gun is then used to shoot a hole in the bottom of the boat, passing through one's toes and perhaps some aquatic life forms.

 

 

We don't have to migrate to a new version of the CentOS OS any time soon though. CentOS is working fine. Dan's file semantics kernel patch is working and has a long runtime on it, so we have confidence we can move forward. We do have some motivation to move forward if we can: The TruCluster is off both hardware and software support.

 

Ouroboros Tru64 TruCluster

 

 

The Tru64 TruCluster hardware now has so much excess capacity, since its formerly brimming file systems have been "drained" over to the CentOS cluster, that any hardware failure could easily be dealt with by self-cannibalization. Ehww. Sounds ugly when I type it that way. True though: we have two ES40 server nodes, each with four GB of RAM and four CPU's. There are empty RAID sets of all disk capacities (36GB, 72GB, 144GB). The fiber channel cards, Brocade switches, memory channel, etc. are all twinned out for the TruCluster. If something fails, it fails over to the surviving bits, and in the seven years we have had this gear the only failures we have had have been either of disks or failures of imagination. In failure mode, we can choose to either ignore it for now, use the redundant capacity, raid other Alpha based gear for parts (I still have VMS servers running on Alpha gear which in a pinch might give up their lives), or worst case do a time-and-materials call to HPQ. More than likely, the TruCluster will just eat itself though, reducing in size and capacity as it goes. That takes care of the hardware.

 

The software is a different story. It cannot eat itself ... hopefully. It never has anyway. It is stable and we have not patched it in literally years. Before that the patch rate was pretty low, and consisted mostly of point patches for specific problems. Stability of the OS / NAS bits is good news and bad news. Good that it is stable. Bad when things like NFS V4 are starting to creep into the shop, which the TruCluster just will not deal with other than by forcing the client to downshift to V3 or V2.
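In practice the downshift is nothing more than a mount option on the Linux side. A sketch of pinning a client to V3 against the old server; the host and export names are placeholders, not our real ones:

    # force an NFSv3 mount against the old TruCluster export
    mount -t nfs -o nfsvers=3,tcp,hard,intr trucluster:/export/build /mnt/build

Being explicit this way avoids surprises on any client that tries V4 first and then has to negotiate its way back down.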

 

Easy Does It

 

 

This slow migration of critical file systems allowed Dan to avoid spending concentrated, crunch-mode time on data migration, and instead go slow, do a good job, and think about each move in depth. Quality still counts, especially when you are moving your most critical bits and bytes!

 

 

As I write this, I just looked at the status of the move on the internal Wiki: the vast majority of the file systems that have for literally years lived on the TruCluster are now over on the CentOS 5 cluster. We have been running builds and packaging against them for months.

 

 

The uptime of the cluster as a whole has been satisfactory. We have had no customer facing service outages at all, and while there have been rolling upgrades and individual node outages, they have been inside the design parameters. The point of doing this as a cluster was to be able to offline a node, work on it, then have it rejoin the cluster, and Dan has taken advantage of that to upgrade the ILO cards and do various other service related things. I looked at one of the three nodes a moment ago, and it has over sixty days of uptime. That does not matter though: the customer facing service uptime has been essentially unbroken since we put it into service last December.

 

 

The main thing, and this is the key point, is that our customer never knew we did anything to the cluster, and that is exactly how it used to be with the TruCluster, even though the underlying OS, clustering technology, hardware, and therefore technical procedures are completely different.

 

Sun Client Bug

 

 

Since I last posted here, we have discovered one more unruly client. This time it is Solaris, and the fix is a patch to that OS, not something on the server. Dan as usual has been all over the problem. Here is what he found. First, a note in a web forum from Casper Dik at Sun:

 

Casper H.S. *** <Casper.***@xxxxxxx> writes:

 

 

"Ross"  <nospam@xxxxxxxx> writes:

 

Thank you, Casper!
Here is the output:
bash-2.05$ cd testdir
bash-2.05$ ls -f
.. testfile .

Ah, yes.

The chmod code is broken and can't deal with "." and ".." not being the first two entries of a directory.

Bug id: 4171523, which was filed eons ago and not fixed (being a P4 it dropped off the radar screen, it seems).

I've upped the priority, pinged the responsible engineer and added that chown suffers from the same issue.

Casper
--
Expressed in this posting are my opinions. They are in no way related to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may be fiction rather than truth.

 

 

Casper appears to be a pretty valid authority on such things, according to some research someone on my team did.

 

 

 

 

Dan used Casper's information to find this:

 

 

"There is a Solaris BugID for this exact problem, they seem to know about it.
It appears to be only fixed for Solaris 9 and 10;

 

125499-01 - For  Solaris 10 on sparc
123394-01 - For Solaris 9 on sparc

 

I [Dan] applied the patch to [a Sun system we use a lot], and all is well.
Fix is going to be on the Solaris side for this one...

 

The patch fixed chmod/chown as that is what it patched. It looks like chgrp is still broken, same exact defect.
So far, I cannot find where Sun has fixed  chgrp for the same problem"
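For anyone following along at home, applying one of those patches on a Solaris box is the usual patchadd routine. This is a generic sketch from memory, not Dan's actual transcript, using the Solaris 10 patch ID from the list above:

    # as root, after downloading the patch archive from Sun
    unzip 125499-01.zip
    patchadd 125499-01
    # confirm it took
    showrev -p | grep 125499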

 

This is not a show stopper as near as we can tell, at least for us. Your shop, and mileage of course will vary. Peeling back the covers a bit, Dan found the underlying bits to this that were causing the problem:

 

The GFS filesystem getdents() call returns the directory entries in no particular order. These get returned back, via NFS, to the Solaris client, where the user space utils chmod/chown/chgrp EXPECT items #1 and #2 to be "." and "..". Depending on the returned list order, a loop can develop, and does in our example, until the ch* command has exhausted its user space open file limit. I confirmed that our LCFS server is NOT returning the list as the Solaris client expects. Note that, as far as I know, all other NFS clients have no problem with the list returned. Just SOLARIS!

 

Great! A Solaris bug that seems to be in most/all clients (I have tested [a solaris client] and [another solaris client]), triggered by an abnormal, but not illegal, return by the NFS server.
... I did test with XFS as the backing store filesystem, no problem. So it must be the GFS getdents() quirk.
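A quick, unscientific way to see what a server is handing back is to walk a mount with unsorted listings and flag any directory where "." and ".." are not the first two entries. This is my own sketch, not part of Dan's write-up, and the mount point is a placeholder:

    #!/bin/sh
    # flag directories whose raw (unsorted) listing does not start with . and ..
    MOUNT=${1:-/mnt/nas}
    find "$MOUNT" -type d | while read -r dir; do
        first_two=$(ls -fa "$dir" 2>/dev/null | head -2 | tr '\n' ' ')
        case "$first_two" in
            ". .. "*) : ;;                     # the order the Solaris ch* utils expect
            *) echo "out of order: $dir ($first_two)" ;;
        esac
    done

If Dan's diagnosis is right, an XFS-backed export should come back quiet and the GFS-backed ones should not.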

 

Relative Costs, Relative Features

 

I have noted here before why we went to the complexity and expense of the TruCluster, but I will assume you have not read everything in this blog over the years on that subject. It goes all the way back to the beginning in 2005, in posts like "Linux and NAS", where I noted this:

 

 
"We take a 2 tiered approach to NAS storage for R&D Support. In our first tier is the 5 9’s type storage. The stuff that just can’t go down. The bits and pieces that are used on our “assembly line” to build and manufacture our own products. The kind of storage that, if it were down, would idle hundreds of people around the world in R&D and endanger our time to market. And we know with a great deal of pain just how critical this storage is, because we used to use a storage appliance there, and it could not survive our network. It crashed all the time, and we paid for it dearly."

 

 

We paid pretty dearly for the TruCluster too: in round numbers, about 140k per Terabyte. Sure, a single SATA disk has a Terabyte now, and for a bit less money per TB. For fun, I divided the cost of a one-TB SATA disk by the TruCluster's per-TB cost, and the spreadsheet said that the disk basically cost nothing, as a percentage. Tweaking the accuracy up a bit higher in OpenOffice Calc, I get 0.00277. Pretty near free.
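Back-solving from those two published numbers (the disk price itself was never quoted, so this is just my arithmetic):

    0.00277 x $140,000 per TB works out to about $388 for the one-TB SATA disk,
    or roughly 0.3% of what the TruCluster storage cost per Terabyte.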

 

I will not say that the CentOS 5 based system is as good as our TruCluster is/was. It is both better and worse, and it depends on how you look at it. How you define "better". That it achieves high customer facing uptime was a requirement. That it is as fast or faster (and it is faster at some things, such as CIFS) was also a requirement. It would not even be worth pursuing without those very minimal goals. It is less expensive. On the down side, our little Linux machine is not as HA, since nothing invented on this planet to date can match TruCluster on that score. Sigh. <tongue-in-cheek> I guess that is why it had to die. </tongue-in-cheek>

 

There are things the new server did not have to be. One obvious thing is that it did not have to be the same SSI cluster architecture as what came before it. TruCluster is just one possible SSI cluster. The best one out there, but there are others. There is a Linux SSI project, although we gave up waiting for it to mature to the same place (or near enough for our needs) as TruCluster. According to the feature matrix of the current OpenSSI product, it looks like it might be viable for NAS now: NFS-HA is listed in any case, plus "a highly available cluster filesystem with transparent failover". Maybe our next generation NAS server will have a look there. The technology moves so fast that every generation has been significantly different than the one that came before it. But I slightly digress.

 

 

The primary design goal of the TruCluster based NAS server was not to use the technology for its own sake but to have NO customer facing service outages. The new server did not have to be SSI. It just had to achieve the same thing  from the point of view of our R&D customer. Serve files fast and reliably: be NAS data-tone.

 

The TruCluster is/was far more high performance on the I/O subsystem, to say the least. Hundreds of disk arms versus the one SATA arm in this comparison. Cache in the NAS heads. Cache in the HSG80 disk controllers. Cache inside the disks. Disks spinning 25% faster. It is not a fair or even sane comparison. At best it gives a hint about how one might go about building a lower cost solution with high density disks as a starting place. Even at the high cost of the 2001 Tru64 based solution, the avoided cost of downtime paid for the TruCluster over and over and over. I tracked it once. I figure, based on how badly the NAS appliances had hurt us, and based on a few times when the TruCluster stumbled on various issues but did not fall due to its design, that we came out about two million dollars *ahead*.

 

 

With the passing of Tru64 into that dark night, the new kid on the block comes with a very different price point and way of doing things. I have posted the design here already (in the links above), and the speeds and feeds, so I will not beat that to death. The main point here is that our current tier 1 file server solution, 7 years down the road from our last one, is not the same technical solution, but leverages commodity parts and prices, is assembled a slightly different way to achieve the same service goal, and runs about 1.5% of the cost per Terabyte.

 

That cost is not the whole story. The Tru64 TruCluster came with vendor support. Hardware and software. Some of the best in the biz too: ex-Digital folks with a passion for their gear. Our solution is supported by us, and while the hardware has support contracts with Sun and Apple, we also have onsite spares of most of the major subsystems so that we can get the unit back up and running fast. If it works right, most downs should *not* be customer facing.

So far, so good.

The little green laptop that could

 

I have read a great deal about the OLPC XO-1 over the last two years, and I have written about it myself a few times, starting with my "Linux Inflection Point" post from April 13th, 2006. More recently I spent some time with a couple of actual XO-1's at the BarCampAustin III event, which I talked about a couple of posts ago. Yesterday, my "Give One, Get One" unit showed up at my house. It even had a blue "head" ( O ) on the case, which I hoped for, but could not ask for. You get what you get. There is a large emoticon on the case, made with the XO, that sideways looks like a person's body and head. There are twenty different head colors and twenty different X/body colors for the large XO emoticon that decorates the head unit of the laptop, for four hundred different color combinations. I wanted Blue/Blue, got Blue / Light Green. Close enough.

 

In all the reading I have done about the XO-1 outside the OLPC website and Wiki, most of it has been reviews of the technology: Hey, look how small it is. How can a 433 MHz processor possibly be fast enough? 256 MB RAM? How is that usable? How do you open this thing? Lookie: it's Fedora Core 7 with a 2.6.22 kernel and a totally new user interface (called Sugar)! I wonder if I can put Ubuntu on here... USB and SD slots! Cool.

Sometimes the articles are reviews of how kids interact with the cute little boxes: Opened it in no time. Had web cam going in less than five minutes. Lots of laughing and smiling and happiness.

 

I couldn't wait to get one and play with it myself and see what my reactions would be. Having read a bit, and played with Anne Gentle's units, I had a head start. I was not going to be the dopey adult who could not even get the thing open or anything. I had some idea what the special keys did.

 

I admit it: Like most techies, I went for the technical side first. I hunted for and found the command line, and poked around the OS. Under the covers, it looked like every modern Linux I know. I inserted a 2GB SD card, and it mounted it. I stored files there. Then I started to think about how kids might play with it. What they might learn. How this might appear to someone who had never seen a computer before, or at least had never had a chance to touch one. Play with one.

 

It started to remind me of my first chemistry set. I did the regimented experiments that came with the box for a while, but then I just started to play with it. Try different things. I accidentally made a clear silicate-looking ball of material without knowing how I did it or what it was. I also thought of the time I pulled a broken alarm clock out of the trash can and took it apart. My dad asked me what I was doing: he didn't know it was from the trash, and thought I was taking apart a perfectly good alarm clock. Back then alarm clocks had a motor and gears, and I had it apart to see what it was, how it worked, and to see if I could fix it. It went back together, and it worked for years after that, but in truth I was not really sure how exactly I had fixed it. I intuited my way around the thing, poking and prodding and turning things till everything spun, and it was all OK after that.

 

That is what the XO is, but for a new world. A world where not having computer / technical skills means you are limited in the things that you can do.

 

After a night of messing with the music application and the Python programming application, adding new applications like Gmail via the free downloads at the OLPC Wiki, downloading U.S. Grant's biography from the Gutenberg project to use the XO-1 as an ebook reader, and generally messing around, I put the XO-1 down around 12:30, and picked up my iPhone to look at today's schedule.

iPhone Attack

 

My technologist mind reeled. My wife, who was sitting next to me working a Sudoku puzzle on her iPhone (national rank: 425 on the NYT puzzle, she tells me, unless they had another of her favorite "diabolicals", in which case it is probably lower), asked me a question about the XO. I am unable to recall it. My mind was flashing back out of the play zone I had been in and back to the other world I live in. The technology of the iPhone and the technology of the XO stood in stark contrast.

 

Here in my one hand was this tiny Internet tablet, with a processor 1.5 times the MHz speed of the XO, the same RAM, 8 times the "disk" space, a hugely brighter and more vivid screen (even if lower resolution) with higher DPI. 1/6 or less the weight. 4 times as many radios. Sure, the iPhone is more than twice as much money, and that is right now, before the XO gets the volume up. No educational software to speak of: not yet anyway. The iPhone is, frankly, a rich person's / rich country's toy, or perhaps instrument. A very shiny, blinding one. For a brief moment I lost sight of what the XO was. Suddenly it was this other thing. This seemingly slow, low tech thing that made no sense to me. I lost sight of the child. All I saw was the high tech thing in my hand and wondered why I would ever use this other device.

 

By and large I won't, other than to play. It is not for me. The concept of the XO is hard to grasp. Mysterious. It is not a toy. It is not technology. It is not, at least to start, Linux.

 

It is like Zero

Zero

 

Charles Seife wrote a great book back in 2000 about the mysterious and dangerous number Zero. Zero is a very interesting and not very well understood number. Without it, all of modern science does not work. I know this not just because of the book, but because Commander Samantha Carter told Daniel Jackson that on Stargate: SG1. Without Zero modern physics cannot exist. Infinity cannot be dealt with. Math breaks down and can only do very basic things. As obvious, or at least as accepted, as zero is today, it had a long and tortured trip to acceptance, and there might still be a few who have not accepted it. Without zero there is no space program, no computers, no modern medicine. Without knowing how to use zero, one is doomed to a life of low tech.

 

You don't have to know zero if all you plan on doing with your life is living with things you can build with your hands, or perhaps certain other low tech pursuits. There is not anything wrong with only wanting those things either. However, if you want to get involved in the modern (read: technology oriented)  world there are things you have to know. Things you have to learn. Concepts that have to be mastered. Zero is one of them.

 

Technology is another. Learning the simple mechanics of a keyboard, or the logic of a program, or music or any of a myriad of other things that we all take for granted in the western world. My kids grew up with Linux computers from the time they could first type. They could install software, and later even tear down and rebuild desktops and laptops. They had access to this stuff from the very beginning of their days. They were very lucky.

XO-1

 

The XO-1 is a learning tool, and it is a superior one. It may be slow by modern standards, but it is not in an MHz race. We technological types often get lost in the spec sheet, forgetting things like the fact that my first computer (a TRS-80 Model 1) ran at 4 MHz. It only ran on 110v, could not be networked other than by a 1200 baud modem, and could not be schlepped about. It still managed to help me learn BASIC though.

 

The XO is the delivery vehicle of a much larger concept than speeds and feeds and the underlying technological platform that seems so woefully slow and small to my iPhone-educated eyes.

 

Its job is to teach. To enable. The principles on the OLPC website take us part of the way there:

  
Our five core principles
Child Ownership: I wear my XO like my pair of shoes.
Low Ages: I have good XO shoes for a long walk.
Saturation: A healthy education is a vaccination, it reaches everybody and protects from ignorance and intolerance.
Connection: When we talk together we stay together.
Free and Open Source: Give me a free and open environment and I will learn and teach with joy.

It is far more than that though. Even if the XO-1 is never delivered to every child on the planet (the only way one could keep these cute little things off eBay and in the hands of the kids is if they are ubiquitous), it does not matter. OLPC has been criticized as being heavy handed. Home schoolers in the US are angry because it is not easy for them to get one (it takes a 30,000 USD minimum commitment, now that the Give One Get One program is over, to get one here in the US). Intel is woofed because AMD provides the processor. Microsoft is woofed because Linux is at the core of the OS.

 

None of that matters. Sure, as a Linux person I am happy about Linux as the core OS, but that is not the key thing. The key thing is that the people who love the XO-1 and are giving them to kids, and the people who hate it and are doing everything they can to get *their* computer into the hands of the kids of the world, are all doing the same thing. A US President used to call such things a hand-up, not a handout. They are giving kids who would never otherwise have had a chance to get hold of technology and its related ideas and paradigms that chance.

 

They are delivering Zero to the world.

Steve Carl

Missing Evolution API's

Posted by Steve Carl Mar 11, 2008
My bet as to why the MAPI stuff from Evolution is not working yet

 

Last week in "New MAPI connector project for Evolution" I wrote about the new MAPI access project I had uncovered for Evolution. I mentioned there that I was a bit dubious about how they would be able to do the API work that HP was not able to do over the course of years with HP OpenMail. I think the plan to get it going relied on the fact that MS was going to publish the API's as part of the EC's ongoing anti-monopoly legal actions and MS's responses. A very long and complicated and generally hairy subject beyond the scope of this simple post.

 

 

Over at ZDNet, I read this article today titled Gaps found in Microsoft Exchange API documentation.

 

 

I am reading a bit between the lines here, and extrapolating from history, but I am pretty sure this is the cause of the problems I have had with testing the new MAPI service provider.

 

 

Last week I grabbed up my trusty IBM T41 laptop, which is running OpenSUSE 10.3. I had two big experiments I wanted to do. In the late night hours I loaded up both the KDE4 desktop, and the MAPI stuff from jjohnny.

 

MAPI

 

 

As one would expect from a basically Alpha / early beta project, there was no real doc to speak of. The FAQ gave me the clues I needed once I had installed the RPM's to get the MAPI service to even show up. That was in fact the hardest bit. Here is what basically worked, right before it did not work at all.

 

 

  1. Downloaded the three required RPMS from jjohnny (at the link above) to a special directory
  2. Installed the RPMs with 'sudo rpm -Uvh *.rpm' while in that directory. No errors (a quick sanity check for this step appears just after this list).
  3. Rebooted to be sure the box was clean of pre-running processes
  4. Brought up Evolution
  5. Enabled the MAPI service provider in 'edit/plugins'. This was key! Before that, the "domain" question would not appear in the account setup dialogs.
  6. Set up MS Exchange account info in 'edit / preferences'
  7. Exited Evolution and again rebooted just to be sure everything was all clear. Felt like I was using MS Windows. :)
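One extra sanity check I could have wedged in between steps 2 and 3, and a habit of mine rather than anything from the FAQ, is to confirm the packages actually landed:

    # list the freshly installed packages (names are from the jjohnny repo)
    rpm -qa | grep -iE 'mapi|openchange|samba4'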

 

 

 

Everything looked OK, except that when I started Evolution, it would crash dump. I sent that to the Gnome folks.

Now I see this article and lights start going on. I wondered how the Evo people were planning on getting this ramped up so fast: They had 2.24 as their target release!

 

 

I checked right before penning this article: no new RPMs for MAPI since last week. I will keep trying as I find or have new things to try. For now though, WebDAV/Connector is still my Linux way into my MS sourced calendar.

Last weekends BMC Co-sponsored, must-attend event in review

 

If you have ever been to a BarCamp like BarCampAustinIII, then you know that each one is unique, lightning in a bottle, not to be missed if at all possible. If you have never been, then it should be added to the list of things to do before you ... err ... something less dramatic than “Die”, but that still makes the point about the event. Insert your own drama there. Let’s just say that it is right up there with sliced bread.

 

If you have not been, describing one is not easy. Here is a travelogue to attempt to give a flavor of what this event is like.

 

This year I was in the "official" role of volunteer. I showed up a day ahead of the event to see what needed helping-with, and tried to help. This evolved into meeting the others on the organizing committee that were working with Whurley to put the event together.

 

The venue was the very impressive “Idea City” of GSD&M, on West 6th Street near Lamar in Austin (as the name implies...). This is the place where they come up with some of the best commercials on TV: My favorite being the one where the designer is taking the couple through a building showing them pictures of his previous work, including the IBM building in Seattle, and basically being full of himself. Finally, he settles into his desk chair and addresses the couple, asking how he might help them. The woman pulls a water faucet out of her purse and says “Design a house around this”, then waits, amused, for his reaction. The building was perfect for what BarCamp needed.

 

There were last minute crises: The train with the t-shirts for the event had literally derailed. Last minute scrambling had another set on the way, and they arrived less than 12 hours before the doors opened.

 

The folks from Viewzi.tv set up an impromptu studio in a corner of the lobby, while some of us assembled the lanyards and badges. Turns out this was Viewzi’s launch event, but I did not know that at the time. Some of us went home for some sleep before the main event, others carried on well into the night.

 

Saturday morning at 8:00am, chaos was in the process of not so much being tamed but having pointers placed into it. BarCamps have some people planning what they will talk about for months, and others deciding at the last minute what they will do. Part of it depends on who comes, and what the wisdom of the group is. In this sense, a BarCamp is to a technical conference what Open Source is to closed. I, being new to the BarCamp “Staff” thing, had started to figure out that no one was going to tell me what to do or how to help. If I had to be told, it appeared it was going to be easier to just work around me. So I started finding things to do and owning them. Unpacking T-Shirts onto tables. Laying out the tables. Retrieving the badge Sharpie pens from whoever just stole them *this* time. Near me others set up paper charts on the wall that people could put post-it notes on with session names for the various conference rooms. They like Sharpie Pens over there, it should be noted. Someone else was dealing with the bands, and the food, and the beer, and the battle bot, and the zillion other details that have to be done, and it all had been decided (as near as I could tell) at the last possible moment.

 

The Wiki was being updated as things were decided, and as sessions were set. Not too many: just a few that the folks setting up knew were coming or were giving themselves. By 10:00 AM the doors were open, and it was clear that chaos was to continue. I decided to be the greeter, and show folks where to sign in on the iMacs, hand them badges and Sharpies, and point them to the swag table (now laden with over 2000 free T-Shirts), the free drinks from Vitamin Water and Sweet Leaf Tea, and the sessions wall. The iPhone DevCamp folks arrived and were set up in a room, only to be seen from time to time thereafter when they needed food.

 

I don’t know when the session board filled up. It was pretty quickly after that: All sorts of things were up there, from the announcement and demonstration of Opera’s new mobile and mini-browsers (8:00pm, Omega Room), to cloud and other social networking sessions, to a hands-on session at 12:00 by Anne Gentle about the OLPC XO laptop. I went to that session, abandoning my post of greeter after two hours. Someone else flowed into the obvious vacuum and greeted folks. At this point I was really starting to get the magic that is a BarCamp, where everyone helps out to do what has to be done and keep chaos just slightly at bay. It even got better.

 

The XO’s are tiny: Smaller than I expected, but they are way cool. Anne had two: Her personal unit from “Give One, Get One”, and a Spanish-keyboard developer version she was using to write documentation for the units. Among other things in that session, we set them up and used the “Dolphin” application to have them acoustically ping each other and report their distance to each other in meters. 3.4 Meters.

 

Lunch arrived (Banzai Burritos from Wahoo’s), and Anne decided to set up the XO’s in a central area so others could see them. When she had to go for a while, I stood with them so others could walk by and have a look. For the next three or four hours people flowed by the XO’s and we talked about them and the OLPC project. There were two consensus items: US schools need to have these things. All of them. Home Schools. 1A schools. All of them. The 30,000 US dollar price that is the entry to buying them (outside of the now past “Give One, Get One” program) is just too high for small and / or rural schools. The other was that the “Give One, Get One” program needs to return.

 

The sessions continued (about eighty or so sessions were now on the board), and the iPhone guys peeked out, blinked in the light, scooped up some more caffeinated beverages, and ran back to their conference room to continue their battle with the new SDK for the iPhone. Early thinking was reported to be that most preferred the “Jailbreak” method so far.

 

At some point the Battle Bot went crazy but I was off running an errand for whurley and missed the whole thing. Viewzi has it on their web site, and whurley has a link in his blog already.

 

The live music kicked off with Soulhat out back in a courtyard area. “Dillo Dogs” from Wahoos arrived. That is what someone told me they were called anyway. They are not on the Wahoo menu, and that is too bad: Best Hot Dog I have ever had. I admit, I do not eat much meat and I am not really sure that Hot Dogs normally are classifiable as meat, but I made an exception for these and disappeared them.

 

Beer flowed from Independence Brewery (where my favorite, as a home-brewer myself, turned out to be a very nicely done Pale Ale). Darkness fell, sessions continued. Music, conversations with all sorts of people.

 

Then came the next band: Karaoke Apocalypse. A terrific cover band with people singing lead for them from the audience. At least the band always sounded good. The people that had been spread over the courtyard moved up close to the stage. A Unicorn kicked it off, quickly followed by “Snax”, and then it went all over the place from there. I went inside to attend a session given by Opera.

 

At 10:00 PM, local ordinance required that the outdoor music cease, which most thought was a shame as it was still rocking. Consensus opinion: Isn’t this Austin? Music capital of Texas? What is up with this 10 PM thing? Sessions continued indoors: the iPhone guys having moved into a different conference room, and ordering then snarfing down a few Pizzas. I guess they missed the Dillo Dogs.

 

The conversations continued: this was one day, and there was just no way to meet everyone, and find all the people you might have something in common with. Every person I met was interesting, open, outgoing, and intelligent: Clearly BarCamp selects for positive traits in people.

Midnight arrived, and BarCampAustinIII came to an end. We cleaned up Idea City so that GSD&M would not regret having had us in for the day, and then folks went their various ways. Some were getting ready for more SouthBySouthWest activities on Sunday. Others were headed out for a very late / very early bit of food. I had three hours yet to drive to get back to the swamplands from Austin. Sleep came very easy, but it was not a good weekend to have the DST thing kick in. Monday is not going to be fun.

 

There just is not any way to really describe the way a BarCamp like this is. I can tell you we gave away over 2000 T-Shirts, saw more than 700 folks attend, had more impromptu, in-the-moment technical conversations than are possible at a more regimented conference, and on and on, and it still would not come close to giving you the flavor of what one of these is like. The good news is if you missed BarCampAustinIII, BarCampAustin4 has already been announced. I have no idea why the switch from Roman numerals to numbers, but I assume it is because the Roman numerals were about to become a great big pain, given the success of these events.

 

Pictures of the whole thing are available on Flickr: http://flickr.com/groups/barcampaustin3/pool/

Looks like Brutus has some competition

 

On January 18th, 2008, Jacob Johnny posted to the evolution-list@gnome.org the following very interesting note:

 

This is an announce mail for the preview of Evolution MAPI provider. This provider can connect to Exchange 2007 servers and also to Exchange 2003, 2000 and 5.5 (untested). After seeing enormous interest by the users in Exchange 2007 connectivity, we have prepared a preview of the current development code from the branch. The evolution-mapi-provider is a standalone rpm but in future it may be part of the Evolution/EDS rpms. It has a dependency on OpenChange's ( http://openchange.org ) libmapi and Samba4.

I'm maintaining the build service project for the provider and I'm planning to give RPMs for OpenSUSE, SLED, Fedora and Ubuntu. We would be doing incremental releases of this periodically and may have nightly builds for this pretty soon (Don't ask me when ;-) The below url should let you access the Samba4, libmapi and Evolution MAPI Provider rpms. http://download.opensuse.org/repositories/home:/jjohnny:/evolution-exchange-mapi-provider

Due to the recent outage of OpenSUSE Build Service, we aren't able to get the rpms ready. So I have built RPMs for opensuse 10.3/i586 alone and is available at: http://gnomebangalore.org/~sragavan/exchange-mapi/i586/ . The build for the project is already queued. So it is possible that by the time you read the mail, the rpms might have been published already. So go check out and give your valuable feedback.

 

If you follow the "openchange" link above, there is more very interesting verbiage and links:

 

 

...the Evolution plugin download page has been updated to reflect the current state of development. The plugin which is now maintained by the Novell Evolution team since October 2007 has greatly been improved and now offer support for Calendar, Tasks, Address Book and Emails. All the information needed to try it out is available here.

 

 

These are for Fedora 8 and OpenSUSE only right now. No Debian .debs to be found here, which only makes sense. If Novell is maintaining it, that set of Distro's is RPM based. Kind of interesting that Fedora is all over it already.

 

The Debian Alien import tool could probably be used to pull in the provider, but I am guessing at this early a stage that would be problematic. The download page states that they are trying to get this code complete by the end of March so that it can be included with Gnome 2.24.
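For the record, that Alien conversion would look roughly like this on a Debian-family box. A sketch only: the package file name below is a placeholder, and an early-stage package may not survive the trip intact:

    # convert the provider RPM to a .deb and install it
    sudo alien --to-deb --scripts evolution-mapi-provider-*.rpm
    sudo dpkg -i evolution-mapi-provider_*.deb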

 

MAPI and SMAPI and Bears, oh my

 

I find all of this very interesting for a historical reason. Back in the early days, before I was Manager of R&D Support, but after I was a full time VM System Programmer, I spent some time on the Production support team. I whiled away my days doing things like hooking up BMC to the Internet, installing Linux for the first time, and managing our internal email systems. We used something called HP OpenMail back then, which later was sold to Samsung, where it was rev'ved a few times (from 2001 to 2007), then died.

 

OpenMail was a terrific tool, letting clients of all types from all sorts of platforms connect and read email and access their calendars if they were able. One of its cool tricks was that there was a version of MS Outlook that ran against it. HP had tried over the years to make MAPI work, but ran into some snags. One of the snags was that the standard MS had published for MAPI, called Simple MAPI (or sometimes SMAPI back then), was not a feature-complete standard. This MS Knowledge Base article compares MAPI and SMAPI.

 

The problem for HP was that they found out that not everything that MS Outlook does when talking to the MS Exchange mail server was documented in the standard. There were apparently quite a number of undocumented RPC's that let all the extended functionality of Outlook work. MS's response at the time was to state that MAPI was a complete protocol, in that it let you get at your email, even if it was a “reduced experience” relative to the Outlook/Exchange combo burrito.

 

HP was looking at putting OpenMail on NT as a server platform, and reverse engineering the RPC stuff so that MS Outlook users could have a “full experience”, but entered instead into a deal with MS that effectively killed OpenMail a few years down the road, even though HP was seeing some big successes with the product.

 

Full Circle

 

The history of this ties in to this post in a couple of ways.

 

First: If you are in an MS Exchange shop (a place where you use MS Exchange as your mail server), then there is historically a very high wall around using any clients other than MS Outlook and its MS sourced brethren with MS Exchange. If you want calendars and to-dos, and server side out-of-office greetings, and you are not using MS Outlook, you have to use the web client. If you are using the web client, and it is not via MS's own IE, you will get a "reduced experience".

 

Evolution, with the Connector, changed all that, since it plugged into the same place on the MS Exchange server as the MS web client. That "plug-in" was the WebDAV protocol. Not 100% standard WebDAV, but close enough that the Ximian-now-Novell folks were able to make it work fairly quickly.

 

Evolution supports the IMAP and POP protocols of course, but those standards do not define anything about calendars, or in fact anything other than email. Useful if you want to email-enable a Visual Basic program, but not enough for a full blown email client. I was also told by some folks a while back at a LinuxWorld that it is fairly common for MS Exchange shops to turn off the open protocols. Ostensibly (as related to me) for "security" reasons. But they could not turn off WebDAV without killing the web client at the same time, and that was/is Evolution Connector's way in.
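A side note from my own toolbox, not from the Connector documentation: if you want to confirm that the OWA/WebDAV door is actually open on a given server, a single WebDAV request with curl will tell you quickly. The host name and account below are placeholders:

    # ask the Exchange 2003 OWA/WebDAV endpoint to describe itself
    curl -k --ntlm -u 'MYDOMAIN\myuser' -X PROPFIND -H "Depth: 0" \
        https://mail.example.com/exchange/

A 207 Multi-Status response (or even a 401 challenge) means WebDAV is listening; a 404 or a refused connection means Connector has nothing to talk to. Depending on how IIS is set up, you may need --basic instead of --ntlm.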

 

Second: MAPI is not WebDAV obviously, and while I have not torn deeply into it, the MAPI-plus-RPC soup one needs to figure out to make Evolution work as a 100% native MS Exchange client is a seemingly complicated bit of work. HP did not go there. Yet the Novell Evolution MAPI Provider project is shooting for March, after starting on it last October!

 

It will be interesting to see when Ubuntu picks up the packages. Hardy Heron is set for April, and its Gnome version will be 2.22 as near as I can tell from internal Gnome package numbers of the alpha 5 release of 8.04. That puts 8.10 as the first possible 2.24 Gnome release for Ubuntu, and that is not till the fall of 2008. I'll want to be testing this well before that! Looks like it will be time to drag out the OpenSUSE system and get to testing.

 

If this works, then unlike Brutus, where there is an MS Windows system acting as a software shim, Linux would be able to natively connect to MS Exchange. That will be of deep interest to anyone wanting to run Linux desktops in an MS infrastructure shop. If EvoMAPI can avoid the code fragility I have seen with Connector (such as in my recent Mepis posts [ -2- ]), that will be very, very welcome.

Stay tuned....

Steve Carl

Evolutionary Annoyance

Posted by Steve Carl Feb 26, 2008
Filters on Mint / Ubuntu and Evolution 2.12 stop working from time to time. Here is a workaround.

 

I subscribe to many different mailing lists on the Internet. Things like the KDE and Evolution development lists are pretty active. I already get hundreds of emails a day that are part of my regular day job. To keep this all separated out and clean, I use the built-in filtering capabilities of Evolution. The filtering facility is pretty mature and fairly powerful. It does what I need it to do. Most of the time.

 

Since loading up Mint 4.0 on my desktop system, where I run the filters, I from time to time, and for no apparent reason, lose the filters. Sort of.

 

The filters are still there and defined. I can select a message or range of messages, then use Ctrl-Y to fire them, and they work. They just do not work when new email arrives.

 

The solution / workaround is easy, and I found it here:

http://ubuntuforums.org/showthread.php?t=599203

The fix coming from the Ubuntu forum is further proof that Mint is not far from its Ubuntu roots, at least for apps like this.

In case you don't want to dig the fix out of that post, here it is:

 

  1. Stop Evolution
  2. Delete/rename the file ~/.evolution/mail/exchange/YOUR_EXCHANGE_ACCOUNT/personal/subfolders/Inbox/summary
  3. Delete/rename the file ~/.evolution/mail/exchange/YOUR_EXCHANGE_ACCOUNT/personal/subfolders/Inbox/summary-meta
  4. Restart Evolution, and let it re-filter things

That is all there is to it. Should be pretty easy to script that, but it is annoying to have to do it at all. In my last post, "More Mepis", I referred towards the end to Evolution being fragile, and this is the type of issue that I was referring to. Evolution just does not do a good job noticing and correcting for (or prompting for correction of) cruft in its configuration and meta-data files. It just starts acting weird.
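Since I keep saying it is scriptable, here is the sort of thing I mean. A sketch, not something I actually run in production, and YOUR_EXCHANGE_ACCOUNT is the same placeholder used in the steps above:

    #!/bin/sh
    # clear the stale Inbox summary files that stop Evolution 2.12 filters
    ACCOUNT="YOUR_EXCHANGE_ACCOUNT"
    INBOX="$HOME/.evolution/mail/exchange/$ACCOUNT/personal/subfolders/Inbox"

    evolution --force-shutdown          # stop Evolution and its backend processes
    for f in summary summary-meta; do
        [ -f "$INBOX/$f" ] && mv "$INBOX/$f" "$INBOX/$f.bak"
    done
    evolution &                         # restart; it will rebuild and re-filter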

 

I do wish KOffice would get up to speed on MS Exchange interactions via WebDAV. I would like to have another choice besides Evolution for when it is having bad days. Or bad releases. The real solution would be to not use MS Exchange of course, but rather something that plays nicer with others. That is not a choice I have.

Steve Carl

More Mepis

Posted by Steve Carl Feb 23, 2008

Historical Note from 2010: The comment copied in full in this post was on the old TalkBMC web site, and when this blog's historical posts were moved to "communities.bmc.com", the original comment was lost along with all comments to all other posts.

 

 

Mike P writes in with answers and concerns regarding "Off Label Mepis 7"

 

  One of the very best things about the Open Source community is when we can have open, honest conversations about various topics. This does not mean agreement on all points of course. Understanding is the goal.

 

  In response to my post from last week, Mike P wrote a response titled "Feedback from a Mepis User". It was a long comment that Mike clearly took a good amount of time to compose, and his response is full of good information, filling in some blanks. I am going to re-post the comment here in its entirety, and I am going to intersperse some additional comments / information of my own. I will add some formatting and headers along the way as well: Mike's original comment can be seen in its original format at the end of my last post.

  Hardware Detection Under Mepis

  As I noted in the post, I did not test hardware detection under Mepis, but the Mepis web page asserts it is the best going. Mike adds:

 

    Concerning hardware detection - I have tried them all, mepis out-performs all the others in terms of overall hardware detection when tested with multiple brands and types of systems, which is not really surprising, seeings as Mepis was the first Live CD with a fully functional GUI installer. I work in a very busy computer repair shop and not other live CD can consistently detect and use OOTB as much hardware as Mepis can, period. Ubuntu detects it, but it can't fully use it until well after the first reboot, pclinuxOS may be the highest on DW, but its HW capabilities are well below par.

 

  I would imagine that, working in a computer repair shop, the depth of hardware detection gets tested much more thoroughly than anything I can normally do. Based on this comment, I will try a test install of Mepis 7 on my Dell 745. When I installed PCLinuxOS I had some problems when it came to setting up the video with the Dell 172FP monitor. Nothing I was not able to overcome, but it took some work. I documented that in my "PCLinuxOS 2007 and Mint 4.0: ELDs?" post. Mint / Ubuntu did not have issues with that same hardware, so perhaps all this will prove is that Mepis can do what Mint / Ubuntu can do. It should still be interesting. The Dell 745 has an ATI video card in it. That might be a problem. Mike writes:

 

    Concerning your experience with the graphics issue during installation, there is a known issue, not only with Mepis, but with xorg 7.1 on any machine with an ATI graphics chip, and it does not matter what flavor of Linux they are running. If your MBP has an ATI GPU, then you are affected. Xorg 7.2 is better by a long shot, though it a portion of the issue still remains

 

  The MBP does in fact have an ATI X1600, but since I was running Mepis virtually I would not have thought that to be an issue. The Dell 745 has an ATI GPU chip, so this may remain an issue. I will not get a chance to test this one way or another till I get the 745 back from Dan and his work on the GFS issues with Centos / RedHat AS though. I may see if I can scare up some other hardware for the Mepis test, although I really do want to see the ATI issues first hand so I can know if I am seeing the same thing under Parallels or not.

 

    Warrens indications earlier this year was that he may roll out an update to Mepis7 somewhere in the first quarter. I think it may have been related to the graphics issue as that can certainly be a show stopper for anybody trying out Mepis for the first time. By and large, Mepis will boot on 70% of the machines out there, but it struggles with the s3 savage gpu of 4-5 years vintage and some of the newest widescreen LCD's, but there again, so does windows.

 

  I hope that he does. I know it is a great deal of work to roll a point release of a distro, but this sounds serious enough to be frustrating to a first time user, and that is clearly the target audience of the Mepis distro, even though Mepis appears to have the chops to keep more experienced folks using it too.

 

  These days, and for a long time now, I expect Linux to work better than MS Windows, and it usually does.

  Mepis History

 

    Concerning the naming of Mepis, in the earlier days, there used to be different versions, but that is no longer, except for a 32 vs 64-bit variant. Any reference to "Simply" Mepis is nothing more than a familiar association, much like the way a past student may still call a former school teacher by their "Mr and Mrs" title, even though they now have an adult relationship and the teacher would prefer to be on a first name basis. Warren himself states that the name of the latest version is Mepis 7, not Simply Mepis 7. It would be nice to see an addendum to your report to correct that piece as it would appear that insufficient research was conducted before posting your article.

 

  Well, as they say in Radio: OK. Done. Although it was not me setting the record straight. :)

 

  I did do a fair amount of research for the piece though: One can not write 4400+ words and just blather on. Well. OK. Some can, but I do try not to. I noted in the post that I had never been able to discover what the deal with the name was, and it is not obvious on the site. I could have dug through all the history of all the forums and probably come up with it. But that was not the point of why I even brought it up.

 

  My intended point was that a *new* user would find that confusing. Not enough to turn them off or send them screaming into the arms of Apple. It just is historical and non-obvious. I am not a new user of Linux, and I found it odd till Mike explained it.

 

  What would be pretty cool here is not me having to do research, but a link on the home page of http://www.mepis.org called "The History Of Mepis". Something light and accessible that tells people things like where the name came from (it is a corruption of "Memphis"), how long the Distro has been around (since 2002), why it was created (because Warren Woodford had a different vision for what a Linux Distro should be in terms of ease of use), who created it (Warren), how it started out as one thing and took some turns along the way to land where it is today, the version history and names... that kind of stuff. At the risk of starting a flame war, kind of like what PCLinuxOS does with their "About Us" link on their home page. The Mepis history page has a distinctly different flavor and approach.

  Preferences

 

    On the subject of the licker panel, the first thing I do is change it. I too do not like a dynamically resizing panel, though I have seen a system with a Mac desktop skin that fooled the pro's on first glance, then it just really p'd them off because they're purists and it was running on an AMD based notebook originally sold with windows preloaded.

 

  I do try to be very clear about when I am stating a personal preference, and I hope that I am equally clear at all times that I do not expect others to feel the same way about this kind of stuff as me. I talk more about this in my "Color Theory" post.

  Gnome apps under KDE on Mepis

 

    Now, the hot potatoe...Evolution

    You quoted "Mepis shipped with a back....." and this us totally untrue, Mepis does not ship with evolution and your choice of description would make it appear that evolution is part of the original hard disk installation. I would have preferred to have read that you installed evolution after the fact, as that would read in a truer sense. Warren takes Linus Torvalds stand on Gnome vs KDE, and like Linus, Warren does not like it or endorse Gnome, so it's hardly surprising that little effort is put into a fully integrated Gnome groupware client.

 

  "Totally untrue" is a bit of an overstatement I think: Mike is 100% correct in that Evolution and Connector were not on the LiveCD, and that I used the built in, as shipped package manager to install it. Apt and Synaptic, pointed at the Mepis repositories. I'll give Mike "not stated clearly" and 10 penalty points to me. In writing that post I did have a note to go back and write a paragraph describing how I dialed in some packages that were not shipped to tweak out the install, and I forgot to go back and do it, even though I later made oblique reference to this when I talked about enabling Debian "testing".

  I am pretty sure Mike has not read the body of my work, but even then he appears to know that he was heading down a slippery slope with this one when he called it a hot potato.

 

  I do not have a great deal of investment in how others set up the colors of their desktops, or how pristine they leave their distros. I am not a purist about most things. I like to ask people why they do things the way they do them, in case they have thought of something I have not. If nothing else, there is a reason that Ice Cream comes in so many flavors.

 

  However......

 

  I do have strong beliefs around certain subjects, and this one just crossed over into one of those areas. I do not think I can stand up and shout to the mountain tops about Open Source, and Open Standards, and the like and at the same time sign on to the internecine warfare some get into over KDE versus Gnome.

 

  I use both. I interchange parts. I use the best of each. They should and can interoperate, and not interoperating is going to be cause for me to stop, throw down the caution flag, and point out the problem. Open Good. Being intentionally incompatible: Bad. Very Bad.

 

  The crux of the matter here is that Mepis *does* in fact *provide* in their official Apt repositories a specially packaged, Mepis labeled, version of Evolution. And they do say on the web site that Mepis "includes the very best business and multimedia programs". Further, I have been nothing if not clear over the years that the only way a Linux desktop user can *currently* interoperate against MS Exchange infrastructure is, for better or worse, with Evolution (more on that after the next section). I have at least 10 or 20 posts about that one topic alone here at TalkBMC. So many in fact that I have often thought that someday I will get a "Please, not more Evolution posts" comment. Did I mention I am planning a quick update on a problem with Evo and Mint? Oops....

 

  I want to be very clear here: If I thought that, as a point of distro purity, no Gnome packages were being packaged for Mepis at all, I would never have even looked at the distro at all. Not because I dislike KDE. Because I dislike the promulgation of "KDE versus Gnome" that so poisoned the early histories of those two projects. I get that schism occurs over stuff like this, and that is fine. The Mepis history doc linked above says (or at least implies) that the Mepis schism point is over usability.

 

  I am also on record multiple times that I think KDE is the better desktop for the new Linux user coming from the MS Windows world because the two are closer in usage paradigm. I made some reference to this in "Which GUI" and direct reference to it in "Getting ready for IT360 / Linuxworld".

 

  I would have just set Mepis gently down and walked away from it a long time ago if I thought the project was of a divisive nature. That would be their privilege, just as it is mine not to use projects that get strident in their divisiveness. I do not think Mepis in general is one of these. It is clear when you look at the project over at Distrowatch that they do not list any Gnome applications as installed in the tracked packages section of the page. They are available though.

 

  Linus has his opinions about such things, and when they relate to the way that the kernel is built, the opinions carry a great deal of weight. On KDE or Gnome preferences what Linus does in this regard is merely interesting but not defining. That is the power of Open.

 

  The Evolution and Connector stuff is packaged, it is packaged by the Mepis development team, and as delivered, it does not work in my environment. This was the entire point of the post: is Mepis of any use as an ELD? Answer is (and this is for me only, but I think it applies to many others): Not as long as I can not connect to MS Exchange servers in *this* shop. It would be really really nifty if MS Exchange was not in the picture or even better, if it played nice with others, but that is just not the current business reality.

  KDE 4 and Beyond

 

    KDE has the Kontact PIM suite, which is KDE's answer to Ximan Evolution, but the current 3.5.x iteration is certainly not MS exchange friendly. The next KDE PIM is reported to be undergoing heavy re-working to bring it into line for use in an enterprise environment, so it would have been nice to have had a mention about the existing systems and the future path, rather that your personal preferences only, because what you write may appear to be true for the time being, but if a reader stumbles across it a year or so later, it may be completely untrue for the then current version and you may have steered the reader away from the answer they were looking for.

 
   Mike P

  As I hope to have pointed out here, I do 'talk' about these things quite a lot here on this blog. When I go to SHARE or LinuxWorld, I often give a 3 hour lab where I bring up an MS Exchange server, and take the attendees through setting up Evolution to connect to it. Then we go back and set up KDE's Kontact, for compare and contrast reasons. Kontact's capabilities to hook to MS Exchange via WebDAV were added in minor form a number of point releases ago (I have been giving that session for years) and Kontact has never been updated to be fully functional with MS Exchange. In fact, it has lain so fallow that I have just about given up hope that it will ever work. Will KDE 4 finally get it going? I have no idea. History is not on its side. They should be able to. If nothing else, the Evolution Connector code is Open Source. They can see how it works if they want and at the very least replicate the functionality. To date, they never have. This is the basis of my point for why, in an MS Exchange shop, one is stuck with using Evolution.

 

  Stuck? Did I not just do a diatribe on Gnome versus KDE? Yes, and this is not that. My problem with Evolution is not that it is from Gnome, but that it appears to me, based on years of experience, that it is fragile. One of the reasons I test and report on Evolution here so much is to try and keep Evolution going (by testing and filing bug reports, etc), at least until there is something that works as well or better with MS Exchange.

 

  Related to this, MS has been making some public comments about trying to increase their interoperability [ -2- ], but till that day comes, if it comes, I have to use what works now.

 

  A few weeks ago, I started looking at the current state of KDE 4 for an upcoming post here. Along the way I installed it on my Mac and an MS Windows computer, just to see what the state of the art was. Part of the idea originally of taking the time to do a Mepis look was to see where it was at with KDE. I took a bunch of side roads, noticed that Mepis had Evolution as an installable option, and that led to my post.

 

  Long term I would like to see Kontact and Kmail and the whole Koffice suite made more usable in an enterprise shop, because that gives me more options for how I stay a Linux Desktop user.

 

  More on that when I know it....

Steve Carl

Off-Label Mepis 7

Posted by Steve Carl Feb 19, 2008
Share This:

Looking at Mepis 7 through Enterprise Linux Desktop Eyes

 

In the world of prescription drugs, there is this practice called “Off-Label Usage”. This is where someone decided to try a drug for a different medical problem than the drug's label (and FDA Approval in the case of the US) indicated it was for. They are not doing this to be scofflaws or anything: perhaps someone noticed that the drug they were taking for one thing seemed to be helping with another, different, unrelated thing. Perhaps someone really good at Biochemistry looked at the way in which the drug worked, and generalized to say "Humm: I wonder if that might help with this other biological process that uses a similar biochemistry". This off-label usage might be something along the lines of an Anti-depressant that appears to be useful in some cases for treating Chronic Fatigue Syndrome, or the way they noticed some stimulants actually calmed down people with certain medical conditions. That kind of stuff.

 

It is logical that, since Linux and all the other projects that go together to make various distros are all sourced from the same code trees (more or less), a Linux Distro that is setting itself up as an easy-to-use end user Distro might have use as an office desktop as well. They may never ever say they are intended for that usage, but... might it work?

 

What is the Mepis thing of which you Speak?

 

In case you don't know, here is a thumbnail sketch of what Mepis is. As of this writing it is the 9th most active Linux Distro over at Distrowatch, with 700 hits per day. For comparison, number one at the same time is PCLinuxOS with 2509 hits per day. My favorite, Mint, is at 1209 HPD, and running 5th.

 

Mepis is mostly Debian based, and its core concept is to make Linux easier to use. It uses Synaptic for package management, and supplies a few utilities like an X configuration tool to make a few of the overlooked parts of Linux easier to set up. I'll get more into some specifics as I go. Here is the blurb off their web site about what Mepis is:

 

Why SimplyMEPIS Linux?

 

  • SimplyMEPIS allows you to test and try the software before you install to your harddrive
  • SimplyMEPIS includes the very best business and multimedia programs
  • SimplyMEPIS features unique hardware detection and configuration superior to any others
  • SimplyMEPIS is pre-configured for simplicity and ease of use, you're productive in a matter of minutes, not hours

Caution

 

Even though the title hopefully implied this, I need to be very very clear about something. At no point anywhere at any time have I ever read anyone involved in the Mepis project ever claim that Mepis 7... or any other version of Mepis... was ever meant to be looked at as an Enterprise Linux Desktop.

 

Well... bullet two in the previous section did mention that the Distro includes the very best business software. But other than that...

 

Given that the guiding light of Mepis, Warren Woodford, is an alumnus of Next (and Next of course is the design basis for what became Apple's OS.X), one might be tempted to think of it as a Distro that would probably be targeted at the average home user. The look of the default KDE 3.5.8 desktop would even seem to support that, since it modifies the defaults of KDE to be reminiscent of OS.X: task bar centered on bottom, task bar shorter than the width of the screen. Blue desktop background with swirly designs. It all looks vaguely familiar to an OS.X user.

 

The resemblance largely stops there though. Once you click on the Mepis 'Globe and Pyramid' icon (where the KDE 'K' normally lives), familiar looking KDE menus appear. Unlike stock KDE, the menus have task descriptions like 'image editor' as well as names of apps like 'GIMP' on them. This task based orientation is nice for the folks that may not know the complete mapping of application name to application usage. They are not all intuitively obvious. (Evolution is email? I thought it was a game)

 

Linuxerati probably do know a great number of the application names and what they do, but there are over 12,000 applications / software packages in the Debian software repositories. Pop quiz time! Does anyone know them all? Then there is the fact that Mepis is not so much for experienced Linuxii as folks coming there from other places like OS.X and MS Win. Mepis does not map every single application from the Debian repositories either. Just the ones that they package, which is probably something between 10 and 15% of them. Still, 1500-2000 packages is a lot of packages to remember what they do, and Linux needs no help being more obscure to new users.

 

Wikipedia says of Mepis that:

 

"MEPIS was designed as an alternative to SUSE Linux, Red Hat Linux, and Mandriva Linux (formerly Mandrake) which, in the creator Warren Woodford's opinion, were too difficult for the average user."

 

Since SUSE and RedHat *are* targeted at enterprise desktops, it does not seem totally off the beam to have a look at Mepis as an ELD. After all, being an enterprise user does not mean wanting a hard-to-use desktop. At least part of why folks stay using something like MS Windows XP is because they already know that environment, and they don't really want to know any more about computers than they have to in order to get the job done, whatever that job may be.

 

Sad but true. Not everyone is a computer geek. Not even everyone at BMC is a computer geek.

 

In this corner, we have the world of computers. A place so vast and complicated that no one knows everything about them. In this corner, we have all the people that don't want to know more than they absolutely have to about computers. Into that gap jump Distros like Mepis.

 

So why test Mepis?

 

I came across Mepis in a flame war on a bulletin board a number of years ago, where some people were trying to help a new user ease into using Linux. I think the new user probably ran screaming to an Apple, given the general tenor of that particular discussion, as I recall it devolved to drawing lines in the sand and calling each other names.

 

Another "Sad But True" (TM): I think this sort of knee jerk, flame war behavior is what many people think of when they think about Open Source. They wonder why they should ever look at it when the conversation starts with people calling each other idiots. That it too bad, since such discussions have nothing to do with Open Source, and everything to do with the general signal-to-noise level of the Internet. You can find similarly well reasoned discussions over in alt.captain.kirk.rules in the NNTP news groups.

 

At that time I had never heard of Mepis so I searched it out and downloaded it, and have tried to follow it ever since as a matter of interest. One early point of confusion for me was what the difference between Mepis and SimplyMepis was. I assumed that this was sort of like "Mepis-Lite", or a stripped down, single CD version or something. As near as I can tell over the years, though, they are actually the same thing. Mepis 7 is the same thing as SimplyMepis 7.

 

I think.

 

I actually have never found anything definitive about that. It is just that I can not find two versions of the Distro. I found a number of ports to various chip architectures, and a Live CD, but nothing that appears to be a different version from a package or packaging point of view. Like Ubuntu spawns Kubuntu or Edubuntu. That sort of thing. SimplyMepis is not a simpler version... it is simply Mepis. And that is confusing. The first thing this easy-to-use distro did? Confuse me. That has happened before though. See my posts about my early days with Ubuntu here. I was clearly confused.

 

Then there is this AntiX Mepis thing. No idea....

 

One thing that kind of fascinated me about Mepis was the history of what its versions were based on. Version 5 was based off Debian. Version 6 was based off Ubuntu. Version 7 went largely back to Debian (although Warren Woodford did say that they still source some things from Ubuntu). That sure seemed like a lot of work, but computer purism knows no bounds sometimes. This time it may not have even been totally off base. Here is a quote from Warren Woodford about it:

 

"Ubuntu is almost a whole new distro each time it's released," he said. "By using the EXPERIMENTAL code, each and every time, the Ubuntu code tree is inherently less stable than the Debian code tree, which contains additional levels of testing and vetting and fixing of code."

 

If Warren Woodford is after stability over bleeding edge (although I find Ubuntu/Mint pretty stable), then that is another argument that Mepis might be ELD material.

 

So.. is it? I spent this much time writing about Mepis because I have never written about it here at TalkBMC before, and so I needed to set up what I was going for here, and why.

 

The Experiment

 

At some future point in time, it would be useful to know how well Mepis 7 installs and configures on various bits of real hardware. Laptops and Desktops. To do a real deployment in the heterogeneous world of PC hardware Mepis would need hardware detection / configuration as good or better than Knoppix or Ubuntu.  Bullet three on the homepage said in part:

 

"... features unique hardware detection and configuration superior to any others"

 

I did not test that here though.

 

My first criterion for an ELD is how well it supports the daily tasks of a desktop computer. It does not matter if it installs well if it is dysfunctional as a work desktop. Since I live in an MS Windows world when I am at the office as far as document formats and email infrastructure, that comes down to the two killer apps of Linux: Evolution and OpenOffice. I'll take it as a given that between all the Linux browsers that are available (Firefox, Opera, Konqueror, etc) the web is covered. That may not be true if you have to deal with intranets designed and deployed by people that have no idea what they are doing. If so, you will probably need IE too. Getting IE going under Linux is not that hard though, with tools like IEs4Linux, so I am going to pretend that need is covered or moot. Nothing protects you from the ills of an Active-X only [ -2- ] Intranet like IE on Linux. Well, that and re-doing the Intranet with a clue. But I digress.

 

I was/am on the road for two weeks at the time of this testing/writing, and not wanting to carry an entire computer lab with me, I used my MacBook Pro running 10.5.2 with its installed internal second hard drive and Parallels Desktop 3.0 to set up the test bed.

 

Pain in the (Virtual) Hard Drive to Install

 

My first challenge was to get Mepis to install at all in Parallels. It took about twenty or so tries before I hit the magic incantation that got the boot past the early stages of hardware discovery or ACPI bits. It appeared from the failures to be an issue with the virtual screen device driver. Maybe it was the unique and better than anyone else's hardware detector not liking the virtual world. Whatever the problem was, it was not enough to tell the Mepis installer to use VESA (which it recommends right there on the boot screen) for a VMWare environment. I also had to press F4 and specify the X by Y dimensions of the virtual screen. Once that was done, it booted, and was fairly easy to install after that.

 

Parallels does not source a Linux virtual device driver for xorg as far as I know. Certainly no Linux I have installed has ever offered a Parallels Tools version of the graphics driver, even though some Linuxii do source a VMware Tools xorg driver option. Some quick Googling showed nothing about Parallels Tools, Linux version.

 

That might be an argument for using VMware Fusion on the MBP rather than Parallels, but I really like Parallels in most ways so much it would be hard to give up.

 

The MacBook has 2GB of RAM, so I gave the guest 512 MB, and that seemed to be plenty.

 

My Macbook has two hard drives inside it: the 120GB disk it came with and a 160GB where the DVD drive used to be. I keep all the Parallels guest OS images on the 160GB disk. I also in this case put the downloaded Mepis 7 .iso image on the 160. That does not seem to be a best practice. :) Every now and then I would see install timeouts: it probably would have been better to put the ISO on the OS.X hard drive instead. It installed, but there were a fair number of hangs for I/O.

 

Once up I hunted around and found the Mepis X server setup widget. It has many nice setup options for when one is running on real hardware, but it appears to be largely useless in a virtual world. Nothing I changed the settings to had any effect on the screen size of the VM. Finally I went in as root and hand edited the /etc/X11/xorg.conf file to get the geometry I wanted for full screen.
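
For the record, the bit I ended up hand editing was the Display subsection of the "Screen" section. The fragment below is only a sketch: the identifier and the mode list are examples, not exactly what the Mepis installer generated for me.

# Fragment of /etc/X11/xorg.conf (edit as root, and keep a backup copy first)
Section "Screen"
        Identifier      "Default Screen"
        DefaultDepth    24
        SubSection "Display"
                Depth   24
                Modes   "1280x800" "1024x768"
        EndSubSection
EndSection

Restarting X (or just logging out and back in) picks up the new geometry.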

 

I am not dinging Mepis for any of this. It was a pain; however, I would need real hardware to know if any of this matters to a real user in a real ELD. Just noting the issues in the VM install in case someone else comes down this path.

 

Up and Running Virtually

 

Once I had the screen real estate I wanted, I got serious about configuring the Mepis 7 guest as if it were an Enterprise Desktop. OpenOffice was already installed, and at version 2.3.0, it was pretty current. 2.3.1 is out as I write this. Mint 4.0 still has 2.3.0, and that is my main, production, ELD.

 

Side Trip: The more I messed with the system, the more the default look and feel of Mepis made me crazy. I use a real Mac. I do not think this desktop would bother anyone else but me. But my thinking is that if you are going to make the taskbar look like the one on the OS.X desktop, it should also act like it, at least a little bit. This thread over at KDE (KDE is the default desktop of Mepis, by the way) probably explains the disconnect between me and the world here:

 

"Nearly all mac users, myself included, don't particularly like the OSX Leopard dock. Nearly all turn off the icon zooming and many apply a hack that makes it 2D and closer to kicker.

 

It is representative of most flashy desktop effects i.e. they are great for impressing people who haven't seen them before but generally get in the way when you're actually trying to work"

 

Oddly, I have never met anyone that has ever changed this setting on their Mac. Given the way Apple implemented the graphics layer on OS.X, graphical effects do not use much CPU, and low end hardware like my 500 MHz PowerPC based iBook ran it without pause. I also rail about useless graphics stuff all the time, and admit that if it does slow my computer down, I turn it off. I have almost everything in the effects department turned off in MS Vista for example.

 

There are ways to make a more OS.X style taskbar appear in KDE. Ksmoothdock and KXDocker are two I have seen and played with at one time or another. There is also one called Kiba-dock, but I have never looked at it. The two I did play with I soon disabled because unlike OS.X they were quite clearly not integrated into the window manager / composite layer and they were slow.

 

KDE 4 probably changes this speed issue around since the KDE 4 development team utterly redid the compositing window manager. Well... assuming that the OS.X style docks take advantage of the new features. Mepis 7 is still KDE 3.5.8 though, and most of these distros appear to be waiting for KDE 4.1 before they make their move.

 

One never has to waste time being crazy about look and feel issues with Linux. One can always make it look however they like. I put the task bar at the top of the screen (less neck craning on a laptop), Gnome style, full width, made it smaller, and was happy.

 

Devolution

 

OpenOffice worked on everything I pointed it at. Spreadsheets that came my way during the week. Documents. Presentations. Pretty much what OpenOffice always does these days. OO 2.3.0 on Mepis was more or less the same experience as OO 2.3.0 on Mint, except for the KDE (Mepis) versus Gnome (Mint) decorations.

 

Evolution was not such a happy story. First off, Mepis shipped with a back level version of Evo: 2.10. Current Evo stable version is 2.12, and 2.12 was a major stability release for me, at least when it comes to using Evolution against an MS Exchange server. Mint and Ubuntu and PCLOS all had 2.12 at least as options. Mepis... not so much.

 

I worry about doing something to a distro that the folks that created it did not test, because now I am not only off label, I am off road too. I tried 2.10.

 

No luck. 2.10 was not working well against MS Exchange 2003. To be fair to the Mepis / Evo combo, I was over half a second of network delay away from the MS Exchange 2003 server, and being on the road I did something stupid. I forgot to enable my desktop system for remote access. It is locked down, and I can not get to it to archive my emails there. That means I have 1500+ emails in my inbox. The Evolutions of the past usually do not like large inboxes.

 

Having said that: Mint 4.0 had no problem running Evo 2.12 from that exact same place and with that exact same network delay. Even if that inbox size, and that network delay were the *problem*, they were not an excuse for Evolution version 2.10, Mepis special packaged version. It would not even try to connect to MS Exchange via Connector.

 

Plan B and off-road time.

 

From an Enterprise Linux Desktop in an MS Exchange shop point of view, this problem with Evolution was probably a head shot wound. In an MS Exchange based shop, this just would not fly. Still, I was (as usual) curious. What was the problem? Was it the special package? Was it the back level release? Was it a Gnome app under KDE problem? Time to dig a little deeper.

 

A bit of poking around in the forums and I decided the way to get what I wanted was to enable the Debian "Testing" repository inside Synaptic. This took a bit of work. I had to manually add the repository, and then in "preferences" I had to tell Synaptic to prefer "Testing" over "Mepis".

 

A refresh of the repos, and I had 2.12.3 from Debian available. I picked Evolution and Evolution Exchange connectors only, plus I took Synaptic's recommendations for co-reqs. After about 50 packages were installed, I pulled my changes back out of Synaptic so that it went back to preferring Mepis as a source. The idea here was to contain the changes to only those needed to get Evolution up to date. I was not trying to test the entire Debian 4.0 Etch "Testing" code tree, but a mildly-modified-Mepis (say that fast three times).
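
For anyone following along, here is roughly the command line equivalent of what I clicked through in Synaptic. The mirror URL and the connector package name are from memory, so treat this as a sketch and check them against your own setup before running any of it as root.

# 1. Add the Debian "testing" repository
echo 'deb http://ftp.debian.org/debian testing main' >> /etc/apt/sources.list

# 2. Temporarily prefer "testing" over the Mepis pool
cat > /etc/apt/preferences <<'EOF'
Package: *
Pin: release a=testing
Pin-Priority: 900
EOF

# 3. Refresh and pull in the newer Evolution plus the Exchange connector
apt-get update
apt-get install evolution evolution-exchange

# 4. Put things back so the Mepis repositories win again
rm /etc/apt/preferences
apt-get update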

 

Evolution via Connector now worked, but only email. Calendaring was a no-show.

 

To this point in the testing I had not changed any config files. I was using the ones created by 2.10. Usually when issues occur during a version upgrade, I just need to let Evolution re-create the config files under the new release.

 

I deleted .evolution, .gconf/apps/evolution, killed Evolution and Bonobo processes, and restarted clean. Normally I would not delete these things. I would move them out of the way in case I needed something back out of them, like all my old emails. This was a test system. I could blow them up completely. It did not matter though. The Evolution account configuration widget wouldn't let me add Exchange as a server type in the server type chooser dialog.
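
In case it helps anyone else, the cleanup amounted to something like the following. This is fine on a throwaway test system; on a real desktop, move the directories out of the way instead of removing them, because this wipes local mail and settings.

# Stop Evolution and its helper processes first
evolution --force-shutdown
pkill -f bonobo                  # catch any leftover bonobo-activation bits

# Blow away the local Evolution state (test system only!)
rm -rf ~/.evolution ~/.gconf/apps/evolution

# Start clean; the account setup wizard runs again
evolution &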

 

IMAP: yes. Exchange: No.

 

Exchange is there on the chooser, but there is no place forward from there. It does not appear to understand that I have picked Exchange as the server type and that I have entered my userid. It seems to be waiting for input. If I pick IMAP, the 'forward' button on the bottom right hand side of the dialog panel un-grays, and I can continue.

 

That is the end of anything I can do with Mepis 7. It was never meant to be here in ELD-Space. Well, unless your ELD space does not have to deal with MS infrastructure. If that is the case, then you are probably in good shape.

 

Assumptions Do Not Always Pan Out

 

Sometimes going off-label you get the same results as if you had taken the placebo.

 

I documented why I thought that the use of Mepis 7 as an ELD was probably pretty safe. It probably would be quite good in a non-MS Exchange environment, but since so many shops use MS Exchange, the question has to be asked and answered about how well it would work in that specific kind of shop. For better or worse, it is unlikely that the glass house folks are going to toss out MS Exchange in the near future if they already have it.

 

I am going to venture off into the underbrush some more (because sometimes, off-road just isn't far enough into the weeds), and speculate as to why Evolution Connector 2.10 (Mepis edition) and 2.12.3 (Debian version) does not work under Mepis when connecting to MS Exchange.

 

First: Evolution/Connector is a Gnome application. Mepis is a default KDE distro (although there is a Gnome Desktop installable for it). Evo can easily be made to work under KDE (and in fact the IMAP stuff does work). It appears that sometimes it takes a bit of work to get gnome stuff as complicated as Evolution going under KDE, and that work does not appear to have happened here, at least as far as Connector is concerned. This makes sense to me: for Mepis to test Connector, they would have to have access to an MS Exchange server, and most Open Source projects either outsource their email to folks like Google, or run an Open Source email server. I would expect Evolution on SUSE to work because Connector is their code, and they truly are positioned as an ELD.

 

Second: The Connector bit has a long history of being persnickety, and it appears that the general trend might be away from Connector toward Brutus. I did not test Brutus. Not this time, not ever. I do not like the general idea of Brutus. If I am forced to go to Brutus in the future, I will, but not till I hear that the Evolution folks are throwing in the towel on Connector. And even then, I may just decide to live with the Outlook web interface rather than go to Brutus. I try not to be overly purist, but it really gets up my nose to use an MS Windows box as a shim to interact with MS Windows infrastructure. I can do that by just running MS Windows as a guest OS on VMware.

 

I suppose the advantage of a separate box running MS Windows and the Brutus would be if something gets sideways in MS Windows land, like another Nimda or something, it is at least not running on my machine. I can just burn down the Brutus gateway / shim / interface / proxy / whatever it is called and rebuild one box rather than many. But I digress.

 

Three: It matters if you test stuff. Evolution and Connector are complicated bits of code. If changes have been made to the distro, Evo / Connector might lose track of something they need. Move one file that Evo requires, or change its format slightly, and it suddenly starts acting whacky. I don't know that this happened: it is pure speculation. But the fact that there is a Mepis packaged version of Evo implies to me that someone on the project at least thought they had to do something to re-package it for Mepis. The fact that I can not even define a new Evolution account when Connector is selected tells me this combo-burrito never was QA'ed. The Mepis special package of Evolution let me define the account. Something changed someplace.

 

All of this is speculation. All I know is that right here, 1400 miles from the MS Exchange Server, Mint 4.0 and Evolution 2.12.1 work, and Mepis does not with either 2.10 (Mepis) or 2.12.3 (Debian Testing). Without Evolution's MS Exchange access, I can't easily use it as my Enterprise Linux Desktop.

Steve Carl

GFS or NFSD?

Posted by Steve Carl Feb 15, 2008
Share This:

A quick update on bug 431253

 

Robert Peterson over at Redhat has now been able to re-create our bug. He posted on the 13th of February:

 

By using a non-root user, I've recreated the problem using the hp-ux
machine.  I've also compared what's happening against a Fedora 8 nfs
client with suggestions from Steve Dickson.  The difference between
the two calls is this:

The Fedora 8 nfs client specifies a create mode of 2 (exclusive)
whereas the hp-ux client specifies a create mode of 0 (unchecked).
Now I'll backtrack how create mode 0 is handled by nfs and gfs as
opposed to ext3.

 

 

There has been some back and forth since then, and a few things tried, but no headshot fix just yet.

In the meantime, we have another week of uptime with no issues on Dan's patch, and are moving more files to the cluster.

I *said* it was going to be quick.... which is just as well. Next Tuesday's post about Mepis is over 3300 words so far....

Steve Carl

Bug 431253

Posted by Steve Carl Feb 12, 2008
Share This:

The chief NAS blaster of R&D Support gets bugged about the NFS Create problem

 

Specifically, this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=431253

 

Which is the same as this one:

http://bugs.centos.org/view.php?id=2583

 

Attention Deficit

 

When the bug we had out at CentOS had languished for a while, Dan and I talked about it for a bit, and then he scooped up my test computer (goodbye to PCLinuxOS for now) and went back to his desk to try something out. The goal was to see if he could recreate the bug we had found on CentOS 5 also on RedHat EL5. Given the way CentOS is derived, it made sense to assume that CentOS would have all the same bugs as RH EL5, and that the inverse was probably true, that RH would have all the same bugs as CentOS.

 

Putting together a test RH NFS Server that uses GFS as the file system, he carefully made sure that all the same packages at all the same releases were installed. And then he tested the HP-UX clients, and sure enough, they failed the exact same way. Here is the text from the RH Bugzilla bug (Formatting is mine...):

 


Description of problem:

HP-UX NFS clients fail creating a new file on a CentOS 5 NFS server updated with the 2.6.18-53.1.6.el5 when GFS is used as the backing filesystem.

Version-Release number of selected component (if applicable):

kmod-gfs-0.1.16-5.2.6.18_8.el5
kernel-2.6.18-53.1.6.el5

How reproducible:

Using any HP-UX client.
I tested using both 11.00 and 11.23

Steps to Reproduce:

1. Create a GFS filesystem on the server
2. NFS export the filesystem
3. Mount on any hp-ux client
4. hp-ux$ cp anyfile /to/nfs/fs/on/rhas5

Actual results:

hp-ux$ cp: cannot create anyfile: Permission denied

Expected results:

No error

Additional info:

HP-UX clients first create the new file using the NFS procedure CREATE (using
UNCHECKED and mode=0) and the server returns NFS3_EACCESS. I traced the -EACCES
error to the generic_permission kernel function. Apparently ext3 and xfs
filesystems do not use generic_permission but gfs does and it returns -EACCES to the nfsd.

I have created a simple patch to check for the -EACCES error and allow access
if the FSUID=Inode-UID. This resolves the problem but is probably not the best
way to fix the bug. I will attach the patch for reference. Hopefully those more
knowlegable than I can determine the correct fix needed to resolve the root
cause of this bug.
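
For anyone who wants to try the reproduction themselves, the steps above boil down to something like this. The device path, lock table name, and export path are made up for illustration; adjust them to your own cluster.

# On the CentOS / RHEL 5 server (device and lock table names are examples)
gfs_mkfs -p lock_dlm -t mycluster:gfstest -j 2 /dev/sdb1
mkdir -p /export/gfstest
mount -t gfs /dev/sdb1 /export/gfstest

# NFS export it
echo '/export/gfstest *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On the HP-UX client: mount as root, then copy as a regular (non-root) user
mount -F nfs server:/export/gfstest /mnt/gfstest
cp anyfile /mnt/gfstest        # fails with "cp: cannot create anyfile: Permission denied"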


More Debugging

 

The case was picked up by Robert Peterson of RedHat, and the first thing he wanted us to do was to check and be sure the behavior still existed in the latest, CVS version of this code. There were some changes in the GFS extended attribute code that might have impact on this. Robert did not have access to an HP-UX workstation to verify.

 

Dan loaded up the CVS version of the code and verified that it did not fix the problem, submitted the traces he had from both CentOS and RH on the problem, and that is where the matter now stands.

Why?

 

It appears, looking through the various bugzilla reports, that the RedHat folks do not get wrapped around the axle when bugs show up and are reported against CentOS instead of their distro: the spirit of Open Source appears to hold true, and they are more concerned about the bugs getting fixed than where they were found.

 

At the same time, reporting it against CentOS did not get any obvious attention. We knew we were out on the bleeding edge with our configuration anyway, but we theorized that it would get some cycles if we did some work to reproduce it in RedHat code. We did, and it did. We now have someone looking at it, and what we hope is that a fix will get generated that will both roll into everyone's base code, and be sure to be Posix compliant as well. Given the way CentOS is derived, fixing it in Redhat fixes it in CentOS at the same time, and sooner or later everyone in all the distros that use the GFS code will get it.

Works for Us

 

In the meantime, Dan's patch has had the CentOS cluster stable and doing exactly what we want for going on a month now. We are putting new file systems on it, able to take nodes down without disrupting service, and so forth. It is pretty much exactly what we were hoping for.

 

Why would we need to take a node down? The most recent example was the lights out management cards. Dan had discovered that they also run Linux, and that there was a new version of the embedded code for them. We had suffered a few hangs on the cards, and wanted them patched up to current. By being in a cluster, Dan could down a node, flash the ILO card, and bring it back up, then do the next node, and then the next, and our customers never even noticed he was working on the system.

 

When it works like that, it is a beautiful thing.

Share This:

Ron Michael joins the ranks of bloggers here at TalkBMC concerned with all things “Open”, but with a mainframe twist to it

 

There is a new blog in town, and for those of you who prefer a more technical bent to your blogging, have I got a deal for you!

 

I have worked with Ron Michael for years, and he has been the guy who has kept our Linux on the mainframe healthy and happy for a long time. Ron is not afraid to go after *anything*. The fact that he does not yet actually know anything about something does not stop him from diving in and learning it.

 

When he and I first sat down to talk about where I wanted to take our Z series Linux over the next few years (In R&D Support, my team has the Linux part of the mainframe as well as all other platforms that it runs on), he brought to the table years of mainframe experience, diagnostics, and coding, as well as the kind of holistic understanding one gets from being a well rounded geek. What I am trying to say here is that Ron builds lasers for fun. Real ones. Like for 3D holography and stuff. He is also a machinist. And a helicopter pilot. There is nothing he can not do when he decides he wants to do it.

 

I brought general Linux knowledge, and VM systems programming experience. He listened, learned, generalized, and has come up with some amazing stuff. Ron was my collaborator for the SHARE presentations I have given about Linux image cloning as well.

 

“Open For Mainframe” is where Ron will be talking about all sorts of technical things as they apply to the mainframe. It will more than likely be fare for folks who want the deeper dive into the kind of things most folks don't really consider about what it takes to run Linux on the mainframe. Tips and tricks. Maybe some code snippets. All sorts of things. Along the way I am sure he will be posting articles of a more general nature, since he is both a specialist and a generalist.

 

Watch that space for all sorts of new and interesting things. I have it added to my RSS feed. That way I can stay informed as to what he is up to! If nothing else, I need to know if he is equipping any sharks with lasers..

Steve Carl

Virtual Lessons

Posted by Steve Carl Feb 5, 2008
Share This:

Try a little X86 VM infrastructure, succeed, grow organically. At some point it becomes time to stop and look around and figure out some better ways to do some things in the virtual world. One stop along that road today: Storage Virtualization.

 

As noted in my last post, "Virtually Greener", this post is a deeper dive into some of the things we have learned along the way about virtualizing X86, specifically using VMWare.

Culture Change

 

The first problem any X86 virtualization project faces is education and culture change. There is Fear, Uncertainty and Doubt, and this is not the normal vendor generated FUD against a competitor stuff. This 'real' FUD is based on:

 

  • Fear of Change

  • Fear of the Unknown

  • Fear of Loss of Control

 

Hopefully the 'Fear of Change' is pretty obvious.

The Unknown

 

The Virtual Machine story sounds so unreal to someone not initiated into the mysteries of virtualization:

 

RDS: "Hi, I'm your friendly neighborhood R&D Support person. I would like to take that ancient computer you are using away from you and replace it with one that does not actually exist, but you'll get better performance."

 

RDP (R&D Person): "Come again?"

RDS: "I'd like to take that computer you have been using for years and shut it down and scrap it, but first I want to P2V it so that you can keep using it, except that the Virtual version will be much better than the one you have now."

RDP: "Err... right. How will it be better?"

RDS: "I can add more memory if you need it, part of which may be shared with other virtual computers The other computers are because you virtual machine will live inside this great big computer with a bunch of other virtual computers similar to yours."

RDP: "A bunch of others... won't that be slower?"

RDS: "No, because the new computer is 10 or 20 times faster than the one you have not now."

RDP: "How many others?"

RDS: "Maybe 30 or 40 or 50. No more than 75 probably. It depends."

RDP: "And this will be faster? How? I can count..."

 

And so forth....

 

OK. I admit that the above conversation never happened exactly. At least not all at once, and not with just one person. But it shows the confusion that surrounds this whole thing. And it smacks of Big Brother, centralized, Mainframe, Glass House stuff that some people ran screaming from because of all the restrictions.

 

The truth of the matter is that most people just don't use all the resources of their computer very often, and that most of the time it is sitting idle. Right now, as I type this in on my MacBook, the CPU gauge in the task bar is barely visible, with less than 7% out of 200% currently being used. And this is a two year old unit, not even the fastest laptop going anymore... and it's a laptop. As long as I am not doing image processing or massive file conversions, or at least as long as I am not doing them at the same time as someone else using my computer, there is room on here for many of us.

Even with the overhead of virtualization (and VMWare is very high right now, relative to the mainframe: We use 30% as a planning number for VMWare, whereas most stuff on the mainframe is at about 5% these days), the fact is that you can layer in a large number of users on a fairly inexpensive large, data center grade computer, and have the net sum of that be far less expensive, and perform better. As noted in "Virtually Greener" there is a great deal of power to be saved here as well.
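
To make that a little more concrete, here is a back-of-the-envelope version of the math. Every number in it is an illustrative assumption, not a measurement from our farm, but it shows why even a 30% overhead still leaves room for dozens of lightly loaded guests.

#!/bin/sh
# Hypothetical numbers: a 4-way 3 GHz host, 30% virtualization overhead,
# and old desktops that average maybe 150 MHz worth of real work.
HOST_MHZ=$((4 * 3000))
OVERHEAD_PCT=30
GUEST_AVG_MHZ=150

USABLE_MHZ=$(( HOST_MHZ * (100 - OVERHEAD_PCT) / 100 ))
echo "Usable capacity:   ${USABLE_MHZ} MHz"
echo "Rough guest count: $(( USABLE_MHZ / GUEST_AVG_MHZ ))"   # ~56 guests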

 

The hard part is drawing this all out in ways that people can latch onto. We decided to go after it by creating a "proof of concept" VMWare farm.

Proof of Concept

 

How big one makes such a thing as a "Virtual Farm" depends on many factors. We had pretty lofty goals, and a fairly large scale data center. We are after about one in three X86 computers over a 12 month period, and we are well on our way to achieving that goal. Our R&D data center seemed like a prime candidate for this as we had so many older, more inefficient computers still in use. This old gear remained for reasons of customer support, and therefore by keeping around a raft of hardware and their related OS and application releases, we had many thousands of square feet of pre-1998 computers still in service.

 

The problem was how to keep the functionality: keep our internal customers supported (remembering that my customers are other groups inside BMC, like R&D and Customer Support), but reduce the footprint. A conversation I never want to have with R&D or Customer Support:

 

CS: "You made a change to our infrastructure and it affected how well we were able to support [insert customer name here]. I am coming over to your office as soon as Sally returns my brass knuckles."

 

Kidding. Our CS folks are a passionate bunch though.

 

We started small: Dell 1850s, 1950s, 2950's, and then a Sun X4600. 4 GB, 8 GB, 16 GB, 32 GB and then finally the VMWare release 3/3.01/3.02 limit of 64 GB. We created the ESX servers all with internal disks, and started putting up the smallest of VM's. When the target is a 1993 computer with an average of 128 MB of RAM, even a 4GB ESX server can hold a good number of OS images.

 

When a request for a new computer would arrive, we'd look at it, and ask if it was a performance, benchmarking, capacity planning, or device driver related need. If it was not, we would offer a VM instead of a real computer.

 

The first advantage was that we could provision that immediately. In less than two hours from the time that the request had arrived in the Remedy inbox, we could turn around an exact environment that met the needs of the requester. And they worked. They were not slow. With the VMWare VI (Virtual Infrastructure) console they had complete control over their VM. They could install things in it, reboot it at will, and never need to get us involved. With more and more people using the environments, we were able to build out templates that allowed us to turn around requests even faster.

Organic Growth

 

Like any pilot program, there comes a time when it is time to go with it, or time to throw in the towel. Did the FUD win, or did the facts? Were the facts on your side? This project was a go, but now we had a problem. It started small, it grew like a weed, and now we had a pile of servers. The big ones worked better because they allowed more resources to be shared: a Sun X4600 or Dell 6950 could easily run 50 VMs, or even 75 of the really small VM's. To take advantage of features like DRS so that workloads could be balanced across multiple systems, and VMotion, and HA clustering so that if hardware fails the VM's can re-start on surviving members of the cluster takes additional investment. It takes a SAN, and switches, and paying attention to what type of HBA you buy so that VMWare supports it. It takes planning and thought, and in some cases some outside the box thinking. One does not want to have all their cost savings and power savings and data center floor space eaten right back up.

 

The inverse problem is that one does not want to cheap out on the gear. When there are 50 VM's running on one server, even if all of that workload is not considered production *individually*, a server failure that can not recover quickly elsewhere means that at a minimum 50 people were just idled, and probably more if these were multi-user OS's running inside the VM's.

 

VMWare simplifies the math here by publishing what hardware they support. We have a fair number of Apple Xserve RAID (XSR) disk arrays, and we like them a great deal. We would have liked to have been able to use them for VMWare, but they are not certified. Tests showed they worked just fine for most things, except workloads with a high amount of random read. Virtual machines can do exactly that sort of random read access pattern often, so XSR's are not optimal.

 

Or are they?

Storage Virtualization

One of the big myths of disk space is that SATA is way slower than SCSI or Fiber Channel. It is... and it is not. Most SATA disks these days have fluid bearing designs, and extremely high MTBF (Mean Time Between Failure rating)... for all that is worth. The newer revisions of the SATA interface are pretty fast. http://www.sata-io.org/3g.asp documents 3g at 300 Megabytes a Second. The current spec is only half that, but that is still a crisp 150 MB/Sec. Way faster than I can type.

 

Part of what slows down SATA versus SCSI is the number of arms versus the density of the data: the biggest SCSI I have seen as of this writing is 300GB. The biggest SATA: 1 TeraByte. Give or take an elephant, three SCSI disks with three data transferring arms is going to be faster than one SATA disk with one lonely little actuator. Then there is the fact that SCSI disks can be had that spin faster than the normal 7200 RPM of SATA: 10,000 and even 15,000 RPM. If you have three of the 15k SCSI units, you are going to go *way* faster than the one little 7200 RPM unit. You will pay for that premium speed though, and not just in Capital to purchase, but power. Spinning a disk at 15,000 RPM takes engineering, testing, expensive parts, and *power*.
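
To put a number on just the spin speed part of that argument: average rotational latency is half a revolution, which you can work out straight from the RPM. This ignores seek time and transfer rate entirely, so it is only one piece of the picture, but it shows why the 15k drives cost what they do.

# Average rotational latency in milliseconds = (60000 / RPM) / 2
echo "scale=2; 60000 / 7200 / 2" | bc     # ~4.2 ms for a 7200 RPM SATA drive
echo "scale=2; 60000 / 15000 / 2" | bc    # 2.00 ms for a 15,000 RPM SCSI drive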

 

Looked at another way: The same number of disks, SCSI or SATA or FC, spinning at the same RPM, with the same density per track, and the same amount of on-board disk cache, is going to perform near enough the same as to make the price difference between the three favor SATA. SCSI or FC might be a little faster, but not dramatically so.

 

That is where storage virtualization can help. Storage Virtualization scares most people even more than OS virtualization. The FUD reasons are more or less the same. SV comes with a huge unknown: With Storage Virtualization the link between the physical location of the disk, and the data is largely broken.

 

One example might be a LUN where an OS keeps '/'. Normally, that is the first 10GB of any computer I build. With SV, I can 'spray' that out over multiple disks (kind of like RAID does) but even farther and wider, over more than one disk array, and even multiple different models and vendors. The VLUN can be across literally hundreds of disks, and many tens of controllers. The IBM SVC limit is somewhere around 1024 devices. That is a lot of arms to throw at data. It is not one block per disk though. There is a chunk size involved here too. Since that 1024 devices can in turn each be a RAID5 array, then the I/O could be across thousands of actual disks.

 

Hopefully it is easy to see how in that scenario, the advantages of SCSI are diminished, and also see how a system programmer is going to be looking very closely at the storage virtualization device to be sure it is always healthy. It is now the only one that knows where the data is. The storage virtualization devices we played with are the ones from IBM, called the SAN Volume Controller or SVC. The SVC is an IBM X Series computer running Linux (yea! I had to get Linux in here *someplace!*) that sits between the hosts and the disks. The disks are just providers of blocks, and all the hosts are looking for on the SAN are LUNS, so the SVC creates VLUNS, and uses a block table to keep track of it all.

 

The SVC's come in pairs, and are active / active clusters so that there is no single point of failure. You can add more SVCs to the group to extend the speeds and feeds in a nearly linear fashion. They are quite amazing.

 

We tested the SVC's using the Apple XSR storage, and the technical term for that is that they "rocked". Our test set up was 2 Apple XSR's. Each XSR was fully configured with 750GB drives, 14 in each one. The XSR uses one controller for each seven disks, and the two controllers are not aware of each other, so this is essentially two disk arrays in each tray. We set each stripe up as RAID5, with one hot spare, which leaves five data disks in each stripe. 4 controllers = 4 stripes = 20 data disks. The SVC created VLUNS over the top of all of this.

 

The Apple XSR has one very annoying feature: when setting up LUNs across a RAID group, they cannot be of different sizes. You can pick the number of LUNs, but the XSR sets them all up equally sized. Here is another way the IBM SVC can be very useful, since the VLUNs are created out of block pools.
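
Here is a sketch of the block pool idea: carve VLUNs of whatever size you like out of a pool of equal-sized extents, which is roughly why the SVC does not care that the XSR only hands out equal-sized LUNs. The 16MB extent size is an assumption for illustration only:

# Sketch: carving arbitrary-size virtual LUNs from a pool of equal-size extents.
# The 16MB extent size is an assumption for illustration.

EXTENT_MB = 16

class ExtentPool:
    def __init__(self, total_extents):
        self.free = list(range(total_extents))   # indexes of free extents

    def create_vlun(self, size_mb):
        needed = -(-size_mb // EXTENT_MB)        # ceiling division
        if needed > len(self.free):
            raise RuntimeError("pool exhausted")
        extents, self.free = self.free[:needed], self.free[needed:]
        return extents                            # the VLUN's block map

pool = ExtentPool(total_extents=1_000_000)       # a ~16 TB pool at 16MB extents
root_vlun = pool.create_vlun(10 * 1024)          # a 10GB '/' LUN
data_vlun = pool.create_vlun(500 * 1024)         # a 500GB data LUN, different size
print(len(root_vlun), len(data_vlun))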

 

The IBM SVC also acts as a write cache (in addition to any cache the devices themselves might have), so the VLUNs it presents appear to be quite speedy. When you consider that writes can be spread across as many arms as you care to put behind a VLUN, they actually *are* speedy, so this is not just an appearance. IBM currently owns the high water mark for speeds and feeds in the virtual storage sub-market:

 

http://www-03.ibm.com/press/us/en/pressrelease/21856.wss

 

I have brought all this up to point out two interesting things:

 

  1. The IBM SVC *is* certified by VMWare

  2. The IBM SVC will use the Apples, but only as a generic block device. The SVC has certified certain disk devices at different service levels, and the SVC needs to create a special VLUN for a cluster quorum disk, and it will not use the Apple XSR for this because they are not certified.

 

So... you CAN use the Apple XSR in a certified way with VMWare, but you need at least one certified disk set to keep the SVC happy and, more importantly, safe. This *is* all your data we are talking about here.

 

The SVC also lets you implement classes of storage, so you can use (for example) the Apple XSR blocks as a place to keep mirrors, snapshots, or perhaps templates, but put the running VMs on what is more normally considered "Enterprise" storage. You can save a serious amount of money this way, at least as long as your "farm" is big enough that the cost of the virtualization device is offset by the savings of being able to use tier II storage for some things.

 

Big IBM SVC caveat: the SAN Volume Controller assumes that you have already done the right thing as far as data reliability. It does not implement things like RAID at the virtual layer. One way to look at that is that your VLUN is only as stable as the least reliable disk in the block data stripe. I believe that other storage virtualization products implement RAID at the virtual layer, and that would have some real advantages. With the SVC, plan for disk failures. Failures are not the end of the world, if they are planned for.
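
Some hedged arithmetic on why that matters: if nothing below the virtual layer adds redundancy, losing any one back-end piece loses the VLUN. The 1% annual failure rate is just an assumed number to show the shape of the curve; in real life each back-end "piece" should itself be a RAID5 array, which is exactly the "do the right thing first" part:

# Illustrative math: with no RAID at the virtual layer, losing ANY one of the
# underlying pieces loses the VLUN. The annual failure rate is an assumption.

per_component_afr = 0.01      # assumed 1% annual failure rate per back-end piece
for n in (1, 10, 100):
    p_vlun_loss = 1 - (1 - per_component_afr) ** n
    print(f"{n:3d} components -> ~{p_vlun_loss:.1%} chance of losing the VLUN in a year")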

 

It also might be a good idea not to tell everyone you don't really know where their data actually is....

 

Truth be told, we have not yet pulled the trigger on storage virtualization in the R&D production environment. We are not quite ready to, but growth is going to force the issue sooner or later.

 

Before I leave this part, one last thing about SATA versus SCSI or FC. In our experience of failure rates (and it is nothing like Google's scale in this area), SATA based disks do fail more often. Once you get past "Infant Death Syndrome" (the tendency for new electronic gear to either fail early, within a few weeks, or last a fairly long time), SATA disks do seem to fail more often in mid-life than SCSI or FC. To use SATA means always using RAID, hot spares, cold spares, monitoring, and generally planning for failure. Even with all that done right, they can be worth the money one saves, at least in a large scale deployment.

Is It Worth It? This VM thing?

 

It can be. There are new problems to deal with in the virtual world, but if you come from a VM mainframe shop, you already know what they are. The number one issue: VMware sprawl. It is just so easy to create VMs to test every little thing that before you know it you are hip deep in the things. The good news: everything is getting tested. The bad news: that SAN... even that lovely storage-virtualized, multi-tiered unit with massive terabytes of capacity... is full. It can not be helped: you will need a policy about the life cycle of a virtual machine. When it is born, you should already be planning its funeral. Hey... that's really BSM / ITIL'ish!

 

The other thing that is true is that a big central service needs a capacity plan. We are deploying BMC's own Performance Assurance product (formerly Best/1) across our VM farms so that we can start figuring out where the bottlenecks are, and where the biggest bang for our IT buck is when it comes to improving the service. That is pretty BSM / ITIL'ish too!

 

In the traditional story of IT skill sets and the evolving Data Center, you may be able to run this setup with a higher ratio of OS images to System Programmers / Admins, but the people doing the work do have to be more skilled than ever.

 

There has been one benefit of the organic growth though: with the capacity plans we can show that fewer, bigger machines are better than more, smaller machines, and that should let us retire *from* the VMWare farm the first wave of smaller systems we used to figure the whole thing out in the first place. This is great for us, because there are some classes of needs that just do not work in the virtual world. They need real, dedicated hardware. By re-using the first, smaller VMWare servers for those needs, I can either defer other new purchases or, even better, retire some of that 1998 gear next. It is a win either way.

And finally...

 

Hopefully this post did not come off as a commercial for VMWare, Apple, or IBM (or, now that I think about it, Sun or Dell). In order to keep this real, I talked about gear we actually tested and used, but there are other storage virtualizers, other inexpensive disk arrays, and other OS virtualization solutions. For example, I use Parallels on my Mac and I love it. I could do the same thing with Xserves and Parallels. The only problem I would have with an Apple Xserve solution is that there is no larger 2U or 4U version of the Xserve with even more CPUs available.

 

We have tested Xen and like many of its features. Virtual Iron seems to be a very mature and usable product with which you could do most everything I describe here, and with which I theorize you would have the same sets of issues, especially about VM image sprawl. It happened in the 1970's and onward on the mainframe with VM/370 and its children. It will happen again in this new space. Everything that is old is new again.

Steve Carl

Virtually Greener

Posted by Steve Carl Jan 31, 2008
Share This:

While Virtualization has received all sorts of attention, and more than its fair share of hype, there are real savings to be had with it, and not just of money.

 

This is an update to "Real World Virtualization" from June 15th, 2007.

 

It would be neither hyperbole nor understatement to say that Virtualization gets a great deal of press. A huge amount. Volumes and volumes. As a VM system programmer I find most of it to be slightly amusing: it ranges anywhere from flat out wrong to claiming to cure all the ills of the data center, and cancer besides.

 

What is real is that Virtualization of X86 hardware can save a company a great deal of money, and even better these days, a great deal of power. I already ran the numbers in "Real World Virt" so I am not going to beat that to death. Today I just want to report a real world result:

 

100 KVA

 

 

Since June of last year, that is how much power we have dropped off the primary R&D Central UPS by decommissioning servers. Real servers. We do *not* have fewer OS images running around here. Quite the opposite: we have more. What we have fewer of are 1993-1995 vintage, slower computers with older, less efficient power supplies, which we were using for things like synthetic workload generation and various other non-benchmark, non-device-driver related applications. Literally hundreds of computers have left the building.

 

Not all of these are X86 either. Some have been IBM Power series stuff, where the new P5 generation gear has allowed us to begin to do virtualization of more recent AIX images as well.

 

100 KVA can not easily be turned back into an exact number of watts without knowing what kind of power supply each decommissioned computer had. A power factor of .8 is probably a good round number though, strictly based on experience with this gear mix in the past. That means 80 kilowatts, or 80,000 watts, have "left the building". An 80 KW reduction is 160 pounds of CO2 reduction each and every hour the gear is off (assuming coal as the power feedstock). 3,840 pounds per day. 1,401,600 pounds per year. Halve those numbers for natural gas as the power generation feedstock, but even then, that is a very serious and very real reduction in the amount of CO2 being added to the atmosphere by R&D activities at BMC. And this is just Houston: we have been doing the same thing at our other R&D locations across the globe.
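
For anyone who wants to check my arithmetic, here it is spelled out. The 0.8 power factor and roughly 2 pounds of CO2 per kWh for coal are the same round numbers used above:

# The arithmetic from the paragraph above, spelled out. The 0.8 power factor
# and ~2 lbs CO2 per kWh for coal are the same round numbers used in the text.

kva_removed  = 100
power_factor = 0.8
kw_removed   = kva_removed * power_factor             # 80 kW

lbs_co2_per_kwh_coal = 2.0
per_hour = kw_removed * lbs_co2_per_kwh_coal          # 160 lbs/hour
per_day  = per_hour * 24                              # 3,840 lbs/day
per_year = per_day * 365                              # 1,401,600 lbs/year

print(f"{kw_removed:.0f} kW off the UPS -> {per_hour:,.0f} lbs CO2/hr, "
      f"{per_day:,.0f}/day, {per_year:,.0f}/year (coal; halve for natural gas)")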

 

This of course will also save BMC money, and it is funny how things like that work sometimes: In the push to become more efficient at one thing, the "Law of Unintended Consequences" occasionally has a bright side to it. This one works either way too: Push for cost and space savings, do something good for the whole world. Try to do something good for the Earth (or at least the life forms that live on it), save some money along the way.

 

PS: Other than P series, I did not mention the technology stack we used for Virtualization here. Mostly that is because it does not matter: You can do this same thing with Virtual Iron, VMWare, Xen, etc. The specific technology of virtualization matters less than just doing it. I do have a post with more specifics coming up about our setup.

Steve Carl

One Week Later

Posted by Steve Carl Jan 25, 2008
Share This:

Quickie update on the Kernel Hackage post

 

I just wanted to get a quick update out before the weekend started on how the hacked CentOS NAS server is doing. See http://talk.bmc.com/blogs/blog-carl/steve-carl/Kernel-hackage for a complete description of the problem.

 

The patient is fine, and is resting comfortably. It does not appear to have suffered for our addition of code. HP-UX clients are burbling along happily, as are all the bajillion others.

 

A postmortem of how this slipped through showed that HP-UX clients running Connectathon ( http://www.connectathon.org/ ) work just fine against the unpatched CentOS. Something about Connectathon does not use the same code path to create a file as the regular HP-UX utilities do. Since we do not have source to HP-UX, we can not even begin to guess what that is. Bottom line: Connectathon certification of a new NAS server is no longer sufficient by itself: we will need to do some manual certification too.
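
By "manual certification" I mean something along the lines of the little sketch below: create files from more than one code path on a client and make sure each one works against the mount. The mount point and the specific checks here are hypothetical, just to show the flavor:

# A sketch of the kind of manual check to add on top of Connectathon:
# create files from more than one code path and make sure each succeeds on the
# NFS mount. The mount point below is hypothetical.
import os

MOUNT = "/mnt/nas-under-test"   # hypothetical NFS mount of the new server

def try_create(name, flags):
    path = os.path.join(MOUNT, name)
    fd = os.open(path, flags, 0o644)   # raises if the server misbehaves
    os.write(fd, b"hello\n")
    os.close(fd)
    os.unlink(path)
    return True

# open(2) with O_CREAT, and the O_CREAT|O_EXCL variant some servers handle differently
try_create("plain_create", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
try_create("exclusive_create", os.O_WRONLY | os.O_CREAT | os.O_EXCL)
print("basic create paths OK")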

 

The good news is that now it acts just like it should. Quiet, no trouble, acting like a cluster should and failing over things when it ought to. We'll let it cruise along a bit longer and then migrate a few more things to it.
