
A Good Failed Testing Plan

Posted by Steve Carl Sep 28, 2007
Testing assumptions about Evolution's stability issues.


Last time I posted about running Evolution 2.10.3 under PCLinuxOS, and the stability issues I was starting to see after about three days of using Evolution's Exchange Connector against MS Exchange 2003. I said in that post that I thought it probably was not a PCLOS issue, but an Evolution 2.10.3 issue. In the same post I talked about the other oddities I was seeing with PCLOS on the Dell D620 laptop: ACPI reporting insane fan speeds, and the system occasionally going to sleep right as I was typing away at emails. Neither problem was serious, but the going-to-sleep thing was disconcerting. Talk about having a train of thought derailed! Was my computer making value judgments on the quality of the email I was writing?


A comment that was posted to that entry told me I needed to try a newer kernel by updating the apt repositories and enabling "testing". That seemed very reasonable, so I did, and installed and booted the 2.6.22 kernel (same as Fedora 7's, more or less).


Not one thing changed. Nothing. From my point of view, 2.6.18 and 2.6.22 ran on the D620 in exactly the same way. ACPI was still whacky, and the system still went away for sleepytime from time to time while I was using it.


"What if this is a hardware issue?" I started to wonder. Mint 3.0 had never done any of this on this exact same computer, but this could be something new. Maybe someone spilled a drink into the laptop while I was not looking. I needed another data point. First I installed bug-buddy and the available symbol files for Evolution on PCLOS, and submitted the next Evolution Connector failure.


Having done what I could there, my thinking was that I would collect the same data from a different distro. Late last Friday, I installed Fedora 7 with the 2.6.22 kernel, to see what changed.


Everything changed.


Well, duh. I suppose. I think it is somewhat reasonable to assume that a 2.6.22 kernel from PCLOS and a 2.6.22 kernel from Fedora would have some passing resemblance, and they do. But the things that are different are just not what I was expecting.


First of all, now I cannot even see the fan speed in ACPI. I was using KDE with PCLOS, and now Gnome with Fedora 7, so that is not totally apples to apples. I'll need to fire up KDE in a spare moment to see how that might be different.


The system does not go off to sleep randomly anymore. That could again be KDE I suppose. It has power management awareness. Maybe it was confused.


Suspend worked way better on PCLOS. So did Beryl.


The default fonts of KDE on PCLOS are much easier to read than the default fonts of Gnome on F7. Again, not apples to apples.


And.... Evolution has stopped failing now that I am on F7. I didn't even erase the .evolution directory when I converted from PCLOS to F7. F7 has now worked for one solid week without a single failure. And the package levels look to be the same. These are the ones I have on F7:


rpm -qa | grep -i evolution
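The package-level claim can be checked mechanically. A minimal sketch, where the manifest files and package names are placeholders; in reality each file would come from running the rpm -qa command above on that machine and saving the sorted output:

```shell
# Stand-in manifests; in reality each file comes from running
# `rpm -qa | grep -i evolution | sort > pkgs-<host>.txt` on that host.
printf 'evolution-2.10.3\nevolution-exchange-2.10.3\n' > pkgs-f7.txt
printf 'evolution-2.10.3\nevolution-exchange-2.10.3\n' > pkgs-pclos.txt

# No diff output means the package levels really are the same.
diff pkgs-f7.txt pkgs-pclos.txt && echo "package levels match"
```

If the versions really do match, that points the finger at something below the Evolution packages themselves.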


This was not at all scientific. I changed way way too many variables. But it is kind of interesting. It also changes my plans. I was thinking this weekend I'd be putting Mint 3.1 (now GA) onto the D620. Not any more.


I did put Mint 3.1 on my Acer 5610, and other than the new artwork, it runs and acts in every significant way like Mint 3.0 did. That is to say, terrific. Mint 3.0 was having Evolution stability issues the same as PCLOS. Mint 3.1 ships the exact same packages. The biggest advantage of Mint 3.0 or 3.1 to my diagnostic way of thinking was that Ubuntu 7.04 (which Mint is 100% compatible with) has a fairly complete set of .debs for Evolution debugging: symbols included in everything but the part I care about: Connector.


If Evolution under F7 is going to be stable, though, then I am staying here for a while. I can live with crummy suspend if Evolution works! If it crashes and looks the same, I'll submit against my open bug report. If it doesn't, I'll have to come up with a new theory, and a new way to test it.

A week of using PCLinuxOS as the main office desktop and the office "Killer App"


After about a week of using PCLinuxOS (PCLOS) as my primary Linux desktop at the office, I have an update to my last post.



If you are a Linux person living with MS-created infrastructure, then one of the things you have to deal with is how to cross-calendar and email with the MS Windows users around you. MS has of course made it all "easy" as long as you do not stray from their application stack. I say "easy" because actually MS has to work very hard to maintain that. They have a pile of RPCs and undocumented-to-the-world programming interfaces that all the applications have to know about, and use or at least tolerate correctly, in order to make something like MS Outlook work with MS Exchange. Put a sniffer on the line and watch the conversation MS Outlook has with MS Exchange sometime. It is amazing. What *are* they talking about? :)


There is hope on the horizon: with IBM's announcement the other day about Lotus Symphony, plus things like Yahoo buying Zimbra and Google's apps stuff (especially now that Google Apps has added collaborative presentations), it seems likely that MS will have to respond to all of this, hopefully in ways that give users of Linux, OS X, and other platforms easier access to their infrastructure without issuing messages of second-class citizenship.


What I mean by "second class citizenship" is that I often see messages like "You are not running IE: this function will not work. Use IE for a fully featured experience." when I am accessing something on MS Sharepoint. It is true as far as it goes: I am in fact not running IE. Might that be because I am not running MS Windows either?


In this day and age there is no technical reason to *have* to run any particular browser. Any standards-compliant modern browser should get it done. There are plenty of proof points out there. Gmail works fine from Firefox or Opera, and has every bit the dynamic screen content the stuff in Sharepoint does. From a feature / function point of view, everything that Sharepoint or the like does *should* be available to Linux or OS X users on an equal basis. All it would take is being Open: using standards and coding tools that are inclusive rather than exclusive. The humorous thing (to me anyway) is that something like Sharepoint, a supposed collaboration tool, is exclusive in its functionality. Its hidden message: "You can collaborate with us, but only if you are one of us." But as usual I digress.


Evolution Not Evolving


Until something good happens here, we Linuxii must interact with the tools at our disposal. And for MS Exchange, that is Evolution.


Evolution is an optional install on PCLOS, and is packaged up especially for the Distro by the PCLOS packaging team. Here are my current relevant packages:

rpm -qa | grep -i evolution


The first thing I noticed about using Evolution under PCLOS is that the packager appears to have fixed a long-standing Evolution-under-KDE issue. Perhaps it was fixed in Mandrake or some other KDE place and adopted by PCLOS: I am not sure. PCLOS is very inclusive when it comes to where they get their fixes, according to the interview with Texstar in the current issue of the PCLinuxOS magazine, posted on their website in HTML format. The fixed problem is the "Where is the password?" screen prompt behavior / bug / loss of focus. On Ubuntu or Mint or Fedora running KDE, bring up Evolution with the MS Exchange connector, and the password prompt disappears behind the main Evolution screen. Unless you know it is there, you will just sit there waiting for something to happen. With the PCLOS-packaged version of Evolution, the password prompt appears, and the main Evolution window then opens up *behind* the password prompt dialog. The prompt dialog loses focus but stays on top where you can see it. That is the way Evolution works under Gnome, so this is goodness.


Evolution looks nice: the fonts are well done and match the rest of the KDE desktop. Anti-aliasing works well on the Dell D620's LCD panel. Stability was good for the first two days. I did not even start with a clean .evolution file in my home directory, but used the one Mint had created.


Evolution started crashing on day three. Well, not Evo itself, but the back-end "Connector" process. Same difference, effectively, as I had to restart Evolution to get things back. I put up with that for a couple of days, and then I tried deleting the Evolution information in .gconf and .evolution and redefining the connection, and went all of about an hour before Connector crashed again. This is exactly the same thing that was happening with Mint 3.0 on this same computer, so I am inclined to say that the Evolution Exchange Connector is still not as fault tolerant as it needs to be. Or maybe it is not as "Microsoft tolerant" as it needs to be. Or both. Recall that Evolution is not talking to MS Exchange via MAPI and the mish-mash of various RPCs that MS uses to make Outlook connect. Instead, Evolution's Connector talks to Outlook Web Access, or OWA. WebDAV, the Open Standard transport that underlies the current Connector-to-OWA code, may be a standard, but that does not mean that OWA is 100% WebDAV standards compliant, or even that the WebDAV standard was not written with some wiggle room in a couple of places that allows vendors to create incompatible versions of WebDAV.
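For the curious, the WebDAV conversation is plain HTTP with XML bodies. A hand-written sketch of the general shape of a request Connector might make; the host and mailbox path here are invented, and real Connector requests ask for Exchange-specific properties beyond the standard DAV: ones:

```
PROPFIND /exchange/someuser/Calendar/ HTTP/1.1
Host: owa.example.com
Depth: 1
Content-Type: text/xml

<?xml version="1.0"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:displayname/>
    <D:getlastmodified/>
  </D:prop>
</D:propfind>
```

Any wiggle room in how a server is allowed to answer requests like this is exactly where "standards compliant" and "interoperable" can part ways.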


Attention Deficit Syndrome



I don't know which of the many possibilities is the root of the problem. What is clear is that the Exchange Connector does not get the same development attention from the Open Source community that other MS interop software like Samba does. Samba, a complete re-implementation of one of the most complicated, most historical-baggage-laden protocols ever (SMB), works like a champ. It can in fact out-perform the same protocol in its supposed native OS, or at least it did last time I benchmarked it. This is an example of what is possible when a great deal of time, energy, intelligence, passion, and Open Source community collaboration are focused on a problem.


Connector continues to suffer by comparison. I do not know why. It would seem that the need is strong: MS Exchange has a huge chunk of the mail server market. The law of averages would seem to indicate that critical mass was reached ages ago, so there should be plenty of people needing MS Exchange interoperability from Linux.


Email is easy enough, of course. If all I needed was email against MS Exchange, I'd just load up Sylpheed or Thunderbird or Mulberry or any of a bunch of other POP / IMAP capable email clients, and be done with it. But part of the MS Exchange exclusion factor is calendaring. For some reason folks like the integrated email / calendar functions, and POP and IMAP know nothing about calendaring.
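The open-standard answer to that gap is iCalendar (RFC 2445), which layers meeting data over plain email. A minimal meeting request looks roughly like this (the addresses, UID, and times are invented for illustration):

```
BEGIN:VCALENDAR
VERSION:2.0
METHOD:REQUEST
BEGIN:VEVENT
UID:example-12345@example.com
ORGANIZER:mailto:organizer@example.com
ATTENDEE;RSVP=TRUE:mailto:attendee@example.com
DTSTART:20070928T150000Z
DTEND:20070928T160000Z
SUMMARY:Status meeting
END:VEVENT
END:VCALENDAR
```

POP and IMAP will happily carry that as an attachment; the catch, as noted above, is getting MS Exchange and Outlook to produce and honor it.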


At the same time, MS Exchange 2003 doesn't work with WebCAL-enabled open clients. If you want to accept a meeting that an MS Windows user created, you either have to load up the calendar in your web browser, or use the Evolution Connector. In another subtle message of exclusion, OWA will look to see if you are running IE or not. If you are running anything other than IE, you can see and interact with MS Exchange still, but in a flatter, less dynamic way. At least you don't get any messages about being a second class citizen from OWA.


OWA makes a great deal of effort to look a lot like the MS Windows native MS Outlook application. I suppose that is a comfort to some, but I dislike the extra time it takes to load the look and feel stuff. Gmail runs rings around OWA in look, feel, functionality, inclusion... too bad there is not a Google appliance that translates OWA into Gmail. That would be sweet.


What's up, Doc?



So... why the relative lack of attention to a seemingly critical piece of technology like the Exchange Connector? Why the abends? How come we are years and years into this, and still dealing with stability issues like this? It is clearly not a PCLOS issue. I have the same sets of issues on Fedora, Ubuntu, Mint, FreeSpire... you name it.


What I have not tried is something like Novell Desktop 10. Right or wrong, I assume that since Open Source is what it is, Novell will not have implemented anything differently here. In fact, since Novell sells Groupwise, I have assumed that Exchange Connector is not something the packagers of that distro will have focused on. If they had, and if they had done anything to make it work better there, why would that not have made it back out to the community?


One possible explanation (and I am totally guessing here) might be that the Open Source community isn't spending enough time on this because they are focused instead on other mail servers. Hula or Bongo perhaps? Open-Xchange? Citadel, Kolab, Scalix, or Zimbra? Another I don't recall? Something where the work of building something this complex is simplified by being able to use openly defined protocols. Where the difficulties of testing and integrating are not complicated by also having to figure out the on-the-wire protocols by trial and error. Read the early Samba story and see what craziness they went through to get things working.


Here is an area where I usually try to help, by loading up Bug-Buddy and the debugging versions of things so that I can send diagnostic data back to the people who know the products well enough to fix the problems. That might be a problem with PCLOS. In Synaptic, running against the default PCLOS repositories, I did not see the debugging, symbols-included versions of Evolution and Connector. Without those, Bug Buddy's submissions are pretty information-free.
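For distros that do ship them, pulling in the symbol packages is a one-liner. A hedged sketch; the package names below are from memory and vary by distro and repository, which is exactly the gap PCLOS had:

```shell
# Pick likely debug-symbol package names based on which package
# manager is present. Names are illustrative and may not match
# what a given repository actually carries.
case "$(command -v apt-get || command -v yum)" in
  *apt-get) pkgs="evolution-dbg evolution-data-server-dbg" ;;
  *yum)     pkgs="evolution-debuginfo evolution-exchange-debuginfo" ;;
  *)        pkgs="" ;;
esac
echo "debug packages to try: ${pkgs:-none known for this package manager}"
# Then, e.g.: sudo apt-get install $pkgs   (or: sudo yum install $pkgs)
```

With those installed, Bug Buddy's backtraces carry function names instead of bare addresses, which is what makes a submission useful to a maintainer.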





That might be the death of PCLOS on my office desktop. I think the time has come to do some research and find out which distros have what I need so I can submit some Evolution bug reports.


If this gets fixed, that is, if Evolution Connector can stabilize, it should help the Linux adoption rate in ways that little else can. If Linux is to make inroads on the corporate desktop, it has to do so knowing the realities of the data center software lifecycle. Companies have huge investments in things for MS Exchange: backup programs, server farms, virus scanning, spam blocking, email re-directors for satellite email devices like Blackberries. Linux desktops have to live in the current world until time and technological movement take their toll and make these issues moot.





There are other reasons I might soon be done with PCLOS. There are many, many things I like about the Distro, but a few things are starting to drive me nuts. One is that over the last week, at least four different times, while I was right in the middle of typing an email, the Dell D620 laptop just went to sleep. It was plugged in, the battery fully charged. There were no power interruptions or other external state changes. I press the power button and it comes right back, but why in the heck did it do it in the first place?


The year-old kernel might be part of the problem. The D620 has already been replaced by Dell with a new model, but a year ago it was a brand new laptop. Perhaps it has some new hardware in the guts of the laptop case someplace that the 2.6.18 kernel did not yet include stable support for in September of 2006, when that kernel was new. ACPI is obviously whacked. My internal fan is just not turning at 8000 RPM.


In the good news column are CrossOver and VMware. Both installed easily, and in fact VMware Player 2.0.1 compiled with no error messages. That might be a first. The older kernel probably operating in its favor. The guest seems crisper under PCLOS than Mint too, but that is utterly subjective. Might just be that "newly washed car" phenomenon in action.


Posted by Steve Carl Sep 16, 2007

Looking at PCLinuxOS as a Corporate Desktop


In a post I made a while back, the Linux distro PCLinuxOS showed up in the comments. When I stuck my foot into it in a reply, the creator of the distro, Texstar, jumped in to set me straight. My mistake was to say that PCLinuxOS was now based on Debian. Another poster pointed out that I needed to do my research better. Too true. I had two data points.


  1. I had read someplace on the Internet, on a forum, that PCLinuxOS was now Debian based, and
  2. I had looked at PCLinuxOS briefly when I was trying to decide what Distro to use for my last LinuxWorld lab, saw that it used Synaptic, forgot that there is an RPM version of that tool, and moved on.

It is even worse than that, though: the article I read that gave me this idea in the first place is lost to history, but I saw a post at Desktop Linux that reinforced it:


“Once installed, more than 5,000 additional packages are available through PCLinuxOS's Synaptic software manager and file repositories. This works essentially in the same way as Debian and Ubuntu's update system.”


I stopped reading there. Had I just finished the paragraph, the very next sentence was:


“However, instead of using deb packages, it uses RPM. Thus, PCLinuxOS users must use their distribution's own repository -- it is not interchangeable with the Debian or Ubuntu program libraries.”


Well fine. Just fine. IRNIdjut. I normally do actually research stuff better than that. Really. I should have known better: a few years back, I used an early release of PCLinuxOS in the Linuxworld lab as the class's bootable LiveCD. I only stopped using it because a newer version of PCLinuxOS dropped Evolution, for space-on-the-LiveCD reasons. I needed Evolution for the lab.


Even worse: I used to be a [RPM-based] Mandrake user! I used Mandrake for years. Texstar used to package up all sorts of things to make Mandrake better, and was one of the most popular packagers for Mandrake "back in the day," if not the most popular. [Mandrake is called Mandriva now.] Worse: Texstar even lives in the same city I work in! I should know his stuff better than this. Argg!!! This is just embarrassing, to say the least.


I Have No Choice. I Must Do Research.


Having stepped into this royally, the only way to be able to look in the mirror is to test PCLinuxOS 2007. One of the comments to my post about PCLinuxOS asserted that PCLinuxOS is 100 times better than Mint. I am surrounded by Mint systems. Mint is based off of Ubuntu, and an article in the PCLinuxOS Magazine September 2007 Issue 13 is titled "Ubuntu's Hype is Misleading".


I am always open to change. Starting back in the post "Most Popular Linux Desktop?", I wondered what the hype around Ubuntu was all about and set about testing it. I have ended up using Mint as my primary Linux desktop ever since.


I decided to use the Dell D620 that I posted about previously. Mint had been on the unit for about a month. Everything was working more or less in a well-known way. Evolution was the main pain point, since it crashed fairly often, at least until I figured out a new way to set it up (I pointed it at the OWA web server rather than the MS Exchange server for the WebDAV connection. That made it run much better. No idea why yet.). The other, slightly lesser issue was that the wiggle stick and the track pad used different mouse acceleration profiles. I could not get them synced to save my life. The wiggle stick was fast; the trackpad slow and therefore mostly useless. The computer itself is speedy fast, what with its T7200 Core 2 Duo.


Mint on the D620 grabbed the CD image from a mirror on the net, burned it, and then moved out of the way on the hard drive to make room for PCLOS (PCLinuxOS's official abbreviation). Aside: interesting that there is no LiveDVD version of PCLOS. There is clearly a large number of packages in the repository that they could put on by default if they had a LiveDVD version. From my last LinuxWorld lab I learned most everyone is able to boot a DVD these days. That would be nice, because then they could put Evolution back on the install media, and I would have another option for the lab!


LiveCDs are the Best-est


Any Linux Distro that does not have a LiveCD/LiveDVD version is missing the boat. It is so nice to be able to test the distro *from* the CD, to be sure there are no major issues before you roll the Distro down. As easy as Linux is to install these days, I still spend a fair amount of my after-hours life doing these types of things, and the LiveCD is just too nice to not use. Even Fedora 7 is on board with a LiveCD you can install from! PCLinuxOS booted, asked a few questions about the network, and I was up.


The LiveCD gives you the option of logging in as "root" or "guest". I go in as 'guest', password 'guest', and a normal-looking KDE desktop with some distro-specific artwork appears. There is an icon on the task bar to start downloading updates (Synaptic, but labeled "package manager")... which is odd in a way. I'm running off a LiveCD. My read/write space is limited by the ramdisk. There is no way I could put new packages on this while it is running as a LiveCD. Next to the Synaptic icon on the taskbar is something that anyone who has ever used Mandrake knows: the DrakConf utility. I'll need to use both of these once I am running PCLOS installed off the hard drive, so I note their presence and move on.


The trackpad and wiggle stick are accelerated the same. Nice.


Finding no issues running PCLinuxOS from the LiveCD, I clicked on the installer icon, reworked the disk layout to preserve my /home, and watched the D620 drain the LiveCD onto the hard drive in short order. Screaming fast install. Here's the disk layout:




Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1824    14651248+   7  HPFS/NTFS
/dev/sda2            1825        3040     9767520   83  Linux
/dev/sda3            3041        3283     1951897+  82  Linux swap / Solaris
/dev/sda4            3284        9729    51777495   83  Linux

On the Dell D620, sda1 is the BMC corporate MS WinXP image. sda2 is "/". sda4 is "/home". I formatted sda2 and sda3 (swap). As a first challenge to PCLOS, I was going to leave all my files from Mint lying about my home directory. Mint was set up with both Gnome and KDE, so all the config files from a month's use were there. There was no issue at all with anything. KDE (the default GUI of PCLOS) fired up, I went in, and everything was great. Well. Mostly.


Intel Graphics Chip


Like Ubuntu, but not like Fedora or Mint, PCLOS did *not* set up the Intel graphics chip correctly at first. Worse, the Drakconf utility that lets you fix this was nowhere to be found on the desktop anymore. It was there on the taskbar when it was booted as the LiveCD: now it was gone. It actually took me a few moments to dig up what the config utility is called (it has been a while since I used Mandrake), and fire it up from the command line. *Now* it noticed I needed to install the "915resolution" package to deal with the hardware. I said it could. Download. Install.


  • Logged out, restarted X, logged back in. Still 1024x768. Another trip into Drakconf, and I reset the panel type from 1024x768 to 1280x800.
  • Log out. Restart X. Still whacked. Back into Drakconf. Go into "Screen Resolution". 1280x800 is not an option. 1280x960 is. At the bottom is the word "Other". I pick that, and now it redraws the menu with all sorts of resolutions, including the one I need.
  • Exit, out. In. Now the fonts look like crud. KDE Control Center can fix that: turn on anti-aliasing, force the DPI to 96, save, exit.
  • Come back in, and now it looks right.
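Under the hood, all of that clicking boils down to the Modes line in /etc/X11/xorg.conf; 915resolution patches the video BIOS's mode table so the X server will accept the panel's native mode. A sketch of the relevant fragment, with the identifier invented:

```
Section "Screen"
    Identifier "screen1"
    SubSection "Display"
        Depth  24
        Modes  "1280x800" "1024x768"
    EndSubSection
EndSection
```

With "1280x800" listed first, the X server tries the panel's native resolution before falling back to 1024x768.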

This is not better than Mint so far. But after this it gets better...


Building the Perfect Corporate Linux


Launching Synaptic, running update, installing north of 200 updates (about the same ballpark, in terms of number of packages, as the first time I fired up Mint), rebooting, going into Synaptic again, and starting to tweak this out as a corporate desktop. I noted along the way that the Synaptic update moved Firefox up a version: that is goodness. I was afraid I was going to have to override the default browser install.


The PCLOS kernel is oddly old, at 2.6.18. That kernel version was released last September, 2006. Mint on my Acer is at kernel 2.6.20, Fedora on my Dell C400 is at 2.6.22, and current kernels elsewhere are at 2.6.22 as well. This only matters, of course, if I turn out to need one of the new features of the later kernels. So far, on the D620, that does not appear to be the case, other than possibly ACPI.


If you have to work with MS Exchange mail servers and do cross-calendaring with MS Windows users, then you have pretty much only one choice of email client: Evolution. Two, if you count Outlook Web Access and your favorite browser.


Evolution is not installed by default. PCLOS dropped it in favor of Thunderbird a while back. But it is still in the PCLOS apt repositories, and Synaptic can pull it down. Even though the kernel is "backlevel", the Evolution groupware suite is the same release (more or less) as Mint's: 2.10. Normally when I change out the OS level on Linux, I move all the Evolution dot files into backups and create all new ones, since Evolution has a history of not dealing well with upgrades. I took a flyer, brought Evolution up as-is, and it happily accessed the .evolution and other .gnome-ish files and worked like a champ. Out of the box. No issues so far. That might be a first.
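The "move the dot files into backups" ritual is a sketch like the following. The paths are the usual Evolution 2.x locations (an assumption; check your own home directory), and moving rather than deleting means the old config can come back if the new version misbehaves:

```shell
# Move Evolution's config aside instead of deleting it, so it can be
# restored later. Paths are the common Evolution 2.x locations.
stamp=$(date +%Y%m%d)
for d in "$HOME/.evolution" "$HOME/.gconf/apps/evolution"; do
    [ -e "$d" ] && mv "$d" "$d.bak-$stamp"
done
echo "backup pass complete"
```

Restoring is just moving the .bak directories back into place with Evolution shut down.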


I need other things: Avahi, HFS+, Quanta, Bluefish, NVU (yes... I know. I like to mess around with different HTML generation utilities, even though I almost always just fall back to Google Docs in the end.). I need Planner, and it is available.


At this point, I am noticing another thing: the package repositories are actually pretty fast. One of the reasons I stopped using Mandrake was that their repositories seemed to get slower and more painful to use every release. In fact, the Mandrake package manager was no great shakes back then (no idea now: I put up Mandriva 2006 last year, but on hardware so slow that I could not evaluate how fast anything was), so I can see why Texstar opted to use Synaptic instead when he forked Mandrake/Mandriva to create PCLinuxOS. These PCLOS repositories are on par with Mint's in terms of download times. Mint, being on top of Ubuntu, which is on top of Debian, has well over 20,000 packages available. PCLOS has fewer than that, at 6882. Does that matter? Not so far. Everything I have looked for was out there as an installable. Whatever is not there is so far not something I use.


Office Sweet


OpenOffice is nice and current at 2.2.1. Even better, on the KDE menu I can invoke the "Web Writer" mode separately from the "Writer" mode. This is the way it should be, IMHO: OpenOffice acts differently in those two modes, and Web Writer not only lets me edit the HTML directly, it tends to generate far cleaner HTML than OpenOffice Writer doing a "Save as HTML". That latter mode puts in all sorts of extra text layout directives that try to make the doc look exactly the same on the web as it does on the printed page. That means the HTML pretty much sucks rocks. It may be 100% standards compliant, but it is a mess to work with. Better than MS Word doing a "Save as HTML", I will give you that. That really is not saying much though, as MS Word is well known for creating bad HTML. Not sure about the new 2007 version. Never seen it. I suppose it is possible it got better there: the new version of Frontpage was supposed to be getting fixed in terms of its standards compliance. Maybe the same HTML engine? I digress....


On Screen, Mr. Sulu


When I pressed the tiny, dedicated volume up, down, and mute buttons on the D620 (special keys above the keyboard) when it was running Mint, an on-screen display popped up showing me what I was doing, and the keys worked. PCLOS does not appear to know how to deal with those keys. I don't really care that much. Kmix is easy to click on and tweak, but on the other hand, it is another thing Mint does that PCLOS doesn't. I poked around in Synaptic looking for a package that might not be installed, but nothing was apparent. Keywords like 'laptop' and 'Acer' and 'Dell' turned up nothing when searching in Synaptic. I was looking for Acer because there is a new project to make Acer's special keys work, similar to the IBM laptop package "tpb".


Install as Root


I do not like to log in as root. It's a security thing. I do it when I have to: single-user mode, FSCK, etc. But mostly I like to work from my userid and leverage sudo to power me through additional software package installs. Perhaps the missing DrakConf is not installed on my pedestrian account's taskbar by default, but is available on root's. Maybe it is buried on a menu someplace. I tried to "switch user" over to root, but the screen flashes brightly and I can't get in to look. I did not try logging out and then back in as root. The key point was that Drakconf was hiding, not that I might be able to find it on an extended search.


/etc/sudoers is not pre-configured by the install to include my account, so I fixed that, and I also added an icon button for Drakconf to the KDE task bar. I think it is pretty elegant that Ubuntu / Mint add the first defined user *automatically* to /etc/sudoers.
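For reference, the kind of line involved is a one-liner; the username below is a placeholder, and sudoers should always be edited with visudo, which syntax-checks the file before saving so a typo cannot lock you out:

```
# Added via visudo: grants the named user full sudo rights
someuser   ALL=(ALL) ALL
```

Ubuntu and Mint accomplish the same thing by putting the first user in a group (admin) that sudoers already trusts.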


I suppose this is a schism point for Linux: how separate do you keep root? How powerful do you make the default user account? On one end, some distros just let you into root and are done with it. Ubuntu/Mint hides root altogether, but sets up sudo cleverly so that by entering your password you can install things. PCLOS is more on the "use two accounts to do separate things" end of the spectrum.


I don't really know which one is better. In PCLOS, I made my userid able to do rootish things via sudo authentication. On Mint, I set root's password and configured things so I can log into root *if I want to*. These are all personal preference things, and as long as the distro lets me set it up how I like, I guess it does not matter much.


ACPI out to lunch


There is a little KDE app called Kima that PCLOS includes. Kima reads stuff out of ACPI and displays it on the task bar. Actually, I set up a second task bar: I moved the KDE default one to the top of the screen, added a new panel, and put Kima and the active tasks on the new bottom bar.


Kima or ACPI or both have lost their minds. Where Mint displayed sane fan speeds, CPU frequencies, and whatnot with its Gnome apps for such things, Kima says that the CPUs never drop off 2 GHz, and that the fan is running at 7974 - 8016 RPM. If the fan were running that fast, it would be a banshee of supersonic screaming, and the D620 laptop would be hovering. In fact, something is lying, because when I put my hand at the back left-hand side of the computer, where the Core 2 Duo heat sink is, there is only a tiny whisper of a breeze coming out of it. I assume that this is an effect of the back-level kernel. It is not serious, but it is annoying.
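One way to figure out whether Kima or the kernel is the liar is to read the sensor files directly, bypassing the applet. A sketch; the paths are assumptions that vary by kernel and driver (2.6.18-era kernels exposed fans under /proc/acpi, newer ones under the sysfs hwmon class):

```shell
# Dump every fan reading the kernel exposes into a scratch file,
# then report how many sensors were actually found.
scan=/tmp/fan-scan.txt
: > "$scan"
for f in /sys/class/hwmon/hwmon*/fan*_input /proc/acpi/fan/*/state; do
    [ -r "$f" ] && echo "$f: $(cat "$f")" >> "$scan"
done
echo "found $(wc -l < "$scan") fan sensor reading(s)"
```

If the raw files show the same 8000 RPM, the kernel driver is the one confused; if they show something sane, the applet is misreading them.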


That being said, here is another subjective observation to place into the pipe and smoke a bit: PCLOS seems faster than Mint. I have not measured anything, but it just feels crisper. That is not something I would have expected from a 2.6.18 kernel, since a number of the enhancements up the road have to do with laptops, ACPI, and the scheduler.


Suspend is a real hit or miss affair. This also worked better on Mint. I am thinking it is the new kernel again.




IMHO Beryl is fun, if not utterly useful. Ubuntu has announced that Beryl's soul-mate Compiz will be enabled by default in their next release [7.10, or "Gutsy Gibbon"]. PCLOS ships with Beryl and Compiz installed, and either can be enabled from the KDE Control Center. This is a very nicely integrated bit of work. I wish all the DrakConf functionality was in the KDE Control Center too. Or that KDE Control Center was tossed and all its functions put into DrakConf. Having two apps for system config is really not intuitively obvious. I digress: Beryl / Compiz is the point here.... I enabled Beryl.


Beryl runs well, is fast, is stable, and all the effects do not seem to slow the computer in the slightest. Mint was/is good in this regard as well.




When I installed PCLOS, I defined the wired interface. It was late at night, and I was at the office still. No wireless. Just Cat5. When I took the laptop home to continue the testing, PCLOS did not automatically decide that it should switch to the wireless interface instead. Mint does. Ubuntu does. DrakConf brings up the interface easily, sees the access point, and connects to it no problem. The Intel wifi hardware is no issue (unlike under Fedora).


I do not know whether PCLOS would do better here had I started by defining the wireless interface first. Clearly all the parts are in place. The hardware is located and defined to the OS. There just is not a process that looks to see which interface has a valid connection available. My iPhone can do that, switching between Edge and Wifi, and preferring Wifi when it is available.
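The missing logic is not complicated. Here is a minimal sketch of the kind of preference check I mean (pure illustration: the function and the interface names are hypothetical, not anything PCLOS actually ships):

```python
def pick_interface(links):
    """Choose which network interface to use, given a dict of
    interface name -> whether that interface currently has a usable
    link. Prefer wireless when it is available, like the iPhone does."""
    for preferred in ("wlan0", "eth0"):  # hypothetical interface names
        if links.get(preferred):
            return preferred
    return None  # nothing usable: stay offline

# At home, only the access point is reachable:
print(pick_interface({"eth0": False, "wlan0": True}))  # wlan0
# At the office, hard-wired with no wireless:
print(pick_interface({"eth0": True, "wlan0": False}))  # eth0
```

All the distro would have to do is run something like this whenever link state changes, instead of sticking with whatever interface was defined at install time.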




This whole test has given me an idea for my next post over on my other blog. I'll start on that as soon as I finish this. [Update from the future: Schism is now posted.]


It might have occurred to you that this is hardly a great comparison: Mint versus PCLOS is a Gnome-interfaced distro by default (KDE can be installed, and like Ubuntu, there is a KDE version of Mint) versus a KDE-default-interfaced version (and while I did not do it, it looked like I could install Gnome onto PCLOS with Synaptic). If someone says that PCLOS is better than Mint, what I cannot tell is whether they are at least in part saying that KDE is better than Gnome. This is part of why I mostly focused on things that underlie the GUI. I have no GUI axes to grind. I can use KDE or Gnome. I like them both. I know most folks have a strong preference for one over the other. My only statement of opinion in this regard is that I think people with MS Windows backgrounds will be more comfortable with KDE to start.


I am also not sure looking at PCLOS as a corporate desktop is a valid thing to do, any more than it was when I looked at Fedora for the same thing. The home page of PCLOS contains a history of PCLinuxOS, and that has the following line in it: "Since Mandrake was a trademarked name myself and others decided to name the livecd after our news site and forum pclinuxonline thus PCLinuxOS". The news site and forum has no history, mission statement, or why-we-are-here kind of thing that I can find, so I am going to make a supposition. The PC of PCLinuxOS is the same PC most people use to run OS's, i.e. PC = Personal Computer. There is no server edition. There is no paid-for support option; the home page link "About Us" in fact says:


Customer Support:




If you need customer support, please visit our friendly forums.


Nothing about central management tools either (although plenty of third-party stuff is out there for Linux that would probably work with PCLOS).


Mint has a helpdesk support option, and Ubuntu has Canonical support.


My point is that I just wrote an article about using a community-supported personal computing Linux version running KDE, and compared it to a completely different kind of Linux. They both have that chewy Linux goodness inside, but they are not really comparable in any other meaningful way. My one constraint, that I was looking at it as a Corporate Desktop, should have set alarm bells ringing.


Oddly, that initial criterion worked in PCLOS's favor here. Not having wireless at the office, the fact that wireless must be manually set up every time is no big deal most of the time. For me.


Go back through "Adventures", and in posts too numerous to link here you will probably get the idea of where my thinking is on what the killer application is for a Linux corporate desktop. Well, two of them: OpenOffice is a given. The other is Evolution. If you are an MS Exchange shop, and you don't want to live with a web browser interface to email and calendar, then Evolution is a must. PCLOS's version of Evolution (even though it was not on the default install) has so far been more stable than Mint's, and that means I'll keep PCLOS on the Dell D620 for now.


At least until Mint 3.1 ships.


A batch of updates about various issues raised by last week's post. The news is all good too!


Last week I posted about some things we were up to in the Network Attached Storage area, and this post was new in that it was not about what we had already done, but what we were actively doing. I don't usually let it get out there that I don't know everything, but this seemed like a good time to get the truth out :)


Part of the idea of posting this way was to get some information out there as soon as possible, and to get feedback from any others that might be in the same boat as us. This week I have a batch of updates to various things in that post. I will not spend as much time on background as I usually do: you are either going to care about this and already know about it, or have no idea what I am talking about and will go see if I posted something more interesting over on my other blog. I haven't.... For one thing, I have been traveling all week, and have not had time.


This post would have been up sooner, but I was waiting to hear back on permission to post some things that were mailed to me, and also to see how an experiment turned out. The experiment is probably of more interest to the general Linux folks, so I'll cover that first:


In the aforementioned post, I pointed at a Bugzilla link... actually Dan Goetzman, Chief NAS Officer (CNO: Like CFO, only different) pointed at the link. Dan looked to see what the patch was about, and saw that it was "only" a deletion of three lines: the code was checking for:

GFS(s) expects NFS fh_type and fh_len would have the same value

that would apparently never be true, and that was causing a problem for NFS when it was accessing GFS-provided file systems. Here is Dan's news:



I found the simple patch instructions for that Bugzilla report (OK: I won't make you dig that out of the last post: 229346 : Ed.) that I thought was the root cause of the NFSV2 over GFS problem I discovered on the CentOS cluster. The good news, it is indeed the problem! I applied the patch and rebuilt the gfs kernel module on all 3 nodes and that problem is gone!


I guess when it rolls out in a future CentOS update I can remove this specific patch.

I have restarted my testing, back from square one. So far, so good...

I still have the "other" problem (same as with Red Hat 5). Older Solaris clients (7-8) are not able to mount NFSV2, but the default NFSV3 works fine. So, for now I will assume we can live with that limitation.


I was thinking about gen'ing a custom kernel with NFS_ACL support off to see if that would prevent this problem. Apparently there is no command line or modules.conf way to turn that off. The NFSD module is compiled to make calls into the NFS_ACL module.


For now, I will forge ahead and assume we don't need NFSV2 on the older Solaris clients.
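For reference, the custom-kernel route Dan mentions would be a build-time switch rather than anything in modules.conf. In a 2.6-era kernel .config it would look something like this (option names recalled from the Kconfig tree, so verify against the actual kernel source before building):

```
# Server-side NFSv3 ACL support, which nfsd is compiled to call into:
# CONFIG_NFSD_V3_ACL is not set
# Client-side counterpart, if it should be off as well:
# CONFIG_NFS_V3_ACL is not set
```

With those off, nfsd has nothing to hand the ACL calls to, which is exactly why there is no runtime knob: the dependency is baked in at compile time.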


That is terrific news, because if this works, it means the GFS layer will save us from having to write all sorts of custom failover code for the NFS and CIFS servers in the cluster! Well, that is the theory anyway. Warning: Law of Unintended Consequences in full effect here!

Tru64, TruCluster, and READDIRPLUS with some NFS Clients.


The other problem we have been having has to do with Tru64 and some NFS clients that do not deal well with each other. The CentOS cluster above is meant to replace the Tru64 TruCluster as soon as we can qualify it, but in the meantime various clients that are being 100% protocol compliant are failing because Tru64 does not do the right thing with READDIRPLUS calls. Graham Allan, IT Manager in the School of Physics and Astronomy at the University of Minnesota, Dan, and I have been emailing about this at some length over the last week.


The good news is that the problem has been fixed in the Tru64 community: Here is the news from Graham:


One of our students here also found a reference to the issue in the
Redhat bugzilla database, which alludes to some possible fixes at the
Tru64 end:

In particular:

     Comment #11 From Ahmon Dancy        on 2007-03-08 15:35 EST             

     (In reply to comment #8)
     > > Just curious... is there a way to turn off READDIRPLUS
     > > support on Tru64 servers?

     Yes!  The following worked for me.  There is probably some better way to
     do this
     so that it remains permanent across boots, but I haven't figured it out yet:

     ladebug -k
     assign doreaddirplus=0


     Comment #12 From Ahmon Dancy      on 2007-03-08 19:47 EST           

     If anyone is interested, I have hacked up a new nfs_server.mod kernel
     module for Tru64 which fixes the bug.  I have it deployed on a local
     system and it works fine.  I built it against a host that identifies
     itself as Tru64 V5.1B (2650).  It can probably be adjusted for other
     versions as well.


     Comment #14 From Ole Holm Nielsen        on 2007-04-17 16:48 EST             
     Regarding Comment #11 I have some further advice:  You may not have
     ladebug installed on your Tru64 server, but probably you have /usr/bin/dbx.
     You may then modify the "doreaddirplus" variable until the next reboot by:

     # dbx -k /vmunix
     (dbx) assign doreaddirplus=0
     (dbx) quit

     If you want the change to be persistent across reboots do:

     # dbx -k /vmunix
     (dbx) patch doreaddirplus=0
     (dbx) quit

     If you upgrade the kernel this will have to be repeated.

     We have verified that this workaround solves the NFS problem
     with a Tru64 NFS server and a RHEL5 Linux NFS client.


     Comment #15 From Steve Dickson        on 2007-05-15 11:00 EST             
     Fixed in nfs-utils-1.0.10-10.fc6 by added the -o nordirplus mount option
     which will have the kernel support in the next kernel update.


I (Graham: Ed) had an interesting reply as well from Ahmon Dancy, who had
mentioned his "hacked" nfs server kernel module in the bugzilla discussion.
I will paste his reply for you:

   There are two approaches available now, and mine is the more
   complicated of the two.

   Approach #1:

   See post 11 on that bug, followed by post 14.  This disables
   readdirplus support altogether in the NFS module.  Then, when linux
   makes a readdirplus call, it will return a not-supported code, in
   which case linux will revert to standard readdir, which works.

   Approach #2 (mine):

   My approach was to spend a (bunch) of time disassembling the nfs
   kernel module and patching it w/ a fix so that readdirplus works


Impressive bit of reverse engineering! I use Ahmon's name here because it is in a public forum on this issue already. Graham said I could mention him...
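For Linux clients new enough to carry the fix Steve Dickson mentions in comment #15, Approach #1 does not even need server-side surgery: the client can simply decline to send READDIRPLUS at all. A sketch of what that looks like (the server name and export path here are invented for illustration):

```
# One-off mount that sticks to plain READDIR against the Tru64 server:
#   mount -t nfs -o nordirplus tru64srv:/export/data /mnt/data

# Or persistently, as an /etc/fstab line:
tru64srv:/export/data  /mnt/data  nfs  nordirplus,vers=3  0  0
```

This obviously only helps once the nfs-utils and kernel pair that added -o nordirplus have made it into your distro; until then the dbx/ladebug poke on the server is the workaround.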

Even better, HP *appears* to be working on a formal patch for this, so Tru64 is still being supported inside the company!


That is all this time. I hope this information helps others fighting one of these two problems!


The Tru64/TruCluster based fileserver has served us well as the R&D Highly Available central fileserver. But, as the Tru64 Unix OS has been put on limited support by HP and the hardware is now seven years old, it's time to retire the system and replace it with current technology.


Today I want to start on a new series, and a different approach to some of the Internal R&D Support projects I post about in this blog. What I have in mind is to start talking about an in-flight project, warts and all, so that what I end up with is truly an open conversation about this project. Previously I have always reported things once we knew the outcome. This way, some of the process itself will be under open discussion.


The other thing that is new.. well new-ish, is that most of this post comes from the person that is actually doing the work. In this case, our "Master Abuser of Network Protocols", Dan Goetzman.


There are about a zillion posts about the server we are replacing here in "Adventures", going all the way back to the beginning. Way too many to link here without creating something that looks like a table of contents. The summary below gives some of the history, so hopefully that will serve to level-set things.


Dan started by defining the project on our Wiki thus:



  1. Support NFS file serving
  2. Support CIFS/SMB file serving
  3. High Availability
  4. High Performance
  5. Cost Effective

Leveraging the Past


For several years now, R&D Support has been building cost-effective file servers using commodity hardware and Linux. We have evolved these to the most recent "generation 2" design, nicknamed "Snapple". A summary of the "Snapple"-based design:

  • Linux - Currently using Fedora Core 6 as it has proven to be a stable NFS server
  • OS and DATA separation - OS is mirrored on internal hard drives, DATA is on the shared SAN storage
  • XFS filesystems for user data - High performance and scales well
  • Sun X2200 servers
  • SAN shared storage
  • Apple XServe Raid storage subsystems

The only item not addressed by the Snapple [Sun and Apple hardware: Snapple. We are so punny - Steve] based fileservers is high availability. The Snapple configuration allows for a spare server on the SAN to be manually switched in to recover a failed server, allowing rapid recovery of services. But that falls short of a highly available solution.


Designing the Future


To address manual versus automatic service failover, it seems logical to look at Linux cluster technology. As the central R&D fileservers are considered a production service, it also seems to make sense to look at the "enterprise" Linux distros: Red Hat Enterprise Linux, or in the case of this project, the CentOS variant of the Red Hat EL distribution. The new design starts to look like:


  • CentOS 5.0 with Cluster Suite and Cluster Storage Support
  • OS boot drives will remain simple "md" mirrors on the internal disks in the server heads, not under any logical volume manager.
  • DATA filesystems will be GFS, as the cluster is simplified if a cluster "parallel" filesystem is used.
  • Shared SAN for storage, using dual SAN switches operating as a single fabric.
  • Apple XSR storage, decent performance at a great price for SAN based storage.

The NFS service will become a cluster service that the CentOS cluster will make highly available. The Samba CIFS/SMB service can also be another cluster service, configured to run on another cluster node by default.
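To make the cluster-service idea concrete, here is a heavily abbreviated sketch of what the NFS piece might look like in the resource-manager section of /etc/cluster/cluster.conf (every name, address, and path is invented, and a real file also needs the node, fencing, and failover-domain sections that are omitted here):

```
<rm>
  <service name="nfs-svc" autostart="1">
    <!-- Floating service IP that follows the service between nodes -->
    <ip address="10.0.0.50"/>
    <!-- GFS filesystem shared by all nodes via the SAN -->
    <clusterfs name="data" mountpoint="/data" fstype="gfs"/>
    <nfsexport name="data-exports">
      <nfsclient target="*" options="rw"/>
    </nfsexport>
  </service>
</rm>
```

The Samba service would be a second, similar service block pinned to a different node by default, which is the whole point: the cluster manager restarts or relocates the service, not us.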


Optional / Later


Storage Virtualization. With the shared storage on the SAN, using a true parallel cluster filesystem, the next step is to look at virtualizing the storage. Advantages of storage virtualization include:


  • Remote replication of data at the SAN block level
  • Mirror data at the SAN block level
  • Create storage classes that can align data with class of service on the storage farm.

We have available to us an IBM SVC (Storage Virtualization Controller) subsystem that we can insert into the SAN to test and qualify the storage virtualization option. After initial testing using normal "zoned" SAN storage LUNs, we will insert the SVC, virtualize the storage, and then compare functionality and performance.

Wikis are great for this kind of thing. I can see what is happening when I am ready, and I can fix spelling errors if I notice them. Unfortunately for both Dan and me, English is not our native language. Neither is anything else either. We do what we can....


A quick note about the storage virtualization bit: We pulled that out of the initial pass to minimize variables. Once we know we have a working solution, we'll layer that in, because this is how we plan to enable some advance features like block replication across the WAN. All that comes later though.


Once the project was defined, the "Server Beater Most Excellent" (Dan: he has a lot of titles) went to work. We bought the hardware, assembled it, had some internal discussions, and decided that the first pass at this new server would be CentOS 5 based, and would leverage the Cluster LVM and GFS to make fail-over between the three Sun X2200's easy.

Well, we had hoped it would be easy. The Wiki problem tracking page for the project currently looks like this:


NFS test CentOS Cluster - Problem Tracking

NFS V2 "STALE File Handle" with GFS filesystems


Only using NFSV2 over a GFS filesystem!
NFSV3 over GFS is OK. NFSV2 over XFS is also OK.


From any NFSV2 client we could duplicate this:

  • cd /data/rnd-clunfs-v2t - To trigger the automount
  • ls - Locate one of the test directories, a simple folder called "superman"
  • cd superman - Step down into the folder
  • ls - Attempt to look at the contents, returns the error:
ls: cannot open directory .: Stale NFS file handle

Note: This might be the same problem as in Red Hat bugzilla #229346
Not sure, and it appears to be in a status of ON_Q, so it is not yet released as an update. If this is the same problem, it's clearly a problem in the GFS code.


NFS V2 Mount "Permission Denied" on Solaris clients


This problem was detected on a previous test/evaluation of Red Hat AS 5, and was expected with CentOS 5.

Certain Solaris clients (Solaris 7, 8, and maybe 9) fail to mount using NFSV2. Apparently the problem is a known issue in Solaris: where the NFS server (in this case CentOS) offers NFS ACL support, Solaris attempts to use NFS ACLs even with NFSV2, where they are NOT supported.

The correct behavior would be for the Solaris clients NOT to try to use NFS ACLs with version 2. This problem has been fixed in more recent versions of Solaris (some 9 releases, and 10+).
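In the meantime, the practical client-side workaround is to simply not ask for v2 at all. On an older Solaris box that is one mount option (server name and paths invented for illustration):

```
# Solaris client: force NFSv3, which mounts fine against the CentOS server
mount -F nfs -o vers=3 centos-nas:/export/data /mnt/data
```

Since v3 is the default anyway, this mostly matters for anything scripted or in vfstab that explicitly requests version 2.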


And that is where we are at! We don't have all the answers, and in this case, not even all the answers on what will happen next here. Lots of questions though.

We know that the bugzilla bug on GFS is still open, and that probably means that we'll have to take GFS out of the equation, at least for now. That is not good, since that means we'll have to script the NFS and CIFS failover. Yuch.


More on this as it unrolls: let me know if you find this approach of talking about the project in-flight interesting (rather than just summarizing it at the end, once everything is known and decided).


Turning the printing problem around from the last time: Printing from MS Windows to a Linux system, where Linux does not support the printer


A while back I posted a question here, posed to me at a recent LinuxWorld, about how one might print from Linux to an MS Windows attached printer, where the printer is unsupported by Linux. Richard Meyer asked around his local Linux User Group, and got two replies, which I posted here [post one, post two].


There is the inverse problem, which appeared to bug Richard as well: where Linux does not support the printer, and therefore provides no drivers for it, but the printer is attached to a Linux server and MS Windows wants to print to it. Richard found this post from Chris Balmforth in a public forum on Codeweavers about it:


Hi Steve, when you come back from vacation, here's another approach to printing on printers that Linux doesn't understand ...


Linux Counter user #306629

Subject: Re: [cw-discuss] Windows printer question on codeweaver?
From: Chris Balmforth
Date: Thu, 26 Jul 2007 18:47:09 +0100

I get good results by installing my printer (which has no Linux driver) on my Ubuntu server as a RAW CUPS printer (no Linux driver needed), then making it available to the network via Samba. When I try to connect to it with a Windows machine it tellsls me that the server has the wrong driver and asks me to install the correct driver locally. Install the printer's Windows driver on the Windows machine and everything works fine. You might need to Google a bit to solve some CUPS and Samba RAW printing issues, but I got it working fine in the end.

I used Chris's name above both to give credit where credit is due, and because it was already in the public over at Codeweavers on the subject.
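Chris's recipe boils down to two bits of configuration on the Linux box. A rough sketch, with the queue name and device URI invented for illustration (exact options vary by CUPS and Samba version):

```
# Define a raw CUPS queue: job data is passed through untouched, so no
# Linux driver is needed (the Windows client renders the job itself):
#   lpadmin -p rawprinter -E -v usb://Unknown/Printer -m raw

# Then share CUPS printers out via Samba, in /etc/samba/smb.conf:
[global]
   printing = cups
   printcap name = cups
   load printers = yes

[printers]
   path = /var/spool/samba
   printable = yes
   guest ok = yes
```

The trick is that "raw" turns the Linux server into a dumb spooler: it never has to understand the printer at all, which is exactly what you want when Linux has no driver for it.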


Welcome to new TalkBMC Open Source Blogger


We have a new Open Source blogger here at TalkBMC: Nitin Pande. I have worked with Nitin for a while now internally. Funny thing: it was not until very recently that I found out he was an Open Source maven too! I have a feeling he will not be the last we discover as BMC continues on its new Open Source journey. I look forward to seeing what Nitin has to say! Nice thing about Open Source: it's a huge world, and there are always new things to learn.


Getting Re-oriented


After being gone for two weeks, with my only tether to the electronic world being my iPhone (as discussed on my other blog), I am trying to get going again here. 1000+ emails that survived spam and other filters awaited me: it took two days just to get those read and done!


A quick look around shows no new Mint version, unless you count the fact that now the KDE and XFCE versions of 3.0 have hit the site. The Acer Linux Laptop had something like 80 updates to do, and they were largely KDE related (I flip-flop between KDE and Gnome on Mint these days: I just can't decide which one I like better!). Since Ubuntu has been a really busy bee lately (new Dell computers with Ubuntu pre-loaded, etc.), I have to think that Mint will be taking another point release pretty soon.....


Apple announced all sorts of new stuff of course: new iMacs, new iWork that finally has a spreadsheet, and so forth. Nothing that really needs a deep dive here. Looks interesting from an Enterprise point of view though.


Mostly it feels like nothing much really happened out there while I was gone. Must be summer. After two weeks, I expected to have to re-install everything!


BMC of course didn't go on vacation: Whurley and Fred Johannessen posted a new podcast about the new Open Source stuff we are doing, and how it ties in to the newly announced BMC Developer Network: I'll give that a listen! But it is quiet this week: whurley is off stirring up LinuxWorld in SanFran right now.


Whurley can't help but leave behind one thing: his current post announces an upcoming BarCamp for Systems Management!


Somewhat dis-orienting is the IBM T41 I have. It is dying. I know I have beaten that thing, and carried it all over the place, and everything, but I just replaced the keyboard, and now the hard drive is about to go. You'd think it was high mileage or something! Clearly there is going to be a "Rebuilding the T41" experience and post in the near future.


More soon, once I get my feet back on the ground.

The Secret Linux Agenda

Posted by Steve Carl Aug 22, 2007

Now it can be told for the first time anywhere: the secret agenda of the Linux community is... is... ahhhhhggg! They got me..... It's all going dark.... sinking....



OK. Fine. I lied. There is no Linux agenda. Well. Maybe one: to be the best Operating System the Open Source community can make it be. And even then the BSD and OpenSolaris camps are going to want to voice an opinion....


Here is what got me to thinking about this.


I was doing the work on the two laptops that I posted about over at on-being-open. That post, in a circular posting kind of way, was a followup to my last post here about installing Mint 3.0 on a Dell C400 trash-laptop. That post in turn was a follow-up to... oh never mind. Let's just say I have been on a theme lately.


I realized at some point along the install that the Dell C400, with its Orinoco based TrueMobile 1150 Wifi card, was a better Fedora 7 platform than the IBM X30 with its Atheros based Dlink PCMCIA Wifi card. That in turn had me thinking about the purity... or purism... something... of one Linux Distro over another. Mint 3.0, which was on the Dell C400 hard drive, would work just fine on the IBM. Mint would not care what the Wifi chip was either way. Of the two distros I was working with on the two laptops, only Fedora is persnickety like that. There is a reason why Fedora is that way. More about that in a bit.


I did what I suggested I might in that on-being-open post: I pulled the hard drives out of each laptop, and I switched them.


  • The 40GB Hitachi with Mint 3.0 went from the Dell C400 into the IBM X30. Wifi chip went from Orinoco to Atheros
  • The Samsung 80GB went from the IBM X30 to the Dell C400. Wifi Chip went from Atheros to Orinoco.

Batteries in. Plugs in. Power on. Boot.


That was it. They both came up, they both figured out their new hardware situation. They both reconfigured what they needed to. They both found the home Wifi network, and auto-configured to join, even though they had just switched (among other things) the chip on the Wifi cards. It was brain dead easy. Hardest bit was keeping track of the tiny screws for the disk cradles when the cats were trying to help. It turns out all projects require cats to help, at least according to them.


Take That Other OS's!


I have developed a rep in some places as having gone over to the dark side. In this case, Apple and OS.X. I have certainly made no secret that I am a fan of many things Apple. I have been informed by those who dislike Apple that just saying that it has chewy BSD goodness at its OS.X core is not enough. Be that as it may, I never even considered an Apple before they made its core something I trusted. Mac OS9 and its predecessors may have been perfectly good OS's, but I never liked them much. I'm pretty old school. With no command prompt to ease my way in, I always was lost on a Classic Mac. One time in the early 1990's it took me over an hour to eject a disk out of a Mac. I ended up taking it apart. Turns out I was supposed to drag the icon of the disk to the trash can icon. I never would have done that, fearing it would delete the data on the disk. But I digress.....


This disk switching is a case where Linux does something that neither Apple nor MS Windows will. Not can: will.


Linux boots in the new hardware because it has no axe to grind. No master to please. No agenda. It can focus instead on trying to do the right thing... which in this case is merely to boot. It does more than boot though. After the two laptops are booted, things like the 3d desktop (Beryl) still works on both platforms. It not only works, it works well. It dealt with the BIOS change, the graphic chip change, the Wifi chip change... all of it. No muss or fuss.


I, as a customer am having a very nice experience here.


I noted in the X30/C400 post that despite coming from two different vendors, the hardware was similar. Same 1.2 GHz processor. Same 1 GB of memory. Same 1024x768 screen size. Same general target market: the year 2002's sub-notebook market.


I guarantee that I could not have done the disk swap thing between the IBM and the Dell with MS Windows XP and had it work. That would have required all sorts of non-fun things, like re-installing the OS, or at least pre-running sysprep to undo the way MS Windows has tied itself to the specific hardware. OS.X would not have booted at all of course: it only works on Apple hardware (not counting some serious hackery out there).


It is not even that MS Windows cannot boot on more generalized hardware: when one first installs MS Windows, a generalized version does the installation. There are many recovery disks, like Bart PE, that run generalized MS Windows OS stacks. In fact, I believe Bart PE is based on a version of MS Windows called Windows PE.


The reason MS Windows would not boot in this example is that once installed MS Windows does not want to be moved till you can re-verify your right to install it anywhere. And that is MS's right. They wrote the EULA. Running MS Win means you agree to the hassle that implies should you want to change your hardware. Vista is worse in this regard by all reports. MS is not targeting people like me who do stuff like this as a customer anyway. Not any more. Maybe not since DOS days...


This non-booting without major incantations is not a premium end-hacker-user experience. Not like this hard drive swap of the X30 and the C400, where everything just works.


I have moved hard drives much further afield than between these two similar computers, and had the same experience. From an ancient Compaq M300 to an eMachines 5312 to a brand new (at the time) Toshiba to the IBM X30. Now the Dell C400. That 80 GB Samsung drive gets around. :)


Whurley Gets It


If you listen to the videocasts Whurley recently posted, one of the things he and Cote talk about as it relates to Open Source is that it is about support. With Open Source, the customer is always right. Even if the customer had to write the feature themselves. Open Source gives a customer options.


Example: If an Open Source tool does *almost* what a customer needs or wants, they can:


  1. Ask the creator of the code to add the feature
  2. Commission someone to add the features they really want.
  3. Do the code work themselves, in house.

Since they wanted this new feature bad enough to write it, what are the chances someone else wanted or would benefit from the new feature? Pretty good, I'd say.


The funny thing was that, in most cases, Open Source is not about the customer necessarily wanting the source code. Whurley points out the discrepancies between source and binary downloads of most products as an example. Most downloads are of the binaries.


In my day job in R&D Support, having access to the Linux code has meant having access to the ultimate manual. We have not used it often, but if you look back in the early TalkBMC "Adventures" posts about some of the debugging we were doing with NAS, we were in the source code trying to figure out what the programmed behavior was so that we could have intelligent conversations about it with the developer.


I used to do the same thing with VM on the mainframe, reading the dump and the source code before I reported a problem so that I was sure what I was reporting actually was a problem.


Distro Focus


Another thing I have been thinking about and posting on a fair amount recently: What the focus of various distributions are. Here there are agendas, at least of a sort. Not hidden ones though.


Example: What does Fedora want to be?


I spent a great deal of time with Fedora over the years, and there are things I really like about it, but after using Mint 3.0 for a while I have come to the place where some of the purity really gets old when all I want is a working Linux computer.


I had hoped that Fedora 7, what with its LiveCD and merging of the "Extras" with "Core" and all, was moving more in the direction of Ubuntu and other easy-to-use Distros. That projects that were "tainted" in the eyes of the Distro would be dealt with in some way similar to Ubuntu and its Restricted Drivers Manager. I was disappointed though.


Fedora views their stance about not including certain projects as being a good thing. Fedora 7 does not support either the Atheros Wifi cards, or the Intel Wifi cards out of the box but does support the Orinoco based cards because of the question of Open Source.


Huh? Didn't I just finish saying that Open Source was all about being easy and having nice customer experiences and all? Am I bifurcated?


Two Things Can Be True, Even If They Seem to be the Opposite


In Open Source, both of these statements about customer support are true. It is all about Point of View.


Fedora won't include anything that is not 100% open source, and in the case of the Intel and Atheros drivers, while the drivers are Open Source, the firmware of the cards is not. They are vendor-provided binaries that the Open Source drivers load when the card is initialized. No source code to the card firmware. The card manufacturers have decided that opening the firmware code would give their competitors too big an advantage. The cards are therefore not 100% Open Source, and Fedora wants vendors to get the message that not being Open means not being included. The Fedora FAQ says what Fedora wants to be when it says, in reference to a closed standard:


"...we'd much rather change the world instead of going along with it."


I find that deeply admirable, and it is one of the reasons I stick with at least one system running it, despite the frustrations of hacking Fedora from time to time to get my Wifi cards going. As an end user, I cannot really tell the difference between an Atheros-chipped Wifi card and an Intel one. Whatever market advantage vendors think they derive from having closed source firmware, from the end user point of view it all looks the same. Wifi is a commodity item. It hooks up laptops and iPhones to Wifi access points. It lets me access the well known series of tubes we call the Interweb. In fact, the real value to me is not anywhere inside the commodity Wifi chip. It is in how well the antenna is designed and placed in the case!


Mint does not get to claim such Open Source purity; instead it uses Ubuntu's Restricted Drivers Manager. It tells you that you have an impure system, but it loads everything up if you tell it to, and away you go. You, the end user, know which vendors are being sticks in the mud, but it does not stop you from getting going.


The core difference is that Ubuntu will supply things that are free and unencumbered, but that do not have to be Open Source. That distinction is making a big difference to Ubuntu and its kin. Ubuntu is always the top distro whenever I look at the rankings, and has twice the download numbers Fedora has.


POV and Pol


Polarization that is.


I am still trying to get my head around some of this.


As I have said here, I take it as axiomatic that open is better than closed (tm).


It can be deduced from the above example of Mint vs. Fedora that there are degrees of being open. Fedora goes for purity of Openness, and is lampooned in some corners because of how hard it is to get going on any hardware that does not match the 100% open criteria. The Dell C400 works *great* with Fedora because every bit of hardware in it has a totally Open Source solution.


Ubuntu and its kin like Mint work far more easily but some criticize them because they have given in to the closed source forces of darkness and evil, and shipped Binary bits.


This is not even a new issue: When IBM started to pull the source code to VM on the mainframe, a huge outcry from the customer base ensued. "OCO is LOCO" was the badge at SHARE. I still have mine.


Fedora and its goals are laudable and I support them. At the same time, when my brother needed a Linux computer, I built him one based on Ubuntu. He would not care one whit about the purity of the Open Source. He just wants Google Earth to run.


These POV issues all show up in discussions about whose Open Source license is better. Whurley is currently pointing in his blog at a poll and panel about that at SXSW.


I know this is not all that politically correct (but then, I rarely am)... but I think Open/open is better than closed. Any open. Any spelling or capitalization.


At the same time, I am always a bit dismayed by the signal to noise ratio of the Internet on things like this. I have said it before, and I say it again: I'm really old. I remember when you could read Netnews newsgroups, and get useful information, and help from a community of like minded people. A time before the noisy, just-like-to-tear-things-down-no-matter-what-they-are types moved in, and destroyed Netnews. The downside of being open on Netnews was needing to have a news client like Pan with a good killfile / filter function.


Spam certainly took (and still takes) advantage of the openness of the email transport of the Internet, reducing the value of email, and in some cases doing real harm.


I see the move to Open Source, for anyone doing it, as having an issue like this. No matter which license one chooses, someone... maybe many someones... maybe really loud and self-righteous someones... will yell to the rafters about how using license X means one is being less than open, or less than perfect. Google said that a guiding principle of their company was "Don't Be Evil", and everything they do now gets the "Is that Evil?" yardstick hauled out and yammered about.


That high level of noise is not very useful, and can push some away in disgust. In my opinion only: A company does not announce an Open Source direction lightly. Not because of the business risk, but because no matter what you do, in some corner will be the voice saying "You did not do that right". No matter what you did.


Personal Example


To close this thought and post out: I was talking the other month to someone who was getting ready to open source some code they had written. A very useful tool. Their number one fear: that the code they had written would be savaged by the folks they were giving it away to. Sort of like:


"Hi. Here is this tool. It did this useful thing for me. If you want it, you can have it, and the code to it, in case you would find it useful."


"Oh. My. Ever. Loving. STARS! I can't BELIEVE you gave this away! What a piece of junk! Where did you learn how to code: A fish and tackle shop? Look at this DO loop! Have you ever seen such a thing in your LIFE. And these comments. What language is this?......" On and on.


Some build. Some innovate. Some tear down and destroy.


For every critic like that, though, there will be those who thank you for taking the time, and for being willing to share. One just has to have a mental killfile / filter.


For all its problems, many self-inflicted, I still think Open is better. I'm a glass-half-full type. I also remind myself all the time that, despite appearances, Open Source is not a computer religion. It is just a good idea.


And it is... The Secret Linux Agenda

Minty Dell(icious)

Posted by Steve Carl Aug 16, 2007

Success with one Dell leads to trying another. The "old" Dell also runs Linux like it was made for it. Oh. Wait. Linux is made for almost everything.


Before I get too far into the new Dell experience, a few updates on the last post about the D620.


First: I said in that post that the D620 had a keyboard light, but it did not work under Linux. That was not correct. The D620 has a key sequence to activate a keyboard light. Fn-right arrow. This appears to be a case where the D620 must share a keyboard with some other model. The bit I thought was the non-working light is apparently an ambient light sensor.


Second: I made a statement about the screen being nicer than my Acer 5610. That was not utterly accurate either. The screen is higher resolution, with better Dots Per Inch (DPI), making for nice looking fonts and an easy to read but fairly small screen. But that is not all there is to a screen.


I brought up a picture I took while on vacation. It is of my mountain home, from a distance. Green in the foreground, blue sky. Fluffy clouds. Shadows of the clouds on the ground. Very bright day. Terrific exposure and saturation. On my Apple iMac or MacBook, the picture is amazing. (OK: I admit: I like looking at "my" mountain. The picture may not be all that great) On the Acer, it is pretty nice. On the Dell, it is washed out and low contrast.


This particular D620 LCD panel is fine for email and general business use, but I would not choose it if I were going to spend a great deal of time doing image editing in the GIMP or something along those lines. As a last minute thought, I loaded up the picture on the IBM T41, and have to say that it is pretty washed out there too. High rez: 1400 x 1050. Just washed out. The T41 is also old, and its cold cathode tube backlight is fading in intensity as time goes by. Still, it makes me wonder if laptop vendors target resolution over accurate color rendition for the business market. Another thought: I bet this is part of why Apple has a rep for expensive hardware: they clearly did not cut corners on their LCD panels. Makes sense, since Apples are often used in graphics applications. There is a reason creative types like Apples, and now I see it is not just the applications.


Finally (on the D620 front today): another Linux user here at the office installed Mint 3.0 on his D620. Exactly the same hardware as mine, as near as I can tell. Works great there too, with one exception: he cannot move the mouse via the trackpad without highlighting everything on the screen. He is going to try loading the "gsynaptics" package to see if that fixes it.


One other data point of a more general Dell / Linux laptop nature: According to Issue 52 of the Ubuntu Weekly Newsletter, in "United Kingdom, France and Germany [consumers] can order an Inspiron 6400 notebook or an Inspiron 530N desktop with Ubuntu 7.04 pre-installed".


Dell C400 and Linux Mint 3.0


First off, a bit of background about why I am on this Dell / Linux laptop riff lately. Until recently, I had only ever installed Linux on one Dell, and that was back in the 1990's. It worked fine. The Dell hardware of the day was a real brick. Solid. Worked. Easy to work on. Very, very slow by today's standards: a 286 or 386 processor. I had a few negative Dell laptop hardware experiences on some of their later 1990's / early 2000's gear that put me off working with them any more. Part of my definition of a good computer is one I can fix when it breaks, and that particular generation of Dells appeared to be designed not to be serviceable by anyone with fewer than four hands.


Things change.


I was first intrigued again by Dell when they started to talk about creating Linux supported consumer gear. That always gets my attention. Then, in talking to whurley, he mentioned he liked his new Dell (running Ubuntu I believe). He said in our podcast a while back that the Dell had some features he really liked. He was not specific then, but I started to wonder what those features were.


Then, as I was getting ready for LinuxWorld, I went around booting Knoppix on everything I could get my hands on, and all the Dells I tried from other people at the office all worked fine. The deal was sealed when the D620 of the last post arrived and worked so well. It was time to try Linux on the Dell C400.


C400 Heritage and Specs


I would be remiss if I did not point out that the Dell C400 I have is another of my trash pile specials. It started out life as multiple computers that were in a Star Trek Transporter Accident (tm) and merged to become this one computer. The battery is taped in. The top cover has cracks. The keyboard is small, cramped, and highly polished on the keytops. But the C400 works. It in fact surprised me over time in that it just kept running and running.


What it was running was MS Windows XP though. On a 12 GB, 4200 RPM, extremely noisy hard drive. I have no external CD that the C400 will boot. Looks like there is a connector on the side for some special external one it was supposed to have. I don't have one.


Back in its day... about 2002, according to the review at PC Magazine... the C400 was considered a nifty ultra-expensive ultra-portable. It was "fast", with a mobile 1.2 GHz Pentium processor. It started with 256 MB of RAM.


My trash heap Dell's hard drive was clearly not stock: the review says the base disk was 30 GB. This unit has 12 GB. It booted XP though. Slowly. I had stuffed 1 GB of RAM into the SO-DIMM slots: I had several of the slow PC133 memory sticks lying around from other departed systems.


The single USB port is 1.1, and not bootable. Not that I did not try.


Without bootable media to start it from, the C400 kept running XP not-so-silently in the corner while I watched to see if it would die from its rough treatment by the transporter. The D620 working so well with Linux meant I now really wanted to test that C400. The Transporter Chief (err... me) came up with another way to stuff Linux under the covers. First we get a hair from the hair brush, and then we re-program the transporter bio-filters..... no... wait. That was Dr. Pulaski in season two of ST:TNG.


I Regret That I Have Only One T40 to ...


Another computer in the parts pile is an IBM T40. This one must have had some drink with a lot of sugar spilled into it. It still booted. Its screen was bright and beautiful. I had a 40 GB, 5400 RPM drive in it, and because the T40 has an internal CD, I had Mint 3.0 installed. Problem was, the keyboard was uselessly grunged with sticky junk. And if you wiggle the monitor, or even type very hard, the screen flickers and turns off and on and does other random goofiness. And because the keyboard is slimed and sticky, you have to pound it to make the keys register correctly. Not usable.




I had that hard drive in there. All built. Ready to go. Doh.


Beam Me Up Linux


If there is anything cool about Linux (and there are so many things that are), it is that it does not really care when you take the hard drive from an IBM computer and install it in a Dell. I would have preferred the T40 be more usable: its screen and keyboard are (normally) way, way better than the C400's. That was not to be (although I still have the T40, in case I can figure out how to fix the screen flicker thing).


The Mint-equipped Dell booted, the screen went bonkers, X died, and I ended up in a really flaky TTY session. Huge fonts. This was not unexpected: the /etc/X11/xorg.conf from the IBM was totally wrong for the Dell. I moved the file aside with "sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak", entered my password, and then rebooted.


X found no config, and built a new one that was correct for the hardware (and boy is that nicer than having to hack it together by hand like it used to be). Screen came up.
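The move-aside-and-regenerate trick is worth having in muscle memory whenever a config file follows a drive to foreign hardware. A minimal sketch, using a scratch directory so it is safe to run anywhere (the real path is /etc/X11/xorg.conf, and the .bak name is my own choice):

```shell
# Simulate "park the stale config under a new name" in a scratch directory.
conf_dir=$(mktemp -d)            # stand-in for /etc/X11
touch "$conf_dir/xorg.conf"      # stand-in for the IBM-flavored config
mv "$conf_dir/xorg.conf" "$conf_dir/xorg.conf.bak"
ls "$conf_dir"                   # only xorg.conf.bak remains
```

On the real machine the rename needs sudo, and the reboot is what gives X the chance to probe the hardware fresh; the .bak copy is there if you ever need to crib from the old settings.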


Surprise Surprise Surprise ...


(A Star Trek and a Gomer Pyle reference in one post. Double geek points!)


Now the nice stuff started happening. The new hard drive was clearly faster than the old one. What had felt sluggish before under XP now felt fairly fast and crisp. OK. I changed the OS on the hard drive too, I admit.


Mint 3.0 was clearly happy in its new home. It found the Wifi Card and configured it, no problem. It turns out that the Dell TrueMobile 1150 card that was inside the unit had the extremely-well-supported-by-Linux Orinoco chipset on it, according to "lspci".
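For anyone wanting to make the same check on their own hardware, "lspci" plus a grep is all it takes. The sample line below is invented to stand in for real lspci output (an Orinoco entry would look something like it), so the sketch runs anywhere:

```shell
# Filter lspci-style output down to network controllers.
# On a real box you would pipe the actual command:
#   lspci | grep -i 'network controller'
sample='02:02.0 Network controller: Lucent Microelectronics WaveLAN [Orinoco]'
printf '%s\n' "$sample" | grep -i 'network controller'
```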


On screen displays for volume up/down/mute appear. Suspend works, even from the "Fn-F1" keyboard sequence. Mint clearly supports all the hardware features I have thought to test, but given that the wireless card is an Orinoco, so would Fedora. This is very Linux-simpatico gear.


So: How long has Dell been thinking about supporting Linux anyway?


An innocent young laptop computer walks through a mysterious office, is set upon, and finds a whole new world opened unto it.


A new computer... a Dell D620 laptop... has become the latest Linux Mint 3.0 computer. It seems to happen to every computer that falls within my reach, I will admit. What was interesting about this particular one was the following:


  • Dell has started shipping Linux Computers, as has Lenovo. Dells are Ubuntu, and Lenovo's are Novell / SUSE.

  • Dell recently expanded the list of supported hardware, but the D620 was not on their list.

  • Mint 3.0, an Ubuntu 7.04 derivative, runs like a scalded dog on the D620. A sign of things to come from Dell? One can hope.


If you hop over to the Dell / Ubuntu website, you'll see the current Ubuntu notebook (depending on when you are reading this, to be sure) is the 1420 N. Not the D620. Despite this, the D620 is working, and very well. The volume buttons (little special buttons above the keypad) work, and there is an on-screen display of their status. The screen backlight keys work, but no onscreen status appears for them. The keyboard light (hey: there is a keyboard light! How Thinkpad!) does not activate. Not yet, anyway. BIOS thing? Does it work under XP? No idea. Future thing to check.


The Intel Pro 3945ABG works without issue, and at full G speed.


The Synaptics touchpad config utility had to be loaded, via the Synaptic package manager. No, that is not stuttering. And then I had to pop a section with "SHMConfig" into /etc/X11/xorg.conf:


Section "InputDevice"
        Identifier      "Synaptics Touchpad"
        Driver          "synaptics"
        Option          "SendCoreEvents"        "true"
        Option          "Device"                "/dev/psaux"
        Option          "Protocol"              "auto-dev"
        Option          "HorizScrollDelta"      "0"
        Option          "SHMConfig"             "true"
EndSection

When I make changes to the mouse acceleration speed in Gnome, it only affects the wiggle stick: the touchpad needs a separate control program (gsynaptics under Gnome, ksynaptics under KDE), and even then there is no acceleration setting for it. It is kind of funny that the D620 has both pointing devices. Easier than two SKUs, I guess. I can turn off the touchpad altogether and just use the wiggle stick if I like.
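One bonus of flipping SHMConfig on: the synclient utility (it ships alongside the synaptics driver, if I recall correctly) can then read and change touchpad settings live, which is handy for experimenting before committing values to xorg.conf. A sketch, guarded so it exits cleanly on machines without the tool:

```shell
# List live synaptics parameters if the tool is present.
# synclient -l is a real invocation; the guard keeps this harmless elsewhere.
if command -v synclient >/dev/null 2>&1; then
    synclient -l | head -n 5     # first few of the tunable parameters
else
    echo "synclient not available on this machine"
fi
```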


The 1440 x 900 screen is pretty and bright, and the Intel 945 graphics chip controlling it was automatically detected and configured, because Mint includes the "915Resolution" package by default. Last time I checked Ubuntu, I had to load that manually. Given the prevalence of these Intel chips on "low end" (non-high-end-graphics, no games) laptops, it seems like 915Resolution should be included by all distros.


This particular D620 unit has two gigabytes of RAM, an 80 GB hard drive set up to dual boot MS Win XP (emergencies only!), and a Core 2 T7200 2 GHz processor good for 7984 BogoMIPS. It is fast, but also runs cool: 48C most of the time according to lm-sensors. Only one thermal zone is found in the ACPI. hddtemp (loaded with Synaptic) says that the hard drive is only 40C. Both readings are after the computer has been up for over four hours, and used to write this for over an hour.
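Pulling those temperature numbers is a one-liner once lm-sensors and hddtemp are installed. The parsing below runs against a sample line (invented, but shaped like real `sensors` output) so the sketch works anywhere:

```shell
# Extract the reading from a sensors-style line. On the real D620 you
# would pipe the actual tools: sensors, and sudo hddtemp /dev/sda.
sample='temp1:        +48.0 C  (crit = +105.0 C)'
printf '%s\n' "$sample" | awk '{print $2}'   # prints +48.0
```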


I did boot over to XP first, configured the hardware, let the corporate image do whatever it does upon booting, and left it for a while so all the Marimba patches, scans, certifications, inventories and whatnot could be done. It was fast under XP, and ran cool there as well. I assume that the correct drivers for XP were all installed to make that work. I have certainly seen other computers (like my wife's Toshiba M45 or my old eMachines 5312) that run way hotter under MS Windows than Linux. MS Windows playtime over, I grabbed Mint 3.0 and did the LiveCD boot. The manually done disk partitioning is pretty typical for the dual-boot laptops I have built. Four partitions:


Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1824    14651248+   7  HPFS/NTFS
/dev/sda2            1825        3040     9767520   83  Linux
/dev/sda3            3041        3283     1951897+  82  Linux swap / Solaris
/dev/sda4            3284        9729    51777495   83  Linux

Mint sets up grub so that everything boots OK. All auto-magic.


In fact, setting up Mint 3.0 on the D620 has so far mirrored the experience of setting it up on my main personal Linux computer, the Acer 5610, except that the Acer occasionally boots Vista off /dev/sda1, not XP. Even though I literally just received the D620, it is not on the Dell web site any more. It has been replaced by the D630. There are differences between the Dell and the Acer that are odd: the Acer has a Flash memory card reader, which does not appear to even be an option on the Dell. I guess business class computers only read USB flash fobs... The Dell has four USB ports though, so that is not too bad. The Acer screen resolution is not as good (1280 x 800 versus 1440 x 900), but given the price difference between the computers, I would expect the Dell to have the better screen.


Linux runs great. Everything important seems to work. Mint provides all the bits to make this a business class Linux computer: Evolution and OpenOffice are available or installed. So far, so good. Dell may not list this as a Linux supported computer, but I see no reason they shouldn't.


It Was Not Always This Way, Except When It Was


I used to have a Dell laptop, way way back. It was built like a brick, lasted forever, and even back in the 1990's I ran Linux on it. But then, in my opinion, the quality of the Dell laptops lapsed. My father-in-law sent me one to fix that was from four or five years *after* the first one I had. I took it apart and decided it was a total loss. I sent him back an eBay-purchased, hand-assembled Compaq M300 with his data migrated from the Dell. That generation of Dell laptop was a nightmare to work on, and the build quality was just not happy making. I would have been supporting that thing every two months from then on. Assuming I could get it back together at all (and at the time, I was rebuilding G4-based Apple iBooks: no small feat). I took apart a few other slightly later Dells we had in the trash pile at the office, and assembled some working ones out of those, but they were still awful to work on and, to use a technical term, persnickety.


I bring all this up not to say bad things about Dell, but to say a good one: the D620 is head and shoulders better built than my father-in-law's was. It is great to see that Dell has heard the voice of the customer, and in so many different ways. Ubuntu support. Quality improvements. Linux-compatible hardware even in the non-officially-supported gear.


Parts is Parts


It is also worth noting that all computer vendors draw on the same pool of parts, manufacturer to manufacturer. Hard drives from any given vendor can be found all over the place, for example. In a Dell, Lenovo, Acer, Toshiba, or Apple you might find the same 120 GB hard drive from, say, Samsung, Toshiba, or Seagate. The recent flammable battery issue with Dells was hardly limited to Dell: they just happened to have bought, as a percentage, more of the bad batteries from the vendor than others did.


Hard drives are a particular issue since they are complex, tiny, close-tolerance, have moving parts, and, in laptops, get schlepped about. Sometimes what is different is how the vendor engineers all the bits around the common parts: are there shock cradles and adequate cooling and the like? The same part may work well in one machine, and fail often in a different one.


I have two reasons for bringing all that up: one is that I will be interested to see how well the D620 stands the test of time. The other is to mention that I recently had to rebuild my Apple MacBook Pro, because the hard drive had an issue. I'll have a post up soon about that over at [Hello from the future: post is now up! Kind of like surf's up, only indoors, and only involving computers.]


In the meantime, it is the commonality of the parts stack that makes it so companies like Dell can assemble computers that run Linux well. By choosing parts that Linux already has support for, like the Intel WiFi card, they make it easy to run Linux on their gear. Obviously, I have not tried this with every single possible Dell computer. There may be some for which this is not the case. But it *is* the case for this D620, newest member of the Linux laptop clan here at BMC.

Vacation, Week Two

Posted by Steve Carl Aug 2, 2007

Back to work next week. Meantime, a few observations about the BMC open world from this (vacation) side of it.


In a way it has been very frustrating to be on vacation this particular set of weeks. I know I have to go on vacation sometimes (I am sure you can feel my pain), but being out here in the "real" world while BMC has been busy announcing things in the open source world has been a little frustrating. You might guess I have things to say about it, since "Adventures" has been in large part about open-ness, Open Source, and other related topics.


BMC started dipping their toes in the Open world at the turn of the new millennium, releasing internal projects like CMSFS (which allows Linux under VM on the mainframe to read the CMS file system), and other cool stuff. That was only the start. TalkBMC is all about being open. BMCDN takes that to a whole new level.


I have been talking here for nearly two years about the things that my team has done with Linux and other Open Source projects like Samba. But that is all in the nature of what we have been leveraging. Coming soon, I'll be talking about how my team will be giving something back, via the BMC Developers Network.


It is odd to be looking forward to coming back from vacation, but I can't wait to start this new phase of being open.

Vacation, Week One

Posted by Steve Carl Jul 26, 2007

Nothing doing on the Linux front this week... but BMC is becoming far more Open!


"Adventures in Linux" is on vacation for two weeks, while the author fights volcanic soil to plant olive trees, remodels the kitchen on the "A" frame house, and does other distinctly non-restful things.


It is a vacation if only because there are high desert mountains all about, no electro-magnetic radiation nearby, and lots of healthy, clean air and water.


When "Adventures" returns, all sorts of new things are in the air though....


For a really big clue about some of the new stuff, have a peek at   whurley's latest posting: "Yes, We're Open". I have a whole series of posts I wrote about this, and have just been waiting to pull the trigger....

Fedora 7 for the office

Posted by Steve Carl Jul 18, 2007

A companion to the Fedora 7 as a home Linux blog



Last week I wrote a post on my personal weblog about using Fedora 7 as a home Linux OS. My conclusion there was that, barring Linux aficionados who experiment all over the place (like myself, actually), Fedora 7 was not well suited to use as a home Linux.


I have noted here that I set up my brother on Ubuntu a while back, and he has *never* called to ask me a question about how to use it. Not once. Either he just uses his Mac for everything, or it is dead easy.


Fedora ... not so much. While I would have no issue setting up just about anyone at any level of computer experience with Ubuntu, or even better, Mint, I would only recommend Fedora to those who really want to get to know a great deal about Linux from the get-go.


What about the office then?


As noted in the last post, Fedora was pretty handy at exposing a bleeding edge NFS V4 protocol issue. Fedora as a technology preview is pretty much unrivaled. If you want to know what is coming soon from other distros, it is hard to beat. They are usually a kernel revision or two ahead of anyone else. Almost all their package sets are about as fresh as they can be and not still be in the "Cooker"... and maybe in some cases some packages should have still been in development only trees, but those issues usually get fixed very quickly.


We use Fedora a great deal inside R&D Support as a desktop OS, and even more importantly, we use it as an R&D production data center OS. Our current Tier II storage server (as noted in my NAS series of postings a week or two ago) is Fedora Core 6 based.


Having said all that, am I truly advocating Fedora 7 as an office OS? Data center or desktop? Errr... no.


I have a team of people in R&D Support whose average work experience is about twenty years each. They are multi-platform, multi-OS proficient, and for them, Linux is an old friend. They don't get too wrapped around the axle about distro or whatever, because they all know how to peel back the covers if they have to and dig into problems. They will read the source code, talk to the original developer, and work toward solutions. They are self-reliant and extremely competent. That is why they do R&D Support.


What does an Office Linux look like then?


Most people who are going to use Linux as their full-time desktop are no more interested in how it works than they are in "what holds airplanes up in the air" (Mr. Weasley, Harry Potter book 6 reference... geek points!). They just want something that works and solves their business needs. In my Fedora-at-home post I list a few things home users probably don't care about, and one of them is central patch management. For an office Linux, in an office big enough to have a desktop support person, central patch management will be far more important. Example: if OpenOffice is doing something weird with a spreadsheet, and the new version fixes that, the desktop support folks will want to be able to push that fix out one time. Much easier than visiting every PC, or having the end users do the updates. And yes: all operating systems, even Linux, need to be patched from time to time. Being off MS Windows is only a relief in magnitude, not in kind. Operating systems are way too big, even with many eyes looking at them, not to have bugs appear from time to time. Linux of course adds the dimension of an extremely rapid rate of change, bringing new features and new packages: something the major "production" distros slow down and rationalize.
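As a toy illustration of what "push it out one time" means at the very small end: with nothing but ssh and a host list you can upgrade one package everywhere. The hostnames and package name below are invented, and the actual ssh line is commented out so the sketch runs anywhere; real shops reach for proper tooling, of course.

```shell
# Toy "central patch push": one loop, one package, many desktops.
# desk01..desk03 and the package are hypothetical examples.
for host in desk01 desk02 desk03; do
    echo "would run on $host: apt-get install --only-upgrade openoffice.org"
    # ssh "$host" sudo apt-get -y install --only-upgrade openoffice.org
done
```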


Each of the major distros like SUSE and RedHat has its own internal central patch management tool set, and heterogeneous environments have tools from vendors like BMC's Marimba or BigFix.


Companies with very little Linux experience, but willing to take the plunge, will in fact want to run a version of Linux that is certified for their hardware by the vendor. HP certifies different versions of Linux depending on which laptop you are considering, for example. Some are RH; some are SUSE. All are certified for one or the other. Dell certifies Ubuntu, as most people know by now.


Data Center Linux


If data center Linux varies in any way from desktop Linux, it is probably in the versions of Linux that are certified for the servers. While data centers are far more likely to have experienced staff on hand, similar to what I have on my team, they are also more than likely servicing a high-uptime SLA, and most support teams like to be backstopped by the vendor. Fedora 7 then is probably not on the candidate list there either.


Just us crazies then...?


Clearly Fedora 7 is a niche OS. If you are experienced (Jimi Hendrix reference: double geek points!). If you want to learn more about Linux. If you are a support person and you want a preview of what is to come. Each of those incomplete sentences is a case where Fedora 7 is viable. Fedora 7 can be made stable, and because it is so current, it is pretty fast, taking advantage of all the latest and greatest tweaks. But it can be a real bear to work with. If you ever watched the Colbert Report, you know what a threat bears are. If not, then the last two sentences might make no sense at all...


Here is a short example: I was testing Fedora 7's Evolution versions, 2.10.2 and the brand new 2.10.3, against MS Exchange 2003. Other than having to follow my usual delete-and-start-over protocol, there were no issues. Evolution worked fine. The problem was that during this, I was carrying around my IBM X30 laptop, upon which Fedora 7 is installed. I left the wireless cards at the office. I got home, and noticed the storage pocket where I keep the cards in the X30 transport sleeve was wide open. Frantic looking produced no place they had dropped out. I thought they were gone forever. I R an idiot, but that is a different story.


I wanted to write some for the blog, and I wanted to do so on the X30, but without a wireless card it was less useful. I do most of the writing in Google Docs. Network connection required. Sun had it right: the network is the computer, especially here in the "Web 2.0" days. It's really true on an iPhone... But I digress.....


It was late Sunday, after all the good computer stores had closed. Honestly, are there no stores outside Silicon Valley that realize a geek needs to shop, all hours of the day and night?


I went to Walmart. I know. I know. Don't get me started. Walmart is not computer geek paradise. But at least they were open and had something.


There were two cards in stock: A Belkin I knew nothing about, and a Linksys 802.11N card I knew was not yet working on Linux. The Belkin was a plain old "G" card (the F5D7010 V7), and I assumed I could fire it up under ndiswrapper at the very least.


I went home. I installed ndiswrapper using yum. I copied the drivers from the Internet per the instructions, and installed the .inf files. The last step to get the card up and running is "modprobe", to load the ndiswrapper module... "modprobe ndiswrapper" failed. This made no sense till I looked at the obvious.


I finally figured out I had a module mismatch: yum had installed ndiswrapper, but not the version I needed for the currently running kernel. I fixed that via the ATRPMS web site, and now modprobe installed the ndiswrapper module. The lights on the card lit up. And... the computer hung, solid as a rock. I have never seen Linux do anything like it, other than the time I installed another, different, fried wireless card, and it hung everything I put it in.
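To make the diagnosis concrete, here is a minimal sketch of the kind of check that would have caught the mismatch before the modprobe. The module version string below is a made-up example, not the actual Fedora package version; on a real machine it would come from the installed kmdl package name:

```shell
# Sketch: compare the running kernel against the kernel the ndiswrapper
# module package was built for. The module_built_for string is hypothetical.
running="$(uname -r)"
module_built_for="2.6.21-1.3194.fc7"   # e.g. parsed from the kmdl package name

if [ "$running" = "$module_built_for" ]; then
    status="match"
else
    status="mismatch"
fi
echo "running $running, module built for $module_built_for: $status"
```

The practical version of the check is simply whether /lib/modules/$(uname -r) actually contains an ndiswrapper.ko; since the repository packages are built against specific kernels, a plain yum install can silently fetch a module for a kernel you are not running.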


Nuts. A bad card. Maybe that was why the box was open slightly. I gave it all up as a bad job.


Next day at the office I found the Atheros-chipped D-Link card, as well as the Orinoco-chipped Linksys. I jacked both of those in and they worked. Happiness, and this blog, ensued.


The Fedora 7 Linux Experience(d)


My point is this:


  • I documented in my other weblog the heartburn I had getting Fedora 7 going with the Atheros-chipped D-Link card on the X30. The MADWifi modules were mismatched, the same as the ndiswrapper ones were this time.
  • The Orinoco-chipped Linksys card works out of the box. No problem. I still don't know why Fedora thinks this driver is more open than the MADWifi one.
  • This new Belkin F5D7010 V7 (PCI ID 1799:701f, Realtek 8185 chipset; see the ndiswrapper site for details) required that I already knew, ahead of time, to use ndiswrapper rather than hunt for "native" Linux support.


Therein lies the issue: Fedora Linux works best if you already know Linux, and at a more-than-cursory level. You might be lucky and have the Orinoco card or similar, so that a plain install (i.e., no bit twiddling to get it working) works just fine. It is more likely you will have to "work for your supper".


When you are done, you will have a current... bleeding edge even... version of Linux. Nice for knowledge and practice. Not so nice if you are going to have to support it for people for whom this is their first experience with Linux.

Stopping the NFS mystery

Posted by Steve Carl Jul 10, 2007

Clarifying the exact nature of the Tru64 NAS server issue we and others have had, and the short term solution to it


In my most recent post over at "on being open", I talk about the suitability of Fedora 7 for a home Linux. One of the things I close with there is the idea that Fedora 7 is actually a pretty good OS for the data center or the professional desktop. Very bleeding edge. It chases out problems well in advance of some other OSes. One problem we found with it, and with later-kernel versions of Fedora Core 6, was that they did not work at all well with our Tru64 TruCluster NAS.


I am not one to be mysterious most of the time, but apparently I was not quite detailed enough in my March 1st, 2007 post about the cause of the problem, and the short term solution we are using. Why I say "Mysterious" should become clear in a moment.

Names in this Blog

I wanted to take a quick detour to mention my policy about publishing names in this or my other weblog. When someone writes me offline (i.e., not using the comments button, but the "Contact Me" button), I assume that they do not want their name published. Most of the time I do not publish any of what they have written.


Today's post is an exception because there is a great deal of technical detail in the email, and the problem under discussion, while not common unless you are using Tru64 as a NAS server, is going to start happening for anyone who does. NFS v4 will create this issue. Recent Linux clients will therefore create this issue. Maybe other NAS servers that behave the way Tru64 does will create this issue.


Today's post, then, is largely Google and other search-engine bait, to help anyone else who hits this problem find a quick solution. Like the OS.X / 64-bit NFS server problem before it (in this weblog: way back there), there will be people hitting this, and it is not yet well documented as a problem out there.

The NAS readdirplus Problem in a Nutshell

The author of this email does a far better job than I of really peeling back the covers on this, so let's dive into their letter:

I've googled my way to your blog - seeking answers to a problem we've just encountered here (at -deleted for privacy-) with Fedora Core 6 NFS (via autofs) dealing with a Tru64 NFS server. Excuse the length of this mail, and my presumption in seeking your advice, but I'm at the limits of my knowledge/understanding here.

I just want to stop here for a second and say that this blog is all about knowledge sharing. I do this blog after hours (and for many many hours at a time) because this is a subject I care about, and I truly enjoy the interactions I have with others around the world. I hate to jinx this or anything, but only once in the years that I have been doing this have I ever gotten anything like hate mail. Everyone else has been kind and decent and concerned and knowledgeable (as this writer is), and that makes it worthwhile.


So, there is no presumption, and no imposition. It's why I do this. In fact, thank you for writing me about this so that hopefully everyone can benefit.

Back to the problem at hand:

Everything's been OK up till now. We have Linux NFS clients going back years (to RedHat 7.3 even), plus Suns (running Solaris) and SGI systems (running SLES). Now, on a newly installed and yum-updated Fedora Core 6 system, we're seeing "Not a directory" messages when trying to cd down a filesystem that is automounted from our Tru64 server.

Breaking in again to note here that default Fedora Core 6, out of the box, works *fine*. It is when the 2.6.20 kernel is loaded that the new NFS client code is brought along, and the readdirplus problem begins to occur. It happens all day long on Fedora 7.

Accessing the same automounted filesystem from a Fedora Core 5 system (or earlier) is fine. The only difference between the two cases is on the client side. The Tru64 server is unchanged and the nfs mount is from a NIS map. Some examples by way of illustration: First, a working FC5 system (automount version 4.1.4-29):


% cd /home_a/user/d1
% cd d2
% ls -l
drwxrwxr-x 3 user user 8192 Jul  3 14:22 d2
% cd d3
% cd d4
% cd d5
% cd d6
% pwd


% grep home_a /proc/mounts


automount(pid2155) /home_a autofs
rw,fd=4,pgrp=2155,timeout=600,minproto=2,maxproto=4,indirect 0 0


tru64:/export/fs1/user /home_a/user nfs
0 0


Now on a "broken" FC6 system (automount version 5.0.1-0.rc3.31):


% cd /home_a/user/d1
% ls -l
drwxrwxr-x  3 user user 8192 Jul  3 14:22 d2/
% cd d2
d2: Not a directory.


% grep home_a /proc/mounts


auto.home_a /home_a autofs
rw,fd=19,pgrp=6020,timeout=600,minproto=5,maxproto=5,indirect 0 0


tru64:/export/fs1/user /home_a/user nfs
0 0


tru64:/export/fs1/user/d1/d2 /home_a/user/d1/d2 nfs
0 0


Notice the additional line purportedly showing a mount of /home_a/user/d1/d2.


The behaviour is interesting in some aspects, as it seems that it's possible to cd directly way down an automounted filesystem provided you don't attempt any long directory listings. Then one can cd back up the tree OK. It's doing the long directory listing that seems to cause the additional entry to be made in /proc/mounts and to result in the subsequent "Not a directory" error when attempting to cd further down.


In your blog of March 01, you've written one or two tantalizing paragraphs that say that "the NFS server is where the real problem is" and that you'll have to work around the problem. The question is, how? Is the "Not a directory" error that I've noted above the sort of thing that you have seen?

There it is in a nutshell. Well. It was making us nuts anyway.


It was and is very weird. If you unmounted and remounted, you could navigate directly to places you knew existed and all would be OK with the world. Do anything that causes 'readdirplus' (like 'ls') to be issued, and it is game over. "Not a Directory" from there on out.

And, again in the tantalizing category, the man page for NFS(5) on my FC6
system documents this new option ...


  Disables NFSv3 READDIRPLUS RPCs. Use this option when mounting
  servers that don't support or have broken READDIRPLUS implementations.


This option doesn't appear in the man page on my FC5 system. Is READDIRPLUS what happens when an ls -l is issued on an NFS client? I wonder if the nordirplus option is being implemented as a result of the sort of things that you've blogged about? Regardless, I've tried to use nordirplus as a mount option on FC6. It's accepted on the command line, but it is not listed in the /proc/mounts output, and it doesn't seem to alter the "Not a directory" problem behaviour either :-(

In our email conversation I never answered this part of the query, because I was already sure that this was the exact problem we were having. Our Twister of All Things Network Storage mentioned in passing to me one day that he had noticed that the options did not work the way he thought they should... and I think this was the option he mentioned. But I won't swear to it.


It does seem like buggy behavior: or it may be that the man page documents the option before the feature is actually implemented? I don't know. Sigh. I guess we'll have another update when someone someplace knows the right answer on this.
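For the search engines: this is the shape the option is meant to take once it does work, as a mount option on the NFSv3 mount. The server name and paths below are the anonymized ones from the letter, so treat them as placeholders:

```
# /etc/fstab entry (or the mount -o equivalent) forcing NFSv3 without
# READDIRPLUS; server and paths are the example names from the letter
tru64:/export/fs1/user  /home_a/user  nfs  nfsvers=3,nordirplus  0 0
```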

Letter 2 (with a portion of my response)


>You have hit the exact problem I was referencing.
>Simple fix is to make the export version 2.


It's nice to have that confirmation, and forcing the mount to be NFS v2
does indeed work. I've set that in the auto.master file on the Fedora Core
6 client only:


  # For details of the format look at autofs(8)
  /-     --ghost  nfsvers=2
  /home_a         yp:auto.home_a          nfsvers=2


Now, I just need to work out how to do some more sophisticated
configuration of the automounter maps so that the nfsvers=2 option is only
applied to automounts from the Tru64 server and not to everything in the
direct map (as happens with the setup above).


Then, I need to work on retiring the Tru64 server :-)


I'll also keep an eye on the nordirplus option that I mentioned in my
first email, as I'm curious to know why this didn't seem to work.


The newer NFS client code in later Fedora Core 6 kernels and in all of Fedora 7 is going to try to use "readdirplus" if it thinks it can. It appears that if it sees a V3 export, it thinks it can. So far, to date, this problem has only manifested for us as an issue with Tru64 as the server and Fedora Linux as the client.


I see no reason, based on the actual problem, that this will not get worse over time. It seems a certainty, in fact. How many other older-kerneled NAS servers are going to do the wrong thing when challenged with this new client behavior?


My Senior NAS Beater tells me that he worked this problem with network traces and looked at the client side code (the wonders of Open Source), and talked to the Linux NFS client author (another wonder of the Open Source community). At the end of this investigation, he is satisfied that the client is playing 100% by the NFS rules. Even though we have only seen it to date with Fedora clients, this will change. Fedora is just farther down the NFS client code adoption bunny trail right now.


To verify this in fact, the Chief Mugwump of NAS Destruction (yep: I steal from J.K. Rowling too...) modified the Fedora client code to always force READDIR rather than READDIRPLUS, and the problem stopped with the Tru64 NAS server.


For now, we use NFS V2 over TCP for any clients with the problem. In the very near future, we'll retire the formerly awesome Tru64 TruCluster from NAS duties. We'll be very sad on that day, because we know that it happened not for technical reasons but because of parental neglect.


A sequel of sorts


Proving I have moderately high geek points: the title of this post comprises the opening words of the "Babylon 5" spinoff series "Crusade". These words seem to work for today's post, especially if you know the TV show at all. To be clear, if you are not Crusade-savvy: it was not a show about the religious Crusades of 1095 to 1291. It was about a race against time to save Earth from a plague created by an alien species. In typical Babylon 5 tradition (and in fact, that of all good Science Fiction), the show was as much about current times as anything happening in the future.


I always wanted to get a Babylon 5 quote in here. :)


When this question (or is it questions?) is applied to the topic du jour, Open Source, today's post becomes a sequel to my previous one about “Egoless Programming”.


Common and Community... same root word.


Open source is at its core a community of the creative. The innovative. A community of people with common interests and talents. There are many kinds of communities of common interest. Sporting events are communities of people with a common interest in whatever sport is under way. The first time my new-at-the-time wife came with me on a ski trip, she found out she was not a member of the community of skiers. She did not know that when we were not on the slope, all we would talk about was skiing. Even though we had just battered ourselves all day long against a mountain, where the mountain once again won as it always does, we would sit in front of a cheap TV set that evening after dinner (where we recalled the day's events in gory detail), glued to Warren Miller films about people who ski way better than we do. She thought when we weren't skiing we might do other things. I don't think it was self-delusion: I just don't think she understood.


Other communities: I have been to many Mensa events: my wife is a member. I know people in the Society for Creative Anachronism (SCA). I literally knew rocket scientists when I was at NASA. Here at BMC I have met many product authors and product designers. Smart / creative people are still in need of the same things as anyone else. They need community. They need other people. It is just that what they are communicating with each other about can sometimes make one's brain hurt. When inside their community, they are in their zone of comfort, and they speak of things in ways that people outside the community just may not so easily grok. It is not meant to exclude; it is just a feeling, kind of like the one you get when you arrive home after a long trip. Kind of like the feeling I just got getting a Heinlein reference and a Babylon 5 reference into one post... :)


I used to date a waitress. When she was with me, and we were with co-workers and friends of mine from the computer world, she had no idea what we were talking about. She once asked me later if we were “being dirty”, because to her ears all the talk about bits and bytes had a vaguely obscene sound to it. She was an intelligent person with other interests: she had just never been involved with computers at all. This was before PC's were commonplace, too. Did I mention there is dirt younger than me? I did make an effort not to talk about computers after that when she was with me. She still dropped me like a hot potato. I'm pretty sure it was the computer thing... :)


Common... or maybe uncommon interests and abilities.


Here is another bit of the puzzle: intelligence / creativity is not something that can be measured and expressed in a unitary number like IQ. See Stephen Jay Gould's excellent book "The Mismeasure of Man" for why we humans are far too complicated for such simplistic ideas to have any application beyond that of a party novelty. No believers in hogwash like "The Bell Curve" need apply here. That is a different community: the community of people who wish life was easier and less complex than it really is, where people all line up in neatly classifiable little rows.


I know this about IQ because I am an example of it. Please do not interpret this as braggadocio or any other form of self-importance, but I am not of average IQ. At different points in my life, though, I have taken IQ tests and returned scores that varied by over 30 points!


One reason is that intelligence and ability are about exercise: the more one uses one's brain, the better it becomes. Another is that tests that measure different aspects of ability will return different results relative to the average. If you were to measure my ability to perform math today relative to when I studied it in school, there would be a huge drop-off. In my job I use math, but only parts like statistics and double-entry bookkeeping and the like. Some geometry, for building houses. Anything else would require intensive study to refresh. All skills are organic when one is a human being.


This is deeply complicated: I am not even talking here about savant talents, or cultural bias in testing. You have to read Gould's book or something like it to get one's head around the whole thing.


Add in now "interest". One is far more likely to gain skills or stay fresh and skillful if it is an area of interest. Over time I think this will tend to skew, so that one becomes even better at areas of interest, and even worse in other areas. This is probably further reinforced by community. When you are with like minded people, you tend to focus on your areas of common interest.


The results of the community that is Open Source are tangible, if not easily measured. Results like Linux, Gnome, KDE, Xorg, OpenOffice, GIMP, Beryl/Compiz. On and on and on.


How does one create an environment of creativity?


If you look at some of the most successful companies these days, it is clear that they have internal environments that encourage creativity and innovation. Apple is clearly one of them: I am writing the first draft of this entry on my iPhone in an antique mall while my wife does ... something.... in a cabinet full of antiques. Not sure exactly what she is doing: I am not a member of the community of antiques. And I'm blogging on my iPhone, so I am not paying attention either. In that world of antiques I have found that my tastes run to either the very cheap or the very expensive. I can not tell the difference between one and the other when I am looking at it. I do know that most of what I like is often called "Art Deco". But I digress.


Another example company would of course be Google.


I admit freely that as a manager I have stolen freely from Google, specifically their 70/20/10 "rule". I have been around creative geniuses all my life (my dad has something like 17 patents he can talk about at NASA) so I know a good idea when I lift one. I have written to some degree about some of the creative things we are up to inside R&D support, but since I adopted this rule, some major changes have occurred in the team. I'll get into those in future posts. Some really good things happening here though.


Most corporate environments are not so flexible as to be able to deal well with the needs of the creator. Humans being what they are, needs are met one way or another. A stultifying day job is a recipe for creating all nighters, and weekends of creative frenzies. Or drinking. Sometimes both. They are not mutually exclusive.




I was reading a very interesting article this last weekend in the New York Times about amateurs contributing to the Space Program (subscription may be required). The article was interesting and well written, but at one point the author was essentially going to sleep during an in-depth technical discussion that was occurring between those he was writing about. Even though the author was interested enough in the topic to do the research and write the article, they could not quite hang in there for all the detail that was required to make the project work.


I hit this phenomenon again recently. I was talking for more than four hours with someone I had never met before. In the course of that conversation I found out that they were deeply intelligent, well traveled, well read, and had some truly original and fascinating insights into some things: things I had thought about for years and never picked up on. Then another person in the room mentioned that they kept a laptop by their bed, for late-night computing needs, and got the oddest reaction. This person could not believe anyone would do such a thing as have a laptop near their bed. I keep two there: a Linux laptop and an Apple laptop, so I thought nothing of it.


Somehow this crossed some line for this individual between intelligent and geeky, and it is always an education to find where that line is. It is completely individual, just like we all are. I am sure when I quoted Babylon 5, geek alarms went off for some, and for others it was more like “Hey, I didn't know he liked Babylon 5 too”.


My next door neighbors have a sort of mixed marriage. He is an engineer and a homebody. She loves to travel and I am insanely jealous of all the places she has been. She has seen the Taj Mahal twice! They are in their 80's now, so clearly they have figured out how to make that work. They are both intelligent, creative people. He likes to create from his home, in the world of his mind. She likes to learn new things, meet new people, and is trying to see everything that can be seen. He walks the dog while she is away. What is clear is that part of staying young is staying interested in things.


Meeting the needs of the creator


My theory about why there are so many Open Source projects, with so many people so good at such complicated and esoteric things as writing operating systems like Linux or FreeBSD, or amazingly sophisticated graphical applications like GIMP (or any of the other 21,000 projects in Debian alone), is this:


  1. This is their interest


  2. This is their community


  3. This is who values them for what they want to be valued for


I know when I used to talk to Rick Troth about things like “suloginv” or “CMSFS” or any of the other Open Source things he was working on while he was here, there was always a special note of affection in his voice. He truly loved that work and that community of people. It was not just that they valued and loved Rick, either. It was that they were in a position to better understand exactly what it was he was doing, how hard it was, what its value would be... and they valued him for that which he wanted to be valued for. I used to get a warm glow just standing next to him.


There was an interesting article I read today about the number of contributors to the Linux kernel growing. For the purposes of this post's focus, it was especially interesting that the total number of people contributing code to the Linux kernel has doubled just since the 2.6.11 kernel came out. The change rate has doubled too: from two changes an hour to four. Per hour!


Moving back to a personal example: what about oneself does one wish to be liked for? Does it feel better to have someone say you look good today, or that they really liked that patch you checked into the repository, or that you really made a great play in the game... what about you is what you want to be appreciated for? I was very gratified when I started this weblog by the response to it. As personal an act as writing is (a virtual putting your heart out on your sleeve if you will) it would have been hard to have had a huge groundswell of "You Stink" kind of responses.


Swimming against the needs


Companies that do not get this are going to be in trouble. The power of the creative / innovative is stacked against them. They will be facing companies that *do* get it. Entire groups and communities. Literally millions of smart creative innovative people motivated by common interest and community.


This is not even really a we-versus-them scenario, as much as some would personalize and demonize it. To utterly torture a metaphor: this is a drive-your-stake-in-the-ground-and-resist-change thing, only to find your stake is really a sand castle, and that thing moving at you is a wave. Big or little, fast or slow, the wave will win.


I said it was going to be tortured. But I like the mental image of the wave slowly lapping at the sand castle, and think it appropriate. If you have ever been to one of those build-a-sand-castle competitions, and then gone by the beach the next week to see all that remains, you get the picture. Beach. Waves. Skin cancer. Ok. Fine. Moving on.


Coming back then to the Babylon 5 quote at the top of the column: people always serve themselves. Their needs must be met. Look at any company in crisis, and most of the people leaving first are the good ones. They are not rats abandoning the ship; they are talents whose needs are not being satisfied.


My brother spent his life battling alcohol. In the recovery programs he attended, they talked about the fact that he had to work on himself before he could help anyone else. A sort of self-protecting self-involvement that was required to make progress against the disease. They had a saying: "You have to work on yourself before you can work on anyone else." It may seem odd to say it that way, but it was and is based in a very solid bit of understanding about the way we humans work. If one does not take care of oneself, one cannot have enough resources, mentally, emotionally, whatever, to be there for anyone else.


Call it enlightened self-interest, or whatever you want. Creative and innovative people need to create and innovate. It is part of who they are. This is true of any subject: not just the creation of Open Source programs for those gifted with talents in the area of programming, but any artistic endeavour. And those of like interests tend to form communities: just take a trip to Santa Fe, New Mexico, or Marfa, Texas, or Jerome, Arizona. Three places that pop to my mind whenever I think of the term "Artist Colony". There are many others....


Writers must write. Coders must code. All are acts of creation. All creative needs must be satisfied.


It is the way we are. Oddly, few companies really understand this about their creative talent. Some may even actively work against it.


Many companies fight a delaying game. They throw up as many obstacles as they can to try and slow things down. See SCO for details on how that worked out.


Here is another example: MS is seeing a drop-off in third party apps for MS Windows. This report says 10%, with another 2% this year. I am sure it will be denied as being true though. What I will be curious to see is if Ray Ozzie "gets it". Signs are not good: the patent deals with vendors like Novell and Xandros and Linspire look like another delaying action to me.


People will meet their own needs. They will ultimately serve themselves, and their creative desires. They will go where they feel good about what they do. They have to. They are people. That is just how people work.


As for the second half of the B5 quote: They will trust those that think and feel and act the same way they do.


That would actually be a whole other post I think.

Experimenting with a new Weblog to talk about Linux at home

It would be fair to say that I have probably pushed the boundaries of the content in this weblog. Not by being controversial: that is whurley's job. I admit freely that when I picked my topic for this weblog, I intentionally picked one that was broad enough to allow me a fair amount of latitude. I am a computer generalist at the end of the day, so I do tend to wander far and wide personally, and in this weblog.


BMC creates over 600 products, many of which run on Linux. We have open sourced various things like the CMSFS on mainframe Linux. Many in R&D use Linux as their desktop OS as I do. So Open Source, Linux in the data center, Linux on the professional desktop would all seem to be in-bounds topics.

BMC does not create any products that I am aware of that would be used on a home Linux computer. That has hardly stopped me from talking about it here, though. But every time I do, I wonder about it: is this the right place to talk about Mint, in the strict case of just how I use it at home?


I have decided to try an experiment. I have created a new weblog for just those home Linux conversations:

I have no idea how this will all work out. I may decide this is more trouble than it is worth. There are so many gray areas here. This is in many ways an artificial division.

Home Computing


I read once, a while back, that the reason MS Windows is so popular at home is that people wanted to use at home what they used at the office. I do not recall if there was any supporting data for that opinion, but it feels logical that it is at least true in many cases. MS had been successful back in the late 80's and 90's getting businesses to use MS Windows (with early help from Lotus 1-2-3 and WordPerfect), and so it was just logical to have DOS and, later, MS Windows computers at the house too.


If that is true, then as Linux and OS.X make inroads in the office environment (as they slowly are) then more people will want to be using Linux at home too.


Linux is ready for that. My brother is a professional carpenter, and computers for him are just a tool. Ubuntu 7.04 and OS.X are the OS's on his home systems. Since I installed his Ubuntu 7.04 computer over a month ago, I have not had one call asking me any questions about how anything worked.

Partly this is because he, like me, uses the same applications on every computer he has ever had, starting back when he did have MS Windows computers. But Firefox is Firefox, and Thunderbird is Thunderbird, and OpenOffice is OpenOffice pretty much everywhere. Since he already knew all those applications, he was good to go on Ubuntu. We even had Google Earth and Picasa up and running. I see the Google Desktop for Linux is now out too.


My mother, also not a computer professional, runs on an iMac that replaced MS Win98 a while back. Her application stack is the same, except she uses iPhoto rather than Picasa, and her email is Gmail. The point being that there is nothing magical about MS Windows for working on computer things, and that is true at home and in the office.


I'll see if this division of weblog content makes any sense at all. If it does, I'll keep it up; otherwise I'll just keep pushing the boundaries here. Where there seems to be content crossover, I'll note that here or at on-being-open. If nothing else, it is a new adventure....


I do have to say one thing about blogging in two places, though: Blogspot is a dead easy place to write a weblog. It is interfaced with ScribeFire and Google Documents in some very lovely ways. I hope talk.bmc picks up some of those features in its next release!
