
The Tru64/TruCluster based fileserver has served us well as the R&D Highly Available central fileserver. But, as the Tru64 Unix OS has been put on limited support by HP and the hardware is now seven years old, it's time to retire the system and replace it with current technology.


Today I want to start on a new series, and a different approach to some of the Internal R&D Support projects I post about in this blog. What I have in mind is to start talking about an in-flight project, warts and all, so that what I end up with is truly an open conversation about this project. Previously I have always reported things once we knew the outcome. This way, some of the process itself will be under open discussion.


The other thing that is new... well, new-ish... is that most of this post comes from the person who is actually doing the work. In this case, our "Master Abuser of Network Protocols", Dan Goetzman.


There are about a zillion posts about the server we are replacing here in "Adventures", going all the way back to the beginning. Way too many to link here without creating something that looks like a table of contents. The summary below gives some of the history, so hopefully that will serve to level-set things.


Dan started by defining the project on our Wiki thus:



  1. Support NFS file serving
  2. Support CIFS/SMB file serving
  3. High Availability
  4. High Performance
  5. Cost Effective

Leveraging the Past


For several years now, R&D Support has been building cost effective file servers using commodity hardware and Linux. We have evolved these to the most recent "generation 2" design, nicknamed "Snapple". A summary of the "Snapple" based design:

  • Linux - Currently using Fedora Core 6 as it has proven to be a stable NFS server
  • OS and DATA separation - OS is mirrored on internal hard drives, DATA is on the shared SAN storage
  • XFS filesystems for user data - High performance and scales well
  • Sun X2200 servers
  • SAN shared storage
  • Apple XServe Raid storage subsystems

The only item not addressed by the Snapple [Sun and Apple hardware: Snapple. We are so punny - Steve] based fileservers is high availability. The Snapple configuration allows a spare server on the SAN to be manually switched in to take over for a failed server, allowing rapid recovery of services. But that is short of a highly available solution.


Designing the Future


To address the manual versus automatic service failover question, it seems logical to look at Linux cluster technology. As the central R&D fileservers are considered a production service, it also seems to make sense to look at the "enterprise" Linux distros: Red Hat Enterprise Linux, or in the case of this project, the CentOS variant of the Red Hat EL distribution. The new design starts to look like:


  • CentOS 5.0 with Cluster Suite and Cluster Storage Support
  • OS boot drives will remain simple "md" mirrors on the internal disks in the server heads, not under any logical volume manager.
  • DATA filesystems will be GFS, as the cluster is simplified if a cluster "parallel" filesystem is used.
  • Shared SAN for storage, using dual SAN switches operating as a single fabric.
  • Apple XSR storage, decent performance at a great price for SAN based storage.
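
To make the "md" mirror bullet concrete, the OS mirror setup looks roughly like the commands below. The device names and partition layout are illustrative placeholders, not our actual configuration:

```shell
# Create a RAID1 "md" mirror across the two internal disks
# (device names are examples -- substitute your own partitions)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Record the array in mdadm.conf so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial resync and check mirror health
cat /proc/mdstat
```

The point of keeping the boot drives as plain md mirrors, with no volume manager on top, is that either disk remains directly bootable if the other dies.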

The NFS service will become a cluster service that the CentOS cluster will make highly available. The Samba CIFS/SMB service can also be another cluster service, configured to run on another cluster node by default.
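
As a sketch of how the NFS service might be expressed under the Cluster Suite, the relevant /etc/cluster/cluster.conf fragment could look something like the following. All of the names, the IP address, and the device/mount paths here are made-up placeholders, not our actual configuration:

```xml
<!-- Hypothetical rgmanager service stanza; names, IP, and paths are placeholders -->
<rm>
  <failoverdomains>
    <failoverdomain name="nfsdomain" ordered="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="nfs-svc" domain="nfsdomain" autostart="1">
    <ip address="10.0.0.50" monitor_link="1"/>
    <clusterfs name="gfs-data" device="/dev/vg_data/lv_data"
               fstype="gfs" mountpoint="/data">
      <nfsexport name="data-export">
        <nfsclient name="world" target="*" options="rw,sync"/>
      </nfsexport>
    </clusterfs>
  </service>
</rm>
```

With a service defined this way, the cluster manager owns the floating IP and the export, and can move the whole bundle to another node when one fails.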


Optional / Later


Storage Virtualization. With the shared storage on the SAN, using a true parallel cluster filesystem, the next step is to look at virtualizing the storage. Advantages of storage virtualization include:


  • Remote replication of data at the SAN block level
  • Mirror data at the SAN block level
  • Create storage classes that can align data with class of service on the storage farm.

We have an IBM SVC (Storage Virtualization Controller) subsystem available to us that we can insert into the SAN to test and qualify the storage virtualization option. After initial testing using normal "zoned" SAN storage LUNs, we will insert the SVC, virtualize the storage, and then compare functionality and performance.
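
For the before/after comparison, even something as crude as streaming dd against a scratch LUN gives a first-order throughput number. The device path below is a placeholder, and a real comparison would use a proper benchmark, run against the same LUN both with and without the SVC in the path:

```shell
# Crude streaming-write test -- DESTRUCTIVE, only run against a
# scratch LUN with no data on it (the path is a placeholder)
dd if=/dev/zero of=/dev/mapper/testlun bs=1M count=4096 oflag=direct

# Crude streaming-read test against the same LUN
dd if=/dev/mapper/testlun of=/dev/null bs=1M count=4096 iflag=direct
```

The direct I/O flags matter here: without them the page cache makes the numbers meaningless for comparing storage paths.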

Wikis are great for this kind of thing. I can see what is happening when I am ready, and I can fix spelling errors if I notice them. Unfortunately for both Dan and me, English is not our native language. Neither is anything else either. We do what we can....


A quick note about the storage virtualization bit: we pulled that out of the initial pass to minimize variables. Once we know we have a working solution, we'll layer it in, because this is how we plan to enable some advanced features like block replication across the WAN. All that comes later though.


Once the project was defined, the "Server Beater Most Excellent" (Dan: he has a lot of titles) went to work. We bought the hardware, assembled it, had some internal discussions, and decided that the first pass at this new server would be CentOS 5 based, and leverage the Cluster LVM and GFS to make fail-over between the three Sun X2200's easy.

Well, we had hoped it would be easy. The Wiki problem tracking page for the project currently looks like this:


NFS test CentOS Cluster - Problem Tracking

NFS V2 "STALE File Handle" with GFS filesystems


This happens only when using NFSv2 over a GFS filesystem!
NFSv3 over GFS is OK. NFSv2 over XFS is also OK.


From any NFSv2 client we could duplicate this:

  • cd /data/rnd-clunfs-v2t - To trigger the automount
  • ls - Locate one of the test directories, a simple folder called "superman"
  • cd superman - Step down into the folder
  • ls - Attempt to look at the contents, returns the error:
ls: cannot open directory .: Stale NFS file handle

Note: This might be the same problem as Red Hat bugzilla #229346.
Not sure, and it appears to be in a status of ON_Q, so it is not yet released as an update. If this is the same problem, it's clearly a problem in the GFS code.


NFS V2 Mount "Permission Denied" on Solaris clients


This problem was detected in a previous test/evaluation of Red Hat AS 5 and was expected with CentOS 5.

Certain Solaris clients (Solaris 7, 8, and maybe 9) fail to mount using NFSv2. Apparently the problem is a known issue in Solaris, triggered when the NFS server (in this case CentOS) offers NFS ACL support. Solaris attempts to use NFS ACLs even with NFSv2, where they are NOT supported.

The correct behavior is for the Solaris clients NOT to try to use NFS ACLs on version 2. This has been fixed in more recent versions of Solaris (some Solaris 9 releases and Solaris 10+).
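
Until all the old Solaris clients are upgraded, one possible workaround on the Linux side is to stop advertising ACL support on the affected exports, assuming the server kernel honors the no_acl export option. The path and client network below are placeholders:

```text
# /etc/exports -- "no_acl" tells the server not to advertise NFS ACL
# support, which keeps old Solaris NFSv2 clients from trying to use it.
# The path and client network here are placeholders.
/data/test  192.168.1.0/24(rw,sync,no_acl)
```

The tradeoff is that no_acl turns ACLs off for every client of that export, not just the misbehaving Solaris ones, so it is a stopgap rather than a fix.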


And that is where we are at! We don't have all the answers; in this case, we don't even have all the answers about what will happen next. Lots of questions though.

We know that the bugzilla bug on GFS is still open, which probably means we'll have to take GFS out of the equation, at least for now. That is not good, since it means we'll have to script the NFS and CIFS failover. Yuch.


More on this as it unrolls. Let me know if you find this approach of talking about an in-flight project interesting (rather than just summarizing it at the end, once everything is known and decided).


Turning the printing problem around from last time: printing from MS Windows to a Linux system, where Linux does not support the printer


A while back I posted a question here, posed to me at a recent LinuxWorld, about how one might print from Linux to an MS Windows attached printer, where the printer is unsupported by Linux. Richard Meyer asked around his local Linux User Group, and got two replies, which I posted here [ post one, post two ].


There is the inverse problem, which appeared to bug Richard as well: where Linux does not support the printer, and therefore provides no drivers for it, but the printer is attached to a Linux server and MS Windows wants to print to it. Richard found this post from Chris Balmforth in a public forum on Codeweavers about it:


Hi Steve, when you come back from vacation, here's another approach to printing on printers that Linux doesn't understand ...


Linux Counter user #306629

Subject: Re: [cw-discuss] Windows printer question on codeweaver?
From: Chris Balmforth
Date: Thu, 26 Jul 2007 18:47:09 +0100

I get good results by installing my printer (which has no Linux driver) on my Ubuntu server as a RAW CUPS printer (no Linux driver needed), then making it available to the network via Samba. When I try to connect to it with a Windows machine it tells me that the server has the wrong driver and asks me to install the correct driver locally. Install the printer's Windows driver on the Windows machine and everything works fine. You might need to Google a bit to solve some CUPS and Samba RAW printing issues, but I got it working fine in the end.
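
The recipe Chris describes boils down to something like the commands below. The queue name, device URI, and smb.conf details are illustrative placeholders; your printer's URI will differ:

```shell
# Define the printer in CUPS as a RAW queue -- no Linux driver involved.
# "rawprinter" and the device URI are placeholders for your own printer.
lpadmin -p rawprinter -E -v usb://Printer/Model -m raw

# Then share print queues via Samba; in smb.conf, something like:
#   [global]
#      printing = cups
#      printcap name = cups
#   [printers]
#      printable = yes
#      path = /var/spool/samba
# Windows clients then install the printer's native Windows driver locally.
```

Since the queue is raw, CUPS just passes the Windows-rendered print job straight through to the printer, which is why no Linux driver is ever needed.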

I used Chris's name above both to give credit where credit is due, and because it was already public over at Codeweavers on the subject.


Welcome to new TalkBMC Open Source Blogger


We have a new Open Source blogger here at TalkBMC: Nitin Pande. I have worked with Nitin for a while now internally. Funny thing, it was not until very recently that I found out he was an Open Source maven too! I have a feeling he will not be the last we discover as BMC continues on its new Open Source journey. I look forward to seeing what Nitin has to say! Nice thing about Open Source: it's a huge world, and there are always new things to learn.


Getting Re-oriented


After being gone for two weeks, with my only tether to the electronic world being my iPhone (as discussed on my other blog), I am trying to get going again here. 1000+ emails that survived spam and other filters awaited me: it took two days just to get those read and done!


A quick look around shows no new Mint version, unless you count the fact that the KDE and XFCE versions of 3.0 have now hit the site. The Acer Linux Laptop had something like 80 updates to do, and they were largely KDE related (I flip flop between KDE and Gnome on Mint these days: I just can't decide which one I like better!). Since Ubuntu has been a really busy bee lately (new Dell computers with Ubuntu pre-loaded, etc.), I have to think that Mint will be taking another point release pretty soon.....


Apple announced all sorts of new stuff of course: new iMacs, a new iWork that finally has a spreadsheet, and so forth. Nothing that really needs a deep dive here. Looks interesting from an Enterprise point of view though.


Mostly it feels like nothing much really happened out there while I was gone. Must be summer. After two weeks, I expected to have to re-install everything!


BMC of course didn't go on vacation: Whurley and Fred Johannessen posted a new podcast about the new Open Source stuff we are doing, and how it ties in to the newly announced BMC Developer Network: I'll give that a listen! But it is quiet this week: whurley is off stirring up LinuxWorld in SanFran right now.


Whurley can't help but leave behind one thing: his current post announces an upcoming BarCamp for Systems Management!


Somewhat dis-orienting is the IBM T41 I have. It is dying. I know I have beaten that thing, carried it all over the place, and everything, but I just replaced the keyboard, and now the hard drive is about to go. You'd think it was high mileage or something! Clearly going to be a "Rebuilding the T41" experience and post in the near future.


More soon, once I get my feet back on the ground.

Steve Carl

The Secret Linux Agenda

Posted by Steve Carl Aug 22, 2007

Now it can be told for the first time anywhere: the secret agenda of the Linux community is... is... ahhhhhggg! They got me..... It's all going dark.... sinking....



OK. Fine. I lied. There is no Linux agenda. Well. Maybe one: to be the best Operating System the Open Source community can make it be. And even then the BSD and OpenSolaris camps are going to want to voice an opinion....


Here is what got me to thinking about this.


I was doing the work on the two laptops that I posted about over at on-being-open. That post, in a circular posting kind of way, was a followup to my last post here about installing Mint 3.0 on a Dell C400 trash-laptop. That post was in turn a follow-up to... oh never mind. Let's just say I have been on a theme lately.


I realized at some point along the install that the Dell C400, with its Orinoco based TrueMobile 1150 Wifi card, was a better Fedora 7 platform than the IBM X30 with its Atheros based Dlink PCMCIA Wifi card. That in turn had me thinking about the purity... or purism... something... of one Linux Distro over another. Mint 3.0, which was on the Dell C400 hard drive, would work just fine on the IBM. Mint would not care what the Wifi chip was either way. Of the two distros I was working with on the two laptops, only Fedora is persnickety like that. There is a reason why Fedora is that way. More about that in a bit.


I did what I suggested I might in that on-being-open post: I pulled the hard drives out of each laptop, and I switched them.


  • The 40GB Hitachi with Mint 3.0 went from the Dell C400 into the IBM X30. Wifi chip went from Orinoco to Atheros
  • The Samsung 80GB went from the IBM X30 to the Dell C400. Wifi Chip went from Atheros to Orinoco.

Batteries in. Plugs in. Power on. Boot.


That was it. They both came up, they both figured out their new hardware situation. They both reconfigured what they needed to. They both found the home Wifi network, and auto-configured to join, even though they had just switched (among other things) the chip on the Wifi cards. It was brain dead easy. Hardest bit was keeping track of the tiny screws for the disk cradles when the cats were trying to help. It turns out all projects require cats to help, at least according to them.


Take That, Other OSes!


I have developed a rep in some places as having gone over to the dark side. In this case, Apple and OS.X. I have certainly made no secret that I am a fan of many things Apple. I have been informed by those who dislike Apple that just saying that it has chewy BSD goodness at its OS.X core is not enough. Be that as it may, I never even considered an Apple before they made its core something I trusted. Mac OS9 and its predecessors may have been perfectly good OSes, but I never liked them much. I'm pretty old school. With no command prompt to ease my way in, I always was lost on a Classic Mac. One time in the early 1990's it took me over an hour to eject a disk out of a Mac. I ended up taking it apart. Turns out I was supposed to drag the icon of the disk to the trash can icon. I never would have done that, fearing it would delete the data on the disk. But I digress.....


This disk switching is a case where Linux does something that neither Apple nor MS Windows will. Not can: will.


Linux boots in the new hardware because it has no axe to grind. No master to please. No agenda. It can focus instead on trying to do the right thing... which in this case is merely to boot. It does more than boot though. After the two laptops are booted, things like the 3d desktop (Beryl) still work on both platforms. It not only works, it works well. It dealt with the BIOS change, the graphic chip change, the Wifi chip change... all of it. No muss or fuss.


I, as a customer, am having a very nice experience here.


I noted in the X30/C400 post that despite coming from two different vendors, the hardware was similar. Same 1.2 Ghz processor. Same 1 GB of memory. Same 1024x768 screen size. Same general target market: 2002's sub-notebook market.


I guarantee that I could not have done the disk swap thing between the IBM and the Dell with MS Windows XP and had it work. That would have required all sorts of non-fun things, like re-installing the OS, or at least pre-running sysprep to undo the way MS Windows ties itself to the specific hardware. OS.X would not have booted at all of course: it only works on Apple hardware (not counting some serious hackery out there).


It is not even that MS Windows cannot boot on more generalized hardware: when one first installs MS Windows, a generalized version does the installation. There are many recovery disks, like Bart PE, that run generalized MS Windows OS stacks. In fact, I believe Bart PE is based on a version of MS Windows called Windows PE.


The reason MS Windows would not boot in this example is that, once installed, MS Windows does not want to be moved until you re-verify your right to install it there. And that is MS's right. They wrote the EULA. Running MS Win means you agree to the hassle that implies, should you want to change your hardware. Vista is worse in this regard by all reports. MS is not targeting people like me, who do stuff like this, as customers anyway. Not any more. Maybe not since DOS days...


This non-booting without major incantations is not a premium end-hacker-user experience. Not like this hard drive swap of the X30 and the C400, where everything just works.


I have moved hard drives much further afield than these two similar computers, and had the same experience. From an ancient Compaq M300 to an eMachines 5312 to a brand new (at the time) Toshiba to the IBM X30. Now the Dell C400. That 80 GB Samsung drive gets around. :)


Whurley Gets It


If you listen to the videocasts Whurley recently posted, one of the things he and Cote talk about as it relates to Open Source is that it is about support. With Open Source, the customer is always right. Even if the customer had to write the feature themselves. Open Source gives a customer options.


Example: If an Open Source tool does *almost* what a customer needs or wants, they can:


  1. Ask the creator of the code to add the feature.
  2. Commission someone to add the features they really want.
  3. Do the code work themselves, in house.

Since they wanted this new feature badly enough to write it, what are the chances someone else wanted or would benefit from the new feature? Pretty good, I'd say.


The funny thing was that, in most cases, Open Source is not about the customer necessarily wanting the source code. Whurley points out the discrepancies between source and binary downloads of most products as an example. Most downloads are of the binaries.


In my day job in R&D Support, having access to the Linux code has meant having access to the ultimate manual. We have not used it often, but if you look back in the early TalkBMC "Adventures" posts about some of the debugging we were doing with NAS, we were in the source code trying to figure out what the programmed behavior was so that we could have intelligent conversations about it with the developer.


I used to do the same thing with VM on the mainframe, reading the dump and the source code before I reported a problem so that I was sure what I was reporting actually was a problem.


Distro Focus


Another thing I have been thinking about and posting on a fair amount recently: what the focus of various distributions is. Here there are agendas, at least of a sort. Not hidden ones though.


Example: What does Fedora want to be?


I spent a great deal of time with Fedora over the years, and there are things I really like about it, but after using Mint 3.0 for a while I have come to the place where some of the purity really gets old when all I want is a working Linux computer.


I had hoped that Fedora 7, what with its LiveCD and merging of the "Extras" with "Core" and all, was moving more in the direction of Ubuntu and other easy to use Distros. That projects that were "tainted" in the eyes of the Distro would be dealt with in a way similar to Ubuntu and its "Restricted Source Manager". I was disappointed though.


Fedora views its stance about not including certain projects as a good thing. Fedora 7 does not support either the Atheros Wifi cards or the Intel Wifi cards out of the box, but does support the Orinoco based cards, because of the question of Open Source.


Huh? Didn't I just finish saying that Open Source was all about being easy and having nice customer experiences and all? Am I bifurcated?


Two Things Can Be True, Even If They Seem to be the Opposite


In Open Source, both of these statements about customer support are true. It is all about Point of View.


Fedora won't include anything that is not 100% open source, and in the case of the Intel and Atheros drivers, while the drivers are Open Source, the firmware of the cards is not. The firmware consists of vendor provided binaries that the Open Source drivers load when the card is initialized. No source code to the card firmware. The card manufacturers have decided that opening the firmware code would give their competitors too big an advantage. The cards are therefore not 100% Open Source, and Fedora wants vendors to get the message that not being Open means not being included. The Fedora FAQ says what Fedora wants to be when it says, in reference to a closed standard:


"...we'd much rather change the world instead of going along with it."


I find that deeply admirable, and it is one of the reasons I stick with at least one system running it, despite the frustrations of hacking Fedora from time to time to get my Wifi cards going. As an end user, I can not really tell the difference between an Atheros chipped Wifi Card and an Intel one. Whatever market advantage vendors think they derive from having closed source firmware, from the end user point of view, it all looks the same. Wifi is a commodity item. It hooks up laptops and iPhones to Wifi access points. It lets me access the well known series of tubes we call the Interweb. In fact, the real value to me is not anywhere inside the commodity Wifi chip. It is how well the antenna is designed and placed in the case!


Mint does not get to claim such Open Source purity, and instead uses the Ubuntu Restricted Source Manager. It tells you that you have an impure system, but it loads everything up if you tell it to, and away you go. You, the end user, know which vendors are being sticks in the mud, but it does not stop you from getting going.


The core difference is that Ubuntu will supply things that are free and unencumbered, but do not have to be Open Source. This difference is making a big difference to Ubuntu and its kin. Ubuntu is always the top Distro whenever I look, and has twice the download numbers Fedora has.


POV and Pol


Polarization that is.


I am still trying to get my head around some of this.


As I have said here, I take it as axiomatic that open is better than closed (tm).


It can be deduced from the above example of Mint vs. Fedora that there are degrees of being open. Fedora goes for purity of Openness, and is lampooned in some corners because of how hard it is to get going on any hardware that does not match the 100% open criteria. The Dell C400 works *great* with Fedora because every bit of hardware in it has a totally Open Source solution.


Ubuntu and its kin like Mint work far more easily but some criticize them because they have given in to the closed source forces of darkness and evil, and shipped Binary bits.


This is not even a new issue: When IBM started to pull the source code to VM on the mainframe, a huge outcry from the customer base ensued. "OCO is LOCO" was the badge at SHARE. I still have mine.


Fedora and its goals are laudable and I support them. At the same time, when my brother needed a Linux computer, I built him one based on Ubuntu. He would not care one whit about the purity of the Open Source. He just wants Google Earth to run.


These POV issues all show up in discussions about whose Open Source license is better. Whurley is currently pointing in his blog at a poll and panel about that at SXSW.


I know this is not all that politically correct (but then, I rarely am)... but I think Open/open is better than closed. Any open. Any spelling or capitalization.


At the same time, I am always a bit dismayed by the signal to noise ratio of the Internet on things like this. I have said it before, and I say it again: I'm really old. I remember when you could read Netnews newsgroups, and get useful information, and help from a community of like minded people. A time before the noisy, just-like-to-tear-things-down-no-matter-what-they-are types moved in, and destroyed Netnews. The downside of being open on Netnews was needing to have a news client like Pan with a good killfile / filter function.


Spam certainly took (and still takes) advantage of the openness of the email transport of the Internet, reducing the value of email, and in some cases doing real harm.


I see the move to Open Source for anyone doing it as having an issue like this. No matter which license one chooses, someone... maybe many someones... maybe really loud and self righteous someones, will yell to the rafters about how using license X means one is being less than open, or less than perfect. Google said that a guiding principle of their company was to "Not Be Evil", and now everything they do gets the "Is that Evil" yardstick hauled out and yammered about.


That high level of noise is not very useful, and can push some away in disgust. In my opinion only: A company does not announce an Open Source direction lightly. Not because of the business risk, but because no matter what you do, in some corner will be the voice saying "You did not do that right". No matter what you did.


Personal Example


To close this thought and post out: I was talking the other month to someone who was getting ready to open source some code they had written. A very useful tool. Their number one fear: that the code they had written would be savaged by the folks they were giving it away to. Sort of like:


"Hi. Here is this tool. It did this useful thing for me. If you want it, you can have it, and the code to it, in case you would find it useful."


"Oh. My. Ever. Loving. STARS! I can't BELIEVE you gave this away! What a piece of junk! Where did you learn how to code: A fish and tackle shop? Look at this DO loop! Have you ever seen such a thing in your LIFE. And these comments. What language is this?......" On and on.


Some build. Some innovate. Some tear down and destroy.


For everyone like that critic though, there will be those that thank you for taking the time, and being willing to share. One just has to have a mental killfile / filter.


For all its problems, many self-inflicted, I still think Open is better. I'm a glass-half-full type. I also remind myself all the time that despite appearances, Open Source is not a computer religion. It is just a good idea.


And it is... The Secret Linux Agenda

Steve Carl

Minty Dell(icious)

Posted by Steve Carl Aug 16, 2007

Success with one Dell leads to trying another. The "old" Dell also runs Linux like it was made for it. Oh. Wait. Linux is made for almost everything.


Before I get too far into the new Dell experience, a few updates on the last post about the D620.


First: I said in that post that the D620 had a keyboard light, but it did not work under Linux. That was not correct. The D620 has a key sequence to activate a keyboard light. Fn-right arrow. This appears to be a case where the D620 must share a keyboard with some other model. The bit I thought was the non-working light is apparently an ambient light sensor.


Second: I made a statement about the screen being nicer than my Acer 5610. That was not utterly accurate either. The screen is higher resolution, with better Dots Per Inch (DPI), making for nice looking fonts and an easy to read but fairly small screen. But that is not all there is to a screen.


I brought up a picture I took while on vacation. It is of my mountain home, from a distance. Green in the foreground, blue sky. Fluffy clouds. Shadows of the clouds on the ground. Very bright day. Terrific exposure and saturation. On my Apple iMac or MacBook, the picture is amazing. (OK: I admit: I like looking at "my" mountain. The picture may not be all that great) On the Acer, it is pretty nice. On the Dell, it is washed out and low contrast.


This particular D620 LCD panel is fine for email and general business use, but I would not choose this panel if I was going to be spending a great deal of time doing image editing in the GIMP or something along those lines. As a last minute thought, I loaded up the picture on the IBM T41, and I have to say that it is pretty washed out there too. High rez: 1400 x 1050. Just washed out. The T41 is also old, and the cold cathode tube is fading in intensity as time goes by as well. Still: it makes me wonder if laptop vendors target resolution over accurate color rendition for the business market computer. Another thought: I bet that is why Apple has a rep for expensive hardware: they clearly did not cut any corners on their LCD panels. Makes sense, since the Apple is often used in graphic applications: there is a reason creative types like Apples, and now I see it is not just the applications.


Finally (on the D620 front today): another Linuxen here at the office installed Mint 3.0 on their D620. Exactly the same hardware as mine, as near as I can tell. Works great there too, with one exception: he can not move the mouse via the trackpad without highlighting everything on the screen. He is going to try to load up the "gsynaptics" package and see if he can fix that.


One other data point of a more general Dell / Linux laptop nature: according to Issue 52 of the Ubuntu Weekly Newsletter, in "United Kingdom, France and Germany [consumers] can order an Inspiron 6400 notebook or an Inspiron 530N desktop with Ubuntu 7.04 pre-installed".


Dell C400 and Linux Mint 3.0


First off, a bit of background about why I would be on this Dell / Linux Laptop riff lately. Up till recently, I have only ever installed Linux on one Dell, and that was back in the 1990's. It worked fine. The Dell hardware of the day was a real brick. Solid. Worked. Easy to work on. Very very slow by today's standards: 286 or 386 processor. I had a few negative Dell laptop hardware experiences on some of their later 1990's / early 2000's gear that put me off working with them any more. Part of my definition of a good computer is one I can fix when it breaks, and that particular time-series of Dells appeared to be designed to not be serviceable by anyone with less than four hands.


Things change.


I was first intrigued again by Dell when they started to talk about creating Linux supported consumer gear. That always gets my attention. Then, in talking to whurley, he mentioned he liked his new Dell (running Ubuntu I believe). He said in our podcast a while back that the Dell had some features he really liked. He was not specific then, but I started to wonder what those features were.


Then, as I was getting ready for LinuxWorld, I went around booting Knoppix on everything I could get my hands on, and all the Dells I tried from other people at the office all worked fine. The deal was sealed when the D620 of the last post arrived and worked so well. It was time to try Linux on the Dell C400.


C400 Heritage and Specs


I would be remiss if I did not point out that the Dell C400 I have is another of my trash pile specials. It started out life as multiple computers that were in a Star Trek Transporter Accident (tm) and merged to become this one computer. The battery is taped in. The top cover has cracks. The keyboard is small, cramped, and highly polished on the keytops. But the C400 works. It in fact surprised me over time in that it just kept running and running.


What it was running was MS Windows XP though. On a 12 GB, 4200 RPM, extremely noisy hard drive. I have no external CD drive that the C400 will boot from. It looks like there is a connector on the side for some special external drive it was supposed to have. I don't have one.


Back in its day... about 2002 according to the review at PC Magazine, the C400 was considered a nifty ultra-expensive ultra-portable. It was "fast", with a 1.2 GHz mobile Pentium processor. It started with 256 MB RAM.


My trash heap Dell's hard drive was clearly not stock: the review says the base disk was 30 GB. This unit had 12 GB. It booted XP though. Slowly. I had stuffed 1 GB of RAM into the SO-DIMM slots: I had several of the slow PC133 memory sticks lying around from other departed systems.


The single USB port is 1.1, and not bootable. Not that I did not try.


Without bootable media to start it from, the C400 kept running XP not-so-silently in the corner while I watched it to see if it would die because of its rough treatment by the transporter. The D620 working so well with Linux meant I now really wanted to test that C400. The Transporter Chief (err... me) came up with another way to stuff Linux under the covers. First we get a hair from the hair brush, and then we re-program the transporter Bio-filters..... no... wait. That was Dr. Pulaski in season two of ST:TNG.


I Regret That I Have Only One T40 to ...


Another computer in the parts pile is an IBM T40. This one must have had some drink with a lot of sugar spilled into it. It still booted. Its screen was bright and beautiful. I had a 40 GB, 5400 RPM drive in it, and because the T40 has an internal CD, I had Mint 3.0 installed. Problem was, the keyboard was uselessly grunged with sticky junk. And if you wiggle the monitor... or even type very hard, the screen flickers, turns off and on, and does other random goofiness. And because the keyboard is slimed and sticky, you have to pound it to make the keys register correctly. Not usable.




I had that hard drive in there. All built. Ready to go. Doh.


Beam Me Up Linux


If there is anything cool about Linux (and there are so many things that are) it is that it does not really care when you take the hard drive from a computer from IBM and install it in a computer from Dell. I would have preferred the T40 be more usable: the screen and the keyboard are (normally) way way better than the C400's. That was not to be (although I still have the T40 in case I can figure out how to fix the screen flicker thing).


The Mint equipped Dell booted, the screen went bonkers, X died, and I ended up in a really flaky TTY session. Huge fonts. This was not unexpected: the /etc/X11/xorg.conf from the IBM was totally wrong for the Dell. I moved the file aside with "sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak", entered my password, and then rebooted.


X found no config, and built a new one that was correct for the hardware (and boy is that nicer than having to hack it together by hand like it used to be). Screen came up.


Surprise Surprise Surprise ...


(A Star Trek and a Gomer Pyle reference in one post. Double geek points!)


Now the nice stuff started happening. The new hard drive was clearly faster than the old one. What had felt sluggish before under XP now felt fairly fast and crisp. OK. I changed the OS on the hard drive too, I admit.


Mint 3.0 was clearly happy in its new home. It found the Wifi card and configured it, no problem. It turns out that the Dell TrueMobile 1150 card inside the unit has the extremely-well-supported-by-Linux Orinoco chipset, according to "lspci".
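Checking for the chipset is a one-liner. Here is a minimal sketch; the lspci line below is a hypothetical sample (the exact vendor string varies by card revision), used here just to show the grep:

```shell
# Hypothetical lspci output line for an Orinoco-based card; on a real
# machine you would pipe the actual command: lspci | grep -c -i orinoco
sample='02:03.0 Network controller: Lucent Microelectronics Orinoco and Prism II chipset'
# A case-insensitive match means the well-supported orinoco driver applies.
echo "$sample" | grep -c -i orinoco
```

On a match, grep -c prints the number of matching lines, so any non-zero count means the card should just work.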


On screen displays for volume up/down/mute appear. Suspend works, even from the "Fn-F1" keyboard sequence. Mint clearly supports all the hardware features I have thought to test, but given that the wireless card is an Orinoco, so would Fedora. This is very Linux simpatico gear.


So: How long has Dell been thinking about supporting Linux anyway?


An innocent young laptop computer walks through a mysterious office, is set upon, and finds a whole new world opened unto it.


A new computer... A Dell D620 laptop... has become the latest Linux Mint 3.0 computer. It seems to happen every time a computer falls within my reach, I will admit. What was interesting about this particular one was the following:


  • Dell has started shipping Linux computers, as has Lenovo. Dells are Ubuntu, and Lenovos are Novell / SUSE.

  • Dell recently expanded the list of supported hardware, but the D620 was not on their list.

  • Mint 3.0, an Ubuntu 7.04 derivative, runs like a scalded dog on the D620. A sign of things to come from Dell? One can hope.


If you hop over to the Dell / Ubuntu website, you'll see the current Ubuntu notebook (depending on when you are reading this, to be sure) is the 1420 N. Not the D620. Despite this, the D620 is working, and very well. The volume buttons (little special buttons above the keypad) work, and there is an on-screen display of their status. The screen backlight keys work, but no onscreen status appears for them. The keyboard light (hey: there is a keyboard light! How Thinkpad!) does not activate. Not yet anyway. BIOS thing? Does it work under XP? No idea. Future thing to check.


The Intel PRO/Wireless 3945ABG works without issue, and at full G speed.


The Synaptics touchpad config utility had to be loaded, via the Synaptic package manager. No, that is not stuttering. And then I had to pop a line about "SHMConfig" into /etc/X11/xorg.conf:


Section "InputDevice"
        Identifier      "Synaptics Touchpad"
        Driver          "synaptics"
        Option          "SendCoreEvents"        "true"
        Option          "Device"                "/dev/psaux"
        Option          "Protocol"              "auto-dev"
        Option          "HorizScrollDelta"      "0"
        Option          "SHMConfig"             "true"
EndSection

When I make changes to the mouse acceleration speed in Gnome, it only affects the wiggle stick: the touchpad needs a separate control program (gsynaptics under Gnome, ksynaptics under KDE), and there is no acceleration setting for it even then. It is kind of funny that the D620 has both pointing devices. Easier than two SKUs, I guess. I can turn off the touchpad altogether and just use the wiggle stick if I like.


The 1440 x 900 screen is pretty and bright, and the Intel 945 graphics card controlling it was automatically detected and configured, because Mint includes the "915resolution" package by default. Last time I checked Ubuntu, I had to load that manually. Given the prevalence of these Intel cards on "low end" (non-high-end-graphics, no games) laptops, it seems like 915resolution should be included by all distros.


This particular D620 unit has two gigabytes of RAM, an 80 GB hard drive set to dual boot MS Win XP (emergencies only!), and a Core 2 Duo T7200 2 GHz processor good for 7984 BogoMIPS. It is fast, but also runs cool: 48C most of the time according to LMSensors. Only one thermal zone found in the ACPI. HDDTEMP (loaded with Synaptic) says that the hard drive is only 40C. Both of these are after the computer has been up for over four hours, and been used to write this for over an hour.


I did boot over to XP first, configured the hardware, let the corporate image do whatever it does upon booting, and left it for a while so all the Marimba patches, scans, certifications, inventories, and whatnot could be done. It was fast under XP, and ran cool there as well. I assume that the correct drivers for XP were all installed to make that work. I have certainly seen other computers (like my wife's Toshiba M45 or my old eMachines 5312) that run way hotter under MS Windows than Linux. MS Windows playtime over, I grabbed Mint 3.0, and did the LiveCD boot. I partitioned the disk manually, in the layout that is pretty typical for the dual boot laptops I build. Four partitions:


Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1824    14651248+   7  HPFS/NTFS
/dev/sda2            1825        3040     9767520   83  Linux
/dev/sda3            3041        3283     1951897+  82  Linux swap / Solaris
/dev/sda4            3284        9729    51777495   83  Linux
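The fdisk numbers check out with a little shell arithmetic, taking the cylinder size straight from the geometry line above:

```shell
# Bytes per cylinder: 255 heads * 63 sectors/track * 512 bytes/sector
echo $(( 255 * 63 * 512 ))
# /dev/sda1 spans cylinders 1-1824; fdisk reports sizes in 1K blocks,
# so 1824 cylinders is about 14.65 million blocks (~14 GB for XP).
echo $(( 1824 * 8225280 / 1024 ))
```

The listing shows 14651248+ rather than 14651280 because the first track (63 sectors, 31.5 KB) is reserved for the boot record, and the "+" marks the resulting fractional block.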

Mint sets up grub so that everything boots OK. All auto-magic.


In fact, setting up Mint 3.0 on the D620 has so far mirrored the experience of setting it up on my main personal Linux computer, the Acer 5610, except the Acer occasionally boots Vista off /dev/sda1, not XP. Even though I literally just received the D620, it is not on the Dell web site any more. It has been replaced by the D630. There are differences between the Dell and the Acer that are odd: the Acer has a Flash memory card reader, which does not appear to even be an option on the Dell: I guess business class computers only read USB flash fobs... The Dell has four USB ports though, so that is not too bad. The Acer screen resolution is not as good (1280 x 800 versus 1440 x 900), but given the price difference between the computers, I would expect the Dell to have the better screen.


Linux runs great. Everything important seems to work. Mint provides all the bits to make this a business class Linux computer: Evolution and OpenOffice are available or installed. So far, so good. Dell may not list this as a Linux supported computer, but I see no reason they shouldn't.


It Was Not Always This Way, Except When It Was


I used to have a Dell laptop, way way back. It was built like a brick, lasted forever, and even back in the 1990's I ran Linux on it. But then, in my opinion, the quality of the Dell laptops lapsed. My father-in-law sent me one to fix that was from four or five years *after* the first one I had, and I took it apart and decided it was a total loss. I sent him back an eBay-purchased, hand-assembled Compaq M300 with his data migrated from the Dell. That generation of Dell laptop was a nightmare to work on, and the build quality was just not happy making. I would have been supporting that thing every two months from then on. Assuming I could get it back together at all (and at the time, I was rebuilding PowerPC based Apple iBooks: no small feat). I took apart a few other slightly later Dells we had in the trash pile at the office, and assembled some working ones out of those, but they were still awful to work on, and, to use a technical term, persnickety.


I bring all this up not to say bad things about Dell but to say a good one: the D620 is head and shoulders better built than my father-in-law's. It is great to see that Dell has heard the voice of the customer, and in so many different ways. Ubuntu support. Quality improvements. Linux compatible hardware even in the non-officially-supported gear.


Parts is Parts


It is also worth noting that all computer vendors use the same parts, manufacturer to manufacturer. Hard drives from any given vendor can be found all over the place, for example. In Dell, Lenovo, Acer, Toshiba, and Apple machines you might find the same 120 GB hard drive from, say, Samsung, or Toshiba, or Seagate. The recent flammable battery issue with Dells was hardly limited to Dell: they just happened to have bought, as a percentage, more of the bad batteries from the vendor than others did.


Hard drives are a particular issue since they are complex, tiny, close-tolerance devices with moving parts, and in laptops they get schlepped about. Sometimes what is different is how the vendor engineers all the bits around the common parts: are there shock cradles and adequate cooling and the like? The same part may work well in one machine, and fail often in a different one.


I have two reasons for bringing all that up: one is that I will be interested to see how well the D620 stands the test of time. The other is to mention that I recently had to rebuild my Apple MacBook Pro, because the hard drive had an issue. I'll have a post up soon about that over at [Hello from the future: post is now up! Kind of like surf's up, only indoors and only involving computers.]


In the meantime, it is the commonality of the parts stack that makes it so companies like Dell can assemble computers that run Linux well. By choosing parts that Linux already has support for, like the Intel WiFi card, they make it easy to run Linux on their gear. Obviously, I have not tried this with every single possible Dell computer. There may be some for which this is not the case. But it *is* the case for this D620, newest member of the Linux laptop clan here at BMC.

Steve Carl

Vacation, Week Two

Posted by Steve Carl Aug 2, 2007

Back to work next week. Meantime, a few observations about the BMC open world from this (vacation) side of it.


In a way it has been very frustrating to be on vacation this particular set of weeks. I know I have to go on vacation sometimes (I am sure you can feel my pain), but being out here in the "real" world while BMC has been busy announcing things in the open source world has been a little frustrating. You might guess I have things to say about it, since "Adventures" has been in large part about open-ness, Open Source, and other related topics.


BMC started dipping their toes in the Open world at the turn of the new millennium, releasing internal projects like CMSFS (which allows Linux under VM on the mainframe to read the CMS file system), and other cool stuff. That was only the start. TalkBMC is all about being open. BMCDN takes that to a whole new level.


I have been talking here for nearly two years about the things that my team has done with Linux and other Open Source projects like Samba. But that is all in the nature of what we have been leveraging. Coming soon I'll be talking about how my team will be giving something back, via the BMC Developers Network.


It is odd to be looking forward to coming back from vacation, but I can't wait to start this new phase of being open.
