
Steve Carl

Virtually Greener

Posted by Steve Carl Jan 31, 2008


While Virtualization has received all sorts of attention, and more than its fair share of hype, there are real savings to be had with it, and not just of money.

 

This is an update to "Real World Virtualization" from June 15th, 2007.

 

It would not be hyperbole to say that virtualization gets a great deal of press. A huge amount. Volumes and volumes. As a VM system programmer I find most of it slightly amusing: it ranges from flat-out wrong to claiming to cure all the ills of the data center, and cancer besides.

 

What is real is that Virtualization of X86 hardware can save a company a great deal of money, and even better these days, a great deal of power. I already ran the numbers in "Real World Virt" so I am not going to beat that to death. Today I just want to report a real world result:

 

100 KVA

 

 

Since June of last year, that is how much power we have dropped off the primary R&D Central UPS by decommissioning servers. Real servers. We do *not* have fewer OS images running around here. Quite the opposite: we have more. What we have fewer of is 1993-1995 vintage computers: slower machines with older, less efficient power supplies that we were using for things like synthetic workload generation and various other non-benchmark, non-device-driver-related applications. Literally hundreds of computers have left the building.

 

Not all of these were X86 either. Some were IBM Power series gear, where the new P5 generation hardware has allowed us to begin virtualizing more recent AIX images as well.

 

100 KVA cannot easily be turned back into an exact number of watts without knowing what kind of power supply each decommissioned computer had. A power factor of 0.8 is probably a good round number though, strictly based on past experience with this mix of gear. That means 80 kilowatts, or 80,000 watts, have "left the building". An 80 KW reduction is 160 pounds of CO2 reduction each and every hour those machines are off (assuming coal as the power feedstock, at roughly two pounds of CO2 per kilowatt-hour). That is 3,840 pounds per day, or 1,401,600 pounds per year. Halve those numbers for natural gas as the generation feedstock, but even then, that is a very serious and very real reduction in the amount of CO2 being added to the atmosphere by R&D activities at BMC. And this is just Houston: we have been doing the same thing at our other R&D locations across the globe.
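For anyone who wants to check the arithmetic, here it is as a small script. The 0.8 power factor and the two pounds of CO2 per kilowatt-hour for coal are the rough rules of thumb used above, not measured values:

```python
# Back-of-the-envelope CO2 savings from decommissioning 100 KVA of load.
# Assumptions (rules of thumb, not measurements):
#   power factor ~0.8, coal ~2 lb CO2 per kWh, natural gas ~half that.

KVA_REMOVED = 100
POWER_FACTOR = 0.8          # typical for this vintage of power supply
LB_CO2_PER_KWH_COAL = 2.0   # rough figure for coal-fired generation

kw = KVA_REMOVED * POWER_FACTOR                 # 80 kW
lb_per_hour = kw * LB_CO2_PER_KWH_COAL          # 160 lb/hr
lb_per_day = lb_per_hour * 24                   # 3,840 lb/day
lb_per_year = lb_per_day * 365                  # 1,401,600 lb/yr

print(f"{kw:.0f} kW -> {lb_per_hour:.0f} lb/hr, "
      f"{lb_per_day:,.0f} lb/day, {lb_per_year:,.0f} lb/yr (coal)")
print(f"{lb_per_year / 2:,.0f} lb/yr if the feedstock is natural gas")
```

The yearly figure assumes the load stays off around the clock, which decommissioned hardware does.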

 

This of course will also save BMC money, and it is funny how things like that work sometimes: In the push to become more efficient at one thing, the "Law of Unintended Consequences" occasionally has a bright side to it. This one works either way too: Push for cost and space savings, do something good for the whole world. Try to do something good for the Earth (or at least the life forms that live on it), save some money along the way.

 

PS: Other than P series, I did not mention the technology stack we used for virtualization here. Mostly that is because it does not matter: you can do this same thing with Virtual Iron, VMware, Xen, etc. The specific virtualization technology matters less than just doing it. I have a post coming up with more specifics about our setup.

Steve Carl

One Week Later

Posted by Steve Carl Jan 25, 2008


Quickie update on the Kernel Hackage post

 

I just wanted to get a quick update out before the weekend started on how the hacked CentOS NAS server is doing. See http://talk.bmc.com/blogs/blog-carl/steve-carl/Kernel-hackage for a complete description of the problem.

 

The patient is fine, and is resting comfortably. It does not appear to have suffered for our addition of code. HP-UX clients are burbling along happily, as are all the bajillion others.

 

A postmortem of how this slipped through showed that HP-UX clients running Connectathon ( http://www.connectathon.org/ ) work just fine against the unpatched CentOS. Something about Connectathon does not use the same code path to create a file that the regular HP-UX utilities do. Since we do not have source to HP-UX, we cannot even begin to guess what that is. Bottom line: Connectathon certification of a new NAS server is no longer sufficient; we will need to do some manual certification too.

 

The good news is that now it acts just like it should: quiet, no trouble, behaving like a cluster should and failing things over when it ought to. We'll let it cruise along a bit longer and then migrate a few more things to it.

Steve Carl

Kernel Hackage

Posted by Steve Carl Jan 22, 2008


The Chief R&D Support NAS Basher takes a deep dive into kernel code to fix our CentOS cluster for HP-UX clients

 

Since I last posted here about our CentOS NAS cluster, we have been in the weeds. Our hopes for Linux being able to deal with this Enterprise-class level of support have been shaken *and* stirred. I will let Dan Goetzman tell the story in a sec, but first some background since my last post.

 

When we first released the CentOS server, it was not into full "move everything from the Tru64 server" production mode. We were more cautious than that. The Tru64 file server, despite being out of support and now running on hardware with no support contract, was still not causing any problems. Not any *new* ones anyway. So we migrated our group's home directories first, and then a few "lower availability required" file systems, and then sat back and evaluated.

 

At first it looked like we would go ahead and live with the Sun NFSV2 Stale Handle problem (noted in my first post), but then a raft of patches to the kernel came out, and quite a number of them hit areas of the kernel that were of interest to us, specifically NFS and GFS.

 

Dan and I talked about it, and decided to try the new kernel. That meant re-certification, so we decided to try it on the test hardware first. Immediately Dan found a problem with HP-UX clients, and it was *deadly*. Worse, we found out the old server had this problem too! We had not actually tested the entire mix of possible HP-UX clients.

  The HP-UX Problem

 

UNIX and Linux have the concept of permission bits that define the read and write ability of a file. If I own file 'xyz' and the write bits are turned off, I cannot write to the file even though I own it. I can use the 'chmod' command to turn the write bit on, and then I can write to it.

 

The funny thing about that is that by using the chmod command, I am technically writing to the file, or actually to the inode of the file. That means there is a bit of code someplace that makes sure I own the file and am allowed to do it.
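That owner-can-always-chmod behavior is easy to demonstrate from user space. Here is a minimal illustrative script (my own example, not from the bug itself); note that root bypasses the permission check, so the write-denied part only holds for an unprivileged user:

```python
import os
import stat
import tempfile

# Create a file we own, then turn its write bits off.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o444)  # read-only, even for the owner

mode = stat.S_IMODE(os.stat(path).st_mode)
print(f"mode is now {oct(mode)}")

if os.geteuid() != 0:  # root would sail right past the check
    try:
        open(path, "w")
        print("write succeeded (unexpected for non-root)")
    except PermissionError:
        print("write denied: the write bit is off, even though we own it")

# But chmod itself still works, because we own the inode.
os.chmod(path, 0o644)
with open(path, "w") as f:
    f.write("writable again\n")
print("chmod by the owner succeeded; the file is writable again")
os.unlink(path)
```

This is exactly the distinction the bug turned on: writing file *data* is governed by the permission bits, while changing the *attributes* is allowed for the owner regardless.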

 

With GFS, and *only* GFS, as the backing store, and HP-UX, and only certain versions of HP-UX, as the client, accessing via NFS, we went down a code path where HP-UX would create a file, and then get rejected when it tried to write to it.

 

Dan's initial look at this came back with the theory that the GFS team had begun to use certain generic kernel file system semantics, and that other file systems like EXT3 and XFS had not.

 

This was a show-stopping problem. Our environment is far too heterogeneous to work with a gaping hole like this. We talked about it some more. Dan's research had found only one other post about this issue, meaning we were out someplace in the code that very few had followed into. That was going to mean that the "Many Eyes, Shallow Bugs" leverage of Open Source was not working to our advantage.

 

Having the source code meant that we could see if this was something we could fix, but Dan told me at least three times that he was not a kernel guy, and that he was not even sure what the POSIX-compliant behavior should be. He decided to take a swing at it anyway. I turn it over to him here:

  Dan's Kernel Story

 

I finally have "hacked together" a fix for the "HP-UX NFS client on an el5-based NFS server with GFS filesystems" problem!

After adding a bunch of printk's to the kernel and many kernel builds, I was able to trace down the kernel function that was at the root of our problem. It seems that NFSD calls vfs_create (and that returns OK) and then calls nfsd_setattr to set the file attributes correctly. nfsd_setattr does a few things and ends up calling notify_change, which down the road a bit more ends up calling gfs_setattr, and then farther down the path ends up calling generic_permission (a regular kernel routine).
  
It's this generic_permission call that returns -EACCES, apparently due to the fact that the file was created with the correct owner but with NO access permissions in the case of the HP-UX NFS client. Interestingly, this generic_permission call is supposed to replace the gfs_permission call that was the way it was done in the pre-2.6.10 days. Apparently GFS is the only filesystem (as of the el5 vintage) that has made this change. ext3 does not yet call generic_permission. I found patches to make this change to XFS, but a trace of XFS on el5 reveals it does not call generic_permission at this time. So, that's why it only fails on GFS on CentOS 5!
  
Not really wanting to change a kernel function that other things might call, I elected to change where the nfsd layer in the kernel gets the error returned (by notify_change). My hack simply checks if notify_change returns error=-EACCES, and then IF (NFS uid == inode uid) resets the error var to 0. That is, if the owner of the inode is the same as the calling owner uid, then allow access. I added a printk at the kernel.debug level so I can see this via syslog if I have the kernel.debug level set to log, to verify it works.
  
Initial tests indicate success. I have all 3 nodes of the cluster up on this "BMCFIX" kernel now. HP-UX NFS clients seem to work AOK now.

I posted this to the bugs.centos.org case, to have the experts look at how to provide a more permanent fix, as I am not really up on things like POSIX compliance and all. This is just a hack to prove that I am on the correct path and that it will resolve the problem.
  
Anyhow, here is the patch, if you are interested in the exact code that was added to the nfsd_setattr function in fs/nfsd/vfs.c:

 

 

--- vfs.c_save  2008-01-18 13:06:50.000000000 -0600
+++ vfs.c       2008-01-18 13:18:40.000000000 -0600
@@ -348,6 +348,11 @@
         if (!check_guard || guardtime == inode->i_ctime.tv_sec) {
                 fh_lock(fhp);
                 err = notify_change(dentry, iap);
+                /* Allow access override if owner for HP-UX NFS client bug on GFS */
+                if (err == -EACCES && (current->fsuid == inode->i_uid)) {
+                        printk(KERN_DEBUG "nfsd_setattr: Bug detected! Ignoring -EACCES error for owner\n");
+                        err = 0;
+                }
                 err = nfserrno(err);
                 fh_unlock(fhp);

  Dan's bug number is at http://bugs.centos.org and is 2583.

  We Admit: It's a Hack

 

I post this all here in the spirit of openness, should anyone follow us out here to the bare edge of GFS-based NAS servers. We do not know what the right way to really fix this problem would be, but we looked at it as being like my example above: owning a file is implicit authority to at least write to its inode.

 

Dan stood this code up four days ago, and so far, so good. In fact, we know it is doing what we want in terms of being a cluster, because a "network burp" caused the NFS service to migrate from one node to another. We only knew about it because we saw it in the log. The customer-facing service kept right on running.

  Open Source

  This problem illustrates all sorts of things about the advantages and disadvantages of Open Source, all wrapped into one neat bug number.

  1. By having the source code, and a guy good enough to read and understand it, we were able to fix a severe problem in-house, without relying on anyone else.
  2. Because we were on the bleeding edge where very few folks appear to be, we were on our own. The "Many Eyes, Shallow Bugs" principle does not work when there are not many sets of eyes looking at all the possible cases.
  3. Linux is great for a heterogeneous environment, as long as one is willing to put in the time and effort sometimes. Along the way of shooting this bug, Dan was laughing about some of the code comments about all the other patches in Linux that deal with various corner cases for things like Irix and other more obscure combinations of problems. It is easy to see why the embedded market loves Linux.
  4. By choosing CentOS, we chose not having a support option, but one way out of this would be to use the equivalent version of Red Hat and take out a support contract. That back-door possibility was part of the attraction of CentOS.
  5. By tripping over this now, and documenting it, we have hopefully made life easier for whomever comes this way next: Dan notes that XFS is getting ready to start using the kernel-provided file system semantics, so they would have seen this next.
      
Steve Carl

Two Heads

Posted by Steve Carl Jan 17, 2008


A sequel of sorts to PCLinuxOS 2007 and Mint 4.0: ELDs?

 

I have read over and over that having two screens on a single computer increases productivity by 20-30%. Linux has had dual monitor support via X for a while. What with one thing and another, the only computers I have ever actually set up dual head support on before now were my laptops. While I liked it a great deal, especially for presentations, I have never had it at my desk on the machine where it would make the most sense: my Linux desktop.

 

Ubuntu 7.10, and by extension Mint 4.0, has a new monitor management widget with dual head support, so that direct editing of the /etc/X11/xorg.conf file is no longer required. I know there are serious geek points being lost here by admitting that I prefer not to edit xorg.conf directly unless I have to, but so be it. Fedora has had a dual head setup GUI widget for a long time, so all I can say is "About time there, Ubuntu".

 

The new Dell 745 running Mint 4.0 just needed a dual head card to make it happen. It has a PCI-E video card, and an on-board video card, but the BIOS won't let both be active at the same time. Doh.

 

Some quick research found a cheap one with the Nvidia GeForce 7300 GS chipset. This PCI-E card has a VGA port and a DVI port with a VGA adapter. Both the flat panels I have are VGA: Dell 197 FP and Dell 172FP. A 19 and a 17 inch panel, both 4:3 ratio. The 197 is whiter on the panel backlight, probably because it is newer.

 

I pulled the ATI-based card out of the Dell 745, installed the nVidia-based card, and booted. Linux Mint immediately told me it was in low resolution mode, and asked did I want to do anything about that. I clicked "yes", and we went into the display setup do-dad. It saw I had two heads on the box, let me configure them, and then continued the boot.

 

Once up, I had a weird desktop. It was in Xinerama mode, and the Dell 197FP was in 1024x768 mode, but the pan mode was 1280x1024, so the desktop was all slippy-slidy, panning all over the place when I moved the mouse. The 172FP was stuck in 1024x768 mode even though it can do 1280x1024, so once I 'slid' off the left panel onto the right-hand one it would work right. Generally odd.

 

Clicking on the restricted drivers controller and enabling the nVidia drivers did not change this. Going into the System / Administration / Screens and Graphics applet (I have the Gnome menus enabled, not the SLAB-looking thing), I poked around with various settings before figuring out a few things. No matter what I did in there, though, I could not get the screens set up the way I wanted them to be. I could not get the 172FP out of 1024x768 mode. It worked, but the panel was doing some funny things to the fonts to make them look OK. Sort of smooth but blurry.

 

I had read on the Mint wiki that the Envy app had been maintained in the Mint distro because it had more and better control over the screens than the current Ubuntu application did. I went to Applications / Systems Tools / Envy and fired that up. It asked if I wanted to install the nVidia drivers. I thought I had them on via the restricted source manager, but decided that there would be no harm if I said yes.

 

There was no harm, but I did not expect what happened. Envy pulled off the current nVidia packages, downloaded 76 new packages, re-compiled the driver, and finally launched the nVidia configuration applet.

 

I now had far better control of the setup, but it took a bit more tweaking to get it to do what I wanted. First off, I had to tell the applet that the right-hand head was positioned relative to, and to the right of, the left-hand head. And I had to be in Xinerama mode to allow things I clicked on in email to launch in the browser started on the right-hand head; otherwise the heads were in separate X sessions, isolated from each other.
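For anyone editing by hand instead of using the applet, that arrangement boils down to an /etc/X11/xorg.conf along these lines. This is a sketch from memory, not our literal config; the identifier names are made up, and the exact options the nVidia tool writes out will differ:

```
Section "ServerLayout"
    Identifier  "DualHead"
    Screen   0  "LeftScreen"  0 0
    # Place the second head relative to the first
    Screen   1  "RightScreen" RightOf "LeftScreen"
    Option      "Xinerama" "on"   # one desktop spanning both heads
EndSection

Section "Screen"
    Identifier  "LeftScreen"
    Device      "nvidia0"
    Monitor     "Dell197FP"
    DefaultDepth 24
    SubSection "Display"
        Modes "1280x1024"
    EndSubSection
EndSection
```

With "Xinerama" off, each Screen section becomes its own isolated X session, which is exactly the click-in-email, launch-in-browser problem described above.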

 

VMware Server started and ran just fine on the new setup, and now I could make a full-screen guest appear on the right-hand screen and work with it while the left-hand screen stayed talking to the host OS. Very cool.

 

One last oddity: I had to enable 'root' mode to be able to use the nVidia screen management widget. Running it as me generated error messages about not being able to write to /etc/X11/xorg.conf. It came with no su- or sudo-style wrapper. It was not hard, and I probably could have dug up the name of the binary and launched it with sudo to achieve the same thing.

 

So, in the few minutes I have had as a two-headed Linux person, am I 30% more productive? I would say yes. The screen real estate allowing me to refer to things and "highlight and paste" from browser to email alone has made it worth it. The down side: my PCLOS box now has no monitor on it. Guess I'll have to dig up a CRT for it, at least till I configure a VNC server.

Steve Carl

BarCampESM

Posted by Steve Carl Jan 17, 2008
Share: |


Join us January 18th and 19th in Austin TX for BarCampESM!

 

If you happen to be able to get over to Austin, TX, Friday or Saturday, and you are interested in open and informal discussions around Open Management, Enterprise Systems Management, ITIL, or BSM, or if you just like hangin' with other geeks, then drop by J. Blacks on West 6th Street in downtown Austin. Here is the Wiki:

 

http://barcamp.org/BarCampESM

 

The first 50 folks signed up there get free food on Saturday. After that first 50 you have to pay for the food, but not for the company.

 

The details, including a link to Google Maps, are on the Wiki. I hope to see some of you there!

Steve Carl

YAB

Posted by Steve Carl Jan 17, 2008


I must be crazy. Yet Another Blog: "On Being Open", at the Open Management Consortium's new website.

<note from 2010: Posted for historical completeness only. The OMC website is no more>

 

Back when I only had this blog, I posted here at least twice a week, except on vacations and whatnot. That activity all happened at nights and on weekends, and when my family would see me bent over the laptop typing away, they would say "Writing another Blog?"

 

Last year I added a second blog over at blogger. The URL is http://on-being-open.blogspot.com/, but the name of that blog is "Adventures in Linux and Open Source". It was meant to be an extension of what I do here, but with a personal use slant. Of course these things have a way of taking on a life of their own, and I, like here, have strayed from time to time from my central theme.

 

Now comes a new phase of my Open Source life: beginning to work with the Open Management Consortium. Which meant a new blog on that new website:

 

http://beta.openmanagement.org/blogs/stevecarl/

 

The "beta" in the URL is because this is a newly assembled web page; I'll have to update this when it goes GA.

 

The title of this blog was suggested by the URL of my personal one, and is called "On Being Open".

 

Now my family does not ask me if I am working on a blog, but "Which Blog is This?" [sung to the tune of "What Child is This". I have a musical family.]

 

PS: my newest post at the OMC went up last night, and it is "The VW Beetle Principle"



Fresh installs on nearly fresh computers: A new Enterprise Linux Desktop Adventure

 

Featuring an old desktop Linux guy...

 

Happy New Year!

 

Since I last posted anything here, having spent three weeks on vacation in Far West Texas, some things have changed on the Enterprise Linux Desktop (ELD) in my office.

 

First off, a new Dell Optiplex 745 appeared on my desk. It had never booted anything before it booted my Mint 4.0 LiveCD. I messed around for a bit with the LiveCD, surfing and editing and generally making sure it looked like Mint liked this new hardware. Once that checked out OK, Mint 4.0 went down onto the hard drive, replacing whatever OS was on there before. Pretty sure it was not Linux, in any case. Whatever it was, it is not there now:

 

/dev/sda1   *           1        1216     9767488+  83  Linux
/dev/sda2            1217        1459     1951897+  82  Linux swap / Solaris
/dev/sda3            1460       19452   144528772+  83  Linux

 

This is my normal "/" separated from "/home" config.
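For the curious, that partition layout maps to an /etc/fstab along these lines. This is a sketch, not my literal file; the ext3 filesystem type is an assumption (era-typical for Mint), and the mount options are the usual defaults:

```
# <device>    <mount point>  <type>  <options>                   <dump> <pass>
/dev/sda1     /              ext3    defaults,errors=remount-ro  0      1
/dev/sda3     /home          ext3    defaults                    0      2
/dev/sda2     none           swap    sw                          0      0
```

Keeping /home on its own partition is what makes distro-hopping painless: a reinstall reformats /dev/sda1 and leaves the data partition alone.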

 

The Optiplex 745 replaced a Precision 340. The 340 was running Mint 3.1. The new gear is better in every possible way: Dual Core. More memory, etc. It lines up like this:

 

 


             New Optiplex 745                                 Old Precision 340
CPU          Intel 6300 Dual Core, 1.87 Ghz, 7445 BogoMIPS    Intel Pentium 4, 2.0 Ghz, 3991 BogoMIPS
RAM          2 Gigabytes                                      1.25 Gigabytes
Disk         Seagate, 8 MB cache, 160 GB, SATA (/dev/sda)     WD, 2 MB cache, 80 GB, ATA (/dev/hda)
Video        fglrx driver, RV516, ATI X1300/X1550             ATI Rage 128
Network      NetXtreme BCM5754 Gigabit Ethernet PCI Express   3com 3c905c, 100 Mb

 

(Updated 1/14/2008 to fix the hard drive spec swap. Thanks David!)

You'd be tempted, based on that specification lineup, to think that the new system is twice as fast as the old one, and you'd be correct. Mostly. The dual cores make it so a single runaway thread is easy to cancel and recover from, and of course Linux is beautifully SMP these days, so it feels 2x as fast. But there is more than meets the eye, or at least more than meets the BogoMIPS, here. BogoMIPS aren't called that for nothing. Core processors are far better at out-of-order instruction execution and predictive pipelining, and the Intel 6300 has VT, so VMware Server is better at guest hosting on the new system than on the old.

 

None of that shows up in a BogoMIPS rating. Bogo is short for Bogus. That is a good name. It is not that a BogoMIPS rating is useless. It just has to be kept in perspective.
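BogoMIPS is just the number the kernel computes at boot while calibrating its delay loop, and you can read it back per CPU from /proc/cpuinfo. A small sketch of doing that; the sample text below is illustrative (made-up per-core values in the ballpark of the 745's total), since the exact format varies by architecture:

```python
import re

def parse_bogomips(cpuinfo_text):
    """Return a list of per-CPU BogoMIPS values from /proc/cpuinfo text."""
    return [float(m) for m in
            re.findall(r"^bogomips\s*:\s*([\d.]+)", cpuinfo_text,
                       flags=re.IGNORECASE | re.MULTILINE)]

# Illustrative sample in the shape of a dual-core x86 /proc/cpuinfo:
sample = """\
processor : 0
bogomips  : 3722.51
processor : 1
bogomips  : 3722.51
"""

per_cpu = parse_bogomips(sample)
print(f"{len(per_cpu)} CPUs, {sum(per_cpu):.0f} total BogoMIPS")
# On a real Linux box: parse_bogomips(open('/proc/cpuinfo').read())
```

Note that the kernel reports one value per core, so the "total" for an SMP box is just the sum, which is exactly why it says nothing about single-thread speed.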

 

The hard drives in each computer are both 7200 RPM units, and there is just one arm, so anything that goes I/O intensive is not seeing 2x. SATA w/ 8MB cache is better than ATA with 2MB cache, but not that much better.

 

Random thought of the day: Anyone recall when IBM called these things "Hardfiles"?

 

Mint 4.0 Install

 

Mint 4.0 installed without any issues on the 745. Compiz enabled. The newer graphics card handles the Compiz effects without any apparent strain. Of course, I keep most of them turned off except for things like window preview and other functional or informational effects. Be that as it may, the 340 would not run Compiz on its Rage 128 card, not in any reduced mode that I tried anyway. Evolution, OpenOffice, everything just snaps along. OO 2.3 launches in particular take about 1 second. Wow.

 

Evolution 2.12.1 has not had any issues at all. It blazes along, and very clearly benefits from the underlying speedy hardware.

 

One interesting thing is that the Dell E197FP LCD flat panel is supported much better. I have no idea exactly what made it better, but the OS detects and sets up the panel in /etc/X11/xorg.conf without any interference on my part. The fonts are nicer looking, better anti-aliased, and the overall effect is that the entire screen is much bigger and more useful than it was before. I changed out both the OS and the computer, and only the flat panel is the same, so there is no way to know what fixed this. More on this later in this program.

 

In late-breaking news about video: I put a new version of Compiz on, and now the 3D desktop does not work anymore. I don't actually care that much, since there are few things in there I really use other than the Exposé-like feature and the window preview, but this is a bummer from the point of view of stability. In a real ELD, of course, this Compiz package change would have been tested by the desktop support folks before being certified to roll out to the environment.

 

A new PCLinuxOS 2007 system

 

With Mint 4.0 happily spinning on the 745, and my Evolution email and other Enterprise desktop stuff brought over, it was time to decide what to do with the 340. It is not a bad little box. Sure, it would not run Vista well or anything, but then, it is over three years old. Vista is not a valid benchmark of computer / OS viability. Linux runs well on way less than this Dell 340 box, especially with the 1.25 GB RAM in place.

 

I gave some thought to Ubuntu Server, just to see it in action... and to see what its GFS code looks like. GFS has been a real pain on the CentOS cluster, and we have been giving serious thought to what to do about it. More on that in a soon-to-happen post.

 

I decided to load back up PCLinuxOS 2007. I had never run it on a desktop class system, only laptops. On old grungy laptops it felt really crisp, so I wondered how it would do on this fairly good spec 340.

 

It runs crisply.

 

Installing it was not the pain-free Mint 4.0 experience, though. Nothing horrible; just two issues.

 

  1. GRUB does not work
  2. The default video settings were a slight pain.

GRUB / LILO and the Dell 340

I fixed GRUB by re-installing PCLinuxOS and selecting LILO as the boot loader. I messed around with GRUB for a bit first, and finally determined that for some reason GRUB could not see /dev/hda, which it thinks of as hd0. Well, it should have thought that. But it didn't. It would boot the MBR, then fall into the GRUB prompt, and nothing manual I tried would load the kernel. I booted back to the LiveCD and used chroot to mess around with it for a while, and then decided I was having a bad GRUB / PC BIOS interaction.
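For reference, "manual" at the GRUB legacy prompt means something along these lines; the kernel and initrd file names here are made-up placeholders, and on a healthy setup each command echoes back what it found on the disk:

```
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> kernel /boot/vmlinuz-2.6.18 root=/dev/hda1 ro
grub> initrd /boot/initrd-2.6.18.img
grub> boot
```

On the 340, the first command was the dead end: GRUB could not see the disk at all, so there was nothing to point root at.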

 

A light went on about something Mint 3.0 / 3.1 had been doing on the 340. It would not boot cleanly on the 340 either, but it was failing in a different way: Mint's boot would fail FSCK on /dev/hda1. I'd then manually mount /dev/hda1 (/) and /dev/hda3 (/home), type exit, and it would come up.

PCLinuxOS's GRUB failed to see the disk at all.

 

LILO fixed everything, and thank goodness PCLinuxOS still includes the option of using either Bootloader!

<gripe>GRUB changes the names of the disks just enough to be confusing. The first disk it finds (be it hda or sda) is called HD0. That is just close enough to HDA visually to be confusing. I wish it was DISK0, or even better, DISK1 instead.</gripe>

 

Video Set Up

 

PCLinuxOS is KDE 3.5.8, where Mint uses Gnome (unless you load up the KDE packages or the Kubuntu-ish version [of which there is no 4.0 GA version yet]{At the time of this writing, and that will just about be enough on the brackets!}).

 

On the 340, Mint did the right thing as far as screen resolution and whatnot. PCLinuxOS came in at 1024x768. Fixing this required going into DrakConf (System / Configuration / Configure Your Computer). The KDE Control Center app for configuring the display will not help you, even though it is not greyed out or anything. Really, would it be a huge mod to the KDE code to put a little note in the KDE Control Center telling people to go use DrakConf?

 

Two things had to be changed: First I had to tell it that the display was a generic 1280x1024 flat panel (It is a Dell E172FP, but this was not detected). This is at Hardware / Configure your monitor in DrakConf aka the "PCLinuxOS Control Center".

 

Also in that same place is "Hardware / Change the screen resolution". Once this was done and X was restarted, I had 1280x1024. And it was lovely. Far nicer anti-aliasing than what Mint had done with this particular video card. But this is not apples to apples. This could be KDE and the way PCLinuxOS sets it up, rather than anything about Xorg. Mint uses Gnome, and I have started to notice that there are some corner cases in hardware where Gnome does a better job, and others where KDE looks better. I have not chased this to ground.

General Linux Video Ramblings

 

There is greatness in the way Linux does video support. Having different layers like X and the desktop, it (like any other OS done this way) is able to keep the look-and-feel stuff largely separate from the hardware support, allowing different groups to focus on what they are interested in. At the same time, KDE and Gnome drive enough stuff that they do have a very tight level of reliance upon the X server. It is a testament to Open Standards that KDE and Gnome do not seem to care whether they are installed on top of XFree86 or Xorg. Still, despite all this abstraction, it remains that there are some setups where one looks better than the other.

 

Case in point:

 

Previous set up was the 340 with the Dell E197FP display. Distro was Mint. It looked OK, but not as good as Mint on the Dell 620, or my personal Acer. I could never figure it out, never get the fonts to look smooth and anti-aliased.

 

I moved the E197FP to the 745, and added a Dell E172FP to the 340. Same sync speeds, same resolution: 1280x1024. Mint still looked jaggy on the 340 / E172, but the 745 / E197 now looks great. Smooth and clean: in a funny way it feels like the E197FP gained more screen real estate, I guess because everything can be smaller and still be readable without looking jagged.

 

Then PCLOS rolled in to the 340/E172 combo, and now it looks great. Every bit as nice to look at as the 745 / E197.

 

There are subtle interactions in the stack of OS, X Server, Video Card, Monitor, Distro, and chosen desktop environment that can make a huge difference in the way the whole shooting match looks and feels.

ELD and PCLOS

 

As noted in my previous PCLOS on a laptop foray, PCLOS works great as an Enterprise Linux Desktop. It would be an easy argument to make in fact that Kubuntu / PCLOS would be a better choice for ELD, at least if the people using it were recently using an OS from Redmond. The KDE 3.5 user interface paradigm is closer to the one on MS's XP than the Gnome 2.20 of Mint 4.0. When I give the ELD lab at places like LinuxWorld or SHARE, I always use a KDE based desktop.

 

And "Yes, there is a KDE version of Mint 4.0 coming soon". The Mint blog says that they are having to work harder than they thought porting the Mint add-in tools to KDE.

 

Dell and Linux

 

Even though these are not officially supported for Linux by Dell, both of these computers run it without issue. Dell's officially Ubuntu-supported hardware is an Inspiron 530 N at this writing. Other than the GRUB thing on the 340, both these distros work great. Either one is viable as an Enterprise Linux desktop.

 

While I have looked at many distros over the years of this blog as Enterprise Linux candidates, I usually do it on laptops. This was my first all-real-desktop-hardware look. My theory has always been that laptop hardware is harder for the OS to support, and if laptops work, desktops should be a breeze.

 

I don't think I have totally invalidated that idea here today, but the GRUB issue showing up on the 340 and not the 745 does show that *all* hardware needs to be evaluated before a particular Linux distro is deployed to the enterprise desktop.

 

The point is more that there are other viable distros for the ELD role than just SUSE/Novell and RedHat. These two work fine.
