First, the news. BMC bought a company last week: GridApp. I am, as I write this, in the newly minted BMC office with my Dell M4500 laptop.


I wrote about the M4500 laptop here, back when I was first setting it up as a triple booting device. Acquisitions like this are the reason I have it.


A quick review of the M4500, assuming you did not run back and read that post: the M4500 is a quad-core i7 system with 8 GB of RAM and a 500 GB 7200 RPM hard drive. It triple boots: right now it is Linux Mint 10, MS Windows 7, and the ADDM appliance, which is RedHat 5.5 based.


The ADDM appliance as configured on the M4500 can execute 360 concurrent discovery requests. The limit is set by this particular laptop's hardware configuration. It is a big laptop, but a small server, so real server-class hardware could do far more at the same time. While here at the former GridApp office, I am running daily scans of their network, and getting ready to update the central Atrium CMDB with all of the information I have found.
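ADDM's actual scheduler is not something I can show you, but the general idea of a hardware-bound cap on in-flight discovery requests can be sketched in a few lines of Python. The names and worker counts here are my own invention for illustration, not ADDM's:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch only: a counting semaphore models a ceiling on
# concurrent discovery requests, like the 360 this laptop is set to.
MAX_CONCURRENT = 360

_slots = threading.Semaphore(MAX_CONCURRENT)

def discover(host):
    """Stand-in for one discovery request against one host."""
    with _slots:  # blocks when MAX_CONCURRENT requests are already in flight
        return f"scanned {host}"

def run_scan(hosts, workers=16):
    """Fan the hosts out across a worker pool, bounded by the semaphore."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(discover, hosts))

results = run_scan([f"10.0.0.{i}" for i in range(1, 6)])
```

On bigger hardware you would simply raise the ceiling, which is exactly the point about server-class gear being able to do far more at once.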


Does BMC do Linux? Circa 2003


In the early part of last decade, I would be in the BMC booth at LinuxWorld, and would get comments from attendees along the lines of "I didn't know BMC had a Linux product" or "BMC? Isn't that a mainframe company?". It was fun to surprise people with the breadth and depth of the BMC product line, and in the last few years, between internal organic growth and acquisitions, the number of products that either have a Linux version, or maybe were even started and heavily developed on Linux, has grown.


This adventure in Linux was about using one of our newest products.


Atrium Discovery and Dependency Mapping: ADDM


We only acquired the technology that is ADDM a little while back, so this is the first time during an acquisition that we have had a chance to set up the appliance and scan the new network. It is therefore the first time we have ever had such in-depth knowledge about the servers and applications of an acquired company so quickly. This takes out a ton of manual effort for the IT team in the early days post-acquisition, and allows us to make fact-based decisions about more than just the R&D lab hardware: I can tell our networking team what switches have been found, and our security team what desktop OS's are hanging around, ahead of when they would have had a chance to get Marimba fully deployed.


Every acquisition and every company are different. One variable is how fast the deal goes from due diligence to closure. If it is fast there is no time to completely plan and deploy network integrations. New network connections require lead time with the vendors. There may be security issues that need to be resolved before the networks can be connected.


If the networks are not connected, it is very hard to scan them from the existing servers. That is where the Linux laptop based appliance comes in. The networks need no connection for there to be actionable intelligence created with the handy dandy ADDM in the M4500-can.


Non-automated due diligence and financial data will take you to a certain level of understanding of what the new company has inside it. One does not buy a company without knowing a fair amount about it. That is not the same thing as having the information needed to integrate and manage it, though. You cannot, for example, easily find out how many clusters there are, and at what software level. You cannot see the way the servers "gather" around other servers like NAS or SAN. The network topologies will more than likely be manually created, and have a good chance of missing something or other. Devices like consumer-grade wireless access points might even be around and forgotten.


How many VM's are there? What are the hypervisor types and levels? LDOM's? LPAR's? IVM's? With virtualization, it is easy to get sprawl. The chances of the VM environment documentation being up to date are a tiny fraction of the chances that the network documentation is current. Yet, for an R&D integration, all these things need to be known so that the product can be quickly and efficiently integrated into the larger product line it is probably now part of.


What Does it Take to Get Going?


Day One. The ground team is there, and the new employees are being oriented to the new company. Everyone is excited about the changes and ready to dive in and get started. No. Really.


What does it take to get the ADDM appliance up and running? Before I got here I had, obviously, installed the appliance onto the laptop (full disclosure: I had a lot of help from my ADDM expert). I just booted up to RedHat 5.5 from the Grub menu rather than my usual Linux Mint.


I pre-configured the subnets I wanted it to scan, based on some of the due diligence documentation. One less thing to do while the day-one fur is flying. Since I had never done this before, I tested the appliance back in Austin to make sure everything was working from the web interface. My poor Linux desktop was repeatedly scanned just to make sure I had some idea what I was doing.


Quick side note: I am *deeply* impressed with the ADDM web management interface. I tried several browsers, like Chrome and Firefox, and several OS's, like Linux and OS X, and the web interface to the appliance works flawlessly. I love to see web-standards-based work like this!


Set up the Slaves


The first thing I did was to add MS Windows slaves. The MS Windows discovery is controlled by the appliance, but it has some Santa's helpers. We set up some small (2 GB, 1 VCPU in this case) VM's, browsed over to the appliance web interface, downloaded a small program, and set up the slave. The appliance is very self-contained that way: everything needed is built in, including the downloads for the slave servers. There is one slave per MS Windows domain. Each slave is given a domain admin account so that it can log into (via WMI) and read out information about each system that logs into the domain. No changes are made on the systems being scanned.
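The one-slave-per-domain layout is really just a routing table. Here is a tiny sketch of that shape; the domain names, slave names, and account names below are made up for illustration, not taken from any real environment or from ADDM itself:

```python
# Hypothetical sketch: one slave per MS Windows domain, each holding a
# domain admin credential used (read-only, via WMI) for discovery.
slaves = {
    "GRIDAPP": {"slave": "win-slave-1", "cred": "GRIDAPP\\discovery"},
    "BMCLAB":  {"slave": "win-slave-2", "cred": "BMCLAB\\discovery"},
}

def slave_for(domain):
    """Route a host's domain to its slave; None means the non-domain path."""
    entry = slaves.get(domain)
    return entry["slave"] if entry else None
```

A host that is not in any known domain falls through to the separate non-domain slave path described a bit further down.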


For an R&D environment, there are potentially quite a number of domains, so being able to spread the work around, while it takes a bit of setup, speeds up the actual work. And the slaves can be pretty much anything. 32 bit. 64 bit. Server version. Desktop version. Laptop. VM on a laptop. The slave code is not picky. The M4500 has enough CPU and RAM to scan at least 5000 hosts in 24 hours, and obviously co-processing it via server slavery helps, even if it sounds bad.
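As a sanity check, the figures above hang together. With 360 requests in flight at once and 5000 hosts cleared in 24 hours, Little's law gives each individual scan a very generous time budget (this is my own back-of-the-envelope arithmetic, not anything ADDM reports):

```python
# Back-of-the-envelope check on the quoted figures (Little's law: W = L / lambda).
hosts = 5000
seconds = 24 * 3600            # one day
concurrent = 360               # requests in flight at once

completion_rate = hosts / seconds                 # ~0.058 hosts finished per second
per_host_budget = concurrent / completion_rate    # seconds each scan may take
per_host_minutes = per_host_budget / 60           # roughly 104 minutes per host
```

Real scans finish far faster than that budget, which is why the appliance has headroom to spare even on laptop hardware.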


For systems that do not log into a domain, another slave (or slaves; there is no limit on the number of non-domain slaves) is created and given a list of subnets and local credentials to try.
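A non-domain slave's marching orders amount to "these addresses, these credentials, in this order." A minimal sketch of that work plan using Python's standard `ipaddress` module; the subnet and credentials are invented for illustration:

```python
import ipaddress

# Hypothetical inputs: one subnet to sweep and an ordered credential try-list.
subnet = ipaddress.ip_network("192.168.10.0/29")
credentials = [("Administrator", "first-guess"), ("localadmin", "second-guess")]

# The work plan: every usable host address paired with the try-list.
# hosts() skips the network and broadcast addresses automatically.
plan = [(str(host), credentials) for host in subnet.hosts()]
```

For a /29 that yields six usable addresses, each of which would get the credential list tried in order until one works.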


The Appliance Leads by Example


All the SNMP, UNIX and Linux scanning is done from the appliance itself, so the appliance is given the SNMP strings, usernames and passwords to try as well, and these can be arranged by subnets.
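Arranging credentials by subnet is, at heart, a lookup from an address to a try-list. Here is a sketch of that arrangement for SNMP community strings, again with invented subnet and community names rather than anything from a real configuration:

```python
import ipaddress

# Hypothetical mapping of subnets to the SNMP community strings to try there.
communities = {
    ipaddress.ip_network("10.1.0.0/16"): ["gridapp-ro", "public"],
    ipaddress.ip_network("10.2.0.0/16"): ["netops-ro"],
}

def communities_for(ip):
    """Return the community strings to try for a given device address."""
    addr = ipaddress.ip_address(ip)
    for net, strings in communities.items():
        if addr in net:
            return strings
    return ["public"]              # fall back to the usual default
```

The same subnet-keyed shape works for the UNIX and Linux usernames and passwords the appliance tries.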


Run Appliance, Run


So: subnets, slaves, SNMP strings, and usernames. Once that is set up, you next configure runs. These can be one-shots, or scheduled. I set up a daily run for each subnet, and over the course of the week, as I learned things, I added, subtracted, and tweaked a bit to drive up the capture rate. My first scan looks to have captured about 80% of the total I will have when I am done here, so that is a pretty good first run. It helped that GridApp-now-BMC had terrific manually maintained documentation to work with.


Today's Topology


The last thing will be Topology. ADDM has been learning all week about the relationships between host-type servers and other servers as well as infrastructure. I can already see the clusters and the NAS and the network switches. Tomorrow I'll add a set of topology runs to pull all that together. ADDM will run a few other checks on the network to paint the whole topology picture, but most of the data will come from things it has already learned at that point.


Linux can Multi-task, and so can ADDM


The appliance sits right next to my Mac here on my temporary desk. It is GUI-less, but I can log in and run "top" to see what is going on internally to Linux. When the scan is running full tilt, the "8" CPU's (quad-core, hyperthreaded, which looks like 8 CPU's to Linux) are all pegged. A light warm breeze wafts from the left side of the M4500. At most 7 of the 8 gigabytes of RAM on the M4500 are used, but the swap space remains untouched.
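That logical-CPU count is the same number "top" is working from, and any scripting language will report it too. A one-liner to check it on whatever box you are sitting at (on the M4500 it would read 8: four cores times two hyperthreads each):

```python
import os

# Logical CPUs as the OS presents them: physical cores x hyperthreads.
logical_cpus = os.cpu_count()
print(logical_cpus)
```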


Even while the current scan is running, I can look at the web interface and answer people's questions about the assets that have already been found in the last scans. Look at the dashboards and start thinking about what my server consolidation opportunities are. What models the network switches it has found are, and what versions of firmware they have. How many MAC addresses, and what IP addresses they are associated with. On and on. The web interface is a bit slower when a scan is under way, but it is still usable.


I am an ADDM newbie; this is my first time to ever do anything with it. Fortunately ADDM is an application with a fairly short learning curve, so I have been nearly instantly productive. At the same time, there is an incredible amount of depth here, and our ADDM expert is able to make it do all sorts of amazing things with its advanced functionality.


And there it runs as I type this, over there on my Linux laptop. Very cool.