BMC Atrium CMDB



This post shares some diagnostic enhancements included in the Remedy Configuration Check utility in BMC Remedy AR System Server (ARSystem) and BMC Atrium CMDB Suite (CMDB) version 8.1.01 (Service Pack 1).  The goal of the enhancements is to simplify the process of identifying, correcting, and reporting on configuration issues in the product.

 

 

 

A little background

 

The earlier post 12 steps toward a systems approach to diagnostics outlines different kinds of diagnostics which may be required for products such as BMC Atrium CMDB, and the subsequent post 7 Tools to verify BMC Atrium CMDB is working well describes the diagnostics available in CMDB at the time.

In Service Pack 1, we looked at how we could automate or simplify these diagnostics so they can be executed and collected more easily when required.  We looked at the AtriumCoreMaintenanceTool Health Check and the pre-checker described in the post BMC Remedy Pre-checker for Remedy 8.1 (unsupported) to see which would be the appropriate tool to extend.  The Health Check functionality in the Maintenance Tool also runs at the completion of the installation, as the "Post Install" check.  That design limits which checks can or should be run from it. For example, if we automated the process of checking recommended configurations, they would always fail in the post install check because there would be no opportunity to configure the product yet.  So we decided the next step in the journey of automating diagnostics was to extend the Pre-checker to provide a simplified user interface for executing diagnostics.

 

 

See the product documentation to learn more about the features of the BMC Remedy Configuration Check utility and how to access it.

 

The Pre-Checker was originally designed to detect environmental issues for install and upgrade, but since the scope of the tool has changed, it has been renamed the Remedy Configuration Check utility. It can be used not only to check the environment configuration before install or upgrade, but also to detect product configuration issues that cause many post-installation problems. This tool also enables us to automate frequent troubleshooting steps.

 

 

 

BMC Remedy Configuration Check utility


The goals are to:

  1. Help administrators to troubleshoot configuration issues
  2. If the administrator is not able to resolve the issue, make it easy to gather and share test results

 

 

 

 

How it works


The BMC Remedy Configuration Check Utility is included with BMC Remedy AR System 8.1.01 (Service Pack 1) media.  The file can be extracted to the system to begin using it. For more information on downloading and extracting the utility, see To Obtain the BMC Remedy Configuration Check utility.




CMDB checks included in BMC Remedy Configuration Check utility


In Service Pack 1, the following checks have been added for CMDB.

 

 

1) CMDB - System Information

 

This check makes it easy to collect information about the system.  It may also make it easier to compare different systems running BMC Atrium CMDB, or to report the information when working with Customer Support.

Note - Going forward, this check will be moved to the information gathering category.

 

 

 

2) CMDB Metadata Check

 

This check detects pending CMDB class changes.  It performs the same test that can be run manually using the command-line cdmchecker tool with the -g option.

You can find more info on cdmchecker in the product documentation.
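For reference, a manual run has roughly the shape sketched below. Only the -g option comes from this post; the connection arguments (server, user, password, port) and their exact flag names are placeholders I have not verified, so run cdmchecker with no arguments first to see the usage for your version.

cdmchecker <server and login options> -g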

 

 

 

3) CMDB Class Overlay Check

 

Overlays on CMDB classes can cause issues during the upgrade process. This check detects overlays on CMDB classes.

 

 

 

4) CMDB - RE Private Port

 

We recommend configuring a Reconciliation Engine private queue for performance reasons. If the private queue is not configured correctly, there will be no performance gain. This check can be used to detect an improper private queue configuration.
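As a rough sketch of what a correct setup involves (the specific number below is only an illustration, not a recommendation), the ar.cfg/ar.conf entry that defines the private queue and the Reconciliation Engine's RPC socket setting must both point at the same private queue number, and that number must fall within the ranges AR System reserves for private queues (390621-390634, 390636-390669, 390680-390694):

Private-RPC-Socket: 390680

If the Reconciliation Engine is left on the default fast/list queues, or the two settings reference different numbers, the private queue brings no benefit, which is the kind of mismatch this check is meant to surface.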

 

 

 

5) CMDB - Index Check

 

This check provides references to the out-of-the-box indexes in CMDB version 8.1.01 and validates that those indexes exist.  It reports an error if any index is missing.

 

 

 

 

This blog post hopefully provides a better understanding of why the Pre-checker utility was renamed and of some of the new capabilities that were added.  This expanded functionality should make it easier to diagnose issues.  A few checks were added to this tool in version 8.1.01.

 

Stay tuned for new additions in future releases.

 

 

I hope you found this blog post useful; please rate it below or add comments.  To find similar information, see BMC Remedy Blog Posts.



Hi All,

I have recently joined BMC Software as the new Product Manager for the Atrium CMDB and thought I should introduce myself so that you can get an idea of what makes me tick, and also understand my drive to make the Atrium CMDB a tool that can drive realisation of value for you, our customers.

I have been part of the BMC Remedy world since 2004, when I worked for an ISP and was given the mantle of 'the Remedy guy' for the entire organisation. I had never seen Remedy before and actually had no idea what it was. After a mild baptism by fire I immediately saw the value that the Remedy Action Request System could give the Network Operations Centre I ran at the time, and also the wider organisation; back then we were rolling out ITSM 5.1.2.

Since that time I have been involved in many rollouts of BMC Remedy ARS based solutions. Most recently I led the Remedy development organisation for BlackBerry, rolling out ITSM across that organisation, and I spent some time at ServiceNow before coming to BMC Software.

My desire is to actively engage with the community (you) and have what I hope are active discussions through this forum and others in order to ensure that we are building a tool that helps you solve your business challenges on a daily basis. Part of my role here at BMC is to ensure that your voice is heard and that we are investing in the right areas to drive our product and your success forward.

When I'm not talking about CMDB, working on the product roadmap or out talking with you, our customers, I spend my spare time watching movies, enjoying the company of friends, and pursuing a new pastime of husky scootering; feel free to ask me more about that if you really want to know!

I'm based in the university town of Cambridge in the UK and look forward to our interactions, whatever the medium, as we at BMC evolve Atrium CMDB to meet your needs and support your success.

Contact me via twitter @flirble or via this community of course :-)

If you are having issues with Atrium CMDB I strongly suggest you log an issue with support and your account team before engaging me directly but if you feel I might be able to help then I'm happy to do so where I can.

TTFN

Stephen



CMDB is great out of the box. All the classes and relationships we ship align well with the business environments that our products serve. However! Every once in a while you'll have to extend the common data model with a couple more attributes, class definitions or product data loads.  Last October, I presented a webinar, "How to unlock the potential of the Common Data Model in BMC Atrium CMDB", covering how to extend the CDM using Class Manager.  You can view the recording here. In this blog post, I would like to extend the topic to talk about products which extend the CDM as part of an installer or by loading a CMDB extension with the Maintenance Tool.

 

 

Several current products use extensions to the CMDB, including BMC ProactiveNet Performance Manager (BPPM) and Configuration Discovery Integration for CMDB (CDI), and the mechanism is also used for loading Product Catalog data updates. The reason for the separate install step is to make all the required changes semi-automatically, and hopefully painlessly.  This takes the element of human error out of it, so it can be either an "it worked, no big deal" or an "it didn't work; what happened?" experience.  Whenever I look at a failure, I always ask the question: "What was unexpected in the environment that blocked the operations the extension was trying to perform?"  I seek to understand.  There is a "Zen" to it. It should be a rational exercise, not a physical one like trying to stuff pythons in a paper bag. Below, I will highlight some of the ways the product has tried to prevent, improve, or minimize the room for failure, and share some of the ways I think about it as I look into issues.  Hopefully it leads to a more peaceful state for your CMDB and yourself.

 

Ensuring the server is in a good state before running the install is a general need, and it is useful even when you are not running installations, so the AtriumCoreMaintenanceTool has a Health Check feature.  You can read more about it in the documentation here.  You can find more about other tools that can help in this regard in Jesse's post 7 Tools to verify BMC Atrium CMDB is working well, in the section on Verifying Product Installation and Environment.

 

If you've had to extend the model, or are planning on doing so, then this is what you should know:

 

Extending the CDM means that you're altering tables in the database to add additional columns, or maybe even creating an altogether new table to house the data you'll be collecting from your environment. Simply put, we need to add more labels, so we have to define containers for our data.

 

With that in mind, imagine that you already have a physical container defined, like a cupboard or jar, and you needed to add just a few things to it; you were able to add two labels but could not fit the third because you ran out of space. This would make sense to us humans, but the installer still thinks that 1 and 2 need to be added because it was not told otherwise. You could argue, "can't the installer be made more intelligent, to examine what is already in the cupboard or jar?"  Now consider the case that the items interact with one another as they are added, and have different rules on what can be stored multiple times.

 

Does that make the challenge of ensuring reliable completion more complex?  You bet it does! So the extension installer follows strict orders: install, check for errors, and in case of unforeseen circumstances, wait for further instruction. Basically the extension installer is instructed to add all three items and that is exactly what it tries to do. If it fails during the install and the install is attempted again, this causes a data structure collision, because items 1 and 2 already exist, and hence a failure of the extension loader. Running it again will not change these results; it will keep failing on exactly the same collision points it failed on before. So, don't run the extension loader again hoping for different results. Instead, look at the logs and see where the installer hit its first issue. There could have been a requirement to create a dataset first, either a manual step that was missed or a dependency that was violated. When investigating issues, it is sometimes useful to look at the manifest of files in the extension to see what it is trying to load. This helps to understand why an error occurs.

 

 

There are two types of extension loaders: one that comes with an executable (e.g. simExtLoader.exe or pnExtLoader.exe) and one that is loaded with AtriumCore\AtriumCoreMaintenanceTool.cmd.  AtriumCoreMaintenanceTool is installed with Atrium Core version 7.5 and later and provides the tool for loading CMDB extensions, so more information about extensions and what they contain can be found in the Atrium Core documentation.

 

Executable loaders can use CMDBDRIVER to deliver their "payload" from each subdirectory of the loader. For example, the 500-SIM-CDM-Extensions directory for simExtLoader has class extensions as well as *OSD.txt files that carry instructions on what to do.  The reason for executable loaders is to perform additional steps or checks as part of the install, but the subset which installs CMDB extensions is largely the same.

 

Some loaders also add Normalization or Reconciliation jobs, Federated Launch Links and so on. These are stored in the "arx" files in subdirectories of the extension loader. These additional records can also only be added once; if you run the installer again they will cause further failures, but this time as a data collision rather than a data structure collision. Again, here the installer is programmed to install all these things as if they never existed, and the instructions basically say:

 

"Create New", rather than using this logic: "If you find it there already, then update it or move on to the next".

 

This is so because the original need to extend the CMDB still applies and the installer just knows that it has not been completed yet.

 

Above, I mentioned the marching orders of the extension loader: install, check for errors, and in case of unforeseen circumstances, wait for further instruction.  The latter part of that was added in CMDB 7.6.04.  If an extension is currently loading, or has attempted to load and has failed, why should it be allowed to run again and make a mess of things?  It shouldn't, so a simple mechanism was put in place to prevent that situation. When it runs, it adds records to a form called "Share:Application_Properties" that reflect the version of the extension and record the status of the installation progress. If the installer needs to install Product Catalog data, which is also considered to be "an extension" of the CMDB, then you'd be referencing ProductCatalogData-2012_07_03.xml. The name of this file reflects the Product Catalog data load made available on July 3, 2012. Its contents include a GUID reference for Share:Application_Properties that is used to check the version of the Product Catalog installed on your system.

 

 

In the case of Product Catalog that ID is "PD00C04FA081BA0SvxQgaxH66Q1wQA" and it is validated for version 7.6.04 or greater.

The next GUID it will then add to the Share:Application_Properties (SAP) form is "BMCPC00C04FA081BAbpfqSA9gV41Ar". This particular ID is then used to track the progress of the data load. This is done by adding a record to SAP with a Name of Status and a Value of Running. If the install fails, the value is changed to Failed.

 

 

At the conclusion of the install, this Status record is removed. If this record still exists and has a Failed status, then the installer is not going to let you run it again.
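If you want to check for such a leftover record yourself, one option is to query the Share:Application_Properties form, for example with the driver GLSQL technique shown elsewhere on this page. The view and column names below are assumptions based on the form and field names mentioned above (AR System normally exposes a form as a database view with special characters replaced by underscores), so verify them against your own database before trusting the result:

select * from Share_Application_Properties where Name = 'Status';

Any row that comes back with a Value of Failed (or Running, for an install that is no longer active) is the blocker described above.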

 

 

At BMC Software we have designed this part specifically for the reasons described above, so the guidance is:

 

- Run the Health Check or verify the system is in a good state before installing extensions

- Run it once only

- Evaluate reasons for failure and address them individually

- If you can't address them manually, then restore the database, fix the original condition, and run the installer once. Repeat if necessary, or identify the individual component failures and complete the extension loading manually, component by component.

 

 

I hope this post provides a better understanding of the rules that the extension loaders live by, and some of the thinking behind them. Hopefully this leads to more zen-like experiences with extending the CDM. I have probably skipped something, so I am looking forward to seeing further questions on this topic so that we can have a full disclosure here for anyone to follow.

 

If you like content like this, see BMC Remedy Pulse Blogs for more like it.

 

If you have ideas on ways for Customer Support to work better with you to enable success, join the Customer Support Community and provide ideas, feedback, or suggested improvements.

 

 

Thank you for reading!

Daniel



BMC Atrium Integrator (Practical Example of Data Transformation Using Spoon)

 

 

 

Kettle (K.E.T.T.L.E - Kettle ETTL Environment) was recently acquired by the Pentaho group and renamed Pentaho Data Integration. Kettle is a leading open source ETL application on the market. It is classified as an ETL tool; however, the concept of the classic ETL process (extract, transform, load) has been slightly modified in Kettle, as it is composed of four elements, ETTL, which stands for Extract, Transform, Transport, and Load.

 

Related posts and examples:

  • Shows how to generate data warehouse surrogate keys in Pentaho Data Integration
  • Data Sanitization Pentaho Data Integration (PDI) example
  • Data Allocation Pentaho Data Integration example
  • Parameters and Variables - Atrium Integrator (Spoon)
  • BMC Atrium Integrator (About and Useful Links)

 



About Atrium Integrator

The Atrium Integrator (AI) product is, as the name implies, an integration tool that facilitates the loading of data into the Atrium CMDB.  Atrium Integrator allows for a wide variety of input sources such as JDBC, JMS, CSV, web services and complex XML.  It leverages a "best of breed" ETL tool that has a very broad range of transformation capabilities.  The engine that powers AI is actually referred to as an "ETTL" tool, which stands for "Extract, Transform, Transport, and Load".  It is based on the Pentaho Data Integration (PDI) tool, commonly known as "Kettle". Kettle has a designer tool named "Spoon" that utilizes a drag and drop UI, speeding up the design of complex jobs.

 

Value Statement

* Simplify importing CIs and their relationships through a wizard-based UI
* Out-of-the-box templates for CI field and relationship mappings, ensuring consistency
* Reduce effort and time with a graphical, drag-and-drop interface
* Powerful extraction, transformation and loading engine (Pentaho Spoon) for massaging of data
* Scalable for large enterprises with millions of CIs

 

Videos / How To's: Click here

Terminology

  • Repository - A relational database in which jobs and transformation are stored, along with the logs and execution history of the jobs.
  • Transformation - A collection of steps and hops that form the path through which data flows.
  • Step - The minimal unit inside a transformation.  Steps are grouped in categories based on the function they provide (e.g. input, output).
  • Hop - Represents the data flow between two steps.  It has a source and a destination.  A hop can only be between one source step and one destination step, but each step can have multiple hops ("paths").
  • Job - A process control component.  A job consists of Job Entries.  These can be either transformations or other jobs that are executed in a particular order, managed by hops.
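As an illustration of how these pieces fit together outside of the AI consoles, the stock Pentaho command-line tools can run what is stored in a repository: pan runs a transformation and kitchen runs a job. The sketch below uses the standard Kettle/PDI options as I recall them; the repository, user, directory and job names are placeholders, and running kitchen.sh with no arguments prints the authoritative usage for your version.

kitchen.sh -rep=MyRepository -user=admin -pass=admin -dir=/MyJobs -job=LoadComputerSystems -level=Basic

Within Atrium Integrator itself, jobs are normally scheduled and launched from the AI console rather than from the shell, so treat this purely as a way to understand the Repository, Job, and Transformation terminology above.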

Useful Links

Atrium Integrator is based on Pentaho Data Integration (aka Kettle, aka Spoon). Some useful links:

Pentaho - Spoon User Guide - Gives you information on the steps that are available in AI, excluding the BMC specific ones (ARInput/AROutput)

Pentaho Community Forums - The Forums are great for asking questions about the product.

Pentaho Community - Lots of information here, and you can also download a community edition of Pentaho Spoon to understand the product and its uses. Obviously it will not be AI with the enhancements BMC have made, but it is nevertheless a good learning tool.

 

Why not AIE?

There are various shortcomings with the AIE product that forced BMC to look in another direction.  Some of the shortcomings are listed below:

  • It lacks the ability to extract data from a myriad of sources that our customers have data in (MySQL, Sybase, Excel files...)
  • Very limited transformation capability.
  • Too many calls into BMC support to get the product to work.
  • Need to be well versed in the CMDB in order to model the data accurately.  No wizard is available.

AIE to AI Migration coming soon



Transformation Repository

BMC Atrium Integrator Transformation Repository (Really wish this acronym spelled something cool...)



I ran into an issue with ADDM syncing to CMDB, and although it was perceived as a CMDB outage (via the ARS server), the root cause was in the Windows configuration.

 

In my experience the issue is usually network related, but since it can sometimes be CMDB related, sometimes the network, and sometimes the AR Server, it is useful to clarify how to determine which of those is the case.

 

Given that the CMDB is hosted on the ARS Server, it would really have to be an outage of the ARS Server or a network issue as CMDB would never respond with an ARERR code of 90.

 

First I'd like to refer to this article https://communities.bmc.com/community/bmcdn/bmc_remedy_ondemand/blog/2013/10/14/the-pulse-optimizing-addm-to-cmdb-sync-connections-for-remedy-ondemand-environments

 

So, please make sure that network access is established first by using "ping" from the ADDM appliance host, keeping in mind that often only specific ports are open and the company network may block ICMP (ping). However, if all of these have been ruled out, you can still do a more direct check with the CMDB by using CMDBDRIVER or the (AR) DRIVER utility to connect.
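For a quick check from the ADDM appliance shell, something like the following is usually enough. The host name and port are placeholders; substitute the AR System server host and the TCP port it listens on, and note that nc may not be present on every appliance:

ping -c 3 <arserver-host>
nc -zv <arserver-host> <ar-tcp-port>

If the port test fails while ping succeeds, you are most likely looking at a firewall rule rather than a CMDB or AR Server problem.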

 

You can ask the CMDB administrator for these binaries, although the clients will have to be executed directly from the ADDM host machine, which can be running a different operating system than the one the CMDB is installed on. The client is OS specific. For example, if CMDB is running on an ARS server hosted on MS Windows 2008 Server and ADDM is hosted on Red Hat Linux, then you would need the CMDBDRIVER client for Red Hat Linux before you can run it from the ADDM appliance host.

 

If there is interest, I can produce a staging area with a CMDBDRIVER client for all operating systems. Since the version check with CMDBDRIVER is compatible across many versions, you'll only need one version.

 

For convenience we've staged the 7604 version for Linux here:

 

ftp://ftp.bmc.com/pub/BSM/AtriumCore/utilities/cmdbbin.tar.gz

 

Download this to the ADDM appliance system and make sure you set execute permissions for your current user on the Linux system. ADDM only supports Linux, hence we are only providing the Linux binary here.
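A minimal sequence of shell commands for that step is sketched below; the working directory and the assumption that the archive extracts cmdbdriver.sh into the current directory are mine, so adjust the paths to whatever the tarball actually contains:

wget ftp://ftp.bmc.com/pub/BSM/AtriumCore/utilities/cmdbbin.tar.gz
tar -xzf cmdbbin.tar.gz
chmod +x cmdbdriver.sh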

 

Once you have this in place you can start the client. Create a file called "options.txt" and put this in the text file:

 


init

gver

1

OB00C04FA081BABZlxQAmyflAg1wEA

q


 

NOTE: The linked binary above already has everything set up. All you need to do there is run the following command with 4 arguments:

 

./cmdbdriver.sh <ARS HOST> <AR User> <AR User Password> <AR TCP PORT>

 

Example:  ./cmdbdriver.sh bmc_arshostname Demo Demo 0

 

Results should look like this:

 


Command: INITIALIZATION   
CMDBInitialization  results


ReturnCode:  OK
Status List : 0 items
Command: GET VERSION


Number of Application Versions to get (0):    Id ():   
CMDBGetVersion  results ReturnCode:  OK
Version Information: 1 item(s)


Application ID: OB00C04FA081BABZlxQAmyflAg1wEA <----- ApplicationId of CMDB
Application Name: BMC Atrium CMDB Version: 8.1.00 <----- This is what you're looking for and proves connection can be established.


Status List : 0 items (This just means that there are no other items in the list)


 

 

If you don't get this result then your connection to the CMDB is not open.

 

This is just one method to achieve a simple test at a moment's notice. There are other articles that accomplish a similar thing with Java:

 

https://docs.bmc.com/docs/display/public/ars81/Running+arconnect

 

 

Daniel



Upgrading from versions prior to the 7.6.04 release can be tricky, but it is fairly simple to achieve if you know the steps covered below.

There are various KA's on this topic already. This pulse post rolls up all the known causes into one.

Starting point : BMC AtriumCore 7.5 CMDB SP1

Finish: BMC AtriumCore 8.1

 

 

The following areas are known to cause issues at some point during the install:

 

  • Prechecker Utility of the CMDB Installer
  • Best Practice Conversion Utility - BPCU
  • Custom Workflow for cmdb fields
  • Attachment Size Limit
  • Warning suppression

 

Prechecker Utility of the CMDB Installer

 

Part of the installation includes a CMDB Pre-Checker that validates field IDs and names. This prechecker should only be executed if the version of CMDB is already 7604, and should not be executed if the version is prior to 7603 SP1; however, it is part of the installer, so it will run anyway and add ERRORs to the install log. This is OK and the errors can be ignored.

 

Here is some background on this. If you run the prechecker before 7603 SP1 then the prechecker will have some failure exit codes that may also show up in the arerror.log. These errors are there because there are Field IDs for Company that get checked in the following forms:

 

PCT:VersionCompanyAssocStatusFlags
PCT:PatchCompanyAssocStatusFlags

 

The installer needs to change these Field IDs from the value "420000165" to the value "1000000001". Since the prechecker runs before the installer can make this change, these errors are ignored.

 

The log will show an exit code of 1025 after running this command:

 

LOG EVENT {Description=[AR ChangeID command],Detail=[C:\Users\E020156\AppData\Local\Temp\Utilities\rik\archgid.exe -x ARSALIAS -u Action Request Installer Account -p ****** -t 0 -c 10003 -N 7200 -L 14400 -X 21600 -F C:\Users\CURENTUSER\AppData\Local\Temp\Utilities\pc\changeFiledIDs.txt

 

Since there isn't a way to easily remove the Pre-Checker from the AtriumCore installer program, all you need to do is understand that this is a false positive result. You'll basically see a list of errors at the end, where the "Success" status of the install is shown, giving you the option to review the log. That log review console will have some "red" lines in it.

 

BPCU

 

BMC has produced a customization conversion utility that converts custom workflow into overlays. For customers who have Asset Management and other applications where the CMDB provides the data structure, and who have customized workflow, this utility is best used AFTER the AtriumCore upgrade is completed. The BPCU utility does not overlay any CMDB schemas or workflow even if customizations are found; however, running this utility can set the expectation that, even if the CMDB was overlaid, this would be corrected by the BPCU. This is not true.

 

Please see KA380649 for reference.

 

The bottom line for this point is: run the BPCU after the upgrade of CMDB is completed, was successful, and was not impacted by overlay issues. We've done these upgrades with a 100% success rate and are confident in supporting this practice.

 

 

Custom Workflow for cmdb fields

 

This item is related to the previous one, but more specific to data that is being inserted into BMC.CORE CDM forms.

The installer inserts various template data into the BMC.CORE forms and this step can be blocked if additional data restrictions or attribute requirements are enforced by custom workflow. These fields may include MarkAsDeleted, Company, Region, Site and so on. If you have customizations in place that require a value in a field that is not already required out of the box then you should disable the workflow for those fields before you start the installer.

 

The install log would capture such a failure with an exit code of 1025, and the log entry would look like this:

 

C:\Users\CurrentUser\AppData\Local\Temp\Utilities\rik\rik.exe" loadapp -x ARSALIAS -t 0 -u "Action Request Installer Account" -p <not_displayed> -l "D:\Program Files\BMC Software\AtriumCore\" -n CMDB-RIK_Install -f "D:\Program Files\BMC Software\AtriumCore\cmdb\en\workflow\upgrade\764patch000\wf-RIK-CMDB.xml" -L -C

 

Where the name of the log that actually captured the real failure codes is "CMDB-RIK_Install.log"

This log would then show an error like this

 


INFO  - Error importing record 1: ERROR (806201): ; Please supply a Category, Type and Item for this Configuration Item. Those fields require and entry to create or modify a CI.
DATA "" "" "" "" "" "" "" "BMC_BUSINESSSERVICE" "0;" "" "" "" "" 1282162748 "BMC.ASSET" "" "" "" "" 0 "" "STANDARD" "BMC_GLOBAL_DEFAULT_SRVC" "" "" "Demo" "" "" "" "" "" "" "" "" "" "" 1285101154 "BMC_GLOBAL_DEFAULT_SRVC" "" 30 "" "" "" "" "" 0 "BMC_GLOBAL_DEFAULT_SRVC" "" "" "" "000000000000011|000000000000383" "" 10 10 "Default Service" "" "" "" "" "" "" "" "" "" "" 0 "1284595003Demo" "Demo" 0 "" "0" "" "" "" "" "" 0
INFO  - Import Completed in 8.237 seconds. 0 records were imported to BMC.CORE:BMC_BusinessService; 1 Records were not. From File D:\Program Files\BMC Software\AtriumCore\cmdb\en\workflow\upgrade\764patch000\.\USM_BusinessService_DefaultData.arx

[ERROR] LoadComponent- Data Import failed with code 1025 for file D:\Program Files\BMC Software\AtriumCore\cmdb\en\workflow\upgrade\764patch000\.\USM_BusinessService_DefaultData.arx

 


And there would be additional records like this that fail. An exit code of 1025 can almost always be associated with a data collision of some type. Either the record could not be imported, or it could not be deleted (code 1024) for whatever reason.

Please disable workflow that would prevent the installer from managing records in the CMDB. No customer records are deleted during the upgrade, only CMDB template or default loads like the BusinessService data will be impacted.

 

 

Attachment Size Limit

 

We've found that the ARDBC setting of "Db-Max-Attach-Size: 0" has an impact on attachments during upgrades. If you have this setting in the ar.cfg (ar.conf on UNIX/Linux), then change the value to 200000000 (~200 MB) or just remove it from the config file altogether.

 

ARS also has a maximum attachment size limit that can conflict with the Data Visualization Components that need to be loaded into the Data Visualization forms during the install. The size of these attachments varies from a few kilobytes to several megabytes.
The rule of thumb is to remove the attachment size restriction so the installer can complete successfully. This means that you should set the size limit to 0 (unlimited).

 

Set Attachment Size Maximum to 0 for the ARDBC (ARS Data Base Configuration values a.k.a. the ARDBC metadata).

You can also run a test before the upgrade by attaching a sample file that's no more than 100 MB to see if the attachment will work. This setting is in the AR Server Configuration Panel. You can leave it at 0 if it already is set that way.

 

Here is a brief explanation of the attachment size topic:

 

Both config items ‘Db-Max-Attach-Size’ and ‘AR-Max-Attach-Size’ are the same. ‘Db-Max-Attach-Size’ was introduced in early releases of AR (not sure about version) and was specific to Oracle DB only. Later we introduced AR-Max-Attach-Size for all databases including Oracle. You should remove ‘Db-Max-Attach-Size’ from config file at this point.

 

Unlimited size applies if

AR-Max-Attach-Size = 0 or no entry exists in ar.cfg.
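Putting this together, a minimal ar.cfg (ar.conf on UNIX/Linux) sketch for the duration of the upgrade looks like the following; the comment lines are only annotations, and the effect is the same if you simply delete the legacy entry and leave no AR-Max-Attach-Size line at all:

# Remove or comment out the legacy, Oracle-only option:
# Db-Max-Attach-Size: 0

# 0 means unlimited; omitting the entry has the same effect:
AR-Max-Attach-Size: 0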

 

 

Warning Suppression (ARS)

 

There are several warnings interpreted by the installer as failures. These warnings can be ignored by suppressing them for the duration of the installation.

 

You can add the following warning suppression:

 

See : KA364458

 

Suppress-Warnings: 9936

 

Additional warnings may be there already, so just add 9936 to the list for the upgrade to complete successfully.
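For example, if your ar.cfg already suppresses other warning numbers, the resulting entry would look something like the line below; the existing numbers are placeholders, and my understanding is that the values are space separated on a single line, so check the format of your current entry before editing it:

Suppress-Warnings: <existing warning numbers> 9936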

 

Our expectation, and the coding of the installer, is that this issue should not surface at all. However, I always like to mention it here for those customers using this KA when upgrading to AtriumCore 7.6.04 SP4, where this issue has manifested itself.

 

For this last point please see KA405615 for additional information.

 

A KA does exist with content similar to this PULSE post: KA406262

 

All Knowledge Article references require BMC Support Login.



I found a way to run a query that can help diagnose CMDB data that fails Automatic Identification, and that is to use the AR DRIVER with GLSQL. The syntax to log in to AR DRIVER with this client is similar to CMDBDRIVER, so I am not going to go into details on that here. Instead I'll skip straight to the query, although to avoid any confusion I need to add that AR DRIVER is a client that is native to any version of ARS, not CMDB. CMDBDRIVER does not wrap AR DRIVER; these two are different AR clients. Start AR DRIVER from the ARSystem API folder on Windows, or run it from the bin directory on UNIX. If you're having a hard time finding it on UNIX, just run this command:

 

find $BMC_AR_SYSTEM_HOME -name driver

 

You'll get a couple of locations to choose from.

 

Once you initialize and log in to the AR DRIVER, you can take advantage of the GLSQL query to find out how many classes are failing Automatic Identification. The results will then tell me which rules are in need of review. Now, the reason I love this is that I don't have to wait for a DBA to give me access to the database. Keep in mind that you still need to have CMDB admin access to do this.

 

So, here is an example of how to run a query on the BMC.ADDM dataset and group the results by ClassId:

 

Command: glsql

GETLIST SQL

SQL command: select count(ClassId), ClassId from BMC_CORE_BMC_BaseElement where DatasetId = 'BMC.ADDM' AND FailedAutomaticIdentification = 1 group by ClassId;

Maximum number of entries to retrieve (500):

Get number of matches? (F):

 

ARGetListSQL  results

ReturnCode:  OK

Value List List : 3 items

   Value List : 2 items

      Value:  (integer)   51

      Value:  (char)   BMC_COMPUTERSYSTEM

   Value List : 2 items

      Value:  (integer)   5963

      Value:  (char)   BMC_DATABASE

   Value List : 2 items

      Value:  (integer)   18815

      Value:  (char)   BMC_SOFTWARESERVER

Status List : 0 items

 

This tells me that I'll need to review (RECON) Identification Rules for classes ComputerSystem, Database, and SoftwareServer related to my job. The last one must have a very ineffective rule as there are 18815 failed CIs there.

 

Looking at the Identification rules for these classes and the source dataset I can see that I have created 3 rules:

 

Rule 1:

 

 

'TokenId' != "0" AND 'TokenId' != $\NULL$ AND 'TokenId' = $TokenId$

 

 

Rule 2:

 

 

'ADDMIntegrationId' != $\NULL$ AND 'ADDMIntegrationId' = $ADDMIntegrationId$

 

 

Rule 3:

 

 

'SoftwareServerType' != $\NULL$ AND 'SoftwareServerType' = $SoftwareServerType$

 

 

 

So, since Rule #3 is only checking for SoftwareServerType, which is an enumerated field, this identification is going to fail to identify new CIs as soon as there is already one CI present in the BMC.ASSET dataset with the same enumeration. The Recon Engine will not auto-identify new CIs if any of the rules have failed, and that rule would definitely cause errors after the first merge. All subsequent jobs would find a match for most of the SoftwareServerTypes because there can only be so many in the option list. Basically, an enumerated field should never be used for unique identification by itself. So, that would be a very ineffective rule to have, and I'll need to change it.

 

If I combine Rule 2 and Rule 3 together to say:

 

 

Rule 2:

 

 

'ADDMIntegrationId' != $\NULL$ AND 'ADDMIntegrationId' = $ADDMIntegrationId$ AND 'SoftwareServerType' != $\NULL$ AND 'SoftwareServerType' = $SoftwareServerType$

 

Now I have a much more effective rule that should find a match, accurately identify existing CIs in BMC.ASSET, and also auto-identify properly.

 

Once I make that change to the SoftwareServer identification rule and run the job, I can use the same GLSQL query to see if the rule now correctly identifies the CIs in BMC.ASSET and no longer fails to auto-identify new CIs.

 

 

Command: glsql
GETLIST SQL
SQL command: select count(ClassId), ClassId from BMC_CORE_BMC_BaseElement where DatasetId = 'BMC.ADDM' AND FailedAutomaticIdentification = 1 group by ClassId;
Maximum number of entries to retrieve (500):
Get number of matches? (F):

 

  ARGetListSQL  results
ReturnCode:  OK
Value List List : 2 items
   Value List : 2 items
      Value:  (integer)   51
      Value:  (char)   BMC_COMPUTERSYSTEM
   Value List : 2 items
      Value:  (integer)   5963
      Value:  (char)   BMC_DATABASE

 

This is the result I was hoping for. Next I'll need to look at Database and maybe the ComputerSystems.

 

I find this method most effective for RE identification rule and data validation.



There have been several enhancements in recent releases of BMC Remedy IT Service Management Suite, BMC Atrium CMDB Suite, and BMC Remedy AR System Server that change where data is stored, how to customize the user interface (UI), and preserve those changes during application upgrades.  These changes include:

  • BMC Remedy IT Service Management moved asset management lifecycle attributes out of the CMDB to the AST:Attributes form
  • BMC Remedy AR System Server introduced granular overlays, see The Pulse: Using granular overlays to manage customizations
  • BMC Atrium CMDB Suite addressed issues with the Synchronize UI feature, also known as Sync UI to Asset or cmdb2asset

 

In this post, I will discuss how these changes impact the guidelines for successfully extending the data model and customizing the user interface.  I will also try to put some issues and experiences in context of their role and best usage of these features.

 

Adding data to AST:Attributes in relation to CIs

 

In Asset Management version 8.0, asset lifecycle attributes, such as those that track status and costs, were moved out of the CMDB to the AST:Attributes form.  The data structure change is described in more detail in the BMC IT Service Management architecture documentation.   The relationship between the CI and the AST:Attributes record is stored as a foreign key relationship where:

 

  • If the CI is not identified, Reconciliation Identity on AST:Attributes  = Instance Id of the CI
  • If the CI is identified, Reconciliation Identity on AST:Attributes = Reconciliation Identity of the CI

 

The first case is temporary - once the CI is identified in the CMDB, the AST:Attributes record is updated to contain the Reconciliation Identity of the CI and becomes available to the Asset viewer through the Asset Management application. 

 

Below are different scenarios of how data is populated to AST:Attributes.

 

  1. ITSM is upgraded from version 7.6.04 to 8.x
    Once you upgrade to ITSM 8.x, the upgrade process moves the data from the CMDB into AST:Attributes records and removes the attributes from the CMDB.  Any workflow or integration referencing them can be updated after phase 2 of the ITSM upgrade which uses workflow to sync the legacy attribute values to AST:Attributes. The third phase of the ITSM upgrade removes the legacy attributes from the CMDB.
  2. An ITSM user adds a new CI from the Manage CIs feature, using an Asset Sandbox. (BMC.ASSET.SANDBOX)
    An entry is created in AST:Attributes with the instanceid of the CI. This is a temporary foreign key relationship until the CI is identified.  When the CI is identified by Reconciliation, workflow updates the AST:Attributes entry to have the correct value of Reconciliation Identity of the CI.
  3. An ITSM user adds a new CI from the Manage CIs console, without using an Asset Sandbox.
    The CI and AST:Attributes entry are created with a value of Reconciliation Identity to relate them.
  4. A data provider like ADDM populates data to the CMDB.
    The AST:Attributes entry is created when the CI is reconciled.
    However, an AST:Attributes record is not created for all classes.  Workflow checks if the class is one of the classes which track asset lifecycle attributes.  This list is stored in the AST:ClassAttributes form, and you can add new entries to the form if there are new classes for which asset lifecycle attributes should be tracked.
  5. Data is loaded to Asset Management application from a legacy data import or third party integration.
    Two different methods can be used - either the Data Management Tool with Transactional_CI spreadsheet or the process in knowledge article KA400491.
  6. In another scenario, data is loaded and reconciled in CMDB 8.1 and  ITSM 8.1 is installed later on the system. How are the AST:Attributes records created in this situation?
    This can be done several ways. Either create a one-time escalation to create entries in AST:Attributes with default values, or export data from CMDB to get the Reconciliation Identity and Instance IDs for the CIs and use one of the above methods to import it with AST:Attributes data (see the sketch just after this list).
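For the export half of that second approach, the driver GLSQL technique described earlier on this page can pull the key values out of the reconciled dataset. The column names below follow the standard BMC_BaseElement attribute names and the BMC_CORE_BMC_BaseElement view name used elsewhere in these posts, but treat them as assumptions to verify against your own schema:

select InstanceId, ReconciliationIdentity, ClassId from BMC_CORE_BMC_BaseElement where DatasetId = 'BMC.ASSET';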

 

Determining the proper location of new attributes

 

Before defining a new attribute, consider where is the correct place to store it. Since there are two possible locations – in the CMDB vs. on the AST:Attributes form - you have to ask the question: 
  "Should it be added as a field on AST:Attributes or should it be added as an attribute in CMDB?"

 

Below are some criteria to make that decision:

  • Is the attribute discoverable from the CI itself?
  • Is it a property of the CI itself, or an attribute which is being tracked about the CI?
  • Will the data be primarily accessed using the BMC Remedy IT Service Management applications?

 

Using these criteria, if the attribute will be added to the CMDB, the process works the same as in previous versions - add it in Atrium Class Manager and run Sync UI to update the AST form to make it visible to users of the Asset Management application.

 

If the field will be added to the AST:Attributes form, the process is essentially to add a new field to the form, and update the AST forms to expose it in the join forms.  The use of additive view overlays is recommended so these changes are preserved when you upgrade the ITSM applications.   In the example below, I added the field MyNewLifecycleAttribute to AST:Attributes and used Remedy Developer Studio to update the Computer System form to make it visible on the AST form. See Knowledge article KA404401 which describes the process in greater detail.

 

assetoverlay.jpg

 

Best Practices on making changes to the data model – in CMDB or AST:Attributes

 

Following these guidelines will avoid introducing issues when making data model changes:

  1. Extend or add to the data model, rather than changing, removing, or making attribute/field properties more restrictive.
    When a new class or attribute is added, it is certain that no existing applications or use cases will be affected. This is because the new object did not exist – so nothing could be using it.  The only concern when adding an attribute is making the new attribute or field required – because there may be a significant amount of existing data without a value specified, so adding a required attribute or field can introduce issues.
    Modifying an existing attribute – for example, to make the size smaller or to make it a required attribute – can introduce many types of issues.  There may be application features or data use cases which do not meet the new restrictions and issues may occur well after making the changes.
  2. Back up the original definitions and data before making the changes.   See the recent Connect with Atrium webinar on the Common Data Model for suggestions on how to do this for changes to the CDM.
  3. Consider the impact on existing data.  If a new attribute or field is added – even with a default value – existing records will not have this value populated.  When adding a new attribute or field with a default value, existing CMDB data import methods can be used to update existing data.  If the field was added to AST:Attributes form, either use the Data Management Tool (DMT) to load the data or see knowledge article KA400491 for a method of loading data using other import mechanisms.

 

Maintaining a consistent UI

 

After the class or attribute is added to the data model, it needs to be included in the user interface.  This happens automatically for extension of the CMDB data model because the class forms are updated to include the attribute – at least in the CMDB user interface forms. There are more details in the documentation on Control of the layout of class forms.

 

If BMC Remedy IT Service Management applications are installed on the server, it is recommended to also run the Sync UI process to update the Asset Management UI with these changes so users can access the new classes and attributes.

This feature is accessible from the Application Administration Console, by choosing from the menu:

   Custom Configuration > Asset Management > Advanced Options > Sync Asset UI with CMDB

 

This Sync UI feature – also known as Synchronize UI, also known as cmdb2asset – performs much of the work of updating forms and attaching workflow.  However, the new attribute is added to a Custom page which is not visible.  It is expected that the administrator will make the decision to either make the page visible, or perhaps move the attribute to another page on the form.  These manual changes are made in BMC Remedy Developer Studio.  For changes made in Remedy Developer Studio, the recommendation to preserve customizations is to use overlays, so we’ll explore the impact of doing that in the paragraphs below.

 

Changes made in both tools - Atrium Class Manager and Sync UI – drive workflow changes to AR System workflow base objects.  They do not directly update or interact with overlays.  This does not mean they are completely independent, as we will get into below.

 

Retaining changes during upgrades

 

Changes made through the CMDB – using Class Manager or the CMDB API – store metadata which describes the changes to the data model.  During CMDB upgrades, CMDB understands and preserves changes and additions to the data model.  Changes in the AR System workflow and database objects are driven from the CMDB metadata, so upgrading or making additional changes in Class Manager preserves the changes to the workflow objects.

 

As for the AST:Attributes form, changes made in BMC Remedy Developer Studio should always use the Best Practice Customization Mode so the workflow changes are stored on overlays.  When ITSM upgrades update forms or workflow, they update the base objects and the overlay objects are retained – except in the rare case where the base object is deleted during the upgrade. You can view the details of the workflow affected in the upgrade, which is included with the ITSM release.

 

Overlays and CMDB forms and workflow

 

As noted above, changes to CMDB classes should be made through the CMDB Class Manager or CMDB API calls.  This process updates the CMDB metadata and all related AR System forms and workflow.  Changes to the functional behavior of the CMDB should never be made directly in the AR System tools such as BMC Remedy Developer Studio because this introduces inconsistencies between the CMDB metadata and the AR System objects. This is unsupported, can cause problems, and is contrary to best practices.   The guideline to understand “functional behavior” is as follows:
  If the change can be made in Class Manager, then it should be made in Class Manager.

 

Customizing the user interface by moving attributes around on the form does not affect the functional behavior of the CMDB, nor is it a task that can be performed in Class Manager.  As this is not a change to the functional behavior of the CMDB it can be made in BMC Remedy Developer Studio.  All the standard guidelines to performing Remedy workflow customization apply here including testing on development servers, documenting changes, backing up existing workflow, and acquiring the necessary training and knowledge to understand how to customize Remedy workflow.

 

Now let’s consider an example of using overlays to make these changes, first using AR System version 7.6.04 and then with AR System version 8.1.

In this example, let's consider the case of making changes to the user interface by modifying views to move attributes on the form BMC.CORE:BMC_ComputerSystem using BMC Remedy Developer Studio.  For servers with BMC Remedy IT Service Management installed, the Asset Management forms perform the role of user interface, so this example is likely only applicable to servers without ITSM installed.

 

In AR System version 7.6.04, BMC Remedy Developer Studio is launched, switched to Best Practice Customization mode, and the BMC.CORE:BMC_ComputerSystem form is edited.   An overlay of the form is created. This makes a copy of all of the properties of the form – the fields on the form, the field properties, and the views. Changes are made to the views according to the requirements and the changes are saved. At this point everything works fine – AR System is using the overlay instead of the definition of the BMC.CORE:BMC_ComputerSystem form, but the only difference between the two are the changes to the view.  Next, let’s consider the case that a new attribute is added to the class in Class Manager.  This process updates the underlying AR System forms and workflow to include this attribute – it updates the base objects not the overlay.  This is the designed behavior – because BMC code changes are made in the base.  However, note that we have just introduced a discrepancy between the CMDB metadata and the AR System objects!  The CMDB metadata includes the new attribute, and it expects the AR System forms also have this field.  However, since AR System is using the overlay instead of the base object, AR System believes the field does NOT exist on the form.

 

This is sometimes explained as "overlays being incompatible” with CMDB forms, and sometimes explained as they “conflict” with CMDB changes.  But in reality, the challenge is that the use of overlays introduces data model inconsistencies between the CMDB and AR System objects.   This is usually where errors would be encountered. One way to fix this is to delete the overlay and re-create it. This is essentially making another, more recent, copy of the form properties to address the data model inconsistency, until another attribute is added via CMDB Class Manager. Ironically, the use of overlays to adjust the view is mostly unnecessary because the CMDB data model changes used an inheritance model to update the forms, so customizations to the view are generally preserved when classes are updated.    So on CMDB versions 7.6.04 – the best advice is still “do not use overlays on CMDB forms”.

 

Creating an overlay on AR System 7.6.04 required creating an Overwrite overlay for the entire form – data model properties and view properties alike.  In AR System 8.1, the concept of overlays was extended to support more types of overlays – additive, overwrite, inherit – and to be more granular.  So on AR System 8.1, you can create an additive overlay and create it on only the views of a form.  This allows making changes to the view without impacting the functional behavior of the data model.  So in AR System 8.1 or later overlays can be used on CMDB forms if proper care is taken to only create additive overlays and only on view properties.

 

 

Sync UI and overlays

 

The role of Sync UI to Asset is to update the Asset Management user interface forms to reflect additional classes or attributes added to the data model in the CMDB.  It leverages metadata in the SHR:SchemaNames form, plus an inheritance model that leverages existing forms and workflow for parent classes to either build or update the form.  Sync UI creates and updates base objects – which is appropriate for BMC-made workflow changes.

 

The SynchronizeUI log generated on the server gives a good picture of the activity it performs – checking if the form already exists, looking for new attributes, and attaching workflow which is related to parent class forms to ensure the form works consistently.  This log is also a good reference while syncing to determine when the process is complete. Sync UI updates the existing form rather than removing and recreating it, so existing changes to the form are retained. Sync UI does not have metadata to tell it where to add new attributes so it follows a standard routine of adding new attributes in the next available position on a set of custom tabs. This avoids the need to manually edit each AST form in the class hierarchy to add the attribute as described in KA404401 for adding an attribute via AST:Attributes.

 

How does this behavior impact the use of overlays?

In AR System 7.6.04, all overlays are what we now call "overwrite" overlays.  When you create an overlay of a form, for example AST:ComputerSystem, AR System uses your overlay instead of the base object.  Sync UI updates the base object, so the expected behavior is that changes made via Sync UI are not visible on the form – only the changes in your overlay exist.    Since the fields are propagated to the base object, BMC Remedy Developer Studio could be used to simply add them to the overlay – but this is a manual step. Creating an overlay of workflow which governs the behavior of the AST form also causes problems on AR System 7.6.04.  When you create a new class in CMDB and run Sync UI, it updates many active links and filters to also run on the new AST form generated.  If there is an overlay on the workflow, it supersedes the base object and the workflow does not execute on the new form. To avoid problems like this, it is better to generate all the classes in CMDB, use Sync UI to build the AST forms, and only then create overlays on the AST forms.

 

In AR System 8.1 and later, granular overlays allow overlaying smaller components of the form, and they can be designated as Additive overlays.  This allows the overlaid objects to be maintained separately, and both base and overlaid attributes are displayed on the form. As long as granular additive overlays are used and fields are not positioned in a way that conflicts with the usual placement of new attributes added by Sync UI, the overlays should work fine with Sync UI. But it means you may need to review the coordinates of fields on the form.

 

 

Sync UI fixes

 

A number of fixes have been made to Sync UI in recent CMDB service packs.  The expected behavior is for Sync UI to bypass AR System configurable options meant to restrict end user behavior.  For example, the Max-Entries-Returned-by-Getlist setting should not block Sync UI.  This is considered a defect because the automated tool Sync UI should not be bound by this restriction.   This defect is fixed in CMDB 7.6.04 SP5 and will be fixed in the upcoming CMDB 8.1 Service Pack 1, along with an issue described in knowledge article KA399521 regarding Sync UI updating the Status attribute to be invisible.  See the release notes for specific issues fixed in Sync UI.  I only mention it here because some of these issues are sometimes considered evidence of a conflict with overlays, when in fact, they are just defects fixed in the latest service pack.

 

How Auditing and Archiving changes with data stored in AST:Attributes

 

Asset lifecycle attributes are often the ones for which you would be most interested in auditing changes, since they are made by users instead of discovered automatically.  Since these attributes are no longer stored in the CMDB, there is a slightly different way of enabling auditing, but both mechanisms ultimately use the same AR System feature to perform the auditing. See my colleague Jared's recent blog post – The Pulse: Auditing your Assets – for more details on this topic.

 

AST:Attributes only stores current attribute values so consider whether this data needs to be archived when archiving data from the CMDB, if the goal will be to capture data at a particular point in time.

 

Terminology Clarification

 

The names of the forms AST:Attributes and AST:ClassAttributes are similar in name to CMDB metadata forms which perform completely different roles, so for clarity, I always refer to these forms by their explicit full names.  Below is a quick summary of forms, whether they belong to CMDB or Asset Management, and their purpose.

 

  • AST:Attributes - Asset Management form that contains data instances
  • Attribute Definitions (OBJSTR:Attributes) - CMDB definition of attributes and their properties
  • AST:ClassAttributes - Asset Management form that holds the list of forms which use AST:Attributes
  • Class Attributes - CMDB definition of attributes on the class

 

 

Key takeaways

 

Below is a summary of the key points addressed in the article:

 

Both CMDB data model and Sync UI changes to the interface forms act on the AR System base workflow objects.  Neither is aware of or directly interacts with overlays in AR System.  AR System overlays can be used effectively with CMDB and AST forms by using additive overlays specifically to views – to adjust the user interface but not change the functional behavior.  On AR System versions prior to version 8.1, it was not possible to create such granular or additive overlays, so using overlays if you plan to afterwards run Sync UI on those versions is not recommended.

 

When adding an attribute, consider whether it is a discoverable attribute of the CI itself or a non-discoverable attribute used to track the lifecycle of the asset.  This will help determine whether the attribute should be added to the CMDB or to AST:Attributes.  See knowledge article KA404401 for the steps to add attributes to AST:Attributes.

 

Always use additive view overlays for changes made in BMC Remedy Developer Studio, including changes to the AST forms after running Sync UI.

 

To audit changes to attributes stored on AST:Attributes, enable auditing on that form via BMC Remedy Developer Studio.

 

Archiving, deleting, or resetting ReconciliationIdentity data in the BMC.ASSET dataset was never a good idea, because it orphans ITSM relationships which refer to the CIs by ReconciliationIdentity. Data in AST:Attributes is now added to the list of data that would be lost or orphaned.  When data is removed or reset using cmdbdiag, the operation runs at the database level, so filters which would remove or update data in ITSM forms do not fire.

 

I hope this summary provides clarity on how to use these features effectively and helps make sense of past experiences and issues encountered.  Please use the comments below to provide feedback.  See BMC Remedy Pulse Blogs for similar posts.

Share: |


As a member of BMC Customer Support, I have helped many customers work through data issues in their CMDB.  I have found many issues are caused by implementation decisions, unexpected dependencies, and poor data loaded into the CMDB long before problem symptoms appeared. I recently contributed a topic to the BMC Atrium CMDB Suite 8.1 product documentation on Investigating CMDB Data issues which shares my experience on how to investigate, troubleshoot, and resolve these kinds of issues when they do occur.   The goal of this blog post is to share ways to successfully manage data in your CMDB so you never have to look at that troubleshooting document.

 

Several of the Atrium webinars have discussed best-practice principles for using BMC Atrium CMDB Suite and BMC Atrium Discovery and Dependency Mapping, and they provide great context on overall planning, implementation decisions, and designing for performance.  In this post, I will restrict the topic to the data management aspects of the overall plan and data flow.

CMDBdata.png

 

Define your use cases and determine what data you need in the CMDB

 

The first step is to define your use cases and determine what data you need in the CMDB.   The single worst decision you can make is to put everything you have in the CMDB and let the tools sort out the mess later. A good use case includes each of the following:

  • Which devices do you need to manage in the CMDB?
  • What kind of data – which CI classes and relationships?
  • Which applications and users will consume the data in the CMDB?
  • How will the users and applications consume or access the data?

 

A few examples illustrate the point:

 

Example 1:  I need server and application information for all servers in the data center.  This includes computer systems, application services, databases and relationships between them. I will use this data to manually define service models which define how these CIs support business services. I will use Event Management monitoring of these servers to implement Intelligent Ticketing by creating incidents in ServiceDesk that show the impact on critical business services.  Automated compliance jobs will verify these servers in the data center are configured properly and create change requests to bring them under compliance if necessary.

 

Example 2: Service Desk technicians need comprehensive, current information about workstations so they can investigate issues effectively.

 

The first example illustrates a case where several classes of CIs and relationships need to be populated in the CMDB.  Some features, such as defining the relationships to business services, must be done in Atrium CMDB Suite because the features which implement them require the data in this location.  There are also fewer server systems, they change less frequently, and they are managed more closely; so in addition to several applications requiring the data to be stored in the CMDB, there is also more value in propagating the information to a central location where it can be leveraged by all the different applications.

 

The second example is an opportunity for federation.  Since the main use case is for service desk technicians to view the information, check whether this information can be viewed in the discovery application by reference from the computer system.  This scenario, where you populate minimal information to the CMDB, reduces the amount of data to be managed.  Now assume a third use case is added: software license management for Microsoft Office products.  This functionality in BMC Remedy Asset Management requires the product information to be stored in Atrium CMDB.  To accommodate this third use case, you can populate product information from these computer systems to the CMDB, but only for Microsoft Office products, because those are the only products required for the use case.

 

This strategy of populating only what you need in the CMDB reduces the amount of data to be managed, which simplifies many aspects of managing the data.

 

Identify the fewest, best data providers necessary for the use cases

 

Let’s assume we have the following data providers available for the use cases described above:

  • BMC Atrium Discovery and Dependency Mapping (ADDM), which discovers information about servers in the data center
  • BMC BladeLogic Client Automation, which discovers information about workstations, including the last time installed products were accessed.
  • Microsoft System Center Configuration Manager (SCCM), which contains information on both servers in the data center and end user workstations.
  • A data export from a legacy asset tracking application, which has manually entered information which cannot be discovered from the systems.

 

Which data provider should you use for discovering servers in the data center?  Which data provider should you use for discovering workstations?

Putting the data export aside for the moment, there are two products that have server information. Let’s assume that after comparing BMC ADDM and Microsoft SCCM, the former was considered the better product for server discovery in the data center because it discovers more comprehensive application relationships.  Is there value in also discovering the same servers using Microsoft SCCM and populating that data to the CMDB?  There are three factors to consider before adding a second data source for the same CIs:

  • Do both data sources provide identical values for sufficiently many identification attributes?
  • Does the second data source provide better data than the first for non-identification attributes?
  • Are the strengths of the second product needed for planned use cases?

 

Note: The first requirement is described further in a later section on identifying data deficiencies.

 

Similarly, the same evaluation should be made when considering the best single data provider to use for discovering workstations and user devices.

 

In my experience, the best starting point is to use:

  • one discovery provider for servers in the data center, which provides the best information about servers, and only discovers servers
  • a second discovery provider for workstations and non-server devices, which is specialized for this purpose, and only discovers workstations
  • no overlap between sources, so that no device is discovered by multiple discovery sources

 

Secondary data providers can be added later if they meet all three requirements described above.

 

By contrast, consider the case of using a single discovery provider for both servers and workstations because it is already deployed and appears to be good enough for the initial use cases for the CMDB.  A year later you determine there is a need for a more specialized discovery product for servers to capture the detail you need, and a third discovery product to discover and manage workstations.   Since operators have been working with the existing data, you cannot simply remove it and replace it with a new data provider, because the existing relationships must be retained.  This can lead to a situation where data from extra data providers must be managed in the CMDB for historical reasons, even though they provide no benefit over the preferred data provider.

 

The key point here is that evaluating discovery requirements up front can save a lot of effort managing data in the CMDB in the future.

 

 

Handling legacy data imports

 

It is often recommended to use an automatic discovery process to populate and periodically update data in the CMDB. Discovery and data source providers are key participants in data management in the CMDB, so if the data source does not maintain the data, who does?

 

The general principle is: use automatic discovery for any attributes which can be automatically discovered by a discovery product, and use manual data only for attributes which are not discoverable.

 

A frequent challenge I see in managing data in the CMDB is legacy or manual data that was imported as the first data source in the CMDB.  This introduces any data issues from the legacy system directly into the CMDB, and they may not be recognized until much later, when they are difficult to fix.

A far better approach is to do the following:

  1. Use an automatic discovery data provider to populate a source dataset, and reconcile it first into the golden dataset.
  2. Load the legacy data into a source dataset, for example BMC.LEGACY
  3. Create a Reconciliation job which does NOT use standard rules.  Add an Identification activity which does NOT generate IDs for the source dataset (BMC.LEGACY in our example). Define identification rules based on the data populated in the legacy dataset which should match the discovered data, but do not auto-identify, so any data that does not find a match remains unidentified.
  4. Run the reconciliation job.
  5. Investigate any CIs in the Legacy dataset which are not identified. This may reveal cases where the legacy data has incorrect data for identification attributes, cases where the CI is no longer in the environment, or cases where discovery has not discovered the device for other reasons.
  6. Add a precedence group which assigns precedence for attributes which are not discoverable, or for which the legacy data is preferred.
  7. Add a Merge activity to the custom Reconciliation job, and run the job to merge the data together.
  8. Look at the data in the golden dataset to verify it correctly includes the best data from both sources.

 

This process ensures legacy data is applied only to CIs which discovery confirms are present in the environment, and that it only updates attributes for which the legacy values are preferred.
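
 

To make the matching behavior in steps 3 through 5 more concrete, here is a minimal sketch in Python. It is an illustration only, with made-up attribute values; in the product, this matching is performed by the Reconciliation Identification activity, not by custom code.

```python
# Illustrative sketch only: in the product this matching is performed by the
# Reconciliation Identification activity. Attribute names and data are examples.

GOLDEN = [  # CIs already identified in the golden dataset (BMC.ASSET)
    {"ReconciliationIdentity": "RE001", "SerialNumber": "ABC123", "Hostname": "web01"},
    {"ReconciliationIdentity": "RE002", "SerialNumber": "DEF456", "Hostname": "db01"},
]

LEGACY = [  # CIs loaded into the BMC.LEGACY source dataset
    {"SerialNumber": "ABC123", "Hostname": "web01", "Owner": "Finance"},
    {"SerialNumber": "ZZZ999", "Hostname": "old-host", "Owner": "HR"},
]

def identify_legacy(legacy, golden):
    """Copy the ReconciliationIdentity onto legacy CIs that match a golden CI.
    Never generate new identities, so unmatched CIs remain unidentified."""
    identified, unidentified = [], []
    for ci in legacy:
        match = next((g for g in golden
                      if g["SerialNumber"] == ci["SerialNumber"]), None)
        if match:
            ci["ReconciliationIdentity"] = match["ReconciliationIdentity"]
            identified.append(ci)
        else:
            unidentified.append(ci)   # step 5: investigate these by hand
    return identified, unidentified

identified, unidentified = identify_legacy(LEGACY, GOLDEN)
```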

 

Analyze data dependencies and deficiencies

 

Earlier, I introduced the need for data sources to provide identical identification attribute values for the same CI.   This is important so the CI can be identified as the same one through Reconciliation Identification.  For example, the standard identification rules for a computer system include combinations of the following attributes: TokenId, Hostname, Domain, SerialNumber, isVirtual, and PartitionId.  Correctly matching the same computer system discovered by two different data sources requires matching values in at least one of these sets of identification attributes. For example, the third standard identification rule for ComputerSystem looks for a match on three attributes: Hostname, Domain, and isVirtual.   Looking at the data itself is important here.    If the hostname is stored as “Unknown” when it cannot be discovered, this presents challenges when using the attribute value for identification.
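
 

To illustrate why placeholder values are a problem, below is a rough sketch of that third rule as a Python predicate. The attribute names come from the standard rules above; the handling of “Unknown” values is an assumption about how you might guard against placeholders, and the product itself expresses these rules as qualifications rather than code.

```python
# Sketch of a match on the third standard rule (Hostname, Domain, isVirtual).
# The product expresses its rules as qualifications; this is only an illustration.

PLACEHOLDERS = {None, "", "Unknown"}

def matches_rule3(source_ci, target_ci):
    """True when both CIs agree on Hostname, Domain, and isVirtual, and none of
    those values are placeholders such as 'Unknown' or empty strings."""
    for attr in ("Hostname", "Domain", "isVirtual"):
        if source_ci.get(attr) in PLACEHOLDERS:
            return False          # placeholder values cannot safely identify a CI
        if source_ci.get(attr) != target_ci.get(attr):
            return False
    return True

# If two sources both record Hostname as "Unknown" and that value were allowed,
# the rule would match many unrelated systems and produce multiple-match errors.
```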

 

Note: Many classes of CI derive part of their identity from the computer system which hosts it.  For example, the default identification rules for an IPEndpoint are:

  1. <Related to the same computer system> AND <matching values of TokenID>
  2. <Related to the same computer system> AND <matching values of Name>

Since the earlier step validates that data requirements are met for the computer system, this step just confirms that one of the TokenId and Name attributes is populated consistently between the two datasets.

 

The key takeaway here is to understand the data dependencies and examine the data before reconciling it in the CMDB.  Failure to do this can lead to duplicate CIs in the CMDB and other challenges managing data in the CMDB.  It is better to understand the deficiencies in the data before introducing it into production.

 

Another kind of data deficiency is incomplete data in the data source.   For example, the combination of ManufacturerVendor and ProductName is used for product catalog normalization.  How should you handle the case where the data source only includes one of these values, potentially even after discovering the data from the device itself?   Normalization aliases can address some of these challenges, but the right way to address them depends on the particular situation and the uniqueness of the data available.  See knowledge article KA403062 for further detail.
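
As a loose illustration of the alias idea (not the Normalization Engine’s actual API or catalog content), the sketch below maps a discovered product name to a catalog entry that supplies the missing manufacturer. The alias entries and attribute names are assumptions made up for the example.

```python
# Illustrative alias lookup only; real mappings live in the Product Catalog and
# are applied by the Normalization Engine, not by custom code like this.

PRODUCT_ALIASES = {
    # discovered ProductName          -> (catalog ProductName, ManufacturerVendor)
    "Office Professional Plus 2013": ("Microsoft Office Professional", "Microsoft"),
    "MS Office Pro 2013":            ("Microsoft Office Professional", "Microsoft"),
}

def resolve_product(ci):
    """Return a copy of the CI with catalog values filled in, or None when the
    discovered name has no alias and needs a manual catalog decision."""
    alias = PRODUCT_ALIASES.get(ci.get("ProductName"))
    if alias is None:
        return None
    resolved = dict(ci)
    resolved["ProductName"], resolved["ManufacturerVendor"] = alias
    return resolved
```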


Determine how to handle deleted data

 

Another important element of managing data in the CMDB is requiring discovery sources and data providers to update data in the CMDB when it changes. This includes removing the CIs and relationships when they are removed from the data source or the environment.  The data provider updates the CIs in the source dataset to be soft-deleted, Reconciliation propagates this change to the data in the golden dataset, and Reconciliation purge jobs remove it from both datasets in the CMDB.   At the completion of this process, the data is no longer in the data source, nor in the source dataset in CMDB, nor in the golden dataset in CMDB – so it is consistent throughout.

 

There is sometimes a requirement to retain Computer Systems after they are removed from the environment for reporting purposes, or to retain relationships to incidents or change requests in the BMC Remedy IT Service Management Suite.   This presents a few challenges:

  • When a computer system is soft-deleted, all the relationships to hosted CIs are also soft-deleted.
  • Some of the computer system details are not visible because the relationships are deleted.
  • The computer system still resides in the golden dataset with identification attributes populated.
  • If the computer system is later replaced with a device with similar identification attribute values, the new computer system can reconcile with and update the old one, creating a mix of obsolete and current data.

 

This series of events can cause big problems for data management.  One effective method to address this situation appears to be:

  • Use a Reconciliation Copy Dataset activity to archive data to a separate dataset before the computer system is soft-deleted.  This preserves the relationships between the computer system and its processors, memory, and IPEndpoints for archival purposes.
  • Purge classes other than ComputerSystem in the golden dataset.
  • When the computer system is soft-deleted, update the AssetLifecycleStatus and Name to indicate it is no longer in the environment, and update the identification attributes so they will never match again.  For example, append -old to all of the values.

 

This approach seems to avoid most data management problems because it removes most of the data as appropriate and retains only a minimal number of CIs which no longer exist in the environment but are kept for historical reasons.
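
 

Here is a small sketch of the renaming step, assuming the retired computer system record is held as a simple dictionary. The attribute list follows the standard identification rules mentioned earlier; the status value and the -old suffix are example conventions, not product defaults.

```python
# Illustrative only: mark a retired computer system so identification rules can
# never match it again. Attribute names follow the standard identification rules;
# the status value and "-old" suffix are example conventions, not product defaults.

ID_ATTRIBUTES = ("TokenId", "Hostname", "Domain", "SerialNumber", "PartitionId")

def retire_computer_system(ci, suffix="-old"):
    retired = dict(ci)
    retired["AssetLifecycleStatus"] = "End of Life"   # indicate it has left the environment
    for attr in ("Name",) + ID_ATTRIBUTES:
        value = retired.get(attr)
        if value:                                     # only rewrite populated values
            retired[attr] = f"{value}{suffix}"
    return retired

old = {"Name": "web01", "Hostname": "web01", "Domain": "corp.example.com",
       "SerialNumber": "ABC123", "AssetLifecycleStatus": "Deployed"}
print(retire_computer_system(old))
```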

 

Examine data distribution periodically

 

In my experience, following the guidelines described above leads down a much more successful path for managing data in your CMDB.   It reduces the number of participants and overlapping data and it identifies and avoids problem areas by evaluating data quality and dependencies before failures are encountered downstream.    Periodically examining the amount of data in your CMDB is another good practice. Some interesting counts would include:

  • Number of Computer Systems per dataset
  • Number of CIs in related classes per dataset
  • Number of CIs per class by value of NormalizationStatus
  • Number of unidentified CIs by class and dataset
  • Number of soft-deleted CIs by class and dataset

 

 

These queries can provide insight into the size and location of data discrepancies, and where to focus your efforts in investigating data issues.

 

I shared some of the queries I use for evaluating these counts in Database queries for evaluating CMDB data distribution in the product documentation, and there is also a data report snapshot idea proposed on Communities for making this data more accessible in the product.
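
 

To give a feel for what such a query looks like, below is a hedged example that counts computer system records per dataset. The view name follows the usual naming pattern for the class’s database view, but it is an assumption; confirm the actual view or form name in your environment, or use the documented queries linked above.

```python
# Example only: count BMC_ComputerSystem records per dataset directly from the
# database. The view and column names below assume the usual naming pattern;
# check your own schema or the documented queries before relying on them.
import pyodbc

QUERY = """
SELECT DatasetId, COUNT(*) AS ComputerSystems
FROM BMC_CORE_BMC_ComputerSystem
GROUP BY DatasetId
ORDER BY COUNT(*) DESC
"""

def dataset_counts(dsn):
    with pyodbc.connect(dsn) as conn:
        return list(conn.execute(QUERY))

# Example usage (the DSN is a placeholder):
# for dataset_id, count in dataset_counts("DSN=ARSystemDB"):
#     print(dataset_id, count)
```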

 

Hopefully this article provides some insight into ways to manage data in your CMDB more successfully by using a few good practices including:

  • Define your use cases and determine what data you need in the CMDB
  • Identify the fewest, best data providers necessary for the use cases
  • Handle legacy data imports conservatively
  • Analyze data dependencies and deficiencies
  • Determine how to treat deleted data
  • Examine data distribution periodically

 

These practices have worked well for me to avoid some of the challenges of managing data in the CMDB.  I am interested in your experiences in this area.  Are there other good practices that should be added to the list? Please provide feedback or share your own tips and techniques for addressing these challenges and maintaining a low-maintenance CMDB with high-quality data.

Share: |


My blog post 12 Steps to a systems approach to diagnostics covers a complete list of diagnostics that can be used to check the overall health of enterprise applications. In this post, I will share thoughts on how features of the BMC Atrium CMDB Suite map to them, some of the lesser known diagnostic features, and ways to validate the overall health of the application.

 

 

Verify the Product Installation and Environment

 

It is important to verify the product is successfully installed or upgraded before you start using it.  For this reason, there is a post-install routine included in the installer to verify the installation was successful by unit testing a few features.   Did you know this tool can also be run months later to verify the features are still working?  Launch AtriumCoreMaintenanceTool from the server and access the Health Check page.  See Performing a Health Check in the product documentation for more details on it. This is an example of a component test diagnostic. The Health Check can take some time to complete, so be sure to perform it on a development server first to understand the impact and duration before running it on a production system.

 

Another useful tool to bring back from install time is the BMC Remedy Pre-Checker for Remedy ITSM 8.1 (unsupported) which checks environmental dependencies before running an ITSM install or upgrade.  The ITSM dependencies are generally a superset of the Atrium CMDB install dependencies.  Some of these values may have been modified since installation time, so this utility can quickly point to those discrepancies.   The Pre-checker is what I referred to as a configuration analyzer.  It is available in communities instead of bundled in the product. That is one way to address the need for automatic updates to analyzers - the latest version is posted in the community.  This allows additional verification options to be added easily based on experience after the product version is released.

 

Update: March 2014:  See The Pulse: The Pre-checker becomes Remedy Configuration Check Utility for a change in BMC Atrium CMDB Suite version 8.1.01 - including an expansion of the functionality to cover some post-installation configuration checks.

 

Verifying Job Run Status

 

Most of the work performed in BMC Atrium CMDB is performed by jobs. To quickly assess the health of the system, you can perform these steps to view the Job History:

  1. Access Atrium Integrator, set ‘Show all Job Runs With Status’ to “Failed”, and select each job to check whether any run failed. The default behavior is to show the past week of job runs.
  2. Access Normalization, set ‘Show all Job Runs With Status’ to “Failed”, and select each job to check whether any run failed.
  3. Access Reconciliation, set ‘Show all Job Runs With Status’ to “Failed”, and select each job to check whether any run failed.


An understanding of the implementation strategy helps in performing this analysis.  For example, on several occasions when checking the job run status, the issue turned out to be a job that was running unexpectedly; in those cases it was particularly helpful NOT to filter on failed job runs, because viewing all the job runs was more revealing. Another reason to look at the latest job runs, and not exclusively failures, is that a job may have completed but encountered failures on some CIs.  Checking the last run of each job to see whether it was successful, how many errors were encountered, and when the next run is scheduled is a better way to evaluate the health of the business processes that matter most.
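
As an illustration of that last point, here is a small sketch that evaluates the most recent run of each job rather than filtering on failures. The field names are assumptions; in practice this information comes from the job history consoles described above.

```python
# Sketch only: evaluate the most recent run of each job instead of filtering on
# failed runs. Field names are illustrative assumptions; the real data comes from
# the Atrium Integrator, Normalization, and Reconciliation job history consoles.
from datetime import datetime

def job_health(job_runs, now=None):
    """job_runs: dicts with job_name, start_time (datetime), status,
    error_count, and next_scheduled (datetime or None)."""
    now = now or datetime.now()
    latest = {}
    for run in sorted(job_runs, key=lambda r: r["start_time"]):
        latest[run["job_name"]] = run                  # later runs overwrite earlier ones
    summary = {}
    for name, run in latest.items():
        issues = []
        if run["status"] != "Success":
            issues.append("last run did not succeed")
        if run.get("error_count", 0):
            issues.append(f"{run['error_count']} CIs reported errors")
        nxt = run.get("next_scheduled")
        if nxt is None or nxt < now:
            issues.append("no future run scheduled")
        summary[name] = issues or ["healthy"]
    return summary
```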

 

Below is a screenshot of the Reconciliation Job History for a particular job, where it shows the job runs daily and has succeeded each time it was run in the last week:

 

REJobHistory.jpg

Integration / Federation / Connectivity unit tests

 

Several integrations for BMC Atrium CMDB Suite involve connecting to Atrium, and others connect from Atrium.  Connectivity tests are more meaningful when performed from the consuming or acting application.   For example, Atrium Integrator may retrieve data from many different data sources to import into CMDB, and since Atrium Integrator is the actor, it is best to check the status there.  In the case of Atrium Web Services, you can verify the service is running, but unit testing the integration means testing from the consuming application.

 

Atrium CMDB Suite integrates with other applications as a consumer via Federation.  One useful way to unit test these integrations is to maintain a single CI – such as a particular computer system – which integrates to each of the federated data sources.  This allows a quick unit test by viewing the CI in Atrium Explorer and launching to each federated source.

 

Data model and data quality

 

Two of the least known diagnostics to test the overall health of the product are command-line utilities accessed from the server – cdmchecker and cmdbdiag.

 

Cdmchecker is what I referred to as a “metadata consistency checker”; it checks for issues in the CMDB class definitions and the underlying AR System form definitions. Making changes to class definitions in Atrium Core Class Manager makes corresponding changes to the AR System forms and workflow, which is the designed behavior and rarely encounters issues. Discrepancies can occur when migrating workflow changes via BMC Remedy Developer Studio, or when failed changes are left in place and encountered later.  Cdmchecker provides a quick way to validate the Common Data Model definitions and either confirm or rule out issues in the class definitions.  Learn more about cdmchecker in the documentation.

 

Cmdbdiag is another command-line utility, used for identifying and correcting issues in the instance data. This utility is included in the product and can be run from the system console.  It is an administrator-only diagnostic which can check for and address data quality issues by running queries against the database to identify incorrect data, or to perform bulk updates.   Because the updates run directly against the database, they run faster and bypass validation workflow, but it is important to make a database backup and perform the steps carefully because user error can impact a large amount of data.  Learn more about cmdbdiag in the documentation.

 

 

Diagnostic collection and error analysis

 

AtriumCoreMaintenanceTool is accessed from the server. Clicking Zip Logs or Zip All Logs collects system, configuration, and log files into a zip file. See Collecting diagnostics using Log Zipper for more information about how to run it.  The Maintenance Tool is used by many BMC Software products, and the procedure described in this video gives more details on how to analyze the files captured.

 

There is no error log analyzer used with Atrium CMDB Suite at this time.  It does not appear to be a productive approach for evaluating the overall health of BMC Atrium CMDB Suite, for a few reasons:

  • The other methods described above provide a better overview of what is working or not working in the application.
  • Errors are recorded to different files, with file names based on the job and the maximum log size
  • Processes are multi-threaded and process CIs in parallel
  • Actions which fail and are subsequently fixed still report the original error
  • Log files outside the time of interest must be archived or removed, which is a manual process at present

 

It is possible that future improvements in these areas may make investment in a log analyzer more productive.  Today, log analysis is better suited to investigating an encountered issue than to testing overall product health.

 

End-to-end system validation

 

As described in 12 Steps to a systems approach to diagnostics, an end-to-end system validation is most successful when implemented as part of a solution which drives or limits post-installation deployment options.  The features of BMC Atrium CMDB allow for easy automation, so it is possible to build such a validation, but nothing is productized.  The effectiveness of such a diagnostic is largely influenced by picking use cases that are meaningful but not disruptive.  For example, to test the functionality of:

  1. Importing computer systems from an external data source
  2. Normalization of the computer system
  3. Reconciling it into BMC.ASSET


An Atrium Integrator job could be scheduled to import a single computer system and update it in a source dataset; this could trigger normalization via in-line normalization, and a scheduled or continuous Reconciliation job could update the computer system with a value passed in from the import.   The single computer system could be the same one described above used to test federation.  Would this be a meaningful test to automate?   Not necessarily.  Most data updates involve a more substantial amount of data, are scheduled to occur in appropriate windows, and the errors encountered reflect data and workflow constraints. Automating a single transaction to occur frequently may point to simple issues such as a job not running, but may also produce false positives when an operation is slow because it occurs at the same time as the business processes it is meant to test.

 

A BMC Atrium CMDB Systems Approach to Diagnostics

 

In this post, I covered the main features used to take a systems approach to diagnosing issues with BMC Atrium CMDB Suite, including:

  • AtriumCoreMaintenanceTool Health Check and Pre-checker to validate the system configuration and installation
  • Job History for Atrium Integrator, Normalization, and Reconciliation
  • Atrium Explorer to view a test Computer System for validating federation
  • Cdmchecker to validate the data model and class definitions
  • Cmdbdiag to validate and correct instance or relationship data
  • Log Zipper to collect relevant diagnostics and view them

 

I also discussed a few diagnostics which are not yet used to take a systems approach to diagnosing issues with BMC Atrium CMDB, including:

  • Error analyzer
  • End-to-end system validation

 

and some of the challenges to making those kinds of diagnostics relevant and meaningful.

 

This post represents my own opinions and experiences and does not necessarily represent BMC Software's position, strategies, or opinions.  I am interested in feedback on your own experiences.  What diagnostic steps or utilities have you found useful to isolate an issue by evaluating the overall health of the product?  What features would you like to see in the product?  Please add your comments below.

 

Jesse

Share: |


BMC Atrium Core version 8.1 was released in late February, but the majority of issues raised this month are from earlier versions:

  • version 8.1 = 4%
  • version 8.0 = 14%
  • version 7.6.04 = 61%
  • older versions = 21%

    

A recent trend has been an increase in Reconciliation identification issues.   This post shares useful resources to get to the bottom of these issues quickly. 

 

Successful identification involves four steps:

  1. Discovery populates identification attributes in the CMDB with values consistent with other discovery sources, which together can uniquely identify the CI.  See more info on docs.bmc.com
  2. Weak-related CIs are related to a computer system or CI which has a strong identity. This is in addition to populating identification attributes, typically the Name attribute.
  3. The target dataset being identified against, typically BMC.ASSET, has consistent data in these attributes.
  4. Reconciliation identification rules look for matches in the target dataset, successfully identifying the CI as one in the target dataset.

 
The reconciliation activity is typically the step which reveals an issue, either a multiple match error or a failure to identify the CI, but investigation requires looking at all four steps in the process to determine where the issue originated.
  

 

 

When investigating issues, below are a few standard recommendations:

 

  1. Pick an example to examine in detail, looking at the data in the source and target datasets.  Verify weak-related CIs have an active relationship to their strong member.
  2. Check for weak members with no relationship to the strong instance.  For example, a processor with no HostedSystemComponents relationship to a computer system does not have enough information to be identified.   This is typically encountered when the CI is soft-deleted but still considered for identification.
  3. Use cmdbdiag to reset the Reconciliation Identity of any CIs used in individual tests.  This properly simulates the beginning of identification for a computer system and its related components.
  4. Once the issue is identified, query on the data to understand the scope of the issue and when the issue was introduced. 

 

The troubleshooting documentation includes a more exhaustive process to investigate unidentified CIs, multiple match errors, and duplicate CIs in BMC.ASSET.   When examining an example in detail, check each of the dependencies as in the flowchart:

CMDB_DuplicatesinMaster.png

It is worth noting the standard reconciliation identification rules only work if the discovery provider can populate the identification attributes they use.  In cases where discovery does not provide these values consistently, an alternative approach is to:

  1. Restrict the computer systems discovered by the discovery source so there is no overlap of systems
  2. Populate a unique identifier to a CMDB attribute, as part of the bulk load integration
  3. Define reconciliation identification rules that match on the IntegrationId used above.

 

See the Bulk Load Integration documentation regarding design best practices.
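
 

As a rough sketch of this alternative approach, the snippet below stamps each loaded record with a unique integration identifier and matches only on it. The attribute name IntegrationId comes from the steps above; the key used to build it and everything else in the example are assumptions for illustration.

```python
# Illustration of matching on a single unique identifier populated at bulk load.
# The attribute name, key name, and id format are assumptions for the example.

def tag_records(records, source_name, key="AssetTag"):
    """Stamp every record loaded from this source with a stable IntegrationId
    derived from the source system's own unique key."""
    for rec in records:
        rec.setdefault("IntegrationId", f"{source_name}:{rec[key]}")
    return records

def matches_integration_rule(source_ci, target_ci):
    """Custom identification rule: match only when IntegrationId values agree."""
    value = source_ci.get("IntegrationId")
    return bool(value) and value == target_ci.get("IntegrationId")

loaded = tag_records([{"AssetTag": "A-1001", "Name": "web01"}], "LegacyAMS")
print(loaded[0]["IntegrationId"])   # LegacyAMS:A-1001
```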

 

 

There are a few known issues with Reconciliation identification with workarounds which are covered in the knowledgebase, see the following articles for more information:

KA344625 Reconciliation fails with multiple match errors for LPAR computer systems populated by ADDM, because they have duplicate serial numbers

KA344629 Reconciliation identification fails on some BMC_SoftwareServer, BMC_Database, BMC_ApplicationServices CIs with identical names.

Share: |


-by Van Wiles, Lead Integration Engineer

 

I was trying to write an AXIS client to invoke an ARS web service tonight, and found something annoying in the WSDL that ARS creates for its web services.

 

There's a section of "message" elements that defines three messages: one for authentication (ARAuthenticate), one for the input to the web service, and one for the response from the web service. Each message element has a "part" defined with the name "parameters". That's not really right, and it causes problems when the "binding" refers to the "S0:ARAuthenticate" message part "parameters". Some WSDL processors can get past this issue without trouble, but not the AXIS 1.4 I was using.

 

The generated AXIS client stub created a weird input message to the web service where all the "parameters" appeared in the SOAP header and the body was empty. That gave me some interesting research work tonight. I found this thread which describes the exact problem I was having:

 

http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=51&t=003290

 

That forum sort of blamed Axis for this problem, but in fact it's a WSDL error generated by ARS. I created a defect for it tonight. Of course I stand to be corrected by superior Axis technicians, but I think this can be resolved by just giving each message part a unique name. I fixed mine that way.
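
To illustrate the kind of change involved, here is a hand-written WSDL fragment; it is not the exact markup ARS generates, and the element names are placeholders. Giving each part a unique name lets the binding reference resolve unambiguously:

```xml
<!-- Before: every message reuses the same part name, "parameters" -->
<message name="ARAuthenticate">
  <part name="parameters" element="s0:AuthenticationInfo"/>
</message>
<message name="OpGetListRequest">
  <part name="parameters" element="s0:OpGetList"/>
</message>

<!-- After: each part has a unique name, so a binding reference is unambiguous -->
<message name="ARAuthenticate">
  <part name="authentication" element="s0:AuthenticationInfo"/>
</message>
<message name="OpGetListRequest">
  <part name="request" element="s0:OpGetList"/>
</message>
```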

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Share: |


-by Van Wiles, Lead Integration Engineer

 

I see a lot of questions about modeling and querying relationships. The last one I saw was on BMC Developer Network, related to a poll Anand Ahire raised (Re: List Top CMDB searches) about common queries in CMDB. The basic question is how to find every CI related to an employee when they leave a company. This might be a graph query like Organization -member-> Person -related to-> CI (forget arrow direction for now since that is another big topic).

 

Classifying the Person relationships is more subtle than you might think. A person (from which employee is derived) may have a relationship to many configuration items. Some of these CIs are computer systems in the possession of the employee that need to be returned. Other application system CIs may have been administered by the employee and need a new administrator. Maybe the person is a member of a "role" which administers a group of CIs. Maybe the person was located in a room and there are computer system CIs in the same room, but there is no instance in CMDB to represent the relationship between the person and the computer systems. Or, an analyst may have determined the dependency of the person on the computer systems and created additional instances of "dependency" or "impact" relationship classes in CMDB.

 

This flexibility in modeling makes impact models easier to create, but harder to consume. BMC can help by proposing modeling practices, but of course that will limit flexibility so there are tradeoffs to be considered.

 

Standards like CIM help a little, by providing defined associations between managed elements of certain classes. But CIM is really designed to represent and manage operational elements in a CIM Object Manager (i.e. an in-memory model). The concept of an ITIL CMDB is not a perfect fit for CIM. For example, storing and managing all the CIM association classes as CMDB relationships is simply not scalable in most cases.

 

The CIM "meta-model" could be used to represent relationships as instances of association classes. But what if a relationship has both component and dependency characteristics? CIM requires at least two instances of the association classes (subclasses of CIM_Component and CIM_Dependency) to represent these two aspects of the same relationship. Extrapolate on this and consider the multitude of possibilities and you start to see the management problem that can result from a pure CIM approach to CMDB.

 

So I have two basic issues with using CIM_Dependency as a base class for CMDB dependency relationships. First, about half of the CIM association classes are derived from CIM_Dependency, but I think in most cases dependency is only one characteristic of the relationship, not the fundamental type of the relationship itself. The fact that there is a dependency arises from the fact that one element is connected to, hosted by, a functional part of, used by, or legally bound to (among other things) another element.

 

For example, my child is dependent on me, but this is not a dependency relationship; it is a family relationship. The dependency will end soon, but the family relationship will not end. Actually I now depend on him in some situations (like moving furniture for example).

 

That brings me to my second issue. The direction and level of impact in a dependency relationship can vary depending on the situation. Modeling the parent-child relationship as dependency might require me to create a "financial dependency" where the child is dependent, and a "labor dependency" where parent is the dependent. A really smart analyst or analysis program should be able to determine, based on our actual family relationship and respective financial and physical attributes, what dependencies exist for a given situation.

 

Ideally, the roles in a CMDB relationship record should say nothing about the roles of the endpoints. This should be described (as much as possible) by metadata about the relationship type, and by analysis of the situation in which the relationship is being evaluated.

 

All this is just to illustrate a fundamental principle of modeling in CMDB - minimize the number of records that must be maintained about an IT configuration, and try to capture the static, actual nature of the configuration rather than the situational or real-time aspects. More "brains" should be invested in the analysis of the model, and more "brawn" used to capture reliable configuration information.

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

Share: |


-by Van Wiles, Lead Integration Engineer

 

ITIL version 3 introduced the term Configuration Record to describe "A record containing the details of a Configuration Item...Configuration Records are stored in a Configuration Management Database." A Configuration Item (CI) is defined as "Any component that needs to be managed in order to deliver an IT Service. Information about each CI is recorded in a Configuration Record..."

 

So, most of the time (both in documentation and in communications) when we are talking about CIs in the CMDB, we are really talking about configuration records. The CI is the actual thing that is managed, not the record in CMDB. Other valid terms for configuration records include "CI records" and "instances (of a CMDB class)".

 

If you are writing about Configuration Items or Configuration Records, it is important to make this distinction. Unfortunately Configuration Record has no abbreviation (CR is an overloaded term already), so I usually just add "record" after "CI" to indicate I am talking about a record in CMDB, not the actual CI. For example, if I say "delete a computer system CI", technically I'm talking about picking up the box and physically removing it. To describe deleting the record in CMDB, I should say "delete the computer system CI record" instead, or "delete the instance of the ComputerSystem class."

 

This is my first post on the new BMC Developer Network blog so I hope you see it. The UI is sure easier to use, so maybe I'll post more often!

 

The postings in this blog are my own and don't necessarily represent BMC's opinion or position.
