
Atrium CMDB


Hello and thank you for reading. First let me apologize for the highly granular technical details in this discussion.


This information is provided by the official BMC Software AtriumCore Support organization in cooperation with the AtriumCore Customer Engineering teams in Austin, TX and San Jose, CA. We stand behind its accuracy and apologize for any typos that were not caught during publishing. Unless otherwise specified, this article supersedes any previous discussions on this topic and should be given the highest precedence among publications on the subject.

Additional references are available and recommended for review:


BMC Documentation:

Best Practices for the Common Data Model - BMC Atrium Core 9.0 - BMC Documentation


The following is an explanation of the CMDB Common Data Model design and how it behaves when overlays are introduced. Let's begin by making the following general statement:


“Forms owned by the BMC AtriumCore CMDB application should never be overlaid, customized or otherwise altered outside of the CMDB Class Manager. AR Developer Studio does not have any role in managing CMDB data structures, and its use to modify and overlay CMDB schemas can have a severe impact on the functionality of the CMDB and lead to data loss.”


Making informed decisions when modifying CMDB data structures can make all the difference in preventing long-term impact on performance and even AR Server stability. I'll begin by explaining the CMDB design a little.


CMDB Application - BMC.CORE


CMDB forms are created in the same fashion as inner joins, where each joined form has a common field. Querying these tables as a "joined view" returns only the records where both tables share the exact same value in the common field. In the CMDB these inner joins use InstanceId to find a unique match. To clarify this point: the T(#) tables are indeed physical tables in whatever RDBMS you have deployed on. BMC_BaseElement, BMC_ComputerSystem_ and BMC_ComputerSystem are not these tables or joins; they are entirely different DB objects (views) created for ease of use as part of the CMDB installation in the ARSYSTEM DB.



For example consider this "joining" of


T503 (BMC_CORE_BMC_BaseElement) and T522 (BMC_CORE_BMC_ComputerSystem)


gives the result of the intersection of both tables, where InstanceId (179) matches a record with the same value in both tables. This produces the results as defined by the view, with values end users understand as a Configuration Item (CI). Here, views show the joining of two or more tables that have common data and are used to present that data to end users. With this design we can manage all aspects of the data with extreme precision.


NOTE: The schema IDs used here, 503 and 522, are not static from one database to another and can change. They are not exclusively identified as the BaseElement or ComputerSystem table IDs; deployments in your environment will likely have different IDs. Conversely, all attributes have the same column ID from one database to another. For example, Status will always be C7. There are some exceptions to this for column IDs and attribute names, but in general this applies to most attributes.
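
You can confirm the column IDs on your own system by querying the AR metadata directly. A minimal sketch, assuming the standard AR metadata column names (fieldId, fieldName, schemaId) and the example schema ID 503 from above (yours will differ):

select fieldId, fieldName from field where schemaId = 503 and fieldId in (7, 179)
-- expect rows for Status (C7) and InstanceId (C179); the schemaId varies per system, the fieldIds do not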


I am going to add some SQL statements here to illustrate this. Running two queries that find the same CI record, with the same InstanceId, in two different tables may seem impossible, because we understand a BMC.CORE:<CLASS> schema to have a unique value for the InstanceId (179) field. It has a unique index on it, so finding the same value more than once should not be possible. That is true, but not if you separate the inner join into its two tables.


Consider the following query that looks in each table:


select * from T522 where C179 = 'OI-306843f7d118455882a5847bf9200c8f'

select * from T503 where C179 = 'OI-306843f7d118455882a5847bf9200c8f'


where in my example table T522 is BMC_CORE_BMC_ComputerSystem_ and T503 is BMC_CORE_BMC_BaseElement. Note here that both are actual tables. We are more familiar with the name BMC.CORE:BMC_ComputerSystem, which is a view of the two tables where the joined data is displayed.


select * from T503 INNER JOIN T522 on T503.C179 = T522.C179 and T503.C179 = 'OI-306843f7d118455882a5847bf9200c8f'


This is essentially the same thing as running:


select * from BMC_CORE_BMC_ComputerSystem where InstanceId = 'OI-306843f7d118455882a5847bf9200c8f'


Here is an illustration showing these results in MSSQL Studio:




Note that the last result, where the RequestId (field C1) appears, is a composite record drawn from both tables.


This is just to illustrate how most CMDB data is stored and looked up.


If either table is altered outside the inner join (view), the view will still show the result as a whole record. We do not recommend doing this in practice, though some data manipulation can be performed with a certified CMDB Data Administrator. I can give two reasons for not doing it for you to consider. First, making changes at the database level bypasses workflow. Second, altering and specifically deleting data through SQL queries will impact only one table and can make 'half' of the record disappear. Cases where the BMC_BaseElement part of a joined record exists while the BMC_ComputerSystem half is not found have been reported, and they are a direct result of manipulating data through the SQL back end.


For example if you run this SQL:


"delete from BMC_ComputerSystem_ where MarkAsDeleted = 1"


then you're only clearing out the ComputerSystem_ side while leaving the BaseElement table still full of that data. If you think you've done something like that in the past, you can check by running this type of SQL query:


select count(*), ClassId from BMC_CORE_BMC_BaseElement where ClassId = 'BMC_COMPUTERSYSTEM' and InstanceId not in (select InstanceId from BMC_CORE_BMC_ComputerSystem) group by ClassId


Any results from the above query mean that there are records in the BMC_BaseElement table that are not found in the BMC.CORE:BMC_ComputerSystem class container. You can substitute the BMC_COMPUTERSYSTEM ClassId in the above query with other class ID values like BMC_IPENDPOINT and so on.
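
The opposite kind of orphan, a ComputerSystem half with no BaseElement half, will not appear in any join view because the inner join hides it, so it has to be checked against the underlying T tables directly. A sketch using the example schema IDs from earlier (remember that your table numbers will differ):

select count(*) from T522 where C179 not in (select C179 from T503)
-- any non-zero count means ComputerSystem rows exist with no matching BaseElement row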


Please refrain from such practices. Use CMDBDiag and the Reconciliation Delete or Purge activity instead. These tools help you delete data without violating data consistency.


The next layer is the AR metadata, a set of tables that reference the schema IDs of the actual table structures.


- Table structures use prefixes like T, H and B
- AR metadata is comprised of various tables, but for the CMDB the ones that matter most are:
  - arschema - references tables by ID (schemaid)
  - field - references columns in tables by fieldid and their association with a schemaid


There are other tables, like "escalation" (references all escalations) and "filter" (references all filters), among others. These are not relevant for this topic, so I will not list them all here.


ARSCHEMA is usually the "out of the box" name or ID of the database; however, this name can be different if AR Server was installed into a database that was created before the installer was started. For the context of this posting, just note that while the database can have different names, the "arschema" table itself can only be named "arschema". It is a table inside the ARSCHEMA database. So when I mention "arschema" in the following paragraphs, I mean the arschema table rather than the database.


Records in the "arschema" table are mostly pointers to tables, which use the schemaid to look up the data. For example, SchemaId 456 refers to a table T456. Unfortunately these structures have no restrictions set on them and can be modified via BMC Developer Studio, becoming Overlays if "Best Practice Mode" is used and Customizations if "Development Mode" is used. The important thing to understand is that even CMDB data has a T table. This discussion is about the CMDB data structures rather than the design of the AR Schema tables, so please forgive me if I leave out some details related to the AR Schema database.
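
To map a form name to its physical table on your own system, you can query arschema directly. A minimal sketch, assuming the standard arschema column names (schemaId, name):

select schemaId, name from arschema where name = 'BMC.CORE:BMC_ComputerSystem'
-- if this returns schemaId 522, the physical data table is T522 (with H522 and B522 as its companion structures)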



CMDB Metadata and the Common Data Model (CDM)


The CMDB application is made of several components: some APIs, minor workflow, database tables and CMDB metadata. The last one is built in memory when the CMDB API is loaded by AR Monitor. It mainly queries forms like "OBJSTR:AttributeDefinitions" and "OBJSTR:Class" as the two main sources for the hash tables (arrays) where the data will be processed. The main difference between CMDB data structures and "arschema" data structures is that the CMDB uses a class hierarchy which "inherits" fields from its parent tables. The inheritance is simulated; for the data structure of a CI or Relationship it just means that the tables have a common purpose for data that is distributed across them. There is no other structure to date that uses this approach to storing data within the ARSCHEMA. Only the CMDB does this.


If you would like to know more about these data management structures, please see the DMTF (Distributed Management Task Force) design (http://www.dmtf.org).


In this design, all common attributes of objects are stored at the base parent class and become more specialized with each child class. A parent class that has a child class with specialized attributes will not be aware of its child's attributes. For example, the attribute MACAddress (C260140111) will only be found in BMC_LANEndPoint and could never be associated with a BMC_Person record, where it has no meaning. The attribute appropriate to define an object is owned by the class that most closely matches the real-world object. Sending a SQL query to the BMC_BaseElement class with MACAddress included in the where clause will result in the error: "Invalid column name 'MACAddress'."
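
To illustrate, here is a quick sketch against the SQL views, assuming the view naming convention shown earlier (BMC_CORE_ followed by the class name):

-- fails with "Invalid column name 'MACAddress'": BaseElement knows nothing about its children's attributes
select MACAddress from BMC_CORE_BMC_BaseElement

-- works: query the class that owns the attribute
select InstanceId, MACAddress from BMC_CORE_BMC_LANEndPoint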



These tables were designed to show data for various reasons, including data visualization. The volume of data reflects the number of discovered items of various types, and these tables were intended to be kept trimmed down for faster data processing. Typically, the more data a table holds, the longer lookups take. Our expectation was somewhere in the millions, and some of the largest deployments we have seen are in the range of ~80,000,000 records. But those are just the CIs. There are also Relationships, which also use inheritance. The total size of the CMDB data can reach the billions, although such data volumes would require specialized hardware that can handle processing them.


All CMDB classes typically begin with BMC.CORE, also known as the NameSpace, and each object has a human-understandable ClassId that adds meaning. ComputerSystem, Person, NetworkPort: these all mean something to us within the context of ITIL. Each class adds attributes, and the further a class is removed from its parent, the more specialized it will be. For example, all attributes of the BaseElement class are inherited by all CI classes, but BaseElement is not aware of any attributes of its children. Editing BMC.CORE forms in Dev Studio edits only that one table and does not push the change to any other table. For that you need the Class Manager exclusively.


BMC AR Developer Studio has the ability to create overlays, which is the "modern" way to introduce customizations to the ARSCHEMA. However, these are still customizations that live outside of the CMDB. Unfortunately we cannot prevent AR Administrators from adding customizations to the CMDB, given the nature of the features Dev Studio offers, but it does allow us to separate customizations far more easily than was possible in the past.


All CMDB data structure changes need to be made with Class Manager, which builds the set of instructions for CMDBDRIVER, which in turn does the work of changing the data structures in the CMDB. Changes to a parent class will always propagate to its children. If you use BMC Developer Studio to add overlays to BMC.CORE:BMC_BaseElement, for example, that change is set only on the BMC.CORE:BMC_BaseElement class and does not propagate to its child classes.


Asset Management (AM), as an example, is an application that takes advantage of the CMDB data store, but it does not use the inheritance model of the CMDB. Since version 8.0, AM stores its data in AST:Attributes and qualifies CMDB classes by records in AST:ClassAttributes. Please refer to Updates to CI lifecycle data attributes - BMC Remedy IT Service Management Suite 8.0 - BMC Documentation for more details on this change.


In AM you then have two halves of a composite record that is presented to the Asset Operator in one interface. They see the data through AST:<CLASS> views as one record. One half of the data is in AST:Attributes and the other half is stored in BMC.CORE:<CLASS>, and both are joined into one visual interface. To join the data into a view that gives Asset Operators the access they need, we run the CMDB to Asset Synch. In this synch, all classes selected to have a UI in Asset are processed by a triggered synch job. Updates to individual fields (attributes) are also done this way, by altering the tables in Asset. Adding overlays to AST:Attributes puts locks on these Asset views, and this often prevents the CMDB to Asset (SynchUI) functionality from completing successfully.


AST:Attributes must be edited in Base Development mode so that these changes can be picked up during the synch. CMDB2Asset Synch is overlay agnostic; it has no idea that overlays exist. The expectation by end users that CMDB2Asset Synch will update overlays goes unrealized. Unfortunately this leads to a significant amount of extra work being done before anyone reaches out to BMC Support for help.


In conclusion, use overlays with caution, or rather not at all, with the CMDB data structure, and please set your expectations: you will likely be asked to undo all overlays related to the CMDB and any joins related to CMDB data structures. Upgrading to AtriumCore version 9.0 will require all previously created overlays of the CMDB forms to be removed.


Daniel Hudsky





I always try and have some fun in blog posts around training, and this post is no different. I found a funny website, I mean laugh-out-loud funny, the other day: 100 Random Fun Facts.


Fact #1 is “Banging your head against a wall burns 150 calories an hour.”  That is a pretty impressive calorie count, just not sure how your head will feel after an hour. 


Like me, most of you probably want to bang your head against a wall once in a while and it could be around anything – work, drivers on the road, people walking down the street with their head buried in their smartphone, your significant other, or even your kids (or someone else’s).


What about banging your head on the wall when your BMC solution is not working the way you think it should? Did you not know how to do something but found the info on Communities? Have you had official training on the product, or just knowledge transfer? Communities is a great venue, but did you know that BMC also offers training for our products? We have training for your specific role, What's New courses where you learn the new features of a product, admin courses, and more. We also have different methods to deliver our training: online or in a classroom.


INSTRUCTOR-LED TRAINING (ILT/ILO – in classroom or online)

Our admin courses (and others) are delivered by a BMC instructor...we hold classes in our training centers or online around the world.  The classes include lecture and labs.  Pretty cool, right?  You can use what you learn in a safe environment.  After you learn it all, then you can go back to your job and show the world how you can be more productive and efficient – make the solution work for you. 



Beginning today, BMC Education Services is excited to announce our end of year promotion for instructor-led training. Receive 10% off* all remaining public courses in 2015. 



View our training paths to find the course that fits your needs.  When you process your registration, enter the coupon code EOY15 to receive the discount during checkout.


View more information about the promotion, the registration process, and how to get assistance.


*The discount does not apply to prior purchases including Learning Pass Credits.



My hopes, dreams, and aspirations for the remainder of the year are that “we” do not bang our heads on a wall too much. I really don’t want the headache and I don’t think you do either. 


Take an admin class from BMC and become a rock star at work....like I said before make the BMC solution work for you – stop banging your head on the wall. 










Our Customer Programs team has posted the following as a call for Configuration Managers to take part in our UX study sessions. Please take this opportunity to contribute to the future of Atrium CMDB and Configuration Management - see the post at BMC Remedy with Smart IT UX Design Sessions for Configuration Managers.


This blog is not about configuring BMC Atrium (data) Integrator. Instead I want to blog about how Atrium Integrator trouble tickets get routed to the SMEs. However, if you're reading this because you want to understand AI, then look here first:


Understanding Atrium Integrator




For this blog I just want to refer to Atrium Integrator in terms of what it does. Its function is to transfer data from various sources into data stores structured within the AR SCHEMA.


BMC chose the PENTAHO technology after researching alternatives and found this Java-based tool best qualified for data transfers. This means that AI can be used to import data into any form in the AR Schema. It does not necessarily mean that any issue encountered with a transfer is related to Atrium Core data stores or AR Server configuration.


Data transfers with Atrium Integrator intended for the CMDB data store are created via the Atrium Integrator console. Assignment of issues related to this is easy: BMC AtriumCore Support. 


It diversifies from there. Other BMC Remedy applications can also receive data using Atrium Integrator. Asset Management, Change Management and other apps can get data by adding transformation mappings with the Pentaho Spoon client. These will still show up as jobs in the Atrium Integrator console and can be triggered or run by the scheduler. Any PENTAHO plugin issues can be resolved by AR Server support and Atrium Core support, but not if the trouble is with the data mapping itself. The Atrium Core and AR Server support teams are not going to be familiar with the requirements of applications outside their respective support boundaries. For example, if the destination form for the data is SRM or Incident Management, then support tickets sent to Atrium Core or AR Server will be rerouted to SRM or Incident Management anyway.


We always try to achieve the fastest resolution possible. That is true for any issue and applies to any group within the BMC Support organization. Customers may not see it that way, because at first their ticket does not seem to be getting any attention, and that is also our concern. Our internal routing of tickets is not transparent externally. That very experience is the reason for my blog post today. We want to work with the community, and that requires communication.


That is what I want to achieve with this post today. Anyone who needs support with Atrium Integrator can help with the routing of the issue using the following logic:



If the issue is with the feature listed below, the best BMC Support team assignment follows it:

  • Spoon Client (not application specific): AR Server or UDM
  • Pentaho Plugin (KETTLE): AR Server or UDM
  • CMDBOutput, CMDBInput, CMDBLookup methods: Atrium Core
  • Creating AI jobs or schedules with existing jobs in the AI Console: Atrium Core
  • AROutput, ARInput, ARXInput methods: AR Server or UDM
  • UDM forms in general
  • Carte Server install and configuration: AR Server
  • AI users/roles and permissions: Atrium Core
  • UDM users/roles and permissions: UDM
  • AIE to AI job migration tool: Atrium Core
  • Installation of Atrium Integrator Server or Client: Atrium Core
  • Midtier-related issues with AI console access: Atrium Core or AR Midtier
  • Application-specific support other than CMDB Core forms: the application team that owns the destination form


An article published by Forbes (sponsored by Sungard AS) details why a large proportion of CMDB implementations fail.

I wanted to complement it and provide our perspective on the topic, based on feedback from our market and the capabilities provided by Atrium.

The trends in IT more than ever require a solid control over configurations:

  • Larger, more complex, and dynamic data centers accelerate the risk of bad changes and push the need for automation
  • Adoption of public and private clouds results in more vendors, more operators, and more integration layers
  • The accelerating demand for digital services from the business places IT in tough situations where reactivity and efficiency are key ingredients for success

This drives benefits of Configuration Management beyond what was outlined in the article:

  • Change control/change management: Documenting your environment illustrates the many interdependencies among the various components. The better you understand your existing environment, the better you can foresee the “domino effect” that changing any component of that environment will have on other elements. The end result: increased discipline in your IT change control and change management environment.
  • Disaster recovery: In the event of a disaster, how do you know what to recover if you don’t know what you started with? A production CMDB forms the basis for a recovery CMDB, which is a key element in any business continuity/disaster recovery plan. That comprehensive view of what your environment should look like can help you more quickly regain normal operations.

But also:

  • Automation: With the growing scale of data centers, there is no option but to automate routine tasks. That spans IT Operations Management, which needs business-driven provisioning, patching and compliance, and IT Service Management, which needs to accelerate incident resolution by efficiently prioritizing/categorizing the work, etc.
  • Performance and availability: With availability being so critical to business success, how can IT be proactive and fulfill SLAs if it cannot map events that impact the infrastructure to the business service that is affected? How can capacity decisions be business driven without an accurate picture of the environment?

The article lists 4 reasons for CMDB failure (competing priorities, limited resources, complacency and an overly manual approach).
The fact that a "CMDB Project" is mentioned here is symptomatic: many organizations initially consider only the technology aspects, rather than establishing Configuration Management as a key discipline that relies on CMDB technology. The human factor is in most cases the #1 source of failure, and there are key questions that cannot be ignored, nor forgotten, throughout the implementation:

  • What is the business reason for Configuration Management?
  • What current and future problems is Configuration Management going to address?
  • Who is the sponsor for this implementation?
  • What are the processes that will interface with Configuration Management, either to provide data or to consume data?

This ensures a top-down approach that starts with a vision and drives the boundaries of the data model, the types of integrations, and so on.

Once the implementation has kicked off, there are other reasons that can lead to failure such as:

  • The data getting into the CMDB is not governed correctly: it is Configuration Management’s responsibility to ensure that the data is accurate, and transformed appropriately, so it can be referenced reliably. This needs regular reviews of the rules and filters that automatically govern data accuracy
  • Expecting 100% coverage before going into production plays into the perception that CMDB fails. Configuration Management is a continuous practice, and CMDB implementations need incremental successes because the target will always be moving

When it comes to tips for success, I can't agree more with the article about the absolute necessity to "automatically update the comprehensive picture of your environment to reflect the potentially tens of thousands of changes per year to your environment." Users of Atrium Discovery and Dependency Mapping (ADDM) can witness how efficient it is at feeding Atrium CMDB with trustworthy data that can be automatically synchronized with service models.

Atrium CMDB definitely provides the most comprehensive solution, in terms of its capabilities to handle incoming data, possible interfaces for data consumers, scalability, and the wealth of integrations that exist with BMC or other vendor products.

Recommended reading: Critical Capabilities for Configuration Management Database (Gartner, June 2014).

A main benefit is that it does not require different tools for different data transformation operations. Now, because of this richness, an implementation has to start with the right understanding of the tool, as well as how it should be used. To that end, the documentation includes Best Practices that guide an implementation toward success in understanding the data model, loading data, normalizing it and ensuring correct reconciliation with other sources of data.

In summary, Configuration Management is needed more than ever, and it needs to be addressed as a discipline that leverages the most appropriate tools, which will guarantee data accuracy, high levels of automation, and strong integrations to drive the most value.

Atrium is the most widely deployed CMDB, so it probably has the largest track record of failed implementations. The other side of the glass means it also has the largest number of successes. This is confirmed by its users, and the 85% failure rate is certainly not right when applied to it.


In the first of the Effect-Tech CMDB webinar series we discussed the upfront aspects of properly setting the scope of your CMDB initiative. We discussed the high-level implementation choices and why a use-case driven approach might be the most optimal method to deliver value more quickly. After discussing these options, we concluded part 1 with the introduction of a service model architecture that can be used to initially model your IT environment and expand over time. If you missed the first webinar, you can watch a replay of Part 1 at Effect-Tech Webinars.


Please join us on October 9th as we continue the conversation in part 2 of this webinar series.  We will explore the BMC Atrium classes and which classes are most relevant to support the service model architecture introduced in part 1. Furthermore, we will talk about the role of discovery and how it can and cannot be leveraged to keep your CI's up-to-date. After discussing the CMDB classes, we broach the topic of CI relationships and simplifying which relationship types you should use to drive meaningful value without added complexity.


With classes and relationships out of the way, we steer clear of CI attributes for the time being and introduce the need to define multiple service model views that allow users to better understand the numerous CIs and their relationships that result when a complex service model is built out. Finally, we will explore the role of the CMDB in assisting application support teams in the areas of event and incident management - and potentially why integration with discovery tools is NOT required to provide value for these app groups.


Time permitting, we will introduce Effect-Tech's CMDB methodology and best practices that your organization can use to implement CMDB in a structured and repeatable way. This methodology avoids the common implementation mistake that essentially turns your CMDB project into a data, discovery, and reconciliation exercise. By implementing the CMDB using this systematic approach, we believe your organization will gain more value from your CMDB project - more quickly.


This webinar series is presented by Rick Chen, Managing Principal at Effect-Tech. Rick shares from his wealth of CMDB knowledge and field experience.



  • CMDB class discussion
  • If it's all about relationships - what, why, and how much?
  • CMDB service model views - and why it matters
  • Addressing the needs of application support teams
  • Introducing implementation best practices to get more value, more quickly out of your CMS / CMDB


Date/Time: October 9th  at 9am (PST)


Space is limited so reserve your webinar seat today - Register

Stephen Earl

CMDB @Engage

Posted by Stephen Earl Sep 14, 2014

I'm really looking forward to meeting our customers at BMC Engage in Orlando, Florida starting October 13th. I'm hoping to catch up with those of you I have met before and also to meet those of you I haven't met yet! I hope you will catch me in the corridors or at break or meal times to talk CMDB.


As part of my Engage activities I will be presenting Session 63, CMDB, With Great Power Comes Great Responsibility, along with Darius Wallace, where we will be discussing CMDB futures and best practices for implementing the CMDB in your business. The CMDB is part of the core of any ITSM implementation, and implemented correctly it can bring great benefits to your ITIL-based processes; a great CMDB implementation can make you a Hero. However, it is easy to become so focused on the CMDB implementation that it unintentionally becomes the Super Villain of your environment.


We will be discussing our new Best Practices resource, how we at BMC intend to help your implementation become a Hero, and our experiences in the field working with customers.


This session contains information you will find useful no matter what stage your implementation is at, and no matter your skill level: Introductory, Intermediate or Advanced.


We look forward to meeting you all in Orlando at the Swan & Dolphin in Walt Disney World from 13th - 16th October!


It is often said that service is everybody's business. If you approach work in I.T. as a function of Business Service Management, then "service" is literally at the very center of your business and your acronym. So how do you measure your impact on service? Everywhere and always. This is a story of a personal journey: looking for where to configure it, and finding it built into the very fabric of the application. Just like service in the general sense.


So, I work in support and I thought I was doing pretty well with the daily "here I come to save the day" routine by now. Meeting after meeting, knocking out one solution after another. Life was great. Then one day I was faced with an upgrade to AtriumCore 8.1, which added a new link to the AtriumCore console labeled "Service Context Administration". Of course I had to pursue it, because something new is always interesting to me, and I clicked the link out of curiosity. Nothing happened. I asked about it during a meeting with some colleagues and learned "It's an Asset Management feature, it's configured for the Asset Management console". I was fine with that and went on with my life, but a seed of curiosity was planted.


Then one night while roasting peanuts and deep-frying a fish, a couple of dwarves from the Lone Mountain came over. OK, they were not exactly dwarves, but my adventure with Service Context began. I think it started with someone asking about UDDI Atrium Web Services, which turned out to actually be the AR Server Midtier UDDI, which does not resolve, and hence we wrote that off as an "AR Server" issue. But then "Atrium Web Service" and "Service Context" came up in the same sentence. I looked at the documentation and once again realized that this is a service provided by the AR Web Services, and dismissed it. Once again, the murky waters of lake "Service Contextia" were laid to rest to live another day.


But then came August 12th, 2014. We're having a team meeting and I am showing tricks for checking and verifying the functionality of the AtriumCore console, which includes Federation configuration. I noticed that Service Context was now added to my lab system's Federation Manager console, where I had installed the 8.1 SP1 patch. Suddenly a government-approved light bulb went off in my head. This must be it! Service Context was staring right back at me.



At this point I thought this was going to be very simple. Federation is just a matter of having a plugin configured in ar.cfg and loaded as a Java process with the Atrium Shared Java Plugin (via pluginsvr_config.xml).


I checked that all was in order and then saw the Launch link in AtriumExplorer. At this point I thought I had conquered the mountain, and published a WebEx recording on how to make the best of the Service Context. That recording is now available on the WebEx service site. Click the link below to play it if you would like to see the details:


"How to enable ServiceContext link in AtriumExplorer"


Duration: 5 min 26 sec


At this point I thought the matter was closed. I put on my fedora, flipped my rain coat collar up and walked into the rain.


Next day I got an unexpected reply: This was still not it!


So, what could this possibly be? What is this "Service Context"? Let's go back to basics and see what the documentation says about it.


I downloaded the entire AtriumCore Help Documents package from EPD and started to look for "Service Context" and also checked the iDocs: Troubleshooting BMC Atrium Service Context - BMC Atrium Core 8.1 - BMC Documentation


And now I hit the jackpot. Finally I found what I was looking for. So, if you're still reading this, you probably want to know what it really is.


Here it goes, in my own words. Say you're in the Asset Management console and you're looking at a list of ComputerSystems you need to work with. You need to know whether this system has any Business Services associated with it, and what impact on those services you would have if you were to put this system into maintenance.


So, my confusion was whether this is an AtriumCore Web Service or whether it is served by the Midtier. And the answer is: the Midtier!



I've captured the service registration below:




Here you can see that the Service Context is using the midtier arsys root directory to register it.

That's it. The installer actually did this already, so there was nothing for me to configure.


So, basically this is the outcome of the quest. Start here:




And get this "Service Context" for that computer system Dell Rack System - 517P95J:




Seems too simple in hindsight.


In summary, if you're interested in using Service Context, but making it work seems like a quest to the Lost Mines of the Lonely Mountain, then perhaps sharing my personal experience with this module can be of some help. I think just understanding what the "context" means can help: a ComputerSystem can be investigated from the ITSM Asset Management console in the context of related business services, and that is what the Service Context feature of AtriumCore is all about. Did you have a similar experience?


To see more like this, see BMC Remedy Pulse blogs.


Recently, I was involved in helping a customer resolve some data issues. There was duplicate data in different CMDB classes, populated in the ADDM dataset as well as in the production dataset. The consuming applications were impacted by the inaccurate data in the CMDB. While analyzing this, we found that the root cause of the duplicate CIs in the production dataset (i.e. the asset dataset) was mainly improper reconciliation identification rules. The ADDM reconciliation job was using ADDMIntegrationId as part of an identification rule, which caused duplicate CIs; e.g. there were two identical computer system CIs in the asset dataset with different ADDMIntegrationIds. This is an example of how critical the reconciliation process is to maintaining the quality and accuracy of configuration items in your CMDB.


Before we go further on ADDMIntegrationId use, let's understand how the ADDMIntegrationId is used by ADDM and the CMDB.


The ADDMIntegrationId attribute in a CMDB class holds a unique key populated by ADDM as part of the CMDB Sync operation. This ADDM-specific key helps the CMDB sync operation decide whether a CI already exists in the CMDB before performing an insert or update operation.


Knowing that the ADDMIntegrationId is a unique key identifying a CI, it's tempting to use it in reconciliation identification rules. This is the most common mistake seen in CMDB data issues. This attribute has in the past been used as a workaround in CMDB reconciliation rules for the Software Server, Database and Cluster CI classes, but with the fix described in KA411090 this workaround is no longer needed.


For the reconciliation identification activity, it is very important to use attributes that provide CI uniqueness in the identification rules, and it's equally important to use discoverable CI attributes; e.g. for a computer system, use the serial number, host name and domain attributes to determine CI uniqueness.
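
A quick way to see whether weak identification rules have already produced duplicates is to group by the attributes that should be unique. A sketch for computer systems, assuming the BMC_CORE_BMC_ComputerSystem join view naming convention and populated SerialNumber and DatasetId attributes (your production dataset ID may differ):

select SerialNumber, count(*)
from BMC_CORE_BMC_ComputerSystem
where DatasetId = 'BMC.ASSET'
group by SerialNumber
having count(*) > 1
-- each row is a serial number that identifies more than one CI in the asset dataset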


Here are the few rules I follow when investigating issues like this:


1) Make sure you understand why there are errors before implementing changes to RE rules.

2) Avoid using ADDMIntegrationId for strong member CIs, e.g. Computer System. If I am tempted to do this, I revisit rule 1 above.

3) Using ADDMIntegrationId for weak class members is less risky, because weak members like Processor, Monitor, Product etc. don't have sufficient information in their attributes to identify CIs uniquely. You can learn more about how to investigate data issues here.



I hope this helps. Please rate my blog to let us know if it was useful. For more like this, see BMC Remedy Pulse Blogs.


Hear from Amit Maity, a senior technical instructor at BMC Software.


Amit will review Federation in BMC Atrium CMDB including different types of data, provide considerations and recommendations on federating data, review the methods of federation and provide a demo.




View our IT training for Atrium CMDB

Join the BMC Global Services community.  Hear from experts in consulting, education and our centers of excellence. 


I'd like to use terminology like an "age old" idea in this case; however, it's a bit early to call software issues "age old" just yet, because software has not really been around for ages. But then again, when I look at my 10-year-old son, I could fit a couple of "ages" of 10-year-olds into the era software has been around. So I think I'll be OK to say that the concept of retaining software records for a specific purpose has been linked to an "age old" problem with that type of retention. In our case, the case of the BMC AtriumCore CMDB, that type of record would be an Asset record: a CI, a Configuration Item, or what some may think of or understand as "an inventory".



Inventory records are definitely useful to have, specifically for trends, cost audits, outage research and so on, especially if the status of the record has been tracked for historic changes through what we refer to as the Asset Life Cycle (AssetLifecycleStatus). You can see when the CI was Ordered, Deployed, End-of-Life'd and Deleted. All useful stuff. All of these lifecycle states have their purpose, and a CI will live very happily in the BMC.ASSET dataset. But not "ever after". Enter the arena: Mark As Deleted = Yes.



You see, the last status I've listed, namely "Deleted", is actually an instruction for the CMDB workflow to purge that record; the record is "no more" once the BMC AtriumCore Reconciliation Engine (RE for short) reconciles that data. OK, so why is this a problem?


The issue is people's expectation of what these records should be. Records present in the BMC.ASSET dataset are seen as inventory records that should be retained even if they are marked as deleted, also known as "soft" deleted. The record is still present in the database, but it is no longer an active asset in the production environment.



So, on first thought that should actually be no problem. The CI is there for me to look at historical changes that include its time of order, deployment and removal (deletion) from production. No issues there. The problem is with the relationships that were hosted on that computer.

Let me explain why.



A typical CI is not a standalone record; it is related to other CIs, and these relationships have attributes. For example, we have cardinality, where the CI relationship can be defined as one-to-many, many-to-one or many-to-many. Then there is the Weak relationship type, where the CI on the "left" side of the relationship has to exist before the "right" side can be detected by some means of electronic discovery. This means that without the CI on the left, usually a ComputerSystem CI, none of the hosted components on the right would exist. There may be a license cost related to the operating system that's running on that computer, but no discovery tool could scan it if the computer is not deployed and turned on. Additionally, there is a Cascade Delete option, which is not enabled out of the box but can be toggled on for Hosted System Component relationships.



So, back to "Mark As Deleted = Yes", which I am going to refer to as MAD.Y (and MAD.N for No). What is not immediately obvious is that there is workflow that acts on changes made to MAD. All relationships are removed from a CI when MAD.Y is set. If you have Cascade Delete on, then MAD.Y will also propagate the MAD.Y flag to the Hosted System Component CIs. This means that the computer system will be set for deletion on the next Purge activity, as well as the Operating System that was hosted on it. If you're tracking the license of that Operating System, that record would now be gone.



This can be a problem for customers who like to trace license management back to the Computer System where the license was hosted. Generally an end user's Operating System would not be a big deal, but an Oracle license or Server license can easily exceed $10,000. Losing track of that CI could present issues when audits are conducted. So keeping the ComputerSystem and the hosted Product CIs now becomes an inventory issue. What do we do with these records? Once they are MAD.Y, their relationships are fragmented and not traceable backward without some effort, and certainly not via automation. However, this is exactly what ends up happening when the Purge activity is not performed on that data in BMC.ASSET. Eventually it will require some clean-up. And that is bad, because nobody is very happy when they have to do it.



We'll have cases arrive in our incident management tracking system where a customer complains about performance, or about "Multiple Matches found" errors seen in reconciliation logs. Many of these issues result from a data retention policy where the customer does not allow a Purge of the MAD.Y records from BMC.ASSET, for the reasons I've outlined above. However, it does not have to be that way.


As of now, most Reconciliation jobs use this sequence:



Purge (or Delete) BMC.ADDM and BMC.ASSET (Identified Only)



Although this sequence makes the most sense, it includes the not-so-popular Purge of the MAD.Y records from BMC.ASSET. The truth is that if you want your data to be healthy, it needs to include the Purge. However, not all is lost. There is a better sequence that allows record retention.



Copy Dataset (with a qualification of "MarkAsDeleted = Yes" AND (ClassId = "BMC_Product" or the class ID of interest))
Purge (or Delete) BMC.ADDM and BMC.ASSET (Identified only)



In this scenario you can use a qualification to first copy all CIs you're interested in where MAD.Y is set and ClassId = 'BMC_COMPUTERSYSTEM' or ClassId = 'BMC_PRODUCT', or your asset class of interest.
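
Before running the Copy Dataset activity, it can be useful to preview how many records the qualification would pick up. A sketch against the SQL views, assuming the BMC_CORE_BMC_BaseElement view and that MarkAsDeleted is stored as 1 for Yes (as in the delete example in an earlier post):

select ClassId, count(*)
from BMC_CORE_BMC_BaseElement
where DatasetId = 'BMC.ASSET' and MarkAsDeleted = 1
and ClassId in ('BMC_COMPUTERSYSTEM', 'BMC_PRODUCT')
group by ClassId
-- counts the soft-deleted CIs per class that the Copy Dataset qualification would capture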


Please see Jared Jones' PULSE blog on License Management:




where he covers a new feature specific to Asset Management License Management:


" SP1 includes the expanded capabilities. If you get it installed you will see quite a few additional License Types out of the box. Essentially all the previous License Types now have an additional "_All" type that includes the other software classes."


Your copied dataset can then hold a name like BMC.ASSETS.PURGED, which you can return to at any given time and run reports on. It would only hold records that were at one point Marked As Deleted in the BMC.ASSET dataset and purged. Keep in mind that License Management will be updated according to the configuration of the Licensing Job related to the reconciliation job that handles the ADDM data.

That said, this is not yet fully automated out of the box, or even considered a Best Practice, because the CMDB is not intended for asset inventory retention; but it is within the parameters of AtriumCore's capabilities and not all that complex to add as a functionality enhancement.



In conclusion, is inventory retention in the BMC.ASSET dataset good or bad? Well, it's definitely not in the design of the CMDB, which is supposed to keep records of configured items in production environments, and it can cause data clean-up issues. So my vote tends to lean in the direction of "bad". You can do better with this "age old" problem.


I hope this helps, please rate my blog below or add comments on your experiences.  See more like this at BMC Remedy Pulse Blogs.

Daniel Hudsky


This post shares some diagnostic enhancements included in the Remedy Configuration Check Utility in BMC Remedy AR System Server (ARSystem) and BMC Atrium CMDB Suite (CMDB) version 8.1.01 (Service Pack 1). The goal of the enhancements is to simplify the process of identifying, correcting, and reporting on configuration issues in the product.




A little background


The earlier post 12 steps toward a systems approach to diagnostics outlines different kinds of diagnostics which may be required for products such as BMC Atrium CMDB, and the subsequent post 7 Tools to verify BMC Atrium CMDB is working well describes the diagnostics available in CMDB at the time.

In Service Pack 1, we looked at how we could automate or simplify these diagnostics so they can be executed and collected more easily when required. We looked at the AtriumCoreMaintenanceTool Health Check and the pre-checker described in the post BMC Remedy Pre-checker for Remedy 8.1 (unsupported) to see which would be the appropriate tool to extend. The Health Check functionality in the Maintenance Tool also runs at the completion of the installation, as the "Post Install" check. This design limits which checks can or should be run from it. For example, if we automated the process of checking recommended configurations, those checks would always fail in the post-install check because there would have been no opportunity to configure the product yet. So we decided the next step in the journey of automating diagnostics was to extend the Pre-checker to provide a simplified user interface for executing diagnostics.



See the product documentation to learn more about the features of the BMC Remedy Configuration Check utility and how to access it.


The Pre-Checker was originally designed to detect environmental issues for install and upgrade, but since the scope of the tool has changed, it has been renamed the Remedy Configuration Check utility. It can be used not only to check the environment configuration before an install or upgrade, but also to detect product configuration issues, which are the causes of many post-installation problems. This tool will also enable us to automate frequent troubleshooting steps.




BMC Remedy Configuration Check utility

The goals are to -

  1. Help administrators to troubleshoot configuration issues
  2. If the administrator is not able to resolve the issue, make it easy to gather and share test results





How it works

The BMC Remedy Configuration Check Utility is included with BMC Remedy AR System 8.1.01 (Service Pack 1) media.  The file can be extracted to the system to begin using it. For more information on downloading and extracting the utility, see To Obtain the BMC Remedy Configuration Check utility.

CMDB checks included in BMC Remedy Configuration Check utility

In Service Pack 1, the following list of checks has been added for CMDB.



1) CMDB - System Information


This feature makes it easy to collect information about the system. It may also make it easier to compare different systems running BMC Atrium CMDB, or to report the information when working with Customer Support.

Note - Going forward, this check will be moved into the information gathering category.




2) CMDB Metadata Check


This check detects pending CMDB class changes. It performs the same test that can be run manually using the command-line cdmchecker tool with the -g option.

You can find more info on cdmchecker in the product documentation.




3) CMDB Class Overlay Check


An overlay on the CMDB can cause issues in the upgrade process. This check detects overlays on CMDB classes.




4) CMDB - RE Private Port


We recommend a Reconciliation Engine private queue configuration for performance reasons. If the private queue is not configured correctly, there won't be any performance gain. This check can be used to detect an improper private queue configuration.




5) CMDB - Index Check


This check provides references to the out-of-the-box indexes in CMDB version 8.1.01 and validates that those indexes exist. It reports an error if any index is missing.





This blog post hopefully provides a better understanding of why the Pre-check utility was renamed and of the new capabilities that were added to it in version 8.1.01. This expanded functionality should make it easier to diagnose issues.


Stay tuned for new additions in future releases.



I hope you found this blog useful, please rate it below or add comments.  To find similar information, see BMC Remedy Blog Posts.


Hi All,

I have recently joined BMC Software as the new Product Manager for the Atrium CMDB and thought I should introduce myself so that you can get an idea of what makes me tick, and also understand my drive to make the Atrium CMDB a tool that drives the realisation of value for you, our Customers.

I have been part of the BMC Remedy world since 2004, when I worked for an ISP and was given the mantle of 'the Remedy guy' for the entire organisation. I had never seen Remedy before and actually had no idea what it was. After a mild baptism by fire I immediately saw the value that the Remedy Action Request System could give the Network Operations Centre I ran at the time, and also the wider organisation; back then we were rolling out ITSM 5.1.2.

Since that time I have been involved in many rollouts of BMC Remedy ARS based solutions. Most recently I led the Remedy development organisation for BlackBerry, rolling out ITSM across that organisation, and spent some time at ServiceNow before coming to BMC Software.

My desire is to actively engage with the community (you) and have what I hope are active discussions through this forum and others, in order to ensure that we are building a tool that helps you solve your business challenges on a daily basis. Part of my role here at BMC is to ensure that your voice is heard and that we are investing in the right areas to drive our product and your success forward.

When I'm not talking about CMDB, working on the product roadmap or out talking with you, our customers, I spend my spare time watching movies, enjoying the company of friends and pursuing a new pastime of Husky Scootering; feel free to ask me more about that if you really want to know!

I'm based in the university town of Cambridge in the UK and look forward to our interactions, whatever the medium, as we at BMC evolve Atrium CMDB to meet your needs and support your success.

Contact me via twitter @flirble or via this community of course :-)

If you are having issues with Atrium CMDB, I strongly suggest you log an issue with support and your account team before engaging me directly, but if you feel I might be able to help then I'm happy to do so where I can.




The CMDB is great out of the box. All the classes and relationships we ship align very well with the business environments our products serve. However! Every once in a while you'll have to extend the Common Data Model with a couple more attributes, class definitions or product data loads. Last October, I presented a webinar, "How to unlock the potential of Common Data Model in BMC Atrium CMDB", covering how to extend the CDM using Class Manager. You can view the recording here. In this blog post, I would like to extend the topic to talk about products which extend the CDM as part of an installer or by loading a CMDB Extension with the Maintenance Tool.



Several current products use extensions to the CMDB, including BMC ProactiveNet Performance Manager (BPPM) and Configuration Discovery Integration for CMDB (CDI), and extensions are also used for loading Product Catalog data updates. The reason for the separate install step is to make all the required changes semi-automatically, and hopefully painlessly. This takes the element of human error out of it, so it can be either an "it worked, no big deal" or an "it didn't work, what happened?" experience. Whenever I look at a failure, I always ask the question: "What was unexpected in the environment that blocked the operations the extension was trying to perform?" I seek to understand. There is a "Zen" to it. It should be a rational exercise, not a physical one like trying to stuff pythons in a paper bag. Below, I will highlight some of the ways the product has tried to prevent, improve, or minimize the room for failure, and share some of the ways I think about it as I look into issues. Hopefully this leads to a more peaceful state for your CMDB and yourself.


As far as ensuring the server is in a good state before running the install: this is a general feature, and a good one to have even when not running installations, so the AtriumCoreMaintenanceTool has a Health Check feature. You can read more about it in the documentation here. You can find more about other tools that help in this regard in Jesse's post 7 Tools to verify BMC Atrium CMDB is working well, in the section on Verifying Product Installation and Environment.


If you've had to extend the model, or are planning on doing so, then this is what you should know:


Extending the CDM means that you're altering tables in the database to add additional columns, or maybe even creating an altogether new table to house the data you'll be collecting from your environment. Simply said, we need to add more labels, so we have to define containers for our data.


With that in mind, imagine that you already have a physical container, like a cupboard or a jar, and you needed to add just a few things to it: you were able to add two labels but could not fit the third because it ran out of space. This would make sense to us humans, but the installer still thinks that items 1 and 2 need to be added because it was not told otherwise. You could argue, "can't the installer be made more intelligent, to examine what is already in the cupboard or jar?" Now consider the case where the items interact with one another as they are added, and have different rules about what can be stored multiple times.


Does that make the challenge of ensuring reliable completion more complex? You bet it does! So the extension installer follows strict orders: install, check for errors; in case of unforeseen circumstances, wait for further instruction. Basically, the extension installer is instructed to add all three items, and that is exactly what it tries to do. If it fails during the install and the install is attempted again, this causes a data structure collision, because items 1 and 2 already exist, and hence a failure of the extension loader. Running it again will not change these results; it will keep failing on exactly the same collision points it failed on before. So don't run the extension loader again hoping for different results. Instead, look at the logs and see where the installer hit its first issue. There could have been a requirement to create a dataset first: either a manual step that was missed or a dependency that was violated. When investigating issues, it is sometimes useful to look at the manifest of files in the extension to see what it is trying to load. This helps in understanding why an error occurs.



There are two types of extension loaders: those that come with an executable (e.g. simExtLoader.exe or pnExtLoader.exe) and AtriumCore\AtriumCoreMaintenanceTool.cmd. AtriumCoreMaintenanceTool is installed with Atrium Core version 7.5 and later and provides the tool for loading CMDB extensions, so more information about extensions and what they contain can be found in the Atrium Core documentation.


Executable loaders can use CMDBDRIVER to deliver their "payload" from each subdirectory of the loader. For example, the 500-SIM-CDM-Extensions directory for simExtLoader has class extensions as well as *OSD.txt files with instructions on what to do. The reason for executable loaders is to perform additional steps or checks as part of the install, but the subset that installs CMDB extensions is largely the same.


Some loaders also add Normalization or Reconciliation jobs, Federated Launch Links and so on. These will be stored in "arx" files in subdirectories of the extension loader. These additional records can also only be added once; if you run the installer again they will cause further failures, but this time as data collisions rather than data structure collisions. Again, here the installer is programmed to install all these things as if they never existed, and the instructions basically say:


"Create New", rather than using this logic: "If you find it there already, then update it or move on to the next".


This is so because the original need to extend the CMDB still applies and the installer just knows that it has not been completed yet.


Above, I mentioned the marching orders of the extension loader: install, check for errors; in case of unforeseen circumstances, wait for further instruction. The latter part was added in CMDB 7.6.04. If an extension is currently loading, or has attempted to load and failed, why should it be allowed to run again and make a mess of things? It shouldn't, so a simple mechanism was put in place to prevent that situation. When it runs, it adds records to a form called "Share:Application_Properties" that reflect the version of the extension and record the status of the installation progress. If the installer needs to install Product Catalog data, which is also considered "an extension" of the CMDB, then you'd be referencing, for example, ProductCatalogData-2012_07_03.xml. The name of this file reflects the Product Catalog data load made available on July 3rd, 2012. Its contents will have a GUID reference for Share:Application_Properties that checks the version of the PCT installed on your system.



In the case of the Product Catalog, that ID is "PD00C04FA081BA0SvxQgaxH66Q1wQA", and it is validated for version 7.6.04 or greater.

The next GUID the loader adds to the Share:Application_Properties (SAP) form is "BMCPC00C04FA081BAbpfqSA9gV41Ar". This particular ID is then used to track the progress of the data load. This is done by adding a record to SAP with a Name of Status and a Value of Running. If the install fails, the value is changed to Failed.



At the conclusion of the install, this Status record is removed. If this record still exists and has a Failed status, the installer is not going to let you run the extension again.
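
If an extension refuses to run again, you can look for the leftover Status record yourself. A sketch, assuming the SQL view follows the usual AR convention of replacing ':' with '_' in the form name and that the Name and Value fields surface as columns of the same names:

select * from Share_Application_Properties where Name = 'Status' and Value = 'Failed'
-- a surviving row like this is what blocks the loader from running again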



@BMC Software we have designed this part specifically for the reasons described above, which are:


- Run the Health Check or otherwise verify the system is in a good state before installing extensions

- Run the extension loader once only

- Evaluate the reasons for failure and address them individually

- If you can't complete them manually, then restore the database, fix the original condition and run the installer once. Repeat if necessary, or identify the individual component failures and complete the extension loading manually, component by component.



I hope this post provides a better understanding of the rules the extension loaders live by, and some of the thinking behind them. Hopefully this leads to more zen-like experiences with extending the CDM. I have probably skipped something, so I look forward to further questions on this topic so that we can have full disclosure here for anyone to follow.


If you like content like this, see BMC Remedy Pulse Blogs for more like it.


If you have ideas on ways for Customer Support work better with you to enable success, join the Customer Support Community and provide ideas, feedback, or suggested improvements.



Thank you for reading!

