
Hello and thank you for reading. First let me apologize for the highly granular technical details in this discussion.


This information is provided by the official BMC Software AtriumCore Support organization in cooperation with the AtriumCore Customer Engineering teams in Austin, TX and San Jose, CA. We stand behind its accuracy and apologize for any typos that were not caught during publishing. This article supersedes any previous discussions on this topic and should be treated as the authoritative reference.

Additional references are available and recommended for review:


BMC Documentation:

Best Practices for the Common Data Model - BMC Atrium Core 9.0 - BMC Documentation




The following is an explanation of the CMDB Common Data Model design and how it behaves when overlays are introduced. Let's begin with the following general statement:


"Forms owned by the BMC AtriumCore CMDB application should never be overlaid, customized, or otherwise altered outside of the CMDB Class Manager. AR Developer Studio has no role in managing CMDB data structures; using it to modify or overlay CMDB schemas can have severe impact on the functionality of the CMDB and can lead to data loss."


Making the right, informed decisions when modifying CMDB data structures can make all the difference in preventing long-term impact on performance and even AR Server stability. I'll begin by explaining the CMDB design a little.


CMDB Application - BMC.CORE


CMDB forms are created in the same fashion as inner joins, where each joined form has a common field used to find specific results. Querying these tables as a "joined view" will return only the records where both tables have exactly the same value in one column of the row (field). It's like pinpointing a GPS location on the map by its coordinates. In BMC's CMDB, these inner joins use the exact InstanceId value to find a unique match.

To clarify this point, the T(#) tables are indeed "physical" tables in whatever RDBMS you have deployed on. I put "physical" in quotes because technically it's just electromagnetic charge; no one is serving dinner on that table. By physical I mean the difference between a database table and a joined view, where the view is a window that presents the data in a controlled way.

The ARSCHEMA table has records that include the schema IDs that make up the AR metadata. "BMC.CORE:BMC_BaseElement", "BMC.CORE:BMC_ComputerSystem_" and "BMC.CORE:BMC_ComputerSystem" are listed there as tables or joins, but they are different DB objects (views) created to present the CMDB data in an ARSYSTEM database.
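To make this join-of-two-halves design concrete, here is a minimal sketch using SQLite as a stand-in RDBMS. The table names T503/T522 and the join column C179 follow the example further down; the other columns and all data values are made up for illustration, and a real Atrium schema has far more fields:

```python
import sqlite3

# Miniature stand-ins for two CMDB "T" tables; C179 plays the role of InstanceId.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T503 (C179 TEXT PRIMARY KEY, C8 TEXT)")  # BaseElement half
con.execute("CREATE TABLE T522 (C179 TEXT PRIMARY KEY, C2 TEXT)")  # ComputerSystem_ half
con.execute("INSERT INTO T503 VALUES ('OI-0001', 'server01')")
con.execute("INSERT INTO T522 VALUES ('OI-0001', 'Demo')")

# The view joins the two halves on InstanceId, the way the
# BMC.CORE:BMC_ComputerSystem view joins its underlying tables.
con.execute("""
    CREATE VIEW BMC_CORE_BMC_ComputerSystem AS
    SELECT T503.C179 AS InstanceId, T503.C8 AS Name, T522.C2 AS OtherAttr
    FROM T503 INNER JOIN T522 ON T503.C179 = T522.C179
""")
row = con.execute(
    "SELECT * FROM BMC_CORE_BMC_ComputerSystem WHERE InstanceId = 'OI-0001'"
).fetchone()
print(row)  # one composite record assembled from both tables
```

Querying the view returns a single composite record even though the data physically lives in two tables, which is exactly the behavior described above.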



For example consider this "joining" of


T503 (BMC_CORE_BMC_BaseElement) and T522 (BMC_CORE_BMC_ComputerSystem_)


gives the result of the intersection of both tables, where InstanceId (field 179) matches a record with the same value in both tables. This produces the results defined by the view, with values end users understand as a Configuration Item (CI). Views here show the joining of two or more tables that share common data and are used to present that data to end users. With this design we can manage all aspects of the data with extreme precision.


NOTE: The schema IDs used here, 503 and 522, are not static from one database to another and can change. They are not exclusively reserved as the BaseElement or ComputerSystem table IDs; deployments in your environment will likely have different IDs. Conversely, attributes will have the same column ID from one database to another. For example, Status will always be C7. There are some exceptions for column IDs and attribute names, but in general this applies to most attributes.


I am going to add some SQL statements here to illustrate this. Running two queries that find a CI record with the same InstanceId in two different tables may seem impossible, because we understand a BMC.CORE:&lt;CLASS&gt; schema to have a unique value for field InstanceId (179). It has a unique index, so the same InstanceId should only ever identify one record. That is true of the view, but not if you separate the inner join into its two underlying tables.


Consider the following query that looks in each table:


select * from T522 where C179 = 'OI-306843f7d118455882a5847bf9200c8f'

select * from T503 where C179 = 'OI-306843f7d118455882a5847bf9200c8f'


where in my example T522 is BMC_CORE_BMC_ComputerSystem_ and T503 is BMC_CORE_BMC_BaseElement. Note that both are actual tables. We are more familiar with the name BMC.CORE:BMC_ComputerSystem, which is the view in which the joined data is displayed.


select * from T503 INNER JOIN T522 on T503.C179 = T522.C179 and T503.C179 = 'OI-306843f7d118455882a5847bf9200c8f'


This is essentially the same thing as running:


select * from BMC_CORE_BMC_ComputerSystem where InstanceId = 'OI-306843f7d118455882a5847bf9200c8f'


Here is an illustration showing these results in MSSQL Studio:




Note that the last result, where the RequestId (field C1) appears, is a composite record drawn from both tables.


This is just to illustrate how most CMDB data is stored and looked up.


If either table is altered outside the inner join (view), the view will still show the result as a whole record. We do not recommend doing this in practice, although some data manipulation can be performed by a certified CMDB Data Administrator. I can give you two reasons not to do it. First, making changes at the database level bypasses workflow. Second, altering and especially deleting data through a SQL query affects only one table and can make 'half' of the record disappear. Cases where the BMC_BaseElement part of a joined record exists while the BMC_ComputerSystem half is missing have been reported, and they are a direct result of manipulating data through the SQL back end.


For example if you run this SQL:


"delete from BMC_ComputerSystem_ where MarkAsDeleted = 1"


then you're only clearing out the ComputerSystem_ side while leaving the BaseElement table still full of that data. If you think you've done something like that in the past, you can check by running this type of SQL query:


select count(*), ClassId from BMC_CORE_BMC_BaseElement where ClassId = 'BMC_COMPUTERSYSTEM' and InstanceId not in (select InstanceId from BMC_CORE_BMC_ComputerSystem) group by ClassId


Any results from the above query mean that there are records in the BMC_BaseElement table that are not found in the BMC.CORE:BMC_ComputerSystem class container. You can substitute the BMC_COMPUTERSYSTEM ClassId in the above query with other class ID values such as BMC_IPENDPOINT and so on.
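As an illustration of how such half-deleted records come about and how the consistency check above finds them, here is a self-contained SQLite sketch; the table and column names are simplified stand-ins for the real Atrium tables, and the data is invented:

```python
import sqlite3

# Two halves of a CI record, miniaturized (names are illustrative only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BaseElement (InstanceId TEXT, ClassId TEXT)")
con.execute("CREATE TABLE ComputerSystem (InstanceId TEXT)")
con.executemany("INSERT INTO BaseElement VALUES (?, 'BMC_COMPUTERSYSTEM')",
                [("OI-0001",), ("OI-0002",)])
con.executemany("INSERT INTO ComputerSystem VALUES (?)",
                [("OI-0001",), ("OI-0002",)])

# A back-end delete against only one table orphans the other half...
con.execute("DELETE FROM ComputerSystem WHERE InstanceId = 'OI-0002'")

# ...which the consistency check from the post then surfaces.
orphans = con.execute("""
    SELECT COUNT(*), ClassId FROM BaseElement
    WHERE ClassId = 'BMC_COMPUTERSYSTEM'
      AND InstanceId NOT IN (SELECT InstanceId FROM ComputerSystem)
    GROUP BY ClassId
""").fetchall()
print(orphans)  # one half-deleted record reported per ClassId
```

Any row returned by the final query is a BaseElement record whose ComputerSystem half has vanished, exactly the symptom described above.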


Please refrain from such practices. Use CMDBDiag and the Reconciliation Delete or Purge activity instead. These tools delete data without violating data consistency.


The next layer is the AR metadata: a set of tables that hold references to the schema IDs of the actual table structures.


- Table structures use prefixes like T, H and B
- AR metadata is comprised of various tables, but for the CMDB the ones that matter most are:
  - arschema - references tables by ID (schemaid)
  - field - references columns in tables by fieldid and their association with a schemaid


There are other tables, like "escalation" (references all escalations) and "filter" (references all filters). These are not relevant to this topic, so I will not list them all here.


ARSCHEMA is also the usual "out of the box" name of the database itself, although that name can differ if AR Server was installed into a database that was created before the installer started. For the context of this posting, just note that the database can have different names, but the "arschema" table itself can only be named "arschema". It is a table inside the ARSCHEMA database. So when I mention "arschema" in the following paragraphs, I mean the arschema table rather than the database.


Records in the "arschema" table are mostly pointers to tables, using schemaid to look up the data. For example, SchemaId 456 refers to a table T456. Unfortunately these structures have no restrictions set on them and can be modified via BMC Developer Studio, becoming Overlays in "Best Practice Mode" or Customizations in "Development Mode". The important thing to understand is that even CMDB data has a T table. This discussion is not about the design of the AR Schema tables as much as it is about the CMDB data structures, so please forgive me if I leave out some details related to the AR Schema database.
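The name-to-schemaid-to-T-table mapping can be sketched as follows; the schemaid values and form names follow the earlier example, but the lookup helper and this toy "arschema" table are hypothetical illustrations, not the real AR metadata layout:

```python
import sqlite3

# Toy "arschema" lookup: each record maps a form name to its schemaid,
# and the physical table is simply "T" + schemaid.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE arschema (schemaid INTEGER, name TEXT)")
con.executemany("INSERT INTO arschema VALUES (?, ?)",
                [(503, "BMC.CORE:BMC_BaseElement"),
                 (522, "BMC.CORE:BMC_ComputerSystem_")])

def physical_table(form_name):
    # Hypothetical helper: resolve a form name to its backing T table.
    sid = con.execute("SELECT schemaid FROM arschema WHERE name = ?",
                      (form_name,)).fetchone()[0]
    return f"T{sid}"

print(physical_table("BMC.CORE:BMC_BaseElement"))  # T503
```

This is why the schema IDs differ from one deployment to another: the T-table name is derived from whatever schemaid the form happened to get in that database.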



CMDB Metadata and the Common Data Model (CDM)


The CMDB application is made of several components: some APIs, minor workflow, database tables, and the CMDB metadata. The last one is built in memory when the CMDB API is loaded by AR Monitor. It mainly queries forms like "OBJSTR:AttributeDefinitions" and "OBJSTR:Class", the two main sources for the hash tables (arrays) in which the data is processed. The main difference between CMDB data structures and "arschema" data structures is that the CMDB uses a class hierarchy that "inherits" fields from its parent tables. The inheritance is simulated; for the data structure of a CI or Relationship it just means that tables share a common purpose for data that is distributed among them. No other structure to date uses this approach to storing data within the ARSCHEMA. Only the CMDB does this.


If you would like to know more about this style of data management structure, please see the DMTF (Distributed Management Task Force) design.


In this design, all common attributes of objects are stored at the base parent class and become more specialized in the child classes. A parent class that has a child class with specialized attributes will not be aware of its child's attributes. For example, the attribute MACAddress (C260140111) will only be found in BMC_LANEndPoint; it could not be associated with a BMC_Person record, where it has no meaning. An attribute is owned by the class that most closely matches the real-world object it describes. Sending a SQL query to the BMC_BaseElement class with MACAddress in the where clause will result in the error: "Invalid column name 'MACAddress'."
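That "Invalid column name" behavior can be sketched with SQLite as a stand-in for your real RDBMS; the table layout here is a simplified assumption, and SQLite's error wording differs from MSSQL's:

```python
import sqlite3

# Parent-class table has only the common attributes; the specialized
# MACAddress column exists only on the child class's table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BaseElement (InstanceId TEXT, Name TEXT)")
con.execute("CREATE TABLE LANEndpoint (InstanceId TEXT, MACAddress TEXT)")

try:
    # Asking the parent for a child-only attribute fails...
    con.execute("SELECT * FROM BaseElement WHERE MACAddress = '00:11:22:33:44:55'")
    failed = False
except sqlite3.OperationalError:
    # ...with SQLite reporting "no such column" (MSSQL says "Invalid column name").
    failed = True
print(failed)
```

The parent simply has no knowledge of attributes that were introduced further down the hierarchy, which is the inheritance rule described above.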



These tables were designed to present data for various reasons, including data visualization. The volume of data reflects the number of discovered items of various types, and these tables were intended to be trimmed down for faster data processing: typically, the more data a table holds, the longer a lookup takes. Our expectation was somewhere in the millions, and some of the largest deployments we have seen are in the range of ~80,000,000 records. But those are just the CIs. There are also Relationships, which also use inheritance. The total size of the CMDB data can be in the billions, although such data volumes would require specialized hardware to process.


All CMDB class names typically begin with BMC.CORE, known as the namespace, and each object has a human-understandable ClassId that adds meaning. ComputerSystem, Person, NetworkPort: these all mean something to us within the context of ITIL. Attributes accumulate down the hierarchy; the further a class is removed from its parent, the more specialized it becomes. For example, all attributes of the BaseElement class are inherited by all CI classes, but BaseElement is not aware of any attributes of its children. Editing BMC.CORE forms in Dev Studio edits only that one table and does not push the change to any other table. For that you need the Class Manager exclusively.


BMC AR Developer Studio has the ability to create overlays, which is the "modern" way to introduce customizations to the ARSCHEMA. However, these are still customizations that live outside of the CMDB. Unfortunately we cannot prevent AR Administrators from adding customizations to the CMDB, given the nature of the features Dev Studio offers, but it does allow us to separate customizations far more easily than in the past.


All CMDB data structure changes need to be done with the Class Manager, which builds the set of instructions for CMDBDRIVER, which in turn performs the data structure changes in the CMDB. Changes to a parent class will always propagate to its children. If you use BMC Developer Studio to add overlays to BMC.CORE:BMC_BaseElement, for example, that change will be set only on BMC.CORE:BMC_BaseElement and will not propagate to its child classes.


Asset Management (AM), as an example, is an application that takes advantage of the CMDB data store, but it does not use the inheritance model of the CMDB. Since version 8.0, AM stores its data in AST:Attributes and qualifies CMDB classes by records in AST:ClassAttributes. Please refer to Updates to CI lifecycle data attributes - BMC Remedy IT Service Management Suite 8.0 - BMC Documentation for more details of this change.


In AM you then have two halves of a composite record presented to the Asset Operator in one interface. They see the data through AST:&lt;CLASS&gt; views as one record. One half of the data is in AST:Attributes and the other half is stored in BMC.CORE:&lt;CLASS&gt;, and both are joined into one visual interface. In order to join the data into a view that gives Asset Operators the access they need, we run the CMDB to Asset Sync. In this sync, all classes selected to have a UI in Asset are processed by a triggered sync job. Updates to individual fields (attributes) are also done this way, by altering the tables in Asset. Adding overlays to AST:Attributes puts locks on these Asset views, and this often prevents the CMDB to Asset (SynchUI) functionality from completing successfully.


AST:Attributes must be edited in Base Development mode so that these changes can be picked up during the sync. CMDB2Asset Sync is overlay agnostic: it has no idea that overlays exist. The expectation by end users that CMDB2Asset Sync will update overlays goes unrealized, and unfortunately this leads to a significant amount of extra work before they reach out to BMC Support for help.


In conclusion, use overlays with caution with the CMDB data structure, or better yet not at all, and please set your expectations: you will likely be asked to undo all overlays related to the CMDB and to joins of CMDB data structures. Upgrading to AtriumCore version 9.0 will require all previously created overlays of CMDB forms to be removed.


Daniel Hudsky







Handling Roundtrips



Our Customer Programs team has posted a call for Configuration Managers to take part in our UX study sessions. Please take this opportunity to contribute to the future of Atrium CMDB and Configuration Management; see the post at BMC Remedy with Smart IT UX Design Sessions for Configuration Managers.


This blog is not about configuring BMC Atrium (data) Integrator. Instead, I want to blog about how Atrium Integrator trouble tickets get routed to the SMEs. However, if you're reading this because you want to understand AI, then look here first:


Understanding Atrium Integrator




For this blog I just want to refer to Atrium Integrator by what it does. Its function is to transfer data from various sources into data stores structured within the AR Schema.


BMC chose the Pentaho technology after researching alternatives and found this Java-based tool best qualified for data transfers. This means that AI can be used to import data into any form in the AR Schema. It does not necessarily mean that any issue encountered with a transfer is related to the Atrium Core data stores or the AR Server configuration.


Data transfers with Atrium Integrator intended for the CMDB data store are created via the Atrium Integrator console. Assignment of issues related to this is easy: BMC AtriumCore Support.


It gets more varied from there. Other BMC Remedy applications can also receive data via the Atrium Integrator. Asset Management, Change Management, and other apps can get data by adding transformation mappings with the Pentaho Spoon client. These will still show up as jobs in the Atrium Integrator console and can be triggered or run by the scheduler. Any Pentaho plugin issues can be resolved by AR Server support and Atrium Core support, but not if the trouble is with the data mapping itself. The Atrium Core and AR Server support teams will not be familiar with the requirements of applications outside their respective support boundaries. For example, if the destination form for the data is in SRM or Incident Management, support tickets sent to Atrium Core or AR Server will be rerouted to SRM or Incident Management anyway.


We always try to achieve the fastest resolution possible. That is true for any issue and applies to any group within the BMC Support organization. Customers may not see it that way, because their ticket does not seem to be getting any attention at first, and that is a concern for us as well. Our internal routing of tickets is not transparent externally. This very experience is the reason for my blog post today. We want to work with the community, and that requires communication.


This is what I want to achieve with this post today. Anyone that needs support with Atrium Integrator can help with the routing of the issue using the following logic:



If the issue is with a feature below, the best BMC Support team assignment is listed after the colon:

  • Spoon Client (not application specific): AR Server or UDM
  • Pentaho Plugin (KETTLE): AR Server or UDM
  • CMDBOutput, CMDBInput, CMDBLookup methods: Atrium Core
  • Creating AI Jobs or Schedules with existing jobs in the AI Console: Atrium Core
  • AROutput, ARInput, ARXInput methods: AR Server or UDM
  • UDM forms in general
  • Carte Server install and configuration: AR Server
  • AI Users/Roles and Permissions: Atrium Core
  • UDM Users/Roles and Permissions: UDM
  • AIE to AI job migration tool: Atrium Core
  • Installation of Atrium Integrator Server or Client: Atrium Core
  • Midtier related issue with AI console access: Atrium Core or AR Midtier
  • Application specific support other than CMDB Core forms: the application team that owns the destination form
  • Investigating scheduled jobs that did not trigger (due to AR Scheduler and AR Dispatcher): AR Server


An article published by Forbes (sponsored by SunguardAS) details why a large proportion of CMDB implementations fail.

I wanted to complement it with our perspective on the topic, based on feedback from our market and the capabilities provided by Atrium.

The trends in IT more than ever require a solid control over configurations:

  • Larger, more complex, and dynamic data centers accelerate the risk of bad changes, and push the need for automation
  • Adoption of public and private clouds result in more vendors, more operators, and integration layers
  • The accelerating demand for digital services from the business places IT in tough situations where reactivity and efficiency are key ingredients for success

This drives benefits of Configuration Management beyond what was outlined in the article:

  • Change control/change management: Documenting your environment illustrates the many interdependencies among the various components. The better you understand your existing environment, the better you can foresee the “domino effect” that changing any component of that environment will have on other elements. The end result: increased discipline in your IT change control and change management environment.
  • Disaster recovery: In the event of a disaster, how do you know what to recover if you don’t know what you started with? A production CMDB forms the basis for a recovery CMDB, which is a key element in any business continuity/disaster recovery plan. That comprehensive view of what your environment should look like can help you more quickly regain normal operations.

But also:

  • Automation: With the growing scale of data centers, there is no option but to automate routine tasks. That spans IT Operations Management, which needs business-driven provisioning, patching, or compliance, and IT Service Management, which needs to accelerate incident resolution by efficiently prioritizing and categorizing the work.
  • Performance and availability: With availability being so critical to business success, how can IT be proactive and fulfill SLAs if it cannot map events that impact the infrastructure to the business service that is affected? How can capacity decisions be business driven without an accurate picture of the environment?

The article lists 4 reasons for CMDB failure (competing priorities, limited resources, complacency, and an overly manual approach).
The fact that a "CMDB Project" is mentioned here is symptomatic: many organizations have initially considered only the technology aspects, rather than establishing Configuration Management as a key discipline that relies on CMDB technology. The human factor is in most cases the #1 source of failure, and there are key questions that cannot be ignored, nor forgotten throughout the implementation:

  • What is the business reason for Configuration Management?
  • What current and future problems is Configuration Management going to address?
  • Who is the sponsor for this implementation?
  • What are the processes that will interface with Configuration Management, either to provide data or to consume data?

This ensures a top-down approach that starts with a vision and drives the boundaries of the data model, the types of integrations, etc.

Once the implementation has kicked off, there are other reasons that can lead to failure such as:

  • The data getting into the CMDB is not governed correctly: it is Configuration Management’s responsibility to ensure that the data is accurate, and transformed appropriately, so it can be referenced reliably. This needs regular reviews of the rules and filters that automatically govern data accuracy
  • Expecting 100% coverage before going into production is playing on the perception that CMDB fails. Configuration Management is a continuous practice, and CMDB implementations need incremental success because the target will always be moving

When it comes to tips for success, I can't agree more with the article about the absolute necessity to "automatically update the comprehensive picture of your environment to reflect the potentially tens of thousands of changes per year to your environment." Users of Atrium Discovery and Dependency Mapping (ADDM) can witness how efficient it is at feeding Atrium CMDB with trustworthy data that can be automatically synchronized with service models.

Atrium CMDB definitely provides the most comprehensive solution, in terms of its capabilities to handle incoming data, possible interfaces for data consumers, scalability, and the wealth of integrations that exist with BMC or other vendor products.

Recommended reading: Critical Capabilities for Configuration Management Database (Gartner, June 2014).

A main benefit is that it does not require different tools for different data transformation operations. Now, because of this richness, an implementation has to start with the right understanding of the tool, as well as how it should be used. To that purpose, the documentation includes Best Practices that guide implementation towards meeting success with understanding the data model, loading data, normalizing it and ensuring correct reconciliation with other sources of data.

In summary, Configuration Management is needed more than ever and needs to be addressed as a discipline that leverages the most appropriate tools, which will guarantee data accuracy, high levels of automation, and strong integrations to drive the most value.

Atrium is the most widely deployed CMDB, so it probably has the largest track record of failed implementations. The other side of the glass means that it also has the largest number of successes. This is confirmed by its users, and the 85% failure rate is certainly not right when applied to it.


In the first of the Effect-Tech CMDB webinar series we discussed the upfront aspects of properly setting the scope of your CMDB initiative. We discussed the high level implementation choices and why a use-case driven approach might be the most optimal method to deliver value more quickly. After discussing these options, we concluded part 1 with the introduction of a service model architecture that can be used to initially model your IT environment and expand over time. If you missed the first webinar you can watch a replay of Part 1 at Effect-Tech Webinars


Please join us on October 9th as we continue the conversation in part 2 of this webinar series.  We will explore the BMC Atrium classes and which classes are most relevant to support the service model architecture introduced in part 1. Furthermore, we will talk about the role of discovery and how it can and cannot be leveraged to keep your CI's up-to-date. After discussing the CMDB classes, we broach the topic of CI relationships and simplifying which relationship types you should use to drive meaningful value without added complexity.


With classes and relationships out of the way, we steer clear of CI attributes for the time being and introduce the need to define multiple service model views that allow users to better understand the numerous CIs and their relationships that result when a complex service model is built out. Finally, we will explore the role of the CMDB in assisting application support teams in the areas of event and incident management, and potentially why integrations to discovery tools are NOT required to provide value for these app groups.


Time permitting, we will introduce Effect-Tech's CMDB methodology and best practices that your organization can use to implement CMDB in a structured and repeatable way. This methodology avoids the common implementation mistake that essentially turns your CMDB project into a data, discovery, and reconciliation exercise. By implementing the CMDB using this systematic approach, we believe your organization will gain more value from your CMDB project - more quickly.


This webinar series is presented by Rick Chen, Managing Principal at Effect-Tech. Rick shares from his wealth of CMDB knowledge and field experience.



  • CMDB class discussion
  • If it's all about relationships - what, why, and how much?
  • CMDB service model views - and why it matters
  • Addressing the needs of application support teams
  • Introducing implementation best practices to get more value, more quickly out of your CMS / CMDB


Date/Time: October 9th  at 9am (PST)


Space is limited so reserve your webinar seat today - Register

Stephen Earl

CMDB @Engage

Posted by Stephen Earl, Sep 14, 2014

I'm really looking forward to meeting our customers at BMC Engage in Orlando, Florida starting October 13th. I'm hoping to catch up with those of you I have met before and also to meet those of you I haven't met yet! I hope you will catch me in the corridors or at break or meal times to talk CMDB.


As part of my Engage activities I will be presenting Session 63, CMDB: With Great Power Comes Great Responsibility, along with Darius Wallace, where we will discuss CMDB futures and best practices for implementing the CMDB in your business. The CMDB is part of the core of any ITSM implementation, and implemented correctly it can bring great benefits to your ITIL based processes; a great CMDB implementation can make you a Hero. However, it is easy to become so focused on the CMDB implementation that it, unintentionally, becomes the Super Villain of your environment.


We will be discussing our new Best Practices resource, how we at BMC intend to help your implementation become a Hero and discuss our experiences in the field working with customers.


This session contains information you will find useful no matter what stage your implementation is at and no matter your skill level: Introductory, Intermediate, or Advanced.


We look forward to meeting you all in Orlando at the Swan & Dolphin in Walt Disney World from 13th - 16th October!


New features of Atrium SSO 9.0


Adam Linehan is a Lead Product Developer and he presents on Atrium SSO 9.0 during this combined Connect with Atrium and Connect with Remedy monthly webinar event on Wednesday, September 24, 2014.

In this session we explore the new features available with Atrium SSO 9.0 and how they will help to simplify the setup, configuration, and troubleshooting of an Atrium SSO system.

Below is the recording from the live webinar including Q&A asked during the event.

Additionally, you can find questions discussed during the webinar session in an attachment or on this online document


If you have any additional questions on the information provided in this webinar, we suggest you create a new discussion in the BMC Remedy Community group.


Please add any suggestions or feedback on this webinar session.

For more information, or if you have questions, please contact Gregory Kiyoi


It is often said that service is everybody's business. If you approach work in a function of Business Service Management, then "service" is literally at the very center of your business and your acronym. So how do you measure your impact on service? Everywhere and always. This is the story of a personal journey: looking for where to configure it, and finding it built into the very fabric of the application. Just like service in the general sense.


So, I work in support, and I thought I was doing pretty well with the daily "here I come to save the day" routine by now. Meeting after meeting, knocking out one solution after another. Life was great. Then one day I was faced with an upgrade to AtriumCore 8.1, which added a new link to the AtriumCore console labeled "Service Context Administration". Of course I had to pursue it, because something new is always interesting to me, and I clicked the link out of curiosity. Nothing happened. I asked about it during a meeting with some colleagues and learned "It's an Asset Management feature, it's configured for the Asset Management console". I was fine with that and went on with my life, but a seed of curiosity was planted.


Then one night, while I was roasting peanuts and deep-frying a fish, a couple of dwarves from the Lone Mountain came over. OK, they were not exactly dwarves, but my adventure with Service Context began. I think it started with someone asking about UDDI Atrium Web Services, which turned out to actually be the AR Server Midtier UDDI, which does not resolve, so we wrote that off as an "AR Server" issue. But then "Atrium Web Service" and "Service Context" came up in the same sentence. I looked at the documentation, once again realized that this is a service provided by the AR Web Services, and dismissed it. Once again, the murky waters of lake "Service Contextia" were laid to rest to live another day.


But then came August 12th, 2014. We were having a team meeting and I was showing tricks for checking and verifying the functionality of the AtriumCore console, which includes the Federation configuration. I noticed that Service Context had been added to my lab system's Federation Manager console, where I had installed the 8.1 SP1 patch. Suddenly a government-approved light bulb went off in my head. This must be it! Service Context was staring right back at me.



I thought that this was going to be very simple. After all, Federation is just a matter of having a plugin configured in ar.cfg and a Java process loaded with the Atrium Shared Java Plugin.


I checked that all was in order and then saw the Launch link in AtriumExplorer. At this point I thought I had climbed the mountain, and I published a Webex recording on how to make the best of Service Context. I thought the matter was closed. I put my fedora on, flipped up the collar of my raincoat, and walked out into the rain, confident that anyone could follow this setup.


The next day I got an unexpected reply from a reader: this was still not working for them.


So, what could this possibly be? What is this "Service Context"? Let's go back to basics and see what the documentation says about it.


I downloaded the entire AtriumCore Help Documents package from EPD and started to look for "Service Context" and also checked the iDocs: Troubleshooting BMC Atrium Service Context - BMC Atrium Core 8.1 - BMC Documentation


And now I hit the jackpot. Finally, I found what I was looking for. So, if you're still reading this, you probably want to know what it really is.

Say you're starting in the Asset Management console and looking at a ComputerSystem you need to work with. You need to know whether this system has any Business Services associated with it, and what impact those services would see if you were to put this system into maintenance. That is exactly the question Service Context answers.


So, my confusion was whether this is an AtriumCore Web Service or whether it's served by the Midtier. And the answer is: the AR Midtier!



I've captured the service registration below:




Here you can see that Service Context uses the Midtier arsys root directory to register itself.

That's it. The installer had actually done this already, so there was nothing for me to configure.


So, basically this is the outcome of the quest. Start here:




And get this "Service Context" for that computer system Dell Rack System - 517P95J:




It seems too simple in hindsight.


In summary, if you're interested in using Service Context, but making it work seems like a quest to the Lost Mines of the Lonely Mountain, then perhaps sharing my personal experience with this module can be of some help. I think just understanding what the "context" means can help. A ComputerSystem can be investigated from the ITSM Asset Management console in the context of its related business services, and that is what the Service Context feature of AtriumCore is all about. Did you have a similar experience?


To see more like this, see BMC Remedy Support Blogs


>> Atrium Webinar Series 


Dear Users,


We would like to take this opportunity to thank everyone who participated in our Atrium Webinar, "Working with the Reconciliation Engine", on August 14, 2014.

If you missed this event, or if you would like to go through the recording, watch the video below.



Additionally, you can find the presentation from this webinar session in the attachment.


Please add a comment with any suggestions or feedback on this webinar session; we welcome all your feedback.


Recently, I was involved in helping a customer resolve some data issues. There were duplicate records in different CMDB classes, populated in the ADDM dataset as well as in the production dataset, and the consuming applications were impacted by the inaccurate data in the CMDB. While analyzing this, we found that the root cause of the duplicate CIs in the production dataset (i.e., the asset dataset) was improper reconciliation identification rules. The ADDM reconciliation job was using ADDMIntegrationId as part of its identification rule, which caused duplicate CIs; for example, there were two identical computer system CIs in the asset dataset with different ADDMIntegrationIds. This is an example of how critical the reconciliation process is to maintaining the quality and accuracy of configuration items in your CMDB.


Before we go further on ADDMIntegrationId use, let's understand how the ADDMIntegrationId is used by ADDM and the CMDB.


The ADDMIntegrationId attribute in a CMDB class holds a unique key populated by ADDM as part of the CMDB Sync operation. This ADDM-specific key helps the CMDB Sync operation decide whether a CI already exists in the CMDB before performing an insert or update operation.
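The sync decision described above behaves like an upsert keyed on that attribute. Here is a minimal Python sketch of the idea; it is a hypothetical illustration, not the actual ADDM or CMDB code, and the function name and record shapes are invented for the example:

```python
# Hypothetical sketch (not actual ADDM/CMDB code): the CMDB Sync operation
# behaves like an upsert keyed on ADDMIntegrationId.

def cmdb_sync(cmdb, incoming):
    """Insert `incoming` into the `cmdb` list, or update the CI that
    already carries the same ADDMIntegrationId."""
    for ci in cmdb:
        if ci.get("ADDMIntegrationId") == incoming["ADDMIntegrationId"]:
            ci.update(incoming)      # key found: update the existing CI
            return "update"
    cmdb.append(dict(incoming))      # key not found: insert a new CI
    return "insert"

dataset = []
ci = {"ADDMIntegrationId": "addm-key-1", "Name": "host01"}
assert cmdb_sync(dataset, ci) == "insert"
assert cmdb_sync(dataset, {**ci, "Name": "host01-renamed"}) == "update"
assert len(dataset) == 1             # the key prevented a duplicate on resync
```

Note that this uniqueness only helps the sync step itself; as we'll see, it does not make the attribute a good identification key for reconciliation.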


Knowing that the ADDMIntegrationId is a unique key for identifying a CI, it's tempting to use it in reconciliation identification rules. This is the most common mistake seen in CMDB data issues. The catch is that the key identifies the discovery record rather than the physical item, so the same system can end up with more than one ADDMIntegrationId over time, and an identification rule based on it will then fail to match the existing CI. This attribute has in the past been used as a workaround in CMDB reconciliation rules for the Software Server, Database, and Cluster CI classes, but with the fix described in KA411090, this workaround is no longer needed.


For reconciliation identification activity, it is very important to use attributes that provide CI uniqueness in the identification rules, and it's equally important to use discoverable CI attributes. For example, for a Computer System, use the serial number, host name, and domain attributes to determine CI uniqueness.


Here are the few rules I follow when investigating issues like this:


1)     Make sure you understand why there are errors before implementing changes to RE rules.

2)     Avoid using ADDMIntegrationId for strong member CIs, e.g. Computer System. If you are tempted to do this, revisit rule 1 above.

3)     Using ADDMIntegrationId for weak class members is less risky, because weak members such as Processor, Monitor, and Product don't have enough attribute information to identify CIs uniquely. You can learn more about how to investigate data issues here.
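To make rule 2 concrete, here is a small hypothetical Python sketch (not the Reconciliation Engine itself; the helper and record shapes are invented) of why identifying a Computer System by ADDMIntegrationId can create a duplicate while discoverable attributes match the existing CI:

```python
# Hypothetical sketch (not the Reconciliation Engine): an identification
# rule matches an incoming CI against the target dataset on a set of keys.

def identify(dataset, incoming, keys):
    """Return the existing CI that matches `incoming` on all `keys`, else None."""
    for ci in dataset:
        if all(ci.get(k) == incoming.get(k) for k in keys):
            return ci
    return None

asset = [{"Serial": "517P95J", "HostName": "dell01", "Domain": "corp",
          "ADDMIntegrationId": "addm-key-1"}]

# The same physical machine rediscovered under a fresh ADDM key:
rediscovered = {"Serial": "517P95J", "HostName": "dell01", "Domain": "corp",
                "ADDMIntegrationId": "addm-key-2"}

# A rule keyed on ADDMIntegrationId finds no match, so a duplicate CI gets created.
assert identify(asset, rediscovered, ["ADDMIntegrationId"]) is None

# A rule keyed on discoverable attributes matches the existing CI.
assert identify(asset, rediscovered, ["Serial", "HostName", "Domain"]) is asset[0]
```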



I hope this helps. Please rate my blog to let us know if it was useful. For more like this, see BMC Remedy Support Blogs


>> Atrium Webinar Series 


Dear Users,

We would like to take this opportunity to thank everyone who participated in our Atrium Webinar, "Understanding the Seamless Data Pump Products", on July 10, 2014.

If you missed this event, or if you would like to go through the recording, watch the video below.

Additionally, you can find the presentation from this webinar session in the attachment.

Please add a comment with any suggestions or feedback on this webinar session; we welcome all your feedback.


Hear from Amit Maity, a senior technical instructor at BMC Software.


Amit will review Federation in BMC Atrium CMDB including different types of data, provide considerations and recommendations on federating data, review the methods of federation and provide a demo.




View our IT training for Atrium CMDB

Join the BMC Customer Success (consulting, education, and support) community.  Hear from experts in consulting, education and our centers of excellence. 


>> Atrium Webinar Series 

Dear Users,

We would like to take this opportunity to thank everyone who participated in our Atrium Webinar, "AtriumCore Data and System Sizing", on June 12, 2014.

If you missed this event, or if you would like to go through the recording, watch the video below.

Additionally, you can find the presentation from this webinar session in the attachment.

If you have any additional questions on AtriumCore Data and System Sizing, we suggest creating a new discussion in the BMC Atrium Community group.

Please add a comment with any suggestions or feedback on this webinar session; we welcome all your feedback.


I'd like to call this an "age old" idea, but it's a bit early to call software issues "age old" just yet, because software has not really been around for ages. Then again, when I look at my 10-year-old son, I could fit a couple of "ages" of 10-year-olds into the era that software has been around. So I think I'm OK saying that the concept of retaining software records for a specific purpose is linked to an "age old" problem with that type of retention. In our case, the case of the BMC AtriumCore CMDB, that type of record would be an Asset record: a CI, a Configuration Item, or what some may think of as "an inventory".



Inventory records are definitely useful to have: for trends, cost audits, outage research, and so on. Especially if the status of the record has been tracked through its historic changes, what we refer to as the Asset Life Cycle (AssetLifecycleStatus). You can see when the CI was Ordered, Deployed, End-of-Life'd, and Deleted. All useful stuff. All of these lifecycle states have their purpose, and a CI will live very happily in the BMC.ASSET dataset. But not "ever after". Enter the arena: Mark As Deleted = Yes.



You see, the last status I've listed, namely "Deleted", is actually an instruction for the CMDB workflow to purge that record, and hence the record is "no more" once the BMC AtriumCore Reconciliation Engine (RE for short) reconciles that data. OK, so why is this a problem?


The issue is people's expectations of what these records should be. Their presence in the BMC.ASSET dataset suggests they are indeed inventory records, and that they should be retained even if they are marked as deleted, also known as "soft" deleted: the record is still present in the database, but it is no longer an active asset in the production environment.



So, on first thought that should actually be no problem. The CI is there for me to look at historical changes, including its time of order, deployment, and removal (deletion) from production. No issues there. The problem is with the relationships to the components that were hosted on that computer.

Let me explain why.



A typical CI is not a standalone record; it is related to other CIs, and these relationships have attributes. For example, we have cardinality, where a CI relationship can be defined as one-to-many, many-to-one, or many-to-many. Then there is the Weak relationship type, where the CI on the "left" side of the relationship has to exist before the "right" side can be detected by some means of electronic discovery. This means that without the CI on the left, usually a ComputerSystem CI, none of the hosted components on the right would exist. There may be a license cost related to the operating system running on that computer, but no discovery tool could scan it if the computer is not deployed and turned on. Additionally, there is a Cascade Delete option, which is not enabled out of the box but can be toggled on for Hosted System Components relationships.



So, back to "Mark As Deleted = Yes", which I am going to refer to as MAD.Y (and MAD.N for No). What is not immediately obvious is that there is workflow that acts on changes to MAD. All relationships are removed from a CI when MAD.Y is set. If you have Cascade Delete on, then MAD.Y will also propagate to the Hosted System Component CIs. This means the computer system will be set for deletion on the next Purge activity, and so will the Operating System that was hosted on it. So if you're tracking the license of that Operating System, that record would now be gone.
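As a rough illustration of that behavior, here is a hypothetical Python sketch (not the actual CMDB workflow; the function and relationship shapes are invented) of MAD.Y removing a CI's relationships and, with Cascade Delete on, propagating the flag to Hosted System Components:

```python
# Hypothetical sketch (not actual CMDB workflow): setting MAD.Y drops the
# CI's relationships, and Cascade Delete propagates the flag downstream.

def mark_as_deleted(ci, relationships, cascade=False):
    """Flag `ci` as soft-deleted and return the surviving relationships."""
    ci["MarkAsDeleted"] = "Yes"
    mine = [r for r in relationships if r["source"] is ci]
    rest = [r for r in relationships if r["source"] is not ci]
    if cascade:
        for r in mine:
            if r["type"] == "HostedSystemComponents":
                rest = mark_as_deleted(r["dest"], rest, cascade=True)
    return rest

host = {"Name": "dell01"}                      # ComputerSystem CI
osys = {"Name": "Microsoft Windows Server"}    # hosted OS Product CI
rels = [{"source": host, "dest": osys, "type": "HostedSystemComponents"}]

rels = mark_as_deleted(host, rels, cascade=True)
assert rels == []                        # the relationship is gone
assert osys["MarkAsDeleted"] == "Yes"    # the license CI is now flagged too
```

With `cascade=False`, only the host would be flagged; the hosted OS CI would survive, but its relationship back to the host would still be removed, which is exactly the fragmentation discussed next.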



This can be a problem for customers who like to trace license management back to the Computer System where the software was hosted. Generally an end user's Operating System would not be a big deal, but an Oracle or Server license can easily exceed $10,000, and losing track of that CI could present issues when audits are conducted. So, keeping the ComputerSystem and the hosted Product CIs becomes an inventory issue. What do we do with these records? Once they are MAD.Y, their relationships are fragmented and not traceable backward without some effort, and certainly not via automation. Yet this is exactly what ends up happening when Purge activity is not performed on that data in BMC.ASSET. Eventually it will require some cleanup, and nobody is very happy when they have to do it.



We get cases in our incident management tracking system where a customer complains about performance, or about errors in the reconciliation logs related to "Multiple Matches found". Many of these issues result from a data retention policy where the customer does not allow the Purge of MAD.Y records from BMC.ASSET, for the reasons I've outlined above. However, it does not have to be that way.


As of now, most Reconciliation jobs use this sequence:



Purge (or Delete) BMC.ADDM and BMC.ASSET (Identified Only)



Although this sequence makes the most sense, it includes the not-so-popular Purge of the MAD.Y records from BMC.ASSET. The truth is that if you want your data to be healthy, it needs to include the Purge. However, not all is lost. There is a better sequence that allows record retention.



Copy Dataset (with a qualification of "MarkAsDeleted = Yes" AND (ClassId = "BMC_Product" OR another class of interest))
Purge (or Delete) BMC.ADDM and BMC.ASSET (Identified only)



In this scenario, you use the qualification to first copy all the CIs you're interested in: those where MAD.Y is set and ClassId = 'BMC_COMPUTERSYSTEM' OR ClassId = 'BMC_PRODUCT', or whatever asset class you care about. (Note that the class conditions must be OR'ed; a single CI has only one ClassId, so an AND of two different ClassId values would never match anything.)
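The retention sequence can be sketched in a few lines of hypothetical Python (the helpers and record shapes are invented for illustration; the real activities are configured in the Reconciliation Engine, not coded):

```python
# Hypothetical sketch of the retention sequence: copy the soft-deleted CIs of
# interest into an archive dataset, then purge them from BMC.ASSET.

def qual(ci):
    """Qualification: MarkAsDeleted = Yes AND a class we want to retain."""
    return (ci.get("MarkAsDeleted") == "Yes"
            and ci["ClassId"] in ("BMC_COMPUTERSYSTEM", "BMC_PRODUCT"))

def copy_dataset(source, qualification):
    """Step 1: copy qualifying CIs into a new dataset."""
    return [dict(ci) for ci in source if qualification(ci)]

def purge(source):
    """Step 2: purge the soft-deleted CIs from the source dataset."""
    return [ci for ci in source if ci.get("MarkAsDeleted") != "Yes"]

bmc_asset = [
    {"ClassId": "BMC_COMPUTERSYSTEM", "MarkAsDeleted": "Yes", "Name": "old-host"},
    {"ClassId": "BMC_COMPUTERSYSTEM", "MarkAsDeleted": "No",  "Name": "live-host"},
]

archive = copy_dataset(bmc_asset, qual)   # e.g. a BMC.ASSETS.PURGED dataset
bmc_asset = purge(bmc_asset)

assert [ci["Name"] for ci in archive] == ["old-host"]     # retained for reports
assert [ci["Name"] for ci in bmc_asset] == ["live-host"]  # production stays clean
```

The point of the ordering is that the copy must run before the purge; once the Purge activity has run, there is nothing left to archive.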


Please see Jared Jones' PULSE blog on License Management, which covers a new feature specific to Asset Management License Management:


" SP1 includes the expanded capabilities. If you get it installed you will see quite a few additional License Types out of the box. Essentially all the previous License Types now have an additional "_All" type that includes the other software classes."


Your copied dataset can then have a name like BMC.ASSETS.PURGED, which you can return to at any given time to run reports on.
It would hold only records that were at one point Marked As Deleted in the BMC.ASSET dataset and purged. Keep in mind that License Management will be updated accordingly, as configured in the Licensing Job related to the reconciliation job that handles the ADDM data.

That said, this just means it is not yet fully automated out of the box, or even considered a Best Practice, because the CMDB is not intended for asset inventory retention; but it is within the capabilities of AtriumCore and not all that complex to add as a functionality enhancement.



In conclusion, is inventory retention in the BMC.ASSET dataset good or bad? Well, it's definitely not in the design of the CMDB, which is supposed to keep records of configured items in production environments, and it can cause data cleanup issues. So, my vote leans toward "bad". You can do better with this "age old" problem.


I hope this helps, please rate my blog below or add comments on your experiences.  See more like this at BMC Remedy Support Blogs.


Daniel Hudsky
