
BMC Atrium CMDB



This blog is not about configuring BMC Atrium (data) Integrator. Instead I wanted to blog about how Atrium Integrator trouble tickets get routed to the SMEs. However, if you're reading this because you want to understand AI, then look here first:

 

Understanding Atrium Integrator

 

 

 

For this blog I just want to refer to Atrium Integrator in terms of what it does. Its function is to transfer data from various sources into data stores structured within the AR schema.

 

BMC chose the Pentaho technology after researching alternatives and found this Java-based tool best qualified for data transfers. This means that AI can be used to import data into any form within the AR schema. It does not necessarily mean that any issue encountered with the transfer will be related to Atrium Core data stores or AR Server configuration.

 

Data transfers with Atrium Integrator intended for the CMDB data store are created via the Atrium Integrator console. Assignment of issues related to this is easy: BMC AtriumCore Support. 

 

It diverges from there. Other BMC Remedy applications can also receive data via Atrium Integrator. Asset Management, Change Management, and other apps can get data by adding transformation mappings with the Pentaho Spoon client. These will still show up as jobs in the Atrium Integrator console and can be triggered manually or run by the scheduler. Pentaho plugin issues can be resolved by AR Server support and Atrium Core support, but not if the trouble is with the data mapping itself. The Atrium Core and AR Server support teams are not going to be familiar with the requirements of applications outside of their respective support boundaries. For example, if the destination form for the data is SRM or Incident Management, then support tickets sent to Atrium Core or AR Server support will be rerouted to SRM or Incident Management anyway.

 

We always try to achieve the fastest resolution possible. That is true for any issue and applies to any group within the BMC Support organization. Customers may not see it that way because their ticket doesn't seem to be getting any attention at first, and that is also our concern. Our internal routing of tickets is not transparent externally. This very experience is the reason for my blog post today. We want to work with the community, and that requires communication.

 

This is what I want to achieve with this post today. Anyone who needs support with Atrium Integrator can help route the issue using the following logic:

 

 

If the issue is with a feature in the area on the left, the best BMC Support team assignment is on the right:

  • Spoon Client (not application specific) → AR Server or UDM
  • Pentaho plugin (KETTLE) → AR Server or UDM
  • CMDBOutput, CMDBInput, CMDBLookup methods → Atrium Core
  • Creating AI jobs or schedules with existing jobs in the AI Console → Atrium Core
  • AROutput, ARInput, ARXInput methods → AR Server or UDM
  • UDM forms in general → UDM
  • Carte Server install and configuration → AR Server
  • AI users/roles and permissions → Atrium Core
  • UDM users/roles and permissions → UDM
  • AIE to AI job migration tool → Atrium Core
  • Installation of Atrium Integrator Server or Client → Atrium Core
  • Midtier-related issues with AI console access → Atrium Core or AR Midtier
  • Application-specific support other than CMDB Core forms → the application team that owns the destination form



An article published by Forbes (sponsored by Sungard AS) details why a large proportion of CMDB implementations fail.

I wanted to complement it and provide our perspective on the topic, based on feedback from our market and the capabilities provided by Atrium.


The trends in IT more than ever require a solid control over configurations:

  • Larger, more complex, and dynamic data centers accelerate the risk of bad changes, and push the need for automation
  • Adoption of public and private clouds results in more vendors, more operators, and more integration layers
  • The accelerating demand for digital services from the business places IT in tough situations where reactivity and efficiency are key ingredients for success


This drives benefits of Configuration Management beyond what was outlined in the article:

  • Change control/change management: Documenting your environment illustrates the many interdependencies among the various components. The better you understand your existing environment, the better you can foresee the “domino effect” that changing any component of that environment will have on other elements. The end result: increased discipline in your IT change control and change management environment.
  • Disaster recovery: In the event of a disaster, how do you know what to recover if you don’t know what you started with? A production CMDB forms the basis for a recovery CMDB, which is a key element in any business continuity/disaster recovery plan. That comprehensive view of what your environment should look like can help you more quickly regain normal operations.

But also:

  • Automation: With the growing scale of data centers, there is no option but to automate routine tasks. That spans IT Operations Management, which needs business-driven provisioning, patching, and compliance, and IT Service Management, which needs to accelerate incident resolution by efficiently prioritizing/categorizing the work, etc.
  • Performance and availability: With availability being so critical to business success, how can IT be proactive and fulfill SLAs if it cannot map events that impact the infrastructure to the business service that is affected? How can capacity decisions be business driven without an accurate picture of the environment?


The article lists 4 reasons for CMDB failure (competing priorities, limited resources, complacency, and an overly manual approach).
The fact that a “CMDB Project” is mentioned here is symptomatic: many organizations have initially considered only the technology aspects, rather than establishing Configuration Management as a key discipline that relies on CMDB technology. The human factor is in most cases the #1 source of failure, and there are key questions that cannot be ignored, nor forgotten throughout the implementation:

  • What is the business reason for Configuration Management?
  • What current and future problems is Configuration Management going to address?
  • Who is the sponsor for this implementation?
  • What are the processes that will interface with Configuration Management, either to provide data or to consume data?

This ensures a top-down approach that starts with a vision and drives the boundaries of the data model, the types of integrations, etc.


Once the implementation has kicked off, there are other reasons that can lead to failure such as:

  • The data getting into the CMDB is not governed correctly: it is Configuration Management’s responsibility to ensure that the data is accurate, and transformed appropriately, so it can be referenced reliably. This needs regular reviews of the rules and filters that automatically govern data accuracy
  • Expecting 100% coverage before going into production plays into the perception that CMDB fails. Configuration Management is a continuous practice, and CMDB implementations need incremental successes because the target will always be moving


When it comes to tips for success, I can’t agree more with the article about the absolute necessity to “automatically update the comprehensive picture of your environment to reflect the potentially tens of thousands of changes per year to your environment.” Customers using Atrium Discovery and Dependency Mapping (ADDM) can witness how efficient it is at feeding Atrium CMDB with trustworthy data that can be automatically synchronized with service models.


Atrium CMDB definitely provides the most comprehensive solution, in terms of its capabilities to handle incoming data, possible interfaces for data consumers, scalability, and the wealth of integrations that exist with BMC or other vendor products.

Recommended reading: Critical Capabilities for Configuration Management Database (Gartner, June 2014).


A main benefit is that it does not require different tools for different data transformation operations. Because of this richness, an implementation has to start with the right understanding of the tool, as well as how it should be used. To that purpose, the documentation includes Best Practices that guide an implementation toward success in understanding the data model, loading data, normalizing it, and ensuring correct reconciliation with other sources of data.


In summary, Configuration Management is needed more than ever, and it needs to be addressed as a discipline that leverages the most appropriate tools to guarantee data accuracy, high levels of automation, and strong integrations to drive the most value.


Atrium is the most widely deployed CMDB, so it probably has the largest track record of failed implementations. The other side of the coin is that it also has the largest number of successes. This is confirmed by its users, and the 85% failure rate is certainly not right when applied to it.





In the first of the Effect-Tech CMDB webinar series, we discussed the upfront aspects of properly setting the scope of your CMDB initiative. We discussed the high-level implementation choices and why a use-case driven approach might be the most optimal method to deliver value more quickly. After discussing these options, we concluded part 1 with the introduction of a service model architecture that can be used to initially model your IT environment and expand over time. If you missed the first webinar, you can watch a replay of Part 1 at Effect-Tech Webinars.

 

Please join us on October 9th as we continue the conversation in part 2 of this webinar series. We will explore the BMC Atrium classes and which classes are most relevant to support the service model architecture introduced in part 1. Furthermore, we will talk about the role of discovery and how it can and cannot be leveraged to keep your CIs up to date. After discussing the CMDB classes, we broach the topic of CI relationships and simplify which relationship types you should use to drive meaningful value without added complexity.

 

With classes and relationships out of the way, we steer clear of CI attributes for the time being and introduce the need to define multiple service model views that allow users to better understand the numerous CIs and their relationships that result when a complex service model is built out. Finally, we will explore the role of the CMDB in assisting application support teams in the areas of event and incident management, and potentially why integration with discovery tools is NOT required to provide value for these app groups.

 

Time permitting, we will introduce Effect-Tech's CMDB methodology and best practices that your organization can use to implement a CMDB in a structured and repeatable way. This methodology avoids the common implementation mistake that essentially turns your CMDB project into a data, discovery, and reconciliation exercise. By implementing the CMDB using this systematic approach, we believe your organization will gain more value from your CMDB project, more quickly.

 

This webinar series is presented by Rick Chen, Managing Principal at Effect-Tech. Rick shares from his wealth of CMDB knowledge and field experience.

 

Agenda:

  • CMDB class discussion
  • If it's all about relationships - what, why, and how much?
  • CMDB service model views - and why it matters
  • Addressing the needs of application support teams
  • Introducing implementation best practices to get more value, more quickly out of your CMS / CMDB

 

Date/Time: October 9th  at 9am (PST)

 

Space is limited so reserve your webinar seat today - Register

Stephen Earl

CMDB @Engage

Posted by Stephen Earl Sep 14, 2014


I'm really looking forward to meeting our customers at BMC Engage in Orlando, Florida starting October 13th. I'm hoping to catch up with those of you I have met before and also to meet those of you I haven't met yet! I hope you will catch me in the corridors or at break or meal times to talk CMDB.

 

As part of my Engage activities I will be presenting Session 63, "CMDB: With Great Power Comes Great Responsibility," along with Darius Wallace, where we will be discussing CMDB futures and best practices for implementing the CMDB in your business. The CMDB is part of the core of any ITSM implementation, and implemented correctly it can bring great benefits to your ITIL-based processes; a great CMDB implementation can make you a Hero. However, it is easy to become so focused on the CMDB implementation that it, unintentionally, becomes the Super Villain of your environment.

 

We will be discussing our new Best Practices resource, how we at BMC intend to help your implementation become a Hero, and our experiences in the field working with customers.

 

This session contains information you will find useful no matter what stage your implementation is at, and no matter your skill level: Introductory, Intermediate, or Advanced.

 

We look forward to meeting you all in Orlando at the Swan & Dolphin in Walt Disney World from 13th - 16th October!



It is often said that service is everybody’s business. If you approach work in I.T. as a function of Business Service Management, then “service” is literally at the very center of your business and of your acronym. So how do you measure your impact on service? Everywhere and always. This is the story of a personal journey: looking for where to configure it, and finding it built into the very fabric of the application. Just like service in the general sense.

 

So, I work in support and I thought I was doing pretty well with the daily "here I come to save the day" routine by now. Meeting after meeting, knocking out one solution after another. Life was great. Then one day I was faced with an upgrade to AtriumCore 8.1, which added a new link to the AtriumCore console labeled "Service Context Administration". Of course I had to pursue it, because something new is always interesting to me, and I clicked the link out of curiosity. Nothing happened. I asked about it during a meeting with some colleagues and learned "It's an Asset Management feature; it's configured for the Asset Management console". I was fine with that and went on with my life, but a seed of curiosity was planted.

 

Then one night, while I was roasting peanuts and deep frying a fish, a couple of dwarves from the Lonely Mountain came over. OK, they were not exactly dwarves, but my adventure with Service Context began. I think it started with someone asking about UDDI Atrium Web Services, which turned out to actually be the AR Server Midtier UDDI that does not resolve, and hence we wrote that off as an "AR Server" issue. But then "Atrium Web Service" and "Service Context" came up in the same sentence. I looked at the documentation, once again realized that this is a service provided by the AR Web Services, and dismissed it. Once again, the murky waters of lake "Service Contextia" were laid to rest to live another day.

 

But then came August 12th, 2014. We were having a team meeting and I was showing tricks on how to check and verify functionality of the AtriumCore console, which includes Federation configuration. I noticed that Service Context had been added to my lab system's Federation Manager console, where I had installed the 8.1 SP1 patch. Suddenly a government-approved light bulb went off in my head. This must be it! Service Context was staring right back at me.

[Screenshot: Federation Manager console showing the Service Context entry]

 

At this point I thought this was going to be very simple: Federation is just a matter of having a plugin configured in ar.cfg and loaded as a Java process by the Atrium Shared Java Plugin server (via pluginsvr_config.xml).
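
For anyone who wants to verify the same wiring, here is a minimal sketch of what that configuration looks like. The plugin name, class, jar path, and port below are hypothetical placeholders for illustration, not the actual entries; check what your installer created:

In pluginsvr_config.xml (the Atrium shared Java plugin server):

<plugin>
  <name>BMC.EXAMPLE.FEDERATION</name>  <!-- hypothetical plugin name -->
  <classname>com.bmc.example.federation.FederationPlugin</classname>  <!-- hypothetical class -->
  <pathelement type="location">/opt/bmc/AtriumCore/example/federation.jar</pathelement>  <!-- hypothetical path -->
</plugin>

In ar.cfg (ar.conf on UNIX), an alias pointing the AR Server at that plugin server:

Server-Plugin-Alias: BMC.EXAMPLE.FEDERATION BMC.EXAMPLE.FEDERATION cmdbhost:9556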

 

I checked that all was in order and then saw the Launch link in AtriumExplorer. At this point I thought I had conquered the mountain, and I published a WebEx recording on how to make the best of Service Context. That recording is now available on the WebEx service site. Click the link below to play it if you'd like to see the details:

 


"How to enable ServiceContext link in AtriumExplorer"
https://bmc.webex.com/bmc/lsr.php?RCID=9d7bd305a80e4f3a96d38f6e93ef5c10

 

Duration: 5 min 26 sec

 

At this point I thought the matter was closed. I put on my fedora, flipped my raincoat collar up, and walked into the rain.

 

The next day I got an unexpected reply: this was still not it!

 

So, what could this possibly be? What is this "Service Context"? Let's go back to basics and see what the documentation says about it.

 

I downloaded the entire AtriumCore Help Documents package from EPD and started to look for "Service Context" and also checked the iDocs: Troubleshooting BMC Atrium Service Context - BMC Atrium Core 8.1 - BMC Documentation

 

And now I hit the jackpot. Finally I found what I was looking for. So, if you're still reading this, you probably want to know what it really is.

 

Here it goes in my own words. Say that you're in the Asset Management console and you're looking at a list of ComputerSystem CIs you need to work with. You need to know if a system has any Business Services associated with it, and what impact on those services you'd have if you were to put the system into maintenance.

 

So, my confusion was whether this is an AtriumCore Web Service or whether it's served by the Midtier. And the answer is: the Midtier!!

 

 

I've captured the service registration below:

 

[Screenshot: Service Context service registration]

 

Here you can see that the Service Context is using the midtier arsys root directory to register it.

That's it. The installer actually did this already, so there was nothing for me to configure.

 

So, basically this is the outcome of the quest. Start here:

 

[Screenshot: Asset Management console showing the ComputerSystem CI]

 

And get this "Service Context" for that computer system Dell Rack System - 517P95J:

 

[Screenshot: Service Context view for Dell Rack System - 517P95J]

 

Seems too simple in hindsight.

 

In summary, if you're interested in using Service Context but making it work seems like a quest to the Lost Mines of the Lonely Mountain, then perhaps sharing my personal experience with this module can be of some help. I think just understanding what the "context" means helps: a ComputerSystem can be investigated from the ITSM Asset Management console in the context of its related business services, and that is what the Service Context feature of AtriumCore is all about. Did you have a similar experience?

 

To see more like this, see BMC Remedy Pulse blogs.



Recently, I was involved in helping a customer resolve some data issues. There was duplicate data in different CMDB classes, populated in the ADDM dataset as well as in the production dataset. The consuming applications were impacted due to inaccurate data in the CMDB. While analyzing this, we found that the root cause of duplicate CIs in the production dataset (i.e. the asset dataset) was mainly improper reconciliation identification rules. The ADDM reconciliation job was using ADDMIntegrationId as part of an identification rule, which caused duplicate CIs; e.g. there were two identical computer system CIs in the asset dataset with different ADDMIntegrationIds. This is an example of how critical the reconciliation process is to maintaining the quality and accuracy of configuration items in your CMDB.

 

Before we go further on ADDMIntegrationId use, let’s understand how the ADDMIntegrationId is used by ADDM and the CMDB.

 

The ADDMIntegrationId attribute in a CMDB class holds a unique key populated by ADDM as part of the CMDB Sync operation. This ADDM-specific key helps the CMDB sync operation decide whether a CI already exists in the CMDB before performing an insert or update operation.

 

Knowing that the ADDMIntegrationId is a unique key to identify a CI, it’s tempting to use it in reconciliation identification rules. This is the most common mistake seen in CMDB data issues. This attribute has in the past been used as a workaround in CMDB reconciliation rules for the Software Server, Database, and Cluster CI classes, but with the fix described in KA411090 this workaround is no longer needed.

 

For reconciliation identification activity, it is very important to use attributes that provide CI uniqueness in identification rules, and it’s equally important to use discoverable CI attributes. For example, for a ComputerSystem, use the serial number, host name, and domain attributes to determine CI uniqueness.
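
To illustrate, an identification rule for the BMC_ComputerSystem class built on discoverable attributes might use a qualification along these lines. This is a sketch only, using the convention that the quoted name refers to the instance already in the target dataset and the $...$ token to the incoming instance being identified; tune it to the data your discovery source actually populates:

('SerialNumber' != $NULL$) AND ('SerialNumber' = $SerialNumber$)

or, where serial numbers are unreliable:

('HostName' = $HostName$) AND ('Domain' = $Domain$)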

 

Here are a few rules I follow when investigating issues like this:

 

1)     Make sure you understand why there are errors before implementing changes to RE rules.

2)     Avoid using ADDMIntegrationId for strong member CIs, e.g. Computer System. If you are tempted to do this, revisit rule 1 above.

3)     Using ADDMIntegrationId for weak class members (e.g. Processor, Monitor, Product) is less risky, because those classes don’t have sufficient information in their attributes to identify CIs uniquely. You can learn more about how to investigate data issues here.

 

 

I hope this helps. Please rate my blog to let us know if it was useful. For more like this, see BMC Remedy Pulse Blogs.



Hear from Amit Maity, a senior technical instructor at BMC Software.

 

Amit will review Federation in BMC Atrium CMDB including different types of data, provide considerations and recommendations on federating data, review the methods of federation and provide a demo.

 

 

 

View our IT training for Atrium CMDB


Join the BMC Global Services community.  Hear from experts in consulting, education and our centers of excellence. 



I'd like to use terminology like an "age old" idea in this case; however, it's a bit early to call software issues "age old" just yet, because software has not really been around for ages. But then again, when I look at my 10-year-old son, I could fit a couple of "ages" of 10-year-olds into the era software has been around for. So I think I'll be OK to say that the concept of retaining software records for a specific purpose has been linked to an "age old" problem with that type of retention. In our case, the case of the BMC AtriumCore CMDB, that type of record would be an Asset record, a CI, a Configuration Item, or what some may think of or understand as "an inventory".

 

 

Inventory records are definitely useful to have, specifically for trends, cost audits, outage research, and so on. Especially if the status of the record has been tracked for historic changes, through what we refer to as the Asset Life Cycle (AssetLifecycleStatus). You can see when the CI was Ordered, Deployed, End-of-Life'd, and Deleted. All useful stuff. All of these lifecycle states have their purpose, and a CI will live very happily in the BMC.ASSET dataset. But not "ever after". Enter the arena: Mark As Deleted = Yes.

 

 

You see, the last status I've listed, namely "Deleted", is actually an instruction for the CMDB workflow to purge that record, and hence the record is "no more" once the BMC AtriumCore Reconciliation Engine (RE for short) reconciles that data. OK, so why is this a problem?

 

The issue is people's expectation that these records, by their presence in the BMC.ASSET dataset, are indeed inventory records and should be retained even if they are marked as deleted, also known as "soft" deleted. The record is still present in the database, but is no longer an active asset in the production environment.

 

 

So, on first thought that should actually be no problem. The CI is there for me to look at historical changes that include its time of order, deployment, and removal (deletion) from production. No issues there. The problem is with the relationships to the components that were hosted on that computer.


Let me explain why.

 

 

A typical CI is not a standalone record; it is related to other CIs, and these relationships have attributes. For example, we have cardinality, where the CI relationship can be defined as one-to-many, many-to-one, or many-to-many. Then there is the Weak type of relationship, where the CI on the "left" side of the relationship has to exist before the "right" side can be detected by some means of electronic discovery. This means that without the CI on the left, usually a ComputerSystem CI, none of the hosted components on the right would exist. There may be a license cost related to the operating system that's running on that computer, but no discovery tool could scan it if the computer is not deployed and turned on. Additionally, there is a Cascade Delete option, which is not enabled out of the box but can be toggled on for Hosted System Component relationships.
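
To picture the weak relationship discussed above (class names from the common data model):

BMC_ComputerSystem ("left") --- BMC_HostedSystemComponents (weak) ---> BMC_OperatingSystem ("right")

Delete the left side and the right side loses its anchor; with Cascade Delete enabled, the right side is flagged for deletion too.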

 

 

So, back to "Mark As Deleted = Yes", which I am going to refer to as MAD.Y (and MAD.N for No). What is not immediately obvious is that there is workflow that acts on changes to MAD. All relationships are removed from a CI when MAD.Y is set. If you have Cascade Delete on, then MAD.Y will also propagate to the Hosted System Component CIs. This means that the computer system will be set for deletion on the next Purge activity, as well as the Operating System that was hosted on it. So if you're tracking the license of that Operating System, that record would now be gone.

 

 

This can be a problem for customers that like to trace license management back to the Computer System where the license was hosted. Generally an end user's Operating System would not be a big deal, but an Oracle or server license can easily exceed $10,000. Losing track of that CI could present issues when audits are conducted. So, keeping the ComputerSystem and the hosted Product CIs now becomes an inventory issue. What do we do with these records? Once they are MAD.Y, their relationships are fragmented and not backward-traceable without some effort, and certainly not via automation. However, this is exactly what ends up happening when Purge activity is not performed on that data in BMC.ASSET. Eventually it will require some cleanup. And that is bad, because nobody is very happy when they have to do it.

 

 

We'll have cases arriving in our incident management tracking system where customers complain about performance, or about "Multiple Matches Found" errors in the reconciliation logs. Many of these issues result from a data retention policy where the customer does not allow a Purge of the MAD.Y records from BMC.ASSET, for the reasons I've outlined above. However, it does not have to be that way.

 

As of now, most Reconciliation jobs use this sequence:

 

 

1. Identify
2. Merge
3. Purge (or Delete) BMC.ADDM and BMC.ASSET (Identified only)

 

 

Although this sequence makes the most sense, it includes the not-so-popular Purge of the MAD.Y records from BMC.ASSET. The truth is that if you want your data to be healthy, it needs to include the Purge. However, not all is lost. There is a better sequence that allows record retention.

 

 

1. Identify
2. Merge
3. Copy Dataset (with a qualification of "MarkAsDeleted = Yes" AND (ClassId = "BMC_Product" or the class ID of interest))
4. Purge (or Delete) BMC.ADDM and BMC.ASSET (Identified only)

 

 

In this scenario you can use a qualification to first copy all CIs you're interested in: those where MAD.Y is set and ClassId = "BMC_COMPUTERSYSTEM" or ClassId = "BMC_PRODUCT", or your asset class of interest.
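
Written out in full, the Copy Dataset qualification sketched above would look something like this (adjust the class IDs to the asset classes you need to retain):

('MarkAsDeleted' = "Yes") AND (('ClassId' = "BMC_COMPUTERSYSTEM") OR ('ClassId' = "BMC_PRODUCT"))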

 

Please see Jared Jones' PULSE blog on License Management:

 

https://communities.bmc.com/community/bmcdn/bmc_it_service_support/bmc_asset_management/blog/2013/11/08/the-pulse-getting-started-in-software-license-management-part-2--software

 

which covers a new feature specific to Asset Management License Management:

 

" SP1 includes the expanded capabilities. If you get it installed you will see quite a few additional License Types out of the box. Essentially all the previous License Types now have an additional "_All" type that includes the other software classes."

 

Your copy dataset can then have a name like BMC.ASSETS.PURGED, which you can return to at any given time and run reports on.
It would only hold records that were at one point Marked As Deleted in the BMC.ASSET dataset and purged. Keep in mind that License Management will be updated accordingly, as configured in the Licensing Job related to the reconciliation job that handles the ADDM data.

That said, this is not yet fully automated out of the box, or even considered a Best Practice, because the CMDB is not intended for asset inventory retention; still, it is within the parameters of AtriumCore's capabilities and not all that complex to add as a functionality enhancement.

 

 

In conclusion, is inventory retention in the BMC.ASSET dataset good or bad? Well, it's definitely not in the design of the CMDB, which is supposed to keep records of configured items in production environments, and it can cause data cleanup issues. So, my vote tends to lean in the direction of "bad". You can do better with this "age old" problem.

 

I hope this helps, please rate my blog below or add comments on your experiences.  See more like this at BMC Remedy Pulse Blogs.


Daniel Hudsky



This post shares some diagnostic enhancements included in the Remedy Configuration Check Utility in BMC Remedy AR System Server (ARSystem) and BMC Atrium CMDB Suite (CMDB) version 8.1.01 (Service Pack 1). The goal of the enhancements is to simplify the process of identifying, correcting, and reporting on configuration issues in the product.

 

 

 

A little background

 

The earlier post 12 steps toward a systems approach to diagnostics outlines different kinds of diagnostics which may be required for products such as BMC Atrium CMDB, and the subsequent post 7 Tools to verify BMC Atrium CMDB is working well describes the diagnostics available in CMDB at the time.

In Service Pack 1, we looked at how we could automate or simplify these diagnostics so they can be executed and collected more easily when required. We looked at the AtriumCoreMaintenanceTool Health Check and the pre-checker described in the post BMC Remedy Pre-checker for Remedy 8.1 (unsupported) to see which would be the appropriate tool to extend. The Health Check functionality in the Maintenance Tool also runs at the completion of the installation, as the "Post Install" check. This design limits which checks can or should be run from it. For example, if we automated the process of checking recommended configurations, they would always fail in the post-install check, as there would have been no opportunity to configure the product yet. So we decided the next step in the journey of automating diagnostics was to extend the Pre-checker to provide a simplified user interface for executing diagnostics.

 

 

See the product documentation to learn more about the features of the BMC Remedy Configuration Check utility and how to access it.

 

The Pre-Checker was originally designed to detect environmental issues for install and upgrade, but since the scope of the tool has changed, it has been renamed the Remedy Configuration Check utility. It can be used not only to check the environment configuration before an install or upgrade, but also to detect the product configuration issues that cause many post-installation problems. This tool will also enable us to automate frequent troubleshooting steps.

 

 

 

BMC Remedy Configuration Check utility


The goals are to -

  1. Help administrators to troubleshoot configuration issues
  2. If the administrator is not able to resolve the issue, make it easy to gather and share test results

 

 

 

 

How it works


The BMC Remedy Configuration Check Utility is included with BMC Remedy AR System 8.1.01 (Service Pack 1) media.  The file can be extracted to the system to begin using it. For more information on downloading and extracting the utility, see To Obtain the BMC Remedy Configuration Check utility.




CMDB checks included in BMC Remedy Configuration Check utility


In Service Pack 1, the following checks have been added for CMDB.

 

 

1) CMDB - System Information

 

This feature makes it easy to collect information about the system. It may also make it easier to compare different systems running BMC Atrium CMDB, or to report system information when working with Customer Support.

Note - Going forward, this check will be moved into the information-gathering category.

 

 

 

2) CMDB Metadata Check

 

This check detects pending CMDB class changes. It performs the same test that can be run manually using the command-line cdmchecker tool with the -g option.

You can find more info on cdmchecker in the product documentation.

 

 

 

3) CMDB Class Overlay Check

 

Overlays on CMDB classes can cause issues in the upgrade process. This check detects overlays on CMDB classes.

 

 

 

4) CMDB - RE Private Port

 

We recommend a Reconciliation Engine private queue configuration for performance reasons. If the private queue is not configured correctly, there won't be any performance gain. This check can be used to detect an improper private queue configuration.
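
For reference, a private queue is defined with a Private-RPC-Socket entry in ar.cfg (ar.conf on UNIX). The sketch below uses 390698, the RPC program number commonly documented for the Reconciliation Engine, followed by minimum and maximum thread counts; verify the number and the sizing against the documentation for your version:

Private-RPC-Socket: 390698 2 4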

 

 

 

5) CMDB - Index Check

 

This check provides references to the out-of-the-box indexes in CMDB version 8.1.01 and validates that those indexes exist. It will report an error if any index is missing.

 

 

 

 

This blog post hopefully provides a better understanding of why the Pre-check utility was renamed and of some of the new capabilities added. This expanded functionality should make it easier to diagnose issues. A few checks were added to this tool in version 8.1.01.

 

Stay tuned for new additions in future releases.

 

 

I hope you found this blog useful, please rate it below or add comments.  To find similar information, see BMC Remedy Blog Posts.



Hi All,

I have recently joined BMC Software as the new Product Manager for the Atrium CMDB and thought I should introduce myself so that you can get an idea of what makes me tick, and also understand my drive to make the Atrium CMDB a tool that can drive realisation of value for you, our customers.

I have been part of the BMC Remedy world since 2004, when I worked for an ISP and was given the mantle of 'the Remedy guy' for the entire organisation. I had never seen Remedy before and actually had no idea what it was. After a mild baptism by fire, I immediately saw the value that the Remedy Action Request System could give the Network Operations Centre I ran at the time, and also the wider organisation; back then we were rolling out ITSM 5.1.2.

Since that time I have been involved in many rollouts of BMC Remedy ARS based solutions. Most recently I led the Remedy development organisation for BlackBerry, rolling out ITSM across that organisation, and then spent some time at ServiceNow before coming to BMC Software.

My desire is to actively engage with the community (you) and have what I hope are active discussions through this forum and others in order to ensure that we are building a tool that helps you solve your business challenges on a daily basis. Part of my role here at BMC is to ensure that your voice is heard and that we are investing in the right areas to drive our product and your success forward.

When I'm not talking about CMDB, working on the product roadmap, or out talking with you, our customers, I spend my spare time watching movies, enjoying the company of friends, and on a new pastime of husky scootering; feel free to ask me more about that if you really want to know!

I'm based in the university town of Cambridge in the UK and look forward to our interactions, whatever the medium, as we at BMC evolve Atrium CMDB to meet your needs and support your success.

Contact me via twitter @flirble or via this community of course :-)

If you are having issues with Atrium CMDB I strongly suggest you log an issue with support and your account team before engaging me directly but if you feel I might be able to help then I'm happy to do so where I can.

TTFN

Stephen



CMDB is great out of the box. All the classes and relationships we ship align very well with the business environments that our products serve. However! Every once in a while you'll have to extend the common data model with a couple more attributes, class definitions, or product data loads. Last October, I presented a webinar, "How to unlock the potential of the Common Data Model in BMC Atrium CMDB", covering how to extend the CDM using Class Manager. You can view the recording here. In this blog post, I would like to extend the topic to talk about products which extend the CDM as part of an installer, or by loading a CMDB Extension with the Maintenance Tool.

 

 

Several current products use extensions to the CMDB, including BMC ProactiveNet Performance Manager (BPPM) and Configuration Discovery Integration for CMDB (CDI), and extensions are also used for loading Product Catalog data updates. The reason for the separate install step is to make all the required changes semi-automatically, and hopefully painlessly. This takes the element of human error out of it, so it can be either an "it worked, no big deal" or an "it didn't work, what happened?" experience. Whenever I look at a failure, I always ask the question: "What was unexpected in the environment that blocked the operations the extension was trying to perform?" I seek to understand. There is a "Zen" to it. It should be a rational exercise, not a physical one like trying to stuff pythons into a paper bag. Below, I will highlight some of the ways the product tries to prevent, improve, or minimize the room for failure, and share some of the ways I think about it as I look into issues. Hopefully it leads to a more peaceful state for your CMDB and yourself.

 

As for ensuring the server is in a good state before running the install, this is a general capability, and one that is good to have even when not running installations, so the AtriumCoreMaintenanceTool has a Health Check feature. You can read more about it in the documentation here. You can find more about other tools that can help in this regard in Jesse’s post 7 Tools to verify BMC Atrium CMDB is working well, in the section on Verifying Product Installation and Environment.

 

If you've had to extend the model, or are planning on doing so, then this is what you should know:

 

Extending the CDM means that you're altering tables in the database to add additional columns, or maybe even creating an altogether new table to house the data you'll be collecting from your environment. Simply said, we need to add more labels, so we have to define containers for our data.

 

With that in mind, imagine that you already have a physical container defined, like a cupboard or jar, and you needed to add just a few things to it; you were able to add two labels but could not fit the third because you ran out of space. This would make sense to us humans, but the installer still thinks that items 1 and 2 need to be added, because it was not told otherwise. You could argue, "Can't the installer be made more intelligent, to examine what is already in the cupboard or jar?" Now consider the case where the items interact with one another as they are added, and have different rules on what can be stored multiple times.

 

Does that make the challenge of ensuring reliable completion more complex? You bet it does! So the extension installer follows strict orders: install, check for errors; in case of unforeseen circumstances, wait for further instruction. Basically the extension installer is instructed to add all three items, and that is exactly what it tries to do. If it fails during the install and the install is attempted again, this causes a data structure collision, because items 1 and 2 already exist, and hence a failure of the extension loader. Running it again will not change these results; it will keep failing on exactly the same collision points it failed on before. So, don't run the extension loader again hoping for different results. Instead, look at the logs and see where the installer hit its first issue. There could have been a requirement to create a dataset first, either a manual step that was missed or a dependency that was violated. When investigating issues, it is sometimes useful to look at the manifest of files in the extension to see what it is trying to load. This helps in understanding why an error occurs.

 

 

There are two types of extension loaders: one that comes with an executable (e.g. simExtLoader.exe or pnExtLoader.exe), and AtriumCore\AtriumCoreMaintenanceTool.cmd. AtriumCoreMaintenanceTool is installed with Atrium Core version 7.5 and later and provides the tool for loading CMDB extensions, so more information about extensions and what they contain can be found in the Atrium Core documentation.

 

Executable loaders can use CMDBDRIVER to deliver their "payload" from each subdirectory of the loader. For example, the 500-SIM-CDM-Extensions directory for simExtLoader has class extensions as well as *OSD.txt files that carry instructions on what to do. The reason for executable loaders is to perform additional steps or checks as part of the install, but the subset that installs CMDB extensions is largely the same.

 

Some loaders also add Normalization or Reconciliation jobs, Federated Launch Links, and so on. These will be stored in the "arx" files in subdirectories of the extension loader. These additional records can also only be added once, and if you run the installer again they will cause further failures, this time as data collisions rather than data structure collisions. Again, the installer is programmed to install all these things as if they never existed, and the instructions basically say:

 

"Create New", rather than using this logic: "If you find it there already, then update it or move on to the next".

 

This is so because the original need to extend the CMDB still applies and the installer just knows that it has not been completed yet.

 

Above, I mentioned the marching orders of the extension loader: install, check for errors; in case of unforeseen circumstances, wait for further instruction. The latter part was added in CMDB 7.6.04. If an extension is currently loading, or has attempted to load and has failed, why should it be allowed to run again and make a mess of things? It shouldn't, so a simple mechanism was put in place to prevent that situation. When it runs, it adds records to a form called "Share:Application_Properties" that reflect the version of the extension and record the status of the installation progress. If the installer needs to install Product Catalog data, which is also considered "an extension" of the CMDB, then you'd be referencing ProductCatalogData-2012_07_03.xml. The name of this file reflects the Product Catalog data load made available on July 3rd, 2012. Its contents have a GUID reference for Share:Application_Properties that checks the version of the PCT installed on your system.

 

 

In the case of the Product Catalog, that ID is "PD00C04FA081BA0SvxQgaxH66Q1wQA", and it is validated for version 7.6.04 or greater.

The next GUID it adds to the Share:Application_Properties (SAP) form is "BMCPC00C04FA081BAbpfqSA9gV41Ar". This particular ID is then used to track the progress of the data load. This is done by adding a record to SAP with a Name of Status and a Value of Running. If the install fails, the value is changed to Failed.

 

 

At the conclusion of the install, this Status record is removed. If the record still exists and has a Failed status, the installer is not going to let you run it again.
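
Putting those pieces together, the bookkeeping described above looks roughly like this (an illustrative sketch of the records, not the exact form layout):

Share:Application_Properties (SAP)
  GUID:  BMCPC00C04FA081BAbpfqSA9gV41Ar   (Product Catalog data load)
  Name:  Status
  Value: Running --> changed to Failed on error; removed on success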

 

 

At BMC Software we designed this part specifically for the reasons described above. The rules to live by are:

 

- Run the Health Check or verify the system is in a good state before installing extensions

- Run it once only

- Evaluate reasons for failure and address them individually

- If you can't fix the failures manually, then restore the database, fix the original condition, and run the installer once more. Repeat if necessary, or identify the individual component failures and complete the extension loading manually, component by component.

 

 

I hope this post provides a better understanding of the rules that the extension loaders live by, and some of the thinking behind them. Hopefully this leads to more zen-like experiences with extending the CDM. I have probably skipped something, so I am looking forward to further questions on this topic so that we can have full disclosure here for anyone to follow.

 

If you like content like this, see BMC Remedy Pulse Blogs for more like it.

 

If you have ideas on ways for Customer Support to work better with you to enable success, join the Customer Support Community and provide ideas, feedback, or suggested improvements.

 

 

Thank you for reading!

Daniel



BMC Atrium Integrator (Practical Example of Data Transformation Using Spoon)

 

 

 

Kettle (K.E.T.T.L.E - Kettle ETTL Environment) was acquired by the Pentaho group and renamed Pentaho Data Integration. Kettle is a leading open source ETL application on the market. It is classified as an ETL tool; however, the concept of the classic ETL process (extract, transform, load) has been slightly modified in Kettle, as it is composed of four elements, ETTL, which stands for extract, transform, transport, and load. The links below walk through practical examples:

 

Shows how to generate data warehouse surrogate keys in Pentaho Data Integration

 

 

Data Sanitization Pentaho Data Integration (PDI) example

 

 

Data Allocation Pentaho Data Integration example

 

 

Parameters and Variables - Atrium Integrator (Spoon)

 

 

BMC Atrium Integrator (About and Useful Links)

 



About Atrium Integrator

The Atrium Integrator (AI) product is, as the name implies, an integration tool that facilitates the loading of data into the Atrium CMDB. Atrium Integrator allows for a wide variety of input sources such as JDBC, JMS, CSV, web services, and complex XML. It leverages a "best of breed" ETL tool with a very broad range of transformation capabilities. The engine that powers AI is actually referred to as an "ETTL" tool, which stands for "Extract, Transform, Transport, and Load". It is based on the Pentaho Data Integration (PDI) tool, commonly known as "Kettle". Kettle has a designer tool named "Spoon" that utilizes a drag-and-drop UI, speeding up the design of complex jobs.

 

Value Statement

* Simplify importing CIs and their relationships through a wizard-based UI
* Out-of-the-box templates for CI field and relationship mappings, ensuring consistency
* Reduce effort and time with a graphical, drag-and-drop interface
* Powerful extraction, transformation, and loading engine (Pentaho Spoon) for massaging data
* Scalable for large enterprises with millions of CIs

 

Video How-To's: Click here

Terminology

  • Repository - A relational database in which jobs and transformation are stored, along with the logs and execution history of the jobs.
  • Transformation - A collection of steps and hops that form the path through which data flows.
  • Step - The minimal unit inside a transformation. Steps are grouped into categories based on the function they provide (e.g. input, output...)
  • Hop - Represents the data flow between two steps.  It has a source and a destination.  A hop can only be between one source step and one destination step, but each step can have multiple hops ("paths").
  • Job - A process control component.  A job consists of Job Entries.  These can be either transformations or other jobs that are executed in a particular order, managed by hops.
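
As a simple illustration of the terminology above, a minimal transformation that loads computer systems from a spreadsheet could chain three steps with two hops (the step names are indicative, and staging into an import dataset is an assumption about your setup; CMDBOutput is the BMC-specific output step mentioned elsewhere in this blog):

CSV file input --hop--> Select values (rename/trim columns) --hop--> CMDBOutput (writes CIs to your import dataset)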

Useful Links

Atrium Integrator is based on Pentaho Data Integration (aka Kettle, aka Spoon). Some useful links:

Pentaho - Spoon User Guide - Gives you information on the steps that are available in AI, excluding the BMC specific ones (ARInput/AROutput)

Pentaho Community Forums - The Forums are great for asking questions about the product.

Pentaho Community - Lots of information here, and you can also download a community edition of Pentaho Spoon to understand the product and its uses. Obviously it will not be AI with the enhancements BMC has made, but it is nevertheless a good learning tool.

 

Why not AIE?

There are various shortcomings with the AIE product that forced BMC to look in another direction. Some of the shortcomings are listed below:

  • It lacks the ability to extract data from a myriad of sources that our customers have data in (MySQL, Sybase, Excel files...)
  • Very limited transformation capability.
  • Too many calls into BMC support to get the product to work.
  • Need to be well versed in the CMDB in order to model the data accurately.  No wizard is available.

AIE to AI Migration coming soon



Transformation Repository

BMC Atrium Integrator Transformation Repository (Really wish this acronym spelled something cool...)



I ran into an issue with ADDM syncing to the CMDB; although it was perceived as a CMDB outage (via the ARS server), the root cause lay in the Windows configuration.

 

In my experience the issue is usually network related. But since it is sometimes CMDB related, sometimes the network, and sometimes the AR Server, it is useful to clarify how to determine which of those is the case.

 

Given that the CMDB is hosted on the ARS Server, it would really have to be an outage of the ARS Server or a network issue as CMDB would never respond with an ARERR code of 90.

 

First I'd like to refer to this article https://communities.bmc.com/community/bmcdn/bmc_remedy_ondemand/blog/2013/10/14/the-pulse-optimizing-addm-to-cmdb-sync-connections-for-remedy-ondemand-environments

 

So, please make sure that network access is established first by using "ping" from the ADDM appliance host, keeping in mind that often only specific ports are open and the company network may block ICMP (ping) traffic. If all of these have been ruled out, you can still do a more direct check of the CMDB by using the CMDBDRIVER or the (AR) DRIVER utility to connect.

 

You can ask the CMDB administrator for these binaries, although the clients will have to be executed directly from the ADDM host machine, which can be running a different operating system than the one the CMDB is installed on. The client is OS specific. For example, if the CMDB is running on an ARS server hosted on MS Windows 2008 Server and ADDM is hosted on Red Hat Linux, then you would need the CMDBDRIVER client for Red Hat Linux before you can run it from the ADDM appliance host.

 

If there is interest, I can produce a staging area with a CMDBDRIVER client for all operating systems. Since the version check with CMDBDRIVER is compatible across many versions, you'll only need one version.

 

For convenience we've staged the 7604 version for Linux here:

 

ftp://ftp.bmc.com/pub/BSM/AtriumCore/utilities/cmdbbin.tar.gz

 

Download this to the ADDM appliance system and make sure you set execute permissions for your current user on the Linux system. ADDM only supports Linux, hence we are only providing the Linux binary here.

 

Once you have this in place you can start the client. Create a file called "options.txt" and put this in the file:

 


init

gver

1

OB00C04FA081BABZlxQAmyflAg1wEA

q


 

NOTE: The linked binary above already has everything set up. All you need to do there is run the following command with 4 arguments:

 

./cmdbdriver.sh <ARS HOST> <AR User> <AR User Password> <AR TCP PORT>

 

Example:  ./cmdbdriver.sh bmc_arshostname Demo Demo 0

 

Results should look like this:

 


Command: INITIALIZATION   
CMDBInitialization  results


ReturnCode:  OK
Status List : 0 items
Command: GET VERSION


Number of Application Versions to get (0):    Id ():   
CMDBGetVersion  results ReturnCode:  OK
Version Information: 1 item(s)


Application ID: OB00C04FA081BABZlxQAmyflAg1wEA <----- ApplicationId of CMDB
Application Name: BMC Atrium CMDB Version: 8.1.00 <----- This is what you're looking for and proves connection can be established.


Status List : 0 items (this just means that there are no other items in the list)


 

 

If you don't get this result then your connection to the CMDB is not open.

 

This is just one method to achieve a simple test at a moment's notice. There are other articles that accomplish a similar thing with Java:

 

https://docs.bmc.com/docs/display/public/ars81/Running+arconnect

 

 

Daniel



Upgrading from versions prior to the 7.6.04 release can be tricky, but it is fairly simple to achieve if you know the steps covered below.

There are various KAs on this topic already. This Pulse post rolls up all the known causes into one.

Starting point : BMC AtriumCore 7.5 CMDB SP1

Finish: BMC AtriumCore 8.1

 

 

The following areas are known to cause issues at some point during the install:

 

  • Prechecker Utility of the CMDB Installer
  • Best Practice Conversion Utility - BPCU
  • Custom Workflow for cmdb fields
  • Attachment Size Limit
  • Warning suppression

 

Prechecker Utility of the CMDB Installer

 

Part of the installation includes a CMDB Pre-Checker that validates field IDs and names. This prechecker should only be executed if the version of CMDB is already 7.6.04, and not if the version is prior to 7.6.03 SP1; however, it is part of the installer, so it will run anyway and add ERRORs to the install log. These are OK and can be ignored.

 

Here is some background on this. If you run the prechecker before 7.6.03 SP1, the prechecker will return some failure exit codes that may also show up in arerror.log. These errors are there because there are Field IDs for Company that get checked in the following forms:

 

PCT:VersionCompanyAssocStatusFlags
PCT:PatchCompanyAssocStatusFlags

 

The installer needs to change these Field IDs from the value "420000165" to the value "1000000001". Since the prechecker runs before the installer can make that change, these errors are ignored.

 

The log will show an exit code of 1025 after running this command:

 

LOG EVENT {Description=[AR ChangeID command],Detail=[C:\Users\E020156\AppData\Local\Temp\Utilities\rik\archgid.exe -x ARSALIAS -u Action Request Installer Account -p ****** -t 0 -c 10003 -N 7200 -L 14400 -X 21600 -F C:\Users\CURENTUSER\AppData\Local\Temp\Utilities\pc\changeFiledIDs.txt

 

Since there isn't a way to easily remove the Pre-Checker from the AtriumCore installer program, all you need to do is understand that this is a false-positive result. You'll basically see a list of errors at the end, where the "Success" status of the install is shown, giving you the option to review the log. That log review console will have some "red" lines in it.

 

BPCU

 

BMC has produced a customization conversion utility that converts custom workflow into overlays. For customers who have Asset Management and other applications where the CMDB is providing the data structure service, and who have customized workflow, this utility is best used AFTER the AtriumCore upgrade is completed. The BPCU does not overlay any CMDB schemas or workflow even if customizations are found; however, running this utility sets the expectation that if the CMDB were overlaid, this would be corrected by the BPCU. That is not true.

 

Please see KA380649 for reference.

 

The bottom line for this point is: run the BPCU after the upgrade of the CMDB has completed, was successful, and was not impacted by overlay issues. We've done these upgrades with a 100% success rate and are confident in supporting this practice.

 

 

Custom Workflow for cmdb fields

 

This item is related to the previous one, but more specific to data that is being inserted into BMC.CORE CDM forms.

The installer inserts various template data into the BMC.CORE forms, and this step can be blocked if additional data restrictions or attribute requirements are enforced by custom workflow. These fields may include MarkAsDeleted, Company, Region, Site, and so on. If you have customizations in place that require a value in a field that is not already required out of the box, then you should disable the workflow for those fields before you start the installer.

 

The install log would capture such a failure with an exit code of 1025, and the log entry would look like this:

 

C:\Users\CurrentUser\AppData\Local\Temp\Utilities\rik\rik.exe" loadapp -x ARSALIAS -t 0 -u "Action Request Installer Account" -p <not_displayed> -l "D:\Program Files\BMC Software\AtriumCore\" -n CMDB-RIK_Install -f "D:\Program Files\BMC Software\AtriumCore\cmdb\en\workflow\upgrade\764patch000\wf-RIK-CMDB.xml" -L -C

 

The name of the log that actually captured the real failure codes is "CMDB-RIK_Install.log".

This log would then show an error like this:

 


INFO  - Error importing record 1: ERROR (806201): ; Please supply a Category, Type and Item for this Configuration Item. Those fields require and entry to create or modify a CI.
DATA "" "" "" "" "" "" "" "BMC_BUSINESSSERVICE" "0;" "" "" "" "" 1282162748 "BMC.ASSET" "" "" "" "" 0 "" "STANDARD" "BMC_GLOBAL_DEFAULT_SRVC" "" "" "Demo" "" "" "" "" "" "" "" "" "" "" 1285101154 "BMC_GLOBAL_DEFAULT_SRVC" "" 30 "" "" "" "" "" 0 "BMC_GLOBAL_DEFAULT_SRVC" "" "" "" "000000000000011|000000000000383" "" 10 10 "Default Service" "" "" "" "" "" "" "" "" "" "" 0 "1284595003Demo" "Demo" 0 "" "0" "" "" "" "" "" 0
INFO  - Import Completed in 8.237 seconds. 0 records were imported to BMC.CORE:BMC_BusinessService; 1 Records were not. From File D:\Program Files\BMC Software\AtriumCore\cmdb\en\workflow\upgrade\764patch000\.\USM_BusinessService_DefaultData.arx

[ERROR] LoadComponent- Data Import failed with code 1025 for file D:\Program Files\BMC Software\AtriumCore\cmdb\en\workflow\upgrade\764patch000\.\USM_BusinessService_DefaultData.arx

 


And there would be additional records like this that fail. An exit code of 1025 can almost always be associated with a data collision of some type: either the record could not be imported, or it could not be deleted (code 1024) for whatever reason.

Please disable any workflow that would prevent the installer from managing records in the CMDB. No customer records are deleted during the upgrade; only CMDB template or default loads, like the BusinessService data, will be impacted.

 

 

Attachment Size Limit

 

We've found that the ARDBC setting "Db-Max-Attach-Size: 0" has an impact on attachments during upgrades. If you have this setting in ar.cfg (ar.conf on Unix/Linux), change the value to 200000000 (~200 MB), or just remove it from the config file altogether.

 

ARS also has a maximum attachment size limit that can conflict with the Data Visualization components that need to be loaded into the Data Visualization forms during the install. The size of these attachments varies from a few kilobytes to several megabytes.
The rule of thumb is to remove the attachment size restriction so the installer can complete successfully. This means that you should set the size limit to 0 (unlimited).

 

Set Attachment Size Maximum to 0 for the ARDBC (ARS Data Base Configuration values a.k.a. the ARDBC metadata).

You can also run a test before the upgrade by attaching a sample file that's no more than 100 MB to see if the attachment will work. This setting is in the AR Server Configuration Panel. You can leave it at 0 if it already is set that way.

 

Here is a brief explanation of the attachment size topic:

 

Both config items ‘Db-Max-Attach-Size’ and ‘AR-Max-Attach-Size’ are the same. ‘Db-Max-Attach-Size’ was introduced in early releases of AR (not sure about the version) and was specific to Oracle DB only. Later we introduced AR-Max-Attach-Size for all databases, including Oracle. You should remove ‘Db-Max-Attach-Size’ from the config file at this point.

 

Unlimited size applies if AR-Max-Attach-Size = 0 or no entry exists in ar.cfg.
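
Putting that together, the relevant ar.cfg (ar.conf on UNIX) entries before starting the upgrade would look like this sketch:

# Remove the obsolete, Oracle-only Db-Max-Attach-Size entry if present,
# and keep (or add) only this one; 0 means unlimited:
AR-Max-Attach-Size: 0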

 

 

Warning Suppression (ARS)

 

There are several warnings interpreted by the installer as failures. These warnings can be ignored by suppressing them for the duration of the installation.

 

You can add the following warning suppression:

 

See : KA364458

 

Suppress-Warnings: 9936

 

Additional warnings may be there already, so just add 9936 to the list for the upgrade to complete successfully.
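
If the entry already lists other warning numbers, 9936 is simply appended to it; for example (the first number below stands in for a hypothetical pre-existing suppression):

Suppress-Warnings: 8946 9936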

 

Given how the installer is coded, we do not expect this issue to surface at all. However, I always like to mention it here for those customers using this KA when upgrading to AtriumCore 7.6.04 SP4, where this issue has manifested itself.

 

For this last point please see KA405615 for additional information.

 

A KA does exist with content similar to this PULSE post: KA406262

 

All Knowledge Article references require BMC Support Login.
