Hi Everyone,


Welcome to March’s CMDB Blog. In this blog, we will discuss the Normalization process using the new CMDB UI. Normalization is the first activity that takes place after data is imported into CMDB source datasets through Atrium Integrator or Discovery Sync.


Below are the topics we are going to cover:

Normalization Console Overview


The Normalization process ensures that product names and categorizations are consistent across different datasets and from different data providers. The Normalization Engine provides a centralized, customizable, and uniform way to overcome data consistency problems.


How to access the Normalization Console in the new UI?

- Once you log in to the new CMDB Console, the CMDB Dashboard is displayed. From this home page, you can open the Normalization Console using the ‘Manage Normalization’ option under the ‘Jobs’ menu.



Normalization Console Layout

- The Normalization Console shows the number of jobs along with the job run details.

- It displays information on the processed CIs and relationships, and shows whether any CIs/relationships have errors after normalization.

- You can filter the jobs displayed by selecting a dataset above.



Creating Normalization Job


1. In the CMDB UI, select Jobs > Manage Normalization.

2. On the Normalization page, click Create Job. The Create Normalization page opens as shown in the following figure:


3. Enter a unique name for the job.

4. From the Dataset Configuration drop-down list, select the dataset.

5. If you want a Product Catalog entry to be created when one doesn’t exist, select the ‘Allow New Product Catalog Entry’ checkbox.

6. The ‘Allow Unapproved CIs’ option normalizes a CI even if the product is unapproved in the Product Catalog.

     In that case, the Normalization Engine sets the CI's NormalizationStatus attribute to Normalized and Not Approved.

7. Select the Normalization Features you want to enable.

8. The Schedule determines the job type, i.e., Continuous or Batch. A Continuous job runs continuously, while a Batch job runs at a specific time.

9. Click the “Save” button to create the Normalization Job.



Normalization Configurations


You will find the options below under Configuration > Manage Normalization Rules on the CMDB Dashboard:

    • Catalog Mapping
    • Features
    • Dataset Configuration
    • Class Configuration




Catalog Mapping


From Catalog Mapping, you can map incoming CI categorizations for a discovered or imported CI to your preferred categorizations. The product categorization aliases provide the correct categorizations either when the CI's Category, Type, and Item attributes have no values or when they do not match those in the corresponding Product Catalog entry.


You can create a Product Catalog alias mapping with the steps below:

1. In the BMC CMDB Dashboard, select Configurations > Manage Normalization Rules > Catalog Mapping

2. In the Catalog Mapping screen, click the ( + ) sign to open the page in which you can select a class for which you want to create the product alias mapping.

Create Mapping.png

3. Select the CI class from the drop-down list

4. In the Discovery Product Categorization section, select the categorization values of discovered products for the CI class you selected.

    Specify a name for the product in the Product Name field.

    In the Mapped Product Categorization section, select the categorization values that are applied to the discovered product when it is normalized.



5. Click the Save button to save the details.



Normalization Features


The Normalization Engine provides various capabilities that enable you to define rules and actions to normalize CIs. These rules can be applied to individual datasets.

You can view/modify existing rules or create new rules from Configurations > Manage Normalization Rules > Features

Normalization Features.png


Version Rollup

There are two version-related attributes on BaseElement: MarketVersion and VersionNumber. The VersionNumber attribute stores a full version number, which can change frequently with maintenance releases. To manage licenses more accurately, the MarketVersion attribute stores a simpler string version, such as 5.0 or 2007. You can perform this version rollup for the MarketVersion attribute using the Version normalization feature.
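As a plain-shell illustration of the idea (not the Normalization Engine's actual implementation), rolling a full VersionNumber up to a market version is just string truncation; the version string used here is hypothetical:

```shell
# Hypothetical rollup: keep only the first two dotted components
# of a full version string (e.g. a maintenance-release number).
full_version="9.1.03.001"
market_version=$(echo "$full_version" | cut -d. -f1-2)
echo "$market_version"   # prints 9.1
```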


Impact Normalization

The Impact Normalization feature is applied to relationship classes only. This feature sets the impact attributes based on the out-of-the-box or custom impact rules for the relationship class and the source and destination classes it relates to.


Relation Name Normalization

The Relation Name Normalization feature is applicable to relationship classes only. This feature replaces the existing relationship names with the BMC recommended or custom names, based on the CI classes in the relationship.


Instance Level Permissions

This feature enables you to set the row-level permissions on CIs as defined by the CI classes and additional qualifiers of the Instance attributes.


Suite Rollup Normalization

This feature enables you to define suites and their products and to identify an instance as a suite, a suite component, or a standalone application. This allows more accurate management of suite and individual licenses.



Custom Rules

This feature enables you to create custom rules when the out-of-the-box Normalization features do not meet your requirements for normalizing data. You can create a custom rule for CI and relationship classes to perform a required action.


Dataset Configuration


1. On the BMC CMDB Dashboard, click Configurations > Manage Normalization Rules > Dataset Configurations.

2. Select an existing Dataset Configuration and click Edit.

3. On the All Datasets Configurations page, set the normalization settings for the selected dataset.


You can get more information about Dataset Configuration at the link below:


Class Configuration


You can add or remove classes for Normalization. If you remove a class from Class Configuration, CIs of that class will have a NormalizationStatus of “Not Applicable for Normalization”.


To add a class for Normalization:

1. On the BMC CMDB Dashboard, click Configurations > Manage Normalization Rules > Class Configuration.

2. Click ‘Add Class Configuration’.

3. Select the class name from the drop-down list and click Save.



Enable Debug Level Logging


You can enable debug-level logging from CMDB Dashboard > Configurations > Core Configurations.



Follow the steps below to enable debug-level logging:

    1. Click Normalization.
    2. Select ‘Plugin Server Configuration’.
    3. From the drop-down list, select the server on which you want to enable debug logs.
    4. Click API/Batch/Continuous Job Configuration:
       - API Job Configuration – logs for the Inline Normalization Job and detailed NE API calls.
       - Batch Job Configuration – logs for the Batch Job.
       - Continuous Job Configuration – logs for the Continuous Job.
    5. Change the Log Level from Warn to Debug.
    6. Click the Save button.



Thank you for reading this blog. 


In this post, we will discuss BMC Atrium Integrator, which is now also referred to as ‘Data Imports’ with the introduction of the new CMDB User Interface offered by the latest CMDB versions. Below is a reference link to the blog from November 2019, which partially discusses the ‘Data Imports’ module along with CMDB Reconciliation.


Almost every administrator of BMC CMDB uses Atrium Integrator / Pentaho Spoon for data import tasks, and some have by now gained expertise in creating complex jobs with those tools. Through this blog, we will focus only on the troubleshooting aspects of Atrium Integrator. Creating a job or a transformation, whether basic or complex, isn’t covered in this write-up. The following sub-topics are covered:


[1] Atrium Integrator Overview

[2] Various Atrium Integrator Components

[3] Carte Server – Start and Stop

[4] Atrium Integrator Configuration Forms (UDM:xxxxxx)

[5] Common Error Scenarios

[6] Atrium Integrator Logging


[1] Atrium Integrator Overview (AI Overview)


Atrium Integrator is an integration engine that helps you transfer data from external datastores to CMDB classes and AR System forms. Its purpose is to transfer data from a variety of input sources, such as flat files, complex XML, JDBC, ODBC, JMS, native databases, web services, and others, using connectors. Atrium Integrator provides the ability to clean and transform data before transferring it to CMDB classes or AR forms.


Break-up of the Atrium Integrator components per installation:


  • BMC Remedy ARServer Installation installs:

    1. Atrium Integrator Carte Server

    2. Atrium Integrator Spoon

    3. ARSystem Adapters

    4. AR PDI Plugins


  • Atrium Integrator Server installs:

    1. Atrium Integrator Console

    2. CMDB Adapters


  • Atrium Integrator Spoon Client Installs:

       Remote Atrium Integrator Spoon


  Atrium Integrator uses Pentaho, an ETL tool that enables you to extract, transform, and load data. When you run a job from the AI console, it runs on the AI Carte server. You can also run jobs from the Atrium Integrator Spoon client on client machines/desktops, or directly on the AR Server where Spoon is installed.


[2] Various Atrium Integrator Components


Please refer to the blog post below to learn more about the Data Import features in the new CMDB UI console.

Helix Support: Using new CMDB UI - Class Manager & Atrium Integrator


2.1 Atrium Integrator Spoon Client:


The Atrium Integrator Spoon is a client-side, user-installable graphical user interface application used to design transformations and jobs. The Atrium Integrator Spoon Client installer is available on the BMC EPD site as one of the installable client program options of the standard BMC Remedy AR System installation program.


For complete documentation on creating transformations and jobs using the Atrium Integrator Spoon client, please refer to:

Spoon User Guide - Pentaho Data Integration - Pentaho Wiki


2.2 Atrium Integrator Spoon:


Atrium Integrator Spoon is installed along with the AR Server installation. BMC provides limited support for Spoon. Install and use only the Pentaho and Atrium Integrator Spoon versions packaged with the BMC Remedy AR System installer.

Though Pentaho Spoon is supported on multiple platforms, BMC supports Spoon only on Windows.


There are selected steps that BMC owns/supports in an Atrium Integrator transformation: Documentation for BMC CMDB 19.08 - BMC Documentation


For information about additional steps that you can add to your transformation, see the Pentaho documentation.

Spoon User Guide - Pentaho Data Integration - Pentaho Wiki


We support selected vendor databases, which include IBM DB2, MS SQL Server, Oracle, Sybase, and MySQL.


[3] Carte Server – Start and Stop


The Carte server configuration entry is in the ‘armonitor.cfg’ file. The location of this file varies based on the operating system of the AR server.


(Windows) ARSystemServerInstallDir\Conf\armonitor.cfg

(Linux / UNIX) /etc/arsystem/serverName/armonitor.conf

ARMonitor entry for Atrium Integrator.PNG


3.1 Starting Carte Server: A restart/start of the BMC Remedy AR Server service will start the Carte Server, as long as the Carte line is not commented out (with a # symbol) in the armonitor configuration file.


3.2 Stopping Carte Server: It’s important that the Carte Server is not killed abruptly while an AI job is running. To stop the Carte Server temporarily or permanently on a particular AR server, edit the armonitor configuration file by commenting out the line for the Carte Server, then save the file. This must then be followed by killing the existing Carte Server process.
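For illustration, the comment-out step could be scripted as below; the file path and java line are simplified stand-ins for the real armonitor entry:

```shell
# Create a throwaway armonitor file with a simplified Carte entry.
cat > /tmp/armonitor.conf <<'EOF'
/usr/bin/java -cp /opt/bmc/ARSystem/diserver org.pentaho.di.www.Carte myhost 20000
EOF

# Prefix the Carte line with '#' to disable it on the next AR restart.
sed -i '/org\.pentaho\.di\.www\.Carte/ s/^/#/' /tmp/armonitor.conf

grep 'Carte' /tmp/armonitor.conf   # the line now starts with '#'
```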


On a Windows server, scan the Task Manager for the highlighted line below to identify the Carte Server process. After selecting the process, use ‘End Process’ to kill the Carte Server, just like killing any other process on a Windows server.


Atrium Integrator Windows Task Manager - locating Process.PNG


On a Linux box, the same task can be achieved by first identifying the process ID of the Carte Server, using the following command:

  ps -ef | grep 'diserver'


  Atrium Integrator process on Linux.PNG

This is followed by the kill command to kill the process:

  kill -9 <process ID>
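Putting the two commands together, here is a small sketch of extracting the PID; the sample line below stands in for real `ps -ef` output, and the bracketed grep pattern keeps grep from matching itself:

```shell
# Sample ps -ef line for the Carte process (the PID is the second column).
sample="arsys  4321     1  0 10:00 ?  00:01:02 java -cp /opt/bmc/ARSystem/diserver/data-integration org.pentaho.di.www.Carte myhost 20000"

# Match on 'diserver' and print the PID field.
pid=$(echo "$sample" | grep '[d]iserver' | awk '{print $2}')
echo "$pid"   # prints 4321; on a live system, follow with: kill -9 "$pid"
```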


[4] Atrium Integrator Configuration Forms (UDM:xxxxxx)

Note: Check all these UDM forms only when no Atrium Integrator job is running.


The tables below maintain metadata about the Atrium Integrator tool. From a troubleshooting perspective, it is important to know what data is stored in them.


  • UDM:Config
  • UDM:RAppPassword
  • UDM:ExecutionInstance
  • UDM:PermissionInfo


  1. UDM:Config: This form contains entries for all AR Servers in the server group, one of which is the Default Carte Server that runs the AI jobs on that server.  A few important things to note:


  • The server port assigned to Atrium Integrator is 20000
  • Atrium Integrator can run in an AR Server Group environment; it will pick up the primary server from the AR System Server Group Operation Ranking as the ‘Default’ server
  • The secondary server is the server ranked 2nd in the AR System Server Group Operation Ranking for the Atrium Integrator operation



NOTE: For Atrium Integrator to run in an AR Server Group, it must be configured in the AR System Server Group Operation Ranking form before it appears in UDM:Config.


Configure UDM in AR Server Group:


Before configuring this form for a server group environment, you must rank the Atrium Integrator servers using the AR System Server Group Operation Ranking form. If you assign rank 1 to a server, that server becomes the primary server and runs the jobs. If the primary server fails, the secondary (failover) server, the one assigned rank 2, runs the jobs. If you do not assign rankings to the servers in a server group environment, jobs run on the server that receives the request first.

Server Group Ranking.PNG


2. UDM:RAppPassword:


This form stores the Remedy Application (RApp) Service password for a specific AR System server. The AR System server installer populates this regular form during AR System server installation. The ARInput, AROutput, and CMDB steps provided by BMC use this form to make connections to the AR System server.


If this password is changed in the Remedy AR Server configuration, or you restore or migrate the database from one server to another, make sure you update this form with the correct server names and their corresponding RApp passwords to avoid issues when running AI/Spoon jobs.


If Atrium Integrator servers are configured to run in an AR server group environment, ensure that this form contains all possible AR Server entries, including short names and FQDNs. Remove the ones that aren’t needed or that hold incorrect AR server names.



Any incorrect information in this form leads to failures in the Load steps used in Atrium Integrator jobs.


3. UDM:ExecutionInstance:


This regular form allows multiple instances of the same transformation to be run. For every instance, the Atrium Integrator Engine provides an Object ID; the combination of Object ID and Transformation/Job Name is used as a unique key.


This form has one very important field named “Atrium Integrator Engine Server Name”, which holds the AI server name. In a server group environment, this field shows the primary server name. This field must hold the correct AR Server name (specifically after a DB restore).


Note: This form cannot be used to create a new Execution Instance manually; you can only use it to view the created execution instances.


For some AI job run issues, we suggest deleting the existing UDM execution entry for the specific transformation/job and triggering the job/transformation again.


4. UDM:PermissionInfo:


This form contains the list of repository objects such as Transformations, Jobs, Database Connections, Directories, Slave Servers, etc. By default, all Jobs and Transformations are assigned Public permission. Users with access to this form can amend the repository objects in it.


During execution of a Transformation/Job, a query is performed on this form; if the user has access to the form, execution succeeds. Without that permission, you may get errors.


[5] Common Error Scenarios


5.1 Atrium Integrator Job Schedules:


As we all know, an Atrium Integrator job can be scheduled to run at a specific time or interval. A few important things to know about the job scheduler:

[a] Atrium Integrator job schedules are managed by the AR Server and are stored in the "AR System Job" form.

[b] The AR server runs an escalation named "ASJ:ScheduleJob" to trigger the job at the configured time. BMC recommends running this escalation on a specific pool to avoid job schedule issues.

[c] Do not schedule a reconciliation job and an Atrium Integrator job at the same time, because both jobs could query or update the same data.


You may encounter errors like:


BMCATRIUM_NGIE000502: Failed to update job schedule.


BMCATRIUM_NGIE000501: Failed to create job schedule.


There can be several kinds of Atrium Integrator job schedule issues, such as scheduled jobs not running at the scheduled time, or existing schedules that cannot be modified.


Resolution Approach:

--Verify the "AR System Job" form entry to check that a valid job schedule exists with status Active.

--Verify a few important field values such as ‘Schedule Start Time’, ‘Next Collection Time’, ‘Type’, etc.

--Try running the job manually from the AI console; if that works, check whether the job has a UDM:ExecutionInstance entry. If it does, delete it and see whether the job triggers at the scheduled time.

--Verify that the escalation "ASJ:ScheduleJob" is enabled and running on a specific pool number.


   To run the escalation on a specific pool number, please refer to the following guidelines:

    Remedy - Server - How to assign a specific Pool to an Escalation


   Additionally you can visit:


--Our AI expert Gustavo del Gerbo has already covered a few important tips and hot fixes to apply in the following post, which addresses DMT schedule jobs not working and/or failing inconsistently, schedules that randomly do not run, memory leaks and high memory usage of AI, and UDM jobs getting stuck:


  1. Authentication error when it tries to create a record in the UDM:CartePending form.
  2. Unable to create webresult from XML. Error reading information from XML string: Premature end of file.
  3. 401 authentication error when it tries to publish the job to the Carte server.
  4. When you enable the Carte server in debug mode, you get an HTTP 400 error when the job is published to the Carte server, and the arjavaplugin log shows a socket reset connection error.
  5. Error when the DMT job console tries to query the UDM:ExecutionStatus form: authentication error for AR_ESCALATOR or Remedy Application Service.
  6. Issue where the first run of the scheduled job is successful and the second run gets stuck.
  7. SW00515492 - AI Carte Server has a memory leak, and the Pentaho ARDB plug-in also has memory leaks.
  8. SW00515494 - Pentaho Spoon job received java.util.ConcurrentModificationException.


Below is a list of good-to-have, cumulative Atrium Integrator hot fixes for different versions:


  • All versions: 8.1 all SPs, 9.0 all SPs, 9.1GA, 9.1 SP1 up to 9.1 SP2:

    - AI_9.1.00_29NOV2016_SW00518269_ALL


  • 9.1 GA (no service pack) or 9.1 SP1 version:

    - AI_9.1.00_12SEPT2016_SW00515492_SW00515494_ALL

    - AI_9.1.00_30DEC2016_SW00522054_ALL

    - FD_91_2016DEC14_SW00522122_ALL

    - Download and replace the kettle-engine.jar file from this blog to supersede the file from the hotfix package (the file from the package has some incompatibilities).


  • 9.1SP2:

    - AI_9.1.02_25JULY2017_SW00522054_All

    - AI_9.1.02_04MAY2018_SW00539413_All for Security (Version disclosure and other hacks)


  • 9.1SP3: For version 9.1.03 make sure to apply Patch 1 first and then apply the below Hot fixes:


    - AI_9.1.03.001_04JAN2018_SW00540959_ALL

    - AI_9.1.03.001_30JULY2018_SW00549235_ALL

    - AI_9.1.03.001_30Aug2018_SW00550547_All


  • 9.1SP4:

    - Use the attached file for

    - AI_9.1.04_18JAN2018_SW00543845 for Performance of AI console (Flex old UI).

    - AI_9.1.04_29JUNE_2018_SW00548512_All for Performance of Spoon.


  • 9.0 (all SP levels):

    -  AI_9.0.01_26OCT2016_SW00515994_SW00516220_SW00516447_ALL


  • 8.1 SP2:

     - AI_8.1.02_05OCT2016_SW00516445_SW00515997_ALL


5.2 Out Of Memory Errors when running Atrium Integrator jobs


Sometimes we see an out-of-memory error in the arcarte logs:


UnexpectedError: java.lang.OutOfMemoryError: GC overhead limit exceeded


Typical symptoms:

-- AI jobs take a long time to run/finish.

-- The Carte Server, or at times the AR Server, crashes.


Resolution Approach:


--Enable -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="c:\temp\MyDump.hprof" for both Spoon and Carte.

--For Spoon, add the line in Spoon.bat (ARSERVERHOME\diserver\data-integration\Spoon.bat).


--You can increase the Java heap size for Atrium Integrator and Spoon.

When getting out-of-memory errors while running a job from the AI console, increase the Java heap size in the armonitor configuration file:


"%BMC_JAVA_HOME%\java.exe" "-Xmx1024m" "-Djava.ext.dirs=C:\Program Files\Java\jre1.8.0_191\lib\ext;C:\Program Files\Java\jre1.8.0_191\lib;C:\Program Files\BMC Software\ARSystem\diserver\data-integration;C:\Program Files\BMC Software\ARSystem\diserver\data-integration\lib" "-Dorg.mortbay.util.URI.charset=UTF-8" "-DKETTLE_HOME=C:\Program Files\BMC Software\ARSystem\diserver" "-DKETTLE_REPOSITORY=" "-DKETTLE_USER=" "-DKETTLE_PASSWORD=" "-DKETTLE_PLUGIN_PACKAGES=" "-DKETTLE_LOG_SIZE_LIMIT=" "-DKETTLE_MAX_LOG_SIZE_IN_LINES=5000" "-DKETTLE_DISABLE_CONSOLE_LOGGING=Y" "-DKETTLE_COMPATIBILITY_MERGE_ROWS_USE_REFERENCE_STREAM_WHEN_IDENTICAL=Y" "-DKETTLE_LENIENT_STRING_TO_NUMBER_CONVERSION=Y" org.pentaho.di.www.Carte carteservername 20000 -i "C:\Program Files\BMC Software\ARSystem"



When running a job from Spoon, increase the Java heap size for Spoon:



if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xms1024m" "-Xmx2048m" "-XX:MaxPermSize=256m"



5.3 How to troubleshoot UDM / Atrium Integrator job related issues.


5.4 Atrium integrator Spoon and DataBase connectivity:

AI/Pentaho/Spoon/Carte MSSQL Connectivity. A mystery of many forms.


5.5 Atrium Integrator Performance best practices:

Best practices that can improve the performance of Atrium Integrator job


[6] Atrium Integrator Logging:


6.1 How to set DEBUG logs for Carte Server:


1. Locate the Carte logging configuration file in the following directory (on Linux, it is in a similar directory relative to the install root):

C:\Program Files\BMC Software\ARSystem\diserver\data-integration\pwd\



2. Edit this file to change the root logging level to Debug and save. The change is made in the top part of the file:

#Root logger log level


# Package logging level


6.2 Atrium Integrator adapter log files


In Windows systems, log files reside in <AR_system_installation_directory>/ARserver/db. In Unix systems, log files reside in <AR_system_installation_directory>/db. Carte server log files include:

  • arcarte.log
  • arcarte-stdout-<timestamp>.log
  • arcarte-stderr-<timestamp>.log


The ARDBC plug-in log file is arjavaplugin.log. All ARDBC plug-in messages are recorded in this file.


By default the log level is warn. If you want to log info or debug messages:

  1. Open <AR_system_installation_directory>/pluginsvr/log4j_pluginsvr.xml
  2. Search for logger com.bmc.arsys.pdi.
  3. Change the log level to info or debug.
  4. Restart the AR System server.


<logger name="com.bmc.arsys.pdi">
  <level value="info" />
</logger>
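For illustration, the level change can also be scripted; the sed edit below runs against a throwaway copy of the snippet (the real file lives under pluginsvr/ and needs an AR System server restart afterwards, as noted above):

```shell
# Throwaway copy of the logger snippet with the default 'warn' level.
cat > /tmp/log4j_pluginsvr.xml <<'EOF'
<logger name="com.bmc.arsys.pdi">
  <level value="warn" />
</logger>
EOF

# Switch the ARDBC plug-in logger to debug.
sed -i 's|<level value="warn" */>|<level value="debug" />|' /tmp/log4j_pluginsvr.xml

grep 'level' /tmp/log4j_pluginsvr.xml   # now shows value="debug"
```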


6.3 Spoon Logging:


While running a Job/Transformation from Spoon, you can set the logging level to one of the given options:



Thank you for reading this blog!


Welcome to the first blog on CMDB in the year 2020. First off, I'd like to acknowledge the efforts of my colleagues Devendra Borse and Varun Patwardhan for reviewing the blog contents, and Manish Dwivedi for sharing the performance stress test results and valuable inputs on PerfMonitor. In this blog, we cover information on fine-tuning BMC CMDB Reconciliation performance, which we hope will assist you in handling such situations. Below are the sub-topics covered in this blog:


[A] Introduction – CMDB Reconciliation Performance

[B] Establishing a CMDB Reconciliation Performance Benchmark

[C] How to handle CMDB Reconciliation performance degradation using standard recommendations from BMC?

[C.1] Identify the uniqueness of the Reconciliation Job performance issues, and Remediate them

[C.2]  Are there unwanted CI data participating in CMDB Reconciliation job?

[D] How to report performance issues to BMC Support?


As Application and Database performance issues are broad topics, let’s look at the ones which are not covered in this blog:


  • Network related issues
  • Crashes of the ARRECOND process


We will now proceed with discussing the planned topics.


[A] Introduction – CMDB Reconciliation Performance


Reconciliation is one of the most important and resource intensive activities within the BMC CMDB application.  I’ve detailed some vital topics around this subject, which I hope will assist in fine tuning its performance.


[B] Establishing a CMDB Reconciliation Performance Benchmark


Like any other database application, the best way to ascertain Reconciliation job performance is by measuring: executing the Reconciliation jobs and measuring the results in terms of the number of Configuration Items (CIs) identified and merged per second. This exercise will assist you in arriving at a performance benchmark for CMDB Reconciliation. Some of the test results from our lab are as follows:


IMPORTANT: Please note that the test results are expected to vary somewhat in every environment. The figures below are only the result of sample tests performed in BMC labs, and should not be considered official confirmation of a definite count of Configuration Items (CIs) to reconcile, even given a similar load and architecture.


Test Results Spreadsheet.PNG


You can record similar stats from recent runs of your TEST and PROD environments' Reconciliation jobs and arrive at a benchmark. The benchmark can change over time depending on the CI data count, query tuning, indexing, and other AR and database server parameters affecting performance.


Please find below instructions on establishing a benchmark, data volumes to use, methodologies etc


[C] How to handle CMDB Reconciliation performance degradation using standard recommendations from BMC?


While running CMDB Reconciliation jobs to establish a benchmark, if you notice performance degradation, please review our standard configuration recommendations for the AR server and database server, as follows.



If the performance of CMDB Reconciliation remains slow even after applying the standard recommendations, it must be investigated from the AR Server, database server, and Reconciliation job perspectives. Based on our past experience, we’ve put down some steps for identifying and remediating the slowness.


[C.1] Identify the uniqueness of the Reconciliation Job performance issue


There are three main components involved in running a Reconciliation job: the AR System, the database, and the Recon job configuration, including the CI data it reconciles. I'd suggest the following approach to understand which component is causing the performance issue and to resolve it.


[C.1.a] Is the Recon job slow because there is a huge number of Configuration Items (CIs) to process?


You don't have control over the number of legitimate CIs to reconcile, but you can take certain measures to eliminate those with erroneous conditions, and use qualifications. First, find the total number of CIs to process using the following SQL queries, then compare the count with the benchmark you established, in terms of the time taken by the Recon job for a similar load, to estimate the time the job needs to finish.


For Identification activity


SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID != 'BMC.ASSET' AND ReconciliationIdentity = '0'


For Merge activity


SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID != 'BMC.ASSET' AND ReconciliationMergeStatus = '40'


SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID = '<Dataset ID>' AND ReconciliationMergeStatus = '40'


SELECT Count(*) from dbo.BMC_CORE_BMC_ComputerSystem where DatasetID != 'BMC.ASSET' and ReconciliationMergeStatus = '40'

SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID = 'BMC.ADDM' AND ReconciliationMergeStatus = '40'


You can further optimize the query by adding more conditions, but do not remove the core ones like 'ReconciliationMergeStatus' and 'ReconciliationIdentity'.


Remediate: After the Reconciliation job finishes, if you find the total time taken to reconcile a given number of CIs is inconsistent with the benchmark spreadsheet, the AR and database server configuration must be revisited and fine-tuned. If the Recon job performance matches the benchmark, there isn't much more to do; however, there are a few steps to eliminate unwanted CIs from processing:


  • You can relieve stress on the Recon job by taking CIs off its radar that are victims of data errors such as duplicate CIs and orphan relationships. Refer to the sub-topic 'Who is participating in Recon Job?' in this blog for effectively handling those data errors.


  • To limit the count of CIs and improve the data quality, you may choose to reconcile only normalized CIs.


  • Lastly, qualifications can be used in the Recon job Identification activity to limit the number of CIs to process.


[C.1.b] Is the Recon job slow because there are not enough threads configured on the AR server Recon private queue to handle the large number of CIs?


Symptoms: If you notice from the Reconciliation engine log (arrecond.log) that only a few threads (1-2) are used for a job with a large count of CIs to reconcile, you likely have insufficient threads. In other words, if SQL queries perform well on the database server, and your database administrator confirms the database engine has bandwidth to accept additional connections from the AR server, the slowness can be narrowed down to insufficient AR server thread configuration. In some cases, adding more CPU cores may be an option to address the situation.


Remediate: BMC recommends using the private RPC queue 390698 for the Atrium CMDB Reconciliation engine process. To maintain the balance of threads among other processes on the AR Server, we recommend 1.5 * (number of available CPUs) as the thread configuration.
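As a quick sketch, assuming a Linux AR server where `nproc` is available, the 1.5 * CPU recommendation works out to:

```shell
# Recommended thread count for private queue 390698 = 1.5 x CPU cores.
cpus=$(nproc)                      # number of available CPU cores
max_threads=$(( cpus * 3 / 2 ))    # integer form of 1.5 * cpus
echo "Max Threads for queue 390698: $max_threads"
```

On a 4-core server this yields 6 threads; on 8 cores, 12.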


  • How do you find the total number of CPU cores on Windows and Linux?


On Windows Server, the total number of available CPUs can be seen on the 'Performance' tab of Task Manager.


CPU Pic.png



  • On Linux, the same information can be obtained with the ‘lscpu’ command.


Linux Number of CPU.png




Below is a table of recommended thread settings per CPU core, following the 1.5 * CPU guideline:

Number of CPU      Max Threads

4 CPU Core         6

8 CPU Core         12

  • A screenshot displaying the thread settings for the CMDB Reconciliation private queue:


Private Queue 390698.png


  • General recommendations for threads per CPU core, and other performance indicators for Remedy applications







  • When can you possibly increase the number of CPU cores on the AR server?     


Increase the number of CPU cores on the AR server if reconciliation can benefit from an increased number of threads, provided that the database (on the database server) can accommodate the additional connections and effectively handle the load of SQL queries. More information on handling database performance is documented under the sub-topic 'Is the Recon Job performance issue faced because of performance issues on the Database server?'


  • How can you monitor CPU performance against the thread settings?


Please engage your System Administration team, who can monitor CPU performance while the Recon job runs using tools like Performance Monitor (PerfMon) on Windows, or CPU-related commands on Linux.


Besides the Microsoft video below, there are various videos on using PerfMon that you can search for on the web and use to gather information on various performance counters.  Once this data is gathered, you can consult your System Administrator in case there is a need for additional CPU cores, or configure the thread settings effectively in the AR server for the CMDB Reconciliation private queue.


Below are some of the counters used by our QA team to monitor CPU performance using PerfMon:


Important counters from AR server perspective are as follows:


#Processor Counters

"\\ServerName\Processor(*)\% Processor Time"

"\\ServerName\Processor(*)\% User Time"

"\\ServerName\Processor(*)\% Idle Time"

"\\ServerName\Processor(*)\% Interrupt Time"

"\\ServerName\System\Processor Queue Length"

#Memory Counters

"\\ServerName\Memory\Available MBytes"

"\\ServerName\Memory\Page Faults/sec"

"\\ServerName\Memory\Page Reads/sec"

"\\ServerName\Memory\Page Writes/sec"


"\\ServerName\Memory\Pages Input/sec"

"\\ServerName\Memory\Pages Output/sec"


#Process Counters

"\\ServerName\Process(*)\ID Process"

"\\ServerName\Process(*)\% Processor Time"

"\\ServerName\Process(*)\% User Time"

"\\ServerName\Process(*)\Private Bytes"

"\\ServerName\Process(*)\Working Set"

"\\ServerName\Process(*)\Thread Count"

#PhysicalDisk Counters

"\\ServerName\PhysicalDisk(*)\% Idle time"

"\\ServerName\PhysicalDisk(*)\% Disk time"

"\\ServerName\PhysicalDisk(*)\% Disk Read Time"

"\\ServerName\PhysicalDisk(*)\% Disk Write Time"

"\\ServerName\PhysicalDisk(*)\Avg. Disk sec/Read"

"\\ServerName\PhysicalDisk(*)\Avg. Disk sec/Write"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Queue Length"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Read Queue Length"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Write Queue Length"

#LogicalDisk Counters

"\\ServerName\LogicalDisk(*)\% Idle time"

"\\ServerName\LogicalDisk(*)\% Disk time"

"\\ServerName\LogicalDisk(*)\% Disk Read Time"

"\\ServerName\LogicalDisk(*)\% Disk Write Time"

"\\ServerName\LogicalDisk(*)\Avg. Disk sec/Read"

"\\ServerName\LogicalDisk(*)\Avg. Disk sec/Write"

"\\ServerName\LogicalDisk(*)\Avg. Disk Queue Length"

"\\ServerName\LogicalDisk(*)\Avg. Disk Read Queue Length"

"\\ServerName\LogicalDisk(*)\Avg. Disk Write Queue Length"

#Network Interface Counters

"\\ServerName\Network Interface(*)\Bytes Total/sec"

"\\ServerName\Network Interface(*)\Bytes Sent/sec"

"\\ServerName\Network Interface(*)\Bytes Received/sec"

#"\\ServerName\Network Interface(*)\Bytes/sec"

"\\ServerName\Network Interface(*)\Output Queue Length"
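On Windows, these counters can also be collected from the command line rather than the PerfMon GUI. A sketch, assuming the built-in typeperf utility; 'ServerName', the file names, and the 15-second interval are placeholders for your environment:

```shell
# Save a subset of the counters above to a plain-text file, one per line.
cat > counters.txt <<'EOF'
\\ServerName\Processor(*)\% Processor Time
\\ServerName\System\Processor Queue Length
\\ServerName\Memory\Available MBytes
\\ServerName\PhysicalDisk(*)\Avg. Disk Queue Length
\\ServerName\Network Interface(*)\Bytes Total/sec
EOF
# On the Windows server, sample every 15 seconds into a CSV while the job runs:
# typeperf -cf counters.txt -si 15 -o ar_server_perf.csv
```

The resulting CSV can be reviewed after the Recon job run to correlate spikes with job activity.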


Important counters from Database server perspective are as follows:


For the database server, collect the same Processor, Memory, Process, PhysicalDisk, LogicalDisk, and Network Interface counters listed above for the AR server, substituting the database server's name. In addition, collect the following SQL Server counters:

#SQL Server Counters

"\\ServerName\MSSQL$CMDBPERF:General Statistics\User Connections"

"\\ServerName\MSSQL$CMDBPERF:SQL Statistics\Batch Requests/Sec"

"\\ServerName\MSSQL$CMDBPERF:SQL Statistics\SQL Compilations/sec"

"\\ServerName\MSSQL$CMDBPERF:SQL Statistics\SQL Re-Compilations/Sec"

"\\ServerName\MSSQL$CMDBPERF:Buffer Manager\Buffer cache hit ratio"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Average Wait Time (ms)"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Lock Timeouts/sec"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Lock Wait Time (ms)"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Lock Waits/sec"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Number of Deadlocks/sec"




[C.1.c] Is the Recon Job performance issue faced because of performance issues on the Database server?         


Symptoms: If the database server responds slowly to SQL queries while the Reconciliation job is running, and as a result keeps the AR server threads in waiting mode, the performance issue can be narrowed down to the database side.  Slow database performance can have multiple causes, some of which are mentioned below:


  • Long-running queries due to use of an incorrect index
  • Worker threads not available to take the requests
  • I/O delays


Because of its complex nature, understanding the real cause of a database performance issue requires engaging your Database Administrator (DBA) team.  From your side, you can help identify long-running queries.


[i] Identifying Long Running Queries


  • Identify long running queries on the Remedy Database -                             

Using Remedy's Log Analyzer tool


The Remedy Log Analyzer tool is very useful for finding long-running API calls, and thereby SQL statements, when analyzing the AR server-side (SQL+API+Filter) logs captured at the time of the AR server performance issue.  If you are still new to this tool, please go through the below video.




  • Identify long running queries on the Remedy Database -                                            

Using database tools like Oracle Automatic Workload Repository (AWR) or Microsoft SQL Server Database Engine Tuning Advisor


Please engage your Database Administrator to generate these reports. The benefit of these reports over AR server logs is that they give an overall picture of how the database server is performing, not only from the Remedy application standpoint but also for any other applications it may be serving.
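If a full AWR or Tuning Advisor report is more than you need at first, your DBA can start with a quick look at currently running statements. A minimal sketch, assuming Microsoft SQL Server (sys.dm_exec_requests and sys.dm_exec_sql_text are standard dynamic management views; the server and database names are placeholders):

```shell
# Write the diagnostic query to a file for the DBA to review and run.
cat > long_running.sql <<'EOF'
SELECT r.session_id, r.status, r.total_elapsed_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY r.total_elapsed_time DESC;
EOF
# Example invocation (placeholders): sqlcmd -S <server> -d <ARSystemDB> -i long_running.sql
```

Running this while the Recon job is active shows which statements have been executing longest.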


               To understand more about Oracle AWR report, please go through the below link:



               To understand more about MS SQL Server Tuning Advisor, please go through the below link:



[ii] Understanding the reason behind time taken


  • Is it a missing index?
  • Are queries waiting on a lock to be released?
  • Is it physical disk I/O, the memory used to temporarily store query results, or the CPU taking time to process the query?
  • Any other unknown reason?


Validating the above requires ownership and engagement from your Database Administrators. However, based on our past experience, we'd suggest the following approach.

  • Using Database tools, identify the set of SQL queries returning the results slowly.


  • Are those SQL queries related to Reconciliation job activities, the Remedy application in general, or an external application?  Reconciliation job queries are easy to identify because they run against CMDB class forms, for example BMC_CORE_BMC_BaseElement, BMC_CORE_BMC_BaseRelationship, BMC_CORE_BMC_ComputerSystem, etc.


  • If your DBA identifies long-running queries on CMDB class forms, have them advise on the creation of one or more indexes that can help speed up query results.  The index fields and their sequence are decided by the DBA.


  • Indexes can be created on the CMDB forms directly using the CMDB Class Manager, but we recommend first creating the indexes directly on the database tables.  Take an appropriate backup of the database before making changes to the database schema.  After successful testing (which also includes satisfactory query response time), you may later choose to drop that index from the database and create it on the form instead.  The benefit of creating indexes on Remedy forms is that they are retained during upgrades of the application.
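As a sketch of that database-side trial, the script below shows the shape of such a change. The table name (T450) and column ID are hypothetical placeholders, not real field IDs from your system; Remedy stores each form as a T<schemaId> table with C<fieldId> columns, and the actual index definition must come from your DBA:

```shell
# Write a trial-index script for DBA review; names below are placeholders.
cat > trial_index.sql <<'EOF'
-- Take a full database backup before any schema change
CREATE INDEX IDX_BASEELEMENT_TRIAL ON T450 (C400129200);
EOF
# Apply through your DBA's normal change process, then re-run the slow
# queries and compare timings before exercising the index with the Recon job.
```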


  • Once the index is created, the DBA should first run the queries a few times with the new index, outside of the Recon job, to gather statistics.  If the results are satisfactory, you can proceed to run the Recon job.


  • Before running the Recon job to test the new indexes, the DBA must evaluate the rest of the available indexes on those tables to ensure they do not overlap with, and hence take priority over, the newly created ones when the Recon job runs.  Basically, your DBA must ensure that the SQL queries will use the new indexes even during the Recon job.  This confirmation is needed because neither the BMC Remedy AR server nor the CMDB application has an API feature that can force a particular index in the SQL queries it generates.


  • When running the Recon job, if you notice proper usage of the index yet the slowness persists, the DBA needs to investigate other parameters like disk I/O, memory usage, CPU consumption, or database locks, unless the database tool report still shows long-running queries, whether new ones or the previous ones.


  • This index-tuning process continues until all the long-running queries are tuned.



  [iii] Index design considerations & other general tips


  • Understand the characteristics of the most frequently used queries. For example, knowing that a frequently used query joins two or more tables (such as child CMDB classes, where BMC_BaseElement is joined with a BMC_<child class> table) will help you determine the best type of indexes to use.


  • Microsoft suggests to determine the optimal storage location for the index. A non-clustered index can be stored in the same filegroup as the underlying table, or on a different filegroup. The storage location of indexes can improve query performance by increasing disk I/O performance. For example, storing a non-clustered index on a filegroup that is on a different disk than the table filegroup can improve performance because multiple disks can be read at the same time.


  • Use tools like Microsoft Database Engine Tuning Advisor and Oracle's SQL Tuning Advisor to analyze the database and get better index recommendations.


  • Avoid large numbers of indexes on tables with frequent data changes, like BMC_BaseElement, BMC_BaseRelationship, BMC_ComputerSystem, etc.


  • Avoid using too many columns in an index; use only the appropriate ones, in the right sequence.  Consider the order of the columns if the index will contain multiple columns: the column used in the WHERE clause in an equal to (=), greater than (>), less than (<), or BETWEEN search condition, or that participates in a join, should be placed first. Additional columns should be ordered by their level of distinctness, that is, from the most distinct to the least distinct.


  • Note that the default cursor_sharing parameter in Oracle 10g is set to EXACT.


  • Check whether the Oracle database instance has been allocated too little memory.


  • Check whether SQL Server has been allocated insufficient space in the tempdb database.


  • Avoid using the LIKE operator in queries (in Identification rules and Qualifications of the Reconciliation job).


  • For better performance and results, it is recommended that you use the Reconciliation Merge Order option 'By class in separate transactions', and deselect the 'Include Unchanged CIs' option within the Merge activity.



[C.2] Are there any errors for CIs participating in the CMDB Reconciliation activity?


You might see performance issues if too many CIs are failing to identify or merge. Because of the failures, the same CIs go through the reconciliation activity during every job run, which unnecessarily increases the load on the Reconciliation Engine and generates unproductive calls to the AR server and the database.

Hence, it's better in the long run to resolve those errors instead of ignoring them. Below are the most common errors that you may experience in a reconciliation activity:



ARERR[120092] The Dataset ID and Reconciliation Identity combination is not unique

          To resolve this error, please follow this KA #


Investigating issues related to Reconciliation Job

          Issues related to reconciliation jobs - Documentation for BMC CMDB 19.08 - BMC Documentation


Found multiple matches (instances) for class

          Follow this KA #




[D] How to report performance issues to BMC Support?


  • On the AR server running the Recon job, generate AR server-side SQL+API+Filter logs for at least 10 minutes to capture the slowness.  Also capture the Recon job log during the same time.

                         Setting log files options - Documentation for Remedy Action Request System 9.1 - BMC Documentation


  • Run the Atrium Core Maintenance Tool on the AR server running the Reconciliation Engine to gather the logs and configuration files.



  • If possible, run the log analysis using the Log Analyzer tool and share the output with us.


Thank you for reading.  Please share your feedback and queries.

Share This:

Having worked in Pentaho Spoon for some time, you tend to encounter issues due to overlooked minor configurations or over-analyzing things. Below are a few of the issues I have encountered while working in the tool and the solutions that worked for me (which might not be right all the time).


1. Weird data being uploaded from a transformation

Issue: When you import from a CSV file, sometimes weird values (consisting of @#! and other alphanumeric characters) get loaded into Remedy.

Solution: This is caused by the 'Lazy conversion' checkbox being selected in the CSV input step during the data load. It needs to be unchecked for the data to load properly without corrupting the values.

Note: Lazy conversion is really useful when you just have to read data from a file and pass it straight to a file output, without any modification.


2. Loading data into the CMDB relationship table

Issue: When running a transformation to load data into a relationship table, based on the source instance ID and destination instance ID, the records would be rejected.

Solution: In the CMDB Output step, there is a 'Use CheckSum' checkbox; it needs to be unchecked for the CMDB relationship tables.


3. Executing AI jobs in a Linux environment

Issue: For some reason we were not able to execute the AI jobs from the Atrium console in the mid tier. While that was being worked on, we still needed a way to run them.

Solution: We ran the jobs directly from the command prompt. Details are explained in the blog post below.

Executing Spoon jobs in Linux Environment


4. Unable to run a job from command prompt

Issue: When we tried to run jobs from the command prompt, a few of them failed to execute.

Solution: This was mainly due to spaces in the job names. There are two ways to rectify this:


     - Either name the jobs without spaces, or separate words with an underscore (_)

     - If there is a space in the name, enclose the job name in double quotes (" ") in the script used for execution
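As a sketch, assuming Pentaho's kitchen.sh launcher and a hypothetical repository and job name, the quoting looks like this; unquoted, the shell would split the name into several arguments:

```shell
JOB="CMDB Computer Load"            # job name containing spaces
printf '%s\n' -job="$JOB"           # quoted: stays one -job=... argument
# ./kitchen.sh -rep=ARRepo -user=admin -pass=secret -job="$JOB" -level=Basic
```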


5. A sequential transformation in a job fails to run when the previous transformation has failed

Issue: In a job with multiple transformations, when the first (or any earlier) transformation fails, all successor transformations are skipped and the job ends.

Solution: To avoid situations like this, set the hop's flow evaluation to "Unconditional". By default it is set to "Follow when result is true".

Note: Change this setting only if there is no dependency between the transformations. Leave it unchanged if you need the previous transformation to execute successfully first!

6. Connections being reset when importing jobs

Issue: The connection information is modified or updated when jobs are imported.

Solution: This is a configuration in the Spoon client which, when checked, updates the connection information on import. It needs to be enabled with care based on your requirement. To locate the configuration, navigate to "Tools > Options" in the Spoon client. The highlighted option needs to be unchecked.


Note: These are some of the things that worked for me with the Spoon client. It may not be the same for everyone, so decide what is best for you when working with it.

Share This:

Hi Everyone,


Welcome to our new CMDB Blog for the month of December 2019. We provided information about Class Manager and Atrium Integrator in the new CMDB UI, in our previous CMDB blog. Here is the link of the previous blog if you have missed it - Helix Support: Using new CMDB UI - Class Manager & Atrium Integrator


In this blog, we will provide detailed information about the new interface for Reconciliation. Below are the topics which we are going to cover:

  • Reconciliation Console Overview
  • Creating a new Reconciliation job
  • Managing Reconciliation Rules



Reconciliation Console Overview


How to access Reconciliation in new CMDB UI ?


Log in to the new CMDB UI; from the Dashboard, click the "Manage Reconciliation" link under the 'Jobs' menu.





Reconciliation Console Layout Details


- The Reconciliation Console shows the number of jobs along with the job run details.

- It displays information on the processed CIs and relationships. It also shows whether the CIs/relationships have any errors after processing or not.

- You can filter the jobs to be displayed by selecting the dataset above and/or the activity type.





Creating a new Reconciliation job


- Click on the ‘Create Job’ button in the Reconciliation console.

- The example given is for a 'Reconciliation: Identification, Merge and Purge' job.




- Enter the RE job name and select the activity to be created.

- The first activity to be created is ‘Identify’.

- Add a schedule for the job if needed.




Enter the Identify activity details as given below and click on save.

- Provide an activity name and sequence number for the Identification activity.

- Enter the source and target dataset for the activity

- Provide a specific qualification if required.




Create the ‘Merge’ activity by clicking on the ‘Add New’ button and enter the Merge activity details as given below.

- Provide an activity name and sequence number for the activity execution in the job.

- Select the source and target dataset for the Merge activity.

- Provide a specific qualification if required.

- Select a 'Dataset Merge Precedence Set'.

- The Merge Order is "By Class In Separate Transactions" by default.

- Choose whether the Merge Activity should do a Full Merge or a Delta Merge.




Click on the ‘Add New’ button again to create the ‘Purge’ activity.


Enter the Purge activity details as given below and click on save.

- Provide an activity name and select a sequence for the Purge activity.

- Select the dataset from which the data should be purged.

- Provide a specific qualification if required.




A new Test job will show as created in the RE job list.




To start the 'Test Job', click on the job entry and click on 'Start Job'.





Managing Reconciliation Rules


From the Dashboard go to Configurations --> ‘Manage Reconciliation Rules’ to access the RE rules like Identification Rules, Qualification Rules and Precedence Rules.




Managing Identification Rules


- To access the Identification Rules, click on Manage Reconciliation Rules > Identification Rules.

- The rules are given class-wise and the class list is given on the left hand side.

- Once any class is selected, one can toggle between the Standard and Custom rules for that class.

- Next to each rule, there is an edit/delete button.





Managing Qualification Rules


- To access the Qualification Rules, click on Manage Reconciliation Rules > Qualification Rules.

- There is a button each for adding a Qualification Ruleset and a Qualification Rule.





Managing Precedence Rules


- To access the Precedence Rules, click on Manage Reconciliation Rules > Precedence Exceptions.

- Select a Dataset Merge Precedence set to view its details.

- On selecting a dataset, the Merge Rulesets for that dataset are shown.

- The tiebreaker value is also shown; it can be changed as per the data update requirement based on precedences.




Thank you for reading this blog

Share This:

Hi Everyone,


Welcome to our new CMDB Blog for the month of November 2019. We covered the new features introduced in CMDB 19.08 in our last CMDB blog. Here is the link to the last blog in case you missed it:


In this blog, we will provide detailed information about new interface for Class manager & additional capabilities added to the new user interface of Atrium Integrator. Below are the topics which we are going to cover:


  • Prerequisites
  • Class Manager Overview
  • Data Imports Console Overview
  • Navigating within Data Imports console
    • Listing, Filtering jobs
    • Executed Jobs
  • Manage Datastore
  • Create a Custom Job & Run it





  • Accessing and navigating the new CMDB UI

  • Configuring the URL to access the new CMDB UI

  • General queries about Configuration Management Dashboard UI



Class Manager Overview


Class Manager has a new user interface that you can access quickly and easily. The Class Manager interface displays all the classes in your data model. You can create or edit a class or relationship.

How to launch the Class Manager console?
Once you log in to the new CMDB console, the CMDB Dashboard is displayed. From this home page, you can open Class Manager using the 'Classes' option under the 'Class Management' menu.




Below is the layout of Class manager:


You can perform the following tasks related to the data model by using Class Manager:


1. Define the properties of the class    


- You can define the type of class, how it stores data, and (for relationship classes) the relationship.

- You can create new class or subclass by clicking on “Create” or “Add Subclass” button.

   “Create” button is located under navigation pane & “Add Subclass” button is available under Information pane.


2. Configure instance auditing for the class


- Auditing enables you to track the changes made to instances of a class. You can select one of these options:-

      1. None - Select this to not perform an audit.
      2. Copy - Select this option to create a copy of each audited instance. Each form related to a class and its super class is duplicated to create audit forms that hold audited instances.
      3. Log - Select this option to create an entry in a log form that stores all attribute values from the audited instance in one field. If you select this option, you must also specify the name of the log form to use.


3. Define one or more CI and relationship class attributes


- You can create new attribute or edit existing one from Class manager.
- To create a new attribute, select the class on which you want to create the attribute & click on the 'Add' button under the 'Attributes' tab:



- To modify an existing attribute, click on the attribute name & then click on the Edit button:



4. Specify permissions

- If you do not specify permissions for a class, BMC CMDB assigns default permissions.


5. Specify indexes


- Indexing classes can reduce database query time. Index attributes that you expect users to query frequently, that are used by discovery applications to identify Configuration Items (CIs), and that are used in reconciliation identification rules.

- Specifying or modifying indexes in a class that already holds a large number of instances can take a significant amount of time and disk space. Avoid creating indexes during normal production hours.

- You can create index from Class manager by clicking on ‘Add’ button in ‘Indexes’ tab :



Once you save it, you can see new index under “Indexes” tab:


6. Propagate attributes in a weak relationship


- This step is necessary only if you have created a relationship class that has a weak relationship in which the attributes from one class should be propagated to another class.


For more detailed information, you can go through below document:


Data Imports Console Overview


The Data Imports console is the home for creating and managing Atrium Integrator (AI) jobs.  For creating complex Atrium Integrator jobs that use difficult logic and multiple data-manipulation steps, though, Pentaho Spoon is still the tool of choice.


How to launch Data Imports console?

The CMDB Dashboard is displayed after logging in to the new CMDB UI. From this home page, you can load the 'Data Imports' console by clicking the 'Manage Atrium Integrator' link under the 'Jobs' menu.



Upon clicking ‘Manage Atrium Integrator’ option, the ‘Data Imports’ console is loaded:




Navigating within Data Imports console


  • Listing jobs, Filtering jobs

          The console lists all the jobs under the tab 'Total' on its home page.


Pagination - The job list is paginated, so a small number of jobs is shown per page.

Filtering a job - Jobs can be searched using a filter. For example, jobs whose names start with, or simply contain, the string 'SRM' can be found by typing SRM (without quotes) in the Filter search box.


        NOTE: Wildcard symbols like '*' or '?' cannot be used when searching for jobs.


  • Executed Jobs

The jobs that have finished execution (either successfully or with failures) are listed under the 'Executed Jobs' tab on the Data Imports console.

As the name suggests, this tab shows the list of executed jobs, their status, and CI record-management information, including errors if any.


If you want to see the Job Run details, please click on the down arrow just beside the 'Run History' column of that job.


The 'Run History' column shows the total number of times the job has run.  The count is a link that can be used to drill down to the details of the job's run history.


The job run history can be further filtered to show the latest runs ('Today'), recent runs ('Last 7 days'), or the monthly view.


From the screenshot, you will notice that there are job statuses of both 'Successful' and 'Failed'.  You can use the Status drop-down to show only the 'Failed' jobs, and then use the drop-down inside the job run details to see which transformation within the job failed.




Manage Datastore


Datastores are logical connections to a physical container of data, such as a CSV file, an XML file, a database, or the AR server itself.

The target datastore is always an AR server connection (datastore), as that is where we want to push the data: into the CMDB class forms within the Remedy AR server. The source datastore can be one of various connection types, including CSV, XML, or a database.


To create or manage a datastore, please click on the 'Manage Datastore' button on the 'Data Imports' console.



Please see below the different datastores that you can create :


Creating a Datastore




Source Datastore using Database as type and MS SQL as a vendor database




Source Datastore using Database as type and Oracle as a vendor database




Source Datastore using CSV as type




Source Datastore using XML as type




Source Datastore using AR server as type




NOTE: For CSV and XML file connections, please create the store locally on the AR server where the AI job will run, to ensure fast performance.



Create a custom job & Run it


The prerequisite for creating a custom job in the 'Data Imports' console is to have the source and target datastores created, as described in the previous step of this document.

Once the datastores are created, use the 'Create Job' command on the console, as seen below.




Fill out the new job details and map the fields between the source and target datasets.

In the example below, we have used an external database as the data source.




Do the Field mappings, as seen in the below image




Save the Job




Click on job name then 'Start Job' button to run it



Thank you for reading this blog.

Share This:

Hello Everyone,


I wanted to let you know that at the end of last week we released BMC CMDB 19.08, along with ITSM, AR System, and other products.


This release contains several items of interest and I wanted to quickly highlight them for you:

  1. Class Manager in the Dashboard-driven UI: you can now manipulate the Common Data Model using our updated UI in a new, streamlined, and simplified Class Manager. We've stripped the Class Manager back to what is actually needed rather than cluttering it with superfluous information, and also enhanced it so it's easier to get things done!
  2. Atrium Integrator has also received attention in this release: you can now create end-to-end transformations for AI jobs directly in the new UI, allowing for an easier, more streamlined experience. We aren't totally done here yet and there are some caveats; for example, we do not yet support the concept of a separate file/source for the relationships between CIs via this method, so you need to supply the parent CI in the child record.
  3. In the CMDB Explorer we have added the ability to run impact simulations, removing the need for a separate Impact Simulator UI; it's now all just part of CMDB Explorer. You will also notice this has been embedded into the ITSM Mid Tier UI as part of our Flex deprecation efforts.
  4. Federated Actions (aka Launch in Context Federation) are now present and exposed in the CMDB Explorer with their own configuration interface. We've also pre-configured them for BMC Discovery so you can start getting value quickly from this powerful functionality.
  5. Changes to the CDM are also present in this release. With the technology world moving apace, we have updated the CDM again with the following items:
    1. Introduced new attributes to the BMC_Tag class to facilitate the population of cloud tag information by BMC Discovery
    2. Introduced a new 'informational' relationship class, BMC_RelatedTo, that allows you to relate anything to anything. It has no endpoint rules and does not carry impact or dependency information, but it lets you, for instance, relate tags to any other CI. The intent here is to relate 'informational' CIs across the landscape
    3. We've relocated the 'isVirtual?' attribute from the BMC_System hierarchy into BaseElement. This is not a totally trivial exercise, and we are not forcing you to adopt the move as part of an upgrade; you can migrate the data at your own pace. We will shortly publish an Atrium Integrator job via the Communities to assist with this. Anything using the API will now write to the relocated attribute, while your existing data is left where it is.


This post highlights some of the major updates we've made. There are other defect fixes and refinements throughout the tool, but these are the larger, more visible items; for the details of everything we've been up to, please check the Release Notes for 19.08. I'll leave you with some teaser screenshots of the Class Manager and updated CMDB Explorer for now....


BMC CMDB Class Manager:


BMC CMDB Explorer with Impact Simulation and Federated Actions:


I look forward to seeing as many of you as possible at the BMC Helix Immersion Days in Santa Clara in September! We'll be running labs and sessions there, including the CMDB roadmap moving forward, the Configuration Manager Dashboard UI, and future CMDB concepts and functionality, and I'll be at Evening with Engineering too! See you there....




From Monday the 29th of July 2019, BMC will be making a configuration change which will disable the 'What's New' popup window in the Configuration Manager Dashboard User Interface.


Why are we making this change?

We have had a number of reports from customers that the pop-up window in the Configuration Manager Dashboard UI is causing blocking behavior in certain circumstances. In order to enable the maximum value to be realized by as many customers as possible, we are going to disable this blocking behavior.


What does this mean?

When users log in to the BMC CMDB Configuration Manager Dashboard UI, they will NOT receive a 'What's New' popup window highlighting what has changed in the release. Please use the 'Walkthroughs' slide-out to access in-context documentation highlighting 'What's New' for the CMDB.


Will this functionality be reinstated?

We will be working to add a configuration parameter in an upcoming release that will allow organizations to enable or disable this popup. Once this is available, we will enable the popup again for those organizations that have the configuration set appropriately.


Still want to clarify something? Comment below


Have you signed up to come and join us at BMC Helix Immersion Days in Santa Clara, September 16th? There will be PMs, engineers, events, and sessions around BMC CMDB and other tools in the Helix suite - lots to learn, lots to 'try', and sessions to participate in, from labs to User Experience research... If you haven't signed up already, please do at:


BMC Helix Immersion Days | Silicon Valley | September 16 - 18, 2019


See you there!


Stephen Earl

Principal Product Manager, BMC CMDB


BMC CMDB 19.02 - GA

Posted by Stephen Earl Employee Mar 1, 2019

Hi Everybody


I wanted to quickly post that BMC CMDB 19.02 is now GA, and can be downloaded from the EPD site.


New in this release:


CMDB Explorer

Edit and create CIs from the new Dashboard User Experience. Be aware that editing CI information utilises mid-tier CI forms.


[Screenshot: CMDB Explorer complex model]


CMDB Archiving

You can now move data from your active datasets into a CMDB archive. This allows you to preserve data for compliance and other record-keeping purposes.

Archive CIs and their related CIs, define retention periods, select your CIs by query, and define where the archive data will be stored.




Location attributes on relationships

Shift your data paradigm: you can now specify detailed location information for CIs on relationships, allowing your physical locations to be less granular (e.g. buildings or a campus) while the relationships hold the specific room, rack, or shelf locations.


Service Cost Modelling attributes

You can now provide cost apportionment (expressed as a percentage) and cost-flow attributes between CIs on relationships, allowing the CMDB to provide improved raw data for your service models and active data to support service costing efforts.



How will these features and capabilities help your business to succeed with CMDB? Please comment below.

Looking ahead to our upcoming releases, you can expect us to continue our User Experience evolution and much more!




I'm looking for customers who are open to discussing how the Internet of Things is affecting your organization, how you are implementing IoT infrastructure, and your approach to managing these devices within your infrastructure.


I would like to have a 30-minute discussion with those who are interested, are implementing IoT in their infrastructure, and are providing service to their customers.


Please reach out to me via email or post a reply in this thread


BMC CMDB 18.05

Posted by Stephen Earl Employee Jun 21, 2018

Well, another release of BMC CMDB is in the wild, and I wanted to quickly cover what we’ve been up to in 18.05.00 (find out more about the rest of the release here and from the Connect with Remedy webinar we held on Wednesday 20th June).


CMDB Search


In 18.05 we have introduced a new searching capability for the CMDB, which provides greater flexibility for ‘power users’ and also easier searching for those who do not understand, or need to understand, how the CMDB is organised and structured, to find what they are looking for.


We have CMDB Quick Search for quickly finding CIs that don’t require complex qualifications or relationship traversal. All you need to define is the Dataset you want to search in (we default to BMC.ASSET), the Class you want to search (again, we provide the most frequently used, BMC_ComputerSystem, as the default), and the Attribute you want to search, all using typeahead menus etc. to ease the experience.




But we haven’t forgotten ‘power searchers’ in the new experience: here we have implemented some prebuilt complex searches that traverse relationships - the most commonly asked for - and also provided a custom option so you can build your own powerful searches in the UI.


You can then select one or several search results and view them in our CMDB Explorer, or export the results to a .csv file; you are also able to view CI details inline within the search results drawer.




CMDB Explorer


In this release we also introduce our all-new CMDB Explorer, which relies on the new searching capabilities at its root but then allows you to expand your understanding through visualization of the results and the ability to filter and add additional search results to the new canvas.


The CMDB Explorer Canvas is where all the action takes place. Along the left we have the toolbox, which allows you to alter what is displayed and how it’s displayed, and along the top is our CMDB Search ‘widget’ so you can search for additional content and bring it into your canvas to view.


We have implemented filtering capabilities, alternate layouts, hover-over information when you are on a CI or relationship object, and an ‘inline’ CI Details view so you can see the details of a CI right there in the explorer window without having to jump to another screen. For those of you who are BMC Discovery customers, this should appear pretty familiar, as we were inspired by the work done in the Modelling interface within the BMC Discovery product.




Our efforts and focus now shift, over coming releases, to completing our User Experience story, amongst other things. We are now working to bring you edit capabilities in Explorer, modelling toolsets for services and other collections of CIs, sandbox capabilities, and updates to the other tools that remain in the CMDB toolset, such as Class Manager, Impact Designer, and Impact Simulator - these utilities and tools will be updated to the new UX paradigm over coming releases, along with existing elements.


BMC ‘Atrium’ CMDB


You will also note that, as we started in 9.1.04, our product is being rebranded to BMC CMDB; this release introduces that change in the product. You will also notice that in BMC Communities the BMC CMDB community is now located within the BMC Remedy ITSM suite hierarchy. We’ve taken this step to make it easier to find and more logically aligned with the primary use of the CMDB. This change is a logical move and does not reflect anything functionally changing in CMDB moving forward - CMDB remains, and will continue to be delivered with, other BMC products, such as BMC Discovery.


We would love to hear your feedback on our new capabilities in CMDB in this 18.05 release, please make comments below.


Where is the new CMDB UI hosted?

The new CMDB UI is NOT hosted in the mid-tier.

It is hosted on the Jetty server embedded in the AR server.


Can I access the new CMDB UI from mid-tier?

Yes. From the home page fly-out menu, go to Atrium Core and click Configuration Manager Dashboard.


If I want to access the new CMDB UI without the mid-tier, can I do that?

Yes, below are typical URLs for accessing the new CMDB UI:

http://<ARServer>:<Jetty Port>/cmdb/index.html or https://<ARServer>:<Jetty Port>/cmdb/index.html

http://localhost:8008/cmdb/index.html or https://localhost:8008/cmdb/index.html


Are there any explicit configurations required for accessing new CMDB UI from Mid-Tier?

If there are no load balancers configured on the AR Server, CMDB installs the configuration out-of-the-box and no explicit action is required, including in server group environments.

You need to modify this configuration only if you change the Jetty port, the HTTPS configuration, the server name, or the load balancer configuration.



Where can I find the configurations for accessing new CMDB UI from Mid-Tier?

Home Page Flyout menu – AR System Administration Console – System – General – Centralized Configuration – com.bmc.arsys.server.shared – shared

Make sure that the Jetty Port is correct. The out-of-the-box value is 8008. If you have modified the Jetty port, update the Jetty Port parameter here accordingly.

The Redirect-URL parameter will typically have the following value:

cmdb.<ARServer>:http://<ARServer>:<Jetty Port>/cmdb/index.html;


How to configure access to the new CMDB UI from Mid-Tier in a server group environment?

If there are 2 servers in the server group, ARServer1 and ARServer2, then the Redirect-URL parameter will typically have the following value:

cmdb.<ARServer1>:http://<ARServer1>:<Jetty Port>/cmdb/index.html;<ARServer2>:http://<ARServer2>:<Jetty Port>/cmdb/index.html;

Again, when you install a server group, these values are populated by the CMDB installer. You need to modify these parameters only if you change the Jetty port, the HTTPS configuration, the server name, or the load balancer configuration.
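To illustrate the format, here is a small Python sketch that assembles a server-group Redirect-URL value matching the pattern shown above. The server names and port are placeholders, not taken from any real environment:

```python
def build_redirect_url(servers, jetty_port=8008, scheme="http"):
    """Assemble a Redirect-URL value for a server group.

    Follows the pattern shown above: the value starts with the
    'cmdb.' prefix, and each entry maps a server name to its
    CMDB UI URL, terminated by a semicolon.
    """
    parts = []
    for i, server in enumerate(servers):
        # Only the first entry carries the 'cmdb.' prefix, as in the example.
        key = f"cmdb.{server}" if i == 0 else server
        parts.append(f"{key}:{scheme}://{server}:{jetty_port}/cmdb/index.html;")
    return "".join(parts)

print(build_redirect_url(["ARServer1", "ARServer2"]))
# cmdb.ARServer1:http://ARServer1:8008/cmdb/index.html;ARServer2:http://ARServer2:8008/cmdb/index.html;
```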


How to configure access to the new CMDB UI from Mid-Tier in a server group load balancer environment?

If there are 2 servers in the server group, ARServer1 and ARServer2, with the AR load balancer as remedy.bmc, the Redirect-URL parameter will need to change as follows:



How does mid-tier access the new CMDB UI?

Mid-Tier looks at the Redirect-URL parameter values. It looks for the same server from which the context URL is launched:

http://<MidTier Host>:<Mid Tier Web Server Port>/arsys/forms/<ARServer>AR+System+Customizable+Home+Page/Default+Administrator+View

Make sure the ARServer value here matches the one in the Redirect-URL parameter, i.e.:

cmdb.<ARServer>:http://<ARServer>:<Jetty Port>/cmdb/index.html;
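As a rough sketch of that lookup (a hypothetical helper, not the actual Mid-Tier code), you can parse the Redirect-URL value into a server-to-URL map and check that the ARServer name from the context URL resolves:

```python
def parse_redirect_url(value):
    """Parse a Redirect-URL value (format shown above) into {server: url}."""
    entries = {}
    for part in value.strip(";").split(";"):
        key, _, url = part.partition(":")
        if key.startswith("cmdb."):  # only the first entry carries the prefix
            key = key[len("cmdb."):]
        entries[key] = url
    return entries

redirect = "cmdb.ARServer1:http://ARServer1:8008/cmdb/index.html;"
servers = parse_redirect_url(redirect)
# The lookup succeeds only if the server from the context URL is a key:
print("ARServer1" in servers)  # True
```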


Are there any changes in Mid-Tier to access the new CMDB UI?

There is a new servlet, ExternalLaunchPointServlet, released in SP4, so the mid-tier version must be SP4.

This servlet and its mapping are configured under the mid-tier's web.xml; please check that the entry for it is present in the mid-tier's deployment descriptor, i.e. the midtier/WEB-INF/web.xml file.


How do we know if the mid-tier servlet is getting called while accessing the new CMDB UI?

You can check whether the servlet is being hit by setting the mid-tier's log level to Fine.

Once you set the log level, hit the servlet and check the mid-tier logs; you should see "Entering ExternalLaunchPointServlet" printed in the logs.
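For example, a quick check over a saved log file could look like this. The log-line format below is illustrative; only the "Entering ExternalLaunchPointServlet" marker comes from the text above:

```python
def servlet_was_hit(log_lines):
    """Return True if any mid-tier log line shows the servlet being entered."""
    return any("Entering ExternalLaunchPointServlet" in line for line in log_lines)

sample = [
    "FINE  [2019-08-01 10:00:01] Entering ExternalLaunchPointServlet",  # illustrative line
    "FINE  [2019-08-01 10:00:02] Redirecting to the CMDB UI URL",
]
print(servlet_was_hit(sample))  # True
```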


What if the customer has customized web.xml?

If web.xml is customized, then after an upgrade you should add the ExternalLaunchPointServlet servlet definition and its servlet mapping:




What if the customer does not use the /arsys context for Mid-Tier?

In this scenario, note that the active link CMDB:LaunchCMDBConsole has a hard-coded URL that includes /arsys (PERFORM-ACTION-OPEN-URL /arsys/launch/cmdb). If you don't use the default arsys context for your mid-tier, you need to customize this active link accordingly.


How does new CMDB UI handle requests?

With RSSO configured:

  • Login credentials are accepted by RSSO and sent to AR server
  • AR server sends back the token to RSSO
  • RSSO stores the token ID in browser cache
  • CMDB New UI picks up the token ID and attaches it to each request sent to the AR server.

Without RSSO:

  • Login credentials are accepted by CMDB new UI and sent to AR Server
  • AR server sends back the token to CMDB new UI
  • CMDB New UI stores the token in browser cache
  • CMDB New UI sends this token with each request sent to the AR server.
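The non-RSSO flow above can be sketched in Python as follows. This is an illustration, not production code: the /api/jwt/login endpoint and the AR-JWT authorization scheme are standard AR System REST API behaviour, but the server host and port are placeholders.

```python
import urllib.parse
import urllib.request

AR_SERVER = "http://arserver.example.com:8008"  # placeholder host and Jetty port

def login(username, password):
    """Steps 1-2: credentials are POSTed to /api/jwt/login; the AR server
    returns a JWT token in the response body (the UI caches this token)."""
    data = urllib.parse.urlencode({"username": username, "password": password}).encode()
    req = urllib.request.Request(f"{AR_SERVER}/api/jwt/login", data=data)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def auth_header(token):
    """Steps 3-4: the cached token is attached to every subsequent request."""
    return {"Authorization": f"AR-JWT {token}"}

print(auth_header("eyJhb..."))  # {'Authorization': 'AR-JWT eyJhb...'}
```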


The customer's REST APIs stopped working after SP4 in an RSSO environment. What do I do?

When you enable RSSO for AR, the REST API (which runs on the Jetty server embedded in AR) is also protected by RSSO, so REST calls fail because they are redirected to RSSO.

In this case, the excluded-URL pattern in the server's RSSO configuration file should contain /api/jwt/login* so that existing REST APIs are not redirected to RSSO.


excluded-url-pattern=.*\\.xml|.*\\.gif|.*\\.css|.*\\.ico|/shared/config/.*|/WSDL/.*|/shared/error.jsp|/shared/timer/.*|/shared/login_commn.jsp|/shared/view_form.jsp|/shared/ar_url_encoder.jsp|/ThirdPartyJars/.*|/shared/logout.jsp|/shared/doc/.*|/shared/images/.*|/shared/login.jsp|/services/.*|/shared/file_not_found.jsp|/plugins/.*|/shared/wait.jsp|/servlet/GoatConfigServlet|/servlet/ConfigServlet|/shared/HTTPPost.class|/shared/FileUpload.jar|/BackChannel.*|/servlet/LicenseReleaseServlet.* |/api/jwt/login*
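To sanity-check that the new entry actually matches login requests, here is a Python sketch using a trimmed-down excerpt of the pattern above. Note that the doubled backslashes in the properties file collapse to single ones when the value is read, and only a few of the entries are kept here:

```python
import re

# Trimmed excerpt of the excluded-url-pattern above, plus the REST login entry.
pattern = (
    r".*\.xml|.*\.gif|.*\.css|.*\.ico|/shared/config/.*"
    r"|/api/jwt/login.*"
)

def is_excluded(path):
    """True if the path would bypass the RSSO redirect (full match, as in Java)."""
    return re.fullmatch(pattern, path) is not None

print(is_excluded("/api/jwt/login"))    # True  -> REST logins skip RSSO
print(is_excluded("/cmdb/index.html"))  # False -> UI requests still go via RSSO
```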


Any other useful links? See the Docs pages.






This year I'm looking forward to meeting as many of you as possible at the T3:SMAC conference being organised and run by tooltechtrain, sponsored by BMC Software.


I have a session to walk you through our new CMDB User Experience, releasing soon, and I will also be available for meetings and hallway conversations, of course - see you there!


If you want a new version of our updated CDM Diagram poster make sure you come and find me at the BMC area to grab one - let me know you are attending the conference and when we meet you can get your shiny new poster!


Visit the website and REGISTER for your place at the conference. Don't forget to let me know you are coming, and come and collect your poster...
