Hi Everyone,

 

In this month's blog, we are going to share troubleshooting steps for the most common error message, ARERR 623 Authentication Failed, seen after authenticating through Remedy Single Sign-On (RSSO). This error means the user is authenticated successfully by Remedy SSO but fails authorization on the AR Server.

 

This error can occur for several reasons. Please go through the steps below to resolve a 623 Authentication Failed error:

 


 

1. Check AR Integration

 

a. Make sure the following AREA settings (<AR>/Conf/ar.cfg) are configured on the AR Server

    (Can be set from the Server Information form > EA tab):

 

External-Authentication-RPC-Socket: 390695

Authentication-Chaining-Mode: 1

Crossref-Blank-Password: T

 

b. Make sure the rsso.cfg file exists in <AR>/Conf and that the URL below points to the correct Remedy SSO service URL:

 

SSO-SERVICE-URL: <rsso_service_url>

 

c. Make sure the following files are present in <AR>/pluginsvr:

 

rsso-area-plugin-all.jar

gson-2.3.1.jar

slf4j-api-1.7.25.jar (For RSSO 18.11 or later versions)

 

d. Check the following entries in <AR>/pluginsvr/pluginsvr_config.xml:

 

<plugin>

           <name>ARSYS.AREA.RSSO</name>

           <classname>com.bmc.rsso.plugin.area.RSSOPlugin</classname>

            <pathelement type="location"><AR>/pluginsvr/rsso-area-plugin-all.jar</pathelement>

            <pathelement type="location"><AR>/pluginsvr/gson-2.3.1.jar</pathelement>

            <pathelement type="location"><AR>/pluginsvr/slf4j-api-1.7.25.jar</pathelement>  <!-- For RSSO 18.11 or later versions -->

            <userDefined>

                 <configFile>{AR}/Conf/rsso.cfg</configFile>

            </userDefined>

</plugin>

 

2. Operating-Mode parameter in ar.cfg

 

- If you are getting the 623 error after an AR Server upgrade, it might be due to the Operating-Mode parameter in ar.cfg under <AR>/conf.

- Make sure that Operating-Mode in ar.cfg is set to 0. If it is set to 1, you will see this issue.

- Restart the AR Server after changing the parameter.

- To learn more about the Operating-Mode parameter, see the AR blog below:

   Operating Mode

 

3. Server Plugin Alias entry for AREA plugin

 

- The RSSO plugin uses the AR Server's AREA plugin for authentication. If the following line is missing from ar.cfg, you will encounter a 623 error:

- Server-Plugin-Alias: AREA AREA <servername>:9999

- If it is missing, add the above entry and restart the AR service.

 

4. Check Certificates (If using HTTPS for RSSO)

 

- If you are using the HTTPS protocol for the RSSO service URL in rsso.cfg, you might see a 623 error because of TLS handshake issues between the AR and RSSO servers.

- To avoid certificate-related errors, import the RSSO root certificate into the Java cacerts keystore on the AR Server (see the keytool sketch below).

- To confirm whether the issue is caused by a certificate, you can temporarily disable SSL/TLS checks for HTTPS communication on the agent side

  (use this only to confirm whether the issue is certificate-related).

- To disable the SSL/TLS check, set the following parameter to true in the rsso.cfg file in <AR>/conf:

    com.bmc.rsso.tls.disable.checks: true

- This parameter is available only in RSSO 19.05 and later versions.
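
For reference, here is a minimal keytool sketch for the certificate import. The alias, certificate file name, and Java paths are placeholders for your environment ('changeit' is only the default Java keystore password; yours may differ):

    <java_home>/bin/keytool -importcert -alias rsso-root -file rsso-root.cer \
        -keystore <java_home>/lib/security/cacerts -storepass changeit

Restart the AR Server service afterwards so the plugin server picks up the updated keystore.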

 

5. Midtier Service Password

 

- You might see a 623 error for the Midtier service account after login, e.g. ERROR (623): Authentication failed; MidTier Service

- This means the password stored for the AR Server in the Midtier Config Tool is incorrect.

- You can update the password from the Midtier Config Tool → AR Server Settings.

- Select the AR Server and click the Edit button.

- Make sure you check the Validate Password option, as it will report an error if the password is incorrect.

    MidtierServPwd.png

6. Check Username

 

- Sometimes this error is seen because the username received from the IdP (which could be LDAP, SAML, Okta, etc.) doesn't match the one in the AR Server's User form.

- You can validate this by going to the RSSO Admin Console and checking sessions; it shows the username received from LDAP or another authentication provider.

- If that does not match the username on the User form, you will need to apply a transformation in the RSSO Admin Console.

  e.g. If LDAP sends the username as "user@bmc.com" but the User form has "user", use the "Remove Email domain" transformation in the RSSO console.

 

7. AR Java plugin related issues

 

- The RSSO plugin is part of the AR Java plugin server. You might see a 623 error if the AR Java plugin is not initialized or not working.

- You can run the below command on the command line (a concrete sketch follows at the end of this section):

    netstat -an | findstr "ar_plugin_port"

     Processes.png

- You can also check in Task Manager whether the AR Java plugin process is running.

         Java.png

- You will need to add the "Command line" column in Task Manager to see the complete Java path.

- If you do not see a Java process with "pluginsvr" in its command line, the plugin is not initialized.

- You can check arjavaplugin.log under ARSystem/db for errors related to RSSO.

- If you don't see details of the login failure in the log, enable debug-level logging for the AR Java plugin and restart the plugin.

- You will need to log in again to capture details on why authentication is failing with a 623 error on the AR Server.

- Here is the knowledge article for enabling debug-level logging for the AR Java plugin:

            Remedy - Server - v.9.x How to enable DEBUG Java Plugin Server Logging in AR System in the arjavaplugin.log file
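
As a concrete sketch of the port check above, assuming the AREA plugin port is 9999 as in the Server-Plugin-Alias example from step 3:

    netstat -an | findstr "9999"     (Windows)

    netstat -an | grep 9999          (Linux/UNIX)

If the Java plugin server is up, you should see the port in a LISTENING/LISTEN state; if not, the plugin server has not initialized.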

 

These troubleshooting steps should help you resolve a 623 error. Thank you for reading this blog!


This month's blog is brief, but it covers an important topic: integrating the Jetty server with Remedy SSO. We have recently seen an increase in the number of support requests on this integration. Below are the main topics of this blog:

 

[A] Prior to Integrating Jetty server with Remedy SSO

[B] How to integrate Remedy SSO with Jetty server? What components are involved (Prerequisites)?

[C] Steps to manually integrate Jetty server with Remedy SSO

    [C.1] Running RSSO Installer

    [C.2] Copying 'rsso-agent.properties' file and making configuration changes in it

    [C.3] Configuring AGENT-ID

    [C.4] Configuring SSO Redirections

    [C.5] WORKAROUND - Configuring memory-cache param

    [C.6] Configuring RSSO AREA Plugin on AR Server

    [C.7] Copy RSSO Agent file under DEPLOY folder

    [C.8] Restart AR server service

 

 

There is documentation in place on this topic (link below). This blog complements the doc by adding some screenshots and a crucial workaround, configuring the 'use-in-memory-cache' parameter, to make this integration work.

 

Documentation link:  Manually integrating Remedy Single Sign-On with Jetty server - Documentation for Remedy Action Request System 20.02 - BM…

 

After this integration, the user will no longer be prompted to log in to the Configuration Management Dashboard, which is currently the case even when AR and MidTier are integrated with Remedy SSO and the user supposedly has an active session.

 

[A] Prior to Integrating Jetty server with Remedy SSO

 

When you log in to Remedy MidTier and access the 'Configuration Manager Dashboard', you are prompted with a login page like the one below:

 

login for CMDB UI.JPG

 

[B] How to integrate Remedy SSO with Jetty server?  What components are involved (Prerequisites)?

 

Components:

 

 

Installer:

 

The BMC Remedy SSO installer, when run on the AR server with the option 'Integrate with AR server' selected, installs and configures all the files necessary for the AR-RSSO integration. The installer is also supposed to perform multiple checks, such as the availability of the Jetty server, Innovation Studio, etc., and accordingly install additional files to integrate those components. However, on many occasions the files pertaining to the Jetty server integration are not copied. Hence, manual integration comes into play. Fortunately, there are not too many files to copy, just a couple of them. As a source for copying the files, you need the Remedy SSO installer binaries on every AR server you plan to integrate.

 

[C] Steps to manually integrate Jetty server with Remedy SSO

 

[C.1]  Running RSSO Installer

 

After the RSSO installer has run successfully on the AR server with the option 'Integrate with AR server', ensure the MidTier is also integrated with RSSO and users can authenticate via RSSO. That is a prerequisite: if users can't log in through the AR-MidTier-RSSO integration, authentication to the Configuration Manager Dashboard won't work either. Please refer to the following step-by-step guidelines to verify and correct the AR-MidTier-RSSO integration. The URL below also contains information for components other than AR and MidTier; focus only on the ones you need.

 

https://docs.bmc.com/docs/rsso/2002/manually-integrating-remedy-sso-with-bmc-applications-908954457.html

 

If user authentication works after performing the above checks, please proceed to the next step.

 

[C.2] Copying 'rsso-agent.properties' file and making configuration changes in it

 

Stop the AR server service. Copy the 'rsso-agent.properties' file from <RSSO installer folder>/BMCRemedySSO/Disk1/files/rsso-agent/ to the <ARSystem>\conf directory and make the following changes.

 

[C.3]  Configuring AGENT-ID 

 

NOTE: <Agent-id> - Do not keep the name of the AR Jetty RSSO agent-id the same as the MidTier RSSO agent ID; otherwise, upon logging out from the Configuration Manager Dashboard, the user session will be removed from MidTier too. Details of the change are below.

 

agent-id=ARJetty_agent

# Application URL to trigger the RSSO logout process.

logout-urls=/api/rsso-logout

 

Agent ID.JPG

 

[C.4] Configuring SSO Redirections

 

# To support multiple RSSO servers, set the value to a comma separated string: each represents a 'domain to server url' mapping, with the format of <domain>:<url>, e.g. domain1:https://server1:8443/rsso,domain2:https://server2:8443/rsso

 

sso-external-url=http://remedysso.domain.com:8080/rsso

 

# RSSO webapp internal url for service calls.

# To support multiple RSSO servers, set the value to a comma separated string, each represents a 'domain to server url' mapping, with the format of <domain>:<url>, e.g. domain1:http://server1:8080/rsso,domain2:http://server2:8080/rsso

 

sso-service-url=http://remedysso.domain.com:8080/rsso

 

 

SSO redirections1.JPG

[C.5]  WORKAROUND - Configuring memory-cache param

 

  • Configure the following parameter in 'rsso-agent.properties'; otherwise, the redirection to Remedy SSO does not occur. Explicitly setting this parameter to false is only a workaround until the defect is fixed. The parameter lets the RSSO agent choose between the HTTP session and an in-memory cache when performing SSO token validation. When set to true, it can save a trip to the RSSO server by validating the token against the in-memory cache.

 

               use-in-memory-cache=false

 

[C.6]  Configuring RSSO AREA Plugin on AR Server

 

The following configuration should already be in place in the <ARSystem>\pluginsvr\pluginsvr_config.xml file:

 

pluginsvr_config.JPG

 

 

[C.7]  Copy RSSO Agent file under DEPLOY folder

 

Copy the file 'rsso-agent-osgi.jar' from <RSSODistr>/BMCRemedySSO/Disk1/files/rsso-agent/   to  <AR_SERVER_HOME>/deploy

 

[C.8]  Restart AR server service

 

Upon a successful restart of the AR server service, clear the browser cache and log in to MidTier. Launch the 'Configuration Manager Dashboard' available under the 'Atrium Core' sub-menu. The user should be authenticated to the new CMDB UI without being asked for credentials.

 


Welcome to the new blog on BMC CMDB. In our last blog, we focused on the CMDB REST API, specifically on 'instance' CRUD operations. Below is the link to that blog:

 

https://communities.bmc.com/community/bmcdn/bmc_it_service_support/cmdb/blog/2020/05/29/helix-support-introduction-to-cmdb-rest-api-services-and-comparison-with-cmdbdriver-command-line-tool

 

This blog is a continuation of the previous one, with a focus on 'attribute' CRUD operations. The topics covered in this blog are as follows:

 

[a] Get Attribute information from a class using GET method

[b] Create a single custom attribute in a class using POST method

[c] Create multiple custom attributes in a class using POST method

[d] Modify a single attribute in a class using PATCH method

[e] Delete a single attribute in a class using DELETE method

 

To begin with, I've used the POSTMAN tool to perform the above-mentioned operations. POSTMAN is easily available for download on the web. Just a recap: to access the CMDB REST API services, we must first generate an authorization token by logging in to the AR server. It's simple, and it is covered in my previous blog. To generate a token, launch POSTMAN and click the NEW button to open a new Request page. Use the login URL with the POST method and fill out the AR login credentials as shown below. Once authentication succeeds, you will see the authorization token.

 

Authorization.PNG

 

Authorization Token as Header: As long as it is valid, this token can be reused across several REST API calls. Below is a screenshot showing how to populate the token under the Headers section when using any desired REST API service. In all of the examples in this blog, refer to the screenshot below to fill out the headers of the requests.

 

Using the Auth code.PNG

 

If you see an error message like HTTP 400 or Authentication Failed, this means the authorization token is invalid, and you may have to regenerate it.

 

Having done that, we can now proceed with consuming the REST API for CMDB class Attribute CRUD operations.
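
If you prefer the command line, here is a rough curl equivalent of the token generation step. The host, port, and credentials are placeholders; the endpoint shown is the standard AR JWT login endpoint, and later calls pass the token using the AR-JWT authorization scheme:

    TOKEN=$(curl -s "http://testserver:8008/api/jwt/login" \
        --data-urlencode "username=Demo" \
        --data-urlencode "password=password")

    # Reuse it on subsequent requests with: -H "Authorization: AR-JWT $TOKEN"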

 

[a] Get information about CMDB class Attributes using GET method:

 

Method: GET

 

Request URL: http://testserver:8008/cmdb/v1.0/attributes/{namespace}/{className}

 

[STEP 1] Create a new request in the POSTMAN tool. Select the method 'GET'. Type the URL in the following format, considering you are pulling attribute information from the BMC_ComputerSystem class:

 

Example: http://testserver:8008/cmdb/v1.0/attributes/BMC.CORE/BMC_ComputerSystem

 

[STEP 2] Go to the Headers section and fill out the Authorization parameters as shown in the earlier picture 'Authorization Token as Header'.

 

[STEP 3] Switch to the PARAMS tab and fill out the parameters as seen in the screenshot. In the example below, I'm pulling information on two attributes of the BMC_ComputerSystem class, 'HostName' and 'PrimaryCapability'. As you add those parameters, you will notice the request URL being appended with query string parameters.

 

Click the SEND button when all the parameters are filled in. The GET method will retrieve the specified attribute information.

 

 

1_GET_Attribute.PNG
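
For reference, a hedged curl equivalent of this GET call, reusing the $TOKEN variable from the earlier sketch (host and class are the same examples as above):

    curl -s "http://testserver:8008/cmdb/v1.0/attributes/BMC.CORE/BMC_ComputerSystem" \
        -H "Authorization: AR-JWT $TOKEN"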

 

 

[b] Create a single custom attribute in a class using POST method

 

METHOD:   POST

 

REQUEST URL: http://testserver:8008/api/cmdb/v1.0/attributes/{namespace}/{className}/{attribute}

 

[STEP 1]  Create a new request in POSTMAN.  Select the method as 'POST'.   Type your REST API server URL in the following example format considering you are creating an Attribute named 'test31' in BMC_BaseElement class.

 

EXAMPLE: http://testserver:8008/api/cmdb/v1.0/attributes/BMC.CORE/BMC_BaseElement/test31

 

[STEP 2] Go to the Headers section and fill out the Authorization parameters as shown in the picture 'Authorization Token as Header' (copied at the beginning of this blog).

 

[STEP 3] Switch to the BODY tab, select the 'raw' option, and change the format (available as a drop-down) from Text to JSON. Copy the JSON content below into the BODY tab, and then click the SEND button.

 

{
    "datatype": "CHAR",
    "name": "test31",
    "type": "REGULAR",
    "class_id": "BMC_ASSETBASE",
    "class_name_key": {"namespace": "BMC.CORE", "name": "BMC_BaseElement"},
    "characteristics": {"NAMESPACE": "BMC.CORE", "DESCRIPTION": "Test 24", "AUDIT": "NONE", "HIDDEN": false, "CREATE_MODE": "PROTECTED"},
    "entry_mode": "OPTIONAL",
    "field_id": "234328",
    "limit": {"char_limit": {"max_length": 255, "pattern": "", "char_menu": "", "menu_style": "APPEND", "qbe_match": "LEADING"}}
}

 

2_Create_Single_Custom_Attribute.PNG

 

[STEP 4] Unless you see an error, you should see an HTTP 201 status in the response section. You can then open Class Manager to confirm the new attribute has been added to the BMC_BaseElement class.
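
The same create call can be made from the command line. This is a sketch that assumes the JSON body above has been saved to a file (attribute.json is just an example file name) and that $TOKEN holds a valid AR JWT token:

    curl -s -X POST "http://testserver:8008/api/cmdb/v1.0/attributes/BMC.CORE/BMC_BaseElement/test31" \
        -H "Authorization: AR-JWT $TOKEN" \
        -H "Content-Type: application/json" \
        -d @attribute.json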

 

[c] Create multiple custom attributes in a class using POST method

 

METHOD:  POST

 

REQUEST URL:    http://testserver:8008/api/cmdb/v1.0/attributes/{namespace}/{class}

 

[STEP 1] Create a new request in POSTMAN. Select the method 'POST'. Type your REST API server URL in the following example format, considering you are creating the following attributes in BMC_ComputerSystem: Custom_REQUIRED with entry_mode 'REQUIRED', Custom_OPTIONAL with entry_mode 'OPTIONAL', and Custom_DISPLAY_ONLY with entry_mode 'DISPLAY_ONLY'.

 

EXAMPLE: http://testserver:8008/api/cmdb/v1.0/attributes/BMC.CORE/BMC_ComputerSystem

 

[STEP 2] Go to the Headers section and fill out the Authorization parameters as shown in the picture 'Authorization Token as Header' (copied at the beginning of this blog).

 

[STEP 3] Switch to the BODY tab, select the 'raw' option, and change the format (available as a drop-down) from Text to JSON. Copy the JSON content below into the BODY tab, and then click the SEND button.

 

Note: When using the JSON below, remember to change the field ID for each field so that it is unique. Also, keep track of the 'entry_mode' field so you are using the appropriate one to get the desired outcome.

 

 

[
    {
        "name": "Custom_REQUIRED",
        "type": "REGULAR",
        "datatype": "CHAR",
        "characteristics": {
            "DESCRIPTION": "This is a Test Description",
            "DEPRECATED": false,
            "AUDIT": "AUDIT",
            "HIDDEN": true,
            "NAMESPACE": "BMC.CORE"
        },
        "limit": {
            "char_limit": {
                "pattern": "",
                "max_length": 255,
                "list_format": "STANDARD",
                "char_menu": "Test_Name",
                "qbe_match": "ANYWHERE"
            }
        },
        "field_id": 536871949,
        "entry_mode": "REQUIRED",
        "default_value": "False",
        "class_id": "BMC_COMPUTERSYSTEM",
        "class_name_key": {
            "name": "BMC_ComputerSystem",
            "namespace": "BMC.CORE"
        }
    },
    {
        "name": "Custom_OPTIONAL",
        "type": "REGULAR",
        "datatype": "CHAR",
        "characteristics": {
            "DEPRECATED": false,
            "AUDIT": "COPY",
            "NAMESPACE": "BMC.CORE"
        },
        "limit": {
            "char_limit": {
                "pattern": "ALPHA",
                "max_length": 255,
                "list_format": "",
                "menu_style": "OVERWRITE",
                "char_menu": "",
                "qbe_match": "EQUAL"
            }
        },
        "field_id": 536871944,
        "entry_mode": "OPTIONAL",
        "class_id": "BMC_COMPUTERSYSTEM",
        "class_name_key": {
            "name": "BMC_ComputerSystem",
            "namespace": "BMC.CORE"
        }
    },
    {
        "name": "Custom_DISPLAY_ONLY",
        "type": "REGULAR",
        "datatype": "CHAR",
        "characteristics": {
            "DEPRECATED": false,
            "NAMESPACE": "BMC.CORE"
        },
        "limit": {
            "char_limit": {
                "pattern": "DIGIT",
                "max_length": 255,
                "list_format": "",
                "char_menu": ""
            }
        },
        "field_id": 536871946,
        "entry_mode": "DISPLAY_ONLY",
        "class_id": "BMC_COMPUTERSYSTEM",
        "class_name_key": {
            "name": "BMC_ComputerSystem",
            "namespace": "BMC.CORE"
        }
    }
]

 

3_Create_multiple_Custom_Attributes.PNG

 

[STEP 4] Unless you see an error, you should see an HTTP 201 or HTTP 204 status in the response section. You can then open Class Manager to confirm the new attributes have been added to the BMC_ComputerSystem class. An HTTP 204 status means the server processed the request but returned no content; it indicates that the attributes were created successfully.

 

[d] Modify a single attribute in a class using PATCH method

 

METHOD: PATCH

 

REQUEST URL:  http://testserver:8008/api/cmdb/v1.0/attributes/{namespace}/{class}/{attribute}

 

[STEP 1] Create a new request in POSTMAN. Select the method 'PATCH'. Type your REST API server URL in the following example format, considering you are modifying the attribute named 'test31' in the BMC_BaseElement class.

 

Example: http://testserver:8008/api/cmdb/v1.0/attributes/BMC.CORE/BMC_BaseElement/test31

 

[STEP 2] Go to the Headers section and fill out the Authorization parameters as shown in the picture 'Authorization Token as Header' (copied at the beginning of this blog).

 

[STEP 3] Switch to the BODY tab, select the 'raw' option, and change the format (available as a drop-down) from Text to JSON. Copy the JSON content below into the BODY tab, and then click the SEND button. As an example, we have modified the field size of this attribute from '255' to '1024'; you can see that in the 'max_length' parameter in the JSON below.

 

{
    "datatype": "CHAR",
    "name": "test31",
    "type": "REGULAR",
    "class_id": "BMC_ASSETBASE",
    "class_name_key": {"namespace": "BMC.CORE", "name": "BMC_BaseElement"},
    "characteristics": {"NAMESPACE": "BMC.CORE", "DESCRIPTION": "Test 24", "AUDIT": "NONE", "HIDDEN": false, "CREATE_MODE": "PROTECTED"},
    "entry_mode": "OPTIONAL",
    "field_id": "234328",
    "limit": {"char_limit": {"max_length": 1024, "pattern": "", "char_menu": "", "menu_style": "APPEND", "qbe_match": "LEADING"}}
}

 

4_Update_Single_Custom_Attribute.PNG

 

[STEP 4] Unless you see an error, you should see an HTTP 204 status in the response section. You can then open Class Manager to confirm the attribute has been modified in the BMC_BaseElement class.

 

[e] Delete a single attribute in a class using DELETE method

 

METHOD:  DELETE

 

REQUEST URL:   http://testserver:8008/api/cmdb/v1.0/attributes/{namespace}/{class}/{attribute}

 

[STEP 1] Create a new request in POSTMAN. Select the method 'DELETE'. Type your REST API server URL in the following example format, considering you are deleting the attribute 'test31' from the BMC_BaseElement class.

 

EXAMPLE: http://testserver:8008/api/cmdb/v1.0/attributes/BMC.CORE/BMC_BaseElement/test31

 

[STEP 2] Go to the Headers section and fill out the Authorization parameters like mentioned in the picture 'Authorization token as Headers'  (copied at the beginning of this blog)

 

[STEP 3] Switch to the BODY tab, select the 'raw' option, and change the format (available as a drop-down) from Text to JSON. Copy the JSON content below into the BODY tab, and then click the SEND button.

 

{
    "datatype": "CHAR",
    "name": "test31",
    "type": "REGULAR",
    "class_id": "BMC_ASSETBASE",
    "class_name_key": {"namespace": "BMC.CORE", "name": "BMC_BaseElement"},
    "characteristics": {"NAMESPACE": "BMC.CORE", "DESCRIPTION": "Test 24", "AUDIT": "NONE", "HIDDEN": false, "CREATE_MODE": "PROTECTED"},
    "entry_mode": "OPTIONAL",
    "field_id": "234328",
    "limit": {"char_limit": {"max_length": 255, "pattern": "", "char_menu": "", "menu_style": "APPEND", "qbe_match": "LEADING"}}
}

 

[STEP 4] Unless you see an error, you will most likely see an HTTP 204 status in the response section. You can then open Class Manager to confirm the attribute has been deleted from the BMC_BaseElement class.
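
For completeness, hedged curl equivalents of the PATCH and DELETE calls above, assuming the corresponding JSON bodies are saved as patch.json and delete-attribute.json (example file names) and $TOKEN holds a valid AR JWT token:

    # Modify 'test31' (e.g. max_length 255 -> 1024)
    curl -s -X PATCH "http://testserver:8008/api/cmdb/v1.0/attributes/BMC.CORE/BMC_BaseElement/test31" \
        -H "Authorization: AR-JWT $TOKEN" \
        -H "Content-Type: application/json" \
        -d @patch.json

    # Delete 'test31'
    curl -s -X DELETE "http://testserver:8008/api/cmdb/v1.0/attributes/BMC.CORE/BMC_BaseElement/test31" \
        -H "Authorization: AR-JWT $TOKEN" \
        -H "Content-Type: application/json" \
        -d @delete-attribute.json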

 

NOTE: Updating and deleting multiple attributes will be covered in a future blog. They are deliberately not covered here, as I faced some technical issues while testing those examples. I hope you find this information useful. Thank you for reading!


As most of you are aware, we still ship the CMDBDRIVER program along with the CMDB REST API to perform several operations, including retrieving and sending data. The CMDB C API functions and the CMDB REST API provide similar data structures and functions to encapsulate information and functionality. Additionally, the web services API provides a set of platform-independent operations that communicate with your applications to retrieve and send data.

As these two components cover a vast amount of ground, I want to narrow the scope of this blog to the following:

Topics Covered in this blog:

[a] A brief introduction of CMDBDRIVER tool & CMDB REST API services

[b] A list of a few important REST APIs and the corresponding C API functions in CMDBDRIVER command-line tool

[c] How to access CMDB REST API in BMC CMDB?

[d] How to consume CMDB REST API?

    [d.1] First, generate an AR JWT token using tools like POSTMAN

    [d.2] Consume the CMDB REST API in POSTMAN or Swagger UI using the AR JWT token you generated in step 'd.1'

 

Topics Not Covered in this blog:

[a] Configuring HTTPS on Jetty web server (offering CMDB REST API services)

[b] Integrating Remedy SSO with Configuration Management Dashboard

[c] REST API services other than CRUD operations for CMDB class ‘instance’

[d] Details of CMDBDRIVER commands

 

[a] A brief introduction of CMDBDRIVER tool & CMDB REST API services

 

CMDBDRIVER: The cmdbdriver program enables you to execute various BMC Atrium Configuration Management Database (BMC Atrium CMDB) C application programming interface (API) functions. You can combine several cmdbdriver commands in a script that you can execute with a single command. This combination is helpful when you need to reuse a series of commands.

https://docs.bmc.com/docs/ac9000/cmdbdriver-program-509980235.html

CMDB REST API: 

REpresentational State Transfer (REST) is an architectural technique for creating user-friendly web services for a web application. We aren't creating any REST APIs using BMC CMDB; we will be using the ones it ships with.

You can use both the C and the REST API functions to perform the following operations in BMC CMDB:

  • Create, modify, retrieve, delete classes
  • Create, modify, retrieve, delete attributes
  • Create, modify, retrieve, delete instances
  • GraphWalk/GraphQuery
  • QueryByPath
  • Import/Export class/attributes/instances
  • Start/cancel/get RE jobs
  • Retrieve/activate federation

[b] A list of a few important REST APIs and the corresponding C API functions in the CMDBDRIVER command-line tool

 

C API | REST API | Description

get (gi) | GET /cmdb/v1.0/instances/{datasetId}/{namespace}/{className}/{instanceId} | Gets a single instance of a class for a given dataset

getmult (gmi) | GET /cmdb/v1.0/instances/{datasetId}/{namespace}/{className} | Gets multiple instances of a class for a given dataset

set (si) | PATCH /cmdb/v1.0/instances/{datasetId}/{namespace}/{className}/{instanceId} | Updates a single instance of a class for a given dataset and instance ID

setmult (smi) | PATCH /cmdb/v1.0/instances/{datasetId}, PATCH /cmdb/v1.0/instances, PATCH /cmdb/v1.0/instances/{datasetId}/{namespace}/{className} | Updates multiple instances of a class

create (ci) | POST /cmdb/v1.0/instances | Can be used to create a single instance or multiple instances in a class

createmult (cmi) | POST /cmdb/v1.0/instances, POST /cmdb/v1.0/instances/{datasetId}, POST /cmdb/v1.0/instances/{datasetId}/{namespace}/{className} | Creates multiple instances

delete (di) | DELETE /cmdb/v1.0/instances/{datasetId}/{namespace}/{className}/{instanceId} | Deletes a single instance

deletemult (dmi) | DELETE /cmdb/v1.0/instances, DELETE /cmdb/v1.0/instances/{datasetId}, DELETE /cmdb/v1.0/instances/{datasetId}/{namespace}/{className} | Deletes multiple instances

get (gc) | GET /cmdb/v1.0/classes/{namespace}/{className} | Returns a CDM class for the given namespace and class name

set (sc) | PATCH /cmdb/v1.0/classes/{namespace}/{className} | Modifies the given class

create (cc) | POST /cmdb/v1.0/classes/{namespace}/{className} | Creates a class

delete (dc) | DELETE /cmdb/v1.0/classes/{namespace}/{className} | Deletes a class as per the inputs given

getlist (glc) | GET /cmdb/v1.0/classes/{namespace}, GET /cmdb/v1.0/archivedclasses, GET /cmdb/v1.0/classes, GET /cmdb/v1.0/classes/archiveinfo/{namespace}/{className}, GET /cmdb/v1.0/classes/relationships | Gets a list of classes

get (ga) | GET /cmdb/v1.0/attributes/{namespace}/{className}/{attributeName}, GET /cmdb/v1.0/attributes/{namespace}/{className} | Two endpoints to get attributes for a class: the first for a single attribute, the second for multiple attributes

set (sa) | PATCH /cmdb/v1.0/attributes/{namespace}/{className}/{attributeName}, PATCH /cmdb/v1.0/attributes/{namespace}/{className} | Two endpoints to update attributes for a class: the first for a single attribute, the second for multiple attributes

create (ca) | POST /cmdb/v1.0/attributes/{namespace}/{className}/{attributeName}, POST /cmdb/v1.0/attributes/{namespace}/{className} | Two endpoints to create attributes for a class: the first for a single attribute, the second for multiple attributes

 

[c] How to access CMDB REST API in BMC CMDB?

The REST API is hosted on a Jetty server, which is installed and configured on the AR Server.

The <jetty-server> name is your AR server host name.

The Jetty web server can be configured to serve over the HTTPS protocol too. CMDB REST API services are available via the GET, POST, PATCH, and DELETE methods.

Swagger UI:

The REST API provides endpoints that are aligned with the Swagger specification language. The Swagger UI shows all the endpoints provided by the CMDB REST API. The URL for accessing the Swagger UI is:

http://<jetty-server>:8008/cmdb/api/help.html

When launched in a browser session, the above URL presents a page that allows authorization to the Swagger tool using a pre-authenticated AR-JWT token.

Upon successful authorization to the Swagger tool, the page displays the list of all the available BMC CMDB REST API services, as shown in the picture below.

Authorising to Swagger tool.PNG

[d] How to consume CMDB REST API?

[d.1] First, generate an AR JWT token using tools like POSTMAN

Video: https://www.youtube.com/watch?v=xue9Gx-dbEA&feature=youtu.be

Note: When using the POSTMAN tool to authorize the user, please use the user credential parameters as seen in the picture below. I've noticed it failing with an HTTP 500 error if the parameter names vary. So, the parameters are 'username' and 'password'.

CREATEMultipleInstance Part 1.PNG

Another point to note is that the video shows the Jetty server hosted on port 8443, but the default port is 8008.

URL:      http://<jetty server name>:8008/api/jwt/login

[d.2] Consume the CMDB REST API in POSTMAN or Swagger UI using the AR JWT token you generated in step 'd.1'

The picture below shows the available CRUD operations for instances.

CRUD Instance available REST API.PNG

Note: I’ve used 4 examples below to demonstrate the use of CRUD (Create, Read, Update & Delete) operations for CMDB Class instances (Configuration Items);  I’ve used POSTMAN tool for CREATE, UPDATE and DELETE operations, and used SWAGGER UI for GET operation – to show the use-cases in both the tools.

 

Example 1 – Done using POSTMAN – Create multiple instances of a class for a given dataset (4 steps)

Example 2 – Done using the Swagger tool – Get a single instance of a class for a given dataset and instance ID (1 step – follow the pic)

Example 3 – Done using POSTMAN – Update an instance (4 steps)

Example 4 – Done using POSTMAN – Delete an instance (4 steps)

 

Example 1 – Done using POSTMAN – Create multiple instances of a class for a given dataset

 

[Step 1] Generate an authorization token (you may reuse the one created in step 'd.1' if it is still valid)

     REQUEST URI: http://<jetty-servername>:8008/api/jwt/login

  Method: POST

 

After filling out the parameters as seen on the screen, click the 'Send' button.

 

CREATEMultipleInstance Part 1.PNG

[Step 2] Create another POST request in a separate tab within the POSTMAN tool, in order to configure the HEADERS for the CREATE INSTANCE request

     Method: POST
     REQUEST URI: http://<jetty-servername>:8008/api/cmdb/v1.0/instances/<datasetID>

     Example: http://testserver:8008/api/cmdb/v1.0/instances/BMC.ASSET

CREATEMultipleInstance Part 2.PNG
Fill out 'Authorization' with the AR JWT token that you generated in the previous step or in step 'd.1', and 'Content-type' & 'Accept' with 'application/json' respectively.

[Step 3] Switch to the 'Body' tab within the same request, paste the following JSON, and then click the 'SEND' button:

{
    "instances": [
        {
            "class_name_key": {
                "name": "BMC_ComputerSystem",
                "namespace": "BMC.CORE"
            },
            "attributes": {
                "Name": "Amazon Cloud VM",
                "ShortDescription": "Amazon Cloud VM"
            }
        },
        {
            "class_name_key": {
                "name": "BMC_ComputerSystem",
                "namespace": "BMC.CORE"
            },
            "attributes": {
                "Name": "Amazon Cloud VM 1",
                "ShortDescription": "Amazon Cloud VM 1"
            }
        },
        {
            "class_name_key": {
                "name": "BMC_ComputerSystem",
                "namespace": "BMC.CORE"
            },
            "attributes": {
                "Name": "Amazon Cloud VM 2",
                "ShortDescription": "Amazon Cloud VM 2"
            }
        }
    ]
}

 

CREATEMultipleInstance Part 3.PNG

 

[Step 4] Search for the newly created instances in the BMC_ComputerSystem class.
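
As a command-line alternative to the POSTMAN steps above, here is a hedged curl sketch of the whole flow: log in, then POST the instances. The host and credentials are placeholders, and instances.json is an example file holding the JSON body shown in Step 3:

    # Step 1 equivalent: obtain an AR JWT token
    TOKEN=$(curl -s "http://testserver:8008/api/jwt/login" \
        --data-urlencode "username=Demo" \
        --data-urlencode "password=password")

    # Steps 2-3 equivalent: create the instances in the BMC.ASSET dataset
    curl -s -X POST "http://testserver:8008/api/cmdb/v1.0/instances/BMC.ASSET" \
        -H "Authorization: AR-JWT $TOKEN" \
        -H "Content-Type: application/json" \
        -H "Accept: application/json" \
        -d @instances.json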

 

Example 2 – Done using the Swagger tool – Get a single instance of a class for a given dataset and instance ID

 

NOTE: When consuming the REST API service by referring to the following examples, please click the 'Try it out' button after filling out the parameter values.

Instance GET part 1.PNG

Instance GET part 2.PNG

Instance GET part 3.PNG

 

Example 2.a – Get multiple instances of a class for a given dataset. Note: specify multiple instance IDs in the 'id' parameter, separated by commas. Click the 'Try it out' button after you have filled out all the parameters.

GetMultipleInstance Part 1.PNGGetMultipleInstance Part 2.PNG

GetMultipleInstance Part 3.PNG

 

Example 3 – Done using POSTMAN – Update an instance

METHOD:  PATCH

REQUEST URI: http://<servername>:8008/api/cmdb/v1.0/instances/<datasetID>
 
Example: http://testserver:8008/api/cmdb/v1.0/instances/BMC.ASSET
 

[STEP 1] GENERATE AUTHORIZATION TOKEN (you may reuse the one created in step 'd.1' if it is still valid)

      REQUEST URI: http://<jetty-servername>:8008/api/jwt/login

  Method: POST

Fill out the user credentials as seen in the pic below, and then click the Send button.

CREATEMultipleInstance Part 1.PNG

 

[STEP 2]  Create another PATCH request in a separate tab within POSTMAN tool, in order to configure HEADERS for the UPDATE INSTANCE request

UpdateMultipleInstances Part 1.PNG

Fill out 'Authorization' with the AR JWT token that you generated in the previous step or in step 'd.1', and 'Content-type' & 'Accept' with 'application/json' respectively.

[STEP 3] CONFIGURE BODY TEXT: Switch to the Body tab. A sample body is below; alter the 'instance_id' and other details based on your environment, copy it into the Body, and click the Send button.

{"instances":[{"instance_id":"OIGAA5V0FLJHWAQ1CGR5Q1CGR5PU93","class_name_key":{"name":"BMC_ComputerSystem","namespace":"BMC.CORE","_links":{}},"dataset_id":"BMC.ASSET","attributes":{ "Name": "Azure Cloud VM", "ShortDescription": "Azure Cloud VM"},"_links":{}}],"num_matches":1}

 

UpdateMultipleInstances Part 2.PNG

Click the SEND button once the BODY, HEADERS and REQUEST URI are populated.

[STEP 4] VERIFY THE UPDATED INSTANCE IN THE CLASS
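
A hedged curl equivalent of this update (reusing $TOKEN from the Example 1 sketch; update.json is an example file holding the PATCH body shown in Step 3):

    curl -s -X PATCH "http://testserver:8008/api/cmdb/v1.0/instances/BMC.ASSET" \
        -H "Authorization: AR-JWT $TOKEN" \
        -H "Content-Type: application/json" \
        -d @update.json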

 

Example 4 – Done using POSTMAN – Delete an instance

 

METHOD:  DELETE

REQUEST URI: http://<servername>:8008/api/cmdb/v1.0/instances/<datasetID>

Example: http://testserver:8008/api/cmdb/v1.0/instances/BMC.ASSET

[STEP 1] GENERATE AUTHORIZATION TOKEN

Fill out the user credentials as seen in the pic below, and then click the Send button (you may reuse the token created in step 'd.1' if it is still valid).

CREATEMultipleInstance Part 1.PNG

[STEP 2] Create another DELETE request in a separate tab within the POSTMAN tool, in order to configure the HEADERS for the DELETE INSTANCE request

DeleteSingleInstance part 2.PNG

Fill out 'Authorization' with the AR JWT token that you generated in the previous step or in step 'd.1', and 'Content-type' & 'Accept' with 'application/json' respectively.

[STEP 3] CONFIGURE BODY TEXT: Switch to the Body tab. A sample body is below; alter the 'instance_id' and other details based on your environment, copy it into the Body, and click the Send button.

{"instances":[{"instance_id":"OIGAA5V0FLJHWAQ1CO5FQ1CO5FQD08","class_name_key":{"name":"BMC_ComputerSystem","namespace":"BMC.CORE","_links":{}},"dataset_id":"BMC.ASSET","_links":{}}],"num_matches":1}

DeleteSingleInstance part 3.PNG

[STEP 4] VERIFY THE DELETED INSTANCE IN THE CLASS
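
A hedged curl equivalent of the delete; note that this DELETE carries a JSON body (delete-instance.json is an example file holding the body from Step 3, and $TOKEN comes from the Example 1 sketch):

    curl -s -X DELETE "http://testserver:8008/api/cmdb/v1.0/instances/BMC.ASSET" \
        -H "Authorization: AR-JWT $TOKEN" \
        -H "Content-Type: application/json" \
        -d @delete-instance.json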


As you will all know, Adobe Flash will be sunsetted at the end of 2020, and as such many browsers will cease support; Google Chrome, Microsoft Edge, and Firefox have already made support of the technology optional.

 

BMC has been on a journey to remove these technologies from our various solutions, and one of the biggest changes has been in our CMDB UI: the Flash-based Atrium Core Console has been replaced by an updated CMDB UI based on HTML and Angular technologies. If you haven't made use of the new CMDB UI, details on accessing it can be found on docs.bmc.com.

 

Working on this replacement project we have been looking at ways to not only replace existing functionality but also to reimagine how the CMDB is managed, making the most of the opportunity that has been provided to us.

 

The CMDB UI has been refocused to give actionable insights, introducing KPI measurements and greater visibility and transparency into CMDB data and operations, using a progressive disclosure approach and removing clutter from the overall experience. For those new to the CMDB, we added in-app context-sensitive help with rich guidance, direct links into docs.bmc.com, rich interactive guides, and video content.

 

Class Manager received a reimagining and now provides a streamlined approach to managing the structure of the CMDB. We hope our customers find the new layout and functionality an improvement over the classic Class Manager and find it easier to manipulate the data model using the new tool.

 

CMDB Explorer has been overhauled, and yes, we know there are some behaviours that require attention to make them more fluid and intuitive; however, the work done here has allowed us to lay the foundations for enhancements and ongoing work in this area.

 

At the outset of this journey we said we would look to rethink how things in the CMDB are done and that it would take several releases to complete our work. The finishing line is closing in as we work to complete our efforts, along with Flash-removal updates to other areas such as the Approval Engine, SRM, and ITSM interactions with the CMDB, before the end of this calendar year.

 

Adopting the latest CMDB updates will require a platform upgrade, not an applications upgrade, as the CMDB is part of the platform installation.

 

Changes coming include adding the missing activities of the Reconciliation Engine functionality, introducing Dynamic Service Modelling functionality to the new CMDB UI, replacement of the CMDB Audit UI to align with the new framework, Federation capabilities and more functionality for Atrium Integrator.

 

I also want to make you aware that the current Service Catalog UI will not be replaced as-is within the CMDB UI framework. We are working on our approach to managing this data and the shape this experience should take in a post-Flash landscape, and we will provide an update as we move forward.

 

Our aims and goals remain the same, to progress the functionality of the CMDB UI and bring improved experiences which assist Configuration Managers and consumers of the CMDB with their daily tasks.

 

We welcome all constructive feedback on the new CMDB UI and look forward to completing this initial ‘technology replacement’ phase this year.

 

Stephen Earl

Principal Product Manager, BMC CMDB


Hi Everyone,

 

Welcome to March's CMDB blog. In this blog, we will discuss the Normalization process using the new CMDB UI. Normalization is the first activity that takes place after data is imported into CMDB source datasets through Atrium Integrator or Discovery Sync.

 

Below are the topics which we are going to cover:

Normalization Console Overview

 

The Normalization process makes sure that product names and categorizations are consistent across different datasets and from different data providers. The Normalization Engine provides a centralized, customizable, and uniform way to overcome data consistency problems.

 

How to access Normalization Console in new UI?

- Once you log in to the new CMDB console, the CMDB Dashboard is displayed. From this home page, you can open the Normalization Console using the 'Manage Normalization' option under the 'Jobs' menu.

HowtoOpen.png

 

Normalization Console Layout

- Normalization Console shows the number of jobs along with the job run details.

- It displays information on the processed CIs and relationships, and shows whether the CIs/relationships have any errors after normalization.

- You can filter the jobs displayed by selecting a dataset at the top.

JobRuns.png

 

Creating Normalization Job

 

1. In the CMDB UI, select Jobs > Manage Normalization.

2. On the Normalization page, click Create Job. The Create Normalization page opens as shown in the following figure:

CreateJob.png

3. Enter a unique name for the job.

4. From the Dataset Configuration drop-down list, select the dataset.

5. If you want a Product Catalog entry to be created when one doesn't exist, check the 'Allow New Product Catalog Entry' checkbox.

6. The 'Allow Unapproved CIs' option will normalize a CI even if the product is unapproved in the Product Catalog.

     In that case, the Normalization Engine will set its NormalizationStatus attribute to Normalized and Not Approved.

7. Select the Normalization Features you want to enable.

8. Schedule decides the job type, i.e. Continuous or Batch. A Continuous job runs continuously, while a Batch job runs at a specific time.

9. Click the "Save" button to create the Normalization job.

 

 

Normalization Configurations

 

You will find the following options under Configuration > Manage Normalization Rules on the CMDB Dashboard:

    • Catalog Mapping
    • Features
    • Dataset Configuration
    • Class Configuration

NormalizationConfigurations.png

 

 

Catalog Mapping

 

From Catalog Mapping, you can map incoming categorizations for a discovered or imported CI to your preferred categorizations. The product categorization aliases provide the correct categorizations either when the CI's Category, Type, and Item attributes have no values or when they do not match those in the corresponding Product Catalog entry.

 

You can create a Product Catalog alias mapping with the following steps:

1. In the BMC CMDB Dashboard, select Configurations > Manage Normalization Rules > Catalog Mapping

2. In the Catalog Mapping screen, click the ( + ) sign to open the page in which you can select a class for which you want to create the product alias mapping.

Create Mapping.png

3. Select the CI class from the drop-down list

4. In the Discovery Product Categorization section, select the categorization values of discovered products for the CI class you selected.

    Specify a name for the product in the Product Name field.

    In the Mapped Product Categorization section, select the categorization values that are applied to the discovered product when it is normalized.

CatalogMappingSave.png

 

5. Click on Save button to save the details.

 

 

Normalization Features

 

The Normalization Engine is provided with various capabilities that enable you to define rules and actions to normalize CIs. These rules can be applied to individual datasets.

You can view/modify existing rules or create new rules from Configurations > Manage Normalization Rules > Features

Normalization Features.png

 

Version Rollup

There are two version-related attributes in BaseElement: MarketVersion and VersionNumber. The VersionNumber attribute stores a full version number, such as 5.0.1.12, which can change frequently with maintenance releases. To manage licenses more accurately, the MarketVersion attribute stores a simpler string version, such as 5.0 or 2007. You can perform this version rollup for the MarketVersion attribute using the Version normalization feature.

 

Impact Normalization

The Impact Normalization feature is applied to relationship classes only. This feature sets the impact attributes based on the out-of-the-box or custom impact rules for the relationship class and the source and destination classes it relates to.

 

Relation Name Normalization

The Relation Name Normalization feature is applicable to relationship classes only. This feature replaces the existing relationship names with the BMC recommended or custom names, based on the CI classes in the relationship.

 

Instance Level Permissions

This feature enables you to set the row-level permissions on CIs as defined by the CI classes and additional qualifiers of the Instance attributes.

 

Suite Rollup Normalization

This feature enables you to define suites and their products and to identify an instance as a suite, a suite component, or a standalone application, allowing more accurate management of suite and individual licenses.

 

Custom

This feature enables you to create custom rules when the out-of-the-box Normalization features do not meet your requirements for normalizing data. You can create a custom rule for CI and relationship classes to perform a required action.

 

Dataset Configuration

 

               1. On the BMC CMDB Dashboard, click Configurations > Manage Normalization Rules > Dataset Configurations.

               2. Select an existing Dataset Configuration and click on Edit.

DatasetConfig.png

          3. On the All Datasets Configurations page, set the normalization settings for the selected dataset.

     DatasetConfig2.png

You can get more information about Dataset configuration in below link:
https://docs.bmc.com/docs/ac1908/configuring-normalization-settings-for-datasets-877695864.html

 

Class Configuration

 

You can add or remove classes for Normalization. If you remove a class from the Class Configuration, CIs of that class will have a NormalizationStatus of "Not Applicable for Normalization".

ClassConfig.png

To add a class for Normalization:

               1. On the BMC CMDB Dashboard, click Configurations > Manage Normalization Rules > Class Configuration

               2. Click on ‘Add Class Configuration’

                    AddClassConfiguration.png

               3. Select class name from drop-down list & click on Save

 

 

Enable Debug Level Logging

         

          You can enable debug level logging from CMDB Dashboard > Configurations > Core Configurations

   Logging.png

 

Follow the below steps to enable debug level logging:

    1. Click on Normalization
    2. Select ‘Plugin Server Configuration’
    3. From the drop-down list select the server on which you want to enable debug logs.

               Logging1.png

          4. Click on API/Batch/Continuous Job Configuration.

      1. API Job Configuration – Logs for Inline Normalization Job & detailed NE API calls.
      2. Batch Job Configuration – Logs for Batch Job
      3. Continuous Job Configuration – Logs for Continuous Job.

          5. Change Log Level from Warn to Debug.

          6. Click on Save button.

 

 

Thank you for reading this blog. 


In this post, we will discuss BMC Atrium Integrator, which is now also referred to as 'Data Imports' with the introduction of the new CMDB user interface offered by the latest CMDB versions. Below is a link to the blog from November 2019, which also partially discusses the 'Data Imports' module along with CMDB Reconciliation.

 

https://communities.bmc.com/community/bmcdn/bmc_it_service_support/cmdb/blog/2019/11/13/helix-support-new-cmdb-ui-class-manager-atrium-integrator-aka-data-imports

 

Almost every administrator of BMC CMDB uses Atrium Integrator / Pentaho Spoon for data import tasks, and some have by now gained expertise in creating complex jobs with those tools. Through this blog, we will focus only on the troubleshooting aspect of Atrium Integrator. Creating a job or a transformation, whether basic or complex, isn't covered in this write-up. The following sub-topics are covered:

 

[1] Atrium Integrator Overview

[2] Various Atrium Integrator Components

[3] Carte Server – Start and Stop

[4] Atrium Integrator Configuration Forms (UDM:xxxxxx)

[5] Common Error Scenarios

[6] Atrium Integrator Logging

 

[1] Atrium Integrator Overview (AI Overview)

 

Atrium Integrator is an integration engine that helps you transfer data from External Datastores to CMDB classes and AR System forms. The purpose of having Atrium Integrator is to transfer data from a variety of input sources such as flat files, complex XML, JDBC, ODBC, JMS, Native Databases, Webservices and others using connectors. Atrium Integrator provides the ability to clean and transform data before transferring it to CMDB classes or AR forms.

 

Break-up of the Atrium Integrator components per installations:

 

  • BMC Remedy ARServer Installation installs:

    1. Atrium Integrator Carte Server

    2. Atrium Integrator Spoon

    3. ARSystem Adapters

    4. AR PDI Plugins

 

  • Atrium Integrator Server installs:

    1. Atrium Integrator Console

    2. CMDB Adapters

 

  • Atrium Integrator Spoon Client Installs:

       Remote Atrium Integrator Spoon

 

  Atrium Integrator uses Pentaho, an ETL tool that enables you to extract, transform, and load data. When you run a job from the AI console, it runs on the AI Carte server. You can also run jobs from the Atrium Integrator Spoon client on client machines/desktops, or directly on the AR Server where Spoon is installed.

 

[2] Various Atrium Integrator Components

 

Please refer to the blog post below to learn more about the Data Imports features in the new CMDB UI console:

Helix Support: Using new CMDB UI - Class Manager & Atrium Integrator

 

2.1 Atrium Integrator Spoon Client:

 

The Atrium Integrator Spoon is a client-side, user-installable graphical user interface application used to design transformations and jobs. The Atrium Integrator Spoon client installer is available on the BMC EPD site from the standard BMC Remedy AR System installation program, as one of the installable client program options.

 

For complete documentation on creating transformations and jobs using the Atrium Integrator Spoon client please refer:

Spoon User Guide - Pentaho Data Integration - Pentaho Wiki

 

2.2 Atrium Integrator Spoon:

 

Atrium Integrator Spoon is installed along with the AR Server installation. BMC provides limited support for Spoon: install and use only the Pentaho and Atrium Integrator Spoon version that is packaged with the BMC Remedy AR System installer.

Though Pentaho Spoon is supported on multiple platforms, BMC supports Spoon only on Windows.

 

There are selected steps that BMC owns/supports:

https://docs.bmc.com/docs/ac1902/steps-in-atrium-integrator-transformation-855473134.html (Steps in Atrium Integrator transformation - Documentation for BMC CMDB 19.08 - BMC Documentation)

 

For information about additional steps that you can add to your transformation, see the Pentaho documentation:

http://wiki.pentaho.com/display/EAI/Spoon+User+Guide (Spoon User Guide - Pentaho Data Integration - Pentaho Wiki)

 

We support selected vendor databases, including IBM DB2, MS SQL Server, Oracle, Sybase, and MySQL.

 

[3] Carte Server – Start and Stop

 

The Carte server configuration entry is in the 'armonitor.cfg' file. The path of this file varies based on the operating system used on the AR server.

 

(Windows) ARSystemServerInstallDir\Conf\armonitor.cfg

(Linux / UNIX) /etc/arsystem/serverName/armonitor.conf

ARMonitor entry for Atrium Integrator.PNG

 

3.1 Starting Carte Server: A restart/start of the BMC Remedy AR Server service will start the Carte server, as long as the Carte line is not commented out (using the # symbol) in the armonitor configuration file.

 

3.2 Stopping Carte Server: It's important that the Carte server is not killed abruptly while an AI job is running. To stop the Carte server temporarily or permanently on a particular AR server, edit the armonitor configuration file by commenting out the Carte server line and save the file. Then kill the existing Carte server process.

 

On a Windows server, scan Task Manager for the line highlighted below to identify the Carte server process. After selecting the process, use 'End Process' to kill the Carte server, just like killing any other process on a Windows server.

 

Atrium Integrator Windows Task Manager - locating Process.PNG

 

On a Linux box, the same task can be achieved by first identifying the process ID of the Carte server, using the following command:

  ps -ef | grep 'diserver'

 

  Atrium Integrator process on Linux.PNG

This is followed by the kill command to kill the process:

  kill -9 <process ID>

 

[4] Atrium Integrator Configuration Forms (UDM:xxxxxx)
 

Note: Check all of these UDM forms when an Atrium Integrator job is not running.

 

The tables below maintain metadata for the Atrium Integrator tool. From a troubleshooting perspective, it is important to know what data is stored in them.

 

  • UDM:Config
  • UDM:RAppPassword
  • UDM:ExecutionInstance
  • UDM:PermissionInfo

 

  1. UDM:Config: This form contains entries for all AR Servers in the server group, one of which is the default Carte server that runs the AI jobs on that server. A few important things to note:

 

  • The server port assigned to Atrium Integrator is 20000
  • Atrium Integrator can run in an AR Server Group environment; it will pick up the primary server from the AR System Server Group Operation Ranking as the 'Default' server
  • The secondary server is the one ranked 2nd in the AR System Server Group Operation Ranking for the Atrium Integrator operation

UDM_CONFIG.PNG

 

NOTE: For Atrium Integrator to run in an AR Server Group, it must be configured in the AR System Server Group Operation Ranking form before it appears in UDM:Config.

 

Configure UDM in AR Server Group:

 

Before configuring this form for a server group environment, you must rank the Atrium Integrator servers by using the AR System Server Group Operation Ranking form. If you assign rank 1 to a server, that server becomes the primary server and runs the jobs. If the primary server fails, the secondary (failover) server, the one assigned rank 2, runs the jobs. If you do not assign rankings to the servers in a server group environment, jobs run on whichever server receives the request first.

Server Group Ranking.PNG

 

2. UDM:RAppPassword:

 

This form stores the Remedy Application (RApp) Service password for a specific AR System server. The AR System server installer populates this regular form during AR System server installation. The ARInput, AROutput, and CMDB steps provided by BMC make use of this form to connect to the AR System server.

 

If this password is changed in the Remedy AR Server configuration, or if you restore or migrate the database from one server to another, make sure you update this form with the correct server names and their corresponding RApp passwords to avoid issues while running AI/Spoon jobs.

 

If the Atrium Integrator servers are configured to run in an AR server group environment, ensure that this form contains all the required AR Server entries, including short names and FQDNs. Remove the entries that aren't needed or that hold incorrect AR server names.

UDM_RAPPassword.PNG

 

Any incorrect information in this form leads to failures in the load steps used in Atrium Integrator jobs.

 

3. UDM:ExecutionInstance:

 

This regular form allows multiple instances of the same transformation to be run. For every instance, the Atrium Integrator engine provides an Object ID; the combination of Object ID and transformation/job name is used as a unique key.

 

This form has one very important field, "Atrium Integrator Engine Server Name", which holds the AI server name. In a server group environment, this field shows the primary server name. It should hold the correct AR Server name (specifically after a database restore).

 

Note: This form cannot be used to create a new execution instance manually; you can only use it to view the created execution instances.

 

For some AI job run issues, we suggest deleting the existing UDM:ExecutionInstance entry for the specific transformation/job and then triggering the job/transformation again.

 

4. UDM:PermissionInfo:

 

This form contains the list of repository objects such as transformations, jobs, database connections, directories, slave servers, etc. By default, all jobs and transformations are assigned Public permission. Users with access to this form can amend the repository objects in it.

 

During execution of a transformation/job, a query is performed on this form; if the user has access to it, execution succeeds. Without permission, you may get errors.

 

[5] Common Error Scenarios

 

5.1 Atrium Integrator Job Schedules:

 

As we all know, an Atrium Integrator job can be scheduled to run at a specific time or interval.  A few important things to know about the job scheduler:

[a] Atrium Integrator job schedules are managed by the AR Server and are stored in the "AR System Job" form.

[b] The AR server runs an escalation named "ASJ:ScheduleJob" to trigger the job at the configured time.  BMC recommends running this escalation on a specific pool to avoid job schedule issues.

[c] Do not schedule a reconciliation job and an Atrium Integrator job at the same time, because both jobs could query or update the same data.

 

Errors like:

 

Error:- BMCATRIUM_NGIE000502: Failed to update job schedule.

 

BMCATRIUM_NGIE000501: Failed to create job schedule.

 

There can be several issues with Atrium Integrator job schedules, such as scheduled jobs not running at the scheduled time, or not being able to modify created/existing schedules.

 

Resolution Approach:

--Verify "AR System Job" form entry to check if the valid job schedule exists with status as Active.

--Verify few important field values such as ‘Schedule Start time’, ‘Next Collection Time’ , ‘Type’ etc.

--You may try to run the job manually from AI console and if that works then verify if the specific job has UDM:ExecutionInstance created, if yes then delete that and see if the job triggers on specific time.

--Verify if escalation "ASJ:ScheduleJob" is enabled and running on specific pool number.

 

   To run the escalation on a specific pool number, please refer to the following guidelines:

    Remedy AR System Server - How to assign a specific Pool to an Escalation

 

   Additionally, you can visit:

   https://communities.bmc.com/docs/DOC-61591

 

--Our AI expert Gustavo del Gerbo has already covered a few important tips and hot fixes to be applied:

 

DMT Schedule Jobs not working and/or failing inconsistently. Randomly schedules do not run. Memory leaks and high memory usage of AI.  That post covers the following issues regarding AI job scheduling and UDM jobs getting stuck:

 

  1. Authentication error when trying to create a record in the UDM:CartePending form.
  2. Unable to create webresult from XML. Error reading information from XML string: Premature end of file.
  3. 401 authentication error when publishing the job to the Carte server.
  4. With the Carte server in debug mode, an HTTP 400 error occurs when the job is published to the Carte server, and the arjavaplugin log shows a socket reset connection error.
  5. Error when the DMT job console tries to query the UDM:ExecutionStatus form: authentication error for AR_ESCALATOR or Remedy Application Service.
  6. Issue where the first run of the scheduled job is successful and the second run gets stuck.
  7. SW00515492 - The AI Carte server has memory leaks, and the Pentaho ARDB plug-in has memory leaks as well.
  8. SW00515494 - Pentaho Spoon job received java.util.ConcurrentModificationException

 

Below is a list of good-to-have, cumulative Atrium Integrator hot fixes for different versions:

 

  • All versions: 8.1 all SPs, 9.0 all SPs, 9.1GA, 9.1 SP1 up to 9.1 SP2:

    - AI_9.1.00_29NOV2016_SW00518269_ALL

 

  • 9.1 GA (no service pack) or 9.1 SP1 version:

    - AI_9.1.00_12SEPT2016_SW00515492_SW00515494_ALL

    - AI_9.1.00_30DEC2016_SW00522054_ALL

    - FD_91_2016DEC14_SW00522122_ALL

    - Download and replace the kettle-engine.jar file from this blog to supersede the file from the hotfix package (the file from the package has some incompatibilities).

 

  • 9.1SP2:

    - AI_9.1.02_25JULY2017_SW00522054_All

    - AI_9.1.02_04MAY2018_SW00539413_All for Security (Version disclosure and other hacks)

 

  • 9.1SP3: For version 9.1.03 make sure to apply Patch 1 first and then apply the below Hot fixes:

 

    - AI_9.1.03.001_04JAN2018_SW00540959_ALL

    - AI_9.1.03.001_30JULY2018_SW00549235_ALL

    - AI_9.1.03.001_30Aug2018_SW00550547_All

 

  • 9.1SP4:

    - Use the attached file for SW00540959.zip

    - AI_9.1.04_18JAN2018_SW00543845 for Performance of AI console (Flex old UI).

    - AI_9.1.04_29JUNE_2018_SW00548512_All for Performance of Spoon.

 

  • 9.0 (all SP levels):

    -  AI_9.0.01_26OCT2016_SW00515994_ SW00516220_SW00516447_ALL

 

  • 8.1 SP2:

     - AI_8.1.02_05OCT2016_SW00516445_SW00515997_ALL

 

5.2 Out Of Memory Errors when running Atrium Integrator jobs

 

Sometimes we see an out-of-memory error in the arcarte logs:

--

UnexpectedError: java.lang.OutOfMemoryError: GC overhead limit exceeded

--

OR

-- AI jobs take a long time to run/finish

-- The Carte server, or at times the AR server, crashes.

 

Resolution Approach:

 

--Enable -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="c:\temp\MyDump.hprof" for both Spoon and Carte.

--For Spoon, add these options in Spoon.bat (ARSERVERHOME\diserver\data-integration\Spoon.bat)

 

--You can increase the Java heap size for Atrium Integrator and Spoon.

When getting out-of-memory errors while running a job from the AI console, increase the Java heap size (the -Xmx value) in the armonitor.conf entry for Carte:

---

"%BMC_JAVA_HOME%\java.exe "-Xmx1024m"-Djava.ext.dirs=C:\Program Files\Java\jre1.8.0_191\lib\ext;C:\Program Files\Java\jre1.8.0_191\lib;C:\Program Files\BMC Software\ARSystem\diserver\data-integration;C:\Program Files\BMC Software\ARSystem\diserver\data-integration\lib" "-Dorg.mortbay.util.URI.charset=UTF-8" "-DKETTLE_HOME=C:\Program Files\BMC Software\ARSystem\diserver" "-DKETTLE_REPOSITORY=" "-DKETTLE_USER=" "-DKETTLE_PASSWORD=" "-DKETTLE_PLUGIN_PACKAGES=" "-DKETTLE_LOG_SIZE_LIMIT=" "-DKETTLE_MAX_LOG_SIZE_IN_LINES=5000" "-DKETTLE_DISABLE_CONSOLE_LOGGING=Y" "-DKETTLE_COMPATIBILITY_MERGE_ROWS_USE_REFERENCE_STREAM_WHEN_IDENTICAL=Y" "-DKETTLE_LENIENT_STRING_TO_NUMBER_CONVERSION=Y" org.pentaho.di.www.Carte carteservername 20000 -i "C:\Program Files\BMC Software\ARSystem"

----

 

When running a job from Spoon, increase the Java heap size for Spoon in:

ARSERVERHOME\diserver\data-integration\Spoon.bat

---

"%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xms1024m" "Xmx2048m" "-XX:MaxPermSize=256m"

---
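
On Linux, the equivalent change can be made in spoon.sh, or by exporting the variable before launching Spoon (a sketch; the heap sizes are examples to adjust for your load):

  # set the Spoon JVM heap before launching Spoon on Linux
  export PENTAHO_DI_JAVA_OPTIONS="-Xms1024m -Xmx2048m -XX:MaxPermSize=256m"
  sh spoon.sh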

 

5.3 How to troubleshoot UDM / Atrium Integrator job related issues.

https://communities.bmc.com/docs/DOC-125340

 

5.4 Atrium Integrator Spoon and database connectivity:

AI/Pentaho/Spoon/Carte MSSQL Connectivity. A mystery of many forms.

 

5.5 Atrium Integrator Performance best practices:

https://docs.bmc.com/docs/ac1908/improving-the-performance-of-atrium-integrator-877696064.html

Best practices that can improve the performance of Atrium Integrator job

 

[6] Atrium Integrator Logging:

 

6.1 How to set DEBUG logs for Carte Server:

 

1. Locate the following file (in Linux it'll be in a similar directory from root):

<C:\Program Files\BMC Software\ARSystem\diserver\data-integration\pwd\carte_log4j.properties>

/bmc/ARSystem/diserver/data-integration/pwd/carte_log4j.properties

 

2. Edit this file to change the root logging level to DEBUG and save.  Here is the top part of the file where the change is made:

#Root logger log level

log4j.rootLogger=DEBUG

# Package logging level

 

6.2 Atrium Integrator adapter log files

 

In Windows systems, log files reside in <AR_system_installation_directory>/ARserver/db. In Unix systems, log files reside in <AR_system_installation_directory>/db. Carte server log files include:

  • arcarte.log
  • arcarte-stdout-<timestamp>.log
  • arcarte-stderr-<timestamp>.log

 

The ARDBC plug-in log file is arjavaplugin.log. All ARDBC plug-in messages are recorded in this file.

 

By default the log level is warn. If you want to log info or debug messages:

  1. Open <AR_system_installation_directory>/pluginsvr/log4j_pluginsvr.xml
  2. Search for logger com.bmc.arsys.pdi.
  3. Change the log level to info or debug.
  4. Restart the AR System server.

   Example

<logger name="com.bmc.arsys.pdi">
<level value="info" />
</logger>

 

6.3 Spoon Logging:

 

While running a job/transformation from Spoon, you can set the logging level to one of the following options: Nothing, Error, Minimal, Basic (the default), Detailed, Debug, or Rowlevel (very detailed).

 

 

Thank you for reading this blog!

Share This:

Welcome to the first blog on CMDB in the year 2020. First off, I'd like to acknowledge the efforts of my colleagues Devendra Borse & Varun Patwardhan for reviewing the blog contents, and Manish Dwivedi for sharing the Performance Stress Test results and valuable inputs on PerfMonitor.  In this blog, we cover information on fine tuning BMC CMDB Reconciliation performance, which we hope will assist you in handling such situations.  Below are the sub-topics covered in this blog:

 

[A] Introduction – CMDB Reconciliation Performance

[B] Establishing a CMDB Reconciliation Performance Benchmark

[C] How to handle CMDB Reconciliation performance degradation using standard recommendations from BMC?

[C.1] Identify the uniqueness of the Reconciliation Job performance issues, and Remediate them

[C.2]  Are there unwanted CI data participating in CMDB Reconciliation job?

[D] How to report performance issues to BMC Support?

 

As application and database performance issues are broad topics, let's first note the ones which are not covered in this blog:

 

  • Network related issues
  • Crashes of the ARRECOND process

 

We will now proceed with discussing the planned topics.

 

[A] Introduction – CMDB Reconciliation Performance

 

Reconciliation is one of the most important and resource intensive activities within the BMC CMDB application.  I’ve detailed some vital topics around this subject, which I hope will assist in fine tuning its performance.

 

[B] Establishing a CMDB Reconciliation Performance Benchmark

 

Like any other database application, the best way to ascertain Reconciliation job performance is by measuring.  This involves executing the Reconciliation jobs and measuring the results of their activities in terms of the number of Configuration Items (CIs) identified and merged per second.  This exercise will assist you in arriving at a performance benchmark for CMDB Reconciliation.  Some of the test results from our lab are as follows:

 

IMPORTANT: Please note, the test results are expected to vary somewhat in every environment.  The below figures are only a result of the sample tests performed in BMC labs, and should not be considered as official confirmation of a definite count of Configuration Items (CI) to reconcile, even given a similar load and architecture.

 

Test Results Spreadsheet.PNG

 

You can record similar stats from recent runs of your TEST and PROD environments' Reconciliation jobs, and arrive at a benchmark. The benchmark can change over time depending on the CI data count, query fine-tuning, indexing, and other AR and database server parameters affecting performance.

 

Please find below instructions on establishing a benchmark, data volumes to use, methodologies, etc.:

 

https://docs.bmc.com/docs/brid1908/bmc-remedy-itsm-suite-19-02-solution-performance-benchmarks-879731271.html

 

[C] How to handle CMDB Reconciliation performance degradation using standard recommendations from BMC?

 

While running CMDB Reconciliation jobs to establish a benchmark, if you notice performance degradation, please review our standard configuration recommendations for the AR server and database server:

 

https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=kA014000000h9kqCAA&type=FAQ

 

 

If the performance of CMDB Reconciliation remains slow even after applying the standard recommendations, it must be investigated from the AR Server, database server, and Reconciliation job perspectives.  Based on our past experience, we've put down some steps for identifying and remediating the slowness.

 

[C.1] Identify the uniqueness of the Reconciliation Job performance issue

 

There are three main components involved during the running of the Reconciliation job: the AR System, the database, and the Recon job configuration, including the CI data that it reconciles.  I'd suggest the following approach to understand which component is causing the performance issue and to resolve it.

 

[C.1.a] Is the slow performance of the Recon job a result of a huge number of Configuration Items (CIs) to process?

 

You don't have control over the number of legitimate CIs to reconcile, but you can use certain measures, such as qualifications, to eliminate those with erroneous conditions. First, find the total number of CIs to process using the following SQL queries, and then compare the count with the benchmark you established in terms of time taken by the Recon job for a similar load.  That gives you an estimate of the time to finish the job.

 

For Identification activity

 

SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID != 'BMC.ASSET' AND ReconciliationIdentity = '0'

 

For Merge activity

 

SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID != 'BMC.ASSET' AND ReconciliationMergeStatus = '40'

OR

SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID = '<Dataset ID>' AND ReconciliationMergeStatus = '40'

Example:

SELECT Count(*) from dbo.BMC_CORE_BMC_ComputerSystem where DatasetID != 'BMC.ASSET' and ReconciliationMergeStatus = '40'

SELECT COUNT(*) FROM <schema>.<class name> WHERE DatasetID = 'BMC.ADDM' AND ReconciliationMergeStatus = '40'

 

You can further optimize the query by adding more conditions, but without removing the core ones like 'ReconciliationMergeStatus' and 'ReconciliationIdentity'. For example:
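
A sketch only; the extra ClassId condition is illustrative, and the table name is one of the examples above:

SELECT COUNT(*)
FROM dbo.BMC_CORE_BMC_ComputerSystem
WHERE DatasetID = 'BMC.ADDM'
  AND ReconciliationMergeStatus = '40'
  AND ClassId = 'BMC_COMPUTERSYSTEM'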

                                       

Remediate:  After the Reconciliation job finishes, if you find that the total time taken to reconcile a given number of CIs is inconsistent with the benchmark spreadsheet, then the AR and database server configuration must be revisited and fine-tuned.  If the Recon job performance is as good as the benchmark, there isn't much one can do; however, there are a few steps to eliminate unwanted CIs from processing:

 

  • You can relieve the stress on the Recon job by taking off its radar those CIs which are victims of data errors like duplicate CIs, orphan relationships, etc. Refer to sub-topic [C.2] later in this blog for effectively handling those data errors.

 

  • To limit the count of CIs and improve the data quality, you may choose to reconcile only normalized CIs.

 

  • Lastly, qualifications can be used in the Recon job Identification activity to limit the number of CIs to process.

 

[C.1.b] Is the Recon Job performance issue faced because there are not enough threads configured on the AR server Recon private queue to cater to the huge number of CIs?

                                                      

Symptoms: If you notice from the Reconciliation engine log (arrecond.log) that only a few threads, say 1-2, are used for a job that has a big count of CIs to reconcile, then you likely have insufficient threads.  In other words, if you notice good performance on the database server when running SQL queries, and your database administrator confirms that the database engine has bandwidth to accept additional connections from the AR server, the slowness can be narrowed down to insufficient AR server thread configuration.  In some cases, adding more CPU cores may be an option to address the situation.

 

Remediate: BMC recommends using a Private RPC queue, '390698', for the Atrium CMDB Reconciliation engine process. In order to maintain the balance of threads among the other processes on the AR Server, we recommend setting the maximum thread count to (1.5 * the number of available CPU cores).
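
As a sketch, the resulting entry in ar.cfg takes the form below (the thread counts here assume a 4-core server, per the table further down; the queue itself is normally configured from the AR System Administration Console):

  # Private-RPC-Socket: <RPC program number> <min threads> <max threads>
  Private-RPC-Socket: 390698 4 6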

 

  • How to find the total number of CPU cores in Windows and Linux?

 

In Windows Server, the total number of available CPUs can be obtained from the 'Performance' tab of Task Manager.

 

CPU Pic.png

 

 

  • In Linux, the same information can be obtained using the command 'lscpu' at the Linux prompt.

 

Linux Number of CPU.png

 

 

 

Below is a table of recommended thread settings per CPU core count:

 

Number of CPU      Max Threads      Recommended
4 CPU Core         6                4
8 CPU Core         12               8

  • A screenshot displaying the thread settings for the CMDB Reconciliation private queue

 

Private Queue 390698.png

 

  • General recommendations for threads per CPU core, and other performance indicators for Remedy applications

 

https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=kA014000000h9kqCAA

 

  • When can you possibly increase the number of CPU cores on the AR server?     

 

Increase the number of CPU cores on the AR server if it can benefit reconciliation by increasing the number of threads, especially when the database can accommodate the additional connections and effectively handle the load of SQL queries. More information on database performance is documented under sub-topic [C.1.c] below.

 

  • How can you monitor how the CPUs perform against the thread settings?

 

Please engage your system administration team, who can monitor CPU performance using tools like PerfMonitor on Windows, or CPU monitoring commands on Linux, while the Recon job runs.

 

Besides the Microsoft video below, there are various videos on using PerfMonitor that you can find on the web and use to gather information on various performance counters.  Once this data is gathered, you can consult your system administrator in case there's a need for additional CPU cores, or configure the thread settings for the CMDB Reconciliation private queue on the AR server more effectively.

 

https://techcommunity.microsoft.com/t5/windows-admin-center-blog/introducing-the-new-performance-monitor-for-windows/ba-p/957991

 

Below are some of the counters used by our QA team to monitor CPU performance using PerfMonitor

 

Important counters from AR server perspective are as follows:

 

#Processor Counters

"\\ServerName\Processor(*)\% Processor Time"

"\\ServerName\Processor(*)\% User Time"

"\\ServerName\Processor(*)\% Idle Time"

"\\ServerName\Processor(*)\% Interrupt Time"

"\\ServerName\System\Processor Queue Length"

#Memory Counters

"\\ServerName\Memory\Available MBytes"

"\\ServerName\Memory\Page Faults/sec"

"\\ServerName\Memory\Page Reads/sec"

"\\ServerName\Memory\Page Writes/sec"

"\\ServerName\Memory\Page Writes/sec"

"\\ServerName\Memory\Pages Input/sec"

"\\ServerName\Memory\Pages Output/sec"

"\\ServerName\Memory\Pages/sec"

#Process Counters

"\\ServerName\Process(*)\ID Process"

"\\ServerName\Process(*)\% Processor Time"

"\\ServerName\Process(*)\% User Time"

"\\ServerName\Process(*)\Private Bytes"

"\\ServerName\Process(*)\Working Set"

"\\ServerName\Process(*)\Thread Count"

#PhysicalDisk Counters

"\\ServerName\PhysicalDisk(*)\% Idle time"

"\\ServerName\PhysicalDisk(*)\% Disk time"

"\\ServerName\PhysicalDisk(*)\% Disk Read Time"

"\\ServerName\PhysicalDisk(*)\% Disk Write Time"

"\\ServerName\PhysicalDisk(*)\Avg. Disk sec/Read"

"\\ServerName\PhysicalDisk(*)\Avg. Disk sec/Write"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Queue Length"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Read Queue Length"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Write Queue Length"

#LogicalDisk Counters

"\\ServerName\LogicalDisk(*)\% Idle time"

"\\ServerName\LogicalDisk(*)\% Disk time"

"\\ServerName\LogicalDisk(*)\% Disk Read Time"

"\\ServerName\LogicalDisk(*)\% Disk Write Time"

"\\ServerName\LogicalDisk(*)\Avg. Disk sec/Read"

"\\ServerName\LogicalDisk(*)\Avg. Disk sec/Write"

"\\ServerName\LogicalDisk(*)\Avg. Disk Queue Length"

"\\ServerName\LogicalDisk(*)\Avg. Disk Read Queue Length"

"\\ServerName\LogicalDisk(*)\Avg. Disk Write Queue Length"

#Network Interface Counters

"\\ServerName\Network Interface(*)\Bytes Total/sec"

"\\ServerName\Network Interface(*)\Bytes Sent/sec"

"\\ServerName\Network Interface(*)\Bytes Received/sec"

#"\\ServerName\Network Interface(*)\Bytes/sec"

"\\ServerName\Network Interface(*)\Output Queue Length"

 

Important counters from Database server perspective are as follows:

 

#Processor Counters

"\\ServerName\Processor(*)\% Processor Time"

"\\ServerName\Processor(*)\% User Time"

"\\ServerName\Processor(*)\% Idle Time"

"\\ServerName\Processor(*)\% Interrupt Time"

"\\ServerName\System\Processor Queue Length"

#Memory Counters

"\\ServerName\Memory\Available MBytes"

"\\ServerName\Memory\Page Faults/sec"

"\\ServerName\Memory\Page Reads/sec"

"\\ServerName\Memory\Page Writes/sec"

"\\ServerName\Memory\Page Writes/sec"

"\\ServerName\Memory\Pages Input/sec"

"\\ServerName\Memory\Pages Output/sec"

"\\ServerName\Memory\Pages/sec"

#Process Counters

"\\ServerName\Process(*)\ID Process"

"\\ServerName\Process(*)\% Processor Time"

"\\ServerName\Process(*)\% User Time"

"\\ServerName\Process(*)\Private Bytes"

"\\ServerName\Process(*)\Working Set"

"\\ServerName\Process(*)\Thread Count"

#PhysicalDisk Counters

"\\ServerName\PhysicalDisk(*)\% Idle time"

"\\ServerName\PhysicalDisk(*)\% Disk time"

"\\ServerName\PhysicalDisk(*)\% Disk Read Time"

"\\ServerName\PhysicalDisk(*)\% Disk Write Time"

"\\ServerName\PhysicalDisk(*)\Avg. Disk sec/Read"

"\\ServerName\PhysicalDisk(*)\Avg. Disk sec/Write"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Queue Length"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Read Queue Length"

"\\ServerName\PhysicalDisk(*)\Avg. Disk Write Queue Length"

#LogicalDisk Counters

"\\ServerName\LogicalDisk(*)\% Idle time"

"\\ServerName\LogicalDisk(*)\% Disk time"

"\\ServerName\LogicalDisk(*)\% Disk Read Time"

"\\ServerName\LogicalDisk(*)\% Disk Write Time"

"\\ServerName\LogicalDisk(*)\Avg. Disk sec/Read"

"\\ServerName\LogicalDisk(*)\Avg. Disk sec/Write"

"\\ServerName\LogicalDisk(*)\Avg. Disk Queue Length"

"\\ServerName\LogicalDisk(*)\Avg. Disk Read Queue Length"

"\\ServerName\LogicalDisk(*)\Avg. Disk Write Queue Length"

#Network Interface Counters

"\\ServerName\Network Interface(*)\Bytes Total/sec"

"\\ServerName\Network Interface(*)\Bytes Sent/sec"

"\\ServerName\Network Interface(*)\Bytes Received/sec"

#"\\ServerName\Network Interface(*)\Bytes/sec"

"\\ServerName\Network Interface(*)\Output Queue Length"

#SQL Server Counters

"\\ServerName\MSSQL$CMDBPERF:General Statistics\User Connections"

"\\ServerName\MSSQL$CMDBPERF:SQL Statistics\Batch Requests/Sec"

"\\ServerName\MSSQL$CMDBPERF:SQL Statistics\SQL Compilations/sec"

"\\ServerName\MSSQL$CMDBPERF:SQL Statistics\SQL Re-Compilations/Sec"

"\\ServerName\MSSQL$CMDBPERF:Buffer Manager\Buffer cache hit ratio"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Average Wait Time (ms)"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Lock Timeouts/sec"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Lock Wait Time (ms)"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Lock Waits/sec"

"\\ServerName\MSSQL$CMDBPERF:Locks(_Total)\Number of Deadlocks/sec"

 

 

 

[C.1.c] Is the Recon Job performance issue faced because of performance issues on the Database server?         

 

Symptoms: If the database server responds to SQL queries slowly while the Reconciliation job runs, and as a result keeps the AR server threads in waiting mode, then the performance issue can be narrowed down to the database side.  Slow database performance can be due to multiple reasons, some of which are mentioned below:

 

  • Long-running queries due to the use of an incorrect index
  • Worker threads not available to take the requests
  • I/O delays

                                       

Due to its complex nature, please engage your database administrator (DBA) team to understand the real cause of the database performance issue.  From your side, you can help identify long-running queries.

 

[i] Identifying Long Running Queries

 

  • Identify long running queries on the Remedy Database -                            

Usage of Remedy's Log Analyzer tool

 

The Remedy Log Analyzer tool is very useful for finding long-running API calls, and thereby SQL statements, when analyzing the AR server-side (SQL+API+Filter) logs captured at the time of the AR Server performance issue.  For those who are new to this tool, please go through the below video.

https://www.youtube.com/watch?v=bYK1PFyNwr0

 

 

  • Identify long running queries on the Remedy Database -                                           

Using database tools like Oracle Automatic Workload Repository (AWR) or Microsoft SQL Server Database Tuning Advisor

 

Please engage your database administrator to generate these reports. The benefit of generating these reports over AR Server logs is that they give an overall picture of how the database server is performing, not only from the Remedy application standpoint but also for other applications it may be serving.

                                                                                           

To understand more about the Oracle AWR report, please go through the below link:

https://www.oracle.com/technetwork/database/manageability/diag-pack-ow09-133950.pdf
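
As a sketch, once your DBA is connected to the database in SQL*Plus with sufficient privileges, the standard AWR report script can be invoked as follows (the DBA chooses the snapshot range and report format interactively):

  -- run from a SQL*Plus session on the database server
  @?/rdbms/admin/awrrpt.sql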

 

To understand more about the MS SQL Server Tuning Advisor, please go through the below link:

https://docs.microsoft.com/en-us/sql/relational-databases/performance/start-and-use-the-database-engine-tuning-advisor?view=sql-server-ver15

 

[ii] Understanding the reason behind the time taken

 

  • Is it a missing index?
  • Are queries waiting on a lock to be released?
  • Is it physical disk I/O, memory to temporarily store the query results, or CPU time taken to process the query?
  • Any other unknown reason

 

The validation of the above topics requires ownership by and engagement of your Database Administrators. However, from our past experience, we’d suggest the following approach.

  • Using Database tools, identify the set of SQL queries returning the results slowly.

 

  • Are those SQL queries related to Reconciliation job activities, the Remedy application in general, or an external application?  It is easy to identify Reconciliation job queries, as they run against CMDB class forms, for example BMC_CORE_BMC_BaseElement, BMC_CORE_BMC_BaseRelationship, BMC_CORE_BMC_ComputerSystem, etc.

 

  • If your DBA identifies long-running queries on CMDB class forms, then have them advise on the creation of indexes that can help speed up query results.  Index fields and their sequence are decided by the DBA.

 

  • Indexes can be created on the CMDB forms directly using the CMDB Class Manager, but we recommend first creating the indexes directly on the database tables (see the sketch after this list).  Please take an appropriate backup of the database before making changes to the database schema.  After successful testing (including satisfactory query response time), you may later choose to drop that index from the database and create it on the form instead.  The benefit of creating the indexes on Remedy forms is that they're retained during an upgrade of the application.

 

  • Once the index is created, the DBA should run the queries a few times with the new index, outside of the Recon job first, to gather stats.  If the results are satisfactory, you can proceed to run the Recon job.

 

  • At this point, before attempting to run the Recon job to test the new indexes, the DBA must evaluate the rest of the available indexes on those tables to ensure they don't overlap with, and hence take priority over, the newly created ones when the Recon job runs.  Basically, your DBA must ensure that the SQL queries will use the new indexes even during the Recon job.  This needs to be confirmed because neither the BMC Remedy AR server nor the CMDB application has API features that can force a particular index on a SQL query it generates.

 

  • When running the Recon job, if you notice proper usage of the index yet the slowness persists, the DBA needs to investigate other parameters like disk I/O, memory usage, CPU consumption, or database locks, unless the DB tool report still shows long-running queries, whether new ones or the previous ones.

 

  • This index improvement process continues until all the long-running queries are fine-tuned.
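
A purely hypothetical sketch of such an index (the table and column names below are placeholders, not real objects; the actual fields and their sequence must come from your DBA's analysis of the long-running queries):

  -- SQL Server flavour shown; Oracle syntax is nearly identical.
  -- <class_table> is the physical table backing the class form, and the
  -- column placeholders map to the DatasetID and ReconciliationMergeStatus fields.
  CREATE INDEX IX_Recon_Candidates
      ON dbo.<class_table> (<DatasetID_column>, <ReconciliationMergeStatus_column>);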

 

 

  [iii] Index design considerations & other general tips

 

  • Understand the characteristics of the most frequently used queries. For example, knowing that a frequently used query joins two or more tables (like child CMDB classes - a join of BMC_BaseElement and BMC_<child class>) will help you determine the best type of indexes to use.

 

  • Microsoft suggests to determine the optimal storage location for the index. A non-clustered index can be stored in the same filegroup as the underlying table, or on a different filegroup. The storage location of indexes can improve query performance by increasing disk I/O performance. For example, storing a non-clustered index on a filegroup that is on a different disk than the table filegroup can improve performance because multiple disks can be read at the same time.

 

  • Use tools like Microsoft Database Engine Tuning Advisor and Oracle's SQL Tuning Advisor to analyze the database and get better index recommendations.

 

  • Avoid large numbers of indexes on tables with frequent data changes, like BMC_BaseElement, BMC_BaseRelationship, BMC_ComputerSystem, etc.

 

  • Avoid using too many columns in indexes.  Use the appropriate columns in the appropriate sequence.  Consider the order of the columns if the index will contain multiple columns. The column that is used in the WHERE clause in an equal to (=), greater than (>), less than (<), or BETWEEN search condition, or participates in a join, should be placed first. Additional columns should be ordered based on their level of distinctness, that is, from the most distinct to the least distinct.

 

  • The default cursor_sharing parameter in Oracle 10g is set to exact.

  • Check whether the Oracle database instance is allocated only a small amount of memory.

  • Check whether SQL Server is allocated an insufficient amount of space in the tempdb database.

 

  • Avoid using the LIKE operator in queries (Identification rules and Qualification of Reconciliation Job)

 

  • For better performance and results, it is recommended that you use the Reconciliation Merge Order option 'By class in separate transactions', and deselect the 'Include Unchanged CIs' option within the Merge activity.

 

 

[C.2] Are there any errors for CIs participating in the CMDB Reconciliation activity?

 

You might see performance issues if too many CIs fail to identify or merge. Because of the failures, the same set of CIs goes through the reconciliation activity during every job run, which unnecessarily increases load on the Recon engine and generates unproductive calls to the AR server and the database.

Hence, it's better in the long run to resolve those errors instead of ignoring them. Below are the most common errors that you may experience in the Reconciliation activity:

 

 

ARERR[120092] The Dataset ID and Reconciliation Identity combination is not unique

          To resolve this error, please follow this KA # https://communities.bmc.com/docs/DOC-72932
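
As a hedged diagnostic sketch before applying the KA, the offending rows can be located with a query of the following shape (the class table is only an example):

SELECT DatasetID, ReconciliationIdentity, COUNT(*)
FROM dbo.BMC_CORE_BMC_ComputerSystem
WHERE ReconciliationIdentity != '0'
GROUP BY DatasetID, ReconciliationIdentity
HAVING COUNT(*) > 1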

 

Investigating issues related to Reconciliation Job

          Issues related to reconciliation jobs - Documentation for BMC CMDB 19.08 - BMC Documentation

 

Found multiple matches (instances) for class

          Follow this KA # https://communities.bmc.com/docs/DOC-108436

 

 

 

[D] How to report performance issues to BMC Support?

 

  • On the AR server running the Recon job, generate AR server-side SQL+API+FILTER logs for at least 10 minutes to capture the slowness.  Also capture the Recon job log during the same time.

                         Setting log files options - Documentation for Remedy Action Request System 9.1 - BMC Documentation

 

  • Run the Atrium Core Maintenance tool on the AR server running the Reconciliation engine to grab its logs and config files.

                          https://docs.bmc.com/docs/ac91/bmc-atrium-core-maintenance-tool-609847389.html

 

  • If possible, run the log analysis using the Log Analyzer tool, and share the output with us.

https://www.youtube.com/watch?v=bYK1PFyNwr0

 

Thank you for reading.  Please share your feedback and queries.

Share This:

Having worked in Pentaho Spoon for some time, you tend to encounter issues caused by overlooked minor configurations or by over-analyzing things. Below are a few of the issues I have encountered while working in the tool, and the solutions that have worked for me (which might not be right all the time).

 

1. Weird data being uploaded from a transformation

Issue: When you import from a CSV file, sometimes weird values (consisting of @#! and some alphanumeric characters) get updated into Remedy.

Solution: This is due to an option check box, 'Lazy conversion', being selected during the data load. It needs to be unchecked for the data to be loaded properly, without corrupting the data values.

Note: Lazy conversion is really useful when you just have to read the data from a file and pass it on as a file output, without any modification.

 

2. Loading data into the CMDB relationship table

Issue: When running a transformation to load data into a relationship table based on the source instance ID and destination instance ID, the records would be rejected.

Solution: In the CMDB Output step, there is a check box "Use CheckSum"; this needs to be unchecked for the relationship tables of CMDB.

 

3. Executing the AI jobs to run on linux environment

Issue: For some reason, we were not able to execute the AI jobs from the Atrium console in the mid tier. While that was being worked on, we still needed a way to run them.

Solution: We executed and ran the jobs directly from the command prompt. Details are explained in the blog post below.

Executing Spoon jobs in Linux Environment

 

4. Unable to run a job from command prompt

Issue: When we were trying to run jobs from the command prompt, a few of them failed to execute.

Solution: This was mainly due to spaces in the job names. There are two ways to rectify this:

 

     - Either have job names without spaces, or with words separated by an underscore (_)

     - If there is a space in the name, then enclose the job name in double quotes (" ") in the script used for executing it (see the sketch below)
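
As a hedged illustration of points 3 and 4, a Kitchen invocation might look like the sketch below (the repository name, credentials, directory, and job name are placeholders):

  cd /opt/bmc/ARSystem/diserver/data-integration
  # double quotes around the job name are required when it contains spaces
  sh kitchen.sh -rep=AtriumIntegrator -user=Demo -pass=password -dir=/ -job="My Sample Job" -level=Basic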

 

5. A sequential transformation in a job fails to run when a previous transformation has failed

Issue: When you have a job with multiple transformations in it and the first (or any previous) transformation fails, all the successor transformations are skipped and the job ends.

Solution: To avoid situations like this, set the hop's flow evaluation to "Unconditional". By default it is set to "Follow when result is true".

Note: Change this setting only if there is no dependency between the transformations. Leave it unchanged if you need the previous transformation to execute successfully!

6. Connections being reset when importing jobs

Issue: The connection information is modified or updated when jobs are imported.

Solution: This is a configuration in the Spoon client which, when checked, will update the connection information. It needs to be enabled with care based on the requirement. To locate the configuration, navigate in the Spoon client to "Tools > Options". The highlighted option needs to be unchecked for this.

 

Note: These are some of the things that worked for me while working with the Spoon client. It may not be the same for everyone, so decide what is best for you when working with it.

Share This:

Hi Everyone,

 

Welcome to our new CMDB Blog for the month of December 2019. We provided information about Class Manager and Atrium Integrator in the new CMDB UI in our previous CMDB blog. Here is the link to the previous blog in case you missed it - Helix Support: Using new CMDB UI - Class Manager & Atrium Integrator

 

In this blog, we will provide detailed information about the new interface for Reconciliation. Below are the topics which we are going to cover:

  • Reconciliation Console Overview
  • Creating a new Reconciliation job
  • Managing Reconciliation Rules

 

 

Reconciliation Console Overview

 

How to access Reconciliation in the new CMDB UI?

 

Log in to the new CMDB UI --> Dashboard and click on the 'Manage Reconciliation' link under the 'Jobs' menu.

 

1.PNG

 

 

Reconciliation Console Layout Details

 

- The Reconciliation Console shows the number of jobs along with the job run details.

- It displays information on the processed CIs and relationships. It also shows whether the CIs/relationships have any errors after processing or not.

- You can filter the jobs to be displayed by selecting the dataset above and/or the activity type.

 

2.PNG

 

 

Creating a new Reconciliation job

 

- Click on the ‘Create Job’ button in the Reconciliation console.

- The example given is for a 'Reconciliation: Identification, Merge and Purge' job.

 

3.PNG

 

- Enter the RE job name and select the activity to be created.

- The first activity to be created is ‘Identify’.

- Add a schedule for the job if needed.

 

4.PNG

 

Enter the Identify activity details as given below and click on save.

- Provide an activity name and sequence number for the Identification activity.

- Enter the source and target dataset for the activity

- Provide a specific qualification if required.

 

5.PNG

 

Create the ‘Merge’ activity by clicking on the ‘Add New’ button and enter the Merge activity details as given below.

- Provide an activity name and sequence number for the activity execution in the job.

- Select the source and target dataset for the Merge activity.

- Provide a specific qualification if required.

- Select a 'Dataset Merge Precedence Set'.

- The Merge Order is "By Class In Separate Transactions" by default.

- Choose whether the Merge Activity should do a Full Merge or a Delta Merge.

 

7.PNG

 

Click on the ‘Add New’ button again to create the ‘Purge’ activity.

 

Enter the Purge activity details as given below and click on save.

- Provide an activity name and select a sequence for the Purge activity.

- Select the dataset from which the data should be purged.

- Provide a specific qualification if required.

 

8.PNG

 

The new Test job will now show as created in the RE job list.

 

9.PNG

 

To start the 'Test Job', click on the job entry and click on 'Start Job'.

 

Capture.PNG

 

 

Managing Reconciliation Rules

 

From the Dashboard go to Configurations --> ‘Manage Reconciliation Rules’ to access the RE rules like Identification Rules, Qualification Rules and Precedence Rules.

 

10.PNG

 

Managing Identification Rules

 

- To access the Identification Rules, click on Manage Reconciliation Rules --> Identification Rules.

- The rules are listed class-wise, and the class list is given on the left-hand side.

- Once a class is selected, you can toggle between the Standard and Custom rules for that class.

- Next to each rule, there is an edit/delete button.

 

11.PNG

 

 

Managing Qualification Rules

 

- To access the Qualification Rules, click on Manage Reconciliation Rules --> Qualification Rules.

- There is a button each for adding a Qualification Ruleset and a Qualification Rule.

 

12.PNG

 

 

Managing Precedence Rules

 

- To access the Precedence Rules, click on Manage Reconciliation Rules --> Precedence Exceptions.

- Select a Dataset Merge Precedence set to view its details.

- On selecting a dataset, the Merge Rulesets for that dataset are shown.

- The tiebreaker value is also shown, which can be changed per the data update requirements based on precedences.

 

13.PNG

 

Thank you for reading this blog

Share This:

Hi Everyone,

 

Welcome to our new CMDB Blog for the month of November 2019. We covered the new features introduced in CMDB 19.08 in our last CMDB blog. Here is the link to that blog in case you missed it:

https://communities.bmc.com/community/bmcdn/bmc_it_service_support/cmdb/blog/2019/08/27/bmc-cmdb-1908-is-here

 

In this blog, we will provide detailed information about the new interface for Class Manager & the additional capabilities added to the new user interface of Atrium Integrator. Below are the topics which we are going to cover:

 

  • Prerequisites
  • Class Manager Overview
  • Data Imports Console Overview
  • Navigating within Data Imports console
    • Listing, Filtering jobs
    • Executed Jobs
  • Manage Datastore
  • Create a Custom Job & Run it

 

 

Prerequisites

 

  • Accessing and navigating the new CMDB UI

https://docs.bmc.com/docs/ac1908/accessing-and-navigating-the-new-cmdb-user-interface-877695737.html

  • Configuring the URL to access the new CMDB UI

https://docs.bmc.com/docs/ac1908/configuring-the-url-to-access-the-new-cmdb-ui-in-a-single-server-server-group-and-load-balancer-environment-877695526.html

  • General queries about Configuration Management Dashboard UI

https://communities.bmc.com/community/bmcdn/bmc_it_service_support/cmdb/blog/2018/04/03/everything-that-you-need-to-know-about-accessing-new-cmdb-ui

 

 

Class Manager Overview

 

Class Manager has a new user interface that you can access quickly and easily. The Class Manager interface displays all the classes in your data model. You can create or edit a class or relationship.


How to launch the Class Manager console?
Once you log in to the new CMDB console, the CMDB Dashboard is displayed. From this home page, you can open Class Manager using the 'Classes' option under the 'Class Management' menu.


1.png

2.png

 

Below is the layout of Class manager:

3.png

You can perform the following tasks related to the data model by using Class Manager:

 

1. Define the properties of the class    

 

- You can define the type of class, how it stores data, and (for relationship classes) the relationship.

- You can create a new class or subclass by clicking the "Create" or "Add Subclass" button.

   The "Create" button is located in the navigation pane & the "Add Subclass" button is available in the Information pane.
4.png

  5.png

2. Configure instance auditing for the class

 

- Auditing enables you to track the changes made to instances of a class. You can select one of these options:

      1. None - Select this to not perform an audit.
      2. Copy - Select this option to create a copy of each audited instance. Each form related to a class and its super class is duplicated to create audit forms that hold audited instances.
      3. Log - Select this option to create an entry in a log form that stores all attribute values from the audited instance in one field. If you select this option, you must also specify the name of the log form to use.

      6.png

3. Define one or more CI and relationship class attributes

 

- You can create a new attribute or edit an existing one from Class Manager.
- To create a new attribute, select the class on which you want to create the attribute & click the 'Add' button under the 'Attributes' tab:

               7.png

                8.png

- To modify an existing attribute, click on the attribute name & then click the Edit button:

                   9.png

               10.png

4. Specify permissions

- If you do not specify permissions for a class, BMC CMDB assigns default permissions.

               11.png

5. Specify indexes

 

- Indexing classes can reduce database query time. Index attributes that you expect users to query frequently, that are used by discovery applications to identify Configuration Items (CIs), and that are used in reconciliation identification rules.

- Specifying or modifying indexes in a class that already holds a large number of instances can take a significant amount of time and disk space. You must avoid creating indexes during normal production hours.

- You can create an index from Class Manager by clicking the 'Add' button in the 'Indexes' tab:

12.png

                 13.png

Once you save it, you can see the new index under the "Indexes" tab:

              14.png

6. Propagate attributes in a weak relationship

 

- This step is necessary only if you have created a relationship class that has a weak relationship in which the attributes from one class should be propagated to another class.

               15.png

For more detailed information, you can go through below document:
https://docs.bmc.com/docs/ac1908/creating-or-modifying-classes-using-the-class-manager-877695663.html

 

Data Imports Console Overview

 

The Data Imports console is the home for creating and managing Atrium Integrator (AI) jobs.  For creating complex Atrium Integrator jobs which use difficult logic and multiple data manipulation steps, Pentaho Spoon is still the tool of choice, though.

 

How to launch the Data Imports console?

The CMDB Dashboard is displayed after you log in to the new CMDB UI. From this home page, you can load the 'Data Imports' console by clicking the 'Manage Atrium Integrator' link available under the 'Jobs' menu.

 

1.png

Upon clicking ‘Manage Atrium Integrator’ option, the ‘Data Imports’ console is loaded:

2.png

 

 

Navigating within Data Imports console

 

  • Listing jobs, Filtering jobs

          The console lists all the jobs under the tab 'Total' on its home page.

    3.png

Pagination - Pagination is applied to the list so that a manageable number of jobs is visible per page.

Filtering a Job - Jobs can be searched using a filter. For example, jobs that start with the string 'SRM', or simply contain that string in the job name, can be searched by just typing 'SRM' (without quotes) in the Filter search box.

4.png   

        NOTE: Please note that wildcard symbols like '*' or '?' can't be used while searching the jobs.

 

  • Executed Jobs

The jobs that have finished execution (either successfully or failed) are listed under the tab 'Executed Jobs' on the Data Imports console.

As the name suggests, this tab shows the list of executed jobs, their status, and CI record management information, including errors if any.

5.png

If you want to see the Job Run details, please click on the down arrow just beside the 'Run History' column of that job.

6.png

The 'Run History' column shows the total count of times the job has run.  The count has a link that can be used to drill down to the fine details of the job run history.

7.png

The Job Run History can be further filtered to show the latest job runs by selecting 'Today', and recent job runs by using the 'Last 7 days' or monthly options.

8.png

From the screenshot, you will notice that there are job statuses of 'Successful' and 'Failed'.  You can use the Status drop-down to search for only the 'Failed' jobs, and then use the drop-down inside the Job Run details to see which transformation within the job failed.

9.png

10.png

 

Manage Datastore

 

Datastores are logical connections made to a physical container of the data.  It could be a CSV file, XML file, database, or the AR server itself.  The target datastore is always an AR Server connection (datastore), as that is where we want to push the data to.  The data is pushed to the CMDB class forms within the Remedy AR server. The source datastore can be one of various connections, including CSV, XML, or a database.

 

To create or manage a datastore, please click the 'Manage Datastore' button on the 'Data Imports' console.

11.png

 

Please see below the different datastores that you can create:

 

Creating a Datastore

 

12.png

 

Source Datastore using Database as type and MS SQL as a vendor database

 

13.png

 

Source Datastore using Database as type and Oracle as a vendor database

 

14.png

 

Source Datastore using CSV as type

 

15.png

 

Source Datastore using XML as type

 

16.png

 

Source Datastore using AR server as type

 

17.png

 

NOTE: For CSV and XML file connections, please create the store locally on the AR server where the AI job will run, to ensure fast performance.

 

 

Create a custom job & Run it

 

The prerequisite for creating a custom job in the 'Data Imports' console is to have the source and target datastores created, as mentioned in the previous step of this document.

Once the datastores are created, use the 'Create Job' command on the console, as seen below.

 

19.png

 

Fill out the new Job Details and also do the field mapping between the source and target dataset.

In the example below, we have used an external 'Database' as the data source.

 

20.png

 

Do the Field mappings, as seen in the below image

 

21.png

 

Save the Job

 

22.png

 

Click on the job name, then the 'Start Job' button, to run it.

 

23.png

Thank you for reading this blog.

Share This:

Hello Everyone,

 

I wanted to let you know that at the end of last week we released BMC CMDB 19.08, along with ITSM, AR System, and other products.

 

This release contains several items of interest and I wanted to quickly highlight them for you:

  1. Class Manager in the Dashboard-driven UI: you can now manipulate the Common Data Model using our updated UI in a new streamlined and simplified Class Manager. We've stripped the Class Manager back to what is actually needed, rather than containing superfluous information, but also enhanced it so it's easier to get things done!
  2. Atrium Integrator has also received attention in this release: you are now able to create end-to-end transformations for AI jobs directly in the new UI, allowing for an easier and more streamlined experience. We aren't totally done here yet and there are some caveats; for example, we do not yet support the concept of a separate file/source for the relationships between CIs via this method, so you need to supply the parent CI in the child record.
  3. In the CMDB Explorer we have added the ability to run impact simulations, thus removing the need for a separate UI for the Impact Simulator; it's now all just part of CMDB Explorer. You will also notice this has been embedded into the ITSM Mid-Tier UI too, as part of our Flex deprecation efforts.
  4. Also, Federated Actions (aka Launch in Context Federation) is now present and exposed in the CMDB Explorer with its own configuration interface. We've also pre-configured it for BMC Discovery, so you can start getting value quickly from this powerful functionality.
  5. Changes to the CDM are also present in this release. With the technology world moving apace, we have updated the CDM again with the following items:
    1. Introduced new attributes to the BMC_Tag class to facilitate the population of Cloud Tag information by BMC Discovery
    2. Introduced a new 'informational' relationship class, BMC_RelatedTo, to allow you to relate anything to anything. It has no endpoint rules and does not carry Impact or Dependency information, but allows you to, for instance, relate tags to any other CI. The intent here is to relate 'informational' CIs across the landscape.
    3. We've relocated the 'isVirtual?' attribute from the BMC_System hierarchy into BaseElement. This is not a totally trivial exercise, and we are not forcing you to adopt the move as part of an upgrade; migrate the data at your pace. We will produce an Atrium Integrator job to assist with this via the Communities shortly. Anything using the API will now write to the new relocated attribute, whilst your existing data will be left where it is.

 

This post highlights some of the major updates we've made. There are other defect fixes and updates throughout the tool, some refinements here and there, but these are the larger, more visible items. For the details of everything we've been up to, please check the Release Notes for 19.08. I'll leave you with some teaser screenshots of the Class Manager and the updated CMDB Explorer for now....

 

BMC CMDB Class Manager:

Class_Manager.png

BMC CMDB Explorer with Impact Simulation and Federated Actions:

Impact_Simulation.png

I look forward to seeing as many of you as possible at the BMC Helix Immersion Days in Santa Clara in September, where we'll be running labs and sessions including the CMDB roadmap moving forward, the Configuration Manager Dashboard UI, and CMDB future concepts and functionality. I'll be at Evening with Engineering too! See you there....

 

Stephen.

Share This:

From Monday the 29th July 2019 BMC will be making a configuration change which will disable the 'What's New' popup window in the Configuration Manager Dashboard User Interface.

 

Why are we making this change?

We have had a number of reports from customers that the pop-up window in the Configuration Manager Dashboard UI is causing blocking behaviours in certain circumstances. In order to enable the maximum value to be realized by as many customers as possible, we are going to disable this blocking behavior.

 

What does this mean?

When users log in to the BMC CMDB Configuration Manager Dashboard UI, they will NOT receive a 'What's New' popup window highlighting what has changed in the release. Please use the 'Walkthroughs' slide-out to access in-context documentation highlighting 'What's New' for the CMDB.

 

Will this functionality be reinstated?

We will be working to add a configuration parameter in an upcoming release that will allow organizations to enable or disable this popup. Once this is available, we will enable the popup again for those organizations that have the configuration set appropriately.

 

Still want to clarify something? Comment below

Share This:

Have you signed up to come and join us at BMC Helix Immersion Days in Santa Clara, September 16th? There will be PMs, engineers, events, and sessions around BMC CMDB and other tools in the Helix suite - lots to learn, lots to 'try', and sessions to participate in, from labs to User Experience research... If you haven't signed up already, please do at:

 

BMC Helix Immersion Days | Silicon Valley | September 16 - 18, 2019

 

See you there!

 

Stephen Earl

Principal Product Manager, BMC CMDB


BMC CMDB 19.02 - GA

Posted by Stephen Earl Mar 1, 2019
Share This:

Hi Everybody

 

I wanted to quickly post that BMC CMDB 19.02 is now GA, and can be downloaded from the EPD site.

 

New in this release:

 

CMDB Explorer

Edit and create CIs from the new Dashboard user experience; be aware that editing CI information utilises mid-tier CI forms.

 

Explorer Complex model.PNG

 

CMDB Archiving

You can now move data into a CMDB archive from your active datasets; this allows you to preserve data for compliance and other record-keeping purposes.

Archive CIs and their related CIs, define retention periods, select your CIs by query, and define where the archive data will be stored.

 

Archiving.PNG

 

Location attributes on relationships

Shift your data paradigm: you can now specify detailed location information for CIs on relationships, allowing your physical locations to be less granular (e.g., buildings or a campus) while your relationships hold the specific room, rack, or shelf locations.

 

Service Cost Modelling attributes

You can now provide cost apportionment (expressed as a percentage) and cost flow attributes between CIs on relationships, allowing the CMDB to provide improved raw data to enable your service models, and active data to support service costing efforts.

 

 

How will these features and capabilities help your business succeed with CMDB? Please comment below.

Looking forward to our upcoming releases, you can expect us to continue our user experience evolution and much more!
