
Atrium CMDB


The *BMC Atrium CMDB 9.1: For Consumers, Configuration Managers, and Administrators* course teaches students what makes a Configuration Management Database (CMDB) useful to the business, how to configure the CMDB, and how to perform routine administrative and maintenance tasks. This course includes the accreditation exam for BMC Accredited Administrator: Atrium CMDB 9.1, and is also a part of the BMC Accredited Administrator: BMC Atrium CMDB 9.1 certification path.

 

Course details are here; alternatively, contact Thomas Hogan (EMEA) or Brian Hall (AMER training), or email education@bmc.com

 

UPCOMING PUBLIC CLASSES

AUGUST 8, 2016:  EMEA (online)

AUGUST 8, 2016:  AMER (online)

SEPTEMBER 19, 2016: EMEA (online)

OCTOBER 3, 2016:  AMER (online)

 

Have multiple students? Contact us to discuss hosting a private class for your organization.

 

This instructor-led, five-day course includes a hands-on lab and is structured as follows:

  • Day 1 (Consumers): Learn how the CMDB is used by key IT Service Management functions, such as Asset, Incident, and Change Management.
  • Days 2-4 (Configuration Managers): Learn how to configure the environment to meet the requirements of CMDB consumers in the business.
  • Day 5 (Administrators): Learn routine administration and maintenance of the CMDB.

 

After attending this course, you will be able to maximize the out-of-the-box functionality of BMC Atrium CMDB.

 

IMPORTANT: Included in this course is the examination for BMC Accredited Administrator: BMC Atrium CMDB 9.1: Consumers, Configuration Managers and Administrators. Taking the exam and pursuing accreditation is optional; however, all students enrolled in this course are automatically enrolled in the exam. You will have two attempts to pass the BMC Accredited Administrator exam. No retakes will be offered. Those who pass will receive the title of BMC Accredited Administrator: Atrium CMDB 9.1.

 


The BMC Remedy Single Sign On Service Provider (SP) certificate shipped with the product, which is used to sign SAML requests, will expire on April 21st, 2016.

 

If you are using the out-of-the-box certificate to sign SAML requests in BMC Remedy Single Sign On, requests will fail once the certificate expires.

 

In this blog, I will cover the steps to update the BMC Remedy Single Sign On (RSSO) SP certificate so that it has a new expiry date, which will prevent SAML authentication from failing.

 

If this certificate has already been replaced with a newer one with a valid future expiry date, you don't have to follow the steps mentioned in this blog. 

 

First of all, how do you find the certificate expiry date of the relying party (RSSO) for SAML authentication?

 

  • An easy way to find the certificate expiry is to log in to the ADFS tool and check the RSSO service provider relying party properties.
  • In the Signature tab, you should see the certificate expiry date.

 

Likewise, for other IdP tools that you are using with RSSO, contact your IdP administrator to check the RSSO relying party certificate expiry date.

 

What steps are necessary to update BMC Remedy Single Sign On (RSSO) SP Certificate?

 

Important Notes:

 

(A) The below instructions are written for Windows. All paths mentioned below are Windows paths. Please use the equivalent paths if you're using Linux or Solaris.

 

(B) The file name for the java keystore should be cot.jks. The alias for the java keystore (cot.jks) should be test2. The password for the cot.jks keystore is 'changeit'. Please do not change the password.

 

(C) Please make sure the Path environment variable includes the JDK or JRE bin folder, or you may get an error like "'keytool' is not recognized as an internal or external command". In Windows this means you'll need to edit the System Environment properties and update the global PATH variable.

 


 

Steps to update the certificate:

 

1. Update java keystore named cot.jks

 

Perform the following steps on the machine where the RSSO server is installed, from the <tomcat>\rsso\WEB-INF\classes folder:

 

a. Take a backup of existing cot.jks from <tomcat>\rsso\WEB-INF\classes folder

 

b. Delete alias ‘test2’ from existing cot.jks using keytool command line:

 

keytool -delete -alias test2 -keystore cot.jks

 

Note: The password for cot.jks is "changeit". Please don't change the password.

 

c. Create a new keypair with alias ‘test2’ in existing cot.jks

 

keytool -keystore cot.jks -genkey -alias test2 -keyalg RSA -sigalg SHA256withRSA -keysize 2048 -validity 730

 

Note: In the above example, we used 730 days as the validity period, which is equivalent to 2 years. You can set the validity period at your discretion.
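
If you want to double-check the expiry date of the new keypair before going further, you can list the entry (assuming keytool is on your path) and read the 'Valid from ... until ...' line in the output:

keytool -list -v -keystore cot.jks -alias test2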

 

d. Export ‘test2’ certificate in PEM format

 

keytool -export -keystore cot.jks -alias test2 -file test2.pem -rfc

 

e. Take a backup of the updated cot.jks

 

If you have other RSSO server instances in the same cluster, replace cot.jks in the <tomcat>\rsso\WEB-INF\classes folder on each of them with the cot.jks updated in step 1.e

 

2. Update signing certificate in RSSO Admin console

 

a. Log in to the RSSO Admin console

 

b. Go to ‘General->Advanced’ tab

 

c. Open the test2.pem file created in step 1.d in a text editor and remove the first line:

 

(-----BEGIN CERTIFICATE-----)

 

and the last line:

 

(-----END CERTIFICATE-----)

 

Also remove the newline delimiters (\r\n), and then copy the contents.

For example, in Notepad++ you can open the 'Replace' dialog, select the 'Extended' search mode, search for '\r\n', and click the 'Replace All' button.
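
If you are working on Linux and prefer the command line, one possible alternative to the Notepad++ approach is to strip the BEGIN/END lines and the newlines in one go, for example:

grep -v "CERTIFICATE" test2.pem | tr -d '\r\n'

This prints the base64 content as a single line that you can copy.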

 

 


 

d. Paste the content copied in step 2.c into the 'Signing Certificate' field, replacing the existing content in the text area

 


 

e. Click ‘Save’ button to save the change

 

f. Wait about 15 seconds, open the realm that uses SAML, and click the 'View Metadata' button in the 'Authentication' tab. Verify that the SP metadata is updated with the new signing certificate.

 

3. Update SP metadata at IdP side

 

- Export the SP metadata from step 2.f and save it to a local file

 

- Send the exported SP metadata and the new signing certificate from step 1.d to the IdP team for updating.

 

If the IdP is ADFS, the customer can add the new signing certificate as below:

 

a. Open ‘Properties’ dialog of the relying party for RSSO
b. Go to ‘Signature’ tab
c. Click ‘Add’ button, select the new signing certificate file and click ‘OK’

 


 

 

Notes for rolling upgrades (Cluster / High Availability environment)

 

If you require zero downtime for the signing certificate update in a cluster environment (assuming ADFS is the IdP), you can proceed in the following sequence:

 

1. Take one RSSO server instance down first, and perform step 1 on it
2. Perform step 2
3. Perform step 3 (remember NOT to delete the old signing certificate)
4. Bring the RSSO server instance back up
5. Take the second RSSO server instance down, update its cot.jks with the one already updated on the first RSSO server instance in step 1.e, then bring it back up
6. Repeat step 5 on all other RSSO server instances
7. After the keystore cot.jks is updated on all RSSO server instances, you can remove the old signing certificate from the RSSO relying party on the ADFS side.


MSSQL comes in many forms, versions, and with many authentication methods.

Most of the time we end up using regular MSSQL authentication, where we provide a username/password combination, the user gets authenticated in the DB, and there we go.

Many other times we need a more complex authentication method: Integrated Security, NTLM, Windows Authentication, Named Instances, Domains, etc. All those parameters can affect the way we need to log in to the MSSQL database when we use AI/Spoon to connect.

 

Spoon has two basic options to connect to MSSQL:

 

The first one is the most commonly used, "MS SQL Server".

This one uses the jTDS 1.2.5 driver.

This driver is good for most common cases, but if you want to use Windows Authentication, Integrated Security or Named Instances you may run into problems.

 

The jTDS FAQ (jTDS JDBC Driver) answers the most common questions about this driver, and even covers troubleshooting of the most common errors thrown.

 

I personally recommend you download the latest JTDS driver from here: jTDS - SQL Server and Sybase JDBC driver download | SourceForge.net

As of this writing it is 1.3.1. If you do not have this version you may encounter some issues connecting to MSSQL 2012 or higher, even if you are using the most basic of authentications.

 

If you want to use Integrated Security (also called Active Directory authentication, or AD) you can do so here as well, or even use Domain/User authentication. I will explain how each one works.

 

If you are not familiar with AD, it's a centralized authentication mechanism allowing access to the various hardware and services in the network. By centralizing the authentication process, the same user account can be used to access multiple resources, and it eliminates some of the setup needed to enable those users on various systems. Most DBAs prefer to use AD authentication for those reasons, and if you will be using PDI to access multiple MSSQL systems, you'll probably want to become familiar with setting it up.

 

  1. Although Microsoft provides their own JDBC driver, which we will cover later on this post, this time around we will be using the open source driver jTDS.
  2. Extract the archive file and open it. Copy the jtds-1.3.1.jar file to the Pentaho .../data-integration/libext/JDBC folder on your system. Remove the jtds-1.2.5.jar file from there.
  3. In the folder where you extracted the archive, locate the subfolder matching your system's architecture (x64, x86 or IA64). Open it, and open the SSO subfolder.
  4. Copy the ntlmauth.dll file to a folder on your path. (From a command prompt, enter echo %PATH% on Windows or echo $PATH on Linux to see the current path.) On my system, I copied the file (as root) to the /usr/local/bin folder. In Windows a good location could be C:\Windows\System32.
  5. Open the Pentaho GUI (aka Spoon) and start a new job. Click on the VIEW tab in the Explorer panel.
  6. Right click on Database Connections, and choose NEW to open the Database Connection window.
  7. Enter a name in the Connection Name box to identify it.
  8. Scroll down in Connection Type and choose MS SQL Server.
  9. In the Access panel, make sure Native (JDBC) is selected.
  10. In the Settings panel, enter your server's hostname or IP address, the database you want to connect to, the port SQL Server is using (by default it's 1433), and the user name and password in the appropriate fields. You can leave Instance Name empty unless your DBA tells you the server is using a named instance.
  11. In the leftmost panel, select Options. The right panel will refresh, and will probably only have one value entered: "instance". Leave the value as is.
  12. Only in cases where you want the actual Windows user account that started Spoon to be the one authenticated, add a parameter called "integratedSecurity" (watch the text case) and set the value to true. If this is set to true, you do not need to provide a username and password combination; Windows will use the current user that started Spoon to authenticate to the DB. This is most of the time NOT a good option, since, for example, you may need to connect to two different DBs with two different domain/user combinations. If so, do not use this parameter.
  13. Add another parameter called "domain" and set the value to your network's domain name. (You can use the full domain name or the shorthand one.)
  14. Click the TEST button at the bottom of the screen, and you should be rewarded with a successful connection window. Click OK and you are done.

Note: You may get the message of

The login is from an untrusted domain and cannot be used with Windows authentication. (code 18452, state 28000)

 

Some SQL Server machines are forced to work exclusively with NTLMv2 authentication. When attempting to create an account with SQL Server using Windows Authentication, its validation will fail with untrusted user/domain error.

In order to verify which authentication is used, execute gpedit.msc on the SQL Server host and look at the selected value of Computer Configuration->Windows Settings->Security Settings->Local Policies->Security Options->Network Security: LAN Manager
Authentication Level.

(ie. Authentication Level is set to "Send NTLMv2 response only. Refuse LM & NTLM" )

The default value for useNTLMv2 is false, so we need to set it to true to send LMv2/NTLMv2 responses when using Windows/Domain authentication.

 

We need to add the parameter "useNTLMv2" set to "true", alongside "domain" (step 13 above), as in the sketch below.
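
For reference, these settings map onto a jTDS connection URL like the one below. This is only a sketch with placeholder host, database, instance and domain names; in Spoon you normally set these as options on the connection rather than editing a URL:

jdbc:jtds:sqlserver://dbhost:1433/MyDatabase;instance=MYINSTANCE;domain=MYDOMAIN;useNTLMv2=true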

 

Note also that if you decide to go with Integrated Security, the username/password combination used will be that of the account that actually runs the service trying to connect to the DB. In this case it is Spoon, but if you run the job from the AI Console the connection will be triggered/established by the Carte Server. The Carte server is a process that runs under the ARSystem (as a child of it) and is started and monitored by ArMonitor. Its starting line is in the armonitor.cfg/conf file, and unless you do something about it, the Carte Server will be started by the same user that starts the ARServer, which means that this user may fail to connect even though you tested the job from Spoon and it was working.

 

For the above I do NOT recommend using Integrated Security unless really needed.

 

----------------

Now to the second option: MS SQL Server (Native)

 

You would normally use this option for better performance and for using the officially supported Microsoft JDBC connection. This JDBC driver is the most compatible driver on the market, and it's officially supported by the same company that releases new MSSQL versions, so it makes sense that any new version of the MSSQL DB Server may be accompanied by a new JDBC release as well. In case the jTDS driver above stops working, you may have to fall back to this one for new DB releases.

 

To deploy/install this you need to:

1. Download the Microsoft JDBC Driver 4.0 from the official Microsoft webpage: https://www.microsoft.com/en-us/download/details.aspx?id=11774

2. After downloading, run the installer so that the files are unzipped to one folder. Then go to that folder and grab sqljdbc4.jar; copy this file to <AtriumIntegrator folder>/data-integration/libext/JDBC/

3. Do the same with the sqljdbc_auth.dll file from the 64-bit folder (unless you are on a 32-bit system).

4. Open Spoon, create a new DB Connection and use the MS SQL Server (native) option.

If you want to use the integratedSecurity feature, you can leave the username and password fields blank, since it will use the credentials of the user who opened Spoon. On the left-hand side of the connection properties page, go to Options and add one property, integratedSecurity, set to true. A sketch of the resulting connection URL follows.
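
For comparison, the connection URL the Microsoft driver builds from these settings looks roughly like the following. The host and database names are placeholders, and this is just an illustration of the documented integratedSecurity property rather than anything Spoon-specific:

jdbc:sqlserver://dbhost:1433;databaseName=MyDatabase;integratedSecurity=true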


Hi!

 

BMC does not support running Spoon on Linux, although we know from the Pentaho forums that it is quite possible to do.

What problems did you encounter when running Spoon in Linux (if you ever tried)?

 

Here is an issue we recently observed:

 

Short explanation: the culprit of the original error is that the /tmp filesystem was mounted with the "noexec" option, which prevented any app from executing scripts there.

 

Long explanation:

1) Replace the swt.jar from the BMC Pentaho distribution (which is version 3.346 and located in "/opt/bmc/ARSystem/diserver/data-integration/libswt/linux/x86_64") with the one downloaded from https://www.eclipse.org/swt/ (version 4.527)

 

2) Edit spoon.sh (in the /opt/bmc/ARSystem/diserver/data-integration folder) and substitute:

 

OPT="$OPT $PENTAHO_DI_JAVA_OPTIONS -Djava.library.path=$LIBPATH -DKETTLE_HOME=/opt/bmc/ARSystem/diserver -DKETTLE_REPOSITORY=$KETTLE_REPOSITORY -DKETTLE_USER=$KETTLE_USER -DKETTLE_PASSWORD=$KETTLE_PASSWORD -DKETTLE_PLUGIN_PACKAGES=$KETTLE_PLUGIN_PACKAGES -DKETTLE_LOG_SIZE_LIMIT=$KETTLE_LOG_SIZE_LIMIT"

 

with:

 

OPT="$OPT $PENTAHO_DI_JAVA_OPTIONS -Xbootclasspath/a:${LIBPATH}swt.jar -Djava.library.path=$LIBPATH -DKETTLE_HOME=/opt/bmc/ARSystem/diserver -DKETTLE_REPOSITORY=$KETTLE_REPOSITORY -DKETTLE_USER=$KETTLE_USER -DKETTLE_PASSWORD=$KETTLE_PASSWORD -DKETTLE_PLUGIN_PACKAGES=$KETTLE_PLUGIN_PACKAGES -DKETTLE_LOG_SIZE_LIMIT=$KETTLE_LOG_SIZE_LIMIT"

 

(the only difference is the added -Xbootclasspath/a:${LIBPATH}swt.jar; the reason is explained at http://stackoverflow.com/questions/19969637/noclassdeffounderror-classnotfoundexception-while-using-swt)

 

3) Start Pentaho Spoon:

 

cd /opt/bmc/ARSystem/diserver/data-integration

./spoon.sh

 

-------------------------------------

There are more experiences here: Diethard Steiner on Business Intelligence: Having problems starting Pentaho Kettle Spoon on Linux? Here are some solutio…

 

Can you share yours?


They say that the stars are bright, late at night and deep in the heart of Texas. That was certainly the case here in Austin last night. The clouds rushing over the night sky occasionally exposed the Moon, which was in conjunction with Jupiter. What a sight. According to my team of astrologers this is supposed to be a favorable time when communications flow with clarity.

 

And so, this morning a colleague of mine made something very clear to me. He said that the Java AutoUpdate issue is currently the top concern in the field. At that point I visualized the faces of administrators who work with Java-based applications around the world, their expressions ones of dismay and puzzlement. My facial expression was more sceptical than that. I realized that every Java path we've ever used to hard-code the location of the java executable is no longer valid.

 

So what happened here? A Windows-based task called the Java AutoUpdater downloads Java updates and installs them at will. That in itself is not that big of a deal. In the past, the previous Java was left intact and it would still work wherever it was hardcoded.

But this time around the Java AutoUpdate also removes the contents of the former bin directory. Yes, the one from the previous install.

 

For example, java.exe is now gone from the JAVA_HOME\bin folder, and if you had that path in any Java application launcher then the app will no longer start. This path is no longer valid:

 

"C:\Program Files\Java1.7\bin\java.exe  -classpath ...."

 

Instead it becomes something else, and you need to manually change it to the new location.

 

"C:\Program Files\Java1.8\bin\java.exe  - classpath ..."

 

A bit of a nightmare if you think of how many Java-based applications are in use around the world.

 

Perhaps there is a lesson we can learn from this: do not hard-code Java paths this way. The alternative is to always use an environment variable like %JAVA_HOME% or %JAVA_BIN% instead of C:\Program Files\Java1.7\.

 

Example:

 

%JAVA_HOME%\bin\java.exe  -classpath ...

or

%JAVA_BIN%\java.exe  -classpath ...
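
As a minimal sketch on Windows (the Java folder below is only a placeholder for whatever the updater installed), you can set the variable once and keep every launcher pointing at it; open a new command prompt afterwards for the variable to take effect:

setx JAVA_HOME "C:\Program Files\Java\jre1.8.0_92"

"%JAVA_HOME%\bin\java.exe" -version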

 

Unfortunately this presents another issue for compatibility: what if my application was coded against a specific feature that has now changed or been deprecated in the new version? Either way, please be on the lookout and always first check the contents of your bin directory, formerly known as JAVA_HOME.


*RING*.... *RING*....

 

The phone rang a couple of times while I was finishing my morning coffee... I don't like being disturbed while at it... and normally Watson likes picking up the phone, but this time he wasn't around.

After some more futile rings, I decided to pick it up. Surely it was another one of those cell phone companies offering me 4g or some other new technology, but no. It WAS some new technology, but nobody was selling me anything.

We had a case here of a guy who was trying to use the new 9.1 Remedy, but the Atrium console would not open. Oh my... You can imagine what one can do without an Atrium Console nowadays; terrible, terrible news.

Of course, the victim wanted us to engage at full speed. And so we did.

Watson picked me up and we went to the source.

 

Reading the Tomcat logs, the problem was clear: a class was missing.

<timestamp> - Initializing Atrium Servlets
javax.servlet.ServletException: flex.messaging.config.ConfigurationException: An error occurred trying to construct FlexFactory 'com.bmc.atrium.lcds.AtriumFlexFactory'.  The underlying cause is: 'java.lang.NoClassDefFoundError: com/bmc/cmdb/api/CMDBValue'.
  at flex.messaging.MessageBrokerServlet.init(MessageBrokerServlet.java:184)
  at com.bmc.atrium.web.AtriumServletDispatcher.init(AtriumServletDispatcher.java:60)
  at com.bmc.atrium.midtier.RealAtriumWidgetPlugin.processRequest(RealAtriumWidgetPlugin.java:156)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:497)
  at com.bmc.atrium.modulelauncher.ContextClassLoaderInvocationHandler.invoke(ContextClassLoaderInvocationHandler.java:26)
  at com.sun.proxy.$Proxy18.processRequest(Unknown Source)
  at com.bmc.atrium.modulelauncher.AtriumWidgetPlugin.processRequest(AtriumWidgetPlugin.java:231)
  at com.remedy.arsys.plugincontainer.impl.PluginServlet.postPluginInfo(PluginServlet.java:44)
  at com.remedy.arsys.plugincontainer.impl.PluginContainer.processRequestInfo(PluginContainer.java:86)
  at com.remedy.arsys.stubs.AuthenticationHelperServlet.doRequest(AuthenticationHelperServlet.java:79)
  at com.remedy.arsys.stubs.GoatHttpServlet.postInternal(GoatHttpServlet.java:98)
  at com.remedy.arsys.stubs.GoatHttpServlet.doGet(GoatHttpServlet.java:57)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at com.remedy.arsys.stubs.TenancyFilter.doFilter(TenancyFilter.java:49)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:521)
  at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1096)
  at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:674)
  at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2500)
  at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2489)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.lang.Thread.run(Thread.java:745)
Caused by: flex.messaging.config.ConfigurationException: An error occurred trying to construct FlexFactory 'com.bmc.atrium.lcds.AtriumFlexFactory'.  The underlying cause is: 'java.lang.NoClassDefFoundError: com/bmc/cmdb/api/CMDBValue'.
  at flex.messaging.config.FactorySettings.createFactory(FactorySettings.java:73)
  at flex.messaging.config.MessagingConfiguration.createFactories(MessagingConfiguration.java:158)
  at flex.messaging.config.MessagingConfiguration.configureBroker(MessagingConfiguration.java:108)
  at flex.messaging.MessageBrokerServlet.init(MessageBrokerServlet.java:132)
  ... 40 more
Caused by: java.lang.NoClassDefFoundError: com/bmc/cmdb/api/CMDBValue
  at com.bmc.atrium.lcds.ProxyRegistry.initialize(ProxyRegistry.java:123)
  at com.bmc.atrium.lcds.AtriumFlexFactory.initialize(AtriumFlexFactory.java:86)
  at flex.messaging.config.FactorySettings.createFactory(FactorySettings.java:61)
  ... 43 more
Caused by: java.lang.ClassNotFoundException: com.bmc.cmdb.api.CMDBValue
  at com.bmc.atrium.modulelauncher.AtriumDVFClassLoader.loadClass(AtriumDVFClassLoader.java:57)
  ... 46 more
Throw Error - 9430

 

Lines 50, 55, 44, and 02 were all giving me the same clue: the CMDBValue class was missing.

But how? This is a very common class, and it is found in the cmdbapi.jar file.

This jar file varies by version, and it should be there under .../midtier/ThirdPartyJars/servername/

That is where the mid tier fetches all the jar files it needs for dependency classes.

We checked the folder, and the jar file was not there. So my suspicion was right.

Resolving this looked pretty simple:

Find cmdbapi91.jar under .../atriumcore/cmdb/sdk64/bin

Copy it to the ThirdPartyJars folder and restart the mid tier.
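
If you want to confirm that the jar actually contains the missing class before copying it, a quick check from a command prompt (assuming the JDK's jar tool is on your path) is:

jar tf cmdbapi91.jar | findstr CMDBValue

A line listing com/bmc/cmdb/api/CMDBValue.class tells you the class is packaged in that jar.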

 

Elementary.

 

There is a cleaner, better solution though.

  1. Access the BMC Remedy Mid Tier.
  2. When prompted, enter your AR credentials.
  3. Open the Data Visualization System Files form.
  4. In the Data Visualization System Files form, create an entry by entering the information in the following fields.
    • Name = cmdbapi91.jar
    • Description = CMDB Java API
    • Status = Active
    • Platform = All
    • Version = 9.1.00
  5. Click Add to attach the cmdbapi91.jar available at ..\..\..\AtriumCore\cmdb\sdk64\bin
  6. Click Save.
  7. Restart BMC Remedy Mid Tier.

Recently I needed to troubleshoot an issue with BPPM Publishing where Impact Models were not being successfully published in BPPM.

Some smaller Models were successful, other larger Models would just not publish.

The Models showed up correctly in Remedy (Impact Designer), but when attempting to publish to BPPM they would time out or throw a large number of errors in the BPPM logs.

 

Looking at the log files from AR System, I could see lengthy queries being executed from BPPM that were taking upwards of 20 minutes to execute before the publishing ultimately failed. SQL analysis showed queries that were using "Null" values, so adding indexes to speed up the queries and help the process was out of the question.

 

Looking into the API calls that were executed, I could see that BPPM was executing an API query using the "CMDBGraphWalkQuery" function with no batching and no levels (which was effectively pulling the full model down across all classes) to "walk" the Model and obtain all the CI's and Relationships.

 

Service Models for this client were loaded via a 2 step process:

 

  1. Main Model was imported via ETL (AI Jobs/Transformations) from an Excel spreadsheet
  2. The remainder of the Model was imported from multiple external data sources, again via ETL (AI Jobs/Transformations)

 

This caused a problem when it came to troubleshooting the root cause as there was no "one source" of information where the full Model resided.  We analysed the Excel spreadsheet for anomalies and corrected, but no amount of troubleshooting and diagnostics was able to pin point the actual issue across the full Model.

 

For BPPM to successfully pull down the Model (CIs and Relationships), it needs to know which BPPM Cell to publish the Model to. In a large BPPM implementation, there may be more than one BPPM Cell configured, and the BPPM environment can span multiple networks, be segmented off based on security requirements, etc., so knowing where to publish a Model to (the target Cell) is key to a successful publishing attempt.

 

Eventually we determined through the BPPM logs that there were multiple CI's that were missing the "HomeCell" and "HomeCellAlias" values in the CMDB records as required by BPPM.  This may have been due to the way the Models were loaded, or another issue with the propagation of the Cell values via the Reconciliation process. 

 

With the larger Models having upwards of 50k+ CIs (and Relationships), identifying each and every one of the CIs related to a particular Model was practically impossible due to the way the data was loaded (and Infrastructure CIs could be used across multiple Models). So we were left with going line by line through the error logs to identify the offending CIs and updating them manually, or coming up with another method to extract the data.

 

We eventually determined that the only way to get past this issue was to "emulate" what the BPPM Publishing process was doing, so that we could see which CIs form part of the Model we were attempting to publish.

From here, we could also create a "self documenting" method for a Model where all the related CI's were exported to a file where we could see where the values were missing and then schedule a bulk update to set the "HomeCell" and "HomeCellAlias" values.

 

Attached is the Java application that was created. It uses the "CMDBGraphWalkQuery" API call to "walk" the Models from top to bottom and produce a CSV file showing all CIs related to that Model, including the "HomeCell" and "HomeCellAlias" values.

It can walk a single Model or multiple Models (using the configuration file to set parameters).

There are options to update the values on the fly or just walk the Model and produce an output file (please see the Instruction file included).

 

To use, just extract the zip to a directory of choice (the file structure in the zip needs to be kept intact) and run the program either using the command line options or the configuration file.

 

I hope this helps anyone else facing a similar issue with publishing to BPPM.

 

Carl


So Watson got a call. REST API.

We had heard about the guy at a local pub. He was here to take the business out of another guy... "Web Services" they called him.

That was all we knew about it, until that night.

We started following him up, according to the source the guy was not responding well. Bad Request 400.

We sent him the request exactly as stated in the documentation:

http://localhost:8008/api/cmdb/v1/instance/BMC.SAMPLE/BMC.CORE/BMC_ComputerSystem

 

But still, the guy... nothing.

 

A bad request means that the request does reach the Jetty Server, but you probably don't know what the Jetty server is anyway.

 

So first things first.

REST API is a new cool feature of ARS.

When you start ARS it will also start the REST Engine, and if you set it up correctly it will be listening for incoming requests on a port; these requests are API calls that are resolved and answered using the same mechanism. All that magic happens because of something called Jetty, and that's all you need to know about that.

So how do you set it up correctly... well, that part I'm not so sure, but I will tell you a simple way that works:

 

1. Go and open the jetty-selector.xml file under .../Arsystem/jetty/etc/

2. Comment out the first connector and uncomment the second one, so that the file looks like this:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">


<Configure id="Server" class="org.eclipse.jetty.server.Server">


  <!-- This is the default, HTTPS connector. Place your keystore file in this
  directory and set your passwords, preferrably obfuscated, to configure HTTPS. -->
  <!-- <Call name="addConnector">
  <Arg>
  <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
  <Arg>
  <New class="org.eclipse.jetty.http.ssl.SslContextFactory">
  <Set name="keyStore"><Property name="jetty.home" default="." />C:\Users\Administrator\keystore</Set>
  <Set name="keyStorePassword">OBF:1tv71vnw1yta1zt11zsp1ytc1vn61tvv</Set>
  <Set name="keyManagerPassword">OBF:1tv71vnw1yta1zt11zsp1ytc1vn61tvv</Set>
  <Set name="trustStore"><Property name="jetty.home" default="." />C:\Users\Administrator\keystore</Set>
  <Set name="trustStorePassword">OBF:1tv71vnw1yta1zt11zsp1ytc1vn61tvv</Set>
  <Set name="excludeProtocols">
  <Array type="java.lang.String">
  <Item>SSLv3</Item>
  </Array>
  </Set>
  </New>
  </Arg>
  <Set name="port">8443</Set>
  <Set name="maxIdleTime">30000</Set>
  </New>
  </Arg>
  </Call> -->


    <!-- Uncomment this HTTP connector if you are using a reverse proxy that
  handles HTTPS. NOTE: Setting the forwarded property to true will process
  the X-Forwarding headers. -->
  <Call name="addConnector">
  <Arg>
  <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
  <Set name="host"><Property name="jetty.host" /></Set>
  <Set name="port"><Property name="jetty.port" default="8008" /></Set>
  <Set name="maxIdleTime">300000</Set>
  <Set name="Acceptors">2</Set>
  <Set name="statsOn">false</Set>
  <Set name="confidentialPort">8443</Set>
  <Set name="lowResourcesConnections">20000</Set>
  <Set name="lowResourcesMaxIdleTime">5000</Set>
  <Set name="forwarded">true</Set>
  </New>
  </Arg>
  </Call>


</Configure>

 

Lines 10 and 31 have the comment characters.

Lines 37 and 51 have had the commenting brackets removed.

 

3. Configure jetty.port if needed; the default is 8008, which Watson very much prefers to leave untouched.

 

So once that is completed, you have to restart ARS so that it can bring up the listeners!

After that is done, you can test your listener to make sure it is there, and you don't waste valuable time doing nothing... something that Watson likes to do on Sunday mornings, nothing.

You don't know how to check? Easy.

Open up a command prompt, run:

 

netstat -nab | findstr /c:"8008"

 

If it is listening you should get something like this:

  TCP    0.0.0.0:8008           0.0.0.0:0              LISTENING

  TCP    [::]:8008              [::]:0                 LISTENING

 

Nice and tidy.

Now we need to go to the next step. POSTMAN.

This little fellow can be a burden when you live in London, but this one is a different kind of post-man.

You can download him from the wonderful internet, and make sure you get at least version 3.2.8 since I've had some issues with older versions.

Once you get that installed you open it up and you go and create a request like this one:

 

(Screenshot: the Postman request used to obtain the token)

 

That thing at the bottom is the Token. You need that token because it will let you go and call the real APIs. So in fact it is a key that opens the door to the real world of REST.

 

So now, with the Token we want to do some CMDB stuff:

 

(Screenshot: the Postman GET request against the CMDB instance endpoint)

 

You need to do a GET; the URL parameter can be one of multiple options. For more options I recommend reading the manual... boring!... or getting someone like Watson to do it for you.

 

Use the Headers tab to add an Authorization header; on the right-hand side write AR-JWT, a space, and then paste the token we got on the previous call.
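
If you would rather script it than click around in Postman, the same two calls can be made with curl. This is only a sketch that assumes the default Jetty port 8008, the standard AR JWT login endpoint, and placeholder credentials; adjust host, port, user and password for your environment:

curl -X POST "http://localhost:8008/api/jwt/login" -H "Content-Type: application/x-www-form-urlencoded" -d "username=Demo&password=mypassword"

The response body is the token. Pass it back as the AR-JWT Authorization header on the next call:

curl "http://localhost:8008/api/cmdb/v1/instance/BMC.SAMPLE/BMC.CORE/BMC_ComputerSystem" -H "Authorization: AR-JWT <token>"

Replace <token> with the string returned by the first call.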

 

You are all set! We solved the mystery and time will tell what will happen to Web Services.



The BMC training schedule is posted for January and February. We want you to be successful with BMC solutions. We run classes year-round and worldwide across the BMC product lines. Below are the classes listed by class/product name.


Review the schedule below and register today. Please check BMC Academy for the latest availability; BMC reserves the right to cancel or change the schedule. View our cancellation policy and FAQs. As always, check back in BMC Academy for the most up-to-date schedule.


To see all our courses by product/solution, view our training paths.


Also, BMC offers accreditations and certifications across all product lines; learn more.

 

For questions, contact us

Americas - education@bmc.com

EMEA - emea_education@bmc.com

Asia Pacific - ap_education@bmc.com

 

Class, followed by Dates/Locations:
BMC Atrium CMDB 8.x: Administering - Part 2

11 January / EMEA / Online

25 January / Americas / Online

1 February / EMEA / Paris, FR

8 February / Americas / McLean, VA

8 February / EMEA / Winnersh, UK

22 February / Asia Pacific / Online

22 February / EMEA / Online

BMC Atrium CMDB 8.x: Administering - Part 3

18 January / EMEA / Online

1 February / Americas / Online

15 February / EMEA / Winnersh, UK

BMC Atrium CMDB 9.0: For Consumers

11 January / Americas / Online

11 January / EMEA / Paris, FR

18 January / Asia Pacific / Online

25 January / EMEA / Online

16 February / EMEA / Dortmund, DE

22 February / Americas / Online

29 February / EMEA / Paris, FR


Hello and thank you for reading. First let me apologize for the highly granular technical details in this discussion.

 

This information has been provided to you by the official BMC Software AtriumCore Support organization in cooperation with the AtriumCore Customer Engineering teams in Austin, TX and San Jose, CA. We guarantee its accuracy and apologize for any typos that were not identified during publishing. Other publications on this topic are to be considered inferior unless otherwise specified; by inferior we mean that this article takes precedence and supersedes any previous discussions on this topic.

Additional references are available and recommended for review:

 

BMC Documentation:

Best Practices for the Common Data Model - BMC Atrium Core 9.0 - BMC Documentation

 

Knowledge modules:

https://kb.bmc.com/infocenter/index?page=content&id=KA380649

 

The following is an explanation of the CMDB Common Data Model design and its reaction to being introduced to overlays. Let’s begin by making the following general statement:

 

“Forms owned by the BMC AtriumCore CMDB application should never be overlaid, customized or otherwise altered outside of the CMDB Class Manager. AR Developer Studio does not have any role in managing CMDB data structures and its use to modify and overlay CMDB schemas can have and does have severe impact on functionality of the CMDB and can lead to data loss.“

 

Making the right, informed decisions when modifying CMDB data structures can make all the difference in preventing long-term impact on performance and even AR Server stability. I'll begin by explaining the CMDB design a little.

 

CMDB Application - BMC.CORE

 

CMDB forms are created in the same fashion as inner joins, where each joined form has a common field used to find specific results. Querying these tables as a "joined view" will return only the records where both tables match exactly the same value in one column of the row (field). It's like pinpointing a GPS location on the map by its coordinates. With respect to BMC's CMDB, these inner joins use the exact InstanceId value to find a unique match. To clarify this point, the T(#) tables are indeed "physical" tables in whatever RDBMS you have deployed on. I put "physical" in quotes because technically it's just electromagnetic charge, and no one is serving dinner on that table. By physical I mean the difference between a database table and a joined view, where the view is a window that allows us to present the data in a controlled way. The arschema table has records that include the schema IDs that make up the AR metadata, and "BMC.CORE:BMC_BaseElement", "BMC.CORE:BMC_ComputerSystem_" and "BMC.CORE:BMC_ComputerSystem" are listed there as tables or joins, but they are different DB objects (views) created to be used as part of the CMDB data present in an ARSystem database.

 

 

For example consider this "joining" of

 

T503 (BMC_CORE_BMC_BaseElement) and T522 (BMC_CORE_BMC_ComputerSystem_)

 

gives the result of the intersection of both tables where InstanceId (field 179) matches a record with the same value in both tables. This then gives the results as defined by the view, with values end users understand as a Configuration Item (CI). Here the views show the joining of two or more tables that have common data and are used for data presentation to end users. With this design we can manage all aspects of the data with extreme precision.

 

NOTE: Please note that the schema IDs used here, numbers 503 and 522, are not static IDs from one database to another and can change. They are not exclusively identified as the BaseElement or ComputerSystem table IDs. Deployments in your environment would likely have different IDs. Conversely, all attributes will have the same column ID from one database to another. For example, Status will always be C7. There are some exceptions to this for column IDs and attribute names, but in general this will apply to most attributes.

 

I am going to add some SQL statements here to illustrate it. Running two queries that find a CI record with the same InstanceId in two different tables may seem impossible, because we understand a BMC.CORE:<CLASS> schema to have a unique value for field InstanceId (179); it has a unique index on it, so finding more than one record should not be possible. That is true, but not if you separate the inner join into its two underlying tables.

 

Consider the following query that looks in each table:

 

select * from T522 where C179 = 'OI-306843f7d118455882a5847bf9200c8f'

select * from T503 where C179 = 'OI-306843f7d118455882a5847bf9200c8f'

 

Where, in my example, table T522 is BMC_CORE_BMC_ComputerSystem_ and T503 is BMC_CORE_BMC_BaseElement. Note here that both are actual tables. We are more familiar with the name BMC.CORE:BMC_ComputerSystem, which is a view of the two tables where the joined data is displayed.

 

select * from T503 INNER JOIN T522 on T503.C179 = T522.C179 and T503.C179 = 'OI-306843f7d118455882a5847bf9200c8f'

 

This is essentially the same thing as running:

 

select * from BMC_CORE_BMC_ComputerSystem where InstanceId = 'OI-306843f7d118455882a5847bf9200c8f'

 

Here is an illustration showing these results in MSSQL Studio:

 

(Screenshot: the three query results side by side in MSSQL Studio)

 

Note that the last result, the one showing the RequestId (field C1), is a composite record from both tables.

 

This is just to illustrate how most CMDB data is stored and looked up.

 

If either table is altered outside the inner join (view), then the view will no longer show the result as a whole record. We do not recommend doing this in practice; however, some data manipulation can be performed with a certified CMDB Data Administrator. I can give two reasons for not doing it for you to consider. For one, making changes at the database level bypasses workflow. Secondly, altering and specifically deleting data through a SQL query will impact only one table and can make 'half' of the record disappear. Cases where the BMC_BaseElement part of a joined record exists while the BMC_ComputerSystem half is not found have been reported, and they are a direct result of manipulating data through the SQL back end.

 

For example if you run this SQL:

 

"delete from BMC_ComputerSystem_ where MarkAsDeleted = 1"

 

then you're only clearing out the ComputerSystem_ side while leaving the BaseElement table still full of that data. If you think you've done something like that in the past then you can check it by running this type of SQL query:

 

select count(*), ClassId  from BMC_CORE_BMC_BASEELEMENT where classid = 'BMC_COMPUTERSYSTEM' and instanceid not in (select instanceid from BMC_CORE_bmc_COMPUTERSYSTEM) group by ClassId

 

Any results from the above query mean that there are records in the BMC_BaseElement table that are not found in the BMC.CORE:BMC_ComputerSystem class container. You can substitute the COMPUTERSYSTEM ClassId in the above query with other class ID values like BMC_IPENDPOINT and so on.

 

Please refrain from such practices. Use CMDBDiag or a Reconciliation Delete or Purge activity instead. These tools can help you delete data without violating data consistency.

 

The next layer is the AR metadata, a set of tables that hold references to the schema IDs of the actual table structures.

 

- Table structures use prefixes like T, H and B.
- AR metadata is comprised of various tables, but for the CMDB the ones that matter most are:
    - arschema - references tables by ID (schemaid)
    - field - references columns in tables by fieldid and associations with schemaid

 

There are other tables like "escalation" (references all escalations) and "filter" (references all filters), among others. These are not relevant for this topic, so I will not list them all here.

 

ARSCHEMA is usually the out-of-the-box name or ID of the database; however, this name can be different if AR Server was installed into a database that was created before the installer was started. For the context of this posting, just note that the database can have different names, but the "arschema" table itself can only be named "arschema". It is a table inside the ARSCHEMA database. So, when I mention "arschema" in the following paragraphs, I mean the arschema table rather than the database.

 

Records in "arschema" table are mostly pointers to tables that will use schemaid to look up the data. For example SchemaId 456 is referring to a table T456. Unfortunately these structures have no restrictions set on them and  can  be modified via BMC Developer Studio and become Overlays with "Best Practice Mode" and Customizations if "Development Mode" is used. The important thing to understand is that even CMDB data has a T table. This discussion is not about the design of AR Schema tables as it is about the CMDB data structures, so please forgive me if I am leaving out some details related to AR Schema database.

 

 

CMDB Metadata and the Common Data Model (CDM)

 

The CMDB application is made up of several components: some APIs, minor workflow, database tables, and CMDB metadata. The last one is built in memory when the CMDB API is loaded by AR Monitor. It mainly queries forms like "OBJSTR:AttributeDefinitions" and "OBJSTR:Class" as the two main sources for the hash tables (arrays) where the data will be processed. The main difference between CMDB data structures and "arschema" data structures is that the CMDB uses a class hierarchy which "inherits" fields from its parent tables. The inheritance is simulated, and for the data structure of a CI or Relationship this just means that tables have a common purpose for data that is distributed. There is no other structure to date that uses this approach to storing data within the ARSCHEMA. Only the CMDB does this.

 

If you would like to know more about the data management structures, please see the DMTF (Distributed Management Task Force) designs (http://www.dmtf.org).

 

In this design, all common attributes of objects are stored at the base parent class and get more specialized with their child classes. A parent class that has a child class with specialized attributes will not be aware of its child's attributes. For example, the attribute "MACAddress (C260140111)" will only be found in BMC_LANEndPoint and could not be associated with a BMC_Person record, where it has no meaning. The attribute appropriate to define an object will be owned by the class that most closely matches the attributes of the real-world object. Sending a SQL query to the BMC_BaseElement class with MACAddress included in the where clause will result in the error: "Invalid column name 'MACAddress'."
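
To illustrate that last point, a query like the following against the BaseElement view would fail with exactly that error, because MACAddress lives only on the more specialized classes (the MAC value here is just a made-up example):

select InstanceId, Name from BMC_CORE_BMC_BaseElement where MACAddress = '00:50:56:AA:BB:CC'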

 

 

These tables were designed to show data for various reasons, including data visualization. The volume of data reflects the number of discovered items of various types, and hence these tables were intended to be trimmed down for faster data processing. Typically, the more data in a table, the longer it takes to look it up. Our expectation was somewhere in the millions, and some of the largest deployments seen in our experience are in the range of ~80,000,000 records. But those are just the CIs. There are also Relationships, which also use inheritance. The total size of the CMDB data can be in the billions, although such data volumes would require specialized hardware that can handle processing it.

 

All CMDB tables typically begin with BMC.CORE, also known as the namespace, and the object has a human-understandable ClassId that adds meaning. ComputerSystem, Person, NetworkPort: these all mean something to us within the context of ITIL. Each class has attributes that increase in number, and the further a class is removed from its parent, the more specialized it will be. For example, all attributes of the BaseElement class are inherited by all CI classes, but BaseElement is not aware of any attributes of its children. Editing BMC.CORE forms in Dev Studio only edits that one table and does not push the change to any other table. For this you need the Class Manager exclusively.

 

BMC AR Developer Studio has the ability to create overlays, which are the "modern" way to introduce customizations to ARSCHEMA. However, these are still customizations that sit outside of the CMDB. Unfortunately we cannot prevent AR Administrators from adding customizations to the CMDB, given the features Dev Studio offers, but it does allow us to separate customizations more easily than was possible in the past.

 

All CMDB data structure changes need to be made with the Class Manager, which builds the set of instructions for CMDBDRIVER, which in turn does the work of changing the data structures in the CMDB. Changes to a parent class will always propagate to its children. If you use BMC Developer Studio to add overlays to BMC.CORE:BMC_BaseElement, for example, that change will only be set on the BMC.CORE:BMC_BaseElement class and will not propagate to its child classes.

 

Asset Management (AM), as an example, is an application that takes advantage of the CMDB data store, but it does not use the inheritance model of the CMDB. Since version 8.0, AM stores its data in AST:Attributes and qualifies CMDB classes by records in AST:ClassAttributes. Please refer to Updates to CI lifecycle data attributes - BMC Remedy IT Service Management Suite 8.0 - BMC Documentation for more details of this change.

 

In AM you then have two halves of a composite record that is presented to the Asset Operator in one interface. They see the data through the AST:<CLASS> views as one record. One half of the data is in AST:Attributes and the other half is stored in BMC.CORE:<CLASS>, and both are joined into one visual interface. In order to join the data into a view that offers Asset Operators the access they need, we need to run the CMDB to Asset Synch. In this synch, all classes that are selected to have a UI in Asset are processed by a triggered synch job. Updates to individual fields (attributes) are also done this way, by altering the tables in Asset. Adding overlays to AST:Attributes puts locks on these Asset views, and this often prevents the CMDB to Asset (SynchUI) functionality from completing successfully.

 

AST:Attributes must be edited in Base Development mode so that these changes can be added during the synch. CMDB2Asset Synch is overlay agnostic; it has no idea that overlays exist. The expectation by end users that CMDB2Asset Synch will update overlays goes unrealized. Unfortunately this leads to a significant amount of extra work being done before reaching out to BMC Support for help.

 

In conclusion, use overlays with caution, or rather not at all, with the CMDB data structures, and please set your expectations that you will likely be asked to undo all overlays related to the CMDB and to joins related to CMDB data structures. Upgrading to AtriumCore version 9.0 will require all previously created overlays of the CMDB forms to be removed.

 

Daniel Hudsky


(Slide images: Handling Roundtrips)


Our Customer Programs team has posted the following as a call for Configuration Managers to take part in our UX study sessions. Please take this opportunity to contribute to the future of Atrium CMDB and Configuration Management - see the post at bmc Remedy with Smart IT UX Design Sessions for Configuration Managers.


This blog is not about configuring BMC Atrium (data) Integrator. Instead I wanted to blog about how Atrium Integrator trouble tickets get routed to the SMEs. However, if you're reading this because you want to understand AI, then look here first:

 

Understanding Atrium Integrator

 

 

 

For this blog I just want to refer to Atrium Integrator for what it does. Its function is defined as a method to transfer data from various sources into data stores structured within the AR schema.

 

BMC chose the Pentaho technology after researching alternatives and found this Java-based tool to be the best fit for data transfers. This means that AI can be used to import data into any form in the AR schema. It does not necessarily mean that any issue encountered with the transfer will be related to the Atrium Core data stores or AR Server configuration.

 

Data transfers with Atrium Integrator intended for the CMDB data store are created via the Atrium Integrator console. Assignment of issues related to this is easy: BMC AtriumCore Support. 

 

It diverges from there. Other BMC Remedy applications can also receive data by using the Atrium Integrator. Asset Management, Change Management and other apps can get data by adding transformation mappings with the Pentaho Spoon client. These will still show up as jobs in the Atrium Integrator console and can be triggered or run by the scheduler. Any Pentaho plugin issues can be resolved by AR Server support and Atrium Core support, but not if the trouble is with the data mapping itself. The Atrium Core or AR Server support teams are not going to be familiar with the requirements of applications outside of their respective support boundary. For example, if the destination form for the data is SRM or Incident Management, then support tickets sent to Atrium Core support or AR Server will be rerouted to SRM or Incident Management anyway.

 

We always try to achieve the fastest resolution possible. That is true for any issue and applies to any group within the BMC Support organization. Customers may not see it that way because their ticket does not seem to be getting any attention at first, and that is also our concern. Our internal routing of tickets is not transparent externally. This very experience is the reason for my blog post today. We want to work with the community, and that requires communication.

 

This is what I want to achieve with this post today. Anyone that needs support with Atrium Integrator can help with the routing of the issue using the following logic:

 

 

If the issue is with the feature on the left, the best BMC Support team assignment is on the right:

  • Spoon Client (not application specific): AR Server or UDM
  • Pentaho Plugin (KETTLE): AR Server or UDM
  • CMDBOutput, CMDBInput, CMDBLookup methods: Atrium Core
  • Creating AI Jobs or Schedules with existing jobs in AI Console: Atrium Core
  • AROutput, ARInput, ARXInput methods: AR Server or UDM
  • UDM forms in general: UDM
  • Carte Server install and configuration: AR Server
  • AI Users/Roles and Permissions: Atrium Core
  • UDM Users/Roles and Permissions: UDM
  • AIE to AI job migration tool: Atrium Core
  • Installation of Atrium Integrator Server or Client: Atrium Core
  • Midtier-related issue with AI console access: Atrium Core or AR Midtier
  • Application-specific support other than CMDB Core forms: the application team that owns the destination form


An article published by Forbes (sponsored by SunguardAS) details why a large proportion of CMDB implementations fail.

I wanted to complement that article and provide our perspective on the topic, based on feedback from our market and the capabilities provided by Atrium.


The trends in IT more than ever require a solid control over configurations:

  • Larger, more complex, and dynamic data centers accelerate the risk of bad changes, and push the need for automation
  • Adoption of public and private clouds results in more vendors, more operators, and more integration layers
  • The accelerating demand for digital services from the business places IT in tough situations where reactivity and efficiency are key ingredients for success


This extends the benefits of Configuration Management beyond what was outlined in the article:

  • Change control/change management: Documenting your environment illustrates the many interdependencies among the various components. The better you understand your existing environment, the better you can foresee the “domino effect” that changing any component of that environment will have on other elements. The end result: increased discipline in your IT change control and change management environment.
  • Disaster recovery: In the event of a disaster, how do you know what to recover if you don’t know what you started with? A production CMDB forms the basis for a recovery CMDB, which is a key element in any business continuity/disaster recovery plan. That comprehensive view of what your environment should look like can help you more quickly regain normal operations.

But also:

  • Automation: With the growing scale of data centers, there is no option but to automate routine tasks. That spans IT Operations Management, which needs business-driven provisioning, patching, and compliance, and IT Service Management, which needs to accelerate incident resolution by efficiently prioritizing and categorizing the work.
  • Performance and availability: With availability being so critical to business success, how can IT be proactive and fulfill SLAs if it cannot map events that impact the infrastructure to the business service that is affected? How can capacity decisions be business driven without an accurate picture of the environment?


The article lists four reasons for CMDB failure (competing priorities, limited resources, complacency, and an overly manual approach).
The very mention of a “CMDB project” is symptomatic: many organizations initially consider only the technology aspects, rather than establishing Configuration Management as a key discipline that relies on CMDB technology. The human factor is in most cases the #1 source of failure, and there are key questions that cannot be ignored or forgotten throughout the implementation:

  • What is the business reason for Configuration Management?
  • What current and future problems is Configuration Management going to address?
  • Who is the sponsor for this implementation?
  • What are the processes that will interface with Configuration Management, either to provide data or to consume data?

Answering these questions ensures a top-down approach, one that starts with a vision and then drives the boundaries of the data model, the types of integrations, and so on.


Once the implementation has kicked off, there are other reasons that can lead to failure such as:

  • The data getting into the CMDB is not governed correctly: it is Configuration Management’s responsibility to ensure that the data is accurate and transformed appropriately so that it can be referenced reliably. This requires regular reviews of the rules and filters that automatically govern data accuracy
  • Expecting 100% coverage before going into production feeds the perception that the CMDB has failed. Configuration Management is a continuous practice, and CMDB implementations need incremental successes because the target will always be moving


When it comes to tips for success, I can’t agree more with the article about the absolute necessity to “automatically update the comprehensive picture of your environment to reflect the potentially tens of thousands of changes per year to your environment.” Users of Atrium Discovery and Dependency Mapping (ADDM) can witness how efficient it is at feeding Atrium CMDB with trustworthy data that can be automatically synchronized with service models.


Atrium CMDB definitely provides the most comprehensive solution in terms of its capabilities to handle incoming data, the interfaces it offers to data consumers, its scalability, and the wealth of integrations that exist with BMC and other vendors’ products.

Recommended reading: Critical Capabilities for Configuration Management Database (Gartner, June 2014).


A key benefit is that it does not require different tools for different data transformation operations. Because of this richness, an implementation has to start with the right understanding of the tool and of how it should be used. To that end, the documentation includes Best Practices that guide an implementation towards success in understanding the data model, loading data, normalizing it, and ensuring correct reconciliation with other sources of data.


In summary, Configuration Management is needed more than ever and must be addressed as a discipline that leverages the most appropriate tools to guarantee data accuracy, high levels of automation, and strong integrations that drive the most value.


Atrium is the most widely deployed CMDB, so it probably accounts for the largest number of failed implementations. The other side of the coin is that it also accounts for the largest number of successes. This is confirmed by its users, and the cited 85% failure rate is certainly not accurate when applied to it.



Share:|

In the first webinar of the Effect-Tech CMDB series we discussed the upfront aspects of properly setting the scope of your CMDB initiative. We covered the high-level implementation choices and why a use-case-driven approach may be the best method to deliver value more quickly. After discussing these options, we concluded part 1 by introducing a service model architecture that can be used to initially model your IT environment and expand over time. If you missed the first webinar, you can watch a replay of Part 1 at Effect-Tech Webinars.

 

Please join us on October 9th as we continue the conversation in part 2 of this webinar series. We will explore the BMC Atrium classes and which of them are most relevant to support the service model architecture introduced in part 1. We will also talk about the role of discovery and how it can and cannot be leveraged to keep your CIs up to date. After discussing the CMDB classes, we will broach the topic of CI relationships and simplify which relationship types you should use to drive meaningful value without added complexity.

 

With classes and relationships out of the way, we steer clear of CI attributes for the time being and introduce the need to define multiple service model views that allow users to better understand the numerous CIs and relationships that result when a complex service model is built out. Finally, we will explore the role of the CMDB in assisting application support teams in the areas of event and incident management - and why integrations with discovery tools are potentially NOT required to provide value for these app groups.

 

Time permitting, we will introduce Effect-Tech's CMDB methodology and best practices that your organization can use to implement CMDB in a structured and repeatable way. This methodology avoids the common implementation mistake that essentially turns your CMDB project into a data, discovery, and reconciliation exercise. By implementing the CMDB using this systematic approach, we believe your organization will gain more value from your CMDB project - more quickly.

 

This webinar series is presented by Rick Chen, Managing Principal at Effect-Tech. Rick shares his wealth of CMDB knowledge and field experience.

 

Agenda:

  • CMDB class discussion
  • If it's all about relationships - what, why, and how much?
  • CMDB service model views - and why it matters
  • Addressing the needs of application support teams
  • Introducing implementation best practices to get more value, more quickly out of your CMS / CMDB

 

Date/Time: October 9th at 9am (PST)

 

Space is limited so reserve your webinar seat today - Register
