
Atrium CMDB


T3: Service Management and Automation Conference is NOW OPEN FOR REGISTRATION AND CALL FOR PAPERS

 

BMC Software and the Tools, Technology & Training (T3) community are listening and you have been heard! We are excited to announce the launch of the first ‘T3: Service Management & Automation Conference’ to be held from 6 to 10 November 2017 at The Palms Hotel and Casino in Las Vegas.

 

The conference is being put on by TTT: Tools, Technology & Training Company. As the flagship sponsor, BMC Software is providing full support in running the conference.

Together, we plan to bring you a robust conference focused on the Tools, Technology and Training that you need to succeed in your roles and meet your business needs!

 

The conference website is LIVE and is accessible by going to http://www.tooltechtrain.com/. If you have any questions about the conference, please contact the Board of Directors at info@tooltechtrain.com or call me at 443-812-9891 (leave a message with a phone number and I will get back to you as soon as I can.).

 

Lenny Warren

Member of Board of Directors


If you want to use $TIMESTAMP$ (or a formula based on it) as part of a query to fetch or update data, you can use the following supported method.

 

First, use a Get System Info step to get the actual/current timestamp.

 

 

Then, run the date through a Select Values step to format it the way we want:

 

 

yyyyMMdd is the format that ARS will accept here.
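As a rough illustration of the same conversion outside of Spoon (a hedged sketch only; the class and variable names are made up for this example), this is what formatting an epoch timestamp as yyyyMMdd looks like in plain Java:

import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampFormatExample {
    public static void main(String[] args) {
        // AR System timestamps are epoch seconds; java.util.Date expects milliseconds
        long arTimestamp = System.currentTimeMillis() / 1000L;
        Date date = new Date(arTimestamp * 1000L);

        // yyyyMMdd is the format ARS accepts in the query
        SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMdd");
        System.out.println(formatter.format(date));
    }
}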

 

Then you can use this TIMESTAMP in your query, for example in a CMDBInput step.


You may have received a 'Product Change Notification' notice from the BMC support team (dated February 7th, 2017) which notes that the Atrium CMDB Suite has been withdrawn.

 

I wanted to write a post to clarify what has actually happened to the Atrium CMDB Suite, clear up any confusion, and hopefully lessen the flood of enquiries coming into my mailbox about it.

 

Atrium CMDB has been sold as part of many of our products and also sold separately. What we have actually done is withdraw the product set (Atrium CMDB Suite) that was sold individually, as we now see Atrium CMDB as a core part of our AR Platform.

 

As such, this notification should not be seen as Atrium CMDB going away; in fact, that is far from the case. We have several updates planned at present and will continue to deliver updates as part of our service packs and standard releases moving forward. All that is changing is how BMC sells and accounts for the CMDB within our product groupings and solutions, removing the need for this separate product listing.

 

I hope this clears up any confusion that our Product Change Notification may have caused.

 

Of course if you have any questions or queries please feel free to reach out to me.

 

Stephen Earl

Lead Product Manager, Atrium CMDB

BMC Software Inc.


Today's topic is data quality and CMDB data reconciliation. Data management suffers when data cannot be verified and tracked to its source. We are in a business that seeks data quality, and we often rely on data auditing to see when a change was made, by whom, and why. However, in some cases this is not as clear cut as it seems from just looking at the change history captured by the AR auditor.


There are lesser-known ways of backtracking changes made to CMDB records, known as CIs and Relationships.
This blog post is about hidden fields in the CMDB application that can help you understand when and how a record was updated.

 

Let's say that the MarkAsDeleted field keeps getting set to a Yes value, while the source record is still set to No.
In such a case you'd likely want to know what is making the change, and you suspect a Reconciliation Merge activity as the culprit.
You can follow markers back to their source to see if it really is Reconciliation.

 

Usually the inquiry begins by looking at the recon logs. Logs set to debug level will show what value is being updated.
A value of 0 is No and 1 is Yes. The log can also indicate that MarkAsDeleted is not being updated at all.


For example, the log shows:

Attribute   : MarkAsDeleted
Value       : 0  << this shows the default value of MarkAsDeleted as No.
From Dataset: BMC.ADDM

 

In this case we'd want to know exactly when the change occurred. You can find the proof by looking at the CI itself. Was it reconciliation doing it? Did it merge the CI with a MarkAsDeleted value of Yes?

 

Here is how to research it:

 

Find the CI in the BMC.ASSET dataset that was modified to MarkAsDeleted = Yes and look at the LastModifiedBy value.

Is it "Remedy Application Service" ? If yes, then it could have been Reconciliation, but also Normalization or even AtriumIntegrator using that account.

I am going to need stronger evidence than that. So, my next step is to look in the AttributeDataSourceList field. This field holds the footprint of the last merge. It includes the source record's DatasetId and a list of field IDs that "won" the precedence contest from that dataset. If you see BMC.ADDM followed by a list of IDs that includes "400129100" (MarkAsDeleted), then you can assume that the field was updated during the last merge. But there is no guarantee of whether the value was Yes or No at that point.

 

[Screenshot: attributedatasourcelist.png]

 

But we can do better than that.

 

1. Run an AR Report to get the value of a hidden field named LastREJobrunId.

 

[Screenshot: ReportLastReJobrunId.png]

 

This field is defined as follows in Attribute Definitions form:

Attribute Name* : LastREJobrunId
Data Type* : Character
Field ID : 490001289
ForeignKeyID (Class GUID) : BMC_ASSETBASE
instanceId : CMDB_ATTR_ID_LAST_REJOBRUN_ID
Namespace : BMC.CORE

 

[Screenshot: LastREJobrunId.png]

 

2. Take the LastREJobrunId value from the CI that was changed, go to the "RE:Job Event" form, and search for the value from step 1 in the field named Job Run Instance Id*.
The result should give you roughly three records: one record for each activity and one record for the log location.

 

[Screenshot: REJobEvent.png]

 

 

3. Look at the date range of these activities and see whether the ModifiedDate of the CI falls within the activity related to Merge. These records have a value in "ActivityName" that describes which activity created them. If the Create Date and Modified Date are outside the range of the Modified Date of the CI, then it was not reconciliation that set the value. You can stop here, because reconciliation simply did not do this; if the value in LastModifiedBy is Remedy Application Service, then the data could have been modified by some other method such as Normalization or AtriumIntegrator. However, if the Modified Date is in sync and within the range of the LastREJobrunId activity, then this record has not been modified by anything else since it was merged into the BMC.ASSET dataset.

 

[Screenshot: REJobEventAndCS.png]

 

If you do find that the Modified Date of the CI fits within the Create Date and Modified Date, then it was Reconciliation that did it, and you'd see this captured in debug-level logs.

 

On the other hand, if the investigation did not show that the record was modified within that range of Create and Modified Date in the Reconciliation Job Events, then some other method was used to make the change.


Hello there fellow AtriumCore enthusiasts.

 

I got inspired to blog about something I've been working on with customers who synch their ADDM data to CMDB.

 

As a side note, ADDM is now called BMC Discovery, but I'll use ADDM for now since most people still refer to it as ADDM. On the CMDB side we are still using the dataset Id "BMC.ADDM" and "ADDMIntegrationId". I am not sure if this will change. I hope not, because it would impact previously reconciled data: provenance (the source) is referenced in the AttributeDataSourceList field, and changing the DatasetId would impact Precedence rules.

 

Getting the discovered data across is one thing. Understanding what happens to it on the other side is another. I had sort of expected that the ADDM administration skill set also includes AR Server configuration knowledge, but now I know that AR Server and ADDM administration are usually handled by two mutually exclusive teams. This blog will hopefully help each side communicate its expectations to the "other" side.

 

For the ADDM Admin:

You already know that the Remedy AR System hostname and port are the destination for your data. You may also know that there is a Remote Procedure Call (RPC) queue that ADDM defaults to but is not restricted to. I'll get to the queue details later, but what is important to know next is that the AR Server is running in a server group. I was trying to think of a good way to visualize this, and one thing that came to mind is the three-headed dog known as Cerberus. I could have also used the multi-headed Hydra as an example, but Cerberus seems more appropriate for what I want to say.

 

 

So, the dog Cerberus has three heads and it guards the underworld. If you have a dog then you already know that a dog gets pretty preoccupied when eating with just that one head. Imagine having a dog with three heads. You would basically have a dog that listens for squirrels, smokes a cigarette, and has dinner all at the same time. Each head gets to decide what to do while the food goes into just one stomach. That's kind of what the AR Server does. Information is digested and deposited in one data store. Usually three or more AR Servers are interfaced with the AR Mid Tier, and at least one of them is dedicated to processing CMDB data in bulk: Reconciliation, Normalization, and AR Escalations.

 

These AR System hosted features also use RPC queues that allow for seamless data exchange with external APIs.

The server keeps a dedicated queue open for them. ADDM syncs its data to the CMDB via the AR Server using one of these RPC queues.

However, what you need to know is that the CMDB API used by ADDM defaults to queue number “390696”. This queue is actually the CMDB dispatcher queue and using it can sometimes cause contention with other CMDB activities.

 

This is likely the conversation that you will have with the folks who administer the AR Server. Unfortunately that CMDB RPC queue cannot be changed, but BMC R&D is looking into adding more CMDB queues and perhaps even a dedicated BMC Discovery (ADDM) queue. Don't quote me on that; it's a rumor. The queue can, however, be set differently on the ADDM side. This is probably something that you will have to discuss with the AR Server administrators. Tell them that you want a different queue assigned, for example 390699, so that more RPC queue threads can be added.

 

This is easier said than done because there are only two CMDB queues available: 390698 and 390699.

The first one is usually used by the Reconciliation Engine and the other by Normalization. BMC Discovery will do best if it has its own dedicated queue, but that might be difficult if that same server is also used for Normalization and Reconciliation.

In this case it would be easier to add another head (AR Server) to the server group and dedicate it exclusively to specific data processing activities. In AR Server version 9.1 you can dedicate another AR Server to run Normalization by setting its rank in the Failover Ranking form (not the server group ranking). You may still need to install that additional instance if the existing AR Server provides service to Asset operators via the mid tier. This would be the best way to ensure the best performance for the ADDM to CMDB integration.
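For illustration only, a private queue and its thread counts are defined in the AR Server configuration file (ar.cfg / ar.conf). A minimal sketch, assuming the standard Private-RPC-Socket syntax and purely illustrative thread counts:

Private-RPC-Socket: 390699 4 8

This would open RPC queue 390699 with a minimum of 4 and a maximum of 8 threads; your AR Server administrator should pick values that fit the environment.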

[Image: Cerberus and Hades]

 

There is one more thing: AR Escalations. These are features that trigger when data is created or modified. You want to ask the Remedy admin not to have escalations enabled on the same server that receives the ADDM data.

 

[Screenshot: Reconciliation From ADDM1.jpg]

 

 

For the Remedy guys:

All you really need to know is that ADDM uses a Java API that sends the data, and it can toggle whether that data is sent as a "pre" or "post" 7603 data model.

Anything after CMDB version 7603 should be sent as 7603, with Impact as an attribute. This means that the CMDB will receive the "Impact" relationship designation in an attribute rather than in the old and deprecated BMC_Impact class. It removes the need to run the Deprecation plugin that converts the class to an attribute.


AIS supports weighted clusters, that is, clusters composed of components with different weights. One of the components may be a small server and another a big server. Both may be running the same services, but the users of those services will see a bigger impact if the big server goes down, so this needs to be reflected in the Impact Simulator.

 

The basics of the behavior are explained in the documentation here: Settings that affect the impact model - BMC Atrium Core 9.1 - BMC Documentation

and here: Manually creating CI impact models of services - BMC Atrium Core 9.1 - BMC Documentation

 

But how does it really work in the background?

 

ImpactWeight

ImpactWeight is an attribute of the BMC_BaseRelationship class. It requires an integer value. The impact weight is used in impact relationships to determine how much importance (numerically weighted) to give to each provider relationship that impacts a consumer instance. A higher numerical value indicates a greater importance. Impact weight is used with the WEIGHTED_CLUSTER status computation model.

Example

A consumer instance (C1) has impact relationships with three provider instances (P1, P2, P3). Each impact relationship can be ascribed a status weight to gauge the effect that an event propagating from the provider instance has on the consumer. In this example, the following status weight values are entered for each impact relationship:

C1 and P1: StatusWeight=100

C1 and P2: StatusWeight=50

C1 and P3: StatusWeight=25

All three impact relationships apply the direct propagation model.

Instance P3 has a critical event, making its status UNAVAILABLE, which becomes its propagated status. The propagated statuses of P1 and P2 are classified as OK. The propagated status of one impact relationship is UNAVAILABLE, while the other two are OK.

To determine the weighted impact on the consumer instance, refer to the following table of propagated status values for components and to the following formula:

 

Value   Propagated Status
01      NONE
10      BLACKOUT
20      UNKNOWN
30      OK
40      INFO
50      WARNING
60      MINOR
70      IMPACTED
80      UNAVAILABLE

P3 has the status of UNAVAILABLE, which has a value of 80. P1 and P2 have the status of OK, which is 30. P3 has a status weight value of 25; P1 has a status weight value of 100; P2 has a status weight value of 50.

To calculate the impact that the provider relationships have on the consumer and the consumer's impact status, use the following formula, which applies the status weight to each relationship:

(Propagated Status Value x Status Weight Value) of each impact relationship (P1,P2,P3) / sum of the Status Weight values of all impact relationships

In this example, the formula would be calculated as follows:

[ (30 x 100) + (30 x 50) + (80 x 25) ] / (100 + 50 + 25) = (3000 + 1500 + 2000) / 175 = 6500 / 175 = 37.14

37.14 is the weighted mean of the propagated status values. It determines the impact status of the consumer instance. 37.14 is rounded to the nearest propagated status value, which is 40. The impact status of the consumer instance is therefore INFO.

 

In the actual implementation of AIS the table of propagated statuses is much shorter since we do not have that many options in AIS:

 

This is what we implemented.

 

UNAVAILABLE(3, "CRITICAL", "Unavailable", 80),

IMPACTED(2, "MAJOR", "Very Impaired", 70),

MINOR(1, "MINOR", "Impaired", 60),

WARNING(0, "WARNING", "Slightly Impaired", 50),

OK(-1, "OK", "Ok", 30)


The *BMC Atrium CMDB 9.1: For Consumers, Configuration Managers, and Administrators* course teaches students what makes a Configuration Management Database (CMDB) useful to the business, how to configure the CMDB, and how to perform routine administrative and maintenance tasks. This course includes the accreditation exam for BMC Accredited Administrator: Atrium CMDB 9.1, and is also a part of the BMC Accredited Administrator: BMC Atrium CMDB 9.1 certification path.

 

Course details here, or contact Thomas Hogan (EMEA) or Brian Hall (AMER training), or email education@bmc.com

 

UPCOMING PUBLIC CLASSES

AUGUST 8, 2016:  EMEA (online)

AUGUST 8, 2016:  AMER (online)

SEPTEMBER 19, 2016: EMEA (online)

OCTOBER 3, 2016:  AMER (online)

 

Have multiple students? Contact us to discuss hosting a private class for your organization.

 

This instructor-led, five-day course includes a hands-on lab and is structured as follows:

  • Day 1 (Consumers): Learn how the CMDB is used by key IT Service Management functions, such as Asset, Incident, and Change Management.
  • Days 2-4 (Configuration Managers): Learn how to configure your environment to meet the requirements of CMDB consumers in the business.
  • Day 5 (Administrators): Learn routine administration and maintenance of the CMDB.

 

After having attended this course, you will be able to maximize the out-of-the-box functionality of BMC Atrium CMDB. 

 

IMPORTANT: Included in this course is the examination for BMC Accredited Administrator: BMC Atrium CMDB 9.1: Consumers, Configuration Managers and Administrators. Taking the exam and pursuing accreditation is optional; however, all students enrolled in this course are automatically enrolled in the exam. You will have two attempts to pass the BMC Accredited Administrator exam. No retakes will be offered. Those who pass will receive the title of BMC Accredited Administrator: Atrium CMDB 9.1.

 


 



The BMC Remedy Single Sign-On Service Provider (SP) certificate shipped with the product, which is used to sign SAML requests, expires on April 21st, 2016.

 

If you are using the out-of-the-box certificate to sign SAML requests in BMC Remedy Single Sign-On, requests will fail once the certificate expires.

 

In this blog, I will cover the steps to update the BMC Remedy Single Sign-On (RSSO) SP certificate so that it has a new expiry date, which will prevent SAML authentication failures.

 

If this certificate has already been replaced with a newer one with a valid future expiry date, you don't have to follow the steps mentioned in this blog. 

 

First of all, how do you find the certificate expiry date of the relying party (RSSO) for SAML authentication?

 

  • An easy way to find the certificate expiry is to log in to the ADFS tool and check the RSSO service provider relying party properties.
  • In the Signature tab, you should see the certificate expiry date.

 

Likewise, for other IdP tools that you are using with RSSO, you will have to contact your IdP administrator to check the RSSO relying party certificate expiry date.

 

What steps are necessary to update BMC Remedy Single Sign On (RSSO) SP Certificate?

 

Important Notes:

 

(A) The instructions below are written for the Windows OS, and all paths mentioned are Windows paths. Please use the corresponding paths if you're using Linux or Solaris.

 

(B) The file name for the Java keystore should be cot.jks. The alias for the Java keystore (cot.jks) should be test2. The password for the cot.jks keystore is 'changeit'. Please do not change the password.

 

(C) Please make sure the Path environment variable includes the JDK or JRE bin folder, or else you may get an error such as "'keytool' is not recognized as an internal or external command". On Windows this means that you'll need to edit the System Environment properties and update the global PATH variable.

 


 

Steps to update the certificate:

 

1. Update java keystore named cot.jks

 

Perform the following steps on the machine where the RSSO server is installed, from the <tomcat>\rsso\WEB-INF\classes folder:

 

a. Take a backup of existing cot.jks from <tomcat>\rsso\WEB-INF\classes folder

 

b. Delete the alias ‘test2’ from the existing cot.jks using the keytool command line:

 

keytool -delete -alias test2 -keystore cot.jks

 

Note:  The password for the cot.jks is "changeit".  Please don't change the password

 

c. Create a new keypair with alias ‘test2’ in existing cot.jks

 

keytool -keystore cot.jks -genkey -alias test2 -keyalg RSA -sigalg SHA256withRSA -keysize 2048 -validity 730

 

Note: In the above example we used 730 days as the validity, which is equivalent to 2 years. You can choose the validity period at your discretion.
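If you want to confirm the new expiry date before continuing, you can list the entry you just created (a standard keytool command; the keystore password is still 'changeit'):

keytool -list -v -keystore cot.jks -alias test2

The "Valid from ... until ..." line in the output shows the new expiry date.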

 

d. Export ‘test2’ certificate in PEM format

 

keytool -export -keystore cot.jks -alias test2 -file test2.pem -rfc

 

e. Take a backup of the updated cot.jks

 

If you have other RSSO server instances in the same cluster, replace cot.jks in the <tomcat>\rsso\WEB-INF\classes folder with the cot.jks updated in step 1.e.

 

2. Update signing certificate in RSSO Admin console

 

a. Log in to the RSSO Admin console

 

b. Go to ‘General->Advanced’ tab

 

c. Open the file test2.pem created in step 1.d in a text editor and remove the first line:

 

(-----BEGIN CERTIFICATE-----)

 

and the last line:

 

(-----END CERTIFICATE-----)

 

Also remove the newline delimiters (\r\n), and then copy the contents.

For example, if you use Notepad++, you can open the ‘Replace’ dialog, select ‘Extended’ search mode, find ‘\r\n’, and click the ‘Replace All’ button.
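If you have a Unix-style shell available, the same cleanup can be done in one line (a convenience sketch, not an official step):

grep -v "CERTIFICATE" test2.pem | tr -d '\r\n'

This drops the BEGIN/END CERTIFICATE lines and strips the newline characters, leaving just the base64 content to copy.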

 

 


 

d. Paste the content copied in step 2.c into the ‘Signing Certificate’ field, replacing the existing content in the text area

 


 

e. Click ‘Save’ button to save the change

 

f. Wait about 15 seconds, view the realm using SAML, and click the ‘View Metadata’ button in the ‘Authentication’ tab. Verify that the SP metadata is updated with the new signing certificate.

 

3. Update SP metadata at IdP side

 

- Export the SP metadata from step 2.f and save it to a local file

 

- Send the exported SP metadata and the new signing certificate from step 1.d to the IdP team for updating.

 

If the IdP is ADFS, you can add the new signing certificate as follows:

 

a. Open ‘Properties’ dialog of the relying party for RSSO
b. Go to ‘Signature’ tab
c. Click ‘Add’ button, select the new signing certificate file and click ‘OK’

 


 

 

Notes for rolling upgrades (Cluster / High Availability environment)

 

Should you have a requirement for zero downtime in a cluster environment (assuming ADFS is the IdP) during the signing certificate update, you can proceed in the following sequence:

 

1. Take one RSSO server instance down first, and perform step 1 on it
2. Perform step 2
3. Perform step 3 (remember NOT to delete the old signing certificate)
4. Bring that RSSO server instance up again
5. Take the second RSSO server instance down, replace its cot.jks with the one already updated on the first RSSO server instance in step 1.e, then bring it up again
6. Repeat step 5 on all other RSSO server instances
7. After the keystore cot.jks is updated on all RSSO server instances, you can remove the old signing certificate from the RSSO relying party on the ADFS side.


MSSQL comes in many forms, versions, and with many authentication methods.

Most of the time we end up using the regular MSSQL authentication, where we provide a username/password combination, the user gets authenticated in the DB and there we go.

Other times we need a more complex authentication method: Integrated Security, NTLM, Windows Authentication, named instances, domains, etc. All those parameters can affect the way we need to log in to the MSSQL database when we use AI/Spoon to connect.

 

Spoon has two basic options to connect to MSSQL:

 

The first one is the most used one, "MS SQL Server"

This one uses the 1.2.5 JTDS driver.

This driver is good for most common cases, but if you want to use Windows Authentication, Integrated Security or Named Instances you may run into problems.

 

The jTDS FAQ (jTDS JDBC Driver) answers the most common questions about this driver, and even covers troubleshooting of the most common errors thrown.

 

I personally recommend you download the latest JTDS driver from here: jTDS - SQL Server and Sybase JDBC driver download | SourceForge.net

As of this writing it is 1.3.1. If you do not have this version you may encounter some issues connecting to MSSQL 2012 or higher, even if you are using the most basic of authentications.

 

If you want to use Integrated Security (also called Active Directory authentication, or AD) you can do so here as well, or even use Domain/User authentication. I will explain how each one works.

 

If you are not familiar with AD, it's a centralized authentication mechanism allowing access to the various hardware and services in the network. By centralizing the authentication process, the same user account can be used to access multiple resources, and it eliminates some of the setup needed to enable those users on various systems. Most DBAs prefer to use AD authentication for those reasons, and if you will be using PDI to access multiple MSSQL systems, you'll probably want to become familiar with setting it up.

 

  1. Although Microsoft provides their own JDBC driver, which we will cover later in this post, this time around we will be using the open source jTDS driver.
  2. Extract the archive file and open it. Copy the jtds-1.3.1.jar file to the Pentaho .../data-integration/libext/JDBC folder on your system. Remove the jtds-1.2.5.jar file from there.
  3. In the folder where you extracted the archive, locate the subfolder matching your system's architecture (x64, x86 or IA64). Open it, and open the SSO subfolder.
  4. Copy the ntlmauth.dll file to a folder on your path (from a command prompt, enter echo %PATH% on Windows or echo $PATH on Linux to see the current path). On my system, I copied the file (as root) to the /usr/local/bin folder. On Windows a good location could be \Windows\system32\.
  5. Open the Pentaho GUI (aka Spoon) and start a new job. Click on the VIEW tab in the Explorer panel.
  6. Right click on Database Connections, and choose NEW to open the Database Connection window.
  7. Enter a name in the Connection Name box to identify it.
  8. Scroll down in Connection Type and choose MS SQL Server.
  9. In the Access panel, make sure Native (JDBC) is selected.
  10. In the Settings panel, enter your server's hostname or IP address, the database you want to connect to, the port SQL Server is using (by default it's 1433), and the user name and password in the appropriate fields. You can leave Instance Name empty unless your DBA tells you the server is using a named instance. It should look something like this: [Screenshot: Database Connection]
  11. In the left most panel, select Options. The right panel will refresh, and will probably only have one value entered: “instance”. Leave the value as is.
  12. Only in cases where you want the actual Windows user account that started Spoon to be the one authenticated, add a parameter called “integratedSecurity” (watch the text case) and set the value to true. If this is true then you do not need to provide a username and password combination; Windows will use the current user that started Spoon to authenticate to the DB. This is usually NOT a good option since, for example, you may need to connect to two different DBs with two different domain/user combinations. If so, do not use this parameter.
  13. Add another parameter called “domain” and set the value to your network's domain name. (You can use the full domain name or the shorthand one.) [Screenshot: Database Connection - Options]
  14. Click the TEST button at the bottom of the screen, and you should be rewarded with a successful connection window. Click OK and you are done.

Note: You may get the following message:

The login is from an untrusted domain and cannot be used with Windows authentication. (code 18452, state 28000)

 

Some SQL Server machines are forced to work exclusively with NTLMv2 authentication. When attempting to authenticate with SQL Server using Windows Authentication, validation will fail with an untrusted user/domain error.

In order to verify which authentication is used, execute gpedit.msc on the SQL Server host and look at the selected value of Computer Configuration->Windows Settings->Security Settings->Local Policies->Security Options->Network Security: LAN Manager Authentication Level.

(i.e., the Authentication Level is set to "Send NTLMv2 response only. Refuse LM & NTLM")

The default value for useNTLMv2 is false, so we need to set it to true to send LMv2/NTLMv2 responses when using Windows/Domain authentication.

 

We need to set the parameter "useNTLMv2" to "true" alongside the "domain" parameter (step 13 above).
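For reference, the same settings expressed as a jTDS JDBC URL look roughly like this (hostname, port, database, and domain are placeholders; the Options panel in Spoon effectively builds this URL for you):

jdbc:jtds:sqlserver://sqlhost:1433/MyDatabase;domain=MYDOMAIN;useNTLMv2=true

domain and useNTLMv2 are standard jTDS URL properties. When you rely on the Windows single sign-on path instead (step 12), jTDS authenticates as the user running Spoon via ntlmauth.dll, so no username or password is supplied.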

 

Note also that if you decide to go with Integrated Security, the account used will be the one that actually runs the process that tries to connect to the DB. In this case it is Spoon, but when you run the job from the AI Console the connection will be triggered/established by the Carte server. The Carte server is a process that runs under the AR System (as a child of it) and is started and monitored by armonitor. Its startup line is in the armonitor.cfg/conf file, and unless you do something about it, the Carte server will be started by the same user that starts the AR Server, which means that this user may fail to connect even though you tested the job from Spoon and it was working.

 

For the above I do NOT recommend using Integrated Security unless really needed.

 

----------------

Now to the second option: MS SQL Server (Native)

 

You would normally use this option for better performance and to use the officially supported Microsoft JDBC connection. This JDBC driver is the most compatible driver on the market, and it's officially supported by the same company that ships new MSSQL releases, so new versions of the MSSQL DB Server may be accompanied by new JDBC releases as well. If the jTDS driver above stops working, you may have to fall back to this one for new DB releases.

 

To deploy/install this you need to:

1. Download the JDBC 4.0 driver from the official Microsoft webpage: https://www.microsoft.com/en-us/download/details.aspx?id=11774

2. Run the installer so that the files get unzipped to one folder. Then go to that folder, grab sqljdbc4.jar, and copy it to <AtriumIntegrator folder>/data-integration/libext/JDBC/

3. Do the same with the sqljdbc_auth.dll file from the 64-bit folder (unless you are on a 32-bit system).

4. Open Spoon, create a new DB Connection and use the MS SQL Server (native) option.

If you want to use the integratedSecurity feature you can leave the username and password fields blank, since the driver will use the credentials of the user who opened Spoon. On the left-hand side of the connection properties page, go to Options and add one property, integratedSecurity, with the value set to true.
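Again for reference, the equivalent Microsoft JDBC URL would look roughly like this (hostname, port, and database name are placeholders):

jdbc:sqlserver://sqlhost:1433;databaseName=MyDatabase;integratedSecurity=true

integratedSecurity=true only works if sqljdbc_auth.dll is visible to the JVM making the connection (for example via java.library.path), which is why step 3 above matters for both Spoon and the Carte server.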


Hi!

 

BMC does not support running Spoon on Linux, although we know from the Pentaho forums that it is quite possible.

What problems did you encounter when running Spoon on Linux (if you ever tried)?

 

Here is an issue we recently observed:

 

Short explanation: the culprit of the original error was that the /tmp filesystem was mounted with the "noexec" option, which prevents any app from executing scripts there.
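As a quick sketch of how you might check for (and temporarily lift) that mount option; a permanent change belongs in /etc/fstab:

# Check whether /tmp is mounted with noexec
mount | grep ' /tmp '

# Temporarily allow execution from /tmp (run as root)
mount -o remount,exec /tmp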

 

Long explanation:

1) Replace the swt.jar from BMC's Pentaho distribution (which is version 3.346 and located in "/opt/bmc/ARSystem/diserver/data-integration/libswt/linux/x86_64") with the one downloaded from https://www.eclipse.org/swt/ (version 4.527)

 

2) Edit spoon.sh (in the /opt/bmc/ARSystem/diserver/data-integration folder) and substitute:

 

OPT="$OPT $PENTAHO_DI_JAVA_OPTIONS -Djava.library.path=$LIBPATH -DKETTLE_HOME=/opt/bmc/ARSystem/diserver -DKETTLE_REPOSITORY=$KETTLE_REPOSITORY -DKETTLE_USER=$KETTLE_USER -DKETTLE_PASSWORD=$KETTLE_PASSWORD -DKETTLE_PLUGIN_PACKAGES=$KETTLE_PLUGIN_PACKAGES -DKETTLE_LOG_SIZE_LIMIT=$KETTLE_LOG_SIZE_LIMIT"

 

with:

 

OPT="$OPT $PENTAHO_DI_JAVA_OPTIONS -Xbootclasspath/a:${LIBPATH}swt.jar -Djava.library.path=$LIBPATH -DKETTLE_HOME=/opt/bmc/ARSystem/diserver -DKETTLE_REPOSITORY=$KETTLE_REPOSITORY -DKETTLE_USER=$KETTLE_USER -DKETTLE_PASSWORD=$KETTLE_PASSWORD -DKETTLE_PLUGIN_PACKAGES=$KETTLE_PLUGIN_PACKAGES -DKETTLE_LOG_SIZE_LIMIT=$KETTLE_LOG_SIZE_LIMIT"

 

(the only difference is -Xbootclasspath/a:${LIBPATH}swt.jar; the reasoning is based on http://stackoverflow.com/questions/19969637/noclassdeffounderror-classnotfoundexception-while-using-swt)

 

3) Start Pentaho Spoon:

 

cd /opt/bmc/ARSystem/diserver/data-integration

./spoon.sh

 

-------------------------------------

There are more experiences here: Diethard Steiner on Business Intelligence: Having problems starting Pentaho Kettle Spoon on Linux? Here are some solutio…

 

Can you share yours?


They say that the stars are bright, late at night and deep in the heart of Texas. That was certainly the case here in Austin last night. The clouds rushing over the night sky occasionally exposed the Moon, which was in conjunction with Jupiter. What a sight. According to my team of astrologers, this is supposedly a favorable time when communications flow with clarity.

 

And so, this morning a colleague of mine made something very clear to me. He said that the Java AutoUpdate issue is currently the top concern in the field. At that point I visualized the faces of administrators who work with Java-based applications around the world, their expressions showing dismay and puzzlement. My facial expression was more skeptical than that. I realized that every Java path we've ever hard coded to the java executable is no longer valid.

 

So what happened here? A Windows-based task called the Java AutoUpdater downloads Java updates and installs them at will. That in itself is not that big of a deal; in the past the previous Java was left intact and it would still work wherever it was hardcoded.

But this time around the Java AutoUpdate also removes the contents of the former bin directory. Yes, the one from the previous install.

 

For example, java.exe is now gone from the JAVA_HOME\bin folder, and if you had that path in any Java application launcher then the app will no longer start. This path is no longer valid:

 

"C:\Program Files\Java1.7\bin\java.exe  -classpath ...."

 

Instead it becomes something else, and you need to manually change it to the new location.

 

"C:\Program Files\Java1.8\bin\java.exe  - classpath ..."

 

Bit of a nightmare if you think of how many java based applications are in use around the world.

 

Perhaps there is a lesson we can learn from this: do not hard code Java paths this way. The alternative is to always use an environment variable like %JAVA_HOME% or %JAVA_BIN% instead of the hard-coded C:\Program Files\Java1.7\ path.

 

Example:

 

%JAVA_HOME%\bin\java.exe  -classpath ...

or

%JAVA_BIN%\java.exe  -classpath ...

 

Unfortunately this presents another compatibility issue: what if my application was coded against a specific feature that has now changed or been deprecated in the new version? Either way, please be on the lookout, and always check the contents of the bin directory formerly known as JAVA_HOME first.


*RING*.... *RING*....

 

The phone rang a couple of times while I was finishing my morning coffee... I don't like being disturbed while at it... and normally Watson likes picking up the phone, but this time he wasn't around.

After some more futile rings, I decided to pick it up. Surely it was another one of those cell phone companies offering me 4g or some other new technology, but no. It WAS some new technology, but nobody was selling me anything.

We had a case of a guy who was trying to use the new 9.1 Remedy, but the Atrium console would not open. Oh my... You can imagine what one can do without an Atrium Console nowadays; terrible, terrible news.

Of course, the victim wanted us to engage at full speed. And so we did.

Watson picked me up and we went to the source.

 

Reading the Tomcat logs, the problem was clear: a class was missing.

<timestamp> - Initializing Atrium Servlets
javax.servlet.ServletException: flex.messaging.config.ConfigurationException: An error occurred trying to construct FlexFactory 'com.bmc.atrium.lcds.AtriumFlexFactory'.  The underlying cause is: 'java.lang.NoClassDefFoundError: com/bmc/cmdb/api/CMDBValue'.
  at flex.messaging.MessageBrokerServlet.init(MessageBrokerServlet.java:184)
  at com.bmc.atrium.web.AtriumServletDispatcher.init(AtriumServletDispatcher.java:60)
  at com.bmc.atrium.midtier.RealAtriumWidgetPlugin.processRequest(RealAtriumWidgetPlugin.java:156)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:497)
  at com.bmc.atrium.modulelauncher.ContextClassLoaderInvocationHandler.invoke(ContextClassLoaderInvocationHandler.java:26)
  at com.sun.proxy.$Proxy18.processRequest(Unknown Source)
  at com.bmc.atrium.modulelauncher.AtriumWidgetPlugin.processRequest(AtriumWidgetPlugin.java:231)
  at com.remedy.arsys.plugincontainer.impl.PluginServlet.postPluginInfo(PluginServlet.java:44)
  at com.remedy.arsys.plugincontainer.impl.PluginContainer.processRequestInfo(PluginContainer.java:86)
  at com.remedy.arsys.stubs.AuthenticationHelperServlet.doRequest(AuthenticationHelperServlet.java:79)
  at com.remedy.arsys.stubs.GoatHttpServlet.postInternal(GoatHttpServlet.java:98)
  at com.remedy.arsys.stubs.GoatHttpServlet.doGet(GoatHttpServlet.java:57)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at com.remedy.arsys.stubs.TenancyFilter.doFilter(TenancyFilter.java:49)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:521)
  at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1096)
  at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:674)
  at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2500)
  at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2489)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.lang.Thread.run(Thread.java:745)
Caused by: flex.messaging.config.ConfigurationException: An error occurred trying to construct FlexFactory 'com.bmc.atrium.lcds.AtriumFlexFactory'.  The underlying cause is: 'java.lang.NoClassDefFoundError: com/bmc/cmdb/api/CMDBValue'.
  at flex.messaging.config.FactorySettings.createFactory(FactorySettings.java:73)
  at flex.messaging.config.MessagingConfiguration.createFactories(MessagingConfiguration.java:158)
  at flex.messaging.config.MessagingConfiguration.configureBroker(MessagingConfiguration.java:108)
  at flex.messaging.MessageBrokerServlet.init(MessageBrokerServlet.java:132)
  ... 40 more
Caused by: java.lang.NoClassDefFoundError: com/bmc/cmdb/api/CMDBValue
  at com.bmc.atrium.lcds.ProxyRegistry.initialize(ProxyRegistry.java:123)
  at com.bmc.atrium.lcds.AtriumFlexFactory.initialize(AtriumFlexFactory.java:86)
  at flex.messaging.config.FactorySettings.createFactory(FactorySettings.java:61)
  ... 43 more
Caused by: java.lang.ClassNotFoundException: com.bmc.cmdb.api.CMDBValue
  at com.bmc.atrium.modulelauncher.AtriumDVFClassLoader.loadClass(AtriumDVFClassLoader.java:57)
  ... 46 more
Throw Error - 9430

 

Lines 50, 55, 44, and 02 were all giving me the same clue: the CMDBValue class was missing.

But how? This is a very common class, and it is found in the cmdbapi.jar file.

This jar file changes by version, and it should be there under .../midtier/ThirdPartyJars/servername/

That is where the mid tier fetches all the jar files that it needs for dependency classes.

We checked the folder, and the jar file was not there. So my suspicion was right.

Resolving this looked pretty simple:

Find the cmdbapi91.jar under .../atriumcore/cmdb/sdk64/bin

Copy it to the ThirdPartyJars folder and restart the mid tier.

 

Elementary.

 

There is a cleaner, better solution though.

  1. Access the BMC Remedy Mid Tier.
  2. When prompted, enter your AR credentials.
  3. Open the Data Visualization System Files form.
  4. In the Data Visualization System Files form, create an entry by entering the information in the following fields.
    • Name = cmdbapi91.jar
    • Description = CMDB Java API
    • Status = Active
    • Platform = All
    • Version = 9.1.00
  5. Click Add to attach the cmdbapi91.jar available at ..\..\..\AtriumCore\cmdb\sdk64\bin
  6. Click Save.
  7. Restart BMC Remedy Mid Tier.

Recently I needed to troubleshoot an issue with BPPM Publishing where Impact Models were not being successfully published in BPPM.

Some smaller Models were successful; other, larger Models would just not publish.

The Models showed up correctly in Remedy (Impact Designer), but when attempting to publish to BPPM they would time-out or throw a large number of errors in the BPPM logs.

 

Looking at the log files from AR System, I could see lengthy queries being executed from BPPM that were taking upwards of 20 minutes to execute before the publishing ultimately failed. SQL analysis showed queries that were using "Null" values, so adding indexes to speed up the queries and help the process was out of the question.

 

Looking into the API calls that were executed, I could see that BPPM was executing an API query using the "CMDBGraphWalkQuery" function with no batching and no levels (which effectively pulled the full model down across all classes) to "walk" the Model and obtain all the CIs and Relationships.

 

Service Models for this client were loaded via a 2 step process:

 

  1. Main Model was imported via ETL (AI Jobs/Transformations) from an Excel spreadsheet
  2. The remainder of the Model was imported from multiple external data sources, again via ETL (AI Jobs/Transformations)

 

This caused a problem when it came to troubleshooting the root cause, as there was no "one source" of information where the full Model resided. We analysed the Excel spreadsheet for anomalies and corrected them, but no amount of troubleshooting and diagnostics was able to pinpoint the actual issue across the full Model.

 

For BPPM to successfully pull down the Model (CIs and Relationships), it needs to know which BPPM Cell to publish the Model to. In a large BPPM implementation there may be more than one BPPM Cell configured, and the BPPM environment can span multiple networks, be segmented off based on security requirements, etc., so knowing where to publish a Model to (the target Cell) is key to a successful publishing attempt.

 

Eventually we determined through the BPPM logs that there were multiple CIs that were missing the "HomeCell" and "HomeCellAlias" values in the CMDB records as required by BPPM. This may have been due to the way the Models were loaded, or to another issue with the propagation of the Cell values via the Reconciliation process.

 

With the larger Models having upwards of 50k+ CIs (and Relationships), identifying each and every one of the CIs related to a particular Model was practically impossible due to the way the data was loaded (and infrastructure CIs could be used across multiple Models). So we were left with either going line by line through the error logs to identify the offending CIs and updating them manually, or coming up with another method to extract the data.

 

We eventually determined that the only way to get past this issue was to "emulate" what the BPPM Publishing process was doing, so we could see which CIs form part of the Model we were attempting to publish.

From here, we could also create a "self-documenting" method for a Model, where all the related CIs were exported to a file so we could see where the values were missing and then schedule a bulk update to set the "HomeCell" and "HomeCellAlias" values.

 

Attached is the Java application that was created. It uses the "CMDBGraphWalkQuery" API call to "walk" the Models from top to bottom and produce a CSV file showing all CIs related to that Model, including the "HomeCell" and "HomeCellAlias" values.

It can walk a single Model or multiple Models (using the configuration file to set parameters).

There are options to update the values on the fly or just walk the Model and produce an output file (please see the Instruction file included).

 

To use it, just extract the zip to a directory of your choice (the file structure in the zip needs to be kept intact) and run the program using either the command-line options or the configuration file.

 

I hope this helps anyone else facing a similar issue with publishing to BPPM.

 

Carl


So Watson got a call. REST API.

We had heard about the guy at a local pub. He was here to take the business out of another guy... "Web Services" they called him.

That was all we knew about it, until that night.

We started following him up; according to the source, the guy was not responding well. Bad Request 400.

We sent him the request exactly as stated in the documentation:

http://localhost:8008/api/cmdb/v1/instance/BMC.SAMPLE/BMC.CORE/BMC_ComputerSystem

 

But still, the guy... nothing.

 

A bad request means that the request does reach the Jetty Server, but you probably don't know what the Jetty server is anyway.

 

So first things first.

The REST API is a cool new feature of ARS.

When you start ARS it will also start the REST engine, and if you set it up correctly it will be listening for incoming requests on a port; those requests will be API calls that are resolved and answered through the same mechanism. All that magic happens because of something called Jetty, and that's all you need to know about that.

So how do you set it up correctly... well, that part I'm not so sure about, but I will tell you a simple way that works:

 

1. Go and open the jetty-selector.xml file under .../Arsystem/jetty/etc/

2. Comment out the first connector and uncomment the second one, so that it looks like this:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">


<Configure id="Server" class="org.eclipse.jetty.server.Server">


  <!-- This is the default, HTTPS connector. Place your keystore file in this
  directory and set your passwords, preferrably obfuscated, to configure HTTPS. -->
  <!-- <Call name="addConnector">
  <Arg>
  <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
  <Arg>
  <New class="org.eclipse.jetty.http.ssl.SslContextFactory">
  <Set name="keyStore"><Property name="jetty.home" default="." />C:\Users\Administrator\keystore</Set>
  <Set name="keyStorePassword">OBF:1tv71vnw1yta1zt11zsp1ytc1vn61tvv</Set>
  <Set name="keyManagerPassword">OBF:1tv71vnw1yta1zt11zsp1ytc1vn61tvv</Set>
  <Set name="trustStore"><Property name="jetty.home" default="." />C:\Users\Administrator\keystore</Set>
  <Set name="trustStorePassword">OBF:1tv71vnw1yta1zt11zsp1ytc1vn61tvv</Set>
  <Set name="excludeProtocols">
  <Array type="java.lang.String">
  <Item>SSLv3</Item>
  </Array>
  </Set>
  </New>
  </Arg>
  <Set name="port">8443</Set>
  <Set name="maxIdleTime">30000</Set>
  </New>
  </Arg>
  </Call> -->


    <!-- Uncomment this HTTP connector if you are using a reverse proxy that
  handles HTTPS. NOTE: Setting the forwarded property to true will process
  the X-Forwarding headers. -->
  <Call name="addConnector">
  <Arg>
  <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
  <Set name="host"><Property name="jetty.host" /></Set>
  <Set name="port"><Property name="jetty.port" default="8008" /></Set>
  <Set name="maxIdleTime">300000</Set>
  <Set name="Acceptors">2</Set>
  <Set name="statsOn">false</Set>
  <Set name="confidentialPort">8443</Set>
  <Set name="lowResourcesConnections">20000</Set>
  <Set name="lowResourcesMaxIdleTime">5000</Set>
  <Set name="forwarded">true</Set>
  </New>
  </Arg>
  </Call>


</Configure>

 

Lines 10 and 31 contain the comment characters.

Lines 37 and 51 have had the commenting brackets removed.

 

3. Configure the jetty.port if needed, the default is 8008, which Watson very much prefers to leave untouched.

 

So once that is completed, you have to restart ARS so that it can bring up the listeners!

After that is done, you can test your listener to make sure it is there, and you don't waste valuable time doing nothing... something that Watson likes to do on Sunday mornings, nothing.

You don't know how to check? Easy.

Open up a command prompt, run:

 

netstat -nab | findstr /c:"8008"

 

If it is listening you should get something like this:

  TCP    0.0.0.0:8008           0.0.0.0:0              LISTENING

  TCP    [::]:8008              [::]:0                 LISTENING

 

Nice and tidy.

Now we need to go to the next step. POSTMAN.

This little fellow can be a burden when you live in London, but this one is a different kind of post-man.

You can download him from the wonderful internet, and make sure you get at least version 3.2.8 since I've had some issues with older versions.

Once you get that installed you open it up and you go and create a request like this one:

 

[Screenshot: Postman 1.JPG]

 

That thing at the bottom is the token. You need that token because it lets you call the real APIs. It is, in fact, the key that opens the door to the real world of REST.

 

So now, with the Token we want to do some CMDB stuff:

 

[Screenshot: Postman 2.JPG]

 

You need to do a GET; the URL can take one of multiple forms. For more options I recommend reading the manual... boring!... or getting someone like Watson to do it for you.

 

Use the headers to add an Authorization header; as its value, write AR-JWT followed by the token we got on the previous call.
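For those who prefer the command line, here is roughly the same flow with curl (the hostname, port, and credentials are placeholders):

# 1. Get a token from the AR REST API login endpoint; the response body is the AR-JWT token
curl -s -X POST "http://localhost:8008/api/jwt/login" --data-urlencode "username=Demo" --data-urlencode "password=secret"

# 2. Call the CMDB REST API with the token in the Authorization header
curl -s "http://localhost:8008/api/cmdb/v1/instance/BMC.SAMPLE/BMC.CORE/BMC_ComputerSystem" -H "Authorization: AR-JWT <token-from-step-1>"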

 

You are all set! We solved the mystery and time will tell what will happen to Web Services.



The BMC training schedule is posted for January and February. We want you to be successful with BMC solutions. We run classes year-round and worldwide across the BMC product lines. Below are the classes listed by class/product name.


Review the list below and register today. Please check BMC Academy for the latest availability; BMC reserves the right to cancel or change the schedule. View our cancellation policy and FAQs. As always, check back in BMC Academy for the most up-to-date schedule.


To see all our courses by product/solution, view our training paths.


Also, BMC offers accreditations and certifications across all product lines; learn more.

 

For questions, contact us

Americas - education@bmc.com

EMEA - emea_education@bmc.com

Asia Pacific - ap_education@bmc.com

 

Class / Date / Location
BMC Atrium CMDB 8.x: Administering - Part 2

11 January / EMEA / Online

25 January / Americas / Online

1 February / EMEA / Paris, FR

8 February / Americas / McLean, VA

8 February / EMEA / Winnersh, UK

22 February / Asia Pacific / Online

22 February / EMEA / Online

BMC Atrium CMDB 8.x: Administering - Part 3

18 January / EMEA / Online

1 February / Americas / Online

15 February / EMEA / Winnersh, UK

BMC Atrium CMDB 9.0: For Consumers

11 January / Americas / Online

11 January / EMEA / Paris, FR

18 January / Asia Pacific / Online

25 January / EMEA / Online

16 February / EMEA / Dortmund, DE

22 February / Americas / Online

29 February / EMEA / Paris, FR
