
Atrium CMDB


This year I'm looking forward to meeting as many of you as possible at the T3:SMAC conference being organised and run by tooltechtrain, sponsored by BMC Software.

 

I have a session where I'll walk you through our new CMDB User Experience, which is releasing soon, and of course I'll also be available for meetings and hallway conversations - see you there!

 

If you want the new version of our updated CDM Diagram poster, make sure you come and find me at the BMC area to grab one - let me know you are attending the conference, and when we meet you can get your shiny new poster!

 

Visit the website and REGISTER for your place at the conference. Don't forget to let me know you are coming, and come and collect your poster...


Hello there and welcome to today's blog post. I'd like to inform our customers and my fellow technical staff about something that perhaps lacks clarity.

I'll break the points I need to get across into three sections.

 

1. Atrium Single Sign-On (ASSO) installation on the bundled Tomcat.

Historically, we've always recommended using the bundled Tomcat when installing BMC products. My personal understanding is that we have coded the features against that instance with a specific version of Java, and hence maintaining that set together as one deployment makes sense. Ten years ago that would have been OK and somewhat expected, but things are changing rapidly and the software community has grown into a sizable workforce.

 

Having more people collaborate on a platform also helps us find more vulnerabilities more quickly. Over the last 10 years we, the software community, have identified more sensitive areas than in the previous decades. That means that software vendors have to keep up with these flaws, correct the code, and hence release fixes and patches more often. This, of course, means that our approach of installing with bundled software packages has been met with new compatibility challenges. The bundle becomes obsolete if new fixes cannot be applied to it.

 

ASSO is one of the products where an external (also known as existing) Apache Tomcat can be utilized. This gives our customers the ability to maintain fixes and adhere to their IT department's security policies by subscribing to patches provided directly by the vendor. It should go without saying that vulnerabilities in Tomcat are not vulnerabilities in our products, and maintaining up-to-date fixes really is best handled by the software vendor.

 

2. Remedy Single Sign-On (RSSO) installation on an existing Tomcat.

RSSO does not even consider bundling Tomcat with the installer. Based on what is laid out in section 1, we've already realized that this is the way to do it. Although you will still see Tomcat bundled with Atrium Web Services (AWS), the future of our product installers will likely follow a separate path and hence put control of installing it back in the hands of IT departments.

In doing so, IT departments can maintain a product catalog and predict deployments through controlled version releases and patch maintenance. This is ideal.

 

3. Authentication of products using SSO

The above two points outline the differences between the two SSO technologies and the platforms they run on. However, there is a third aspect to this topic, and that is the compatibility between the two. ASSO is now becoming an interim legacy product, and BMC's future for SSO is in RSSO. That is a clear path forward. However, we're not ready to cut over to it just yet.

Some of our products are lining up to be integrated with RSSO while still operating under the ASSO flag, and these products are actively being worked on to integrate with RSSO.

One of these products is TrueSight 10. As of now, TrueSight 10 can only be authenticated by ASSO, but the future is already laid out for this product. It will integrate with RSSO, and when that happens ASSO will likely become an obsolete product.

 

Bottom Line:

A) For RSSO, this means that for now the cutover can be planned, but not realized until the RSSO agent is built. Meanwhile, as new threats and vulnerabilities are discovered, if you are running RSSO on your already installed Tomcat, please make sure that your Tomcat software is updated as recommended by the Apache Tomcat project. There is no impact on RSSO as of now (August 2017). Oracle is already applying Java updates via agents installed on the hosted endpoints. We're not sure how Apache will handle future updates, so your IT department is your best line of defense. RSSO provides, and hopefully meets, your needs for user authentication, but when it comes to patching Tomcat please make sure to keep up with the software vendor.

 

B) For ASSO bundled Apache Tomcat patching questions, please check with BMC Support for compatibility issues with new Tomcat releases. For now, do not upgrade the ASSO Tomcat. Instead, ask BMC AtriumCore Support to create an escalation with development.


Hi, Everyone!

 

It's been a while since I posted here. I also got to meet Doug (BMC CMDB Architect) in person recently. Here is my recent ah-ha moment with CMDB and Discovery. I will start with BMC Discovery (ADDM), then move to the CMDB topic of the service concept. There is a lot of confusion out there about what the difference between a technical and a business service should be within the CMDB. Some of you might say, "I knew that years ago!" But it took me a while to grasp the concepts, even though we have used, implemented, and developed the various BMC products. I am a visual and tactile learner, and I am writing this blog for those types of students. Diagram 1 explains The Pattern Language (TPL) and how things are discovered.

 

Picture 1 explains the discovery concepts and how things are developed: discovery looks for a pattern within the process(es) running on a device. Once you find that process, you write a pattern (or use Discovery's existing patterns) to find that process via TPL. Using the discovered process, you can then create software instance(s) by grouping the processes into software.

 

Screen Shot 2017-06-17 at 8.01.21 AM.png

 

Diagram 2 shows the structure of the TPL based on Diagram 1. Notice the trigger is on a node of a given node kind, based on the condition. You can now see the relationship between the pattern and the TPL. You then group the software instances into business application instance(s) (BAIs). Once a BAI is moved into the CMDB, it becomes a CI of class BMC.CORE:BMC_ApplicationSystem or BMC.CORE:BMC_ApplicationInfrastructure. Depending on the application model you create, the pattern output is consumed by the CMDB Common Data Model in different ways. You also need to know that TPL's foundation is in Python. For those of you interested in patterns and AI - that's another discussion/blog.

 

Let's look at BAIs and Software Instances (SIs) from Discovery with SAAM, and at predefined SIs that become part of a larger model like BSM.

 

SAAM's Business Application Instances are consumed by these forms:

  • BMC.CORE:BMC_Application
  • BMC.CORE:BMC_ApplicationSystem

 

Let's look at the CDM for the CMDB forms BMC.CORE:BMC_ApplicationSystem and BMC.CORE:BMC_Application. You have to understand that the parent class is BMC.CORE:BMC_ApplicationSystem. The subclasses are BMC.CORE:BMC_Application, BMC.CORE:BMC_ApplicationInfrastructure, and BMC.CORE:BMC_SoftwareServer. (Basics)
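To make the hierarchy easier to see at a glance, here is the same parent/subclass relationship written out as a small Python structure (purely illustrative, just mirroring the classes listed above):

# Portion of the Common Data Model discussed here: parent class -> subclasses
cdm_hierarchy = {
    "BMC.CORE:BMC_ApplicationSystem": [
        "BMC.CORE:BMC_Application",
        "BMC.CORE:BMC_ApplicationInfrastructure",
        "BMC.CORE:BMC_SoftwareServer",
    ],
}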

 

CI Name: BMC Atrium Discovery and Dependency Mapping Active Directory Proxy 10.1 identified as Active Directory on %hostname%

Parent: BMC.CORE:BMC_ApplicationSystem

Child:  BMC.CORE:BMC_SoftwareServer

CI Class Description: The BMC_SoftwareServer class represents a single piece of software directly running (or otherwise deployed) on a single computer.

 

CI Name: manager module on Apache Tomcat Application Server 7.0 listening on 8005, 8080, 8009 on %hostname%

Parent: BMC.CORE:BMC_SystemService

Child:  BMC.CORE:BMC_ApplicationService

CI Class Description: Class that stores information about services that represent low-level modules of an application, for example the components deployed within an application server. This class has no corresponding DMTF CIM class.

 

CI Name: BSM (Business Service Management - a pattern defined via TKU of software instances)

Parent: BMC.CORE:BMC_System

Child:  BMC.CORE:BMC_ApplicationSystem

Child:  BMC.CORE:BMC_Application

CI Class Description: The BMC_Application class represents an instance of an end-user application that supports a particular business function and that can be managed as an independent unit.

 

If you drill down after modeling each of these, you will notice that each subclass is related to the mapping you've created to BMC.CORE:BMC_ApplicationSystem.

 

 

 

Software Instance is consumed by these forms:

  • BMC.CORE:BMC_ApplicationService

 

It is not consumed by the following forms:

  • BMC.CORE:BMC_SystemSoftware
  • BMC.CORE:BMC_ApplicationInfrastructure
  • BMC.CORE:BMC_SystemService

 

(Non-organized brain dump)

If you look at the CMDB forums, there is a lot of confusion around how technical and business service(s) are defined within the CMDB in relation to BMC Discovery. I've consulted with large companies like Duke, Starbucks, and eBay, and I'm finding that this question is asked over and over again. The way I understood @Doug Muller: there is no direct relationship between business and technical services that maps into BMC.CORE:BMC_ApplicationSystem. These definitions come down to how your business generates revenue with a business service. (If your company makes cars, any system that supports selling cars is tied to business services.) Technical services are defined as supporting business service(s). You can define technical services without a business service. The confusion comes from the type of business your company does and the way BMC represents examples of technical vs. business services. BMC is a company that sells software, so a lot of its business services sound like technical services - but they are not technical services, because those services help generate revenue for BMC Software. More to come later...

 

Why CMDB & Discovery projects fail

 

Project Scope: The scope of these projects starts out as "let's map the services," but the reality is that there is a lot of scope creep. In my experience the value creation is loosely scoped; the value creation for the CMDB needs to be understood and measured for each organization. Discovery covers the automation of discovering IT infrastructure at the data center level, but does not cover end-to-end communications. The mapping of BAIs isn't scoped right; BMC has recognized this issue by adding a managed service to map applications.

Project Constraints: Human resources, knowledge base, wisdom base, and where to start the value creation for an organization.

 

Draft thoughts:

 

Service Modeling brain Dump

 

Service Modeling Best Practices comparable CDM fields

If you want to learn Discovery in detail and how to answer the frequently debated questions, please start here:  ADDM Support Guide


T3: Service Management and Automation Conference is NOW OPEN FOR REGISTRATION AND CALL FOR PAPERS

 

BMC Software and the Tools, Technology & Training (T3) community are listening and you have been heard! We are excited to announce the launching of the first ‘T3: Service Management & Automation Conference’ to be held from 6 to 10 November 2017 at The Palms Hotel and Casino in Las Vegas.

 

The conference is being put on by TTT: Tools, Technology & Training Company. As the flagship sponsor, BMC Software is providing full support in running the conference.

Together, we plan to bring you a robust conference focused on the Tools, Technology and Training that you need to succeed in your roles and accomplish your business needs!

 

The conference website is LIVE and is accessible by going to http://www.tooltechtrain.com/. If you have any questions about the conference, please contact the Board of Directors at info@tooltechtrain.com or call me at 443-812-9891 (leave a message with a phone number and I will get back to you as soon as I can.).

 

Lenny Warren

Member of Board of Directors


If you want to use $TIMESTAMP$ (or a formula based on it) as part of a query to fetch or update data, you can use the following supported method.

 

Use a Get System Info step first to get the actual/current TimeStamp.

 

 

Then, run the date here through Select Values to format the date in the way we want it:

 

 

yyyyMMdd is the format that ARS will accept here.
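If you want to sanity-check the value outside of Spoon, here is a minimal Python sketch of the same idea; the step names above are the Spoon ones, and this only illustrates the yyyyMMdd format that ARS expects:

from datetime import datetime

# Equivalent of Get System Info (current timestamp) followed by
# Select Values formatting the date as yyyyMMdd
now = datetime.now()
timestamp = now.strftime("%Y%m%d")  # e.g. "20170806" - the yyyyMMdd format ARS accepts
print(timestamp)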

 

Then you can use this TIMESTAMP in your query, for example in CMDBInput here:


You may have received a 'Product Change Notification' notice from the BMC support team (dated February 7th, 2017) which notes that the Atrium CMDB Suite has been withdrawn.

 

I wanted to write a post to clarify what has actually happened to the Atrium CMDB Suite, clear up any confusion, and hopefully lessen the flood of enquiries coming into my mailbox asking about it.

 

Atrium CMDB has been sold as part of many of our products and also sold separately. What we have actually done is withdraw the product set (Atrium CMDB Suite) that is sold individually as we now see Atrium CMDB as a core part of our AR Platform.

 

As such, this notification should not be seen as Atrium CMDB going away; in fact, that is far from the case. We have several updates planned at present and will be delivering updates as part of our service packs and standard releases moving forward. All that is changing is how BMC sells and accounts for the CMDB within our product groupings and solutions, removing the need for this separate product listing.

 

I hope this clears up any confusion that our Product Change Notification may have caused.

 

Of course if you have any questions or queries please feel free to reach out to me.

 

Stephen Earl

Lead Product Manager, Atrium CMDB

BMC Software Inc.


Today's topic is about data quality and CMDB data reconciliation. Data management is impacted when data cannot be verified and tracked to its source. We're in a business that seeks data quality, and we often rely on data auditing to see when a change was made - by whom, when, and why. However, in some cases this may not be as clear-cut as it seems from just looking at the change history captured by the AR auditor.


There are lesser-known ways of backtracking changes made to CMDB records, known as CIs and Relationships.
This blog post is about hidden fields in the CMDB application that can help you understand when and how a record was updated.

 

Let's say that the MarkAsDeleted field has been getting set to a Yes value, while the source record is still set to No.
In such a case you'd likely want to know what is making the change, and you suspect a Reconciliation Merge activity as the culprit.
You can follow markers back to their source to see if it really is Reconciliation.

 

Usually the inquiry begins by looking at the recon logs. Logs set to debug level will show what value is being updated.
Value of 0 is No and 1 is Yes.  The log can also indicate that MarkAsDeleted is not being updated at all.


For example. Log shows:

Attribute   : MarkAsDeleted
Value       : 0  << this shows the default value of MarkAsDeleted as No.
From Dataset: BMC.ADDM

 

In this case we'd want to know exactly when the change occurred. You can find the proof by looking at the CI itself. Was it reconciliation doing it? Did it merge the CI with a MarkAsDeleted value of Yes?

 

Here is how to research it:

 

Find the CI in BMC.ASSET dataset that was modified to MarkAsDeleted = Yes and look at the LastModifiedBy value.

Is it "Remedy Application Service" ? If yes, then it could have been Reconciliation, but also Normalization or even AtriumIntegrator using that account.

I am going to need stronger evidence than that. So, my next step is to look at the AttributeDataSourceList field. This field holds the footprint of the last merge. It includes the source record's Dataset Id and a list of Field Ids that "won" the precedence contest from that dataset. If you see BMC.ADDM followed by a list of ids that includes the number "400129100" (MarkAsDeleted), then you can assume that the field was updated during the last merge. But there is no guarantee of whether the value was Yes or No at that point.
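As a rough illustration, here is a Python sketch of that check. The exact delimiter layout of AttributeDataSourceList is an assumption here, so inspect the real field content and adapt the parsing accordingly:

# Hypothetical sketch: did field 400129100 (MarkAsDeleted) "win" from BMC.ADDM
# in the last merge, judging by AttributeDataSourceList?
# The "dataset:id,id,...;dataset:id,..." layout below is assumed, not guaranteed.
MARK_AS_DELETED_FIELD_ID = "400129100"

def won_from_dataset(attribute_data_source_list, dataset_id, field_id):
    for chunk in attribute_data_source_list.split(";"):
        if not chunk.strip():
            continue
        source, _, field_ids = chunk.partition(":")
        if source.strip() == dataset_id and field_id in field_ids.split(","):
            return True
    return False

example = "BMC.ADDM:200000020,400129100;BMC.SAMPLE:240001002"  # made-up value
print(won_from_dataset(example, "BMC.ADDM", MARK_AS_DELETED_FIELD_ID))  # True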

 

attributedatasourcelist.png

 

But we can do better than that.

 

1. Run an AR report to get the value from a hidden field named LastREJobrunId.

 

ReportLastReJobrunId.png

 

This field is defined as follows in Attribute Definitions form:

Attribute Name* : LastREJobrunId
Data Type* : Character
Field ID : 490001289
ForeignKeyID (Class GUID) : BMC_ASSETBASE
instanceId : CMDB_ATTR_ID_LAST_REJOBRUN_ID
Namespace : BMC.CORE

 

LastREJobrunId.png

 

2. Take the LastREJobrunId value from the CI that was changed, go to the form "RE:Job Event", and search for that value in the field named Job Run Instance Id*.
The result should give you about three records: one record for each activity and one record for the log location.

 

REJobEvent.png

 

 

3. Take a look at the date range of these activities and see if the ModifiedDate of the CI fits within the activity related to Merge. These records have a value in "ActivityName" that describes which activity created them. If the CI's Modified Date is outside of the date range spanned by the activity's Create Date and Modified Date, then it was not reconciliation that set the value. You can stop here, because reconciliation simply did not do this; if the value in LastModifiedBy is Remedy Application Service, then the data could have been modified by some other method, like Normalization or AtriumIntegrator. However, if the Modified Date is in sync and within the range of the LastREJobrunId activity, then this record has not been modified by anything else since it was merged into the BMC.ASSET dataset.
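The comparison itself is simple; here it is as a small Python sketch (the dates are illustrative only - take the real values from the CI and from the RE:Job Event records found in step 2):

from datetime import datetime

def modified_by_merge(ci_modified_date, merge_create_date, merge_modified_date):
    # True if the CI's ModifiedDate falls inside the Merge activity window,
    # i.e. reconciliation was the last thing to touch the record
    return merge_create_date <= ci_modified_date <= merge_modified_date

ci_modified = datetime(2017, 8, 1, 10, 15)       # ModifiedDate of the CI
merge_create = datetime(2017, 8, 1, 10, 0)       # Create Date of the Merge activity
merge_modified = datetime(2017, 8, 1, 10, 30)    # Modified Date of the Merge activity

if modified_by_merge(ci_modified, merge_create, merge_modified):
    print("Reconciliation Merge set the value; check the debug-level recon logs.")
else:
    print("Something else (Normalization, AtriumIntegrator, ...) changed the CI.")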

 

REJobEventAndCS.png

 

If you do find that the Modified Date of the CI fits within the Create Date and Modified Date, then it was Reconciliation that did it, and you'd see this captured in debug-level logs.

 

On the other hand, if the investigation did not show that the record was modified within that Create Date / Modified Date range in the Reconciliation Job Events, then some other method was used to make the change.


Hello there fellow AtriumCore enthusiasts.

 

I got inspired to blog about something I've been working on with customers who synch their ADDM data to CMDB.

 

As a side note, ADDM is now called BMC Discovery, but I'll use ADDM for now since most people still refer to it as ADDM. On the CMDB side we are still using the dataset Id "BMC.ADDM" and "ADDMIntegrationId". I am not sure if this will change. I hope not, because it would impact previously reconciled data. Provenance (the source) is referenced in the AttributeDataSourceList field, and changing the DatasetId would impact Precedence rules.

 

Getting the discovered data across is one thing. Understanding what happens to it on the other side is another. I had sort of expected that the ADDM administration skill set also includes AR Server configuration knowledge, but now I know that AR Server and ADDM administration are usually handled by two mutually exclusive teams. This blog will hopefully help each side communicate its expectations to the "other" side.

 

For the ADDM Admin:

You already know that the Remedy AR System hostname and port are the destination for your data. You may also know that there is a Remote Procedure Call (*RPC) queue that ADDM defaults to, but is not restricted to. I'll get to the queue details later, but what is important to know next is that the AR Server is running in a server group. I was trying to think of a good way to visualize this, and one thing that came to mind is the three-headed dog known as Cerberus. I could have also used the multi-headed Hydra as an example, but Cerberus seems more appropriate for what I want to say.

 

 

So, the dog Cerberus has three heads and it guards the underworld. If you have a dog, then you already know that the dog gets pretty preoccupied when eating with just its one head. Imagine having a dog with three heads: you would basically have a dog that listens for squirrels, smokes a cigarette, and has dinner all at the same time. Each head gets to decide what to do, while the food goes into just one stomach. That's kind of what the AR Server does. Information is digested and deposited in one data store. Usually three or more AR Servers are interfaced with the AR Midtier, and at least one of them is dedicated to processing CMDB data in bulk: Reconciliation, Normalization, and AR Escalations.

 

These AR System hosted features also use *RPC queues that allow for seamless data exchange between these external API's.

The AR Server keeps a dedicated queue open for them. ADDM syncs its data to the CMDB via the AR Server using one of these RPC queues.

However, what you need to know is that the CMDB API used by ADDM defaults to queue number “390696”. This queue is actually the CMDB dispatcher queue and using it can sometimes cause contention with other CMDB activities.

 

Unfortunately that CMDB RPC queue cannot be changed, but BMC R&D is looking into adding more CMDB queues and perhaps even a dedicated BMC Discovery (ADDM) queue. Don't quote me on that - it's a rumor. The queue can, however, be set differently on the ADDM side. This is probably something that you will have to talk about with the AR Server administrators: tell them that you want a different queue assigned, for example 390699, so that more *RPC queue threads can be added for it.

 

This is easier said than done, because there are only two CMDB queues available: queue 390698 and 390699.

The first one is usually used by the Reconciliation Engine and the other by Normalization. BMC Discovery will do best if it has its own dedicated queue, but that might be difficult if that same server is used for Normalization and Reconciliation.

In this case it would be easier to add another head (AR Server) to the server group and dedicate it exclusively to specific data processing activities. In AR Server version 9.1 you can dedicate another AR Server to run Normalization by setting its rank in the Failover Ranking form - not the server group ranking. You may still need to install that additional instance if that AR Server provides service to Asset operators via the Midtier. This would be the best way to ensure the best performance for the ADDM-to-CMDB integration.

Cerberus and Hades

Cerberus.png.

 

There is one more thing: AR Escalations. These are features that trigger when data is created or modified. You want to ask the Remedy admin not to have escalations enabled on the same server that receives the ADDM data.

 

Reconciliation From ADDM1.jpg

 

 

For the Remedy guys:

All you really need to know is that ADDM uses a Java API that sends the data, and it can toggle whether that data is sent as a "pre" or "post" 7603 data model.

Anything after CMDB version 7603 should be sent as 7603 with Impact as an attribute. This means that the CMDB will receive the "Impact" relationship designation in an attribute rather than in the old and deprecated BMC_Impact class. It removes the need to run the Deprecation plugin that converts the class to an attribute.


AIS supports Weighted Clusters, that is, clusters composed of differently weighted components. One of the components may be a small server and another a big server. Maybe both are running the same services, but the users of those services will see a bigger impact if the big server goes down, so this needs to be replicated in the Impact Simulator.

 

The basics of the behavior are explained in the documentation. Here: Settings that affect the impact model - BMC Atrium Core 9.1 - BMC Documentation

and Here: Manually creating CI impact models of services - BMC Atrium Core 9.1 - BMC Documentation

 

But how does it really work in the background?

 

ImpactWeight

ImpactWeight is an attribute of the BMC_BaseRelationship class. It requires an integer value. The impact weight is used in impact relationships to determine how much importance (numerically weighted) to give to each provider relationship that impacts a consumer instance. A higher numerical value indicates a greater importance. Impact weight is used with the WEIGHTED_CLUSTER status computation model.

Example

A consumer instance (C1) has impact relationships with three provider instances (P1, P2, P3). Each impact relationship can be ascribed a status weight to gauge the effect that an event propagating from the provider instance has on the consumer. In this example, the following status weight values are entered for each impact relationship:

C1 and P1: StatusWeight=100

C1 and P2: StatusWeight=50

C1 and P3: StatusWeight=25

All three impact relationships apply the direct propagation model.

Instance P3 has a critical event, making its status UNAVAILABLE, which becomes its propagated status. The propagated statuses of P1 and P2 are classified as OK. The propagated status of one impact relationship is UNAVAILABLE, while the other two are OK.

To determine the weighted impact on the consumer instance, refer to the following table of propagated status values for components and to the following formula:

 

Value   Propagated Status

01      NONE
10      BLACKOUT
20      UNKNOWN
30      OK
40      INFO
50      WARNING
60      MINOR
70      IMPACTED
80      UNAVAILABLE

P3 has the status of UNAVAILABLE, which has a value of 80. P1 and P2 have the status of OK, which is 30. P3 has a status weight value of 25; P1 has a status weight value of 100; P2 has a status weight value of 50.

To calculate the impact that the provider relationships have on the consumer and the consumer's impact status, use the following formula, which applies the status weight to each relationship:

Sum of (Propagated Status Value x Status Weight Value) over each impact relationship (P1, P2, P3) / sum of the Status Weight values of all impact relationships

In this example, the formula would be calculated as follows:

[ (30 x 100) + (30 x 50) + (80 x 25) ] / (100 + 50 + 25) = 37.14

37.14 is the weighted mean of the propagated status values. It determines the impact status of the consumer instance. 37.14 is rounded to the nearest propagated status value, which is 40. The impact status of the consumer instance is INFO.
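Here is the same calculation as a short Python sketch, including the rounding to the nearest propagated status value (a sketch of the documented formula, not the actual AIS code):

# Weighted-cluster impact calculation from the example above:
# (propagated status value, status weight) per provider relationship
providers = [
    (30, 100),  # P1: OK, weight 100
    (30, 50),   # P2: OK, weight 50
    (80, 25),   # P3: UNAVAILABLE, weight 25
]

# Propagated status values from the table above (subset relevant here)
status_values = {30: "OK", 40: "INFO", 50: "WARNING", 60: "MINOR", 70: "IMPACTED", 80: "UNAVAILABLE"}

weighted_mean = sum(v * w for v, w in providers) / sum(w for _, w in providers)
print(round(weighted_mean, 2))  # 37.14

# Round to the nearest propagated status value to get the consumer's status
nearest = min(status_values, key=lambda v: abs(v - weighted_mean))
print(nearest, status_values[nearest])  # 40 INFO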

 

In the actual implementation of AIS the table of propagated statuses is much shorter since we do not have that many options in AIS:

 

This is what we implemented.

 

UNAVAILABLE(3, "CRITICAL", "Unavailable", 80),

IMPACTED(2, "MAJOR", "Very Impaired", 70),

MINOR(1, "MINOR", "Impaired", 60),

WARNING(0, "WARNING", "Slightly Impaired", 50),

OK(-1, "OK", "Ok", 30)


The *BMC Atrium CMDB 9.1: For Consumers, Configuration Managers, and Administrators* course teaches students what makes a Configuration Management Database (CMDB) useful to the business, how to configure the CMDB, and how to perform routine administrative and maintenance tasks. This course includes the accreditation exam for BMC Accredited Administrator: Atrium CMDB 9.1, and is also a part of the BMC Accredited Administrator: BMC Atrium CMDB 9.1 certification path.

 

Course details here, or contact Thomas Hogan (EMEA) or Brian Hall (AMER training), or email education@bmc.com

 

UPCOMING PUBLIC CLASSES

AUGUST 8, 2016:  EMEA (online)

AUGUST 8, 2016:  AMER (online)

SEPTEMBER 19, 2016: EMEA (online)

OCTOBER 3, 2016:  AMER (online)

 

Have multiple students? Contact us to discuss hosting a private class for your organization.

 

This instructor-led, five-day course includes a hands-on lab and is designed for:

  • Day 1 (Consumers): Learn how the CMDB is used by key IT Service Management functions, such as Asset, Incident, and Change Management.
  • Days 2-4 (Configuration Managers): Learn how to configure your environment to meet the requirements of CMDB consumers in the business.
  • Day 5 (Administrators): Learn routine administration and maintenance of the CMDB.

 

After having attended this course, you will be able to maximize the out-of-the-box functionality of BMC Atrium CMDB. 

 

IMPORTANT: Included in this course is the examination for BMC Accredited Administrator: BMC Atrium CMDB 9.1: Consumers, Configuration Managers and Administrators. Taking the exam and pursuing accreditation is optional; however, all students enrolled in this course are automatically enrolled in the exam. You will have two attempts to pass the BMC Accredited Administrator exam. No retakes will be offered. Those who pass will receive the title of BMC Accredited Administrator: Atrium CMDB 9.1.

 

Atrium CMDB Heather Leventry Michelle Kerby Naji Abdallahi Tom Luebbe Jeff Jeffress Vidhya Srinivasan Joon Hahn Geoffrey Bergren Marike Owen Dirk Braune Elaine Miller Mitch Myers

 

Screen Shot 2016-07-19 at 6.46.05 AM.png


The BMC Remedy Single Sign-On Service Provider (SP) certificate shipped with the product, which is used to sign SAML requests, will expire on April 21st, 2016.

 

If you are using the out-of-the-box certificate to sign SAML requests in BMC Remedy Single Sign-On, the requests will fail once the certificate expires.

 

In this blog, I will be covering the steps to update the BMC Remedy Single Sign-On (RSSO) SP certificate so that it has a new expiry date, which will prevent SAML authentication from failing.

 

If this certificate has already been replaced with a newer one with a valid future expiry date, you don't have to follow the steps mentioned in this blog. 

 

First of all, how do you find the certificate expiry date of the relying party (RSSO) for SAML authentication?

 

  • An easy way to find the certificate expiry is by logging in to the ADFS tool and checking the properties of the RSSO service provider relying party.
  • In the Signature tab, you should see the certificate expiry date.

 

Likewise, for other IdP tools that you are using with RSSO, you will have to contact your IdP administrator to check the RSSO relying party certificate expiry date.

 

What steps are necessary to update BMC Remedy Single Sign On (RSSO) SP Certificate?

 

Important Notes:

 

(A) The instructions below are written for Windows. All paths mentioned below are Windows paths; please use the equivalent paths if you're using Linux or Solaris.

 

(B) The file name for the Java keystore should be cot.jks. The alias for the Java keystore (cot.jks) should be test2. The password for the cot.jks keystore is 'changeit'. Please do not change the password.

 

(C) Please make sure to add the JDK or JRE bin folder to the Path environment variable, or else you may get an error like 'unknown internal or external command'. In Windows this means that you'll need to edit the System Environment properties and update the global PATH variable.

 

1.png

 

Steps to update the certificate:

 

1. Update the Java keystore named cot.jks

 

Perform the following steps on the machine where the RSSO server is installed, from the <tomcat>\rsso\WEB-INF\classes folder:

 

a. Take a backup of existing cot.jks from <tomcat>\rsso\WEB-INF\classes folder

 

b. Delete the alias 'test2' from the existing cot.jks using the keytool command line:

 

keytool -delete -alias test2 -keystore cot.jks

 

Note:  The password for the cot.jks is "changeit".  Please don't change the password

 

c. Create a new keypair with alias ‘test2’ in existing cot.jks

 

keytool -keystore cot.jks -genkey -alias test2 -keyalg RSA -sigalg SHA256withRSA -keysize 2048 -validity 730

 

Note:  In the above example we used a validity of 730 days, which is equivalent to 2 years. You can choose the validity period at your discretion.

 

d. Export ‘test2’ certificate in PEM format

 

keytool -export -keystore cot.jks -alias test2 -file test2.pem -rfc

 

e. Take a backup of the updated cot.jks

 

If you have other RSSO server instances in the same cluster, replace cot.jks in the <tomcat>\rsso\WEB-INF\classes folder with the cot.jks updated in step 1.e

 

2. Update signing certificate in RSSO Admin console

 

a. Log in to the RSSO Admin console

 

b. Go to ‘General->Advanced’ tab

 

c. Open the file test2.pem created in step 1.d in a text editor and remove the first line:

 

(-----BEGIN CERTIFICATE-----)

 

and the last line:

 

(-----END CERTIFICATE-----)

 

Also remove the newline delimiters (\r\n), and then copy the contents.

E.g. if you use Notepad++, you can open the 'Replace' dialog, select the 'Extended' search mode, find '\r\n', and click the 'Replace All' button.
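If you prefer to script this instead of using a text editor, here is a minimal Python sketch that does the same thing (assuming test2.pem is in the current folder):

# Strip the BEGIN/END lines and all newlines from test2.pem so that the
# remaining base64 content can be pasted into the 'Signing Certificate' field
with open("test2.pem") as f:
    lines = [line.strip() for line in f
             if "BEGIN CERTIFICATE" not in line and "END CERTIFICATE" not in line]
print("".join(lines))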

 

 

2.png

 

d. Paste the content copied in step 2.c into the 'Signing Certificate' field, replacing the existing content in the text area

 

3.png

 

e. Click ‘Save’ button to save the change

 

f. Wait 15 seconds, view the realm using SAML, and click the 'View Metadata' button in the 'Authentication' tab. Verify that the SP metadata is updated with the new signing certificate.

 

3. Update SP metadata at IdP side

 

- Export the SP metadata from step 2.f and save it to a local file

 

- Send the exported SP metadata and the new signing certificate from step 1.d to the IdP team for updating.

 

If the IdP is ADFS, the customer can add the new signing certificate as below:

 

a. Open ‘Properties’ dialog of the relying party for RSSO
b. Go to ‘Signature’ tab
c. Click ‘Add’ button, select the new signing certificate file and click ‘OK’

 

4.png

 

 

Notes for rolling upgrades (Cluster / High Availability environment)

 

Should you have a requirement for zero downtime in a cluster environment (assuming ADFS is the IdP) when updating the signing certificate, then you can take actions in the following sequence:

 

1. Take one RSSO server instance down first, perform step 1 on it
2. Perform step 2
3. Perform step 3 (remember NOT to delete the old signing certificate)
4. Bring the RSSO server instance up again
5. Take the second RSSO server instance down, update its cot.jks with the one already updated on the first RSSO server instance in step 1.e, then bring it up again
6. Repeat step 5 on all other RSSO server instances
7. After the keystore cot.jks is updated on all RSSO server instances, you can remove the old signing certificate from the RSSO relying party on the ADFS side.


MSSQL comes in many forms, versions, and with many authentication methods.

Most of the time we end up using the regular MSSQL authentication, where we provide a username/password combination, the user gets authenticated in the DB and there we go.

Many other times we need a more complex authentication method. Integrated Security, NTLM, Windows Authentication, Named Instances, Domains, etc. All those parameters can affect the way we need to login to MSSQL Database when we use AI/Spoon to connect.

 

Spoon has two basic options to connect to MSSQL:

 

The first one is the most used one, "MS SQL Server"

This one uses the 1.2.5 JTDS driver.

This driver is good for most common cases, but if you want to use Windows Authentication, Integrated Security or Named Instances you may run into problems.

 

You can use the JTDS FAQ (jTDS JDBC Driver) to answer the most common questions about this driver. Even some troubleshooting of the most common errors thrown.

 

I personally recommend you download the latest JTDS driver from here: jTDS - SQL Server and Sybase JDBC driver download | SourceForge.net

As of this writing it is 1.3.1. If you do not have this version you may encounter some issues connecting to MSSQL 2012 or higher, even if you are using the most basic of authentications.

 

If you want to use Integrated Security (also called Active Directory authentication, or AD) you can do so here as well, or even use Domain/User authentication. I will explain how each one works.

 

If you are not familiar with AD, it's a centralized authentication mechanism allowing access to the various hardware and services in the network. By centralizing the authentication process, the same user account can be used to access multiple resources, and it eliminates some of the setup needed to enable those users on various systems. Most DBAs prefer to use AD authentication for those reasons, and if you will be using PDI to access multiple MSSQL systems, you'll probably want to become familiar with setting it up.

 

  1. Although Microsoft provides their own JDBC driver, which we will cover later on this post, this time around we will be using the open source driver jTDS.
  2. Extract the archive file and open it. Copy the jtds-1.3.1.jar file to the Pentaho .../data-integration/libext/JDBC folder on your system. Remove the jtds-1.2.5.jar file from there.
  3. In the folder where you extract the archive, locate the subfolder matching your systems architecture (x64, x86 or IA64). Open it, and open the SSO subfolder.
  4. Copy the ntlmauth.dll file to a folder on your path. (From a command prompt, enter echo %PATH% on Windows or echo $PATH on Linux to see the current path.) On my system, I copied the file (as root) to the /usr/local/bin folder. In Windows a good location could be /Windows/system32/
  5. Open the Pentaho GUI (aka Spoon) and start a new job. Click on the VIEW tab in the Explorer panel.
  6. Right click on Database Connections, and choose NEW to open the Database Connection window.
  7. Enter a name in the Connection Name box to identify it.
  8. Scroll down in Connection Type and choose MS SQL Server.
  9. In the Access panel, make sure Native (JDBC) is selected.
  10. In the Settings panel, enter your server's hostname or IP address, the database you want to connect to, the port SQL Server is using (by default it's 1433), and the user name and password in the appropriate fields. You can leave Instance Name empty unless your DBA tells you the server is using a named instance. It should look something like this: Screenshot-Database Connection -1
  11. In the left most panel, select Options. The right panel will refresh, and will probably only have one value entered: “instance”. Leave the value as is.
  12. Only in cases where you want the actual Windows user account that started Spoon to be the one authenticated, add a parameter called "integratedSecurity" (watch the text case) and set the value to true. If this is true, then you do not need to provide a username and password combination; Windows will use the current user that started Spoon to authenticate to the DB. This is most of the time NOT a good option, since, for example, you may need to connect to two different DBs with two different domain/user combinations. If so, do not use this parameter.
  13. Add another parameter called "domain" and set the value to your network's domain name. (You can use the full domain name or the shorthand one.) Screenshot-Database Connection -2
  14. Click the TEST button at the bottom of the screen, and you should be rewarded with a successful connection window. Click OK and you are done.

Note: You may get the message of

The login is from an untrusted domain and cannot be used with Windows authentication. (code 18452, state 28000)

 

Some SQL Server machines are forced to work exclusively with NTLMv2 authentication. When attempting to authenticate with SQL Server using Windows Authentication, the validation will fail with an untrusted user/domain error.

In order to verify which authentication is used, execute gpedit.msc on the SQL Server host and look at the selected value of Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Network Security: LAN Manager Authentication Level.

(ie. Authentication Level is set to "Send NTLMv2 response only. Refuse LM & NTLM" )

The default value for useNTLMv2 is false, so we need to set it to true to send LMv2/NTLMv2 responses when using Windows/Domain authentication.

 

We need to set the parameter "useNTLMv2" to "true" under "domain" (step 13 above).
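Purely as a hedged illustration of how those two options map onto the jTDS connection string (Spoon builds the URL for you from the Options panel), this is roughly what an equivalent connection looks like outside of Spoon; the jaydebeapi package, host, database, domain, and credentials below are assumptions/placeholders:

# Illustrative only: connect through the jTDS driver with domain
# authentication and NTLMv2 enabled
import jaydebeapi  # assumes the jaydebeapi package is installed

url = ("jdbc:jtds:sqlserver://sqlhost.example.com:1433/MyDatabase"
       ";domain=MYDOMAIN;useNTLMv2=true")

conn = jaydebeapi.connect(
    "net.sourceforge.jtds.jdbc.Driver",   # jTDS driver class
    url,
    ["myuser", "mypassword"],             # domain user and password
    "/path/to/jtds-1.3.1.jar",            # the same jar copied into libext/JDBC
)
conn.close()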

 

Note also that if you decide to go with Integrated Security, the username/password combination will be the one that actually runs the service that tries to connect to the DB. In this case it is Spoon, but if you run the job from the AI Console, the connection will be triggered/established by the Carte server. The Carte server is a process that runs under the ARSystem (as a child of it) and is started and monitored by armonitor. Its starting line is in the armonitor.cfg/conf file, and unless you do something about it, the Carte server will be started by the same user that starts the AR Server, which means this user may fail to connect even though you tested the job from Spoon and it was working.

 

For the above I do NOT recommend using Integrated Security unless really needed.

 

----------------

Now to the second option: MS SQL Server (Native)

 

You normally would use this option for better performance and for using the official Microsoft supported JDBC connection. This JDBC driver is the most compatible driver in the market, and it's officially supported by the same company that releases new MSSQL releases, so it makes sense that any new versions of MSSQL DB Server may be accompanied by new JDBC releases as well. In case your JTDS driver above stops working you may have to default to this one for new DB releases.

 

To deploy/install this you need to:

1. Download and install the JDBC 4.0 from Microsoft official webpage. https://www.microsoft.com/en-us/download/details.aspx?id=11774

2. After downloading, run the installer so that the files get unzipped to one folder. Then go to that folder and grab sqljdbc4.jar; copy this file to <AtriumIntegrator folder>/data-integration/libext/JDBC/

3. Do the same with the sqljdbc_auth.dll file from the 64 bits folder (unless you are on a 32 bit system).

4. Open Spoon, create a new DB Connection and use the MS SQL Server (native) option.

If you want to use the integratedSecurity feature, you can leave the username and password fields blank, since it will use the credentials of the user who opened Spoon. On the left-hand side of the connection properties page, go to Options. Add one property, integratedSecurity, and set the value to true.


Hi!

 

BMC does not support running Spoon on Linux, although we know from the Pentaho forums that it is quite possible to do.

What problems did you encounter when running Spoon in Linux (if you ever tried)?

 

Here is an issue we recently observed:

 

Short explanation: The culprit of the original error is that the /tmp filesystem had the "noexec" mount option set, which prevented any app from executing scripts there.

 

Long explanation:

1) Replace the swt.jar from BMC's Pentaho distribution (which is version 3.346 and located in "/opt/bmc/ARSystem/diserver/data-integration/libswt/linux/x86_64") with the one downloaded from https://www.eclipse.org/swt/ (version 4.527)

 

2) edited spoon.sh (in /opt/bmc/ARSystem/diserver/data-integration folder) and substituted:

 

OPT="$OPT $PENTAHO_DI_JAVA_OPTIONS -Djava.library.path=$LIBPATH -DKETTLE_HOME=/opt/bmc/ARSystem/diserver -DKETTLE_REPOSITORY=$KETTLE_REPOSITORY -DKETTLE_USER=$KETTLE_USER -DKETTLE_PASSWORD=$KETTLE_PASSWORD -DKETTLE_PLUGIN_PACKAGES=$KETTLE_PLUGIN_PACKAGES -DKETTLE_LOG_SIZE_LIMIT=$KETTLE_LOG_SIZE_LIMIT"

 

by :

 

OPT="$OPT $PENTAHO_DI_JAVA_OPTIONS -Xbootclasspath/a:${LIBPATH}swt.jar -Djava.library.path=$LIBPATH -DKETTLE_HOME=/opt/bmc/ARSystem/diserver -DKETTLE_REPOSITORY=$KETTLE_REPOSITORY -DKETTLE_USER=$KETTLE_USER -DKETTLE_PASSWORD=$KETTLE_PASSWORD -DKETTLE_PLUGIN_PACKAGES=$KETTLE_PLUGIN_PACKAGES -DKETTLE_LOG_SIZE_LIMIT=$KETTLE_LOG_SIZE_LIMIT"

 

(the only difference is -Xbootclasspath/a:${LIBPATH}swt.jar; the reasoning is based on: http://stackoverflow.com/questions/19969637/noclassdeffounderror-classnotfoundexception-while-using-swt)

 

3) Started Pentaho Spoon:

 

cd /opt/bmc/ARSystem/diserver/data-integration

./spoon.sh

 

-------------------------------------

There are more experiences here: Diethard Steiner on Business Intelligence: Having problems starting Pentaho Kettle Spoon on Linux? Here are some solutio…

 

Can you share yours?


They say that the stars are bright, late at night and deep in the heart of Texas. That was certainly the case here in Austin last night. The clouds rushing over the night sky occasionally exposed the Moon, which was in conjunction with Jupiter. What a sight. According to my team of astrologers, this is supposed to be a favorable time when communications flow with clarity.

 

And so, this morning a colleague of mine made something very clear to me. He said that the Java AutoUpdate issue is currently the top concern in the field. At that point I visualized the faces of administrators who work with Java-based applications around the world, their expressions being dismay and puzzlement. My facial expression was more sceptical than that. I realized that every Java path we've ever used to hard-code the location of the java executable is no longer valid.

 

So what happened here? A Windows-based task called the Java AutoUpdater downloads Java updates and installs them at will. That in itself is not that big of a deal. In the past, the previous Java was left intact and it would still work wherever it was hardcoded.

But this time around the Java AutoUpdate also removes the contents of the former bin directory. Yes, the one from the previous install.

 

For example, java.exe is now gone from the JAVA_HOME\bin folder, and if you had that path in any Java application launcher, then the app will no longer start. This path is no longer valid:

 

"C:\Program Files\Java1.7\bin\java.exe  -classpath ...."

 

Instead it becomes something else, and you need to manually change it to the new location.

 

"C:\Program Files\Java1.8\bin\java.exe  - classpath ..."

 

Bit of a nightmare if you think of how many java based applications are in use around the world.

 

Perhaps there is a lesson we can learn from this: do not hard-code Java paths this way. The alternative is to always use an environment variable like %JAVA_HOME% or %JAVA_BIN% instead of C:\Program Files\Java1.7\.

 

Example:

 

%JAVA_HOME%\bin\java.exe  -classpath ...

or

%JAVA_BIN%\java.exe  -classpath ...

 

Unfortunately this presents another issue for compatibility. What if my application was coded against a specific feature that has now changed or been deprecated in the new version? Either way, please be on the lookout, and always first check the contents of your bin directory, formerly known as JAVA_HOME.
