
TrueSight Operations Mgmt


PATROL Agent and KMs in Docker Containers



This blog explains a way to create a Docker image from a TrueSight repository package, which you can then run inside a Docker container. A Docker container is not a replacement for a full OS, so local OS monitoring (OS KMs) is not what it is intended for. Remote monitoring KMs (such as VMware, remote OS monitoring, etc.) are ideal and best suited for containerization. We have used the VMware KM for this exercise.


Docker Overview

Docker is a platform that enables users to build, package, ship and run distributed applications. Docker users package up their applications, and any dependent libraries or files, into a Docker image. Docker images are portable artifacts that can be distributed across Linux environments. Images that have been distributed can be used to instantiate containers where applications can run in isolation from other applications running in other containers on the same host operating system.



Prerequisites:

  1. The PATROL components for which you want to create an image must be packaged as a TrueSight CMA repository package (tar file).
  2. The TrueSight repository should be v10.0 or above.
  3. Make sure all the components are 64-bit, as Docker only supports 64-bit architectures.
  4. Make a note of all inputs used while creating the repository package, as the same details must be used in the Dockerfile (e.g. PATROL default account, install directory).


PATROL package details used for this exercise:

  1. The components in the PATROL package are PATROL Agent for Linux v10.0, Oracle JRE for Linux, and VMware KM v4.0.0.
  2. Provide the Docker host root password for the root user details.
  3. Provide the Integration Service to connect to during package creation.
  4. Save the PATROL package as a tar file.





# -------------------------------------------------------------

# The resulting image will have the PATROL components from the package installed



# -------------------------------------------------------------

# (1) patrol_cma_package.tar

# Create the tar package of the PATROL components from a v10.0 or later CMA



# -------------------------------------------------------------

# Create a new directory on the Docker host and put the tar package along with this Dockerfile in it.

# Run:

#      $ docker build -t "patrol:1" .


# Pull base image

# -------------------------------------------------------------

FROM centos


# Maintainer

# ----------



# [REVIEW] Environment variables required for this build (change the values of PATUSER and INSTALLDIR if required)

# -------------------------------------------------------------

# PATUSER, INSTALLDIR and BASEDIR are referenced later in this Dockerfile;
# the defaults below must match the values used when the package was created.

ENV TARGET Linux-2-6-x86-64-nptl
ENV PATUSER patrol
ENV INSTALLDIR /opt/bmc
ENV BASEDIR /opt/bmc_products





# Add the installer file to container file system

# -------------------------------------------------------------

ADD patrol_cma_package.tar $BASEDIR


#[REVIEW] Setup filesystem and patrol user

# The encrypted value next to the -p argument is the password for the patrol user.

# To get the encrypted value of your password for the patrol user, use the following command:

# openssl passwd -crypt <password>

# ------------------------------------------------------------

RUN useradd -p Q70GsdNXWnwzs $PATUSER


RUN chmod -R 777 $INSTALLDIR


# Set up a hostname command, as it is used by the PATROL silent installer and is not available in the base image

# --------------------------------------------------------------

RUN echo "cat /etc/hostname" > /usr/bin/hostname

RUN chmod +x /usr/bin/hostname


# Install PATROL package

# --------------------------------------------------------------

WORKDIR /opt/bmc_products

# RunSilentInstall.sh is the silent installer script extracted from the CMA package tar
RUN sh ./RunSilentInstall.sh


# Setup required PATROL environment

# --------------------------------------------------------------






# Remove PATROL installer

# --------------------------------------------------------------

RUN rm -rf /opt/bmc_products


# Define default command to start PATROL Agent

# This will start the PATROL Agent on the default port 3181. To change the port, replace the command with the following:

# CMD ["PatrolAgent", "-p", "6755"]

# -------------------------------------------------


CMD ["PatrolAgent"]

#***************END of Dockerfile****************


Building the Docker Image:

  1. Create a new directory on the Docker host.
  2. Copy the PATROL tar package into the new directory.
  3. Copy the Dockerfile into the same directory.
  4. Review the Dockerfile sections marked with [REVIEW] and make the appropriate changes.
  5. Create the new Docker image using the following command:

          $ docker build -t "patrol:1" .

Note: In the command above, "patrol" is the image name and "1" is the tag. Do not remove the "." (dot) at the end of the command.
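The encrypted value passed to `useradd -p` in the Dockerfile has to be a crypt(3) hash. Below is a hedged sketch for generating your own (the Dockerfile comments use `openssl passwd -crypt`, but newer OpenSSL releases have dropped the legacy `-crypt` option, so MD5-crypt via `-1`, which `useradd` also accepts, is shown instead; the password is a placeholder):

```shell
# Generate a crypt(3)-style hash for the PATROL user's password.
# -1 selects MD5-crypt ($1$...); substitute your own password.
HASH=$(openssl passwd -1 "MySecretPassword")
echo "$HASH"   # paste this value after -p in the RUN useradd line
```

Because the hash ends up in the image, treat the Dockerfile as sensitive, or override the account details at build time.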


Verify the Docker image was created:

$ docker images


Create container using Docker image:

To create a container from the Docker image, use the following command:

$ docker run -d -p 5000:3181 -h patrolagent-1 "patrol:1"

Details of the command above:

  1. -p 5000:3181 binds container port 3181 to port 5000 on the Docker host. To access the PATROL Agent externally, use port 5000.
  2. -h (--hostname) is not mandatory, but it helps to set a valid hostname for the container, which is consumed by the PATROL Agent when setting up the device name in TrueSight. By default, the container ID is used as the hostname.

Note: To override ENV variables at run time, use the -e argument of the docker run command.


Things to know:

  1. The configuration of a PATROL Agent running inside a Docker container can be changed using the remote pconfig utility.
  2. Restarting the PATROL Agent using pconfig or the console will not work: it stops the PATROL Agent, but to start it again the container has to be restarted.
  3. If not done during package creation, integration with TrueSight can be enabled by setting the /AgentSetup/integration/integrationServices variable.
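For item 3, the variable can be applied with a pconfig rule file. A minimal sketch; the Integration Service address tcp:tsim-host:3183 is a placeholder, so substitute your own Integration Service host and port:

```
PATROL_CONFIG
"/AgentSetup/integration/integrationServices" = { REPLACE = "tcp:tsim-host:3183" }
```

Apply the file with the remote pconfig utility against the Docker host port that was mapped to the agent port (5000 in the docker run example above).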


Why would you change the beacon post from HTTP to HTTPS?


However, before explaining why we would want to change the beacon post, let's discuss what a beacon post is within App Visibility End User Monitoring.


In App Visibility End User Experience Monitoring, a JavaScript snippet is injected into the actual pages loaded by users. That JavaScript then sends beacons to TrueSight App Visibility Manager that contain metrics, collected from the end user's browser, about the page it just finished rendering.


App Visibility End User Experience Monitoring either needs an Agent to automatically inject the JavaScript (which can only work in servers supported by the AD Agent) or you need to manually add the JavaScript to your pages (which will work for any server).




Going forward we will also be introducing 2 more modes in Automatic Injection which will not require the agent.

As an application specialist you would be able to Automatically set up Active End User Monitoring for the following:

1) F5 Server

2) Apache Web Server




The AD Agent just monitors the request as normal. However, when it sees the response containing the actual page being sent to the browser, it edits that page automatically before passing it on back to the client. In this edit, it inserts a <script> tag into the Head section of the HTML telling the browser where to find the JavaScript, and leaves the rest of the page unchanged.

Thus, when the page returns to the client browser, that browser begins to execute the JavaScript inside the browser itself. When significant actions happen in the browser, that script sends out notifications called Beacons to the TrueSight App Visibility Proxy component. Note that the hostname and port of the Proxy component were hard coded into the script when the AD Agent edited the page, so all Beacons from that page will go to the same Proxy until the page is reloaded or a new page is loaded. A new Proxy component may be dynamically assigned by the AD Agent in that new edit of the page.

The Proxy component then sends its information up to the TrueSight App Visibility Portal, where it becomes available just as any other TrueSight App Visibility data.


The App Visibility End User Experience Monitoring data that comes in through the Proxy component to the TrueSight App Visibility Portal is visible within the Application View in the TrueSight Presentation Server.


Where the AD Agent Business Transaction data populates the Web, Business, and Database tiers, the App Visibility End User Experience Monitoring data populates the User and Network tiers.


Below is a link to the documentation that will explain App Visibility End User Monitoring further:



OK so why would you change the beacon post from HTTP to HTTPS?


By default, the Beacons match the protocol used by the page. For example, if the application web page is HTTPS, then App Visibility End User Monitoring will inject the JavaScript into the web page, and the beacon post will be HTTPS.


If the application web page is HTTP, then a setting in App Visibility End User Monitoring needs to be changed in order to inject the JavaScript into the HTTP web page, and the beacon post will be HTTP. However, there may be certain situations where the user would like to receive the Beacon over HTTPS while the application web page is served over HTTP.

So, if the user wants the beacon post to be more secure then the protocol would need to be changed to HTTPS.



Below are the steps to change the beacon post from HTTP to HTTPS:


In the JavaScript code, there are variables for the App Visibility Proxy HTTP and HTTPS ports. Look for the code that builds the beacon URL based on the page protocol, find the URL that is built for the HTTP protocol, and update the beacon URL to use the HTTPS port and protocol.

In the install directory of every App Visibility Proxy, update the <apm-proxy-install-dir>/webapps/static-resources/aeuem-10.1.0.min.js file. Change 'http' to 'https' in all the places that are marked in the screenshot.


Changing Beacon Post.png
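The change itself is a find-and-replace. Below is a hedged sketch using a stand-in file rather than the real minified script; the exact tokens inside aeuem-10.1.0.min.js may differ from this sample, so always edit the places marked in the screenshot rather than replacing blindly:

```shell
# Stand-in for <apm-proxy-install-dir>/webapps/static-resources/aeuem-10.1.0.min.js
# (hypothetical one-line content; the real file is minified JavaScript)
printf "var beaconUrl='http://'+proxyHost+':'+httpPort;\n" > aeuem-sample.js

cp aeuem-sample.js aeuem-sample.js.bak        # keep a backup before editing
sed -i "s/'http:/'https:/g" aeuem-sample.js   # flip the beacon protocol
grep https aeuem-sample.js                    # confirm the change took effect
```

Keeping the .bak copy makes it easy to roll back if beacons stop arriving after the edit.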




How to verify that the beacon post is now HTTPS?


The best way to verify that beacons are being sent over HTTPS is to use the Developer tools available in any browser.

After opening the Developer tools, check the Network tab --> JavaScript (JS) --> Protocol.


Here you can see the "beacons" item and verify that it is sent over HTTPS.


Do these steps apply to both Manual and Automatic JavaScript Injection, or just Manual JavaScript Injection?


These steps apply to both Manual and Automatic Injection (of any kind).


We are excited to announce the availability of the following role-based trainings and certified associate exams on TrueSight Operations Management 11.x.


New ILT Courses:

  • BMC TrueSight Operations Management 11.x: Fundamentals Installing
  • BMC TrueSight Operations Management 11.x: Fundamentals Administering

New Certified Associate Exams:

  • BMC Certified Associate: TrueSight Operations Management 11.x for Administrators Online Exam
  • BMC Certified Associate: TrueSight Operations Management 11.x for Consultants Online Exam


To know more and to register, go to the TrueSight Operations Management Learning Path:




BMC TrueSight Operations Management users can now get more out of their network data.  Entuity Network Analytics seamlessly integrates with TrueSight to provide your IT team with the complete tools they need for success.



Reduce Your Alert Noise:   Let Entuity Network Analytics for TrueSight weed out the non-essential network event alerts that get escalated into TrueSight for more productive management


When it comes to service management, the more details you can see about the network the better. However, some of these details are not entirely useful. While network alerts can act as a flag for when something is wrong, they can also act as a false flag, alerting the professional to an event that is not, or will not lead to, a major service incident. You and your team probably get countless alerts throughout the day. If there is a major incident, these alerts should help prevent or lessen a network outage, not create more work assessing whether they are critical alerts. If they are not major alerts, they serve as a noisy distraction from the day-to-day tasks and projects that you are occupied with. Often too much time is spent categorizing these alerts into events, and when too much time is spent on insignificant events, other projects do not receive the attention they require.

When integrated with TrueSight, Entuity Network Analytics (ENA) eliminates the non-essential network alerts by pre-processing network incidents. With the non-threatening events weeded out before the IT professional sees them, the overall event information that TrueSight receives is more effective. This means ENA adds more value to your TrueSight operations: when the network noise is reduced, you can expect that the event alerts being escalated are legitimate alerts. More efficient event processing permits IT staff to tackle more pressing issues rather than wasting time manually validating event alerts.


Network services can be managed individually to ensure performance stability with powerful sets of analytics (order processing, VoIP, E-Commerce, remote branch IT performance)


ENA lets you take your management to the next level with the ability to manage by network services. Now IT teams can track a particular application's traffic patterns out and back to the user and associate all of the devices that the application encompasses. For example, an order processing application can be managed not only as an application within TrueSight, but through ENA you can also see how the order processing data gets sent back and forth to the user. Is performance slow because of the application, or because of how the data is being transported? ENA can manage by network service to answer the question of how the service is being handled on the network. Coupled with TrueSight, ENA gives you another way to manage your applications for outstanding performance. Visit for more information.


Consider the following typical business scenario:


business scenario.png

Effective operation management is essential for maintaining a healthy and thriving business. IT operations must keep applications, infrastructure, middleware, and services up and running to support key business processes.

BMC TrueSight Operations Management is a unique performance and availability management solution that goes beyond monitoring to handle complex IT environments and diverse data streams and deliver actionable IT intelligence. This can help resolve issues before they impact the business.

Additionally, BMC TrueSight Operations Management provides application-aware infrastructure monitoring for IT Operations, bringing together infrastructure and application monitoring in one integrated solution.

Operators play a crucial role in day-to-day product management. They need to monitor events, devices, and event groups, and work with dashboards, to address performance monitoring and incident management for an IT infrastructure.

The BMC TrueSight Operations Management 11.x: Fundamentals Operating training is specially designed for TSOM 11.x operators and covers all the exciting features of the product that are crucial for operators. It is a 1-day training containing many relevant labs that are useful for operators. For more information and registration details, visit the Education COURSE page - BMC TrueSight Operations Management 11.x: Fundamentals Operating - BMC Software





As your system grows, you may want to start putting multiple Synthetic TEA Agents on a single Windows system.  When you do this, it is BMC best practice to have only one TEA Agent from a given location on each machine.  For instance, let's say you have 3 locations and 3 TEA Agents assigned to each location:

Blue Location:

Blue Agents.png

Black Location:

Black Agents.png

Green Location:

Green Agents.png


You will then set up 3 Windows systems and install 3 TEA Agents on each system.  Each TEA Agent will point to one of the above locations.  For instance:

Windows Systems.png


If Windows System 3, Blue Location Agent 3 goes down:

Agent Crashes.png


then the system will load balance the Execution Plans from that system and agent to the other agents on System 1 and System 2. There will be only a single affected agent on each of System 1 and System 2, so it will not have such a large impact on system resources.


If you have a scenario where your agents are not well balanced, then you may run into a situation where you have too many agents from the same location on the same system.  In this scenario, if one of those agents goes down or if the system goes down, then the system will load balance the Execution Plans to the other available agents.   This may lead to all the Execution Plans going to the same system or becoming "unbalanced" which could overwhelm the resources on that machine and either bring down that agent, or cause that agent to become very slow and take several hours to recover.

Too Many Agents crash.png


Why should I have minimal access and very few applications running on my system that is hosting a TEA Agent?


Typically, you run a TrueSight Synthetic TEA Agent as a process because your script needs access to the desktop or requires special privileges that running the TEA Agent as a service would not provide.


It is BMC's recommendation that when you run your TEA Agent as a process, you do not run any other applications on the system that may interfere with the TEA Agent or your script.  It is also important that you do not allow any users to log into the system, because some scripts must have access to the desktop and mouse in order to work properly.  If a user logs in and takes control of the mouse, there is a very significant chance that the scripts will lose the mouse and not be able to click on items when needed, causing the scripts to fail and give false alarms.


Also, if there are other processes running on the system that are using resources that are needed by the scripts, then the scripts may start to fail continuously until the TEA Agent can be restarted.  On a production server, this can be very hard to do when you can only restart processes during maintenance windows.


Coming up on October 18, 2018 is BMC’s annual user event, the BMC Exchange in New York City!



During this free event, there will be thought-provoking keynotes including global trends and best practices.  Also, you will hear from BMC experts and your peers in the Digital Service Operations (DSO) track.  Lastly, you get to mingle with everyone including BMC experts, our business partners, and your peers.  Pretty cool event to attend, right? 


In the DSO track, we are so excited to have 3 customers tell their stories. 

  • Cerner will speak about TrueSight Capacity Optimization and their story around automation and advanced analytics for capacity and planning future demand.   Check out Cerner’s BizOps 101 ebook
  • Park Place Technologies’ presentation will focus on how they leverage AI technology to transform organizations.
  • Freddie Mac will join us in the session about vulnerability management.  Learn how your organization can protect itself from security threats.  Hear how Freddie Mac is using BMC solutions. 


BMC product experts will also be present in the track and throughout the entire event.

  • Hear from the VP of Product Management on how to optimize multi-cloud performance, cost and security
  • Also, hear from experts on cloud adoption.  This session will review how TrueSight Cloud Operations provides you visibility and control needed to govern, secure, and manage costs for AWS, Azure, and Google Cloud services.


At the end of the day, there will be a networking reception with a raffle (or 2 or 3).  Stick around and talk to us and your peers.  See the products live in the solutions showcase. Chat with our partners.  Stay around and relax before heading home. 


Event Info:

Date: October 18th 2018

When: 8:30am – 7:00pm

  • Keynote begins at 9:30am
  • Track Sessions begin at 1:30pm
  • Networking Reception begins at 5:00pm

Where: 415 5th Ave, NY, NY 10016


For more information and to register, click here


Look forward to seeing you in NYC!  Oh, and comment below if you are planning to attend!  We are excited to meet you.


Thank you for participating in the TrueSight IT Data Analytics Community. To expand opportunities for collaboration, and simplify participation for members, this community will be consolidated with the TrueSight Operations Management community in the near future.

Cloud App 1-RR.png

This enhancement will continue to allow you to share information with ITDA customers, but will also expand your opportunities for collaboration by adding members of the much larger, TrueSight Operations Management community.   You will still be able to share ideas and experiences about ITDA, and how the collection of IT operations data (logs, machine data, events) can help you investigate and solve problems faster, avoid outages, increase efficiency, and reduce manual effort.  In addition, you will be able to view ITDA within the larger context of TrueSight Operations Management and communicate with customers who have that expanded solution.


Your support of this improvement is appreciated and we look forward to your continued and expanded collaboration in the future.


What does it mean for you?

You will be able to participate as in the past, on any content; the scope of the community will be broader and, as a consequence, knowledge sharing will increase.

Is there something you need to do?

Yes, if you have not done it already, click follow on the TrueSight Operations Management Community:

Screen Shot 2018-07-27 at 09.07.51.png

For more information about the simplification of TrueSight communities read this post.


Any question? Comment below


Thank you for participating in the TrueSight App Visibility Manager Community. To expand opportunities for collaboration, and simplify participation for members, this community will be consolidated with the TrueSight Operations Management community in the near future.



This enhancement will continue to allow you to share information with TrueSight App Visibility Manager customers, but will also expand your opportunities for collaboration by adding members of the much larger, TrueSight Operations Management community.   You will still be able to share ideas and experiences about application visibility from an infrastructure and end-user standpoint, and how to improve application performance with capabilities such as machine learning and advanced analytics.


In addition, you will be able to view application management within the larger context of TrueSight Operations Management and communicate with customers who have that expanded solution. Your support of this improvement is appreciated and we look forward to your continued and expanded collaboration in the future.


What does it mean for you?

You will be able to participate as in the past, on any content; the scope of the community will be broader and, as a consequence, knowledge sharing will increase.

Is there something you need to do?

Yes, if you have not done it already, click follow on the TrueSight Operations Management Community:

Screen Shot 2018-07-27 at 09.07.51.png

For more information about the simplification of TrueSight communities read this post.


Any question? Comment below


We've listened to your feedback (survey and more), and looked at TrueSight communities traffic.

You've told us loud and clear that you want a simplified structure to interact with your peers, fostering greater collaboration.


Special thanks to Patrick Mischler

We're ready to make it happen.

Cloud App 1-RR.png

Simplified structure

We're delighted to announce an expansion to the  TrueSight Operations Management community, soon consolidating the following sub-communities:

Screen Shot 2018-08-02 at 10.01.42.png


BMC has done this in the past with other product families, and the outcome has been very positive.


You can expect to see this transition taking place in the next weeks, and being fully completed by August 31st.


What does it mean for you?

You will be able to participate as in the past, on any content; the scope of the community will be broader and knowledge sharing will increase.

Is there something you need to do?

Yes, if you have not done it already, click follow on the TrueSight Operations Management Community:

Screen Shot 2018-07-27 at 09.07.51.png


Thanks again for your continued participation in BMC Communities


Any question? Comment below


When the EUEM components are deployed, they communicate with each other in different ways. Let's refer to the basic architecture of a simple EUEM deployment.




Let's review each component's role:


  • The Cloud Probe is the capture engine. It captures TCP/IP packets, builds objects (HTTP and HTTPS request and response pairs), and extracts whatever data it is configured to extract. When objects are captured, they are sent to the Real User Collector.


  • The Real User Collector buffers those objects (several Cloud Probes may send data to one collector) for the Analyzer to consume (one or several Analyzers may get data from one Collector).


  • The Real User Analyzer retrieves data from one or more Real User Collectors and performs most of the processing.



All the communications between EUEM components occur on a secure channel using HTTPS. The same goes for managing EUEM by accessing its web interface.


The Collector and the Analyzer have their own built-in SSL certificates. When they are deployed, a self-signed SSL certificate is generated for each component. Given that the authentication between EUEM components is done via user accounts, there is no two-way SSL authentication as in other TrueSight Operations Management products.


Replacing an SSL certificate on any EUEM component does not impact the rest of the deployment and does not cause any disruption in the way the product works. This means that one can use a signed certificate on an Analyzer and still use the self-signed certificate on the Collector without breaking the flow between the components. It also means that one does not have to change anything on the TrueSight Presentation Server for the Analyzer & TSPS integration, or on the App Visibility Portal server when the App Visibility integration is configured.


As long as the configured SSL certificate is a valid and signed one, there is no problem!


The steps below are for the Real User Analyzer but the same procedure applies to the Real User Collector. Since there is no web UI for Cloud Probe, there is no SSL certificate on Cloud Probe to change.


EUEM is a Java application running on a Tomcat server, so replacing the SSL certificate is very simple. The steps are:

  1. Get a signed SSL certificate from a Certificate Authority, along with the original SSL private key.
  2. Bundle them in Java keystore format.
  3. Configure EUEM to use this keystore instead of the default one created at installation time.


Important notice: It is your responsibility to provide the Java keystore file. BMC is not responsible and will not provide help in generating it.
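As an unofficial sketch of step 2 (hypothetical file names and a throwaway password; in practice you would use the CA-signed certificate and private key from step 1), a PKCS12 bundle built with OpenSSL can serve as the keystore, since Tomcat can also read PKCS12 keystores (add keystoreType="PKCS12" in server.xml if you use one); alternatively, import the .p12 into a JKS file with keytool -importkeystore:

```shell
# Demo key pair as a stand-in for the CA-signed certificate and private key
# from step 1 (replace these generated files with your real ones):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=analyzer.example.com" \
    -keyout analyzer.key -out analyzer.crt

# Bundle the certificate and private key into a PKCS12 keystore:
openssl pkcs12 -export -in analyzer.crt -inkey analyzer.key \
    -name analyzer -out analyzer.p12 -passout pass:MyStorePass
```

The resulting analyzer.p12 (and its password) is what the keystoreFile/keystorePass attributes in server.xml would then point to.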


Configuring the Analyzer to use your keystore file

  1. Copy your keystore file to the Real User Analyzer server.
  2. Make a copy of the following file as a backup.
    1. $EUEM_HOME/analyzer/apache-tomcat/conf/server.xml
  3. Edit the server.xml file
  4. Look for the following lines. The first line points to the full pathname of the keystore file, and the second line specifies the password (*) for that keystore. Update them to reference your own keystore file and password.
    1. keystoreFile="${truesight.home}/conf/platform/security/keystore/java/keystore"
    2. keystorePass="tsPwDSt0r3"
  5. Restart the Real User Analyzer at your convenience to have the changes take effect.


If you have the Real User Collector deployed on the same system, you have to repeat the steps there, as its SSL certificate configuration is separate.



(*) About storing the keystore password in clear text in the server.xml file: this is a constraint of the Tomcat design itself, and it is fully described in the official Apache Tomcat FAQ.


You are probably wondering….what happens to my AppVis data? Where does it go?  Does it get backed up?  How can I recover if I have a system failure?


These are all really good questions.


All of your data from the TEA Agents, .NET Agents, and Java Agents is stored on the App Visibility Portal / Collector.  These components are not backed up automatically; you must back them up manually and on a regular basis.  This is the only way you will be able to restore the data if something happens to these components.


It is also important to note that you should back up the App Visibility Portal / Collector when you upgrade the TrueSight Presentation Server (TSPS).  It does not matter that you are not "touching" the Portal/Collector: the Portal/Collector databases are dependent on the TrueSight Presentation Server, and if something happens to that component during an upgrade, it will affect your data on the Portal/Collector.


If your Portal and Collector are on different systems, then you will need to make sure that both databases are backed up simultaneously; otherwise, the data will become out of sync.


Below are links to our documentation that will explain how to backup and restore the different versions of the Portal / Collector.


  1. 10.5


  2. 10.7


  3. 11.0


One of the greatest challenges users face today is defining what is a problem versus what is normal behavior. Synthetic Metric Rules (SMRs) were introduced in TrueSight Synthetic 10.7. They are used to define how synthetic transactions are monitored. This powerful functionality allows users to create rules that are more granular than ever before. SMRs significantly change the user experience and monitoring capability provided with the synthetic monitoring solution.


As you know, events break down into Rules (defining what is a problem versus normal behavior) and Conditions (what to do when a particular problem is detected). In TrueSight, it is primarily TSIM that handles Conditions (e.g. integration with Remedy to open tickets when a problem is detected). SMRs fall into the first category (event notification when there is a problem): they are all about how granularly the user can define what is a problem versus what is normal behavior.

In TrueSight Synthetic 10.5 and earlier, users had to rely on SLA metrics to define Rules. These SLA metrics are only aggregated over a 5-minute interval, which is extremely limiting. Say, for example, you have a URL Checker script that pings a web page once per minute, giving 5 executions in a 5-minute interval. If those 5 executions take 1.1, 1.2, 1.1, 1.2, and 5 seconds respectively, and you set the threshold to 2 seconds and 20%, the SLA would fire because a single transaction took more than 2 seconds to complete. With SMRs you have very fine-grained control: you can create a rule that says 2 consecutive transactions must be over 2 seconds, which gives a more accurate rule, fewer false-positive alerts, and earlier detection of real problems.
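The timing example above can be replayed numerically. This is an illustrative sketch, not product code: it runs the five sample timings against a plain "any sample over 2 seconds" rule (SLA-style) and against a "2 consecutive samples over 2 seconds" rule (SMR-style):

```shell
# Replay the sample timings (seconds) from the example in the text.
echo "1.1 1.2 1.1 1.2 5" | awk '{
  run = 0; sla = "no"; smr = "no"
  for (i = 1; i <= NF; i++) {
    if ($i > 2) { sla = "yes"; if (++run >= 2) smr = "yes" }  # slow sample
    else run = 0                                              # streak broken
  }
  print "SLA-style rule fires:", sla   # the single 5 s sample trips it
  print "SMR-style rule fires:", smr   # never 2 consecutive slow samples
}'
```

Only the SLA-style rule fires here, which is exactly the false positive that the consecutive-sample SMR avoids.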


With TrueSight Synthetic 10.7 and 11.0 we provide Synthetic Metric Rules and superior data visualization. This includes the following capabilities for monitoring the metrics of synthetic executions:


  • Synthetic Metric Rules
  • Synthetic Health view
  • Synthetic Health Analysis view
  • Synthetic metric reports view


Below are some screenshots and hyperlinks to the online documentation to give you a sneak peek and guide you through the world of SMRs:


1) On the Synthetic Metric Rules page you can define and manage these rules, along with the criteria for generating events and issuing notifications.


Managing Synthetic Metric Rules



2) Synthetic Health view


Investigating application issues reported by synthetic health events - BMC TrueSight App Visibility Manager 11.0



3) Synthetic Health Analysis view


Analyzing Synthetic Health Details



4) Synthetic Health Reports


Viewing Synthetic Health Reports


With the release of TrueSight 11.0 (September 2017) came support for Remedy Single Sign-On and more authentication methods. With Atrium Single Sign-On, TrueSight was limited to using local and LDAP authentication. With Remedy Single Sign-On you now have the option of using Kerberos, SAML and Certificate-Based Authentication.


In this post we'll take a look at configuring both Kerberos and SAML (ADFS) to authenticate TrueSight users. We'll also take a brief look at Certificate-Based Authentication, which will be covered in more detail in a separate post, and at how to troubleshoot both Kerberos and SAML authentication.


Versions being used in this post

RSSO 9.1.04

TSPS 11.00

ADFS 3.0

Kerberos KDC/LDAP/DC Windows 2012


We'll start with a high-level description of Kerberos, SAML and Certificate-Based Authentication.


Kerberos Authentication

Kerberos has been the default authentication protocol for Windows since Windows 2000 (NTLM was the default before that), and every release of Windows desktop or server since includes a Kerberos client provider. One of the main benefits of using Kerberos as an authentication method, besides being more secure than NTLM, is the ability to seamlessly authenticate with applications: once a user has authenticated with the domain and the KDC (Key Distribution Center) has provided the user with a ticket to prove identity, the user will not need to authenticate themselves to any Kerberos-enabled applications. Generally Kerberos will not work unless the user is logged into a domain within the intranet or over VPN.


SAML Authentication (ADFS)

SAML (Security Assertion Markup Language) is an XML-based standard that allows a user to log on once for affiliated but separate web applications.

The purpose of SAML is to provide centralised management of user IDs and passwords to allow access to applications from different vendors. SAML is a standard for logging users into applications based on their sessions in another context. This single sign-on (SSO) standard has significant advantages over logging in with a username/password, and SAML can be (and usually is) configured for cross-domain authentication. Microsoft's implementation of SAML is Active Directory Federation Services (ADFS), an extension of Active Directory. While Active Directory contains user identification, authentication and authorisation within its own organisation and domain boundaries, its extension Federation Services can be used to cross these boundaries. ADFS can also be configured to make use of Kerberos seamless authentication.


Certificate Based Authentication

Certificate-based authentication uses a digital certificate (X.509) to identify a user, machine, or device before granting access to a resource, network, or application. It is often deployed in coordination with traditional methods such as username and password. Certificates are installed on a user's device and are only valid for that user on that device. A user cannot move a certificate from one device to another; if they want to use a different device they will need to request a new user certificate for that device.


Which Authentication Method Should I use?

The simple answer is that you won't generally have a choice: this will be dictated by the network and security admins, and whatever authentication method they have put in place will be the one you use. Still, here are a few pros and cons of these methods. Of the three, Kerberos would probably be the easiest and quickest to configure; in terms of Remedy Single Sign-On, all you need is the KDC server name and a principal account and password. Although Kerberos can be configured for cross-domain authentication, it usually is not, and it will only work if you have logged in to the domain either via the internal network or VPN. SAML and certificate-based authentication can be configured for cross-domain authentication, but are more difficult to configure.


NOTE: The above authentication methods are only supported by the web components of TS. The Java admin UI and the TS mobile app do not support these authentication protocols, so the Local Authentication option will need to be used for them. See the comment section below for further updates regarding authentication for the admin console.


NOTE: If you have configured BMC applications such as Remedy MT or MyIT with one of the authentication methods above, you will have noticed that all you need to do is configure the Kerberos/SAML/Certificate authentication to be able to log in to the application. TrueSight works differently: with these authentication methods you will need to configure a separate LDAP authentication alongside them. Why is this the case? In a nutshell, the answer is groups and authorisation: RSSO does authentication, not authorisation.


RSSO goes to the IDP for authentication of a user, then creates the related tokens and sessions and passes them over to the calling application. It is at the application that authorisation is done, and the user is given the privileges they are entitled to for that particular application, e.g. Operator for TrueSight or Asset Viewer for Remedy ITSM.


For Truesight we'll need to search the LDAP server to determine which groups the user belongs to.


1. The initial authentication (Kerberos, SAML or Certificate-Based Authentication) authenticates the user and provides the ability for users to log in to applications without a user name and password.


2. LDAP provides the user's groups


Given that LDAP needs to be configured, we will look at configuring it first and then move on to the configurations for Kerberos, SAML (ADFS) and Certificate-Based Authentication. We will touch only lightly on certificate authentication in this post; a more detailed post will be published to delve deeper into configuring and troubleshooting Certificate-Based Authentication.


LDAP Configuration

Our first step is to create an LDAP authentication in RSSO. This will predominantly be used to fetch group membership of the users after the initial authentication has been done with the primary authentication method (Kerberos, ADFS or Certificate-Based Authentication). A more detailed post on LDAP configuration can be found at ConfiguringRSSOLDAP


1. Login to the RSSO admin console and edit the realm you want to configure for authentication (in this instance we'll be using the default " * " realm)

2. For the application domain we are going to add "", which is the FQDN of our TSOM server.

3. On the Authentication tab select "LDAP" from the "Authentication Type" drop-down list, select "Active Directory" from the "Preset" drop-down list, fill out the rest of the LDAP server information, and check the "Enable Group Retrieval" check box. Click Test and confirm the connection to the LDAP server is successful (fig1)


Fig1. Shows the LDAP configuration and successful test connection to the server. For a detailed description of LDAP configuration see ConfiguringRSSOLDAP

4. Once LDAP is configured, do a test login to make sure we can log in to TSOM and that the user's group list is being returned successfully. We'll enable the RSSO server logs to check the groups: go to the General tab ---> Basic --> set Server Log Level to "DEBUG" (don't forget to set this back to "INFO" afterwards)


5. Do a test login to the TSOM server by entering the URL in a browser. If everything is working you will be redirected to the RSSO login page

6. Enter an LDAP user name and password; if successful you will then be logged in to the TSOM dashboard.


At login if you see:


"Authentication Failed" - This means either the user name or password is incorrect, or the user does not exist in LDAP. Check the user name and password and the LDAP config in the RSSO admin console; see ConfiguringRSSOLDAP for more information on RSSO LDAP configuration


"You are not Authorised to view this page" - This means RSSO has been able to authenticate the user, but either the group the user belongs to is not being returned by the LDAP query, the group configuration in RSSO is incorrect, or user profile mapping in TS has not been configured. See ConfiguringRSSOLDAP for more information on RSSO LDAP configuration, which goes through the user and group searches for LDAP in RSSO.


To confirm whether the group is being brought back from the LDAP server, log in to the RSSO server, open the RSSO server log file "tomcat/logs/rsso.0.log", and search the log for:


"Searching groups by user '<UserName>' with filter" i.e  " Searching groups by user 'JCKER' with filter "


Further down in the log you will see a list of groups the user belongs to in LDAP


"All returned entries: " i.e. "All returned entries: [Domain Admins, Operators, Administrators]"


Note: You might see a partial-result exception message in the log; this is fine and normally indicates a referral in LDAP
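
If you prefer to check the log programmatically, a small script can pull those two lines out of rsso.0.log (a convenience sketch only; the log phrases come from the excerpts above, and the helper function is ours, not part of RSSO):

```python
import re

# Sample text mimicking the rsso.0.log lines quoted above.
sample_log = """\
DEBUG Thread_12 ... Searching groups by user 'JCKER' with filter ...
DEBUG Thread_12 ... All returned entries: [Domain Admins, Operators, Administrators]
"""

def groups_for_user(log_text, user):
    """Return the LDAP groups logged for `user`, or None if no search ran."""
    if "Searching groups by user '%s'" % user not in log_text:
        return None                     # no group search was attempted
    m = re.search(r"All returned entries: \[([^\]]*)\]", log_text)
    return [g.strip() for g in m.group(1).split(",")] if m else []

print(groups_for_user(sample_log, "JCKER"))
# → ['Domain Admins', 'Operators', 'Administrators']
```

If this returns None, the group search never ran; if it returns an empty list, the search ran but brought back no groups.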


Now that we have LDAP configured and have confirmed login to TSOM, we will look at configuring Kerberos, SAML (ADFS) and Certificate-Based Authentication with RSSO.




Summary of Kerberos configuration with RSSO


1. Create the service principal on the KDC server (Domain Admin function)

2. Enable chaining mode for RSSO

3. Add Kerberos Authentication type

4. Configure browser for automatic login


Create the Service Principal

The first step in configuring Kerberos with Remedy Single Sign-On is to create the Service Principal user. This would normally be done by the Domain Administrator: ask the admin to create a user account and map the RSSO service to it, or to add the RSSO service to a pre-existing Service Principal account. To do this they will need to use the setspn command from the command line.


Example: We will create a user in AD called "RSSOPRIN" and map our RSSO service to it. The service name will be the FQDN of the RSSO server (it can also be a load balancer URL; in this example our RSSO servers are behind an LB).


To create the SPN in this example we will run the following from the command line on the Domain Controller (KDC) (fig2)






Other useful setspn commands:


setspn -X - Checks for duplicate SPNs in the domain. Useful when you get a message that duplicates were found while creating services

setspn -d <servicename> <USER> - Deletes a service that is mapped to a user

setspn -L <USER> - Lists the services registered to the Service Principal user


RSSO can connect to the KDC directly by way of the Service Principal user name and password; however, some Domain Controllers (KDCs) do not allow direct connection. If this is the case, you can use a keytab file to load into RSSO.


To create a keytab file, after you have created the Service Principal user and mapped the service to it, run the following command from the command line of the DC:


ktpass -out <file> -mapuser <user> -princ HTTP/<host>@<DOMAIN> -pass <password> -ptype KRB5_NT_PRINCIPAL -target <DOMAIN> -kvno 0


In our example we run the following (fig 3); the keytab file will be created in the c:\temp directory. Move the keytab file to a location on the RSSO server; you will need this path and file name later.


ktpass -out c:\temp\RSSOKeytab -mapuser RSSOPRIN -princ HTTP/ -pass Mypassword -ptype KRB5_NT_PRINCIPAL -target




Now that we have created the Service Principal user, we'll continue by configuring RSSO to authenticate with Kerberos.


Configuring RSSO for Kerberos

1. Login to the RSSO admin console and edit the realm created earlier that has the LDAP configuration

2. Go to "Authentication" tab

3. Click on "Enable Chaining Mode"

4. Click "Add Authentication"

5. From the Authentication Type select "Kerberos"

6. Enter the FQDN name of the KDC server

7. Enter the Service Principal user name

8. If you are using a keytab file, enter the path and file name of the keytab file. If you are using a direct connection to the server, select the "SPN Password" radio button and enter the password

9. Leave User ID Format & User ID Transformation at their defaults (more on this below)

10. Test the connection (fig4)

If successful you will see a "Kerberos Connection Successful" message. If the connection fails, review KA000130918 in the BMC Knowledge Base to troubleshoot further. If you are sure the connection details are correct, review the RSSO server logs and ask the domain admin to check the Windows event log on the Domain Controller; this will usually give a clue as to why the connection failed.

11. Save the Authentication configuration (top right of screen)

12. In the List of Authentication screen, move Kerberos Authentication to first in the list (Fig 5) (otherwise the save button will be greyed out)

13. Click save (bottom right of screen)




Fig5. Shows the chaining mode; Kerberos Authentication needs to be first in the list



NOTE ON USER TRANSFORMATIONS: For Truesight you won't need to worry about user transformation. However, if this realm is also going to serve other applications such as Remedy and MyIT, you will need to select an option for "User ID Format" & "User ID Transformation" from the drop-down list, depending on how your users are configured in the application.


Before we can test the login we will need to configure the browsers for automatic login. This is normally done by Windows group policy, where the configuration is pushed to each end user's browser. Here we will look at how to configure the browsers manually for testing purposes.


Configure Browser For automatic Login


Internet Explorer & Chrome: Chrome takes its configuration from Internet Explorer; once you have configured IE there is no need to do it in Chrome.


1. Open IE and go to "Internet Options" --> Security --> select Local intranet ---> Custom Level --> scroll all the way down and select "Automatic logon only in Intranet zone" (Fig6)

2. Press OK




3. Clear the browser cache (not strictly necessary for end users, but good practice when testing) and close the browser

4. Open the browser and go to the application URL in our case TSOM ""

5. If everything is working as expected you will not be prompted for user name and password and you should see the TSOM dashboard with the user name at the top right of the screen (Fig8)



Firefox:

1. Open the Firefox browser

2. In the url bar type "about:config" click "Accept the Risk" on the warning message

3. In the Search bar type "network.negotiate-auth.trusted-uris"

4. Add the domain name to the trusted uri list (fig7)



5. Clear the browser cache (not strictly necessary for end users, but good practice when testing) and close the browser

6. Open the browser and go to the application URL, in our case TSOM ""

7. If everything is working as expected you will not be prompted for user name and password and you should see the TSOM dashboard with the user name at the top right of the screen (Fig)



If you get a Windows prompt for user name and password while trying to log in, this generally means your browser is not configured correctly. Review the browser configuration and perhaps speak to the network/domain administrator to see if there are any specific group policies on the browsers.





Summary of ADFS configuration with RSSO


1. Create the signing keystore using the Java keytool, sign the certificate, and import the signed certificate into the keystore (optional: only needed if your ADFS IDP requires signing; speak to the ADFS admin)

2. Enter the Service Provider information (The service provider in this instance is RSSO)

3. Enable chaining mode for RSSO

4. Import the ADFS IDP metadata file

5. Send the SP metadata to the ADFS admin to create the relying party trust and the claim rules


Creating The Signing Keystore

This step is optional, depending on whether requests that go to the IDP need signing. Speak to the ADFS admins to confirm this.


1. On the RSSO server, ensure that "java/jre/bin" is in the PATH environment variable and that you can run the "keytool" command from any location on the system.

Open a command prompt and "cd" to the location where you want to create the keystore. The signing keystore can be created in any directory; in this case we will create it in the "tomcat/conf/" directory.


2. To create the keystore we will use the keytool command


keytool -keystore <keystorefile> -genkey -alias <aliasname> -keyalg RSA -sigalg SHA256withRSA -keysize 2048  -validity 730


For example,


keytool -keystore saml.jks -genkey -alias sp-signing -keyalg RSA -sigalg SHA256withRSA -keysize 2048 -validity 730


Fill in the details requested (fig9)




The above command creates a keystore file named saml.jks that contains a keypair with the alias as "sp-signing"


3. Export the certificate signing request (CSR) from the keystore with the following command


keytool -certreq -keyalg RSA -alias myalias -file certreq.csr -keystore c:\yoursite.mykeystore


For Example


keytool -certreq -keyalg RSA -alias sp-signing -file certreq.csr -keystore saml.jks


This command creates a file called certreq.csr in the current directory (fig10)




4.  Send the .csr file to the Certificate Authority for Signing


5. When the signed certificate comes back it will be in either .cer or .p7b format. Use the following keytool command to import the signed certificate into the signing keystore


keytool -importcert -alias <alias_name>  -keyalg RSA -keystore <keystore_file> -storepass <keystore_password> -file <signed_cert_file>


For Example


keytool -importcert -alias sp-signing -keyalg RSA -keystore saml.jks -storepass internal4bmc  -file certnew.p7b


Fig11. Shows the import of the signed certificate


Configure The Service Provider Information


6. We now need to create the Service Provider configuration (RSSO). Login to the RSSO admin console ---> Advanced tab. In the "SAML Service Provider" section fill out the entries (Fig12)


SP Entity ID: This can be anything; best practice is to name it something that describes what the service provider is. Don't use spaces, since the IDP (ADFS) will use this Entity ID

External URL: This is the service URL to RSSO with "/rsso" at the end. It will be either the FQDN of the RSSO server or the FQDN of the LB that is servicing RSSO

KeyStore File: File and path to the keystore that holds the signing certificate. Include both the path and the keystore name.

Keystore Password: Keystore password for the keystore

Signing Key Alias: This is the alias name used when creating the keystore


Fig12. Shows the configuration of the example we are using to configure the SAML service provider


7. Restart the RSSO tomcat service to ensure the keystore gets loaded correctly before moving on. If the keystore loads OK, you will see the following messages in the RSSO server log.


com.bmc.rsso.core.cert.KeystoreManager.getKeystore(): Keystore type is not specified. Analyzing keystore file extension: saml.jks

com.bmc.rsso.core.cert.KeystoreManager.getKeystore(): Keystore type defined as: JKS

INFO Thread_23 com.bmc.rsso.core.cert.KeystoreManager.getKeystore(): Found keystore: C:\Program Files\Apache Software Foundation RSSO\apache-tomcat8.5.23\webapps\rsso\WEB-INF\classes\saml.jks


If there are any errors loading the keystore you won't be able to move on to the next portion of the configuration. The error message in the log should be enough to tell you why the keystore failed to load, e.g. "Invalid keystore password", "Keystore not found"


Importing the ADFS (IDP) Metadata

8. The next step is to import the IDP metadata. In the RSSO admin console click Realm ---> edit the realm that has LDAP configured


9. Click "Enable Chaining Mode", then click "Add Authentication"


10. Select "SAML" from the "Authentication Type" drop down list


11. At this stage we will need either the URL to the IDP metadata or the .xml file of the metadata. Depending on how ADFS has been configured, you can get to the IDP metadata using the URL "" or have the ADFS admin send you the .xml file of the IDP metadata


12. In the "Import IDP Provider" screen select either "Import from URL", in which case give the metadata URL, or "Import from Local File", in which case browse to the metadata .xml file (fig13)



13. Click Import. If the metadata imported correctly you should see a success message (fig14)



If the metadata fails to import correctly, check with the ADFS admin to see if there are any issues connecting to the metadata URL (if using the URL method). If you can get to the metadata URL from a browser, save the metadata locally and use the import-from-file option (some ADFS servers are configured to not allow direct URL access)


14. After importing the IDP data, you will need to check that some fields are correct before saving the configuration


User ID Attribute: This is the LDAP attribute used for the user name; the default for ADFS is "sAMAccountName"

Sign Request: If the IDP does not require signing, then uncheck this box. If the IDP requires signing, ensure that this box is checked and that you have configured the keystore as above.

User Transformation: If this realm is also being used by another application such as Remedy or MyIT, you might need to select one of the user transformations, depending on how the usernames are configured in the Remedy ITSM People form.


15. Save the configuration (Top right hand of the screen)


16. In the List of Authentication screen, move "SAML" to the top of the authentication list and hit "Save" (fig14)



17. We now need to provide the Service Provider (RSSO) metadata to the IDP admins. Edit the SAML configuration


18. Click on "View Meta Data"; this shows the metadata for the service provider (RSSO). You can either send the IDP admin the URL or save the page to an .xml file and send that to them.


Creating The Relying Party Trust

19. The next step in the process is an ADFS admin function and should be done by the ADFS admin. After receiving either the SP metadata URL or the .xml file, the ADFS admin will create the relying party trust and the claims rules. Send the ADFS admin the following URL to follow: Configuring Relying Party Trust (valid for ADFS 2.0 & 3.0)


20. After the IDP has been configured you can now test the login. Open a browser and go to the TSPS URL. Depending on how ADFS is configured, you will either log in automatically (no username/password required) or you will get the ADFS login screen; enter your username and password to log in (fig15)




Troubleshooting ADFS login problems will be discussed in more detail in a future post, but there are a few places we can look to see where login issues might be


1. RSSO server log. On the RSSO admin console go to the General tab ---> Basic and set "Server Log Level" to "DEBUG", reproduce the issue, and check the logs for any error messages. Contact BMC support with the logs if you still cannot figure out what the issue is.


2. Check the RSSO user sessions list. If the user is listed there, RSSO authentication with ADFS was successful, and further troubleshooting needs to be done on the application end (in this case TSPS); check the application logs.


3. If there are no error messages in the RSSO server log, this generally means ADFS has not returned the user back to RSSO. In this case ask the ADFS admin to check the ADFS admin logs to see if they give a clue to what the problem is; do a filter search for the relying party trust of RSSO (fig16)






Certificate-based authentication is configured in the same process order as Kerberos and SAML above, i.e. configure LDAP, test, and then configure the certificate-based authentication. The whole process will be covered in more detail in a later post, but in summary what needs to be done is:


1. Create the truststore and sign the certificate

2. Configure RSSO to use certificate-based authentication; you can find details at Certificate-based authentication process

3. Depending on the device being used (Common Access Card or key), you will need to deploy the user certificate to that device

4. You can also use software-based certificate access, in which case the user certificate will be installed in the user's local certificate store

5. Test the login; you should be prompted to select a certificate before login can occur (fig17)





For more information on end-to-end configuration of certificate-based authentication, see Certificate-based authentication process



When troubleshooting the login process, start with LDAP authentication first and confirm that you can log in with an LDAP user; see ConfiguringRSSOLDAP for detailed troubleshooting notes.

Once LDAP is confirmed, continue with the other authentication method (Kerberos, SAML or Certificate-Based Authentication). Turn the RSSO server logs up to DEBUG and check the logs.


Trying to figure out where the failure is can be quite confusing, but here are a few things to remember:


1. If a session exists for the user in the RSSO admin sessions list, then the authentication process has been successful. The next step would be to check the application logs


2. For ADFS Authentication, if you see a call to ADFS in the RSSO server logs like


INFO Thread_33 com.bmc.rsso.servlet.saml.AuthnRequestServletSaml.processRequest(): [102] User is redirected to IdP login url:



And there is no response in the log like


com.bmc.rsso.commons.XMLUtil.removeNodes(): SAML document after removing nodes: <?xml version="1.0" encoding="UTF-8" standalone="no"?>

<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Consent="urn:oasis:names:tc:SAML:2.0:consent:unspecified" Destination="" ID="_4e6f17a5-d653-4550-a82f-03e4fb38d9e5" InResponseTo="_672a31e9-29e3-422c-80da-fd3bb628844b" IssueInstant="2017-12-18T22:53:23.298Z" Version="2.0">

<Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">http://clm-aus-022742.SSO.BMC.COM/adfs/services/trust</Issuer>



This means that RSSO has sent the user over to SAML, but no response is coming back. Further troubleshooting will need to be done on the SAML IDP end; for ADFS check the ADFS admin event viewer.
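
A crude way to spot this redirect-without-response pattern in the server log is to search for both marker phrases (a sketch only; the marker strings come from the log excerpts above, and the IdP URL below is a made-up placeholder):

```python
# Marker phrases taken from the RSSO log excerpts quoted above.
REDIRECT_MARKER = "User is redirected to IdP login url"
RESPONSE_MARKER = "<samlp:Response"

def saml_response_missing(log_text):
    """True when RSSO redirected the user to the IdP but logged no SAML response."""
    return REDIRECT_MARKER in log_text and RESPONSE_MARKER not in log_text

# Log excerpt with a redirect but no samlp:Response following it;
# the IdP URL is a hypothetical placeholder.
stuck_log = ("INFO Thread_33 com.bmc.rsso.servlet.saml.AuthnRequestServletSaml"
             ".processRequest(): [102] User is redirected to IdP login url: "
             "https://adfs.example.com/adfs/ls")
print(saml_response_missing(stuck_log))   # True - check the ADFS event viewer
```

If this returns True for the time window of a failed login, the problem is almost certainly on the IdP side rather than in RSSO.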


3. For Kerberos, check in the RSSO server logs whether the ticket is being obtained from the KDC


DEBUG Thread_29 com.bmc.rsso.core.kerberos.KerberosHelper.logSubject(): Kerberos subject obtained: Subject:


Private Credential: Kerberos Principal RSSOLBSPN@SSO.BMC.COMKey Version 0key EncryptionKey: keyType=17 keyBytes (hex dump)=

0000: 6D 63 E8 B0 CA F3 8E AF   C5 1B BA 6F 7B AB 75 9A  mc.........o..u.


If the ticket is not obtained, have the admins check the Event Viewer logs on the KDC server; this will give you a clue as to why the ticket was not granted


4. If this is a Dev/Test environment and you are in an LB configuration, shut down one of the RSSO servers (and of the application if possible); this will make troubleshooting easier


Contacting BMC support.

If you have been unable to find and fix the problem, contact BMC support with the following information:


1. RSSO version

2. Authentications in use

3. Set the RSSO server logs to DEBUG, reproduce the problem, and send in the logs

4. Document one of the user names having the problem and a date/time stamp of when the problem was reproduced

5. Document the steps you went through; screenshots and videos will help here if you can provide them.

6. The applications being used, e.g. TSPS, Remedy, BAO, etc.
