
TrueSight Server Automation


After running a Patching Job and then a Remediation Job, BlPackages and Deploy Jobs are generated. You might want to retrieve the generated Deploy Jobs for further manipulation. This information shows up in the Remediation Job Run log.

We can use some unreleased blcli commands to get the log entries and scrape the job names. There are two cases to handle: since 8.8 there is an option in Patching Global Configuration named Remediation Settings: Using Single Deploy Job. This determines whether the Remediation Job generates a Deploy Job per generated BlPackage (the pre-8.8 behavior) or a single Deploy Job that deploys all generated BlPackages to their respective servers (the 8.8+ default behavior). Since either case may exist in 8.8+, we need to check for both.


We must start with the Patching Job Run Id, which can be obtained in a variety of ways that are beyond the scope of this article. From the patching job run, we will find the remediation job run id, then pull and process the log entries from that run to get the deploy job info.
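Before wiring this into blcli, the message scraping itself can be sketched with plain shell against a sample log line. The message text below is illustrative, not captured from a live run; only the cut/sed pipeline and the ${VAR%/*}/${VAR##*/} group/name split are taken from the script:

```shell
# Sample remediation log message (illustrative text, not from a real system)
MESSAGE='Created deploy job: Jobs/Workspace/Patching Jobs/Deploy/Win2016 @ 2019-09-29 14-51-40'
# Everything after the first colon is the job path, prefixed with " Jobs"
DEPLOY_JOB_NAME="$(grep "Created deploy job" <<< "${MESSAGE}" | cut -f2- -d: | sed "s/^ Jobs//")"
# Split into group path and job name, as the blcli lookups require
GROUP_PATH="${DEPLOY_JOB_NAME%/*}"
JOB_NAME="${DEPLOY_JOB_NAME##*/}"
echo "group: ${GROUP_PATH}"
echo "name:  ${JOB_NAME}"
```

The same parsing is used in the full script, once per matching log entry.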


PATCHING_JOB="/Workspace/Patching Jobs/Windows 2016 and 2019"
typeset -a DEPLOY_JOB_KEYS
# Getting the last run of my patching job for this example:
blcli_execute PatchingJob getDBKeyByGroupAndName "${PATCHING_JOB%/*}" "${PATCHING_JOB##*/}"
blcli_storeenv PATCHING_JOB_KEY
blcli_execute JobRun findLastRunKeyByJobKey ${PATCHING_JOB_KEY}
blcli_storeenv PATCHING_JOB_RUN_KEY
blcli_execute JobRun jobRunKeyToJobRunId ${PATCHING_JOB_RUN_KEY}
blcli_storeenv PATCHING_JOB_RUN_ID
# get the patching job run child ids (one will be the remediation job run id)
blcli_execute JobRun findPatchingJobChildrenJobsByRunKey ${PATCHING_JOB_RUN_ID}
blcli_execute JobRun getJobRunId
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storeenv CHILD_JOB_RUN_IDS
# find the child run that is the remediation job run (job run type id 7033)
for JOB_RUN_ID in ${CHILD_JOB_RUN_IDS}
do
    blcli_execute JobRun findById ${JOB_RUN_ID}
    blcli_execute JobRun getType
    blcli_storeenv JOB_RUN_TYPE_ID
    if [[ ${JOB_RUN_TYPE_ID} = 7033 ]]
    then
        break
    fi
done
# get the log entries for the remediation job
blcli_execute JobRun findById ${JOB_RUN_ID}
blcli_execute JobRun getJobKey
blcli_storeenv REMEDIATION_JOB_KEY
blcli_execute LogItem getLogItemsByJobRun ${REMEDIATION_JOB_KEY} ${JOB_RUN_ID}
blcli_execute Utility storeTargetObject logItems
blcli_execute Utility listLength
blcli_storeenv LIST_LENGTH
if [[ ${LIST_LENGTH} -gt 0 ]]
then
    for i in {0..$((${LIST_LENGTH}-1))}
    do
        blcli_execute Utility setTargetObject logItems
        blcli_execute Utility listItemSelect ${i}
        blcli_execute Utility setTargetObject
        blcli_execute JobLogItem getMessage
        blcli_storeenv MESSAGE
        # look for the < 8.8 case:
        if ( grep -q "Created deploy job" <<< "${MESSAGE}" )
        then
            DEPLOY_JOB_NAME="$(grep "Created deploy job" <<< "${MESSAGE}" | cut -f2- -d: | sed "s/^ Jobs//")"
            echo "DEPLOY_JOB_NAME: ${DEPLOY_JOB_NAME}"
            blcli_execute DeployJob getDBKeyByGroupAndName "${DEPLOY_JOB_NAME%/*}" "${DEPLOY_JOB_NAME##*/}"
            blcli_storeenv DEPLOY_JOB_KEY
            DEPLOY_JOB_KEYS+=(${DEPLOY_JOB_KEY})
        fi
        # look for the 8.8+ case
        if ( grep -q "Created Batch Job" <<< "${MESSAGE}" )
        then
            BATCH_JOB_NAME="$(grep "Created Batch Job" <<< "${MESSAGE}" | cut -f2- -d: | sed "s/^ Jobs//")"
            echo "BATCH_JOB_NAME: ${BATCH_JOB_NAME}"
            blcli_execute BatchJob getDBKeyByGroupAndName "${BATCH_JOB_NAME%/*}" "${BATCH_JOB_NAME##*/}"
            blcli_storeenv batchJobKey
            blcli_execute BatchJob findAllSubJobHeadersByBatchJobKey ${batchJobKey}
            blcli_execute SJobHeader getDBKey
            blcli_execute Utility setTargetObject
            blcli_execute Utility listPrint
            blcli_storeenv batchMembers
            batchMembers="$(awk 'NF' <<< "${batchMembers}")"
            while read i
            do
                blcli_execute Job findByDBKey ${i}
                blcli_execute Job getName
                blcli_storeenv deployJobName
                blcli_execute Job getGroupId
                blcli_storeenv deployJobGroupId
                blcli_execute Group getQualifiedGroupName 5005 ${deployJobGroupId}
                blcli_storeenv deployGroupPath
                blcli_execute DeployJob getDBKeyByGroupAndName "${deployGroupPath}" "${deployJobName}"
                blcli_storeenv jobKey
                # only report each deploy job key once
                if [[ "${DEPLOY_JOB_KEYS/${jobKey}}" = "${DEPLOY_JOB_KEYS}" ]]
                then
                    echo "DEPLOY_JOB: ${deployGroupPath}/${deployJobName}"
                    echo "DEPLOY_JOB_KEY: ${jobKey}"
                    DEPLOY_JOB_KEYS+=(${jobKey})
                fi
            done <<< "${batchMembers}"
        fi
    done
else
    echo "Could not find job logs for ${REMEDIATION_JOB_KEY}..."
    exit 1
fi


This will generate output like:

Single Job Mode:

BATCH_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019 batch deploy Windows 2016 and 2019@2019-09-29 14-41-25-125-0400
DEPLOY_JOB: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019-20011681 @ 2019-09-29 14-41-24-319-0400
DEPLOY_JOB_KEY: DBKey:SJobModelKeyImpl:2001176-1-2131104


Multiple Deploy Jobs:

DEPLOY_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and @ 2019-09-29 14-51-40-404-0400
DEPLOY_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and @ 2019-09-29 14-51-42-179-0400
BATCH_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019 batch deploy Windows 2016 and 2019@2019-09-29 14-51-42-726-0400
DBKey:SJobModelKeyImpl:2001183-1-2131308
DBKey:SJobModelKeyImpl:2001189-1-2131320


Once you have the name and DBKey of the generated Deploy Job, you can then proceed to do whatever you need to do to/for/with these jobs.


Happy to announce that we have the next patch ready with some key support enhancements:


  • Now you can run OS hardening on Red Hat 8 servers. Patch 2 allows you to install the TrueSight Server Automation agent, application server, and NSH on RHEL 8.
  • The Yellowfin version that is shipped with TrueSight Server Automation Live Reporting has been upgraded to 8.0.2 from 8.0.1.


The following component templates for the Defense Information Systems Agency (DISA) policy have been updated: 

  • DISA on Red Hat Enterprise Linux 6 is updated to benchmark version 1 - Release 23 of July 26, 2019.
  • DISA on Red Hat Enterprise Linux 7 is updated to benchmark version 2 - Release 4 of July 26, 2019.
  • DISA on Windows Server 2008 R2 DC is updated to benchmark version 1 - Release 31 of July 26, 2019.
  • DISA on Windows Server 2008 R2 MS is updated to benchmark version 1 - Release 30 of July 26, 2019.
  • DISA on Windows Server 2012 DC (2012 and 2012 R2) is updated to benchmark version 2 - Release 17 of July 26, 2019.
  • DISA on Windows Server 2012 MS (2012 and 2012 R2) is updated to benchmark version 2 - Release 16 of July 26, 2019.
  • DISA on Windows Server 2016 is updated to benchmark version 1 - Release 9 of July 26, 2019.


For more information on how to download and upgrade to the latest patch, please visit the page Version Patch 2 for version 8.9.04.


This question has come up a number of times, and I decided to spend some time looking into it.  The goal is to leverage a RedHat Satellite server as the source of a RedHat Patch Catalog in TrueSight Server Automation.  Standard disclaimers apply: this is not currently supported, may not work for you, may stop working in the future, etc.


The below assumes some familiarity with Satellite and should work for Satellite version 6.x.  I did this with Satellite 6.5.


On the Satellite server, add or edit the file /etc/pulp/server/plugins.conf.d/yum_distributor.json, as noted in this Foreman bug, and add the following:


  "generate_sqlite": true


This is needed because BSA requires the metadata in sqlite format, which is not the default for Satellite.  After making this change you must restart the Pulp worker services.  The next synchronization of your repositories should then include the sqlite metadata.  If it does not, you can forcefully regenerate the metadata of a Content View.


To verify you have generated the sqlite metadata, run the below command on the Satellite server after the synchronization completes:

find /var/lib/pulp/published/yum/master/yum_distributor -iname "*sqlite.bz2" -exec ls -la {} \;

-rw-r--r--. 1 apache apache 13938 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/7f8c6bce5464871dd00ed0e0ed25e55fd460abb255ab0aa093a79529bb86cbc2-primary.sqlite.bz2

-rw-r--r--. 1 apache apache 155449 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/cff3aeccd7f3ff871f72c5829ed93720e0f273d1206ee56c66fa8f6ee1d2e486-filelists.sqlite.bz2

-rw-r--r--. 1 apache apache 40915 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/eb5044ef0c9e47dab11b242522890cfe6fbb6cf1942f14757af440ec54c9027f-other.sqlite.bz2



Subscribe the system used to store the RedHat Catalog for TSSA to your Satellite server; this provides the certificates used by TSSA in the catalog synchronization process.


In the Patch Global Configuration, or in your offline downloader configuration file, you will reference these certificates:

SSL CA Cert: /etc/pki/ca-trust/source/anchors/katello-server-ca.pem

SSL Client Cert: /etc/pki/entitlement/<numbers>.pem

SSL Client Key: /etc/pki/entitlement/<numbers>-key.pem

Note that the SSL CA Cert is different from the one used when synchronizing directly with RedHat.


You will need to update the RedHat Channel Filters List File (online catalog) or the offline downloader configuration file (offline catalog) with the URLs and other information for the Satellite-provided channels you will use in your catalog.


The format of the URL is https://<satellite server>/pulp/repos/<organization>/<content library>/<view name>/<product>.  An easy way to determine the URLs is to run the rct cat-cert command on the SSL Client Cert:


rct cat-cert /etc/pki/entitlement/3591963669563311224.pem



Type: yum

Name: Red Hat Enterprise Linux 7 Server (RPMs)

Label: rhel-7-server-rpms

Vendor: Red Hat

URL: /Example/Library/View1/content/dist/rhel/server/7/$releasever/$basearch/os

GPG: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

Enabled: True

Expires: 1

Required Tags: rhel-7-server

Arches: x86_64
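Putting those pieces together, the channel URL for the filters file is the Satellite hostname plus /pulp/repos plus the URL field from the entitlement certificate, with the yum variables filled in. A shell sketch, where the hostname and the $releasever/$basearch substitutions are placeholder assumptions for a RHEL 7 x86_64 client:

```shell
# Placeholder hostname; the path is the URL field from the cert output above
SATELLITE="satellite.example.com"
CERT_URL='/Example/Library/View1/content/dist/rhel/server/7/$releasever/$basearch/os'
# Channel URL = https://<satellite>/pulp/repos + URL field from the cert
REPO_URL="https://${SATELLITE}/pulp/repos${CERT_URL}"
# Fill in the yum variables the way a RHEL 7 x86_64 client would resolve them
REPO_URL="$(sed -e 's/\$releasever/7Server/' -e 's/\$basearch/x86_64/' <<< "${REPO_URL}")"
echo "${REPO_URL}"
```

The resulting URL is what goes into the RedHat Channel Filters List File or offline downloader configuration.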


Another way is to inspect the output of the subscription-manager repos --list command (which only shows the repos applicable to the OS of the catalog server):

# subscription-manager repos --list


    Available Repositories in /etc/yum.repos.d/redhat.repo


Repo ID:   rhel-7-server-rpms

Repo Name: Red Hat Enterprise Linux 7 Server (RPMs)

Repo URL:$releasever/$basearch/os

Enabled:   1

Repo ID:   rhel-7-server-optional-rpms

Repo Name: Red Hat Enterprise Linux 7 Server - Optional (RPMs)

Repo URL:$releasever/$basearch/optional/os

Enabled:   0

Repo ID:   rhel-7-server-satellite-tools-6.5-rpms

Repo Name: Red Hat Satellite Tools 6.5 (for RHEL 7 Server) (RPMs)

Repo URL:$basearch/sat-tools/6.5/os

Enabled:   0


Once you have the URLs and other information for the channels you want to build your catalog from, update the RedHat Channel Filters List File in Patch Global Configuration, or the Offline Downloader configuration file, accordingly.


Example RedHat Filters file snippet for an online catalog:


    <redhat-channel use-reposync="true">
        <channel-name>RHEL 7 Optional RPMs from Satellite</channel-name>
    </redhat-channel>

    <redhat-channel use-reposync="true" is-parent="true">
        <channel-name>RHEL 6 RPMs from Satellite</channel-name>
    </redhat-channel>

    <redhat-channel use-reposync="true">
        <channel-name>RHEL 6 Optional RPMs from Satellite</channel-name>
    </redhat-channel>

This adds the rhel-7-server-rpms and rhel-6-server-rpms as parent channels and the others as child channels.



Example Offline Downloader configuration file snippet:

    <redhat-cert cert-arch="x86_64">
    </redhat-cert>
At this point, using the online RedHat Patch Catalog or offline downloader is the same as synchronizing with RedHat directly.  Finish the catalog creation and run the Catalog Update Job.




A few references were helpful in setting all of this up (they may require a RedHat Support account to access):

Installing Satellite Server from a Connected Network

How to forcefully regenerate metadata of a content view or repository on Red Hat Satellite 6

How do I register a system to Red Hat Satellite 6 server

How to change download policy of repositories in Red Hat Satellite 6


If you are using TSSA/BSA to patch your Windows servers, hopefully you have already upgraded to a version of TSSA that uses the new Shavlik SDK and are happily off patching your servers, because you received a notification from BMC via email, saw a blog post, attended a webinar, or saw the documentation flash page.


If you have not upgraded yet, you need to complete an upgrade before September 30, 2019 to continue patching your Windows servers with TSSA.


Let me elaborate on a few points from the documentation flash page:


You must upgrade the core TSSA infrastructure (the AppServer(s), Database, and Consoles) to a version of TSSA that uses the new Shavlik SDK.


You must also upgrade the RSCD agents on all Windows servers you need to deploy patches to.  This unfortunately breaks with precedent: with most BSA/TSSA upgrades, you typically upgrade the core infrastructure and then roll out RSCD upgrades as change windows permit.  The notable exception to this rule is when there are RSCD-side changes to enable new functionality, and the Shavlik SDK update is one of those exceptions.  Both the appserver and the RSCD must be upgraded.


While the flash page mentions a couple of versions to upgrade to, there are actually a handful of TSSA versions that include the updated Shavlik SDK.

  • If your appserver and agents are already 8.9.03 then there is no action required.
  • If your appserver and agents are already, then there is no action required.
  • If your appserver and agents are already 8.9.04, then there is no action required.
  • If your appserver and agents are already, then there is no action required.
  • If you have some combination of an 8.9.04.x appserver and 8.9.03 agents, or appserver and 8.9.03 agents, then there is likely no action required, unless you have encountered one of the issues noted below.


However, if you are planning to upgrade, you should upgrade to or later, because you will likely run into a few bugs present in prior versions that impact the agent upgrade process and patching:

  • DRBLG-116254: When you upgrade a Windows RSCD Agent to version, the sbin\BLPatchCheck2.exe binary is not upgraded. Fixed in 8.9.04.
  • QM002409753: When you execute an Agent Installer Job against multiple target servers in parallel, some of the jobs fail with the error message "Failed to copy installer bundle from file server". Upgrading the BMC Server Automation 8.9.02 agent on Microsoft Windows servers does not work as expected.
  • DRBLG-114734: Shavlik 9.3 restart handling and robustness. Fixed in 8.9.03.001.
  • DRBLG-118439: Windows CUJ update is slow. Fixed in 8.9.04.001.
  • DRBLG-117288: Windows patch exclusions not working. Fixed in 8.9.04.
  • DRBLG-115981: Windows Patching Analysis failed in the absence of digital certificates. Fixed in 8.9.03.001.
  • DRBLG-119204: After an appserver upgrade to 8.9.04, Windows Patch Remediation fails unless the target RSCD agent is upgraded to the same SP4.


A few other questions that have come up about the upgrade process:


Can I upgrade my agents to the new version ahead of my appserver upgrade?

No, the appserver/infrastructure must be updated first.  TSSA/BSA has never supported using a newer RSCD version with an older appserver.


What happens if I can't complete the upgrade process before September 30, 2019?

New patch metadata with information about newly released patches will not be available to you, and you will be unable to deploy those patches using TSSA's Patching solution.


Do I also need to upgrade the RSCD on other operating systems (Linux, AIX, Solaris, etc.) immediately after upgrading the TSSA infrastructure?

You do not need to upgrade these agents immediately after upgrading the TSSA infrastructure.


Hello Everyone,


We are super excited to announce the launch of 'TrueSight Automation Console' for TrueSight Automation for Servers. This container-based platform significantly simplifies the OS patching experience for servers while providing near-real-time Patch Compliance status. Here are the highlights of this platform:


Simplified patch management

Using the new Automation Console, you can define patch policies that are used for both auditing and deployment of patches.

Based on the policy results, you can create remediation actions to apply the required patches on the target servers. You simply choose the policies, targets, remediation options, and schedule (maintenance window), and receive notifications (alerts).  It is a big step forward that provides greater ease of use and makes it easier to deploy patches through an intuitive interface.




KPI driven Dashboards

The Dashboard shows patch compliance health. It reflects KPIs such as:

  • Patch Compliance of the environment
  • Assets missing patch SLAs and critical patches
  • Age of missing patches
  • Remediation trends
  • Visibility into the patches that are missing on the largest number of servers

You can also:

  • Filter the dashboard data using filters for operating system, severity, and the patch policy.
  • Drill down on widgets to get details
  • Export data to share with stakeholders



Deploy missing patches

Based on the scan results of patch policies, you can schedule the deployment of missing patches during a maintenance window. You can also choose the reboot options for the target servers as well as a staging schedule for patch payloads.


Service Level Agreements (SLAs)

SLAs define the period within which the missing patches need to be remediated. You can define SLAs based on your organization's policies.


Container Based Deployment

TrueSight Automation Console is packaged as Docker containers, which are easy to deploy.

You can install TrueSight Automation Console in both interactive and silent modes.

Installing TrueSight Automation Console is a simple three-step process:

  • Setup Stack Manager
  • Setup Database
  • Setup Automation Console application


For details, please refer to the documentation.


Customers who want to understand more about this offering and gain access to the software should reach out to their respective account manager.


Hi all

Now that we've done a few sessions on TrueSight Smart Reporting, I thought I'd gather the links and resources together in a single place and share them with everyone.

If you have other topics you'd like to see addressed, or questions, please engage here, on the individual pages above, or contact Support.





As part of automating your patch deployments, you may want to run your Patching Job and have it automatically generate the BlPackages and Deploy Jobs that contain missing patches, and then schedule the various phases of the generated Deploy Jobs.  This is fairly simple in the GUI: in a Patching Job, on the Remediation Options tab, click the Deploy Job Options button and then go to the Phases and Schedules tab.


While this is trivial for a single job, if you have several jobs to modify every patching cycle this becomes quite tedious, so we will of course turn to our friend the BLCLI.  There's a hint in the screenshot above as to how we will accomplish this: the Populate options from an existing Job option looks interesting.  And in fact, if I create a 'dummy' Deploy Job, set up the options and schedule that I want, and then select that job in the Populate options from an existing Job menu, the schedule and options are applied to my Patching Job's Deploy Job Options.


To really automate this I need to do a few things: figure out the options and schedule we want set in the Patching Job; create or update a 'dummy' Deploy Job with those options and that schedule; find my Patching Job (we could create one from scratch, but not for this exercise); apply the 'dummy' job to the Patching Job; remove the schedule from the dummy job; and then optionally execute or schedule the Patching Job.


Dummy Job

To create the 'dummy' job we need a BlPackage.  Let's provide the name and path of a BlPackage and create an empty BlPackage if it does not exist.

blcli_execute DepotGroup groupNameToDBKey "${DUMMY_BLPACKAGE%/*}"
blcli_storeenv depotGroupKey
blcli_execute DepotObject depotObjectExistsByTypeGroupAndName 28 ${depotGroupKey} "${DUMMY_BLPACKAGE##*/}"
blcli_storeenv pkgExists
if [[ "${pkgExists}" = "true" ]]
then
    blcli_execute BlPackage getDBKeyByGroupAndName "${DUMMY_BLPACKAGE%/*}" "${DUMMY_BLPACKAGE##*/}"
else
    blcli_execute DepotGroup groupNameToId "${DUMMY_BLPACKAGE%/*}"
    blcli_storeenv depotGroupId
    blcli_execute BlPackage createEmptyPackage "${DUMMY_BLPACKAGE##*/}" "" ${depotGroupId}
    blcli_execute DepotObject getDBKey
fi
blcli_storeenv PACKAGE_KEY

Now that we have our dummy BlPackage, let's do the same with the dummy job.  When we create the Deploy Job, we'll pass parameters as defined in the documentation for the DeployJob createDeployJob_3 command.


blcli_execute JobGroup groupNameToDBKey "${DUMMY_BLDEPLOY%/*}"
blcli_storeenv jobGroupKey
blcli_execute Job jobExistsByTypeGroupAndName 30 ${jobGroupKey} "${DUMMY_BLDEPLOY##*/}"
blcli_storeenv jobExists
if [[ "${jobExists}" = "true" ]]
then
    blcli_execute DeployJob getDBKeyByGroupAndName "${DUMMY_BLDEPLOY%/*}" "${DUMMY_BLDEPLOY##*/}"
else
    blcli_execute JobGroup groupNameToId "${DUMMY_BLDEPLOY%/*}"
    blcli_storeenv GROUP_ID
    blcli_execute DeployJob createDeployJob "${DUMMY_BLDEPLOY##*/}" "${GROUP_ID}" "${PACKAGE_KEY}" "${DEPLOY_TYPE}" "${DUMMY_TARGET_SERVER}" ${DEPLOY_OPTS}
fi
blcli_storeenv DEPLOY_JOB_KEY


Now we can set the phase schedules on the dummy job:

blcli_execute DeployJob setAdvanceDeployJobPhaseScheduleByDBKey ${DEPLOY_JOB_KEY} AtTime "${SIMULATE_TIME}" "${STAGE_DATE_TIME}" "" AtTime "${COMMIT_TIME}"
blcli_storeenv DEPLOY_JOB_KEY


Patching Job

Apply the options and schedule from the dummy job on your patching job:


PATCH_JOB="/Workspace/Patching Jobs/rhel6-clean"
REMEDIATION_DEPOT_FOLDER="/Workspace/Patch Deploy"
REMEDIATION_JOB_FOLDER="/Workspace/Patching Jobs"
blcli_storeenv PATCH_JOB_KEY



Optionally schedule the Patching Job

blcli_execute Job addOneTimeSchedule ${PATCH_JOB_KEY} "${ANALYSIS_TIME}"
blcli_storeenv PATCH_JOB_KEY



And that's it.  The attached script includes a hard-coded start time for the Deploy phases and the Patching Job.  Those could be taken as input to the script or derived somehow.  There's also a check to see if the executeDeployJobsNow option is set, in which case the script exits, since this requires manual correction.
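As one way of deriving the times instead of hard-coding them, they could be computed relative to the current date with GNU date. This is a sketch under assumptions: the offsets are arbitrary examples, and the 'YYYY-MM-DD HH:MM:SS' format string would need to match whatever the schedule commands actually expect:

```shell
# Derive phase times relative to now (GNU date assumed; offsets are examples)
ANALYSIS_TIME="$(date -d '1 day' '+%Y-%m-%d %H:%M:%S')"
SIMULATE_TIME="$(date -d '26 hours' '+%Y-%m-%d %H:%M:%S')"
COMMIT_TIME="$(date -d '2 days' '+%Y-%m-%d %H:%M:%S')"
echo "ANALYSIS_TIME: ${ANALYSIS_TIME}"
echo "SIMULATE_TIME: ${SIMULATE_TIME}"
echo "COMMIT_TIME:   ${COMMIT_TIME}"
```

These variables could then be passed to the addOneTimeSchedule and phase-schedule steps shown above.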


There are a few things that could be done in the script that are not covered and are left as an exercise for the reader:

  • Delete the dummy job and package after they've been used
  • Set pre- and post- commands in the Deploy Job options
  • Take the various options, schedule times, job path, folders, etc as script arguments

In Part 1 we saw how you can create a web-based repository configuration for JFrog Artifactory. In Part 2 we will see how you can create the Depot Software for deploying an application like Notepad++ to Windows targets.


Create a Depot Software for custom software


In this step you link the software payload's HTTP path with the Depot Software. The steps are the same as for creating any new Depot Software; I will show here how you can do it for Custom Software. You will get the following screen to select the software payload. You can live-browse the web repositories and select the payload you want to deploy. To pull the latest version from the repository for every deployment, check the box "Download during deployment". Click OK.

Now you will come to the screen where you can give the name and the deploy/undeploy commands for the software payload. For Notepad++ we have shown this below. Please zoom in the browser if the commands are not visible. Click Next.

Go through the rest of the wizard screens as for any Depot Software, click Finish, and you are all set!


Deploy the software to the selected servers


Now you can execute the deploy job just like other deploy jobs by right-clicking on the Depot Software you created.


The need for a central repository for software payloads


A lot of customers use file servers as remote repositories for storing payloads. Having multiple remote repositories can give you scalability, but when you want to host a software title that has frequent releases and keep its latest version available to deploy, maintaining it becomes an involved effort for the team. This is especially true when you have servers distributed across departments, with different IT leads owning sets of servers, each with unique requirements for the software that needs to be installed on them.


TrueSight Server Automation with its release offers a way to store all your payloads in a central web repository and download the latest version of a payload through an HTTPS URL. In this release there is an out-of-the-box integration with the generic type repository of JFrog Artifactory. The prerequisite is that you already have a JFrog Artifactory repository of the Generic Local type configured and have uploaded your software payloads to it.
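Getting a payload into a Generic Local repository, the prerequisite above, is a single authenticated HTTP PUT against the repository path. A sketch with placeholder host, repository, and payload names (none of these come from the product; substitute your own values):

```shell
# Placeholder values: Artifactory host, generic-local repo name, payload file
ARTIFACTORY_BASE="https://artifactory.example.com/artifactory"
REPO="generic-local"
PAYLOAD="npp.7.8.Installer.x64.exe"
UPLOAD_URL="${ARTIFACTORY_BASE}/${REPO}/${PAYLOAD}"
# The actual upload would be an authenticated PUT, e.g.:
#   curl -u "<user>:<api-key>" -T "${PAYLOAD}" "${UPLOAD_URL}"
echo "${UPLOAD_URL}"
```

The resulting HTTPS URL is what you later browse to when creating the Depot Software.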


You can be ready to deploy the software payloads in three simple steps:

  1. Configure the web repository in RCP Console
  2. Create a Depot Software
  3. Create a Deploy job


We will see how to use these three steps in three parts of this blog.


Configure the web repository in RCP Console

As shown below, create the web repository configuration with the following steps:


Click the Add button to add a new web repository configuration. Here you enter the Artifactory URL and the JFrog user and password. Click Next.

On the next screen you can grant the WebRepositoryConfig object-level authorization to the role you are currently logged in to the RCP console with.

Click OK and then Click Finish on the parent screen.


Click on Part 2 to see how you can create the Depot Software using the HTTPS URL and deploy the software.


I am excited to announce that TrueSight Server Automation is GA with some critical updates.


  • Now use the all-new REST APIs for core patching, documented to a well-defined standard. You can work with catalogs, groups, patching jobs, roles, and servers with these APIs. The API endpoints are described using the Swagger specification.
  • With this release we support out-of-the-box integration with the generic-local repository type of JFrog Artifactory. You can add a software package to the Depot by specifying the path to a payload that exists on a configured web repository. After adding a software package, you can deploy it using a Deploy Job just as you could earlier with an NSH path.
  • For smart groups, you no longer need to create a smart group from scratch when you want to reuse the conditions of existing smart groups. As an administrator or operator, you can create a copy of a smart group in a patch catalog.
  • We have also included provisioning support on Unified Extensible Firmware Interface (UEFI) by using Windows Imaging Format (WIM) images.
  • There is a whole set of blclis available now for working with job runs, patch catalogs, execution tasks, and smart patch catalog groups.
  • There are additions to OS support with the inclusion of Windows 2019 and SUSE 15.


Apart from these, there are some critical fixes available.


For more information please visit Version Patch 1 for version 8.9.04 in the TrueSight Server Automation 8.9.00 documentation.





I am super excited to share that TrueSight Server Automation 8.9.04 release is GA!! Following are the salient features of this release.


Job qualification check

You can now set the maximum number of servers targeted by a job. Every time you perform the following actions, a message with the number of target servers is displayed. This option is available through the GUI only.

  • Create Job
  • Create Execution Task
  • Modify Job
  • Execute Against
  • Execute a job

PowerShell integration

  • Execute PowerShell scripts through Type 3 NSH Scripts and scriptutil.
  • Extended Objects can support PowerShell script natively.
  • Option to generate script logs for Network Shell script job for ease of parsing.
  • Ability to easily pass custom arguments while launching PowerShell scripts as well as simple configuration to set the launch command for PowerShell.



Added platform support

  • Windows Server 2019 operating system
  • Ubuntu 18.04 operating system
  • POWER9 architecture

Compliance Content

  • CIS for SUSE 12 and Windows 2016
  • PCI for Windows 2016
  • DISA, PCI, and CIS templates for older OS versions have been upgraded to the latest template versions.


For more details please visit Service Pack 4: version 8.9.04 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentation


Coming up on October 18, 2018 is BMC’s annual user event, the BMC Exchange in New York City!




During this free event, there will be thought-provoking keynotes including global trends and best practices.  Also, you will hear from BMC experts and your peers in the Digital Service Operations (DSO) track.  Lastly, you get to mingle with everyone including BMC experts, our business partners, and your peers.  Pretty cool event to attend, right? 


In the DSO track, we are so excited to have 3 customers tell their stories. 

  • Cerner will speak about TrueSight Capacity Optimization and their story around automation and advanced analytics for capacity planning and future demand.  Check out Cerner's BizOps 101 ebook.
  • Park Place Technologies’ presentation will focus on how they leverage AI technology to transform organizations.
  • Freddie Mac will join us in the session about vulnerability management.  Learn how your organization can protect itself from security threats.  Hear how Freddie Mac is using BMC solutions. 


BMC product experts will also be present in the track and throughout the entire event.

  • Hear from the VP of Product Management on how to optimize multi-cloud performance, cost and security
  • Also, hear from experts on cloud adoption.  This session will review how TrueSight Cloud Operations provides you visibility and control needed to govern, secure, and manage costs for AWS, Azure, and Google Cloud services.


At the end of the day, there will be a networking reception with a raffle (or 2 or 3).  Stick around and talk to us and your peers.  See the products live in the solutions showcase. Chat with our partners.  Stay around and relax before heading home. 


Event Info:

Date: October 18th 2018

When: 8:30am – 7:00pm

  • Keynote begins at 9:30am
  • Track Sessions begin at 1:30pm
  • Networking Reception begins at 5:00pm

Where: 415 5th Ave, NY, NY 10016


For more information and to register, click here


Look forward to seeing you in NYC!  Oh, and comment below if you are planning to attend!  We are excited to meet you.


Happy to announce that TrueSight Server Automation 8.9.03 went GA on 12th June 2018, and that BMC Server Automation has been renamed to TrueSight Server Automation.

Highlights of the release:

  • HIPAA Compliance for AIX 7.1
  • Ivanti 9.3 SDK based patching
    • Windows patching will now be supported using Ivanti(Shavlik) SDK 9.3
  • SMB2 support
    • Add Node for an SMB2-enabled target - Agent Installer Job through RCP to SMB2-enabled targets.
    • Unified Agent Installer to SMB2-enabled targets.
  • SHA256 support

We are upgrading the secured certificates to use a SHA-2 signature algorithm.  In this release, the following certificates are updated:

      • Self-Signed certificate used by Application Server
      • Agent certificate


  • Security Enhancements

JRE and MagniComp vulnerabilities have been fixed, along with other vulnerabilities discovered by the application security team.
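As a quick sanity check on the SHA-2 upgrade, you can inspect a certificate's signature algorithm with openssl.  This is a generic illustration, not a BSA-specific procedure - it generates a throwaway self-signed certificate rather than pointing at an actual TrueSight Server Automation certificate:

```shell
# Generate a throwaway SHA-256 self-signed certificate and inspect its
# signature algorithm (all paths here are temporary/illustrative).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -subj '/CN=example' \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -days 1 2>/dev/null
# Prints the signature algorithm, e.g. sha256WithRSAEncryption
openssl x509 -in "$tmp/cert.pem" -noout -text | grep -m1 'Signature Algorithm'
```

Run the same `openssl x509` inspection against a real certificate file to confirm which signature algorithm it uses.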

Questions or feedback? Comment below to let us know.

For more details please take a look at the documentation here - Service Pack 3: version 8.9.03 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentation


It can be useful to manually run yum or blyum from the command line with the repodata and include lists, to get a better picture of what might be happening during analysis, to inspect the repodata, or to perform other troubleshooting steps.  The following steps accomplish this:


For versions before 8.9.01

Run the Patching Job with the DEBUG_MODE_ENABLED set to true. 

Gather the per-host generated data from the application server - for example, if the Job Name is 'RedHat Analysis' and it was run against the target '' on Jun 11, you should see a directory:

<install dir>/NSH/tmp/debug/application_server/RedHat Analysis/Sat Jun 11 09-26-57 EDT 2016/

that contains:

analysis_err.log analysis_log.log analysis_res.log installed_rpms.log repo repodata.tar.gz yum_analysis.res yum.conf yum.err.log yum.lst


Copy the entire directory back to the target system (or any system you want to test these files on) into /tmp or some other location.
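With Network Shell, the copy can be done directly from the application server to the target in one step.  The hostname below is illustrative; substitute your own target:

```
# From an NSH session on the application server (target hostname is illustrative)
cd "/opt/bmc/bladelogic/NSH/tmp/debug/application_server/RedHat Analysis"
cp -r "Sat Jun 11 09-26-57 EDT 2016" //targethost/tmp/
```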


For 8.9.01 and later

The files mentioned above are kept in <rscd install>/Transactions/analysis_archive on the target system.  The three most recent runs should be present.




Once you've located the analysis files on the target system

Edit yum.conf so that the cachedir and reposdir entries match the current directory path.  Following the pre-8.9.01 example, where we copied the directory into /tmp, both values are changed to point at the copied directory under /tmp.
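For example, assuming the copied directory is /tmp/analysis_debug (an illustrative name - use whatever path you copied to), the edited values in yum.conf would look something like:

```ini
[main]
# Illustrative paths - point both at the location you copied the files to.
# The repo subdirectory is part of the gathered debug data.
cachedir=/tmp/analysis_debug/repo
reposdir=/tmp/analysis_debug
```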


Then, from within that directory, you can run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update

If an include list was used, you can run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat rpm-includes.lst.old`

or

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat parsed-include.lst.old`

if the parsed list exists.  The parsed include list omits any rpms that are both installed on the target and present in the include list.  The parsed-include.lst was added in recent BSA versions to handle the situation where yum updates an rpm to the latest one in the catalog, instead of leaving it alone, when the include list contains the exact version of an rpm already installed on the system.
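Conceptually, the parsed list is the include list with already-installed rpms filtered out.  The sketch below illustrates that idea only - the file names and rpm versions are made up, and this is not the actual BSA implementation:

```shell
# Sketch: filter rpms that are already installed out of the include list.
# File names and rpm versions here are illustrative, not real BSA data.
tmp=$(mktemp -d)
printf '%s\n' 'bash-4.2.46-34.el7.x86_64' 'kernel-3.10.0-1160.el7.x86_64' \
  > "$tmp/rpm-includes.lst"
printf '%s\n' 'bash-4.2.46-34.el7.x86_64' > "$tmp/installed_rpms.lst"
# Keep only include-list lines that do not exactly match an installed rpm
grep -vxFf "$tmp/installed_rpms.lst" "$tmp/rpm-includes.lst" \
  > "$tmp/parsed-include.lst"
cat "$tmp/parsed-include.lst"
```

Here only the kernel rpm survives the filter, since the bash rpm at that exact version is already installed.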


If it's a RedHat 7 target, the native yum is used, so use yum instead of blyum.

yum -c yum.conf -C update


You can also use the above process to copy the metadata from the target system to another system and run queries against the metadata, or run analysis with the same metadata and options against a test system.  For example, you could run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C search <rpmname>


/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C info <rpmname>


to check whether a given rpm is present in the metadata.


BMC Software is alerting customers who use BMC Server Automation (BSA) for managing Unix/Linux targets to this Knowledge Article, which highlights vulnerability CVE-2018-9310 in the MagniComp SysInfo solution used by the BSA RSCD Agent to capture hardware information. The Knowledge Article contains the details of the issue and also how to obtain and deploy the fix.
