
TrueSight Server Automation


One of the core competencies of TrueSight Server Automation (TSSA) is automating the patch update process for servers. TSSA makes this easy for the typical monthly cycle of patching servers in large groups, where the servers in each group can be patched in no particular order. Many applications, however, are hosted on multiple servers, often in a High Availability (HA) and/or failover cluster, to limit service outages. Automation to patch these types of services needs to update the nodes in a specific order, usually one at a time, to prevent service outages during the patch update process. We call this Service Aware Patching.

 

Microsoft Exchange is one of these services, and I wanted to share how we have implemented Service Aware Patching entirely in TrueSight Server Automation at Customer Zero (i.e., BMC-IT).

 

Key Concepts

 

When automating any cluster service, we need to be able to idle the current target node by moving services off to the other nodes. Experience has taught us that changing the cluster state requires a careful, methodical approach to be successful.

 

Key "milestones" in the process;

  1. Verify the target cluster node is in the "normal" state BEFORE attempting to change the cluster state
  2. Move services off the target node to idle the node (i.e., move the node into Maintenance Mode)
  3. Verify the services successfully moved to another node and the target node is actually idle
  4. Use normal TSSA Patch Analysis and Deployment to update the patch level of the target node
  5. Verify the node and services are still in the idle state, as patch deployment probably restarted the node
  6. Move the services back onto the target node (i.e., move the node out of Maintenance Mode)
  7. Verify the cluster node and the cluster are in the "normal" state BEFORE moving on to the next node in the sequence

 

Notice we use a careful, logical sequence to move the target node through the required phases; we call this the "chain" of steps. The important point here is that if ANY step in the chain fails, the automation procedure needs to stop and NOT PROCEED to the next step. If something unexpected happens in the chain of steps on the cluster, ignoring the failed step and continuing on will most likely cause a service outage.
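This fail-fast behavior can be sketched in shell, where each step runs only if the previous one succeeded (the step names here are hypothetical placeholders, not the actual jobs or scripts described later):

# Minimal sketch of the fail-fast chain: any non-zero exit halts the sequence.
verify_normal_state    || exit 1   # milestone 1
enter_maintenance_mode || exit 1   # milestone 2
verify_node_idle       || exit 1   # milestone 3
run_patch_deployment   || exit 1   # milestone 4
verify_still_idle      || exit 1   # milestone 5
exit_maintenance_mode  || exit 1   # milestone 6
verify_normal_state    || exit 1   # milestone 7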

 

TSSA takes care of the actual patch updates, the question is how do we implement the steps to handle the transitions of the cluster nodes?

 

Microsoft Exchange automation requires knowledge of how to verify the Exchange service and how to move an Exchange node in and out of "Maintenance Mode". Fortunately, Exchange experts have already created PowerShell scripts that do what we need here for Service Aware Patching.

 

We used the PowerShell scripts available here:

 

The first issue we encountered is common when attempting to automate existing scripts of any type. If a script was designed for interactive use rather than to be called by automation, it typically does not do things like set the exit code on failure; it relies instead on the admin running the script reading the errors written to STDOUT. This was true of these PowerShell scripts, so one of our Exchange admins made a copy of the scripts we needed for the all-important "verify" steps in our Service Aware Patching chain and modified them to set a non-zero exit code if the script failed in any of the key operations or checks. Our TSSA Deploy Jobs automatically detect and report failure if a script exits with a non-zero exit code.
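The shape of that modification is simple; sketched here in shell (the check itself is a hypothetical placeholder, and our actual changes were made inside the PowerShell scripts):

# Hypothetical verify-step skeleton: a failed check must surface as a
# non-zero exit code so the TSSA Deploy Job flags the step as failed.
if ! run_key_check
    then
    echo "ERROR: key check failed" >&2
    exit 1
fi
exit 0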

 

Once we had the automation procedure steps in the chain defined and the PowerShell scripts to perform the steps needed to manipulate the Exchange cluster, all the hard work was done! Now we just needed to create the TSSA jobs required to execute the steps in the chain in the correct order.

 

Implementation in TrueSight Server Automation

 

We used TSSA Batch Jobs to implement the chain of steps, executed in a prescribed order.

 

Note: Important options to set in the TSSA Batch Jobs:
  • "Continue executing batch when individual jobs return non-zero exit code" is NOT set
  • "Execute jobs sequentially" IS set

 

The main TSSA Batch Job runs our Service Aware Patching and controls the node-by-node sequence in the cluster:

 

TSSA Batch Job: "Exchange 2013 - FULL Sequence Maintenance Mode Operation (PROD)"
  • Exchange 2013 - Maintenance Mode Operation (Node #1)
  • NSH - SleepDelay (5 minutes)
  • Exchange 2013 - Maintenance Mode Operation (Node #2)
  • NSH - SleepDelay (5 minutes)
  • Exchange 2013 - Maintenance Mode Operation (Node #3)
  • NSH - SleepDelay (5 minutes)
  • Exchange 2013 - Maintenance Mode Operation (Node #4)
  • NSH - SleepDelay (5 minutes)
  • Exchange 2013 - Maintenance Mode Operation (Node #5)
  • NSH - SleepDelay (5 minutes)
  • Exchange 2013 - Maintenance Mode Operation (Node #6)

 

This main TSSA batch job is the controller job that calls the child job for each node in the prescribed order, and any failure will cause the batch job to stop and not continue to the next node.

 

Note: In the case of any failure in the chain of steps, manual intervention by the Exchange admins is required to determine how to resolve the issue without service impact.

 

Each of the child jobs is where our chain of steps is executed against a specific node in the cluster:

 

TSSA Batch Job: "Exchange 2013 - Maintenance Mode Operation (Node #?)"
  • Deploy - Verify Exchange READY for Maintenance Mode
  • Deploy - Move Exchange to Maintenance Mode
  • Deploy - Verify Exchange Maintenance Mode
  • PAJ - Exchange 2013 Servers
  • Build Provisioning (Windows) - CHECKPOINT Wait for Server
  • Deploy - Move Exchange to Normal Mode
  • Deploy - Verify Exchange Normal Mode

 

Note: There is one child job per node.

 

Here's a screenshot showing a successful run of one of the child jobs:

 

[Screenshot: SAP-Exchange-MMO-1.png]

 

Overview of the TSSA low-level jobs and the associated PowerShell scripts

 

Deploy - Verify Exchange READY for Maintenance Mode
  • verify-server-is-NORMAL_MODE.ps1
Description: Script we created to check that the mailbox database (Get-MailboxDatabase) status is OK

 

Deploy - Move Exchange to Maintenance Mode
  • Start-ExchangeServerMaintenanceMode.ps1
Description: Script from VanHybrid site to move Exchange into "Maintenance Mode"

 

Deploy - Verify Exchange Maintenance Mode
  • verify-server-is-IN-maintenance-mode.ps1
Description: Script we created to check various Exchange components are in the Inactive state and the cluster node state is "Paused"

 

Deploy - Move Exchange to Normal Mode
  • Stop-ExchangeServerMaintenanceMode.ps1
Description: Script from VanHybrid site to move Exchange out of "Maintenance Mode"

 

Deploy - Verify Exchange Normal Mode
  • verify-server-NOT-in-maintenance.ps1
Description: Script we created to check various Exchange components are in Active state and the cluster node state is "Up"

 

Summary

 

Our Service Aware Patching use case for the Exchange service uses a single TSSA Batch Job to run the entire sequence end to end: moving one node at a time into Maintenance Mode, updating its patch level using normal TSSA patching jobs, and then returning the node to service. All nodes are sequenced through, and the end result is the entire Exchange cluster patched with no service outage. All using automation.


TrueSight Smart Reporting for Server Automation (TSSR-SA) 19.2 Patch 2 was released on February 27th, 2020.

 

The official product version is TSSR-SA 19.2.02.

 

TSSR-SA is the Smart Reporting solution for TrueSight Server Automation (TSSA) environments and is the replacement product for BDSSA, which reached its End-Of-Life in August 2019.

 

The TSSR-SA solution consists of two main products which currently use different versioning schemes. Although documented, this can sometimes result in confusion around which versions of the product components are compatible and should be downloaded and installed together for a valid TSSR-SA solution installation.

 

The goals of this month’s blog are to clarify details around:

 

  1. Which products make up the TSSR-SA 19.2.02 solution.
  2. Which versions of the constituent products are compatible.
  3. Which versions of TSSA are compatible with TSSR-SA 19.2.02.
  4. Which files should be downloaded from the BMC EPD site in order to install or upgrade to TSSR-SA 19.2.02.
  5. When to follow the documented install path vs the upgrade path.

 

All of this information can be found in the official product documentation and documentation links will be provided when referenced.

 

Since some of this information exists in the TSSR-SA documentation space and some exists in the TSSR Platform documentation space, this blog aims to help avoid confusion by highlighting the most important information in one single place.

 

Thanks,


John O’Toole

Principal Technical Supp Analyst

BMC Software Customer Support

 

 

 

1. What products make up the TSSR-SA Smart Reporting Solution?

 

 

TSSR-SA consists of two main constituent products:

  • TrueSight Server Automation - Data Warehouse (TSSA-DW)
  • TrueSight Smart Reporting – Platform (TSSR Platform)

 

 

The product version information for TSSR-SA 19.2.02, as listed in the Release Notes, is as follows:

 

 

TrueSight Smart Reporting for Server Automation 19.2.02 (Solution)

Component                                       Version
TrueSight Server Automation - Data Warehouse    8.9.04.004
TrueSight Smart Reporting - Platform            20.02

 

 

For completeness, this can be compared with the product version information for TSSR-SA 19.2.01 (Patch 1), which was released in December 2019:

 

 

TrueSight Smart Reporting for Server Automation 19.2.01 (Solution)

Component                                       Version
TrueSight Server Automation - Data Warehouse    8.9.04.003
TrueSight Smart Reporting - Platform            19.3

 

And with TSSR-SA 19.2, which was released in July 2019:

 

TrueSight Smart Reporting for Server Automation 19.2 (Solution)

Component                                       Version
TrueSight Server Automation - Data Warehouse    8.9.04.002
TrueSight Smart Reporting - Platform            19.2

 

 

 

2. Which versions of TSSA-DW are compatible with which versions of TSSR Platform?

 

 

This information can be found in the TSSR-SA 19.2.02 Release Notes:

 

 

 

 

As we can see here, in a valid TSSR-SA environment, there is a tight relationship between the versions of TSSA-DW and TSSR Platform.

 

When downloading the TSSA-DW and TSSR Platform installers, it is important that compatible versions are downloaded.

 

 
3. Which versions of TrueSight Server Automation (TSSA) are compatible with which versions of TSSR-SA?

 

This information can also be found in the TSSR-SA 19.2.02 Release Notes:

 

Compatibility with TrueSight Server Automation

The following versions of TrueSight Server Automation are supported with TrueSight Server Automation - Data Warehouse:

  • BMC Server Automation 8.9.0.x and 8.9.02
  • TrueSight Server Automation 8.9.03.x, 8.9.04.x, and 20.02

 

So, every version of TSSR-SA 19.2.x is compatible with every version of BSA/TSSA 8.9.x and TSSA 20.02.

 

The version numbers of TSSA and TSSA-DW are not tightly coupled; i.e., if an environment is currently running TSSA 8.9.03, the version of TSSA-DW installed does not need to match.

 

What is important is that the version of TSSA-DW installed is compatible with the corresponding version of TSSR Platform (see the previous compatibility matrix).

 

For example, the following would be a valid combination:

 

TSSA 8.9.03

TSSA-DW 8.9.04 Patch 4          (part of TSSR-SA 19.2.02)

TSSR Platform 20.02                 (part of TSSR-SA 19.2.02)

 

 

4. Which files should I download in order to install a TSSR-SA 19.2.02 environment?

 

As mentioned previously, the TSSR-SA 19.2.02 solution consists of the following product versions:

 

TrueSight Smart Reporting for Server Automation 19.2.02 (Solution)

Component                                       Version
TrueSight Server Automation - Data Warehouse    8.9.04.004
TrueSight Smart Reporting - Platform            20.02

 

 

Details on which files to download for both TSSA-DW 8.9.04.004 and TSSR Platform 20.02 can be found in the "Downloading the Patch" section of the TSSR-SA 19.2.02 Release Notes.

 

 

 

a) Locating and downloading the installer files for TSSA-DW 8.9.04.004

 

Since TSSA-DW 8.9.04.004 is a patch, the installation files are found under the “Product Patches” tab on EPD as highlighted below:

 

 

 

From here, we navigate to the TSSA-DW 8.9.04 area:

 

 

Under here, the TSSA-DW 8.9.04.004 installation files are all dated 02/27/2020:

 

 

 

b) Locating and downloading the installer files for TSSR Platform 20.02

 

TSSR Platform 20.02 is a full product release, so the installation files are found under the "Licensed Products" tab on EPD as highlighted below:

 

 

 

 

 

From here, we download either the Windows or the Linux installer depending on the OS of the TSSR-SA Server:

 

 

 

 

5. Once the installation files are downloaded, which documentation steps do I follow to install or upgrade TSSR-SA 19.2.02?

 

 

The steps to follow depend on whether this is the initial installation of TSSR-SA in the environment, or whether a previous version of TSSR-SA (i.e., 19.2 or 19.2.01) was previously installed and is now being upgraded to TSSR-SA 19.2.02.

 

 

Scenario A – Installing TSSR-SA for the first time in an environment:

 

This is the initial installation of TSSR-SA in this environment.

 

BDSSA is the TSSA reporting solution currently in use and we want TSSR-SA to use the same Warehouse DB that BDSSA has been using. This is sometimes referred to as a "migration" from BDSSA to TSSR-SA.

 

These are also the steps to follow if neither BDSSA nor TSSR have previously been installed in this environment.

 

In this scenario, we want to follow the Installing section in the TSSR-SA documentation.

 

Since the installation process includes steps from both the TSSA-DW and TSSR Platform documentation spaces, they are listed here in order:

 

  1. Review the TSSR-SA 19.2.02 Release Notes
  2. Review the TSSR-SA 19.2.02 Orientation for BDSSA users
  3. Review the TSSR-SA 19.2.02 Getting Started section
  4. Complete the Planning activities for TSSA-DW. Note: If the TSSA-DW installation will be using the same databases as BDSSA, no new databases or schemas need to be created for TSSA-DW.
  5. Complete the Prepare to install activities for TSSA-DW
  6. Complete the TSSA-DW Installation steps
  7. Complete the TSSA-DW Post-Installation tasks. Note: Step 2 of the Post-Installation tasks is the step where we provide TSSA-DW with the database information for the Warehouse, two ETL databases and the TSSA database.

 

 

At this point, we should have a functioning TSSA-DW installation and should be able to perform an ETL run, as mentioned in step 6 of the Post-Installation tasks.

 

If ETL is successful, we can continue to the TSSR Platform 20.02 installation, which consists of the following steps:

 

8. Complete the TSSR Platform 20.02 Planning activities.

9. Complete the TSSR Platform 20.02 Preparing To Install activities (see point 6B below).
10. Complete the TSSR Platform 20.02 Installation steps.
11. Complete the TSSR Platform 20.02 Post-Installation tasks, which consist of two main steps:

  1. Adding the TSSA-DW Component to TSSR Platform
  2. Configuring TSSR Platform settings

 

 

Scenario B – Upgrading an existing TSSR-SA environment:

 

A previous version of TSSR-SA (19.2 or 19.2.01) has already been installed in this environment and the goal is to upgrade it to TSSR-SA 19.2.02.

 

In this case, we want to follow the Upgrade steps in the TSSR-SA documentation.

 

  1. Review the TSSR-SA 19.2.02 Release Notes
  2. Review the TSSR-SA 19.2.02 Upgrading page
  3. Complete the TSSA-DW Preparing to upgrade activities
  4. Complete the TSSA-DW Upgrade steps
  5. Complete the TSSR Platform 20.02 Preparing to upgrade activities
  6. Complete the TSSR Platform 20.02 Upgrade steps
  7. Complete the TSSR-SA 19.2.02 Post-upgrade tasks. These steps are very important in order to avoid post-upgrade performance, resource usage, data-refresh, or authentication issues.

 

 

6. Important points and common questions/pitfalls

 

This section will be updated as common TSSR-SA 19.2.02 install/upgrade issues and questions are encountered.

 

A) Warehouse DB data condition to check before running TSSA-DW post-installation steps.

 

Make sure step 1 of the TSSA-DW Post-installation steps is followed if this is the initial migration of a BDSSA environment to TSSR-SA. Skipping this step may result in errors during the Warehouse DB upgrade.

 

"If you are performing the initial migration of BMC Decision Support for Server Automation warehouse to TrueSight Server Automation - Data Warehouse, you must run a diagnostic SQL query to verify that the version of BMC Decision Support for Server Automation warehouse is as expected. For more information, see this knowledge article"

 

 

 

B) Creating the TSSR Repository DB/Schema

 

When doing a fresh installation of TSSR-SA in an environment where BDSSA is currently installed, and the same Warehouse and ETL DBs are to be used by TSSR-SA, there can sometimes be confusion around the new TSSR Repository DB, which TSSR Platform uses to store information such as metadata, users, and user permissions for reports.

 

Unlike the Warehouse and ETL DBs, this is not a DB which was present or used by BDSSA. This is a net-new DB required by TSSR Platform.

 

Oracle Environments:

For Oracle DB environments, the TSSR Repository user must be created in advance of running the TSSR Platform installer.

The installer is then provided with the DB connection information. See the "Setting up Oracle as the repository database" section of the Setting up the TSSR Repository Database page.

This page contains details on creating the Oracle TSSR Repository user, the tablespace (if a separate tablespace is preferred) and granting the required privileges.

 

 

SQL Server Environments:

For SQL Server environments, the user can choose whether to create the TSSR Repository DB and user in advance of running the TSSR Platform installer or to allow the installer to create the TSSR Repository DB and user.

 

If the option to have the installer create the DB and User is selected, then the installer requires Database Administrator credentials to be provided so that these tasks can be completed.

If the option to create the DB and User manually in advance is selected, then Database Administrator credentials will not be required by the installer.


We are glad to announce the release of a new version of TrueSight Server Automation, 20.02. This release brings enhancements to core features that help stabilize functionality.

 

RSCD Agent Enhancement:

The RSCD agent now includes a new Smart Agent feature to perform the following tasks:

  • Monitor the status of the RSCD agent and share the status information with the Application Server.
  • Send a request to the Application Server to register a new server without manual intervention.

 

Patch Enhancement:

TrueSight Server Automation now supports rebooting Windows servers before applying patches, to prevent patch job failures. When you run a patch job on a Windows server that is waiting for a reboot, the server is first rebooted and then the job is started.

 

Compliance templates are now available for CIS Windows 2019 and DISA STIG Windows 2019.

 

Security enhancements

 

BLCLI support:

A new BLCLI command, Utility simpleExportAuditResultV2, has been added to export the results of an Audit Job.

 

RESTful API support:

New REST APIs are added for the following resources: Deploy Jobs, NSH Script Jobs, Patching Jobs, and BLPackages.

You can use these new APIs to perform the following operations:

  • Execute with approval operation on Deploy, NSH Script, and Patching jobs.
  • Update schedules for NSH Script and Deploy jobs.
  • Retrieve the details of a BLPackage.

 

For more details, please refer to the online documentation - 20.02 enhancements - Documentation for TrueSight Server Automation 20.02 - BMC Documentation


We are glad to announce the release of a new version of BMC's Automation Console, 20.02.

This console is an interface built on top of BMC Server Automation that simplifies the OS patching experience for servers and provides a single platform for vulnerability remediation and patching.

This console is available as a service, called BMC Helix Automation Console (SaaS), and as an on-premises product, called TrueSight Automation Console.

Here are the key highlights for this release:

Ability to create change tickets and approvals:

As part of the vulnerability remediation operation, you can now create a change request in the change management system, which tracks the operation and goes through a change approval process. After the change request is approved, the operation runs according to its schedule.

Approvers can also reschedule the operation if needed.

 

[Screenshot: ChangeApprovalManagement.png]

Blind spot detection:

Automation Console integrates with BMC Discovery to find servers in your environment that were not scanned for vulnerabilities. Such servers or assets are blind spots and can be a potential security risk, as there might be critical undiscovered vulnerabilities on those servers. The Discovered Assets page lists such assets. Key Performance Indicators (KPIs) on the Discovered Assets page show the total number of discovered assets, assets that are discovered but not mapped to endpoints in Server Automation, and assets that are not yet scanned. You must ensure that the discovered assets are scanned for missing patches and vulnerabilities.

Business Service Context:

On the vulnerability dashboard, you can also see the top 10 business services or applications with the maximum number of vulnerabilities and impacted assets. This helps you prioritize remediation based on business service impact.

 

These are just three key highlights; to take a deep dive into the whole set of features delivered as part of this release, please go through the documentation.

Please contact us if you would like to share any feedback.


This question has come up a few times: how can I remove the patch payloads from a patch repository that were downloaded but will likely not be used again? You can't just delete the underlying file off the file system, because TSSA still has it flagged as downloaded, and if you happen to need the patch again (maybe you are patching a newly provisioned system), you will get errors because TSSA can't find the patch file.

 

This script solves that problem for Windows catalogs.  It uses the DATE_CREATED property value (when the patch was added to the catalog) to find old patches whose payload exists on the repository file system, determines if the patch object has no dependencies, deletes the file, and then resets the downloaded flag to no.  If the patch is needed in the future, it will be re-downloaded.  The script will not run against: offline catalogs, catalogs with Download from Vendor checked, or catalogs that share a repository location with other catalogs in the same TSSA environment.  This should not be run against a TSSA environment that shares the catalog location with another TSSA environment.

 

Standard disclaimer: not supported, may not work for you, may cause small fires, don't test this in production.

 

Basic usage:

 nsh removeOldPatchesFromCatalog.nsh -P defaultProfile -R BLAdmins -c "<full path to catalog>" -r <retention in days>

example:

 nsh removeOldPatchesFromCatalog.nsh -P defaultProfile -R BLAdmins -c "/Workspace/Patch Catalogs/Windows 2019" -r 30

 

This will remove patches 30 days or older from the /Workspace/Patch Catalogs/Windows 2019 catalog.

 

The retention value should be longer than your patch cycle.  If it normally takes you about a month to patch all your servers, then use a retention value of 45 to 60 days.

 

Feedback is welcome.


Depending on your licensing agreement with RedHat, the entitlements to the repositories you are using in a single Patch Catalog may be spread across multiple certificates.  Alternatively, you may need to maintain multiple Patch Catalogs, each with separate repository servers and each needing its own set of entitlement certificates, so you cannot use the single set defined in the Patch Global Configuration.  Currently it is not possible to handle either of these situations.  With a small update to yum_metadata_generator.sh, both of these situations can be handled.  Also, since this bypasses the Patch Global Configuration settings for the entitlement certificate and key, you no longer have to worry about the case where RedHat revokes the certificates and subscription-manager on your repository server gets new certificates in /etc/pki/entitlement.

 

Standard disclaimer here:  this is not supported, will be removed by an upgrade, may not work in future versions, and may not work in your environment, etc etc.

 

What we need to do is look at the URL defined in the repository configuration generated by the Catalog Update Job and search for it in the entitlement certificates present on the system. Then we can update the generated repository configuration with the appropriate certificate present on the repository server.

 

First, locate the support-files-1.0-SNAPSHOT.jar file in the NSH/br/stdlib directory on an appserver.  Extract the script from this file into a temporary directory:

# unzip /opt/bmc/bladelogic/NSH/br/stdlib/support-files-1.0-SNAPSHOT.jar -d /tmp com/bmc/sa/patchfeed/redhat/yum_metadata_generator.sh
Archive:  /opt/bmc/bladelogic/NSH/br/stdlib/support-files-1.0-SNAPSHOT.jar
  inflating: /tmp/com/bmc/sa/patchfeed/redhat/yum_metadata_generator.sh

 

Around line 348, add the following function:

updateEntitlements()
{
    # For each repo in the generated yum configuration, find the entitlement
    # certificate whose URL matches the repo's baseurl and point the repo's
    # sslclientcert/sslclientkey options at that certificate and key.
    while read repo
        do
        echo "REPO: ${repo}"
        for cert in /etc/pki/entitlement/*[0-9].pem
            do
            echo "Checking ${cert}..." | tee -a $log_file_name
            # Compare the URL inside the certificate (rct cat-cert) against the
            # repo baseurl (yum-config-manager), expanding $basearch/$releasever
            if grep -q "$(awk -v val="URL:" '{if ($1 == val) print $2 }' <<< "$(rct cat-cert "${cert}")"  | sed "s/\$basearch/${repoArch}/g;s/\$releasever/\.\*/g")" <<< "$(awk -v val="baseurl" '{if ($1 == val) print $3}' <<< "$(yum-config-manager -c ./yum.conf ${repo})")"
                then
                echo "Using ${cert} for ${repo}"  | tee -a $log_file_name
                yum-config-manager -c ./yum.conf --save --setopt=${repo}.sslclientkey=${cert%.*}-key.pem
                yum-config-manager -c ./yum.conf --save --setopt=${repo}.sslclientcert=${cert}
                break
            fi
        done
    done <<< "$(awk -v val="repo:" '{ if ($2 == val ) print $3}' <<< "$(yum-config-manager -c ./yum.conf )")"
}

 

After the export LD_LIBRARY_PATH="" statement, around line 353 (line numbers as they were before the addition above), call the function:

echo "Started executing yum metadata generator script" 

cd $repoDir
export LD_LIBRARY_PATH=""
updateEntitlements

 

Copy the support-files-1.0-SNAPSHOT.jar to the root of the temporary directory where you extracted it earlier.  Then replace the yum_metadata_generator.sh in the zip with the altered version:

# cd /tmp
# cp /opt/bmc/bladelogic/NSH/br/stdlib/support-files-1.0-SNAPSHOT.jar .
# zip support-files-1.0-SNAPSHOT.jar com/bmc/sa/patchfeed/redhat/yum_metadata_generator.sh
  updating: com/bmc/sa/patchfeed/redhat/yum_metadata_generator.sh (deflated 75%)

 

  1. Stop the application server services.
  2. Make a backup of the original support-files-1.0-SNAPSHOT.jar somewhere outside of the application server install directory.
  3. Delete NSH/br/stdlib/support-files-1.0-SNAPSHOT.jar.
  4. Copy the altered support-files-1.0-SNAPSHOT.jar into NSH/br/stdlib.
  5. On Linux, the file should be owned by root and be owner read and write, group read, and everyone read (644).
  6. Start the application server services.

 

This will now look for entitlement certificates in the /etc/pki/entitlement directory on the repository server, ignoring what is defined in Patch Global Configuration.  As long as that directory contains your entitlement certificates for all of the repositories you have in your Patch Catalog, the Catalog Update Job should run without errors related to missing entitlements for one of the repositories being downloaded.

 

The same alteration can be performed to the offline downloader, with the support-files-1.0-SNAPSHOT.jar existing in the <downloader dir>/lib directory.

 

Since an upgrade will overwrite the altered jar file, you will need to re-apply the above modifications and re-test the Catalog Update Job after every upgrade.
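A quick way to confirm whether the altered script is still in place after an upgrade (assuming the default install path shown earlier) is to read it straight out of the jar:

# Prints matching lines if the altered function is present; no output means
# the upgrade restored the stock version and the edit must be re-applied.
unzip -p /opt/bmc/bladelogic/NSH/br/stdlib/support-files-1.0-SNAPSHOT.jar com/bmc/sa/patchfeed/redhat/yum_metadata_generator.sh | grep updateEntitlements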


On January 7th, 2020, there was a change to the URL used by TrueSight Server Automation (TSSA), versions 8.9 Service Pack 3 (SP3) and above, to retrieve the XML metadata file from Ivanti.  When executing a Windows Catalog Update Job, it may fail with the following error:

Error 01/17/2020 12:07:05 Validation Error :- BLPAT1004 - Http Url is not accessible.
Possible remediation steps :- Please check the proxy login credentials
Please check the vendor url
Please check network access to vendor url
https://content.ivanti.com/data/oem/BMC-Bladelogic/data/partner.manifest.xml

 

 

Note the URL in the error message, "https://content.ivanti.com/data/oem/BMC-Bladelogic/data/partner.manifest.xml".  This is no longer a valid URL.  In support of the newer Shavlik SDK version 9.3, the URL has changed to a new location.  In some instances the old/original URL may still work, but it will eventually be fully disabled.  Ivanti now provides the metadata file at this URL:

https://content.ivanti.com/data/oem/BMC-Bladelogic/data/93/manifest/partner.manifest.xml

 

In TSSA, the URL location of partner.manifest.xml should be updated within the Patch Global Configuration's Shavlik URL Configuration tab.

[Screenshot: GlobalPatchConfig_URLConfig.png]

 

 

See BMC Knowledge Article 000181848 for further detailed steps to update the Patch Global Configuration.

When using the Windows off-line downloader utility, the patch-psu.properties file within the resources sub-directory will need to be modified.  Update the "windows.shavlik.manifest_url" parameter to reflect the new URL location, so it will look like:

 

windows.shavlik.manifest_url = https://content.ivanti.com/data/oem/BMC-Bladelogic/data/93/manifest/partner.manifest.xml
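To confirm the new URL is reachable from the server doing the downloads, a quick manual check (assuming curl is available; add proxy options if your environment requires them) might look like:

# A 200 response on the first line indicates the manifest URL is reachable.
curl -sSI https://content.ivanti.com/data/oem/BMC-Bladelogic/data/93/manifest/partner.manifest.xml | head -1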

 

 

For BladeLogic Server Automation (BSA) version 8.9 Service Pack 2 (SP2) or lower, Windows Patch Analysis is no longer supported as of September 30, 2019.  An upgrade is required to use this feature again, as per the earlier notification here:

https://docs.bmc.com/docs/tssa89/notification-of-action-needed-for-users-of-truesight-server-automation-microsoft-windows-patching-815403716.html

 

 

For continued status information on Patch Analysis functionality, follow the "OS Patching Vendor Health Dashboard" in the BMC Communities.

Also be sure to subscribe to the TrueSight Automation Video Channel for related videos and notifications as new video content is published.


Hello Everyone,

 

Windows Patch Management is one of the most heavily used features of TrueSight Server Automation (TSSA) and involves the interaction between TSSA, Ivanti Technologies (previously known as Shavlik), and the underlying Microsoft Windows operating system and patches.

 

The most common TSSA Windows Patch Management support cases we receive tend to fall into one of the following categories:

 

  1. A Windows Patch is believed to be installed on a target server but is reported as missing by the TSSA Patch Analysis job.
  2. A Windows Patch is not installed on a target server but is not reported as missing by the TSSA Patch Analysis job.
  3. How to add new products to the list of filters available in a TSSA Windows Patch Catalog.
  4. TSSA Windows Patch Analysis issues related to target server reboots.
  5. How Microsoft Servicing Stack Updates (SSUs) can affect TSSA Windows Patch Analysis results.
  6. In May 2019, Windows Catalog Updates began failing due to changes in how Oracle Java patches are distributed.

 

The TSSA Customer Support team has been working on enhancing our knowledge base articles and video content in these areas and, in this month’s blog, we wanted to highlight the most useful, and frequently used, knowledge articles and videos for each of the problem categories listed above.

 

 

1) False Positives – Windows patch is believed to be installed but reported as Missing by TSSA Patch Analysis

 

 

Knowledge Articles:

 

000099904: TSSA/BSA: Windows Patch Troubleshooting - A Windows Hotfix is reported as missing by TSSA Patch Analysis but is believed to be installed

 

000090870 TSSA/BSA: Windows Patch Troubleshooting - Patch Deploy Job appears to have succeeded but TSSA Patch Analysis still reports the patch as missing

 

Video:

 

 

 

 

2) False Negatives – Windows patch is believed to be missing but is not reported as Missing by TSSA Patch Analysis

 

Knowledge Article:

 

000095840 TSSA/BSA: Windows Patch Troubleshooting - A Windows Hotfix is not installed on a target server but is not reported as missing by TSSA Patch Analysis

 

 

3) How to add additional products to the list of available filters in a TSSA Windows Patch Catalog

(This topic can also surface as a "No mappings were found for the selected product" warning when running a Windows Catalog Update Job)

 

Knowledge Articles:

 

000166476 BSA/TSSA: How to add a new product to the list of available filters available in a TSSA Windows Patch Catalog?

000130493 BSA/TSSA: Windows Catalog update job displays "No mappings were found for the selected product" warning in Job Run Log

 

Video:

 

 

 

 

4) TSSA Windows Patch Analysis issues related to target server reboots

 

Knowledge Articles:

 

000081738 TSSA/BSA: Windows Deploy Job fails to reboot, reports "Reboot required but did not occur. Manual reboot needed to complete operation, exitCode = -4003"

 

000145107 TSSA/BSA: Reboot is pending on this machine, analysis results may be incorrect

 

 

 

5) How Microsoft Servicing Stack Updates (SSUs) can affect TSSA Windows Patch Analysis results

 

 

Knowledge Article:

 

000167748 TSSA/BSA: How can Microsoft Servicing Stack Updates (SSUs) affect TSSA Windows Patch Analysis results?

 

 

6) In May 2019, Windows Catalog Updates began failing due to changes in how Oracle Java patches are distributed.

 

Knowledge Article:

 

000168397 TSSA/BSA: Beginning May 2019 - Windows Catalog Updates failing due to problems downloading Oracle Java patches

 

 

 

Troubleshooting TSSA Windows Patch Analysis cases will often require downloading and running the Ivanti DPD Trace tool in order to gather more detailed information. Please see the following video which demonstrates how to download and run the Ivanti DPD Trace tool:

 

 

 

See also Knowledge Article 000096560, which describes how to analyze the Trace.txt and shavlik_results.xml files generated by a TSSA Patch Analysis Job.

 

 

Finally, please make sure to subscribe to the TrueSight Automation Video Channel to find more useful videos and to be notified of new video content as it is published. If you are particularly interested in TSSA Windows Patch Management videos, we have a playlist specifically for this feature.


Adding Authorizations to an Acl Template is typically accomplished by running a series of blcli_execute BlAclTemplate addTemplatePermission commands.  Some brief profiling shows that adding 100 Authorizations to a newly created Acl Template takes about 10 seconds.  That's not too bad, but I wonder if it could be faster.  If we look in the unreleased blcli commands documentation for the addTemplatePermission command we can see what it runs:

Reading through the sequence of commands, what happens is that it loads the acl object from the template (the thing that contains the list of role:authorization entries), then creates a new acl entry (blAce), adds the new role and authorization to it, and then updates the acl object with this acl entry.  Then the acl template object is updated with the newly updated acl list.  It's likely that instead of immediately running the template update, we can keep adding more and more acl entries to the acl object and do a single update at the end.

 

We want to be able to compare the run times to the current method of adding acls so we should build out a script that does both and compare run times.  I'll just grab a subset of the entire list of authorizations for this test and then run each method, noting the runtime.

 

#!/bin/nsh
# load the date/time module so we can use $EPOCHSECONDS
zmodload zsh/datetime
blcli_setjvmoption -Dcom.bladelogic.cli.execute.quietmode.enabled=true
blcli_setoption serviceProfileName defaultProfile
blcli_setoption roleName BLAdmins
# get just the system authorizations for the test
blcli_execute Authorization findAllByType 1
blcli_execute Authorization getName
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal allAuths
# grab 100, for further profiling get more.  for a real run you'd be reading the list from a file probably.
myAuths="$( tail -100 <<< "${allAuths}")"
echo "Number of auths: $(awk 'NF' <<< "${myAuths}" | wc -l)"
for i in {1..10}
       do
        startTime=${EPOCHSECONDS}
        # create the empty acl template
        blcli_execute BlAclTemplate createAclTemplate Template1 Template1
         # loop through the list of authorizations i pulled and add them to the template
        while read i
                do
                blcli_execute BlAclTemplate addTemplatePermission Template1 BLAdmins "${i}"
        done <<< "$(awk 'NF' <<< "${myAuths}")"
        endTime=${EPOCHSECONDS}
         # get the runtime.
        echo "addTemplatePermission RunTime=$((${endTime}-${startTime}))"
        blcli_execute BlAclTemplate deleteAclTemplateByName Template1

        startTime=${EPOCHSECONDS}
         # create the template, step through the underlying calls for addTemplatePermission
        blcli_execute BlAclTemplate createAclTemplate Template2 Template2
        blcli_execute BlAclTemplate findByName Template2
        blcli_execute Utility storeTargetObject template
        blcli_execute BlAclTemplate getTemplateBlAcl
        blcli_execute Utility setTargetObject
        blcli_execute Utility storeTargetObject blAcl

         # loop over the list of auths to add, add them to the acl object and do the update later.
        while read i
                do
                blcli_execute RBACRole getRoleIdByName BLAdmins
                blcli_execute Utility setTargetObject
                blcli_execute Utility storeTargetObject roleId
                blcli_execute Authorization getAuthorizationIdByName "${i}"
                blcli_execute Utility setTargetObject
                blcli_execute Utility storeTargetObject authId
                blcli_execute Utility setTargetObject
                blcli_execute BlAce createInstance NAMED_OBJECT=roleId NAMED_OBJECT=authId
                blcli_execute Utility setTargetObject
                blcli_execute Utility storeTargetObject blAce
                blcli_execute Utility setTargetObject blAcl
                blcli_execute BlAcl addAce NAMED_OBJECT=blAce
        done <<< "$(awk 'NF' <<< "${myAuths}")"
        blcli_execute Utility setTargetObject template
        blcli_execute BlAclTemplate update NAMED_OBJECT=template
        blcli_execute BlAclTemplate getDBKey
        endTime=${EPOCHSECONDS}
        echo "unreleased RunTime=$((${endTime}-${startTime}))"
        blcli_execute BlAclTemplate deleteAclTemplateByName Template2
done

 

Running the above shows (collapsed lines for space):

addTemplatePermission RunTime=15 unreleased RunTime=2

addTemplatePermission RunTime=8 unreleased RunTime=2

addTemplatePermission RunTime=8 unreleased RunTime=2

addTemplatePermission RunTime=9 unreleased RunTime=2

addTemplatePermission RunTime=8 unreleased RunTime=2

addTemplatePermission RunTime=7 unreleased RunTime=2

addTemplatePermission RunTime=8 unreleased RunTime=2

addTemplatePermission RunTime=8 unreleased RunTime=3

addTemplatePermission RunTime=8 unreleased RunTime=2

addTemplatePermission RunTime=8 unreleased RunTime=2

That is an average of 8 seconds for the loop of addTemplatePermission commands versus an average of 2 seconds for the set of unreleased commands.

 

To actually use the new method, your script will look something like:

blcli_execute BlAclTemplate createAclTemplate MyTemplate MyTemplate
blcli_execute BlAclTemplate findByName MyTemplate
blcli_execute Utility storeTargetObject template
blcli_execute BlAclTemplate getTemplateBlAcl
blcli_execute Utility setTargetObject
blcli_execute Utility storeTargetObject blAcl

while read auth role
     do
     blcli_execute RBACRole getRoleIdByName "${role}"
     blcli_execute Utility setTargetObject
     blcli_execute Utility storeTargetObject roleId
     blcli_execute Authorization getAuthorizationIdByName "${auth}"
     blcli_execute Utility setTargetObject
     blcli_execute Utility storeTargetObject authId
     blcli_execute Utility setTargetObject
     blcli_execute BlAce createInstance NAMED_OBJECT=roleId NAMED_OBJECT=authId
     blcli_execute Utility setTargetObject
     blcli_execute Utility storeTargetObject blAce
     blcli_execute Utility setTargetObject blAcl
     blcli_execute BlAcl addAce NAMED_OBJECT=blAce
done < /tmp/MyAuthList.txt
blcli_execute Utility setTargetObject template
blcli_execute BlAclTemplate update NAMED_OBJECT=template
blcli_execute BlAclTemplate getDBKey

 

Where /tmp/MyAuthList.txt has one entry per line in the format:

authname role
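For example (the authorization names here are illustrative placeholders; use whichever authorizations and roles you need):

BLPackage.Read BLAdmins
DeployJob.Execute BLAdmins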


Now that I can get the JobRunId from a running NSH Script Job, I can do some other fun exercises, like finding out if the NSH Job is running from within a Batch Job, and getting information about the parent Batch Job, like the jobRunId, name, group, etc. In the previous example, I was able to identify a NSH Job run by something unique about that run and match up the running jobs to find my jobRunId.  I can take a similar approach here by adding my NSH Script Job that finds its own jobRunId to my Batch Job, and then processing all of the running Batch Job runs to find which Batch Job and run is the parent of that NSH Script Job run.  Instead of processing all the running NSH Script Job runs for my role, I'll process only the Batch Job runs, then list the member job runs and see if any of them are my NSH Script Job run.

 

One unreleased blcli command that is useful here is BatchJobRun getMemberJobRunsByBatchJobRun.

 

The below will be appended to the existing NSH Script Job in the previous article.

 

# I already have the list of running jobs from the getAllJobRunProgress command.
#
while read i
     do
     echo "Processing job run id for batch job: ${i}"
     blcli_execute JobRun findById ${i}
     blcli_execute JobRun getJobType
     blcli_storeenv jobTypeId
     # look at just the batch jobs
     if [[ ${jobTypeId} -eq 200 ]]
          then
          echo "JobRunId: ${i} is a batch job"
          blcli_execute JobRun findById ${i}
          blcli_execute JobRun getJobKey
          blcli_storeenv batchJobKey
          # get the member runs of this batch job
          blcli_execute BatchJobRun getMemberJobRunsByBatchJobRun ${batchJobKey} ${i}
          blcli_execute JobRun getJobRunId
          blcli_execute Utility setTargetObject
          blcli_execute Utility listPrint
          blcli_storelocal memberJobRunIds
          # loop through the member job runs for this batch job and look for *my* job run id
          while read j
               do
               echo "Member: ${j}"
               if [[ ${myJobRunId} -eq ${j} ]]
                    then
                    # this batch job run (${i}) contains my job run id, so it is my parent
                    myBatchJobRunId=${i}
                    myBatchJobKey=${batchJobKey}
                    blcli_execute Job findByDBKey ${myBatchJobKey}
                    blcli_execute Job getName
                    blcli_storeenv jobName
                    blcli_execute Job getGroupId
                    blcli_storeenv jobGroupId
                    blcli_execute Group getQualifiedGroupName 5005 ${jobGroupId}
                    blcli_storeenv jobGroup
                    echo "MyBatchJob: ${jobGroup}/${jobName},${myBatchJobRunId},${myBatchJobKey}"
                    break 2
               fi
          done <<< "$(awk 'NF' <<< "${memberJobRunIds}")"
     fi
done <<< "$(awk 'NF' <<< "${jobRunIds}")"

Building off of the Get the NSH Script Job Key, Name and Group location from a running NSH Script Job post, another useful piece of information is the JobRunId, retrieved from inside the running NSH Script Job.

 

The first thought is to simply get the last JobRunKey from the JobKey I just retrieved.  That will probably work in most cases, but it is possible for multiple instances of a NSH Script Job to be running at the same time, and JobRun findLastRunKeyByJobKey may not return the RunKey of this run.  It might be an acceptable solution if you know there will only ever be a single instance of the Job running.

 

Let's instead have some more fun and figure out how we can be sure we are getting this run.  In the JobKey post, I found there are some UUIDs generated at run time for the execution of the script.  That seems interesting; however, as I found last time, those UUIDs don't seem to be something I can retrieve using the blcli.  But maybe I can use them in another way - I know that the generated UUID will be unique to this run.  I can show the UUID in the Job Run Log.  I can get the Job Run Log if I have the JobRunId/Key.  I don't have the Id or Key for this run (it's what I'm trying to get, of course), but I can get the Id/Key of all the running jobs in the environment.  If I have the JobRunId/Key for all of the running jobs, I can loop through each one and figure out if that run is this job by seeing if the UUIDs I'm echoing show up in the job run log.

Digging around in the unreleased blcli commands, I found a few promising commands: JobRun getAllJobRunProgress (which can be filtered by a role name, reducing what I'm processing), JobRunProgress getJobRunId (which can act on the getAllJobRunProgress command output), Utility getCurrentRole, and JobRun getLogItemsByJobRunId.

Here's the commented script:

logItemMax=30
# extract the UUIDs out of ${0}
myUUID="$(echo -n ${0} | sed "s/.*__//g;s/\.script.*//g")"
echo "myUUID: ${myUUID}"
# had to sleep for a bit for the log entry to get written
sleep 10
# get the current role id since I only need job runs that my role is running to find *this* job
blcli_execute Utility getCurrentRole
blcli_storeenv roleName
blcli_execute RBACRole findByName "${roleName}"
blcli_execute RBACRole getRoleId
blcli_storeenv roleId
# get all the running job ids
blcli_execute JobRun getAllJobRunProgress ${roleId}
blcli_execute JobRunProgress getJobRunId
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storeenv jobRunIds
echo "Found running JobRunIds for role ${roleName}: $(tr '\n' ' ' <<< ${jobRunIds})"
# look at all the job runs for this role and find *this* run by looking for the uuid in the log messages
while read i
     do
     echo "Processing job run id: ${i}"
     blcli_execute JobRun findById ${i}
     blcli_execute JobRun getJobType
     blcli_storeenv jobTypeId
     # only look at nsh script jobs
     if [[ ${jobTypeId} -eq 111 ]]
          then
          # if there's more than ~30 log items, it won't be *this* run, because this run only should have a few log entries by now, speeds up processing
          blcli_execute JobRun getLogItemCountByJobRun ${i}
          blcli_storeenv logItemCount
          echo "JobRunId: ${i} LogItems: ${logItemCount}"
          if [[ ${logItemCount} -lt ${logItemMax} ]]
               then
               blcli_execute JobRun getLogItemsByJobRunId ${i}
               blcli_storelocal logItems
               while read j
                    do
                    if  ( grep -q "${myUUID}" <<< "${j}" )
                         then
                         # run i, log line j has myuuid in it, must be me
                         echo "My JobRunId: ${i}"
                         myJobRunId="${i}"
                         break 2
                    fi
               done <<< "$(awk 'NF' <<< "${logItems}")"
          else
               echo "JobRunId: ${i} had: ${logItemCount} log items, too many"
          fi
     fi
done <<< "$(awk 'NF' <<< "${jobRunIds}" | sort -r)"
echo "My final JobRunId: ${myJobRunId}"

This works with both the "Execute the script separately against each host" (Type 1) and "Execute the script once, passing the host list as a parameter to the script" (Type 2) types of NSH Scripts.  For a Type 1 script, even though the UUIDs for each target instance of the script will be different, the same jobRunId is retrieved.


Sometimes it can be useful to find the JobKey, Job Name, and Job Group of a running NSH Job.  Watching the appserver log when I run a NSH Script Job, I can see that the appserver makes a temporary copy of the script file and executes it with NSH, and I see something like the below in the appserver log:

[09 Dec 2019 08:44:50,526] [WorkItem-Thread-48] [INFO] [BLAdmin:BLAdmins:] [NSHScript] __JobRun-2001505,4-2030319__ Started pid 31669: /opt/bmc/bladelogic/NSH/bin/nsh --norc -c /opt/bmc/bladelogic/NSH/tmp/application_server/scripts/job__b1db9f72-089d-49a3-a929-a98ca2931115/master_cc0669d6-d9da-4fd9-848f-7e820727e36d

After some investigation, it seems that those UUIDs are generated on the fly for each run, and they don't look very useful for getting the JobKey.  However, I know that when running a shell script, ${0} expands to the script being executed, so let's see if that gives me any more useful information about this temporary script copy.  Otherwise, I'll look at the environment variables set in the script execution environment next and see if there is anything useful there.

I create a NSH Script and Job with simply:

echo ${0}

and run it.  In the Job Run Log, I see:

 

Info 12/09/2019 08:54:35 /opt/bmc/bladelogic/NSH/tmp/application_server/scripts/job__5bc8428a-0b20-48e7-933d-48606e5e1b7f/0f820f7e-2243-4913-ad8d-c538ca658b01.script_DBKey-SJobKeyImpl-2001505-4_sleep1.nsh

That's pretty great: the JobKey is right there.  We just need some regex to extract the JobKey string out:

jobKey=$( echo -n ${0} | sed 's/^.*DBKey/DBKey/' | cut -d"_" -f1 | sed 's/-/:/;s/-/:/')
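Tracing the pipeline against the sample path from the Job Run Log above shows what each stage contributes:

# input (tail of ${0}):      ...script_DBKey-SJobKeyImpl-2001505-4_sleep1.nsh
# sed 's/^.*DBKey/DBKey/' -> DBKey-SJobKeyImpl-2001505-4_sleep1.nsh
# cut -d"_" -f1           -> DBKey-SJobKeyImpl-2001505-4
# sed 's/-/:/;s/-/:/'     -> DBKey:SJobKeyImpl:2001505-4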

I could probably combine that into a single awk or sed statement but for now it works.  No need to look into environment details or anything else.

 

Now that I have the JobKey, we can get the rest with some hopefully well-known unreleased blcli commands:

jobKey="$( echo -n $0 | sed 's/^.*DBKey/DBKey/' | cut -d"_" -f1 | sed 's/-/:/;s/-/:/')"
echo "JobKey: ${jobKey}"
blcli_execute Job getJobNameByDBKey ${jobKey}
blcli_storeenv jobName
blcli_execute Job findByDBKey ${jobKey}
blcli_execute Job getGroupId
blcli_storeenv jobGroupId
blcli_execute Group getQualifiedGroupName 5005 ${jobGroupId}
blcli_storeenv jobGroup
echo "Job: ${jobGroup}/${jobName}"

And there we go:

With a little shell and blcli knowledge, we were pretty quickly able to retrieve the JobKey, Name, and Group for a running NSH Script Job.


After running a Patching and then Remediation Job, BlPackages and Deploy Jobs are generated.  You might want to retrieve the generated Deploy Jobs for further manipulation.  This information shows up in the Remediation Job Run log:

We can use some unreleased blcli commands to get the log entries and scrape out the name of the job.  There are two cases to handle - since 8.8 there is an option in Patching Global Configuration named Remediation Settings: Using Single Deploy Job.  This determines whether the Remediation Job will generate a Deploy Job per generated BlPackage (the pre-8.8 behavior), or only a single Deploy Job that deploys all generated BlPackages to their respective servers (the 8.8+ default behavior).  Since either case may exist in 8.8+, we need to check for both.

 

We must start with the Patching Job Run Id - this can be obtained in a variety of ways that are beyond the scope of this article.  From the patching job run, we will find the remediation job run id, then pull and process the log entries from that run to get the deploy job info.

 

PATCHING_JOB="/Workspace/Patching Jobs/Windows 2016 and 2019"
typeset -a DEPLOY_JOB_KEYS
# Getting the last run of my patching job for this example:
blcli_execute PatchingJob getDBKeyByGroupAndName "${PATCHING_JOB%/*}" "${PATCHING_JOB##*/}"
blcli_storeenv PATCHING_JOB_KEY
blcli_execute JobRun findLastRunKeyByJobKey ${PATCHING_JOB_KEY}
blcli_storeenv PATCHING_JOB_RUN_KEY
blcli_execute JobRun jobRunKeyToJobRunId ${PATCHING_JOB_RUN_KEY}
blcli_storeenv PATCHING_JOB_RUN_ID
# get the patching job run child ids (one will be the remediation job id)
blcli_execute JobRun findPatchingJobChildrenJobsByRunKey ${PATCHING_JOB_RUN_ID}
blcli_execute JobRun getJobRunId
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storeenv PATCH_ANALYSIS_JOB_RUN_IDS
for JOB_RUN_ID in ${PATCH_ANALYSIS_JOB_RUN_IDS}
        do
        blcli_execute JobRun findById ${JOB_RUN_ID}
        blcli_execute JobRun getType
        blcli_storeenv JOB_RUN_TYPE_ID
        if [[ ${JOB_RUN_TYPE_ID} = 7033 ]]
                then
                break
        fi
done
# get the log entries for the remediation job
blcli_execute JobRun findById ${JOB_RUN_ID}
blcli_execute JobRun getJobKey
blcli_storeenv REMEDIATION_JOB_KEY
blcli_execute LogItem getLogItemsByJobRun ${REMEDIATION_JOB_KEY} ${JOB_RUN_ID}
blcli_execute Utility storeTargetObject logItems
blcli_execute Utility listLength
blcli_storeenv LIST_LENGTH
if [[ ${LIST_LENGTH} -gt 0 ]]
        then
        for i in {0..$((${LIST_LENGTH}-1))}
                do
                blcli_execute Utility setTargetObject logItems
                blcli_execute Utility listItemSelect ${i}
                blcli_execute Utility setTargetObject
                blcli_execute JobLogItem getMessage
                blcli_storeenv MESSAGE
                 # look for the < 8.8 case:
                if ( grep -q "Created deploy job" <<< "${MESSAGE}" )
                        then
                        DEPLOY_JOB_NAME="$(grep "Created deploy job" <<< "${MESSAGE}" | cut -f2- -d: | sed "s/^ Jobs//")"
                        echo "DEPLOY_JOB_NAME: ${DEPLOY_JOB_NAME}"
                        blcli_execute DeployJob getDBKeyByGroupAndName "${DEPLOY_JOB_NAME%/*}" "${DEPLOY_JOB_NAME##*/}"
                        blcli_storeenv DEPLOY_JOB_KEY
                        DEPLOY_JOB_KEYS+=${DEPLOY_JOB_KEY}
                fi
                 # look for the 8.8+ case
                if ( grep -q "Created Batch Job" <<< "${MESSAGE}" )
                        then
                        BATCH_JOB_NAME="$(grep "Created Batch Job" <<< "${MESSAGE}" | cut -f2- -d: | sed "s/^ Jobs//")"
                        echo "BATCH_JOB_NAME: ${BATCH_JOB_NAME}"
                        blcli_execute BatchJob getDBKeyByGroupAndName "${BATCH_JOB_NAME%/*}" "${BATCH_JOB_NAME##*/}"
                        blcli_storeenv batchJobKey
                        blcli_execute BatchJob findAllSubJobHeadersByBatchJobKey ${batchJobKey}
                        blcli_execute SJobHeader getDBKey
                        blcli_execute Utility setTargetObject
                        blcli_execute Utility listPrint
                        blcli_storeenv batchMembers
                        batchMembers="$(awk 'NF' <<< "${batchMembers}")"
                        while read i
                                do
                                blcli_execute Job findByDBKey ${i}
                                blcli_execute Job getName
                                blcli_storeenv deployJobName
                                blcli_execute Job getGroupId
                                blcli_storeenv deployJobGroupId
                                blcli_execute Group getQualifiedGroupName 5005 ${deployJobGroupId}
                                blcli_storeenv deployGroupPath
                                blcli_execute DeployJob getDBKeyByGroupAndName "${deployGroupPath}" "${deployJobName}"
                                blcli_storeenv jobKey
                                if [[ "${DEPLOY_JOB_KEYS/${jobKey}}" = "${DEPLOY_JOB_KEYS}" ]]
                                        then
                                        echo "DEPLOY_JOB: ${deployGroupPath}/${deployJobName}"
                                        echo "DEPLOY_JOB_KEY: ${jobKey}"
                                        DEPLOY_JOB_KEYS+="${jobKey}"
                                fi
                         done <<< "${batchMembers}"
                fi
        done
else
        echo "Could not find job logs for ${REMEDIATION_JOB_KEY}..."
        exit 1
fi
# print the full space-delimited list of deploy job keys (the bare DBKeys in the sample output below)
echo "${DEPLOY_JOB_KEYS}"

 

This will generate output like:

Single Job Mode:

BATCH_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019 batch deploy Windows 2016 and 2019@2019-09-29 14-41-25-125-0400
DEPLOY_JOB: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019-20011681 @ 2019-09-29 14-41-24-319-0400
DEPLOY_JOB_KEY: DBKey:SJobModelKeyImpl:2001176-1-2131104
DBKey:SJobModelKeyImpl:2001176-1-2131104

 

Multiple Deploy Jobs:

DEPLOY_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019-win16-894p2.example.com-20011681 @ 2019-09-29 14-51-40-404-0400
DEPLOY_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019-win19-894p2.example.com-20011681 @ 2019-09-29 14-51-42-179-0400
BATCH_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019 batch deploy Windows 2016 and 2019@2019-09-29 14-51-42-726-0400
DBKey:SJobModelKeyImpl:2001183-1-2131308 DBKey:SJobModelKeyImpl:2001189-1-2131320

 

Once you have the name and DBKey of the generated Deploy Job, you can then proceed to do whatever you need to do to/for/with these jobs.
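For example, here is a minimal sketch that executes each of the collected Deploy Jobs in sequence and waits for each to complete. It assumes the released Job executeJobAndWait blcli command is available in your version; adjust to taste:

# sketch only: run each collected Deploy Job and wait for it to finish
for DEPLOY_JOB_KEY in ${DEPLOY_JOB_KEYS}
        do
        echo "Executing deploy job: ${DEPLOY_JOB_KEY}"
        blcli_execute Job executeJobAndWait ${DEPLOY_JOB_KEY}
        blcli_storeenv JOB_RUN_KEY
        echo "Completed job run: ${JOB_RUN_KEY}"
done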


Happy to announce that the next patch is ready, with some key support enhancements:

 

  • You can now run OS hardening on Red Hat 8 servers. Patch 2 allows you to install the TrueSight Server Automation agent, Application Server, and NSH on RHEL 8.
  • The Yellowfin version shipped with TrueSight Server Automation Live Reporting has been upgraded from 8.0.1 to 8.0.2.

 

The following component templates for the Defense Information Systems Agency (DISA) policy have been updated: 

  • DISA on Red Hat Enterprise Linux 6 is updated to benchmark version 1 - Release 23 of July 26, 2019.
  • DISA on Red Hat Enterprise Linux 7 is updated to benchmark version 2 - Release 4 of July 26, 2019.
  • DISA on Windows Server 2008 R2 DC is updated to benchmark version 1 - Release 31 of July 26, 2019.
  • DISA on Windows Server 2008 R2 MS is updated to benchmark version 1 - Release 30 of July 26, 2019.
  • DISA on Windows Server 2012 DC (2012 and 2012 R2) is updated to benchmark version 2 - Release 17 of July 26, 2019.
  • DISA on Windows Server 2012 MS (2012 and 2012 R2) is updated to benchmark version 2 - Release 16 of July 26, 2019.
  • DISA on Windows Server 2016 is updated to benchmark version 1 - Release 9 of July 26, 2019.

 

For more information on how to download and upgrade to the latest patch, please visit the page:

Version 8.9.04.002: Patch 2 for version 8.9.04.


This question has come up a number of times, and I decided to spend some time looking into it. The goal is to be able to leverage a RedHat Satellite server as the source of a RedHat Patch Catalog in TrueSight Server Automation. Standard disclaimers apply: this is not currently supported, may not work for you, may stop working in the future, etc.

 

The below assumes some familiarity with Satellite and should work for Satellite version 6.x.  I did this with Satellite 6.5.

 

On the Satellite server, create or edit the file /etc/pulp/server/plugins.conf.d/yum_distributor.json, as noted in this Foreman bug, and add the following:

{
  "generate_sqlite": true
}

This is needed because BSA requires the metadata in sqlite format, which is not the default for Satellite. After making this change, you must restart the Pulp worker services. The next synchronization of your repositories should then include the sqlite metadata. If it does not, you can forcefully regenerate the metadata of a Content View.
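For example, on Satellite 6.5 the Pulp worker services can be restarted like this (service names may differ on other Satellite versions; katello-service restart is the heavier-handed alternative):

# restart the Pulp workers so the yum_distributor change takes effect
systemctl restart pulp_workers pulp_celerybeat pulp_resource_manager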

 

To verify you have generated the sqlite metadata, run the below command on the Satellite server after the synchronization completes:

find /var/lib/pulp/published/yum/master/yum_distributor -iname "*sqlite.bz2" -exec ls -la {} \;

-rw-r--r--. 1 apache apache 13938 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/7f8c6bce5464871dd00ed0e0ed25e55fd460abb255ab0aa093a79529bb86cbc2-primary.sqlite.bz2
-rw-r--r--. 1 apache apache 155449 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/cff3aeccd7f3ff871f72c5829ed93720e0f273d1206ee56c66fa8f6ee1d2e486-filelists.sqlite.bz2
-rw-r--r--. 1 apache apache 40915 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/eb5044ef0c9e47dab11b242522890cfe6fbb6cf1942f14757af440ec54c9027f-other.sqlite.bz2
[...]

 

Subscribe the system that stores the RedHat Catalog for TSSA to your Satellite server; this provides the certificates TSSA uses during the catalog synchronization process.
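A typical registration sequence looks like the below; the organization and activation key names here are placeholders for illustration:

# install the Satellite CA certificate package, then register the catalog server
rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
subscription-manager register --org="Example_Org" --activationkey="tssa-catalog"

The entitlement certificate and key will then appear under /etc/pki/entitlement/.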

 

In the Patch Global Configuration, or in your offline downloader configuration file, you will reference these certificates:

SSL CA Cert: /etc/pki/ca-trust/source/anchors/katello-server-ca.pem
SSL Client Cert: /etc/pki/entitlement/<numbers>.pem
SSL Client Key: /etc/pki/entitlement/<numbers>-key.pem

Note that the SSL CA Cert is different from the one used when synchronizing directly with RedHat.

 

You will need to update the RedHat Channel Filters List File (online catalog) or the offline downloader configuration file (offline catalog) with the URLs and other information for the Satellite-provided channels you will use in your catalog. The URLs will look something like:

https://satellite.example.com/pulp/repos/Example/Library/View1/content/dist/rhel/server/7/7Server/x86_64/os

 

The format of the URL is https://<satellite server>/pulp/repos/<organization>/<lifecycle environment>/<content view>/<content path>. An easy way to determine the URLs is to run the rct cat-cert command against the SSL Client Cert:

 

rct cat-cert /etc/pki/entitlement/3591963669563311224.pem
[...]
Content:
Type: yum
Name: Red Hat Enterprise Linux 7 Server (RPMs)
Label: rhel-7-server-rpms
Vendor: Red Hat
URL: /Example/Library/View1/content/dist/rhel/server/7/$releasever/$basearch/os
GPG: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Enabled: True
Expires: 1
Required Tags: rhel-7-server
Arches: x86_64

 

Another way is to inspect the output of the subscription-manager repos --list command (which only shows the repos applicable to the OS of the catalog server):

# subscription-manager repos --list
+----------------------------------------------------------+
    Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID:   rhel-7-server-rpms
Repo Name: Red Hat Enterprise Linux 7 Server (RPMs)
Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/$releasever/$basearch/os
Enabled:   1

Repo ID:   rhel-7-server-optional-rpms
Repo Name: Red Hat Enterprise Linux 7 Server - Optional (RPMs)
Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/$releasever/$basearch/optional/os
Enabled:   0

Repo ID:   rhel-7-server-satellite-tools-6.5-rpms
Repo Name: Red Hat Satellite Tools 6.5 (for RHEL 7 Server) (RPMs)
Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/$basearch/sat-tools/6.5/os
Enabled:   0

 

Once you have the URLs and other information for the channels you want to build your catalog from, update the RedHat Channel Filters List File (in Patch Global Configuration) or the offline downloader configuration file accordingly.

 

Example RedHat Filters file snippet for an online catalog:

[...]

   <redhat-channel use-reposync="true">

        <channel-name>RHEL 7 Optional RPMs from Satellite</channel-name>

        <channel-label>rhel-7-server-optional-rpms-satellite</channel-label>

        <channel-os>RHES7</channel-os>

        <channel-arch>x86_64</channel-arch>

        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/optional/os</channel-url>

    </redhat-channel>

    <redhat-channel use-reposync="true" is-parent="true">

        <channel-name>RHEL 6 RPMs from Satellite</channel-name>

        <channel-label>rhel-6-server-rpms-satellite</channel-label>

        <channel-os>RHES6</channel-os>

        <channel-arch>x86_64</channel-arch>

        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/os</channel-url>

    </redhat-channel>

    <redhat-channel use-reposync="true">

        <channel-name>RHEL 6 Optional RPMs from Satellite</channel-name>

        <channel-label>rhel-6-server-optional-rpms-satellite</channel-label>

        <channel-os>RHES6</channel-os>

        <channel-arch>x86_64</channel-arch>

        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/optional/os</channel-url>

    </redhat-channel>

[...]

This adds the rhel-7-server-rpms and rhel-6-server-rpms channels as parents (via the is-parent="true" attribute; the RHEL 7 parent entry falls in the elided portion above) and the optional channels as children.

 

 

Example Offline Downloader configuration file snippet:

       <redhat-cert cert-arch="x86_64">

                <caCert>/etc/pki/ca-trust/source/anchors/katello-server-ca.pem</caCert>

                <clientCert>/etc/pki/entitlement/2717125327657143845.pem</clientCert>

                <clientKey>/etc/pki/entitlement/2717125327657143845-key.pem</clientKey>

        </redhat-cert>

       

        <errata-type-filter>

                        <os>RHES7</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-7-server-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/os</channel-url>

                        <errata-severity>

                                <critical>true</critical>

                                <important>true</important>

                                <moderate>true</moderate>

                                <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                        </errata-type>

        </errata-type-filter>

        <errata-type-filter>

                        <os>RHES7</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-7-server-optional-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/optional/os</channel-url>

                        <errata-severity>

                               <critical>true</critical>

                               <important>true</important>

                               <moderate>true</moderate>

                               <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                       </errata-type>

        </errata-type-filter>

        <errata-type-filter>

                        <os>RHES6</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-6-server-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/os</channel-url>

                        <errata-severity>

                               <critical>true</critical>

                               <important>true</important>

                               <moderate>true</moderate>

                               <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                        </errata-type>

        </errata-type-filter>

              <errata-type-filter>

                        <os>RHES6</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-6-server-optional-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/optional/os</channel-url>

                        <errata-severity>

                               <critical>true</critical>

                               <important>true</important>

                               <moderate>true</moderate>

                               <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                        </errata-type>

        </errata-type-filter>

 

 

At this point, using the online RedHat Patch Catalog or offline downloader is the same as synchronizing with RedHat directly.  Finish the catalog creation and run the Catalog Update Job.
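If the Catalog Update Job has trouble reaching the repositories, a quick sanity check from the catalog server is to fetch the repository metadata with curl using the same certificates (reusing the example paths and URL from above):

curl --cacert /etc/pki/ca-trust/source/anchors/katello-server-ca.pem \
     --cert /etc/pki/entitlement/2717125327657143845.pem \
     --key /etc/pki/entitlement/2717125327657143845-key.pem \
     https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/os/repodata/repomd.xml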

 

 

 

A few references were helpful in setting all of this up (some may require a RedHat Support account to access):

Installing Satellite Server from a Connected Network

How to forcefully regenerate metadata of a content view or repository on Red Hat Satellite 6

How do I register a system to Red Hat Satellite 6 server

How to change download policy of repositories in Red Hat Satellite 6
