
TrueSight Server Automation


On January 7th, 2020, there was a change to the URL used by TrueSight Server Automation (TSSA), versions 8.9 Service Pack 3 (SP3) and above, to retrieve the XML metadata file from Ivanti.  A Windows Catalog Update Job may now fail with the following error:

Error 01/17/2020 12:07:05 Validation Error :- BLPAT1004 - Http Url is not accessible.
Possible remediation steps :- Please check the proxy login credentials
Please check the vendor url
Please check network access to vendor url
https://content.ivanti.com/data/oem/BMC-Bladelogic/data/partner.manifest.xml

 

 

Note the URL in the error message, “https://content.ivanti.com/data/oem/BMC-Bladelogic/data/partner.manifest.xml”.  This URL is no longer valid.  In support of the newer Shavlik SDK version 9.3, the URL has moved to a new location.  In some instances the old URL may still work, but it will eventually be fully disabled.  Ivanti now provides the metadata file at this URL:

https://content.ivanti.com/data/oem/BMC-Bladelogic/data/93/manifest/partner.manifest.xml
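To confirm the new location is reachable from the Application Server, a quick connectivity check with curl might look like the following (add your proxy options, for example -x http://proxy:port, if a proxy is required):

# -f fails on HTTP errors, -sS silences progress but still shows errors, -I requests headers only
curl -fsSI https://content.ivanti.com/data/oem/BMC-Bladelogic/data/93/manifest/partner.manifest.xml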

 

In TSSA the URL location to partner.manifest.xml should be updated within the Patch Global Configuration’s Shavlik URL Configuration tab.

(Screenshot: Patch Global Configuration, Shavlik URL Configuration tab)

 

 

See BMC Knowledge Article 000181848 for further detailed steps to update the Patch Global Configuration.

When using the Windows off-line downloader utility, the patch-psu.properties file within the resources sub-directory will need to be modified.  Update the “windows.shavlik.manifest_url” parameter to reflect the new URL location, so it will look like:

 

windows.shavlik.manifest_url = https://content.ivanti.com/data/oem/BMC-Bladelogic/data/93/manifest/partner.manifest.xml
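If you script your offline downloader maintenance, the property can also be updated in place with sed; the relative path below is illustrative and assumes you run it from the downloader's install directory:

# replace the whole windows.shavlik.manifest_url line with the new Ivanti location
sed -i 's|^windows.shavlik.manifest_url.*|windows.shavlik.manifest_url = https://content.ivanti.com/data/oem/BMC-Bladelogic/data/93/manifest/partner.manifest.xml|' resources/patch-psu.properties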

 

 

As of September 30, 2019, Windows Patch Analysis is no longer supported for BladeLogic Server Automation (BSA) version 8.9 Service Pack 2 (SP2) or lower.  An upgrade is required to use this feature again, as per the earlier notification here:

https://docs.bmc.com/docs/tssa89/notification-of-action-needed-for-users-of-truesight-server-automation-microsoft-windows-patching-815403716.html

 

 

For continued status information on Patch Analysis functionality, follow the “OS Patching Vendor Health Dashboard” in the BMC Communities.

Also be sure to subscribe to the TrueSight Automation Video Channel for related videos and notifications as new video content is published.


Hello Everyone,

 

Windows Patch Management is one of the most heavily used features of TrueSight Server Automation (TSSA) and involves interaction between TSSA, Ivanti Technologies (previously known as Shavlik), and the underlying Microsoft Windows operating system and patches.

 

The most common TSSA Windows Patch Management support cases we receive tend to fall into one of the following categories:

 

  1. A Windows Patch is believed to be installed on a target server but is reported as missing by the TSSA Patch Analysis job.
  2. A Windows Patch is not installed on a target server but is not reported as missing by the TSSA Patch Analysis job.
  3. How to add new products to the list of filters available in a TSSA Windows Patch Catalog.
  4. TSSA Windows Patch Analysis issues related to target server reboots.
  5. How Microsoft Servicing Stack Updates (SSUs) can affect TSSA Windows Patch Analysis results.
  6. In May 2019, Windows Catalog Updates began failing due to changes in how Oracle JAVA patches are distributed.

 

The TSSA Customer Support team has been working on enhancing our knowledge base articles and video content in these areas and, in this month’s blog, we wanted to highlight the most useful, and frequently used, knowledge articles and videos for each of the problem categories listed above.

 

 

1) False Positives – Windows patch is believed to be installed but reported as Missing by TSSA Patch Analysis

 

 

Knowledge Articles:

 

000099904 TSSA/BSA: Windows Patch Troubleshooting - A Windows Hotfix is reported as missing by TSSA Patch Analysis but is believed to be installed

 

000090870 TSSA/BSA: Windows Patch Troubleshooting - Patch Deploy Job appears to have succeeded but TSSA Patch Analysis still reports the patch as missing

 

Video:

(embedded video)

2) False Negatives – Windows patch is believed to be missing but is not reported as Missing by TSSA Patch Analysis

 

Knowledge Article:

 

000095840 TSSA/BSA: Windows Patch Troubleshooting - A Windows Hotfix is not installed on a target server but is not reported as missing by TSSA Patch Analysis

 

 

3) How to add additional products to the list of available filters in a TSSA Windows Patch Catalog

(This topic can also surface as a “No mappings were found for the selected product” warning when running a Windows Patch Catalog)

 

Knowledge Articles:

 

000166476 BSA/TSSA: How to add a new product to the list of available filters in a TSSA Windows Patch Catalog?

000130493 BSA/TSSA: Windows Catalog update job displays "No mappings were found for the selected product" warning in Job Run Log

 

Video:

(embedded video)

4) TSSA Windows Patch Analysis issues related to target server reboots

 

Knowledge Articles:

 

000081738 TSSA/BSA: Windows Deploy Job fails to reboot, reports "Reboot required but did not occur. Manual reboot needed to complete operation, exitCode = -4003"

 

000145107 TSSA/BSA: Reboot is pending on this machine, analysis results may be incorrect

 

 

 

5) How Microsoft Servicing Stack Updates (SSUs) can affect TSSA Windows Patch Analysis results

 

 

Knowledge Article:

 

000167748 TSSA/BSA: How can Microsoft Servicing Stack Updates (SSUs) affect TSSA Windows Patch Analysis results?

 

 

6) In May 2019, Windows Catalog Updates began failing due to changes in how Oracle JAVA patches are distributed.

 

Knowledge Article:

 

000168397 TSSA/BSA: Beginning May 2019 - Windows Catalog Updates failing due to problems downloading Oracle Java patches

 

 

 

Troubleshooting TSSA Windows Patch Analysis cases will often require downloading and running the Ivanti DPD Trace tool to gather more detailed information. The following video demonstrates how to download and run the Ivanti DPD Trace tool:

(embedded video)

Knowledge Article 000096560 describes how to analyze the Trace.txt and shavlik_results.xml files generated by a TSSA Patch Analysis Job.

 

 

Finally, please make sure to subscribe to the TrueSight Automation Video Channel to find more useful videos and to be notified of new video content as it is published. If you are particularly interested in TSSA Windows Patch Management videos, we have a playlist specifically for this feature.


Adding Authorizations to an Acl Template is typically accomplished by running a series of blcli_execute BlAclTemplate addTemplatePermission commands.  Some brief profiling shows that adding 100 Authorizations to a newly created Acl Template takes about 10 seconds.  That's not too bad, but I wonder if it could be faster.  If we look in the unreleased blcli commands documentation for the addTemplatePermission command we can see what it runs:
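Roughly, the sequence behind addTemplatePermission, reconstructed here from the same unreleased commands used in the script later in this post (placeholders in angle brackets), is:

BlAclTemplate findByName <templateName>
Utility storeTargetObject template
BlAclTemplate getTemplateBlAcl
Utility setTargetObject
Utility storeTargetObject blAcl
RBACRole getRoleIdByName <roleName>
Utility setTargetObject
Utility storeTargetObject roleId
Authorization getAuthorizationIdByName <authorizationName>
Utility setTargetObject
Utility storeTargetObject authId
Utility setTargetObject
BlAce createInstance NAMED_OBJECT=roleId NAMED_OBJECT=authId
Utility setTargetObject
Utility storeTargetObject blAce
Utility setTargetObject blAcl
BlAcl addAce NAMED_OBJECT=blAce
Utility setTargetObject template
BlAclTemplate update NAMED_OBJECT=template
BlAclTemplate getDBKey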

Reading through the sequence of commands: it loads the acl object from the template (the thing that contains the list of role:authorization entries), creates a new acl entry (blAce) containing the new role and authorization, and then updates the acl object with this acl entry.  Finally, the acl template object is updated with the newly updated acl list.  It's likely that instead of immediately running the template update, we can keep adding more and more acl entries to the acl object and do a single update at the end.

 

We want to compare run times against the current method of adding acls, so we should build a script that does both and compare.  I'll grab a subset of the entire list of authorizations for this test, then run each method and note the runtime.

 

#!/bin/nsh
# load the date/time module so we can use $EPOCHSECONDS
zmodload zsh/datetime
blcli_setjvmoption -Dcom.bladelogic.cli.execute.quietmode.enabled=true
blcli_setoption serviceProfileName defaultProfile
blcli_setoption roleName BLAdmins
# get just the system authorizations for the test
blcli_execute Authorization findAllByType 1
blcli_execute Authorization getName
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal allAuths
# grab 100, for further profiling get more.  for a real run you'd be reading the list from a file probably.
myAuths="$( tail -100 <<< "${allAuths}")"
echo "Number of auths: $(awk 'NF' <<< "${myAuths}" | wc -l)"
for i in {1..10}
        do
        startTime=${EPOCHSECONDS}
        # create the empty acl template
        blcli_execute BlAclTemplate createAclTemplate Template1 Template1
        # loop through the list of authorizations pulled above and add them to the template
        while read auth
                do
                blcli_execute BlAclTemplate addTemplatePermission Template1 BLAdmins "${auth}"
        done <<< "$(awk 'NF' <<< "${myAuths}")"
        endTime=${EPOCHSECONDS}
        # get the runtime
        echo "addTemplatePermission RunTime=$((${endTime}-${startTime}))"
        blcli_execute BlAclTemplate deleteAclTemplateByName Template1

        startTime=${EPOCHSECONDS}
        # create the template, step through the underlying calls for addTemplatePermission
        blcli_execute BlAclTemplate createAclTemplate Template2 Template2
        blcli_execute BlAclTemplate findByName Template2
        blcli_execute Utility storeTargetObject template
        blcli_execute BlAclTemplate getTemplateBlAcl
        blcli_execute Utility setTargetObject
        blcli_execute Utility storeTargetObject blAcl

        # loop over the list of auths to add, add them to the acl object and do the update later
        while read auth
                do
                blcli_execute RBACRole getRoleIdByName BLAdmins
                blcli_execute Utility setTargetObject
                blcli_execute Utility storeTargetObject roleId
                blcli_execute Authorization getAuthorizationIdByName "${auth}"
                blcli_execute Utility setTargetObject
                blcli_execute Utility storeTargetObject authId
                blcli_execute Utility setTargetObject
                blcli_execute BlAce createInstance NAMED_OBJECT=roleId NAMED_OBJECT=authId
                blcli_execute Utility setTargetObject
                blcli_execute Utility storeTargetObject blAce
                blcli_execute Utility setTargetObject blAcl
                blcli_execute BlAcl addAce NAMED_OBJECT=blAce
        done <<< "$(awk 'NF' <<< "${myAuths}")"
        blcli_execute Utility setTargetObject template
        blcli_execute BlAclTemplate update NAMED_OBJECT=template
        blcli_execute BlAclTemplate getDBKey
        endTime=${EPOCHSECONDS}
        echo "unreleased RunTime=$((${endTime}-${startTime}))"
        blcli_execute BlAclTemplate deleteAclTemplateByName Template2
done

 

Running the above shows (collapsed lines for space):

addTemplatePermission RunTime=15 unreleased RunTime=2
addTemplatePermission RunTime=8 unreleased RunTime=2
addTemplatePermission RunTime=8 unreleased RunTime=2
addTemplatePermission RunTime=9 unreleased RunTime=2
addTemplatePermission RunTime=8 unreleased RunTime=2
addTemplatePermission RunTime=7 unreleased RunTime=2
addTemplatePermission RunTime=8 unreleased RunTime=2
addTemplatePermission RunTime=8 unreleased RunTime=3
addTemplatePermission RunTime=8 unreleased RunTime=2
addTemplatePermission RunTime=8 unreleased RunTime=2

That's an average of 8 seconds for the loop of addTemplatePermission commands, versus an average of 2 seconds for the equivalent set of unreleased commands.

 

To actually use the new method, your script will look something like:

blcli_execute BlAclTemplate createAclTemplate MyTemplate MyTemplate
blcli_execute BlAclTemplate findByName MyTemplate
blcli_execute Utility storeTargetObject template
blcli_execute BlAclTemplate getTemplateBlAcl
blcli_execute Utility setTargetObject
blcli_execute Utility storeTargetObject blAcl

while read auth role
     do
     blcli_execute RBACRole getRoleIdByName "${role}"
     blcli_execute Utility setTargetObject
     blcli_execute Utility storeTargetObject roleId
     blcli_execute Authorization getAuthorizationIdByName "${auth}"
     blcli_execute Utility setTargetObject
     blcli_execute Utility storeTargetObject authId
     blcli_execute Utility setTargetObject
     blcli_execute BlAce createInstance NAMED_OBJECT=roleId NAMED_OBJECT=authId
     blcli_execute Utility setTargetObject
     blcli_execute Utility storeTargetObject blAce
     blcli_execute Utility setTargetObject blAcl
     blcli_execute BlAcl addAce NAMED_OBJECT=blAce
done < /tmp/MyAuthList.txt
blcli_execute Utility setTargetObject template
blcli_execute BlAclTemplate update NAMED_OBJECT=template
blcli_execute BlAclTemplate getDBKey

 

Where each line of /tmp/MyAuthList.txt contains an authorization name followed by the role name (matching the read auth role loop above):

authname rolename
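For example, a file granting a few authorizations to the BLAdmins role might look like this (the authorization names are illustrative):

BLPackage.Read BLAdmins
Server.Read BLAdmins
DeployJob.Read BLAdmins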


Now that I can Get the JobRunId from a running NSH Script Job, I can do some other fun exercises, like finding out if the NSH Job is running from within a Batch Job, and getting information about the parent Batch Job like the jobRunId, name, group, etc. In the previous example, I was able to identify an NSH Job run by something unique about that run and match up the running jobs to find my jobRunId.  I can take a similar approach here: add my NSH Script Job that finds its own jobRunId to my Batch Job, and then process all of the running Batch Job runs to find which Batch Job and run is the parent of that NSH Script Job run.  Instead of processing all the running NSH Script Job runs for my role, I'll process only the Batch Job runs, then list the member job runs and see if any of them are my NSH Script Job run.

 

One unreleased blcli command that is useful here is BatchJobRun getMemberJobRunsByBatchJobRun.

 

The below will be appended to the existing NSH Script Job from the previous article.

 

# I already have the list of running jobs from the getAllJobRunProgress command.
#
while read i
     do
     echo "Processing job run id for batch job: ${i}"
     blcli_execute JobRun findById ${i}
     blcli_execute JobRun getJobType
     blcli_storeenv jobTypeId
     # look at just the batch jobs
     if [[ ${jobTypeId} -eq 200 ]]
          then
          echo "JobRunId: ${i} is a batch job"
          blcli_execute JobRun findById ${i}
          blcli_execute JobRun getJobKey
          blcli_storeenv batchJobKey
          # get the member runs of this batch job
          blcli_execute BatchJobRun getMemberJobRunsByBatchJobRun ${batchJobKey} ${i}
          blcli_execute JobRun getJobRunId
          blcli_execute Utility setTargetObject
          blcli_execute Utility listPrint
          blcli_storelocal memberJobRunIds
          # loop through the member job runs for this batch job and look for *my* job run id
          while read j
               do
               echo "Member: ${j}"
               if [[ ${myJobRunId} -eq ${j} ]]
                    then
                    # found it: ${i} is the parent batch job's run id
                    myBatchJobRunId=${i}
                    myBatchJobKey=${batchJobKey}
                    blcli_execute Job findByDBKey ${myBatchJobKey}
                    blcli_execute Job getName
                    blcli_storeenv jobName
                    blcli_execute Job getGroupId
                    blcli_storeenv jobGroupId
                    blcli_execute Group getQualifiedGroupName 5005 ${jobGroupId}
                    blcli_storeenv jobGroup
                    echo "MyBatchJob: ${jobGroup}/${jobName},${myBatchJobRunId},${myBatchJobKey}"
                    break 2
               fi
          done <<< "$(awk 'NF' <<< "${memberJobRunIds}")"
     fi
done <<< "$(awk 'NF' <<< "${jobRunIds}")"

Building off of the Get the NSH Script Job Key, Name and Group location from a running NSH Script Job post, another useful piece of information is to get the JobRunId from inside the running NSH Script Job.

 

The first thought is to simply get the last JobRunKey from the JobKey I just retrieved.  That will probably work in most cases, but it is possible for multiple instances of an NSH Script Job to be running at the same time, and JobRun findLastRunKeyByJobKey may not return the RunKey of this run.  That could be a solution if you know there will only ever be a single instance of the Job running.

 

Let's instead have some more fun and figure out how we can be sure we are getting this run.  In the JobKey post, I found there are some UUIDs generated at run time for the execution of the script.  That seems interesting; however, as I found last time, those UUIDs don't seem to be something I can retrieve using the blcli.  But maybe I can use them in another way: I know that the generated UUID will be unique to this run, and I can show the UUID in the Job Run Log.  I can get the Job Run Log if I have the JobRunId/Key.  I don't have the Id or Key for this run (it's what I'm trying to get, of course), but I can get the Id/Key of all the running jobs in the environment.  With the JobRunId/Key for all of the running jobs, I can loop through each one and figure out if that run is this job, by seeing if the UUIDs I'm echoing show up in the job run log.
Digging around in the unreleased blcli commands, I found a few promising ones: JobRun getAllJobRunProgress (which can be filtered by a role, reducing what I'm processing), JobRunProgress getJobRunId (which can act on the getAllJobRunProgress command output), Utility getCurrentRole, and JobRun getLogItemsByJobRunId.

Here's the commented script:

logItemMax=30
# extract the UUIDs out of ${0}
myUUID="$(echo -n ${0} | sed "s/.*__//g;s/\.script.*//g")"
echo "myUUID: ${myUUID}"
# had to sleep for a bit for the log entry to get written
sleep 10
# get the current role id since I only need job runs that my role is running to find *this* job
blcli_execute Utility getCurrentRole
blcli_storeenv roleName
blcli_execute RBACRole findByName "${roleName}"
blcli_execute RBACRole getRoleId
blcli_storeenv roleId
# get all the running job ids
blcli_execute JobRun getAllJobRunProgress ${roleId}
blcli_execute JobRunProgress getJobRunId
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storeenv jobRunIds
echo "Found running JobRunIds for role ${roleName}: $(tr '\n' ' ' <<< ${jobRunIds})"
# look at all the job runs for this role and find *this* run by looking for the uuid in the log messages
while read i
     do
     echo "Processing job run id: ${i}"
     blcli_execute JobRun findById ${i}
     blcli_execute JobRun getJobType
     blcli_storeenv jobTypeId
     # only look at nsh script jobs
     if [[ ${jobTypeId} -eq 111 ]]
          then
          # if there's more than ~30 log items, it won't be *this* run, because this run only should have a few log entries by now, speeds up processing
          blcli_execute JobRun getLogItemCountByJobRun ${i}
          blcli_storeenv logItemCount
          echo "JobRunId: ${i} LogItems: ${logItemCount}"
          if [[ ${logItemCount} -lt ${logItemMax} ]]
               then
               blcli_execute JobRun getLogItemsByJobRunId ${i}
               blcli_storelocal logItems
               while read j
                    do
                    if  ( grep -q "${myUUID}" <<< "${j}" )
                         then
                         # run i, log line j has myuuid in it, must be me
                         echo "My JobRunId: ${i}"
                         myJobRunId="${i}"
                         break 2
                    fi
               done <<< "$(awk 'NF' <<< "${logItems}")"
          else
               echo "JobRunId: ${i} had: ${logItemCount} log items, too many"
          fi
     fi
done <<< "$(awk 'NF' <<< "${jobRunIds}" | sort -r)"
echo "My final JobRunId: ${myJobRunId}"

This works with both the “Execute the script separately against each host” (Type 1) and the “Execute the script once, passing the host list as a parameter to the script” (Type 2) types of NSH Scripts.  For a Type 1 script, even though the UUIDs for each target instance of the script will be different, the same jobRunId is retrieved.


Sometimes it can be useful to find the JobKey, Job Name and Job Group of a running NSH Job.  Watching the appserver log when I run an NSH Script Job, I can see that the appserver makes a temporary copy of the script file and executes it with NSH, and I see something like the below in the appserver log:

[09 Dec 2019 08:44:50,526] [WorkItem-Thread-48] [INFO] [BLAdmin:BLAdmins:] [NSHScript] __JobRun-2001505,4-2030319__ Started pid 31669: /opt/bmc/bladelogic/NSH/bin/nsh --norc -c /opt/bmc/bladelogic/NSH/tmp/application_server/scripts/job__b1db9f72-089d-49a3-a929-a98ca2931115/master_cc0669d6-d9da-4fd9-848f-7e820727e36d

After some investigation it seems that those UUIDs are generated on the fly for each run, and they don't look very useful for getting the jobkey.  However, I know that when running a shell script, ${0} expands to the script being executed, so let's see if that gives me any more information about this temporary script copy that might be useful; otherwise, I'll look at the environment variables set in the script execution environment next and see if there is anything useful there.

I create a NSH Script and Job with simply:

echo ${0}

and run it.  In the Job Run Log, I see:

 

Info 12/09/2019 08:54:35 /opt/bmc/bladelogic/NSH/tmp/application_server/scripts/job__5bc8428a-0b20-48e7-933d-48606e5e1b7f/0f820f7e-2243-4913-ad8d-c538ca658b01.script_DBKey-SJobKeyImpl-2001505-4_sleep1.nsh

That's pretty great: the JobKey is right there.  We just need some regex to extract the JobKey string:

jobKey=$( echo -n ${0} | sed 's/^.*DBKey/DBKey/' | cut -d"_" -f1 | sed 's/-/:/;s/-/:/')

I could probably combine that into a single awk or sed statement but for now it works.  No need to look into environment details or anything else.
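To illustrate, feeding the ${0} value from the Job Run Log above through the pipeline:

echo -n "/opt/bmc/bladelogic/NSH/tmp/application_server/scripts/job__5bc8428a-0b20-48e7-933d-48606e5e1b7f/0f820f7e-2243-4913-ad8d-c538ca658b01.script_DBKey-SJobKeyImpl-2001505-4_sleep1.nsh" | sed 's/^.*DBKey/DBKey/' | cut -d"_" -f1 | sed 's/-/:/;s/-/:/'
DBKey:SJobKeyImpl:2001505-4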

 

Now that I have the JobKey, we can get the rest with some hopefully well-known unreleased blcli commands:

jobKey="$( echo -n $0 | sed 's/^.*DBKey/DBKey/' | cut -d"_" -f1 | sed 's/-/:/;s/-/:/')"
echo "JobKey: ${jobKey}"
blcli_execute Job getJobNameByDBKey ${jobKey}
blcli_storeenv jobName
blcli_execute Job findByDBKey ${jobKey}
blcli_execute Job getGroupId
blcli_storeenv jobGroupId
blcli_execute Group getQualifiedGroupName 5005 ${jobGroupId}
blcli_storeenv jobGroup
echo "Job: ${jobGroup}/${jobName}"

And there we go: with a little shell and blcli knowledge, we were quickly able to retrieve the JobKey, Name and Group for a running NSH Script Job.


After running a Patching Job and then a Remediation Job, BlPackages and Deploy Jobs are generated.  You might want to retrieve the generated Deploy Jobs for further manipulation.  This information shows up in the Remediation Job Run log.

We can use some unreleased blcli commands to get the log entries and scrape the names of the jobs.  There are two cases to handle: since 8.8 there is an option in Patching Global Configuration named Remediation Settings: Using Single Deploy Job.  This determines whether the Remediation Job will generate a Deploy Job per generated BlPackage (the pre-8.8 behavior), or only a single Deploy Job that deploys all generated BlPackages to their respective servers (the 8.8+ default behavior).  Since either case may exist in 8.8+, we need to check for both.

 

We must start with the Patching Job Run Id; this can be obtained in a variety of ways that are beyond the scope of this article.  From the patching job run, we will find the remediation job run id, then pull and process the log entries from that run to get the deploy job info.

 

PATCHING_JOB="/Workspace/Patching Jobs/Windows 2016 and 2019"
typeset -a DEPLOY_JOB_KEYS
# Getting the last run of my patching job for this example:
blcli_execute PatchingJob getDBKeyByGroupAndName "${PATCHING_JOB%/*}" "${PATCHING_JOB##*/}"
blcli_storeenv PATCHING_JOB_KEY
blcli_execute JobRun findLastRunKeyByJobKey ${PATCHING_JOB_KEY}
blcli_storeenv PATCHING_JOB_RUN_KEY
blcli_execute JobRun jobRunKeyToJobRunId ${PATCHING_JOB_RUN_KEY}
blcli_storeenv PATCHING_JOB_RUN_ID
# get the patching job run child ids (one will be the remediation job id)
blcli_execute JobRun findPatchingJobChildrenJobsByRunKey ${PATCHING_JOB_RUN_ID}
blcli_execute JobRun getJobRunId
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storeenv PATCH_ANALYSIS_JOB_RUN_IDS
for JOB_RUN_ID in ${PATCH_ANALYSIS_JOB_RUN_IDS}
        do
        blcli_execute JobRun findById ${JOB_RUN_ID}
        blcli_execute JobRun getType
        blcli_storeenv JOB_RUN_TYPE_ID
        if [[ ${JOB_RUN_TYPE_ID} = 7033 ]]
                then
                break
        fi
done
# get the log entries for the remediation job
blcli_execute JobRun findById ${JOB_RUN_ID}
blcli_execute JobRun getJobKey
blcli_storeenv REMEDIATION_JOB_KEY
blcli_execute LogItem getLogItemsByJobRun ${REMEDIATION_JOB_KEY} ${JOB_RUN_ID}
blcli_execute Utility storeTargetObject logItems
blcli_execute Utility listLength
blcli_storeenv LIST_LENGTH
if [[ ${LIST_LENGTH} -gt 0 ]]
        then
        for i in {0..$((${LIST_LENGTH}-1))}
                do
                blcli_execute Utility setTargetObject logItems
                blcli_execute Utility listItemSelect ${i}
                blcli_execute Utility setTargetObject
                blcli_execute JobLogItem getMessage
                blcli_storeenv MESSAGE
                 # look for the < 8.8 case:
                if ( grep -q "Created deploy job" <<< "${MESSAGE}" )
                        then
                        DEPLOY_JOB_NAME="$(grep "Created deploy job" <<< "${MESSAGE}" | cut -f2- -d: | sed "s/^ Jobs//")"
                        echo "DEPLOY_JOB_NAME: ${DEPLOY_JOB_NAME}"
                        blcli_execute DeployJob getDBKeyByGroupAndName "${DEPLOY_JOB_NAME%/*}" "${DEPLOY_JOB_NAME##*/}"
                        blcli_storeenv DEPLOY_JOB_KEY
                        DEPLOY_JOB_KEYS+=${DEPLOY_JOB_KEY}
                fi
                 # look for the 8.8+ case
                if ( grep -q "Created Batch Job" <<< "${MESSAGE}" )
                        then
                        BATCH_JOB_NAME="$(grep "Created Batch Job" <<< "${MESSAGE}" | cut -f2- -d: | sed "s/^ Jobs//")"
                        echo "BATCH_JOB_NAME: ${BATCH_JOB_NAME}"
                        blcli_execute BatchJob getDBKeyByGroupAndName "${BATCH_JOB_NAME%/*}" "${BATCH_JOB_NAME##*/}"
                        blcli_storeenv batchJobKey
                        blcli_execute BatchJob findAllSubJobHeadersByBatchJobKey ${batchJobKey}
                        blcli_execute SJobHeader getDBKey
                        blcli_execute Utility setTargetObject
                        blcli_execute Utility listPrint
                        blcli_storeenv batchMembers
                        batchMembers="$(awk 'NF' <<< "${batchMembers}")"
                        while read i
                                do
                                blcli_execute Job findByDBKey ${i}
                                blcli_execute Job getName
                                blcli_storeenv deployJobName
                                blcli_execute Job getGroupId
                                blcli_storeenv deployJobGroupId
                                blcli_execute Group getQualifiedGroupName 5005 ${deployJobGroupId}
                                blcli_storeenv deployGroupPath
                                blcli_execute DeployJob getDBKeyByGroupAndName "${deployGroupPath}" "${deployJobName}"
                                blcli_storeenv jobKey
                                if [[ "${DEPLOY_JOB_KEYS/${jobKey}}" = "${DEPLOY_JOB_KEYS}" ]]
                                        then
                                        echo "DEPLOY_JOB: ${deployGroupPath}/${deployJobName}"
                                        echo "DEPLOY_JOB_KEY: ${jobKey}"
                                        DEPLOY_JOB_KEYS+="${jobKey}"
                                fi
                         done <<< "${batchMembers}"
                fi
        done
else
        echo "Could not find job logs for ${REMEDIATION_JOB_KEY}..."
        exit 1
fi

 

This will generate output like:

Single Job Mode:

BATCH_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019 batch deploy Windows 2016 and 2019@2019-09-29 14-41-25-125-0400
DEPLOY_JOB: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019-20011681 @ 2019-09-29 14-41-24-319-0400
DEPLOY_JOB_KEY: DBKey:SJobModelKeyImpl:2001176-1-2131104
DBKey:SJobModelKeyImpl:2001176-1-2131104

 

Multiple Deploy Jobs:

DEPLOY_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019-win16-894p2.example.com-20011681 @ 2019-09-29 14-51-40-404-0400
DEPLOY_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019-win19-894p2.example.com-20011681 @ 2019-09-29 14-51-42-179-0400
BATCH_JOB_NAME: /Workspace/Patching Jobs/Deploy/Windows 2016 and 2019 batch deploy Windows 2016 and 2019@2019-09-29 14-51-42-726-0400
DBKey:SJobModelKeyImpl:2001183-1-2131308 DBKey:SJobModelKeyImpl:2001189-1-2131320

 

Once you have the name and DBKey of the generated Deploy Job, you can then proceed to do whatever you need to do to/for/with these jobs.
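For example (a minimal sketch, assuming the JobRun executeJobAndWait blcli command and the DEPLOY_JOB_KEYS array collected above), you could execute each generated Deploy Job in turn:

# execute each generated Deploy Job and wait for it to complete
for JOB_KEY in ${DEPLOY_JOB_KEYS}
        do
        echo "Executing Deploy Job: ${JOB_KEY}"
        blcli_execute JobRun executeJobAndWait ${JOB_KEY}
done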


Happy to announce that we have the next patch ready with some key support enhancements:

 

  • You can now run OS hardening on Red Hat 8 servers. Patch 2 allows you to install the TrueSight Server Automation agent, application server, and NSH on RHEL 8.
  • The Yellowfin version that is shipped with TrueSight Server Automation Live Reporting has been upgraded to 8.0.2 from 8.0.1.

 

The following component templates for the Defense Information Systems Agency (DISA) policy have been updated: 

  • DISA on Red Hat Enterprise Linux 6 is updated to benchmark version 1 - Release 23 of July 26, 2019.
  • DISA on Red Hat Enterprise Linux 7 is updated to benchmark version 2 - Release 4 of July 26, 2019.
  • DISA on Windows Server 2008 R2 DC is updated to benchmark version 1 - Release 31 of July 26, 2019.
  • DISA on Windows Server 2008 R2 MS is updated to benchmark version 1 - Release 30 of July 26, 2019.
  • DISA on Windows Server 2012 DC (2012 and 2012 R2) is updated to benchmark version 2 - Release 17 of July 26, 2019.
  • DISA on Windows Server 2012 MS (2012 and 2012 R2) is updated to benchmark version 2 - Release 16 of July 26, 2019.
  • DISA on Windows Server 2016 is updated to benchmark version 1 - Release 9 of July 26, 2019.

 

For more information on how to download and upgrade to the latest patch, please visit the page:

Version 8.9.04.002: Patch 2 for version 8.9.04.


This question has come up a number of times and I decided to spend some time looking into it.  The goal is to be able to leverage a RedHat Satellite server as the source of a RedHat Patch Catalog in TrueSight Server Automation.  The standard disclaimers apply: this is not currently supported, may not work for you, may stop working in the future, and so on.

 

The below assumes some familiarity with Satellite and should work for Satellite version 6.x.  I did this with Satellite 6.5.

 

On the Satellite server, add or edit the file /etc/pulp/server/plugins.conf.d/yum_distributor.json, as noted in this Foreman bug, and add the following:

{
  "generate_sqlite": true
}

This is needed because BSA requires the metadata in sqlite format and this is not the default for Satellite.  After making this change you must restart the Pulp worker services.  The next synchronization of your repositories should include the sqlite metadata.  If not, you can forcefully regenerate the metadata of a Content View.
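On Satellite 6.x with Pulp 2, restarting the Pulp worker services might look like the below; the unit names are an assumption that can vary by Satellite version (katello-service restart is the heavier, catch-all alternative):

systemctl restart pulp_workers pulp_celerybeat pulp_resource_manager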

 

To verify you have generated the sqlite metadata, run the below command on the Satellite server after the synchronization completes:

find /var/lib/pulp/published/yum/master/yum_distributor -iname "*sqlite.bz2" -exec ls -la {} \;
-rw-r--r--. 1 apache apache 13938 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/7f8c6bce5464871dd00ed0e0ed25e55fd460abb255ab0aa093a79529bb86cbc2-primary.sqlite.bz2
-rw-r--r--. 1 apache apache 155449 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/cff3aeccd7f3ff871f72c5829ed93720e0f273d1206ee56c66fa8f6ee1d2e486-filelists.sqlite.bz2
-rw-r--r--. 1 apache apache 40915 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/eb5044ef0c9e47dab11b242522890cfe6fbb6cf1942f14757af440ec54c9027f-other.sqlite.bz2
[...]

 

Subscribe the system used to store the RedHat Catalog for TSSA to your Satellite server to obtain the certificates used by TSSA in the catalog synchronization process.

 

In the Patch Global Configuration, or your offline downloader configuration file you will reference these certificates:

SSL CA Cert: /etc/pki/ca-trust/source/anchors/katello-server-ca.pem
SSL Client Cert: /etc/pki/entitlement/<numbers>.pem
SSL Client Key: /etc/pki/entitlement/<numbers>-key.pem

Note that the SSL CA Cert is different from the one used when synchronizing directly with RedHat.

 

You will need to update the RedHat Channel Filters List File (online catalog) or the offline downloader configuration file (offline catalog) with the URLs and other information for the Satellite-provided channels you will use in your catalog.  The URLs will look something like:

https://satellite.example.com/pulp/repos/Example/Library/View1/content/dist/rhel/server/7/7Server/x86_64/os

 

The format of the URL is https://<satellite server>/pulp/repos/<organization>/<content library>/<view name>/<product>.  An easy way to determine the URLs is to use the rct cat-cert command on the SSL Client Cert:

 

rct cat-cert /etc/pki/entitlement/3591963669563311224.pem
[...]
Content:
Type: yum
Name: Red Hat Enterprise Linux 7 Server (RPMs)
Label: rhel-7-server-rpms
Vendor: Red Hat
URL: /Example/Library/View1/content/dist/rhel/server/7/$releasever/$basearch/os
GPG: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Enabled: True
Expires: 1
Required Tags: rhel-7-server
Arches: x86_64

 

Another way is to inspect the output of the subscription-manager repos --list command (which only shows the repos applicable to the OS of the catalog server):

# subscription-manager repos --list
+----------------------------------------------------------+
    Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID:   rhel-7-server-rpms
Repo Name: Red Hat Enterprise Linux 7 Server (RPMs)
Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/$releasever/$basearch/os
Enabled:   1

Repo ID:   rhel-7-server-optional-rpms
Repo Name: Red Hat Enterprise Linux 7 Server - Optional (RPMs)
Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/$releasever/$basearch/optional/os
Enabled:   0

Repo ID:   rhel-7-server-satellite-tools-6.5-rpms
Repo Name: Red Hat Satellite Tools 6.5 (for RHEL 7 Server) (RPMs)
Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/$basearch/sat-tools/6.5/os
Enabled:   0

 

Once you have the URLs and other information for the channels you want to build your catalog from, update the RedHat Channel Filters List File in the Patch Global Configuration, or the Offline Downloader configuration file, accordingly.

 

Example RedHat Filters file snippet for an online catalog:

[...]

    <redhat-channel use-reposync="true">
        <channel-name>RHEL 7 Optional RPMs from Satellite</channel-name>
        <channel-label>rhel-7-server-optional-rpms-satellite</channel-label>
        <channel-os>RHES7</channel-os>
        <channel-arch>x86_64</channel-arch>
        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/optional/os</channel-url>
    </redhat-channel>
    <redhat-channel use-reposync="true" is-parent="true">
        <channel-name>RHEL 6 RPMs from Satellite</channel-name>
        <channel-label>rhel-6-server-rpms-satellite</channel-label>
        <channel-os>RHES6</channel-os>
        <channel-arch>x86_64</channel-arch>
        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/os</channel-url>
    </redhat-channel>
    <redhat-channel use-reposync="true">
        <channel-name>RHEL 6 Optional RPMs from Satellite</channel-name>
        <channel-label>rhel-6-server-optional-rpms-satellite</channel-label>
        <channel-os>RHES6</channel-os>
        <channel-arch>x86_64</channel-arch>
        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/optional/os</channel-url>
    </redhat-channel>

[...]

This adds the rhel-7-server-rpms and rhel-6-server-rpms as parent channels and the others as child channels.

 

 

Example Offline Downloader configuration file snippet:

    <redhat-cert cert-arch="x86_64">
        <caCert>/etc/pki/ca-trust/source/anchors/katello-server-ca.pem</caCert>
        <clientCert>/etc/pki/entitlement/2717125327657143845.pem</clientCert>
        <clientKey>/etc/pki/entitlement/2717125327657143845-key.pem</clientKey>
    </redhat-cert>

    <errata-type-filter>
        <os>RHES7</os>
        <arch>x86_64</arch>
        <channel-label>rhel-7-server-rpms</channel-label>
        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/os</channel-url>
        <errata-severity>
            <critical>true</critical>
            <important>true</important>
            <moderate>true</moderate>
            <low>true</low>
        </errata-severity>
        <errata-type>
            <security>true</security>
            <bugfix>true</bugfix>
            <enhancement>true</enhancement>
        </errata-type>
    </errata-type-filter>
    <errata-type-filter>
        <os>RHES7</os>
        <arch>x86_64</arch>
        <channel-label>rhel-7-server-optional-rpms</channel-label>
        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/optional/os</channel-url>
        <errata-severity>
            <critical>true</critical>
            <important>true</important>
            <moderate>true</moderate>
            <low>true</low>
        </errata-severity>
        <errata-type>
            <security>true</security>
            <bugfix>true</bugfix>
            <enhancement>true</enhancement>
        </errata-type>
    </errata-type-filter>
    <errata-type-filter>
        <os>RHES6</os>
        <arch>x86_64</arch>
        <channel-label>rhel-6-server-rpms</channel-label>
        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/os</channel-url>
        <errata-severity>
            <critical>true</critical>
            <important>true</important>
            <moderate>true</moderate>
            <low>true</low>
        </errata-severity>
        <errata-type>
            <security>true</security>
            <bugfix>true</bugfix>
            <enhancement>true</enhancement>
        </errata-type>
    </errata-type-filter>
    <errata-type-filter>
        <os>RHES6</os>
        <arch>x86_64</arch>
        <channel-label>rhel-6-server-optional-rpms</channel-label>
        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/optional/os</channel-url>
        <errata-severity>
            <critical>true</critical>
            <important>true</important>
            <moderate>true</moderate>
            <low>true</low>
        </errata-severity>
        <errata-type>
            <security>true</security>
            <bugfix>true</bugfix>
            <enhancement>true</enhancement>
        </errata-type>
    </errata-type-filter>

 

 

At this point, using the online RedHat Patch Catalog or offline downloader is the same as synchronizing with RedHat directly.  Finish the catalog creation and run the Catalog Update Job.

 

 

 

A few references were helpful in setting all of this up (some may require a RedHat Support account to access):

Installing Satellite Server from a Connected Network

How to forcefully regenerate metadata of a content view or repository on Red Hat Satellite 6

How do I register a system to Red Hat Satellite 6 server

How to change download policy of repositories in Red Hat Satellite 6


If you are using TSSA/BSA to patch your Windows servers, you have hopefully already upgraded to a version of TSSA that uses the new Shavlik SDK version and are happily off patching your servers, having received a notification from BMC via email, seen a blog post, attended a webinar, or seen the documentation flash page.

 

If you have not upgraded yet, you need to complete an upgrade before September 30, 2019 to continue patching your Windows servers with TSSA.

 

Let me elaborate on a few points from the documentation flash page:

 

You must upgrade the core TSSA infrastructure, that is, the AppServer(s), Database, and Consoles, to a version of TSSA that uses the new Shavlik SDK.

 

You must also upgrade the RSCD agents on all Windows servers you need to deploy patches to.  This unfortunately breaks with precedent: with most BSA/TSSA upgrades, you typically upgrade the core infrastructure and then roll out RSCD upgrades as change windows permit.  The notable exception to this rule is when there are RSCD-side changes to enable new functionality, and the Shavlik SDK update is one of those exceptions.  Both the appserver and the RSCD must be upgraded.
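If you need to check which RSCD version a target is currently running, agentinfo from an NSH prompt will report it (the hostname below is illustrative):

# agentinfo reports agent details, including the agent release, for a target host
agentinfo winserver1 | grep -i release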

 

While the flash page mentions a couple of versions to upgrade to, there are actually a handful of TSSA versions that include the updated Shavlik SDK.

  • If your appserver and agents are already 8.9.03 then there is no action required.
  • If your appserver and agents are already 8.9.03.001, then there is no action required.
  • If your appserver and agents are already 8.9.04, then there is no action required.
  • If your appserver and agents are already 8.9.04.001, then there is no action required.
  • If you have some combination of an 8.9.04.x appserver and 8.9.03 agents, or 8.9.03.001 appserver and 8.9.03 agents, then there is likely no action required, unless you have encountered one of the issues noted below.

 

However, if you are planning to upgrade, you should upgrade to 8.9.04.001 or later, because you will likely run into a few bugs present in prior versions impacting the agent upgrade process and patching:

Bug | Summary | Fixed In
DRBLG-116254 | When you upgrade a Windows RSCD Agent to version 8.9.03.001, the sbin\BLPatchCheck2.exe binary is not upgraded. | 8.9.04
DRBLG-116834 | When you execute an Agent Installer job against multiple target servers in parallel, some of the jobs fail with the error message: "Failed to copy installer bundle from file server" | 8.9.04
QM002409753 | Upgrading BMC Server Automation 8.9.02 agent on Microsoft Windows servers does not work as expected. | 8.9.03.001
DRBLG-114734 | Shavlik 9.3 restart handling and robustness | 8.9.03.001
DRBLG-118439 | Windows CUJ update is slow | 8.9.04.001
DRBLG-117288 | Windows Patch Exclusions Not Working | 8.9.04
DRBLG-115981 | Windows Patching Analysis failed in absence of Digital Certificates | 8.9.03.001
DRBLG-119204 | After appserver upgrade to 8.9.04, Windows Patch Remediation fails unless the target RSCD agent is upgraded to the same SP4. | 8.9.04.001

 

A few other questions that have come up about the upgrade process:

 

Can I upgrade my agents to the new version ahead of my appserver upgrade?

No, the appserver/infrastructure must be updated first.  TSSA/BSA has never supported using a newer RSCD version with an older appserver.

 

What happens if I can't complete the upgrade process before September 30, 2019?

New patch metadata with information about newly released patches will not be available to you and you will be unable to deploy them using TSSA's Patching solution.

 

Do I also need to upgrade the RSCD on other Operating Systems (Linux, AIX, Solaris, etc.) immediately after upgrading the TSSA infrastructure?

You do not need to upgrade these agents immediately after upgrading the TSSA infrastructure.


Hello Everyone,

 

We are super excited to announce the launch of 'TrueSight Automation Console' for TrueSight Automation for Servers. This container-based platform significantly simplifies the OS patching experience for servers while providing near-real-time Patch Compliance status. Here are the highlights of this platform:

 

Simplified patch management

Using the new Automation Console, you can define patch policies that are used for both audit and deployment of patches.

Based on the policy results, you can create remediation actions to apply the required patches on the target servers. You simply choose the policies, targets, remediation options, and schedule (maintenance window), and receive notifications (alerts).  It is a big step forward in ease of use, making it easier to deploy patches through an intuitive interface.

 

(Screenshot: Add Patch Policy)

 

KPI-driven Dashboards

The Dashboard shows patch compliance health. It reflects KPIs such as:

  • Patch Compliance of the environment
  • Assets missing patch SLAs and critical patches
  • Age of missing patches
  • Remediation trends
  • Visibility into the patches that are missing on the largest number of servers

You can also:

  • Filter the dashboard data by operating system, severity, and patch policy.
  • Drill down on widgets to get details
  • Export data to share with stakeholders

 

 

Deploy missing patches

Based on the scan results of patch policies, you can schedule the deployment of missing patches during the maintenance window. You can also choose the reboot options for the target servers, as well as a staging schedule for patch payloads.

 

Service Level Agreements (SLAs)

SLAs define the period within which the missing patches need to be remediated. You can define SLAs based on your organization's policies.

 

Container Based Deployment

TrueSight Automation Console is packaged as Docker containers, which are easy to deploy.

You can install TrueSight Automation Console in both Interactive and Silent modes.

Installing TrueSight Automation Console is a simple three-step process:

  • Setup Stack Manager
  • Setup Database
  • Setup Automation Console application

 

For details, please refer to the documentation at https://docs.bmc.com/docs/display/tsac191

 

Customers who want to understand more about this offering and gain access to the software should reach out to their respective account manager.


Hi all

Now that we've done a few sessions on TrueSight Smart Reporting, I thought I'd gather the links and resources together in a single place and share them with everyone.

If you have other topics you'd like to see addressed, or questions, please engage here, on the individual pages above, or contact Support.

Thanks

Seth

 


As part of automating your patch deployments, you may want to run your Patching Job and have it automatically generate the BlPackages and Deploy Jobs that contain missing patches, and also schedule the various phases of the generated Deploy Jobs.  This is fairly simple in the GUI: in a Patching Job, on the Remediation Options tab, click the Deploy Job Options button and then go to the Phases and Schedules tab.

(Screenshot: Deploy Job Options, Phases and Schedules tab)

While this is trivial for a single job, if you have several jobs to modify every patching cycle it becomes quite tedious.  We will of course turn to our friend the BLCLI.  There's a hint in the screenshot above as to how we will accomplish this: the Populate options from an existing Job option looks interesting.  And in fact, if I create a 'dummy' Deploy Job, set up the options and schedule that I want, and then select that job in the Populate options from an existing Job menu, the schedule and options are applied to my Patching Job's Deploy Job Options.

 

To really automate this I need to do a few things: figure out the options and schedule we want set in the Patching Job; create or update a 'dummy' deploy job with those options and schedule; find my Patching Job (we could create one from scratch, but not for this exercise); apply the 'dummy' job to the Patching Job; remove the schedule from the dummy job; and then optionally execute or schedule the Patching Job.

 

Dummy Job

To create the 'dummy' job we need a BlPackage.  Let's provide the name and path to a BlPackage, and create an empty BlPackage if it does not exist.

DUMMY_BLPACKAGE="/Workspace/TestDeploy"
blcli_execute DepotGroup groupNameToDBKey "${DUMMY_BLPACKAGE%/*}"
blcli_storeenv depotGroupKey
blcli_execute DepotObject depotObjectExistsByTypeGroupAndName 28 ${depotGroupKey} "${DUMMY_BLPACKAGE##*/}"
blcli_storeenv pkgExists
if [[ "${pkgExists}" = "true" ]]
    then
    blcli_execute BlPackage getDBKeyByGroupAndName "${DUMMY_BLPACKAGE%/*}" "${DUMMY_BLPACKAGE##*/}"
else
    blcli_execute DepotGroup groupNameToId "${DUMMY_BLPACKAGE%/*}"
    blcli_storeenv depotGroupId
    blcli_execute BlPackage createEmptyPackage "${DUMMY_BLPACKAGE##*/}" "" ${depotGroupId}
    blcli_execute DepotObject getDBKey
fi
blcli_storeenv PACKAGE_KEY
echo "PACKAGE_KEY:${PACKAGE_KEY}"

Now that we have our dummy BlPackage, let's do the same with the dummy job.  When we create the deploy job, we'll pass parameters as defined for the DeployJob createDeployJob_3 command in the BMC Server Automation Command Line Interface 8.9 documentation.

 

# BASIC_DEPLOY_OPTS, ADVANCED_OPTS and DEPLOY_TYPE are set earlier in the attached script
DEPLOY_OPTS="${BASIC_DEPLOY_OPTS} ${ADVANCED_OPTS}"
DUMMY_BLDEPLOY="/Workspace/TestDeploy"
DUMMY_TARGET_SERVER="server1"
blcli_execute JobGroup groupNameToDBKey "${DUMMY_BLDEPLOY%/*}"
blcli_storeenv jobGroupKey
blcli_execute Job jobExistsByTypeGroupAndName 30 ${jobGroupKey} "${DUMMY_BLDEPLOY##*/}"
blcli_storeenv jobExists
if [[ "${jobExists}" = "true" ]]
    then
    blcli_execute DeployJob getDBKeyByGroupAndName "${DUMMY_BLDEPLOY%/*}" "${DUMMY_BLDEPLOY##*/}"
else
    blcli_execute JobGroup groupNameToId "${DUMMY_BLDEPLOY%/*}"
    blcli_storeenv GROUP_ID
    blcli_execute DeployJob createDeployJob "${DUMMY_BLDEPLOY##*/}" "${GROUP_ID}" "${PACKAGE_KEY}" "${DEPLOY_TYPE}" "${DUMMY_TARGET_SERVER}" ${DEPLOY_OPTS}
fi
blcli_storeenv DEPLOY_JOB_KEY
echo "DEPLOY_JOB_KEY:${DEPLOY_JOB_KEY}"

 

Now we can set the phase schedules on the dummy job:

# SIMULATE_TIME, STAGE_DATE_TIME and COMMIT_TIME hold the desired phase schedule times (set in the attached script)
blcli_execute DeployJob setAdvanceDeployJobPhaseScheduleByDBKey ${DEPLOY_JOB_KEY} AtTime "${SIMULATE_TIME}" "${STAGE_DATE_TIME}" "" AtTime "${COMMIT_TIME}"
blcli_storeenv DEPLOY_JOB_KEY
echo "DEPLOY_JOB_KEY:${DEPLOY_JOB_KEY}"

 

Patching Job

Apply the options and schedule from the dummy job to your patching job:

 

PATCH_JOB="/Workspace/Patching Jobs/rhel6-clean"
REMEDIATION_DEPOT_FOLDER="/Workspace/Patch Deploy"
REMEDIATION_JOB_FOLDER="/Workspace/Patching Jobs"
REMEDIATION_JOB_PREFIX="${PATCH_JOB##*/}-"
# look up the patching job key
blcli_execute PatchingJob getDBKeyByGroupAndName "${PATCH_JOB%/*}" "${PATCH_JOB##*/}"
blcli_storeenv PATCH_JOB_KEY
blcli_execute PatchingJob setRemediationWithDeployOptions ${PATCH_JOB_KEY} "${REMEDIATION_JOB_PREFIX}" "${REMEDIATION_DEPOT_FOLDER}" "${REMEDIATION_JOB_FOLDER}" ${DEPLOY_JOB_KEY}
blcli_storeenv PATCH_JOB_KEY
echo "PATCH_JOB_KEY:${PATCH_JOB_KEY}"

 

 

Optionally schedule the Patching Job

# ANALYSIS_TIME holds the desired analysis start time (set in the attached script)
blcli_execute Job addOneTimeSchedule ${PATCH_JOB_KEY} "${ANALYSIS_TIME}"
blcli_storeenv PATCH_JOB_KEY
echo "PATCH_JOB_KEY:${PATCH_JOB_KEY}"

 

 

And that's it.  The attached script includes a hard-coded start time for the Deploy phases and the Patching Job; those could be taken as input to the script or derived somehow.  There's also a check to see if the executeDeployJobsNow option is set, which exits out, since this requires manual correction.

 

There are a few things that could be done in the script that are not covered and are left as an exercise for the reader:

  • Delete the dummy job and package after they've been used
  • Set pre- and post- commands in the Deploy Job options
  • Take the various options, schedule times, job path, folders, etc as script arguments

In Part 1 we saw how to create a web-based repository configuration for JFrog Artifactory. In Part 2 we will see how to create the Depot Software for deploying an application like Notepad++ to Windows targets.

 

Create a Depot Software for custom software

 

In this step you link the software payload's HTTP path with the Depot Software. The steps are the same as for creating any new Depot Software; I will show here how you can do it for Custom Software. You will get the following screen to select the software payload. You can live-browse the web repositories and select the payload you want to deploy. To pull the latest version from the repository on every deployment, check the box "Download during deployment". Click OK.

Next you will come to the screen where you can give the name and the deploy/undeploy-related commands for the software payload; the Notepad++ example is shown below. Click Next.

Go through the rest of the wizard screens as you would for any other Depot Software, click Finish, and you are all set!

 

Deploy the software to the selected servers

 

Now you can execute the deploy job just like any other Deploy Job, by right-clicking on the Depot Software you created.


The need for a central repository for software payloads

 

A lot of customers use file servers as remote repositories for storing payloads. Having multiple remote repositories can give you scalability, but when you want to host a software package with frequent releases, and keep its latest version accessible for deployment, it becomes an involved effort for the team. This is especially true when you have servers distributed across departments, with different IT leads owning sets of servers, each with unique requirements for the software that needs to be installed on them.

 

TrueSight Server Automation, with release 8.9.04.001, offers you a way to store all your payloads in a central web repository and download the latest version of a payload through an HTTPS URL. This release includes an out-of-the-box integration with the generic repository type of JFrog Artifactory. The prerequisite is that you already have a JFrog Artifactory repository of the Generic Local type configured, with your software payloads uploaded to it.
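As a quick illustration, uploading a payload to a Generic Local repository is a plain HTTP PUT; the repository name, path, and credentials below are made up for the example:

curl -u deployer:password -T npp.7.8.Installer.exe "https://artifactory.example.com/artifactory/generic-local/notepadpp/npp.7.8.Installer.exe"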

 

You can be ready to deploy the software payloads in three simple steps:

  1. Configure the web repository in RCP Console
  2. Create a Depot Software
  3. Create a Deploy job

 

We will see how to use these three steps in three parts of this blog.

 

Configure the web repository in RCP Console

Create the Web Repository configuration by following these steps:

 

Click the Add button to add a new web repository configuration. Here you enter the Artifactory URL and the JFrog user and password. Click Next.

On the next screen you can grant the WebRepositoryConfig object-level authorization to the role you are currently logged into the RCP console with.

Click OK, and then click Finish on the parent screen.

 

See Part 2 for how to create the Depot Software using the HTTPS URL and deploy the software.
