
TrueSight Server Automation


In Part 1 we saw how to create a web-based repository configuration for JFrog Artifactory. In Part 2 we will see how to create the Depot Software for deploying an application such as Notepad++ to Windows targets.

 

Create a Depot Software for custom software

 

In this step you link the software payload's HTTP path to the Depot Software. The steps are the same as for creating any new Depot Software; I will show how to do it for Custom Software. You will see the following screen for selecting the software payload. You can live-browse the web repositories and select the payload you want to deploy. To pull the latest version from the repository on every deployment, check the "Download during deployment" box. Click OK.

Next you will see the screen where you can name the software payload and specify its deploy/undeploy commands. For Notepad++ we have shown these below. Please zoom in the browser if the commands are not visible. Click Next.

Go through the rest of the wizard screens as you would for any Depot Software, click Finish, and you are all set!

 

Deploy the software to the selected servers

 

Now you can execute the Deploy Job just like any other Deploy Job, by right-clicking the Depot Software you created.


Need of central repository for software payloads

 

Many customers use file servers as remote repositories for storing payloads. While multiple remote repositories can give you scalability, keeping the latest version of a frequently released software package accessible for deployment across them becomes an involved effort for the team. This is especially true when servers are distributed across departments, with different IT leads owning sets of servers, each with unique software requirements.

 

TrueSight Server Automation, with its release 8.9.04.001, offers a way to store all your payloads in a central web repository and download the latest version of a payload through an HTTPS URL. This release includes an out-of-the-box integration with the generic repository type of JFrog Artifactory. The prerequisite is that you already have a JFrog Artifactory repository of Generic Local type configured, with your software payloads uploaded to it.

 

You can be ready to deploy the software payloads in three simple steps:

  1. Configure the web repository in RCP Console
  2. Create a Depot Software
  3. Create a Deploy job

 

We will walk through these three steps across the three parts of this blog.

 

Configure the web repository in RCP Console

As shown below, create the web repository configuration with the following steps:

 

Click the Add button to add a new web repository configuration. Here you enter the Artifactory URL and the JFrog user name and password. Click Next.

On the next screen you can grant the WebRepositoryConfig object-level authorization to the role you are currently logged in to the RCP console with.

Click OK and then Click Finish on the parent screen.

 

Click on Part 2 to see how you can create the Depot Software using the HTTPS URL and deploy the software.


I am excited to announce that TrueSight Server Automation 8.9.04.001 is GA with some critical updates.

 

  • Now use the all-new REST APIs for core patching, with a well-defined documentation standard. You can work with catalogs, groups, patching jobs, roles, and servers through these APIs. The API endpoints are documented using the Swagger specification.
  • With this release we support out-of-the-box integration with the generic-local repository type of JFrog Artifactory. You can add a software package to the Depot by specifying the path to a payload that exists on a configured web repository. After adding a software package, you can deploy it using a Deploy Job, just as you could earlier with an NSH path.
  • In smart groups, you no longer need to create a smart group from scratch every time; you can reuse the existing conditions from other smart groups. As an Administrator or operator, you can create a copy of a smart group in a patch catalog.
  • We have also included provisioning support on Unified Extensible Firmware Interface (UEFI) by using Windows Imaging Format (WIM) images.
  • There is a whole set of BLCLI commands now available for working with job runs, patch catalogs, execution tasks, and smart patch catalog groups.
  • OS support has been extended to include Windows 2019 and SUSE 15.

 

Apart from these there are some critical fixes available.

 

For more information please visit the link Version 8.9.04.001: Patch 1 for version 8.9.04 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentatio…

 

 

 


I am super excited to share that TrueSight Server Automation 8.9.04 release is GA!! Following are the salient features of this release.

 

Job qualification check

You can now set the maximum number of servers targeted by a job. Every time you perform one of the following actions, a message with the number of target servers is displayed. This option is available through the GUI only.

  • Create Job
  • Create Execution Task
  • Modify Job
  • Execute Against
  • Execute a job

Powershell integration

  • Execute PowerShell scripts through Type 3 NSH Scripts and scriptutil.
  • Extended Objects can support PowerShell script natively.
  • Option to generate script logs for Network Shell script job for ease of parsing.
  • Ability to easily pass custom arguments while launching PowerShell scripts as well as simple configuration to set the launch command for PowerShell.

 

 

Added platform support

  • Windows Server 2019 operating system
  • Ubuntu 18.04 operating system
  • POWER9 architecture

Compliance Content

  • CIS for SUSE 12 and Windows 2016
  • PCI for Windows 2016
  • DISA, PCI, and CIS templates for older OS versions have been upgraded to the latest template versions.

 

For more details please visit Service Pack 4: version 8.9.04 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentation


Coming up on October 18, 2018 is BMC’s annual user event, the BMC Exchange in New York City!

 


 

During this free event, there will be thought-provoking keynotes including global trends and best practices.  Also, you will hear from BMC experts and your peers in the Digital Service Operations (DSO) track.  Lastly, you get to mingle with everyone including BMC experts, our business partners, and your peers.  Pretty cool event to attend, right? 

 

In the DSO track, we are so excited to have 3 customers tell their stories. 

  • Cerner will speak about TrueSight Capacity Optimization and their story around automation and advanced analytics for capacity planning and future demand. Check out Cerner’s BizOps 101 ebook
  • Park Place Technologies’ presentation will focus on how they leverage AI technology to transform organizations.
  • Freddie Mac will join us in the session about vulnerability management.  Learn how your organization can protect itself from security threats.  Hear how Freddie Mac is using BMC solutions. 

 

BMC product experts will also be present in the track and throughout the entire event.

  • Hear from the VP of Product Management on how to optimize multi-cloud performance, cost and security
  • Also, hear from experts on cloud adoption.  This session will review how TrueSight Cloud Operations provides you visibility and control needed to govern, secure, and manage costs for AWS, Azure, and Google Cloud services.

 

At the end of the day, there will be a networking reception with a raffle (or 2 or 3).  Stick around and talk to us and your peers.  See the products live in the solutions showcase. Chat with our partners.  Stay around and relax before heading home. 

 

Event Info:

Date: October 18th 2018

When: 8:30am – 7:00pm

  • Keynote begins at 9:30am
  • Track Sessions begin at 1:30pm
  • Networking Reception begins at 5:00pm

Where: 415 5th Ave, NY, NY 10016

 

For more information and to register, click here

 

Look forward to seeing you in NYC!  Oh, and comment below if you are planning to attend!  We are excited to meet you.


Happy to announce that TrueSight Server Automation 8.9.03 went GA on 12th June 2018, and BMC Server Automation has been renamed to TrueSight Server Automation.

Highlights of the release:

  • HIPAA Compliance for AIX 7.1
  • Ivanti 9.3 SDK based patching
    • Windows patching is now supported using the Ivanti (Shavlik) SDK 9.3
  • SMB2 support
    • Add Node and Agent Installer Job through the RCP console to SMB2-enabled targets.
    • Unified Agent Installer to SMB2-enabled targets.
  • SHA256 support

We are upgrading the secured certificates to support a “Signature Algorithm” with a SHA2 key. These will be available in the following places for this release:

      • Self-Signed certificate used by Application Server
      • Agent certificate

 

  • Security Enhancements

JRE and Magnicomp vulnerabilities have been fixed in this release, along with other vulnerabilities discovered by the application security team.

Questions or feedback? Comment below to let us know

For more details please take a look at the documentation here - Service Pack 3: version 8.9.03 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentation


It can be useful to manually run yum or blyum with the repodata and includes from the command line, to get a better picture of what might be happening during analysis, to inspect the repodata, or to perform other troubleshooting steps. The following steps accomplish this:

 

For versions earlier than 8.9.01

Run the Patching Job with the DEBUG_MODE_ENABLED property set to true.

Gather the per-host generated data from the application server. For example, if the Job Name is 'RedHat Analysis' and it was run against the target 'red6-88.example.com' on Jun 11, you should see a directory:

<install dir>/NSH/tmp/debug/application_server/RedHat Analysis/Sat Jun 11 09-26-57 EDT 2016/red6-88.example.com

that contains:

analysis_err.log analysis_log.log analysis_res.log installed_rpms.log repo repodata.tar.gz yum_analysis.res yum.conf yum.err.log yum.lst

 

Copy the entire red6-88.example.com directory back to the target system (or any system you want to test these files on) into /tmp or some other location.

 

For 8.9.01 and later

The files mentioned above are kept in <rscd install>/Transactions/analysis_archive on the target system.  The three most recent runs should be present.

 

 

 

Once you've located the files used for analysis and placed them on the target system

Edit yum.conf so that cachedir and reposdir match the current directory path. Following the pre-8.9.01 example where we copied the directory into /tmp:

cachedir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com
reposdir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com

are changed to match the new path - if you copied the red6-88.example.com to /tmp then:

cachedir=//tmp/red6-88.example.com
reposdir=//tmp/red6-88.example.com

then from within that directory you can run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update

if an include list was used you can do:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat rpm-includes.lst.old`

or

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat parsed-include.lst.old`

if the parsed list exists. The parsed include list will not contain any RPMs that are both installed on the target and in the include list. parsed-include.lst was added in recent BSA versions to handle the situation where yum decides to update an RPM to the latest one in the catalog, instead of leaving it alone, when the include list contains the exact version of an RPM already installed on the system.
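If you want to see exactly which entries analysis filtered out, you can compare the two lists directly. A minimal sketch: the file names come from the analysis directory above, but the RPM names here are made up for illustration.

```shell
# parsed-include.lst.old = rpm-includes.lst.old minus RPMs already installed at
# the exact requested version, so comm -23 shows what analysis filtered out.
# (sample contents; the real files live in the analysis directory)
printf '%s\n' 'bash-4.2.46-34.el7' 'zlib-1.2.7-18.el7' > /tmp/rpm-includes.lst.old
printf '%s\n' 'zlib-1.2.7-18.el7' > /tmp/parsed-include.lst.old
# lines only in the raw include list, i.e. RPMs yum was told to skip
comm -23 /tmp/rpm-includes.lst.old /tmp/parsed-include.lst.old
```

Note that comm expects sorted input; sort both files first if needed.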

 

If it's a RedHat 7 target, the native yum is used, so use yum instead of blyum.

yum -c yum.conf -C update

 

You can also use the above process to copy the metadata from the target system to another system and run queries against the metadata, or run analysis with the same metadata and options against a test system. For example, you could run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C search <rpmname>

or

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C info <rpmname>

 

to see if some rpm is in the metadata or not.
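If you go through this copy-and-edit cycle often, the yum.conf change can be scripted. A minimal sketch, run here against a scratch copy using the example paths from above:

```shell
# scratch copy standing in for the yum.conf taken from the analysis directory
cat > /tmp/yum.conf <<'EOF'
cachedir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com
reposdir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com
EOF
# point both settings at the directory the analysis files were copied to
sed -i -e 's|^cachedir=.*|cachedir=//tmp/red6-88.example.com|' \
       -e 's|^reposdir=.*|reposdir=//tmp/red6-88.example.com|' /tmp/yum.conf
grep -E '^(cachedir|reposdir)=' /tmp/yum.conf
```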


BMC Software is alerting customers who use BMC Server Automation (BSA) for managing Unix/Linux Targets to this Knowledge Article which highlights vulnerability CVE-2018-9310 in the Magnicomp Sysinfo solution used by BSA RSCD Agent to capture Hardware Information. The Knowledge Article contains the details of the issue and also how to obtain and deploy the fix.


September 25, 2018 - Important update to below alert:

BMC Software has negotiated an extension of support for Shavlik version 9.1 to provide BMC Server Automation users additional time to upgrade. With this extension, users now have until September 30, 2019 to upgrade the BMC Server Automation infrastructure and the BMC Server Automation RSCD agents running on Microsoft Windows target servers.

Users who cannot upgrade their BMC Server Automation environment and Windows targets to one of the patches/releases listed in this topic by December 31, 2018, must instead reconfigure their environment by December 31, 2018, in order for Windows patching to continue to function.

This flash and prior notification (below) are now modified to reflect this update.

_______________________________________________________________________________________________________________________________________________________

 

Updated original notification from May 2018:

 

BMC Software is alerting users of BMC Server Automation for Windows Patching, that action must be taken before December 31, 2018 to ensure continued functioning of Windows Patching within the BMC Server Automation product beyond that date.

 

One of the following actions must be taken:

 

a) Upgrade the BSA Environment, including the RSCD agents on all Windows Targets used for BSA Patch Analysis, by December 31, 2018

or

b) A minor configuration change must be made to the BSA Environment by December 31, 2018 followed by an upgrade before the extended EOL date of September 30, 2019

 

Please see this updated Flash Bulletin in the BSA Documentation for full details.


If you want to increase or decrease the logging level for the appserver, there's a pretty easy way to do it: edit the appserver's log4j.properties file. There are already a number of logging class entries in there, and you may want to increase or decrease logging on some classes not listed. There's a fairly simple way to figure out which classes are associated with which log entries: add the class to the logger pattern. Near the top of log4j.properties, look for these two lines:

log4j.appender.C.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
log4j.appender.R.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n

 

We can add the class (category) by adding a %c, and we'll put it in brackets so it looks like the rest of the log:

log4j.appender.C.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%c] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
log4j.appender.R.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%c] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
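If you manage several appservers, the same pattern change can be scripted. This sketch applies the %c insertion to a scratch copy of the two lines; adjust the path for a real log4j.properties:

```shell
# scratch copy of the two ConversionPattern lines from log4j.properties
cat > /tmp/log4j.snippet <<'EOF'
log4j.appender.C.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
log4j.appender.R.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
EOF
# insert the [%c] category field right after the priority field [%p]
sed -i 's|\[%p\] |[%p] [%c] |' /tmp/log4j.snippet
grep -c '\[%c\]' /tmp/log4j.snippet
```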

 

After about a minute, without restarting the appserver service, you will see new log entries like:

[19 Mar 2018 08:57:35,073] [Scheduled-System-Tasks-Thread-10] [INFO] [com.bladelogic.om.infra.app.service.appserver.AppserverMemoryMonitorTask] [System:System:] [Memory Monitor] Total JVM (B): 625135616,Free JVM (B): 453789168,Used JVM (B): 171346448,VSize (B): 8967577600,RSS (B): 1027321856,Used File Descriptors: 300,Used Work Item Threads: 0/100,Used NSH Proxy Threads: 0/15,Used Client Connections: 3/200,DB Client-Connection-Pool: 2/2/0/200/150/50,DB Job-Connection-Pool: 2/2/0/200/150/50,DB General-Connection-Pool: 1/1/0/200/150/50

 

Generally we should then be able to add entries like:

log4j.logger.<class>=<level>

to the log4j.properties file.

 

For example, if I don't want to see INFO messages from compliance job runs, I would turn on the class logging and then run a compliance job. I'd see entries like:

[19 Mar 2018 08:57:36,903] [Job-Execution-2] [INFO] [com.bladelogic.om.infra.compliance.job.ComplianceJobExecutor] [BLAdmin:BLAdmins:] [Compliance] --JobRun-2000915,3-2052417-- Started running the job 'CisWin2012R2ComplianceJob' with priority 'NORMAL' on application server 'blapp89.example.com'(2,000,000)

I then add:

log4j.logger.com.bladelogic.om.infra.compliance.job.ComplianceJobExecutor=ERROR

to my log4j.properties file, wait a couple of minutes, and then re-run the job. You should no longer see the INFO message. Conversely, I may be able to get more information out of this class by setting it to DEBUG, but that depends on whether any debug logging is built into the class, which is not guaranteed.

 

One thing to note: if you want to change the logging from DEBUG back to INFO, or from ERROR back to INFO, you must alter the logger line; you can't simply delete the line from the log4j.properties file.

 

If you elect to do the above to reduce logging, make sure that when you interact with BMC Support you make it clear you have altered the logging levels, because during troubleshooting we may be looking for log messages you have excluded and would otherwise spend a lot of time figuring that out.

 

Using the above to enable debug logging can be useful while troubleshooting issues with the application server. The nuclear option, of course, is to change the root logging level:

log4j.rootLogger=INFO, R, C

and if that is done you will likely need to increase the size of and number of rolled logs to handle the additional information being dumped into the files:

# Set the max size of the file

log4j.appender.R.MaxFileSize=20000KB

#Set the number of backup files to keep when rolling over the main file

log4j.appender.R.MaxBackupIndex=5

It's much better to use the above method to figure out which class you want more (or less) logging on, or, if you need debug information for a particular job run, to set the DEBUG_MODE_ENABLED property on the job.

 

Generally you should not have to alter the logger settings during normal operation.


We are excited to introduce you to our new YouTube channel “BladeLogic Automation” for "How-to" videos, intended to help with a specific task or feature of products in the BladeLogic Automation suite (BSA, BDSSA, BDA and BNA).

 

 

Highlights:

 

Focused contents: The contents of this channel will focus exclusively on technical videos for the Server Automation, Decision Support for Server Automation, Database Automation, and Network Automation products. This content is developed by the BMC Support technical teams.

 

Featured Playlists: The channel will focus on technical contents, such as how-to, troubleshooting guides and functional demonstrations. Similar features/functions and categories will have their own Playlists to reduce the time to search the contents.

 

Snippet of our Playlists:

Click to receive notifications when new technical content is posted on the channel and to get the most out of the products – BSA, BDSSA, BNA and BDA.

Refer to our "Playlists" to play all the videos organized by topic or a product.

Here are the current Playlists:

 

We welcome feedback from the community.


I am very pleased to announce that BSA 8.9 Service Pack 2 is now available.

 

Here are some highlights of the release:

1>     Compliance Support:

    1. DISA STIG content update for Windows 2016
    2. DISA STIG content update for RH 7

2>     Patch Analysis support for AIX Multibos

Users can now perform patch analysis on the standby instance of the Base Operating System (BOS) which is maintained in the same root volume group as the active BOS object.

3>     Patching support for AIX on an Alternate Disk or Multiple Boot Operating System

Some versions of AIX have the capability of maintaining multiple instances of Base Operating Systems (BOS). The additional instance of the BOS can be maintained in the same root volume group (multibos) or on a separate disk on a separate root volume group (alternate disk). The user can boot any one instance of the BOS which is called the active instance. The instance that has not been booted remains as a stand by instance. BMC Server Automation supports installation, maintenance, and technology-level updates on the stand by BOS instance without affecting system files on the active BOS.

4>     Export Health Dashboard

Users can now export the entire Health Dashboard in HTML format.

5>     Deprecation on Red Hat Network download option

Red Hat transitioned from its Red Hat Network hosted interface to the new Red Hat Subscription Management interface. To enable customers to continue patching seamlessly on Red Hat Enterprise Linux, we have deprecated the RHN download option. All patching on RHEL targets must be performed using the CDN download option (it is selected by default when creating a patch catalog).

6>     Message to review the number of target servers

To prevent any unplanned outages in your data centre, BSA now allows you to review the number of servers targeted by a job.

7>     Database cleanup and BLCLI enhancements


This is a pretty easy one, so I thought I'd do a little more with scripting to make it more fun. I have a Batch Job and I want to get all the member job keys/names/paths/etc. I don't see any released blcli commands that will do it, so I'll look in the Unreleased blcli commands and documentation. Digging around in the BatchJob namespace (of course), I don't see any list* or get* commands that would do it, though getMemberJobCountByJobKey looks interesting for later. I did find findAllSubJobHeadersByBatchJobKey, which looks promising. I know about headers from past posts in this series. That gives me something like this:

BATCH_JOB="/Workspace/BatchJobs/MyBatchJob"
blcli_execute BatchJob getDBKeyByGroupAndName "${BATCH_JOB%/*}" "${BATCH_JOB##*/}"
blcli_storeenv batchJobKey
blcli_execute BatchJob findAllSubJobHeadersByBatchJobKey ${batchJobKey}
blcli_execute SJobHeader getDBKey
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal memberJobKeys
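A quick side note on the ${BATCH_JOB%/*} and ${BATCH_JOB##*/} expansions used above: they split a full object path into its group path and object name.

```shell
# split a full job path into group path and job name, as the snippet above does
BATCH_JOB="/Workspace/BatchJobs/MyBatchJob"
echo "${BATCH_JOB%/*}"   # everything before the last '/': the group path
echo "${BATCH_JOB##*/}"  # everything after the last '/': the job name
```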

Easy-peasy. What if one of those members is a Batch Job and I want the member jobs in there? I can probably script together something like that, but first I should create a test Batch Job to work this out. I want something like:

Batch Job1

    -> Member Job1a

    -> Member Job1b

    -> Member Job Batch2

                 -> Member Job2a

                 -> Member Job2b

     -> Member Job1c

I could just see what jobs I already have in my test environment and use the GUI to create the above, but that's no fun. Why not script out the creation and then script reading it all back? Then I'll have a couple of scripts I can take between environments to make sure it works the same across BSA versions and such.

 

I'll make some Update Server Properties Jobs as the members, since they don't need a Depot Object, just to make this a little shorter. For that, UpdateServerPropertyJob.createUpdateServerPropertyJobWithTargetServer should do nicely. I need to:

  • Create my USP jobs
  • Create the member batch jobs
  • Add some of the USPs to the member batch jobs
  • Create the parent batch job
  • Add the USP and member batches to that job

 

Then another script based on my snippet above will recurse through what I created and spit out the list of jobs in each batch.

 

Below is the script to create the nested Batch Job I'm going to test my dump script against. This part is really just an exercise to show how you can quickly mock up some test data to do actual work on with the blcli, instead of doing it by hand in the BSA GUI. Since this required running the same commands a few times, I broke some of the actions out into functions in the script.

#!/bin/nsh
blcli_setjvmoption -Dcom.bladelogic.cli.execute.quietmode.enabled=true
blcli_setoption serviceProfileName defaultProfile
blcli_setoption roleName BLAdmins
jobGroup="/Workspace/BatchJobTest"
jobGroupExists="false"
batchJob="TestBatch"
targetServer="blapp891.local"
# Check if the group is there and make it if not
blcli_execute JobGroup groupExists "${jobGroup}"
blcli_storelocal jobGroupExists
if [[ "${jobGroupExists}" = "false" ]]
        then
        blcli_execute JobGroup createGroupWithParentName "${jobGroup##*/}" "${jobGroup%/*}"
fi
blcli_execute JobGroup groupNameToDBKey "${jobGroup}"
blcli_storelocal jobGroupKey
blcli_execute Utility convertModelType UPDATE_SERVER_PROPERTY_JOB
blcli_storelocal uspJobTypeId
blcli_execute Utility convertModelType BATCH_JOB
blcli_storelocal batchJobTypeId
# Make some USP Jobs
for i in {1..3}
        do
        for j in a b c
                do
                blcli_execute Job jobExistsByTypeGroupAndName ${uspJobTypeId} ${jobGroupKey} "USP-${i}${j}"
                blcli_storelocal jobExists
                if [[ "${jobExists}" = "false" ]]
                        then
                        blcli_execute UpdateServerPropertyJob createUpdateServerPropertyJobWithTargetServer "USP-${i}${j}" "${jobGroup}" "${targetServer}"
                fi
        done
done
createBatchJobWithUSP()
{
   local batchJob="${1}"
   local uspJob="${2}"
   local jobGroupId="${3}"

   blcli_execute UpdateServerPropertyJob getDBKeyByGroupAndName "${uspJob%/*}" "${uspJob##*/}"
   blcli_storelocal uspKey
   blcli_execute Job jobExistsByTypeGroupAndName ${batchJobTypeId} ${jobGroupKey} "${batchJob}"
   blcli_storelocal jobExists
   # delete the batch job if it already exists, then re-create it
   if [[ "${jobExists}" = "true" ]]
        then
        blcli_execute BatchJob deleteJobByGroupAndName "${jobGroup}" "${batchJob}"
   fi
   blcli_execute BatchJob createBatchJob "${batchJob}" ${jobGroupId} ${uspKey} true false false
   blcli_storeenv batchJobKey
}
addMemberJob()
{
   local type="${1}"
   local memberJob="${2}"
   local batchJob="${3}"

   blcli_execute ${type} getDBKeyByGroupAndName "${memberJob%/*}" "${memberJob##*/}"
   blcli_storelocal memberJobKey
   blcli_execute BatchJob getDBKeyByGroupAndName "${batchJob%/*}" "${batchJob##*/}"
   blcli_storelocal batchJobKey
   blcli_execute BatchJob addMemberJobByJobKey ${batchJobKey} ${memberJobKey}
}
blcli_execute JobGroup groupNameToId "${jobGroup}"
blcli_storelocal jobGroupId
# put the 2 and 3 jobs in batch jobs
for i in {2..3}
        do
        createBatchJobWithUSP "${batchJob}-${i}" "${jobGroup}/USP-${i}a" ${jobGroupId}
        for j in b c
                do
                addMemberJob UpdateServerPropertyJob "${jobGroup}/USP-${i}${j}" "${jobGroup}/${batchJob}-${i}"
        done
done
createBatchJobWithUSP "${batchJob}-1" "${jobGroup}/USP-1a" ${jobGroupId}
addMemberJob BatchJob "${jobGroup}/${batchJob}-2" "${jobGroup}/${batchJob}-1"
addMemberJob UpdateServerPropertyJob "${jobGroup}/USP-1b" "${jobGroup}/${batchJob}-1"
addMemberJob BatchJob "${jobGroup}/${batchJob}-2" "${jobGroup}/${batchJob}-1"
addMemberJob UpdateServerPropertyJob "${jobGroup}/USP-1c" "${jobGroup}/${batchJob}-1"

That gives me a Batch Job that looks like:

That seems like a lot of work just to make a job to test with. In this case, yeah, it probably was. Is the script perfect? Could I add more error handling? Could it be more elegant? It took me about 15 minutes to write and test, and it works. I could have made the jobs from scratch in about five minutes by hand in the BSA GUI. But let's say I need to do this again in another environment, and then another. Or what I'm testing requires me to have a lot of jobs and then delete them. That's when doing some automation to set up your test data pays off.

 

Now let's start the real work: listing out the Batch Job and its members. I'm going to forget I wrote this script and pretend I only know the Batch Job that I care about: /Workspace/BatchJobTest/TestBatch-1.

Now I need to:

  • Get my Batch Job
  • List all the Jobs in it
  • See if any of those are in turn Batch Jobs and, if so, loop back to the first step

 

That loop part probably means another function. I already have the bit to list all the member jobs from the snippet at the very beginning; I just need to handle the recursion:

#!/bin/nsh
blcli_setjvmoption -Dcom.bladelogic.cli.execute.quietmode.enabled=true
blcli_setoption serviceProfileName defaultProfile
blcli_setoption roleName BLAdmins
if [[ ${#@} -ne 1 ]]
        then
        echo "You must pass the Batch Job"
        exit 1
fi
batchJob="${1}"
getBatchJobMembers()
{
   local batchJob="${1}"
   local batchJobKey=
   local memberKeys=
   local hasParent="${2}"
   blcli_execute BatchJob getDBKeyByGroupAndName "${batchJob%/*}" "${batchJob##*/}"
   blcli_storelocal batchJobKey
   blcli_execute BatchJob findAllSubJobHeadersByBatchJobKey ${batchJobKey}
   blcli_execute SJobHeader getDBKey
   blcli_execute Utility setTargetObject
   blcli_execute Utility listPrint
   blcli_storelocal memberKeys
   while read memberKey
     do
     blcli_execute Job findByDBKey ${memberKey}
     blcli_execute Job getType
     blcli_storelocal jobTypeId
     blcli_execute Job getName
     blcli_storelocal jobName
     blcli_execute Job getGroupId
     blcli_storelocal jobGroupId
     blcli_execute Group getQualifiedGroupName 5005 ${jobGroupId}
     blcli_storeenv jobGroupPath
     if [[ "${hasParent}" = "true" ]]
         then
         echo "--${jobGroupPath},${jobName},${jobTypeId}"
     else
         echo "${jobGroupPath},${jobName},${jobTypeId}"
     fi
     if [[ ${jobTypeId} = 200 ]]
        then
        getBatchJobMembers "${jobGroupPath}/${jobName}" true
     fi
   done <<< "$(awk 'NF' <<< "${memberKeys}")"
}
getBatchJobMembers "${batchJob}" false

That gives me output like:

/Workspace/BatchJobTest,USP-1a,1017

/Workspace/BatchJobTest,TestBatch-2,200

--/Workspace/BatchJobTest,USP-2a,1017

--/Workspace/BatchJobTest,USP-2b,1017

--/Workspace/BatchJobTest,USP-2c,1017

/Workspace/BatchJobTest,USP-1b,1017

/Workspace/BatchJobTest,TestBatch-2,200

--/Workspace/BatchJobTest,USP-2a,1017

--/Workspace/BatchJobTest,USP-2b,1017

--/Workspace/BatchJobTest,USP-2c,1017

/Workspace/BatchJobTest,USP-1c,1017

Of course, once you have the DBKey of the member job you can do whatever you want there, not just list the group path and name. 

 

That's it - we have all the member jobs along with some information about them from the Batch Job.
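One idiom in that loop is worth calling out: awk 'NF' drops blank lines from the stored key list before the while read loop, so empty keys never reach Job.findByDBKey. A standalone sketch of just that piece, with made-up key values:

```shell
# awk 'NF' prints only lines with at least one field, i.e. non-blank lines
memberKeys='DBKey-one

DBKey-two
'
while read memberKey; do
  echo "got: ${memberKey}"
done <<< "$(awk 'NF' <<< "${memberKeys}")"
```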


The listallthethings post you've all been waiting for right ?  This is the one I see come up most frequently on communities - I need to list out catalogs and all the stuff in them and do something to those objects or make some groups in the catalog.  The questions come up probably because there is not a released blcli namespace for Patch Smart Groups and no one is sure if a Catalog is a DepotObject or Group or turtle.  I mean it has a Job associated with it and Jobs are normally associated with DepotObjects.  But the Job is not in the Jobs workspace, it's in the catalog itself and a Catalog also has a bunch of DepotObjects and Groups in it.  So I'm going with turtle.

 

But a catalog is really a group.  If you want to be sure look for CATALOG and GROUP in the Object Type Ids list. Like RED_HAT_CATALOG_GROUP.  The requests I've seen are typically something like: find all my catalogs, find all the <os type> catalogs, find and run the CUJ, list the patch smart groups in a catalog, list the conditions of a patch smart group in a catalog, create a patch catalog smart group, list all the patches/errata/bulletins/etc in a catalog, set a property on the objects in the catalog if they meet some condition and delete all the stuff in my catalog.  So let's just dig right in.  Of course we have our trusty Unreleased blcli commands and documentation at the ready.

 

List all the catalogs of a specific OS type and do something to the CUJ

I'll combine these two asks since, as you will see, it makes sense to handle them together.  Since a catalog is a group, it made sense to me to start in the Group namespace.  I can look up the object type id number or use a blcli command to convert the model type name to the number, and then there's a command called Group.findAllByGroupType.  That looks like this:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute Group findAllByGroupType ${groupType}

That gets me output like this:

[/Workspace/Patch Catalogs/RedHat Linux Patch Catalog, Id = 2001137;Name = RedHat Linux Patch Catalog;Desc = This is an example Patching Catalog for Redhat Linux Patching., /Workspace/Patch Catalogs/Test1, Id = 2210400;Name = Eagle;Desc = , /Workspace/Patch Catalogs/RedHat 6 and 7 x86_64, Id = 2208200;Name = RedHat 6 and 7 x86_64;Desc = , /Workspace/Patch Catalogs/RedHat 7 Newest, Id = 2208000;Name = RedHat 7 Newest;Desc = ]

That looks promising.  Group.findAllByGroupType says it outputs a list, so I could run a Utility.listPrint and then do some text processing on each line.  I'm a big fan of text processing but let's see what else we can do.  I tried to run some of the Group.get* commands on the list and they all threw errors.  Maybe I can iterate through the list like in the other examples.  That doesn't work so well because each item in the list is a different element of the catalog - one is the path, the next is the Id = xxx and so on.  Maybe there's a command in Group that will act on the list?  Nope - I'm back to text processing.  That's ok - I can just dump the list and look for the lines that have the Id = or the path.
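For instance, pulling the group ids out of that dump is a one-liner - this is just grep/awk on the saved output, not a blcli call (the dump below is shortened from the one shown above):

```shell
# The single-line list dump that Group.findAllByGroupType returns
dump='[/Workspace/Patch Catalogs/Test1, Id = 2210400;Name = Eagle;Desc = , /Workspace/Patch Catalogs/RedHat 7 Newest, Id = 2208000;Name = RedHat 7 Newest;Desc = ]'

# Grab every "Id = <number>" token, then keep just the number
grep -o 'Id = [0-9]*' <<< "${dump}" | awk '{print $3}'
# prints: 2210400
#         2208000
```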

 

That's not very elegant but it works.  I look around a little more in my Unreleased blcli commands and documentation and I find there's a PatchCatalog namespace.  There are also some OS-specific namespaces like RedhatPatchCatalog, but I don't see much in those that's useful for now.  PatchCatalog has a set of list*Catalogs commands that look promising, and after a listPrint the output is exactly what I want:

blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint

/Workspace/Patch Catalogs/RedHat Linux Patch Catalog
/Workspace/Patch Catalogs/Test1
/Workspace/Patch Catalogs/RedHat 6 and 7 x86_64
/Workspace/Patch Catalogs/RedHat 7 Newest

Now I know the path to the group and the group type.  I can get the DBKey with PatchCatalog.getCatalogDBKeyByFullyQualifiedCatalogName or the id with PatchCatalog.getCatalogIdByFullyQualifiedCatalogName.  I can also get the Catalog Update Job DBKey with PatchCatalog.getCUJDBKeyByFullyQualifiedCatalogName:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal catalogs
while read catalog
     do
     blcli_execute PatchCatalog getCatalogDBKeyByFullyQualifiedCatalogName "${catalog}" ${groupType}
     blcli_storelocal catalogKey
     echo "DBKey: ${catalogKey}"
     blcli_execute PatchCatalog getCUJDBKeyByFullyQualifiedCatalogName REDHAT "${catalog}"
     blcli_storelocal cujKey
     echo "CUJKey: ${cujKey}"
done <<< "$(awk 'NF' <<< "${catalogs}")"

The while handles the spaces in the catalog paths (where a for would not).  There are some other commands that could do the same thing: PatchCatalog.getRedhatCatalogUpdateJobDBKey (and Windows, etc).  That's often the case - use whatever gets you the output you need.  There's no right answer; if it works, it works.  Cool, now I can go off and run the CUJs, or update/remove/add a schedule, and I have the Catalog key and path if I need them.  For example, to feed into isCatalogLastUpdateSuccessful:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal catalogs
while read catalog
     do
     blcli_execute PatchCatalog getCatalogIdByFullyQualifiedCatalogName "${catalog}" ${groupType}
     blcli_storelocal catalogGroupId
     blcli_execute PatchCatalog isCatalogLastUpdateSuccessful ${catalogGroupId}
     blcli_storelocal isCatalogLastUpdateSuccessful
     echo "${catalog}:${isCatalogLastUpdateSuccessful}"
done  <<< "$(awk 'NF' <<< "${catalogs}")"

That's a quick and easy way to populate a dashboard or otherwise get a quick view of your catalog states.  If the run failed you could then get the CUJ key, dump the last job run log information, and email it off.  I know that's jumping from 0 to 60 with respect to using the blcli, but I learn by example so I'll teach by example.  Suffice it to say that at some point in my blcli usage I figured out how to do each of those things - find the latest run key, use that to get job run information, send an email - by poking around in the released/unreleased command docs, trying various commands in my test env, and seeing what happened.  Here we go:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal catalogs

while read catalog
     do
     blcli_execute PatchCatalog getCatalogIdByFullyQualifiedCatalogName "${catalog}" ${groupType}
     blcli_storelocal catalogGroupId
     blcli_execute PatchCatalog isCatalogLastUpdateSuccessful ${catalogGroupId}
     blcli_storelocal isCatalogLastUpdateSuccessful
     if [[ "${isCatalogLastUpdateSuccessful}" = "false" ]]
        then
        blcli_execute PatchCatalog getCUJDBKeyByFullyQualifiedCatalogName REDHAT "${catalog}"
        blcli_storelocal cujKey
        # false is also returned if there are no runs
        blcli_execute JobRun findRunCountByJobKey ${cujKey}
        blcli_storelocal jobRunCount
        if [[ ${jobRunCount} -gt 0 ]]
                then
                blcli_execute JobRun findLastRunKeyByJobKeyIgnoreVersion ${cujKey}
                blcli_storelocal cujRunKey
                # get the start time of the cuj run and the job name for our log/email
                blcli_execute JobRun findByJobRunKey ${cujRunKey}
                blcli_execute JobRun getStartTime
                blcli_storelocal startTime
                blcli_execute JobRun getJobName
                blcli_storelocal jobName
                blcli_execute JobRun jobRunKeyToJobRunId ${cujRunKey}
                blcli_storelocal cujRunId
                blcli_execute JobRun getLogItemsByJobRunId ${cujRunId}
                blcli_storelocal cujLogItems
                echo "${cujLogItems}" > "/tmp/${catalog##*/}-${cujRunId}.log"
                blcli_execute Email sendMailWithAttachment appserver@example.com user@example.com "Log for failed CUJ: ${jobName} at: ${startTime}" "Please review attached log for errors" "/tmp" "${catalog##*/}-${cujRunId}.log"
        else
                echo "Catalog ${catalog} has had ${jobRunCount} runs..."
        fi
     fi
done  <<< "$(awk 'NF' <<< "${catalogs}")"
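A side note on the while read ... <<< pattern used in all of these loops: it's what keeps the catalog paths with spaces intact.  A quick standalone demo in plain bash (no blcli involved) shows the difference:

```shell
# Two catalog paths, one per line, both containing spaces
catalogs='/Workspace/Patch Catalogs/Test1
/Workspace/Patch Catalogs/RedHat 7 Newest'

# A for over the unquoted variable word-splits on every space and
# newline - six mangled tokens instead of two paths
for catalog in ${catalogs}; do echo "for: ${catalog}"; done

# while read consumes one full line at a time, spaces intact
while read catalog; do echo "while: ${catalog}"; done <<< "${catalogs}"
```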

 

As a bonus, let's do all of that, but for all the catalog types, using an associative array to drive the loop.  I'm just showing the looping logic and what needs to be parameterized; you can fill in the rest of the script from above.

typeset -A catalogTypeList
catalogTypeList=(listAixPatchCatalogs AIX_CATALOG_GROUP listDebianPatchCatalogs UBUNTU_CATALOG_GROUP listRedhatPatchCatalogs RED_HAT_CATALOG_GROUP listSolarisPatchCatalogs SOLARIS_CATALOG_GROUP listWindowsPatchCatalogs WINDOWS_CATALOG_GROUP)
for key in ${(k)catalogTypeList}
        do
        cliCall="${key}"
        modelType="${catalogTypeList[${key}]}"
        blcli_execute Utility convertModelType ${modelType}
        blcli_storeenv groupType
        blcli_execute PatchCatalog ${cliCall}
        # [ ...... Rest of script above .....]
done
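One caveat: that ${(k)...} key expansion is zsh-only (the NSH shell is zsh-based).  If you ever port the loop to plain bash, the associative-array syntax differs a little - here's a sketch with just two of the catalog types, with an echo standing in for the blcli calls:

```shell
# The same map-driven loop in plain bash: declare -A and the
# "${!map[@]}" key expansion replace zsh's typeset -A / ${(k)map}
declare -A catalogTypeList=(
    [listRedhatPatchCatalogs]=RED_HAT_CATALOG_GROUP
    [listWindowsPatchCatalogs]=WINDOWS_CATALOG_GROUP
)
for cliCall in "${!catalogTypeList[@]}"
    do
    modelType="${catalogTypeList[${cliCall}]}"
    echo "${cliCall} -> ${modelType}"
    # ... the blcli calls from the script above would go here ...
done
```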

That was more than I expected to cover.  The rest of the cases will be covered in subsequent posts.
