
Server Automation


It can be useful to manually run yum or blyum from the command line with the repodata and include lists, to get a better picture of what happens during analysis, to inspect the repodata, or to perform other troubleshooting steps.  The following steps can be taken to accomplish this:

 

For versions earlier than 8.9.01

Run the Patching Job with the DEBUG_MODE_ENABLED property set to true.

Gather the per-host generated data from the application server. For example, if the Job Name is 'RedHat Analysis' and it was run against the target 'red6-88.example.com' on Jun 11, you should see a directory:

<install dir>/NSH/tmp/debug/application_server/RedHat Analysis/Sat Jun 11 09-26-57 EDT 2016/red6-88.example.com

that contains:

analysis_err.log analysis_log.log analysis_res.log installed_rpms.log repo repodata.tar.gz yum_analysis.res yum.conf yum.err.log yum.lst

 

Copy the entire red6-88.example.com directory back to the target system (or any system you want to test these files on) into /tmp or some other location.

 

For 8.9.01 and later

The files mentioned above are kept in <rscd install>/Transactions/analysis_archive on the target system.  The three most recent runs should be present.

 

 

 

Once you've located the analysis files and placed them on the target system, edit yum.conf so that cachedir and reposdir match the current directory path.  Following the pre-8.9.01 example where we copied the directory into /tmp:

cachedir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com
reposdir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com

are changed to match the new path. If you copied the red6-88.example.com directory to /tmp, then:

cachedir=//tmp/red6-88.example.com
reposdir=//tmp/red6-88.example.com
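If you prefer not to edit by hand, the same change can be sketched with sed (the fix_yum_conf helper name is an illustrative assumption, not part of the product; adjust the path to wherever you copied the directory):

```shell
# Hypothetical helper: point cachedir/reposdir in a copied yum.conf at the
# directory the debug files now live in. A .bak backup of the original
# yum.conf is kept alongside.
fix_yum_conf() {
    # $1 = path to yum.conf, $2 = new cache/repos directory
    sed -i.bak -e "s|^cachedir=.*|cachedir=$2|" \
               -e "s|^reposdir=.*|reposdir=$2|" "$1"
}
```

For the example above that would be: fix_yum_conf /tmp/red6-88.example.com/yum.conf //tmp/red6-88.example.com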

then from within that directory you can run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update

if an include list was used you can do:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat rpm-includes.lst.old`

or

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat parsed-include.lst.old`

if the parsed list exists.  The parsed include list will not contain any rpms that are both installed on the target and in the include list.  parsed-include.lst was added in recent BSA versions to handle the situation where yum would update an rpm to the latest one in the catalog, instead of leaving it alone, when the include list contains the exact version of an rpm already installed on the system.
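The effect of the parsed list can be sketched as a simple set difference. This is an illustration of the intent only; BSA generates the real file internally, and parse_include_list is a hypothetical name:

```shell
# Emit the include-list entries that are NOT already installed on the
# target: the include list minus the installed rpm list (one rpm per
# line in each file). This mirrors what parsed-include.lst represents.
parse_include_list() {
    # $1 = include list file, $2 = installed rpm list file
    comm -23 <(sort "$1") <(sort "$2")
}
```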

 

If it's a RedHat 7 target, the native yum is used, so use yum instead of blyum.

yum -c yum.conf -C update

 

You can also use the above process to copy the metadata from the target system to another system and run queries against it, or run analysis with the same metadata and options against a test system.  For example, you could run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C search <rpmname>

or

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C info <rpmname>

 

to see if some rpm is in the metadata or not.


BMC Software is alerting customers who use BMC Server Automation (BSA) for managing Unix/Linux Targets to this Knowledge Article which highlights vulnerability CVE-2018-9310 in the Magnicomp Sysinfo solution used by BSA RSCD Agent to capture Hardware Information. The Knowledge Article contains the details of the issue and also how to obtain and deploy the fix.


BMC Software is alerting customers who use BMC Server Automation (BSA) for Windows Patch Management that, due to an upcoming EOL of the version of the Ivanti Shavlik SDK used by BSA, action must be taken before December, 2018 to ensure continued functionality of BSA Windows Patch Management beyond that date.

 

Please see this Flash Bulletin in the BSA Documentation for full details:


If you want to increase or decrease the logging level for the appserver, there's a pretty easy way to accomplish this: edit the appserver's log4j.properties file. There are already a number of logging class entries in there, and you may want to increase or decrease logging on some classes that are not listed.  There's a fairly simple way to figure out which classes are associated with which log entries: add the class (category) to the log output pattern.  In log4j.properties, near the top, look for these two lines:

log4j.appender.C.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
log4j.appender.R.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n

 

We can add the class (category) by adding a %c, and we'll put it in brackets so it looks like the rest of the log:

log4j.appender.C.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%c] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
log4j.appender.R.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%c] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n

 

After about a minute, and without restarting the appserver service, you will see the new log entries, like:

[19 Mar 2018 08:57:35,073] [Scheduled-System-Tasks-Thread-10] [INFO] [com.bladelogic.om.infra.app.service.appserver.AppserverMemoryMonitorTask] [System:System:] [Memory Monitor] Total JVM (B): 625135616,Free JVM (B): 453789168,Used JVM (B): 171346448,VSize (B): 8967577600,RSS (B): 1027321856,Used File Descriptors: 300,Used Work Item Threads: 0/100,Used NSH Proxy Threads: 0/15,Used Client Connections: 3/200,DB Client-Connection-Pool: 2/2/0/200/150/50,DB Job-Connection-Pool: 2/2/0/200/150/50,DB General-Connection-Pool: 1/1/0/200/150/50
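With %c in place, the class is the fourth bracketed field of each entry, so you can harvest candidate class names straight from the log. A small sketch (the helper name and log file argument are illustrative; the field position assumes the pattern shown above):

```shell
# List the distinct logger classes in an appserver log, assuming %c was
# added as the 4th bracketed field of the conversion pattern. Splitting
# on '[' and ']' puts the class in awk field 8.
list_log_classes() {
    awk -F'[][]' '{print $8}' "$1" | sort -u
}
```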

 

generally we should then be able to add entries like:

log4j.logger.<class>=<level>

to the log4j.properties file.

 

For example, if I don't want to see info messages from compliance job runs, I would turn on the class logging and then run a compliance job.  I'd see some entries like:

[19 Mar 2018 08:57:36,903] [Job-Execution-2] [INFO] [com.bladelogic.om.infra.compliance.job.ComplianceJobExecutor] [BLAdmin:BLAdmins:] [Compliance] --JobRun-2000915,3-2052417-- Started running the job 'CisWin2012R2ComplianceJob' with priority 'NORMAL' on application server 'blapp89.example.com'(2,000,000)

I then add:

log4j.logger.com.bladelogic.om.infra.compliance.job.ComplianceJobExecutor=ERROR

to my log4j.properties file, wait a couple of minutes, and then re-run the job.  The INFO message should no longer appear.  Conversely, I may be able to get more information out of this class by setting it to DEBUG, but that depends on whether any debug logging is built into the class, which is not guaranteed.

 

One thing to note: if you want to change the logging from DEBUG back to INFO, or ERROR back to INFO, you must alter the logger line; you can't simply delete the line from the log4j.properties file.

 

If you elect to do the above to reduce logging, make sure that when you interact with BMC Support you make it clear that you have altered the logging levels; during troubleshooting we may be looking for log messages you have excluded, and we could otherwise spend a lot of time figuring that out.

 

Using the above to enable debug logging can be useful while troubleshooting issues with the application server.  The nuclear option, of course, is to change the root logging level (e.g. from INFO to DEBUG):

log4j.rootLogger=INFO, R, C

and if that is done you will likely need to increase the size and number of rolled logs to handle the additional information being written to the files:

# Set the max size of the file

log4j.appender.R.MaxFileSize=20000KB

#Set the number of backup files to keep when rolling over the main file

log4j.appender.R.MaxBackupIndex=5

It's much better to use the above method to figure out which class you want more (or less) logging on, or, if you need debug information for a particular job run, to use the DEBUG_MODE_ENABLED property on that job.

 

Generally you should not have to alter the logger settings during normal operation.


We are excited to introduce you to our new YouTube channel “BladeLogic Automation” for "How-to" videos, intended to help with a specific task or feature of products in the BladeLogic Automation suite (BSA, BDSSA, BDA and BNA).

 

 

Highlights:

 

Focused content: The content of this channel will focus on providing technical videos for the Server Automation, Decision Support for Server Automation, Database Automation and Network Automation products. This content is developed by the BMC Support technical teams.

 

Featured Playlists: The channel will focus on technical content such as how-tos, troubleshooting guides and functional demonstrations. Similar features/functions and categories will have their own Playlists to reduce the time needed to find content.

 

Snippet of our Playlists:

Click to receive notifications when new technical content is posted on the channel and to get the most out of the products - BSA, BDSSA, BNA and BDA.

Refer to our "Playlists" to play all the videos organized by topic or a product.

Here are the current Playlists:

 

We welcome feedback from the community.


I am very pleased to announce that BSA 8.9 Service Pack 2 is now available.

 

Here are some highlights of the release:

1>     Compliance Support:

    1. DISA STIG content update for Windows 2016
    2. DISA STIG content update for RH 7

2>     Patch Analysis support for AIX Multibos

Users can now perform patch analysis on the standby instance of the Base Operating System (BOS), which is maintained in the same root volume group as the active BOS.

3>     Patching support for AIX on an Alternate Disk or Multiple Boot Operating System

Some versions of AIX have the capability of maintaining multiple instances of the Base Operating System (BOS). The additional instance can be maintained in the same root volume group (multibos) or on a separate disk in a separate root volume group (alternate disk). The user can boot any one instance of the BOS, which is called the active instance; an instance that has not been booted remains a standby instance. BMC Server Automation supports installation, maintenance, and technology-level updates on the standby BOS instance without affecting system files on the active BOS.

4>     Export Health Dashboard

Users can now export the entire Health Dashboard in HTML format.

5>     Deprecation on Red Hat Network download option

Red Hat transitioned from its Red Hat Network (RHN) hosted interface to the new Red Hat Subscription Management interface. To enable customers to seamlessly continue patching on Red Hat Enterprise Linux, we have deprecated the RHN download option. All patching on RHEL targets must be performed using the CDN download option (it is selected by default when creating a patch catalog).

6>     Message to review the number of target servers

To prevent unplanned outages in your data centre, BSA now allows you to review the number of servers targeted by a job.

7>     Database cleanup and BLCLI enhancements


This is a pretty easy one, so I thought I'd do a little more with scripting to make it more fun.  I have a Batch Job and I want to get all the member job keys/names/paths/etc.  I don't see any released blcli commands that will do it, so I'll look in the Unreleased blcli commands and documentation.  Digging around in the BatchJob namespace (of course) I don't see any list* or get* commands that would do it, though getMemberJobCountByJobKey looks interesting for later.  I did find findAllSubJobHeadersByBatchJobKey, which looks promising.  I know about headers from past posts in this series.  That gives me something like this:

BATCH_JOB="/Workspace/BatchJobs/MyBatchJob"
blcli_execute BatchJob getDBKeyByGroupAndName "${BATCH_JOB%/*}" "${BATCH_JOB##*/}"
blcli_storeenv batchJobKey
blcli_execute BatchJob findAllSubJobHeadersByBatchJobKey ${batchJobKey}
blcli_execute SJobHeader getDBKey
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal memberJobKeys
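The group/name split in that snippet is plain shell parameter expansion, which these scripts lean on throughout:

```shell
# ${var%/*} strips the shortest trailing /component (leaving the group
# path); ${var##*/} strips everything up to the last / (leaving the name).
BATCH_JOB="/Workspace/BatchJobs/MyBatchJob"
group="${BATCH_JOB%/*}"   # /Workspace/BatchJobs
name="${BATCH_JOB##*/}"   # MyBatchJob
```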

Easy-peasy.  What if one of those members is a Batch Job and I want the member jobs in there?  I can probably script something like that, but first I should create a test batch job to work this out.  I want something like:

Batch Job1

    -> Member Job1a

    -> Member Job1b

    -> Member Job Batch2

                 -> Member Job2a

                 -> Member Job2b

     -> Member Job1c

I could just see what jobs I already have in my test environment and use the GUI to create the above, but that's no fun.  Why not script out the creation and then script reading it all back?  Then I'll have a couple of scripts I can take between environments to make sure it all works the same across BSA versions and such.

 

I'll make some Update Server Properties Jobs as the members, since they don't need a Depot Object, just to make this a little shorter.  For that, UpdateServerPropertyJob.createUpdateServerPropertyJobWithTargetServer should do nicely.  I need to:

Create my USP jobs

Create the member batch jobs

Add some of the USPs to the member batch jobs

Create the parent batch job

Add the USP and member batches to that job

 

Then another script based on my snippet above will recurse through what I created and spit out the list of jobs in each batch.

 

Below is the script to create the nested Batch Job I'm going to test my dump script against.  This part is really just an exercise to show how you can quickly mock up some test data to do actual work on with the blcli, instead of doing it by hand in the BSA GUI.  Since this requires running the same commands a few times, I broke some of the actions out into functions in the script.

#!/bin/nsh
blcli_setjvmoption -Dcom.bladelogic.cli.execute.quietmode.enabled=true
blcli_setoption serviceProfileName defaultProfile
blcli_setoption roleName BLAdmins
jobGroup="/Workspace/BatchJobTest"
jobGroupExists="false"
batchJob="TestBatch"
targetServer="blapp891.local"
# Check if the group is there and make it if not
blcli_execute JobGroup groupExists "${jobGroup}"
blcli_storelocal jobGroupExists
if [[ "${jobGroupExists}" = "false" ]]
        then
        blcli_execute JobGroup createGroupWithParentName "${jobGroup##*/}" "${jobGroup%/*}"
fi
blcli_execute JobGroup groupNameToDBKey "${jobGroup}"
blcli_storelocal jobGroupKey
blcli_execute Utility convertModelType UPDATE_SERVER_PROPERTY_JOB
blcli_storelocal uspJobTypeId
blcli_execute Utility convertModelType BATCH_JOB
blcli_storelocal batchJobTypeId
# Make some USP Jobs
for i in {1..3}
        do
        for j in a b c
                do
                blcli_execute Job jobExistsByTypeGroupAndName ${uspJobTypeId} ${jobGroupKey} "USP-${i}${j}"
                blcli_storelocal jobExists
                if [[ "${jobExists}" = "false" ]]
                        then
                        blcli_execute UpdateServerPropertyJob createUpdateServerPropertyJobWithTargetServer "USP-${i}${j}" "${jobGroup}" "${targetServer}"
                fi
        done
done
createBatchJobWithUSP()
{
   local batchJob="${1}"
   local uspJob="${2}"
   local jobGroupId="${3}"

   blcli_execute UpdateServerPropertyJob getDBKeyByGroupAndName "${uspJob%/*}" "${uspJob##*/}"
   blcli_storelocal uspKey
   blcli_execute Job jobExistsByTypeGroupAndName ${batchJobTypeId} ${jobGroupKey} "${batchJob}"
   blcli_storelocal jobExists
   # delete the batch job if it already exists, then re-create it
   if [[ "${jobExists}" = "true" ]]
        then
        blcli_execute BatchJob deleteJobByGroupAndName "${jobGroup}" "${batchJob}"
   fi
   blcli_execute BatchJob createBatchJob "${batchJob}" ${jobGroupId} ${uspKey} true false false
   blcli_storeenv batchJobKey
}
addMemberJob()
{
   local type="${1}"
   local memberJob="${2}"
   local batchJob="${3}"

   blcli_execute ${type} getDBKeyByGroupAndName "${memberJob%/*}" "${memberJob##*/}"
   blcli_storelocal memberJobKey
   blcli_execute BatchJob getDBKeyByGroupAndName "${batchJob%/*}" "${batchJob##*/}"
   blcli_storelocal batchJobKey
   blcli_execute BatchJob addMemberJobByJobKey ${batchJobKey} ${memberJobKey}
}
blcli_execute JobGroup groupNameToId "${jobGroup}"
blcli_storelocal jobGroupId
# put the 2 and 3 jobs in batch jobs
for i in {2..3}
        do
        createBatchJobWithUSP "${batchJob}-${i}" "${jobGroup}/USP-${i}a" ${jobGroupId}
        for j in b c
                do
                addMemberJob UpdateServerPropertyJob "${jobGroup}/USP-${i}${j}" "${jobGroup}/${batchJob}-${i}"
        done
done
createBatchJobWithUSP "${batchJob}-1" "${jobGroup}/USP-1a" ${jobGroupId}
addMemberJob BatchJob "${jobGroup}/${batchJob}-2" "${jobGroup}/${batchJob}-1"
addMemberJob UpdateServerPropertyJob "${jobGroup}/USP-1b" "${jobGroup}/${batchJob}-1"
addMemberJob BatchJob "${jobGroup}/${batchJob}-2" "${jobGroup}/${batchJob}-1"
addMemberJob UpdateServerPropertyJob "${jobGroup}/USP-1c" "${jobGroup}/${batchJob}-1"

That gives me a Batch Job that looks like:

That seems like a lot of work just to make a job for me to test with.  In this case, yeah, it probably was.  Is the script perfect?  Could I add more error handling?  Could it be more elegant?  It took me about 15 minutes to write and test, and it works.  I could have made the jobs from scratch in about five minutes by hand in the BSA GUI.  But let's say I need to do this again in another environment, and then another.  Or what I'm testing requires me to have a lot of jobs and then delete them.  That's when doing some automation to set up your test data pays off.

 

Now let's start the real work - listing out the Batch Job and its members.  I'm going to forget I wrote this script and pretend I only know the Batch Job I care about: /Workspace/BatchJobTest/TestBatch-1.

Now I need to:

Get my Batch Job

List all the Jobs in it

See if any of those are in turn batch jobs and if so loop back to the first step

 

That loop part probably means another function.  I already have the bit that lists all the member jobs, from the snippet at the very beginning; I just need to handle the recursion:

#!/bin/nsh
blcli_setjvmoption -Dcom.bladelogic.cli.execute.quietmode.enabled=true
blcli_setoption serviceProfileName defaultProfile
blcli_setoption roleName BLAdmins
if [[ ${#@} -ne 1 ]]
        then
        echo "You must pass the Batch Job"
        exit 1
fi
batchJob="${1}"
getBatchJobMembers()
{
   local batchJob="${1}"
   local batchJobKey=
   local memberKeys=
   local hasParent="${2}"
   blcli_execute BatchJob getDBKeyByGroupAndName "${batchJob%/*}" "${batchJob##*/}"
   blcli_storelocal batchJobKey
   blcli_execute BatchJob findAllSubJobHeadersByBatchJobKey ${batchJobKey}
   blcli_execute SJobHeader getDBKey
   blcli_execute Utility setTargetObject
   blcli_execute Utility listPrint
   blcli_storelocal memberKeys
   while read memberKey
     do
     blcli_execute Job findByDBKey ${memberKey}
     blcli_execute Job getType
     blcli_storelocal jobTypeId
     blcli_execute Job getName
     blcli_storelocal jobName
     blcli_execute Job getGroupId
     blcli_storelocal jobGroupId
     blcli_execute Group getQualifiedGroupName 5005 ${jobGroupId}
     blcli_storeenv jobGroupPath
     if [[ "${hasParent}" = "true" ]]
         then
         echo "--${jobGroupPath},${jobName},${jobTypeId}"
     else
         echo "${jobGroupPath},${jobName},${jobTypeId}"
     fi
     if [[ ${jobTypeId} = 200 ]]
        then
        getBatchJobMembers "${jobGroupPath}/${jobName}" true
     fi
   done <<< "$(awk 'NF' <<< "${memberKeys}")"
}
getBatchJobMembers "${batchJob}" false

That gives me output like:

/Workspace/BatchJobTest,USP-1a,1017

/Workspace/BatchJobTest,TestBatch-2,200

--/Workspace/BatchJobTest,USP-2a,1017

--/Workspace/BatchJobTest,USP-2b,1017

--/Workspace/BatchJobTest,USP-2c,1017

/Workspace/BatchJobTest,USP-1b,1017

/Workspace/BatchJobTest,TestBatch-2,200

--/Workspace/BatchJobTest,USP-2a,1017

--/Workspace/BatchJobTest,USP-2b,1017

--/Workspace/BatchJobTest,USP-2c,1017

/Workspace/BatchJobTest,USP-1c,1017

Of course, once you have the DBKey of the member job you can do whatever you want there, not just list the group path and name. 
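For instance, since the dump is comma-separated, quick summaries fall out easily. A sketch (count_by_type is a hypothetical helper and assumes the script's output was saved to a file):

```shell
# Count member jobs per type id in the dump output; the leading '--'
# nesting marker is stripped first so nested members are counted too.
count_by_type() {
    sed 's/^--//' "$1" | awk -F',' '{count[$3]++} END {for (t in count) print t, count[t]}'
}
```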

 

That's it - we have all the member jobs along with some information about them from the Batch Job.


The listallthethings post you've all been waiting for, right?  This is the one I see come up most frequently on communities - I need to list out catalogs and all the stuff in them, and do something to those objects, or make some groups in the catalog.  The questions probably come up because there is no released blcli namespace for Patch Smart Groups, and no one is sure if a Catalog is a DepotObject or a Group or a turtle.  I mean, it has a Job associated with it, and Jobs are normally associated with DepotObjects.  But the Job is not in the Jobs workspace, it's in the catalog itself, and a Catalog also has a bunch of DepotObjects and Groups in it.  So I'm going with turtle.

 

But a catalog is really a group.  If you want to be sure, look for CATALOG and GROUP in the Object Type Ids list, like RED_HAT_CATALOG_GROUP.  The requests I've seen are typically something like: find all my catalogs; find all the <os type> catalogs; find and run the CUJ; list the patch smart groups in a catalog; list the conditions of a patch smart group in a catalog; create a patch catalog smart group; list all the patches/errata/bulletins/etc in a catalog; set a property on the objects in the catalog if they meet some condition; and delete all the stuff in my catalog.  So let's just dig right in.  Of course we have our trusty Unreleased blcli commands and documentation at the ready.

 

List all the catalogs of a specific OS type and do something to the CUJ

I'll combine these two asks, since that makes sense, as you will see.  Since a catalog is a group, it made sense to me to start in the Group namespace.  I can look up the object type id number, or use a blcli command to convert the model type name to the number, and then there's a command called Group.findAllByGroupType.  That looks like this:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute Group findAllBytype ${groupType}

That gets me output like this:

[/Workspace/Patch Catalogs/RedHat Linux Patch Catalog, Id = 2001137;Name = RedHat Linux Patch Catalog;Desc = This is an example Patching Catalog for Redhat Linux Patching., /Workspace/Patch Catalogs/Test1, Id = 2210400;Name = Eagle;Desc = , /Workspace/Patch Catalogs/RedHat 6 and 7 x86_64, Id = 2208200;Name = RedHat 6 and 7 x86_64;Desc = , /Workspace/Patch Catalogs/RedHat 7 Newest, Id = 2208000;Name = RedHat 7 Newest;Desc = ]

That looks promising.  Group.findAllByGroupType says it outputs a list, so I could run a Utility.listPrint and then do some text processing on each line.  I'm a big fan of text processing, but let's see what else we can do.  I tried to run some of the Group.get* commands on the list and they all threw errors.  Maybe I can iterate through the list like in the other examples.  That doesn't work so well, because each item in the list is a different element of the catalog - one is the path, another is the Id = xxx, and so on.  Maybe there's a command in Group that will act on the list.  No luck - I'm back to text processing.  That's ok - I can just dump the list and look for the lines that have the Id = or the path.
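A sketch of that text processing, pulling just the catalog paths out of a dump like the one above (the groupList variable here just holds a shortened copy of that output):

```shell
# Each path in the list output runs from /Workspace up to the next comma,
# so a single grep -o picks them all out.
groupList='[/Workspace/Patch Catalogs/Test1, Id = 2210400;Name = Eagle;Desc = , /Workspace/Patch Catalogs/RedHat 7 Newest, Id = 2208000;Name = RedHat 7 Newest;Desc = ]'
grep -o '/Workspace[^,]*' <<< "${groupList}"
```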

 

That's not very elegant, but it works.  I look around a little more in my Unreleased blcli commands and documentation and I find there's a PatchCatalog namespace.  Some OS-specific namespaces like RedhatPatchCatalog are there too, but I don't see much in them that's useful for now.  PatchCatalog has a set of list*Catalogs commands that look promising.  After a listPrint the output looks good:

blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint

/Workspace/Patch Catalogs/RedHat Linux Patch Catalog

/Workspace/Patch Catalogs/Test1

/Workspace/Patch Catalogs/RedHat 6 and 7 x86_64

/Workspace/Patch Catalogs/RedHat 7 Newest

Now I know the path to the group and the group type.  I can get the DBKey with PatchCatalog.getCatalogDBKeyByFullyQualifiedCatalogName, or the id with PatchCatalog.getCatalogIdByFullyQualifiedCatalogName.  I can also get the Catalog Update Job DBKey with PatchCatalog.getCUJDBKeyByFullyQualifiedCatalogName:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal catalogs
while read catalog
     do
     blcli_execute PatchCatalog getCatalogDBKeyByFullyQualifiedCatalogName "${catalog}" ${groupType}
     blcli_storelocal catalogKey
     echo "DBKey: ${catalogKey}"
     blcli_execute PatchCatalog getCUJDBKeyByFullyQualifiedCatalogName REDHAT "${catalog}"
     blcli_storelocal cujKey
     echo "CUJKey: ${cujKey}"
done <<< "$(awk 'NF' <<< "${catalogs}")"
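One detail worth calling out: the awk 'NF' wrapper drops the blank lines that blcli list output tends to contain, so the read loop never sees an empty catalog name. In isolation:

```shell
# awk 'NF' prints only lines that have at least one field, i.e. non-blank
# lines, before the while loop consumes them.
catalogs='/Workspace/Patch Catalogs/A

/Workspace/Patch Catalogs/B'
while read catalog
     do
     echo "got: ${catalog}"
done <<< "$(awk 'NF' <<< "${catalogs}")"
```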

The while loop handles the spaces in the catalog paths (where a for loop would not).  There are some other commands that could do the same thing: PatchCatalog.getRedhatCatalogUpdateJobDBKey (and Windows, etc).  Often that's the case - use whatever gets you the output you need.  There's no right answer; if it works, it works.  Cool, now I can go off and run the CUJs, or update/remove/add a schedule, and I have the catalog key and path if I need them.  For example, to feed into isCatalogLastUpdateSuccessful:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal catalogs
while read catalog
     do
     blcli_execute PatchCatalog getCatalogIdByFullyQualifiedCatalogName "${catalog}" ${groupType}
     blcli_storelocal catalogGroupId
     blcli_execute PatchCatalog isCatalogLastUpdateSuccessful ${catalogGroupId}
     blcli_storelocal isCatalogLastUpdateSuccessful
     echo "${catalog}:${isCatalogLastUpdateSuccessful}"
done  <<< "$(awk 'NF' <<< "${catalogs}")"

That's a quick and easy way to populate a dashboard or otherwise get a quick view of your catalog states.  If the run failed, you could then get the CUJ key, dump the last job run log information, and email it off.  I know that's jumping from 0 to 60 with the blcli, but I learn by example, so I'll teach by example.  Suffice to say, at some point in my blcli usage I figured out how to do each of those things - find the latest run key, use that to get job run information, send an email - by poking around in the released/unreleased command docs, trying various commands in my test environment, and seeing what happened.  Here we go:

blcli_execute Utility convertModelType RED_HAT_CATALOG_GROUP
blcli_storeenv groupType
blcli_execute PatchCatalog listRedhatPatchCatalogs
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal catalogs

while read catalog
     do
     blcli_execute PatchCatalog getCatalogIdByFullyQualifiedCatalogName "${catalog}" ${groupType}
     blcli_storelocal catalogGroupId
     blcli_execute PatchCatalog isCatalogLastUpdateSuccessful ${catalogGroupId}
     blcli_storelocal isCatalogLastUpdateSuccessful
     if [[ "${isCatalogLastUpdateSuccessful}" = "false" ]]
        then
        blcli_execute PatchCatalog getCUJDBKeyByFullyQualifiedCatalogName REDHAT "${catalog}"
        blcli_storelocal cujKey
        # false is also returned if there are no runs
        blcli_execute JobRun findRunCountByJobKey ${cujKey}
        blcli_storelocal jobRunCount
        if [[ ${jobRunCount} -gt 0 ]]
                then
                blcli_execute JobRun findLastRunKeyByJobKeyIgnoreVersion ${cujKey}
                blcli_storelocal cujRunKey
                # get the start time of the cuj run and the job name for our log/email
                blcli_execute JobRun findByJobRunKey ${cujRunKey}
                blcli_execute JobRun getStartTime
                blcli_storelocal startTime
                blcli_execute JobRun getJobName
                blcli_storelocal jobName
                blcli_execute JobRun jobRunKeyToJobRunId ${cujRunKey}
                blcli_storelocal cujRunId
                blcli_execute JobRun getLogItemsByJobRunId ${cujRunId}
                blcli_storelocal cujLogItems
                echo "${cujLogItems}" > "/tmp/${catalog##*/}-${cujRunId}.log"
                blcli_execute Email sendMailWithAttachment appserver@example.com user@example.com "Log for failed CUJ: ${jobName} at: ${startTime}" "Please review attached log for errors" "/tmp" "${catalog##*/}-${cujRunId}.log"
        else
                echo "Catalog ${catalog} has had ${jobRunCount} runs..."
        fi
     fi
done  <<< "$(awk 'NF' <<< "${catalogs}")"

 

As a bonus, let's do all of that, but for all the catalog types, using an associative array to drive the loop (nsh is zsh-based, hence the ${(k)...} key syntax).  I'm just showing the looping logic and what needs to be parameterized; you can fill in the rest of the script.

typeset -A catalogTypeList
catalogTypeList=(listAixPatchCatalogs AIX_CATALOG_GROUP listDebianPatchCatalogs UBUNTU_CATALOG_GROUP listRedhatPatchCatalogs RED_HAT_CATALOG_GROUP listSolarisPatchCatalogs SOLARIS_CATALOG_GROUP listWindowsPatchCatalogs WINDOWS_CATALOG_GROUP)
for key in ${(k)catalogTypeList}
        do
        cliCall="${key}"
        modelType="${catalogTypeList[${key}]}"
        blcli_execute Utility convertModelType ${modelType}
        blcli_storeenv groupType
        blcli_execute PatchCatalog ${cliCall}
       [ ...... Rest of script above .....]       
 done 

 

 

That was more than I expected to cover.  The rest of the cases will move to subsequent posts.


Here's how to remediate Meltdown/Spectre with BSA.

 

(How to do this with SecOps Response Service / Threat Director is covered here: Remediate Meltdown and Spectre with SecOps Response.)

 

First, you'll need a current Windows or Linux Patch Catalog.  For the purposes of this discussion I'll focus on Windows, but doing this for Linux is just as easy: swap out the catalog type, and the Analysis Job is just the same.

 

My catalogs update at least every week, and this week, I've been updating every day, as there have been a number of changes to these patches.  I also get automated notifications from Ivanti/Shavlik to let me know when there are updated patches and vulnerabilities, and I'll sometimes update the catalogs right after I get one of those, in a week when we've got a new high-profile vulnerability.

 

As you can see, we got 9 new MS Bulletins in the last 3 days, and 133 new hotfixes, with updates to many more of each.  Great, we can spot check, but this should cover the latest fixes we've been reading about everywhere.

Now, let's go build a targeted policy, or patch smart group, that will let us focus our efforts on just these fixes.  Regular patching is being executed on a periodic basis, and we're all following best practices there already, right? 

 

Let's find our favorite production Windows Patch Catalog, right click New->Patch Catalog Smart Group

Let's give it a name (I called mine "Meltdown - Spectre Checks") and create it as a filter of Hotfix objects, where CVE_ID "is one of" a list of the three key CVE-IDs: CVE-2017-5754, -5753, and -5715.

 

The "is one of" operator makes it really easy to have a focused list in a single line in a Patch Smart Group.

 

Note that this Patch Smart Group now lists a range of useful patches for addressing this vulnerability:

 

Now we create a Patch Analysis Job like we would for any other task, and go find out what our exposure is:

 

 

Building a Patch Analysis Job (PAJ) is like any other, it needs a name, somewhere to live, and a set of servers to act on:

 

Note that the job automatically includes the Meltdown - Spectre Checks Patch Smart Group, since we created it via right-click:

Since we're highly motivated to close these as soon as possible, I'm going to ask BSA to create remediation objects (packages and a job to deploy them) from the start.

Note that I can use any existing Server Groups, including Smart Groups based on CMDB or other server properties, including Environment, Location, related Business Services, etc.  I can also pick individual servers, or populate a group or job based on an external list of hosts, like you might get from an existing change management request.

There are options to notify the relevant team, but I'm going to click "Execute Job Now" so it can get rolling.

 

Once our Patching Analysis completes, it should show whether any hosts are missing the hotfixes:

 

 

Now it has downloaded and packaged these hotfixes, and is ready to deploy at any time, including after an approved change control!

 

You can then either execute the deployment or schedule it, and afterward re-run patch analysis to observe that the patch is now applied and the vulnerability closed!

 

Check status in the Live Dashboards in real time, and for reporting purposes.

 

Until next time!

Bill Robinson

List All The Components

Posted by Bill Robinson Moderator Jan 11, 2018

Unlike the other object types we've looked at in listallthethings so far, there does not seem to be a Component.listAllByGroup type of blcli command available.  But the pattern is going to be similar as we've found with the other workspace objects.  With the other object types the listAllByGroup command contained calls to convert a group path to id, use that id with one of the findAll commands in the namespace and then get the name or DBKey of the returned objects.  We look in the Component namespace and see a findAllByComponentGroup which takes the ComponentGroup id.  Let's check the ComponentGroup namespace (or SmartComponentGroup) for the groupNameToId command and we see it's there.  Great.  The namespaces usually contain the basics like getName, getDBKey, getId.  Putting together the series of blcli commands:

 

blcli_execute SmartComponentGroup groupNameToId "/Workspace/All Components"
# or ComponentGroup.groupNameToId
blcli_storelocal componentGroupId
blcli_execute Component findAllByComponentGroup ${componentGroupId} false
blcli_execute Component getDBKey
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal componentKeys

 

That was pretty easy.  Components are like Servers in that they don't need to exist in a workspace folder (unlike Jobs, DepotObjects and Templates).  However, they are associated with Templates and Servers, and sometimes we want to list all the Components associated with a server regardless of template, or all the Components associated with a template.

We might also want to list whether a Component is "valid", meaning its discovery conditions are met.  A component becomes invalid if it was discovered on a server and then something changed on the server or in the discovery conditions so that the component (server) no longer meets them.  For example: you run discovery for one of the out-of-the-box compliance templates, like the DISA STIG, for your Windows 2008 servers.  A number of components are created, one for each of your 2008 servers.  Later, you upgrade a handful of the 2008 servers to 2012.  When you re-run discovery for the 2008 Windows STIG template, the components for the now-2012 servers should be flagged as invalid, because the discovery condition requires Windows 2008 and those servers are now Windows 2012.

We will also get the full name and the associated device (normally the component name includes the device).  Also remember that it's possible to have more than one component for a template on a single server - in the event you are using Components to model an application that has multiple instances on a single system, e.g. the BladeLogic Application Server or an Oracle Database.  The point here is to provide some examples of the information you might want to retrieve about a component in a script.  Let's get into the examples.

 

I'll look in my trusty Unreleased blcli commands and documentation reference in the Component namespace and see if there are some commands that look like they will do what I want.  I'm really just reading the names, looking at the inputs, and trying them to see if I get what I want.

 

First I want to pass a server name and get all the components and the associated templates.  I need something in the Server namespace to convert the name to an id or DBKey - yes, that exists.  Now in the Component namespace I need to see if there's something to list the components by server.  I see a couple: findAllLatestByDevice and findAllLatestDBKeysByDevice.  Those look pretty good.  The first one returns the component objects; I'd have to run a Component.getDBKey and then dump the list.  That's not too bad.  The second one returns the message "Command execution failed. java.lang.IllegalStateException: Must be on app server."  Well, I am on an appserver, so I'm not sure why that's happening.  Welcome to the unreleased commands.  So I'll use the first one.  I want to get the template; I see a Component.getTemplateKey that I can feed into some of the commands I used in List All The Component Templates to get the group path to the template, and I also see Component.getName and Component.isValid.  I'll script all that up:

blcli_execute Server getServerIdByName ${serverName}
blcli_storelocal serverId
blcli_execute Component findAllLatestByDevice ${serverId}
blcli_execute Utility storeTargetObject components
blcli_execute Utility listLength
blcli_storelocal listLength
for i in {0..$((${listLength}-1))}
        do
        blcli_execute Utility setTargetObject components
        blcli_execute Utility listItemSelect ${i}
        blcli_execute Utility setTargetObject
        blcli_execute Component getName
        blcli_storelocal componentName
        blcli_execute Component isValid
        blcli_storelocal isValid
        blcli_execute Component getTemplateKey
        blcli_storelocal templateKey
        blcli_execute Template findByDBKey ${templateKey}
        blcli_execute Template getName
        blcli_storelocal templateName
        blcli_execute Template getGroupId
        blcli_storelocal templateGroupId
        blcli_execute Group getQualifiedGroupName 5008 ${templateGroupId}
        blcli_storeenv templateGroupPath
        echo "${componentName},${isValid},${templateGroupPath}/${templateName}"
done
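A quick aside on the loop header: `for i in {0..$((${listLength}-1))}` works because NSH is zsh-based, and zsh expands variables inside brace ranges.  Plain bash and POSIX sh do not, so if you lift these snippets out of NSH, an arithmetic loop is the portable form - a minimal sketch with a stand-in list length:

```shell
# In NSH/zsh, for i in {0..$((listLength-1))} works as-is.
# Portable equivalent for bash/POSIX sh, using a stand-in list length:
listLength=3
indexes=""
i=0
while [ "$i" -lt "$listLength" ]; do
  indexes="$indexes$i "   # visits 0, 1, 2 - the same indexes the zsh range yields
  i=$((i + 1))
done
echo "$indexes"
```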

 

Now for the list of Components and their associated servers for a template.  I see a couple versions of Component.findAllByTemplate - one takes the template key, the other the template id.  Since I already know how to get the template key (Template.getDBKeyByGroupAndName), I'll use that one.  Then I'll follow pretty much the same pattern as above with whatever blcli calls I need to get the component and associated server info.

template="/Workspace/MyTemplates/TestTemplate1"
blcli_execute Template getDBKeyByGroupAndName "${template%/*}" "${template##*/}"
blcli_storelocal templateKey
blcli_execute Component findAllByTemplate ${templateKey}
blcli_execute Utility storeTargetObject components
blcli_execute Utility listLength
blcli_storelocal listLength
for i in {0..$((${listLength}-1))}
        do
        blcli_execute Utility setTargetObject components
        blcli_execute Utility listItemSelect ${i}
        blcli_execute Utility setTargetObject
        blcli_execute Component getName
        blcli_storelocal componentName
        blcli_execute Component isValid
        blcli_storelocal isValid
        blcli_execute Component getDeviceId
        blcli_storelocal deviceId
        blcli_execute Server getServerNameById ${deviceId}
        blcli_storelocal serverName
        echo "${componentName},${isValid},${serverName}"
done

 

Since this is starting to get repetitive (which is good - it means we can follow the same patterns between workspaces), I like to throw in something new here and there to keep it interesting.  At the top I have:

template="/Workspace/MyTemplates/TestTemplate1"

blcli_execute Template getDBKeyByGroupAndName "${template%/*}" "${template##*/}"

What's going on with that second line?  The Template.getDBKeyByGroupAndName command takes the template group and name as inputs.  I have a variable named template and I passed what looks like gibberish to my blcli command.  If you recall, NSH (what BSA uses for its command-line shell) is based on ZSH, a Unix shell like bash, tcsh, csh, etc.  What's happening here is parameter expansion.  From the zsh documentation:

${name#pattern}

${name##pattern}

If the pattern matches the beginning of the value of name, then substitute the value of name with the matched portion deleted; otherwise, just substitute the value of name. In the first form, the smallest matching pattern is preferred; in the second form, the largest matching pattern is preferred.

${name%pattern}

${name%%pattern}

If the pattern matches the end of the value of name, then substitute the value of name with the matched portion deleted; otherwise, just substitute the value of name. In the first form, the smallest matching pattern is preferred; in the second form, the largest matching pattern is preferred.

So the first one matches the /* in the string /Workspace/MyTemplates/TestTemplate1 from the end - just '/TestTemplate1' - removes that substring from the overall string, and returns just the folder path (/Workspace/MyTemplates).  The second one matches */ from the beginning, and because of the ## it matches as much as possible - '/Workspace/MyTemplates/' - and deletes that from the string, leaving 'TestTemplate1'.  With a single # it would match only the leading '/' and return 'Workspace/MyTemplates/TestTemplate1'.  This is the same thing as using the dirname and basename commands from the Unix shell.  The advantage of parameter expansion is that you don't need to spawn off a child process to use it - and it's cool.
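These expansions are easy to verify in any POSIX shell; here's a quick sketch using the same example path:

```shell
template="/Workspace/MyTemplates/TestTemplate1"

group="${template%/*}"    # shortest match of '/*' stripped from the end -> folder path
name="${template##*/}"    # longest match of '*/' stripped from the start -> leaf name
single="${template#*/}"   # single '#': only the shortest leading match (the first '/') goes

echo "$group"    # /Workspace/MyTemplates
echo "$name"     # TestTemplate1
echo "$single"   # Workspace/MyTemplates/TestTemplate1
```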

 

Hopefully that was a quick and fun diversion into shell scripting since we are seeing a lot of the same kind of command sequences when we are listing out the various objects.


Now listallthethings turns to DepotObjects.  If I want to list all the Depot Objects in a group, there's an existing command for that:

blcli_execute DepotObject listAllByGroup "/Workspace/MyDepotObjects"

And that lists all the Depot Objects in the (static) group.  Similar to the List All The Jobs post, we have the same issue with Depot Objects - we need to know the type in order to determine the DBKey, DepotObjectId, etc.

So again we go look at the DepotObject.listAllByGroup in the Unreleased blcli commands and documentation and find:

Command                            Input                  Return value stored name
DepotGroup.groupNameToId           $qualifiedGroupName    $groupId
DepotObject.findAllHeadersByGroup  NAMED_OBJECT=groupId   -
SDepotObjectHeader.getName         no input               -
Utility.setTargetObject            no input               -
Utility.listPrint                  no input               -

And we look in the SDepotObjectHeader namespace to see what's there.  And we have the same set of examples from our other post:

blcli_execute SmartDepotGroup groupNameToId "/Workspace/All Depot Objects"
# or DepotGroup.groupNameToId with a Static Depot Group
blcli_storelocal depotGroupId
blcli_execute DepotObject findAllHeadersByGroup ${depotGroupId}
blcli_execute SDepotObjectHeader getDBKey
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal depotObjectKeys

And as in the Server post we can use SDepotObjectHeader.getDepotObjectId to get the Depot Object Id instead of the DBKey.  Then, as before, we could iterate through the list of Depot Object keys and do something like update a property value:

while read depotObjectKey
  do
  blcli_execute DepotObject setPropertyValue ${depotObjectKey} APPLICATION_NAME Payroll
done <<< "$(awk 'NF' <<< "${depotObjectKeys}")"
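About that `done <<< "$(awk 'NF' <<< "${depotObjectKeys}")"` line: the stored key list typically ends with a blank line, and `awk 'NF'` prints only lines with at least one field, so the `while read` loop never runs against an empty key.  A self-contained sketch with made-up keys (here-strings are a zsh/bash feature):

```shell
# Dummy key list with a trailing blank line, like a stored blcli list.
keys=$'DBKey_ONE\nDBKey_TWO\n\n'

count=0
while read key
  do
  count=$((count + 1))   # only non-empty lines survive the awk 'NF' filter
done <<< "$(awk 'NF' <<< "${keys}")"

echo "$count"   # 2 - the blank line was dropped
```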

 

 

Or maybe I want to update the description on the object.  Since I can't find a blcli command to do that directly, it looks like I need to use one of the primitives - DepotObject.setDescription.  That will involve loading the DepotObject object and then acting on it, and we can still use the same list of Depot Object keys we got above:

while read depotObjectKey  
   do  
    blcli_execute DepotObject findByDBKey ${depotObjectKey}  
    blcli_execute Utility storeTargetObject obj  
    blcli_execute DepotObject setDescription "Payroll Application"
    blcli_execute DepotObject update NAMED_OBJECT=obj
done <<< "$(awk 'NF' <<< "${depotObjectKeys}")" 

 

 

Sometimes though I might just want to list out the DepotObject, the group path where it exists, the DepotObject type, etc.  All of that information seems to be available in commands in SDepotObjectHeader.  In that case I want to load the SDepotObjectHeader object and run some commands against it.

 

blcli_execute SmartDepotGroup groupNameToId "/Workspace/All Depot Objects"
# or DepotGroup.groupNameToId with a Static Depot Group  
blcli_storelocal objGroupId  
blcli_execute DepotObject findAllHeadersByGroup ${objGroupId}  
blcli_execute Utility storeTargetObject objHeaders  
blcli_execute Utility listLength  
blcli_storelocal listLength  
for i in {0..$((${listLength}-1))}  
    do  
    blcli_execute Utility setTargetObject objHeaders  
    blcli_execute Utility listItemSelect ${i}  
    blcli_execute Utility setTargetObject  
    blcli_execute SDepotObjectHeader getName  
    blcli_storelocal objName  
    blcli_execute SDepotObjectHeader getObjectTypeId  
    blcli_storelocal objTypeId  
    blcli_execute SDepotObjectHeader getDescription  
    blcli_storelocal objDesc  
    blcli_execute SDepotObjectHeader getGroupId  
    blcli_storelocal objGrpId  
    blcli_execute Group getQualifiedGroupName 5001 ${objGrpId}  
    blcli_storelocal groupPath  
    echo "${groupPath}/${objName},${objDesc},${objTypeId}"  
done 

This will echo out the Depot Folder and Name, the description and the object type id for the Depot Object.

 

This one was not much different than Jobs: a similar set of commands, just from a different namespace.  Generally that's the case - the commands are the same across namespaces for the same actions.  Not always, as you'll find out, but most of the time.  That can make it easy to parameterize your commands and build out script functions like:

getName()
{
   local ns="${1}"
   local dbKey="${2}"
   blcli_execute ${ns} findByDBKey ${dbKey}
   blcli_execute ${ns} getName
   blcli_storeenv objName
   blcli_execute ${ns} getType
   blcli_storeenv objTypeId
}

This is nice because it's a single function instead of maintaining separate functions for each object type.  Even if the command name varies you can always do some conditionals and parameterize the command name based on the namespace.
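One way to handle a namespace whose command name differs is a small lookup function.  This is a hypothetical sketch - keyCommandFor and its mappings are illustrative, not a BSA API - showing the conditional-dispatch idea in plain shell:

```shell
# Hypothetical helper: pick the per-namespace "get key/id" command name,
# then splice it into the blcli call.
keyCommandFor() {
  case "$1" in
    SJobHeader|SDepotObjectHeader) echo "getDBKey" ;;
    SDeviceHeader)                 echo "getDeviceId" ;;
    *)                             echo "getDBKey" ;;
  esac
}

cmd="$(keyCommandFor SDeviceHeader)"
echo "would run: blcli_execute SDeviceHeader ${cmd}"
```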

Bill Robinson

List All The Jobs

Posted by Bill Robinson Moderator Jan 11, 2018

Continuing the series of listallthethings, if I want to list all the jobs in a group there's an existing command:

blcli_execute Job listAllByGroup "/Workspace/MyJobs"

And that lists out all the Job names in the (static) group.  Unfortunately that list isn't very actionable unless you already know what type each job is.  And I can't use the same command with a Smart Job Group.  Instead I'd like a way to list the contents of a Static or Smart Job Group that gives me the DBKey or Id of each Job.  Most of the time when working with objects in BSA you need the DBKey to do anything with the object - update a property, run the job, rename it, etc.  And if you do need the name and path to the object, once you have the DBKey you can figure out the name, type, group path, etc.  I mentioned that having only the name and path (e.g. from Job.listAllByGroup) isn't actionable because you won't know the type, and you need the type in order to use the type-specific namespace commands like AuditJob, SnapshotJob, etc.

 

Let's look at the Job.listAllByGroup command in the Unreleased blcli commands and documentation and see what's being run:

Command                    Input                  Return value stored name
JobGroup.groupNameToId     $qualifiedGroupName    $groupId
Job.findAllHeadersByGroup  NAMED_OBJECT=groupId   -
SJobHeader.getName         no input               -
Utility.setTargetObject    no input               -
Utility.listPrint          no input               -

So just like the List All The Servers it's using the object header and getting the name.  And just like what we did in that post we will look in the SJobHeader namespace to see if there's a command to get the DBKey or id.  And there is, so we can do:

blcli_execute SmartJobGroup groupNameToId "/Workspace/All Jobs"
# or JobGroup.groupNameToId with a Static Job Group
blcli_storelocal jobGroupId
blcli_execute Job findAllHeadersByGroup ${jobGroupId}
blcli_execute SJobHeader getDBKey
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal jobKeys

And as in the Server post we can use SJobHeader.getJobId to get the Job Id instead of the DBKey.

 

And then, as before, we could iterate through the list of Job Keys and do something like update a property value:

while read jobKey
  do
  blcli_execute Job setPropertyValue ${jobKey} RESULTS_RETENTION_TIME 30
done <<< "$(awk 'NF' <<< "${jobKeys}")"

 

Or maybe I want to update the parallelism setting on the job.  Since I can't find a blcli command to do that directly, it looks like I need to use one of the primitives - Job.setParallelProcs to do it.  That will involve loading the job object and then acting on it, and we can still use same list of job keys we get above:

while read jobKey
   do
    blcli_execute Job findByDBKey ${jobKey}
    blcli_execute Utility storeTargetObject job
    blcli_execute Job setParallelProcs 30
    blcli_execute Job update NAMED_OBJECT=job
done <<< "$(awk 'NF' <<< "${jobKeys}")"

 

Sometimes though I might just want to list out the job, the group path where it exists, the job type, etc.  All of that information seems to be available via commands in SJobHeader.  In that case I want to load the SJobHeader object and run some commands against it.

blcli_execute SmartJobGroup groupNameToId "/Workspace/All Jobs"
# or JobGroup.groupNameToId with a Static Job Group
blcli_storelocal jobGroupId
blcli_execute Job findAllHeadersByGroup ${jobGroupId}
blcli_execute Utility storeTargetObject jobHeaders
blcli_execute Utility listLength
blcli_storelocal listLength
for i in {0..$((${listLength}-1))}
    do
    blcli_execute Utility setTargetObject jobHeaders
    blcli_execute Utility listItemSelect ${i}
    blcli_execute Utility setTargetObject
    blcli_execute SJobHeader getName
    blcli_storelocal jobName
    blcli_execute SJobHeader getObjectTypeId
    blcli_storelocal jobTypeId
    blcli_execute SJobHeader getDescription
    blcli_storelocal jobDesc
    blcli_execute SJobHeader getGroupId
    blcli_storelocal jobGrpId
    blcli_execute Group getQualifiedGroupName 5005 ${jobGrpId}
    blcli_storelocal groupPath
    echo "${groupPath}/${jobName},${jobDesc},${jobTypeId}"
done

This will echo out the Job Folder and Name, the description and the object type id for the job.

 

We built on the first post a bit and in addition to listing the objects in a group we pulled some information about them using more from the Unreleased blcli commands and documentation

Bill Robinson

List All The Servers

Posted by Bill Robinson Moderator Jan 11, 2018

I see a lot of requests on the communities that end up requiring one to list a bunch of objects and then do something so I thought I'd provide some examples from each workspace that show how to listallthethings.

 

First off we'll start with servers.

 

Listing all the servers in a server group is pretty easy:

blcli_execute Server listServersInGroup "/Workspace/All Available Servers by OS/Linux - Red Hat/All Red Hat"

spits out a list of all my RedHat server names.  That's great, but sometimes I need the server DBKey or server id.  Since this command does almost what I want, I'll look at what it's calling in the Unreleased blcli commands and documentation:

Command                             Input                        Return value stored name
ServerGroup.groupNameToId           $qualifiedGroupName          $groupId
Server.findAllHeadersByServerGroup  NAMED_OBJECT=groupId false   -
SDeviceHeader.getName               no input                     -
Utility.setTargetObject             no input                     -
Utility.listPrint                   no input                     -

Ok, so that's good - it's not directly calling an API command, so I can probably run the same commands in sequence, but instead of SDeviceHeader.getName there's hopefully a .getId and/or .getDBKey in SDeviceHeader.  Upon inspection of the SDeviceHeader namespace I do see a getDeviceId and a getDBKey.  Great!  So I can put together a couple sets of commands:

 

# interestingly this works for both smart and static groups
blcli_execute ServerGroup groupNameToId "/Workspace/All Available Servers by OS/Linux"
blcli_storeenv serverGroupId
blcli_execute Server findAllHeadersByServerGroup ${serverGroupId} true
blcli_execute SDeviceHeader getDBKey
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal serverDBKeys

 

or

blcli_execute ServerGroup groupNameToId "/Workspace/All Available Servers by OS/Linux"
blcli_storeenv serverGroupId
blcli_execute Server findAllHeadersByServerGroup ${serverGroupId}
blcli_execute SDeviceHeader getDeviceId
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal serverIDs

 

If I want to get *all* the servers in my environment then I'll look at the Server.listAllServers command and see what it's calling:

Command                      Input      Return value stored name
Server.findAllDeviceHeaders  no input   -
SDeviceHeader.getName        no input   -
Utility.setTargetObject      no input   -
Utility.listPrint            no input   -

Same idea, replace the SDeviceHeader.getName with the command you want.

 

Then you have a list (serverDBKeys, serverIDs, etc) you can iterate over:

while read serverDBKey
    do
    echo "DBKey: ${serverDBKey}"
# awk 'NF' trims the trailing empty line off the key list.
done <<< "$(awk 'NF' <<< "${serverDBKeys}")"

 

Note that in the above examples we are loading the object header, not the full object, which is much faster and uses less memory.  There are commands like Server.findAllBy* that load the entire server object for everything in the group (or everything, period), which will likely be slower and use more of the blcli's heap, so those examples were not shown above.  The one advantage of loading the full object is that you can iterate through the list of objects and act directly on each one to get or set something.  Sometimes you need to do that because there are no other blcli commands for the action you are trying to do that take a DBKey, name or Id as input.

 

For example you can do something like:

blcli_execute Server findAllByServerGroup ${groupId}
blcli_execute Utility storeTargetObject serverList
blcli_execute Utility listLength
blcli_storelocal listLength
for i in {0..$((${listLength}-1))}
   do
   blcli_execute Utility setTargetObject serverList
   blcli_execute Utility listItemSelect ${i}
   blcli_execute Utility setTargetObject
   blcli_execute Utility storeTargetObject server
   blcli_execute Server setDescription "Test Server"
   blcli_execute Server update NAMED_OBJECT=server
done

And iterate through the objects themselves, load them, and then act on them.  The above example sets the description of each server object to 'Test Server'.  This is just a simple example for this exercise.

 

Hopefully after the above examples we've gotten a little more familiar with using the Unreleased blcli commands and documentation, learned how to construct our own sequence of unreleased commands to do something the released commands don't, and seen a couple of ways to get a list of objects and iterate through them.


Next up for listallthethings: Component Templates.  If I want to list all the Component Templates in a (static) Template group I can run:

blcli_execute TemplateGroup groupNameToId "/Workspace"

or for a Smart Template Group

blcli_execute SmartTemplateGroup groupNameToId "/Workspace/All Templates"

 

blcli_storelocal templateGroupId
blcli_execute Template listAllByGroup ${templateGroupId}

 

And that gives me the list of Templates in the group.  If you listed the contents of a static group and needed to get the DBKey of the template for some operation then you would just need to loop through the names, supply your group name and pass both to the Template.getDBKeyByGroupAndName command:

templateGroup="/Workspace"
blcli_execute TemplateGroup groupNameToId "${templateGroup}"
blcli_storelocal templateGroupId
blcli_execute Template listAllByGroup ${templateGroupId}
blcli_storelocal templateList
while read template
do
blcli_execute Template getDBKeyByGroupAndName "${templateGroup}" "${template}"
blcli_storelocal templateKey
echo "${templateGroup}/${template}:${templateKey}"
done <<< "$(awk 'NF' <<< "${templateList}")"

 

If we used the Smart Group then we can't use it with Template.getDBKeyByGroupAndName, so we need to take the approach of our other list exercises (List All The Depot Objects, List All The Jobs, List All The Servers) and look at the Template.listAllByGroup command to see if we can output the DBKey.  In the Unreleased blcli commands and documentation we see the following command sequence for our command:

Command                    Input      Return value stored name
Template.findAllByGroupId  $groupid   -
Template.getName           no input   -
Utility.setTargetObject    no input   -
Utility.listPrint          no input   -

There doesn't seem to be an STemplateHeader namespace like we had for Jobs and DepotObjects.  That's ok, but as we know from the other exercises, loading all the objects could result in poor performance or memory issues with the blcli heap.  For the latter we can always add a blcli_setjvmoption -Xmx1024m (or some other size) prior to any other blcli command executions in the script to bump up the heap size (though this shouldn't be needed if running in an NSH Script Job with the Blcli Server enabled).

 

There are a couple of ways we could go with this: get the DBKey for all the templates and then iterate through that list to get the name and group, or keep the list of template objects in memory and iterate through that.  This is similar to what we did in the previous posts.  If you just want the DBKey, do the former.  I'll show examples of each.

 

First the example where you get the DBKey for all the templates and loop through that list:

blcli_execute SmartTemplateGroup groupNameToId "${templateGroup}"
blcli_storelocal templateGroupId
blcli_execute Template findAllByGroupId ${templateGroupId}
blcli_execute Template getDBKey
blcli_execute Utility setTargetObject
blcli_execute Utility listPrint
blcli_storelocal templateKeys

while read templateKey
        do
        blcli_execute Template findByDBKey ${templateKey}
        blcli_execute Template getName
        blcli_storelocal templateName
        blcli_execute Template getGroupId
        blcli_storelocal templateGroupId
        blcli_execute Group getQualifiedGroupName 5008 ${templateGroupId}
        blcli_storeenv templateGroupPath
        echo "${templateGroupPath}/${templateName}"
done <<< "$(awk 'NF' <<< "${templateKeys}")"

 

Now the same thing, except loading each object and pulling out the same information:

blcli_execute SmartTemplateGroup groupNameToId "${templateGroup}"
blcli_storelocal templateGroupId
blcli_execute Template findAllByGroupId ${templateGroupId}
blcli_execute Utility storeTargetObject templates
blcli_execute Utility listLength
blcli_storelocal listLength
for i in {0..$((${listLength}-1))}
        do
        blcli_execute Utility setTargetObject templates
        blcli_execute Utility listItemSelect ${i}
        blcli_execute Utility setTargetObject
        blcli_execute Template getName
        blcli_storelocal templateName
        blcli_execute Template getGroupId
        blcli_storelocal templateGroupId
        blcli_execute Group getQualifiedGroupName 5008 ${templateGroupId}
        blcli_storelocal templateGroupPath
        echo "${templateGroupPath}/${templateName}"
done

 

These are similar patterns to the other listallthethings examples, and just like those I could be doing a set instead of a get with the DBKey or object in the loop.  You'll notice that in the Group.getQualifiedGroupName call I pass a number.  This is the object type id for a Static Template Group, which I pulled from my object type id reference.  I know a Template, Job or DepotObject can only ever exist in a static workspace group, so I don't need to do any automagic derivation.  If you are scripting and trying to make a generic function you could do something like this instead:

blcli_execute Group findById ${groupId}
blcli_execute Group getType
blcli_storelocal groupTypeId
blcli_execute Group getQualifiedGroupName ${groupTypeId} ${groupId}

 

 

You can see some of the above logic at work in the Export Template Rules And Parts script, where you can pass in a group name that contains some number of Component Templates and the script will find all the templates in the group and dump out the desired information from each one.


This seems to be a fairly common case: you want to run a job from the blcli, wait for it to finish, and then do something with the result.  You can use something like Job.executeJobAndWait, but this can be problematic if the job runs long enough that the blcli-to-appserver connection is deemed idle.  The blcli sits there with no output, so you get no indication that anything is going on (hence the idle-timeout problem), whether that connection is still alive, or anything else.  Instead, maybe you can start the job, get something back you can use to check the run status, and then loop over that status command until you see the job run has finished.

 

The first thing to do is to find the right blcli command to run.  There are a couple that look promising: Job.executeJobAndReturnScheduleID and Job.executeJobAndWaitForRunID.  We need a command that outputs something we can use to get the running status, and I see a couple of commands in the JobRun namespace that look like they will do that: JobRun.getJobRunStatusByScheduleId and JobRun.getJobRunIsRunningByRunKey.  (One thing that's a little confusing, which we'll see later: executeJobAndWaitForRunID actually returns the jobRunKey, not the jobRunId.  That's ok, because we can use it with the getJobRunIsRunningByRunKey command.)  There are some other commands that let you 'execute against' a set of targets for that run, and they also return the scheduleId or jobRunKey.  As long as the execution command returns the scheduleId, jobRunKey or jobRunId, we're in good shape.  As you can tell, there is no single 'right' blcli command.

 

We've found a couple different sets of commands that look like they will accomplish our goal of starting a job run and returning something that we can use to check status with another command.  We can use either command set.  The schedule one looks nice to me because it gives more information about status than running or not.  Let's look at that one first.  We need to get the jobKey, run it and get the scheduleId and then do a loop until the job returns a 'COMPLETE' status.  That's pretty straightforward:

 

blcli_execute NSHScriptJob getDBKeyByGroupAndName "/Workspace" "myJob"
blcli_storeenv JOB_KEY
blcli_execute Job executeJobAndReturnScheduleID ${JOB_KEY}
blcli_storeenv JOB_SCHEDULE_ID
JOB_STATUS="INCOMPLETE"
while [[ -n "${${JOB_STATUS//COMPLETE}//$'\n'}" ]]
     do
     sleep 10
     blcli_execute JobRun getJobRunStatusByScheduleId ${JOB_SCHEDULE_ID}
     blcli_storeenv JOB_STATUS
     echo "Schedule ${JOB_SCHEDULE_ID} status is: ${JOB_STATUS//$'\n'/ }"
done

 

There are some zsh-isms going on there that I can explain: ${JOB_STATUS//$'\n'/ } replaces the newlines in the return of getJobRunStatusByScheduleId with spaces.  ${${JOB_STATUS//COMPLETE}//$'\n'} removes the string COMPLETE from the output and then strips the newlines, and the while test checks whether the resulting variable is non-empty.  When the job completes it will show a status of COMPLETE only, so after removing that string the variable is empty and the loop breaks.
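Note that the nested ${${...}//...} form is zsh-only; in bash you need two assignments.  Here is the same completion test written out step by step - the status strings are illustrative stand-ins, not guaranteed blcli output:

```shell
# bash spelling of zsh's ${${JOB_STATUS//COMPLETE}//$'\n'} test.
is_unfinished() {
  local s="${1//COMPLETE}"   # drop the COMPLETE substring
  s="${s//$'\n'}"            # drop newlines
  [ -n "$s" ]                # anything left means the run isn't done yet
}

running=$'RUNNING\n'         # illustrative status values with a trailing
finished=$'COMPLETE\n'       # newline, the way blcli output usually arrives

is_unfinished "$running"  && echo "keep polling"
is_unfinished "$finished" || echo "job finished"
```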

 

You could put a counter in there as well if you want to prevent it from getting stuck in the loop, so something like:

 

blcli_execute NSHScriptJob getDBKeyByGroupAndName "/Workspace" "myJob"
blcli_storeenv JOB_KEY
blcli_execute Job executeJobAndReturnScheduleID ${JOB_KEY}
blcli_storeenv JOB_SCHEDULE_ID
JOB_STATUS="INCOMPLETE"
COUNT=1
while [[ -n "${${JOB_STATUS//COMPLETE}//$'\n'}" ]] && [[ ${COUNT} -le 100 ]]
     do
     sleep 10
     blcli_execute JobRun getJobRunStatusByScheduleId ${JOB_SCHEDULE_ID}
     blcli_storeenv JOB_STATUS
     echo "Schedule ${JOB_SCHEDULE_ID} status is: ${JOB_STATUS//$'\n'/ }"
     let COUNT+=1
done

Now I might want to check whether the job run had errors, dump the job run log items, or use one of the Utility commands to export a job result or log to a file.  With the scheduleId approach I'll need to use a couple of unreleased blcli commands to convert the scheduleId into something I can use.

 

I can run something like this to convert the scheduleId into a jobRunId or jobRunKey:

blcli_execute JobRun findByScheduleId ${JOB_SCHEDULE_ID}
blcli_execute JobRun getJobRunId
blcli_storeenv JOB_RUN_ID

or

blcli_execute JobRun findByScheduleId ${JOB_SCHEDULE_ID}
blcli_execute JobRun getJobRunKey
blcli_storeenv JOB_RUN_KEY

 

The other commands you want to run will take either the jobRunId or the jobRunKey; choose the appropriate sequence above to feed into them.

 

 

Now that I have shown how to poll for status using the scheduleId, let's look at how to poll for status using the getJobRunIsRunningByRunKey approach.  It's very similar: we execute the job, get the jobRunKey back, and then loop, checking whether the job is still running:

blcli_execute Job executeJobAndWaitForRunID ${JOB_KEY}
blcli_storeenv JOB_RUN_KEY
JOB_IS_RUNNING="true"
while [[ "${JOB_IS_RUNNING}" = "true" ]]
        do
        sleep 10
        blcli_execute JobRun getJobRunIsRunningByRunKey ${JOB_RUN_KEY}
        blcli_storeenv JOB_IS_RUNNING
        echo "${JOB_IS_RUNNING}"
done

 

In a normally functioning environment these will work more or less the same.  A problem with the getJobRunIsRunning approach is that if the job doesn't start running (for example, a job routing rule sends the job to a downed appserver, your appservers are maxed out on WorkItemThreads, or you've reached the MaxJobs threshold), then executeJobAndWaitForRunID will sit waiting for the job to start, and you are back to the possible idle connection timeout.

 

There are other commands in the JobRun namespace (released and unreleased) that look interesting.  JobRun.showRunningJobs looks neat, outputting a nice table, but it only shows the job name, which is not enough to know whether your job is actually running, since you can have duplicate job names in different workspace folders.

 

The above two examples are pretty simple.  They can be extended to manage multiple job runs: for example, if you wanted to kick off a number of jobs at once, wait until they were all complete, check their status, and then move on to some other action.  One approach would be some arrays to hold the job information and another loop around what I've done above.  That may be a future post.
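As a rough sketch of that array-based approach, here is one way it could look.  The folder and job names are placeholders for your own jobs, and the script reuses only the commands already shown above, so treat it as a starting point rather than a finished solution:

```shell
# Hypothetical job names; adjust the folder and names to your environment.
JOB_NAMES=("myJob1" "myJob2" "myJob3")
SCHEDULE_IDS=()

# Kick off every job and remember its scheduleId.
for JOB_NAME in "${JOB_NAMES[@]}"; do
    blcli_execute NSHScriptJob getDBKeyByGroupAndName "/Workspace" "${JOB_NAME}"
    blcli_storeenv JOB_KEY
    blcli_execute Job executeJobAndReturnScheduleID ${JOB_KEY}
    blcli_storeenv JOB_SCHEDULE_ID
    SCHEDULE_IDS+=("${JOB_SCHEDULE_ID}")
done

# Poll until every schedule reports COMPLETE.
PENDING=${#SCHEDULE_IDS[@]}
while [[ ${PENDING} -gt 0 ]]; do
    sleep 10
    PENDING=0
    for JOB_SCHEDULE_ID in "${SCHEDULE_IDS[@]}"; do
        blcli_execute JobRun getJobRunStatusByScheduleId ${JOB_SCHEDULE_ID}
        blcli_storeenv JOB_STATUS
        # Same non-empty test as the single-job loop above.
        if [[ -n "${${JOB_STATUS//COMPLETE}//$'\n'}" ]]; then
            let PENDING+=1
        fi
    done
    echo "${PENDING} job(s) still running"
done
```

You would likely want to add the same COUNT-based safety valve shown earlier so a stuck job can't keep the loop running forever.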
