
TrueSight Server Automation


This question has come up a number of times and I decided to spend some time looking into it.  The goal is to be able to leverage a RedHat Satellite server as the source of a RedHat Patch Catalog in TrueSight Server Automation.  Standard disclaimers that this is not currently supported, may not work for you, may stop working in the future, etc, etc.

 

The below assumes some familiarity with Satellite and should work for Satellite version 6.x.  I did this with Satellite 6.5.

 

On the Satellite server, create or edit the file /etc/pulp/server/plugins.conf.d/yum_distributor.json, as noted in this Foreman bug, so that it contains the following:

{
  "generate_sqlite": true
}

This is needed because BSA requires the repository metadata in sqlite format, which is not the default for Satellite.  After making this change you must restart the Pulp worker services.  The next synchronization of your repositories should include the sqlite metadata; if it does not, you can forcefully regenerate the metadata of a Content View.
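
For reference, on my Satellite 6.5 system the Pulp services could be restarted as shown below; the exact unit names are an assumption and may differ between Satellite releases:

systemctl restart pulp_workers pulp_celerybeat pulp_resource_manager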

 

To verify you have generated the sqlite metadata, run the below command on the Satellite server after the synchronization completes:

find /var/lib/pulp/published/yum/master/yum_distributor -iname "*sqlite.bz2" -exec ls -la {} \;

-rw-r--r--. 1 apache apache 13938 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/7f8c6bce5464871dd00ed0e0ed25e55fd460abb255ab0aa093a79529bb86cbc2-primary.sqlite.bz2

-rw-r--r--. 1 apache apache 155449 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/cff3aeccd7f3ff871f72c5829ed93720e0f273d1206ee56c66fa8f6ee1d2e486-filelists.sqlite.bz2

-rw-r--r--. 1 apache apache 40915 Jun 14 16:28 /var/lib/pulp/published/yum/master/yum_distributor/ee48df18-3be3-4254-b518-de42a1a37cb4/1560544102.06/repodata/eb5044ef0c9e47dab11b242522890cfe6fbb6cf1942f14757af440ec54c9027f-other.sqlite.bz2

[...]

 

Subscribe (register) the system used to store the RedHat Catalog for TSSA to your Satellite server; this provides the certificates that TSSA uses during the catalog synchronization process.
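
A typical registration sequence looks something like the following; the organization and activation key names are placeholders for your own values:

rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
subscription-manager register --org="Example_Org" --activationkey="tssa-catalog-key"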

 

In the Patch Global Configuration, or your offline downloader configuration file you will reference these certificates:

SSL CA Cert: /etc/pki/ca-trust/source/anchors/katello-server-ca.pem

SSL Client Cert: /etc/pki/entitlement/<numbers>.pem

SSL Client Key: /etc/pki/entitlement/<numbers>-key.pem

Note that the SSL CA Cert is different from the one used when synchronizing directly with RedHat.

 

You will need to update the RedHat Channel Filters List File (online catalog) or the offline downloader configuration file (offline catalog) with the URLs and other information for the Satellite-provided channels you will use in your catalog.  The URLs will look something like:

https://satellite.example.com/pulp/repos/Example/Library/View1/content/dist/rhel/server/7/7Server/x86_64/os

 

The format of the URL is https://<satellite server>/pulp/repos/<organization>/<lifecycle environment>/<content view>/<repository path>.  An easy way to determine the URLs is to use the rct cat-cert command on the SSL Client Cert:

 

rct cat-cert /etc/pki/entitlement/3591963669563311224.pem

[...]

Content:

Type: yum

Name: Red Hat Enterprise Linux 7 Server (RPMs)

Label: rhel-7-server-rpms

Vendor: Red Hat

URL: /Example/Library/View1/content/dist/rhel/server/7/$releasever/$basearch/os

GPG: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

Enabled: True

Expires: 1

Required Tags: rhel-7-server

Arches: x86_64

 

Another way is to inspect the output of the subscription-manager repos --list command (which only shows the repos applicable to the OS of the catalog server):

# subscription-manager repos --list

+----------------------------------------------------------+

    Available Repositories in /etc/yum.repos.d/redhat.repo

+----------------------------------------------------------+

Repo ID:   rhel-7-server-rpms

Repo Name: Red Hat Enterprise Linux 7 Server (RPMs)

Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/$releasever/$basearch/os

Enabled:   1

Repo ID:   rhel-7-server-optional-rpms

Repo Name: Red Hat Enterprise Linux 7 Server - Optional (RPMs)

Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/$releasever/$basearch/optional/os

Enabled:   0

Repo ID:   rhel-7-server-satellite-tools-6.5-rpms

Repo Name: Red Hat Satellite Tools 6.5 (for RHEL 7 Server) (RPMs)

Repo URL:  https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/$basearch/sat-tools/6.5/os

Enabled:   0

 

Once you have the URLs and other information for the channels you want to build your catalog from, update the RedHat Channel Filters List File in the Patch Global Configuration or the offline downloader configuration file accordingly.

 

Example RedHat Filters file snippet for an online catalog:

[...]

   <redhat-channel use-reposync="true">

        <channel-name>RHEL 7 Optional RPMs from Satellite</channel-name>

        <channel-label>rhel-7-server-optional-rpms-satellite</channel-label>

        <channel-os>RHES7</channel-os>

        <channel-arch>x86_64</channel-arch>

        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/optional/os</channel-url>

    </redhat-channel>

    <redhat-channel use-reposync="true" is-parent="true">

        <channel-name>RHEL 6 RPMs from Satellite</channel-name>

        <channel-label>rhel-6-server-rpms-satellite</channel-label>

        <channel-os>RHES6</channel-os>

        <channel-arch>x86_64</channel-arch>

        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/os</channel-url>

    </redhat-channel>

    <redhat-channel use-reposync="true">

        <channel-name>RHEL 6 Optional RPMs from Satellite</channel-name>

        <channel-label>rhel-6-server-optional-rpms-satellite</channel-label>

        <channel-os>RHES6</channel-os>

        <channel-arch>x86_64</channel-arch>

        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/optional/os</channel-url>

    </redhat-channel>

[...]

This adds the rhel-7-server-rpms and rhel-6-server-rpms as parent channels and the others as child channels.

 

 

Example Offline Downloader configuration file snippet:

       <redhat-cert cert-arch="x86_64">

                <caCert>/etc/pki/ca-trust/source/anchors/katello-server-ca.pem</caCert>

                <clientCert>/etc/pki/entitlement/2717125327657143845.pem</clientCert>

                <clientKey>/etc/pki/entitlement/2717125327657143845-key.pem</clientKey>

        </redhat-cert>

       

        <errata-type-filter>

                        <os>RHES7</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-7-server-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/os</channel-url>

                        <errata-severity>

                                <critical>true</critical>

                                <important>true</important>

                                <moderate>true</moderate>

                                <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                        </errata-type>

        </errata-type-filter>

        <errata-type-filter>

                        <os>RHES7</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-7-server-optional-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/7/7Server/x86_64/optional/os</channel-url>

                        <errata-severity>

                               <critical>true</critical>

                               <important>true</important>

                               <moderate>true</moderate>

                               <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                       </errata-type>

        </errata-type-filter>

        <errata-type-filter>

                        <os>RHES6</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-6-server-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/os</channel-url>

                        <errata-severity>

                               <critical>true</critical>

                               <important>true</important>

                               <moderate>true</moderate>

                               <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                        </errata-type>

        </errata-type-filter>

              <errata-type-filter>

                        <os>RHES6</os>

                        <arch>x86_64</arch>

                        <channel-label>rhel-6-server-optional-rpms</channel-label>

                        <channel-url>https://satellite.example.com/pulp/repos/Example_Org/ExampleEnvironment/ExampleContentView/content/dist/rhel/server/6/6Server/x86_64/optional/os</channel-url>

                        <errata-severity>

                               <critical>true</critical>

                               <important>true</important>

                               <moderate>true</moderate>

                               <low>true</low>

                        </errata-severity>

                        <errata-type>

                                <security>true</security>

                                <bugfix>true</bugfix>

                                <enhancement>true</enhancement>

                        </errata-type>

        </errata-type-filter>

 

 

At this point, using the online RedHat Patch Catalog or offline downloader is the same as synchronizing with RedHat directly.  Finish the catalog creation and run the Catalog Update Job.

 

 

 

A few references were helpful in setting all of this up (some of which may require a RedHat Support account to access):

Installing Satellite Server from a Connected Network

How to forcefully regenerate metadata of a content view or repository on Red Hat Satellite 6

How do I register a system to Red Hat Satellite 6 server

How to change download policy of repositories in Red Hat Satellite 6


If you are using TSSA/BSA to patch your Windows servers, you have hopefully already upgraded to a version of TSSA that uses the new Shavlik SDK version and are happily off patching your servers, because you received a notification from BMC via email, saw a blog post, attended a webinar, or saw the documentation flash page.

 

If you have not upgraded yet, you need to complete an upgrade before September 30, 2019 to continue patching your Windows servers with TSSA.

 

Let me elaborate on a few points from the documentation flash page:

 

You must upgrade the core TSSA infrastructure (that is, the AppServer(s), Database, and Consoles) to a version of TSSA that uses the new Shavlik SDK.

 

You must also upgrade the RSCD agents on all Windows servers you deploy patches to.  This unfortunately breaks with precedent: with most BSA/TSSA upgrades, you typically upgrade the core infrastructure and then roll out RSCD upgrades as change windows permit.  The notable exception to this rule is when there are RSCD-side changes needed to enable new functionality, and the Shavlik SDK update is one of those exceptions.  Both the appserver and the RSCD must be upgraded.

 

While the flash page mentions a couple versions to upgrade to, there are actually a handful of TSSA versions that include the updated Shavlik SDK.

  • If your appserver and agents are already 8.9.03 then there is no action required.
  • If your appserver and agents are already 8.9.03.001, then there is no action required.
  • If your appserver and agents are already 8.9.04, then there is no action required.
  • If your appserver and agents are already 8.9.04.001, then there is no action required.
  • If you have some combination of an 8.9.04.x appserver and 8.9.03 agents, or 8.9.03.001 appserver and 8.9.03 agents, then there is likely no action required, unless you have encountered one of the issues noted below.
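
If you are not sure which RSCD version a given target is running, the NSH agentinfo utility will report it (the hostname below is just an example):

agentinfo winserver1.example.com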

 

However, if you are planning to upgrade, you should upgrade to 8.9.04.001 or later, because you will likely run into a few bugs present in prior versions impacting the agent upgrade process and patching:

Bug          | Summary                                                                                                      | Fixed In
DRBLG-116254 | When you upgrade a Windows RSCD Agent to version 8.9.03.001, the sbin\BLPatchCheck2.exe binary is not upgraded. | 8.9.04
DRBLG-116834 | When you execute an Agent Installer Job against multiple target servers in parallel, some of the jobs fail with the error "Failed to copy installer bundle from file server". | 8.9.04
QM002409753  | Upgrading a BMC Server Automation 8.9.02 agent on Microsoft Windows servers does not work as expected. | 8.9.03.001
DRBLG-114734 | Shavlik 9.3 restart handling and robustness. | 8.9.03.001
DRBLG-118439 | Windows Catalog Update Job (CUJ) update is slow. | 8.9.04.001
DRBLG-117288 | Windows patch exclusions not working. | 8.9.04
DRBLG-115981 | Windows Patching Analysis failed in the absence of digital certificates. | 8.9.03.001
DRBLG-119204 | After an appserver upgrade to 8.9.04, Windows Patch Remediation fails unless the target RSCD agent is upgraded to the same SP4. | 8.9.04.001

 

A few other questions that have come up about the upgrade process:

 

Can I upgrade my agents to the new version ahead of my appserver upgrade?

No, the appserver/infrastructure must be updated first.  TSSA/BSA has never supported using a newer RSCD version with an older appserver.

 

What happens if I can't complete the upgrade process before September 30, 2019?

New patch metadata with information about newly released patches will not be available to you and you will be unable to deploy them using TSSA's Patching solution.

 

Do I also need to upgrade the RSCD on other operating systems (Linux, AIX, Solaris, etc.) immediately after upgrading the TSSA infrastructure?

You do not need to upgrade these agents immediately after upgrading the TSSA infrastructure.


Hello Everyone,

 

We are super excited to announce the launch of 'TrueSight Automation Console' for TrueSight Automation for Servers. This container-based platform significantly simplifies the OS patching experience for servers while providing near-real-time Patch Compliance status. Here are the highlights of this platform:

 

Simplified patch management

Using the new Automation Console, you can define the patch policies that are used for both auditing and deployment of patches.

Based on the policy results, you can create remediation actions to apply the required patches on the target servers. You basically just choose the policies, targets, remediation options, and schedule (maintenance window), and receive notifications (alerts).  It is a big step forward in ease of use and makes deploying patches through this intuitive interface much simpler.

 


 

KPI driven Dashboards

The Dashboard shows patch compliance health. It reflects KPIs such as:

  • Patch Compliance of the environment
  • Assets missing patch SLAs and critical patches
  • Age of missing patches
  • Remediation trends
  • Visibility into the patches that are missing on the largest number of servers

You can also:

  • Filter the dashboard data using filters for operating system, severity, and the patch policy.
  • Drill down on widgets to get details
  • Export data to share with stakeholders

 

 

Deploy missing patches

Based on the scan results of patch policies, you can schedule the deployment of missing patches during the maintenance window. You can also choose the reboot options for the target servers as well as the staging schedule for patch payloads.

 

Service Level Agreements (SLAs)

SLAs define the period within which the missing patches need to be remediated. You can define SLAs based on your organization's policies.

 

Container Based Deployment

TrueSight Automation Console is packaged as Docker containers, which are easy to deploy.

You can install TrueSight Automation Console in either interactive or silent mode.

Installing TrueSight Automation Console is a simple three-step process:

  • Setup Stack Manager
  • Setup Database
  • Setup Automation Console application

 

For details, please refer to the documentation at https://docs.bmc.com/docs/display/tsac191

 

Customers who want to understand more about this offering and gain access to the software should reach out to their respective account manager.


Hi all

Now that we've done a few sessions on TrueSight Smart Reporting, I thought I'd gather the links and resources together in a single place and share with everyone.

If you have other topics you'd like to see addressed or questions, please engage here, on the individual pages above or contact Support.

Thanks

Seth

 


As part of automating your patch deployments, you may want to run your Patching Job and have it automatically generate the BlPackages and Deploy Jobs that contain the missing patches, and also schedule the various phases of the generated Deploy Jobs.  This is fairly simple in the GUI: in a Patching Job, on the Remediation Options tab, click the Deploy Job Options button and then go to the Phases and Schedules tab.

(Screenshot: Deploy Job Options, Phases and Schedules tab)

While this is trivial for a single job, if you have several jobs to modify every patching cycle this becomes quite tedious, so we will of course turn to our friend the BLCLI.  There's a hint in the screenshot above as to how we will accomplish this: the 'Populate options from an existing Job' option looks interesting.  And in fact, if I create a 'dummy' Deploy Job, set up the options and schedule that I want, and then select that job in the 'Populate options from an existing Job' menu, the schedule and options are applied to my Patching Job's Deploy Job Options.

 

To really automate this I need to do a few things:

  • Figure out the options and schedule we want set in the Patching Job
  • Create or update a 'dummy' Deploy Job with the options and schedule I want
  • Find my Patching Job (we could create one from scratch, but not for this exercise)
  • Apply the 'dummy' job to the Patching Job
  • Remove the schedule from the dummy job
  • Optionally execute or schedule the Patching Job

 

Dummy Job

To create the 'dummy job' we need a blpackage.  Let's provide the name and path to a BlPackage and create an empty BlPackage if it does not exist.

DUMMY_BLPACKAGE="/Workspace/TestDeploy"
blcli_execute DepotGroup groupNameToDBKey "${DUMMY_BLPACKAGE%/*}"
blcli_storeenv depotGroupKey
blcli_execute DepotObject depotObjectExistsByTypeGroupAndName 28 ${depotGroupKey} "${DUMMY_BLPACKAGE##*/}"
blcli_storeenv pkgExists
if [[ "${pkgExists}" = "true" ]]
    then
    blcli_execute BlPackage getDBKeyByGroupAndName "${DUMMY_BLPACKAGE%/*}" "${DUMMY_BLPACKAGE##*/}"
else
    blcli_execute DepotGroup groupNameToId "${DUMMY_BLPACKAGE%/*}"
    blcli_storeenv depotGroupId
    blcli_execute BlPackage createEmptyPackage "${DUMMY_BLPACKAGE##*/}" "" ${depotGroupId}
    blcli_execute DepotObject getDBKey
fi
blcli_storeenv PACKAGE_KEY
echo "PACKAGE_KEY:${PACKAGE_KEY}"

Now that we have our dummy BlPackage, let's do the same with the dummy job.  When we create the Deploy Job, we'll pass parameters as defined for the DeployJob createDeployJob_3 command (see the BMC Server Automation Command Line Interface 8.9 documentation).

 

DEPLOY_OPTS="${BASIC_DEPLOY_OPTS} ${ADVANCED_OPTS}"
DUMMY_BLDEPLOY="/Workspace/TestDeploy"
DUMMY_TARGET_SERVER="server1"
blcli_execute JobGroup groupNameToDBKey "${DUMMY_BLDEPLOY%/*}"
blcli_storeenv jobGroupKey
blcli_execute Job jobExistsByTypeGroupAndName 30 ${jobGroupKey} "${DUMMY_BLDEPLOY##*/}"
blcli_storeenv jobExists
if [[ "${jobExists}" = "true" ]]
    then
    blcli_execute DeployJob getDBKeyByGroupAndName "${DUMMY_BLDEPLOY%/*}" "${DUMMY_BLDEPLOY##*/}"
else
    blcli_execute JobGroup groupNameToId "${DUMMY_BLDEPLOY%/*}"
    blcli_storeenv GROUP_ID
    blcli_execute DeployJob createDeployJob "${DUMMY_BLDEPLOY##*/}" "${GROUP_ID}" "${PACKAGE_KEY}" "${DEPLOY_TYPE}" "${DUMMY_TARGET_SERVER}" ${DEPLOY_OPTS}
fi
blcli_storeenv DEPLOY_JOB_KEY
echo "DEPLOY_JOB_KEY:${DEPLOY_JOB_KEY}"

 

Now we can set the phase schedules on the dummy job:

blcli_execute DeployJob setAdvanceDeployJobPhaseScheduleByDBKey ${DEPLOY_JOB_KEY} AtTime "${SIMULATE_TIME}" "${STAGE_DATE_TIME}" "" AtTime "${COMMIT_TIME}"
blcli_storeenv DEPLOY_JOB_KEY
echo "DEPLOY_JOB_KEY:${DEPLOY_JOB_KEY}"

 

Patching Job

Apply the options and schedule from the dummy job to your Patching Job:

 

PATCH_JOB="/Workspace/Patching Jobs/rhel6-clean"
REMEDIATION_DEPOT_FOLDER="/Workspace/Patch Deploy"
REMEDIATION_JOB_FOLDER="/Workspace/Patching Jobs"
REMEDIATION_JOB_PREFIX="${PATCH_JOB##*/}-"
# Resolve the Patching Job's DBKey (mirroring the getDBKeyByGroupAndName lookups used above)
blcli_execute PatchingJob getDBKeyByGroupAndName "${PATCH_JOB%/*}" "${PATCH_JOB##*/}"
blcli_storeenv PATCH_JOB_KEY
blcli_execute PatchingJob setRemediationWithDeployOptions ${PATCH_JOB_KEY} "${REMEDIATION_JOB_PREFIX}" "${REMEDIATION_DEPOT_FOLDER}" "${REMEDIATION_JOB_FOLDER}" ${DEPLOY_JOB_KEY}
blcli_storeenv PATCH_JOB_KEY
echo "PATCH_JOB_KEY:${PATCH_JOB_KEY}"

 

 

Optionally schedule the Patching Job

blcli_execute Job addOneTimeSchedule ${PATCH_JOB_KEY} "${ANALYSIS_TIME}"
blcli_storeenv PATCH_JOB_KEY
echo "PATCH_JOB_KEY:${PATCH_JOB_KEY}"

 

 

And that's it.  The attached script includes a hard-coded start time for the Deploy phases and the Patching Job; those could be taken as input to the script or derived somehow.  There's also a check that exits if the executeDeployJobsNow option is set, since that requires manual correction.

 

There are a few things that could be done in the script that are not covered and are left as an exercise for the reader:

  • Delete the dummy job and package after they've been used
  • Set pre- and post- commands in the Deploy Job options
  • Take the various options, schedule times, job path, folders, etc as script arguments

In Part 1 we saw how to create a web-based repository configuration for JFrog Artifactory.  In Part 2 we will see how to create the Depot Software for deploying an application like Notepad++ to Windows targets.

 

Create a Depot Software for custom software

 

In this step you link the software payload's HTTP path with the Depot Software. The steps are the same as for creating any new Depot Software; I will show here how to do it for Custom Software. On the payload selection screen you can live-browse the web repositories and select the payload you want to deploy. To pull the latest version from the repository for every deployment, check the box "Download during deployment". Click OK.

Now you will come to the screen where you give the name and the deploy/undeploy commands for the software payload; an example for Notepad++ is sketched below. Click Next.
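
As a rough illustration, a silent install/uninstall pair for an NSIS-based installer such as Notepad++ usually looks like the following; the installer file name and install path are assumptions, so adjust them to your payload:

Install command: npp.installer.x64.exe /S
Uninstall command: "C:\Program Files\Notepad++\uninstall.exe" /S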

Go through the rest of the wizard screens as you would for any other Depot Software, click Finish, and you are all set!

 

Deploy the software to the selected servers

 

Now you can execute the Deploy Job just like any other Deploy Job by right-clicking on the Depot Software you created.


The need for a central repository for software payloads

 

A lot of customers use file servers as remote repositories for storing payloads. Having multiple remote repositories gives you scalability, but when you host software that is released frequently and want the latest version always available to deploy, keeping the repositories current becomes an involved effort for the team. This is especially true when your servers are distributed across departments, with different IT leads owning sets of servers that each have unique software requirements.

 

With release 8.9.04.001, TrueSight Server Automation offers a way to store all of your payloads in a central web repository and download the latest version of a payload through an HTTPS URL. This release includes an out-of-the-box integration with the generic repository type of JFrog Artifactory. The prerequisite is that you already have a JFrog Artifactory repository of the Generic Local type configured and have uploaded your software payloads to it.

 

You can be ready to deploy the software payloads in three simple steps:

  1. Configure the web repository in RCP Console
  2. Create a Depot Software
  3. Create a Deploy job

 

We will see how to perform these three steps in the three parts of this blog.

 

Configure the web repository in RCP Console

Create the web repository configuration with the following steps:

 

Click the Add button to add a new web repository configuration. Here you enter the Artifactory URL and the JFrog user and password. Click Next.

On the next screen you can grant the WebRepositoryConfig object-level authorization to the role you are currently logged into the RCP console with.

Click OK and then click Finish on the parent screen.

 

Continue to Part 2 to see how to create the Depot Software that uses the HTTPS URL and deploy the software.


I am excited to announce that TrueSight Server Automation 8.9.04.001 is GA with some critical updates.

 

  • Use the all-new REST APIs for core patching, documented to a well-defined standard. You can now work with catalogs, groups, patching jobs, roles, and servers through these APIs. The API endpoints are described using the Swagger specification.
  • This release adds out-of-the-box integration with the Generic Local repository type of JFrog Artifactory. You can add a software package to the Depot by specifying the path to a payload that exists on a configured web repository. After adding a software package, you can deploy it using a Deploy Job, just as you could earlier with an NSH path.
  • For smart groups, you no longer need to create a smart group from scratch every time you want to reuse the conditions from existing smart groups: as an administrator or operator, you can create a copy of a smart group in a patch catalog.
  • We have also included provisioning support for Unified Extensible Firmware Interface (UEFI) systems by using Windows Imaging Format (WIM) images.
  • A whole set of BLCLI commands is now available for working with job runs, patch catalogs, execution tasks, and smart patch catalog groups.
  • OS support has been extended with the inclusion of Windows Server 2019 and SUSE 15.

 

Apart from these, some critical fixes are included.

 

For more information please visit the link Version 8.9.04.001: Patch 1 for version 8.9.04 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentatio…

 

 

 


I am super excited to share that TrueSight Server Automation 8.9.04 release is GA!! Following are the salient features of this release.

 

Job qualification check

You can now set the maximum number of servers targeted by a job. Every time you perform any of the following actions, a message with the number of target servers is displayed. This option is available through the GUI only.

  • Create Job
  • Create Execution Task
  • Modify Job
  • Execute Against
  • Execute a job

PowerShell integration

  • Execute PowerShell scripts through Type 3 NSH Scripts and scriptutil.
  • Extended Objects can support PowerShell scripts natively.
  • Option to generate script logs for Network Shell Script Jobs for ease of parsing.
  • Ability to easily pass custom arguments when launching PowerShell scripts, as well as a simple configuration to set the PowerShell launch command.

 

 

Added platform support

  • Windows Server 2019 operating system
  • Ubuntu 18.04 operating system
  • POWER9 architecture

Compliance Content

  • CIS for SUSE 12 and Windows 2016
  • PCI for Windows 2016
  • DISA, PCI, and CIS templates for older OS versions have been upgraded to the latest template versions.

 

For more details please visit Service Pack 4: version 8.9.04 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentation


Coming up on October 18, 2018 is BMC’s annual user event, the BMC Exchange in New York City!

 


 

During this free event, there will be thought-provoking keynotes including global trends and best practices.  Also, you will hear from BMC experts and your peers in the Digital Service Operations (DSO) track.  Lastly, you get to mingle with everyone including BMC experts, our business partners, and your peers.  Pretty cool event to attend, right? 

 

In the DSO track, we are so excited to have 3 customers tell their stories. 

  • Cerner will speak about TrueSight Capacity Optimization and their story around automation and advanced analytics for capacity and planning future demand.   Check out Cerner’s BizOps 101 ebook
  • Park Place Technologies’ presentation will focus on how they leverage AI technology to transform organizations.
  • Freddie Mac will join us in the session about vulnerability management.  Learn how your organization can protect itself from security threats.  Hear how Freddie Mac is using BMC solutions. 

 

BMC product experts will also be present in the track and throughout the entire event.

  • Hear from the VP of Product Management on how to optimize multi-cloud performance, cost and security
  • Also, hear from experts on cloud adoption.  This session will review how TrueSight Cloud Operations provides you visibility and control needed to govern, secure, and manage costs for AWS, Azure, and Google Cloud services.

 

At the end of the day, there will be a networking reception with a raffle (or 2 or 3).  Stick around and talk to us and your peers.  See the products live in the solutions showcase. Chat with our partners.  Stay around and relax before heading home. 

 

Event Info:

Date: October 18th 2018

When: 8:30am – 7:00pm

  • Keynote begins at 9:30am
  • Track Sessions begin at 1:30pm
  • Networking Reception begins at 5:00pm

Where: 415 5th Ave, NY, NY 10016

 

For more information and to register, click here

 

Look forward to seeing you in NYC!  Oh, and comment below if you are planning to attend!  We are excited to meet you.


Happy to announce that TrueSight Server Automation 8.9.03 went GA on 12 June 2018, and BMC Server Automation has been renamed to TrueSight Server Automation.

Highlights of the release:

  • HIPAA Compliance for AIX 7.1
  • Ivanti 9.3 SDK based patching
    • Windows patching is now supported using the Ivanti (Shavlik) SDK 9.3
  • SMB2 support
    • Add Node for SMB2-enabled targets, and Agent Installer Jobs through the RCP console against SMB2-enabled targets
    • Unified Agent Installer against SMB2-enabled targets
  • SHA256 support

We are upgrading the secured certificates to use a SHA2-based signature algorithm. In this release, the updated certificates are used in the following places:

      • Self-Signed certificate used by Application Server
      • Agent certificate

 

  • Security Enhancements

JRE and MagniComp vulnerabilities have been fixed, along with other vulnerabilities discovered by the application security team.

Questions or feedback? Comment below to let us know

For more details please take a look at the documentation here - Service Pack 3: version 8.9.03 - Documentation for TrueSight Server Automation 8.9.00 - BMC Documentation


It can be useful to manually run yum or blyum from the command line with the repodata and include lists to get a better picture of what might be happening during analysis, to inspect the repodata, or to perform other troubleshooting steps.  The following steps can be taken to accomplish this:

 

For versions prior to 8.9.01

Run the Patching Job with the DEBUG_MODE_ENABLED property set to true.

Gather the per-host generated data from the application server - for example if the Job Name is 'RedHat Analysis' and it was run against the target 'red6-88.example.com' on Jun 11 you should see a directory:

<install dir>/NSH/tmp/debug/application_server/RedHat Analysis/Sat Jun 11 09-26-57 EDT 2016/red6-88.example.com

that contains:

analysis_err.log analysis_log.log analysis_res.log installed_rpms.log repo repodata.tar.gz yum_analysis.res yum.conf yum.err.log yum.lst

 

Copy the entire red6-88.example.com directory back to the target system (or any system you want to test these files on) into /tmp or some other location.

 

For 8.9.01 and later

The files mentioned above are kept in <rscd install>/Transactions/analysis_archive on the target system.  The three most recent runs should be present.

 

 

 

Once you've located the files used for analysis and copied them onto the target system:

Edit yum.conf and change the cachedir and reposdir entries to match the current directory path.  Following the pre-8.9.01 example where we copied the directory into /tmp:

cachedir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com
reposdir=//var/tmp/stage/LinuxCatalog_2002054_red6-88.example.com

are changed to match the new path - if you copied the red6-88.example.com to /tmp then:

cachedir=//tmp/red6-88.example.com
reposdir=//tmp/red6-88.example.com

then from within that directory you can run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update

if an include list was used you can do:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat rpm-includes.lst.old`

or

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C update `cat parsed-include.lst.old`

if the parsed list exists.  The parsed include list will not contain any rpms that are both installed on the target and present in the include list.  The parsed-include.lst was added in recent BSA versions to handle the situation where yum decides to update an rpm to the latest one in the catalog, instead of leaving it alone, when the include list contains the exact version of an rpm already installed on the system.

 

If it's a RedHat 7 target, the native yum is used, so use yum instead of blyum.

yum -c yum.conf -C update

 

You can also use the above process to copy the metadata from the target system to another system and run queries against the metadata, or run analysis with the same metadata and options against a test system.  For example, you could run:

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C search <rpmname>

or

/opt/bmc/bladelogic/NSH/bin/blyum -c yum.conf -C info <rpmname>

 

to see if some rpm is in the metadata or not.
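
You can also query the sqlite metadata directly with the sqlite3 client.  A minimal sketch, assuming repodata.tar.gz extracts into a repodata directory and that the metadata follows the usual createrepo sqlite schema (a packages table with name, version, and release columns):

tar xzf repodata.tar.gz
bunzip2 repodata/*-primary.sqlite.bz2
sqlite3 repodata/*-primary.sqlite "select name, version, release from packages where name like 'kernel%';"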


BMC Software is alerting customers who use BMC Server Automation (BSA) for managing Unix/Linux Targets to this Knowledge Article which highlights vulnerability CVE-2018-9310 in the Magnicomp Sysinfo solution used by BSA RSCD Agent to capture Hardware Information. The Knowledge Article contains the details of the issue and also how to obtain and deploy the fix.


September 25, 2018 - Important update to below alert:

BMC Software has negotiated an extension of support for Shavlik version 9.1 to provide BMC Server Automation users additional time to upgrade. With this extension, users now have until September 30, 2019 to upgrade the BMC Server Automation infrastructure and the BMC Server Automation RSCD agents running on Microsoft Windows target servers.

Users who cannot upgrade their BMC Server Automation environment and Windows targets to one of the patches/releases listed in this topic by December 31, 2018, must instead reconfigure their environment by December 31, 2018, in order for Windows patching to continue to function.

This flash and prior notification (below) are now modified to reflect this update.

_______________________________________________________________________________________________________________________________________________________

 

Updated original notification from May 2018:

 

BMC Software is alerting users of BMC Server Automation for Windows Patching, that action must be taken before December 31, 2018 to ensure continued functioning of Windows Patching within the BMC Server Automation product beyond that date.

 

One of the following actions must be taken:

 

a) Upgrade the BSA Environment, including the RSCD agents on all Windows Targets used for BSA Patch Analysis, by December 31, 2018

or

b) A minor configuration change must be made to the BSA Environment by December 31, 2018 followed by an upgrade before the extended EOL date of September 30, 2019

 

Please see this updated Flash Bulletin in the BSA Documentation for full details.


If you want to increase or decrease the logging level for the appserver, there's a pretty easy way to accomplish this: edit the appserver's log4j.properties file. There are already a number of logging class entries in there, and you may want to increase or decrease logging on some classes that are not listed.  There's a fairly simple way to figure out which classes are associated with which log entries: add the class to the log output pattern.  Near the top of log4j.properties, look for these two lines:

log4j.appender.C.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
log4j.appender.R.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n

 

We can add the class (category) by adding a %c, and we'll put it in brackets so it looks like the rest of the log:

log4j.appender.C.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%c] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n
log4j.appender.R.layout.ConversionPattern=[%d{DATE}] [%t] [%p] [%c] [%X{user}:%X{role}:%X{ip}] [%X{action}] %m%n

 

After about a minute, without restarting the appserver service, you will see new log entries like:

[19 Mar 2018 08:57:35,073] [Scheduled-System-Tasks-Thread-10] [INFO] [com.bladelogic.om.infra.app.service.appserver.AppserverMemoryMonitorTask] [System:System:] [Memory Monitor] Total JVM (B): 625135616,Free JVM (B): 453789168,Used JVM (B): 171346448,VSize (B): 8967577600,RSS (B): 1027321856,Used File Descriptors: 300,Used Work Item Threads: 0/100,Used NSH Proxy Threads: 0/15,Used Client Connections: 3/200,DB Client-Connection-Pool: 2/2/0/200/150/50,DB Job-Connection-Pool: 2/2/0/200/150/50,DB General-Connection-Pool: 1/1/0/200/150/50

 

Generally, we should then be able to add entries like:

log4j.logger.<class>=<level>

to the log4j.properties file.

 

For example, if I don't want to see INFO messages from compliance job runs, I would turn on the class logging and then run a compliance job.  I'd see some entries like:

[19 Mar 2018 08:57:36,903] [Job-Execution-2] [INFO] [com.bladelogic.om.infra.compliance.job.ComplianceJobExecutor] [BLAdmin:BLAdmins:] [Compliance] --JobRun-2000915,3-2052417-- Started running the job 'CisWin2012R2ComplianceJob' with priority 'NORMAL' on application server 'blapp89.example.com'(2,000,000)

I then add:

log4j.logger.com.bladelogic.om.infra.compliance.job.ComplianceJobExecutor=ERROR

to my log4j.properties file, wait a couple of minutes, and then re-run the job.  You should no longer see the INFO messages.  Conversely, I may be able to get more information out of this class by setting it to DEBUG, but that will depend on whether there is any debug logging already built into the class, which is not guaranteed.
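
For example, to try that on the same class:

log4j.logger.com.bladelogic.om.infra.compliance.job.ComplianceJobExecutor=DEBUG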

 

One thing to note: if you want to change the logging from DEBUG back to INFO, or from ERROR back to INFO, you must alter the logger line; you can't simply delete the line from the log4j.properties file.

 

If you elect to do the above to reduce logging, make sure that when you interact with BMC Support you make it clear that you have altered the logging levels, because during troubleshooting we may be looking for log messages you have excluded, and we will spend a lot of time figuring that out.

 

Using the above to enable debug logging can be useful while troubleshooting issues with the application server.  The nuclear option, of course, is to change the root logging level:

log4j.rootLogger=INFO, R, C

and if that is done you will likely need to increase the size of and number of rolled logs to handle the additional information being dumped into the files:

# Set the max size of the file

log4j.appender.R.MaxFileSize=20000KB

#Set the number of backup files to keep when rolling over the main file

log4j.appender.R.MaxBackupIndex=5

It's much better to use the above method to figure out which class you want more (or less) logging on, or, if you need debug information for a particular job run, to set the DEBUG_MODE_ENABLED property on that job.

 

Generally you should not have to alter the logger settings during normal operation.
