Cloud Lifecycle Mgmt

I am very pleased to announce that BSA 8.9 Service Pack 2 is now available.


Here are some highlights of the release:

1>     Compliance Support:

    1. DISA STIG content update for Windows 2016
    2. DISA STIG content update for RHEL 7

2>     Patch Analysis support for AIX Multibos

Users can now perform patch analysis on the standby instance of the Base Operating System (BOS), which is maintained in the same root volume group as the active BOS.

3>     Patching support for AIX on an Alternate Disk or Multiple Boot Operating System

Some versions of AIX can maintain multiple instances of the Base Operating System (BOS). The additional BOS instance can be maintained in the same root volume group (multibos) or on a separate disk in a separate root volume group (alternate disk). The user can boot any one instance of the BOS, which is called the active instance. An instance that has not been booted remains a standby instance. BMC Server Automation supports installation, maintenance, and technology-level updates on the standby BOS instance without affecting system files on the active BOS.

4>     Export Health Dashboard

Users can now export the entire Health Dashboard in HTML format.

5>     Deprecation of the Red Hat Network download option

Red Hat has transitioned from its Red Hat Network (RHN) hosted interface to a new Red Hat Subscription Management interface. To enable customers to seamlessly continue patching Red Hat Enterprise Linux, we have deprecated the RHN download option. All patching on RHEL targets must be performed using the CDN download option (it is selected by default when creating a patch catalog).

6>     Message to review the number of target servers

To prevent any unplanned outages in your data centre, BSA now allows you to review the number of servers targeted by a job.

7>     Database cleanup and BLCLI enhancements


It gives me immense pleasure to announce the general availability of CLM 4.6.06!


Here are some highlights of the release:


1. Azure integration enhancement


We have enriched the integration with Azure with the following additions:


  • Using a single blueprint, users can provision the same image (an installable resource in CLM) across any geographical location.
  • For greater redundancy, a single blueprint can deploy one set of machines in one Availability Set and another set in a different Availability Set.
  • For storage, Azure Managed Disks are now supported, including deployment from managed images.
  • Different replication types (Standard & Premium) can now be chosen for data and OS disks.
  • Provisioning resources into a user-defined Resource Group has been made easier.
  • Organizations can now set their own default naming conventions (name advisor) for any new Azure resources that are provisioned.
  • Support for internal (private-facing) Load Balancers has been added.
  • Existing SOIs can be onboarded into CLM. Compliance can be defined against to-be-deployed resources and triggered against already-deployed instances through CLM.


2. Cisco ACI support


Users can provision services into a Cisco ACI-enabled Pod/Network Container and perform Day 1 and Day 2 actions.


3. Network path support on POD level network


4. Hyper-V Onboarding


Admins can now onboard existing Hyper-V instances into CLM. The following support has been added:


  • Single-tier VM onboarding using Quickstart & API
  • Multi-tier VM onboarding using API
  • Day 2 actions on onboarded VMs


5. Custom Day 2 support


In the End User Portal, admins can now create UIs (custom panels) for custom Day 2 actions.


  • The following custom panels can be created:
    • Day 2 actions
    • Advanced TRO actions
    • Custom Server Operator actions


  • Data sources can be attached to custom fields to provide run-time data


  • A toolkit is provided to make creating custom panels for Day 1 and Day 2 actions simpler. Features include:
    • Support for creating custom panels for Day 1, Day 2 & Day 2 (Advanced TRO) actions
    • Support for associating data sources with custom fields
    • Support for deleting custom panels created using the toolkit


For more details, please refer to the product documentation.


Note: From CLM 4.6.06 onwards, the BMC Compact Rapid Deployment Stack and Small Rapid Deployment Stack are discontinued and no longer supported.




Sushan Bhattacharjee

Product Manager

BMC Software


I am very pleased to announce that the simplified Install Planner for CLM is now GA! The goal was to simplify the CLM installation and upgrade process.


Here are some highlights of the release:


1> Pre-Analyzer

We have introduced a Pre-Analyzer in the Install Planner. The Pre-Analyzer checks all system prerequisites before CLM installation. For a CLM upgrade, it scans the CLM servers and reports the upgrade readiness of the CLM environment. At the end of the Pre-Analyzer run, the user gets a detailed report of the checks that were performed, the status of each check (pass/fail), and the remediation. Thus, the Pre-Analyzer identifies the cause of any potential failure related to unmet prerequisites and leads to a successful CLM installation or upgrade.
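The shape of such a prerequisite run can be sketched in a few lines of Python. Note that the check names, report fields, and remediation strings below are purely illustrative; they are not the Pre-Analyzer's actual checks or output format:

```python
def run_preanalyzer(checks):
    """Run each prerequisite check and collect status plus remediation advice."""
    report = []
    for name, check_fn, remediation in checks:
        ok = check_fn()
        report.append({
            "check": name,
            "status": "PASS" if ok else "FAIL",
            # Remediation advice is only relevant for failed checks
            "remediation": None if ok else remediation,
        })
    return report

# Hypothetical checks standing in for the real system prerequisites
checks = [
    ("free disk space >= 20 GB", lambda: True,  "Free up space on the install volume"),
    ("required port available",  lambda: False, "Stop the process bound to the port"),
]

report = run_preanalyzer(checks)
for row in report:
    print(row["check"], "->", row["status"])
```

The key idea is that every failed check is paired with a concrete remediation, so the report is actionable before any installation work begins.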


2> Ease of use

The revamped Install Planner has fewer input screens and lets users integrate CLM components easily. It has the intelligence to install/upgrade individual components in the required sequence automatically.


3> Communication channel and Data Checker

It monitors the NSH-RSCD communication channel during the install/upgrade of individual components to avoid unwanted failures of the product install/upgrade. It also has an integrated utility that checks for data inconsistencies before a CLM upgrade.


4> Troubleshooting

The report generated by the Pre-Analyzer helps identify the potential cause of failure and the remediation actions to be performed, even before running the installation. Wherever possible, instead of redirecting users to documentation, the Install Planner provides precise steps and suggestions.


For more details, please refer to the product documentation.



Sushan Bhattacharjee

Product Manager

BMC Software



I am thrilled to announce the general availability of Cloud Lifecycle Management 4.6.05, our fifth Feature Pack for CLM 4.6!  We began employing this Feature Pack methodology over a year ago, immediately after the release of v4.6.  It allows us to deliver new value in the platform more frequently in a way that is easy for customers to deploy with minimal downtime -- one user described it as "a very clean install with no hassles," further noting that theirs was a "1 step and 4 click install."  We plan to continue with this direction for the foreseeable future.  (For existing CLM customers who are currently running on versions prior to 4.6, look for another announcement soon regarding improvements in the upgrade process to get you to v4.6, which will position you to take advantage of this Feature Pack approach.)



One of the most significant aspects of this release is a major advancement in our integration with Azure.  We've long provided integration with Azure, but (based on when we began our integration journey) it was based on Microsoft's Azure Service Management (ASM) integration architecture.  Microsoft later introduced a new architecture, Azure Resource Manager (ARM), and has since redirected customers away from ASM.  We have now redesigned our original integration capabilities to utilize ARM.


This means that the same platform-agnostic, graphical CLM Service Blueprints that model server resources, application packages, storage volumes, network attachments and FW/LB services, which might be used to provision to vSphere, AWS, Hyper-V, OpenStack, Azure (via ASM) or other platforms, can now also be used to provision to Azure via ARM.  This allows customers to realize a multi-cloud strategy with minimal friction, easily reusing Blueprint models and adjusting placement policies to target onboarded ARM-based Virtual Networks.


But rather than just replace ASM API calls with equivalent ARM API calls to achieve this, we've taken a new approach that takes advantage of the fundamental template-based design of ARM: when new requests for CLM Service Offerings are submitted, CLM will dynamically translate the associated Service Blueprint (which, while presented in visual fashion in the Service Designer, is persisted in JSON form) into an ARM template (which is another JSON syntax).  This is expected to give us more flexibility in the future to incorporate new Azure capabilities.
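The translation idea can be illustrated with a toy example. The blueprint fragment and field names below are invented for illustration and are far simpler than CLM's real Service Blueprint schema; only the ARM output skeleton ($schema, contentVersion, resources) follows the actual ARM template structure:

```python
import json

# Toy blueprint fragment (illustrative field names, not CLM's real JSON schema)
blueprint = {
    "servers": [
        {"name": "web01", "size": "Standard_D1", "image": "ubuntu-16.04"},
    ]
}

def to_arm_template(bp):
    """Render a minimal ARM deployment template from the toy blueprint."""
    resources = []
    for srv in bp["servers"]:
        resources.append({
            "type": "Microsoft.Compute/virtualMachines",
            "name": srv["name"],
            "properties": {
                "hardwareProfile": {"vmSize": srv["size"]},
            },
        })
    return {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": resources,
    }

template = to_arm_template(blueprint)
print(json.dumps(template, indent=2))
```

Because both sides are JSON, the translation is a structural mapping rather than a sequence of imperative API calls, which is what gives the design room to absorb new Azure resource types.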


Two other byproducts of this design are that 1) no new CLM provider is needed, but rather we are leveraging the same Azure Resource Manager template provider introduced in FP4, and 2) Service Blueprint references to externally-authored templates can be intermingled with standard server, application, storage and network objects, with output values from one step in the specified order of provisioning feeding as input to subsequent steps.  Note that you can use the original ASM-based CLM Azure provider and these new ARM-based replacement capabilities at the same time (relying on placement policies to determine which requests go through which architecture) as a transition strategy, but the original ASM-based provider will be deprecated soon and won't be enhanced any further.


On the topic of provisioning using externally-authored Azure Resource Manager templates, two other enhancements have been made in this release.  First, beyond just strings, now boolean and integer data types for template parameters are directly supported (and others are on the roadmap).  Second (and more significantly), those external template files can be more effectively secured.  When CLM submits a request to Azure that involves an external ARM template, it passes a URL reference to that file that the Azure public service must be able to resolve.  We considered resolving the template URL in CLM and simply passing the content of the template to Azure, but this would not adequately address the cases where the main template references one or more nested templates (which, in turn, could reference other nested templates, ad infinitum), and those subordinate templates would need to be resolvable by the Azure public service.  Instead, we are leveraging the Azure storage and SAS token model to render them both sufficiently accessible and sufficiently access-protected.  For each CLM Service Offering request, CLM generates a new access token that is only valid for a few moments and appends it to the template URL, which when given to Azure, authorizes Azure to securely resolve that template and any referenced nested templates (which have to be in the same storage location) for just long enough to fulfill the request.
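The access pattern described above amounts to a signed, expiring URL. As a rough sketch of the concept (this is a generic HMAC-signed URL with invented query parameters, not Azure's actual SAS token canonicalization or parameter set):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_template_url(url, key, ttl_seconds=300):
    """Append a short-lived expiry + signature to a template URL, in the
    spirit of a SAS token: valid only briefly, and only for this resource."""
    expiry = int(time.time()) + ttl_seconds
    msg = f"{url}|{expiry}".encode()
    sig = base64.urlsafe_b64encode(
        hmac.new(key, msg, hashlib.sha256).digest()
    ).decode()
    # The storage service would recompute the HMAC and reject expired tokens
    return f"{url}?{urlencode({'se': expiry, 'sig': sig})}"

signed = sign_template_url("https://storage.example/templates/main.json", b"account-key")
print(signed)
```

The essential property is the same as in CLM's approach: the bare URL is useless without the token, and the token expires moments after the request is fulfilled.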



Moving on to another category of enhancements, we have many CLM customers that happen to be reaching their next hardware and/or vSphere refresh interval (or want to transition to a vCenter Appliance), are looking to stand up their new infrastructure alongside what is currently in use, and would like to migrate existing CLM-managed workloads over to these new resources.  Other customers are looking to move already-deployed workloads from their original network container or compute cluster destinations to different locations, either to achieve better balance of workloads within those locations or to deliberately create a temporary imbalance in anticipation of an upcoming, very large deployment request.  And still others would like to deploy workloads to a quarantined network container where additional configuration and testing can be performed before moving them to their ultimately intended destinations.


Historically, these use cases have presented challenges due to the relationships that CLM maintains for managed resources, but this release offers a solution.  You can use the native vSphere VM migration facilities to actually move the VMs to the desired location.  Then the CLM administrator can invoke a new CLM SDK call to synchronize CLM's relationship information on the affected VMs with their new as-is state.  To make this scanning operation more efficient, the CLM administrator can choose from four different calls that scope the set of VMs to be scanned for necessary updates:

  • cluster-migrate: scopes the scan to specified virtual clusters where VMs originally resided
  • host-migrate: scopes the scan to specified ESX hosts where VMs originally resided, which may have been moved from one vCenter to another
  • resource-pool-migrate: scopes the scan to specified resource pools where VMs originally resided
  • service-migrate: scopes the scan to VMs in specified Service Offering Instances


Running these commands will allow CLM to update its internal data, including associations to virtual clusters, resource pools and/or virtualization hosts, and networking details such as IP address, switch & switch port connections and network container & network zone assignment.  It will allow CLM to update DNS and IPAM entries for these servers as well.


You can invoke these SDK calls with the "--dryrun true" option to get an assessment of which VMs CLM recognizes are in new locations and what synchronization updates will be made.  The resultant report will be equivalent to when the call is made without that option, after synchronization updates have been made.  Performing this synchronization step will allow CLM to continue managing these servers throughout their lifespans, including altering the running state, modifying resource configurations and deploying application packages.
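Conceptually, the four calls differ only in how they filter the inventory of VMs to re-scan. A toy illustration follows; the dictionary keys and inventory shape are made up for the sketch, while the real SDK operates on CLM's internal data model:

```python
def scope_vms(inventory, mode, targets):
    """Select the VMs to re-scan, mirroring the four *-migrate scopes."""
    key = {
        "cluster-migrate": "cluster",            # original virtual cluster
        "host-migrate": "host",                  # original ESX host
        "resource-pool-migrate": "resource_pool",
        "service-migrate": "soi",                # Service Offering Instance
    }[mode]
    return [vm for vm in inventory if vm[key] in targets]

inventory = [
    {"name": "vm1", "cluster": "cl-old", "host": "esx1", "resource_pool": "rp1", "soi": "soi-7"},
    {"name": "vm2", "cluster": "cl-new", "host": "esx2", "resource_pool": "rp1", "soi": "soi-8"},
]

# A dry run would report these VMs without updating CLM's data
to_sync = scope_vms(inventory, "cluster-migrate", {"cl-old"})
print([vm["name"] for vm in to_sync])  # ['vm1']
```

Narrowing the scan this way is what keeps the synchronization efficient: only VMs that could plausibly have moved are inspected, rather than the entire managed estate.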



In CLM 4.6, we introduced a provider for Docker, allowing users to provision Docker containers to Docker hosts, and assuring administrators that only images from approved repositories are used, that IT Processes such as Change Management are being fulfilled automatically without impact to the end users, and that containers are deployed to appropriate Docker hosts based on policy.


In this Feature Pack, we are extending that capability to support provisioning to Docker Swarm clusters.  This is accomplished by enhancing the existing Docker provider, so the same container-oriented Service Blueprints can, based on placement policies, be deployed to newly-onboarded Docker Swarm clusters.  And onboarding Swarm clusters is accomplished much like onboarding standalone Docker hosts, but references the Swarm Manager port rather than the Docker Engine port on the host server.  When containers are deployed to that Swarm Manager, it will distribute them among the nodes in the cluster according to its configured algorithm.
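The onboarding difference described above boils down to which port the endpoint references. As a sketch (2376 and 3376 are the conventional TLS defaults for the Docker Engine and the classic Swarm Manager respectively; your environment's ports, and CLM's actual onboarding inputs, may differ):

```python
def docker_endpoint(host, swarm=False):
    """Build the API endpoint to target: the Swarm Manager port for a
    cluster, or the Docker Engine port for a standalone host."""
    port = 3376 if swarm else 2376
    return f"tcp://{host}:{port}"

print(docker_endpoint("docker01.example.com"))             # standalone host
print(docker_endpoint("swarm01.example.com", swarm=True))  # Swarm cluster
```

Because the Swarm Manager speaks the same API as a single Engine, everything downstream of the endpoint choice (blueprints, placement policies) stays unchanged.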


Note that, independent from this new capability, CLM can still be used to provision Docker Engine hosts or Docker Swarm clusters (as well as other cluster and schedule management environments), and can apply ongoing compliance evaluations to those execution environments, leveraging BladeLogic out-of-the-box compliance materials such as Security Content Automation Protocol (SCAP) 1.2.



CLM has for many years provided its own flavor of Software Defined Networking, automating the configuration of network services on standard network equipment based on CLM Network and Service Blueprints.  It uses a model of Pods and Network Containers; Pods represent a set of physical infrastructure that includes those standard network devices, and Network Containers are logical boundaries created within that infrastructure that segregate network traffic between different tenants or organizations within the same cloud service.


Organizations are increasingly interested in using new Software Defined Networking platforms, which separate the control plane from the data plane.  One of the commonly chosen SDN platforms is VMware NSX, and in this release, CLM has extended its Pod and Container Model to support VMware NSX.


This allows CLM administrators to create and update Pods and Network Containers on NSX, which can include specification of VxLAN-based subnets and NSX Distributed Firewalls.  It accomplishes this via "injection templates" in BladeLogic Network Automation, which embeds REST API commands in the templates sent to NSX Manager, allowing for additional flexibility to incorporate any other necessary NSX REST calls.  These Network Blueprints can also incorporate non-NSX-based Load Balancers and Perimeter Firewalls to create and manage Network Containers with a wide range of capabilities.


CLM Service Blueprint specifications of NIC attachments to subnets, load balancer pool entries and network paths will be dynamically rendered into the necessary updates to the NSX environment, virtual switches, load balancers and perimeter firewalls during fulfillment of CLM Service Offering requests.


(Stay tuned for an announcement regarding similar capabilities for the Cisco ACI SDN platform in the near future.)



Back in CLM 4.5, we introduced a much more effective way for administrators to troubleshoot when a Service Offering request fails.  CLM proactively notifies administrators when such a failure occurs, and provides the administrator with actionable information, including a synopsis of what failed, the estimated root cause, the recommended corrective action and complete details of each step in the provisioning sequence that was performed up until the failure.  It allows the administrator to retry the request after attempting to correct the failure, leveraging the same change approval (if relevant) and generated names.  Ideally, an administrator can respond to and complete the request before the requesting user even realizes there was a problem.  And in situations where more analysis is required, it provides the ability to download snippets across multiple log files related to just that transaction.


Then in CLM 4.6 Feature Pack 3, we extended this capability in two ways: making the rich details (such as tag analysis during placement) available not just on failed requests but also on successful requests, which can even be viewed while the provisioning process is still underway; and we provided a configuration option to make this level of detail available to end users rather than strictly administrators.


Now in CLM 4.6 Feature Pack 5, we have applied these capabilities to actions performed on already-provisioned resources.  This includes state changes such as shutdown, power off or power on, as well as configuration changes such as adding CPUs, memory, disks or NICs.  This will enable administrators to provide even higher levels of service and satisfaction to end users, and in certain cases even enable end users to more effectively resolve issues themselves.



Feature Pack 5 is available now on the Electronic Product Download (EPD) portion of the BMC Support site.  It can be applied quickly to any CLM 4.6 environment, regardless of which Feature Pack, if any, has already been deployed (as every Feature Pack is cumulative of all prior Feature Packs' capabilities).  Download and start using it today, and send me your thoughts!



Steve Anderson

Product Line Manager

BMC Software



I am excited to announce the general availability of Cloud Lifecycle Management 4.6.04!  In an effort to deliver valuable new capabilities in a faster and easier-to-consume fashion, we have been delivering more frequent Feature Pack releases, of which this is the fourth this year (on top of the 4.6.00 release at the beginning of 2016).


This latest Feature Pack expands our integration into Microsoft Azure to allow customers to provision HDInsight Clusters, Redis Caches, IoT Hubs or virtually any other service type via Azure Resource Manager templates.  On AWS, it allows for simpler definitions of access rules via reusable Security Groups.  It also extends our unique strengths in compliance of services, as well as further enhancing programmatic use of CLM by dev/test users, and is the easiest update deployment yet.


At the same time, we have also published the Application Centric Cloud integration that we demonstrated at BMC Engage earlier this month, through which CLM can truly deploy complete application environments on-demand, including the "last mile" of development teams' internal application builds.  Here's an example of how this can be used.


In this example, when a new build of the JPetStore custom application is triggered in Jenkins, a Jenkins plugin to BMC Release Package and Deployment (BRPD) automatically creates a new application deployment instance, complete with all of the tier-specific steps to deploy a running instance of this new version:



When a Dev/Test user wants a new JPetStore application environment, he or she can choose the particular build number to deploy, including the just-completed latest build:

JPetStore request.png


CLM will deploy all of the server, storage and networking infrastructure that is needed (which may vary among different single and multi-tier offering sizes), along with all of the platform components (web server platforms, application server platforms, database server platforms, etc.), and finally the custom application elements (via BRPD) and even bootstrap data.  Later, the dev/test user can also update the JPetStore application on an existing environment with a newer build:



This will help dev/test users eliminate time they currently spend deploying and configuring custom builds within these environments, reducing distractions and enabling them to focus more of their time on actually developing and testing new application releases.  These environments are created in consistent fashion, which further reduces time spent debugging manually-induced variations between developer, tester and ultimately production environments.


Diving further into other new capabilities, the base 4.6.00 release of CLM back in January included the framework for a new provider type for external template-based provisioning, which was designed to accommodate implementations for AWS CloudFormation, Azure Resource Manager, OpenStack Heat and others, but did not include any out-of-the-box provider implementations.  Feature Pack 1 (4.6.01) followed a few weeks later, delivering a provider implementation for AWS CloudFormation.  Feature Pack 4 (4.6.04) now delivers a provider implementation for Azure Resource Manager.


The intention for supporting external template-based provisioning is to extend beyond the kinds of resources that CLM models in its Service Blueprints (such as servers, applications, storage, relational databases and networking) to the entire spectrum of resource types that a cloud service might offer (such as analytics, non-relational databases, in-memory cache, server-less code execution and mobile services).  And if a cloud service introduces a new type of resource (and it is supported via its template language), it can be immediately leveraged via CLM.  Using CLM to drive provisioning of these templates expands the scope of visibility that the organization has into what resources are provisioned where, both internally and externally.  It also allows for integration with necessary IT processes such as change management and automated CMDB updates.


Feature Pack 4 also simplifies the administration of external template references by automatically discovering template parameters and creating corresponding CLM Service Blueprint parameters, including enumerated parameter value choices that can be displayed as dropdown selections during service requests.  This greatly simplifies the CLM Administrator's steps when incorporating an external template file.  Here's what it looks like:


A CLM Service Blueprint that references an ARM template now presents a button to Discover Parameters from that template:

SBP pointing to ARM template.png


A portion of the referenced template is shown here, highlighting its parameters:

ARM template.png


Upon clicking "Discover Parameters", CLM automatically creates Service Blueprint parameters from those template parameters (which can subsequently be set to be displayed to end users during Service Offering requests):

SBP with parameters.png
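Parameter discovery leans on the fact that ARM templates declare their parameters in a well-defined JSON section, including types, defaults and allowed values. A minimal sketch of the idea follows; the descriptor fields produced below are illustrative, not CLM's actual Service Blueprint parameter format:

```python
import json

# A minimal ARM template; the "parameters" section follows the real ARM schema
arm_template = json.loads("""
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmSize": {
      "type": "string",
      "defaultValue": "Standard_D1",
      "allowedValues": ["Standard_D1", "Standard_D2"]
    },
    "diskCount": { "type": "int", "defaultValue": 2 }
  },
  "resources": []
}
""")

def discover_parameters(template):
    """Map each ARM template parameter to a blueprint-style descriptor;
    'allowedValues' become dropdown choices for the service request form."""
    return [
        {
            "name": name,
            "type": spec["type"],
            "default": spec.get("defaultValue"),
            "choices": spec.get("allowedValues"),  # None means free-form input
        }
        for name, spec in template.get("parameters", {}).items()
    ]

params = discover_parameters(arm_template)
print(params[0]["name"], params[0]["choices"])
```

This is why the feature can support enumerated dropdowns "for free": the template itself carries the list of allowed values.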


Another area of improvement in Feature Pack 4 addresses programmatic invocation of CLM, i.e. via the API or SDK.  Such calls now generate the same Activity Log tracking details as requests submitted via the CLM End User Portal.  This brings more parity between the UI-oriented use of CLM and the more developer-centric means of using CLM, ensuring that the same audit trail and visibility is available no matter how users interact with CLM.


In CLM 4.5, new troubleshooting details were introduced for CLM Administrators, enabling them to quickly identify errors, take corrective action and restart the provisioning request.  CLM 4.6.03 introduced the ability to display this level of detail for all requests, whether failed or successful or even still in progress, for example providing insight on how placement decisions were made.  Insight into the processing of tags during provisioning can greatly assist in properly setting tags to achieve your desired placement strategy.  This visibility can be, based on CLM configuration, made available to CLM End Users as well, since some errors can be addressed by end users without the involvement of CLM Administrators.  CLM 4.6.04 now provides AWS and Azure-specific troubleshooting & progress details as well.


Feature Pack 4 provides other improvements in its integration with AWS, including the ability to utilize any new EC2 Instance Types and Instance Families, even if introduced by AWS at a later point in time.  It also allows CLM to associate new EC2 instances with pre-existing Security Groups (e.g. for standard access rules to be used on many/all servers), in addition to the Security Groups dynamically generated from Network Paths defined in CLM Service Blueprints.


Following up on the compliance visibility capability introduced in CLM 4.6.00, where newly provisioned servers are added to BladeLogic Server Automation Smart Groups for recurring compliance scanning jobs, Feature Pack 4 offers an option to ensure that the compliance job is executed immediately during the service request fulfillment.  This helps ensure that service offerings are measured for compliance as soon as they are provisioned and, if matched with automated remediation in BladeLogic Server Automation, brought into compliance from the outset.


The installation of Feature Pack 4 is simple to execute, taking on the order of an hour to complete, and includes the Azure and OpenStack providers (which were previously additional installation items).  It requires an existing environment running on CLM 4.6 (either the base release or any prior 4.6 Feature Pack level), and includes all updates delivered in previous Feature Packs.  I strongly encourage you to download and deploy this latest update and begin taking advantage of its expanded capabilities!


-- Steve Anderson, Product Line Manager, BMC



P.S. In case you missed the earlier CLM 4.6 Feature Packs, here's a brief review of their capabilities:


  • CLM 4.6.01 (Feature Pack 1):
    • AWS CloudFormation external template provider
  • CLM 4.6.02 (Feature Pack 2):
    • ITSM Foundation Data brownfield synchronization tool
  • CLM 4.6.03 (Feature Pack 3):
    • Self Service
      • Troubleshooting/progress data available to end users
      • End users can request services for & transfer services to other users
    • APIs
      • SDK invocation of custom operator actions
      • Management API for detailed CLM and component version numbers
    • Platform Support
      • Service-level compliance visibility for more platforms
      • Azure support for private network connections, Security Groups, new instance types
    • Networking
      • Support for VxLANs
      • Improved firewall rule management performance
      • Load balancer parameterization

BMC Cloud Lifecycle Management 4.6.03 includes the following themes:

4.6.03 is a cumulative release that includes the enhancements in 4.6.01 (AWS CloudFormation Template support) and 4.6.02 (BMC ITSM Brownfield utility).


BMC recommends that you use version 4.6.03 if you have not already started using 4.6.01 or 4.6.02.


Provider Parity

This theme includes enhancements for third-party provider integration.



Microsoft Azure

  • New instance types that are available in Azure
  • Network Security Groups (NSGs) to control inbound and outbound network traffic from the VM
  • VPN-based private networks to secure management traffic to external, public cloud-based resources
  • Compliance at the service level

Amazon Web Services (AWS)

  • Compliance at service level


OpenStack

  • Support for the Liberty and Mitaka releases
  • Compliance at the service level

Cloud Self-Service and Brokering

This theme includes enhancements for BMC Cloud Lifecycle Management users.



  • Enable end users to transfer ownership and request services on behalf of another user
  • Rename tenants
  • Change the description of a service offering instance from the Administration Console
  • Specify the Azure Government URL when setting up an Azure user account

Tenant administrators

  • View the progress details of a service offering instance (SOI)

End users

  • Change the description of an SOI from the My Cloud Services console
  • View the progress details of an SOI

API and Networking

This theme includes API, SDK, and networking enhancements.



  • New SDK commands allow execution of custom operator actions
  • The ManagementApplication API shows detailed version numbers of BMC Cloud Lifecycle Management and its key components


For more details, see the documentation.


For any queries regarding this release, contact Steve Anderson.


Downloading the installer

Go to the BMC Support website > Product Downloads section to download the installer for 4.6.03.


We often hear from our customers that videos are a really helpful way to learn how to get the most out of our products. Well, we have been doing something about that feedback. The BMC Cloud Lifecycle Management (CLM) team, (including folks in Support, the field, and R&D) has been busy making more "how to" videos to help you harness the power of CLM. In fact, we have added over 100 new CLM videos in the past few months! Please check them out in the How to Videos - Cloud Lifecycle Management playlist on the BMC Communities YouTube channel. The playlist is ordered by date, so you will see the newest ones listed first. Please look them over and let us know if you find these videos helpful by Liking or Disliking the video or leaving a comment.

If you would like to see the videos in context with the related technical documentation, you can access the listing of topics containing videos here.


A couple of critical broken-authentication vulnerabilities were reported to BMC by ERNW GmbH (an independent research company) and will be disclosed publicly at the Troopers 2016 conference in Heidelberg on March 16th, 2016. These vulnerabilities allow remote unauthorized access to Linux/Unix RSCD agents using the agents’ RPC API. Windows agents are not affected. For more detailed information, please see the following Flash Notification.


The security of our solutions is of the utmost importance to BMC. Once we were made aware of the issue, we investigated, developed a fix, and began the process of notifying customers.


As part of the notification process, BMC has published a Knowledge Article with patching information as well as general recommendations for reducing the likelihood of successful exploitation. We strongly recommend that BSA customers follow the instructions within the article. Should you have questions or need assistance, please contact BMC Customer Support.


By John Stamps

This video describes step-by-step how to properly configure the My Cloud Services console. It also includes helpful troubleshooting tips so that the console displays properly.


This video has been added to the Configuring the My Cloud Services Console topic in the BMC Cloud Lifecycle Management online technical documentation. Let us know if you find the video useful by rating or commenting on the blog post.

Share This:

By John Stamps.


Three new videos added to the BMC Cloud Lifecycle Management documentation provide detailed conceptual information and step-by-step instructions on how to change passwords in BMC Cloud Lifecycle Management applications. They demonstrate the password dependencies between the applications in the BMC Cloud Lifecycle Management solution.

The following video shows how to change the BBNA and BBSA passwords:

The following video shows how to change the AR System Demo password:

The following video shows how to change the CSM Super User password:

Let us know if you find the videos helpful by commenting on the blog post or the documentation here:

Share This:



The BMC training schedule is posted for January and February.  We want you to be successful with BMC solutions.  We run classes year-round and worldwide across the BMC product lines.  Below are the classes, listed by class/product name.

Review the schedule below and register today.  Please check BMC Academy for the latest availability; BMC reserves the right to cancel or change the schedule.  View our cancellation policy and FAQs.   As always, check back in BMC Academy for the most up-to-date schedule.

To see all our courses by product/solution, view our training paths.

Also, BMC offers accreditations and certifications across all product lines; learn more.


For questions, contact us

Americas -


Asia Pacific -


BMC Atrium Orchestrator 7.6: Foundation - Part 2

22 February / EMEA / Munich, DE

BMC Atrium Orchestrator 7.6: Foundation - Part 3
BMC Atrium Orchestrator 7.6: Foundation - Part 3

25 January / Americas / Online

15 February / Asia Pacific / Online

22 February / EMEA / Online

BMC Atrium Orchestrator 7.8: Foundation - Part 2

18 January / Americas / Online

8 February / Asia Pacific / Online

15 February / EMEA / Online

15 February / Americas / San Jose, CA

22 February / Americas / Online

BMC Cloud Lifecycle Management 4.5: Essentials

18 January / Asia Pacific / Online

18 January / Americas / Houston, TX

25 January / EMEA / Winnersh, UK

1 February / Americas / Online

15 February / EMEA / Online

BMC Certified Professional: BMC Cloud Lifecycle Management 4.x Practical Exam

4 January / Americas / Houston, TX

4 January / EMEA / Winnersh, UK

4 January / Asia Pacific / Pune, IN

4 January / Asia Pacific / Bangalore, IN

1 February / Americas / Houston, TX

1 February / EMEA / Winnersh, UK

1 February / Asia Pacific / Pune, IN

1 February / Americas / Mexico City, MX

1 February / Americas / Sao Paulo, BR

1 February / Asia Pacific / Bangalore, IN

29 February / Americas / Houston, TX

29 February / EMEA / Winnersh, UK

29 February / Asia Pacific / Pune, IN

29 February / Americas / Mexico City, MX

29 February / Americas / Sao Paulo, BR

29 February / Asia Pacific / Bangalore, IN

Share This:

Although CLM Callouts have been around for quite some time now, I still get questions about them all the time. In this blog post I will try to clarify the Callout concepts, answer some commonly asked questions, and share best practices. We will also walk through Callout registration and request processing. So, bear with me as we call out on this small journey!



1. Callouts Overview

As the name suggests, a Callout is essentially a piece of code that CLM can invoke (call out to) during a use-case flow. The best way to look at a Callout is as a means of influencing CLM's behavior. Yes, "influencing" is a loaded term. It could be as simple as registering my VM in DNS, or as involved as working with an Order Fulfillment system to process the order before commencing provisioning. Following are some other common use cases where customers use CLM Callouts:

  • Have the VM/server join a domain
  • Notify a third-party application (such as Backup or Monitoring) about the existence of the newly provisioned server
  • Perform any prerequisite security configurations that are required for provisioning
  • Update a third-party (non-BMC) CMDB
  • In releases prior to CLM 4.5, Change Management integration often used a Callout-based approach.
    Note: From CLM 4.5 onward, please use the new Change Management Provider architecture to integrate with Change Management systems that are not supported out-of-the-box.


Callouts are typically written as BMC Atrium Orchestrator (BAO) workflows. That does not mean the entire code has to be written in BAO. Often customers have scripts and other forms of reusable code they would like to integrate in a Callout scenario. So, think of the Callout BAO workflow as the launch pad/integration layer for such scenarios, handling the translation of data and responses.


Although it is not commonly known, Callouts do have types. There are three types of Callouts that CLM supports.

  1. Regular/Standard Callout: OK, this is not really a type; I just made it up. These Callouts are written in BAO and are the most commonly used. CLM invokes the Callout and waits for its completion. The Callout returns the result of the processing (typically success or failure, with a message). CLM handles the result and proceeds with the next steps accordingly. In this post I will focus on Standard Callouts only.
  2. HTTP Callout: These are not commonly used and are typically meant for complex scenarios where the Callout processing is going to take a long time. Now, "long" is relative, but think of something that will typically run for hours or days (such as a Purchase Order Processing task that may involve manual steps as well). These Callouts are front-ended by a Java Servlet-style gateway, which receives the Callout invocation and simply informs CLM that the task processing has begun. CLM hibernates the calling task, so that no resources are consumed on the CLM side. Once the Callout has finished, the Servlet (or an associated piece of code) informs CLM about the result of the operation. CLM handles the result and proceeds with the next steps accordingly.
  3. Notification Callout: These are fire-and-forget style Callouts. That is, CLM invokes the Callout but does not wait for its completion. As the name suggests, these are intended simply to notify third-party systems.



2. Think object, not time, when thinking about Callouts


In my discussions, this is the biggest mental shift required to use Callouts effectively. Often when we think of a use-case flow, we think in terms of a sequence of operations and events. So, it is natural to ask: how do I know where to "attach my Callout" in a CLM flow? Can someone provide me a list? These are fair questions. But let's park them for a moment and see whether you still have them at the end of this section.


A Callout is always attached to an operation on a CLM object. An object here means an entity like a Service instance, Virtual Machine, or Application Software, and an operation means an action on that object. A typical example is the creation of an object. A Callout can run either before the operation (a.k.a. a pre-callout) or after the operation (a.k.a. a post-callout). For example, if you wanted to do DNS registration for a VM with a static IP, a VM create pre-callout would be the most appropriate place.


Let's talk a bit about this "think object, not time" aspect. When associating Callouts, follow this simple guideline.

  • Identify the class of the object of interest, using CLM's Object Model as a reference. In most cases, you should be able to identify the object and class you are interested in. For example, if you are interested in DNS registration, you are most likely interested in the VM (VirtualGuest) itself. But if you are doing something at the Firewall Rule level, the object of interest would be FirewallRule. Simple, right?
    Object model - BMC Cloud Lifecycle Management 4.5 - BMC Documentation
  • Determine whether your processing needs to happen before or after the object comes into existence, or before/after one of its operations. Again, the Object Model reference above should help you determine the operation of interest.
  • Once you know the class and operation, you can register one or more pre/post Callouts against it, and CLM will honor the registration regardless of where that operation falls in the time sequence.


To drive this point home slightly differently: when you start thinking in objects, you start covering additional use cases that you may not have considered at the outset. For example, a DNS pre-callout for a VM is useful for provisioning. But it will also handle any servers added via CLM's auto-scale or additional VMs added via CLM's Add Server capability. So, you see, thinking in objects saves you the time and effort of digging into time sequences. Instead, focus on the object of interest and CLM will invoke the Callout whenever that object and operation come into play. Hopefully, this has started to make more sense now.


Having said that, realistically there are a handful of objects and operations that customers use most commonly. I am summarizing them here for quick reference.


Class                   | Operation                   | Callout Usage Example
------------------------|-----------------------------|----------------------------------------------------------------------
ServiceOfferingInstance | BULKCREATE                  | A pre-callout for Order Processing.
ServiceOfferingInstance | CONSTRUCTOR                 | A post-callout for updating a third-party CMDB.
ServiceOfferingInstance | applyOptionChoice           | A pre-callout for doing any processing prior to applying Option Choice(s).
ComputeContainer        | CONSTRUCTOR                 | A post-callout to register a server (virtual/physical) for monitoring.
ComputeContainer        | DESTRUCTOR and DECOMMISSION | A post-callout to do any clean-up after a server (virtual/physical) has been decommissioned or rolled back.
ComputeContainer        | EXECUTE_SCRIPT              | A pre-callout to perform the logic for a Custom Operator Action (such as create snapshot).
ComputeContainer        | START                       | A pre-callout to perform processing before starting a server.
ComputeContainer        | STOP                        | A pre-callout to perform processing before stopping a server.
VirtualGuest            | CONSTRUCTOR                 | A pre-callout to do DNS registration for a virtual server.
VirtualGuest            | DESTRUCTOR                  | A post-callout to do any clean-up after decommission/rollback of a virtual server (such as cleaning up DNS).
OperatingSystem         | CONSTRUCTOR                 | A pre-callout to do DNS registration for a physical server.
ApplicationSoftware     | CONSTRUCTOR                 | A pre-callout to do any specific processing prior to initiating a software install.
NetworkConnector        | CONSTRUCTOR                 | A pre-callout to do any specific processing before CLM acquires NIC information (such as IP address, network/portgroup/VLAN info, etc.).
NetworkConnector        | DESTRUCTOR                  | A post-callout to do any specific processing after CLM has decommissioned/removed a NIC.


3. Commonly asked questions about Callouts


Q. How long can a Callout take?

A. The short answer is 'forever'. Yes, a Callout can in principle run forever, and CLM will wait for its completion. A Callout invocation can even withstand CLM Platform Manager restarts. CLM handles Callout invocation intelligently by hibernating the parent task that invoked the Callout, which reduces undue load on the system. The Callout workflow invoked by CLM (keeping a Standard Callout in mind) is the one CLM waits on; this workflow can in turn call other BAO workflows and utilities/code.


Q. Can a Callout abort further processing? Can a Callout return a status? Can it return an error message that will be shown in the CLM Activities results?

A. The short answer is 'yes' to all of the above. A Callout response is well defined: it can return success or failure, and in the case of failure it can also return an error message. I have attached some sample Callout response XMLs to this post for reference.


Q. Can I reuse my existing scripts with Callouts?

A. While it is our intent to promote reusability with Callouts, the answer really depends on what type of script it is and how reusable it is. BAO provides various adapters for typical integration scenarios, such as the various command-line adapters, and it is possible to invoke a script using such an adapter. However, the script should support reuse, or hopefully require only minor tweaks. Keep the target platform's idiosyncrasies in mind as well. For example, in UNIX-style scripting it is well understood how to interpret the exit status of a script and how to fetch the error message; that may not be true for all platforms. The point being: ensure your script is capable of handling the use-case needs and the automation desired.
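As a minimal sketch of that UNIX convention (the script and wrapper here are hypothetical, not part of CLM or BAO), a caller checks the exit status first and reads the error message from stderr:

```python
import subprocess

# Hypothetical failing script, standing in for a reusable site script:
# the UNIX convention is exit status 0 for success, non-zero for failure,
# with the error message written to stderr.
result = subprocess.run(
    ["sh", "-c", "echo 'disk not found' >&2; exit 3"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # A Callout wrapper would map this onto a failure response for CLM.
    error_message = result.stderr.strip()
```

The same pattern applies from a BAO command-line adapter: check the exit status first, then use stderr (or stdout, depending on the script) as the failure message returned to CLM.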


Q. Can a Callout change the object it is associated with?

A. No. For those familiar with application architecture, it is important to know that Callouts do not follow the Decorator pattern, which lets you decorate (manipulate) objects. Callouts are purely meant to influence behavior.


Q. I also see things like Blueprint Post-Deployment Actions, Pre/Post Install Actions per software, and Custom Actions via Option Choices. How are these different from Callouts, and when should I use them over Callouts?

A. That is a loaded question, so let me break it into simpler points.

  • Callouts are system-wide and global in nature. That is, any time CLM invokes the operation on the object for which you have registered one or more Callouts, CLM will invoke the Callout(s). Of course, you can write logic inside a Callout to decide when to do processing and when to do nothing. But CLM will invoke the Callout if its object and operation are in the path of that use-case flow. Period.
  • Blueprint Post-Deployment Actions (PDAs), or the ones injected using Service Catalog Option Choices, are typically meant for specific use cases only. For example, I may have a PDA to format and partition a newly added disk, but this does not need to happen for every CLM provisioning flow; it applies only when the user requested that specific Blueprint or Option Choice. PDAs can also be of type NSH (a BMC BladeLogic Server Automation script). Callouts can only be written in BAO (at least the main Callout workflow).
  • The Pre/Post Install Actions per software invoke that action only for that specific software. If, on the contrary, you want to invoke a BAO workflow for every software installed, consider using an ApplicationSoftware CONSTRUCTOR pre/post Callout instead.
  • Custom Actions via Option Choices are similar to PDAs but work in the context of a given Option Choice. For example, an Add Disk option choice could include a Custom Action to format and partition the disk. Again, Custom Actions can be of type BAO workflow or NSH.



4. How do I register and use Callouts?


We have talked a lot about Callout concepts and hopefully clarified some questions that might have been on your mind. Let's see these in action now.


Callout registration is a very simple process. Following is a screenshot that demonstrates the process.

  • There is a Callout Workspace that is part of the CLM Admin Portal->Configuration.
  • You simply create an entry for a new Callout registration. As you can see, the data required is minimal: mainly, you need to know the Class and Operation, whether you want to invoke the Callout before or after the operation, and the name of the main Callout BAO workflow. Notice that the BAO workflow name is specified in a fully qualified format (starting with a ':'). Don't worry about Weight; if you have multiple Callouts for a given operation, you can assign weights to give one Callout priority over the others.
  • Once you save this entry, the Callout is active from that point on.
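To make the fields concrete, a registration entry might look like the following sketch (the workflow path is hypothetical, and the field names approximate what the Callout Workspace asks for):

```
Class:     VirtualGuest
Operation: CONSTRUCTOR
Type:      Pre
Workflow:  :MyProject:Callouts:Register_VM_In_DNS    (fully qualified, starting with ':')
Weight:    10
```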


Now that we know how a Callout is registered, let's see what data it receives. Remember that a Callout is associated with an object (more accurately, a class) and an operation. As such, the data it receives is the object's representation at that point in time. For example, a pre-callout may receive less data than the post-callout for the same object, because the rest of the object gets populated during the operation. This may also help you determine the most appropriate place to associate the Callout, depending on what data you are looking for. So, it is difficult to generalize what data a Callout will receive. However, there is a standard pattern for sending data to Callouts. Let's talk about that.


Following is sample Callout request data (often referred to as the "CSMRequest" or "CSMRequest XML") for a ServiceOfferingInstance_BULKCREATE pre-callout.
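As an illustrative sketch of its shape (the element names under operationParameters are my assumptions, not the exact CLM schema), the request looks roughly like this:

```xml
<!-- Illustrative sketch only; element names under operationParameters are assumed -->
<CsmRequest>
  <operationParameters>
    <serviceOfferingName>Small Linux Server</serviceOfferingName>
    <hostnamePrefix>web</hostnamePrefix>
    <optionChoices>
      <optionChoice>Add Disk - 50 GB</optionChoice>
    </optionChoices>
  </operationParameters>
  <!-- additional metadata about the request appears in other sections -->
</CsmRequest>
```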



  • The root is always "CsmRequest".
  • Most of the data related to the object would be found under the "operationParameters" section. For example, here you can see the data around the requested Service Offering, selected option choices, the hostname prefix specified by the user, any parameters and data associated with the request, etc.
  • The Callout can parse out this data along with any additional lookups and data it needs and use it for its processing.
  • Additional metadata about the request is available in other parts of the XML.


Once a Callout has finished processing, it should return the status, along with any error and associated message, to CLM as appropriate. Note that Callout responses do not return the object the Callout was associated with because... you got it - because a Callout cannot manipulate the object. It can only influence behavior.


Following is an example of a successful Callout response.
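As a hypothetical sketch (the element names are assumptions; the actual samples are attached to this post), a success response is shaped roughly like:

```xml
<!-- Hypothetical sketch; element names are assumed -->
<CalloutResponse>
  <status>SUCCESS</status>
</CalloutResponse>
```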



Following is an example of a failure Callout response.
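As a hypothetical sketch (again, element names are assumptions; see the attached samples for the real format), a failure response carries the status plus an error message:

```xml
<!-- Hypothetical sketch; element names are assumed -->
<CalloutResponse>
  <status>FAILURE</status>
  <errorMessage>DNS registration failed: unable to reach DNS server</errorMessage>
</CalloutResponse>
```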



5. Conclusion


As you can see, Callouts do require a bit of effort. But it could be well worth it, depending on the use case and what you are trying to accomplish. These are certainly nuggets available to help you integrate CLM better into the rest of your ecosystem.


If there are other topics you would like me to blog about, please feel free to leave me a comment. I will try my best to respond to the requests.


Have a good day!

- Nitin

Cloud is a journey with milestones and not a destination!

Share This:




In case you were not aware, we released the BMC Cloud Lifecycle Management 4.5: Essentials course a few weeks ago; it replaces the course previously named Administrator Boot Camp. Not only did we update the content to include 4.5, but we also changed a few other things:


  • Why the name Essentials? We are confident that any administrator or user of the CLM products can take this course and walk away with the essentials they need to function in CLM.


  • What’s different? We have added a lot of new content to make the course more robust, including Quick Start, BAO, BNA, and more. The labs and lab environment have both been upgraded: specific detail has been added for each student to follow to ensure each feature works, lab challenges allow students to explore more functionality and test their retention of the processes they have just learned, and each student now has an independent lab environment, allowing them to work more quickly and at their own pace.


  • What’s the same? CLM 4.5: Essentials is a traditional, instructor-led learning environment.


  • Why CLM 4.5: Essentials? CLM Essentials is now the initial education in a customer's solution journey and introduces skills from the BladeLogic (BSA, BNA), Atrium Orchestrator, AR, ITSM, and SRM curricula needed to get started with CLM. When students take BMC CLM 4.5: Essentials, they will receive training on these subjects that will encourage them to continue their education in these product lines.



Share This:

By Lisa Greene



If you want to learn how to create an option so that users can extend a disk, see the new video on the Configuring a resize disk option choice topic in the BMC Cloud Lifecycle Management 4.5 documentation. This video also explains how to have the disk automatically formatted.

The library of BMC Cloud Lifecycle Management videos continues to grow. To see which topics include videos, go to the PDFs and videos topic.

BMC Communities also has a list of helpful videos on the BMC CLM How-to Videos Catalog page.

Let us know if you find these videos helpful by rating the blog post or commenting on the Configuring a resize disk option choice topic.

Share This:

By Lisa Greene


The BMC Cloud Lifecycle Management team has another new video for you to check out. This video shows how options work with change approval using BMC Change Management in BMC Cloud Lifecycle Management. You can find the video on the Configuring end-user Option Choices in service blueprints topic in the BMC Cloud Lifecycle Management 4.5 documentation.

To see which topics include videos, see the growing library on the PDFs and videos topic.

Another list of videos is posted on BMC Communities on the BMC CLM How-to Videos Catalog page.

Let us know if you find these videos helpful by rating the blog post or commenting on the Configuring end-user Option Choices in service blueprints topic.
