
INTRODUCTION

I am thrilled to announce the general availability of Cloud Lifecycle Management 4.6.05, our fifth Feature Pack for CLM 4.6!  We began employing this Feature Pack methodology over a year ago, immediately after the release of v4.6.  It allows us to deliver new value in the platform more frequently in a way that is easy for customers to deploy with minimal downtime -- one user described it as "a very clean install with no hassles," further noting that theirs was a "1 step and 4 click install."  We plan to continue with this direction for the foreseeable future.  (For existing CLM customers who are currently running on versions prior to 4.6, look for another announcement soon regarding improvements in the upgrade process to get you to v4.6, which will position you to take advantage of this Feature Pack approach.)

 

AZURE INTEGRATION ENHANCEMENTS

One of the most significant aspects of this release is a major advancement in our integration with Azure.  We've long provided integration with Azure, but (based on when we began our integration journey) it was based on Microsoft's Azure Service Management (ASM) integration architecture.  Microsoft later introduced a new architecture, Azure Resource Manager (ARM), and has since redirected customers away from ASM.  We have now redesigned our original integration capabilities to utilize ARM.

 

This means that the same platform-agnostic, graphical CLM Service Blueprints that model server resources, application packages, storage volumes, network attachments and firewall/load balancer (FW/LB) services, which might be used to provision to vSphere, AWS, Hyper-V, OpenStack, Azure (via ASM) or other platforms, can now also be used to provision to Azure via ARM.  This allows customers to realize a multi-cloud strategy with minimal friction, easily reusing Blueprint models and adjusting placement policies to target onboarded ARM-based Virtual Networks.

 

But rather than just replacing ASM API calls with equivalent ARM API calls to achieve this, we've taken a new approach that takes advantage of the fundamental template-based design of ARM: when a new request for a CLM Service Offering is submitted, CLM dynamically translates the associated Service Blueprint (which, while presented visually in the Service Designer, is persisted in JSON form) into an ARM template (which is also a JSON syntax).  This is expected to give us more flexibility in the future to incorporate new Azure capabilities.
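To make the idea concrete, here is a deliberately simplified sketch (in Python, with a hypothetical blueprint shape; CLM's actual translation logic is internal and far more involved) of what mapping a blueprint's server object into an ARM virtual machine resource could look like:

    import json

    # Hypothetical, minimal blueprint shape for illustration only.
    blueprint = {"servers": [{"name": "web01", "size": "Standard_DS1_v2"}]}

    def blueprint_to_arm(bp):
        # Map each blueprint server object to an ARM virtualMachines resource.
        resources = [{
            "type": "Microsoft.Compute/virtualMachines",
            "apiVersion": "2016-04-30-preview",
            "name": server["name"],
            "location": "[resourceGroup().location]",
            "properties": {"hardwareProfile": {"vmSize": server["size"]}},
        } for server in bp.get("servers", [])]
        return {
            "$schema": "https://schema.management.azure.com/schemas/"
                       "2015-01-01/deploymentTemplate.json#",
            "contentVersion": "1.0.0.0",
            "resources": resources,
        }

    print(json.dumps(blueprint_to_arm(blueprint), indent=2))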

 

Two other byproducts of this design are that 1) no new CLM provider is needed; rather, we are leveraging the same Azure Resource Manager template provider introduced in FP4, and 2) Service Blueprint references to externally-authored templates can be intermingled with standard server, application, storage and network objects, with output values from one step in the specified provisioning order feeding as input to subsequent steps.  Note that you can use the original ASM-based CLM Azure provider and these new ARM-based replacement capabilities at the same time (relying on placement policies to determine which requests go through which architecture) as a transition strategy, but the original ASM-based provider will be deprecated soon and won't be enhanced any further.

 

On the topic of provisioning using externally-authored Azure Resource Manager templates, two other enhancements have been made in this release.  First, template parameters of boolean and integer data types are now directly supported, beyond just strings (and others are on the roadmap).  Second (and more significantly), those external template files can be more effectively secured.  When CLM submits a request to Azure that involves an external ARM template, it passes a URL reference to that file that the Azure public service must be able to resolve.  We considered resolving the template URL in CLM and simply passing the content of the template to Azure, but this would not adequately address cases where the main template references one or more nested templates (which, in turn, could reference other nested templates, ad infinitum), since those subordinate templates would still need to be resolvable by the Azure public service.  Instead, we are leveraging the Azure storage and SAS token model to render them both sufficiently accessible and sufficiently access-protected.  For each CLM Service Offering request, CLM generates a new access token that is only valid for a few moments and appends it to the template URL; when given to Azure, this authorizes Azure to securely resolve that template and any referenced nested templates (which have to be in the same storage location) for just long enough to fulfill the request.
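The token mechanism described above is standard Azure Storage shared access signature (SAS) behavior.  As a rough illustration (using the azure-storage-blob Python library; account, container and blob names are placeholders, and CLM's internal implementation may differ), generating a short-lived, read-only SAS URL looks like this:

    from datetime import datetime, timedelta
    from azure.storage.blob import BlobSasPermissions, generate_blob_sas

    # Placeholder names; illustrates the Azure SAS mechanism, not CLM internals.
    sas = generate_blob_sas(
        account_name="clmtemplates",
        container_name="arm-templates",
        blob_name="three-tier-app/azuredeploy.json",
        account_key="<storage-account-key>",
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(minutes=5),  # valid only briefly
    )
    template_url = ("https://clmtemplates.blob.core.windows.net/"
                    "arm-templates/three-tier-app/azuredeploy.json?" + sas)
    # This URL is what gets handed to Azure; nested templates in the same
    # storage location are resolved under the same short-lived authorization.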

 

SYNCHRONIZING MIGRATED VSPHERE WORKLOADS

Moving on to another category of enhancements: we have many CLM customers who happen to be reaching their next hardware and/or vSphere refresh interval (or want to transition to a vCenter Appliance), are looking to stand up their new infrastructure alongside what is currently in use, and would like to migrate existing CLM-managed workloads over to these new resources.  Other customers are looking to move already-deployed workloads from their original network container or compute cluster destinations to different locations, either to achieve better balance of workloads within those locations or to deliberately create a temporary imbalance in anticipation of an upcoming, very large deployment request.  And still others would like to deploy workloads to a quarantined network container where additional configuration and testing can be performed before moving them to their ultimately intended destinations.

 

Historically, these use cases have presented challenges due to the relationships that CLM maintains for managed resources, but this release offers a solution.  You can use the native vSphere VM migration facilities to actually move the VMs to the desired location.  Then the CLM administrator can invoke a new CLM SDK call to synchronize CLM's relationship information on the affected VMs with their new as-is state.  To make this scanning operation more efficient, the CLM administrator can choose from four such calls, which scope the set of VMs to be scanned for necessary updates:

  • cluster-migrate: scopes the scan to specified virtual clusters where VMs originally resided
  • host-migrate: scopes the scan to specified ESX hosts where VMs originally resided, which may have been moved from one vCenter to another
  • resource-pool-migrate: scopes the scan to specified resource pools where VMs originally resided
  • service-migrate: scopes the scan to VMs in specified Service Offering Instances

 

Running these commands allows CLM to update its internal data, including associations to virtual clusters, resource pools and/or virtualization hosts, as well as networking details such as IP address, switch and switch-port connections, and network container and network zone assignment.  It also allows CLM to update DNS and IPAM entries for these servers.

 

You can invoke these SDK calls with the "--dryrun true" option to get an assessment of which VMs CLM recognizes as being in new locations and what synchronization updates would be made.  The resulting report is equivalent to the one produced when the call is made without that option, after the synchronization updates have actually been applied.  Performing this synchronization step allows CLM to continue managing these servers throughout their lifespans, including altering the running state, modifying resource configurations and deploying application packages.
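Since the exact SDK syntax is documented separately, the following is only a hypothetical invocation (the command name and every flag other than "--dryrun true" are illustrative; consult the CLM 4.6.05 SDK documentation for the real syntax) showing the dry-run-first pattern, scripted from Python:

    import subprocess

    # Hypothetical command/flag names, for illustration only.  First pass:
    # dry run, reporting which VMs CLM recognizes as relocated and what
    # synchronization updates it would make.
    subprocess.run(["clm-sdk", "cluster-migrate",
                    "--cluster", "GoldCluster01",
                    "--dryrun", "true"], check=True)

    # Second pass: the same call without the option applies the updates.
    subprocess.run(["clm-sdk", "cluster-migrate",
                    "--cluster", "GoldCluster01"], check=True)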

 

DOCKER SWARM SUPPORT

In CLM 4.6, we introduced a provider for Docker, allowing users to provision Docker containers to Docker hosts, and assuring administrators that only images from approved repositories are used, that IT Processes such as Change Management are being fulfilled automatically without impact to the end users, and that containers are deployed to appropriate Docker hosts based on policy.

 

In this Feature Pack, we are extending that capability to support provisioning to Docker Swarm clusters.  This is accomplished by enhancing the existing Docker provider, so the same container-oriented Service Blueprints can, based on placement policies, be deployed to newly-onboarded Docker Swarm clusters.  Onboarding a Swarm cluster works much like onboarding a standalone Docker host, but references the Swarm Manager port rather than the Docker Engine port on the host server.  When containers are deployed to that Swarm Manager, it distributes them among the nodes in the cluster according to its configured algorithm.
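Mechanically, classic Docker Swarm presents the same Remote API as a standalone Engine, just behind the Swarm Manager's endpoint.  A small docker-py sketch (hostnames, ports and image names are placeholders) of what "pointing at the manager instead of the engine" means:

    import docker

    # Standalone Docker host: talk directly to the Docker Engine port.
    engine = docker.DockerClient(base_url="tcp://docker-host.example.com:2375")

    # Classic Swarm cluster: same API surface, but aimed at the Swarm Manager
    # port; the manager places the container on a node per its strategy.
    swarm = docker.DockerClient(base_url="tcp://swarm-manager.example.com:3375")
    container = swarm.containers.run(
        "registry.example.com/approved/nginx:1.11",  # approved-repo image
        detach=True,
    )
    print(container.id)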

 

Note that, independent of this new capability, CLM can still be used to provision Docker Engine hosts or Docker Swarm clusters (as well as other cluster and scheduler management environments), and can apply ongoing compliance evaluations to those execution environments, leveraging BladeLogic out-of-the-box compliance content such as Security Content Automation Protocol (SCAP) 1.2.

 

SOFTWARE DEFINED NETWORKING -- NSX SUPPORT

CLM has for many years provided its own flavor of Software Defined Networking, automating the configuration of network services on standard network equipment based on CLM Network and Service Blueprints.  It uses a model of Pods and Network Containers: Pods represent a set of physical infrastructure that includes those standard network devices, and Network Containers are logical boundaries created within that infrastructure that segregate network traffic between different tenants or organizations within the same cloud service.

 

Organizations are increasingly interested in using new Software Defined Networking platforms, which separate the control plane from the data plane.  One of the commonly chosen SDN platforms is VMware NSX, and in this release, CLM has extended its Pod and Container Model to support VMware NSX.

 

This allows CLM administrators to create and update Pods and Network Containers on NSX, which can include specification of VxLAN-based subnets and NSX Distributed Firewalls.  It accomplishes this via "injection templates" in BladeLogic Network Automation, which embed REST API commands in the templates sent to NSX Manager, allowing additional flexibility to incorporate any other necessary NSX REST calls.  These Network Blueprints can also incorporate non-NSX-based Load Balancers and Perimeter Firewalls to create and manage Network Containers with a wide range of capabilities.
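For a sense of the REST calls such injection templates carry, here is a hedged sketch (the endpoint follows the NSX-v "virtual wire" API; hostname, credentials and transport-zone ID are placeholders) of creating a VxLAN-backed logical switch via NSX Manager:

    import requests

    # Create a VxLAN-backed logical switch ("virtual wire") in transport
    # zone vdnscope-1.  All identifiers below are placeholders.
    payload = """<virtualWireCreateSpec>
      <name>tenant-a-web-subnet</name>
      <tenantId>tenant-a</tenantId>
      <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
    </virtualWireCreateSpec>"""

    resp = requests.post(
        "https://nsx-manager.example.com/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
        data=payload,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "<password>"),
        verify=False,  # lab convenience only; verify certificates in production
    )
    resp.raise_for_status()
    print(resp.text)  # NSX Manager returns the new virtual wire ID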

 

CLM Service Blueprint specifications of NIC attachments to subnets, load balancer pool entries and network paths will be dynamically rendered into the necessary updates to the NSX environment, virtual switches, load balancers and perimeter firewalls during fulfillment of CLM Service Offering requests.

 

(Stay tuned for an announcement regarding similar capabilities for the Cisco ACI SDN platform in the near future.)

 

PROGRESS/TROUBLESHOOTING DETAILS FOR ACTIONS ON EXISTING RESOURCES

Back in CLM 4.5, we introduced a much more effective way for administrators to troubleshoot when a Service Offering request fails.  CLM proactively notifies administrators when such a failure occurs and provides them with actionable information, including a synopsis of what failed, the estimated root cause, the recommended corrective action and complete details of each step in the provisioning sequence performed up until the failure.  It allows the administrator to retry the request after attempting to correct the failure, leveraging the same change approval (if relevant) and generated names.  Ideally, an administrator can respond to and complete the request before the requesting user even realizes there was a problem.  And in situations where more analysis is required, it provides the ability to download snippets across multiple log files related to just that transaction.

 

Then in CLM 4.6 Feature Pack 3, we extended this capability in two ways: making the rich details (such as tag analysis during placement) available not just on failed requests but also on successful ones, which can even be viewed while the provisioning process is still underway; and providing a configuration option to make this level of detail available to end users rather than only administrators.

 

Now in CLM 4.6 Feature Pack 5, we have applied these capabilities to actions performed on already-provisioned resources.  This includes state changes such as shutdown, power off or power on, as well as configuration changes such as adding CPUs, memory, disks or NICs.    This will enable administrators to provide even higher levels of service and satisfaction to end users, and in certain cases even enable end users to more effectively resolve issues themselves.

 

CONCLUSION

Feature Pack 5 is available now on the Electronic Product Download (EPD) portion of the BMC Support site.  It can be applied quickly to any CLM 4.6 environment, regardless of which Feature Pack, if any, has already been deployed (as every Feature Pack is cumulative of all prior Feature Packs' capabilities).  Download and start using it today, and send me your thoughts!

 

--

Steve Anderson

Product Line Manager

BMC Software

steve_anderson@bmc.com

@SteveA_BMC


I am excited to announce the general availability of Cloud Lifecycle Management 4.6.04!  In an effort to deliver valuable new capabilities in a faster and easier-to-consume fashion, we have been delivering more frequent Feature Pack releases, of which this is the fourth this year (on top of the 4.6.00 release at the beginning of 2016).

 

This latest Feature Pack expands our integration with Microsoft Azure to allow customers to provision HDInsight Clusters, Redis Caches, IoT Hubs or virtually any other service type via Azure Resource Manager templates.  On AWS, it allows for simpler definitions of access rules via reusable Security Groups.  It also extends our unique strengths in service compliance, further enhances programmatic use of CLM by dev/test users, and is the easiest update deployment yet.

 

At the same time, we have also published the Application Centric Cloud integration that we demonstrated at BMC Engage earlier this month, through which CLM can truly deploy complete application environments on-demand, including the "last mile" of development teams' internal application builds.  Here's an example of how this can be used.

 

In this example, when a new build of the JPetStore custom application is triggered in Jenkins, a Jenkins plugin for BMC Release Package and Deployment (BRPD) automatically creates a new application deployment instance in BRPD, complete with all of the tier-specific steps to deploy a running instance of this new version:

[Screenshot: Jenkins build triggering creation of a BRPD application deployment instance]

 

When a Dev/Test user wants a new JPetStore application environment, he or she can choose the particular build number to deploy, including the just-completed latest build:

[Screenshot: JPetStore service request with a selectable build number]

 

CLM will deploy all of the server, storage and networking infrastructure that is needed (which may vary among different single and multi-tier offering sizes), along with all of the platform components (web server platforms, application server platforms, database server platforms, etc.), and finally the custom application elements (via BRPD) and even bootstrap data.  Later, the dev/test user can also update the JPetStore application on an existing environment with a newer build:

[Screenshot: updating the JPetStore application on an existing environment with a newer build]

 

This will help dev/test users eliminate time they currently spend deploying and configuring custom builds within these environments, reducing distractions and enabling them to focus more of their time on actually developing and testing new application releases.  These environments are created in consistent fashion, which further reduces time spent debugging manually-induced variations between developer, tester and ultimately production environments.

 

Diving further into other new capabilities, the base 4.6.00 release of CLM back in January included the framework for a new provider type for external template-based provisioning, which was designed to accommodate implementations for AWS CloudFormation, Azure Resource Manager, OpenStack Heat and others, but did not include any out-of-the-box provider implementations.  Feature Pack 1 (4.6.01) followed a few weeks later, delivering a provider implementation for AWS CloudFormation.  Feature Pack 4 (4.6.04) now delivers a provider implementation for Azure Resource Manager.

 

The intention for supporting external template-based provisioning is to extend beyond the kinds of resources that CLM models in its Service Blueprints (such as servers, applications, storage, relational databases and networking) to the entire spectrum of resource types that a cloud service might offer (such as analytics, non-relational databases, in-memory caches, serverless code execution and mobile services).  And if a cloud service introduces a new type of resource (and it is supported via its template language), it can be immediately leveraged via CLM.  Using CLM to drive provisioning of these templates expands the scope of visibility that the organization has into what resources are provisioned where, both internally and externally.  It also allows for integration with necessary IT processes such as change management and automated CMDB updates.

 

Feature Pack 4 also simplifies the administration of external template references by automatically discovering template parameters and creating corresponding CLM Service Blueprint parameters, including enumerated parameter value choices that can be displayed as dropdown selections during service requests.  This greatly simplifies the CLM Administrator's steps when incorporating an external template file.  Here's what it looks like:

 

A CLM Service Blueprint that references an ARM template now presents a button to Discover Parameters from that template:

[Screenshot: Service Blueprint referencing an ARM template, with its Discover Parameters button]

 

A portion of the referenced template is shown here, highlighting its parameters:

[Screenshot: portion of the referenced ARM template, highlighting its parameters]

 

Upon clicking "Discover Parameters", CLM automatically creates Service Blueprint parameters from those template parameters (which can subsequently be set to be displayed to end users during Service Offering requests):

[Screenshot: Service Blueprint with parameters automatically created from the template]
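Conceptually, parameter discovery amounts to reading the template's parameters section.  A minimal sketch of the idea (not CLM's actual code; the URL is a placeholder):

    import json
    import urllib.request

    # Fetch an ARM template and list its parameters, including enumerated
    # allowedValues, which are what back a dropdown in the request form.
    url = "https://example.com/templates/azuredeploy.json"
    with urllib.request.urlopen(url) as f:
        template = json.load(f)

    for name, spec in template.get("parameters", {}).items():
        print(name,
              spec.get("type"),           # string, bool, int, ...
              spec.get("defaultValue"),
              spec.get("allowedValues"))  # present => dropdown candidate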

 

Another area of improvement in Feature Pack 4 addresses programmatic invocation of CLM, i.e. via the API or SDK.  Such calls now generate the same Activity Log tracking details as requests submitted via the CLM End User Portal.  This brings more parity between the UI-oriented use of CLM and the more developer-centric means of using CLM, ensuring that the same audit trail and visibility is available no matter how users interact with CLM.

 

In CLM 4.5, new troubleshooting details were introduced for CLM Administrators, enabling them to quickly identify errors, take corrective action and restart the provisioning request.  CLM 4.6.03 introduced the ability to display this level of detail for all requests, whether failed, successful or still in progress, for example providing insight into how placement decisions were made.  Insight into the processing of tags during provisioning can greatly assist in setting tags properly to achieve your desired placement strategy.  Based on CLM configuration, this visibility can be made available to CLM End Users as well, since some errors can be addressed by end users without the involvement of CLM Administrators.  CLM 4.6.04 now provides AWS- and Azure-specific troubleshooting and progress details as well.

 

Feature Pack 4 provides other improvements in its integration with AWS, including the ability to utilize any new EC2 Instance Types and Instance Families, even if introduced by AWS at a later point in time.  It also allows CLM to associate new EC2 instances with pre-existing Security Groups (e.g. for standard access rules to be used on many/all servers), in addition to the Security Groups dynamically generated from Network Paths defined in CLM Service Blueprints.
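At the EC2 API level, this simply means passing both kinds of groups at launch time.  A boto3 sketch (all IDs are placeholders; this illustrates the underlying EC2 call, not CLM's internals):

    import boto3

    ec2 = boto3.resource("ec2")

    # Attach the instance both to a pre-existing, reusable Security Group
    # (e.g. standard administrative access rules) and to one generated from
    # the Service Blueprint's Network Paths.  IDs below are placeholders.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m4.large",
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=["sg-11111111", "sg-22222222"],
    )
    print(instances[0].id)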

 

Following up on the compliance visibility capability introduced in CLM 4.6.00, where newly provisioned servers are added to BladeLogic Server Automation Smart Groups for recurring compliance scanning jobs, Feature Pack 4 offers an option to ensure that the compliance job is executed immediately during the service request fulfillment.  This helps ensure that service offerings are measured for compliance as soon as they are provisioned and, if matched with automated remediation in BladeLogic Server Automation, brought into compliance from the outset.

 

The installation of Feature Pack 4 is simple to execute, taking on the order of an hour to complete, and includes the Azure and OpenStack providers (which were previously additional installation items).  It requires an existing environment running on CLM 4.6 (either the base release or any prior 4.6 Feature Pack level), and includes all updates delivered in previous Feature Packs.  I strongly encourage you to download and deploy this latest update and begin taking advantage of its expanded capabilities!

 

-- Steve Anderson, Product Line Manager, BMC

 

 

P.S. In case you missed the earlier CLM 4.6 Feature Packs, here's a brief review of their capabilities:

 

  • CLM 4.6.01 (Feature Pack 1):
    • AWS CloudFormation external template provider
  • CLM 4.6.02 (Feature Pack 2):
    • ITSM Foundation Data brownfield synchronization tool
  • CLM 4.6.03 (Feature Pack 3):
    • Self Service
      • Troubleshooting/progress data available to end users
      • End users can request services for & transfer services to other users
    • APIs
      • SDK invocation of custom operator actions
      • Management API for detailed CLM and component version numbers
    • Platform Support
      • Service-level compliance visibility for more platforms
      • Azure support for private network connections, Security Groups, new instance types
    • Networking
      • Support for VxLANs
      • Improved firewall rule management performance
      • Load balancer parameterization
