
INTRODUCTION

I am thrilled to announce the general availability of Cloud Lifecycle Management 4.6.05, our fifth Feature Pack for CLM 4.6!  We began employing this Feature Pack methodology over a year ago, immediately after the release of v4.6.  It allows us to deliver new value in the platform more frequently in a way that is easy for customers to deploy with minimal downtime -- one user described it as "a very clean install with no hassles," further noting that theirs was a "1 step and 4 click install."  We plan to continue with this direction for the foreseeable future.  (For existing CLM customers who are currently running on versions prior to 4.6, look for another announcement soon regarding improvements in the upgrade process to get you to v4.6, which will position you to take advantage of this Feature Pack approach.)

 

AZURE INTEGRATION ENHANCEMENTS

One of the most significant aspects of this release is a major advancement in our integration with Azure.  We've long provided integration with Azure, but (based on when we began our integration journey) it was based on Microsoft's Azure Service Management (ASM) integration architecture.  Microsoft later introduced a new architecture, Azure Resource Manager (ARM), and has since redirected customers away from ASM.  We have now redesigned our original integration capabilities to utilize ARM.

 

This means that the same platform-agnostic, graphical CLM Service Blueprints that model server resources, application packages, storage volumes, network attachments and FW/LB services, which might be used to provision to vSphere, AWS, Hyper-V, OpenStack, Azure (via ASM) or other platforms, can now also be used to provision to Azure via ARM.  This allows customers to realize a multi-cloud strategy with minimal friction, easily reusing Blueprint models and adjusting placement policies to target onboarded ARM-based Virtual Networks.

 

But rather than just replace ASM API calls with equivalent ARM API calls to achieve this, we've taken a new approach that takes advantage of the fundamental template-based design of ARM: when new requests for CLM Service Offerings are submitted, CLM will dynamically translate the associated Service Blueprint (which, while presented in visual fashion in the Service Designer, is persisted in JSON form) into an ARM template (which is another JSON syntax).  This is expected to give us more flexibility in the future to incorporate new Azure capabilities.
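
To make that more concrete, here is a highly simplified sketch (written in Python simply to emit the JSON) of the kind of ARM template such a translation might produce.  The $schema and resource type follow Microsoft's published ARM template format, but the specific resource, names and address ranges are illustrative placeholders, not what CLM actually generates:

```python
import json

# A minimal sketch of an ARM deployment template of the sort a
# Blueprint-to-ARM translation could emit.  The $schema and resource type
# follow Microsoft's published ARM template format; the specific resource,
# names and address ranges below are placeholders.
arm_template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vnetName": {"type": "string", "defaultValue": "clm-vnet"}
    },
    "resources": [
        {
            "type": "Microsoft.Network/virtualNetworks",
            "apiVersion": "2016-03-30",
            "name": "[parameters('vnetName')]",
            "location": "[resourceGroup().location]",
            "properties": {
                "addressSpace": {"addressPrefixes": ["10.0.0.0/16"]},
                "subnets": [
                    {"name": "web", "properties": {"addressPrefix": "10.0.0.0/24"}}
                ]
            }
        }
    ]
}

print(json.dumps(arm_template, indent=2))
```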

 

Two other byproducts of this design are that 1) there is no new CLM provider needed, but rather we are leveraging the same Azure Resource Manager template provider introduced in FP4, and 2) Service Blueprint references to externally-authored templates can be intermingled with standard server, application, storage and network objects, with output values from one step in the specified order of provisioning feeding as input to subsequent steps.  Note that you can use the original ASM-based CLM Azure provider and these new ARM-based replacement capabilities at the same time (relying on placement policies to determine which requests go through which architecture) as a transition strategy, but the original ASM-based provider will be deprecated soon and won't be enhanced any further.

 

On the topic of provisioning using externally-authored Azure Resource Manager templates, two other enhancements have been made in this release.  First, beyond just strings, now boolean and integer data types for template parameters are directly supported (and others are on the roadmap).  Second (and more significantly), those external template files can be more effectively secured.  When CLM submits a request to Azure that involves an external ARM template, it passes a URL reference to that file that the Azure public service must be able to resolve.  We considered resolving the template URL in CLM and simply passing the content of the template to Azure, but this would not adequately address the cases where the main template references one or more nested templates (which, in turn, could reference other nested templates, ad infinitum), and those subordinate templates would need to be resolvable by the Azure public service.  Instead, we are leveraging the Azure storage and SAS token model to render them both sufficiently accessible and sufficiently access-protected.  For each CLM Service Offering request, CLM generates a new access token that is only valid for a few moments and appends it to the template URL, which when given to Azure, authorizes Azure to securely resolve that template and any referenced nested templates (which have to be in the same storage location) for just long enough to fulfill the request.
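
To illustrate the token mechanics (this is not CLM's internal implementation), here is a minimal sketch using the azure-storage-blob Python SDK.  The storage account, container and template names are placeholders, and the five-minute expiry window is an assumption:

```python
from datetime import datetime, timedelta

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Illustrative only: generate a short-lived, read-only SAS token scoped to the
# storage container that holds the main template and any nested templates,
# then append it to the main template's URL before handing that URL to Azure.
# Account, container and blob names plus the expiry window are assumptions.
account_name = "clmtemplates"
container_name = "arm-templates"
main_template = "main-template.json"

sas_token = generate_container_sas(
    account_name=account_name,
    container_name=container_name,
    account_key="<storage-account-key>",
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(minutes=5),  # valid only briefly
)

template_url = (
    f"https://{account_name}.blob.core.windows.net/"
    f"{container_name}/{main_template}?{sas_token}"
)
print(template_url)  # this is the URL passed in the deployment request to Azure
```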

 

SYNCHRONIZING MIGRATED VSPHERE WORKLOADS

Moving on to another category of enhancements, we have many CLM customers that happen to be reaching their next hardware and/or vSphere refresh interval (or want to transition to a vCenter Appliance), are looking to stand up their new infrastructure alongside what is currently in use, and would like to migrate existing CLM-managed workloads over to these new resources.  Other customers are looking to move already-deployed workloads from their original network container or compute cluster destinations to different locations, either to achieve better balance of workloads within those locations or to deliberately create a temporary imbalance in anticipation of an upcoming, very large deployment request.  And still others would like to deploy workloads to a quarantined network container where additional configuration and testing can be performed before moving them to their ultimately intended destinations.

 

Historically, these use cases have presented challenges due to the relationships that CLM maintains for managed resources, but this release offers a solution.  You can use the native vSphere VM migration facilities to actually move the VMs to the desired location.  Then the CLM administrator can invoke a new CLM SDK call to synchronize CLM's relationship information for the affected VMs with their new as-is state.  To make this scanning operation more efficient, the administrator can choose from four such calls, each of which scopes the set of VMs to be scanned for necessary updates:

  • cluster-migrate: scopes the scan to specified virtual clusters where VMs originally resided
  • host-migrate: scopes the scan to specified ESX hosts where VMs originally resided, which may have been moved from one vCenter to another
  • resource-pool-migrate: scopes the scan to specified resource pools where VMs originally resided
  • service-migrate: scopes the scan to VMs in specified Service Offering Instances

 

Running these commands allows CLM to update its internal data, including associations to virtual clusters, resource pools and/or virtualization hosts, as well as networking details such as IP addresses, switch & switch port connections and network container & network zone assignments.  It also allows CLM to update DNS and IPAM entries for these servers.

 

You can invoke these SDK calls with the "--dryrun true" option to get an assessment of which VMs CLM recognizes are in new locations and what synchronization updates will be made.  The resulting report is equivalent to the one produced when the call is made without that option, after the synchronization updates have actually been applied.  Performing this synchronization step allows CLM to continue managing these servers throughout their lifespans, including altering the running state, modifying resource configurations and deploying application packages.
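
Exact invocation details depend on how your SDK environment is set up, but conceptually the dry-run-then-commit workflow looks like the sketch below.  The "clm-sdk" wrapper name and the scoping argument are hypothetical placeholders; only the service-migrate call name and the "--dryrun true" option come from the product:

```python
import subprocess

# Hypothetical illustration of the dry-run-then-commit workflow described
# above.  "clm-sdk" and "--serviceofferinginstance" are placeholders for
# whatever wrapper command and scoping arguments your environment uses.
dry_run = subprocess.run(
    ["clm-sdk", "service-migrate",
     "--serviceofferinginstance", "web-tier-prod",  # hypothetical scope
     "--dryrun", "true"],
    capture_output=True, text=True, check=True,
)
print(dry_run.stdout)  # review which VMs would be synchronized and how

# After reviewing the report, re-run without --dryrun to apply the updates.
subprocess.run(
    ["clm-sdk", "service-migrate",
     "--serviceofferinginstance", "web-tier-prod"],
    check=True,
)
```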

 

DOCKER SWARM SUPPORT

In CLM 4.6, we introduced a provider for Docker, allowing users to provision Docker containers to Docker hosts, and assuring administrators that only images from approved repositories are used, that IT Processes such as Change Management are being fulfilled automatically without impact to the end users, and that containers are deployed to appropriate Docker hosts based on policy.

 

In this Feature Pack, we are extending that capability to support provisioning to Docker Swarm clusters.  This is accomplished by enhancing the existing Docker provider, so the same container-oriented Service Blueprints can, based on placement policies, be deployed to newly-onboarded Docker Swarm clusters.  And onboarding Swarm clusters is accomplished much like onboarding standalone Docker hosts, but references the Swarm Manager port rather than the Docker Engine port on the host server.  When containers are deployed to that Swarm Manager, it will distribute them among the nodes in the cluster according to its configured algorithm.
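
As a rough illustration of that endpoint difference (shown here with the Docker SDK for Python rather than CLM's onboarding screens; host names and ports are placeholders, not CLM syntax):

```python
import docker

# Illustrative only: the same client API is used either way, but a Swarm
# cluster is addressed through the Swarm Manager's port rather than an
# individual host's Docker Engine port.  Host names and ports are placeholders.
standalone_host = docker.DockerClient(base_url="tcp://docker-host-01:2375")
swarm_cluster = docker.DockerClient(base_url="tcp://swarm-manager-01:4000")

# A container started against the Swarm Manager is placed onto one of the
# cluster's nodes according to the manager's configured scheduling strategy.
container = swarm_cluster.containers.run(
    "registry.example.com/approved/nginx:latest",  # image from an approved repo
    detach=True,
)
print(container.name)
```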

 

Note that, independent from this new capability, CLM can still be used to provision Docker Engine hosts or Docker Swarm clusters (as well as other cluster and scheduler environments), and can apply ongoing compliance evaluations to those execution environments, leveraging BladeLogic out-of-the-box compliance materials such as Security Content Automation Protocol (SCAP) 1.2.

 

SOFTWARE DEFINED NETWORKING -- NSX SUPPORT

CLM has for many years provided its own flavor of Software Defined Networking, automating the configuration of network services on standard network equipment based on CLM Network and Service Blueprints.  It uses a model of Pods and Network Containers; Pods represent a set of physical infrastructure that includes those standard network devices, and Network Containers are logical boundaries created within that infrastructure that segregate network traffic between different tenants or organizations within the same cloud service.

 

Organizations are increasingly interested in using new Software Defined Networking platforms, which separate the control plane from the data plane.  One of the commonly chosen SDN platforms is VMware NSX, and in this release, CLM has extended its Pod and Container Model to support VMware NSX.

 

This allows CLM administrators to create and update Pods and Network Containers on NSX, which can include specification of VxLAN-based subnets and NSX Distributed Firewalls.  It accomplishes this via "injection templates" in BladeLogic Network Automation, which embed REST API commands in the templates sent to NSX Manager, allowing for additional flexibility to incorporate any other necessary NSX REST calls.  These Network Blueprints can also incorporate non-NSX-based Load Balancers and Perimeter Firewalls to create and manage Network Containers with a wide range of capabilities.
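
To give a sense of the kind of REST call such an injection template might embed, here is an illustrative sketch in Python.  The NSX-v "virtualwires" endpoint shown creates a VXLAN-backed logical switch; the NSX Manager host, credentials and transport-zone scope ID are placeholders:

```python
import requests

# Illustrative sketch of an NSX Manager REST call of the kind an injection
# template might carry.  The host, credentials and transport-zone scope ID
# are placeholders; validate certificates properly outside of a lab.
nsx_manager = "https://nsx-manager.example.com"
scope_id = "vdnscope-1"  # transport zone in which to create the logical switch

payload = """
<virtualWireCreateSpec>
  <name>clm-tenant-web-subnet</name>
  <tenantId>clm-tenant</tenantId>
</virtualWireCreateSpec>
"""

response = requests.post(
    f"{nsx_manager}/api/2.0/vdn/scopes/{scope_id}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "<password>"),
    verify=False,  # lab-only convenience; not recommended in production
)
response.raise_for_status()
print(response.text)  # NSX returns the identifier of the new logical switch
```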

 

CLM Service Blueprint specifications of NIC attachments to subnets, load balancer pool entries and network paths will be dynamically rendered into the necessary updates to the NSX environment, virtual switches, load balancers and perimeter firewalls during fulfillment of CLM Service Offering requests.

 

(Stay tuned for an announcement regarding similar capabilities for the Cisco ACI SDN platform in the near future.)

 

PROGRESS/TROUBLESHOOTING DETAILS FOR ACTIONS ON EXISTING RESOURCES

Back in CLM 4.5, we introduced a much more effective way for administrators to troubleshoot when a Service Offering request fails.  CLM proactively notifies administrators when such a failure occurs, and provides the administrator with actionable information, including a synopsis of what failed, the likely root cause, the recommended corrective action, and complete details of each step in the provisioning sequence that was performed up until the failure.  It allows the administrator to retry the request after attempting to correct the failure, leveraging the same change approval (if relevant) and generated names.  Ideally, an administrator can respond to and complete the request before the requesting user even realizes there was a problem.  And in the situations where more analysis is required, it provides the ability to download snippets across multiple log files related to just that transaction.

 

Then in CLM 4.6 Feature Pack 3, we extended this capability in two ways: we made the rich details (such as tag analysis during placement) available not just on failed requests but also on successful ones, where they can even be viewed while the provisioning process is still underway; and we provided a configuration option to make this level of detail available to end users rather than strictly administrators.

 

Now in CLM 4.6 Feature Pack 5, we have applied these capabilities to actions performed on already-provisioned resources.  This includes state changes such as shutdown, power off or power on, as well as configuration changes such as adding CPUs, memory, disks or NICs.  This will enable administrators to provide even higher levels of service and satisfaction to end users, and in certain cases even enable end users to resolve issues themselves more effectively.

 

CONCLUSION

Feature Pack 5 is available now on the Electronic Product Download (EPD) portion of the BMC Support site.  It can be applied quickly to any CLM 4.6 environment, regardless of which Feature Pack, if any, has already been deployed (as every Feature Pack is cumulative, incorporating all prior Feature Packs' capabilities).  Download and start using it today, and send me your thoughts!

 

--

Steve Anderson

Product Line Manager

BMC Software

steve_anderson@bmc.com

@SteveA_BMC