No, this article isn't about Dolly the Sheep; it's about Cloud Computing. Specifically, it's about different approaches to virtual server provisioning and, behind the scenes, the choices and capabilities provided by a provisioning engine (and thus exposed in a service catalog). Let's begin.
The market has advanced enough in the past 6-12 months that we can now, sensibly and credibly, talk about a prototypical cloud lifecycle solution, with a wide selection of offerings available from vendors great and small. Typically, these solutions house a catalog of offerings, which is presented to a user for selection and subsequently instantiated through a provisioning process. Many solutions implement this by simply cloning a template, attaching it to the appropriate network, and turning it on. While this sounds straightforward, it should immediately raise a number of questions for the folks in IT responsible for managing and maintaining their organization's cloud.
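To make that concrete, here is a minimal, self-contained sketch (in Python) of what purely image-based provisioning boils down to. The catalog entries, class names, and helper function are all hypothetical stand-ins for whatever your hypervisor or cloud platform actually exposes; the point is simply that provisioning reduces to clone, attach, power on.

    # Minimal sketch of purely image-based provisioning: the "catalog" is a
    # dictionary of golden images, and provisioning is clone -> attach -> power on.
    # All names and offerings here are illustrative, not any vendor's API.
    import copy
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Image:
        name: str
        os: str
        software: List[str] = field(default_factory=list)

    @dataclass
    class VirtualMachine:
        name: str
        image: Image
        network: Optional[str] = None
        powered_on: bool = False

    CATALOG = {
        "rhel5-tomcat-mysql": Image("rhel5-tomcat-mysql", "RHEL 5", ["Tomcat", "MySQL"]),
        "win2008-iis-sqlserver": Image("win2008-iis-sqlserver", "Windows 2008", ["IIS", "SQL Server"]),
    }

    def provision(offering, network, vm_name):
        """Clone the template behind a catalog offering, attach it, and turn it on."""
        vm = VirtualMachine(vm_name, copy.deepcopy(CATALOG[offering]))  # clone the golden image
        vm.network = network                                            # attach to the right network
        vm.powered_on = True                                            # power it on
        return vm

    print(provision("rhel5-tomcat-mysql", "dev-vlan-12", "build-server-01"))

Every distinct combination of OS, middleware, and database that developers ask for becomes another entry in that catalog, and that is exactly where the trouble starts.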
First, think about your (internal) customers: typically application developers who build, test, or maintain the applications that run your business. How standardized are these developers, in terms of the OS and version they're using? What about all the additional software components, such as messaging middleware, application or web servers, or databases, that they require as part of their images? If your organization is like most, there is an enormous variety of software components in use, and the natural result of using a purely image-based provisioning system is that you'll end up with a combinatorial explosion of hundreds and hundreds of images. This isn't just theoretical: earlier this year I worked with a system integrator customer who used our solution to solve exactly this problem at a government agency.
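A quick back-of-the-envelope calculation shows how fast this gets out of hand. The counts below are hypothetical, but the multiplication is the whole point: every supported combination needs its own golden image.

    # Back-of-the-envelope illustration (hypothetical counts) of the combinatorial
    # explosion: each supported combination needs its own golden image.
    os_versions  = 4   # e.g. two RHEL releases and two Windows releases
    app_servers  = 3   # e.g. Tomcat, WebLogic, WebSphere
    databases    = 3   # e.g. Oracle, SQL Server, MySQL
    middleware   = 2   # e.g. with or without a messaging broker
    patch_levels = 3   # images also drift as patch levels accumulate

    images_needed = os_versions * app_servers * databases * middleware * patch_levels
    print("Distinct golden images to build, patch, and maintain:", images_needed)  # 216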
So, what’s wrong with having many, many customized images to choose from? Isn’t this exactly the kind of flexibility and responsiveness that cloud computing promised?   Look for a continued discussion in Part 2 of this series.  Until then, please share your opinion and experiences in this area by submitting a comment below.

Here at VMworld, we've been enjoying the occasional evening out on the town. Last night we took a walking tour of the city, had a wonderful dinner at the New Harbour, and late into the night got on the subway to get back to our hotel.

 

The subway in Copenhagen is very modern. Unlike the rinky-dink Boston subway, it posts a schedule of upcoming trains on the platform (strangely, using those flip-number signs like old-school alarm clocks). It tells you that the next train to "somewhere with an umlaut" is coming in 5 minutes. We got on. Shortly thereafter, somewhere in a tunnel, the train stopped.

 

As it turned out, another train was coming along in the other direction, and we had to wait for it to pass before proceeding. The limited resource that was the train track was at capacity, at midnight on a Tuesday, and we waited. The announcer said, in Danish (as interpreted by our local translator), that "the other train should be passing through sometime soon." A train system that can tell me the Umlaut train is 5 minutes away could only tell me that the other train was "coming sometime soon."

 

Capacity management is all about understanding the state of your resources today, and how that state will change in the future. For years, IT has been working on predictive analytics; in the cloud, those analytics will be applied to a far more dynamic environment than ever before. As our CTO recently noted, splotchy workloads and peaks in utilization typically follow a rather predictable pattern: every Christmas, or at the end of every month. Predictable bursts, like crossing trains, should be well understood and accounted for in the management of the cloud.
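As a minimal sketch of that idea (with made-up utilization numbers), a forecast that treats month-end as its own seasonal bucket stops flagging the recurring month-end burst as a surprise; only what the baseline cannot explain deserves an alert.

    # Minimal sketch (synthetic data): treat month-end days as their own seasonal
    # bucket, so the recurring month-end burst is expected rather than anomalous.
    from statistics import mean

    # (day_of_month, cpu_utilization_percent) samples from past months -- made up.
    history = [(14, 38), (15, 41), (30, 86), (31, 91),
               (14, 35), (15, 44), (30, 88), (28, 90),
               (14, 40), (15, 39), (30, 84), (31, 89)]

    def is_month_end(day):
        return day >= 28

    def expected_utilization(day):
        bucket = [u for d, u in history if is_month_end(d) == is_month_end(day)]
        return mean(bucket)

    def is_anomaly(day, observed, tolerance=15.0):
        """Flag only what the seasonal baseline does not already explain."""
        return abs(observed - expected_utilization(day)) > tolerance

    print(is_anomaly(31, 90))  # False: the month-end spike is a predictable burst
    print(is_anomaly(15, 90))  # True: the same load mid-month deserves an announcement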

 

Only the true anomalies really deserve a special exception or announcement, like a 20-inning Red Sox game in Boston, or the descent of 10,000 geeks onto a northern European train system. It's then that capacity management is truly tested.



I'm very pleased to announce that BMC has acquired Neptuny Software, a provider of business-aware capacity planning solutions. See today's press release for the official story. In this blog entry I will explore the acquisition further, and explain some of the context and motivation behind it. But first, I'd like to warmly welcome the Neptuny team to BMC: congratulations, and we look forward to working with you.


The decision to strengthen our capacity management offerings through this acquisition was driven by two key tenets (shown below), which reflect what we're hearing from our customers today and demonstrate the continued execution of our Business Service Management vision:

  • Recognition that capacity management is a core requirement of a modern IT environment, and that it’s doubly important in a virtualized/cloud environment
  • An amplification of BMC’s commitment to providing a business perspective for IT

 

Let’s briefly explore these. Many organizations today do understand the need for capacity management – ensuring that they have the ability to look forward in time, and map out what IT infrastructure changes and investments will be required in their enterprise. As organizations begin to move more and more of their infrastructure into a virtualized, shared services model, two things happen.

 

First, the underlying physical hardware (compute, network, and storage) starts being shared across more (and more complex) systems. This increases utilization, which is generally a good thing, up to a point. But it also increases the complexity and dynamism of the environment, by introducing more interdependencies and ongoing change among the various components. As a result, any given component has the potential to impact many others, which drives the need for active management and modeling of its capacity (as well as governance, although that is a separate topic). In addition, in a cloud environment, the provisioning placement engine needs accurate and up-to-date visibility into current resource capacity in order to decide properly where and how to place services.
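For illustration only, here is a minimal sketch of the kind of decision that placement engine has to make, and why it depends on current capacity data; the host names, resource figures, and placement policy are invented.

    # Minimal placement sketch: given each host's current free CPU and RAM,
    # put a new service on the host with the most headroom that can fit it.
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        free_cpu_ghz: float
        free_ram_gb: float

    def place(hosts, need_cpu, need_ram):
        candidates = [h for h in hosts
                      if h.free_cpu_ghz >= need_cpu and h.free_ram_gb >= need_ram]
        if not candidates:
            return None  # capacity exhausted -- exactly what capacity planning should foresee
        # Simple policy: pick the host with the most remaining headroom.
        return max(candidates,
                   key=lambda h: (h.free_cpu_ghz - need_cpu) + (h.free_ram_gb - need_ram))

    hosts = [Host("esx-01", 4.0, 16), Host("esx-02", 12.0, 8), Host("esx-03", 9.0, 48)]
    print(place(hosts, need_cpu=6.0, need_ram=12.0))  # esx-03: the only host with room on both axes

The decision is only as good as the capacity data feeding it; stale utilization numbers turn directly into bad placements.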

Second, and perhaps more important, is the need to provide a way for IT to view its operations from a business perspective. Most people are likely familiar with the business context provided by a CMDB, which allows IT to distinguish technically identical resources (such as servers) and allocate technical and human resources based on their importance to the business. A classic example is a pair of servers that are technically identical but provide wildly different services to the business. One might host the production database for a company's web commerce application, while the other hosts the customer message board and community site. Both are important to the business, but the database is mission-critical, and an outage will cost the company revenue. The community site is non-revenue-generating, and an outage won't materially impact the business. Thus, these two identical pieces of hardware need significantly different configuration policies, change management processes, and levels of service (e.g., response time and effort in the case of an outage).

The Neptuny software provides this same kind of business perspective in the world of capacity management. It enables organizations to connect business metrics, such as website transactions per hour, to the underlying IT infrastructure. The "what-if" scenarios can then be based on extrapolating the growth of meaningful business indicators, making the projected growth in IT infrastructure an output of the model rather than a required input. This allows IT to participate in business discussions using a vocabulary and measurement system consistent with the rest of the business. This is the larger goal of Business Service Management: to elevate IT and allow it to align its goals with the business.
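To illustrate the flavor of such a "what-if" model, here is a sketch with entirely hypothetical numbers and a deliberately simple capacity rule, where the business forecast is the input and the server count is the output:

    # Hypothetical "what-if" sketch: extrapolate a business metric (web
    # transactions per hour) and make the server count an output of the model.
    import math

    CURRENT_TX_PER_HOUR   = 120_000
    QUARTERLY_GROWTH_RATE = 0.15    # business forecast: 15% more transactions per quarter
    TX_PER_SERVER_HOUR    = 18_000  # measured capacity of one web server
    HEADROOM              = 0.30    # keep 30% spare capacity for bursts

    def servers_required(tx_per_hour):
        return math.ceil(tx_per_hour * (1 + HEADROOM) / TX_PER_SERVER_HOUR)

    tx = CURRENT_TX_PER_HOUR
    for quarter in range(1, 5):
        tx *= 1 + QUARTERLY_GROWTH_RATE
        print(f"Q{quarter}: ~{tx:,.0f} tx/hour -> {servers_required(tx)} servers")

The point is the direction of the arrow: the business forecast drives the infrastructure number, not the other way around.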


To wrap up: we're thrilled to have brought this technology on board, and we look forward to sharing it with our customers.

 


 
