If you have been keeping up with IT headlines in any way over the past year, you have probably noticed that cloud is the new hot topic in IT. Look a bit closer, though, and you may notice that a good portion of the discussion around cloud has shifted away from core functionality, such as rapid provisioning of servers and applications, and toward the ongoing maintenance, security, and compliance of cloud infrastructure. For example, let's take a look at some recent headlines:

Amazon realizes that one of the major hurdles for cloud services is overcoming the stigma of being suited purely to development and test environments, and giving customers assurance that cloud infrastructure can be used to operate core business functions such as payment processing. Amazon's efforts toward certification are one additional step toward that assurance.

 

Meanwhile, in the public sector, Google and Microsoft have been in a heated battle for cloud business from the US government. Google announced FISMA certification for its Apps service about six months ago, and not long after was awarded a contract, over Microsoft, by the United States General Services Administration. Microsoft recently responded by announcing its own FISMA certification and a contract with the USDA.

 

From these headlines, it's apparent that the ongoing management, security, and compliance of cloud services are becoming the core of how cloud service providers differentiate their offerings. It is no longer just about how fast you can provision servers and applications; it is about what assurances you can provide over the lifetime of that server or application. This has become the new standard for cloud services. Because of this, when building out new cloud infrastructure, providers can no longer treat security and compliance as an afterthought. The providers who remain competitive will be the ones with integrated solutions that can not only provision new servers and applications, but also deliver the required assurances (security, compliance, monitoring) throughout the lifetime of the service.

Bill Robinson

Stack'em High

Posted by Bill Robinson Dec 8, 2010


In more than one customer engagement I've thrown around the term 'stack', as in "application stack" or "full-stack provisioning". So what is a stack, and why should you care?

 

A stack is the list of applications you drop on a server to make it do X. A stack is usually made up of three parts: the 'base stack', which encompasses things like an endpoint security agent, a performance monitoring agent, a baseline security configuration, and current patches; the 'application stack', which makes the server what it is (a database, a web server, custom application code); and finally the configurations specific to that server.
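To make the three-part composition concrete, here is a minimal sketch that models a stack as layered lists. The component names and roles are illustrative assumptions, not any particular product's packaging:

```python
# Hypothetical sketch: a server "stack" as base layer + application layer
# + per-server configuration. All component names are made up for illustration.

BASE_STACK = [
    "endpoint-security-agent",      # e.g. antivirus
    "performance-monitoring-agent",
    "baseline-security-config",
    "current-os-patches",
]

APP_STACKS = {
    "webserver": ["apache-httpd"],
    "database": ["oracle-db"],
}

def full_stack(role, server_specific_config):
    """Everything to drop on the server: base + application + local config."""
    return BASE_STACK + APP_STACKS[role] + server_specific_config

print(full_stack("webserver", ["vhost-config-acme"]))
```

The point of the model is that only the last argument varies per server; the first two layers are fixed by policy, which is exactly what makes provisioning automatable.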

 

Stacks exist when IT has standardized tools: all Windows servers get McAfee AntiVirus; all web servers run Apache.

 

I've been to customers where, sadly, there was no standardization. Every server got whatever the requestor wanted on it, and yet they wanted to automate server provisioning. Sure, the OS went down faster, but the rest was a nightmare. There were political obstacles to standardizing because a 'this is how we've always done it' mentality was in place. It took significant effort to win them over to offering a menu of a few options instead of a write-in. This not only reduced licensing and support costs but also made for faster server deployments.

 

I've also been to customers with a high degree of standardization. All the hardware was from one vendor, in three configurations (small, medium, large), and all the software was common. They brought in BMC BladeLogic as the automation engine, and BladeLogic talked to other systems that billed cost centers, set up user accounts, and updated asset tracking systems.

 

Hopefully it's pretty easy to identify the stacks in your environment. One approach is to run a software inventory audit on all your systems and look at the data; the patterns should emerge quickly. Once that's done, start identifying the outliers and determine whether they are really needed or whether they can be replaced by something more standard.
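The audit step above can be sketched in a few lines: group servers by their installed-software combination, treat repeated combinations as candidate stacks, and flag one-off combinations as outliers. The inventory data here is invented for illustration:

```python
from collections import Counter

# Hypothetical inventory audit output: server -> installed software.
inventory = {
    "web01": frozenset({"apache", "mcafee", "perf-agent"}),
    "web02": frozenset({"apache", "mcafee", "perf-agent"}),
    "web03": frozenset({"nginx", "mcafee", "perf-agent"}),   # one-off web server
    "db01":  frozenset({"oracle", "mcafee", "perf-agent"}),
    "db02":  frozenset({"oracle", "mcafee", "perf-agent"}),
}

# Count identical software combinations; frequent combos are candidate stacks.
combo_counts = Counter(inventory.values())
candidate_stacks = [combo for combo, n in combo_counts.items() if n > 1]
outliers = [host for host, combo in inventory.items() if combo_counts[combo] == 1]

print(outliers)  # -> ['web03']
```

Real audits are messier (version skew, partial installs), but even this crude grouping surfaces the "why does this one box run nginx?" conversations that standardization efforts start from.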

 

Once you get the stacks defined, you can build your menu of services: Windows WebServer, Linux Database, Solaris AppServer. Maybe Big Linux Database, Small Solaris WebServer, and so on.

 

The goal is conformity. It simplifies licensing, support, and deployment, and it speeds upgrades. If you need to switch antivirus vendors or hardware vendors, it's a pretty easy swap. It also makes it easy to move to "the cloud" (or anywhere else), because you know exactly what needs to be dropped on the server and all the associated configurations.



Several years ago I had some great experiences working on a team building out a large data center from scratch. The complexity of the underlying infrastructure was new to me, and the element that stuck in my mind the most was the database. Coming from an application development and management background, I had always thought of the database as just "being there" and someone else's problem. I was blown away by the work required to stand up a highly available, enterprise-quality database. I was also struck by how specialized the DBAs were, and how difficult it was to find quality DBAs with real-world experience with enterprise-quality databases. As a team, we put a lot of focus on automation, but databases remained somewhat apart: the automation design for the shared database infrastructure took more than six months and was rarely used. Like many other IT shops, we put most of our effort into other parts of the infrastructure.

 

Given the importance of the database within the data center, my experience, while very normal, seems illogical. The database is the repository of the most sensitive and important data in the enterprise, yet we had little success folding the database infrastructure into our wider automation. Why is that the case? The same forces that drive DBAs to be so specialized have also led IT to treat databases as something different, requiring a different approach. Oracle and Microsoft are constantly changing how their databases are installed, configured, upgraded, and patched, and only specialists can keep up with that pace. In particular, scaling an enterprise database can be something like changing the engine of a jet in flight, while the passengers look on in amazement and horror.

 

So, this is fundamentally why I am so excited about BMC's acquisition of GridApp last week. GridApp has successfully taken on the herculean task of encoding that specialist knowledge for building, upgrading, patching, scaling, and maintaining databases. Database automation takes the fear and trembling out of these complicated tasks. It gives DBAs a path to conclusively join the automation revolution already under way among their server, network, and application brethren, and to focus on the strategic design and engineering work that makes their jobs more enjoyable.

 

So, welcome to the party Database Administrators. It’s great to have you.



First, let me apologize for posting this a few weeks late.  I hope the length of this post goes some way to make amends for that.
A good friend and long-time Consulting colleague is oft-heard to insist, "the most successful projects are the ones where we never touch the keyboard."   And for the longest time, I had repeated this mantra to consultant after consultant as we brought new hires into the Automation Consulting Practice.
This was a good model for a while, and in some cases it still holds true today. But as our business grew and matured, we moved away from a single person on site implementing our solutions toward larger, more complex implementation projects. It soon became apparent that this approach, one I now refer to as 'Hero-led Adoption', could not scale and therefore could not optimally support our customers' success.
Recognizing that adoption was still key to long-term customer success, and therefore to happy customers, the next step in our own maturity curve was a 'Team Contribution' model. In this approach, we would build a project plan to implement, optimize, and deploy our automation solutions, but some portion of the effort would be delivered by our customers' own internal staff as an extension of our project team. The size of that portion varied from customer to customer based on a number of variables, but the intention was always the same. We saw some success with this model, and it proved valuable for several reasons: our customers' people were integrated into project delivery and learned a lot naturally by working alongside our consultants, our customers got more scope for less cost, and we were able to focus on some of our customers' more strategic initiatives. The downfall was that it still engaged only a small team of technical people, which made it difficult to integrate our solutions into the broader business.
We took a further step and began working more with our customers to optimize their broader IT and business processes and to integrate with their organizational initiatives: a 'Process Optimization' model. This involved process definition, documentation, integration, and training. We quickly learned that all this effort, capped by training but lacking a more comprehensive strategy around it, was not optimally effective. We received feedback that, after training, people would simply go back to their day jobs. Top-down process definition alone was not effective at driving change.
This inspired yet another iteration: 'Role-based Adoption Programmes'. This approach arose out of a large, complex, multi-product platform implementation and the recognition that classical product training was insufficient to address all of the interconnections of a composite platform. Instead, we looked at the various user and stakeholder roles within the customer's end-state organization needed to operate, support, and evolve the platform, and designed a measured, results-monitored education and Adoption Programme integrated with the project timelines and deliverables. This proved effective at delivering a solution to a broad audience across fragmented organizations, but it still did not quite reveal all the business benefits of the project's success.
While I cannot and would not claim that we are finished evolving and maturing, today we still employ all of the above methodologies successfully. But we are also more conscious than ever that the most important KPI of all is the almighty dollar (or whatever your local currency). We now design Adoption Programmes as above, but link them to tangible financial benefits on which we can provide continuous feedback to the business, helping drive alignment between the business and its IT organization(s) and thereby ensuring continuous adoption. I refer to this as 'Closed-loop Adoption Assurance'.
I don't claim to have all the answers yet, but we continue to make progress and to innovate new solutions to an affliction that affects more than just automation: in fact, every discipline across every enterprise software vendor.
I will refer back to the concepts in this article in future entries, and perhaps write entire entries on individual concepts addressed above. And of course, I will try to be more timely with my posts as well.

