

Driving home last night, listening to Marketplace on NPR, I heard the attached segment about improving how the food supply chain is managed. As it went on I thought, "I wish all of BMC could hear this," because the food supply chain is an apt metaphor for the application release process. Consider:

 

  1. The primary business driver is Time to Market - the "produce" can't be delivered too quickly
  2. UNLESS hasty or sloppy delivery processes contaminate the product or fail to remove contamination.
  3. The product changes hands frequently and covers a lot of distance.
  4. The earlier contamination is detected, isolated and removed, the less costly it is and the less negative impact it has on the market.
  5. Current practices are manual, ad hoc and ineffective.
  6. Regulatory compliance is mandating changes to current processes.
  7. The solution is a combination of logistics, automation, human workflows and end-to-end traceability.
  8. IBM and Microsoft are getting in the game, but they aren't the innovators in the space.

 

Please take 4 minutes to listen to the attached segment and:

  • when you hear a reference to produce, think "application"
  • when you hear "contamination", think "configuration error or application bug"
  • when you hear "people gettng sick and dying", think "major service outage"
  • when you hear "market" think "data center"
  • ...

 

you get the idea.



I am convinced at this point that my computer has it in for me, or at least some of the software on it. I know all of you have been there. You are happily going along your merry way, and the Computer (notice the capital C, indicating the personification of the computer and its evil intent) decides that it has had enough of your attempts at work prose, and fails, freezes, or thoughtfully reboots your laptop. As I was waiting for a particular software application for writing documents from a Washington State-based company to finish thinking about the document I was working on, I had some time to think myself. Why does this frustrate me so much? I used to go get coffee while my computer booted in the old days. The "Blue Screen of Death" was close to a social cliche at one point. So what changed, other than the obvious improvements in the code of certain software companies?

 

One word - EXPECTATIONS


 

Once you experience something better, the natural inclination of any human is to become desensitized to what was, at one point, absolutely astounding. We usually go one step further and start to feel entitled to what was, only a short time before, out of reach. We have all seen it in our lives. I am no longer content with staying in youth hostels and using my jacket for a pillow while traveling. A car with no air conditioning seems like an abuse of my basic human rights. I expect any website to be slick and "Web 2.0" as a matter of course. I can't make buying decisions without some sort of rating system and hundreds of user comments to guide me.

 

So, how does this relate to Data Center Automation? Well, I look at our industry right now, and I see a lot of expectation setting. Cloud Computing is the IT revolution of this decade. It shouldn't matter where your infrastructure is. "Compute Capacity" should be delivered not in weeks, but minutes. Business Applications should see updates every few weeks, not every few months. The list goes on and on, and we at BMC have built our business on achieving those expectations.

 

There is a rub, though. As often happens with IT revolutions like cloud, the inclination of both vendors and users is to reduce the solution to a single, easy-to-grasp concept. "To the Cloud!" says the hard-working small-business CEO. "Isn't virtualization the same thing as cloud?" "Traditional management and automation techniques no longer apply in the cloud." And as always happens, the unwary consumer is caught up in these over-simplifications, and ends up paying the price.

 

So, what's the bottom line? I am excited about the changes in our industry, and I believe that many of the lofty predictions will seem like child's play 10 years from now. I also believe that users' expectations of IT are higher now than at any time before. I think this decade will be one of the most exciting ever for IT. However, I also believe that an ill-conceived dive into the new world could bring a rude awakening for IT organizations. So what's my advice? Look at the end-to-end process for delivering your services, and make sure that you, and those who seek to counsel you, don't over-simplify the solution. It is better to consider the more comprehensive solution than to be stuck with hundreds of angry users whose expectations you can't meet.



...These are the voyages of the starship Enterprise Software...

 


 

With all cheeky Star Trek references aside, you probably figured out that I was referring to databases. We have spent a good amount of time over the past several months discussing datacenter automation from the perspective of servers and applications, as well as automation’s overall value to cloud computing. Considering that virtually every application ultimately interfaces with a database on the backend, we would be remiss if we did not discuss databases, an arguably perfect candidate for automation.

 

Before we begin, let's take a look at why databases are important in general.


  • They are everywhere:  The primary purpose of a database is to provide data to applications. Although they are rarely seen and we may not interact with them directly, virtually every application we use ultimately depends on a database. When you read a Facebook profile, post an update to Twitter, or perform a Google search, you are interacting with a database. In fact, the very words you are reading in this blog post right now are served from a database.

  • They are closest to the business:  Businesses depend on databases more than any other component in the datacenter. If a webserver or an application crashes, they can be replaced relatively easily as webservers and applications simply provide functionality. Using the Facebook example above, if Facebook lost every web and application server but had their databases intact, you could argue that they could rebuild their webservers, reload their applications, and eventually come back online. In contrast, if Facebook lost every database server and the data associated with them, there would essentially be no Facebook.

 

So now that we have established why databases are important, let’s take a look at why databases are a perfect candidate for automation:


  • High complexity: Because databases are so critical to the business and must always be available, database technologies have grown more and more complex over the years to support increasingly robust features for performance, high availability, and disaster recovery. When setting up a new database manually, it is easy to miss one step in the 100-step process typically required to set up an enterprise database. Automation can ensure that the 100-step process is followed correctly 100 percent of the time (a minimal sketch of the idea follows this list).

  • Expensive DBAs: Because of the increasing criticality and complexity of databases, database administrators are generally much more expensive than other IT personnel. Freeing up a DBA’s precious time by automating the more routine tasks, such as provisioning and patching databases, can yield big returns.

  • Cloud Computing: Given all the spotlight on cloud computing nowadays, it is important to note that database automation is key to enabling real cloud computing. The benefit of being able to provision servers and applications within 2 minutes will go unrealized if the databases that these applications depend on take 2 months to provision.
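
To make the automation point concrete, here is a minimal sketch in Python (not any vendor's tooling; every step name and command below is an illustrative placeholder) of the core idea: encode the build standard once, run the steps in order, and fail fast, so none of the 100 steps can be silently skipped.

```python
import subprocess

# Illustrative placeholders only; a real build standard would enumerate
# every step of the enterprise database setup here.
PROVISION_STEPS = [
    ("create data directories",   ["mkdir", "-p", "/u01/oradata"]),
    ("set kernel parameters",     ["sysctl", "-p", "/etc/sysctl.d/99-db.conf"]),
    ("install database binaries", ["/opt/installer/run", "--silent"]),
    # ...steps 4 through 100 go here...
]

def provision(steps):
    """Run every step in order; stop loudly rather than leave a half-built database."""
    for name, cmd in steps:
        print(f"running: {name}")
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"step failed: {name}\n{result.stderr}")

if __name__ == "__main__":
    provision(PROVISION_STEPS)
```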

 

With BMC's recent acquisition of GridApp Systems, BMC's Service Automation line of products has been rounded out into a very robust portfolio, bringing the efficiency, consistency, and manageability that automation provides to one of the final frontiers of the datacenter.



Frank Yu

To the cloud...oh wait

Posted by Frank Yu Mar 18, 2011


With cloud being the hottest topic in IT for the past year, a lot of companies have started going down the path of implementing their own private or hybrid cloud. However, based on the feedback that I’ve been getting from various customers, many of them are stuck.

 

The problem is that they are treating the cloud as a standalone entity. Considerations were made for how to expose it to end users, how to rapidly provision large numbers of virtual guests, and how to recycle resources back into the pool after decommissioning. These requirements are often tested in a self-contained lab through a pilot program, and most pilots turn out to be fairly successful. However, when the time comes to take the cloud solution from the lab into a real production environment, many challenges arise (the sketch after the list pulls them together into a single flow):

 

  • How does the network in which the virtual guest will reside get provisioned?
  • How will the decision on which IP is assigned to which server be made?
  • How will the assigned IP be mapped to the correct name in DNS?
  • What about VLAN creation and assignment?
  • What about load balancers in the environment?
  • What about firewalls and ACL rules?
  • How can change management be integrated into the solution to govern the requests?
  • What about costing and chargeback?
  • How will multi-tenancy concerns be addressed?
  • How will the application stack be deployed on top of the virtual guest?
  • How will the newly provisioned server be added into monitoring?
  • How will the server get patched and secured?
  • What about servers that need to follow regulatory compliance?
  • How will the server retirement process remove all the related entries in the entire environment?
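
Pulled together, those questions describe a single end-to-end flow in which the virtual guest is only one step among many. The Python sketch below is purely illustrative: every line stands in for an integration point (change management, IPAM, DNS, VLANs, load balancing, application deployment, monitoring, compliance, CMDB) that a production cloud must touch.

```python
from dataclasses import dataclass

@dataclass
class Request:
    hostname: str
    network: str
    vlan: str
    pool: str
    app: str
    tenant: str

def step(msg):
    print(f"[provision] {msg}")

def provision_service(req: Request):
    # Every step below is a stub for a real integration point.
    step(f"open change ticket for {req.hostname}")        # change management governs the request
    ip = "10.0.20.15"                                     # an IPAM system would choose this
    step(f"allocate {ip} on network {req.network}")
    step(f"register {req.hostname} -> {ip} in DNS")
    step(f"clone virtual guest; attach to VLAN {req.vlan}")
    step(f"add {req.hostname} to load balancer pool {req.pool}")
    step(f"deploy application stack '{req.app}'")
    step("enroll in monitoring; apply patch and compliance policies")
    step(f"record CI in CMDB for tenant {req.tenant}")    # chargeback and retirement depend on this
    step("close change ticket")

provision_service(Request("web01", "prod-dmz", "vlan210", "web-pool", "storefront", "retail-bu"))
```

Retirement is essentially the same list run in reverse, which is why bolting these steps on after the fact is so painful.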


 

It is nearly impossible to bolt bits and pieces of software and integration points onto a standalone cloud solution that wasn't initially designed to meet these challenges. This is why so many companies are getting stuck and coming to us for help. A cloud solution that does a great job providing on-demand compute but doesn't easily fit into the rest of the IT infrastructure will have an extremely difficult time standing up in production.

 

This is why I make the same recommendation to all our clients: when it comes to building a cloud solution, do not look at it in a vacuum. Instead, approach it as a vital part of IT that must work in concert with the rest of the environment, and be mindful of all the considerations that come with that.



DevOps describes the cultural mashup of Development and Operations to address the Application Service Delivery bottleneck.


In Boston last week, the first DevOpsDays event of 2011 was held. Two days later at the CloudConnect conference in Santa Clara, there was a DevOps and Automation track.

 

Perhaps you’re asking yourself, “What is this ‘DevOps’, and what does it mean to me?” So glad you asked.

 

With Agile methodologies and Java application servers, Development is producing application changes faster than ever before. Those changes go through a release process, at the end of which Operations deploys them into Production. With cloud and web-based platforms, Operations can instantly deploy application changes and users will see those changes as soon as they refresh their browsers.

 

With Development producing changes faster and Operations able to make them available almost instantaneously, the release process in between has become the time-to-market bottleneck. Relieving this bottleneck is complicated by differences in professional cultures. Operations sees Development as cavalier about operational discipline; Development sees Operations as inflexible, imposing processes and approvals that hinder productivity. When things were simpler this culture clash was not such a big deal, but increasing change rates and greater environmental complexity have created real business issues around the delivery and stability of application changes. This is why Gartner is reporting a sharp increase in inquiries around application release solutions.

 

Some of our customers process over a thousand application changes per week and run their applications through nine different testing environments. The backlogs are becoming unacceptable. Operations must often borrow developers to assist with deployment and configuration, detracting from Development’s productivity. Detailed documentation must be written, tailored and updated - for every application, and for each environment between Development and Production. Without an Application Service Delivery strategy, the release process is labor intensive, time consuming and error prone. It is mission critical work that must be done by expert administrators who are doing their best with generic, ad hoc tools like command line interfaces, scripts, paper documents, spreadsheets, email, conference calls and instant messaging.

 

DevOps describes the movement to create a cultural mashup of Development and Operations to address the Application Service Delivery bottleneck. Development must produce applications that are more easily deployed and configured. Operations must become more agile in their workflows. The whole release process must be coordinated end-to-end as a symphony of activity that includes change management, human workflows, automated tasks and configuration data management.
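
As a rough sketch of that end-to-end coordination (the environments, gate, and deploy routine are all hypothetical stand-ins), the idea is one environment-aware deployment routine promoted through every stage, with an explicit gate where human workflow and change management meet automation:

```python
# Hypothetical names throughout; nine environments echoes the customer
# example above.
ENVIRONMENTS = ["dev", "int", "qa1", "qa2", "perf", "uat", "staging", "dr", "prod"]

def approved(release, env):
    # Stand-in for the human workflow / change-management gate.
    return True

def deploy(release, env):
    # One automated, environment-aware routine replaces per-environment
    # run books, scripts, spreadsheets, and conference calls.
    print(f"deploying {release} to {env} with {env}-specific configuration")

def promote(release):
    for env in ENVIRONMENTS:
        if not approved(release, env):
            raise RuntimeError(f"{release} blocked at the {env} gate")
        deploy(release, env)

promote("billing-app-2.4.1")
```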

 

For more on DevOps, check out the Dev2Ops website.



There is an age-old question within Customer Support about how to staff your support center: with generalists or with specialists. Typically in a Level 1 support center, it is important to staff with generalists who can answer general questions and escalate to a specialist when more detailed technical support is needed. In the support ecosystem, it is that deep level of technical knowledge that a support engineer holds sacred; it is what used to separate him from the rest.

 

In this world of ever-growing interconnectivity, where one product must seamlessly talk to another, our expert support staff can no longer pride themselves solely on their ability to go deeply technical in a single product. To stand out, they need to truly understand how the customer is using the product (the use cases), AND they need to continue providing technical expertise in their major product line while maintaining generalist knowledge. Solutions are made up of multiple products that communicate and share information with each other via the CMDB or other mechanisms such as APIs.


 

In order to prepare our staff to support this solution and others, we need them to remain technically deep in their primary area of expertise while being generalists in the products that interact within relevant use cases. To deliver the best service to our customers, we have to ensure that the best support person handles a solution issue from end to end, so that customers do not get bounced from product team to product team. This means the support engineer must be technical enough to handle the integrations between the products, must have a strong support infrastructure behind him, and must understand the context in which the customer is using the solution. We have done extensive training with our support engineers to this end. We will not build end-to-end experts overnight, but I think you will find your support experience much better adapted to the way you are using our solutions.

 

 

Not only does our support delivery have to change, our entire support infrastructure must also adapt.

  • With DCA you will see that our traditional documentation has started to transform into a much more solution-centric form; it takes you end to end from the use-case perspective, while putting documentation for any part of the solution just a mouse click away. If you have not checked it out already, check out the new online DCA documentation (please note: login required). I think you’ll be pleasantly surprised by the experience.

 

  • Another challenge we face in supporting solutions is that our internal support labs can no longer be product-centric. The introduction of VMs allows us to bring use cases up and down for testing in a matter of minutes, something that, without automation, would have taken a day or two to configure.

 

  • And soon, from our support site, you will be able to enter an issue based on the solution. We will no longer require you, the user, to determine which product is the root cause of the problem. Our goal is to take that burden off of you, so that you describe the problem only once and work with a single support expert who maintains the consistency and knowledge throughout.

 

I’d love to hear about your experiences, so please, feel free to post questions, concerns and comments!




The ‘Cloud’ is the hottest new vehicle for IT to deliver value to the enterprise. At its heart, by empowering end users to rapidly access resources, it accelerates the delivery of new products, reduces issues caused by misconfiguration, and lets IT react more swiftly to capacity constraints. This is why many enterprises are making the cloud one of their top initiatives for the next 12 months.

 

Most Cloud offerings focus on virtual machine (VM) availability, access control, and chargeback. While a self-service vehicle for providing pre-built systems to users can add value, it ignores the most important part of the puzzle: the application stack. As the Director of IT at a large financial player noted, what good is empowering end users to access systems if those systems don’t actually do anything? In many cases, it can cause more harm than good. The big question is: why don’t most offerings handle this?

 

The reality is that most Cloud suites have little visibility into the application tier. Applications are their own beasts with their own rules; J2EE apps need to be managed differently than databases. Relying on users to build and maintain this content is a guaranteed failure, and an infrastructure player that understands only some components of the full offering has no hope of enabling end customers to realize the value of the Cloud. For example, I worked with an enterprise whose Cloud vendor argued for creating a full VM for every slight application variation that existed (literally, every schema variation in development would become another virtual machine). Of course this was impossible to maintain, and the customer quickly recognized that it would never be practical.
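
Some back-of-the-envelope arithmetic, with invented counts, shows why that approach collapses: the image count multiplies across every axis of variation, while full-stack automation keeps one base image and applies the variation at deploy time.

```python
# Invented counts for illustration.
os_versions = 3
app_versions = 5
schema_variations = 12

print("golden images to maintain, one per variation:",
      os_versions * app_versions * schema_variations)   # 180

# With deploy-time parameterization: one base image, variation applied on demand.
def deploy(base_image, os_v, app_v, schema):
    print(f"deploy {base_image} with os={os_v}, app={app_v}, schema={schema}")

deploy("base-image", "6.1", "2.3", "dev-schema-07")
```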

 

The key to enabling the Cloud is full-stack functionality: from system deployment, to app deployment, to database deployment, to change management. This, combined with a well-constructed, multi-tenant, self-service offering, is the killer application that will truly make the Cloud vision a reality. Without it you are left with a partial solution that will breed chaos. We may see clouds in the sky, but if they don’t produce rain, we’re all in big trouble.



Why “lights-out” management of your infrastructure does not mean “lights-out” resources

 

Customers, with the help of BMC, tend to build their business case on the backs of three key areas of potential savings:

  • Retirement of displaced products (think Shavlik, Patchlink, Tripwire, Symantec, or other point tools)
  • Risk mitigation (avoiding penalties for audit failures/findings levied by the customer's external audit staff)
  • Reduction in labor costs

 

The first two are relatively straightforward. If we displace a product, the customer saves on the maintenance stream. If a customer can avoid an audit finding, they save the dollars associated with whatever fee would be levied upon them. The third – reduction in labor costs – is where I find that we (customer and BMC) tend to trip up. Here’s why.

 

In building the business case, we message the potential windfall in savings from labor cost reduction while simultaneously failing to educate the decision makers on the need to invest in resources for the roll-out, ongoing development, and maintenance of the automation platform – also known as a customer "Center of Excellence". Why? Because it waters down the ROI. And because automation carries with it the allure of autonomous actions taken by some all-knowing, all-powerful system. We know better, don’t we?


 

Now, this doesn’t mean the customer will allocate all of the anticipated labor reduction to the automation platform – they will still see significant savings on the labor side no matter what size the COE is – but the customer should plan to budget some ongoing labor cost for the solution, and should socialize this very early in conversations with business leaders.

 

Ok – so you've taken the bait. You’re buying into this COE concept, but don't know where to start? There are three core competencies to any successful COE:

 

Engineering

These people author content for new use cases, expand existing use cases, evaluate new features, and educate the rest of the org on the value of adopting automation. Generally speaking, they are viewed as the development arm of the COE. For a medium-sized implementation (a few use cases, < 5,000 servers), you’ll need no more than 1 or 2 resources devoted to this effort – with an extended virtual team providing guidance on their areas of expertise.

 

Operations

This function is designed to maintain the automation platform infrastructure, including monitoring of the environment, expansion of the infrastructure, and root cause analysis on key issues impacting product use cases. Generally speaking, they are viewed as the NOC & Support arm of the COE and own all care & feeding. For a medium-sized implementation, assume another resource.

 

Business Analysis

These people assess the business impact and ROI associated with each of the use cases. Before a new use case is on-boarded, the business analyst figures out the anticipated savings/value achieved by the use case and, once it is implemented and placed into production, reports on its ongoing value. In many organizations, the same people who run the “Engineering” aspects of the COE can serve this function. In larger implementations, a Program Manager will be designated.
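
For illustration only, with invented numbers, the analyst's core calculation before on-boarding a use case can be as simple as this:

```python
# All figures below are invented for illustration.
runs_per_year = 500             # e.g., patch jobs executed annually
manual_hours_per_run = 4.0      # hands-on time before automation
automated_hours_per_run = 0.5   # review/approval time still spent per run
loaded_hourly_rate = 90.0       # assumed fully loaded admin cost

hours_saved = runs_per_year * (manual_hours_per_run - automated_hours_per_run)
annual_savings = hours_saved * loaded_hourly_rate
print(f"hours saved per year: {hours_saved:,.0f}")      # 1,750
print(f"annual labor savings: ${annual_savings:,.0f}")  # $157,500
```

Reporting the same numbers back after the use case is in production is what keeps business leaders investing.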

 

Most organizations that adopt automation are technology oriented, so they gravitate to the first two functions because that's what they know how to do best. They will ask questions like "Can the infrastructure handle one more use case?" rather than asking the more important question - "What is the business value of adopting this use case?". The last function - business analysis - is routinely overlooked, but is critical to the success of any implementation. After all, if the business leaders don’t see the value of their investment, why would they bother investing more?

 

Every successful implementation of BMC’s automation platform has some variant of this structure – whether they call it the “Center of Excellence”, "Platform Architecture”, “Engineering”, or something else. And virtually all of the escalations I deal with involve misaligned expectations about what it will take to manage and support the platform. For customers investing millions in a solution that is promising millions in savings, a COE with dedicated focus on achieving the promised business results is a smart investment - but only if you want to succeed...

Michael Ducy

Complexity

Posted by Michael Ducy Mar 4, 2011


Flipping through the Harvard Business Review, I saw an ad with the tagline,

 

"Complexity presents an opportunity and a threat at the same time."

 

Sitting back and thinking about this statement in the context of Data Center Automation, I realized that most organizations are challenged by the complexity of their processes and not by the actual technology.  This complexity presents itself as a threat to the organization in several ways.

 

First, complexity makes it harder to bring in new people to perform the same job. When the people who designed an overly complex system leave an organization (or even a department), it is often hard to find anyone who can fully replace them in managing it. I have personally seen people promoted into different roles only to be constantly pulled back to support a complex system they designed. Some organizations may even hesitate to let go of under-performing staff, or to promote individuals into new roles, because no one would be left to support the complex system.

 

Second, complexity may prevent organizations from undertaking new initiatives. When a new initiative comes face to face with existing complex systems, many organizations will either scrap the initiative or build complexity into it to handle the edge case. This creates a vicious "Web of Complexity" that only exacerbates the problem.

 

Third, when complex systems fail, the mean time to repair (MTTR) is often longer and the repair more difficult. Specialized experts must be called in to fix the problem, often working after hours, on weekends, or during vacation. This causes undue stress on an organization and its people, reducing the organization's overall effectiveness.

 

What can organizations looking at Data Center Automation do about complexity?  They can begin by using their automation initiatives as a chance to reduce and remove the complexity that has been built into systems over time.  Often, complexity exists to support legacy methods; as you review your systems and processes, ask whether those legacy ways of doing things are still required.  Have your staff approach the situation with open minds, realizing that processes built several years ago can most likely be optimized and simplified, or removed entirely with automation.  Often, complexity was built in because there was no simple way to perform a task.  With an automation solution that provides a framework for automation, rather than just a scripting platform, much of this complexity can be removed.
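
As a sketch of that distinction (the job structure and field names are invented): a framework captures intent declaratively and owns targeting, ordering, auditing, and failure handling itself, whereas a bare scripting platform leaves each team to hand-roll those, and the hand-rolling is where complexity accretes.

```python
# Invented structure: a declarative job definition that a framework executes.
patch_job = {
    "name": "monthly-os-patching",
    "targets": {"group": "prod-web-servers"},
    "steps": ["snapshot", "apply-patches", "reboot-if-needed", "verify"],
    "on_failure": "rollback-snapshot",
    "schedule": "first-saturday 02:00",
}

def run_job(job):
    # The framework, not each script author, owns ordering, auditing, and rollback.
    for s in job["steps"]:
        print(f"{job['name']}: {s} on {job['targets']['group']}")

run_job(patch_job)
```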

 

According to this Harvard Business Review blog post, complexity is weighing heavily on CEOs' minds these days.  Tackling complexity as part of your Data Center Automation initiatives presents an opportunity for organizations, giving them the chance to achieve better MTTR, increased staff happiness, increased agility to take on new initiatives, and less dependency on underperforming staff.

 

And best of all, your CEO will sleep better at night knowing his IT organization has reduced complexity.



So you like the idea of fully automating cloud provisioning, but are not ready to invest the time and money to jump to that level.  There are alternatives that can deliver more automation benefits from what you already have, or with a smaller investment.

 

What automated actions would you launch in BMC Atrium Orchestrator from the server admin console of BMC BladeLogic Server Automation?  This new feature, released recently in version 8.1 of BMC BladeLogic Server Automation, reduces server admin workload as well as the number of different user interfaces a server admin has to use to execute tasks.  Any task automation workflow defined in BMC Atrium Orchestrator can be initiated to accomplish simple tasks, such as a 360 ping test or a query for hardware information.  More complex tasks may also be of interest, such as adding a new server to a network load balancer at the same time the server provisioning job is created.

 

Here is an example of how easy it is to select a BMC Atrium Orchestrator workflow from the BMC BladeLogic Server Automation console:

 

[Screenshot: selecting a BMC Atrium Orchestrator workflow from the BMC BladeLogic Server Automation console]

 

Many routine and repetitive tasks can be pre-defined in BMC Atrium Orchestrator and called from this drop-down menu; only your imagination limits how much manual effort you can remove.  Putting this capability in place is not difficult, as it does not require the substantial projects you might expect for Change Management process integration or cloud computing self-service provisioning.  However, this server administration productivity enhancement uses the same orchestration technology that underpins task orchestration in those more comprehensive processes.  You will not duplicate the cost of executing simpler workflows when the time comes to implement larger-scale solutions, and you can still run simple task automation on the same orchestration platform supporting the grander functions.  It is yet another option for incremental improvement of server configuration management on your roadmap to the next level of data center operations maturity - on the way to cloud automation.
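
To make the idea concrete, here is a conceptual sketch in plain Python (deliberately not BMC Atrium Orchestrator's actual workflow format, which is built in its own designer) of chaining routine post-provisioning tasks so the admin never has to visit a second console:

```python
def ping_test(host):
    print(f"ping test against {host}: ok")                 # stand-in for the "360 ping test" workflow

def add_to_load_balancer(host, pool):
    print(f"added {host} to load balancer pool '{pool}'")  # stand-in for the LB integration

def on_provision_complete(host):
    # Tasks an admin would otherwise run by hand from other consoles,
    # chained onto the provisioning job instead.
    ping_test(host)
    add_to_load_balancer(host, "app-pool-1")

on_provision_complete("newserver42")
```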

 

Maybe you have other ideas regarding how to use this new feature that would benefit your efficiency and effectiveness - maybe simple, maybe more complex.  Tell me about them.
