
Excellence can take on many different meanings depending on your perspective. It can be a rating that indicates a level of superiority. For students, excellence is often rewarded with an “A” on a report card. An athlete who wins a gold medal is recognized for his or her excellence in a sport. Or excellence could refer to the quality of something that is exceptionally good – like my favorite brand of chocolate chip cookies.



However, since this is a technology blog, I’d like to shift the focus to excellence in data center automation. To drive efficiencies, automation requires developing and implementing effective processes.



BMC’s Ben Newton and Tim Fessenden discussed the concept of a Center of Excellence (COE) for the data center, and how this approach brings together talent from many disciplines to create and maintain automation best practices. They explained how business analysts, people in engineering and operations, and those in other groups can work together to drive automation and business value. They also identified steps for building the center. Their article, How a Center of Excellence Can Boost Automation Benefits, describes this approach.



If you’re a BMC Database Automation (BDA) user, you’ve already discovered the efficiencies and cost savings that come from automated provisioning and patching of your databases. However, there are probably lots of other database administration tasks you need to perform frequently, such as creating backups or setting up user accounts, that could be made easier and more efficient.

To help you automate these kinds of tasks, BDA includes an incredibly useful feature called Custom Actions. A Custom Action allows you to execute administrative tasks on a target server in the context of BDA’s knowledge of the database environment and includes a rich set of support features.

Custom Actions are script-based, run on Linux, UNIX and Windows, and can contain pretty much any content you like. For example, if your action installs an agent on a server, you can include the agent installer package as part of the action’s content. BDA also provides a useful set of environment variables for your action to use, including things like the Oracle home and SID of the target, patch levels, the install user, and many others.
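To make this concrete, here is a minimal sketch of the kind of logic a Custom Action script might contain. The environment variable names (ORACLE_HOME, ORACLE_SID) and the backup command are illustrative assumptions, not BDA's documented interface:

```python
# Hypothetical sketch of logic inside a Custom Action script. The names
# ORACLE_HOME and ORACLE_SID stand in for whatever variables BDA actually
# exposes to the action at run time.
def build_backup_command(env):
    """Compose a backup command from BDA-provided environment variables."""
    oracle_home = env.get("ORACLE_HOME", "/u01/app/oracle")
    oracle_sid = env.get("ORACLE_SID", "ORCL")
    return f"{oracle_home}/bin/rman target / nocatalog cmdfile=backup_{oracle_sid}.rcv"

# In a real action, env would be os.environ as populated by BDA.
cmd = build_backup_command({"ORACLE_HOME": "/opt/oracle", "ORACLE_SID": "PROD"})
```

The same pattern extends to any variable BDA supplies: the script stays generic, and BDA fills in the target-specific details.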

You can run your actions on one or more nodes, and you can schedule them to run in the future or on a recurring basis. You can also create a menu option for your action in the BDA user interface so that users can run it from the BDA Grid just like provisioning, and, like provisioning, it can have its own pre-verification phase.



Because Custom Actions are subject to BDA’s Role-Based Access Controls (RBAC), you can tightly control who runs your action and where it runs. You can limit your action to specific domains, operating system types, database versions and more.


As an added bonus, you can insert your action into a provisioning workflow as a pre- or post-script, guaranteeing that it runs whenever someone provisions a database.


Try automating a few of your favorite DBA tasks with Custom Actions and you’ll quickly see how they can make your workday more efficient.


What is computer performance?  Those of us in the field think we know what it means, but do we really?  Is the user’s view of performance what we actually measure? 


I found myself pondering this question while talking to one of my computer-illiterate friends.  He’s super-smart, but has no patience to learn the technical underpinnings of his PC.  I can relate – I don’t actually know how my flat panel TV works, and have only a vague notion of how my car moves me from one place to another.  I just expect them to work.  That’s how he feels about PCs.  When he is browsing, opening a program or trying to print, he blames the PC itself for any slowdowns.  I found him searching for faster machines, using the GHz rating to determine the relative speed.


Any performance analyst worth their salt is shaking their head right now.  Clock rating is only one of many components that we need to look at to understand performance.  We pat ourselves on the back; we are so much more knowledgeable than the average user.  But do we have tunnel vision too?  Does our pool of data include all aspects of the end-user experience?



We work in silos of data ourselves.  My friend’s silo is very limited – he sees only one piece of hardware, and one aspect of that hardware, as the problem.  But when I did performance for a living, we weren’t all that much better.  We had a metric called response time (yes, mainframes measure that), but it wasn’t really what the user saw.  That number represented the interval from when a request arrived at the mainframe to when the response left it.  All the back-end network, any other servers, and the internet were completely ignored.


First, we need to know what we are measuring and what we should be measuring.  We want to clock from the moment the end user hits “enter” to when he receives a response on whatever device he chose.  Fortunately, there are solutions that can simulate this or actually measure user interactions. We simply have to employ them to get a real number.
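As a sketch of the idea, a synthetic transaction simply wraps the whole user-visible interaction in a timer; the transaction below is a hypothetical stand-in for a real request:

```python
import time

def measure_response(transaction):
    """Time a user-visible transaction from 'enter' to response."""
    start = time.perf_counter()
    result = transaction()
    elapsed = time.perf_counter() - start
    return result, elapsed

def sample_transaction():
    # Stand-in for the real interaction: back end, network, rendering, all of it.
    time.sleep(0.01)
    return "OK"

result, seconds = measure_response(sample_transaction)
```

The point is what gets timed: the entire round trip the user experiences, not just the slice one silo can see.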


Second, we have to understand what a transaction is to our user.  Not what we think of it as in IT terms, but the real business transaction that we deliver.  It helps to have a way to map it – which servers does it traverse?  Which networks? What data stores does it need?  We have always tried to measure the components of response time, the “speeds and feeds,” or more accurately, the “using and the waiting times,” but now, we can’t understand where the problem is unless we know the transaction path.   Again, there are tools that help you build a CMDB, automatically discovering the assets and relationships.  But you have to know that this needs to be done.
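A toy illustration of why the path matters: once you know which components a transaction traverses and have using/waiting times for each, finding the choke point is a simple comparison. The component names here are made up:

```python
# Map each business transaction to the components it traverses.
TRANSACTION_PATH = {
    "place-order": ["load-balancer", "web-server", "app-server", "order-db"],
}

def slowest_component(timings):
    """timings: {component: (using_seconds, waiting_seconds)} along the path.
    Returns the component contributing most to total response time."""
    return max(timings, key=lambda c: sum(timings[c]))

bottleneck = slowest_component({
    "load-balancer": (0.01, 0.00),
    "web-server": (0.05, 0.02),
    "app-server": (0.20, 0.10),
    "order-db": (0.08, 0.45),
})
```

Without the path map, you would have no idea which timings to collect in the first place.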


Finally, you have to move all this to a proactive approach, where you set thresholds and monitor, to detect and repair issues before the user sees them (or buys a new PC because his is “too slow”).  You need to do this because most users blame the owners of a web site or program for their performance woes, not their PC.  And that means they blame you.  And understand this – cloud will not fix this problem for you; it only makes it more difficult.  Get this right now, using the right automation tools, so you can limit those panicked help desk calls.  Be a performance hero by understanding what your users mean by performance.
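The proactive piece boils down to alerting on sustained threshold breaches rather than single spikes. A minimal sketch:

```python
def first_breach(samples, threshold, consecutive=3):
    """Return the index at which a metric has exceeded `threshold` for
    `consecutive` samples in a row, or None if it never does. Requiring a
    run of breaches filters out one-off spikes."""
    run = 0
    for i, value in enumerate(samples):
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return i
    return None
```

Feed it response-time samples and page someone when it returns an index, before the help desk phone rings.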


I recently had a chance to chat with Ron Kaminski, ITS Sr. Consultant, and Shane Z. Smith, their capacity planner focused on web site and data center performance.  Ron had previously written a wonderful paper on the above topic and I wanted to learn more.


DK:   You previously wrote a paper about these concerns, but not everyone had a chance to read it. And things may have changed. Could you give us some insights into what people are missing as they approach SaaS and Cloud?


Ron K:  People think you don’t need to bother with tools – just throw more hardware at the problem.  But they miss why they have a capacity issue.  As an example, in a file server farm, you might find that the real work takes up only 1% of the resources used.  Large amounts of resources are devoted to things like virus scanners, a huge consumer.  No one expects virus scanners to take up 70% of the machine.  Infrastructure decisions are being made without understanding how much (or how little) of the CPU utilization is for real work.
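Workload characterization in miniature: classify each process sample as real work or support work, then look at the split. The process names and the classification rule below are illustrative, not Kimberly-Clark's actual data:

```python
# Illustrative classification: which processes count as support work.
SUPPORT_PROCESSES = {"virus_scanner", "backup_agent", "indexer"}

def cpu_breakdown(samples):
    """samples: list of (process_name, cpu_seconds).
    Returns (real_share, support_share) of total CPU consumed."""
    support = sum(cpu for name, cpu in samples if name in SUPPORT_PROCESSES)
    real = sum(cpu for name, cpu in samples if name not in SUPPORT_PROCESSES)
    total = real + support
    if total == 0:
        return 0.0, 0.0
    return real / total, support / total

real_share, support_share = cpu_breakdown([
    ("file_server", 1.0),
    ("virus_scanner", 7.0),
    ("backup_agent", 2.0),
])
```

Someone looking only at total CPU would see a busy box; the breakdown shows most of that busy-ness is not real work.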


Kimberly Clark has sites everywhere in the world, so when you do backups or maintenance activity based on North American time, good times to do it in the US may be peak usage time for the other side of the globe.   To manage this effectively, you need to be able to differentiate between real work and support work.  People who just look at total CPU don’t see that. 


Shane S:  Without tools based on workload characterization, we wouldn’t have caught these issues.  And even trying to get the data can be a problem.  You can’t look at data at this level in PaaS (platform as a service), so that’s a problem. And SaaS vendors require proof that performance problems aren’t at your end.  You need tools that help you do that. 


Ron K:  We needed to create our own collector, because we have a lot of virtual terminals.  Did you know that if you leave a DOS window open, it burns 100% of the CPU?  People aren’t aware of this impact.   We need the ability to detect what process is causing the problem and then pass rules that minimize the impact.  As an example, some screen savers can take 100% of the CPU too.  By banning selected programs or behaviors, we have recovered as much as 40 CPUs worth of resources.  You need a tool to get to the details.  Otherwise, you are like a one-legged guy in a horse race.  Vendor tools need to get easier to use and be a lot more scalable.


With 3000 nodes, you need something that is scale-aware.  There is going to be lots of manual characterization. 


DK:  Doesn’t a CMDB help with workload characterization?


Ron K:  Yes, it is going to be essential to make this scale.  And some people will use one workload characterization file to manage everything – discover it once and apply it to every situation – but that works best with a more homogenous environment. 

But this isn’t perfect.  People’s assumptions of what is on a node versus what shows up in the CMDB are often out of sync.  Ideally, I want a capacity planning tool to feed this information into a CMDB.  Get people out of the business of doing workload characterization. 

Capacity planners need to understand what is driving CPU busy – if not, they will wildly overprovision.  Disk is often what causes them to die, or one bad link in the network.  In a global firm, it can be very difficult to determine these choke points.  Barring a world war, all companies are going to be global, and we need the right tools for that.


DK:  Do these initiatives change the role of a capacity planner? Eliminate it? 


Shane S:  That depends on the situation.  With PaaS, we have zero insight into workloads and performance – PaaS is stateless.  You can’t get performance metrics from them.  What you get is total CPU and total memory – that’s all.  You can’t do capacity planning like that.  With IaaS (infrastructure as a service), you need to get the vendor to let you put in your tools.  It can take time.  In the private cloud, you must do it.  In PaaS, you are reliant on the provider to supply the monitoring and performance tools you need for capacity planning.


Ron K:  The problem is one of scale. You can’t do it by hand.  You need tools.  There are some things you can get even if you only get totals. If you have a 4-CPU box and one CPU is routinely 100% busy, this is a loop or something that is just soaking up CPU, like a DOS command window on a virtual desktop.  If you can’t get a collector in there, get a mini-collector that goes in once a day and collects process consumption.  You can get information by comparing day to day – it may help to flush out what is just using too much resource.  You have to be sneakier as a capacity planner now.  I’m looking to vendor partners who come up with tools that help us manage the complexity and the scale.  But for now, we’re working to deploy these mini-collectors so we can at least point out the silliness.
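The day-to-day comparison Ron describes can be sketched as a diff over two daily snapshots of per-process CPU totals; the process names and the growth factor are arbitrary illustrations:

```python
def day_over_day(yesterday, today, growth_factor=2.0):
    """Compare two daily snapshots of per-process CPU totals and flag
    processes whose consumption grew by more than `growth_factor`,
    plus any consumers that appeared out of nowhere."""
    flagged = []
    for proc, cpu in today.items():
        prev = yesterday.get(proc, 0.0)
        if prev and cpu / prev > growth_factor:
            flagged.append(proc)
        elif not prev and cpu > 0:
            flagged.append(proc)
    return sorted(flagged)

suspects = day_over_day(
    {"app": 10.0, "scheduler": 5.0},
    {"app": 30.0, "scheduler": 6.0, "dos_window": 86400.0},
)
```

Even this crude comparison is enough to surface the idle DOS window soaking up a whole CPU.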


Shane S:  The stateless environment is a challenge – it is much harder to get the data.


Ron K:  Service providers want to charge you.  If 30% of your environments are doing something stupid, you are just wasting money.  Capacity planning is an issue of scale – there can be a lot of savings there.


DK:   What do you mean when you say these technologies have an “Achilles heel”?


Ron K:  The issue is ubiquitous web access – people think that anything that can be done on the web should be done on the web.  But how you choose to get it there can impact resource demand.  For anything you want to do, there are thousands of ways to do it.  You need to think about this, focus on efficiency, and select better.  They call it code for a reason: it isn’t obvious what is using a lot of resources, so coding choices can have a huge resource impact.  Choices must take into account the distances involved, especially for chatty applications.  We’re going to look back in 30 years and laugh at the code we are running now.  I told my mother back in 1970, “In the future, no one is going to read books anymore.  People will read off screens.”  In the future, people won’t be writing code.


The current manual method in IT takes too long to get things running. In 20 years, vendors will have this automated.  Users will be saying what they want the app to do.  Human interfaces, like Siri, will know what the rules are and enable the automation of applications, without IT coders. Those programs will quickly learn how to optimize the code.  We will always need capacity planning tools until the automation is excellent, and I put it to you that we will always need them for advanced analysis. Coding as a career has to end.  Data is just getting bigger – so the impact of bad coding decisions is going to get worse.


In reality, there are too many nodes to do this the way we used to do it.  Large scale automation in the future will help.  I believe corporate data centers will go away – everything will be on the web or in the cloud.   But then the tools will have to scale even more.  I think IaaS will be the software vendor’s next big challenge.  We need this because too many still see capacity planning simply as telling you how much to buy, not how efficient your systems can be.  Large scale vendors of services need to find a way to handle these problems. 


Capacity planning teams will shrink – automation will replace them.  The future is global firms.


Shane S:  Automation will eventually take over, but the vendors are extremely far away from that.  PaaS and IaaS don’t seem to be doing capacity planning properly, but it is in their interest to do this.


Ron K:  This is a big opportunity for a vendor and it would give better value for their customers too.  It might result in a new thing – CPaaS.   Cloud users need something that doesn’t require their cloud providers to install something – we need new tools which can use the data already available.    Both cloud providers and cloud users need more detail.  As a user of a cloud, you would want to know that another user is sucking up so much resource that it is impacting you.


DK:  Isn’t that the same kind of problem that we had when we first started sharing CPU resources?


Ron K:  Yes, and we still need the kind of data that shows you when this is happening.  “Hey, cloud vendor – why does my performance suck when I’m doing the same work as I was last hour?” This tool needs to exist. 


What we don’t have is time – there is so much to do, but never enough people to get it all done.  Tools could help, but not if we have to write them.  I recently converted my home systems to Macs and now, when software is updated, it is automatic – I don’t have to worry about it.  That is the future.  Upgrades should just happen.


DK: What’s the best way for CPs to adapt to these technologies? 


Ron K:   Take a step back and see how you spend your time.  Figure out what can be automated.  Do that.  If you don’t find ways to make capacity planning scalable, you won’t be able to get it done and answer questions fast enough to be relevant.  You need to be fast.


Shane S:  Automation is key.  And you need to understand the toolset that you have to make automation work.  Don’t rely on IaaS tools to work properly – validate them. Make your own automated alerts to weed out these problems. 



BMC Server Automation (BSA), part of the BMC BladeLogic Automation Suite, was recently listed by the US National Institute of Standards and Technology (NIST) as a SCAP (Security Content Automation Protocol, pronounced as S-CAP) validated product, which is a milestone in the history of BSA compliance. In the coming years SCAP will significantly impact the security market and the tools providing security management.


Since the day I joined BladeLogic, the product team, the Federal sales team and I have discussed when we should support SCAP. This security standard is an emerging area, with many aspects of security compliance still being discussed and debated in open communities. Last year was the inflection point, when the US Dept of Defense (DoD) started to mandate that federal agencies purchase only SCAP-validated products for their security management and compliance. That finalized the decision to go full speed at supporting SCAP and getting the NIST validation.


In the information assurance landscape today we have the content (CIS benchmarks, SOX, PCI, etc.) and the compliance tools (vulnerability scanners, patch management tools, configuration management systems, etc.). The content is written in ambiguous prose that results in multiple technical interpretations and proprietary implementations by security vendors. An enterprise has a host of devices in its infrastructure to secure and monitor (servers, applications, databases and networks), resulting in a high number of configuration settings and patches to deal with. Enterprises end up using a variety of specialized tools to identify security problems, which makes security management resource-intensive and prone to mistakes.

To add to a security team’s headache, there are now tons of new vulnerabilities being found every week, and more requirements to meet to provide evidence of compliance (standards, guidelines, regulations, etc.).




Around 2002, the Department of Homeland Security tasked NIST with developing a standardized approach for maintaining the security of Federal enterprise systems, which led to the birth of SCAP.

SCAP grew out of a set of well-established open standards for expressing, organizing, and communicating security related information. This protocol decouples the content from the tool (enabling interoperability between tools) and standardizes the way in which vulnerabilities and configurations issues are named, documented and reported.


The SCAP protocol is a suite of six XML specifications that provide:

Languages for defining your security policy and how to check your systems against that policy

Enumerations for a standardized naming system and associated dictionaries for documenting vulnerabilities

A scoring system for measuring each vulnerability and deriving a severity score that shows which vulnerabilities you need to worry about first
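The scoring component (CVSS, in SCAP 1.0) makes that prioritization mechanical: sort findings by base score. A tiny sketch with made-up IDs and scores:

```python
def prioritize(findings):
    """findings: list of (cve_id, cvss_base_score) on the 0-10 CVSS scale.
    Returns the IDs ordered worst-first, so teams know what to fix first."""
    return [cve for cve, score in sorted(findings, key=lambda f: -f[1])]

order = prioritize([("CVE-AAAA-0001", 4.3),
                    ("CVE-AAAA-0002", 9.8),
                    ("CVE-AAAA-0003", 7.5)])
```

Because the IDs and scores are standardized, any SCAP-validated tool can produce and consume the same ordering.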


BSA 8.2 supports SCAP 1.0 and has now been validated by NIST for that support.


Security teams using BSA 8.2 now have a common way to describe vulnerabilities and can use SCAP content from any vendor to process security compliance, report standardized results and then prioritize and remediate vulnerabilities.

Our strategic public sector customers are moving to BSA 8.2 to ensure their systems are compliant with SCAP checklists. Other U.S. federal agencies are also looking at BSA to meet their SCAP requirements. Though this standard was created for the Federal government, it also benefits the commercial market, and I foresee that in the coming years more and more commercial entities will begin to use SCAP-validated tools.


As I mentioned before, the SCAP standard is evolving as we speak, incorporating new XML specifications into the protocol to enable asset reporting, checklist reporting, a configuration scoring system, common remediation, and more. SCAP is positioned to become the Holy Grail of IT security management in the coming years.

A new era in security & compliance management is emerging and I am excited to lead BMC into it.


Some months ago, I began a series of articles in this blog about Solution Adoption that I never managed to finish.  I still have some things to write on that topic and so may still finish that series, but this time around I thought I'd try writing about something different. 


I work with many customers who struggle to build a strong business case for their own cloud, whether public or private.  Though, to be fair, it's normally not about the business case itself.  It tends instead to be about insufficient thoroughness of planning, or too little thought about value-add beyond purely rapid provisioning, or sometimes even just virtualization or plain infrastructure.  That's unsurprising in a world where the vast majority of people looking to build a cloud are doing so for the very first time.


My perhaps ambitious goal with this blog series is to help those planning to build a cloud solution and offer services to their internal or external customers, through an analysis of elements and good practices for building a sustainable business case for a cloud.  This blog reflects my personal experiences based on insight gained by working with customers and seeing first-hand what has and has not worked.  I would welcome scrutiny, comments and discourse along the way, so that we all may learn from those who have gone before us.


With introductions behind us, let's get started with our first round of discussion — goal definition.


So you want to build a cloud...


A contact at a service provider recently remarked that, while most organizations are still debating whether 'to cloud or not to cloud', others are racing ahead to get there first at the expense of good planning.  We'll enter the story of our cloud-planning protagonist (our reader) at the point where you have decided to take the plunge.  I ought to be clear up front that 'to cloud' is not a goal in and of itself; rather, it's the decision that asks you to define your goals.  So I would not ask the question of whether or not to build a cloud. If you accept the general industry consensus that, for nearly everyone, 'to cloud' is a foregone conclusion, then this blog will investigate what happens once that decision is finally taken.


This part is fairly straightforward. The questions can almost answer themselves at this stage, and are largely driven by the context of your organization. 


It may seem a trivial question, but one should ask: why is a cloud important to your organization?  Are you looking to transform your IT organization into a more business-aware, 21st century operation, to recreate it as a customer-focused service provider which can respond to business needs as quickly as they arise, perhaps even preempting them?  Or are you looking instead at pure cost savings, doing more with less, consolidating and retiring aging infrastructure, and achieving cost efficiencies through automation?  I would argue that these two points — transformation and cost savings — are not mutually exclusive, though I often see them treated as such. You should be conscious of how cloud is disruptive and, as such, will compel transformation.  Transformation will drive cost savings and vice versa.  People have asked me, "How do we transform in order to be ready for cloud?" Despite a few exceptions, I generally respond by saying: don't try.  Just start by building small.  By starting small and building some basic cloud capabilities, you put a stake in the ground toward which your organization can strive.  Waiting until you are transformed and ready means that many will never see a cloud.


Will you build a public or a private cloud? For a public cloud, be aware of the market and your would-be competition. Perhaps your customers expect you to offer a public cloud service and you need not disappoint, but understand that market forces will go a long way toward setting the price of your base service offering before you consider value-add services.  Even if the purpose of your public cloud is largely to meet customer expectations, this does not mean that you can build only a loss-leading solution.  Nothing about having a public cloud dictates that you cannot use the same technology and infrastructure to build a private cloud (you should of course find a management solution that supports this).  And with a private cloud, whether stand-alone or an extension of your public cloud, you can focus on cost savings and organizational transformation as mentioned above.  Indeed, I've even seen customers plan to extend their private cloud to provide a public cloud; this too makes sense: they want a mature capability for managing a cloud environment before making that capability a public offering.


Finally, you must answer the question: are you building a cloud as a cost centre or a profit centre?  Even for a private cloud, IT could conceivably turn a profit, at least from a purely internal accounting standpoint.  But this point is important: one cannot assume that a cloud is a panacea for IT cost containment, though with proper planning you can provide a solution for a broad range of IT ailments.  If you build your cost and pricing model with appropriate robustness, IT can begin to offer the enterprise a real partnership.  In building an internal, private cloud, one should plan a break-even accounting model where costs are distributed, in any number of ways, across the consumers or would-be consumers of your cloud capacity, something to be discussed in subsequent posts.
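As a sketch of the break-even idea: total cost in, usage-proportional charges out, nothing retained as profit. The consumer names and the metering unit here are hypothetical:

```python
def break_even_charges(total_cost, consumption):
    """Distribute the cloud's total cost across consumers in proportion to
    their usage (in whatever unit you meter), so charges sum back to cost
    exactly: a break-even model rather than a profit centre."""
    total_units = sum(consumption.values())
    return {consumer: total_cost * units / total_units
            for consumer, units in consumption.items()}

charges = break_even_charges(1000.0, {"hr": 1.0, "engineering": 3.0})
```

Real chargeback models weight many more factors (reserved capacity, tiers of service, shared overhead), but the invariant is the same: the charges reconcile to the cost.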


As I mention above, I work with many customers who, by the time I meet them, have failed to plan their cloud well. Many jump first, look later, over-investing in infrastructure before figuring out how to commercialize it or even how to manage it.  I’ve seen teams replaced for such lack of planning.  I’ve seen others legitimately worried for their jobs because their business case, as it were, was not well-considered or their cloud solution not well-planned.  Whether through this blog or through direct interaction with me or others at BMC, I want to help customers avoid such pitfalls and the accompanying desperation.


From within our Cloud Practice, BMC Consulting offers, in the form of a Cloud Planning Workshop, an in-depth assessment of these pertinent topics, collaborating with you to build an executable road map of your cloud capability, capacity, offerings and organizational changes with cost-benefit analysis of each stepwise investment.


In the coming posts, we'll talk about understanding your business, identifying patterns and dependencies, initial investments, operational costs, organizational change and accountability, and much more.  Until next time, please feel free to comment on this post and suggest any other initial questions that you believe should be asked.  Perhaps your comments will inform a subsequent post or at least spark some lively debate.


The world of automation has come a long way in the past 10 years, but the revolution is only just beginning. Virtualization was initially lauded as a way to maximize the utilization of hardware assets but is rapidly showing its real value around increasing business agility. Virtualization, in conjunction with automation and well-defined IT processes, has given rise to the game-changing behemoth that is Cloud Computing. Well-defined service offerings can be requested by business users “on demand” through simple-to-use, web-based interfaces and consistently delivered in a matter of minutes. A task that would previously have taken a world-class IT organization weeks if not months to complete can now be achieved in the same time it takes to make a trip to the canteen to get yourself a nice cup of coffee.


What has been achieved to date is outstanding. If you had shown this capability to your average IT manager a few years back, their jaw would have hit the floor! So, job done, right? Pats on the back and move on to the next big thing?


Hang on, not so fast! Is the job really done? The business user is happy they got their service up and running and understands they will pay an ongoing monthly fee for it, but how do they know that it’s being properly managed? You can’t just deploy a business service and not manage it, can you? How long before business users want the next level of information served up to them: am I really getting what I paid for?


This presents the IT department with many challenges that must be overcome.

“Day 2” management tools are typically silo-focused. By day 2 management I mean all the tasks that go on to manage the underlying infrastructure after it’s been provisioned: ongoing configuration management, patching, compliance, backups, and so on. Today’s automation tools allow me to patch a server or run a compliance check on a network device, but how do I relate this silo-based approach to the service-centric view that cloud adoption has fostered? Informing a business user that a Windows server (just one component of many) in their business service is compliant with some government or industry standard is meaningless to them. What about the service itself?


The answer is that automation tools must understand the services that are delivered. They must become service-aware. It must be possible to initiate automation at the service level and span multiple technologies. The questions that business users will be asking are: “Do my customer-facing financial services meet industry PCI regulations? Is the configuration of my ERP system secure? Have there been any changes which deviate from the ‘trusted’ end-to-end configuration of the service?” Automation tools need to answer these questions and be able to provide results back to the end users in a context which they are able to understand.
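A service-aware roll-up can be as simple as requiring every component check to pass before a service reports compliant. The service and component names below are made up for illustration:

```python
def service_compliant(component_results):
    """component_results: {component_name: passed?} for every piece of a
    business service (servers, network devices, databases, ...).
    The service is compliant only if every component is."""
    return all(component_results.values())

# Roll silo-level checks up to the service-centric answer users ask for.
report = {name: service_compliant(parts) for name, parts in {
    "customer-payments": {"web-tier": True, "app-tier": True, "db": False},
    "erp": {"app-tier": True, "db": True},
}.items()}
```

The hard part in practice is not the roll-up but knowing which components belong to which service; that is exactly the service model the tools must carry.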


It’s not just the expectations of the business user that are changing either. Hardware vendors are now shipping “cloud-ready hardware in a box”. It’s not just a server in a big tin box anymore; the networking and storage come pre-wired and configured too. Just look at the Cisco UCS platform! This converged infrastructure requires a new type of IT operator, one who needs to be an IT generalist rather than a specialist in just one area. Yes, IT still needs the in-depth specialist tools, but it also needs more usable, cross-silo solutions that help manage this converged infrastructure in a consistent and centralized way.


Automation tool vendors must rise to the challenge and support the needs of this new service-oriented, converged-infrastructure, cloud-enabled world of IT.  Automation has been the catalyst for business agility in the cloud revolution; its next goal is to help deliver business alignment.

The mark of a mature IT organization is effective change management. But all too often, updating change records is a neglected data entry chore, undermining the usefulness of the change record and raising the spectre of audit failure. To combat this, some of our automation solutions support a feature called "operator initiated change", whereby a Remedy ticket is automatically opened, updated and closed as part of making administrative changes. Role-based access control ensures the person making the change is an authorized administrator, so they are relieved of the tedium of updating the change ticket. Because changes still require approvals, a solid record of authorized changes is ensured. This works because the operator never has to change their focus from system administrator to Remedy user, in effect never changing their persona.


A persona-driven approach is the essence of closed-loop change management because it allows someone to stay "in character" while making changes, automatically updating the change record to accurately reflect real-world state. Software development has its own systems of change tracking for enhancement requests, defects, etc., and persona-driven approaches to closed-loop change management are an objective there too.


The DevOps trend has focused attention on improving the overall application change cycle (the DevOps cycle). We at BMC are now taking a persona-driven approach to change management during the application release process. Tasks planned and performed during the release process automatically update change records as they are executed (successfully or not, especially not). As a result, change records contain detailed results of task execution. We record not just operational changes but also correlate those changes with the software development changes represented in the release (i.e., this deployed application went through these pre-production stages, corresponds to this ITSM change history, contains fixes for these defects, and supports these user stories).


This achieves a complete "chain of custody" for the deployed application and connects CMMI with ITIL change disciplines. It provides forensic information for application support personnel should an issue arise, satisfies auditors and delights compliance officers.


Closed-loop change management in the application release process closes a huge part of the gap between Dev and Ops.

Share This:

What do customers want?  Working within product management, every release brings that question to the forefront.  For BMC’s Data Center Automation 8.2 release, we faced that challenge head-on. Do we go for dazzling new features designed to catch the attention of analysts and the press?  Or do we go deep into the product, focus on the existing feature set, and work to improve it to provide more flexibility and a better experience for our customers?  For 8.2, it was an easy decision.  If you want to craft a solution that serves our technical users now and into the future, at times you need to enhance the foundation.  Just as the most amazing house won’t stand up to an earthquake without a good foundation, neither will a product line if the fundamentals aren’t revisited and enhanced.



With DCA, the structure was already strong, so we looked at how we could improve what we had.  One key area was simplifying the work for our users: doing things better instead of just doing more things.  One way of doing this was focusing our attention on what our customers had told us and asked for around our existing features: RFEs, defects, issues, and so on.  This release is chock-full of improvements made in direct response to enhancement requests and fixes for customer issues.  Many DCA customers, especially on the BSA side, have gone through an upgrade from our 7.x to 8.x versions over the past few years, and many remember that some of the large new features (the new Eclipse UI, native patching for UNIX, integration of the provisioning UI, etc.) meant a learning curve for new customers; DCA 8.2 minimizes that problem.  The focus on improving existing features minimized radical changes in the product, allowing for a very logical and intuitive transition from older versions to DCA 8.2, so you can get Day 1 value from the improvements.


Now don’t get me wrong, there are major and exciting new things in this release.  With DCA 8.2 we have started the move to a consolidated reporting strategy (using Business Objects), beginning with consolidated database and network solutions (BDA didn’t have a reporting solution until this release).  BSA reporting has not yet made the transition to the consolidated framework, but it has undergone an overhaul that delivers huge performance and reliability improvements as well as great strides in ease of use.  BDA customers are going to be very happy to see a much more flexible and simpler way to create user-defined actions, and I think customers will also like BSA’s unified agent installer, which automates the deployment of agents natively in the product.  But talking with a lot of customers about the release, I think the thing people have been most excited about is that we have eliminated license enforcement of agents for BSA, removing the need for customers to register and deregister agents with the licensing portal: a huge simplification in the usage of the product.


Along with this focus on the foundation, DCA 8.2 continues down other paths the products have followed over the past few releases, especially simplifying virtual management and enabling cloud computing.  BSA has added support for several new hypervisors, including Microsoft Hyper-V, while BNA continues to add support for additional virtual switches and to improve its support for network pods and containers.


When planning a new release, it can be fun to code up dazzling new bells and whistles, and you will see a couple of these in 8.2, but this time we opted to build a world-class foundation instead.  If you’re an existing automation customer, I highly recommend getting more information on DCA 8.2, and if you’re not, I recommend learning about some of the capabilities the suite has.  Please check us out.

Share This:

I’ve participated in and listened in on some interesting discussions this week regarding cloud computing. The first was a discussion in which Massimo Re Ferrè attempted to create what he called the Cloud Magic Rectangle (more on that in a minute). The second was between Massimo and Randy Bias. While I didn’t track the entire conversation, one thing Randy said really stuck out: he mentioned the concept of organizations that lean forward vs. organizations that lean back.


The Cloud Magic Rectangle

Let’s first look at the Magic Rectangle. As you might have realized, the name pokes fun at Gartner’s Magic Quadrant concept. The goal was to bucket different Cloud solutions into 3 categories: Orchestrated Clouds, Policy Based Clouds, and Design for Fail Clouds. Each category has various characteristics based on the value proposition, how the cloud is built, and the benefits to the end user.


Generally speaking, Massimo has it right with what makes up each category. Where I take issue is his conclusion, where he lumps BMC in with the Orchestrated Cloud camp. The first release of BMC’s Cloud Lifecycle Management was very much an Orchestrated Cloud solution. But as our product has matured over the last 2½ years, we have shifted more towards a Policy-Based Cloud. I won’t bother to run through an exhaustive list, but features like public and private cloud support, APIs, scalability, multi-hypervisor support, integrated Layer 2 network support, integrated security, hardware agnosticism, virtual DC support, and policy-based placement have all been features of CLM for at least a year, if not since the product’s inception.


But beyond the error in categorization, something else struck me. Massimo told me he got feedback that BMC shouldn’t even have been included in the Orchestrated Cloud column (i.e., we’re not cloud or cloud management).  This is similar to other sentiments I’ve heard, where people have called us “legacy” vendors or “Cloud Washed Shite”. That was coupled with the argument that in order to be a “policy based cloud”, customers should be able to download a trial of your software and install it themselves, like they can with vCloud Director and Microsoft System Center 2012.


While these are interesting arguments, they couldn’t be farther from the truth. First, if you think building a cloud is as simple as downloading a trial software package and throwing it on some lab hardware, you’re not long for the cloud world. Sure, you can grab the install, build yourself a “Cloud,” and roll it out within your little siloed organization. But if you want an enterprise-wide solution that carries the baggage of the last 25+ years of client-server computing, you need the expertise and experience that an implementation partner can bring. In the end, our professional services aren’t about installing the software; they’re about designing a solution to meet your enterprise’s short- and long-term goals. And most importantly, about serving the needs of the business.


On the other point, I turn to a blog post I read by Randy Bias regarding complexity and simplicity in cloud building. An interesting point Randy made was that if you want to build Amazon Web Services (AWS) in 2012, you don’t try to build AWS as it is in 2012. Instead, you start simple and build AWS as it was in 2008 or 2009, then layer on more features by iterating on top of this foundation. BMC CLM helps customers build this foundation and then layer on the features they need in the future. Additionally, much of our product development is driven by customers’ requests for new features, and as our customers mature, we mature with them. Two years ago we had no capacity-aware placement; now CLM can determine which compute pools will provide the resources required to fulfill your request.


Forward or Backwards Leaning

And that brings me around to the other conversation I listened in on between Randy and Massimo. Randy mentioned that a difference in opinion between him and Massimo could be due to Randy dealing with more forward-leaning organizations vs. backward-leaning organizations. This is an important distinction, and one that is often lost in the “Cloud Wars”. Many pundits say “Enterprise IT needs to be like Netflix”. In principle, this is a good idea. Netflix has done some pretty amazing things in the cloud, operations, and development space, and they should be applauded for their work. In practice, everyone being Netflix is much more difficult. Netflix is a very forward-leaning organization. As a young company, they are not tied down with the baggage of years of “business as usual”.


Many IT organizations are backward-leaning, or at the very least middle-leaning. They have years and years of baggage that they are dragging with them on the cloud journey. If they’ve managed to shed some of this baggage, they’ve become more middle-leaning. But in the end, they need companies that understand where these IT organizations are coming from and how they can move forward while still managing this baggage on a day-to-day basis, as it is not going away anytime soon.


And that is the beauty of BMC’s overall solution and strategy. We can help optimize your IT organization by providing consulting services to start you on the Cloud journey, we can provide you products that are constantly maturing in the features and functionality required for Cloud, and we can manage all that baggage you are bringing along with you. In the end, it is about meeting the needs of the business, something BMC has helped companies do for years. Cloud doesn’t change this end goal, it simply changes the speed and way you achieve that goal.

Share This:

Now that we know we are not in the old school and not going to lose our jobs to automation, we are still not out of school.  Even with packaged automation solutions focused on configuration management, cloud provisioning, and event management, among others, what are you doing about the repetitive tasks and procedures that are not covered by these solutions?  The forest is full of underbrush, small trees, and bare branches that need to be cleared so the healthy trees can thrive and we can enjoy a walk through the woods when we are out of school.  These tasks may be small, may be administrative, or may be complex series of tasks in a procedure, but they consume staff time and, in many cases, higher-tier skills.  As a class of activities, they drag down the productivity of IT as much as activities more narrowly focused on a specific area, such as server configuration management.  Where a server configuration management solution can be packaged as a well-defined set of use cases, these unaddressed, repetitive procedures are not as easily packaged as a solution set.



The repetitive procedures we are talking about typically interact with multiple systems, even for simple use cases.  Thus, the term orchestration is an easy way to differentiate these use cases from specialized automation products.  Simple use cases may be execution of service requests to reset a password, increase an email quota, or add to a storage allocation.  These types of tasks occur frequently, consuming staff time, including tier 2 and possibly tier 3 resources, to deliver the requested service.  Their frequency adds up to a significant chunk of staff time that could be used to produce greater value to the business than these routine and rather mundane tasks required to maintain company operations.  BMC Atrium Orchestrator is a platform for automating the execution of these use cases that is flexible in adapting to your specific process requirements, such as creating and updating Incident tickets and Change Requests as appropriate for compliance with your enterprise governance policies.  The automatic documentation of the actions and results alone is a significant time saver and ensures audit requirements are met with no impact on staff time.


BMC Atrium Orchestrator offers the ability to embed the triggers for workflows into existing operator consoles or applications, reducing the number of different user interfaces required to deliver service requests and even eliminating operator intervention altogether.  Operators are more productive when they can focus on a single window into their responsibilities in IT.  One less console to learn and log into may seem like a small item until you observe the effort involved in switching back and forth, not to mention the fact that the context for the workflow execution can be automatically included in the trigger from the operator’s console.  Small improvements in operator efficiency are multiplied by the number of times every operator switches between multiple consoles every day.  Triggering a workflow automatically from within an application eliminates staff effort in cases where the execution procedure is well known, requiring no decision to initiate the process.  Orchestration workflows can be initiated from many sources, such as self-service request portals, events, and pending job starts, involving staff resources only when required for approvals or escalation.
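To make the embedded-trigger idea concrete, here is a hedged sketch in Python. The dispatcher and workflow names are illustrative inventions, not the Atrium Orchestrator API; the point is that the console passes the context along with the trigger, so the operator never re-enters it by hand:

```python
def trigger_workflow(workflow_name, context):
    """Hypothetical dispatcher: routes a trigger plus its context
    to the matching workflow handler."""
    handlers = {
        "reset_password": lambda ctx: f"password reset for {ctx['user']}",
        "increase_quota": lambda ctx: f"quota for {ctx['user']} raised to {ctx['mb']}MB",
    }
    return handlers[workflow_name](context)

# Triggered from within the operator's own console, the context
# (who, what, how much) rides along with the trigger itself.
print(trigger_workflow("reset_password", {"user": "jdoe"}))
```

The same dispatcher shape works whether the trigger comes from a console button, a self-service portal, or an event, which is why a single workflow platform can serve all three.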


Since you already know you will not lose your job by automating tasks in IT, you still have to prove the economic value of implementing an orchestration platform and the workflows for executing procedures.  Productivity is the key concept on which to focus: increasing the throughput of work accomplished with a fixed amount of resources.  The basic financial benefit of decreasing the time and effort to execute repetitive tasks with automation is a significant reduction in the staff capacity assigned to these responsibilities, as well as a greater share of these tasks that can be reliably handled by lower-tier resources.  In staff utilization terms this appears on paper as a reduction in costs: fewer staff employed for these tasks.  In reality, it translates to more tasks that can be executed by existing staff, meaning greater productivity for the current cost of operations, or a lower cost per request or action.  Staff freed from covering the workload of these tasks are now able to address other work, including delivering new projects faster or taking on projects that you previously did not have enough staff to take on.  This looks like free staff, because you are already paying the cost, but the financial benefits will come from increased revenue or reduced cost elsewhere in the business, resulting from IT’s increased capacity to create business efficiency and enable new revenue opportunities.


This basic approach can be applied to a wide range of repetitive tasks, procedures, and processes in your organization.  Examples include enriching information and documentation in Incident tickets, triage and remediation of events, ticket synchronization between multiple service desks, automatic execution of self-service requests, and even some provisioning use cases.  Obviously, there are automation products that do some tasks you may consider for orchestration, but these are highly specialized and only accomplish tasks focused on a single domain, like BMC Server Automation.  You may consider BMC Atrium Orchestrator for a few use cases covered by these specialized automation products, but orchestration will not be the most effective solution for the complete set of use cases covered by BMC Server Automation.  However, you can start by orchestrating a subset of those use cases.  Implementing BMC Server Automation later eliminates some BMC Atrium Orchestrator workflows, improves productivity in server operations, and opens new opportunities for more mature process orchestration, such as orchestrating server change job execution within your enterprise Change Management process, improving compliance with change documentation policies, and reducing audit violation penalties.


Obviously, you will be in school on the journey of automating IT for some time.  The workflows implemented with BMC Atrium Orchestrator are dynamic and change over time based on the arrival of other automation products in your IT management environment and your progress toward maturity in automation.  BMC Atrium Orchestrator will be with you throughout the journey, and I will be back to discuss the progression of orchestrating repetitive procedures in these more complex situations.  In the meantime, your homework is to identify those pesky, simple tasks repeated every day that you can start orchestrating now.

Share This:

After reading yet another assertion in the tech media that BMC was "old school" in regards to the new and wonderful world of virtualization and cloud, I decided that I needed to respond. This reminds me a lot of the current Republican primary campaign. If Gingrich can accuse Romney of being a rich snob often enough, or Romney can vigorously accuse Gingrich of being a wild and crazy guy, those associations might just stick. Regardless of your political leanings, I am sure you can admit to the somewhat amusing, but also disturbing, ability of scuttlebutt and unfounded accusations to label as well as, or even better than, the truth. So, I have taken it upon myself to strike out in the name of rational product comparisons, not schoolboy declarations that BMC is long in the tooth.




1. First of all, when is company age even relevant? Yes, BMC is 30 years old. Apple, Inc. is 35! Clearly, the age of a company is irrelevant. What matters is what they have done lately!


2. Second - look under the covers! Most large software companies have brought in talent through acquisitions. BMC acquired companies like Marimba, RealOps, ProactiveNet, BladeLogic, GridApp, and Coradiant to bring in new ideas, talent, and solutions. That is good business practice. In point of fact, the virtualization vendors (usually the "new school" to BMC's "old school") are no different. VMware inherited the Configuresoft acquisition from EMC, which bought the ten-year-old company in 2009; that makes it two years older than BladeLogic. Red Hat purchased companies like JBoss, Makara, and Gluster. Citrix acquired companies like XenSource and VMLogix. All were arguably good business decisions. It does, however, make the "old school" moniker look odd.


3. Third - is BMC keeping up? When you boil down the current arguments where "old school" comes up, you inevitably find the core argument that cloud and virtualization have made previous management software irrelevant. I don't deny that cloud and virtualization are changing the way we look at IT. In fact, I would argue that most big developments in IT have generated new requirements that software management vendors either respond to or they don't, and not responding means increasing irrelevance. However, the implicit assumption that BMC hasn't kept up with the trend isn't borne out by the facts. BMC's BladeLogic and ProactiveNet suites have the broadest virtualization support in the industry. BMC's Cloud Lifecycle Management is an industry-leading solution for private, public, and hybrid clouds. Customers can decide for themselves which solutions meet their requirements best, but don't let facile arguments take away from a productive, competitive discussion.


4. Finally, has virtualization really changed the field that much? I am a big fan of virtualization, and I embrace cloud computing as the best driver toward automation that I have seen in the last decade. However, I find the argument that virtualization requires a fundamental break with the past to be amusing at best, and generally uninformed or purposely misleading at worst. VIRTUALIZATION IS INFRASTRUCTURE. When did we forget that? Virtualization and cloud do amazing things to make more efficient and dynamic use of compute power and data center space. That doesn't mean the core IT management imperatives have fundamentally altered. I still need to manage devices and the software on them, be they physical or virtual. I still need to maintain my SLAs and reduce downtime. I still have to be compliant with security and regulatory policies. I still need to push my application updates consistently and quickly. Virtualization and cloud have changed the pace and introduced some new rules, but the game hasn't changed; we just have more pieces to play.



So, after all these points, shouldn't the important questions be "does it do what it says it does?", "does it accomplish my goals?", and, when comparing to another product, "does it do it better, cheaper, and/or faster?" Most importantly: "does this product enable me to meet my business objectives?"


My challenge to the media and the vendor community - let's do everyone, particularly customers with real needs, a big favor, and compare products on what actually matters to users, and not resort to weak labels like "old school".

Share This:

I know – I’ve already lost you. What the heck is a Sous Vide Supreme?  If you are an aficionado of Top Chef, you know that several of the contestants swore by this device, which allows you to vacuum-pack food and cook it at a constant temperature, thus delivering such perfection as a steak that is exactly the right color all the way through.  If you are a food lover, you know the difficulty of getting proteins cooked exactly the way you want consistently.  Sous vide is a way to do that, as well as to experiment with new textures, cook flavors in more intensely, and so on.  After seeing what you could do, I had to try it… but at the time, the units cost $1500.  That’s fine for a real chef, but home chefs don’t spend that kind of money.


So I jury-rigged it, using a candy thermometer, my Dutch oven and a lot of attention and patience.  I found it is extremely difficult to hold a temperature over time, ensuring the same results.  I also was not able to get a real vacuum, but boil-in bags worked reasonably well.  But this method was nit-picky and time-consuming.






On my last birthday, my kind sister gave me not only the Sous Vide Supreme, but the vacuum sealer, so everything would be perfect (and yes, the price came down).  Now, I can literally toss together my seasonings, vacuum up a protein and dump it into the machine and let it cook as long as 4 (or more) hours, without a care in the world.  The results will be perfect!


I get SO much more done because the cooking is on auto-pilot. And it’s better than just an automated Denise: the results are better than you can ensure by doing it yourself.  This made me think about automation.  To a long-time capacity and performance geek like me, automation always seemed to be a way of saying that my employer could replace me with software.  I knew this wasn’t the case, but there was that lingering fear that ceding control to software would leave me without a purpose.


But you have to start somewhere.  And with fewer people to do the work, you have to work smarter.  It began when I moved over to do UNIX performance.  I started with shell scripts, which were a pain to write and didn’t really give me the data the way I wanted to see it, but at least they were something.  Then we found a few freeware tools that were a bit better, which was good, as Korn shell programming was not my strong suit.  Finally, I caved in and we got what is now known as BMC Capacity Optimization.  I discovered that the things it did, it could do much better than I could, leaving me free to do the job no tool really can do: interpret the data, adding the politics and culture, context the software cannot obtain.


Once I let go of those day-to-day manual processes, I found that life was not only easier but also less boring.  Software automation does the things that you only found interesting the first time you did them.  It also lowers your risk: software rarely makes mistakes; people don’t have a perfect track record, particularly when on call.  Finally (and best of all), automation elevated my status.  When I was no longer down in the weeds, fussing over the “perfect steak,” I had time to create a “feast” of value and understand more clearly what the business needed from IT.  Automation is freedom!  Automation means career success, if you let it.


I’ll confess – none of what I was able to do tasted as great as those sous vide steaks or salmon.  But where would I have had the time or the mental cycles to experiment with food without automation?  Get out there and get some – you never know what interesting things you can do with the time you recover until you begin.

Share This:

-by Joe Goldberg, Lead Technical Marketing Consultant, Control-M Solutions Marketing, BMC Software


Traditional capacity planning collects a variety of data about infrastructure utilization, which is used to plot trends and plan for the future. The typical inputs are CPU utilization, memory utilization, queue lengths, execution times, and similar technical metrics.

More enlightened approaches include collecting end-to-end transaction timing, response times from synthetic transactions and attempting to inject business priorities and SLA metrics.


A powerful source of data that is frequently overlooked is workload automation.


According to Gartner, 70% of business processing done in your IT environment is still performed in batch, even in distributed systems. That batch is managed by your workload automation solution and there’s a ton of useful data that can inform your capacity planning activities with a business perspective that you would otherwise miss.


Perhaps you see a server, or even an entire pool, that is getting progressively busier. CPU Utilization, memory usage and number of processes are all going up at a pretty steady rate. Do you need to upgrade or add capacity? Maybe, but maybe not. Although volume is growing, it may turn out to be work that has a low priority and although utilization is growing, the timeframe over which these jobs/processes should run is quite long and the SLA is not in any danger of being breached within the upcoming planning cycle.


Conversely, if utilization metrics show no increase, it may be reasonable to assume all is well. However, it’s quite feasible that your workload automation tool is using settings and allocations from a previous capacity allocation cycle to restrict the amount of work it is processing even though the current amount of work has increased and SLAs are being missed.


The right combination of Workload Automation and Capacity Planning should result in an environment that is synergistic. As additional capacity is provisioned, the workload automation solution automatically increases its thresholds thus allowing more work to run. Conversely, as SLA achievement slack times decrease over time, that trending information is automatically injected into the next capacity planning cycle.
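One way to picture that synergy is a capacity check that weighs utilization growth against SLA slack rather than relying on either alone. This is a hedged sketch; the thresholds, parameter names, and decision rules below are invented for illustration:

```python
def needs_capacity(util_trend_pct_per_month, sla_slack_minutes, low_priority):
    """Flag a server pool only when utilization growth AND the batch
    SLA picture point the same way (illustrative thresholds)."""
    if util_trend_pct_per_month <= 0:
        # Flat utilization can still hide trouble: the scheduler may be
        # throttling work to an old allocation while SLAs are missed.
        return sla_slack_minutes < 0
    if low_priority and sla_slack_minutes > 60:
        # Growing, but the work is low priority and the window is long.
        return False
    return sla_slack_minutes < 30

print(needs_capacity(5.0, 120, low_priority=True))   # False: growth, ample slack
print(needs_capacity(0.0, -10, low_priority=False))  # True: flat, but SLAs missed
```

The two example calls mirror the article's two scenarios: growing utilization that does not actually endanger an SLA, and flat utilization that conceals a throttled, SLA-missing workload.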


Now you may be thinking – isn’t cloud computing going to free us from all this tedious planning?


You probably already knew the answer in your heart of hearts. In case you still want to hear it, here it is; there’s no free lunch! Virtual or cloud is NOT magic. Behind all that seemingly endless computing power, there is still a physical server and “seemingly” is exactly what it is – that’s why it’s called virtual. You don’t really think that Amazon just throws an endless amount of servers into the EC2 cloud, do you? After all, they do have a business to run and so does every future enterprise that plans to deploy any type of cloud.




In fact, Capacity Planning is actually getting more important and connecting it with the business perspective held by workload automation will only help it meet expectations and goals.


The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


Posted by Denise P. Kalm Feb 17, 2012
Share This:

We have a saying in the scuba diving world – “Plan your dive, then dive your plan.”  It sounds almost too simple, but in diving, living by these words ensures that you keep on…living.  As I read articles in the Divers Alert Network magazine – our go-to guide on diving safely – the one thing that keeps coming up is how the casualties referenced either failed to plan or deviated from their plans.  A frequent example is two guys who decide to go to Ginnie Springs in Florida to do some cave diving; never mind that they aren’t certified, don’t have the special equipment, and have no idea what they are up against. They have heard about the gin-clear waters and want to see what it is like.  We rarely hear their stories, because this is a recipe for disaster.  The other common case is what happens when the situation changes and the diver abandons a well-thought-out dive.  If you’ve seen the film “Open Water,” you know what happens there.  If your dive is to be 30 minutes, you surface at 30 minutes, even if you find something really cool out there.  You don’t hang out longer, figuring that the boat will simply wait.  (Yes, it should, but do you want to take the chance?)


Divers spend a lot of time training so they can earn their C-card, the certificate that entitles them to rent air.  Unfortunately, that training is just a beginning and not a plan for real-world diving.  Not only do you need that training, but you need to continue to learn and then practice your learning with rigor.  If you don’t check your equipment and that of your buddy, there will be a time when you regret it.



So what does this have to do with cloud?  I’m so glad you asked.  As I watch people talk about cloud, I too often see a failure to follow the diving mantra.  Caught up in the financial benefits of cloud, too many rush into it without looking into how they will ensure SLAs in a cloud environment.  Customers won’t cut you any slack if performance is worse after you move. They don’t care that it saved you money; they just want service.  Your business can suffer “decompression sickness” if you don’t plan to manage the applications you migrate.


Cloud managers can also get distracted – just like divers – and deviate from their plans (when they have them).  Again, the results can be fatal to your business.  There are high paid consultants who can come in and give your business “decompression treatments” but by then, the patient is in trouble.

So what should you do?  Learn from divers and take the time to do it right.


  1. Get your C-card - Learn enough to evaluate what cloud can provide for you and what it will cost.  Which applications belong there? Which ones don’t?  Private, hybrid or public?
  2. Plan your dive - Plan your cloud migration carefully, considering not just what you want to do from an application basis but also how you will manage it.
  3. Dive your plan – Execute as you planned.  If situations come up that require a course-correction, go back to your planning exercise and adjust it with the same diligence and discussion you performed the first time.  Take the time to do it right.



As a Master Scuba Diver, I apply these lessons to many aspects of my life.  Learn from the living divers and make your cloud migration a business-enhancer. Don’t get bent!
