If you’re a BMC Database Automation (BDA) user, you’ve already discovered the efficiencies and cost savings that come from automated provisioning and patching of your databases. However, there are probably lots of other database administration tasks you need to perform frequently, such as creating backups or setting up user accounts, that could be made easier and more efficient.


To help you automate these kinds of tasks, BDA includes an incredibly useful feature called Custom Actions. A Custom Action allows you to execute administrative tasks on a target server in the context of BDA’s knowledge of the database environment and includes a rich set of support features.


Custom Actions are script-based, run on Linux, UNIX and Windows, and can contain pretty much any content you like. For example, if your action installs an agent on a server, you can include the agent installer package as part of the action’s content. BDA also provides a useful set of environment variables for your action to use, including things like the Oracle home and SID of the target, patch levels, the install user, and many others.
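
For illustration, here is a minimal sketch of what the script behind a Custom Action might look like, written in Python. The environment variable names (ORACLE_HOME, ORACLE_SID, INSTALL_USER) and the bundled file name are assumptions for the example, not BDA's documented variables; a real action would use whatever names your BDA version exposes.

```python
#!/usr/bin/env python
"""Hypothetical BDA Custom Action: run a bundled backup script against the target.

The environment variable names below are illustrative assumptions, not the
documented BDA variable names -- substitute the ones your BDA version provides.
"""
import os
import subprocess
import sys

def main():
    # Context assumed to be passed to the action via the environment.
    oracle_home = os.environ.get("ORACLE_HOME")
    oracle_sid = os.environ.get("ORACLE_SID")
    install_user = os.environ.get("INSTALL_USER", "oracle")

    if not oracle_home or not oracle_sid:
        print("Required environment variables are missing; aborting.")
        return 1
    print(f"Target: {oracle_sid} (ORACLE_HOME={oracle_home}, install user={install_user})")

    # Any content bundled with the action (installer packages, SQL scripts,
    # config files) is assumed to sit alongside this script.
    action_dir = os.path.dirname(os.path.abspath(__file__))
    backup_script = os.path.join(action_dir, "backup_schema.sql")

    # Run the bundled script against the target instance.
    cmd = [os.path.join(oracle_home, "bin", "sqlplus"),
           "-S", "/ as sysdba", "@" + backup_script]
    result = subprocess.run(cmd, capture_output=True, text=True)

    # Output printed here is assumed to be captured in the action's job output.
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```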


You can run your actions on one or more nodes, and you can schedule them to be run in the future or on a recurring basis. You can also create a menu option for your action in the BDA user interface so that users run it from the BDA Grid just like provisioning, and, like provisioning, it can have its own pre-verification phase.

 


Because Custom Actions are subject to BDA’s Role-Based Access Controls (RBAC), you can tightly control who runs your action and where it runs. You can limit your action to specific domains, operating system types, database versions and more.

 

As an added bonus, you can insert your action into a provisioning workflow as a pre- or post-script, guaranteeing that it gets run whenever someone provisions a database.

 

Try automating a few of your favorite DBA tasks with Custom Actions and you'll quickly see how they can make your workday more efficient.



What is computer performance?  Those of us in the field think we know what it means, but do we really?  Is the user’s view of performance what we actually measure? 

 

I found myself pondering this question while talking to one of my computer-illiterate friends.  He's super-smart, but has no patience for learning the technical underpinnings of his PC.  I can relate – I don't actually know how my flat-panel TV works, and have only a vague notion of how my car moves me from one place to another.  I just expect them to work.  That's how he feels about PCs.  When he is browsing, opening a program or trying to print, he blames the PC itself for any slowdowns.  I found him searching for faster machines, using the GHz rating to determine relative speed.

 

Any performance analyst worth their salt is shaking their head right now.  Clock rating is only one of many components we need to look at to understand performance.  We pat ourselves on the back; we are so much more knowledgeable than the average user.  But do we have tunnel vision too?  Does our pool of data include all aspects of the end-user experience?


 

We work in silos of data ourselves.  My friend's silo is very limited – he sees only one piece of hardware, and one aspect of that hardware, as the problem.  But when I did performance for a living, we weren't all that much better. We had a metric called response time (yes, mainframes measure that), but it wasn't really what the user saw.  That number spanned from the moment a request arrived at the mainframe to the moment the response was sent back out. The back-end network, any other servers involved, and the internet were completely ignored.

 

First, we need to know what we are measuring and what we should be measuring.  We want to clock from the moment the end user hits "enter" to the moment a response appears on whatever device he chose.  Fortunately, there are solutions that can simulate this or measure actual user interactions. We simply have to employ them to get a real number.
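
As a simplified illustration of that first point, a synthetic transaction can be as small as a script that clocks a request from the moment it is sent until the full response arrives, run from wherever the user actually sits. The URL and threshold below are placeholders; real monitoring solutions do this at scale and from many locations, but the principle is the same.

```python
"""Minimal synthetic-transaction timer: measure what the user actually waits for.
The URL and threshold are placeholders for this sketch."""
import time
import urllib.request

URL = "https://www.example.com/"   # placeholder endpoint
THRESHOLD_SECONDS = 2.0            # what the business considers "fast enough"

def time_transaction(url: str) -> float:
    """Return elapsed seconds from 'enter' (request sent) to full response read."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read()            # wait for the entire payload, as a user would
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = time_transaction(URL)
    status = "OK" if elapsed <= THRESHOLD_SECONDS else "SLOW"
    print(f"{URL}: {elapsed:.2f}s [{status}]")
```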

 

Second, we have to understand what a transaction is to our user.  Not what we think of it as in IT terms, but the real business transaction that we deliver.  It helps to have a way to map it – which servers does it traverse?  Which networks? What data stores does it need?  We have always tried to measure the components of response time, the “speeds and feeds,” or more accurately, the “using and the waiting times,” but now, we can’t understand where the problem is unless we know the transaction path.   Again, there are tools that help you build a CMDB, automatically discovering the assets and relationships.  But you have to know that this needs to be done.

 

Finally, you have to move all of this to a proactive approach, where you set thresholds and monitor, so you can detect and repair issues before the user sees them (or buys a new PC because his is "too slow").   You need to do this because most users blame the owners of a web site or program for their performance woes, not their PC.  And that means they blame you.  And understand this – cloud will not fix this problem for you; it only makes it more difficult.  Get this right now, using the right automation tools, so you can limit those panicked help desk calls.  Be a performance hero by understanding what your users mean by performance.



I recently had a chance to chat with Ron Kaminski, ITS Sr. Consultant, and Shane Z. Smith, their capacity planner focused on web site and data center performance.  Ron had previously written a wonderful paper on this topic and I wanted to learn more.

 

DK:   You previously wrote a paper about these concerns, but not everyone had a chance to read it. And things may have changed. Could you give us some insights into what people are missing as they approach SaaS and Cloud?

 

Ron K:  People think you don't need to bother with tools – just throw more hardware at the problem.  But they miss why they have a capacity issue.  As an example, in a file server farm, you might find that the real work only takes up 1% of the resources used.  Large amounts of resources are devoted to things like virus scanners, a huge consumer.  No one expects virus scanners to take up 70% of the machine.  Infrastructure decisions are being made without understanding how much (or how little) of the CPU utilization is for real work.

 

Kimberly-Clark has sites everywhere in the world, so when you schedule backups or maintenance based on North American time, a good window in the US may be peak usage time on the other side of the globe.   To manage this effectively, you need to be able to differentiate between real work and support work.  People who just look at total CPU don't see that.
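
A toy sketch of the kind of workload characterization Ron describes: bucket per-process CPU into business work versus support work such as virus scanning or backups, so that total CPU stops hiding the split. The process-name patterns and sample data below are invented for the example, not anyone's production classification rules.

```python
"""Toy workload characterization: split CPU consumption into business work
versus support work (scanners, backups, etc.). Patterns and data are invented."""
from collections import defaultdict

# Assumed mapping from process-name substrings to workload buckets.
WORKLOAD_PATTERNS = {
    "scan": "virus scanning",
    "backup": "backup",
    "oracle": "business application",
    "java": "business application",
}

def characterize(process_samples):
    """process_samples: iterable of (process_name, cpu_seconds)."""
    totals = defaultdict(float)
    for name, cpu_seconds in process_samples:
        bucket = "other/support"
        for pattern, workload in WORKLOAD_PATTERNS.items():
            if pattern in name.lower():
                bucket = workload
                break
        totals[bucket] += cpu_seconds
    return dict(totals)

if __name__ == "__main__":
    samples = [("oracle_pmon", 120.0), ("avscanner", 840.0),
               ("backup_agent", 95.0), ("javaw", 60.0), ("explorer", 15.0)]
    totals = characterize(samples)
    grand_total = sum(totals.values())
    for workload, cpu in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{workload:22s} {cpu:8.1f}s  {100 * cpu / grand_total:5.1f}%")
```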

 

Shane S:  Without tools based on workload characterization, we wouldn’t have caught these issues.  And even trying to get the data can be a problem.  You can’t look at data at this level in PaaS (platform as a service), so that’s a problem. And SaaS vendors require proof that performance problems aren’t at your end.  You need tools that help you do that. 

 

Ron K:  We needed to create our own collector, because we have a lot of virtual terminals.  Did you know that if you leave a DOS window open, it burns 100% of the CPU?  People aren’t aware of this impact.   We need the ability to detect what process is causing the problem and then pass rules that minimize the impact.  As an example, some screen savers can take 100% of the CPU too.  By banning selected programs or behaviors, we have recovered as much as 40 CPUs worth of resources.  You need a tool to get to the details.  Otherwise, you are like a one-legged guy in a horse race.  Vendor tools need to get easier to use and be a lot more scalable.

 

With 3,000 nodes, you need something that is scale-aware.  There is going to be a lot of manual characterization.

 

DK:  Doesn’t a CMDB help with workload characterization?

 

Ron K:  Yes, it is going to be essential to make this scale.  Some people will use one workload characterization file to manage everything – discover it once and apply it to every situation – but that works best with a more homogeneous environment.

But this isn’t perfect.  People’s assumptions of what is on a node versus what shows up in the CMDB are often out of sync.  Ideally, I want a capacity planning tool to feed this information into a CMDB.  Get people out of the business of doing workload characterization. 

Capacity planners need to understand what is driving CPU busy – if not, they will wildly overprovision.  Disk is often what kills them, or one bad link in the network.  In a global firm, it can be very difficult to determine these choke points.  Barring a world war, all companies are going to be global, and we need the right tools to support that.

 

DK:  Do these initiatives change the role of a capacity planner? Eliminate it? 

 

Shane S:  That depends on the situation.  With PaaS, we have zero insight into workloads and performance – PaaS is stateless.  You can't get performance metrics from them.   What you get is total CPU and total memory – that's all.  You can't do capacity planning like that.  With IaaS (infrastructure as a service), you need to get the vendor to let you put in your tools.  It can take time.  In the private cloud, you must do it.  In PaaS, you are reliant on the PaaS vendor to provide the monitoring and performance tools you need to get the information required for capacity planning.

 

Ron K:  The problem is one of scale. You can't do it by hand.  You need tools.  There are some things you can get even if you only get totals. If you have a 4-CPU box and one CPU is routinely 100% busy, that's a loop or something that is just soaking up CPU, like a DOS command window on a virtual desktop.  If you can't get a collector in there, get a mini-collector that goes in once a day and collects process consumption.  You can get information by comparing day to day – it may help flush out what is simply using too much resource.  You have to be sneakier as a capacity planner now.  I'm looking to vendor partners to come up with tools that help us manage the complexity and the scale.  But for now, we're working to deploy these mini-collectors so we can at least point out the silliness.
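
A rough sketch of the "mini-collector" idea: once a day, snapshot per-process CPU with the cross-platform psutil library and diff against yesterday's snapshot to flush out loops and runaway screen savers. The snapshot file name and sampling approach are assumptions for illustration, not the actual collector described in the interview.

```python
"""Once-a-day mini-collector sketch: snapshot per-process CPU times and compare
with yesterday's snapshot to spot processes soaking up CPU. Requires psutil."""
import json
import os
import psutil

SNAPSHOT_FILE = "process_cpu_snapshot.json"   # assumed location

def collect():
    """Return {process name: cumulative CPU seconds} for all visible processes."""
    totals = {}
    for proc in psutil.process_iter(["name", "cpu_times"]):
        try:
            name = proc.info["name"] or "unknown"
            cpu = proc.info["cpu_times"]
            totals[name] = totals.get(name, 0.0) + cpu.user + cpu.system
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return totals

def compare(previous, current, top_n=10):
    """Print the processes whose cumulative CPU grew the most since yesterday."""
    growth = {name: cpu - previous.get(name, 0.0) for name, cpu in current.items()}
    for name, delta in sorted(growth.items(), key=lambda kv: -kv[1])[:top_n]:
        print(f"{name:30s} +{delta:10.1f} CPU seconds")

if __name__ == "__main__":
    today = collect()
    if os.path.exists(SNAPSHOT_FILE):
        with open(SNAPSHOT_FILE) as f:
            yesterday = json.load(f)
        compare(yesterday, today)
    with open(SNAPSHOT_FILE, "w") as f:
        json.dump(today, f)
```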

 

Shane S:  The stateless environment is a challenge – it is much harder to get the data.

 

Ron K:  Service providers want to charge you.  If 30% of your environments are doing something stupid, you are just wasting money.  Capacity planning is an issue of scale – there can be a lot of savings there.

 

DK:   What do you mean when you say these technologies have an "Achilles' heel"?

 

Ron K:  The issue is ubiquitous web access – people think that anything that can be done, can and should be done on the web.  But how you choose to get it there can impact resource demand.  Once you decide to do it, there are thousands of ways to do it.  You need to think about this, focus on efficiency and select better.  They call it code for a reason.  It isn't obvious what is using a lot of resources, so choices in coding can have a huge resource impact.  Choices must take into account the distances, especially for chatty applications.  We're going to look back in 30 years and laugh at the code we are running now.  I told my mother back in 1970, "In the future, no one is going to read books anymore.  People will read off screens."  In the future, people won't be writing code.

 

The current manual method in IT takes too long to get things running. In 20 years, vendors will have this automated.  Users will be saying what they want the app to do.  Human interfaces, like Siri, will know what the rules are and enable the automation of applications, without IT coders. Those programs will quickly learn how to optimize the code.  We will always need capacity planning tools until the automation is excellent, and I put it to you that we will always need them for advanced analysis. Coding as a career has to end.  Data is just getting bigger – so the impact of bad coding decisions is going to get worse.

 

In reality, there are too many nodes to do this the way we used to do it.  Large scale automation in the future will help.  I believe corporate data centers will go away – everything will be on the web or in the cloud.   But then the tools will have to scale even more.  I think IaaS will be the software vendor’s next big challenge.  We need this because too many still see capacity planning simply as telling you how much to buy, not how efficient your systems can be.  Large scale vendors of services need to find a way to handle these problems. 

 

Capacity planning teams will shrink – automation will replace them.  The future is global firms.

 

Shane S:  Automation will eventually take over, but the vendors are extremely far away from that.  PaaS and IaaS don’t seem to be doing capacity planning properly, but it is in their interest to do this.

 

Ron K:  This is a big opportunity for a vendor and it would give better value for their customers too.  It might result in a new thing – CPaaS.   Cloud users need something that doesn’t require their cloud providers to install something – we need new tools which can use the data already available.    Both cloud providers and cloud users need more detail.  As a user of a cloud, you would want to know that another user is sucking up so much resource that it is impacting you.

 

DK:  Isn’t that the same kind of problem that we had when we first started sharing CPU resources?

 

Ron K:  Yes, and we still need the kind of data that shows you when this is happening.  “Hey, cloud vendor – why does my performance suck when I’m doing the same work as I was last hour?” This tool needs to exist. 

 

What we don't have is time – there is so much to do, but never enough people to get it all done.  Tools could help, but not if we have to write them.   I recently converted my home systems to Macs and now, when software is updated, it is automatic – I don't have to worry about it.  That is the future.  Upgrades should just happen.

 

DK: What's the best way for capacity planners to adapt to these technologies?

 

Ron K:   Take a step back and see how you spend your time.  Figure out what can be automated.  Do that.  If you don't find ways to make capacity planning scalable, you won't be able to get it done and answer questions fast enough to be relevant.  You need to be fast.

 

Shane S:  Automation is key.  And you need to understand the toolset that you have to make automation work.  Don’t rely on IaaS tools to work properly – validate them. Make your own automated alerts to weed out these problems. 




BMC Server Automation (BSA), part of the BMC BladeLogic Automation Suite, was recently listed by the US National Institute of Standards and Technology (NIST) as a SCAP (Security Content Automation Protocol, pronounced as S-CAP) validated product, which is a milestone in the history of BSA compliance. In the coming years SCAP will significantly impact the security market and the tools providing security management.

 

Since the day I joined BladeLogic, the product team, the Federal sales team, and I have discussed when we should support SCAP. This security standard is an emerging area, with many aspects of security compliance still being discussed and debated in open communities. Last year was the inflection point, when the US Dept of Defense (DoD) started to mandate that federal agencies purchase only SCAP-validated products for their security management and compliance. That finalized the decision to go full speed at supporting SCAP and getting the NIST validation.

 

In the information assurance landscape today, we have the content (CIS benchmarks, SOX, PCI, etc.) and the compliance tools (vulnerability scanners, patch management tools, configuration management systems, etc.). The content is written in ambiguous prose that results in multiple technical interpretations and proprietary implementations by security vendors. An enterprise has a host of devices in its infrastructure to secure and monitor (servers, applications, databases and networks), resulting in a high number of configuration settings and patches to deal with. They end up using a variety of specialized tools to identify security problems, which makes security management resource-intensive and prone to mistakes.

To add to a security team's headache, there are now tons of new vulnerabilities being found every week, and more requirements to meet to provide evidence of compliance (standards, guidelines, regulations, etc.).

 


 

Around 2002, the Department of Homeland Security tasked NIST with developing a standardized approach for maintaining the security of Federal enterprise systems, which led to the birth of SCAP.

SCAP grew out of a set of well-established open standards for expressing, organizing, and communicating security-related information. The protocol decouples the content from the tool (enabling interoperability between tools) and standardizes the way in which vulnerabilities and configuration issues are named, documented and reported.

 

The SCAP protocol is a suite of six XML specifications that provide –

Languages for defining your security policy and how to check your systems using that policy

Enumerations for a standardized naming system and associated dictionary for documenting vulnerabilities

A scoring system for measuring vulnerabilities and deriving severity scores, to show which vulnerabilities you need to worry about first.
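
To make that third point concrete, here is a small sketch of what "use the severity score to decide what to worry about first" looks like in practice. The findings and identifiers below are invented placeholder data; the scores stand in for the CVSS-style base scores used by SCAP's scoring specification.

```python
"""Prioritizing findings by severity score, in the spirit of SCAP's scoring
specification (CVSS). The findings below are invented placeholder data."""

# Each finding: (placeholder identifier, affected host, CVSS-style base score 0-10)
findings = [
    ("CVE-0000-0001", "web01", 9.3),
    ("CVE-0000-0002", "db02", 4.3),
    ("CVE-0000-0003", "web01", 7.5),
    ("CVE-0000-0004", "app03", 2.1),
]

def severity_label(score: float) -> str:
    """Rough severity banding by base score."""
    if score >= 7.0:
        return "HIGH"
    if score >= 4.0:
        return "MEDIUM"
    return "LOW"

# Worry about the highest scores first.
for cve, host, score in sorted(findings, key=lambda f: -f[2]):
    print(f"{severity_label(score):6s} {score:4.1f}  {cve}  on {host}")
```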

 

BSA 8.2 supports SCAP 1.0 and is now validated by NIST for SCAP 1.0 support - http://nvd.nist.gov/scapproducts.cfm#scapproducts

 

Security teams using BSA 8.2 now have a common way to describe vulnerabilities and can use SCAP content from any vendor to process security compliance, report standardized results and then prioritize and remediate vulnerabilities.

Our strategic public sector customers are moving to BSA 8.2 to ensure their systems are compliant with SCAP checklists. Other U.S. federal agencies are also looking at BSA to meet their SCAP requirements. Though this standard was created for the Federal government, it also benefits the commercial market, and I foresee that in the coming years more and more commercial entities will begin to use SCAP-validated tools.

 

As I mentioned before, the SCAP standard is evolving as we speak and is incorporating new XML specifications into the protocol to enable asset reporting, checklist reporting, configuration scoring, common remediation, and more. SCAP is positioned to become the Holy Grail of IT security management in the coming years.

A new era in security & compliance management is emerging and I am excited to lead BMC into it.



Some months ago, I began a series of articles in this blog about Solution Adoption that I never managed to finish.  I still have some things to write on that topic and so may still finish that series, but this time around I thought I'd try writing about something different. 

 

I work with many customers who struggle to build a strong business case for their own cloud, whether public or private.  Though, to be fair, it's normally not about the business case.  It tends instead to be about insufficiently thorough planning, or too little thought about value-add beyond rapid provisioning, or sometimes even just virtualization or plain infrastructure. That is unsurprising in a world where the vast majority of people looking to build a cloud are doing so for the very first time.

 

My perhaps ambitious goal with this blog series is to help those planning to build a cloud solution and offer services to their internal or external customers, through an analysis of the elements and good practices for building a sustainable business case for a cloud.  This blog reflects my personal experiences, based on insight gained by working with customers and seeing first-hand what has and has not worked.  I would welcome critical comments and discourse along the way, so that we all may learn from those who have gone before us.

 

With introductions behind us, let's get started with our first round of discussion — goal definition.

 

So you want to build a cloud...

 

A customer at a service provider recently said in conversation that, while most organizations are still debating whether 'to cloud or not to cloud', others are racing ahead to get there first at the expense of good planning.  We'll enter the story of our cloud-planning protagonist (our reader) at the point where you have decided to take the plunge.  I ought to be clear up front that 'to cloud' is not a goal in and of itself; rather, it's a decision that then asks you to define your goals.  So I would not ask the question of whether or not to build a cloud. If you accept the general industry consensus that, for nearly everyone, 'to cloud' is a foregone conclusion, then this blog will investigate what happens once the decision is finally taken.

 

This part is fairly straightforward. The questions can almost answer themselves at this stage, and are largely driven by the context of your organization. 

 

It may seem a trivial question, but one should ask: why is a cloud important to your organization?  Are you looking to transform your IT organization into a more business-aware, 21st century operation, to recreate it as a customer-focused service provider which can respond to business needs as quickly as they arise, perhaps even preempting them?  Are you looking instead at pure cost savings, doing more with less, consolidating and retiring aging infrastructure, and achieving cost efficiencies through automation?  I would argue that these two points — transformation and cost savings — are not mutually exclusive, though I often see them treated as such. You should be conscious of how cloud is disruptive and, as such, will compel transformation.  Transformation will drive cost savings and vice versa.  People have asked me "how do we transform in order to be ready for cloud?" Despite a few exceptions, I generally respond to that by saying — 'don't try'.  Just start by building small.  By starting small and building some basic cloud capabilities, you put a stake in the ground toward which your organization can strive.  Waiting until you are transformed and ready means that many will never see a cloud.

 

Will you build a public or a private cloud? For a public cloud, be aware of the market and your would-be competition. Perhaps your customers expect you to offer a public cloud service and you need not disappoint, but understand that market forces will go a long way toward setting the price of your base service offering before you consider value-add services.  Even if the purpose of your public cloud is largely to meet customer expectations, this does not mean you should build only a loss-leading solution.  Nothing about having a public cloud dictates that you cannot use the same technology and infrastructure to build a private cloud (you should of course find a management solution that supports this).  And with a private cloud, whether stand-alone or an extension of your public cloud, you can focus on cost savings and organizational transformation as mentioned above.  Indeed, I've even seen customers plan to extend their private cloud to provide a public cloud; this too makes sense: they want a mature capability for managing a cloud environment before making that capability a public offering.

 

Finally, you must answer the question: are you building a cloud as a cost centre or a profit centre?  Even for a private cloud, IT could conceivably turn a profit, at least from a purely internal accounting standpoint.  But this point is important. One cannot assume that a cloud is a panacea for IT cost containment, though with proper planning you can provide a solution for a broad range of IT ailments.  If you build your cost and pricing model with appropriate robustness, IT can begin to offer the enterprise a real partnership.  In building an internal, private cloud, one should plan a break-even accounting model in which costs are distributed, in any number of ways, among the consumers or would-be consumers of your cloud capacity, something to be discussed in subsequent posts.
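
As a simple illustration of the break-even idea, the arithmetic can be as basic as the sketch below: recover the cloud's total annual run cost from its consumers in proportion to the capacity they reserve. The figures and the allocation key are invented; real models get more sophisticated (reserved versus used capacity, service tiers, and so on).

```python
"""Break-even chargeback sketch: distribute the cloud's annual cost across
consumers in proportion to reserved capacity. All figures are invented."""

annual_cloud_cost = 1_200_000.0   # hardware, software, facilities, staff (example)

# Reserved capacity per consuming business unit, in arbitrary "units" (e.g. VMs).
reserved_units = {
    "Finance": 120,
    "Manufacturing": 300,
    "Sales": 80,
    "R&D": 500,
}

total_units = sum(reserved_units.values())
rate_per_unit = annual_cloud_cost / total_units   # break-even unit rate

print(f"Break-even rate: {rate_per_unit:,.2f} per unit per year\n")
for unit, reserved in reserved_units.items():
    charge = reserved * rate_per_unit
    print(f"{unit:15s} {reserved:5d} units  -> {charge:12,.2f}")

# Sanity check: charges sum back to the annual cost (break-even, no profit).
assert abs(sum(r * rate_per_unit for r in reserved_units.values())
           - annual_cloud_cost) < 0.01
```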

 

As I mention above, I work with many customers who, by the time I meet them, have failed to plan their cloud well. Many jump first and look later, over-investing in infrastructure before figuring out how to commercialize it or even how to manage it.  I've seen teams replaced for such lack of planning.  I've seen others legitimately worried for their jobs because their business case, as it were, was not well considered or their cloud solution not well planned.  Whether through this blog or through direct interaction with me or others at BMC, I want to help customers avoid such pitfalls and the accompanying desperation.

 

From within our Cloud Practice, BMC Consulting offers, in the form of a Cloud Planning Workshop, an in-depth assessment of these pertinent topics, collaborating with you to build an executable road map of your cloud capability, capacity, offerings and organizational changes with cost-benefit analysis of each stepwise investment.

 

In the coming posts, we'll talk about understanding your business, identifying patterns and dependencies, initial investments, operational costs, organizational change and accountability, and much more.  Until next time, please feel free to comment on this post and suggest any other initial questions that you believe should be asked.  Perhaps your comments will inform a subsequent post or at least spark some lively debate.



The world of automation has come a long way in the past 10 years, but the revolution is only just beginning. Virtualization was initially lauded as a way to maximize the utilization of hardware assets but is rapidly showing its real value in increasing business agility. Virtualization, in conjunction with automation and well-defined IT processes, has given rise to the game-changing behemoth that is Cloud Computing. Well-defined service offerings can be requested by business users "on demand" through simple-to-use, web-based interfaces and consistently delivered in a matter of minutes. A task that would previously have taken a world-class IT organization weeks if not months to complete can now be achieved in the time it takes to make a trip to the canteen for a nice cup of coffee.

 

What has been achieved to date is outstanding. If you had shown this capability to your average IT manager a few years back, their jaw would have hit the floor! So, job done, right? Pats on the back and move on to the next big thing?

 

Hang on, not so fast! Is the job really done? The business user is happy they got their service up and running and understands they will pay an ongoing monthly fee for it, but how do they know that it's being properly managed? You can't just deploy a business service and not manage it, can you? How long before business users want the next level of information served up to them: am I really getting what I paid for?

 

This presents the IT department with many challenges that must be overcome.

"Day 2" management tools are typically silo-focused. By day 2 management I mean all the tasks that go on to manage the underlying infrastructure after it's been provisioned: for example, ongoing configuration management, patching, compliance, backups, and so on. Today's automation tools allow me to patch a server or run a compliance check on a network device, but how do I relate this silo-based approach to the service-centric view that cloud adoption has fostered?  Informing a business user that a Windows server (just one component of many) in their business service is compliant with some government or industry standard is meaningless to them. What about the service itself?

 

The answer is that automation tools must understand the services that are delivered. They must become service aware. It must be possible to initiate automation at the service level and span multiple technologies. The questions that business users will be asking are: "Do my customer-facing financial services meet industry PCI regulations? Is the configuration of my ERP system secure? Have there been any changes which deviate from the 'trusted' end-to-end configuration of the service?" Automation tools need to answer these questions and be able to provide results back to end users in a context they are able to understand.
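
A minimal sketch of what "service aware" means in practice: roll component-level compliance results up to the business service, so the answer comes back in the user's terms rather than per device. The service model and the results below are invented examples, not output from any BMC tool.

```python
"""Service-aware compliance roll-up sketch: a service is compliant only if every
component it depends on is compliant. Service model and results are invented."""

# Which infrastructure components make up each business service (invented).
service_model = {
    "Customer Payments": ["web01", "app02", "db03", "fw-edge-1"],
    "ERP": ["erp-app01", "erp-db01", "lb-core-2"],
}

# Component-level results, as silo tools (server, network, database) would report them.
component_compliance = {
    "web01": True, "app02": True, "db03": False, "fw-edge-1": True,
    "erp-app01": True, "erp-db01": True, "lb-core-2": True,
}

def service_compliance(service):
    """Return (compliant?, list of failing components) for a business service."""
    failing = [c for c in service_model[service]
               if not component_compliance.get(c, False)]
    return (not failing, failing)

for service in service_model:
    ok, failing = service_compliance(service)
    verdict = "COMPLIANT" if ok else f"NOT COMPLIANT (failing: {', '.join(failing)})"
    print(f"{service}: {verdict}")
```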

 

It's not just the expectations of the business user that are changing, either. Hardware vendors are now shipping "Cloud-ready hardware in a box". It's not just a server in the big tin box anymore; it's also the networking and storage, all pre-wired and configured. Just look at the Cisco UCS platform! This converged infrastructure requires a new type of IT operator, one that needs to be an IT generalist as opposed to just a specialist in one area. Yes, IT still needs the in-depth specialist tools, but it also needs more usable, cross-silo solutions that help manage this converged infrastructure in a consistent and centralized way.

 

Automation tool vendors must rise to the challenge and support the needs of this new service-oriented, converged-infrastructure, cloud-enabled world of IT.  Automation has been the catalyst for business agility in the cloud revolution; its next goal is to help deliver business alignment.



A mark of a mature IT organization is effective change management. But all too often, updating change records is a neglected data entry chore, undermining the usefulness of the change record and raising the spectre of audit failure. To combat this, some of our automation solutions support a feature called "operator initiated change", whereby a Remedy ticket is automatically opened, updated and closed as part of making administrative changes. Role-based access control ensures the person making the change is an authorized administrator, so they are relieved of the tedium of updating the change ticket. Requiring approvals for changes ensures a solid record of authorized changes. This works because the operator never has to change their focus from system administrator to Remedy user, in effect never changing their persona.
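
A sketch of the operator-initiated change pattern, in Python for illustration. The ticket and authorization functions (open_change, close_change, is_authorized) are hypothetical placeholders, not the Remedy or BSA API; the point is the shape of the loop: authorize, open the record, make the change, and close the record with the actual result, all without the administrator switching personas.

```python
"""Closed-loop, operator-initiated change: a hypothetical wrapper that opens,
updates and closes a change record around an administrative action.
open_change/close_change/is_authorized are placeholders, not a real Remedy or BSA API."""
import subprocess

def is_authorized(user, action):
    """Placeholder RBAC check: is this user allowed to run this action?"""
    return user in {"admin01", "admin02"}          # invented authorization list

def open_change(user, description):
    """Placeholder: create a change record and return its id."""
    print(f"[change] opened for {user}: {description}")
    return "CHG-0001"                              # hypothetical ticket id

def close_change(change_id, succeeded, detail):
    """Placeholder: close the change record with the actual outcome."""
    status = "Closed/Successful" if succeeded else "Closed/Failed"
    print(f"[change] {change_id} -> {status}: {detail}")

def operator_initiated_change(user, description, command):
    """Run an administrative command with the change record handled automatically."""
    if not is_authorized(user, description):
        raise PermissionError(f"{user} is not authorized for: {description}")
    change_id = open_change(user, description)
    result = subprocess.run(command, capture_output=True, text=True)
    close_change(change_id, result.returncode == 0,
                 result.stdout.strip() or result.stderr.strip())
    return result.returncode

if __name__ == "__main__":
    operator_initiated_change("admin01", "Restart print spooler (example)",
                              ["echo", "service restarted"])
```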

 

A persona-driven approach is the essence of closed-loop change management because it allows someone to stay "in character" while making changes and automatically updates the change record to accurately reflect real-world state. Software development has its own system of change tracking for enhancement requests, defects, and so on, and persona-driven approaches to closed-loop change management are an objective there too.

 

The DevOps trend has focused attention on improving the overall application change cycle (the DevOps Cycle). We at BMC are now taking a persona-driven approach to change management during the application release process. Planning and performing tasks during the release process automatically update change records as those tasks are executed (successfully or not, especially not). As a result, change records contain detailed results of task execution. We record not just operational changes but correlate those changes with the software development changes that are represented in the release (i.e. this deployed application went through these pre-production stages, corresponds to this ITSM change history, contains fixes for these defects and supports these user stories).

 

This achieves a complete "chain of custody" for the deployed application and connects CMMI with ITIL change disciplines. It provides forensic information for application support personnel should an issue arise, satisfies auditors and delights compliance officers.

 

Closed-loop change management in the application release process closes a huge part of the gap between Dev and Ops.



What do customers want?  Working within product management, every release brings that question to the forefront.  For BMC's Data Center Automation 8.2 release, we faced that challenge head-on. Do we go for dazzling new features designed to catch the attention of the analysts and the press?  Or do we go deep into the product, focus on the existing feature set, and work to improve it to provide more flexibility and a better experience for our customers?  For 8.2, it was an easy decision.  If you want to craft a solution that serves technical users now and into the future, at times you need to enhance the foundation.  Just as the most amazing house won't stand up to an earthquake without a good foundation, neither will a product line if the fundamentals aren't revisited and enhanced.

 


With DCA, the structure was already really good, so we looked at how we could improve what we had.  One key area was simplifying the work for our users – doing things better instead of just doing more things.   One way of doing this was focusing our attention on what our customers had told us and asked for around our existing features: RFEs, defects, issues, and so on.  This release is chock full of improvements made in direct response to enhancement requests and fixes for customer issues.  Many DCA customers, especially on the BSA side, have gone through an upgrade from our 7.x to 8.x versions over the past few years, and many remember that, given some of the large new features (the new Eclipse UI, native patching for Unix, integration of the provisioning UI, etc.), there was a learning curve for many customers; DCA 8.2 minimizes that problem.  The focus on improving existing features minimized radical changes in the product, allowing for a very logical and intuitive transition from older versions to DCA 8.2, and as a result you can get Day 1 value from the improvements.

 

Now don't get me wrong, there are major and exciting new things in this release.  With DCA 8.2 we have started the move to a consolidated reporting strategy (using Business Objects), beginning with consolidated database and network solutions (BDA didn't have a reporting solution until this release).  BSA reporting has not yet made the transition to the consolidated framework, but it has undergone an overhaul allowing for huge performance and reliability improvements as well as great strides in ease of use.  BDA customers are going to be very happy to see a much more flexible and simpler way to create user-defined actions, and I think customers will also like BSA's unified agent installer, which automates the deployment of agents natively in the product.  But talking with a lot of customers about the release, I think the thing people have been most excited about is that we have eliminated license enforcement of agents for BSA, removing the need for customers to register and deregister agents with the licensing portal – a huge improvement in simplifying the usage of the product.

 

Along with this focus on the foundation, DCA 8.2 continues down other paths the products have been following over the past few releases, especially simplifying virtual management and enabling cloud computing.  BSA has added support for several new hypervisors, including Microsoft Hyper-V, while BNA continues to add support for additional virtual switches and to improve its support for Network Pods and containers.

 

When planning a new release, it can be fun to code up dazzling new bells and whistles, and you will see a couple of these in 8.2, but this time we opted to build a world-class foundation instead.  If you're an existing automation customer, I highly recommend getting more information on DCA 8.2, and if you're not, I recommend learning about some of the capabilities the suite has.  Please check us out at www.bmc.com/data-center-automation/dca.html



I've participated in and listened in on some interesting discussions this week regarding Cloud Computing. The first was a discussion in which Massimo Re Ferrè attempted to create what he called the Cloud Magic Rectangle (more on that in a minute). The second discussion I listened in on was between Massimo and Randy Bias. While I didn't track the entire conversation, one thing Randy said really stuck out: he mentioned the concept of organizations that lean forward vs. organizations that lean back.

 

The Cloud Magic Rectangle


Let’s first look at the Magic Rectangle. As you might have realized, the name pokes fun at Gartner’s Magic Quadrant concept. The goal was to bucket different Cloud solutions into 3 categories: Orchestrated Clouds, Policy Based Clouds, and Design for Fail Clouds. Each category has various characteristics based on the value proposition, how the cloud is built, and the benefits to the end user.

 

Generally speaking, Massimo has it right with what makes up each category. Where I take issue is his conclusion, where he lumps BMC in with the Orchestrated Cloud camp. The first release of BMC's Cloud Lifecycle Management was very much an Orchestrated Cloud solution. But as our product has matured over the last 2½ years, we have shifted more towards a Policy-Based Cloud. I won't bother to run through an exhaustive list, but features like Public or Private Cloud Support, APIs, Scalability, Multi-Hypervisor Support, Integrated Layer 2 Network Support, Integrated Security, Hardware Agnosticism, Virtual DC Support, and Policy-Based Placement have all been features of CLM for at least a year, if not since the product's inception.

 

But beyond the error in categorization, something else struck me. Massimo told me that he got feedback that BMC shouldn't even have been included in the Orchestrated Cloud column (i.e., we're not Cloud or Cloud Management).  This is similar to other sentiments I've heard, where people have called us "legacy" vendors or "cloud-washed shite". That was coupled with the argument that, in order to be a "policy-based cloud", customers should be able to download a trial of your software and install it themselves, like they can with vCloud Director and Microsoft System Center 2012.

 

While these are interesting arguments, they couldn't be further from the truth. First, if you think building a Cloud is as simple as downloading a trial software package and throwing it on some lab hardware, you're not long for the Cloud world. Sure, you can grab the install, build yourself a "Cloud" and roll that out within your little siloed organization. But if you want an enterprise-wide solution that must bring along with it the baggage of the last 25+ years of client-server computing, you need the expertise and experience that an implementation partner can bring. In the end, our professional services aren't about installing the software; they're about designing a solution to meet your enterprise's short- and long-term goals. And most importantly, about serving the needs of the business.

 

On the other point, I turn to a blog post I read by Randy Bias regarding complexity and simplicity in Cloud building. An interesting point Randy made was that if you want to build Amazon Web Services (AWS) in 2012, you don't try to build AWS as it is in 2012. Instead, you start simple and build AWS as it was in 2008 or 2009, then layer on more features by iterating on top of this foundation. BMC CLM helps customers build this foundation, and then layer on the features they need in the future. Additionally, much of our product development is driven by customers' requests for new features, and as the customer matures, we mature with them. Two years ago we had no Capacity-Aware Placement; now CLM can determine which compute pools will provide the resources required to fulfill your request.
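
To illustrate that last point, a toy version of capacity-aware placement might look like the sketch below: pick a compute pool that can actually satisfy the request's CPU and memory needs (here, the eligible pool with the most headroom). The pool data and the placement policy are invented for the example, not CLM's actual algorithm.

```python
"""Toy capacity-aware placement: choose a compute pool with enough free CPU and
memory for the request. Pool data and policy are invented, not CLM's algorithm."""

# Free capacity per compute pool (invented figures).
pools = {
    "pool-east-1": {"free_vcpus": 64, "free_mem_gb": 256},
    "pool-east-2": {"free_vcpus": 8,  "free_mem_gb": 32},
    "pool-west-1": {"free_vcpus": 128, "free_mem_gb": 96},
}

def place(request):
    """Return the name of the pool with the most headroom that fits the request."""
    eligible = {name: cap for name, cap in pools.items()
                if cap["free_vcpus"] >= request["vcpus"]
                and cap["free_mem_gb"] >= request["mem_gb"]}
    if not eligible:
        return None
    # Simple policy: prefer the pool with the most spare vCPUs after placement.
    return max(eligible, key=lambda n: eligible[n]["free_vcpus"] - request["vcpus"])

if __name__ == "__main__":
    req = {"vcpus": 16, "mem_gb": 64}
    print(place(req) or "No pool can satisfy this request")
```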

 

Forward or Backwards Leaning


And that brings me around to the other conversation I listened in on, between Randy and Massimo. Randy mentioned that a difference in opinion between him and Massimo could be due to Randy dealing with more forward-leaning organizations vs. backward-leaning organizations. This is an important distinction and one that is often lost in the "Cloud Wars". Many pundits say "Enterprise IT needs to be like Netflix". In principle, this is a good idea. Netflix has done some pretty amazing things in the Cloud, Operations, and Development space, and they should be applauded for their work. In practice, everyone being Netflix is much more difficult. Netflix is a very forward-leaning organization. As a young company, they are not tied down with the baggage of years of "Business as Usual".

 

Many IT organizations are backward-leaning, or at the very least middle-leaning. They have years and years of baggage that they are dragging with them on the Cloud journey. If they've managed to shed some of this baggage, they've become more middle-leaning. But in the end they need companies that understand where these IT organizations are coming from and how they can move forward, while still managing this baggage on a day-to-day basis, as it is not going away anytime soon.

 

And that is the beauty of BMC’s overall solution and strategy. We can help optimize your IT organization by providing consulting services to start you on the Cloud journey, we can provide you products that are constantly maturing in the features and functionality required for Cloud, and we can manage all that baggage you are bringing along with you. In the end, it is about meeting the needs of the business, something BMC has helped companies do for years. Cloud doesn’t change this end goal, it simply changes the speed and way you achieve that goal.



Now that we know we are not in the old school and are not going to lose our jobs to automation, we are still not out of school.  Even with packaged automation solutions focused on configuration management, cloud provisioning and event management, among others, what are you doing about other repetitive tasks and procedures that are not covered by these solutions?  The forest is full of underbrush, small trees and bare branches that need to be cleared so that the healthy trees can thrive and we can enjoy a walk through the woods when we are out of school.  These tasks may be small, administrative, or complex series of tasks in a procedure, but they consume staff time and, in many cases, higher-tier skills.  As a class of activities, they drag down the productivity of IT as much as activities more narrowly focused on a specific area, such as server configuration management.  Where a server configuration management solution can be packaged as a well-defined set of use cases, these unaddressed, repetitive procedures are not as easily packaged as a solution set.

 


The repetitive procedures we are talking about typically interact with multiple systems, even for simple use cases.  Thus, the term orchestration is an easy way to differentiate these use cases from specialized automation products.  Simple use cases may be the execution of service requests to reset a password, increase an email quota or add to a storage allocation.  These types of tasks occur frequently, consuming staff time, including tier 2 and possibly tier 3 resources, to deliver the requested service.  The sheer frequency of these tasks adds up to a significant chunk of staff time that could be producing greater value for the business than these routine and rather mundane tasks required to maintain company operations.  BMC Atrium Orchestrator is a platform for automating the execution of these use cases that is flexible in adapting to your specific process requirements, such as creating and updating Incident tickets and Change Requests as appropriate for compliance with your enterprise governance policies.  Just the automatic documentation of the actions and results is a significant time saver and ensures audit requirements are being met with no impact on staff time.

 

BMC Atrium Orchestrator offers the ability to embed the triggers for workflows into existing operator consoles or applications, to reduce the number of different user interfaces required to deliver service requests and even to eliminate operator intervention altogether.  Operators are more productive when they can focus on a single window into their responsibilities in IT.  One less console to learn and log into may seem like a small item until you observe the effort involved in switching back and forth, not to mention the fact that the context for the workflow execution can be automatically included in the trigger from the operator's console.  Small improvements in operator efficiency are multiplied by the number of times every operator switches between multiple consoles every day.  Triggering a workflow automatically from within an application eliminates staff effort in cases where the execution procedure is well known, requiring no decision to initiate the process.  Orchestration workflows can be initiated from many sources, such as self-service request portals, events and pending job starts, involving staff resources only when required for approvals or escalation.
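
As a rough sketch of the kind of workflow being described (written in Python for illustration, not in BMC Atrium Orchestrator's own workflow format), consider a password-reset request flowing straight from a self-service portal trigger to execution, with the ticket created and closed automatically. The helper functions are hypothetical placeholders, not a BMC API.

```python
"""Sketch of an orchestrated password-reset workflow triggered from a self-service
portal. All helper functions are hypothetical placeholders, not a BMC API."""

def create_incident(summary):
    """Placeholder: open a ticket so the work is documented for audit."""
    print(f"[incident] opened: {summary}")
    return "INC-0001"                      # hypothetical ticket id

def reset_password(user_id):
    """Placeholder: call the directory/identity system to reset the password."""
    print(f"[directory] password reset for {user_id}")
    return True

def update_incident(incident_id, note, close=False):
    """Placeholder: document the result and optionally close the ticket."""
    state = "closed" if close else "updated"
    print(f"[incident] {incident_id} {state}: {note}")

def password_reset_workflow(request):
    """End-to-end flow: no operator touches a console unless the reset fails."""
    incident_id = create_incident(f"Password reset requested by {request['user_id']}")
    succeeded = reset_password(request["user_id"])
    if succeeded:
        update_incident(incident_id, "Reset completed automatically", close=True)
    else:
        update_incident(incident_id, "Automatic reset failed; escalating to tier 2")
    return succeeded

if __name__ == "__main__":
    # Trigger as it might arrive from a self-service portal (invented payload).
    password_reset_workflow({"user_id": "jsmith"})
```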

 

Since you already know you will not lose your job from automating tasks in IT, you still have to prove the economic value of implementing an orchestration platform and the workflows for executing procedures.  Productivity is the key concept on which to focus – increasing the throughput of work accomplished with a fixed amount of resources.  The basic financial benefit of decreasing the time and effort to execute repetitive tasks with automation is a significant reduction in the staff capacity assigned to these responsibilities, as well as a greater share of these tasks that can be reliably handled by lower-tier resources.  In staff utilization terms this appears on paper as a reduction in costs – fewer staff employed for these tasks.  In reality, this translates to more tasks that can be executed by existing staff – greater productivity for the current cost of operations, or a lower cost per request or action.  Staff displaced from covering the workload of these tasks are now able to address other work, including delivering new projects faster or taking on projects that you previously did not have enough staff to take on.  This looks like free staff, because you are already paying the cost, but the financial benefits will come from increased revenue or reduced cost somewhere else in the business, resulting from the increased capacity in IT to create business efficiency and enable new revenue opportunities.
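
The productivity argument reduces to straightforward arithmetic, sketched below with invented numbers: hours recovered equal task frequency times minutes saved per execution, and the "savings" shows up as staff capacity freed for other work rather than as a line item you cut.

```python
"""Back-of-the-envelope productivity math for automating repetitive tasks.
All figures are invented examples; substitute your own volumes and rates."""

tasks = [
    # (task, executions per month, manual minutes each, automated minutes each)
    ("Password reset",        400, 15, 1),
    ("Email quota increase",  150, 20, 1),
    ("Storage allocation",     60, 45, 5),
]

loaded_hourly_rate = 75.0   # invented fully-loaded staff cost per hour

total_hours_saved = 0.0
for name, per_month, manual_min, auto_min in tasks:
    hours_saved = per_month * (manual_min - auto_min) / 60.0
    total_hours_saved += hours_saved
    print(f"{name:22s} {hours_saved:7.1f} staff hours/month recovered")

print(f"\nTotal: {total_hours_saved:.1f} hours/month "
      f"(~{total_hours_saved / 160:.1f} FTEs) "
      f"= {total_hours_saved * loaded_hourly_rate:,.0f} in capacity/month")
```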

 

This basic approach can be applied to a wide range of repetitive tasks, procedures and processes in your organization.  Examples include enriching the information and documentation of Incident tickets, triage and remediation of events, ticket synchronization between multiple service desks, automatic execution of self-service requests, and even some provisioning use cases.  Obviously, there are automation products that do some tasks you may consider for orchestration, but these are highly specialized and only accomplish tasks focused on a single domain, like BMC Server Automation.  You may consider BMC Atrium Orchestrator for a few use cases covered by these specialized automation products, but orchestration will not be the most effective solution for the complete set of use cases covered by BMC Server Automation.  However, you can start by orchestrating a subset of the use cases covered by BMC Server Automation.  Implementing BMC Server Automation later eliminates some BMC Atrium Orchestrator workflows, improves productivity in server operations and opens new opportunities for more mature process orchestration, such as orchestrating server change job execution within your enterprise Change Management process, improving compliance with change documentation policies and reducing audit violation penalties.

 

Obviously, you will be in school on the journey of automating IT for some time.  The workflows implemented with BMC Atrium Orchestrator are dynamic and change over time based on the arrival of other automation products in your IT management environment and your progress in automation maturity.  BMC Atrium Orchestrator will be with you throughout the journey, and I will be back to discuss the progression of orchestrating repetitive procedures in these more complex situations.  In the meantime, your homework is to identify those pesky, simple tasks that are repeated every day and that you can start orchestrating now.
