

It seems that every couple of months I read another article about the search for the Higgs Boson, and how that will either confirm or call into question the various all-encompassing "theories of everything" in particle physics. While that story of particle-colliding is interesting in its own science-geeky way, I also think that the drive towards the all-encompassing answer has echoes outside of physics and large groups of grad students spending years underground building super-colliders. The drive to establish some underlying theory for "how things should work" is not only intellectually satisfying, it also provides a lot of insight into why things should work one way versus another, and why some things work and others don't.

 

So, getting right to my point - I think IT needs a Unified Theory. There are so many conflicting pressures on IT organizations pulling in different directions. ITIL calls for deliberation and adult supervision at all times. Cloud sometimes seems to be turning IT personnel into short order cooks, rather than thoughtful chefs. And then Agile Development has developers seething at IT for being too slow, plodding, and behind the times. So, what is an IT organization to do?

 

To even get to that theory of everything, we need to ask - why do company IT departments exist at all? The answer is not "to buy expensive computers and provide a living to hard working geeks everywhere". It's not even "to run the company's IT stuff". At the end of the day, companies would not be spending millions of dollars/euros/pounds if IT weren't essential to the business. "Yeah, duh" you say. Well, yes, it is obvious, but do we run IT like the whole reason it exists is to make the business successful? Many times, the answer is no. Most companies are entirely dependent on their IT infrastructure to run the business, yet decisions are often made for "non-business" reasons.

 

Ok. So, IT departments exist for the good of the business. Now how should IT run, based on that observation? I suggest that IT should be managed in order to provide the best service to the business - that is, the most value for the least cost. Taking that a bit further, I think that means that IT should run as a service-oriented organization - IT as a Service. Again, this seems rather obvious, but do we do this? This is really a fundamental change in the relationship with the business. IT should focus on making decisions based on achieving the greatest value for the business. Interactions with the rest of the company should be framed in terms of the services provided - not hardware, CPUs, or MS Sharepoint directories.

 

Now, we could spend a lot of time on what it means to be service-oriented, but I want to focus on one part. IT as a Service - ITaaS - means standardizing requestable services into a service catalog - and then fronting that service catalog with a self-service portal. That is where it becomes the Unified Theory. Instead of having all of these different, siloed approaches to requesting services from IT, the self-service portal becomes the ultimate equalizer. And this shouldn't just be for the obvious services - like requesting a new cell phone, or a cloud-based server. It should also include core-IT functions like scheduling the release of an application, requesting a compliance scan, or requesting a new patch. My vision is of the worlds of Systems Management, Workload Management, Cloud, and DevOps all meeting together in a unified structure that removes the complexity from the requester. But it also means that IT needs to get serious about standardization and automation.
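To make the idea a bit more concrete, here is a minimal sketch (in Python, with hypothetical service names, fields, and fulfillment hooks - not any particular product's catalog model) of what a unified, requestable service catalog behind a self-service portal might look like, covering both end-user requests and core-IT functions:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CatalogItem:
    """One requestable service exposed through the self-service portal."""
    name: str
    category: str                      # e.g. "Cloud", "DevOps", "Compliance"
    fulfill: Callable[[dict], str]     # automation that actually does the work
    approval_required: bool = True

# Hypothetical fulfillment hooks - in practice these would call the
# organization's automation/orchestration tooling.
def provision_cloud_server(req: dict) -> str:
    return f"Provisioning {req['size']} server in {req['environment']}"

def schedule_release(req: dict) -> str:
    return f"Release of {req['application']} scheduled for {req['window']}"

def run_compliance_scan(req: dict) -> str:
    return f"Compliance scan queued for {req['target']}"

CATALOG: Dict[str, CatalogItem] = {
    "cloud-server":    CatalogItem("Cloud Server", "Cloud", provision_cloud_server),
    "app-release":     CatalogItem("Application Release", "DevOps", schedule_release),
    "compliance-scan": CatalogItem("Compliance Scan", "Compliance", run_compliance_scan,
                                   approval_required=False),
}

def submit_request(item_key: str, details: dict) -> str:
    """Single entry point the portal calls, whatever the underlying silo."""
    item = CATALOG[item_key]
    if item.approval_required:
        # In a real implementation this would kick off an approval workflow.
        print(f"Approval requested for: {item.name}")
    return item.fulfill(details)

print(submit_request("cloud-server", {"size": "medium", "environment": "dev"}))
```

The design point is that the requester sees one catalog and one request path; which silo fulfills the request is an implementation detail hidden behind the automation hooks.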

 

To illustrate this, I have added a graphic that shows the basic thrust of the argument. So, do you agree? Isn't a "Unified Theory" essential to IT's survival?

 

[Graphic: IT as a Service]



What would you say if I told you that you can finally meet the goals your business owners and IT management have asked of you in terms of Application Performance Management (APM)?  Well, you can indeed meet those APM goals … if only you are willing to stop doing APM!


Yes, you read that right … the problem with today’s application support and application operations practices is just that – we are managing application performance.  So what’s wrong with APM?

 

  • How often do you find conflicting data pointing to the root cause of an application problem?
  • Did you ever fix an application problem only to discover your end-users are still frustrated with service levels?
  • Is it possible to instrument everything that’s an important part of your mission-critical applications?  If so, at what cost?
  • When was the last time you got a call into the service desk from an application?

 

If you want your team to stand apart as one of "the groups that get it", stop doing APM and start managing end-user performance.  If you run a mission-critical application, you already know that the satisfaction of your end users is the only thing that matters to the business.  If end users are content with your service delivery, they will buy more books, return to your site more regularly, post social media updates about how great your company is, and ensure your company's revenue goals are met.

 

You will be the hero … BUT, if you fail … your customers will abandon transactions, they will trash your company on Twitter and Facebook, and your company’s reputation will be irreparably damaged.  And you will probably get the chance to meet some of your company executives under not very pleasant circumstances.  You can prevent this from happening and ensure delighted end users are the hallmark of your company.

 

So what does end user focused application performance management look like?  Look for a solution with the following capabilities:


  • Global Visibility into application performance from the end-user perspective
    • Eliminate “blind spots” without information overload
  • Automated End-User Forensic Analysis
    • Combine maximum visibility into user experience with deep session analysis to enable rapid and precise problem detection, prioritization, and isolation
  • Real-Time Change Impact Analysis
    • Prevent end-user performance issues resulting from application and infrastructure changes
  • End-to-End Application Performance Monitoring
    • Detect problems as soon as a single user experiences them - and capture all of the diagnostic data necessary to drive rapid problem isolation and resolution - for all application architectures
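To illustrate that last capability - detecting a problem as soon as a single user experiences it - here is a rough sketch, assuming a hypothetical stream of per-user response-time measurements and an arbitrary SLA threshold; real end-user monitoring products do far more, but the basic idea is the same:

```python
from collections import namedtuple

# Hypothetical per-user measurement from real end-user monitoring.
Measurement = namedtuple("Measurement", "user session_id page response_ms")

SLA_THRESHOLD_MS = 2000   # arbitrary example threshold

def check_user_experience(measurements, capture_session):
    """Alert on the first user whose experience breaches the SLA, and
    capture that session's diagnostics for problem isolation."""
    for m in measurements:
        if m.response_ms > SLA_THRESHOLD_MS:
            diagnostics = capture_session(m.session_id)
            print(f"ALERT: {m.user} saw {m.response_ms} ms on {m.page}")
            return m, diagnostics
    return None, None

# Example usage with stubbed data and a stubbed capture function.
stream = [
    Measurement("alice", "s-101", "/checkout", 850),
    Measurement("bob",   "s-102", "/checkout", 3400),   # first breach
]
incident, data = check_user_experience(stream, lambda sid: {"session": sid, "trace": "..."})
```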

 

 

If you want to be an application superhero at your company and manage applications in a way that brings your team closer to the needs of the business, quit doing the same thing expecting different results – take a new approach to APM and watch as customer satisfaction grows and your business goals are exceeded.

 



I found myself caught up recently in a Twitter storm over why, or why not, multi-hypervisor support in Cloud, Automation, and Management software is important. The proponents were positioned on the side of cost, and the detractors were against the complexity such a solution introduces. Multi-Hypervisor support is important for 2 reasons: managing costs, and reducing vendor lock-in.


Managing Costs


One of the big reasons enterprises are clamoring for this feature is the increasing cost of virtualization. Ziff Davis estimates the budgeted cost of virtualization increased 17% in 2011 and will increase another 8% in 2012 (pdf report). This increase in budget is of course directly correlated to the significant trend in virtualization adoption, but it also quickly becomes a sore point for CIOs, as this increase in budget needs to somehow be funded. This increased demand has CIOs looking for other options in the hypervisor space, such as Citrix's XenServer. Additionally, organizations with enterprise agreements – of which there are many – are being courted by Microsoft to use Microsoft's Hyper-V platform for free. Of course, "free" is relative, and there are plenty of hidden costs with a free hypervisor, but from a purely budgetary perspective, free is attractive for many CIOs.


Another reason for this increased budgetary expense is VMware exploiting their strong market position. Off the record, CIOs have told me that renewals are becoming increasingly expensive and difficult with VMware. Anecdotes abound on blogs and the Internet over the costly upgrade and renewal process associated with the licensing of VMware vSphere.


Reducing Lock-in


This leveraging of VMware's market position has made many CIOs realize one key point: they find themselves locked into one particular platform with no secondary plan. In economics this is known as a "hold-up problem" (yes, like "Stick 'em up, Mr. CIO"). This fear was heightened when VMware introduced its latest revision, vSphere 5. As part of the product introduction, VMware changed its licensing model to be based not only on the CPUs of host systems, but also on the RAM that would be available to the running virtual machines.


As servers become more and more dense (more cores, more memory), they can, in theory, run more virtualized guests. VMware most likely envisioned a future where companies would decrease their license spend as server density increases, and thus rolled out the new licensing plan. But the plan backfired. VMware customers revolted, dubbing the new licensing model vTax, and many CIOs woke up to the risk of being at the mercy of their virtualization vendor. Eventually VMware relented and revised their new vRAM licensing model to be more generous, but the snowball had already been pushed down the hill. Enterprises are now looking more seriously at alternative hypervisors.
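To see why the change stung, here is a back-of-the-envelope comparison using purely hypothetical numbers (the actual vRAM entitlements varied by edition and were later revised upward):

```python
# Hypothetical example only - actual vSphere 5 vRAM entitlements varied by
# edition and were revised after the customer backlash.
sockets = 2                 # per-CPU licenses needed under the old model
host_ram_gb = 256           # dense host, fully committed to running VMs
vram_entitlement_gb = 32    # assumed vRAM entitlement per license

old_model_licenses = sockets
new_model_licenses = -(-host_ram_gb // vram_entitlement_gb)  # ceiling division

print(old_model_licenses)   # 2 licenses under per-CPU licensing
print(new_model_licenses)   # 8 licenses once committed vRAM is counted
```

On a dense, fully loaded host, the assumed numbers quadruple the license count - which is exactly the kind of math that had customers coining the term vTax.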


Complexity


Multiple hypervisor platforms increase complexity; there’s no way around that. But so does having multiple operating systems, multiple application server platforms, multiple hardware vendors, etc. IT organizations have figured out how to overcome the complexity that a new platform introduces, and IT has managed to survive.


Cynicism aside, the level of complexity can be reduced if operations teams are operating effectively. First, basic knowledge of one hypervisor platform can speed adoption of the new platform. For instance, a competent VMware administrator should have knowledge of clusters, data stores, hosts, and guests and how those systems interact. The same concepts exist in Citrix XenServer (albeit with different names), and this base knowledge can be leveraged to quickly ramp up on the new technology. If organizations have competent staff running their virtualization environments, they should be able to ramp up on new technologies quickly. If they can't, it might be time to reevaluate the staffing plan.


Second, solutions exist that are multiple-hypervisor aware; the key is how you use them. BMC's own Cloud Lifecycle Management supports VMware and Citrix, BMC's BladeLogic supports four x86 hypervisors and two Unix hypervisors, and Open Source solutions such as OpenStack support multiple hypervisors. In using one of these solutions, you need to ensure that you are abstracting at the correct level.


Say, for instance, you wish to deploy MySQL servers to both a Xen environment and a VMware environment. Traditional, single-hypervisor organizations with no automation would build a template containing all the software preinstalled. New instances are easy – point, click, clone. With multiple hypervisors you need to abstract one layer up, at the automation layer. You build basic templates in Xen and VMware, and then you build a common package in your automation solution to deploy MySQL to the new instance. As a side effect, you now have a package that can be deployed to virtually any server whenever you need a new MySQL installation. Complexity isn't necessarily increased; the work is just moved to a different part of the operations stack. And in the end, your operations teams need to be moving towards this better operating model. Multi-hypervisor support simply accelerates this change.
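A rough sketch of what "abstracting at the automation layer" could look like follows, with hypothetical functions standing in for the Xen and VMware provisioning APIs and for the automation tool's package deployment - the point is that only the template-clone step is hypervisor-specific:

```python
# Hypothetical sketch: the hypervisor-specific part is limited to cloning a
# bare OS template; the MySQL install is one common automation package.

def clone_vmware_template(name):
    # stand-in for a vSphere API call
    return {"hypervisor": "vmware", "host": f"{name}.example.com"}

def clone_xen_template(name):
    # stand-in for a XenServer API call
    return {"hypervisor": "xen", "host": f"{name}.example.com"}

CLONERS = {"vmware": clone_vmware_template, "xen": clone_xen_template}

def deploy_mysql_package(host):
    # stand-in for the automation tool deploying one common MySQL package
    print(f"Deploying MySQL package to {host}")

def provision_mysql_server(name, hypervisor):
    """Provision a MySQL server on whichever hypervisor is chosen."""
    vm = CLONERS[hypervisor](name)       # hypervisor-specific step
    deploy_mysql_package(vm["host"])     # common step, reusable anywhere
    return vm

provision_mysql_server("mysql-01", "xen")
provision_mysql_server("mysql-02", "vmware")
```

The common MySQL package is the "side effect" described above: it can now be applied to any new instance, regardless of which hypervisor produced it.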


In the end, multi-hypervisor support is becoming a larger and larger part of the enterprise's playbook. It may initially seem to introduce complexity, but this perceived complexity can be overcome by adjusting your operations models. The scare that VMware introduced into CIOs' dreams this past summer has helped to accelerate organizations' multi-hypervisor goals. Whether you agree with it or not, more and more vendors will emulate (or already are emulating) BMC's support of multiple hypervisors as more and more CIOs demand it.


For more on this discussion, and more viewpoints, you can check out these blog posts:

 

Virtualization Costs, Virtualization Advantages and the Case for Multi-Hypervisors by Massimo Re Ferrè

 

Why Enterprises Will Force Down the Cost of Virtualization by Mark Thiele



When I started in performance and capacity planning, we weren’t all that concerned about money.  IT was a cost-center; everyone accepted this and it wasn’t a problem.  We had the luxury of extra capacity and loads of time to plan.  This approach made the job much easier, but also less valuable to the organization.  Our quarterly capacity plans were rarely of interest to anyone.  The business would let us buy what we said we needed.  It was sort of the same way in our personal lives.  Remember going out to dinner or drinks whenever you wanted to?  It just seemed to be an easier time.

 

Ah, for the good old days!  Outsourcing and offshoring put IT in the position of competing, and cost was the trigger.  This inflection began in the late '80s and only got worse as hardware got cheaper and companies began to undervalue capacity planning.  Why pay an expert a high salary AND fund software for them when a server is so cheap?  The same challenge will keep arising as long as we don't change what we do and how we message it.  It seems so easy to just buy cheap servers, even if that isn't always the right answer (anyone ever see more CPU capacity thrown at a memory problem?).  And now, we are all more cost conscious, whether it is entertainment (Netflix versus going out to movies, and cooking at home rather than dining out) or necessities (do we really need to go to the doctor – can't we just gut it out?).  Everyone is squeezing the last cent out of their dollars.

 

 


 

 

 

 

Capacity planning means providing cost-effective performance and availability by understanding the relationship between IT resources and business transactions and managing to business KPIs.  We are business-enablers.  The result of the buy-cheap (penny-foolish) approach was rooms full of under-utilized or even completely unused servers.  It might have looked cheaper, but it really wasn't.

 

A well-thought-out capacity planning process involves collection of "all the data, all the time," fed into an automation engine which produces reports and analytics and feeds modeling and trending engines.  This allows for a continuous cycle of analysis and improvement.  You aren't doing the IT equivalent of "making copies;" you spend your valuable time understanding the business and proactively managing the environment to give the business just-in-time capacity that delivers the SLAs they demand.  It elevates your job to a new level – you (and everyone else) can see how your work contributes to business success.  Of course, it requires a new kind of capacity planner, one who is willing to learn the business, understand the variety of platforms applications run across, and who will rise from the weeds of their favorite silo.  Don't be afraid of "beginner's mind;" this is how we grow, learn, and add more value.
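As a simple illustration of the trending half of that cycle, here is a minimal sketch - made-up utilization numbers and a naive linear trend, nothing like a production modeling engine - of projecting when a resource will run out of headroom:

```python
# Minimal sketch: fit a straight line through weekly peak CPU utilization and
# project when it crosses the capacity threshold. Real capacity tools use far
# richer models; the data below is made up.
weeks = [1, 2, 3, 4, 5, 6]
peak_cpu_pct = [52, 55, 59, 61, 66, 70]
threshold = 85

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(peak_cpu_pct) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, peak_cpu_pct)) / \
        sum((x - mean_x) ** 2 for x in weeks)
intercept = mean_y - slope * mean_x

weeks_to_threshold = (threshold - intercept) / slope
print(f"CPU projected to hit {threshold}% around week {weeks_to_threshold:.1f}")
```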

 

“We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths.”

                          – Walt Disney


Your Recipe for Value Maturity

 

Have you ever heard the saying, "Anyone who can read a cookbook can cook"? That may work if the recipe is well tested and the cook has a basic knowledge of cooking terms and processes. However, culinary experts — professional chefs — have a deeper knowledge of how various factors can affect the results. This includes how the freshness of ingredients or an alteration in cooking temperature at the right time can influence the outcome of a dish.

 

They are likely to taste-test frequently, monitoring the cooking progress and adjusting seasonings or cooking procedures accordingly. This close attention to detail is often the difference between a good meal and an excellent one.

 

Likewise, real-time monitoring and management plays a critical role in day-to-day IT operations, often making the difference in how well you achieve your business objectives. The effective implementation of availability and performance management tools enables organizations to understand the current state of IT, with a growing number of IT organizations using these tools to understand how well they are supporting critical business services. To this end, application performance management is considered a core subset of availability and performance management, which provides visibility into how well user and application transactions flow across the IT infrastructure in support of these business services.

 

Analyst firms continue to create and enhance their maturity models to help their clients understand the value of technology, as well as provide a path toward greater organizational and process maturity. New technologies, approaches, priorities, and challenges have changed how IT is, and should be, monitored. The focus has moved from monitoring faults and outages to managing performance degradation. It has also moved from a reactive approach based on mean time to repair (MTTR), to a proactive approach that uses behavioral analysis to avoid outages. In addition, there is a need to understand how end users are working with IT, and how new IT service delivery models (e.g., public and private clouds) are being used. These factors, plus a growing reliance on IT, are fueling the need for more effective ways to monitor IT applications and end-user behavior — no matter where those applications are located or how users choose to access them. For example, many users now access applications from their smartphones, tablets, televisions, laptops, and other devices.

 

This paper explains the shortcomings of traditional, analyst-developed IT maturity models for informing and guiding the evaluation of IT monitoring capabilities, the purchase of monitoring tools, and the creation of effective monitoring strategies.

 

About the Authors

David Williams is a Vice President of Strategy in the Office of the CTO, with particular focus on availability and performance management, application performance management, IT operations automation, and management tools architectures. He has 29 years of experience in IT operations management. Williams joined BMC from Gartner, where he was research vice president, leading the research for IT process automation (run book automation), event correlation and analysis, performance monitoring, and IT operations management architectures and frameworks. His past experience also includes executive-level positions at AlterPoint (acquired by Versata) and ITMasters (acquired by BMC), and he served as vice president of Product Management and Strategy at IBM Tivoli. He also worked as a senior technologist at CA Technologies for Unicenter TNG and spent his early years in IT working in computer operations for several companies, including Bankers Trust.


Leslie Minnix-Wolfe is Lead Solutions Manager for Proactive Operations and the Service Assurance products at BMC Software. Minnix-Wolfe has more than 25 years of diverse development and marketing experience, primarily in the IT systems management domain, with a broad base of other experience, especially in BSM and predictive analytics. She previously held product and development management positions at several high-tech start-ups, including Netuitive and Managed Objects. She holds a BS in math/computer science from the College of William and Mary.



In this Buzz on IT Automation Podcast, Ben Newton and Tim Fessenden of BMC talk about building a business case for automation and educating the decision makers about investing in resources for the rollout, ongoing development, and maintenance of an IT automation platform, and creating a customer "Center of Excellence."

 

Bios: Ben Newton is the Sr. Manager for Operations Buyer Marketing and Tim Fessenden is Product Line Executive for Data Center Automation at BMC Software.



For anyone who has been in the performance field for a while, the old truism seemed to hold: if you did a great job and performance was excellent, management thought they didn't need you; if performance was poor, you had failed.  The job of performance management was poorly understood, in part because of the arcane processes we used to manage it – people simply thought good response time was "magic."

 

As systems became more complex and business applications hopped across servers, geographies and even across companies, people began to be more aware that performance doesn’t “just happen.”  The internet quickly weeds out the performance-haves from the have-nots; people simply won’t wait and they don’t have to.  There are alternatives for every service and every product.  Customers have begun to value great response time as a competitive differentiator.  The management of great companies is beginning to realize that as well. 

 

Great performance analysts usually move up through the ranks from a variety of roles – developer, database specialist, systems analyst…   They have a wide perspective because they have to.  You can’t do the job unless you can both understand all the components at a deep, detailed level while still seeing the transaction holistically.  It’s not for everyone. The best truly “feel the need for speed” and this shows up in their daily lives.  Got a good shortcut, anyone?

 

But with all that complexity and increasing demand for stellar performance, the job has become more difficult. Add to this that in most organizations, management has been skinnying down the ranks, so you are either managing alone or with one other person who may or may not have the experience to do the job you can do.  And the job just keeps getting bigger and more challenging. 

 

The solution?  Great tools that give you all the data, all the time, filtered down to show you what you need to see.  Gone are the days when 90% of your day had to be spent running scripts to collect and transform data and then sifting through too many metrics to find the ones of interest.  Even if you enjoyed that (you didn't really, did you?), you simply don't have the time.  Let automation and intelligent software be your friend, allowing the time to do the expert analysis and problem solving.  That's what's fun about the job.
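In the spirit of "all the data, all the time, filtered down to what you need to see," here is a toy sketch (hypothetical metric names, made-up baselines) of letting software surface only the metrics that are behaving abnormally, rather than sifting through everything by hand:

```python
import statistics

# Toy example: baseline each metric from its own history and surface only
# those that are behaving abnormally today. Names and numbers are made up.
history = {
    "web01.cpu_pct":   [41, 44, 39, 42, 43, 40, 45],
    "web01.disk_busy": [10, 12, 11, 9, 13, 12, 10],
    "db01.lock_waits": [3, 2, 4, 3, 2, 3, 55],   # today's value spikes
}

def interesting_metrics(history, sigmas=3.0):
    """Return metrics whose latest sample is more than `sigmas` standard
    deviations away from the mean of the earlier samples."""
    flagged = []
    for name, samples in history.items():
        baseline, latest = samples[:-1], samples[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0   # avoid divide-by-zero
        if abs(latest - mean) > sigmas * stdev:
            flagged.append((name, latest, round(mean, 1)))
    return flagged

for name, latest, mean in interesting_metrics(history):
    print(f"{name}: latest {latest} vs baseline mean {mean}")
```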

 

For the first time in performance management history, we have the chance to be heroes.  But only if we can make our systems hum.  Consider taking your game to the next level by selecting a performance management solution that provides ALL the information and intelligence you need, so that you can be a key player in your company’s success.

 

 




It's that time of year again. We take stock of all the things that make us feel guilty, and dream about the person that we want to be. There is something almost subconscious about the need to re-invent ourselves on a regular basis, and the new year provides as good an excuse as any. And, inevitably, most of the time we don't end up reinventing anything, and our resolutions are forgotten by Super Bowl Sunday. So, you say "Thanks for the psychology lesson, Dr. Phil. This is an IT software blog. Please make your point". Thank you. I do have one (my resolution for the year is to be more concise…). So, if we end up breaking and forgetting most of our resolutions, why even have the ritual? The answer is easy - with such hectic lives, we hardly ever stop to consider why we are doing what we are doing, and living how we are living. Even if we don't change anything, the exercise of questioning ourselves is essential to making sure that we don't get stuck in any number of ruts.

 

So, what does this have to do with IT? I think IT organizations need to do the same thing. Most IT shops spend so much time fighting fires, planning budgets, and wrapping and unwrapping themselves around various operational axles that they rarely stop to ask - why do we do things the way we do them? As we have discussed in previous blog entries, implementing software like BMC's BSM products often requires cultural change along with the simple installation and configuration of the solutions. But, if the IT organization is charging along at 100 mph, how can they possibly take a breath and question their processes and procedures? So, could IT benefit from the new year's resolution process? YES (That was a rhetorical question).

 

So, if we agree that some "soul searching" is good for IT, how do we approach it? At the risk of pushing my analogy too far, why do our own new year's resolutions fail? I would put a few reasons out there for your consideration. First, they can be unrealistic - particularly if they are driven by peer pressure and not by a real need. Did you join the gym because you want to get fit, or because Grandpa helpfully pointed out your very generous slice of carrot cake after Christmas dinner? For IT, any changes need to be driven by real business needs, not by hype or over-active vendors. Build a cloud because you have a solid business case for it, not so you can get the CIO off your back. Second, many resolutions fail because of the absence of a serious plan. On a personal note, after 10 years of failed resolutions, I finally hit my weight goal in 2011. What was the difference? I buckled down, got a serious plan, and followed it. This goes for IT as well. It is great to decide that you want to fix some particularly grueling processes, but if you don't identify success metrics (the bathroom scale in my case), establish clear goals, and project-manage to that goal, you won't succeed - and the hopeful resolution will be for naught.

 

So, where do you start? Be willing to question the status quo. Use business goals as a club to bludgeon waste and inefficient processes. Make sure you set achievable goals, and clearly define success and how to measure it.

 

Happy new year, and may you succeed in your new year's resolutions - both personally, and for the business.



We’re all customers now, surfing the web daily to keep updated, shop, and connect with people.  As customers, we demand two things – we want fast response times and we want every aspect of the application to work.  In fact, even though we are IT people and know how hard it is to deliver, we demand perfection from everyone else’s application.

 

Some savvy companies get it – they have figured out how to stand above the crowd with well-performing, always-available apps.  How do they do it? 


BMC Software and Gartner partnered on a newsletter designed to help you quickly understand the keys to success.  The articles introduce you to the concepts of leveraging end-user experience management and application release automation to improve DevOps collaboration – and prevent issues from being introduced into your production environments – which, of course, is fundamental to customer satisfaction.  Once you understand the fast track to improving application management, you're well on the way to helping your company lead.

 

Start the New Year off right by investing a few minutes to peruse this great content.  Read the newsletter now.
