
According to the Market Share Analysis: IT Operations Management Software, Worldwide, 2012 (ID: G00249133) report published by Gartner in May 2013, the 2012 application performance monitoring (APM) market is over $2 billion, growing at 6.5%, while the availability and performance (A&P) monitoring market (IT infrastructure) is $2.8 billion, growing at 7.6%. Even though financial reporting for these two areas is split, in usage terms the objective is to combine them so IT organizations can understand how the IT infrastructure impacts business applications and vice versa. Combined, these two monitoring focus areas become the largest IT management market segment, with over 25% of the $18B total market. To put this into further perspective, the joint APM/A&P revenues (~$4.8B) exceed configuration management, the second largest market segment, by over $1B, and configuration management is growing at a slower rate (6.3%) than either APM or A&P monitoring.

Large or small, service provider, telco, SMB or enterprise, everybody has monitoring, so I find it amazing that it is the largest and among the highest-growth markets. Monitoring remains one of the most fragmented IT management spaces, with tools from dozens of vendors ranging from free to hundreds of thousands of dollars. Remaining relevant demands constant innovation in many forms, including event collection, event consolidation, event processing, event reporting, ease of use, low complexity, product delivery, and product pricing and licensing.


monitoring is not the same
When thinking of monitoring, an image that comes to mind is NASA and a moon launch: dozens of people intently watching monitors, anxiously looking for irregularities and working closely with each other to identify potential issues that may impact mission success and the safety of the astronauts. Each person may have a different view of the health of the mission, but close collaboration between team members ensures a holistic view is understood at all times, and as priorities change at various mission stages, so does the attention.

The information displayed on each monitor is continually analyzed and correlated with other sources, with the objective of seeking out potential issues that an individual monitor may not make clear. NASA monitors missions with the assumption that something will go wrong, which demands an immediate response to remediate the problem and ensure the success of the mission.


putting too much emphasis on the tools
For decades IT professionals have used monitoring products to provide visibility into the health of IT. Typically, the IT infrastructure is monitored in fragmented pieces, with disparate, non-collaborative teams all getting different views of IT health. For many, monitoring happens only when resources are available or an issue is reported. Unlike NASA, most IT organizations assume everything is fine and look to monitoring to confirm an outage and provide root-cause analysis.

IT continues to fragment and increase in complexity, driving many to employ more monitoring tools in an attempt to provide greater clarity on overall IT health. However, instead of making things easier to understand, this creates additional challenges, with each IT support organization providing increasingly different and potentially conflicting views of the health of the IT infrastructure. IT organizations using dozens of monitoring tools covering every aspect of their IT environment have limited ability to clearly identify issues and the impact they have on the business. With each IT support team looking through different monitoring lenses, gaining a holistic, trusted view becomes almost impossible.


avoid liability and attribute blame
When business is impacted by an IT issue, many organizations bring together the different IT support teams to help identify the cause, understand how it was detected and make sure it doesn't occur again. Even though IT executives do this to pacify the business and assure it of IT's competency and value, each IT support organization will use its monitoring tools as evidence with which to prove either that the issue was not theirs or that it was identified and resolved in line with company policy and service levels. This behavior changes monitoring from a proactive, issue-avoidance practice to one used to prove innocence and assign blame.


infrastructure availability does not equal application availability
Routinely, IT support organizations use the statistics gathered by their monitoring tools to show effectiveness, IT availability and business value. Each IT component is monitored to a policy derived from how each IT team associates value with the component. The traditional 99.9% up-time metric is still used by IT operations as a way to show IT availability. Unfortunately, the business does not equate availability with how each component is functioning, preferring instead to measure IT value against application performance and the support the IT organization provides.
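To see why the "nines" metric can mislead, it helps to translate an availability percentage into the downtime it actually permits. The sketch below is illustrative only; the percentage tiers are standard availability figures, not numbers from this article, and component-level 99.9% still says nothing about how the application felt to its users.

```python
# Illustrative: translate an "N nines" availability figure into allowed downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of permitted downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_minutes_per_year(pct):.0f} min/year")
```

At 99.9%, a component may be down roughly 526 minutes a year and still "meet policy", yet those minutes could all land during a business-critical window, which is exactly the gap between component availability and business value.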


These two viewpoints create confusion and conflict, with IT support teams unable to comprehend that the business does not care about the individual health of each IT component. A business manager will assess the value of the IT organization based on the opinions and input of the people who consumed the IT resource, not on a mountain of confusing, irrelevant technical detail that conflicts with the IT consumer experience. In some cases this situation will drive the business to seek alternative IT providers for new applications and IT services.


how much are service quality problems costing business?
While IT monitoring is employed in every business, it is rarely used effectively. Although monitoring tools are designed to provide proactive warnings of current and potential issues, their effectiveness can only be realized when they are used to show business impact, augmented by an organization focused on proactive monitoring practices and collaborative teamwork. In addition, in today's world of highly distributed applications, cloud services and mobile devices, it is no longer valid to assess IT effectiveness by monitoring only the health of the data center.


monitoring evolved
Even though monitoring tools, practices and approaches continue to be updated, it's an evolution, not a set of dramatic changes. In the 1990s the focus was on data center elements because, for many, that is where the majority of IT resources were. Over time, the need to understand how IT resources were being provided moved monitoring from basic availability to measuring performance, along with a set of processes and best practices to ensure specific outages and IT service degradations did not recur.

More recently, monitoring has evolved in multiple directions. The dynamic nature of the IT infrastructure demands that monitoring keep up with constant change and shifting business priorities. This demand has created a new set of monitoring tools that dynamically discover IT components, establish relationships through various communication methods and map, in real time, how IT resources are used in support of the changing needs of the business. The highly distributed and fragmented IT infrastructure created demand for tools that can actively search and associate disparate data from disparate sources and then provide, through analysis, information on IT health that could not be achieved with more traditional monitoring approaches. Lastly, the way business consumes IT has forced many IT organizations to focus on the end-user experience. Only by focusing on how end-users consume IT resources will the IT organization be able to fully understand and support the business.
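The relationship mapping described above can be pictured as a dependency graph: once components and their relationships are discovered, a failing component can be traced to every service it supports. This is a minimal sketch with invented component and service names; real discovery tools build these relationships automatically from network and configuration data rather than hard-coding them.

```python
# Sketch: map an IT component failure to the business services it could impact.
# Edges point from a component to the things that depend on it.
# All names are invented for illustration.
depends_on_me = {
    "db-server-1": ["order-app"],
    "vm-host-3":   ["db-server-1", "web-tier"],
    "web-tier":    ["order-app", "support-portal"],
}

def impacted_services(component: str) -> set:
    """Walk the dependency graph to find everything a failure could reach."""
    impacted = set()
    stack = [component]
    while stack:
        node = stack.pop()
        for dependent in depends_on_me.get(node, []):
            if dependent not in impacted:
                impacted.add(dependent)
                stack.append(dependent)
    return impacted

print(impacted_services("vm-host-3"))
```

The point of the real-time mapping tools the paragraph describes is that this graph is rebuilt continuously as the infrastructure changes, so the answer to "what does this outage affect?" stays current.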


Wrap-up
Business expectations of IT performance and availability continue to rise. IT and business are synonymous. The impact IT has on business means executives continually evaluate the support and services provided by the IT organization and assess ways to improve. For the business, IT value is a very easy metric to measure: it comes down to availability, performance, responsiveness, flexibility and support. IT is no longer a specialist art form. For many it is akin to breathing in and out; it's taken for granted. As such, it should not inhibit how IT consumers do their jobs or how a company performs and remains competitive. If IT is seen as a barrier to success, businesses are increasingly willing to consider alternative service providers.

Dynamic IT infrastructures challenge the ability to monitor effectively using technology unable to keep up with change.
IT is in a constant state of flux, ranging from virtual resources being provisioned, updated and removed to IT consumers using different devices, communication mechanisms and locations to access IT services. Understanding how IT supports the business requires technology able both to provide end-to-end service visibility and to maintain an accurate view of IT as constant change occurs. In addition, providing a business-oriented view of IT requires technology that can map IT resource relationships and application communication paths from the database all the way to the IT consumer. Achieving this requires new technology that dynamically views IT resources as applications and services, not individual components.


There is such a thing as too much data. Even though specialist monitoring tools are needed to provide a deeper, more granular view of specific IT components, pulling this level of data from multiple sources can make identifying business issues a daunting task. High volumes of disparate event data create confusion and conflict, demanding technology that consolidates, correlates and prioritizes issues in line with how the business consumes IT services.


The power of the IT consumer. It is critical that IT organizations monitor how IT consumers experience IT services. IT consumers not only influence how the business assesses the value of IT but continue to push the boundaries of how IT services are chosen and used. IT consumers no longer use IT resources only from the corporate data center. The value of IT is assessed as an overall experience, no matter where applications are sourced, what access methods are used or where support is located. The only way to fully understand how the business views IT services is to monitor how IT consumers use IT.