
MainView Middleware Mgmt

3 Posts authored by: Uwe Rudloff

Right Sizing for Success

 

In my brief essay on What is Middleware? I touched on some of the advantages and consequences of shifting the complexity of software function out of applications into middleware. In Are You Situationally (Middleware) Aware? I also reflected on how the wrong sorts of automation can make life more difficult for us.

This leads us to think about right-sizing your approach to monitoring. Specifically, a top-down, bottom-up approach to following the life of a business transaction helps in developing a monitoring strategy that is closely aligned with the goals of your business customers. In a few small steps you can quickly find a route to value which your customers will love.

About Transaction Tracing

 

IT and development organizations need to understand how important business processes are being affected by the services that they provide. Application Performance Management (APM) is one area of development operations that provides another tool for service improvement. The APM paradigm identifies transaction tracing as one important part of this, and the type of tracing that is needed goes beyond a simple silo tracing approach to a multi-disciplinary one.


Tracing transactions is nothing new. Since the dawn of computing there has always been a need to trace transaction steps from either a business or technical point of view. Businesses need trace records to reliably audit business steps that have been performed in order to prove completion or compliance. Technical operations need trace records to reliably trace steps that were taken by an algorithm or service to prove correct and timely processing.

 

Frequently, tracing paradigms suffer from numerous shortcomings. They may be too verbose and performance-draining, resulting in too much irrelevant data being collected for the ultimate goal of supporting the business. Or they may be too little too late, since human intervention and expertise are required to manually interpret the results. Or the tracing may be too isolated, since an identified event recorded in one technology's trace may be difficult to relate to an upstream or downstream event in another technology's trace.

 

What is new is the need for an effective, real time, correlated view of both the business and technical service steps completed throughout all of the IT services that a business transaction touches. This multi-disciplinary tracing paradigm should be as technology-independent as possible in order to provide for maximum effectiveness. And it should be agile enough to quickly adapt to the changing business and IT landscape.

 

Operational Challenges

 

Today's sheer number of IT components, their configurations, the relationships between them, and the constant changes introduced by business and technology needs make manual tracking difficult. It is impractical and costly to conduct manual audits to maintain IT service relationship and application dependency mapping information. What is needed here is automation that gathers the required information concerning hard and soft IT assets as well as the relationships among them.


On the other hand, the sheer size and complexity of asset/software relationships can also pose a problem. The mere action of automatically mapping a set of assets is in itself meaningless unless thoughtful and meaningful context is placed around the relationships from a business point of view. Again, not everything worth knowing about in one's IT infrastructure is necessarily important enough to rise to the level of the business service view. From the point of view of the business, a "boiling the ocean" service tracing approach is not necessarily efficient or desirable.


This is where a reduction step is necessary to map one or more technical service relationships to a business service for a well-defined service impact view. Correlated, real time transaction tracing, which feeds both technical service performance and business performance metrics into a business service state view, is yet one more aspect of technical service management needed to provide the most complete business service impact picture. Clearly, it is important to understand the business, how it depends upon the underlying technical services, and how a business transaction is expected to flow through the underlying application services.

 

Tools and Methodology

What is needed is a set of tools and methodology for effectively building relevant business context around a set of technical IT services with a minimal amount of effort.


Discovery automates the process of gathering required information and populating and maintaining a Configuration Management Database (CMDB). This is done by discovering IT hardware and software and creating instances of configuration items (CIs) and relationships from the discovered data. A CI can be physical (such as a computer system), logical (such as an installed instance of a software program), or conceptual (such as a business service).
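
As a minimal sketch of these ideas (the class and field names are my own illustration, not an actual CMDB schema), discovered CIs and their relationships might be modeled like this:

```python
from dataclasses import dataclass, field

# Illustrative CI model; real CMDB schemas differ.
@dataclass
class ConfigurationItem:
    ci_id: str
    ci_type: str          # "physical", "logical", or "conceptual"
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source_ci: str        # e.g. the installed software instance
    target_ci: str        # e.g. the computer system hosting it
    kind: str             # e.g. "runs_on", "depends_on"

# One physical, one logical, and one conceptual CI, as in the text:
host    = ConfigurationItem("ci-001", "physical",   "server01.example.com")
qmgr    = ConfigurationItem("ci-002", "logical",    "QMGR.PROD.01")
service = ConfigurationItem("ci-003", "conceptual", "Claims Processing")

links = [
    Relationship(qmgr.ci_id, host.ci_id, "runs_on"),
    Relationship(service.ci_id, qmgr.ci_id, "depends_on"),
]
```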

 

The mapping step automatically relates CIs (hardware/software assets) together to form service impact relationships. This is where a reduction methodology helps to crystallize service relationships into context that is meaningful to a running business. This is where knowledge of "what is important to the business" is important for IT to know and apply to the mapping of service relationships. Emerging Big Data analytic approaches can assist in a knowledge-based, goal-oriented approach to determining what is important for effective tracing.


The mapping of a business transaction flow among application services using a correlated real time transaction tracing view completes the picture by providing expected inter- and intra-application service elapsed times and business-related data. This provides the basis for enriched alerting within service impact models in the event of any excursion from the expected technical or business metric norms.
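
As a sketch of that enriched-alerting idea (the hop names, norms, and tolerance are illustrative assumptions, not product settings), an excursion check against expected elapsed-time norms might look like:

```python
# Expected inter-service elapsed-time norms (seconds); values are illustrative.
EXPECTED_NORMS = {
    ("web_tier", "app_tier"): 0.250,
    ("app_tier", "queue_mgr"): 0.100,
    ("queue_mgr", "mainframe"): 0.500,
}

def check_excursion(hop, elapsed_s, tolerance=1.5):
    """Return an enriched alert if a hop exceeds its expected norm."""
    norm = EXPECTED_NORMS.get(hop)
    if norm is not None and elapsed_s > norm * tolerance:
        return {"severity": "warning", "hop": hop,
                "expected_s": norm, "observed_s": elapsed_s}
    return None

alert = check_excursion(("app_tier", "queue_mgr"), 0.480)
if alert:
    print(f"Excursion on {alert['hop']}: "
          f"{alert['observed_s']}s vs {alert['expected_s']}s expected")
```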


Here we outline some best practices for deciding how to select the appropriate technical and application assets and metrics to provide the appropriate context within an APM transaction tracing solution.

 

Top-Down: Four Easy Steps to Right Size Your Transaction Tracing

 

STEP 1: DETERMINE WHAT IS IMPORTANT TO THE BUSINESS

 

Just because a process exists does not mean that it is important enough to be traced.

 

Frequently, not enough thought is given to the relative importance of a process to a business. Tracing for the sake of monitoring brings no value to a business. Tracing is only valuable if it is actively used to help the business stay on top of its game.

 

A simple measure of what is important can be summarized as follows (a small scoring sketch follows the list). Determine the business processes which:

(1)  Bring the company the most revenue.

(2)  Bring the company the most profits.

(3)  Cost the company the most money in terms of

a.    Direct costs.

b.    Hidden costs due to under- or non-performance risks (monetary penalties or fines).

(4)  Have the highest risk in terms of damage to the company's reputation (human visibility) if they fail or are faulty.

(5)  Use up the most expert people's time (war-room scenario) when trying to narrow down the problem focus.

(6)  Must be audited for legal or company policy reasons.
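
As promised above, here is a minimal scoring sketch (the processes, scores, and weights are invented for illustration; this is one possible way to rank, not a prescribed methodology):

```python
# Score each business process 0-5 against the six criteria above.
# Weights are illustrative; adjust to your organization's priorities.
CRITERIA = ["revenue", "profit", "cost", "reputation_risk",
            "expert_time", "audit_required"]

processes = {
    "claims_intake":   dict(revenue=5, profit=4, cost=3,
                            reputation_risk=5, expert_time=4, audit_required=5),
    "report_archival": dict(revenue=1, profit=1, cost=2,
                            reputation_risk=1, expert_time=1, audit_required=3),
}

def importance(scores, weights=None):
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(scores[c] * weights[c] for c in CRITERIA)

# Trace the highest-scoring processes first.
for name, scores in sorted(processes.items(),
                           key=lambda kv: importance(kv[1]), reverse=True):
    print(f"{name}: {importance(scores):.1f}")
```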

 

Once the relevance of an IT-dependent business process is determined, it is much easier to focus on who will directly benefit from transaction tracing.

 

STEP 2: DETERMINE WHO WILL BENEFIT THE MOST IN THE BUSINESS

Business users can directly benefit from transaction tracing processes.

 

Determine who will benefit the most from any of the following (a small routing sketch follows the list):

(1)  Transaction history (elapsed times, payload details) for analysis.

(2)  Receiving real time alerts for underperforming transactions. This could include end-to-end as well as inter-activity elapsed times or numeric payload values exceeding numeric thresholds.

(3)  Narrowing the scope of a problem transaction run to a specific technology or runtime. This is so that the appropriate people can be alerted to the issue and provided known information for use in remediation or setting business customer expectations.

(4)  Proof of correct business transaction completion or data compliance.
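
As a small sketch of matching tracing outputs to the people who benefit from them (the audiences, event kinds, and routing rules are hypothetical):

```python
# Hypothetical routing of tracing outputs to their beneficiaries.
ROUTES = {
    "analyst":     lambda e: e["kind"] == "history",       # item (1)
    "operations":  lambda e: e["kind"] == "slow_alert",    # item (2)
    "app_support": lambda e: e["kind"] == "scoped_fault",  # item (3)
    "compliance":  lambda e: e["kind"] == "audit_proof",   # item (4)
}

def route(event):
    return [audience for audience, wants in ROUTES.items() if wants(event)]

print(route({"kind": "slow_alert", "txn": "claim-42",
             "elapsed_s": 3.8, "threshold_s": 2.0}))  # -> ['operations']
```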

 

At this point we have identified the appropriate business processes to trace. Once the direct transaction tracing users are determined, a mapping can be made of the subset of runtime assets relevant to the most important processes and people of the business.

 

STEP 3: DETERMINE WHERE THE BUSINESS TRANSACTION WILL BE EXPOSED

Once the appropriate business transaction has been identified, the specific runtimes it traverses (OS type, version, hostnames, IP addresses, and middleware types, versions, and names, if any) from the logical start to finish of the business transaction need to be identified. Further, the logical order in which the runtimes are called needs to be determined.

 

It may not be necessary or even appropriate to map every single runtime traversed for that particular business transaction. This would depend on the need (see "Who" and "What"). If eighty percent of the "What" happens in only twenty percent of the runtimes (if this can be reasonably determined ahead of time), then it would be appropriate to start with those runtimes. Another starting point is to trace only the end-to-end (first and last) runtimes, and then successively build out tracing to more granular levels in between once the classes of underperforming transactions and further suspect runtimes have been identified.

 

The most inappropriate way to trace transactions is to monitor every single transaction traversing every single runtime, no matter what business transaction it is, no matter who the audience for the tracing is, and without some guidance as to the expected order of runtime execution. This is also known as “boiling the ocean” and is not appropriate, efficient, or useful.

 

For each runtime call, the resource names being traversed need to be determined. For example, the WebSphere MQ resource names are the calling application name, queue manager name, queue names or topic strings, and channel names. For WebSphere Application Server, the resource names are the HTTP URI or servlet name.
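
As an illustrative sketch, such a tracing plan might be captured as configuration data (all names and hosts here are hypothetical; the resource categories follow the WebSphere MQ and WebSphere Application Server examples above):

```python
# Hypothetical tracing plan: which resource names to capture at each
# runtime the transaction traverses, listed in expected call order.
TRACE_PLAN = [
    {
        "runtime": "WebSphere MQ",
        "host": "mqhost01",
        "resources": {
            "application": "CLAIMSAPP",
            "queue_manager": "QMGR.PROD.01",
            "queues": ["CLAIMS.IN", "CLAIMS.OUT"],
            "channels": ["TO.MAINFRAME"],
        },
    },
    {
        "runtime": "WebSphere Application Server",
        "host": "washost01",
        "resources": {
            "http_uri": "/claims/submit",
            "servlet": "ClaimsSubmitServlet",
        },
    },
]
```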

 

Above all, what is most important in deciding where to expose the transaction is the selection of the transaction tracing toolset. What is needed in any transaction tracing solution is broad technology and OS support, not just individual silo solutions. For example, integrated transaction tracing does not end at a JEE or .Net container wall. The non-JEE and non-.Net runtimes both upstream (before) and downstream (after) the JEE or .Net container must be accessible to the transaction tracing solution, and it must span both distributed and mainframe systems. Anything less will result in a myopic "black-box" view of up- or downstream results based only on elapsed times, with no more granular or contextual contributions to analyzing the transaction flow.

 

STEP 4: DETERMINE TRANSACTION IDENTITY AND ATTRIBUTES

 

There are basically two types of data that are important to know from an exposed transaction tracing data stream. These are transaction identity and transaction payload.

 

Identifying the Transaction

Transaction identity is important to know since it is the way to uniquely distinguish one distinct transaction from another. The identity of a transaction can be any field, technical or business-related, that is exposed in the data stream. The identity should be unique enough that, for a class of transactions, it does not repeat for a distinctly separate transaction within the same class.

 

If a transaction traverses multiple runtimes (in parallel, serial, or looped fashion), it is important to have a method to "stitch" the transaction events (i.e., the appearance of the transaction at each runtime) together. The easiest, though not the usual, way is to count on the unique transaction identifier appearing at each transaction event with the same value for that distinct transaction. Depending on the scope of the transaction and how the business application was architected, this can be difficult if not impossible, since modern transactions typically span multiple runtimes running different technologies.

 

More typically, a distinct transaction identity morphs itself into a new identity, due either to one application service calling another application service, or to one runtime passing the call on to another runtime downstream. In this case a correlating transaction event must be captured to relate the two aliased transaction identities together.

 

If a distinct transaction is processed multiple times in a runtime in a looped fashion, and it is required to trace the transaction at each loop iteration, then there are two additional sub-types of transaction identity data that are important to know. These are instance data, which uniquely identifies each loop iteration, and the limiting data, which distinctly signals when the loop for that distinct transaction has finished.
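
Here is a minimal stitching sketch covering all three cases (the event format, field names, and helper function are my own assumptions, not any product's model): each event carries the identity seen at its runtime, an optional alias link captured when the identity morphs, and optional instance/limiting fields for loops. It assumes events arrive in causal order.

```python
# Hypothetical trace events. "alias_of" records the correlating event
# when the identity morphs; "instance"/"is_last" carry the loop-iteration
# and limiting data described above.
events = [
    {"id": "WEB-123", "runtime": "web"},
    {"id": "MQ-9f2", "runtime": "mq", "alias_of": "WEB-123"},
    {"id": "MQ-9f2", "runtime": "batch", "instance": 1, "is_last": False},
    {"id": "MQ-9f2", "runtime": "batch", "instance": 2, "is_last": True},
]

def stitch(events):
    """Group events into distinct transactions by resolving aliases."""
    canonical = {}                       # aliased identity -> root identity
    for e in events:
        if "alias_of" in e:
            canonical[e["id"]] = canonical.get(e["alias_of"], e["alias_of"])
    txns = {}
    for e in events:
        root = canonical.get(e["id"], e["id"])
        txns.setdefault(root, []).append(e)
    return txns

for root, trail in stitch(events).items():
    print(root, "->", [e["runtime"] for e in trail])
    # WEB-123 -> ['web', 'mq', 'batch', 'batch']
```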

Categorizing the Transaction

Identifying a transaction without context is not useful for most analytic purposes. Although time stamps can be collected and associated with a distinct transaction event, using this data in isolation without business or technical context does not necessarily help solve a performance or business problem.

 

What is more useful is to collect additional data associated with a transaction event, also known as "payload data". Again, a selective approach to collecting the data associated with a transaction event is the most effective way to trace the transaction. What payload data to collect would again depend upon the need (see "Who" and "What" above).


Collecting payload field values can be useful for classifying the business transaction in some way (by claim type or agent name, for example) for reporting and alerting. It may also be useful to collect environmental information related to the transaction event for someone involved in solving a technical problem. Another use of payload is to monitor and alert on it if the value in a payload field exceeds a threshold, or to filter classes of transactions. The entire transaction can even be flagged as failed based on payload field value(s) or threshold(s). Payload fields can also be collected and stored in history to prove compliance with laws or company policies.

Be selective when collecting payload values. For example, if the same payload value appears in two different transaction events for one distinct transaction, it is only necessary to collect it at one transaction event, not at both. The correlation through the transaction identity associates that piece of data with all of the other transaction events for that distinct transaction, so the effort of collecting it again is unnecessary.
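
A sketch of that selectivity (field names are hypothetical): capture a payload field at only one transaction event per distinct transaction, and let identity correlation associate it with the rest.

```python
# Collect each payload field once per distinct transaction; correlation
# through the transaction identity associates it with all other events.
collected = {}   # (transaction_id, field_name) -> value

def collect_payload(txn_id, field_name, value):
    key = (txn_id, field_name)
    if key in collected:
        return False      # already captured at an earlier event; skip
    collected[key] = value
    return True

collect_payload("WEB-123", "claim_type", "AUTO")   # captured at event 1
collect_payload("WEB-123", "claim_type", "AUTO")   # skipped at event 2
print(collected)   # {('WEB-123', 'claim_type'): 'AUTO'}
```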

The most inappropriate way to collect payload data is to store every piece of data that is accessible at every runtime call, for "whenever it is needed" and for "whoever may use it" in the future. This is another form of "boiling the ocean" and is not appropriate, efficient, or useful. It can require large amounts of IT resources to store the collected data, for the benefit of no one.

Bottom-Up: Enrich the Business Service Impact Model With Your Transaction Tracing Metrics

 

Once the appropriate process, people, and supporting transactions have been identified, the resulting exposed transactions, transaction categories, and metrics can then be tied back into the impact models which support the business service.

 

A service impact model should represent, at a minimum, the IT software/hardware assets which support the business services and their relationships to those services. For example, if a claims system depends upon a hardware server running an OS, then any performance degradation of the hardware or OS should map to a performance degradation of the business service.

 

This two-dimensional relationship mapping is enriched by a third dimension: the transaction flow. Not only are end-to-end, inter-, and intra-elapsed times available to the impact model, but so are any other transaction metrics which have been exposed in the payload of the transaction at any transaction event. This encompasses not only technical but also business metrics. As a result, transaction tracing brings a richer context into a service impact model without overwhelming the result.
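
As an illustrative sketch of this three-dimensional idea (the structure and names are my own assumptions, not a product model), a business service node might accept state from asset relationships and transaction-flow metrics alike:

```python
# Hypothetical impact model: a business service degrades if any supporting
# asset degrades (dimensions 1-2) or if transaction-flow metrics exceed
# their norms (the third dimension contributed by tracing).
class BusinessService:
    def __init__(self, name, supporting_assets):
        self.name = name
        self.assets = supporting_assets          # {"server01": "ok", ...}
        self.txn_metrics = {}                    # metric -> (observed, norm)

    def state(self):
        if any(s != "ok" for s in self.assets.values()):
            return "degraded (asset)"
        if any(observed > norm for observed, norm in self.txn_metrics.values()):
            return "degraded (transaction flow)"
        return "ok"

claims = BusinessService("Claims", {"server01": "ok", "QMGR.PROD.01": "ok"})
claims.txn_metrics["end_to_end_s"] = (2.9, 2.0)  # observed vs expected
print(claims.state())   # -> degraded (transaction flow)
```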

 

The Results

 

Transaction tracing is an important dimension of the Application Performance Management arena. Not only are technically-oriented tracing alternatives for a technology silo important, but a cross-platform, cross-technology, and cross-domain correlated transaction tracing alternative is also needed to add business context to an application performance problem.

 

For this, a business knowledge-based, goal-oriented approach to determining what is important to measure, followed by exposing only the relevant data which applies to these business-oriented performance measuring goals, goes a long way toward reducing the shortcomings of most of today's siloed APM tracing offerings.

 

The result is a holistic, correlated, cross-platform - from distributed to mainframe and beyond - view of critical business transactions, answering the all-important question: "What happened to my transaction?"

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comment or question, please feel free to comment and I will be happy to respond. Your feedback is important. Thank you for reading this post.

Conserving Complexity

 

In my brief essay on What is Middleware? I touched on how middleware management is a necessary component of the larger application performance management issue, how middleware management naturally needs mechanisms to follow individual business transactions, and some of the advantages and consequences of shifting the complexity of software function out of applications into middleware.

This latter effect, the shifting of complexity into middleware, leads to a discussion of how to manage this “hidden” complexity. With higher complexity comes higher risk. One critical component of managing middleware is the man-machine interface of the middleware automation itself.

Even though some of the burden of running an application depending upon multiple complex systems has shifted from application to middleware, this complexity is still there to a greater or (hopefully) lesser extent and still needs to be managed. But how do we manage this in a way which causes the least risk of disruption to the business?

Clearly one needs to have procedures and tools in place to manage changes to and operations of middleware systems. One could argue that a well-designed fully automated system requires human oversight but little intervention, whereas a poorly-designed one requires more constant human-based decision making and inputs.

But perhaps just as important as good system automation is maximizing situation awareness for the applications and operations teams (operators, administrators and users) while minimizing any unintended effects that automation can potentially cause.

So what is meant by situation awareness (SA)? Situation awareness is the perception of environmental elements with respect to time and/or space, the comprehension of their meaning, and the projection of their status after some variable has changed, such as time or a predetermined event.

In this essay, I will briefly outline symptoms of low situational awareness, and how middleware automation should be designed to counter this.

Planes, Trains and Automobiles

The concept of SA has been studied extensively and is a critical skill for decision makers operating within complex environments such as aviation, air traffic control, emergency services, military operations, and other time-sensitive systems where human decision making and appropriate operational execution by humans is the ultimate back-stop in preventing negative consequences and promoting operational efficiency. It is also critical for more ordinary activities such as driving a car or even walking down a sidewalk.

Good SA is manifested in those who are able, in real time, to perceive their environmental state, understand the meaning of that state, and correctly anticipate future potential outcomes. Poor situation awareness often leads to poor (or no) decisions and is usually one of the primary causes cited in accidents attributed to human error, with consequences including property or business loss and human casualties.

In my own main avocation, flying sailplanes (gliders) and airplanes, we are trained that good situation awareness will keep us on the "straight and level", lead to better decision making, and keep us out of trouble. I think that some of what is known about the human factors involved in the intimate interplay between flying machines and humans could also be useful in how we go about automating middleware.

Clearly, SA of the current and future state of middleware systems serving critical business function and understanding how these states affect business function is a critical skill. Just as driving a car or flying an airplane requires a “heads up” attitude with respect to current and future environmental states, managing a set of middleware runtimes to perform a set of business functions on time and within budget can be just as critical for the health of an organization.

What is your Middleware SA Score?

So, what is your middleware SA score? Put more simply, are you able to recognize when your SA score is not up to what it should be? For a few tips, look for the following conditions when managing your middleware or analyzing an application performance problem or outage:

  • Are you feeling a constant state of surprise or “behind the system”?
  • Are basic operational tasks which you normally can complete efficiently and effectively not as easy to do?
  • Are there circumstances where you would always let the system resolve a situation for you, rather than performing the steps manually, because you have not practiced that scenario enough?
  • Are you forgetting to do required tasks or not cross-checking what the system is telling you?
  • Are you entering incorrect inputs?
  • Are you incorrectly prioritizing needed tasks?
  • Are you suddenly overwhelmed with tasks at certain times?
  • Are you letting your system do all the work without knowing where it is at?
  • Do you know what three things to do, in the correct order, if the system fails right now?
  • Are you confused about the system state, or unsure of what to do next?
  • Are you deviating from Standard Operating Procedures?
  • Are you letting the system exceed critical limits?

 

If you recognize any one of these situations in your own performance then you are probably not situationally middleware aware. The risk of negative consequences due to your own mistakes or inaction is higher than it could be.

How do we counter this in the middleware realm?

Raising your Middleware SA Score

When we use middleware monitoring and automation tools we need to approach the design and the use of the tools from practical standpoints. Human factors researchers have much to say about the man-machine interaction with automation, but I will put forth a few tips I think should be considered. Some of these points may be common sense, some not so obvious.

Usability

Middleware automation should not be difficult or time consuming to turn on or off. Automation should never get in the way of the business but should always help to manage the business effectively. An example of this is when one is faced with a choice between performing a middleware management task manually and reconfiguring the automation to do it: many times it is faster to do it manually. This can be relevant in unforeseen time-critical situations.

The monitoring and automation of middleware should be used appropriately. Users should be trained to understand how the automation works and what the design philosophy behind the monitoring and automation systems are. Policies and procedures should be in place to direct users when to use middleware automation and when not to use it. Users should have a rational framework to help them decide when to use automation.

The design of the middleware automation and monitoring should take overreliance on automation into consideration. One of the factors affecting overreliance on automation is user task workload. If the middleware automation user is task-saturated with middleware monitoring or other tasks, then she will not be able to effectively “monitor the monitor” and will rely much more heavily on automation.

Overreliance on automation may lead to acts of omission or commission where, by design or misuse, a middleware monitor does not alert a user to a situation or where the middleware automation is allowed by the user to carry out an action which is inappropriate to the situation. Use of automation to completely replace manual skills can also lead to significant degradation of a user’s skills.

Feedback on middleware monitoring and automation states, actions, and intentions must be salient and informative, so that not only will a complacent user's attention be drawn to it, but she will also be able to intervene effectively. A user's involvement provides safety benefits by keeping the user informed and able to intervene.

A middleware monitoring system should also be flexible enough to allow it to be used based on the user’s roles and responsibilities, and not define the user’s role based on the monitoring design. This should take into account the need for a user’s active participation in the automation process even if it reduces the overall system performance from a fully automated standpoint.

System Trust

Middleware monitoring and automation failures, such as false alarms and inappropriate automation, can lead to a user not trusting the system and ignoring real system hazards. A monitoring threshold which is set too sensitively may result in many alarms being raised, or transient alarms being set off. Eventually, a user may tune out the alarm "noise" and in the process ignore a real hazardous situation. This is especially true when the sampling rate is very frequent.

On the other hand, an event which could be catastrophic, but is highly unlikely to occur, may not afford the user enough time to prepare for and execute corrective action. This can also be true when the sampling rate is set too low.
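
One common countermeasure to the first failure mode is to debounce the alarm, as in this sketch (the metric and parameters are illustrative assumptions): require the breach to persist across several samples before alerting, so transients do not add to the noise a user learns to ignore.

```python
from collections import deque

# Illustrative debounce: alert only when N of the last M samples breach
# the threshold, filtering the transient alarms users learn to tune out.
class DebouncedAlarm:
    def __init__(self, threshold, window=5, required=3):
        self.threshold = threshold
        self.samples = deque(maxlen=window)
        self.required = required

    def observe(self, value):
        self.samples.append(value > self.threshold)
        return sum(self.samples) >= self.required

alarm = DebouncedAlarm(threshold=90.0)          # e.g. queue depth percent
for depth in [95, 40, 96, 97, 98]:
    if alarm.observe(depth):
        print(f"ALERT: sustained breach at {depth}")  # fires at 97 and 98
```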

Users also need to be aware that their decision making may contain biases and they should be trained on how to recognize this. Both user interpretation of, and use of, middleware monitored values can exhibit human biases when making decisions.

For example, the bias of representativeness may falsely lead a user to believe that, because a situation looks similar to or follows a familiar pattern from a previously experienced situation, the outcome or likelihood of success will be the same as before. This fails to take into account the unique circumstances and actual facts of the current situation.

Sometimes, when other independent indications contradict the primary monitoring status (and subsequent automated action), users can be biased towards trusting the primary monitoring indications, and subsequently allow inappropriate actions and future middleware states as recommended or carried out by the automation.

Nag and Gag, or Nudge into Action?

What is needed is a balance between the "noise" and the "silence before the crash". A good middleware monitoring and automation system will provide not only the ability to set individual sample rates but will also use the monitored information to assign likelihoods or probabilities that the middleware will encounter hazardous states in the near future. This should give the middleware automation user a list of reasonable choices which will allow them to arrive at a good decision in a timely manner. A very good middleware monitor will also provide the breadcrumbs to show how it arrived at such a conclusion.
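
As a toy sketch of that idea (the signals, weights, and recommended actions are invented for illustration, not a real scoring model): estimate the likelihood of a hazardous near-future state, offer ranked choices, and keep the breadcrumbs that justify them.

```python
# Toy hazard estimator: combines monitored signals into a likelihood of a
# near-future hazardous state, keeping breadcrumbs explaining the score.
def assess(queue_depth_pct, depth_trend_per_min, channel_retries):
    score, breadcrumbs = 0.0, []
    if queue_depth_pct > 70:
        score += 0.4
        breadcrumbs.append(f"queue {queue_depth_pct}% full")
    if depth_trend_per_min > 0:
        score += 0.3
        breadcrumbs.append(f"depth rising {depth_trend_per_min}/min")
    if channel_retries > 0:
        score += 0.3
        breadcrumbs.append(f"{channel_retries} channel retries")
    choices = (["restart channel", "pause producers", "page on-call"]
               if score >= 0.7 else ["monitor", "raise sampling rate"])
    return min(score, 1.0), choices, breadcrumbs

likelihood, choices, why = assess(85, 120, 2)
print(f"hazard likelihood {likelihood:.0%}; options: {choices}; because: {why}")
```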

To sum up, here are the points I just addressed:

  • Middleware automation should not be difficult or time consuming to turn on or off.
  • Automation should never get in the way of the business.
  • The monitoring and automation of middleware should be used appropriately.
  • Users should be trained to understand how the automation works.
  • Users should never be totally reliant or complacent on automation.
  • Monitoring should involve active user participation in the automation process.
  • Feedback from automation must be salient and informative.
  • Users should never be overloaded with tasks because of the automation.
  • Monitoring design should conform to the user role.
  • Middleware automation should provide the user with a list of choices for good decision making.
  • Middleware monitors should make it easy to understand how they arrived at a recommended action.

 

The bottom line is that middleware monitoring and automation should never cloud a user's situation awareness, but enhance it.


What do you think? Are human factors in managing middleware complexity with or without middleware automation an issue in your organization or your own work? I would be interested to hear your experience.

In my next essay I will touch on how some of this is addressed by BMC.

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comment or question, please feel free to comment and I will be happy to respond. Your feedback is important. Thank you for reading this post.

What is Middleware?

Posted by Uwe Rudloff Aug 19, 2013

What is Middleware?

Well, we hear this term thrown around a lot. I have been in the software industry for many years, and looking back on it I have developed my own generalized definition. Maybe you will agree.

Middleware is any software, other than the operating system, that mediates interactions between any two other software entities. The goal of software mediation in this context is to simplify or leverage the work of the developer and to optimize the use of system resources. A side effect of providing software mediation is the added ability to provide points of access for monitoring and control. More on that later.

With this definition we can see that many forms of middleware have been around since before the term middleware was invented. IBM's DB2, CICS, and IMS are a few examples. More recently, "application servers" which provide object-oriented application serving environments, such as IBM WebSphere and Microsoft .Net, are what come to mind when people say "middleware". Messaging middleware, such as IBM WebSphere MQ and TIBCO EMS, is another form. There are many more forms of middleware which I will not go into.

Glue or goo?

Middleware has become ubiquitous in modern IT establishments. Long gone are the days when a single machine hooked up to hard-wired workstations time-shared its computing resources. Today the marriage of the new and the old is the norm. The glue to the marriage is middleware. Important stuff, these marriages.

Although middleware is there to simplify someone's work, it comes at a price. I call it the law of conservation of complexity. What happens is that the complexity is buried a level below the developer's work, but it is not entirely eliminated. Call it vertical complexity. Middleware is still a piece of software, subject to all the same rules of software development and constrained in resource use like any other piece of software. In the end, middleware is ultimately a product of human invention, ultimately managed by people and subject to failure. No magic bullet, just software.

We commonly hear the term "set it and forget it" when talking about using middleware. Nice concept but ultimately unrealistic in practice. A developer may set it. But a production manager will certainly not forget it.

For example, messaging middleware makes using a network much easier for a developer. Essentially a network is used to send messages to remote machines. Messaging middleware makes the job of message delivery easier for the developer by taking care of the networking detail drudgery (did I send it? did it get there? do I need to re-send it?) on behalf of the sender and receiver. That way the developer need not re-invent the programming mistakes of those that went before him. The sending and receiving software need not even know who is doing the sending or receiving, or even when it is happening. Magic!

On the other hand, being another piece of software, messaging middleware uses system resources like any other software. It has taken over the responsibility (and complexity) of handling message delivery in real time. It needs system and network resources to work. It needs to queue messages if the network or receiving application is not working fast enough (or is down). It needs to handle the messages in a way that will guarantee delivery. It needs to have mechanisms in place to recover if things go wrong. Plus, the messaging runtime may be doing its thing for not just one, but possibly many, messaging-oriented applications. Normal system software stuff. No, Otto, this is not a PC which you can re-boot in order to recover. There is important stuff in these messages!
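
To make this "normal system software stuff" concrete, here is a toy sketch (deliberately not any real messaging API) of the drudgery the middleware shoulders: accepting messages while the receiver is slow or down, retrying delivery, and keeping undelivered messages queued rather than losing them.

```python
import queue

# Toy message broker. It queues messages when the receiver is slow or
# down, and redelivers until acknowledged: the "did I send it? did it
# get there? do I need to re-send it?" drudgery handled on behalf of
# the sender and receiver.
class ToyBroker:
    def __init__(self):
        self.q = queue.Queue()

    def put(self, msg):
        self.q.put(msg)                 # sender returns immediately

    def deliver(self, handler, max_attempts=3):
        undelivered = []
        while not self.q.empty():
            msg = self.q.get()
            for attempt in range(1, max_attempts + 1):
                if handler(msg):        # handler acks by returning True
                    break
                print(f"attempt {attempt} failed for {msg!r}")
            else:
                undelivered.append(msg)
        for msg in undelivered:
            self.q.put(msg)             # not lost: kept queued for recovery

broker = ToyBroker()
broker.put("claim-42")
flaky = iter([False, True])             # receiver fails once, then succeeds
broker.deliver(lambda m: next(flaky))
```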

Enter the Transaction Followers

In a relatively simple scenario, applications A and B use messaging middleware to work together. However, reality is more complex than this. A typical "marriage of the old and new" involves many more middleware mediators, from web to application, transformation, enrichment, validation, synchronization, routing, messaging, and legacy mainframe transaction processing and database subsystems. Here we have horizontal complexity. An assembly line of runtimes helping to complete one real time business interaction needs real oversight in real time, and not just with up/down indications after five minutes of rumination. If the business interaction fails, how do we localize the problem runtime that it depends upon? Even if all runtimes are available, business interactions can still fail due to issues happening at a lower logical level within a runtime. From the perspective of a business application owner, this assembly line of systems may be hidden from view, but the problem becomes very visible when something goes wrong.

Diving in Deep

Even worse, each middleware runtime involved in a step of getting from the "new" to the "old" may be simultaneously handling multitudes of requests from different sources and heading for different destinations. Not just one business application owner may be affected, but quite a few more. So a problem triggered by one request may affect the successful outcome of any other pending request. How does one know which straw broke the camel's back? Like I said, a developer may set it, but the production manager will certainly not forget it. Without something in place to accurately and effectively measure how a transaction is using middleware runtime resources, you are running blind.

From Middleware Management to Application Performance Management

As luck would have it (you knew this had to have a happy ending), middleware software systems have facilities that provide paths to monitor, measure, and control their activities. This is where middleware management software latches on to help diagnose and remediate problems that may be happening in the middleware runtimes. Middleware management software is a necessary component of addressing the broader application performance management problem. It provides key technical and business metrics originating in the middleware software systems that participated in fulfilling a business application transaction. We'll get more into this in a later treatise.

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comment or question, good or bad, please feel free to comment and I will be happy to respond. Your feedback is important. Thank you for reading this post.
