
Once upon a time, a little programmer wrote a letter to Santa. The letter never arrived, and the little programmer was brokenhearted. Santa was not pleased: he wanted everyone who believed in him and wrote to him to be happy. If only Santa had a way to track those letters and be alerted when one got lost, or when some elf fell asleep and didn't process it.


Well, if Santa talks to his BMC account team, they could offer him the world-class BMC Application Performance Management solution, and his problems would be solved.


Interested in how our APM solution works? Watch the attached demo and then contact your BMC account team to learn more.


BATT Introduction


COMPANY XYZ is a hypothetical company providing a full range of highly competitive financial products.  As a long-term BMC customer with a need for transaction tracing, COMPANY XYZ invited BMC to come and share the BMC Middleware Management Application Transaction Tracing (BATT) vision; the engagement ended with COMPANY XYZ deciding to pursue transaction tracing with BMC.


BATT Planning


COMPANY XYZ had told BMC the need for transaction tracing was "becoming more and more critical" because the company was in a "constant state of flux".  There were two very specific issues involving transaction tracing:


1. COMPANY XYZ Finances: This is the core application behind the COMPANY XYZ online presence.  Transaction throughput problems were infrequent; however, when transactions did slow down, the issue had widespread corporate visibility.  COMPANY XYZ needed a transaction tracing tool to quickly determine when a transaction response time issue existed and to pinpoint where in the transaction flow things were slow.

2. COMPANY XYZ interfaces to external WebSphere MQ clients: Periodically, COMPANY XYZ needs to send a transaction to an external vendor in the form of a WebSphere MQ message.  Many times the external vendor will insist it never received the MQ message.  COMPANY XYZ had no way of determining if or when the MQ message in question was sent.


BATT Implementation Overview


Since both COMPANY XYZ issues were transaction related, we decided to kill two birds with one stone: we created a single BATT transactional model that captured the COMPANY XYZ hop-to-hop response time and also showed when MQ messages were delivered to the external vendor.


BATT Technical Implementation Details


There were two primary technical needs in implementing BATT at COMPANY XYZ.  First, we needed to define which COMPANY XYZ transaction to trace.  We wanted a transaction that starts in the distributed world and reaches the z/OS (mainframe) operating system; the "ABC" transaction was selected.  (ABC was also the name of the associated z/OS CICS transaction ID.)  Second, we needed to find something in the ABC transaction that we could trace across all the silos of technology: WebSphere Application Server (WAS), WebSphere MQ (MQ), and CICS.  By dumping the transaction at each point (WAS, MQ, and CICS), we determined the COMPANY XYZ customer account number was present in all three locations.  This was perfect, because we could now trace the transaction across all three technology stacks.

Now that we knew the technology stacks involved, we needed to understand how the transaction technically flowed across them.  Here is a summary of the technical flow:


1. An MQ message is PUT on a D/S MQ publish queue
2. The message is transmitted to the MVS MQ input queue (via an MQ cluster channel)
3. CICS reads the message from the MVS queue
4. CICS executes the ABC transaction
5. CICS returns an MQ message to the D/S reply queue


By positioning a BATT interception node at the first MQ publish queue and at the last MQ reply queue, we can tell the overall MQ response time.  This overall response time traces from the distributed MQ queue to the z/OS MQ queue and back to the distributed reply queue.  Tracing this route lets us quickly determine whether there is a response time problem and whether it is z/OS based or distributed based.
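The arithmetic behind this determination is simple. As a sketch (the timestamps and hop names below are invented for illustration, not actual BATT output), the overall response time and the slowest leg fall out of the intercepted PUT/GET timestamps, correlated by the customer account number:

```python
from datetime import datetime

# Hypothetical hop timestamps captured at each interception point for one
# ABC transaction, correlated by the account number present in every hop.
hops = [
    ("distributed publish queue (PUT)", datetime(2013, 1, 1, 12, 0, 0, 0)),
    ("z/OS input queue (arrival)",      datetime(2013, 1, 1, 12, 0, 0, 150000)),
    ("CICS reply (PUT)",                datetime(2013, 1, 1, 12, 0, 2, 150000)),
    ("distributed reply queue (GET)",   datetime(2013, 1, 1, 12, 0, 2, 300000)),
]

def hop_times(hops):
    """Return (overall seconds, per-leg [(leg name, seconds)]) for one transaction."""
    overall = (hops[-1][1] - hops[0][1]).total_seconds()
    legs = [(f"{a[0]} -> {b[0]}", (b[1] - a[1]).total_seconds())
            for a, b in zip(hops, hops[1:])]
    return overall, legs

overall, legs = hop_times(hops)
slowest = max(legs, key=lambda leg: leg[1])
```

In this made-up run, the z/OS leg (input queue to CICS reply) accounts for 2.0 of the 2.3 seconds, which is exactly the distributed-versus-z/OS determination described above.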

So far, we have identified the ABC transaction as the target transaction.  We have identified the technology stacks used by the ABC transaction.  We have the ability to identify if the transaction response time issues are distributed based or z/OS based.  Now we need to identify the core technical details of the transaction.  These details are:


1. The name of the distributed publication queue: the BATT interception node needs to know which MQ queue to listen to for the ABC transaction.
2. The name of the distributed reply queue: the BATT interception node listens here as well and captures the time stamp of the returning transaction.
3. The transaction size: BATT stores the transaction in the BATT database, so knowing the transaction size lets us size the database effectively.
4. The transaction rate: likewise, knowing the transaction rate lets us size the BATT database effectively.
5. The CICS transaction ID: this tells MainView Transaction Analyzer exactly which z/OS transaction(s) BATT is interested in.
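Items 3 and 4 combine into a straightforward sizing calculation. The figures below are placeholders for illustration only, not COMPANY XYZ's actual numbers:

```python
# Rough trace-database sizing from average transaction size and rate.
avg_txn_bytes = 4 * 1024      # assumed average stored transaction size
txns_per_second = 50          # assumed sustained transaction rate
retention_days = 30           # how long traced transactions are kept

bytes_needed = avg_txn_bytes * txns_per_second * 86400 * retention_days
gib_needed = bytes_needed / 2**30
print(f"~{gib_needed:.0f} GiB needed for {retention_days} days of retention")
```

Doubling either the message size or the rate doubles the storage requirement, which is why both numbers are collected up front.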


BATT Value


What does all this transaction tracing mean?  It means COMPANY XYZ can be alerted when its transactions are not meeting their service level agreements.  It means that when there is a transactional response time issue, COMPANY XYZ will know exactly where (distributed or z/OS) the impediment is.  And it means COMPANY XYZ will know when MQ messages are sent to its outside vendors.


The Icing on the BATT cake


As COMPANY XYZ was reviewing the response times of their transactions, something unusual began to appear in the BATT transaction table.  The table showed the transactions slowing after every four to five MQ messages sent to z/OS.  After investigating, we determined the MQ messages were being intentionally slowed (by a COMPANY XYZ JVM-based application) while being placed on the MQ publication queue.  This was a COMPANY XYZ throttling mechanism implemented years ago that most people had forgotten about.  The bottom line: BATT confirmed the COMPANY XYZ transactions were indeed slow, showed why, and allowed COMPANY XYZ to change the JVM-based application accordingly.
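A pattern like "slow after every Nth message" is easy to miss by eye but easy to test for once the response times are in a table. Here is a hypothetical sketch (the latencies are invented, not actual BATT data) of how such a periodic slowdown can be detected:

```python
from statistics import mean

# Hypothetical per-message MQ response times (seconds). Every 5th message
# is slow, mimicking a throttling mechanism in the putting application.
latencies = [0.05, 0.05, 0.06, 0.05, 1.20,
             0.05, 0.06, 0.05, 0.05, 1.25,
             0.06, 0.05, 0.05, 0.06, 1.18]

def periodic_slowdown(latencies, max_period=8, factor=5.0):
    """Return the period N if every Nth message is much slower, else None."""
    for period in range(2, max_period + 1):
        # Split samples by position modulo the candidate period.
        groups = [[lat for i, lat in enumerate(latencies) if i % period == r]
                  for r in range(period)]
        slow = max(groups, key=mean)
        rest = [lat for g in groups if g is not slow for lat in g]
        if rest and mean(slow) > factor * mean(rest):
            return period
    return None
```

On the invented data above this returns 5, matching the "every four to five messages" symptom that led to the discovery of the forgotten throttle.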


The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comment or question, please feel free to respond; your feedback is important. Thank you for reading this post.


BMC Middleware Management (BMM) is a family of three solutions from BMC Software.  The solutions of the family are BMM-Monitoring, BMM-Administration, and BMM-Transaction Tracing.  One product in the BMM-Monitoring solution is BMC Performance Manager for WebSphere Business Integration.


The white paper below outlines the basic best practices for using BMC Performance Manager for WebSphere Business Integration.  Please download it and enjoy!


Thank you,


Ross Cochran


Sure, you’re already using BMC Middleware Management. But are you aware of everything it can really do?


Join us at the Middleware Management Lunch and Learn hosted by the BMC Middleware Management (BMM) group to discover tips and tricks and get a better understanding of new roadmaps and technology.


This pre-Halloween event will take place on October 30th from 12 PM - 5 PM CST.


We’ll also take an in-depth look at the features and functions of the latest versions of BMC Middleware Management (BMM), BMC Application Transaction Tracing (BATT), and BMC Middleware Administration (BMA), and how they can help you:




  • Ensure application performance across all your platforms
  • Speed troubleshooting and facilitate audits by logging all user activity
  • Improve productivity with simplified transactions
  • Automate and proactively manage your middleware environment
  • Reduce costs through reliable monitoring and secure self-service


You’ll also be able to network with peers and hear how they’re managing their middleware. We hope to see you there.




12:00 pm – Lunch & Networking

1:00 pm – BMM and BATT 7.0 presentation and demo – Terry House (including integration into BPPM and EUEM)

2:30 pm – BMA presentation and demo – Terry House (secure self-service middleware administration)

3:30 pm – Roadmap presentation – April Hickel

4:00 pm – Open forum


Can’t join us in person? We will be providing a WebEx link so you can join remotely. Please share this invite with anyone in your organization you think would benefit by attending.


Don't have BMM today? Join us and learn about BMM from the experts, our customers!


The attached document has the details and registration information.


Costumes are optional!






For many years IBM has supplied an MQ exit for gathering MQ performance statistics.  This has been great for MQ in the distributed world, but not so great for MQ in the z/OS world, because IBM does not support the MQ API exit on z/OS.  Do not fear!  BMC is here!  For over a decade BMC Software has supported an “MQ API-like” exit on z/OS.  Over that decade this z/OS functionality, called the BMC Software Extension for WebSphere MQ (a.k.a. MQE), has gathered MQ statistics in many of the world’s Fortune 500 companies. (Note: the MQE in MainView for WebSphere MQ and the BMC Middleware Management WebSphere MQ extension are separate technologies, but related in similar functionality.)




This paper will cover some of the operational aspects of the MQE:


  1. The history of MQE
  2. The MQ performance metrics gathered with MQE beyond standard MQ monitoring
  3. The role of MQE statistics in MainView Transaction Analyzer
  4. The MQE MQ API trace facility
  5. How to allow the MQE to intercept MQ API calls



The History of MQE


In the mid-1990s, as WebSphere MQ management was increasing in popularity and as BMC was known in the industry for out-of-the-box thinking (such as the 3270 Optimizer product), BMC began developing a more robust offering in MQ management.  BMC already had an industry-leading MQ management solution with PATROL for MQ, but wanted a go-deeper solution.  This new solution was an interception technology developed from the roots of Ultraopt for VTAM, and it let BMC gather MQ statistics no other vendor could deliver.  The resulting development yielded the ability to intercept MQ MQI API calls.  By intercepting the MQ calls, BMC could see all the MQ application issues associated with the management of MQ.  This interception technology was accomplished with BMC proprietary coding (read: no IBM exit point).  It was ported to both the distributed world and the z/OS (then OS/390 and MVS) world.  Since then, IBM has replicated the BMC proprietary technique in an IBM MQ exit, and at that point BMC migrated its proprietary interception approach to the IBM MQ exit.  However, the z/OS MQ interception technology has remained core BMC technology, since IBM has never published an equivalent exit on z/OS.


The MQ performance metrics gathered with MQE beyond standard MQ monitoring


As noted earlier, BMC was a leader in MQ management for both distributed systems and z/OS.  So what was left to add to basic MQ monitoring?  What was left were the statistics available from intercepting an MQ application’s calls to the MQI.  By intercepting these calls, BMC could determine how “busy” a given application was with MQ.  This new type of performance statistic could be passed to MainView for setting thresholds and associated alerts.  You may be asking: what performance statistics (seen in MainView views) came from this new MQ interception technology?  Let’s review a few:


The APST view in MainView for WebSphere MQ provides performance details on applications accessing the queue manager.  In this view, look for a high GET/PUT rate; this could be a sign of a runaway MQ application.  Also look for a high object count; this could be a sign of poor MQ application programming.


The MQEST view tracks all open (PUT/GET) attempts by each MQ object.  It makes it easy to see the busiest MQ objects from an MQI viewpoint (regardless of MQ message size).  For example, suppose you know a specific queue is used to trigger a specific CICS transaction and you want to see whether CICS has been issuing the correct number of PUT and GET calls; this can be a good indicator of whether CICS is processing the right number of business transactions.
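That kind of sanity check is easy to express. Below is a generic sketch (the counts are invented; this is not MQEST output or a BMC API) of comparing PUT and GET counts on a trigger queue against the expected business volume:

```python
def put_get_balance(put_count, get_count, expected, tolerance=0.02):
    """
    Compare PUT/GET counts on a trigger queue against the expected number
    of business transactions. Returns human-readable findings; an empty
    list means the queue looks healthy.
    """
    findings = []
    if put_count != get_count:
        # Puts not matched by gets suggest a backlog or lost consumer.
        findings.append(f"backlog or loss: {put_count} PUTs vs {get_count} GETs")
    if abs(put_count - expected) > expected * tolerance:
        # Volume far from the expected business rate suggests upstream trouble.
        findings.append(f"volume drift: {put_count} PUTs vs {expected} expected")
    return findings

# 10,000 transactions expected, but CICS only issued 9,400 GETs:
issues = put_get_balance(put_count=10_000, get_count=9_400, expected=10_000)
```

A single unmatched-count finding like this is exactly the "is CICS processing the right number of business transactions" question the MQEST view lets you answer at a glance.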


The MQITRACE view allows you to see the API calls each application is making to MQ.  Each call also has the associated detail for each MQI call (including MQ return codes).  This is very handy in finding an MQ application that is not getting along with MQ.


This concludes the review of the top MQE views.  There are many other MQE related views available.


The MQE statistics role in MainView Transaction Analyzer


The MQE statistics feed the MQ portion of MainView Transaction Analyzer (MV/TA); thus allowing MV/TA to trace all the PUTs and GETs involved in a given transaction.  To correlate WebSphere MQ components when MainView Transaction Analyzer is running, BMC Software Extensions for WebSphere MQ (MQE) must be installed on the appropriate queue managers and must be writing to the LOGSPACE.


The MQE MQ API Trace Facility


Ever wonder what MQ is doing at an MQI/API level?  Who is doing all the PUTs and GETs to MQ?  With the MQE feature enabled, the MQITRACE view displays MQ API trace records in MainView for WebSphere MQ.


Tracing and viewing the calls of every MQ application could be a daunting task; however, you can limit your trace views: choose which jobs to trace with the TRJOB option, which applications with the TRAPPL option, which queues with the TRRESOLVED option, and whether to capture only exceptions with the TREXCEPTION option.


How to allow the MQE to intercept MQ API calls


My colleague Gregg Tuben recently wrote an excellent blog post and video on how to dynamically enable the MQE statistics.  Read about it here: MQE.




Once you use the MQE facility, you will never look back.  You will wonder how you managed WebSphere MQ for all these years, without it.




Ross Cochran


Do you use BMC Extensions for WebSphere MQ (MQE)? Do you hate that you have to recycle MQ to change some options or install new versions of the code? This short video will show you how to use the MQE action in MainView for MQ to install or replace MQE without stopping the queue manager.


Conserving Complexity


In my brief essay on What is Middleware? I touched on how middleware management is a necessary component of the larger application performance management issue, how middleware management naturally needs mechanisms to follow individual business transactions, and some of the advantages and consequences of shifting the complexity of software function out of applications into middleware.

This latter effect, the shifting of complexity into middleware, leads to a discussion of how to manage this “hidden” complexity. With higher complexity comes higher risk. One critical component of managing middleware is the man-machine interface of the middleware automation itself.

Even though some of the burden of running an application depending upon multiple complex systems has shifted from application to middleware, this complexity is still there to a greater or (hopefully) lesser extent and still needs to be managed. But how do we manage this in a way which causes the least risk of disruption to the business?

Clearly one needs to have procedures and tools in place to manage changes to and operations of middleware systems. One could argue that a well-designed fully automated system requires human oversight but little intervention, whereas a poorly-designed one requires more constant human-based decision making and inputs.

But perhaps just as important as good system automation is maximizing situation awareness for the applications and operations teams (operators, administrators and users) while minimizing any unintended effects that automation can potentially cause.

So what is meant by situation awareness (SA)? Situation awareness is the perception of environmental elements with respect to time and space, the comprehension of their meaning, and the projection of their status after some variable has changed, such as the passage of time or the occurrence of a predetermined event.

In this essay, I will briefly outline symptoms of low situational awareness, and how middleware automation should be designed to counter this.

Planes, Trains and Automobiles

The concept of SA has been studied extensively and is a critical skill for decision makers operating within complex environments such as aviation, air traffic control, emergency services, military operations, and other time-sensitive systems where human decision making and execution are the ultimate backstop in preventing negative consequences and promoting operational efficiency. It is also critical for more ordinary activities such as driving a car or even walking down a sidewalk.

Good SA is manifested in those who are able, in real time, to perceive their environmental state, understand its meaning, and correctly anticipate future outcomes. Poor situation awareness leads to poor (or no) decisions and is usually one of the primary causes cited in accidents, property or business loss, and human casualties attributed to human error.

In my own main avocation, flying sailplanes (gliders) and airplanes, we are trained that good situation awareness will keep us on the "straight and level", lead to better decision making, and keep us out of trouble. I think that some of what is known about the human factors involved in the intimate interplay between flying machines and humans could also be useful in how we go about automating middleware.

Clearly, SA of the current and future state of middleware systems serving critical business function and understanding how these states affect business function is a critical skill. Just as driving a car or flying an airplane requires a “heads up” attitude with respect to current and future environmental states, managing a set of middleware runtimes to perform a set of business functions on time and within budget can be just as critical for the health of an organization.

What is your Middleware SA Score?

So, what is your middleware SA score? Put more simply, are you able to recognize when your SA score is not up to what it should be? For a few tips, look for the following conditions when managing your middleware or analyzing an application performance problem or outage:

  • Are you feeling a constant state of surprise, or “behind the system”?
  • Are basic operational tasks which you normally can complete efficiently and effectively not as easy to do?
  • Are there circumstances where you would always let the system resolve a situation for you rather than you performing the steps manually because you have not practiced that scenario enough?
  • Are you forgetting to do required tasks or not cross-checking what the system is telling you?
  • Are you entering incorrect inputs?
  • Are you incorrectly prioritizing needed tasks?
  • Are you suddenly overwhelmed with tasks at certain times?
  • Are you letting your system do all the work without knowing where it stands?
  • Do you know what three things to do, in the correct order, if the system fails right now?
  • Are you confused about the system state, or unsure of what to do next?
  • Are you deviating from Standard Operating Procedures?
  • Are you letting the system exceed critical limits?


If you recognize any one of these situations in your own performance then you are probably not situationally middleware aware. The risk of negative consequences due to your own mistakes or inaction is higher than it could be.

How do we counter this in the middleware realm?

Raising your Middleware SA Score

When we use middleware monitoring and automation tools we need to approach the design and the use of the tools from practical standpoints. Human factors researchers have much to say about the man-machine interaction with automation, but I will put forth a few tips I think should be considered. Some of these points may be common sense, some not so obvious.


Middleware automation should not be difficult or time consuming to turn on or off. Automation should never get in the way of the business but should always help to manage it effectively. For example, when faced with a choice between performing a middleware management task manually and reconfiguring the automation to do it for you, many times it is faster to do it manually. This can be relevant in unforeseen time-critical situations.

The monitoring and automation of middleware should be used appropriately. Users should be trained to understand how the automation works and what the design philosophy behind the monitoring and automation systems are. Policies and procedures should be in place to direct users when to use middleware automation and when not to use it. Users should have a rational framework to help them decide when to use automation.

The design of the middleware automation and monitoring should take overreliance on automation into consideration. One of the factors affecting overreliance on automation is user task workload. If the middleware automation user is task-saturated with middleware monitoring or other tasks, then she will not be able to effectively “monitor the monitor” and will rely much more heavily on automation.

Overreliance on automation may lead to acts of omission or commission where, by design or misuse, a middleware monitor does not alert a user to a situation or where the middleware automation is allowed by the user to carry out an action which is inappropriate to the situation. Use of automation to completely replace manual skills can also lead to significant degradation of a user’s skills.

Feedback on middleware monitoring and automation states, actions, and intentions must be salient and informative, so that a complacent user’s attention is drawn to it and she is able to intervene effectively.  A user’s involvement provides safety benefits by keeping the user informed and able to act.

A middleware monitoring system should also be flexible enough to allow it to be used based on the user’s roles and responsibilities, and not define the user’s role based on the monitoring design. This should take into account the need for a user’s active participation in the automation process even if it reduces the overall system performance from a fully automated standpoint.

System Trust

Middleware monitoring and automation failures, such as false alarms and inappropriate automation, can lead to a user not trusting the system and ignoring real system hazards.  A monitoring threshold which is set very sensitively may result in many alarms being raised, or transient alarms being set off. Eventually, a user may tune out the alarm “noise” and in the process ignore a real hazardous situation. This is especially true when the sampling rate is very frequent.

On the other hand, an event which could be catastrophic, but is highly unlikely to occur, may not afford the user enough time to prepare for and execute corrective action. This can also be true when the sampling rate is set too low.
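One common way to strike this balance between alarm "noise" and silence is debouncing: require several consecutive over-threshold samples before raising an alarm, and several clear samples before clearing it. The sketch below uses invented numbers and is a generic illustration, not how any particular BMM threshold works:

```python
class DebouncedAlarm:
    """Raise only after `up` consecutive breaches; clear after `down` clears."""
    def __init__(self, threshold, up=3, down=3):
        self.threshold, self.up, self.down = threshold, up, down
        self.raised, self._streak = False, 0

    def sample(self, value):
        breach = value > self.threshold
        if breach != self.raised:
            # Sample disagrees with current alarm state: count the streak.
            self._streak += 1
            needed = self.up if breach else self.down
            if self._streak >= needed:
                self.raised, self._streak = breach, 0
        else:
            self._streak = 0
        return self.raised

# A single transient spike (95) never raises; a sustained run does.
alarm = DebouncedAlarm(threshold=90, up=3, down=3)
readings = [95, 50, 96, 97, 98, 99, 50, 50, 50]
states = [alarm.sample(r) for r in readings]
```

Note the trade-off the essay describes: the `up` count suppresses transient noise, while the `down` count keeps the alarm from flapping, at the cost of slightly later notification either way.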

Users also need to be aware that their decision making may contain biases and they should be trained on how to recognize this. Both user interpretation of, and use of, middleware monitored values can exhibit human biases when making decisions.

For example, the bias of representativeness may falsely lead a user to believe that because a situation looks similar or follows a familiar pattern to another previously experienced situation, that the outcome or likelihood of success will be the same as that previously experienced. This fails to take into account the unique circumstances and actual facts of the current situation.

Sometimes when other independent indications contradict the primarily indicated monitoring status (and subsequent automated action) users can be biased towards trusting the primary monitoring indications, and subsequently allow inappropriate actions and future middleware states as recommended or carried out by the automation.

Nag and Gag, or Nudge into Action?

What is needed is a balance between the “noise” and the “silence before the crash”.  A good middleware monitoring and automation system will provide not only the ability to set individual sample rates but will also use the monitored information to assign likelihoods or probabilities that the middleware will encounter hazardous states in the near future.  This should give the middleware automation user a list of reasonable choices which will allow them to arrive at a good decision in a timely manner.  A very good middleware monitor will also provide the breadcrumbs to show how it arrived at such a conclusion.
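As a toy sketch of the "likelihood plus breadcrumbs" idea (sample values are invented, and a real monitor would use a sturdier model than a straight line):

```python
def time_to_limit(samples, limit):
    """
    Linearly extrapolate (time_sec, queue_depth) samples to estimate when a
    queue will hit `limit`, with "breadcrumbs" explaining the conclusion.
    """
    (t0, d0), (t1, d1) = samples[0], samples[-1]
    rate = (d1 - d0) / (t1 - t0)          # messages per second
    crumbs = [f"depth went {d0} -> {d1} over {t1 - t0}s (rate {rate:.1f}/s)"]
    if rate <= 0:
        crumbs.append("depth is flat or falling; no limit breach projected")
        return None, crumbs
    eta = (limit - d1) / rate
    crumbs.append(f"at {rate:.1f}/s, limit {limit} reached in ~{eta:.0f}s")
    return eta, crumbs

# Depth grew from 100 to 400 messages over a minute; max depth is 5000.
eta, crumbs = time_to_limit([(0, 100), (60, 400)], limit=5000)
```

The point is not the arithmetic but the output: an actionable estimate plus a visible chain of reasoning the user can cross-check, rather than a bare alarm.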

To sum up, here are the points I just addressed:

  • Middleware automation should not be difficult or time consuming to turn on or off.
  • Automation should never get in the way of the business.
  • The monitoring and automation of middleware should be used appropriately.
  • Users should be trained to understand how the automation works.
  • Users should never be totally reliant or complacent on automation.
  • Monitoring should involve active user participation in the automation process.
  • Feedback from automation must be salient and informative.
  • Users should never be overloaded with tasks because of the automation.
  • Monitoring design should conform to the user role.
  • Middleware automation should provide the user with a list of choices for good decision making.
  • Middleware monitors should make it easy to understand how they arrived at a recommended action.


The bottom line: middleware monitoring and automation should never cloud a user’s situation awareness, but enhance it.

What do you think? Are human factors in managing middleware complexity with or without middleware automation an issue in your organization or your own work? I would be interested to hear your experience.

In my next essay I will touch on how some of this is addressed by BMC.

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comment or question, please feel free to respond; your feedback is important. Thank you for reading this post.


In a previous blog post, DataPower® Monitoring Privileges Explained, I discussed the access level needed by the BMC Middleware Monitoring extension to monitor DataPower®. I thought it would be helpful to provide a step-by-step guide for setting up the group/user ID on the DataPower® appliance and updating the monitoring extension's configuration file. The attached PowerPoint contains the instructions and screen shots to walk you through the process. The beginning of the PowerPoint shows our monitoring configuration and some of the WebSphere DataPower® attributes that are of typical interest.


A recent survey of BMC’s customers who have deployed BMC Middleware Management was conducted through TechValidate.  The research data presents the real-world experiences of more than 200 verified users of BMC Middleware Management solutions.  TechValidate publishes only factual data: statistics, deployment facts, and the unfiltered voice of the customer, without editorial commentary.


Customers striving to stay ahead of the competition by running an efficient and modern middleware environment achieved the following benefits with BMC Middleware Management solutions:



If you are a user of BMC Middleware Management and would like to participate in the survey, please visit TechValidate by clicking here.


BMC Middleware Administration is designed to improve the lives of MQ administrators and to provide controlled access to MQ for other staff, often referred to as self-service.  This capability requires no agents or files on the MQ servers.  MQ objects can be filtered by name masks and assigned to “Projects,” to which staff are then assigned via groups with specific permissions.  This approach to MQ access is popular and gaining traction.

Today’s discussion is about going beyond traditional MQ administration into some challenging areas of integrity review.  Imagine having to go back and check every queue manager for a setting to ensure they are all the same.  How about validating that listener control for every listener is set to queue manager control?  What about the ones that aren’t?  Perhaps the challenge is to identify object changes within a specific time period, regardless of which product, script, or utility was used to make the change.  Performing this function object by object is tedious, error-prone, and time-consuming.  What if administrators had a reusable query they could use to search for objects, attribute values, settings, and use cases on a mass level?  Interested?  I hope so.  Please read on.


Standards, Exceptions and the unforeseen…

BMC Middleware Administration has an “Advanced Search” capability; I call it the intelligent search.  It can provide visibility into MQ at a level never before thought possible.  Sure, MQ administrators can query queue managers for attributes, and even multiple queue managers at the same time, on a server-by-server basis.  But what if you have 50 MQ servers?  Maybe 100?  How about more than 150 queue managers across multiple operating systems, each with unique nuances and interfaces?  This becomes more challenging.  The search function provides three basic levels of query: simple, compound (think Boolean logic), and free form.  Simple is a basic find command following the search syntax.  Compound uses at least two attributes in the query.  Free form can be fairly extensive.  You also have an option on the results: match all (every criterion in the search is satisfied) or match any (any one of the attributes searched for satisfies the result).  Let’s take a look at a few use cases.
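The match-all versus match-any semantics can be pictured with a toy model. The attribute names and values below are invented for the sketch, not actual MQSC attributes or product syntax:

```python
# A handful of queue managers described as attribute dictionaries.
qmgrs = [
    {"name": "QM1", "MONQ": "OFF",  "listener_control": "QMGR"},
    {"name": "QM2", "MONQ": "HIGH", "listener_control": "MANUAL"},
    {"name": "QM3", "MONQ": "OFF",  "listener_control": "MANUAL"},
]

def search(objects, criteria, match_all=True):
    """criteria maps attribute name -> set of acceptable values."""
    combine = all if match_all else any
    return [o["name"] for o in objects
            if combine(o.get(attr) in values for attr, values in criteria.items())]

# Queue monitoring disabled AND/OR listener not under queue manager control:
crit = {"MONQ": {"NONE", "OFF"}, "listener_control": {"MANUAL"}}
both = search(qmgrs, crit, match_all=True)     # ["QM3"]
either = search(qmgrs, crit, match_all=False)  # ["QM1", "QM2", "QM3"]
```

Match-all narrows the result to objects violating every criterion at once; match-any casts the wide net, which is usually what you want for an integrity sweep.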


Use Case 1: Locate queue managers that have queue monitoring (MQMON) set to NONE or OFF.

This setting will prevent most, if not all, MQ monitoring solutions from identifying the oldest message age on a queue.  Monitoring will also be unable to obtain the last message put/get date/time stamps.  With BMM Administration, you can easily and quickly (in seconds) locate these exceptions and then use the “Manage MQ” function to render the changes.


Use Case 2: Locate queue managers that have listeners NOT set to queue manager control.

One would expect this situation not to arise.  It generally manifests itself as a channel-retry or listener-stopped alert, if your MQ monitoring solution has such capabilities (BMM does).  Every time the queue manager is restarted, that listener will not start, causing a self-inflicted MQ channel issue.


Use Case 3: Locate MQ objects (you pick which type) with an alteration date within some range in time.

Many MQ administrators rely on scripting to make mass changes.  Maybe the change concern is only for a specific application, with object filtering in place across multiple operating systems and queue managers.  Ideally these changes are made to targeted objects for a specific reason.  Sometimes the change is inadvertent.  Sometimes it is made by someone else who managed to gain access to the objects and make changes.

How do you know?


Use Case 4: Quick queue integrity check for specific queues in a project with queue depth within a range, say for illustration 5 to 50 messages (or 50 to 100, and so forth) currently on the queue. This query provides a specific list of every queue meeting that criteria, and/or scoped by project (think application).
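Use Case 4 boils down to a range predicate plus an optional project filter, which can be sketched like this. The queue records and the "project" grouping are hypothetical stand-ins for what the product queries live.

```python
# Hypothetical queue records; "project" stands in for an application grouping.
queues = [
    {"name": "ORDERS.IN",  "project": "orders", "depth": 12},
    {"name": "ORDERS.DLQ", "project": "orders", "depth": 0},
    {"name": "TRADES.IN",  "project": "trades", "depth": 75},
]

def depth_in_range(queues, lo, hi, project=None):
    """Return queue names whose current depth lies in [lo, hi],
    optionally restricted to one project."""
    return [q["name"] for q in queues
            if lo <= q["depth"] <= hi
            and (project is None or q["project"] == project)]

flagged = depth_in_range(queues, 5, 50, project="orders")
```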


Create your own queries and you can save them in seconds for reuse in a library of “intelligent queries.”

Here are some to consider, although you may have your own in mind. I would love to hear from you.


Queue manager queue monitoring is set to NONE or Off

Listeners NOT set to Queue Manager Control

Alteration Date is within some range in time and/or by application or object mask

Queue integrity check for specific queues in a project with queue depth within a range

Trigger Control: off

Queue Get Inhibited

Queue Put Inhibited

All queues that belong to a specific CLUSTER name

All topics that belong to a CLUSTER

All topics using a specific model queue name assignment

All topics that are local by queue manager name

All subscriptions with a specific topic string.  E.g.  neworders*

All subscriptions with a specific Destination  E.g.  stock_trade*

All subscriptions with a user id equal to “weborder”

All subscriptions with SubscriberName:order*, TopicString:neworders, and Destination:processorders

All queues where usage is transmission within a specific application

All channels participating in a specific cluster

All channels sharing (connecting) to the same connection name

All channels using a transmission queue beginning with CICS*

StorageClass of a group of queues on z/OS


Finally, a way to search MQ intelligently!



One of the five pillars of Application Performance Management (APM) is:

Component deep-dive monitoring in application context

What does “Component deep-dive monitoring in application context” mean?  Simply stated, it means having the ability to drill deep into the components of an online transaction.  But what are the components?  A component might be an HTTP server (aka web server), an application server, or WebSphere MQ.

In this article we will assume we need to monitor WMQ.  We now need to decide: agent or agentless?  That is, do we use an agent or go agentless for the deep-dive monitoring?  With BMC Middleware Management (BMM), BMC offers both agent and agentless support for WMQ monitoring.

That is the focus of this paper: explaining how each BMM solution approaches agent and/or agentless support.  We will be covering two products in the BMM solution set:


BMM-Administration – Agentless

BMM-Administration is the industry-leading solution for WMQ/TIBCO administration (adding, altering, and deleting objects).  As extra value, it offers ultra-light agentless monitoring of WMQ objects via a WMQ client connection.  Many times this ultra-light WMQ monitoring is just enough for resource-constrained managed systems.

The ultra-light agentless WMQ monitoring offered by BMM-Administration is manifested in the creation and viewing of events.  Think of an event as you would an alert.  There are roughly two dozen events that BMM-Administration can detect and raise for specific WMQ objects within the managed system:

1. Queue more than N percent full
2. Queue Manager not accessible
3. Message in DLQ
4. Queue Full
5. Command Server Down
6. XMIT queue not serviced
7. Channel retrying and XMIT queue not empty
8. Channel retrying Channel in doubt
9. Channel stopped
10. Queue has more than N messages
11. Queue has more than N messages and no readers
12. Queue is more than N percent full and has no readers
13. Queue has less than N readers
14. Queue has less than N writers
15. Server conn channel has more than N running instances
16. Total running channel count is more than N
17. XMIT queue has more than N messages
18. XMIT queue is more than N percent full
19. First message on queue waiting more than N seconds
20. Oldest message on queue waiting more than N seconds
21. Oldest message on XMIT queue waiting more than N seconds
22. Trigger monitor is not running
23. Channel initiator is not running

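Several of the events above are simple derivations from a queue's basic attributes. The sketch below shows, purely as an illustration (the field names loosely mirror WMQ attributes such as CURDEPTH and MAXDEPTH, and the thresholds are examples, not product defaults), how a few of them could be evaluated.

```python
# Illustrative derivation of a few of the events listed above from basic
# queue attributes. Thresholds (N) are example values.

def queue_events(q, pct_threshold=80, msg_threshold=1000):
    events = []
    pct_full = 100.0 * q["curdepth"] / q["maxdepth"]
    if pct_full > pct_threshold:
        events.append("Queue more than %d percent full" % pct_threshold)
    if q["curdepth"] >= q["maxdepth"]:
        events.append("Queue Full")
    if q["curdepth"] > msg_threshold:
        events.append("Queue has more than %d messages" % msg_threshold)
    if q["curdepth"] > 0 and q["open_input_count"] == 0:
        events.append("Queue has messages and no readers")
    return events

q = {"curdepth": 4500, "maxdepth": 5000, "open_input_count": 0}
events = queue_events(q)
```

For this sample queue, three events fire: it is over 80 percent full, it holds more than 1000 messages, and it has messages with no readers.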

BMM-Administration monitors WMQ on an exception basis via events.  Since BMM-Administration does not keep a database of performance metrics, there is no performance history.  In other words, you cannot tell the depth of a queue yesterday at noon.  But you can tell whether a queue’s depth was over a predefined threshold yesterday at noon, by reviewing the events from that time frame.


Beyond the event console, BMM-Administration provides two other methods of event notification:

1. Email – via an SMTP server
2. Trap – SNMP datagram(s)
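The e-mail path can be sketched with Python's standard library. The addresses and SMTP host below are placeholders, and the actual send is shown commented out; this is an illustration of the notification idea, not BMM's implementation.

```python
# Sketch of an event-notification e-mail, built with Python's stdlib.
# Addresses and host names are placeholders.
from email.message import EmailMessage

def build_event_mail(event, qmgr, obj):
    msg = EmailMessage()
    msg["Subject"] = "BMM event: %s on %s/%s" % (event, qmgr, obj)
    msg["From"] = "bmm@example.com"
    msg["To"] = "mqadmin@example.com"
    msg.set_content("Event '%s' was raised for object %s on queue manager %s."
                    % (event, obj, qmgr))
    return msg

mail = build_event_mail("Queue Full", "QM1", "ORDERS.IN")
# To actually send it:
# import smtplib; smtplib.SMTP("smtp.example.com").send_message(mail)
```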

A popular misconception about agentless monitoring is that there is less overhead.  This is partially true, in that there is no agent processing (CPU and memory usage) on the WMQ platform.  However, there is still overhead, agentless or not, on WMQ itself: WMQ resource consumption will still increase (compared to no monitoring), because it will be processing the commands BMM sends to the WMQ command server.


The most popular reason people choose agentless monitoring is that there is no software to distribute, install, and maintain on each server running WMQ.  The user also does not have to worry about the agent being compatible with the version of WMQ; many times after IBM announces a new version of WMQ, you have to wait for the vendor to release an agent compatible with the new release.


An agent running on the WMQ platform requires OS-level and MQ-level security to run.  These security requirements can be tedious to implement and maintain.  With agentless support, these agent-related security requirements are not needed.


Another semi-popular reason to consider agentless monitoring is the platform running WMQ.  Is the platform a supported WMQ platform that is not supported by a vendor’s agent (such as Windows XP)?  Then you are somewhat forced into agentless monitoring, because there is no agent available.


BMM-Monitoring – Agent

Of all the agent versus agentless configurations, the BMM-Monitoring agent configuration is the most robust option.  Since BMM-Monitoring has a backend database storing MQ performance metric history, a user can go back and see, for example, the queue depth of a target queue yesterday at noon.


An agent allows an authorized user to issue WMQ commands, such as starting a queue manager or starting a WMQ command server.


An agent allows for the collection of hundreds of WMQ performance metrics, all of which can have associated thresholds set (versus only the couple dozen out-of-the-box pre-defined events for BMM-Administration).


Agent support has direct, native integration with BMC ProactiveNet Performance Management (BPPM, aka Patrol).  This integration allows all BMM data and events to feed natively (no SNMP traps) into BPPM.


The BMM agent is intelligent: it only sends values for parameters that have changed since the last interval.  This means that if a queue depth has not changed since the last interval, the agent will not send a new value.  This intelligence can significantly cut down on network overhead (as compared to agentless support, where all values are sent at every interval).
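That "send only what changed" idea is essentially a per-interval diff of the metric sample, which can be sketched as follows. The metric names here are hypothetical examples, not the agent's actual wire format.

```python
# Sketch of interval delta reporting: transmit only metrics whose value
# changed since the previous sample. Metric names are illustrative.

def delta(prev, current):
    return {k: v for k, v in current.items() if prev.get(k) != v}

prev    = {"curdepth": 10, "oldest_msg_age": 5,  "open_input_count": 2}
current = {"curdepth": 10, "oldest_msg_age": 65, "open_input_count": 2}

to_send = delta(prev, current)   # only the changed metric crosses the wire
```

If nothing changed between intervals, nothing is sent at all, which is where the network savings come from.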

Agent-based monitoring requires more security/access on the monitored platform.  A running agent process typically requires access to system files, WMQ files, and some degree of WMQ authority.

With the BMM-Monitoring solution, a customer is technically positioned to exploit BMM-Application Transaction Tracing. 
This is because the BMM-Monitoring agent is the same agent used by BMM-Application Transaction Tracing, just with a
different BMM extension loaded into the agent.


BMM-Monitoring – Agentless

The most popular reason people choose agentless monitoring is that there is no software to install and maintain on each server running WMQ.  The user also does not have to worry about the agent being compatible with the version of WMQ; many times after IBM announces a new version of WMQ, you have to wait for the vendor to release an agent compatible with the new release.

A popular misconception about agentless monitoring is that there is less overhead.  This is partially true, in that there is no agent processing (CPU and memory usage) on the WMQ platform.  However, there is still overhead, agentless or not, within WMQ itself: WMQ resource consumption will still increase (compared to no monitoring), because it will be processing the commands BMM sends to the WMQ command server.  And there is still agent processing (CPU and memory usage) on a platform somewhere, just not on the managed platform.


Agentless monitoring is popular because many people perceive there are no agents involved in monitoring.  Yes, it is
true no agents operate on the monitored platform.  However, there is an agent/process running somewhere
consuming resources.  So please do not slip into believing agentless monitoring is free (i.e. no resources used).  The
agentless processing is accomplished on a platform external to the platform running WMQ.  This external platform can
be Windows, Linux, or AIX.  The WMQ platform can also be Windows, Linux, or AIX.  Different operating
systems/hardware platforms represent data in different ways. For example, string data may be in a different code page
and numeric data may be represented in a different byte order. These differences will cause the collected performance
data to be converted, some portions by WebSphere MQ, some portions by BMM. There is a performance cost
associated with such conversion. To avoid extra work, it is recommended that you deploy your agentless processes on
operating systems that have similar data representation to the target operating system(s)/hardware platform(s)
wherever possible.
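The byte-order point above can be made concrete with Python's struct module: the same 32-bit value serialized on a big-endian platform must be byte-swapped before a little-endian platform can read it correctly, and that conversion is the cost being described.

```python
# Byte-order illustration: the same 32-bit queue depth has different wire
# forms on big-endian and little-endian platforms.
import struct

depth = 1234
big    = struct.pack(">i", depth)   # big-endian wire form (e.g. z/OS, AIX)
little = struct.pack("<i", depth)   # little-endian wire form (e.g. x86)

assert big != little                         # same value, different bytes
assert struct.unpack(">i", big)[0] == depth  # decoded with the right order
# Decoding with the wrong byte order silently yields a wrong number:
wrong = struct.unpack("<i", big)[0]
```

When collector and target share a byte order, this swap (and any code-page conversion for strings) disappears, which is why matching the platforms is recommended.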

Some configuration features found in the agent-based solution are not available in an agentless configuration.
When running agentless, there is no way to issue commands on the target system, which means that certain
WebSphere MQ operations cannot be carried out. Therefore, the following features are not available with the
agentless solution:

1. Delete Queue manager
2. Start/Stop Queue Manager
3. Start Command Server
4. Start Trigger Monitor via runmqtrm command
5. Start Channel Initiator via runmqchi command
6. Start Channel Listener via runmqlsr command
7. Display/Configure Object Security via dspmqaut/setmqaut
8. Service Control (amqmdain)
9. MQ Installation Information
10. MQ Environment Information
11. Discard Broker Resources
12. Set MQM Installation
13. Display MQ Version


It is important to understand that agentless monitoring may also require more network bandwidth than agent-
based monitoring. This is because all WMQ management traffic goes (both ways) over the network connection.
With an agent-based solution, only data values that change between sample intervals are sent across the
network by the agent on the monitored WMQ host.


A target monitored platform may reside inside a DMZ, meaning a BMM agent could have issues communicating
back to the central BMM server.  This condition may require an agentless approach to BMM-Monitoring.

Sometimes the BMM central server supporting all the remote clients (used to manage agentless servers) has a limit on the maximum number of remote clients that can be supported.  This means you may have to add resources to support more agentlessly managed platforms.


One of the biggest drawbacks to agentless monitoring of WMQ is that the monitoring is considered “in-band”: the delivery mechanism used to monitor WMQ is WMQ itself.  In the vernacular, how can you monitor WMQ if the WMQ resource (the server conn channel) itself is down?  Granted, server conn channels rarely go down, but the potential issue should at least be mentioned.


BMM-Monitoring – Hybrid Monitoring – Agent and Agentless Together

This approach offers the best of both worlds.  For your central corporate WMQ servers you get robust full-bodied WMQ
monitoring with agents.  For your tightly constrained servers, you get the light to ultra-light monitoring with agentless
WMQ monitoring.

You Make the Call

You make the call; agent or agentless?  As with most things in life, there are pros and cons.  There are tradeoffs (CPU,
memory, network bandwidth…) for each approach. Hopefully this paper has helped explain all the BMM-Monitoring
options available for WMQ monitoring.

What is Middleware?

Posted by Uwe Rudloff Aug 19, 2013


Well, we hear this term thrown around a lot. I have been in the software industry for many years, and looking back I have developed my own generalized definition. Maybe you will agree.

Middleware is any software, other than the operating system, that mediates interactions between any two other software entities. The goal of software mediation in this context is to simplify or leverage the work of the developer and to optimize the use of system resources. In the course of providing software mediation, a side effect is the added ability to provide points of access for monitoring and control. More on that later.

With this definition it is clear that many forms of middleware have been around since before the term middleware was invented. IBM's DB2, CICS, and IMS are a few examples. More recently, "application servers" providing object-oriented application-serving environments, such as IBM WebSphere and Microsoft .NET, are what come to mind when people say "middleware". Messaging middleware, such as IBM WebSphere MQ and TIBCO EMS, is another form. There are many more forms of middleware which I will not go into.

Glue or goo?

Middleware has become ubiquitous in modern IT establishments. Long gone are the days when a single machine hooked up to hard-wired workstations time-shared its computing resources. Today the marriage of the new and the old is the norm. The glue to the marriage is middleware. Important stuff these marriages.

Although middleware is there to simplify someone's work, it comes at a price. I call it the law of conservation of complexity. What happens is that the complexity is buried a level below the developer's work, but it is not entirely eliminated. Call it vertical complexity. Middleware is still a piece of software and subject to all the same rules of software development and constrained in resource use as any other piece of software. In the end, middleware is ultimately a product of human invention, ultimately managed by people and subject to failure. No magic bullet, just software.

We commonly hear the term "set it and forget it" when talking about using middleware. Nice concept but ultimately unrealistic in practice. A developer may set it. But a production manager will certainly not forget it.

For example, messaging middleware makes using a network much easier for a developer. Essentially a network is used to send messages to remote machines. Messaging middleware makes the job of message delivery easier for the developer by taking care of the networking detail drudgery (did I send it? did it get there? do I need to re-send it?) on behalf of the sender and receiver. That way the developer need not re-invent the programming mistakes of those that went before him. The sending and receiving software need not even know who is doing the sending or receiving, or even when it is happening. Magic!

On the other hand, being another piece of software, messaging middleware uses system resources like any other software. It has taken over the responsibility (and complexity) of handling message delivery in real time. It needs system and network resources to work. It needs to queue messages if the network or receiving application is not working fast enough (or is down). It needs to handle the messages in a way that will guarantee delivery. It needs to have mechanisms in place to recover if things go wrong. Plus, the messaging runtime may be doing its thing for not just one, but possibly many messaging-oriented applications. Normal system software stuff. No Otto, this is not a PC which you can re-boot in order to recover. There is important stuff in these messages!

Enter the Transaction Followers

In a relatively simple scenario, application A and B use messaging middleware to work together. However reality is more complex than this. A typical "marriage of the old and new" involves many more middleware mediators, from web to application, transformation, enrichment, validation, synchronization, routing, messaging and legacy mainframe transaction processing and database subsystems. Here we have horizontal complexity. An assembly line of runtimes helping to complete one real time business interaction needs real oversight in real time. And not just with up/down indications after five minutes of rumination. If the business interaction fails, how do we localize the problem runtime that it depends upon? Even if all runtimes are available business interactions can still fail due to issues happening at a lower logical level within the runtime. From the perspective of a business application owner, this assembly-line of systems may be hidden from view but the problem becomes very visible when something goes wrong.

Diving in Deep

Even worse, each middleware runtime involved in a step of getting from the "new" to the "old" may be simultaneously handling multitudes of requests from different sources and heading for different destinations. Not just one business application owner may be affected, but quite a few more. So a problem triggered by one request may affect the successful outcome of any other pending request. How does one know what straw broke the camel's back? Like I said, a developer may set it, but the production manager will certainly not forget it. Without something in place to accurately and effectively measure how a transaction is using middleware runtime resources you are running blind.

From Middleware Management to Application Performance Management

As luck would have it (you knew this had to have a happy ending) middleware software systems have facilities that provide paths to monitor, measure and control their activities. This is where middleware management software latches on to help diagnose and remediate problems that may be happening in the middleware runtimes. Middleware management software is a necessary component of the broader application performance management problem. Middleware management software provides key technical and business metrics originating in the middleware software systems that participated in fulfilling a business application transaction. We'll get more into this in a later treatise.

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comment or question, good or bad, please feel free to respond and I will be happy to reply. Please comment; your feedback is important. Thank you for reading this post.


In today's complex world of data processing, where users expect instant results, middleware plays an increasingly important role. From checking the balance of bank accounts, redeeming awards on frequent flyer or hotel loyalty accounts, making online rental car reservations, shopping online, or simply keeping in touch with friends through social media: middleware is the glue that keeps modern life's information flowing.


Monitoring and managing this environment can become time-consuming for administrators. Looking through logs, handling user calls, and making changes to the environment to keep pace with business requirements can keep you from doing needed maintenance and upgrades to stay current, and from creating environments for new mission-critical applications. If you miss one problem, like a channel being down or a queue filling up, you risk losing not only revenue for your company but also customers.




In this fast-paced ever-changing environment, you need a tool that not only meets your needs today but can also grow as your processing environments change. The BMC Middleware Management suite is comprised of three separately licensed solutions which allow you to add solutions as your needs change. These solutions are BMC Middleware Monitoring (BMM), BMC Application Transaction Tracing (BATT), and BMC Middleware Administration (BMA). Together, they provide a complete solution for your middleware needs that offers distinct advantages.


BMC Middleware Monitoring

BMC Middleware Monitoring (BMM) allows you to monitor and manage messaging-oriented middleware technologies like WebSphere MQ and TIBCO EMS; application servers like WebSphere Application Server and Oracle (formerly BEA) WebLogic; and enterprise service buses like WebSphere Message Broker, WebSphere DataPower, and TIBCO BusinessWorks. This solution provides the ability to use a single console to manage middleware performance monitoring and automation across platforms, across the enterprise. Through the use of customizable triggers, you can create an early warning system for conditions that could affect business operations, with alerts forwarded to e-mail, SNMP, or logs. Out-of-the-box event integration is provided for BMC Event Management, HP OpenView, and IBM Tivoli. Through real-time views of performance data, you can quickly assess the health and application use of middleware technology. BMM uses BIRT (Business Intelligence and Reporting Tools) to provide out-of-the-box and customizable charts and reports to find trends or peak hours of operation. You can develop strategies to manage these trends, predict future requirements, and analyze system issues. Integration with BMC ProactiveNet Performance Management provides the ability to create service models to show business impact and dynamic baseline alerting.


BMC Application Transaction Tracing

BMC Application Transaction Tracing (BATT) monitors transactions to ensure service-level adherence and is fully integrated with BMC Middleware Monitoring. By using BATT, you can proactively identify transaction issues before they impact business service delivery. This solution provides “hop to hop” visibility of transactions across platforms throughout your IT infrastructure and allows you to pinpoint problems and the specific technology tier where they are occurring. The performance and business impact information is presented on easy-to-read dashboards. Using the integrated reports, BATT facilitates chargeback reporting and meets audit and compliance requirements, and you can automatically alert on and find delayed, failed, or missing transactions. With BMC Application Transaction Tracing, you will be able to speed mean time to resolution (MTTR), improve availability by avoiding unplanned outages, reduce costs by finding and fixing problems quickly, minimize risks associated with application latencies and outages, and monitor and create events based on the content of the transaction. BMC Middleware Monitoring (BMM) and BMC Application Transaction Tracing (BATT) both give you the ability to create application-focused, customized dashboards that allow you to view the health of the middleware infrastructure specific to your mission-critical applications.


BMC Middleware Administration

BMC Middleware Administration (BMA) gives you secure, self-service middleware administration. With BMA, you can view and manage the full set of middleware objects in WebSphere MQ and TIBCO Enterprise Message Service (EMS) environments from one web-based GUI. Using a project-based structure, authorized users can perform all administrative and configuration tasks for WebSphere MQ and TIBCO EMS from a Web browser, eliminating the need to log on to each host. This security model enables you to group infrastructure objects into secure profiles for users and to make navigation faster and easier. BMA assures secure, project-based access so your users can access only the objects relevant to their role or application, without spending time setting up complex OAM security authorizations in WebSphere MQ. The BMA solution is agentless and runs on a single server to provide users access to all of the product’s functions from their desktops – even the administration/control functions. With BMC Middleware Administration, you will be able to enable quicker, more precise rollout of middleware environments, provide visualization of middleware objects at a project level, deliver accountability with an auditable WebSphere MQ/TIBCO EMS change log, and enforce the use of middleware standards.


The BMC Middleware Management suite will allow you to improve your business service delivery and keep your customers coming back!


Contact your BMC account representative for a presentation or demo today.


BMC Software is known for its ability to monitor the health and wealth of a DataPower appliance with BMC Middleware Management-Monitoring (BMM-Monitoring).  But soon after a user has mastered the monitoring of a DataPower appliance with BMM-Monitoring, they begin to wonder about the administration (altering) of the DataPower configuration(s).  How can BMC Software help out here?


BMC Software offers the administration of a DataPower device through the BladeLogic Server Automation (BSA) solution.  Through the DataPower XML Management Interface, BSA can take a snapshot of the DataPower appliance, and also make configuration changes to the device.


Today, many DataPower deployments are primarily a blend of scripted and manual processes in the WebSphere ESB environment.  This process requires constant care and feeding.  Tedious tasks such as deploying classes, checking on object statuses, and managing deployment polices can become a full time job with the DataPower appliance.  Additional DataPower administrative tasks such as backing up, resetting, and quiescing the DataPower domains add to the administrative burden.


BSA can take the complexity out of deploying applications by effortlessly deploying and configuring settings for J2EE applications across multiple DataPower environments.  BSA can quickly and easily automate the deployment of applications and WebSphere configuration changes, significantly reducing the time and effort needed to deploy applications and improving consistency across any customer DataPower environment.


If you have been in IT for 30 minutes or 30 years, you know the mantra of “always have a backup”; perhaps two backups are good too?  BSA can help here with having a current and reliable DataPower backup by facilitating the automatic backup of your DataPower appliances.  This is accomplished by BSA taking an automated nightly DataPower snapshot.


Do you manage your DataPower appliance or does DataPower manage you?  BMC has the DataPower solutions to monitor DataPower with BMC Middleware Management-Monitoring and solutions to administer DataPower with BladeLogic Server Automation.


You make the call.

Ross Cochran

What is APM?

Posted by Ross Cochran Jul 30, 2013




APM is Application Performance Management, the term used today to denote all the aspects of tracking, tracing, and troubleshooting end-user response time.  Nobody can seem to agree on exactly how to define APM; vendors, research firms, and end users all have their own definition.  This paper will define APM in the simplest of terms.  I will not use any vendor-specific product names (including BMC's).  I will start from the origins of APM and cover through today's take on APM.


In the beginning…

In the beginning, there was one computer; pick your flavor: PDP, Univac, IBM, Cray, Cyber...  Remote end users accessed these systems over plain old telephone service (POTS) lines.  Upgrading from 150 bits per second to 300 bits per second was a big deal.  With such slow lines of communication, end-user response time issues were usually not caused by the central computer system.

However, as connection speeds in remote locations began to rival local area network speeds, the customer's method of connecting to the central system became less of an issue.  In parallel with the increasing speeds of communication systems, many distributed systems (UNIX, Linux, Windows...) began to appear between the end user and the central computer system.  The issues of APM just got much bigger: is the end-user response time issue in the network, the distributed system, or the central computer system?


The stage is set

The stage is set: we have super-fast (compared to 40 years ago) networks, distributed systems (web servers, app servers, middleware...), and the still-present central computer system.  You are now primed for massive finger-pointing between all segments of your IT department.  Most people will want to get ahead of the finger-pointing and start collecting end-user response times before the end user calls.  So you first start collecting end-user response times (as seen from the end user's browser).  You also start collecting response times between the distributed systems and the central computer system; many people call this collecting the "hop-to-hop" response times.  Make sure you collect response time history too (because you know the guy in accounting is going to complain about response time from a month ago).  So you think you are ready to address any APM issue they can throw at you.

All your end users have their own interpretation of bad response time.  When a bad response time issue arises, will both you and the end user be able to exactly replicate the mouse clicks that are causing it?  If not, now what?  How do you get a consistent and repeatable way to generate response time?  You got it: synthetic response time.  Deploy droids (not the cell phone type) in the field running consistent, repeatable scripts, and be sure to collect and store their response times.  These droid response times remove all the emotion and subjectivity associated with live end users.  Plus, with droids not needing bio breaks and vacations, you will now have response times 24x7.
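A synthetic "droid" is, at its core, just a scripted step run on a schedule with its elapsed time recorded. The sketch below shows that skeleton; the scripted step here is a stub, where a real probe would drive a browser or an API call.

```python
# Minimal sketch of a synthetic probe: run the same scripted step
# repeatedly, time each run, and keep the samples for trending.
import time

def scripted_step():
    time.sleep(0.01)            # stand-in for the scripted transaction

def probe(samples=3):
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        scripted_step()
        times.append(time.perf_counter() - start)
    return times

history = probe()               # response-time samples, ready to store
```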


Time to go deep

OK, you think you are 100% APM ready now.  You are collecting and storing end-user response times.  You are collecting and storing the hop-to-hop response times of your end users.  You are collecting and storing synthetic response times 24x7.  But what happens when your APM monitoring does indeed report a response time problem?  You see it in your reports: the response times (both real and synthetic) from the ABC application server are bad.  Do you just call up the ABC application server owner and tell them to fix their problem?  Yeah, right!

You now need a way to "go deep" into the ABC application server and make that deep-dive diagnostic part of your normal APM management rhythm.  The answer is to add an application server go-deep tool that will seek out when a servlet, JavaServer Page, or Java Message Service call is misbehaving and causing end-user response time problems.


Things are slow on the central computer system

OK, you know you have this APM thing mastered.  You are collecting and storing end-user response times.  You are collecting and storing the hop-to-hop response times of your end users.  You are collecting and storing synthetic response times 24x7.  You can go deep into an application server and tell the components of application server response time.  But you get the dreaded end-user call saying their transaction on the central computer system has ended abnormally ("abended").  What do you do now?  Your response time reports do show response times from the central computer system were slow.  But that is the heritage central computer system, and that system is never slow, until today!  Just like you added a go-deep problem determination tool to the ABC application server, you need to add one for the heritage central computer system.  This will give you insight into which transactions are timing out and causing response time issues.


APM nirvana

Finally, you have achieved APM nirvana: you determine your end users' response time before they call you; you can quickly isolate issues to the hop of technology causing the response time issue; and you can dig deep into the technology layer for probable-cause analysis.

There you have it – APM from the beginning to today.


Ross Cochran
