
MainView Middleware Mgmt


MainView Middleware Administrator V9.0 (MVMA) became available on August 23, 2018.  MVMA, formerly known as TrueSight Middleware Administrator (TSMA), has joined the MainView family with this release.  MVMA provides the central integration needed to include distributed queue manager data in MainView for MQ, giving you end-to-end visibility into your entire MQ environment.  Despite the rebranding, the underlying product is still the same easy-to-use, secure, and intuitive tool you use to administer your middleware environment today.  Don’t be concerned if the name has not yet changed everywhere you look; the new name will roll out across the product over time.

 

Although I say it is the same product, we have in fact delivered more functionality with this V9.0 release.

There were three major themes for the release: Security, Ease of Use and Currency.  New capabilities include:

 

Security

  • HTTP request header verification
  • Individual authentication
  • Time-based account lockout for the ADMIN_ADMIN security model

Ease of Use

  • Copying / moving messages to another queue manager
  • Support for MQ Cluster Routing in the Dead Letter Replayer

Currency

  • MQ Storage Classes for z/OS
  • MQ logging and log status support
  • MongoDB upgrade and using WiredTiger as the storage engine
  • IBM MQ 9.1 Support

 

To find out more about this release, please follow this link.

Learn how Middleware Management provides a complete, digital business-ready solution for central management of IBM® MQ resources.


Thank you for participating in the TrueSight Middleware Management Community. This community will be the first to be consolidated with other TrueSight Operations Management communities, to simplify the experience for participants and expand opportunities for collaboration.

 


 

This enhancement will continue to allow you to share information about middleware data, events, and the resolution of common issues; to improve application reliability; and, in general, to bridge the gap between infrastructure and application management through the monitoring and management of middleware. In addition, you will now be able to reach more TrueSight community members and share ideas and experiences across the entire TrueSight Operations family of solutions.

 

Your support of this improvement is appreciated and we look forward to your continued and expanded collaboration in the future.

 

What does it mean for you?

You will be able to participate just as in the past, on any content. The scope of the community will be broader and, as a consequence, knowledge sharing will increase.

Is there something you need to do?

Yes: if you have not done so already, click Follow on the TrueSight Operations Management Community:

[Screenshot: the Follow button on the TrueSight Operations Management Community page]

 

For more information about the simplification of TrueSight communities read this post.

 

Any questions? Comment below.


I quickly wanted to share this with you.

 

Check out the TrueSight Middleware Management | How-To Videos playlist on YouTube for a collection of How-To Videos covering the TrueSight Middleware Management solutions.

 

Didn't find what you were looking for? Please let us know.

 

A lot of good stuff can also be found in the BMC Knowledge Base. To quickly find the content relevant to the TrueSight Middleware Management solutions, try the pre-filtered searches.


You probably all know BMC Customer Support's Knowledge Base; if you have ever entered a search term on the BMC Support Central home page, you have used it.

 

However, when looking for a solution to a product problem, entering too unspecific a search term will return a whole lot of results, most of which refer to products you may never even have heard of. Unfortunately, it is quite easy to be unspecific, owing to the huge number of articles we keep in stock for all of the BMC products.

 

So in order to narrow down the search results to the product family you are actually interested in, you would start to selectively add and/or remove Product Name filters until the better part of the search results finally becomes relevant.

 

This can become quite cumbersome, so for your convenience we have prepared a couple of BMC Support Search URLs that already contain the relevant product filters.

 

Pre-filtered for TrueSight Middleware and Transaction Management (TrueSight Middleware Monitor, Q Pasa!, Q Nami!, BMC Middleware Management, BMC Middleware and Transaction Management, BMC Middleware Management - Performance and Availability, BMC Middleware Management - Transaction Monitoring, ...)

 

Pre-filtered for MainView Middleware Administrator (TrueSight Middleware Administrator, BMC Middleware Administration, BMC Middleware Management - Administration, ...).

 

Pre-filtered for BMC Middleware Management - Administration for WebSphere MQ (AppWatch).

 

Before these links can be used, please make sure you are already logged on to BMC Support Central with the web browser in which the links will launch.

 

Enjoy and (re-)discover the BMC Support Search.

 

Some other useful links you may like:

- TrueSight Middleware Management How-To Videos on YouTube

- TrueSight Middleware Administrator Online Product Documentation: v8.2, v8.1, v8.0

- MainView Middleware Administrator Online Product Documentation: v9.0

- TrueSight Middleware and Transaction Management Online Product Documentation: v8.1

- Product Upgrades Made Easy: Guide to TrueSight Middleware Management AMIGO Program


When objects (queue managers, queues, etc.) are associated with a history template, an event template, or a logical type, those associations remain regardless of whether the object is registered for monitoring.

 

Once associated, the intent is that history will be collected and events triggered whenever the object is registered for monitoring.   However, sometimes an object is unregistered from monitoring when it shouldn’t have been.   This can lead to gaps in an object’s data in trend charts or reports.   It can also lead to event triggers failing to fire as expected.

 

There are a couple of different ways to watch for this issue with BMC TrueSight Middleware and Transaction Monitor 8.0.00 or later.   An object that is not registered for monitoring is not presented on the Operations tab, and noticing a missing object can be hard to do.   The Object Repository tab, however, shows all objects, and objects in the tree that are not registered for monitoring are displayed in blue by default.   You may change the color, font, and tooltip via Tools->Preferences->Unmonitored Objects so the objects stand out even more.

 

Whenever a list of associations is displayed (History tab, Event tab, Logical Types tab), the object is also displayed in this same blue color (or the color and font you set in the preferences).   This will help identify why a particular event template may not be working as expected, why history is not available, or why the object’s attributes are not showing values on a logical view.

 

Tip: You may register individual objects from the Object Repository tab by right-clicking and selecting Register Object for Monitoring.   Note that all ancestors of that object are also registered for monitoring.   Conversely, you may unregister an object by selecting Unregister Object from Monitoring; all descendants of that object are also unregistered.    The panels on the right side of the Object Repository tab auto-refresh periodically, but the tree coloring and fonts do not.   You only need to collapse and re-expand the parent of the object to see the changes.

 

Tip:  Now that object associations remain regardless of whether the object is registered for monitoring, it is recommended that policies be used to make the initial association.    There should no longer be a need to use the maketmpltassoc utility to re-apply templates after a change in monitoring selection, as was common for some customers whose object monitoring changed frequently.


New to BMC TrueSight Middleware and Transaction Monitor 8.0.00 is the concept that each object has a state.   The states are ADDED, CONFIRMED, NOT-FOUND, DEPRECATED, and DELETED.

 

The addition of an object state is the result of the discovery features added to the product for the WebSphere MQ extension.   During each monitoring cycle the WebSphere MQ extension is able to determine which MQ objects have been added or removed.    When an object is discovered it is said to be confirmed.   Each object also maintains the time when it was first discovered, when it was last confirmed to exist, and when the state last changed.   When a discovery sweep occurs the object may be re-confirmed by updating the time when last confirmed.   See another post by me that discusses when you would use a discovery sweep.

 

There are two ways an object may change to a deleted state.    Consider the scenario in which an MQ queue is registered for monitoring because it is required by the MQ applications.   Infrastructure changes are underway and the MQ administrator needs to reconfigure MQ.   The administrator knows the queue is going away, so they use the Object Repository to select the queue in the tree, right-click, and select "Deprecate Object".   The deprecated state is sticky: once an object is deprecated, it remains deprecated even if it is re-confirmed.  The intention is that this state informs users that the object is going away, so they can avoid it or learn why.   The object remains in that state until "Undo Deprecate Object" is used, or until an extension reports that the object no longer exists, at which time the object moves to the deleted state.

 

The other scenario is that the extension has determined the object is actually gone, in which case the object goes to the not-found state.    Perhaps the queue was simply removed by the MQ administrator, but how does one know the removal wasn’t done by accident?   Now an MQ administrator can come in and choose to deprecate the object if they know the removal was intentional, or figure out why the object was removed and re-create it if necessary.  Deprecating an object in the not-found state moves it to the deleted state.

 

Having said all that, flexibility has been added so that you can simply choose to mark an object deleted immediately: right-click and choose "Delete Object".

 

A few notes.

 

An object isn’t really gone from the object repository when in the deleted state.  If you have a large number of deleted objects, the next time you schedule downtime for the services you can run dbschema_sync with the -o option to permanently remove them.    Please consider that any history, audit events, and event journals will no longer be available for those objects once they are permanently removed.
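
A rough sketch of that maintenance step, assuming the services have already been stopped for the outage (the command path is a placeholder; only the -o option comes from this post):

    # Run during scheduled downtime, after stopping the TMTM services.
    # The -o option permanently removes objects in the deleted state;
    # their history, audit events, and event journals go with them.
    dbschema_sync -o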

 

When an object transitions to the not-found or deleted state, it is no longer registered for monitoring.   So if the MQ administrator accidentally removes an MQ object and later adds it back, you will need to register that object for monitoring again, unless you have a policy configured to do it for you.   In addition to the times noted earlier, the time when monitoring was last changed is also maintained for each object.   All of this information can be viewed from the Object Repository tab when you select an object from the tree.

Restoring an agent

Posted by Brad Boldon, Feb 12, 2016

Backing up critical data is an important aspect of any production-level IT environment.   However, backing up the data is only half the picture; you must be able to restore it as well.    Being able to restore your agents after hardware failures or other mishaps is equally important.    Enhancements made to the BMC TrueSight Middleware and Transaction Monitor 8.0.00 services and agents have made this easier.

 

Background Information

 

When we talk about backing up the agent and extension information, we mainly mean the current set of objects registered for monitoring, the agent and extension preferences, and the extension configuration files.

 

In addition to the agent’s local repository of objects (maintained in eaa.xml), the services maintain an object repository and record which objects were previously registered for monitoring.   The service repository allows you to forgo backing up the agent’s local repository, as up-to-date information can be restored from it for BMC TrueSight Middleware and Transaction Monitor 8.0.00 and later agents.

 

The WebSphere MQ extension may be configured to discover queue managers and their objects.  When using discovery with policies to register objects for monitoring, the objects and monitoring of a reinstalled agent will be restored with no intervention.   When using other extensions, or when discovery is disabled, you can use the restore feature from the Management Console or the repomgr command-line tool.

 

The agent and extension preferences (also maintained in eaa.xml) are not duplicated in a service repository.  However, if you use policies to set the agent and extension preferences, you can simply apply the policy after restoring your agent installation.    If you are not using policies to set the preferences, you should use agentpref to export the preference settings to an XML file.   It is recommended that you export the preference settings after initial configuration and whenever preferences are changed.
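
A minimal sketch of such an export; the option syntax and file name are illustrative, so check the agentpref usage output in your installation:

    # Export agent and extension preferences to an XML file and keep it
    # with your other backups. Repeat whenever preferences change.
    agentpref -export agent_prefs.xml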

 

The agentpref utility is available in the services installation and as an agent package.  If the agent was configured to communicate with the services using a tunnel (i.e., using TLS), and a tunnel is required to communicate with the agent (i.e., only TLS traffic is permitted to or from the agent), then you will need to export and restore the preferences by running agentpref locally, on the agent machine.

 

As you can see, using policies as a recovery mechanism is an easier way to recover an agent with up-to-date information.   However, you should export your policies using the mqsexport command and back up the output zip file.   If necessary, use mqsimport to restore the policies.
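
A sketch of that backup step, assuming mqsexport and mqsimport take the zip file name as an argument (the exact invocation may differ in your installation):

    # Export all policies to a zip file and back that file up.
    mqsexport policies_backup.zip

    # Later, if the policies need to be restored:
    mqsimport policies_backup.zip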

 

The WebSphere MQ extension is controlled entirely from preferences, but other extensions may have files that you configure during installation.   You should back up those files after the initial configuration and whenever they are changed.    In addition, if you aren’t using the standard port or are overriding some connection parameters, remember to back up the eaapi.ini file(s).   Much of this information is specific to your environment and may be readily available from other sources; however, it is still recommended that this information be backed up to make restoration easier and faster.

 

Restoring an agent

 

Follow these first steps for a simple agent install on a distributed platform like Linux or Windows:
* Get an agent package from the “Download Agents” link on the TMTM launch page.
* Extract the agent package on the agent machine.
* Make sure the file permissions are correct.
* Start the Extensible Agent (qpea), but don’t start the Configuration Agent (agent) yet.

 

The above steps are for a basic agent install. The TMTM Agent and Extensions guide covers all of the options available.

 

If you are using policies to set preferences, you can apply the policies to the agent using the Management Console (in the Policies tab, find the agent, right-click it, and select “Apply Policies”).   Otherwise, use the agentpref import option to restore the agent and extension preferences.
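
For the non-policy path, a minimal sketch of the import, using the same illustrative option syntax as the export above:

    # Restore the previously exported preferences on the rebuilt agent,
    # before starting the Configuration Agent.
    agentpref -import agent_prefs.xml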

 

Now that preferences have been restored, start the Configuration Agent (agent).

 

If extension packages have previously been distributed to the agent, and the directories and files in the directory specified by the MQS_HOME environment variable remain, you may install the extensions from the packages as before.    Otherwise, the packages will be deployed again, and once those deployments have completed you may install them.   Check the “Package Distributions” tab of the MC for their status.

 

Finally, after installing the extension packages, restore the configuration files previously backed up or re-configure them from your own preserved information.   You may now start the extensions.

 

If you are using discovery and policies to register objects for monitoring, and you are only using the WebSphere MQ extension, then nothing more is required.   Otherwise, use the restore option to re-register the objects for monitoring so the agent is in sync with the services.   Select the agent in the tree on the Object Repository tab of the MC, right-click, and select “Restore”.    You may also use the repomgr tool to perform the restore.    Either will re-register any objects currently registered for monitoring.
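
A sketch of the command-line variant; the option names and agent name are illustrative placeholders, so consult the repomgr usage output for the real arguments:

    # Re-register everything the service repository says should be
    # monitored on this agent.
    repomgr -restore -agent agenthost.example.com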


Notes
If you restore an agent that was not newly installed, keep in mind this is not a full sync.   For example, if for some reason the agent thinks an object is not registered for monitoring but the service’s object repository thinks it is, then that object will become registered for monitoring.   Similarly, if the agent thinks an object is registered for monitoring and the service’s object repository thinks it is not, then that object will remain out of sync.    The reconfirm feature, available from the MC or the repomgr command-line tool, can be used in this case; it applies the agent’s perception of what is registered for monitoring.    Under normal circumstances a reconfirm is not needed and should rarely be used, but it may be recommended by BMC Technical Support.


 

What is a discovery sweep and when would you do one?   Beginning with BMC TrueSight Middleware and Transaction Monitor 8.0.00, you may configure the WebSphere MQ monitoring extension to discover MQ objects such as queues, channels, and the queue managers they reside on.   This feature is enabled by default on distributed systems and disabled by default on z/OS.   When enabled, newly added or removed objects are discovered during the next monitoring cycle, causing their state to change in the object repository maintained by the services.    This allows you to use policies to register the objects for monitoring, associate them with your event and history templates, and perform other actions when objects are newly created and/or removed.   Policies may use attributes of the object that rarely change, such as a description; these are also known as stable attributes.   For performance, stable attributes are only published when the object is registered for monitoring and the value changes, or at the time the object is discovered to exist.

 

So what is a discovery sweep?   A discovery sweep tells the WebSphere MQ monitoring extension to re-discover and publish all stable attributes for existing objects on one or more queue managers.   Essentially, this allows you to sync up the stable attributes of objects that are not registered for monitoring and that may have changed since being discovered or since the last discovery sweep.    How often this is needed depends on your environment.   Stable attributes are named that for a reason: they rarely change.    A discovery sweep can take some time, so it is not recommended that you do it often; ideally you would perform a discovery sweep during off-peak times.    You may manually perform a discovery sweep from the Object Repository tab of the Management Console, use the Agent Discovery Sweep policy action with a schedule, or use the repomgr command-line tool with an external scheduler like cron.
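
A sketch of the scheduled variant as a crontab entry; the repomgr path, option names, and agent argument are illustrative placeholders (only the tool name and the cron approach come from this post):

    # Run a discovery sweep for one agent every Sunday at 02:00, off-peak.
    0 2 * * 0 /opt/bmc/tmtm/bin/repomgr -sweep -agent agenthost.example.com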

 

One final tip.  If your policies are not being applied to an object that uses stable attributes in the policy expression, you may select the object in the tree of the “Object Repository” tab and view the current values of the stable attributes in the lower right pane.   Note the “Time when a stable attribute of the object was last updated” in the upper right pane.    If you feel they are out of sync, right-click on the object’s queue manager in the tree and select “Discover Object” to restrict the discovery to that queue manager.    Alternatively, you may select the object’s agent in the tree and select “Discover Agent” to perform a discovery sweep for all queue managers with discovery enabled under that agent.   Observe the "Time when a stable attribute of the object was last updated” change in the upper right pane, and verify the stable attributes in the lower right pane.   Policy actions will automatically be applied based on the updated stable attributes.

 

A reminder that although policies apply to all object types, only the WebSphere MQ extension from the BMC TrueSight Middleware and Transaction Monitor 8.0.00 or later release supports discovery of new or removed MQ objects.   For other extensions, only objects registered for monitoring are visible to the services, and since they are already registered for monitoring, their stable attributes are always in sync.


The BMC Middleware Management products have undergone several name and package changes in the past few years.  This has created some confusion among customers trying to identify their licensed products while equating them to the physical name of the product they have installed.

 

In the case of Middleware Management, the same physical media is used to package several BMC licensed products.  The name BMC TrueSight Middleware and Transaction Monitor is used generically to refer to four of these licensed products (listed below), each of which offers different levels of related functionality.

 

The following licensed products are available under an older license that is known as the “Gold” license.

  • BMC Middleware Management – Performance and Availability
  • BMC Middleware Management – Transaction Monitoring

 

Customers that have migrated to the new “Simplified” license are more familiar with the products by these names.

  • BMC TrueSight Middleware Monitor
  • BMC TrueSight Middleware Transaction Monitor

 

BMC Middleware Management – Performance and Availability and BMC TrueSight Middleware Monitor both provide performance and availability monitoring of your middleware infrastructure and are newer versions of a product formerly known as Q Pasa!.

 

BMC Middleware Management – Transaction Monitoring and BMC TrueSight Middleware Transaction Monitor both provide transaction tracing capabilities in WebSphere MQ and other technologies and are newer versions of a product formerly known as Q Nami!.

 

Note that the “Simplified” products enable more features than the “Gold” license versions, including integration with TrueSight Operations Management.  Moving from “Gold” to “Simplified” involves a license migration, so speak to your BMC representative to discuss it.  The differences between the offerings are detailed in the product installation documentation.


Right Sizing for Success

 

In my brief essay on What is Middleware? I touched on some of the advantages and consequences of shifting the complexity of software function out of applications into middleware. In Are You Situationally (Middleware) Aware? I also reflected on how the wrong sorts of automation can make life more difficult for us.

This leads us to think about right sizing your approach to monitoring. Specifically, using a top-down, bottom-up approach to follow the life of a business transaction helps in developing a monitoring strategy that is closely aligned with the goals of your business customers. In a few small steps you can quickly find a route to value which your customers will love.

About Transaction Tracing

 

IT and development organizations need to understand how important business processes are being affected by the services that they provide. Application Performance Management (APM) is one area of development operations that provides another tool for service improvement. The APM paradigm identifies transaction tracing as one important part of this, and the type of tracing needed goes beyond the simple silo tracing approach to a multi-disciplinary one.


Tracing transactions is nothing new. Since the dawn of computing there has always been a need to trace transaction steps from either a business or technical point of view. Businesses need trace records to reliably audit business steps that have been performed in order to prove completion or compliance. Technical operations need trace records to reliably trace steps that were taken by an algorithm or service to prove correct and timely processing.

 

Frequently, tracing paradigms suffer from numerous shortcomings. They may be too verbose and performance-draining, resulting in too much irrelevant data being collected for the ultimate goal of supporting the business. They may be too little, too late, since human intervention and expertise are required to manually interpret the results. Or the tracing may be too isolated, since an event recorded in one technology’s trace may be difficult to relate to an upstream or downstream event in another technology’s trace.

 

What is new is the need for an effective, real time, correlated view of both the business and technical service steps completed throughout all of the IT services that a business transaction touches. This multi-disciplinary tracing paradigm should be as technology-independent as possible in order to provide for maximum effectiveness. And it should be agile enough to quickly adapt to the changing business and IT landscape.

 

Operational Challenges

 

Today's number of IT components, their configurations, the relationships between them, and the constant changes introduced by business and technology needs make manual tracking difficult. It is impractical and costly to conduct manual audits to maintain IT service relationship and application dependency mapping information. What is needed is automation that gathers the required information about hard and soft IT assets as well as the relationships among them.


On the other hand, the sheer size and complexity of asset/software relationships can also pose a problem. The mere action of automatically mapping a set of assets is in itself meaningless unless thoughtful and meaningful context is placed around the relationships from a business point of view. Again, not everything worth knowing about in one’s IT infrastructure is important enough to rise to the level of the business service view. From the point of view of the business, a “boiling the ocean” service tracing approach is neither efficient nor desirable.


This is where a reduction step is necessary to map one or more technical service relationships to a business service for a well-defined service impact view. Correlated, real-time transaction tracing, which feeds both technical service performance and business performance metrics into a business service state view, is yet one more aspect of technical service management needed to provide the most complete business service impact picture. Clearly, it is important to understand the business, how it depends upon the underlying technical services, and how a business transaction is expected to flow through the underlying application services.

 

Tools and Methodology

 

 

What is needed is a set of tools and methodology for effectively building relevant business context around a set of technical IT services with a minimal amount of effort.


Discovery automates the process of gathering required information and populating and maintaining a Configuration Management Database (CMDB). This is done by discovering IT hardware and software and creating instances of configuration items (CIs) and relationships from the discovered data. A CI can be physical (such as a computer system), logical (such as an installed instance of a software program), or conceptual (such as a business service).

 

The mapping step automatically relates CIs (hardware/software assets) together to form service impact relationships. This is where a reduction methodology helps to crystallize service relationships into context that has meaning to a running business, and where knowledge of “what is important to the business” is important for IT to know and to apply to the mapping of service relationships. Emerging big data analytic approaches can assist in a knowledge-based, goal-oriented approach to determining what is important for effective tracing.


The mapping of a business transaction flow among application services using a correlated real time transaction tracing view completes the picture by providing expected inter- and intra-application service elapsed times and business-related data. This provides the basis for enriched alerting within service impact models in the event of any excursion from the expected technical or business metric norms.


Here we outline some best practices for deciding how to select the appropriate technical and application assets and metrics to provide the appropriate context within an APM transaction tracing solution.

 

Top-Down: Four Easy Steps to Right Size Your Transaction Tracing

 

STEP 1: DETERMINE WHAT IS IMPORTANT TO THE BUSINESS

 

Just because a process exists does not mean that it is important enough to be traced.

 

Frequently, not enough thought is given to the relative importance of a process to a business. Tracing for the sake of monitoring brings no value to a business. Tracing is only valuable if it is actively used to help the business stay on top of its game.

 

A simple measure of what is important can be summarized as follows. Determine the business processes which:

(1)  Bring the company the most revenue.

(2)  Bring the company the most profits.

(3)  Cost the company the most money in terms of

a.    Direct costs.

b.    Hidden costs due to under- or non-performance risks (monetary penalties or fines).

(4)  Carry the highest risk of damage to the company's reputation (human visibility) if they fail or are faulty.

(5)  Use up the most expert people's time (war-room scenario) when trying to narrow down the problem focus.

(6)  Must be audited for legal or company policy reasons.

 

Once the relevance of an IT-dependent business process is determined, it is much easier to focus on who will directly benefit from transaction tracing.

 

STEP 2: DETERMINE WHO WILL BENEFIT THE MOST IN THE BUSINESS

 

 

 

Business users can directly benefit from transaction tracing processes.

 

Determine who will benefit the most from any of:

(1)  Transaction history (elapsed times, payload details) for analysis.

(2)  Receiving real-time alerts for underperforming transactions. This could include end-to-end as well as inter-activity elapsed times, or numeric payload values exceeding thresholds.

(3)  Narrowing the scope of a problem transaction run to a specific technology or runtime, so that the appropriate people can be alerted to the issue and provided known information for use in remediation or in setting business customer expectations.

(4)  Proof of correct business transaction completion or data compliance.

 

At this point we have identified the appropriate business processes to trace. Once the direct transaction tracing users are determined, a mapping can be done of the subset of runtime assets relevant to the most important processes and people of the business.

 

STEP 3: DETERMINE WHERE THE BUSINESS TRANSACTION WILL BE EXPOSED

 

 

Once the appropriate business transaction has been identified, the specific runtimes it traverses (OS type, version, hostnames, IP addresses, and middleware types, versions, and names, if any) from the logical start to the finish of the business transaction need to be identified. Further, the logical order in which the runtimes are called needs to be determined.

 

It may not be necessary or even appropriate to map every single runtime traversed by that particular business transaction; this depends on the need (see “Who” and “What”). If eighty percent of the “What” happens in only twenty percent of the runtimes (and this can reasonably be determined ahead of time), then it is appropriate to start with those runtimes. Another starting point is to merely trace the end-to-end (front-to-back) runtimes, and then successively build out tracing to more granular levels in between the beginning and end runtimes once the classes of underperforming transactions and further suspect runtimes have been identified.

 

The most inappropriate way to trace transactions is to monitor every single transaction traversing every single runtime, no matter what business transaction it is, no matter who the audience for the tracing is, and without some guidance as to the expected order of runtime execution. This is also known as “boiling the ocean” and is not appropriate, efficient, or useful.

 

For each runtime call, the resource names being traversed need to be determined. For example, WebSphere MQ resource names are the calling application name, the queue manager name, queue names or topic strings, and channel names. For WebSphere Application Server, the HTTP URI or servlet name are the resource names.

 

Above all, what is most important in deciding where to expose the transaction is the selection of the transaction tracing toolset. Any transaction tracing solution needs broad technology and OS support, not just individual silo solutions. For example, integrated transaction tracing does not end at a JEE or .NET container wall. The non-JEE and non-.NET runtimes both upstream (before) and downstream (after) the JEE or .NET container must be accessible to the transaction tracing solution, and it must span both distributed and mainframe systems. Anything less will result in a myopic “black-box” view of up- or downstream results, based only on elapsed times and with no more granular or contextual contributions to analyzing the transaction flow.

 

STEP 4: DETERMINE TRANSACTION IDENTITY AND ATTRIBUTES

 

There are basically two types of data that are important to know from an exposed transaction tracing data stream: transaction identity and transaction payload.

 

Identifying the Transaction

 

 

Transaction identity is important because it is the way to uniquely distinguish one distinct transaction from another. The identity of a transaction can be any field, technical or business-related, that is exposed in the data stream. The identity should be unique enough that, within a class of transactions, it does not repeat for a distinctly separate transaction in the same class.

 

If a transaction traverses multiple runtimes (in parallel, serial, or looped fashion), it is important to have a method to “stitch” the transaction events (i.e., the appearance of the transaction at each runtime) together. The easiest, though not the most common, way is to count on a unique transaction identifier appearing at each transaction event with the same value for that distinct transaction. Depending on the scope of the transaction and how the business application was architected, this can be difficult if not impossible, since modern transactions typically span multiple runtimes running on different technologies.

 

More typically, a distinct transaction identity morphs into a new identity, either because one application service calls another application service or because one runtime passes the call on to another runtime downstream. In this case a correlating transaction event must be captured to relate the two aliased transaction identities together.

 

If a distinct transaction is processed multiple times in a runtime in a looped fashion, and it is required to trace the transaction at each loop iteration, then there are two additional sub-types of transaction identity data that are important to know: instance data, which uniquely identifies each loop iteration, and limiting data, which distinctly signals when the loop for that distinct transaction has finished.

Categorizing the Transaction

 

 

Identifying a transaction without context is not useful for most analytic purposes. Although time stamps can be collected and associated with a distinct transaction event, using this data in isolation without business or technical context does not necessarily help solve a performance or business problem.

 

What is more useful is to collect additional data associated with a transaction event, also known as “payload data”. Again, a selective approach to collecting the data associated with a transaction event is the most effective way to trace the transaction. What payload data to collect depends upon the need (see “Who” and “What” above).


Collecting payload field values can be useful for classifying the business transaction in some way, for example by claim type or by agent name, for reporting and alerting. It may also be useful to collect environmental information related to the transaction event for someone involved in solving a technical problem. Another way to select payload is to monitor and alert on it when the value in the payload field exceeds a threshold, or to filter classes of transactions. The entire transaction can even be flagged as failed based on payload field values or thresholds. Payload fields can also be collected and stored in history to prove compliance with laws or company policies.

Be selective when collecting payload values. For example, if the same payload value appears in two different transaction events for one distinct transaction, it is only necessary to collect it at one transaction event, not at both. The correlation through the transaction identity associates that piece of data with all of the other transaction events for that distinct transaction, so the effort of collecting it again is unnecessary.

The most inappropriate way to collect payload data is to store every piece of data accessible at every runtime call, “whenever it is needed” and for “whoever may use it” in the future. This is another form of “boiling the ocean” and is not appropriate, efficient, or useful. It can potentially require large amounts of IT resources to store the collected data, to no one’s benefit.

Bottom-Up: Enrich the Business Service Impact Model With Your Transaction Tracing Metrics

 

Once the appropriate process, people, and supporting transactions have been identified, the resulting exposed transactions, transaction categories, and metrics can then be tied back into the impact models which support the business service.

 

A service impact model should represent, at a minimum, the IT software/hardware assets that support the business services and their relationships to those services. For example, if a claims system depends upon a hardware server running an OS, then any performance degradation of the hardware or OS should map to performance degradation of the business service.

 

This two-dimensional relationship mapping is enriched by a third dimension: the transaction flow. Not only are end-to-end, inter-, and intra-application elapsed times available to the impact model, but so are any other transaction metrics exposed in the payload of the transaction at any transaction event. This encompasses not only technical but also business metrics. As a result, transaction tracing brings a richer context into a service impact model without overwhelming the result.

 

The Results

 

Transaction tracing is an important dimension of the Application Performance Management arena. Not only are technically oriented tracing alternatives for a technology silo important, but a cross-platform, cross-technology, and cross-domain correlated transaction tracing alternative is also needed to add business context to an application performance problem.

 

For this, a business knowledge-based, goal-oriented approach to determining what is important to measure, followed by exposing only the relevant data that applies to these business-oriented performance measuring goals, goes a long way toward reducing the shortcomings of most of today’s siloed APM tracing offerings.

 

The result is a holistic, correlated, cross-platform - from distributed to mainframe and beyond - view of critical business transactions, answering the all-important question: “What happened to my transaction?”

 

 

 

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comments or questions, please feel free to respond and I will be happy to reply. Please comment; your feedback is important. Thank you for reading this post.

 

 


Did you ever want to see how the execution groups are performing within your WebSphere Message Broker without having to go through performance data execution group by execution group? Did you?

 

Well, as it turns out, BMC Middleware Monitoring has the data, and in five minutes you can add it to your broker view in BMM.

The out-of-the-box view for a message broker provides you with some basic information about the broker. The broker name, status, uuid, and queue manager name are all important pieces of information.

[Screenshot: the out-of-the-box broker view]

With a few simple, quick edits, you can add a tab with a child summary table showing key performance indicators for the execution groups running on the broker.

[Screenshot: a tab with a child summary table of execution group KPIs]

At the same time, you can enable the drill down button in front of each execution group in the table to give you a pop-up summary table of KPIs for the message flows deployed to that execution group!

[Screenshot: the pop-up summary table of message flow KPIs]

To view the detailed performance metrics you will still want to use the navigation tree to pull up the views of the various objects, but these simple-to-add summary tables will help you zero in on a potential problem.

 

If you would like to know how to do this, comment on this blog or send an email to me (thouse@bmc.com) or contact your friendly neighborhood BMC BMM SC.


Now that you have successfully implemented your BMC Middleware Monitoring (BMM) solution and are generating alerts for problems detected in your WebSphere MQ (WMQ), DataPower, WebSphere Message Broker (WMB), TIBCO EMS, and WebLogic Application Server environments, you have successfully reduced the number and duration of outages. Using the out-of-the-box integration with BMC ProactiveNet Performance Management (BPPM), the alerts are being directed to the correct teams and resolved in a timely manner. However, when you're logged onto the Management Console in BMM, it's not always easy to determine which alerts belong to which area.

One of your friendly neighborhood BMC software consultants has come up with an easy way to tell just by looking: color code your alerts in the Management Console alerts screen. The choice of colors is up to you, but one suggestion would be to use blue for your WMQ alerts, green for your WMB alerts, a third color for your DataPower alerts, and so forth. This method makes it easy to visually determine which alerts belong to which group.

If you'd like to know more about implementing this solution, you can respond to this blog or simply e-mail your BMC software consultant.


Many may ask how to schedule a change inside WebSphere MQ using BMC’s BMM-Administration solution.

 

Please review the attached pre-recorded WebEx to see how simple it is to alter an MQ object.

 

https://bmc.webex.com/bmc/lsr.php?RCID=bb420b8b9aa949ccb51bed572adf2cfa

 

Thank you,

 

Ross Cochran

 

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comments or questions, please feel free to respond and I will be happy to reply. Please comment; your feedback is important. Thank you for reading this post.


So you have just downloaded and installed BMC Middleware Management – Administration (BMM-Admin).

 

Like most Americans, you want to get things running as fast as possible, without reading the Installation Guide.  That is why I developed the attached PowerPoint: so you can get BMM-Admin running quickly and find the fastest route to value.

 

The presentation was designed around WebSphere MQ; however, BMM-Admin also supports TIBCO EMS.

 

So get ready to administer WebSphere MQ quickly and effectively by following the attached PowerPoint instructions.

 

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have any comments or questions, please feel free to respond and I will be happy to reply. Please comment; your feedback is important. Thank you for reading this post.


Once upon a time, a little programmer wrote a letter to Santa. The letter never arrived, and the little programmer was brokenhearted. Santa was not pleased, because he wanted everyone who believed in him and wrote to him to be happy. If only Santa had a way to track those letters and be alerted when they got lost, or when some elf fell asleep and didn't process one.

 

Well, if Santa talked to his BMC account team, they could offer him the world-class BMC Application Performance Management solution, and his problems would be solved.

 

Interested in how our APM solution works? Watch the attached demo and then contact your BMC account team to learn more.
