
MainView Middleware Mgmt

14 posts authored by Ross Cochran

Many may ask how to schedule a change inside WebSphere MQ using BMC’s BMM-Administration solution.

 

Please review the attached pre-recorded WebEx to see how simple it is to alter an MQ object.

 

https://bmc.webex.com/bmc/lsr.php?RCID=bb420b8b9aa949ccb51bed572adf2cfa

 

Thank you,

 

Ross Cochran

 

The opinions expressed in this writing are solely my own and do not necessarily reflect the opinions, positions, or strategies of BMC Software, its management, or any other organization or entity. If you have a comment or question, please feel free to respond and I will be happy to reply. Your feedback is important. Thank you for reading this post.


So you have just downloaded and installed BMC Middleware Management – Administration (BMM-Admin).

 

Like most of us, you want to get things running as fast as possible, without reading the Installation Guide.  That is why I developed the attached PowerPoint: it gets BMM-Admin running quickly and takes the fastest route to value.

 

The presentation was designed around WebSphere MQ; however, BMM-Admin also supports TIBCO EMS.

 

So get ready to administer WebSphere MQ quickly and effectively by following the attached PowerPoint instructions.

 



BATT Introduction

 

COMPANY XYZ is a hypothetical company providing a full range of highly competitive financial products.  It is a long-term BMC customer with a need for transaction tracing.  COMPANY XYZ invited BMC to come and share the BMC Middleware Management Application Transaction Tracing (BATT) vision; the engagement ended with COMPANY XYZ deciding to pursue transaction tracing with BMC.

 

BATT Planning

 

COMPANY XYZ had told BMC that the need for transaction tracing was “becoming more and more critical” because the company was in a “constant state of flux.”  There were two very specific issues involving transaction tracing:

 

1. COMPANY XYZ Finances – This is the core application behind the COMPANY XYZ online presence.  Transaction throughput problems were infrequent; however, when transactions did slow down, the issue had widespread corporate visibility.  COMPANY XYZ needed a transaction tracing tool to quickly determine when a transaction response time issue existed and to quickly pinpoint where in the transaction flow things were slow.


2. COMPANY XYZ interfacing to external WebSphere MQ clients – Periodically COMPANY XYZ needs to send a transaction to an external vendor in the form of a WebSphere MQ message.  Many times the external vendor will insist it never received the MQ message.  COMPANY XYZ had no way of determining if or when the MQ message in question was sent.

 

BATT Implementation Overview

 

Since there were two COMPANY XYZ transaction-related issues, we decided to kill two birds with one stone: we created a single BATT transactional model that captured the COMPANY XYZ hop-to-hop response time and also showed when MQ messages were delivered to the external vendor.

 

BATT Technical Implementation Details

 

There were two primary technical needs to implement BATT at COMPANY XYZ.  First, we needed to define which COMPANY XYZ transaction to trace.  We needed a transaction that starts in the distributed world and accesses the z/OS (mainframe) operating system; the “ABC” transaction was selected.  (ABC was also the name of the associated z/OS CICS transaction ID.)  Second, we needed to find something in the ABC transaction that we could trace across all the silos of technology: WebSphere Application Server (WAS), WebSphere MQ (MQ), and CICS.  By dumping the transaction at each point (WAS, MQ, and CICS), we determined that the COMPANY XYZ customer account number was present in all three locations.  This was perfect, because we could now trace the transaction across all three technology stacks.


Now that we knew the technology stacks involved, we needed to know how the transaction technically flowed across them.  Here is a summary of the technical flow (a minimal code sketch follows the list):

 

1. An MQ message is PUT on a distributed (D/S) MQ publish queue
2. The message is transmitted to the MVS MQ input queue (via an MQ cluster channel)
3. CICS reads the message from the MVS queue
4. CICS executes the ABC transaction
5. CICS returns an MQ message to the D/S reply queue
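
As a minimal sketch of what the distributed side of this flow looks like in code (steps 1 and 5), here is a hypothetical example using the IBM MQ classes for Java. The queue manager and queue names are made-up placeholders, not COMPANY XYZ's real objects, and a production application would also match the reply on correlation ID.

    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class AbcRequestReply {
        public static void main(String[] args) throws Exception {
            // Hypothetical names -- substitute your own queue manager and queues.
            MQQueueManager qmgr = new MQQueueManager("DSQM1");

            // Step 1: PUT the request on the distributed publish queue.
            MQQueue publishQ = qmgr.accessQueue("ABC.PUBLISH.QUEUE", CMQC.MQOO_OUTPUT);
            MQMessage request = new MQMessage();
            request.writeString("accountNumber=1234567890");   // the account number is the field traced end to end
            request.replyToQueueName = "ABC.REPLY.QUEUE";
            publishQ.put(request, new MQPutMessageOptions());
            publishQ.close();

            // Steps 2-4 happen remotely: the cluster channel moves the message to the
            // z/OS input queue, and CICS reads it and runs the ABC transaction.

            // Step 5: GET the reply that CICS put on the distributed reply queue.
            MQQueue replyQ = qmgr.accessQueue("ABC.REPLY.QUEUE", CMQC.MQOO_INPUT_AS_Q_DEF);
            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_WAIT;
            gmo.waitInterval = 30_000;                          // wait up to 30 seconds for the reply
            MQMessage reply = new MQMessage();
            replyQ.get(reply, gmo);
            System.out.println("Reply: " + reply.readString(reply.getMessageLength()));

            replyQ.close();
            qmgr.disconnect();
        }
    }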

 

By positioning a BATT interception node at the first MQ publish queue and at the last MQ reply queue, we can measure the overall MQ response time: the time from the distributed MQ queue to the z/OS MQ queue and back to the distributed reply MQ queue.  Tracing this route allows us to quickly determine whether there is a response time problem and whether it is z/OS based or distributed based.


So far, we have identified the ABC transaction as the target transaction.  We have identified the technology stacks used by the ABC transaction.  We have the ability to identify if the transaction response time issues are distributed based or z/OS based.  Now we need to identify the core technical details of the transaction.  These details are:

 

1. The name of the distributed publication queue – The BATT interception node needs to know which MQ queue to listen to for the ABC transaction.
2. The name of the distributed reply queue – The BATT interception node needs to know which MQ queue to listen to for the returning ABC transaction.  BATT will capture the time stamp of the returning transaction.
3. The transaction size – BATT stores the transaction in the BATT database.  Knowing the size of the transaction lets us size the BATT database effectively (see the sizing sketch after this list).
4. The transaction rate – BATT stores the transaction in the BATT database.  Knowing the transaction rate lets us size the BATT database effectively.
5. The CICS transaction ID – This way MainView Transaction Analyzer knows exactly which z/OS transaction(s) BATT is interested in.
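
Items 3 and 4 feed a simple back-of-the-envelope sizing calculation. The sketch below uses made-up numbers for message size, rate, and retention, and the 30 percent overhead factor is my own assumption rather than a BATT figure.

    public class BattDbSizing {
        public static void main(String[] args) {
            // Hypothetical inputs -- replace with the values captured for your transaction.
            int avgMessageBytes   = 2_048;   // item 3: transaction (message) size
            int messagesPerSecond = 50;      // item 4: transaction rate
            int retentionDays     = 14;      // how long traced transactions are kept

            long secondsRetained = (long) retentionDays * 24 * 60 * 60;
            long rawBytes        = secondsRetained * messagesPerSecond * avgMessageBytes;

            // Allow roughly 30% extra for indexes and per-row overhead (an assumption, not a BATT number).
            double estimatedGb = rawBytes * 1.3 / (1024.0 * 1024.0 * 1024.0);
            System.out.printf("Estimated BATT database size: %.1f GB%n", estimatedGb);
        }
    }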

 

BATT Value

 

What does all this transaction tracing mean?  It means COMPANY XYZ can be alerted when its transactions are not meeting their service level agreements.  It means that when there is a transactional response time issue, COMPANY XYZ will know exactly where (distributed or z/OS) the impediment is.  It means COMPANY XYZ will know when MQ messages were sent to its outside vendors.

 

The Icing on the BATT cake

 

As COMPANY XYZ was reviewing the response times of their transactions, something unusual began to appear in the BATT transaction table.  The BATT transaction table showed the transactions were slowing after every four to five MQ messages were sent to z/OS.  After investigating, we determined the MQ messages were being intentionally slowed (by the COMPANY XYZ JVM-based application) while being placed on the MQ publication queue.  This was a COMPANY XYZ throttling mechanism implemented years ago that most people had forgotten about.  The bottom line: BATT discovered the COMPANY XYZ transactions were indeed slow and allowed COMPANY XYZ to change their JVM-based application accordingly.

 




BMC Middleware Management (BMM) is a family of three solutions from BMC Software.  The solutions in the family are BMM-Monitoring, BMM-Administration, and BMM-Transaction Tracing.  One product in the BMM-Monitoring solution is BMC Performance Manager for WebSphere Business Integration.

 

The white paper below outlines the basic best practices for using BMC Performance Manager for WebSphere Business Integration.  Please download it and enjoy!

 

Thank you,

 

Ross Cochran

 

ross_cochran@bmc.com


Introduction

 

 

For many years IBM has supplied an MQ exit for gathering MQ performance statistics.  This has been great for MQ in the distributed world, but not so great for MQ in the z/OS world, because IBM does not support the MQ API exit on z/OS.  Do not fear!  BMC is here!  For over a decade BMC Software has supported an “MQ API-like” exit on z/OS.  Over this past decade this z/OS functionality, called the BMC Software Extension for WebSphere MQ (a.k.a. MQE), has gathered MQ statistics in many of the world’s Fortune 500 companies.  (Note – the MQE in MainView for WebSphere MQ and the BMC Middleware Management WebSphere MQ extension are separate technologies with similar functionality.)

 

Content

 

This paper covers some of the operational aspects of the MQE, including:

 

  1. The history of MQE
  2. The MQ performance metrics MQE gathers beyond standard MQ monitoring
  3. The role of MQE statistics in MainView Transaction Analyzer
  4. The MQE MQ API trace facility
  5. How to allow the MQE to intercept MQ API calls

 

    

The History of MQE

 

In the mid 1990s, as WebSphere MQ management was increasing in popularity and as BMC was known in the industry for out-of-the-box thinking (such as the 3270 Optimizer product), BMC began developing a more robust offering in MQ management.  BMC already had an industry-leading MQ management solution with PATROL for MQ, but wanted a go-deeper MQ management solution.  This new go-deep solution was an interception technology developed from the roots of Ultraopt for VTAM; BMC knew how to gather MQ statistics that no other vendor could deliver.  The resulting development yielded the ability to intercept MQ MQI API calls.  By intercepting the MQ calls, BMC could see all the MQ application issues associated with the management of MQ.  This interception technology was accomplished by BMC proprietary coding (read: no IBM exit point).  It was ported to the distributed world and to the z/OS (then OS/390 and MVS) world.  Since then, IBM has replicated the BMC proprietary technique in an IBM MQ exit, and at that point BMC migrated its proprietary interception approach to the IBM MQ exit.  However, the z/OS MQ interception technology remains core BMC technology, since IBM has never published an equivalent exit on z/OS.

 

The MQ performance metrics MQE gathers beyond standard MQ monitoring

 

As noted earlier, BMC was a leader in MQ management for both distributed systems and z/OS.  So what was left to add to basic MQ monitoring?  What was left was the statistics available from intercepting the calls of an MQ application to the MQI.  By intercepting these calls, BMC could determine how “busy” a given application was with MQ.  This new type of performance statistic could be passed to MainView for setting thresholds and associated alerts.  You may be asking, what performance statistics (from MainView views) came from this new MQ interception technology?  Let’s review a few:

 

The APST view in MainView for WebSphere MQ provides performance details on applications accessing the queue manager.  In this view, look for a high GET/PUT rate; this could be a sign of a runaway MQ application.  Also look for a high object count; this could be a sign of poor MQ application programming.

 

The MQEST view tracks all open (PUT/GET) attempts by each MQ object.  It is easy to see the busiest MQ objects from an MQI viewpoint (regardless of the size of the MQ messages).  For example, you know a specific queue is used to trigger a specific CICS transaction and you want to see whether CICS has been issuing the expected number of PUT and GET calls.  This can be a good indicator of whether CICS is processing the right number of business transactions.

 

The MQITRACE view allows you to see the API calls each application is making to MQ.  Each call also has the associated detail for each MQI call (including MQ return codes).  This is very handy in finding an MQ application that is not getting along with MQ.

 

This concludes the review of the top MQE views.  There are many other MQE related views available.

 

The role of MQE statistics in MainView Transaction Analyzer

     

The MQE statistics feed the MQ portion of MainView Transaction Analyzer (MV/TA), allowing MV/TA to trace all the PUTs and GETs involved in a given transaction.  To correlate WebSphere MQ components when MainView Transaction Analyzer is running, the BMC Software Extension for WebSphere MQ (MQE) must be installed on the appropriate queue managers and must be writing to the LOGSPACE.

 

The MQE MQ Trace Facility

 

Ever wonder what MQ is doing at an MQI/API level?  Who is doing all the PUTs and GETs to MQ?  With the MQE feature enabled, the MQITRACE view displays MQ API trace records for MainView for WebSphere MQ.

 

Tracing and viewing the calls of all MQ applications could be a daunting task; however, you can limit your trace views by job (the TRJOB option), by application (the TRAPPL option), by queue (the TRRESOLVED option), and to exceptions only (the TREXCEPTION option).

 

How to allow the MQE to intercept MQ API calls

 

My colleague Gregg Tuben recently wrote an excellent blog and video on how to dynamically enable the MQE statistics.  Read about it here: MQE.

 

Conclusion

 

Once you use the MQE facility, you will never look back.  You will wonder how you managed WebSphere MQ for all these years, without it.

 

Enjoy.

 

Ross Cochran


Introduction


One of the five pillars of Application Performance Management (APM) is:


Component deep-dive monitoring in application context


What does “Component deep-dive monitoring in application context” mean?  Simply stated, it means having the ability to drill deep into the components of an online transaction.  But what are the components?  The components might be an HTTP server (aka web server), an application server, or WebSphere MQ.  (See https://communities.bmc.com/community/bmcdn/bmc_for_mainframes_and_middleware/middleware/blog/2013/08/19/what-is-middleware for more about WebSphere MQ.)


In this article we will assume we need to monitor WMQ.  We now need to decide: agent or agentless?  That is, do we use an agent or go agentless for the drill-deep monitoring?  With BMC Middleware Management (BMM), BMC offers both agent support and agentless support for WMQ monitoring.


That is the focus of this paper: explaining how each BMM solution approaches agent and/or agentless support.  We will cover two products in the BMM solution set:


BMM-Administration
BMM-Monitoring


BMM-Administration – Agentless


BMM-Administration is the industry-leading solution for WMQ/TIBCO administration (adding, altering, and deleting objects).  As extra value, it offers ultra-light agentless monitoring of WMQ objects via a WMQ client connection.  Many times this ultra-light WMQ monitoring is just enough for resource-constrained managed systems.


The ultra-light agentless WMQ monitoring offered by BMM-Administration is manifested in the creation and viewing of events.  Think of an event as you would an alert.  There are about two dozen of these events that BMM-Administration can detect and raise against specific WMQ objects within the managed system.  These are the events monitored by BMM-Administration:


1. Queue more than N percent full
2. Queue Manager not accessible
3. Message in DLQ
4. Queue Full
5. Command Server Down
6. XMIT queue not serviced
7. Channel retrying and XMIT queue not empty
8. Channel retrying; channel in doubt
9. Channel stopped
10. Queue has more than N messages
11. Queue has more than N messages and no readers
12. Queue is more than N percent full and has no readers
13. Queue has less than N readers; queue has less than N writers
14. Server conn channel has more than N running instances
15. Total running channel count is more than N
16. XMIT queue has more than N messages
17. XMIT queue is more than N percent full
18. First message on queue waiting more than N seconds
19. Oldest message on queue waiting more than N seconds
20. Oldest message on XMIT queue waiting more than N seconds
21. Trigger monitor is not running
22. Channel initiator is not running

 

BMM-Administration monitors WMQ on an exception basis via events.  Since BMM-Administration does not have a database of performance metrics, no performance history is kept.  In other words, you cannot tell the queue depth of a queue yesterday at noon.  But you can tell whether a queue’s depth was over a predefined threshold yesterday at noon by reviewing the events from that time frame.

 

In addition, BMM-Administration provides two other methods of event notification:


1. Email – via an SMTP server
2. Trap – SNMP datagram(s)


A popular misconception of agentless monitoring is that there is less overhead.  This is partially true in that there is no agent processing (CPU and memory usage) on the WMQ platform.  However, there is still overhead (agentless or agent) within WMQ itself.  WMQ resource consumption is still going to increase (compared to no monitoring), because it will be processing the commands BMM sends to the WMQ command server.
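
For a feel of the kind of work those commands represent, here is a minimal sketch (my own illustration, not BMM code) that evaluates a "queue more than N percent full" condition over a client connection using the IBM MQ PCF classes for Java. The host, port, channel, queue name, and threshold are all placeholders.

    import com.ibm.mq.constants.CMQC;
    import com.ibm.mq.constants.CMQCFC;
    import com.ibm.mq.headers.pcf.PCFMessage;
    import com.ibm.mq.headers.pcf.PCFMessageAgent;

    public class QueueDepthCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the remote queue manager.
            PCFMessageAgent agent = new PCFMessageAgent("mqhost.example.com", 1414, "SYSTEM.ADMIN.SVRCONN");
            try {
                // Ask the command server for the queue's current and maximum depth.
                PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
                request.addParameter(CMQC.MQCA_Q_NAME, "APP.INPUT.QUEUE");
                request.addParameter(CMQCFC.MQIACF_Q_ATTRS,
                        new int[] { CMQC.MQIA_CURRENT_Q_DEPTH, CMQC.MQIA_MAX_Q_DEPTH });

                PCFMessage[] responses = agent.send(request);
                int depth = responses[0].getIntParameterValue(CMQC.MQIA_CURRENT_Q_DEPTH);
                int max   = responses[0].getIntParameterValue(CMQC.MQIA_MAX_Q_DEPTH);

                double percentFull = 100.0 * depth / max;
                if (percentFull > 80.0) {   // threshold chosen purely for illustration
                    System.out.printf("EVENT: queue more than 80%% full (%.1f%%)%n", percentFull);
                }
            } finally {
                agent.disconnect();
            }
        }
    }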

 

The most popular reason people choose agentless monitoring is that there is no software to distribute, install, and maintain on each server running WMQ.  Also, the user does not have to worry about the agent being compatible with the version of WMQ; many times after IBM announces a new version of WMQ, you have to wait for the vendor to release an agent compatible with the new release.

 

An agent running on the WMQ platform requires OS level and MQ level security to run.  These security requirements
can be tedious to implement and maintain.  With agentless support, these agent related security requirements are not
needed.

 

Another semi-popular reason to think about agentless monitoring is to consider the platform running WMQ.  Is the
platform a supported WMQ platform but not supported by a vendor’s agent (such as Windows XP)?  You are
somewhat forced to consider agentless monitoring, because there is no agent available.

 

BMM-Monitoring – Agent


Of all the agent versus agentless configurations, the BMM-Monitoring agent configuration is the most robust option.

Since BMM-Monitoring has a backend database for storing MQ performance metric history, a user can go back and see, for example, the queue depth of a target queue yesterday at noon.

 

An agent allows an authorized user to issue WMQ commands such as starting a queue manager or starting a WMQ command server.

 

An agent allows for the collection of hundreds of WMQ performance metrics.  All of these metrics can have associated thresholds set (versus only the 22 out-of-the-box pre-defined events for BMM-Administration).

 

Agent support has direct, native integration with BMC ProactiveNet Performance Manager (aka PATROL).  This integration allows all BMM data and events to feed natively (no SNMP traps) into BPPM.

 

The BMM agent is intelligent: it sends only the values of parameters that have changed since the last interval.  This means that if a queue depth has not changed since the last interval, the agent will not send a new value.  This intelligence can significantly cut down on network overhead (as compared to agentless support, where all values are sent at every interval).
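
A rough sketch of that send-only-what-changed idea (my own illustration, not the agent's actual code):

    import java.util.HashMap;
    import java.util.Map;

    public class DeltaSender {
        private final Map<String, Integer> lastSent = new HashMap<>();

        /** Returns only the metrics whose values changed since the previous interval. */
        public Map<String, Integer> changesSince(Map<String, Integer> currentSample) {
            Map<String, Integer> delta = new HashMap<>();
            for (Map.Entry<String, Integer> e : currentSample.entrySet()) {
                Integer previous = lastSent.get(e.getKey());
                if (!e.getValue().equals(previous)) {   // new or changed value
                    delta.put(e.getKey(), e.getValue());
                    lastSent.put(e.getKey(), e.getValue());
                }
            }
            return delta;   // only this subset would go over the network
        }

        public static void main(String[] args) {
            DeltaSender sender = new DeltaSender();
            System.out.println(sender.changesSince(Map.of("QUEUE.A.depth", 12, "QUEUE.B.depth", 0)));
            // Second interval: QUEUE.A is unchanged, so only QUEUE.B is reported.
            System.out.println(sender.changesSince(Map.of("QUEUE.A.depth", 12, "QUEUE.B.depth", 3)));
        }
    }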


Agent-based monitoring requires more security/access on the monitored platform.  Typically, a running agent process requires access to system files, WMQ files, and some degree of WMQ authority.


With the BMM-Monitoring solution, a customer is technically positioned to exploit BMM-Application Transaction Tracing. 
This is because the BMM-Monitoring agent is the same agent used by BMM-Application Transaction Tracing, just with a
different BMM extension loaded into the agent.

 

BMM-Monitoring – Agentless


The most popular reason people choose agentless monitoring is that there is no software to install and maintain on each server running WMQ.  Also, the user does not have to worry about the agent being compatible with the version of WMQ; many times after IBM announces a new version of WMQ, you have to wait for the vendor to release an agent compatible with the new release.


A popular misconception of agentless monitoring is that there is less overhead.  This is partially true in that there is no agent processing (CPU and memory usage) on the WMQ platform.  However, there is still overhead (agentless or agent) within WMQ itself.  WMQ resource consumption is still going to increase (compared to no monitoring), because it will be processing the commands BMM sends to the WMQ command server.  There is still agent processing (CPU and memory usage) on a platform somewhere, just not on the managed platform.

 

Agentless monitoring is popular because many people perceive there are no agents involved in monitoring.  Yes, it is
true no agents operate on the monitored platform.  However, there is an agent/process running somewhere
consuming resources.  So please do not slip into believing agentless monitoring is free (i.e. no resources used).  The
agentless processing is accomplished on a platform external to the platform running WMQ.  This external platform can
be Windows, Linux, or AIX.  The WMQ platform can also be Windows, Linux, or AIX.  Different operating
systems/hardware platforms represent data in different ways. For example, string data may be in a different code page
and numeric data may be represented in a different byte order. These differences will cause the collected performance
data to be converted, some portions by WebSphere MQ, some portions by BMM. There is a performance cost
associated with such conversion. To avoid extra work, it is recommended that you deploy your agentless processes on
operating systems that have similar data representation to the target operating system(s)/hardware platform(s)
wherever possible.
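
To make those representation differences concrete, here is a small, generic Java illustration (not BMM code) of the two conversions mentioned above: an EBCDIC versus UTF-8 code page and big- versus little-endian integers. The Cp037 charset name assumes a standard JDK that ships the extended charsets.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.charset.Charset;

    public class RepresentationDemo {
        public static void main(String[] args) {
            // Code page difference: the same text has different bytes in EBCDIC (Cp037) and UTF-8.
            String text = "QUEUE1";
            byte[] ebcdic = text.getBytes(Charset.forName("Cp037"));
            byte[] utf8   = text.getBytes(Charset.forName("UTF-8"));
            System.out.printf("EBCDIC first byte: 0x%02X, UTF-8 first byte: 0x%02X%n",
                    ebcdic[0] & 0xFF, utf8[0] & 0xFF);

            // Byte order difference: the same four bytes decode to different integers.
            byte[] raw = { 0x00, 0x00, 0x01, 0x00 };
            int big    = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).getInt();
            int little = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getInt();
            System.out.println("Big-endian: " + big + ", little-endian: " + little);
            // Converting between representations on every sample is the cost described above.
        }
    }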


Some configuration features found in the agent-based solution are not available in an agentless configuration.
When running agentless, there is no way to issue commands on the target system, which means that certain
WebSphere MQ operations cannot be carried out. Therefore, the following features are not available with the
agentless solution:

1. Delete Queue Manager
2. Start/Stop Queue Manager
3. Start Command Server
4. Start Trigger Monitor via runmqtrm command
5. Start Channel Initiator via runmqchi command
6. Start Channel Listener via runmqlsr command
7. Display/Configure Object Security via dspmqaut/setmqaut
8. Service Control (amqmdain)
9. MQ Installation Information
10. MQ Environment Information
11. Discard Broker Resources
12. Set MQM Installation
13. Display MQ Version

 

It is important to understand that agentless monitoring may also require more network bandwidth than agent-
based monitoring. This is because all WMQ management traffic goes (both ways) over the network connection.
With an agent-based solution, only data values that change between sample intervals are sent across the
network by the agent on the monitored WMQ host.

 

A target monitored platform may reside inside a DMZ, meaning a BMM agent could have issues communicating
back to the central BMM server.  This condition may require an agentless approach to BMM-Monitoring.

Sometimes the BMM central server supporting all the remote clients (used to manage agentless servers) may have a limit on the maximum number of remote clients that can be supported.  This means you may have to add resources to support more agentlessly managed platforms.

 

One of the biggest drawbacks to agentless monitoring of WMQ is that the monitoring is considered “in-band”.  In-band means the delivery mechanism used to monitor WMQ is WMQ itself.  In other words, how can you monitor WMQ if the WMQ resource (the server-connection channel) itself is down?  Granted, server-connection channels rarely go down, but this potential issue should at least be mentioned.

 

BMM-Monitoring – Hybrid Monitoring – Agent and Agentless Together


This approach offers the best of both worlds.  For your central corporate WMQ servers you get robust, full-bodied WMQ monitoring with agents.  For your tightly constrained servers, you get light to ultra-light agentless WMQ monitoring.


You Make the Call


You make the call: agent or agentless?  As with most things in life, there are pros and cons, and there are tradeoffs (CPU, memory, network bandwidth…) for each approach.  Hopefully this paper has helped explain all the BMM options available for WMQ monitoring.


BMC Software is known for its ability to monitor the health and wealth of a DataPower appliance with BMC Middleware Management-Monitoring (BMM-Monitoring).  But soon after a user has mastered the monitoring of a DataPower appliance with BMM-Monitoring, they begin to wonder about the administration (altering) of the DataPower configuration(s).  How can BMC Software help out here?

 

BMC Software offers the administration of a DataPower device through the BladeLogic Server Automation (BSA) solution.  Through the DataPower XML Management Interface, BSA can take a snapshot of the DataPower appliance, and also make configuration changes to the device.

 

Today, many DataPower deployments are primarily a blend of scripted and manual processes in the WebSphere ESB environment.  This process requires constant care and feeding.  Tedious tasks such as deploying classes, checking on object statuses, and managing deployment policies can become a full-time job with the DataPower appliance.  Additional DataPower administrative tasks such as backing up, resetting, and quiescing the DataPower domains add to the administrative burden.

 

BSA can take the complexity out of deploying applications by effortlessly deploying and configuring settings for J2EE applications across multiple DataPower environments.  BSA can quickly and easily automate the deployment of applications and WebSphere configuration changes, significantly reducing the time and effort to deploy applications and improving consistency across any customer DataPower environment.

 

Whether you have been in IT for 30 minutes or 30 years, you know the mantra of “always have a backup” (perhaps two backups are good, too).  BSA can help here by keeping a current and reliable DataPower backup: it facilitates the automatic backup of your DataPower appliances by taking an automated nightly DataPower snapshot.

 

Do you manage your DataPower appliance or does DataPower manage you?  BMC has the DataPower solutions to monitor DataPower with BMC Middleware Management-Monitoring and solutions to administer DataPower with BladeLogic Server Automation.

 

You make the call.

Ross Cochran

What is APM?

Posted by Ross Cochran, Jul 30, 2013

 


 

APM is Application Performance Management.  It is the term used today to denote all the aspects of tracking, tracing, and troubleshooting end-user response time.  Nobody can seem to agree on exactly how to define APM, but one thing is clear – everyone has their own definition.  Vendors, research firms, and end users all have their own definition of APM.  This paper will define APM in the simplest of terms.  I will not use any vendor-specific product names (including BMC’s).  I will start from the origins of APM and work through today’s take on APM.

 

In the beginning…


In the beginning, there was one computer; pick your flavor – PDP, Univac, IBM, Cray, Cyber…  Remote end users accessed these systems over simple plain old telephone service (POTS) lines.  To upgrade from 150 bits per second to 300 bits per second was a big deal.  With such slow lines of communication, end-user response time issues were usually not caused by the central computer system.


However, as connection speeds in remote locations began to rival local area network speeds, the customer’s method of connecting to the central system became less of an issue.  In parallel with the increasing speeds of communication systems, many distributed systems (UNIX, Linux, Windows…) began to appear between the end user and the central computer system.  The issues of APM just got much bigger: is the end-user response time issue in the network, the distributed systems, or the central computer system?

 

The stage is set


The stage is set; we have super-fast (compared to 40 years ago) networks, distributed systems (web servers, app servers, middleware...), and the still-present central computer system.  You are now primed for massive finger-pointing between all segments of your IT department.  Most people will want to get ahead of the finger-pointing and start collecting end-user response times before the end user calls.  So you first start collecting end-user response times (as seen from the end user’s browser).  You also start collecting response times between the distributed systems and the central computer system; many people call this collecting the "hop-to-hop" response times.  Make sure you collect response time history too (because you know the guy in accounting is going to complain about response time from a month ago).  So you think you are ready to address anything APM can throw at you.


All your end users have their own interpretation of bad response time.  When a bad response time issue arises, will both you and the end user be able to exactly replicate the mouse clicks that are causing the response time issue?  If not, now what?  How do you get a consistent and repeatable way to generate response time?  You got it: synthetic response time.  Deploy droids (not the cell phone type) in the field running consistent and repeatable scripts, and be sure to collect and store their response times.  These droid response times will remove all the emotion and subjectivity associated with live end users.  Plus, with droids not needing bio breaks and vacations, you will now have response times 24 x 7.
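
A synthetic "droid" can be as simple as a scheduled script that replays the same request and records how long it took.  Here is a minimal sketch, assuming a Java 11 or later runtime; the URL is a made-up placeholder, not a real application.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class SyntheticProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(10))
                    .build();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://app.example.com/login"))   // placeholder transaction URL
                    .GET()
                    .build();

            long start = System.nanoTime();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // Record the result with a timestamp so response time history is available later.
            System.out.printf("%tFT%<tT status=%d responseTimeMs=%d%n",
                    new java.util.Date(), response.statusCode(), elapsedMs);
        }
    }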

 

Time to go deep


OK, you think you are 100% APM ready now.  You are collecting and storing end-user response times.  You are collecting and storing the hop-to-hop response times of your end users.  You are collecting and storing synthetic response times 24 x 7.  But what happens when your APM monitoring does indeed report a response time problem?  You see it in your reports: the response times (both real and synthetic) from the ABC application server are bad.  Do you just call up the ABC application server owner and tell them to fix their problem?  Yeah, right!


You now need a way to "go deep" into the ABC application server and make those deep-dive diagnostics part of your normal APM management rhythm.  The answer is to add an application server go-deep tool that will detect when a servlet, JavaServer Page, Java Message Service call, or the like is misbehaving and causing end-user response time problems.

 

Things are slow on the central computer system


OK, you know you have this APM thing mastered.  You are collecting and storing end-user response times.  You are collecting and storing the hop-to-hop response times of your end users.  You are collecting and storing synthetic response times 24 x 7.  You can go deep into an application server and break down the components of application server response time.  But you get the dreaded end-user call saying their transaction on the central computer system has ended abnormally ("abended").  What do you do now?  Your response time reports do show response times from the central computer system were slow.  But that is the heritage central computer system, and that system is never slow, until today!  Just as you added a go-deep problem determination tool to the ABC application server, you need to add a go-deep problem determination tool for the heritage central computer system.  This will give you insight into which transactions are timing out and causing response time issues.

 

APM nirvana


Finally, you have achieved APM nirvana: you determine your end users’ response time before they call you; you can quickly isolate issues to the technology hop causing the response time issue; and you can dig deep into the technology layer for probable cause analysis.


There you have it – APM from the beginning to today.


Enjoy!


Ross Cochran


BMC Middleware Management (BMM) Install Checklist

 

 

Before installing the BMC Middleware Management solution,
use this handy checklist to make sure you are ready to install.

 

 

  • Review and verify that all hardware and software prerequisites have been satisfied and are supported by BMC; these are outlined in the BMM install guide
  • Verify all applicable ports and firewall(s) are open
  • Verify IP connectivity between all applicable pieces of the BMM solution
  • Verify the BMM userid has proper OS security authorization
  • Verify the BMM database has been created and initialized
  • Verify the BMM database ID/password is known
  • Verify the appropriate database client software is installed on the BMM Enterprise Server
  • Verify the ping time from the BMM server to the DB server is less than 5 ms (see the sketch after this list)
  • Verify the BMM software is onsite or has been downloaded
  • Verify all applicable outstanding BMM fixes have been downloaded from the BMC Support site
  • Verify you have a valid BMC Support ID
  • Verify you can log on to the BMC Support site and update your support profile
  • Verify you can see the BMM products on the BMC Support website with your BMC Support ID
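
Two of the items above (the open-port check and the sub-5 ms ping to the database server) lend themselves to a quick scripted sanity check.  A rough sketch with placeholder host and port values, not a BMC-supplied utility:

    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PreInstallCheck {
        public static void main(String[] args) throws IOException {
            // Placeholder values -- substitute your BMM database host and port.
            String dbHost = "bmmdb.example.com";
            int dbPort = 1521;

            // Firewall / port check: can we open a TCP connection to the database port?
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(dbHost, dbPort), 5_000);
                System.out.println("Port " + dbPort + " on " + dbHost + " is reachable");
            }

            // Latency check: the round trip should be well under 5 ms on a healthy network.
            InetAddress address = InetAddress.getByName(dbHost);
            long start = System.nanoTime();
            boolean reachable = address.isReachable(2_000);
            double rttMs = (System.nanoTime() - start) / 1_000_000.0;
            System.out.printf("Echo reachable=%b, approximate round trip=%.2f ms%n", reachable, rttMs);
        }
    }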

So you have heard there is a fix pack out for BMC Middleware and Transaction Management 7.0.  So where exactly is it located on www.bmc.com/support?

 

Please see the attached .ppt for the exact location of the download.

 

Enjoy!

 

Ross Cochran


Introduction

 

Current users of the MQ KM (aka BMC Performance Manager for WebSphere MQ for Distributed Systems or BMC Performance Manager for WebSphere Business Integration) may be wondering how to technically migrate to BMC Middleware and Transaction Management (BMTM) V7.0.  (Please see your BMC Account Manager about migrating your MQ KM license.)  This paper is a high-level “how to” cookbook for migrating from one technology to the other.  Additionally, BMC can have a software consultant help you with a complimentary one-hour migration planning session; just let your friendly BMC Account Manager know you want it.

 

Please see the attached document for more technical details.

 

Ross Cochran


What exactly do I mean by cross launching from BMC Middleware Management (BMM)?  I’ll tell you shortly.  Let’s first lay some groundwork.

 

You’re the MVS sysprog (yes, a legacy term) and the company is rolling out a new MVS-based application.  You know there are going to be response time issues (as with any new application).  You also know the MVS system will probably not be the cause of the response time issues, but you have nothing proving exactly where the potential response time issues are.  So you start wondering: how can you gather response time statistics that cover both MVS and the distributed world?  If only you had a tool that could show end-user response time on the distributed side of the transaction and then cross launch (told you I’d get there) from the end-user response time view into the response time view of the MVS components of the transaction.

 

You now recall how Ross Cochran (your favorite BMC Software Consultant) had been bugging you about looking at BMC’s transaction tracing solutions.  The details of your conversations come into sharp focus.  Yes – this is the stuff Ross has been talking about!

You recall how you could deploy a transaction monitoring touch point in an HTTP server (this is pretty close to your end user) and then start gathering end-user response time statistics in your BMC BMM console.  Of course you will have set up thresholds and alarms on abnormal response times, but more importantly, you now know when the end user is experiencing a response time problem.  But you are still missing something to show whether the problem is distributed based or MVS based.  This is where cross launching comes into play.  Since MainView Transaction Analyzer is already gathering the MVS components of response time (such as the time a transaction spends in CICS, DB2, IMS, and MQ), you need a way of matching the MVS components of response time to the distributed components of response time.  You got it: cross launching!  You can select the offending transaction on the distributed side, right-click and select the MVS component level you wish to view, and cross launch into the MVS world.  You are then logged into MVS (yes, with an MVS userid and password), directly into the specific MVS component details of this specific transaction.  Then you can begin to show MVS was not the cause of the response time issue (which you knew all along).

 

Stay tuned for my next cross-launching post; there I will dig into the details (parms, panels, keywords, TLAs, screens, and tabs) of exactly how to track transaction response time from the distributed world through the MVS world.

 

Ross Cochran


MainView for WebSphere MQ will alert when an MQ channel is down or an MQ queue is filling up.  These types of events are generated when MainView for WebSphere MQ polls MQ for statistics and statuses.  However, many customers need real-time, instantaneous notification of an event.  This real-time approach is accomplished by MainView for WebSphere MQ subscribing to the WebSphere MQ instrumentation events, such as configuration, channel, and performance events.

 

This quick course will show you how to use MainView for WebSphere MQ to monitor the WebSphere MQ instrumentation events.  A support ID is required to access this video. 


WebSphere MQ is largely the same on many operating systems; however, WebSphere MQ on the z/OS operating system has many more parameters and activities than WebSphere MQ on a distributed platform.

 

This quick course covers many of those parameters and activities unique to the z/OS operating system.  These unique parameters and activities include page sets, buffer pools, ZPARMs, and log managers.  A support ID is required to access this video.
