It sounds like you're looking for information on configuring the agent in a hardware clustered environment, right?
There are a number of different ways to do this, depending on exactly what you want out of it.
For example, some users want to view the status of the queue manager regardless of whether it's currently running on the primary or secondary node. Doing it this way allows for continuous history reporting, no matter where the queue manager was running at the time. This involves having a single agent installation which fails over with the queue manager, and setting the agent's hostname parameter to a logical name that reflects this, rather than the hostname of the node.
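As a rough sketch of that hostname choice, here's the decision in Python. Note that the parameter and the cluster name "mqcluster" are hypothetical illustrations, not actual agent settings; the real agent reads its hostname from its own configuration.

```python
# Sketch: which hostname a monitoring agent should report, depending on
# the clustering approach described above. Names here are hypothetical.
import socket

FOLLOWS_QUEUE_MANAGER = True      # single agent that fails over with the QM
CLUSTER_HOSTNAME = "mqcluster"    # hypothetical logical/floating cluster name

def reported_hostname():
    """Return the logical cluster name for a failover-following agent,
    or the physical node name for a per-node agent install."""
    if FOLLOWS_QUEUE_MANAGER:
        # History stays continuous across failovers, since the agent
        # always reports under the same logical name.
        return CLUSTER_HOSTNAME
    # Per-node install: report under the real node name instead.
    return socket.gethostname()
```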
Others want to monitor it based on which node the queue manager is running on, perhaps in order to trigger events when the queue manager is running on the secondary node. This involves having an agent installed on each node, using each node's hostname. The queue manager will only appear to be running on one node at a time. When using this method, you'll want to create Policies and Dashboards to make it easy to view the state of the queue manager.
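To illustrate the per-node approach, here's a minimal sketch, assuming each node's agent can report a simple running/stopped status for the queue manager. The status values and the idea of collecting them into a dict are illustrative assumptions, not a product API.

```python
# Sketch: determine which cluster node currently hosts the queue manager,
# given a per-node status map such as {"nodeA": "running", "nodeB": "stopped"}.
# The status strings are hypothetical, not actual agent output.

def active_node(statuses):
    """Return the node the queue manager is active on, or None if stopped."""
    running = [node for node, state in statuses.items() if state == "running"]
    # In an active/passive cluster the queue manager should only ever
    # report as running on one node at a time.
    if len(running) > 1:
        raise ValueError("queue manager reported running on multiple nodes")
    return running[0] if running else None
```

For example, `active_node({"nodeA": "running", "nodeB": "stopped"})` returns `"nodeA"`, while a map with no running entries returns `None`.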
Some users do both, because they want both the functional overview (e.g. the queue manager is running and available "somewhere"), and the detailed alerting (e.g. Hey! That queue manager has failed over. Why?).
The folks in the support group have helped a number of customers decide what they want, and then helped them figure out how to implement it. They are probably the best resource.
Let me know if you have any other questions.
Thanks for the information, it is really helpful. Right now we have the cluster name as the host name in the console, rather than the active/passive node name.
I have one more question, though: if we monitor the node name the queue manager is currently running on, do we need to add both host names (active and passive) in the Configuration Manager, or will the other node be discovered automatically in the Management Console whenever a failover occurs?
If you go the route of separate agents on the primary and secondary nodes, then yes, you will have two different agents listed in both the Management Console and in the Configuration Manager. Essentially every object shows up twice in both clients, although only one will be active at a time.
You then create a Dashboard, either with the Dashboard wizard, or by using policies, for the critical objects. This "ties" the two instances of the object together, allowing for all sorts of events.
As an example, an event trigger on the Dashboard could send out a warning when the queue manager is running on the secondary node, but send a critical message when it isn't running at all.
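The trigger logic just described can be sketched as a small severity mapping. The node names and severity labels here are hypothetical, standing in for whatever the Dashboard event configuration actually uses.

```python
# Sketch of the event logic above: warn when the queue manager runs on
# the secondary node, go critical when it isn't running anywhere.
# "nodeA" is a hypothetical primary node name.

PRIMARY = "nodeA"

def qm_severity(active_node):
    """Map the node the queue manager is active on to an event severity."""
    if active_node is None:
        return "critical"   # queue manager is not running at all
    if active_node != PRIMARY:
        return "warning"    # failed over to a secondary node
    return "ok"             # running on the primary, as expected
```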
This is all similar to the out of the box Clustered Queue Manager policies.