
by Joe Goldberg, Lead Technical Marketing Consultant, Control-M Solutions Marketing, BMC Software Inc.


One of the characteristics of a true workload automation solution is integrated incident management. Automated incident creation is mandatory in many industries subject to regulatory controls, and almost every enterprise today depends on incident management to ensure problems are recorded, tracked, routed to the correct party for analysis, and eventually resolved.


BMC Control-M offers a variety of methods to interface with incident management systems, including SNMP traps, email, command line interfaces, incident forwarding to event management systems, and built-in support for BMC Remedy via its standard API.


It is this last option I want to discuss here.


There are two possible points within Control-M where incidents can originate: job post-processing, defined on the Steps tab of any job, and SLA event processing, defined on the BIM tab of a “Control-M BIM” job.


To enable this processing, Control-M requires only a few pieces of information:

  • The name of the AR Server hosting Remedy Service Desk
  • The port on which AR Server is listening
  • A user ID and password (the password is stored encrypted)



You may be asking at this point: how does this relate to a SaaS offering?


Well, as it turns out, Remedy OnDemand uses the same technology as Remedy on premise. This means the built-in Remedy support Control-M provides works equally well for both flavors of Remedy.


This should be comforting news for Remedy customers, because many Remedy OnDemand users also run some Remedy on premise. It also enables a migration path, or supports coexistence of the two deployment models for Remedy Service Desk.


There is one minor consideration that needs to be addressed: the network connection between your enterprise and the Remedy OnDemand instance you are using. This connection is required, but establishing it is a minor technical task.


In addition to “just” creating incidents, Control-M can be configured to link alerts to incidents in a variety of bi-directional relationships. In addition to the default “SEPARATE_CLOSE_HANDLE” option, you can choose:

  • HANDLE_ON_CLOSE: Handle the Control-M Alert when the Remedy Incident is resolved or closed
  • CLOSE_ON_HANDLE: Close or resolve the Incident when the Alert is handled
  • BIDIRECTIONAL_CLOSE_HANDLE: Combine the above, so that either closing/resolving the Incident or handling the Alert takes care of the other
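As a rough sketch (this is not Control-M code), the follow-up action each mode implies can be expressed as a simple decision table. The event names ("incident_closed", "alert_handled") are invented here for illustration:

```shell
# Decision-table sketch of the alert/incident synchronization modes.
# Mode names come from the options above; event names are illustrative.
sync_action() {
  mode=$1; event=$2
  case "$mode:$event" in
    HANDLE_ON_CLOSE:incident_closed)            echo "handle_alert" ;;
    CLOSE_ON_HANDLE:alert_handled)              echo "resolve_incident" ;;
    BIDIRECTIONAL_CLOSE_HANDLE:incident_closed) echo "handle_alert" ;;
    BIDIRECTIONAL_CLOSE_HANDLE:alert_handled)   echo "resolve_incident" ;;
    *)                                          echo "none" ;;
  esac
}

sync_action BIDIRECTIONAL_CLOSE_HANDLE incident_closed   # prints: handle_alert
```

The default, SEPARATE_CLOSE_HANDLE, corresponds to the fall-through case: neither side's state change affects the other.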



Adoption of SaaS solutions is growing rapidly. Control-M support for both flavors of Remedy deployment means you can make your choice based on your organizational requirements without compromising the level of automation in handling your workload incidents.


The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software.

Promotion of Jobs Across Multiple Environments

When multiple environments (production, QA, test, etc.) exist and are at the same release level (another best practice!), you can use the Find and Update functionality to make changes and then save them to a portable file (drf or xml). You can access Find and Update in Control-M/Desktop from the Edit menu. Here is a screen shot showing just some of the powerful string match and update capabilities available with Find and Update.



After you have made changes, you can save the resulting definitions to a drf or xml file and load that file into a different Control-M environment. If you regularly make the same updates (like changing various fields from test to prod or dev to qa), enter them once, then save those values in the Presets pulldown. The next time you need to make identical (or similar) updates, you can load the previously saved preset. This can save you a lot of time and reduce the chance of typos. Another option is to use the exportdefjob command to export job data to an xml file, which can then be modified and imported into another Control-M environment using the defjob command. The latter method allows you to perform the promotion in a scripted fashion if you so choose.
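The scripted approach can be sketched like this. The exportdefjob/defjob invocations are left as comments because their exact arguments are version-specific, and the XML content and field values below are made up for illustration only:

```shell
# Sketch of a scripted promotion from test to prod.
# exportdefjob ...   -> would produce test_jobs.xml (placeholder step)
cat > test_jobs.xml <<'EOF'
<DEFTABLE>
  <JOB JOBNAME="DAILY_LOAD" NODEID="testhost01" APPLICATION="TEST_APP"/>
</DEFTABLE>
EOF

# Apply the same kind of substitutions Find and Update would perform
sed -e 's/NODEID="testhost01"/NODEID="prodhost01"/' \
    -e 's/APPLICATION="TEST_APP"/APPLICATION="PROD_APP"/' \
    test_jobs.xml > prod_jobs.xml

# defjob ...         -> would load prod_jobs.xml into the target (placeholder step)
grep -c 'prodhost01' prod_jobs.xml   # prints: 1
```

Because every step is a command, the whole promotion can run unattended from a change-control process.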


Return Code/Sysout Text String Checking

On the Steps tab of a job in the JEF, you can code jobs to take automated action based upon the return code the job returns or the appearance of text strings in the job's sysout file. Some of the available actions are: mark a job as OK or NOTOK, generate a shout, send an email, open a Remedy ticket, or force in a recovery job. To set this up based upon return codes, enter the following on the Steps tab of the job:

On Statement   Stmt=*                            Code=COMPSTAT=4


The above example marks a job that ends with a completion status of 4 as OK (successful), as opposed to the default behavior in this case, which would have marked the job NOTOK (failed).

Note: Check the OSCOMPSTAT value in the log to determine the completion status of the job.


To set this up based upon a string in the sysout file, enter:

On Statement   Stmt=*                            Code=*ERROR*


The above example marks a job that ends with the text string ERROR somewhere in its sysout file as NOTOK (failed), regardless of the job's return code.
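A rough shell analogy of the two checks above (this is not Control-M syntax, just the equivalent logic applied to an exit status and an output file; file names are illustrative):

```shell
# Return-code check: treat completion status 4 as success,
# like On Statement Stmt=* Code=COMPSTAT=4.
status=0
sh -c 'exit 4' || status=$?
if [ "$status" -eq 4 ]; then
  echo "COMPSTAT=4 -> mark job OK"
fi

# Sysout string check: Code=*ERROR* matches the string anywhere in the sysout,
# much like grepping the output file.
cat > job.sysout <<'EOF'
step 1 complete
ERROR: input file missing
EOF
if grep -q 'ERROR' job.sysout; then
  echo "found ERROR -> mark job NOTOK"
fi
```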




I hope you learned something new after reading part 1 of this blog series. In an effort to keep the knowledge flowing, I am happy to present part 2.


Email Options

By configuring your Control-M environment with email parameters, you can email individuals or groups when something happens that they need to know about. There are many parameters you can set related to email configuration. One place you can set these is via ctm_menu; another is in the Control-M Configuration Manager under Control-M/Server System Parameters. Some ways you can send an email are:


- when performing a Do Mail action from the Steps tab



- when a business service is in jeopardy of finishing late (by using Batch Impact Manager)

- when a LateSub or LateTime event occurs, from the PostProc tab, by using a mail-type entry from the shout destination table in the To field


Notification Options (aside from email)

In addition to the email options described above, there are many other ways to find out what is happening in the Control-M environment that you may want to take advantage of. By default, many events (a job fails, an agent becomes unavailable, a data center disconnects) seen in Control-M will generate a shout to the alert window in EM. You can choose to have these alerts also generate an SNMP trap that gets sent to your own site's SNMP host (check the *SNMP parms from the CCM in EM system parameters). You could also choose to invoke a program with the SendAlarmToScript parameter every time one of these alerts is generated. If you have Remedy, you can define a job to take a Do Remedy action to open a Remedy Service Desk ticket. To configure the Remedy parameters, run either the emremedy_configure (EM) or remedy_configure (CTM/Server) command. For more details on some of these notification options, check out this prior blog post from Joe Goldberg: Creative Notification Options.
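A script wired to SendAlarmToScript receives the alert details as arguments. The exact argument layout is version-specific, so the sketch below (with invented field names) simply records whatever it receives and pattern-matches the text:

```shell
# Hypothetical SendAlarmToScript handler: log every alert, escalate on failures.
cat > alert_handler.sh <<'EOF'
#!/bin/sh
echo "$*" >> ctm_alerts.log
case "$*" in
  *"Ended not OK"*) echo "escalating to on-call" ;;
esac
EOF
chmod +x alert_handler.sh

# Simulate Control-M invoking the hook with made-up alert fields
./alert_handler.sh data_center CTMPROD job_name DAILY_LOAD message "Ended not OK"
```

From here the handler could post to a chat webhook, page a duty roster, or open a ticket, whatever your site uses.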


Enterprise Manager User Authentication and Passwords

Default authentication for Enterprise Manager is internal authentication. With the default behavior, users and passwords are defined and stored inside the EM application. System parameters such as PasswordComplexityOnOff and PasswordLifetimeDays can be modified to enforce specific password usage and expiration rules. By changing another set of system parameters in the Control-M Configuration Manager, you can authenticate against your own Active Directory or LDAP source. This can be configured so that users are still created in EM but password authentication is not managed or maintained by EM. You can also set it so that EM groups are used instead of EM users. These EM groups are associated with AD/LDAP groups (which contain users). In this method no users are created or maintained in EM at all, just groups with associated EM permissions, which are associated to AD groups and the AD users contained within. In version 7, the LDAP parameters can be accessed from the CCM.




Best practices are tricky. What is best for Company X is not necessarily best for Company Y. Company Z may have two distinct lines of business they support, and what works for one may not work for the other. There are many great features and capabilities in Control-M. I won't preach to you about exactly how you should use them; I just want to make sure you know the power is there. You can use it as you see fit! With this in mind, I am starting a blog series on things you should know about when working with Control-M. Without further ado….


Standard Naming Conventions

For almost every field in Control-M, the more closely you follow a standard naming convention, the easier your life will be. By using standard naming conventions you make many actions in Control-M easier to perform, like:


  • more easily secure items you want to limit or allow access to
  • quickly create viewpoints or dynamic filters to affect jobs in your view
  • quickly prevent or allow groups of jobs to run (via quantitative resources for example)
  • change massive amounts of job data, even complex strings, with speed and accuracy

Here is a screen shot showing just some of the powerful functionality in Find and Update.
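To make the benefit concrete, suppose jobs follow an ENV_APP_FUNCTION convention (the names below are invented). Selecting a whole class of jobs then becomes a trivial prefix match, which is exactly what viewpoint filters and Find and Update patterns exploit:

```shell
# Made-up job names following an <ENV>_<APP>_<FUNCTION> convention
cat > jobs.txt <<'EOF'
PRD_FIN_DAILY_LOAD
PRD_HR_PAYROLL
TST_FIN_DAILY_LOAD
EOF

# Every production finance job, in one filter
grep '^PRD_FIN_' jobs.txt   # prints: PRD_FIN_DAILY_LOAD
```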



Job Runtime Statistical Data

ctmjsa is a command that should be run on a regular basis. The statistics this utility compiles are very useful when you want to set up jobs to alert when they run too long or too short, by an amount or percentage of time, compared to their average. The system parameter RUNINF_PURGE_LIMIT controls how many of the statistical records are kept and used for the averaging.
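The kind of check those statistics enable can be sketched like this: flag a run whose elapsed time deviates more than 20% from the historical average. The runtimes and threshold below are made up for illustration; Control-M does this for you from the ctmjsa statistics.

```shell
# Prior elapsed times in seconds (illustrative values)
runs="118 120 125 119 121"
latest=180

sum=0; n=0
for r in $runs; do sum=$((sum + r)); n=$((n + 1)); done
avg=$((sum / n))             # average of the historical runs
limit=$((avg * 120 / 100))   # alert threshold: 120% of average

if [ "$latest" -gt "$limit" ]; then
  echo "LATE: ${latest}s exceeds 120% of average (${avg}s)"
fi
```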


Sysout/Log Retention

By default, one day's worth of job sysout and two days of log are kept by Control-M. Modify two system parameters (SYSOUTRETN and IOALOGLM) to increase the amount of sysout and log retained by Control-M. Then, when viewing an Archived ViewPoint (discussed below), you can see not only what the job flow looked like on a prior day, but also access the job sysout and log from that prior day, all without leaving Enterprise Manager.


Archived Viewpoints

To see what happened to jobs on a prior day, load an archived viewpoint from the Enterprise Manager GUI (select File - Open Archived ViewPoint). The MaxOldDay and MaxOldTotal parameters (found and set inside the Control-M Configuration Manager) control how many of these old networks are kept. If you match your sysout/log retention values with the Archived ViewPoint retention values, you can not only see which jobs ran when on those prior days, but also access their sysout and log information.




To specify details for a new Dollar Universe conversion project

  1. In the Conversion project general information area, specify the following fields:
    1. Project name - Logical name that will be used for managing the conversion project.
    2. Project description - Free text description of the purpose of the conversion project.
  2. In the Scheduling data import area, specify the following fields:
    1. Session List File - List of sessions defined in the Dollar Universe node. For each session, the volume and description are displayed. Create the session list file by running the following command in the Dollar Universe environment:

      uxlst ses ses=* vses=* full

    2. Session Details File - Detailed description of each session. It contains the lists of UPRs and the dependencies between UPRs that are defined with each session. Create the session details file by running the following command in the Dollar Universe environment:

      uxshw ses ses=* lnk vses=*

    3. Task File - Detailed description of the task parameters corresponding to the UPR or session that is running. Create the task details file by running the following command in the Dollar Universe environment:

      uxshw tsk upr=* mu=* ses=* vupr=000 nomodel partage

    4. UPR File - Set of parameters and conditions that define the UPROC functionality. Create the UPR file by running the following command in the Dollar Universe environment:

      uxshw upr upr=* vupr=* partage

    5. Rule File - A rule is an algorithm that translates the required execution periodicity of a task. Create the rule file by running the following command in the Dollar Universe environment:

      uxshw rul rul=*

    6. Resource File - Contains the resource definitions. Create the resource file by running the following command in the Dollar Universe environment:

      uxshw res res=*

    7. Calendars File - A calendar definitions file. Create the calendars file by running the following command in the Dollar Universe environment:

      uxshw cal exp mu=* since=2010 before=2014 nomodel

    8. Management Unit Details File - Detailed description of each management unit. Create the management unit details file by running the following command in the Dollar Universe environment:

      uxshw mu mu=*

    9. Agent Mapping File - Mapping file (UXSRSV.SCK) between agent logical and physical names. Copy the UXSRSV.SCK agent mapping file from the MGR folder in the Dollar Universe environment.
  3. To perform the scheduling data import, click Import Scheduling data. The conversion data is captured and the assessment stage is now enabled.
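The export steps above can be collected into one reviewable script. The run() wrapper below only prints each command; remove the echo to execute them inside a real Dollar Universe environment, redirecting each command's output to the corresponding input file (file handling and the MGR path are illustrative):

```shell
# Dry-run wrapper: prints each export command for review instead of running it.
run() { echo "$@"; }

run uxlst ses 'ses=*' 'vses=*' full                        # session list
run uxshw ses 'ses=*' lnk 'vses=*'                         # session details
run uxshw tsk 'upr=*' 'mu=*' 'ses=*' vupr=000 nomodel partage  # tasks
run uxshw upr 'upr=*' 'vupr=*' partage                     # UPRs
run uxshw rul 'rul=*'                                      # rules
run uxshw res 'res=*'                                      # resources
run uxshw cal exp 'mu=*' since=2010 before=2014 nomodel    # calendars
run uxshw mu 'mu=*'                                        # management units
run cp MGR/UXSRSV.SCK .   # agent mapping file; MGR path is a placeholder
```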



