The new Control-M Getting Started Guide is now live on the Control-M Communities site!


Whether you're new to Control-M or a long-time user, you can benefit from our Getting Started Guide, full of easy-to-consume information and links to more detailed how-to documents and videos.


Some key features of the Control-M Getting Started Guide include:

  • Navigating Control-M video – Control-M subject matter expert Robby Dick welcomes viewers and walks them through the Control-M interface in a brief introductory video.
  • What Control-M can do videos – A library of high-level, 2-3 minute videos that highlight key Control-M capabilities.
  • ‘Connect-With’ video series – For a deeper dive into the technical details of Control-M, you can watch our best-in-class Control-M Support team’s Connect-With video library on YouTube.
  • Communities discussion forums – You can post questions directly to the Control-M Communities site, start a conversation, or suggest new content that would add value for users of the guide.


You can access the Control-M Getting Started Guide here, on the "Learn" page of the Control-M Communities site.


This is a community-driven resource, so please direct any feedback to Criss Scruggs.


So there we were; pioneers on the brave new Isle of Dogs. Shift work may have been playing havoc with our circadian rhythms but the only hardware failure that really fazed us was if the coffee machine went on the blink.


I watched as Canary Wharf emerged from a hole in the ground and waited for the painfully slow Docklands Light Railway to develop (replacement bus service only at weekends). The local population quickly realised that we were but the initial wave of island invaders. We were welcomed into the local pubs but left in no doubt that we were the outsiders, the arrivistes.


It did help that, in our number, we could call on several locals to ease any nascent tension. Dave was from Greenwich, Harry from Chalk Farm. Both were shift leaders and more than happy to pass on their technical knowledge. In fact, Harry was in the process of joining the Operations Analysis group and night shifts would soon be a distant memory for him. Harry, despite being something of a rough diamond, knew his stuff. He had recently discovered SAS analytical software and was busy producing graphs for everything that moved in the datacentre.


On the mainframe itself we had recently started running a product called CA-7. I knew what CA-1 was, that was our tape management system. Was CA-7 something similar? No, CA-7 was a batch job scheduler. I had no idea what this was but I soon realised that its introduction had caused more ripples than the average started task. Anyway, for now it sat there, resolutely doing nothing at all while its implementation was debated.


The first obstacle was that we needed training on this new product. Just getting people off shift and into a 9-5 training course proved hard enough. Eventually a date was agreed and we were joined by the consultant who gamely attempted to train us. He explained that CA-7 executed jobs and, upon successful completion, would automatically run the next one in sequence. But wait, there’s more - CA-7 could run variations in the schedule, according to the calendar. For example, if you had special reports that ran on the last day of the week, simply use a CA-7 calendar to submit these only on those dates.


This started to sound a lot like my job. CA-7 was, in fact, me - but without the dangerously high levels of caffeine input. Ah, but no, not really. Could CA-7 deal with program failures, lack of disk space, network issues, corrupt files? No, the operator needed to be there, vigilantly watching every step of the way and quickly putting the whole thing back on the rails when needed.


Our teacher explained the fundamentals of scheduling, articulated in ways that I still use today. As the course wore on, I thought that it might be better to accept this advance as part of my future. At lunch I sidled up to Dave. “Well, what do you think of CA-7?”


“Not a lot. Anyway, it doesn’t matter, I won’t be using it.”


“Why not, it’s the future, isn’t it?”


“Job scheduling? I have a job and I don’t need it scheduling away. If you’ve any sense, you’ll avoid it.”


Dave felt that CA-7 was the “thin edge of the wedge”: once it was accepted, our services would be surplus to requirements. I pointed out that although the mainframe was the majority of our work, we also had plenty of other systems that needed our skills (half-a-dozen DEC systems, Wang word processors, various standalone systems). And somebody would be needed to configure the schedules in CA-7 regardless. Dave was unmoved.


“Listen, I didn’t spend the last 10 years learning how these systems work just for some Johnny-come-lately to appear and tell me we’re going to do it all differently.”


“But we’re doing a job that didn’t exist 20 years ago, what if our predecessors took the same approach?”


Dave shrugged, “I am happy with my technology. The next kind of technology will cost us our jobs.”


I spent the early part of the afternoon session worrying about Dave’s comments. My world revolved around the job; the idea that I was not personally central to the company’s future plans concerned me deeply.


By the time we reached the afternoon break I was relaxing slightly. CA-7 wasn’t exactly user friendly and it seemed more suited to complex batch environments where hundreds, possibly thousands of jobs ran. I asked Harry for his thoughts. He was, after all, our new Operations Analyst and surely more inclined to welcome new technology?


“Nah, can’t see the point of it. If I want to run a sequence of jobs automatically then there are plenty of other ways to do it without buying a specialist tool.”


“But what about the calendars?”


“Don’t worry, I know when it’s Friday. Listen, you’re going to spend months configuring this thing and then have the overhead of maintaining it and what does it really give you? Saving 20 minutes every batch isn’t going to change the world.”


So there we were, again. On one side my colleague thought he would lose his job to this innovation; on the other, our technologist doubted the return on investing time in it. CA-7 was friendless.


In the end (and after some negotiation) CA-7 was used to submit a new batch of some 20 jobs that ran around 01:00. As with many new systems, the code in the batch was prone to failure and required manual fixes and subsequent re-runs. “Told you,” opined Dave, “load of rubbish.” But the code failures weren’t the fault of CA-7, I pointed out. Nevertheless, “CA-7 problem” would often appear in the shift log and the system remained on the periphery.


I didn’t realise it at the time, but suspicion, doubt and general dislike are routine barriers to automation of all varieties. How to overcome these issues was a question for later in my career. For now, I simply added CA-7 to the bottom of my CV and got back to work.


Anyway, I had other things to occupy me - I was off to Switzerland.


After publishing this content as a Control-M Community document, I decided to open it up as blog content for comments, so that Control-M administrators and operators can share issues, histories, troubleshooting nightmares, and thoughts on how the current User Daily process could be made better, and suggest ideas for enhancing it within today's Control-M capabilities.

I will share my thoughts, and I hope you share yours too. Enjoy and Share.



Uncovering Control-M User Dailies Secrets - What is a User Daily, and how to define, run, and manage them


What is a User Daily?


A User Daily is an abstraction that Control-M uses to group SmartFolders/Folders of jobs under a common "entity name", defined by the Control-M user (optionally at the moment the job and folder objects are defined). Its main intent is to split the overhead of the SYSTEM Daily into multiple loads throughout the batch window, so that each group can be ordered as a single operation at its own time during the production day instead of waiting for the NDP (New Day Procedure). Each User Daily is ordered by a job that runs the ctmudly utility with the named User Daily as a parameter.


The User Daily entity name is 1-10 characters long and can also be defined or updated after the folders or jobs are created and checked in to Control-M EM.


Because it is only an abstract entity name, there are unfortunately some challenging steps to address before you can take advantage of this capability: the grouped set of job folders is only ordered automatically once a mandatory User Daily job running the ctmudly utility is defined in Control-M and adjusted to execute at the specific time the user or department needs.


How to Plan and Define a User Daily - Walking Through the Documentation


  • First: The User Daily entity is not subject to any naming or security standard, and as a Control-M EM user there is no easy way to realize it exists. From the job-planning user's perspective, it is also not obvious whether a new User Daily is really required, or whether defining one should be restricted and avoided, because the Daily concept is implemented through the "Order Method" folder property set when defining or updating a folder in the Planning domain, a property that is very easy to forget.

The Order Method property and the meaning of its options are almost always overlooked by the job-planning user when job folders are being defined, because it is controlled by a Control-M EM system parameter named AutomaticOrderMethodByDefault, whose default is 1. That default means the folders/SmartFolders currently being defined will belong to the SYSTEM Daily, i.e. Automatic (Daily).

AutomaticOrderMethodByDefault: Determines whether the default Order Method for newly created folders is automatic or manual. Valid values: 1 = Automatic Order Method (Daily); 0 = None (Manual Order). Default: 1
  • Second: Once a User Daily entity is "defined", you need to validate whether a User Daily job is already active for it; otherwise the Control-M administrator will need to define an extra job entity that runs the ctmudly utility with the daily name as a parameter, so that your job folders are effectively scanned automatically and eligible jobs are placed into Active when the time is right.

As you can see, the process of splitting the daily load into small dailies, supposedly flexible enough for any user to create their own daily load on the Control-M platform, is not as straightforward as it should be; it should certainly be enhanced, simplified, and better managed in future BMC Control-M releases as part of the CCM management capabilities.

Steps required to define, plan, and run a User Daily:

1) Create a folder, select "Specific User Daily" from the "Order Method" options list, and provide a 1-10 character name.

a) Create a Folder and select "Specific User Daily"

When defining new job folders in the Control-M site, if this Order Method property is not noticed or touched, it can potentially overload the SYSTEM Daily with too many folders to be scanned and jobs to be placed into Active during the NDP.

Order Method
Defines the method for ordering the entity as one of the following:
  • Automatic (Daily): When set to Automatic, at the same time each day (known as New Day time), each Control-M/Server runs a procedure called New Day. This procedure performs a number of tasks, including scheduling the day’s jobs and running maintenance and cleanup utilities. The New Day procedure orders the folder or folder jobs.
  • None (Manual Order): The folder is not automatically ordered.
  • Specific User Daily: Identifier used to assign the folder to a specific User Daily job. The User Daily Name is ordered at a specific time of the day. For load balancing purposes, the User Daily jobs are scheduled for different times, throughout the day, other than the New Day time.

(Control-M Documentation - BMC Documentation)

b) Provide a NAME for the desired DAILY

Please note that you only have 1-10 characters to provide a meaningful name for the daily, or you can select from the dynamically generated list of distinct daily names already associated with job folders loaded from Control-M.

User Daily name

Defines User Daily jobs whose sole purpose is to order jobs. Instead of directly scheduling production jobs, the New Day procedure can schedule User Daily jobs, and those User Daily jobs can schedule the production jobs. Set User Daily Name when Order Method is set to Specific User Daily.

(Control-M Documentation - BMC Documentation)



2) Create jobs belonging to the recently defined folder so they become part of the desired User Daily, and "check in" your jobs to Control-M EM.


Once the job folder with at least one job entity is loaded to Control-M EM, the new User Daily name will be displayed as an option for selection in the "User Daily Name" list.


Now that we have added a job folder assigned to a user-defined daily, we need to define the User Daily jobs as part of the SYSTEM Daily, to allow the defined daily to be scanned.


In this example, we are defining 3 daily jobs as part of a job folder assigned to the SYSTEM Daily; that is the last configuration step required to get the User Dailies working appropriately. The User Daily jobs must run the ctmudly utility as the Control-M/Server user, and an agent local to the Control-M/Server is required to submit the ctmudly command for execution.


a) Create a job under the newly created folder and provide a job name, description, command, and Run As user. The Host/Host Group must point to the local agent, or be left blank.



a) Define a folder name and Order Method

b) Provide mandatory job details

c) Adjust the appropriate time for each specific daily job


Define User Daily Jobs

Choose Automatic (Daily) or SYSTEM




Now, after "checking in" the ORDER_USER_DAILIES folder to Control-M EM, at the next NDP the SYSTEM Daily will load the UDLY_ORDER jobs.


The UDLY_ORDER jobs will be placed into Active Jobs and will be submitted to run at the planned "From Time". When the "From Time" arrives, each UDLY_ORDER job will execute ctmudly with its specific daily name as a parameter; the job folders associated with that named User Daily will be scanned, and the jobs eligible to be ordered will be placed into Active.
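The ordering flow above can be sketched as a small wrapper around the ctmudly call. This is a hedged illustration: the FINANCE daily name is invented, and in a real UDLY_ORDER job the command line would simply invoke ctmudly on an agent local to the Control-M/Server.

```shell
# Sketch of what a UDLY_ORDER job effectively does (the "FINANCE" daily
# name is an invented example). The real job simply runs ctmudly; here we
# validate the 1-10 character name limit and echo the command instead.
order_daily() {
  name="$1"
  len=${#name}
  if [ "$len" -lt 1 ] || [ "$len" -gt 10 ]; then
    echo "invalid User Daily name (must be 1-10 chars): $name" >&2
    return 1
  fi
  # In production this line would be:  ctmudly "$name"
  echo "ctmudly $name"
}

order_daily FINANCE   # prints: ctmudly FINANCE
```

Scheduling one such job per department (each with its own "From Time") is what spreads the ordering load away from the New Day time.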


I hope you all like it and find it helpful.


My Best

Adriano Gomes



Control-M is continuously enhanced to better serve the needs of developers, business users, and operations. Additional enhancements benefiting these groups were made in March 2019 with Control-M 19, and this article will focus on just three major feature sets among the many delivered with this version. Let’s review them.


1. Enhanced Control-M Automation API 

For customers embedding Control-M workflows in their CI/CD pipeline, Control-M Automation API continues to evolve to give them all the control and ownership they need to accelerate application delivery.


Configuration Services

If you are part of the development team, your mandate is delivering better applications faster. Because business applications are very often made of workflows (dependent jobs and tasks that must execute in a specific order), we assume you are taking advantage of Control-M Jobs-as-Code. If so, you are embedding the orchestration of those workflows as a code artifact as early as possible in the SDLC, shifting build and test activities left. But if you think about the overall SDLC process, you don’t want to just build, run, and test application workflows; you also want to be able to configure the environment where those workflows are going to run.


What Control-M 19 provides on top of Jobs-as-Code capabilities is the ability to configure and secure a Control-M environment through Control-M Automation API. In a self-service and fully automated fashion, you can define authorizations for roles, users, and LDAP groups, and manage "run as" users. With that, you get more ownership over all the processes involved with the development of your applications.




  • Automatic onboarding – if you are the administrator of your development team and a new developer joins, you don’t need to wait to get them on board. You can automatically and dynamically onboard the new developer to Control-M, assigning the proper role or authorizations to control access to Control-M resources through Automation API.
  • Manage jobs on any machine – you can manage “run as” users, so you and your developers get immediately authorized to manage and run specific jobs (e.g. Windows jobs) and orchestrate your application workflow on any machine.
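As a sketch of what automated onboarding can look like, the snippet below builds the kind of JSON payload an authorization call would carry and shows, commented out, where an Automation API request would go. The role name, LDAP group, and endpoint path are illustrative assumptions, not documented values; consult the Automation API reference for the exact service names.

```shell
# Build an authorization payload for onboarding a new developer.
# All names here (dev-team, ldap-dev-group, and the endpoint path in the
# comment below) are illustrative assumptions, not documented values.
make_role_payload() {
  role="$1"; ldap_group="$2"
  printf '{"role":"%s","ldapGroup":"%s"}' "$role" "$ldap_group"
}

make_role_payload dev-team ldap-dev-group

# The payload would then be sent to the Automation API endpoint, e.g.:
# curl -k -H "x-api-key: $TOKEN" \
#   -d "$(make_role_payload dev-team ldap-dev-group)" \
#   "https://<em-host>:8443/automation-api/<authorization-service>"
```

The point is that the whole step is scriptable, so onboarding can run from the same pipeline that deploys the team's workflows.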


Deployment Services

When it’s time to push your application from the development to the production environment, whether you are a developer or a DevOps engineer, you may want to automate the deployment process. The Control-M Automation API deploy descriptor not only allows you to automatically move application workflows across staging environments; you can also change application workflow properties to make them comply with the Control-M destination environment they are going to be moved into.


Control-M 19.1 enhances the deploy descriptor by including any job type (built through Application Integrator) in the deployment mechanism, thus extending automated deployment to more application workflows.
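Conceptually, a deploy descriptor applies transformation rules to workflow definitions as they move between environments. The sketch below imitates that idea with a plain sed substitution; the job JSON and host names are invented for illustration, and the real descriptor is a JSON rules file consumed by the Automation API deploy service rather than a sed script.

```shell
# Stand-in for a deploy descriptor rule: rewrite the dev agent host to
# the production one while promoting a workflow definition. The job and
# host names below are invented for illustration.
promote_to_prod() {
  sed 's/"Host": *"dev-agent"/"Host": "prod-agent"/g'
}

echo '{"DailyReport": {"Type": "Job:Command", "Host": "dev-agent"}}' | promote_to_prod
```

The advantage of the descriptor over ad-hoc rewrites like this one is that the rules live alongside the workflow code and are applied consistently on every promotion.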


Provisioning Services

In addition to configuration and deployment, Control-M 19 exposes provisioning services to developers, allowing them to consume Control-M in a dynamic way and run only what they need, at the pace they need, when they need it. Now they can provision a full Control-M stack, including selected plug-ins, and perform HA-related operations.


This capability is even more relevant when developers build their applications in containers, using platforms like Docker, Kubernetes, and Mesos. Containers are ephemeral: they can be generated, stopped, destroyed, and rebuilt at high speed. The new way they introduce to deploy applications does not require a dedicated Control-M environment; rather, it requires the Control-M infrastructure, as well as the other objects the application needs to run, to be instantiated immediately, as opposed to going through tickets and change management cycles.


If we consider that workflows are an integral part of applications, allowing applications to immediately run and deliver value once the container is deployed, we can understand the importance of making Control-M part of the containerized approach.


Provisioning of the Control-M stack enables two use cases that we call “ephemeral” and “rehydration”, inheriting names first used by one of our customers.


  • Ephemeral – provisioning of a full Control-M stack on a server or container, for dynamic application workflow orchestration
  • Rehydration – provisioning of a full Control-M stack on a server or container, replacing an older version. In this scenario our customer, who has a mandate of periodic refresh of all containers, instead of upgrading Control-M, replaces the old container with a new one, which includes a newer Control-M version.

2. Enhanced Self Service

If you are a business user, you likely want to get immediate answers related to your business application workflows, and you want to get those answers on your own. Control-M Self Service is what you need to obtain visibility, control, and responsiveness in a context you can easily understand.


Starting from Control-M 19, business users are able to use viewpoints as part of Self Service. Viewpoints further enhance visibility, control, and comprehension by focusing on relevant and frequently accessed information. Users can define private viewpoints by filtering on specific users, exceptions, or other workflow properties. Viewpoints can be used ad hoc, saved for ongoing use, or applied to historical data. They can be organized in a hierarchy view and a new tree view:


  • Hierarchy view – supports an easy understanding of how workflows are linked to the business they serve and allows zooming in and out depending on the level of detail you want to see. This view helps you immediately understand the impact of a workflow failure on the business.
  • New tree view – supports an easy understanding of dependencies between jobs and job statuses, so it is most relevant when you want to navigate forward to see how your job failure will impact future jobs, or navigate backward to see which job failed first and caused your current job failure.


3. Enhanced Control-M Workload Change Manager

If you are part of the operations team, you are challenged to maintain strict control and governance over the production environment. What makes your job more challenging is the increasing agility in your organization, with more developers working in a decentralized environment and demanding more development ownership.


How do you ensure that all developers adhere to operational standards when they code and maintain their application workflows through Control-M Automation API? How do you reduce failures and avoid rework for developers and operations?


To help you with this challenge, Control-M Workload Change Manager has been enhanced with more automated control and enforcement through site standards. With Control-M 19 you can define more powerful and flexible site standards, and guide developers to follow them when building and releasing workflows. 


This includes the following major enhancements:


  • New job attribute site standards – site standards can now be applied to all Control-M job attributes except application attributes
  • Conditional site standards – you can apply restrictions on a certain job attribute conditioned on another job attribute value. For example, it’s very common to use naming convention for applications, to make them memorable or searchable. Suppose you want to align all application artifacts to the same naming convention, including application workflows and jobs. You can use conditional site standards to enforce job naming convention for jobs belonging to a specific application.
  • Must have rules – you can force users to define certain values for multi-option attributes such as conditions, resources, notifications, variables, and more. For example, SQL query jobs have a database resource prerequisite. Must have rules allow you to force users to link those jobs to the database quantitative resource, preventing them from running (and failing) if that resource is unavailable or consumed.



Control-M has evolved and continues to evolve to solve the challenges of digital business. While infrastructure, data, and applications change as digitalization requires, Control-M supports those changes and simplifies the adoption of the technologies and processes the digital change requires. With its adaptive approach, Control-M helps developers, operations, and business users deliver innovation at the speed the business requires.


A long time ago, in an IT environment far, far away … I began my career in Computer Operations. Across the IT industry this band of brothers (and no few sisters) has frequently been re-branded with the words “information”, “systems” or “infrastructure” imposed upon it. However, the original title that I encountered is hard to improve on; Computer Operations Group, a small but essential wheel in supporting and supplying platforms for the business user.


Back in that distant galaxy, Computer Operations was often a miscellany of high-end activities (who wants to do an emergency point-in-time database recovery at 3 a.m.?) mixed with the mundane repetition of handling tapes and distributing printed output. At the heart of everything was the overnight batch. An art in itself, the batch was a constantly changing creature, often growing to the point where it threatened to impinge on that sacred resource, online business hours. The batch would plow on overnight, like a hulking cruise ship and (with a bit of luck and judgement) come dawn it would find itself safely berthed.


The batch run was mapped out in a paper-based checklist and your best chance of avoiding icebergs was to use the latest checklist version. My employer invested in a top-end Xerox Documenter (children, this was what desktop computing was meant to be before Apple and Microsoft performed their audacious heist) and I enthusiastically dedicated myself to maintaining the checklist on my outsized Xerox screen. The system was ideal for drawing flow charts and I frequently asked (i.e. bugged) our Operations Analysts if it was possible to shorten the total batch run time by making the flow more efficient. If an update job only had to wait on a few specific files then, under my watch, it would be run as soon as those new files were available.


My eagerness to optimise the batch was not entirely altruistic. True, we did only have a small window in which to resolve job failures (about 2 hours leeway on a weekday night) but I had quickly come to learn that I had my limits when it came to shift work. We could, on average, finish the batch by 4 a.m. This was the point at which I would hit “the wall” – a state of extreme tiredness that would see my shift leader banish me from the bridge area (where all the system consoles flickered on 3270 terminals) to the relative safety of the tape library or the print room. After 4 a.m. I would wander, zombie-like, struggling to replace the ribbon on a huge printer or trying to find a tape cartridge from the 7,000 available in the racks.


The ideal night shift, for me, was to sign-off the end of the batch at 03:59 and repair to the drawing room. Actually, we had no drawing room but what we did have was a coffee area with the comfiest sofas that I have ever encountered. Perhaps they weren’t actually the comfiest ever but all I know is that as soon as my head touched the armrest then I was out like a light. The datacentre was located high in one of the first towers to be built in Docklands, so I would drift off to sleep whilst watching nocturnal London twinkle.


All I needed was 60 minutes sleep and then I was ready to go again. That was just as well, because this was no holiday camp, no sir! The online systems were required to be restarted by 6 a.m. sharp. Databases needed to be made available followed by the ubiquitous CICS systems, the DEC-based systems fired into life and a range of other, less reliable, systems demanded various levels of assistance before they could face the business day. We even had an IBM XT PC (what for, nobody knew, it was certainly no Xerox Documenter).


By 8 a.m. we were happy to hand over to the day shift and head home, another batch put to bed, another checklist completed. However, our cosy world was about to be rudely disturbed by the constant roil of technology - and the blot on the Computer Operations horizon was something called batch job scheduling.


Hello Control-M Family,


We are happy to announce a new 'Meet The Champions' blog post series, to spotlight those awesome members who dedicate their time to helping other members of our community. You might have seen them replying to your questions (or others'), and sharing wisdom to make this a better place to be. We thank them, and we feel it is time the whole community got to know them better! Spotlighted champions will also be invited to be part of an exclusive community, where they can interact with other champions on improving the overall BMC Communities experience.


In our very first edition, Adriano Gomes from Brazil, talks about his experience with BMC Communities, personal life, and more!




Q. Do you remember how you were introduced to BMC Communities? What was your journey like?

I was in a hurry to answer my BMC Client Management prospect question related to Virtual apps and Certificates, I asked the questions, I got answered, and never came back to say thank you. What a shame!


I have found out that the right answers makes us Eternal. Thanks BMC Communities committed members.


Q. Tell us a bit about your work and goals?

I learned from an elder friend, “work for yourself”! I make my job “my own” every day and I take from it what is best for me.


My goals are not part of my Job, they are under my decision to sow. I am committed to what I decide to sow.


Q. What draws you to participate in BMC Communities?

Giving Back the years BMC invested on me! Taking people out of the mud of “not knowing the right steps to make things work” and shape my Blades.


Q. Did you make any new friends in BMC Communities? Do you have any stories to share?

Indeed, I am very new to Community, since 8/2018. Friends Not Yet, but some free will Followers!


I enjoyed to find the names of my BMC heroes and have my “Follow” requests being accepted by the Top Community Performers.



Q. Do you have any message for the new members of BMC communities?

Don’t keep your questions unspoken! Ask, Seek, and Help yourself out of the pain !

(For every one that asketh receiveth; and he that seeketh findeth; Luke 11.10 KJV)






Q. What  is your favorite movie(s)?

Kingsman: The Secret Service

(“Save the World is priceless!”)


Q. Who is the greatest player in your favorite sport?

Cristiano Ronaldo


Q. What was the best vacation you have had?

2018 first time to Porto Seguro (Bahia) by driving with the Family.  Lots of time to talk and smile together.


Q. How do you like to spend your spare time?

First (still learning and improving on this subject), Listen to My three women’s

Then, play Brazilian football with a distinct group of friends and watch Netflix series, currently "Shadowhunters: The Mortal Instruments"



Q. If you could pick one thing that could be made better in BMC Communities, what would be it?

Native Mobile Community App!! (Console Messages, Direct Chat, Voice dialogs).

Android for me, please!



Thank You Adriano for all the wonderful work you are doing here!

Community members, please make sure that you visit Adriano's profile, and click 'Follow' (in 'inbox' if you wish to be notified on all activities) to be in touch with him, and be updated.

If you have had an interaction with Adriano that helped you, feel free to share with us in the comments below!


This is the final blog of our 5-part series in which we are answering questions attendees submitted during our live Control-M 19 launch webinar on March 26. In case you missed them, here are links to the previous four blogs:


  • Blog #1: Upgrade and version-related questions
  • Blog #2: Container and cloud-related questions
  • Blog #3: Questions on applications, file transfers and Control-M’s web interface
  • Blog #4: Questions on Control-M Workload Change Manager, Configuration Manager, and Automation API


Today, we are going to answer all remaining questions.



Q: Have you made Control-M recovery improvements with Version 19?
Kafka is available for microservices, and several web servers can now run concurrently on different nodes to address high availability and load balancing, with a load balancer/reverse proxy in front of them.


Q: Control-M is becoming more widely used by everybody (for DevOps and agile development, etc.). The registration for user management is still a manual process though. Can this be automated?
User management is available with Control-M 19 as part of Automation API.  If you are using LDAP, the association of LDAP to a role can be managed via API/CLI, allowing you to perform mass updates and automate these processes.


Q: Is Control-M’s reporting facility now web-enabled?  Do users have to have administrative privileges in this version to use/execute reports?
The reporting facility is not web-enabled yet. It currently uses web technology that is embedded within the desktop client. We aim to make it web-enabled in a future release.  Users do not need to have administrative privileges to run reports.


Q: Can we see an example of the reporting facility?
Here is a link to a video that will give you a look at the new reporting facility. It also explains how to migrate from the old reporting facility.


Q: We have installed Version 19, but cannot see the connection profiles listing. What should we do?
Please open a case with Customer Support.


Q: Is it possible to pass parameters from the job to script? If so, how is this handled?
If you are referring to moving parameters from one job to another job that is running a script, you have the ability to define variables (global, pool, local).
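To make the variable mechanism concrete, here is a hedged sketch of a job definition in Control-M's JSON notation with local variables substituted into the command line. The job fields, variable names, and the toy `substitute` helper are illustrative; the server performs the real `%%name` substitution, and pool/global variables use their own syntax described in the product documentation.

```python
# Hedged sketch: a job in Control-M-style JSON notation defining variables that
# are substituted (as %%name) into the command line. Names and values are
# illustrative assumptions, not production definitions.

job = {
    "Type": "Job:Command",
    "Command": "process.sh %%REGION %%RUNDATE",  # %%VAR is substituted at run time
    "RunAs": "batchuser",                         # assumed run-as user
    "Variables": [
        {"REGION": "emea"},
        {"RUNDATE": "20190326"},
    ],
}

def substitute(command, variables):
    """Local illustration of the %%variable substitution the server performs."""
    for pair in variables:
        for name, value in pair.items():
            command = command.replace(f"%%{name}", value)
    return command

print(substitute(job["Command"], job["Variables"]))  # process.sh emea 20190326
```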


Q: When will Control-M be RHEL7 compliant?
Control-M supports RHEL 7 for all components (EM server, Control-M server, agent, applications).


Q: Does licensing remain the same with Control-M 19?
Our current solution package is called Control-M Platform.  Workload Change Manager, Workload Archiving, MFT and MFT Enterprise are available add-ons to this package.  Please contact your account manager for additional information.


Q: Does password encryption work with cold and hot backups in batch jobs?
We need more details on this question; please comment below.



Thanks to everyone who attended the webinar and asked all these great questions! We hope the answers we’ve provided have helped! If you still have any questions on Control-M 19, comment below and we’ll get to work on it!


If you missed the live webinar, you can watch the recording here.

Share This:


This is the fourth in a 5-part blog series in which we are answering questions attendees submitted during our live Control-M 19 launch webinar on March 26. In case you missed them, here are links to the previous blogs:

  • Blog #1: Upgrade and version-related questions
  • Blog #2: Container and cloud-related questions
  • Blog #3: Questions on applications, file transfers and Control-M’s web interface

Today, we are going to answer questions on Control-M Workload Change Manager, Configuration Manager, and Automation API.

Control-M Workload Change Manager

Q: Do site standards police Jobs submitted to Control-M "outside" of Control-M? If yes, are they cancelled at time of submission?
If by “outside,” you mean jobs not being managed by Control-M, then no. If the jobs are running in Control-M, (whether they got there through traditional creation via the GUI or as-code through Automation API) then yes, site standards are enforced.

Q: If you do not have Control-M Workload Change Manager are you still able to apply site standards?
No. Site Standards are part of Control-M Workload Change Manager.

Q: Is web-based monitoring and planning only available if you own Control-M Workload Change Manager?
Monitoring is included as part of Control-M’s self-service (web) interface. Planning is part of Workload Change Manager.


Control-M Configuration Manager

Q: Are there any major enhancements to the CCM in Control-M 19?
We have exposed some of the CCM functionality through Control-M Automation API (Rest & CLI), including:

  • User and LDAP management
  • Authorization (roles)
  • Agent, AP and Managed File Transfer upgrade functionality
  • “Run-as-user” management

Q: There’s a new service option in the CCM under the EM components. Where can we find some more information around that?
Check out the Control-M 19 Documentation here for details.


Control-M Automation API

Q: Control-M Automation API did not previously cover all the functions available in the client GUI.  What coverage do you have now? 
There are a couple of job properties that we are planning to address in a future fix pack.  We recommend taking a look at the Automation API online help where you can find the latest information.

Q: Is a conversion tool provided for Control-M Automation API code if an upgrade changes the look and feel of the code (field length or name changes)?
We do not anticipate any changes that would require a conversion tool at this time.

We hope these answers help! If you still have a question on any of the topics above, comment below and we’ll provide an answer or include your question in an upcoming blog. Stay tuned for our final blog, in which we’ll provide answers to all remaining questions.


If you missed the live webinar, you can watch the recording here.

Share This:


This blog is the third in a 5-part series in which we are answering questions attendees submitted during our live Control-M 19 launch webinar on March 26. In case you missed them, here are links to our first blog, where we covered upgrade and version-related questions, or the second blog, where we covered container and cloud-related questions. Today, we’re going to answer questions on applications, file transfers and Control-M’s web interface.



Q: What new features in Control-M 19 provide support for Hadoop?
Control-M now supports Microsoft Azure HDInsight, version 3.6 and above. We are always looking to enhance this capability. If there are any additional feature suggestions, please raise a Request for Enhancement (RFE) with Customer Support.

Q: We use Oracle Cloud for both Financial and Human Capital Management. Are there any methods to streamline?

Q: Does Control-M 19 support integration with CyberArk Vault or Salesforce Cloud?

Q: Is Control-M 19 compatible with Oracle CC&B 2.7.x/C2M 27.7.x?

A: The previous three integrations are best done today using Application Integrator.

Q: Do you plan to upgrade Control-M’s database plugin in Version 19 to support Oracle Database 18 and MSSQL 2016/2018?
While some of these are in the planning stage, we can’t commit to a timeframe. Oracle Database 18c is effectively a patch set, so it is part of the 12.2 release cycle. We currently support MSSQL 2016 and 2017; version 2018 is not fully supported, as SSIS 2018 was never implemented.

Q: Do you plan to display Oracle session IDs in Control-M database jobs in Version 19?
We do not have plans to do so at this point.  In your same job you may be able to execute an additional query to print the session.

Q: Control-M’s current Cognos plugin is 32 Bit. Are there plans to update it to 64 Bit?
We do plan to move to 64 Bit in the future (likely after Control-M 20 is released).

File Transfers

Q: Can Control-M Advanced File Transfer jobs be migrated to Managed File Transfer easily?
Migration from Control-M Advanced File Transfer occurs automatically when installing Control-M Managed File Transfer.

Q: Can we upgrade from Control-M Advanced File Transfer to Managed File Transfer and retain our internal file movement functionality without getting the B2B add-on?
Yes, you can.

Q: Does Control-M Managed File Transfer’s B2B module involve additional licensing cost?
Yes, it is a priced component.

Q: Does the script have to be located on the same server where Control-M Managed File Transfer resides?
We need more detail on this question to give an accurate response.  Please comment below to add additional detail.

Q: What encryption do you use for Control-M Managed File Transfer?
Control-M Managed File Transfer supports encryption via SFTP and FTPS. In addition, files can be configured to be encrypted using your own PGP solution via the MFT job.

Q: Is Control-M Advanced File Transfer still supported in Version 19?
Yes, Control-M Advanced File Transfer is still supported in Version 19.

Q: Will Control-M Advanced File Transfer also gain AWS S3 support?
No, S3 support is not available in Control-M Advanced File Transfer.

Q: Does Control-M Managed File Transfer replace Advanced File Transfer licensing costs? Or, is it additional?
Control-M Managed File Transfer is a priced component, independent of Advanced File Transfer.


Control-M’s Web Interface

Q: Can I do any type of forecasting through Control-M’s web interface?
Not currently, but we are planning to make this available in a future release.

Q: Can you limit the update of user views and site standards?
Yes, you can.



We hope these answers help! If you still have a question on any of the topics above, comment below and we’ll provide an answer or include your question in an upcoming blog.

Stay tuned for blogs 4 and 5, focusing on:

  • Control-M Workload Change Manager, Configuration Manager, and Automation API
  • Miscellaneous questions

If you missed the live webinar, you can watch the recording here.

If you missed our first two blogs, you can read them here:

Share This:


This blog is the second in a 5-part series in which we are answering questions attendees submitted during our live Control-M 19 launch webinar on March 26. In our first blog, we focused on upgrade and version-related questions. This week we’ll focus on container and cloud-related questions.



Q: Can you run the Control-M/Server in a container with Control-M 19?

A: Yes. Containers are like machines, so you have always been able to run Control-M in a container. However, with Version 19, the “provision-server” functionality of Control-M Automation API makes it significantly easier to create an image with an embedded Control-M/Server and configure the connection between the server and the EM.


Q: Does Control-M 19 have any integration with Kubernetes?

A: Yes. Kubernetes (K8S) runs containers, so everything in the above answer applies here. Additionally, we have field-developed samples that use the K8S API to run and monitor K8S “JOB” objects. I suggest the following resources to learn more:



Q: Are the new AWS and Azure native integrations only available with Control-M 19?

A: Yes, the AWS Lambda, Step Functions and Batch, and Azure Logic Apps, Functions and Batch job types are only available in Version 19.


Q: Are Azure and AWS the only cloud providers supported?

A: They are the only two supported with a supplied job type in Version 19.  However, the below Google Cloud Platform services can be integrated today using only an API key. With Application Integrator you can also integrate Control-M with most cloud providers.


  • Google Cloud Natural Language API
  • Google Cloud Speech API
  • Google Cloud Translation API
  • Google Cloud Vision API
  • Google Cloud Endpoints
  • Google Cloud Billing Catalog API
  • Cloud Data Loss Prevention API


Q: Are you able to script AWS server creation as a scheduled/defined thing, including selecting the various AWS configuration specifications and the O/S commands to build the environment - things like creating mounts, updating system files, creating app users, etc.?

A: It sounds like you are describing a configuration management tool like AWS Cloud Formation or similar. Control-M can invoke or “drive” such tools as part of a business workflow, but it does not perform those functions directly.


Q: Does Control-M support Azure backups?

A: We support authentication with Azure services via Control-M Application Integrator, and Azure Backup offers a CLI. So, it is quite likely you can develop an integration with Azure Backup using Control-M Application Integrator.


Q: Is the Azure/cloud connectivity licensed as-is, or is it additional like CMs?

A: This would be best addressed by your sales team/partner.


Q: Is high availability required when using a service, such as AWS with Control-M 19?

A: If the question is whether Control-M-specific high availability is required when running on AWS: it is recommended, just as in any other environment, based on the criticality of Control-M, outage tolerance, and so on. Running on AWS offers additional options for configuring high availability, such as the AWS Relational Database Service (RDS). And if Control-M components need to react to dynamic infrastructure, agent and server provisioning make the process easy and dynamic.


Q: High availability is required when using a service such as AWS with Control-M 19. For example, on-premises installations require a second Control-M/Server. If we use AWS services, would AWS be responsible for high availability?

A: Today, Control-M is still operated by individual customers who remain responsible for the configuration, including high availability. As mentioned above, there are some additional options such as RDS and dynamic server provisioning that help to simplify that process.


Q: If we use Vaults to pull passwords, how is the Azure Connection Profile Access Key mentioned for each login?

A: This was not tested by the lab. Please open a case with Customer Support.


We hope these answers help! If you still have a question on any of the topics above, comment below and we’ll provide an answer or include your question in an upcoming blog.


Stay tuned for upcoming blogs with answers to questions on these topics:

  • Applications, file transfers and Control-M’s web interface
  • Control-M Workload Change Manager, Configuration Manager and Automation API
  • Miscellaneous questions


Missed our first blog on Control-M versions and upgrades? No problem, you can read it here.


If you missed the live webinar, you can watch the recording here.

Share This:




Last week, we did a webinar to introduce and demo the new functionality we rolled out in Control-M 19. With 800+ registrants, we received a lot of great questions during the live Q&A session – far more than we could answer during the call. So, over the next few weeks we’ll be posting a 5-part series of blogs here on Communities to answer all the questions we weren’t able to get to.  Each blog will feature a group of topics. Today we’ll cover upgrade and version-related questions.



Q: Is it possible to upgrade from Version 7 or 8 (any fix pack) to Control-M 19?

A: Absolutely. You can always upgrade to the latest version of Control-M, but it would need to be done as a traditional migration (not as an in-place upgrade). You can find more information in the Control-M 19 Migration Guide.



Q: Are existing Control-M Agents compatible with Control-M 19?

A: It depends on the specific agent you are looking for. You can find more information on the Control-M 19 compatibility web page.



Q: Can I upgrade from Control-M 9, Fix Pack 1 (or later) to Version 19 on the same server? Or, do I have to upgrade to Control-M 18 first?

A: You do not have to upgrade to Control-M 18 first. If you are on Version 9, Fix Pack 1 or later, you can choose to do the in-place upgrade (which is quite simple).



Q: I just upgraded to Control-M 18. What is the procedure to update to Version 19?

A: To upgrade from Version 18 to Version 19, you can use the in-place upgrade option. Consult the Control-M 19 Installation Guide for details.



Q: Do you have any documentation on the upgrade process between Version 18 and Version 19?

A: Definitely. When we launched Control-M 18 and announced the new in-place upgrade functionality, we created this video:

The process for upgrading from Version 18 to Version 19 is the same. It can be done in-place with minimal downtime.



Q: How often do you release major versions of Control-M? (i.e. from Version 19 to Version 20)

A: Major versions are released roughly annually, with 2 fix packs in between.



Q: When should we expect Control-M 19, Fix Pack 1?

A: Version 19, Fix Pack 1 is expected to be released in roughly 4 months.



Q: I'm currently on Control-M 18, running compatibility mode to Version 9 for a report that is no longer available. Will I still be able to run compatibility mode in Control-M 19 back to Version 9?

A: When you have compatibility mode on, it will stay at that current version. For example, because you are on Version 9 today, when you upgrade to Version 19, the compatibility stays with Version 9.




Thanks for reading! Stay tuned for upcoming blogs with answers to questions on these topics:

     - Containers and the cloud

     - Applications, file transfers and Control-M’s web interface

     - Control-M Workload Change Manager, Configuration Manager and Automation API

     - Miscellaneous questions



Still have a question on any of the above? Comment below and we’ll be sure to provide an answer or include it in an upcoming blog.



If you missed the live webinar, you can watch the recording here.

Share This:




There’s no doubt that organizations need to speed application time-to-market to compete in our very digital world. But, speed can’t be the only factor you consider. The quality of your applications has to be top notch, at all times.

DevOps can improve collaboration and accelerate application delivery. But when you think DevOps, do you think application workflow orchestration? If not, you should. Here’s why.

Developers often use basic tools to code jobs as they build apps, but there is a better way to operate. Here are four ways to deliver higher-quality software faster:

#1 Don’t think of jobs as business logic

And don’t waste time manually scripting job scheduling. Instead, use an application workflow orchestration product to manage flow relationships, success/failure analysis, output capture, and other functions. This ensures consistency across applications and eliminates headaches for both Dev and Ops.

#2 Test early and often

Think about it… the syntax of source code is checked as it’s created, right? Why not apply a similar approach to job definitions? By automating scheduling, similar notation and interfaces for all jobs can be used at the earliest stages, allowing for early and accurate testing.
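The "test early" idea can be sketched as code. Below, a job is expressed in Control-M-style JSON notation and run through a lightweight local check. In practice you would validate definitions with the Automation API's `ctm build` service; this stand-in linter, along with the folder, server, and user names, is purely illustrative.

```python
import json

# A minimal jobs-as-code sketch: express a job in Control-M-style JSON notation
# and run a lightweight local check before submitting it. This toy linter is
# NOT the product's validator -- real validation is done with "ctm build".

definition = {
    "DemoFolder": {
        "Type": "SimpleFolder",
        "ControlmServer": "ctm-demo",   # assumed server name
        "BackupJob": {
            "Type": "Job:Command",
            "Command": "backup.sh --incremental",
            "RunAs": "batchuser",       # assumed run-as user
        },
    }
}

def lint(defs):
    """Return a list of problems found in a job-definition dict."""
    problems = []
    for folder, fbody in defs.items():
        jobs = {k: v for k, v in fbody.items() if isinstance(v, dict)}
        for name, job in jobs.items():
            if job.get("Type") == "Job:Command" and "Command" not in job:
                problems.append(f"{folder}/{name}: missing Command")
            if "RunAs" not in job:
                problems.append(f"{folder}/{name}: missing RunAs")
    return problems

print(json.dumps(definition, indent=2))
print("problems:", lint(definition))
```

Because the definition is plain JSON, the same checks can run in a pre-commit hook or CI stage, catching mistakes at the moment the job is written rather than in production.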

#3 Use a source code management (SCM) system

Using an SCM system allows you to fall back to previous versions to identify changes and quickly build/rebuild the application in new environments.

#4 Consider the value equation

A streamlined and efficient Dev process + minimal issues in prod = greater application ROI. Jobs-as-Code is a critical component of this equation.

Jobs-as-Code improves visibility to applications so that, when necessary, the operations and support teams can more quickly identify and resolve problems.


What is Jobs-as-Code?

It’s an approach that enables developers to use their DevOps tools to manage application automation in the SDLC just like any other code component of an application.


How can Jobs-as-Code improve your application delivery process?

CARFAX uses Control-M to automate Jobs-as-Code:

"A Jobs-as-Code approach is paramount for anyone doing agile development and DevOps. We have been using Control-M for years in operations, and now the product gives our developers full ownership and control of their jobs in a coding environment that is familiar to them, so they can define the business processes they want to automate in production."

Robert Stinnett

Automation Analyst, IT Operations CARFAX


To learn more, read our eBook, Four Ways Developers can Deliver Better Software Faster.

Share This:

Simplify life with Control-M Conversion Tool

If you’re reading this, it’s probably not because you are passionate about conversion technology. Most likely, you’re struggling with one (or more) of the following situations:

  • You are spending too much time and effort managing a mix of automation solutions, and can’t keep up with the rate of change demanded by your company’s modernization initiatives

  • You recognize that a single application workflow orchestration product spanning multi-cloud, on-premise and hybrid environments is critical for innovation, but you’re struggling with how to get there 
  • Your organization has adopted Control-M as a corporate standard, but other schedulers that break standards are being used in pockets of the organization

Don’t worry, there’s a solution to all these challenges! Control-M has a powerful Conversion Tool that can convert data from any automation vendor right out of the box, with minimal effort and risk. Combine that with the expertise of a dedicated Control-M specialist, and you’ll have a smooth and safe migration.


The “If it ain’t broke, don’t fix it” mentality kills innovation!

Typically, companies avoid migrations, in part because they fear the risks outweigh the benefits.

“If it ain’t broke, don’t fix it” seems to be a common mindset. So, despite all the extra work and costs, organizations stick with the problems they know rather than undertake a migration effort.


It’s true that, depending on the data volumes and complexity, migration can be a major change management project, impacting mission-critical services and requiring high-level approval, budget allocation, skill selection, and, most importantly, management of risk to production. But it doesn’t have to be.


Common concerns and fears include:

  • Fear of complexity and risk to the organization
  • Fear of unpredictability (migration duration, results and downtime)
  • Concerns about the skills and resources needed to complete a conversion


Embracing change

The problem with maintaining the status quo is that it perpetuates a situation that can result in critical business failures and stalled innovation.  Here are a few tips on how to use Control-M Conversion Tool to resolve these common concerns and fears:


  • Take a phased approach - the Control-M Conversion Tool wizard is extremely easy to deploy, and takes you through a phased approach to migration. This reduces complexity and minimizes the risk of costly errors that can happen in manual conversions. You can proceed with confidence because you will have a clear view of what you need to do. And because you are breaking the project into smaller phases that can be reviewed and iterated, you can trust that quality is not being sacrificed. Furthermore, you can control your migration by splitting data into chunks. Migrate application by application, or business flow by business flow, as the migration process allows you to go through workload chains and dependencies.


  • Use reports to minimize uncertainty - there are four major phases to every migration project: analysis, conversion, verification and load. Control-M Conversion Tool produces reports along the way so you can make decisions, repeat stages if necessary, and incrementally refine the results. These reports help you forecast migration duration, results and downtime.
    • In the analysis phase, Control-M analyzes, categorizes and sizes input data, and generates reports that help forecast conversion duration and effort.
    • In the conversion phase, Control-M shows you the results, with details on how much data was converted successfully, and what data may require manual adjustments. Messages and recommended actions drive you through iterations until you are satisfied with the results.
    • The verification phase is critical to reducing the downtime associated with your conversion. During this phase, you can use reports to compare schedules and workflows with your forecast plans, so you can determine the optimal time to complete your migration, minimizing risks and downtime.


  • Don’t worry about having the right skills or resources, you’re not alone - our experts have successfully helped many customers complete migration projects. The team can provide support throughout the entire migration, ensuring that your objectives are met at each stage.  As part of our continuous learning and continuous delivery approach, we are constantly improving Control-M Conversion Tool based on customer feedback. Every three weeks updates are delivered and can be automatically applied to your current installation, so you get the best possible conversion efficiency.


Out-of-the-box conversions and beyond

Control-M Conversion Tool offers out-of-the-box rules to convert from the most common schedulers, workload automation products, and applications, and we are constantly adding new solutions and versions. But what really differentiates Control-M from other products is our ability to extend the range of covered solutions to any type of commercial, open-source or homemade scheduling and workload automation tool. In fact, in addition to out-of-the-box conversion rules, users can develop their own rules, through the self-conversion process.


Self conversion puts you in control

With Control-M’s self-conversion capabilities you can convert from any product not covered out of the box. It’s essentially a development framework anyone can use to build their own conversions. Any scheduler or workload automation product, application, or homegrown product with data that can be extracted in XML, JSON, or CSV format, can be converted into Control-M, through easy-to-create conversion rules.


You just need to have basic knowledge of the Groovy language, an understanding of Control-M artifacts (folders, jobs, job types, etc.) and the ability to read Control-M APIs. If you don’t have these pre-requisites, a one-hour workshop with our experts is enough for you to get started. With that, you will be able to identify input data scheduling artifacts – XML elements and attributes – and build your scheduling rules to convert into Control-M data.


To build your conversion rules, you can use a graphical or script mode. The graphical mode requires minimal user input (just XML tags for input elements and attributes to be converted) and creates straightforward conversion rules. If you want to create more sophisticated rule logic, you can switch to script mode, which generates conversion rules code (Control-M APIs + Groovy scripting) available for additional coding. We also have context-sensitive resources and rich sets of samples to help you.


At the heart of the self-conversion process is Control-M’s APIs. They allow you to navigate and retrieve data from the XML document, and create Control-M objects with those elements.
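The shape of such a rule can be illustrated with a toy example: walk a (made-up) legacy scheduler's XML export and map its elements onto Control-M-style objects. The real Conversion Tool expresses these rules in Groovy against Control-M's conversion APIs; Python, the invented XML schema, and the `_DependsOn` placeholder below are used only to show the idea.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of what a self-conversion rule does: read a hypothetical
# legacy scheduler's XML export and emit Control-M-style objects. The actual
# tool uses Groovy and Control-M's conversion APIs, not this code.

LEGACY_EXPORT = """
<schedule name="NIGHTLY">
  <task id="extract" cmd="extract.sh" user="etl"/>
  <task id="load" cmd="load.sh" user="etl" after="extract"/>
</schedule>
"""

def convert(xml_text):
    root = ET.fromstring(xml_text)
    folder = {"Type": "SimpleFolder"}
    for task in root.findall("task"):
        job = {
            "Type": "Job:Command",
            "Command": task.get("cmd"),
            "RunAs": task.get("user"),
        }
        if task.get("after"):
            # Dependency hint; "_DependsOn" is a placeholder, not a real field.
            job["_DependsOn"] = task.get("after")
        folder[task.get("id")] = job
    return {root.get("name"): folder}

result = convert(LEGACY_EXPORT)
print(result)
```

A real rule would additionally map calendars, conditions, and resources, and would emit genuine Control-M flow objects rather than the placeholder dependency hint.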

Control-M conversion APIs are also exposed through GitHub to external parties and customers, creating an extensible platform ecosystem where everyone can participate, create and share new conversions and user experiences. You can access it here:


Getting started

To orchestrate all your application workflows through a centralized interface, you have to migrate from old software systems. But without the right tools and skills, it can be difficult and risky. Control-M provides powerful conversion resources to help you automate and test any data conversion, either through native or user-coded rules. And anytime you need support, our experienced professionals can help you successfully complete data migration projects and simplify the management of your business processes.




To learn more about Control-M Conversion Tool and self-conversion:

Control-M Conversion Tool Datasheet

Control-M Self Conversion Overview Video

Self-conversion-api-quickstart repository at GitHub

Share This:


The latest version of BMC’s market-leading application workflow orchestration product, Control-M 19, was released today.


Customers can now simplify workflow orchestration across hybrid cloud environments through new, out-of-the-box integrations for Amazon AWS and Microsoft Azure, including AWS Lambda, Step Functions and Batch, and Azure Logic Apps and Functions. Users get full Control-M capabilities, including file transfer management, plus the elasticity, scalability, and high availability of leading cloud platforms.


Control-M Automation API has new capabilities that now allow Control-M configuration and security to be managed via APIs. Developers can use a Jobs-as-Code approach to accelerate Dev and Ops collaboration by building workflows in JSON and applying the same approach to configuration and security.

Control-M 19’s managed file transfer now natively supports leading cloud storage and provides more intelligent file movement.  Enhancements include: support for file movement to and from AWS S3 buckets, policy-driven processing rules, and support for the AS2 protocol.


In addition, the enhanced web interface makes it easier to deliver and secure access to workloads for differing roles and organizations across the company, and for users to view and manage specific workloads. Enhancements include improved site standards and site customization, allowing for conditional and must-have rules. Version 19 also introduces viewpoints: users can use Web Viewpoints as part of self-service to display jobs and job flows.


Learn more about Control-M 19 here.

Share This:


Control-M Automation API is a set of programmatic interfaces that gives developers access to the capabilities of Control-M, including SLA management and sophisticated flow and scheduling control.


Operationalize applications faster, at higher quality, by embedding automated workflow scheduling into your dev and release processes. Using a Jobs-as-Code approach with REST APIs and JSON, workflows become versionable, testable, maintainable, and collaborative for developers and DevOps engineers as a part of their CI/CD pipeline.
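As a hedged sketch of what "workflows as code" looks like, the snippet below chains two jobs with a Flow object in Control-M-style JSON notation and serializes the result so it can live in source control alongside application code. The folder, job, and user names are illustrative; the authoritative schema is in the Automation API documentation.

```python
import json

# Hedged jobs-as-code sketch: two jobs chained by a Flow object, expressed in
# Control-M-style JSON notation. Names and values are illustrative assumptions.

workflow = {
    "PipelineFolder": {
        "Type": "SimpleFolder",
        "BuildJob": {
            "Type": "Job:Command",
            "Command": "make release",
            "RunAs": "ci",              # assumed run-as user
        },
        "PublishJob": {
            "Type": "Job:Command",
            "Command": "publish.sh",
            "RunAs": "ci",
        },
        "buildThenPublish": {
            "Type": "Flow",
            "Sequence": ["BuildJob", "PublishJob"],  # run order
        },
    }
}

# Serialize deterministically so diffs in source control stay readable.
as_code = json.dumps(workflow, indent=2, sort_keys=True)
print(as_code)
```

Committed to a repository, a definition like this becomes versionable and reviewable, and can be validated and deployed from a CI/CD stage in the same way as the application it orchestrates.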


Take a test drive of Control-M Automation API

Filter Blog

By date:
By tag: