
Control-M Brag Book #4

Posted by Kelsey McRae Employee Feb 21, 2018

BMC digital IT powers 82% of the Fortune 500. Our innovative software solutions enable businesses to transform into digital enterprises for the ultimate competitive advantage. Control-M is transforming digital enterprises across the globe, and customers rely on it every day to keep the critical systems driving their business running smoothly. Whether in financial services, information services, or banking, Control-M is changing the lives and businesses of those who use it, working behind the scenes for happy customers all over the world.

 

Raymond James Financial manages $500B in customer assets with optimized workload automation

Raymond James, a regional investment and financial planning services provider, has become a leading international financial services firm with over $500 billion of assets under administration, serving nearly 2.7 million client accounts in more than 2,600 locations worldwide. The firm has grown exceptionally over the past 10 years, and the emergence of digital and mobile technologies has raised the stakes for its IT organization to get things done faster.

Control-M manages RJF’s nearly 2 million monthly jobs across hundreds of applications that access the company’s data warehouse and consolidated data store. Nightly processing ensures that senior management and more than 6,500 financial advisors have the data they need each day to help clients with investment decisions. Control-M is their primary tool for identifying, escalating, and remediating issues that might delay batch processing. Simplified monitoring, self-service, and predictive analytics have helped their IT organization absorb a 42% increase in monthly job executions over the past year, and audit prep now takes only a couple of hours where it previously took two to three weeks.

“With Control-M, we can look at applications across the enterprise, identify recurring issues and inefficiencies, and work with people across the organization to figure out how to make things better.” – Chris Haynes, Manager of Workload Engineering, Raymond James Financial

View the Raymond James Customer story video here.

 

RailInc leverages big data and automation to help keep 1.6MM railcars rolling across 140K miles of track

RailInc, an industry leader in railroad IT and data services headquartered in Cary, North Carolina, supports railroads and their customers with essential information to improve safety and optimize rail operations. RailInc has implemented Hadoop for storing, processing, and analyzing data captured from disparate sources. Control-M supports programs like Railinc’s Asset Health Strategic Initiative, which develops tools that enable customers to track equipment usage, identify equipment issues for timely repairs, and safely and efficiently coordinate the movement of millions of railcars.

Control-M automates the processes that support analysis of data from 1.6 million railcars across North America. RailInc processes 11 million data points daily and, with the help of Control-M, expects its data volume to double from 50TB to 100TB over the next three years.

"The order in which we bring in data and integrate it is key. If we had to orchestrate the interdependencies without a tool like Control-M, we would have to do a lot of custom work, a lot of managing. Control-M makes sure that the applications have all the data they need." - Robert Redd, Release Engineer, RailInc

View the RailInc customer story blog by Robert Redd here.

 

Itaú Unibanco transforms banking with new digital services

Itaú, the 10th largest bank in the world and the largest financial conglomerate in the Southern Hemisphere, has over 94,000 employees, 4,000 branches, and 46,000 ATMs serving a global customer base. Over the last decade, significant advances in mobile technologies have driven consumer demand for new digital-first banking services.

Control-M is the bank’s primary digital business automation platform, processing over 14 million jobs per month and automating diverse batch application workloads for transactions in retail locations, through ATMs, online, and on mobile devices. Itaú Unibanco is leading the industry with the client-centric banking services it is rolling out; the bank now opens almost 10% of its new accounts directly through its mobile application.

“Control-M is a very important tool that we have. If Control-M stops, the bank stops.” - Leandro Araujo, Head of Production and IT Services, Itaú Unibanco

View the Itaú Unibanco customer story video here.


Do you want to visualize the status of internal and external business-to-business (B2B) transfers from a central automation platform?

 

Learn how our newest product, Control-M MFT Enterprise B2B, provides a secure method for external business partners to transfer files to and from Control-M Managed File Transfer environments from an easy-to-use web interface and more.

 

Please join us for the next Connect with Control-M webinar on Wednesday February 28th when James Pendergrast will explain the architecture of this add-on and how you can get the most out of it in your organization.  We’ll conclude with a live Q&A session.

 

Register Now!!     

 

 

Target Audience: Administrator, Operator.

 

 

IMPORTANT: Each registration ID can only be used by one person. Please have all interested participants register individually.


Using cloud technologies can be a daunting task, especially given the ocean of choices now available to the modern enterprise. One thing the multitude of cloud providers makes possible is creating Big Data clusters with Hadoop for processing information in the cloud, without the large investment of time and effort required to house clusters of servers on-premise. More importantly, the clusters don't need to run 24/7 either; they only need to be spun up on-demand, when processing is to occur. This is something that Control-M can handle easily.

 

The AWS Use Case

 

AWS offers several services for Big Data and data processing in general, but one stands out: Elastic MapReduce (EMR). With EMR, users can spin up clusters of any size with Hadoop and various other tools pre-installed. Users can employ the AWS CLI toolset to perform actions in the cloud and, more importantly, tie those AWS calls into the jobs they run in Control-M, whether as an embedded script, a command, or a script file. This allows total flexibility when using AWS. Control-M also has native integration with AWS through the Cloud Control Module, whereby a Control-M Agent can interact with any AWS account and, for example, perform actions on running EC2 instances or launch new ones from templates.
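As a minimal sketch of that idea, here is what an AWS CLI call embedded in a Control-M job definition might look like. The folder, job, host, and account names are hypothetical; "Job:Command" is the Automation API job type for running a command:

```python
import json

# Hypothetical job definition: "EmrFolder", "ListClusters", "aws-agent-host",
# and "ec2-user" are illustrative names, not taken from a real environment.
job_defs = {
    "EmrFolder": {
        "Type": "Folder",
        "ListClusters": {
            "Type": "Job:Command",
            "Host": "aws-agent-host",   # agent where the AWS CLI is installed
            "RunAs": "ec2-user",        # OS account with AWS credentials configured
            "Command": "aws emr list-clusters --active",
        },
    }
}

print(json.dumps(job_defs, indent=2))
```

The same pattern works for any AWS CLI call: the agent host just needs the CLI installed and credentials configured for the account you want to drive.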

 

Big Deal

 

So, what does this mean for data scientists and business intelligence professionals working to leverage the power of Big Data in the cloud? Using an enterprise-grade scheduling solution such as Control-M, users can employ AWS services in a streamlined manner and use the output AWS provides to take dynamic action inside Control-M. EMR is a powerful tool that, properly leveraged, can deliver real, valuable results quickly, all while minimizing costs. The benefit is that the clusters only need to exist for as long as processing occurs and can then be eliminated; hence the notion of "ephemeral clusters". By processing what is needed but keeping only the results, users circumvent the need to keep these machines running 24/7 and still derive value for their company. These flows can also be triggered dynamically, as they are now service-oriented rather than regularly scheduled.

 

Example

 

What if a series of massive files arrives every day and needs to be processed? Do you already have the steps built into your schedules to handle them? Do you want to start leveraging AWS? With some modifications, existing flows that call a local Hadoop cluster could run in the cloud instead, since Control-M can run anywhere: on-premise, off-premise, or multi-cloud, depending on the need. You could process the files automatically with a file watcher, and also trigger the flow manually whenever you want to refresh data in reporting tables or generate information at a point in time. With Control-M managing these steps, you have much more power at your disposal when invoking AWS.

 

But don't just take my word for it...

 

Using Control-M, we can invoke the necessary AWS services in the right order: data is received, a cluster is instantiated and attached to Control-M dynamically, data is processed, results are sent out, the cluster is de-instantiated, and information is streamed to a dashboard. All of this can be ordered as a service from a smartphone and monitored by Batch Impact Manager to ensure the service does not run overtime and meets its SLAs. The technologies at play here include:

  • the Hadoop Control Module for Control-M
  • the Automation API, for dynamic agent provisioning and deployment
  • the Managed File Transfer Control Module, to handle files in and out
  • the Batch Impact Manager job type, to manage SLAs
  • Control-M's innate ability to pass variables between jobs and share information
  • the Self-Service Portal and Control-M for Mobile Devices, for business stakeholders

 

Taming AWS

 

We start by ensuring that Control-M has access to the AWS CLI so it can make calls to the AWS account where we're triggering cluster builds and machine instantiation. By invoking the "aws emr create-cluster" command, we can set up a custom cluster with all the bells and whistles we'd hope for. This readies a cluster; while we wait, we invoke "aws emr wait cluster-running" until the cluster is up and running before proceeding to the next portion. The AWS CLI is extremely comprehensive: you can trigger almost anything in AWS with it, and you can also craft a custom Application Integrator job that does exactly what you need, effectively wrapping the calls and abstracting the runtime into the Control-M WLA or Web client.
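A rough Python sketch of those two CLI calls, assuming the AWS CLI is on the agent host's PATH; the cluster name, release label, and instance settings are illustrative, not prescribed:

```python
def create_cluster_cmd(name, instances=3):
    """Build the 'aws emr create-cluster' invocation (not executed here)."""
    return [
        "aws", "emr", "create-cluster",
        "--name", name,
        "--release-label", "emr-5.12.0",          # assumed EMR release
        "--applications", "Name=Hadoop", "Name=Hive",
        "--instance-type", "m4.large",            # assumed instance type
        "--instance-count", str(instances),
        "--use-default-roles",
    ]

def wait_cluster_running_cmd(cluster_id):
    """Build 'aws emr wait cluster-running', which blocks until the cluster is up."""
    return ["aws", "emr", "wait", "cluster-running", "--cluster-id", cluster_id]

# Inside a Control-M job you would actually run these, e.g.
# subprocess.run(create_cluster_cmd("ephemeral-analytics"), check=True)
print(" ".join(create_cluster_cmd("ephemeral-analytics")))
```

Splitting "create" and "wait" into separate Control-M jobs lets the scheduler own the ordering, rather than burying it in one script.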

 

File Transfers

 

You might want to pipe in files from outside your AWS installation, and MFT jobs are ideal here. You can leverage FTP, FTPS, and SFTP transfers to bring in files from partners or from other systems in your multi-cloud environment, and ultimately pipe them into AWS S3 or HDFS. What’s great about the Control-M MFT module is that you can create jobs whose successful file deliveries become dependencies for successor jobs. Here I set up a simple transfer from Azure, since that’s where one of our partners keeps their data, and bring it to our server to be pushed into S3. The Hadoop CM can then reference files in the S3 bucket and pull them into HDFS for processing.
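For illustration, here is the approximate shape of an Automation API file-transfer job. The connection profile names, host, and paths are assumptions; check the MFT documentation for your version's exact schema:

```python
import json

# Sketch of a "Job:FileTransfer" definition. "AZURE_SFTP" and "LOCAL_FS"
# stand in for connection profiles you would have defined beforehand.
mft_job = {
    "PartnerIngest": {
        "Type": "Folder",
        "GetPartnerFile": {
            "Type": "Job:FileTransfer",
            "Host": "mft-agent-host",              # assumed agent host
            "ConnectionProfileSrc": "AZURE_SFTP",  # assumed profile names
            "ConnectionProfileDest": "LOCAL_FS",
            "FileTransfers": [
                {"Src": "/outbound/daily.csv", "Dest": "/ingest/daily.csv"}
            ],
        },
    }
}

print(json.dumps(mft_job, indent=2))
```

Downstream jobs that push the file into S3 would then depend on this job's success, which is exactly the delivery-based dependency described above.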

 

Big Data Workflows

 

Control-M’s Hadoop CM installs on the leader node of a Hadoop cluster and controls flows across HDFS, YARN, Hive, Sqoop, and most other common binaries found in the Big Data world. With tracking for these kinds of executions, and the passing of information between jobs and workloads, you can take a platform-agnostic stance when integrating your various disparate platforms to achieve the results you’re looking for. In this case, we execute some HDFS directory-creation and file-movement commands, pull data from an S3 bucket into Hive tables, and run some very simple analysis. The results are emailed back to me, and the cluster is wiped away.
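At the command level, the flow just described might look something like this; the bucket, paths, and table names are illustrative:

```python
# Ordered steps of the kind described above: create HDFS directories,
# pull a file from S3 (via the s3a connector), load it into Hive, query it.
# Each string would typically be one Control-M job in the flow.
steps = [
    "hdfs dfs -mkdir -p /data/incoming",
    "hdfs dfs -cp s3a://my-bucket/raw/events.csv /data/incoming/",
    "hive -e \"LOAD DATA INPATH '/data/incoming/events.csv' INTO TABLE events\"",
    "hive -e \"SELECT COUNT(*) FROM events\"",
]

for step in steps:
    print(step)
```

Modeling each step as its own job is what gives you the per-step tracking, restart, and dependency handling mentioned above.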

 

Provisioning

 

Agents can be deployed and un-deployed as needed with Control-M, which makes this kind of ephemeral instantiation a reality. Capabilities like these make Control-M an ideal solution for staying flexible in a world where infrastructure is as malleable as putty and servers run only as long as they’re needed, as with containers. The Hadoop Control Module is the only CM we make that must actually be installed on one of the application servers, in this case the leader node of the Hadoop cluster. With the Automation API, all we do is call the necessary commands to pull the packages to the server for provisioning, bringing the agent bundled along with the CM, and it can all be detached and removed when done.
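A rough sketch of that provisioning sequence with the Automation API CLI. The image name, Control-M/Server name, and host name here are assumptions and vary by release, so treat this as the shape of the sequence rather than exact commands:

```python
# Hypothetical ephemeral-agent lifecycle via the 'ctm' CLI.
# "Agent.Linux", "ctm_server", and "hadoop-leader" are made-up names.
provision_steps = [
    "ctm provision images Linux",          # list available install images
    "ctm provision install Agent.Linux",   # pull and install the agent (+ CM)
    # ... run the Hadoop workload while the cluster exists ...
    "ctm config server:agent::delete ctm_server hadoop-leader",  # deregister when done
]

for cmd in provision_steps:
    print(cmd)
```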

 


 

Streaming

 

With support for streaming functions, you can get the information you’ve built to the dashboards that need it, all as part of the workflows you’ve constructed. This is ideal for visibility: you can force a refresh after every run, see real-time information about the data you’re collecting, and gain insight into trends that could create a competitive advantage. You can pipe this output to dynamic dashboards showing data loads, throughput, areas of interest, you name it. Perhaps you’re running an ELK stack and piping information out to it so it performs trend analysis for you.

 

Self-Service

 

All of this can be triggered by business stakeholders as services if needed, with the workflows running on-demand and refreshing data as needed. Real-time insight that can be called on-demand is a powerful concept, and it’s a reality when Control-M manages these kinds of flows. As mentioned before: you’re on the train coming to work in the morning and see an email noting that the last file transfer you were waiting for overnight has completed. You connect to VPN from your iPhone or Android device, open Control-M in our mobile app, and order in your EMR flow in transit, so that when you arrive at the office and sit down for your coffee, your jobs are done and the results are delivered.

 


 

SLA Management

 

Take the services that were declared and made usable for business stakeholders, and tie SLAs to them, so that Control-M not only trends the runtime of these workloads to report on slowdowns, but also catches problems before they occur. Control-M has these features built in, allowing long-term analysis of workloads that stretches past Big Data to the rest of the enterprise as well. What's great is that Batch Impact Manager, in addition to taking dynamic action based on a service's runtime, automatically creates a service for you when the flow is ordered in. This is key: you want to make sure that when you order your flows in, delivery times aren’t starting to creep, and BIM will track exactly that for you.


To Wrap Up

 

Since Control-M can be tailored to any situation, invoking AWS to achieve ephemeral clustering is a reality, and one we are seeing more and more customers take advantage of. Running only what you need is a major advantage over the classic always-on paradigm of hosting data lakes. With its massive flexibility and toolsets for dynamic integration, Control-M is an ideal choice for managing any automation that occurs in the cloud.

 

I always like to say that triggering the job or the action itself is only a tiny part of what makes up Control-M. The rest, the “bread and butter”, is what really seals the deal: the on-do actions, the condition passing, the alerting, the SLA management, the ability to play between hosts, the archiving and auditability, everything else that is brought to the table. Being a third-party arbiter of enterprise workloads is a powerful thing; I suggest you bring some of that power over to the public cloud and start trying it out!


It’s that time again. As VP of Operations, you’ve just been handed a change management request form for hardware/software maintenance. This time it relates to a workload automation upgrade. On the form, the checkbox for “critical services interruption” is selected, and the “expected downtime” assessment is high enough to potentially affect operational efficiency. To top it off, the budget requirement doesn’t quite fit with the latest cutbacks handed down through the organization. For all these reasons (and the fact that upgrades like this require a lot of staff time), you know you’ve got to do some research before giving approval. Inspecting the upgrade history, you find that past workload automation upgrades were painful, to say the least, and resulted in unplanned downtime of critical applications.

So, how do you ensure this upgrade will go smoothly? What must be done to avoid critical interruptions that may adversely affect your KPIs and budget? If you struggle with these questions every time you need to upgrade your workload automation solution, here’s one more reason you should switch to Control-M!

Control-M 9.0.18, the latest version of BMC’s Digital Business Automation solution, features near-zero downtime in-place upgrades. Forget about the tedious, expensive and risky upgrade process you’re used to. You can upgrade Control-M and access all the latest features in minutes.

 

How does it work?

Here’s how Control-M 9.0.18 simplifies the upgrade process:

First, it reduces upgrade downtime to almost zero. That helps you maintain business continuity and maximize service availability, and it accelerates the upgrade approval process: in our beta, customers reported that negotiation activities for the proper upgrade window could be reduced by 50-70%.

Second, it reduces the time and cost of the upgrade. The new in-place upgrade method eliminates the need for parallel environments: you can now safely upgrade in the environment you are working on, gradually moving components up while maintaining backward compatibility with the rest of the solution. By eliminating the parallel environment and data migration, you will realize significant cost and time savings. This is especially true in large environments, where new IPs are required and opening firewalls for large numbers of agents becomes particularly time consuming. Customers in our beta program with large data centers reported that FTE and other resource savings during the upgrade preparation phase could reach 70-90%.

Beyond simplifying and shortening the negotiation and preparation phases, Control-M 9.0.18 reduces risk during upgrade execution. If you experience issues at any time during the upgrade, the environment can be easily rolled back to the previous version until the issue is resolved.

 

Deploy innovation faster

Starting with Control-M 9.0.18, BMC will begin releasing major updates annually, with fix packs in between. Not only will this help customers better plan their upgrade activities, it also showcases our commitment to continuous innovation. We’re already hearing from customers that the new annual delivery model will change their upgrade strategy. Market dynamics are changing faster than ever, and access to a regularly scheduled stream of innovation that is quick and easy to consume through in-place upgrades is a fundamental competitive advantage that will help IT organizations achieve true digital business automation.

Click here to learn more about how Control-M can help you deploy innovation faster.


Learn more about the Control-M/Agent directories structure and some utilities which can help you solve issues faster and get your jobs back to running again.

 

As a Control-M Administrator it's very useful to know where to start looking when you have a Control-M/Agent issue. This webinar will help you understand where to look and which utilities you can use. 

 

Please join us for the next Connect with Control-M webinar on Wednesday January 31st when Corey Low will explain the Control-M/Agent directories and utilities usage for troubleshooting and solving issues.  Corey will also demonstrate ways to debug the Control-M/Agent and which information to provide to BMC Support when having a problem. Finally, we’ll conclude with a live Q&A session.

 

Register Now!!    

 

IMPORTANT: Each registration ID can only be used by one person. Please have all interested participants register individually.


There’s been a lot of talk about General Data Protection Regulation, but I thought it would help to provide you with some facts.

 


Here’s 250 pages of GDPR information condensed into 10 bullets:

  1. Don’t have a Personally Identifiable Information (PII) data breach!
  2. Notify a legal entity of real or potential breaches within 72 hours.
  3. Remove all personal data when requested by the EU citizen within 30 days.
  4. When requested by an EU citizen, provide all personal data and how it was used, within 30 days.
  5. Relate the collection and processing of personal data to specific purposes.
  6. Positively verify that someone is of legal age to sign up for the service.
  7. Know where all data resides (keep records of everything).
  8. Design all systems with an appropriate and demonstrable security and process.
  9. Expand privacy accountability and liability to all partners in the ecosystem.
  10. Penalties can be up to 4% of global turnover or €20,000,000 (whichever is higher).

 

It’s no longer about just ‘not having a data breach’, it’s also about what the business is expected to do after a breach occurs.

 

So before I start, let’s clear one thing off the table: the question we often get, “Is Control-M GDPR compliant?”

In short, there’s no such thing as an officially certified GDPR vendor. No vendor can be GDPR compliant; only companies can achieve GDPR compliance through their actions and processes.

 

The right question is “Does Control-M allow your organization to fulfill the GDPR requirements?”

  • An enterprise is the data controller; when it implements a software package, the enterprise still has to comply with GDPR.
  • The liability of the data processor (Control-M is a data processor!), meaning the one who processes the data on the data controller’s instructions, is now broader than before.
  • The question, when choosing software, is whether it supports the GDPR requirements, from the privacy-by-design principles to the concrete requirements of handling consent per purpose.

 

Now let’s talk about the interesting stuff: how Control-M can help an enterprise establish the right processes to meet GDPR rules.

1. Automate the Right to be Forgotten, Right to Access, Data Portability and Notify processes across all parts of the infrastructure

• Reduce cost of performing process, reduce human error, reduce time

• Integrate into the Service Request ticketing system

 

2. Control-M Alerts and Notifications

• Report Data breach within an enforced SLA

• Notify if there has been any problem with a job related to customer data

 

3. Control-M Archiving provides process evidence to auditors in an easy-to-understand view

• Keeps a record of what was executed and when, and who took any actions (order, cancel, modify, etc.)

• Contributes towards Privacy by Design

 

4. Automate the audit / compliance reporting process

• Reduce cost of audit process

• Use Self Service and Mobile interfaces to reduce time to respond to audit

 

5. Use Control-M Managed File Transfer for highly secured and controlled file transfers

• Securely manage file transfers destinations

• Track any file transfer

• Audit and troubleshoot file transfers between the organization and 3rd parties.

 

6. Provide data lineage by tracking and evidencing all activities into a data lineage platform (e.g. Hadoop, Splunk, or whichever you have)

• Meets the compliance requirement to know where customer data is and is not used, and reduces the cost of complying, especially for customers already using Control-M

 

7. Integrate with Service Request and Change Management tools (for tracking, approvals, handling problems, etc.)

 

GDPR is all about the workflow:

“Ensure ongoing confidentiality, integrity, availability, and resilience of customers’ personal data”

 

For more info, please visit the BMC GDPR web page


Would you like to improve your Control-M Workflow development cycle?

 

Have you heard about the Automation API WorkBench?

 

In this webinar Ruben Villa will explain and demonstrate how to use some of the latest tools and techniques to speed up your implementation of workflows.

 

This is the link on YouTube for the recorded session:

 

Connect with Control-M: Control-M Automation API: Advanced - YouTube

 

Here is the Q&A for this webinar (Connect with Control-M: Control-M Automation API: Advanced)

 

________________________________________________________________

 

Q: Do I need a separate Control-M license for Automation API?

A: No. Automation API is installed with Control-M/Enterprise Manager 9.0.00.200.

________________________________________________________________

 

Q: What types of jobs am I able to run using Workbench?

A: The job types are the same as those available in the latest version of Automation API: Hadoop, Managed File Transfer, command, etc.

________________________________________________________________

 

Q: What form is the workbench machine in?

A: It can be Windows or UNIX

________________________________________________________________

 

Q: Also, is it intended to run the workbench locally or on a remote hypervisor?

A: The Workbench runs in VirtualBox; it can be installed locally or on a remote host.

________________________________________________________________

 

Q: What level of access is required for building jobs from the JSON?  I see "emuser", which implies administrator.  Can Update users do the same or is special access required?

A: The API uses ctmcli and needs to get two tokens at login: one from the GUI Server (GSR) and a second from the Control-M Configuration Server (CMS).

Because of this, it needs login privileges for both, as defined in the CCM under Authorizations for the user attempting to log in - specifically in Privileges for "Control-M Configuration Manager" and "Control-M Workload Automation, Utilities........"

________________________________________________________________

 

Q: Where can I get the workbench?

A: Downloads and Installation info are in:

https://docs.bmc.com/docs/display/public/workloadautomation/Control-M+Automation+API+-+Installation#Control-MAutomationAPI-Installation-workbenchControl-MWorkbench

________________________________________________________________

 

Q: In the demo, are the jobs created in the AJF only? Can you save the job definition instead of creating a job?

A: Yes. With the Deploy Service, once a job is deployed, it is scheduled by Control-M according to its scheduling criteria and dependencies.

________________________________________________________________

 

Q: How does Workbench Submission affect Task Counts?  If the DevOps is ordering jobs directly into the environment, won't this increase my task count?

A:  No, as the Workbench is a Virtual Environment.

________________________________________________________________

 

Q: How can Automation API be used to deploy jobs to environments, not just for testing?

A:  You can have multiple environments (connections) and after testing your flows, you can submit the requests to a production environment.
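As a sketch of that promotion flow, the Automation API CLI calls might run along these lines; the environment name, endpoint, user, and file name are illustrative:

```python
# Hypothetical promotion of tested definitions to a production environment.
# "prod", "prod-em", "emuser", and "jobs.json" are made-up names.
cmds = [
    "ctm environment add prod https://prod-em:8443/automation-api emuser",
    "ctm environment set prod",      # make 'prod' the default environment
    "ctm deploy jobs.json",          # push the definitions to that environment
]

for c in cmds:
    print(c)
```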

________________________________________________________________

 

Q: Can I use automation API for Informatica jobs?

A:  Informatica Job Type is not available at this moment.

________________________________________________________________

 

Q: Is automation API available in Control-M Version 8?

A: No. It is available from Control-M/Enterprise Manager 9.0.00.200.

________________________________________________________________

 

Q: Can empass be encrypted instead of clear text?

A: The encryption can be done at the application level.

________________________________________________________________

 

Q: Does job actually run in workbench?

A: No, this is just a simulation.

________________________________________________________________


Do you need to move your existing Control-M environment to a new machine?

 

Would you like to automate and accelerate the procedure to keep your business running in the DR environment?

 

Please join us for the next Connect with Control-M webinar on Wednesday January 3rd when Andrea Carmelli will explain the process and demonstrate how to speed up the procedure to restore the hostname and configuration data in the database during disaster recovery when moving to the stand-by environment or replacing the local hostname. We’ll conclude with a live Q&A session.

 

Register Now!!  

 

IMPORTANT: Each registration ID can only be used by one person. Please have all interested participants register individually.


Can you believe the New Year is just around the corner? What a year it has been! With one month left in 2017, be sure to join us at one of the events below or start off the New Year with Control-M.

 

We are excited to offer so many Control-M Seminars in many countries around the world. Spend the day with us to learn how your IT Ops, Developers, and DevOps Engineers can deliver applications and new services with the agility your enterprise demands—to drive successful outcomes for your digital business.

 

Explore the latest Control-M features to:

  • Accelerate delivery of services to the business
  • Quickly respond to changes and adopt new technologies
  • Accelerate application development with jobs-as-code
  • Embrace new technologies without compromising existing infrastructure

 

 

Date | Event | Location | Venue
December 4-6, 2017 | Gartner Applications Strategies Summit | Las Vegas | Caesars Palace
December 4-7, 2017 | Gartner Data Center | Las Vegas | The Venetian
January, 2018 | Control-M Seminar | Centennial, CO | Topgolf
January 18-19, 2018 | DevOps Days | NYC | Microsoft Technology Center
January 19, 2018 | Control-M Seminar | Hong Kong | Registration link coming soon
January 23, 2018 | Control-M Seminar | Amsterdam | Mereveld
February 1, 2018 | Control-M Seminar | Madrid | Registration link coming soon
February 19-23, 2018 | Control-M Seminar | Australia | Registration link coming soon
February 20, 2018 | Control-M Seminar | Munich | Registration link coming soon
February 22, 2018 | Control-M Seminar | Hamburg | Registration link coming soon
 

*The Topgolf event has been postponed to January, 2018. More details coming soon.


Learn how to deploy Control-M 9.0.00 in the cloud using Amazon Web Services.  During this webinar, we will cover: Installation media for Cloud, Supportability, and Settings & Installation procedure.  Come learn how you can address your changing business needs by utilizing the public cloud enterprise job scheduling capability with Control-M.

 

In this webinar Ted Leavitt will explain and demonstrate how to deploy Control-M on AWS.

 

This is the link on YouTube for the recorded session:

 

Connect with Control-M: Deploying Control-M 9.0.00 on AWS - YouTube

 

Here is the Q&A for this webinar (Connect With Control-M: Deploying Control-M 9.0.00 on AWS)

 

________________________________________________________________

 

Q: What is the official support policy for AWS cloud?
A: Control-M deployment on AWS and Azure is supported with the current release. We support running Control-M with:
  • internal PostgreSQL Database
  • Oracle Database
  • PostgreSQL RDS
  • Oracle RDS
  • MS SQL RDS
________________________________________________________________

 

Q: What are the minimum Control-M version and Fix Pack level required on a cloud platform?
A: Control-M/Enterprise Manager V9.0.00 FP400
Control-M/Server V9.0.00 FP300
________________________________________________________________

 

Q: Which installation media should be used for the cloud platform?
A: BMC offers two versions of the Control-M installation media: one for regular installations, and another named 'Control-M Version 9.0.00 with cloud support V2' for cloud platforms.
________________________________________________________________

 

Q: How do I connect an Agent on AWS to a Control-M/Server on an on-premises host across the internet?
A: We assume the on-premises Server can initiate a TCP/IP connection to the Agent machine on AWS, and that the Agent on AWS cannot initiate a TCP/IP connection to the Server. The ideal solution for this scenario is a persistent connection in which the Control-M/Server connects to the Agent. The Agent should be configured for "Allow communication". The only port that needs to be opened is the "Server to Agent" port, inbound on the Agent machine.
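As an illustrative sketch, that inbound rule can be opened with the AWS CLI. The security-group ID, port, and CIDR below are assumptions (7006 is a common default for the Server-to-Agent port, so verify yours in the CCM before applying anything):

```shell
# Hypothetical identifiers -- substitute your own security group and network.
SG_ID="sg-0123456789abcdef0"
AGENT_PORT=7006                  # Server-to-Agent port; confirm in your configuration
ONPREM_CIDR="203.0.113.0/24"     # on-premises address range of the Control-M/Server

# Printed as a dry run; remove the leading 'echo' to actually apply the rule.
echo aws ec2 authorize-security-group-ingress \
     --group-id "$SG_ID" --protocol tcp \
     --port "$AGENT_PORT" --cidr "$ONPREM_CIDR"
```

Keeping the rule scoped to the on-premises CIDR (rather than 0.0.0.0/0) limits exposure of the Agent port on the internet.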
________________________________________________________________

 

Q: Is there a dedicated AMI in AWS that we can use that has Control-M binaries baked into it? In AWS Marketplace?
A: We do have an AMI, but because of limitations in the current Control-M AMI we suggest launching an EC2 instance and installing Control-M on it.
________________________________________________________________

 

Q: Can an EIP be allocated for an RDS database?
A: You cannot assign an Elastic IP to an RDS instance. Note that RDS provides a DNS endpoint that you can use to connect to your database instance. This DNS endpoint does not change during the lifetime of the instance, so you do not have to reconfigure your application's database connection endpoints as long as you are using the same database instance.
________________________________________________________________

 

Q: For RDS DB, should we enable the AWS automated backup?
A: That is your own preference. The Control-M embedded backup/restore solution is still recommended, while the AWS automated backup is a good supplement.
________________________________________________________________

 

Q: For RDS DB, should we enable RDS DB Auto minor version upgrade?  Will it affect Control-M?
A: It will not affect Control-M, as long as the major RDS DB version is compatible with Control-M.
________________________________________________________________

 

Q: Is Amazon Linux compatible with Control-M?
A: Control-M/EM and Control-M/Server are not supported on Amazon Linux. Control-M/Agent 9 Fix Pack 2 supports Amazon Linux 2015.09.x (64-bit) and later with the following prerequisites:
* ksh-20120801-19.el7.x86_64.rpm
* compat-libstdc++-33-3.2.3-61.x86_64.rpm
________________________________________________________________

 

Q: Can we upgrade from an old regular Control-M installation to V9 in AWS?
A: Yes you can.
________________________________________________________________

 

Q: Does Control-M support Multi Subnet, which is needed and widely used in AWS?
A: At the moment we do not support it. We already have an RFE to address this.
________________________________________________________________

 

Q: Does Control-M support the AWS S3 file transfer protocol?
A: At this time AFT does not support working with AWS S3. It is planned to be included in future functionality of our Managed File Transfer product. It has not been announced when this will be included in a future release.
________________________________________________________________

 

Q: Does Control-M support Elastic File System (EFS)?
A: Yes, Control-M/Server V9 and Agent V9 support EFS.
________________________________________________________________

 

Q: If I have an on premise Control M, can we use the existing software distro after installing the Fix Packs, or do we need the cloud media? Do we need a new license for this?
A:  The existing media (DROST) can be used to install the Control-M in a non-cloud environment.  We do recommend using the "with cloud support V2" media for installing in cloud.  Fix Packs can be applied to both of these installations.  We would advise speaking with your account rep regarding any licensing or entitlement.
________________________________________________________________

 

Q: How are HA and DR handled for Control-M 9.0 across two distinct regions?
A: As long as the two hosts have connectivity with one another, it behaves no differently than HA or DR on-prem. With HA, there are complications if there is significant latency between the different hosts (and their databases).
________________________________________________________________

 

Q: Is it a must to have an Elastic IP when creating an instance or when choosing from the AWS Marketplace?
A: It is not a "must", but it is highly recommended. If you do not have an EIP or a configured VPC, the public IP of the Control-M host will change with each restart, requiring the EM (CORBA) configuration to be updated to reflect the new public IP.
________________________________________________________________

 

Q: What considerations are there for a mixed AWS environment, where for instance, the EM is installed on the corporate network with some Control-M Servers in AWS and some on company network?
A: The main consideration in such a scenario is the networking.  You must ensure proper connectivity between any hosts on the corporate network and those running in the cloud.
________________________________________________________________

 

Q: What is the minimum space required for the server to install correctly on AWS, or was this discussed earlier?
A: The installation guide states a requirement of 100 GB for installation of Control-M on a UNIX host.  This should be sufficient for the base installation and the subsequent installation of the required Fix Packs and patches.
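As a quick pre-install sanity check (the mount point below is an assumption; point it at the filesystem that will hold the installation), the free space can be verified from the shell:

```shell
# Minimal sketch: warn if the target filesystem has less than 100 GB free.
MOUNT="${MOUNT:-/}"              # assumption: adjust to your install filesystem
FREE_GB=$(df -Pk "$MOUNT" | awk 'NR==2 {print int($4/1024/1024)}')
echo "Free space on $MOUNT: ${FREE_GB} GB"
[ "$FREE_GB" -ge 100 ] || echo "WARNING: less than the documented 100 GB free"
```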
________________________________________________________________

 

Q: Can an on-premises Control-M EM server manage both on-premises Control-M servers and AWS Control-M Servers at the same time?
A: Yes, it is not a problem.
________________________________________________________________

 

Q: What is the Control-M AWS media file format? Is it downloadable?
A: The installation media is available in ISO format as well as zip for Windows and tar.Z for UNIX.
________________________________________________________________

 

Q: Can we have V8 on premises while installing V9 in AWS?
A: Yes; the normal considerations of working with different versions of Control-M need to be taken into account.
________________________________________________________________

 

Q: How will Control-M Agents connected to the East region's Control-M/Server connect to the Control-M/Server in the West region in case of DR?
A: This works the same as in an on-premises installation. If the Control-M/Server fails over, when it initiates communication with the Agent it uses its own name, and the Agent updates its primary Server to that new name. There must be network connectivity between the Server and the Agent.
________________________________________________________________

 

Q: Can we FTP the ISO image from a local desktop environment to EC2?
A: Yes, as long as FTP is enabled on your AMI and allowed by your corporate network. Typically it is easier to scp or sftp the image, as this is enabled by default in Linux EC2 instances.
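For instance, copying the media with scp could look like the sketch below. The key file, address, and ISO file name are all placeholders, not real values:

```shell
# All three values are hypothetical -- substitute your own.
KEY="$HOME/.ssh/my-ec2-key.pem"       # EC2 key pair file
HOST="ec2-user@198.51.100.7"          # public address of the EC2 instance
ISO="control-m-cloud-support-v2.iso"  # installation image name (illustrative)

# -C compresses in transit, which helps with multi-GB media.
# Printed as a dry run; remove the leading 'echo' to perform the copy.
echo scp -C -i "$KEY" "$ISO" "$HOST:/tmp/"
```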

________________________________________________________________

 

Q: Can primary and secondary CONTROL-M AWS instances be installed in Amazon data centers in different time zones?  Are there any specific configuration issues for this configuration?
A: The same considerations would need to be made as performing an installation on-prem.  It would generally be advisable to have an HA/DR pair configured for the same time zone to avoid confusion.


Would you like to improve your Control-M Workflow development cycle?

 

Have you heard about the Automation API WorkBench?

 

Please join us for the next Connect with Control-M webinar on Wednesday November 29th when Ruben Villa will explain and demonstrate how to use some of the latest tools and techniques to speed up your implementation of workflows. We’ll conclude with a live Q&A session.

 

Register Now!! 

 

IMPORTANT: Each registration ID can only be used by one person. Please have all interested participants register individually.

Kelsey McRae

Get to Control-M 9

Posted by Kelsey McRae Employee Nov 1, 2017

Technology continues to evolve, new innovations launch every single week, and Control-M is no exception! Every year we spend countless hours gathering customer feedback, improving functionality, and streamlining the user experience to help you with your digital journey. Don't get left behind by staying on Control-M version 7 or 8.

 

Here’s a look at what you’re missing if you’re not on Control-M 9. Contact us today if you have any questions about upgrading.

 

Control-M 7

 

Upgrade your Control-M environment to Control-M 9 to enjoy automated agent deployment, a new and improved user interface and promotion between environments.

 

Control-M 9’s automated agent deployment feature allows you to upgrade one or many agents to version 9 or install a fix pack deployment in a few simple steps. This feature also applies to the EM GUI client component and has a wizard interface to take you through the upgrade process. Upgrade quicker and with less risk, resulting in significantly reduced TCO for managing your ever-expanding workload environment.

 

The Control-M 9 user interface is more logical and easier to use than ever before. Whether your role is an administrator or a business user, a scheduler or developer – Control-M 9 retains and adds new features in an interface that is easy for all roles to use and master. With Control-M 9, you’ll involve everyone in the organization in the workloads that are so critical to your business.

 

Promoting jobs from dev to test, or test to prod across your various environments usually involves a lot of manual work like search and replace, and inconsistent operations. These methods can introduce errors at the most inopportune times.  With Workload Change Manager and the Promotion feature, manual work is replaced with promotion rules that operate the exact same way 100% of the time.  You’ll have complete control over the promotion process with consistent, fast, and error-free deployments and production will run smoother and more reliably than ever!

 

Control-M 8

 

Upgrade from Control-M V8 to Control-M 9 and enjoy automated agent deployment, a new and improved user interface, promotion (above) and Application Integrator.

 

Application Integrator is a web-based design tool that guides you through developing custom job types for your Control-M environment. Relying on scripts or other brute-force methods for custom applications can be costly to maintain and can leave you vulnerable to compliance issues. Application Integrator allows your application development and operations teams to work together to create custom job types that can be easily built and deployed, letting you manage all of your custom application workflows with the same power and efficiency you have with the out-of-the-box integrations Control-M offers today. Application Integrator also gives you access to a crowd-sourced community where job types created with Application Integrator are available for you to use.

 



Join our webinar to understand how to deploy Control-M 9.0.00 in the cloud using Amazon Web Services. During this webinar, we will cover the cloud installation media, supportability, and the settings and installation procedure. Come learn how you can address your changing business needs by using Control-M's public cloud enterprise job scheduling capability.

 

Please join us for the next Connect with Control-M webinar on Wednesday October 25th when Ted Leavitt will explain and demonstrate how to deploy Control-M on AWS. We’ll conclude with a live Q&A session.

 

Register Now!!

 

IMPORTANT: Each registration ID can only be used by one person. Please have all interested participants register individually.

Have you ever been curious about how the New Day procedure functions, or would you like to know more about optimizing it?

 

Learn how to identify problems, troubleshoot issues, and improve performance in this under-the-hood look at the New Day procedure.

 

This is the link on YouTube for the recorded session:

 

Connect With Control-M: New Day Architecture & Troubleshooting - YouTube

 

Here is the Q&A for this webinar (Connect With Control-M: New Day Architecture & Troubleshooting)

 

________________________________________________________________   

 

Q: Is the timing of pre-New Day configurable? By default it happens 1 hour before New Day; can we change that if we want?

A: No, it is not possible to change it at the moment; it works this way by design.

________________________________________________________________   

   

Q: Where can I find the CE log?   

A: You can find the CE log in the <SERVER_HOME>/ctm_server/proclog directory on the Control-M/Server host.
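For example, the most recent CE log can be located with a one-liner like the sketch below (assuming $CONTROLM points at the Control-M/Server account's home directory, which is an assumption about your environment):

```shell
# Sketch: show the newest CE log under the Control-M/Server proclog directory.
# CONTROLM is assumed to point at the Control-M/Server home directory.
CONTROLM="${CONTROLM:-$HOME/controlm}"
ls -t "$CONTROLM"/ctm_server/proclog/CE* 2>/dev/null | head -1
```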

________________________________________________________________   

   

Q: What's the difference between the CE* log and the p_ctmce* logs?   

A: The CE log records the execution of the CE process; a p_ctmce log is recorded when the CE process encounters issues.

________________________________________________________________   

   

Q: Which are the Control-M/Server processes that execute the New Day?   

A: The CE process is in charge of executing New Day; the CD internal subsystem is the one that records New Day messages in the ctmlog table.

________________________________________________________________   

   

Q: So just to be clear, jobs with user specified user dailies are NOT included in pre-newday - correct?   

A: Correct. Pre-New Day orders only jobs from folders with the Automatic order method; this is the same as the System user daily in previous Control-M versions.

________________________________________________________________   

   

Q: If the server hosting the Control-M/Server crashes during New Day, should the New Day be executed again once CTM/Server is up and running?   

A: Yes, New Day will continue executing once Control-M/Server is up and running again.

________________________________________________________________   

   

Q: Where do we find the Control Module logs, and for how many days are they kept?

A: Control Module logs (Databases, PeopleSoft, and more) are written on the agent machine in the proclog directory.

________________________________________________________________   

   

Q: Is it recommended to schedule jobs near the time New Day runs?

A: It depends on your needs. Just be aware that if a job continues running during New Day, the server will track its status after server processes are resumed.

________________________________________________________________   

   

Q: A quick question about the time the New Day process executes: if the Control-M Server crashes, will the New Day process continue to run after the server is back up?

A: Yes, New Day will continue executing once Control-M/Server is up and running again.

________________________________________________________________   

   

Q: In the NEWDAY message we do not see the message stating that PREORDER jobs were copied to the AJF.

A: This is normal, since the pre-order process loads jobs into a temporary file; the user daily is when new jobs are ordered into the active environment.

________________________________________________________________   

   

Q: These commands can be used with Control-M version 8, correct?

A: Correct

________________________________________________________________   

   

Q: What does the pre-New Day process do?

A: It cleans statistics, agent logs, and server table entries, removes old jobs, and loads new jobs into the active environment.

________________________________________________________________   

   

Q: If you issue the clean_ajf command after New Day ran, would it ONLY remove old jobs or would it clean all newly loaded jobs?

A: It will remove all jobs from the environment, since it truncates the cms_ajf table.

________________________________________________________________   

   

Q: What happens if I run 'ctmlog list 49’?   

A: It does not work; the time parameter only accepts values up to 24 hours.

________________________________________________________________   

   

Q: Could you please detail how exactly ctmagcln works? We often see that sysout files are not removed on some agents; when we investigated, we found agents defined with different names for the same server, which conflicts with the ctmagcln process.

A: ctmagcln basically removes log, status, and output files from every agent connected to the server. Any case where files are not removed from an agent needs to be troubleshooted.

________________________________________________________________   

   

Q: We are on V8; what settings do we need for PREORDER to work?

A: Nothing special; PREORDER runs by default 1 hour before New Day.

________________________________________________________________   

   

Q: Are commands similar in UNIX?   

A: Yes, the commands are the same in UNIX.

________________________________________________________________   

   

Q: What are the parameters related to the log and statistics cleanup values shown in the ctmlog?

A: The parameter for the number of days to keep Server logs is IOALOGLM, which can be updated from the Configuration Manager (CCM). By default, 20 runs per job are kept in the statistics table; running the ctmruninf utility makes it possible to delete those statistics outside of New Day.

________________________________________________________________   

   

Q: Why do jobs that ended not OK carry over to the next day, even though they have no maxwait?

A: By design, jobs that ended not OK are not removed by New Day, so that you can review whether it is normal for such jobs to fail.

________________________________________________________________   

   

Q: How can we check whether the pre-order COPY failed or succeeded?

A: There is no way to troubleshoot the pre-order copy itself; you can run the ctmudchk command after New Day to validate that New Day ordered all jobs.

________________________________________________________________   

   

Q: If we need to hold a job that starts exactly at New Day time, how do we go about that?

A: New jobs will not run during New Day; the Server starts executing jobs once processes are resumed after New Day finishes. It is possible to use the ctmpsm utility to hold the job from the command line while the download is performed and the WLA client is refreshed.

________________________________________________________________   

   

Q: Can we forcefully run the New Day process at any other time if we wanted to? If so, what utility do we need to use?

A: The New Day must run only once a day; however, it is possible to change the New Day time by updating the DAYTIME parameter from the Configuration Manager (CCM).

________________________________________________________________   

   

Q: What should be checked if New Day is stuck or taking a long time?

A: Check the CE and p_ctmce* log files in the proclog directory; they will give a clue as to why New Day is stuck.

________________________________________________________________


Trending in Support: Connect With Control-M: Control-M Database Backup and Recovery Best Practices

 

When was the last time you backed up your Control-M database? When was the last time you tested the backup to make sure everything was working properly?

 

Learn how to schedule a backup of your Control-M database on a regular basis and how to recover your data and keep your business operations running.

 

This is the link on YouTube for the recorded session:

 

Connect With Control-M: Control-M Database Backup and Recovery Best Practices - YouTube

 

Here is the Q&A for this webinar (Connect With Control-M: Control-M Database Backup and Recovery Best Practices)

 

________________________________________________________________

 

Q: How are you managing how/when you run hot backup and cold backup on Red Hat?  You using crontab job?

A: As mentioned in the presentation, the command line utilities for hot and cold backups may be used in Control-M job definitions to schedule them on a regular basis.

________________________________________________________________

 

Q: Can backups be restored from a setup where EM and Server are hosted on different machines to a setup where EM and Server are on the same machine? If so, how?

A: This is possible and would depend on the type of backup taken and the database configuration as it relates to the Control-M installation, whether the setup on the same machine is a One Install or separate installations on the same host. For example, if EM has its own Postgres database then it would be possible to take the database backup from one machine and restore it on another, then use the restore_host_config  utility (as of version 9.0.00.500) to make the necessary parameter changes.  The utilities that only export data may be used to restore to a different machine, but again care must be taken as host specific values are included in these backups.

________________________________________________________________

 

Q: What is the recommended time to keep older backups and archive logs?

A: This would depend on how far back in the past you would want to have a potential restore point.  The business need would dictate when an older backup is no longer critical to keep.

________________________________________________________________

 

Q: Is there a BMC utility to manage the removal of old archive files, following the execution of the next hot backup?

A: Currently no automated utility exists to manage the removal of old archive log files once a new hot backup is performed. However, a simple delete or move command may be used or create a new directory for these files.
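As a minimal sketch of such housekeeping (the directory path is an assumption; point it at the archive directory you configured for the hot backup), old archive files can be pruned with find:

```shell
# Hypothetical archive location -- use your configured hot-backup archive directory.
ARCHIVE_DIR="${ARCHIVE_DIR:-/tmp/ctm_hot_bkp/archive}"
mkdir -p "$ARCHIVE_DIR"     # no-op if it already exists
# List archive files older than 7 days; swap -print for -delete once verified.
find "$ARCHIVE_DIR" -type f -mtime +7 -print
```

Running it with `-print` first and only then switching to `-delete` avoids removing files that a pending restore might still need.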

________________________________________________________________

 

Q: Can I just use XML exports instead of backups?

A: This is dependent on your needs, keeping in mind that XML exports save only the data, for example, job definitions, calendars, etc., whereas a backup involves the entire database schema and configuration files and allows the database to be restored in the event of corruption.

________________________________________________________________

 

Q: Does ctm_backup_bcp utility back up all configuration in Control-M/Server and all job definitions?

A: The ctm_backup_bcp utility backs up all data and configuration stored within the database tables of Control-M Server, in bulk copy format.

________________________________________________________________

 

Q: How often do I need to run hot backups?

A: This would depend on the business need for a specific restore point.  Once a hot backup is run, a restore point is created and subsequent archive logs maintain the changes performed in the database from that point until the next hot backup is taken. We recommend keeping an eye on the archive log directory as this will continue to grow so long as the hot backup is active. You may want to consider running a fresh hot backup weekly so that all previous archive logs from that point and before may be discarded to retrieve filesystem space.

________________________________________________________________

 

Q: We use mirroring for our Control-M server and have a script that executes the following to back up the EM backup once a night shortly after new day. Is there a benefit to using DBUHotBackup over the "em util -export -type all -type history -file $_[0]"?

A: The EM util export essentially exports the data from the EM database at a particular point in time, whereas the hot backup will contain all data up until the current time.

________________________________________________________________

 

Q: Is there a way to apply a cold restore and roll forward using the logs if they were kept?

A: Unfortunately, this is not possible.

________________________________________________________________

 

Q: On Linux, if you are running the hot b/u via a script at the same timing daily, what general type of issues would cause the backup to fail? Sample error .... AP-10 - Execute Script Module: 9 - Failed to invoke command= ....

A: We would have to examine the log files to determine the cause of this particular error, however, some possibilities are that it may be due to the backup directory not being empty, or the Postgres database is not running.

________________________________________________________________

 

Q: Can you paste the backup command via a job in this panel?

A: DBUHotBackup -TRACE_LEVEL info -BACKUP_DIRECTORY /controlm/ctmsrv9/hot_bkp/full -ADMINISTRATOR_PASSWORD manager -REMOVE_UNNECESSARY_LOGS Y

________________________________________________________________

 

Q: How CPU intensive is Hot Backup?

A: This would depend on the amount of data being processed for the hot backup and archive logs, as well as the physical resources on the machine itself hosting the Control-M database.

________________________________________________________________

 

Q: I like the command em psql -c "select.....", is there a similar command for oracle and sqlplus?

A: Here is an example for Oracle using sqlplus with a here-document (user, password, and table name are placeholders):

sqlplus -s /nolog <<EOF
connect user/pass
select * from my_table;
quit
EOF

________________________________________________________________

 

Q: What are the advantages of hot/cold backup compared to a bcp backup?

A: A BCP backup on Control-M Server backs up the data contained within the database. The BCP restore assumes a functioning database in order to import the data back in.  A hot or cold backup will perform a backup of the database contents and configuration files, allowing the database to be restored in the event of corruption.

________________________________________________________________

 

Q: Is there a difference in the DBUBackup commands and just using the pg_dump command with Postgres?

A: The utility pg_dump is not a database backup but rather a database export. Conversely, the DBUBackup is a backup of the database configuration and contents to allow a restore in the event of corruption.

________________________________________________________________

 

Q: What about archive log files?  The hot backup apparently doesn't delete them.

A: Currently no automated utility exists to manage the removal of old archive log files once a new hot backup is performed. However, a simple delete or move command may be used or create a new directory for these files.

________________________________________________________________

 

Q: Is the process similar for MSSQL and Oracle, or are those backups done via the MSSQL or Oracle utilities?

A: Currently the hot backup functionality within Control-M is available only for a dedicated Postgres database.  Oracle and MSSQL databases require their own native utilities for such functionality. 

________________________________________________________________

 

Q: Does the contents of these folders need to be deleted each time you perform hot backup or will it overwrite the contents?

A: As mentioned in the presentation, please ensure that the directories used for the backup and archive files are cleaned before running the hot backup.

________________________________________________________________

 

Q: Would there be any purpose in enabling archive mode if we are not using hot backups?

A: No, the archive mode is intended for hot backups.

________________________________________________________________

 

Q: Do I execute the same command for Control-M Server Hot backup?

A: The hot backup commands for Control-M Server and Enterprise Manager are very similar. 

________________________________________________________________

 

Q: Are hot backups up to the minute, or, if there is a failure, could there be recent transactions that are not in the hot backup?

A: The archive logs will retain the most current information that was dumped in the hot backup.  There is the possibility that some transactions may not be included, depending on the point of failure.

________________________________________________________________

 

Q: Can a remote directory be defined as the default location for the hot backup files?

A: So long as the directory can be accessed via a local mount point, it may be used for the hot backup files.

________________________________________________________________

 

Q: Do I execute the same command for Control-M Server Hot backup in windows environment

A: The syntax for the hot backup commands is the same in a Windows environment.

________________________________________________________________

 

Q: What is the difference between a cold backup and performing exportdefjob, exportdeffolder, exportdeftable, exportdefcal, copydefcal, copydefjob, etc..?

A: A cold backup is a backup of the Control-M database and the data contained in it. The export utilities contain only the data exported at that given time.

________________________________________________________________

 

Q: What is the difference between a cold backup and performing em util -S T7LDM05 -D EM7 -U em800 -export?

A: A cold backup backs up the entire EM database contents and configuration files, whereas a UTIL export merely exports the data contained within the EM database.  The util restore assumes a functioning database in order to import the data back in.

________________________________________________________________

 

Q: Does DBUHotBackup copy all the EM data? Including calendar/shout destinations/user permissions/services etc?

A: The hot backup does back up all data contained in the Control-M EM database, including folders, calendars, users, services, etc.  Please note that shout destinations are actually maintained in the Control-M Server database and would require a backup of Control-M Server.

________________________________________________________________

 

Q: Also, which logs are removed by REMOVE_UNNECESSARY_LOGS?

A: The "REMOVE_UNNECESSARY_LOGS" when set to a value of "Y" keeps the last 7 days of files in the archive directory after a successful hot backup.

________________________________________________________________

 

Q: Which back/restore utilities to use when switching from windows to Linux?

A: The answer would depend on your business needs and what data you are looking to back up for a potential restore in the event of a failure.  Generally speaking, the cold and hot backup commands, as well as the XML utilities mentioned in the presentation, may be used on either Windows or Linux installations of Control-M.

________________________________________________________________

 

Q: Other than time constraints are there any advantages to a cold backup versus a hot backup?

A: Time and space utilization are the primary advantages of a cold versus hot backup.

________________________________________________________________

 

Q: We are using the Control-M high availability function, does BMC still recommend a backup is performed?

A: While the High Availability function in Control-M allows switching between a primary and secondary platform, a backup is still recommended for disaster recovery purposes.

________________________________________________________________

 

Q: I had to create a python script to clean up archive files.  I would clean up files from the date stamp of the old .backup file up to the new .backup file.  I am hoping your utilities mentioned do this process.  How do we get information on those? Can you demo those?

A: The "REMOVE_UNNECESSARY_LOGS" parameter when set to a value of "Y" keeps the last 7 days of files in the archive directory after a successful hot backup. For further assistance, please open a support issue and we will gladly address your concerns. 

________________________________________________________________

 

Q: How do you use these backups for DR?  Do you sync them somewhere?  Are you setting somehow the HOSTNAME related things on DR server as needed?  Any tie in with IP address with restore?

A: With the database backups and restores, system specific information is maintained.  For the purposes of disaster recovery, as of the latest fix packs, the "restore_host_config" utility was introduced to allow for changing various parameters when different hosts are involved.
