
Control-M


Three is the perfect number. I’ve been with BMC for three years and Control-M 20 is the third version I’ve been pleased to take part in.  To keep with that theme, I’ll provide three reasons to upgrade to Control-M 20.

 

BMC’s overall strategy focuses on five technology tenets to help companies achieve success and become an Autonomous Digital Enterprise.  Control-M is the driver behind three of those tenets: automation everywhere, enterprise DevOps and data-driven business.

 

And Control-M 20 advances the BMC strategy by responding to three challenges:

  • Market demand for faster time to market for new digital capabilities is accelerating.
  • Pressure to manage costs and increase efficiency is a constant requirement.
  • Dev/product teams are increasingly turning to modern technologies such as containers, Kubernetes and cloud to improve application development and increase deployment velocity.

 

In this article I will focus on a few key Control-M 20 features that help IT Ops leaders solve the above challenges.  Hopefully we have captured the features most relevant to readers.  For the complete list of features, refer to the Control-M 20 Release Notes.

 

Control-M 20 unlocks business agility 

What organization is not seeking agility today?  No matter the sector, whether banking and finance, retail, or manufacturing, every company is building a customer-centric culture to provide its customers with an e-commerce-style experience.  They need to respond with agility to meet customer expectations, delivering goods and services at the right time and at a competitive price.  Even more, they need to anticipate customer expectations by transforming customer data into actionable insights.

 

Control-M 20 helps provide agility by empowering developers with self-service orchestration capabilities, so they can develop their applications with production in mind, facilitating operations’ tasks and speeding up the entire application lifecycle.

 

Three key features, among others, contribute to this strategic theme and provide:

 

  • More developer empowerment – with controlled administration delegation. With the Role-based Administration feature, developers can be authorized to manage more objects in their development stack, with full autonomy but controlled access, so they cannot break other teams’ work. Because they don’t need to submit requests to IT, their applications flow faster through their lifecycle and reach production sooner. One of our customers said: “The Role-based Administration and Automation API will provide more freedom for customers to manage their connection (e.g. bank users can update passwords without sharing it with our team)” – Johann Vermeulen, IT Operations Analyst, BMW South Africa.
  • More sophisticated orchestration - with expanded Automation API functionality.  New job types, new job information, and new Control-M functions empower developers to expand orchestration and bring operational instrumentation into more sophisticated application workflows, for faster time to market of their applications and digital services.
  • Faster development and deployment of workflows - with the new MFT Enterprise LDAP integration.  This integration introduces an external authentication method so developers and other teams can develop and deploy file transfer workflows with reduced risk and improved security.

 

Control-M 20 manages costs and increases efficiency

 

Rising costs compete and create tension with agility.  It’s a challenge to allocate investment between optimizing current business practices and funding innovation.

 

Control-M 20 includes features to help with both agility and cost management.  Here are a few examples of how this is accomplished.

 

  • Reduced Control-M management cost – with centralized management of connection profiles.  The Centralized Connection Profile feature eliminates the redundant configuration effort required to connect to all the local servers running a certain type of job.  Furthermore, since the information is kept centrally, disaster recovery as well as migration and upgrade scenarios can be sped up.
  • Reduced Control-M onboarding cost – through an expanded and intuitive web interface, including welcome pages for all tools and major functionalities, integrated guides, and how-to videos.  Here is a testimonial from an Nvidia Corporation customer: “We plan to roll out Control-M usage to all our SAP personnel this year using the Web Interface. I see that, the expanded capabilities in Control-M 20 Web, especially in planning and monitoring, will enable us to accelerate and simplify on-boarding them all” – Bala Gampa, Enterprise Architect & SAP Technical Expert, Nvidia.
  • Eliminated outage risk when transferring external business partners’ data - with the Active-Active High Availability in MFT Enterprise architecture.  This feature delivers high availability with redundant gateways and hubs, providing an always-up environment that can also offer load balancing.

 

Control-M 20 generates business value from technology modernization

 

Companies have transformed their IT infrastructure into modern, dynamic cloud environments capable of supporting business growth cost-effectively.  Developers leverage “next gen” architectures like Docker, Kubernetes, and other Platform-as-a-Service technologies to handle the increasing volume of applications with agility and high quality.  However, until applications and data get to production, no value is generated for the business.  Production is where applications deliver value or actionable insights to support strategic decisions.  It’s Control-M that generates business value from technology modernization.  In particular:

 

  • Control-M Automation API operationalizes containerized applications, bringing functionalities such as a single point of control, service level management, and integrated file transfer into those applications and making them production-ready.
  • Control-M Application Integrator helps integrate modern technologies across hybrid environments, driving them to deliver final value as business services.

 

Below are some key enhancements in Automation API and Application Integrator.

  • Simplify productization of complex containers - through new Automation API services, delivered with an API-first strategy through continuous monthly releases.  Here is a quote from one of our customers: “Automation API enhancements will allow us to integrate Control-M with our enterprise applications. This will close the gap between all of our apps and data points and position Control-m as a central hub for process flows - as I feel it should be. API Integration is the fast lane and Control-M 20 will make it easier to put the pedal to the metal” – Jonathan Spottswood, Tech Lead, Enterprise Workload Orchestration, GEHA.
  • Integrate Google Cloud services into hybrid orchestration – with an out-of-the-box service account authentication for Google Cloud Platform. You will be able to build new integrations into services or microservices on the Google Cloud Platform in an easier and more secure way.
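To make the API-first approach concrete, here is a minimal sketch of driving the Automation API over its REST interface from Python. The endpoint host, credentials, folder, and job names are all placeholders; the session-login and deploy endpoints follow the pattern documented for the Automation API, but check your version's reference before relying on them:

```python
import json
import urllib.request

# Placeholder host; a real environment would use your Control-M endpoint.
ENDPOINT = "https://ctm-host:8443/automation-api"

def login_request(username, password):
    """Build the Automation API session-login request (POST, JSON body)."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{ENDPOINT}/session/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def job_definition():
    """A minimal 'Jobs as Code' style definition (all names hypothetical)."""
    return {
        "DemoFolder": {
            "Type": "Folder",
            "hello": {
                "Type": "Job:Command",
                "Command": "echo hello",
                "RunAs": "demo",
            },
        }
    }

req = login_request("apiuser", "secret")
# On a live endpoint you would send this request, read the returned token,
# and then POST job_definition() to the deploy service with that token.
```

Because the definition is plain JSON built in code, it can live in source control next to the application it orchestrates, which is the essence of the API-first workflow described above.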

 

Conclusion

Control-M has evolved, and continues to evolve, to help organizations get better at agility, customer-centricity, and actionable insights.  In particular, Control-M 20 addresses the faster time-to-market challenge, manages costs and increases efficiency, and generates business value from technology modernization.


How we used DevOps practices and the Control-M Automation API to reduce a 4+ month, thousands-of-person-hour project to just 3 weeks with minimal support personnel.

 

Background

 

A legacy Foreman system had to be upgraded to the latest release to support CARFAX infrastructure. Each server (physical or VM) would need to be touched. Due to the age of the system and the complexity involved, there was no clear-cut upgrade path without running various scripts to migrate data, including manual intervention for each server. It was estimated that even with scripting, this process would take over 4 months to complete, with thousands of person-hours dedicated to the project.  At one point, for one piece of the project, we considered dedicating an entire team to doing nothing but “clicking a button” that was required as part of the migration efforts.

 

Automation to the Rescue

 

The project lead had previously worked with Automation team personnel on other projects and was aware of our capabilities with the Automation API and Jobs as Code.  After discussing the project, we determined that we could use the Automation API and Jobs as Code to significantly reduce manual intervention, as well as make sure that CARFAX infrastructure remained stable during the transition and migration period.

 

Issues such as “How do I take only one server from a service down at a time?” were resolved through Quantitative Resources; “How do I make sure that all steps are followed in sequence?” through dependencies; and “If I fail, how do I stop other servers in the same service from proceeding?” through conditional logic.  Through the web portal we were even able to give System Admin personnel access to a red light/green light dashboard where they could view progress, set TLAs when necessary, and rerun any failed migration efforts.

[Image: jac-example.JPG]
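The three patterns above (a quantitative resource, sequential dependencies, and stop-on-failure chaining) can be sketched as generated "Jobs as Code" JSON. The folder, script, and resource names below are hypothetical, and the exact object keys should be checked against the Automation API code reference; this Python snippet simply shows the shape of a generated folder in that style:

```python
import json

def build_migration_folder(service, servers, max_concurrent=1):
    """Build a Control-M 'Jobs as Code' style folder (names hypothetical).

    Each server's job draws one unit from a quantitative resource, so at
    most `max_concurrent` servers of the service are down at once, and
    waits for the previous server's success event, so a failure stops
    the remaining servers in the same service.
    """
    folder = {"Type": "Folder"}
    prev_event = None
    for server in servers:
        job = {
            "Type": "Job:Command",
            "Command": f"./migrate.sh {server}",  # hypothetical migration script
            "RunAs": "migrator",
            # Quantitative resource limiting concurrent migrations per service
            "serviceSlot": {
                "Type": "Resource:Semaphore",
                "Quantity": str(max_concurrent),
            },
            # Success event for the next server in the chain to wait on
            "done": {
                "Type": "AddEvents",
                "Events": [{"Event": f"{service}-{server}-ok"}],
            },
        }
        if prev_event is not None:
            job["wait"] = {
                "Type": "WaitForEvents",
                "Events": [{"Event": prev_event}],
            }
        prev_event = f"{service}-{server}-ok"
        folder[f"migrate_{server}"] = job
    return {f"{service}_migration": folder}

defs = build_migration_folder("web", ["web01", "web02", "web03"])
print(json.dumps(defs, indent=2))
```

Generating definitions from a server inventory like this is how a small master file can expand into thousands of jobs: the loop, not the file, carries the scale.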

 

While this significantly reduced the project in terms of person-hours, we still had one piece that required us to log in to a web page and push a button.  With no API or other means available, Automation personnel used a robotic process automation tool to record and automate this important step.  With that in place, we then used the Automation API to control this piece of the puzzle as well, so that all parts were visible, manageable, and monitored through a single pane of glass.

 

The End Result

 

Using one master JSON file with 168 lines of “code”, we were able to dynamically create over 16,000 jobs that controlled the migration process. Using the Automation API, we provided “black box” automation for pieces of the puzzle where no clear-cut automation methods were available.  Through robotic process automation, we eliminated the final hurdle.

 

The 4+ month, thousands of person-hour project was reduced to 3 weeks.  During this 3-week time, Control-M and the Automation API managed most of the migration efforts with systems personnel only having to respond to incidents and failures.  Throughout the project, complete visibility and metrics were available thanks to the built-in web dashboard.

 

All this, thanks to the power of the Automation API and DevOps.  What will YOU do with the Automation API today?


Learn Anywhere, Anytime!!!

 

We are excited to announce that the Assisted Self-Paced (ASP) offering for the Control-M 9.0.19.x: Fundamentals Operating course is now available for you.

 

This assisted self-paced training covers:

  • Basic Control-M Operations
  • Different Control-M Interfaces used to monitor the Control-M environment
  • How to monitor and manage job processing in the Control-M environment
  • How to access job details
  • The Find functionality and how to use it to locate a particular job within complex workflows
  • How to find job dependencies using the Neighborhood functionality
  • How to use Alerts Monitor to manage and monitor the alerts
  • Leverage the Control-M MFT Dashboard to check the progress and details of all file transfers across your Control-M environment
  • How to use Workload Policy Monitor to view the current status of jobs associated with a Workload Policy

 

Register for learning, and complete it at your pace, anytime within the specified period.

For more details on the course abstract and registration, see: Control-M 9.0.19.x: Fundamentals Operating (ASP) - BMC Software


Are you a user of Control-M Managed File Transfer? Do you want to know how Control-M Managed File Transfer provides an intuitive graphical user interface that gives you instant visibility into the status of internal and external file transfers with a dashboard view?

 

Here comes an exciting education offering for you.

 

We are happy to announce that the Control-M 9.0.19.x: Fundamentals Managed File Transfer Administering course is now available for BMC customers and partners.

 

This one-day instructor-led training (ILT) course covers how to:

  • Transfer a file to and from remote to local servers
  • Transfer a file between SFTP servers
  • Install Control-M MFT
  • Deploy file transfer to agents
  • Configure Control-M MFT
  • Define connection endpoints
  • Monitor file transfer activity
  • Review the MFT Enterprise architecture
  • Install B2B Gateway
  • Configure B2B Hub
  • Configure Users for Gateway
  • Configure Virtual Folders
  • Define Rules
  • Use the B2B Gateway to transfer files

 

For more details about the Course Abstract and Registration, please check: Control-M 9.0.19.x: Fundamentals Managed File Transfer Administering - BMC Software

 

Many thanks to everyone who helped us develop this course.


I am reaching out to our amazing Control-M community to see if a few of you are able and willing to support us with a research project we are currently doing with IDC. We are in search of organizations that have used Control-M for some years and are leveraging it as part of their modernization or digital transformation initiatives.

 

The ask is a one-hour phone call. It will be only you and IDC on the phone (no one from BMC), and your name, company name, and any information you share during the call will remain anonymous (even from BMC).  IDC would only confirm to us that the call took place, not anything you discussed.

 

Interested already?  Message me! Criss Scruggs

 

Your responses would be aggregated with responses from other Control-M customers into a single white paper report, of which we would be happy to send you a copy when it's hot off the press. If you do wish to attribute any comments or quotes to your company or yourself, you would of course get to review and approve that prior to sharing with BMC or publishing the white paper (again, it can all remain anonymous).

 

Here are a few more details regarding the research:

 

Research: Analysis of the benefits and costs associated with your organization's use of Control-M (you may not have all of the information required, but hopefully most of it)

Participant: The person in the organization who is best able to answer questions such as:

  • Your organization's background including size, industry, location, defining characteristics and reasons for using Control-M
  • The role of Control-M in the automation and orchestration of business applications, and the impact on developers, including the quality and timeliness of delivery of new applications and features
  • The use of Control-M to implement and extend DevOps activities and to manage workflows across hybrid and multi-cloud environments
  • The effect of Control-M on data analytics and the ability to leverage data to support business operations
  • The impact of Control-M on business outcomes such as higher revenue, customer satisfaction, compliance, and quality of applications and services
  • Costs (both financial and staff time) and time frame for your organization's use of Control-M

 

Now, for the best part... for the first three Community Members who fit the criteria AND are willing and able to participate AND complete their calls with IDC before May 20th, we will send you the coveted Control-M hoodie and a stylish pair of only-available-from-BMC socks.

 

Please reply to let me know you are interested and I will contact you directly about connecting you to IDC to schedule the interview.

Message me! Criss Scruggs

 

Thank you for being the best customers ever!

 

Lilly Roosken Loreal Hunter


** n.b. this was intended to be the Christmas installment, but Christmas got a bit busy. It is a slight departure from the usual tales - less tech, more disasters and a guest appearance by Roderick David Stewart ...

 

“Is your passport valid and do you fancy a trip?” This is a phrase that many young, energetic employees long to hear from their boss. Why yes, my papers were in order and he just needed to tell me where to go (which happened a lot in our office). I had cornered the market in configuring 3174 terminal controllers (used for connecting groups of 3270 terminals to the mainframe) and now our Zurich office needed to expand. More desks equalled more screens.

 

Arriving in Switzerland I was slightly perturbed to find that our branch was located above a shoe shop on a busy shopping street. When I expressed my surprise to the local staff there was laughter. “But we are on Bahnhofstrasse, the most expensive street in the whole of Switzerland, it is the only place to do business! And that shoe shop, don’t go in. You’ll never be able to afford the shoes and, if you do, you will never want to wear any other kind!” Warned off even trying on local shoes, I busied myself installing the hardware and delivering the warm green glow of terminal screens.

 

The local staff took me out for dinner and dropped me off at my hotel. The next morning I headed down to breakfast. On the buffet table there was a large, sculpted ice swan. Orange juice ran down a spout and, suitably chilled, into any proffered glass. I could get used to this. The hotel was the Schweizerhof, a 5-star establishment at the very top of Bahnhofstrasse. I stayed there twice before our secretary (who was new to the job) realised there were various bands of employees and only senior staff should be lodged at the Schweizerhof. On later visits I had to slum it in a 4-star place, just off Bahnhofstrasse.

 

My return visits were necessary as the office overflowed from the confined space above the shoe shop and into several more rooms above a chocolatier. Eventually it was realised that Zurich (and several other branch offices across Switzerland) now needed their own local mainframe. Bahnhofstrasse, clearly, was not a suitable environment for such a beast.

 

By chance we found a datacentre that an insurance company had recently vacated. The only problem was that it was well into the countryside, down the western side of Lake Zurich and a 30-minute drive from the city. The building was in a light industrial park of approximately 10 units, nowhere near any town or village. We moved into the last unit in the row and shared an outer front door with a business that serviced skis. Downstairs the insurance company left us a ready-made area where a mainframe and communications equipment could easily be accommodated. Upstairs was an office suite with a small kitchen and views over a parking area and adjacent meadow. On the far side of the meadow was a small farm which housed several pigs and a few dozen chickens.

 

Initially I played no role in the installation of the datacentre. My colleagues built and configured the hardware, installed communications lines and got the place ready to go live. An emphasis was placed on automation; the site was to be as “lights out” as possible. To this end, CA-7 was installed and would run the bulk of the work. Environmental sensors were also added; should anything unusual occur then an alarm would be raised and the relevant staff informed.

 

Now we just needed to recruit somebody to sit there, 9-to-5, to handle the daily operations. A classified advert was placed in the local Zurich newspaper; it ran for several days without response. A larger ad was published, still no takers. Eventually a half-page splash appeared, pleading for somebody to step forward. The only applicant was a gentleman who insisted that he would need 3 months of annual leave.

 

This is where I came in. I was familiar with Zurich and had been tasked with documenting the datacentre and producing a daily checklist of tasks. Why not work in the Swiss datacentre while I was completing these documents? We would surely find local staff sooner or later (very much later, as it transpired) and I would be on hand to train any new recruits.

 

On my first day back in Zurich a local colleague drove me out to the datacentre. I got an access card and a guided tour of the premises. He drove me back, which was just as well as I had no car and, anyway, I could not drive.

 

Installed in my 4-star hovel, I set about organising myself. On my first night there, the phone rang at 01:00 am (this was the phone in my room; being the early 1990s, this tale predates ubiquitous mobile telephony). The night shift leader in London reported that a dial-up line used for critical payment instructions seemed to be dead. Could I pop out to the datacentre and have a look? I jumped into action. There was a taxi rank by the railway station and I handed the driver the address on a piece of paper. He was not entirely sure where our target destination was. After a one-hour meandering drive, between my very basic German and his very basic driving, we managed to find the datacentre. I charged in. The windowless machine room was underground. I found the errant modem, pulled the power, waited 10 seconds and switched it back on. I phoned London; they were already seeing the data flowing through. I had saved the day – and on my first day too!

 

I headed back up the stairs to the office area and out of the front door. The taxi was nowhere to be seen. I found myself standing in a darkened industrial estate in the middle of the Swiss countryside. Back in the office I searched for the number of a taxi firm, to no avail. I rang London again. Luckily the shift leader rapidly located the number of a limousine company in Zurich – would that do? Yes, it would.

 

It turned out that we had an account with this firm as they had been used to shuttle our managers out to the datacentre in the preceding months. Remarkably quickly, a stretch limousine appeared outside the datacentre. A smartly uniformed driver appeared and asked if I was called Mark. Yes, I was the only person ordering a stretch limo on that particular industrial estate, at 03:30 am.

 

The driver introduced himself as René. We motored our way into Zurich and he asked if I would need to book them again. I explained that I would be attending the datacentre on a daily basis, with the (hopefully infrequent) need for an emergency callout at night. In that case, René suggested, I should use their firm.

 

The next day I called my boss in London and pointed out my transportation issues and the fiasco with the taxi. He said that there was a railway station some distance from the datacentre but that he knew it wasn’t easy to get from there to the datacentre. Working on the premise that I should only be there for a couple of weeks, he agreed that I could use the limousine service for my daily commute.

 

Thereafter, René would collect me as I finished my breakfast in the hotel restaurant. I would position myself at a table where I could see the “limousines only” parking berth and, as soon as my driver arrived, would skip down the steps as he held the car door open for me. The other patrons soon began to wonder who this sophisticated young Englishman was. I enhanced the aura of mystery by always appropriating two boiled eggs from the buffet, wrapping them in a napkin and taking them with me (the datacentre had no food supplies and I wasn’t going to spend any of my money on such luxuries).

 

I quickly completed the documentation. Too quickly, as it turned out. I ran out of things to do and was reduced to staring out of the window. The office area was slightly elevated, about half a floor above ground level. From this position I could see out across the small parking area and meadow to the farm. Two large pigs provided the main entertainment, one pink, the other brown. On one particularly exciting occasion they both escaped and proceeded to fight their way across the pasture. The battle only ended when the farmer appeared and stopped them before they reached the main road.

 

I grew slightly despondent. People would call from London but the topic of conversation would often turn to how the pigs were doing. In the midst of this monotony, René became a welcome distraction and our journeys together gave me the rare opportunity to talk to somebody in person. Regular as clockwork, he collected me at 17:30 and deposited me back at the hotel. He was a fount of knowledge on all things Swiss, from where to buy the best chocolate to which were the best mountain resorts. Every now and then a much younger guy, called Ralph, would be my driver. Ralph didn’t say much, I wasn’t even sure if he spoke English.

 

One point where I did manage to contribute was with the dodgy dial-up modem. Without explanation it would hang, maybe once a week. We would reconfigure the settings but it would soon seize up and require manual intervention. Late night visits became routine.

 

One Thursday afternoon I was sitting in the office area. The building was completely clad in dark-blue mirrored glass. Looking out I had a clear view, but external observers saw nothing but their own reflection if they tried to look in (unless you came into the shared hallway and glanced through the airlock door that secured our office). I noticed that the limo was waiting in one of our parking spaces. But it was only 5 pm, which was a strangely un-Swiss deviation from our normal schedule. However, having completed my duties for the day I decided to take this unanticipated opportunity and vamoose.

 

René was pleased to see that I was able to leave early. He apologised and said that he needed to get to another client after me. He seemed a little excited. “My next passenger will be Rod Stewart.”

 

“Old Rod? Say ‘hi’ to him from me.”

 

“Really? You know Mr Stewart?”

 

I laughed. “Of course, all us guys from London know each other.”

 

René “hot legged” it away after dropping me off. I bet Rod was at the Schweizerhof.

 

The next morning it was silent Ralph who collected me. I guessed that René was still out on the town with Rod. Some guys have all the fun, eh?

 

But René (sans Rod) was waiting for me after my working day ended. However, he sat resolutely in the driver’s seat and I had to open the passenger door myself. This bizarre and unprecedented behaviour continued as he avoided saying hello in response to my cheery greeting.

 

After driving 15 minutes in complete silence I figured it was time to venture a query; “René, is anything wrong?”

 

René exploded; “Wrong? Wrong? Well if you can say that having a very embarrassing professional experience is wrong then, yes, something is most wrong!”

 

“Is this anything to do with me?” I asked, completely mystified.

 

“Sir, I collected Mr Rod Stewart last night …”

 

“Yes?”

 

“And I told him that it was my great pleasure and honour to drive a good friend of his self around Zurich. When he asked who, I told him your name.”

 

“Ah.”

 

“And it seems he has never actually heard of you, not even when I wrote your name down on a piece of paper for him to read.”

 

I resisted the temptation to brazen it out and claim that Rod’s memory was not as good as it used to be. “You see, in Britain we have the tendency to use sarcasm as a way of telling little, small jokes …”

 

“Oh yes, I am hearing of this, what you are calling ‘the lowest form of humour’!”

 

“Well, I have always considered it something of an art form.”

 

“I was very much so embarrassed. I doubt Mr Stewart will choose to use our service in the future.”

 

As another pal of Rod’s once said, sorry seemed to be the hardest word. I felt it best to keep quiet and the journey concluded in reproachful silence.

 

The next day was Saturday and I had developed a weekend routine where I would walk down the eastern side of Lake Zurich, through the area known as the “Gold Coast”. My boss had asked that I didn’t travel too far away, just in case, and these long walks were a good antidote to being cooped up in the datacentre. Back at the hotel I had a message waiting from London. A new modem was being couriered to the datacentre to replace the flaky one. The only time it could safely be replaced and tested was on a Sunday morning; could I get there for 09:00 am tomorrow?

 

I immediately called the limousine service. The lady there told me that René was unavailable and Ralph could only collect me later, getting me to the datacentre at 10:00. No, this wouldn’t do, my operator’s code of honour meant I had to be there at 9. And was René really unavailable, or still sulking about the incident with Rod Stewart?

 

I didn’t have time to waste. I remembered my boss mentioned something about a small train station, not too far from the datacentre. I had seen the station as we drove by, it was probably little more than a mile to walk. I headed to the main Zurich station and quickly established that the train took 25 minutes and ran from early on Sunday morning. I could get the 08:00 am service and be at my desk well before 9. It may not be the downtown train - and I definitely wasn’t sailing - but I would not let the side down.

 

Bright and early, the next morning I got the train out into the countryside. Alighting at the last stop (or the “Endstation” as the announcement helpfully pointed out) I was the only passenger in sight. Forest ran in every direction with several paths leading into the woodland, but these were no more than hiking trails and none of them heading in the direction of the datacentre. I then noticed a bus stop, hurrah! However, with it being a Sunday morning I would have to wait 90 minutes for the next one.

 

No problem, I would just have to walk alongside the road. There was no pavement and it was one of those rural roads that the Swiss seemed to enjoy racing along while they anticipated hurtling round the next corner - but I was determined to complete my mission.

 

As I walked up the lane from the station, I felt the first drop of rain. Not any old standard drop of rain - a big, fat, juicy dollop of water which landed squarely on the top of my head. No worries, I had a kagoule with me and it was 100% showerproof.

 

Out onto the main road I started to march along. The rain began to get heavier; looking up I saw that the sky had suddenly turned very black. I sheltered under a tree for a few minutes but realised that it was not actually providing much protection and, anyway, I did not have time to waste. The rain then intensified and started to shred the leaves on the trees and scatter debris onto the road. My kagoule had long since given up being showerproof and I was soaked. Was it too late to turn back to the station? No, I was surely now closer to my workplace and I struggled onwards.

 

The deluge was now biblical; drains by the sides of the road became inundated and the tarmac flooded. I was edging along a slippery grass bank that bordered the road when I heard the sound of a vehicle approaching round the bend. The delivery truck, if he saw me, didn’t slow down. Despite me being a metre or so from the road, the solid wall of water produced from under his wheels peaked at shoulder height and nearly knocked me off my feet. I stood there choking, gasping and thinking that Rod Stewart, possibly, should take some of the blame for this.

 

I could not have been more sodden. Even inner layers of clothing were now wringing wet and my jeans clung to me and started to chafe. Walking up the slip road to the industrial estate I could not manage full movements and my gait resembled C-3PO as I tried to avoid any friction in my joints.

 

Stepping into the hallway I saw the small parcel waiting for me. Well at least the modem was dry. I picked it up and lumbered through the air-lock security door. I staggered into the kitchen and gingerly removed my kagoule, which quickly made a large puddle on the floor after I draped it on a chair. Looking around I could only find a tea towel with which to dry my face and hands. I shambled into the office area and glanced at the clock; it was 09:00 precisely – despite everything I was actually on time!

 

Through the window I could see the rain apocalypse continuing. The farmer was wrestling with the gate to one of the pens, it suddenly released and water gushed out. One of the pigs made a break for it (possibly using the front crawl) and the farmer tried to corral it back.

 

There was a system console in the office area and it showed a tape mount outstanding. I cautiously descended the spiral staircase into the machine room and squelched my way across to the tape drives. It was at that point I remembered that computers don’t like water and humans aren’t that keen on free-range electricity. I went back upstairs and got the tea towel. The farmer was still flailing and fighting to get the pig back in the pen.

 

Back in the machine room, I mopped my footprints off the floor but, as I moved around, I just left small pools behind me. We had a desk in the machine room for the master console and a phone. I sat down and took off my shoes and socks. Every fourth or fifth floor tile had dozens of ventilation holes. Maybe the draught would be enough to dry out my clothes? No, after a couple of minutes it was clear that the air was too cold and the air-stream too feeble.

 

Looking up I noticed the large air-conditioning unit. These monsters lined the walls of the datacentre and were what made all the racket as they sucked air through the room. Maybe this could help? I pushed the chair up against the unit and stepped up to see how this thing worked. Inside there was a large sloping metal mesh, backed by sponge foam. The air was still basically cold but it was being sucked in at an incredible rate. The only problem was that the mesh was covered in a layer of dust and fluff. Well, better dusty shoes than damp ones. I took my socks and shoes and placed them carefully on the mesh. They clung rigidly to it, as if the machine would devour them if it were not for the protective cover.

 

Back upstairs I resumed my vigil from the window. The farmer seemed to have things under control to a certain extent. He was now securing loose window shutters, however the rain continued to batter his broad-brimmed hat and cascade down his long overcoat.

 

My problem now was that the conditioned air in the office and machine room was making me seriously cold. There was no obvious source of heat and I did not know how much longer I could stand there with my teeth chattering. Thinking of it, where were my London colleagues? I had been onsite for nearly 20 minutes and there had been no phone call. I knew I couldn’t miss the call as the external line rang in both the office area and downstairs on the phone at the operator’s console.

 

Ping! Another tape mount request popped up on the console. I padded downstairs and did the bidding of the computer. Time to check on the progress of my socks and shoes. I jumped up onto the chair and inspected the top of the machine – the socks were now bone dry and the shoes were very nearly there too! It could have only been 5 or 6 minutes at the most. Attacked by nature, saved by a machine! I made a sudden decision. I delicately peeled off my clothes (down to my underpants, anything more would have been plain weird) and arranged the soaked items over the ventilation area. With my jeans, sweatshirt and T-shirt I completely covered the mesh (my shoes were still there too). There wasn’t a square inch left free; it was as if the unit had been designed with my emergency in mind.

 

It was now definitely too cold to linger in the machine room. I went back up to the office and got the tea towel to rub myself down (making a mental note to replace it as soon as possible). I decided to call the London office, but there was no reply and I started to think of acid put-downs that I could use once they deigned to call me – “what time do you call this” or “forgot to set the alarm clock, did we?” Yes indeed, in my rage I would not hold back.

 

The farmer was now nowhere to be seen and, likewise, the livestock seemed safely sheltered. All was quiet on the porcine front. I, for my part, had invented turbo-charged clothes drying - albeit with the aid of a piece of hardware costing twice my annual salary. Not bad going, especially considering how badly my Sunday morning had started. I was so happy that I danced a little jig of triumph, tea towel on my head. Nobody could see in through the mirrored windows, so it really wouldn’t have mattered if I had dressed up as Carmen Miranda and salsa-ed through the executive suite.

 

Such was my happy reverie that I hardly noticed the vehicle pulling into the carpark. However I stopped mid-boogie when I saw that he chose one of our 2 allocated company spaces. “Hey mister,” I thought “that is clearly labelled for us, get your own spot!” I pressed up against the window to watch him scamper from his car, around the corner to the front of the building. I held back, perhaps he was employed in the ski workshop and would enter via our shared hallway. In that case he would, briefly, have a view through into our office and he might, briefly, see me there in my briefs.

 

I cautiously peered around the corner from the far end of the corridor that led to the airlock door. To my horror, the man was already shuffling through it. Somehow he had opened our security door! Putting the integrity of our data first (and recalling as much war film dialogue as I could muster) I stepped out into the passage – “Nein! Es ist verboten!” The man looked understandably perplexed.

 

“Was? Was ist verbot?”

 

“Es!” I threw my arms wide to indicate the general area around me. “Raus! Raus!” I pointed back over his shoulder. He looked behind himself and then back at me. Confused, he started to talk German that was way beyond my Sunday afternoon viewing habits. “Sprechen Sie Englisch?” I demanded.

 

He initially shook his head but then said, “ah - Jah!” Frowning with concentration, he cleared his throat and loudly announced - “feed the birds, tuppence a bag!” He looked pleased with himself and nodded as if he had just proven a point. I started to fear that I was dealing with an escaped lunatic, however, at this point I did feel that my lack of clothing was somewhat undermining my authority on the matter.

 

I was standing next to the doorway that led to the spiral staircase and down to the machine room. “Ein moment, bitte”, I said, using a phrase that I had picked up in the last few weeks which vaguely seemed to mean “can you just give me a second?”

 

I flew down the stairs and across the computer room. He followed, prompting me to spin round and insist “Nein!” accompanied by arm gestures to emphasise the point. Oh, sonny Jim, once I get my clothes on you’ll be sorry, you’ve picked on the wrong computer operator today! I shot across to the a/c unit and, as I approached it, got the first hint that events were not quite as unconnected as they might appear. There was a control panel on the front of the air-conditioner with a large power switch and some LEDs. I had never seen these lights display anything other than benign yellow or positive green. Now the panel was a field of angry flashing red.

 

I would have to worry about that later; for now I just needed to get dressed. I scooped up my garments and disappeared behind a modem rack (I felt some modesty was in order). Pulling on my clothing I discovered only the groin area of my stone-washed jeans remained damp, however it wasn’t the time for quibbling over minor details. As I dressed I could hear the intruder muttering, “ach, hier ist die Problem …” Looking around the corner I saw he was now in front of the aircon and tapping at the panel. Oh no, this was too much. You take our parking space, invade our datacentre, trespass in the machine room, now you have the nerve to touch the hardware!

 

As soon as I finished dressing, I marched out to confront him. It was then that I noticed he had a briefcase with him, containing screwdrivers and various other engineering tools. He started to talk rapidly. After a minute I realised the conversation involved something called a “Klimaanlage” – whatever that was – and it seemed to cause him some amusement. Before I could think of something to say, the phone at the console desk rang.

 

It was my boss from London. “Hi, I didn’t think you’d be there yet, but I’m glad I caught you.”

 

“Yeah?” I mumbled.

 

“We’ve got alarms going off on the environmental stuff in the datacentre. It’s weird because we’ve not had any issues before and now we get two at the same time.”

 

“Really, which two?” I heard myself say in a somewhat disconnected monotone.

 

“One for excess moisture, the other is a blockage on the intake for the air-conditioning machine.”

 

“Yes, right, I see …”

 

“Anyway, this is just a heads up ‘cos we’ve got all this stuff remotely monitored and it sets off an alarm here and also calls out the local Swiss firm who do the engineering support for us. One of their guys will be on his way and he should be with you in the next 20 minutes or so.”

 

“I think he’s here already …”

 

“Really? That was quick, great service, eh?”

 

“Yeah, terrific …”

 

“Are you ok? Your voice sounds a bit strange. Never mind, I’ll give you a shout later when we are ready to start the work with the modem stuff.”

 

I put the phone down and indicated to the engineer (who seemed to accept my sudden lack of hostility with good grace) that I was going back upstairs.

 

Five minutes later he joined me in the office area. He ignored my “I don’t understand German” spiel and rattled through his findings. I picked up the gist. Clearly the air conditioning unit wasn’t designed as an ersatz clothesline and completely blocking the air intake had, predictably, caused issues. Likewise, the amount of water that I had dragged in with me had triggered alarms. Fortunately, my proactive role in events appeared to be missing from his written report; neither did “gratuitous nudity” show up (although I wouldn’t know it if it did and I figured there would be a very specific German phrase for that activity).

 

He made a call to his office and boldly reported that “all is OK!” while looking at me with a big smile on his face. He went to get a coffee from the kitchen and returned to look out of the window. “Britisch wetter – very rain!” he demonstrated his mastery of English in a way that I could only dream of when challenged similarly by German. I watched as he drove out of the carpark, taking my dignity and a carbon copy of his report with him. The rain continued to teem down and the parking lot now had several streams gushing across it.

 

Where were my colleagues? It was now 09:45 and there had been no word on the actual job in hand. At least I could put the new modem in place and be ready for when we could test it. I did this and got a coffee and slowly sat down (the damp groin area of my jeans was still causing some discomfort).

 

Eventually, just after 10:00, the phone rang. It was my colleague from the network team.

 

“Hello Mark. Ready to get that annoying modem replaced?”

 

“Ready and waiting … in fact, waiting for a whole hour, if you don’t bleeding-well mind!” (I told you I’d be angry).

 

“What? Didn’t they tell you to get there for 9?”

 

“Yes they did, and I was here at 9 on the dot and I’ve been waiting ever since.”

 

“So you’ve had to wait …” he paused to check the time “… two whole minutes?”

 

“No, one hour. One hour and two minutes - it’s 10 o’clock now.”

 

“No, it’s 9 o’clock. Be there at 9 o’clock, that’s what you were told.”

 

“9 o’clock Swiss time?”

 

“No, 9 o’clock UK time …”

 

“Which is …”

 

“… 10 o’clock Swiss time,” he helpfully completed my sentence.

 

Glancing out of the window I could see the weather had changed as abruptly as it had 90 minutes earlier. Suddenly there was not a cloud in the sky and the sun shone brightly.


The new Control-M Getting Started Guide is now live on the Control-M Communities site!

 

Whether you're a new Control-M user or a long-time user, you can benefit from our Getting Started Guide full of easy to consume information and links to more detailed how-to documents and videos.

 

Some key features of the Control-M Getting Started Guide include:

  • Navigating Control-M video – Control-M subject matter expert Robby Dick welcomes viewers and walks them through the Control-M interface in a brief introductory video.
  • What Control-M can do videos – A library of high-level, 2-3 minute videos that highlight key Control-M capabilities.
  • ‘Connect-With’ video series – For a deeper dive into the more technical details of Control-M, you can watch our best-in-class Control-M Support team’s Connect-With video library on YouTube.
  • Communities discussion forums – You can post questions directly to the Control-M Communities site, start a conversation, or suggest new content that would add value for users of the guide.

 

You can access the Control-M Getting Started Guide, here, on the "Learn" page on the Control-M Communities site.

 

This is a community driven resource so please direct any feedback to Criss Scruggs.


So there we were; pioneers on the brave new Isle of Dogs. Shift work may have been playing havoc with our circadian rhythms but the only hardware failure that really fazed us was if the coffee machine went on the blink.

 

I watched as Canary Wharf emerged from a hole in the ground and waited for the painfully slow Docklands Light Railway to develop (replacement bus service only at weekends). The local population quickly realised that we were but the initial wave of island invaders. We were welcomed into the local pubs but left in no doubt that we were the outsiders, the arrivistes.

 

It did help that, in our number, we could call on several locals to ease any nascent tension. Dave was from Greenwich, Harry from Chalk Farm. Both were shift leaders and more than happy to pass on their technical knowledge. In fact, Harry was in the process of joining the Operations Analysis group and night shifts would soon be a distant memory for him. Harry, despite being something of a rough diamond, knew his stuff. He had recently discovered SAS analytical software and was busy producing graphs for everything that moved in the datacentre.

 

On the mainframe itself we had recently started running a product called CA-7. I knew what CA-1 was, that was our tape management system. Was CA-7 something similar? No, CA-7 was a batch job scheduler. I had no idea what this was but I soon realised that its introduction had caused more ripples than the average started task. Anyway, for now it sat there, resolutely doing nothing at all while its implementation was debated.

 

The first obstacle was that we needed training on this new product. Just getting people off shift and into a 9-5 training course proved hard enough. Eventually a date was agreed and we were joined by the consultant who gamely attempted to train us. He explained that CA-7 executed jobs and, upon successful completion, would automatically run the next one in sequence. But wait, there’s more - CA-7 could run variations in the schedule, according to the calendar. For example, if you had special reports that ran on the last day of the week, simply use a CA-7 calendar to submit these only on those dates.

 

This started to sound a lot like my job. CA-7 was, in fact, me - but without the dangerously high levels of caffeine input. Ah, but no, not really. Could CA-7 deal with program failures, lack of disk space, network issues, corrupt files? No, the operator needed to be there, vigilantly watching every step of the way and quickly putting the whole thing back on the rails when needed.

 

Our teacher explained the fundamentals of scheduling, articulated in ways that I still use today. As the course wore on, I thought that it might be better to accept this advance as part of my future. At lunch I sidled up to Dave. “Well, what do you think of CA-7?”

 

“Not a lot. Anyway, it doesn’t matter, I won’t be using it.”

 

“Why not, it’s the future, isn’t it?”

 

“Job scheduling? I have a job and I don’t need it scheduling away. If you’ve any sense, you’ll avoid it.”

 

Dave felt that CA-7 was the “thin edge of the wedge”, that once it was accepted then our services would be surplus to requirement. I pointed out that although the mainframe was the majority of our work we also had plenty of other systems that needed our skills (half-a-dozen DEC systems, Wang Word Processors, various standalone systems). And somebody would be needed to configure the schedules in CA-7 regardless. Dave was unmoved.

 

“Listen, I didn’t spend the last 10 years learning how these systems work just for some Johnny-come-lately to appear and tell me we’re going to do it all differently.”

 

“But we’re doing a job that didn’t exist 20 years ago, what if our predecessors took the same approach?”

 

Dave shrugged, “I am happy with my technology. The next kind of technology will cost us our jobs.”

 

I spent the early part of the afternoon session worrying about Dave’s comments. My world revolved around the job; the idea that I was not personally central to the company’s future plans concerned me deeply.

 

By the time we reached the afternoon break I was relaxing slightly. CA-7 wasn’t exactly user friendly and it seemed more suited to complex batch environments where hundreds, possibly thousands of jobs ran. I asked Harry for his thoughts. He was, after all, our new Operations Analyst and surely more inclined to welcome new technology?

 

“Nah, can’t see the point of it. If I want to run a sequence of jobs automatically then there are plenty of other ways to do it without buying a specialist tool.”

 

“But what about the calendars?”

 

“Don’t worry, I know when it’s Friday. Listen, you’re going to spend months configuring this thing and then have the overhead of maintaining it and what does it really give you? Saving 20 minutes every batch isn’t going to change the world.”

 

So there we were, again. On one side my colleague thought he would lose his job to this innovation; on the other hand, our technologist doubted the return on investing time in it. CA-7 was friendless.

 

In the end (and after some negotiation) CA-7 was used to submit a new batch of some 20 jobs that could run around 01:00 am. As with many new systems, the code in the batch was prone to failure and required manual fixes and subsequent re-runs. “Told you” opined Dave, “load of rubbish.” But the code failures weren’t the fault of CA-7, I pointed out. Nevertheless, “CA-7 problem” would often appear in the shift log and the system remained on the periphery.

 

I didn’t realise it at the time but suspicion, doubt and general dislike are routine barriers to automation of all varieties. How to overcome them was a question for later in my career. For now, I simply added CA-7 to the bottom of my CV and got back to work.

 

Anyway, I had other things to occupy me - I was off to Switzerland.


After publishing this content as a Control-M Community document, I decided to open it up as a blog post for comments, so that Control-M Administrators and Operators can share issues, stories, troubleshooting nightmares, and thoughts on how the current User Daily process can be improved, and suggest ideas for enhancing it with today's Control-M capabilities.

I will share my thoughts, and I hope you share yours too. Enjoy and Share.

A>Gomes

 

Uncovering Control-M User Dailies Secrets - what a User Daily is, and how to define, run, and manage them

 

What is a User Daily

 

A User Daily is an abstraction that Control-M uses to group SmartFolders/Folders of jobs under a common entity name, which the Control-M user can optionally define at the moment the job and folder objects are created. Its main intent is to split the ordering overhead of the SYSTEM daily into multiple smaller loads spread across the batch window: each named group of folders is ordered as a single operation, at its own time during the production day, up until the next NDP (New Day Procedure). The ordering itself is performed by jobs, defined as part of the SYSTEM daily, that run the ctmudly utility with the User Daily name as a parameter.

 

The User Daily name is 1-10 characters long, and it can also be defined or updated after the folders or jobs are created and checked in to Control-M EM.

 

Because it is only an abstract name, there are unfortunately some challenging steps to address before you can take advantage of this capability. To have a grouped set of job folders ordered automatically, you must define a User Daily job that runs the ctmudly utility, adjusted to execute at the specific time the user or department requires.

 

How to plan and define a User Daily - shedding light on the documentation

 

  • First: the User Daily name is not subject to any naming or security standard, and as a Control-M EM user there is no easy way to discover that it exists. From the job planning perspective, it is also not obvious whether a new User Daily is really required or whether it should be avoided, because the daily concept is implemented through the "Order Method" property set when defining or updating a folder in the Planning domain, and that property is very easy to forget.

The existence of the Order Method property, and the meaning of its options, is almost always overlooked when job folders are being defined, because it is governed by a Control-M EM system parameter named AutomaticOrderMethodByDefault. Its default value is 1, which means newly defined Folders/SmartFolders belong to the SYSTEM daily, i.e. Automatic (Daily).

AutomaticOrderMethodByDefault
Determines whether the default Order Method for folders being created is automatic or manual. Valid values: 1 = Automatic Order Method (Daily); 0 = None (Manual Order). Default: 1
  • Second: once a User Daily entity is defined, you need to validate whether a User Daily job for it is already active; otherwise the Control-M Administrator will need to define an extra job that runs the ctmudly utility with the daily name as a parameter, so that your job folders are scanned automatically and eligible jobs are placed into the Active environment at the right time.

As you can see, the process of splitting the daily load into small dailies, supposedly flexible enough for any user to create their own daily load on the Control-M platform, is not comprehensive enough; it should certainly be enhanced, simplified, and better managed in future BMC Control-M releases, for example as part of the CCM management capabilities.

Steps required to define, plan, and run a User Daily:

1) Create a folder, select "Specific User Daily" from the "Order Method" options list, and provide a 1-10 character name.

a) Create a folder and select "Specific User Daily"

When defining new job folders in the Control-M site, if the Order Method property is not noticed or is left untouched, it can potentially cause problems by overloading the SYSTEM daily with too many folders to be scanned and jobs to be placed into Active during the NDP.

Order Method
Defines the method for ordering the entity as one of the following:
  • Automatic (Daily): When set to Automatic, at the same time each day (known as New Day time), each Control-M/Server runs a procedure called New Day. This procedure performs a number of tasks, including scheduling the day’s jobs and running maintenance and cleanup utilities. The New Day procedure orders the folder or folder jobs.
  • None (Manual Order): The folder is not automatically ordered.
  • Specific User Daily: Identifier used to assign the folder to a specific User Daily job. The User Daily Name is ordered at a specific time of the day. For load balancing purposes, the User Daily jobs are scheduled for different times, throughout the day, other than the New Day time.

(Control-M 9.0.19.100 Control-M Documentation - Control-M - BMC Documentation )

b) Provide a name for the desired daily

Please note that you only have 1-10 characters to provide a meaningful name for the daily, or you can select from the dynamically built list of existing daily names associated with the job folders loaded from Control-M.

User Daily name

Defines User Daily jobs whose sole purpose is to order jobs. Instead of directly scheduling production jobs, the New Day procedure can schedule User Daily jobs, and those User Daily jobs can schedule the production jobs. Set User Daily Name when Order Method is set to Specific User Daily.

(Control-M 9.0.19.100 Control-M Documentation - Control-M - BMC Documentation )
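For teams maintaining folders as code rather than through the GUI, the two properties above appear together in a folder definition. The Python sketch below generates Automation API-style JSON; the folder, server, and daily names are invented examples, and the exact property schema should be verified against the Control-M Automation API documentation before use.

```python
import json

def make_folder(name, server, daily_name):
    """Build an illustrative folder definition assigned to a specific User Daily.

    All names passed in are assumptions for this sketch, not real objects.
    """
    if not (1 <= len(daily_name) <= 10):
        raise ValueError("User Daily names are limited to 1-10 characters")
    return {
        name: {
            "Type": "Folder",
            "ControlmServer": server,        # assumed server name
            # A daily name here corresponds to the "Specific User Daily"
            # order method described above.
            "OrderMethod": daily_name,
        }
    }

# Example: a folder "ACCT_EOD" assigned to an invented daily "ACCTDLY".
print(json.dumps(make_folder("ACCT_EOD", "ctmserver", "ACCTDLY"), indent=2))
```

The 1-10 character limit from the documentation is enforced up front, so an invalid daily name fails at definition time rather than at ordering time.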

 

 

2) Create jobs belonging to the newly defined folder so they become part of the desired User Daily, and check your jobs in to Control-M EM.

 

Once the job folder with at least one job is loaded to Control-M EM, the new User Daily name will be displayed as an option for selection in the "User Daily Name" list.

 

Now that we have added a job folder assigned to a user-defined daily, we need to define the User Daily jobs as part of the SYSTEM daily, so that the daily we defined will be scanned.

 

In this example, we define three daily jobs as part of a job folder assigned to the SYSTEM daily; this is the last configuration step required to get the User Dailies working properly. The User Daily jobs must run the ctmudly utility as the Control-M/Server user, and an agent local to the Control-M/Server is required to submit the ctmudly command for execution.

 

3) Create a job under the newly created folder and provide a job name, description, command, and Run As user. The Host/Host Group must point to the local agent, or be left blank.

 

 

a) Define a folder name and Order Method

b) Provide the mandatory job details

c) Adjust the appropriate time for each specific daily job
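To make the Command field concrete: each User Daily job essentially just invokes ctmudly with the daily name as its argument. The daily name below is an invented example, and the exact ctmudly options supported by your version should be checked in the utilities reference.

```shell
# Illustrative sketch of the command a UDLY_ORDER job's Command field might run.
# "ACCTDLY" is an invented User Daily name (1-10 characters).
DAILY="ACCTDLY"
CMD="ctmudly $DAILY"
echo "$CMD"
```

At the job's "From Time", this command asks Control-M/Server to scan the folders assigned to that daily and order the eligible jobs.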

 

Define User Daily Jobs

Choose Automatic (Daily) or SYSTEM

 

 

 

Now, after checking the ORDER_USER_DAILIES folder in to Control-M EM, at the next NDP the SYSTEM daily will load the UDLY_ORDER jobs.

 

The UDLY_ORDER jobs will be placed into the Active Jobs database and submitted to run at their planned "From Time". When the "From Time" arrives, each UDLY_ORDER job executes ctmudly with its specific daily name as a parameter; the job folders associated with that named User Daily are scanned, and the jobs eligible to be ordered are placed into Active.
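Conceptually, the pass that each UDLY_ORDER job triggers can be modelled as below. This is a simplified sketch, not the actual utility: the folder names, daily names, and the weekday-based eligibility rule are all invented for illustration.

```python
from datetime import date

# Invented folders, each tagged with the User Daily it belongs to and a
# toy scheduling criterion (weekdays on which it should be ordered,
# Monday=0 .. Sunday=6). Real Control-M criteria are far richer.
FOLDERS = [
    {"name": "ACCT_EOD",   "daily": "ACCTDLY", "run_days": {0, 1, 2, 3, 4}},
    {"name": "ACCT_EOM",   "daily": "ACCTDLY", "run_days": {4}},
    {"name": "HR_NIGHTLY", "daily": "HRDLY",   "run_days": {0, 1, 2, 3, 4}},
]

def run_user_daily(daily_name, today=None):
    """Return the folders this daily would order for 'today'.

    Models a ctmudly pass: scan only the folders assigned to the named
    daily, and keep those whose scheduling criteria match the date.
    """
    weekday = (today or date.today()).weekday()
    return [f["name"] for f in FOLDERS
            if f["daily"] == daily_name and weekday in f["run_days"]]
```

The point of the model is the split: the SYSTEM daily no longer scans all three folders itself; each named daily scans only its own subset, at its own time.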

 

I hope you all like it and find it helpful.

 

My Best

Adriano Gomes



Control-M is continuously enhanced to better serve the needs of developers, business users, and operations. Additional enhancements benefiting these groups were made in March 2019 with Control-M 19, and this article will focus on just three major feature sets among the many delivered with this version. Let’s review them.

 

1. Enhanced Control-M Automation API 

For customers embedding Control-M workflows in their CI/CD pipeline, Control-M Automation API continues to evolve to give them all the control and ownership they need to accelerate application delivery.

 

Configuration Services

If you are part of the development team, your mandate is delivering better applications faster. Because business applications are very often made up of workflows (dependent jobs and tasks that must execute in a specific order), we assume you are taking advantage of Control-M Jobs-as-Code. If so, you are embedding the orchestration of those workflows as a code artifact as early as possible in the SDLC, shifting build and test activities left. But if you think about the overall SDLC process, you don’t want to just build, run, and test application workflows; you also want to be able to configure the environment where those workflows are going to run.

 

What Control-M 19 provides on top of Jobs-as-Code capabilities is the ability to configure and secure a Control-M environment through Control-M Automation API. In a self-service and fully automated fashion, you can define authorizations for roles, users, and LDAP groups, and manage "run as" users. With that, you get more ownership over all the processes involved with the development of your applications.

 

Examples:

 

  • Automatic onboarding – if you are the administrator of your development team and a new developer joins, you don’t need to wait to get them on board. You can automatically and dynamically onboard the new developer to Control-M, assigning the proper role or authorizations to control access to Control-M resources through Automation API.
  • Manage jobs on any machine – you can manage “run as” users, so you and your developers get immediately authorized to manage and run specific jobs (e.g. Windows jobs) and orchestrate your application workflow on any machine.
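The onboarding example above could be scripted as part of a CI pipeline. Everything in the sketch below is a hedged illustration: the endpoint path, payload fields, and privilege names are invented, not the documented Control-M Automation API schema, which should be consulted for the real service names.

```python
# Hedged sketch of automating role authorization over an HTTP API.
# The endpoint path and payload structure are illustrative assumptions only.
def onboarding_request(base_url, role_name, ldap_group):
    """Build the request a CI script might send to authorize a new role."""
    return {
        "method": "POST",
        "url": f"{base_url}/config/authorization/role",   # hypothetical path
        "body": {
            "Name": role_name,
            "LDAPGroups": [ldap_group],                   # invented field
            "Privileges": {"RunJobs": True, "EditJobs": True},  # invented
        },
    }

req = onboarding_request("https://ctm.example.com/automation-api",
                         "dev-team-role", "cn=devs,ou=groups")
```

Building the request as data first, then sending it with your HTTP client of choice, keeps the onboarding step easy to review and replay from version control.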

 

Deployment Services

When it’s time to push your application from the development to the production environment, whether you are a developer or a DevOps engineer, you may want to automate the deployment process. The Control-M Automation API deploy descriptor not only allows you to automatically move application workflows across staging environments; it also lets you change application workflow properties so they comply with the Control-M destination environment they are being moved into.

 

Control-M 19.1 enhances the deploy descriptor by including any job type (built through Application Integrator) in the deployment mechanism, thus expanding automated deployment to more application workflows.
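The idea behind a deploy descriptor can be modelled as a set of property-substitution rules applied to job definitions in transit between environments. The sketch below is a conceptual simplification: the rule format, property names, and values are invented and do not match the real descriptor syntax.

```python
import copy

# Invented substitution rules: when moving from dev to prod, swap the
# environment-specific values of these properties.
RULES = {
    "ControlmServer": {"ctm-dev": "ctm-prod"},
    "RunAs": {"devuser": "produser"},
}

def apply_descriptor(definitions, rules=RULES):
    """Return a copy of the definitions with per-environment values swapped."""
    out = copy.deepcopy(definitions)

    def walk(node):
        if isinstance(node, dict):
            for key, val in node.items():
                if key in rules and isinstance(val, str) and val in rules[key]:
                    node[key] = rules[key][val]   # rewrite matched property
                else:
                    walk(val)                     # recurse into nested jobs

    walk(out)
    return out
```

Working on a deep copy means the source-environment definitions stay untouched, so the same artifact can be promoted to several destinations with different rule sets.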

 

Provisioning Services

In addition to configuration and deployment, Control-M 19 exposes provisioning services to developers, so they can consume Control-M in a dynamic way and run only what they need, at the pace they need, when they need it. Now they can provision a full Control-M stack, including specific plug-ins, and perform HA-related operations.

 

This capability is even more relevant when developers build their applications with containers and orchestration platforms such as Docker, Kubernetes, and Mesos. Containers are ephemeral: they can be generated, stopped, destroyed, and rebuilt at high speed. This new way of deploying applications does not require a dedicated Control-M environment; rather, it requires the Control-M infrastructure, as well as the other objects the application needs to run, to be instantiated immediately instead of going through tickets and change-management cycles.

 

If we consider that workflows are an integral part of applications, allowing them to run and deliver value as soon as the container is deployed, we can understand the importance of making Control-M part of the containerized approach.

 

Provisioning of the Control-M stack enables two use cases that we call “ephemeral” and “rehydration”, names first used by one of our customers.

 

  • Ephemeral – provisioning of a full Control-M stack on a server or container, for dynamic application workflow orchestration.
  • Rehydration – provisioning of a full Control-M stack on a server or container, replacing an older version. In this scenario our customer, who has a mandate to refresh all containers periodically, replaces the old container with a new one that includes a newer Control-M version, instead of upgrading Control-M in place.


2. Enhanced Self Service

If you are a business user, you likely want to get immediate answers related to your business application workflows, and you want to get those answers on your own. Control-M Self Service is what you need to obtain visibility, control, and responsiveness in a context you can easily understand.

 

Starting with Control-M 19, business users can use viewpoints as part of Self Service. Viewpoints further enhance visibility, control, and comprehension by focusing on relevant and frequently accessed information. Users can define private viewpoints by filtering on specific users, exceptions, or other workflow properties. Viewpoints can be used ad hoc, saved for ongoing use, or applied to historical data. They can be organized in a hierarchy view or a new tree view:

 

  • Hierarchy view – supports an easy understanding of how workflows are linked to the business they serve, and allows zooming in and out depending on the level of detail you want to see. This view helps you immediately understand the business impact of a workflow failure.
  • New tree view – supports an easy understanding of dependencies between jobs and their statuses. It is most relevant when you want to navigate forward to see how your job’s failure will impact downstream jobs, or navigate backward to find the upstream failure that caused your current job to fail.

 

3. Enhanced Control-M Workload Change Manager

If you are part of the operations team, you are challenged to keep strict control and governance over the production environment. What makes your job harder is your organization’s increasing agility, with more developers working in a decentralized environment and taking more ownership of development.

 

How do you ensure that all developers adhere to operational standards when they code and maintain their application workflows through Control-M Automation API? How do you reduce failures and avoid rework for developers and operations?

 

To help with this challenge, Control-M Workload Change Manager has been enhanced with more automated control and enforcement through site standards. With Control-M 19 you can define more powerful and flexible site standards, and guide developers to follow them when building and releasing workflows.

 

This includes the following major enhancements:

 

  • New job attributes – site standards can now be applied to all Control-M job attributes except application attributes.
  • Conditional site standards – you can apply restrictions on one job attribute conditioned on another attribute’s value. For example, it is very common to use a naming convention for applications, to make them memorable and searchable. Suppose you want to align all application artifacts – including application workflows and jobs – to the same naming convention. You can use conditional site standards to enforce a job naming convention for jobs belonging to a specific application.
  • Must-have rules – you can force users to define certain values for multi-option attributes such as conditions, resources, notifications, variables, and more. For example, SQL query jobs have a database resource prerequisite. Must-have rules let you force users to link those jobs to the database quantitative resource, preventing them from running – and failing – if that resource is unavailable or consumed.
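A minimal Python sketch of what these two rule types enforce is shown below. This is illustrative code, not Workload Change Manager’s rule syntax: the field names loosely mirror jobs-as-code attributes, while the specific rules, application name, and the shape of the `Resources` list are hypothetical.

```python
# Sketch of a conditional site standard and a "must have" rule.
# Rules, application name, and Resources shape are hypothetical examples.
import re

def validate(job: dict) -> list:
    errors = []
    # Conditional standard: jobs in the "billing" application must follow
    # the billing naming convention.
    if job.get("Application") == "billing" and \
            not re.match(r"^bil_[a-z0-9_]+$", job.get("Name", "")):
        errors.append("billing jobs must be named bil_<name>")
    # Must-have rule: SQL query jobs must claim the database resource so
    # they cannot run (and fail) while it is unavailable or consumed.
    if job.get("Type") == "Job:Database:SQLScript" and \
            "db_connections" not in job.get("Resources", []):
        errors.append("database jobs must use the db_connections resource")
    return errors

job = {"Name": "load_invoices", "Application": "billing",
       "Type": "Job:Database:SQLScript", "Resources": []}
print(validate(job))   # both rules are violated
```

In Workload Change Manager the equivalent checks run automatically against every change request, so violations are caught before workflows reach production rather than after they fail there.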

 

Conclusion

Control-M has evolved, and continues to evolve, to solve the challenges of digital business. While infrastructure, data, and applications change as digitalization requires, Control-M supports those changes and simplifies the adoption of the technologies and processes that digital transformation demands. With its adaptive approach, Control-M helps developers, operations, and business users deliver innovation at the speed the business requires.


A long time ago, in an IT environment far, far away … I began my career in Computer Operations. Across the IT industry this band of brothers (and no few sisters) has frequently been re-branded with the words “information”, “systems” or “infrastructure” imposed upon it. However, the original title that I encountered is hard to improve on; Computer Operations Group, a small but essential wheel in supporting and supplying platforms for the business user.

 

Back in that distant galaxy, Computer Operations was often a miscellany of high-end activities (who wants to do an emergency point-in-time database recovery at 3 a.m.?) mixed with the mundane repetition of handling tapes and distributing printed output. At the heart of everything was the overnight batch. An art in itself, the batch was a constantly changing creature, often growing to the point where it threatened to impinge on that sacred resource, online business hours. The batch would plow on overnight, like a hulking cruise ship and (with a bit of luck and judgement) come dawn it would find itself safely berthed.

 

The batch run was mapped out in a paper-based checklist and your best chance of avoiding icebergs was to use the latest checklist version. My employer invested in a top-end Xerox Documenter (children, this was what desktop computing was meant to be before Apple and Microsoft performed their audacious heist) and I enthusiastically dedicated myself to maintaining the checklist on my outsized Xerox screen. The system was ideal for drawing flow charts and I frequently asked (i.e. bugged) our Operations Analysts if it was possible to shorten the total batch run time by making the flow more efficient. If an update job only had to wait on a few specific files then, under my watch, it would be run as soon as those new files were available.

 

My eagerness to optimise the batch was not entirely altruistic. True, we did only have a small window in which to resolve job failures (about 2 hours leeway on a weekday night) but I had quickly come to learn that I had my limits when it came to shift work. We could, on average, finish the batch by 4 a.m. This was the point at which I would hit “the wall” – a state of extreme tiredness that would see my shift leader banish me from the bridge area (where all the system consoles flickered on 3270 terminals) to the relative safety of the tape library or the print room. After 4 a.m. I would wander, zombie-like, struggling to replace the ribbon on a huge printer or trying to find a tape cartridge from the 7,000 available in the racks.

 

The ideal night shift, for me, was to sign-off the end of the batch at 03:59 and repair to the drawing room. Actually, we had no drawing room but what we did have was a coffee area with the comfiest sofas that I have ever encountered. Perhaps they weren’t actually the comfiest ever but all I know is that as soon as my head touched the armrest then I was out like a light. The datacentre was located high in one of the first towers to be built in Docklands, so I would drift off to sleep whilst watching nocturnal London twinkle.

 

All I needed was 60 minutes sleep and then I was ready to go again. That was just as well, because this was no holiday camp, no sir! The online systems were required to be restarted by 6 a.m. sharp. Databases needed to be made available followed by the ubiquitous CICS systems, the DEC-based systems fired into life and a range of other, less reliable, systems demanded various levels of assistance before they could face the business day. We even had an IBM XT PC (what for, nobody knew, it was certainly no Xerox Documenter).

 

By 8 a.m. we were happy to hand over to the day shift and head home, another batch put to bed, another checklist completed. However, our cosy world was about to be rudely disturbed by the constant roil of technology – and the blot on the Computer Operations horizon was something called batch job scheduling.


Hello Control-M Family,

 

We are happy to announce a new 'Meet The Champions' blog post series, to spotlight those awesome members who dedicate their time to help other members of our community. You might have seen them replying to your questions (or others'), and sharing wisdom to make this a better place to be. We thank them, and we feel it is time the whole community got to know them better! Spotlighted champions will also be invited to be a part of an exclusive community, where they can interact with other champions on improving the overall BMC Communities experience.

 

In our very first edition, Adriano Gomes from Brazil, talks about his experience with BMC Communities, personal life, and more!

 

 

 

Q. Do you remember how you were introduced to BMC Communities? What was your journey like?

I was in a hurry to answer my BMC Client Management prospect question related to Virtual apps and Certificates, I asked the questions, I got answered, and never came back to say thank you. What a shame!

 

I have found out that the right answers makes us Eternal. Thanks BMC Communities committed members.

 

Q. Tell us a bit about your work and goals?

I learned from an elder friend, “work for yourself”! I make my job “my own” every day and I take from it what is best for me.

 

My goals are not part of my Job, they are under my decision to sow. I am committed to what I decide to sow.

 

Q. What draws you to participate in BMC Communities?

Giving Back the years BMC invested on me! Taking people out of the mud of “not knowing the right steps to make things work” and shape my Blades.

 

Q. Did you make any new friends in BMC Communities? Do you have any stories to share?

Indeed, I am very new to Community, since 8/2018. Friends Not Yet, but some free will Followers!

 

I enjoyed to find the names of my BMC heroes and have my “Follow” requests being accepted by the Top Community Performers.

 


Q. Do you have any message for the new members of BMC communities?

Don’t keep your questions unspoken! Ask, Seek, and Help yourself out of the pain!

(For every one that asketh receiveth; and he that seeketh findeth; Luke 11.10 KJV)



 

 

 

Q. What  is your favorite movie(s)?

Kingsman: The Secret Service

(“Save the World is priceless!”)

 

Q. Who is the greatest player in your favorite sport?

Cristiano Ronaldo

 

Q. What was the best vacation you have had?

2018 first time to Porto Seguro (Bahia) by driving with the Family.  Lots of time to talk and smile together.

 

Q. How do you like to spend your spare time?

First (still learning and improving on this subject), Listen to My three women’s

Then, play Brazilian football with distinct group of friends and Watch Netflix Series, currently “Shadowhunters: The Mortal Instruments”

 

 

Q. If you could pick one thing that could be made better in BMC Communities, what would be it?

Native Mobile Community App!! (Console Messages, Direct Chat, Voice dialogs).

Android for me, please!


 

Thank You Adriano for all the wonderful work you are doing here!

Community members, please make sure that you visit Adriano's profile, and click 'Follow' (in 'inbox' if you wish to be notified on all activities) to be in touch with him, and be updated.

If you have had an interaction with Adriano that helped you, feel free to share with us in the comments below!


This is the final blog of our 5-part series in which we are answering questions attendees submitted during our live Control-M 19 launch webinar on March 26. In case you missed them, here are links to the previous four blogs:

 

  • Blog #1: Upgrade and version-related questions
  • Blog #2: Container and cloud-related questions
  • Blog #3: Questions on applications, file transfers and Control-M’s web interface
  • Blog #4: Questions on Control-M Workload Change Manager, Configuration Manager, and Automation API

 

Today, we are going to answer all remaining questions.

 

 

Q: Have you made Control-M recovery improvements with Version 19?
A:
Kafka is available for microservices. And, several web servers can run concurrently on different nodes to address high availability and load balancing, with a load balancer/reverse proxy placed in front of them.

 

Q: Control-M is becoming more widely used by everybody (for DevOps and agile development, etc.). The registration for user management is still a manual process though. Can this be automated?
A:
User management is available with Control-M 19 as part of Automation API.  If you are using LDAP, the association of LDAP to a role can be managed via API/CLI, allowing you to perform mass updates and automate these processes.

 

Q: Is Control-M’s reporting facility now web-enabled?  Do users have to have administrative privileges in this version to use/execute reports?
A:
The reporting facility is not web-enabled yet. It currently uses web technology that is embedded within the desktop client. We aim to make it web-enabled in a future release.  Users do not need to have administrative privileges to run reports.

 

Q: Can we see an example of the reporting facility?
A:
Here is a link to a video that will give you a look at the new reporting facility. It also explains how to migrate from the old reporting facility.

 

Q: We have installed Version 19, but cannot see the connection profiles listing. What should we do?
A:
Please open a case with Customer Support.

 

Q: Is it possible to pass parameters from the job to script? If so, how is this handled?
A:
If you are referring to moving parameters from one job to another job that is running a script, you have the ability to define variables (global, pool, local).
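As an illustration of the variables mechanism, the jobs-as-code fragment below defines a variable on a script job; in classic Control-M behavior, `%%PARM1`-style variables are passed to the script as positional arguments. The host, path, user, and value here are placeholders, not part of any real environment:

```json
{
  "DemoFolder": {
    "Type": "Folder",
    "RunScript": {
      "Type": "Job:Script",
      "FileName": "process.sh",
      "FilePath": "/opt/app",
      "Host": "app-host",
      "RunAs": "appuser",
      "Variables": [{"PARM1": "2019-06-30"}]
    }
  }
}
```

Pool and global variables extend the same idea across jobs, letting one job's output feed another job's script.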

 

Q: When will Control-M be RHEL 7 compliant?
A:
Control-M supports RHEL 7 for all components (EM server, Control-M server, agent, applications).

 

Q: Does licensing remain the same with Control-M 19?
A:
Our current solution package is called Control-M Platform.  Workload Change Manager, Workload Archiving, MFT and MFT Enterprise are available add-ons to this package.  Please contact your account manager for additional information.

 

Q: Does password encryption work with cold and hot backups in batch jobs?
A:
We need more details on this question, please comment below.

 

 

Thanks to everyone who attended the webinar and asked all these great questions! We hope the answers we’ve provided have helped! If you still have any questions on Control-M 19, comment below and we’ll get to work on it!

 

If you missed the live webinar, you can watch the recording here.


 

This is the fourth in a 5-part blog series in which we are answering questions attendees submitted during our live Control-M 19 launch webinar on March 26. In case you missed them, here are links to the previous blogs:

  • Blog #1: Upgrade and version-related questions
  • Blog #2: Container and cloud-related questions.
  • Blog #3: Questions on Applications, file transfers and Control-M’s web interface

Today, we are going to answer questions on Control-M Workload Change Manager, Configuration Manager, and Automation API.


Control-M Workload Change Manager

Q: Do site standards police Jobs submitted to Control-M "outside" of Control-M? If yes, are they cancelled at time of submission?
A:
If by “outside,” you mean jobs not being managed by Control-M, then no. If the jobs are running in Control-M, (whether they got there through traditional creation via the GUI or as-code through Automation API) then yes, site standards are enforced.

Q: If you do not have Control-M Workload Change Manager are you still able to apply site standards?
A:
No. Site Standards are part of Control-M Workload Change Manager.

Q: Is web-based monitoring and planning only available if you own Control-M Workload Change Manager?
A:
Monitoring is included as part of Control-M’s self-service (web) interface. Planning is part of Workload Change Manager.

 

Control-M Configuration Manager

Q: Are there any major enhancements to the CCM in Control-M 19?
A:
We have exposed some of the CCM functionality through Control-M Automation API (Rest & CLI), including:

  • User and LDAP management
  • Authorization (roles)
  • Agent, AP and Managed File Transfer upgrade functionality
  • “Run-as-user” management

Q: There’s a new service option in the CCM under the EM components. Where can we find some more information around that?
A:
Check out the Control-M 19 Documentation here for details.

 

Control-M Automation API

Q: Control-M Automation API did not previously cover all the functions available in the client GUI.  What coverage do you have now? 
A:
There are a couple of job properties that we are planning to address in a future fix pack.  We recommend taking a look at the Automation API online help where you can find the latest information.

Q: Is a conversion tool provided for Control-M Automation API code if an upgrade changes the look and feel of the code (field length or name changes)?
A:
We do not anticipate any changes that would require a conversion tool at this time.

We hope these answers help! If you still have a question on any of the topics above, comment below and we’ll provide an answer or include your question in an upcoming blog. Stay tuned for our final blog, in which we’ll provide answers to all remaining questions.

 

If you missed the live webinar, you can watch the recording here.


 

This blog is the third in a 5-part series in which we are answering questions attendees submitted during our live Control-M 19 launch webinar on March 26. In case you missed them, here are links to our first blog, where we covered upgrade and version-related questions, or the second blog, where we covered container and cloud-related questions. Today, we’re going to answer questions on applications, file transfers and Control-M’s web interface.

 

Applications

Q: What new features in Control-M 19 provide support for Hadoop?
A:
Control-M now supports Microsoft Azure HDInsight, version 3.6 and above. We are always looking to enhance this capability. If there are any additional feature suggestions, please raise a Request for Enhancement (RFE) with Customer Support.

Q: We use Oracle Cloud for both Financial and Human Capital Management. Are there any methods to streamline?

Q: Does Control-M 19 support integration with CyberArk Vault or Salesforce Cloud?

Q: Is Control-M 19 compatible with Oracle CC&B 2.7.x/C2M 27.7.x?

A: The previous three integrations are best done today using Application Integrator.

Q: Do you plan to upgrade Control-M’s database plugin in Version 19 to support Oracle Database 18 and MSSQL 2016/2018?
A:
While some of these are in the planning stage, we can’t commit to a timeframe. Oracle Database 18c is the equivalent of the 12.2.0.2 patchset, so it is part of the 12.2 release cycle. We currently support MSSQL 2016 and 2017. Version 2018 is not fully supported as SSIS 2018 was never implemented.

Q: Do you plan to display Oracle session IDs in Control-M database jobs in Version 19?
A:
We do not have plans to do so at this point.  In your same job you may be able to execute an additional query to print the session.

Q: Control-M’s current Cognos plugin is 32 Bit. Are there plans to update it to 64 Bit?
A:
We do plan to move to 64 Bit in the future (likely after Control-M 20 is released).


File Transfers

Q: Can Control-M Advanced File Transfer jobs be migrated to Managed File Transfer easily?
A:
Migration from Control-M Advanced File Transfer occurs automatically when installing Control-M Managed File Transfer.

Q: Can we upgrade from Control-M Advanced File Transfer to Managed File Transfer and retain our internal file movement functionality without getting the B2B add-on?
A:
Yes, you can.

Q: Does Control-M Managed File Transfer’s B2B module involve additional licensing cost?
A:
Yes, it is a priced component.

Q: Does the script have to be located on the same server where Control-M Managed File Transfer resides?
A:
We need more detail on this question to give an accurate response.  Please comment below to add additional detail.

Q: What encryption do you use for Control-M Managed File Transfer?
A:
Control-M Managed File Transfer supports encryption via SFTP and FTPS. In addition, files can be configured to be encrypted using your own PGP solution via the MFT job.

Q: Is Control-M Advanced File Transfer still supported in Version 19?
A:
Yes, Control-M Advanced File Transfer is still supported in Version 19.

Q: Will Control-M Advanced File Transfer also gain AWS S3 support?
A:
No, S3 support is not available in Control-M Advanced File Transfer.

Q: Does Control-M Managed File Transfer replace Advanced File Transfer licensing costs? Or, is it additional?
A:
Control-M Managed File Transfer is a priced component, independent of Advanced File Transfer.

 

Control-M’s Web Interface

Q: Can I do any type of forecasting though the Control-M’s web interface?
A:
Not currently, but we are planning to make this available in a future release.

Q: Can you limit the update of user views and site standards?
A:
Yes, you can.

 

 

We hope these answers help! If you still have a question on any of the topics above, comment below and we’ll provide an answer or include your question in an upcoming blog.

Stay tuned for blogs 4 and 5, focusing on:

  • Control-M Workload Change Manager, Configuration Manager, and Automation API
  • Miscellaneous questions

If you missed the live webinar, you can watch the recording here.

If you missed our first two blogs, you can read them here:
