
BMC Control-M

147 posts


Who needs information

When you're living in constant fear

Just give me confirmation

There's some way out of here


According to the Radio K.A.O.S page on Wikipedia, the event that inspired Roger Waters to write the lyrics of “Who Needs Information” was the 1985 miners' strike in Britain, during which a striking worker threw a concrete block off a motorway bridge, killing a taxi driver who was taking a working miner to his job. It was an example of how far people will go to pursue their monetary goals.


You can blame me for using Roger Waters, Pink Floyd, Tom Petty, Queen, or even The Matrix for monetary reasons as well, being an employee of a commercial software vendor, but if you are reading this blog there is a good chance that you, too, work for a commercial company with stakeholders, owners, or someone else who cares about revenue and profitability. You are expected to help your organization be successful and generate revenue.


Being successful means having an advantage over the competition, and the way to get that advantage these days is information. The more you know about your market, your competitors, and most importantly your customers, the greater the chances you’ll be successful.

But there’s a catch. The more information you have, the greater the challenge of analyzing it. Traditional technologies such as relational databases and data warehouses can no longer process the amount of data generated by social channels, gathered from online websites, or produced by machines — at least not in timeframes that allow you to use the results to drive relevant business decisions.


Technologies such as Hadoop and in-memory databases are becoming more and more popular these days. Big Data is no longer just a buzzword. If you search for use cases on the websites of the major Hadoop distributors (for example Hortonworks, Cloudera and MapR), you will find plenty of stories describing how companies are taking advantage of Big Data technologies, and Hadoop specifically, to become or remain competitive.


But Hadoop is not an island, and it will not replace the traditional database platforms to which all your business applications connect. Commonly, large amounts of data are processed by Hadoop and the results are then sent to a legacy data warehouse, or back to a mainframe if Hadoop is being used, for example, to reduce mainframe license costs. ERP systems such as SAP, Oracle E-Business Suite, PeopleSoft or others might be involved in the process as well, along with file transfers, direct database access, ETL or data integration activities, and eventually the business intelligence or analytics tools that expose the information to business users.


So how do you make sure that all these systems are in sync?

How do you monitor the process from beginning to end and ensure that you are meeting your deadlines?

How do you manage changes across all these systems, making sure everything is audited and compliant with your company standards and policies?

How do you provide business users self-service access to their parts of the workflow?


You need a solution that allows you to manage everything from a single point of control, to configure automatic recovery from failures so that manual corrective actions are kept to a minimum (including recovery from the point of failure to reduce downtime), and to define proactive SLA/deadline notifications so that potential delays or failures in critical services are identified early, while there is still time to fix the problem before the deadline is missed.


And how do you get all these capabilities and more? You select the leading workload automation solution in the market, the only one that can really handle all the applications and platforms you need. It even has an online open community that allows users to collaborate and share custom integrations they have created. Specifically for Hadoop, it offers a native interface that can eliminate scripting, homegrown integration workarounds, and the use of limited Hadoop-only schedulers such as Oozie.


If you want to learn more about Hadoop workflow automation, or about a centralized approach to automating any application or platform, simply click on the links above or below...


Now is also the time to register for BMC Engage, the largest BMC conference of the year, which will have a dedicated track for workload automation — while you can still get the early bird pricing.

Make sure to attend Joe Goldberg's session on Hadoop and the "Elephant in Your Computer Room" (Session #386)


If you are planning to attend Strata+Hadoop World in New York City this year, make sure to stop by the BMC booth and see a live demo from one of our experts.


There is plenty of information for you online (who needs it, anyway?), but if you have any questions or need more information, post them here as a comment.



Of all the projects I have worked on since joining BMC 14 years ago, Control-M Application Integrator is definitely one of the most innovative. For those of you who missed the press release announcement, it is a self-service web design tool that allows you to “teach” Control-M how to integrate with any application, easily and quickly: how to submit jobs to the application, how to monitor them, how to determine whether they completed successfully or with errors, and how to abort them if they need to be stopped. This is a first-of-its-kind self-service tool that allows designing new job types without any development skills.

In addition, the team also created an online open community, the Control-M Application Hub, where customers and partners can find, share and submit job types. This industry-first community allows users to reuse job types created by other community members and collaborate for the greater good.


Throughout the design and development of Control-M Application Integrator, we worked closely with customers and partners, many of whom evaluated the PreGA releases of the product and provided feedback. As I reflected on the release and how we were able to deliver this awesome technology (and yes, I am a bit biased…), I found myself focusing on four things that customers and partners were able to get out of participating in this process:


1. Your opinion matters

Every product or feature we develop is aimed at solving a business problem and providing value, and we need to hear from you what challenges you have and how we can help you solve them. When you share this information with us, we can make sure that the product covers your specific use cases and that it includes the features you need the most. That doesn’t guarantee that every requested enhancement will make it into the first release of a product, but when we hear consistent feedback from customers we take it into consideration when we build the product roadmap.
With Control-M Application Integrator we made various usability changes based on feedback from customers, and we added features that were originally considered for future releases — for example, enhancements to WebService security and to the output-handling functionality.


2. Your platform of choice

Our quality control team is extremely focused on testing products against use cases provided by customers, and on the platforms customers most commonly use. But when you install, configure and run a PreGA software build in your own environment, you increase the chances that defects unique to your specific platform or use cases are identified and fixed before the product is released to the market, making it much more stable. With Control-M Application Integrator we were able to address a number of such defects identified by customers, some of them in very early stages of the development process.


3. Your application

Control-M Application Integrator allows you to integrate Control-M with any application, commercial or homegrown. Customers that participated in the PreGA evaluation created job types for applications such as SAP Business Objects Data Services, Tibco, Oracle Human Capital Management (HCM), MicroFocus, Lawson, SalesForce.com, GlobalScape, EMC Networker, Teradata, QlikView, and other homegrown banking and telco applications. We expect many of these new job types to be shared with the community on the Control-M Application Hub in the near future.
What applications do you need to integrate Control-M with? To make sure you aren’t reinventing the wheel, be sure to check the Control-M Application Hub — and if you create a job type that you don’t see on the hub, share it!


4. Your time to value

Learning how to use a new product in parallel with its development, and testing it in your own specific environment, can dramatically shorten the time it takes you to start using it in production after it becomes generally available. Many of the customers that participated in the Control-M Application Integrator PreGA program are now in the final stages of production deployment; some are among the largest global banks and telco companies in the world, and others are leading companies from the retail, healthcare and utilities industries. They all understand that in order to remain competitive and grow, they need tools that allow them to innovate and automate, and that’s exactly what Control-M Application Integrator offers. Don’t wait to get started — it’s there for you too!



The Control-M product team regularly travels to visit customers around the world in an effort to understand our customers' ever-changing business challenges and needs, and to share news about the latest product releases as well as what’s to come. So next time we meet, ask us about the Control-M PreGA evaluation programs and how you can join. Help us help you! Get it while you can!


For more information about Control-M Application Integrator, visit www.bmc.com/integrate and www.bmc.com/hub.





Learn how to get the most out of managing Control-M jobs with Viewpoints and Job Versioning.


On Wednesday, July 22, 2015, James Pendergrast will discuss and demonstrate the following:

    • Using Viewpoints to manage your current jobs and active environment
    • Advanced features available in the Monitoring Domain
    • Using Archived Viewpoints to view job flows from previous days
    • Using Job Versioning to audit job changes
    • Using Job Versioning to recover jobs

Don’t miss a live demo of these features.  There will be a Q&A after.
Registration is now open here!


It's that time again - and there's not much time left before you miss it!  Chicago and Tampa Control-M User Groups will be meeting before the month is out.  Be sure to mark your calendar and RSVP for the meeting in your area.  It's a great way to meet and network with others in your area working with Control-M - compare your stories and successes and build your local "I have a question, can anyone answer this one?" network!


Here are details for both - again, don't delay - not much time left to get your name on this exclusive list!


Florida User Group



Friday, June 19th





2202 N. Westshore Blvd., 6th floor

Tampa, FL 33607

Map & Directions

*Free Visitor Parking


Register Here

Agenda at a Glance:

8:30 am Continental Breakfast, Meet-N-Greet

9:15 am President's Welcome & Opening Remarks

9:30 am Control-M Conversion/Migration

10:30 am Morning Break

10:45 am Group Discussion / Q&A Session (Part I)

11:15 am Application Integrator & Archival Tool

12:00 pm Hosted Lunch *Sponsored by GSS

12:30 pm Industry Insights - GSS

1:00 pm User Group Member Roundtable:

2:00 pm Control-M Version 9 Update

2:30 pm Group Discussion / Q&A Session (Part II)

3:00 pm Meeting Adjourns


Special thanks to Steve McCormick and GSS Infotech for sponsoring the user group and providing lunch for everyone!                                           


For questions please contact Jim Gingras (jim_gingras@bmc.com) or Loren Gross (loren_gross@bmc.com).



Click here to join the Florida Control-M User Group Community.



Chicago User Group

Tuesday, June 23rd





161 North Clark Street

Chicago, IL 60601

*There will be discounted parking available for those that plan to drive and park near Accenture.




To ensure you have a seat at this exclusive event email Robby Dick (robby_dick@bmc.com) or Jeff Sanderson (Jeffery.s.sanderson@Accenture.com).


Agenda at a Glance:

  9:00 am   Continental Breakfast, Meet-N-Greet

  9:30 am   Welcome & Opening Remarks, Accenture & BMC

10:00 am   User Presentation with Jeff Sanderson, Accenture

10:30 am   Morning Break

10:45 am   Group Discussion / Q&A Session Part I

11:15 am   Application Integrator Deep Dive w/Dave Leigh, BMC

12:00 pm   Lunch

12:45 pm   Group Discussion / Q&A Session Part II

  1:15 pm   Control-M 9 Update / Roadmap w/Robby Dick, BMC

  1:45 pm   Discussion on User Group Moving Forward

  2:00 pm   Meeting Adjourns


Special thanks to Jeff Sanderson and Accenture for hosting this user group meeting!


-by Joe Goldberg, Solutions Marketing Manager, Control-M Solutions Marketing, BMC Software Inc.


You’re implementing a new application and you have either downloaded a new Application Integrator job type from the Application Hub or built a job type for it. How is your life better than if you had not done that?



You can build a Connection Profile which contains all the general information about your application such as which server it’s on, which ports or libraries it uses and any credentials that may be required to run its jobs.


This means none of your jobs have to specify this detailed information and if it ever changes, you just update the connection profile instead of tons of jobs or scripts.
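To make the idea concrete, here is a hedged sketch of how a connection profile centralizes environment details so that jobs reference it only by name. All field names and values below are invented for illustration — they are not Control-M's actual schema:

```python
import json

# Hypothetical connection profile: all environment details live in one place.
connection_profile = {
    "SFTP_PROD": {
        "Type": "ConnectionProfile:FileTransfer",  # illustrative type name
        "HostName": "sftp.example.com",
        "Port": "22",
        "User": "dataops",
    }
}

# Jobs then reference the profile by name only -- if the host or port
# changes, only the profile is updated, not every job or script.
job = {
    "TransferDailyExtract": {
        "Type": "Job:FileTransfer",  # illustrative type name
        "ConnectionProfile": "SFTP_PROD",
    }
}

print(json.dumps({**connection_profile, **job}, indent=2))
```

The point of the design is the single level of indirection: a hundred jobs can share "SFTP_PROD", and a credential rotation touches one record instead of a hundred definitions.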


Your auditors and ultimately your management love you because you don’t fail audits and you don’t expose potentially sensitive and thus dangerous information.



When you build jobs, it’s IDENTICAL to the way you build jobs today. Grab the job type from the job palette and drop it into the flow wherever you need it. The forms ensure you specify correct application characteristics, because you can select from a list that is retrieved from the application in real time. And if you ever need help with a particular application, the information about which person or group to contact for support is part of the job type. If you use Control-M Workload Change Manager, you can even specify site standards so that these application jobs are built correctly and in adherence to your operational requirements.



When you are running these jobs and someone in the business tells you to hold all jobs for “application x”? NO SWEAT! Find, filter or create a viewpoint (your choice) that shows you all the jobs in the active environment for that application and hold them or delete them or whatever you need to do. Because Application Integrator created a new “job type” you can search for it easily. And if that same person in the business told you to change something in a bunch of those jobs? You can use Find and Update to do that quickly.



What about looking at output when something fails? Well, if your developers wrote their own scripts, they probably put the output somewhere where they can get it. The problem is the person analyzing the current problem may not know where that is or worse not have access to it. And each script could be different and your developers spent a lot of time writing that code. Tell them they don’t have to bother because Control-M will take care of it for them when flows are built using Control-M job types instead of scripts that run as “black boxes”.


Finally, considering you are probably seeing more and more new applications and they’re coming at a faster and faster rate, all the above benefits may sound better and better.


So why not visit the Application Hub today and grab a new job type for a new application you are implementing? If there isn’t one there, perhaps you can build it and help the next person who may need that very same job type. Remember, you have the technology!


The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC

-by Joe Goldberg, Control-M Solutions Marketing, BMC Software Inc.


MapReduce, Pig, Hive and Sqoop are the “legacy” Big Data applications that require workflow management, with Spark, Flume, Kafka and dozens of others arriving on a regular basis.  And to make matters more interesting, embedded workload solutions like Oozie and Sparrow don’t know how to handle anything outside their own world.   So how do you cope with complex workflows that include many of these technologies, especially since new ones keep arriving on the scene?


Control-M with Application Integrator is the BMC answer. Control-M provides core application integration out of the box for MapReduce, Pig, Hive, Sqoop and HDFS operations together with every major commercial platform, technology and application from the traditional enterprise computing world. And using Control-M Application Integrator, all other applications not included in the previous lists can be easily supported by building a job type with a simple web-based designer tool.  Or even better, operations and developers can look for job types that may have already been built by other users and shared via the unique Control-M Application Hub. This approach enables Control-M to deliver a complete workflow management platform that meets all your current and future requirements.


BMC Control-M has strongly committed to the Big Data market and is continuing its investment by:

  1. Platinum sponsorship at Hadoop Summit San Jose 2015
  2. Releasing Application Integrator
  3. Joining the Open Data Platform consortium


Stop by booth P9 at Hadoop Summit and learn more about the most comprehensive workflow solution in the Big Data and Enterprise market - BMC Control-M.



The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


If you are running multiple Control-M servers, including Mainframe and Distributed environments, you can control your workload flow between the environments using global conditions. For example, after a job completes on your Mainframe Control-M/Server, it can trigger a job on your Distributed Control-M/Server and vice versa.


On Wednesday, June 17, 2015, Froilan Reyes and Richard Talbert will discuss the following about using global conditions:


• The Global Condition Server (GCS) process overview
• GCS configuration
• Passing conditions between Control-M/Servers
• A demonstration of adding, using and testing global conditions between mainframe and distributed environments

Join us as we go over creating and testing global conditions and don’t miss the live demo. There will also be a Q&A.
Registration is now open here.


On Wednesday April 22nd 2015, Neil Blandford will demonstrate how to use Agentless technology to access unsupported platforms and reduce agent maintenance. 
There will be a live demo of Agentless setup for both SSH and WMI.  There will also be a Q&A after the demo.
Q: In my shop, WMI is typically locked down for security reasons — before we could use agentless on Windows we would have to address security concerns first. Can you share any known ways to secure the agentless/WMI setup?

A: WMI should be accessible by the configured Agent service user. In this respect you should treat WMI as you would any other monitoring/management system.
If you are unable to configure WMI for the remote host, you can always install an Agent.

Q: What user rights are needed for the agent ID to prevent security issues with agentless processes?

A: User rights are covered in KA395027

Q: Is BMC_ERROR_LEVEL a special environment variable?

A: This is used to pass the return value back to the output file (.dat) on the Remote Host.

Q: Is the passphrase anything we want to make it?

A: Yes, you can specify whatever you like. Typically, the longer the better.

Q: Can you explain the problems with "impersonation level" on remote hosts from Windows machines? For example, when running a script located on a central repository server, having a batch program access a shared folder, or printing the job’s output to a network printer. When the remote host is configured for WMI communication mode, such network resources may not be accessible.

A: Agentless is restricted by the Windows “double hop” security mechanism: by default, the OS won’t pass credentials from a remote connection on to a third machine. The machine will attempt an anonymous connection, which will usually fail.
It is possible to create a trust between the machines to prevent this (your Windows Administrator can do this); however, it’s preferable to use local resources in the command/script.

Q: Is there a way to set up the public key on the remote host via Control-M, or do we have to set the public key directly on the remote host by connecting to it as an admin and setting it up manually?

A: Yes, you can initially configure the run-as user to use a password, then create and run a job to place and configure the public key. Once you have this, you can re-configure the run-as user.
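The key-placement job described in this answer essentially appends the agent's public key to the run-as user's authorized_keys file with safe permissions. A minimal sketch of that step (key material and paths are placeholders; this is generic SSH housekeeping, not a Control-M utility):

```python
from pathlib import Path

def install_public_key(pubkey: str, home: Path) -> None:
    """Append a public key to the user's authorized_keys, creating
    ~/.ssh with safe permissions if it does not already exist."""
    ssh_dir = home / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth_keys = ssh_dir / "authorized_keys"
    with auth_keys.open("a") as f:
        f.write(pubkey.rstrip("\n") + "\n")
    auth_keys.chmod(0o600)  # sshd rejects keys with loose permissions

# Example (placeholder key material):
# install_public_key("ssh-rsa AAAA...example ctm-agent", Path.home())
```

Once a job like this has run successfully using the password-based run-as user, the run-as user can be switched over to key authentication.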

Q: Are there any additional considerations when setting up agentless processing in a cluster environment?

A: As long as the Agent looks at the VIP or Virtual Hostname correctly it will connect.

Q: If I have the need to run a system command on "thousands" of machines, are there any limitations to using agentless technology vs. an agent?

A: The scalability of your agentless solution depends mainly on the Agent, which will need sufficient resources available to handle thousands of connections in this case. The same is true of an Agent installation, however. The best option here is to test by gradually increasing the load.

Q: Does Agentless support file watcher jobs?

A: Filewatcher depends on many libraries to function, and at this time it’s not practical to copy them to the Remote Host to run, so Filewatcher isn’t available.

Q: When we have an issue with a Control-M Agent we run ctma_data_collector — is there anything similar for Remote Hosts?

A: The Agent data collector will gather information related to the Remote Host. When running the Data Collector, be sure to run it on the correct Agent — the job log indicates which Agent the job was run from.

Q: Is there a limit on the number of jobs that can be run on a Remote Host?

A: There is no defined limit in Control-M, other than resource usage (connections, for example).

Q: Can SSL be used with agentless?

A: Agentless uses SSH, WMI, SMB, and SFTP, depending on the configured options.

Q: Is it possible to use the SSH option with Windows?

A: Yes, you will need to configure an SSH Server to connect to.

Q: Where are the SSH keys stored?

A: The keys are stored encrypted in the Control-M/Server database.

Q: We have PROD and failover servers, and only one Control-M/EM is available at a time, but agents are running on both servers. The Remote Host shows only the prod agent in the list. When we transfer to the failover Control-M server, do we have to switch?

A: You can do this; alternatively, you can use an Agent that has both Control-Ms configured.

Q: Any recommendations on the best agents to use as the host, i.e. the Control-M server agent?

A: You can use either, depending on your needs and environment.

Q: Is there any way we can point the Remote Host at a hostname instead of at the agent directly?

A: The Agent manages communication with the Remote Host and provides the Control-M infrastructure that allows jobs to run. Jobs still have to be submitted to an Agent.

Q: Can a Remote Host be switched to an Agent and vice versa?

A: Yes, an Agent can be converted to a Remote Host via the Configuration Manager.
To convert a Remote Host to an Agent, remove the remote host and install an Agent on the machine.

Q: What are a few of the things you would NOT be able to do with remote host that an actual agent would provide?

A: Control Modules require the Agent installation to be able to run.
The Agent provides the option of a persistent Connection in firewall environments.

Q: Is there any option to trigger a notification when a Remote Host is unavailable?

A: When a Remote Host becomes unavailable, it will be marked as such in the Configuration Manager.


On Wednesday May 20th 2015, Pilar Soria will discuss how to install and configure Control-M for Hadoop and demonstrate creating jobs to automate Big Data. The webinar will include:

• What is Control-M for Hadoop?
• Installing, configuring and setting up Control-M for Hadoop
• Creating MapReduce and Sqoop jobs
• The Hadoop job conversion tool
• A demo of configuring Control-M for Hadoop with Kerberos

There will be a Q&A after the demo. Register Here!


In today's complex environments, Control-M can be configured for security, remote access and multiple production and testing landscapes.

On Wednesday, March 25, 2015, Ted Leavitt will discuss and demonstrate the following:

• Setting up clients to work with multiple EM environments (Prod, Test, Dev, …)
• Understanding and configuring the Naming Service (CORBA) in Control-M
• Considerations for VPN and firewalls
• Configuring for virtual and clustered environments


Q: What do the "Listening address" and "Published address" values in Domain Settings in orbconfigure do?
A: The "Listening address" controls the interface that the naming service creates a LISTEN on.  By default, we suggest leaving this as "All".  The "Published address" is the address the naming services tells the clients it is using and what the clients will attempt to use after initial contact.


Q: If a company is locking down telnet port 23, what are the options for clients?
A: Port 23/TCP is used for the telnetd.  We are using the telnet tool to connect to ports other than 23, namely the naming service port (default 13075/tcp) and the endpoints for the EM components.  We do this to test TCP connectivity outside of Control-M, between these two points.
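The same kind of connectivity test can be scripted without the telnet tool by attempting the TCP connection directly. A small sketch (the host and port in the example comment are placeholders):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. check the default naming service port on the EM host:
# tcp_port_open("em-host.example.com", 13075)
```

Like the telnet test, this checks plain TCP reachability between the two points, outside of Control-M itself.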


Q: From a teammate who couldn't attend: can you lock down the connection profiles defined in the client? E.g., we would need to stop the user of a dev-configured client (on Citrix) from adding and connecting to the prod WA server.
A: In version 8, the client connection settings are stored in each user's ClientUserSettings.xml file (located under %APPDATA%).  Without making this read-only, you cannot "lock down" the connection profiles.  What we can suggest is using one of several methods to automatically select the environment or login to the GUI without prompting for environment.  This can be done using command line arguments for the emwa.exe (WA GUI) executable (i.e. -u -p -s...), or by using the EM_DOMAIN_NAME variable as described in the "Citrix Whitepaper" attached to KA358305.

Q: Is there a non-windowed version of orbconfigure that can be used on a linux installation that's not outfitted with Xwindow?
A: On UNIX, the only version available is the XWindow client.  The utility could be run on Windows to modify the same config.xml, or the orbadmin command line utility could be used to do the equivalent of the GUI.

Q: I missed the comment about troubleshooting slow connections.
A: Slow VPN connections are addressed in KA317628.  There are a couple of configurable parameters which may help with the client which are described in this article.

Q: what is the minimum # of ports that I need to open up in my firewall for the EM?
A: We suggest a minimum of 20 ports be opened for the EM components.  More details regarding the opening of ports can be found in the article KA351414.

Q: My Dev & Prod Control-M servers both use the same naming port.  Is that OK?
A: Different EM environments installed on the same host can use the same port as long as their "Listening Address" and "Published Address" are different (i.e. on different network interfaces).

Q: In the past, we were able to enter specific ports for each component; now it's a range. Is it possible to specify specific ports?
A: With Control-M v7 and above, several components now use more than one port.  This option has since been removed from the orbconfigure GUI and replaced with the port range.  While it is still technically possible to assign specific ports to individual components, we discourage this and it can be problematic.  This is discussed in KA351414.

Q: Ted, I have VMware with Windows machines (host 1 and host 2) running version 7, and they cannot communicate. What are the things I need to check?
A: I would suggest using the 'netstat' and 'telnet' utilities to first ensure that there is a LISTEN on the "server" side, and that the "client" can connect to the "server" via the specific TCP port.  This would hold true for M/Server & M/Agent, M/Server and EM Gateway, as well as EM Client and EM GUI Server.  This is discussed at greater length in KA423494.

Q: We use the Reporting Facility to connect to 4 CTMs.  Sometimes we can't log on because it says the database is not available; after several attempts the logon works.  What areas should I check?
A: The first thing the Reporting Facility does is retrieve the list of servers from the naming service.  After a GUI Server is found, and login proceeds, the Reporting Facility retrieves the database parameters, user and password, from the GUI Server.  It then uses this information to build an ODBC DSN (and tnsnames.ora file if Oracle) to connect to the EM Database.  If you are having problems with database connectivity, I would suggest checking the ODBC DSN which is created in this process and testing connectivity using it.

Q: Our users typically open a separate GUI to two different environments — the same installation, but they just open another window.  Can this cause any issues, since the config.xml is only a single file on the workstation?
A: The Version 7 GUI will update the config.xml as soon as the "Host Name" is set in the Advanced tab, whereas the v8 GUI will update it about 3/4 of the way through the login process.  If you are depending on any command line utilities with the EM, these will rely on the current config.xml setting.  If altering the config.xml is a concern, I might suggest using the EM_DOMAIN_NAME variable as described in the "Citrix Whitepaper" attached to KA358305.

Q: When we open the Workload Automation GUI, two times out of ten it hangs. Also, if we leave the GUI open for several hours, do something else on the laptop, and then return to it, it will not respond (white screen), so we need to close it and start over.
A: From my recent experience, this may be related to the WA GUI using a large amount of memory on the client PC.  If you have a large number of alerts, please ensure the "Always Monitor Alerts" button is unchecked in your WA GUI Monitoring Domain in Tools -> Alerts.

Q: Is there an impact on system performance if we have 20+ users using the web-launch client?
A: Web-launch is a way to install the EM client: instead of having to log in to each machine, you install it from a central location. Performance should not differ whether you installed the client locally or through web launch.

Q: We have two new DR servers: ctm/em and ctm/server which we recently installed.  After we configured log shipping from the prod to dr, corba failed to initialize.  DB is sql.  How do we resolve the corba initialization issue?
A: I suspect that you cloned the EM, and when this was done the CORBA configuration still pointed to the old server. Use orbconfigure to resolve this.


On Wednesday April 22nd 2015, Neil Blandford will demonstrate how to use Agentless technology to access unsupported platforms and reduce agent maintenance. 


There will be a live demo of Agentless setup for both SSH and WMI.  There will also be a Q&A after the demo.


Registration is now open here!


-by Joe Goldberg, Control-M Solutions Marketing, BMC Software Inc.


Whether you are already doing Hadoop or just trying to learn, the #HadoopSummit in Brussels, April 15-16, is for you. This is the premier community event in Europe, with over 1,000 attendees expected. There are 6 tracks with dozens of sessions covering everything you should be thinking about as you determine whether Hadoop is right for your organization, or how best to implement it if you are already committed to the technology.


This event is also a great opportunity to learn how you can build Big Data Insights faster, as much as 30% faster. And once you get them running, make sure they operate more reliably so developers can sleep at night and be productive in the morning instead of being bleary-eyed from fixing problems all night.


We are joining our partner @Hortonworks, who is hosting this event. Visit @BMCControlM at Booth 12 and come listen to my session, "Oozie or Easy: Managing Hadoop Workflows the EASY Way."


The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software



In today's complex environments, Control-M can be configured for security, remote access and multiple production and testing landscapes.


On Wednesday, March 25, 2015, Ted Leavitt will discuss and demonstrate the following:

  • Setting up clients to work with multiple EM environments (prod, test, dev, …)
  • Understanding and configuring the Naming Service (CORBA) in Control-M
  • Considerations for VPNs and firewalls
  • Configuring for virtual and clustered environments


Don’t miss a live demo of configuring for these environments.  There will be a Q&A after.

Registration is now open here!




Interest in Big Data has gone global, with organizations around the world aggressively jumping onto the Hadoop platform. The leader in open source Hadoop is Hortonworks, and BMC is proud to be their partner. We have just completed joint promotion of Hadoop workload management with BMC Control-M at #Strataconf in Santa Clara, and will continue to spread the word of Control-M workflow management for Hadoop through our participation in the Hortonworks Modern Data Architecture RoadShow. In the last several months this program has generated great interest, with sold-out attendance in Atlanta, Dallas, New York, Boston, Washington DC, Chicago, Seattle, San Francisco and London.


The global tour continues with events scheduled for:

  • Paris – March 3
  • Munich – March 5
  • Tokyo – March 10
  • Los Angeles – March 24
  • Houston – March 26


Each event is a full day of Hadoop information for both business and technical audiences focusing on how organizations can unlock the potential for Hadoop including case studies and a customer speaker.


The cost of attendance is either free or a nominal $99, which makes the event very accessible, so demand will be high. Be sure to register using this link as soon as you can.


Come and join us to learn how Control-M can help your organization harness the power of Hadoop: accelerate deployment of applications, run them with the highest level of service quality, and gain the agility to get quickly from Big Data to Big Answers to power your business.


The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software

On January 7, 2015 BMC held a webinar titled “Deliver Big Data Applications Faster” where we presented some of the Big Data project challenges that many companies are facing these days. We shared the experience we have gained over the past couple of years helping customers address these challenges, and we also discussed how BMC Control-M can help deliver Big Data applications faster and make your Hadoop projects more successful.


To watch the recorded webinar, click here.


Here is a transcript of the questions and answers from the event:


Which Hadoop distributions does Control-M support?

Answer: Control-M for Hadoop works against the native Apache Hadoop APIs, so it supports all the distributions and all their releases. We do not yet support the Hadoop 2 release for Windows (only Linux is supported), as we have not yet received any customer requests for it.


What if I’m running applications that are not supported by Control-M, can I still get some of these benefits you described?

Answer: Yes, definitely. In reality there is an endless number of applications out there, and while Control-M can support many of them through CLIs and web services, we are currently working on a new tool that will allow creating “custom” job types with a “native” interface to the application. You will be able to share those “custom” job types with an online community, or download job types created by other users. Regardless of those custom job types, even if the integration with applications not yet covered by Control-M is performed via scripts, you still get many other benefits: dependencies between jobs, forecasting, SLA management, self-service and mobile access, notifications, output analysis and much more. A lot of what is traditionally included in wrapper scripts is provided for every job in Control-M, so your scripts can be much simpler and shorter. Just avoid using the native application schedulers, to ensure you have an enterprise end-to-end view of the entire workflow rather than silos or islands of automation.
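Because a scheduler generally treats a nonzero exit code as a job failure, a script-based integration for an unsupported application can stay very thin. A minimal sketch, where the wrapped command is a placeholder for whatever the application actually runs:

```shell
# Minimal wrapper sketch for an application with no native job type:
# run the command, surface its status, and let the exit code drive the
# job's OK/NOTOK result (the wrapped command is a placeholder).
run_step() {
  "$@"                                          # run the real command
  rc=$?
  echo "wrapper: '$1' exited with status $rc"   # visible in job output
  return $rc                                    # nonzero -> job fails
}

run_step true && ok=yes || ok=no
```

No retry, logging, or alerting logic is needed in the script itself, since Control-M supplies those per job; that is the point being made above.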


We already implemented a bunch of our workflows with Oozie. Is there anything you provide that could help us switch to Control-M?

Answer: The Control-M conversion tool can import Oozie jobs to Control-M very quickly. Even if you are not using Control-M yet for your Hadoop jobs, and you only want to see what those job flows will look like in Control-M, you can do the import, see how easy the process is, and get a better understanding of the Control-M for Hadoop capabilities.


Is the communication between Control-M and Hadoop secured?

Answer: You can configure Kerberos authentication so credentials are encrypted. This has actually been a major requirement from several customers, who told us that such authentication is mandatory.
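On the Kerberos side, a common pattern for unattended batch is to keep a ticket cached from a keytab so jobs never prompt for a password. A rough sketch using the standard MIT Kerberos tools; the principal and keytab path are illustrative:

```shell
# Sketch: ensure a valid Kerberos ticket exists before Hadoop jobs run
# (principal and keytab path are illustrative placeholders).
if ! command -v klist >/dev/null 2>&1; then
  msg="Kerberos client tools not installed on this host"
elif klist -s 2>/dev/null; then        # -s: silently check ticket validity
  msg="valid ticket already cached"
else
  # renew from a keytab so unattended batch never prompts for a password
  kinit -kt /etc/security/keytabs/ctm.keytab ctmuser@EXAMPLE.COM 2>/dev/null \
    && msg="ticket acquired from keytab" || msg="kinit failed"
fi
echo "$msg"
```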


Do you have any examples from customers that you can share?

Answer: We have a couple of success stories on YouTube that you can watch, and we will soon publish a couple of additional ones. In short, one of our success stories is with a company named MetaScale, which is owned by SEARS Holdings. They took the same Hadoop-based technology that they originally developed for SEARS, and they now offer it to other companies as well. For SEARS they analyzed customers' shopping trends using Control-M and Hadoop. They were specifically impressed with Control-M's ability to integrate with so many platforms and applications, and with the single point of control. The other success story is with a company called ChipRewards, which provides Big Data services mainly to the healthcare industry. If you watch their video on YouTube you will see them talk about SLA management and how Batch Impact Manager helps them meet their batch deadlines.


MetaScale Videos on YouTube:

ChipRewards videos on YouTube:



Do you have any other questions on BMC Control-M for Hadoop and about accelerating the delivery of Big Data projects? Post them here or contact me at tom_geva@bmc.com.


Additional Resources:




