
Every Control-M user has a story that others can learn from.  Plenty of tips, advice, and knowledge are shared in the Control-M and Workload Automation Communities.  Why not share your own experience and knowledge with your Control-M peers and would-be Control-M users at the upcoming BMC Engage?

 


 

 

BMC Engage, the global user conference, takes place October 13-16 in Orlando, Florida, and there is an entire track dedicated to Control-M Workload Automation.  The call for papers for BMC Engage is open now through June 19.

 

It is a great opportunity to share your knowledge and unique experiences.  Pick a topic that you feel would interest other Control-M users. I just returned from a professional conference and heard some incredible stories from organizations that showed how they have reworked and improved processes, and adopted or even innovated on industry trends, and they inspired me to think differently.

 

If you are not sure what topics would be of interest, just take a look at some of the discussions in the Control-M Communities on bmc.com, LinkedIn, or Yahoo.  Here are a few topics from the BMC Control-M Community to help get your thinking started:

 

  • Converting to Control-M from another product
  • Using Control-M with SAP – tips and advice from your experiences
  • Upgrading to v8
  • Use cases for Self Service – what you did and how you did it
  • Using Control-M with your Hadoop/big data analytics applications
  • Working with application teams using Workload Change Manager
  • High availability
  • Tips and advice for using Advanced File Transfer

 

There will be BMC Control-M experts presenting sessions, but nothing replaces the hands-on, industry-specific experience that you have in your organizations.

 

 

Submit your abstracts here: http://bmc.g2planet.com/bmcengage2014/cfp

 


 

 

 

See you in Orlando!



 

This week, the Zoo welcomes Olga Paker from Tel Aviv and Robby Dick from Chicago to talk about the new Control-M Workload Change Manager, a drag-and-drop batch design tool that simplifies job scheduling and enhances cross-team collaboration. With Workload Change Manager, you can reduce the time it takes to make a modification by up to 80 percent.


Hadoop Summit, now in its third year, is the leading conference for the Apache Hadoop community. This three-day event features Apache Hadoop thought leaders showcasing use cases, development and education sessions, networking opportunities, and a vendor exhibition with the latest on Apache Hadoop.

 

BMC is a first-time sponsor, supporting Hadoop Summit at the Gold level.

 

We hope you’ll join us; we've got some exciting things in store for you:


1. Join us in the Exhibit Hall, Booth G16

Tuesday 7:30 AM – 4:35 PM; 6:05 PM – 7:30 PM

Wednesday 7:30 AM – 6:05 PM

Thursday 7:30 AM – 2:00 PM

 

2. Bringing DevOps to Hadoop (20 Min Fireside Chat)

Presenter: Shamoun Murtza

Date: Tuesday, June 3, 2014

Time: 6:10 – 6:30 pm

 

As Hadoop matures and increasingly becomes an integral part of business operations, Hadoop applications are becoming business critical.  Running these complex infrastructures and applications with large teams is becoming a challenge. While this problem is not unique to Hadoop, solving it does require the right tools and sensitivity to Hadoop’s specific requirements.

 

Developers need the ability to make changes quickly and often. They want to be able to make changes, test them, deliver them to the customer, and start working on the next thing. Operations, on the other hand, which might be a separate group or function, wants stability and minimal risk. The business needs a balance struck between the two so it can innovate while remaining stable. Hadoop presents its own particular set of challenges for setting up an effective DevOps process.

 

Hadoop changes range from the simple to the very complex. With the advent of YARN it has become even more important to make sure that all dependencies and changes (across code, data, and configuration) have been recorded and included in the change before promoting it. If you would like to set up a Hadoop ecosystem where your developers and operations functions can work hand in hand to ensure agility without compromising on risk, your first order of business should be to pick the right tools and process.

 

In this presentation Shamoun Murtza will focus on DevOps challenges in Hadoop environments, specifically the complexities of managing workload, data, and configuration changes as part of change promotion, and will review different strategies that can be used for success.

 

3. Control-M for Hadoop

Learn how to drive maximum value with an enterprise approach to big data. An enterprise approach to Hadoop batch processing will help you simplify the daily management of your big data environments.

 

4. Control-M Workload Change Manager

Control-M Workload Change Manager automates and simplifies batch workflow changes, accelerating the delivery of business services and reducing cost.

 

5. Control-M Self Service

Make more time for high-priority items and empower business users by giving them visibility into their workloads with BMC Control-M Self Service.

 

6. One Cord to Rule Them All: the Charge All Cable

This five-in-one charging cable features the new Lightning connector for the iPhone 5, 5S, and 5C, the Apple 30-pin connector, two Micro USB connectors, and a Mini USB connector. To charge your devices, simply plug this cable into a powered USB port and then into your Apple device or smartphone. It can charge more than one device at a time if the power source is strong enough. Stop by booth G16 to get yours!

Will you be at #HadoopSummit? Tweet your thoughts, or share them in the comments section below.


-by Joe Goldberg, Control-M Solutions Marketing, BMC Software Inc.

 

So here are some points to ponder:

  • Google Play has already hit one million apps, and the iOS App Store is not far behind.
  • According to mobile web platform provider Kinvey, the average app takes about 18 weeks to develop, and that is considered by many to be a long time.
  • The iOS App Store is adding about 20,000 apps a month; that is about 645 new apps each day.
  • Etsy performs over 25 deployments into production every single day (watch a video interview or see the slide deck).

 

Yes, that is mainly in the consumer market, but whether you call it consumerization, SMAC, or the “Nexus of Forces”, similar patterns and expectations are emerging in the corporate and enterprise worlds. Companies that fail to adapt will be severely challenged to maintain their competitive positions, and those that do adapt will be able to seize significant advantages.

 

One manifestation of this demand for accelerated new functionality is new development methodologies such as Scrum and other Agile approaches. The need to deploy these new applications as quickly as they are developed has given rise to DevOps.

 

Batch is Still King

One factor that has not changed significantly is that workload automation (batch job scheduling) continues to manage a major portion of the business processing performed by enterprise IT (some put the number at 70%). In fact, in a recent survey of BMC customers, almost 90% expect their batch workload to increase. Many of the most real-time, interactive, and mission-critical applications have a significant batch component that is an indispensable part of the processing.

 

Given this technology landscape, it seems reasonable to examine the process by which batch services are created, maintained, and deployed. One of the first points to consider is why batch is almost completely absent from the DevOps discussion. If DevOps tools can manage artifacts like config files, jar files, properties files, and so on, why not batch workflow definitions too?

 

Why DevOps Forgot Batch

Perhaps the answer lies in the way workload automation or job scheduling tools are used by organizations today.  Workload automation is frequently seen as an IT Operations tool. Developers may write scripts and run them manually or via cron (or something similar), and it’s not until the application is being handed off to IT that the topic of scheduling and workflow comes up. Developers may be very knowledgeable in setting up and configuring web application servers or defining tables and objects in relational databases, but when it comes to job flow relationships, calendars, abstract resources, and post-processing actions, they may not have a clue.

 

This may account for some of the arcane processes that customers have implemented to manage job scheduling requests submitted to IT, usually from application developers. Some use Excel spreadsheets, Word documents, custom forms, and other such methods. They all share an underlying acknowledgement that application developers are not schedulers and are not very familiar with the concepts or details of workload automation tools, even those used by their own organization.

 

In this context, we can now discuss how changes are made to workflows and consider ways to improve on the current state. Let’s take the example of a utility provider that runs tens of thousands of jobs daily. When a job flow is changed or modified, the application development team submits a document detailing the requirements. According to the IT folks, application developers have only a rudimentary understanding of scheduling principles. If left to their own devices, jobs would run serially, one after the other, taking significantly longer to complete and failing to take advantage of the concurrency capabilities of the installed workload automation tool. Each development team largely ignores all other teams and fails to consider the impact of different applications competing for the same resources. And finally, developers show little interest in operational requirements like auditing, reporting, and incident management.

 

What A Solution May Look Like

So if this is indeed the current state, what would a solution look like? That was the question BMC set out to answer about two years ago. The culmination of that effort has just been released, and it is called BMC Control-M Workload Change Manager. This new solution strikes a balance: it enables developers to transfer their knowledge of the application to IT without having to become scheduling experts, captures that information to automate construction of workflows, gives IT schedulers the ability to enrich the workflows with their knowledge and expertise, and provides a collaboration platform that lets all parties exchange questions, comments, and requirements in a structured and managed fashion.

 

Experience It

This solution has been needed and ardently sought for decades, and today it is becoming indispensable. There are several ways you can learn more about it, including taking a live test drive. Please post your comments and observations, or tell us what you do today in your environment, so that this solution can benefit from the collective wisdom of the entire community of workload automation users (and you just might win an iPad mini).

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.

 

With all the technology available today, there is at least one area where weird and unusual processes still rule. For some reason, even though batch workloads like payroll, inventory, and supply chain management account for about 70% of all business processing done on commercial computers, new methods like DevOps are completely ignoring this critical function.

 

So how do organizations manage changes to their batch workload? Well, we asked a bunch of them, and the answers range from the sublime to the ridiculous. Some use the tried and true “pretty please” approach: a developer or someone else who needs some jobs built calls his or her friendly scheduler and says something like “please do me a favor” and add this job or change that. Some have built incredibly elaborate systems of forms that have to be filled out and submitted. Until every last bit of information required by the form is acquired, schedulers cannot do anything. Sometimes the form is filled out correctly on the first try (not very common), and sometimes the request goes back and forth, and back and forth, until people start getting blue in the face and making some very impolite gestures. Many companies fall somewhere in between and use some kind of request mechanism based on Excel spreadsheets, Word documents, or email. One company we spoke with demands a Visio diagram of the desired results, and then schedulers manually cut and paste from the Visio into the forms defining the workflow. One company told us they don’t really have a process: when somebody wants a job, they have to hunt down the people who can provide that service, and each request is a negotiation that makes the Middle East peace process look trivial.

 

Of course, in all cases, the changes that have to get made DO eventually get made, and thus the reference to “it’s only weird if it doesn’t work”.  However, the fact that people have to jump through hoops and do unnatural things IS weird, and our definition of “work” needs to be seriously revised. First, the entire process needs to be brought out into the light of day and recognized for what it is: inefficient, problem-ridden, expensive, and frustrating. Second, the lack of automation introduces errors that cause production failures, because the actual job definitions that make it into production are frequently built manually from scratch with no standards validation. Third, and perhaps most important, the applications that businesses depend on to keep them competitive, gain an advantage, or address customer needs are being delayed.

 

You may think: what’s the big deal? Let everyone use the scheduling tools you already have, and problem solved. Well, if it were that easy, everyone would have done it a long time ago. The problems with that approach are that you can’t tell all your application developers or business analysts to become workload automation experts, the IT Operations and scheduling folks would freak out (rightfully so) at the thought of hordes of users having free and unfettered access to your production environment, and your auditors would probably read you the riot act. Even if you could pull off this minor miracle and get such an action approved, your auditors wouldn’t be happy, application developers wouldn’t be happy because they already have a day job, and management wouldn’t be happy when production workload failed due to errors introduced by novice or casual users.

 

The solution requires some new capabilities that until now have not existed. What’s needed is a solution that is simple for non-scheduling and/or casual users to deploy and use without extensive training. It must also support local and site-specific standards and be adaptable to different levels of users, so that requests are as close to fully formed and ready to run as possible. It must be automated, to eliminate the need for manual construction of workloads, yet sufficiently collaborative to provide different levels of engagement between requesters and schedulers. Finally, it must be fully controlled and audited to ensure that the critical production environment is not compromised in any way.

 

With such a solution in place, developers and business analysts/owners will be able to define their requirements for application workflows quickly and efficiently. IT can implement these new business functions smoothly and confidently without impacting the quality of their service delivery. The most efficient use is made of everyone’s time and, most important of all, the business gains the benefits of accelerated application delivery to provide new products and services that can ward off competitors or expand market share.


So next time you have some workflows to create, you can pull out your lucky socks and hope for the best or come back here in a few weeks to learn about a better way.

      

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


 

This week, job scheduler Alon Lebenthal flies into the Zoo to talk about workload automation. Managing jobs across multiple, heterogeneous environments is no easy task. And with the introduction of Hadoop and other open-source platforms, you need to be smart about how you store and process data in the enterprise. We talk about the needs of major job scheduling hubs, such as the Hong Kong Air Cargo Terminal, and how the same issues affect small and midsize organizations. A former Israeli Air Force IT employee, Alon shares his view of the London weather and his passion for cooking.


During 2013, the IT industry acknowledged four converging technologies that are disrupting almost every business and market today. Social, Mobile, Analytics and Cloud (SMAC) are also referred to by Gartner as the “Nexus of Forces”. When we talk about SMAC in our BMC Exchange events, we often break it down into two “Mega Trends”:

  • Consumerization: Mainly mobile, social and the general concept of keeping user interfaces as simple to use as possible
  • Industrialization: The transformation of the IT infrastructure to become more efficient and flexible by adopting cloud, big data, and streamlined enterprise apps, and by embracing automation.

The 2014 Workload Automation trends are perfectly aligned with these general IT mega trends and are expressed by customers in almost every meeting we have.

 

Deliver Digital Services – Faster

Digital services are what we consume today and, consequently, what businesses deliver.  And with the “anytime, anywhere” expectations that users have today, the speed at which applications need to be delivered has increased.

By considering batch job and workflow needs in the early design and development phases of projects, you can reduce the time it takes to implement applications in production. Instead of scripting homegrown wrappers that will later be automated by generic scheduling tools, use a Workload Automation solution that provides native abilities to interact with the application, monitor its SLA, analyze workload completion status, take automatic corrective actions in case of a failure, and alert on problems. We recently heard from a new BMC customer that, using Control-M for Hadoop, they were able to decrease their Hadoop implementation project time by 46%, about six months of development effort! And Hadoop is not a unique example. The same goes for SAP, file transfers, database queries, or any other business application.

 

Collaboration & Self Service

Recently I had the pleasure of writing a text message on an old mobile phone keypad, and while I was struggling with switching between digits and letters, case-sensitivity modes, and languages, I suddenly realized how hard it is to switch back from my iPhone touch keyboard to what used to be an industry standard until about six years ago. The adoption of smartphones and tablets, and the introduction of wearable devices such as Google Glass, are all catalysts for users’ increasing demand for easier, simpler, and more accessible technology that behaves the way the average human expects.

In 2014, Workload Automation solutions will adopt this approach, providing client software and mobile apps with simple interfaces designed to meet the needs of the specific user personas they target: operators, schedulers, application developers, or business users.

Self-service interfaces that allow application developers to submit workload change requests without opening helpdesk tickets, submitting manual forms, or becoming Workload Automation experts will appear in 2014. This will help developers focus on building better applications that are aligned with their business demands, instead of developing scripts that automate workloads within those applications. Collaboration between application developers and production control should become common practice in 2014, the same way the iPhone changed the mobile industry in 2007.

 

Expanding automation scope for what you know and what you don’t know

In 2014, organizations will increase Workload Automation’s span of control to cover more business applications than before. Simple-to-use, automated tools will allow the consolidation of workloads from multiple schedulers and business applications into a single solution that provides a focal point of control. The production control team will no longer be required to use different tools and interfaces to manage batch activity. The use of automatic workload discovery tools will also grow in 2014, allowing organizations to identify critical batch elements they have not been aware of.

 

 

 

In 2014, Workload Automation will move up from the IT “basement” to the desktops and smartphones of application developers and business users. The increased ROI and reduced time to value will become visible to all stakeholders. Self service, collaboration, workload conversion, and workload discovery will become industry standards. It is going to be an exciting year!


Big Data is a mega trend that is changing how data centers and businesses operate, and it is being influenced by other technology mega trends, like Cloud and Mobile.  Shamoun Murtza, the CTO for financial services at BMC Software, has been researching how big data affects organizations, and how big data is affected by other technology mega trends.  He recently talked with us about what he is seeing in the marketplace when visiting companies of all sizes.

 

This blog shares some of the highlights from the discussion with Shamoun.

 

What’s Changing

 

Big Data is one of several major technology mega trends that are shaping the world’s data centers and how organizations operate.  Whether it’s an insurance company, financial institution, or government agency, big data is delivering new insights that help businesses seek new opportunities, enter new markets, protect against security breaches, and even reduce costs.

 



What’s changed?  Users are changing, and how they use data is changing.  You need to rethink how you process data, because mobile is so impactful.  An example is a story from a bank in Australia: quarter over quarter, the bank was seeing a 10 percent reduction in teller transactions, yet over the same period, online transactions increased 400 percent.  It’s a completely different usage pattern, along with a very different volume of transactions.  So what does this do to your backend processing?

 

Hear more about the nexus of these mega trends in this 3-minute video.


Getting Started with Big Data


Murtza states that the first thing you need to do is identify a problem.  This can be a new problem or one that you have not been able to solve with traditional solutions.  There are three main areas that need to be considered:

 

  • Data acquisition. Identify the data sources and where the data can be acquired. There can be a lot of considerations for data acquisition, based on the problem you want to solve.
  • Data store. There are choices for data storage, but there is a clear winner emerging in the big data area, and that is Hadoop.
  • Analytics. There are a lot of start-ups in this area, in particular when it comes to data visualization.

 

Map these three areas out and begin implementation.  Don’t start large.  Start small, and bring in the right data scientists to help you determine the best approach.


If you strip away the buzz from big data, what you have is batch processing.  This is good news, as batch processing is a well-known and established process.


Big data opens a lot of new doors.  That is what is getting companies excited.   Hear more in this 2+ minute video.

 

 

Connect Hadoop to the Enterprise

 

How do we actually extract data out of Hadoop?  You need a method that is resilient and reliable.  So how do we do that?


HDFS is the Hadoop Distributed File System, where you store the data.  MapReduce is how you get data out of HDFS.  It is a Java-based programming framework that works with HDFS, processing data and returning result sets very fast and very efficiently.  The reality?  It is all batch processing.
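To make that batch nature concrete, here is a minimal word-count sketch using Hadoop Streaming, which lets you write the mapper and reducer as plain Python scripts instead of Java classes. The file names and paths below are illustrative assumptions, not tied to any particular product.

    # mapper.py - reads raw text from stdin, emits one "word<TAB>1" pair per word
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word, 1))

    # reducer.py - Hadoop sorts mapper output by key, so all counts for a
    # word arrive contiguously; sum them and emit one total per word
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        line = line.rstrip("\n")
        if not line:
            continue
        word, count = line.rsplit("\t", 1)
        if word != current_word:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, 0
        current_count += int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

A job like this is typically submitted against the streaming jar (for example, hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py), and the framework handles the splitting, sorting, and scheduling; exactly the kind of batch run a workload automation tool would own.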

 

How do you manage all of the batch processing in Hadoop?  Because of the volume of data and the complexity of the analytics, processing in Hadoop can become very complicated very fast when looking at large clusters.  The good news: managing batch processing is a known problem. This is what workload automation solutions are designed to do.  No need to waste your time finding a new solution for Hadoop.  Spend your time solving the new problems that Hadoop now lets you solve.

 


 

See how Control-M can help you with your Hadoop batch processing needs.  It natively supports Hadoop and various Hadoop projects, so you can manage Hadoop with the same solution you use to manage all of your other enterprise batch processing.  Don’t spend your time reinventing the wheel.

 

Many organizations are adopting big data to gain a competitive advantage.  If you are not doing big data, does that mean others have a competitive advantage over you?

 

If you want to listen to the series of thought leadership videos by Shamoun Murtza, here are the links:

 

Think Forward:  Big data and other mega trends

Think Forward:  Get started with big data

Think Forward:  Control-M connects big data to the enterprise

 

It is an exciting time to be in the technology industry.


If you are like many millions of folks around the globe, you are making a New Year’s resolution today.  And like most (I am sorry to report), you will more than likely fail to fulfill your resolution.  As we forge into the new year, we make promises about eating better, exercising more, and living healthier lifestyles, but only 8% of the well-intended will be successful in reaching their goal.

 

The most popular New Year’s resolutions are centered around health: losing weight, exercising more, and eating healthier, to name a few.  Nobody understands this better than the folks at ChipRewards (http://www.chiprewards.com/company/).  ChipRewards uses a balanced formula of technology, science, people, and process to incent folks to choose healthier behaviors, and ultimately live healthier lifestyles.

 


 

 

 

Technology is so pervasive in our daily lives today that we don’t often think about how it affects our ability to be successful with our short- or long-term lifestyle goals.  But it is there.  The treadmills we use have technology that emulates a run through the hills or flatlands, at a pace we choose.  We monitor and collect data about our bodies using computerized bracelets like the Jawbone and Fitbit.  And we may even choose to track our caloric intake using a free app on our mobile phone or tablet.

 

ChipRewards takes all of this data, and a lot more (known today as big data), to develop incentive programs that help folks modify their behavior.

 

ChipRewards uses BMC Control-M to manage the workflows that collect and analyze the very large volumes of data required to gain the insight needed to modify behavior.  With BMC Control-M Workload Automation, ChipRewards is able to analyze health-related data associated with individuals and use both a carrot and a stick approach.  Because the data collected comes in unpredictable volumes, ChipRewards uses Control-M to build a cloud infrastructure stack that lets them service their data processing needs and then decommission the cloud stack once processing is completed.

 

By using BMC Control-M to automate the proprietary analytic programs and other proprietary applications, ChipRewards is successful in designing programs that help individuals take the necessary steps for modifying their behavior.

 

 


 

It is always nice to be associated with products (like Control-M) that help improve the lives of others.


Whether you made a health-related New Year’s resolution, any other resolution, or none at all, the BMC Control-M team wishes you a healthy and successful year.  May 2014 be your best year yet.




…Plans that either come to naught,

or half a page of scribbled lines…

 

The words of Pink Floyd’s “Time” are ageless, and it is likely that you can relate to them (or at least to specific lines) no matter how old you are or what stage of life you are in. Challenges we faced ten years ago might be behind us, but we still run and run to catch up with the sun, trying to meet project deadlines and to get that new application or business service up and running in production, hoping that by the time the sun comes up behind us again we will have the magic spell (competitive edge) that will help us win our customers’ affection and budgets.

 

In an effort to speed up development plans, we sometimes take shortcuts, forget to consider, or are simply unaware of important factors that are prerequisites for any new application promoted to production. These factors are the ones that allow the Production Control team to effectively monitor application activity by automating common scenarios and reducing manual effort.

 

Kicking around on a piece of ground in your home town,

Waiting for someone or something to show you the way…

 

 

By using a Workload Automation tool that can provide out-of-the-box support for all these requirements, you can eliminate the need to develop these in-house, and instead focus on building the best application for your business.

Here are several examples of services you should expect your Workload Automation solution to provide. By using these you can eliminate a substantial amount of development that is often performed by expensive resources, cut down costs, reduce the number of hand-off cycles to Production Control, and eventually, dramatically decrease the time it takes to promote a new application to production, without compromising your site’s specific IT standards:

  • Dependencies between workloads: This should be done in the easiest way possible, preferably with a drag-and-drop graphical interface, and should allow creating dependencies between the workloads of new applications and the workloads of various other business applications that are already part of your IT infrastructure (ERPs, business intelligence and data integration tools, database queries, system backups, file transfers, etc.)
  • Analysis of task outputs: Automatic detection of exit codes and common application error messages. Analysis of workload outcomes to determine whether the workloads completed successfully or automatic recovery actions need to be taken.
  • Manual corrective actions: Modify workloads to correct errors, rerun workloads that failed, cancel workloads while they are running, etc.
  • Proactive notification of potential missed SLAs: Identify whether a workload is running too long or completed much sooner than expected, based on historical statistics. Understand the impact of a problematic workload on the overall batch business service.
  • Alerting capabilities: In case of workload failures or missed SLAs, send emails to the relevant recipients, open helpdesk tickets, and escalate alerts to monitoring frameworks or service impact managers via SNMP or native APIs.
  • Forecasting: Plan future changes, and identify optimal maintenance time windows and changes to capacity volumes, without affecting production activity.
  • Balancing resources: Dynamically adjust the number of workloads that can run simultaneously, in general or for a specific application. Ensure application servers are not running under or over capacity, and redirect workloads to alternate application servers in case of an outage.
  • Applying site-standard policies: Ensure the use of proper naming conventions that allow Production Control to easily categorize workloads and identify their owners.

 

A Workload Automation tool that can provide the above capabilities and facilitate effective communication and collaboration between Application Developers and Production Control can help elevate the relationship between these two teams for the long run, allowing each team to focus on what they do best.

 

A Self-Service offering that allows Application Developers to submit requests for new workloads, request changes to existing workloads, and monitor active workloads would be the next stage in such collaboration. I promise to elaborate on this topic in another blog post. But now the time is gone, the song is over; thought I'd leave you with one of David Gilmour’s best guitar solos ever. Enjoy!

 

 

 

 


 

To learn more about the BMC Workload Automation offering: http://www.bmc.com/solutions/workload-automation/workload-automation.html#.Uq8KKJHmzuo


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.


This week started for me on Saturday evening, when I got on a plane to Berlin, where I arrived Sunday afternoon in preparation for the Berlin BMC Exchange Day starting on Monday.

 

That event ran until Tuesday afternoon, when I flew to London for the UK Exchange Day that started Wednesday morning. Wednesday evening I flew to Paris for the Paris Exchange Day, and I am writing this blog from Les Docks in Paris, the BMC Exchange Day venue.

 

The theme of BMC Exchange Days is the delivery of digital services and the impact of four converging technologies that are disrupting almost every business and market today. Social, Mobile, Analytics and Cloud (SMAC) is one of the ways this phenomenon has been described, and it is affecting every aspect of technology and its business application. For us in BMC Control-M, the impact is not always immediately apparent to some of our constituents. It’s easy to think that workload automation is a deeply technical, back-room function that remains the domain of technical power users removed from smartphones, tablets, and “appified” interfaces. But in reality, there may be lots of users outside of IT who have some level of interest in workload but are not, and do not wish to become, scheduling experts. Business analysts, application developers, and similar constituents may indeed prefer self-service, social, and mobile user interfaces that make running a job or building a job as easy as paying bills or booking travel.

 

This may also be true for users and organizations as they begin to delve into Big Data and Hadoop, where users may need the services of workload automation to run analytics but have no desire whatsoever to learn the details of managing workload.

 

The best way to find out what customers are thinking is to ask them using the Plain Old Meeting method, and that is what BMC Exchange Days are all about. Social and Mobile may be the new communications and technology consumption paradigm of the future, and Analytics and Cloud may indeed drive infrastructure decisions. However, they are not the first technologies that seemed destined for greatness, that seemed so right they couldn’t fail; our technology history is littered with such “too right or too logical to fail” solutions that either failed outright or never lived up to expectations.

 

So, what’s a vendor to do? And what are its customers to do? The answer is brilliantly simple: they should all talk. And voila, there is the whole idea behind BMC Exchange Days: an opportunity for BMC and its customers to talk. BMC can talk to customers, and customers can talk to each other.

 

Of course it is possible to talk via social channels, and we pride ourselves at BMC, and specifically in @BMCControlM, on being socially gregarious. But if you want to ask tough questions and get thoughtful, insightful answers, in other words to deeply engage with your customers and have them speak candidly to you as a vendor and to their peers, it is difficult to replace a good old-fashioned face-to-face meeting.

 

I believe that this desire to truly connect with our customers, not only through social channels but also by building relationships through meetings at events such as BMC Exchange Days and on site at customer locations, distinguishes BMC from other vendors.

 

So, if there’s a topic you wish to discuss, or if you want to learn more about how BMC can help your organization more easily adopt SMAC for your workload automation or any other aspect of managing your IT environment, let us know. Send us an email, tweet us, post on our wall, or maybe, just maybe, pick up the Plain Old Telephone System (POTS) and give us a call.

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.


I had the opportunity to be a “fly on the wall” as several IT managers from a large Financial Services organization discussed the challenges of “DevOps” and Agile development.

 

They mentioned the well-known problem of the contradictory goals of pushing out new services versus maintaining stability and mitigating the risk of change, when one of them made a controversial statement: he said a lot of programmers are “ignorant”. He went on to explain that what he meant was that they are ignorant of issues outside of their domain. The claim is that the pressure of Agile development creates more than just the potential to destabilize the production environment. As technology innovation accelerates, general technical skills are eroding (and are not being replenished at a sufficient rate). Programmers fall back on pulling bits and pieces of code from libraries or co-workers without properly assessing any deficiencies those snippets may have for their specific applications. The result is code being pushed into production that is not production ready.

 


I must confess that I am an “IT” guy and have been for well over three decades, so my perception may be skewed. But imagine an IT infrastructure that I would build to accommodate a hypothetical environment. I could build and configure servers, set up a network, and install management tools. When the first application that tried to run in this environment needed Java, it would fail, because that wasn’t part of the “hypothetical environment” (I am an old geezer, after all). If I argued that “everything worked just great in testing”, it would be a completely worthless statement. However, I argue that something very similar happens with applications. Programmers write code that executes successfully in test environments, where they are either testing with a small subset of data or in an environment unencumbered by resource contention and tons of other applications that all have performance requirements. So code that performs a “commit” after every SQL statement, or instantiates a Java VM, or loads hundreds of DLLs when it needs only a single one, repeating these functions over and over, “tests” just fine. Once in production, however, it runs like the proverbial “pig” or may even fail under stress. And once a poorly performing application makes it into production, guess who “owns” it and becomes responsible for making it run?

 

So I believe it’s not enough to just improve the mechanics of getting applications into production. As important as that is, it is just as important to ensure that applications are designed, written, and tested to run successfully in a production environment. This must also be included in the DevOps discussion, because it is the absence of this discipline that significantly contributes to the reluctance of the “Ops” guys to accept “Dev” applications into production in the first place. I am calling this additional wrinkle Dev4Ops.

 

Without this discussion, DevOps runs the risk of becoming an exercise in accelerating the injection of potential Garbage IN to the production environment, with predictably deplorable results.

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


[Photo: Ein Gedi]
You have a new script developed and ready to be automated as part of a bigger business flow. What are the next steps?

  • Create a new job definition, and point it to the script location. Use your company naming convention standard for the job, application and other categorization attributes.
  • Specify the OS account that will be used to execute the script on the target server.
  • Define the scheduling criteria (the days on which the jobs should run) and the submission criteria (the time window and prerequisites which need to be fulfilled before the job can run).
  • Define the dependencies between the job and other processes in the business flow.
  • Configure post-processing actions to recover from the various scenarios in which the script can fail, and the notifications that should be issued in such cases. This may also include opening helpdesk tickets to audit the failure or to get the relevant support team involved.
  • Add job documentation to have all related information available for Production Control.

Deadlines and SLAs can be defined at the individual job level, but in most cases a much better approach is to define them at the batch service level.
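To picture what those steps add up to, here is a deliberately simplified sketch of a job definition expressed as a Python structure. Every field name and value below is a made-up placeholder for illustration; this is not actual Control-M syntax.

    # Hypothetical job definition covering the attributes listed above.
    # All names here are illustrative placeholders, not real product syntax.
    job_definition = {
        "name": "FIN_DAILY_LOAD_010",             # site naming convention
        "application": "FINANCE",                 # categorization attribute
        "command": "/opt/scripts/daily_load.sh",  # points to the script location
        "run_as": "finbatch",                     # OS account on the target server
        "schedule": {"days": "WEEKDAYS", "window": "22:00-02:00"},
        "depends_on": ["FIN_EXTRACT_005"],        # dependency within the flow
        "on_failure": [                           # post-processing actions
            {"action": "rerun", "max_times": 2},
            {"action": "notify", "to": "prodcontrol@example.com"},
            {"action": "open_ticket", "queue": "FIN-SUPPORT"},
        ],
        "documentation": "Loads daily GL extracts; see runbook FIN-RB-12.",
    }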

 

Is that it?
[Photo: Ein Gedi pool]

You've ticked all the above check-boxes. You've run a forecast simulation to verify the scheduling definitions. You think you are ready to promote this change to production.

Before you do that, think about the following scenario: what will happen if someone modifies the script on the target server? Will that change be audited? If the next run of the job fails because of the script modification, will Production Control be able to connect the change (assuming they are aware of it) to the failure? What tools does Production Control have to recover from the error?

 

Without getting into complicated or expensive change management practices, there is one very simple thing you can do:

Use Embedded Scripts.

 

When embedding scripts within the job definition you can immediately get the following benefits:

  1. Any change to the script is audited and can be identified by Production Control as a potential cause of the job failure.
  2. Changes can be rolled back by Production Control without the need to access the target server and manually modify the script. In some cases Production Control is not even authorized to remotely access application servers. And if the target server is a mainframe, iSeries, Tandem, Unisys, or OpenVMS system, for example, we are talking about a whole different set of skills, none of which are required when using embedded scripts.
  3. You can roll back all changes made up to a certain point in time, including both the job attributes and the embedded script. Deleted jobs will be restored. Modified jobs will be rolled back. New jobs that were created after that point in time will be deleted.
  4. You can compare between the job versions and see if the script was modified and if so – what was the change.
  5. The embedded script can be more than just a batch or shell script. It can be a PowerShell script, a Perl script, or another scripting language, and you can also embed SQL statements if you run database jobs (a short sketch follows this list).
  6. You can run a single copy of an embedded script on multiple target servers. This way if changes are required you can modify the script only once. Add agentless scheduling to the equation and you have a real “zero footprint” environment.
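Continuing the hypothetical sketch from earlier in this post, embedding the script simply means the job carries the script body itself instead of a path to a file on the target server, so the scheduler versions and audits the script together with every other job attribute. Again, all names are illustrative placeholders:

    # Same hypothetical job, now with the script embedded in the definition
    # rather than referenced by path; placeholders are illustrative only.
    job_definition.pop("command", None)  # no external file left to drift
    job_definition["embedded_script"] = """\
    #!/bin/sh
    # Stored, versioned, and audited with the job itself, so any change
    # shows up in the job's version history and can be rolled back.
    /opt/finance/bin/load_gl_extracts --date "$(date +%F)"
    """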

 

Including scripts as embedded parts of job definitions can be a very efficient practice for both Application Developers and Production Control. It will reduce the time it takes to recover from errors and increase your ability to meet your auditors’ requirements.

 

Do you use embedded scripts? If so, share the details with us: what type of scripts do you run, how does it work for you, what challenges do embedded scripts help you address, and what challenges still remain?

 

[Photo: view of the Dead Sea]

Note: the photos in this post are my idea of an “all inclusive deluxe” vacation. It’s the most beautiful place in the world, and the lowest point on earth. This is where I live!

Can You Ride an Elephant?

Posted by Tom Geva Oct 24, 2013

Big Data, Big Data, Big Data. Is it just another passing buzzword, or is it here to stay? What does it really mean? Is my data big enough to be managed by Big Data technologies?

 

Those are questions that many of us are asking these days, and there is a good reason for that. Just like the early days of cloud computing, when every one of us had a different idea of what it actually was, Big Data technologies are commonly mistaken as relevant only to members of the petabyte club. But the truth is that there are benefits to using Hadoop, for example, even at lower scales of data. Some of these benefits are also discussed in an interview with Mike Olson, Cloudera CEO.

 

  • Affordable infrastructure: You do not need to purchase expensive hardware or a high-end storage infrastructure for Hadoop. The idea is to use commodity hardware with locally attached storage.  The hardware and storage can be physical or virtual, on premises or hosted on a public cloud such as Amazon Elastic Compute Cloud (EC2), and you can dynamically add or remove resources to scale for changing processing levels.

 

  • Rapid data processing: Many of us start our journey with single-node Hadoop clusters in test environments, but Hadoop is designed to run on multi-node clusters, which allow parallel and balanced data processing. Today there are already companies, such as the music-streaming service Spotify, that manage Hadoop environments with hundreds of nodes, each capable of independently processing the pieces of data it holds, much faster than any single server could regardless of how many CPUs it has.

 

  • Manage unstructured data: Unlike traditional relational databases that rely on predefined data schemas, the Hadoop Distributed File System (HDFS) allows you to store any format of data in an unstructured manner. This data can be videos, photos, music, streams of social media content, or anything else. It doesn’t mean you do not need to plan ahead and figure out which business questions you want to answer, but it definitely means that when new questions arise, it will be much easier for you to make the adjustments that will allow you to answer them.

 

  • Redundancy & High Availability: Hadoop distributes each piece of data to multiple nodes in the cluster (number of copies is configurable) so if one of the nodes fails (which is more likely to happen due to the use of commodity hardware), you will not lose any data. This eliminates the need to use expensive RAID devices or commercial cluster software.

 

  • Mainframe cost reduction: Most companies that manage large data repositories on mainframes seek ways to reduce CPU peaks in order to cut software license costs. This is often quite a challenge, especially at the end of a quarter or year, or during holiday shopping seasons. By shifting some of that processing activity to Hadoop you can reduce your costs and sometimes even get the processing done faster.
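As an example of how configurable that redundancy is, the copy count mentioned in the redundancy bullet above maps to a single standard HDFS property in hdfs-site.xml; dfs.replication is a real HDFS setting, and the value shown is just the common default:

    <!-- hdfs-site.xml: how many nodes keep a copy of each block -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>  <!-- 3 is the usual default; raise it for critical data -->
    </property>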

 

It will take some time until the Internet of Things hits us all and we are flooded by an unmanageable amount of data that leaves us no choice but to use Big Data technologies. But there is no reason to wait until then when we can adopt these technologies now for the benefits they provide and the value they add, addressing challenges that are not necessarily volume dependent.

 

 



-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.

 

The human creative spirit takes flight on a regular basis. We are lucky to be living in an age of seemingly constant, breathtaking innovation. Just when you’d think “it just can’t get better than this”, something comes along that is.

 

So it’s a real mystery why the management of new IT technology seems to be stuck in an all-too-familiar rut. Some wonderful new thing comes along, savior status is assigned, and expectations are immediately raised to ridiculous levels. Organizations redirect their best and brightest staff and their financial resources to adopt or implement whatever that new technology is. They may even build a prototype, get through a successful POC, or go beyond that and deliver some really useful business service.

 

And all the while, never stopping to think for very long about how they will actually sustain the ongoing operation of this magical new stuff.

 

You’d think we would have learned; didn’t we go through client/server, distributed computing, internet/intranet, web servers, cloud, BYOD, and countless other trends that exploded onto the scene and then took forever to actually gain traction and deliver business value, because no one thought about how to sustain them?

 

I guess “those who don't know history are destined to repeat it”, and so we do, we do, and we do...

 

So now we have the Big Data trend and its most popular technology, Hadoop.

 

Everybody either already has it, is implementing it, or wants it. When asked about managing it, purists wax poetic about the Open Source culture and ecosystem while simultaneously stating preferences for “packaged” distributions because they are more stable, more tested, and may have some “value-added capabilities”. Is the time spent scripting and adapting and integrating all free? And once their science projects and POCs are ready for business “prime time”, will the time, effort, and complexity that the Infrastructure and Operations team has to invest to get this stuff co-existing with and living in an enterprise IT environment also be free? How will these shiny new projects integrate with “traditional” mainframe, Unix, Linux, ERP, and other systems? And what will happen when regulatory and compliance requirements are applied and someone raises dirty words like incident and change management and auditing? Or when someone asks about SLAs, forecasting, or dozens of other capabilities that are taken for granted for the “traditional” applications that have been running in production forever?

 

 

 

The good news is that it's never too late to start doing things right. You can build Hadoop applications with their ultimate operational goal in mind by using an enterprise-grade workload automation solution. BMC Control-M provides deep integration with Hadoop, so you can use it out of the box with no scripting. This will save you lots of time and money. Developers can focus on building the very best Big Data applications and getting them into production quickly, with the confidence that they will exceed the most stringent operational requirements.

 

 

Your business will get the deep insights and operational efficiencies that are the promise of Big Data, and your IT staff will be able to meet the highest levels of service quality most efficiently.

 

And, you will all be able to brag about your extensive knowledge of history!


 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software
