
-by Joe Goldberg, Control-M Solutions Marketing, BMC Software Inc.

 

So here are some points to ponder:

  • Google Play has already hit one million apps and the iOS App store is not far behind.
  • According to mobile web platform provider Kinvey, the average app takes about 18 weeks to develop, and that is considered by many to be a long time.
  • The iOS App Store is adding about 20,000 apps a month; that is about 645 new apps each day.
  • Etsy performs over 25 deployments into production every single day (watch a video interview or see slide deck).

 

Yes, that is mainly in the consumer market, but whether you call it consumerization, SMAC or the “Nexus of Forces”, similar patterns and expectations are emerging in the corporate and enterprise worlds. Companies that fail to adapt will be severely challenged to maintain their competitive positions, while those that do will be able to seize significant advantages.

 

One of the manifestations of this demand for accelerated new functionality is the adoption of new development methodologies such as Agile and Scrum. The need to deploy these new applications as quickly as they are developed has given rise to DevOps.

 

Batch is Still King

One factor that has not changed significantly is that workload automation (batch job scheduling) continues to manage a major portion of business processing performed by enterprise IT (some put the number at 70%). In fact, in a recent survey of BMC customers, almost 90% expect their batch workload to increase. Many of the most real-time, interactive and mission critical applications have a significant batch component that is an indispensable part of the processing.

 

Given this technology landscape, it seems reasonable to examine the process by which batch services are created, maintained and deployed. One of the first points to consider is why batch is almost completely absent from the DevOps discussion. If DevOps tools can manage artifacts like config files, jar files and properties files, why not batch workflow definitions too?
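To make the comparison concrete, here is a minimal sketch of how a batch workflow definition could be versioned and validated in a CI pipeline just like any other artifact. The JSON format and job names are invented for illustration; they are not Control-M's actual definition syntax.

```python
import json

# Hypothetical job-flow definition, checked into version control
# alongside config files, jar files and properties files.
FLOW = """
{
  "flow": "nightly-billing",
  "jobs": [
    {"name": "extract",   "depends_on": []},
    {"name": "transform", "depends_on": ["extract"]},
    {"name": "load",      "depends_on": ["transform"]}
  ]
}
"""

def undefined_dependencies(flow_text):
    """CI-style sanity check: every dependency must name a defined job."""
    flow = json.loads(flow_text)
    names = {job["name"] for job in flow["jobs"]}
    return [dep for job in flow["jobs"]
            for dep in job["depends_on"] if dep not in names]

print(undefined_dependencies(FLOW))  # an empty list means the flow is consistent
```

Once a definition like this lives in source control, it can be diffed, reviewed and promoted through environments exactly the way a properties file is.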

 

Why DevOps Forgot Batch

Perhaps the answer lies in the way workload automation or job scheduling tools are used by organizations today.  Workload Automation is frequently seen as an IT Operations tool. Developers may write scripts and run them manually or via cron (or something similar) and it’s not until the application is being handed off to IT that the topic of scheduling and workflow comes up. Developers may be very knowledgeable in setting up and configuring web application servers or defining tables and objects in relational databases but when it comes to job flow relationships, calendars, abstract resources and post processing actions, they may not have a clue.

 

This may account for some of the arcane processes that customers have implemented to manage job scheduling requests submitted to IT, usually from application developers. Some use Excel spreadsheets, Word documents, custom forms and other such methods. They all share an underlying acknowledgement that application developers are not schedulers and are not very familiar with the concepts or the details of workload automation tools, even those used by their own organizations.

 

In this context, we can now discuss how changes are made to workflows and consider ways to improve on the current state. Let’s take the example of a utility provider that runs tens of thousands of jobs daily. When a job flow is changed, the application development team submits a document detailing the requirements. According to the IT folks, application developers have only a rudimentary understanding of scheduling principles. If left to their own devices, jobs would run serially one after the other, taking significantly longer to complete and completely failing to take advantage of the concurrency capabilities of the installed workload automation tool. Each development team largely ignores all other teams and fails to consider the impact of different applications competing for the same resources. And finally, developers show little interest in operational requirements like auditing, reporting and incident management.
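The cost of running independent jobs serially rather than concurrently is easy to demonstrate. This sketch times three independent stand-in "jobs" both ways; the job names and durations are invented for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def job(name, seconds):
    time.sleep(seconds)   # stand-in for real batch work
    return name

independent_jobs = [("refresh-a", 0.2), ("refresh-b", 0.2), ("refresh-c", 0.2)]

# Serial: total elapsed time is the sum of all job durations.
start = time.perf_counter()
for name, secs in independent_jobs:
    job(name, secs)
serial = time.perf_counter() - start

# Concurrent: independent jobs overlap, so elapsed time is
# roughly that of the longest single job.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    list(pool.map(lambda j: job(*j), independent_jobs))
concurrent = time.perf_counter() - start

print(f"serial: {serial:.2f}s  concurrent: {concurrent:.2f}s")
```

With real workloads the gap widens as the number of independent jobs grows, which is exactly the concurrency a workload automation tool exploits when the dependencies are defined properly.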

 

What A Solution May Look Like

So if this indeed is the current state, what would a solution look like? That was the question that BMC set out to answer about two years ago. The culmination of that effort has just been released and it is called BMC Control-M Workload Change Manager. This new solution strikes a balance: it enables developers to transfer their knowledge of the application to IT without having to become scheduling experts, captures that information to automate construction of workflows, gives IT schedulers the ability to enrich the workflows with their own knowledge and expertise, and provides a collaboration platform that lets all parties exchange questions, comments and requirements in a structured and managed fashion.

 

Experience It

This solution has been necessary and ardently sought for decades but today is becoming indispensable. There are several ways you can learn more about this solution including taking a live test drive. Please post your comments and observations or tell us what you do today in your environment so that this solution can benefit from the collective wisdom of the entire community of workload automation users (and you just might win an iPad mini).

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.

 

With all the technology available today, there is at least one area where weird and unusual processes still rule. For some reason, even though batch workload like payroll, inventory and supply chain management account for about 70% of all business processing done on commercial computers, new methods like DevOps are completely ignoring this critical function.

 

So how do organizations manage changes to their batch workload? Well, we asked a bunch of them and the answers range from the sublime to the ridiculous. Some use the tried and true “pretty please” approach: a developer or someone else who needs some jobs built calls his or her friendly scheduler and says something like “please do me a favor” and add this job or change that. Some have built incredibly elaborate systems of forms that have to be filled out and submitted. Until every last bit of information required by the form is acquired, schedulers cannot do anything. Sometimes the form is filled out correctly on the first try (not very common) and sometimes the request goes back and forth and back and forth until people start getting blue in the face and making some very impolite gestures. Many companies fall somewhere in between and use some kind of request mechanism based on Excel spreadsheets, Word documents or email. One company we spoke with demands a Visio diagram of the desired results, and then schedulers manually cut and paste from the Visio into the forms defining the workflow. One company told us they don’t really have a process: when somebody wants a job, they have to hunt down the people who can provide that service, and each request is a negotiation that makes the Middle East peace process look trivial.

 

Of course, in all cases, the changes that have to get made DO eventually get made, and thus the reference to “it’s only weird if it doesn’t work”. However, the fact that people have to jump through hoops and do unnatural things IS weird, and our definition of “work” needs to be seriously revised. First, the entire process needs to be brought out into the light of day and recognized for what it is: inefficient, problem-ridden, expensive and frustrating. Second, the lack of automation introduces errors that cause production failures, because the actual job definitions that make it into production are frequently built manually from scratch with no standards validation. Third and perhaps most important, the applications that businesses depend on to keep them competitive, gain an advantage or address customer needs are being delayed.

 

You may think, what’s the big deal? Let everyone use the scheduling tools you already have and the problem is solved. Well, if it were that easy, everyone would have done it a long time ago. The problems with that approach are that you can’t tell all your application developers or business analysts to become workload automation experts, the IT Operations and scheduling folks would freak out (rightfully so) at the thought of hordes of users having free and unfettered access to your production environment, and your auditors would probably read you the riot act. Even if you could pull off this minor miracle and get such an action approved, application developers wouldn’t be happy because they already have a day job, and management wouldn’t be happy when production workload failed due to errors introduced by novice or casual users.

 

The solution requires some new capabilities that up until now have not existed. What’s needed is a solution that is simple enough for non-scheduling and/or casual users to use without extensive training. It must also be able to support local and site-specific standards and be adaptable to different levels of users so that requests are as close to fully formed and ready to run as possible. It must be automated, to eliminate the need for manual construction of workloads, yet sufficiently collaborative to provide different levels of engagement between requesters and schedulers. Finally, it must be fully controlled and audited to ensure that the critical production environment is not compromised in any way.

 

With such a solution in place, developers and business analysts/owners will be able to define their requirements for application workflows quickly and efficiently. IT can implement these new business functions smoothly and confidently without impacting the quality of their service delivery. The most efficient use is made of everyone’s time and, most important of all, the business gains the benefits of this accelerated application delivery to provide new products and services that can ward off competitors or expand market share.


So next time you have some workflows to create, you can pull out your lucky socks and hope for the best or come back here in a few weeks to learn about a better way.

      

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.


This week started for me on Saturday evening, when I got on a plane to Berlin, where I arrived Sunday afternoon in preparation for the Berlin BMC Exchange Day starting on Monday.

 

That event ran until Tuesday afternoon. I flew to London for the UK Exchange Day that started Wednesday morning. Wednesday evening I flew to Paris for the Paris Exchange Day and am writing this blog from Les Docks in Paris; the BMC Exchange Day venue.

 

The theme of BMC Exchange Days is delivery of digital services and the impact of four converging technologies that are disrupting almost every business and market today. Social, Mobile, Analytics and Cloud (SMAC) is one of the ways this phenomenon has been described, and it is affecting every aspect of technology and its business application. For us in BMC Control-M, the impact is not always immediately apparent to some of our constituents. It’s easy to think that workload automation is a deeply technical, back-room function that remains the domain of technical power users, removed from smartphones and tablets and “appified” interfaces. But in reality, there may be lots of users outside of IT who have some level of interest in workload but are not, and do not wish to become, scheduling experts. Business analysts, application developers and similar constituents may indeed prefer self-service, social and mobile user interfaces that make running a job or building a job as easy as paying bills or booking travel.

 

This may also be true for users and organizations as they begin to delve into Big Data and Hadoop where users may need the services of workload automation to run analytics but have no desire whatsoever to learn the details of managing workload.

 

The best way to find out what customers are thinking is to ask them using the Plain Old Meeting method, and that’s what BMC Exchange Days are all about. Social and Mobile may be the new communications and technology consumption paradigm of the future, and Analytics and Cloud may indeed drive infrastructure decisions. However, they are not the first technologies to seem destined for greatness, so right they couldn’t fail; our technology history is littered with such “too right or too logical to fail” solutions that either failed outright or never lived up to expectations.

 

So, what’s a vendor to do? And what are its customers to do? The answer is brilliantly simple; they should all talk. And voila, there is the whole idea behind BMC Exchange Days; an opportunity for BMC and its customers to talk. BMC can talk to customers and customers can talk to each other.

 

Of course it is possible to talk via social channels, and we pride ourselves at BMC, and specifically at @BMCControlM, on being socially gregarious. But if you want to ask tough questions and get thoughtful, insightful answers (in other words, to deeply engage with your customers and have them speak candidly to you as a vendor and to their peers), it’s difficult to replace a good old-fashioned face-to-face meeting.

 

I believe that this desire to truly connect with our customers, not only through social channels but also by building relationships at events such as BMC Exchange Days and on site at customer locations, distinguishes BMC from other vendors.

 

So, if there’s a topic you wish to discuss or if you want to learn more about how BMC can help your organization more easily adopt SMAC for your workload automation or any other aspect of managing your IT environment, let us know. Send us an email, tweet us, post on our wall or maybe, just maybe, pick up the Plain Old Telephone System (POTS) and give us a call.

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.


I had the opportunity to be a “fly on the wall” as several IT managers from a large Financial Services organization discussed the challenges of “DevOps” and Agile development.

 

They mentioned the well-known problem of the contradictory goals of pushing out new services versus maintaining stability and mitigating the risk of change, when one of them made a controversial statement: he said a lot of programmers are “ignorant”. This required him to go on and explain that what he meant was that they are ignorant of issues outside of their domain. The claim is that the pressure of Agile development creates more than just the potential for destabilizing the production environment. As technology innovation accelerates, general technical skills are eroding (and are not being replenished at a sufficient rate). Programmers fall back on pulling bits and pieces of code from libraries or co-workers without properly assessing any deficiencies that may exist in such snippets for their specific applications. The result is code being pushed into production that is not production ready.

 


I must confess that I am an “IT” guy and have been for well over three decades, so my perception may be skewed. But imagine an IT infrastructure that I might build for a hypothetical environment. I could build and configure servers, set up a network and install management tools. When the first application that tried to run in this environment needed Java, it would fail, because that wasn’t part of the “hypothetical environment” (I am an old geezer after all). If I were to argue “everything worked just great in testing”, it would be a completely worthless statement. However, I argue that something very similar happens with applications. Programmers write code that executes successfully in test environments where they are either testing with a small subset of data or in an environment unencumbered by resource contention and tons of other applications that all have performance requirements. So code that performs a “commit” after every SQL statement, or instantiates a Java VM or loads hundreds of DLLs when only a single one is needed, repeating these functions over and over, “tests” just fine. Once in production, however, it runs like the proverbial “pig” or may even fail under stress. And once a poorly performing application makes it into production, guess who “owns” it and becomes responsible for making it run?
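The per-statement commit pattern is easy to measure. Here is a sketch using Python's built-in sqlite3 module; any database with durable commits shows the same shape, though the exact numbers differ by system.

```python
import os
import sqlite3
import tempfile
import time

ROWS = [(i, f"item-{i}") for i in range(500)]

def load(per_statement_commit):
    """Insert ROWS into a fresh on-disk database and return elapsed seconds."""
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
    start = time.perf_counter()
    for row in ROWS:
        conn.execute("INSERT INTO items VALUES (?, ?)", row)
        if per_statement_commit:
            conn.commit()   # the anti-pattern: a disk sync per statement
    conn.commit()           # otherwise one commit covers the whole batch
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

print(f"per-statement: {load(True):.3f}s  batched: {load(False):.3f}s")
```

On a small test dataset both variants feel instant, which is exactly why the anti-pattern survives testing; at production volumes the per-statement version pays for a durable write on every row.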

 

So I believe it’s not enough to just improve the mechanics of getting applications into production. As important as that is, it is just as important to ensure that applications are designed, written and tested to run successfully in a production environment. This must also be included in the DevOps discussion because it is the absence of this discipline that significantly contributes to the reticence of the “Ops” guys to accept “Dev” applications into production in the first place. I am calling this additional wrinkle Dev4Ops.

 

Without this discussion, DevOps runs the risk of becoming an exercise of accelerating the injection of potential Garbage IN to the production environment with the predictably deplorable results.

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.

 

The human creative spirit takes flight on a regular basis. We are lucky to be living in an age of seemingly constant, breath-taking innovation. Just when you’d think “It just can’t get better than this”, something better comes along.

 

So it’s a real mystery why management of new IT technology seems to be stuck in an all-too-familiar rut. Some wonderful new thing comes along, savior status is assigned and expectations are immediately raised to ridiculous levels. Organizations re-direct their best and brightest staff and their financial resources to adopt or implement whatever that new technology is. They may even build some prototype or get through a successful POC or even go beyond that and deliver some really useful business service.

 

And all the while never stopping to think for very long how they will actually sustain the ongoing operation of this magical new stuff.

 

You’d think we would have learned: didn’t we go through client/server, distributed computing, internet/intranet, web servers, cloud, BYOD and countless other trends that exploded onto the scene and then took forever to actually gain traction and deliver business value because no one thought about how to sustain them?

 

I guess “Those who don't know history are destined to repeat it”, and so we do, we do and we do....

 

So now we have the Big Data trend and its most popular technology Hadoop.

 

Everybody either already has it, is implementing it or wants it. When asked about managing it, purists wax poetic about the Open Source culture and ecosystem while simultaneously stating preferences for “packaged” distributions because they’re more stable, more tested and may have some “value-added capabilities”. Is the time spent scripting and adapting and integrating all free? And once their science projects and POCs are ready for business “prime time”, will the time and effort and complexity that the Infrastructure and Operations team have to invest to get this stuff co-existing with and living in an enterprise IT environment also be free? And how will these shiny new projects integrate with “traditional” mainframe, Unix, Linux, ERP and other systems? And what will happen when regulatory and compliance requirements are applied and someone raises dirty words like incident and change management and auditing? Or if someone asks about SLAs or forecasting or dozens of other capabilities that are taken for granted for the “traditional” applications that have been running in production forever?

 


The good news is that it's never too late to start doing things right. You can build Hadoop applications with their ultimate operational goal in mind by using an enterprise grade workload automation solution. BMC Control-M provides deep integration with Hadoop so you can use it out of the box with no scripting. This will save you lots of time and money. Developers can focus on building the very best Big Data applications and getting them into production quickly with the confidence they will exceed the most stringent operational requirements.

 

Your business will get the deep insights and operational efficiencies that are the promise of Big Data and your IT staff will be able to meet the highest levels of service quality most efficiently.

 

And, you will all be able to brag about your extensive knowledge of history!


 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Lead Technical Marketing Consultant, Control-M Solutions Marketing, BMC Software Inc.

 

I continue to be asked to explain the difference(s) between Workload Automation and other automation disciplines specifically IT Process Automation so I’ve decided to write this post. I will begin with semantics.

 

Workload Automation used to be called Job Scheduling or Batch Processing. Gartner Inc. describes this evolution as the need to  “manage mixed workloads across a heterogeneous computing landscape ... not only capable of end-to-end automation with minimal human intervention, but this resource-aware automation and workload management is driven by business policies".

 

IT Process Automation continues to go by several other names, including Run Book Automation and Orchestration.

All of these terms have great significance and reflect both the differences as well as the struggle for clarity that is the reason for this post. If we carefully examine the terms, we can gain useful insight into these disciplines.

 

Workload Automation means automating the workload. Well, duh! But bear with me for a minute. We have workload such as payroll, inventory, invoicing, etc., and this discipline enables us to automate and manage that workload, which is today, and has been forever, one of the linchpins of commercial computing. The older, retired term of job scheduling has this same connotation: we have jobs already, and this discipline does the scheduling or sequencing of those jobs in some (hopefully) optimal fashion. In Workload Automation, we do not create the workload as part of the process of managing it. Rather, the work already exists and we have to make sure it runs as quickly, efficiently and error-free as possible.

 

IT Process Automation implies we have processes, things that people do, and we need to automate them. Because these processes are frequently done interactively or manually, Process Automation focuses on and provides facilities for translating manual, human actions into something that computers can do. This is also implied by the term Run Book Automation – taking instructions usually intended for IT Operations staff and documented in a run book – and automating those actions.

 

Hopefully, you’re thinking, “That’s pretty simple. Why the confusion?” Partly, I think it’s due to the workflow thing. Because the resulting automation in both cases is a series of steps collected into a flow, one can be misled into thinking these disciplines are the same. Another source of confusion results from the “processes” or “workload” being automated, which can include the execution of programs, binaries or executables.

 

Is there an overlap? Sure. But there’s a reason why the names are different and this gets me into the Communism angle.

 

“All for one and one for all” worked for the Three Musketeers, but I don’t think it works for Automation. Just because a management discipline has “automation” in its name, that doesn’t mean it’s equal, equivalent or even similar to any other automation discipline.

 

When the industry coins a term, it is usually because there are specific domain requirements, processes, skills and conventions that prevail in that space. It is not at all common to have the same group of people be expert in these various domains. That is also called “Domain Specificity” (try saying THAT ten times fast). For example, a workload automation solution has a specific way to express “Run this job on the third Tuesday of a business accounting calendar (sometimes known as a 4-4-5 calendar) but only if it is not a holiday. If it IS a holiday, run it on the next business day”. Using BMC Software’s workload automation solution BMC Control-M as an example, you define a periodic calendar using a wizard-like graphical Calendar definition form once. You can then use this calendar for many jobs and for many years. In your job, you then select that Calendar, specify a “Holiday” confirmation calendar (this too is a once-yearly, or less frequent if you know your company’s holidays in advance, activity where you define holidays), select the third week of the month and select Tuesday. You also select the “shift” option. Anyone familiar with Control-M can do that in a few minutes. You can then select the “forecast” option to verify that you got it right. Try to replicate that in an orchestration product. Control-M has domain-specific functions for business scheduling, the orchestration solution does not.
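As a rough illustration of why such rules are domain-specific, here is the “third Tuesday, shifted off holidays” rule sketched in plain Python. It simplifies to ordinary calendar months (a real 4-4-5 accounting calendar has its own period boundaries) and uses an invented holiday set; a workload automation tool expresses all of this declaratively instead.

```python
import datetime as dt

# Invented holiday calendar; a real scheduler loads this
# from a maintained calendar definition.
HOLIDAYS = {dt.date(2014, 7, 15)}

def third_tuesday(year, month):
    d = dt.date(year, month, 1)
    d += dt.timedelta(days=(1 - d.weekday()) % 7)  # advance to the first Tuesday
    return d + dt.timedelta(weeks=2)               # then two more weeks

def next_run(year, month):
    """Third Tuesday of the month; if it is a holiday, shift to the next business day."""
    d = third_tuesday(year, month)
    while d in HOLIDAYS or d.weekday() >= 5:       # skip holidays and weekends
        d += dt.timedelta(days=1)
    return d

print(next_run(2014, 6))  # 2014-06-17: a plain third Tuesday
print(next_run(2014, 7))  # 2014-07-16: shifted off the July 15 holiday
```

Even this toy version needs careful edge-case reasoning; multiply that by real accounting periods, multiple calendars and forecasting, and the case for domain-specific tooling makes itself.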

 

Similarly, IT Process Automation solutions, such as BMC Atrium Orchestrator, have adapters that enable you to mimic a human. Let’s say an application on a remote server seems to have failed. A human operator will attempt to ping the machine. If that’s successful, he or she will attempt to log in using an SSH client like PuTTY. If that fails, the person opens an incident. The attempt is retried five minutes later and now the SSH connection is established, but it turns out there are component failures causing the lack of response. The operator collects some logs and updates the incident, and so on. This type of iterative, multi-tool interaction, requiring the ability to work with tools that are dedicated to or primarily used within IT, is best done by “IT Process” automation.
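That runbook translates almost line for line into code. The sketch below mimics the operator's sequence (ping, try the SSH port, wait, retry, escalate); the Linux-style ping flags and the string returned for the incident hand-off are assumptions for illustration, not any product's API.

```python
import socket
import subprocess
import time

def host_responds(host):
    """Rough liveness check; assumes a Linux-style `ping` on PATH."""
    return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

def ssh_reachable(host, port=22, timeout=3):
    """Can we open a TCP connection to the SSH port at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_remote(host, retries=2, wait=300):
    """Mimic the operator's runbook: check, wait five minutes, re-check, escalate."""
    for attempt in range(retries):
        if host_responds(host) and ssh_reachable(host):
            return "healthy"
        if attempt < retries - 1:
            time.sleep(wait)       # the operator's five-minute pause
    return "open-incident"         # hand off to incident management
```

An orchestration product packages exactly these ping, SSH and incident-management steps as pre-built adapters, so the flow is assembled rather than hand-coded.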

 

That is why there are different job titles, different salaries, different tools and different approaches for different disciplines. Assuming there is a substantial pool of generic “Automation Experts” that can be equally adept at managing Server Configuration, Provisioning, Service Desk, Workload Automation, Process Automation, Application Release Automation, Capacity Management, etc. is, in my opinion, unrealistic. So is assuming the very same tool can do all these jobs. It would be nice; and the idea is hardly new. Anybody remember the most infamous “F” word in the IT business - Frameworks? How did THAT work for you? Can you imagine a single “device” capable of moving people, moving dirt and paving roads just because such contraptions have four wheels and can move forward or backwards?

 

OK, so let’s assume for a moment that I convinced you there may be good reason for specialization. Does that mean there’s no relationship between these disciplines? Of course there is. But I’ll save that for another post.

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Lead Technical Marketing Consultant, Control-M Solutions Marketing, BMC Software Inc.

 

Am I the only one bothered by the frequent surprise in the media that Workload Automation is NOT dead, is one of the foundation disciplines in the data center AND stems from “batch processing” waayyyy back in “the dawn of computing” (or some such similar statement)?

 

Well, D’uh!?!

 

Do you think virtualization popped up one day fully formed? Do you think that interactive/real-time computing, the internet, world wide web, mobile computing, Cloud Computing and even Big Data were immaculately conceived by some geek in a sterile room just last year? 

Surprise! ALL of these technologies have their roots in the dawn of computing! In fact, one can argue that is the precise definition of “the dawn of computing”.

 

Virtualization was introduced in 1972 with the release of VM/370. Interactive computing has been around since the PDP machines, with transaction monitors like CICS in 1968 and the “Time Sharing Option” (TSO) in 1971. The World Wide Web was conceived in 1989 and Cloud Computing is arguably built on top of all of the above technologies.

 

This stroll through the history of computing shows that technology evolves, usually slowly, and over a long period of time. We sometimes take it for granted that it’s easy to make software easy to use, powerful, stable AND do exactly what we want. History shows us just how hard and how long it takes to achieve these characteristics (if we ever manage to get there at all).

 

As we come to the end of another year, we see the typical predictions for the future and statements about what will drive our industry. I feel confident that whatever those technologies or business needs turn out to be, they will be trends that were developing over some period of time and they will be built on concepts and work stemming from “the dawn of computing” because after all just as it says in Ecclesiastes 1:9 “What has been will be again, what has been done will be done again; there is nothing new under the sun”.

 

I am happy to be working for a company that I believe continuously improves the solutions it delivers; that believes creative solutions delivered today have to be developed, enhanced and nurtured over a long period of time; and, most importantly, that treats this process as an evolutionary one requiring dedication and commitment over the long term rather than expecting revolutionary technology to spring up fully formed and enterprise grade as if by magic.

 

It is my hope and expectation for the coming year that we continue to evolve our solutions and build on the foundations that have been established in the past because no matter how big we are, we will achieve even more by “standing on the shoulders” of those that came before us.

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Solutions Marketing Consultant, BMC Software Inc.

 

Recently, I was discussing with a colleague why many mainframers are so attached to their 3270 “green screen” terminals.

 

Some pundits in the IT world lob derogatory names at these mainframe users and call them “mainframe bigots” or “dinosaurs".

 

Well, in the interest of full disclosure, I started on and spent many years on the mainframe playing with DOS, VM and MVS – right through to z/OS.  And I stand with all those dinosaurs and declare that there’s a whole lot of good that can be done on a 3270. At this point in my career, I like to describe myself as “platform neutral” and feel I have broadened my horizons a bit. But there’s still no text editor like ISPF and the mainframe platform still handles way too much of our planet’s transactions and data for any serious IT professional to dismiss its benefits.

 

But, I do have to admit that graphical interfaces can be stunning, informative, intuitive and simply way cool!

 

So how do I reconcile the worlds of 3270 and GUI? Simple!

 

The 3270 is THE user interface for the mainframe. If you live and breathe mainframe, there is little if any logic that should cause you to abandon 3270. The graphical users of the world can scoff, but riddle me this: if I gave a Windows or UNIX admin only a web browser, and did not allow that admin to open Windows Explorer, a Command Prompt, a terminal window, an xterm, or the like, what do you think THEY would say?

 

Before you gather the mobs, light up the torches and sharpen your pitchforks, let me say that I am not encouraging anyone to abandon GUIs or to start using 3270 displays.

 

Instead, I’d like to submit for your consideration the following position: There are two (at least) categories of tools/solutions in the IT world. The first is platform-specific; the second is multi-platform or enterprise.

 

Platform-specific solutions are intended primarily for, and run on, a specific platform. In the case of z/OS, or the mainframe in general, such tools may include SDSF, ISPF or SMP/E. You can argue that a graphical interface for such tools could be useful for users who are unfamiliar with the mainframe and 3270, but do you really think someone unfamiliar with z/OS should be “driving” SMP/E?

 

Enterprise solutions, however, are by definition not specific to any one platform or technology. Such solutions should, and I would argue must, be independent of platform-specific technologies and interfaces. I believe such independence and insulation from platform specifics is actually a requirement for any solution that wants to credibly call itself an enterprise solution.

 

There may be some difference of opinion as to what technology is best suited for platform-independent user interfaces and what attributes such UIs should possess. However, these are details that will be debated forever as technology evolves. There was a time when Windows was deemed the undisputed platform for graphical user interfaces. Today some argue it is the web browser, and others insist we have already entered the age of the tablet or the smartphone. I believe there are still good arguments to be made for all of these, with some better suited to complex solutions intended for power users while others are the better choice for casual users. Eventually we may settle on just one, but I suspect our indecision will continue, with the likely addition of new interfaces we don’t yet imagine. Regardless, these considerations are for applications and tools that are not bound to specific platforms that already have their own “standards”.

 

So next time you are tempted to denigrate the 3270 user, think about your own work habits and the tools you use. You may find you too have a “vi”, Notepad, Windows Explorer, xterm or similar interface lurking in your closet.

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software


-by Joe Goldberg, Lead Technical Marketing Consultant, Control-M Solutions Marketing, BMC Software Inc.

 

Warren Buffett said, “It's far better to buy a wonderful company at a fair price than a fair company at a wonderful price.”

 

 

The same can be said for enterprise workload automation software (and probably most things).

 

When it comes to workload automation, it is surprising to discover that most companies don’t have a clue how much they spend on this critical IT management function.

 

In some cases, the ignorance begins with the purchase price itself, which can happen when tools arrive bundled with server or networking hardware or as part of some other “bundle”. However, even when it is clear how much was spent to purchase the software, the far greater cost of actually managing workload automation is frequently completely invisible.

 

This is somewhat mystifying in the day of chargeback/showback and IT financial transparency initiatives. However, as hard as it may be to believe, it is nevertheless true.

 

Take any group of workload automation users and ask any of the following questions:

  • Do you have a single workload automation standard or does your organization use several tools?
  • Does anyone have a single automation solution that is deployed on every single server/computing “endpoint”?
  • Of the organizations that use commercial applications (SAP, Oracle E-Business Suite, PeopleSoft, Business Objects, Informatica, VMware, SQL Server, Oracle Database, IBM UDB… any of hundreds), how many use a single workload automation solution to manage the scheduling of those applications together with their in-house business applications?
  • Of the organizations that use (File Transfer, Web Services, messaging … other technologies prevalent in the workload automation environment) how many use a single workload automation solution to manage the scheduling of these technologies together with their other business applications?

You probably get the gist of where this is going.

 

The typical organization uses several tools, many “driven” by business analysts and application administrators who allegedly have a day job that is NOT scheduling.

 

For each such situation, the organization is paying high-value staff to perform work that is not their primary responsibility.

 

Frequently this causes delays to projects and tasks that ARE their primary responsibility. Lack of visibility and knowledge in the IT organization hampers centralized services and results in poor or non-existent logging and auditing, failed change management processes, SLA breaches and degraded service.

 

The above sad state of affairs can frequently be traced to having workload automation solutions that are “fair”.

 

 

They may have been good once. They may still be OK, but you have several. Or you may have gotten them at a “wonderful price”.

 

However you got there, your costs are much higher than you think, and they will only continue to increase unless you move to a true enterprise workload automation solution: one that delivers a single point of control for the broad set of applications and technologies you use today, provides comprehensive auditing and compliance out of the box, delivers sophisticated service level management, empowers business users with intuitive self-service and mobility facilities, and is supported by a vendor that has demonstrated, through action rather than marketing hype, a deep commitment to workload automation.

 

 

If you are looking for a suggestion, I’d be happy to oblige.

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software
