During 2013, the IT industry acknowledged four converging technologies that are disrupting almost every business and market today. Social, Mobile, Analytics and Cloud (SMAC) are also referred to by Gartner as the “Nexus of Forces”. When we talk about SMAC at our BMC Exchange events, we often break it down into two “Mega Trends”:

  • Consumerization: Mainly mobile, social and the general concept of keeping user interfaces as simple to use as possible.
  • Industrialization: The transformation of the IT infrastructure to become more efficient and flexible by adopting cloud, big data, streamlined enterprise apps and embracing automation.

The 2014 Workload Automation trends are perfectly aligned with the general IT mega trends and are repeatedly expressed by customers in almost every meeting we have.

 

Deliver Digital Services – Faster

Digital services are what we consume today – and consequently, what businesses deliver. And with the “anytime, anywhere” expectations that users have today, the speed at which applications need to be delivered has increased.

By considering batch job and workflow needs in the early design and development phases of projects, you can reduce the time it takes to implement applications in production. Instead of scripting homegrown wrappers that will later be automated by generic scheduling tools, use a Workload Automation solution that provides native abilities to interact with the application, monitor its SLA, analyze workload completion status, take automatic corrective actions in case of a failure and alert on problems. We recently heard from a new BMC customer that, using Control-M for Hadoop, they were able to decrease their Hadoop implementation project time by 46% – about 6 months of development effort! And Hadoop is not a unique example. The same goes for SAP, file transfers, database queries or any other business application.
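As a purely illustrative sketch, this is the kind of declarative job definition such a solution lets you manage natively instead of a homegrown wrapper script. The field names below are hypothetical (written here as a Python dict) and are not actual Control-M syntax:

# Hypothetical, illustrative job definition - field names are made up,
# not actual Control-M syntax. The point: scheduling, SLA monitoring and
# recovery are declared once and handled by the tool, not scripted by hand.
hadoop_job = {
    "name": "daily-clickstream-aggregation",
    "type": "hadoop",                        # native application type, no wrapper
    "command": "hadoop jar /opt/jobs/aggregate.jar",
    "run_as": "etl_user",
    "schedule": {"days": "weekdays", "not_before": "02:00"},
    "sla": {"must_complete_by": "06:00"},    # monitored by the tool itself
    "on_failure": [
        {"action": "rerun", "max_attempts": 2},
        {"action": "notify", "email": "prod-control@example.com"},
    ],
}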

 

Collaboration & Self Service

Recently I had the pleasure of writing a text message on an old mobile phone keypad, and while I was struggling with switching between digits and letters, case-sensitivity modes and languages, I suddenly realized how hard it is to switch back from my iPhone touch keyboard to what used to be the industry standard up until about 6 years ago. The adoption of smartphones and tablets, and the introduction of wearable devices such as Google Glass, are all catalysts for increased user demand for easier, simpler and more accessible technology that behaves the way the average human expects.

In 2014, Workload Automation solutions will adopt this approach, providing client software and mobile apps with simple interfaces that are designed to meet the needs of the specific user persona they are targeted for - operators, schedulers, application developers or business users.

Self Service interfaces which allow application developers to submit workload change requests without opening helpdesk tickets, submitting manual forms or becoming Workload Automation experts will appear in 2014. This will help developers focus on building better applications that are aligned with their business demands, instead of developing scripts that automate workloads within these applications. Collaboration between application developers and production control should transform into a common practice in 2014, the same way the iPhone changed the mobile industry in 2007.

 

Expanding automation scope for what you know and what you don’t know

In 2014, organizations will increase Workload Automation’s span of control to cover more business applications than before. Simple-to-use, automated tools will allow the consolidation of workloads from multiple schedulers and business applications into a single solution providing a focal point of control. The production control team will no longer be required to use different tools and interfaces in order to manage batch activity. The use of automatic workload discovery tools will grow in 2014, allowing organizations to identify critical batch elements which they have not been aware of.

 

 

 

In 2014 Workload Automation will evolve from the IT “basement” to the desktops and smartphones of application developers and business users. The increased ROI and reduced time to value will become visible to all stakeholders. Self service, collaboration, workload conversion and workload discovery will become industry standards. It is going to be an exciting year!


…Plans that either come to naught,

or half a page of scribbled lines…

 

The words of Pink Floyd’s “Time” are ageless, and it’s likely you can relate to them (or at least to specific lines) no matter how old you are or what stage of life you are in. Challenges we faced ten years ago might be behind us, but we still run and run to catch up with the sun, trying to meet project deadlines and to get that new application or business service up and running in production – hoping that by the time the sun comes up behind us again we will have that magic spell (competitive edge) that will help us win our customers’ affection and budgets.

 

In an effort to speed up development plans, we sometimes take shortcuts, forget to consider, or are simply unaware of important factors that are prerequisites for any new application promoted to production. These factors are the ones that allow the Production Control team to effectively monitor application activity by automating common scenarios and reducing manual effort.

 

Kicking around on a piece of ground in your home town,

Waiting for someone or something to show you the way…

 

 

By using a Workload Automation tool that provides out-of-the-box support for the requirements described below, you can eliminate the need to develop them in-house and instead focus on building the best application for your business.

Here are some examples of services you should expect your Workload Automation solution to provide. By using these you can eliminate a substantial amount of development that is often performed by expensive resources, cut costs, reduce the number of hand-off cycles to Production Control, and eventually – dramatically decrease the time it takes to promote a new application to production, without compromising your site’s IT standards:

  • Dependencies between workloads: This should be done in the easiest way possible, preferably with a drag-and-drop graphical interface, and should allow creating dependencies between workloads of new applications, and workloads of various other business applications that are already part of your IT infrastructure (ERPs, business intelligence and data integration tools, database queries, system backups, file transfers, etc.)
  • Analysis of task outputs: Automatic detection of exit codes and common application error messages. Analysis of workload outcomes to determine whether the workloads completed successfully or automatic recovery actions need to be taken.
  • Perform manual corrective actions: Modify workloads to correct errors, rerun workloads that failed, cancel workloads while they are running, etc.
  • Proactive notification of potential missed SLAs: Identify whether a workload is running for too long or completed much sooner than expected, based on historical statistics (a minimal sketch of such a check follows this list). Understand the impact of a problematic workload on the overall batch business service.
  • Alerting Capabilities: In case of workload failures or missed SLAs, send emails to the relevant recipients, open helpdesk tickets, and escalate alerts to monitoring frameworks or service impact managers via SNMP or native APIs.
  • Forecasting: Allows you to plan future changes, identify optimal maintenance windows and assess changes to capacity volumes without affecting production activity.
  • Balancing resources: Dynamically adjust the number of workloads that can run simultaneously – in general or for a specific application. Ensure application servers are not under- or over-utilized, and redirect workloads to alternate application servers in case of an outage.
  • Applying site-standard policies: Ensure the use of proper naming conventions that allow Production Control to easily categorize workloads and identify their owners.
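To make the proactive SLA bullet concrete, here is a minimal sketch of the kind of statistical check such a solution can run against historical runtimes. The two-standard-deviations threshold and the logic are my own illustrative assumptions, not any vendor’s actual algorithm:

from statistics import mean, stdev

def runtime_anomaly(history_minutes, current_minutes, threshold=2.0):
    """Flag a run whose duration deviates from historical runs by more than
    `threshold` standard deviations, in either direction (too slow may miss
    the SLA; suspiciously fast may mean the job did nothing)."""
    mu = mean(history_minutes)
    sigma = stdev(history_minutes)
    if sigma == 0:
        return current_minutes != mu
    return abs(current_minutes - mu) / sigma > threshold

# Past runs took ~30 minutes; today's run is at 55 minutes and counting.
history = [29, 31, 30, 32, 28, 30]
print(runtime_anomaly(history, 55))  # True -> raise a proactive alert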

 

A Workload Automation tool that can provide the above capabilities and facilitate effective communication and collaboration between Application Developers and Production Control can help elevate the relationship between these two teams for the long run, allowing each team to focus on what they do best.

 

A Self-Service offering that allows Application Developers to submit requests for new workloads, request changes to existing workloads and monitor active workloads would be the next stage in such collaboration. I promise to elaborate on this topic in another blog post. But now the time is gone, the song is over – thought I’d leave you with one of David Gilmour’s best guitar solos ever. Enjoy!

 

 

 

 


 

To learn more about the BMC Workload Automation offering: http://www.bmc.com/solutions/workload-automation/workload-automation.html#.Uq8KKJHmzuo


You have a new script developed and ready to be automated as part of a bigger business flow. What are the next steps?

  • Create a new job definition and point it to the script location. Use your company’s naming conventions for the job, application and other categorization attributes.
  • Specify the OS account that will be used to execute the script on the target server.
  • Define the scheduling criteria (the days on which the job should run) and the submission criteria (the time window and prerequisites that need to be fulfilled before the job can run).
  • Define the dependencies between the job and other processes in the business flow.
  • Configure post-processing actions to recover from the various scenarios in which the script can fail, and the notifications that should be issued in such cases. This may also include opening helpdesk tickets to audit the failure or to get the relevant support team involved.
  • Add job documentation to have all related information available for Production Control.

Deadlines and SLAs can be defined at the individual job level, but in most cases a much better approach is to define them at the batch service level.
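To illustrate the distinction between scheduling criteria (which days a job is a candidate to run) and submission criteria (what must be true before a candidate job is actually released), here is a small, purely hypothetical Python sketch; the field names are mine, not any product’s actual syntax:

from datetime import datetime, time

# Hypothetical job definition separating scheduling from submission criteria.
job = {
    "name": "nightly-invoice-load",
    "scheduling": {"days": {"Mon", "Tue", "Wed", "Thu", "Fri"}},
    "submission": {
        "window": (time(1, 0), time(5, 0)),       # may only start in this window
        "prerequisites": {"file-transfer-done"},  # events that must exist first
    },
}

def may_submit(job, now, raised_events):
    """True only if today is a scheduled day, we are inside the time window,
    and every prerequisite event has been raised."""
    in_day = now.strftime("%a") in job["scheduling"]["days"]
    start, end = job["submission"]["window"]
    in_window = start <= now.time() <= end
    prereqs_met = job["submission"]["prerequisites"] <= raised_events
    return in_day and in_window and prereqs_met

# Tuesday 02:30 with the file-transfer event raised -> True
print(may_submit(job, datetime(2014, 1, 14, 2, 30), {"file-transfer-done"}))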

 

Is that it?

You've ticked all the above check-boxes. You've run a forecast simulation to verify the scheduling definitions. You think you are ready to promote this change to production.

Before you do that, think about the following scenario: what will happen if someone modifies the script on the target server? Will that change be audited? If the next run of the job fails because of the script modification, will Production Control be able to associate the change (assuming they are aware of it) with the failure? What tools does Production Control have to recover from the error?

 

Without getting into complicated or expensive change management practices, there is one very simple thing you can do:

Use Embedded Scripts.

 

When embedding scripts within the job definition you can immediately get the following benefits:

  1. Any change to the script is audited and can be identified by Production Control as a potential cause of the job failure.
  2. Changes can be rolled back by Production Control without the need to access the target server and manually modify the script. In some cases Production Control is not even authorized to remotely access application servers. And if the target server is a mainframe, iSeries, Tandem, Unisys or OpenVMS machine, for example, we are talking about a whole different skill set – one that is not required when using embedded scripts.
  3. You can roll back all changes made up to a certain point in time, including both the job attributes and the embedded script. Deleted jobs will be restored. Modified jobs will be rolled back. New jobs that were created after that point in time will be deleted.
  4. You can compare job versions and see whether the script was modified and, if so, what the change was (see the diff sketch after this list).
  5. The embedded script can be more than just a batch or shell script. It can be a PowerShell script, a Perl script or a script in another language. You can also embed SQL statements if you run database jobs.
  6. You can run a single copy of an embedded script on multiple target servers. This way if changes are required you can modify the script only once. Add agentless scheduling to the equation and you have a real “zero footprint” environment.
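Here is the diff sketch referenced in point 4. The job data is made up and this is not a real product API; it simply shows how naturally “what changed in the script?” falls out once the script travels with the audited job definition:

import difflib

# Two audited versions of the same job, each carrying its embedded script.
v1 = {"job": "daily-extract", "script": "#!/bin/sh\nsqlplus @extract.sql\n"}
v2 = {"job": "daily-extract", "script": "#!/bin/sh\nsqlplus @extract_v2.sql\n"}

# Production Control can diff the embedded scripts without ever logging in
# to the target server.
diff = difflib.unified_diff(
    v1["script"].splitlines(keepends=True),
    v2["script"].splitlines(keepends=True),
    fromfile="daily-extract (version 1)",
    tofile="daily-extract (version 2)",
)
print("".join(diff))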

 

Including scripts as embedded parts of job definitions can be a very efficient practice for both Application Developers and Production Control. It will reduce the time it takes to recover from errors and increase your ability to meet audit requirements.

 

Do you use embedded scripts? If so – share the details with us: what type of scripts do you run, how does it work for you, what challenges do embedded scripts help you address and what challenges still remain…

 


Note: the photos in this post are my idea of an “all inclusive deluxe” vacation. It’s the most beautiful place in the world, and the lowest point on earth. This is where I live!

Can You Ride an Elephant?

Posted by Tom Geva Oct 24, 2013

Big Data, Big Data, Big Data. Is that just another passing buzzword or is it here to stay? What does it really mean? Is my data big enough to be managed by Big Data technologies?

 

Those are questions that many of us ask these days, and there’s a good reason for that. Just like in the early days of cloud computing, when each of us had a different idea of what it actually was, Big Data technologies are commonly mistaken as relevant only to members of the petabyte club. But the truth is that there are benefits to using Hadoop, for example, even at lower data scales. Some of these benefits are also discussed in an interview with Mike Olson, Cloudera’s CEO.

 

  • Affordable infrastructure: You do not need to purchase expensive hardware or a high-end storage infrastructure for Hadoop. The idea is to use commodity hardware with locally attached storage. The hardware and storage can be physical or virtual, on premises or hosted on a public cloud such as Amazon Elastic Compute Cloud (EC2), and you can dynamically add or remove resources to scale with changing processing levels.

 

  • Rapid data processing: Many of us start our journey with single-node Hadoop clusters in test environments, but Hadoop is designed to run on multi-node clusters, which allow parallel and balanced data processing. Today there are already companies, such as the music-streaming service Spotify, that manage Hadoop environments with hundreds of nodes, each capable of independently processing the pieces of data it holds much faster than any single server could, regardless of how many CPUs it has (a minimal example of this programming model follows this list).

 

  • Manage unstructured data: Unlike traditional relational databases that rely on predefined data schemas, the Hadoop Distributed File System (HDFS) allows you to store data of any format in an unstructured manner. This data can be videos, photos, music, streams of social media content or anything else. That doesn’t mean you don’t need to plan ahead and figure out which business questions you want to answer, but it definitely means that when new questions arise, it will be much easier to make the adjustments that allow you to answer them.

 

  • Redundancy & High Availability: Hadoop distributes each piece of data to multiple nodes in the cluster (the number of copies is configurable), so if one of the nodes fails (which is more likely to happen due to the use of commodity hardware), you will not lose any data. This eliminates the need for expensive RAID devices or commercial cluster software.

 

  • Mainframe cost reduction: Most companies that manage large data repositories on mainframes seek ways to reduce CPU peaks in order to cut software license costs. This is often quite a challenge, especially at the end of a quarter or year, or during holiday shopping seasons. By shifting some of that processing activity to Hadoop you can reduce your costs and sometimes even get the processing done faster.
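As mentioned under “Rapid data processing”, here is the classic minimal example of the model: a Hadoop Streaming word count. Every node runs these same two small scripts on its local blocks of data, which is exactly what makes the parallel processing possible (paths and the streaming jar location vary by distribution):

#!/usr/bin/env python
# mapper.py - emits "word<TAB>1" for every word it reads on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word, 1))

#!/usr/bin/env python
# reducer.py - sums the counts per word. Hadoop delivers mapper output
# sorted by key, so all occurrences of a word arrive adjacent to each other.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print("%s\t%d" % (current_word, count))

A job like this is typically submitted along the lines of (jar path varies per distribution): hadoop jar /path/to/hadoop-streaming.jar -input /data/in -output /data/out -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py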

 

It will take some time until the Internet of Things hits us all and we are flooded by an unmanageable amount of data that leaves us no choice but to use Big Data technologies. But there is no reason to wait until then when we can adopt these technologies now for the benefits they provide and the value they add, addressing challenges that are not necessarily volume-dependent.

 

 



In July 2012 I participated in a BMC Big Data Summit that took place at BMC headquarters in Houston, Texas. I met architects from many of the BMC product lines and we discussed Hadoop and Big Data opportunities along with customer use cases, challenges and needs (I’ve described some of the Workload Automation use cases before in an article published in Enterprise Systems Media magazine). This is the “behind the scenes” story of the team that made Control-M for Hadoop a reality.

 

Amit Cohen: Product Management

As soon as we had the green light for the project, R&D started the technical research and we identified together the potential content for the first release. With the Big Data market and the Hadoop ecosystem being so dynamic, we wanted to ensure that what we developed actually addressed customer needs. We validated both the potential content and the use cases with customers in North America, EMEA and AP, and adjusted our plans according to their input. The support for HDFS file watching for example was added to the content based on such feedback.

 

We learned what type of challenges companies experienced when using Hadoop specific schedulers (such as Oozie) and realized that we can deliver immediate value by offering the ability to manage Hadoop batch jobs with the same power and ease of the “traditional” enterprise processing. Control-M for Hadoop allows application developers to focus on developing Hadoop programs rather than wasting time with writing and debugging wrapper scripts that schedule those programs.

 

The main challenge customers shared with us was integrating Hadoop jobs with data integration activities, analytics tasks and file transfers – exactly the types of integration Control-M was designed for. Proactive notification of missed SLAs and a self-service offering for application developers were requested as well. Having Control-M easily integrate mainframe and distributed tasks was also a key factor for those customers who shifted data processing from DB2 to Hadoop in order to reduce mainframe processing costs.

 

Avner Waldman & Shaul Segal: Research & Development

We have always been passionate about learning new technologies, but the team’s excitement rose to a new level after the discussions we had with our customers and the understanding of the value we would be providing them. We learned that we had customers who had been running Hadoop jobs with Control-M for years using homegrown wrapper scripts, but who were looking for a tighter, more “native” integration that would reduce effort and risk. We helped them eliminate these scripts by replacing them with jobs defined through a simple and powerful graphical user interface.

 

We started the research on a couple of single-node Hadoop clusters, but this wasn’t enough for us. We knew that our customers’ Hadoop environments were more complex, and that Control-M for Hadoop must support those environments. We tested various Hadoop distributions in multi-node cluster configurations. Our virtualized infrastructure allowed us to provision Linux instances quickly, and the feedback from our customers helped us configure the Hadoop clusters to be as similar as possible to their environments.

 

Gad Ron: Architect

Being involved with the first BMC Big Data initiative was the thing that got me excited the most. Other product lines are now following us, developing additional offerings around Big Data, but we got to be the leading team. We’ve been “playing” with Hadoop for a couple of years now and finally reached the point where market demand for enterprise support justified the development costs. Now that the first release of Control-M for Hadoop is available and customers are adopting it, we have a larger community to get feedback from, and we are already looking into additional use cases for the next release.

 

I love the idea of a new and innovative technology that is mostly batch oriented. Over the years we’ve heard people say that IT is turning to a completely online-driven approach, but the truth is the exact opposite. It’s like saying that the mainframe is dying…

 

Analyst predictions of Big Data market growth encourage us to invest in additional research into Big Data technologies. NoSQL databases and in-memory databases (such as SAP HANA) are now on the table as well, next to the social, mobile and cloud initiatives. We are also witnessing a trend of siloed, exploratory Big Data projects turning into enterprise-wide Big Data initiatives. Our customers are looking for tools to support such a shift.

 

 

Oranit Atias: Project Management

Communication is always a key factor in these types of projects. We all learned the new technology together, shared customer input and worked collectively to ensure we met our project deadlines and quality standards. In fact, we completed the project ahead of time. The way all participating teams, including documentation and support, stayed aligned with the dynamic nature of the project was all I could ask for. When I saw the Times Square ad, and Control-M for Hadoop on the www.bmc.com front page, I couldn’t have been more proud, and I felt privileged to be part of the team.

 

 

Abi Yavo & Avi Biton: Quality Assurance

The feedback from the customers that evaluated pre-GA releases of Control-M for Hadoop helped us design and execute the testing use cases. We made sure that our test coverage included the same platforms and configurations those customers use, and in return they now have a much more stable and robust solution that meets their needs, IT standards and configurations. The learning curve for the new technology was relatively short because we had been there with R&D and product management since the beginning of the project. We participated in the technical research, the discussions with customers and the release specification planning. This was a true team effort. We ended up with a shorter release cycle and, eventually, a better product.

 

 

Robin Reddick: Solution Marketing (@robinreddick)

Working on Control-M for Hadoop let me do two of my favorite things as a solutions marketer – bring a new product to market AND into a new market area. I began researching the big data market opportunity well over a year ago. With all of the initial big data processing being batch (scheduled), it was a perfect fit for Control-M and just a matter of the right time.

 

Reaching out to customers, understanding their needs, and then working with product management, sales, and other stakeholders in the company – it was all great fun, and of course challenging as well.

 

The best part of the entire effort was working closely with customers to understand their business needs. Every customer I spoke with was using Hadoop and big data to learn more about their business – to make better-informed decisions. Each of them was passionate and excited about the opportunities they were finding to offer better and even personalized service, create new products, and improve their business operations. Their excitement was infectious.

 

Memorable moments? The first meeting with a customer – it was MetaScale – and understanding just what a game-changer big data really is for businesses. The first conversation with a sales rep, who called Hadoop “Hoopla” the entire conversation, making me realize how important training would be. And throughout – working with my teammate Joe Goldberg, who was relentless and remarkable the entire time.

 

It’s not over yet. This is just the beginning!  

 

 


 

 

 



In the previous post I talked about the analysts’ perspective, about identifying the best vendor and about localized needs. What other aspects should you consider when evaluating workload automation solutions?


Operator, I need an exit!  Fast!


Now let’s talk about support. There is no doubt that such a mission-critical element of your business must have 24/7/365 support, but ask yourself (and your vendor) the following questions:

  • How many support analysts are actually available when you need them?
  • How good is the service they offer?
  • What level of service do they provide (SLA)?
  • How well do they collaborate with R&D, and do they have an escalation process that works?
  • Will you have some face-to-face time with R&D managers so they fully understand your implementation, challenges and needs?

Try to understand how the vendor’s support organization works and how effective it is. There are ways to measure that: CSI (Customer Satisfaction Index) is one, and NPS (Net Promoter Score) is another. CSI measures customer satisfaction based on the service customers receive in response to a support ticket. NPS measures how likely customers are to recommend a vendor to their colleagues or other companies. The former is usually based on how customer support handled specific support cases, while the latter reflects a broader level of satisfaction with the vendor’s performance as a whole.
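The NPS arithmetic itself is simple enough to show in a few lines; the survey responses below are made up:

def net_promoter_score(ratings):
    """Standard NPS: percentage of promoters (9-10) minus percentage of
    detractors (0-6) on the 0-10 'would you recommend us?' survey scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 3 promoters (30%) and 3 detractors (30%) -> NPS of 0.0
print(net_promoter_score([10, 9, 9, 8, 8, 7, 7, 6, 5, 3]))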

Bottom line – the level of support you receive from a vendor has a direct impact on how quickly you recover from failures and on the kind of exposure (positive or negative) you, as the product owner, will have with your managers.

 

Oh my god Persephone, how could you do this?


In the movie The Matrix Reloaded, Persephone (the amazing Monica Bellucci) puts a price tag on the help she offers to get the team to The Keymaker. Commercial products also have a price tag :-)

There are different methods of measuring capacity and bundling add-ons. Each vendor has a unique way of handling this, and it isn’t always easy to compare vendors when they measure or package their solutions differently. Below are a few common methods used to measure capacity:

  • The number of workloads automated each day or the peak number of workloads over a period of time (usually a year)
  • The capacity of the servers running the workloads  (CPU on the distributed side and MIPS on the mainframe)
  • The number of component instances deployed.

You should plan for growth. Ask the sales rep how increases in capacity within the license term will be handled and what level of flexibility the vendor offers. Ask about the different pricing methods available and determine which one fits your current needs and is best suited to future growth.

 

Neo... nobody has ever done this before. That's why it's going to work.


Last but not least is the product perspective. This is probably the most straightforward consideration when it comes to functionality (can the product do what you need it to do?), but you should also take the following aspects into consideration:

• The investment required to maintain and administer it over time, including:

  • Installing new components
  • Upgrading to new versions
  • Integrating with company IT standards such as LDAP/Active Directory, your Helpdesk system or monitoring framework.

 

You should check how the workload automation solution integrates with the various applications you have, such as:

  • ERPs (SAP, Oracle eBusiness Suite, PeopleSoft, etc.)
  • Business Intelligence/Analytics tools (SAP Business Objects, Cognos, Oracle BI, etc.)
  • Data Integration (ETL) tools
  • Databases (Oracle, DB2, Sybase, MSSQL, etc.)
  • File transfers (FTP, SFTP, etc.)
  • Backup solutions (Netbackup, IBM TSM, etc.)
  • Technologies such as WebServices, messaging and Java applications.

 

Identify advanced product capabilities such as:

  • Alerting on missed SLAs when batch services are late and will not meet their deadline.
  • A Self Service offering that allows application developers to access their own workloads without requiring them to call the operator or generate expensive helpdesk tickets.
  • Forecasting how workloads will run on future dates, allowing you to plan maintenance activities accordingly.

 

Ask your vendor about their offering and vision regarding emerging technologies and market trends such as Social, Cloud, Mobile and Big Data.

 

Make sure you understand what the product’s release cycle looks like (major releases, maintenance, patches, etc.) and the vendor’s track record over the past few years.

 

That's the closest I can get you. You better grow some wings…

We've talked about the vendor, the ecosystem, support, pricing and product capabilities. All of these should be taken into account along with analysts’ input and customer references. Ask your sales rep for this information but make sure to also check it yourself using online communities, social media and any other resources available to you.

 

 

 

                                     

 

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software

Why I wrote this post

Before I started this blog I made a list of topics I would like to write about, and the question of how to choose the best workload automation product seemed like the perfect one to start with: what aspects should you take into consideration when evaluating a workload automation product, and how can you decrease the chances of repeating this process in a couple of years because you selected the wrong product or vendor?

 

Workload Automation is Critical


In the past 12 years I’ve heard endless examples of how critical workload automation is to businesses: banks that will be fined millions of dollars if they miss batch deadlines, retailers whose supply chains would simply break without batch, and manufacturers that could not keep their production lines running without batch. These examples show what a mission-critical application a workload automation solution is, but there are also examples of how much money, effort and time companies save by using a good workload automation product. I’ve heard a user say that by automating the printing of mail and the sorting of envelopes in the right order, his company saves thousands of euros that would otherwise have been paid to the post office. Another company dramatically increased the number of workloads they automate while decreasing the number of employees who manage them. And the list goes on. You can browse the website of any workload automation vendor and find success stories with additional examples, or use services such as TechValidate, which conducts impartial market research.


What does The Matrix have to do with Workload Automation?


So why name this post after a quote from the legendary movie The Matrix? First, because I love this movie… Second, because you have a choice to make. Many companies make a decision based mainly on price, which is indeed a very important factor in today’s economy, but there are other factors you should be aware of before you make the final decision, especially if you plan for growth in the near future. If you take the blue pill – in other words, select a workload automation product that doesn’t meet your current or future needs – you will “wake in your bed and believe whatever you want to believe”… Sooner or later you will be facing the challenges, and instead of saving Trinity and Zion (increasing productivity, reducing costs and reducing risk), you will be busy putting out fires...

 

So where should you start?

 

I'd ask you to sit down, but, you're not going to anyway


The first thing to do is probably to read analysts’ reports. Analysts are much like the Oracle: they can’t tell you the future, but they can help you make the right choices by sharing information from their research. Vendors share their roadmap and vision with them, and they interview customers who can speak to each vendor’s ability to execute. They have the wide market perspective and can point out the pros and cons of each vendor. They can help you avoid breaking the vase…

Gartner has its Magic Quadrant (MQ) reports, EMA has its Radar report, and there are a few others such as Forrester and IDC. These reports are not published every year, so Google for the latest one or ask the sales reps from the vendors you work with for a copy.


I am the Architect. I created the Matrix.

Next to consider is the vendor as a company. What are the chances the vendor will still be around in the next few years? Is it financially stable? Will this vendor be around to support you through financially challenging times? What are the chances this vendor will be merged into or acquired by a third party and shift its focus away from workload automation? What local presence does this vendor have in your country – not just in general, but with workload automation specialists?

Vendors have different levels of focus on workload automation. Ask yourself (or the vendor’s sales rep) the following questions:

  • How long has the vendor been playing in the workload automation market?
  • How many developers are working on the next releases of their products and fixing bugs?
  • How many years have those developers worked with the product?
  • What percentage of revenue is invested back into R&D and support?

One way to check is to browse the vendor's website: see whether workload automation is mentioned on the home page and how much content is available online. The level of focus will most likely indicate the level of commitment the vendor has to the product.

The Matrix architect is completely focused on building, optimizing and perfecting the Matrix. Similarly, a vendor’s level of focus on workload automation will most likely indicate how soon a new operating system version or database platform release will be supported, and what the chances are that the product meets the latest security standards. It will most likely also indicate what level of innovation to expect in the future.

You should also know that some vendors offer multiple workload automation products, which clearly shows a lack of focus. These products were acquired by the vendor over the years, and even if renamed to be part of a single brand, they are still completely different products with very little integration between them (mainframe and distributed products, for example).

Perhaps we are asking the wrong questions

I mentioned the vendor’s local presence in your country earlier, and I would like to elaborate on that. The power the Merovingian has in The Matrix is knowledge, and one way this is demonstrated in the movie is his ability to express himself in whatever language he feels comfortable with. You might want similar abilities when using your workload automation solution. Ask yourself:

  • Is your workload automation interface localized?
  • Does the vendor offer localized product documentation and marketing collateral?
  • Can you find local resources in your region (vendor services or partners) that can help you with the initial deployment, education, upgrades and migration projects if you switch from one product to another?

In some countries and industries (such as federal/government) localization is a mandatory requirement. Look for references in your country or industry, and check what local events, such as annual seminars and user groups, are taking place near you.

 


What's next? Part 2:

  • Operator, I need an exit!  Fast!
  • Oh my god Persephone, how could you do this?
  • Neo... nobody has ever done this before. That's why it's going to work.

 

 

The postings in this blog are my own and do not necessarily represent the opinions or positions of BMC Software
