- by Joe Goldberg, Lead Technical Marketing Consultant, BMC Software Inc.

 

As we move into the second layer of the automation stack, we encounter the convergence of business and IT processing that is one of the drivers behind the “Workload” terminology now adopted for what was previously called job scheduling. The workload we are managing is expanding beyond traditional application boundaries and interacting with transactional and real-time IT environments.

[Image: Automation Stack]

Service Level Management for batch business workload could be the topic of a separate post, but at a minimum there must be a way to define and express service levels or deadlines, a business-oriented name for the service, and the actions to be taken if an SLA event is detected. Once these are defined, the workload automation solution must be able to proactively monitor progress toward SLA fulfillment. When a potential breach is detected, it is immensely helpful to have visualization and simulation facilities that help you identify the source of the delay and take corrective action.
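To make this concrete, here is a minimal sketch in Python of what such an SLA definition might capture. The field names and actions are hypothetical illustrations, not any particular product's syntax:

```python
from dataclasses import dataclass, field

@dataclass
class BatchSLA:
    """Illustrative batch-service SLA definition (hypothetical fields)."""
    service_name: str          # business-oriented name for the service
    deadline: str              # time by which the service must complete
    warn_before_minutes: int   # how early to raise a potential-breach event
    on_breach_actions: list = field(default_factory=list)

# Example: nightly reconciliation must finish by 6:00 AM, warn an hour out
reconciliation_sla = BatchSLA(
    service_name="Nightly Account Reconciliation",
    deadline="06:00",
    warn_before_minutes=60,
    on_breach_actions=["notify-operations", "open-incident", "raise-priority"],
)
```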

 

 

Batch workload does not operate in a vacuum. The batch services we have been discussing are usually components of business services. Let’s consider a financial services organization. Interactive banking services such as deposits and funds transfers are usually finalized in batch. This is why a transaction made “after close of business”, for example, does not appear in your account until the following day. If financial account reconciliation or inter-bank transfers fail to process, web-based users may not be allowed to perform transactions because their account balances may not be correct. To visualize these relationships and put batch services into proper context, IT Service Modeling and Impact Analysis must incorporate batch SLAs. This visualization is enabled via the CMDB; the CIs and their relationships form the Service Model.
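As a rough illustration of that idea, the sketch below models a few CIs and their "depends on" relationships as a small Python graph and walks it to find the business services impacted when a batch CI fails. The CI names are hypothetical, and a real CMDB obviously holds far richer data:

```python
from collections import deque

# Each service maps to the CIs it depends on (names are hypothetical).
DEPENDS_ON = {
    "Online Banking": ["Account Balance Service"],
    "Account Balance Service": ["Nightly Account Reconciliation"],  # batch CI
    "Nightly Account Reconciliation": [],
}

def impacted_services(failed_ci):
    """Find every service that directly or indirectly depends on failed_ci."""
    impacted, queue = set(), deque([failed_ci])
    while queue:
        ci = queue.popleft()
        for svc, deps in DEPENDS_ON.items():
            if ci in deps and svc not in impacted:
                impacted.add(svc)
                queue.append(svc)
    return impacted

print(impacted_services("Nightly Account Reconciliation"))
# e.g. {'Account Balance Service', 'Online Banking'}
```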

 

Let’s skip ahead to the topic we really want to discuss today: Dynamic Workload Management.

 

The dynamic aspects of today’s workload relate to workload itself as well as the environment in which it operates. As the importance and relevance of workload automation continues to permeate the organization, non-traditional users such as financial analysts, business unit managers, and application owners are finding they must apply workload automation in order to do their jobs. These users frequently submit analytics and other ad-hoc jobs. This translates into a larger, more volatile workload than has existed before.

 

From the IT perspective, virtualization and cloud technologies have made infrastructure less rigid than in the past. Organizations are seeking cost optimization benefits by configuring application resources for average usage, with the intent of scaling up or down as demand requires.

These two factors, random workload and on-demand infrastructure, combine to create the need for dynamic workload management.

Dynamic Workload Management introduces several capabilities to help organizations deal with unpredictable workload while maximizing the benefits of elastic infrastructure. The first is a policy-based approach to managing incoming work. Instead of designing workload to operate at specific times, workload policies categorize workload and define rules that are applied whenever those categories of work arrive. For example, when analytics jobs are submitted by the Finance department between 9 and 5 (a high resource-utilization period), only a small number of those jobs can run concurrently. However, at financial quarter end, Finance analytics get close to top priority and we run these jobs as quickly as possible with much higher limits on concurrency. A policy-based approach lets us pre-define how we want to handle the workload mix and then allows the technology to apply these rules automatically.
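A minimal sketch of such a policy, again in Python with assumed limits and category names (a real product would express this in its own policy language):

```python
from datetime import datetime, time

def max_concurrent_finance_jobs(now: datetime, is_quarter_end: bool) -> int:
    """Concurrency limit for Finance analytics jobs (illustrative numbers)."""
    business_hours = time(9) <= now.time() <= time(17)
    if is_quarter_end:
        return 50    # near top priority: run as much as possible
    if business_hours:
        return 3     # constrain ad-hoc analytics during peak usage
    return 10        # looser limit off-hours

# The scheduler would consult this rule whenever a Finance analytics job
# arrives, rather than relying on jobs being designed to run at fixed times.
```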

 

Similarly, we can define how resources are made available to different categories of workload. It has been good practice for quite some time to avoid using explicit hostnames when defining where workload runs. This mechanism is called node grouping: a logical name is assigned to a collection of server resources and workload is bound to that logical node group. Dynamic Workload Management extends this concept in several ways.

First, the node group on which workload will execute is now determined by policies. Using the Finance example from above, we can send work to a small node group when we are constraining the workload but to a larger and more powerful collection of servers during financial quarter end.
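Continuing the sketch from above, the policy outcome can also select the node group; the group names here are invented for illustration:

```python
def pick_node_group(category: str, is_quarter_end: bool) -> str:
    """Route work to a node group based on policy (hypothetical names)."""
    if category == "finance-analytics":
        return "finance-large" if is_quarter_end else "finance-small"
    return "default"
```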

Second, the actual resources that make up a node group can vary dynamically. We can add either physical or virtual servers, including cloud-provisioned servers. We can also re-purpose physical servers. Using the classic example of a web or database server that supports a real-time application nine to five (if anyone still has such applications), we can define participation rules that add our database server to the workload environment during its idle time, so that we make productive use of what would otherwise be a wasted resource. Agentless Scheduling and dynamic node groups make this process significantly easier.
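A participation rule of that sort might look something like the following sketch; the host names and the nine-to-five window are assumptions for illustration:

```python
from datetime import datetime, time

BASE_MEMBERS = {"batch-worker-01", "batch-worker-02"}

def current_members(now: datetime) -> set:
    """Node group membership, with the DB server joining outside 9-to-5."""
    members = set(BASE_MEMBERS)
    if not (time(9) <= now.time() <= time(17)):
        members.add("db-server-01")  # re-purpose the idle database server
    return members
```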

Third, the jobs that are submitted must become independent of specific hosts. Traditionally, jobs are tightly bound to a small group of machines. If we then wish to re-direct such jobs elsewhere, it becomes a complex, manual effort. To gain the flexibility required, the workload automation solution must directly manage the control language (scripts, JCL, etc.). This capability is called embedded scripts and allows jobs to be sent to any available machine, including a dynamically provisioned one that did not exist until moments ago, without concern about availability of scripts on that target host.
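In sketch form, an embedded-script job simply carries its control language with it, so the dispatcher can ship it to whatever host the policy selected (field names are hypothetical):

```python
job = {
    "name": "finance-analytics-042",
    "node_group": "finance-large",
    # The script travels with the job definition itself...
    "script": "#!/bin/sh\nrun_analytics --quarter-end\n",
}

def dispatch(job: dict, host: str) -> None:
    """Copy job["script"] to the chosen host and execute it there."""
    # ...so there is no assumption that the script is pre-staged on the
    # target, even a host provisioned moments ago.
    ...
```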

 

In conclusion, Dynamic Workload Management can be described as the combination of Workload Virtualization, which lets us manage our batch workload independently of the IT infrastructure on which it runs, with an elastic infrastructure that is itself independent of physical hardware constraints.

 

The postings in this blog are my own and do not necessarily reflect the opinion or position of BMC Software Inc.


Common wisdom says you should buy a vehicle that will meet your needs 90% of the time, and rent whatever you need for the rest. Since my car spends 90% of its time in an airport garage or driving to an airport, I've got a small car that's comfortable for me and one other (and will seat four in a pinch).

 

So it is with automation tasks. I will commonly build out install packages for the 90% use case: SQL Server installation is a common task which, while it must be executed correctly and ideally the same way every time, is not terribly interesting once you've done the initial configuration. There are a dozen other middleware components, a dozen agents, a dozen common configurations: changing the name of the built-in Administrator account, for example. While all of these are "common" tasks, I often end up talking to people trying to automate the most complex configurations in their environment, or seeking tools to address 100% of the tasks at hand.

 

I share their interest: I rarely want to bother with the easier tasks in a given environment. It's boring, and once you've installed Oracle 11g a couple of times, there's really not much that's interesting in it, unless you're trying to do something completely different (like stand up a 3-node super-HA RAC). All of that said, I've been noticing lately that it's far too easy for us to set aside the basic work that needs to be done, the first 80-90% of automation tasks: setting up the various patching, security, regulatory, or build compliance audits, or building provisioning or software deployment packages.

 

Instead we tend to focus on whether that last remediation instruction is exactly correct, on whether one condition in particular works correctly on 100% of the systems.  Unfortunately, that last 10% seems to cost as much (time, money, resources) as the first 90%.  A customer I know of has a metric on their software installs: every time one of their senior resources doesn't have to spend an hour staring at billboards while a given software package installs, they add $40 to the automation bucket, and at the end of the year they total it up.

 

Now, it's not fun or exciting to set up and maintain the "first 90%" jobs, but they get the job done, and you'd be surprised how much they'll save your organization over a year. If you want to know exactly how much, just set up a quick report to measure the number of runs over the course of a year. Then you'll know how much time you freed up to work on the "more interesting" tasks.
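The arithmetic behind such a report is simple; here is a back-of-the-envelope version using the $40-per-install figure from the customer example above (the run count is an assumed placeholder):

```python
runs_per_year = 500          # pulled from your job-run report (assumed here)
saved_per_run_dollars = 40   # one senior hour not spent watching an installer
print(f"Annual savings: ${runs_per_year * saved_per_run_dollars:,}")
# Annual savings: $20,000
```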



In Paris on a recent business trip, I went for a very late lunch after a rather successful meeting. As the lunch wound down, I started imagining the parallels between a restaurant and an IT department.

 

Both organisations exist to serve individuals, whether they be patrons or customers. However, there is a crucial difference in the way that the organisations relate to their customers. In the restaurant where I was sitting, after some minor negotiation occasioned by the lateness of our arrival, we were quickly able to order starters and mains, quibble over the vintages of wines (ah, Paris!), and then relax in the certainty of receiving our dishes in short order.

 

If the restaurant had been run along the lines of the typical datacenter, it would have been a different experience. I think it would happen a bit like this:

 

You enter the restaurant from the street. Although it is full of staff, there is no clear division of roles. Nobody welcomes you, so eventually you choose a table at random and manage to flag down what you hope is a waiter. He looks displeased and mutters something incomprehensible, which you take to mean that he is either not a waiter, or not the right waiter. After another failed attempt, you do manage to make contact with a waiter, who instead of presenting a menu, asks what you would like to eat.

 

Taken aback, you ask what he recommends. He explains that the chef specialises in a particular type of artisanal grain, grown by reclusive monks in a remote region, ground by hand between grindstones made of rock sourced from an inaccessible Himalayan valley, and so on. After he has finished declaiming the wonderful properties and pedigree of this grain, he looks at you expectantly. You inquire whether it would be possible to make this grain into a dish, perhaps bread or pasta. The waiter inquires what type of salt would be your preference for cooking the pasta, whether it should be cooked in tap water or perhaps bottled Evian, whether extra-virgin olive oil or sunflower seed should be used to season it, and so on.

 

Quite exhausted by the interrogation, you sit back to await the arrival of your dish. What eventually emerges is a gigantic tray of pasta, but unfortunately it has been served without sauce. When you timidly inquire about the sauce possibilities offered by the restaurant, your waiter explains that he will need to involve the condiment waiter, as he himself is not competent in this rarefied domain.

 

After another cross-examination on the various ingredients, eventually a tiny sauce-boat is produced, evidently insufficient to cover your mound of pasta. However, when you point this out, both waiters are irritated with you, and explain that you never once mentioned quantities to either of them, so they provided their respective standards.

 

Cutting the metaphor short at this point, what I have described is the typical experience of an end user requesting IT services. First, they are left to themselves and are not guided in their choices in any structured way. Then they are deluged with unnecessary amounts of detail, which nevertheless manages to leave out or obscure some key points. Finally, they are handed off between different functions (systems, networks, databases, apps, security, ...), generally in a manner which makes it clear that no particular communication is taking place between those functions.

 

What IT consumers want is an experience more akin to mine in the restaurant: a clear menu of choices, each with an associated price. Customisation is possible - vinaigrette without mustard for the salad, or a different size of storage assigned - but starting from well-understood basics. This is not to say that patrons are uninterested in the artisanal grain or the custom Linux kernel, just that they are more interested in the results in terms of tastier bread or better performance than in the detail of how those results are achieved, let alone the choices involved. They trust the chef and the sysadmin to choose the appropriate ingredients and in the correct quantity to produce the dish or service which they have requested.

 

Would you not prefer to be served by the discreetly competent maitre d' at Chez Cloud, the IT restaurant that is so different from what you were used to?



Michael Schrage, research fellow at MIT Sloan School's Center for Digital Business, recently wrote a blog post titled "Why You Should Automate Parts of Your Job to Save It." When I saw his post come across my tweet stream, the catchy title compelled me to read it right away. Schrage's thesis is that automation should now be the norm in our day-to-day work, and that people who aren't "using technology to trim their cognitive burden or get the same process result 25 seconds faster" are less likely to get a raise or a glowing job review, or even to maintain employment with their current company.

 

IT and data center operations are ripe for many levels of automation, but many companies suffer through hacked-together processes based on years of tradition rather than any significant innovation. Every major management software company has offered automation suites for years. Open source alternatives for data center automation are prevalent. Yet even with this ability to automate, execution is lacking, and employees are still retained and rewarded for maintaining the status quo.

 

The simple explanation for this is the classic concept of "rewarding A while hoping for B." IT management hopes that their staffs will spend time automating and refining processes. They launch automation initiatives, and may even spend several thousand dollars on automation software. But several months later, the level of automation is right where it was before the initiative.

 

I've seen this time and time again in large and small companies that have sought to incorporate more automation. Luckily, it is an easy problem to fix. First, you need to align the reward structure of the organization with the initiatives you want to accomplish. Throw out the performance reviews that make nebulous, difficult-to-measure statements such as "Shows initiative and alignment with department goals." Replace them with clearly defined, measurable statements that support the department's initiatives, such as "Automate the service build process for mid-tier and database services." You can even get more granular with the statements and make the more finely tuned ones end-of-quarter goals rather than end-of-year objectives.

 

Second, now that the performance measures are in place, align your rewards with those measures. Work with HR and senior management to build a reward structure that supports the department's (and company's) initiatives. Rating someone on a scale of 1-5 can be useless for motivating them to execute on the initiatives. Instead, have clear-cut measures and rewards: 100% automation equals 100% bonus, 75% automation equals 75% bonus, 50% automation equals 50% bonus.

 

Now that you have aligned the performance measures and the rewards, it's time to get out of the way. Empower your people with the decision rights to execute the initiative. Delegate to them the power to get things done, and support them when they encounter roadblocks. Let them use their talent to choose the best solution, the processes that need refining, and the path to get there. Break down the political barriers for them, and let them show their talent.

 

Automation in IT is a requirement for today's fast-moving businesses. Aligning IT's measurements, rewards, and decision rights to enable and support automation can have significant long-term benefits for the company.



[Image: http://images.drive.com.au/drive_images/Editorial/2008/08/26/26ford_m_m.jpg]

 

When speaking with customers and prospects about datacenter automation, I generally find two responses:

 

  1. "I would love to automate as much as possible but need help figuring out where to start."
  2. "Our environment is too complex.  We will never be able to automate (though we would like to)."

 

The second response typically comes from someone who believes in the ideals of automation but is struggling to visualize how automation would work amid the real-world complexities and customizations of their environment. By nature and definition, automation is easiest to implement in an environment where there is not much variation. With that said, it is also important to emphasize that there are lessons to be learned from history and from other industries that have already solved the problem of finding the right balance between automation and customization.

 

Instead of taking a black-and-white approach to automation, the key to realizing value and efficiency is to strike the right balance between what should be automated and what should not. The automotive industry is a great example of an industry that has already solved this problem. In such a competitive industry, every manufacturer is under immense pressure to strike the right balance between reducing costs via automation and providing flexibility and differentiation in its products and processes. Sounds familiar, no?

 

To that end, I thought I would pass along an interesting research paper written by Igor Gorlach and Oliver Wessel from the School of Engineering at Nelson Mandela University. The paper outlines the methodologies used to determine the optimal level of manufacturing automation in the automotive industry, using Volkswagen AG as a case study. The study analyzes three production sites and three different products (the Golf A5 assembly line in Wolfsburg, the Touran assembly line, and the Golf A5 assembly line in South Africa) to determine the optimal level of automation using key indexes of cost, quality, productivity, and flexibility. It is useful information with a direct correlation to the challenges we find in IT today.

 

Good day!



Last week I was in India and spent time watching some of the final strides of the U.S. debt limit debate while there. It was very interesting getting an international perspective on the issues we are having here in the U.S. The funny thing was, the perspective wasn't too different from what people were saying here in the U.S. Overall, no matter what your opinion was on who was right, who was wrong, whether we should have revenue adjustments, and so on, there was one thing I think everyone could agree on: there was an enormous amount of division across party lines on almost all of the issues. Even when you would see some compromise in one chamber, whether the Senate, the House, or the Oval Office, it just turned all eyes to the next branch to see if they could get there as well. In the end, whether I was reading about this in India or in the U.S., most believed that this siloed and partisan approach was a massive, and in many people's opinion unneeded, inhibitor to progress.

 

As all of this was unfolding, it made me think about the politics of managing a data center. The walls between the server, network, storage, and application teams can be just as siloed, and in some cases the teams are just as set in their ways, as what we saw in Congress. The big question is: when will companies with this siloed approach have their own debt limit debate? When will a catalyst come where the business demands things that IT can't accomplish because of its structure? Talking with different customers, some say the time is here or right around the corner. The concept of cloud has created a demand from the business for dynamic, service-based infrastructures. For many companies the focus today is on provisioning, but the asks are quickly streaming into other areas, whether compliance, reporting, or beyond. The question for CIOs and IT departments quickly becomes: are you ready for your debt crisis? Do you have automation across your infrastructure? Is it capable of providing a single face to the business? The good news is that, right now, IT appears more prepared than Congress was!

Frank Yu

Of Datacenters and Puppies

Posted by Frank Yu Aug 2, 2011


[Image: kodi.JPG]


 

I got a puppy last week. I had always wanted a dog and had heard so many great things about man’s best friend. I wasn’t really sure what I wanted the dog to be: a pet, a companion, a guardian, a protector, or maybe all of the above? I figured I’d just sort it out later. I went online, looked at pictures of puppies in my area, and picked one that I really liked. He’s a Mastiff and German Shepherd mix, but I didn’t know what that meant. I went to his foster home, where his foster parents showed him to me. I watched him play around for a bit, and after he came by to lick my hand a few times, the decision was made. I took him home that day.

 

Thinking back, I’m surprised that this process isn’t too different from what many of our clients go through when getting their datacenter automation solutions. They know that automation is a great idea and have always wanted a good automation solution. They hear from their peers, read in magazines, and see at conferences that automation is the path to the future, so they can’t wait to get it into their datacenter too. They aren’t really sure exactly what problem they are trying to solve or what they would like to get out of such a solution; they think they’ll figure it out along the way. They read through the marketing slides from various leading industry vendors and find the one that looks most appealing. They go to the vendor and see a demo of the product. During the demo, they see some great features that are interesting, sexy, and come with a lot of flash and sizzle. After the demo, they make the decision and purchase the product.

 

After getting my puppy, I realized I didn’t have any supplies for him.  Off I went to the pet store and came back with a crate, food, bowls, treats, shampoo, toys, collar, leash, and a dog bed.  I got everything home, put out food and water, and let the puppy loose in the house.  Over the next few days, chaos ensued.  Furniture and rugs were chewed, cats were chased, things were knocked over, dog business was done in various rooms of the house, and no sleep was to be had.  I knew I had gotten in over my head and desperately scoured the bookstores for dog training guides.

 

Our customers aren’t too different here either. They get their nice new automation solution and try to implement it on their own. Very quickly they realize that it’s not as easy as they had imagined. They need the proper hardware resources to host the solution, experienced personnel with the skill to do the installation, an understanding of the best practices to follow for the implementation, and people and time allocated away from other projects to perform the install. Over the months following the purchase, things are very hectic. The environment acquired to host the solution isn’t properly scoped, so it doesn’t fit; the engineers assigned to put in the solution have neither the skill nor the time to install and learn a new product; and after much struggle, when the product is finally put in, it isn’t installed according to best practices and performance is horrible. At this point, many customers realize that they are in over their heads and reach out to us for help.

 

After reading the various books and guides on the meaning and responsibilities of dog ownership, I realized what I had done wrong and what I should have done. I should have figured out my purpose in getting a dog first and chosen the right one accordingly. I should have properly prepared for owning a dog, both physically and mentally, before getting one. I should dedicate time and energy to training the dog and providing positive leadership. I should continue to work with him so we can both have a rewarding relationship.

 

I always give the same advice to our clients: understand the challenges and problems you intend to overcome with the solution. Fully understand the solution and do your due diligence before acquiring it. Make sure the proper preparations are made, in terms of both environment and personnel, before putting it in. Leverage the experience and best-practice knowledge of professional services for the implementation. After the solution is in place, dedicate time and people to learning how to use the product properly, through education, training, and practice. Continue to work with the product so it can be used to gain the most benefit.

 

With knowledge, preparation, time, energy, and a lot of patience, I hope to have a very fulfilling relationship with my new dog. I hope all of our clients will have the same with their datacenter automation solutions.
