
Optimize IT

16 Posts authored by: Denise P. Kalm


Regular cooks look for a recipe, check their cupboards and fridge and then go shopping for the missing ingredients.  Home chefs either peruse their kitchen or walk the grocery store, looking for inspiration. A great chef can create a new dish from leftovers and bits of this and that.  They simply see things in a new way.  Taking a second, and even a third look at ingredients inspires them.

 

How much do you find yourself doing your job on auto-pilot, looking at the same things in the same way, using your tools exactly as you did 10 years ago?  But the challenges change, as do your tools.  When was the last time you looked at what you do and how you do it with fresh eyes?  Start with your job – what has changed this year?  Have your company’s priorities or business focus changed?  Are you moving from a primarily storefront interface to the web? To mobile? Are you virtualizing or moving to a cloud?  Look back 10 years (or 5) and see if your job is different now. Then ask – are you still doing it the same way?

 

Next, look at the tools you use to do your job.  Are you on the current release? Have you read up on all the new features?  Do you even use a lot of what is available with the tool now?  Find at least one capability that you could use, but haven’t, and learn it.  Approach your old tool with fresh eyes; what else can it do for you?  Once upon a time, I got a notion that my modeling tool, now called BMC Capacity Management for Mainframes, could do more than just predict the future, as it is commonly used.  It could also tell me whether our various disaster recovery plans would actually work.  For some key scenarios, we were able to determine that they wouldn’t perform, even though, on the surface, they looked okay.  A new use for an old tool – what could be better?

 

Ask who else might benefit from the information the tool generates.  Too often, we don’t see a way to share the information, or in some cases, don’t want to share it.  But it can be great to give the business a report that pairs their business transaction counts with their IT costs, so they can see how they are doing from a profitability standpoint.  And you can include response time and other metrics that show them the quality of your work.
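To make that concrete, here is a minimal, hypothetical sketch of such a report – the months, transaction counts, costs and response times are all invented – pairing business volumes with allocated IT cost so cost per transaction and service quality sit side by side:

```python
# Hypothetical illustration: pair business transaction counts with the
# IT cost allocated to them, plus average response time, so the business
# sees cost per transaction and service quality together.
monthly_stats = [
    # (month, business transactions, allocated IT cost in dollars, avg response seconds)
    ("2012-01", 1_250_000, 48_000, 0.42),
    ("2012-02", 1_310_000, 48_500, 0.45),
    ("2012-03", 1_480_000, 49_750, 0.51),
]

print(f"{'Month':<10}{'Transactions':>14}{'IT cost':>10}{'$/txn':>9}{'Resp (s)':>10}")
for month, txns, cost, resp in monthly_stats:
    print(f"{month:<10}{txns:>14,}{cost:>10,}{cost / txns:>9.4f}{resp:>10.2f}")
```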

 

Tomorrow, take a step back and review your data center “kitchen.”  What else can you make from what you already have?  How can you get more value out of those old ingredients?  You may be surprised at what you find.



Back when I was a performance analyst, I began getting some oddly technical requests from senior management, literally “out of the blue.”  In one famous case, someone with a fancy title, ESVP or some other interesting combination of letters, demanded that we immediately begin to do “parallel SYSPLEX.”  I had a notion that he didn’t actually know what parallel SYSPLEX meant, so I asked, “How much of it should we do?”  He replied, “We’re a bank. Let’s be conservative and do about 10% to start.”  I filed that request under a dump and ignored it.  But it got me wondering – where were these ideas coming from?  It was then I discovered a selection of management magazines heralding the next new thing for IT and urging managers to get on board.  We were ordered to convert all VSAM files to DB2, to move from the mainframe to UNIX, to take perfectly good CICS systems and move them all to MRO, and more.

 

These “good ideas” could absorb an army of technicians without necessarily delivering any business benefit.  It isn’t that any of them were necessarily bad, but you had to read more carefully to understand under what circumstances these ideas were warranted and what it would cost to make those choices.  At the same time, real-world issues presented themselves, but to many, it seemed a career-limiting move to focus on those.

Now, the buzz is cloud, and again, it’s not that moving work to the cloud is bad.  But you have to ask first: what problem are you trying to solve, and will this be the best way to solve it? The savvy technician – the one who wants to retain his job while still doing the right thing – will take the following steps:

 

  1. Read the magazines.  If you don’t know what your managers are reading, you won’t really understand what is behind the request.  Figure out what the “free lunch” is to them and whether or not your real world works that way.
  2. Develop the right list of questions.  No manager in the world likes being told by his senior technician that he is an idiot.  But if you ask powerful questions, you can work together to understand the real problem and then, derive a good solution.
  3. Understand the business. At the heart of it all is the value the business gets from IT versus the cost.  Powerful arguments, when needed, will always involve framing the issue in terms of business value. 
  4. Be a diplomat.  Diplomacy is the art of letting somebody else have your way.
  5. Be prepared to learn more.
  6. Use this as a solution-buying occasion.

Once an idea has been agreed upon, you will quickly find that your new challenges require new tools.  If you have done the research and you are ready to implement the new direction, you should know before you start what tools you need so you can manage it.  If you wait, it is much harder to upgrade your toolset; at the beginning of a project, the cost can be readily folded into the cost of the project.  These journals can be your friend or your enemy.  It’s your choice.  Don’t let a crisis go to waste.



Excellence can take on many different meanings depending on your perspective. It can be a rating that indicates a level of superiority. For students, excellence is often rewarded with an “A” on a report card. An athlete who wins a gold medal is recognized for his or her excellence in a sport. Or excellence could refer to the quality of something that is exceptionally good – like my favorite brand of chocolate chip cookies.

 

 

However, since this is a technology blog, I’d like to shift the focus to excellence in data center automation. To drive efficiencies, automation requires developing and implementing effective processes.

 

 

BMC’s Ben Newton and Tim Fessenden discussed the concept of a Center of Excellence (COE) for the data center, and how this approach brings together talent from many disciplines to create and maintain automation best practices.  They explained how business analysts, people in engineering and operations, as well as those in other groups, can work together to drive automation and business value. They also identified steps for building the center. Their article, How a Center of Excellence Can Boost Automation Benefits, describes this approach: http://documents.bmc.com/products/documents/26/37/222637/222637.pdf



What is computer performance?  Those of us in the field think we know what it means, but do we really?  Is the user’s view of performance what we actually measure? 

 

I found myself pondering this question while talking to one of my computer-illiterate friends.  He’s super-smart, but has no patience to learn the technical underpinnings of his PC.  I can relate – I don’t actually know how my flat panel TV works, and have only a vague notion of how my car moves me from one place to another.  I just expect them to work.  That’s how he feels about PCs.  When he is browsing, opening a program or trying to print, he blames the PC itself for any slowdowns.  I found him searching for faster machines, using the GHz rating to determine the relative speed.

 

Any performance analyst worth their salt is shaking their head right now.  Clock rate is only one of many components that we need to look at to understand performance.  We pat ourselves on the back; we are so much more knowledgeable than the average user.  But do we have tunnel vision too?  Does our pool of data include all aspects of the end-user experience?


 

We work in silos of data ourselves.  My friend’s silo is very limited – he sees only one piece of hardware, and one aspect of that hardware, as the problem.  But when I did performance for a living, we weren’t all that much better. We had a metric called response time (yes, mainframes measure that), but it wasn’t really what the user saw.  That number measured the interval from when a request arrived at the mainframe to when the response left it.  All the back-end network, any other servers and the internet were completely ignored.

 

First, we need to know what we are measuring and what we should be measuring.  We want to clock from the moment the end user hits “enter” to when he receives a response on whatever device he chose.  Fortunately, there are solutions that can simulate this or actually measure user interactions. We simply have to employ them to get a real number.
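As an illustration of clocking from the user’s side, here is a small, hedged sketch – the URL and the two-second target are placeholders, and a real shop would use a purpose-built end-user experience measurement solution rather than a hand-rolled script:

```python
# Minimal sketch of a synthetic end-user measurement: time the whole
# round trip from "enter" to received response, not just the back-end
# portion. URL and threshold are placeholders.
import time
import urllib.request

URL = "https://example.com/account/summary"   # hypothetical business transaction
THRESHOLD_SECONDS = 2.0

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    body = response.read()                     # include transfer time in the clock
elapsed = time.perf_counter() - start

print(f"End-to-end time: {elapsed:.3f}s for {len(body)} bytes")
if elapsed > THRESHOLD_SECONDS:
    print("Slower than the user-facing target; check the full transaction path")
```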

 

Second, we have to understand what a transaction is to our user.  Not what we think of it as in IT terms, but the real business transaction that we deliver.  It helps to have a way to map it – which servers does it traverse?  Which networks? What data stores does it need?  We have always tried to measure the components of response time, the “speeds and feeds,” or more accurately, the “using and the waiting times,” but now, we can’t understand where the problem is unless we know the transaction path.   Again, there are tools that help you build a CMDB, automatically discovering the assets and relationships.  But you have to know that this needs to be done.

 

Finally, you have to move all of this to a proactive approach, where you set thresholds and monitor, so you can detect and repair issues before the user sees them (or buys a new PC because his is “too slow”).   You need to do this because most users blame the owners of a web site or program for their performance woes, not their PC.  And that means they blame you.  And understand this – cloud will not fix this problem for you; it only makes it more difficult.  Get this right now, using the right automation tools, so you can limit those panicked help desk calls.  Be a performance hero by understanding what your users mean by performance.
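Here is a minimal sketch of that proactive piece, assuming end-to-end measurements are already flowing in: compare each sample against warning and critical thresholds so a slowdown surfaces before the help desk phone rings. The transaction names and numbers are illustrative only.

```python
# Hedged sketch: classify each new end-to-end measurement against
# warning/critical thresholds and flag anything that needs attention.
WARN_SECONDS = 1.5
CRIT_SECONDS = 3.0

def classify(sample_seconds: float) -> str:
    if sample_seconds >= CRIT_SECONDS:
        return "CRITICAL"
    if sample_seconds >= WARN_SECONDS:
        return "WARNING"
    return "OK"

for txn, seconds in [("login", 0.8), ("search", 1.9), ("checkout", 3.4)]:
    status = classify(seconds)
    print(f"{txn:<10} {seconds:>5.2f}s  {status}")
    if status != "OK":
        # in practice this would open an incident or page the on-call analyst
        print(f"  -> investigate the {txn} transaction path before users notice")
```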



I recently had a chance to chat with Ron Kaminski, ITS Sr. Consultant, and Shane Z. Smith, their capacity planner focused on web site and data center performance.  Ron had previously written a wonderful paper on this topic and I wanted to learn more.

 

DK:   You previously wrote a paper about these concerns, but not everyone had a chance to read it. And things may have changed. Could you give us some insights into what people are missing as they approach SaaS and Cloud?

 

Ron K:  People think you don’t need to bother with tools – just throw more hardware at the problem.  But they miss why they have a capacity issue.  As an example, in a file server farm, you might find that the real work only takes up 1% of the resources used.  Large amounts of resources are devoted to things like virus scanners, a huge consumer.  No one expects virus scanners to take up 70% of the machine.  Infrastructure decisions are being made without understanding how much (or how little) of the CPU utilization is for real work.

 

Kimberly Clark has sites everywhere in the world, so when you schedule backups or maintenance activity based on North American time, a good time in the US may be peak usage time on the other side of the globe.   To manage this effectively, you need to be able to differentiate between real work and support work.  People who just look at total CPU don’t see that.
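A rough sketch of that kind of workload characterization might look like the following; the process names, patterns and CPU figures are invented, but the point is to separate real work from support work before drawing conclusions from total CPU:

```python
# Illustrative workload characterization: bucket per-process CPU samples
# into "real work" vs. "support work" by name pattern and sum each bucket.
from collections import defaultdict

CHARACTERIZATION = {
    "real work":    ("oracle", "websphere", "apache"),
    "support work": ("avscan", "backup", "patchagent"),
}

samples = [("oracle", 12.0), ("avscan", 55.0), ("backup", 15.0),
           ("websphere", 8.0), ("patchagent", 6.0), ("apache", 4.0)]

totals = defaultdict(float)
for process, cpu_pct in samples:
    bucket = next((name for name, patterns in CHARACTERIZATION.items()
                   if process.startswith(patterns)), "uncharacterized")
    totals[bucket] += cpu_pct

busy_total = sum(totals.values())
for bucket, cpu in totals.items():
    print(f"{bucket:<16}{cpu:6.1f}% of the machine ({cpu / busy_total:5.1%} of busy time)")
```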

 

Shane S:  Without tools based on workload characterization, we wouldn’t have caught these issues.  And even trying to get the data can be a problem.  You can’t look at data at this level in PaaS (platform as a service), so that’s a problem. And SaaS vendors require proof that performance problems aren’t at your end.  You need tools that help you do that. 

 

Ron K:  We needed to create our own collector, because we have a lot of virtual terminals.  Did you know that if you leave a DOS window open, it burns 100% of the CPU?  People aren’t aware of this impact.   We need the ability to detect what process is causing the problem and then pass rules that minimize the impact.  As an example, some screen savers can take 100% of the CPU too.  By banning selected programs or behaviors, we have recovered as much as 40 CPUs worth of resources.  You need a tool to get to the details.  Otherwise, you are like a one-legged guy in a horse race.  Vendor tools need to get easier to use and be a lot more scalable.

 

With 3000 nodes, you need something that is scale-aware.  There is going to be lots of manual characterization. 

 

DK:  Doesn’t a CMDB help with workload characterization?

 

Ron K:  Yes, it is going to be essential to make this scale.  And some people will use one workload characterization file to manage everything – discover it once and apply it to every situation – but that works best with a more homogeneous environment.

But this isn’t perfect.  People’s assumptions of what is on a node versus what shows up in the CMDB are often out of sync.  Ideally, I want a capacity planning tool to feed this information into a CMDB.  Get people out of the business of doing workload characterization. 

Capacity planners need to understand what is driving CPU busy – if not, they will wildly overprovision.  Disk is often what causes them to die, or one bad link in the network.  In a global firm, it can be very difficult to find these choke points.  Short of a world war, all companies are going to be global, and we need the right tools for that.

 

DK:  Do these initiatives change the role of a capacity planner? Eliminate it? 

 

Shane S:  That depends on the situation.  With PaaS, we have zero insight into workloads and performance – PaaS is stateless.  You can’t get performance metrics from them.   What you get is total CPU and total memory – that’s all.  You can’t do capacity planning like that.  With IaaS (infrastructure as a service), you need to get the vendor to let you put in your tools, and that can take time.  In the private cloud, you must do it yourself.  In PaaS, you are reliant on the provider to give you the monitoring and performance tools you need for capacity planning.

 

Ron K:  The problem is one of scale. You can’t do it by hand.  You need tools.  There are some things you can get even if you only get totals. If you have a 4-CPU box and one CPU is routinely 100% busy, this is a loop or something that is just soaking up CPU, like a DOS command window on a virtual desktop.  If you can’t get a collector in there, get a mini-collector that goes in once a day and collects process consumption.  You can get information by comparing day to day – it may help to flush out what is just using too much resource.  You have to be sneakier as a capacity planner now.  I’m looking to vendor partners to come up with tools that help us manage the complexity and the scale.  But for now, we’re working to deploy these mini-collectors so we can at least point out the silliness.
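As a loose illustration of that mini-collector idea (not Ron’s actual implementation), the sketch below snapshots per-process CPU once a day and diffs it against yesterday’s file; it assumes the third-party psutil package is available, and the file naming is arbitrary:

```python
# Sketch of a once-a-day mini-collector: record per-process CPU seconds
# and compare against yesterday to flag anything quietly soaking up CPU.
import json
import datetime
import psutil   # third-party; pip install psutil

SNAPSHOT_FILE = "process_cpu_{date}.json"

def take_snapshot() -> dict:
    """Return {process name: total CPU seconds} for everything running now."""
    usage = {}
    for proc in psutil.process_iter(["name", "cpu_times"]):
        name, times = proc.info["name"], proc.info["cpu_times"]
        if name is None or times is None:      # process vanished or access denied
            continue
        usage[name] = usage.get(name, 0.0) + times.user + times.system
    return usage

today = datetime.date.today()
snapshot = take_snapshot()
with open(SNAPSHOT_FILE.format(date=today), "w") as fh:
    json.dump(snapshot, fh)

try:
    with open(SNAPSHOT_FILE.format(date=today - datetime.timedelta(days=1))) as fh:
        yesterday = json.load(fh)
    for name, cpu in sorted(snapshot.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        growth = cpu - yesterday.get(name, 0.0)
        print(f"{name:<25}{cpu:>12.0f} CPU-sec  (change vs. yesterday: {growth:+.0f})")
except FileNotFoundError:
    print("No snapshot from yesterday yet; the comparison starts tomorrow.")
```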

 

Shane S:  The stateless environment is a challenge – it is much harder to get the data.

 

Ron K:  Service providers want to charge you.  If 30% of your environments are doing something stupid, you are just wasting money.  Capacity planning is an issue of scale – there can be a lot of savings there.

 

DK:   What do you mean when you say these technologies have an “Achilles heel”?

 

Ron K:  The issue is ubiquitous web access – people think that anything that can be done can, and should, be done on the web.  But how you choose to get it there can impact resource demand.  Even once you decide to do it, there are thousands of ways to do it.  You need to think about this, focus on efficiency and choose better.  They call it code for a reason.  It isn’t obvious what is using a lot of resources, so choices in coding can have a huge resource impact.  Choices must take into account the distances involved, especially for chatty applications.  We’re going to look back in 30 years and laugh at the code we are running now.  I told my mother back in 1970, “In the future, no one is going to read books anymore.  People will read off screens.”  In the future, people won’t be writing code.

 

The current manual method in IT takes too long to get things running. In 20 years, vendors will have this automated.  Users will be saying what they want the app to do.  Human interfaces, like Siri, will know what the rules are and enable the automation of applications, without IT coders. Those programs will quickly learn how to optimize the code.  We will always need capacity planning tools until the automation is excellent, and I put it to you that we will always need them for advanced analysis. Coding as a career has to end.  Data is just getting bigger – so the impact of bad coding decisions is going to get worse.

 

In reality, there are too many nodes to do this the way we used to do it.  Large scale automation in the future will help.  I believe corporate data centers will go away – everything will be on the web or in the cloud.   But then the tools will have to scale even more.  I think IaaS will be the software vendor’s next big challenge.  We need this because too many still see capacity planning simply as telling you how much to buy, not how efficient your systems can be.  Large scale vendors of services need to find a way to handle these problems. 

 

Capacity planning teams will shrink – automation will replace them.  The future is global firms.

 

Shane S:  Automation will eventually take over, but the vendors are extremely far away from that.  PaaS and IaaS don’t seem to be doing capacity planning properly, but it is in their interest to do this.

 

Ron K:  This is a big opportunity for a vendor and it would give better value for their customers too.  It might result in a new thing – CPaaS.   Cloud users need something that doesn’t require their cloud providers to install something – we need new tools which can use the data already available.    Both cloud providers and cloud users need more detail.  As a user of a cloud, you would want to know that another user is sucking up so much resource that it is impacting you.

 

DK:  Isn’t that the same kind of problem that we had when we first started sharing CPU resources?

 

Ron K:  Yes, and we still need the kind of data that shows you when this is happening.  “Hey, cloud vendor – why does my performance suck when I’m doing the same work as I was last hour?” This tool needs to exist. 

 

What we don’t have is time – there is so much to do, but never enough people to get it all done.  Tools could help, but not if we have to write them ourselves.   I recently converted my home systems to Macs and now, when software is updated, it is automatic – I don’t have to worry about it.  That is the future.  Upgrades should just happen.

 

DK: What’s the best way for CPs to adapt to these technologies? 

 

Ron K:   Take a step back and see how you spend your time.  Figure out what can be automated.  Do that.  If you don’t find ways to make capacity planning scalable, you won’t be able to get it done and answer questions fast enough to be relevant.  You need to be fast.

 

Shane S:  Automation is key.  And you need to understand the toolset that you have to make automation work.  Don’t rely on IaaS tools to work properly – validate them. Make your own automated alerts to weed out these problems. 



I know – I’ve already lost you. What the heck is a Sous Vide Supreme?  If you are an aficionado of Top Chef, you know that several of the contestants swore by this device, which allows you to vacuum pack food and cook it at a constant temperature, delivering such perfection as a steak that is exactly the right color all the way through.  If you are a food lover, you know the difficulty of consistently getting proteins cooked exactly the way you want.  Sous vide is a way to do that, as well as to experiment with new textures, cook flavors in more intensely, and so on.  After seeing what you could do, I had to try it… but at the time, the units cost $1500.  That’s fine for a real chef, but home chefs don’t spend that kind of money.

 

So I jury-rigged it, using a candy thermometer, my Dutch oven and a lot of attention and patience.  I found it is extremely difficult to hold a temperature over time and ensure the same results.  I also was not able to get a real vacuum, but boil-in bags worked reasonably well.  Still, this method was nit-picky and time-consuming.


On my last birthday, my kind sister gave me not only the Sous Vide Supreme, but also the vacuum sealer, so everything would be perfect (and yes, the price had come down).  Now, I can literally toss together my seasonings, vacuum-seal a protein, dump it into the machine and let it cook for 4 (or more) hours, without a care in the world.  The results will be perfect!

 

I get SO much more done because the cooking is on auto-pilot. And it’s better than just an automated Denise.  The results are better than you can ensure by doing it yourself.  This made me think about automation.  As a long-time capacity and performance geek, automation always seemed to be a way of saying that my employer could replace me with software.  I knew this wasn’t the case, but there was that lingering fear that ceding control to software would leave me without a purpose.

 

But you have to start somewhere.  And with fewer people to do the work, you have to work smarter.  It began when I moved over to doing UNIX performance.  I started with shell scripts, which were a pain to write and didn’t really give me the data the way I wanted to see it, but at least they were something.  Then we found a few freeware tools that were a bit better.  This was good, as Korn shell programming was not my strong suit.  Finally, I caved in and we got what is now known as BMC Capacity Optimization.  I discovered that it did many things much better than I could, leaving me free to do the job no tool really can do: interpret the data in light of the politics and culture – context the software cannot obtain.

 

Once I let go of those day-to-day manual processes, I found that not only was life less challenging, but it was also less boring.  Software automation does the things that you only found interesting the first time you did them.  It also lowers your risk.  Software rarely makes mistakes; people don’t have a perfect track record, particularly when on call.  Finally (and best of all), automation elevated my status.  When I was no longer down in the weeds, fussing over the “perfect steak,” I had time to create a “feast” of value and understand more clearly what the business needed from IT.  Automation is freedom!  Automation means career success, if you let it.

 

I’ll confess – none of what I was able to do tasted as great as those sous vide steaks or salmon.  But where would I have had the time or the mental cycles to experiment with food without automation?  Get out there and get some – you never know what interesting things you can do with the time you recover until you begin.

Denise P. Kalm

DIVING INTO THE CLOUD

Posted by Denise P. Kalm Feb 17, 2012


We have a saying in the scuba diving world – “Plan your dive, then dive your plan.”  It sounds almost too simple, but in diving, living by these words ensures that you keep on…living.  As I read articles in the Divers Alert Network magazine – our go-to guide on diving safely – the one thing that keeps coming up is how the casualties either failed to plan or deviated from their plans.  A frequent example is two guys deciding to go to Ginnie Springs in Florida to do some cave diving – never mind that they aren’t certified, don’t have the special equipment and have no idea what they are up against. They have heard about the gin-clear waters and want to see what it is like.  We rarely hear their stories because this is a recipe for disaster.  The other common case is what happens when the situation changes and the diver abandons a well-thought-out dive.  If you’ve seen the film “Open Water,” you know what happens there.  If your dive is to be 30 minutes, you surface at 30 minutes, even if you find something really cool out there.  You don’t hang out longer, figuring that the boat will simply wait.  (Yes, it should, but do you want to take the chance?)

 

Divers spend a lot of time training so they can earn their C-card, the certificate that entitles them to rent air.  Unfortunately, that training is just a beginning and not a plan for real-world diving.  Not only do you need that training, but you need to continue to learn and then practice your learning with rigor.  If you don’t check your equipment and that of your buddy, there will be a time when you regret it.

 


So what does this have to do with cloud?  I’m so glad you asked.  As I watch people talk about cloud, I too often see a failure to follow the diving mantra.  Caught up in the financial benefits of cloud, too many rush into it without looking at how they will ensure SLAs in a cloud environment.  Customers won’t cut you any slack if performance is worse after you move. They don’t care that it saved you money – they just want service.  Your business can suffer “decompression sickness” if you don’t plan how to manage the applications you migrate.

 

Cloud managers can also get distracted – just like divers – and deviate from their plans (when they have them).  Again, the results can be fatal to your business.  There are highly paid consultants who can come in and give your business “decompression treatments,” but by then, the patient is in trouble.

So what should you do?  Learn from divers and take the time to do it right.

 

  1. Get your C-card - Learn enough to evaluate what cloud can provide for you and what it will cost.  Which applications belong there? Which ones don’t?  Private, hybrid or public?
  2. Plan your dive - Plan your cloud migration carefully, considering not just what you want to do from an application basis but also how you will manage it.
  3. Dive your plan – Execute as you planned.  If situations come up that require a course-correction, go back to your planning exercise and adjust it with the same diligence and discussion you performed the first time.  Take the time to do it right.

 

 

As a Master Scuba Diver, I apply these lessons to many aspects of my life.  Learn from the living divers and make your cloud migration a business-enhancer. Don’t get bent!



Ben Newton, Senior Manager in Solutions Marketing, is a passionate advocate of exploiting data center automation.   We began the conversation around the subject of automating disaster recovery (or business continuity, as Ben prefers).  “If you plan thinking only about disasters, you’re talking about dealing with FEMA and limited situations like that.  What we really want to do is design a plan to keep businesses running, no matter what problems occur.  And to start, you need to define what you are trying to accomplish.  Simple requirements lead to simple solutions, but simple solutions don’t necessarily accomplish your goals.”


 

He notes that it isn’t just about getting bits and bytes to a new location.  Even if you blindly copy data over to the new location, there is no guarantee it is going to work.  In most cases, you need to be up in hours, if not minutes – so you need to be confident that you can stand the recovery site up quickly.  And there is so much more to business continuity than the data.  Ben also notes that virtualization has changed the nature of the game. Virtualization makes it much easier to blindly copy data to a backup location, but will those images work? What state are they in? To obtain the lowest cost, quick time to value, and accurate failover capability, you need to build a robust plan.  It begins with understanding the business: which applications and systems are mission-critical?  What is your configuration management process?  How do you migrate applications?  How current can you keep your data?  What is the sequence of events needed to recover systems? Ben sees a robust DR strategy as a catalyst to move to mature data center management.


 

As an example, less mature plans don’t account for the implications of different IP spaces.  You could no more move Waltham houses wholesale to Houston than you can move most networked systems to another IP space.  The solution to this challenge is to understand your configuration and develop full stack provisioning for mission-critical applications.  In other words, automate the end-to-end build of your most important systems, so you can quickly stand up new systems – while taking into account the environment they are in. And you also need to understand your application release process, and make sure that the DR site is always up to date with the latest versions of your custom applications.
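The sketch below is a generic illustration of that end-to-end idea, not BMC’s actual API or Ben’s runbook: declare the layers a mission-critical application needs and the order in which they must come up, then walk that sequence. The application name and step descriptions are hypothetical.

```python
# Generic sketch of dependency-ordered, full stack recovery for one
# application; each step would call the relevant automation tool.
RECOVERY_SEQUENCE = [
    ("network",     "re-map VLANs, IP addresses and firewall rules for the DR site"),
    ("os",          "provision and configure the server images"),
    ("middleware",  "build the application server tier"),
    ("database",    "recover the databases and verify how current the data is"),
    ("application", "deploy the latest released version of the application"),
    ("validation",  "run smoke tests against the key business transactions"),
]

def recover(app_name: str) -> None:
    print(f"Recovering {app_name} at the DR site:")
    for step, description in RECOVERY_SEQUENCE:
        # here we only log; a real plan would execute and verify each step
        print(f"  [{step:<11}] {description}")

recover("order-entry")
```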


 

Ben had personal experience with this while designing a DR plan as a consultant before joining BMC. From a BMC perspective, he says, you can start with the BladeLogic suite of products.  BMC Network Automation ensures the ability to port your network, no matter how complex.  BMC Server Automation rebuilds your OS and configures it; BMC Middleware Automation builds your complicated middleware applications, like WebSphere or WebLogic.  BMC Database Automation helps you recover your databases; as an example, Ben has seen it help stand up Oracle RAC clusters in two hours.  Assess your capacity needs for this with BMC Capacity Optimization.   Understand your resources and relationships with BMC Discovery and Dependency Mapping, then use BMC Atrium Orchestrator to pull together the sequence and scripts of automation.  Get application release management underway with Release Process Management.  Then, include BMC Cloud Lifecycle Management – this turns your business continuity strategy into a private cloud.  By thinking bigger – automation is good for every aspect of data center management – you can get business backing to obtain the benefit of cloud, as well as the benefits of better day-to-day management AND disaster recovery.


 

“Use this opportunity to do more than just DR – otherwise, it is just a wasted opportunity.  Move to a private cloud and solve multiple challenges with one effort.”

 

 

 





Almost every day, I find myself at my local Safeway, picking up food for the next dinner or running in to get a forgotten item.  The grocery store is across the street from my complex, which makes it easy to plan badly and be very spontaneous.  The “cost” is low, and I can always use more exercise.  So I don’t operate proactively, and thus have the luxury of responding to our “in the moment” food desires.  It works for us.  When I shared this experience with a friend who lives in a remote part of Colorado, he said they don’t have that option; it is a 45-minute drive to town, so they have to plan.  It’s even more critical for residents of remote parts of Alaska.  They must plan meals and buy food for at least a month at a time.  And they have to alter their plans, or pay more, based on what is available.

 

As I made yet another trip to Safeway, I found myself wondering – how many data center managers operate exactly as I do?  How many are trapped in the “weeds” of day-to-day operations, just trying to survive, while assuming that their next “provisioning” will be as easy as mine is?    Unless your office is at HP or IBM (or similar), you can’t get the hardware you need when you need it, at the price you want for your company.  You have to plan.  You have to work with the business to understand future needs, so you can minimize your costs and have just what you need, when you need it. 

 

But as a colleague pointed out, it isn’t just hardware planning.  Many of your fellow employees view IT services as if they were a local resource, like my local Safeway. When they need a new service or capability, it should be as simple as a “walk across the street.”  But in IT, we all know it isn’t.  This too means working with the business to help them understand the effort and resources required to deliver the services they want to offer.  It’s teamwork, and teamwork means planning.  Just as my husband and I sit down regularly so I can present him with menu ideas, the IT process is a collaboration resulting in a “meal” that both sides enjoy and can support.  When this alliance is functioning, sometimes the business will okay a trip across the street to Safeway (getting hardware even when it isn’t a great deal, or hiring some consultants to speed the process).  But without that alliance, IT sees the provisioning of services as a 7-day sled-dog trip, while the business sees it as a quick trip to Safeway.

 

On a side note, sometimes I just open my fridge and plan a meal around what I have, going for some innovative combinations.  You can do this too, by looking at the resources you already have (people, hardware and software) and putting them together in new and different ways to offer a better service experience.




When I started in performance and capacity planning, we weren’t all that concerned about money.  IT was a cost-center; everyone accepted this and it wasn’t a problem.  We had the luxury of extra capacity and loads of time to plan.  This approach made the job much easier, but also less valuable to the organization.  Our quarterly capacity plans were rarely of interest to anyone.  The business would let us buy what we said we needed.  It was sort of the same way in our personal lives.  Remember going out to dinner or drinks whenever you wanted to?  It just seemed to be an easier time.

 

Ah, for the good old days!  Outsourcing and offshoring put IT in the position of competing, and cost was the trigger.  This inflection began in the late ’80s and only got worse as hardware got cheaper and companies began to undervalue capacity planning.  Why pay an expert a high salary AND fund software for them when a server is so cheap?  The same challenge will keep arising as long as we don’t change what we do and how we message it.  It seems so easy to just buy cheap servers, even if that isn’t always the right answer (anyone ever see more CPU capacity thrown at a memory problem?).  And now, we are all more cost conscious, whether it is entertainment (Netflix versus going out to movies, and cooking at home rather than dining out) or necessities (do we really need to go to the doctor – can’t we just gut it out?).  Everyone is squeezing the last cent out of their dollars.

 

 


 

 

 

 

Capacity planning means providing cost-effective performance and availability by understanding the relationship between IT resources and business transactions, and managing to business KPIs.  We are business-enablers.  The result of the buy-cheap (penny-foolish) approach was rooms full of under-utilized or even completely unused servers.  It might have looked cheaper, but it really wasn’t.

 

A well-thought-out capacity planning process involves collecting “all the data, all the time,” feeding it into an automation engine that produces reports and analytics and feeds modeling and trending engines.  This allows for a continuous cycle of analysis and improvement.  You aren’t doing the IT equivalent of “making copies;” you spend your valuable time understanding the business and proactively managing the environment to provide just-in-time capacity that meets the SLAs the business demands.  It elevates your job to a new level – you (and everyone else) can see how your work contributes to business success.   Of course, it requires a new kind of capacity planner, one who is willing to learn the business, understand the variety of platforms applications run across, and rise from the weeds of their favorite silo.  Don’t be afraid of “beginner’s mind;” this is how we grow, learn and add more value.
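As a small, hypothetical example of the trending half of that cycle (the weekly peak CPU figures are invented), even a straight-line fit can show roughly when a resource will cross a planning threshold:

```python
# Fit a simple least-squares line to weekly peak CPU utilization and
# project when it will reach the planning threshold. Data is made up.
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
peak_cpu_pct = [52, 54, 57, 59, 63, 66, 70, 73]
THRESHOLD = 85.0

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(peak_cpu_pct) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, peak_cpu_pct))
         / sum((x - mean_x) ** 2 for x in weeks))
intercept = mean_y - slope * mean_x

weeks_to_threshold = (THRESHOLD - intercept) / slope
print(f"Peak CPU grows about {slope:.1f} points per week; "
      f"projected to reach {THRESHOLD:.0f}% around week {weeks_to_threshold:.0f}")
```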

 

“We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths.”

                                  – Walt Disney


For anyone who has been in the performance field for a while, the truism seemed to hold: if you did a great job and performance was excellent, management thought they didn’t need you; if performance was poor, you had failed.  The job of performance management was poorly understood, in part because of the arcane processes we used to manage it – people simply thought good response time was “magic.”

 

As systems became more complex and business applications hopped across servers, geographies and even across companies, people began to be more aware that performance doesn’t “just happen.”  The internet quickly weeds out the performance-haves from the have-nots; people simply won’t wait and they don’t have to.  There are alternatives for every service and every product.  Customers have begun to value great response time as a competitive differentiator.  The management of great companies is beginning to realize that as well. 

 

Great performance analysts usually move up through the ranks from a variety of roles – developer, database specialist, systems analyst…   They have a wide perspective because they have to.  You can’t do the job unless you can both understand all the components at a deep, detailed level while still seeing the transaction holistically.  It’s not for everyone. The best truly “feel the need for speed” and this shows up in their daily lives.  Got a good shortcut, anyone?

 

But with all that complexity and the increasing demand for stellar performance, the job has become more difficult. Add to this that in most organizations, management has been thinning the ranks, so you are either working alone or with one other person who may or may not have the experience to do the job you can do.  And the job just keeps getting bigger and more challenging.

 

The solution?  Great tools that give you all the data, all the time, filtered down to show you what you need to see.  Gone are the days when 90% of your day had to be spent running scripts to collect and transform data and then sifting through too many metrics to find the ones of interest.  Even if you enjoyed that (you didn’t really, did you?), you simply don’t have the time.  Let automation and intelligent software be your friend, allowing you the time to do the expert analysis and problem solving.  That’s what’s fun about the job.

 

For the first time in performance management history, we have the chance to be heroes.  But only if we can make our systems hum.  Consider taking your game to the next level by selecting a performance management solution that provides ALL the information and intelligence you need, so that you can be a key player in your company’s success.

 

 




How did it get to be December?  As usual, I find myself behind in planning, shopping, baking and my Christmas cards.  Years ago, I would shop (and make gifts) throughout the year, with a plan that took into account the world without the Internet.  My photo Christmas cards had to be taken, the film sent in, and then I had a long wait for the pictures to come back.   I would select one, send the negative back in and order my cards.  Weeks later, I had the final product.  This year, I shot the pictures and minutes later, was selecting from the assortment, uploading the chosen one, configuring the exact card stock and the message I wanted – the whole process took about 15 minutes, including posing my rabbit, Cisco.  My shopping consisted of surfing the web and selecting items which were delivered either to recipients afar, or to me for wrapping.  Even recipe planning is simply a click away. With the simplicity of the web, my only problem was waiting too long to get started.

 


 

You may have applied these principles to your Christmas planning, but have you done the same at work?  Are you frantically working in a reactive manner, not exploiting the new capabilities that your software can offer you? Do you have too much shelf-ware?  It is easy to find yourself caught up in the emergency of the moment, but the only way to work smarter, not harder, is to shift your thinking from how you have always done things to how you should be doing them now.  If you hate the long lines and crowds and endless reels of the same Christmas carol in your mall stores and have found a way to avoid it, why not find a way to automate any processes you can at work, so you can free yourself up for much more interesting work?

 

The Web Christmas is all about performance – how fast you can get everything done.  The deadline is fixed, so you have no choice but to be efficient.  Let your software tools make you more efficient at work.  Take the time to install and configure automation to collect and transform data, manage alerts, build your web sites and post your reports.  The web didn’t make picking our Christmas gifts easier – it just made it easier to buy them.  Software doesn’t take away the need for smart technicians – it simply empowers them to spend their time on analysis, interpretation and decision-making.  My love is performance and capacity planning.  And back in the day, just like my old-style Christmas, I spent way too much time on minutiae, such as running SAS jobs against raw SMF data, massaging that into Harvard Graphics, then writing html code to post to a web site.  What are you doing today that you should have given over to automation a long time ago?

 

Give yourself the best Christmas gift of all this year – the gift of time.  None of us can truly accomplish all we need to do for our jobs without making the most of the software we own.  Make this next year the year that you get smarter about how you do your job.  We could all use a little more time, couldn’t we?



I just returned from CMG 2011 (Computer Measurement Group), a conference that has been supporting performance analysts and capacity planners since the ’70s.  A common theme was the difficulty of getting permission to attend; the diminishing numbers at many conferences highlight the problem.  As in the early days, management seems to see conferences as a boondoggle rather than the valuable learning and networking opportunity they can truly be.

The economy doesn’t help; everyone is looking for a way to save money, and the cost of a conference (registration and travel) can look like a great opportunity to save.  But a lot of the problem rests squarely on us, the attendees. We aren’t selling the value, perhaps because we have lost sight of it.  With the speed of innovation increasing exponentially, “keeping up” is becoming almost impossible.  A technical conference is a great way to jumpstart your learning.  You have the technical training in sessions, but you also have the opportunity to learn from the vendors on-site, as well as to spend time with colleagues who may be struggling with the same challenges you are.

Besides learning what’s new, you have to learn how to improve the way you do your job.  Each year, talented experts are figuring out smarter ways to work and vendors are improving their products to free you from tasks you don’t really have time to do.  But you can’t exploit any of it unless you know about it.  A conference is the fastest way to learn about working smart.

Finally, most of us can get deep in the weeds of our work.  At a conference, we pull our heads up and gain a valuable perspective we can take back with us.  It is a brief pause in a too-busy life, helping us to plan, rethink strategies and make a plan for the following year.  The value of this cannot be measured, but the failure to get this perspective or to get this education will translate into your company (and you) losing your competitive edge.

Financial advisors tell you to “pay yourself first.” By this, they mean that you should make sure that your first dollars earned go into investments, not impulse purchases.  In our careers, we also need to consciously invest in ourselves and one important component of this is to keep up to date.  With an information glut available to us and no time to sift through it all, a conference may be the only way to efficiently gain knowledge.  Make it a priority in 2012; insist on getting the education you need to provide your company with that “edge” they seek.  Then, come home and share what you learned with your team.  Sharing it reinforces it and helps move new ideas from concept to practice.



Are you a capacity planner?  If not, how are you going to manage the food orgy that Thanksgiving offers, an orgy of variety and quantity?  For experts in capacity planning, this challenge should seem familiar.  You only have X amount of capacity – your stomach.  There isn’t going to be a way to buy more space.  But by careful planning, you can stretch that capacity (without pain, in most cases).  The secret is planning.

Start with the menu.  Find out everything you can about the foods planned, especially considering any appetizers that might fill you up before the main event.  Arrange by priority:

Foods you can’t live without:  High priority work

Foods you love: Medium priority work

Foods you like: Lower priority work

Anything else: Discretionary work

Next, you need to understand your workloads (menu items).  Some items, like soup, may take up only a small amount of your stomach resource.  Others, like the rich marshmallow-laden yams, may make a sizable dent in your available space. 

Factor the capacity demands of the foods you love first, so you can see what is possible.  Perhaps you can “run” a little of the yams, if you ignore the medium priority chips and dip served earlier.  Two slices of turkey instead of 4 might mean that there is room for pumpkin pie.  Line up your plan and see how it looks – IEB-Eyeball is always a great start, especially when you do not have a robust capacity modeling tool.

Your menu might look like this:

  • 2-3 appetizer shrimp (they’re always large – Aunt Gena brings wonderful ones)
  • 1 C. of mushroom soup (it’s a liquid – they shouldn’t count)
  • 2 slices of turkey with gravy
  • 3 T of stuffing
  • 3 T of yams
  • 3 T of mashed potatoes
  • 2 T of cranberry sauce (looks good this year – not that horrible canned stuff)
  • 2 T of green bean casserole
  • 2 glasses of shiraz (okay – really, really big glasses)
  • 1 large slice of pumpkin pie with whipped cream

If it looks reasonable – you know you can eat that – then reassess the quantities of each workload (food item).  Otherwise, reassess.  And remember that there may be latent demand waiting for you (food brought by other guests).  So save some room. In fact, you might want to adjust your workloads a little so that you have some “white space,” in case some high priority “work” shows up.  You need to be able to adapt, while still keeping to your plan.  Remember to focus on high priority work – discretionary work may beg to be consumed, but you can’t – you have a plan.

As we move into the actual day, performance management becomes more of an issue – the more turkey and wine, the more sluggish your performance will be.  Tryptophan and ethanol are a deadly combination that not only slow your metabolism, but also can blind you to your plan.  Like looping transactions or memory leaks, the more you activate these workloads, the more difficult planning can be. 

Finally, remember that performance is all about agility.  If you overload capacity, tomorrow, that rush to the mall for Black Friday will seem like a stretch objective.  Bring lots of Tupperware and take home leftovers for when your stomach has the capacity to enjoy more.  Happy Thanksgiving!



MISE EN PLACE FOR CLOUDS

Cloud Insight from the Food Network

 

As a big fan of Top Chef and a reasonably talented home cook, I wondered how I could elevate my talents without the investment of time and money in chef school.  So I began to watch more closely to determine what factors might enable real chefs to produce wonderful cuisine.  There are many answers to this question, but the first is a French term (aren’t all good cooking terms French?) – mise en place.  It is translated as “to put in place,” and to a chef means that everything needed to prepare a dish is ready.  Leeks are cleaned and sliced, sauces prepped, herbs lined up, pans selected, proteins prepped.  Little things matter. Do you preheat your pan before adding the oil, then the protein?  Everything comes together like magic when you ensure you are ready to cook. 

Clouds are considered data center magic, but unless you take the time to prepare before you launch your cloud, the results will be less than optimal.  There are many considerations – public, private or hybrid, which platforms, and so on – but a key decision that is too often left to last is how you will manage your cloud.  No matter how you design your applications (or how much or how little control you have over the infrastructure), you will need certain abilities in place before your cloud goes into production.  Your mise en place should include:

  • Continuous Deep-Dive Diagnostics — Continuously collect data so that you have it right where you need it, when you need it.  Inventory the cupboard to be sure you have all the ingredients you need for the dish
  • 20/20 Visibility — Just the right data, at the right time.  Only have the ingredients you need at hand, but be sure to have them all.
  • End User Experience Measurement – knowing exactly what your customers are experiencing, before they are unhappy and leave your site.  Taste your dish early and often, so that you can correct issues before your diner tastes the dish.
  • Real Time Behavior Learning – learning what normal behavior is and automatically detecting problems, so they can be solved more rapidly.  Chefs make dishes many times before serving them to guests.  Make sure you know how the dish should taste before you bring it to production.  (A minimal sketch of this idea follows the list.)
  • Service Availability and Performance Management – avoid outages and reduce downtime by proactively monitoring and managing your end-to-end business transactions.  Keep tasting and correct your dish as you go. 
  • Capacity Optimization – Be prepared to handle changes in capacity needs, both because of increased transaction demand as well as changes in the transaction mix.  This means understanding all the resources involved and keeping on top of their utilization, as well as being able to project impact if one resource is in demand.  Chefs frequently have to adjust to changing numbers of diners; you will too.
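As promised above, here is a minimal sketch of the behavior-learning idea: learn a baseline from recent history and flag samples that stray far from normal. The metric and its values are illustrative, and a production tool would continuously update the baseline and drive alerts from it.

```python
# Learn a simple baseline (mean and spread) from recent samples and flag
# new samples that deviate by more than three standard deviations.
import statistics

history = [410, 425, 398, 440, 415, 432, 405, 420]   # e.g. transactions per minute
new_samples = [418, 640, 402]

mean = statistics.fmean(history)
stdev = statistics.stdev(history)

for value in new_samples:
    z = abs(value - mean) / stdev if stdev else 0.0
    status = "anomaly" if z > 3 else "normal"
    print(f"sample={value:>5}  z={z:4.1f}  {status}")
```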

Perhaps you were hoping that it would be as simple as turning on a switch.  But getting this right, in advance, will ensure that you can satisfy your customers, increase profits and reduce risks.  Become a cloud chef, implementing a mise en place plan for your cloud. 
