
Richard Voninski

Tool Evolution

Posted by Richard Voninski Jan 31, 2011


Friends know I am passionate about my photography. I am always trying to hone my craft, looking for the latest and best techniques to produce the best results. HDR (high dynamic range imaging) is the technique of taking multiple photographs of the same scene at different exposures and then combining them to get better shadow detail and highlights. Without blending images, it is sometimes very hard to achieve an image with a good balance of shadows and highlights. The idea for HDR has been around since the early 2000s, and some people have achieved great results with it, but it has been very hit or miss. In the mid-2000s some companies started to write tools that allowed us to combine images, but again the results were hit or miss and the tools had controls that were difficult to use and understand. Only recently have companies started to put together tools that are easy to use, produce repeatable results, and have controls that make sense. The image here is an example of HDR and is a blend of 5 images taken at 1/3-stop intervals.

 

[Image: _MG_2034_HDR.jpg, an HDR blend of five bracketed exposures]
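For the curious, here is a minimal sketch of one common software approach to blending bracketed exposures, using OpenCV's Mertens exposure fusion. The file names are hypothetical, and this is just one of several HDR techniques, not necessarily the tool chain used for the image above.

```python
# A minimal sketch of exposure fusion, one common HDR-style technique.
# File names are hypothetical placeholders for five bracketed shots.
import cv2
import numpy as np

# Load the bracketed exposures of the same scene (hypothetical file names).
exposures = [cv2.imread(f"bracket_{i}.jpg") for i in range(5)]

# Mertens fusion weights each frame by contrast, saturation and
# well-exposedness, so no exposure-time metadata is needed.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # float32 image with values in [0, 1]

# Convert back to 8-bit for saving or display.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```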

 

I started to think about how BMC's tools are very much like the HDR example I illustrate above. We have a vast collection of software with varying capabilities (and ease of use), and we strive to refine, enhance and integrate these solutions so that we can solve the problems our customers give us. When I first came to BMC 4 years ago, the idea of being able to choose a specific type of computer and service from a browser and have all of the components (server, software, configurations, monitoring, network containers, CMDB, etc.) kicked off automatically was definitely a seed in our minds, but a lot of work needed to be done to make it work. So we went through a variety of stages to develop the solution (which we call CLM – Cloud Lifecycle Management), which required customer input, engineer input and a hell of a lot of brain (I mean sweat) equity. Much like the HDR example, this solution is evolving over time and requires the attention of engineers and developers, and the ideas of customers, in order to grow.

 

One last point I would like to make. Recently I have been working with a customer who is beta testing our 8.1 BladeLogic for Servers solution. I ran a week-long seminar where we went through all aspects of the server lifecycle managed through BladeLogic. This customer is a large Solaris shop and needed to provision virtual zones. Since they were running on our beta version, I engaged our development team directly for some detailed discussions. The developers not only provided the information required to teach the customer, they also actively solicited ideas from them on how to improve the product and wanted to clearly understand how the customer would use the functionality within their organization. I have been on other POCs (proofs of concept) where specific use cases that customers asked for became 'out of the box' functionality in the next release of the software. We need your help to understand your organization's functions and how you would like our software to work so that we can enhance (and improve) its functionality.

Bill Robinson

When Dev is Prod

Posted by Bill Robinson Jan 26, 2011


Many large environments have replica environments used for testing and development. The names may vary, but you end up with DEV, TEST and PROD. DEV is typically pretty open and loose; TEST and PROD are more locked down. Many times these networks are segregated for "security purposes," which seems to be code for lazy network administrators, or because of that one time someone shut down the wrong (production) database. What often happens in these environments is that the management tools (like BBSA) brought in to manage them also get classified as DEV, TEST and PROD and treated as such.

 

Here’s the problem with that: managing the DEV and TEST systems is a production activity. If you cannot manage your DEV systems, that is a production-level outage, because you are now impacting your ability to push change through your environment. Let’s say your BBSA goes down because you were tinkering with it – it’s DEV, after all. Now you need to push an emergency code change through the deployment and validation process. No BBSA? Time for a manual workaround that may or may not result in untested code getting to production. TEST monitoring upgrade fails? The QA person who spent 3 hours trying to figure out why an application won’t load might have wanted to know that a service wasn’t running. Just because a system is in the DEV environment doesn’t mean it should be treated like a DEV system. So while you do need a place to test configuration changes and updates to your infrastructure systems, you should not be using those same systems to manage the systems that matter to day-to-day operations.

 

And a word about environment separation: physical separation between networks often makes management and deployment much more complicated. You now have multiple infrastructure systems to manage, and you are manually moving data between multiple environments, which is error-prone. Even with some automation, you will still need to synchronize logins, permissions and application content (e.g. a folder structure in BBSA). Understandably, you don’t want spillover between the environments, and you want to make sure the right people have access to the right systems – this is where Role-Based Access Control really helps. It gives you granular control over the who and the what. Allowing some level of connectivity between environments lets you do easy comparisons between DEV and PROD to see what changed (and maybe why something works here and not there) – assuming the permissions are set up appropriately, of course. Separation also means additional infrastructure to maintain, and additional cost.
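To make that comparison point concrete, here is a minimal sketch that diffs configuration files exported from DEV and PROD. The directory layout is hypothetical; in practice a tool like BBSA would gather and compare this content for you, with the appropriate permissions.

```python
# A minimal sketch of a DEV-vs-PROD comparison: diff config files that were
# exported from each environment. Paths below are hypothetical placeholders.
import difflib
from pathlib import Path

DEV = Path("exports/dev/etc")
PROD = Path("exports/prod/etc")

for dev_file in DEV.rglob("*.conf"):
    prod_file = PROD / dev_file.relative_to(DEV)
    if not prod_file.exists():
        print(f"only in DEV: {dev_file}")
        continue
    # Produce a unified diff so you can see exactly what changed and where.
    diff = list(difflib.unified_diff(
        dev_file.read_text().splitlines(),
        prod_file.read_text().splitlines(),
        fromfile=str(dev_file), tofile=str(prod_file), lineterm=""))
    if diff:
        print("\n".join(diff))
```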

Fred Breton

Is it OK to delegate?

Posted by Fred Breton Jan 18, 2011



"Is ok to delegate?" is one of the major question when you want to get value from Data Center Automation (DCA). Let's see why and let's try to understand what should be part of Automation solution to provide delegation capabilities.

 

What is expected from DCA is to do more with less: more productivity, more quality, and faster, because time to market is critical in an economic environment with so much competition. Automation enables moving the execution of tasks to less skilled people, to people who are not experts but just need to have something done. Everybody understands that I can drastically reduce the time between when a request is initiated and when it is done if the requester can execute it himself by just "pushing a button": the execution becomes delegated to the requester. Let me tell a story that happened to me to show the first capability a DCA solution needs if we want to promote execution delegation, and so reduce cost and time to market.

 

Some months ago, I was involved in a proof of concept about automation where the main topic was to show the capability to reduce the time to provide a test platform to software managers, so they could run their functional tests when a new release came from development. The average time to build such an environment was 10 weeks. When I first looked at what needed to be done, I thought the job would be easy, as I could not see how I wouldn't be able to reduce this to two days at most.

So I started the POC by reviewing with each technical team involved in the process what exactly they were doing and how. I was very surprised to discover that almost all the tasks were already automated through scripts.

 

I went deeper into the process to understand why it took so much time. I discovered that the teams involved were mostly expert teams, like DBAs, sysadmins, WebSphere admins and so on, who had requests coming from various sources, with the highest priority going to operations. So this kind of provisioning or environment check went onto their to-do lists at a middle level of priority and took an average of a week to become the active task. As you can imagine, I asked them why they were not delegating the execution, since they had already automated the task to the point where a monkey could do it.

They were not delegating because their scripts needed a high level of privilege to execute, and there was no way people outside their expert teams could have that level of privilege: the risk would be too high, and they didn't want to take on that responsibility. When I showed them how my solution could efficiently segregate duties, allowing execution of their scripts with the right privilege restrictions according to the role of the person we were delegating to, they had no issue delegating. The result was a big win: 10 weeks down to 2 days.

 

The bottom line is that experts have usually already automated a lot of tasks, but IT doesn't get much value from this automation because of the lack of delegation. The first main lock is segregation-of-duties capability. That's why, if you want to get big value from DCA, you need to choose a solution that provides granular Role-Based Access Control to achieve the right level of segregation of duties; that is the first requirement for enabling task delegation. There are other helpful points that I will address in a future post.
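To make the segregation-of-duties idea concrete, here is a minimal sketch of role-based gating around pre-automated tasks. The roles, task names, and script paths are hypothetical; a real DCA product enforces this with its own RBAC model rather than an in-script lookup.

```python
# A minimal sketch of role-based delegation: a requester may trigger a
# privileged, pre-automated task only if their role grants exactly that task.
# Roles, tasks, and script paths below are hypothetical.
import subprocess

ROLE_TASKS = {
    "test-requester": {"provision-zone", "check-environment"},
    "dba":            {"provision-zone", "restore-db", "check-environment"},
}

def run_delegated(role: str, task: str, script: str) -> None:
    """Execute a pre-approved automation script only if the role allows it."""
    if task not in ROLE_TASKS.get(role, set()):
        raise PermissionError(f"role {role!r} may not run {task!r}")
    # The script itself runs with the privileges it needs (e.g. via sudo
    # rules scoped to this one script); the requester never gets a shell.
    subprocess.run(["sudo", script], check=True)

run_delegated("test-requester", "provision-zone",
              "/opt/automation/provision_zone.sh")
```

The point of the design is that the requester is granted one button, not the expert's privileges: the privileged script stays owned by the expert team, and only the right to invoke it is delegated.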

Michael Ducy

Scripty

Posted by Michael Ducy Jan 14, 2011


[Cartoon: scripty_ep1_big.png]

 

Does the above cartoon seem all too familiar to you?  Have you been in a situation where a talented scripter wreaked havoc on production systems, causing the business to lose money?  Personally, I have been on both sides: I have been Scripty McScripterton, and I have worked with Scripty.  Scripty is often the main source of automation in an IT organization, and can often be the main cause of IT pain.

 

While the ability to think rapidly and design solutions to problems is a great trait, the ever-growing reliance on IT systems to run the revenue-generating functions of the business means that Scripty (like you and me) has to be brought into the fold.  The problem with bringing such people into the fold is that Scripty often sees the increased need for process as a way to limit and control his (or her) capacity for free, innovative thought.  For example, I worked at a dot-com company where Scripty ran roughshod through the organization.  A new VP of Operations attempted to roll out ITIL processes to rein in the loose cannons, but ended up failing because Scripty and his peers saw the increased need for process as a constraint on their work.  They never saw (nor was it sold to them) that the process was there to make the entire organization - and their individual lives - better.

 

When reining in Scripty, organizations should keep the following in mind:

 

  • Show Scripty "what's in it for them" - Show Scripty and his peers that it is in their best interest to adopt new processes.  Tighter processes and better control often lead to less downtime, higher availability, and better performance, which in the end means less after-hours work for operations teams.  In addition, better operational performance should translate to stronger business performance.  Bonuses could thus be tied to achieving these operational goals, and tighter processes are one route to achieving those bonuses.

 

  • Make process easy for them - In my days as Scripty, the last thing I wanted to do was fill out a change request.  I would often call the NOC and request that they open the change for me.  Once the change was approved, I could proceed with my work.  While this worked at the time, it is often more effective to find products with native integration to change management systems.  For example, ensure that your server automation suite can automatically open changes for tasks and can automatically execute those tasks once the change is approved (see the sketch after this list).  Additionally, work with your Change Management team to find tasks that can be preapproved - requiring a change ticket, but automatically approved by the Change Board - allowing work to proceed unhampered.

 

  • Include Scripty in the decisions - Nothing spells doom for a new initiative faster than excluding people from the decision process.  Include the smart, influential people you most want to adopt the new process.  For example, include the smart network administrator who is respected by his peers in an initiative to roll out a Network Automation solution.  This person can act as your champion to the rest of the group - selling the solution while you are not there, defending the project's goals, and bringing others along to support the project.
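Since the second bullet mentions native change-management integration, here is a minimal sketch of what that automation might look like: open a change ticket through a REST API, wait for approval, then run the task. The endpoint, fields, and script path are all invented for illustration; a real ITSM system has its own API and workflow.

```python
# A minimal sketch of change-management integration: open a change, poll
# until it is approved, then execute the pre-approved task. The endpoint,
# payload fields, and script path below are hypothetical.
import time
import subprocess
import requests

ITSM = "https://itsm.example.com/api/changes"   # hypothetical endpoint

def run_with_change(summary: str, task_cmd: list[str]) -> None:
    # Open the change ticket (here assumed to be a pre-approved type).
    change = requests.post(ITSM, json={"summary": summary,
                                       "type": "pre-approved"}).json()
    change_id = change["id"]
    # Poll until the Change Board (or the auto-approval rule) signs off.
    while requests.get(f"{ITSM}/{change_id}").json()["state"] != "approved":
        time.sleep(60)
    # Only now does the actual work run, with a ticket on record.
    subprocess.run(task_cmd, check=True)

run_with_change("Restart app pool on web01",
                ["/opt/automation/restart_pool.sh"])
```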

 

Scripty is a valuable member of many organizations.  They possess a can-do, innovative personality that is indispensable for solving problems in an organization.  However, left unchecked, Scripty can impact, or already has impacted, the effectiveness of your IT organization.  But take heart: Scripty can be a productive and contributing member of an IT team.  After all, I'm living proof.



Happy New Year, and we apologize for our long hiatus. I have been thinking about what to write, and the idea that comes to mind first is New Year’s resolutions. We all make them – exercise more, stop smoking, eat better. We don’t always keep them, but they serve a very useful purpose. I find that, as I get older, there can sometimes be an inexorable sense of inertia. In other words, the natural state of things is to continue on the same course rather than to reexamine things and introduce changes. So the New Year’s resolution is a very useful and convenient reason to ask: “If I could change something, what would it be, and how would I change it?”

 

So, now you may be asking yourself whether this has turned into some kind of confessional blog, or whether I will be addressing IT. I assure you, I do have an application to IT in mind. Doesn’t IT suffer from the same problem? Don’t most IT organizations struggle with inertia? Once a team of people, however well-trained and motivated, does something a certain way for long enough, it becomes very difficult to change. The natural human tendency to resist change often prevents IT operations from improving. Doesn’t IT need New Year’s resolutions too? I’ll answer for you – YES.

 

So, what could an IT team do? I think the same suggestions that you have heard for an individual’s resolution apply here. I’ll give you my perspective:

 

  • Keep it Simple and Focused
    • For a resolution to motivate, it needs to be easy to understand and straightforward to explain to others. Maybe “Reduce Application Failures by 15%” is better than “Eliminate all preventable issues with end-user acceptance and application releases”.

  • Make it Achievable
    • The temptation is to shoot high in the euphoria of the moment, and then lose steam when it quickly becomes clear the goal is not achievable. An example would be “Achieve 90% CIS compliance for production servers” rather than “Achieve full compliance on all servers”.

  • Make it Measurable
    • If you can’t measure it, then how do you know you succeeded? Focusing on metrics helps define a finish line and your progress toward the goal. For example – “Reduce Application Release failures by 20%” rather than “Achieve more Successful Application Launches”.

 

So, good luck with your personal and your IT resolutions, and remember that fighting the inertia of low expectations is more important than achieving individual goals.
