
Optimize IT

6 Posts authored by: Fred Breton


I'm sure some of you read the title and thought: "Where is this guy going, associating content management with automation?" So let's take a quick look at what these two things are:

 

  • Content management is an inherently collaborative process. Its goal is to give the right people access to the right content, in a form they are able to consume.

 

  • Automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. Its goal is to let people focus on the business while keeping the production line efficient, responsive and predictable at the lowest possible cost.

 

In the data center, the goal of automation is a responsive IT that provides services in an efficient, reactive and predictable way and, guess what... at the lowest possible cost. Usually the first focus is to automate repetitive tasks with no added value, then progressively move on to more complex tasks. The main requirement when people look for tools in this area is ease of creating and maintaining content, so it is natural to split tools by technical and functional area: server, network, storage, virtualization, application, process, help desk... Each team focuses on its own expertise and needs, creating content to serve its customers, but only in its specific area.

While this approach provides some results, increasing efficiency and reducing cost in some areas with good ROI, most organizations still struggle with governance, policy enforcement and bottlenecks around highly skilled people. Here are some typical examples:

  • Changes are not documented: a sysadmin can manage 20 times more servers than before, but documenting a change now takes more than twice as long as applying it, because he has to re-enter in another tool data he already has in his own. So he just doesn't do it anymore.
  • People still execute manually tasks for which automation content already exists, because they don't know about it, or they don't think they'll save time since the way to use that content is too complex.
  • People can specify their need exactly, and it could be met with existing automation content, but that content is in a form that makes no sense to them, so they have to go through experts who are involved in every project and become a bottleneck.
  • A task spans several domains and therefore requires knowledge of each automation tool, even though the information needed to specify the original request requires no domain-specific knowledge. Several experts have to be involved every time.

 

I could find many other examples. This is mostly a content management problem: publishing the right data to the right place, with the right credentials, in the right format, so that it is usable by the targeted end user. That means the same automation content may need to be published in different ways (formats) depending on the targeted user, and it has to be easy for that user to consume if you want to get the highest value. That's no big news: to achieve this you need a platform, a real platform, and not a bunch of tools each able to work only in its own area.
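To make that idea concrete, here is a small, purely illustrative Python sketch (not any BMC product API; the job name, roles and renderings are invented) of one piece of automation content published in different forms to different audiences: the expert sees and tunes every parameter, while the application owner gets a simple pre-filled form with only the choice that matters to him.

    # Illustrative only: one automation job, several presentations per audience.

    JOB = {
        "name": "restart_app_server",
        "command": "restart.sh --instance {instance} --timeout {timeout} --force {force}",
        "parameters": {"instance": None, "timeout": 300, "force": False},
    }

    # How each audience is allowed to see and fill the same content.
    VIEWS = {
        "expert":    {"editable": ["instance", "timeout", "force"]},   # full control
        "app_owner": {"editable": ["instance"]},                       # one-click style form
    }

    def render(job, audience):
        """Return the job as the given audience consumes it: editable fields only,
        everything else locked to the value the expert published."""
        view = VIEWS[audience]
        form = {}
        for name, default in job["parameters"].items():
            form[name] = {"value": default, "editable": name in view["editable"]}
        return {"job": job["name"], "form": form}

    print(render(JOB, "expert"))
    print(render(JOB, "app_owner"))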

 

So when you're building DCA, even if you need to go step by step, make sure you're using building blocks that are part of a real platform.




A big challenge for companies has been to accelerate the development of their business applications, to provide more value to their customers or to address a new market before their competitors. J2EE platforms deliver this value by letting developers ignore the target infrastructure and easily reuse existing modules. To put it simply, developers now deal only with the business logic: they don't need to think about the fact that the application will run on a cluster, on a specific OS, using a specific database, and they don't need to implement messaging services. Basically, they don't need to care about the lower layers and can focus on business logic.

The point is that while developers no longer need to deal with the low-level layers, thanks to the abstraction provided by J2EE platforms, those platforms still need to be set up, managed and maintained by operations. If the J2EE platform brought more productivity for development, it added complexity for operations. The bottom line is that moving an application from test to production can take more time than developing the application itself, and operations become a bottleneck.

 

More and more companies try to automate J2EE platform management, and many of them have a lot of scripts to do so. Each time an upgrade is done, they need to review or rebuild those scripts. The need for a solution to automate this area is clear, and some try to use server automation tools to achieve it. Usually, server automation tools won't provide much more value than homemade scripts. Why? Because server automation tools are server-centric, while an application or a J2EE platform is not. For example, when I want to configure a JDBC provider, I don't want to set up this service for a specific OS instance or physical server; I want to set it up for a Java server (JVM) or a cluster that is part of my J2EE infrastructure. And in the latter case, a cluster may mean several physical servers at the lower layer.
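As a rough illustration of what "not server-centric" means here, this is a sketch of the kind of wsadmin (Jython) call you would make against WebSphere to create a JDBC provider at cluster scope rather than on a given OS instance. The cell, cluster and driver names are placeholders, and attribute details vary by WebSphere version, so treat it as a sketch rather than a recipe.

    # Sketch of a WebSphere wsadmin (Jython) session -- only runs inside wsadmin.
    # Target the *logical* cluster, not a physical server or OS instance.
    clusterScope = AdminConfig.getid('/Cell:myCell/ServerCluster:appCluster/')

    # Create the JDBC provider once at cluster scope; it then applies to every
    # member JVM, however many physical servers happen to back the cluster.
    AdminConfig.create('JDBCProvider', clusterScope,
                       [['name', 'AppJdbcProvider'],
                        ['implementationClassName',
                         'oracle.jdbc.pool.OracleConnectionPoolDataSource'],
                        ['classpath', '${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar']])

    AdminConfig.save()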

 

For automation in a J2EE context, I need a tool that lets me create a package for a service or an application that is independent of the topology of my J2EE infrastructure, since the topology may differ across the environments used to test and qualify the application before pushing it to production. Without this kind of feature, I need to create a package for each kind of topology and environment, which seriously decreases automation capability and value.
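The following plain-Python sketch (no product API; package, environment and parameter names are invented) shows the idea: one package declares a logical target, and the actual topology is resolved per environment at deployment time.

    # Illustrative model of a topology-independent package: the package names a
    # logical target ("app_tier"); each environment maps that target to whatever
    # topology it really has (a single JVM in test, a three-node cluster in prod).

    PACKAGE = {
        "artifact": "order-service.ear",
        "logical_target": "app_tier",
        "defaults": {"heap_mb": 1024},
    }

    ENVIRONMENTS = {
        "test": {"app_tier": ["jvm-test-01"],
                 "overrides": {"jdbc_url": "jdbc:oracle:thin:@test-db:1521/APP"}},
        "prod": {"app_tier": ["node-01", "node-02", "node-03"],
                 "overrides": {"jdbc_url": "jdbc:oracle:thin:@prod-db:1521/APP",
                               "heap_mb": 4096}},
    }

    def deploy(package, env_name):
        """Same package, any topology: resolve the logical target and parameters
        for the chosen environment, then deploy to every resolved member."""
        env = ENVIRONMENTS[env_name]
        params = dict(package["defaults"], **env["overrides"])
        for member in env[package["logical_target"]]:
            print("deploy %s to %s with %s" % (package["artifact"], member, params))

    deploy(PACKAGE, "test")
    deploy(PACKAGE, "prod")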

Being able to manage J2EE platforms through the abstraction layer, with topology independence, is even more critical with virtualization and cloud computing, as the topology can change very fast.

 

The conclusion is that to get real value from automation on J2EE platforms, the automation tool needs to address those platforms through their native APIs, with enough abstraction, package parameterization and topology independence. Low-layer management capabilities are also required for the initial provisioning of the J2EE infrastructure. The good news is that BARA can help achieve J2EE platform management automation. First of all, BARA sits on top of BSA, which handles the initial provisioning of J2EE platforms, such as installing WebSphere or WebLogic. ARA then adds capabilities that are not server-centric: it targets J2EE objects with enough abstraction to act the same way on different J2EE platforms and to provide topology independence. What I mean is that with ARA I can deploy the same application package to either a standalone server (JVM) or a cluster, which enables real application release management. I get J2EE platform management automation with infrastructure independence, from physical servers to the cloud through virtualization.

Fred Breton

I had a dream ....

Posted by Fred Breton May 16, 2011



I had a dream, and it was not about electric sheep but about freedom, freeing up time and doing more. Before sharing my dream, I should explain the context in which it happened.

 

These last few days I was configuring an environment that required several building blocks to get an application running. Looking at what needed to be installed, I realized that 80% of it was things I had already installed over the last 5 years, and at least 15% of the remaining 20% had already been done by people I know. Even more, at least 99% of what I had to do has surely already been done hundreds of times by people in the data center community. The environments I build are used for demos or tests, and... I work on automation.

As usual, I started by looking in my "private catalogue", meaning the directory structures, packages and scripts I have in various automation environments. At the end of the day, I had found less than 20% of what I needed, for various reasons:

  1. Some of my content was crappy, not parameterized enough, or too specific
  2. Some was no longer in my storage (or I just couldn't find it... try finding a script you haven't used for 3 years)
  3. My colleagues' content was not parameterized and/or documented well enough

 

The bottom line is that I did 90% of it manually, because the clock was ticking and I had no time to improve the content I had, or to wait for others to provide usable content.

 

In the meantime I got some e-mails asking for help on topics where I couldn't help efficiently, or didn't even answer, because I had no easy, immediately usable content to provide (point 1 above) and I was already chasing cycles to get through everything on my plate.

 

I was so disappointed that such a story could happen to me while I work on automation, in the age of cloud and social media. It followed me into the night, and I had a dream...:

 

I was in front of a web UI, designing the architecture of the environment I needed to build. I was specifying devices (server, network, storage...) and their relationships according to their roles, and dragging and dropping components (OS, patches, middleware, DB, software, hardening level, network zone...) from a centralized catalogue (in the cloud) containing private content, public content and content shared within the groups I belong to. From the catalogue, I could see a thread of comments for each piece of content and how many times it had been successfully used per context, and I had access to advice and shared experience...

I created the parameterization relationships between my components, and the template of my service was done. I was ready to deploy. To do that, I just needed to map the template to an environment composed of VMs, physical devices and cloud resources, and then I requested a deployment. Everything went well, so I published the service in the catalogue, giving access to the various groups that might need it and adding a description. Some information was added automatically:

- one successful deployment, and the kind of environment on which it happened,

- all sub-components were updated to increment their number of successful deployments in the relevant context (kind of OS, version...),

- a post to the community groups I had given access to.

 

While the deployment was running, I had a look at the posts and saw a request from a PS guy who was on site. He wanted to know if someone had already created rules for some specific compliance checks. In 3 clicks I gave him access to a piece of my private content, created a month earlier, that should do the job. He got access to it immediately from the customer site, and 30 minutes later, after a few checks, the job was done for the customer. Even better, he added a few improvements to the content I had given him.

 

BMC Software has the building blocks to provide this service to data centers with its BSM platform. I don't actually manage a data center, but I need this kind of service, as do many people building environments for test, demo, training or production, whatever the size of their environment or company. BMC could provide it by putting BSM on a SaaS architecture with multi-tenant and community capabilities.

 

How many of you have had the same dream?

Fred Breton

DCA or Cloud?

Posted by Fred Breton Feb 28, 2011


In my last post I spoke about the importance of delegation in getting value from DCA. We saw that one important point was the ability to segregate duties, so that people can be restricted to executing only what they need to do. That is the first condition for expert teams to accept delegating the execution of tasks that impact the environment they manage.

 

Successfully delegating a task, even to the requester himself, means more than just putting the right credentials on some content. People need to know that they have access to this content, that they can execute it directly, and how to execute it. So you need a solution that allows the people who create the content to publish it to the people who consume it. You typically need a service catalogue where publishers (the expert teams) can publish their offerings to the right audience, and where consumers can access the offerings they need according to their role in the company.
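As an illustration of that publish/consume split, here is a small Python sketch (invented offering and role names, not any product API) of a catalogue where expert teams publish offerings to specific roles, and a consumer sees only the offerings published to his role.

    # Illustrative service catalogue: experts publish offerings to an audience of
    # roles; consumers only see (and can request) what was published to their role.

    CATALOGUE = []

    def publish(offering, owner_team, audience_roles):
        """An expert team publishes an offering to the roles allowed to consume it."""
        CATALOGUE.append({"offering": offering,
                          "owner": owner_team,
                          "audience": set(audience_roles)})

    def browse(role):
        """A consumer's view of the catalogue, filtered by his role."""
        return [entry["offering"] for entry in CATALOGUE if role in entry["audience"]]

    publish("Provision functional test environment", "sysadmin-team",
            ["software_manager", "qa_lead"])
    publish("Refresh test database from snapshot", "dba-team", ["qa_lead"])

    print(browse("software_manager"))   # ['Provision functional test environment']
    print(browse("qa_lead"))            # both offerings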

 

Considering this and various posts on this blog, I need to ask a question: what are the main differentiators between a solution able to meet DCA requirements and a solution able to meet cloud computing requirements?

 

Thanks for sharing your thoughts on this point.

Fred Breton

Is it OK to delegate?

Posted by Fred Breton Jan 18, 2011



"Is ok to delegate?" is one of the major question when you want to get value from Data Center Automation (DCA). Let's see why and let's try to understand what should be part of Automation solution to provide delegation capabilities.

 

What is expected from DCA is to do more with less: more productivity, more quality, and faster, because time to market is critical in an economic environment with so much competition. Automation makes it possible to move the execution of tasks to less skilled people, people who are not experts but just need to get something done. Everybody understands that I can drastically reduce the time between when a request is initiated and when it's done if the requester can execute it himself, just by "pushing a button": the execution is delegated to the requester. Through a story that happened to me, let's see what the first capability is that a DCA solution needs if we want to promote execution delegation and so reduce cost and time to market.

 

Some months ago, I was involved in a proof of concept about automation whose main goal was to show how much we could reduce the time needed to provide a test platform to software managers, so they could run their functional tests when development delivered a new release. The average time to build such an environment was 10 weeks. When I first looked at what needed to be done, I thought the job would be easy, as I could not see how I would fail to reduce this to two days at most.

So I started the POC by reviewing, with each technical team involved in the process, exactly what they were doing and how. I was very surprised to discover that almost all the tasks were already automated through scripts.

 

I dug deeper into the process to understand why it took so much time. I discovered that the teams involved were mostly expert teams, such as DBAs, sysadmins, WebSphere admins and so on, who receive requests from various sources, with the highest priority going to operations. So this kind of provisioning or environment check went onto their to-do list with medium priority and took an average of a week to become the active task. As you can imagine, I asked them why they were not delegating the execution, since they had already automated the task to the point where a monkey could do it.

They were not delegating because their scripts needed a high level of privilege to run, and there was no way people outside their expert teams could be given that level of privilege: the risk would have been too high, and they didn't want to take on that responsibility. When I showed them how my solution could efficiently segregate duties, allowing their scripts to run with the right privilege restrictions according to the role of the person we were delegating to, they had no issue delegating. The result was a big win: from 10 weeks to 2 days.

 

The bottom line is that, most of the time, experts have already automated a lot of tasks, but IT doesn't get much value from this automation because of the lack of delegation. The first main lock is segregation-of-duties capability. That's why, if you want to get big value from DCA, you need to choose a solution that provides granular Role-Based Access Control to achieve the right level of segregation of duties, the first requirement for enabling task delegation. There are other helpful points that I will address in a future post.
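A minimal sketch, in plain Python with invented roles and jobs (not any specific product's RBAC model), of the segregation-of-duties idea: the expert's script keeps running under its own privileged account, but only requesters whose role has been explicitly granted that one job can trigger it.

    # Illustrative RBAC gate around a privileged automation job: the requester never
    # gets the privilege itself, only the right to trigger this specific job.

    ROLE_GRANTS = {
        "software_manager": {"provision_test_env"},      # can trigger only this job
        "sysadmin":         {"provision_test_env", "patch_os", "restart_middleware"},
    }

    JOBS = {
        "provision_test_env": {"runs_as": "root", "script": "/opt/automation/provision.sh"},
        "patch_os":           {"runs_as": "root", "script": "/opt/automation/patch.sh"},
    }

    def execute(job_name, requester_role):
        """Run the job under its own privileged account if, and only if, the
        requester's role was granted that job."""
        if job_name not in ROLE_GRANTS.get(requester_role, set()):
            raise PermissionError("%s is not allowed to run %s" % (requester_role, job_name))
        job = JOBS[job_name]
        print("running %s as %s for %s" % (job["script"], job["runs_as"], requester_role))

    execute("provision_test_env", "software_manager")   # allowed: delegated execution
    # execute("patch_os", "software_manager")           # would raise PermissionError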

Fred Breton

The Chicken and the Egg

Posted by Fred Breton Oct 26, 2010


Everybody knows this question used to confuse a kid (or even an adult ;-)): "Which comes first, the chicken or the egg?" I would compare it to someone asking me: "What should we implement first, automation of provisioning or automation of compliance and configuration checking?"

 

I don't know the answer to the first question, but what I do know is that without chickens you can't get eggs, and without eggs you can't get chickens. So the bottom line is that you need both, and who really cares which comes first when it's time to get food on the plate. It's exactly the same with Data Center Automation (DCA): you need both automated provisioning and automated compliance checking, and they need to be integrated.

 

If you try to automate provisioning without automated configuration checking, and you don't want a lot of failures, you'll need to manually check that target environments meet the prerequisites, which means you won't get as much value as expected. I remember, some years ago, a customer who wanted to test BladeLogic for server provisioning, in particular full-stack provisioning and the independent deployment of technical agents, middleware and application upgrades. Their needs were to reduce the cost of provisioning and the time from request to delivery. When I started to explain that to meet their goals they also needed to automate configuration checks, their answer was that they knew their environment very well, and that they had a strong build and change management process that guaranteed control and knowledge of the configuration. I told them I didn't understand why we were there if provisioning was already fully automated. When I learned that a strong provisioning process didn't mean a fully automated one, I proposed using our solution to audit 10 rules of their configuration policy on 100 servers, to test how well they followed their own process. Guess what: 50% of their servers were not fully compliant, with some severe security issues and configuration drift.

 

If you automate compliance without automating provisioning, you get a clear view of your compliance level against your policy, but no mechanism to remediate the drift efficiently, so you end up spending most of your operational resources just remediating compliance drift. I remember the case of a bank that bought a dedicated tool to check compliance; the tool did its job very well, so management could get a clear picture of compliance levels. However, there were thousands of configuration issues against the policies, and week after week they saw little improvement. Operations were not able to remediate efficiently enough. It became clear to them that compliance-check automation and provisioning automation needed to be integrated.

 

Behind this lies a basic reality: to automate operations you need to normalize, but the norm changes over time with technology evolution and business requirements. Enforcing normalization means being able to check and remediate drift efficiently. So, as with the chicken and the egg, if you want to achieve DCA and get value from it, it doesn't matter where you start, compliance automation or provisioning automation; you need both, integrated.
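To show what "both, integrated" can look like in practice, here is a small illustrative Python sketch (the rules, settings and server data are invented) of a compliance check whose failures feed the remediation side directly, instead of ending up as a report nobody can act on.

    # Illustrative integration of compliance checking and remediation: each rule
    # carries both its check and the provisioning action that fixes the drift.

    RULES = [
        {"name": "ntp_configured",
         "check": lambda server: server.get("ntp") == "ntp.corp.example",
         "remediate": lambda server: server.update(ntp="ntp.corp.example")},
        {"name": "ssh_root_login_disabled",
         "check": lambda server: server.get("permit_root_login") == "no",
         "remediate": lambda server: server.update(permit_root_login="no")},
    ]

    def enforce(servers):
        """Audit every server against every rule and remediate the drift in the
        same pass, so the compliance report and the fix never diverge."""
        for server in servers:
            for rule in RULES:
                if not rule["check"](server):
                    print("%s drifted on %s -> remediating" % (server["name"], rule["name"]))
                    rule["remediate"](server)

    fleet = [{"name": "srv01", "ntp": "pool.ntp.org", "permit_root_login": "yes"},
             {"name": "srv02", "ntp": "ntp.corp.example", "permit_root_login": "no"}]
    enforce(fleet)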
