Optimize IT

13 Posts authored by: Michael Ducy

I’ve participated in and listened in on some interesting discussions this week regarding Cloud Computing. The first was a discussion in which Massimo Re Ferrè attempted to create what he called the Cloud Magic Rectangle (more on that in a minute). The second was a discussion between Massimo and Randy Bias. While I didn’t track the entire conversation, one thing Randy said really stuck out: the concept of organizations that lean forward vs. organizations that lean back.


The Cloud Magic Rectangle

Let’s first look at the Magic Rectangle. As you might have realized, the name pokes fun at Gartner’s Magic Quadrant concept. The goal was to bucket different Cloud solutions into 3 categories: Orchestrated Clouds, Policy Based Clouds, and Design for Fail Clouds. Each category has various characteristics based on the value proposition, how the cloud is built, and the benefits to the end user.


Generally speaking, Massimo has it right with what makes up each category. Where I take issue is his conclusion, where he lumps BMC in with the Orchestrated Cloud camp. The first release of BMC’s Cloud Lifecycle Management was very much an Orchestrated Cloud solution. But as our product has matured over the last 2½ years, we have shifted more towards a Policy-Based Cloud. I won’t run through an exhaustive list, but features like Public or Private Cloud Support, APIs, Scalability, Multi-Hypervisor Support, Integrated Layer 2 Network Support, Integrated Security, Hardware Agnosticism, Virtual DC Support, and Policy-Based Placement have all been features of CLM for at least a year, if not since the product’s inception.


But beyond the error in categorization, something else struck me. Massimo told me that he got feedback that BMC shouldn’t have even been included in the Orchestrated Cloud column (i.e., that we’re not Cloud or Cloud Management). This is similar to other sentiments I’ve heard, where people have called us “legacy” vendors or “Cloud Washed Shite.” That was coupled with the argument that in order to be a “Policy-Based Cloud,” customers should be able to download a trial of your software and install it themselves, like they can with vCloud Director and Microsoft System Center 2012.


While these are interesting arguments, they couldn’t be further from the truth. First, if you think building a Cloud is as simple as downloading a trial software package and throwing it on some lab hardware, you’re not long for the Cloud world. Sure, you can grab the install, build yourself a “Cloud,” and roll that out within your little siloed organization. But if you want an enterprise-wide solution that brings along the baggage of the last 25+ years of Client-Server computing, you need the expertise and experience that an implementation partner can bring. In the end, our professional services aren’t about installing the software; they’re about designing a solution to meet your enterprise’s short- and long-term goals, and most importantly, about serving the needs of the business.


On the other point, I turn to a blog post I read by Randy Bias regarding complexity and simplicity in Cloud building. An interesting point Randy made was that if you want to build Amazon Web Services (AWS) in 2012, you don’t try to build AWS as it is in 2012. Instead you start simple and build AWS as it was in 2008 or 2009, then layer on more features by iterating on top of this foundation. BMC CLM helps customers build this foundation and then layer on the features they need in the future. Additionally, much of our product development is driven by customers’ requests for new features; as the customer matures, we mature with them. Two years ago we had no Capacity-Aware Placement. Now CLM can determine which compute pools will provide the resources required to fulfill your request.


Forward or Backwards Leaning

And that brings me around to the other conversation I listened in on between Randy and Massimo. Randy mentioned that a difference in opinion between him and Massimo could be due to Randy dealing with more forward-leaning organizations vs. backward-leaning organizations. This is an important distinction and one that is often lost in the “Cloud Wars”. Many pundits say “Enterprise IT needs to be like Netflix”. In principle, this is a good idea. Netflix has done some pretty amazing things in the Cloud, Operations, and Development space, and they should be applauded for their work. In practice, everyone being Netflix is much more difficult. Netflix is a very forward-leaning organization. As a young company, they are not tied down with the baggage of years of “Business as Usual”.


Many IT organizations are backward leaning, or at the very least middle leaning. They have years and years of baggage that they are dragging with them on the Cloud journey. If they’ve managed to shed some of this baggage, they’ve become more middle leaning. But in the end, they need companies that understand where these IT organizations are coming from and how they can move forward while still managing this baggage on a day-to-day basis, as it is not going away anytime soon.


And that is the beauty of BMC’s overall solution and strategy. We can help optimize your IT organization by providing consulting services to start you on the Cloud journey, we can provide you products that are constantly maturing in the features and functionality required for Cloud, and we can manage all that baggage you are bringing along with you. In the end, it is about meeting the needs of the business, something BMC has helped companies do for years. Cloud doesn’t change this end goal, it simply changes the speed and way you achieve that goal.


I admit it. I listen to sports radio. Often, I turn it on when I am bored with the political situation of the day or when there is a sports story that I am actually interested in hearing. The other day, I was listening to a late-morning national sports program. I often find the broadcaster a bit annoying, but I manage to find nuggets that are relevant to me.


The broadcaster mentioned the concept of the “New Reality” and how people are unable to accept it. He was specifically speaking of realignment in college football, but he expanded his example to include other areas of life. The New Reality is that people don’t have home phones anymore, people don’t subscribe to newspapers, people don’t buy CDs, and people don’t go to Blockbuster. A certain segment of our society is naturally shaken up by this change. Publishers and Record Labels are crying foul that they can’t make their outdated business models work, and Blockbuster is long gone. Traditionalists will decry the erosion of Americana and everything we hold sacred.


This New Reality is a consequence of the world evolving. Both businesses and individuals have to respond to this evolution. Most business leaders are fairly adept at staying ahead of these New Realities, with the few exceptions that eventually die (see Blockbuster). One area of business that is notoriously bad at accepting New Realities is the IT Department.


I venture to state that there is no department as resistant to change in any organization as the IT Department. IT is often the last on board and the first off the boat when it comes to aligning to the company’s overall goals and initiatives. As pointed out in “The Real Business of IT”, the mere fact that we speak of “Aligning IT to the Business” shows that there is a severe disconnect between the goals of the business and the goals of IT. The authors go on to point out that this disconnection is one reason that IT Departments are now being run by CIOs with a CFO background, or by CIOs that report directly to the CFO; in other words, IT is just another cost center. That is not a recipe for success for IT.


So as an industry, how do we begin to solve this problem? When I first moved to Columbus, I had coffee with Angelo Mazzocco, CIO of a local medical company. He was a technologist at heart, starting out as a programmer and moving up through the organization. The statement I remember most from this meeting was, “My value to the business increased when I could read and understand a [Profit and Loss] statement.” That simple statement stuck with me and I believe it is the heart of solving the perception of IT not being able to accept New Realities or change and thus, being seen as a cost center.


The average IT Manager has very little training in the basics of business. If you look at how most IT Managers progress in their careers, they start out as an individual contributor in an IT-focused role. Often they hold a degree in a technology-focused area, such as Computer Science, or a degree in a completely unrelated area. They are promoted based on their ability to execute IT’s goals and their ability to lead from an IT perspective. Because this focus got them ahead, they resist change. When the business approaches the IT department with its own goals, IT Management seeks to reconcile those goals with IT’s goals. If the goals clash, or IT simply doesn’t understand the business well enough to offer an IT solution, then the proverbial “Wall of IT” is raised.


This basic premise has been documented before, and is credited with giving rise to new concepts such as DevOps which promotes a tighter coupling between the Business, Development, and IT Operations. It is also documented in management literature under the premise of “Rewarding A while hoping for B”. To begin solving this problem, IT needs to focus on two basic things. First we need more Business Education for IT Leaders. Programs such as the Technology Leadership MBA at Carnegie Mellon are a good example. IT Leaders need to understand how and why the business operates as it does. Basic concepts revolving around marketing, competitive strategy, accounting, and leadership should be part of any IT Leader’s toolbox.


Second, IT goals need to be the Business’ Goals. IT should eliminate the IT-focused goals of Uptime, SLAs, etc., replacing them with Increased Earnings per Share (EPS), Revenue Growth, Customer Retention, etc. From CIO to the individual contributor of IT, the Goal should always be the Business-centric goal. When a request for a new project or change (New Reality) comes in, the focus for IT should be “How can we add value and achieve the goals of the business?” This realignment needs to be understood by all levels, including the IT individual contributor. They need to see the connection between a firewall rule change and its impact on EPS.


One could argue that changes in the organizational structure of IT are also required, but organizational changes happen frequently and rarely produce results. In the end, the goal is to help IT understand why this New Reality is important for the business, and how IT can most effectively bring about this change. Business will continue to evolve, and IT needs to be ready to help with this change, being good stewards of the business first and technologists second.


I found myself caught up recently in a Twitter storm over why, or why not, multi-hypervisor support in Cloud, Automation, and Management software is important. The proponents argued from the side of cost; the detractors pointed to the complexity such a solution introduces. Multi-hypervisor support is important for two reasons: managing costs and reducing vendor lock-in.

Managing Costs

One of the big reasons enterprises are clamoring for this feature is the increasing cost of virtualization. Ziff Davis estimates the budgeted cost of virtualization increased 17% in 2011 and will increase another 8% in 2012 (pdf report). This increase in budget is of course directly correlated to the significant trend in virtualization adoption, but it also quickly becomes a sore point for CIOs, as this increase in budget needs to somehow be funded. This increased demand has CIOs looking for other options in the hypervisor space, such as Citrix’s XenServer. Additionally, organizations with enterprise agreements – of which there are many – are being courted by Microsoft to use Microsoft’s Hyper-V platform for free. Of course, “free” is relative, and there are plenty of hidden costs with a free hypervisor, but from a purely budgetary perspective, free is attractive for many CIOs.

Another reason for this increased budgetary expense is VMware exploiting their strong market position. Off the record, CIOs have told me that renewals are becoming increasingly expensive and difficult with VMware. Anecdotes abound on blogs and the Internet over the costly upgrade and renewal process associated with the licensing of VMware vSphere.

Reducing Lock-in

This leveraging of VMware’s market position has made many CIOs realize one key point: they find themselves locked into one particular platform with no secondary plan. In economics this is known as the “Hold-Up Problem” (yes, like “Stick ’em up, Mr. CIO”). This fear increased when VMware introduced its latest revision, vSphere 5. As part of the product introduction, VMware changed its licensing model to be based not only on the CPUs of host systems, but also on the RAM available to the running virtual machines.

As servers become more and more dense (more cores, more memory), they can, in theory, run more virtualized guests. VMware most likely envisioned a future where companies would decrease their license spend as server density increases, and thus rolled out the new licensing plan. But the plan backfired. VMware customers revolted, dubbing the new licensing model vTax, and many CIOs woke up to the risk of being at the mercy of their virtualization vendor. Eventually VMware relented and revised their new vRAM licensing model to be more generous, but the snowball had already been pushed down the hill. Enterprises are now looking more seriously at alternative hypervisors.
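The mechanics of that licensing change are easier to see with numbers. The sketch below models a pooled vRAM entitlement per CPU license; the 96 GB figure matches VMware’s revised entitlement for one edition, but treat all numbers as illustrative rather than licensing guidance:

```python
import math

def licenses_needed(cpu_sockets, provisioned_vram_gb, vram_per_license_gb=96):
    """Each per-CPU license also contributes a vRAM entitlement to a shared
    pool, so a dense host can need more licenses than it has sockets."""
    by_vram = math.ceil(provisioned_vram_gb / vram_per_license_gb)
    return max(cpu_sockets, by_vram)

# A 2-socket host whose VMs are provisioned with 512 GB of vRAM needs
# ceil(512 / 96) = 6 licenses, not 2 -- the "vTax" effect in a nutshell.
assert licenses_needed(2, 512) == 6
# A lightly loaded host is still licensed per socket.
assert licenses_needed(2, 100) == 2
```

Under the older per-CPU-only model the dense host above would have cost two licenses; under pooled vRAM it costs six, which is exactly why density-driven savings evaporated for customers.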


Multiple hypervisor platforms increase complexity; there’s no way around that. But so does having multiple operating systems, multiple application server platforms, multiple hardware vendors, etc. IT organizations have figured out how to overcome the complexity that a new platform introduces, and IT has managed to survive.

Cynicism aside, the level of complexity can be reduced if operations teams are operating effectively. First, basic knowledge of one hypervisor platform can speed adoption of the new platform. For instance, a competent VMware Administrator should have knowledge of Clusters, Data Stores, Hosts, and Guests and how those systems interact. The same concepts exist in Citrix XenServer (albeit with different names), and this base knowledge can be leveraged to quickly ramp up on the new technology. If organizations have competent staff running their virtualization environments, they should be able to ramp up on new technologies quickly. If they don’t, it might be time to reevaluate the staffing plan.

Second, solutions exist that are multi-hypervisor aware; the key is how you use them. BMC’s own Cloud Lifecycle Management supports VMware and Citrix, BMC’s BladeLogic supports four x86 hypervisors and two Unix hypervisors, and Open Source solutions such as OpenStack support multiple hypervisors. In using one of these solutions, you need to ensure that you are abstracting at the correct level.

Say, for instance, you wish to deploy MySQL servers to both a Xen environment and a VMware environment. A traditional, single-hypervisor organization with no automation would build a template containing all the software preinstalled. New instances are easy – point, click, clone. With multiple hypervisors you need to abstract one layer up, at the automation layer. You build basic templates in Xen and VMware, and then you build a common package in your automation solution to deploy MySQL to the new instance. As a side effect, you now have a package that can be deployed to virtually any server whenever you need a new MySQL installation. Complexity isn’t necessarily increased; the work is just moved to a different part of the operations stack. And in the end, your operations teams need to be moving towards this better operating model anyway. Multi-hypervisor support simply accelerates this change.
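As a rough sketch of that abstraction, imagine a thin driver per hypervisor for the clone step and a single shared package for everything above it. The class names and package steps below are invented for illustration; they are not an actual CLM, BladeLogic, or hypervisor API:

```python
# Hypothetical drivers: only the clone step is hypervisor-specific.
class XenDriver:
    def clone_base_template(self, name):
        return {"name": name, "hypervisor": "xen", "os": "linux"}

class VMwareDriver:
    def clone_base_template(self, name):
        return {"name": name, "hypervisor": "vmware", "os": "linux"}

# One common automation package, defined once, applied everywhere.
MYSQL_PACKAGE = [
    "install mysql-server",
    "deploy /etc/my.cnf",
    "enable and start mysqld",
]

def provision_mysql(driver, name):
    """Hypervisor-specific cloning, hypervisor-agnostic configuration."""
    instance = driver.clone_base_template(name)
    instance["applied"] = list(MYSQL_PACKAGE)  # same steps on any platform
    return instance

a = provision_mysql(XenDriver(), "db01")
b = provision_mysql(VMwareDriver(), "db02")
assert a["applied"] == b["applied"]  # identical config despite different hypervisors
```

The point of the sketch is where the seam sits: the templates stay thin and per-platform, while the MySQL knowledge lives once, in the automation layer.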

In the end, multi-hypervisor support is becoming a larger and larger part of the enterprise’s playbook. It may initially seem to introduce complexity, but this perceived complexity can be overcome by adjusting your operations models. The scare that VMware introduced into CIOs’ dreams this past summer has helped accelerate organizations’ multi-hypervisor goals. Whether you agree with it or not, more and more vendors will emulate (or already are emulating) BMC’s support of multiple hypervisors as more and more CIOs demand it.

For more on this discussion, and more viewpoints you can check out these Blog Posts:


Virtualization Costs, Virtualization Advantages and the Case for Multi-Hypervisors by Massimo Re Ferrè


Why Enterprises Will Force Down the Cost of Virtualization by Mark Thiele


I’ve been progressing through the completion of my MBA over the last five years. It has been a long process, and in two weeks it will finally come to an end. When I tell people that I’ve been working on my MBA, they often ask, “What are you going to do with it?” It’s as if the completion of the degree requires that I now shift from IT to Finance, or into management. But there is one thing that my MBA has taught me that many people miss: IT needs more people with business skills.


The primary premise for this statement is that IT is reinventing the wheel time and time again.  Business thinkers have solved many of the problems that IT is just now beginning to crack with things like automation and Cloud.  Take for instance the entire study of Operations Management.  Wikipedia defines Operations Management as, “… an area of management concerned with overseeing, designing, and redesigning business operations in the production of goods and/or services.”  This entire branch of study is about producing goods and services cheaper, better, and faster, but little of this theory has made its way into IT.  Ask an IT person what “Operations Management” means to them and you will most likely get a response that entails people sitting in a NOC, responding to alarms, fighting fires, and keeping the lights on.  You might also get a negative response in regards to server and infrastructure management; something to the effect of “Operations is too slow and unresponsive to our needs.”  This negative response is directly related to the “anti-ops” movements such as NoOps which hope to remove Operations from the picture altogether.


This general lack of understanding of business theories is why we are just now picking up steam and momentum around initiatives like automation and Cloud, some 30 years after the advent of the personal computer.  At the heart of it, many automation initiatives (and cloud for that matter) focus on the process of getting new services and software available to the customer (be it the business or end consumer).  All in all, this is essentially a production line; Network, Storage, Servers, OS, Apps, and Configs are all points in the production line.  Operations Management presents several theories on running production lines, with examples from successful companies such as Toyota or Honda.  So if the business has figured out how to make T-Shirts in China from cotton that comes from West Texas, and ship them through Madagascar back to the US, or produce cars in Ohio with parts sourced from all over the world,  why is IT still taking 4-6 weeks to get services to the end user?  Even donuts have production lines.


Another great example is billing for Cloud services. Many IT organizations embarking on Cloud want to bill other lines of business for the services they consume. Many IT people are befuddled as to how this should be done; some buy software to do it, others write their own. Others have no idea how to achieve it at all. But billing and chargeback are just a simple branch of Cost Accounting. Understanding and applying the basics of Cost Accounting can jumpstart IT down the path of achieving these goals.
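To make the Cost Accounting point concrete, the most basic activity-based chargeback is just consumption multiplied by a unit cost per resource. The rates and usage figures below are invented for illustration:

```python
# Unit costs per resource, in dollars (illustrative only).
RATES = {"vm_hours": 0.05, "gb_storage": 0.10}

# Metered consumption per line of business (illustrative only).
usage = {
    "marketing": {"vm_hours": 2000, "gb_storage": 400},
    "finance":   {"vm_hours": 1200, "gb_storage": 900},
}

def chargeback(usage, rates):
    """Bill each line of business for exactly the resources it consumed."""
    return {
        lob: round(sum(qty * rates[metric] for metric, qty in consumed.items()), 2)
        for lob, consumed in usage.items()
    }

bills = chargeback(usage, RATES)
# marketing: 2000 * 0.05 + 400 * 0.10 = 140.0
# finance:   1200 * 0.05 + 900 * 0.10 = 150.0
```

Real chargeback adds allocation of shared overhead and tiered pricing on top, but the core is no more exotic than this.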


In the end, it’s about having the right people with the right knowledge. IT staff are often focused on one thing: the widgets that make up their world. Very rare is the IT employee who realizes that the firewall rule change he is fighting has the potential to increase the company’s Earnings Per Share. Even rarer is the IT person who can think outside of their world and apply the tips, tools, and techniques that have been successful in other industries. IT staff often have the impression that what they do is “special” and that there is nothing to be learned from anywhere else. DevOps has started to change a section of this mindset and to break down the traditional barriers, but it still has a long row to hoe.


The next time someone asks me what I am going to do with my MBA I think I’ll respond, “I’m staying right here; there’s A LOT of work to do.”





A major topic in the adoption of Data Center Automation, or any massive change in an organization, is whether the culture will accept that change. Resistance tends to be positively correlated with the impact and severity of the change. Data Center Automation tries to break down many walls in an IT Organization, and can be one of the more heavily resisted initiatives.


A good example of overcoming resistance to change is the story of Jack Welch, former GE CEO. When Jack took over at GE, he instituted a policy that GE would be the #1 or #2 company in any line of business it was involved in. This decision was met with extreme levels of resistance and was not popular. As a result, GE closed unprofitable lines of business and completely revamped itself. Under Jack’s tenure, GE grew from a $14 billion company to a $410 billion company. Lesson: Sometimes the unpopular decision is still the right one.


Jack also had another interesting philosophy regarding organizational culture. Jack thought the people in an organization could be placed in one of four buckets, as seen below.


                         Makes Money For Company    Doesn't Make Money
  Fits Culture           1. Retain                  2. Retrain
  Doesn't Fit Culture    3. Eliminate               4. Eliminate

Essentially, every employee can be lumped into one of the above buckets: fits culture and makes money (bucket 1); fits culture but doesn’t make money (2); doesn’t fit the culture but makes money (3); doesn’t fit and doesn’t make money (4). Employees in bucket 1 are easy to know what to do with. Employees in bucket 2 should be retrained in another part of the organization; maybe they just aren’t in the right position. Bucket 4 is also easy to deal with: cut them loose and give those employees the opportunity to be productive at another organization. Hanging onto them only drags the employee and the organization down further.


Bucket 3 is where most of the challenge lies. These employees help make the company money, but they do not fit the overall culture of the company. Their day-to-day actions are often subversive to the goals of the company, but through random acts of heroics they produce results for the organization, and so they are often retained. Think of how many people like this you have encountered in your own IT organization: the person in the meeting who has an objection to every topic brought up, the person who never lets the ball move forward. But in a moment of crisis, they step up to fix a problem or end an outage. They are then regarded as priceless and irreplaceable. Jack’s philosophy was to let these people go as well. The 5% of the time they are productive is outweighed by the 95% of the time they are a cancer to the organization. Lesson: While it is tough to let someone go, you are only holding that person back from being productive in another organization by retaining them.


The last lesson from Jack that can be applied to IT organizations is the need for big, disruptive change. Jack’s philosophy was that incremental change is much easier to resist than big, sweeping change. Big sweeping changes often bring more people on board at once, and the detractors are easier to spot and isolate. Incremental changes are often grassroots campaigns started by one or two people, and they are easily squashed by detractors. People are also hesitant to join incremental change initiatives for fear of failure or retribution; no one has the guts to be the first follower, and the incremental change fails. Lesson: Don’t be afraid to blow up the world and shake things up by making big, sweeping changes.


Any new IT initiative is sure to be met with resistance. While Jack Welch’s ideas were seen as revolutionary at the time, he is now regarded as one of the best managers of the 20th century. Change is tough, but using some of the basic tenets of Jack’s management philosophy, you can make change happen in your IT organization.

Michael Ducy

What's Missing?

Posted by Michael Ducy Sep 13, 2011

A recent article in Bloomberg Businessweek spoke about two automation startups, Chef and Puppet, that could help companies ease the transition to Cloud in their organizations. When I first read the article with my insider knowledge, my response was "Yeah, no kidding, automation is critical." But when I stepped back and looked at the article with a less biased view, I understood better what it was trying to get across.


The article ended with the statement:


"Investors have bet $20.5 million that Puppet and Chef, competing server-management tools, will be at the forefront of cloud computing."


Taking a moment to digest this, I realized something: most companies still don't get it. Most companies still don't understand that datacenter automation is the keystone to getting any meaningful initiative accomplished in their organization. Whether it is Cloud, IT Auditing and Compliance, or Application Release, datacenter automation is the key requirement needed to accomplish these goals.


But automation is nothing new. As the article pointed out, Puppet has been around since 2005 and was founded by a former employee of BladeLogic, BMC's Datacenter Automation Suite. The "Big 3" - HP, IBM, and CA - all have their own automation solutions, and there are a myriad of other solutions from smaller niche players. So with all this perceived saturation in the market, why do Venture Capitalists still feel there is enough room for growth to invest $20.5 million in what appears to be an established market?


Here are some possibilities:

  • Current suppliers of automation software do not meet customer needs - If this is the case, what should the current suppliers be offering that these startups do?
  • IT Organizations are reluctant to adopt automation - If this is the case then there is still significant market share for the startups to claim. What needs to be done to increase adoption?
  • Barriers to enter the automation market are low, especially with an open source model - Thus one can expect startups like this to pop up from time to time.


What are your thoughts?  Why do you think that new entrants are able to enter the datacenter automation space?  What should automation vendors be doing differently to meet the needs of their customers?  Leave your thoughts in the comments below.


Michael Schrage, research fellow at MIT Sloan School's Center for Digital Business, recently wrote a blog post titled "Why You Should Automate Parts of Your Job to Save It."  When I saw his post come across my tweet stream, the catchy title compelled me to read it right away.  Schrage's thesis is that automation should now be the norm in our day-to-day work, and that people who aren't "using technology to trim their cognitive burden or get the same process result 25 seconds faster" are less likely to get a raise or a glowing job review, or even to maintain employment with their current company.


IT and Data Center Operations is ripe for many levels of automation, but many companies suffer through their hacked-together processes that are based on years of tradition rather than any significant innovation.  Every major management software company has offered automation suites for years.  Open Source alternatives for Data Center Automation are prevalent.  Yet even with this ability to automate, and despite the lack of execution, employees are still retained and rewarded for maintaining the status quo.


The simple explanation for this is the concept of "rewarding A, while hoping for B."  IT Management hopes that their staff will spend time automating and refining processes.  They launch automation initiatives, and may even spend several thousands of dollars on automation software.  But several months later, the level of automation is right where it was before the initiative.


I've seen this time and time again in large and small companies that have sought to incorporate more automation.  Luckily, it is an easy problem to fix.  First, you need to align the reward structure of the organization with the initiatives you want to accomplish.  Throw out the performance reviews with nebulous, difficult-to-measure statements such as "Shows initiative and alignment with department goals."  Replace them with clearly defined, measurable statements that support the department's initiatives, such as "Automate the service build process for mid-tier and database services."  You can even get more granular, making the finer-tuned statements end-of-quarter goals rather than end-of-year objectives.


Second, now that the performance measures are in place, align your rewards with those measures.  Work with HR and senior management to build a reward structure that supports the department's (and company's) initiatives.  Rating someone on a scale of 1-5 is useless for motivating them to execute on the initiatives.  Instead, have clear-cut measures and rewards, such as: 100% automation equals 100% bonus, 75% automation equals 75% bonus, 50% automation equals 50% bonus.
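The tiered payout idea above is trivially mechanical, which is exactly its virtue: everyone can compute their own number. A sketch, using the article's example thresholds (not a recommended HR policy):

```python
def bonus_multiplier(pct_automated):
    """Map measured automation coverage to a bonus payout fraction,
    using descending tiers: 100% -> 1.0, 75% -> 0.75, 50% -> 0.5."""
    for threshold in (100, 75, 50):
        if pct_automated >= threshold:
            return threshold / 100
    return 0.0  # below the lowest tier, no automation bonus

assert bonus_multiplier(100) == 1.0
assert bonus_multiplier(80) == 0.75   # clears the 75% tier
assert bonus_multiplier(50) == 0.5
assert bonus_multiplier(30) == 0.0
```

There is no ambiguity to argue over at review time, which is the whole point of replacing a 1-5 rating with a measure.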


Now that you have aligned the performance measures and the rewards, it's time to get out of the way.  Empower your people with the decision rights to execute the initiative.  Delegate to them the power to get things done, and support them when they encounter roadblocks.  Let them use their talent to choose the best solution, the processes that need refining, and the path to get there.  Break down the political barriers for them, and let them show their talent.


Automation in IT is a requirement for today's fast moving businesses.  Aligning IT's measurements, rewards, and decision rights to enable and support automation can have significant long term benefits to the company. 




When I worked in an operations capacity, we would build virtually everything we needed.  Need a new monitoring tool?  Build it.  Need a log analysis tool? Build it.  Need a new critical messaging system? Build it.  While this worked to "save the company money", and solved the immediate need of the department, it was by no means a fool-proof plan for success. 


The biggest flaw was in the huge technical debt we were creating for the organization.  For those unfamiliar with the term, technical debt works very much like financial debt.  You borrow money for some immediate benefit, and over time you have to pay it back.  There are, of course, different payment structures for financial debt.  You might make periodic payments for a fixed term until the loan is paid off.  You might pay only the interest on a periodic basis, hopefully planning to pay off the entire debt one day.  You might make periodic payments leading up to one large lump sum.  In the financial world, these payments are known as servicing your debt.  Technical debt works the same way and needs to be serviced periodically.  Feature requests, bug fixes, and support issues are all ways you service your technical debt.


Some technical debts may be small.  For instance, a script that you wrote to perform a specific task.  That technical debt is analogous to buying a cup of coffee on your credit card.  In and of itself, this debt won't create too many problems.  This kind of debt can cause problems when you accumulate too much of it.  Go to the coffee shop multiple times per day, every day, and soon the debt starts to pile up.  Same thing with your scripts.  Develop too many of them, too often, outside of an automation framework, and you'll end up spending all your time maintaining and supporting these scripts; thus servicing your technical debt.
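To make the coffee-cup analogy concrete, here is a rough, entirely illustrative model: if each unmanaged script demands a little upkeep every week, the servicing cost grows with every script you add:

```python
def weekly_service_hours(num_scripts: int, hours_per_script: float = 1.5) -> float:
    """Rough model: each unmanaged script demands some upkeep
    (bug fixes, tweaks, support questions) every week.
    The 1.5 hours figure is an assumption for illustration only."""
    return num_scripts * hours_per_script

# One script is a cup of coffee; dozens of them are a real debt payment.
print(weekly_service_hours(2))    # 3.0 hours - barely noticeable
print(weekly_service_hours(60))   # 90.0 hours - more than two engineers' weeks
```

Whatever the real per-script figure is in your shop, the linear growth is the problem: past some script count, all of your time goes to servicing the debt.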


Some technical debts may be much larger.  For instance, a company I talked to recently wanted to build its own cloud management software leveraging open source technologies.  This creates a huge technical debt for the organization, comparable to mortgaging your house.  This debt includes the software development, the ongoing support of the software, feature enhancements, and bug fixes.  In addition to the increased work required, there is a long lead time before you can start to leverage the investment you are making in the software you are building.  Compound that with the industry knowledge and technical breadth your team may lack, and your debt increases substantially.


While I acknowledge that not all software in an organization will be Commercial Off the Shelf software, it is important to understand what technical debts you are accumulating when you Build Your Own.  Most organizations' goal in Building Your Own is to save the upfront cost of purchasing the software.  But what most organizations forget is that this comes at the tradeoff of a huge technical debt.  In making these decisions, organizations should ask themselves: Is this the best decision for the long-term future of the company?  And does this work fit the core competency of my company and the goals and objectives the CEO has laid out for us as a whole?


And always remember the real question is never "Can we?", but "Should we?"

Michael Ducy

NoOps? No Way.

Posted by Michael Ducy May 3, 2011

There was a time when a NoOp was a machine code instruction sent to a microprocessor that did nothing, other than slow down the execution of the program for various reasons.  Today Ops, or Operations, is seen as the routine that slows down the execution of a company.  This view of Operations has given birth to methodologies such as DevOps - where Operations takes a more Development-focused attitude of quick releases, nimble infrastructure, a high level of automation, and even developers executing operations functions.


From my perspective, DevOps has risen due to the ongoing conflict between development, operations, and the business.  Development tends to be more strongly aligned with achieving the goals of the business, and operations tends to be in the way of development executing those goals.  Operations often acts as the canary in the coal mine, calling out all the potential pitfalls and trying to hedge against every edge case, no matter how small.


With the advent of cloud platforms and the ongoing struggle of Development vs. Operations, a new methodology has formed - NoOps.  Instead of slowing down execution, NoOps seeks to speed up the execution of the business by removing the greatest detractor of progress: Operations.  NoOps relies on third-party providers to manage infrastructure, and developers have relatively unfettered access to the platform to push new projects, fixes, and business initiatives.


As a former employee of Operations departments, NoOps at first strikes me as an attack on my livelihood.  But taking the emotion out of the equation, it is completely understandable why companies want to remove the ball and chain of Operations.  Operations departments often have a single focus - how is this change going to impact my operations - versus the more correct view of "how is this going to affect the organization as a whole?"  The notion of NoOps is simply a natural extension of "innovate or die" or "survival of the fittest".  Operations departments have long been the harbinger of doom and gloom, giving the impression that they are holding back the progress of the organization.  Thus, the natural inclination is to eliminate those things that hold back progress.


But is NoOps really the way to go?  I think the recent Amazon EBS outage would give you a definite answer of "No NoOps".  I make this statement not based on how the outage came about, but rather on the large number of companies that suffered an outage because they didn't have a scalable, reliable, and redundant architecture.  These companies relied on a single point of failure, one Amazon availability zone, which any student of Operations could tell you is a big no-no.  Eliminating Operations means eliminating years of experience in building environments that are scalable, reliable, and redundant.


Rather than eliminating Operations, Operations needs to evolve.  Operations can start the evolution by focusing on three things:


  • Institute Dynamic Business Service Management (BSM) and eliminate the old Operations mindset of knobs and widgets.  BSM focuses on aligning the Operations department to the needs of the business, much like development has already done.  Instead of looking at individual servers, switches, or routers, BSM focuses on the services IT provides to the business and how those services impact the bottom line.
  • Institute automation to become more agile and flexible.  This will allow Operations teams to better respond to the needs of the business with repeatability and consistency.
  • Stop saying no.  Instead, Operations needs to start listening, understanding, and collaborating on moving the business forward.  Do you still think that 0.01% chance of incurring a $50,000 outage is a reason not to do a new project?  Use basic tools like decision trees to convince your teams that projects make sense from a dollars perspective, edge cases included.
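The expected-cost arithmetic behind that last point is simple enough to sketch; the probability and dollar figure below are the hypothetical ones from the example above:

```python
def expected_outage_cost(probability: float, cost: float) -> float:
    """Expected value of a risk: probability of the outage times its cost."""
    return probability * cost

# A 0.01% chance of a $50,000 outage:
risk = expected_outage_cost(0.0001, 50_000)
print(risk)  # an expected loss of about $5
```

Put next to the revenue a new project might generate, a $5 expected loss is rarely a reason to say no; a decision tree just extends this same multiplication across every branch of outcomes.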


Of course, these are just 3 starting points for turning around your Operations department.  Ask your peers in development and the lines of business what your Ops team can do to avoid becoming NoOps, or post your suggestions below.

Michael Ducy


Posted by Michael Ducy Mar 4, 2011

Flipping through the Harvard Business Review I saw an ad that had the tag line,


"Complexity presents an opportunity and a threat at the same time."


Sitting back and thinking about this statement in the context of Data Center Automation, I realized that most organizations are challenged by the complexity of their processes and not by the actual technology.  This complexity presents itself as a threat to the organization in several ways.


First, complexity makes it harder to bring in new people to perform the same job.  When the people who designed overly complex systems leave an organization (or even a department), it is often hard to find anyone who can fully replace them in managing the complex system.  I have personally seen people promoted into different roles in an organization only to be constantly pulled back to support a complex system they designed.  Some organizations may be hesitant to eliminate under-performing staff because of their specialized knowledge of a complex system, or to promote individuals into new roles because no one would be left to support the complex system.


Second, complexity may prevent organizations from undertaking new initiatives.  When new initiatives come face to face with the complex systems, many organizations will either scrap the new initiative, or build complexity into the new initiative to handle this edge case. This creates a vicious "Web of Complexity" that only exacerbates the problem.


Third, when complex systems fail, the mean-time-to-repair (MTTR) is often longer and more difficult.  Specialized experts must be called in to fix the problem, requiring them to work after hours, weekends, or during vacation.  This causes undue stress on an organization and its people, thus reducing the overall effectiveness of the organization.


What can organizations looking at Data Center Automation do about complexity?  They can begin by using their automation initiatives as a chance to reduce and remove the complexity that has been built into systems over time.  Often, complexity is built into systems to support legacy methods.  As you review your systems and processes, ensure that these legacy ways of doing things are still required.  Have your staff approach the situation with open minds, realizing that processes built several years ago can most likely be optimized and made less complex, or completely removed with automation.  Often, complexity is built in because there was no simple way to perform a task.  With an automation solution that provides a framework for automation, rather than just a scripting platform, much of this complexity can be removed.


According to this Harvard Business Review blog post, complexity is weighing heavily on CEOs' minds these days.  Tackling complexity as part of your Data Center Automation initiatives presents an opportunity for organizations, giving them the chance to experience better MTTR, increased staff happiness, increased agility to take on new initiatives, and less dependency on underperforming staff.


And best of all, your CEO will sleep better at night knowing his IT organization has reduced complexity.

Michael Ducy


Posted by Michael Ducy Jan 14, 2011

scripty_ep1_big.png - click on the above image for a larger version


Does the above cartoon seem all too familiar for you?  Have you been in such a situation where a talented scripter wreaked havoc on production systems causing the business to lose money?  Personally, I have been on both sides.  I have been Scripty McScripterton, and I have worked with Scripty.  Scripty is often the main source of automation in an IT Organization, and can often be the main cause of IT pain.


While the ability to think rapidly and design solutions to problems is a great trait, the ever-growing reliance on IT systems to run revenue-generating functions of the business means that Scripty (like you and me) has to be brought into the fold.  The problem with bringing such people into the fold is that Scripty often sees the increased need for process as a way to limit and control his (or her) ability for free, innovative thought.  For example, I worked at a DotCom company where Scripty ran roughshod through the organization.  A new VP of Operations attempted to roll out ITIL processes to rein in the loose cannons, but ended up failing because Scripty and his peers saw the increased need for process as a way to limit their work.  They never saw (nor was it sold to them) that the process was there to make the entire organization - and their individual lives - better.


When reining in Scripty, organizations should keep the following in mind:


  • Show Scripty "what's in it for them" - Show Scripty and his peers that it is in their best interest to adopt new processes.  Tighter processes and better control often lead to less downtime, higher availability, and better performance, which in the end means less after-hours work for operations teams.  In addition, better operational performance should translate to stronger business performance.  Bonuses could thus be tied to achieving these operational goals, and tighter processes are one route to achieving those bonuses.


  • Make process easy for them - In my days as Scripty, the last thing I wanted to do was fill out a change request.  I would often call the NOC and request that they open the change for me.  Once the change was approved I could proceed with my work.  While this worked at the time, it is often more effective to find products with native integration to change management systems.  For example, ensure that your Server Automation suite can automatically open changes for tasks, and can automatically execute these tasks once the change is approved.  Additionally, work with your Change Management team to find tasks that can be preapproved - requiring a change ticket, but automatically approved by the Change Board - allowing work to proceed unhampered.


  • Include Scripty in the decisions - Nothing spells doom for a new initiative more than not including people in the decision process.  Include the smart and influential people from the group you most want to adopt the new process.  For example, include the smart network administrator who is respected by his peers in an initiative to roll out a Network Automation solution.  This person can act as your champion to the rest of the group - selling the solution while you are not there, defending the project's goals, and bringing others to support the project.
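The preapproved-change workflow from the second point above can be sketched as follows; the `ChangeSystem` client and its methods are hypothetical stand-ins for illustration, not any particular vendor's API:

```python
class ChangeSystem:
    """Hypothetical change-management client (a stand-in, not a real API)."""
    def __init__(self, preapproved_tasks):
        self.preapproved = set(preapproved_tasks)

    def open_change(self, task: str) -> str:
        # Preapproved tasks still get a ticket, but skip the Change Board.
        return "approved" if task in self.preapproved else "pending"

def run_task(cms: ChangeSystem, task: str) -> str:
    """Open a change for the task and execute it only once approved."""
    status = cms.open_change(task)
    if status == "approved":
        return f"{task}: executed"
    return f"{task}: waiting on Change Board"

cms = ChangeSystem(preapproved_tasks=["restart-app-server"])
print(run_task(cms, "restart-app-server"))   # preapproved, runs immediately
print(run_task(cms, "patch-database"))       # needs board approval
```

Scripty still gets a change ticket for every action - the audit trail survives - but the routine, preapproved work is never blocked waiting on a meeting.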


Scripty is a valuable member of many organizations.  They possess a can-do, innovative personality that is indispensable for solving problems in an organization.  However, left unchecked, Scripty can harm - or may already have harmed - the effectiveness of your IT organization.  But take heart, Scripty can be a productive and contributing member of an IT team.  After all, I'm living proof.

Michael Ducy

Organize For Success

Posted by Michael Ducy Nov 16, 2010

Much is said about implementing automation solutions. Technically speaking, implementing automation is simple.  Problems are introduced when the organization that is attempting to adopt the solution clashes with the automation.  Often in organizations, individual teams have some level of automation.  This automation works well for them, and in their world they see no reason to change.  In order to successfully implement automation in this environment, organizations will need to reorganize to be most effective.  In reorganizing, companies should consider the following.


Form Teams Focused on Automation


From my experience, companies that form automation teams tend to be more successful than those that simply attempt to layer automation onto existing processes and teams.  These automation teams can assist the other parts of the organization in implementing the parts of the automation solution that are relevant to them.  For example, the automation team would help the security team with creating compliance scans (PCI, SOX, etc.) for servers.


In successful companies, automation teams are often broken down into Automation Administrators and Automation Developers.  Automation Administrators run the day to day operations of the automation solution.  They help teams report on the success of particular automation jobs and troubleshoot any problems with automation.


Automation Developers help the various teams in the organization initially set up automation.  Often these individuals have the experience and mindset needed to implement automation, as well as an in-depth knowledge of the automation solution and a programming background.  These characteristics are important, as these individuals will need to implement new automation and translate existing automation to the new solution.  Additionally, the Automation Developers help teams modify automation jobs to fit the changing needs of the organization.


Build cross platform teams


Build teams that are focused on more than just one specific function.  Instead of having a Windows Server team and a Linux Server team, have one server team.  Individuals in the team may still focus on a particular specialty, but having a single team helps break down walls that prevent innovation and growth.  Individuals in the team can assist each other in developing and implementing automation.  Automation solutions such as BladeLogic are cross-platform, and a cross-platform team in the organization can maximize the value of such a solution for every member of the team.


Plan, Build, Run


I used to think that reorganization into a Plan, Build, Run structure was an ineffective structure for organizations.  After stepping back and looking at successful companies, I now believe a Plan, Build, Run structure makes total sense for organizations looking to implement automation.


Plan - the plan team can work with the rest of the organization to determine what solutions should be automated, gather requirements for automation, and act as a liaison to the build team.  This team should consist of individuals who have experience with automation, understand the intricacies involved in implementing it, and know the right questions to ask.


Build - the build team should be focused solely on implementing the automation.  This team should consist of individuals that have a strong knowledge of the automation solution and understand how to get the most out of it.  This team should also have a multi-platform foundation, as they will be focusing on building automation for the entire organization.


Run - the run team ensures that the automation is working and producing the desired results.  They can also assist other teams in making small modifications to the automation jobs (for example, deploying a new patch via an existing automated process).


Required Automation Features


In creating such an organizational structure, it is important that your automation solution can support this structure.  Your automation solution should have these key features to support your new organization:

  • Cross Platform Support - Unix, Linux, and Windows should all be manageable through one interface with a common set of functions and features that are applicable to all platforms.
  • Role Based Access Controls - Strong Role Based Access Controls allow you to granularly grant and remove access to elements of the automation solution.  In addition, you should be able to easily promote packages between teams.
  • Packaging Technology - Your automation solution needs a strong cross-platform packaging technology that makes it easy to update and change existing processes, as well as rapidly develop new solutions.
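A minimal sketch of what granular role-based access might look like in practice; the roles, actions, and data structure here are invented for illustration, not taken from any product:

```python
# Hypothetical RBAC table: map roles to the automation actions they may perform.
ROLE_PERMISSIONS = {
    "automation_admin":     {"run_job", "view_report", "troubleshoot"},
    "automation_developer": {"create_job", "modify_job", "promote_package"},
    "auditor":              {"view_report"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("automation_developer", "promote_package"))  # True
print(can("auditor", "modify_job"))                    # False
```

The organizational structure above maps directly onto such a table: Automation Administrators get the run and troubleshoot actions, Automation Developers get the create, modify, and promote actions.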


With an automation solution that has these features in place, and with your teams organized into a structure that supports automation, you can ensure your company's success in adopting data center automation.


One of the biggest challenges when faced with an automation initiative is where to get started.  Many organizations see value in automation, but approach the implementation the wrong way.  Everyone has their specific project that "automation is perfect for", or, for example, they want to redefine the entire way the Operations Center executes Standard Operating Procedures (SOPs) by automating the entire run book.  These projects can be doomed from the start if those on the project team aren't careful.


Companies undergoing automation initiatives can avoid some of the most common problems by keeping 4 key points in mind:


  1. Use an Iterative Approach
  2. Leverage Native Functionality
  3. Reinvent the Wheel
  4. Parameterize when Possible


Use an Iterative Approach


IT Engineers are prone to the common mistake of believing that perfection is required from day 1.  Contrast this with the methodology common on the business side: an iterative approach to perfection.  Many engineers sit down in a meeting and try to come up with every edge case and compensating controls for each.  This leads to over-discussing each edge case, possible solution, and possible problem.  Weeks or months pass with very little progress.


Instead, IT Engineers should find small wins they can use to build success, credibility, and buy-in.  When starting an automation initiative, find small, easy-to-solve problems that impact a large number of people.  Sometimes this can be as simple as automated restarts of an application server to reduce the Mean Time To Repair (MTTR).  This is an excellent example because it has immediate results for the business (fewer unhappy customers), the customers of the application (they can actually use the application and generate fewer support calls), and the developers of the application (they can focus on fixing the cause of the problem rather than firefighting).
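The automated-restart example can be sketched in a few lines; the health-check URL and restart command below are hypothetical placeholders for whatever your environment actually uses:

```python
import subprocess
import urllib.request

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Probe the application server's health endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def restart_if_down(url: str, restart_cmd: list) -> bool:
    """Restart the server when the health check fails; returns True if restarted."""
    if is_healthy(url):
        return False
    subprocess.run(restart_cmd, check=True)
    return True

# Example invocation (placeholder URL and service name):
# restart_if_down("http://app01:8080/health", ["systemctl", "restart", "appserver"])
```

Run from a scheduler every minute or two, this is the kind of small win that cuts MTTR immediately - while the developers work on the root cause at their own pace.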


Leverage Native Functionality


Automation solutions have abundant functionality to interface with a number of different systems, from the application layer (such as Tomcat), to the OS layer, to the hypervisor layer.  When interacting with these various systems, it is always advisable to use the automation solution's native functionality.  Doing so has a few benefits.  First, it speeds time to value, as you don't have to reengineer the work the automation vendor has already done for you.  Second, it allows you to leverage other functionality of the automation solution, such as rollback on failure, that would otherwise not be available.  Third, it allows your automation vendor to more easily support you if there is a problem in the automation process.  Support is always more difficult when the customer has created home-grown customizations.
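The rollback-on-failure benefit can be illustrated generically; this is a sketch of the pattern a native framework gives you for free, not any vendor's actual interface:

```python
def run_with_rollback(steps):
    """Execute (action, undo) pairs; on failure, undo completed steps in reverse.

    This mimics the 'roll back on failure' behavior a native automation
    framework provides, which a hand-rolled script usually lacks.
    """
    done = []
    try:
        for action, undo in steps:
            action()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()
        raise

def failing_step():
    raise RuntimeError("config failed")

log = []
steps = [
    (lambda: log.append("deploy"), lambda: log.append("undo deploy")),
    (failing_step, lambda: None),
]
try:
    run_with_rollback(steps)
except RuntimeError:
    pass
print(log)  # ['deploy', 'undo deploy'] - the completed step was undone
```

Reimplementing this bookkeeping correctly in every custom script is exactly the kind of work the vendor has already done for you.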


Reinvent the Wheel


Don't be afraid to start from scratch.  Sure, that automated process that the intern scripted for you over a summer is cool, but the reality is that the process is most likely undocumented, unsupportable, and insecure.  Sometimes the best thing you can do is step back from a process, and  evaluate if what was defined months, or even years ago, is still applicable to the way things are done today.  Are people circumventing steps?  Are people afraid to make changes because they don't know what will break?   Do people not even bother to use the defined process and instead have already implemented their own process?  Then it might be time to reinvent the wheel.


Parameterize when Possible


Reusability is key in any automation initiative.  As you take the small steps to perfection, remember the overarching goal of automation and make your automation processes as reusable as possible.  An automation solution should allow you to parameterize (variable substitution) key parts of the automation process.  This allows you to reuse these automation processes for other tasks by simply changing the input parameters or arguments to the process.
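Parameterization in this sense is just variable substitution over a reusable task definition; a minimal sketch, with the task template and parameter names made up for illustration:

```python
import string

# A reusable automation step as a template; only the parameters change.
DEPLOY_STEP = string.Template("deploy $package version $version to $environment")

def render_step(package: str, version: str, environment: str) -> str:
    """Substitute parameters into the shared task template."""
    return DEPLOY_STEP.substitute(package=package, version=version,
                                  environment=environment)

print(render_step("webapp", "2.1", "staging"))
print(render_step("billing", "1.4", "production"))  # same process, new inputs
```

One tested process, reused across applications and environments by changing only the inputs - that is the reusability payoff the paragraph above describes.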


Automation in the Data Center is a key component of many initiatives that IT is undertaking today.  Cloud Computing, Physical-to-Virtual Migrations, and Day-to-Day Operations all require automation.  Keeping these 4 key points in mind when undertaking an automation initiative can help ensure you are successful in the larger initiatives and goals of your IT organization.
