
Optimize IT



Driving home last night, listening to Marketplace on NPR, I heard the attached segment about making improvements in managing the food supply chain. As it went on I thought, "I wish all of BMC could hear this," because the food supply chain is an apt metaphor for the application release process. Consider:

 

  1. The primary business driver is Time to Market - the "produce" can't be delivered too quickly
  2. UNLESS hasty or sloppy delivery processes contaminate the product or fail to remove contamination.
  3. The product changes hands frequently and covers a lot of distance.
  4. The earlier contamination is detected, isolated and removed, the less costly it is and the less negative impact it has on the market.
  5. Current practices are manual, ad hoc and ineffective.
  6. Regulatory compliance is mandating changes to current processes.
  7. The solution is a combination of logistics, automation, human workflows and end-to-end traceability.
  8. IBM and Microsoft are getting in the game, but they aren't the innovators in the space.

 

Please take 4 minutes to listen to the attached segment and:

  • when you hear a reference to produce, think "application"
  • when you hear "contamination", think "configuration error or application bug"
  • when you hear "people getting sick and dying", think "major service outage"
  • when you hear "market" think "data center"
  • ...

 

You get the idea.



I am convinced at this point that my computer has it in for me, or at least some of the software on it. I know all of you have been there. You are happily going along your merry way, and the Computer (notice the capital C, indicating the personification of the computer and its evil intent) decides that it has had enough of your attempts at work prose, and fails, freezes, or thoughtfully reboots your laptop. As I was waiting for a particular software application for writing documents from a Washington State-based company to finish thinking about the document I was working on, I had some time to think myself. Why does this frustrate me so much? I used to go get coffee while my computer booted in the old days. The "Blue Screen of Death" was close to a social cliche at one point. So what changed, other than the obvious improvements in the code of certain software companies?

 

One word - EXPECTATIONS


 

Once you experience something better, the natural inclination of any human is to become desensitized to what was, at one point, absolutely astounding. We usually go even one step further and start to feel entitled to what was, only a short time before, out of reach. We have all seen it in our lives. I am no longer content with staying in youth hostels and using my jacket for a pillow while traveling. A car with no air conditioning seems like an abuse of my basic human rights. I expect any website to be slick and "Web 2.0" as a matter of course. I can't make buying decisions without some sort of rating system and hundreds of user comments to guide me.

 

So, how does this relate to Data Center Automation? Well, I look at our industry right now, and I see a lot of expectation setting. Cloud Computing is the IT revolution of this decade. It shouldn't matter where your infrastructure is. "Compute Capacity" should be delivered not in weeks, but minutes. Business Applications should see updates every few weeks, not every few months. The list goes on and on, and we at BMC have built our business on achieving those expectations.

 

There is a rub, though. As often happens with IT revolutions like cloud, the inclination of both vendors and users is to reduce the solution to a single, easy-to-grasp concept. "To the Cloud," says the hard-working small-business CEO. "Isn't virtualization the same thing as cloud?" "Traditional management and automation techniques no longer apply in the cloud." And as always happens, the unwary consumer gets caught up in these over-simplifications and ends up paying the price.

 

So, what's the bottom line? I am excited about the changes in our industry, and I believe that many of the lofty predictions will seem like child's play 10 years from now. I also believe that users' expectations of IT are higher now than at any time before. I think this decade will be one of the most exciting ever for IT. However, I also believe that the repercussions of an ill-conceived dive into the new world could be a rude awakening for IT organizations. So what's my advice? Look at the end-to-end process for delivering your services, and make sure that you, and those who seek to counsel you, don't over-simplify the solution. It is better to consider the more comprehensive solution than to be stuck with hundreds of angry users whose expectations you can't meet.



...These are the voyages of the starship Enterprise Software...

 

http://www.howtogeek.com/wp-content/uploads/2010/04/startrek04.jpg

 

Cheeky Star Trek references aside, you probably figured out that I am referring to databases. We have spent a good amount of time over the past several months discussing datacenter automation from the perspective of servers and applications, as well as automation's overall value to cloud computing. Considering that virtually every application ultimately interfaces with a database on the back end, we would be remiss if we did not discuss databases, an arguably perfect candidate for automation.

 

Before we begin, let's take a look at why databases are important in general.


  • They are everywhere: The primary purpose of a database is to provide data to applications. Although they are hardly seen and we may not interact with them directly, virtually every application we interact with ultimately depends on a database. When you read a Facebook profile, post an update to Twitter, or perform a Google search, you are interacting with a database. In fact, the very words you are reading in this blog post right now are generated from a database.

  • They are closest to the business: Businesses depend on databases more than on any other component in the datacenter. If a web server or an application crashes, it can be replaced relatively easily, since web servers and applications simply provide functionality. Using the Facebook example above, if Facebook lost every web and application server but had its databases intact, you could argue that it could rebuild its web servers, reload its applications, and eventually come back online. In contrast, if Facebook lost every database server and the data associated with them, there would essentially be no Facebook.

 

So now that we have established why databases are important, let’s take a look at why databases are a perfect candidate for automation:


  • High complexity: Because databases are so critical to the business and must always be available, database technologies have become more and more complex over the years to support increasingly robust features for performance, high availability, and disaster recovery. When setting up a new database manually, it is easy to miss one step in the 100-step process typically required to set up an enterprise database. Automation can ensure that the 100-step process is followed correctly 100 percent of the time (see the sketch just after this list).

  • Expensive DBAs: Because of the increasingly high criticality and complexity of databases, database administrators are generally much more expensive than other IT personnel. Freeing up a DBA's precious time by automating the more routine tasks, such as provisioning and patching databases, can yield big returns.

  • Cloud Computing: Given all the spotlight on cloud computing nowadays, it is important to note that database automation is key to enabling real cloud computing. The benefit of being able to provision servers and applications within two minutes will ultimately go unrealized if the databases those applications depend on take two months to provision.
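
To make the "High complexity" point above concrete, here is a minimal, hypothetical sketch of what automating such a checklist can look like: the steps are captured once, in order, and executed the same way every time. The step names and scripts are invented for illustration and are not tied to any particular database or product.

```python
# Hypothetical sketch: an automated, ordered provisioning checklist.
# Step names and commands are illustrative, not a real product's API.

import subprocess

PROVISIONING_STEPS = [
    ("verify OS prerequisites",      ["check_kernel_params.sh"]),
    ("create storage volumes",       ["create_volumes.sh", "--size", "200G"]),
    ("install database binaries",    ["install_db.sh", "--version", "11.2"]),
    ("apply latest patch set",       ["patch_db.sh", "--latest"]),
    ("create instance",              ["create_instance.sh", "--name", "ORCL1"]),
    ("configure high availability",  ["configure_ha.sh"]),
    ("register with monitoring",     ["register_monitoring.sh"]),
    # ... in practice this list runs to dozens or hundreds of steps
]

def provision_database():
    """Run every step in order; stop at the first failure so nothing is skipped silently."""
    for description, command in PROVISIONING_STEPS:
        print(f"Running step: {description}")
        result = subprocess.run(command, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"Step failed: {description}\n{result.stderr}")
    print("Database provisioned: all steps completed in order.")

if __name__ == "__main__":
    provision_database()
```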

 

With BMC's recent acquisition of GridApp Systems, BMC's Service Automation line of products has been rounded out into a very robust portfolio, bringing the efficiency, consistency, and manageability that automation provides to one of the final frontiers of the datacenter.



Frank Yu

To the cloud...oh wait

Posted by Frank Yu Mar 18, 2011


With cloud being the hottest topic in IT for the past year, a lot of companies have started going down the path of implementing their own private or hybrid cloud. However, based on the feedback that I’ve been getting from various customers, many of them are stuck.

 

The problem is that they are treating the cloud as a standalone entity. Consideration was given to how to expose it to end users, how to rapidly provision large numbers of virtual guests, and how to recycle resources back into the pool after decommissioning. These requirements are often tested in a self-contained lab through a pilot program, and most of the pilots turn out to be fairly successful. However, when the time comes to take the cloud solution from the lab into a real production environment, many challenges arise:

 

  • How does the network in which the virtual guest will reside get provisioned?
  • How will the decision on which IP is assigned to which server be made?
  • How will the assigned IP be mapped to the correct name in DNS?
  • What about VLAN creation and assignment?
  • What about load balancers in the environment?
  • What about firewalls and ACL rules?
  • How can change management be integrated into the solution to govern the requests?
  • What about costing and chargeback?
  • How will multi-tenancy concerns be addressed?
  • How will the application stack be deployed on top of the virtual guest?
  • How will the newly provisioned server be added into monitoring?
  • How will the server get patched and secured?
  • What about servers that need to follow regulatory compliance?
  • How will the server retirement process remove all the related entries in the entire environment?


 

It is nearly impossible to bolt bits and pieces of software and integration points onto a standalone cloud solution that wasn't initially designed to meet these challenges. This is why so many companies are getting stuck and coming to us for help. A cloud solution that does a great job providing on-demand compute but doesn't easily fit into the rest of the IT infrastructure will have an extremely difficult time standing up in production.

 

This is why I make the same recommendation to all our clients: when it comes to building a cloud solution, do not look at it in a vacuum. Instead, approach it as a vital part of IT that will need to work in concert with the rest of the environment, and be mindful of all the considerations that come with that.
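
To give a feel for what "working in concert with the rest of the environment" implies, here is a hedged sketch of an end-to-end provisioning and retirement workflow. Every step below is an invented stand-in for a real integration (IPAM, DNS, load balancer, monitoring, CMDB); it does not reference any specific product.

```python
# Illustrative sketch only: every step below is a stub with a made-up description.
# The point is the breadth of steps a production provisioning workflow must cover.

from dataclasses import dataclass

@dataclass
class Request:
    hostname: str
    vlan: str
    tenant: str

def step(description: str) -> None:
    """Stand-in for a real integration (IPAM, DNS, load balancer, CMDB, ...)."""
    print(f"  - {description}")

def provision_service(req: Request) -> None:
    step(f"open change ticket for {req.hostname}")              # change management governs the request
    step(f"provision network / VLAN {req.vlan} and firewall ACLs")
    step("allocate IP address and register it in DNS")
    step("create virtual guest from template")
    step("add server to load balancer pool")
    step("deploy application stack on top of the guest")        # the part most pilots skip
    step("enroll server in monitoring")
    step("apply patch and compliance policies")
    step(f"record cost center / chargeback for tenant {req.tenant}")
    step("close change ticket")

def retire_service(req: Request) -> None:
    # Retirement must unwind every entry created above, not just delete the VM.
    for description in ("remove from load balancer", "deregister DNS",
                        "withdraw from monitoring", "release IP and VLAN assignment",
                        "reclaim capacity to the pool"):
        step(description)

if __name__ == "__main__":
    r = Request(hostname="app01", vlan="prod-101", tenant="finance")
    print("Provisioning:")
    provision_service(r)
    print("Retiring:")
    retire_service(r)
```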



DevOps describes the cultural mashup of Development and Operations
to address the Application Service Delivery bottleneck.

http://dev2ops.org/storage/WallOfConfusion.png

In Boston last week, the first DevOpsDays event of 2011 was held. Two days later at the CloudConnect conference in Santa Clara, there was a DevOps and Automation track.

 

Perhaps you’re asking yourself, “What is this ‘DevOps’, and what does it mean to me?” So glad you asked.

 

With Agile methodologies and Java application servers, Development is producing application changes faster than ever before. Those changes go through a release process, at the end of which Operations deploys them into Production. With cloud and web-based platforms, Operations can instantly deploy application changes and users will see those changes as soon as they refresh their browsers.

 

With Development producing changes faster and Operations able to make them available almost instantaneously, the release process in between has become the time-to-market bottleneck. Relieving this bottleneck is complicated by differences in professional cultures. Operations sees Development as cavalier about operational discipline; Development sees Operations as inflexible, imposing processes and approvals that hinder productivity. When things were simpler this culture clash was not such a big deal, but increasing change rates and greater environmental complexity have created real business issues around the delivery and stability of application changes. This is why Gartner is reporting a sharp increase in inquiries around application release solutions.

 

Some of our customers process over a thousand application changes per week and run their applications through nine different testing environments. The backlogs are becoming unacceptable. Operations must often borrow developers to assist with deployment and configuration, detracting from Development’s productivity. Detailed documentation must be written, tailored and updated - for every application, and for each environment between Development and Production. Without an Application Service Delivery strategy, the release process is labor intensive, time consuming and error prone. It is mission critical work that must be done by expert administrators who are doing their best with generic, ad hoc tools like command line interfaces, scripts, paper documents, spreadsheets, email, conference calls and instant messaging.

 

DevOps describes the movement to create a cultural mashup of Development and Operations to address the Application Service Delivery bottleneck. Development must produce applications that are more easily deployed and configured. Operations must become more agile in their workflows. The whole release process must be coordinated end-to-end as a symphony of activity that includes change management, human workflows, automated tasks and configuration data management.
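
To make that coordination concrete, here is a minimal, hypothetical sketch (the stage names, environments, and roles are invented, not any customer's actual process) of a release pipeline modeled as ordered stages that mix automated deployments with human approval gates.

```python
# Hedged sketch: a release pipeline as ordered stages mixing automated tasks
# with human approval gates. Stage names and environments are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    automated: bool
    action: Callable[[], bool]   # returns True when the stage passes

def automated_deploy(env: str) -> Callable[[], bool]:
    def run() -> bool:
        print(f"Deploying build to {env} with the recorded configuration for {env}")
        return True
    return run

def human_approval(role: str) -> Callable[[], bool]:
    def run() -> bool:
        print(f"Waiting for {role} approval (change management gate)")
        return True   # in reality this blocks on a workflow or ticketing system
    return run

PIPELINE: List[Stage] = [
    Stage("deploy to integration test", True,  automated_deploy("integration")),
    Stage("deploy to performance test", True,  automated_deploy("performance")),
    Stage("change advisory approval",   False, human_approval("change manager")),
    Stage("deploy to staging",          True,  automated_deploy("staging")),
    Stage("release sign-off",           False, human_approval("operations lead")),
    Stage("deploy to production",       True,  automated_deploy("production")),
]

def run_release(build_id: str) -> None:
    print(f"Releasing build {build_id}")
    for stage in PIPELINE:
        if not stage.action():
            raise RuntimeError(f"Release halted at stage: {stage.name}")
        print(f"  passed: {stage.name}")

if __name__ == "__main__":
    run_release("build-1042")
```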

 

For more on DevOps, check out the Dev2Ops website.



There is an age-old question within Customer Support about how to staff your customer support center: with support staff who are generalists, or with support staff who are specialists. Typically, a Level 1 support center is staffed with generalists who can answer general questions and escalate to a specialist when more detailed technical support is needed. In the support ecosystem, it is that deep level of technical knowledge that a support engineer holds sacred; it is what used to separate him from the rest.

 

In this world of ever-growing interconnectivity, where one product must seamlessly talk to another, our expert support staff can no longer pride themselves solely on their ability to go deeply technical in a single product. To stand out from the rest, they need to truly understand how the customer is using the product (use cases), AND they need to continue to provide technical expertise in their major product line while having generalist knowledge. Solutions are made up of multiple products that communicate and share information with each other via the CMDB or other mechanisms such as APIs.


 

In order to prepare our staff to support this solution and others, we need them to remain technically deep in their primary area of expertise, yet be generalists in the products that interact within relevant use cases. In order to deliver the best service to our customers, we have to ensure that the best support person handles a solution issue from end to end, so that our customers do not get bounced around from product team to product team. This means that the support engineer needs to be technical enough to handle the integrations between the products, have a strong support infrastructure behind him, and understand the context in which the customer is using the solution. We have done extensive training with our support engineers to this end. We will not build end-to-end experts overnight, but I think you will see that your support experience becomes much better adapted to the way you are using our solutions.

 

 

Not only does our support delivery have to change, our entire support infrastructure must also adapt.

  • With DCA you will see that our traditional documentation has started to transform into a much more solution-centric delivery; it takes you end-to-end from the use-case perspective, while putting documentation for any part of the solution just a mouse click away. If you have not already, check out the new online DCA documentation (please note: login required). I think you'll be pleasantly surprised by the experience.

 

  • Another challenge we face in supporting solutions is that our internal support labs can no longer be product-centric. The introduction of VMs allowed us to bring use cases up and down for testing in a matter of minutes, something that, without automation, would have taken a day or two to configure.

 

  • And soon, from our support site, you will be able to log an issue based on the solution. We will no longer require you, the user, to determine which product is the root cause of the problem. Our goal is to take that burden off of you and to let you describe the problem only once, while a single support expert maintains consistency and knowledge throughout.

 

I’d love to hear about your experiences, so please, feel free to post questions, concerns and comments!




The ‘Cloud’ is the hottest new vehicle for IT to deliver value to the enterprise. At its heart, it empowers end users to rapidly access resources, which accelerates the time to deliver new products, reduces issues caused by misconfiguration, and allows a swifter reaction to capacity constraints. This is why many enterprises are making Cloud one of their top initiatives for the next 12 months.

 

Most Cloud offerings focus on virtual machine (VM) availability, access control, and chargeback. While a self-service vehicle that provides pre-built systems to users can add value, it ignores the most important piece of the puzzle: the application stack. As the Director of IT at a large financial player noted, what good is empowering end users to access systems if those systems don't actually do anything? In many cases, it can cause more harm than good. The big question is: why don't most offerings handle this?

 

The reality is that most Cloud suites have little visibility into the application tier. Applications are their own beasts with their own rules; J2EE apps need to be managed differently than databases. Relying on users to build and maintain this content is a guaranteed failure. An infrastructure player that understands only some components of the full offering has no hope of enabling end customers to realize the value of the Cloud. For example, I worked with an enterprise whose Cloud vendor argued for creating a full VM for every slight application variation that existed (literally, every schema variation in development would become another virtual machine). Of course this was impossible to maintain, and the customer quickly recognized that it would never be practical.

 

The key to enabling the Cloud is full-stack functionality – from system deployment to application deployment, database deployment, and change management. This, combined with a well-constructed, multi-tenant, self-service offering, is the killer application that will truly make the Cloud vision a reality. Without it you are left with a partial solution that will breed chaos. We may see clouds in the sky, but if they don't produce rain, we're all in big trouble.
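
As a rough illustration of the full-stack idea (the blueprint contents and layer names below are invented, not any product's blueprint format), a service definition spans every layer and is deployed in order, so self-service delivers a working application rather than an empty VM.

```python
# Hedged sketch: a "full stack" service blueprint expressed as ordered layers.
# Names and parameters are illustrative only.

FULL_STACK_BLUEPRINT = {
    "system":      {"template": "rhel6-base", "cpu": 2, "memory_gb": 8},
    "application": {"middleware": "jboss-5.1", "app_package": "orders-web-2.4"},
    "database":    {"engine": "oracle-11g", "schema_package": "orders-schema-2.4"},
    "change":      {"record_in": "change-management", "approval": "standard-change"},
}

LAYER_ORDER = ["system", "application", "database", "change"]

def deploy_full_stack(tenant: str, blueprint: dict) -> None:
    """Deploy every layer in order; stopping at the VM layer is only a partial solution."""
    for layer in LAYER_ORDER:
        spec = blueprint[layer]
        print(f"[{tenant}] deploying layer '{layer}': {spec}")

if __name__ == "__main__":
    deploy_full_stack("marketing", FULL_STACK_BLUEPRINT)
```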



Why “lights-out” management of your infrastructure does not mean “lights-out” resources

 

Customers, with the help of BMC, tend to build their business case on the backs of three key areas of potential savings:

  • Retirement of displaced products (think Shavlik, Patchlink, Tripwire, Symantec, or other point tools)
  • Risk mitigation (avoiding penalties for audit failures/findings by the customer's external audit staff)
  • Reduction in labor costs

 

The first two are relatively straightforward. If we displace a product, the customer saves on the maintenance stream. If a customer can avoid an audit finding, they save the dollars associated with whatever fee would be levied upon them. The third – reduction in labor costs – is where I find that we (customer and BMC) tend to trip up. Here's why.

 

In building the business case, we message the potential windfall in savings from labor cost reduction while simultaneously failing to educate the decision makers on the need to invest in resources for the roll-out, ongoing development, and maintenance of the automation platform – also known as a customer "Center of Excellence". Why? Because it waters down the ROI. And because automation carries with it the allure of autonomous actions taken by some all-knowing, all-powerful system. We know better, don’t we?


 

Now, this doesn’t mean the customer will allocate all resources from the anticipated reduction to the automation platform – they will still see significant savings on the labor side no matter what the size of the COE is – but the customer should plan to budget some labor cost associated with the solution on a continual basis – and should socialize this very early on in the conversations with business leaders.
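
As a purely hypothetical back-of-the-envelope example (every figure below is invented for illustration), the business case still holds up well once the COE investment is budgeted alongside the gross labor savings:

```python
# Hypothetical ROI arithmetic; every figure is made up for illustration.
gross_labor_savings = 1_200_000      # annual savings promised from automating manual work
coe_headcount       = 3              # engineering, operations, business analysis
cost_per_resource   = 150_000        # fully loaded annual cost per COE resource

coe_cost    = coe_headcount * cost_per_resource      # 450,000
net_savings = gross_labor_savings - coe_cost         # 750,000 - still substantial

print(f"Gross labor savings: {gross_labor_savings:,}")
print(f"COE investment:      {coe_cost:,}")
print(f"Net annual savings:  {net_savings:,}")
```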

 

Ok – so you've taken the bait. You’re buying into this COE concept, but don't know where to start? There are three core competencies to any successful COE:

 

Engineering

These people author content for new use cases, expand existing use cases, evaluate new features, and educate the rest of the organization on the value of adopting automation. Generally speaking, they are viewed as the development arm of the COE. For a medium-sized implementation (a few use cases, < 5,000 servers), you'll need no more than one or two resources devoted to this effort – with an extended virtual team providing guidance on their areas of expertise.

 

Operations

This function is designed to maintain the automation platform infrastructure, including monitoring of the environment, expansion of the infrastructure, and root cause analysis of key issues impacting product use cases. Generally speaking, they are viewed as the NOC & Support arm of the COE and own all care & feeding. For a medium-sized implementation, assume another resource.

 

Business Analysis

These people assess the business impact and ROI associated with each of the use cases. Before a new use case is on-boarded, the business analyst figures out the anticipated savings/value achieved by the use case – and once implemented and placed into production – reports on the ongoing value of the use case. In many organizations, the same people who run the “Engineering” aspects of the COE can service this function. In larger implementations, a Program Manager will be designated.

 

Most organizations that adopt automation are technology oriented, so they gravitate to the first two functions because that's what they know how to do best. They will ask questions like "Can the infrastructure handle one more use case?" rather than asking the more important question - "What is the business value of adopting this use case?" The last function - business analysis - is routinely overlooked, but it is critical to the success of any implementation. After all, if the business leaders don't see the value of their investment, why would they bother investing more?

 

Every successful implementation of BMC's automation platform has some variant of this structure – whether they call it the "Center of Excellence", "Platform Architecture", "Engineering", or something else. And virtually all of the escalations I deal with involve misaligned expectations about what it will take to manage and support the platform. For customers investing millions in a solution that is promising millions in savings, a COE with a dedicated focus on achieving the promised business results is a smart investment - but only if you want to succeed...

Michael Ducy

Complexity

Posted by Michael Ducy Mar 4, 2011


Flipping through the Harvard Business Review I saw an ad that had the tag line,

 

"Complexity presents an opportunity and a threat at the same time."

 

Sitting back and thinking about this statement in the context of Data Center Automation, I realized that most organizations are challenged by the complexity of their processes and not by the actual technology.  This complexity presents itself as a threat to the organization in several ways.

 

First, complexity makes it harder to bring in new people to perform the same job. When the people who designed an overly complex system leave an organization (or even a department), it is often hard to find someone who can fully replace them in managing the complex system. I have personally seen people promoted into different roles in an organization only to be constantly pulled back to support a complex system they designed. Some organizations may be hesitant to let go of under-performing staff because of their specialized knowledge of a complex system, or to promote individuals into new roles because there would be no one left to support the complex system.

 

Second, complexity may prevent organizations from undertaking new initiatives.  When new initiatives come face to face with the complex systems, many organizations will either scrap the new initiative, or build complexity into the new initiative to handle this edge case. This creates a vicious "Web of Complexity" that only exacerbates the problem.

 

Third, when complex systems fail, the mean time to repair (MTTR) is often longer and the repair more difficult. Specialized experts must be called in to fix the problem, requiring them to work after hours, on weekends, or during vacations. This causes undue stress on an organization and its people, reducing the overall effectiveness of the organization.

 

What can organizations looking at Data Center Automation do about complexity?  They can begin by using their automation initiatives as a chance to reduce and remove the complexity that has been built into systems over time.  Often, complexity is built into systems to support legacy methods.  As you review your systems and processes, ensure that these legacy ways of doing things are still required.  Have your staff approach the situation with open minds, realizing that processes built several years ago can most likely be optimized and made less complex, or completely removed with automation.  Often, complexity is built in because there was no simple way to perform a task.  With an automation solution that provides a framework for automation, rather than just a scripting platform, much of this complexity can be removed.

 

According to this Harvard Business Review blog post, complexity is weighing heavily on CEOs' minds these days. Tackling complexity as part of your Data Center Automation initiatives presents an opportunity for organizations, giving them the chance to achieve better MTTR, increased staff happiness, increased agility to take on new initiatives, and less dependency on underperforming staff.

 

And best of all, your CEO will sleep better at night knowing his IT organization has reduced complexity.



So you like the idea of fully automating Cloud provisioning, but you are not ready to invest the time and money to jump to that level. There are alternatives that can deliver more automation benefit from what you already have, or with a smaller investment.

 

What automated actions would you launch in BMC Atrium Orchestrator from the server admin console of BMC BladeLogic Server Automation? This new feature, released recently in version 8.1 of BMC BladeLogic Server Automation, reduces server admin workload as well as the number of different user interfaces a server admin has to use to execute tasks. Any task automation workflow defined in BMC Atrium Orchestrator can be initiated to accomplish simple tasks, such as a 360 ping test or a query for hardware information. More complex tasks may also be of interest, such as adding a new server to a network load balancer at the same time as the server provisioning job is created.

 

Here is an example of how easy it is to select a BMC Atrium Orchestrator workflow from the BMC BladeLogic Server Automation console:

 

[Screenshot: a BMC Atrium Orchestrator workflow being selected from a drop-down menu in the BMC BladeLogic Server Automation job console]

 

Many routine and repetitive tasks can be easily pre-defined in BMC Atrium Orchestrator to be called from this drop-down menu. Only your imagination limits how far you can reduce manual effort and exploit the benefits available to your operations. Putting this capability in place is not difficult, as it does not require the substantial projects you might expect for Change Management process integration or Cloud Computing self-service provisioning. However, this server administration productivity enhancement does utilize the same orchestration technology that is the foundation for orchestrating tasks in those more comprehensive processes. You will not duplicate the cost of executing simpler workflows with this solution when the time comes to implement larger-scale solutions, and you can still execute the simple task automation on the same orchestration platform supporting the grander functions. It is yet another option for incremental improvement of server configuration management that you can choose on your roadmap to the next level of data center operations maturity - on the way to Cloud automation.
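
For readers who like to see the shape of such an integration, here is a hedged sketch of the general pattern only: a console action translating into a call that runs a predefined workflow by name, with the server's details as inputs. The endpoint, workflow name, and payload below are hypothetical and are not the actual BMC Atrium Orchestrator interface.

```python
# Hedged sketch of the general pattern: triggering a predefined orchestration
# workflow by name with the server's details as input parameters. The endpoint,
# workflow names, and payload are hypothetical, not a real product API.

import json
import urllib.request

ORCHESTRATOR_URL = "https://orchestrator.example.com/api/run-workflow"  # hypothetical

def run_workflow(workflow_name: str, parameters: dict) -> dict:
    payload = json.dumps({"workflow": workflow_name, "inputs": parameters}).encode("utf-8")
    request = urllib.request.Request(
        ORCHESTRATOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# What a console drop-down selection might translate into behind the scenes:
if __name__ == "__main__":
    result = run_workflow("add_server_to_load_balancer",
                          {"hostname": "web-prod-17", "pool": "orders-frontend"})
    print(result)
```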

 

Maybe you have other ideas regarding how to use this new feature that would benefit your efficiency and effectiveness - maybe simple, maybe more complex. Tell me about them.

Fred Breton

DCA or Cloud?

Posted by Fred Breton Feb 28, 2011


  In my last post I spoke about the importance of delegation capability for getting value from DCA. We saw that one important point was the ability to segregate duties, so that people are restricted to executing only what they need to do. That is the first requirement for expert teams to accept delegating the execution of tasks that impact the environment they're managing.

 

  Successfully delegating a task, even to the requester, means more than just putting the right credentials on some content. People need to know that they have access to the content, that they can execute it directly, and how to execute it. So you need a solution that allows the people who create the content to publish it to the people who consume it. You typically need a solution with a service catalogue, where publishers (the expert teams) can publish their offerings to the right audience and where consumers can access the offerings they need based on their role in the company.
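
As a rough sketch of that idea (the offerings, teams, and roles below are invented for illustration), a service catalogue is essentially published content filtered by the consumer's role:

```python
# Rough sketch of a role-aware service catalogue; offerings and roles are
# invented for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class Offering:
    name: str
    published_by: str          # the expert team that owns the content
    visible_to: List[str]      # roles allowed to see and execute it

CATALOGUE = [
    Offering("Restart web server",    "middleware team", ["app-support", "dba", "developer"]),
    Offering("Refresh test database", "dba team",        ["developer", "qa"]),
    Offering("Provision dev VM",      "platform team",   ["developer"]),
]

def offerings_for(role: str) -> List[Offering]:
    """What a consumer in a given role can see and request directly."""
    return [o for o in CATALOGUE if role in o.visible_to]

if __name__ == "__main__":
    for offering in offerings_for("developer"):
        print(f"{offering.name}  (published by {offering.published_by})")
```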

 

  Considering this, and various posts on this blog, I need to ask a question: what are the main differentiators between a solution able to meet DCA requirements and a solution able to meet Cloud computing requirements?

 

Please share your thoughts on this point.



As anyone who has spent much time working in or with the software industry knows, it takes a lot of talented people to design, build, market, and sell great products. So, I am very excited to say that I have invited some colleagues from across the spectrum to join our already fantastic contributor community for the DCA Blog. I think this will give our readers a chance to read different perspectives on the BMC products and the Data Center Automation area in general.

 

So, I will leave it as a surprise as to who is going to contribute, but I look forward to some great new topics and discussions going forward.

 

Thank you for reading!

Bill Robinson

Automating Chaos

Posted by Bill Robinson Feb 23, 2011



Some time ago I was at a customer who had purchased BBSA to automate their application deployments. They were struggling to get the tool to work, and I was sent in to straighten things out. The customer had their BBSA servers and production infrastructure in one location and had developers across an ocean and a WAN link from that site. Their "packages" were weighing in at around 2GB, and they complained that they could never get anything packaged because BBSA would time out during the upload of this 2GB package (with the BBSA GUI traversing the Atlantic). Previously they had been using FTP with great success. The customer wanted a drop-in replacement of BBSA for their FTP process - minimal change to their current process.

 

A 2GB package for application code seemed large, so I asked them to describe the process in more detail. A developer would make a couple of code changes (two or three files), tar up the whole directory (each of their customers had everything in one directory, with multiple customers sharing a system), copy it over to the central location (across a slow WAN link), and then someone in the central location would untar the file, do a find-and-replace on IPs, hostnames, and other configuration settings relevant to the production environment, and make it live. What was in the tarball? Multiple JDKs, JREs, Tomcat, JBoss, and multiple versions of WebSphere, WebLogic, and OAS, depending on the customer requirements. The net of this was that for a minor code update, 2GB of data was being pushed across an ocean, and each system had duplicate installations of Java and the application servers for each customer.

 

This is chaos.  Most application servers support a single install with multiple instances.  Our customer could have been firmer with their customers about supported versions of Java and the application servers, and the hardcoding of IPs and hostnames with a find-and-replace would have been better handled with configuration files in the appropriate places, so there was one central place to make changes.  Then, instead of re-deploying all of this for a small code change, only the change is deployed.  All of this could have been done without BBSA.  BBSA would have automated the installation of the applications and could have been used to manage the configuration files and the deployments of the now much smaller code updates.  There is little value in automation if you are unable to standardize your environment and modify your processes.
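
Here is a small sketch of the alternative I am describing (file contents and hostnames are invented): environment-specific values live in one small configuration source per environment and are rendered from a template, so a code change never requires re-shipping the whole stack or doing a find-and-replace on IPs.

```python
# Sketch of externalized, per-environment configuration; paths and values are invented.
# The application code stays identical across environments; only this small file differs,
# so a release ships the changed code files, not a 2GB tarball of JDKs and app servers.

import json
from string import Template

ENVIRONMENTS = {
    "development": {"db_host": "10.1.1.15", "app_host": "dev-app01.example.com"},
    "production":  {"db_host": "10.9.9.20", "app_host": "prod-app01.example.com"},
}

CONFIG_TEMPLATE = Template(json.dumps({
    "datasource_url": "jdbc:oracle:thin:@${db_host}:1521/ORCL",
    "service_host":   "${app_host}",
}, indent=2))

def render_config(environment: str) -> str:
    """Render the config for one environment from the shared template."""
    return CONFIG_TEMPLATE.substitute(ENVIRONMENTS[environment])

if __name__ == "__main__":
    print(render_config("production"))
```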

 

There were a few people at the customer who agreed with me and had even proposed similar changes but they did not have high hopes because “this is the way we do things and it’s working fine” was the response to any request for change. Hopefully they were able to fight the political battles and make some changes. 

 

It's easy to take a software solution and try to use it as a drop-in replacement for a current process, but to get real value you need to evaluate your processes and see whether, in light of new technologies, they still make good sense.



“Water and stuff, of course”. True, True, but I am thinking about the great metaphorical IT Cloud. I remember that term from the Dot.Com days, and it was really just a way of saying – “It’s really complicated. Don’t worry about it. You don’t need to know”. There is probably some joke in there about dot.com company business plans, but the basic tenet was sound. You don’t need to worry about how we are delivering this service to you – just enjoy it and take advantage of this new found freedom to focus on your business, not the IT.

 

But now you have been asked to design and build a private cloud. The ambiguity inherent in a lot of cloud computing discussions can be frustrating.

 

I feel your pain. Personally, I like to know what is going on under the covers.

 

So, what about BMC? As a long time data center automation geek, one of the things that always gets me excited is the underlying provisioning mechanism in BMC’s solution. As a former customer, my team was an early adopter of the idea of Full Stack Provisioning. We wanted every element of the server build to be automated - from the operating system, to the base applications, to the business applications, and data. But I always felt like we were fighting an uphill battle. IT management was still an intensely personal affair. Server and Application admins wanted direct control of their devices and software.


 

So, with the "advent" of cloud computing, I feel like the idea of Full Stack Provisioning has finally found its moment. The idea of Self-Service from a Service Catalog forces more consistency on IT than was normal before. And the implied expectations of near real-time service delivery and instant gratification have made pervasive automation a non-negotiable part of the solution (I have to take a breath after that sentence). Cloud Computing has also encouraged the idea of modularity – what I like to call the "Lego Principle". The basic idea is that when a provider is building a service for the cloud, they should build it out of common, reusable blocks instead of reinventing the wheel every time – a common operating system base, common monitoring agents, common application server installs, etc. The actual "configuration" should be minimal.
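
A rough sketch of that Lego Principle (the block and service names are mine, for illustration only): common blocks are defined once and each service is composed from them, leaving only a thin layer of service-specific configuration.

```python
# Rough sketch of the "Lego Principle": common, reusable blocks defined once and
# composed into services. Block and service names are illustrative only.

COMMON_BLOCKS = {
    "base_os":          "Hardened RHEL 6 image",
    "monitoring_agent": "Standard monitoring agent install",
    "app_server":       "Standard JBoss install",
    "db_client":        "Standard Oracle client install",
}

SERVICES = {
    "orders-web":   {"blocks": ["base_os", "monitoring_agent", "app_server"],
                     "config": {"heap_gb": 4}},
    "orders-batch": {"blocks": ["base_os", "monitoring_agent", "db_client"],
                     "config": {"cron": "0 2 * * *"}},
}

def build(service_name: str) -> None:
    """Assemble a service from shared blocks plus a minimal service-specific layer."""
    service = SERVICES[service_name]
    print(f"Building {service_name} from reusable blocks:")
    for block in service["blocks"]:
        print(f"  + {block}: {COMMON_BLOCKS[block]}")
    print(f"  service-specific configuration (kept minimal): {service['config']}")

if __name__ == "__main__":
    build("orders-web")
    build("orders-batch")
```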

 

So, what is the take away here? Two things. First of all, as you consider your plans to build a private cloud, consider how you will achieve Full-Stack Provisioning in the cloud. Second, if you agree that this is important, you can prepare your team for the transition. The most important thing to look at is how you are building and deploying your servers, applications on those servers, and all of the surrounding infrastructure (network, storage, etc.). You can start building the process and automation you need in the cloud in your traditional infrastructure today. Don't wait, jump on the Full-Stack band wagon.

David Koppe

Recycled Building blocks

Posted by David Koppe Feb 9, 2011


I sometimes think about how difficult our job is, with constantly changing technologies, new tools, new versions of old tools, new and old vendors, and acquisitions, all arriving on different schedules.  IT infrastructure folks like us have to keep up with all of this, figure out how to make everything work together, and help form a strategy for how to make the best of it all.  How do we keep up with it all?!?

 

 

One method I've always found useful when I'm learning a new tool or technology is to break things down into building blocks.  What are the tiers of an application?  Client, server, agent, etc.  What function or functions does each component perform?  For each component, how does it work?  How does it start, run, and stop?  How do you configure it?  Does it depend on any other technologies or tools?  If so, you can break those down too.

 

 

So, rather than seeing a complex array of tools across an IT environment that seems too daunting to deal with, you see a myriad of individual components that are actually each pretty simple.

 

 

The other thing that I've realized is that there is actually very little that is truly "new" in any "new" technology.  An operating system is an operating system, whether it's Windows, Linux, UNIX, OS/400 or z/OS.  When you break them down into components, they do the same types of things - starting, managing and running processes; handling various types of hardware; receiving input from users and sending output to them.  Is TCP/IP really that different from SNA on the mainframe?  Well, obviously in many ways, yes, but in the sense that it manages how applications and devices talk to each other, not really.  Similarly with a concept that's on everybody's lips these days - virtualization.  IBM and many other mainframe vendors had hypervisors and virtual machine concepts decades ago.  Obviously newer iterations of a technology bring innovation to bear, or combine a concept that's been around a while with other, newer technologies to differentiate themselves, but at a high level it's really the same thing.

 

 

Why does this matter?  Well, when you've broken something down into its components, most likely you'll realize that you've already seen most of them in some other guise.  So, all of a sudden that daunting collection of IT stuff is mostly a set of building blocks you've seen before.

 

 

Then it dawned on me that this reminds me of something else - how we approach automation.  The configuration object dictionary in BladeLogic Server Automation breaks a server down into the building blocks that make it up.  A BLPackage is a way of representing the components that make up a piece of software or a business application.  Component templates are similar.  This also extends into Atrium Orchestrator - adapters, adapter modules, operations actions and so on are based on breaking an automated IT process down into its component functions.  Once you have a module that performs a function, you can recycle it in many other workflows and processes.
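
To make that recycling concrete, here is a toy sketch (the module names are invented and do not correspond to real Atrium Orchestrator adapters or operations actions): once a function exists as a module, different workflows simply arrange the same modules in different orders.

```python
# Toy sketch of recycling building-block modules across workflows; the module
# names are invented and do not correspond to real product adapters.

def stop_service(host):   print(f"stopping service on {host}")
def take_backup(host):    print(f"backing up {host}")
def apply_patch(host):    print(f"patching {host}")
def start_service(host):  print(f"starting service on {host}")
def verify_health(host):  print(f"health check on {host}")

# Two different processes recycle the same modules in different orders.
PATCH_WORKFLOW   = [stop_service, take_backup, apply_patch, start_service, verify_health]
RESTART_WORKFLOW = [stop_service, start_service, verify_health]

def run(workflow, host):
    """Execute each reusable module in the order the workflow defines."""
    for module in workflow:
        module(host)

if __name__ == "__main__":
    run(PATCH_WORKFLOW, "app01")
    run(RESTART_WORKFLOW, "app02")
```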

 

 

So that's how I approach understanding the complex universe of technology we all face every day.  And by a handy coincidence, the same approach helps in designing solutions to simplify that universe!
