
The below excerpt is from a Partner Insights paper

By Colin Lacey, Vice President, Data Center Transformation Services and Solutions, Technology Consulting and Integration Services, Unisys, and Alan Chhabra, Director, Global Services Cloud Practice, BMC Software




Setting up and configuring your cloud infrastructure is the first phase in your cloud journey — and is only the start of gaining value from the cloud.  To drive the most business value for your organization, you need to understand what’s involved in the deployment and operations phases, and when and where to turn for outside help.



When to Look for Help

To successfully deploy and operate a cloud environment, you need to understand your application needs and business goals first. Then, you need to implement the tools and processes required to monitor and manage the rich set of integrated technologies that make up the cloud environment. These range from storage to management and virtualization, and all the way through to authentication and billing.



Not all IT organizations will be equally able, or willing, to take on these ongoing deployment and operations chores. There are system integrators (SIs) that offer a full advisory process built around a decision workflow to determine which applications and services should run in which environments, such as a conventional in-house data center versus a public or private cloud.



For those applications that are well suited to the cloud, the system integrator should suggest options that will make the use of that cloud more efficient or effective. The SI should be able, for each application, to suggest what changes are required, the level of effort required to modify the application, and whether the benefit is worth the effort.



Investing time and effort upfront to determine how you will populate and manage your cloud is essential to getting the most benefit from cloud computing in the short and long term.




For information about how BMC and Unisys work together to optimize your cloud deployment and operations…



All this capacity isn’t free. It might be cheaper, but it isn’t free. I like to tell customers that clouds don’t lower your total cost of IT. They lower the total cost per business service. Why? Because IT has removed the processes and barriers to business use of technology – and that means more services will be requested. More services mean more costs. But, each service is cheaper – and presumably beneficial to the business. So, total costs stay about the same, but a lot more gets done. Progress!


But, this doesn’t remove the problem of payment. Back in the day, when I was in an internal development shop, we used to identify a business case for a new server to support a new web app – and then someone would fill out forms in triplicate and the procurement guy would negotiate with the hardware vendor and a box would appear a month later. Then we’d wait for a sysadmin to configure the box, and eventually, we had a new development environment. The business set aside money on day 1, the system came in on day 30, and the check was ready to pay the hardware vendor.

The cloud is the opposite of this. The cloud requires IT to cut a check to a hardware and infrastructure vendor on day 1. Then, once the systems are on-boarded, you wait. You wait for the business to request a cloud service. Then, if you have a chargeback or showback model in place, you get compensated for what the business used.

That means IT, on its own, has to cut a check well before any business need is established. For this, you need to understand what services are being delivered – and how you’re going to recoup your costs. IT has become much more of a traditional supplier, building a product before orders come in.

Chargeback is often a sticky topic for internal IT:

  1. The mechanisms for actually charging back the users in the company’s financial systems are often not in place.
  2. The business is loath to move from a fixed-cost model for IT to a variable one, where its bill could change from month to month and quarter to quarter.
  3. The models for chargeback vary wildly by organization and industry.

So, as an intermediate step to true “pay by the drink” IT, showback seems to be a good option. Calculate the costs of service delivery – in the context of the value delivered, of course – and share them with the business. Transparency increases trust and the likelihood of progress.
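As a rough illustration of what a showback calculation involves, here is a minimal sketch; the resource rates, usage records, and business-unit names are all hypothetical:

```python
# Hypothetical per-unit monthly rates for delivering cloud resources.
RATES = {"vcpu_hours": 0.05, "gb_storage": 0.10, "gb_transfer": 0.02}

def showback(usage_records):
    """Aggregate resource usage into a cost per business unit.

    usage_records: list of dicts like
        {"business_unit": "Marketing", "vcpu_hours": 1200, "gb_storage": 500}
    Returns a dict mapping business unit -> total cost for the period.
    """
    report = {}
    for record in usage_records:
        cost = sum(RATES.get(resource, 0.0) * amount
                   for resource, amount in record.items()
                   if resource != "business_unit")
        unit = record["business_unit"]
        report[unit] = report.get(unit, 0.0) + cost
    return report

usage = [
    {"business_unit": "Marketing", "vcpu_hours": 1200, "gb_storage": 500},
    {"business_unit": "Engineering", "vcpu_hours": 4000, "gb_transfer": 250},
]
print(showback(usage))
```

The point is not the arithmetic, which is trivial, but the transparency: a report like this can be shared with each business unit without any billing machinery being in place.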

IT has long awaited the day when its work would be properly valued and rewarded – or at least compensated. The key seems to be flexibility in the pricing model married with transparency in the costing model. No single algorithm will work across the board, in all companies. The first priority is ensuring the business supports ongoing investment in the cloud – so keep your eye on the prize.

Without that, the rest is a moot point.

Criteria 9: Expose transparent showback or chargeback of the cost of delivering business services.

Criteria 10: Seamlessly integrate internal financial transactions — as easy as in the public cloud.


Every so often, we do a blog series. This blog series was born of dozens of cloud customer conversations, outlining the requirements for a cloud. You can read it from the start here. Can't wait for the riveting conclusion? Check out the whitepaper I wrote on Cloud Requirements.


All the interesting news lately surrounding Citrix’s entry into the Apache Software Foundation, and Rackspace’s endorsement of OpenStack by powering their cloud with it, got me thinking… What does “open” really mean?


Let’s have a little fun and just define “open” as an adjective because, let’s face it, that’s what we are doing: describing products as “open”. Using a dictionary as my reference point, here is how it defines the adjective “open”:


  1. “not closed or barred at the time, as a doorway by a door, a window by a sash, or a gateway by a gate: to leave the windows open at night.”
  2. “(of a door, gate, window sash, or the like) set so as to permit passage through the opening it can be used to close.”
  3. “having no means of closing or barring: an open portico.”
  4. “having the interior immediately accessible, as a box with the lid raised or a drawer that is pulled out.”
  5. “relatively free of obstructions to sight, movement, or internal arrangement: an open floor plan.”




I found the first part of the definition interesting.  If we take BMC’s Cloud Lifecycle Manager and apply it here, I'd say it would go a little like:


“not closed or barred at the time, as with a RESTful API: offering up full insight and integration into your products via documentation, a partnership, and a RESTful API.”

Next we have the physical description of a door, gate or window… let’s spin this a bit and apply it to BMC’s Cloud Lifecycle Manager:


“(of a RESTful API) set so as to permit interfacing through the opening it can be used to integrate.”

Next we address the thought of locking one out:


“having no means of closing or barring: open infrastructure support.”

Accessibility comes to mind when debating “open”:


“having the interior immediately accessible, as a RESTful API with the proper documentation or the ability to connect and manage any cloud whether public or private.”

Finally we address what is notably referred to as “no lock-in”:


“relatively free of obstructions to integration, to existing vendors, or to internal/external investments: an open infrastructure plan.”

Piecing this all together, does being “open” really have to mean "open source software" is part of the software stack? In my opinion, no!


Conveniently, as I began to type this blog entry, I had the opportunity to attend the Amazon AWS Summit in New York last week. While there, I asked a good number of attendees: “What is open to you?” Surprisingly, being “open” is more about "no lock-in" and the ability to support multiple vendors than it is about having open source software as part of the “value” in a particular stack. At BMC, we like to refer to “open” as “Infrastructure Neutral” because, let’s face it, that is what matters when implementing cloud in your enterprise!


Sure, having open source software in your stack is one aspect of “open”, but let’s face it… while a sampling of target customers may consume a product that has some open source in the stack, only a handful will actually want to customize that open source code base and have the development staff to do so. For most, “open” is really about:


  1. What clouds do you support?
  2. What hypervisors do you support?
  3. Can you leverage and manage my existing network hardware?
  4. Can you leverage and manage my existing storage hardware?
  5. Can you manage my private cloud as well as public cloud(s)?
  6. Can I on-board and manage existing application workloads and multi-tiered applications?
  7. Can I provide a single self-service portal for all the above?


Lump this all together and ask the almighty question… Am I locked to one particular vendor whether at the cloud, infrastructure or management layers?


Translate all the above and we get “Infrastructure Neutral” which is really code for “No Lock-In” which is really code for “Open”.  So now that we have the terminology fleshed out, in my next blog I will tell you how BMC’s Cloud Lifecycle Manager fits my definition of “open” and where it is “Infrastructure Neutral”.  Until then, remember, “open” is more than just "open source"!


Great reading from a paper by Colin Lacey, Vice President, Data Center Transformation Services and Solutions, Technology Consulting and Integration Services, Unisys, and Alan Chhabra, Senior Director, Global Services Cloud and Data Center Practices, BMC Software


Planning and establishing a fully integrated cloud architecture (along with its related IT management processes) is just the beginning of the journey to realizing cost and agility benefits from the cloud. The next steps, end-user enablement, deployment and operations are just as critical. Establishing an operational cloud involves migrating existing applications and data to the cloud without interrupting business operations. It also involves the ongoing effort to cost-effectively manage a complex, often heterogeneous infrastructure. To drive the most business value for your organization, you need to understand what’s involved in the deployment and ongoing operational phase, and when and where to turn for outside help.


Build for Success

The first phase in your cloud journey could be compared to building a new office building. A quality building should not only protect its occupants from the elements and keep their property secure, but should also make them more productive, adapt to their changing needs, and introduce new technologies (such as energy-saving windows) to deliver more value over time.


The cloud equivalent of the “design-and-build” processes for an office building involves putting the physical and management infrastructure in place and creating a service catalog — a menu of services — that will be available to deliver business services. However, just as an office building is only an empty shell until tenants move in, so too is a cloud an empty container until you fill it with the applications, infrastructure, and services that help your business reach its goals. The value you get from your cloud depends on what services are available, how well these services are managed, and how well they support your business objectives.
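A service catalog is, concretely, just structured data describing each offering on the menu. A minimal sketch, with hypothetical service names, resource sizes, and prices:

```python
# A hypothetical service catalog: the "menu of services" a user picks from.
SERVICE_CATALOG = [
    {
        "name": "Standard Dev Environment",
        "description": "2 vCPU / 8 GB Linux VM with common dev tools",
        "resources": {"vcpus": 2, "ram_gb": 8, "disk_gb": 100},
        "sla": {"uptime": "99.5%"},
        "monthly_price": 45.00,
    },
    {
        "name": "Production Web Tier",
        "description": "Load-balanced pair of 4 vCPU / 16 GB VMs",
        "resources": {"vcpus": 8, "ram_gb": 32, "disk_gb": 200},
        "sla": {"uptime": "99.9%"},
        "monthly_price": 220.00,
    },
]

def list_offerings(catalog):
    """Render the menu a self-service portal would show."""
    return [f'{s["name"]}: {s["description"]} (${s["monthly_price"]:.2f}/mo)'
            for s in catalog]

for line in list_offerings(SERVICE_CATALOG):
    print(line)
```

Note that each entry bundles resources, service levels, and price together; that is what lets the catalog function as both a technical spec and a business offer.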


You will need to migrate existing applications, infrastructure, and data, as well as cost-effectively maintain and evolve your cloud as your needs change. You also need to communicate and internally market the business services your cloud has enabled and will enable.


In an office building, facilities managers are responsible for customizing offices for various tenants, assuring basic services, and performing preventive maintenance. In the cloud world, cloud administrators support agreed-upon levels of uptime and performance, apply configuration changes and security patches, and manage access control and authentication. They also integrate cloud applications with legacy systems and dynamically reconfigure the environment as business needs change.


While you can achieve significant benefits in the deployment and operations phases, there are also considerable risks, including the following:


  • Overpromising or under-delivering in terms of the cost, reliability, or flexibility of cloud applications, especially as your IT staff is learning how to manage them.
  • Not being able to meet the demand for cloud applications or services if they become widely accepted.
  • Being locked into agreements with a single external cloud vendor. This can be avoided by providing a “reverse migration” path if your organization decides to bring applications or services back in-house.
  • Not properly managing the need to balance the migration effort with new initiatives, resulting in delays and increased costs. For example, one bank has largely contracted out its cloud application migration activities in order to keep internal IT focused on business priorities.


Just as most office buildings don’t expect individual tenants to be experts in building management, many cloud operations processes may require skills that IT does not have in-house.


To learn about more considerations and approaches for successfully deploying and operating a cloud environment — including when to look outside your organization for help — read “You’re Cloud-Ready: Now What? Managing Deployment and Operations.”


Dominic Wellington

iPads in the clouds

Posted by Dominic Wellington Apr 18, 2012

This week is BMC's big sales kick-off meeting, which explains the radio silence around here. However, on the last leg of my trip to Reno I got into a conversation which, while not in the least about cloud, does have some bearing on what we do with it and how we try to deploy it.


My seatmate on the little regional jet was a lady of a certain age, who to her credit was travelling to Squaw Valley for the skiing. We got chatting, and I was happy to talk in hopes of heading off the worst effects of jet-lag. Comparing her new iPad to my scuffed old first-gen model, it became obvious that she did not think of the device as a computer. A computer was a balky, intimidating device that required infrastructure and maintenance to function. The iPad in contrast just got out of my seatmate's way, and let her do what she cared about: e-mail her children or video-call her grandchildren, look at family photographs, and so on. How all of this was working behind the scenes could not have interested her less; all she cared about was the result. 


A proper cloud management platform should work in much the same way: get out of the users' way, and let them get what they want. The trick, of course, is that not all users will want the same things or have the same expectations. Role-based views are key, presenting each user with data and options appropriate for them and their jobs.
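At its simplest, a role-based view is just a mapping from role to visible options. A minimal sketch, with hypothetical roles and actions:

```python
# Hypothetical roles and the portal actions each one should see.
ROLE_ACTIONS = {
    "end_user": ["request_service", "view_my_services"],
    "cloud_admin": ["request_service", "view_all_services",
                    "patch_systems", "manage_capacity"],
    "finance": ["view_showback_reports"],
}

def visible_actions(role):
    """Return the portal options a user in the given role should see;
    unknown roles see nothing rather than everything."""
    return ROLE_ACTIONS.get(role, [])

print(visible_actions("end_user"))
```

The grandmother with the iPad is the "end_user" row: two buttons that do what she cares about, and none of the machinery behind them.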


Of course behind the scenes there has to be a lot of technology taking care of delivering on those expectations. An iPad is a very impressive device, and technically-minded people can geek out for ages on how it works, but most users don't know or care. The technology has to be transparent in order for the cloud or the iPad to be successful. 


Now if you'll forgive me, I have to head to my next meeting, so I'm going to post this - from my iPad, of course.

Brian Singer

The Cloud Wars

Posted by Brian Singer Apr 12, 2012

There has been quite a bit of stir in the industry thanks to certain announcements that have been made around different cloud stacks. Between OpenStack, Cloudstack, AWS, and others (toss some of those terms into Google News to get an idea of what’s been going on), it’s clear that the industry hasn’t coalesced around a single cloud standard yet. Fortunately, BMC customers don’t have to worry too much about it.


We take a very customer-centric approach to our products. In this case, that means that we designed BMC Cloud Lifecycle Management to be future-proof. We don’t have a crystal ball, and neither do our customers. What we do believe is that it’s important to provide choice and flexibility – what is right for your organization today may not be right in 3-4 years.


Assuming the IaaS layer becomes commoditized in the next 3-4 years, will you be able to switch out your current infrastructure for cheaper, commoditized infrastructure without completely changing your cloud management platform? Will you be able to switch out or add public cloud providers which may have entirely different APIs? BMC customers can answer both questions in the affirmative.


We can sleep at night knowing that for our customers, it doesn’t really matter which stack becomes the de facto standard for cloud IaaS – or if a standard even does emerge. CLM’s architecture and provider API allow them to swap out one hypervisor for another, one public cloud provider for another, or one IaaS stack for another – or to add any of the above to their existing infrastructure options.
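The pattern being described here is a provider abstraction layer: the management platform codes against a common interface, and each stack plugs in behind it. The sketch below is illustrative only; the class and method names are hypothetical, not CLM's actual provider API:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Illustrative provider interface: the management layer codes against
    this, so swapping stacks means adding an adapter, not rewriting."""
    @abstractmethod
    def provision(self, service_name: str) -> str: ...

class PublicCloudA(CloudProvider):
    def provision(self, service_name: str) -> str:
        return f"{service_name} provisioned on public cloud A"

class PrivateStackB(CloudProvider):
    def provision(self, service_name: str) -> str:
        return f"{service_name} provisioned on private stack B"

def deploy(provider: CloudProvider, service_name: str) -> str:
    # The management layer is unchanged regardless of the backing stack.
    return provider.provision(service_name)

print(deploy(PublicCloudA(), "web-tier"))
print(deploy(PrivateStackB(), "web-tier"))
```

The design choice is the standard adapter pattern: the cost of supporting a new stack is one new adapter class, not a platform rewrite.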


At the end of the day, the real value is in the services that IT is providing on top of this infrastructure, and that’s where BMC customers are best positioned to add value to their organizations now, and in the future.


One surefire path to performance degradation is running out of resources. If you haven’t got enough cloud back there in the datacenter, you’ll run into some natural delays fulfilling requests. To the end user, the cloud is magic. It’s this unlimited spigot of computing goodness that will never cease to deliver. Just as you don’t consider the wind farms and nuclear reactors as you pop in an HD surround sound DVD, the end user isn’t going to consider your datacenter and the boxes within it as they request a new service.

And that’s how you want it to be.

But, in order to ensure this blissful ignorance doesn’t become a rolling brownout, it’s your job as the supply manager to understand and manage capacity. While wandering through a datacenter checking for red lights and open racks might be one approach, it’s clearly more nuanced in reality:

  1. Understand what you actually have in place – and how many services it can serve. Each cloud service requires a different resource profile, so this type of analysis requires not just an understanding of your boxes – but an understanding of your service options.
  2. Estimate the rate at which these services are being requested, consumed, and retired – and, ideally, any factors that might change that over time, like the introduction of a new service or a new user group.
  3. Have a range of available capacity-growing options, from leveraging public cloud resources to investing in additional internal resources, accounting for constraints like budgets, floor space, and so forth.
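The three steps above amount to a what-if model: current usage, net growth rate, and remaining headroom. A minimal sketch using a crude linear model; the resource profiles, pool size, and request rates are all hypothetical:

```python
# Hypothetical resource profiles: vCPUs each service instance consumes.
PROFILES = {"web_app": 2, "dev_env": 4, "analytics": 8}
POOL_VCPUS = 400  # total internal capacity available

def months_until_exhausted(current_usage, monthly_requests, monthly_retirements):
    """Estimate months of headroom left, given per-service request and
    retirement rates (a crude linear what-if model, not a forecast)."""
    used = sum(PROFILES[s] * n for s, n in current_usage.items())
    net_growth = sum(
        PROFILES[s] * (monthly_requests.get(s, 0) - monthly_retirements.get(s, 0))
        for s in PROFILES
    )
    if net_growth <= 0:
        return None  # usage is flat or shrinking: no exhaustion forecast
    return (POOL_VCPUS - used) / net_growth

headroom = months_until_exhausted(
    current_usage={"web_app": 50, "dev_env": 20, "analytics": 10},
    monthly_requests={"web_app": 10, "dev_env": 5, "analytics": 1},
    monthly_retirements={"web_app": 6, "dev_env": 2},
)
print(f"~{headroom:.1f} months of capacity left")
```

A what-if scenario is then just a second call with different rates – say, doubling the "web_app" request rate to model a traffic surge.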

With the technology in place to collect and analyze this information, you, as the provider of the service, are in a better position to accurately estimate and communicate the capacity of your cloud – and thus drive ongoing investment.

And, while quantification of megabytes and flops might be academically intriguing, the business making this investment typically prefers to understand all of this in terms of business services. Marketing tripled its use of SharePoint. What if web site traffic doubles next month? We’re buying resources to support 25 developers out on the public cloud.

This is because capacity is nothing – if you don’t know what you’re doing with it.

Criteria 6: Monitor current resource capacity and utilization in business service terms.

Criteria 7: Estimate upcoming changes to utilization and calculate what-if scenarios.

Criteria 8: Establish a set of capacity-growth options, both internal and external.


Every so often, we do a blog series. This blog series was born of dozens of cloud customer conversations, outlining the requirements for a cloud. You can read it from the start here. Can't wait for the riveting conclusion? Check out the whitepaper I wrote on Cloud Requirements.

Dominic Wellington

IT as a utility

Posted by Dominic Wellington Apr 3, 2012

There is a lot of talk about cloud enabling a "utility" model of IT, where accessing IT capacity is as easy as hitting the light switch, and users worry about how that is achieved exactly as much as they worry about whether the light will come on - i.e., not at all.


However, that metaphor can be taken too far. Last Thursday I could be found in Paris, taking part in a panel together with a competitor, two representatives from cloud-using companies, and a lawyer specialising in IT compliance issues. At one point the competitor, pushing hard for public cloud, made the analogy that some decades ago factory owners were faced with the choice of whether to keep electrical generating capacity on site or trust the national grid to deliver electricity on demand. His point, of course, was that on-site power generation was expensive and unnecessary, and by extension on-site IT would be shown to be expensive and unnecessary as well.


Unfortunately for my opponent, the person seated between us was a representative of a company specializing in electrical systems, including back-up generators, UPS, and the like. It was therefore all too easy for me to point out that there was a thriving market in providing exactly that on-site capacity, perhaps not for the bulk of loads, but definitely to ensure that mission-critical services stayed up and available even if third-party suppliers were having trouble delivering electricity...


This is exactly the case with cloud. There are a lot of services where pushing them out to the public cloud makes perfect sense, as they are not particularly demanding or differentiated. However, there is a hard core of services for which that is not a good fit, and then a spectrum in between. For most organizations, the correct answer is going to be a mix of on-premise and third-party capacity, and probably with various different suppliers at each end too. This real-world messy situation is why we are committed to heterogeneity, rather than taking a simplistic metaphor a step too far.
