One of the big challenges with public cloud services is that IT remains responsible for SLAs even though a large part of the infrastructure is outside its control. Megaupload, while it did promote illegal content, was also being used as a cloud storage service by many legitimate users. The fact that it was taken offline with no way to get content, legal or illegal, back from it should concern anyone thinking about cloud. Here I discuss the issues involved and the steps organizations can take to protect themselves from the risk of a Megaupload-like occurrence.
Being with BMC, I am afforded the opportunity to study the cloud initiatives of large enterprises across a wide swath of industries. One of the things I see that consistently keeps the leaders of these organizations up at night is wondering whether anyone will actually use the clouds that they are planning on building. It’s a valid concern, as one of the drivers of an astonishing number of the cloud initiatives we see is that end users are going around IT to access their own cloud resources, creating shadow IT organizations (which @lilacschoenbeck addressed in last week’s post). Thus, the question for IT leaders is how to keep their cloud initiative from resting on the IT equivalent of “if you build it, they will come.”
In addressing that question, I always like to look for examples of folks who have done it successfully. And there’s no better public example of that than Amazon Web Services. Organizations looking to take advantage of cloud can take a few lessons from AWS in how to offer cloud services that people want to use. A lot has been written about the secret of their success, but I think that one of the best examples is their recently launched service called DynamoDB.
DynamoDB is a database offering that is, at its core, NoSQL as a service. It’s getting a lot of great press and reviews because it takes a service that is both very valuable to developers (scalable NoSQL) and painful to set up and manage, and completely abstracts away all of the complexity. Developers simply start sending data to the service; they don’t have to get their hands dirty with the underlying muck of replication, latency, partitioning, and so on. AWS has figured out, correctly, that developers want nothing to do with database administration, and it has accordingly built a service (like many of its services) that simply removes that thorn from its customers’ side.
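The appeal described above – just send data and ignore the plumbing – can be illustrated with a toy stand-in. This is not the real AWS SDK; the class and method names below are illustrative assumptions that merely mimic the shape of a key-value API with no knobs for replication or partitioning:

```python
# Illustrative only: an in-memory stand-in for a "NoSQL as a service" client.
# The developer's entire job is to put and get items; capacity, replication,
# and partitioning are invisible, which is the point the post is making.
class TinyKeyValueStore:
    def __init__(self, table_name):
        self.table_name = table_name
        self._items = {}  # hash key -> item

    def put_item(self, key, item):
        """Store an item; no tuning parameters exposed to the caller."""
        self._items[key] = dict(item)

    def get_item(self, key):
        """Fetch an item by key, or None if absent."""
        return self._items.get(key)


store = TinyKeyValueStore("users")
store.put_item("u1", {"name": "Ada", "plan": "pro"})
print(store.get_item("u1")["name"])  # prints: Ada
```

The contrast worth noticing is everything that is *not* in this interface: no node counts, no shard maps, no failover configuration. That absence is what the post credits for DynamoDB's reception.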
What this boils down to is something we are big on at BMC when it comes to clouds: “giving the people what they want.” In the enterprise, that translates to providing complete business services to end users and abstracting away all the muck. If your users are ordering up infrastructure from your cloud but still have to go to all the trouble of configuring that infrastructure, configuring an application, installing middleware, configuring IP addresses, and generally doing things which are painful, then they will find a way around the offerings that IT has blessed. Amazon has figured out that the secret to success is focusing on giving customers exactly what they want, without any additional complexity. By taking a similar approach, IT can ensure their users will embrace the cloud services they offer.
In the last few weeks, I’ve talked with numerous enterprises, service providers, esteemed analysts from multiple excellent firms, and even the occasional blogger. Cloud is in. Cloud is hot. Cloud has budgets and timelines and mandates and executive support. A lot of companies have allocated a lot of resources to this cloud thing. Why?
First, no good trend goes untapped. That’s just the way of the world – from Beanie Babies to VDI, all people and organizations must respond to giant market movements by either joining or explaining why they are not joining. Cloud makes reasonable sense, so the vast majority of enterprises are joining. I’m sure there’s a group of conscientious abstainers – but I am also certain they answer a lot of questions.
Second, fear. Straight-up fear. Why? Let’s face it – these business unit users, or “informal buyers” in the Forrester nomenclature, are buying cloud. These are marketing people who buy SaaS services for social media, R&D types who need to run massive calculations, and developers who need testing environments. Whether you like it or not, these folks are out there with their AmEx cards, buying some Amazon or some Rackspace or some Terremark.
They are circumventing IT. They are going around IT to get their job done.
They have some good reasons for doing this:
You aren’t selling what they are buying. You have no offerings that meet their needs – or, in a small number of cases, they don’t know about offerings you do have.
Your process is cumbersome. It takes more time than the relative ease of the public cloud.
Your pricing won’t work for them. Either it’s too pricey, or it involves internal IT budget processes, or it involves finance and approvals and online procurement systems. It’s messy.
Since we’re in IT, we know there are a number of risks associated with haphazard external cloud usage, not the least of which are compliance, disaster recovery, data loss, and even surprisingly high bills. But, to these informal buyers, these are not considerations.
This brings us to the current IT conundrum – and the motivation for this investment in cloud. As IT, you have two choices:
A. Take real steps to give the people what they want – and serve their needs, either as a broker of external resources or as a provider of private cloud.
B. Write off this increasing population of IT users and stick to your knitting of enterprise datacenter applications.
Most IT shops are choosing option A, in an effort to avoid being relegated to increasingly “legacy” functions.
Of course, the clouds being built are serving two masters: first, the datacenter applications in virtual and physical environments that could benefit from consolidation and flexibility – and second, these informal buyers and their insatiable need for computing power. These are two very different masters. The first is IT building cloud for IT – and we can only hope to be self-aware enough to know what we want. The second is IT serving a totally different group.
If you build it, will they come?
To answer this, I’m doing a blog series on key cloud requirements. Not RFP requirements – like “scalability to 5 million nodes” – but real considerations when building your cloud, gleaned from a couple of years of customer, analyst, and user interactions. There are good clouds. There are bad clouds. And if you invest in building it, you’d like to see it succeed.
Every so often, we do a blog series. This blog series was born of dozens of cloud customer conversations, outlining the requirements for a cloud. Read the next in the series here.
I talk a lot about cloud computing, here and elsewhere, but it has recently become clear to me that this topic is one of those where people use the same words to mean different things. In a spirit of clarity therefore, I would like to lay out what I mean when I talk about cloud, why this is important, and what it means to cloud projects out there in the Real World.
Let us start from an objective third party’s definition. The US National Institute of Standards and Technology (NIST) came up with its own definition, which lists five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity or expansion, and measured service. It also lists three “service models” (software, platform, and infrastructure) and four “deployment models” (private, community, public, and hybrid) that together categorize ways to deliver cloud services. The definition is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion ranging from what cloud computing is to how best to use it.
Gartner has its own definition of cloud computing: a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service to external customers using Internet technologies. This is a slight revision of Gartner’s original definition published in 2008: Gartner removed “massively scalable” and replaced it with “scalable and elastic” as an indicator that the important characteristic of scale is the ability to scale up and down, not just to massive size.
Many have argued about whether these definitions are useful or not, but my own position is that the definitions are out there and are Good Enough for most purposes, and certainly enough to act as a starting point for general discussions. The next step therefore is to make these definitions a bit more concrete.
The topic of cloud is very fashionable, which means that the market is crowded with offerings which claim to be cloudy. Leaving out the blatant examples of cloudwashing (no, hosted e-mail is not a cloud service), we might categorize the offerings on the market as follows.
First, we have the converged hardware platforms. These are companies who were already in the business of selling blade server systems, and who have added features and software support to make a "cloud in a box", a rack that customers can plug into their datacenter alongside their existing infrastructure and start providing cloud services.
Next we have the hypervisor vendors. The message here is that hardware is commodity, and the intelligence and value live at the hypervisor level. Most vendors have started broadening their offering beyond just virtualization of compute resources and into network and even storage virtualization.
Then we have the disruptors. These are the big public cloud operators who offer a fully-hosted service, with no need to buy either hardware or software, just a contract and a web browser. Many people only mean the public cloud when they talk about cloud computing and see this as the future for all but the most demanding customers.
Finally we have the "other" category, for offerings which do not fit neatly into any of the others. Here we find telcos and system integrators who have begun offering their own cloud services as an extension of their existing relationships with their customers, either by operating a public cloud themselves or by overlaying their expertise on top of one of the other offerings described above.
You may have noticed that BMC's Cloud Lifecycle Management offering does not come under any of the headings mentioned. This is because we think that all of these models will be successful, and all of them will have to coexist with each other and with customers' existing IT infrastructure. Our offering therefore sits at a different level, focusing on the business users and the services they require, and translating those requests into technical actions on a variety of underlying types of infrastructure. As far as we're concerned, this might mean interacting with a fully-integrated hardware and software stack like Cisco's UCS, or it might mean creating, modifying or destroying VMware virtual servers, or it might mean spinning up a whole virtual server farm on Amazon AWS. Satisfying the user's request might also mean talking to a Unix virtualization or partitioning scheme such as AIX LPAR, or setting up a physical system without any hypervisor at all. It might even mean setting up something a bit out of the ordinary like zLinux on a mainframe.
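The "translation layer" idea can be sketched as a simple dispatch pattern: one business-level request, routed to whichever backend the environment calls for. The class and backend names here are illustrative assumptions, not BMC's actual API:

```python
# Hypothetical sketch: a uniform provisioning interface with interchangeable
# infrastructure backends, so the user-facing request stays the same whether
# the target is a hypervisor, a public cloud, or bare metal.
class Provisioner:
    def provision(self, request):
        raise NotImplementedError


class VMwareProvisioner(Provisioner):
    def provision(self, request):
        return f"vSphere VM for {request['service']}"


class AWSProvisioner(Provisioner):
    def provision(self, request):
        return f"EC2 farm for {request['service']}"


class PhysicalProvisioner(Provisioner):
    def provision(self, request):
        return f"bare-metal host for {request['service']}"


# Registry of available backends; a real system would populate this from
# discovered or configured infrastructure.
BACKENDS = {
    "vmware": VMwareProvisioner(),
    "aws": AWSProvisioner(),
    "physical": PhysicalProvisioner(),
}


def fulfill(request):
    """Map a business-level request onto the configured backend."""
    return BACKENDS[request["target"]].provision(request)


print(fulfill({"service": "web-tier", "target": "aws"}))  # EC2 farm for web-tier
```

The design point is that adding a new backend (a mainframe, a new hypervisor) means registering one more implementation, not changing the user-facing request format.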
All of these satisfy the NIST and Gartner definitions. As long as we're doing it on-demand, it's accessible over the network, we allow pooling and scaling, and we measure the resources, it doesn't matter if what we're provisioning is actually a Difference Engine. The most important criterion is whether it satisfies the user's requirement, and platform zealotry does not help with that goal.
Some people will question such a wide net, suggesting that the particular demands of the cloud computing model demand specific, dedicated hardware and/or software layers. Having been in the datacenter trenches, albeit before server virtualization really took off, I would like to explain my reasoning. There are already any number of silos in the datacenter: the Windows admins don't talk to the Linux admins, who consider the Unix admins a bunch of stick-in-the-muds. The Unix admins have their own tribes: AIX, Solaris, HP-UX and more, and the only thing they all agree on is that the mainframe guys are a bunch of dinosaurs. All of these groups have their own management tools, their own processes, their own standards, and their own reporting tools - and that's before we even get into what is going on in the network or storage teams. It's often very hard for the CIO to get a unified view of what is going on in his or her datacenter.
Given such a complex and balkanized situation, why would you add yet another silo (or walled garden, if you don't like the silo analogy)? This is why Cloud Lifecycle Management talks to pretty much everything that's in the data center (and if it doesn't today, we're working on it). If you've got a mainframe sitting in your datacenter, that's a great way to spin up resources on-demand, paying only for what you use, while keeping everything inside your own datacenter. If your apps need to run on Unix, and you have lots of investment and experience on Unix, why would you change platform? If you have different hypervisors - perhaps as a result of a merger, perhaps as a deliberate strategic choice to avoid lock-in, or just because two separate groups started virtualization projects at opposite ends of the company - the last thing you want is for your decision to keep one, both or none to be driven by limitations in what cloud management platforms can support.
Of course, this only addresses one level of the full stack. A bare VM is of little use to anyone unless it’s accessible over the network, has some storage assigned to it, is compliant with internal and external policies, is secured and hardened, has appropriate management tools installed, and carries whatever application-level components are required: database, middleware, application code.
Don't get bogged down in arguments about what the cloud is or is not. Focus on what you want to do with the cloud, then choose a platform that will let you achieve those goals.
This week, Microsoft launched System Center 2012. It was launched as a management and administrative toolset for the cloud, admittedly a Microsoft-oriented view of cloud. When VMware launched vSphere 5 and its administrative/management tools earlier this year, we predicted a few things would happen in the marketplace. Now seems an excellent time to revisit those forecasts.
Competition for the cloud infrastructure would continue to grow.
It is clear from the announcement that Microsoft is trying to up the ante against VMware and cloud infrastructure providers. It has expanded its view of cloud infrastructure to include physical resources, similar to what Cisco did in its recent CloudVerse announcements. It has also filled the previous gap in its strategy to include both public and private cloud as well as application workloads. It even included the idea of multi-hypervisor administration (Hyper-V, ESX, and Xen) in its core admin console. This is good for the customer – competition is good; it drives up value per dollar and helps reduce proprietary barriers.
The value of management would rise in the cloud marketplace.
Microsoft stated very clearly that management of the cloud was the most important driver of cloud and data center success. They absolutely have it right. The difference, of course, is what they mean by management. In this week’s announcement, it was equated to administrative functions, augmented with self service and automation. These are necessary but insufficient to get value from cloud implementations. Customers need to coherently and consistently plan, deliver and operate cloud environments and services to get business value from them.
Vendors with a vested interest in infrastructure make less than ideal management providers.
Management and infrastructure are different disciplines. Infrastructure providers focus on the efficacy, footprint, and cost of ownership of their infrastructures. In the Microsoft announcement, the focus was naturally on Windows Server, Hyper-V, and Azure environments. In this world, management acts as ‘insurance’ against risk and as a way to overcome roadblocks to proliferation of the infrastructure. But if you think of management as the controlling layer of the cloud and data center, then your cloud management solution needs to come from a specialist. This means the ability to manage full-stack services, through their whole lifecycle, via automated policy, across a realistic range of resources and cloud providers, at large scale and low cost. It’s not just about building big clouds; it’s about guaranteeing that the right services are being delivered to customers in a satisfactory way. In other words, making cloud successful in the eyes of the business.
The cloud is moving forward. The Microsoft announcements are a constructive step toward a more mature, less hype-driven cloud market. And they have helped – directly and indirectly – solidify the connection between customer value and management of the cloud.
Cloud computing, in its simplest form, provides a framework for organizing data center improvement. Cloud reaches well beyond flashy, “nice-to-have” technologies. It’s based on the logical convergence of real and mature technologies, such as consolidation, automation, and virtualization. At the same time, however, cloud represents a revolutionary advance that requires far more from IT than simply putting a request console on the front end of a virtualization engine. Cloud success requires a cultural transformation of IT.
Column Technologies has been working closely with IT organizations to help them meet their objectives with this transformation. Through this involvement, we have gleaned four lessons based on real-world experience. These lessons can help you to avoid common pitfalls as you navigate your path to the cloud.
Our experiences come primarily from working with customers who are building on-premises, private clouds. These customers view the private cloud as a pathway to a hybrid cloud that combines private and public cloud services. The enterprises we have worked with want to understand the cultural transformation required for effective cloud computing.
They also want to master cloud technologies before they offload services to public cloud providers. That way, if something goes awry, they are not wholly dependent on an outsourcer to remedy the situation. Although the objectives of companies building private clouds typically differ from those building public clouds, the lessons apply to both.
David Savino is the chief technology officer and one of the founders of Column Technologies. He presents Column’s vision of business centric IT to global customers across many vertical markets. He has been instrumental in the development of many of Column’s strategic accounts and key to the growth of IT solution partnerships. Savino speaks often at industry events, where he champions IT process improvement and technology that works. He holds advanced certifications in networking, ITIL, and PRINCE2. He is currently leading Column’s cloud computing consultancy.
Column Technologies, Inc., is a global technology company dedicated to providing operational enhancement products, services, and solutions to small, midsize, and enterprise organizations, as well as to the public and federal sector. Headquartered in the United States, Column has more than 300 employees around the world, as well as offices in Australia, Canada, India, Singapore, South Africa, and the United Kingdom. Column’s success is sustained by a collaborative business methodology approach that integrates people, process, technology, and support. The company focuses on developing long-term partnerships with its customers. Column’s goal is to deliver world-class enterprise solutions that benefit customers by improving performance, reducing operational costs, and providing automation.
On a recent business trip to Stockholm, I had the chance to visit the Vasa museum. In case you're not familiar with the story of this ship, here is the short version: in 1628, Sweden launched the biggest and most fearsome warship the world had ever seen - which promptly rolled over and sank without ever leaving Stockholm harbour, or even getting out of sight of its own dock. There it lay for 330 years, before being raised in the 1950s and painstakingly reassembled and restored.
The reason for the embarrassing catastrophe that befell the Vasa was a combination of lack of planning and conflicting inputs from the project's backers, notably King Gustavus Adolphus. At the time there was no concept of blueprints or anything similar in ship-building. The master would work from a few basic measurements, but the design never left his head. This difficulty was compounded by the king's desire to underline his and his country's status as a great power in Europe by building the most impressive ship the world had ever seen. (One wonders if the other kings teased him mercilessly about the results.) The outcome was that the Vasa's design was inherently unstable due to a combination of low draft and less ballast than was required. This characteristic led the ship to heel over to one side, at which point the original flaw was compounded by the decision to fit two rows of gun ports, the lower of which was very close to the water line even when the ship was on an even keel. What this meant was that the ship had a tendency to roll to the side until the gun ports were under water, at which point it suddenly ceased to float, a tendency it demonstrated on its abruptly curtailed maiden voyage.
What has this got to do with clouds? Simple: if a cloud project is to be successful, it needs careful planning to ensure that it delivers on its expectations and does not turn into a catastrophic embarrassment for all concerned. In addition, because cloud projects touch so many different areas of a company, there will be many inputs, sometimes conflicting with each other. The cloud project owners and architects need to be able to take these inputs on board, but deliver something workable that is not compromised, if they are to avoid their cloud project suffering the same fate as the Vasa.
For example, I was talking to a prospective customer recently, and they were agreeing with everything I said and even in some cases extending the concepts and showing some real maturity of thinking around their cloud project. However, the meeting went off the rails when it became clear that while they had taken on board all of the various technical ideas behind the cloud and were thinking some way into the future with ideas around advanced auto-scaling and self-healing infrastructures, they still did not plan on opening this cloud platform up to end users. In their vision, users would still communicate their requests to IT, who would attempt to understand the requirements and enter them into the cloud portal for automated delivery. They had not understood the colossal bottleneck that this would introduce into cloud resource delivery, with the near-certainty of delay, misunderstandings, misinterpretation, and simple data-entry errors that the additional manual step would introduce.
Another example is the company that, having settled upon its cloud roadmap ("we start with infrastructure, then we add management tools, then we include middleware, …"), put up the greatest resistance to the application team wanting to accelerate inclusion of certain application components that would make the developers' lives much easier. The architects there had forgotten about the real world of wind and waves, products and deadlines, in favor of their drawing board. The risk in those situations is that the frustrated application team goes and finds another cloud that will fulfill its requirements – one that is not under the control, or even the view, of internal IT; it's a third-party public cloud that IT has no visibility into, with all the business risks that entails (security, compliance, IPR, availability, licensing, etc.). This is what is sometimes called "shadow IT." That's not to say that public clouds are a bad thing, or that more guns on a warship are a bad idea – just that both need to be allowed for properly in the plan.
At BMC to date we have delivered dozens of cloud platforms for customers that are in production around the world, and hundreds that are at various stages of pilot, proof, or implementation. This has given us a good understanding of what works and what does not in the real world, in terms of people, process and technology, and we have codified these best practices into our products and consulting offerings. In the same way that building ships nowadays is very much a science, not an art, we would very much like to share this hard-won knowledge with you and go sailing around the world, instead of having to engage in costly and embarrassing salvage missions later. Give us a call, or check out www.bmc.com/cloud.
Q: What makes a cloud good enough? A: How serious are you?
The cloud marketplace is evolving fast. I have come to the conclusion, based on customer conversations and primary research over the last six months, that the first phase of cloud is rapidly ending. If you called phase one the "good enough cloud", you would not be far off the mark.
A good enough cloud is one that gets you through the next planning cycle and gets the business guys off IT’s case. It’s a project that gets fast approval, and it’s built from the bottom up, with the techies involved from the get-go. So far, so good.
However, it’s usually built on a single-hypervisor environment. In fact, it might simply be an extension of one virtualization implementation with a VM request mechanism. Good enough, but not great, for a number of reasons, including vendor lock-in and having to come up with a business justification for the next project. And it does nothing to keep those crazy DevOps guys from continuing to run up Amazon Web Services bills.
Second, today’s “good enough cloud” is often the extension of a pilot, and it appeals to one, maybe two, audiences in the user community. This works for a while, but the moment they want to host something other than, say, SharePoint, it means big trouble. How does our cloud deal with that Oracle DB on Solaris, anyway?
Third, if all goes well, it gets managed “just like everything else.” Except that it’s not like everything else. The cloud changes frequently: new patches every week or two, maybe new code deliveries every two or four, with vMotion turned on and your monitoring console based on 20-year-old technology. Who manages the indecipherable events that emerge? I couldn’t come up with a better recipe for outages.
A smart friend told me once that people, no matter how brilliant, fall back on old patterns and experiences when they are afraid or don’t know what to do. When the situation is overwhelming, the response is usually underwhelming.
Cloud presents an opportunity to line IT up with business users and decision makers and deliver the outcomes they need. It’s not a time to settle for good enough. Be serious, be audacious and go for better or great. Don’t be afraid.
You hear a lot about cloud every day, but you hear very little about cloud planning. Everyone is scrambling to get to a cloud, any cloud, but how do you know that when you get there it will be the right one?
Is it a case of fire, fire, aim? And if you do aim, what does that entail? What should you be thinking about in terms of cloud planning?
Faced with the mandate to pursue cloud but unsure how to begin?
Architecting a cloud is complex and involves many stakeholders
Initial requirements must drive an architecture to meet immediate needs
Clouds built today must also be designed to meet tomorrow’s business needs
What you would like to get to is a state with a clear definition of the business requirements for the cloud; cloud services designed to meet the needs of the business; a fully integrated cloud architecture and IT management processes; and a cloud architecture that leverages public cloud resources.
Join us for a discussion with Senior Solutions Manager Brian Singer on the importance of proper cloud planning.
I can’t say I read a lot of technology blogs on vacation. But when Rob Enderle posted this piece highlighting our Cloud Solutions Planning Workshop, it caught my attention. And not just because it’s about us – that’s far too self-centered.
Because it’s about our new Google overlords. Just kidding, of course – though my mind did wander briefly to the legendary Google employee benefits like organic baskets of veggies delivered to my office. Then, I recalled that I’m based in Boston, and the only organic locally-grown veggie available in the dead of winter is a potato.
Enderle summarized the situation well. Fundamentally, we all know that a stitch in time saves nine. And yet, being men (or women) of action, we prefer leaping into the task of execution. In cloud, that’s spelled disaster for many IT shops. I’m increasingly hearing tales of:
IT groups that are on their second and third attempts at cloud, as the first ones did not meet the expectations of the business
IT leaders whose careers are resting on the success or failure of their cloud initiatives
Companies being locked into technology decisions that are not scaling – because they didn’t think through the implications at the start
It’s not hard to see how these situations come to pass – and how to avoid them. The most fascinating thing about the workshop, in my mind, is the rapid results: three weeks. If you could commit three weeks to knowing you’ve made the right decision for your company – in a decision that will consume millions of dollars in resources and even more people’s time – why not? Why wouldn’t you take the time to ensure success, to learn from the experience of others, and for your own peace of mind?
Which makes me wish I had a solutions planning workshop for other parts of my life too… I might have some better veggies in the freezer now.
Cloud projects have high visibility and lofty expectations, but too many initiatives fall short due to a lack of alignment between business and IT.
Organizations that embark on cloud computing initiatives without a comprehensive cloud plan struggle to meet the demanding expectations and realize the potential value that cloud computing can deliver. Cloud initiatives without a sound planning approach generally prove that “if you build it, they will come” is not a viable business strategy. On the other hand, plans driven by business need, architected to deliver business services, and designed to drive end-user value maximize returns.
Based on hundreds of cloud consulting projects, BMC Global Services has integrated effective front-end planning into its best-practice approach to all cloud projects. Simply stated, cloud projects that effectively map business need to IT, people, and process are the most likely to meet business expectations.
The cloud plan developed for you by BMC (including a supporting roadmap) identifies the right business services; controls costs by leveraging a scalable, flexible architecture; integrates business and IT processes to improve IT efficiency; and clearly determines who, when, and how to enable and communicate to end users. It also outlines expected service levels and defines the financial models that measure value and justify future investment.
What is a Cloud Solution Planning Workshop?
The Cloud Solution Planning Workshop is a one-to-three-week activity that helps you define and design a best-in-class cloud architecture and roadmap for your organization. Utilizing an interactive workshop format, BMC consultants and architects will help you:
Understand and refine your objectives for cloud computing
Conduct a detailed review of best-in-class cloud environment models
Analyze your current IT environment — across both physical and virtual infrastructure
Detail requirements and model use cases for your cloud deployment
Define a phased solution roadmap with near-term milestones for deployment — in as little as 30 days
Conduct a detailed gap analysis between your current and desired states
Perform risk, change, and organizational readiness assessments
Because the Cloud Solution Planning Workshop focuses on your organization’s business needs, the detailed action plan delivered at the end of the workshop is strongly connected to — and driven by — your actual business requirements. At the end of the workshop, you will understand the exact level of effort, risk, and process change necessary to plan, deploy, and manage a cloud environment in your organization.
Planning and establishing a fully integrated cloud architecture (along with its related IT management processes) is just the beginning of realizing the cost and agility benefits of the cloud. The next steps, deployment and operations, are just as critical — and they go on much longer. Establishing an operational cloud involves migrating existing applications and data to the cloud without interrupting business operations. It also involves cost-effectively managing a complex and often heterogeneous infrastructure over time.
To drive the most business value for your organization, you need to understand what’s involved in the deployment and operations phases, and when and where to turn for outside help.
Setting up and configuring your cloud infrastructure is the first phase in your cloud journey — and is only the start of gaining value from the cloud.
Compare this phase to building a new office building. A quality building should not only protect its occupants from the elements and keep their property secure, but should also make them more productive, adapt to their changing needs, and introduce new technologies (such as energy-saving windows) to deliver more value over time.
The cloud equivalent of the “design-and-build” processes for an office building involves putting the physical and management infrastructure in place and creating a service catalog — a menu of services — that can be offered to cloud users. However, just as an office building is only an empty shell until tenants move in, so too is a cloud an empty container until you fill it with the applications and services that help your business reach its goals. The value you get from your cloud depends on what services are available, how well these services are managed, and how well they support your business objectives.
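The "menu of services" idea can be made concrete as a data structure. This is a hypothetical sketch; the field names and values are illustrative assumptions about what one catalog entry might capture:

```python
# A minimal sketch of one entry in a cloud service catalog: the user orders
# a complete business service, not a bare VM, and the entry records the
# service level and cost that go with it.
from dataclasses import dataclass


@dataclass
class CatalogEntry:
    name: str           # what the end user orders from the menu
    components: tuple   # the full stack delivered with the service
    sla_uptime: float   # agreed service level, in percent
    monthly_cost: int   # showback/chargeback price, in dollars


dev_lamp = CatalogEntry(
    name="Dev LAMP stack",
    components=("linux-vm", "apache", "mysql", "php", "monitoring-agent"),
    sla_uptime=99.5,
    monthly_cost=120,
)

print(dev_lamp.name)  # prints: Dev LAMP stack
```

Notice that the entry describes an outcome (a working development stack with a price and an SLA) rather than raw infrastructure, which is the distinction the building analogy is driving at.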
In the ongoing deployment and operations phases, you will need to migrate existing applications and data, as well as cost-effectively maintain and evolve your cloud as your needs change.
In an office building, facilities managers are responsible for customizing offices for various tenants, assuring basic services, and performing preventive maintenance. In the cloud world, cloud administrators support agreed-upon levels of uptime and performance, apply configuration changes and security patches, and manage access control and authentication. They also integrate cloud applications with legacy systems and dynamically reconfigure the environment as business needs change.
Just as most office buildings don’t expect individual tenants to be experts in building management, many cloud operations processes may require skills that IT may or may not have in-house. Here are some of the specific steps and the challenges and risks they involve.