
Even though it’s only Wednesday, VMworld is starting to wind down a bit. Most of the major activity takes place Tuesday and Wednesday morning, so as a vendor, it’s a good time to finally catch our collective breath. It’s been a fantastic show for BMC, with a steady stream of demos, great conversations with customers, and a ton of excitement for what we have to show. Most of the demand has been for our flagship cloud product, Cloud Lifecycle Management. Everyone at the show either has a cloud initiative or is thinking about starting one, so it’s a good time to be a cloud vendor. Of course, picking up on this trend, many companies are cloudwashing products which are only vaguely cloud – but people seem to be pretty wise to that at this point.


I will do a full postmortem next week when I’ve had more time to digest everything, but here are some quick bullets about what I’ve observed so far:


• Enterprises and service providers are moving past the “kick-the-tires” phase with cloud and starting to realize that if they don’t have a strategy in place soon, they’re going to be at a competitive disadvantage. One piece of sage advice I heard this week was that it’s important to take the perspective that not taking action is an actual decision. Organizations that aren’t pursuing a cloud strategy should be making that decision consciously, knowing the risks of doing nothing.



• Customers are getting savvier about what it’s going to take to deploy production clouds. I’ve been very impressed with the general maturity level of the folks I’ve had conversations with. They realize that automated provisioning is not enough for cloud, and that management functions like capacity optimization and end-user experience management are essential for a successful cloud deployment. What’s more, to get cloud outside of a development environment, they know they’re going to need governance and policy in their cloud.



• IT as a service is going to dominate the conversation for years to come. It was no accident that this was the main topic of Paul Maritz’s keynote this year. I agree wholeheartedly: the most productive IT organizations will take a service-centric approach to meeting their business goals with IT. They will need a strategy to get there, and the technology to enable it. It’s an exciting time to be in IT. It’s worth pointing out that there are 45,000 people at Dreamforce this year, and that it’s the largest IT show in the world.


That’s it for now, so stay tuned for the full wrap-up coming next week. At the show? Watching from afar? Let me know what you think!


While you are in the waiting room to see the good doctor, or on a long plane flight traveling through multiple airports, take a chance to download this podcast with BMC CIO Mark Settle (and yes, this guy really was a rocket scientist) and hear more about what it takes to be cloud ready, how to establish trust in the cloud, and other topics, such as:



  • Overcoming barriers to adopting cloud in both government agencies and the private sector


  • What do you see as some of the key considerations to becoming cloud-ready?


  • What are the keys to enabling trust in the cloud?


  • Steps and conditions for a company to be truly cloud ready


  • Why the IT industry has arrived at a historic crossroads

Mark Settle is the Chief Information Officer at BMC Software. Mark joined BMC in June 2008. He has served as the CIO of four Fortune 300 companies: Corporate Express, Arrow Electronics, Visa International, and Occidental Petroleum. He is a former Air Force officer and NASA Program Scientist.


Dr. Cloud will be holding office hours this week at VMworld, at the BMC Software booth (#521):

Monday: 4:30-7:30

Tuesday: 9-5

Wednesday: 9-5

Thursday: 10-2

If you are experiencing symptoms of bad cloud, wild cloud, or just want something to cure the common cloud, stop by and meet with the experienced internists of BMC.


And just like with Dr. House, you might not get a chance to actually meet me, because just like Greg, I may be off catching my favorite soap or up to some other form of mayhem.

But if you don’t believe me on why you should stop by, take a gander at some of what the industry has been saying this past week:

  • Ben Kepes of Diversity wrote, “With impeccable timing, BMC introduces updated cloud operations solutions”: “Into this maelstrom comes BMC, who chose today to launch their new cloud operations capabilities to give visibility and control to cloud operations. BMC’s Cloud Lifecycle Management product is integrated with its Capacity Optimization and Performance Management products…to give a closed loop to monitoring, measurement and management.” Read the entire post.

  • The Financial Times reported that BMC wants to stop the clouds from falling. While businesses are looking to public or private clouds to replace traditional IT infrastructure, building a cloud computing environment is just the first step. “Managing the ongoing operations of a ‘hybrid’ cloud represents the critical other half of the story,” says BMC CTO Kia Behnia.

  • Mike Vizard of IT Business Edge had this to say: “After several high-profile outages of cloud computing services, a lot of IT organizations are naturally wondering if they have the right tools and processes in place to manage a cloud computing environment.” The article then explains that IT organizations need significantly more visibility into systems and processes in the cloud because it’s not always easy to identify the cascading effects any one event might have on delivering any number of IT services.

  • Jason Meyers of Cloud IT Pro discussed how BMC beefs up cloud management in his recent blog: “BMC Software brings performance and capacity management into the mix to help IT pros address outages and also provides a full view of their cloud-based infrastructure.”

  • Anuradha Shukla of CloudTweaks covered the announcement, BMC introduces preventative approach to cloud operations, with this quote from Cameron Haight, research vice president at Gartner, Inc.: “The goal of cost optimization and increased business agility through the cloud will only be realized with processes and tools that are designed specifically to address the new operational issues that cloud-based services present,” and he added: “IT organizations seeking to remain relevant in a new world of choices will seek to optimize their client’s satisfaction by making their transition to the cloud as transparent as possible while they (IT operations) do the heavy lifting.”


Normally with reviews like that I would add "I laughed, I cried, it was better than Cats," but since we are going to be in Vegas, perhaps a Cirque du Soleil reference would be more apropos: "How do those guys do that?" It's easy – stop by and see.




Lilac Schoenbeck

Free Virtual Cows!

Posted by Lilac Schoenbeck Aug 25, 2011

I’m the first to admit, I love a nice frivolous game. I am known for my… shall we say… mild addiction to Bejeweled. There’s something magically compelling about those shiny little orbs. And, to answer the inevitable question – no, there doesn’t appear to be much strategy to Bejeweled. It’s just fun.


In the cloud world, purveyors of online games like Zynga are often hailed as testaments to the glory of the cloud model. As a former startup person myself, I applaud the use of much-cheaper resources in place of buying boxes and AC units, and would question any startup business plan with server CapEx line items. What happens when a cloud goes down? Well, you can architect around it, if you’re clever like Netflix – or, if you’re Zynga, you can give your users a bonus free virtual cow!


Why a cow? In Farmville, a cow is a pretty valuable creature to possess, and most players of Farmville on Facebook would probably consider a couple hours of downtime a small price to pay for a digital bovine. It’s a very generous compensation. I applaud their ingenuity.


But, here’s the thing: in an enterprise IT shop, a virtual cow is not going to cut it.  If a business system goes down for a day or two, there are no virtual cows that compensate for lost productivity, frustration, and worse.


So, as we go into VMworld next week, I’d very much like to see the highlights of the show be real businesses running real production clouds. We have those customers in Harris Corp, SAP, JDA Software, QTS, Bezeq International, and many, many more I cannot name here. I’d like to see the triumphs of organizations and service providers delivering real business value – and not just provisioning quickly, but meeting SLAs over time.


It’s fun to talk about science experiment clouds and POCs in the server room. It’s fun to talk about siloed clouds that can provision 3,000 VMs in 3 minutes [I’ve never met a business leader chomping at the bit for faster copies of bare Linux]. It’s fun to describe all the free farm animals and the growing gaming base. And I’m all for having fun.


But, let’s get real this year – and talk about serious business. Real clouds that do work – for your business.  


It’s that time of year again. We finally have good weather in Boston, meaningless pre-season football games are on TV, the baseball playoff races are starting to heat up, and VMworld is just around the corner. The machine gun fire of “cloudy” marketing can be heard off in the distance, a foreboding peek at what is to come.


At BMC, while cloud defines our strategy, things feel a little bit different – more tangible. In the past, the cloud was more of an abstract construct that some early adopters were poking at, a stretch goal for many. The reality was that most people were just looking for virtualization on steroids. What we’re seeing in the market now goes far beyond that, and our customers are building real clouds that are transformational to IT.


Against that backdrop, we have VMworld 2011 – the biggest cloudy show of the year. Here are the top 5 things I’m hoping to take away:


1. The technologies that are necessary to move the cloud into production


Now that cloud is moving past the “early adopters” phase that most technology goes through, focus has turned to the technologies and processes that are needed to make cloud work in production environments. It’s not enough to simply automate the provisioning of workloads, and I have a feeling that many companies, BMC included, will be focusing on this aspect of cloud computing.



2. A clearer idea of “cloudwashed” technology versus actual cloudy technology


In the rush to capitalize on the buzzword du jour, nearly every software and hardware company in existence has cloudwashed their technology. Coming out of VMworld, I hope to gain a clearer idea of what the industry views as the requirements for cloud. Requirements have certainly evolved over the past couple of years – what passed for a cloud last year will not necessarily meet the use cases that folks have in mind these days. Based on those requirements, it should be easy to pick out cloudwashed versus actual cloudy technology.



3. The level of customer fatigue for all things cloudy


It’s both an exciting time to be in IT, and a frustrating one. Imagine you are the person responsible for steering your company’s cloud initiative. Congratulations, you have the ability to completely transform IT. Now imagine how many companies are marketing to you. You get the picture. Coming out of VMworld, I hope to get a clearer idea of what customers are tired of hearing about. It’s clear that the cloud is not just a fad – there are tangible business benefits – but that doesn’t mean customers aren’t tired of hearing about it. What are the topics that still get them excited, that they want to hear about from vendors, and what will make them fall asleep?



4. Confidentiality or availability?


Big data breaches get the headlines, but availability failures can destroy a business just as easily, as we’ve found out from recent cloud outages. So, which one has cloud architects, CIOs, and admins more worried? My feeling is that it’s availability…



5. The next big thing


Cloud is here to stay, but it’s definitely reaching its limits as a marketing term. So, what will we all be talking about next year? Cloud again, or something new?



What are your thoughts? If you are going to be at VMworld, what are your priorities? If not, what do you hope to learn from afar? Or are you just going to turn off your computer for a week?


1) While bad hair can be treated by a quick rinse/set or a trip to the stylist, a bad cloud experience is a lot harder to roll back, can cause both short-term and long-term heartburn (more so than the short-term double takes of having bad hair), and can affect your revenue streams


2) While bad hair can get you stares in the short term, having bad cloud can get you mocked and glared at by your boss in the long term (or it may just be a short-term situation, as there is no more long-term you)


3) While bad hair may have gotten you labeled as a geek in high school, having bad cloud will definitely get you kicked off the nerd table in the lunch room at work, because as we know, the geek/nerd table in the cloud is the cool table


4) While bad hair showing a touch of grey can show your age, bad cloud will really make you go grey and will not indicate worldly wisdom


5) While bad hair can be the result of a trip in the Porsche convertible, bad cloud can leave your career and the business “hanging in the breeze”


6) While having bad hair can indicate to some that “all the lights are on and no one’s home”, having bad cloud can mean that “everyone’s home and the lights are not on”


7) While having a bad hair day may repel the opposite sex, having a bad cloud day can have your badge repealed


8) While having a bad hair day may cost you $10 at Supercuts, having a bad cloud day could cost you $1M/minute


So what other analogies can you think of?

And what are we talking about? What is a bad cloud, Dr. Cloud?

Visit us at VMworld 2011, booth #1229, to find out more…


To cloud or not to cloud? Is that the right question?

It’s not really a question, but George Santayana once observed, “Those who cannot remember the past are condemned to repeat it.” Cloud represents tremendous opportunity, but it’s the latest of many paradigm shifts. Here’s a better question: “Can we learn from the past?”


Take a step back and consider what all of the previous game-changing technologies had in common. Whether you recall the client-server or n-tier architecture transition or the adoption of server virtualization – which is still largely ongoing, according to most analysts – the adoption of these new technologies and operating paradigms had many fundamental similarities.


Apologies to William Shakespeare aside, a recent web search for the phrase “to cloud or not to cloud” turns up over a dozen current articles on the latest challenges and opportunities in the cloud. To their credit, the authors all advise caution and offer many excellent recommendations on how to evaluate cloud providers, build a cloud business plan, and minimize risk to the business as you move forward with your adoption of cloud. Cloud promises new ways to control costs, offer innovative new services, and respond more quickly to new business opportunities. Those promises may even sound eerily familiar.




Each of these new technologies had to pass the economic sniff test before they met with wide adoption. Vendors offering game-changing technology offered bargain-basement prices to make the new technology broadly available as quickly as possible, but the broader market usually waited for early adopters to “skin their knees” on the technology first. Once that first wave of adopters found the worst of the bugs, enterprises looked to the vendors to offer best practices for adoption and for analysts to develop ROI models that described the lowest-risk adoption plan.


If you found this blog post – either through a web search, RSS feed, or linked site – you’re probably at least considering the ROI of cloud already, so let’s just assume for the sake of argument that the economics of cloud are somewhat compelling. It doesn’t matter whether the particular solution that entices you is SaaS-based CRM, IaaS from Amazon, or something else. If the economics make sense, you have moved past window-shopping, and now you have to consider how to reconcile yet another game-changing technology with the way you currently run your business.


Remember as you carry out your economic evaluation of cloud to consider the costs of tools and training. Tools require headcount to operate and training is an ongoing rather than a one-time expense, so if you buy tools that can manage the existing services as well as new cloud-based resources you’ll save money on the tools. If the processes that you employ to manage the current IT environment also encompass services you manage in the cloud, the implementation and management of those services will add little to the current cost of management.
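That tooling-and-training argument can be sketched as a toy cost model. Everything below – the dollar figures, headcounts, and the `annual_cost` helper – is a hypothetical illustration, not real BMC or market pricing:

```python
# Toy model: one management toolset spanning the datacenter and the cloud,
# versus separate tools (and separately trained staff) for each environment.
# Every figure here is a made-up placeholder for illustration only.

def annual_cost(license_fee, admins, admin_salary, training_per_admin):
    """Yearly cost of a toolset: licenses plus operating headcount plus ongoing training."""
    return license_fee + admins * (admin_salary + training_per_admin)

# Separate silos: each toolset needs its own licenses, operators, and training.
separate = (annual_cost(100_000, 3, 90_000, 5_000)    # legacy datacenter tools
            + annual_cost(80_000, 2, 90_000, 5_000))  # cloud-only tools

# One toolset covering both environments: shared licenses and shared skills.
unified = annual_cost(150_000, 4, 90_000, 5_000)

print(f"separate: ${separate:,}  unified: ${unified:,}  savings: ${separate - unified:,}")
```

The exact numbers are invented; the structural point is that licenses and, more importantly, training are recurring line items, so a toolset that spans both environments amortizes them across the whole estate.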




No one ever made a wholesale “light switch” transition to new technology. At best, the transition was managed along asset refresh timelines. In other words, as the old platforms were depreciated or old OS and application versions were no longer supported, new gear was installed and the new paradigm was implemented. This enabled IT to sunset old applications with the hardware on which they ran and to implement new technology as it made business sense.


Can your business make a wholesale transition to cloud? Can you use SaaS-based CRM or expense reporting? Would private or public cloud-based VM hosting work for your developers? While it’s possible that one or more of these technologies would work for your business, it’s highly unlikely that you will be able to move everything. The more reasonable strategy – as recommended by all of the “to cloud or not to cloud” authors – is to choose the services that make sense to run in the cloud and migrate them. The implication of this perfectly reasonable strategy is that you will also need to adopt new tools and processes to manage these cloud-based services.


Tools and processes that encompass cloud-based services as well as the services that remain in your traditional data center will reduce the cost of migration as well as the operational costs of services in both environments.


Skill Gap


Every new technology solution required new skills. New skills take time to acquire, whether through training or new talent acquisition. Even if you hire an expert off the street, it still takes time for that person to familiarize themselves with your business’ standard operating procedures and expectations. It also takes time to teach them how to work with all of the other legacy systems upon which they will continue to depend.


Remember that the skill gap is really two gaps. There’s the gap between existing employees’ skills and the skills they will need to work with new technology. There’s also the gap between newly-hired experts’ experience and your business’ policies and procedures, or “how things are done around here.” If your processes are well-ordered and well-documented, existing staff can be repurposed while some of the team members are trained on the new technology. Good processes also make it easy for new team members to come up to speed on how their efforts will complement the rest of the organization.


A transition to new technology – like cloud – offers the opportunity to establish a set of operational processes that govern how IT works with the existing infrastructure as well as the new technology. Look at proven process frameworks like ITIL as a way to standardize operations. Change management and software license management apply to the cloud as well as to the traditional IT environment.


What was the question again?


BMC’s vision for Business Service Management includes a single view of the physical, virtual, and cloud-based resources that IT uses to provide services to the business. When we decided to adapt to the cloud, we built BMC Cloud Lifecycle Management on the foundation of trusted cross-platform management solutions, including Remedy and BladeLogic. Change management processes that have been tried, tested, and validated in thousands of enterprises now extend seamlessly to the cloud. Provisioning processes that deploy physical and virtual machines in the datacenter now treat Amazon EC2 and other cloud providers as equally valid provisioning targets.


Thousands of IT organizations have learned the hard way that tools and processes that operate in siloes are neither flexible nor scalable. Use this opportunity to adopt processes that will enable your organization to adapt to technology changes, not just respond to them.


Cloud promises to change everything. Again. The question “To cloud or not to cloud?” may sound like a choice, but the real choice is whether or not we learn from the past.


So, you have budget and a blank piece of paper. The powers that be have read all the great marketing out there, and they want a cloud. And they want it now. Should you go download the open source cloud du jour and get rolling? Or should you take a more measured approach, and create a strategy to transition to cloud before jumping in? Both approaches have merits, but in order to reap all the benefits of cloud computing, a little planning goes a long way.


You’ve been around IT awhile and have seen some pretty major transitions. From mainframe to client-server to virtualization, you’ve seen it all. Somehow cloud feels different – like IT has one last chance to rectify all its past mistakes and finally quiet all of the critics. You’ve got to get it right, but you don’t quite know how to start.


Welcome to the world that many organizations just like yours are living in. We all want the well-known benefits of cloud computing – agility and cost savings. But, as with any IT initiative, the road to cloud is fraught with dangers. Is cloud a technology, or a strategy? How do you separate out cloudwashing from the technology you actually need to build and run a cloud? How do you future-proof your cloud so that it will be able to handle demand as it subsumes more and more of IT? And how do you deal with the internal political hurdles that come with anything that’s new and shiny? Our experience so far with cloud tells us that the only way to start is with the business problems you are trying to solve with cloud.


BMC’s CTO Kia Behnia is very clear when it comes to cloud – it is definitely a strategy, not a technology. If you treat cloud as a technology, you are in for a big disappointment. Organizations must take existing IT processes and ensure that they can be brought into the cloud environment. Cloud computing does not make disciplines such as change management or ITIL go away. Cloud could make them easier through automation. However, if left as an afterthought, as a bolt-on to a technological purchase, they will become an albatross around the neck of a cloud initiative – and could even doom it to failure.


If you begin your path to cloud by defining the business problems you are trying to solve, and explicitly setting forth goals, you will be starting down the right path. Next comes translating those business problems into the services you want to deliver. Once services are defined, then we get into the fun stuff – architecture, capacity planning, service level agreements, ITIL, and on-going operations.


Do you have to figure all this stuff out before standing up a cloud in a lab environment, or using a public cloud such as EC2? Absolutely not – you can download something and start tinkering today. But for a real cloud strategy that delivers on the promises of agility and cost savings, thoughtful planning is a must.


There has been a lot of talk lately about where things are going in regards to cloud – the idea of the consumerization of IT, what it is, and what it means for IT organizations and the folks who run them.


BMC CTO Kia Behnia sat down with Forbes host Quentin Hardy for a multi-part discussion on Beyond the Consumerization of IT and how BMC finds new customers with data analysis.


Kia shares his perspective on the future of technology over the next 3-5 years and how he believes the consumerization of IT, mobility, and apps will be priorities. This is a good start to a great week as we continue the countdown to VMworld 2011.


A number of weeks ago, VMware, the behemoth of the cloud infrastructure marketplace, adjusted their pricing model. The well-publicized change to their core pricing for vSphere infrastructure occurred on July 12 and attracted much comment from customers, commentators, partners, and competitors. In the brave new connected world we live in, the majority of the comment and analysis came from the world of social media. Commentary ranged from the superficial to the detailed. Within days, someone who was late to the news could use social media to trace it, understand it, and quickly form a fact-based reaction to it.


It’s no shock that social media and cloud are inextricably woven, and there is nothing particularly new with the blogosphere reacting to changes and announcements.


What happened next was interesting and significant – in a moment of mission critical crowdsourcing, VMware responded. Within 3 weeks of the original announcement, on August 3, VMware announced in one of its own blogs (called RethinkIT) that it was modifying some of the key terms of the new pricing model. It was a very clear blog: it contained data and examples, and it tackled the perceived issues head on.


The analysis across the web continues, but so far the crowd has responded favorably to the modifications - easier to adopt in the short term, and a more straightforward basis for longer term infrastructure planning. Of course, I assert that BMC Capacity Optimization is the best platform to do this planning with.


One of my first blogs here at BMC had to do with “the medium is the message,” based loosely on Marshall McLuhan’s groundbreaking work in the 1960s and ’70s. Paraphrasing one of his key tenets: first, people shape their communication tools; thereafter, the tools shape the people.


The fact that VMware used new media to announce pricing changes to its foundational vSphere infrastructure, and then was able to listen via the same media and react to the market within weeks, was impressive. It is one thing for an individual, a startup, or a media company to do this. It is a different thing for a large, leading infrastructure provider to do so.


So, hats off to VMware for walking the walk, and for listening and responding to the market and customers quickly. And let’s all face it, there is no turning back. The distance between the supplier and the customer is getting smaller and smaller, and cloud computing is accelerating that change. The tools we create are shaping everything we do. This experience shows us that companies that embrace the new world order will be rewarded, while laggards will be left with nothing more than angry crowds.


A wise analyst from Forrester (Frank Gillett) spoke with our team yesterday, and opined on the grammatical use of the word “Cloud.” Cloud is often used as a noun, he said, when it should be an adjective. Sadly, I’ve heard it used as a verb as well – and am sure a preposition is not far behind.


Being a grammar goose, in no small part due to the excessive sentence diagramming that dominated my high school curriculum, this sparked some thinking in my mind.


“To the Cloud” is inherently meaningless. The cloud is an abstraction layer – an ephemeral interface – concealing the humming datacenter within. The cloud itself does nothing – it is nothing – until it is asked. Some clouds are soda machines. Others are baristas. But the power of the cloud is in the delivery of the goods.


So, when we go “to the cloud,” we’re actually turning to the cloud to deliver something – something, thus, inherently cloudy. Cloudy soda – or cloudy coffee. Cloud becomes the delivery mechanism for something valuable – and thus, an adjective.


Cloudy soda machines, therefore, give you Coke, Coke Zero, Barq’s root beer, and Fanta. If you want grape soda (why? For the love of Pete, why?), the cloud can’t help you.


A cloudy barista, on the other hand, can be asked for anything. Cloudy beverages of all sorts can be made – and if you’re lucky, Italian grape sodas too. We discussed this topic in a prior post.


Either way, Cloud should be an adjective, describing the service delivered, rather than a noun. And that explains why BMC does “Cloud Management.” Not Cloud. Cloudy services get delivered as a result of Cloud management. Cloud itself is the intangible abstraction. Cloudy infrastructure supports that abstraction layer.


But then, precious few folks have ever diagrammed a 100-word sentence (it took butcher paper), so I’ll stick with critiquing dangling participles.


Part 1 – What’s So Unique About the Cloud?


When I speak with customers, I often get asked why operations management in the cloud is different than in their existing data center. While the question seems innocent, it’s actually a very important and complex one. In order to answer, we need to examine what is unique about the cloud compared to the traditional data center – which could have both physical and virtualized infrastructure. I like to think of three attributes that define cloud computing.


Elasticity – Think about pools instead of boxes

Many business buyers look to cloud in order to solve immediate business demands that their traditional data center could not address. Some of those demands could be seasonal peaks, or they could be because of exponential growth that outpaces the data center’s expansion plans. The cloud offers elastic resources that can be scaled up and down to meet these types of dynamic demands.

The economical way to do that from the cloud provider’s point of view, whether an internal or external provider, is to make sure the cloud infrastructure can support a large and varied number of resources. These resources have to be pooled to provide high utilization and high availability. While a typical enterprise data center houses thousands of servers at 20-35% utilization, a single cloud could easily have hundreds of thousands of virtual machines at up to 70-80% utilization. In the traditional data center, you can deal with the box (the physical server or virtual server). In the cloud, you have to live with pools of compute, storage, and network resources. This is a big paradigm shift between the cloud and the traditional data center that operations must deal with.
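The jump from 20-35% to 70-80% utilization translates directly into hardware. As a back-of-the-envelope sketch (the demand figures and the `hosts_needed` helper below are hypothetical, chosen only to match the utilization ranges above):

```python
import math

def hosts_needed(peak_demand_units, capacity_per_host, target_utilization):
    """Hosts required to serve a peak demand while running at a target utilization."""
    usable_per_host = capacity_per_host * target_utilization
    return math.ceil(peak_demand_units / usable_per_host)

peak = 1_000      # arbitrary units of compute demand
per_host = 10     # capacity units per physical host

siloed = hosts_needed(peak, per_host, 0.30)  # ~30% utilization, typical siloed datacenter
pooled = hosts_needed(peak, per_host, 0.75)  # ~75% utilization, pooled cloud infrastructure

print(f"siloed: {siloed} hosts, pooled: {pooled} hosts")
```

Same workload, roughly 2.5x fewer hosts – which is exactly why providers pool resources rather than dedicating boxes to applications.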


Agility – Think about services instead of applications or machines

Business buyers look to cloud to deliver business services. They not only want their service, they want it to be available immediately and to run reliably. With thousands or tens of thousands of service requests, the cloud has to ensure services are reliably provisioned, ready to be used by cloud users, and ready to be monitored and operated by the administrators. Manual processes are not feasible in a cloud environment for any of those steps. The entire on-boarding (and off-boarding) process needs to be standardized, automated, and self-service.

Service reliability is not a new requirement. It is also a very important measurement in the traditional data center. However, maintaining high levels of reliability becomes even more critical and challenging in the cloud, due to the dynamic nature of demand and the wide variety of workloads. This has a huge impact on how operations must be architected in a cloud environment. We will talk about this in more detail in part 2 of this series. The bottom line is that along with provisioning, operational tasks and processes also need to be automated.


Efficiency – Think about sharing instead of owning

While many business buyers look to cloud to give them business agility, IT buyers mainly seek a way to reduce costs. More specifically, they want to reduce their CapEx, or fixed costs. Cloud provides economies of scale that reduce unit costs. Combined with an on-demand model, this translates into lower variable costs that get passed on to cloud users. It is worth noting that economies of scale translate not only to CapEx savings through better utilization of commodity resources, but also to OpEx savings through automated management processes.


Shifting the paradigm from boxes, applications, and ownership in the traditional data center to pools, services, and sharing in the cloud requires you to adapt your current operations management to a new paradigm – cloud operations management. In part 2 of this series, we will delve deeper into the pieces you should consider when you build out your operations management in the cloud.


Today we talk with cloud thought leader Lilac Schoenbeck about the makeup of the BMC Cloud Lifecycle Management solution and hear her thoughts on:

  • Cloud trends in enterprise IT
  • Operational excellence, automation, and service delivery models in the dynamic environments that cloud architectures create
  • What service blueprints are


"The infrastructure choices that you make are business/financial decisions that have to be revisited periodically. You don’t want to lock yourself into a management layer that constrains your choices going forward," says Lilac Schoenbeck.



Our podcast team just published a new podcast focusing on cloud operations. In it, Tom Drain asks me the following questions:


  1. So, how about we start at the beginning: Where do we begin with business cloud operations?
  2. So we’re making things easier for the end user, how does performance play into this?
  3. Let’s talk about the transition to hybrid data center, how might this impact IT; the end users?
  4. What’s new in the cloud related to virtual machines, capacity and provisioning?
  5. How do cloud operations play into a properly managed cloud?
  6. Any other words of wisdom you’d like to leave with our users?


Head on over and check out what I had to say.


By Mark Settle, Chief Information Officer, BMC Software


Although cloud computing is still in a fairly early stage of adoption by IT practitioners, it has been fully embraced by IT vendors selling software, hardware, and services. In fact, “fully embraced” is a polite way of referencing the “feeding frenzy” that has occurred over the past two years as vendors of every stripe and description have hitched their value propositions to the cloud computing bandwagon.


Investments in new tools and technologies are a necessary, but not sufficient, precondition for realizing the theoretical benefits of cloud computing. Equally important, and perhaps more difficult to achieve, are the changes in operational procedures, procurement practices, and organizational structures that must accompany these investments. In principle, cloud computing provides businesses with new ways of virtualizing their business application portfolios, virtualizing and pooling their IT infrastructure assets, and gaining virtual access to highly scalable computing resources on an “as-needed” basis.


Companies will find it difficult, however, to realize the gains in business agility and cost efficiency afforded by these new capabilities unless they specifically address the following issues.

Countdown to Cloud Readiness

As a CIO, you and your organization will not be “Cloud-Ready” until:

  • You have a single sign-on architecture that can easily be replicated for both “on-premise” and “off-premise” applications.

Users of SaaS (Software as a Service) applications don’t want to manage multiple authentication procedures to gain access to the tools they need to perform their jobs. As smart phones and tablet computers become more ubiquitous in the workplace, conventional VPN solutions for enabling secure access to SaaS tools are being viewed as increasingly cumbersome and anachronistic. Users want to be directly URL-enabled to gain access to their business applications through a wide variety of devices, increasing the need for robust and extendable security architectures.

  • You establish strong service-oriented architecture (SOA) competencies in managing your existing application portfolio.

SaaS applications present a wide variety of data integration challenges. Invariably, they need to exchange data with corporate databases within the corporate firewall, other “on-premise” applications, and other SaaS products. Moving data among these different entities with the appropriate synchronization and ETL procedures can be quite challenging. It’s not advisable to be expanding your SOA and SaaS management skills at the same time. Hopefully, you have the SOA sophistication required to manage the integration of SaaS products into your pre-existing application and database ecosystem.

  • You proactively manage the online experience of your business users.

How will you ever be able to manage the performance of your SaaS providers if you don’t proactively monitor the availability, response times, and integrity of their services from all of your major operating locations? If you are not proactively monitoring the quality of the services they are delivering, you are implicitly relying on your users to detect and report performance issues. At best, that’s a fairly random and inconsistent process.


At worst, it’s a tremendous inconvenience to impose on your users and will invariably result in longer recovery times in the event of a problem or failure. If you are not already performing this type of surveillance on your existing applications, you will be challenged to develop such competencies as your SaaS portfolio expands.
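One lightweight way to start this kind of proactive surveillance is a synthetic probe that periodically times a request against each SaaS endpoint and classifies the result against a response-time budget. A minimal sketch using only the Python standard library; the service names, URLs, and budgets below are placeholders, not real endpoints:

```python
import time
import urllib.request

# Hypothetical SaaS endpoints and per-service response-time budgets (seconds).
CHECKS = [
    ("crm", "https://crm.example.com/health", 2.0),
    ("hr",  "https://hr.example.com/health",  3.0),
]

def probe(name, url, budget_s):
    """Time one request and classify it as ok / slow / error / down."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200:
                return (name, "error", elapsed)
            if elapsed > budget_s:
                return (name, "slow", elapsed)
            return (name, "ok", elapsed)
    except Exception:
        # DNS failure, connection refused, timeout: the service is unreachable.
        return (name, "down", time.monotonic() - start)

# In production this loop would run from each major operating location and
# feed an alerting system instead of printing.
for name, url, budget in CHECKS:
    print(probe(name, url, budget))
```

Running the same probes from every major operating location is what turns a single health check into the end-to-end surveillance the paragraph above describes.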

  • You fully incorporate SaaS applications in your disaster recovery (DR) plans.

DR planners are typically thrilled to learn that their company plans to expand the use of SaaS applications. They think that a “SaaS-first” strategy will reduce the scope of their responsibilities since the infrastructure supporting SaaS tools is no longer owned or operated by their organization. Although there’s a certain logic to that perspective, the truth of the matter is that SaaS applications are inextricably linked to the security applications, corporate databases, and “on-premise” applications that must have formal DR protection plans. If those plans fail in whole or in part, they may compromise access to SaaS applications or the integrity of the data being delivered by SaaS applications.

  • You have full ownership and control of the infrastructure resources supporting your business applications.

Private clouds are constructed by virtualizing all components of your operating infrastructure (i.e., servers, storage, and networks), pooling capacity, and allocating capacity in a dynamic fashion to satisfy the ever-changing needs of your business. The financial benefit of private cloud computing is the ability to optimize capacity utilization of the overall pool, instead of optimizing the utilization of individual clusters of assets. If your corporate finance group thinks they need to be consulted before you start virtualizing the servers hosting their applications or co-locating their applications on servers being used by other departments, then you’ve got some significant political issues to overcome before you will realize tangible business benefits through virtualization.

  • Your storage and network teams realize that cloud computing and server virtualization are two very different things.

Storage, network, and server engineers need to stop trying to optimize the availability, performance, utilization, and scalability of their individual technologies. Instead, they need to transform themselves into infrastructure engineers who understand how their technologies work together to deliver services to end users. With this understanding, they need to optimize the effectiveness and resiliency of the integrated technology stack that is being used to support individual business applications. Server, storage, and network technologies are converging faster than the skills, job descriptions, and organizational structures we use to manage them. If the engineering and operations teams managing these technologies are in a state of denial about the technology convergence that is happening around them, you’re not ready for the cloud!

  • You are able to standardize on a limited number of technology architectures to support the majority of your development, test, and production requirements.

Technology diversity in the data center will stymie the most well-intentioned and enthusiastic efforts to construct a private cloud. Optimizing the performance and utilization of pooled resources requires the ability to move workloads across those resources and reassign the resources when they are no longer needed. The critical efficiencies in provisioning times, availability, response times, and capacity utilization that cloud computing can deliver in principle will not be realized in practice if every application team requires a unique combination of app/web/DB server platforms, storage-tiering solutions, and network bandwidth. Standardization of software utilities, DBMSes, and patch levels above the OS layer is also required to deliver functional environments to application dev/test teams on a self-serve basis. One of the abiding IT principles that must be continually relearned by successive generations of IT practitioners is that standardization is the key to affordability, and affordability is the key to business agility. Technology standardization initiatives should precede any and all private cloud computing initiatives.

  • You are able to procure infrastructure capacity in advance of demand.

If your current procurement procedures require incremental investments in infrastructure capacity to be justified on a project-by-project basis, you will find it difficult (if not impossible) to maintain the surplus capacity in the server farms, storage pools, and network circuits that’s required to optimize the overall performance of your private cloud. CIOs need a chargeable “debit card” from their CFOs that will enable them to procure capacity in advance of demand to achieve higher levels of overall asset utilization. Surplus capacity is also needed to assure users that their future needs will not be compromised if they return assets to the global pool when no longer needed. Traditional project-based procurement policies were initially designed to deliver hardware to users on an “as-needed” basis. Ironically, they have had just the opposite effect, requiring tortuously long lead times to move from purchase order approval to hardware availability. “Debit card” procurement practices will enable the just-in-time access to internal computing resources that users have sought for a long, long time.

  • You routinely monitor and manage the utilization of existing assets.

As indicated above, the principal financial justification for adopting a cloud-computing framework is the ability to achieve a greater return on infrastructure investments through improvements in capacity utilization. Mainframe-based IT shops closely monitor the utilization of their mainframe resources because they are so expensive. Mainframe utilization levels of 90+ percent are standard in most IT shops during prime shift; many operate at even higher levels. The capacity utilization of distributed computing environments receives much less attention because incremental capacity can be procured at modest expense in response to individual user requests. Server and storage virtualization has made capacity management relevant again within distributed environments.


If you do not already have rigorous practices for monitoring, reporting, and managing the utilization of distributed computing resources, you will be poorly prepared to quantify the financial benefits achieved through cloud computing. If you don’t know the utilization levels of your internal resources, how will you decide when it’s cost effective to employ public cloud providers to satisfy spikes in demand? Inability to quantify improvements in capacity utilization and translate those improvements into financial terms will likely undermine the overall sustainability of any cloud initiative.
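The burst decision described above reduces to a simple rule once utilization data exists: send work to a public provider only when forecast demand exceeds what the internal pool can absorb at a safe ceiling. A sketch of that rule; the capacity figure and the 80% ceiling are hypothetical policy inputs, not recommendations:

```python
# Illustrative burst rule. All numbers are hypothetical.
INTERNAL_CAPACITY = 400      # compute units owned internally
UTILIZATION_CEILING = 0.80   # don't run the internal pool hotter than this

def units_to_burst(forecast_demand):
    """Compute units that must come from a public provider, if any."""
    internal_headroom = INTERNAL_CAPACITY * UTILIZATION_CEILING
    return max(0, forecast_demand - internal_headroom)

for demand in (250, 320, 500):  # three demand scenarios
    burst = units_to_burst(demand)
    if burst:
        print(f"demand {demand}: burst {burst:.0f} units to public cloud")
    else:
        print(f"demand {demand}: internal pool suffices "
              f"({demand / INTERNAL_CAPACITY:.0%} of owned capacity)")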

Ready for Liftoff?

The IT industry has arrived at a historic crossroads. The Y2K experiences that occurred more than a decade ago taught us how to virtualize our workforce, enlisting the aid of IT professionals from around the world in remediating Y2K issues embedded in legacy business applications. SaaS tools, which once were thought to be niche applications solely supporting sales force automation, have become ubiquitous. SaaS applications can now support a wide variety of front office, middle office, and back office processes. Annual revenues of the bellwether of the SaaS industry have exceeded $1 billion, a meteoric accomplishment for any startup software company over a ten-year period. Most recently, Amazon has emerged as the industry pioneer in furnishing virtual access to scalable computing resources on demand. Amazon’s success has given rise to a variety of competing public cloud providers.


In a largely unplanned and unanticipated fashion, we have reached a seminal convergence of trends in which our professional workforces, application portfolios, and underlying infrastructures can all be virtualized to varying degrees. Every commercial company is seeking to leverage these trends to reduce cost and increase agility. Those that confront and overcome the challenges outlined here will be ready for liftoff to a new world in which revolutionary responses to competitive threats and opportunities are enabled by IT.



About the Author

Mark Settle, chief information officer for BMC Software, joined the company in 2008. He has served as the CIO of four Fortune 300 companies: Corporate Express, Arrow Electronics, Visa International, and Occidental Petroleum. Settle has worked in a variety of industries, including consumer products, high-tech distribution, financial services, and oil and gas. He received his bachelor’s and master’s degrees from MIT and a PhD from Brown University. He is also a former Air Force officer and NASA Program Scientist.


Today, we at BMC launched cloud operations capabilities to safeguard cloud users. As part of that, we released our latest version of BMC Proactive Performance Management. There are many new things in this release, but the one I am most excited about is its focus on cloud operations.

Behavior learning in the cloud

Cloud, compared to traditional IT infrastructure, has a uniqueness that demands a different approach to day-to-day operations. Its mixed workloads, elastic nature, and service-centric principles mean that no static, reactive, disparate operations solution can generate enough power to propel the new cloud engine. That's why I am excited to see us focusing on tuning the analytic engine to better understand this new behavior of cloud services and resources. For example, the engine can now support hundreds of VMs provisioned per hour and readjust its behavior learning within minutes. This release is just the first step in that direction. The team at BMC worked hard to learn from customers and the market, and assessed our current knowledge based on many years of successfully applying the behavior learning engine in virtualized environments. There is much we can leverage and some we can't. But that's the point: the cloud is different from anything we have seen so far, virtualization included. We have learned from customers that behavior learning delivers ever greater value in a cloud environment, where dynamism rules everything from the IT processes to the services to the infrastructure resources.

Get your value fast

The cloud market is developing rapidly. Our customers who want to compete in this space need to make their offerings available fast, so making sure the operations solution can be up and running quickly was one of the focuses of this release. There is now a guided wizard that lets you plan, install, and configure all the necessary cloud management pieces, including lifecycle management and proactive performance management. One of our customers used to take two weeks to get the whole cloud management solution up and running in a small environment. Now they have done it in just two days, with an even more robust solution.

Scalability for cloud deployment

In this release, we also address the scalability difference between a typical service provider cloud environment and an enterprise data center. The solution can now serve performance data from 50,000 cloud devices to hundreds of concurrent users. This means that not only can administrators see and act upon the data, but cloud end users can get that data too, much like what you get from Amazon CloudWatch.

Public cloud monitoring for enterprise

Speaking of Amazon CloudWatch, many of our customers who have deployed services into EC2 monitor that data constantly. What they couldn't do was translate the data into actionable insight automatically. Now we are providing out-of-the-box capabilities (a "knowledge module," if you are familiar with our product) that let enterprises pull performance data from CloudWatch for their instances and feed it into our behavior learning engine. You can even build a service that spans both your private cloud and Amazon EC2, and use our solution to measure its availability, impact on your business, and workload by leveraging both our remote and in-guest monitoring capabilities on those public cloud instances. In addition, we also let you monitor Microsoft Azure remotely if you are building applications in its PaaS environment.


Of course, this is just a subset of the new features in this release. We will start sharing more information in the coming weeks. I will be at VMworld next month; you can meet us and see a demo at the BMC booth. I look forward to meeting you there and chatting more about how cloud operations will evolve.


In IT, the more things change, the more they stay the same. In a world with physical servers, IT was responsible for ensuring SLAs, discovering and fixing issues, and ensuring that budgetary goals were being met.


Guess what? In a world where services are running on cloud infrastructure, whether it be public, private, or hybrid cloud, IT is still responsible for all of these things. Someone, somewhere, is still going to be filing helpdesk tickets, and those tickets are still going to be routed to IT. And while some things are easier in the cloud, the fact that it introduces a new layer of abstraction introduces a host of new challenges. Not to mention that existing physical infrastructure isn’t exactly being tossed out onto the street, it still needs to be managed alongside new cloud infrastructure.


For cloud operations, it’s critical to get this right from the start or risk a failed cloud initiative. Organizations must put a premium on managing the end-user experience, managing performance, and managing capacity in order to get the maximum benefit from their cloud environments. Without strong operational management of clouds, we risk not only a failed cloud implementation, but a massive waste of money alongside it.


Starting with The End-User


This all begs the question: where to begin with cloud operations? Business clouds are about delivering services to an end-user. If nothing else, IT has to ensure that the end-user is receiving an experience consistent with their expectations. The end-user doesn’t necessarily know or care where a service is running (though they might have some control over it), whether it’s in a private cloud, hosted in a data center, or running in a hosted or public cloud. IT must be able to start from an individual end-user and not only proactively discover problems, but trace them back to the source.


To do that, IT must be able to peel back the layers of abstraction like an onion. A problem could occur in a physical network connection at the ISP level, an internal network, or a virtual network being managed by a virtual switch. CPU utilization could be running rampant and slowing down a service on a virtual machine, or on the physical machine that the service is actually running on. Each introduces an entirely different set of remediation procedures. End-user experience management has to happen at the service or application level, but it cannot stop there if IT wants to be able to solve problems as they crop up. But what happens if there is a problem that hasn’t yet impacted the end-user’s experience?


Does Performance Matter? Yes!


Though we start by focusing on end-user experience, IT still must pay attention to, and manage performance in the cloud. Consider that in an old, idealized physical world, if a problem caused a service to use 80% of the CPU on a server, it’s not all that important to debug. There is probably only one service running on the server, and save for using a little bit more power, it’s not going to bother anyone.


In a virtual world, it becomes a bit more important to remediate the problem because there is money to be saved by virtualizing other workloads on that server. However, in a cloud-based world, where usage is metered and pay-as-you-go, problems such as this quickly become a huge waste of money. And that’s true for both private and public clouds. If we care about monitoring performance, then it follows that we care about performance metrics. It’s crucial for cloud operations to establish baselines on the metrics that cost money and impact the end-user – CPU usage, RAM usage, response times, and latency – and then to monitor performance against those baselines. Without performance monitoring, you are flying blind in the cloud. You can easily wipe out all of the cost savings the cloud could provide, or end up in a situation where the cloud is costing more than traditional infrastructure.
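A common way to establish the baselines described above is to learn a normal band from recent samples, for instance the mean plus a few standard deviations, and flag excursions. A minimal sketch; the CPU samples are fabricated, and real behavior-learning engines use adaptive rather than static bands:

```python
import statistics

# Fabricated CPU utilization samples (%) from a known-healthy period.
history = [22, 25, 19, 24, 27, 21, 23, 26, 20, 24]

mean = statistics.mean(history)
stdev = statistics.pstdev(history)
upper = mean + 3 * stdev  # simple static baseline; real engines adapt over time

def breaches_baseline(sample):
    """True when a new sample exceeds the learned normal band."""
    return sample > upper

print(f"baseline upper bound: {upper:.1f}%")
for sample in (28, 80):
    print(sample, "anomaly" if breaches_baseline(sample) else "normal")
```

The same pattern applies to any of the metrics named above (RAM usage, response times, latency): learn the band per metric, then alert on breaches instead of on fixed thresholds.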


Beyond Performance - Capacity


Virtualization is the precursor to cloud. It’s a key enabling technology that has allowed organizations to dramatically increase hardware utilization. Managing capacity is important to unlocking the potential of virtualization. We need to know ahead of time whether service x and service y can run on the same machine. If service y starts taking too many resources, we need to know in order to migrate it before it slows down service x.


Cloud takes capacity management a step beyond virtualization. It’s not only important; cloud cannot exist without it – at least not the kind of cloud that will provide lasting benefits to your business. In a cloud environment, deployment of services and the underlying virtual machines is automated. Provisioning happens without human intervention. It follows, then, that capacity management must be automated as well. The provisioning engine has to know which physical servers have the resources to run a service. It also has to understand when the underlying infrastructure is nearing capacity limits, and provide the ability to rebalance infrastructure when capacity constraints are hit. With the cloud being an abstracted pool of physical resources, it won’t be immediately clear when those resources are close to being used up unless there is a capacity management solution in place.
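At its simplest, the placement logic a provisioning engine needs is a capacity-aware first-fit search over the host pool, with an empty result signaling that rebalancing or new capacity is required. A sketch under that assumption; the host inventory is hypothetical:

```python
# Hypothetical host inventory: free CPU cores and free RAM (GB) per host.
hosts = {
    "host-a": {"cpu": 2, "ram": 8},
    "host-b": {"cpu": 8, "ram": 32},
    "host-c": {"cpu": 16, "ram": 64},
}

def place_vm(cpu, ram):
    """First-fit placement: return the first host with enough free capacity
    and reserve the resources there. Returns None when no host fits, which
    is the signal to rebalance or add capacity."""
    for name, free in hosts.items():
        if free["cpu"] >= cpu and free["ram"] >= ram:
            free["cpu"] -= cpu
            free["ram"] -= ram
            return name
    return None

print(place_vm(4, 16))    # lands on host-b (host-a is too small)
print(place_vm(8, 16))    # host-b is now too small, lands on host-c
print(place_vm(64, 256))  # no host fits: capacity limit reached -> None
```

Production engines layer anti-affinity rules, reservations, and rebalancing on top of this, but the core question is the same: which host, if any, still has room?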


Additionally, capacity management can make or break your cloud when it comes to cost efficiency. If you have resources provisioned for services that aren’t being used, that’s going to be costing you serious money. And with the new resource-based pricing models for virtualization, you must optimize physical server and virtual configuration very carefully. It’s crucial to only provision VMs as they are used, AND to de-provision them when they are no longer needed. Your cloud management solution should be able to handle that level of automation without requiring manual intervention. This is much harder than it sounds, and requires a level of workflow maturity that many solutions don’t yet have.
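The de-provisioning side of that automation can be sketched as a periodic sweep that flags instances idle past a policy cutoff. The inventory, the idle-hours figures, and the 72-hour policy below are all fabricated for illustration:

```python
# Fabricated inventory: VM name -> hours since last meaningful activity.
idle_hours = {"web-01": 0.5, "build-07": 96, "demo-12": 400, "db-02": 2}

IDLE_CUTOFF_HOURS = 72  # hypothetical policy: reclaim after 3 idle days

def reclaim_candidates(inventory, cutoff=IDLE_CUTOFF_HOURS):
    """Return VMs idle longer than the cutoff, sorted for stable reporting.
    A real workflow would notify owners before de-provisioning."""
    return sorted(vm for vm, idle in inventory.items() if idle > cutoff)

print(reclaim_candidates(idle_hours))  # ['build-07', 'demo-12']
```

The hard part in practice is not this loop but defining "idle" reliably and wiring the approval workflow around it, which is where the workflow maturity mentioned above comes in.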


The More Things Change…


Yes, the cloud is here to stay. The benefits of a properly managed cloud are impossible to ignore. The key phrase there is “properly managed.” In order to properly manage a cloud, there must be an emphasis on operations in the domains of end-user experience management, performance management, and capacity management. Without these, any cloud strategy is likely doomed to fail. In order for cloud technology to move from a lab to a production environment, these things must be in place. The cloud is a chance for IT to reboot itself, to prove that it is a key enabler of the business and that it can help the business achieve its goals. IT can achieve these objectives through the cloud, but only with a strong commitment to cloud operations.


I’m neurotic about my to-do list. If I wake up in the middle of the night having forgotten to .. say… plug in my phone.. I tend to repeat the words “Plug in Phone” in my head until I wake up and do it. It just lingers in my head. Then, added to it is.. Take Out the Trash…. Reload The Parking Quarters in the Car… Buy TP… It’s a series of signals or indicators that measure the status of items in my life, setting trigger alarms that long ago were meant to be addressed by clever RFID technology.


In life, as in datacenters and clouds, there are a series of ongoing tasks, often prompted by depleted capacity or excess utilization, that must be tended to on a regular basis. Sensors, usually in the form of eyeballs, tell us when we need to take action; a series of alerting mechanisms go off in our brains to ensure we don’t run out of toothpaste. If you think of an average household, there are hundreds of items, from dryer sheets to shoe polish, which are in varying states of depletion at all times. Endowed with the power of vision and the urgency of paper goods adoration, we somehow manage to get through this capacity management challenge. If we fail, we find ourselves with poor performance – our hair remains unwashed, pending shampoo. Or we turn to the paper towels in place of the Kleenex.


Datacenters – and indeed – clouds are the same. Only, in many ways, worse. Cloud services, unlike my bottle of hand soap, can’t easily be seen. If one is running out of resources, I can’t just eyeball it. And running a cloud means dozens or even hundreds of supplies, being depleted by an ever-shifting group of house guests. And, because these house guests are customers, they have expectations. Poor performance is unacceptable. They may even be paying for the service.


Today, BMC is announcing our Cloud Operations solutions. They are designed to be the proverbial RFID technology of your cloud: sensors to identify capacity challenges and potential performance issues, end to end, from the underlying infrastructure to the end user’s experience. Why? Because our customers are seeing that the cloud is not just provisioning. It’s about delivering consistent service, day in, day out: keeping the cloud services stocked, ensuring performance doesn’t suffer. Just as running a household is an ongoing effort, running a cloud requires success on day 2, day 3.. until that service is decommissioned.


Check out our press release on Avoiding Cloud Failure. Our solutions. And ask yourself… are you in danger of running out of toothpaste? How will your users feel tomorrow morning?


The US Bureau of Labor Statistics (BLS) releases new data on jobs once every month. This data is incredibly valuable, as it has the power to single-handedly move entire markets when it’s released. Not surprisingly, there’s a rush to get at the data the moment it comes out. One such release happened this morning, and against a backdrop of investor fear and uncertainty, the website that hosts the data crashed under the deluge of visitors. It was down for a total of 52 minutes, according to the Wall Street Journal’s live bloggers.


This is a great example of why the government needs to move to cloud, and quickly. It’s simply not economical for the BLS to keep around the capacity needed to service the once-a-month rush, when for the other 29 or 30 days a month all they see is the occasional visitor. I’m sure they know what their load will look like, can predict maximum load, and know how much capacity they need. If they were using a government cloud, they could reserve that capacity ahead of time, have stand-by capacity, and quickly add capacity if needed. The best part is they would only pay for what they used. The scale of the cloud can easily handle these demands, as that scale is shared across many agencies.


These are the sorts of business use cases that drive the need for cloud. No matter if your business is manufacturing, pharmaceuticals, or banking, there will be spikes in demand that are often predictable. If you plan for these spikes, the cloud can lead to tremendous cost savings across the business. If you don’t plan, or plan poorly, it can lead to costly outages or force you to use higher-cost cloud capacity.


A key part of managing cost and capacity is the ability to tie business metrics to capacity. For example, the BLS knows roughly how many visitors it’s going to get. It doesn’t necessarily know how demanding each visitor will be on its web servers and databases. Using a capacity planning tool that can translate business metrics into actual capacity requirements, they can plan for different scenarios based on the underlying business drivers. They might even be able to predict how much capacity they need based on the investment climate at the time (how interested the market is in their data).
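Translating a business driver into an infrastructure footprint can start as back-of-the-envelope arithmetic before a planning tool refines it. A minimal sketch; every figure below (visitor forecast, per-visitor load, per-server throughput) is invented purely to show the shape of the calculation:

```python
import math

# Hypothetical capacity model for a report-release traffic spike.
EXPECTED_VISITORS_PER_MIN = 120_000   # business metric: forecast peak demand
REQUESTS_PER_VISITOR = 8              # page + assets + data downloads
REQS_PER_SERVER_PER_MIN = 60_000      # measured throughput of one web server
DB_QUERIES_PER_VISITOR = 3
QUERIES_PER_DB_NODE_PER_MIN = 150_000

# Convert the business metric into infrastructure units, rounding up
# because you cannot provision a fraction of a server.
web_servers = math.ceil(EXPECTED_VISITORS_PER_MIN * REQUESTS_PER_VISITOR
                        / REQS_PER_SERVER_PER_MIN)
db_nodes = math.ceil(EXPECTED_VISITORS_PER_MIN * DB_QUERIES_PER_VISITOR
                     / QUERIES_PER_DB_NODE_PER_MIN)

print(f"peak needs ~{web_servers} web servers and ~{db_nodes} DB nodes")
```

Swapping in different visitor forecasts gives the scenario planning described above: the business driver changes, and the capacity requirement falls out mechanically.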


The cloud provides business and government a way to economically handle peaks and valleys in demand. Good capacity planning is essential to actually taking advantage of what the cloud has to offer. And your business can’t afford to ignore the benefits.


Cloud computing is a major departure from the traditional IT service delivery model.


Services are no longer tied to dedicated hardware silos. Instead, virtualized resources — servers, network devices, and storage — are abstracted from the hardware. They move freely about the infrastructure, delivering services when and where they are needed. But this doesn’t mean that you have to completely reinvent IT to implement cloud computing.


The Impact of Cloud Computing

Many IT organizations have been driven by three major goals: achieve extreme agility in responding to the demands of the business, drive down service delivery costs, and minimize risk. Cloud computing doesn’t change these objectives. It simply accelerates progress toward them.


Moreover, the fundamental technologies that enable cloud computing — virtualization, automation, and user portals — have been around for many years. In fact, you’re probably already using them in your data center.  Moving to cloud computing involves leveraging these technologies and combining them in new and innovative ways.


Cloud computing, however, does have a dramatic impact on IT organizations. To make a successful move to the cloud, IT must undergo four major changes: infrastructure transformation, service transformation, process and organizational transformation, and cultural transformation. Each one has important implications for service management in the cloud.


To get @DrCloudBMC’s home-spun recipe for these areas, ya gotta read the rest of this great article by BMC thought leaders Lilac Schoenbeck and BMC VP of Cloud Strategy Herb VanHook.

Brian Singer

Cloud Ops. Why We Care.

Posted by Brian Singer Aug 1, 2011

As many blog posts do, this one starts with an anecdote. After a recent conference in Copenhagen, Ezra boarded the subway to get to his hotel. He saw a schedule of upcoming trains — tracked by the minute — posted on the platform. Shortly after he boarded his train, however, it stopped in a tunnel.

Another train was passing in the other direction. The limited resource of train track was at capacity, at midnight on a Tuesday. The announcer said, “The other train should be passing through sometime soon.” Ezra puzzled over a train system that predicted the exact position of some — but not all — of its trains.


Ezra thought about a cloud he had recently designed, and how well it managed capacity in these situations. It was similar to a train system. Uneven workloads and peaks in utilization typically followed a rather predictable pattern — every holiday season or every month’s end. Predictable bursts, like crossing trains, were accounted for. Only those anomalies that defied forecasting created an exception — such as a 20-inning baseball game, or the descent of 10,000 conference goers onto a northern European train system.


Cloud Operations is about running and optimizing your hybrid cloud to deliver superior service: understanding the current state of your resources and how they will need to change to meet future demand. While often perceived to be temporary, most cloud services live for weeks, months, or even years — and will continue to grow as shared resources increasingly support traditional IT workloads. When you add up all the operating systems, middleware, tooling, applications, and now hypervisors in the average IT environment, this management task can overwhelm many organizations with ongoing repair and maintenance.


Once a cloud has been designed and is up and running, it is important to optimize it to deliver quality service to each of its users. To do so, IT should proactively monitor for performance issues and maintain optimal use of capacity resources. Three key components are required:


1. Service Level Enforcement: The way to make sure a cloud provider is addressing a customer’s business requirements and risks is through an actively managed service level agreement (SLA).


2. Proactive Service Performance Management: IT can proactively manage performance across public and private cloud infrastructures through predictive analytics for performance monitoring and identification of performance issues.


3. Continuous Resource Optimization: To make the best use of cloud resources, IT must actively manage the capacity of the broad cloud infrastructure and also right-size individual cloud services on an ongoing basis.
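As a rough illustration of the third component, continuous right-sizing boils down to periodically comparing each service’s observed utilization against target bands and flagging outliers. This is a hypothetical sketch — the service names, utilization figures, and thresholds are all invented for illustration:

```python
def rightsize(services, low=0.2, high=0.8):
    """Recommend a scaling action per service from average CPU utilization.

    services: mapping of service name -> average utilization (0.0 to 1.0).
    low/high: illustrative target band; real policies would weigh memory,
    I/O, and SLA terms as well.
    """
    recommendations = {}
    for name, utilization in services.items():
        if utilization > high:
            recommendations[name] = "scale up"    # risk of an SLA breach
        elif utilization < low:
            recommendations[name] = "scale down"  # reclaim idle capacity
        else:
            recommendations[name] = "ok"          # within the target band
    return recommendations

recs = rightsize({"web-tier": 0.91, "batch": 0.07, "db": 0.55})
```

Run continuously rather than as a one-off audit, a check like this is what keeps a cloud from quietly accumulating the over-provisioned silos it was meant to replace.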


We will be talking more about these and other facets of cloud ops in the coming weeks. It's a topic that gets us very excited, as it's so critical to creating clouds that work for your business. Stay tuned!
