
Doctor Cloud

Central in Chicago

Posted by Doctor Cloud Mar 29, 2011


Dear Dr. Cloud,

 

My company was on the leading edge a few years back in adopting a CMDB – and in truth, it really did help. It makes sense to keep configurations and inventories in one place. If we were a manufacturer, having this core inventory system would be critical to our business, and for IT, it’s no different. But now the new cloud team is building out a new service environment – and I worry about the centrality of our beloved CMDB. Thoughts?

 

--Central in Chicago

 

Dear Central,

 

This is a very valid concern. How central is something that is separated into chunks? Sure, you can build a CMDB for the virtualized environment, but how does that preserve a central source of truth? Then maybe you add a cloud services CMDB, and an external cloud services CMDB, and soon enough you’ll have to check 10 places for a patch or config change.

 

So the best practice is to maintain that central source of truth in a central CMDB. That way, the same service can move from physical to virtual to private cloud to public cloud – and remain consistently tracked. There really are two ways of doing this: either opt for a solution with an integrated CMDB, or federate your CMDBs so they can be treated as one centralized CMDB. Either way, cloud services and the cloud service catalog should be included.
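For the federation route, here is a minimal sketch in Python of what a federation layer might look like – all class and method names are hypothetical, not any particular product's API. The point is simply that one query answers for every environment, so the service stays consistently tracked wherever it moves.

# Minimal sketch of CMDB federation: several per-domain CMDBs are
# queried through one facade, so a service stays consistently tracked
# as it moves between environments. All names here are hypothetical.
class CMDB:
    """One per-domain configuration store, keyed by service name."""
    def __init__(self, domain, items):
        self.domain = domain      # e.g. "physical", "private-cloud"
        self.items = items        # {service_name: config_record}
    def lookup(self, service_name):
        return self.items.get(service_name)

class FederatedCMDB:
    """Treats many CMDBs as a single central source of truth."""
    def __init__(self, cmdbs):
        self.cmdbs = cmdbs
    def lookup(self, service_name):
        # One query against the federation answers for every
        # environment, so you check one place instead of ten.
        results = {}
        for cmdb in self.cmdbs:
            record = cmdb.lookup(service_name)
            if record is not None:
                results[cmdb.domain] = record
        return results

physical = CMDB("physical", {"payroll": {"host": "srv-014", "patch_level": "2011-02"}})
cloud = CMDB("private-cloud", {"payroll": {"vm": "vm-231", "patch_level": "2011-03"}})
central = FederatedCMDB([physical, cloud])
print(central.lookup("payroll"))  # every environment's record, one answer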

 

This is one of many wonderful examples of not tossing out the baby with the bathwater. We’ve learned a lot about IT management over the past 20 years… Let’s ensure the best of it makes it to the cloud.

 

Dr. Cloud answers cloudy questions on Tuesdays (or when he's late, Wednesdays). To reach the good doctor, email drcloud@bmc.com

Doctor Cloud

Tired of all the FUD!

Posted by Doctor Cloud Mar 25, 2011


Dear Dr. Cloud,

 

I have been hearing a lot about security and compliance in the cloud. All this fear-mongering seems to be going on. Why can’t everyone just relax? We’re just running some simple BI and SharePoint in our cloud – why should we be worried about compliance at all? It’s all internal workloads on an internal cloud!

 

-- Safe in Saskatchewan

 

Dear Safe in Saskatchewan,

 

Ah YES! The Cloud, easy to use, and POTENTIALLY easy to misuse. Sip this drink slowly, and I will explain.

 

Many fear the inevitable: eventually one of YOUR “solution dependent” end-user execs is going to ask YOU, “How did you help me so quickly with that agency report when it takes the data center folks AT LEAST 6 weeks to handle a new production request?” THEN you may have to pull back the curtain on YOUR cloud-based BI or SharePoint store and show him who the Wizard really is. THEN you will likely become part of the team that shares the “inconvenient truth”: the speed and elasticity you have been demonstrating came from the “Yellow Brick Road” you and Toto have been traipsing down… poppy fields notwithstanding.

 

At that point, you will have to defend your choice and maybe help him build a case for cloud-based deployments of some of his “near production” workloads. Now here comes the medicine, bitter as it is. WHEN you show him the Wizard, and he takes another look at the BI reports you are producing, you had better NOT have private, protected, or personal and confidential data (like salary, SSN, home address, DOB, UGH) sitting next to the names of the top 10 selling multi-line agents in the Midwest that you shot out of your cloud-based BI or SharePoint cannon… WITHOUT being able to demonstrate that the same “data privacy” standards used in the BIG SLOW production shop have been implemented in YOUR cloud-based and FAST solution. I know, BIG GULP… your blood pressure just went up to 150 over 100, BTW.

 

The plain truth always results in freedom. In this case, the plain truth is that IF your cloud-based SharePoint or BI application contains “real production data”, the same policies, standards, and controls that protect that data in the “BIG SHOP” need to be in place in your cloud environment FOR THE COMMON GOOD.

 

The freedom of the truth is that the “other side” of policy compliance says that IF your data is NOT “policy confined”, YOU ARE FREE! Use the cloud without ANY constraint, and feel good about it. And share the Kool-Aid.

 

As the Cloud becomes the revolutionary and transformational architecture that it has the potential to be, a few simple tips will become more important. They are:

TIP 1: Categorize services – This is the first and most important step; the freedom to choose and the freedom to use depend on it.

TIP 2: Develop, document, and enforce internal compliance policies – Makes sense: only protect that which deserves protection.

TIP 3: Extend internal compliance policies to public cloud providers – Makes MORE SENSE: make sure all the hard work in 1 and 2 is used when we go out to the public cloud with workloads that DESERVE to be compliant.

TIP 4: Provide effective supplier management – And now we have rapid and elastic deployment of governed services… COOL BEANS. (A rough sketch of how Tips 1–3 fit together follows this list.)
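To make that concrete, here is a hypothetical Python sketch of how Tips 1 through 3 might fit together: each service is categorized by the data it handles, and a simple internal policy decides where it is allowed to run. The service names, categories, and rules are illustrative only – your own policy will differ.

# Hypothetical sketch of Tips 1-3: categorize each service by the data
# it handles (Tip 1), encode the internal policy (Tip 2), and extend it
# to cloud deployment targets (Tip 3). Everything here is illustrative.
DATA_CATEGORIES = {
    "agent-sales-bi": "confidential",   # salary, SSN, home address, DOB
    "team-sharepoint": "internal",
    "public-website": "public",
}
ALLOWED_TARGETS = {
    "public": {"public-cloud", "private-cloud", "datacenter"},
    "internal": {"private-cloud", "datacenter"},
    "confidential": {"datacenter"},     # until equal controls exist in the cloud
}

def can_deploy(service, target):
    # Unknown services default to the most restrictive category.
    category = DATA_CATEGORIES.get(service, "confidential")
    return target in ALLOWED_TARGETS[category]

print(can_deploy("team-sharepoint", "private-cloud"))  # True: free to use the cloud
print(can_deploy("agent-sales-bi", "public-cloud"))    # False: policy-confined data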

 

Two of my interns have written an “Industry Insight” paper titled “Got Compliance Anxiety – Don’t Just Say ‘No’ to the Public Cloud”, and it is available at this link: https://intranet.bmc.com/wwmarketing/sites001/solutions3/White%20Papers/Got%20Compliance%20Anxiety%20Don’t%20Just%20Say%20No%20to%20the%20Public%20Cloud.pdf

 

I promise, reading that paper will help you and your followers and foes better understand the “Plain Truth”, and be better prepared for the Brave New World of Cloud Computing.

 

Dr. Cloud



Dear Dr. Cloud,

 

There has always been a lot of talk about policy-based management of IT. When I was in my last job, we thought about doing it for virtualization. Before that, autonomic computing elevated policy-based management to Trekkie-like heights of sentience. And all the while, precious little of it has actually been implemented in any datacenter – and we’re all still alive and well (though distinctly lacking in android peers). And now it’s back… even in the BMC announcements. What’s the point?

 

--Poo-Pooing Policy in Portugal

 

Dear Portugal (since Poo Poo is just not right),

 

It’s true. Systems management vendors have for years described a utopian world in which our IT environments are self-healing, self-diagnosing, and self-soothing. Then sci-fi writers write about HAL and Data and Talkie Toaster*. I’ve often wondered which was the cause of the other.

 

Nonetheless, let’s look at reality. Back in the good old days – when servers were things you could kick, connections were brightly colored, and glee came from a system that self-reported its broken fan – the idea of autonomic computing was somewhat less than compelling. Sure, you could ostensibly flip on a new chip if the old one burned out, or engage a back-up if the fan was dying. But almost all the use cases for self-healing, policy-based behavior were fundamentally tied to the physical hardware.

 

Now, a server lives in a cluster – somewhere in that cluster – or datacenter – floating virtually over the hardware. In fact, a good multi-tier cloud service can float across many bits of hardware. The good news is, you can’t stub your toe kicking it. Still better is the ability to make a bunch of different changes to the service without setting foot in a datacenter. You can move it, grow it, shrink it, turn it off, back it up, scan it, and on and on. And, you can look at its needs in the context of all the other cloud services with which it shares space, and prioritize between them. You can make better initial placement decisions – and better ongoing management choices.

 

But that’s a lot of data to collect. You’re basically describing an ongoing optimization algorithm with many, many data points – dozens or hundreds of services to consider. If you aren’t actually a sentient android, that’s a great deal of math for a single person to do on an ongoing basis. And even if you have a fairly stable cadre of services and resources, how fast can you actually respond to a change in the environment? Humans tend to be… slower than machines.

 

Hence the need for rules that automatically govern the actions taken by the systems. A policy engine is simply the big giant brain that makes sense of those rules, prioritizing some and dismissing others in order to execute an order and Make It So. Certainly, some situations will still call for human intervention, but most of the routine drudgery of ensuring things are healthy and happy can be automated.
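As a rough illustration – a hypothetical sketch in Python, not any vendor’s actual engine – a policy engine boils down to a prioritized set of rules evaluated against each service’s current state, where the first matching rule decides the action:

# Hypothetical sketch of a policy engine: prioritized rules are checked
# against a cloud service's current state; the first match decides the
# action. Thresholds and actions are illustrative only.
RULES = [
    # (priority, name, condition, action) - lower priority number wins
    (1, "critical-capacity", lambda s: s["cpu"] > 0.95, "add_capacity"),
    (2, "rebalance",         lambda s: s["cpu"] > 0.80, "move_to_less_busy_host"),
    (3, "reclaim",           lambda s: s["cpu"] < 0.10, "shrink_allocation"),
]

def decide(service_state):
    """Return the action for one service, or None if it is healthy."""
    for _, name, condition, action in sorted(RULES, key=lambda r: r[0]):
        if condition(service_state):
            return action
    return None  # healthy and happy - no intervention needed

print(decide({"cpu": 0.97}))  # add_capacity
print(decide({"cpu": 0.55}))  # None - leave it alone

Run something like that for dozens or hundreds of services every few minutes and you have the big giant brain doing the drudgery, with humans reserved for the cases no rule matches.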

 

Finally, in this, the era of cloud, the policy engine is delivering on its promise… and you can save your time for higher-order tasks.

 

* 2001: A Space Odyssey; Star Trek: The Next Generation; and for the esoterically oriented, Red Dwarf.

 

Dr. Cloud answers cloudy questions on Tuesdays (or when he's late, Wednesdays). To reach the good doctor, email drcloud@bmc.com



Dear Dr. Cloud –

 

I’ve been in IT for ages. I’ve always managed a huge backlog of to-do items. 80% of my job was filled with requests for new servers, configuring boxes, and supporting new business needs. But, the “other 80%” of my job was care and feeding: error fixes, physical maintenance, patching servers, and all that stuff that is invisible unless it’s broken. It’s a thankless job – but it’s the errors that set the pager beeping.

 

But – I wonder – what’s the role of that in cloud? I feel like it can’t possibly all go away – and yet no one is talking about it.

 

--Maintaining in Minnesota

 

Dear Maintaining,

 

I can’t tell you how glad I am to have received your note. You’re so right – the pager will not stop beeping just because the services are delivered through the cloud environment. I’ve been spending a lot of time thinking about the ongoing operations of a cloud environment these days. Some of us are calling it “Day 2” operations – the day after provisioning is done. Either way, there are things to do!

 

Firstly, all the glory of service level management must be maintained. In support of it, that implies performance management of each cloud service, as well as capacity management to address certain types of issues – like resource shortages.

 

In the cloud, service levels are still going to be critical to delivering business value – only the levers by which they can be impacted have changed. Traditional levers continue to exist, like administrator response time. However, now, it is easier to pull other levers like adding capacity, moving a workload from one location to another, reconfiguring networks, and so on – all of which can happen without interacting with the physical systems.

 

Of course, to ensure service levels are adequate, performance management tools must be in place. If you cannot measure the performance of a multi-tier cloud service, it is hard to tell whether the performance components of SLAs are being met. As workloads are increasingly sent to public cloud environments, that performance management – and therefore that service level management – must be equally proficient at identifying issues in third-party-hosted cloud services.
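As a back-of-the-envelope sketch – hypothetical tiers, measurements, and thresholds in Python, not a real monitoring API – checking a multi-tier cloud service against its SLA might look like this:

# Hypothetical sketch of a Day-2 service level check: measure each tier
# of a cloud service (wherever it is hosted, including a 3rd-party
# cloud), compare against the SLA targets, and flag the tier at fault.
SLA_MS = {"web": 200, "app": 500, "db": 100}  # per-tier response time targets

def check_service(measured_ms):
    """Return a list of (tier, measured, target) breaches."""
    breaches = []
    for tier, target in SLA_MS.items():
        measured = measured_ms.get(tier)
        if measured is None or measured > target:
            breaches.append((tier, measured, target))
    return breaches

# e.g. the app tier runs at a public provider and is responding slowly
sample = {"web": 120, "app": 730, "db": 45}
for tier, measured, target in check_service(sample):
    print(f"SLA breach in {tier} tier: {measured} ms against a {target} ms target")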

 

Beyond service levels, a great deal of traditional ongoing management remains relevant. Incident management, change management – and one of my favorites, patch management. Why? Well, depending on how you architect your cloud solution, you might be embarking on more patch management in your cloud than you did in the physical environment. Lilac Schoenbeck wrote a whole blog on this over here at BMC’s communities… check it out.

 

So, you’re right, Maintaining. It’s not over when the service gets provisioned.

 

Dr. Cloud answers cloudy questions on Tuesdays (or when he's late, Wednesdays). To reach the good doctor, email drcloud@bmc.com
