One of the most frequently recurring conversations around cloud computing is about trying to define what cloud actually is. Some will say that cloud is just virtualisation with a few bells and whistles, while others will claim it’s just basic automated infrastructure.
Both of those positions are actually correct, except for the crucial word “just”. Both virtualisation and automation are necessary, but not sufficient, to build a cloud.
The difference between virtualisation and cloud has been explored here before, so I won’t repeat my colleagues’ points. Instead, I wanted to focus on automation, and on how putting an intelligent automation strategy in place is an important step on the path to the cloud.
Many enterprises are debating their move to the cloud. It’s no longer a matter of if, but of how and when. Is it better to try for a big bang approach - “Great news, everybody! From Monday, everything is in the cloud!”? Or is it better to go with a phased approach, trying to get each step right?
One good way to get to cloud is to put a good level of standardisation and automation in place first, before opening the doors to end users. Intelligent data center automation solutions such as BMC’s BladeLogic suite can help smooth the path to cloud. The first step is to get a handle on what is going on in the data center today. With that information, it becomes possible to define the desired state, understand the difference between the actual state and the goal state, and address that discrepancy. This will make everyone’s life much easier when the time comes to move to a true cloud computing model.
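The actual-state versus desired-state reconciliation described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not BladeLogic’s API; the configuration keys and values are invented for the example.

```python
# Minimal sketch of desired-state drift detection: compare each server's
# actual configuration against a golden baseline and report discrepancies.
# All keys and values here are hypothetical examples.

DESIRED_STATE = {
    "ntp_server": "ntp.corp.example",
    "ssh_root_login": "no",
    "patch_level": 42,
}

def find_drift(actual_state):
    """Return the settings where a server diverges from the desired state."""
    return {
        key: {"desired": desired, "actual": actual_state.get(key)}
        for key, desired in DESIRED_STATE.items()
        if actual_state.get(key) != desired
    }

server = {"ntp_server": "ntp.corp.example", "ssh_root_login": "yes", "patch_level": 40}
drift = find_drift(server)
# drift flags ssh_root_login and patch_level for remediation; ntp_server matches
```

Run across an estate, a report like this is exactly the “understand the difference between the actual state and the goal state” step, and the remediation can then be automated rather than done by hand.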
That second survey says: “Of the 26% with no plans to use public cloud services, 58% cite security as the reason. Even among those using or considering the cloud, it’s a worry: 52% name security defects as the No. 1 concern.”
The reasons for this caution are clear enough. Even in the mainstream press there is a constant drumbeat of stories about security; if it’s not the Patriot Act, it’s what Wikileaks has to say about the NSA, and in between those it’s Stuxnet or the latest credit-card data theft. Against that background, what responsible CIO would not be concerned?
On the other hand, cloud computing has all sorts of unique advantages. It takes a (very) long time to procure a new server in an existing data center, and then it can take even longer before it is actually available and delivering value to users. By then, the original requirement may have changed or evolved. The equipment that was procured is now under- or over-sized - but there is no way to resize physical infrastructure, and most vendors won’t allow customers to return purchased kit.
Private cloud approaches begin to address this problem by making it easier to share resources and achieve higher levels of utilisation on existing hardware. Public cloud models take this approach a step further by allowing users to request capacity on-demand and resize or return it at any time.
This new flexibility is very attractive - but not if it comes at the cost of security breaches or compliance violations, with the attendant financial penalties and bad press. How, then, to take advantage of the promise of cloud computing, especially public cloud, without getting bitten?
Nowadays, security is no longer all or nothing. To take an analogy I have used before: IT used to build a wall, and everything that was inside the wall was “safe”, while everything outside was untrusted and considered dangerous. The points of contact between inside and outside - what would have been the gates in actual walls, but which were known as DMZs - were carefully monitored.
The old binary model no longer applies. In today’s model, instead of the simple choice of secure or insecure, we have more of a gradient. Systems and data which require high levels of security can be placed on infrastructure which satisfies their requirements, while progressively less sensitive workloads can take advantage of infrastructure types which have placed their emphasis on cost, speed of delivery, or geographical availability.
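That gradient is what makes automated placement tractable: each workload carries a sensitivity level, and each infrastructure option advertises the highest level it is approved to host. Here is a minimal sketch of the idea; the tier names, levels, and cost ordering are invented for illustration, not a real policy engine.

```python
# Route each workload to the cheapest infrastructure option that still
# meets its security requirement. Tiers and levels are hypothetical.

# Options ordered cheapest first; each states the maximum sensitivity
# level it is approved to host (higher number = more sensitive data).
INFRA = [
    ("public_cloud", 1),
    ("private_cloud", 2),
    ("dedicated_hosts", 3),
]

def place(workload_sensitivity):
    """Pick the first (cheapest) option whose approved level suffices."""
    for name, max_level in INFRA:
        if workload_sensitivity <= max_level:
            return name
    raise ValueError("no approved infrastructure for this sensitivity level")

assert place(1) == "public_cloud"     # marketing site: cost wins
assert place(3) == "dedicated_hosts"  # most sensitive data stays on dedicated kit
```

The point of the sketch is that the decision is a pure policy lookup, which is precisely why it can be automated rather than argued out case by case.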
Trying to do this by hand in real-time is impossible, which means that the process of routing requests to the correct infrastructure options must be automated. In general, automation is the key to unlocking the value of cloud computing, but in particular it can help answer the difficult questions that security requirements can bring. To find out more about how BMC can help your organisation take advantage of cloud computing without compromising on security, please visit www.bmc.com/cloud.
Has your first attempt at building a private, public, or hybrid cloud left you disappointed and disillusioned with the promise of cloud computing?
Or, are you worried that cloud computing will not live up to the desired outcomes for your organization and those promised cloud benefits will remain out of reach?
If so, check out this short whiteboard animation on how BMC’s Cloud Lifecycle Management avoids the industry’s most common pitfalls: vendor lock-in, complex scripting, limited customization, and VM sprawl.
If there has been one overarching theme of the last few years in IT, it has been the changing relationship between enterprise IT departments and the users that they support.
Users have always wanted more IT faster, and this has always driven advances in the field. Minicomputers were the shadow IT of their day, democratising access to computing that had previously been locked up in mainframes. (By the way, did you know that the mainframe is fifty years young, and still going strong?)
Departments would purchase their own minicomputers to avoid having to share time on the big corporate machines with others. This new breed of machine introduced application compatibility for the first time. In other words, it was no longer necessary to program for a specific machine. Higher-level languages also made that task of programming much easier.
Microcomputers and personal desktop computers were the next step in that evolution. At this stage it became feasible for people to have their own personal machine and run their own tasks in their own time, and for a while IT departments lost much of their control. The arrival of computer networks swung the balance the other way, until the widespread adoption of mobile devices started the swing back again.
Seen in this way, cloud computing is just the latest move in a long dance. The tempo is increasing, however, and it becomes more critical to make the right moves.
One make-or-break move is the very first public one, when a company decides to shift at least some of its workloads to the public cloud. It’s important to remember that Amazon was not designed to be traditional IT and trying to treat it that way is a route to failure.
To get an idea of the sort of problems we want to avoid, here’s an example from a completely different domain. If you have ever furnished a house or a flat, the odds are good that you have wandered around IKEA, feeling lost and disoriented, and possibly having a furious argument with your significant other as well.
Assuming the shopping trip didn’t end in mayhem and disaster - and personally I always count it as a success when I get out of IKEA without either of those - you may well have bought an Expedit shelving unit. The things are ubiquitous, together with their cousins, the Billy bookcases. I should know, I own both.
The bad news is, IKEA is discontinuing the Expedit and replacing it with a slightly different unit, the Kallax. This has infuriated customers who liked being able to replace or extend their existing furniture with additional bits.
What has this got to do with IT? What IKEA has done is break backwards compatibility in their products: you can no longer just get “more of the same”, and unless you are furnishing an entire new home, you will probably have to deal with both the old and the new model at the same time.
Enterprise IT departments are facing the same problem with cloud computing. They want to take advantage of the fantastic capabilities of this new model, but they need to do it without breaking the things that are working for their users today. They don’t have the luxury that startups do of engineering their entire operation from the ground up for cloud. They have a history, and all sorts of things that are built on top of that history.
On the other hand, they can’t just treat a virtual server in the public cloud as being the same as the physical blade server humming away in their datacenter. For a start, much of the advantage of the public cloud is based around a fundamentally different operating model. It has been said that servers used to be pets, given individual names, pampered and hand-reared, while in the cloud we treat them like cattle, giving them numbers and putting them down as soon as it’s convenient.
The public cloud is great, but it works best for certain workloads. On the other hand, there are plenty of workloads that are still better off running on-premises, or even (gasp!) directly on physical hardware. The trick is knowing the difference, and managing your entire IT estate that way.
This is part and parcel of BMC’s New IT: make it easy for users to get what they need, when they need it. To find out more about what BMC can do to make your cloud initiative successful, please visit www.bmc.com/cloud.
Now that cloud has proved itself in the market and is a key strategic imperative for IT, the focus for savvy cloud architects is increasingly how to optimize private and public cloud use within their organization. A key part of that is taking a strategic, planned approach to migrating applications, both existing and new, to a cloud environment.
Often customers have a pool of applications running in their environment and want to move some of them to a private or public cloud offering, which leads to some important questions:
How can I determine which applications are good candidates for cloud?
How do I prioritize which applications to move to cloud first?
Alan Chhabra, AVP of Cloud and Automation Sales Specialists at BMC, talks to customers about these issues a lot and has seen firsthand what works, what doesn’t and what to watch out for when migrating applications to the cloud. In a previous life as head of Cloud Global Services at BMC, Alan helped enterprises plan, build and run hybrid cloud management platforms.
In terms of prioritizing which applications to move first, Alan advises the following:
I would not start with a very static, unique or custom environment that has fewer than five servers deployed, as there is no immediate gain there. Brownfield (existing) and greenfield (new) applications should be prioritized differently when migrating them to a cloud environment.
For brownfield applications already running, I would focus on two key areas to determine migration priority:
1. Standardization and VM Sprawl Reduction
Prioritize moving the most common applications that can be grouped into pools, to drive standardization and reduce VM sprawl. For example, if 500 slightly variant WebLogic servers can be migrated under the BMC cloud management platform, it forces standardization of that environment and drives more efficient DevOps. Once those 500 servers are standardized, it becomes easier to patch them, keep them compliant, manage change, and deploy configuration changes all at once. The ongoing administration task is easier as a result, which drives significant gains in DevOps efficiency and agility.
2. Capacity Management and Dynamic Agility
With the intelligent capacity management functionality in BMC Cloud Management, the cloud becomes your best means of optimizing capacity utilization. Therefore, you should prioritize moving application pools that have either over- or under-capacity issues. For example:
Dynamic applications, like a retail website that needs more horsepower toward the holiday season but has idle capacity during the summer, would be good candidates to move over.
Static applications that never change and have servers that are well utilized would take a lower priority.
3. Greenfield Applications
New applications that are popular, requested daily, and lend themselves well to self-service should be prioritized for the move to cloud. This will drive the best ROI for the business through shorter deployment times, higher service levels, and developers’ willingness to decommission resources after use.
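Alan’s criteria can be expressed as a simple scoring heuristic: favour large standardizable pools and workloads with volatile capacity demand, and deprioritize small static estates. The fields, weights, and sample data below are invented for illustration; they are not a BMC formula.

```python
# Hypothetical migration-priority heuristic based on the advice above:
# big standardizable pools and bursty demand rank high, tiny static
# estates rank at the bottom. All fields and weights are illustrative.
from statistics import pstdev

def migration_score(app):
    """Return a priority score for moving this application to cloud."""
    if app["servers"] < 5 and app["static"]:
        return 0.0  # "no immediate gain" case from the advice above
    pool_score = min(app["servers"] / 100, 5.0)     # reward large pools, capped
    volatility = pstdev(app["monthly_utilisation"])  # reward bursty demand
    return pool_score + volatility

apps = [
    {"name": "weblogic_pool", "servers": 500, "static": False,
     "monthly_utilisation": [30, 35, 40, 90, 85, 30]},
    {"name": "legacy_tool", "servers": 3, "static": True,
     "monthly_utilisation": [50, 50, 50, 50, 50, 50]},
]
ranked = sorted(apps, key=migration_score, reverse=True)
# the 500-server WebLogic pool ranks first; the small static tool ranks last
```

Even a crude score like this turns “which applications first?” from a debate into a ranked backlog that can be revisited as the estate changes.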
For BMC Cloud customers, we’ve introduced pre-packaged content for cloud and automation use cases - BMC ZipKits - to help drive your decisions around application migration to cloud. These ZipKits, provided free on our cloud community site and released on an ongoing basis, are out-of-the-box service blueprints for common services which drive faster time to value for cloud and automation solutions.
In an upcoming blog, we’ll explore the private vs public cloud question and how to determine where to place which apps, so stay tuned!
For more information on BMC Cloud Management, go to www.bmc.com/cloud
The news that Microsoft had appointed a CEO from its cloud division has attracted a lot of interest. For one thing, Satya Nadella is only the third CEO in the company’s 39-year history, after Bill Gates and Steve Ballmer. For another, the selection process was surprisingly drawn-out, with a good five months passing between Ballmer’s resignation and Nadella’s appointment.
Many observers were surprised that Stephen Elop did not make the cut. There had been assumptions that his tour at Nokia was the preliminary to a triumphant return to Microsoft, but in the end things did not work out that way.
The choice of Nadella over Elop is a very strong indication of the importance of cloud computing to Microsoft and by extension to the whole enterprise IT market. Choosing Elop would have been an indication that Microsoft saw the future as being about Windows Phone, Surface and Xbox. Quite aside from the fact that only the Xbox is a market success, and even that may not be profitable, this would have been a consumer focus that ceded the back-end, including the cloud, to somebody else.
Nadella, on the other hand, was there for the creation of Microsoft’s cloud services, building Office 365 (SaaS) and Azure (IaaS or PaaS, depending on whom you ask). He also embraced the heterogeneous, polyglot world that we live in, so that today Microsoft uses open-source products and frameworks such as Node.js. This would have been unthinkable to the Microsoft of the Nineties.
What Nadella’s appointment signals about Microsoft’s strategy is the coming of age of cloud computing. Of course execution is key, and we look forward to finding out what Microsoft will do next, but it can only be a good thing for the cloud computing market if Azure continues to develop and keep the other big players (Amazon and Google) honest.
Each of those Big Three public cloud platforms has its own strengths and its specific focus. There is no “one size fits all” solution in this market, so it’s good news for users of cloud computing that as big a player as Microsoft reinforces its commitment to cloud in such a public way.
There's an old joke that in China there's no such thing as Chinese food - it's just food. The main thing that will happen in 2014 is that cloud will become just computing.
Cloud has gone mainstream. Nobody, whether start-up or enterprise, can afford to ignore cloud-based delivery options. In fact, in many places it's now the default, which can lead to its own problems.
The biggest change in 2014 is the way in which IT is being turned inside out. Whereas before the rhythm of IT was set by operations teams, now the tempo comes from users, developers, and even outside customers. IT operations teams had always relied on being able to set their own agenda, making changes in their own time and drawing their own map of what is inside or outside the perimeter.
The new world of IT doesn't work like that. It's a bit like when modern cities burst their medieval walls, spreading into what had been fields under the walls. The old model of patrolling the walls, keeping the moat filled and closing the gates at night was no longer much use to defend the newly sprawling city.
New strategies were required to manage and defend this new sort of city, and new approaches are required for IT as well.
One of my first customer meetings of 2014 brought a new term: "polyglot management". This is what we used to call heterogeneous management, but I think calling it polyglot may be more descriptive. Each part of the managed infrastructure speaks its own language, and the management layer is able to speak each of those languages to communicate with the infrastructure.
That same customer meeting confirmed to me that the polyglot cloud is here to stay. The meeting was with a customer of many years' standing, a bank with a large mainframe footprint as well as distributed systems. The bank's IT team had always tried to consolidate and rationalise their infrastructure, limiting vendors and platforms, ideally to a single choice. Their initial approaches to cloud computing were based on this same model: pick one option and roll it out everywhere.
Over time and after discussions with both existing suppliers and potential new ones, the CTO realised that this approach would not work. The bank would still try to limit the number of platforms, but now they are thinking in terms of two to three core platforms, with the potential for short-term use of other platforms on a project basis.
When a team so committed to consolidation adopts the heterogeneous, polyglot vision, I think it's safe to say that it's a reality. They have come down from their walls and are moving around, talking to citizens/users and building a more flexible structure that can take them all into the future.
This is what is happening in 2014. Cloud is fading into the background because it is everywhere. It's just computing.
2013 was indeed a big year for Cloud Computing. This was really the year when cloud cemented itself as an essential part of any enterprise IT strategy. If you're an enterprise that's not doing cloud computing right now, you're definitely looking at it and planning for it. Here are some other observations of the journey that cloud computing took over 2013:
The Complexity of Cloud becomes clear:
It was a year when the complexity of cloud computing began to be fully appreciated. As the cloud market matured, more and more enterprises started to leverage a hybrid cloud strategy to take them to the next level of cloud computing, and this brought additional complexity. From private clouds to public clouds, IaaS to PaaS to SaaS and all the management components around that, there's a strong realization that doing cloud well is not easy. As we round out the year, managing the complexity of cloud is a focus for many enterprises.
Automation is Key:
With our strong heritage in BladeLogic Data Center Automation, we've been banging on about this one for quite some time but, over the last year, the market came to fully agree with us that automation across the datacenter is the cornerstone of a successful cloud strategy. A strong foundation in automation can accelerate the path to successful cloud and ensure the business gets a well managed, high performing cloud that delivers cost savings and agility over the long term. Many of our customers are leveraging the investment they’ve already made in automation as they fully embrace cloud computing - and others are getting their automation house in order so they can do so.
Cloud Experimentation -> Cloud Optimization
And, as the maturity of cloud computing grows, there is increased focus on optimization of cloud. The time for experimentation and cloud for cloud’s sake is gone. Sure, the benefits of cloud are many and great – but achieving these for the business in the long term requires holistic management of cloud computing across the entire IT infrastructure. It must be able to deliver the performance, SLAs, and availability levels that are expected of traditional IT. There's a resulting increased focus on the day-2 operational requirements of cloud computing across factors such as performance monitoring, capacity management, automated configuration and compliance, and fully integrated ITIL processes such as change management and the CMDB.
Moving up the stack to applications:
The cost and agility benefits of Infrastructure-as-a-service (IaaS) have been proven and many organizations are leveraging this as part of their cloud strategy. To really get the most benefits from cloud though, there is a push to move up the stack from infrastructure to deploying platforms & applications; delivering users full business services in the cloud. During 2013, the cloud management capabilities required to move beyond infrastructure in the cloud to successfully deploy platforms and applications in a cloud environment were a key focus.
A Tighter Link Between Cloud & DevOps:
Cloud is the natural enabling technology for the operational side of DevOps. In fact, many say DevOps was born from the cloud with a little bit of Agile thrown in as accelerant. Enterprises are realizing how Cloud+DevOps can take application release to the next level and are focusing on how to make adoption of DevOps practices a business success.
So, all in all a big year for cloud computing and we at BMC are looking forward to more developments as the market continues to evolve next year.
Herb VanHook, VP and Deputy CTO, BMC Software, spoke to CIO Custom Solutions Group recently about the future of cloud, and how IT organizations can more effectively take advantage of its benefits. To get the maximum technology and business outcomes from cloud, IT leaders may need a new mindset, one of possibility and flexibility. They’ll need to avoid compartmentalizing the necessarily complex needs of heterogeneous clouds, and must instead:
“…embrace cloud computing as a set of next-generation IT options that will enhance and transform the existing IT space.”
VanHook goes on to say that such a change in mindset may require a complete organizational shift and a lot of upfront planning. Intimidating? Maybe—but by evolving their governance models and best practices, IT leaders will find themselves in a better place not just for standing up their cloud environment, but for ensuring it thrives in the future. If IT hits the ground running, it will be in a better place to ensure operational issues don’t thwart business outcomes—and to justify cloud spending.
“To demonstrate success for cloud initiatives, organizations must show how they are better leveraging capital assets, reducing overall service costs, and enabling the velocity of business.”
Read the full text of the interview and get VanHook’s Four Keys to Cloud Success here. For more information about how BMC can help your organization prepare for the future of cloud, visit www.bmc.com/cloud.
We're thrilled to be a part of this year's Gartner Datacenter Conference 2013. Starting next week (December 9th - 12th) in lovely Las Vegas, BMC will be meeting with customers and analysts alike to discuss new advancements in cloud computing, ongoing data center consolidation and orchestration, and how to optimize IT so that it can operate as a profit center.
Come and see the BMC team in Booth #510 where we'll be doing demos of BMC Cloud Lifecycle Management, BladeLogic Server Automation, and BMC Control-M and Hadoop for managing big data.
Don't miss our presentation during the show
Challenges and Solutions for the New IT Era
Tuesday Dec 10th - 1:45pm
Location: Titian 2304
Herb VanHook, VP and Deputy CTO of Cloud and Data Center Automation will share insights on the modern data center and how to best optimize it for your business.
It promises to be an exciting and informative event. The BMC team hopes to see you there!
Are problems like inconsistent configurations across servers, networks, databases, and applications keeping you awake at night? Are you under pressure from the business to deliver a complete application delivery process, from development to production, while also reducing costs, decreasing errors, and accelerating time to value for business-critical services?
As IT professionals from around the world come to Las Vegas, you can speak to industry professionals who can show you how to:
Deliver cloud computing by provisioning fully configured cloud services, placed intelligently based on policies, to a wide range of virtual infrastructures and public cloud providers
Deliver consistent configurations across servers, networks, databases, and applications
Optimize resource allocation and workload placement across physical, virtual, and mainframe environments
Understand and manage the user experience of your web-based applications
Distinguish between general and intermittent slowdowns, with drill-down into details
If you would like to discuss these challenges, please come and have a conversation with us during the Gartner Datacenter Conference, in Exposition Hall booth #510.
Now that we are seeing more widespread enterprise adoption of cloud, there’s an increasing focus on the underlying technology that supports, monitors, orchestrates and manages the cloud. In their latest Decision Matrix report, Ovum has rated BMC a clear technology leader for selecting a cloud management solution. This is a great validation of the constant focus we've had on delivering a cloud management platform that supports holistic cloud management across existing IT infrastructure, systems and processes.
BMC was rated a clear leader in the technology features dimension for selecting a cloud management solution, scoring an average of over 8 out of 10 across key technology features including:
Security & Backup
Provisioning & Automation
Reporting & Integration
“BMC scored a maximum in the cloud management feature, which demonstrated the breadth of integration BMC has with public cloud platforms as well as private cloud technologies. Ovum considers this feature will become of increasing importance, particularly as we do not expect a single cloud standard to be agreed within the short to medium term, if ever.”
With hybrid cloud becoming more of a reality (now or in the future) for most enterprises, having robust underlying technology to enable management of multiple clouds, private or public, across diverse platforms is essential.
In the report, Ovum points out that as they evolve from virtualization to cloud computing, enterprises are being challenged by the need for “cross platform management & control and a holistic approach”. It’s an important point - let’s not forget that a large percentage of companies are still at the virtualization stage and have not yet fully adopted cloud computing. Choosing the cloud management and orchestration technology that will support this transition from virtualization to cloud, and enable them to manage cloud holistically in a hybrid cloud model, will be essential.
The BMC team is having a great time here at AWS Re:Invent, Las Vegas. The Venetian is buzzing with over 7,000 people all learning about cloud computing and how to manage it. We're meeting up with customers who are already using our cloud management platform or are using other BMC solutions and want to leverage that investment as they move to cloud computing.
One of the highlights for me so far was the keynote yesterday - AWS brought on stage customers spanning enterprises (Dow Jones) to start-ups (Atomic Fiction) who provided great examples of how leveraging public cloud is driving agility and cost savings across their businesses.
One of the things that stood out in the keynote was how cloud is helping companies be more entrepreneurial - not just start-ups but enterprises as well. Cloud is enabling larger enterprises to more easily adopt that entrepreneurial spirit of a start-up. The on demand, scalable nature of the public cloud is driving a culture of innovation - with the potential failure of trying something new no longer being such a big risk to the business. With IT compute resources being able to be spun up in minutes instead of weeks, companies large and small are able to take the risks needed for innovation much more easily.
Continuing along this theme was an interesting discussion on why companies are adopting the AWS public cloud and, according to AWS, these reasons include security (AWS continues to release improvements in this area), availability (the AWS cloud now covers 9 regions with 25 availability zones and 46 edge locations) and lower costs (driven by CAPEX conversion to OPEX, economies of scale and on demand access to IT resources). In line with the feedback that we've been getting from our own cloud customers was the assertion that increased agility is the main driver for adoption of public cloud. Enterprises can't afford to be slow and the on demand compute capacity combined with services in the AWS public cloud gives them the speed they need.
Not surprisingly, AWS believes the trend towards public cloud adoption will only continue to grow and that enterprises will eventually source most of their IT resources and capacity from the public cloud rather than on-premises. However, AWS does see the reality today being that enterprises want to easily combine their on-premises IT infrastructure with the AWS public cloud, and they need the tools to help them manage on-premises and public cloud seamlessly. BMC got a shout-out at this point, with Andy Jassy, SVP of AWS, noting how BMC Software is helping companies to manage this new hybrid IT reality. We provide one, unified management platform for enterprises to manage, govern and optimize the performance of both on-premises IT and public cloud deployments.
AWS made some new product announcements during the keynote as well, including AWS CloudTrail for governance and compliance and Amazon WorkSpaces, a virtual cloud desktop service. Both of these announcements point to the needs customers have to transfer their traditional IT management processes to their cloud management platform and to drive more value from cloud with services.
Don't forget to stop by our booth #1000 - we'd love to chat with you!
Go here to find out more about the AWS and BMC Software alliance and how we help enterprises manage their workloads in the AWS public cloud.
If you are doing anything in cloud computing these days, you are probably doing it at least in part with Amazon Web Services, known as AWS to its many friends. This is what has made re:Invent, the AWS trade show, such a critical part of the cloud computing circuit, even though this year marks only the second re:Invent.
It’s amazing to think how quickly all this has happened. AWS only launched in 2006, but in that short time the service has displaced many players who were much better established on paper. As I have had occasion to say, with cloud computing it’s no longer a question of when, but simply of how.
A huge part of the reason for the rapid success of AWS has been its ease of adoption. Users frustrated with the slow and bureaucratic delivery options offered by their internal corporate IT departments jumped at the chance to do their own IT procurement with nothing more than a credit card. Their users’ sudden migration to AWS provided a salutary wake-up call to those IT departments who had relied for too long on being the only choice users had.
However, this success brought with it some headaches. The enterprise view of AWS is coloured by fears of compliance violations and security breaches, not to mention fear of downtime. That last is unfortunately not unreasonable, given the repeated troubles of the US-East-1 AWS zone, but in fairness it has to be said that good application design, spanning multiple availability zones, can mitigate or even nullify the impact of AWS outages.
After going through the usual phases of dealing with public cloud options encroaching on “their” turf (denial, anger, bargaining, depression, and finally acceptance), corporate IT departments have by and large realised the great benefits that AWS can offer, and are focusing on how to access those features and services in a way that works for them and their constraints. For instance, AWS operates according to a Shared Responsibility Model, meaning that AWS will ensure the security and compliance of the underlying infrastructure, but customers are responsible for everything from the operating system on up. Also, AWS operates much like a bartender who is happy to continue serving customers without asking too many questions. If they are buying with the corporate credit card, oversight is for their manager, whether we are talking about drinks or cloud services.
BMC is here to help with both parts of that problem. Intelligent placement logic built right into our Service Governor can ensure that only services appropriate for AWS are routed there, while other services with more specialised requirements - whether around geographical location, performance requirements, security standards, or technical compatibility - are routed to other infrastructure options. Once deployed, ongoing compliance audits with automated remediation capabilities can ensure that those services stay compliant to whatever standards until they are decommissioned at the end of the originating business requirement.
Thanks to our deep experience with processes and procedures, BMC can also offer plug-in integration of ITIL change management processes across all cloud delivery options, including AWS. Of course this does not mean having to convene the Change Advisory Board every time someone needs an AMI to be deployed! Rather, standard changes are approved automatically, created for the audit trail and the service model, while manual approval is reserved for anything out of the ordinary.
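The split between standard and non-standard changes can be expressed as a simple gate: pre-authorised change types are approved automatically and logged for the audit trail, while everything else goes to manual review. The catalogue and change fields below are a hypothetical sketch of the pattern, not the BMC change-management API.

```python
# Sketch of standard-change auto-approval: pre-authorised change types
# skip the CAB but still land in the audit trail. The catalogue and
# fields here are invented for illustration.

STANDARD_CHANGES = {"deploy_ami", "resize_instance", "rotate_credentials"}

audit_trail = []

def submit_change(change_type, requester):
    """Return the approval route and log every request for auditing."""
    route = "auto_approved" if change_type in STANDARD_CHANGES else "manual_review"
    audit_trail.append({"type": change_type, "requester": requester, "route": route})
    return route

assert submit_change("deploy_ami", "alice") == "auto_approved"      # routine AMI deploy
assert submit_change("migrate_database", "bob") == "manual_review"  # out of the ordinary
```

The design point is that the audit trail is written in both cases; automation removes the waiting, not the record-keeping.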
If you want to find out more about how BMC can work with you to make your AWS initiatives successful, Alan Chhabra, our VP of Worldwide Cloud Sales, will be talking about "Professional Grade Cloud for your Hybrid Needs” on the 14th at 4:15pm in Lido 3103. You can also come and visit us any time at booth 1000 to talk about your specific use cases and see a live demo of the various BMC products that could help make it a winning project. After all, the cloud loves a winner.