There is a lot of talk about cloud enabling a "utility" model of IT, where accessing IT capacity is as easy as hitting the light switch, and users worry about how that is achieved exactly as much as they worry about whether the light will come on - i.e., not at all.
However, that metaphor can be taken too far. Last Thursday I could be found in Paris, taking part in a panel together with a competitor, two representatives from cloud-using companies, and a lawyer specialising in IT compliance issues. At one point the competitor, pushing hard for public cloud, drew an analogy: some decades ago, factory owners faced the choice of whether to keep electrical generating capacity on site or to trust the national grid to deliver electricity on demand. His point, of course, was that on-site power generation turned out to be expensive and unnecessary, and that by extension on-site IT would prove to be expensive and unnecessary too.
Unfortunately for my opponent, the person seated between us represented a company specialising in electrical systems, including back-up generators, UPS, and the like. It was therefore all too easy for me to point out that there is a thriving market in providing exactly that on-site capacity - perhaps not for the bulk of loads, but definitely to ensure that mission-critical services stay up and available even when third-party suppliers have trouble delivering electricity...
This is exactly the case with cloud. There are many services for which pushing out to the public cloud makes perfect sense, as they are not particularly demanding or differentiated. However, there is a hard core of services for which public cloud is not a good fit, and a spectrum in between. For most organisations, the right answer will be a mix of on-premises and third-party capacity, probably with several different suppliers at each end as well. This messy real-world situation is why we are committed to heterogeneity, rather than taking a simplistic metaphor a step too far.