Public cloud is a good thing only when an appropriate strategy is applied to leverage it to the benefit of the business. While it can be less expensive for some workloads, it can be more expensive for others — without a thoughtful, strategic approach, it can destroy value rather than create it. In other words, “Public cloud doesn’t fix stupid.”
That’s the conclusion drawn by Jason Anderson, chief architect at Datalink, a cloud services provider in Eden Prairie, Minnesota, based on the findings of a recent IT optimization survey of U.S. IT executives that was commissioned by Datalink. In a recent interview, Anderson discussed the survey, and what Datalink gleaned from it, at some length. I asked him if the survey results prompted Datalink to change anything it had been doing in order to better serve its customers. He said the company has, in fact, changed its focus:
What we had been talking to customers about for quite a while was that they need to get a handle on their cloud strategy, and make sure that if you’re an IT executive, you want to be at the center of the cloud conversation, and be a broker of IT services. That had been our message. It’s not that we think that that is wrong, or was wrong. But what we learned from the survey was that a lot of IT executives get that message already, so we really don’t have to pound on that. Instead, we need to get them better armed with the how to do that. So we shifted our focus to really saying, “OK, the how is to focus on your workloads, and embrace the fact that you’re going to have multiple platforms.” What was clarified for us in the survey was that we really need to take a very workload-focused view of the world. Know going into it that, except for some very small organizations, or ones that are so specialized they only have a handful of applications, they’re going to have multiple platforms, and that both on-prem[ises] and public cloud are going to be a part of the mix.
Anderson went on to address what I found to be a particularly interesting survey finding — that 40 percent of the respondents had pulled at least one workload back from the public cloud:
In follow-up conversations with our customers, they told us they’re not taking everything out. But they’re realizing that you can’t just move applications to a public cloud and expect to have a successful deployment. You need to re-platform. Architecture still matters, and strategy is critical when you’re putting a workload into public cloud — you have to do it with purpose. A number of workloads have ended up on public cloud without architecture, without strategy. Those are the ones that have had to be pulled back, and a lot of it has come down to cost as one major concern, the other being risk — things like regulatory security. It’s those types of concerns where somebody realized, “Holy crap, we’ve got a really critical workload that has extremely valuable company data running on public cloud. We didn’t do that on purpose — we need to take a timeout with this, pull it back, and then figure out what our strategy should be.” So we don’t see those customers abandoning public cloud — they’re realizing that mistakes have been made in workloads that they’ve put out there that they then need to pull back.
That pull-back can be painful, Anderson said, but necessary:
Making that choice is a non-trivial decision, and it’s also a non-trivial cost. It’s a lot more expensive to pull it out than it was to put it there in the first place. The businesses and their IT organizations have realized that public cloud is not a panacea. It is not right for every workload, and many of them have discovered that the hard way. Maybe at some point in the future those workloads will end up back in public cloud, once they’ve been appropriately re-platformed to take advantage of public cloud by maintaining the appropriate services, availability and security that the applications require.
I also found it interesting that the top reason respondents gave for abandoning public cloud deployments was security. I mentioned to Anderson that I could see where security concerns would be a top reason for not deploying to the public cloud in the first place. But abandoning the public cloud due to security concerns implies that actual security issues arose, as opposed to it just being a matter of perception. I asked Anderson for his thoughts on that, and he said that what they found in talking to some of their customers who responded to the survey was that it wasn’t necessarily the case that they encountered security issues while they were in the public cloud:
It was more a matter of applications having ended up in public cloud without a strategy, without it being a conscious decision by the entire organization. It may have been through shadow IT, through lines of business that don’t have the level of experience and expertise that IT has around security and regulatory concerns. So applications have been discovered in public cloud that probably never should have been there in the first place from a security perspective. The other primary issue on the security side has been with customers who realized that they have not properly architected the environment.
According to Anderson, it all boils down to recognizing that there’s nothing automatic about security:
You need to understand how you provide an integrated security architecture as part of your public cloud, whether that’s [Amazon Web Services] or Google or [Microsoft] Azure. Those platforms provide excellent tools to assist you with creating secure environments, but it doesn’t happen automatically. You don’t automatically get secure authentication to access your resources without setting up the appropriate access controls. You don’t automatically get border security, you don’t automatically get intrusion detection and prevention capabilities. All of that requires an appropriate architecture, whether it is on-premises or in public cloud. So we continue to work with our customers on the fact that architecture matters.
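Anderson’s point that security “doesn’t happen automatically” boils down to a deny-by-default posture: access exists only where someone explicitly granted it. Here is a minimal, provider-agnostic sketch of that principle — the `Resource` class and `is_allowed` helper are invented for illustration and are not any real cloud provider’s API:

```python
# Illustrative sketch of deny-by-default access control — not AWS, Google,
# or Azure code. It shows why access works only after explicit configuration.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    # Principals explicitly granted access; everything else is denied.
    allowed_principals: set = field(default_factory=set)

def is_allowed(resource: Resource, principal: str) -> bool:
    """Deny by default: access exists only if it was explicitly granted."""
    return principal in resource.allowed_principals

bucket = Resource("payroll-data")           # freshly created, no grants yet
print(is_allowed(bucket, "analytics-app"))  # False — nothing is automatic
bucket.allowed_principals.add("payroll-app")
print(is_allowed(bucket, "payroll-app"))    # True — only after explicit setup
```

The same deny-by-default thinking applies to the border security and intrusion detection Anderson mentions: each protection exists only because an architect deliberately configured it, on-premises or in the public cloud.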
A contributing writer on IT management and career topics with IT Business Edge since 2009, Don Tennant began his technology journalism career in 1990 in Hong Kong, where he served as editor of the Hong Kong edition of Computerworld. After returning to the U.S. in 2000, he became Editor in Chief of the U.S. edition of Computerworld, and later assumed the editorial directorship of Computerworld and InfoWorld. Don was presented with the 2007 Timothy White Award for Editorial Integrity by American Business Media, and he is a recipient of the Jesse H. Neal National Business Journalism Award for editorial excellence in news coverage. Follow him on Twitter @dontennant.