Back in the early 1980s, every household had to have a personal computer – or so we were told. It could balance your checkbook, help the kids with their homework, organize your shopping list, and on and on.
And a few months after making this expensive purchase, many heads of those households looked on in dismay as it sat in the corner of the living room collecting dust.
In many ways, the enterprise is experiencing this same pattern in the cloud. By now, most organizations have bought into the premise that the cloud is cheaper, more flexible and easier to use than legacy infrastructure. But many are starting to realize that, like the PC, the cloud is only valuable if it is used correctly and for the right reasons.
One way to avoid this problem is to engage a cloud integrator who can analyze your workflows and identify the optimal deployment for both operational and strategic objectives, says Rob Selby of hyperscale services provider Adapt. The objective of the cloud integrator is to find the “best execution venues” for cloud workloads by weighing factors like performance, reliability, cost and security, and then matching appropriate solutions to enterprise priorities. In this way, the enterprise gains a single point of contact for all things cloud-related who can guide cloud consumption based on the needs of the business model, not the needs of the IT department.
With the pace of business accelerating, however, no single person can manually direct all workloads to their optimal resources. This is where automation comes in. Companies like Cirba are starting to specialize in advanced control analytics that can automate workload routing across disparate hybrid infrastructure. The newest version of the company’s software, 9.0, offers single-console management of VM placement across Microsoft, Amazon and IBM clouds. The system removes much of the guesswork surrounding workload allocation by providing real-time analysis of security, compliance, licensing and other application needs and then weighing them against available options. At the same time, it provides granular visibility into on- and off-premises workload performance to continuously fine-tune deployments for optimal performance.
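The placement logic described above can be pictured as a simple scoring exercise. Here is a minimal sketch, assuming hypothetical venue and workload profiles (the names, attributes and weights are illustrative, not drawn from any vendor's product): each candidate venue is first checked against hard constraints like compliance, then scored against the workload's weighted priorities, and the best-scoring venue wins.

```python
def best_venue(workload, venues):
    """Return the eligible venue whose attributes best match the workload's weights."""
    def score(venue):
        # Weighted sum of the attributes the workload cares about.
        return sum(workload["weights"][k] * venue[k] for k in workload["weights"])

    # Hard constraints (e.g., compliance) filter venues out before scoring.
    eligible = [v for v in venues if v["compliant"] >= workload["min_compliance"]]
    return max(eligible, key=score)

# Illustrative data: two execution venues and one workload profile.
venues = [
    {"name": "public-cloud-a", "performance": 0.7, "cost_efficiency": 0.9,
     "security": 0.6, "compliant": 1},
    {"name": "on-prem", "performance": 0.9, "cost_efficiency": 0.5,
     "security": 0.9, "compliant": 1},
]
workload = {
    "min_compliance": 1,
    "weights": {"performance": 0.5, "cost_efficiency": 0.2, "security": 0.3},
}

print(best_venue(workload, venues)["name"])  # → on-prem
```

A real placement engine would of course pull these attributes from live telemetry and re-evaluate continuously rather than scoring a static snapshot, but the shape of the decision is the same.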
Another key factor in achieving an optimal data environment is software-defined networking (SDN), says tech writer Cheryl J. Ajluni. As most IT executives know already, workloads tend to get sticky in static infrastructure environments. Once they are deployed onto a server, it is rather difficult to get them off. Undoing these physical dependencies and ushering in the kind of flexibility required of digital business models requires broad programmability on the network. This allows policy instructions to be deployed across multiple hosts quickly and easily, which translates into a more flexible and scalable data ecosystem that can easily span both internal and external resources.
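The programmability Ajluni describes boils down to defining a policy once and pushing it everywhere, rather than configuring each box by hand. Below is a minimal sketch of that idea; the `apply_policy` callable stands in for whatever API a real SDN controller exposes (an assumption for illustration, not a real library call).

```python
# Illustrative policy definition: match HTTPS traffic and allow it.
POLICY = {"name": "web-tier-qos", "priority": 100,
          "match": {"port": 443}, "action": "allow"}

def push_policy(policy, hosts, apply_policy):
    """Apply one policy to many hosts; return a per-host success flag."""
    return {host: apply_policy(host, policy) for host in hosts}

def fake_apply(host, policy):
    # Stand-in for a real controller API call (hypothetical).
    return True

# One policy, deployed across internal and external hosts in a single pass.
results = push_policy(POLICY, ["host-a", "host-b", "cloud-vm-1"], fake_apply)
print(results)
```

The point of the pattern is that the policy lives in one place: adding a host to the list, or amending the policy, changes behavior everywhere at once, which is what makes the data ecosystem flexible enough to span internal and external resources.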
The enterprise has been struggling to optimize its data environment since long before the cloud arrived. The problem so far has been that most solutions have targeted a single workload or piece of infrastructure, leading to greater complexity and less cohesiveness across the data center.
The cloud represents an opportunity to break this pattern, but workload management must be deployed as a core component of an integrated strategy, not as an afterthought. Otherwise, you’ll likely wind up with multiple applications sitting on multiple clouds, all of them just collecting dust.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.