The long-term benefits of cloud computing and software-defined everything are pretty well known: improved infrastructure flexibility, dynamic configuration and ultimately the complete virtualization of end-to-end data environments.
In the near term, however, one of the immediate payoffs is supposed to be higher resource utilization. With more of the action taking place in software rather than hardware, virtualization and the cloud should closely match data loads to available resources. In effect, this is the primary means by which advanced technologies are expected to lower capital and operating costs: by powering down systems until they are actually needed and by cutting the rampant over-provisioning that most enterprises employ as a hedge against resource unavailability.
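The load-matching idea above can be illustrated with a toy threshold autoscaler. This is a minimal sketch of the concept only; the function name, thresholds and utilization figures are illustrative assumptions, not any vendor's actual policy or API:

```python
# Toy threshold-based autoscaler: keep active servers in line with observed
# load, the core idea behind powering down capacity until it is needed.
# All names and thresholds here are illustrative assumptions.

def desired_servers(current: int, utilization: float,
                    scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                    min_servers: int = 1) -> int:
    """Return how many servers should be active given average utilization."""
    if utilization > scale_up_at:
        return current + 1          # add capacity before saturation
    if utilization < scale_down_at and current > min_servers:
        return current - 1          # power down an idle server
    return current                  # within the comfort band, hold steady

# Example: a bursty day of average-utilization samples.
servers = 4
for u in [0.2, 0.2, 0.8, 0.9, 0.5, 0.1, 0.1]:
    servers = desired_servers(servers, u)
print(servers)  # ends the day at 2 servers instead of a fixed fleet of 4
```

The contrast with static over-provisioning is the point: a fixed fleet sized for the 0.9 burst would sit mostly idle, while even this crude feedback loop sheds capacity when demand falls.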
But the results are mixed. Certainly with straight-up virtualization, any gains in server utilization are quickly undone by the need to provision additional storage. As Tintri CEO Kieran Harty points out, flash-based storage has helped to alleviate this problem, but only incrementally. In fact, in an age of heightened energy awareness, more than 60 percent of data centers reported an increase in energy usage last year, primarily due to the need for active storage capacity to handle the up-and-down data loads of virtualized server infrastructure.
Another problem is the tendency of virtual environments to fragment data to a high degree, says ITWeb's Elize Holl. Not only does this drive the provisioning of unnecessary server, storage and networking resources, but it also makes it difficult to harness and analyze data effectively. Fragmentation is likely to increase in the coming years as users grow accustomed to provisioning their own resources in either internal or external clouds and to incorporating personal devices and their related mobile storage architectures.
One of the most effective tools to combat over-provisioning is network intelligence, according to Lyatiss CEO Pascale Vicat-Blanc. With advanced application-defined networking (ADN) in place, organizations gain the ability to coordinate resource utilization with traffic patterns, which not only alleviates bottlenecks and resource contention at the network level but can also be used to gauge data requirements for server and storage systems more accurately. As data environments become more dynamic and users begin to demand real-time performance in both their personal and professional lives, network intelligence is likely to emerge as the next must-have in the enterprise.
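One way traffic telemetry can sharpen capacity planning, as described above, is to size permanent capacity for a high percentile of observed demand rather than for the worst case plus a blanket safety margin. A minimal sketch, in which the demand samples, the 50 percent headroom figure and the 90th-percentile choice are all illustrative assumptions:

```python
# Sketch: sizing capacity from observed demand instead of blanket
# over-provisioning. Demand numbers and factors are illustrative assumptions.
import math

def percentile(samples, p):
    """Nearest-rank percentile (0 < p <= 100) of a non-empty sample list."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# Hourly demand observed on the network, in arbitrary capacity units.
demand = [12, 15, 14, 80, 18, 16, 13, 17, 15, 14, 16, 19]

naive = max(demand) * 1.5          # classic hedge: provision for peak plus 50%
informed = percentile(demand, 90)  # cover 90% of observed hours

print(naive, informed)  # 120.0 vs. 19 units of standing capacity
```

The single 80-unit burst is exactly the kind of load that dynamic, cloud-style allocation is meant to absorb on demand; keeping 120 units powered on around the clock to cover it is the over-provisioning habit the article describes.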
Indeed, this new, integrated world is likely to sound the death knell for the last bastion of over-provisioned resources: the data silo. As Scale Computing's Vanessa Alvarez notes, legacy silo-based architectures can no longer handle the dynamic data environments emerging in the cloud, so even highly integrated silos will have to be flattened if the enterprises that own them hope to function in the 21st century. This will not be an easy or quick transition for many organizations, but it will ultimately lead to a more flexible, less redundant infrastructure, with the added benefit of lower operating and capital expenses.
Over-provisioning is one of those practices that will fall by the wayside even if the enterprise takes no active measures against it. The normal refresh cycle will eventually weed out aging infrastructure and the management stacks that control it, in favor of a leaner, more efficient hardware footprint.
But it’s also fair to say that organizations that take a proactive approach to the problem will find themselves in a better position to take advantage of technologies that are improving data environments across the board. Now that resource efficiency is starting to supplant raw power as the measure of data infrastructure, using only what you need at any given time will likely emerge as a driving force in the transition to cloud-based computing.