Getting up on the cloud is a top priority for the vast majority of enterprises, as well it should be. The cloud provides near limitless resources and offers a level of utilization that traditional data center infrastructure can’t come close to matching.
But the fact remains that enterprises will continue to use their local resources for the foreseeable future as well, considering (a) that infrastructure was very expensive to acquire and shouldn't simply be discarded overnight, and (b) there's still a lot of life left in the old enterprise plant, which will be able to render valuable service to the organization for some time to come.
And herein lies the rub: cloud and on-premises environments still have a long way to go toward forming a cohesive, integrated whole, which means data must jump through numerous hoops if the enterprise hopes to build a universal, federated infrastructure.
The problem presents itself even in relatively basic cloud environments like SaaS, according to Ventana Research’s Mark A. Smith. Few service providers have given much thought to the interchange of data between the cloud and traditional enterprise infrastructure, so even though the cloud side of the house provides tremendous benefits in process management, data quality and resource utilization, the transition back to the enterprise usually involves substantial migration, exportation and custom coding. Ultimately, this ends up diminishing the organization’s ability to provide a consistent, secure and high-quality environment across data and application layers.
This isn’t to say that progress has come to a complete halt. New techniques like Cloud-integrated Storage (CiS) are creating bridge-like architectures designed to smooth out the transition between enterprise and cloud, creating, in the words of StorSimple’s Marc Farley, the “missing link for enterprise storage customers.” The system works by creating an on-premises SAN that exports LUNs to cloud storage services, which can be used for snapshots, backup, inactive and archival data. The concept is similar to hybrid SAN technology that spans HDD and SSD tiers, with an additional high-latency module for the cloud. Farley added that key applications include the storage of unstructured data currently occupying primary storage tiers.
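The tiering idea behind CiS — hot data on low-latency local SSD and HDD tiers, with snapshots and cold data pushed out to a high-latency cloud tier — can be sketched as a simple age-based placement policy. The class, function names and thresholds below are hypothetical illustrations, not StorSimple's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical tiers, ordered from lowest to highest latency.
TIERS = ["ssd", "hdd", "cloud"]

@dataclass
class Block:
    lun_id: str
    days_since_access: int
    is_snapshot: bool = False

def place(block: Block, ssd_max_age: int = 1, hdd_max_age: int = 30) -> str:
    """Assign a block to a storage tier based on access recency.

    Snapshots and long-inactive data go straight to the cloud tier;
    recently used data stays on the low-latency local tiers.
    """
    if block.is_snapshot or block.days_since_access > hdd_max_age:
        return "cloud"
    if block.days_since_access > ssd_max_age:
        return "hdd"
    return "ssd"
```

In this sketch the cloud behaves exactly like one more tier in a hybrid SAN: the policy is the same shape as an SSD/HDD placement rule, just extended with one slower, cheaper rung at the bottom.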
There is also hope that new efforts to standardize cloud environments will make it easier to extend automation and other management tools across architectures. rPath’s Enterprise Cloud Adoption Framework looks to nail down key aspects of what the company calls “applications-enabling infrastructure,” such as compute, network and storage, even OS and middleware platforms. The framework lays out various infrastructure and application models, ranging from fragmented, non-standard designs built to meet unique requirements to more elastic approaches based on pooled resources. Tailoring these builds to existing enterprise infrastructure could go a long way toward unifying various cloud and non-cloud architectures into an integrated platform.
Still other approaches are taking aim at the very way in which cloud resources are utilized. A company called Tappin has a unique approach to data sharing that seeks to circumvent the very notion of what is and is not the cloud. Unlike traditional services in which data is pulled from end-point devices into a centralized repository, Tappin has the devices communicating and sharing data with each other directly. This essentially turns the entire infrastructure into a giant NAS, with data moving seamlessly from user to user. To function properly, however, devices need to be on and connected at all times, and the system uses a fair amount of compression and optimization to maintain reasonable traffic loads.
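The distinction Tappin draws — a central index that knows where data lives, while the bytes themselves move directly between devices — can be sketched in a few lines. The names here are hypothetical, purely to illustrate the architecture rather than Tappin's actual service:

```python
class Peer:
    """Hypothetical end-point device that keeps its files locally."""
    def __init__(self, name: str):
        self.name = name
        self.files = {}      # filename -> bytes; never copied to a central store
        self.online = True   # devices must stay connected for sharing to work

class Directory:
    """Central index holding metadata only; data never passes through it."""
    def __init__(self):
        self.locations = {}  # filename -> owning Peer

    def publish(self, peer: Peer, filename: str) -> None:
        self.locations[filename] = peer

    def fetch(self, filename: str) -> bytes:
        owner = self.locations[filename]
        if not owner.online:
            # The caveat noted above: an offline device takes its data with it.
            raise ConnectionError(f"{owner.name} is offline")
        return owner.files[filename]  # direct peer read, not a repository copy
```

The failure mode is the interesting part: in a traditional service, `fetch` would hit the central repository and succeed regardless of the originating device's state, whereas here it raises as soon as the owning peer goes offline — which is why such systems need devices on and connected at all times, plus compression and optimization to keep traffic loads reasonable.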
Of course, one of the best ways to ensure fluidity and interoperability across local and cloud infrastructures is to make the data center more cloud-like. Private clouds provide a high degree of resource manipulation and pooling capability that can at least forge a degree of commonality with public services.
But when it comes to fulfilling the ultimate goal of a fully integrated data environment spanning a global infrastructure of owned and third-party resources, well, it looks like there are still a few bugs to work out.