Ultimately, data infrastructure is measured by the performance it delivers to the application. Cost, ease of management and upgrade paths matter as well, but they rarely take precedence over the need to improve the way data is maintained and manipulated.
The cloud can be a double-edged sword when it comes to app performance, however: it offers the scalability and flexibility to push capabilities to new levels, but at the expense of control, making it harder to ensure that things are working the way you want them to.
Cloud providers and application management developers are sensitive to this issue because it remains one of the key stumbling blocks to gaining a greater share of the application workload, particularly the lucrative market for mission-critical functions. For this reason, there continues to be a steady stream of software-based solutions aimed at boosting performance and accountability in the cloud.
For instance, Actifio recently launched Actifio One, which the company describes as an application resiliency service designed to unite data protection and other functions for applications distributed across multiple resource platforms. It does this not just by providing simple redundancy and recovery services but by acting as a host to the application during the recovery process. The system is based on the company’s Virtual Data Pipeline platform, a data virtualization technology that enables live encryption, snapshots, replication and other tools within an overarching application- and SLA-defined framework. When service is disrupted, it can instantly spin up a virtual environment to keep the affected app accessible.
Top providers like Google are also ramping up support for enterprise-class services. The company’s Cloud Trace service offers a highly granular view of the inner workings of the Google App Engine, giving users detailed information on exactly where bottlenecks and other problems lie. The system analyzes instruction streams, traces and other markers to determine how much time an app spends engaged with an underlying Google resource. This can be used not only to address problems with Google’s infrastructure, but also to identify issues with particular applications or add-ons so they can be tweaked to improve functionality. The system is available in beta via the Google Developer Console.
Still others are looking to shield the application as much as possible from underlying hardware via advanced virtualization and abstraction. A company called Ravello says it has found a way to encapsulate the entire application workload using its SaaS-based Cloud Application Hypervisor, allowing it to operate on any infrastructure or architecture as if it were still in its native environment. The system is built on the proprietary HVX “nested” hypervisor, which envelops the compute, storage and networking functionality needed by the application so admins do not have to manually configure resources every time the app is deployed on a new cloud.
Ravello says its solution sits lower on the stack than a container solution like Docker, which gives it the ability to incorporate containers within its virtual environment, and the company is in fact working on integrating Docker directly in a future release. In the meantime, cloud providers like CliQr are moving ahead with their own container deployments as a means to boost application portability on their hosted platforms. CliQr has extended full support to Docker on its CloudCenter stack, offering command-line and graphical control over single, multiple and composite deployments, with resources embedded in the container, the host system or both. At the same time, Docker containers can interact with non-containerized applications as long as they reside within the same cloud.
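To make the distinction between single and composite deployments concrete, here is a minimal sketch using standard Docker tooling; this is an illustration of the general technique, not CliQr’s actual CloudCenter command set, and the image names are arbitrary examples.

```shell
# Sketch of a composite (multi-container) deployment with standard
# Docker tooling. A minimal docker-compose.yml describes two
# containers that deploy and run together as one unit:
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine    # example front-end container
    ports:
      - "8080:80"
  cache:
    image: redis:alpine    # example supporting service in the same deployment
EOF

# With a Docker daemon available, the whole composite deployment
# launches with a single command:
#   docker compose up -d
#
# A single-container deployment, by contrast:
#   docker run -d --name web -p 8080:80 nginx:alpine
```

The declarative file is what makes the composite case portable: the same definition can be handed to any Docker-capable host, which is the portability benefit the hosted platforms are after.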
Application performance in the cloud will likely remain a top area of software development for a while longer. It is debatable whether the enterprise requires deep-dive visibility and control over third-party infrastructure, but the application environment should afford plenty of flexibility – at least if the provider hopes to attract top-end enterprise workloads.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.