In the mobile industry, the catchphrase is: “There’s an app for that.” For the enterprise, we are quickly moving toward “There’s a service for that.”
Virtually everything that was once under IT’s exclusive control is now migrating to a service model. This gives business units unprecedented flexibility when it comes to provisioning and defining their own data environments, but is it necessarily healthy for the enterprise as a whole?
The rise of XaaS (Everything as a Service) is the natural outcome of virtualization and cloud computing, says datacenters.com’s Rachel McAree. What we’re seeing is the beginning of the end of the old hardware and software deployment model, with its amortization, planned obsolescence and heavy IT-led hand-holding and integration, in favor of a more fluid model based on predictable expenses and outcomes. Whether you are talking about simple IT resources or the plethora of business applications currently in play, the service model is undeniably easier on the pocketbook – unless you’re doing it wrong.
The biggest knock against services is that they tend to be generic, with little or no capacity for the level of customization that many enterprises have come to rely on. However, providers like Zadara say they are offering just that with the new Enterprise Storage as a Service (STaaS) system running on the Microsoft Azure or AWS public clouds. The platform uses the company’s Virtual Private Storage Array (VPSA) to provide file and block storage with the performance and consistency of a single-tenant solution at multitenant scale and pricing. The company says it can provide seamless deployment across multiple clouds and maintain high performance even for high-IOPS, low-latency applications that were previously deemed unsuitable for public clouds.
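To make the single-tenant-on-multitenant idea concrete, here is a toy sketch of a VPSA-style allocator: each tenant gets a dedicated slice of a shared pool, so tenants see only their own array while the underlying capacity (and its pricing) is shared. The class and method names are hypothetical illustrations, not Zadara’s actual API.

```python
class StoragePool:
    """Shared (multitenant) pool of raw capacity, in GiB."""

    def __init__(self, capacity_gib):
        self.capacity_gib = capacity_gib
        self.allocated_gib = 0
        self.arrays = {}  # tenant -> private array

    def create_vpsa(self, tenant, size_gib):
        """Carve out a dedicated slice for one tenant.

        The tenant sees only its own array (single-tenant semantics),
        while the raw capacity underneath is shared (multitenant cost).
        """
        if self.allocated_gib + size_gib > self.capacity_gib:
            raise ValueError("pool exhausted")
        self.allocated_gib += size_gib
        self.arrays[tenant] = {"size_gib": size_gib, "volumes": []}
        return self.arrays[tenant]

    def create_volume(self, tenant, name, size_gib):
        """Provision a block volume inside a tenant's private array."""
        array = self.arrays[tenant]
        used = sum(v["size_gib"] for v in array["volumes"])
        if used + size_gib > array["size_gib"]:
            raise ValueError("array full")
        array["volumes"].append({"name": name, "size_gib": size_gib})


# Self-service usage: the tenant provisions within its own slice only.
pool = StoragePool(capacity_gib=1000)
pool.create_vpsa("acme", 200)
pool.create_volume("acme", "db-data", 50)
```

The point of the sketch is the isolation boundary: capacity checks happen per tenant array, so one tenant’s workload cannot consume another’s allocation, which is what lets the provider promise single-tenant consistency at multitenant scale.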
At the same time, the tools for converting legacy architectures to support internal service-based application delivery are hitting the mainstream. SUSE’s new OpenStack distribution, SUSE Cloud 3, provides automated configuration and deployment while maintaining high availability and rapid startup of private cloud configurations for dev/test, batch processing and other functions. The system features the SUSE Linux Enterprise High Availability Extension for the establishment of open source data clusters, which guards against control-plane failures and other issues that lead to service disruption and access denial. In addition, the Administrative Server module features a project-based installation and management framework for control of physical resources.
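The high-availability idea in play here reduces to a simple pattern: the control plane runs as a cluster, and when the active node fails a health check, a standby is promoted so the service stays reachable. The following is a toy sketch of that failover logic only, not SUSE’s actual implementation, and all names in it are hypothetical.

```python
class ControlPlaneCluster:
    """Minimal active/standby failover for a clustered control plane."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.active = self.nodes[0]  # first node starts as active

    def heartbeat(self, node, healthy):
        """Record a node's health; fail over if the active node dies."""
        if healthy or node != self.active:
            return  # standby failures don't interrupt service
        survivors = [n for n in self.nodes if n != node]
        if not survivors:
            raise RuntimeError("total control plane failure")
        self.nodes = survivors
        self.active = survivors[0]  # promote a standby


# The active node misses its heartbeat; a standby takes over.
cluster = ControlPlaneCluster(["node-a", "node-b", "node-c"])
cluster.heartbeat("node-a", healthy=False)
```

Real clustering stacks add quorum, fencing and resource migration on top of this, but the basic contract is the same: an unhealthy active node is evicted and a survivor is promoted before clients notice a disruption.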
But with all this self-service provisioning going on, CIOs will naturally wonder whether too much freedom will come back to haunt the enterprise. According to Jelastic’s John E. Derrick, this concern is valid, but there also needs to be a balance between IT’s need to control the data environment and knowledge workers’ ability to get their jobs done. Ultimately, IT should model itself on the hosting community, which places a high priority on customer service while at the same time providing wide latitude when it comes to provisioning and managing the user environment. This can be a tricky dance because the line between data environment and underlying infrastructure can be a fine one, particularly when it comes to security and governance policies. But in the end, it would benefit the enterprise greatly if IT were to build and maintain the sandbox and let users design their own castles.
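The sandbox-and-castles balance can be sketched in code: IT defines the guardrails once (quotas, approved regions), and self-service requests are checked against them rather than routed through a ticket queue. This is an illustrative toy, assuming a hypothetical policy shape and a made-up `provision_vm` helper, not any particular vendor’s API.

```python
# IT builds the sandbox: guardrails set centrally, enforced automatically.
POLICY = {
    "max_vm_cpus": 8,
    "allowed_regions": {"us-east", "eu-west"},
}

def provision_vm(request):
    """Self-service request, granted only within IT's guardrails."""
    if request["cpus"] > POLICY["max_vm_cpus"]:
        raise PermissionError("exceeds CPU quota")
    if request["region"] not in POLICY["allowed_regions"]:
        raise PermissionError("region not approved")
    # Users design the castle: anything inside the policy goes through.
    return {"status": "provisioned", **request}


vm = provision_vm({"cpus": 4, "region": "us-east"})
```

The design choice is that policy lives with IT while provisioning lives with the user; the fine line the article mentions is exactly which checks belong in `POLICY` and which decisions are left to the requester.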
Realistically, it is hard to see the enterprise transitioning wholesale to a public cloud-based service model, given that issues like security and availability will always trump low cost and convenience. But once both private and hybrid clouds are in place, it will be much easier to put IT on a service footing that encompasses the reliability and control of internal infrastructure and the scalability of external resources.
The XaaS model will likely thrive in the enterprise, but it will consist of a multitude of underlying physical and virtual architectures.