    Momentum Builds for Another Try at Utility Computing

    Utility-style computing has been a dream of the technology elite for some time. But it’s only lately, with the advent of cloud computing and fully virtualized infrastructure, that the underpinnings are finally in place to make it happen – provided there is the will to see it through.

    That’s a big if, considering that powerful interests are backing the status quo and that the business model for an enterprise handing off responsibility for its infrastructure entirely is not yet mature, Uber notwithstanding.

    But as Acquia’s Chris Stone pointed out on TechCrunch recently, it happened in the power, water and telephone industries, so there is no reason it couldn’t happen for data as well. What’s needed is a different mindset regarding the relationship between infrastructure and applications, and a recognition that most people need only the top layer of the data stack to get the job done. A single, standard API enabling a provider-agnostic, ambient data environment on top of cloud infrastructure is all it would take to make today’s messy processes of provisioning and configuring resources as easy as flipping a light switch. Is this too good to be true? Perhaps, but it’s hard to ignore the potential benefits it would bring to data users around the world.
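
    To make the light-switch analogy concrete, such a standard API might reduce today’s provisioning ritual to a single call. The sketch below is purely hypothetical; no such library exists today, and every name in it is invented for illustration:

        # Purely hypothetical sketch of a provider-agnostic data API;
        # every name here is invented for illustration.
        class DataEnvironment:
            """One standard entry point, whichever cloud runs underneath."""

            def __init__(self, provider: str = "any"):
                # The user never picks or configures a backend; the
                # environment resolves to whatever provider is available.
                self.provider = provider

            def request(self, dataset: str) -> list:
                # A real implementation would provision, configure and scale
                # resources behind this one call; a stub stands in here.
                return [{"dataset": dataset, "status": "ready"}]

        env = DataEnvironment()        # flipping the light switch
        rows = env.request("orders")   # resources appear on demand
        print(rows)

    The point is not the names but the shape: one call, no provider-specific setup, and everything beneath it is somebody else’s problem.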

    Leading cloud providers are certainly aware of the possibilities. At Google’s NEXT conference in San Francisco earlier this year, Eric Schmidt, chairman of Google’s parent company Alphabet, laid out the three largely misunderstood developments needed to create such a system. It starts with serverless architecture, which doesn’t mean there are no more servers, just that they are hidden beneath a layer of self-service automation. From there, we evolve to a NoOps architecture, which again does not mean there are no more operations, or even DevOps, on the enterprise side, but that functions like building, scripting and managing infrastructure are automated as well. And finally there is machine learning to take over labor-intensive jobs like data ingestion and model training. Ultimately, we find ourselves in an environment where virtually anyone can create apps and services without knowing a thing about servers, storage, networking or the myriad other pieces that make up the infrastructure stack.

    Amazon is moving in this direction as well with offerings like the Lambda platform, which provides a serverless environment for running all manner of applications without provisioning, management or integration work. As Peter Sbarski and Sam Kroonenburg noted in their recent book, “Serverless Architectures on AWS,” Lambda executes code on a massively parallel scale and responds to events rather than waiting for the developer to act manually. Because the platform takes care of low-level tasks like provisioning compute resources, installing software and deploying containers, developers can focus on improving the code itself and on building loosely coupled, scalable architectures. This will be crucial in the near future as competitive realities push organizations to drive revenue through services and microservices rather than by building and managing closed infrastructure.
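
    For a sense of what this looks like in practice, a minimal Lambda function in Python is nothing but application logic; the event source, the scaling and the servers are the platform’s concern. The event payload below is a made-up example, since real payloads depend on the triggering service:

        import json

        # A minimal AWS Lambda handler in Python. AWS invokes this function
        # in response to an event (an API call, a file upload, a queue
        # message); the developer never provisions or manages a server.
        def lambda_handler(event, context):
            # 'event' carries the triggering payload; this particular shape
            # is a made-up example for illustration.
            name = event.get("name", "world")
            return {
                "statusCode": 200,
                "body": json.dumps({"message": f"Hello, {name}"})
            }

    The statusCode/body return shape follows the convention an HTTP front end like API Gateway expects; other event sources would simply receive the returned value.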

    Utility computing would also be a boon to open source environments as it provides a means to federate advanced architectures across virtually any hardware platform. Just recently, OpenStack developer Mirantis teamed up with job processing solutions provider Iron.io to introduce serverless capabilities to OpenStack. The goal is to develop advanced telemetry solutions that would allow OpenStack to capture internal system events and trigger automated workflows, essentially providing resources as needed without direct input from users.

    The first phase of the plan is creating an application package under the OpenStack Murano catalog that is validated for the IronMQ cloud messaging architecture. This will allow developers to write logic as a Docker image that supports event-driven processing, providing an easy-to-use and highly adaptable environment to create new services and deploy them at scale.
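
    The event-driven shape described above might look something like the sketch below: application logic packaged so the platform can invoke it whenever a message arrives. This is a generic illustration of the pattern, not the actual Murano or IronMQ API, and the message format is invented:

        import json
        import sys

        # Generic sketch of event-driven worker logic as it might run inside
        # a container image; the pattern, not the IronMQ or Murano API.
        def process_event(event: dict) -> dict:
            # Application logic only; when and where this runs is the
            # platform's decision, not the developer's.
            return {"handled": event.get("type", "unknown")}

        if __name__ == "__main__":
            # The platform would hand the queued message to the container;
            # stdin stands in for that delivery here.
            event = json.load(sys.stdin)
            print(json.dumps(process_event(event)))

    A queued message can be simulated locally with something like echo '{"type": "image.resize"}' | python worker.py, where worker.py is a hypothetical name for the file above.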

    Serverless, utility-style computing is certainly the cloud as it could be, but it is still quite different from the cloud as it is now. Unlike water or electricity, the value of data is tied intrinsically to the goals and skills of the user, and it can fluctuate greatly with the capabilities of the underlying infrastructure.

    Placing everyone on equal footing when provisioning resources may suit the majority, but there will always be a minority looking to gain an edge by appropriating both the upper layers and the foundations of the data environment.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
