    The Quiet Revolution of Serverless Computing

    Amid the buzz surrounding the cloud, the IoT, Big Data and digital transformation, it’s a bit surprising that one new development that could underpin all of this activity has remained largely under the radar.

    Serverless computing gets its name not from the absence of servers in the computational chain, but from the way their deployment, configuration and management are hidden from the user. This allows the knowledge worker to concentrate on app performance and productivity rather than on underlying resource considerations. As such, it has great potential for the emerging service-driven business model, in which traditional workflows, revenue streams and other factors are supported on a purely digital basis.

    But as with any architecture, there are upsides and downsides, which means serverless will be appropriate only for certain functions, and organizations will not only have to master its use and optimization but also integrate it into broader data ecosystems.

    Amazon ushered in the serverless market with its Lambda service in 2014. Since then, both Microsoft and Google have rolled out similar services on their respective clouds. And late last year, IBM added a number of new features to its OpenWhisk platform aimed at making it easier for developers to build secure, scalable environments using a variety of open, third-party tools. (Disclosure: I provide content services for IBM.)

    A serverless architecture’s chief benefit is that it provides a more efficient consumption model for public cloud resources, says Talkin’ Cloud’s Christopher Tozzi. By offering a means to load and execute code on demand, it gives developers a level of autonomy over infrastructure that few have seen before, particularly on IT-governed on-premises architectures. Serverless is most useful for intermittent, data-intensive workloads that require fast turnaround and scale-out resources that would otherwise sit idle between jobs, but it can also support self-service environments in which IT has neither the time nor the resources to cater to every request.
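
    To make that concrete, the sketch below shows roughly what a developer hands to a service such as Lambda: a single function, with no provisioning, configuration or capacity planning attached. The handler(event, context) entry point follows Lambda’s Python convention; the event fields and the work being done are illustrative assumptions rather than a reference implementation.

        # A minimal, illustrative serverless function (Python, Lambda-style).
        # Only the handler(event, context) entry point is the standard Lambda
        # convention; the payload shape below is a hypothetical example.
        # No servers are provisioned or managed here: the platform loads and
        # runs this code on demand, scales it out per invocation, and releases
        # the resources when the work is done.

        import json

        def handler(event, context):
            # 'event' carries the trigger payload (an HTTP request, queue
            # message, storage notification, etc.); 'context' describes the
            # invocation itself.
            records = event.get("records", [])  # hypothetical payload field

            # Intermittent, data-intensive work fits this model well: the code
            # consumes resources only while a batch is actually being processed.
            total_bytes = sum(item.get("bytes", 0) for item in records)

            return {
                "statusCode": 200,
                "body": json.dumps({
                    "recordsProcessed": len(records),
                    "bytes": total_bytes,
                }),
            }

    Everything outside that function, from capacity to patching, is the platform’s problem, which is precisely the consumption model Tozzi describes.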

    Depending on the performance characteristics of the workload, however, serverless might not be the best choice, says IT consultant David Linthicum. Simply pushing data across the cloud introduces latency no matter what architecture you are using, and serverless can compound the issue because the platform must first work out how to marshal resources to respond to the application, much more so than a pre-configured virtual environment. Monitoring and debugging a serverless system are also more complicated because workloads tend to become distributed across multiple resources. Still, for most workloads the upsides of serverless generally outweigh the downsides, simply because it provides a more efficient means of cloud consumption for high-volume activities.

    One interesting caveat to the entire serverless movement is how the technology will mesh with containers, which are also intended to streamline resource utilization in the cloud. According to Apprenda’s Chris Gaun, the two technologies might actually end up serving different needs in the enterprise. As he explained to tech journalist Matt Asay, serverless architectures like Lambda probably won’t be used exclusively to support app development that is already in progress, while orchestrating containerized microservices would likely be cost-prohibitive in a serverless environment. Serverless also requires highly granular management of diverse application functions, says Apcera’s Dean Sheehan, more than most enterprises have the manpower to handle, so a gradual step up to containers will likely come before any broader shift to serverless.

    Regardless of how all of this plays out, the enterprise is poised to see rapid gains in app development and overall data productivity in the next few years as resource provisioning and configuration cease to be a prerequisite for actual computing.

    Ultimately, this may bring about the democratization of data that technology futurists have long predicted, where those who wish to use data infrastructure for their own ends no longer have to wait for experts to set them up.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
