Is It Time for the Data Center OS?

    It doesn’t take a lot of imagination to see the digital ecosystem as a series of concentric circles. On the processor level, there are a number of cores all linked by internal logic. The PC contains multiple chips and related devices controlled by an operating system. The data center ties multiple PCs, servers, storage devices and the like into a working environment, and now the cloud is connecting multiple data centers across distributed architectures.

    At each circle, then, there is a collection of parts overseen by a software management stack, and as circles are added to the perimeter, the need for tighter integration within the inner architectures increases in order to better serve the entire data ecosystem.

    It is for this reason that many data architects are warming to the idea of the data center operating system. With the data center now just a piece of a larger computing environment, it makes no more sense to manage pieces like servers, storage and networking on an individual basis than to have multiple OSes on the PC, one for the processors, another for the disk drive, and so on. As tech investor Sudip Chakrabarti noted on InfoWorld recently, the advent of virtualization, microservices and scale-out infrastructure is fueling the need to manage the data center as a single computer, with the distributed architecture assuming the role the individual server once played.

    This is exactly what Mesosphere is going for with its aptly named Data Center Operating System (DCOS). Built on the Apache Mesos core, the system is already available for beta testing on AWS and Azure, and was recently released as a free community edition for IT professionals to muck with before launching into the full enterprise suite. The ultimate goal is to simplify infrastructure management and apply advanced analytics and other tools to improve workflows and load management, all in the name of improving data efficiency through lower costs and improved resource utilization.
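    Under the hood, platforms like DCOS typically have operators describe a service declaratively and leave placement to a cluster-wide scheduler rather than to a human picking machines. As a rough illustration, the field names below follow the style of Marathon, the long-running service scheduler that ships with DCOS, but the exact schema should be treated as an assumption for illustration:

```python
import json

# Illustrative, Marathon-style service definition: the operator states
# what to run and how much capacity each instance needs; the data
# center OS decides which machines actually host the instances.
# (Field names are an assumption modeled on Marathon's app schema.)
app = {
    "id": "/web/frontend",               # hierarchical service identifier
    "cmd": "python3 -m http.server 8080",
    "cpus": 0.5,                          # fractional CPU share per instance
    "mem": 256,                           # MB of RAM per instance
    "instances": 4,                       # scheduler places these anywhere
}

print(json.dumps(app, indent=2))
```

    The point of the declarative form is that nothing in it names a server: scaling from 4 to 40 instances is a one-field change, and the scheduler absorbs the placement work.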

    A data center OS is the only way to realize the dream of the application-centric enterprise, says Mesos co-creator Benjamin Hindman. With the new breed of applications no longer residing on a single server, tight coordination between compute and other resources will be required, similar to the way multithreading works across many cores. In his view, abstracting hardware is the wrong approach for truly distributed systems. What is needed is the equivalent of a Portable Operating System Interface (POSIX) that can break down the highly partitioned architectures we have now and unify elements like analytics, databases, web servers and the like under a single computing paradigm. This is really the only way to prevent the silo-based architectures that plague today's data center from being recreated in virtual, distributed environments.

    It also goes a long way toward automating the “New Stack” to make it less susceptible to human error, says venture capitalist Vinod Khosla. In order to function effectively, the enterprise will need to aggregate and disaggregate virtual resources on a continual basis, and this is best handled at the application level so as to deliver the highest performance to the user while maintaining the most efficient consumption model for the enterprise. Again, this is roughly equivalent to modern PC software, which can launch itself and provision its own resources rather than require the user to do it manually. In this vein, the data center OS is not only desirable, but inevitable.
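    That continual aggregation and disaggregation is usually implemented with an offer-based resource model of the kind Mesos popularized: the cluster manager advertises each node's spare capacity as "offers," and an application-level scheduler accepts only what its tasks need, releasing the rest. The toy sketch below illustrates the idea; the names and data structures are invented for the example, not the real Mesos API:

```python
# Toy sketch of offer-based, application-level scheduling (illustrative
# only, not the actual Mesos framework API). Offers map each node to
# its spare (cpus, mem); the scheduler greedily packs tasks onto that
# capacity and implicitly declines whatever it does not consume.

def schedule(offers, task_cpus, task_mem, tasks_needed):
    """Return a list of node names, one per task placed."""
    placements = []
    for node, (cpus, mem) in offers.items():
        # Keep placing tasks on this node while its offer can hold one.
        while tasks_needed and cpus >= task_cpus and mem >= task_mem:
            placements.append(node)
            cpus -= task_cpus
            mem -= task_mem
            tasks_needed -= 1
    return placements

offers = {"node-a": (4.0, 8192), "node-b": (1.0, 2048)}
print(schedule(offers, task_cpus=1.0, task_mem=1024, tasks_needed=5))
# → ['node-a', 'node-a', 'node-a', 'node-a', 'node-b']
```

    Because the application itself decides which offers to take, placement policy (bin-packing, spreading, data locality) lives with the workload that understands its own needs, which is exactly the application-level provisioning Khosla describes.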

    Naturally, shifting from a legacy management stack to an integrated operating system will not happen without a few hiccups, and even those who have a working knowledge of Linux and the various Apache distributions will probably face a lengthy and complicated deployment of a system like Mesosphere DCOS.

    But as the enterprise becomes more distributed, each node in the architecture will have to function as part of a single unit in order to deliver the highest degree of operational flexibility to the overall data environment. The best way to do that is to unite all discrete functions under a single, automated management regime.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
