    Even in the SDDC, There’s Open and Then There’s Open

    The software-defined data center (SDDC) is certain to achieve new levels of functionality by severing the last formal ties between hardware and software, but it will nonetheless require a great deal of coordination at the physical layer to make it happen.

    The key question going forward, however, is how to do it. The great debate underway at the moment is an oldie but a goodie: Do you want a vendor-centric approach in which upper-level software sits on custom ASICs in server, storage and network devices, or do you build a generic hardware layer using open, commodity components? If we think of the data center as a PC on a global network, it’s the same Microsoft vs. Apple dispute all over again.

    For those of you who lean toward the open side of things, the idea of numerous multivendor components all humming away in perfect harmony is very appealing. Infrastructure costs should be a fraction of what they are in today’s data center, and as long as everything conforms to common standards like OpenFlow and OpenStack, there should be no problem. This is the ideal that open platform providers like Red Hat are presenting: in short, the end of proprietary infrastructure as we know it.
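    To make that interoperability claim concrete, here is a minimal sketch (my illustration, not anything from the vendors mentioned) of what vendor-neutral provisioning looks like in practice. It uses the openstacksdk library to drive the standard Nova and Neutron APIs; the cloud name, image and flavor below are hypothetical placeholders that will differ per deployment. Whatever servers, switches or storage sit underneath, the calls are the same.

        import openstack

        # Connect using a cloud entry assumed to be defined in clouds.yaml.
        conn = openstack.connect(cloud="mycloud")

        # Create a tenant network and subnet through the standard Neutron API.
        network = conn.network.create_network(name="demo-net")
        subnet = conn.network.create_subnet(
            network_id=network.id,
            ip_version=4,
            cidr="10.0.0.0/24",
            name="demo-subnet",
        )

        # Boot a server through the standard Nova API; image and flavor names
        # are placeholders for whatever the deployment actually offers.
        image = conn.compute.find_image("cirros")
        flavor = conn.compute.find_flavor("m1.tiny")
        server = conn.compute.create_server(
            name="demo-vm",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.status)

    The point of the sketch is simply that the API, not the hardware, is the contract; swap the gear underneath and the code above should not need to change.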

    A deeper dive into the technology, however, suggests that even open platforms will not be as vendor-independent as they seem. On the silicon level, for instance, SDDC environments will likely incorporate key architectures like Intel’s SDI (software-defined infrastructure). This is not necessarily a bad thing, as the format is slated to conform to leading open standards, and is even positioned to appear as a major component in Facebook’s Open Compute Project. But it is nonetheless unsettling that some tech analysts like Patrick Moorhead are already referring to SDI as “the one architecture to rule them all.”

    What does that mean? In Intel’s vision, open hardware works best as the building blocks of software-defined architectures. The easier it is to put the blocks together, the faster the enterprise can achieve the scale required for Web-facing operations and other high-volume/high-speed functions. All the enterprise needs to do is make sure that all processors in the data ecosystem, covering servers, storage systems and network components, are compatible (SDI would be a convenient way to link legacy x86-based infrastructure with new Atom-powered devices), and the rest should be easy. What if someone wanted to deploy a non-SDI-compliant device that nonetheless supports OpenFlow and/or OpenStack? That would probably be OK, but you might not gain the full functionality of an end-to-end SDI infrastructure.
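    That baseline is worth illustrating. The sketch below is my own example, written against the Ryu controller framework (an assumption; the article names only the OpenFlow standard). It installs the table-miss rule that any OpenFlow 1.3-compliant switch will accept, SDI silicon or not. That shared behavior is what the standard guarantees; any SDI-specific optimization sits above it.

        from ryu.base import app_manager
        from ryu.controller import ofp_event
        from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
        from ryu.ofproto import ofproto_v1_3


        class TableMissApp(app_manager.RyuApp):
            OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

            @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
            def switch_features_handler(self, ev):
                datapath = ev.msg.datapath
                ofproto = datapath.ofproto
                parser = datapath.ofproto_parser

                # Match any packet not handled by a more specific rule and
                # punt it to the controller; identical on every conformant
                # switch, regardless of whose silicon is inside.
                match = parser.OFPMatch()
                actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                                  ofproto.OFPCML_NO_BUFFER)]
                inst = [parser.OFPInstructionActions(
                    ofproto.OFPIT_APPLY_ACTIONS, actions)]
                datapath.send_msg(parser.OFPFlowMod(datapath=datapath,
                                                    priority=0, match=match,
                                                    instructions=inst))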

    This is part of what Intel was talking about last summer when it sent top executives on the lecture circuit with visions of a re-architected data center. The company is pursuing a range of technologies designed to help the enterprise conform data environments to the needs of users, rather than the other way around. These include everything from new rack designs and HPC systems to updates to Atom processors and new Broadwell Xeons. Behind it all is the goal of converting static infrastructure into dynamic, automated ecosystems capable of bringing disparate resources to bear on any given workload.

    The big elephant in the room, of course, is whether Intel has the edge in building scale-out data infrastructure, or whether that will go to the rival ARM architecture. I have yet to see an end-to-end SDN or SDDC architecture from any of the ARM designers, but that isn’t necessarily surprising. In fact, the ARM community may very well restrict itself to OpenFlow and OpenStack, producing a higher degree of vendor flexibility in hardware buying decisions, but perhaps at the price of higher integration costs.

    Again, there is nothing inherently wrong with either option. And indeed, there will probably be varying degrees of openness in all environments, from loose federations of devices to highly architected infrastructure where one vendor rides herd over all others.

    Just be aware that even in the software-defined universe, things like plug-and-play, interoperability and universal automation are not likely to be had without some concessions to the leading vendor platforms.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
