
Even in the SDDC, There’s Open and Then There’s Open


Written By
Arthur Cole
Feb 7, 2014

The software-defined data center (SDDC) is certain to achieve new levels of functionality by severing the last formal ties between hardware and software, but it will nonetheless require a great deal of coordination on the physical layer to make it happen.

The key question going forward, however, is how to do it. The great debate underway at the moment is an oldie but a goodie: Do you want a vendor-centric approach in which upper-level software sits on custom ASICs in server, storage and network devices, or do you build a generic hardware layer using open, commodity components? If we think of the data center as a PC on a global network, it’s the same Microsoft vs. Apple dispute all over again.

For those of you who lean toward the open side of things, the idea of numerous multivendor components all humming away in perfect harmony is very appealing. Infrastructure costs should be a fraction of what they are in today's data center, and as long as everything conforms to common standards like OpenFlow and OpenStack, there should be no problem. This is the ideal that open platform providers like Red Hat are presenting: in short, the end of proprietary infrastructure as we know it.
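
To see what that vendor neutrality looks like in practice, here is a minimal sketch of provisioning a server through OpenStack's standard Compute API with the Python openstacksdk client. The cloud, image, flavor and network names below are placeholders for whatever a given deployment exposes; the point is that the same call works regardless of whose hardware sits underneath.

```python
import openstack

# Connect using an entry from clouds.yaml; "mycloud" is a placeholder name.
conn = openstack.connect(cloud="mycloud")

# Look up an image, flavor and network by name (illustrative names).
image = conn.compute.find_image("ubuntu-server")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Provision a server through the standard Compute API. The request is
# identical whether the hypervisors sit on x86, Atom or ARM boxes from
# any vendor -- which is the whole point of the open standard.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```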

A deeper dive into the technology, however, suggests that even open platforms will not be as vendor-independent as they seem. On the silicon level, for instance, SDDC environments will likely incorporate key architectures like Intel’s SDI (software-defined infrastructure). This is not necessarily a bad thing, as the format is slated to conform to leading open standards, and is even positioned to appear as a major component in Facebook’s Open Compute Project. But it is nonetheless unsettling that some tech analysts like Patrick Moorhead are already referring to SDI as “the one architecture to rule them all.”

What does that mean? In Intel’s vision, open hardware works best as the building blocks of software-defined architectures. The easier it is to put the blocks together, the faster the enterprise can achieve the scale required for Web-facing operations and other high-volume, high-speed functions. All the enterprise needs to do is make sure that every processor in the data ecosystem, spanning servers, storage systems and network components, is compatible (SDI would be a convenient way to link legacy x86-based infrastructure with new Atom-powered devices), and the rest should be easy. What if someone wants to deploy a non-SDI-compliant device that nonetheless supports OpenFlow and/or OpenStack? That will probably be OK, but you might not gain the full functionality of an end-to-end SDI infrastructure.
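
The same logic holds on the network side. Here is a minimal sketch using the Ryu OpenFlow controller framework: the app (the class name is illustrative) installs a table-miss rule on any OpenFlow 1.3 switch that connects to it, whether or not the silicon underneath happens to be SDI-compliant.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class OpenFabricApp(app_manager.RyuApp):
    """Hypothetical app: treats every OpenFlow 1.3 switch the same way."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Install a table-miss flow that sends unmatched packets to the
        # controller -- the same FlowMod works on any conformant switch,
        # regardless of the vendor's silicon.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```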

This is part of what Intel was talking about last summer when it sent top executives on the lecture circuit with visions of a re-architected data center. The company is pursuing a range of technologies designed to help the enterprise conform data environments to the needs of users, rather than the other way around. These include everything from new rack designs and HPC systems to updated Atom processors and new Broadwell Xeons. Behind it all is the goal of converting static infrastructure into dynamic, automated ecosystems capable of bringing disparate resources to bear on any given workload.

The elephant in the room, of course, is whether Intel has the edge in building scale-out data infrastructure, or whether that edge will go to the rival ARM architecture. I have yet to see an end-to-end SDN or DCCS architecture from any of the ARM designers, but that isn’t necessarily surprising. In fact, the ARM community may very well restrict itself to OpenFlow and OpenStack, producing a higher degree of vendor flexibility in hardware buying decisions, but perhaps at the expense of higher integration costs.

Again, there is nothing inherently wrong with either option. And indeed, there will probably be varying degrees of openness in all environments, from loose federations of devices to highly architected infrastructure where one vendor rides herd over all others.

Just be aware that even in the software-defined universe, things like plug-and-play, interoperability and universal automation are not likely to be had without some compromises to the leading vendor platforms.
