In Cloud Infrastructure, It’s Open vs. Proprietary All Over Again


Of all the divergent paths that data center architectures could take in the coming years with the advent of virtualization, the cloud, SDN and all the rest, it is somewhat incongruous that decisions about physical-layer infrastructure should come down to two primary camps: proprietary vs. commodity.

    These two approaches have been battling for enterprise hearts and minds for some time, but these days the argument isn’t so much over costs and capabilities as it is about how best to lay the foundation for the advanced, dynamic architectures that are coming the way of IT.

    Take Oracle, for example. The company has long championed the tight integration of hardware and software as the best means to provide optimal data performance, so much so that its initial reaction to the cloud was rather dismissive. These days, though, the company is all about the cloud and other advanced architectures, provided they reside on an integrated platform like the M6 cluster or the Exadata Database Machine. With both hardware and software working in conjunction, the argument goes, the enterprise will gain a higher level of productivity than is available through conglomerations of commodity boxes running open source systems.

Even companies that have embraced open protocols like OpenFlow and OpenStack can still work a proprietary angle into their plans. For instance, Cisco has taken the lead in cloud infrastructure deployments, primarily through its Open Network Environment (ONE) portfolio, which includes many of the leading open source frameworks of the day. Of course, organizations that deploy ONE using Cisco hardware, such as the Nexus 6000, gain access to proprietary, programmable ASICs that allow for optimal performance through advanced mapping and other techniques. In other words, you can still integrate non-Cisco technology into your ONE environment, but it won't perform as well as it would on Cisco hardware.

Open source purists, however, say that is a distinction without a difference. Smaller networking firms like Pica8 are banking on the notion that the user community has grown tired of the big-vendor lock-in strategies that have hampered enterprise flexibility and is finally ready for a truly open, fully federated data infrastructure. As CEO James Liao explained at the recent J.P. Morgan SDN Forum, only by completely decoupling software from the underlying hardware will the enterprise be able to capitalize on the opportunities that SDN, and cloud computing in general, provide. With enterprise architectures becoming increasingly distributed over the cloud anyway, why spend all that money on not-completely-open Cisco and Oracle platforms that probably won't be replicated across the wider cloud ecosystem?

But it’s not just the hungry start-ups that are touting open source solutions on commodity hardware. Top-tier vendor IBM has reinforced its commitment to Linux by investing close to $1 billion in the development of open Power Systems servers and the opening of a new development center in Montpellier, France. It’s hard to describe the Power line as commodity servers, but if enterprises and cloud providers increase their demand for top-flight machines to run open source, cloud-ready application and service environments, that merely adds a high-end component to an already healthy commodity business for Big Blue.

    In a way, this is reminiscent of the legendary Apple/Microsoft feud of the early PC era in which Apple championed integrated hardware/software solutions while Microsoft favored placing its OS on low-cost, third-party PCs. This time, however, we are operating on a much larger scale with the data center acting in place of the new PC and the cloud serving as the global data center.

    Microsoft ultimately prevailed in the PC wars, but Apple had the last laugh by reinventing the entire user-data-enterprise relationship through tablets and smartphones. The same could happen again in the cloud, but history does not always repeat itself, and as much as the enterprise likes to control costs and maintain high degrees of flexibility, it also has a responsibility to provide the most advanced data environment it can afford.

And that leads me to believe that the commodity cloud will host the vast majority of applications and services, while proprietary infrastructure—provided it delivers a true value-added solution—will support the tasks that matter the most.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
