The buzz surrounding Facebook’s Open Compute Project is increasing, with some predicting that it will reshape the entire enterprise-vendor relationship and throw a monkey wrench into longstanding sales and distribution channels.
But while the initiative’s accomplishments so far are impressive, they are not as earth-shattering as they appear, at least not yet.
Facebook is well-known for skipping traditional hardware and software channels, opting instead for purpose-built platforms of its own design that are said to be more efficient and easier to deploy and maintain. The company opened its platform to the general enterprise community about two years ago as the Open Compute Project, drawing a number of backers from the very same vendor community that was cut out of the loop as Facebook was building its own infrastructure.
The most recent development is the introduction of a new set of network specifications that would produce a range of generic devices capable of working across multiple operating systems. This would allow much more flexibility in network designs while cutting into the lucrative product lines that Cisco and Juniper have built around their proprietary software platforms. The specs, submitted by Intel, Mellanox, Broadcom and Cumulus, cover top-of-rack devices and boot software that works across third-party platforms, and are currently under review by the group’s Contribution Committee.
At the same time, Penguin Computing is coming out with the Tundra Open HPC clustered architecture, which packs up to 108 HPC servers—three nodes per 19-inch rack unit—into the OCP’s 42U Open Rack form factor. Using Intel Xeon E5-2600 v2 processors, such a configuration would yield more than 40 TFLOPs of peak performance with about 50TB of RAM. And since the Open Rack design features shared power and networking, as well as easy access for hardware servicing, the platform is expected to fuel the push toward exascale infrastructure, ultimately driving ultra-dense configurations into the service provider and even top-end enterprise markets.
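The 40 TFLOPs figure is plausible as a theoretical peak. A rough sanity check, assuming dual-socket nodes with a hypothetical 10-core E5-2600 v2 part at 2.5 GHz executing 8 double-precision FLOPs per core per cycle via AVX (all per-chip figures here are illustrative assumptions, not published Tundra specs):

```python
# Back-of-the-envelope peak-FLOPS estimate for a fully populated
# 108-server Tundra rack. Per-chip figures below are assumptions
# chosen to represent a mid-range Xeon E5-2600 v2 SKU.

servers = 108            # three nodes per rack unit, 36 units populated
sockets_per_server = 2   # assumed dual-socket nodes
cores_per_socket = 10    # assumed 10-core part
clock_ghz = 2.5          # assumed base clock
flops_per_cycle = 8      # AVX: 4-wide double-precision multiply + add

# GFLOPs per core = clock (GHz) x FLOPs/cycle; sum over all cores,
# then convert GFLOPs to TFLOPs.
peak_tflops = (servers * sockets_per_server * cores_per_socket
               * clock_ghz * flops_per_cycle) / 1000.0
print(f"Theoretical peak: {peak_tflops:.1f} TFLOPs")  # about 43 TFLOPs
```

Under these assumptions the rack lands just above the 40 TFLOPs mark, which matches the article’s claim; real sustained performance would of course be lower.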
A key component in all this, however, is the interconnect. Massively scaled-out infrastructure is only as good as its ability to harness the collective power of individual components. To that end, LSI Corp. has introduced the new Nytro XP6200 series of PCIe Flash cards designed specifically for OCP environments. The cards utilize the company’s SandForce controller and proprietary host-offload architecture to handle I/O and Flash management tasks, which the company says improves performance consistency for read-intensive applications and provides a 30 percent boost in power efficiency.
The thing to remember about all this is that OCP architectures can only be effectively implemented at hyperscale proportions. This benefits top-tier enterprises like Facebook and Google, which use their own proprietary infrastructure, as well as large multinationals like GM and Coca-Cola that consume huge quantities of data hardware every year. But garden-variety enterprises will still be better off with traditional vendor platforms, given that current distribution channels provide the economies of scale that keep costs low, in exchange for giving up a certain amount of flexibility in overall infrastructure design.
Of course, this assumes that small and midsize organizations will continue to build and maintain internal infrastructure despite the economic pressure to pull more resources from the cloud. And this isn’t necessarily a safe assumption. Hardware spending for owned-and-operated data centers is on the decline even while cloud providers are bulking up.
And if the IT industry is truly transitioning to a utility-style architecture, in which data services are delivered from regional, exascale facilities, then open, interoperable and highly modular infrastructure will become the new order of the day.