Top server vendors seem to have gotten the message when it comes to stripped-down white box solutions, but whether they can make a go of this strange new world is still very much up in the air.
White box is emerging as the go-to hardware platform for cloud-facing, scale-out infrastructure, which is likely to become the dominant form of enterprise computing in the new century. And many of today’s traditional enterprises are in need of a quick way to scale their physical footprints to Big Data levels without breaking the IT budget. But will this be enough for traditional server vendors as they seek to provide commodity (read: low-margin) machines to non-hyperscale buyers largely through existing distribution channels?
To be sure, there is opportunity in the white box market. According to Dell’Oro Group, white box sales to the cloud industry doubled in the fourth quarter of 2014 compared to the same period a year earlier. However, this growth was driven largely by the top buyers, namely Amazon, Facebook, Google and Microsoft, and these machines were sourced primarily from ODMs (Original Design Manufacturers) in the Pacific Rim. The firm estimates that the top providers are buying more servers per quarter than the entire installed base of second-tier providers like Rackspace and eBay. This is also driving activity in other segments like storage and networking as demand increases for improved access and migration to and from the server farm.
But while there is certainly a per-unit cost advantage in buying in bulk from an ODM, the fact remains that the vast majority of the enterprise industry will continue to operate on a smaller scale and will want to maintain a healthy data infrastructure in-house. This is where the so-called branded white box players hope to make a killing.
Supermicro, for one, recently upped its game in commodity blade infrastructure by teaming up with CoreOS to integrate the Tectonic container management system into its white box devices. Tectonic is based on Kubernetes, the container orchestration system developed by Google, which should give Supermicro a strong hand in supporting not just low-cost, scale-out architectures, but the emerging suite of microservices that are starting to fuel sharing and collaborative environments. The blades themselves will sport the latest Intel processors that enable workload isolation and security through Intel’s Trusted Execution Technology (TXT) and Trusted Compute Pools (TCP).
Meanwhile, HP is charting its own course through the white box shoals with the Cloudline portfolio. The devices conform to Facebook’s Open Compute Project and are manufactured by Taiwanese ODM Foxconn. The company estimates that Cloudline machines could be 10 percent to 25 percent cheaper than higher-end ProLiants, depending on how they are configured. To be sure, bulk purchases of similar machines directly from Foxconn or any other ODM would push the price down further, but HP is counting on support from its existing enterprise customer base as it builds advanced architectures atop generic systems. In this way, HP will be able to leverage its long-standing supply chain partnerships while still providing commodity solutions to end users.
Even non-server vendors are turning toward white box solutions as they seek to build modular scale-out architectures. EMC, for example, recently launched the VSPEX BLUE platform, which pairs a commodity server with the VMware EVO:RAIL software stack and EMC’s own management and automation capabilities to provide an easily deployable, end-to-end computing environment. The platform features a diverse set of EMC and VMware tools, such as the RecoverPoint backup system and the Data Protection Advanced application acquired from Isilon. There is also a CloudArray option that enables tiered cloud storage across distributed architectures.
To cloud purists, this is all a huge waste of time. Once the enterprise moves its entire data infrastructure to a third-party provider, the only ones buying infrastructure will be the hyperscale providers – and they have no use for vendor-configured solutions. But if that comes to pass – and that is a big if – the average enterprise will still have a need for highly specialized hosted infrastructure, not all of which can be easily recreated in software.
And in the meantime, today’s platform vendors have every reason to believe the enterprise will want to keep a certain portion of the data load close to home, as long as the local infrastructure supports advanced cloud functionality. The hardware in this setup won’t generate the same revenue streams as in the good old days, but it does provide a means to integrate advanced architectures higher up the stack. And going forward, this is where the real data productivity action will be.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.