For enterprises looking to scale resources to cloud and Big Data levels, commodity hardware is a tough proposition to turn down. It’s cheap, flexible and well suited to the software-defined, dynamic resource configuration that next-generation platforms and services will demand.
But does that necessarily mean it is the best option in all cases or for all enterprises? The devil, as they say, is in the details, which means data executives will have to think long and hard about what they want to do with their new infrastructure before they start buying the pieces to put it together.
Companies like EMC certainly see the value of commodity infrastructure, particularly when it is their hardware you build it on. The company’s new ScaleIO Node is designed to support the same scale-out, server-SAN architectures that large cloud providers have built using ODM hardware, but without the architectural and integration issues that go with them. As a fully packaged offering, ScaleIO promises a pre-validated, tested and configured solution right out of the box, which speeds up the deployment process and requires far less in-house maintenance expertise than an all-software solution.
The decision to go with a full commodity solution or a vendor platform usually comes down to the level of customization required. As Data Center Knowledge’s Bill Kleyman points out, a standard server chassis can be outfitted with a mix of solid-state and hard-disk drives depending on the desired throughput, while new generations of hyper-converged virtual controllers can deliver peak performance and capacity in support of advanced cloud architectures. At the same time, dynamic policy management is becoming easier to implement, particularly in abstracted data environments that encapsulate workloads in portable operating environments so they can be distributed across diverse hardware configurations.
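To make that idea concrete, here is a minimal sketch, in plain Python and using entirely hypothetical node and workload names, of what policy-driven placement across mixed commodity hardware might look like: workloads declare their throughput and capacity needs, and a simple policy matches them to SSD or HDD tiers. This is an illustration of the concept, not any vendor's actual implementation.

```python
# Hypothetical illustration (not any vendor's API): a simple policy engine that
# matches workload requirements against heterogeneous commodity nodes.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    media: str          # "ssd" or "hdd"
    iops: int           # rough sustained IOPS the node can deliver
    capacity_tb: int

@dataclass
class Workload:
    name: str
    min_iops: int
    needed_tb: int

def place(workload: Workload, nodes: list[Node]) -> Node | None:
    """Pick the cheapest fit: prefer HDD capacity tiers unless the
    workload's IOPS floor forces it onto flash."""
    candidates = [n for n in nodes
                  if n.iops >= workload.min_iops and n.capacity_tb >= workload.needed_tb]
    # Favor hard-disk nodes first (lower cost per TB), then the smallest node that fits.
    candidates.sort(key=lambda n: (n.media != "hdd", n.capacity_tb))
    return candidates[0] if candidates else None

nodes = [
    Node("node-a", "hdd", 2_000, 48),
    Node("node-b", "ssd", 80_000, 12),
]

print(place(Workload("analytics-scratch", min_iops=1_000, needed_tb=20), nodes).name)  # node-a
print(place(Workload("oltp-db", min_iops=50_000, needed_tb=4), nodes).name)            # node-b
```

The same policy logic applies whether the hardware underneath is white box, ODM or a packaged vendor appliance; that portability is precisely what makes the commodity approach attractive.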
This is part of the reason why leading IT vendors are warming up to third-party software platforms. HP recently added support for the Pica8 PicOS on its Altoline switch portfolio, offering the same OpenFlow 1.4 Layer 2 and Layer 3 capabilities that are emerging in many white box solutions. The deal allows PicOS to be integrated into HP’s Virtual Application Networks (VAN) controller, which should enable improved deployment and control of software-defined networks and act as an alternative to fully integrated solutions like Cisco’s Application Centric Infrastructure.
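For readers unfamiliar with what an OpenFlow-style controller actually pushes to a switch, the following is a conceptual sketch in plain Python of a Layer 2 forwarding rule plus a table-miss entry. The field names are illustrative only; they are not the PicOS or HP VAN controller API.

```python
# Conceptual sketch only: a plain-Python representation of the kind of Layer 2
# forwarding rule an OpenFlow 1.x controller installs on a switch.

def l2_forward_rule(dst_mac: str, out_port: int, priority: int = 100) -> dict:
    """Build a flow entry that forwards frames for a known MAC to a specific port."""
    return {
        "priority": priority,
        "match": {"eth_dst": dst_mac},              # match on destination MAC
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "idle_timeout": 300,                        # expire if the host goes quiet
    }

# A low-priority table-miss entry that punts unknown traffic to the controller
# so it can learn where new hosts live and install rules for them.
table_miss = {
    "priority": 0,
    "match": {},
    "actions": [{"type": "OUTPUT", "port": "CONTROLLER"}],
}

flow_table = [l2_forward_rule("aa:bb:cc:dd:ee:01", out_port=3), table_miss]
for entry in flow_table:
    print(entry)
```

The value of pairing a third-party network OS with a vendor controller is that the same rule semantics can be driven across white box and branded switches alike.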
All of these issues become paramount when the enterprise starts to build out its own Big Data support infrastructure. When scale is pushed to the extreme, small differences in cost and complexity can make a big difference to the bottom line. That is why Arcadia Data’s David M. Fishman says organizations should weigh the claims that both proprietary hardware vendors and open-source software providers make about simplifying and converging infrastructure for large Hadoop clusters and other Big Data initiatives. Simple is best in most circumstances, but organizations should avoid oversimplifying the hardware only to end up with exorbitant complexity in the software.
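A back-of-the-envelope model makes the point. The figures below are made up purely for illustration, but they show how a modest per-node difference in price or operational overhead compounds across a large cluster over a few years.

```python
# Illustrative numbers only: how per-node cost and admin-effort differences
# compound at cluster scale over a multi-year horizon.

def cluster_cost(nodes: int, capex_per_node: float, admin_hours_per_node_yr: float,
                 hourly_rate: float = 85.0, years: int = 3) -> float:
    """Rough total cost over the period: hardware purchase plus operations labor."""
    capex = nodes * capex_per_node
    opex = nodes * admin_hours_per_node_yr * hourly_rate * years
    return capex + opex

nodes = 500
diy_commodity  = cluster_cost(nodes, capex_per_node=6_000, admin_hours_per_node_yr=12)
packaged_system = cluster_cost(nodes, capex_per_node=7_500, admin_hours_per_node_yr=4)

print(f"DIY commodity:   ${diy_commodity:,.0f}")
print(f"Packaged system: ${packaged_system:,.0f}")
print(f"Difference:      ${abs(diy_commodity - packaged_system):,.0f}")
```

Depending on the assumptions you plug in, either approach can come out ahead, which is exactly why the vendor claims deserve scrutiny rather than a reflexive choice in either direction.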
Building new infrastructure from scratch is never fun, but at least today’s commodity solutions—even the proprietary-commodity systems from EMC, Cisco and others—offer an easier path to scale-out architectures than the silo-inducing solutions of the past.
The result will be a more streamlined, yet highly flexible, data center that is more apt to foster advanced data initiatives than inhibit them.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.