Weighing the Pros and Cons of Commodity Infrastructure


    Data infrastructure built on commodity hardware has a lot going for it: lower costs, greater flexibility, and the ability to scale rapidly to meet fluctuating workloads. But simply swapping out proprietary platforms for customizable software architectures is not the end of the story. Much more must be considered before organizations get anywhere close to the open, dynamic data environments most of them are striving for.

    The leading example of commodity infrastructure is Facebook, which recently unveiled plans for yet another massive data center in Europe – this time in Clonee, Ireland. The facility will utilize the company’s Open Compute Project framework, which relies on advanced software architectures atop low-cost commodity hardware and is now available to the enterprise community at large in the form of a series of reference architectures that are free for the asking. The idea is that garden-variety enterprises and cloud providers will build their own modular infrastructure to support the kinds of abstract, software-defined environments needed for Big Data, the Internet of Things and other emerging initiatives.

    The problem, though, is that few organizations can match the prowess of Facebook. If you are large enough to order commodity systems at the wholesale level and can maintain the in-house expertise to deploy, configure and operate such hyperscale constructs on an ongoing basis, then OCP may be for you. Otherwise, you’re going to need help, which more than likely will come from the same vendor community that helped build the static, silo-based infrastructure you’re trying to supplant.

    This is why platforms like IBM’s Power portfolio are making a comeback. In 2013, the company launched the OpenPOWER Foundation, which allows third-party vendors to build hardware for the Power architecture on a commodity basis. With top-end partners like Nvidia and Mellanox on board, the Power platform is now positioned to tap the same scale-out workloads that OCP is gunning for, except in a more integrated, easily configurable form with key value-adds like service and support from long-time enterprise partners. This is probably the reason IBM recently reported improved quarterly results for the Power line after nearly two years of declining sales. (Disclosure: I provide writing services to IBM’s Point B and Beyond blog site.)

    But simply putting the hardware and software in place is not enough to make a go of commoditized data infrastructure. Penguin Computing is staking a claim against top vendors like Dell and HPE by guiding the enterprise through the myriad challenges that come with optimizing deployments for scale-out workloads. This can encompass tasks as basic as job scheduling, which, if not done properly, can bring a deployment to its knees as messaging and compilation requests scale to astronomical levels. Commodity infrastructure may be based on generic hardware, but it can still prove ineffectual if not tailored to specific workloads.

    And it might not even be the best solution for highly scalable data environments, says Compuware CEO Chris O’Malley. The fact is that the installed base of mainframes currently handles more transactions per day than Google, Twitter, YouTube and Facebook combined, and these legacy systems are second to none when it comes to reliability, security and availability. Cloud-based, distributed architectures also tend to succumb to spiraling costs as workloads ramp up, while mainframes under monthly license charge (MLC) pricing are highly effective at keeping the tab in check, even at scale. As Compuware is demonstrating within its own data infrastructure, the most cost-effective hybrid solution will likely pair on-premises mainframes with the cloud, rather than rely on distributed commodity infrastructure.

    If one thing can be said of the evolution of data infrastructure, it’s that it is changing from a homogeneous, monolithic entity to a multi-faceted, dynamic environment that provides a range of hardware and software options in support of the optimal data experience. While it may be tempting to embrace the old “build it and they will come” mentality, these days the wiser course is to figure out what you need to get done first, and then select the right infrastructure, architecture and application sets to accomplish that goal.

    The difference may be only a split-second in terms of performance, but increasingly, that will be all that separates the winners from the losers.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

