Without doubt, the enterprise has embraced a radically new infrastructure based on commodity hardware over the past decade. But will this trend continue as more and more data loads find their way onto the cloud?
Almost certainly, for the simple reason that the cloud is itself a massively scaled-out architecture; whether data sits on premises or on hosted infrastructure, commodity systems offer the cheapest way to make resources available to large numbers of diverse users.
Clearly, the market for commodity servers shows no signs of slowing down. One of the newest arrivals is Lenovo’s ThinkServer TD330, billed as the company’s first true enterprise-class machine. It supports up to 16 cores on the Xeon E5-2400 platform and can hold nearly 2 TB of RAM, along with a range of hard drive, networking and caching options. It also features simplified management and configuration tools to enable rapid deployment for applications like desktop virtualization, transaction processing and distributed workflow management. And it lists for less than $1,000.
Much of the new cloud-ready software is already prepping for a commodity future. Yottabyte’s new Cloud operating system was built from the ground up for deployment not just on commodity servers, but storage, networking and related software components as well. The package features a distributed file system and Web-based interface designed to provision and pool storage resources and then tier them according to data loads. It also allows enterprises to provision, manage and secure resources across public, private and hybrid infrastructure.
Amid all this talk about hardware, however, is it possible that cloud services themselves are becoming commoditized, allowing enterprises to mix and match offerings from a range of third-party providers? That’s the intriguing possibility raised by the Open Data Center Alliance’s new usage models, designed to foster interoperability across services. The plan holds out the prospect of lowering cloud costs even further and making it easier to mash up services to suit increasingly divergent application and data needs. However, as IT News’ Justin Warren points out, the plan will need buy-in from large numbers of providers, who may balk at reducing the uniqueness of their clouds unless there is clear and overwhelming demand from the user community.
Both commodity architectures and the cloud are only a means to an end, however. And for many enterprises, the end is the ability to handle the crushing burden of Big Data. As Jason Collier, CTO of Scale Computing, explained recently to Techworld, widespread load distribution is the only economical way for the average organization to provision the supercomputer-scale environment needed for Big Data analytics. Reams of unstructured data are becoming increasingly valuable to competitive organizations, and scaled-out, massively parallel environments offer a key advantage in turning raw data into usable knowledge.
Still, I’ve mentioned many times in the past that just because a particular branch of technology is tailored for one challenge doesn’t mean it is appropriate for all challenges. Indeed, large organizations will find that the big, powerful servers of old, and mainframes in particular, remain ideally suited to highly virtualized environments and large database workloads. Those functions won’t diminish just because the cloud and Big Data pose a new set of problems.
Commodity systems offer a lot of value, but most organizations will undoubtedly find that diversity of technology will serve them best as the data universe becomes more complex.