The data center is quickly moving toward hyperscale architectures, the result of both advancing technologies and economic forces weighing on the enterprise.
The question, though, is not whether hyperscale deployments will increase in number, or even come to dominate the IT industry, but whether the owned-and-operated data center model will simply become too burdensome for the vast majority of organizations.
On the economic front, it’s hard to argue against the hyperscale model. As Google, Facebook, Amazon and others have proven, volume hardware and software deployments can reach the point at which a single buyer becomes a channel in itself—that is, the company consumes in such volumes that it can custom-order its own platforms directly from the chip- and board-level suppliers that cater to the big OEMs. And in the case of Facebook, these designs are starting to trickle into the IT industry at large through initiatives like the Open Compute Project.
This, in turn, has put the large IT vendors on notice that the old ways of doing business are under threat. IBM, for one, is leveraging its top-end Power architecture for hyperscale deployments through the OpenPOWER Consortium, drawing such disparate participants as Mellanox, Google and Taiwanese motherboard designer Tyan. The seriousness with which IBM is taking the hyperscale challenge is evident in its agreement to open up the POWER architecture, and even license the underlying POWER IP, to third-party developers.
But don’t get the idea that hyperscale can be done only with top-end systems like the POWER architecture. In fact, hyperscale is seen as the main driver for the low-power microservers now hitting the channel. Built on low-power processors like ARM or Intel’s Atom, and housed in either small form-factor hardware or even an SoC package, microservers offer the twin benefits of rapid scalability and the ability to handle many simultaneous small-packet transactions, the kind of workload that characterizes large Web-facing organizations like Google and Facebook. In this way, organizations not only gain ready access to an infrastructure better geared toward the mobile, collaborative environments favored by today’s knowledge workforce, but can do so at a fraction of the cost it took to build today’s legacy data centers.
For the most part, though, it seems that the IT industry is willing to embrace both hyperscale and traditional data center models. HP, for one, seems intent on covering all its bases with new generations of its ProLiant servers for small, medium and large organizations, plus Atom-based microservers as part of its Project Moonshot family due out next year. Ultimately, even the microserver lines could adopt some of the trappings of standard servers, such as dual NICs and out-of-band management capabilities.
In the end, it could very well be that, as IT consultant David Strom expects, traditional and hyperscale data centers will end up serving different roles in the enterprise, and therefore require different hardware and software configurations. But many enterprises are feeling the strain of running their own IT organization like never before, and the lure of hosted infrastructure is growing stronger all the time.
That dynamic will only gain speed as more data makes its way to the cloud, forcing traditional hardware manufacturers to raise their profit margins as volume sales dwindle. This, in turn, will drive the cost of owned-and-operated infrastructure even higher, leading to still more outsourcing.
Few people intend for this to happen, but once on a roll, market forces tend to go where they will, and at best, the internal data center could wind up a mere shell of what it is now.