The inner workings of the data infrastructures of the top cloud providers have long been a source of fascination in the IT industry, primarily because they eschew traditional vendor designs in favor of a high degree of customization. But does this represent the future of the data center, or merely a side development that caters to Web-facing operations?
Facebook has made no secret of its open source initiative, with the intention of playing a leading role in remaking the data center for the 21st century. The company has even gone so far as to launch the Open Compute Project Foundation, which aims to turn its data center architecture into an industry template, distributed free of charge. The company claims a nearly 40 percent power reduction compared to previous designs, leading to a 25 percent cut in operating expenses. It is interesting to note that while both Google and Facebook have custom-built the vast majority of their hardware infrastructures, Google has operated largely in secret.
But that may be changing. After years of speculation about what lies behind its data center walls, Google has begun talking about the design and philosophy behind what is probably the largest data infrastructure on the planet. The company chose a recent Phoenix trade show to reveal its development timelines and the thinking behind each phase, ranging from the largely containerized model of 2005 to more standardized infrastructure intended to meet ISO and OHSAS certification requirements. Along the way, the company embraced scale-out and energy-efficient technologies in a bid to both increase productivity and lower operational costs.
Meanwhile, many of the newest cloud service firms are taking a completely different tack when it comes to the data center — namely, using someone else’s. Storage provider Box, in fact, relies primarily on co-location facilities from Equinix for its Box Accelerator application, bolstered by Amazon Web Services when a quick resource boost is needed. The company says it prefers Equinix for its strong carrier network support and global coverage, which includes data centers in Chicago, northern Virginia, Amsterdam, Sydney, Tokyo and Hong Kong. Box says this arrangement satisfies its requirements for security, agility, scalability and reliability at a much lower cost than building its own brick-and-mortar data facilities.
Web- and cloud-based services, however, require a high degree of flexibility to keep pace with dynamic mobile and collaborative data environments. For traditional enterprise purposes, there is still plenty of room within existing data architectures to incorporate both virtual and cloud platforms in support of evolving enterprise applications.
So in the final analysis, yes, there is a new breed of data center on the horizon, but this does not mean it is time to put the old data center out to pasture just yet.