That the data center will have to evolve in order to keep up with changing application and data workloads is a given at this point. Static, silo-based architectures simply lack the flexibility that knowledge workers need to compete in a dynamic data economy.
But exactly how will this change be implemented? And when all is said and done, what sort of data center will we have?
According to a company called Mesosphere, the data center will become the new computer. The firm provides management software that helps hyperscale clients like Google and Twitter coordinate and pool resources across diverse application loads. By giving developers command-line and API access to the compute cluster, the Apache Mesos-based platform enables broad deployment and scalability without the need for direct IT involvement. It also automates numerous low-level support tasks, essentially letting users call up applications or save data in the data center the way they do on a PC: click the icon and let the system figure out the best way to handle it.
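In practice, "click the icon and let the system figure it out" means the developer describes *what* to run and the cluster manager decides *where*. A minimal sketch of that interaction, assuming a Marathon-style REST scheduler endpoint (the URL, field names, and app ID below are illustrative, not Mesosphere's exact schema):

```python
import json
import urllib.request

def app_definition(app_id, cmd, cpus=0.5, mem_mb=256, instances=3):
    """Build a Marathon-style JSON app definition. The developer states
    resource needs and instance count; placement is left to the cluster."""
    return {
        "id": app_id,
        "cmd": cmd,
        "cpus": cpus,
        "mem": mem_mb,
        "instances": instances,
    }

def submit(definition, master_url="http://marathon.example.com:8080"):
    """POST the definition to a hypothetical scheduler endpoint; the
    cluster manager picks machines and launches the instances."""
    req = urllib.request.Request(
        master_url + "/v2/apps",
        data=json.dumps(definition).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Describe a five-instance job; no machine names appear anywhere.
spec = app_definition("/analytics/report-gen", "python report.py", instances=5)
print(json.dumps(spec, indent=2))
```

Note what is absent from the definition: hostnames, racks, storage paths. That omission is the whole point of treating the data center as one computer.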
This data-center-as-computer analogy extends even further when you consider that the cloud is the new networked enterprise. Google recently unveiled a cross-region HTTP load balancer that allows data environments to interact seamlessly across multiple data centers. The service, which has not yet made it into production environments, lets developers target traffic to users in key geographic regions and build content-aware load balancing that forwards HTTP requests to instances optimized for the particular load. In this way, enterprises can more easily tailor a distributed environment to specific use cases even if users and resources are separated by great distances.
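The load balancer described above is making two decisions per request: a geographic one (which region's pool is closest to the client) and a content-aware one (which instance group is optimized for this kind of request). A minimal sketch of that routing logic, with hypothetical region and pool names rather than Google's actual configuration:

```python
# Backend pools keyed by (region, content class). All names are illustrative.
BACKENDS = {
    ("us-east", "static"): "us-east-cache-pool",
    ("us-east", "api"):    "us-east-compute-pool",
    ("eu-west", "static"): "eu-west-cache-pool",
    ("eu-west", "api"):    "eu-west-compute-pool",
}

def classify(path: str) -> str:
    # Content-aware step: classify the request by its URL pattern.
    return "static" if path.startswith(("/img/", "/css/", "/js/")) else "api"

def route(client_region: str, path: str) -> str:
    # Geographic step: prefer a pool in the client's region,
    # falling back to us-east when the region isn't served.
    kind = classify(path)
    return BACKENDS.get((client_region, kind), BACKENDS[("us-east", kind)])

print(route("eu-west", "/img/logo.png"))  # eu-west-cache-pool
print(route("eu-west", "/v1/search"))     # eu-west-compute-pool
print(route("ap-south", "/v1/search"))    # us-east-compute-pool (fallback)
```

A real cross-region balancer layers health checks, capacity weighting, and anycast IPs on top of this, but the two-step decision is the core of the idea.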
At the same time, Microsoft is toying with the idea of deploying field programmable gate arrays (FPGAs) across more than 1,600 servers as a way to improve results for the Bing search engine. The company reports that initial tests show a 40-fold speed improvement from augmenting standard Xeon processors with Altera FPGA chips and offloading key algorithms to the programmable silicon. With this architecture, which Microsoft hopes to roll out en masse early next year, the company could support Bing processes with roughly half the servers it uses today.
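The arithmetic linking a 40x stage speedup to a halved server fleet is just Amdahl's law: only the offloaded fraction of each request accelerates. A back-of-the-envelope sketch, assuming (purely for illustration, not from Microsoft's published profile) that about 51% of per-request work is the offloadable ranking stage:

```python
import math

def overall_speedup(offload_fraction: float, stage_speedup: float) -> float:
    # Amdahl's law: the non-offloaded work runs at the old speed.
    return 1.0 / ((1.0 - offload_fraction) + offload_fraction / stage_speedup)

def servers_needed(current_servers: int, speedup: float) -> int:
    # Same aggregate throughput from faster servers means fewer of them.
    return math.ceil(current_servers / speedup)

# Hypothetical numbers: 51% of the work sped up 40x on the FPGA.
s = overall_speedup(0.51, 40.0)
print(round(s, 2))              # per-server throughput roughly doubles
print(servers_needed(1000, s))  # a 1,000-server fleet shrinks to about half
```

The sketch also shows the flip side: if only 10% of the work were offloadable, even a 40x chip would buy barely a 1.1x gain, which is why the choice of which algorithms to move into silicon matters more than the raw speedup.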
These and other developments are already causing IT executives to wonder what the data ecosystem will look like in 10 years. According to Emerson Network Power, modular, automated, hyperscale infrastructure based largely on commodity hardware will be the dominant trend. In the company’s Data Center 2025 survey, respondents cite as a key driver of this transformation the need not only to store large amounts of data but also to access and retrieve it quickly. They also foresee a growing need to share data across multiple infrastructures and a declining need to own data center assets outright, in favor of delivering service through a more utility-based approach.
Of course, a lot can happen in 10 years, so we can expect needs and desires to change as technology advances. Heck, with all the recent developments in quantum computing, we’ll probably all have our own data centers implanted in our brains before long.
In the meantime, it’s important to remember that simple capacity expansion or server upgrades should no longer be viewed merely as a means to improve what you already have. Rather, they should lay the foundation for a complete reimagining of what data centers and the data environment can do and how they should operate.
The way forward is not crystal clear, but at least the broad outlines are starting to come into focus.