Hyperscale data infrastructure is said to be the most efficient ever devised. It has to be if designers hope to manage all that data without blowing out electrical circuits or frying critical components.
To that end, the industry has remade virtually everything that was once held sacred in the data center – from the smallest processor to the largest rack configuration. The results, so far, have been astounding, with top Web-facing firms like Google and Facebook able to field data infrastructure at scales that were inconceivable just a few short years ago.
But the development cycle is not over. In fact, it’s hardly begun, and it’s likely that we’ve only just scratched the surface regarding both scale and density.
A key consideration is cooling. As devices become smaller and more modular, effectively cooling critical components becomes both more important and more challenging. According to Frost & Sullivan, the market for data center cooling equipment is on pace to nearly double by 2018 to almost $2 billion, with much of the growth coming from greenfield deployments. In fact, retrofitting existing plant with high-efficiency cooling is seen by many as a losing proposition, since much of that legacy infrastructure is not suited to future software-defined operations anyway.
As hardware configurations become increasingly dense, however, traditional air cooling tends to produce diminishing returns. That's why some leading organizations are turning to liquid-cooled technologies, which provide more effective heat exchange and can be more easily directed at critical components. Allied Control's newest facility in Hong Kong, for example, uses a liquid cooling system that can support loads up to 225 kW per rack – more than 10 times what it currently deploys across its 24-rack architecture. The company is offering the design as a commercial product, dubbed Immersion-2, aimed at ASIC-based containerized infrastructure.
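To put those rack figures in rough perspective, a back-of-the-envelope calculation helps. The sketch below treats the article's "more than 10 times" as exactly 10x, so the derived numbers are estimates, not Allied Control's actual deployment data.

```python
# Back-of-the-envelope check of the rack power densities cited above.
# The 225 kW and 24-rack figures come from the article; the "current"
# per-rack load is inferred from the "10 times" claim, so it is an
# upper-bound estimate, not a reported number.

MAX_RACK_LOAD_KW = 225      # supported load per rack (Immersion-2)
DENSITY_MULTIPLE = 10       # "more than 10 times" current deployment
NUM_RACKS = 24              # racks in the Hong Kong facility

current_per_rack_kw = MAX_RACK_LOAD_KW / DENSITY_MULTIPLE  # <= 22.5 kW
current_total_kw = current_per_rack_kw * NUM_RACKS         # <= 540 kW
max_total_kw = MAX_RACK_LOAD_KW * NUM_RACKS                # 5,400 kW

print(f"Implied current load per rack: <= {current_per_rack_kw:.1f} kW")
print(f"Implied current facility load: <= {current_total_kw:.0f} kW")
print(f"Supported facility ceiling:    {max_total_kw:.0f} kW (5.4 MW)")
```

Even the implied current density of roughly 20 kW per rack sits well above what conventional air cooling comfortably handles, which underlines why the 225 kW ceiling requires liquid immersion.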
Not everyone is ready for the bleeding edge, however, which is why many manufacturers are offering hyperscale-optimized components in familiar footprints. LSI, for one, was quick to recognize that the heat generated in highly dense configurations is enough to melt even the hardiest solid-state storage components. In response, the company devised the Nytro XP6200 series PCIe card, which does not require its own heat sink to remain operational. As LSI system and processor architect Robert Ober explained to SiliconANGLE, disaggregating components in highly dense environments is key to effectively managing devices that have different temperature thresholds.
On a larger scale, new rack designs are vital to maintaining functional temperatures amid tightly packed systems. Dell recently released the G5 rack for hyperscale environments, which measures 52U in height and can hold up to 120 one-third-width server nodes. It also features the Infrastructure Manager software stack, which tracks key metrics like airflow, CPU and memory temperature, and system utilization to give facility operators rack-level management.
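The kind of rack-level rollup a tool like Infrastructure Manager performs can be sketched in a few lines. To be clear, Dell's actual API is not documented here; every name and threshold below (`NodeReading`, `rack_summary`, the 85 °C limit) is invented purely for illustration.

```python
# Hypothetical sketch of rack-level metric aggregation, in the spirit of
# rack-management stacks like Dell's Infrastructure Manager. All names
# and thresholds are invented; they do not reflect Dell's actual API.
from dataclasses import dataclass

@dataclass
class NodeReading:
    node_id: str
    cpu_temp_c: float     # CPU temperature
    mem_temp_c: float     # memory temperature
    airflow_cfm: float    # airflow through the node
    utilization: float    # 0.0 - 1.0

def rack_summary(readings, cpu_limit_c=85.0):
    """Aggregate per-node readings into a rack-level view and flag
    any node whose CPU temperature exceeds the (assumed) limit."""
    hot = [r.node_id for r in readings if r.cpu_temp_c > cpu_limit_c]
    n = len(readings)
    return {
        "nodes": n,
        "avg_cpu_temp_c": sum(r.cpu_temp_c for r in readings) / n,
        "avg_utilization": sum(r.utilization for r in readings) / n,
        "total_airflow_cfm": sum(r.airflow_cfm for r in readings),
        "hot_nodes": hot,
    }

readings = [
    NodeReading("n01", 72.0, 48.0, 110.0, 0.65),
    NodeReading("n02", 88.5, 51.0, 95.0, 0.92),  # over the assumed limit
]
print(rack_summary(readings)["hot_nodes"])  # ['n02']
```

The point of such aggregation is that operators manage at rack granularity, acting on flagged nodes rather than watching 120 individual telemetry streams.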
Scaling infrastructure up and out is the only way to accommodate burgeoning data loads, but as hardware footprints grow, so do the operational challenges. Increased density inevitably means increased heat generation, and at the same time it becomes harder to deliver the cooling medium to critical components.
That’s the main reason why hyperscale is not just a bigger, better data center, but an altogether new one.