
    The Quest for More Flexible Rack-Level Architectures

    Server technology doesn’t inspire much conversation around the IT water cooler these days. With Flash technology taking over the storage farm and network infrastructure about to be “software defined,” it seems that plain old servers, even virtualized ones, just aren’t exciting anymore.

    But that may be about to change. With technologies like server-side memory pushing densities ever higher, leading developers are experimenting with new rack-level architectures that harness the power of multiple devices to meet advancing data requirements in new ways.

    At Intel, for example, the name of the game these days is disaggregation. Through advanced interconnect technologies like silicon photonics, the company is closing in on production models of new “rack-scale” architectures in which currently integrated elements like CPUs, NICs, memory cards and even power supplies can be manipulated as discrete components. This would enable new levels of efficiency by giving users and applications exactly what they need for a given task: no more tying up available cache or bandwidth just because someone needs to access multiple processing cores. And with advanced automation taking care of most of the dirty work, IT staff can focus on higher-order tasks like policy enforcement and systems engineering.
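    To make the idea concrete, here is a minimal sketch of how a rack-scale composer might carve a logical node out of shared pools. The Pool and LogicalNode classes and the resource names are hypothetical stand-ins for whatever interface such a system would actually expose, not Intel’s own tooling; the point is simply that each workload claims only the cores, memory and NIC ports it needs.

        from dataclasses import dataclass

        @dataclass
        class Pool:
            """A rack-level pool of one resource type (hypothetical)."""
            name: str
            capacity: int
            allocated: int = 0

            def claim(self, amount: int) -> int:
                # Refuse to oversubscribe the shared pool.
                if self.allocated + amount > self.capacity:
                    raise RuntimeError(f"{self.name} pool exhausted")
                self.allocated += amount
                return amount

        @dataclass
        class LogicalNode:
            """A node composed for one workload, no bigger than the task needs."""
            cores: int
            memory_gib: int
            nic_ports: int

        def compose_node(pools: dict, cores: int, memory_gib: int, nic_ports: int) -> LogicalNode:
            # Claim exactly what the workload asked for, nothing more.
            return LogicalNode(
                cores=pools["cpu"].claim(cores),
                memory_gib=pools["memory"].claim(memory_gib),
                nic_ports=pools["nic"].claim(nic_ports),
            )

        rack = {
            "cpu": Pool("cpu-cores", capacity=512),
            "memory": Pool("memory-gib", capacity=8192),
            "nic": Pool("nic-ports", capacity=64),
        }

        web_tier = compose_node(rack, cores=8, memory_gib=32, nic_ports=2)
        analytics = compose_node(rack, cores=64, memory_gib=1024, nic_ports=4)
        print(web_tier, analytics)

    Because every claim is drawn from a common pool, capacity left over by one workload remains available to the next one, which is the efficiency argument behind disaggregation.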

    Enterprises looking to get a head start on this kind of intra-rack resource pooling can turn to the PCIe interconnect as the basis for a new networking fabric, according to PLX Technology’s Larry Chisvin. In close quarters, PCIe offers a much higher degree of flexibility than either Ethernet or InfiniBand, since virtually every component in the server already has its own PCIe connection. It also removes much of the bridging hardware that typical clustered architectures require, reducing both latency and power consumption.
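    Even before any new fabric arrives, it is easy to see how much of a server already hangs off PCIe. On Linux, the standard sysfs tree at /sys/bus/pci/devices lists every PCIe function in the box; the short script below, an illustrative sketch rather than part of any vendor toolkit, groups those devices by PCI base class to show how many storage, network and other controllers already share the interconnect.

        from collections import Counter
        from pathlib import Path

        # High byte of the PCI class code -> human-readable category (per the PCI spec).
        PCI_BASE_CLASSES = {
            0x01: "storage controller",
            0x02: "network controller",
            0x03: "display controller",
            0x06: "bridge",
            0x0c: "serial bus controller",
        }

        def pci_inventory(sysfs_root: str = "/sys/bus/pci/devices") -> Counter:
            counts = Counter()
            for dev in Path(sysfs_root).iterdir():
                # Each device directory exposes its class code, e.g. "0x020000".
                base_class = int((dev / "class").read_text(), 16) >> 16
                counts[PCI_BASE_CLASSES.get(base_class, f"class 0x{base_class:02x}")] += 1
            return counts

        if __name__ == "__main__":
            for category, n in pci_inventory().most_common():
                print(f"{n:3d}  {category}")

    Running it on a typical two-socket server shows dozens of functions already on the bus, which is why a PCIe-based fabric can link components without adding translation layers in between.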

    As for the racks themselves, a consensus is growing among data center managers that it is high time for a redesign. As Datacenter Dynamics found in a recent survey, complaints range from a lack of standard sizing to inadequate power distribution to poor access to cabling. As the enterprise strives to bring legacy infrastructure up to 21st-century standards, flexibility is likely to be a primary requirement, particularly among cloud and colocation providers. It is well known that as server density increases, so does power density, and the last thing the CIO wants to hear is that data loads can’t be met because existing racks can’t handle them.

    Conventional wisdom holds that the major advances in data agility and performance will come through software-defined architectures. That may well be the case now that virtual networking has severed the last link between data environments and the underlying hardware. But it is also true that the staid physical infrastructure that has served the enterprise for so long will need to loosen up quite a bit if existing data centers are to remain relevant in the cloud era.

    At some point, data and applications need to find homes in the physical world, and the more accommodating the data center can be to dynamic workload environments, the more the enterprise will benefit from its hardware investment.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
