    Overcoming Moore’s Limitations to Boost Data Center Performance

    Digital workflows are only as effective as the underlying infrastructure allows them to be. And on a fundamental level, that effectiveness is governed largely by the capabilities of core processors.

    The narrative of the past decade, however, is that basic chip technologies are reaching their practical limits: Moore’s Law is all but maxed out because transistors cannot be made much smaller without adding complex, and expensive, circuitry that erodes the performance gains of the smaller architectures.

    But this isn’t the end of the story. As it turns out, emerging applications are starting to push performance by harnessing the power of multiple chips rather than relying on a single device, and chip manufacturers are coming up with innovative ways to cater to this growing trend.

    AMD, for instance, is looking to reclaim its stake in the enterprise server business through a new line of CPU/GPU devices targeting high-performance workloads. According to PC World, the company intends to adapt the knowledge it gained blending CPUs and GPUs for Xbox and PlayStation gaming consoles to the forthcoming Zen processor, creating a mega-chip that would compete against Intel and Nvidia multichip options on price and integration simplicity. As it stands now, initial Zen-based machines, due next year, will feature a CPU-only design, but AMD CEO Lisa Su has told investors that a combined device will “come in time” because “it makes a lot of sense.”

    Meanwhile, Intel has seen steady growth in its Data Center Group – not just in CPUs but in ancillary technologies like the Xeon Phi accelerator and the Omni-Path interconnect. With more than 90 percent of the enterprise server market under its belt, the company is largely competing against its own previous chip generations rather than rivals like AMD and ARM, says ServerWatch’s Sean Michael Kerner. Next up is the Broadwell-E architecture, a 14 nm design that sports 10 cores capable of handling 20 threads at a boost clock of 3.5 GHz.

    As for ARM, the company is in the midst of being taken over by Japan’s SoftBank for a cool $32 billion, and the new owner makes no bones about leveraging the firm’s dominance in mobile platforms to crack the data center market. The Nikkei Asian Review recently quoted SoftBank chairman Masayoshi Son as saying ARM’s small presence in the data center is an opportunity to increase its reach, particularly given the drive to lower power consumption across increasingly distributed data infrastructure. ARM is already working with partners like Qualcomm and Huawei to advance its presence in the enterprise, but it is unclear what direction the company will take once SoftBank completes the deal and takes it private.

    Farther out on the horizon, technologies like cognitive intelligence and neural computing promise to make processors function more like human brains. IBM’s Watson program is already building quantum processors and “neuromorphic chip designs,” says SiliconANGLE’s Maria Deutscher, and is now working on advanced phase-change materials to build artificial neurons that can process both digital and analog data. This would mimic the brain’s ability to ingest information and perform calculations simultaneously: because data no longer has to shuttle between individual semiconductor components, complexity and power consumption drop while processing speeds up dramatically, particularly in complex workloads like artificial intelligence.

    Moore’s Law, it should be noted, was never about doubling computing power every 18 months; Moore’s actual observation was that transistor counts double roughly every two years. There are, however, plenty of other ways to boost performance besides simply throwing more transistors onto the die.
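
    To make that distinction concrete, here is a minimal Python sketch that simply compounds a transistor count over time. The starting count of one billion transistors and the two-year doubling period are assumptions chosen for illustration, not figures from any particular roadmap; whether the resulting count translates into proportional computing power is a separate question of architecture, clocks and power.

        # Illustrative compounding of a transistor count under a fixed doubling period.
        # The starting count and doubling period are assumptions for the example,
        # not figures tied to any specific processor roadmap.

        def transistor_count(start_count: float, years: float, doubling_years: float = 2.0) -> float:
            """Project a transistor count forward, assuming it doubles every doubling_years."""
            return start_count * 2 ** (years / doubling_years)

        if __name__ == "__main__":
            start = 1.0e9  # assume a chip with roughly one billion transistors today
            for years in (2, 4, 10):
                print(f"After {years:2d} years: ~{transistor_count(start, years):.1e} transistors")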

    Processors are still the fundamental building blocks for all things digital, but fortunately making them more powerful is not the sole determinant of how advanced a data ecosystem can become.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
