You would have to look far and wide to find someone who wants less power, less performance from their data center. But even after all the architectural tweaks, the virtualization of resources, improvements in automation and all the rest, higher performance is driven by what goes on in the processor.

Unfortunately, Moore's Law is starting to bump up against the fundamental limits of physics; that is, performance gains cannot be maintained by shrinking the processes on the die for much longer. With only a few process generations to go, the computing industry will have to push quantum architectures into production environments in order to maintain the momentum of the past 40 years. And the fact is that we are nowhere near to making this leap.
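The scale of that 40-year momentum is easy to put in numbers. Under the common reading of Moore's Law as a doubling of transistor density every two years (the cadence is an assumption for illustration; Moore's original observation cited a shorter interval), four decades compound to roughly a million-fold gain:

```python
# Illustrative arithmetic only: compound one density doubling
# every two years across a 40-year span.
years = 40
doubling_period = 2  # assumed years per doubling

doublings = years // doubling_period  # 20 doublings
density_gain = 2 ** doublings         # 2^20

print(doublings)     # 20
print(density_gain)  # 1048576 -- roughly a million-fold
```

That is the curve the industry has been riding, and the one that process shrinks alone can no longer sustain.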
So what to do? Increasingly, those broader architectural and software-driven constructs mentioned above will have to carry the load in order to deliver on the promises that have been made to the data-using community. To be sure, new classes of processor will help support these capabilities, such as Nvidia's GPU-based Pascal architecture and its applications in developments like artificial intelligence and deep neural networks (DNNs). The latest advancement is a 16 nm FinFET process from TSMC that Nvidia has incorporated into the new GP100 device. This combination will probably set the gaming world all atwitter when it is released later this year, but coupled with the CUDA 8 parallel programming platform it will also have broad impact on enterprise applications, says Forbes' Patrick Moorhead.
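The reason DNN workloads map so well onto GPUs is that they boil down to dense matrix arithmetic, in which every output cell is an independent dot product. A minimal pure-Python sketch of that computation (names and sizes here are illustrative only; on a CUDA device each output cell would typically be assigned to its own thread):

```python
def matmul(a, b):
    """Naive dense matrix multiply. Each output cell is an
    independent dot product, so all cells can be computed in
    parallel -- which is what a GPU's thousands of cores exploit."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

Scale those matrices up to the millions of weights in a deep network and the appeal of a massively parallel device like the GP100 becomes obvious.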
Intel has begun shipping its latest line of Xeon E5 processors with integrated field-programmable gate arrays (FPGAs) from Altera to produce more flexible architectures that can then be performance-optimized for key server applications. Specifically, the design pairs the Arria 10 FPGA with the new 14 nm Xeon E5 2600 v4 "Broadwell" device as a way to make it more amenable to cloud and web-scale workloads, which have a tendency to shift in size and purpose very rapidly. At the same time, Intel is employing a range of co-processors and ASICs to provide advanced acceleration and application-specific functionality that addresses both the performance and power consumption needs of modern workloads.
In the lab, of course, work is progressing on a number of fronts to drive as much performance as possible out of silicon before its physical limitations are met. At Cambridge University, for instance, researchers are looking at superconductors and "spintronics" to drive an entirely new generation of processing technology. As explained by TechCrunch's Natasha Lomas, spintronics harnesses the spin of electrons, rather than just their charge, to control a device's magnetic and electrical properties. The hope is that by uniting this technology with superconductivity, devices can be developed that eliminate energy loss on the die without driving up the cooling requirements to the point where the net gain in performance and energy efficiency is lost. The research is still at a very nascent stage, however, with a working prototype at least five years away.
But if you can’t push the envelope in hardware much longer, perhaps there are ways to boost performance in software. This line of thinking is taking root in the OpenPOWER Foundation that IBM has created with Google, Micron, Samsung and others. At a recent summit, Rackspace’s Aaron Sullivan put out a call to Python developers to focus their attention on the power barricade that arises at around the 7 nm process level. This would require the Python community to focus more attention on basic logic rather than more profitable scripting endeavors, says Data Center Knowledge’s Scott Fulton III, but in the long run would benefit everyone by devising more robust and powerful data environments. IBM is working on a new Coherent Accelerator Processor Interface (CAPI) for the Power8 line that would support this effort by offloading parallel operations from the CPU to an FPGA to take advantage of more targeted programming.
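The offload pattern that CAPI enables can be sketched in ordinary Python: the host CPU keeps the control flow while a data-parallel kernel is handed off to whatever accelerator is attached. The thread pool below is purely a stand-in for the FPGA, to illustrate the division of labor; it is not the CAPI interface, and the function names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    # Stand-in for the logic that would run on the accelerator:
    # a simple elementwise transform over one partition of the data.
    return [x * x for x in chunk]

def offload(data, workers=4):
    # Host side: partition the data, dispatch the chunks in
    # parallel, then gather the results -- the general shape of
    # a CPU-to-accelerator offload.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(kernel, chunks)
    return [x for chunk in results for x in chunk]

print(offload(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The win CAPI is chasing is that the "pool" is coherent, programmable hardware sitting on the Power8 bus, so the kernel can be rewritten at the logic level for each workload rather than fixed in silicon.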
We should have known all along that Moore’s “law” wasn’t so much a law as a guideline that would one day hit the practical limits of reality. Modern data environments are so layered and so complex that there is still plenty of wiggle room to make it seem that basic processing is making strides. But at the end of the day, only a new foundational electronic paradigm can push the field of computing toward a new distant horizon.
There is always the hope that a major breakthrough in chip construction will emerge soon, but in the meantime, the data industry will just have to get used to the idea that limitless progress and ever-expanding capabilities are no more a birthright than in any other field of science.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.