    New Chips for a New Computing Era

    New chip-level developments are occurring at a steady pace, so much so that in a few short years we could see a radical new foundation for the way the world creates and manipulates data.

    This is happening just in time, since conventional architectures are running up against the physical limits of Moore’s Law. The question, however, is whether new processing technologies will make it to commercial production in time to keep up with the dramatic increase in data demand.

    Just this week, researchers at MIT unveiled a new 3-D architecture that combines compute and storage on the same chip. The device is built from carbon nanotubes, rather than plain silicon, coupled with millions of resistive random-access memory (RRAM) cells to dramatically increase throughput compared with designs that shuttle data over conventional interconnects. Because the nanotube circuits and RRAM can be fabricated at much lower temperatures than silicon transistors, logic and memory can be layered vertically on a single die, allowing the chip to handle far more data while consuming far less energy than a 2-D silicon design. Developers say the design could impact applications ranging from traditional computing to personalized medicine.
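
    The payoff is less about faster transistors than about shortening the distance data has to travel. The rough sketch below makes the point with purely illustrative energy figures (my own assumptions, not numbers from the MIT work): once memory sits off-chip, moving bytes, rather than computing on them, dominates the energy budget.

        # Illustrative back-of-envelope model: why putting memory on the same
        # die as logic matters. All energy figures are rough assumptions for a
        # generic design, not measurements from the MIT chip.

        PJ_PER_MAC = 1.0              # assumed energy per multiply-accumulate (picojoules)
        PJ_PER_BYTE_OFFCHIP = 100.0   # assumed cost to fetch one byte over an off-chip bus
        PJ_PER_BYTE_STACKED = 5.0     # assumed cost with memory layered above the logic

        def workload_energy(macs, bytes_moved, pj_per_byte):
            """Total energy in picojoules: compute plus data movement."""
            return macs * PJ_PER_MAC + bytes_moved * pj_per_byte

        # A memory-bound workload: one byte fetched for every operation performed.
        macs = 1_000_000
        bytes_moved = 1_000_000

        off_chip = workload_energy(macs, bytes_moved, PJ_PER_BYTE_OFFCHIP)
        stacked = workload_energy(macs, bytes_moved, PJ_PER_BYTE_STACKED)

        print(f"off-chip memory: {off_chip / 1e6:.1f} microjoules")
        print(f"stacked memory:  {stacked / 1e6:.1f} microjoules")
        # Under these assumptions, data movement dwarfs arithmetic, which is the
        # bottleneck a vertically stacked logic/RRAM design is meant to remove.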

    Increasingly, chip development is focusing on ways to emulate the human brain’s ability to recognize images and intuitively deal with unfamiliar situations. This kind of neuromorphic computing requires a break with the von Neumann architecture that has formed the basis of chip design since the 1940s, say researchers at the Lawrence Berkeley National Laboratory. Von Neumann designs rely on ever-faster sequential processing, while neuromorphic designs strive for massively parallel operation to mimic the brain’s billions of neurons and trillions of synaptic connections. The effort is still very much in its nascent stage, but if successful, it could affect everything from sensor-driven data management to particle physics research.
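
    To make the contrast concrete, the sketch below simulates a small layer of leaky integrate-and-fire neurons in Python. It illustrates the computing style only, not any lab’s actual architecture: every neuron updates in the same time step, and information moves as spikes rather than as a single sequential instruction stream.

        import numpy as np

        # Minimal sketch of the neuromorphic style: many simple neurons updated
        # in parallel, communicating by spikes. Illustrative only.
        rng = np.random.default_rng(0)

        n_inputs, n_neurons = 64, 256
        weights = rng.normal(0.0, 0.5, size=(n_inputs, n_neurons))  # synaptic weights

        v = np.zeros(n_neurons)   # membrane potential of every neuron
        decay = 0.9               # leak: potentials fade between time steps
        threshold = 1.0           # firing threshold

        for t in range(100):
            spikes_in = (rng.random(n_inputs) < 0.1).astype(float)  # random input spikes
            v = decay * v + spikes_in @ weights                     # all neurons update at once
            fired = v >= threshold                                  # which neurons spike this step
            v[fired] = 0.0                                          # reset the neurons that fired
            if t % 20 == 0:
                print(f"step {t:3d}: {int(fired.sum())} of {n_neurons} neurons fired")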

    The field of on-chip photonics also got a boost recently with a new laser built directly onto a chip. LioniX, in conjunction with the University of Twente’s MESA+ research institute in the Netherlands, has developed a narrowband diode laser on a chip, overcoming the tricky problem of producing a beam stable enough to confine its output to a bandwidth of just 290 Hz. At that linewidth, the emitted light holds essentially a single frequency across the compact chip design, which the developers say is about 10 times narrower than any other photonic solution.
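
    To put that figure in perspective, a standard textbook estimate converts linewidth into coherence time and length. This is a back-of-envelope calculation assuming a Lorentzian line shape, not a number quoted by the developers:

        import math

        # Coherence from linewidth, assuming a Lorentzian line shape:
        #   coherence time   ~ 1 / (pi * linewidth)
        #   coherence length = coherence time * speed of light
        C = 299_792_458.0        # speed of light, m/s
        linewidth_hz = 290.0     # reported laser bandwidth

        coherence_time = 1.0 / (math.pi * linewidth_hz)   # seconds
        coherence_length = C * coherence_time             # meters

        print(f"coherence time:   {coherence_time * 1e3:.2f} ms")
        print(f"coherence length: {coherence_length / 1e3:.0f} km")
        # Roughly 1.1 ms and 330 km: the beam stays phase-coherent over distances
        # vastly larger than the chip itself, which is what holding a single
        # frequency means in practice.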

    As chip designs become more complex, however, it gets harder to spot the flaws that stifle efficiency. This is why companies like Nvidia are looking to machine learning and other forms of artificial intelligence to help with the design and engineering phases. As James Morra noted at the Design Automation Conference last month, providing chip-makers with more data is no longer enough. When it comes to optimizing billions of transistors on a single chip, they need tools that can actually start making decisions. As in most other fields, however, these systems are only beginning to have an impact on design and manufacturing processes, so it could be a while before they gain sufficient knowledge to make an appreciable difference.
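
    What “making decisions” means in practice can be sketched with a toy example. The snippet below, a simple simulated-annealing placer over a handful of hypothetical cells (in no way Nvidia’s actual tooling), searches for a placement that shortens total wirelength rather than merely reporting it; production tools face the same kind of problem at the scale of billions of transistors.

        import math
        import random

        # Toy placement optimizer: choose grid positions for a few cells so that
        # the nets connecting them are as short as possible. Cells and nets are
        # hypothetical; real placers work at vastly larger scale.
        random.seed(1)

        GRID = 10                      # cells live on a GRID x GRID grid
        cells = list(range(8))
        nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0), (0, 4)]

        def wirelength(placement):
            """Sum of Manhattan distances over all nets, the quantity to minimize."""
            return sum(abs(placement[a][0] - placement[b][0]) +
                       abs(placement[a][1] - placement[b][1]) for a, b in nets)

        placement = {c: (random.randrange(GRID), random.randrange(GRID)) for c in cells}
        best_cost = wirelength(placement)
        temperature = 5.0

        for step in range(5000):
            cell = random.choice(cells)
            old_pos, old_cost = placement[cell], wirelength(placement)
            placement[cell] = (random.randrange(GRID), random.randrange(GRID))
            new_cost = wirelength(placement)
            # Always keep improvements; occasionally keep regressions early on.
            if new_cost > old_cost and random.random() > math.exp((old_cost - new_cost) / temperature):
                placement[cell] = old_pos          # reject the move
            else:
                best_cost = min(best_cost, new_cost)
            temperature *= 0.999                   # cool the search over time

        print(f"best wirelength found: {best_cost}")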

    There is always a better mousetrap to be built, and there will always be a better way to convert electrons (or even smaller particles) into information. But the better technology does not always make it to the mainstream. It takes a rare confluence of design, production, marketing, distribution and a host of other factors to deliver a solution that solves real problems in an economically feasible way.

    Research into new chip architectures will continue unabated, with much of it leading nowhere. But if just one design rises to the top, it could set the data universe on an all-new trajectory.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
