    Can Current Architectures Support a 1,000-Core Chip?

    The wires were abuzz this morning with news of a 1,000-core chip developed at the University of California, Davis, and, while the feat is impressive, it looks like the device is both more and less than it seems.

    The team, which, according to ZDNet, also included members of IBM, which provided the basic 32nm CMOS technology, has certainly broken new ground. The chip not only packs 1,000 cores onto a 7.94mm by 7.82mm die but also pushes power requirements low enough that it can run on a single AA battery while still churning out 115 billion instructions per second at a clock speed of about 1.78 GHz. This is far beyond the experimental 48-core device that Intel is working on (although that chip, too, has the potential to scale up to 1,000 cores) and the 1,000-core FPGA developed by the universities of Glasgow and Massachusetts earlier in the decade.

    But while more cores are always theoretically welcome, it is important to note that there is a lot more to computing than squeezing more processing centers onto a layer of silicon. You need advanced architectures that can use many cores efficiently, and the fact is that the software industry is still struggling to make the most of the relatively low core counts found in most devices today.

    DataCore, for example, is in many respects on the cutting edge of parallel processing in the data center, though not by simply pushing more simultaneous applications onto processors, says tech consultant Dan Kusnetzky. Instead, the latest release of the SANsymphony platform taps unused cores to accelerate storage and caching. In this way, applications can overcome serial I/O, the real bottleneck in most environments, and so drive faster turnaround, lower costs and higher resource utilization. The trick, in other words, is not to push more data onto multicore platforms, but to get data loads in and out more quickly to kick all operations into high gear.
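
    DataCore's engine is proprietary, but the underlying principle is easy to sketch. In the hypothetical Python snippet below, read_block stands in for a real blocking storage call; dispatching many such requests concurrently instead of serially hides most of the I/O wait, which is the same effect as handing storage work to otherwise idle cores.

        import time
        from concurrent.futures import ThreadPoolExecutor

        def read_block(block_id):
            """Hypothetical stand-in for a blocking storage read
            (a disk, SAN or network fetch dominated by I/O wait)."""
            time.sleep(0.05)  # simulate 50 ms of storage latency
            return block_id

        blocks = range(32)

        # Serial I/O: each request waits for the previous one to finish.
        start = time.perf_counter()
        serial = [read_block(b) for b in blocks]
        print(f"serial:   {time.perf_counter() - start:.2f}s")  # ~1.6s

        # Overlapped I/O: eight requests in flight at once.
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=8) as pool:
            parallel = list(pool.map(read_block, blocks))
        print(f"parallel: {time.perf_counter() - start:.2f}s")  # ~0.2s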

    Higher up the stack, of course, anything that can help software engage a multicore environment is welcome, but again the industry is already struggling to keep up with the two dozen or so cores on today’s most advanced devices, let alone a 1,000-core behemoth. Backers of Big Data languages like Python, for example, know the value of scale-out computing but are still struggling with the multi-thread workarounds needed to fully leverage today’s chip configurations, says InfoWorld’s Serdar Yegulalp. A key stumbling block is CPython’s internal memory management (memory I/O again; maybe they should talk to DataCore), which relies on a global locking mechanism that stifles parallel execution in order to maintain a cohesive global state. The problem is not insurmountable, but the trick is to enable multicore support without sacrificing either performance or data integrity.
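
    That locking mechanism is CPython’s global interpreter lock (GIL), and its effect is easy to demonstrate. In the sketch below, the same CPU-bound function gains nothing from four threads, since only one can execute Python bytecode at a time, while four processes, each with its own interpreter and its own lock, genuinely spread the work across cores.

        import time
        from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

        def spin(n):
            """CPU-bound busy work: pure bytecode, so the GIL serializes it."""
            while n > 0:
                n -= 1

        if __name__ == "__main__":
            work = [10_000_000] * 4

            # Four threads share one GIL; this runs effectively in serial.
            start = time.perf_counter()
            with ThreadPoolExecutor(max_workers=4) as ex:
                list(ex.map(spin, work))
            print(f"threads:   {time.perf_counter() - start:.2f}s")

            # Four processes each get their own interpreter and lock,
            # so the work actually lands on four cores.
            start = time.perf_counter()
            with ProcessPoolExecutor(max_workers=4) as ex:
                list(ex.map(spin, work))
            print(f"processes: {time.perf_counter() - start:.2f}s")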

    Another issue is the challenge of getting larger numbers of cores to communicate effectively so as not to impede overall performance. A company called KnuEdge is drawing a bead on this problem through advanced fabric technology intended to support neural chips that could impact everything from data security to “thinking” machines, says VentureBeat’s Dean Takahashi. The KnuPath interconnect is built on something the company calls LambdaFabric computing, which is currently being applied to cutting-edge devices with 256 cores, although it has the potential to seamlessly connect half a million devices with latency as low as 400 nanoseconds. Initial applications, naturally, are primarily military.
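
    That latency figure is worth putting in perspective. The toy Python benchmark below (which has nothing to do with KnuPath itself) bounces a message between two ordinary processes over an OS pipe and measures the round trip; on commodity hardware it typically comes back in the tens of microseconds, far above the 400 nanoseconds KnuEdge is claiming for its fabric.

        import time
        from multiprocessing import Pipe, Process

        def echo(conn, rounds):
            """Toy worker: receive a message and send it straight back."""
            for _ in range(rounds):
                conn.send(conn.recv())

        if __name__ == "__main__":
            rounds = 10_000
            parent, child = Pipe()
            worker = Process(target=echo, args=(child, rounds))
            worker.start()

            start = time.perf_counter()
            for _ in range(rounds):
                parent.send(b"x")
                parent.recv()
            elapsed = time.perf_counter() - start
            worker.join()

            # Round-trip latency over commodity inter-process messaging,
            # for comparison with the ~400 ns quoted for LambdaFabric.
            print(f"round trip: {elapsed / rounds * 1e6:.1f} microseconds")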

    While upping the core count is always a welcome development, it is important to remember that the tech industry has only just begun to tap the potential of existing multicore devices, so any practical applications for a 1,000-core chip are many years in the future. For today’s needs, the real work is happening at the software layer and the interconnect, even though many of the key advancements gain little notice outside the core development communities.

    So in much the same way that Napoleon Bonaparte probably wouldn’t have made much use of an F-14 Tomcat, today’s data ecosystem is not quite ready for a 1,000-core processor. Someday, perhaps, but not today.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
