With the advent of new architectures such as Intel's Westmere-class Nehalem processors, and extensions to the amount of memory available to applications in systems such as Cisco's Unified Computing System, major leaps in application performance can now be achieved affordably.
What this ultimately means, says GigaSpaces CTO Nati Shalom, is that we'll soon see the arrival of Tera Scale Computing, where trillions of calculations per second will be made possible by harnessing the power of thousands of processor cores in parallel. GigaSpaces, which makes an application server that can be deployed entirely in memory, is hoping to take advantage of this trend to drive the next generation of application deployment on private and public cloud computing infrastructure that will rely heavily on the latest and greatest processors.
Shalom says the implications of these advances are profound on two levels. The first is that new classes of applications will be enabled by the ability to process more data in real time than ever before. The second, Shalom argues, is that the rise of these new architectures may mean an end to distributed computing as we know it today: the sheer density of the next generation of servers, coupled with increased network bandwidth, may make it simply unnecessary to distribute applications.
Of course, we’re still a long way from writing applications in parallel to take advantage of all these available processor cores. But Shalom says GigaSpaces is taking a significant step in the right direction by making its application server available on UCS systems from Cisco. The memory extensions in that system make it a whole lot easier to run entire applications in memory. That, says Shalom, is the first step toward a Tera Scale Computing model.
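To give a feel for what "writing applications in parallel" means in practice, here is a minimal sketch in plain Java, using the standard parallel-streams API to spread a computation across every available core. This is an illustrative example only; it is not GigaSpaces code, and the class and method names are invented for the sketch.

```java
import java.util.stream.LongStream;

public class ParallelSum {
    // Computes the sum of squares of 1..n, splitting the range across
    // all available cores via the common fork/join pool.
    static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                .parallel()          // opt in to multi-core execution
                .map(i -> i * i)     // each chunk of the range is mapped independently
                .sum();              // partial sums are combined at the end
    }

    public static void main(String[] args) {
        // 1000 * 1001 * 2001 / 6 = 333833500
        System.out.println(sumOfSquares(1_000));
    }
}
```

The hard part, which the paragraph above alludes to, is that most real applications are not as trivially divisible as a numeric range: shared mutable state and ordering dependencies are what make parallelizing them difficult.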