Time was, when an application took a few minutes longer than normal to boot up or couldn't retrieve needed data right away, the user muttered a few choice words and that was that. But in today's world of split-second financial transactions and real-time medical imaging, application and data latency can have serious repercussions.
Even as processing and networking technology keeps getting faster, there are still plenty of ways for today's high-speed architectures to slow to a crawl.
Part of the problem is that many existing applications (and quite a few new ones) are not optimized for changing hardware environments, according to Bloor Research analyst David Norfolk. Applications that cannot exploit multiprocessing, for example, use only one core of a multicore chip, and individual cores tend to run at lower clock speeds than single-core devices because the cores share the chip's power and thermal budget. Newer frameworks that manage concurrency for the developer, such as J2EE, should help.
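The multicore point can be illustrated with a short sketch: a CPU-bound job run in a single process occupies one core no matter how many the chip has, while a process pool spreads the same work across every available core. This is generic Python, not tied to any particular framework Norfolk has in mind:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n):
    """CPU-bound work: sum of squares below n."""
    return sum(i * i for i in range(n))

def single_core(chunks):
    # One process, one core: chunks are processed one after another.
    return [busy_sum(n) for n in chunks]

def multi_core(chunks):
    # One worker per core: chunks run in parallel across the whole chip.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(busy_sum, chunks))

if __name__ == "__main__":
    work = [200_000] * 8
    # Same answers either way; only the elapsed time differs.
    assert single_core(work) == multi_core(work)
```

On an otherwise idle eight-core machine the pooled version should finish roughly as fast as the slowest single chunk, while the sequential version pays for all eight in a row.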
Then again, the problem could be I/O bottlenecks between the application server and storage, according to Gary Orenstein of Gear6. Typically these appear in heavily trafficked environments, or where server virtualization, consolidation and data migration are widespread. His recommendation: scalable caching appliances that hold frequently accessed data in memory, taking the pressure off mechanical disks, particularly during peak loads.
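What a caching appliance does for a storage network boils down to the familiar cache-aside pattern: check fast memory first and touch the slow disk only on a miss. A minimal sketch, where `backing_read` is a hypothetical stand-in for the slow back-end read, not any Gear6 API:

```python
class ReadCache:
    """Cache-aside reads: serve hot data from memory, hit the disk only on a miss."""

    def __init__(self, backing_read):
        self._read = backing_read   # slow path, e.g. a disk or filer read
        self._store = {}            # fast path: in-memory copies of hot blocks
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._store:
            self.hits += 1          # served from memory; no disk I/O at all
            return self._store[key]
        self.misses += 1
        value = self._read(key)     # only the miss reaches the disk
        self._store[key] = value    # keep a copy for the next request
        return value
```

Under a peak load where the same blocks are requested over and over, nearly every request after the first becomes a memory hit, which is exactly the pressure relief Orenstein describes.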
Jon Stokes, writing at Ars Technica, is bullish on solid-state memory as a way to lower latency and cut power consumption. Using an SSD as a cache in front of a conventional magnetic storage system improves access times for database and Web-based applications in particular, and lets the standard drives spin down during idle periods.
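Stokes's arrangement can be sketched as a tiered read path: every lookup tries the flash tier first, and only a miss forces the spinning disk to do any work. The dict-as-SSD and the `disk_read` callback here are illustrative assumptions, not a real storage driver:

```python
def make_tiered_reader(ssd, disk_read):
    """Two-tier read path: flash cache in front of a magnetic disk.

    `ssd` is a dict standing in for the flash tier; `disk_read` is the
    slow path to the spinning disk. Both are illustrative stand-ins.
    """
    def read(block):
        if block in ssd:            # flash hit: disk stays spun down
            return ssd[block]
        data = disk_read(block)     # flash miss: disk must spin up and seek
        ssd[block] = data           # promote the block into the flash tier
        return data
    return read
```

The key property for power as well as latency is visible in the control flow: once the working set has been promoted, the `disk_read` branch stops executing, so the mechanical drive can stay idle.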
Of course, enhanced server and storage capabilities won't offer much improvement unless the interconnect can handle the load. With Fibre Channel confined to the storage network, the real decision for the server farm is between InfiniBand and 10 Gigabit Ethernet, according to HPCwire's Michael Feldman. InfiniBand is firmly entrenched in high-performance computing, but many smaller organizations may prefer to build on their existing Ethernet infrastructure if the price/performance ratio is right.
Computer technology has always been about speed and power, but only lately have those two goals started to diverge. Sure, you can optimize your hardware infrastructure with multicore chips and virtualization, but unless you keep a sharp eye on how your data is being processed, you'll only be running in place.