Improving Application Latency

Arthur Cole

Time was that when an application took a few more minutes than normal to boot up or failed to access needed data right away, it led to a few choice words from the user and that was that. But in today's world of split-second financial transactions and real-time medical imaging, application and data latency can have serious repercussions.

 

Even as processing and networking technology continues to ramp up speeds, there are still too many ways in which today's high-speed architectures can suddenly slow to a crawl.

 

Part of the problem is that many existing applications (and quite a few new ones) are not optimized for changing hardware environments, according to Bloor Research analyst David Norfolk. Applications that aren't written for multiprocessing, for example, use only one of the several cores on a multicore chip, and those cores tend to run at lower clock speeds than single-core parts because the chip's power and thermal budget is spread across them. Multiprocessing-friendly frameworks like J2EE should help.
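Norfolk's point can be sketched in plain Java (no J2EE required): an application that splits its work across a thread pool can keep every core busy, while a single-threaded version leaves all but one idle. This is an illustrative example, not code from the article; the `ParallelSum` class and its chunked summing task are made up for demonstration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ParallelSum {
    // Sum an array in chunks, one task per available core.
    static long parallelSum(long[] data) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = (data.length + cores - 1) / cores;
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < data.length; i += chunk) {
            final int start = i;
            final int end = Math.min(i + chunk, data.length);
            // Each task sums its own slice on its own core.
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int j = start; j < end; j++) s += data[j];
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // combine partial sums
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(parallelSum(data)); // 499999500000
    }
}
```

The same shape applies to any divisible workload: the single-threaded loop becomes a set of independent tasks submitted to the pool, and the core count scales with the hardware instead of being pinned at one.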

 

Then again, the problem could be I/O bottlenecks between the application server and storage, according to Gary Orenstein of Gear6. Typically, this happens in heavily trafficked environments or when there is a lot of server virtualization/consolidation and data migration. His recommendation is a series of scalable caching appliances that can store frequently accessed data to lessen the pressure on mechanical disks, particularly during peak loads.
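The read path of such a caching appliance can be approximated in a few lines: check an in-memory, capacity-bounded cache first and fall back to the disk only on a miss, so hot blocks never touch the spindles under load. This is a minimal sketch of the idea, not Gear6's actual product behavior; `ReadCache` and `readFromDisk` are hypothetical names, and the "disk" here is a stand-in.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReadCache {
    private final int capacity;
    private final Map<String, byte[]> lru;

    public ReadCache(int capacity) {
        this.capacity = capacity;
        // Access-order LinkedHashMap gives us LRU eviction for free.
        this.lru = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> e) {
                return size() > ReadCache.this.capacity;
            }
        };
    }

    public byte[] read(String blockId) {
        byte[] hit = lru.get(blockId);
        if (hit != null) return hit;             // served from RAM, no disk I/O
        byte[] fromDisk = readFromDisk(blockId); // slow path: mechanical disk
        lru.put(blockId, fromDisk);              // cache for the next reader
        return fromDisk;
    }

    // Stand-in for the mechanical-disk read the cache is shielding.
    static byte[] readFromDisk(String blockId) {
        return blockId.getBytes();
    }
}
```

During peak loads the effect compounds: every hit is a seek the disks never perform, which is exactly the pressure relief Orenstein describes.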

 

Jon Stokes, writing at Ars Technica, is bullish on solid-state memory when it comes to lowering latency and cutting power consumption. Using an SSD as a cache in front of a typical magnetic storage system will improve access times for database and Web-based applications in particular, and lets the standard drives spin down during idle periods.
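The payoff of an SSD cache can be estimated with the standard effective-access-time formula, t = h * t_fast + (1 - h) * t_slow, where h is the cache hit ratio. The figures below (0.1 ms for an SSD read, 8 ms for a mechanical-disk read, a 90 percent hit ratio) are illustrative assumptions, not numbers from Stokes' article.

```java
public class EffectiveLatency {
    // Effective access time under a cache: t = h*t_fast + (1-h)*t_slow.
    static double effectiveMs(double hitRatio, double fastMs, double slowMs) {
        return hitRatio * fastMs + (1 - hitRatio) * slowMs;
    }

    public static void main(String[] args) {
        // Assumed figures: 0.1 ms SSD read, 8 ms disk read, 90% hit ratio.
        System.out.printf("%.2f ms%n", effectiveMs(0.90, 0.1, 8.0)); // 0.89 ms
    }
}
```

Even with these rough numbers, a 90 percent hit ratio cuts the average read from 8 ms to under 1 ms, which is why a modest SSD tier can transform a disk-bound workload.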


 

Of course, enhanced server and storage capabilities won't offer much improvement unless the interconnect can handle the required load. With Fibre Channel limited to the storage network, the real decision for the server farm is between InfiniBand and 10 GbE, according to HPCwire's Michael Feldman. While InfiniBand is firmly established in the high-performance computing sphere, many smaller organizations may want to build on existing Ethernet infrastructure if the price/performance ratio is right.

 

Computer technology has always been about speed and power, but only lately have those two goals started to diverge. Sure, you can optimize your hardware infrastructure with multicore chips and virtualization, but unless you keep a sharp eye on how your data is being processed, you'll only be running in place.



Comments
Jan 7, 2008 8:23 AM B. Scott Michel says:
According to David Patterson's HPEC presentation in 2006, latency is approximately related to sqrt(bandwidth). There may be a few micro-refinements, such as solid-state memory, but that doesn't help the latency penalty encountered by just going off-chip. BTW: The interconnects get faster. But latency doesn't correspondingly increase. It gets worse.
Jan 7, 2008 8:24 AM B. Scott Michel says:
Substitute "correspondingly decrease" for "correspondingly increase".
Aug 21, 2010 2:56 AM hosted PBX says:
Improving application latency is a key goal for all application developers. If the latency is high, it will be inconvenient for the application's users. The discussion of the factors that cause high-speed applications to slow to a crawl, provided here, is extremely useful and important for all developers. This post helps us understand what makes applications slow down and hence how to fix it. So, being an application developer, I will definitely follow the tips and techniques mentioned by Arthur Cole.
Jan 3, 2011 9:07 AM Sport Betting says:
Improving disk performance also helps reduce latency. Keep disks cool and reduce the number of huge single files stored on the system partition. That can help reduce scanning times for the very busy system-partition drive.
Feb 1, 2011 5:56 AM Mkki says:
The power is not always increasing, as in the case of new and smaller devices that run on batteries. Laptops and tablets aren't as fast as PCs, but they need the same applications, or similar ones, on them. Battery power, and how long it lasts, will be more in demand in the future, when people won't need the raw processing power.
