If a company's convergence applications slow down, chances are bandwidth is not the problem.
Much of the discussion of connectivity and convergence almost reflexively uses the total amount of available bandwidth as the key metric. While bandwidth certainly is a vital statistic, a far lower-profile set of metrics is just as important in determining how well a convergence application such as video streaming or VoIP performs.
The most common of these statistics -- jitter and latency -- deal with the timeliness and predictability with which data packets reach their destination. If these metrics are askew, convergence applications will perform poorly no matter how much bandwidth is thrown at them. There is a good primer at Smart Communications on four important non-bandwidth elements that can threaten a converged application.
End-to-end latency, as the name implies, is the time it takes a packet flow to reach its remote end point. The writer says that this time should be no greater than 300 milliseconds.
Intra-stream latency refers to individual packets with latencies that deviate from the stream's normal transit time by more than 30 to 35 milliseconds.
Inter-stream latency occurs when the difference between audio and video stream latencies is great enough to impact the end product. In video conferencing -- the topic of this piece -- inter-stream latency can result in poor audio/video coordination, most evident in voice and video synchronization problems.
Finally, network jitter is "desequencing" or loss of data packets. Another good explanation of these terms is available at Griffin Internet.
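The thresholds above are concrete enough to check mechanically. Here is a minimal sketch -- with made-up timestamps and the 300 ms and 35 ms limits taken from the primer -- that flags end-to-end latency violations, intra-stream deviation, and "desequenced" packets in a trace:

```python
# Minimal sketch: flag latency and jitter problems in a packet trace.
# Timestamps are illustrative, in seconds; thresholds come from the primer.

E2E_LIMIT = 0.300        # end-to-end latency ceiling: 300 ms
DEVIATION_LIMIT = 0.035  # intra-stream deviation ceiling: ~35 ms

def analyze(packets):
    """packets: list of (seq, sent_time, recv_time) tuples, in arrival order."""
    latencies = [recv - sent for _, sent, recv in packets]
    nominal = sorted(latencies)[len(latencies) // 2]  # median transit time
    return {
        # end-to-end latency over 300 ms
        "e2e_violations": [p[0] for p, lat in zip(packets, latencies)
                           if lat > E2E_LIMIT],
        # transit time deviating from the stream's norm by more than ~35 ms
        "jitter_violations": [p[0] for p, lat in zip(packets, latencies)
                              if abs(lat - nominal) > DEVIATION_LIMIT],
        # "desequencing": packets arriving out of sequence-number order
        "out_of_order": [packets[i][0] for i in range(1, len(packets))
                         if packets[i][0] < packets[i - 1][0]],
    }

trace = [
    (1, 0.000, 0.080),  # 80 ms transit: fine
    (2, 0.020, 0.100),  # 80 ms
    (3, 0.040, 0.125),  # 85 ms
    (5, 0.060, 0.200),  # 140 ms: deviates from the norm by more than 35 ms
    (4, 0.080, 0.400),  # arrived after seq 5, and 320 ms exceeds the 300 ms limit
]
print(analyze(trace))
```

Nothing here is specific to any vendor's tooling; it just shows that the three failure modes are distinct and can all occur on a link with plenty of spare bandwidth.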
Overall bandwidth remains an important topic. Though it is plentiful, it is not infinite. The writer of this thoughtful NewTeeVee piece says that the concept of unlimited bandwidth often is mistaken for instantaneous availability of data or an application. For one thing, the transmission of data over fiber is limited by the speed of light. More practically, the signals need to go through various network elements, and this takes time. Packets in most cases spend at least some time on coaxial or even copper cables, which have slower transmission speeds than fiber. Perhaps, the writer says, in the near future some version of Ethernet or a wireless variant will remove the last mile hurdles.
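The speed-of-light point is easy to put in numbers. A signal in fiber travels at roughly two-thirds of c -- about 200,000 km/s, a standard rule of thumb -- so propagation delay alone sets a floor no amount of bandwidth removes (the route distances below are rough illustrations):

```python
# Why "unlimited bandwidth" is not "instantaneous": propagation delay alone.
# Light in fiber travels at roughly 2/3 of c (~200,000 km/s), so every
# kilometer of fiber adds about 5 microseconds regardless of bitrate.

FIBER_KM_PER_SEC = 200_000  # approximate signal speed in fiber

def one_way_delay_ms(distance_km):
    """Propagation delay in milliseconds over a fiber path of this length."""
    return distance_km / FIBER_KM_PER_SEC * 1000

for route, km in [("New York - Los Angeles", 4000),
                  ("New York - London", 5600)]:
    print(f"{route}: ~{one_way_delay_ms(km):.0f} ms one way, fiber alone")
```

Queuing in routers, slower coaxial or copper segments, and encoding delays all stack on top of that floor, which is the writer's practical point.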
One of the most demanding of all online applications has nothing to do with business (other than for those who make a killing running them). It's online gaming. This piece at the MIT Technology Review buttresses the notion that the challenges facing sophisticated convergence applications are at least as dependent upon how applications and networks operate as on overall available bandwidth.
Online games bog down when too many moves are made in a short period of time. Efforts to alleviate the problem involve running parallel versions of the game -- low-fidelity deep in the application and high-fidelity for display to participants -- that track the action. When the two versions don't match, the story says, the differences are deduced and updates are sent to all the computers participating in the session. The details are complex, but the bottom line is that the system is meant to confront difficulties that exist because of the inability of the network to process information quickly enough, not to compensate for a lack of bandwidth.
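The reconciliation idea can be sketched in a few lines. This is an assumed, simplified design -- not the actual system the article describes -- in which an authoritative state is compared against what a client is displaying, and only the entities that have drifted past a tolerance are re-sent:

```python
# Sketch of state reconciliation (assumed design, not the article's system):
# compare the authoritative world state with a client's displayed state and
# send corrections only for entities that diverge beyond a tolerance.

TOLERANCE = 0.5  # world units; smaller divergence is left uncorrected

def diff_states(authoritative, client):
    """Return {entity: correct_position} for entities that drifted too far."""
    updates = {}
    for entity, true_pos in authoritative.items():
        shown = client.get(entity)
        if shown is None or abs(true_pos - shown) > TOLERANCE:
            updates[entity] = true_pos
    return updates

server_state = {"player1": 10.0, "player2": 4.2, "npc7": -3.0}
client_state = {"player1": 10.1, "player2": 6.0, "npc7": -3.0}

# Only player2 has drifted past the tolerance, so only its position is re-sent.
print(diff_states(server_state, client_state))
```

The payoff matches the story's bottom line: the traffic saved isn't raw bandwidth but round trips, which is what matters when the network can't move state fast enough.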
Brave readers will be rewarded for working through this complex article at The VoIP Magazine that deals with latency and jitter in VoIP. VoIP, the writer says, creates two types of traffic. The first is a call control layer that doesn't have very exacting requirements. The other layer contains the call data itself. The writer describes the encoding standards that are used -- G.711 and G.729 -- and discusses the characteristics of each. It's dense material. The takeaway for a non-engineer is that the encoding scheme chosen and the way in which it is used are the key to call quality.
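A rough illustration of why the codec choice matters so much: G.711 carries voice at 64 kbps and G.729 at 8 kbps (these are the standard payload rates, not figures from the article), but every 20 ms voice packet also carries roughly 40 bytes of IP, UDP, and RTP headers, so the on-the-wire cost differs from the codec rate:

```python
# Back-of-the-envelope bandwidth for the two codecs mentioned above.
# Payload bitrates are the standard values (G.711: 64 kbps, G.729: 8 kbps);
# 40 bytes is the usual IP(20) + UDP(8) + RTP(12) header stack per packet.

HEADER_BYTES = 20 + 8 + 12  # IP + UDP + RTP headers on every voice packet

def ip_bandwidth_kbps(codec_kbps, packet_ms=20):
    """Total IP-layer bitrate for one direction of a call."""
    packets_per_sec = 1000 / packet_ms
    overhead_kbps = HEADER_BYTES * 8 * packets_per_sec / 1000
    return codec_kbps + overhead_kbps

for name, rate in [("G.711", 64), ("G.729", 8)]:
    print(f"{name}: {ip_bandwidth_kbps(rate):.0f} kbps per direction")
```

Note that the fixed header overhead hits the low-bitrate codec proportionally much harder -- one reason "the way in which it is used" (the packetization interval) matters as much as the codec itself.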