The Internet has evolved significantly since HTTP 1.1 was introduced 17 years ago. During this evolution, we’ve seen many enhancements to improve a user’s online experience, such as the development of rich content. However, delivering these improvements came at one particular cost: performance. These evolving performance challenges were something that HTTP 1.1 was not designed to handle.
In February 2015, the Internet Engineering Task Force (IETF), an international community of network designers, operators, vendors and researchers concerned with the evolution of Internet architecture, approved HTTP/2, a new version of the protocol designed to address those challenges and adapt to the progression that Internet content has undergone.
As HTTP/2 took a long time to arrive, many interim best practices were developed to bypass the performance bottlenecks of HTTP 1.1. However, many of those HTTP 1.1 performance-enhancing practices actually slow web application delivery rather than accelerate it when used with the new HTTP/2 protocol. In this slideshow, Radware's Yaron Azerual takes a look at a few examples organizations should consider.
HTTP1.1 Workarounds Slowing HTTP/2 Delivery
Click through for HTTP1.1 workarounds that actually cause performance slowdowns for HTTP/2, as identified by Radware’s Yaron Azerual.
Domain Sharding
HTTP 1.1 allowed only a single transaction per TCP connection, meaning that a client could send just one resource request at a time and had to wait for the server's reply to complete before it could send the next request. To bypass this limitation, "domain sharding" was used to create multiple TCP connections, allowing multiple transactions to take place in parallel and accelerating the delivery of the web page.
With HTTP/2, this practice is no longer needed, as the new protocol version allows multiple transactions to be multiplexed over a single TCP connection at the same time, and also allows the server to send interleaved replies in a different order than the order in which the requests were sent. In fact, continuing to use multiple TCP connections with HTTP/2 adds the setup time required to build each connection, as well as the extra compute resources each connection demands from the server and the browser.
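One way to retire domain sharding is to rewrite sharded asset URLs back to a single origin at page-generation time, so the browser can multiplex all requests over one connection. The sketch below illustrates the idea; the shard and origin hostnames are hypothetical placeholders, not real Radware guidance.

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical shard hostnames that a site used under HTTP 1.1; with
# HTTP/2 we map them all back to a single origin so every request can
# share one multiplexed TCP connection.
SHARDS = {"img1.example.com", "img2.example.com", "img3.example.com"}
ORIGIN = "static.example.com"

def unshard(url: str) -> str:
    """Rewrite a sharded asset URL to the single HTTP/2 origin."""
    parts = urlsplit(url)
    if parts.netloc in SHARDS:
        parts = parts._replace(netloc=ORIGIN)
    return urlunsplit(parts)

print(unshard("https://img2.example.com/logo.png"))
# -> https://static.example.com/logo.png
```

URLs on other hosts pass through unchanged, so the rewrite is safe to apply to every asset reference on the page.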
CSS Sprites and Image Consolidation
Another common practice from the HTTP 1.1 era was consolidating multiple images into CSS sprites, which reduces the number of image resources per page, and thus the number of requests required to deliver the page to the browser. The tradeoff of this best practice is that by consolidating multiple images together, one reduces the likelihood that those images can later be reused from the browser's cache, hurting performance instead of improving it.
As the number of requests per page is no longer an issue with HTTP/2, it is better to avoid consolidating images, in turn maximizing caching efficiency.
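The caching tradeoff above can be illustrated with simple arithmetic: if icons are served individually, a returning visitor re-downloads only the icons that changed; if they are packed into one sprite sheet, any change invalidates the entire cached sprite. The byte sizes below are made-up values for illustration.

```python
def bytes_refetched(sizes, changed, sprited):
    """Bytes a returning visitor must re-download when some icons change.

    sizes   -- hypothetical per-icon sizes in bytes
    changed -- indices of icons that changed since the last visit
    sprited -- True if all icons are packed into one sprite sheet
    """
    if sprited:
        # Any change invalidates the whole cached sprite sheet.
        return sum(sizes) if changed else 0
    # Individual images: only the changed ones are re-fetched.
    return sum(sizes[i] for i in changed)

icons = [400, 650, 900, 1200]                       # four icons, ~3 KB total
print(bytes_refetched(icons, [1], sprited=True))    # 3150: the whole sprite
print(bytes_refetched(icons, [1], sprited=False))   # 650: just the changed icon
```

Under HTTP 1.1 the sprite's saved requests could outweigh this waste; with multiplexed HTTP/2 requests, the cache efficiency of individual images usually wins.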
Resource In-lining
Another best practice commonly used with HTTP 1.1 was in-lining objects (such as CSS and JavaScript) into the HTML code, again reducing the number of resources, and thus requests, that need to be fetched separately per page. However, in-lined objects cannot be stored in the browser cache, which means their payload has to be sent over and over again, hurting performance.
With HTTP/2, this practice should be avoided. Since HTTP/2 is much more efficient with its ability to multiplex transactions over a single TCP connection and compress their headers, sending each resource separately increases the caching efficiency and the overall performance of a web application.
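Reversing in-lining means pulling embedded blocks back out into separately cacheable files. The sketch below extracts inline `<style>` blocks from an HTML string with a regex; a production tool would use a real HTML parser, and the stylesheet file name here is an assumption for illustration.

```python
import re

# Minimal de-inlining sketch: pull inline <style> blocks out of the HTML
# so the CSS can be fetched (and cached) as a separate resource.
STYLE_RE = re.compile(r"<style>(.*?)</style>", re.DOTALL)

def extract_styles(html: str, href: str = "/styles/page.css"):
    """Return (rewritten_html, extracted_css); href is a placeholder path."""
    css = "\n".join(m.group(1).strip() for m in STYLE_RE.finditer(html))
    link = f'<link rel="stylesheet" href="{href}">'
    return STYLE_RE.sub(link, html), css

page = "<head><style>body{margin:0}</style></head>"
new_html, css = extract_styles(page)
print(new_html)  # <head><link rel="stylesheet" href="/styles/page.css"></head>
print(css)       # body{margin:0}
```

The extracted CSS is then served as its own resource, which HTTP/2 can deliver over the existing multiplexed connection and the browser can cache across pages.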
Client-Server Encrypted Connection
Encryption impacts performance on both the server side and the browser side, so with HTTP 1.1, avoiding communication encryption made perfect sense. However, the browsers that have implemented HTTP/2 support it only over encrypted connections, making it practically impossible to deploy HTTP/2 without an encrypted browser-server connection. While this may sound counterproductive, the accelerated performance offered by HTTP/2 is often much greater than the performance penalty of using encrypted connections.
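In practice this means TLS and HTTP/2 are enabled together on the server. A minimal nginx sketch, assuming hypothetical hostnames and certificate paths (the exact directives may vary by nginx version):

```nginx
server {
    # Browsers only speak HTTP/2 over TLS, so the two are enabled together.
    listen 443 ssl http2;
    server_name www.example.com;                 # hypothetical hostname

    ssl_certificate     /etc/ssl/example.crt;    # hypothetical cert paths
    ssl_certificate_key /etc/ssl/example.key;
}
```

The protocol itself is then negotiated during the TLS handshake (via ALPN), with no change required to the application code.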
Header Size Reduction
HTTP 1.1 mandates that for each resource request, the full set of HTTP headers be sent over and over again, even if it is exactly the same header information as in the previous 50 resource requests for that same page. In some cases, the payload of that repetitive header transmission can add up to more than the payload of the page content itself. This was a good reason to reduce the header size as much as possible.
With HTTP/2, this is no longer an issue, as the new protocol version uses a new, more efficient header compression mechanism (HPACK), which not only reduces the size of the header payload, but also eliminates the need to resend the full headers time and time again for every resource, and instead requires the client to send only the header fields that changed.