
Best Practices for HTTP 1.1 Becoming Bad Practices for HTTP/2


    Header-Size Reduction

    HTTP 1.1 mandates that the full set of HTTP headers be sent with each resource request, even if it is exactly the same header information as in the previous 50 resource requests for that same page. In some cases, the payload of that repetitive header transmission can add up to more than the payload of the page content itself. This was a good reason to reduce header size as much as possible.

    With HTTP/2, this is no longer an issue. The new protocol version uses a more efficient header compression mechanism, which not only reduces the size of the header payload, but also eliminates the need to resend the full headers time and time again for every resource; instead, the client sends only the header fields that are unique to each request.
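The idea behind this mechanism can be illustrated with a simplified sketch. This is not real HPACK (HTTP/2's actual header compression, which uses Huffman coding and indexed static/dynamic tables), but a toy model of its core trick: both sides keep a shared table of previously seen headers, so a repeated header can be replaced by a tiny index reference, and only changed fields are sent in full. The function and header names below are illustrative assumptions, not part of any real API.

```python
# Toy illustration (NOT real HPACK) of HTTP/2-style header compression:
# headers already known to the shared dynamic table are sent as a small
# index reference; only new or changed headers are sent as full literals.

def encode_headers(headers, dynamic_table):
    """Encode a request's headers against a shared table of prior headers."""
    encoded = []
    for name, value in headers.items():
        if dynamic_table.get(name) == value:
            # Unchanged since a previous request: a one-byte-ish index
            # reference stands in for the full name/value pair.
            encoded.append(("indexed", name))
        else:
            # New or changed: send the literal pair and remember it.
            encoded.append(("literal", name, value))
            dynamic_table[name] = value
    return encoded

table = {}  # shared state, persists across requests on one connection
first = {"user-agent": "demo/1.0", "accept": "text/html", "path": "/a.css"}
second = {"user-agent": "demo/1.0", "accept": "text/html", "path": "/b.js"}

encode_headers(first, table)        # first request: all literals
e2 = encode_headers(second, table)  # second request: only the path changed
literals = [h for h in e2 if h[0] == "literal"]
print(len(literals))  # 1 -- only the changed header is resent in full
```

Under HTTP 1.1, the second request would repeat all three headers verbatim; here it resends only the one field that differs, which is why minifying headers by hand buys little under HTTP/2.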



The Internet has evolved significantly since HTTP 1.1 was introduced 17 years ago. During this evolution, we've seen many enhancements to improve a user's online experience, such as the development of rich content. However, delivering these improvements came at one particular cost: performance. These evolving performance challenges were something that HTTP 1.1 was not designed to handle.

In February 2015, the Internet Engineering Task Force (IETF), an international community of network designers, operators, vendors and researchers concerned with the evolution of Internet architecture, approved HTTP/2, a new version of the protocol designed to address those challenges and adapt to the progression that Internet content has undergone.

As HTTP/2 took a long time to arrive, many interim best practices were developed to bypass the performance bottlenecks of HTTP 1.1. However, we learned that many of those HTTP 1.1 performance-enhancing practices would actually contribute to slowing web application delivery rather than accelerating it when using the new HTTP/2 protocol. In this slideshow, Radware's Yaron Azerual takes a look at a few examples organizations should consider.