What Does HTTP/2 Achieve?
Whenever a new protocol version is developed, it needs concrete goals. The most obvious one is backward compatibility with its predecessor, HTTP/1.1. Without that, every server in the world would have to switch to HTTP/2 before you could browse its website. While maintaining compatibility with the older version, the new protocol uses advanced techniques to cut latency and make pages load faster. This is the primary goal, the problem HTTP/2 plans to address most aggressively. Other improvements include added security and compatibility with reverse proxies. In the grand scheme of things, HTTP/2 is not that different from HTTP/1.1. As you surf the internet, the strongest effect you will feel is that webpages load significantly faster, as long as the sites you visit support the new version.
How Does HTTP/2 Make The Web Faster?
To say that “HTTP/2 makes everything faster” is a disservice to the amount of work that actually takes place behind the scenes to accomplish it. HTTP/1.1 is riddled with issues that were acceptable in the earliest years of the 21st century but no longer make sense to live with in a time when bandwidth is cheaper and servers are expected to deliver pages at much faster rates.

The chief way HTTP/2 attacks page load times is by compressing headers, the metadata your client attaches to every request and a server attaches to every response. Compressing them reduces the amount of data that has to travel back and forth before the page itself arrives, shortening the “handshake” between your computer and the destination server. Modern processors can handle millions of these compressions and decompressions in a short amount of time, so the trade-off now makes sense. (The first sketch below shows roughly how much a typical header block shrinks.)

Header compression only trims the cost of individual requests; HTTP/2 also changes how the rest of your interaction with a website unfolds. It directly implements server push, which lets the server take a more active role in the conversation. Until now, you had to keep sending requests, and the server had to interpret the headers you churned out every time you asked for information. With HTTP/2, the server can push resources it already knows you will need, such as the stylesheet referenced by the page you just requested, without waiting for you to ask. (The second sketch below shows what that looks like on the server side.)

Lastly, HTTP/2 multiplexes your requests. HTTP/1.1 had a problem: requests on a connection were answered strictly in order, which led to “head-of-line blocking”. A server’s throughput was limited by the fact that the first request in the queue had to finish before the ones behind it could be served, so one slow response held up everything waiting in line. With HTTP/2, multiple requests and responses are interleaved over a single connection and handled at the same time. (The third sketch below fires several requests down one connection.)

With this combination of cures, HTTP/2 does everything it can to avoid slowdowns caused by the protocol itself. This will be particularly advantageous for websites running on smaller servers that aren’t connected to as much bandwidth as the ones behind Facebook and Google. If you have questions or ideas, be sure to leave a comment with your thoughts!
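Here is a minimal sketch of header compression using Go’s golang.org/x/net/http2/hpack package, which implements HPACK, the header compression format HTTP/2 uses. The header values and byte counts are illustrative examples, not measurements of any real browser or site.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	// A typical set of request headers, which HTTP/1.1 would resend
	// as plain text on every single request.
	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/index.html"},
		{Name: ":scheme", Value: "https"},
		{Name: ":authority", Value: "example.com"},
		{Name: "user-agent", Value: "Mozilla/5.0 (X11; Linux x86_64)"},
		{Name: "accept-encoding", Value: "gzip, deflate, br"},
	}

	// Rough size of the same headers written out as "Name: Value\r\n" text.
	var plain int
	for _, h := range headers {
		plain += len(h.Name) + len(h.Value) + 4
	}

	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)
	for _, h := range headers {
		enc.WriteField(h) // indexes and Huffman-codes each field
	}
	fmt.Printf("plain text: ~%d bytes, HPACK: %d bytes\n", plain, buf.Len())

	// Encode the same fields again: repeated headers now hit the
	// dynamic table and collapse to short index references.
	buf.Reset()
	for _, h := range headers {
		enc.WriteField(h)
	}
	fmt.Printf("repeat request with HPACK: %d bytes\n", buf.Len())
}
```

The second encode is far smaller than the first because HPACK remembers previously sent fields in a dynamic table, which is exactly the saving that repeated requests to the same site enjoy.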
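Next, a server-side sketch of server push, assuming Go’s standard net/http package, which speaks HTTP/2 automatically when serving over TLS. The certificate file names and the /style.css resource are placeholders for whatever your site actually serves.

```go
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// If the client connected over HTTP/2, the ResponseWriter also
	// implements http.Pusher, and we can send it a resource it has
	// not asked for yet.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/style.css", nil); err != nil {
			log.Printf("push failed: %v", err)
		}
	}
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/style.css"></head><body>Hello</body></html>`))
}

func main() {
	http.HandleFunc("/", handler)
	http.Handle("/style.css", http.FileServer(http.Dir(".")))
	// cert.pem and key.pem are placeholder file names; net/http
	// negotiates HTTP/2 on this TLS listener by default.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```

The Push call tells the client that /style.css is already on its way, so the browser does not have to parse the HTML, discover the stylesheet, and send a separate request for it.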
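Finally, a client-side sketch of multiplexing: several requests fired concurrently at a hypothetical https://example.com. When the server supports HTTP/2, Go’s default client negotiates it over TLS, so these requests can share one connection and be interleaved instead of queuing behind one another; resp.Proto reports which protocol version was actually used.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// Hypothetical resources making up one page load.
	paths := []string{"/", "/style.css", "/app.js", "/logo.png"}

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			resp, err := http.Get("https://example.com" + p)
			if err != nil {
				fmt.Println(p, "error:", err)
				return
			}
			defer resp.Body.Close()
			fmt.Println(p, "→", resp.Proto, resp.Status)
		}(p)
	}
	wg.Wait()
}
```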