Although its name was initially proposed as the acronym for "Quick UDP Internet Connections", in the IETF's use of the word, QUIC is not an acronym; it is simply the name of the protocol.[3][8][1] QUIC improves performance of connection-oriented web applications that are currently using Transmission Control Protocol (TCP).[2][9] It does this by establishing a number of multiplexed connections between two endpoints using User Datagram Protocol (UDP), and is designed to obsolete TCP at the transport layer for many applications, thus earning the protocol the occasional nickname "TCP/2".[14]
QUIC works hand-in-hand with HTTP/3's multiplexed connections, allowing multiple streams of data to reach all the endpoints independently, so that packet loss on one stream does not delay the others. In contrast, HTTP/2 hosted on TCP can suffer head-of-line blocking delays if multiple streams are multiplexed on a TCP connection and any of the TCP packets on that connection are delayed or lost.
QUIC's secondary goals include reduced connection and transport latency, and bandwidth estimation in each direction to avoid congestion. It also moves congestion control algorithms into the user space at both endpoints, rather than the kernel space, which it is claimed[by whom?] will allow these algorithms to improve more rapidly. Additionally, the protocol can be extended with forward error correction (FEC) to further improve performance when errors are expected, and this is seen as the next step in the protocol's evolution. It has been designed to avoid protocol ossification so that it remains evolvable, unlike TCP, which has suffered significant ossification.
In June 2015, an Internet Draft of a specification for QUIC was submitted to the IETF for standardization.[15][16] A QUIC working group was established in 2016.[17] In October 2018, the IETF's HTTP and QUIC Working Groups jointly decided to call the HTTP mapping over QUIC "HTTP/3" in advance of making it a worldwide standard.[18] In May 2021, the IETF standardized QUIC in RFC 9000, supported by RFC 8999, RFC 9001 and RFC 9002.[19] DNS-over-QUIC is another application.
Transmission Control Protocol, or TCP, aims to provide an interface for sending streams of data between two endpoints. Data is handed to the TCP system, which ensures the data makes it to the other end in exactly the same form, or the connection will indicate that an error condition exists.[20]
To do this, TCP breaks up the data into network packets and adds small amounts of data to each packet. This additional data includes a sequence number that is used to detect packets that are lost or arrive out of order, and a checksum that allows errors within the packet data to be detected. When either problem occurs, TCP uses automatic repeat request (ARQ) to tell the sender to re-send the lost or damaged packet.[20]
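The sketch below illustrates these two mechanisms with a simplified, stand-alone Python example: it packs a 20-byte TCP-style header containing a sequence number and computes the Internet checksum over it. The port and sequence values are arbitrary, and the real TCP checksum also covers an IP pseudo-header and the payload, which are omitted here for brevity.

    import struct

    def internet_checksum(data: bytes) -> int:
        """One's-complement sum of 16-bit words, as used by TCP/IP."""
        if len(data) % 2:
            data += b"\x00"                       # pad to an even length
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:                        # fold the carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # A minimal 20-byte TCP-style header: ports, sequence/acknowledgement
    # numbers, data offset and flags, window, checksum, urgent pointer.
    header = struct.pack(
        "!HHIIHHHH",
        51334,               # source port (arbitrary ephemeral port)
        443,                 # destination port
        1_000_000,           # sequence number: lets the receiver order segments
        0,                   # acknowledgement number
        (5 << 12) | 0x0002,  # data offset = 5 words, SYN flag set
        65535,               # receive window
        0,                   # checksum placeholder, zero while computing
        0,                   # urgent pointer
    )
    print(hex(internet_checksum(header)))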
In most implementations, TCP will see any error on a connection as a blocking operation, stopping further transfers until the error is resolved or the connection is considered failed. If a single connection is being used to send multiple streams of data, as is the case in the HTTP/2 protocol, all of these streams are blocked even though only one of them might have a problem. For instance, if a single error occurs while downloading a GIF image used for a favicon, the entire rest of the page will wait while that problem is resolved.[20] This phenomenon is known as head-of-line blocking.
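The following toy model (not real TCP code) shows why this happens: TCP delivers bytes strictly in order, so once segment 2 is lost, later segments cannot be handed to the application even though they belong to other HTTP/2 streams. The stream names and segment contents are invented for illustration.

    # Segments received on one TCP connection carrying three HTTP/2 streams;
    # segment 2 (part of the favicon) was lost in transit.
    received = {
        1: ("stream A", "html part 1"),
        3: ("stream B", "favicon part 2"),
        4: ("stream C", "stylesheet"),
        5: ("stream A", "html part 2"),
    }
    next_expected = 1
    delivered = []
    while next_expected in received:              # in-order delivery only
        delivered.append(received[next_expected])
        next_expected += 1
    print(delivered)                              # only segment 1 is delivered
    print("blocked waiting for segment", next_expected)  # everything else waits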
As the TCP system is designed to look like a "data pipe", or stream, it deliberately contains little understanding of the data it transmits. If that data has additional requirements, like encryption using TLS, this must be set up by systems running on top of TCP, using TCP to communicate with similar software on the other end of the connection. Each of these sorts of setup tasks requires its own handshake process. This often requires several round-trips of requests and responses until the connection is established. Due to the inherent latency of long-distance communications, this can add significant overhead to the overall transmission.[20]
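As a rough, illustrative calculation (the 100 ms round-trip time is assumed, not taken from the cited sources), stacking a TCP handshake and a full TLS handshake before the first HTTP request costs several round trips:

    # Handshake round trips before the first byte of application data,
    # assuming an illustrative 100 ms round-trip time.
    rtt_ms = 100
    tcp = 1        # SYN, SYN-ACK, ACK
    tls12 = 2      # full TLS 1.2 handshake
    tls13 = 1      # full TLS 1.3 handshake
    print("TCP + TLS 1.2:", (tcp + tls12) * rtt_ms, "ms")   # 300 ms
    print("TCP + TLS 1.3:", (tcp + tls13) * rtt_ms, "ms")   # 200 ms
    print("QUIC (combined handshake):", 1 * rtt_ms, "ms")   # 100 ms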
TCP has suffered from protocol ossification,[21] due to its wire image being in cleartext and hence visible to and malleable by middleboxes.[22] One measurement found that a third of paths across the Internet encounter at least one intermediary that modifies TCP metadata, and 6.5% of paths encounter harmful ossifying effects from intermediaries.[23] Extensions to TCP have been affected: the design of Multipath TCP (MPTCP) was constrained by middlebox behaviour,[24][25] and the deployment of TCP Fast Open has been likewise hindered.[26][21]
Characteristics
In the context of supporting encrypted HTTP traffic, QUIC serves a similar role as TCP, but with reduced latency during connection setup and more efficient loss recovery when multiple HTTP streams are multiplexed over a single connection. It does this primarily through two changes that rely on the understanding of the behaviour of HTTP traffic.[20]
The first change is to greatly reduce overhead during connection setup. As most HTTP connections will demand TLS, QUIC makes the exchange of setup keys and supported protocols part of the initial handshake process. When a client opens a connection, the response packet includes the data needed for future packets to use encryption. This eliminates the need to set up the TCP connection and then negotiate the security protocol via additional packets. Other protocols can be serviced in the same way, combining multiple steps into a single request–response pair. This data can then be used both for subsequent requests in the initial setup and for future requests that would otherwise be negotiated as separate connections.[20]
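This combined setup is visible in client libraries that expose QUIC directly. The sketch below uses the third-party Python library aioquic (its asyncio connect() helper and QuicConfiguration class); the host name, port and ALPN value are illustrative, certificate verification is disabled purely to keep the example short, and this is a minimal sketch rather than a recommended usage pattern.

    import asyncio
    import ssl

    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main() -> None:
        # One connect() call performs the combined transport and TLS 1.3
        # handshake; there is no separate "open TCP, then negotiate TLS" step.
        configuration = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
        configuration.verify_mode = ssl.CERT_NONE   # illustration only
        async with connect("example.com", 443, configuration=configuration) as client:
            await client.ping()                     # connection is usable immediately

    asyncio.run(main())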
The second change is to use UDP, which does not itself provide loss recovery, rather than TCP as its basis. Instead, each QUIC stream is separately flow controlled and lost data is retransmitted at the level of QUIC, not UDP. This means that if an error occurs in one stream, like the favicon example above, the protocol stack can continue servicing other streams independently. This can be very useful in improving performance on error-prone links, as in most cases considerable additional data may be received before TCP notices a packet is missing or broken, and all of this data is blocked or even flushed while the error is corrected. In QUIC, this data is free to be processed while the single affected stream is repaired.[27]
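A toy model of per-stream delivery (again not real QUIC code, with invented stream names) shows the difference from the TCP example above: each stream reassembles its own data, so a gap in one stream leaves the others untouched.

    # Packets received for three independent QUIC streams; packet 1 of the
    # favicon stream was lost, but the other streams are unaffected.
    streams = {
        "stream A (html)":    {1: "html part 1", 2: "html part 2"},
        "stream B (favicon)": {2: "favicon part 2"},   # part 1 was lost
        "stream C (css)":     {1: "stylesheet"},
    }
    for name, segments in streams.items():
        delivered, nxt = [], 1
        while nxt in segments:              # each stream is ordered separately
            delivered.append(segments[nxt])
            nxt += 1
        print(name, "delivered:", delivered)
    # Streams A and C are fully delivered; only stream B waits for a retransmission.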
QUIC includes a number of other changes that improve overall latency and throughput. For instance, the packets are encrypted individually, so that decryption does not have to wait for partial packets. This is not generally possible under TCP, where the encryption records are in a bytestream and the protocol stack is unaware of higher-layer boundaries within this stream. These can be negotiated by the layers running on top, but QUIC aims to do all of this in a single handshake process.[8]
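The idea of per-packet protection can be illustrated with a generic AEAD cipher. The sketch below uses the third-party cryptography package's ChaCha20Poly1305 class; it is not QUIC's actual packet-protection scheme (which derives keys from the TLS 1.3 handshake and additionally protects headers), only an illustration that independently sealed packets can be opened in any order.

    import struct
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    # One key for the connection; each packet gets a unique nonce derived
    # from its packet number, so packets are sealed and opened independently.
    key = ChaCha20Poly1305.generate_key()
    aead = ChaCha20Poly1305(key)

    def seal(packet_number: int, payload: bytes, header: bytes) -> bytes:
        nonce = struct.pack(">4xQ", packet_number)   # 12-byte nonce
        return aead.encrypt(nonce, payload, header)  # header is authenticated

    p3 = seal(3, b"stream data #3", b"header 3")
    p1 = seal(1, b"stream data #1", b"header 1")
    # Packet 3 can be decrypted even though packet 2 never arrived.
    print(aead.decrypt(struct.pack(">4xQ", 3), p3, b"header 3"))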
Another goal of the QUIC system was to improve performance during network-switching events, such as when a user of a mobile device moves from a local Wi-Fi hotspot to a mobile network. When this occurs on TCP, a lengthy process starts in which every existing connection times out one-by-one and is then re-established on demand. To solve this problem, QUIC includes a connection identifier that uniquely identifies the connection to the server regardless of source. This allows the connection to be re-established simply by sending a packet, which always contains this ID, as the original connection ID will still be valid even if the user's IP address changes.[28]
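A toy server-side lookup table (hypothetical identifiers and addresses, not an actual QUIC implementation) illustrates the principle: packets are routed to a connection by the connection ID they carry rather than by the source address, so a change of address does not break the association.

    # connection_id -> per-connection state
    connections: dict = {}

    def handle_packet(connection_id: bytes, source_addr: tuple, payload: bytes) -> dict:
        # Look the connection up by its ID, not by the (address, port) pair.
        session = connections.setdefault(connection_id, {"packets": 0})
        session["last_seen_from"] = source_addr   # the address may change freely
        session["packets"] += 1
        return session

    handle_packet(b"\x1d\x7a", ("192.0.2.10", 51334), b"request")             # on Wi-Fi
    state = handle_packet(b"\x1d\x7a", ("198.51.100.7", 40022), b"request")   # on a mobile network
    print(state["packets"])   # 2 -- the same connection despite the address change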
QUIC can be implemented in the application space, as opposed to being in the operating system kernel. This generally invokes additional overhead due to context switches as data is moved between the application and the kernel. However, in the case of QUIC, the protocol stack is intended to be used by a single application, with each application using QUIC having its own connections hosted on UDP. Ultimately the difference could be very small because much of the overall HTTP/2 stack is already in the applications (or their libraries, more commonly). Placing the remaining parts in those libraries, essentially the error correction, has little effect on the HTTP/2 stack's size or overall complexity.[8]
This organization allows future changes to be made more easily as it does not require changes to the kernel for updates. One of QUIC's longer-term goals is to add new systems for forward error correction (FEC) and improved congestion control.[28]
One concern about the move from TCP to UDP is that TCP is widely adopted and many of the "middleboxes" in the internet infrastructure are tuned for TCP and rate-limit or even block UDP. Google carried out a number of exploratory experiments to characterize this and found that only a small number of connections were blocked in this manner.[3] This led to the use of a rapid fallback-to-TCP system; Chromium's network stack opens both a QUIC and traditional TCP connection at the same time, which allows it to fall back with negligible latency.[29]
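The fallback idea can be sketched as racing two connection attempts and keeping whichever finishes first, in the spirit of "happy eyeballs". The example below is a simplified illustration, not Chromium's implementation: the two attempt coroutines are placeholders (real code would perform an actual QUIC handshake and an actual TCP connect), and the sleep durations are invented.

    import asyncio

    async def attempt_quic(host: str) -> str:
        await asyncio.sleep(0.05)        # placeholder for a real QUIC handshake
        return "quic"

    async def attempt_tcp(host: str) -> str:
        await asyncio.sleep(0.08)        # placeholder for a real TCP connect
        return "tcp"

    async def dial(host: str) -> str:
        tasks = {asyncio.create_task(attempt_quic(host)),
                 asyncio.create_task(attempt_tcp(host))}
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:             # cancel the slower attempt
            task.cancel()
        return done.pop().result()

    print(asyncio.run(dial("example.com")))   # "quic" unless QUIC is blocked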
QUIC has been specifically designed to be deployable, evolvable and to have anti-ossification properties;[30] it is the first IETF transport protocol to deliberately minimise its wire image for these ends.[31] Beyond encrypted headers, it is 'greased'[32] and it has protocol invariants explicitly specified.[33]
The protocol that was created by Google and taken to the IETF under the name QUIC (which by 2012 had already reached around version 20) is quite different from the QUIC that has continued to evolve and be refined within the IETF. The original Google QUIC was designed to be a general-purpose protocol, though it was initially deployed as a protocol to support HTTP(S) in Chromium. The current evolution of the IETF QUIC protocol is a general-purpose transport protocol. Chromium's developers have continued to track the evolution of IETF QUIC's standardization efforts so that Chromium adopts and fully complies with the most recent Internet standards for QUIC.
Applications
QUIC was developed with HTTP in mind, and HTTP/3 was its first application.[34][35] DNS-over-QUIC is an application of QUIC to name resolution, providing security for data transferred between resolvers similar to DNS-over-TLS.[36] The IETF is developing applications of QUIC for secure network tunnelling[35] and streaming media delivery.[37] XMPP has experimentally been adapted to use QUIC.[38] Another application is SMB over QUIC, which, according to Microsoft, can offer an "SMB VPN" without affecting the user experience.[39] SMB clients use TCP by default and attempt QUIC only if the TCP attempt fails or if QUIC is explicitly required.
Adoption
Browser support
The QUIC code was experimentally developed in Google Chrome starting in 2012,[4] and was announced as part of Chromium version 29 (released on August 20, 2013).[18] It is currently enabled by default in Chromium and Chrome.[40]
Apple added experimental support in the WebKit engine through the Safari Technology Preview 104 in April 2020.[42] Official support was added in Safari 14, included in macOS Big Sur and iOS 14,[43] but the feature needed to be turned on manually.[44] It was later enabled by default in Safari 16.[13]
Client support
The cronet library for QUIC and other protocols is available to Android applications as a module loadable via Google Play Services.[45]
cURL 7.66, released 11 September 2019, supports HTTP/3 (and thus QUIC).[46][47]
In October 2020, Facebook announced[48] that it had successfully migrated its apps, including Instagram, and its server infrastructure to QUIC, with 75% of its Internet traffic already using QUIC. All mobile apps from Google support QUIC, including YouTube and Gmail.[49][50] Uber's mobile app also uses QUIC.[50]
Server support
As of 2017[update], there are several actively maintained implementations. Google servers support QUIC and Google has published a prototype server.[51] Akamai Technologies has been supporting QUIC since July 2016.[52][53] A Go implementation called quic-go[54] is also available, and powers experimental QUIC support in the Caddy server.[55] On July 11, 2017, LiteSpeed Technologies officially began supporting QUIC in their load balancer (WebADC)[56] and LiteSpeed Web Server products.[57] As of October 2019[update], 88.6% of QUIC websites used LiteSpeed and 10.8% used Nginx.[58] Although at first only Google servers supported HTTP-over-QUIC connections, Facebook also launched the technology in 2018,[18] and Cloudflare has been offering QUIC support on a beta basis since 2018.[59] The HAProxy load balancer added experimental support for QUIC in March 2022[60] and declared it production-ready in March 2023.[61] As of April 2023[update], 8.9% of all websites use QUIC,[62] up from 5% in March 2021. Microsoft Windows Server 2022 supports both HTTP/3[63] and SMB over QUIC[64][10] protocols via MsQuic. The Application Delivery Controller of Citrix (Citrix ADC, NetScaler) can function as a QUIC proxy since version 13.[65][66]
In addition, there are several stale community projects: libquic[67] was created by extracting the Chromium implementation of QUIC and modifying it to minimize dependency requirements, and goquic[68] provides Go bindings of libquic. Finally, quic-reverse-proxy[69] is a Docker image that acts as a reverse proxy server, translating QUIC requests into plain HTTP that can be understood by the origin server.
.NET 5 introduces experimental support for QUIC using the MsQuic library.[70]
Source code
QUIC or gQUIC implementations available in source form
This is the source code of the Chrome web browser and the reference gQUIC implementation. It contains standalone gQUIC and QUIC client and server programs that can be used for testing, and the source code is browsable. This version is also the basis of LINE's stellite and Google's cronet.
A cross-platform QUIC implementation from Microsoft, designed to be a general-purpose QUIC library. It is used in Windows and cross-platform by .NET. Rust and C# interop layers are available, as well as convenience C++ wrapper classes.