
When does TCP congestion control not work?

Fairness. Fairness means that when congestion occurs, each source (or each TCP connection or UDP flow established by the same source) can fairly share the same network resources, such as bandwidth and buffer space.

Sources at the same level should receive the same share of network resources.

The root of the fairness problem is that congestion inevitably leads to packet loss, packet loss forces data flows to compete for limited network resources, and flows with weaker competitive ability suffer more.

Conversely, where there is no congestion, there is no fairness problem.

The fairness problem at the TCP layer shows up in two ways. (1) Connection-oriented TCP and connectionless UDP react to congestion indications differently when congestion occurs, which leads to unfair use of network resources.

When congestion occurs, a TCP flow with a congestion-control response mechanism enters the congestion avoidance phase as its congestion control algorithm dictates, actively reducing the amount of data it sends into the network.

Connectionless UDP, however, has no end-to-end congestion control mechanism, so even when the network issues congestion indications (such as packet loss or duplicate ACKs), UDP does not reduce the amount of data it sends into the network the way TCP does.

As a result, TCP flows that obey congestion control get fewer and fewer network resources, while UDP flows without congestion control get more and more, which leads to a seriously unfair distribution of network resources among sources.

This unfair allocation of network resources in turn aggravates congestion and may even lead to congestion collapse.

Therefore, how to determine whether each data flow strictly obeys TCP congestion control when congestion occurs, and how to "punish" behavior that does not comply with the congestion control protocol, has become a hot topic in congestion control research.

The fundamental way to solve the fairness problem of congestion control at the transport layer is for every flow to use an end-to-end congestion control mechanism.
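To make the contrast concrete, here is a minimal toy model in Python (my own sketch with assumed numbers, not taken from the article): a fixed-capacity bottleneck is shared by a TCP-like AIMD sender and a UDP-like sender that never reacts to congestion.

    # Toy model (illustrative assumptions): capacity and rates are in packets per RTT;
    # only the TCP-like flow reacts to congestion signals.
    CAPACITY = 100      # bottleneck capacity
    udp_rate = 60       # UDP-like flow: constant sending rate, never backs off
    cwnd = 10           # TCP-like flow: congestion window

    for rtt in range(20):
        congested = cwnd + udp_rate > CAPACITY
        if congested:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on congestion
        else:
            cwnd += 1                  # additive increase otherwise
        print(f"RTT {rtt:2d}: tcp cwnd={cwnd:3d}  udp rate={udp_rate}  congested={congested}")

In this toy run the TCP-like flow oscillates well below the UDP-like flow's fixed 60 packets per RTT, which is exactly the unfair split described above.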

(2) There are also fairness issues between some TCP connections.

The cause is that some TCP connections use a larger window before congestion occurs, or have a smaller RTT, or send larger packets than other connections, and therefore occupy more bandwidth.

RTT unfairness. The AIMD congestion window update strategy also has a flaw: the additive-increase step has the sender grow a flow's congestion window by one packet per round-trip time (RTT). Therefore, when different flows compete for the bottleneck bandwidth, the congestion window of a TCP flow with a smaller RTT grows faster than that of a flow with a larger RTT, so it occupies more of the network bandwidth.
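A small sketch (again my own, with assumed numbers) illustrates this RTT unfairness: two AIMD flows share a 100-packet bottleneck, flow A with a 10 ms RTT and flow B with a 40 ms RTT. Each flow adds one packet per its own RTT and both halve their windows on a shared loss event.

    CAPACITY = 100
    flows = {"A (RTT 10 ms)": {"rtt_ms": 10, "cwnd": 1},
             "B (RTT 40 ms)": {"rtt_ms": 40, "cwnd": 1}}

    for t in range(0, 2000, 10):                        # simulate 2 s in 10 ms ticks
        for f in flows.values():
            if t % f["rtt_ms"] == 0:                    # additive increase: +1 per own RTT
                f["cwnd"] += 1
        if sum(f["cwnd"] for f in flows.values()) > CAPACITY:
            for f in flows.values():                    # shared loss event: both halve
                f["cwnd"] = max(1, f["cwnd"] // 2)

    for name, f in flows.items():
        print(name, "final cwnd:", f["cwnd"])

Flow A recovers four times as fast after every loss, so over time it captures most of the bottleneck bandwidth.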

Additional notes: the line quality between China and the United States is not very good, with a long RTT and frequent packet loss.

Packet loss is both the reason TCP exists and the reason it struggles: TCP was designed to provide reliable transmission over unreliable links, that is, to cope with packet loss, yet packet loss also greatly reduces TCP's transmission speed.

The HTTP protocol uses TCP at the transport layer, so the download speed of a web page depends on the speed of a single-threaded TCP download (because a web page is downloaded in a single thread).

The main reason packet loss greatly reduces TCP transmission speed is the loss-retransmission mechanism, and what controls this mechanism is the TCP congestion control algorithm.
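The combined effect of RTT and loss on a single connection can be estimated with the well-known Mathis approximation, throughput ≈ MSS / (RTT × √p). The numbers below are assumptions chosen only to illustrate a long, lossy trans-Pacific path; they are not measurements from the article.

    import math

    MSS = 1460        # bytes per segment (typical)
    rtt = 0.200       # 200 ms round-trip time (assumed)
    loss = 0.01       # 1% packet loss rate (assumed)

    throughput_bytes = MSS / (rtt * math.sqrt(loss))    # rough upper bound, bytes/s
    print(f"~{throughput_bytes * 8 / 1e6:.2f} Mbit/s")  # about 0.58 Mbit/s for these inputs

Even a modest loss rate caps a single long-RTT connection far below the line rate, which is why the single-threaded page download mentioned above suffers.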

The Linux kernel provides several TCP congestion control algorithms; the ones that have been loaded into the kernel can be listed via the kernel parameter net.ipv4.tcp_available_congestion_control.
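On a Linux host this can also be checked from Python. The sketch below (my own, assuming Linux and Python 3.6+, where socket.TCP_CONGESTION is exposed) reads the sysctl values from /proc and shows how a single socket can request a specific algorithm.

    import socket

    def read_sysctl(name):
        # sysctl parameters are also exposed as files under /proc/sys/
        with open("/proc/sys/net/ipv4/" + name) as f:
            return f.read().strip()

    print("available:", read_sysctl("tcp_available_congestion_control"))
    print("default:  ", read_sysctl("tcp_congestion_control"))

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Pick an algorithm for this socket only; it must appear in the "available" list
    # (loading additional algorithms may require modprobe and root privileges).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
    print("this socket:", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16).split(b"\0")[0])
    s.close()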

1. Vegas. In 1994, Brakmo proposed a new congestion control mechanism, TCP Vegas, which controls congestion from a different angle.

As noted above, standard TCP congestion control is loss-based: the congestion window is adjusted only after packet loss occurs, yet packet loss does not necessarily mean the network has become congested. Because the RTT value is closely related to how the network is running, TCP Vegas instead uses changes in the RTT to judge whether the network is congested, and adjusts the congestion window accordingly.

If it finds the RTT increasing, Vegas concludes that the network is becoming congested and starts to reduce the congestion window; if the RTT decreases, Vegas concludes that congestion is easing and increases the congestion window again.

Because Vegas judges the available bandwidth of the network from RTT changes rather than from packet loss, it can detect the available bandwidth more accurately and therefore achieve better efficiency.
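The core of the Vegas idea can be paraphrased in a few lines. This is a simplified sketch with assumed alpha/beta thresholds of 2 and 4 segments, not the exact published algorithm: estimate how many extra segments are queued in the network from the gap between the expected and the actual rate, then nudge the window once per RTT.

    def vegas_update(cwnd, base_rtt, current_rtt, alpha=2, beta=4):
        expected = cwnd / base_rtt                 # rate if nothing were queued (segments/s)
        actual = cwnd / current_rtt                # rate actually achieved
        queued = (expected - actual) * base_rtt    # ~ extra segments sitting in queues
        if queued < alpha:                         # RTT close to base: room to grow
            return cwnd + 1
        if queued > beta:                          # RTT inflated: queues building, back off
            return cwnd - 1
        return cwnd                                # in between: hold steady

    # Example: base RTT 100 ms; as the measured RTT rises, growth stops and the window shrinks.
    cwnd = 10
    for rtt_ms in (100, 105, 120, 160, 200):
        cwnd = vegas_update(cwnd, 0.100, rtt_ms / 1000.0)
        print(f"measured RTT {rtt_ms} ms -> cwnd {cwnd}")

While the measured RTT stays near the base RTT the window keeps growing; once the RTT inflates, the estimated queue exceeds beta and the window is reduced before any loss occurs.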

However, Vegas has a flaw, and arguably a fatal one, which ultimately kept TCP Vegas from being deployed at large scale on the Internet. (The commonly cited reason is that when Vegas shares a bottleneck with loss-based TCP flows, those flows keep filling the queues and inflating the RTT, so Vegas keeps backing off and is squeezed down to far less than its fair share.)