Monday, September 13, 2010

Sept 13 Congestion Control papers

Random Early Detection Gateways for Congestion Avoidance

 The Internet had a serious congestion problem because endpoints kept pushing packets into the network until packets were dropped. This is inefficient and unfair: some connections send many small packets and fill router queues at everyone else's expense. The problem is real, as the congestion collapses on the 1980s backbone showed. Floyd and Jacobson propose predicting incipient congestion and signaling it to the source early, so that traffic is regulated before queues overflow.
  RED differs from other congestion avoidance mechanisms in that it accounts for how much bandwidth each connection uses. An endpoint that sends more packets, and therefore uses more bandwidth, is more likely to have a packet marked or dropped. This addresses a weakness of the random-drop and drop-tail mechanisms: because the probability of being marked grows with how much you send, connections that are not the main cause of the congestion are largely spared.
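The marking behavior described above can be sketched in a few lines. This is a minimal illustration of RED's arrival-time decision, not the paper's exact pseudocode; the class name and default parameter values are my own choices.

```python
import random

class RedQueue:
    """Sketch of a RED gateway's mark/drop decision (illustrative parameters)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.02, wq=0.002):
        self.min_th, self.max_th = min_th, max_th  # thresholds on avg queue (pkts)
        self.max_p = max_p    # marking probability as avg approaches max_th
        self.wq = wq          # EWMA weight for the average queue size
        self.avg = 0.0        # exponentially weighted average queue size
        self.count = 0        # packets since the last mark/drop

    def on_arrival(self, queue_len):
        """Return True if this arriving packet should be marked/dropped."""
        # Low-pass filter: avg tracks the long-term queue, not short bursts.
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True
        # Between thresholds: base probability grows linearly with avg ...
        pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        # ... and is adjusted by the packets since the last mark, which spaces
        # marks out.  Since every arrival runs this test, heavy senders face
        # it more often and are proportionally more likely to be hit.
        pa = pb / max(1e-9, 1 - self.count * pb)
        self.count += 1
        if random.random() < min(1.0, pa):
            self.count = 0
            return True
        return False
```

Because the decision is made per arriving packet, a connection contributing twice the packets sees roughly twice the marks, which is the fairness property discussed above.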
  The biggest problem with this kind of traffic control is that endpoints must cooperate with the RED gateway. An endpoint can simply ignore the congestion signal, and the congestion persists. It is easy to cheat the system, especially with the widespread use of UDP these days. The paper assumes RED is deployed with TCP endpoints that respond to marks and drops; unresponsive UDP flows would use the bandwidth unfairly. As the Internet gains more applications with time-sensitive data, RED's assumptions could easily be violated by the greed for performance.


Why Flow-Completion Time is the Right Metric for Congestion Control and Why This Means We Need New Algorithms
  This is an interesting paper that critiques TCP's conservative approach to allocating bandwidth. The delay caused by TCP's slow start is clearly noticeable when packet RTTs get large. We use the Internet almost every day, pulling information from all over the globe, clicking here and there, and we get frustrated when loading a webpage takes more than a few seconds. Downloading a PDF document or an image is relatively fast, but we know it could be much faster without the slow-start phase in TCP. Now that the Internet is used almost every place we walk by, the scalability of congestion control is being tested.
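The slow-start cost is easy to quantify. The helper below is my own back-of-the-envelope sketch (not from the paper): it counts how many round trips an idealized loss-free slow start needs to deliver a flow, assuming the congestion window starts at one packet and doubles every RTT.

```python
def slow_start_rtts(flow_pkts, init_cwnd=1):
    """RTTs to deliver flow_pkts under idealized slow start.

    Assumptions (mine, for illustration): no losses, cwnd doubles each
    RTT, and the flow is never limited by the link's actual capacity.
    """
    sent, cwnd, rtts = 0, init_cwnd, 0
    while sent < flow_pkts:
        sent += cwnd   # one window's worth of packets per round trip
        cwnd *= 2      # exponential window growth
        rtts += 1
    return rtts
```

For example, a 30-packet web object needs 5 RTTs (1 + 2 + 4 + 8 + 16 packets) even with zero loss, whereas a sender told its fair rate up front could finish in roughly one RTT. That multiple-RTT tax on short flows is exactly the frustration described above.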
  The main idea of the Rate Control Protocol (RCP) is simple: every flow gets an equal share of the bandwidth to send its packets, instead of fighting for the resource as in TCP. Just like round-robin scheduling in an OS, everybody gets an equal share, adjusted for the number of flows on the link. Ideally, this is what TCP tries to achieve through its AIMD phase; TCP is simply more conservative than RCP because of the slow-start phase.
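A router can approximate that equal share without counting flows: it nudges a single advertised rate up when the link is underused and down when traffic or queueing builds. The sketch below is my reading of RCP's rate-update rule; the function name, parameter names, and the positive-rate floor are my own illustrative choices.

```python
def rcp_update(R, C, y, q, d0, T, alpha=0.1, beta=1.0):
    """One RCP-style update of the rate R advertised to every flow.

    C  - link capacity, y - measured input traffic rate over the last
    interval, q - standing queue, d0 - average RTT, T - update interval.
    Spare capacity (C - y) pushes the rate up; a standing queue pulls
    it down, draining q over roughly one RTT.
    """
    R_new = R * (1 + (T / d0) * (alpha * (C - y) - beta * q / d0) / C)
    return max(R_new, 0.01 * C)  # keep the rate positive (floor is my choice)
```

Every packet carries the current R in its header, so a newly arriving flow starts at the fair rate immediately, which is exactly what removes the slow-start penalty.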
  Personally, I am skeptical of RCP because it aims for short completion times without considering the traffic it will bring to the Internet. TCP adopted a slow-start phase precisely because it did not want to congest the network by blasting large amounts of data, only to discover the packets were being dropped. Nowadays, however, improvements at the link layer have shortened RTTs and provide much faster feedback than when TCP was designed, so RCP is more relevant to today's powerful Internet backbones. Still, as many flows converge on one link, the link can bottleneck more abruptly, because RCP assigns the same rate to every arriving flow, whereas TCP degrades gradually. Flow-completion time is important, but I feel that emphasizing completion time alone can lead to much heavier traffic, because RCP will synchronize the endpoints and bring frustration to more users.
