Part 1 - Flow Control
To observe common flow control and error control mechanisms used in networks. You will examine how TCP works in order to demonstrate the method it uses to control the rate of transmission, or flow control. All protocols have methods to ensure that transmitted information does not overrun the receiver. We will watch the sliding window form and grow through the process of slow start.
Explanation and Background
The TCP/IP protocol suite has two protocols at the Transport layer, both of which are designed to serve applications in the upper layers.

UDP (User Datagram Protocol) is a datagram service, designed to carry small messages between applications with as little overhead as possible. The only information carried in the UDP header is the source and destination port numbers, the length of the datagram, and a checksum. UDP makes no guarantees about whether the data will arrive, how quickly it will arrive, or whether it will arrive in any particular order. It is called a pass-through protocol because it does nothing other than pass along the data it is given. It is used for audio, video, and other real-time traffic because of its low overhead and because packets lost from such flows must be skipped over anyway.

TCP (Transmission Control Protocol), on the other hand, is a byte stream service that guarantees that a stream of bytes of any length accepted at the source will be exactly duplicated at the destination. To do this, TCP offers several services.

Segmentation: TCP splits the byte stream into pieces, referred to as segments, that will traverse the network. The size of these segments is usually constrained by the layer-two protocol in use (e.g., Ethernet) and is referred to as the Maximum Segment Size, or MSS. The MSS is agreed upon as part of the initial handshake when TCP sets up the connection between source and destination.

Packet Ordering: Segments are sent in order, and that order is restored at the destination even when packets arrive out of sequence or go missing, which they may, since they are carried by IP.

Data Integrity: TCP computes a checksum over each segment, which must be verified at the destination. If the data is corrupt, the segment is discarded and resent. Duplicate segments are discarded.
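The per-segment checksum just described is the standard Internet checksum: a 16-bit ones'-complement sum of the segment's 16-bit words (TCP also covers a pseudo-header, omitted here). A minimal sketch of the core computation, with a function name of our own choosing, might look like this:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, per the RFC 1071 algorithm."""
    if len(data) % 2:                 # pad an odd-length segment with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # next big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF            # ones'-complement of the running sum

# Example words from RFC 1071's worked example:
print(hex(internet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))  # 0x220d
```

A receiver recomputes the sum over the received segment, including the transmitted checksum field; an undamaged segment yields 0xFFFF before the final complement.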
Acknowledgement: By default, TCP uses a cumulative acknowledgement scheme, in which the receiver acknowledges receipt of the entire data stream up to a given point. This scheme is the primary means of error control when packets are lost or corrupted. The scheme most commonly used today is actually selective acknowledgement, or SACK. SACK allows the receiver to specify blocks of segments as having been received while requesting retransmission of the missing segments. It is much more efficient but can only be used when both ends agree to it. Fortunately, most modern TCP/IP stacks support this option. In addition, there are several variants of TCP congestion control, most notably Tahoe and Reno, which support different options.

Retransmission: TCP will retransmit any segments that are not acknowledged or that are received in error.

Flow Control: TCP uses what is called the sliding window within buffers to coordinate sending data between the source and the receiver. If segments are sent too fast and the buffers fill at the destination, causing segments to be discarded, the source will detect this through the acknowledgements and will slow its transmission, typically cutting its sending window in half. As acknowledgements are received again, transmission will speed back up.

Congestion Control: This is a set of processes by which TCP avoids congesting the network, sometimes referred to as network congestion control. TCP uses several algorithms for sending data depending on conditions in the network. In slow start, for instance, TCP begins a byte stream by sending one segment and waiting for a response. When it receives the acknowledgement, it sends two segments and waits, then four, and so on, doubling the window each round trip. Normally, a TCP receiver will acknowledge every...
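The slow-start growth pattern you will watch in the lab can be sketched in a few lines. This is an illustrative simulation of window sizes per round trip, not an implementation of any real stack; the function name and the simplified rule (double until a threshold, then grow by one, no loss handling) are our assumptions:

```python
def slow_start_windows(rounds: int, ssthresh: float = float("inf")) -> list[int]:
    """Illustrative congestion-window sizes (in segments), one per round trip.

    Starts at 1 segment and doubles each round trip (slow start) until the
    window reaches ssthresh, after which it grows by 1 segment per round
    trip (congestion avoidance). Loss and retransmission are omitted.
    """
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return history

# With no threshold, the window doubles every round trip:
print(slow_start_windows(5))              # [1, 2, 4, 8, 16]
# With ssthresh=8, growth turns linear once the threshold is reached:
print(slow_start_windows(7, ssthresh=8))  # [1, 2, 4, 8, 9, 10, 11]
```

The doubling phase is what makes the windows visibly "form and grow" in a packet capture: each acknowledged segment clocks out two more.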