To observe common flow control and error control mechanisms used in networks.
You will examine how TCP controls its rate of transmission, a mechanism known as flow control. All protocols need some method to ensure that transmitted information does not overrun the receiver. We will watch the sliding window form and grow through the process of slow start.
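Before capturing real traffic, it can help to see the shape of slow start in isolation. The following is a toy simulation, not a model of any particular TCP stack: the initial window of 1 segment and the slow-start threshold (ssthresh) of 16 are illustrative assumptions. The congestion window doubles each round trip until it reaches ssthresh, then grows linearly (congestion avoidance).

```python
def slow_start(ssthresh=16, rtts=8):
    """Return the congestion window size (in segments) at each RTT."""
    cwnd = 1          # initial congestion window, in segments (assumed)
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2     # exponential growth: slow start
        else:
            cwnd += 1     # linear growth: congestion avoidance
    return history

print(slow_start())   # [1, 2, 4, 8, 16, 17, 18, 19]
```

The exponential phase is what you should recognize in the capture: each burst of segments is roughly twice the size of the previous one until the threshold is reached.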
Explanation and Background
The TCP/IP Protocol Suite has two protocols at the Transport layer, both of which are designed to operate with applications on the upper layers.
UDP (User Datagram Protocol) is a datagram service, designed to carry small messages between applications with as little overhead as possible. The only information carried in the UDP header is the source and destination port numbers, the length of the datagram, and a checksum. UDP makes no guarantees about whether the data will arrive, how quickly it will arrive, or whether it will arrive in any particular order. It is called a pass-through protocol because it does nothing other than pass along the data it is given. It is used for audio, video, and other real-time data because of its low overhead and because packets lost from such data flows must be ignored anyway.
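A minimal sketch of the datagram model, using Python's standard socket API over loopback: one sendto() call produces exactly one datagram, with no connection setup, no acknowledgments, and no delivery guarantee. The port number 9999 is an arbitrary choice for this example.

```python
import socket

# Receiver: bind to a loopback address and wait for one datagram.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 9999))

# Sender: no connection is established; the datagram is simply sent.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", 9999))

# One recvfrom() returns exactly one datagram (or blocks if it was lost).
data, addr = recv.recvfrom(2048)
print(data)    # b'hello'

send.close()
recv.close()
```

Over loopback this exchange is reliable in practice; across a real network, nothing in UDP itself would detect or repair the loss of that datagram.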
TCP (Transmission Control Protocol), on the other hand, is a byte stream service that guarantees that a stream of bytes of any length submitted at the source will be exactly duplicated at the destination. To do this, TCP offers several services.
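The byte-stream model can be sketched with two loopback sockets: the bytes written on one side arrive in order and intact on the other, but TCP does not preserve the boundaries between individual send() calls. Port 0 here asks the operating system to pick any free port; it is not a detail of TCP itself.

```python
import socket

# Set up a listening socket and connect to it over loopback.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS choose a port
listener.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
server, _ = listener.accept()

# Two separate writes on the sending side...
client.sendall(b"abc")
client.sendall(b"def")
client.close()    # close so the reader sees end-of-stream

# ...are read back as one continuous stream on the receiving side.
received = b""
while chunk := server.recv(1024):
    received += chunk
print(received)   # b'abcdef' -- the stream is intact, write boundaries are not

server.close()
listener.close()
```

This is the sense in which TCP "exactly duplicates" the stream: the byte sequence is preserved, not the message structure, which is why applications that need message boundaries must impose their own framing.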
Segmentation: TCP splits the byte stream into pieces, referred to as segments, that will traverse the network. The size of these segments is usually dictated by the layer-two protocol being used (e.g., Ethernet) and is limited by the Maximum Segment Size, or MSS. The MSS is agreed upon as part of the initial handshake when TCP sets up the connection between source and destination.
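As a back-of-the-envelope check, the MSS commonly advertised on an Ethernet link follows directly from the frame's payload capacity, assuming no IP or TCP options are in use: the 1500-byte Ethernet MTU minus the minimum IPv4 and TCP headers (20 bytes each).

```python
# MSS derived from the standard Ethernet MTU, assuming minimum-size
# (option-free) IPv4 and TCP headers.
ETHERNET_MTU = 1500   # bytes of IP payload an Ethernet frame can carry
IP_HEADER = 20        # minimum IPv4 header, no options
TCP_HEADER = 20       # minimum TCP header, no options

mss = ETHERNET_MTU - IP_HEADER - TCP_HEADER
print(mss)            # 1460
```

The value 1460 is the MSS you will most often see advertised in the SYN segments of a capture taken on an Ethernet network.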
Packet Ordering: Packets are sent in order, and that order is maintained and restored