Basic Explanation of User Datagram Protocol (UDP)

UDP, short for User Datagram Protocol, is one of the core members of the Internet protocol suite. It was designed by David P. Reed in 1980. With UDP, computer applications can send messages, called datagrams, to other hosts on an IP network. It uses a simple, connectionless communication model with a minimum of protocol overhead. It is suitable for purposes where error checking and correction are either unnecessary or handled by the application itself; UDP avoids such processing.
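As a quick illustration of this message-oriented model, the sketch below uses Python's standard socket module to send a single datagram over the loopback interface with no connection setup (the message and addresses are arbitrary examples):

```python
import socket

# Create a UDP socket (SOCK_DGRAM) and bind it to the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]

# A second socket sends a datagram; no handshake or connection is needed.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)  # read one whole datagram
print(data)                           # b'hello over UDP'

sender.close()
receiver.close()
```

Note that `sendto` hands the datagram to the network with no acknowledgment; on a real network (unlike loopback) the receiver may never see it.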

UDP Characteristics

The main characteristics of UDP (User Datagram Protocol) are as follows:

  • It cannot deliver a payload larger than 65,507 bytes over IPv4 (65,535 bytes minus the 8-byte UDP header and the 20-byte IP header). UDP is a good protocol for delivering data only when you don't care too much about whether the target receives every packet.
  • It does not guarantee the delivery of data (losses can occur). Applications that use this protocol must be designed to tolerate loss, errors, and duplication; otherwise, they will fail.
  • It will not knowingly deliver corrupt data to the destination. UDP has a 16-bit checksum, which is not the strongest way to detect multi-bit errors but is reasonably effective. It is certainly possible for an undetected error to slip through, but it is unlikely.
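The size limit in the first bullet above follows directly from the 16-bit Total Length field of the IPv4 header; a quick back-of-the-envelope check (the constant names are illustrative):

```python
IP_TOTAL_MAX = 65535   # largest value of the 16-bit IPv4 Total Length field
IPV4_HEADER = 20       # minimum IPv4 header size in bytes
UDP_HEADER = 8         # fixed UDP header size in bytes

# The payload must fit in the IP packet alongside both headers.
max_udp_payload = IP_TOTAL_MAX - IPV4_HEADER - UDP_HEADER
print(max_udp_payload)  # 65507
```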

UDP Header Format


The above diagram represents the UDP header format. The UDP header is a simple, fixed-size 8-byte header: the first 8 bytes of the datagram carry all the necessary header information, and the remaining part consists of data. The header has the following fields:
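To make the 8-byte layout concrete, the sketch below packs and unpacks a hypothetical UDP header with Python's struct module, using four big-endian 16-bit fields (the port numbers, length, and zero checksum are made-up example values):

```python
import struct

# Hypothetical header: source port 5000, destination port 53,
# length 12 (8-byte header + 4 bytes of payload), checksum 0.
header = struct.pack("!HHHH", 5000, 53, 12, 0)
print(len(header))  # 8 -- the fixed UDP header size

# Unpacking recovers the four 16-bit fields in order.
src_port, dst_port, length, checksum = struct.unpack("!HHHH", header)
print(src_port, dst_port, length, checksum)  # 5000 53 12 0
```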

Source Port

  • It has a 16-bit field size.
  • It identifies the port of the sending application.

Destination Port

  • It has a 16-bit field size.
  • It identifies the port of the receiving application.

Length

  • It has a 16-bit field size.
  • It specifies the total length of the UDP header plus the encapsulated data, in bytes.

Checksum

  • It has a 16-bit field size.
  • It is optional in UDP over IPv4; a sender that does not compute it sets the field to zero.
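As a sketch of how that checksum field is computed, the function below implements the 16-bit ones'-complement sum that UDP uses (per RFC 768 and RFC 1071). The function name is illustrative, and the real UDP checksum also covers a pseudo-header containing the IP addresses, which is omitted here:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum (RFC 768 / RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF              # ones' complement of the sum

# Example from RFC 1071: the ones'-complement sum of these words is 0xDDF2,
# so the checksum is its complement, 0x220D.
print(hex(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # 0x220d
```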