Loss of UDP Packets at Fast Transfer Rates in LabVIEW

Updated Mar 3, 2023

Reported In


  • Ethernet Device


  • LabVIEW Full (Legacy)

Issue Details

  • When UDP packets come in at high data rates (greater than 2 Mb/s), they start getting lost, especially when the CPU is loaded by other tasks. Why are these packets lost?
  • How can I reduce packet loss when communicating over UDP in LabVIEW?


Solution
At high data rates, LabVIEW may not be able to service the UDP socket buffer quickly enough. The packet loss occurs while LabVIEW is moving the buffer contents into a queue or writing them to file. One solution is to increase the Windows socket buffer size. This gives LabVIEW more time to process the buffer contents and perform the next buffer read before incoming data overflows the socket buffer.

You can set the Windows socket buffer size by calling the setsockopt function in wsock32.dll. Attached are VIs that automatically set and read back the buffer size on the specified socket connection, as well as an example VI that uses the buffer set/read VIs. A similar method can be used with TCP to increase the transfer rate. These examples set the size of the socket buffer in bytes.

Note: UDP is not a lossless protocol, so there is no guarantee of complete delivery. Increasing the Windows socket buffer size may allow you to read UDP packets at higher rates; however, another protocol should be used if your application depends on a lossless connection.