Network Stream Delay Between Sending and Receiving Data

Updated Jul 29, 2018

Reported In

Software

  • LabVIEW

Issue Details

I have an application that uses network streams to send and receive data. I am noticing a large delay between sending and receiving packets of data, and the delay is the same regardless of packet size. Why is that, and how can I speed up my transmissions?

Solution

Two common metrics for tuning the performance of a network stream are throughput and latency. Throughput can be improved by using simpler data types, but latency that originates within the network itself appears as a constant delay regardless of the data type being passed. To test the network's effect on the program's latency, try these steps (a minimal round-trip test that complements steps 2 and 3 appears after the list):
  1. Run the transmitter and receiver programs on the same network domain and observe the execution time. (If the network architecture is causing the latency, the program should run faster.)
  2. Run the transmitter and receiver programs locally on the same computer to eliminate the network completely and observe the execution time. (Again, if the network architecture is causing the latency, the program should run faster.)
  3. Ping across the network from the transmitter IP address to the receiver IP address to measure the round-trip time across the network. If this time is close to the observed execution time of the program, the network is most likely the cause of the latency.
  4. If the ping test shows that the network is the cause of the latency, performing a traceroute (another type of network diagnostic) can show where in the network the delay is introduced.
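
To separate the network's contribution from LabVIEW's, a plain TCP echo test can approximate the same round trip without involving network streams at all. The Python sketch below is illustrative only: the HOST and PORT values are placeholders, and it runs the echo server in a local background thread (the loopback case of step 2). For the cross-network case of step 3, the server portion would run on the remote machine instead.

    import socket
    import threading
    import time

    HOST = "127.0.0.1"  # loopback for a local test; use the receiver's IP for a network test
    PORT = 50007        # arbitrary unused port (placeholder)
    TRIALS = 100

    def echo_server():
        # Accept one connection and echo every message back unchanged.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))  # listen on all interfaces
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(4096):
                    conn.sendall(data)

    # For the loopback test, run the echo server in the background.
    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle buffering
        payload = b"x" * 64  # small fixed-size message, like a single stream element
        samples = []
        for _ in range(TRIALS):
            start = time.perf_counter()
            cli.sendall(payload)
            cli.recv(4096)  # wait for the echo
            samples.append(time.perf_counter() - start)

    print(f"median round trip: {sorted(samples)[len(samples) // 2] * 1e3:.3f} ms")

If the loopback median is in the microseconds while the cross-network median approaches the delay observed in the LabVIEW program, the network path, not the stream code, is the dominant cost.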

Additional Information

The measured throughput and latency depend on a number of factors, including the specifications of the systems involved, the network interface used for the communication, and the overall congestion and reliability of the network itself.

More complicated data types tend to have poorer throughput than simpler data types. The amount of work required to transfer an element of a given data type is determined by the following factors:
  1. The complexity and size of the data type itself. For example, composite data types like clusters, which include other data types as sub-elements and can contain arbitrary levels of nesting, are inherently more complicated to parse and construct than data types with a fixed structure. (A rough timing sketch follows this list.)
  2. How efficiently the stream endpoint can manage the memory required to store elements of the data type. If the data type is fixed in size, the endpoint can store all elements in a contiguous block of memory. If the data type is variable-sized, the endpoint must manage multiple blocks of memory and occasionally allocate and deallocate memory at run time.
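
As a rough, language-independent illustration of the first factor, the Python sketch below times flattening a fixed-size numeric element against serializing a nested, cluster-like one. This uses generic Python serialization, not LabVIEW's internal flattening, and the data shapes are invented for the demonstration.

    import pickle
    import struct
    import timeit

    # Fixed-size element: four doubles, always 32 bytes once flattened.
    flat_element = (1.0, 2.0, 3.0, 4.0)

    # Cluster-like element: nested containers with variable-length pieces.
    nested_element = {"name": "sensor-01",
                      "readings": [1.0, 2.0, 3.0],
                      "meta": {"units": "V"}}

    flat_time = timeit.timeit(lambda: struct.pack("4d", *flat_element), number=100_000)
    nested_time = timeit.timeit(lambda: pickle.dumps(nested_element), number=100_000)

    print(f"fixed-size pack: {flat_time:.3f} s per 100,000 elements")
    print(f"nested pickle  : {nested_time:.3f} s per 100,000 elements")

The exact ratio varies by machine, but the fixed-size case also lets a receiver preallocate one contiguous buffer, which is the second factor above.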
