Memory Leak When Using Network Streams in LabVIEW

Updated Sep 15, 2020

Reported In


  • LabVIEW

Issue Details

I'm using the Network Streams Functions to transfer data across a network, but there appears to be a memory leak and I can see the RAM usage gradually increasing over time on one or both of the computers. Why is this happening, and what can I do to remedy this situation?


Solution

A memory leak commonly refers to a situation in which memory usage keeps increasing until the system runs out of memory. To prevent this in LabVIEW 2011 and earlier, specify the size of the input and output buffers. This caps the maximum size of the buffers, and as long as the size you specify does not exceed the available RAM, you will not run out of memory.

To set the size, wire a constant to the writer buffer size and reader buffer size inputs of the Create Network Stream Writer Endpoint and Create Network Stream Reader Endpoint functions, respectively. These inputs became required starting in LabVIEW 2012.
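LabVIEW code is graphical, so there is no text snippet to show here, but the effect of wiring a buffer size can be sketched as an analogy. The Python example below (not LabVIEW API; purely illustrative) uses a bounded queue to show why a fixed-size buffer cannot grow without limit:

```python
from queue import Queue, Full

# Analogy only: a Network Stream endpoint with an explicit buffer size
# behaves like a bounded queue -- once the buffer is full, further
# writes must wait (or fail) instead of growing memory without bound.
bounded = Queue(maxsize=4)  # cf. wiring the constant 4 to "writer buffer size"

for i in range(4):
    bounded.put_nowait(i)   # fills the fixed-size buffer

try:
    bounded.put_nowait(99)  # a fifth write cannot enlarge the buffer
except Full:
    overflow_rejected = True

print(overflow_rejected, bounded.qsize())
```

An unbounded queue (the LabVIEW 2011-and-earlier default when no size was wired) would instead accept every write, which is exactly the gradual RAM growth described above.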

Additional Information

Some fluctuation in memory usage is expected when using non-scalar data types such as clusters, waveforms, images, or variants, because the size of these data types can vary at run time.

To accommodate this fluctuation, the buffer does not contain the elements themselves but rather pointers to the actual data. Memory for each element is then allocated dynamically at run time as elements are written to the buffer. This dynamic allocation can make the CPU work much harder than necessary, so it can be more efficient to transfer your data with scalar data types, such as numerics, instead of non-scalar data types.
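The scalar-versus-pointer distinction can be illustrated outside LabVIEW. In this Python sketch (an analogy, not the Network Streams implementation), a buffer of doubles is allocated once up front, while a buffer of variable-size, cluster-like elements only holds references and forces a fresh heap allocation on every write:

```python
from array import array

# Scalar case: one up-front allocation; every element has the same
# fixed size, so writes simply overwrite memory in place.
scalar_buffer = array('d', [0.0] * 8)   # 8 doubles, size fixed forever
scalar_buffer[0] = 3.14                 # no new allocation on write

# Non-scalar case: the buffer stores references; each write allocates
# a new, separately sized object (here a waveform-like dict) on the heap.
pointer_buffer = [None] * 8
pointer_buffer[0] = {'t0': 0.0, 'dt': 1e-3, 'Y': [1.0, 2.0]}

print(scalar_buffer.itemsize, len(scalar_buffer))
```

The per-write allocation in the second case is the extra CPU work the paragraph above refers to.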

For example, it is better to use an array of doubles than a waveform. With an array of doubles you can use the Write Multiple Elements to Stream and Read Multiple Elements from Stream functions to transfer your data across the network, as in the images below. Using scalar values removes the need to allocate memory dynamically, which decreases CPU usage while maintaining the same data rate.
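The multiple-elements pattern can also be sketched as an analogy. This Python example (illustrative only; chunk size and queue depth are arbitrary choices, not LabVIEW defaults) moves fixed-size blocks of doubles through a bounded queue, the way Write Multiple Elements to Stream sends a whole block of scalars per call:

```python
from queue import Queue

# Analogy only: writer sends fixed-size blocks of doubles through a
# bounded stream; reader reassembles them into the original array.
stream = Queue(maxsize=16)

samples = [float(i) for i in range(100)]
CHUNK = 10  # arbitrary block size for this sketch

# Writer side: one put per block, not one put per sample.
for start in range(0, len(samples), CHUNK):
    stream.put(samples[start:start + CHUNK])

# Reader side: drain the stream and concatenate the blocks.
received = []
while not stream.empty():
    received.extend(stream.get())

print(received == samples)
```

Moving data in blocks of a fixed-size scalar type keeps each transfer a plain memory copy, which is the source of the CPU savings described above.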