Message Delays With 24+ Actors - Actor Framework

Updated Oct 27, 2021

Reported In

Software

  • LabVIEW
  • Actor Framework Message Maker

Issue Details

When I use more than 23 DAQ actors, the data messages back to the caller arrive in increasingly slow bursts.

Is there a limitation in the LabVIEW Actor Framework when the number of actors increases? Or is there something memory-related that we can configure?

Solution

Solution #1:
By default, LabVIEW allocates a fixed pool of execution threads. The exact count is determined by a heuristic, but on most systems it works out to 24. This explains why the system bogs down as soon as it spins up 24 or more DAQ actors: each blocking DAQmx call occupies one of those threads, so once all 24 are busy, the remaining actors must wait for a thread to free up, and messages back to the caller are delayed.
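As an illustration only (LabVIEW code is graphical, so this is a Python sketch of the effect, not the actual framework), a fixed-size thread pool shows the same behavior: each blocking call holds a whole thread, so submitting even one more task than there are threads doubles the completion time. The pool size of 4 stands in for LabVIEW's default of 24, and time.sleep stands in for a blocking DAQmx read:

    import time
    from concurrent.futures import ThreadPoolExecutor

    POOL_SIZE = 4  # stands in for LabVIEW's default thread capacity of 24

    def blocking_read(task_id):
        time.sleep(1.0)  # stands in for a DAQmx read that blocks its thread
        return task_id

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
        # One more task than there are threads: the extra task must queue.
        results = list(pool.map(blocking_read, range(POOL_SIZE + 1)))
    print(f"{POOL_SIZE + 1} reads on {POOL_SIZE} threads: "
          f"{time.monotonic() - start:.1f} s")  # ~2 s instead of ~1 s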
 
There is an INI token that can override the thread capacity:
ExecThreadGrowCapacity=[num]
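For example, a LabVIEW.ini entry raising the capacity could look like the following (64 is an illustrative value, not a recommendation; choose a number comfortably above your actor count):

    [LabVIEW]
    ExecThreadGrowCapacity=64

LabVIEW reads INI tokens at launch, so restart LabVIEW after editing the file.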

 
Solution #2: 
DAQmx lives in a shared library separate from LabVIEW (a .dll on Windows, a .so on Linux). The Call Library Function Node (CLFN) used to call DAQmx functions has no visibility into what happens inside that library, so the LabVIEW execution system cannot free up the thread and run other code while a read waits for data to become available; the call blocks the whole thread. By contrast, a primitive such as TCP Read is integrated directly with the LabVIEW execution system, which can run other code while the read waits for data.
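The difference is the same as blocking versus cooperative waiting in text-based languages. As a rough analogy (a minimal Python sketch, not LabVIEW or DAQmx code), tasks that yield while waiting, as asyncio coroutines do, behave like TCP Read: many of them can wait concurrently without each one holding a thread:

    import asyncio
    import time

    async def cooperative_read(task_id):
        # Yields control while "waiting for data", freeing the thread
        # for other tasks, much like a LabVIEW TCP Read does.
        await asyncio.sleep(1.0)
        return task_id

    async def main():
        start = time.monotonic()
        # 32 concurrent waits complete in ~1 s on a single thread.
        await asyncio.gather(*(cooperative_read(i) for i in range(32)))
        print(f"32 waits on one thread: {time.monotonic() - start:.1f} s")

    asyncio.run(main())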


We have created an example that demonstrates event-driven acquisition on 32 parallel tasks; see the attached file.