Why Does DAQmx Hardware Timed Single Point Sample Mode Double the AI Convert Clock Rate?

Updated Jan 8, 2018

Reported In


  • NI-DAQmx



Issue Details

I have noticed that the AI Convert Clock rate is at least twice as high in Hardware Timed Single Point acquisition mode as it is in buffered AI operations such as Finite or Continuous Sampling mode. Why is the AI Convert Clock rate doubled by default when the AI sample mode is set to Hardware Timed Single Point?


Solution
With Hardware Timed Single Point mode in NI-DAQmx, the default AI Convert Clock rate is different from that of buffered operations. With buffered operations, the convert clock pulses are spaced evenly across the full sample period to allow the maximum possible settling time between channels in the scan. This behavior minimizes ghosting between multiplexed channels. However, it can cause problems for single-point PID applications on real-time (RT) targets, because it leaves too little time between the sampling of the last multiplexed channel and the start of the next scan for PID processing.

Therefore, the default behavior of the AI Convert Clock for Hardware Timed Single Point operations in NI-DAQmx is to space the multiplexed channel conversions evenly across the first half of the sample clock period, leaving the second half free for processing. This effectively doubles the AI Convert Clock rate.
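As a worked example of the arithmetic above (the channel count and sample rate here are illustrative values, not from the article), a buffered task scanning 4 channels at 1 kS/s spreads conversions across the full 1 ms sample period, while Hardware Timed Single Point packs them into the first half:

```python
# Illustrative sketch of the default AI Convert Clock spacing described above.
# Values are hypothetical; actual device defaults may also add settling delays.

def buffered_convert_rate(samp_clk_rate_hz, num_channels):
    # Buffered modes: conversions are spread evenly over the full sample
    # period, so the convert clock runs at sample rate x channel count.
    return samp_clk_rate_hz * num_channels

def hwtsp_convert_rate(samp_clk_rate_hz, num_channels):
    # Hardware Timed Single Point: conversions are packed into half the
    # sample period, leaving the other half for PID processing, which
    # doubles the convert clock rate.
    return 2 * samp_clk_rate_hz * num_channels

rate_hz, channels = 1000.0, 4  # 1 kS/s scan of 4 multiplexed channels
print(buffered_convert_rate(rate_hz, channels))  # 4000.0 Hz convert clock
print(hwtsp_convert_rate(rate_hz, channels))     # 8000.0 Hz convert clock
```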

Additional Information

To set the AI Convert Clock rate to a specific value in LabVIEW, use a DAQmx Timing property node. Within this property node, select the AIConv.Rate property by navigating to More»AI Convert»Rate (please see image below).

Please note that the AIConv.Rate property must be at least as high as the SampClk.Rate (Sample Clock Rate) property multiplied by the number of multiplexed channels in the scan. Otherwise, a DAQmx error will occur at run time.
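This constraint can be sketched as a small check (the function name and error message are illustrative; in an actual task you would set the AIConv.Rate timing property and DAQmx itself reports the run-time error):

```python
# Hypothetical sketch of the minimum-rate constraint on AIConv.Rate.

def validate_ai_conv_rate(ai_conv_rate_hz, samp_clk_rate_hz, num_channels):
    # The convert clock must digitize every multiplexed channel within one
    # sample clock period; below this minimum, DAQmx errors at run time.
    minimum_hz = samp_clk_rate_hz * num_channels
    if ai_conv_rate_hz < minimum_hz:
        raise ValueError(
            f"AIConv.Rate of {ai_conv_rate_hz} Hz is below the minimum "
            f"{minimum_hz} Hz (SampClk.Rate x number of channels)")
    return ai_conv_rate_hz

validate_ai_conv_rate(8000.0, 1000.0, 4)  # OK: 8 kHz >= 4 kHz minimum
```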
