Using RDMA Components for Data Sharing in VeriStand

Updated Jul 23, 2025

Environment

Hardware

  • PXIe-8285
  • PXIe-8280

Software

  • VeriStand

Driver

  • NI-RDMA

With the reflective memory card now obsolete, RDMA modules are the current, recommended option for data sharing. The RDMA plugin component for the Data Sharing Framework custom device implements a point-to-point communication mechanism that enables high-throughput, low-latency data transmission between HIL targets running in parallel. This mechanism allows nodes in a real-time measurement and control system to share VeriStand Channel Data.

Use this document as guidance to create a new Data Sharing Framework custom device that leverages the RDMA plugin component to transmit data between HIL targets.

Software Requirements

  • VeriStand
  • NI-RDMA driver

Hardware Requirements

  • NI-RDMA supported hardware (PXIe-8280 or PXIe-8285).

Prerequisites

Prior to reading this article, ensure that you have general knowledge of RDMA technology and are familiar with NI software such as NI MAX and VeriStand.

Before proceeding any further in this document, ensure that the RDMA interfaces are physically connected to each other. View the "System Settings" tab for each target you will use. If you have connected the hardware correctly, there should be multiple IP Addresses listed under the "IP Address" section. Find the IP address(es) assigned to the RDMA module(s) in the list. Take note of these addresses, as we will need them to configure the RDMA Data Sharing Framework plugin.

On the page for the newly added Data Sharing Framework custom device, click New to launch the configuration dialog box, then configure the custom device by setting the values in the DSF configuration cluster as follows:

  1. Configure the Plugin-level settings

Purpose: Configure the top-level settings (specifically, timing) for each plugin. For the purposes of this tutorial, there is only one plugin to configure: RDMA. Configure the first element of the plugins array as follows:

  • name : Set this value to specify how the plugin will appear in the System Explorer. It does not matter what you choose to use for this field, as long as it is not empty. 
  • components : Set the first element to be "RDMA" 
  • cycle timing : Set the decimation to 0 to ensure that the plugin runs inline with the PCL. 
  • component settings : Leave this field blank, as there are no component settings for the RDMA Plugin. 

  2. Threads

The RDMA plugin does not currently support multi-threaded configurations. Edit the plugin configuration to have your desired number of sessions in one thread.

  3. Configure the Transfer Groups

Purpose: Transfer Groups are responsible for grouping transfers that should be executed with the same cycle timing values, while adding information on the direction (TX or RX) of the transfers.

We will need to create two transfer groups: one for the TX endpoint, and one for the RX endpoint. Each transfer group will be represented by a single element in the transfer groups array.

Configure each transfer group as follows:

  • core.name : Use this to specify how the transfer group will appear in the System Definition. 
  • core.direction : Specifies whether the transfer(s) in this group will be TX or RX.
  • core.cycle timing : Configure as you wish. Remember, a decimation of 0 runs the transfer group inline with the PCL, and the active engine buffer will be the Inline Buffer. Any other decimation makes the transfer asynchronous, and the active buffer will be the Async Buffer.
  • core.timeout behavior : Choose whatever you prefer.
  • core.enable conversion: Choose whatever you prefer.
  • component settings : Leave empty, as there are no component settings for RDMA transfer groups.

Note: The effective timing for each end of a data transfer must be equal to ensure deterministic data transfer. The effective timing can be calculated as follows: 

[Effective Timing] = [Target PCL Rate] / ([DSF Custom Device Decimation] * [DSF Plugin Decimation] * [DSF Transfer Group Decimation])

If TX and RX endpoints are configured with differing effective timings, then the transfer group late count monitoring channel (located under each transfer group) on the endpoint effectively running faster will begin ticking up upon deployment.
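The timing relationship above can be sketched as a short Python helper. This is illustrative only, not part of VeriStand; it assumes that a decimation of 0 means "run inline with the PCL", which is equivalent to a decimation of 1 in the division (the code executes every PCL cycle).

```python
# Illustrative helper, not a VeriStand API: computes the effective timing
# of a DSF endpoint from the formula above and flags TX/RX mismatches.

def effective_timing(pcl_rate_hz, device_dec, plugin_dec, group_dec):
    # Assumption: decimation 0 means "inline with the PCL", which behaves
    # like 1 in the formula (the code runs every PCL cycle).
    product = 1
    for d in (device_dec, plugin_dec, group_dec):
        product *= max(d, 1)
    return pcl_rate_hz / product

tx = effective_timing(1000.0, 0, 0, 0)  # TX: 1 kHz PCL, everything inline
rx = effective_timing(1000.0, 0, 0, 2)  # RX: transfer group decimated by 2

if tx != rx:
    # The faster endpoint's late count monitoring channel would tick up.
    print(f"Timing mismatch: TX {tx} Hz vs RX {rx} Hz")
```

Running this prints a mismatch warning because the RX endpoint effectively runs at 500 Hz against the TX endpoint's 1000 Hz.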

  4. Configure the Transfers in Each Transfer Group

Purpose: A transfer represents a single block of channel data to be transmitted or received. A transfer is composed of one or more channels. For RDMA, a single transfer contains the data for all the channels that are updated using a single RDMA buffer.

Configure each transfer as follows:

  • core.name : Use to specify how the transfer will be labeled in the System Definition.
  • component settings :
    • component : "RDMA"
    • values :
      • element 0
        • key : "local address"
        • value : enter the IP address of the RDMA module that was denoted earlier ("local" means the IP address of the module being used for this transfer)
      • element 1
        • key : "local port"
        • value : choose an available local port 
      • element 2 (only required for transfers in TX transfer groups)
        • key : "destination address"
        • value : enter the IP address of the RDMA module being used as the RX endpoint 
      • element 3 (only required for transfers in TX transfer groups)
        • key : "destination port"
        • value : choose an available destination port at the address specified by "destination address" (not required for transfers within an RX transfer group) 
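The key/value layout above can be summarized in a small Python sketch. The dictionary structure and helper name are assumptions for illustration only; VeriStand expects these values to be entered in the DSF configuration cluster, not built in code.

```python
# Illustrative sketch (structure assumed, not a VeriStand API): builds the
# key/value "component settings" list for one RDMA transfer, mirroring the
# elements described above.

def rdma_transfer_settings(local_address, local_port,
                           destination_address=None, destination_port=None):
    """TX transfers need all four keys; RX transfers only the local pair."""
    values = [
        {"key": "local address", "value": local_address},
        {"key": "local port", "value": str(local_port)},
    ]
    if destination_address is not None:  # only required for TX transfers
        values.append({"key": "destination address", "value": destination_address})
        values.append({"key": "destination port", "value": str(destination_port)})
    return {"component": "RDMA", "values": values}

# Hypothetical addresses: TX endpoint sends to the RX module's IP and port.
tx = rdma_transfer_settings("10.0.0.1", 5000, "10.0.0.2", 5001)
rx = rdma_transfer_settings("10.0.0.2", 5001)
```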

  5. Configure the Channels

There is no special configuration needed for the channels. You may configure them however you wish, with a few constraints:

  • You may add as many channels per transfer as you would like. Just keep in mind that each endpoint for a given connection must have the same number of channels. For the purposes of this tutorial, we recommend a single channel per transfer. Note: channels[n] in the RX transfer receives the data sent from channels[n] in the corresponding TX transfer.
  • Make sure that the engine data type and string data type match between each endpoint.

There are no component settings for RDMA channels, so leave that field blank.
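The constraints above amount to a pairwise consistency check: channel counts and data types must match between endpoints. The sketch below is hypothetical (VeriStand does not expose channels as Python dictionaries); it only illustrates the rule.

```python
# Illustrative consistency check (data layout assumed): verifies that
# paired TX and RX transfers have the same channel count and matching
# data types, since channels[n] on RX receives channels[n] from TX.

def check_endpoints(tx_channels, rx_channels):
    if len(tx_channels) != len(rx_channels):
        raise ValueError("Endpoints must have the same number of channels")
    for n, (tx, rx) in enumerate(zip(tx_channels, rx_channels)):
        if tx["engine data type"] != rx["engine data type"]:
            raise ValueError(f"channels[{n}]: engine data type mismatch")

# Hypothetical channel descriptions: one matched Double channel per transfer.
tx = [{"name": "Speed", "engine data type": "Double"}]
rx = [{"name": "Speed In", "engine data type": "Double"}]
check_endpoints(tx, rx)  # passes silently
```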

  6. Wrap up the configuration
  • Double check that all of the values you entered are correct, because once you apply the settings, you will not be able to return to this cluster to modify the configuration.
  • Click "Apply" to add this configuration to the VeriStand System Definition, and close the dialog box. Note: While the number of transfer groups, transfers, channels, etc. is fixed, you may still modify the configuration of each node in the System Definition by using the VeriStand System Explorer.
  7. Export the configuration (optional, but strongly recommended)

Click the Export... button on the Data Sharing Framework custom device page to export a JSON representation of the configuration that you just created. This will enable you to quickly configure similar DSF custom devices or quickly modify the existing configuration by editing the JSON then using the Import... feature.

NOTE: If you'd like to quickly configure the plugin without interacting with the DSF configuration cluster, example TX and RX configuration files are attached. Modify the fields to match your setup and use the Import... feature.
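Because the exported configuration is plain JSON, it can be retargeted with a few lines of script before re-import. The file names and IP addresses below are assumptions for illustration, and since the export schema is not documented here, the script simply swaps address strings wherever they appear rather than walking known keys.

```python
# Illustrative sketch: patch the RDMA IP addresses in an exported DSF
# configuration so it can be re-imported on a different pair of targets.
import json

def retarget(config_path, out_path, address_map):
    with open(config_path) as f:
        cfg = json.load(f)            # also validates the export is valid JSON
    text = json.dumps(cfg, indent=2)
    for old, new in address_map.items():
        text = text.replace(old, new)  # swap addresses wherever they appear
    with open(out_path, "w") as f:
        f.write(text)

# Hypothetical usage: move the endpoints from 10.0.0.x to 10.0.1.x, then
# load the result with the Import... feature on the custom device page.
# retarget("tx_export.json", "tx_new.json",
#          {"10.0.0.1": "10.0.1.1", "10.0.0.2": "10.0.1.2"})
```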

  8. Repeat the Data Sharing Framework configuration for each target in the System Definition
  9. Map the channels you want to share with the corresponding target to the channels under My TX Transfer. Map the channels under My RX Transfer to the channels that will consume the data shared by the corresponding target.

  10. Save and close the VeriStand System Explorer