
Configure RAID Using PXIe-8267 in NI Linux Real-Time System

Updated Nov 11, 2024

Environment

Hardware

  • PXIe-8267

Other

  • PuTTY - SSH client (optional, for accessing the NI Linux RT shell remotely)

The PXIe-8267 PXI Express high-speed data storage module features large-capacity, high-throughput storage in a single PXI Express slot. With NVMe M.2 solid-state drives, the PXIe-8267 is ideal for stream-to-disk or stream-from-disk applications requiring sustained, reliable data throughput, such as high-speed signal intelligence, RF record and playback, and multi-sensor data acquisition systems.

 

This document describes how to set up and configure RAID using multiple drives in a PXIe-8267 on NI Linux RT. A separate document describing how to get started on Windows can be found in this link.

 

The hardware configuration used in this tutorial is as follows:

  • PXI Chassis: PXIe-1082
  • PXI Controller: PXIe-8880 running NI Linux Real-Time Operating System
  • PXIe-8267 Data Storage Module with 4 x 1TB NVMe SSDs

 

To execute the steps in this document, you need to access the NI Linux RT shell, for example remotely over SSH using a client such as PuTTY.

 

The following steps assume that you already have access to, and are able to log in to, the NI Linux RT shell.

Below are the overall steps for configuring the drives in an NI Linux RT system from the shell.

  1. Identify available disks and unmount
  2. Format each SSD
  3. Use LVM to create Volume Group and Logical Volume
  4. Create ext4 filesystem and mountpoint for LVM drive
  5. Test the LVM drive by creating a file
  6. Check the LVM drive's availability
  7. Automatically mount SSDs after reboot
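For orientation, the whole procedure can be condensed into the following annotated sketch. This is illustrative only; it uses this tutorial's example names (four NVMe drives, volume group rd00, logical volume mrd, mountpoint /home/raid) and is destructive to the drives, so do not run it verbatim without checking your device names.

```shell
# Illustrative overview of steps 1-7 -- assumes four NVMe drives as in this tutorial.
umount /dev/nvme*n1                # 1. unmount the SSDs
# 2. partition each SSD interactively with: fdisk /dev/nvmeXn1
opkg update && opkg install lvm2   # 3. install the LVM tools
vgcreate rd00 /dev/nvme?n1p1       #    create the volume group from the partitions
lvcreate -l +100%FREE -n mrd rd00  #    create one logical volume spanning all free space
mkfs.ext4 /dev/rd00/mrd            # 4. create the ext4 filesystem
mkdir /home/raid && chmod o+rwx /home/raid
mount /dev/rd00/mrd /home/raid     #    mount it
echo "hello world" > /home/raid/hello_world.txt  # 5. quick write test
lsblk -fp                          # 6. confirm the drive-to-mountpoint mapping
# 7. persistence: edit /etc/fstab and register /etc/init.d/start_ssd.sh (see below)
```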

Identify available disks and unmount

1. Execute the lsblk -f command to identify the SSDs. The available drives in the system are listed as in the image below, where nvme0n1 through nvme3n1 are the NVMe M.2 drives connected to the PXIe-8267.

KB1.jpg

 

2. Execute the umount /dev/nvme*n1 command to unmount them.

KB2.jpg

 

Format each SSD

1. Use the fdisk /dev/nvme0n1 command to format the nvme0n1 SSD. fdisk is an interactive command that requires user input. Use the following sequence of inputs to format the SSD successfully.

KB3.jpg

 

2. Repeat the same step for each SSD that you want to format.
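The exact keystrokes for each drive are shown in the image above; as a rough guide, a typical fdisk sequence for creating a single primary partition spanning the whole drive looks like the following (an assumption for illustration — adapt it if your partition layout differs).

```shell
fdisk /dev/nvme0n1
# At the fdisk prompt, enter:
#   n        -> create a new partition
#   p        -> primary partition type
#   1        -> partition number 1
#   <Enter>  -> accept the default first sector
#   <Enter>  -> accept the default last sector (use the whole drive)
#   w        -> write the partition table and exit
```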

 

Use LVM to create Volume Group and Logical Volume

1. Install the lvm2 software tools using opkg (the NI Linux RT default package manager) by executing the commands below.

opkg update
opkg install lvm2

2. Execute vgcreate rd00 /dev/nvme?n1p1 to create the Volume Group (VG). (Refer to the vgcreate manual page for an explanation of the command arguments.)

KB4.jpg

 

3. Execute lvcreate -l +100%FREE -n mrd rd00 to create the Logical Volume (LV). (Refer to the lvcreate manual page for an explanation of the command arguments.)

KB5.jpg
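After creating the volume group and logical volume, you can check the result with the standard LVM reporting commands, which are installed as part of the lvm2 package above:

```shell
pvs   # lists the physical volumes (here /dev/nvme0n1p1 through /dev/nvme3n1p1)
vgs   # shows the rd00 volume group, its size, and free space
lvs   # shows the mrd logical volume inside rd00
```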

 

Create ext4 filesystem and mountpoint for LVM drive

1. Execute mkfs.ext4 /dev/rd00/mrd to create an ext4 file system on the previously created logical volume. (Refer to the mkfs manual page for details.)

mkfs.ext4 /dev/rd00/mrd

 

2. Execute the mkdir /home/raid command to create a mountpoint directory named raid under the /home directory.

mkdir /home/raid

 

3. Grant read, write, and execute access for other users to the mountpoint directory by executing the command below. (Refer to the chmod manual page for details.)

chmod o+rwx /home/raid

 

4. Use the mount /dev/rd00/mrd /home/raid command to mount the LVM drive to the mountpoint created previously.

mount /dev/rd00/mrd /home/raid

 

Test the LVM Drive by Creating a File

1. Use the following commands to create a hello world text file.

cd /home/raid
echo "hello world" > hello_world.txt

2. Check the contents of the created hello_world.txt file by executing the cat hello_world.txt command. If the configuration was executed properly, the content of the hello_world.txt file is returned as below.

KB6.jpg

 

Check the LVM Drive's Availability

1. Use lsblk -fp to list the available drives. Notice the mapping between the LVM drive and the mountpoint.

KB7.jpg

 

2. Alternatively, it is also possible to use mount -l to list the mounted devices.

KB8.jpg

 

Automatically Mount SSDs After Reboot

The following steps are necessary to automatically mount the SSDs after a reboot.

1. Use blkid to find the UUID of the mapped LVM drive. In the example below, the UUID of the created LVM drive is 7800bcda-f014-496b-beb9-8b757e9f8830.

fstab1.jpg

 

2. Execute cat /etc/fstab to view the contents of the fstab file. Below is an example of the default fstab file contents.

fstab2.jpg

 

3. Use vi /etc/fstab to modify the fstab file contents. The LVM drive must be included in the fstab file. Below is an example of an fstab file after adding the LVM drive.

fstab3.jpg
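As an illustration of the line shown in the image, an entry for the LVM drive using the UUID found in step 1 could look like the following (the mount options and dump/pass fields here are this tutorial's assumptions; match them to your requirements):

```
# appended to /etc/fstab
UUID=7800bcda-f014-496b-beb9-8b757e9f8830  /home/raid  ext4  defaults  0  0
```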

Note: vi is a command-line text editor for Linux. Refer to publicly available resources on how to use vi to edit files in Linux.

 

4. Use vi /etc/init.d/start_ssd.sh to create a new script to be executed at boot. The script contents follow:

#!/bin/sh
# Activate the rd00 volume group, then mount the LVM drive.
vgchange -ay rd00
mount /dev/rd00/mrd /home/raid

You can verify the contents of the created start_ssd.sh script file using cat /etc/init.d/start_ssd.sh.
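Note that init scripts must have the execute permission before the init system can run them; since vi creates the file without it, mark the script executable (a general Linux requirement, not shown in the screenshots):

```shell
chmod +x /etc/init.d/start_ssd.sh
```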

fstab4.jpg

 

5. Execute update-rc.d start_ssd.sh defaults to register start_ssd.sh as one of the scripts to be executed during system startup.

fstab5.jpg

Verify That the Configuration Persists After Reboot

Reboot the system and use lsblk -fp. If the configuration was performed correctly, the LVM drive will be mounted successfully, as shown in the following figure.

result.jpg