Analyze 16-bit Images with NI Vision Software

Updated Dec 28, 2022

Environment

Software

  • Vision Assistant
  • Vision Development Module
  • LabVIEW

Driver

  • NI-IMAQ

I need to analyze a 16-bit image using NI Vision. My data was stored using unsigned values (U16), but the Image datatype is defined with signed values (I16).

Many applications and camera drivers create 16-bit image data using unsigned values. This means that the darkest pixels have a value of 0 and the brightest have a value of b1111111111111111 (xFFFF, or d65535). However, the Image datatype used in LabVIEW and NI-IMAQ interprets data values as signed. This interpretation difference can be overcome by subtracting a constant value of x8000 (d32768) from the raw values, so that a raw value of 0 maps to -32768 and a raw value of xFFFF maps to 32767. However, neither the U16 nor the I16 datatype can perform this subtraction directly: in U16, any raw value below x8000 underflows, and I16 cannot even represent the raw values above x7FFF in the first place. There are two standard solutions to this problem:

Solution Option 1: (Arithmetic Solution)
  1. Cast the U16 image data to I32, a 32-bit signed integer whose range can accommodate every possible value in the operation, to serve as an intermediate state.
  2. Subtract the constant x8000 (d32768) from each I32 value.
  3. Cast the new data (which now lies within the allowed range of an I16) to I16.
  4. Use the resulting array for conversion to the Image datatype, as sketched below.
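These steps would normally be wired as conversion and subtraction primitives on a LabVIEW block diagram; as a text illustration only, here is a minimal C sketch of the same arithmetic. The function name and buffers are hypothetical, not part of NI-IMAQ or NI Vision:

    #include <stdint.h>
    #include <stddef.h>

    /* Option 1: widen to I32, subtract the offset, then narrow to I16.
       The I32 intermediate can hold every value in the operation. */
    void u16_to_i16_arithmetic(const uint16_t *src, int16_t *dst, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            int32_t wide = (int32_t)src[i];  /* cast U16 -> I32, no data loss */
            wide -= 0x8000;                  /* shift range to -32768..32767 */
            dst[i] = (int16_t)wide;          /* result now fits in an I16 */
        }
    }

The I32 values exist only as a transient intermediate; the destination I16 array is what gets converted to the Image datatype.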

Solution Option 2: (Architectural Solution)
You can take advantage of the underlying computer architecture and use a common XOR operation with typecasting to accomplish the same task as the code above. XORing each raw value with x8000 toggles only the most significant bit, which is equivalent to subtracting x8000 modulo 2^16; typecasting the result to I16 then yields the correct signed values. In a practical application, this code will execute faster and more efficiently than the arithmetic solution above.
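A corresponding C sketch of the XOR approach, using the same hypothetical buffers as above:

    #include <stdint.h>
    #include <stddef.h>

    /* Option 2: XOR with x8000 toggles the most significant bit, which is
       equivalent to subtracting x8000 modulo 2^16 on a two's-complement
       machine; the typecast then reinterprets the bits as signed. */
    void u16_to_i16_xor(const uint16_t *src, int16_t *dst, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            dst[i] = (int16_t)(src[i] ^ 0x8000u);
        }
    }

Because each pixel needs only a single bitwise operation and no widening to 32 bits, this version maps more directly onto the hardware, which is why it typically executes faster than the arithmetic version.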