Many applications and camera drivers create 16-bit image data using unsigned values. This means that the darkest pixels have a value of 0 and the brightest have a value of 65535 (2^16 - 1). However, the Image datatype used in LabVIEW and NI-IMAQ reads data values using a signed interpretation. This interpretation difference can be overcome by subtracting a constant value of 32768 (2^15) from the raw values. However, neither the U16 nor the I16 range allows this subtraction directly: an I16 cannot represent the original values above 32767, and if 32768 is subtracted from the U16 values, an underflow will occur for any pixel darker than mid-scale. There are two standard solutions to this problem:

Solution Option 1: (Arithmetic Solution)
- Cast the U16 image data to a 32-bit signed integer (I32), whose range can accommodate every possible value in the operation, as an intermediate state.
- Subtract the constant 32768 from the I32 data.
- Cast the result (which now lies within the allowed range of an I16) back to I16.
- Use the resulting array for conversion to the Image datatype.

Solution Option 2: (Architectural Solution)
You can take advantage of the computer's two's-complement architecture and use a simple XOR with typecasting to accomplish the same task as the code in Option 1: XORing each value with 0x8000 flips the most significant bit, which remaps the data exactly as the subtraction does. In a practical application, this code will execute faster and more efficiently than the arithmetic approach above.
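Both options can be sketched in C (the article describes LabVIEW block-diagram code; the function names below are illustrative, and the XOR variant assumes two's-complement hardware, which all mainstream platforms use):

```c
#include <stdint.h>

/* Option 1 (arithmetic): widen to I32 so the subtraction of 32768
 * cannot underflow, then narrow the result back to I16. */
int16_t u16_to_i16_arithmetic(uint16_t raw)
{
    int32_t wide = (int32_t)raw - 32768;  /* range is now -32768..32767 */
    return (int16_t)wide;                 /* fits in an I16 */
}

/* Option 2 (architectural): flip the most significant bit and
 * reinterpret the bits as signed.  In two's complement this performs
 * the same remapping as Option 1 with no widening step.  (The
 * narrowing cast of out-of-range values is implementation-defined
 * before C23, but wraps as expected on all common compilers.) */
int16_t u16_to_i16_xor(uint16_t raw)
{
    return (int16_t)(raw ^ 0x8000u);
}
```

Both functions map 0 to -32768, 32768 to 0, and 65535 to 32767, so the full unsigned brightness range lands in the signed range the Image datatype expects.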