Many applications and camera drivers create 16-bit image data using unsigned values. This means that the darkest pixels have a value of 0 and the brightest have a value of b1111111111111111 (xFFFF or d65535). However, the Image datatype used in LabVIEW and NI-IMAQ reads data values using a signed interpretation. This interpretation difference can be overcome by subtracting a constant value of x8000 (d32768) from the raw values. However, neither the U16 nor the I16 range allows this subtraction directly. (If x8000 is subtracted from a U16 value smaller than x8000, an underflow will occur, and raw values of x8000 or greater cannot be represented in an I16 at all.)
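To see the problem concretely, here is a minimal C sketch (purely illustrative; the standalone program and variable names are not part of any LabVIEW code) showing how the subtraction wraps around for a dark pixel when the arithmetic stays in 16 bits:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t dark = 0x0005;                       /* a near-black unsigned pixel */

    /* Subtracting 0x8000 in 16-bit unsigned arithmetic wraps around. */
    uint16_t wrapped = (uint16_t)(dark - 0x8000);
    printf("0x0005 - 0x8000 kept as U16 wraps to %u (the intended result is -32763)\n",
           (unsigned)wrapped);

    /* A bright pixel such as 0xFFFF (65535) cannot even be stored in an I16
     * before the subtraction, because I16 tops out at 32767. */
    return 0;
}
```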
There are two standard solutions to this problem:
Solution Option 1:
- Use a 32-bit signed integer, whose range can accommodate every possible value in the operation, as an intermediate state.
- Cast the U16 image data to I32, then subtract the constant x8000.
- Cast the result, which now falls within the allowed range of an I16, to I16.
- Use the resulting array for conversion to the Image datatype (a sketch of these steps follows this list).
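A minimal C sketch of these steps, assuming a simple pixel buffer (the function name u16_to_i16_offset and the buffer layout are illustrative; in LabVIEW the same flow is built from conversion nodes and an array subtraction):

```c
#include <stdint.h>
#include <stddef.h>

/* Convert unsigned 16-bit pixels to the signed interpretation expected by
 * the Image datatype, using a 32-bit signed intermediate so the subtraction
 * can never underflow or overflow. */
void u16_to_i16_offset(const uint16_t *src, int16_t *dst, size_t count)
{
    for (size_t i = 0; i < count; i++)
    {
        int32_t wide = (int32_t)src[i];   /* step 1: widen U16 to I32        */
        wide -= 0x8000;                   /* step 2: subtract the offset     */
        dst[i] = (int16_t)wide;           /* step 3: narrow to I16 (in range) */
    }
}
```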
Solution Option 2 (Architectural Solution): You can take advantage of the computer's two's-complement architecture and use a common XOR operation with typecasting to accomplish the same task as the code above. In a practical application, this code will execute faster and more efficiently than that above.
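A minimal C sketch of the XOR approach, under the same assumptions as the previous example (the function name u16_to_i16_xor is illustrative): flipping the most significant bit converts the offset-binary data to two's complement, which is exactly what subtracting x8000 achieves.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* XOR the sign bit and reinterpret the bits as a signed 16-bit value.
 * This corresponds to an XOR with an x8000 constant followed by a
 * Type Cast to I16 in LabVIEW. */
void u16_to_i16_xor(const uint16_t *src, int16_t *dst, size_t count)
{
    for (size_t i = 0; i < count; i++)
    {
        uint16_t flipped = src[i] ^ 0x8000;          /* flip the MSB          */
        memcpy(&dst[i], &flipped, sizeof flipped);   /* reinterpret the bits  */
    }
}
```

Because each pixel needs only a single bitwise operation and a reinterpretation of the existing bits, there is no widening to 32 bits and no arithmetic to manage, which is why this approach tends to run faster.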