How Can I Read the Binary Representation of a Floating Point Number in LabVIEW?

Updated Dec 20, 2017

Reported In


  • LabVIEW Professional
  • LabVIEW Full
  • LabVIEW Base

Issue Details

I'm trying to read the binary representation of a floating point number in LabVIEW. I tried using the Type Cast function to read the floating point number as a Boolean array, but I only get 8 bits for a 64-bit double-precision number. I would expect a Boolean array with 64 elements. What am I missing?


Solution

In order to get the full binary representation, first typecast the floating point number into an array of unsigned 8-bit integers (U8). Feed this array into an auto-indexed For Loop; in each iteration, convert one integer to a Boolean array with the Number to Boolean Array function, found on the Boolean palette. For an illustration, see the picture below.
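The same byte-by-byte approach can be sketched in text form. This is a minimal Python analogue (illustrative only, not LabVIEW code), assuming big-endian byte order to match the logical bit layout:

```python
import struct

def double_to_bits(x):
    """Mimic the LabVIEW approach: typecast a double to raw bytes,
    then expand each byte into its 8 bits (most significant bit first)."""
    raw = struct.pack('>d', x)  # 64-bit double as 8 big-endian bytes
    return [[(byte >> bit) & 1 for bit in range(7, -1, -1)] for byte in raw]

bits = double_to_bits(1.0)
print(len(bits))   # 8 rows (one per byte), 8 bits each -> 64 bits total
print(bits[0])     # [0, 0, 1, 1, 1, 1, 1, 1]  (0x3F, the top byte of 1.0)
```

Each inner list here plays the role of one Boolean array produced inside the LabVIEW For Loop.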

Additional Information

When you typecast your floating point number directly into an array of Booleans, you only get one Boolean element for each byte in your number. For example, a double-precision number (8 bytes) yields a Boolean array with only 8 elements. Although a Boolean could in principle occupy a single bit of memory, LabVIEW stores each Boolean in a whole byte.
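To see why exactly 8 elements appear, note that a double occupies 8 bytes, so a one-element-per-byte cast produces 8 elements. A quick Python check (illustrative):

```python
import struct

raw = struct.pack('>d', 3.14)  # a 64-bit double flattened to raw bytes
print(len(raw))                # 8 bytes -> a one-Boolean-per-byte cast yields 8 elements
```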

LabVIEW floating point numbers are stored in IEEE 754 format. The following table shows the layout for single-precision (32-bit) and double-precision (64-bit) floating-point values. The number of bits in each field is shown, with bit ranges in square brackets:

                     Sign      Exponent      Fraction      Exponent Bias
Single Precision     1 [31]    8 [30-23]     23 [22-00]    127
Double Precision     1 [63]    11 [62-52]    52 [51-00]    1023
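The field boundaries in the table can be verified by masking the raw 64-bit pattern. A hedged Python sketch (using the double-precision bit ranges above):

```python
import struct

def ieee754_fields(x):
    """Split a double into its sign, biased exponent, and fraction fields,
    using the bit ranges from the table above."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]  # reinterpret as 64-bit integer
    sign     = bits >> 63               # bit [63]
    exponent = (bits >> 52) & 0x7FF     # bits [62-52], biased by 1023
    fraction = bits & ((1 << 52) - 1)   # bits [51-00]
    return sign, exponent, fraction

s, e, f = ieee754_fields(-2.0)
print(s, e - 1023, f)   # sign 1, unbiased exponent 1, fraction 0
```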

Note: When running the example code from the image below on an Intel-based computer, the bytes will be swapped because of the little-endian format. The order of the bytes will therefore be as follows (by row of the array, starting at index 0): 7, 8, 5, 6, 3, 4, 1, 2.
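The swap arises because little-endian machines store the least significant byte first, while the IEEE 754 layout in the table is written most significant byte first. A generic Python illustration of the two byte orders (the exact row ordering seen in LabVIEW depends on how the data is read back, as the note above describes):

```python
import struct

x = 1.0
big    = struct.pack('>d', x)  # logical order, matching the IEEE 754 table
little = struct.pack('<d', x)  # memory order on an Intel (little-endian) machine
print(big.hex())     # 3ff0000000000000
print(little.hex())  # 000000000000f03f  (byte order reversed)
```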
