Unable to Obtain Precision of 7 Digits or More with SGL Datatypes in LabVIEW

Updated Nov 5, 2020

Reported In


  • LabVIEW
  • LabVIEW FPGA Module

Issue Details

  • When I write the value 123456789 in a numeric control of SGL datatype on the LabVIEW Front Panel, it automatically gets coerced to 123456792. Why am I unable to obtain precision higher than 7 digits? I am using this in an FPGA VI. What is a possible alternative? 
  • When I try to add two floating-point numbers with 7 or more digits, using numeric controls of SGL datatype, I get an incorrect output. I believe the output value is losing its precision beyond the 7th digit. Is this expected?
  • I want to represent a very large number, up to the SGL maximum of approximately 3.4e+38. Why am I unable to obtain full precision out to the 9th or 10th significant digit even though SGL is a 32-bit datatype?
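All three symptoms can be reproduced outside LabVIEW by round-tripping values through the IEEE 754 binary32 format that SGL uses. The sketch below uses Python's standard `struct` module for illustration; the `to_sgl` helper name is ours, not a LabVIEW or NI API.

```python
import struct

def to_sgl(x):
    """Round-trip a Python float through IEEE 754 binary32 (LabVIEW SGL)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# The front-panel coercion described above:
print(to_sgl(123456789))              # 123456792.0

# Addition losing precision beyond the 7th digit:
# 2**24 + 1 is not representable in binary32, so the sum rounds back down.
print(to_sgl(to_sgl(2.0 ** 24) + 1.0))  # 16777216.0
```

The same rounding happens inside an FPGA VI whenever an SGL wire carries a value with more significant digits than the format can hold.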


The SGL datatype follows the IEEE 754 binary32 format, which means that even though it is a 32-bit datatype, only 23 bits are stored for the significand in memory, as shown below. Together with the implicit leading bit, this yields roughly 6 to 7 significant decimal digits of precision. Thus, if a decimal string with more than 6 significant digits is converted to SGL representation, it may incur a round-off error from quantizing the real number to binary.

The following alternatives might be useful when you need precision greater than 6 decimal digits.