What is the Precision Difference Between Float and Double Datatypes?

Updated Jan 3, 2023

Reported In

Software

  • LabWindows/CVI 6.0 Full
  • LabVIEW Base

Issue Details

I am considering using either float datatype or the double datatype in my program. What is the difference between these datatypes?

Solution

A variable of type float (single precision, 32-bit) carries only about 7 significant decimal digits of precision, whereas a variable of type double (double precision, 64-bit) carries about 15. If you need greater precision, use double instead of float.
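The following is a minimal C sketch (a generic example, not tied to any particular LabWindows/CVI project) that makes the difference visible: the same literal is stored in a float and in a double, and printing both with extra digits shows roughly how many digits each type actually preserves.

#include <stdio.h>

int main(void)
{
    float  f = 0.123456789012345f;   /* only about 7 significant digits survive  */
    double d = 0.123456789012345;    /* about 15 significant digits survive      */

    printf("float : %.17f\n", f);    /* diverges from the entered value after ~7 digits  */
    printf("double: %.17f\n", d);    /* matches the entered value to ~15 digits          */
    return 0;
}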

Additional Information

When using a floating-point datatype, you will see a rounding of numeric values, such as the following:
 
Floating Point Rounding Precision

Value Entered     Value in the floating-point standard
1.34              1.340000
51.4              51.400002
112.56            112.559998

Here, the precision of a numerical quantity is a measure of the level of detail with which the quantity can be expressed, counted in decimal digits.
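The rounding shown in the table can be reproduced with a short C sketch (again a generic example, not specific to any NI product): each value is stored in a float and printed with printf's default six decimal places, which exposes the error introduced by single precision.

#include <stdio.h>

int main(void)
{
    float values[] = { 1.34f, 51.4f, 112.56f };
    int   i;

    for (i = 0; i < 3; i++)
        printf("Entered %g -> stored as %f\n", (double)values[i], values[i]);

    /* Expected output:
       Entered 1.34 -> stored as 1.340000
       Entered 51.4 -> stored as 51.400002
       Entered 112.56 -> stored as 112.559998 */
    return 0;
}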

For more information about the loss of precision in floating-point numbers, see the following article:
Why Do My Floating Point Numbers Lose Precision?