Integer Division in VeriStand Real-Time Sequences Yields Unexpected Results

Updated Apr 24, 2018

Reported In

Software

  • VeriStand

Issue Details

I have a question about the type conversion and inference rules used in VeriStand Real-Time Sequences. The results of integer divisions in expressions created in the Stimulus Profile Editor are not what I expect compared to other operations. To illustrate this, I'll show an integer addition first and then compare an integer division to it.

Example 1:
UInt32 a = 4294967295
UInt32 b = 1 
DBL c = a + b 

The result I expect is 0, and VeriStand indeed outputs 0.
Explanation: I assign the maximum possible UInt32 value of 2^32-1 (= 4294967295) to a. Then I add 1 to it. Because the result is outside the range a UInt32 can represent, an overflow occurs during the addition and the result wraps around to the UInt32 minimum of 0. When this integer 0 is then assigned to the DBL variable c, it is converted to a floating-point 0.
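For reference, the same wrap-around can be reproduced in C (a minimal sketch, not VeriStand code, assuming a common platform where int is 32 bits so that uint32_t arithmetic wraps modulo 2^32):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 4294967295u; /* UINT32_MAX, i.e. 2^32 - 1 */
    uint32_t b = 1u;
    double   c = a + b;       /* the addition wraps to 0 first, then converts */

    printf("%f\n", c);        /* prints 0.000000 */
    return 0;
}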

Example 2:
UInt32 d = 1 
UInt32 e = 3 
DBL f = d / e

I expect VeriStand to behave like Example 1 here: perform an integer division of d and e, which yields 0, and then convert this integer 0 to a floating-point 0 when assigning it to f. But in VeriStand the resulting value of f is 0.33333 instead!

So the result of dividing two integers is evidently already a double. If the "+" operation from Example 1 did the same and used the double data type internally, it would output 4294967296 rather than 0.
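A minimal C sketch of that counterfactual (not VeriStand code): if the operands were converted to double before the addition, no wrap-around could occur and the exact sum would survive:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 4294967295u;
    uint32_t b = 1u;

    /* Convert to double BEFORE adding: no wrap-around is possible. */
    double c = (double)a + (double)b;

    printf("%.0f\n", c); /* prints 4294967296 */
    return 0;
}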


Evidently, VeriStand performs a floating-point division. Why does it use floating-point division for the "/" operator but keep the original integer types for the "+" operator?

Solution

Both operators work as expected. The divide operation is special: VeriStand supports both floating-point division and integer division, and uses two different operators for them, "/" and "quotient":

"x / y" conducts a floating-point division and yields a high-precision result.
"quotient(x,y)" returns the number of times y evenly divides x. The type returned is the largest data type of the two inputs:

Using the example from the Issue Details section above,
UInt32 d = 1 
UInt32 e = 3 
DBL f = quotient(d, e)

assigns the expected floating-point 0 to f instead of 0.33333.
