Some help with SPARK & floating-point overflow

I was trying to write a StringToFloat procedure that works with SPARK. I was able to write one for StringToInteger, but I'm having some problems with GNATprove and its overflow checking, and I'm not quite sure why. I'm not too experienced with Ada, much less SPARK. This is the section that is the problem:

         Val := Float (Whole);
         pragma Assert (Val >= 0.0);
         AccFloat := Float (Acc);
         pragma Assert (AccFloat >= 0.0);
         NumFracDigitsFloat := Float (NumFracDigits);
         pragma Assert (NumFracDigitsFloat >= 1.0);
         Frac := AccFloat / NumFracDigitsFloat;
         pragma Assert (Frac >= 0.0);
         if Float'Last - Frac > Val then
            Val := Val + Frac;
         else
            Success := False;
            return;
         end if;

I work with Naturals up to this point and then convert them to Float to do some math, which is why I can assert that the float values are non-negative. This is what I get from GNATprove:

marktest.adb:120:27: medium: float overflow check might fail
  120 |         Frac := AccFloat / NumFracDigitsFloat;
      |                 ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
  reason for check: result of floating-point division must be bounded

marktest.adb:123:24: medium: float overflow check might fail
  123 |            Val := Val + Frac;
      |                   ~~~~^~~~~~
  reason for check: result of floating-point addition must be bounded

So, my question is: for the division, given that I have asserted that the numerator is non-negative and the denominator is >= 1.0, the result of the division must be less than or equal to the numerator, so how can it overflow?

Likewise, for the addition, I added a check to make sure there wasn't any overflow, and I have asserted that all the values involved are non-negative, so I'm not sure where the overflow would happen there either. Are there some nuances with floating point that I need to check for?
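To spell out the bounds I believe should hold, given the asserts above:

   --  Dividing a non-negative value by something >= 1.0 cannot increase it:
   pragma Assert (AccFloat / NumFracDigitsFloat <= AccFloat);
   --  And the if test guarantees there is room for the addition:
   pragma Assert (Val < Float'Last - Frac);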

Likely you are running gnatprove at level 0 (the default), which gives the provers very few resources to figure things out. Floating-point reasoning is quite hard for provers, so even the most trivial facts require a little more effort. You can give them more time by setting the level a little higher: with --level=2, I was able to make all the unproved checks disappear.
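For example (the project file name here is just a placeholder; use your own):

   gnatprove -P my_project.gpr --level=2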

Thank you, that was very helpful! I was at least able to get it to prove the addition (which actually worked with --level=1), but not the division. However, I did get the division to work by looping and dividing by 10.0 once for every digit to the right of the decimal point. Funny that it was okay with dividing by 10.0 repeatedly, but not with a single division by a larger number.
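Roughly like this (a simplified sketch; Digit_Count stands in for the number of fractional digits, which I haven't shown above):

   Frac := AccFloat;
   for I in 1 .. Digit_Count loop
      Frac := Frac / 10.0;   --  one division per fractional digit
   end loop;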

I tried this program:

procedure Main with SPARK_Mode is
   procedure P (AccFloat, NumFracDigitsFloat : Float; Frac : out Float; Val : in out Float)
   with Pre => AccFloat >= 0.0 and then NumFracDigitsFloat >= 1.0
   is
   begin
      Frac := AccFloat / NumFracDigitsFloat;
      if Float'Last - Frac > Val then
         Val := Val + Frac;
      else
         return;
      end if;
   end P;
begin
   null;
end Main;

with gnatprove 12, and it proves at level 2. Probably your program looks a little different …
I wouldn't recommend the repeated-division method: each division by 10.0 introduces its own rounding error, whereas a single division rounds only once, so you are probably losing precision.

I think SPARK was probably just confused by the complexity of the function. I moved the division into a separate function, similar to what you did with your procedure, and everything passes with --level=1.
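Something along these lines (a sketch with my own names; the exact contract in my code is a bit different):

   --  Wrapping the division in its own function with an explicit
   --  contract, so the prover sees the bound directly.
   function Divide (Num, Den : Float) return Float
   with
      Pre  => Num >= 0.0 and then Den >= 1.0,
      Post => Divide'Result >= 0.0 and then Divide'Result <= Num
   is
   begin
      return Num / Den;
   end Divide;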
Thanks again for your help!

I have also been playing around with string-to-integer/float conversion, but I was not able to get the conversion to floating-point types to prove without warnings.

From what I found online, floating-point arithmetic in SPARK is tricky, and you need to give your types tight enough bounds for the prover to verify the calculations. Since my goal was to return the smallest type that can represent a given string, I was not able to convince SPARK that my conversion preconditions are safe.
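For example, something like this gives the prover concrete bounds to work with (a hypothetical sketch, not my actual code):

   --  With a constrained subtype, the overflow check on A + B is easy
   --  to discharge: the sum is at most 2.0E6, far below Float'Last.
   subtype Bounded_Float is Float range 0.0 .. 1.0E6;

   function Sum (A, B : Bounded_Float) return Float is (A + B);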

You may find my code in the vaton repository.