CONSTRAINT_ERROR on Enqueue operation

I’ve come across some weird behaviour involving type conversions and enqueue operations, and I’m wondering why it happens.

Consider the following code (the algorithm isn’t particularly relevant here; I’m just sharing it in case anyone wants to run it):

with Ada.Containers.Unbounded_Priority_Queues;
with Ada.Containers.Synchronized_Queue_Interfaces;

with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

procedure Min_Stone_Sum is
   package Ints_Interface is
     new Ada.Containers.Synchronized_Queue_Interfaces (Element_Type => Natural);
   function Ints_Priority (I : Natural) return Natural is (I);
   function Ints_Before (Left, Right : Natural) return Boolean is (Left > Right);

   package Ints_Priority_Queue is
     new Ada.Containers.Unbounded_Priority_Queues (Queue_Interfaces => Ints_Interface,
                                                   Queue_Priority   => Integer,
                                                   Get_Priority     => Ints_Priority,
                                                   Before           => Ints_Before);
   use Ints_Priority_Queue;
   use Ada.Containers;

   type Nat_Array is array (Positive range <>) of Natural;

   function Min_Sum (Arr : Nat_Array; K : Positive) return Positive is
      Q : Queue;
      Temp : Positive;
      Sum  : Natural := 0;
      N    : Natural;
   begin
      for N of Arr loop
         Q.Enqueue (N);
      end loop;

      for I in 1 .. K loop
         Q.Dequeue (Temp);
         N := Natural (Float'Ceiling (Float (Temp) / 2.0));
         Q.Enqueue (N);
      end loop;

      while Q.Current_Use /= 0 loop
         Q.Dequeue (Temp);
         Sum := @ + Temp;
      end loop;

      return Sum;
   end Min_Sum;

begin
   declare
      Arr : constant Nat_Array := [4,3,6,7];
      K   : constant Positive := 3;
   begin
      Put (Min_Sum (Arr, K)); --   Expected Sum = 12
   end;
end Min_Stone_Sum;

So, the core of the calculation is:

Q.Dequeue (Temp);
N := Natural (Float'Ceiling (Float (Temp) / 2.0));
Q.Enqueue (N);

which dequeues the largest value, halves it, takes the ceiling of the result, and pushes it back onto the (priority) queue (not sure whether that’s the “best” or most optimal way, but that’s how it does it).

Now, if I try to bypass N and do something like:

for I in 1 .. K loop
   Q.Dequeue (Temp);
   --  N := Natural (Float'Ceiling (Float (Temp) / 2.0));
   Q.Enqueue (Natural (Float'Ceiling (Float (Temp) / 2.0)));
end loop;

I get: raised CONSTRAINT_ERROR : min_stone_sum.adb:35 invalid data (line 35 is the Enqueue call)

Any thoughts as to why it does that since assigning to a Natural variable using the exact same expression works just fine?

Using gnat_native = "=15.2.1"

Thanks!

That error seems to be caused by -gnatVo, but I can’t tell you why. If you change Temp to Natural (which it probably should be, to match your queue element type) the error goes away, but it also goes away if you initialize Temp to 1. It’s as if the compiler doesn’t treat Dequeue as writing a valid value to Temp, perhaps because it “could” have written 0, which is outside the range of Positive. However, that doesn’t explain why you’re not getting an error with your use of N.
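
For reference, either workaround is a one-line change to the declarations in Min_Sum (a sketch against the original code; the first is presumably the right fix, since Dequeue’s out parameter has the interface’s Element_Type, which is Natural here):

Temp : Natural;        --  matches the queue's Element_Type

or

Temp : Positive := 1;  --  initialized, so no junk value can trip the validity check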

Oops, Temp was left as Positive from an earlier version of the program... thanks for spotting that.

Interesting and confusing at the same time, these validity flags. So, just as you say, removing -gnatVo prevents the constraint error (that flag was actually pulled in by the -gnatVa I was specifying), even though Temp never actually holds a negative or zero value in this run.

However, based on the flag descriptions in the GNAT UG, it would make more sense if that failure were down to -gnatVp, which enables validity checks for parameters, as opposed to -gnatVo, which enables validity checks for operator and attribute operands. If I get this right, these flags insert checks at the point of use of the value they guard (i.e. attribute or subprogram arguments and return values), so if the above is failing due to -gnatVo, that can only be the Float'Ceiling call? Though that doesn’t really explain why it’s not failing on the assignment to N. :thinking:
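
As an aside, and treat this as my reading of the GNAT RM rather than something I’ve verified: the same check categories can also be toggled in source with the GNAT-specific pragma Validity_Checks, using the same letters as the -gnatV switch, which makes it easy to bisect which category is responsible without touching the build flags:

pragma Validity_Checks ("op");  --  'o' = operator/attribute operands, 'p' = parameters
pragma Validity_Checks (Off);   --  disable validity checks from here on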

The fact that the constraint error goes away when the type of Temp is changed to Natural feels as if there’s somehow a purely type-driven check, with no actual value check, raising the exception.

I did some more digging and found that if you compile with -gnatG you can view an IR of your program that explains a bit more:

      temp : positive; -- we are uninitialized here

      ...

      T1040b : positive := temp; -- first use of our uninitialized positive
      [constraint_error when
        not (unsigned!(T1040b) in 1 .. 16#7FFF_FFFF#)
        "invalid data"] -- this is the validity check that is failing

      L_2 : for i in 1 .. k loop
         ...

The constraint error is happening before the loop even starts.

However, in the version using N, the first use of temp is the assignment to it inside the loop, and the validity check prior to the loop is gone. That assignment is present in both cases, so presumably, if there were no check prior to the loop, the other version would work too.

      L_2 : for i in 1 .. k loop
         ...
         T1033b := P.element.all; -- result of dequeue
         [constraint_error when
           not (T1033b >= 1)
           "range check failed"]
         temp := T1033b;          -- first use of temp, which is now inside the loop
         ...

Perhaps a compiler bug?

I think it’s clear that you’ve uncovered a compiler error. So I have nothing to add about that.

However, I would like to comment on your algorithm for calculating the value to enqueue. IIUC, if Temp is even, you want Temp / 2 (integer division), but if Temp is odd, you want to round the mathematical result up to the next higher integer. You don’t need Float'Ceiling for that; Natural (Float (Temp) / 2.0) will do what you want [ARM 4.6(33): “If the target type is an integer type and the operand type is real, the result is rounded to the nearest integer (away from zero if exactly halfway between two integers).”]

But I think the clearest way to implement this is to stick to Integer operations:

Temp / 2 + (if Temp rem 2 = 0 then 0 else 1)

or simply

Temp / 2 + Temp rem 2
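
If this is needed in more than one place, the integer version also wraps nicely in an expression function (Half_Rounded_Up is just a name I made up for this sketch):

--  Ceiling of N / 2 using only integer operations.
function Half_Rounded_Up (N : Natural) return Natural is
  (N / 2 + N rem 2);
--  Half_Rounded_Up (7) = 4, Half_Rounded_Up (6) = 3

Note that the superficially similar (N + 1) / 2 is not quite equivalent: it overflows when N = Natural'Last.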

Interesting... so the constraint error is actually raised because the (random) junk value of the uninitialized variable happens to be out of range? I guess that would mean it shouldn’t happen deterministically on every run... :thinking:

Using -gnatG (which is great, btw!) I get a totally different picture. So, no checks have been added right after the declaration of temp:

- - 24:       Temp : Positive;
      temp : positive;

- - 25:       Sum  : Natural := 0;
      sum : natural := 0;

- - 26:       N    : Natural;
      n : natural;
      L_1 : label
      L_2 : label
      L_3 : label
      procedure min_stone_sum__min_sum___finalizer;
      freeze min_stone_sum__min_sum___finalizer []
...................................................................................
...................................................................................

Rather, all the checks have been added in the for loop:

- - 32:       for I in 1 .. K loop
      [constraint_error when
        not (unsigned!(k) in 1 .. 16#7FFF_FFFF#)
        "invalid data"]

- - 33:          Q.Dequeue (Temp);
- - 34:          --  N := Natural (Float'Ceiling (Float (Temp) / 2.0));

- - 35:          Q.Enqueue (Natural (Float'Ceiling (Float (Temp) / 2.0)));
      T1134b : positive := temp;
      [constraint_error when
        not (unsigned!(T1134b) in 1 .. 16#7FFF_FFFF#)
        "invalid data"]
      L_2 : for i in 1 .. k loop
         T1127b : min_stone_sum__ints_interface__element_type;
         B1130b : declare
            X :
              system__tasking__protected_objects__protected_entry_index :=
...................................................................................
...................................................................................

so the validity check on Temp is emitted ahead of the loop, before any Dequeue has had a chance to write a value to it.

Also, the upper bound (K) of the for loop is checked for validity (line 32), though, being an in parameter (i.e. constant), one would think this could be checked just once at the call site; but I may be missing the full picture.

As a side note, -gnatG is a pretty nice way to get a feel for the hidden cost of using a synchronized data structure in a serialized, non-concurrent context.

Thanks!


That’s really interesting to know: effectively one gets round-half-away-from-zero, which here behaves as round-upwards, for free simply by converting to an integer type.

Agreed, the second suggestion feels more readable and states the intent more explicitly. Could it be that Float'Ceiling is actually implemented in one of these two ways...? I can’t navigate to its source code in my GNAT Studio (I guess it’s a compiler intrinsic?)

Thanks very much.

IIRC, GNAT can initialise otherwise uninitialised variables to invalid values. That way, whenever such a variable is first used, an exception is raised if it truly is uninitialised. Though I have not read the full thread and I may be missing something here…

Best regards,
Fer


It is an optional feature of GNAT (fortunately, given the performance impact).
It is available via pragma Initialize_Scalars, used as a configuration pragma. Why not a compiler switch? Ask AdaCore!
Combined with the standard Ada checks plus the GNAT-specific validity checks (-gnatV*), you can squash an impressive number of subtle bugs! The next level up from that is fuzzing.
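
For anyone who wants to try it, a sketch of the setup: the pragma goes in the configuration-pragmas file (gnat.adc by default):

--  gnat.adc
pragma Initialize_Scalars;

Then compile with validity checking enabled (e.g. -gnatVa). If I remember correctly, the invalid bit patterns used can even be chosen at bind time via gnatbind’s -S switch (-Slo / -Shi / -Sin), but check the GNAT UG for the details.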
