Hi, I have an issue with the meaning of the out and in out parameter modes in SPARK. See the following:
subtype Digit_Index_Type is Digit_Count_Type range 1 .. 2**16;
type Very_Long (Length : Digit_Index_Type) is private;
procedure Make_From_Hex_String (Number : in String; Result : out Very_Long; Valid : out Boolean)
with Depends => ((Result, Valid) => (Number, Result)), Pre => Number'Length = 2*Result.Length;
The book says:
This might seem surprising given that Result is an out parameter. However […] the actual parameter used in the call has a specific value of the discriminant set by the caller.
What I don’t understand is the meaning of parameter modes in SPARK. If you have to read the discriminant, shouldn’t the object be “in out” instead?
I always assumed non-mutable discriminated types could just never be out, because you necessarily have to check their discriminants. This seems confirmed by the SPARK manual:
Parameter mode out (resp. global mode Output) indicates that the object denoted in the parameter (resp. data dependencies) should be completely initialized before returning from the subprogram. It should not be read in the program prior to initialization.
Parameter mode in out (resp. global mode In_Out) indicates that the object denoted in the parameter (resp. data dependencies) should be completely initialized before calling the subprogram. It can be written in the subprogram.
The book by McCormick dates from 2015; maybe the language has changed since then? Did I miss something?
No.
You’re running into the same issue with unconstrained vs constrained types that trips newcomers up. (Particularly strings, which I explain here & here.)
For a quick illustration, consider:
Type Message(Length : Natural) is record
   Text : String(1..Length) := (Others => ' ');
End record;

Invalid : Message;  -- illegal: no constraint from anywhere
Valid_1 : Message(5);  -- constrained by the subtype indication
Valid_2 : Message := (Length => 5, Text => "Hello");  -- constrained by its initial value
Now, as you can see, Valid_2 is initialized with a valid object, and the compiler can therefore take that object’s properties as its constraints. Valid_1, however, has no initializing object, but instead supplies the constraint itself. Last, Invalid is constrained neither by initialization nor by a subtype constraint, and therefore the compiler cannot determine its size, so the declaration is illegal.
Now, when it comes to parameter modes, something similar happens: even if the formal parameter is unconstrained, the subprogram cannot be called on a value which is not itself constrained — just as the above shows — and therefore the object it is called with determines those constraints. Consider:
Procedure Reset( Object : out Message ) is
Begin
Object.Text:= (Object.Text'Range => ' ');
End Reset;
-- …
Reset( Valid_2 );
In this case, Valid_2 provides the constraint (Length = 5), and therefore the range therein is 1..5.
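Putting the pieces together, here is a minimal compilable sketch (the enclosing Demo procedure and the Put_Line call are mine, added for illustration) showing that the discriminant and bounds of an out-mode parameter are readable before any component has been initialized:

```ada
with Ada.Text_IO;

procedure Demo is

   type Message (Length : Natural) is record
      Text : String (1 .. Length) := (others => ' ');
   end record;

   --  Object has mode out, yet its discriminant — and hence
   --  Text'Range — comes in from the actual parameter and may
   --  be read here before anything is written.
   procedure Reset (Object : out Message) is
   begin
      Object.Text := (Object.Text'Range => ' ');
   end Reset;

   Valid_2 : Message := (Length => 5, Text => "Hello");

begin
   Reset (Valid_2);
   Ada.Text_IO.Put_Line ("Length =" & Natural'Image (Valid_2.Length));
end Demo;
```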
So in short, you’re telling me that discriminants, in the context of parameter modes, are not considered like normal fields but rather like type attributes, so you can read them before initializing an out parameter. I thought of that possibility but wanted confirmation. Makes sense.
That said, it should really be spelled out clearly in the manual.
No, not quite. I’m saying that in order to be supplied to an out parameter you have to have an object, and the existence of that value (the object) supplies the constraint.
I mean it is, even in the Ada 83 LRM (6.2), second paragraph after the preformatted text/explanation:
For a parameter whose type is an array, record, or task type, an implementation may likewise achieve the above effects by copy, as for scalar types. In addition, if copy is used for a parameter of mode out, then copy-in is required at least for the bounds and discriminants of the actual parameter and of its subcomponents, and also for each subcomponent whose type is an access type. Alternatively, an implementation may achieve these effects by reference, that is, by arranging that every use of the formal parameter (to read or to update its value) be treated as a use of the associated actual parameter, throughout the execution of the subprogram call. The language does not define which of these two mechanisms is to be adopted for parameter passing, nor whether different calls to the same subprogram are to use the same mechanism. The execution of a program is erroneous if its effect depends on which mechanism is selected by the implementation.
In contrast to Ada 95 and later, in Ada 83 reading of out parameters was forbidden. This led many programmers astray. But even then, reading bounds and discriminants was allowed. How else, for instance, could the following code work:
Message : String (1 .. 80);
Last    : Natural;
Text_IO.Get_Line (Message (30 .. 40), Last); -- already correct in Ada 83
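As a complete, compilable sketch of that idiom (the surrounding procedure is mine; note that the procedure form of Get_Line also takes a Last out parameter):

```ada
with Ada.Text_IO;

procedure Slice_Demo is
   Message : String (1 .. 80) := (others => ' ');
   Last    : Natural;
begin
   --  Get_Line must read the bounds (30 and 40) of its out-mode
   --  Item parameter to know how many characters it may store.
   Ada.Text_IO.Get_Line (Message (30 .. 40), Last);
   Ada.Text_IO.Put_Line (Message (30 .. Last));
end Slice_Demo;
```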
I know that it’s necessary; I just wanted to point out that a line in the SPARK guide stating explicitly “discriminants are not considered normal fields, so you can always read them” would be welcome!
But SPARK only constrains certain uses of (normal) Ada. Everything else is as in the RM, which states this rule already (admittedly, the RM is not easy to read).
Wasn’t that better in Ada 83, preventing reading garbage, or was that possible in 83 too? (I’m thinking of a possible read-after-free security issue that SPARK can prevent, or the quite recent automatic zeroing.)
In my active time, I’ve written safety-critical avionics software, but I was never in touch with SPARK. By “automatic zeroing”, do you mean Rust-like ownership of access objects? I do not know anything about this. (I’ve been retired for more than a decade now.)
Reading garbage has always been possible; think of uninitialized variables. So Ada 95 ruled that reading an out parameter before assignment is just like reading those.
What was very annoying in Ada 83 was that you couldn’t read out parameters even after assignment; you had to keep a separate out-to-be local variable and assign it to the parameter before return.
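That workaround can be sketched like this (names are my own, purely illustrative of the Ada 83 style):

```ada
--  Ada 83-style workaround: Result may not be read after assignment,
--  so accumulate in a local and copy it to the out parameter exactly
--  once, just before returning.
procedure Sum_Doubled (A, B : in Integer; Result : out Integer) is
   Temp : Integer;
begin
   Temp := A + B;     --  read and update the local freely
   Temp := Temp * 2;
   Result := Temp;    --  single final assignment to the out parameter
end Sum_Doubled;
```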