SPARK seems to be synonymous with formal verification, but as great as that is, it isn’t the main reason I use it. I use SPARK mode because it removes the few potential gotchas that remain in Ada, such as elaboration issues with tagged types, or initialisation issues with any type (perhaps even more usefully when using inheritance). There are a number of benefits, it seems to me, even without any contracts. The number of checks on Unchecked_Conversion in SPARK was jaw-dropping for me. Rust enforces initialisation and requires unsafe otherwise, but SPARK generally allows safe declaration of uninitialised variables.
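A minimal sketch of that last point, assuming GNAT with SPARK_Mode enabled (the procedure name is made up): the variable is declared without an initializer, and SPARK’s flow analysis proves every read is preceded by a write, so no unsafe escape hatch is needed:

```ada
procedure Init_Demo with SPARK_Mode is
   X : Integer;   --  deliberately left uninitialized; no default value
begin
   X := 42;       --  flow analysis sees this write before any read of X
   pragma Assert (X = 42);
end Init_Demo;
```

Had the assignment been inside a conditional, flow analysis would flag the possible read of an uninitialized X at the assertion.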
Perhaps a renaissance can be built on the back of SPARK now supporting most of Ada, allowing Ada/SPARK to be seen as a modern yet solid and stable language?
One of the flaws of the Ada 83 design was not making constant the default, which would have required initialization unless the programmer explicitly wanted to circumvent safety.
I wish Ada required a lot less Unchecked_Conversion, which in most cases is used to defeat typing, either because the rules are too coarse to allow things which are safe, or because programmers are too lazy to use the correct constructs.
BTW, you do not need SPARK to make something unbearably primitive out of Ada. The profiles serve this purpose.
I can’t agree with that, and Unchecked_Conversion is easily managed. Concurrency might be an issue, but you can always turn SPARK mode off as needed. The ensuing warning noise about SPARK making assumptions might not be ideal, but major issues will still show up clearly.
No need to manage something unnecessary and outright dangerous. There is a reason why they are called unchecked.
Unlike the alleged tagged-type elaboration issues, which I have never had. I do not know what you mean, given that you apparently do not use tagged types at all.
A bad choice of words, in my opinion. Many checks, not just size, occur in Unchecked_Conversion. What problem do you suggest that ’Valid and cryptography do not overcome? (Endianness? Which may not even be an issue for many use cases.)
ABE (access before elaboration). Static is the default in GNAT unless SPARK is enabled, so you don’t even need elaboration pragmas with SPARK.
Static elaboration model

This is the middle ground of the three models. When the static model is in effect, GNAT makes the following assumptions:

- Only code at the library level and in package body statements within all units in a partition is considered to be elaboration code.
- All invocations in elaboration code will take place at run time, regardless of conditional execution.

GNAT performs extensive diagnostics on a unit-by-unit basis for all scenarios that invoke internal targets. In addition, GNAT generates run-time checks for all external targets and for all scenarios that may exhibit ABE problems.

The elaboration order is obtained by honoring all with clauses, purity and preelaborability of units, presence of elaboration-control pragmas, and all invocations in elaboration code. An order obtained using the static model is guaranteed to be ABE problem-free, excluding dispatching calls and access-to-subprogram types.

SPARK elaboration model

This is the most conservative of the three models and enforces the SPARK rules of elaboration as defined in the SPARK Reference Manual, section 7.7. The SPARK model is in effect only when a scenario and a target reside in a region subject to SPARK_Mode On; otherwise the dynamic or static model is in effect. The SPARK model is enabled with compiler switch -gnatd.v.
I’m warming to tagged types somewhat, to avoid some code repetition (unfortunately not of use in the lowest-level code) and to allow code isolation (largely due to your own comments in the newsgroup pushing me to re-evaluate, so thank you). I’m still not fully grasping the complexity trade-off considerations yet.
Of course you need a safe way to have uninitialized data: (1) stateful shared objects [perhaps in a library], and (2) memory-mapped I/O… because initializing memory-mapped I/O necessarily means writing out a value.
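A sketch of the memory-mapped I/O case, with a hypothetical device register and made-up address: the Import aspect tells the compiler the object lives elsewhere, so declaring it performs no initialization and therefore no write to the device:

```ada
with System;

package Device is

   type Status_Register is record
      Ready : Boolean;
      Error : Boolean;
   end record
     with Size => 8;

   for Status_Register use record
      Ready at 0 range 0 .. 0;
      Error at 0 range 1 .. 1;
   end record;

   --  Hypothetical register address; a real one comes from the datasheet.
   Status : Status_Register
     with Address  => System'To_Address (16#4000_0000#),
          Volatile => True,
          Import   => True;  --  no initialization occurs at declaration

end Device;
```

System'To_Address is a GNAT-specific attribute; with other compilers one would use System.Storage_Elements.To_Address instead.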
OOP (tagged types) is the wrong mechanism for avoiding code duplication. While there is some reduction from having common operations in ancestors, this is a byproduct of the hierarchy, and inheritance should not be used to effect consolidation that does not naturally arise from your hierarchy. The proper mechanism for consolidating code is the generic.
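A minimal sketch of consolidation via a generic (names are made up): the algorithm is written once against a formal type and comparison, with no type hierarchy involved:

```ada
generic
   type Element is private;
   with function "<" (Left, Right : Element) return Boolean is <>;
function Generic_Max (Left, Right : Element) return Element;

function Generic_Max (Left, Right : Element) return Element is
begin
   if Left < Right then
      return Right;
   else
      return Left;
   end if;
end Generic_Max;
```

It is then instantiated per type, e.g. `function Max_Int is new Generic_Max (Integer);`, with no common ancestor required.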
'Valid is itself a problem and need not be there. What about cryptography? I do not need Unchecked_Conversion there.
I stopped using elaboration pragmas long ago. They are not needed. Again, what is the problem with elaboration of tagged types? Do you have a code sample?
OO is exactly the mechanism of reuse (via inheritance) and of structuring the program by moving unrelated implementations into independent modules.
It is unclear what you mean.
Generics (macros) are a reuse method characterized by:
- Weak typing. Generic contracts range from weak to non-existent.
- Absence of proper static checking. An instantiation may fail even if the contract is satisfied and everything compiled. As such, generics are very questionable in production code, because there is no way to either test or formally validate them.
- Generics poison the code. Once something is generic, everything around it becomes generic; there is no way back.
- Lack of modularity. A generic unit cannot be put in a library; it is not Ada code, it is a meta-description of a macro expansion.
- Impossibility of having polymorphic objects. Instances (macro expansions) of the same generic are completely independent.
Huh, I do not understand. What is the problem with 'Valid?
The problem is that you cannot rely on it. If all patterns of the value representation are valid values, then X'Valid is identically true. What is the use case? Randomly generate a bit pattern and see if you can interpret it as something useful?
I have a suspicion that people referring to X'Valid are those who try to cut corners when implementing network protocols and hardware I/O. They misuse representation clauses and then try to fix the mess with X'Valid. All of this instead of honestly decoding the protocol.
'Valid is for instances like what I illustrated above with memory-mapped I/O and how that [strictly speaking] precludes initialization; memory overlays; and other such low-level/representational issues. Consider a library where, in v2, the Color enumeration is (Red, Green, Blue), and in v3 it is (Red, Green, Blue, Purple, Yellow, Brown). If you try reading a file containing a record with a Color element that was produced with v3 using v2, there is the possibility of invalid values, and you can use 'Valid to ensure that you can process the v3-generated data, so long as it contains no values outside v2’s range.
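A sketch of that versioning scenario (names and octet encoding are made up): the v2 code converts a raw octet and uses 'Valid to screen out codes it does not know, rather than risking an erroneous read:

```ada
with Ada.Text_IO;              use Ada.Text_IO;
with Ada.Unchecked_Conversion;
with Interfaces;               use Interfaces;

procedure Check_Color is

   --  The v2 view of the enumeration; v3 added Purple, Yellow, Brown.
   type Color is (Red, Green, Blue);
   for Color'Size use 8;

   function To_Color is new Ada.Unchecked_Conversion (Unsigned_8, Color);

   Octet : constant Unsigned_8 := 4;  --  pretend this was read from a v3 file
   C     : constant Color := To_Color (Octet);

begin
   if C'Valid then
      Put_Line ("Known colour: " & Color'Image (C));
   else
      Put_Line ("Colour code from a newer version; skipping.");
   end if;
end Check_Color;
```

With the code 4 falling outside Red..Blue, the 'Valid test takes the else branch instead of raising or silently misbehaving.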
Streams are just less efficient; there is no real difference, except that they work on fewer runtimes but do work with tagged types (the hidden tag identifier affecting the size). You always need to validate any received data, which goes beyond 'Valid, but 'Valid can still be useful.
It seems the Streams package (Stream_IO) uses Unchecked_Conversion, as well as 'Address and Unchecked_Access, both of which are far more dangerous than Unchecked_Conversion.
It seems that Unchecked_Conversion was named Unsafe_Conversion before Ada 83 was released, and it was perhaps renamed in order not to discourage legitimate use cases.
No idea what you mean by that. You need to decode the octet representation. You need neither 'Valid nor Unchecked_Conversion for that.
Of course I would not do that. There is no such thing as an enumeration in protocols; enumeration is a language term. The protocol specifies the behaviour for each possible value of an octet or a group of octets, and if a value is illegal, it defines the recovery. That is all.
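A sketch of that "honest decoding" approach for a hypothetical protocol: every octet value is mapped explicitly and illegal values get a defined recovery, with no representation clauses, 'Valid, or Unchecked_Conversion involved:

```ada
with Interfaces; use Interfaces;

package Protocol is

   type Color is (Red, Green, Blue);

   Bad_Octet : exception;

   --  Explicit mapping from wire encoding to the language-level type;
   --  the others branch is the protocol-defined recovery action.
   function Decode (Octet : Unsigned_8) return Color is
     (case Octet is
         when 0      => Red,
         when 1      => Green,
         when 2      => Blue,
         when others => raise Bad_Octet);

end Protocol;
```

The decoder stays valid-by-construction: a Color value can only ever come from one of the three legal octets, so nothing downstream needs checking.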
You do not need Stream for that either.
So, what is the point then?