True, but not system-raised ones, for obvious flow-analysis reasons; those are the only ones I sometimes care about.
How is an enumeration any better? Remember, the claim was that exceptions require documentation whereas codes/enumerations do not, or require less, or worse, documentation.
You have to document all exceptions raised by each function, which often doesn’t seem to happen with Ada, or it can be hard to find and analyse. Static predicates can tell you all the statuses in code, and even which functions a status was received from. The compiler can check that all cases are handled and that all are read. Status creation and passing is also a great opportunity to provide logs, possibly in a helpful cascading fashion, even in production and even for non-exceptional handling. It works well, and on any runtime, so any debate is irrelevant to me.
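A minimal sketch of the kind of compiler-checked status handling described here (package and names are made up for illustration):

```ada
package Parser is
   type Status is (OK, Syntax_Error, Timeout);
   procedure Parse (Input : String; Result : out Status);
end Parser;

package body Parser is
   procedure Parse (Input : String; Result : out Status) is
   begin
      Result := (if Input'Length > 0 then OK else Syntax_Error);
   end Parse;
end Parser;

with Parser;
procedure Caller is
   S : Parser.Status;
begin
   Parser.Parse ("example", S);
   --  No "when others": the compiler rejects this case statement if a
   --  new literal is later added to Status and left unhandled here.
   case S is
      when Parser.OK           => null;  --  carry on
      when Parser.Syntax_Error => null;  --  report and recover
      when Parser.Timeout      => null;  --  retry
   end case;
end Caller;
```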
Our production code logs exceptions. Note that we need not do anything special for that, because the exception-raising point can be hooked. We can filter which exceptions should go to the log. It is an extremely powerful and versatile tool.
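For illustration, with GNAT such a hook can be registered as a global action invoked at every raise point, before any handler runs (a sketch assuming GNAT’s GNAT.Exception_Actions package; GNAT-specific, not portable Ada, and the package/procedure names here are made up):

```ada
with Ada.Exceptions; use Ada.Exceptions;

package Exception_Logging is
   procedure Log (Occurrence : Exception_Occurrence);
   procedure Install;
end Exception_Logging;

with Ada.Text_IO;
with GNAT.Exception_Actions;

package body Exception_Logging is

   procedure Log (Occurrence : Exception_Occurrence) is
   begin
      --  Filter: only log what should go to the log.
      if Exception_Identity (Occurrence) /= Constraint_Error'Identity then
         Ada.Text_IO.Put_Line
           ("raised: " & Exception_Information (Occurrence));
      end if;
   end Log;

   procedure Install is
   begin
      GNAT.Exception_Actions.Register_Global_Action (Log'Access);
   end Install;

end Exception_Logging;
```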
This method won’t fly with codes/statuses. You will have to insert calls to the logger manually all over the place. Furthermore, unless you maintain a global repository of all codes/statuses, and you said you don’t, your log must know the enumerations of all statuses. Tight coupling and a maintenance nightmare.
Where did I say I don’t? I use my elogs package. Just a random id gives you the location (I replaced line/package) in a bounded store. The only slightly annoying part is translating a status from one package into another for extra clarity. It is very simple to do, though, and not a burden. What you call a maintenance nightmare I see as assurance. A bit like variant records vs inheritance.
Yes, the same sort of nightmare. After all, exceptions and dynamic classes were invented in part to automate routine programming tasks and to add extensibility and modularity through late binding.
The point about modularity is that one can load a relocatable library into an application extending existing classes (dispatching tables get modified) and adding new exceptions.
I remember writing an ASIS application (actually, an asis2xml application) to detect when others => null; so we could go and challenge the developers! (This was a missile guidance system, where it was deemed preferable to dump the missile into the sea rather than leave the envelope of defined behaviour.)
Today it would have to be a libadalang2xml application, of course. Personally I much preferred the way that asis2xml’s XML tags were recognisably related to the ARM, but that’s how it is.
That is your opinion, and more and more people disagree with you. One person who spends most of his time reviewing Ada code disagrees with you on variant records forcing corrections.
It seems that inheritance is on the way out, and I am glad about that, as I find it reduces readability and hence maintainability.
Anyway, each to their own.
I get the “adding new exceptions” part, but dispatch tables are created by the compiler at compilation time; how could loading a relocatable library affect that? (What it does inside itself is a different matter, of course.)
The case is when you derive S from T inside the library. When the library is loaded, you must add S’Tag to the dispatching table (or equivalent). For example, you can create a T’Class object holding an S in the library, and then pass it to Foo (X : T’Class) in the application. That will in turn dispatch to Baz, overridden for S, back into the library.
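A sketch of that scenario in plain Ada (the dynamic-loading mechanics are elided; the package and subprogram names are illustrative):

```ada
package Base is
   type T is tagged null record;
   procedure Baz (X : T) is null;   --  primitive, overridden in the library
   procedure Foo (X : T'Class);
end Base;

package body Base is
   procedure Foo (X : T'Class) is
   begin
      Baz (X);  --  dispatching call: if X's tag is S, this lands in the
                --  override compiled into the loaded library
   end Foo;
end Base;

--  Compiled into the relocatable library; at load time S'Tag must be
--  registered so that dispatching from the application can reach it.
with Base;
package Plugin is
   type S is new Base.T with null record;
   overriding procedure Baz (X : S);
end Plugin;

with Ada.Text_IO;
package body Plugin is
   overriding procedure Baz (X : S) is
   begin
      Ada.Text_IO.Put_Line ("Baz for S, inside the library");
   end Baz;
end Plugin;
```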
Such techniques have existed since “antiquity.” E.g. how else could you load a new device driver into the OS? OOP existed in assembly language already!
It wouldn’t. I would expect an unimplemented function to fail the build when linking, not to emit a runtime exception.
Unless you wrote a top-level exception handler, an undocumented exception, especially one of a custom type, is unlikely to be handled; it would escape and crash the program anyway.
Exceptions are useful to jump out of deep call stacks.
Options are great when the lack of a value is OK. It’s not about error handling as much as it is about explicit case coverage.
Results provide error handling, but can cause binary bloat from the instantiation of many different Result types. They also require a sum of compatible error types, a common base type across the stack, or “rewriting” the error value.
Status codes are great for local error reporting when you have recovery paths based on the type of error. Boolean returns work where only detection is needed, and reporting can be done internally.
Of all of these methods, only exceptions can affect everything up the call stack in a way that the compiler cannot warn about for operations that can now generate errors.
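The Option style described above maps naturally onto Ada’s variant records; a minimal sketch with made-up names:

```ada
package Options is

   --  The discriminant makes the absence of a value explicit; touching
   --  Value when Present is False raises Constraint_Error at runtime.
   type Optional_Index (Present : Boolean := False) is record
      case Present is
         when True  => Value : Positive;
         when False => null;
      end case;
   end record;

   --  Returns the position of Needle, or an absent value; the caller
   --  must check Present before reading Value.
   function Find (Haystack : String; Needle : Character)
      return Optional_Index;

end Options;

package body Options is

   function Find (Haystack : String; Needle : Character)
      return Optional_Index is
   begin
      for I in Haystack'Range loop
         if Haystack (I) = Needle then
            return (Present => True, Value => I);
         end if;
      end loop;
      return (Present => False);
   end Find;

end Options;
```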
In structured programming, that is, designing top to bottom, there is a technique of writing stubs for subroutines not yet implemented, to allow early testing. It is reasonable to raise a Not_Implemented exception from a stub rather than deploy the “good old” C practice:
// TO DO
// Please somebody implement this later
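In Ada, the stub can instead fail loudly at the first call (a sketch; the Payroll package and Net_Salary function are made-up names):

```ada
package Payroll is
   Not_Implemented : exception;
   function Net_Salary (Gross : Float) return Float;
end Payroll;

package body Payroll is
   function Net_Salary (Gross : Float) return Float is
   begin
      --  Stub: raise instead of returning a placeholder value, so any
      --  early test that reaches this code fails visibly.
      raise Not_Implemented with "Payroll.Net_Salary";
   end Net_Salary;
end Payroll;
```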
I guess AdaCore used some tool to generate stubs and then forgot to review them or write coverage tests. Shit happens…
Yes, however the wording should be: when the lack of a value is expected as part of normal functionality. E.g. you might expect the absence of a token when parsing a source, or in Ada.Text_IO.Get_Immediate, so you likely would not use an exception there. Reading beyond the end of the file is similar but much less frequent, so End_Error is a reasonable choice. A numeric error should almost never happen, so Constraint_Error seems obligatory. I believe it is possible to rationalize design choices that way.
Is this something that standard Ada does? Or something that dlopen() does? Or something that the user needs to implement?
It works out of the box under Linux and Windows. As far as I can tell, it is part of the library elaboration.
OK, there are of course linker/loader idiosyncrasies. You must disable the library’s automatic initialization (Library_Auto_Init) in the project file and call the initialization manually after the first DLL load. GNAT defines a global symbol <library-name>init for that.
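A sketch of the project-file setting, assuming a GNAT stand-alone library project (library name and directories are made up):

```
--  GNAT project file: a relocatable stand-alone library with
--  automatic initialization disabled; the application then calls
--  the generated <library-name>init symbol itself after loading.
library project My_Lib is
   for Library_Name      use "mylib";
   for Library_Kind      use "relocatable";
   for Library_Interface use ("Plugin");
   for Library_Auto_Init use "False";
   for Source_Dirs       use ("src");
   for Object_Dir        use "obj";
   for Library_Dir       use "lib";
end My_Lib;
```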
Another option (pun intended) is to assign or initialise an output with 'Invalid_Value, so that any attempt to use the output raises an exception.
Num_Var := Positive'Invalid_Value;
Status := Timeout;
return;
Though probably only for testing, as it requires -gnatVa for validity checks, with a performance hit.
And since exceptions are bad, we add a second return flag indicating that the first flag might be invalid!
The status is the return value. This is just belt and braces, or a nice indication in the debugger, and yes, exceptions have worse issues IMO.
I would like to extend to you an invitation to be on both the C & Unix design teams: this sort of thinking cannot be ignored.
I actually like the fundamental Unix design: small tools that each do one job well, in competition with their replacements.
As well as its being filesystem-centric. Heck, I prefer my audio as files and directories, partly because the DB fails on non-commercial files, though they are often named correctly.
To be blunt to the point of insult: Unix is utter dogshit because of this design philosophy, because it’s conjoined with the often unstated philosophy that fixing a problem 80%-of-the-way (often superficially) is desirable.
It’s the sort of non-thinking that leads programmers to ask “how can I use RegEx to parse HTML?” — which is literally impossible because HTML is not a regular language… but that won’t stop them from trying!
A good example of this is “using pipes” to “stitch together” a program, output to input, oftentimes multiply:
- The command line has no structure.
- This strips the type information.
- This forces ad-hoc parsing to regain that type information.
- This introduces assumptions about the constraints (e.g. “This value is never negative, so I can use unsigned char.”).
- Violation of these constraints is often a vulnerability.
- The type-info discard/reprocessing is forced at every step in the pipeline.
- Aside from the discarded type info destroying provability, the forced [re]processing increases the amount of time and energy spent recovering information that was there to begin with!
I’d much rather have an OS designed from the ground up on a database: let the native file system use views to emulate a hierarchical filesystem if needed. The reason we don’t have this is precisely “My C code depends on the filesystem and won’t work!”
The closest thing to that that I’ve heard of is OpenVMS: its file system has native DB files and/or can open text files as DB records. (I’m a bit unclear on the details, as I haven’t had occasion and opportunity to use the feature.)