Ada Performance Issue?

I’ve been testing the improvements in the recently released ChatGPT-4o to see how accurate its answers are. ChatGPT-4o is claimed to give better, more precise information than version 3.5, which powered the previous free tier. Regarding Ada’s performance, this is its statement:

The perception of performance issues in the Ada programming language arises from the following aspects:

Safety Features:

Ada’s emphasis on safety and correctness involves runtime checks (e.g., range checking, array bounds checking) which can add some overhead compared to C and C++ if not carefully managed. These checks are crucial for safety-critical applications but may need optimization in performance-critical sections.
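To make the claim concrete: a minimal sketch (with hypothetical names) of the kind of runtime check being discussed. Assigning an out-of-range value through a range-constrained subtype triggers a check the compiler inserts automatically, raising `Constraint_Error` unless it can prove the check unnecessary.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Range_Check_Demo is
   subtype Percent is Integer range 0 .. 100;
   N : Integer := 150;   --  some value from "outside"
   P : Percent;
begin
   P := Percent (N);     --  run-time range check fires here
   Put_Line (Integer'Image (P));
exception
   when Constraint_Error =>
      Put_Line ("out of range");   --  the automatic check caught the bad value
end Range_Check_Demo;
```

This is the overhead ChatGPT refers to: one comparison per checked operation, which a C program would either omit (and risk corruption) or have to write by hand.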


While Ada compilers can optimize code effectively, achieving the highest performance might require more effort from the programmer to leverage Ada’s advanced features and directives. This contrasts with C and C++, where low-level optimizations might be more familiar to many developers.

Perceived Complexity:

The initial perception of Ada’s verbosity and complexity can lead to the belief that it might not be as performant. This is more about the learning curve and the effort needed to write optimized code rather than inherent performance issues with the language itself.

Is the information provided by ChatGPT-4o accurate, or is it still misinformation?

This neural network nonsense is coloured by the inane opinions it has seen across the Internet. Ada is only inefficient to lunatics who believe that bounds checking shouldn’t always be done, and who cry about type checks while defending a routine that linearly scans an octet array for a null value rather than simply reading the stored length in constant time.

As for optimization, an Ada compiler is free to make optimizations which a C or C++ language compiler never dares, due to the extreme overspecification of everything in those languages.

No one who ever argues about any of this is actually affected by it. Very few people write code which is noticeably impacted by necessary checking.


The runtime checks could induce some overhead, but don’t forget that if you verify your code (yes, that’s SPARK, not Ada) you can disable them. Even with the checks enabled, though, Ada still stands higher than many languages. Here’s a benchmark from a little while ago:

Ada is also actively used in real-time systems (from nuclear reactors to artificial hearts). That should say something about its performance.

I’m not sure where that perception comes from. If anything, since you have things like bounds checking directly in the language, you’re writing fewer hand-coded checks, and there are certain optimizations the compiler could (theoretically) make when taking certain datatypes into consideration. You never have to worry about dividing by zero if the denominator is Positive.
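The divide-by-zero point can be sketched with a hypothetical `Average` function: because the formal parameter’s subtype is `Positive`, zero is excluded by the type itself, so no explicit guard is needed before the division.

```ada
procedure Safe_Divide is
   --  Count's subtype Positive excludes zero, so the body needs
   --  no "if Count = 0" test before dividing.
   function Average (Total : Natural; Count : Positive) return Natural is
   begin
      return Total / Count;
   end Average;

   Result : constant Natural := Average (Total => 10, Count => 4);
begin
   pragma Assert (Result = 2);   --  integer division: 10 / 4 = 2
end Safe_Divide;
```

A caller passing zero would trip a `Constraint_Error` at the call site (or be rejected statically for a literal), which is exactly where the error belongs.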

I’ve packed records or arrays to squeeze out extra memory in Ada. Stuff that’s kind of hard to do in C/C++ unless you’re bit-shifting.
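A minimal sketch of the packing mentioned above, using a hypothetical flag array: `pragma Pack` asks the compiler to store the eight Booleans in a single byte, something that in C would typically require manual bit masks and shifts.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Pack_Demo is
   type Flag_Array is array (0 .. 7) of Boolean;
   pragma Pack (Flag_Array);   --  8 Booleans share one byte

   Flags : Flag_Array := (others => False);
begin
   Flags (3) := True;          --  normal array syntax; compiler does the bit work
   Put_Line ("Size in bits:" & Integer'Image (Flag_Array'Size));   --  typically 8, not 64
end Pack_Demo;
```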

What’s nice about Ada is that you’re defining the structure of the problem you’re trying to solve, and you can get a lot accomplished without getting bogged down by what’s happening at the compiler level. Ada is definitely seen as “verbose” by people who don’t like it, though. Heck, the very first comment on that recent Hackaday article is someone complaining about the verbosity of Ada:

(my answer to that is the use keyword)
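For readers unfamiliar with it, the `use` clause makes a package’s declarations directly visible, so fully qualified names become optional:

```ada
with Ada.Text_IO;
use Ada.Text_IO;   --  make Put_Line and friends directly visible

procedure Use_Demo is
begin
   --  Without the use clause this line would read:
   --  Ada.Text_IO.Put_Line ("less typing, same program");
   Put_Line ("less typing, same program");
end Use_Demo;
```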


A couple things to consider:

  1. The checks that Ada performs can often be optimized out, at least in part.

  2. C and C++ programs would no doubt often have similar, if not more, overhead if they were written to guard against as many bad cases as an Ada program. More so because the set of possible inputs is usually bigger than what the code treats as valid (e.g., compare the range of a C int versus an Ada integer type statically limited to -123 … 200).
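The second point can be sketched directly, using the post’s own example range with a hypothetical type name: the type declaration itself shrinks the set of representable values, so invalid states are rejected by the compiler or by a single check, rather than being silently representable as they are in a C int.

```ada
procedure Narrow_Type_Demo is
   --  A C int admits roughly four billion values; this type admits 324.
   type Sensor_Reading is range -123 .. 200;

   R : Sensor_Reading := 0;
begin
   R := R + 77;       --  fine: 77 is in range
   --  R := 201;      --  out-of-range literal: rejected at compile time
end Narrow_Type_Demo;
```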


If the overhead of run-time checks that were not optimized away is really an issue, you can always suppress them (pragma Suppress, or a switch like -gnatp).

  • Not for use in a complex, dynamic system.
  • Good for a Monte-Carlo simulation on your PC: if the “Debug” version, with the checks, has passed the first iteration, the checks might be (depending on your model) useless for the next millions of iterations. So you can re-run with the “Optimized-no-checks” build mode and get your results faster.
  • It is also good for benchmarking tests. Then you discover that you can produce Ada code that is much faster than equivalent C code, with a lot less effort…
    Cf: Generic Image Decoder (GID) version 13 - #2 by zertovitch
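The suppression described above can be sketched as follows; `pragma Suppress (All_Checks)` disables the checks for one unit, while building with GNAT’s `-gnatp` switch does the same globally (the loop here is a hypothetical example):

```ada
procedure Suppress_Demo is
   --  Unit-local suppression; compiling with -gnatp would apply this everywhere.
   pragma Suppress (All_Checks);

   type Vector is array (1 .. 1_000) of Float;
   V   : Vector := (others => 1.0);
   Sum : Float  := 0.0;
begin
   for I in V'Range loop
      Sum := Sum + V (I);   --  no index check emitted for this access
   end loop;
end Suppress_Demo;
```

This matches the workflow described: validate with the checked “Debug” build first, then re-run the long computation with checks suppressed.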

That paper used the fastest submissions from the Benchmarks Game, which tend to be semantically different programs. For example, the current fastest C fasta implementation (one of the benchmarks the paper used) fills a char buffer and prints multiple lines at once using write(2), a non-portable POSIX call, while the fastest Ada one prints a single line at a time using the much slower but portable Ada.Text_IO.Put_Line. On top of that, the Ada programs are typically compiled with -gnatp to turn off all the runtime checks, so in practice I would expect Ada to use a bit more energy or run slower even if the programs were semantically equivalent.

I don’t think that table is representative of performance or energy consumption for any language, and you can kind of infer something is wrong with their methodology based on the spread between C and C++ alone. After all you could simply compile the C programs as C++.

FWIW a while back I spent a bunch of time evaluating the codegen between semantically identical algorithms in C (gcc) and Ada (gnat) and they’re almost always within ~5-10% of each other once you turn off runtime checks. At the assembly level equivalent functions are nearly identical, and the performance difference is mostly due to slight variations in register allocation or instruction scheduling.


This is preaching to the choir, but: Equivalent Ada and C programs have equivalent execution times. Robert Dewar famously had a set of equivalent Ada and C programs that produced identical machine code when compiled with gcc. One should also be aware of Ada Outperforms Assembly: A Case Study and C vs Ada: arguing performance religion. So anything that claims a significant difference between Ada and C execution times is not comparing equivalent programs.

ChatGPT simply produces stuff that looks like what is found online, regardless of whether that information is correct. Anything it produces should be considered inaccurate until proven otherwise.