Why does the light runtime not have deallocation?

AdaCore’s docs state that it is a no-op: “The rationale being that with certifiable software dynamic memory allocation is performed at most once, at startup”.

In embedded C it is often advised not to use dynamic deallocation, but it isn’t prohibited. I assume there isn’t much to supporting it, or am I wrong?

It’s not a huge deal, but it seems odd, especially with SPARK’s shiny memory-leak prevention.

Is it so that you can say that this system cannot be exploited except logically? You can still use up the stack.

Edit: You could also misuse 'Address, so I guess not.
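
For instance (a contrived sketch; the procedure and the names are mine), an overlay through 'Address lets one object scribble over another behind the type system’s back:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Overlay_Demo is
   X : Integer := 42;
   --  Import suppresses initialization; Y now shares X's storage.
   Y : Float with Import, Address => X'Address;
begin
   Y := 1.5;                      --  silently clobbers X
   Put_Line (Integer'Image (X));  --  no longer prints 42
end Overlay_Demo;
```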

Well, the algorithms that control general-purpose heap deallocation are really tough to verify and certify, I think (at least they used to be). There’s a lot of memory reordering when a block is returned to the heap, and the allocator has to try different strategies to prevent memory fragmentation, and it is hard to verify/quantify all of that at a certification level. I believe the idea was that if you wanted allocation/deallocation, you needed to make your own storage pool with a Deallocate procedure that was not a no-op but was much simpler than what general heap deallocation provided. That way you could have a deallocation process that was certifiable. That’s my guess at least.
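
For what it’s worth, here is a minimal sketch of that kind of pool, assuming a single fixed block size (all the names are mine, and it ignores alignment and task safety; the point is just how trivial Deallocate becomes when every block is the same size):

```ada
--  fixed_block_pools.ads
with System;                  use System;
with System.Storage_Elements; use System.Storage_Elements;
with System.Storage_Pools;

package Fixed_Block_Pools is

   --  All blocks share one size (Block_Size), so Deallocate is a push
   --  onto a free list and Allocate a pop: no coalescing, no
   --  fragmentation, nothing hard to analyse.  The record is left
   --  visible only to keep the sketch short.
   type Fixed_Pool (Block_Size, Pool_Size : Storage_Count) is
     new System.Storage_Pools.Root_Storage_Pool with record
      Buffer    : Storage_Array (1 .. Pool_Size);
      Next_Free : Storage_Offset := 1;            --  high-water mark
      Free_List : Address        := Null_Address; --  freed blocks
   end record;

   overriding procedure Allocate
     (Pool                     : in out Fixed_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   overriding procedure Deallocate
     (Pool                     : in out Fixed_Pool;
      Storage_Address          : Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   overriding function Storage_Size
     (Pool : Fixed_Pool) return Storage_Count is (Pool.Pool_Size);

end Fixed_Block_Pools;

--  fixed_block_pools.adb
with System.Address_To_Access_Conversions;

package body Fixed_Block_Pools is

   --  View the first storage elements of a free block as the Address
   --  of the next free block (an intrusive free-list link); this
   --  assumes Block_Size can hold an Address.
   package Links is new System.Address_To_Access_Conversions (Address);

   overriding procedure Allocate
     (Pool                     : in out Fixed_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
      pragma Unreferenced (Alignment);
      --  Sketch only: we assume Buffer and Block_Size already satisfy
      --  the alignment of whatever the pool's clients allocate.
   begin
      if Size_In_Storage_Elements > Pool.Block_Size then
         raise Storage_Error;           --  request too big for a block
      end if;

      if Pool.Free_List /= Null_Address then
         --  Reuse a previously freed block: pop the free list.
         Storage_Address := Pool.Free_List;
         Pool.Free_List  := Links.To_Pointer (Storage_Address).all;
      elsif Pool.Next_Free + Pool.Block_Size - 1 <= Pool.Buffer'Last then
         --  Carve a fresh block off the high-water mark.
         Storage_Address := Pool.Buffer (Pool.Next_Free)'Address;
         Pool.Next_Free  := Pool.Next_Free + Pool.Block_Size;
      else
         raise Storage_Error;           --  pool exhausted
      end if;
   end Allocate;

   overriding procedure Deallocate
     (Pool                     : in out Fixed_Pool;
      Storage_Address          : Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
      pragma Unreferenced (Size_In_Storage_Elements, Alignment);
   begin
      --  Push the block onto the free list, storing the old head
      --  inside the freed block itself.
      Links.To_Pointer (Storage_Address).all := Pool.Free_List;
      Pool.Free_List := Storage_Address;
   end Deallocate;

end Fixed_Block_Pools;
```

An access type then opts in with a clause like for Node_Access'Storage_Pool use My_Pool; after which new and Ada.Unchecked_Deallocation are routed through Allocate and Deallocate above.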


Interesting, and here’s me thinking it was just a free. In C it obviously results in fragmentation, which is the main reason against using it without an MMU.

Thanks

The implementations of free I looked at back in the day involved multiple free lists that had to be managed (they could appear, disappear, and merge together as needed), and some even had small garbage collectors to rearrange memory when either a really large block was freed or sufficient fragmentation was detected. Then, of course, they had to handle requests from multiple processes/tasks/threads.

I don’t know what they look like today, though, so the implementations may be simpler, or it may be simpler to analyze all that for certification. Then again, the ARG currently doesn’t list any certified compilers anyway, so maybe it’s not a practical issue anymore.

I do know that on bare-metal embedded projects I definitely avoid the heap, just for the possible fragmentation issues. Generally malloc will lie to you about memory availability (it often “guesses” that the memory will be available by the time you actually need it), so relying on the null check is problematic, and eventually the system could (potentially) run out of memory due to fragmentation.

Reference: interesting malloc implementation for Linux: What is Overcommit? And why is it bad?


Back in about 2005 we had serious fragmentation problems with GNAT over VxWorks, so we implemented our own interface (to System.Memory? I didn’t do the work myself) which allocated and chained into free lists as many blocks of various sizes as we were going to need (the sizes were determined by experiment); blocks were then allocated from and returned to the appropriate list. I don’t remember what we did about memory leaks, if anything, but then I don’t remember them being a problem.
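
If it helps anyone picture that, here’s a rough sketch of the size-class dispatch (the class names and the three sizes are illustrative, not what we actually used):

```ada
with System.Storage_Elements; use System.Storage_Elements;

package Size_Class_Pools is

   --  Every request is rounded up to one of a few fixed sizes, and
   --  each size class keeps its own free list of identically sized
   --  blocks, so a freed block can always satisfy a later request in
   --  the same class and fragmentation cannot accumulate.
   type Size_Class is (Small, Medium, Large);

   Class_Size : constant array (Size_Class) of Storage_Count :=
     (Small => 32, Medium => 256, Large => 4096);

   function Class_Of (Size : Storage_Count) return Size_Class is
     (if    Size <= Class_Size (Small)  then Small
      elsif Size <= Class_Size (Medium) then Medium
      else  Large)
   with Pre => Size <= Class_Size (Large);
   --  Allocation pops the head of the Class_Of (Size) list;
   --  deallocation pushes the block back onto the same list.

end Size_Class_Pools;
```

Hooking each class up to something like the fixed-block pool sketched earlier in the thread gives you the whole allocator.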
