Do you use Artificial Intelligence for Ada coding?

That’s exactly the direction we’re heading. The fine-tune is one piece — the other piece is an agentic pipeline we’re building around Steelman as the code generation engine. It’s still a work-in-progress prototype, but the architecture is designed around the principles you’re describing.

The design: spec (.ads) first, then the generator produces the body (.adb), then the GNAT compiler verifies it with strict flags — warnings-as-errors, validity checks, style enforcement. If it fails, a patcher agent gets the structured compiler errors and produces a fix. The compiler is the oracle, not another LLM judging the output.
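Concretely, the verification stage could boil down to a single strict GNAT invocation along these lines (the exact switch set and the unit name `stack.adb` are illustrative assumptions, not the pipeline’s actual configuration):

```shell
# Hypothetical strict-compile step for a generated body:
#   -gnatwa  enable (almost) all warnings
#   -gnatwe  treat warnings as errors
#   -gnatVa  turn on all validity checks
#   -gnaty   enforce the default style checks
#   -gnata   enable assertions, so Pre/Post contracts are checked at run time
gcc -c -gnatwa -gnatwe -gnatVa -gnaty -gnata stack.adb
```

Any diagnostics come back in GNAT’s machine-parsable file:line:column form, which is what makes feeding structured errors to a patcher agent practical.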

The point you’re making about not rewriting tests to match a bad implementation is critical, and it’s a core design principle: the compiler and the spec are the authorities; the generated code conforms to them, never the other way around. That’s the whole reason we chose Ada — the language’s type system and contract mechanisms (Pre/Post/Contract_Cases/Type_Invariant) give you machine-checkable specs that an LLM can’t weasel around.
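For readers who haven’t seen these mechanisms, here is a minimal hand-written sketch of a spec using them (a hypothetical `Counters` package for illustration, not output from the pipeline):

```ada
--  Hypothetical .ads sketch: the contracts below are machine-checkable,
--  so a generated body either honours them or fails.
package Counters is

   Max : constant := 100;

   type Counter is private
     with Type_Invariant => Value (Counter) <= Max;

   function Value (C : Counter) return Natural;

   procedure Increment (C : in out Counter)
     with Pre  => Value (C) < Max,                 --  caller obligation
          Post => Value (C) = Value (C)'Old + 1;   --  implementation obligation

   procedure Reset (C : in out Counter)
     with Contract_Cases =>                        --  disjoint, complete cases
       (Value (C) = 0 => Value (C) = 0,
        Value (C) > 0 => Value (C) = 0);

private

   type Counter is record
      V : Natural := 0;
   end record;

   function Value (C : Counter) return Natural is (C.V);

end Counters;
```

With assertions enabled (`-gnata`) these contracts are checked at run time, and in SPARK subsets they can be discharged statically by proof.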

DO-178 / EN 50128 traceability is exactly the kind of application this is heading toward. A fine-tune that understands Ada idioms combined with an agentic pipeline that enforces correctness through the compiler — that’s how you get from “LLM that generates code” to something that could eventually participate in a certification workflow.

The model and all training data are public, but the agentic system is still private. If you have thoughts on what a certification-oriented pipeline would need, I’m interested. Thanks!