Just a thought: Ada should be perfect for vibe coding

I don’t know yet what to think about vibe coding. I do not generate code, but I do use Claude to discuss ideas and perform code reviews on solo projects.

And since Ada is optimized for reading (citation needed :stuck_out_tongue:), I started to think it could be a perfect fit for vibe coding. Best of all worlds when your new programming language is English :wink:

What do you think?

There was actually a paper on this a while ago, though it was specific to SPARK, with the idea that formally verified code would be better suited to LLM-generated stuff:

1 Like

All I can say is that Claude Code sped up my getting into Ada and building usable crates such as TinyAML tremendously. I had to steer and correct it like I’d do with a junior developer, but roundtrip times were so much faster.

Having decades of software engineering practice certainly helped. While I’m only a beginner with Ada, I could trust my instinct when it told me “that doesn’t (yet) look right”.

I find it plausible that Ada’s age and syntax make it easier for an LLM to generate usable results.

1 Like

This!

I’m cooking something by myself too :wink:

Vibecoding is antithetical to Ada.

2 Likes

I think vibe coding is an engineering aberration and an environmental disaster, and it will bring the demise of civilization… that being said, I’m glad you’re learning Ada and having fun :wink: Just please think of the consequences of releasing potentially bad code to the ecosystem (which is why verified code is a better idea with LLMs).

1 Like

It certainly has pros and cons. I’ve seen code from humans (not juniors) that is much worse than what you get from an LLM. At least LLMs are not lazy and don’t skip all the annoying error handling and such.

So my personal opinion: Currently you need too frequent interaction, and you still have to wait too long for the result, to be actually much faster than coding yourself when you really know what you (have to) do. I find it hard to do something meaningful while waiting for the LLM output. Also, it often fails to adapt to changes I make myself and keeps reverting them. But for learning, or for getting an idea of how things are typically done in a new language or framework, it is really helpful.

I find the term “vibe coding” pretty scary, but if you called it “exploratory programming” I might feel better. It sounds like an interesting way to learn a language, but anything written by an LLM should be treated as something you downloaded from an untrusted site on the internet – perhaps a good way to get some ideas, but unlikely to actually be free of safety and security holes.

1 Like

That’s why I was explicit and deliberately called it “vibe coding” :slight_smile: I do not support it. I do like exploratory programming, as I spent a lot of time writing Clojure. Let’s treat the LLM output as a series of experiments, nothing more.

I’m working on a library right now, and as I said, I used Claude to do code review for me, but no code was generated or blindly accepted.

Also, you don’t know the copyright of where it was nicked from.

2 Likes

It might end up being a separate article from the Anteforth one I’m working on, but I’ve found vibe coding touch and go in Ada and SPARK. The advantage is that running with checks on catches consistency issues and provides stack traces, and in general there are a lot of constraints built into the language. I got it to generate Rust-like Box and Rc packages, and I’ve used it to “fill in” package bodies from spec files. GPT-4 and GPT-4.5 struggled a bit with emitting compiling code, but GPT-5 and Gemini Pro both do significantly better.
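For readers curious what a Rust-like Box looks like in Ada: below is a minimal, hedged sketch (the names `Boxes`, `Make`, and `Get` are mine, not from the thread, and this is one of several possible designs). It uses a controlled type so the heap allocation is freed automatically when the Box goes out of scope, which is the Box-like part:

```ada
with Ada.Finalization;
with Ada.Unchecked_Deallocation;

generic
   type Element is private;
package Boxes is
   --  An owning, non-copyable handle to a heap-allocated Element.
   type Box is new Ada.Finalization.Limited_Controlled with private;

   function Make (Value : Element) return Box;
   function Get  (B : Box) return Element;

private
   type Element_Access is access Element;

   type Box is new Ada.Finalization.Limited_Controlled with record
      Item : Element_Access;
   end record;

   --  Called automatically when a Box goes out of scope.
   overriding procedure Finalize (B : in out Box);
end Boxes;

package body Boxes is

   procedure Free is new Ada.Unchecked_Deallocation
     (Element, Element_Access);

   function Make (Value : Element) return Box is
   begin
      --  Extended return: required to build a limited object in place.
      return Result : Box do
         Result.Item := new Element'(Value);
      end return;
   end Make;

   function Get (B : Box) return Element is
   begin
      return B.Item.all;
   end Get;

   overriding procedure Finalize (B : in out Box) is
   begin
      Free (B.Item);  --  Free on a null access is a no-op, so this is
                      --  safe even if Finalize runs more than once.
   end Finalize;

end Boxes;
```

Making the type `Limited_Controlled` (rather than `Controlled`) forbids assignment, which is the closest Ada gets to Rust's move-only ownership without extra machinery. Spec and body are shown together here for readability; in a real crate they would be separate files.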

Gemini Pro did well helping me fill in gaps and ensure consistency of my target spec in Markdown. Using its suggestions in SPARK can be limiting, as I found postconditions must be extensive to be most usable by the provers, and it can lead you into a false sense of security, thinking its suggestions are complete. Gemini Pro was good at checking the Ada code against the spec and identifying inconsistencies and gaps, and also at helping to diagnose some failing proofs. Overall, I wrote the code by hand and used the LLM for research and as a virtual rubber duck.
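To illustrate the point about postconditions needing to be extensive: here is a small, hedged SPARK sketch (my own example, not from the thread). An LLM might plausibly suggest only the first conjunct of the `Post` below; it is true, but too weak for a caller that relies on `Clamp` leaving an in-range value unchanged, so proofs downstream would fail until the second conjunct is added:

```ada
package Clamping with SPARK_Mode is

   --  "Clamp'Result in Lo .. Hi" alone is correct but incomplete:
   --  the conditional conjunct is what lets the provers discharge
   --  callers that depend on X passing through unchanged.
   function Clamp (X, Lo, Hi : Integer) return Integer
     with Pre  => Lo <= Hi,
          Post => Clamp'Result in Lo .. Hi
                  and then (if X in Lo .. Hi then Clamp'Result = X);

end Clamping;

package body Clamping with SPARK_Mode is

   function Clamp (X, Lo, Hi : Integer) return Integer is
   begin
      if X < Lo then
         return Lo;
      elsif X > Hi then
         return Hi;
      else
         return X;
      end if;
   end Clamp;

end Clamping;
```

The contract an LLM offers can verify fine in isolation while still being too weak for the call sites you actually have, which is exactly the false sense of security described above.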