The pay-to-play AI engine GPT-4o is forever incapable of translating Ada 95 into VHDL or C acceptable as input for embedding in FPGAs via the legacy vendor toolchains from Xilinx (AMD) or Altera (Intel).
The only viable solution is a system on a chip (SoC) using a single-board computer (SBC).
A quad-core unit with Win10 Home activated, 3 USB ports, and 100 Mbps Ethernet now retails for under roughly $147, in a form factor smaller than 4" x 4" x 1", about half the size of a cell phone, hence perfect for field applications.
It follows that the implications are legion.
Why would you think otherwise?
What’s being pushed as “AI” is large language models (LLMs): essentially a combination of Markov chains and fuzzy pattern matching. There is no real understanding embedded in the answers presented, only something that “looks” plausible.
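To make the “Markov chain” point concrete, here is a minimal sketch in C: a character-level bigram model that “trains” by counting which symbol follows which, then emits statistically plausible text. The corpus string is made up for illustration; real LLMs are enormously larger and condition on long contexts, but the pick-the-next-symbol-by-frequency principle is the same.

```c
/* Minimal character-level bigram "Markov chain" text generator.
 * Illustrative sketch only: it picks each next symbol from observed
 * pair frequencies, with no model of meaning behind the choice. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NSYM 128  /* ASCII only, for brevity */

static unsigned counts[NSYM][NSYM]; /* counts[a][b] = times b followed a */

/* Sample the next character after 'prev' in proportion to its count. */
static int next_char(int prev)
{
    unsigned total = 0;
    for (int c = 0; c < NSYM; c++) total += counts[prev][c];
    if (total == 0) return ' ';          /* unseen context: fall back */
    unsigned r = (unsigned)rand() % total;
    for (int c = 0; c < NSYM; c++) {
        if (r < counts[prev][c]) return c;
        r -= counts[prev][c];
    }
    return ' ';                          /* unreachable */
}

int main(void)
{
    const char *corpus =
        "the model has no idea what the words mean "
        "the model only counts which symbol follows which symbol";
    srand((unsigned)time(NULL));

    /* "Training": tally symbol-pair frequencies over the corpus. */
    for (size_t i = 0; corpus[i + 1] != '\0'; i++)
        counts[(unsigned char)corpus[i]][(unsigned char)corpus[i + 1]]++;

    /* "Inference": emit 80 characters of statistically plausible text. */
    int c = 't';
    for (int i = 0; i < 80; i++) {
        putchar(c);
        c = next_char(c);
    }
    putchar('\n');
    return 0;
}
```

Locally the output looks like English, because the pair statistics match the corpus; globally it means nothing, which is exactly the point being made above.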
To put it bluntly: even if using statistical probabilities over a body of knowledge were useful for knowledge-query purposes, because the method is statistical you do not get the chain of reasoning that formal logic provides (i.e., provability).
Additionally, even if you did have that provability aspect, a large language model takes as its “premises” faulty information. And, if you remember your intro-to-logic classes, if a premise is false the conclusion cannot be considered correct, even if the argument itself is correctly constructed.
TL;DR — Using “AI” for code generation is flawed at all levels.
Not quite right; from intro logic in Discrete Mathematics 201:
(T => F) = F means true cannot imply false.
Put differently, proof cannot imply contradiction;
or, put theologically, no evil can come out of good.
Confusing to some is that (F => T) = T, meaning
that perfection can come out of imperfection.
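For reference, here is the full truth table for material implication written out in LaTeX; the two rows cited above are the second and third.

```latex
% Truth table for material implication p => q.
% The only false row is T => F; note that F => T is true, which is
% why a validly constructed argument from a false premise can still
% happen to reach a true conclusion (valid, but not sound).
\begin{tabular}{cc|c}
  $p$ & $q$ & $p \Rightarrow q$ \\ \hline
  T & T & T \\
  T & F & F \\
  F & T & T \\
  F & F & T \\
\end{tabular}
```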
There’s one open-source Ada compiler (GNAT), and Ada is one of the most complicated, if not the most complicated, languages ever created. Why would you think AI could translate it?
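To make the difficulty concrete: even a trivial Ada feature such as a constrained integer type has compiler-enforced semantics that a C translation must reinvent by hand. A hypothetical sketch follows; the Ada fragment and the C names are illustrative only, not the output of any real translator.

```c
/* Ada (illustrative):
 *   type Duty_Cycle is range 0 .. 100;   -- bounds enforced by the compiler
 *   D : Duty_Cycle := 42;
 *
 * In C the bounds exist only by convention, so an Ada-to-C translator
 * must emit an explicit check everywhere a Duty_Cycle value is set;
 * and range types are one of the simplest features in the language. */
#include <assert.h>
#include <stdint.h>

typedef uint8_t duty_cycle_t;            /* no range information in the type */

static duty_cycle_t duty_cycle_set(int v)
{
    assert(v >= 0 && v <= 100);          /* hand-rolled Constraint_Error */
    return (duty_cycle_t)v;
}
```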