I likely don’t have the right to barge into this discussion, but oh well.
As a student I’ve been using Claude as what AI was meant to be from the start: an expert system trained on very exact, accurate technical sources. I found it very useful to ask the AI to sum things up or rephrase things for me, or to produce diagrams to help me understand. But it only works when someone has basically already laid out the exact solution (or something close) somewhere, or you get nonsense.
And even when it does work, we should remember what happened with the widespread use of electronic calculators: no one can do mental arithmetic anymore. We train the same mental muscles with “dumb” tasks, so if we become too complacent out of convenience, we lose those skills too, and for a poor end product at that.
Add this to your opencode.json:
"lsp": {
  "ada-lsp": {
    "command": ["ada_language_server"],
    "extensions": [".ads", ".adb", ".ada"]
  },
  "gpr-lsp": {
    "command": ["ada_language_server", "--language-gpr"],
    "extensions": [".gpr"]
  }
}
You have to install ALS standalone, or change "command" to point to the binary shipped inside the VS Code extension.
With Rocq or Lean you can construct a proof and show that the algorithm works. But if you want a runnable piece of code that you can use in a program… well, now you have to redo all the work in another language!
This is not true. Proofs in Rocq use constructive logic (so one can only develop constructive proofs), and afterward one can extract a program in OCaml or Haskell from the development. As I understand it, Lean is itself a programming language that produces runnable programs.
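To illustrate the Lean half of this reply (a minimal sketch of my own, not from the thread; the function name `sumTo` is invented for the example): in Lean 4 the definition you prove things about is the very same definition that runs, so there is no separate translation step.

```lean
-- An ordinary recursive function: this is runnable Lean 4 code.
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

-- The same definition is the subject of proofs.
-- A trivial correctness check, discharged by computation:
example : sumTo 3 = 6 := rfl

-- And it evaluates directly, as in any programming language:
#eval sumTo 10  -- 55
```

For Rocq, the workflow the parent describes is the Extraction mechanism, which turns a constructive development into OCaml, Haskell, or Scheme source.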
Purely personal but I won’t use AI for coding because I like the thinking/design process and I’d like to do it myself instead of getting a machine/someone else to do it for me. Using AI just takes the skill and fun out of coding. Also, AI isn’t going to help you if you are on a project that can only be done in a secure room with no internet access where everything has to be written from scratch.
This is exactly why I do use LLMs for programming. I like software engineering but not programming. I want to be able to just say “Let’s switch this over to Horner’s method and see if it’s faster” and then not have to write out the whole thing myself.
I don’t dislike programming, but it may be my least favorite part of software engineering. It has never fully made sense to me that programming should be part of the job description of a software “engineer.” After all, engineers don’t build bridges. (But I know there are other senses of the word, like train engineer.)
I would use LLMs if my ethical concerns were satisfied.
You know, it seems to me like a LOT of what’s being discussed here could be addressed with a highly integrated IDE (no LLM needed, just a capable database plus editor) where you could select a subprogram (e.g. a sort) and issue an SQL-like command: “SELECT INTO Selected FROM Algorithms WHERE Has_Tag('Sort') AND Big_Omega( n**2 );” or something.
IMO, LLMs are the wrong technology for addressing these sorts of problems, much like ‘text’ is the wrong technology for representing programs. (E.g. dumb text diff vs. semantically aware diff.)
This is something that has been attempted for as long as programming has existed; all previous attempts have proven utterly useless in practice.
I’m wondering if that’s a failure of the ‘integration’ part of Integrated Development Environment.
After all, if you’re using the same substrate in your tools as in your work, you can leverage things. That’s why I’d expect the core of such an IDE to be a SPARK-verified unification engine, which in turn is the core of both the database engine and the proof engine. You could also leverage things by making the database language either your IDE language or something substantially similar, i.e. an Ada+PL/SQL environment.
Good points, but it can still be helpful for docs, and maybe for code snippets on an isolated device. Personally I think it’s dangerous to use LLMs outside the browser.