Use of Ada in HW verification: a new opportunity to save the world

I get what they were trying to do…

y <= d1 when s else d0;

But Ada’s new if/case expressions could do this now.

IIRC, the .all is equivalent to Ada.Strings.*, if * were valid Ada syntax for saying “everything”.

Probably just convention; IIRC, VHDL is case-insensitive, like Ada.

Ah, this is simple: 1..3 is (1, 2, 3). The downto range is the reverse of that: (3, 2, 1).
The reason for it is precisely the difference between numbering from the least-significant bit or from the most-significant bit.
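A quick VHDL sketch of the two directions (entity and signal names invented for illustration):

library ieee;
use ieee.std_logic_1164.all;

entity range_demo is
end entity;

architecture demo of range_demo is
  -- descending range: the leftmost element has the highest index,
  -- conventionally read as the most-significant bit
  signal desc : std_logic_vector(7 downto 0);
  -- ascending range: the leftmost element has the lowest index
  signal asc  : std_logic_vector(0 to 7);
begin
  desc <= "10000000";  -- desc(7) = '1', i.e. the MSB is set
  asc  <= "10000000";  -- asc(0)  = '1': leftmost, but the lowest index
end architecture;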

Yeah, I know, but Ada doesn’t use with ADA.TEXT_IO.all.

Yes, but “STD_” is very C-like.

Realised last night that downto is pre-Ada-80 Ada, I think, and that it’s required by the compiler for checking intent.

I still think a more modern Ada-derived HDL would be worth a go. Not by me, though; I’m really a HW beginner.

Ada could be extended with a new Hardware subset specifically for designing hardware:

type Entity is interface;

type Architecture is new Entity with null record;

Something like this?

It has to be: 32..1 is the null range.
So, in order to specify a descending range, you would have to modify 1..32 with a keyword (e.g. reverse 1..32), or invent some other syntactic construct (e.g. 32 downto 1).
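VHDL inherits the same rule, so a small sketch (purely illustrative; it would sit in an architecture body):

process is
begin
  for i in 32 to 1 loop            -- null range: the body never executes
    report "never reached";
  end loop;
  for i in 32 downto 1 loop        -- iterates 32, 31, ..., 1
    report integer'image(i);
  end loop;
  wait;                            -- suspend forever after one pass
end process;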


VHDL is much more complex than you think, because describing hardware imposes needs that you don’t have when writing software.

For example, in VHDL there are constants, variables and signals. Constants are like in Ada. Variables and signals behave differently: signals describe physical nets, while variables are a kind of virtual signal that might still generate some logic under the hood.
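A minimal sketch of the difference, with invented names - the variable is updated immediately within the process, while the signal update only takes effect once the process suspends:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sig_var_demo is
  port (clk : in  std_logic;
        q   : out unsigned(7 downto 0));
end entity;

architecture rtl of sig_var_demo is
  signal s : unsigned(7 downto 0) := (others => '0');
begin
  process (clk) is
    variable v : unsigned(7 downto 0) := (others => '0');
  begin
    if rising_edge(clk) then
      v := v + 1;  -- variable: takes effect immediately, in sequence
      s <= v;      -- signal: scheduled, visible only after the process suspends
    end if;
  end process;
  q <= s;
end architecture;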

Another example: For one entity, you can write many architectures. One for behavioral simulation, one for RTL simulation, one optimized for speed, one optimized for area…
You select the architecture you want to use either case by case or globally, through configurations. Components can also join the party.
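A hedged sketch of the mechanism (names invented): one entity, two architectures, and a configuration that picks one of them:

entity adder is
  port (a, b : in  integer;
        sum  : out integer);
end entity;

architecture behavioural of adder is
begin
  sum <= a + b;  -- good enough for simulation
end architecture;

architecture rtl of adder is
begin
  sum <= a + b;  -- a synthesisable structure would go here in a real design
end architecture;

configuration adder_sim of adder is
  for behavioural  -- select the simulation architecture
  end for;
end configuration;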

Functions and procedures might or might not generate logic, depending on their body content and how they are used.
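For example (a sketch with invented names): the same function generates an XOR tree when applied to a signal, and no logic at all when evaluated at elaboration time:

library ieee;
use ieee.std_logic_1164.all;

entity parity_demo is
  port (data_bus : in  std_logic_vector(31 downto 0);
        p_out    : out std_logic);
end entity;

architecture rtl of parity_demo is
  function parity (v : std_logic_vector) return std_logic is
    variable p : std_logic := '0';
  begin
    for i in v'range loop
      p := p xor v(i);
    end loop;
    return p;
  end function;

  -- elaboration-time call: computed once, produces no gates
  constant SEED_PARITY : std_logic := parity(x"DEADBEEF");
begin
  -- call on a signal: synthesis emits an XOR tree
  p_out <= parity(data_bus);
end architecture;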

In another thread, it is explained that programming in Ada needs a different way of thinking than programming in C. That is true, and it is even more true with VHDL. For a few years now, FPGA vendors have wanted software programmers to enter the FPGA world (to sell more chips). In the FPGA manufacturers’ forums, we can read many posts from users asking “dumb” questions (no offense here) because they think like software programmers.

On the topic of “Ada as a hardware language”, there is some literature on the subject:

  1. Ada as a hardware description language: An initial report [1984]
  2. A new approach to prototyping Ada-based hardware/software systems [1990]
  3. Language issues of compiling Ada to hardware [2002]
  4. SystemAda: an ada based system-level hardware description language [2009]
  5. Software techniques in ADA for high-level hardware descriptions [1986]
    (Hint: Use /10.1109/mcd.1986.6311802 in sci-hub.)

@Lucretia, I think @DrPi is correct: the hardware aspect of the description language is a LOT more complex/detailed than our software experiences would lead intuition to believe. However, I also believe that there are massive points-of-leverage that could be had by having a truly integrated development environment with VHDL+Ada. (Also +PL/SQL.)


I’m not saying it’s easy, but there are people here with more experience than myself who could define it.

Having got this far, there are far more inconsistencies in VHDL than there are in Ada (e.g. the inconsistent record syntax), which a newer Ada-2025-based approach could fix.

All standards are defined by committee - I’ve been on standards committees. And there’s a huge battle between what makes theoretical sense, what makes business sense (easy to implement), and the politics (companies A and B are big rivals, so Marketing sets policies that demand the engineers on the committee sabotage what the competitor wants, even if it’s good for their own employer too). “People! They’re everywhere, I tell ya!” :slight_smile:

VHDL is case-insensitive. But to make it easier to read, some people like to capitalize all the things that come from the language standard or standard packages (STD_LOGIC_1164), and lowercase is then used for the types and variables defined in the product itself.

Unfortunately, there’s no “standard” on that. And no consistency either :slight_smile:

The “.all” is because you can have the same object declared in multiple packages. So you need to be able to select which ones you want.

A standard example is that you need to merge chips A and B together into one new chip C.
B was a copy of A that then had a bunch of changes made. So A and B have packages with the same names, and those packages have a lot of duplicate object and type names that are mostly the same. For example: A defined MAX_UART_SPEED = 3600 (i.e. it was going to run at 3600 baud at most). But B came along years later, and by then there was 64K. So B has MAX_UART_SPEED = 64000. The bit-widths are different, and the UARTs in the A portion of the design can only run up to 3600 baud, while the UARTs in the B portion can go all the way to 64K. So you need to carefully specify which MAX_UART_SPEED to use:

library A; use A.uart_package.MAX_UART_SPEED;

It’s also a smart way to code, so you don’t pull in a bunch of stuff from a package that you didn’t expect.
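For instance, reusing the names above (a sketch; these declarations would live in some design unit): if you pulled in both packages wholesale, the name would become ambiguous, and only selected names resolve it:

library A, B;
use A.uart_package.all;
use B.uart_package.all;

-- MAX_UART_SPEED is now ambiguous (hidden), so select explicitly:
constant OLD_SPEED : natural := A.uart_package.MAX_UART_SPEED;  -- 3600
constant NEW_SPEED : natural := B.uart_package.MAX_UART_SPEED;  -- 64000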

I’m the guy who gets called in when everybody has spent a month trying to figure out what they did wrong, and the project is dead in the water… So I’ve learned to program defensively.

As far as “..” - remember that these were people in the 80s, with 70s thinking. I started off programming in COBOL, FORTRAN-2, FORTRAN-4. I remember people being blown away when Pascal came along (which was just written to prove that GOTO statements weren’t needed).

You know what unit-testing and Mocks are? Well “architecture” is how you can do Mocks. But it’s also how you select between different implementations of the same thing. In the SW world, conceptually you’d have a UART_driver entity, and architectures defined for Ubuntu, Win10, Win11, Win95, etc. The entity ensures everyone has identical interfaces, and the architectures are used to encapsulate implementation-specific info.

Then you use the Configuration statement to select what combination of architectures you want to form your product.
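A hedged sketch of that selection (entity, instance and architecture names all invented):

configuration win10_build of top is
  for structural                           -- the architecture of top being configured
    for uart_inst : uart_driver
      use entity work.uart_driver(win10);  -- bind the Win10 "mock" architecture
    end for;
  end for;
end configuration;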

Hardware has a lot bigger solution space than SW. In SW, you basically have just small footprint or fast. But in HW, you have tradeoffs for die area, cybersecurity, power, immunity to single-event-upset (See Airbus last week), and a lot of IP is licensed, not open-source. So you might have to build multiple versions of the same product, depending on target market. The EU requires cybersecurity module “A” be used, and with backdoors installed so govts can spy on you. US forbids using “A”, and requires “B”. And also forbids shipping products with “B” outside the USA.


I’ve been on standards committees; it takes a decade to get changes through. Where I think the big value is:

  • use Ada for the SW, and for the class-based testbenches to verify HW.
  • use VHDL for the HW.
  • and focus on what things are common to both HW+SW. For example: constants and data structures - bit-widths, address values, enums, etc. We need to get rid of having the same value defined in two places.

Ideally we’d have some subset of things that can be directly used in both languages.

If not, I’d propose using the IP-XACT standard to capture all the constants & data-types, and then use post-processing scripts to generate the VHDL and Ada packages.

That way there’s still one source of information.
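As a sketch of what the generated HW side might look like (file, package and constant names all hypothetical), with a mirror-image Ada package generated from the same IP-XACT description so the two sides can never drift apart:

-- chip_params_pkg.vhd : generated from the IP-XACT source, never hand-edited
package chip_params_pkg is
  constant ADDR_WIDTH     : natural := 32;
  constant DATA_WIDTH     : natural := 64;
  constant UART_BASE_ADDR : natural := 16#4000_0000#;
end package;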

IP-XACT also allows capturing the build and verification process, which is a huge deal. It’s literally a standard for capturing the SBOM and DevOps flow in a reusable format.

I agree! I’d start (as noted below) with just getting common ground established: a single source for constants and data-types shared between SW and HW. I’ve been doing HW/SW codesign for literally decades, and I think Ada is uniquely qualified to replace UVM, SystemVerilog, C, and C++. And why change to Rust, when you’d spend all that conversion cost and not get 80% of the value that Ada would give you?

You started this thread, :winking_face_with_tongue: and seems like you’re the perfect person to do it. hint, hint

Standards committees didn’t stop people making Chisel on Scala, or the Rust HDL, or the Haskell one, or the myriad Python ones.

Basing it on Ada 2025, rather than some pre-1980s Ada/Pascal/whatever else, seems like a no-brainer to me, as VHDL doesn’t really feel anything like Ada to me.

How much VHDL have you used?
Have you “compiled” anything to silicon? FPGA?
I think you may be judging a bit too harshly when considering the multitude of [low-level] details that have to be attended to in HW; one of the big failings of C is that it forces you to cater to the low level rather than the problem space, but WRT HW design the low level is the problem space.

I’m thinking a bit more high-level than merely “single source” and having a tool generate Ada and VHDL - rather, a full IDE… indeed, to break much of the [IMO wrongheaded] dependency on external trivia, I’d go with an integrated database-first approach, and also a [common/specialized] IR-first approach. (This de-prioritizes ‘files’ and ‘text’, instead focusing on the underlying structure.)

One thing that might really help is a re-implementation of IDL [of which DIANA is an instance], with the ability to define a common IR-set for Ada+VHDL+PL/SQL. Then, “as an ‘afterthought’”, implement the file/text parsing (i.e. cementing that ‘text’ [and ‘file’] are second-class citizens within the IDE). There are a LOT of places where things can be consolidated/leveraged, such as the ability to implement a SPARK-proved unification engine, since unification is used both in database queries and in theorem provers.

Not yet; as stated above, I have only just started, and tbh first impressions aren’t that good. It’s like they did a good job with Ada and then couldn’t really be bothered to match it with VHDL.

I agree about the GUI. My personal suggestion: just use Kactus2 to load and check the IP-XACT fileset. If you compile the source for Kactus2, you can then use its Python API to manipulate the IP-XACT data.

VHDL has a different target. LISP is meant for one type of work, and Assembly code another. There are concepts in SW that don’t apply to HW, and vice-versa. Also, at some point, you have to “ship the product”. Which means that concepts that would be nice to have, but are unfinished, or generate a lot of controversy, get dropped.

As an example: Ethernet (packet-based) switching is about 10% network-efficient, whereas circuit-based (Fibre Channel) is more like 80-90%. So if all you do is change the type of packet protocol, the amount of $$$ you have to spend to build the Internet drops by 5-8X. OK, so now the two types of data are: files, and then audio. Turns out, the ideal packet size for files is 128 bytes; for audio, 32 bytes. The two sides couldn’t agree, and somehow came up with 48, which was very inefficient for both applications. So nobody used it. And Ethernet won. But the two camps DID make sure that the other guys didn’t get market share…

So don’t beat up the technical standards out there - often their biggest enemy is the VP-Marketing, and the engineers on the Committee have to carefully work around the Marketing demands yet still build a standard that’ll be useful enough that people will use it.

I’d recommend you look at the IP-XACT standard. It’s a database (XML schema) and every point in it supports “vendorExtension” where you can add your own arbitrary schema to it. The standards body itself has used that to extend IP-XACT to support analog/mixed-signal (AMS).

Also, IP-XACT specifies everything using VLNV: Vendor, Library, Name, Version. Not by filename. The only times filenames are specified are when you need to provide paths to external files (Verilog code, Python or TCL scripts, etc.).

Note that IP-XACT is meant for capturing everything needed about an IP: like build-environment (requires Python 3.8.7+), along with build scripts themselves. So it looks like it does a lot more than just what IDL does (and I only took a very quick glance at IDL, so maybe it supports the build flow as well). My thought would be that the IP-XACT file for IP ‘FOO’ would reference the appropriate IDL files, and require compliance. As well as specify the compiler switches, compiler versions, etc. that need to be run as part of running the compliance test.

Once you have an API for reading and cross-checking/collating all the IP-XACT info, it’s not hard to add a GUI, since everything is just collections of the XML fundamentals: strings, booleans, integers, floats, etc. One guy I knew built a generic GUI that did just that - and then the user could specify schema-specific panels: for example, how to display a particular list as a drop-down listbox instead of a bunch of radio buttons. Which was really flexible.

What I really wish I had in Ada was that tool to parse and build up the IP-XACT files. Then everyone could build tools on top of it. And add VendorExtensions to IDL, etc.

If I can’t read the text in the crappy GUI, which I can’t, then I can’t use it.

I’ll be trying to do something with VHDL at some point.

It seems that you’re fundamentally misunderstanding what the IDL (and DIANA) are for. IDL’s design was generally for specifying datastructures, with a primary use-case for usage within compilers. DIANA was an instance of IDL which mapped to Ada83, such that it was an intermediate representation (IR). — One interesting thing about IDL is that the datastructures were defined in a manner that you could derive from an existing structure, extending it, or suppressing/constraining some element, or both.

Perhaps I wasn’t clear enough on the design I was getting at with the IDE: imagine doing all the “compiler” (and tool) stuff completely w/o files, completely w/o text, and designing all that sort of stuff as an import/export module. That is to say, take a datastructure-first approach throughout the IDE, backed/implemented by a database. This immediately makes the IDE (and translators, and tools) not dependent on things like the file system or environment variables, and thus makes it very portable.

Or, to state it a different way, ask: “What would this look like without text? without files?”

I remember the University of York paper (no.3 above). Interesting, but somewhat weakened by the “synthesisable Ada” example code, a loop and some operations, shown in the paper … after a couple of trivial syntax changes, such as swapping “..” for “to”, I wrapped it in a VHDL clocked process statement, and fed it into the Xilinx synthesis tool, which was perfectly happy to accept it as synthesisable VHDL!
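Something like this sketch, where the entity and the loop body are invented stand-ins for the paper’s example:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity wrapped_loop is
  port (clk   : in  std_logic;
        x_in  : in  std_logic_vector(31 downto 0);
        y_out : out std_logic_vector(31 downto 0));
end entity;

architecture synth of wrapped_loop is
begin
  process (clk) is
    variable acc : unsigned(31 downto 0);
  begin
    if rising_edge(clk) then
      acc := unsigned(x_in);
      for i in 1 to 8 loop                  -- ".." swapped for "to"
        acc := acc + shift_right(acc, i);   -- placeholder operations
      end loop;                             -- synthesis unrolls this completely
      y_out <= std_logic_vector(acc);       -- one combinatorial cloud per clock
    end if;
  end process;
end architecture;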

It generated a huge pile of gates - about 10x the size reported in the paper - to implement the example in a single (rather slow, below 20 MHz if memory serves) clock cycle! In other words, it unrolled the loop entirely into combinatorial logic! (Which was wrapped in the clocked process above, as a way to allow timing measurement.)

I stopped there as I was just playing. Changes, e.g. to perform one loop iteration per clock cycle, would have been pretty easy, but would disturb the Ada code. So the value of that paper, it seems to me, is that it illustrated ways to automatically extract serialism (which is a real problem - economising hardware) from a fundamentally parallel Ada or behavioural VHDL description.

There IS definite synergy between Ada and VHDL. For one point, I used Ada generics, instantiated with both floating point and different resolutions of fixed point, to explore accuracy vs hardware cost, before using essentially the same code as a behavioural VHDL model (“golden model”) to compare against synthesisable (state machine or pipelined) VHDL implementations, to show bit-level accuracy of the latter.

Another point. The leading open-source VHDL simulator, GHDL, has a secret weapon… it is implemented in Ada: https://github.com/ghdl/ghdl

But possibly the biggest gain would be to bring SPARK to HW development. Formal proof would hugely simplify HW verification. And it would not be a hard sell to HW engineers: they already appreciate the formal-proof approach, e.g. the superiority of static timing analysis over gate-level simulation for meeting speed goals.

However, I don’t see either extending Ada into a HW design language or extending SPARK to directly prove VHDL as realistic short-term goals. But the similarity between Ada and behavioural VHDL could be leveraged to prove the correctness of the algorithm (Ada/behavioural VHDL) used as a starting point - and golden model - for HW development and verification.

Having a proven-correct model, down to the bit level, would be a huge win.
Subsequent transformation into efficient hardware (either state-machine based for small hardware, or balanced pipelines for performance) is somewhat mechanical (though the tools still sucked last time I looked; I could still do much better by hand).
Then verification reduces to demonstrating that the transformation hasn’t broken anything - the output is bit-level identical to the behavioural model, just delayed by e.g. the pipeline length. In some domains in the ASIC world this can be done formally by equivalence checkers, but I don’t know of an equivalence checker for VHDL (the big money seems to have migrated to SystemC).
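In simulation, a crude version of that check is just a delay line and an assertion. A hedged sketch (names and pipeline depth invented; a real equivalence checker does this formally rather than by simulation):

library ieee;
use ieee.std_logic_1164.all;

entity equiv_check_tb is
  generic (PIPE_DEPTH : positive := 4);  -- assumed latency of the pipelined design
end entity;

architecture tb of equiv_check_tb is
  signal clk        : std_logic := '0';
  signal golden_out : std_logic_vector(31 downto 0);  -- from the behavioural model
  signal dut_out    : std_logic_vector(31 downto 0);  -- from the pipelined design
  type delay_line_t is array (1 to PIPE_DEPTH) of std_logic_vector(31 downto 0);
  signal delayed : delay_line_t;
begin
  clk <= not clk after 5 ns;

  -- (instances of the behavioural and pipelined architectures, fed identical
  --  stimulus, would drive golden_out and dut_out here)

  check : process (clk) is
  begin
    if rising_edge(clk) then
      delayed <= golden_out & delayed(1 to PIPE_DEPTH - 1);
      assert dut_out = delayed(PIPE_DEPTH)   -- meaningful once the pipeline has filled
        report "pipelined output diverged from the behavioural model"
        severity failure;
    end if;
  end process;
end architecture;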
