ANN: Configurable bareboard runtimes v15.2.0

I have released version 15.2.0 of my configurable bareboard runtime crates.

The light, light-tasking, and embedded runtime profiles are available for the following targets:

  • Raspberry Pi RP2040
  • Raspberry Pi RP2350 (Cortex-M33 cores)
  • Nordic Semi nRF52 Series
  • STMicroelectronics STM32F0xx Series
  • STMicroelectronics STM32G0xx Series
  • STMicroelectronics STM32G4xx Series

You can find pre-generated runtimes as crates in the Alire Community Index (v15.2.0 should show up from tomorrow), or attached as assets in the release announcement on GitHub.

The crates are named after the runtime profile and target, e.g. light_tasking_rp2040 is the light-tasking runtime for the RP2040.

Previously, these runtimes were managed in separate repositories on GitHub, but having them spread out like this was starting to make maintenance more burdensome. So I’ve now moved these runtimes to a new repository: community-bb-runtimes. If you have any problems, questions, or requests for these runtimes, that’s the place to open an issue.

Changes in v15.2.0:

  • Added the light runtime profile for the RP2040 and RP2350
  • RP2350: Fixed FPU not being initialized at startup
  • RP2350: Fixed incorrect interrupt IDs for CPU 2 interrupts
  • RP2350: Fixed compilation error with single-core runtime configurations
  • RP2350: Fixed potential race condition when interrupts are nested
  • RP2040: Fixed crash during context switch for tasks on the second CPU
  • Improved GNATprove compatibility when analyzing projects that use the runtimes

In this version I’ve also paid a lot of attention to improving the quality of the releases. I now have automated on-target testing set up, with test cases that sanity-check target-specific behaviour such as interrupts and multicore tasking on each supported target. This should help prevent future regressions and provide better quality assurance.

10 Likes

Hello @damaki,

The RP2040 and RP2350 are dual-core MCUs. Does that mean you provide a symmetric runtime?
What I mean is: is it possible to write code like this:

task A with CPU => 1;
task B with CPU => 2;

If so, how do you manage the priority inheritance protocol when a protected object is used by two tasks on different CPUs?

Nice. But will it work with pico_bsp, or will they fight over the startup code? Over on Telegram they said it should be possible.

That’s correct, you can write code like that to pin tasks to either of the two cores!

The priority ceiling protocol works as normal. When a task (on either core) makes a call to a protected object, it inherits the ceiling priority of the protected object, so you cannot then make another call to a protected object with a lower ceiling priority.

If two tasks on different CPUs try to call the same protected object at the same time, then one CPU will “win” and acquire the lock on the protected object, and the other CPU will wait (spin) on the lock until the first CPU returns from the protected call and releases it, at which point the waiting CPU acquires the lock.
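For anyone curious what this looks like in source, here’s a minimal sketch of my own (not taken from the runtime or its examples; package body with the task bodies omitted) of two tasks pinned to different cores sharing a protected object whose ceiling is set with the Priority aspect:

```ada
with System;

package Demo is

   --  The Priority aspect sets this protected object's ceiling priority;
   --  a call from either core runs at that priority while inside it.
   protected Counter
     with Priority => System.Priority'Last
   is
      procedure Increment;
      function Value return Natural;
   private
      Count : Natural := 0;
   end Counter;

   --  Pin each task to a core with the CPU aspect (Ada 2012, Annex D.16)
   task A with CPU => 1;
   task B with CPU => 2;

end Demo;
```

Both declarations are at library level, as the Ravenscar-style profiles require.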

3 Likes

Yes, it works with rp2040_hal and pico_bsp!

You will, however, need to configure rp2040_hal to tell it not to use its own startup code or interrupt handlers by adding this to your project’s alire.toml:

[configuration.values]
rp2040_hal.Use_Startup = false
rp2040_hal.Interrupts = "bb_runtimes"
2 Likes

Cool. I could have the internal and external LED blink at the same time at different intervals. (I don’t do anything serious with my pi). I’ll have a look at it tomorrow.

2 Likes

By the way, I have a demo project that shows using both cores and rp2040_hal/pico_bsp here: GitHub - damaki/pico_smp_demo: Ada multicore demo for the Raspberry Pi Pico (RP2040)

2 Likes

I tried that. however:

multiple definition of `...' (symbol names and file paths garbled in the paste; many similar "multiple definition" errors follow)

Of course, as with any linker error, the actual error message runs to multiple pages. What could that be? I can give you the full error message if that helps. Source code is on SourceForge.

It looks like you’re using version 2.6.0 of rp2040_hal, which has a known issue that there are conflicting aeabi symbols for floating point operations (both rp2040_hal and the runtime are trying to provide them). This is fixed in version 2.7.0 of rp2040_hal, so I recommend upgrading to that version.

Version 2.7.0 is also now compatible with the GNAT FSF 15 toolchain, so you can also update that to use this latest 15.2.0 release of the runtimes (though version 14 will still work if you really want to stick with that).

Basically, try updating your alire.toml dependencies to this:

[[depends-on]]
rp2040_hal                      = "^2.7.0"
pico_bsp                        = "^2.2.0"
light_tasking_rp2040            = "^15.2.0"

I also recommend updating your Alire index to make sure you can see the latest versions of everything by running:

alr index --update-all

I’ve tried this locally and I was able to reproduce your issue and fix it by doing the above.

Needed a bit more fine-tuning, but this compiled:

[[depends-on]]
rp2040_hal                      = "^2.7"
pico_bsp                        = "^2.2"
light_tasking_rp2040            = "^15.2"

[configuration.values]
rp2040_hal.Use_Startup          = false
rp2040_hal.Interrupts           = "bb_runtimes"
light_tasking_rp2040.Max_CPUs   = 2
light_tasking_rp2040.Board      = "rpi_pico"

Strangely “.0” didn’t work.

1 Like

Thanks for the answer. A similar question: how do you manage the clock for scheduling on both CPUs? Do you have a single timer as a common time base for both of them?

1 Like

I use the TIMER peripheral as a common time base between the two CPUs, and I reserve one of the four 32-bit ALARM channels for the runtime to schedule an interrupt (the exact ALARM channel used is configurable via a crate configuration variable, with ALARM3 as the default). So this gives both CPUs a common time base.

The runtime’s alarm interrupt handler always runs on the first CPU. This handler checks both CPUs for any expired alarms. If an alarm has expired on the second CPU, the first CPU “pokes” the second CPU to generate an interrupt, and the poke handler on that CPU then handles the expired alarm and performs any required rescheduling and context switches.
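In rough Ada-flavoured pseudocode (all names here are hypothetical, not the actual runtime sources), the dispatch flow described above is something like:

```ada
--  Hypothetical sketch only; the real runtime's names and structure differ.
--  Runs on CPU 1 when the reserved ALARM channel fires.
procedure Alarm_Interrupt_Handler is
begin
   --  Wake any tasks whose alarm has expired on this CPU
   if Earliest_Alarm (CPU => 1) <= Now then
      Wake_Expired_Tasks (CPU => 1);
   end if;

   --  An alarm expired on the other CPU: raise an inter-core
   --  interrupt ("poke") so CPU 2 does its own rescheduling
   if Earliest_Alarm (CPU => 2) <= Now then
      Poke (CPU => 2);
   end if;

   --  Re-program the ALARM channel for the earliest pending
   --  alarm across both CPUs
   Program_Alarm (Earliest_Pending_Alarm);
end Alarm_Interrupt_Handler;
```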

1 Like

Why not using the CPU SysTick?

1 Like

There are two reasons for not using the SysTick:

  1. Each CPU has its own SysTick, and a CPU cannot read the other CPU’s SysTick, so it would not provide a common time base.
  2. The TIMER allows for more accurate task delays and lower power consumption.

The SysTick is a simple down counter, so it would generate an interrupt every 1 ms (or however long the period is configured for). Not only would this waste power and CPU time by waking up the CPU frequently just to check for expired alarms, it would also mean that task delays are only as accurate as the tick period.

By contrast, the TIMER peripheral has 1 µs resolution, and its 32-bit ALARMs allow the runtime to schedule an interrupt anywhere in the next 2^32 microseconds (about 1 hour 11 minutes). This avoids unnecessary wakeups (unless the next task wakeup is more than 1 hour 11 minutes in the future), allowing the CPU to sleep for longer, and makes task delays more accurate, since the interrupt is generated exactly when the next alarm expires.
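As a quick sanity check on that figure (assuming the TIMER’s 1 MHz tick):

```ada
--  32-bit ALARM counting 1 us ticks:
Max_Span_Us : constant := 2**32;                 --  4_294_967_296 us
Max_Span_S  : constant := Max_Span_Us / 1_000_000;
--  = 4294 s (and change) = 71 min 35 s, i.e. about 1 h 11 min
```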

6 Likes

Thanks for sharing your knowledge

1 Like