The crates are named after the runtime profile and target, e.g. light_tasking_rp2040 is the light-tasking runtime for the RP2040.
Previously, these runtimes were managed in separate repositories on GitHub, but having them spread out like this was starting to make maintenance more burdensome. So I’ve now moved these runtimes to a new repository: community-bb-runtimes. If you have any problems, questions, or requests for these runtimes, that’s the place to open an issue.
Changes in v15.2.0:

- Added the light runtime profile for the RP2040 and RP2350
- RP2350: Fixed FPU not being initialized at startup
- RP2350: Fixed incorrect interrupt IDs for CPU 2 interrupts
- RP2350: Fixed compilation error with single-core runtime configurations
- RP2350: Fixed potential race condition when interrupts are nested
- RP2040: Fixed crash during context switch for tasks on the second CPU
- Improved GNATprove compatibility when analyzing projects that use the runtimes
In this version I’ve also put a lot of attention to improving the quality of the releases. I now have automated on-target testing set up with various test cases to provide some sanity checks that target-specific things like interrupts and multicore tasking are working correctly on each supported target. This should help prevent future regressions and provide better quality assurance.
That’s correct, you can write code like that to pin tasks to either of the two cores!
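For example, a sketch of pinning tasks to cores with the standard `CPU` aspect (`System.Multiprocessors` numbers CPUs from 1; the package and task names here are mine, and the actual LED toggling and delays are elided):

```ada
with System.Multiprocessors; use System.Multiprocessors;

package Blinky is
   task Blink_Onboard with CPU => 1;   --  pinned to the first core
   task Blink_External with CPU => 2;  --  pinned to the second core
end Blinky;

package body Blinky is

   task body Blink_Onboard is
   begin
      loop
         null;  --  toggle the on-board LED here, then delay
      end loop;
   end Blink_Onboard;

   task body Blink_External is
   begin
      loop
         null;  --  toggle the external LED here, then delay
      end loop;
   end Blink_External;

end Blinky;
```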
The priority ceiling protocol works as normal. When a task (on either core) makes a call to a protected object, it inherits the ceiling priority of the protected object, so you cannot then make another call to a protected object with a lower ceiling priority.
If two tasks on different CPUs call the same protected object at the same time, one CPU will “win” and acquire the protected object’s lock; the other CPU will wait (spin) on the lock until the first CPU returns from the protected call and releases it, at which point the waiting CPU acquires the lock.
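As an illustration of the ceiling protocol, here is a hedged sketch (the names are mine, not from the runtime) of a protected object that tasks on either core could share. The `Priority` aspect sets the ceiling; a caller inherits this priority for the duration of the protected action:

```ada
with System;

package Shared_Counter is

   protected Counter
     with Priority => System.Priority'Last  --  the ceiling priority
   is
      procedure Increment;
      function Value return Natural;
   private
      Count : Natural := 0;
   end Counter;

end Shared_Counter;

package body Shared_Counter is

   protected body Counter is

      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;

      function Value return Natural is
      begin
         return Count;
      end Value;

   end Counter;

end Shared_Counter;
```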
You will, however, need to configure rp2040_hal to tell it not to use its own startup code or interrupt handlers by adding this to your project’s alire.toml:
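The shape of that configuration would be something like the fragment below. Note that the configuration variable names here are placeholders, not necessarily the crate’s actual keys; check the rp2040_hal documentation for the real names:

```toml
# Hypothetical sketch -- variable names are placeholders, consult
# rp2040_hal's documentation for the actual configuration keys.
[configuration.values]
rp2040_hal.Use_Startup = false             # placeholder name
rp2040_hal.Use_Interrupt_Handlers = false  # placeholder name
```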
Cool. I could have the internal and external LED blink at the same time at different intervals. (I don’t do anything serious with my pi). I’ll have a look at it tomorrow.
[garbled linker output: repeated “multiple definition of `__aeabi_…'” errors, with the symbol names and file paths mangled]
Of course, as is usual with linker errors, the actual error message runs to multiple pages. What could that be? I can give you the full error message if that helps. The source code is on SourceForge.
It looks like you’re using version 2.6.0 of rp2040_hal, which has a known issue where both rp2040_hal and the runtime try to provide the aeabi floating-point support symbols, resulting in conflicting definitions. This is fixed in version 2.7.0 of rp2040_hal, so I recommend upgrading to that version.
Version 2.7.0 is also now compatible with the GNAT FSF 15 toolchain, so you can also update that to use this latest 15.2.0 release of the runtimes (though version 14 will still work if you really want to stick with that).
Basically, try updating your alire.toml dependencies to this:
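A sketch of what that dependency update might look like, assuming you are using the light-tasking runtime for the RP2040 (adjust the runtime crate name to match your target and profile):

```toml
[[depends-on]]
rp2040_hal = "^2.7.0"

[[depends-on]]
light_tasking_rp2040 = "^15.2.0"
```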
Thanks for the answer. A similar question: how do you manage the clock for scheduling on both CPUs? Do you have only one timer as a common base for both of them?
I use the TIMER peripheral as a common time base between the two CPUs, and I reserve one of the four 32-bit ALARM channels for the runtime to schedule an interrupt (the exact ALARM channel used is configurable via a crate configuration variable, with ALARM3 as the default). So this gives both CPUs a common time base.
The runtime’s alarm interrupt handler always runs on the first CPU. This handler checks both CPUs for any expired alarms. If an alarm has expired on the second CPU, the first CPU “pokes” the second CPU to generate an interrupt there, and the poke handler on that CPU handles the expired alarm and performs any required rescheduling and context switches.
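The flow described above might be sketched as pseudocode (all names are illustrative; this is not the runtime’s actual code):

```
Alarm_Interrupt_Handler:              -- always runs on the first CPU
    for each CPU C in (CPU 1, CPU 2):
        if the earliest alarm on C has expired:
            if C is the first CPU:
                wake the expired tasks and reschedule locally
            else:
                poke C                -- raise an inter-core interrupt;
                                      -- C's poke handler wakes its tasks
    program the TIMER ALARM channel for the earliest remaining alarm
```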
Each CPU has its own SysTick, and a CPU cannot read the other CPU’s SysTick, so this would not provide a common time base.
The TIMER allows for more accurate task delays and lower power.
The SysTick is a simple down counter, so it would generate an interrupt every 1 ms (or however long the period is configured for). Not only would this waste power and CPU time by waking up the CPU frequently just to check for expired alarms, it would also mean that task delays are only as accurate as the tick period.
By contrast, the TIMER peripheral has 1 µs resolution, and its 32-bit ALARMs allow the runtime to schedule an interrupt anywhere in the next 2^32 microseconds (about 1 hour 11 minutes). This avoids unnecessary wakeups (unless the next task wakeup is more than about 1 hour 11 minutes in the future), allowing the CPU to sleep for longer, and it makes task delays more accurate since the interrupt is generated exactly when the next alarm expires.
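For instance, with the TIMER-based clock a periodic task can use a period that is not a multiple of any tick, sketched here with the standard `Ada.Real_Time` API (the procedure name and the 1.5 ms period are mine):

```ada
with Ada.Real_Time; use Ada.Real_Time;

procedure Precise_Periodic is
   Next : Time := Clock;
begin
   loop
      Next := Next + Microseconds (1_500);  --  wake every 1.5 ms exactly
      delay until Next;
      --  ... do the periodic work here ...
   end loop;
end Precise_Periodic;
```

With a 1 ms SysTick scheme, each of these delays would be rounded up to the next tick; with the TIMER, the alarm interrupt fires at the requested microsecond.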