# Real Time For the Masses

> A concurrency framework for building real time systems

## Preface

This book contains user level documentation for the Real Time For the Masses (RTFM) framework. The API reference can be found here.

There is a translation of this book in Russian.
## Features

- **Tasks** as the unit of concurrency [1]. Tasks can be *event triggered* (fired in response to asynchronous stimuli) or spawned by the application on demand.
- **Message passing** between tasks. Specifically, messages can be passed to software tasks at spawn time.
- **A timer queue** [2]. Software tasks can be scheduled to run at some time in the future. This feature can be used to implement *periodic* tasks.
- Support for prioritization of tasks and, thus, **preemptive multitasking**.
- **Efficient and data race free memory sharing** through fine grained *priority based* critical sections [1].
- **Deadlock free execution** guaranteed at compile time. This is a stronger guarantee than what's provided by the standard `Mutex` abstraction.
- **Minimal scheduling overhead.** The task scheduler has minimal software footprint; the hardware does the bulk of the scheduling.
- **Highly efficient memory usage.** All the tasks share a single call stack and there's no hard dependency on a dynamic memory allocator.
- **All Cortex-M devices are supported.** The core features of RTFM are supported on all Cortex-M devices. The timer queue is currently only supported on ARMv7-M devices.
- This task model is amenable to known WCET (Worst Case Execution Time) analysis and scheduling analysis techniques. (Though we haven't yet developed Rust friendly tooling for that.)
## Requirements

- Rust 1.36.0+
- Applications must be written using the 2018 edition.
## Acknowledgments

This crate is based on the RTFM language created by the Embedded Systems group at Luleå University of Technology, led by Prof. Per Lindgren.
## References

1. Eriksson, J., Häggström, F., Aittamaa, S., Kruglyak, A., & Lindgren, P. (2013, June). Real-time for the masses, step 1: Programming API and static priority SRP kernel primitives. In *Industrial Embedded Systems (SIES), 2013 8th IEEE International Symposium on* (pp. 110-113). IEEE.
2. Lindgren, P., Fresk, E., Lindner, M., Lindner, A., Pereira, D., & Pinho, L. M. (2016). Abstract timers and their implementation onto the ARM Cortex-M family of MCUs. *ACM SIGBED Review, 13*(1), 48-53.
## License
All source code (including code snippets) is licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)
at your option.
The written prose contained within the book is licensed under the terms of the Creative Commons CC-BY-SA v4.0 license (LICENSE-CC-BY-SA or https://creativecommons.org/licenses/by-sa/4.0/legalcode).
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.
# RTFM by example
This part of the book introduces the Real Time For the Masses (RTFM) framework to new users by walking them through examples of increasing complexity.
All examples in this part of the book can be found in the GitHub repository of the project, and most of the examples can be run on QEMU so no special hardware is required to follow along.
To run the examples on your laptop / PC you'll need the qemu-system-arm
program. Check the embedded Rust book for instructions on how to set up an
embedded development environment that includes QEMU.
## The `app` attribute

This is the smallest possible RTFM application:
```rust
//! examples/smallest.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

// panic-handler crate
extern crate panic_semihosting;

use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init() {}
};
```
All RTFM applications use the `app` attribute (`#[app(..)]`). This attribute must be applied to a `const` item that contains items. The `app` attribute has a mandatory `device` argument that takes a path as a value. This path must point to a peripheral access crate (PAC) generated using `svd2rust` v0.14.x. The `app` attribute will expand into a suitable entry point so it's not required to use the `cortex_m_rt::entry` attribute.

> ASIDE: Some of you may be wondering why we are using a `const` item as a module and not a proper `mod` item. The reason is that using attributes on modules requires a feature gate, which requires a nightly toolchain. To make RTFM work on stable we use the `const` item instead. When more parts of macros 1.2 are stabilized we'll move from a `const` item to a `mod` item and eventually to a crate level attribute (`#![app]`).
### `init`

Within the pseudo-module the `app` attribute expects to find an initialization function marked with the `init` attribute. This function must have signature `[unsafe] fn()`.

This initialization function will be the first part of the application to run.

The `init` function will run *with interrupts disabled* and has exclusive access to Cortex-M and device specific peripherals through the `core` and `device` variables, which are injected in the scope of `init` by the `app` attribute. Not all Cortex-M peripherals are available in `core` because the RTFM runtime takes ownership of some of them -- for more details see the `rtfm::Peripherals` struct.

`static mut` variables declared at the beginning of `init` will be transformed into `&'static mut` references that are safe to access.

The example below shows the types of the `core` and `device` variables and showcases safe access to a `static mut` variable.
```rust
//! examples/init.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init() {
        static mut X: u32 = 0;

        // Cortex-M peripherals
        let _core: rtfm::Peripherals = core;

        // Device specific peripherals
        let _device: lm3s6965::Peripherals = device;

        // Safe access to local `static mut` variable
        let _x: &'static mut u32 = X;

        hprintln!("init").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }
};
```
Running the example will print `init` to the console and then exit the QEMU process.

```console
$ cargo run --example init
init
```
### `idle`

A function marked with the `idle` attribute can optionally appear in the pseudo-module. This function is used as the special *idle task* and must have signature `[unsafe] fn() -> !`.

When present, the runtime will execute the `idle` task after `init`. Unlike `init`, `idle` will run *with interrupts enabled* and it's not allowed to return so it runs forever.

When no `idle` function is declared, the runtime sets the SLEEPONEXIT bit and then sends the microcontroller to sleep after running `init`.

Like in `init`, `static mut` variables will be transformed into `&'static mut` references that are safe to access.

The example below shows that `idle` runs after `init`.
```rust
//! examples/idle.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init() {
        hprintln!("init").unwrap();
    }

    #[idle]
    fn idle() -> ! {
        static mut X: u32 = 0;

        // Safe access to local `static mut` variable
        let _x: &'static mut u32 = X;

        hprintln!("idle").unwrap();

        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }
};
```
```console
$ cargo run --example idle
init
idle
```
### `interrupt` / `exception`

Just like you would do with the `cortex-m-rt` crate you can use the `interrupt` and `exception` attributes within the `app` pseudo-module to declare interrupt and exception handlers. In RTFM, we refer to interrupt and exception handlers as *hardware* tasks.
```rust
//! examples/interrupt.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init() {
        // Pends the UART0 interrupt but its handler won't run until *after*
        // `init` returns because interrupts are disabled
        rtfm::pend(Interrupt::UART0);

        hprintln!("init").unwrap();
    }

    #[idle]
    fn idle() -> ! {
        // interrupts are enabled again; the `UART0` handler runs at this point
        hprintln!("idle").unwrap();

        rtfm::pend(Interrupt::UART0);

        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }

    #[interrupt]
    fn UART0() {
        static mut TIMES: u32 = 0;

        // Safe access to local `static mut` variable
        *TIMES += 1;

        hprintln!(
            "UART0 called {} time{}",
            *TIMES,
            if *TIMES > 1 { "s" } else { "" }
        )
        .unwrap();
    }
};
```
```console
$ cargo run --example interrupt
init
UART0 called 1 time
idle
UART0 called 2 times
```
So far all the RTFM applications we have seen look no different than the applications one can write using only the `cortex-m-rt` crate. In the next section we start introducing features unique to RTFM.
## Resources

One of the limitations of the attributes provided by the `cortex-m-rt` crate is that sharing data (or peripherals) between interrupts, or between an interrupt and the `entry` function, requires a `cortex_m::interrupt::Mutex`, which always requires disabling *all* interrupts to access the data. Disabling all the interrupts is not always required for memory safety but the compiler doesn't have enough information to optimize the access to the shared data.

The `app` attribute has a full view of the application thus it can optimize access to `static` variables. In RTFM we refer to the `static` variables declared inside the `app` pseudo-module as *resources*. To access a resource the context (`init`, `idle`, `interrupt` or `exception`) must first declare the resource in the `resources` argument of its attribute.

In the example below two interrupt handlers access the same resource. No `Mutex` is required in this case because the two handlers run at the same priority and no preemption is possible. The `SHARED` resource can only be accessed by these two handlers.
```rust
//! examples/resource.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    // A resource
    static mut SHARED: u32 = 0;

    #[init]
    fn init() {
        rtfm::pend(Interrupt::UART0);
        rtfm::pend(Interrupt::UART1);
    }

    #[idle]
    fn idle() -> ! {
        debug::exit(debug::EXIT_SUCCESS);

        // error: `SHARED` can't be accessed from this context
        // SHARED += 1;

        loop {}
    }

    // `SHARED` can be accessed from this context
    #[interrupt(resources = [SHARED])]
    fn UART0() {
        *resources.SHARED += 1;

        hprintln!("UART0: SHARED = {}", resources.SHARED).unwrap();
    }

    // `SHARED` can be accessed from this context
    #[interrupt(resources = [SHARED])]
    fn UART1() {
        *resources.SHARED += 1;

        hprintln!("UART1: SHARED = {}", resources.SHARED).unwrap();
    }
};
```
```console
$ cargo run --example resource
UART0: SHARED = 1
UART1: SHARED = 2
```
### Priorities

The priority of each handler can be declared in the `interrupt` and `exception` attributes. It's not possible to set the priority in any other way because the runtime takes ownership of the `NVIC` peripheral; it's also not possible to change the priority of a handler / task at runtime. Thanks to this restriction the framework has knowledge about the *static* priorities of all interrupt and exception handlers.

Interrupts and exceptions can have priorities in the range `1..=(1 << NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device` crate. The `idle` task has a priority of `0`, the lowest priority.
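As a concrete illustration of that range formula, the sketch below computes the number of usable handler priorities on the host. The value `3` for the number of priority bits is a hypothetical example; in a real application the constant comes from the device crate.

```rust
/// Number of logical priority levels available to handlers:
/// `1..=(1 << NVIC_PRIO_BITS)`, with `idle` below all of them at priority 0.
fn priority_levels(nvic_prio_bits: u32) -> u32 {
    1 << nvic_prio_bits
}

fn main() {
    // Hypothetical value; in a real application this comes from the
    // device crate (e.g. a `NVIC_PRIO_BITS` constant).
    let nvic_prio_bits = 3;

    // prints "valid handler priorities: 1..=8"
    println!(
        "valid handler priorities: 1..={}",
        priority_levels(nvic_prio_bits)
    );
}
```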
Resources that are shared between handlers that run at different priorities require critical sections for memory safety. The framework ensures that critical sections are used but only where required: for example, no critical section is required by the highest priority handler that has access to the resource.
The critical section API provided by the RTFM framework (see `Mutex`) is based on *dynamic priorities* rather than on disabling interrupts. The consequence is that these critical sections will prevent *some* handlers, including all the ones that contend for the resource, from *starting* but will let higher priority handlers, that don't contend for the resource, run.
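The idea behind this is priority ceiling analysis: for each resource the framework determines, at compile time, the highest priority among the tasks that contend for it, and a critical section raises the dynamic priority to that ceiling. The sketch below is illustrative host-side Rust with made-up names; the actual analysis happens inside the `app` attribute.

```rust
// Sketch of the compile-time ceiling analysis (illustrative only).
fn ceiling(contending_priorities: &[u8]) -> u8 {
    // the ceiling of a resource is the highest priority among the tasks
    // that access it; a critical section raises the dynamic priority to
    // this value, so only tasks *above* the ceiling can preempt it
    *contending_priorities.iter().max().expect("unused resource")
}

fn main() {
    // a resource accessed by handlers at priorities 1 and 2 has ceiling 2,
    // so a priority 3 handler can still preempt the critical section
    println!("ceiling = {}", ceiling(&[1, 2]));
}
```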
In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for the `SHARED` resource. The lowest priority handler needs to `lock` the `SHARED` resource to access its data, whereas the mid priority handler can directly access its data. The highest priority handler is free to preempt the critical section created by the lowest priority handler.
```rust
//! examples/lock.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    static mut SHARED: u32 = 0;

    #[init]
    fn init() {
        rtfm::pend(Interrupt::GPIOA);
    }

    // when omitted priority is assumed to be `1`
    #[interrupt(resources = [SHARED])]
    fn GPIOA() {
        hprintln!("A").unwrap();

        // the lower priority task requires a critical section to access the data
        resources.SHARED.lock(|shared| {
            // data can only be modified within this critical section (closure)
            *shared += 1;

            // GPIOB will *not* run right now due to the critical section
            rtfm::pend(Interrupt::GPIOB);

            hprintln!("B - SHARED = {}", *shared).unwrap();

            // GPIOC does not contend for `SHARED` so it's allowed to run now
            rtfm::pend(Interrupt::GPIOC);
        });

        // critical section is over: GPIOB can now start

        hprintln!("E").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[interrupt(priority = 2, resources = [SHARED])]
    fn GPIOB() {
        // the higher priority task does *not* need a critical section
        *resources.SHARED += 1;

        hprintln!("D - SHARED = {}", *resources.SHARED).unwrap();
    }

    #[interrupt(priority = 3)]
    fn GPIOC() {
        hprintln!("C").unwrap();
    }
};
```
```console
$ cargo run --example lock
A
B - SHARED = 1
C
D - SHARED = 2
E
```
One more note about priorities: choosing a priority higher than what the device supports (that is `1 << NVIC_PRIO_BITS`) will result in a compile error. Due to limitations in the language the error is currently far from helpful: it will say something along the lines of "evaluation of constant value failed" and the span of the error will *not* point to the problematic interrupt value -- we are sorry about this!
### Late resources

Unlike normal `static` variables, which need to be assigned an initial value when declared, resources can be initialized at runtime. We refer to these runtime initialized resources as *late resources*. Late resources are useful for *moving* (as in transferring ownership) peripherals initialized in `init` into interrupt and exception handlers.

Late resources are declared like normal resources but are given an initial value of `()` (the unit value). `init` must return the initial values of all late resources packed in a `struct` of type `init::LateResources`.
The example below uses late resources to establish a lockless, one-way channel between the `UART0` interrupt handler and the `idle` function. A single producer single consumer `Queue` is used as the channel. The queue is split into consumer and producer end points in `init` and then each end point is stored in a different resource; `UART0` owns the producer resource and `idle` owns the consumer resource.
```rust
//! examples/late.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use heapless::{
    consts::*,
    spsc::{Consumer, Producer, Queue},
};
use lm3s6965::Interrupt;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    // Late resources
    static mut P: Producer<'static, u32, U4> = ();
    static mut C: Consumer<'static, u32, U4> = ();

    #[init]
    fn init() -> init::LateResources {
        // NOTE: we use `Option` here to work around the lack of
        // a stable `const` constructor
        static mut Q: Option<Queue<u32, U4>> = None;

        *Q = Some(Queue::new());
        let (p, c) = Q.as_mut().unwrap().split();

        // Initialization of late resources
        init::LateResources { P: p, C: c }
    }

    #[idle(resources = [C])]
    fn idle() -> ! {
        loop {
            if let Some(byte) = resources.C.dequeue() {
                hprintln!("received message: {}", byte).unwrap();

                debug::exit(debug::EXIT_SUCCESS);
            } else {
                rtfm::pend(Interrupt::UART0);
            }
        }
    }

    #[interrupt(resources = [P])]
    fn UART0() {
        resources.P.enqueue(42).unwrap();
    }
};
```
```console
$ cargo run --example late
received message: 42
```
### `static` resources

`static` variables can also be used as resources. Tasks can only get `&` (shared) references to these resources but locks are never required to access their data. You can think of `static` resources as plain `static` variables that can be initialized at runtime and have better scoping rules: you can control which tasks can access the variable, instead of the variable being visible to all the functions in the scope it was declared in.
In the example below a key is loaded (or created) at runtime and then used from two tasks that run at different priorities.
```rust
//! examples/static.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    static KEY: u32 = ();

    #[init]
    fn init() -> init::LateResources {
        rtfm::pend(Interrupt::UART0);
        rtfm::pend(Interrupt::UART1);

        init::LateResources { KEY: 0xdeadbeef }
    }

    #[interrupt(resources = [KEY])]
    fn UART0() {
        hprintln!("UART0(KEY = {:#x})", resources.KEY).unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[interrupt(priority = 2, resources = [KEY])]
    fn UART1() {
        hprintln!("UART1(KEY = {:#x})", resources.KEY).unwrap();
    }
};
```
```console
$ cargo run --example static
UART1(KEY = 0xdeadbeef)
UART0(KEY = 0xdeadbeef)
```
## Software tasks

RTFM treats interrupt and exception handlers as *hardware* tasks. Hardware tasks are invoked by the hardware in response to events, like pressing a button. RTFM also supports *software* tasks which can be spawned by the software from any execution context.

Software tasks can also be assigned priorities and are dispatched from interrupt handlers. RTFM requires that free interrupts are declared in an `extern` block when using software tasks; these free interrupts will be used to dispatch the software tasks. An advantage of software tasks over hardware tasks is that many tasks can be mapped to a single interrupt handler.

Software tasks are declared by applying the `task` attribute to functions. To be able to spawn a software task the name of the task must appear in the `spawn` argument of the context attribute (`init`, `idle`, `interrupt`, etc.).
The example below showcases three software tasks that run at two different priorities. The three tasks map to two interrupt handlers.
```rust
//! examples/task.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [foo])]
    fn init() {
        spawn.foo().unwrap();
    }

    #[task(spawn = [bar, baz])]
    fn foo() {
        hprintln!("foo").unwrap();

        // spawns `bar` onto the task scheduler
        // `foo` and `bar` have the same priority so `bar` will not run until
        // after `foo` terminates
        spawn.bar().unwrap();

        // spawns `baz` onto the task scheduler
        // `baz` has higher priority than `foo` so it immediately preempts `foo`
        spawn.baz().unwrap();
    }

    #[task]
    fn bar() {
        hprintln!("bar").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(priority = 2)]
    fn baz() {
        hprintln!("baz").unwrap();
    }

    // Interrupt handlers used to dispatch software tasks
    extern "C" {
        fn UART0();
        fn UART1();
    }
};
```
```console
$ cargo run --example task
foo
baz
bar
```
### Message passing
The other advantage of software tasks is that messages can be passed to these tasks when spawning them. The type of the message payload must be specified in the signature of the task handler.
The example below showcases three tasks, two of which expect a message.
```rust
//! examples/message.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [foo])]
    fn init() {
        spawn.foo(/* no message */).unwrap();
    }

    #[task(spawn = [bar])]
    fn foo() {
        static mut COUNT: u32 = 0;

        hprintln!("foo").unwrap();

        spawn.bar(*COUNT).unwrap();
        *COUNT += 1;
    }

    #[task(spawn = [baz])]
    fn bar(x: u32) {
        hprintln!("bar({})", x).unwrap();

        spawn.baz(x + 1, x + 2).unwrap();
    }

    #[task(spawn = [foo])]
    fn baz(x: u32, y: u32) {
        hprintln!("baz({}, {})", x, y).unwrap();

        if x + y > 4 {
            debug::exit(debug::EXIT_SUCCESS);
        }

        spawn.foo().unwrap();
    }

    extern "C" {
        fn UART0();
    }
};
```
```console
$ cargo run --example message
foo
bar(0)
baz(1, 2)
foo
bar(1)
baz(2, 3)
```
### Capacity

Task dispatchers do *not* use any dynamic memory allocation. The memory required to store messages is statically reserved. The framework will reserve enough space for every context to be able to spawn each task at most once. This is a sensible default but the "inbox" capacity of each task can be controlled using the `capacity` argument of the `task` attribute.
The example below sets the capacity of the software task `foo` to 4. If the capacity is not specified then the second `spawn.foo` call in `UART0` would fail.
```rust
//! examples/capacity.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init() {
        rtfm::pend(Interrupt::UART0);
    }

    #[interrupt(spawn = [foo, bar])]
    fn UART0() {
        spawn.foo(0).unwrap();
        spawn.foo(1).unwrap();
        spawn.foo(2).unwrap();
        spawn.foo(3).unwrap();

        spawn.bar().unwrap();
    }

    #[task(capacity = 4)]
    fn foo(x: u32) {
        hprintln!("foo({})", x).unwrap();
    }

    #[task]
    fn bar() {
        hprintln!("bar").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    // Interrupt handlers used to dispatch software tasks
    extern "C" {
        fn UART1();
    }
};
```
```console
$ cargo run --example capacity
foo(0)
foo(1)
foo(2)
foo(3)
bar
```
## Timer queue

When the `timer-queue` feature is enabled the RTFM framework includes a *global timer queue* that applications can use to *schedule* software tasks to run at some time in the future.

> NOTE: The `timer-queue` feature can't be enabled when the target is `thumbv6m-none-eabi` because there's no timer queue support for ARMv6-M. This may change in the future.

> NOTE: When the `timer-queue` feature is enabled you will *not* be able to use the `SysTick` exception as a hardware task because the runtime uses it to implement the global timer queue.
To be able to schedule a software task the name of the task must appear in the `schedule` argument of the context attribute. When scheduling a task the `Instant` at which the task should be executed must be passed as the first argument of the `schedule` invocation.

The RTFM runtime includes a *monotonic*, *non-decreasing*, 32-bit timer which can be queried using the `Instant::now` constructor. A `Duration` can be added to `Instant::now()` to obtain an `Instant` into the future. The monotonic timer is disabled while `init` runs so `Instant::now()` always returns the value `Instant(0 /* clock cycles */)`; the timer is enabled right before the interrupts are re-enabled and `idle` is executed.
The example below schedules two tasks from `init`: `foo` and `bar`. `foo` is scheduled to run 8 million clock cycles in the future. Next, `bar` is scheduled to run 4 million clock cycles in the future. `bar` runs before `foo` since it was scheduled to run first.

> IMPORTANT: The examples that use the `schedule` API or the `Instant` abstraction will not properly work on QEMU because the Cortex-M cycle counter functionality has not been implemented in `qemu-system-arm`.
```rust
//! examples/schedule.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::hprintln;
use rtfm::{app, Instant};

// NOTE: does NOT work on QEMU!
#[app(device = lm3s6965)]
const APP: () = {
    #[init(schedule = [foo, bar])]
    fn init() {
        let now = Instant::now();

        hprintln!("init @ {:?}", now).unwrap();

        // Schedule `foo` to run 8e6 cycles (clock cycles) in the future
        schedule.foo(now + 8_000_000.cycles()).unwrap();

        // Schedule `bar` to run 4e6 cycles in the future
        schedule.bar(now + 4_000_000.cycles()).unwrap();
    }

    #[task]
    fn foo() {
        hprintln!("foo @ {:?}", Instant::now()).unwrap();
    }

    #[task]
    fn bar() {
        hprintln!("bar @ {:?}", Instant::now()).unwrap();
    }

    extern "C" {
        fn UART0();
    }
};
```
Running the program on real hardware produces the following output in the console:
```text
init @ Instant(0)
bar @ Instant(4000236)
foo @ Instant(8000173)
```
### Periodic tasks

Software tasks have access to the `Instant` at which they were scheduled to run through the `scheduled` variable. This information and the `schedule` API can be used to implement *periodic* tasks as shown in the example below.
```rust
//! examples/periodic.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::hprintln;
use rtfm::{app, Instant};

const PERIOD: u32 = 8_000_000;

// NOTE: does NOT work on QEMU!
#[app(device = lm3s6965)]
const APP: () = {
    #[init(schedule = [foo])]
    fn init() {
        schedule.foo(Instant::now() + PERIOD.cycles()).unwrap();
    }

    #[task(schedule = [foo])]
    fn foo() {
        let now = Instant::now();
        hprintln!("foo(scheduled = {:?}, now = {:?})", scheduled, now).unwrap();

        schedule.foo(scheduled + PERIOD.cycles()).unwrap();
    }

    extern "C" {
        fn UART0();
    }
};
```
This is the output produced by the example. Note that there is zero drift / jitter even though `schedule.foo` was invoked at the *end* of `foo`. Using `Instant::now` instead of `scheduled` would have resulted in drift / jitter.

```text
foo(scheduled = Instant(8000000), now = Instant(8000196))
foo(scheduled = Instant(16000000), now = Instant(16000196))
foo(scheduled = Instant(24000000), now = Instant(24000196))
```
### Baseline

For the tasks scheduled from `init` we have exact information about their `scheduled` time. For hardware tasks there's no `scheduled` time because these tasks are asynchronous in nature. For hardware tasks the runtime provides a `start` time, which indicates the time at which the task handler started executing.

Note that `start` is *not* equal to the arrival time of the event that fired the task. Depending on the priority of the task and the load of the system the `start` time could be very far off from the event arrival time.
What do you think will be the value of `scheduled` for software tasks that are *spawned* instead of scheduled? The answer is that spawned tasks inherit the *baseline* time of the context that spawned them. The baseline of hardware tasks is `start`, the baseline of software tasks is `scheduled` and the baseline of `init` is `start = Instant(0)`. `idle` doesn't really have a baseline but tasks spawned from it will use `Instant::now()` as their baseline time.
The example below showcases the different meanings of the baseline.
```rust
//! examples/baseline.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

// NOTE: does NOT properly work on QEMU
#[app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [foo])]
    fn init() {
        hprintln!("init(baseline = {:?})", start).unwrap();

        // `foo` inherits the baseline of `init`: `Instant(0)`
        spawn.foo().unwrap();
    }

    #[task(schedule = [foo])]
    fn foo() {
        static mut ONCE: bool = true;

        hprintln!("foo(baseline = {:?})", scheduled).unwrap();

        if *ONCE {
            *ONCE = false;

            rtfm::pend(Interrupt::UART0);
        } else {
            debug::exit(debug::EXIT_SUCCESS);
        }
    }

    #[interrupt(spawn = [foo])]
    fn UART0() {
        hprintln!("UART0(baseline = {:?})", start).unwrap();

        // `foo` inherits the baseline of `UART0`: its `start` time
        spawn.foo().unwrap();
    }

    extern "C" {
        fn UART1();
    }
};
```
Running the program on real hardware produces the following output in the console:
```text
init(baseline = Instant(0))
foo(baseline = Instant(0))
UART0(baseline = Instant(904))
foo(baseline = Instant(904))
```
### Caveats

The `Instant` and `Duration` APIs are meant to be used exclusively with the `schedule` API to schedule tasks with a precision of a single core clock cycle. These APIs are *not*, for example, meant to be used to time external events like a user pressing a button.

The timer queue feature internally uses the system timer, `SysTick`. This timer is a 24-bit counter and it's clocked at the core clock frequency, so tasks scheduled more than `(1 << 24).cycles()` in the future will incur additional overhead, proportional to the size of their `Duration`, compared to tasks scheduled with `Duration`s below that threshold.
If you need periodic tasks with periods greater than `(1 << 24).cycles()` you likely don't need a timer with a resolution of one core clock cycle, so we advise you to instead use a device timer with an appropriate prescaler.

We can't stop you from using `Instant` to measure external events so please be aware that `Instant.sub` / `Instant.elapsed` will never return a `Duration` equal to or greater than `(1 << 31).cycles()`, so you won't be able to measure events that last more than `1 << 31` core clock cycles (e.g. seconds).

Adding a `Duration` equal to or greater than `(1 << 31).cycles()` to an `Instant` will effectively overflow it so it's not possible to schedule a task more than `(1 << 31).cycles()` in the future. There are some debug assertions in place to catch this kind of user error but it's not possible to prevent it with 100% success rate because one can always write `(instant + duration) + duration` and bypass the runtime checks.
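A rough host-side sketch of why the limit sits at `1 << 31`: differences between two 32-bit timestamps are only meaningful when they fit in a *signed* 32-bit value. This mirrors the usual wrapping-timer technique and is not lifted from the RTFM source.

```rust
// Wrapping-timer sketch: the difference between two 32-bit timestamps is
// only meaningful when the instants are less than 2^31 cycles apart.
fn signed_diff(earlier: u32, later: u32) -> i32 {
    later.wrapping_sub(earlier) as i32
}

fn main() {
    // within 2^31 cycles the difference is recovered exactly,
    // even across a counter wrap-around
    assert_eq!(signed_diff(0, 2_000_000_000), 2_000_000_000);
    assert_eq!(signed_diff(u32::max_value(), 4), 5);

    // past 2^31 cycles the difference becomes negative: a task scheduled
    // that far ahead would appear to be in the past
    assert!(signed_diff(0, 3_000_000_000) < 0);
}
```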
## Singletons

The `app` attribute is aware of the `owned-singleton` crate and its `Singleton` attribute. When this attribute is applied to one of the resources the runtime will perform the `unsafe` initialization of the singleton for you, ensuring that only a single instance of the singleton is ever created.

Note that when using the `Singleton` attribute you'll need to have the `owned_singleton` crate in your dependencies.

Below is an example that uses the `Singleton` attribute on a chunk of memory and then uses the singleton instance as a fixed-size memory pool using one of the `alloc-singleton` abstractions.
```rust
//! examples/singleton.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use alloc_singleton::stable::pool::{Box, Pool};
use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[Singleton(Send)]
    static mut M: [u32; 2] = [0; 2];

    static mut P: Pool<M> = ();

    #[init(resources = [M])]
    fn init() -> init::LateResources {
        rtfm::pend(Interrupt::I2C0);

        init::LateResources {
            P: Pool::new(resources.M),
        }
    }

    #[interrupt(
        priority = 2,
        resources = [P],
        spawn = [foo, bar],
    )]
    fn I2C0() {
        spawn.foo(resources.P.alloc(1).unwrap()).unwrap();
        spawn.bar(resources.P.alloc(2).unwrap()).unwrap();
    }

    #[task(resources = [P])]
    fn foo(x: Box<M>) {
        hprintln!("foo({})", x).unwrap();

        resources.P.lock(|p| p.dealloc(x));

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(priority = 2, resources = [P])]
    fn bar(x: Box<M>) {
        hprintln!("bar({})", x).unwrap();

        resources.P.dealloc(x);
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};
```
```console
$ cargo run --example singleton
bar(2)
foo(1)
```
## Types, Send and Sync

The `app` attribute injects a context, a collection of variables, into every function. All these variables have predictable, non-anonymous types so you can write plain functions that take them as arguments.

The API reference specifies how these types are generated from the input. You can also generate documentation for your binary crate (`cargo doc --bin <name>`); in the documentation you'll find `Context` structs (e.g. `init::Context` and `idle::Context`) whose fields represent the variables injected into each function.
The example below shows the different types generated by the app
attribute.
//! examples/types.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::debug;
use rtfm::{app, Exclusive, Instant};

#[app(device = lm3s6965)]
const APP: () = {
    static mut SHARED: u32 = 0;

    #[init(schedule = [foo], spawn = [foo])]
    fn init() {
        let _: Instant = start;
        let _: rtfm::Peripherals = core;
        let _: lm3s6965::Peripherals = device;
        let _: init::Schedule = schedule;
        let _: init::Spawn = spawn;

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[exception(schedule = [foo], spawn = [foo])]
    fn SVCall() {
        let _: Instant = start;
        let _: SVCall::Schedule = schedule;
        let _: SVCall::Spawn = spawn;
    }

    #[interrupt(resources = [SHARED], schedule = [foo], spawn = [foo])]
    fn UART0() {
        let _: Instant = start;
        let _: resources::SHARED = resources.SHARED;
        let _: UART0::Schedule = schedule;
        let _: UART0::Spawn = spawn;
    }

    #[task(priority = 2, resources = [SHARED], schedule = [foo], spawn = [foo])]
    fn foo() {
        let _: Instant = scheduled;
        let _: Exclusive<u32> = resources.SHARED;
        let _: foo::Resources = resources;
        let _: foo::Schedule = schedule;
        let _: foo::Spawn = spawn;
    }

    extern "C" {
        fn UART1();
    }
};
Send
Send
is a marker trait for "types that can be transferred across thread
boundaries", according to its definition in core
. In the context of RTFM the
Send
trait is only required where it's possible to transfer a value between
tasks that run at different priorities. This occurs in a few places: in message
passing, in shared static mut
resources and in the initialization of late
resources.
The app
attribute will enforce that Send
is implemented where required so
you don't need to worry much about it. It's more important to know where you do
not need the Send
trait: on types that are transferred between tasks that
run at the same priority. This occurs in two places: in message passing and in
shared static mut
resources.
The example below shows where a type that doesn't implement Send
can be used.
//! `examples/not-send.rs`

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_halt;

use core::marker::PhantomData;

use cortex_m_semihosting::debug;
use rtfm::app;

pub struct NotSend {
    _0: PhantomData<*const ()>,
}

#[app(device = lm3s6965)]
const APP: () = {
    static mut SHARED: Option<NotSend> = None;

    #[init(spawn = [baz, quux])]
    fn init() {
        spawn.baz().unwrap();
        spawn.quux().unwrap();
    }

    #[task(spawn = [bar])]
    fn foo() {
        // scenario 1: message passed to task that runs at the same priority
        spawn.bar(NotSend { _0: PhantomData }).ok();
    }

    #[task]
    fn bar(_x: NotSend) {
        // scenario 1
    }

    #[task(priority = 2, resources = [SHARED])]
    fn baz() {
        // scenario 2: resource shared between tasks that run at the same priority
        *resources.SHARED = Some(NotSend { _0: PhantomData });
    }

    #[task(priority = 2, resources = [SHARED])]
    fn quux() {
        // scenario 2
        let _not_send = resources.SHARED.take().unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};
It's important to note that late initialization of resources is effectively a
send operation where the initial value is sent from idle
, which has the lowest
priority of 0
, to a task that will run with a priority greater than or equal
to 1
. Thus all late resources need to implement the Send
trait.
Sharing a resource with init
can be used to implement late initialization, as shown in the
example below. For that reason, resources shared with init
must also implement
the Send
trait.
//! `examples/shared-with-init.rs`

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_halt;

use cortex_m_semihosting::debug;
use lm3s6965::Interrupt;
use rtfm::app;

pub struct MustBeSend;

#[app(device = lm3s6965)]
const APP: () = {
    static mut SHARED: Option<MustBeSend> = None;

    #[init(resources = [SHARED])]
    fn init() {
        // this `message` will be sent to task `UART0`
        let message = MustBeSend;
        *resources.SHARED = Some(message);

        rtfm::pend(Interrupt::UART0);
    }

    #[interrupt(resources = [SHARED])]
    fn UART0() {
        if let Some(message) = resources.SHARED.take() {
            // `message` has been received
            drop(message);

            debug::exit(debug::EXIT_SUCCESS);
        }
    }
};
Sync
Similarly, Sync
is a marker trait for "types for which it is safe to share
references between threads", according to its definition in core
. In the
context of RTFM the Sync
trait is only required where it's possible for two,
or more, tasks that run at different priorities to hold a shared reference to a
resource. This only occurs with shared static
resources.
The app
attribute will enforce that Sync
is implemented where required but
it's important to know where the Sync
bound is not required: in static
resources shared between tasks that run at the same priority.
The example below shows where a type that doesn't implement Sync
can be used.
//! `examples/not-sync.rs`

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_halt;

use core::marker::PhantomData;

use cortex_m_semihosting::debug;
use rtfm::app;

pub struct NotSync {
    _0: PhantomData<*const ()>,
}

#[app(device = lm3s6965)]
const APP: () = {
    static SHARED: NotSync = NotSync { _0: PhantomData };

    #[init]
    fn init() {
        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(resources = [SHARED])]
    fn foo() {
        let _: &NotSync = resources.SHARED;
    }

    #[task(resources = [SHARED])]
    fn bar() {
        let _: &NotSync = resources.SHARED;
    }

    extern "C" {
        fn UART0();
    }
};
Starting a new project
Now that you have learned about the main features of the RTFM framework you can try it out on your hardware by following these instructions.
- Instantiate the
cortex-m-quickstart
template.
$ # for example using `cargo-generate`
$ cargo generate \
--git https://github.com/rust-embedded/cortex-m-quickstart \
--name app
$ # follow the rest of the instructions
- Add a peripheral access crate (PAC) that was generated using
svd2rust
v0.14.x, or a board support crate that depends on one such PAC, as a dependency. Make sure that the rt
feature of the crate is enabled.
In this example, I'll use the lm3s6965
device crate. This device crate
doesn't have an rt
Cargo feature; that feature is always enabled.
This device crate provides a linker script with the memory layout of the target
device so memory.x
and build.rs
need to be removed.
$ cargo add lm3s6965 --vers 0.1.3
$ rm memory.x build.rs
- Add the
cortex-m-rtfm
crate as a dependency and, if you need it, enable thetimer-queue
feature.
$ cargo add cortex-m-rtfm
- Write your RTFM application.
Here I'll use the init
example from the cortex-m-rtfm
crate.
$ curl \
-L https://github.com/japaric/cortex-m-rtfm/raw/v0.4.0/examples/init.rs \
> src/main.rs
That example depends on the panic-semihosting
crate:
$ cargo add panic-semihosting
- Build it, flash it and run it.
$ # NOTE: I have uncommented the `runner` option in `.cargo/config`
$ cargo run
init
Tips & tricks
Generics
Resources shared between two or more tasks implement the Mutex
trait in all
contexts, even on those where a critical section is not required to access the
data. This lets you easily write generic code that operates on resources and can
be called from different tasks. Here's one such example:
//! examples/generics.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::{app, Mutex};

#[app(device = lm3s6965)]
const APP: () = {
    static mut SHARED: u32 = 0;

    #[init]
    fn init() {
        rtfm::pend(Interrupt::UART0);
        rtfm::pend(Interrupt::UART1);
    }

    #[interrupt(resources = [SHARED])]
    fn UART0() {
        static mut STATE: u32 = 0;

        hprintln!("UART0(STATE = {})", *STATE).unwrap();

        advance(STATE, resources.SHARED);

        rtfm::pend(Interrupt::UART1);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[interrupt(priority = 2, resources = [SHARED])]
    fn UART1() {
        static mut STATE: u32 = 0;

        hprintln!("UART1(STATE = {})", *STATE).unwrap();

        // just to show that `SHARED` can be accessed directly and ..
        *resources.SHARED += 0;
        // .. also through a (no-op) `lock`
        resources.SHARED.lock(|shared| *shared += 0);

        advance(STATE, resources.SHARED);
    }
};

fn advance(state: &mut u32, mut shared: impl Mutex<T = u32>) {
    *state += 1;

    let (old, new) = shared.lock(|shared| {
        let old = *shared;
        *shared += *state;
        (old, *shared)
    });

    hprintln!("SHARED: {} -> {}", old, new).unwrap();
}
$ cargo run --example generics
UART1(STATE = 0)
SHARED: 0 -> 1
UART0(STATE = 0)
SHARED: 1 -> 2
UART1(STATE = 1)
SHARED: 2 -> 4
This also lets you change the static priorities of tasks without having to
rewrite code. If you consistently use lock
s to access the data behind shared
resources then your code will continue to compile when you change the priority
of tasks.
Conditional compilation
You can use conditional compilation (#[cfg]
) on resources (static [mut]
items) and tasks (fn
items). The effect of using #[cfg]
attributes is that
the resource / task will not be injected into the prelude of tasks that use
them (see resources
, spawn
and schedule
) if the condition doesn't hold.
The example below logs a message whenever the foo
task is spawned, but only if
the program has been compiled using the dev
profile.
//! examples/cfg.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

#[cfg(debug_assertions)]
use cortex_m_semihosting::hprintln;
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[cfg(debug_assertions)] // <- `true` when using the `dev` profile
    static mut COUNT: u32 = 0;

    #[init]
    fn init() {
        // ..
    }

    #[task(priority = 3, resources = [COUNT], spawn = [log])]
    fn foo() {
        #[cfg(debug_assertions)]
        {
            *resources.COUNT += 1;

            spawn.log(*resources.COUNT).ok();
        }

        // this wouldn't compile in `release` mode
        // *resources.COUNT += 1;

        // ..
    }

    #[cfg(debug_assertions)]
    #[task]
    fn log(n: u32) {
        hprintln!(
            "foo has been called {} time{}",
            n,
            if n == 1 { "" } else { "s" }
        )
        .ok();
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};
Running tasks from RAM
The main goal of moving the specification of RTFM applications to attributes in
RTFM v0.4.x was to allow inter-operation with other attributes. For example, the
link_section
attribute can be applied to tasks to place them in RAM; this can
improve performance in some cases.
IMPORTANT: In general, the link_section, export_name and no_mangle
attributes are very powerful but also easy to misuse. Incorrectly using any of
these attributes can cause undefined behavior; you should always prefer to use
safe, higher level attributes around them, like cortex-m-rt's interrupt and
exception attributes.

In the particular case of RAM functions there's no safe abstraction for it in
cortex-m-rt v0.6.5 but there's an RFC for adding a ramfunc attribute in a
future release.
The example below shows how to place the higher priority task, bar
, in RAM.
//! examples/ramfunc.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use rtfm::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [bar])]
    fn init() {
        spawn.bar().unwrap();
    }

    #[inline(never)]
    #[task]
    fn foo() {
        hprintln!("foo").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    // run this task from RAM
    #[inline(never)]
    #[link_section = ".data.bar"]
    #[task(priority = 2, spawn = [foo])]
    fn bar() {
        spawn.foo().unwrap();
    }

    extern "C" {
        fn UART0();

        // run the task dispatcher from RAM
        #[link_section = ".data.UART1"]
        fn UART1();
    }
};
Running this program produces the expected output.
$ cargo run --example ramfunc
foo
One can look at the output of cargo-nm
to confirm that bar
ended up in RAM
(0x2000_0000
), whereas foo
ended up in Flash (0x0000_0000
).
$ cargo nm --example ramfunc --release | grep ' foo::'
20000100 B foo::FREE_QUEUE::ujkptet2nfdw5t20
200000dc B foo::INPUTS::thvubs85b91dg365
000002c6 T foo::sidaht420cg1mcm8
$ cargo nm --example ramfunc --release | grep ' bar::'
20000100 B bar::FREE_QUEUE::lk14244m263eivix
200000dc B bar::INPUTS::mi89534s44r1mnj1
20000000 T bar::ns9009yhw2dc2y25
binds
NOTE: Requires RTFM ~0.4.2
You can give hardware tasks more task-like names using the binds
argument: you
name the function as you wish and specify the name of the interrupt / exception
in the binds
argument. Types like Spawn
will be placed in a module named
after the function, not the interrupt / exception. Example below:
//! examples/binds.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::app;

// `examples/interrupt.rs` rewritten to use `binds`
#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init() {
        rtfm::pend(Interrupt::UART0);

        hprintln!("init").unwrap();
    }

    #[idle]
    fn idle() -> ! {
        hprintln!("idle").unwrap();

        rtfm::pend(Interrupt::UART0);

        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }

    #[interrupt(binds = UART0)]
    fn foo() {
        static mut TIMES: u32 = 0;

        *TIMES += 1;

        hprintln!(
            "foo called {} time{}",
            *TIMES,
            if *TIMES > 1 { "s" } else { "" }
        )
        .unwrap();
    }
};
$ cargo run --example binds
init
foo called 1 time
idle
foo called 2 times
Indirection for faster message passing
Message passing always involves copying the payload from the sender into a
static variable and then from the static variable into the receiver. Thus
sending a large buffer, like a [u8; 128]
, as a message involves two expensive
memcpy
s. To minimize the message passing overhead one can use indirection:
instead of sending the buffer by value, one can send an owning pointer into the
buffer.
One can use a global allocator to achieve indirection (alloc::Box
,
alloc::Rc
, etc.), which requires using the nightly channel as of Rust v1.34.0,
or one can use a statically allocated memory pool like heapless::Pool
.
Here's an example where heapless::Pool
is used to "box" buffers of 128 bytes.
//! examples/pool.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use heapless::{
    pool,
    pool::singleton::{Box, Pool},
};
use lm3s6965::Interrupt;
use rtfm::app;

// Declare a pool of 128-byte memory blocks
pool!(P: [u8; 128]);

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init() {
        static mut MEMORY: [u8; 512] = [0; 512];

        // Increase the capacity of the memory pool by ~4
        P::grow(MEMORY);

        rtfm::pend(Interrupt::I2C0);
    }

    #[interrupt(priority = 2, spawn = [foo, bar])]
    fn I2C0() {
        // claim a memory block, leave it uninitialized and ..
        let x = P::alloc().unwrap().freeze();

        // .. send it to the `foo` task
        spawn.foo(x).ok().unwrap();

        // send another block to the task `bar`
        spawn.bar(P::alloc().unwrap().freeze()).ok().unwrap();
    }

    #[task]
    fn foo(x: Box<P>) {
        hprintln!("foo({:?})", x.as_ptr()).unwrap();

        // explicitly return the block to the pool
        drop(x);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(priority = 2)]
    fn bar(x: Box<P>) {
        hprintln!("bar({:?})", x.as_ptr()).unwrap();

        // this is done automatically so we can omit the call to `drop`
        // drop(x);
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};
$ cargo run --example pool
bar(0x2000008c)
foo(0x20000110)
Under the hood
This section describes the internals of the RTFM framework at a high level.
Low level details like the parsing and code generation done by the procedural
macro (#[app]
) will not be explained here. The focus will be the analysis of
the user specification and the data structures used by the runtime.
Ceiling analysis
TODO
Task dispatcher
TODO
Timer queue
TODO