# Resource usage
The RTIC framework manages shared and task-local resources, allowing persistent data storage and safe access without the use of `unsafe` code.
RTIC resources are visible only to functions declared within the `#[app]` module, and the framework gives the user complete control (on a per-task basis) over resource accessibility.
Declaration of system-wide resources is done by annotating two structs within the `#[app]` module with the attributes `#[local]` and `#[shared]`. Each field in these structures corresponds to a different resource (identified by field name). The difference between these two kinds of resources is covered below.
Each task must declare the resources it intends to access in its corresponding metadata attribute using the `local` and `shared` arguments. Each argument takes a list of resource identifiers. The listed resources are made available to the context under the `local` and `shared` fields of the `Context` structure.
The `init` task returns the initial values for the system-wide (`#[shared]` and `#[local]`) resources, and the set of initialized timers used by the application. The monotonic timers will be further discussed in Monotonic & spawn_{at/after}.
## `#[local]` resources
`#[local]` resources are locally accessible to a specific task, meaning that only that task can access the resource, and it does so without locks or critical sections. This allows resources, commonly drivers or large objects, to be initialized in `#[init]` and then passed to a specific task.

Thus, a task `#[local]` resource can only be accessed by one single task. Attempting to assign the same `#[local]` resource to more than one task is a compile-time error.
Types of `#[local]` resources must implement the `Send` trait, as they are sent from `init` to a target task, crossing a thread boundary.
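The same constraint shows up outside RTIC whenever a value moves between execution contexts. As a host-side sketch (plain `std` Rust, not RTIC code; the `Driver` type is a hypothetical stand-in for a driver or large object), moving a resource into another thread compiles only when its type is `Send`:

```rust
// Sketch (plain Rust, outside RTIC): moving a value into another
// execution context requires its type to be `Send` -- the same bound
// RTIC puts on `#[local]` resources handed from `init` to a task.
use std::thread;

// Hypothetical stand-in for a driver or large object initialized up front.
struct Driver {
    count: u32,
}

fn main() {
    let mut driver = Driver { count: 0 }; // "init" sets up the resource

    // `Driver` is `Send` (all its fields are), so it can be moved into
    // the spawned thread, like a `#[local]` resource moved into a task.
    let handle = thread::spawn(move || {
        driver.count += 1;
        driver.count
    });

    println!("count = {}", handle.join().unwrap());

    // A non-`Send` type such as `std::rc::Rc<u32>` would fail to
    // compile here, since it cannot be sent between threads.
}
```
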
The example application shown below contains two tasks where each task has access to its own `#[local]` resource; the `idle` task has its own `#[local]` as well.
```rust
//! examples/locals.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0, UART1])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {
        /// Local foo
        local_to_foo: i64,
        /// Local bar
        local_to_bar: i64,
        /// Local idle
        local_to_idle: i64,
    }

    // `#[init]` cannot access locals from the `#[local]` struct as they are initialized here.
    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();
        bar::spawn().unwrap();

        (
            Shared {},
            // initial values for the `#[local]` resources
            Local {
                local_to_foo: 0,
                local_to_bar: 0,
                local_to_idle: 0,
            },
            init::Monotonics(),
        )
    }

    // `local_to_idle` can only be accessed from this context
    #[idle(local = [local_to_idle])]
    fn idle(cx: idle::Context) -> ! {
        let local_to_idle = cx.local.local_to_idle;
        *local_to_idle += 1;

        hprintln!("idle: local_to_idle = {}", local_to_idle);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        // error: no `local_to_foo` field in `idle::LocalResources`
        // cx.local.local_to_foo += 1;

        // error: no `local_to_bar` field in `idle::LocalResources`
        // cx.local.local_to_bar += 1;

        loop {
            cortex_m::asm::nop();
        }
    }

    // `local_to_foo` can only be accessed from this context
    #[task(local = [local_to_foo])]
    fn foo(cx: foo::Context) {
        let local_to_foo = cx.local.local_to_foo;
        *local_to_foo += 1;

        // error: no `local_to_bar` field in `foo::LocalResources`
        // cx.local.local_to_bar += 1;

        hprintln!("foo: local_to_foo = {}", local_to_foo);
    }

    // `local_to_bar` can only be accessed from this context
    #[task(local = [local_to_bar])]
    fn bar(cx: bar::Context) {
        let local_to_bar = cx.local.local_to_bar;
        *local_to_bar += 1;

        // error: no `local_to_foo` field in `bar::LocalResources`
        // cx.local.local_to_foo += 1;

        hprintln!("bar: local_to_bar = {}", local_to_bar);
    }
}
```
Running the example:

```console
$ cargo run --target thumbv7m-none-eabi --example locals
foo: local_to_foo = 1
bar: local_to_bar = 1
idle: local_to_idle = 1
```
Local resources in `#[init]` and `#[idle]` have `'static` lifetimes. This is safe since both tasks are not re-entrant.
## Task local initialized resources
Local resources can also be specified directly in the resource claim like so: `#[task(local = [my_var: TYPE = INITIAL_VALUE, ...])]`; this allows for creating locals which do not need to be initialized in `#[init]`.
Types of `#[task(local = [..])]` resources need to be neither `Send` nor `Sync`, as they never cross a thread boundary.
The example below shows the different uses and lifetimes:
```rust
//! examples/declared_locals.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0])]
mod app {
    use cortex_m_semihosting::debug;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init(local = [a: u32 = 0])]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        // Locals in `#[init]` have 'static lifetime
        let _a: &'static mut u32 = cx.local.a;

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        (Shared {}, Local {}, init::Monotonics())
    }

    #[idle(local = [a: u32 = 0])]
    fn idle(cx: idle::Context) -> ! {
        // Locals in `#[idle]` have 'static lifetime
        let _a: &'static mut u32 = cx.local.a;

        loop {}
    }

    #[task(local = [a: u32 = 0])]
    fn foo(cx: foo::Context) {
        // Locals in `#[task]`s have a local lifetime
        let _a: &mut u32 = cx.local.a;

        // error: explicit lifetime required in the type of `cx`
        // let _a: &'static mut u32 = cx.local.a;
    }
}
```
## `#[shared]` resources and `lock`
Critical sections are required to access `#[shared]` resources in a data-race-free manner. To achieve this, the `shared` field of the passed `Context` implements the `Mutex` trait for each shared resource accessible to the task. This trait has only one method, `lock`, which runs its closure argument in a critical section.
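The shape of this API can be sketched on the host in plain Rust. This is illustrative only, not RTIC's implementation: on hardware the critical section is created by raising the task's dynamic priority (see below), not by doing anything to the data itself.

```rust
// Host-side sketch of a closure-based lock API with the same shape as
// RTIC's `Mutex` trait. Illustrative only -- on hardware RTIC enters the
// critical section by raising the task's dynamic priority.
trait Mutex {
    type T;
    // Runs the closure with exclusive access to the data and returns
    // whatever the closure returns.
    fn lock<R>(&mut self, f: impl FnOnce(&mut Self::T) -> R) -> R;
}

struct Shared {
    data: u32,
}

impl Mutex for Shared {
    type T = u32;
    fn lock<R>(&mut self, f: impl FnOnce(&mut Self::T) -> R) -> R {
        // enter critical section (on hardware: raise dynamic priority)
        let r = f(&mut self.data);
        // leave critical section (on hardware: restore dynamic priority)
        r
    }
}

fn main() {
    let mut shared = Shared { data: 0 };

    // the data can only be touched inside the closure passed to `lock`
    let new_value = shared.lock(|data| {
        *data += 1;
        *data
    });

    println!("shared = {}", new_value);
}
```

Note that `lock` returns the closure's result, which is how the examples below read a shared value out of the critical section.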
The critical section created by the `lock` API is based on dynamic priorities: it temporarily raises the dynamic priority of the context to a ceiling priority that prevents other tasks from preempting the critical section. This synchronization protocol is known as the Immediate Ceiling Priority Protocol (ICPP), and complies with the Stack Resource Policy (SRP) based scheduling of RTIC.
In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for the `shared` resource and must lock it in order to access its data. The highest-priority handler, which does not access the `shared` resource, is free to preempt a critical section created by the lowest-priority handler.
```rust
//! examples/lock.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA, GPIOB, GPIOC])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        shared: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();

        (Shared { shared: 0 }, Local {}, init::Monotonics())
    }

    // when omitted priority is assumed to be `1`
    #[task(shared = [shared])]
    fn foo(mut c: foo::Context) {
        hprintln!("A");

        // the lower priority task requires a critical section to access the data
        c.shared.shared.lock(|shared| {
            // data can only be modified within this critical section (closure)
            *shared += 1;

            // bar will *not* run right now due to the critical section
            bar::spawn().unwrap();

            hprintln!("B - shared = {}", *shared);

            // baz does not contend for `shared` so it's allowed to run now
            baz::spawn().unwrap();
        });

        // critical section is over: bar can now start
        hprintln!("E");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2, shared = [shared])]
    fn bar(mut c: bar::Context) {
        // the higher priority task does still need a critical section
        let shared = c.shared.shared.lock(|shared| {
            *shared += 1;

            *shared
        });

        hprintln!("D - shared = {}", shared);
    }

    #[task(priority = 3)]
    fn baz(_: baz::Context) {
        hprintln!("C");
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example lock
A
B - shared = 1
C
D - shared = 2
E
```
Types of `#[shared]` resources have to be `Send`.
## Multi-lock
As an extension to `lock`, and to reduce rightward drift, locks can be taken as tuples. The following example shows this in use:
```rust
//! examples/multilock.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        shared1: u32,
        shared2: u32,
        shared3: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        locks::spawn().unwrap();

        (
            Shared {
                shared1: 0,
                shared2: 0,
                shared3: 0,
            },
            Local {},
            init::Monotonics(),
        )
    }

    // when omitted priority is assumed to be `1`
    #[task(shared = [shared1, shared2, shared3])]
    fn locks(c: locks::Context) {
        let s1 = c.shared.shared1;
        let s2 = c.shared.shared2;
        let s3 = c.shared.shared3;

        (s1, s2, s3).lock(|s1, s2, s3| {
            *s1 += 1;
            *s2 += 1;
            *s3 += 1;

            hprintln!("Multiple locks, s1: {}, s2: {}, s3: {}", *s1, *s2, *s3);
        });

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example multilock
Multiple locks, s1: 1, s2: 1, s3: 1
```
## Only shared (`&-`) access
By default, the framework assumes that all tasks require exclusive access (`&mut-`) to resources, but it is possible to specify that a task only requires shared access (`&-`) to a resource using the `&resource_name` syntax in the `shared` list.
The advantage of specifying shared access (`&-`) to a resource is that no locks are required to access the resource, even if the resource is contended by more than one task running at different priorities. The downside is that the task only gets a shared reference (`&-`) to the resource, limiting the operations it can perform on it; but where a shared reference is enough, this approach reduces the number of required locks. In addition to simple immutable data, this shared access can be useful where the resource type safely implements interior mutability, with appropriate locking or atomic operations of its own.
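That last point can be sketched on the host in plain Rust (outside RTIC): an atomic counter provides its own synchronization, so several contexts can mutate it through shared `&` references without any external lock, which is the property a `&-` shared resource with interior mutability relies on.

```rust
// Sketch (plain Rust, outside RTIC): a type with interior mutability,
// here `AtomicU32`, can be safely mutated through shared `&` references
// from multiple contexts without an external lock.
use std::sync::atomic::{AtomicU32, Ordering};
use std::thread;

static COUNTER: AtomicU32 = AtomicU32::new(0);

fn bump(counter: &AtomicU32) {
    // Mutation through a shared reference: the atomic provides its own
    // synchronization, so no lock is needed.
    counter.fetch_add(1, Ordering::Relaxed);
}

fn main() {
    // Four threads bump the counter concurrently, all via `&COUNTER`.
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| bump(&COUNTER)))
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    println!("counter = {}", COUNTER.load(Ordering::Relaxed));
    // prints "counter = 4"
}
```
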
Note that in this release of RTIC it is not possible to request both exclusive access (`&mut-`) and shared access (`&-`) to the same resource from different tasks. Attempting to do so will result in a compile error.
In the example below a key (e.g. a cryptographic key) is loaded (or created) at runtime and then used from two tasks that run at different priorities without any kind of lock.
```rust
//! examples/only-shared-access.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0, UART1])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        key: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();
        bar::spawn().unwrap();

        (Shared { key: 0xdeadbeef }, Local {}, init::Monotonics())
    }

    #[task(shared = [&key])]
    fn foo(cx: foo::Context) {
        let key: &u32 = cx.shared.key;

        hprintln!("foo(key = {:#x})", key);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2, shared = [&key])]
    fn bar(cx: bar::Context) {
        hprintln!("bar(key = {:#x})", cx.shared.key);
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example only-shared-access
bar(key = 0xdeadbeef)
foo(key = 0xdeadbeef)
```
## Lock-free resource access of shared resources
A critical section is not required to access a `#[shared]` resource that's only accessed by tasks running at the same priority. In this case, you can opt out of the `lock` API by adding the `#[lock_free]` field-level attribute to the resource declaration (see example below). Note that this is merely a convenience to reduce needless resource-locking code: even if the `lock` API is used, at runtime the framework will not produce a critical section, due to how the underlying resource-ceiling preemption works.

Also worth noting: using `#[lock_free]` on resources shared by tasks running at different priorities will result in a compile-time error; not using the `lock` API would be a data race in that case.
```rust
//! examples/lock-free.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![deny(missing_docs)]
#![no_main]
#![no_std]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        #[lock_free] // <- lock-free shared resource
        counter: u64,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        foo::spawn().unwrap();

        (Shared { counter: 0 }, Local {}, init::Monotonics())
    }

    #[task(shared = [counter])] // <- same priority
    fn foo(c: foo::Context) {
        bar::spawn().unwrap();

        *c.shared.counter += 1; // <- no lock API required
        let counter = *c.shared.counter;
        hprintln!(" foo = {}", counter);
    }

    #[task(shared = [counter])] // <- same priority
    fn bar(c: bar::Context) {
        foo::spawn().unwrap();

        *c.shared.counter += 1; // <- no lock API required
        let counter = *c.shared.counter;
        hprintln!(" bar = {}", counter);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo run --target thumbv7m-none-eabi --example lock-free
foo = 1
bar = 2
```