# Resource usage

The RTIC framework manages shared and task-local resources, allowing persistent data storage and safe access without the use of `unsafe` code.

RTIC resources are visible only to functions declared within the `#[app]` module, and the framework gives the user complete control (on a per-task basis) over resource accessibility.

Declaration of system-wide resources is done by annotating **two** structs within the `#[app]` module with the attributes `#[local]` and `#[shared]`. Each field in these structures corresponds to a different resource (identified by field name). The difference between these two sets of resources will be covered below.
Each task must declare the resources it intends to access in its corresponding metadata attribute using the `local` and `shared` arguments. Each argument takes a list of resource identifiers. The listed resources are made available to the context under the `local` and `shared` fields of the `Context` structure.
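As a schematic fragment (not a complete application; `my_local`, `my_shared`, and `some_task` are placeholder names), a claim and the `Context` fields it produces look like this:

```rust
// Placeholder names -- the claimed resources become fields of this task's `Context`.
#[task(local = [my_local], shared = [my_shared])]
async fn some_task(mut cx: some_task::Context) {
    let _x = cx.local.my_local;                   // exclusive, lock-free access
    cx.shared.my_shared.lock(|_y| { /* ... */ }); // access via critical section
}
```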
The `init` task returns the initial values for the system-wide (`#[shared]` and `#[local]`) resources.
## `#[local]` resources

`#[local]` resources are locally accessible to a specific task, meaning that only that task can access the resource, and it does so without locks or critical sections. This allows resources, commonly drivers or large objects, to be initialized in `#[init]` and then passed on to a specific task.

Thus, a task `#[local]` resource can only be accessed by one single task. Attempting to assign the same `#[local]` resource to more than one task is a compile-time error.

Types of `#[local]` resources must implement the `Send` trait, as they are sent from `init` to the target task, crossing a thread boundary.

The example application shown below contains three tasks `foo`, `bar` and `idle`, each having access to its own `#[local]` resource.
```rust
//! examples/locals.rs

#![no_main]
#![no_std]
#![deny(warnings)]
#![deny(unsafe_code)]
#![deny(missing_docs)]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0, UART1])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {}

    #[local]
    struct Local {
        local_to_foo: i64,
        local_to_bar: i64,
        local_to_idle: i64,
    }

    // `#[init]` cannot access locals from the `#[local]` struct as they are initialized here.
    #[init]
    fn init(_: init::Context) -> (Shared, Local) {
        foo::spawn().unwrap();
        bar::spawn().unwrap();

        (
            Shared {},
            // initial values for the `#[local]` resources
            Local {
                local_to_foo: 0,
                local_to_bar: 0,
                local_to_idle: 0,
            },
        )
    }

    // `local_to_idle` can only be accessed from this context
    #[idle(local = [local_to_idle])]
    fn idle(cx: idle::Context) -> ! {
        let local_to_idle = cx.local.local_to_idle;
        *local_to_idle += 1;

        hprintln!("idle: local_to_idle = {}", local_to_idle);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        // error: no `local_to_foo` field in `idle::LocalResources`
        // _cx.local.local_to_foo += 1;

        // error: no `local_to_bar` field in `idle::LocalResources`
        // _cx.local.local_to_bar += 1;

        loop {
            cortex_m::asm::nop();
        }
    }

    // `local_to_foo` can only be accessed from this context
    #[task(local = [local_to_foo], priority = 1)]
    async fn foo(cx: foo::Context) {
        let local_to_foo = cx.local.local_to_foo;
        *local_to_foo += 1;

        // error: no `local_to_bar` field in `foo::LocalResources`
        // cx.local.local_to_bar += 1;

        hprintln!("foo: local_to_foo = {}", local_to_foo);
    }

    // `local_to_bar` can only be accessed from this context
    #[task(local = [local_to_bar], priority = 1)]
    async fn bar(cx: bar::Context) {
        let local_to_bar = cx.local.local_to_bar;
        *local_to_bar += 1;

        // error: no `local_to_foo` field in `bar::LocalResources`
        // cx.local.local_to_foo += 1;

        hprintln!("bar: local_to_bar = {}", local_to_bar);
    }
}
```
Running the example:

```console
$ cargo xtask qemu --verbose --example locals
bar: local_to_bar = 1
foo: local_to_foo = 1
idle: local_to_idle = 1
```
Local resources in `#[init]` and `#[idle]` have `'static` lifetimes. This is safe since both tasks are not re-entrant.
## Task local initialized resources

Local resources can also be specified directly in the resource claim like so: `#[task(local = [my_var: TYPE = INITIAL_VALUE, ...])]`; this allows for creating locals which do not need to be initialized in `#[init]`.

Types of `#[task(local = [..])]` resources have to implement neither `Send` nor `Sync`, as they do not cross any thread boundary.

In the example below the different uses and lifetimes are shown:
```rust
//! examples/declared_locals.rs

#![no_main]
#![no_std]
#![deny(warnings)]
#![deny(unsafe_code)]
#![deny(missing_docs)]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
mod app {
    use cortex_m_semihosting::debug;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init(local = [a: u32 = 0])]
    fn init(cx: init::Context) -> (Shared, Local) {
        // Locals in `#[init]` have 'static lifetime
        let _a: &'static mut u32 = cx.local.a;

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator

        (Shared {}, Local {})
    }

    #[idle(local = [a: u32 = 0])]
    fn idle(cx: idle::Context) -> ! {
        // Locals in `#[idle]` have 'static lifetime
        let _a: &'static mut u32 = cx.local.a;

        loop {}
    }

    #[task(binds = UART0, local = [a: u32 = 0])]
    fn foo(cx: foo::Context) {
        // Locals in `#[task]`s have a local lifetime
        let _a: &mut u32 = cx.local.a;

        // error: explicit lifetime required in the type of `cx`
        // let _a: &'static mut u32 = cx.local.a;
    }
}
```
You can run the application, but as the example merely showcases the lifetime properties there is no output (it suffices to build the application):

```console
$ cargo build --target thumbv7m-none-eabi --example declared_locals
```
## `#[shared]` resources and `lock`

Critical sections are required to access `#[shared]` resources in a data-race-free manner. To achieve this, the `shared` field of the passed `Context` implements the `Mutex` trait for each shared resource accessible to the task. This trait has only one method, `lock`, which runs its closure argument in a critical section.
The critical section created by the `lock` API is based on dynamic priorities: it temporarily raises the dynamic priority of the context to a ceiling priority that prevents other tasks from preempting the critical section. This synchronization protocol is known as the Immediate Ceiling Priority Protocol (ICPP), and complies with the Stack Resource Policy (SRP) based scheduling of RTIC.
In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for a `shared` resource and need to succeed in locking the resource in order to access its data. The highest priority handler, which does not access the `shared` resource, is free to preempt a critical section created by the lowest priority handler.
```rust
//! examples/lock.rs

#![no_main]
#![no_std]
#![deny(warnings)]
#![deny(unsafe_code)]
#![deny(missing_docs)]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA, GPIOB, GPIOC])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        shared: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local) {
        foo::spawn().unwrap();

        (Shared { shared: 0 }, Local {})
    }

    // when omitted priority is assumed to be `1`
    #[task(shared = [shared])]
    async fn foo(mut c: foo::Context) {
        hprintln!("A");

        // the lower priority task requires a critical section to access the data
        c.shared.shared.lock(|shared| {
            // data can only be modified within this critical section (closure)
            *shared += 1;

            // bar will *not* run right now due to the critical section
            bar::spawn().unwrap();

            hprintln!("B - shared = {}", *shared);

            // baz does not contend for `shared` so it's allowed to run now
            baz::spawn().unwrap();
        });

        // critical section is over: bar can now start
        hprintln!("E");

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2, shared = [shared])]
    async fn bar(mut c: bar::Context) {
        // the higher priority task does still need a critical section
        let shared = c.shared.shared.lock(|shared| {
            *shared += 1;

            *shared
        });

        hprintln!("D - shared = {}", shared);
    }

    #[task(priority = 3)]
    async fn baz(_: baz::Context) {
        hprintln!("C");
    }
}
```
```console
$ cargo xtask qemu --verbose --example lock
A
B - shared = 1
C
D - shared = 2
E
```
Types of `#[shared]` resources have to be `Send`.
## Multi-lock

As an extension to `lock`, and to reduce rightward drift, locks can be taken as tuples. The following example shows this in use:
```rust
//! examples/multilock.rs

#![no_main]
#![no_std]
#![deny(warnings)]
#![deny(unsafe_code)]
#![deny(missing_docs)]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [GPIOA])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        shared1: u32,
        shared2: u32,
        shared3: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local) {
        locks::spawn().unwrap();

        (
            Shared {
                shared1: 0,
                shared2: 0,
                shared3: 0,
            },
            Local {},
        )
    }

    // when omitted priority is assumed to be `1`
    #[task(shared = [shared1, shared2, shared3])]
    async fn locks(c: locks::Context) {
        let s1 = c.shared.shared1;
        let s2 = c.shared.shared2;
        let s3 = c.shared.shared3;

        (s1, s2, s3).lock(|s1, s2, s3| {
            *s1 += 1;
            *s2 += 1;
            *s3 += 1;

            hprintln!("Multiple locks, s1: {}, s2: {}, s3: {}", *s1, *s2, *s3);
        });

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo xtask qemu --verbose --example multilock
Multiple locks, s1: 1, s2: 1, s3: 1
```
## Only shared (`&-`) access

By default, the framework assumes that all tasks require exclusive mutable access (`&mut-`) to resources, but it is possible to specify that a task only requires shared access (`&-`) to a resource using the `&resource_name` syntax in the `shared` list.

The advantage of specifying shared access (`&-`) to a resource is that no locks are required to access the resource, even if the resource is contended by more than one task running at different priorities. The downside is that the task only gets a shared reference (`&-`) to the resource, limiting the operations it can perform on it, but where a shared reference is enough this approach reduces the number of required locks. In addition to simple immutable data, this shared access can be useful where the resource type safely implements interior mutability, with appropriate locking or atomic operations of its own.
Note that in this release of RTIC it is not possible to request both exclusive access (`&mut-`) and shared access (`&-`) to the same resource from different tasks. Attempting to do so will result in a compile error.
In the example below a key (e.g. a cryptographic key) is loaded (or created) at runtime (returned by `init`) and then used from two tasks that run at different priorities without any kind of lock.
```rust
//! examples/only-shared-access.rs

#![no_main]
#![no_std]
#![deny(warnings)]
#![deny(unsafe_code)]
#![deny(missing_docs)]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965, dispatchers = [UART0, UART1])]
mod app {
    use cortex_m_semihosting::{debug, hprintln};

    #[shared]
    struct Shared {
        key: u32,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local) {
        foo::spawn().unwrap();
        bar::spawn().unwrap();

        (Shared { key: 0xdeadbeef }, Local {})
    }

    #[task(shared = [&key])]
    async fn foo(cx: foo::Context) {
        let key: &u32 = cx.shared.key;
        hprintln!("foo(key = {:#x})", key);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }

    #[task(priority = 2, shared = [&key])]
    async fn bar(cx: bar::Context) {
        hprintln!("bar(key = {:#x})", cx.shared.key);
    }
}
```
```console
$ cargo xtask qemu --verbose --example only-shared-access
bar(key = 0xdeadbeef)
foo(key = 0xdeadbeef)
```
## Lock-free access of shared resources

A critical section is not required to access a `#[shared]` resource that is only accessed by tasks running at the same priority. In this case, you can opt out of the `lock` API by adding the `#[lock_free]` field-level attribute to the resource declaration (see example below).

To adhere to the Rust aliasing rule, a resource may be either accessed through multiple immutable references or through a single mutable reference (but not both at the same time).

Using `#[lock_free]` on resources shared by tasks running at different priorities will result in a compile-time error -- not using the `lock` API would violate the aforementioned aliasing rule. Similarly, for each priority there can be only a single software task accessing a shared resource (as an `async` task may yield execution to other software or hardware tasks running at the same priority). However, under this single-task restriction, the resource is in effect no longer `shared` but rather `local`. Thus, using a `#[lock_free]` shared resource in a software task will result in a compile-time error -- where applicable, use a `#[local]` resource instead.
```rust
//! examples/lock-free.rs

#![no_main]
#![no_std]
#![deny(warnings)]
#![deny(unsafe_code)]
#![deny(missing_docs)]

use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
mod app {
    use cortex_m_semihosting::{debug, hprintln};
    use lm3s6965::Interrupt;

    #[shared]
    struct Shared {
        #[lock_free] // <- lock-free shared resource
        counter: u64,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local) {
        rtic::pend(Interrupt::UART0);

        (Shared { counter: 0 }, Local {})
    }

    #[task(binds = UART0, shared = [counter])] // <- same priority
    fn foo(c: foo::Context) {
        rtic::pend(Interrupt::UART1);

        *c.shared.counter += 1; // <- no lock API required
        let counter = *c.shared.counter;
        hprintln!("  foo = {}", counter);
    }

    #[task(binds = UART1, shared = [counter])] // <- same priority
    fn bar(c: bar::Context) {
        rtic::pend(Interrupt::UART0);

        *c.shared.counter += 1; // <- no lock API required
        let counter = *c.shared.counter;
        hprintln!("  bar = {}", counter);

        debug::exit(debug::EXIT_SUCCESS); // Exit QEMU simulator
    }
}
```
```console
$ cargo xtask qemu --verbose --example lock-free
  foo = 1
  bar = 2
```