Real-Time Interrupt-driven Concurrency

A concurrency framework for building real-time systems

Preface

This book contains user level documentation for the Real-Time Interrupt-driven Concurrency (RTIC) framework. The API reference can be found here.

Formerly known as Real-Time For the Masses.

There is a translation of this book in Russian.

This is the documentation of v0.5.x of RTIC; for the documentation of version v0.4.x go here.


Features

  • Tasks as the unit of concurrency [1]. Tasks can be event triggered (fired in response to asynchronous stimuli) or spawned by the application on demand.

  • Message passing between tasks. Specifically, messages can be passed to software tasks at spawn time.

  • A timer queue [2]. Software tasks can be scheduled to run at some time in the future. This feature can be used to implement periodic tasks.

  • Support for prioritization of tasks and, thus, preemptive multitasking.

  • Efficient and data race free memory sharing through fine grained priority based critical sections [1].

  • Deadlock free execution guaranteed at compile time. This is a stronger guarantee than what's provided by the standard Mutex abstraction.

  • Minimal scheduling overhead. The task scheduler has minimal software footprint; the hardware does the bulk of the scheduling.

  • Highly efficient memory usage: All the tasks share a single call stack and there's no hard dependency on a dynamic memory allocator.

  • All Cortex-M devices are fully supported.

  • This task model is amenable to known WCET (Worst Case Execution Time) analysis and scheduling analysis techniques. (Though we haven't yet developed Rust friendly tooling for that.)

User documentation

API reference

Chat

Join us and talk about RTIC in the Matrix room.

Contributing

New features and big changes should go through the RFC process in the dedicated RFC repository.

Acknowledgments

This crate is based on the Real-Time For the Masses language created by the Embedded Systems group at Luleå University of Technology, led by Prof. Per Lindgren.

References

[1] Eriksson, J., Häggström, F., Aittamaa, S., Kruglyak, A., & Lindgren, P. (2013, June). Real-time for the masses, step 1: Programming API and static priority SRP kernel primitives. In Industrial Embedded Systems (SIES), 2013 8th IEEE International Symposium on (pp. 110-113). IEEE.

[2] Lindgren, P., Fresk, E., Lindner, M., Lindner, A., Pereira, D., & Pinho, L. M. (2016). Abstract timers and their implementation onto the ARM Cortex-M family of MCUs. ACM SIGBED Review, 13(1), 48-53.

License

All source code (including code snippets) is licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0)

  • MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)

at your option.

The written prose contained within the book is licensed under the terms of the Creative Commons CC-BY-SA v4.0 license (LICENSE-CC-BY-SA or https://creativecommons.org/licenses/by-sa/4.0/legalcode).

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.

RTIC by example

This part of the book introduces the Real-Time Interrupt-driven Concurrency (RTIC) framework to new users by walking them through examples of increasing complexity.

All examples in this part of the book can be found in the GitHub repository of the project, and most of the examples can be run on QEMU so no special hardware is required to follow along.

To run the examples on your laptop / PC you'll need the qemu-system-arm program. Check the embedded Rust book for instructions on how to set up an embedded development environment that includes QEMU.

The app attribute

This is the smallest possible RTIC application:


//! examples/smallest.rs

#![no_main]
#![no_std]

use panic_semihosting as _; // panic handler
use rtic::app;

#[app(device = lm3s6965)]
const APP: () = {};


All RTIC applications use the app attribute (#[app(..)]). This attribute must be applied to a const item that contains items. The app attribute has a mandatory device argument that takes a path as a value. This path must point to a peripheral access crate (PAC) generated using svd2rust v0.14.x or newer. The app attribute will expand into a suitable entry point so it's not required to use the cortex_m_rt::entry attribute.

ASIDE: Some of you may be wondering why we are using a const item as a module and not a proper mod item. The reason is that using attributes on modules requires a feature gate, which requires a nightly toolchain. To make RTIC work on stable we use the const item instead. When more parts of macros 1.2 are stabilized we'll move from a const item to a mod item and eventually to a crate level attribute (#![app]).

init

Within the pseudo-module the app attribute expects to find an initialization function marked with the init attribute. This function must have the signature fn(init::Context) [-> init::LateResources]; the return type is optional.

This initialization function will be the first part of the application to run. The init function will run with interrupts disabled and has exclusive access to Cortex-M and, optionally, device specific peripherals through the core and device fields of init::Context.

static mut variables declared at the beginning of init will be transformed into &'static mut references that are safe to access.

The example below shows the types of the core and device fields and showcases safe access to a static mut variable. The device field is only available when the peripherals argument is set to true (it defaults to false).


//! examples/init.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use panic_semihosting as _;

#[rtic::app(device = lm3s6965, peripherals = true)]
const APP: () = {
    #[init]
    fn init(cx: init::Context) {
        static mut X: u32 = 0;

        // Cortex-M peripherals
        let _core: cortex_m::Peripherals = cx.core;

        // Device specific peripherals
        let _device: lm3s6965::Peripherals = cx.device;

        // Safe access to local `static mut` variable
        let _x: &'static mut u32 = X;

        hprintln!("init").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }
};


Running the example will print init to the console and then exit the QEMU process.

$ cargo run --example init
init

idle

A function marked with the idle attribute can optionally appear in the pseudo-module. This function is used as the special idle task and must have the signature fn(idle::Context) -> !.

When present, the runtime will execute the idle task after init. Unlike init, idle will run with interrupts enabled and it's not allowed to return so it must run forever.

When no idle function is declared, the runtime sets the SLEEPONEXIT bit and then sends the microcontroller to sleep after running init.

Like in init, static mut variables will be transformed into &'static mut references that are safe to access.

The example below shows that idle runs after init.


//! examples/idle.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        hprintln!("init").unwrap();
    }

    #[idle]
    fn idle(_: idle::Context) -> ! {
        static mut X: u32 = 0;

        // Safe access to local `static mut` variable
        let _x: &'static mut u32 = X;

        hprintln!("idle").unwrap();

        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }
};

$ cargo run --example idle
init
idle

Hardware tasks

To declare interrupt handlers the framework provides a #[task] attribute that can be attached to functions. This attribute takes a binds argument whose value is the name of the interrupt to which the handler will be bound; the function adorned with this attribute becomes the interrupt handler. Within the framework these types of tasks are referred to as hardware tasks, because they start executing in reaction to a hardware event.

The example below demonstrates the use of the #[task] attribute to declare an interrupt handler. As in the case of #[init] and #[idle], local static mut variables are safe to use within a hardware task.


//! examples/hardware.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        // Pends the UART0 interrupt but its handler won't run until *after*
        // `init` returns because interrupts are disabled
        rtic::pend(Interrupt::UART0); // equivalent to NVIC::pend

        hprintln!("init").unwrap();
    }

    #[idle]
    fn idle(_: idle::Context) -> ! {
        // interrupts are enabled again; the `UART0` handler runs at this point

        hprintln!("idle").unwrap();

        rtic::pend(Interrupt::UART0);

        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }

    #[task(binds = UART0)]
    fn uart0(_: uart0::Context) {
        static mut TIMES: u32 = 0;

        // Safe access to local `static mut` variable
        *TIMES += 1;

        hprintln!(
            "UART0 called {} time{}",
            *TIMES,
            if *TIMES > 1 { "s" } else { "" }
        )
        .unwrap();
    }
};

$ cargo run --example hardware
init
UART0 called 1 time
idle
UART0 called 2 times

So far all the RTIC applications we have seen look no different than the applications one can write using only the cortex-m-rt crate. From this point we start introducing features unique to RTIC.

Priorities

The static priority of each handler can be declared in the task attribute using the priority argument. Tasks can have priorities in the range 1..=(1 << NVIC_PRIO_BITS) where NVIC_PRIO_BITS is a constant defined in the device crate. When the priority argument is omitted, the priority is assumed to be 1. The idle task has a non-configurable static priority of 0, the lowest priority.

When several tasks are ready to be executed the one with highest static priority will be executed first. Task prioritization can be observed in the following scenario: an interrupt signal arrives during the execution of a low priority task; the signal puts the higher priority task in the pending state. The difference in priority results in the higher priority task preempting the lower priority one: the execution of the lower priority task is suspended and the higher priority task is executed to completion. Once the higher priority task has terminated the lower priority task is resumed.

The following example showcases the priority based scheduling of tasks.


//! examples/preempt.rs

#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;
use rtic::app;

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::GPIOA);
    }

    #[task(binds = GPIOA, priority = 1)]
    fn gpioa(_: gpioa::Context) {
        hprintln!("GPIOA - start").unwrap();
        rtic::pend(Interrupt::GPIOC);
        hprintln!("GPIOA - end").unwrap();
        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(binds = GPIOB, priority = 2)]
    fn gpiob(_: gpiob::Context) {
        hprintln!(" GPIOB").unwrap();
    }

    #[task(binds = GPIOC, priority = 2)]
    fn gpioc(_: gpioc::Context) {
        hprintln!(" GPIOC - start").unwrap();
        rtic::pend(Interrupt::GPIOB);
        hprintln!(" GPIOC - end").unwrap();
    }
};

$ cargo run --example preempt
GPIOA - start
 GPIOC - start
 GPIOC - end
 GPIOB
GPIOA - end

Note that the task gpiob does not preempt task gpioc because its priority is the same as gpioc's. However, once gpioc terminates, the pending gpiob task is executed before gpioa is resumed, due to gpiob's higher priority; gpioa is resumed only after gpiob terminates.

One more note about priorities: choosing a priority higher than what the device supports (that is, 1 << NVIC_PRIO_BITS) will result in a compile error. Due to limitations in the language, the error message is currently far from helpful: it will say something along the lines of "evaluation of constant value failed" and the span of the error will not point out the problematic priority value -- we are sorry about this!
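
For instance, the lm3s6965 used throughout this chapter defines NVIC_PRIO_BITS = 3, so the valid range is 1..=8 and a hypothetical declaration like the following fails to compile:

// `priority = 9` exceeds `1 << NVIC_PRIO_BITS` (= 8 on this device) and
// is rejected with an "evaluation of constant value failed" error
#[task(binds = UART0, priority = 9)]
fn uart0(_: uart0::Context) {
    // ..
}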

Resources

The framework provides an abstraction to share data between any of the contexts we saw in the previous section (task handlers, init and idle): resources.

Resources are data visible only to functions declared within the #[app] pseudo-module. The framework gives the user complete control over which context can access which resource.

All resources are declared as a single struct within the #[app] pseudo-module. Each field in the structure corresponds to a different resource. Resources can optionally be given an initial value using the #[init] attribute. Resources that are not given an initial value are referred to as late resources and are covered in more detail in a follow-up section on this page.

Each context (task handler, init or idle) must declare the resources it intends to access in its corresponding metadata attribute using the resources argument. This argument takes a list of resource names as its value. The listed resources are made available to the context under the resources field of the Context structure.

The example application shown below contains two interrupt handlers that share access to a resource named shared.


//! examples/resource.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        // A resource
        #[init(0)]
        shared: u32,
    }

    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::UART0);
        rtic::pend(Interrupt::UART1);
    }

    // `shared` cannot be accessed from this context
    #[idle]
    fn idle(_cx: idle::Context) -> ! {
        debug::exit(debug::EXIT_SUCCESS);

        // error: no `resources` field in `idle::Context`
        // _cx.resources.shared += 1;

        loop {}
    }

    // `shared` can be accessed from this context
    #[task(binds = UART0, resources = [shared])]
    fn uart0(cx: uart0::Context) {
        let shared: &mut u32 = cx.resources.shared;
        *shared += 1;

        hprintln!("UART0: shared = {}", shared).unwrap();
    }

    // `shared` can be accessed from this context
    #[task(binds = UART1, resources = [shared])]
    fn uart1(cx: uart1::Context) {
        *cx.resources.shared += 1;

        hprintln!("UART1: shared = {}", cx.resources.shared).unwrap();
    }
};

$ cargo run --example resource
UART0: shared = 1
UART1: shared = 2

Note that the shared resource cannot be accessed from idle. Attempting to do so results in a compile error.

lock

In the presence of preemption critical sections are required to mutate shared data in a data race free manner. As the framework has complete knowledge of the priorities of tasks and of which tasks can access which resources, it enforces that critical sections are used where required for memory safety.

Where a critical section is required the framework hands out a resource proxy instead of a reference. This resource proxy is a structure that implements the Mutex trait. The only method on this trait, lock, runs its closure argument in a critical section.

The critical section created by the lock API is based on dynamic priorities: it temporarily raises the dynamic priority of the context to a ceiling priority that prevents other tasks from preempting the critical section. This synchronization protocol is known as the Immediate Ceiling Priority Protocol (ICPP).
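
Paraphrasing the rtic::Mutex trait (consult the API reference for the authoritative definition), a resource proxy exposes roughly this interface:

pub trait Mutex {
    /// The type of the data protected by the proxy
    type T;

    /// Runs `f` inside a critical section, granting it temporary
    /// exclusive access (`&mut-`) to the protected data
    fn lock<R>(&mut self, f: impl FnOnce(&mut Self::T) -> R) -> R;
}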

In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for the shared resource. The lowest priority handler needs to lock the shared resource to access its data, whereas the mid priority handler can directly access its data. The highest priority handler, which cannot access the shared resource, is free to preempt the critical section created by the lowest priority handler.


//! examples/lock.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[init(0)]
        shared: u32,
    }

    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::GPIOA);
    }

    // when omitted priority is assumed to be `1`
    #[task(binds = GPIOA, resources = [shared])]
    fn gpioa(mut c: gpioa::Context) {
        hprintln!("A").unwrap();

        // the lower priority task requires a critical section to access the data
        c.resources.shared.lock(|shared| {
            // data can only be modified within this critical section (closure)
            *shared += 1;

            // GPIOB will *not* run right now due to the critical section
            rtic::pend(Interrupt::GPIOB);

            hprintln!("B - shared = {}", *shared).unwrap();

            // GPIOC does not contend for `shared` so it's allowed to run now
            rtic::pend(Interrupt::GPIOC);
        });

        // critical section is over: GPIOB can now start

        hprintln!("E").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(binds = GPIOB, priority = 2, resources = [shared])]
    fn gpiob(c: gpiob::Context) {
        // the higher priority task does *not* need a critical section
        *c.resources.shared += 1;

        hprintln!("D - shared = {}", *c.resources.shared).unwrap();
    }

    #[task(binds = GPIOC, priority = 3)]
    fn gpioc(_: gpioc::Context) {
        hprintln!("C").unwrap();
    }
};

$ cargo run --example lock
A
B - shared = 1
C
D - shared = 2
E

Late resources

Late resources are resources that are not given an initial value at compile time using the #[init] attribute but instead are initialized at runtime using the init::LateResources values returned by the init function.

Late resources are useful for moving (as in transferring the ownership of) peripherals initialized in init into interrupt handlers.

The example below uses late resources to establish a lockless, one-way channel between the UART0 interrupt handler and the idle task. A single producer single consumer Queue is used as the channel. The queue is split into consumer and producer end points in init and then each end point is stored in a different resource; UART0 owns the producer resource and idle owns the consumer resource.


//! examples/late.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use heapless::{
    consts::*,
    i,
    spsc::{Consumer, Producer, Queue},
};
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    // Late resources
    struct Resources {
        p: Producer<'static, u32, U4>,
        c: Consumer<'static, u32, U4>,
    }

    #[init]
    fn init(_: init::Context) -> init::LateResources {
        static mut Q: Queue<u32, U4> = Queue(i::Queue::new());

        let (p, c) = Q.split();

        // Initialization of late resources
        init::LateResources { p, c }
    }

    #[idle(resources = [c])]
    fn idle(c: idle::Context) -> ! {
        loop {
            if let Some(byte) = c.resources.c.dequeue() {
                hprintln!("received message: {}", byte).unwrap();

                debug::exit(debug::EXIT_SUCCESS);
            } else {
                rtic::pend(Interrupt::UART0);
            }
        }
    }

    #[task(binds = UART0, resources = [p])]
    fn uart0(c: uart0::Context) {
        c.resources.p.enqueue(42).unwrap();
    }
};

$ cargo run --example late
received message: 42

Only shared access

By default the framework assumes that all tasks require exclusive access (&mut-) to resources but it is possible to specify that a task only requires shared access (&-) to a resource using the &resource_name syntax in the resources list.

The advantage of specifying shared access (&-) to a resource is that no locks are required to access the resource even if the resource is contended by several tasks running at different priorities. The downside is that the task only gets a shared reference (&-) to the resource, limiting the operations it can perform on it, but where a shared reference is enough this approach reduces the number of required locks.

Note that in this release of RTIC it is not possible to request both exclusive access (&mut-) and shared access (&-) to the same resource from different tasks. Attempting to do so will result in a compile error.
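
As a sketch (with hypothetical tasks and a hypothetical resource named shared), mixing the two access modes like this is rejected:

// ERROR: `shared` is requested as exclusive (`shared`) by `uart0` but as
// shared (`&shared`) by `uart1`; this release of RTIC rejects the mix
#[task(binds = UART0, resources = [shared])]
fn uart0(cx: uart0::Context) { /* .. */ }

#[task(binds = UART1, resources = [&shared])]
fn uart1(cx: uart1::Context) { /* .. */ }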

In the example below a key (e.g. a cryptographic key) is loaded (or created) at runtime and then used from two tasks that run at different priorities without any kind of lock.


//! examples/static.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        key: u32,
    }

    #[init]
    fn init(_: init::Context) -> init::LateResources {
        rtic::pend(Interrupt::UART0);
        rtic::pend(Interrupt::UART1);

        init::LateResources { key: 0xdeadbeef }
    }

    #[task(binds = UART0, resources = [&key])]
    fn uart0(cx: uart0::Context) {
        let key: &u32 = cx.resources.key;
        hprintln!("UART0(key = {:#x})", key).unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(binds = UART1, priority = 2, resources = [&key])]
    fn uart1(cx: uart1::Context) {
        hprintln!("UART1(key = {:#x})", cx.resources.key).unwrap();
    }
};

$ cargo run --example only-shared-access
UART1(key = 0xdeadbeef)
UART0(key = 0xdeadbeef)

Software tasks

In addition to hardware tasks, which are invoked by the hardware in response to hardware events, RTIC also supports software tasks which can be spawned by the application from any execution context.

Software tasks can also be assigned priorities and, under the hood, are dispatched from interrupt handlers. RTIC requires that free interrupts are declared in an extern block when using software tasks; some of these free interrupts will be used to dispatch the software tasks. An advantage of software tasks over hardware tasks is that many tasks can be mapped to a single interrupt handler.

Software tasks are also declared using the task attribute but the binds argument must be omitted. To be able to spawn a software task from a context the name of the task must appear in the spawn argument of the context attribute (init, idle, task, etc.).

The example below showcases three software tasks that run at two different priorities. The three software tasks are mapped to two interrupt handlers.


//! examples/task.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [foo])]
    fn init(c: init::Context) {
        c.spawn.foo().unwrap();
    }

    #[task(spawn = [bar, baz])]
    fn foo(c: foo::Context) {
        hprintln!("foo - start").unwrap();

        // spawns `bar` onto the task scheduler
        // `foo` and `bar` have the same priority so `bar` will not run until
        // after `foo` terminates
        c.spawn.bar().unwrap();

        hprintln!("foo - middle").unwrap();

        // spawns `baz` onto the task scheduler
        // `baz` has higher priority than `foo` so it immediately preempts `foo`
        c.spawn.baz().unwrap();

        hprintln!("foo - end").unwrap();
    }

    #[task]
    fn bar(_: bar::Context) {
        hprintln!("bar").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(priority = 2)]
    fn baz(_: baz::Context) {
        hprintln!("baz").unwrap();
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
        fn QEI0();
    }
};

$ cargo run --example task
foo - start
foo - middle
baz
foo - end
bar

Message passing

The other advantage of software tasks is that messages can be passed to these tasks when spawning them. The type of the message payload must be specified in the signature of the task handler.

The example below showcases three tasks, two of them expect a message.


//! examples/message.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [foo])]
    fn init(c: init::Context) {
        c.spawn.foo(/* no message */).unwrap();
    }

    #[task(spawn = [bar])]
    fn foo(c: foo::Context) {
        static mut COUNT: u32 = 0;

        hprintln!("foo").unwrap();

        c.spawn.bar(*COUNT).unwrap();
        *COUNT += 1;
    }

    #[task(spawn = [baz])]
    fn bar(c: bar::Context, x: u32) {
        hprintln!("bar({})", x).unwrap();

        c.spawn.baz(x + 1, x + 2).unwrap();
    }

    #[task(spawn = [foo])]
    fn baz(c: baz::Context, x: u32, y: u32) {
        hprintln!("baz({}, {})", x, y).unwrap();

        if x + y > 4 {
            debug::exit(debug::EXIT_SUCCESS);
        }

        c.spawn.foo().unwrap();
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
    }
};

$ cargo run --example message
foo
bar(0)
baz(1, 2)
foo
bar(1)
baz(2, 3)

Capacity

RTIC does not perform any form of heap-based memory allocation. The memory required to store messages is statically reserved. By default the framework minimizes the memory footprint of the application so each task has a message "capacity" of 1: meaning that at most one message can be posted to the task before it gets a chance to run. This default can be overridden for each task using the capacity argument. This argument takes a positive integer that indicates how many messages the task message buffer can hold.

The example below sets the capacity of the software task foo to 4. If the capacity is not specified then the second spawn.foo call in UART0 would fail (panic).


//! examples/capacity.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::UART0);
    }

    #[task(binds = UART0, spawn = [foo, bar])]
    fn uart0(c: uart0::Context) {
        c.spawn.foo(0).unwrap();
        c.spawn.foo(1).unwrap();
        c.spawn.foo(2).unwrap();
        c.spawn.foo(3).unwrap();

        c.spawn.bar().unwrap();
    }

    #[task(capacity = 4)]
    fn foo(_: foo::Context, x: u32) {
        hprintln!("foo({})", x).unwrap();
    }

    #[task]
    fn bar(_: bar::Context) {
        hprintln!("bar").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
    }
};

$ cargo run --example capacity
foo(0)
foo(1)
foo(2)
foo(3)
bar

Error handling

The spawn API returns the Err variant when there's no space to send the message. In most scenarios spawning errors are handled in one of two ways:

  • Panicking, using unwrap, expect, etc. This approach is used to catch the programmer error (i.e. bug) of selecting a capacity that was too small. When this panic is encountered during testing choosing a bigger capacity and recompiling the program may fix the issue but sometimes it's necessary to dig deeper and perform a timing analysis of the application to check if the platform can deal with peak payload or if the processor needs to be replaced with a faster one.

  • Ignoring the result. In soft real-time and non real-time applications it may be OK to occasionally lose data or fail to respond to some events during event bursts. In those scenarios silently letting a spawn call fail may be acceptable.
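
As a minimal sketch of the second approach (the task name and payload are hypothetical):

// losing the occasional reading during an event burst is acceptable in
// this soft real-time scenario, so the `Result` is discarded with `ok()`
// instead of being unwrapped
cx.spawn.log_sample(reading).ok();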

It should be noted that retrying a spawn call is usually the wrong approach, as this operation will likely never succeed in practice. Because context switches only occur towards higher priority tasks, retrying the spawn call of a lower priority task will never let the scheduler dispatch said task, meaning that its message buffer will never be emptied. This situation is depicted in the following snippet:


#[rtic::app(..)]
const APP: () = {
    #[init(spawn = [foo, bar])]
    fn init(cx: init::Context) {
        cx.spawn.foo().unwrap();
        cx.spawn.bar().unwrap();
    }

    #[task(priority = 2, spawn = [bar])]
    fn foo(cx: foo::Context) {
        // ..

        // the program will get stuck here
        while cx.spawn.bar(payload).is_err() {
            // retry the spawn call if it failed
        }
    }

    #[task(priority = 1)]
    fn bar(cx: bar::Context, payload: i32) {
        // ..
    }
};

Timer queue

In contrast with the spawn API, which immediately spawns a software task onto the scheduler, the schedule API can be used to schedule a task to run some time in the future.

To use the schedule API a monotonic timer must be first defined using the monotonic argument of the #[app] attribute. This argument takes a path to a type that implements the Monotonic trait. The associated type, Instant, of this trait represents a timestamp in arbitrary units and it's used extensively in the schedule API -- it is suggested to model this type after the one in the standard library.

Although not shown in the trait definition (due to limitations in the trait / type system) the subtraction of two Instants should return some Duration type (see core::time::Duration) and this Duration type must implement the TryInto<u32> trait. The implementation of this trait must convert the Duration value, which uses some arbitrary unit of time, into the "system timer (SYST) clock cycles" time unit. The result of the conversion must be a 32-bit integer. If the result of the conversion doesn't fit in a 32-bit number then the operation must return an error, any error type.
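
Paraphrasing the API reference (which has the authoritative definition), the Monotonic trait has roughly this shape:

pub trait Monotonic {
    /// A timestamp in arbitrary units; subtracting two `Instant`s
    /// yields a `Duration`
    type Instant: Copy + Ord + Sub;

    /// The ratio between the system timer (SYST) frequency and the
    /// frequency of this clock (`Fraction` is rtic's ratio type)
    fn ratio() -> Fraction;

    /// Returns the current timestamp
    fn now() -> Self::Instant;

    /// Resets the counter to zero
    unsafe fn reset();

    /// An `Instant` that represents a count of zero
    fn zero() -> Self::Instant;
}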

For ARMv7+ targets the rtic crate provides a Monotonic implementation based on the built-in CYCle CouNTer (CYCCNT). Note that this is a 32-bit timer clocked at the frequency of the CPU and as such it is not suitable for tracking time spans in the order of seconds.
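
As a quick sanity check: a 32-bit counter clocked at 64 MHz wraps around after 2^32 / 64,000,000 ≈ 67 seconds, and even sooner at higher clock frequencies.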

To be able to schedule a software task from a context the name of the task must first appear in the schedule argument of the context attribute. When scheduling a task the (user-defined) Instant at which the task should be executed must be passed as the first argument of the schedule invocation.

Additionally, the chosen monotonic timer must be configured and initialized during the #[init] phase. Note that this is also the case if you choose to use the CYCCNT provided by the cortex-m-rtic crate.

The example below schedules two tasks from init: foo and bar. foo is scheduled to run 8 million clock cycles in the future. Next, bar is scheduled to run 4 million clock cycles in the future. Thus bar runs before foo since it was scheduled to run first.

IMPORTANT: The examples that use the schedule API or the Instant abstraction will not properly work on QEMU because the Cortex-M cycle counter functionality has not been implemented in qemu-system-arm.


//! examples/schedule.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m::peripheral::DWT;
use cortex_m_semihosting::hprintln;
use panic_halt as _;
use rtic::cyccnt::{Instant, U32Ext as _};

// NOTE: does NOT work on QEMU!
#[rtic::app(device = lm3s6965, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    #[init(schedule = [foo, bar])]
    fn init(mut cx: init::Context) {
        // Initialize (enable) the monotonic timer (CYCCNT)
        cx.core.DCB.enable_trace();
        // required on Cortex-M7 devices that software lock the DWT (e.g. STM32F7)
        DWT::unlock();
        cx.core.DWT.enable_cycle_counter();

        // semantically, the monotonic timer is frozen at time "zero" during `init`
        // NOTE do *not* call `Instant::now` in this context; it will return a nonsense value
        let now = cx.start; // the start time of the system

        hprintln!("init @ {:?}", now).unwrap();

        // Schedule `foo` to run 8e6 cycles (clock cycles) in the future
        cx.schedule.foo(now + 8_000_000.cycles()).unwrap();

        // Schedule `bar` to run 4e6 cycles in the future
        cx.schedule.bar(now + 4_000_000.cycles()).unwrap();
    }

    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo  @ {:?}", Instant::now()).unwrap();
    }

    #[task]
    fn bar(_: bar::Context) {
        hprintln!("bar  @ {:?}", Instant::now()).unwrap();
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
    }
};


Running the program on real hardware produces the following output in the console:

init @ Instant(0)
bar  @ Instant(4000236)
foo  @ Instant(8000173)

When the schedule API is being used the runtime internally uses the SysTick interrupt handler and the system timer peripheral (SYST) so neither can be used by the application. This is accomplished by changing the type of init::Context.core from cortex_m::Peripherals to rtic::Peripherals. The latter structure contains all the fields of the former minus the SYST one.

Periodic tasks

Software tasks have access to the Instant at which they were scheduled to run through the scheduled variable. This information and the schedule API can be used to implement periodic tasks as shown in the example below.


//! examples/periodic.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::hprintln;
use panic_semihosting as _;
use rtic::cyccnt::{Instant, U32Ext};

const PERIOD: u32 = 8_000_000;

// NOTE: does NOT work on QEMU!
#[rtic::app(device = lm3s6965, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    #[init(schedule = [foo])]
    fn init(cx: init::Context) {
        // omitted: initialization of `CYCCNT`

        cx.schedule.foo(cx.start + PERIOD.cycles()).unwrap();
    }

    #[task(schedule = [foo])]
    fn foo(cx: foo::Context) {
        let now = Instant::now();
        hprintln!("foo(scheduled = {:?}, now = {:?})", cx.scheduled, now).unwrap();

        cx.schedule.foo(cx.scheduled + PERIOD.cycles()).unwrap();
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
    }
};


This is the output produced by the example. Note that there is zero drift / jitter even though schedule.foo was invoked at the end of foo. Using Instant::now instead of scheduled would have resulted in drift / jitter.

foo(scheduled = Instant(8000000), now = Instant(8000196))
foo(scheduled = Instant(16000000), now = Instant(16000196))
foo(scheduled = Instant(24000000), now = Instant(24000196))

Baseline

For the tasks scheduled from init we have exact information about their scheduled time. For hardware tasks there's no scheduled time because these tasks are asynchronous in nature. For hardware tasks the runtime provides a start time, which indicates the time at which the task handler started executing.

Note that start is not equal to the arrival time of the event that fired the task. Depending on the priority of the task and the load of the system the start time could be very far off from the event arrival time.

What do you think will be the value of scheduled for software tasks that are spawned instead of scheduled? The answer is that spawned tasks inherit the baseline time of the context that spawned them. The baseline of hardware tasks is their start time, the baseline of software tasks is their scheduled time and the baseline of init is the system start time, or time zero (Instant::zero()). idle doesn't really have a baseline, but tasks spawned from it will use Instant::now() as their baseline time.

The example below showcases the different meanings of the baseline.


//! examples/baseline.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;

// NOTE: does NOT properly work on QEMU
#[rtic::app(device = lm3s6965, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    #[init(spawn = [foo])]
    fn init(cx: init::Context) {
        // omitted: initialization of `CYCCNT`

        hprintln!("init(baseline = {:?})", cx.start).unwrap();

        // `foo` inherits the baseline of `init`: `Instant(0)`
        cx.spawn.foo().unwrap();
    }

    #[task(schedule = [foo])]
    fn foo(cx: foo::Context) {
        static mut ONCE: bool = true;

        hprintln!("foo(baseline = {:?})", cx.scheduled).unwrap();

        if *ONCE {
            *ONCE = false;

            rtic::pend(Interrupt::UART0);
        } else {
            debug::exit(debug::EXIT_SUCCESS);
        }
    }

    #[task(binds = UART0, spawn = [foo])]
    fn uart0(cx: uart0::Context) {
        hprintln!("UART0(baseline = {:?})", cx.start).unwrap();

        // `foo` inherits the baseline of `UART0`: its `start` time
        cx.spawn.foo().unwrap();
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
    }
};


Running the program on real hardware produces the following output in the console:

init(baseline = Instant(0))
foo(baseline = Instant(0))
UART0(baseline = Instant(904))
foo(baseline = Instant(904))

Types, Send and Sync

Every function within the APP pseudo-module has a Context structure as its first parameter. All the fields of these structures have predictable, non-anonymous types so you can write plain functions that take them as arguments.

The API reference specifies how these types are generated from the input. You can also generate documentation for your binary crate (cargo doc --bin <name>); in the documentation you'll find Context structs (e.g. init::Context and idle::Context).

The example below shows the different types generated by the app attribute.


//! examples/types.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::debug;
use panic_semihosting as _;
use rtic::cyccnt;

#[rtic::app(device = lm3s6965, peripherals = true, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    struct Resources {
        #[init(0)]
        shared: u32,
    }

    #[init(schedule = [foo], spawn = [foo])]
    fn init(cx: init::Context) {
        let _: cyccnt::Instant = cx.start;
        let _: rtic::Peripherals = cx.core;
        let _: lm3s6965::Peripherals = cx.device;
        let _: init::Schedule = cx.schedule;
        let _: init::Spawn = cx.spawn;

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[idle(schedule = [foo], spawn = [foo])]
    fn idle(cx: idle::Context) -> ! {
        let _: idle::Schedule = cx.schedule;
        let _: idle::Spawn = cx.spawn;

        loop {}
    }

    #[task(binds = UART0, resources = [shared], schedule = [foo], spawn = [foo])]
    fn uart0(cx: uart0::Context) {
        let _: cyccnt::Instant = cx.start;
        let _: resources::shared = cx.resources.shared;
        let _: uart0::Schedule = cx.schedule;
        let _: uart0::Spawn = cx.spawn;
    }

    #[task(priority = 2, resources = [shared], schedule = [foo], spawn = [foo])]
    fn foo(cx: foo::Context) {
        let _: cyccnt::Instant = cx.scheduled;
        let _: &mut u32 = cx.resources.shared;
        let _: foo::Resources = cx.resources;
        let _: foo::Schedule = cx.schedule;
        let _: foo::Spawn = cx.spawn;
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
    }
};


Send

Send is a marker trait for "types that can be transferred across thread boundaries", according to its definition in core. In the context of RTIC the Send trait is only required where it's possible to transfer a value between tasks that run at different priorities. This occurs in a few places: in message passing, in shared resources and in the initialization of late resources.

The app attribute will enforce that Send is implemented where required so you don't need to worry much about it. It's more important to know where you do not need the Send trait: on types that are transferred between tasks that run at the same priority. This occurs in two places: in message passing and in shared resources.

The example below shows where a type that doesn't implement Send can be used.


//! `examples/not-send.rs`

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use core::marker::PhantomData;

use cortex_m_semihosting::debug;
use panic_halt as _;
use rtic::app;

pub struct NotSend {
    _0: PhantomData<*const ()>,
}

#[app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[init(None)]
        shared: Option<NotSend>,
    }

    #[init(spawn = [baz, quux])]
    fn init(c: init::Context) {
        c.spawn.baz().unwrap();
        c.spawn.quux().unwrap();
    }

    #[task(spawn = [bar])]
    fn foo(c: foo::Context) {
        // scenario 1: message passed to task that runs at the same priority
        c.spawn.bar(NotSend { _0: PhantomData }).ok();
    }

    #[task]
    fn bar(_: bar::Context, _x: NotSend) {
        // scenario 1
    }

    #[task(priority = 2, resources = [shared])]
    fn baz(c: baz::Context) {
        // scenario 2: resource shared between tasks that run at the same priority
        *c.resources.shared = Some(NotSend { _0: PhantomData });
    }

    #[task(priority = 2, resources = [shared])]
    fn quux(c: quux::Context) {
        // scenario 2
        let _not_send = c.resources.shared.take().unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
        fn QEI0();
    }
};


It's important to note that late initialization of resources is effectively a send operation where the initial value is sent from the background context, which has the lowest priority of 0, to a task, which will run at a priority greater than or equal to 1. Thus all late resources need to implement the Send trait, except for those exclusively accessed by idle, which runs at a priority of 0.

Sharing a resource with init can be used to implement late initialization, see example below. For that reason, resources shared with init must also implement the Send trait.


//! `examples/shared-with-init.rs`

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::debug;
use lm3s6965::Interrupt;
use panic_halt as _;
use rtic::app;

pub struct MustBeSend;

#[app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[init(None)]
        shared: Option<MustBeSend>,
    }

    #[init(resources = [shared])]
    fn init(c: init::Context) {
        // this `message` will be sent to task `UART0`
        let message = MustBeSend;
        *c.resources.shared = Some(message);

        rtic::pend(Interrupt::UART0);
    }

    #[task(binds = UART0, resources = [shared])]
    fn uart0(c: uart0::Context) {
        if let Some(message) = c.resources.shared.take() {
            // `message` has been received
            drop(message);

            debug::exit(debug::EXIT_SUCCESS);
        }
    }
};


Sync

Similarly, Sync is a marker trait for "types for which it is safe to share references between threads", according to its definition in core. In the context of RTIC the Sync trait is only required where it's possible for two, or more, tasks that run at different priorities to hold a shared reference (&-) to a resource. This only occurs with shared access (&-) resources.

The app attribute will enforce that Sync is implemented where required but it's important to know where the Sync bound is not required: shared access (&-) resources contended by tasks that run at the same priority.

The example below shows where a type that doesn't implement Sync can be used.


//! `examples/not-sync.rs`

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use core::marker::PhantomData;

use cortex_m_semihosting::debug;
use panic_halt as _;

pub struct NotSync {
    _0: PhantomData<*const ()>,
}

#[rtic::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[init(NotSync { _0: PhantomData })]
        shared: NotSync,
    }

    #[init]
    fn init(_: init::Context) {
        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(resources = [&shared])]
    fn foo(c: foo::Context) {
        let _: &NotSync = c.resources.shared;
    }

    #[task(resources = [&shared])]
    fn bar(c: bar::Context) {
        let _: &NotSync = c.resources.shared;
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
    }
};


Starting a new project

Now that you have learned about the main features of the RTIC framework you can try it out on your hardware by following these instructions.

  1. Instantiate the cortex-m-quickstart template.
$ # for example using `cargo-generate`
$ cargo generate \
    --git https://github.com/rust-embedded/cortex-m-quickstart \
    --name app

$ # follow the rest of the instructions
  2. Add a peripheral access crate (PAC) that was generated using svd2rust v0.14.x, or a board support crate that depends on one such PAC as a dependency. Make sure that the rt feature of the crate is enabled.

In this example, I'll use the lm3s6965 device crate. This device crate doesn't have an rt Cargo feature; that feature is always enabled.

This device crate provides a linker script with the memory layout of the target device so memory.x and build.rs need to be removed.

$ cargo add lm3s6965 --vers 0.1.3

$ rm memory.x build.rs
  3. Add the cortex-m-rtic crate as a dependency.
$ cargo add cortex-m-rtic --allow-prerelease
  4. Write your RTIC application.

Here I'll use the init example from the cortex-m-rtic crate.

$ curl \
    -L https://github.com/rtic-rs/cortex-m-rtic/raw/v0.5.3/examples/init.rs \
    > src/main.rs

That example depends on the panic-semihosting crate:

$ cargo add panic-semihosting
  5. Build it, flash it and run it.
$ # NOTE: I have uncommented the `runner` option in `.cargo/config`
$ cargo run
init

Tips & tricks

Generics

Resources may appear in contexts as resource proxies or as unique references (&mut-) depending on the priority of the task. Because the same resource may appear as different types in different contexts one cannot refactor a common operation that uses resources into a plain function; however, such a refactor is possible using generics.

All resource proxies implement the rtic::Mutex trait. On the other hand, unique references (&mut-) do not implement this trait (due to limitations in the trait system) but one can wrap these references in the rtic::Exclusive newtype which does implement the Mutex trait. With the help of this newtype one can write a generic function that operates on generic resources and call it from different tasks to perform some operation on the same set of resources. Here's one such example:


//! examples/generics.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;
use rtic::{Exclusive, Mutex};

#[rtic::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[init(0)]
        shared: u32,
    }

    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::UART0);
        rtic::pend(Interrupt::UART1);
    }

    #[task(binds = UART0, resources = [shared])]
    fn uart0(c: uart0::Context) {
        static mut STATE: u32 = 0;

        hprintln!("UART0(STATE = {})", *STATE).unwrap();

        // second argument has type `resources::shared`
        advance(STATE, c.resources.shared);

        rtic::pend(Interrupt::UART1);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(binds = UART1, priority = 2, resources = [shared])]
    fn uart1(c: uart1::Context) {
        static mut STATE: u32 = 0;

        hprintln!("UART1(STATE = {})", *STATE).unwrap();

        // just to show that `shared` can be accessed directly
        *c.resources.shared += 0;

        // second argument has type `Exclusive<u32>`
        advance(STATE, Exclusive(c.resources.shared));
    }
};

// the second parameter is generic: it can be any type that implements the `Mutex` trait
fn advance(state: &mut u32, mut shared: impl Mutex<T = u32>) {
    *state += 1;

    let (old, new) = shared.lock(|shared: &mut u32| {
        let old = *shared;
        *shared += *state;
        (old, *shared)
    });

    hprintln!("shared: {} -> {}", old, new).unwrap();
}

$ cargo run --example generics
UART1(STATE = 0)
shared: 0 -> 1
UART0(STATE = 0)
shared: 1 -> 2
UART1(STATE = 1)
shared: 2 -> 4

Using generics also lets you change the static priorities of tasks during development without having to rewrite a bunch of code every time.

Conditional compilation

You can use conditional compilation (#[cfg]) on resources (the fields of struct Resources) and tasks (the fn items). The effect of using #[cfg] attributes is that the resource / task will not be available through the corresponding Context struct if the condition doesn't hold.

The example below logs a message whenever the foo task is spawned, but only if the program has been compiled using the dev profile.


//! examples/cfg.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::debug;
#[cfg(debug_assertions)]
use cortex_m_semihosting::hprintln;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        #[cfg(debug_assertions)] // <- `true` when using the `dev` profile
        #[init(0)]
        count: u32,
    }

    #[init(spawn = [foo])]
    fn init(cx: init::Context) {
        cx.spawn.foo().unwrap();
        cx.spawn.foo().unwrap();
    }

    #[idle]
    fn idle(_: idle::Context) -> ! {
        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }

    #[task(capacity = 2, resources = [count], spawn = [log])]
    fn foo(_cx: foo::Context) {
        #[cfg(debug_assertions)]
        {
            *_cx.resources.count += 1;

            _cx.spawn.log(*_cx.resources.count).unwrap();
        }

        // this wouldn't compile in `release` mode
        // *_cx.resources.count += 1;

        // ..
    }

    #[cfg(debug_assertions)]
    #[task(capacity = 2)]
    fn log(_: log::Context, n: u32) {
        hprintln!(
            "foo has been called {} time{}",
            n,
            if n == 1 { "" } else { "s" }
        )
        .ok();
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
        fn QEI0();
    }
};

$ cargo run --example cfg --release

$ cargo run --example cfg
foo has been called 1 time
foo has been called 2 times

Running tasks from RAM

The main goal of moving the specification of RTIC applications to attributes in RTIC v0.4.0 was to allow inter-operation with other attributes. For example, the link_section attribute can be applied to tasks to place them in RAM; this can improve performance in some cases.

IMPORTANT: In general, the link_section, export_name and no_mangle attributes are very powerful but also easy to misuse. Incorrectly using any of these attributes can cause undefined behavior; you should always prefer to use safe, higher level attributes around them like cortex-m-rt's interrupt and exception attributes.

In the particular case of RAM functions there's no safe abstraction for it in cortex-m-rt v0.6.5 but there's an RFC for adding a ramfunc attribute in a future release.

The example below shows how to place the higher priority task, bar, in RAM.


//! examples/ramfunc.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [bar])]
    fn init(c: init::Context) {
        c.spawn.bar().unwrap();
    }

    #[inline(never)]
    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    // run this task from RAM
    #[inline(never)]
    #[link_section = ".data.bar"]
    #[task(priority = 2, spawn = [foo])]
    fn bar(c: bar::Context) {
        c.spawn.foo().unwrap();
    }

    extern "C" {
        fn UART0();

        // run the task dispatcher from RAM
        #[link_section = ".data.UART1"]
        fn UART1();
    }
};


Running this program produces the expected output.

$ cargo run --example ramfunc
foo

One can look at the output of cargo-nm to confirm that bar ended up in RAM (0x2000_0000), whereas foo ended up in Flash (0x0000_0000).

$ cargo nm --example ramfunc --release | grep ' foo::'
00000162 t ramfunc::foo::h30e7789b08c08e19
$ cargo nm --example ramfunc --release | grep ' bar::'
20000000 t ramfunc::bar::h9d6714fe5a3b0c89

Indirection for faster message passing

Message passing always involves copying the payload from the sender into a static variable and then from the static variable into the receiver. Thus sending a large buffer, like a [u8; 128], as a message involves two expensive memcpys. To minimize the message passing overhead one can use indirection: instead of sending the buffer by value, one can send an owning pointer into the buffer.

One can use a global allocator to achieve indirection (alloc::Box, alloc::Rc, etc.), which requires using the nightly channel as of Rust v1.37.0, or one can use a statically allocated memory pool like heapless::Pool.

Here's an example where heapless::Pool is used to "box" buffers of 128 bytes.


//! examples/pool.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::{debug, hprintln};
use heapless::{
    pool,
    pool::singleton::{Box, Pool},
};
use lm3s6965::Interrupt;
use panic_semihosting as _;
use rtic::app;

// Declare a pool of 128-byte memory blocks
pool!(P: [u8; 128]);

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        static mut MEMORY: [u8; 512] = [0; 512];

        // Increase the capacity of the memory pool by ~4 blocks (512 B / 128 B per block)
        P::grow(MEMORY);

        rtic::pend(Interrupt::I2C0);
    }

    #[task(binds = I2C0, priority = 2, spawn = [foo, bar])]
    fn i2c0(c: i2c0::Context) {
        // claim a memory block, leave it uninitialized and ..
        let x = P::alloc().unwrap().freeze();

        // .. send it to the `foo` task
        c.spawn.foo(x).ok().unwrap();

        // send another block to the task `bar`
        c.spawn.bar(P::alloc().unwrap().freeze()).ok().unwrap();
    }

    #[task]
    fn foo(_: foo::Context, x: Box<P>) {
        hprintln!("foo({:?})", x.as_ptr()).unwrap();

        // explicitly return the block to the pool
        drop(x);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(priority = 2)]
    fn bar(_: bar::Context, x: Box<P>) {
        hprintln!("bar({:?})", x.as_ptr()).unwrap();

        // this is done automatically so we can omit the call to `drop`
        // drop(x);
    }

    // RTIC requires that unused interrupts are declared in an extern block when
    // using software tasks; these free interrupts will be used to dispatch the
    // software tasks.
    extern "C" {
        fn SSI0();
        fn QEI0();
    }
};

#}
$ cargo run --example pool
bar(0x2000008c)
foo(0x20000110)

Inspecting the expanded code

#[rtic::app] is a procedural macro that produces support code. If for some reason you need to inspect the code generated by this macro you have two options:

You can inspect the file rtic-expansion.rs inside the target directory. This file contains the expansion of the #[rtic::app] item (not your whole program!) of the last built (via cargo build or cargo check) RTIC application. The expanded code is not pretty printed by default so you'll want to run rustfmt over it before you read it.

$ cargo build --example foo

$ rustfmt target/rtic-expansion.rs

$ tail target/rtic-expansion.rs
#[doc = r" Implementation details"]
const APP: () = {
    #[doc = r" Always include the device crate which contains the vector table"]
    use lm3s6965 as _;
    #[no_mangle]
    unsafe extern "C" fn main() -> ! {
        rtic::export::interrupt::disable();
        let mut core: rtic::export::Peripherals = core::mem::transmute(());
        core.SCB.scr.modify(|r| r | 1 << 1);
        rtic::export::interrupt::enable();
        loop {
            rtic::export::wfi()
        }
    }
};

Or, you can use the cargo-expand sub-command. This sub-command will expand all the macros, including the #[rtic::app] attribute, and modules in your crate and print the output to the console.

$ # produces the same output as before
$ cargo expand --example smallest | tail

Resource de-structure-ing

When a task takes multiple resources it can help readability to split up the resource struct. Here are two examples of how this can be done:


# #![allow(unused_variables)]
#fn main() {
//! examples/destructure.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

use cortex_m_semihosting::hprintln;
use lm3s6965::Interrupt;
use panic_semihosting as _;

#[rtic::app(device = lm3s6965)]
const APP: () = {
    struct Resources {
        // Some resources to work with
        #[init(0)]
        a: u32,
        #[init(0)]
        b: u32,
        #[init(0)]
        c: u32,
    }

    #[init]
    fn init(_: init::Context) {
        rtic::pend(Interrupt::UART0);
        rtic::pend(Interrupt::UART1);
    }

    // Direct destructure
    #[task(binds = UART0, resources = [a, b, c])]
    fn uart0(cx: uart0::Context) {
        let a = cx.resources.a;
        let b = cx.resources.b;
        let c = cx.resources.c;

        hprintln!("UART0: a = {}, b = {}, c = {}", a, b, c).unwrap();
    }

    // De-structure-ing syntax
    #[task(binds = UART1, resources = [a, b, c])]
    fn uart1(cx: uart1::Context) {
        let uart1::Resources { a, b, c } = cx.resources;

        hprintln!("UART0: a = {}, b = {}, c = {}", a, b, c).unwrap();
    }
};

#}

Migrating from v0.4.x to v0.5.0

This section covers how to upgrade an application written against RTIC v0.4.x to the version v0.5.0 of the framework.

Cargo.toml

First, the version of the cortex-m-rtic dependency needs to be updated to "0.5.0". The timer-queue feature needs to be removed.

[dependencies.cortex-m-rtic]
# change this
version = "0.4.3"

# into this
version = "0.5.0"

# and remove this Cargo feature
features = ["timer-queue"]
#           ^^^^^^^^^^^^^

Context argument

All functions inside the #[rtic::app] item need to take as first argument a Context structure. This Context type will contain the variables that were magically injected into the scope of the function by version v0.4.x of the framework: resources, spawn, schedule -- these variables will become fields of the Context structure. Each function within the #[rtic::app] item gets a different Context type.


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(/* .. */)]
const APP: () = {
    // change this
    #[task(resources = [x], spawn = [a], schedule = [b])]
    fn foo() {
        resources.x.lock(|x| /* .. */);
        spawn.a(message);
        schedule.b(baseline);
    }

    // into this
    #[task(resources = [x], spawn = [a], schedule = [b])]
    fn foo(mut cx: foo::Context) {
        // ^^^^^^^^^^^^^^^^^^^^

        cx.resources.x.lock(|x| /* .. */);
    //  ^^^

        cx.spawn.a(message);
    //  ^^^

        cx.schedule.b(baseline);
    //  ^^^
    }

    // change this
    #[init]
    fn init() {
        // ..
    }

    // into this
    #[init]
    fn init(cx: init::Context) {
        //  ^^^^^^^^^^^^^^^^^
        // ..
    }

    // ..
};
#}

Resources

The syntax used to declare resources has been changed from static mut variables to a struct Resources.


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(/* .. */)]
const APP: () = {
    // change this
    static mut X: u32 = 0;
    static mut Y: u32 = (); // late resource

    // into this
    struct Resources {
        #[init(0)] // <- initial value
        X: u32, // NOTE: we suggest changing the naming style to `snake_case`

        Y: u32, // late resource
    }

    // ..
};
#}

Device peripherals

If your application was accessing the device peripherals in #[init] through the device variable then you'll need to add peripherals = true to the #[rtic::app] attribute to continue to access the device peripherals through the device field of the init::Context structure.

Change this:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(/* .. */)]
const APP: () = {
    #[init]
    fn init() {
        device.SOME_PERIPHERAL.write(something);
    }

    // ..
};
#}

Into this:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(/* .. */, peripherals = true)]
//                    ^^^^^^^^^^^^^^^^^^
const APP: () = {
    #[init]
    fn init(cx: init::Context) {
        //  ^^^^^^^^^^^^^^^^^
        cx.device.SOME_PERIPHERAL.write(something);
    //  ^^^
    }

    // ..
};
#}

#[interrupt] and #[exception]

The #[interrupt] and #[exception] attributes have been removed. To declare hardware tasks in v0.5.x use the #[task] attribute with the binds argument.

Change this:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(/* .. */)]
const APP: () = {
    // hardware tasks
    #[exception]
    fn SVCall() { /* .. */ }

    #[interrupt]
    fn UART0() { /* .. */ }

    // software task
    #[task]
    fn foo() { /* .. */ }

    // ..
};
#}

Into this:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(/* .. */)]
const APP: () = {
    #[task(binds = SVCall)]
    //     ^^^^^^^^^^^^^^
    fn svcall(cx: svcall::Context) { /* .. */ }
    // ^^^^^^ we suggest you use a `snake_case` name here

    #[task(binds = UART0)]
    //     ^^^^^^^^^^^^^
    fn uart0(cx: uart0::Context) { /* .. */ }

    #[task]
    fn foo(cx: foo::Context) { /* .. */ }

    // ..
};
#}

schedule

The timer-queue feature has been removed. To use the schedule API one must first define the monotonic timer the runtime will use, via the monotonic argument of the #[rtic::app] attribute. To continue using the cycle counter (CYCCNT) as the monotonic timer, and match the behavior of version v0.4.x, add the monotonic = rtic::cyccnt::CYCCNT argument to the #[rtic::app] attribute.

Also, the Duration and Instant types and the U32Ext trait have been moved into the rtic::cyccnt module. This module is only available on ARMv7-M+ devices. The removal of the timer-queue feature also brings back the DWT peripheral inside the core peripherals struct; it will need to be enabled by the application inside init.

Change this:


# #![allow(unused_variables)]
#fn main() {
use rtic::{Duration, Instant, U32Ext};

#[rtic::app(/* .. */)]
const APP: () = {
    #[task(schedule = [b])]
    fn a() {
        // ..
    }
};
#}

Into this:


# #![allow(unused_variables)]
#fn main() {
use rtic::cyccnt::{Duration, Instant, U32Ext};
//        ^^^^^^^^

#[rtic::app(/* .. */, monotonic = rtic::cyccnt::CYCCNT)]
//                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
const APP: () = {
    #[init]
    fn init(cx: init::Context) {
        cx.core.DWT.enable_cycle_counter();
        // optional: configure the DWT to run without a debugger connected
        cx.core.DCB.enable_trace();
    }
    #[task(schedule = [b])]
    fn a(cx: a::Context) {
        // ..
    }
};
#}

Migrating from RTFM to RTIC

This section covers how to upgrade an application written against RTFM v0.5.x to the same version of RTIC. This is necessary since the framework was renamed per RFC #33.

Note: There are no code differences between RTFM v0.5.3 and RTIC v0.5.3, it is purely a name change.

Cargo.toml

First, the cortex-m-rtfm dependency needs to be updated to cortex-m-rtic.

[dependencies]
# change this
cortex-m-rtfm = "0.5.3"

# into this
cortex-m-rtic = "0.5.3"

Code changes

The only code change that needs to be made is that any reference to rtfm now needs to point to rtic, as follows:


# #![allow(unused_variables)]
#fn main() {
//
// Change this
//

#[rtfm::app(/* .. */, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
    // ...

};

//
// Into this
//

#[rtic::app(/* .. */, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    // ...

};
#}

Under the hood

This section describes the internals of the RTIC framework at a high level. Low level details like the parsing and code generation done by the procedural macro (#[app]) will not be explained here. The focus will be the analysis of the user specification and the data structures used by the runtime.

We highly suggest that you read the embedonomicon section on concurrency before you dive into this material.

Interrupt configuration

Interrupts are core to the operation of RTIC applications. Correctly setting interrupt priorities and ensuring they remain fixed at runtime is a prerequisite for the memory safety of the application.

The RTIC framework exposes interrupt priorities as something that is declared at compile time. However, this static configuration must be programmed into the relevant registers during the initialization of the application. The interrupt configuration is done before the init function runs.

This example gives you an idea of the code that the RTIC framework runs:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(c: init::Context) {
        // .. user code ..
    }

    #[idle]
    fn idle(c: idle::Context) -> ! {
        // .. user code ..
    }

    #[interrupt(binds = UART0, priority = 2)]
    fn foo(c: foo::Context) {
        // .. user code ..
    }
};
#}

The framework generates an entry point that looks like this:

// the real entry point of the program
#[no_mangle]
unsafe fn main() -> ! {
    // transforms a logical priority into a hardware / NVIC priority
    fn logical2hw(priority: u8) -> u8 {
        use lm3s6965::NVIC_PRIO_BITS;

        // the NVIC encodes priority in the higher bits of a byte;
        // also, bigger numbers mean lower priority
        ((1 << NVIC_PRIO_BITS) - priority) << (8 - NVIC_PRIO_BITS)
    }

    cortex_m::interrupt::disable();

    let mut core = cortex_m::Peripherals::steal();

    core.NVIC.enable(Interrupt::UART0);

    // value specified by the user
    let uart0_prio = 2;

    // check at compile time that the specified priority is within the supported range
    let _ = [(); (1 << NVIC_PRIO_BITS) - (uart0_prio as usize)];

    core.NVIC.set_priority(Interrupt::UART0, logical2hw(uart0_prio));

    // call into user code
    init(/* .. */);

    // ..

    cortex_m::interrupt::enable();

    // call into user code
    idle(/* .. */)
}
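
To make the conversion concrete, here's a worked example of logical2hw for the lm3s6965 used throughout this book, which implements 3 priority bits; the resulting hardware values (224, 192, 160) are the same BASEPRI constants that show up in the critical section examples later in this chapter:


fn logical2hw(priority: u8) -> u8 {
    const NVIC_PRIO_BITS: u8 = 3; // the lm3s6965 implements 3 priority bits

    ((1 << NVIC_PRIO_BITS) - priority) << (8 - NVIC_PRIO_BITS)
}

fn main() {
    assert_eq!(logical2hw(1), 224); // lowest logical priority
    assert_eq!(logical2hw(2), 192);
    assert_eq!(logical2hw(3), 160); // higher logical priority -> smaller hardware value
}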

Non-reentrancy

In RTIC, task handlers are not reentrant. Reentering a task handler can break Rust aliasing rules and lead to undefined behavior. A task handler can be reentered in one of two ways: in software or by hardware.

In software

To reenter a task handler in software its underlying interrupt handler must be invoked using FFI (see example below). FFI requires unsafe code so end users are discouraged from directly invoking an interrupt handler.


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    #[init]
    fn init(c: init::Context) { .. }

    #[interrupt(binds = UART0)]
    fn foo(c: foo::Context) {
        static mut X: u64 = 0;

        let x: &mut u64 = X;

        // ..

        //~ `bar` can preempt `foo` at this point

        // ..
    }

    #[interrupt(binds = UART1, priority = 2)]
    fn bar(c: bar::Context) {
        extern "C" {
            fn UART0();
        }

        // this interrupt handler will invoke task handler `foo` resulting
        // in aliasing of the static variable `X`
        unsafe { UART0() }
    }
};
#}

The RTIC framework must generate the interrupt handler code that calls the user defined task handlers. We are careful in making these handlers impossible to call from user code.

The above example expands into:


# #![allow(unused_variables)]
#fn main() {
fn foo(c: foo::Context) {
    // .. user code ..
}

fn bar(c: bar::Context) {
    // .. user code ..
}

const APP: () = {
    // everything in this block is not visible to user code

    #[no_mangle]
    unsafe fn UART0() {
        foo(..);
    }

    #[no_mangle]
    unsafe fn UART1() {
        bar(..);
    }
};
#}

By hardware

A task handler can also be reentered without software intervention. This can occur if the same handler is assigned to two or more interrupts in the vector table but there's no syntax for this kind of configuration in the RTIC framework.

Access control

One of the core foundations of RTIC is access control. Controlling which parts of the program can access which static variables is instrumental to enforcing memory safety.

Static variables are used to share state between interrupt handlers, or between interrupt handlers and the bottom execution context, main. In normal Rust code it's hard to have fine grained control over which functions can access a static variable because static variables can be accessed from any function that resides in the same scope in which they are declared. Modules give some control over how a static variable can be accessed but they are not flexible enough.

To achieve the fine-grained access control where tasks can only access the static variables (resources) that they have specified in their RTIC attribute the RTIC framework performs a source code level transformation. This transformation consists of placing the resources (static variables) specified by the user inside a const item and the user code outside the const item. This makes it impossible for the user code to refer to these static variables.
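
A standalone sketch of this hiding trick, with illustrative names: items declared inside the const item cannot be named by code outside of it.


fn user_code() {
    // X += 1; //~ error: cannot find value `X` in this scope
}

const APP: () = {
    // only code inside this block can name `X`
    #[allow(dead_code)]
    static mut X: u64 = 0;
};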

Access to the resources is then given to each task using a Resources struct whose fields correspond to the resources the task has access to. There's one such struct per task and the Resources struct is initialized with either a unique reference (&mut-) to the static variables or with a resource proxy (see section on critical sections).

The code below is an example of the kind of source level transformation that happens behind the scenes:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    static mut X: u64 = 0;
    static mut Y: bool = false;

    #[init(resources = [Y])]
    fn init(c: init::Context) {
        // .. user code ..
    }

    #[interrupt(binds = UART0, resources = [X])]
    fn foo(c: foo::Context) {
        // .. user code ..
    }

    #[interrupt(binds = UART1, resources = [X, Y])]
    fn bar(c: bar::Context) {
        // .. user code ..
    }

    // ..
};
#}

The framework produces code like this:

fn init(c: init::Context) {
    // .. user code ..
}

fn foo(c: foo::Context) {
    // .. user code ..
}

fn bar(c: bar::Context) {
    // .. user code ..
}

// Public API
pub mod init {
    pub struct Context<'a> {
        pub resources: Resources<'a>,
        // ..
    }

    pub struct Resources<'a> {
        pub Y: &'a mut bool,
    }
}

pub mod foo {
    pub struct Context<'a> {
        pub resources: Resources<'a>,
        // ..
    }

    pub struct Resources<'a> {
        pub X: &'a mut u64,
    }
}

pub mod bar {
    pub struct Context<'a> {
        pub resources: Resources<'a>,
        // ..
    }

    pub struct Resources<'a> {
        pub X: &'a mut u64,
        pub Y: &'a mut bool,
    }
}

/// Implementation details
const APP: () = {
    // everything inside this `const` item is hidden from user code

    static mut X: u64 = 0;
    static mut Y: bool = false;

    // the real entry point of the program
    unsafe fn main() -> ! {
        interrupt::disable();

        // ..

        // call into user code; pass references to the static variables
        init(init::Context {
            resources: init::Resources {
                Y: &mut Y,
            },
            // ..
        });

        // ..

        interrupt::enable();

        // ..
    }

    // interrupt handler that `foo` binds to
    #[no_mangle]
    unsafe fn UART0() {
        // call into user code; pass references to the static variables
        foo(foo::Context {
            resources: foo::Resources {
                X: &mut X,
            },
            // ..
        });
    }

    // interrupt handler that `bar` binds to
    #[no_mangle]
    unsafe fn UART1() {
        // call into user code; pass references to the static variables
        bar(bar::Context {
            resources: bar::Resources {
                X: &mut X,
                Y: &mut Y,
            },
            // ..
        });
    }
};

Late resources

Some resources are initialized at runtime after the init function returns. It's important that these resources (static variables) are fully initialized before tasks are allowed to run, that is they must be initialized while interrupts are disabled.

The example below shows the kind of code that the framework generates to initialize late resources.


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    struct Resources {
        x: Thing,
    }

    #[init]
    fn init(c: init::Context) -> init::LateResources {
        // ..

        init::LateResources {
            x: Thing::new(..),
        }
    }

    #[task(binds = UART0, resources = [x])]
    fn foo(c: foo::Context) {
        let x: &mut Thing = c.resources.x;

        x.frob();

        // ..
    }

    // ..
};
#}

The code generated by the framework looks like this:

fn init(c: init::Context) -> init::LateResources {
    // .. user code ..
}

fn foo(c: foo::Context) {
    // .. user code ..
}

// Public API
pub mod init {
    pub struct LateResources {
        pub x: Thing,
    }

    // ..
}

pub mod foo {
    pub struct Resources<'a> {
        pub x: &'a mut Thing,
    }

    pub struct Context<'a> {
        pub resources: Resources<'a>,
        // ..
    }
}

/// Implementation details
const APP: () = {
    // uninitialized static
    static mut x: MaybeUninit<Thing> = MaybeUninit::uninit();

    #[no_mangle]
    unsafe fn main() -> ! {
        cortex_m::interrupt::disable();

        // ..

        let late = init(..);

        // initialization of late resources
        x.as_mut_ptr().write(late.x);

        cortex_m::interrupt::enable(); //~ compiler fence

        // exceptions, interrupts and tasks can preempt `main` at this point

        idle(..)
    }

    #[no_mangle]
    unsafe fn UART0() {
        foo(foo::Context {
            resources: foo::Resources {
                // `x` has been initialized at this point
                x: &mut *x.as_mut_ptr(),
            },
            // ..
        })
    }
};

An important detail here is that interrupt::enable behaves like a compiler fence, which prevents the compiler from reordering the write to x to after interrupt::enable. If the compiler were to do that kind of reordering there would be a data race between that write and whatever operation foo performs on x.

Architectures with more complex instruction pipelines may need a memory barrier (atomic::fence) instead of a compiler fence to fully flush the write operation before interrupts are re-enabled. The ARM Cortex-M architecture doesn't need a memory barrier in single-core context.
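
In Rust these two flavors correspond to core::sync::atomic's compiler_fence and fence functions; a minimal sketch of the distinction:


use core::sync::atomic::{compiler_fence, fence, Ordering};

fn main() {
    // constrains only the compiler: memory accesses are not reordered across
    // this point, but no barrier instruction is emitted
    compiler_fence(Ordering::SeqCst);

    // additionally emits a hardware memory barrier on targets that need one
    fence(Ordering::SeqCst);
}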

Critical sections

When a resource (static variable) is shared between two, or more, tasks that run at different priorities some form of mutual exclusion is required to mutate the memory in a data race free manner. In RTIC we use priority-based critical sections to guarantee mutual exclusion (see the Immediate Ceiling Priority Protocol).

The critical section consists of temporarily raising the dynamic priority of the task. While a task is within this critical section, none of the other tasks that may request the resource are allowed to start.

How high must the dynamic priority be to ensure mutual exclusion on a particular resource? The ceiling analysis is in charge of answering that question and will be discussed in the next section. This section will focus on the implementation of the critical section.

Resource proxy

For simplicity, let's look at a resource shared by two tasks that run at different priorities. Clearly one of the tasks can preempt the other; to prevent a data race the lower priority task must use a critical section when it needs to modify the shared memory. On the other hand, the higher priority task can directly modify the shared memory because it can't be preempted by the lower priority task. To enforce the use of a critical section on the lower priority task we give it a resource proxy, whereas we give a unique reference (&mut-) to the higher priority task.

The example below shows the different types handed out to each task:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    struct Resources {
        #[init(0)]
        x: u64,
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x])]
    fn foo(c: foo::Context) {
        // resource proxy
        let mut x: resources::x = c.resources.x;

        x.lock(|x: &mut u64| {
            // critical section
            *x += 1
        });
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        let mut x: &mut u64 = c.resources.x;

        *x += 1;
    }

    // ..
};
#}

Now let's see how these types are created by the framework.


# #![allow(unused_variables)]
#fn main() {
fn foo(c: foo::Context) {
    // .. user code ..
}

fn bar(c: bar::Context) {
    // .. user code ..
}

pub mod resources {
    pub struct x {
        // ..
    }
}

pub mod foo {
    pub struct Resources {
        pub x: resources::x,
    }

    pub struct Context {
        pub resources: Resources,
        // ..
    }
}

pub mod bar {
    pub struct Resources<'a> {
        pub x: &'a mut u64,
    }

    pub struct Context {
        pub resources: Resources,
        // ..
    }
}

const APP: () = {
    static mut x: u64 = 0;

    impl rtic::Mutex for resources::x {
        type T = u64;

        fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
            // we'll check this in detail later
        }
    }

    #[no_mangle]
    unsafe fn UART0() {
        foo(foo::Context {
            resources: foo::Resources {
                x: resources::x::new(/* .. */),
            },
            // ..
        })
    }

    #[no_mangle]
    unsafe fn UART1() {
        bar(bar::Context {
            resources: bar::Resources {
                x: &mut x,
            },
            // ..
        })
    }
};
#}

lock

Let's now zoom into the critical section itself. In this example, we have to raise the dynamic priority to at least 2 to prevent a data race. On the Cortex-M architecture the dynamic priority can be changed by writing to the BASEPRI register.

The semantics of the BASEPRI register are as follows:

  • Writing a value of 0 to BASEPRI disables its functionality.
  • Writing a non-zero value to BASEPRI changes the priority level required for interrupt preemption. However, this only has an effect when the written value is lower than the priority level of the current execution context; note that a lower hardware priority level means a higher logical priority.

Thus the dynamic priority at any point in time can be computed as


# #![allow(unused_variables)]
#fn main() {
dynamic_priority = max(hw2logical(BASEPRI), hw2logical(static_priority))
#}

Where static_priority is the priority programmed in the NVIC for the current interrupt, or a logical 0 when the current context is idle.
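
For completeness, here's a sketch of hw2logical as the inverse of the logical2hw function shown earlier, again assuming NVIC_PRIO_BITS = 3 (BASEPRI = 0 is special-cased because it disables masking entirely):


fn hw2logical(hw: u8) -> u8 {
    const NVIC_PRIO_BITS: u8 = 3;

    (1 << NVIC_PRIO_BITS) - (hw >> (8 - NVIC_PRIO_BITS))
}

fn main() {
    assert_eq!(hw2logical(224), 1);
    assert_eq!(hw2logical(192), 2);
    assert_eq!(hw2logical(160), 3);
}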

In this particular example we could implement the critical section as follows:

NOTE: this is a simplified implementation


# #![allow(unused_variables)]
#fn main() {
impl rtic::Mutex for resources::x {
    type T = u64;

    fn lock<R, F>(&mut self, f: F) -> R
    where
        F: FnOnce(&mut u64) -> R,
    {
        unsafe {
            // start of critical section: raise dynamic priority to `2`
            asm!("msr BASEPRI, 192" : : : "memory" : "volatile");

            // run user code within the critical section
            let r = f(&mut x);

            // end of critical section: restore dynamic priority to its static value (`1`)
            asm!("msr BASEPRI, 0" : : : "memory" : "volatile");

            r
        }
    }
}
#}

Here it's important to use the "memory" clobber in the asm! block. It prevents the compiler from reordering memory operations across it. This is important because accessing the variable x outside the critical section would result in a data race.

It's important to note that the signature of the lock method prevents nesting calls to it. This is required for memory safety, as nested calls would produce multiple unique references (&mut-) to x breaking Rust aliasing rules. See below:


# #![allow(unused_variables)]
#fn main() {
#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
    // resource proxy
    let mut res: resources::x = c.resources.x;

    res.lock(|x: &mut u64| {
        res.lock(|alias: &mut u64| {
            //~^ error: `res` has already been uniquely borrowed (`&mut-`)
            // ..
        });
    });
}
#}

Nesting

Nesting calls to lock on the same resource must be rejected by the compiler for memory safety but nesting lock calls on different resources is a valid operation. In that case we want to make sure that nesting critical sections never results in lowering the dynamic priority, as that would be unsound, and we also want to optimize the number of writes to the BASEPRI register and compiler fences. To that end we'll track the dynamic priority of the task using a stack variable and use that to decide whether to write to BASEPRI or not. In practice, the stack variable will be optimized away by the compiler but it still provides extra information to the compiler.

Consider this program:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    struct Resources {
        #[init(0)]
        x: u64,
        #[init(0)]
        y: u64,
    }

    #[init]
    fn init() {
        rtic::pend(Interrupt::UART0);
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x, y])]
    fn foo(c: foo::Context) {
        let mut x = c.resources.x;
        let mut y = c.resources.y;

        y.lock(|y| {
            *y += 1;

            x.lock(|x| {
                *x += 1;
            });

            *y += 1;
        });

        // mid-point

        x.lock(|x| {
            *x += 1;

            y.lock(|y| {
                *y += 1;
            });

            *x += 1;
        })
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        // ..
    }

    #[interrupt(binds = UART2, priority = 3, resources = [y])]
    fn baz(c: baz::Context) {
        // ..
    }

    // ..
};
#}

The code generated by the framework looks like this:


# #![allow(unused_variables)]
#fn main() {
// omitted: user code

pub mod resources {
    pub struct x<'a> {
        priority: &'a Cell<u8>,
    }

    impl<'a> x<'a> {
        pub unsafe fn new(priority: &'a Cell<u8>) -> Self {
            x { priority }
        }

        pub unsafe fn priority(&self) -> &Cell<u8> {
            self.priority
        }
    }

    // repeat for `y`
}

pub mod foo {
    pub struct Context {
        pub resources: Resources,
        // ..
    }

    pub struct Resources<'a> {
        pub x: resources::x<'a>,
        pub y: resources::y<'a>,
    }
}

const APP: () = {
    use cortex_m::register::basepri;

    #[no_mangle]
    unsafe fn UART1() {
        // the static priority of this interrupt (as specified by the user)
        const PRIORITY: u8 = 2;

        // take a snapshot of the BASEPRI
        let initial = basepri::read();

        let priority = Cell::new(PRIORITY);
        bar(bar::Context {
            resources: bar::Resources::new(&priority),
            // ..
        });

        // roll back the BASEPRI to the snapshot value we took before
        basepri::write(initial); // same as the `asm!` block we saw before
    }

    // similarly for `UART0` / `foo` and `UART2` / `baz`

    impl<'a> rtic::Mutex for resources::x<'a> {
        type T = u64;

        fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
            unsafe {
                // the priority ceiling of this resource
                const CEILING: u8 = 2;

                let current = self.priority().get();
                if current < CEILING {
                    // raise dynamic priority
                    self.priority().set(CEILING);
                    basepri::write(logical2hw(CEILING));

                    let r = f(&mut x);

                    // restore dynamic priority
                    basepri::write(logical2hw(current));
                    self.priority().set(current);

                    r
                } else {
                    // dynamic priority is high enough
                    f(&mut x)
                }
            }
        }
    }

    // repeat for resource `y`
};
#}

At the end the compiler will optimize the function foo into something like this:


# #![allow(unused_variables)]
#fn main() {
fn foo(c: foo::Context) {
    // NOTE: BASEPRI contains the value `0` (its reset value) at this point

    // raise dynamic priority to `3`
    unsafe { basepri::write(160) }

    // the two operations on `y` are merged into one
    y += 2;

    // BASEPRI is not modified to access `x` because the dynamic priority is high enough
    x += 1;

    // lower (restore) the dynamic priority to `1`
    unsafe { basepri::write(224) }

    // mid-point

    // raise dynamic priority to `2`
    unsafe { basepri::write(192) }

    x += 1;

    // raise dynamic priority to `3`
    unsafe { basepri::write(160) }

    y += 1;

    // lower (restore) the dynamic priority to `2`
    unsafe { basepri::write(192) }

    // NOTE: it would be sound to merge this operation on `x` with the previous one but
    // compiler fences are coarse grained and prevent such optimization
    x += 1;

    // lower (restore) the dynamic priority to `1`
    unsafe { basepri::write(224) }

    // NOTE: BASEPRI contains the value `224` at this point
    // the UART0 handler will restore the value to `0` before returning
}
#}

The BASEPRI invariant

An invariant that the RTIC framework has to preserve is that the value of the BASEPRI at the start of an interrupt handler must be the same value it has when the interrupt handler returns. BASEPRI may change during the execution of the interrupt handler but running an interrupt handler from start to finish should not result in an observable change of BASEPRI.

This invariant needs to be preserved to avoid raising the dynamic priority of a handler through preemption. This is best observed in the following example:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    struct Resources {
        #[init(0)]
        x: u64,
    }

    #[init]
    fn init() {
        // `foo` will run right after `init` returns
        rtic::pend(Interrupt::UART0);
    }

    #[task(binds = UART0, priority = 1)]
    fn foo() {
        // BASEPRI is `0` at this point; the dynamic priority is currently `1`

        // `bar` will preempt `foo` at this point
        rtic::pend(Interrupt::UART1);

        // BASEPRI is `192` at this point (due to a bug); the dynamic priority is now `2`
        // this function returns to `idle`
    }

    #[task(binds = UART1, priority = 2, resources = [x])]
    fn bar() {
        // BASEPRI is `0` (dynamic priority = 2)

        x.lock(|x| {
            // BASEPRI is raised to `160` (dynamic priority = 3)

            // ..
        });

        // BASEPRI is restored to `192` (dynamic priority = 2)
    }

    #[idle]
    fn idle() -> ! {
        // BASEPRI is `192` (due to a bug); dynamic priority = 2

        // this has no effect due to the BASEPRI value
        // the task `foo` will never be executed again
        rtic::pend(Interrupt::UART0);

        loop {
            // ..
        }
    }

    #[task(binds = UART2, priority = 3, resources = [x])]
    fn baz() {
        // ..
    }

};
#}

IMPORTANT: let's say we forget to roll back BASEPRI in UART1 -- this would be a bug in the RTIC code generator.


# #![allow(unused_variables)]
#fn main() {
// code generated by RTIC

const APP: () = {
    // ..

    #[no_mangle]
    unsafe fn UART1() {
        // the static priority of this interrupt (as specified by the user)
        const PRIORITY: u8 = 2;

        // take a snapshot of the BASEPRI
        let initial = basepri::read();

        let priority = Cell::new(PRIORITY);
        bar(bar::Context {
            resources: bar::Resources::new(&priority),
            // ..
        });

        // BUG: FORGOT to roll back the BASEPRI to the snapshot value we took before
        // basepri::write(initial);
    }
};
#}

The consequence is that idle will run at a dynamic priority of 2 and in fact the system will never again run at a dynamic priority lower than 2. This doesn't compromise the memory safety of the program but affects task scheduling: in this particular case tasks with a priority of 1 will never get a chance to run.

Ceiling analysis

A resource priority ceiling, or just ceiling, is the dynamic priority that any task must have to safely access the resource memory. Ceiling analysis is relatively simple but critical to the memory safety of RTIC applications.

To compute the ceiling of a resource we must first collect a list of tasks that have access to the resource -- as the RTIC framework enforces access control to resources at compile time it also has access to this information at compile time. The ceiling of the resource is simply the highest logical priority among those tasks.

init and idle are not proper tasks but they can access resources so they need to be considered in the ceiling analysis. idle is considered as a task that has a logical priority of 0 whereas init is completely omitted from the analysis -- the reason for that is that init never uses (or needs) critical sections to access static variables.

In the previous section we showed that a shared resource may appear as a unique reference (&mut-) or behind a proxy depending on the task that has access to it. Which version is presented to the task depends on the task priority and the resource ceiling. If the task priority is the same as the resource ceiling then the task gets a unique reference (&mut-) to the resource memory, otherwise the task gets a proxy -- this also applies to idle. init is special: it always gets a unique reference (&mut-) to resources.

An example to illustrate the ceiling analysis:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    struct Resources {
        // accessed by `foo` (prio = 1) and `bar` (prio = 2)
        // -> CEILING = 2
        #[init(0)]
        x: u64,

        // accessed by `idle` (prio = 0)
        // -> CEILING = 0
        #[init(0)]
        y: u64,
    }

    #[init(resources = [x, y])]
    fn init(c: init::Context) {
        // unique reference because this is `init`
        let x: &mut u64 = c.resources.x;

        // unique reference because this is `init`
        let y: &mut u64 = c.resources.y;

        // ..
    }

    // PRIORITY = 0
    #[idle(resources = [y])]
    fn idle(c: idle::Context) -> ! {
        // unique reference because priority (0) == resource ceiling (0)
        let y: &'static mut u64 = c.resources.y;

        loop {
            // ..
        }
    }

    #[interrupt(binds = UART0, priority = 1, resources = [x])]
    fn foo(c: foo::Context) {
        // resource proxy because task priority (1) < resource ceiling (2)
        let x: resources::x = c.resources.x;

        // ..
    }

    #[interrupt(binds = UART1, priority = 2, resources = [x])]
    fn bar(c: bar::Context) {
        // unique reference because task priority (2) == resource ceiling (2)
        let x: &mut u64 = c.resources.x;

        // ..
    }

    // ..
};
#}

Software tasks

RTIC supports software tasks and hardware tasks. Each hardware task is bound to a different interrupt handler. On the other hand, several software tasks may be dispatched by the same interrupt handler -- this is done to minimize the number of interrupt handlers used by the framework.

The framework groups spawn-able tasks by priority level and generates one task dispatcher per priority level. Each task dispatcher runs on a different interrupt handler and the priority of said interrupt handler is set to match the priority level of the tasks managed by the dispatcher.

Each task dispatcher keeps a queue of tasks which are ready to execute; this queue is referred to as the ready queue. Spawning a software task consists of adding an entry to this queue and pending the interrupt that runs the corresponding task dispatcher. Each entry in this queue contains a tag (enum) that identifies the task to execute and a pointer to the message passed to the task.

The ready queue is a SPSC (Single Producer Single Consumer) lock-free queue. The task dispatcher owns the consumer endpoint of the queue; the producer endpoint is treated as a resource contended by the tasks that can spawn other tasks.
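
As a minimal illustration of this kind of queue, here's how heapless' SPSC queue splits into its two endpoints (assuming the typenum-based API of heapless v0.5, the version used in the snippets below):


use heapless::{consts::U4, spsc::Queue};

fn main() {
    let mut queue: Queue<u8, U4> = Queue::new();

    // `split` hands out independent endpoints; in RTIC the task dispatcher
    // keeps the consumer while the spawning tasks contend for the producer
    let (mut producer, mut consumer) = queue.split();

    producer.enqueue(1).unwrap();
    producer.enqueue(2).unwrap();

    assert_eq!(consumer.dequeue(), Some(1)); // FIFO order
    assert_eq!(consumer.dequeue(), Some(2));
    assert_eq!(consumer.dequeue(), None);
}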

The task dispatcher

Let's first take a look the code generated by the framework to dispatch tasks. Consider this example:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    // ..

    #[interrupt(binds = UART0, priority = 2, spawn = [bar, baz])]
    fn foo(c: foo::Context) {
        c.spawn.bar().ok();

        c.spawn.baz(42).ok();
    }

    #[task(capacity = 2, priority = 1)]
    fn bar(c: bar::Context) {
        // ..
    }

    #[task(capacity = 2, priority = 1, resources = [X])]
    fn baz(c: baz::Context, input: u64) {
        // ..
    }

    extern "C" {
        fn UART1();
    }
};
#}

The framework produces the following task dispatcher which consists of an interrupt handler and a ready queue:


# #![allow(unused_variables)]
#fn main() {
fn bar(c: bar::Context) {
    // .. user code ..
}

const APP: () = {
    use heapless::spsc::Queue;
    use cortex_m::register::basepri;

    struct Ready<T> {
        task: T,
        // ..
    }

    /// `spawn`-able tasks that run at priority level `1`
    enum T1 {
        bar,
        baz,
    }

    // ready queue of the task dispatcher
    // `U4` is a type-level integer that represents the capacity of this queue
    static mut RQ1: Queue<Ready<T1>, U4> = Queue::new();

    // interrupt handler chosen to dispatch tasks at priority `1`
    #[no_mangle]
    unsafe fn UART1() {
        // the priority of this interrupt handler
        const PRIORITY: u8 = 1;

        let snapshot = basepri::read();

        while let Some(ready) = RQ1.split().1.dequeue() {
            match ready.task {
                T1::bar => {
                    // **NOTE** simplified implementation

                    // used to track the dynamic priority
                    let priority = Cell::new(PRIORITY);

                    // call into user code
                    bar(bar::Context::new(&priority));
                }

                T1::baz => {
                    // we'll look at `baz` later
                }
            }
        }

        // BASEPRI invariant
        basepri::write(snapshot);
    }
};
#}

Spawning a task

The spawn API is exposed to the user as the methods of a Spawn struct. There's one Spawn struct per task.

The Spawn code generated by the framework for the previous example looks like this:


# #![allow(unused_variables)]
#fn main() {
mod foo {
    // ..

    pub struct Context<'a> {
        pub spawn: Spawn<'a>,
        // ..
    }

    pub struct Spawn<'a> {
        // tracks the dynamic priority of the task
        priority: &'a Cell<u8>,
    }

    impl<'a> Spawn<'a> {
        // `unsafe` and hidden because we don't want the user to tamper with it
        #[doc(hidden)]
        pub unsafe fn priority(&self) -> &Cell<u8> {
            self.priority
        }
    }
}

const APP: () = {
    // ..

    // Priority ceiling for the producer endpoint of the `RQ1`
    const RQ1_CEILING: u8 = 2;

    // used to track how many more `bar` messages can be enqueued
    // `U2` is the capacity of the `bar` task; a max of two instances can be queued
    // this queue is filled by the framework before `init` runs
    static mut bar_FQ: Queue<(), U2> = Queue::new();

    // Priority ceiling for the consumer endpoint of `bar_FQ`
    const bar_FQ_CEILING: u8 = 2;

    // a priority-based critical section
    //
    // this runs the given closure `f` at a dynamic priority of at least
    // `ceiling`
    fn lock(priority: &Cell<u8>, ceiling: u8, f: impl FnOnce()) {
        // ..
    }

    impl<'a> foo::Spawn<'a> {
        /// Spawns the `bar` task
        pub fn bar(&self) -> Result<(), ()> {
            unsafe {
                match lock(self.priority(), bar_FQ_CEILING, || {
                    bar_FQ.split().1.dequeue()
                }) {
                    Some(()) => {
                        lock(self.priority(), RQ1_CEILING, || {
                            // put the task in the ready queue
                            RQ1.split().0.enqueue_unchecked(Ready {
                                task: T1::bar,
                                // ..
                            })
                        });

                        // pend the interrupt that runs the task dispatcher
                        rtic::pend(Interrupt::UART1);

                        Ok(())
                    }

                    None => {
                        // maximum capacity reached; spawn failed
                        Err(())
                    }
                }
            }
        }
    }
};
#}

Using bar_FQ to limit the number of bar tasks that can be spawned may seem like an artificial limitation but it will make more sense when we talk about task capacities.

Messages

We have omitted how message passing actually works so let's revisit the spawn implementation but this time for task baz which receives a u64 message.


# #![allow(unused_variables)]
#fn main() {
fn baz(c: baz::Context, input: u64) {
    // .. user code ..
}

const APP: () = {
    // ..

    // Now we show the full contents of the `Ready` struct
    struct Ready {
        task: T1,
        // message index; used to index the `INPUTS` buffer
        index: u8,
    }

    // memory reserved to hold messages passed to `baz`
    static mut baz_INPUTS: [MaybeUninit<u64>; 2] =
        [MaybeUninit::uninit(), MaybeUninit::uninit()];

    // the free queue: used to track free slots in the `baz_INPUTS` array
    // this queue is initialized with values `0` and `1` before `init` is executed
    static mut baz_FQ: Queue<u8, U2> = Queue::new();

    // Priority ceiling for the consumer endpoint of `baz_FQ`
    const baz_FQ_CEILING: u8 = 2;

    impl<'a> foo::Spawn<'a> {
        /// Spawns the `baz` task
        pub fn baz(&self, message: u64) -> Result<(), u64> {
            unsafe {
                match lock(self.priority(), baz_FQ_CEILING, || {
                    baz_FQ.split().1.dequeue()
                }) {
                    Some(index) => {
                        // NOTE: `index` is an owning pointer into this buffer
                        baz_INPUTS[index as usize].write(message);

                        lock(self.priority(), RQ1_CEILING, || {
                            // put the task in the ready queue
                            RQ1.split().0.enqueue_unchecked(Ready {
                                task: T1::baz,
                                index,
                            });
                        });

                        // pend the interrupt that runs the task dispatcher
                        rtic::pend(Interrupt::UART1);

                        Ok(())
                    }

                    None => {
                        // maximum capacity reached; spawn failed
                        Err(message)
                    }
                }
            }
        }
    }
};
#}

And now let's look at the real implementation of the task dispatcher:


# #![allow(unused_variables)]
#fn main() {
const APP: () = {
    // ..

    #[no_mangle]
    unsafe fn UART1() {
        const PRIORITY: u8 = 1;

        let snapshot = basepri::read();

        while let Some(ready) = RQ1.split().1.dequeue() {
            match ready.task {
                T1::baz => {
                    // NOTE: `index` is an owning pointer into this buffer
                    let input = baz_INPUTS[ready.index as usize].read();

                    // the message has been read out so we can return the slot
                    // back to the free queue
                    // (the task dispatcher has exclusive access to the producer
                    // endpoint of this queue)
                    baz_FQ.split().0.enqueue_unchecked(ready.index);

                    let priority = Cell::new(PRIORITY);
                    baz(baz::Context::new(&priority), input)
                }

                T1::bar => {
                    // looks just like the `baz` branch
                }

            }
        }

        // BASEPRI invariant
        basepri::write(snapshot);
    }
};
#}

INPUTS plus FQ, the free queue, is effectively a memory pool. However, instead of using the usual free list (linked list) to track empty slots in the INPUTS buffer we use a SPSC queue; this lets us reduce the number of critical sections. In fact, thanks to this choice the task dispatching code is lock-free.

Queue capacity

The RTIC framework uses several queues like ready queues and free queues. When the free queue is empty trying to spawn a task results in an error; this condition is checked at runtime. Not all the operations performed by the framework on these queues check if the queue is empty / full. For example, returning a slot to the free queue (see the task dispatcher) is unchecked because there's a fixed number of such slots circulating in the system that's equal to the capacity of the free queue. Similarly, adding an entry to the ready queue (see Spawn) is unchecked because of the queue capacity chosen by the framework.

Users can specify the capacity of software tasks; this capacity is the maximum number of messages one can post to said task from a higher priority task before spawn returns an error. This user-specified capacity is the capacity of the free queue of the task (e.g. foo_FQ) and also the size of the array that holds the inputs to the task (e.g. foo_INPUTS).

The capacity of the ready queue (e.g. RQ1) is chosen to be the sum of the capacities of all the different tasks managed by the dispatcher; this sum is also the number of messages the queue will hold in the worst case scenario of all possible messages being posted before the task dispatcher gets a chance to run. For this reason, getting a slot from the free queue in any spawn operation implies that the ready queue is not yet full so inserting an entry into the ready queue can omit the "is it full?" check.

In our running example the task bar takes no input so we could have omitted both bar_INPUTS and bar_FQ and let the user post an unbounded number of messages to this task, but if we did that it would not have been possible to pick a capacity for RQ1 that lets us omit the "is it full?" check when spawning a baz task. In the section about the timer queue we'll see how the free queue is used by tasks that have no inputs.
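
To make the arithmetic concrete, the comments below spell out the capacity bookkeeping for the running example; the values come from the #[task] attributes shown earlier:


// #[task(capacity = 2)] fn bar(..) -> bar_FQ capacity = 2 (at most 2 pending `bar`s)
// #[task(capacity = 2)] fn baz(..) -> baz_FQ capacity = 2 (at most 2 pending `baz`s)
//
// worst case: all messages are posted before the dispatcher gets to run, so
// capacity(RQ1) = capacity(bar) + capacity(baz) = 2 + 2 = 4 -- the `U4` in `RQ1`'s type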

Ceiling analysis

The queues internally used by the spawn API are treated like normal resources and included in the ceiling analysis. It's important to note that these are SPSC queues and that only one of the endpoints is behind a resource; the other endpoint is owned by a task dispatcher.

Consider the following example:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    #[idle(spawn = [foo, bar])]
    fn idle(c: idle::Context) -> ! {
        // ..
    }

    #[task]
    fn foo(c: foo::Context) {
        // ..
    }

    #[task]
    fn bar(c: bar::Context) {
        // ..
    }

    #[task(priority = 2, spawn = [foo])]
    fn baz(c: baz::Context) {
        // ..
    }

    #[task(priority = 3, spawn = [bar])]
    fn quux(c: quux::Context) {
        // ..
    }
};
#}

This is how the ceiling analysis would go:

  • idle (prio = 0) and baz (prio = 2) contend for the consumer endpoint of foo_FQ; this leads to a priority ceiling of 2.

  • idle (prio = 0) and quux (prio = 3) contend for the consumer endpoint of bar_FQ; this leads to a priority ceiling of 3.

  • idle (prio = 0), baz (prio = 2) and quux (prio = 3) all contend for the producer endpoint of RQ1; this leads to a priority ceiling of 3.

Timer queue

The timer queue functionality lets the user schedule tasks to run at some time in the future. Unsurprisingly, this feature is also implemented using a queue: a priority queue where the scheduled tasks are kept sorted by earliest scheduled time. This feature requires a timer capable of setting up timeout interrupts. The timer is used to trigger an interrupt when the scheduled time of a task is up; at that point the task is removed from the timer queue and moved into the appropriate ready queue.

Let's see how this is implemented in code. Consider the following program:


# #![allow(unused_variables)]
#fn main() {
#[rtic::app(device = ..)]
const APP: () = {
    // ..

    #[task(capacity = 2, schedule = [foo])]
    fn foo(c: foo::Context, x: u32) {
        // schedule this task to run again in 1M cycles
        c.schedule.foo(c.scheduled + Duration::cycles(1_000_000), x + 1).ok();
    }

    extern "C" {
        fn UART0();
    }
};
#}

schedule

Let's first look at the schedule API.


# #![allow(unused_variables)]
#fn main() {
mod foo {
    pub struct Schedule<'a> {
        priority: &'a Cell<u8>,
    }

    impl<'a> Schedule<'a> {
        // unsafe and hidden because we don't want the user to tamper with this
        #[doc(hidden)]
        pub unsafe fn priority(&self) -> &Cell<u8> {
            self.priority
        }
    }
}

const APP: () = {
    type Instant = <path::to::user::monotonic::timer as rtic::Monotonic>::Instant;

    // all tasks that can be `schedule`-d
    enum T {
        foo,
    }

    struct NotReady {
        index: u8,
        instant: Instant,
        task: T,
    }

    // The timer queue is a binary (min) heap of `NotReady` tasks
    static mut TQ: TimerQueue<U2> = ..;
    const TQ_CEILING: u8 = 1;

    static mut foo_FQ: Queue<u8, U2> = Queue::new();
    const foo_FQ_CEILING: u8 = 1;

    static mut foo_INPUTS: [MaybeUninit<u32>; 2] =
        [MaybeUninit::uninit(), MaybeUninit::uninit()];

    static mut foo_INSTANTS: [MaybeUninit<Instant>; 2] =
        [MaybeUninit::uninit(), MaybeUninit::uninit()];

    impl<'a> foo::Schedule<'a> {
        fn foo(&self, instant: Instant, input: u32) -> Result<(), u32> {
            unsafe {
                let priority = self.priority();
                if let Some(index) = lock(priority, foo_FQ_CEILING, || {
                    foo_FQ.split().1.dequeue()
                }) {
                    // `index` is an owning pointer into these buffers
                    foo_INSTANTS[index as usize].write(instant);
                    foo_INPUTS[index as usize].write(input);

                    let nr = NotReady {
                        index,
                        instant,
                        task: T::foo,
                    };

                    lock(priority, TQ_CEILING, || {
                        TQ.enqueue_unchecked(nr);
                    });

                    Ok(())
                } else {
                    // No space left to store the input / instant
                    Err(input)
                }
            }
        }
    }
};
#}

This looks very similar to the Spawn implementation. In fact, the same INPUTS buffer and free queue (FQ) are shared between the spawn and schedule APIs. The main difference between the two is that schedule also stores the Instant at which the task was scheduled to run in a separate buffer (foo_INSTANTS in this case).

TimerQueue::enqueue_unchecked does a bit more work than just adding the entry into a min-heap: it also pends the system timer interrupt (SysTick) if the new entry ended up first in the queue.
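
A minimal sketch of that extra step, assuming a min-heap with peek / push methods and using the cortex-m crate's SCB::set_pendst to pend the SysTick interrupt; field and method names are illustrative, not the actual rtic internals:


unsafe fn enqueue_unchecked(&mut self, nr: NotReady) {
    // will the new entry become the earliest scheduled task?
    let is_new_head = self
        .heap
        .peek()
        .map(|head| nr.instant < head.instant)
        .unwrap_or(true);

    // capacity was already claimed via the free queue so this cannot fail
    self.heap.push(nr).ok();

    if is_new_head {
        // delegate setting up the timeout to the `SysTick` handler
        cortex_m::peripheral::SCB::set_pendst();
    }
}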

The system timer

The system timer interrupt (SysTick) takes care of two things: moving tasks that have become ready from the timer queue into the right ready queue and setting up a timeout interrupt to fire when the scheduled time of the next task is up.

Let's see the associated code.


# #![allow(unused_variables)]
#fn main() {
const APP: () = {
    #[no_mangle]
    unsafe fn SysTick() {
        const PRIORITY: u8 = 1;

        let priority = &Cell::new(PRIORITY);
        while let Some(ready) = lock(priority, TQ_CEILING, || TQ.dequeue()) {
            match ready.task {
                T::foo => {
                    // move this task into the `RQ1` ready queue
                    lock(priority, RQ1_CEILING, || {
                        RQ1.split().0.enqueue_unchecked(Ready {
                           task: T1::foo,
                           index: ready.index,
                        })
                    });

                    // pend the task dispatcher
                    rtic::pend(Interrupt::UART0);
                }
            }
        }
    }
};
#}

This looks similar to a task dispatcher except that instead of running the ready task this only places the task in the corresponding ready queue, that way it will run at the right priority.

TimerQueue::dequeue will set up a new timeout interrupt when it returns None. This ties in with TimerQueue::enqueue_unchecked, which pends this handler; basically, enqueue_unchecked delegates the task of setting up a new timeout interrupt to the SysTick handler.
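
A sketch of that hand-off, again with illustrative names; the real code also handles the 24-bit SysTick range described in the next section:


fn dequeue(&mut self) -> Option<NotReady> {
    if let Some(instant) = self.heap.peek().map(|head| head.instant) {
        if instant <= Instant::now() {
            // the scheduled time is up: hand the task over to the caller
            return self.heap.pop();
        }

        // not ready yet: program a timeout for the remaining time,
        // saturated to the 24-bit range of the SysTick counter
        let remaining = (instant - Instant::now()).as_cycles();
        self.syst.set_reload(remaining.min(0x00FF_FFFF));
        self.syst.clear_current();
    }

    None
}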

Resolution and range of cyccnt::Instant and cyccnt::Duration

RTIC provides a Monotonic implementation based on the DWT's (Data Watchpoint and Trace) cycle counter. Instant::now returns a snapshot of this timer; these DWT snapshots (Instants) are used to sort entries in the timer queue. The cycle counter is a 32-bit counter clocked at the core clock frequency. This counter wraps around every (1 << 32) clock cycles; there's no interrupt associated with this counter so nothing worth noting happens when it wraps around.

To order Instants in the queue we need to compare two 32-bit integers. To account for the wrap-around behavior we use the difference between two Instants, a - b, and treat the result as a 32-bit signed integer. If the result is less than zero then b is a later Instant; if the result is greater than zero then b is an earlier Instant. This means that scheduling a task at an Instant that's more than (1 << 31) - 1 cycles greater than the scheduled time (Instant) of the first (earliest) entry in the queue will cause the task to be inserted at the wrong place in the queue. There are some debug assertions in place to prevent this user error but it can't be completely avoided because the user can write (instant + duration_a) + duration_b and overflow the Instant.
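
The comparison rule can be reproduced in isolation; in the self-contained example below is_later is an illustrative helper, not an RTIC API:


fn is_later(a: u32, b: u32) -> bool {
    // compute `a - b` with wrap-around and reinterpret the result as a
    // signed integer: a positive result means `a` is the later instant
    (a.wrapping_sub(b) as i32) > 0
}

fn main() {
    // plain case
    assert!(is_later(2_000, 1_000));

    // wrap-around case: `a` has wrapped past zero but is still later than `b`
    let b = u32::max_value() - 100;
    let a = b.wrapping_add(1_000);
    assert!(is_later(a, b));
}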

The system timer, SysTick, is a 24-bit counter also clocked at the core clock frequency. When the next scheduled task is more than 1 << 24 clock cycles in the future an interrupt is set to go off in 1 << 24 cycles. This process may need to happen several times until the next scheduled task is within the range of the SysTick counter.
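
A sketch of how such a timeout could be armed with the cortex-m crate's SYST API; set_timeout and SYSTICK_MAX are names assumed here for illustration, not part of RTIC:

use cortex_m::peripheral::SYST;

const SYSTICK_MAX: u32 = 0x00FF_FFFF; // the reload register is 24 bits wide

// arm SysTick to fire in `cycles` core clock cycles, clamped to the
// counter's 24-bit range; if the deadline is further away the handler
// fires early, finds nothing ready and simply re-arms the counter
fn set_timeout(syst: &mut SYST, cycles: u32) {
    syst.set_reload(cycles.min(SYSTICK_MAX));
    syst.clear_current(); // restart the countdown from the new reload value
    syst.enable_interrupt();
    syst.enable_counter();
}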

In conclusion, both Instant and Duration have a resolution of 1 core clock cycle and Duration effectively has a (half-open) range of 0..(1 << 31) (end not included) core clock cycles. At a core clock of 8 MHz, for example, that's a maximum Duration of about 268 seconds.

Queue capacity

The capacity of the timer queue is chosen to be the sum of the capacities of all schedule-able tasks. Like in the case of the ready queues, this means that once we have claimed a free slot in the INPUTS buffer we are guaranteed to be able to insert the task in the timer queue; this lets us omit runtime checks.
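
For example, if foo had capacity = 2 and bar had capacity = 3 and both were schedule-able (hypothetical numbers), the timer queue would be sized for 2 + 3 = 5 entries. A sketch using the heapless min-heap and typenum-style capacities of the RTIC v0.5 era:

use heapless::{binary_heap::Min, consts::U5, BinaryHeap};

// 2 (foo) + 3 (bar) entries: claiming a free-queue slot therefore
// guarantees a slot in this heap, so no capacity check is needed when
// the entry is later moved into the timer queue
static mut TQ_HEAP: Option<BinaryHeap<NotReady, U5, Min>> = None;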

System timer priority

The priority of the system timer can't be set by the user; it is chosen by the framework. To ensure that lower priority tasks don't prevent higher priority tasks from running, we choose the priority of the system timer to be the maximum priority of all the schedule-able tasks.

To see why this is required consider the case where two previously scheduled tasks with priorities 2 and 3 become ready at about the same time but the lower priority task is moved into the ready queue first. If the system timer priority were, for example, 1 then after moving the lower priority (2) task it would run to completion (due to it being higher priority than the system timer), delaying the execution of the higher priority (3) task. To prevent scenarios like these the system timer must run at the highest priority of all the schedule-able tasks; in this example that means priority 3.

Ceiling analysis

The timer queue is a resource shared between all the tasks that can schedule a task and the SysTick handler. The schedule API also contends with the spawn API over the free queues. All of this must be considered in the ceiling analysis.

To illustrate, consider the following example:


#[rtic::app(device = ..)]
const APP: () = {
    #[task(priority = 3, spawn = [baz])]
    fn foo(c: foo::Context) {
        // ..
    }

    #[task(priority = 2, schedule = [foo, baz])]
    fn bar(c: bar::Context) {
        // ..
    }

    #[task(priority = 1)]
    fn baz(c: baz::Context) {
        // ..
    }
};

The ceiling analysis would go like this:

  • foo (prio = 3) and baz (prio = 1) are schedule-able tasks, so the SysTick must run at the highest priority between these two, that is 3.

  • foo::Spawn (prio = 3) and bar::Schedule (prio = 2) contend over the consumer endpoint of baz_FQ; this leads to a priority ceiling of 3.

  • bar::Schedule (prio = 2) has exclusive access over the consumer endpoint of foo_FQ; thus the priority ceiling of foo_FQ is effectively 2.

  • SysTick (prio = 3) and bar::Schedule (prio = 2) contend over the timer queue TQ; this leads to a priority ceiling of 3.

  • SysTick (prio = 3) and foo::Spawn (prio = 3) contend over the ready queue RQ1, which holds baz entries; this leads to a priority ceiling of 3.

  • The SysTick has exclusive access to the ready queue RQ3, which holds foo entries; thus the priority ceiling of RQ3 is effectively 3.
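
In the generated code these ceilings would surface as constants along the following lines (a sketch; the names match the ones used in the code shown earlier):

const TQ_CEILING: u8 = 3;     // SysTick (3) vs bar::Schedule (2)
const baz_FQ_CEILING: u8 = 3; // foo::Spawn (3) vs bar::Schedule (2)
const foo_FQ_CEILING: u8 = 2; // bar::Schedule (2) has exclusive access
const RQ1_CEILING: u8 = 3;    // SysTick (3) vs foo::Spawn (3)
const RQ3_CEILING: u8 = 3;    // SysTick (3) has exclusive access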

Changes in the spawn implementation

When the schedule API is used the spawn implementation changes a bit to track the baseline of tasks. As you saw in the schedule implementation there's an INSTANTS buffer used to store the time at which a task was scheduled to run; this Instant is read in the task dispatcher and passed to the user code as part of the task context.


const APP: () = {
    // ..

    #[no_mangle]
    unsafe fn UART0() {
        const PRIORITY: u8 = 1;

        let snapshot = basepri::read();

        while let Some(ready) = RQ1.split().1.dequeue() {
            match ready.task {
                Task::baz => {
                    let input = baz_INPUTS[ready.index as usize].read();
                    // ADDED
                    let instant = baz_INSTANTS[ready.index as usize].read();

                    baz_FQ.split().0.enqueue_unchecked(ready.index);

                    let priority = Cell::new(PRIORITY);
                    // CHANGED the instant is passed as part the task context
                    baz(baz::Context::new(&priority, instant), input)
                }

                Task::bar => {
                    // looks just like the `baz` branch
                }

            }
        }

        // BASEPRI invariant
        basepri::write(snapshot);
    }
};

Conversely, the spawn implementation needs to write a value to the INSTANTS buffer. The value to be written is stored in the Spawn struct; it's either the start time of the hardware task or the scheduled time of the software task.


mod foo {
    // ..

    pub struct Spawn<'a> {
        priority: &'a Cell<u8>,
        // ADDED
        instant: Instant,
    }

    impl<'a> Spawn<'a> {
        pub unsafe fn priority(&self) -> &Cell<u8> {
            &self.priority
        }

        // ADDED
        pub unsafe fn instant(&self) -> Instant {
            self.instant
        }
    }
}

const APP: () = {
    impl<'a> foo::Spawn<'a> {
        /// Spawns the `baz` task
        pub fn baz(&self, message: u64) -> Result<(), u64> {
            unsafe {
                match lock(self.priority(), baz_FQ_CEILING, || {
                    baz_FQ.split().1.dequeue()
                }) {
                    Some(index) => {
                        baz_INPUTS[index as usize].write(message);
                        // ADDED
                        baz_INSTANTS[index as usize].write(self.instant());

                        lock(self.priority(), RQ1_CEILING, || {
                            RQ1.split().0.enqueue_unchecked(Ready {
                                task: Task::baz,
                                index,
                            });
                        });

                        rtic::pend(Interrupt::UART0);

                        Ok(())
                    }

                    None => {
                        // maximum capacity reached; spawn failed
                        Err(message)
                    }
                }
            }
        }
    }
};

Homogeneous multi-core support

This section covers the experimental homogeneous multi-core support provided by RTIC behind the homogeneous Cargo feature.

Content coming soon

Heterogeneous multi-core support

This section covers the experimental heterogeneous multi-core support provided by RTIC behind the heterogeneous Cargo feature.

Content coming soon