# Are We Embedded Yet
## D7018E - Special Studies in Embedded Systems
Disclaimer: This document is in a beta state! It is only meant to give a hint of what the course will look like.
The course will be given as a self-study course with a set of introductory seminars and accompanying mandatory assignments, followed by a larger assignment (project). The project can be carried out individually or in groups depending on size. Grading will be individual, based on requirements agreed between the student and the teacher, showing understanding and abilities regarding:
1. The Rust ecosystem. [ecosystem](doc/Ecosystem.md)
8. Building and debugging embedded code in Rust
Hardware abstractions using svd2rust (auto-generated from vendor-provided SVD specifications). Compiling using xargo. Setting up openocd and gdb.
9. Pre-processing
Custom build.rs build scripts, and the RTFM Concurrent Reactive Component model.
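A minimal sketch of what such a custom `build.rs` commonly looks like in embedded Rust projects is shown below; the `memory.x` linker-script handling follows the usual cortex-m project layout and is an assumption for illustration, not something prescribed by the course.
```rust
// build.rs -- hypothetical sketch: make the linker script `memory.x`
// (describing the target's FLASH/RAM layout) available to the linker.
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

fn main() {
    // Directory where Cargo lets build scripts place generated files.
    let out = PathBuf::from(env::var("OUT_DIR").unwrap());

    // Copy the project's memory layout description into OUT_DIR ...
    File::create(out.join("memory.x"))
        .unwrap()
        .write_all(include_bytes!("memory.x"))
        .unwrap();

    // ... and tell the linker to search for it there.
    println!("cargo:rustc-link-search={}", out.display());

    // Re-run the build script only when its input changes.
    println!("cargo:rerun-if-changed=memory.x");
}
```
Cargo runs this script before compiling the crate itself, so the emitted search path is already in place when the final binary is linked.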
There will be two presentation rounds (at the end of LP2 and LP3). Students taking (too many other) courses during LP2 may choose to present their project at the end of LP3 (instead of LP2). Presentations will be oral, where the student(s) will present and demonstrate their work.
Projects should be related to embedded programming, either on the target side (some application using the RTFM-core or CRC model), or on the host side, communicating with an embedded target running Rust RTFM. For example, two groups can work together on building a system, e.g., with back-end processing of data collected by the embedded system, or by providing a front-end to the embedded system. Alternatively, a host-side project could relate to the development of the RTFM-core/RTFM-CRC frameworks or related tools (e.g. LLVM-KLEE as a back-end for analysis of Rust code).
## Resources
Students will carry out the assignments on their personal laptops (in case you don't have a working laptop we will try to lend you one). Tools used are available for Linux and OSX, but with a bit of trickery Windows-based installations should be possible (but you are on your own here). In case you don't run OSX/Linux natively, VirtualBox or VMware is possible, though debugging of target MCUs, while feasible, is a bit more tricky.
* Topic [Memory](doc/Memory.md)
In-depth discussion of underlying theory, linear types (relation to functional programming). The *Affine* type system of Rust, requirements on the programmer, and guarantees offered by the compiler. Lifetimes of stack-allocated and global variables. Relation to C++ `unique pointers`.
* Assignment 3
a. Recall the D0013E course lab 2/4, where you decrypted a message in assembler (lab 2) and C (lab 4). Now, let's re-implement the lab in Rust (base your development on group number [1's](http://www.sm.luth.se/csee/courses/smd/D0013E/labs/lab1underlag/grupp_01.lab1_underlag.s) lab assignment).
The `seed`, `abc`, `coded` and `plain` should be stack allocated. The decoded string should be printed when decryption is finished.
b. Make the `seed`, `abc`, `coded` and `plain` statically allocated (i.e., as global variables). Accessing those will require some `unsafe` code. (Keep the unsafe blocks as local as possible.)
c. Safety analysis. Provoke the implementation by omitting the `'\0'` (null termination). Observe the result and motivate the behavior in terms of your understanding of the Rust memory model. Under which circumstances do you consider 3a and 3b to have the same/different memory safety?
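To hint at the difference between 3a and 3b, here is a hypothetical sketch (buffer sizes, contents and the XOR-style "decryption" are placeholders, not the actual lab data): the stack-allocated variant needs no `unsafe`, while the statically allocated variant confines `unsafe` to the smallest possible blocks.
```rust
// Hypothetical sketch only -- sizes, contents and the "decryption" are placeholders.

// 3b-style: statically (globally) allocated buffers.
static mut SEED: u8 = 0x55;
static mut CODED: [u8; 4] = [0x1c, 0x3d, 0x36, 0x00];
static mut PLAIN: [u8; 4] = [0; 4];

fn decode_static() {
    // Keep the unsafe block as local as possible: only the actual accesses
    // to the mutable statics are inside it.
    unsafe {
        for (p, c) in PLAIN.iter_mut().zip(CODED.iter()) {
            *p = *c ^ SEED;
        }
    }
}

// 3a-style: everything lives on the stack, no unsafe needed.
fn decode_stack() {
    let seed: u8 = 0x55;
    let coded: [u8; 4] = [0x1c, 0x3d, 0x36, 0x00];
    let mut plain = [0u8; 4];
    for (p, c) in plain.iter_mut().zip(coded.iter()) {
        *p = *c ^ seed;
    }
}

fn main() {
    decode_stack();
    decode_static();
}
```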
* `blue-pill` and `nucleo` board support crates
* Building and debugging your first application.
* Assignment 4
a. Backport assignment `3b` to your chosen target. Use semi-hosting in order to `write` the resulting string to the host. You may need to use `--release` for decoding the long (`coded`) message, as deeply recursive unoptimized code may run out of stack memory.
b. Discuss the outcome from a memory safety perspective.
c. Compare, for the short message (`abc`), the number of cycles required for `decode` in debug (standard) vs. `--release`. As a comparison, my straightforward C implementation took 2200 cycles in best optimized mode using `gcc` (`-O3`), while my translation to Rust took 1780 cycles (`--release`). (Both executed on a bluepill board at 8 MHz without (flash) memory wait states.)
Make a new git for your embedded development. Make three branches (`4a`, `4b`, `4c`) with updated documentation according to the above.
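A hedged sketch of how 4a/4c could be written and measured is given below. It assumes the `cortex-m-semihosting` crate for host output, and reads the DWT cycle counter directly from its memory-mapped address (the counter must first be enabled via the DWT/DEMCR registers); the exact names and calls are assumptions, not course requirements.
```rust
// Hypothetical fragment for a Cortex-M target (part of a no_std binary).
// Host output via semihosting (slow, debug use only); cycle counts via the
// DWT cycle counter at its architectural address 0xE000_1004.
extern crate cortex_m_semihosting;

use core::fmt::Write;
use core::ptr;
use cortex_m_semihosting::hio;

fn cycles() -> u32 {
    // Volatile read of the free-running cycle counter
    // (CYCCNT must have been enabled beforehand).
    unsafe { ptr::read_volatile(0xE000_1004 as *const u32) }
}

fn report_decode_cycles() {
    let start = cycles();
    // decode(&coded, &mut plain); // the function under measurement
    let stop = cycles();

    // Semihosting `write` of the result to the host.
    if let Ok(mut stdout) = hio::hstdout() {
        let _ = writeln!(stdout, "decode took {} cycles", stop.wrapping_sub(start));
    }
}
```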
5. Advanced Rust Concepts
* Preparation
Be prepared to present the progress on assignment 4.
* Topic
Advanced Rust features, trait system and closures.
* [13 - Functional Language Features in Rust](https://doc.rust-lang.org/book/second-edition/ch13-00-functional-features.html).
* Assignment
Continue working on assignment 4.
6. Memory Safe Concurrency
* Preparation
* Finish assignment 4 and be prepared to show your solution.
* Topic
* `UnsafeCell`, and synchronization in the RTFM model.
* [cortex-m-rtfm](https://github.com/japaric/cortex-m-rtfm) The RTFM-core (task and resource model) in Rust for the Cortex-M family
* [svd2rust](https://github.com/japaric/svd2rust) Generating hardware abstractions from vendor-provided SVD specifications
* Assignment 5
Implement a simple system with 3 tasks:
* A periodic task executing every X ms (free of accumulated drift, and with minimal jitter) that blinks the on-board LED, and
* A USART task receiving commands (pause, start, period 1-1000 ms); received commands should be parsed and corresponding responses generated and sent over the USART. (Come up with a nice and simple user interface.)
* A logging task, run each second (period 1 s), that prints statistics of CPU usage over the ITM port
* Idle should gather statistics on sleep/up time (there is a sleep counter in the Cortex core)
* Use shared resources (data structures) to ensure race-free execution
You may use the core SysTick timer (relative) and the DWT cycle counter (absolute) in combination to achieve drift-free timing (a sketch of the idea follows below). Alternatively, you can look into the stm32f4xx timer peripherals. There is a support crate for the [STM32F3DISCOVERY](https://github.com/japaric/f3) board. Peripherals are similar, so you may "borrow" code from there.
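The core of drift-free timing is to advance an absolute deadline by one period each iteration, instead of delaying a fixed time after the work. A minimal, hypothetical sketch using the DWT cycle counter as the absolute time base (the 8 MHz clock, names and the busy-wait are placeholders; in the assignment you would sleep or arm a timer instead of spinning):
```rust
// Hypothetical sketch of drift-free periodic timing on an absolute time base.
const CPU_HZ: u32 = 8_000_000; // assumed 8 MHz core clock
const PERIOD_MS: u32 = 100;

fn cycles() -> u32 {
    // Free-running DWT cycle counter (must be enabled beforehand).
    unsafe { core::ptr::read_volatile(0xE000_1004 as *const u32) }
}

fn blink_loop() -> ! {
    let period = CPU_HZ / 1_000 * PERIOD_MS;
    // The deadline advances by exactly one period per iteration, so jitter in
    // the per-period work does not accumulate into drift.
    let mut deadline = cycles().wrapping_add(period);
    loop {
        // toggle_led(); // the per-period work goes here

        // Busy-wait until the absolute deadline is reached (wrapping-safe
        // comparison), then move the deadline one full period forward.
        while (deadline.wrapping_sub(cycles()) as i32) > 0 {}
        deadline = deadline.wrapping_add(period);
    }
}
```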
Make a new git with the development and documentation.
Optional:
Find a way to measure the power consumption. A possible solution is to power the board externally and use a power cube with current-measuring capability. Alternatively, use an external power source with known charge (e.g., a "capacitor"), and measure the discharge time (start and residual charge at brown-out voltage); at least a precise relative measure is possible to obtain.
Operation without being connected to the USB port: in this case the serial IO and ITM need to be connected externally (e.g., using some FTDI serial-USB adapter).
Super optional:
Try to minimize power consumption while maintaining the desired operation. Lowering the supply voltage and using aggressive power modes of the processor might be applied. (Not sure how USART/ITM communication can be made possible at sub-3.3 V voltages. Also you have to make sure not to source the board over the communication interfaces.)
7. Macros and Projects (Monday Nov. 20th)
* Preparation
Be prepared to present the progress on assignment 5.
* Topic
- We will cover the implementation of the rtfm-core and the cortex-m-rtfm crates. For details see [RTFM](doc/RTFM.md).
Special focus on `macro_rules!` and procedural macros.
8. Wrap-up (Monday Dec. 4th)
* Preparation
* Be prepared to present assignment 5.
* Be prepared to present your project, 10 minutes per project.
* A good idea is to prepare a git for the project with a `README.md` and use that as supporting material for your presentation. It is advisable to have a section (or a doc sub-folder) where you collect references to the material you will use, i.e., links to data sheets, links to other related crates and projects of importance to your project.
This type of reference section will be very helpful both to you (during the project and afterwards when maintaining it) and to other users and people interested in your work. Moreover, from a "course" perspective it shows that you have done the necessary background studies BEFORE you "hack away". Of course this will be a living document, updated throughout the project, but it is a very good thing to start NOW; then it can be used for your 10 minutes of fame!
The [rustup](https://www.rustup.rs/) tool manager allows you to manage multiple toolchain installations. Rust is distributed in three channels (`stable`, `beta` and `nightly`). You may set the default toolchain:
```
rustup default nightly-2018-01-10-x86_64-unknown-linux-gnu
```
and get information on the status of `rustup`:
```
rustup show
```
Nightly toolchains allow for the development of libraries and applications including `unsafe` code using features not available on the `stable` channel (which will be necessary for the later exercises). For some tools to work (`rls`/`rustfmt`), you need to install additional components. For this to work, you should use a nightly toolchain for which all tools and components work (currently `nightly-2018-01-10` is the latest). Here is an example:
```
rustup default nightly-2018-01-10
rustup component add rls-preview
rustup component add rust-analysis
rustup component add rust-src
```
See [rls](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust) for installing the RLS extension.
You will need to pin the specific toolchain version used, by setting `"rust-client.channel": "nightly-2018-01-10"` in your `vscode` *user* settings (this will be stored in a file `~/.config/Code/User/settings.json` and used for all your `vscode` projects). Settings may be set individually for each *workspace*, overriding the defaults. Regarding the `"rust-client.channel"` setting, a *workspace* setting would force the specific version (overriding the default), and may not work when the code is distributed (as other developers may be on other toolchains).
For RLS to work, `vscode` needs a path to the `rls-preview` library (using the environment variable `LD_LIBRARY_PATH` (Linux), `DYLD_LIBRARY_PATH` (OSX?)).
# STM32 Nucleo-64 Board
A collection of documentation, tricks and tips regarding the STM32 Nucleo-64 board (development kit).
We will mainly cover the `stm32f401re` and `stm32f411re` models (the main difference is the higher maximum clock frequency of the `stm32f411re`).
---
## Stlink interface
The Nucleo-64 platform has an on-board `stlink-v2.1` SWD interface (programmer) supported by `openocd`. By default the programmer is connected to the target (`stm32f401re` or similar).
---
### Programming external targets
You may use the board as a programmer (connector `CN4` SWD); in that case you should remove the `CN3` jumpers, and optionally desolder `SB15` (SWO) and `SB12` (NRST). See the board documentation for details.
## Links
* [Nucleo-64 board](http://www.st.com/content/ccc/resource/technical/document/user_manual/98/2e/fa/4b/e0/82/43/b7/DM00105823.pdf/files/DM00105823.pdf/jcr:content/translations/en.DM00105823.pdf)
Board documentation.
* [Firmware Upgrade](http://www.st.com/en/development-tools/stsw-link007.html)
Standalone Java program.
# Clocking
* [Note on Overclocking](https://stm32f4-discovery.net/2014/11/overclock-stm32f4-device-up-to-250mhz/)
In order to be officially enrolled in the course, each student needs to write (and submit) an individual course plan including grading goals and assessments based on this document.
Assignments are mandatory (as detailed in the README.md).
- Each student should, for each assignment 2, .., 5, comment on one other `git` (make an `issue` and/or `pull request`). Comments should be meaningful and constructive. For assignments 2, .., 5 you should comment on different groups. Strive to spread comments so that each group will get at least one comment for each assignment.
- Each student/group should attend to `issues`/`pull requests`
Projects should aim to further cover the learning goals as stated in the README.md.
# Suggested projects
---
## Printing device (William)
- Rotating stick with LEDs at the end of the stick that can "print" text by switching the LEDs on and off with the right timings.
---
## Seer (Nils)
Symbolic execution engine for MIR internal format
- Study outsets for program verification based on Seer
---
## LED Audio (John)
- Modulate LED colors and intensity according to audio input
---
## Drivers for NXP (Axel)
---
## WCET analysis for RTFM models using KLEE (Henrik)
- Automated testbed, integrated as a cargo sub-command
- Implement a wheel sensor for an existing model car
---
## Stack Memory Analysis
- Seer or KLEE based path/call graph extraction
- Target code analysis, per-function stack usage
- Static worst-case stack analysis for RTFM and/or RTFM-TTA
---
## Ethernet driver for TCP/UDP/IP stack (Jonas)
- Develop a driver and integrate it with an existing TCP/UDP/IP stack
---
- Drivers for the Nucleo 64, stm32f401re/stm32f411re, similar to the f3/bluepill support crates
---
## Time Triggered Architecture (RTFM-TTA)
- Periodic timers
- Communication channels/message buffers
- Static analysis (for safely bound buffers)
- Arch Linux
``` console
$ sudo pacman -Sy arm-none-eabi-{binutils,gdb}
```
OpenOCD is in AUR:
```
$ yaourt -S openocd
``` ```
For hardware association and pre-packaged udev rules, also install:
Now pick from the menu: `Tasks > Configure Default Build Task...` and pick `xargo build`.
Now you should be able to build your project by picking `Tasks > Run Build Task...` from the menu or
by hitting the shortcut `Ctrl + Shift + B`.
![Build task](/assets/vscode-build.png)
# Real Time For the Masses
Real Time For the Masses is a set of programming models and tools geared towards developing systems with analytical properties with respect to, e.g., memory requirements, response time, safety and security.
## History
### RTFM-core
The RTFM-core model offers a static task and resource model for device-level modelling, implementation and analysis. The original model has been implemented as a coordination language, embedding and extending the C language with a set of RTFM primitives.
For single-core deployment, the input program is analysed under the Stack Resource Policy. The RTFM-core compiler generates code with inlined scheduling and resource management primitives offering the following key properties:
- Efficient static priority preemptive scheduling, using the underlying interrupt hardware
- Race-free execution (each resource is exclusively accessed)
- Deadlock-free execution
- Schedulability testing and response-time analysis using a plethora of known methods
Related publications:
- [Real-time for the masses: Step 1: programming API and static priority SRP kernel primitives](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1005680&c=23&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-core: Language and Implementation](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013248&c=11&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-RT: a threaded runtime for RTFM-core towards execution of IEC 61499](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1001553&c=12&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [Abstract Timers and their Implementation onto the ARM Cortex-M family of MCUs](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013030&c=4&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [Safe tasks: run time verification of the RTFM-lang model of computation](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1037297&c=6&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [Well formed Control-flow for Critical Sections in RTFM-core](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013317&c=13&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
### RTFM-cOOre
An object-oriented model offering a component-based abstraction. RTFM-cOOre models can be compiled to RTFM-core for further analysis and target code generation. The language is a mere proof of concept, used by students of the course in Compiler Construction at LTU. The RTFM-cOOre language adopts the computational model of Concurrent Reactive Objects, similarly to the functional Timber language, its C-code implementation (TinyTimber) and the CRC/CRO IDE below.
Related publications:
- [Timber](http://www.timber-lang.org/)
- [TinyTimber, Reactive Objects in C for Real-Time Embedded Systems](http://ieeexplore.ieee.org/document/4484933/)
- [An IDE for component-based design of embedded real-time software](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013957&c=26&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-lang static semantics for systems with mixed criticality](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A987559&c=19&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-core: course in compiler construction](http://ltu.diva-portal.org/smash/record.jsf?faces-redirect=true&aq2=%5B%5B%5D%5D&af=%5B%5D&searchType=SIMPLE&sortOrder2=title_sort_asc&query=&language=sv&pid=diva2%3A1068636&aq=%5B%5B%5D%5D&sf=all&aqe=%5B%5D&sortOrder=author_sort_asc&onlyFullText=false&noOfRows=50&dswid=-6339)
### RTFM in Rust
A major drawback of the RTFM-core model lies in its dependency on C code for the implementation of tasks (risking breaking the safety of memory accesses and introducing race conditions). While the RTFM-cOOre model lifts this dependency, developing and maintaining a fully fledged language and accompanying compiler is a daunting task. Instead, we took the route of the systems programming language Rust, offering the memory safety required (and more, as it turned out).
- first attempt:
Resource protection by scope and `Deref`. Without going into details, a proof of concept was implemented. While feasible, the approach was dropped in favour of using closures, as seen in RTFM-v1 and RTFM-v2 below.
- second attempt:
At this time Japaric came into play, bringing Rust coding ninja skillz. [RTFM-v1](http://blog.japaric.io/fearless-concurrency/). The approach allows the user to enter resource ceilings manually and uses the Rust type system to verify their soundness. The approach is fairly complicated, and writing generic code requires explicit type bounds.
- current implementation.
In [RTFM-v2](http://blog.japaric.io/rtfm-v2/), a system model is given declaratively (by the `app!` procedural macro). During compilation the system model is analysed, resource ceilings are derived and code is generated accordingly. This simplifies programming, and generics can be expressed more succinctly.
The RTFM-v2 implementation provides a subset of the original RTFM-core language. The RTFM-core model offers offset-based scheduling: a task may trigger the asynchronous execution of a sub-task, with optional timing offset, priority assignment and payload. The `rtfm-core` compiler analyses the system model, statically allocates buffers and generates code for sending and receiving payloads. This has not been implemented in the RTFM-v2 framework. However, similar behavior can be achieved programmatically by manually triggering tasks (`rtfm::set_pending`), and using the ARM core SysTick/device timers to give timing offsets (a small sketch of this workaround is given after this list). Payloads can be safely implemented using *channels* (unprotected *single writer, single reader* buffers).
- current work and future implementations.
One can think of extending the RTFM-v2 API with channels, and synthesizing buffers, send/receive code and timer management. (A suitable Master's thesis.)
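As a sketch of the manual-triggering workaround mentioned above (the task/interrupt names and the device module are hypothetical; only `rtfm::set_pending` itself is taken from the text):
```rust
// Hypothetical sketch: the task bound to EXTI0 requests asynchronous execution
// of the task bound to EXTI1 by pending its interrupt; EXTI1 is then dispatched
// according to its priority and the current system ceiling.
fn exti0(_t: &mut rtfm::Threshold, _r: EXTI0::Resources) {
    // ... produce some data, e.g. into a single-writer/single-reader buffer ...

    rtfm::set_pending(stm32f40x::Interrupt::EXTI1);
}
```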
The CRC/CRO model has been implemented as a proof of concept. Unlike the RTFM-cOOre model, RTFM-CRC directly generates target code (i.e., it does NOT compile to RTFM-v2). RTFM-CRC is in a prototype stage, implemented so far:
- A system level `Sys` AST (abstract syntax tree) is derived from CRC (components)/ CRO (object) descriptions (given in separate text files following a Rust struct like syntax for each component and object)
- The `Sys` AST is analysed, and the resources and task set are derived. From these, resource ceilings are derived.
- Resources (objects) and resource ceilings are synthesized.
- Message buffers and message passing primitives are synthesized (assuming each message/port being a single buffer)
Not implemented:
- There is no automatic support for messages with offsets (for the purpose of demonstration, a mock-up is possible with hand-written code)
# RTFM-v2 breakdown
In this section a behind-the-scenes breakdown of RTFM-v2 is provided.
---
## Task/resource model
The RTFM-v2 model defines a system in terms of a set of tasks and resources, in compliance with the Stack Resource Policy for tasks with static priorities and single-unit resources.
### Tasks and Resources
- `t` is a task, with priority `pri(t)`; during execution a task may `claim` access to resources in a nested (LIFO) fashion.
- `r` is a single-unit resource (i.e., `r` can be either *locked* or *free*), `ceil(r)` denotes the ceiling of the resource, computed as the maximum priority of any task claiming `r`
Example: assume a system with three tasks `t1, t2, t3` and two resources `low, high`:
- `t1`, `pri(t1) = 1`, claiming both `low, high`
- `t2`, `pri(t2) = 2`, claiming `low`
- `t3`, `pri(t3) = 3`, claiming `high`
This renders the resource ceilings:
- `low`, `ceil(low) = max(pri(t1), pri(t2)) = 2`
- `high`, `ceil(high) = max(pri(t1), pri(t3)) = 3`
---
### System Ceiling and Current running task
- `sc` is the current system ceiling, set to the maximum ceiling of currently held resources
- `st` is the currently running task
Example 1:
Assume we currently run the task `t1` having claimed both resources `low` and `high`
- `sc = max(ceil(low), ceil(high)) = max(2, 3) = 3`
- `st = t1`
---
### Execution semantics
- `P` is the set of requested (pended), but not yet dispatched tasks
A task `t` can only be dispatched (scheduled for execution) iff:
- `t in P`
- `pri(t) >= max(pri(tn)), tn in P`
- `pri(t) > sc`
- `pri(t) > pri(st)`
Example 2:
Assume we are currently running task `t1` at the point `low` is held. At this point both `t2` and `t3` are requested for execution (become pending).
- `sc = max(ceil(low)) = max(2) = 2`
- `pri(st) = pri(t1) = 1`
- `P = {t2, t3}`
Following the dispatch rule, for `t2`:
- `t2 in P` =>
`t2 in {t2, t3}` => OK
- `pri(t2) >= max(pri(tn)), tn in P` =>
`2 >= max(2, 3)` => FAIL
The scheduling condition for `t2` is not met.
Following the dispatch rule, for `t3`:
- `t3 in P` =>
`t3 in {t2, t3}` => OK
- `pri(t3) >= max(pri(tn)), tn in P` =>
`3 >= max(2, 3)` => OK
- `pri(t3) > sc` =>
`3 > 2` => OK
- `pri(t3) > pri(t1)` =>
`3 > 1` => OK
All conditions hold, and task `t3` will be dispatched.
Example 3:
Assume we are currently running task `t1` at the point both `low` and `high` are held. At this point both `t2` and `t3` are requested for execution (becomes pending).
In this case both `t2` and `t3` fail to meet the dispatch rules. Details are left to the reader as an exercise.
Notice, due to the dispatch condition, a task may never preempt itself.
---
## RTFM on Bare Metal ARM Cortex-M3 and Above
To our aid, the Nested Vectored Interrupt Controller (NVIC) of the ARM Cortex-M3 and above implements the following:
- tracks the `pri(i)`, for each interrupt handler `i`
- tracks the set of pended interrupts `I`
- tracks the `si` (the currently running interrupt handler)
- the `BASEPRI` register, a base priority for dispatching interrupts
- the `PRIMASK` register, a global interrupt mask.
An interrupt will be dispatched iff (for details see [Core Registers](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0552a/CHDBIBGJ.html)):
- `i in I`
- `pri(i) >= max(pri(j)), j in I`
- `pri(i) > BASEPRI && !PRIMASK`
- `pri(i) > pri(si)`
Mapping:
We map each task `t` to an interrupt `i` with `pri(i) = pri(t)`. Assume `BASEPRI` is set to the system ceiling `sc`, and assume `PRIMASK == false`.
Exercises:
- Show for the two examples that the NVIC will dispatch `i3` for Example 2 above, while not dispatching any interrupt for Example 3 above.
- Show that an interrupt cannot preempt itself.
Notice that under the Stack Resource Policy there is an additional dispatch rule: on a tie among pending tasks' priorities, the one with the oldest request time has priority. This rule cannot be enforced directly by the NVIC. However, it can be shown that this restriction does not invalidate soundness; it only affects the response-time calculation.
---
## Overall design
Code is split into three partitions,
- the generic `cortex-m-rtfm` library,
- the user code, and
- the *glue* code generated from the `app!` macro.
---
### `cortex-m-rtfm` library
The library implements an *unsafe* `claim<T, R, F>` method, `T` being a reference to the resource data (can be either `&` or `&mut`), `R` the return type, and `F: FnOnce(T, &mut Threshold) -> R` the closure to execute within the `claim`. `claim` cannot be directly accessed from *safe* user code; instead a *safe* API `claim`/`claim_mut` is offered by the generated code. (The API is implemented by a *trait* approach.)
---
### User code
```rust
fn exti0(
t: &mut Threshold,
EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources,
)
```
`t` is the initial `Threshold`, used for the resource protection mechanism (as seen later, the parameter will be optimized out by the compiler in `--release` mode, yet the logic behind the parameter will still be taken into account).
`EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources` gives access to the resources `LOW` and `HIGH`. Technically, we *destructure* the given parameter (of type `EXTI0::Resources`) into its fields (`mut LOW`, `mut HIGH`).
Notice here that the type `EXTI0::Resources` was not user defined, but rather generated by the `app!` macro.
The `LOW`/`HIGH` arguments give you *safe* access to the corresponding resources through the *safe* API (`claim`/`claim_mut`).
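For instance, a hypothetical body for the `exti0` signature above could look as follows (the closure parameter names are arbitrary; `Static<u64>` dereferences to the underlying `u64`):
```rust
// Hypothetical task body: read LOW under a shared claim, then update HIGH
// under an exclusive claim. Each closure runs with the system ceiling raised
// to the resource ceiling, so no other task sharing that resource can preempt.
fn exti0(
    t: &mut Threshold,
    EXTI0::Resources { LOW, mut HIGH }: EXTI0::Resources,
) {
    let snapshot = LOW.claim(t, |low, _| **low); // &Static<u64> derefs to u64

    HIGH.claim_mut(t, |high, _| {
        **high = snapshot + 1;
    });
}
```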
---
### Generated code (app! macro)
The procedural macro `app!` takes a system configuration, and performs the following:
- `Sys` AST creation after syntactic check
- A mapping from tasks to interrupts
- Resource ceiling computation according to the RTFM SRP model
- Generation of code for:
- task to interrupt bindings, and initialization code enabling corresponding interrupts
- static memory allocation and initialization for Resources
- Generation of structures for task parameters
- Interrupt entry points (calling the corresponding tasks)
### Invariants and key properties
Key properties include:
- Race-free execution (each resource is exclusively accessed)
- Deadlock-free execution
Both rely on the RTFM (SRP based) execution model, and a correct implementation thereof. A key component here is the implementation of `claim` in the `cortex-m-rtfm` library.
```rust
pub unsafe fn claim<T, R, F>(
data: T,
ceiling: u8,
_nvic_prio_bits: u8,
t: &mut Threshold,
f: F,
) -> R
where
F: FnOnce(T, &mut Threshold) -> R,
{
if ceiling > t.value() {
let max_priority = 1 << _nvic_prio_bits;
if ceiling == max_priority {
atomic(t, |t| f(data, t))
} else {
let old = basepri::read();
let hw = (max_priority - ceiling) << (8 - _nvic_prio_bits);
basepri::write(hw);
let ret = f(data, &mut Threshold::new(ceiling));
basepri::write(old);
ret
}
} else {
f(data, t)
}
}
```
As seen, the implementation is fairly simple. `ceiling` here is the resource ceiling for the static data `T`, and `t` is the current `Threshold`. If `ceiling <= t.value()` we can directly access it by executing the closure (`f(data, t)`), else we need to *claim* the resource before access. Claiming has two cases:
- `ceiling == max_priority` => here we cannot protect the resource by setting `BASEPRI` (masking priorities), and instead use `atomic` (which executes the closure `|t| f(data, t)` with globally disabled interrupts, `PRIMASK = true`).
- `ceiling != max_priority` => here we store the current system ceiling (`old = basepri::read()`), set the new system ceiling `basepri::write(hw)`, execute the closure `ret = f(data, &mut Threshold::new(ceiling))`, restore the system ceiling, `basepri::write(old)`, and return the result `ret`. The `PRIMASK` and `BASEPRI` registers are located in the `Private Peripheral Bus` memory region, which is `Strongly-ordered` (meaning that accesses are executed in program order). I.e., the next instruction following `basepri::write(hw)` (inside the `claim`) will be protected by the raised system ceiling. [Arm doc - memory barriers](https://static.docs.arm.com/dai0321/a/DAI0321A_programming_guide_memory_barriers_for_m_profile.pdf)
Race freeness at this level can be argued from:
- Each *resource* is associated with a *ceiling* according to SRP. The `app!` procedural macro computes the ceilings from the tasks defined and the resources (declared and) used. How do we ensure that a task cannot access a resource not declared as used in the `app!`?
The only resources accessible are those passed in the argument to the task (e.g., `EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources`). There is also no way in *safe* code to leak a reference to a resource through statics (global memory) to another task. Notice though that it is perfectly OK to pass, e.g., `&mut LOW` to a subroutine. In this case the subroutine will execute in the task's *context*.
Another thing achieved here is that the Rust semantics for non-aliased mutability is ensured. (Essentially, a nested claim of the same resource would be illegal in Rust, since `claim` passes a mutable reference to the *inner* data.) This cannot happen as `claim` takes a `mut T`.
```rust
...
LOW.claim_mut(b, t, |_low, b, t| {
rtfm::bkpt();
LOW.claim_mut(b, t, |_high, _, _| {
rtfm::bkpt();
});
});
...
```
would be rejected
```
error[E0499]: cannot borrow `LOW` as mutable more than once at a time
--> examples/nested_new.rs:100:29
|
100 | LOW.claim_mut(b, t, |_low, b, t| {
| --- ^^^^^^^^^^^^ second mutable borrow occurs here
```
Trying to bluntly copy (clone) a resource handler will also fail.
```rust
let mut LOWC = LOW.clone();
error[E0599]: no method named `clone` found for type `_resource::LOW` in the current scope
--> examples/nested_new.rs:100:24
|
100 | let mut LOWC = LOW.clone();
```
- Accessing a *resource* from *safe* user code can only be done through the `Resource::claim/claim_mut` trait, calling the generic library function `claim`
- The `claim` implementation together with the `NVIC`, `BASEPRI` and `PRIMASK` enforces the SRP dispatch policy.
However there is more to it:
What if the user could fake (or alter) the `t` (`Threshold`)? Well, in that case the `claim` might give unprotected access. This is prevented by using an *opaque* data type `Threshold` in the `rtfm-core` lib.
```rust
pub struct Threshold {
value: u8,
_not_send: PhantomData<*const ()>,
}
```
The `value` field is not accessible to the user directly (and the user cannot alter or create a new `Threshold`) and the API to `Threshold::new()` is *unsafe*, i.e.,
```rust
...
*_t.value = 72; // attempt to fake Threshold
let t = Threshold::new(0); // attempt to create a new Threshold
...
```
will render:
```rust
Compiling cortex-m-rtfm v0.2.1 (file:///home/pln/course/nucleo-64-rtfm)
error[E0616]: field `value` of struct `rtfm::Threshold` is private
--> examples/nested_new.rs:135:6
|
135 | *_t.value = 72;
| ^^^^^^^^
|
= note: a method `value` also exists, perhaps you wish to call it
error[E0133]: call to unsafe function requires unsafe function or block
--> examples/nested_new.rs:135:13
|
135 | let t = Threshold::new(0);
| ^^^^^^^^^^^^^^^^^ call to unsafe function
```
## The generated code in detail
Procedural macros in Rust are executed before code generation (causing the argument AST to be replaced by a new AST for the remainder of compilation).
The intermediate code (AST after expansion) can be exported by the `cargo` sub-command `export`.
```
> cargo export examples nested > expanded.rs
```
or
```
> xargo export examples nested > expanded.rs
```
Let us study the `nested` example in detail.
```rust
app! {
device: stm32f40x,
resources: {
static LOW: u64 = 0;
static HIGH: u64 = 0;
},
tasks: {
EXTI0: {
path: exti0,
priority: 1,
resources: [LOW, HIGH],
},
EXTI1: {
path: exti1,
priority: 2,
resources: [LOW],
},
EXTI2: {
path: exti2,
priority: 3,
resources: [HIGH],
},
},
}
```
---
### Auto generated `main`
The intermediate AST defines the following `main` function.
```rust
fn main() {
let init: fn(stm32f40x::Peripherals, init::Resources) = init;
rtfm::atomic(unsafe { &mut rtfm::Threshold::new(0) }, |_t| unsafe {
let _late_resources =
init(stm32f40x::Peripherals::all(), init::Resources::new());
let nvic = &*stm32f40x::NVIC.get();
let prio_bits = stm32f40x::NVIC_PRIO_BITS;
let hw = ((1 << prio_bits) - 3u8) << (8 - prio_bits);
nvic.set_priority(stm32f40x::Interrupt::EXTI2, hw);
nvic.enable(stm32f40x::Interrupt::EXTI2);
let prio_bits = stm32f40x::NVIC_PRIO_BITS;
let hw = ((1 << prio_bits) - 1u8) << (8 - prio_bits);
nvic.set_priority(stm32f40x::Interrupt::EXTI0, hw);
nvic.enable(stm32f40x::Interrupt::EXTI0);
let prio_bits = stm32f40x::NVIC_PRIO_BITS;
let hw = ((1 << prio_bits) - 2u8) << (8 - prio_bits);
nvic.set_priority(stm32f40x::Interrupt::EXTI1, hw);
nvic.enable(stm32f40x::Interrupt::EXTI1);
});
let idle: fn() -> ! = idle;
idle();
}
```
Essentially, the generated code initiates the peripheral and resource bindings in an `atomic` section (with the interrupts disabled). Besides first calling the user defined function `init`, the generated code also sets the interrupt priorities and enables the interrupts (tasks).
---
### Allocation of resources
The allocation of memory for the system resources is done using (global) `static mut`, with resource names prefixed by `_`. Resources can only be accessed from user code through the `Resource` wrapping, initialized at run time.
```rust
static mut _HIGH: u64 = 0;
static mut _LOW: u64 = 0;
```
---
### Auto generated `init` arguments
All resources and peripherals are passed to the user `init` as defined in the generated `_initResources`. The auto generated code implements a module `init` holding the resource handlers.
```rust
pub struct _initResources<'a> {
pub LOW: &'a mut rtfm::Static<u64>,
pub HIGH: &'a mut rtfm::Static<u64>,
}
#[allow(unsafe_code)]
mod init {
pub use stm32f40x::Peripherals;
pub use _initResources as Resources;
#[allow(unsafe_code)]
impl<'a> Resources<'a> {
pub unsafe fn new() -> Self {
Resources {
LOW: ::rtfm::Static::ref_mut(&mut ::_LOW),
HIGH: ::rtfm::Static::ref_mut(&mut ::_HIGH),
}
}
}
}
```
---
### Auto generated `task` arguments
A generic resource abstraction is generated in `_resource`.
```rust
mod _resource {
pub struct HIGH {
_0: (),
}
impl HIGH {
pub unsafe fn new() -> Self {
HIGH { _0: () }
}
}
pub struct LOW {
_0: (),
}
impl LOW {
pub unsafe fn new() -> Self {
LOW { _0: () }
}
}
}
```
In Rust a `mod` provides a *name space*, thus the statically allocated `HIGH` and `LOW` structs are accessed under the names `_resource::HIGH`, `_resource::LOW` respectively.
Code is generated for binding the user API `RES::claim`/`RES::claim_mut` to the library implementation of `claim`. For `claim` the reference is passed as `rtfm::Static::ref_(&_HIGH)`, while for `claim_mut` the reference is passed as `rtfm::Static::ref_mut(&_HIGH)`. Recall here that `_HIGH` is the actual resource allocation.
Similarly code is generated for each resource.
```rust
unsafe impl rtfm::Resource for _resource::HIGH {
type Data = u64;
fn claim<R, F>(&self, t: &mut rtfm::Threshold, f: F) -> R
where
F: FnOnce(&rtfm::Static<u64>, &mut rtfm::Threshold) -> R,
{
unsafe {
rtfm::claim(
rtfm::Static::ref_(&_HIGH),
3u8, // << computed ceiling value
stm32f40x::NVIC_PRIO_BITS,
t,
f,
)
}
}
fn claim_mut<R, F>(&mut self, t: &mut rtfm::Threshold, f: F) -> R
where
F: FnOnce(&mut rtfm::Static<u64>, &mut rtfm::Threshold) -> R,
{
unsafe {
rtfm::claim(
rtfm::Static::ref_mut(&mut _HIGH),
3u8, // << computed ceiling value
stm32f40x::NVIC_PRIO_BITS,
t,
f,
)
}
}
}
```
The `rtfm::Resource` *trait* and `rtfm::Static` type are given through the `rtfm_core` crate.
```rust
pub unsafe trait Resource {
/// The data protected by the resource
type Data: Send;
/// Claims the resource data for the span of the closure `f`. For the
/// duration of the closure other tasks that may access the resource data
/// are prevented from preempting the current task.
fn claim<R, F>(&self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&Static<Self::Data>, &mut Threshold) -> R;
/// Mutable variant of `claim`
fn claim_mut<R, F>(&mut self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&mut Static<Self::Data>, &mut Threshold) -> R;
}
unsafe impl<T> Resource for Static<T>
where
T: Send,
{
type Data = T;
fn claim<R, F>(&self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&Static<Self::Data>, &mut Threshold) -> R,
{
f(self, t)
}
fn claim_mut<R, F>(&mut self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&mut Static<Self::Data>, &mut Threshold) -> R,
{
f(self, t)
}
}
/// Preemption threshold token
///
/// The preemption threshold indicates the priority a task must have to preempt
/// the current context. For example a threshold of 2 indicates that only
/// interrupts / exceptions with a priority of 3 or greater can preempt the
/// current context
pub struct Threshold {
value: u8,
_not_send: PhantomData<*const ()>,
}
impl Threshold {
/// Creates a new `Threshold` token
///
/// This API is meant to be used to create abstractions and not to be
/// directly used by applications.
pub unsafe fn new(value: u8) -> Self {
Threshold {
value,
_not_send: PhantomData,
}
}
/// Creates a `Threshold` token with maximum value
///
/// This API is meant to be used to create abstractions and not to be
/// directly used by applications.
pub unsafe fn max() -> Self {
Self::new(u8::MAX)
}
/// Returns the value of this `Threshold` token
pub fn value(&self) -> u8 {
self.value
}
}
```
---
### Interrupt entry points
Each task is mapped to a corresponding entry in the interrupt vector table. An entry point stub is generated for each task, calling the user defined code. Each task is called with the exact set of resource handlers (and peripherals used), in the above example `EXTI0::Resources`.
```rust
pub unsafe extern "C" fn _EXTI0() {
let f: fn(&mut rtfm::Threshold, EXTI0::Resources) = exti0;
f(
&mut if 1u8 == 1 << stm32f40x::NVIC_PRIO_BITS {
rtfm::Threshold::new(::core::u8::MAX)
} else {
rtfm::Threshold::new(1u8)
},
EXTI0::Resources::new(),
)
}
mod EXTI0 {
pub struct Resources {
pub HIGH: ::_resource::HIGH,
pub LOW: ::_resource::LOW,
}
impl Resources {
pub unsafe fn new() -> Self {
Resources {
HIGH: { ::_resource::HIGH::new() },
LOW: { ::_resource::LOW::new() },
}
}
}
}
```
---
## Performance
As seen, there is quite some auto-generated and library code involved in the task and resource management. To our aid here is the Rust compilation model, allowing for zero-cost abstractions.
The `exti0` task:
```rust
fn exti0(
t: &mut Threshold,
EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources,
) {
rtfm::bkpt();
LOW.claim_mut(t, |_low, t| {
rtfm::bkpt();
HIGH.claim_mut(t, |_high, _| {
rtfm::bkpt();
});
});
}
```
Amounts to the following assembly (including the interrupt entry code).
```
Dump of assembler code for function nested_new::_EXTI0:
0x080005a6 <+0>: movs r1, #224 ; 0xe0
=> 0x080005a8 <+2>: bkpt 0x0000
0x080005aa <+4>: mrs r0, BASEPRI
0x080005ae <+8>: movs r2, #208 ; 0xd0
0x080005b0 <+10>: msr BASEPRI, r1
0x080005b4 <+14>: bkpt 0x0000
0x080005b6 <+16>: mrs r1, BASEPRI
0x080005ba <+20>: msr BASEPRI, r2
0x080005be <+24>: bkpt 0x0000
0x080005c0 <+26>: msr BASEPRI, r1
0x080005c4 <+30>: msr BASEPRI, r0
0x080005c8 <+34>: bx lr
```
The world's fastest preemptive scheduler for tasks with shared resources is at hand! (We challenge anyone to beat RTFM!)
# How low can you go
An observation here is that we read `BASEPRI` in the inner claim:
```
0x080005b6 <+16>: mrs r1, BASEPRI
```
even though we actually know that `BASEPRI` will hold the value in `r1` at this point.
In an experimental version of the RTFM implementation this observation has been exploited.
```
Dump of assembler code for function nested_new::_EXTI3:
0x080005d0 <+0>: movs r1, #224 ; 0xe0
0x080005d2 <+2>: movs r2, #208 ; 0xd0
=> 0x080005d4 <+4>: bkpt 0x0000
0x080005d6 <+6>: mrs r0, BASEPRI
0x080005da <+10>: msr BASEPRI, r1
0x080005de <+14>: bkpt 0x0000
0x080005e0 <+16>: msr BASEPRI, r2
0x080005e4 <+20>: bkpt 0x0000
0x080005e6 <+22>: msr BASEPRI, r1
0x080005ea <+26>: msr BASEPRI, r0
0x080005ee <+30>: bx lr
```