Are We Embedded Yet
=====================
## D7018E - Special Studies in Embedded Systems
Disclaimer: This document is in beta; it is only meant to give a hint of how the course will look.
The course will be given as a self-study course with a set of introductory seminars and accompanying mandatory assignments, followed by a larger assignment (project). The project can be carried out individually or in groups depending on size. Grading will be individual, based on requirements agreed between the student and the teacher, showing understanding and abilities regarding:
1. The Rust ecosystem. [ecosystem](doc/Ecosystem.md)
8. Building and debugging embedded code in Rust
Hardware abstractions using svd2rust (auto-generated from vendor-provided SVD specifications). Compiling using xargo. Setting up openocd and gdb.
9. Pre-processing
Custom build.rs build scripts, and the RTFM Concurrent Reactive Component model.
There will be two presentation rounds (at the end of LP2 and LP3). Students taking (too many other) courses during LP2 may choose to present their project at the end of LP3 (instead of LP2). Presentations will be oral, where the student(s) will present and demonstrate their work.
Projects should be related to embedded programming, either on the target side (some application using the RTFM-core or CRC model), or on the host side, communicating with an embedded target running Rust RTFM. E.g., two groups can work together with building a system, e.g., with back-end processing of data collected by the embedded system, or by providing a front-end to the embedded system. Alternatively, host side project could relate the development of the RTFM-core/ RTFM-CRC frameworks or related tools (e.g. LLVM-KLEE as a back-end for analysis of Rust code).
## Resources
Students will carry out the assignments on their personal laptops (in case you don't have a working laptop we will try to lend you one). The tools used are available for Linux and OSX, but with a bit of trickery Windows-based installations should be possible (you are on your own here). In case you don't run OSX/Linux natively, VirtualBox or VMware is possible, though debugging target MCUs, while feasible, is a bit more tricky.
* Topic [Memory](doc/Memory.md)
In-depth discussion of underlying theory, linear types (relation to functional programming). The *Affine* type system of Rust, requirements on the programmer, and guarantees offered by the compiler. Lifetimes, of stack allocated and global variables. Relation to C++ `unique pointers`.
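The affine (move) semantics discussed above can be sketched on the host; this is an illustrative toy, not course material:

``` rust
// Sketch: Rust's affine type system -- an owned value may be consumed
// (moved) at most once; the compiler rejects any later use.
fn consume(s: String) -> usize {
    s.len() // `s` is dropped when this function returns
}

fn main() {
    let s = String::from("hello");
    let n = consume(s); // ownership of `s` moves into `consume`
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    assert_eq!(n, 5);
    println!("len = {}", n);
}
```

This is also the semantics behind C++ `unique_ptr`, except that in Rust use-after-move is a compile error rather than a runtime null.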
* Assignment 3
a. Recall the D0013E course lab2/4, where you decrypted a message in assembler (lab2) and C (lab 4). Now, let's re-implement the lab in Rust (base your development on group number [1's](http://www.sm.luth.se/csee/courses/smd/D0013E/labs/lab1underlag/grupp_01.lab1_underlag.s ) lab assignment).
The `seed`, `abc`,`coded` and `plain` should be stack allocated. The decoded string should be printed when decryption is finished.
b. Make the `seed`, `abc`,`coded` and `plain` static (heap) allocated (i.e., as global variables). Accessing those will require some `unsafe` code. (Keep the unsafe blocks as local as possible.)
c. Safety analysis. Provoke the implementation by omitting the `'\0'` (null termination). Observe the result and motivate the behavior in terms of your understanding of the Rust memory model. Under which circumstances do you consider 3a and 3b to have the same/different memory safety?
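The 3b pattern (a global buffer with `unsafe` kept as local as possible) can be sketched on the host; the names and the XOR "cipher" here are illustrative only, not the actual lab cipher:

``` rust
// Hypothetical sketch: a static (global) mutable buffer, with each
// unsafe access confined to its own minimal unsafe block.
static mut PLAIN: [u8; 4] = [0; 4];

// placeholder "decryption": XOR with a seed byte (not the real lab cipher)
fn decode_byte(seed: u8, b: u8) -> u8 {
    b ^ seed
}

fn main() {
    let seed = 0x2a;
    let coded = [b'a' ^ seed, b'b' ^ seed, b'c' ^ seed, 0]; // '\0'-terminated
    for (i, &b) in coded.iter().take(3).enumerate() {
        // keep each unsafe access to the global buffer as local as possible
        unsafe {
            PLAIN[i] = decode_byte(seed, b);
        }
    }
    // copy the array out of the static before borrowing it
    let bytes = unsafe { PLAIN };
    let plain = core::str::from_utf8(&bytes[..3]).unwrap();
    assert_eq!(plain, "abc");
    println!("{}", plain);
}
```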
Finish assignment 3. Bring a USB mini cable, and/or your Cortex-M dev board of choice. We will provide Nucleo-64s (STM32F401RE/STM32F411RE) if you do not have a board.
* Topic
Embedded programming in Rust. Check this [document](doc/Quickstart.md)
* xargo for building non-`std` (bare metal) systems
* `cortex-m-quickstart`, project template
* `cortex-m`, crate common to all Cortex-M devices
* `stm32f103xx` and `stm32f40x`, device crates
* `blue-pill` and `nucleo` board support crates
* Building and debugging your first application.
* Assignment 4
a. Backport assignment `3b` to your chosen target. Use semi-hosting in order to `write` the resulting string to the host. You may need to use `--release` for decoding the long (`coded`) message, as being deeply recursive unoptimized code may run out of stack memory.
b. Discuss from a memory safety perspective the outcome.
c. Compare, for the short message (`abc`), the number of cycles required for `decode` in debug (standard) vs. `--release`. As a comparison, my straightforward C implementation took 2200 cycles in best optimized mode using `gcc` (`-O3`), while my translation to Rust took 1780 cycles (`--release`). (Both executed on a bluepill board at 8MHz without (flash) memory wait states.)
Make a new git for your embedded development. Make three branches (`4a, 4b, 4c`) with updated documentation according to the above.
5. Advanced Rust Concepts
* Preparation
Be prepared to present the progress on assignment 4.
* Topic
Advanced Rust features, trait system and closures.
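The trait system and closures can be sketched with a small host-runnable toy (all names here are illustrative):

``` rust
// Sketch: a trait, a generic function bound by `Fn`, and a closure
// capturing its environment.
trait Describe {
    fn describe(&self) -> String;
}

struct Led {
    on: bool,
}

impl Describe for Led {
    fn describe(&self) -> String {
        if self.on { "on".into() } else { "off".into() }
    }
}

// generic over any closure (or function) from &Led to String
fn apply<F: Fn(&Led) -> String>(led: &Led, f: F) -> String {
    f(led)
}

fn main() {
    let led = Led { on: true };
    let prefix = "LED is ";
    // the closure captures `prefix` from its environment by reference
    let msg = apply(&led, |l| format!("{}{}", prefix, l.describe()));
    assert_eq!(msg, "LED is on");
    println!("{}", msg);
}
```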
* [13 - Functional Language Features in Rust](https://doc.rust-lang.org/book/second-edition/ch13-00-functional-features.html).
* Assignment
Continue working on assignment 4.
6. Memory Safe Concurrency
* Preparation
* Finish assignment 4 and be prepared to show your solution.
* Topic
* UnsafeCell, and synchronization in the RTFM model.
* [cortex-m-rtfm](https://github.com/japaric/cortex-m-rtfm) The RTFM-core (task and resource model) in Rust for the Cortex-M family
* [svd2rust](https://github.com/japaric/svd2rust) Generating device crates (register-level APIs) from vendor-provided SVD files
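Interior mutability via `UnsafeCell`, which underlies resource synchronization in the RTFM model, can be sketched on the host. This is an illustrative toy, not the actual RTFM API; in the real framework exclusive access is guaranteed by priority ceilings, here simply by single-threaded execution:

``` rust
use core::cell::UnsafeCell;

// Sketch: UnsafeCell is the primitive that allows mutation through a
// shared reference; a safe claim-style API is built on top of it.
struct Resource<T> {
    data: UnsafeCell<T>,
}

impl<T> Resource<T> {
    fn new(v: T) -> Self {
        Resource { data: UnsafeCell::new(v) }
    }

    // Sound only under an external guarantee of exclusive access
    // (single-threaded here; priority ceilings in RTFM).
    fn claim_mut<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        f(unsafe { &mut *self.data.get() })
    }
}

fn main() {
    let counter = Resource::new(0u32);
    counter.claim_mut(|c| *c += 1);
    counter.claim_mut(|c| *c += 1);
    let v = counter.claim_mut(|c| *c);
    assert_eq!(v, 2);
    println!("{}", v);
}
```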
* Assignment 5
Implement a simple system with two tasks
* a periodic task executing each 10ms, that blinks the onboard LED, and
* a USART task receiving commands (pause, start, period)
* a shared resource (data structure) protecting the command and period
You may use the core SysTick timer (relative) and the DWT cycle counter (absolute) in combination to achieve drift-free timing. Alternatively, look into the stm32f4xx timer peripherals. There is a support crate for the [STM32F3DISCOVERY](https://github.com/japaric/f3) board; peripherals are similar, so you may "borrow" code from there.
Make a new git with the development and documentation.
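The drift-free timing idea can be sketched on the host (hypothetical names; on target you would read SysTick/DWT instead): release times are derived from an absolute baseline, `baseline + n * period`, rather than from the previous activation, `now + period`, where lateness would accumulate as drift.

``` rust
// Sketch: absolute vs. relative scheduling of a periodic task.
fn release(baseline: u64, n: u64, period: u64) -> u64 {
    baseline + n * period
}

fn main() {
    let (baseline, period) = (0u64, 10u64); // e.g., a 10 ms blink period
    let mut drift = 0u64;
    for n in 1..=5u64 {
        // suppose each activation actually starts 3 ms late
        let actual_start = release(baseline, n - 1, period) + 3;
        // relative scheduling (actual_start + period) would accumulate this:
        drift += (actual_start + period) - release(baseline, n, period);
        // absolute scheduling stays on the grid:
        assert_eq!(release(baseline, n, period), n * period);
    }
    assert_eq!(drift, 15); // 3 ms per period, accumulated over 5 periods
    println!("drift avoided: {} ms", drift);
}
```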
7. Macros and Projects (Monday Nov. 20th)
* Preparation
Be prepared to present the progress on assignment 5.
Optional:
Find a way to measure the power consumption. A possible solution is to power the board externally and use a power cube with current-measuring capability. Alternatively, use an external power source with a known charge (e.g., a "capacitor"), and measure the discharge time (start and residual charge at brown-out voltage); at least a precise relative measure is possible to obtain. For operation without being connected to the USB port, the serial IO and ITM need to be connected externally (e.g., using some FTDI serial-USB adapter).
Super optional:
Try to minimize power consumption while maintaining desired operation. Lowering the supply voltage and using aggressive power modes of the processor might be applied. (Not sure how USART/ITM communication can be made possible at sub-3.3V voltages. Also, you have to make sure not to source the board over the communication interfaces.)
* Topic
- We will cover the implementation of the rtfm-core and the cortex-m-rtfm crates. For details see [RTFM](doc/RTFM.md). Special focus on `macro_rules` and `procedural macros`.
- RTFM-CRC, a component model for reactive real-time programming. We will cover the programming model and the implementation, including the `build.rs`, parsing of model files and generation of Rust code.
- Discussion of project ideas
* Assignment
Write a project specification including individual grading assessment criteria.
8. Wrap-up (Monday Dec. 4th)
* Preparation
* Be prepared to present assignment 5.
* Be prepared to present your project, 10 minutes per project.
* A good idea is to prepare a git for the project with a `README.md` and use that as supporting material for your presentation. It is advisable to have a section (or a doc sub-folder) where you collect references to the material you will use, i.e., links to data sheets and links to other related crates and projects of importance to your project.
This type of reference section will be largely helpful both to you (during the project and afterwards, maintaining it) and to other users and people interested in your work. Moreover, from a "course" perspective it shows that you have done the necessary background studies BEFORE you "hack away". Of course this will be a living document, updated throughout the project, but it is a very good thing to start it NOW; then it can be used for your 10 minutes of fame!
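As a toy illustration of the `macro_rules` mechanism covered in the Macros seminar (the names and generated items below are purely illustrative, loosely in the spirit of task-generating macros, not the actual rtfm macros):

``` rust
// Sketch: a declarative macro generating boilerplate items from a
// compact declaration list.
macro_rules! tasks {
    ($($name:ident => $body:expr),* $(,)?) => {
        $(fn $name() -> u32 { $body })*
    };
}

// expands to `fn blink() -> u32 { 1 + 1 }` and `fn usart() -> u32 { 21 * 2 }`
tasks! {
    blink => 1 + 1,
    usart => 21 * 2,
}

fn main() {
    assert_eq!(blink(), 2);
    assert_eq!(usart(), 42);
    println!("ok");
}
```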
The [rustup](https://www.rustup.rs/) toolchain manager allows you to manage multiple toolchain installations. Rust is distributed in three channels (`stable`, `beta` and `nightly`). You may set the default toolchain:
```
rustup default nightly-2018-01-10-x86_64-unknown-linux-gnu
```
and get information on the status of `rustup`
```
rustup show
```
Nightly toolchains allow for the development of libraries and applications including `unsafe` code using features not available on the `stable` channel (which will be necessary for the later exercises). For some tools to work (`rls`/`rustfmt`), you need to install additional components. For this to work, you should use a nightly toolchain for which all tools and components work (currently `nightly-2018-01-10` is the latest). Here is an example:
```
rustup default nightly-2018-01-10
rustup component add rls-preview
rustup component add rust-analysis
rustup component add rust-src
See [rls](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust) for installing the RLS extension.
You will need to pin the specific toolchain version used, by setting the `"rust-client.channel": "nightly-2018-01-10"` in your `vscode` *user* settings (this will be stored in a file `~/.config/Code/User/settings.json` and used for all your `vscode` projects. Settings may be set individually for each *workspace*, overriding the defaults. Regarding the `"rust-client.channel"` setting, a *workspace* setting would force the specific version (overriding the default), and may not work when the code is distributed (as other developers may be on other toolchains).
For RLS to work, `vscode` needs a path to the `rls-preview` library (via the environment variable `LD_LIBRARY_PATH` (Linux) or `DYLD_LIBRARY_PATH` (OSX)).
In short, the compilation process can be broken down into the following steps:
1. Parsing input
* this processes the .rs files and produces the AST ("abstract syntax tree")
* the AST is defined in syntax/ast.rs. It is intended to match the lexical syntax of the Rust language quite closely.
2. Name resolution, macro expansion, and configuration
* once parsing is complete, we process the AST recursively, resolving paths and expanding macros. This same process also processes `#[cfg]` nodes, and hence may strip things out of the AST as well.
3. Lowering to HIR
* Once name resolution completes, we convert the AST into the HIR, or "high-level IR".
* The HIR is a lightly desugared variant of the AST. It is more processed than the AST and more suitable for the analyses that follow.
4. Type-checking and subsequent analyses
* An important step in processing the HIR is to perform type checking. This process assigns types to every HIR expression, and also is responsible for resolving some "type-dependent" paths, such as field accesses (`x.f`)
5. Lowering to MIR and post-processing
* Once type-checking is done, we can lower the HIR into MIR ("middle IR"), which is a very desugared version of Rust.
This is where borrow checking is done!
6. Translation to LLVM and LLVM optimizations
* From MIR, we can produce LLVM IR.
LLVM then runs its various optimizations, which produces a number of .o files (one for each "codegen unit").
7. Linking
Finally, those .o files are linked together.
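As a concrete illustration of the desugaring performed during lowering, a `for` loop is rewritten into the explicit iterator protocol, roughly as in this host-runnable sketch:

``` rust
// Sketch: what `for x in 0..3 { ... }` roughly desugars to.
fn sum_for() -> i32 {
    let mut sum = 0;
    for x in 0..3 {
        sum += x;
    }
    sum
}

fn sum_desugared() -> i32 {
    // approximate desugaring of the loop above
    let mut sum = 0;
    let mut iter = (0..3).into_iter();
    while let Some(x) = iter.next() {
        sum += x;
    }
    sum
}

fn main() {
    assert_eq!(sum_for(), sum_desugared());
    println!("{}", sum_for()); // prints 3
}
```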
### LLVM
# STM32 Nucleo-64 Board
A collection of documentation, tricks and tips regarding the STM32 Nucleo-64 Board (development kit).
We will mainly cover the `stm32f401re` and `stm32f411re` as these models (main difference is the higher maximum clock frequency of the `stm32f411re`).
---
## ST-Link interface
The nucleo-64 platform has an onboard `stlink-v2.1` SWD interface (programmer) supported by `openocd`. By default the programmer is connected to the target (`stm32f401re` or similar).
---
### Programming external targets
You may use the board as a programmer (connector `CN4` SWD), in that case you should remove the `CN3` jumpers, and optionally desolder `SB15` (SWO) and `SB12` (NRST). See board documentation for details.
## Links
* [Nucleo-64 board](http://www.st.com/content/ccc/resource/technical/document/user_manual/98/2e/fa/4b/e0/82/43/b7/DM00105823.pdf/files/DM00105823.pdf/jcr:content/translations/en.DM00105823.pdf)
Board documentation.
* [Firmware Upgrade](http://www.st.com/en/development-tools/stsw-link007.html)
Standalone java program.
# Clocking
* [Note on Overclocking](https://stm32f4-discovery.net/2014/11/overclock-stm32f4-device-up-to-250mhz/)
# Course description
In order to be officially enrolled in the course, each student needs to write (and submit) an individual course plan including grading goals and assessments based on this document.
Assignments are mandatory (as detailed in the README.md).
- Each student should, for each of assignments 2, .., 5, comment on one other `git` (make an `issue` and/or `pull request`). Comments should be meaningful and constructive. For assignments 2, .., 5 you should comment on different groups. Strive to spread comments so that each group gets at least one comment for each assignment.
- Each student/group should attend to `issues`/`pull requests` on their own git
Projects should aim to further cover the learning goals as stated in the README.md.
## Bloom's taxonomy
Bloom's taxonomy assesses the (increasing) level of understanding as:
- Remember
- Understand
- Apply
- Analyze
- Evaluate
- Create
(In the original taxonomy Evaluate was set at a higher level than Create.)
Well, it's not the only one.
## The SOLO taxonomy
The SOLO taxonomy can be summarised:
- Prestructural - Incompetence: fail, incompetent, missing the point
- Unistructural - One relevant aspect: identify, name, follow simple procedure
- Multistructural - Several relevant independent aspects: combine, describe, enumerate, perform serial skills, list
- Relational - Aspects integrated into a structure: analyze, apply, argue, compare/contrast, criticize, explain causes, relate, justify
- Extended Abstract - Aspects generalized to a new domain: create, formulate, generate, hypothesize, reflect, theorize
The advantage of the SOLO taxonomy is that you have progression towards generalization (while the Bloom taxonomy allows progression in isolation).
How can such "mumbo jumbo" be useful towards setting the goals and assessment criteria for Your project? Project assessment is not an easy task and there is no single correct answer to it, so let's study an example.
---
## Example: HW AES support for Cortex-M
- Grade 5. The project aims to `create` an API and library allowing end-user memory-safe access to the underlying AES hardware of the target platform. The API will provide the features A, B, C, and D, with correctness argued from the evaluation.
- Grade 4. The API/library will be based on an `evaluation` (analysis) of requirements and possible solutions based on the Rust memory model and the `cortex-m-rtfm` task/resource model.
- Grade 3. Low-level access to the AES hardware, by `applying` provided primitives of the Rust language and the `cortex-m-rtfm` library.
---
## Grading
So basically, for grade 3 you show that you understand and can apply known methods and provided material. For grade 4, you show that you can make judgements on design choices. And for grade 5, you put it all together: practical understanding and theoretical judgements. In the example, for grade 5 you will show that you can combine knowledge and understanding of both software and hardware architecture with the theoretical concerns of a sound implementation.
Assessment/evaluation is individual. If you work together in a group, you should detail the description, e.g., student X will focus on features A and B, while student Y will focus on C and D.
## Amount versus quality
The amount of work required to make a complete solution is VERY hard to predict (even industry struggles with such questions). However, in industry a track record of previous projects serves as a baseline; here we face a much harder problem, as you are likely new to the topic.
Thus, if during the project you see that covering a complete solution (in this case features A-D) is out of reach (or that while doing the grade 3 work you find that B and D are not possible for some reason), you are free to make restrictions (meaning that it is OK to drop features, provided a motivation). However, quality is not to be compromised. Shortcutting the design/evaluation in favour of more features is NOT getting you a higher grade. Ultimately, you can still get a grade 5 even if the project fails to meet its goals (provided that the quality of the work done holds up). This is where industry and academia largely differ!
## Engineering
You are becoming engineers; to that end I believe Bloom's taxonomy to be a good fit (simple to apply): you will become experts at engineering in your field. Of course this does not prohibit generalization, but the point here is NOT to show the socioeconomic impact and political issues of AES encryption, but rather to engineer a solution. (Actually, we are subject to political issues regarding AES - due to US export restrictions the Nucleo boards are shipped without HW AES, but that's another story.)
# Instructions, what to turn in
For each student
Create a git with a README.md with
- Course name : "d7018e - special studies in embedded systems"
- name, mail address and personal number
- title of your project
- project description (here you may share text with your partner) and grading goals (individual)
I will give feedback on the git (as an issue). When we have an agreement, you will print the README.md as a `pdf` and send it to edusrt@ltu.se (with the title "d7018e - special studies in embedded systems"), so you can be officially admitted and enrolled.
# Suggested projects
---
## Printing device (William)
- Rotating stick with LEDs at the end of the stick that can "print" text by switching the LEDs on and off with the right timings.
---
## Seer (Nils)
Symbolic execution engine for MIR internal format
- Study and understand the Z3 API
- Study and understand the user API (maybe add more functionality)
- Study outsets for program verification based on seer
---
## LED Audio (John)
- Modulate LED colors and intensity according to audio input
---
## Drivers for NXP (Axel)
---
## WCET analysis for RTFM models using KLEE (Henrik)
- Automated testbed, integrated as a cargo sub-command
---
## USB-Hid (Johannes)
---
## ETM Tracing
- Develop an API for setting up ETM trace
[ARM](https://www.arm.com/files/pdf/AT_-_Advanced_Debug_of_Cortex-M_Systems.pdf)
---
## AES Encryption in hardware (Viktor)
- Develop an API for hardware-supported AES encryption
---
## CAN bus API and Wheel Sensor implementation
- Develop a CAN bus API for cortex-m0
- Implement a wheel sensor for existing model car
---
## Stack Memory Analysis
- Seer or KLEE based path/call graph extraction
- Target code analysis, per function stack usage
- Static worst case stack analysis for RTFM and/or RTFM-TTA
---
## Ethernet driver for TCP/UDP/IP stack (Jonas)
- Develop driver and integrate to existing TCP/UDP/IP stack
---
## Nucleo 64 support crate
- Drivers for the Nucleo 64, stm32f401re/stm32f411re, similar to the f3/bluepill support crates
---
## Time Triggered Architecture (RTFM-TTA)
- Periodic timers
- Communication channels/message buffers
- Static analysis (for safely bound buffers)
- Static analysis for data aging (optimal ordering?)
---
## Your ideas...
# Quickstart: a template for Cortex-M development
## Abstraction layers
![Abstraction layers](/assets/cortex-m-layers.png)
- `cortex-m` is a crate that provides an API to use functionality common to all Cortex-M
microcontrollers.
- `stm32f30x` is a *device crate*. It provides an API to access the hardware of a device. This crate
is automatically generated from a [SVD file][svd] and provides a low level API to manipulate
registers.
[svd]: https://github.com/posborne/cmsis-svd/blob/master/data/STMicro/STM32F103xx.svd
- `f3` is a *board support crate*. It provides a higher level API (`Serial`, `I2C`, etc.) tailored
to a specific development board.
- `cortex-m-rt` is a minimal "runtime" that handles initialization of RAM and provides the default
exception handling behavior. It also gives your program the required memory layout.
- `???` is a concurrency framework that we'll introduce in a later lecture.
## Dependencies for development
- `arm-none-eabi-binutils`, linker
- `arm-none-eabi-gdb`, debugger
- `openocd`, for flashing / debugging the device
- `xargo`, for compiling the `core` crate. Xargo is a Cargo wrapper -- it has the exact same UI.
Xargo takes care of building the `core` crate and linking it to your program / library.
- And other handy Cargo subcommands
### Linux
- Arch Linux
``` console
$ sudo pacman -Sy arm-none-eabi-{binutils,gdb}
```
OpenOCD is in AUR:
```
$ yaourt -S openocd
```
For hardware association and pre-packaged udev rules, also install:
```
$ sudo pacman -S stlink
```
### macOS
``` console
$ brew cask install gcc-arm-embedded
$ brew install openocd
```
If the brew cask command doesn't work (Error: Unknown command: cask), then run `brew tap
Caskroom/tap` first and try again.
### Windows
Installers below
- [`arm-none-eabi` toolchain](https://launchpad.net/gcc-arm-embedded/5.0/5-2016-q3-update/+download/gcc-arm-none-eabi-5_4-2016q3-20160926-win32.exe)
- [OpenOCD](http://sysprogs.com/files/gnutoolchains/arm-eabi/openocd/OpenOCD-20170821.7z). Unzip to
your C (system) drive
## All platforms
``` console
$ # we have to use the nightly channel for embedded development
$ rustup default nightly
$ cargo install cargo-clone xargo
```
> **NOTE** If the `cargo install` fails you may need to install `pkg-config`. In Arch this can be
> accomplished with the `pacman -S pkg-config` command.
## Demo
In the first part of the demo we'll use command line tools in the terminal then we'll transition to
the Visual Studio Code IDE. It's a good idea to get familiar with the command line tools. The IDE is
nice because it calls these tools with the right arguments for you but when things go south it pays
off to understand what the IDE is doing under the hood.
### Creating a new project
These steps will give you a minimal Cortex-M project. If you run into any problem running these
commands check out the [troubleshooting guide][troubleshoot].
[troubleshoot]: https://docs.rs/cortex-m-quickstart/0.2.1/cortex_m_quickstart/#troubleshooting
``` console
$ # fetch the Cargo project template
$ cargo clone cortex-m-quickstart
$ # rename it as you wish (remember this name! you'll use it later)
$ mv cortex-m-quickstart app
$ cd app
$ # Cargo.toml.orig has a nicer format so let's use that instead of the reformatted one
$ mv Cargo.toml{.orig,}
$ # update the crate name and author
$ $EDITOR Cargo.toml
$ cat Cargo.toml
[package]
authors = ["Jorge Aparicio <jorge@japaric.io>"]
name = "app"
version = "0.1.0"
[dependencies]
cortex-m = "0.3.0"
cortex-m-semihosting = "0.2.0"
[dependencies.cortex-m-rt]
features = ["abort-on-panic"]
version = "0.3.3"
[profile.release]
debug = true
lto = true
$ # we need to specify the memory layout of the device
$ $EDITOR memory.x
$ # for the blue-pill you should have
$ cat memory.x
MEMORY
{
/* NOTE K = KiBi = 1024 bytes */
FLASH : ORIGIN = 0x08000000, LENGTH = 64K
RAM : ORIGIN = 0x20000000, LENGTH = 20K
}
$ # for the NUCLEO-F401RE you should have
$ cat memory.x
MEMORY
{
/* NOTE K = KiBi = 1024 bytes */
FLASH : ORIGIN = 0x08000000, LENGTH = 512K
RAM : ORIGIN = 0x20000000, LENGTH = 96K
}
```
### Hello world
Let's start with the hello world example:
``` console
$ rm -rf src
$ mkdir src
$ cp examples/hello.rs src/main.rs
```
This is the hello world program. You can ignore the `INTERRUPTS` + `default_handler` part -- that's
a generic interrupt table that we'll remove later.
``` rust
#![feature(used)]
#![no_std]
extern crate cortex_m;
extern crate cortex_m_rt;
extern crate cortex_m_semihosting;
use core::fmt::Write;
use cortex_m::asm;
use cortex_m_semihosting::hio;
fn main() {
// get a handle to the *host* standard output
let mut stdout = hio::hstdout().unwrap();
// write "Hello, world!" to it
writeln!(stdout, "Hello, world!").unwrap();
}
// As we are not using interrupts, we just register a dummy catch all handler
#[link_section = ".vector_table.interrupts"]
#[used]
static INTERRUPTS: [extern "C" fn(); 240] = [default_handler; 240];
extern "C" fn default_handler() {
asm::bkpt();
}
```
The new thing here is the `#![no_std]` attribute. This indicates that this program will *not* link
to the `std`, standard, crate. Instead it will link to the `core` crate. The `core` crate is a
subset of the `std` crate that has no dependencies to OS mechanisms like threads, dynamic memory
allocation, sockets, etc. `core` provides the minimal amount of support to run Rust on a bare metal
system.
### Build and analyze
Let's build this:
``` console
$ # NOTE use `thumbv7m-none-eabi` for the blue-pill, and `thumbv7em-none-eabihf` for the nucleo
$ xargo build --target thumbv7m-none-eabi
```
> The `thumbv7m-none-eabi` target corresponds to the Cortex-M3 architecture. The
> `thumbv7em-none-eabihf` target corresponds to the Cortex-M4F architecture -- note the "F": it
> means that the architecture has hardware support for floating point operations.
This produces an unoptimized binary.
``` console
$ # mind the target name (use thumbv7em-none-eabihf for the nucleo)
$ arm-none-eabi-size target/thumbv7m-none-eabi/debug/app
text data bss dec hex filename
14596 0 0 14596 3904 target/thumbv7m-none-eabi/debug/app
```
Let's rebuild in release mode. To avoid repeating myself I'll create a `$TARGET` variable that
contains the name of the target.
``` console
$ TARGET=thumbv7m-none-eabi
$ xargo build --target $TARGET --release
```
Now the binary is much smaller.
``` console
$ arm-none-eabi-size target/$TARGET/release/app
text data bss dec hex filename
3646 0 0 3646 e3e target/thumbv7m-none-eabi/release/app
```
You can get a breakdown of the memory usage by passing the `-Ax` flag:
``` console
$ arm-none-eabi-size -Ax target/$TARGET/debug/app
section size addr
.vector_table 0x400 0x8000000
.text 0x27f8 0x8000400
.rodata 0xd0c 0x8002c00
.bss 0x0 0x20000000
.data 0x0 0x20000000
```
`.bss` and `.data` hold statically allocated (`static`) variables; there are none in this program.
`.text` holds the program code. `.rodata` holds constants; usually you'll find strings like our
"Hello, world!" in this section. `.vector_table` is a region of memory that holds the vector table.
> **Exercise** Do you remember the start address of the Flash memory and RAM? (hint: memory.x) Which
> sections are located in Flash memory? Which sections are located in RAM?
Another interesting thing to do here is to look at the disassembly of the program:
``` console
$ arm-none-eabi-objdump -CD target/$TARGET/release/app
Disassembly of section .vector_table:
08000000 <_svector_table>:
8000000: 20005000 andcs r5, r0, r0
08000004 <cortex_m_rt::RESET_VECTOR>:
8000004: 08000401 stmdaeq r0, {r0, sl}
08000008 <EXCEPTIONS>:
8000008: 08000639 stmdaeq r0, {r0, r3, r4, r5, r9, sl}
(..)
Disassembly of section .text:
08000400 <cortex_m_rt::reset_handler>:
8000400: b580 push {r7, lr}
8000402: 466f mov r7, sp
8000404: b088 sub sp, #32
8000406: f240 0000 movw r0, #0
800040a: f240 0100 movw r1, #0
800040e: f2c2 0000 movt r0, #8192 ; 0x2000
8000412: f2c2 0100 movt r1, #8192 ; 0x2000
8000416: 4281 cmp r1, r0
(..)
08000638 <BUS_FAULT>:
8000638: f3ef 8008 mrs r0, MSP
800063c: f7ff bffa b.w 8000634 <cortex_m_rt::default_handler>
(..)
Disassembly of section .rodata:
08000d44 <vtable.8>:
8000d44: 080004cb stmdaeq r0, {r0, r1, r3, r6, r7, sl}
8000d48: 00000004 andeq r0, r0, r4
8000d4c: 00000004 andeq r0, r0, r4
8000d50: 080005e9 stmdaeq r0, {r0, r3, r5, r6, r7, r8, sl}
(..)
```
> **Exercise** Compare the contents of the `.vector_table` linker section, see above (or look at
> your local output), to the diagram of the vector table in the [ARM documentation][vector_table].
> What are the values of the "Initial SP value", "Reset", "NMI", "Hard fault" entries according to
> the disassembly? What do these values mean? Investigate how these values are used in the boot
> process and the exception handling mechanism.
[vector_table]: https://developer.arm.com/docs/dui0552/latest/the-cortex-m3-processor/exception-model/vector-table
If you are curious about how the program ended up with this particular memory layout, look at the
linker scripts in the `target` directory -- these scripts instruct the linker where to place things.
``` console
$ # list of linker scripts
$ find target -name '*.x'
target/thumbv7m-none-eabi/release/build/app-4c6a87e0e5f739ae/out/memory.x
target/thumbv7m-none-eabi/release/build/cortex-m-rt-4f13cf879b7980df/out/link.x
```
You can also inspect the exact linker command `rustc` used to link the binary by running the
following command:
``` console
$ xargo rustc --target $TARGET --release -- -Z print-link-args
"arm-none-eabi-ld" "-L" (..)
```
### Flash and debug
To flash the program into the microcontroller we must first connect the device to our laptop. If you
are using a NUCLEO-F401RE you only need to connect a USB cable. If you are using the blue-pill you'll
have to connect an external SWD programmer. The pinout of the blue-pill is shown below;
you'll have to at least connect the GND, SWDIO and SWCLK pins. If you want to power the blue-pill
using the SWD programmer then also connect the 3V3 *or* the 5V pin.
![blue-pill pinout](http://wiki.stm32duino.com/images/a/ae/Bluepillpinout.gif)
Then we have to start OpenOCD. OpenOCD will connect to the SWD programmer (the NUCLEO-F401RE board
has a built-in one) and start a GDB server.
``` console
$ # for the blue-pill
$ openocd -f interface/stlink-v2.cfg -f target/stm32f1x.cfg
$ # for the NUCLEO-F401RE
$ openocd -f interface/stlink-v2-1.cfg -f target/stm32f4x.cfg
```
> **NOTE(Linux)** If you get a permission error when running OpenOCD then you'll need to change the
> udev rules for the SWD programmer you are using. To do that create the following file at
> `/etc/udev/rules.d`.
>
> ``` console
> $ cat /etc/udev/rules.d/99-st-link.rules
> # ST-LINK v2
> SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="3748", MODE:="0666"
>
> # ST-LINK v2-1
> SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", MODE:="0666"
>
> # NUCLEO-F401RE
> SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", MODE:="0666"
> ```
>
> With that file in place call the command `sudo udevadm control --reload-rules`. Then unplug and
> re-plug your SWD programmer. That should fix the permission problem.
You should see some output like this:
``` console
Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
none separate
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : clock speed 1800 kHz
Info : STLINK v2 JTAG v27 API v2 SWIM v15 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 3.268993
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
```
You should definitely get the last line -- maybe with some different numbers. If you don't, that
indicates a problem: it could be a connection problem, or you could have used the wrong
configuration file.
The program will block. That's OK. Leave it running.
Apart from the GDB server OpenOCD also starts a telnet server. You can connect to this server and
issue commands to the SWD programmer.
``` console
$ telnet localhost 4444
> # this is the telnet prompt
> # the following command will reset the microcontroller and halt the processor
> reset halt
adapter speed: 1800 kHz
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08000188 msp: 0x20018000
> exit
```
The documentation of these commands is [here][openocd-commands].
[openocd-commands]: http://openocd.org/doc/html/General-Commands.html
With OpenOCD working now we can flash and debug the program using GDB.
``` console
$ # enable .gdbinit files
$ echo 'add-auto-load-safe-path /' >> ~/.gdbinit
$ arm-none-eabi-gdb target/$TARGET/debug/app
(gdb) # this is the GDB shell
```
The processor will be halted at the entry point. You can print the source code that the processor is
about to execute using the `list` command:
``` console
(gdb) # source code
(gdb) list
331 ///
332 /// This is the entry point of all programs
333 #[cfg(target_arch = "arm")]
334 #[link_section = ".reset_handler"]
335 unsafe extern "C" fn reset_handler() -> ! {
336 r0::zero_bss(&mut _sbss, &mut _ebss);
337 r0::init_data(&mut _sdata, &mut _edata, &_sidata);
338
339 match () {
340 #[cfg(not(has_fpu))]
```
And you can print the machine code that the processor is about to execute using the `disassemble`
command.
``` console
(gdb) disassemble
Dump of assembler code for function cortex_m_rt::reset_handler:
0x08000130 <+0>: push {r7, lr}
0x08000132 <+2>: mov r7, sp
0x08000134 <+4>: sub sp, #32
=> 0x08000136 <+6>: movw r0, #0
0x0800013a <+10>: movw r1, #0
0x0800013e <+14>: movt r0, #8192 ; 0x2000
0x08000142 <+18>: movt r1, #8192 ; 0x2000
```
We can skip to our program's `main` by creating a breakpoint and then calling `continue`.
``` console
(gdb) break app::main
Breakpoint 1 at 0x800045c: file src/main.rs, line 18.
(gdb) continue
Continuing.
Note: automatically using hardware breakpoints for read-only addresses.
Breakpoint 1, app::main () at src/main.rs:18
18 let mut stdout = hio::hstdout().unwrap();
```
Now that we are in `main` we can execute each line of code in this function by repeatedly calling
the `next` command.
``` console
(gdb) next
19 writeln!(stdout, "Hello, world!").unwrap();
(gdb) next
20 }
```
After executing `writeln!` you should see "Hello, world!" printed *on the OpenOCD console*.
``` console
$ openocd -f interface/stlink-v2.cfg -f target/stm32f1x.cfg
(..)
Info : halted: PC: 0x08000aee
Hello, world!
Info : halted: PC: 0x0800049c
(..)
```
One more thing we can do here is to reset the microcontroller using the `monitor` command. `monitor`
will forward the command to the telnet server.
``` console
(gdb) monitor reset halt
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08000400 msp: 0x20005000, semihosting
```
This is the same command we ran before from the `telnet` prompt.
Tip: You can get list of all the GDB commands by entering `help all` in the GDB prompt.
Note that semihosting is *very slow*. Each write operation takes *hundreds* of milliseconds; the
processor will be in a halted state for the duration of the write operation. Semihosting is nice
because it requires no extra wiring, but it's only appropriate for simple programs where timing is
not a concern.
### Device specific program
Let's replace that weird `INTERRUPTS` + `default_handler` with something proper. Change
`src/main.rs` to:
``` rust
#![no_std]
extern crate cortex_m_semihosting;
extern crate stm32f103xx; // heads up! use `stm32f40x` for the NUCLEO-F401RE
use core::fmt::Write;
use cortex_m_semihosting::hio;
fn main() {
// get a handle to the *host* standard output
let mut stdout = hio::hstdout().unwrap();
// write "Hello, world!" to it
writeln!(stdout, "Hello, world!").unwrap();
}
```
What we have done here is replace the *generic* vector table with one tailored for the device we
are targeting. Note that `cortex-m-rt` is gone; that crate is now provided by the device crate.
Before we can compile this we have to tell Cargo where to get the device crate from. This info goes
in the Cargo.toml file:
``` console
$ $EDITOR Cargo.toml
$ cat Cargo.toml
# ..
# for the blue-pill (NOTE use this dependency or the other but not both)
[dependencies.stm32f103xx]
features = ["rt"] # this feature indicates that the device crate will provide the vector table
version = "0.7.5"
# for the NUCLEO-F401RE (NOTE use this dependency or the other but not both)
[dependencies.stm32f40x]
features = ["rt"] # see comment above
git = "https://gitlab.henriktjader.com/pln/STM32F40x"
# ..
```
You should now be able to compile the program again.
``` console
$ xargo build --target $TARGET --release
$ arm-none-eabi-size -Ax target/$TARGET/release/app
section size addr
.vector_table 0x130 0x8000000
.text 0x944 0x8000130
.rodata 0xfc 0x8000a74
.bss 0x0 0x20000000
.data 0x0 0x20000000
```
If you are a careful observer you have probably noticed that the `.vector_table` section is now smaller.
All the interrupts of a device are listed in the vector table, and each device has a different number
of interrupts, so the size of the vector table varies from device to device.
In the original program we were using a "generic" vector table that assumed that the device had 240
interrupts -- that's the maximum number of interrupts a device can have, but devices usually have far
fewer.
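The size difference can be checked with a bit of arithmetic: each vector table entry is one 4-byte word, and on Cortex-M the table starts with the initial SP value plus 15 exception vectors (16 words) before the device interrupts. The helper below is our own sketch, not part of any crate:

``` rust
/// Size in bytes of a Cortex-M vector table with `interrupts` device interrupts:
/// 16 words (initial SP + 15 exception vectors) plus one word per interrupt.
fn vector_table_size(interrupts: usize) -> usize {
    (16 + interrupts) * 4
}

fn main() {
    // The generic table assumed the maximum of 240 interrupts:
    assert_eq!(vector_table_size(240), 0x400); // matches the earlier `size -Ax` output
    // The 0x130 bytes reported for the device-specific build correspond to
    // (0x130 / 4) - 16 = 60 interrupt vectors declared by the device crate.
    assert_eq!(vector_table_size(60), 0x130);
    println!("vector table sizes check out");
}
```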
### Bonus: setting a default target
So far we have always been calling Xargo with the `--target` flag. We can skip that by setting a
default target in `.cargo/config`.
``` console
$ cat >>.cargo/config <<EOF
[build]
target = "$TARGET"
EOF
```
Now you can build your program by simply calling `xargo build` or `xargo build --release`.
## Transitioning to Visual Studio Code
> **NOTE** Here I assume that you have already installed the [vscode-rust] plugin.
[vscode-rust]: https://github.com/editor-rs/vscode-rust
First some cleanup:
- Terminate any open GDB clients connected to the OpenOCD GDB server.
- Remove, or rename, the local `.gdbinit` file.
Now open the `app` folder with VSCode.
``` console
$ code .
```
### Formatting
You can enable format on save by adding `"editor.formatOnSave": true` to the User settings, which you
can open by hitting `Ctrl + ,`.
### Build task
To make the build task work with Cortex-M projects you'll have to tweak the default build task.
Pick from the menu: `Tasks > Configure Tasks...` then pick `Rust: cargo build`. In `tasks.json`
write:
``` js
{
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"taskName": "xargo build",
"command": "xargo",
"args": [
"build"
],
"problemMatcher": [
"$rustc"
]
}
]
}
```
Now pick from the menu: `Tasks > Configure Default Build Task...` and pick `xargo build`.
Now you should be able to build your project by picking `Tasks > Run Build Task...` from the menu or
by hitting the shortcut `Ctrl + Shift + B`.
![Build task](/assets/vscode-build.png)
### Debugging
You'll need to configure Native Debug to work with embedded projects. Pick `Debug > Open
Configurations` from the menu, pick `GDB` from the drop down menu and then write this into
`launch.json`.
``` js
{
"configurations": [
{
"autorun": [
"monitor arm semihosting enable",
"load",
"break app::main" // Heads up: crate name
],
"cwd": "${workspaceRoot}",
"gdbpath": "arm-none-eabi-gdb",
"executable": "./target/thumbv7m-none-eabi/debug/app", // Heads up: target name
"name": "Debug",
"remote": true,
"request": "attach",
"target": ":3333",
"type": "gdb"
}
],
"version": "0.2.0"
}
```
Now you should be able to debug your program by pressing `F5`. Note that (a) you have to build the
program first (e.g. by pressing `Ctrl + Shift + B`) and that (b) the debugger will execute your
program right after flashing the device so you'll always need at least one breakpoint.
![Debug session](/assets/vscode-debug.png)
# Real Time For the Masses
Real Time For the Masses is a set of programming models and tools geared towards developing systems with analytical properties, with respect to, e.g., memory requirements, response time, safety and security.
## History
### RTFM-core
The RTFM-core model offers a static task and resource model for device level modelling, implementation and analysis. The original model has been implemented as a coordination language, embedded and extending the C language with a set of RTFM primitives.
For single core deployment, the input program is analysed under the Stack Resource Policy. The RTFM-core compiler generates code with inlined scheduling and resource management primitives offering the following key properties:
- Efficient static priority preemptive scheduling, using the underlying interrupt hardware
- Race-free execution (each resource is exclusively accessed)
- Deadlock-free execution
- Schedulability test and response time analysis using a plethora of known methods
Related publications:
- [Real-time for the masses: Step 1: programming API and static priority SRP kernel primitives](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1005680&c=23&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-core: Language and Implementation](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013248&c=11&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-RT: a threaded runtime for RTFM-core towards execution of IEC 61499](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1001553&c=12&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [Abstract Timers and their Implementation onto the ARM Cortex-M family of MCUs](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013030&c=4&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [Safe tasks: run time verification of the RTFM-lang model of computation](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1037297&c=6&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [Well formed Control-flow for Critical Sections in RTFM-core](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013317&c=13&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
### RTFM-cOOre
An object oriented model offering a component based abstraction. RTFM-cOOre models can be compiled to RTFM-core for further analysis and target code generation. The language is a mere proof of concept, used by students of the course in Compiler Construction at LTU. The RTFM-cOOre language adopts the computational model of Concurrent Reactive Objects, similarly to the functional Timber language, its C-code implementation (TinyTimber) and the CRC/CRO IDE below.
Related publications:
- [Timber](http://www.timber-lang.org/)
- [TinyTimber, Reactive Objects in C for Real-Time Embedded Systems](http://ieeexplore.ieee.org/document/4484933/)
- [An IDE for component-based design of embedded real-time software](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A1013957&c=26&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-lang static semantics for systems with mixed criticality](http://ltu.diva-portal.org/smash/record.jsf?dswid=-6547&pid=diva2%3A987559&c=19&searchType=RESEARCH&language=en&query=&af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22pln%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=dateIssued_sort_desc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)
- [RTFM-core: course in compiler construction](http://ltu.diva-portal.org/smash/record.jsf?faces-redirect=true&aq2=%5B%5B%5D%5D&af=%5B%5D&searchType=SIMPLE&sortOrder2=title_sort_asc&query=&language=sv&pid=diva2%3A1068636&aq=%5B%5B%5D%5D&sf=all&aqe=%5B%5D&sortOrder=author_sort_asc&onlyFullText=false&noOfRows=50&dswid=-6339)
### RTFM in Rust
A major drawback of the RTFM-core model lies in its dependency on C code for the implementation of tasks (risking broken memory safety and race conditions). While the RTFM-cOOre model lifts this dependency, developing and maintaining a fully fledged language and accompanying compiler is a daunting task. Instead, we took the route of a systems programming language, Rust, which offers the required memory safety (and more, as it turned out).
- first attempt:
Resource protection by scope and `Deref`. Without going into details, a proof of concept was implemented. Although feasible, the approach was dropped in favour of using closures, as seen in RTFM-v1 and RTFM-v2 below.
- second attempt:
At this point Japaric came into play, bringing Rust coding ninja skillz: [RTFM-v1](http://blog.japaric.io/fearless-concurrency/). The approach lets the user enter resource ceilings manually and uses the Rust type system to verify their soundness. The approach is fairly complicated, and writing generic code requires explicit type bounds.
- current implementation.
In [RTFM-v2](http://blog.japaric.io/rtfm-v2/), a system model is given declaratively (by the `app!` procedural macro). During compilation the system model is analysed, resource ceilings are derived and code is generated accordingly. This simplifies programming, and generics can be expressed more succinctly.
The RTFM-v2 implementation provides a subset of the original RTFM-core language. The RTFM-core model offers offset based scheduling: a task may trigger the asynchronous execution of a sub-task, with optional timing offset, priority assignment and payload. The `rtfm-core` compiler analyses the system model, statically allocates buffers and generates code for sending and receiving payloads. This has not been implemented in the RTFM-v2 framework. However, similar behaviour can be achieved programmatically by manually triggering tasks (`rtfm::set_pending`), and by using the ARM core SysTick or device timers to give timing offsets. Payloads can be safely implemented using *channels* (unprotected *single writer, single reader* buffers).
- current work and future implementations.
One can think of extending the RTFM-v2 API with channels, and synthesize buffers, send/receive code and timer management. (Suitable Masters thesis.)
The CRC/CRO model has been implemented as a proof of concept. Unlike the RTFM-cOOre model, RTFM-CRC directly generates target code (i.e., it does NOT compile to RTFM-v2). RTFM-CRC is in a prototype stage; implemented so far:
- A system level `Sys` AST (abstract syntax tree) is derived from CRC (component) / CRO (object) descriptions (given in separate text files, following a Rust struct like syntax for each component and object)
- The `Sys` AST is analysed and the resource and task sets are derived. From these, resource ceilings are computed.
- Resources (objects) and resource ceilings are synthesized.
- Message buffers and message passing primitives are synthesized (assuming each message/port being a single buffer)
Not implemented:
- There is no automatic support for messages with offsets (for the purpose of demonstrating a mock-up is possible by hand written code)
# RTFM-v2 breakdown
In this section a behind-the-scenes breakdown of RTFM-v2 is provided.
---
## Task/resource model
The RTFM-v2 model defines a system in terms of a set of tasks and resources, in compliance with the Stack Resource Policy for tasks with static priorities and single unit resources.
### Tasks and Resources
- `t` is a task, with priority `pri(t)`; during execution a task may `claim` access to resources in a nested (LIFO) fashion.
- `r` is a single unit resource (i.e., `r` can be either *locked* or *free*); `ceil(r)` denotes the ceiling of the resource, computed as the maximum priority of any task claiming `r`.
Example: assume a system with three tasks `t1, t2, t3` and two resources `low, high`:
- `t1`, `pri(t1) = 1`, claiming both `low, high`
- `t2`, `pri(t2) = 2`, claiming `low`
- `t3`, `pri(t3) = 3`, claiming `high`
This renders the resource ceilings:
- `low`, `ceil(low) = max(pri(t1), pri(t2)) = 2`
- `high`, `ceil(high) = max(pri(t1), pri(t3)) = 3`
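The ceiling computation can be written down directly; the sketch below is our own illustration (not RTFM code) deriving `ceil(r)` for the example system:

``` rust
/// ceil(r) = maximum priority of any task claiming r.
fn ceiling(priorities_of_tasks_claiming_r: &[u8]) -> u8 {
    *priorities_of_tasks_claiming_r
        .iter()
        .max()
        .expect("a resource must be claimed by at least one task")
}

fn main() {
    // t1 (pri 1) and t2 (pri 2) claim `low`; t1 (pri 1) and t3 (pri 3) claim `high`.
    assert_eq!(ceiling(&[1, 2]), 2); // ceil(low)
    assert_eq!(ceiling(&[1, 3]), 3); // ceil(high)
}
```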
---
### System Ceiling and Current running task
- `sc` is the current system ceiling, set to the maximum ceiling of currently held resources
- `st` is the currently running task
Example:
Assume we currently run the task `t1` having claimed both resources `low` and `high`
- `sc = max(ceil(low), ceil(high)) = max(2, 3) = 3`
- `st = t1`
---
### Execution semantics
- `P` is the set of requested (pended), but not yet dispatched tasks
A task `t` can only be dispatched (scheduled for execution) iff:
- `t in P`
- `pri(t) >= max(pri(tn)), tn in P`
- `pri(t) > sc`
- `pri(t) > pri(st)`
Example 2:
Assume we are currently running task `t1` at the point where `low` is held. At this point both `t2` and `t3` are requested for execution (become pending).
- `sc = max(ceil(low)) = max(2) = 2`
- `pri(st) = pri(t1) = 1`
- `P = {t2, t3}`
Following the dispatch rule, for `t2`:
- `t2 in P` =>
`t2 in {t2, t3}` => OK
- `pri(t2) >= max(pri(tn)), tn in P` =>
`2 >= max(2, 3)` => FAIL
The scheduling condition for `t2` is not met.
Following the dispatch rule, for `t3`:
- `t3 in P` =>
`t3 in {t2, t3}` => OK
- `pri(t3) >= max(pri(tn)), tn in P` =>
`3 >= max(2, 3)` => OK
- `pri(t3) > sc` =>
`3 > 2` => OK
- `pri(t3) > pri(t1)` =>
`3 > 1` => OK
All conditions hold, and task `t3` will be dispatched.
Example 3:
Assume we are currently running task `t1` at the point where both `low` and `high` are held. At this point both `t2` and `t3` are requested for execution (become pending).
In this case both `t2` and `t3` fail to meet the dispatch rules. Details are left to the reader as an exercise.
Notice, due to the dispatch condition, a task may never preempt itself.
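The dispatch rule can be encoded as a small predicate. The sketch below is our own (tasks are represented by their priorities only) and replays the two scenarios above: `t1` holding only `low`, and `t1` holding both resources:

``` rust
/// SRP dispatch condition: a task with priority `pri_t` may be dispatched iff
/// it is pending, is the highest-priority pending task, exceeds the system
/// ceiling `sc`, and exceeds the priority `pri_st` of the running task.
fn can_dispatch(pri_t: u8, pending: &[u8], sc: u8, pri_st: u8) -> bool {
    pending.contains(&pri_t)
        && pending.iter().all(|&p| pri_t >= p)
        && pri_t > sc
        && pri_t > pri_st
}

fn main() {
    // t1 (pri 1) runs holding `low` only, so sc = 2; t2 (pri 2) and t3 (pri 3) pend.
    assert!(!can_dispatch(2, &[2, 3], 2, 1)); // t2 fails: not the highest pending
    assert!(can_dispatch(3, &[2, 3], 2, 1)); // t3 is dispatched
    // t1 holds both `low` and `high`, so sc = 3; neither pending task runs.
    assert!(!can_dispatch(2, &[2, 3], 3, 1));
    assert!(!can_dispatch(3, &[2, 3], 3, 1));
    // A task can never preempt itself: `pri(t) > pri(st)` fails when t == st.
    assert!(!can_dispatch(1, &[1], 0, 1));
}
```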
---
## RTFM on Bare Metal ARM Cortex M3 and Above
To our aid, the Nested Vectored Interrupt Controller (NVIC) of the ARM Cortex-M3 and above implements the following:
- tracks `pri(i)`, the priority of each interrupt handler `i`
- tracks the set of pended interrupts `I`
- tracks `si`, the currently running interrupt handler
- the `BASEPRI` register, a base priority for dispatching interrupts
- the `PRIMASK` register, a global interrupt mask.
An interrupt will be dispatched iff (for details see [Core Registers](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0552a/CHDBIBGJ.html)):
- `i in I`
- `pri(i) >= max(pri(j)), j in I`
- `pri(i) > BASEPRI && !PRIMASK`
- `pri(i) > pri(si)`
Mapping:
We map each task `t` to an interrupt `i` with `pri(i) = pri(t)`. Assume `BASEPRI` is set to the system ceiling `sc`. Assume `PRIMASK == false`.
Exercises:
- Show that the NVIC will dispatch `i3` in Example 2 above, while not dispatching any interrupt in Example 3.
- Show that an interrupt cannot preempt itself.
Notice, under the Stack Resource Policy there is an additional dispatch rule: on a tie among pending tasks' priorities, the one with the earliest request time has priority. This rule cannot be enforced directly by the NVIC. However, it can be shown that this restriction does not invalidate soundness; it only affects the response time calculation.
---
## Overall design
Code is split into three partitions:
- the generic `cortex-m-rtfm` library,
- the user code, and
- the *glue* code generated from the `app!` macro.
---
### `cortex-m-rtfm` library
The library implements an *unsafe* `claim<T, R, F>` method, where `T` is a reference to the resource data (either `&` or `&mut`), `R` is the return type, and `F: FnOnce(T, &mut Threshold) -> R` is the closure to execute within the `claim`. `claim` cannot be accessed directly from *safe* user code; instead a *safe* API (`claim/claim_mut`) is offered by the generated code. (The API is implemented by a *trait* approach.)
---
### User code
```rust
fn exti0(
t: &mut Threshold,
EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources,
)
```
`t` is the initial `Threshold`, used for the resource protection mechanism (as seen later, the parameter is optimized out by the compiler in `--release` mode, yet the logic behind it is still taken into account).
`EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources` gives access to the resources `LOW` and `HIGH`. Technically, we *destructure* the given parameter (of type `EXTI0::Resources`) into its fields (`mut LOW`, `mut HIGH`).
Notice here the type `EXTI0::Resources` was not user defined, but rather generated by the `app!` macro.
The `LOW`/`HIGH` arguments give you *safe* access to the corresponding resources through the *safe* API (`claim/claim_mut`).
---
### Generated code (app! macro)
The procedural macro `app!` takes a system configuration, and performs the following:
- `Sys` AST creation after syntactic check
- A mapping from tasks to interrupts
- Resource ceiling computation according to the RTFM SRP model
- Generation of code for:
- task to interrupt bindings, and initialization code enabling corresponding interrupts
- static memory allocation and initialization for Resources
- Generation of structures for task parameters
- Interrupt entry points (calling the corresponding tasks)
### Invariants and key properties
Key properties include:
- Race-free execution (each resource is exclusively accessed)
- Deadlock-free execution
Both rely on the RTFM (SRP based) execution model, and a correct implementation thereof. A key component here is the implementation of `claim` in the `cortex-m-rtfm` library.
```rust
pub unsafe fn claim<T, R, F>(
    data: T,
    ceiling: u8,
    _nvic_prio_bits: u8,
    t: &mut Threshold,
    f: F,
) -> R
where
    F: FnOnce(T, &mut Threshold) -> R,
{
    if ceiling > t.value() {
        let max_priority = 1 << _nvic_prio_bits;
        if ceiling == max_priority {
            atomic(t, |t| f(data, t))
        } else {
            let old = basepri::read();
            let hw = (max_priority - ceiling) << (8 - _nvic_prio_bits);
            basepri::write(hw);
            let ret = f(data, &mut Threshold::new(ceiling));
            basepri::write(old);
            ret
        }
    } else {
        f(data, t)
    }
}
```
As seen, the implementation is fairly simple. `ceiling` here is the resource ceiling for the static data `T`, and `t` is the current `Threshold`. If `ceiling <= t.value()` we can directly access the data by executing the closure (`f(data, t)`), else we need to *claim* the resource before access. Claiming has two cases:
- `ceiling == max_priority` => here we cannot protect the resource by setting `BASEPRI` (masking priorities), and instead use `atomic` (which executes the closure `|t| f(data, t)` with globally disabled interrupts, `PRIMASK = true`)
- `ceiling != max_priority` => here we store the current system ceiling (`old = basepri::read()`), set the new system ceiling (`basepri::write(hw)`), execute the closure (`ret = f(data, &mut Threshold::new(ceiling))`), restore the system ceiling (`basepri::write(old)`) and return the result `ret`. The `PRIMASK` and `BASEPRI` registers are located in the `Private Peripheral Bus` memory region, which is `Strongly-ordered` (meaning that accesses are executed in program order), i.e., the instruction following `basepri::write(hw)` (inside the `claim`) will already be protected by the raised system ceiling. [Arm doc - memory barriers](https://static.docs.arm.com/dai0321/a/DAI0321A_programming_guide_memory_barriers_for_m_profile.pdf)
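The ceiling-to-`BASEPRI` translation is worth spelling out: Cortex-M hardware priorities are *inverted* (a lower numeric value means higher urgency) and only the top `nvic_prio_bits` bits of the 8-bit priority field are implemented. The sketch below is our own restatement of the `hw` computation from the listing; the choice of `nvic_prio_bits = 4` (as on STM32F4 parts) is our assumption:

``` rust
/// Translate a logical RTFM ceiling (higher = more urgent, in 1..=2^bits)
/// into a hardware BASEPRI value (lower = more urgent, left-aligned in 8 bits).
fn logical_to_hw(ceiling: u8, nvic_prio_bits: u8) -> u8 {
    let max_priority = 1u8 << nvic_prio_bits;
    (max_priority - ceiling) << (8 - nvic_prio_bits)
}

fn main() {
    // With 4 implemented priority bits there are 16 logical levels.
    assert_eq!(logical_to_hw(1, 4), 0xF0); // low ceiling -> high (weak) BASEPRI value
    assert_eq!(logical_to_hw(15, 4), 0x10); // high ceiling -> low (strong) BASEPRI value
    assert_eq!(logical_to_hw(16, 4), 0x00); // max ceiling would mean BASEPRI 0,
                                            // which is why `atomic` is used instead
}
```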
Race freeness at this level can be argued from:
- Each *resource* is associated with a *ceiling* according to SRP. The `app!` procedural macro computes the ceilings from the tasks defined and the resources (declared and) used. How do we ensure that a task cannot access a resource not declared as used in the `app!`?
The only resources accessible are those passed as arguments to the task (e.g., `EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources`). There is also no way in *safe* code to leak a reference to a resource through a static (global memory) to another task. Notice though that it is perfectly OK to pass, e.g., `&mut LOW` to a subroutine; in this case the subroutine will execute in the task's *context*.
Another thing achieved here is that the Rust semantics for non-aliased mutability are preserved: a nested claim of the same resource is illegal, since `claim` passes a mutable reference to the *inner* data. For example,
```rust
...
LOW.claim_mut(b, t, |_low, b, t| {
rtfm::bkpt();
LOW.claim_mut(b, t, |_high, _, _| {
rtfm::bkpt();
});
});
...
```
would be rejected:
```
error[E0499]: cannot borrow `LOW` as mutable more than once at a time
--> examples/nested_new.rs:100:29
|
100 | LOW.claim_mut(b, t, |_low, b, t| {
| --- ^^^^^^^^^^^^ second mutable borrow occurs here
```
Trying to bluntly copy (clone) a resource handler will also fail:
```rust
let mut LOWC = LOW.clone();
```
```
error[E0599]: no method named `clone` found for type `_resource::LOW` in the current scope
  --> examples/nested_new.rs:100:24
    |
100 |     let mut LOWC = LOW.clone();
```
- Accessing a *resource* from *safe* user code can only be done through the `Resource::claim`/`Resource::claim_mut` trait methods, which call the generic library function `claim`.
- The `claim` implementation, together with the `NVIC`, `BASEPRI` and `PRIMASK`, enforces the SRP dispatch policy.
However, there is more to it:
What if the user could fake (or alter) the `t` (`Threshold`)? In that case a `claim` might give unprotected access. This is prevented by making `Threshold` an *opaque* data type in the `rtfm-core` lib.
```rust
pub struct Threshold {
value: u8,
_not_send: PhantomData<*const ()>,
}
```
The `value` field is not directly accessible to the user (who can thus neither alter an existing `Threshold` nor create a new one), since the `Threshold::new()` API is *unsafe*, i.e.,
```rust
...
*_t.value = 72; // attempt to fake Threshold
let t = Threshold::new(0); // attempt to create a new Threshold
...
```
will render:
```
Compiling cortex-m-rtfm v0.2.1 (file:///home/pln/course/nucleo-64-rtfm)
error[E0616]: field `value` of struct `rtfm::Threshold` is private
--> examples/nested_new.rs:135:6
|
135 | *_t.value = 72;
| ^^^^^^^^
|
= note: a method `value` also exists, perhaps you wish to call it
error[E0133]: call to unsafe function requires unsafe function or block
--> examples/nested_new.rs:135:13
|
135 | let t = Threshold::new(0);
| ^^^^^^^^^^^^^^^^^ call to unsafe function
```
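The protection pattern used here (a private field plus `PhantomData<*const ()>` to make the token `!Send`) can be reproduced in plain Rust. A standalone sketch of the pattern, not the `rtfm-core` source:

```rust
// Sketch of the opaque-token pattern used by `Threshold`.
mod token {
    use std::marker::PhantomData;

    pub struct Threshold {
        value: u8,                         // private: cannot be read or forged
        _not_send: PhantomData<*const ()>, // raw pointer => Threshold is !Send
    }

    impl Threshold {
        // unsafe: only trusted (generated) code may mint a token
        pub unsafe fn new(value: u8) -> Self {
            Threshold { value, _not_send: PhantomData }
        }

        pub fn value(&self) -> u8 {
            self.value
        }
    }
}

fn main() {
    let t = unsafe { token::Threshold::new(3) };
    println!("{}", t.value()); // prints "3"
    // t.value = 72;                     // error: field `value` is private
    // let f = token::Threshold::new(0); // error: requires `unsafe` block
}
```

Because the only constructor is `unsafe`, safe user code can never conjure a `Threshold` with a value of its own choosing.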
## The generated code in detail
Procedural macros in Rust are executed before code generation (causing the argument AST to be replaced by a new AST for the remainder of the compilation).
The intermediate code (the AST after expansion) can be exported by the `cargo` sub-command `export`:
```
> cargo export examples nested > expanded.rs
```
or
```
> xargo export examples nested > expanded.rs
```
Let us study the `nested` example in detail.
```rust
app! {
device: stm32f40x,
resources: {
static LOW: u64 = 0;
static HIGH: u64 = 0;
},
tasks: {
EXTI0: {
path: exti0,
priority: 1,
resources: [LOW, HIGH],
},
EXTI1: {
path: exti1,
priority: 2,
resources: [LOW],
},
EXTI2: {
path: exti2,
priority: 3,
resources: [HIGH],
},
},
}
```
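The ceilings the macro derives from this specification follow directly from SRP: the ceiling of a resource is the maximum priority among the tasks that use it. A host-side sketch of that computation (not the macro's actual implementation):

```rust
use std::collections::HashMap;

// SRP: ceiling(resource) = highest priority of any task that uses it.
fn ceilings(tasks: &[(&str, u8, &[&str])]) -> HashMap<String, u8> {
    let mut c: HashMap<String, u8> = HashMap::new();
    for &(_task, prio, resources) in tasks {
        for &r in resources {
            let e = c.entry(r.to_string()).or_insert(0);
            if prio > *e {
                *e = prio;
            }
        }
    }
    c
}

fn main() {
    // The task set from the `app!` example above
    let tasks: &[(&str, u8, &[&str])] = &[
        ("EXTI0", 1, &["LOW", "HIGH"]),
        ("EXTI1", 2, &["LOW"]),
        ("EXTI2", 3, &["HIGH"]),
    ];
    let c = ceilings(tasks);
    println!("LOW={} HIGH={}", c["LOW"], c["HIGH"]); // prints "LOW=2 HIGH=3"
}
```

`LOW` is shared by priorities 1 and 2 (ceiling 2), `HIGH` by priorities 1 and 3 (ceiling 3), which matches the `3u8` ceiling visible in the generated `claim` code below.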
---
### Auto generated `main`
The intermediate AST defines the following `main` function.
```rust
fn main() {
let init: fn(stm32f40x::Peripherals, init::Resources) = init;
rtfm::atomic(unsafe { &mut rtfm::Threshold::new(0) }, |_t| unsafe {
let _late_resources =
init(stm32f40x::Peripherals::all(), init::Resources::new());
let nvic = &*stm32f40x::NVIC.get();
let prio_bits = stm32f40x::NVIC_PRIO_BITS;
let hw = ((1 << prio_bits) - 3u8) << (8 - prio_bits);
nvic.set_priority(stm32f40x::Interrupt::EXTI2, hw);
nvic.enable(stm32f40x::Interrupt::EXTI2);
let prio_bits = stm32f40x::NVIC_PRIO_BITS;
let hw = ((1 << prio_bits) - 1u8) << (8 - prio_bits);
nvic.set_priority(stm32f40x::Interrupt::EXTI0, hw);
nvic.enable(stm32f40x::Interrupt::EXTI0);
let prio_bits = stm32f40x::NVIC_PRIO_BITS;
let hw = ((1 << prio_bits) - 2u8) << (8 - prio_bits);
nvic.set_priority(stm32f40x::Interrupt::EXTI1, hw);
nvic.enable(stm32f40x::Interrupt::EXTI1);
});
let idle: fn() -> ! = idle;
idle();
}
```
Essentially, the generated code initializes the peripheral and resource bindings in an `atomic` section (with interrupts disabled). Besides calling the user defined function `init`, the generated code also sets the interrupt priorities and enables the interrupts (tasks).
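The `hw` values computed above follow the Cortex-M NVIC convention that a numerically *lower* hardware value means a *higher* priority: logical priority `p` maps to `((1 << prio_bits) - p) << (8 - prio_bits)`. A small sketch, assuming `NVIC_PRIO_BITS = 4` as on the STM32F4:

```rust
// Cortex-M NVIC: lower hardware value = higher priority. Logical RTFM
// priorities (1 = lowest) land in the upper `prio_bits` bits of the register.
fn hw_priority(logical: u8, prio_bits: u8) -> u8 {
    (((1u16 << prio_bits) as u8) - logical) << (8 - prio_bits)
}

fn main() {
    let prio_bits = 4; // stm32f40x::NVIC_PRIO_BITS (assumption: 4 on STM32F4)
    for (task, prio) in [("EXTI0", 1u8), ("EXTI1", 2), ("EXTI2", 3)] {
        println!("{}: logical {} -> hw {}", task, prio, hw_priority(prio, prio_bits));
    }
    // EXTI0: logical 1 -> hw 240
    // EXTI1: logical 2 -> hw 224
    // EXTI2: logical 3 -> hw 208
}
```

The values 224 (`0xe0`) and 208 (`0xd0`) reappear as the ceiling constants in the disassembly shown in the Performance section.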
---
### Allocation of resources
The allocation of memory for the system resources is done using (global) `static mut`, with the resource names prepended by `_`. Resources can only be accessed from user code through the `Resource` wrapping, initialized at run time.
```rust
static mut _HIGH: u64 = 0;
static mut _LOW: u64 = 0;
```
---
### Auto generated `init` arguments
All resources and peripherals are passed to the user's `init` as defined by the generated `_initResources`. The auto generated code implements a module `init` holding the resource handlers.
```rust
pub struct _initResources<'a> {
pub LOW: &'a mut rtfm::Static<u64>,
pub HIGH: &'a mut rtfm::Static<u64>,
}
#[allow(unsafe_code)]
mod init {
pub use stm32f40x::Peripherals;
pub use _initResources as Resources;
#[allow(unsafe_code)]
impl<'a> Resources<'a> {
pub unsafe fn new() -> Self {
Resources {
LOW: ::rtfm::Static::ref_mut(&mut ::_LOW),
HIGH: ::rtfm::Static::ref_mut(&mut ::_HIGH),
}
}
}
}
```
---
### Auto generated `task` arguments
A generic resource abstraction is generated in `_resource`.
```rust
mod _resource {
pub struct HIGH {
_0: (),
}
impl HIGH {
pub unsafe fn new() -> Self {
HIGH { _0: () }
}
}
pub struct LOW {
_0: (),
}
impl LOW {
pub unsafe fn new() -> Self {
LOW { _0: () }
}
}
}
```
In Rust a `mod` provides a *namespace*; thus the generated `HIGH` and `LOW` handler structs (zero-sized, carrying no data themselves) are accessed under the names `_resource::HIGH` and `_resource::LOW` respectively.
Code is generated for binding the user API `RES::claim`/`RES::claim_mut` to the library implementation of `claim`. For `claim` the reference is passed as `rtfm::Static::ref_(&_HIGH)`, while for `claim_mut` it is passed as `rtfm::Static::ref_mut(&mut _HIGH)`. Recall here that `_HIGH` is the actual resource allocation.
Similar code is generated for each resource.
```rust
unsafe impl rtfm::Resource for _resource::HIGH {
type Data = u64;
fn claim<R, F>(&self, t: &mut rtfm::Threshold, f: F) -> R
where
F: FnOnce(&rtfm::Static<u64>, &mut rtfm::Threshold) -> R,
{
unsafe {
rtfm::claim(
rtfm::Static::ref_(&_HIGH),
3u8, // << computed ceiling value
stm32f40x::NVIC_PRIO_BITS,
t,
f,
)
}
}
fn claim_mut<R, F>(&mut self, t: &mut rtfm::Threshold, f: F) -> R
where
F: FnOnce(&mut rtfm::Static<u64>, &mut rtfm::Threshold) -> R,
{
unsafe {
rtfm::claim(
rtfm::Static::ref_mut(&mut _HIGH),
3u8, // << computed ceiling value
stm32f40x::NVIC_PRIO_BITS,
t,
f,
)
}
}
}
```
The `rtfm::Resource` *trait* and the `rtfm::Static` type are provided by the `rtfm-core` crate.
```rust
pub unsafe trait Resource {
/// The data protected by the resource
type Data: Send;
/// Claims the resource data for the span of the closure `f`. For the
/// duration of the closure other tasks that may access the resource data
/// are prevented from preempting the current task.
fn claim<R, F>(&self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&Static<Self::Data>, &mut Threshold) -> R;
/// Mutable variant of `claim`
fn claim_mut<R, F>(&mut self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&mut Static<Self::Data>, &mut Threshold) -> R;
}
unsafe impl<T> Resource for Static<T>
where
T: Send,
{
type Data = T;
fn claim<R, F>(&self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&Static<Self::Data>, &mut Threshold) -> R,
{
f(self, t)
}
fn claim_mut<R, F>(&mut self, t: &mut Threshold, f: F) -> R
where
F: FnOnce(&mut Static<Self::Data>, &mut Threshold) -> R,
{
f(self, t)
}
}
/// Preemption threshold token
///
/// The preemption threshold indicates the priority a task must have to preempt
/// the current context. For example a threshold of 2 indicates that only
/// interrupts / exceptions with a priority of 3 or greater can preempt the
/// current context
pub struct Threshold {
value: u8,
_not_send: PhantomData<*const ()>,
}
impl Threshold {
/// Creates a new `Threshold` token
///
/// This API is meant to be used to create abstractions and not to be
/// directly used by applications.
pub unsafe fn new(value: u8) -> Self {
Threshold {
value,
_not_send: PhantomData,
}
}
/// Creates a `Threshold` token with maximum value
///
/// This API is meant to be used to create abstractions and not to be
/// directly used by applications.
pub unsafe fn max() -> Self {
Self::new(u8::MAX)
}
/// Returns the value of this `Threshold` token
pub fn value(&self) -> u8 {
self.value
}
}
```
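The `value` tracked by `Threshold` is what makes nested claims cheap: a claim only needs to touch `BASEPRI` when its ceiling exceeds the current threshold, otherwise the running context is already sufficiently protected. A host-side sketch of that check (illustrative, not the `rtfm-core` source):

```rust
struct Threshold {
    value: u8,
}

// A claim raises (and later restores) the ceiling only when needed;
// a nested claim at or below the current threshold costs nothing.
fn claim<T, R>(
    data: &mut T,
    ceiling: u8,
    t: &mut Threshold,
    f: impl FnOnce(&mut T, &mut Threshold) -> R,
) -> R {
    if ceiling > t.value {
        println!("raise BASEPRI to ceiling {}", ceiling);
        let mut inner = Threshold { value: ceiling };
        let ret = f(data, &mut inner);
        println!("restore BASEPRI (threshold {})", t.value);
        ret
    } else {
        println!("ceiling {} <= threshold {}: no hardware access", ceiling, t.value);
        f(data, t)
    }
}

fn main() {
    let mut data = 0u64;
    let mut t = Threshold { value: 1 }; // running at task priority 1
    claim(&mut data, 3, &mut t, |d, t| {
        claim(d, 2, t, |d, _| *d += 1); // nested, lower ceiling: free
    });
    println!("data = {}", data);
}
```

This is why the generated entry stubs (next section) seed each task with a `Threshold` equal to its own priority.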
---
### Interrupt entry points
Each task is mapped to a corresponding entry in the interrupt vector table. An entry point stub is generated for each task, calling the user defined code. Each task is called with exactly the set of resource handlers (and peripherals) it uses, in the above example `EXTI0::Resources`.
```rust
pub unsafe extern "C" fn _EXTI0() {
let f: fn(&mut rtfm::Threshold, EXTI0::Resources) = exti0;
f(
&mut if 1u8 == 1 << stm32f40x::NVIC_PRIO_BITS {
rtfm::Threshold::new(::core::u8::MAX)
} else {
rtfm::Threshold::new(1u8)
},
EXTI0::Resources::new(),
)
}
mod EXTI0 {
pub struct Resources {
pub HIGH: ::_resource::HIGH,
pub LOW: ::_resource::LOW,
}
impl Resources {
pub unsafe fn new() -> Self {
Resources {
HIGH: { ::_resource::HIGH::new() },
LOW: { ::_resource::LOW::new() },
}
}
}
}
```
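The initial `Threshold` handed to the task is the task's own static priority, saturated to `u8::MAX` when the task already runs at the hardware maximum (in which case no claim can ever require raising `BASEPRI`). A host-side sketch of that choice, assuming `NVIC_PRIO_BITS = 4`:

```rust
// Initial threshold for a task: its own static priority, or u8::MAX when
// the task runs at the hardware maximum (1 << NVIC_PRIO_BITS).
fn initial_threshold(task_priority: u8, nvic_prio_bits: u8) -> u8 {
    if task_priority == 1 << nvic_prio_bits {
        u8::MAX
    } else {
        task_priority
    }
}

fn main() {
    let prio_bits = 4; // stm32f40x::NVIC_PRIO_BITS (assumption: 4)
    println!("{}", initial_threshold(1, prio_bits));  // prints "1"
    println!("{}", initial_threshold(16, prio_bits)); // prints "255"
}
```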
---
## Performance
As seen, there is quite some autogenerated and library code involved in the task and resource management. To our aid here is Rust's ability to deliver zero cost abstractions.
The `exti0` task:
```rust
fn exti0(
t: &mut Threshold,
EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources,
) {
rtfm::bkpt();
LOW.claim_mut(t, |_low, t| {
rtfm::bkpt();
HIGH.claim_mut(t, |_high, _| {
rtfm::bkpt();
});
});
}
```
This amounts to the following assembly (including the interrupt entry code):
```
Dump of assembler code for function nested_new::_EXTI0:
0x080005a6 <+0>: movs r1, #224 ; 0xe0
=> 0x080005a8 <+2>: bkpt 0x0000
0x080005aa <+4>: mrs r0, BASEPRI
0x080005ae <+8>: movs r2, #208 ; 0xd0
0x080005b0 <+10>: msr BASEPRI, r1
0x080005b4 <+14>: bkpt 0x0000
0x080005b6 <+16>: mrs r1, BASEPRI
0x080005ba <+20>: msr BASEPRI, r2
0x080005be <+24>: bkpt 0x0000
0x080005c0 <+26>: msr BASEPRI, r1
0x080005c4 <+30>: msr BASEPRI, r0
0x080005c8 <+34>: bx lr
```
The world's fastest preemptive scheduler for tasks with shared resources is at hand! (We challenge anyone to beat RTFM!)
## How low can you go
An observation here is that we read `BASEPRI` in the inner claim,
```
0x080005b6 <+16>:	mrs	r1, BASEPRI
```
even though we already know the value `BASEPRI` holds at this point (it was written in the outer claim). In an experimental version of the RTFM implementation this observation has been exploited:
```
Dump of assembler code for function nested_new::_EXTI3:
0x080005d0 <+0>: movs r1, #224 ; 0xe0
0x080005d2 <+2>: movs r2, #208 ; 0xd0
=> 0x080005d4 <+4>: bkpt 0x0000
0x080005d6 <+6>: mrs r0, BASEPRI
0x080005da <+10>: msr BASEPRI, r1
0x080005de <+14>: bkpt 0x0000
0x080005e0 <+16>: msr BASEPRI, r2
0x080005e4 <+20>: bkpt 0x0000
0x080005e6 <+22>: msr BASEPRI, r1
0x080005ea <+26>: msr BASEPRI, r0
0x080005ee <+30>: bx lr
```