From 98e54ac6694d5a2551a70df21282ca3a234d3efe Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= <henrik@tjaders.com>
Date: Mon, 20 Nov 2017 20:48:47 +0100
Subject: [PATCH] Correcting spelling

---
 doc/RTFM.md | 78 ++++++++++++++++++++++++++---------------------------
 1 file changed, 39 insertions(+), 39 deletions(-)

diff --git a/doc/RTFM.md b/doc/RTFM.md
index a134972..047226b 100644
--- a/doc/RTFM.md
+++ b/doc/RTFM.md
@@ -6,11 +6,11 @@ Real Time For the Masses is a set of programming models and tools geared towards

 ### RTFM-core

-The RTFM-core model offers a static task and resource model for device level modelling, implementation and analysis. The original model has been implemented as a coordination language, embedded and extending the C language with a set of RTFM primiteves.
+The RTFM-core model offers a static task and resource model for device level modelling, implementation and analysis. The original model has been implemented as a coordination language, embedded in and extending the C language with a set of RTFM primitives.

-For single core deployment, the input program is analysed under the Stack Resource Policy. The RTFM-core compiler generates code with inlined scheduling and resource managent primives offering the following key properties:
+For single core deployment, the input program is analysed under the Stack Resource Policy. The RTFM-core compiler generates code with inlined scheduling and resource management primitives offering the following key properties:

-- Efficeint statcic priority preemtive scheduling, using the underlying interrupt hardware
+- Efficient static priority preemptive scheduling, using the underlying interrupt hardware
 - Race-free execution (each resource is exclusively accessed)
 - Deadlock-free execution
 - Schedulability test and response time analysing using a plethora of known methods
@@ -37,32 +37,32 @@ Related publications:

 ### RTFM in Rust

-A major drawback of the RTFM-core model relies in the dependency to C code for the implementation of tasks (risking to break the safety of memory accesses and introduce race conditions). While the RTFM-cOOre model lifts this dependency, developing and maintaining a fully fledged language and accomyanying compiler is a daunting task. Instead, we took the rout of the systems programming language offering the memory saftey required and more it turned out.
+A major drawback of the RTFM-core model lies in its dependency on C code for the implementation of tasks (risking breaking the safety of memory accesses and introducing race conditions). While the RTFM-cOOre model lifts this dependency, developing and maintaining a fully fledged language and accompanying compiler is a daunting task. Instead, we took the route of a systems programming language offering the required memory safety (and more, as it turned out).

 - first attempt:

-  Resource protection by scope and `Deref`. Without going into details, a proof of concept was implemented. However feasibible, the approach was dropped in favour of using closures, as seen in RTFM-v1 and RTFM-v2 below.
+  Resource protection by scope and `Deref`. Without going into details, a proof of concept was implemented. However feasible, the approach was dropped in favour of using closures, as seen in RTFM-v1 and RTFM-v2 below.

 - second attempt:

   At this time Japaric came into play, bringing Rust coding ninja skillz. [RTFM-v1](http://blog.japaric.io/fearless-concurrency/).
   The approach allows the user to enter resource ceilings manually and uses the Rust type system to verify their soundness. The approach is fairly complicated, and writing generic code requires explicit type bounds.

-- current implmentetaion.
+- current implementation.

-  In [RTFM-v2](http://blog.japaric.io/rtfm-v2/), A system model is given declaratively (by the `app!` procedural macro). During compilation the system model is analysed, resource ceilings derived and code generated accordingly. This simplifies programming, and generics can be expressed more succintly.
+  In [RTFM-v2](http://blog.japaric.io/rtfm-v2/), a system model is given declaratively (by the `app!` procedural macro). During compilation the system model is analysed, resource ceilings are derived and code is generated accordingly. This simplifies programming, and generics can be expressed more succinctly.

-  The RTFM-v2 implmentation provides a subset of the original RTFM-core language. The RTFM-core model offers offsed based scheduling, a taks may trigger the asynchronous execution of a sub-task, with optional timing offset, priority assignment and payload. The `rtfm-core` compiler analyses the system model and statically allocates buffers and generates code for sending and receiving payloads. This has not been implmendend the RTFM-v2 framework. However, similar bahavior can be programmatically achieved by manually triggering tasks (`rtfm::set_pending`), and using `arm-core systic`/`device timers` to give timing offsets. Payloads can be safely implmented using *channels* (unprotected *single writer, single readerafter writer* buffers)
+  The RTFM-v2 implementation provides a subset of the original RTFM-core language. The RTFM-core model offers offset based scheduling: a task may trigger the asynchronous execution of a sub-task, with optional timing offset, priority assignment and payload. The `rtfm-core` compiler analyses the system model, statically allocates buffers and generates code for sending and receiving payloads. This has not been implemented in the RTFM-v2 framework. However, similar behaviour can be achieved programmatically by manually triggering tasks (`rtfm::set_pending`) and using the ARM core `SysTick` timer or device timers to give timing offsets (a sketch is given after this list). Payloads can be safely implemented using *channels* (unprotected *single writer, single reader* buffers).

-- current work and future implmentations.
+- current work and future implementations.

-  One can think of extending the RTFM-v2 API with channels, and synthsize buffers, send/receive code and timer managament. (Suitable Masters theisis.)
+  One can think of extending the RTFM-v2 API with channels, and synthesizing buffers, send/receive code and timer management. (A suitable Master's thesis.)

-  The CRC/CRO model has been implemented as a proof of concept. Unlike the RTFM-cOOre model, RTFM-CRC direcly generates target code (i.e., it does NOT compile to RTFM-v2). RTFM-CRC is in a prototype stage, implemented so far:
+  The CRC/CRO model has been implemented as a proof of concept. Unlike the RTFM-cOOre model, RTFM-CRC directly generates target code (i.e., it does NOT compile to RTFM-v2). RTFM-CRC is in a prototype stage; implemented so far:

-  - A system level `Sys` AST (abstact syntax tree) is derived from CRC (components)/ CRO (object) descriptions (given in separtate text files following a Rust struct like syntax for each componet and object)
-  - The `Sys` AST is analysed and resources, and task set dirived. From that resource ceilings are derived.
-  - Resources (objcets) and resource ceilings are synthesized.
-  - Message buffers and messeage passing primintives are synthesized (assuming each message/port being a single buffer)
+  - A system level `Sys` AST (abstract syntax tree) is derived from CRC (components)/ CRO (object) descriptions (given in separate text files following a Rust struct-like syntax for each component and object)
+  - The `Sys` AST is analysed, and the resource and task sets are derived. From that, resource ceilings are derived.
+  - Resources (objects) and resource ceilings are synthesized.
+  - Message buffers and message passing primitives are synthesized (assuming each message/port is a single buffer)

   Not implemented:

-  - There is no automatic support for messages with offesets (for the puprose of demontastration a mockup is passible by hand written code)
+  - There is no automatic support for messages with offsets (for the purpose of demonstration, a mock-up is possible by hand-written code)
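As a rough illustration of the manual task triggering mentioned in the list above, the sketch below pends the interrupt backing another task. It is only a sketch: the device crate, the task/interrupt names and the exact argument expected by `rtfm::set_pending` are assumptions made for illustration, not taken from the RTFM-v2 API reference.

```rust
// Sketch only: EXTI1 is a hypothetical second task bound to the EXTI1
// interrupt, and the argument type of `rtfm::set_pending` is assumed here.
fn exti0(_t: &mut Threshold, _r: EXTI0::Resources) {
    // Request asynchronous execution of the EXTI1 task by pending its interrupt.
    rtfm::set_pending(stm32f103xx::Interrupt::EXTI1);
}
```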
 # RTFM-v2 breakdown

@@ -71,13 +71,13 @@ In this section a behind the scenes breakdown of the RTFM-v2 is provided.

 ---
 ## Task/resource model

-The RTFM-v2 mode defines a system in terms of a set of tasks and resources, in compliance to the Stack Resource policy for task with static priorirties and single unit resources.
+The RTFM-v2 model defines a system in terms of a set of tasks and resources, in compliance with the Stack Resource Policy for tasks with static priorities and single unit resources.

-### Tasks and Resouces
-- `t` in a task, with priority `pri(t)`, during exection a task may `claim` the access of resorces in a nested (LIFO) fashion.
-- `r` is a singe unit resource (i.e., `r` can be eiter *locked* or *free*), `ceil(r)` denotes the ceiling of the resource computed as the maximum priority of any task claiming `r`
+### Tasks and Resources
+- `t` is a task, with priority `pri(t)`; during execution a task may `claim` access to resources in a nested (LIFO) fashion.
+- `r` is a single unit resource (i.e., `r` can be either *locked* or *free*); `ceil(r)` denotes the ceiling of the resource, computed as the maximum priority of any task claiming `r`

-Example: assume a system with three tasks `t1, t2, t3` and two resorces `low, high`:
+Example: assume a system with three tasks `t1, t2, t3` and two resources `low, high`:
 - `t1`, `pri(t1) = 1`, claiming both `low, high`
 - `t2`, `pri(t2) = 2`, claiming `low`
 - `t3`, `pri(t3) = 3`, claiming `high`
@@ -107,7 +107,7 @@ A task `t` can only be dispatched (scheduled for execution) iff:
 - `pri(t) > sc`
 - `pri(t) > pri(st)`

-Example1:
+Example 1:

 Assume we are currently running task `t1` at the point `low` is held.
 At this point both `t2` and `t3` are requested for execution (becomes pending).
@@ -150,14 +150,14 @@ Example 3:

 Assume we are currently running task `t1` at the point both `low` and `high` are held.
 At this point both `t2` and `t3` are requested for execution (becomes pending).

-In this case both `t2` and `t3` fails meeting the dispatch rules. Details left to the reader as an excercise.
+In this case both `t2` and `t3` fail to meet the dispatch rules. Details are left to the reader as an exercise.

 Notice, due to the dispatch condition, a task may never preempt itself.
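The dispatch condition above can be restated in a few lines of ordinary Rust. The sketch below is not RTFM code; the function name `may_dispatch` is invented here, and the snippet simply encodes the two rules and checks them against Example 1, where `t1` holds `low` and the system ceiling is therefore `ceil(low) = 2`.

```rust
/// Sketch (not RTFM code): a pending task `t` may be dispatched iff
/// `pri(t) > sc` (the system ceiling) and `pri(t) > pri(st)` (the running task).
fn may_dispatch(pri_t: u8, sc: u8, pri_st: u8) -> bool {
    pri_t > sc && pri_t > pri_st
}

fn main() {
    // Example 1: t1 (priority 1) holds `low`, so sc = ceil(low) = 2 and pri(st) = 1.
    assert!(!may_dispatch(2, 2, 1)); // t2 is blocked by the system ceiling
    assert!(may_dispatch(3, 2, 1)); // t3 preempts t1
}
```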
 ---
-## RTFM on Bare Metal ARM Contex M3 and Above
+## RTFM on Bare Metal ARM Cortex M3 and Above

-To our aid the Nested Interrupt Vectorized Controller (NVIC) ARM Contex M3 and Above implements the following:
+To our aid, the Nested Vectored Interrupt Controller (NVIC) of the ARM Cortex M3 and above implements the following:

 - tracks the `pri(i)`, for each interrupt handler `i`
 - tracks the set of pended interrupts `I`
@@ -177,10 +177,10 @@ We map each task `t` to an interrupt `i` with `pri(i) = pri(t)`. Assume `BASEPRI
 Exercises:

-- Show for the two examles that the NVIC will dispatch `i3` for the above Example 2, while not dispatch any interrupt for abave Example 3.
+- Show for the two examples that the NVIC will dispatch `i3` for the above Example 2, while not dispatching any interrupt for the above Example 3.
 - Show that an interrupt cannot preempt itself.

-Notice, under the Stack Resource Policy, there is an additional dispatch rule, on a tie among pending tasks priorties, the one with the oldest time for request has priority. This rule cannot be enforced directly by the NVIC. However, it can be shown that this restriction does not invalidate soundness, it only affects the response time calculation.
+Notice, under the Stack Resource Policy there is an additional dispatch rule: on a tie among pending tasks' priorities, the one with the oldest request time has priority. This rule cannot be enforced directly by the NVIC. However, it can be shown that this restriction does not invalidate soundness; it only affects the response time calculation.

 ---
@@ -194,7 +194,7 @@ Code is split into three partitions,

 ---
 ### `cortex-m-rtfm` library
-The library implements an *unsafe* `claim<T, R, F>` method, `T` being a referernce to the resource data (can be either `&` or `&mut`), `R` the return type, and `F: FnOnce(T, &mut Threshold) -> R` the closure to execute within the `claim`. Claim cannot be directy accessed from *safe* user code instead a *safe* API `claim/claim_mut` is offered by the generated code. (The API is implemented by a *trait* approach.)
+The library implements an *unsafe* `claim<T, R, F>` method, `T` being a reference to the resource data (can be either `&` or `&mut`), `R` the return type, and `F: FnOnce(T, &mut Threshold) -> R` the closure to execute within the `claim`. Claim cannot be directly accessed from *safe* user code; instead a *safe* API `claim/claim_mut` is offered by the generated code. (The API is implemented by a *trait* approach.)

 ---
 ### User code
 ```rust
 fn exti0(

 `t` is the initial `Threshold`, used for the resource protection mechanism (as seen later the parameter will be opted out by the compiler in `--release` mode, yet been the logic behind the parameter will be taken into account.)

-`EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources` gives access to the the resources, `LOW` and `HIGH`. Technically, we *destruct* the given parameter (of type `EXTI0::Resource`) into its fields (`mut LOW`, `mut HIGH`).
+`EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources` gives access to the resources, `LOW` and `HIGH`. Technically, we *destructure* the given parameter (of type `EXTI0::Resources`) into its fields (`mut LOW`, `mut HIGH`).

 Notice here the type `EXTI0::Resources` was not user defined, but rather generated by the `app!` macro.
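To make the *safe* `claim/claim_mut` API concrete, here is a sketch of a task body in the style of the `LOW`/`HIGH` examples above. The resource data type (`u64`) and the closure shape follow the description in this document; any further details of the generated types and methods are assumptions, not copied from a real build.

```rust
// Sketch of user code following the LOW/HIGH examples; details are assumed.
fn exti0(t: &mut Threshold, EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources) {
    // Claim LOW: the system ceiling is raised to ceil(LOW) for the closure only.
    LOW.claim_mut(t, |low, t| {
        *low += 1;
        // A nested claim of a *different* resource is fine, using the new Threshold.
        HIGH.claim_mut(t, |high, _| {
            *high += *low;
        });
    });
}
```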
@@ -218,7 +218,7 @@ The `LOW`/`HIGH` arguments gives you *safe* access to the corresponding resources

 ---
 ### Generated code (app! macro)
-The procedural macro `app!` takes a sytem configuration, and performs the following:
+The procedural macro `app!` takes a system configuration and performs the following:

 - `Sys` AST creation after syntactic check
 - A mapping from tasks to interrupts
@@ -249,7 +249,7 @@ pub unsafe fn claim<T, R, F>(
 ) -> R
 where
     F: FnOnce(T, &mut Threshold) -> R,
-{
+{
     if ceiling > t.value() {
         let max_priority = 1 << _nvic_prio_bits;
@@ -270,10 +270,10 @@ where
     }
 }
 ```

-As seen, the implmentation is fairly simple. `ceiling` here is the resource ceiling for the static data `T`, and `t` is the current `Threshold`. If `ceiling <= t.value()` we can directly access it by executing the closure (`f(dada, t)`), else we need to *claim* the resource before access. Claiming has two cases:
+As seen, the implementation is fairly simple. `ceiling` here is the resource ceiling for the static data `T`, and `t` is the current `Threshold`. If `ceiling <= t.value()` we can directly access the data by executing the closure (`f(data, t)`); otherwise we need to *claim* the resource before access. Claiming has two cases:

 - `ceiling == max_priority` => here we cannot protect the resource by setting `BASEPRI` (masking priorities), and instead use `atomic` (which executes the closure `|t| f(data, t)` with globally disabled interrupts ( `PRIMASK = true`)
-- `ceiling != max_priority` => here we store the current system ceiling, (`old = basepri::read())`, set the new system ceiling `basepri::write(hw)` execute the closure `ret = f(data, &mut Threshold::new(ceiling))`, restore the system ceiling, `basepri::write(old)` and return the result `ret`. The `PRIMASK` and `BASEPRI` regeisters are located in the `Private Peripheral Bus` memory region, which is `Strongly-ordered` (meaning that accesses are executed in program order). I.e. the next instruction following `basepri::write(hw)` (inside the `claim`) will be protected by the raised system ceiling. [Arm doc - memory barriers](https://static.docs.arm.com/dai0321/a/DAI0321A_programming_guide_memory_barriers_for_m_profile.pdf)
+- `ceiling != max_priority` => here we store the current system ceiling (`old = basepri::read()`), set the new system ceiling (`basepri::write(hw)`), execute the closure (`ret = f(data, &mut Threshold::new(ceiling))`), restore the system ceiling (`basepri::write(old)`) and return the result `ret`. The `PRIMASK` and `BASEPRI` registers are located in the `Private Peripheral Bus` memory region, which is `Strongly-ordered` (meaning that accesses are executed in program order). I.e., the next instruction following `basepri::write(hw)` (inside the `claim`) will be protected by the raised system ceiling. [Arm doc - memory barriers](https://static.docs.arm.com/dai0321/a/DAI0321A_programming_guide_memory_barriers_for_m_profile.pdf)

 Race freeness at this level can be argued from:
@@ -281,7 +281,7 @@ Race freeness at this level can be argued from:

 The only resources accessible is those passed in the argument to the task (e.g., `EXTI0::Resources { mut LOW, mut HIGH }: EXTI0::Resources`). There is also no way in *safe* code to leak a reference to a resource through static (global memory) to another task. Notice though that is perfectly ok pass e.g., `&mut LOW` to a subroutine. In this case the sub routine will execute in task *context*.

-  Another thing achieved here is that the Rust semantics for non-aliased mutability is ensured. (Essentiall a nested claim to the same resource would be illegal in Rust, since `claim` passes as mutable reference to the *inner* data). This cannot happen as `claim` takes a `mut T`.
+  Another thing achieved here is that the Rust semantics for non-aliased mutability is ensured. (Essentially, a nested claim to the same resource would be illegal in Rust, since `claim` passes a mutable reference to the *inner* data.) This cannot happen as `claim` takes a `mut T`.
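As a small illustration of that last point, nesting a claim of the *same* resource does not get past the borrow checker. The fragment below is hypothetical user code in the style of the earlier `LOW` examples, with the offending line commented out:

```rust
// Sketch: LOW is already mutably borrowed by the outer claim_mut call, so
// claiming it again inside the closure is rejected at compile time.
LOW.claim_mut(t, |low, t| {
    *low += 1;
    // LOW.claim_mut(t, |low_again, _| { /* ... */ }); // error: `LOW` is already borrowed
});
```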
 ```rust
 ...
@@ -422,10 +422,10 @@ fn main() {
     idle();
 }
 ```

-Essentially, the generated code initates the peripheral and resource bindings in an `atomic` section (with the interrupts disabled). Besides first calling the user defined function `init`, the generated code also sets the interrupt priorities and enables the interrupts (tasks).
+Essentially, the generated code initializes the peripheral and resource bindings in an `atomic` section (with the interrupts disabled). Besides first calling the user-defined function `init`, the generated code also sets the interrupt priorities and enables the interrupts (tasks).

 ---
-### Allocition of resources
+### Allocation of resources
 The allocation of memory for the system resources is done using (global) `static mut`, with resource names prepended by `_`. Resources can only by accessed from user code through the `Resource` wrapping, initialized at run time.
 ```rust
 static mut _HIGH: u64 = 0;
@@ -434,7 +434,7 @@ static mut _LOW: u64 = 0;

 ---
 ### Auto generated `init` arguments
-All resources and peripherals are passed to the user `init` as defined in the generated `_initResources`. The auto generated code implments a module `init` holding the resource handlers.
+All resources and peripherals are passed to the user `init` as defined in the generated `_initResources`. The auto-generated code implements a module `init` holding the resource handlers.

 ```rust
 pub struct _initResources<'a> {
@@ -675,7 +675,7 @@ Dump of assembler code for function nested_new::_EXTI0:
    0x080005c8 <+34>:   bx   lr
 ```

-The worlds fastest preemtive scheduler for tasks with shared resources is at bay! (We challenge anyone to beat RTFM!)
+The world's fastest preemptive scheduler for tasks with shared resources is at hand! (We challenge anyone to beat RTFM!)

 # How low can you go
@@ -685,7 +685,7 @@ An observation here is that we read basepri in the inner claim
 ```

 though that we actually know that `BASEPRI` will have the value `r1` in this case.

-In an exprimental version of the RTFM implementation this observation has been explointed.
+In an experimental version of the RTFM implementation this observation has been exploited.

 ```rust
 Dump of assembler code for function nested_new::_EXTI3:
--
GitLab