Commit e76abd91, authored 5 years ago by Per Lindgren

    bare9 polished and tested

Parent: e249e0e6

Showing 1 changed file: examples/bare9.rs (+74 additions, -67 deletions)
//! bare9.rs
//!
//! Heapless
//!
//! What it covers:
//! - Heapless Ringbuffer
//! - Heapless Producer/Consumer lockfree data access
//! - Interrupt driven I/O
//!
#![no_main]
#![no_std]

...

@@ -26,24 +26,29 @@ use nb::block;
 use rtfm::app;

-#[app(device = hal::stm32)]
+#[app(device = hal::stm32, peripherals = true)]
 const APP: () = {
-    // Late resources
-    static mut TX: Tx<hal::stm32::USART2> = ();
-    static mut RX: Rx<hal::stm32::USART2> = ();
-    static mut PRODUCER: Producer<'static, u8, U3> = ();
-    static mut CONSUMER: Consumer<'static, u8, U3> = ();
-    static mut ITM: ITM = ();
+    struct Resources {
+        // Late resources
+        TX: Tx<hal::stm32::USART2>,
+        RX: Rx<hal::stm32::USART2>,
+        PRODUCER: Producer<'static, u8, U3>,
+        CONSUMER: Consumer<'static, u8, U3>,
+        ITM: ITM,
+        // An initialized resource
+        #[init(None)]
+        RB: Option<Queue<u8, U3>>,
+    }

     // init runs in an interrupt free section
-    #[init]
-    fn init() {
-        // A ring buffer for our data
-        static mut RB: Option<Queue<u8, U3>> = None;
-        *RB = Some(Queue::new());
+    #[init(resources = [RB])]
+    fn init(cx: init::Context) -> init::LateResources {
+        let mut core = cx.core;
+        let device = cx.device;
+        *cx.resources.RB = Some(Queue::new());

         // Split into producer/consumer pair
-        let (producer, consumer) = RB.as_mut().unwrap().split();
+        let (producer, consumer) = cx.resources.RB.as_mut().unwrap().split();

         let stim = &mut core.ITM.stim[0];
         iprintln!(stim, "bare9");

...
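For reference, the heapless single-producer/single-consumer queue that this hunk turns into an RTFM resource works like this. A minimal sketch, assuming heapless 0.5 with typenum capacities (as the U3 bound above suggests); illustration only, not code from this commit:

// Lock-free SPSC queue: split once, then the two ends can be used from
// different contexts/priorities without a critical section.
use heapless::{consts::U3, spsc::Queue};

fn spsc_demo() {
    let mut rb: Queue<u8, U3> = Queue::new();

    // In bare9 the Producer goes to the USART2 task and the Consumer to idle.
    let (mut producer, mut consumer) = rb.split();

    // enqueue() hands the byte back as Err(byte) if the queue is full.
    producer.enqueue(b'a').ok();
    producer.enqueue(b'b').ok();

    // dequeue() returns None once the queue is empty.
    while let Some(byte) = consumer.dequeue() {
        let _ = byte; // process the byte here
    }
}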
@@ -56,7 +61,7 @@ const APP: () = {
         let gpioa = device.GPIOA.split();
         let tx = gpioa.pa2.into_alternate_af7();
-        let rx = gpioa.pa3.into_alternate_af7();
+        let rx = gpioa.pa3.into_alternate_af7();

         let mut serial = Serial::usart2(
             device.USART2,

...
@@ -72,26 +77,27 @@ const APP: () = {
         let (tx, rx) = serial.split();

-        // Late resources
-        // Our split queue
-        PRODUCER = producer;
-        CONSUMER = consumer;
+        init::LateResources {
+            // Our split queue
+            PRODUCER: producer,
+            CONSUMER: consumer,

-        // Our split serial
-        TX = tx;
-        RX = rx;
+            // Our split serial
+            TX: tx,
+            RX: rx,

-        // For debugging
-        ITM = core.ITM;
+            // For debugging
+            ITM: core.ITM,
+        }
     }

     // idle may be interrupted by other interrupt/tasks in the system
     // #[idle(resources = [RX, TX, ITM])]
     #[idle(resources = [ITM, CONSUMER])]
-    fn idle() -> ! {
-        let stim = &mut resources.ITM.stim[0];
+    fn idle(cx: idle::Context) -> ! {
+        let stim = &mut cx.resources.ITM.stim[0];

         loop {
-            while let Some(byte) = resources.CONSUMER.dequeue() {
+            while let Some(byte) = cx.resources.CONSUMER.dequeue() {
                 iprintln!(stim, "data {}", byte);
             }

...
@@ -102,16 +108,18 @@ const APP: () = {
         }
     }

-    #[interrupt(resources = [RX, TX, PRODUCER])]
-    fn USART2() {
-        let rx = resources.RX;
-        let tx = resources.TX;
+    // task run on USART2 interrupt (set to fire for each byte received)
+    #[task(binds = USART2, resources = [RX, TX, PRODUCER])]
+    fn usart2(cx: usart2::Context) {
+        let rx = cx.resources.RX;
+        let tx = cx.resources.TX;

+        // at this point we know there must be a byte to read
         match rx.read() {
             Ok(byte) => {
                 tx.write(byte).unwrap();
-                match resources.PRODUCER.enqueue(byte) {
+                match cx.resources.PRODUCER.enqueue(byte) {
                     Ok(_) => {}
                     Err(_) => asm::bkpt(),
                 }

...
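The match on rx.read() above follows the embedded-hal non-blocking convention: read() returns nb::Result, so besides Ok(byte) the handler may see WouldBlock or a hardware error (those arms sit in the collapsed part of the diff). A minimal sketch of that pattern, with a dummy reader standing in for the HAL's Rx<USART2>; the DummyRx type is an assumption for illustration, not part of this commit:

// embedded-hal 0.2 style non-blocking serial read.
use embedded_hal::serial::Read;

struct DummyRx(Option<u8>);

impl Read<u8> for DummyRx {
    type Error = ();
    fn read(&mut self) -> nb::Result<u8, ()> {
        // yield a byte if one is "pending", otherwise report WouldBlock
        self.0.take().ok_or(nb::Error::WouldBlock)
    }
}

fn poll(rx: &mut DummyRx) {
    match rx.read() {
        Ok(byte) => { let _ = byte; }    // echo and enqueue, as in usart2()
        Err(nb::Error::WouldBlock) => {} // nothing received yet
        Err(nb::Error::Other(_)) => {}   // framing/overrun error handling
    }
}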
@@ -121,18 +129,19 @@ const APP: () = {
     }
 };

 // Optional
 // 0. Compile and run the project at 16MHz in release mode
 //    make sure its running (not paused).
 //
-//    > cargo build --example bare9 --features "hal rtfm" --release
+//    > cargo build --example bare9 --features "rtfm" --release
 //    (or use the vscode build task)
 //
 //
 // 1. Start a terminal program, connect with 15200 8N1
 //
-//    You should now be able to send data and recive an echo from the MCU
+//    You should now be able to send data and receive an echo from the MCU
 //
 //    Try sending: "abcd" as a single sequence (set the option No end in moserial),
-//    don't send the quation marks, just abcd.
+//    don't send the quotation marks, just abcd.
 //
 //    What did you receive, and what was the output of the ITM trace.
 //

...
@@ -152,9 +161,9 @@ const APP: () = {
 //
 //    > cargo build --example bare9 --features "hal rtfm"
 //    (or use the vscode build task)
 //
 //    Try sending: "abcd" as a single sequence (set the option No end in moserial),
-//    don't send the quation marks, just abcd.
+//    don't send the quotation marks, just abcd.
 //
 //    What did you receive, and what was the output of the ITM trace.
 //

...
@@ -176,18 +185,18 @@ const APP: () = {
 // The concurrency model behind RTFM offers
 // 1. Race-free resource access
 //
-// 2. Deadlock-free exection
+// 2. Deadlock-free execution
 //
 // 3. Shared execution stack (no pre-allocated stack regions)
 //
 // 4. Bound priority inversion
 //
 // 5. Theoretical underpinning ->
-//    + proofs of soundness
+//    + (pen and paper) proofs of soundness
 //    + schedulability analysis
 //    + response time analysis
 //    + stack memory analysis
-//    + ... leverages on >25 years of reseach in the real-time community
+//    + ... leverages on >25 years of research in the real-time community
 //      based on the seminal work of Baker in the early 1990s
 //      (known as the Stack Resource Policy, SRP)
 //

...
@@ -195,47 +204,45 @@ const APP: () = {
 // 1. compile check and analysis of tasks and resources
 //    + the API implementation together with the Rust compiler will ensure that
 //      both RTFM (SRP) soundness and the Rust memory model invariants
-//      are upheld (under all circumpstances).
+//      are upheld (under all circumstances).
 //
 // 2. arguably the worlds fastest real time scheduler *
 //    + task invocation 0-cycle OH on top of HW interrupt handling
 //    + 2 cycle OH for locking a shared resource (on lock/claim entry)
-//    + 1 cycle OH for releasineg a shared resoure (on lock/claim exit)
+//    + 1 cycle OH for releasing a shared resource (on lock/claim exit)
 //
 // 3. arguably the worlds most memory efficient scheduler *
 //    + 1 byte stack memory OH for each (nested) lock/claim
 //      (no additional book-keeping during run-time)
 //
 //    * applies to static task/resource models with single core
 //      pre-emptive, static priority scheduling
 //
-// In comparison "real-time" schedulers for threaded models like FreeRTOS
-// - CPU and memory OH magnitudes larger (100s of cycles/kilobytes of memory)
+// In comparison "real-time" schedulers for threaded models (like FreeRTOS)
+// - CPU and memory OH magnitudes larger
 // - ... and what's worse OH is typically unbound (no proofs of worst case)
 //   And additionally threaded models typically imposes
 // - potential race conditions (up to the user to verify)
 // - potential dead-locks (up to the implementation)
 // - potential unbound priority inversion (up to the implementation)
 //
-// Rust RTFM (currently) target ONLY STATIC SYSTEMS, there is no notion
-// of dynamically creating new executions contexts/threads
-//
+// However, Rust RTFM (currently) target ONLY STATIC SYSTEMS,
+// there is no notion of dynamically creating new executions contexts/threads
+// so a direct comparison is not completely fair.
+//
 // On the other hand, embedded applications are typically static by nature
 // so a STATIC model is to that end better suitable.
 //
 // RTFM is reactive by nature, a task execute to end, triggered
 // by an internal or external event, (where an interrupt is an external event
 // from the environment, like a HW peripheral such as the USART2).
 //
-// Threads on the other hand are concurrent and infinte by nature and
-// actively blocking/yeilding awaiting stimuli. Hence reactivity needs to be CODED.
+// Threads on the other hand are concurrent and infinite by nature and
+// actively blocking/yielding awaiting stimuli. Hence reactivity needs to be CODED.
 // This leads to an anomaly, the underlying HW is reactive (interrupts),
 // requiring an interrupt handler, that creates a signal to the scheduler.
 //
 // The scheduler then needs to keep track of all threads and at some point choose
-// to dispatch the awaiting thread. So reactivity is bottlenecked to the point
+// to dispatch the awaiting thread. So reactivity is bottle-necked to the point
 // of scheduling by queue management, context switching and other additional
 // book keeping.
 //
 // In essence, the thread scheduler tries to re-establish the reactivity that
-// were there from the beginning (interrupts), a battle that cannot be won...
\ No newline at end of file
+// were there from the beginning (interrupts), a battle that cannot be won...
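The lock/claim costs quoted above refer to RTFM's resource model: a task that shares a resource with a higher-priority task must claim it through lock(), which raises the system ceiling for the duration of the closure, while the highest-priority user gets direct access. A minimal sketch in the RTFM 0.5 style used by this commit; the SHARED resource, the EXTI0 task and the panic-halt/stm32f4xx-hal imports are assumptions for illustration, not part of bare9.rs:

#![no_main]
#![no_std]

use panic_halt as _;
use rtfm::app;
use stm32f4xx_hal as hal;

#[app(device = hal::stm32, peripherals = true)]
const APP: () = {
    struct Resources {
        // an initialized (non-late) resource shared between two priorities
        #[init(0)]
        SHARED: u32,
    }

    #[init]
    fn init(_cx: init::Context) {}

    // lower priority: USART2 (priority 2) may preempt, so access goes through
    // lock() -- the 2 cycle entry / 1 cycle exit overhead mentioned above
    #[task(binds = EXTI0, priority = 1, resources = [SHARED])]
    fn exti0(mut cx: exti0::Context) {
        cx.resources.SHARED.lock(|shared| {
            *shared += 1; // USART2 is held off only inside this closure
        });
    }

    // highest-priority user of SHARED: direct &mut access, no lock needed (SRP)
    #[task(binds = USART2, priority = 2, resources = [SHARED])]
    fn usart2(cx: usart2::Context) {
        *cx.resources.SHARED += 1;
    }
};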