Compare revisions
11 commits on source, 10 files changed: +678 −8
{
    "cSpell.ignoreWords": [
        "alpha",
        "ask",
        "break",
        "can",
        "cmake",
        "controller",
        "core",
        "cstr",
        "cyccnt",
        "datatypes",
        "f",
        "for",
        "grepit",
        "in",
        "is",
        "joh",
        "johan",
        "ktest",
        "libklee",
        "libz",
        "llvm",
        "mkdir",
        "nucleo",
        "opel",
        "present",
        "primask",
        "romeo",
        "rtic",
        "runtest",
        "rustc",
        "same",
        "satisfiability",
        "skylake",
        "so",
        "speedster",
        "vcell",
        "xx",
        "you",
        "znver"
    ]
}
+10 −0
@@ -2,6 +2,16 @@

Most recent changes:

- 2020-12-20 Home exams: Initial commit.

- 2020-12-14 Some clarifications:

  - In `array.rs`, missing `not` in "(Hint, even if we don't use the result `b`, Rust do not optimize out the call, why?)"

  - Small clarifications to other examples (see diff for details.)

- 2020-12-12 Update with `cyccnt` example.

- 2020-12-08 Update `README.md` for Ubuntu (like) system with `z3` dependency.

- 2020-12-08 Update `README.md` and `examples/get_sign` to reflect the recent `klee 2.2` and `aur` package.
HOME_EXAM.md (new file 100644, +181 −0)
# Home Exam D7020E January 2021

## Grading:

3) Implement the response time analysis, and overall schedulability test. (Skeleton found under `srp_analysis`.)

4) Generate a report on the analysis results; this could be a generated html/xml file, or however you feel the results are best reported and visualized.

5) Perform measurements on a real RTIC application and analyze with your tool developed for grades 3/4.

Both Lab 4 and Lab 5 are mandatory for passing the course. Grades are incremental, i.e., to get a 5 you must pass grades 3 and 4.

---

## Preparation

Start by reading 1, 2 and 3:

1) [A Stack-Based Resource Allocation Policy for Realtime Processes](https://www.math.unipd.it/~tullio/RTS/2009/Baker-1991.pdf), which refers to

2) [Stack-Based Scheduling of Realtime Processes](https://link.springer.com/content/pdf/10.1007/BF00365393.pdf), journal publication based on technical report [3] of the 1991 paper. The underlying model is the same in both papers.

3) [Rate Monotonic Analysis](http://www.di.unito.it/~bini/publications/2003BinButBut.pdf), especially equation 3 is of interest to us. (It should be familiar from the real-time systems course you have taken previously.)

## Procedure

Assignments for submission will be posted on Canvas.

The procedure follows that of Lab 4/5, i.e.:

The files in `srp_analysis` give a skeleton for your solution(s). Implement your solutions for Grade 3, and optionally 4/5.

1) `Home-Exam Initial Submission`: Submit link(s) in Canvas (similarly to Lab 4/5). You will get two reviews and two repositories to review. If you have done work towards grade 5 you also submit a link to your `rtic_f4xx_nucleo` repository.

2) `Home-Exam Review`: Perform and submit reviews.

3) `Home-Exam Final Submission`: Address raised issues and make final submission.

(Check the issues you made to fellow repositories and close those that have been addressed to your satisfaction, both regarding the labs and the home exam.)

As the final part of the examination you will individually present your solutions. Grades will be based on the fulfillment of the `Home Exam` grading goals. (We decide time/place for your presentation over Telegram.)

---

## Definitions

A task `t` is defined by:

- `P(t)` the priority of task `t`
- `D(t)` the deadline of task `t`
- `A(t)` the inter-arrival of task `t`

A resource `r` is defined by:

- `π(r)` the resource ceiling, computed as the highest priority of any task accessing `r`. (SRP allows for dynamic priorities; in our case we have static priorities only.)

For SRP based analysis we assume a task to perform/execute a finite sequence of operations/instructions (aka. run-to-end or run-to-completion semantics). During execution, a task can claim resources `Rj`... in nested fashion. Sequential re-claim of resources is allowed but NOT re-claiming an already held resource (in a nested fashion, since that would violate the Rust memory aliasing rule).

E.g., a possible trace for a task can look like:

 `[t:...[r1:...[r2:...]...]...[r2:...]...]`, where `[r:...]` denotes a critical section of task `t` holding the resource `r`. In this case the task starts, and at some point claims `r1` and inside the critical section claims `r2` (nested claim), at some point it exits `r2`, exits `r1` and continues executing where it executes a critical section on `r2`, and then finally executes until completion.
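This nested-trace notation maps directly onto the `Trace` data structure provided in `common.rs` (shown later in this diff). A minimal sketch, with made-up time stamps, of the example trace above:

```rust
// `Trace` as given in `common.rs`: a section with an id, start/end
// time stamps, and nested inner sections (critical sections).
#[derive(Debug)]
pub struct Trace {
    pub id: String,
    pub start: u32,
    pub end: u32,
    pub inner: Vec<Trace>,
}

// The example trace [t: ... [r1: ... [r2: ...] ...] ... [r2: ...] ...]
// (the time stamps below are made up, for illustration only).
pub fn example_trace() -> Trace {
    Trace {
        id: "t".to_string(),
        start: 0,
        end: 30,
        inner: vec![
            Trace {
                id: "r1".to_string(),
                start: 10,
                end: 20,
                inner: vec![Trace {
                    id: "r2".to_string(),
                    start: 12,
                    end: 16,
                    inner: vec![],
                }],
            },
            Trace {
                id: "r2".to_string(),
                start: 22,
                end: 28,
                inner: vec![],
            },
        ],
    }
}
```

Note how the nested claim of `r2` inside `r1` becomes an `inner` entry of the `r1` trace, while the second, sequential claim of `r2` is a sibling.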

## Grade 3

Analysis:

### 1. Total CPU utilization

Worst Case Execution Time (WCET) for tasks and critical sections

In general, determining WCET is rather tricky. In our case we adopt a measurement-based technique that spans all feasible paths of the task. Tests triggering the execution paths are automatically generated by symbolic execution. To correctly take concurrency into account, resource state is treated symbolically: the resource is given a fresh (new) symbolic value for each critical section. Inside the critical section we are ensured exclusive access (and thus the value can be further constrained inside of the critical section).

We model hardware (peripheral register accesses) as shared resources (shared between your application and the environment). As such, each *read* generates a new symbolic value, while write operations have no side effect.

For now, we just assume we have complete WCET information, in terms of `start` and `end` time-stamps (`u32`) for each section `[_: ... ]`. We represent that by the `Task` and `Trace` data structures in `common.rs`.

### Total CPU request (or total load factor)

Each task `t` has a WCET `C(t)` and given (assumed) inter-arrival time `A(t)`. The CPU request (or load) inferred by a task is `L(t)` = `C(t)`/`A(t)`. Ask yourself, what is the consequence of `C(t)` > `A(t)`?

We can compute the total CPU request (or load factor) as `Ltot` = sum(`L(t)`), over all tasks `t` in the task set `T`.

Ask yourself, what is the consequence of `Ltot` > 1?

Implement a function taking `Vec<Task>` and returning the load factor. (Use data structures from `common.rs` for suitable data structures and inspiration.)
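A minimal sketch of such a function, assuming (hypothetically) that the WCET `C(t)` has already been extracted from each trace (with the `common.rs` types it would be `trace.end - trace.start`):

```rust
// Hypothetical slimmed-down task: only the fields the load factor needs.
pub struct LoadTask {
    pub wcet: u32,          // C(t)
    pub inter_arrival: u32, // A(t)
}

// Ltot = sum(C(t)/A(t)) over all tasks t.
pub fn load_factor(tasks: &[LoadTask]) -> f32 {
    tasks
        .iter()
        .map(|t| t.wcet as f32 / t.inter_arrival as f32)
        .sum()
}
```

For the three-task example in `main.rs` (WCETs 10/30/30, inter-arrivals 100/200/50) this yields 0.1 + 0.15 + 0.6 = 0.85.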

### Response time (simple over-approximation)

Under SRP response time can be computed by equation 7.22 in [Hard Real-Time Computing Systems](
https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/Hard%20Real-Time%20Computing%20Systems_%20Predictable%20Scheduling%20Algorithms%20and%20Applications%20%283rd%20ed.%29%20%5BButtazzo%202011-09-15%5D.pdf).

In general the response time for a task `t` is computed as:

- `R(t)` = `B(t)` + `C(t)` + `I(t)`, where
  - `B(t)` is the blocking time for task `t`, and
  - `I(t)` is the interference (preemptions) to task `t`

For a task set to be schedulable under SRP we have two requirements:

- `Ltot` < 1
- `R(t)` < `D(t)`, for all tasks. (`R(t)` > `D(t)` implies a deadline miss.)

#### Blocking

SRP brings the outstanding property of single blocking. In words: a task `t` can be blocked at most once, by the longest critical section during which some lower-priority task `l` (`P(l)` < `P(t)`) holds a resource `l_r` with a ceiling `π(l_r)` equal to or higher than the priority of `t`.

- `B(t)` = max(`C(l_r)`), where `P(l)`< `P(t)`, `π(l_r) >= P(t)`  

Implement a function that takes a `Task` and returns the corresponding blocking time.
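A sketch under the (hypothetical) assumption that tasks have been flattened to a priority plus a list of `(ceiling, section WCET)` pairs; with the real `common.rs` types you would instead walk each `Trace` and look ceilings up via `pre_analysis`:

```rust
// Hypothetical flattened view of one task's critical sections.
pub struct CsTask {
    pub prio: u8,
    pub sections: Vec<(u8, u32)>, // (ceiling π(r), section WCET)
}

// B(t): the longest critical section of a lower-priority task holding a
// resource with ceiling >= P(t); 0 if no such section exists.
pub fn blocking(p_t: u8, tasks: &[CsTask]) -> u32 {
    tasks
        .iter()
        .filter(|l| l.prio < p_t)
        .flat_map(|l| l.sections.iter())
        .filter(|(ceiling, _)| *ceiling >= p_t)
        .map(|(_, wcet)| *wcet)
        .max()
        .unwrap_or(0)
}
```

For the `main.rs` example, `R2` has ceiling 3 (accessed by T2 and T3), so T3 (priority 3) is blocked only by T2's inner `R2` section of length 16 − 12 = 4.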

#### Preemptions

A task is exposed to interference (preemptions) by higher priority tasks. Intuitively, during the *busy-period* `Bp(t)` of a task `t`, each higher priority task `h` (`P(h)` > `P(t)`) may preempt us (`Bp(t)`/`A(h)`, rounded upwards) times.

- `I(t)` = sum(`C(h)` * ceiling(`Bp(t)`/`A(h)`)), forall tasks `h`, `P(h)` > `P(t)`, where
- `Bp(t)` is the *busy-period*

We can over-approximate the *busy period* as `Bp(t)` = `D(t)` (assuming the worst allowed *busy-period*).

As a technical detail: for the scheduling of tasks of the same priority, the original work on SRP adopted a FIFO model (first arrived, first served). Under Rust RTIC, tasks are bound to hardware interrupts, thus we can exploit the underlying hardware to do the actual scheduling for us (with zero overhead). However, the interrupt hardware schedules interrupts of the same priority by their index in the vector table. For our case here we can make a safe over-approximation by considering preemptions from tasks with the SAME or higher priority (`P(h)` >= `P(t)`).

Implement a function that takes a `Task` and returns the corresponding preemption time.
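A sketch of the approximate variant (`Bp(t)` = `D(t)`), assuming (hypothetically) the other tasks are given as flat `(P(h), C(h), A(h))` triples; strict `>` is used here, switch to `>=` for the safe RTIC over-approximation noted above:

```rust
// I(t) with the over-approximated busy-period Bp(t) = D(t).
// `others`: (P(h), C(h), A(h)) for all tasks except t.
pub fn interference(p_t: u8, d_t: u32, others: &[(u8, u32, u32)]) -> u32 {
    others
        .iter()
        .filter(|(p_h, _, _)| *p_h > p_t) // use >= for the RTIC variant
        .map(|(_, c_h, a_h)| c_h * ((d_t + a_h - 1) / a_h)) // C(h)*ceil(D/A)
        .sum()
}
```

For T1 in the `main.rs` example (priority 1, `D` = 100): 30·ceil(100/200) + 30·ceil(100/50) = 30 + 60 = 90.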

Now make a function that computes the response time for a `Task`, by combining `B(t)`, `C(t)`, and `I(t)`.

Finally, make a function that iterates over the task set and returns a vector containing tuples `(Task, R(t), C(t), B(t), I(t))`. A simple `println!` of that vector gives the essential information of the analysis.

#### Preemptions revisited

In equation `7.22`, the *busy-period* is computed by a recurrence equation.

Implement the recurrence relation (equation) starting from the base case `B(t) + C(t)`. The recurrence might diverge in case `Bp(t) > D(t)`; this is a pathological case where the task becomes non-schedulable, so terminate the recurrence (with an error indicating a deadline miss). You might want to indicate that a non-feasible response time has been reached by using the `Result<u32, ()>` type, or some other means, e.g., `Option<u32>`.

You can let your `preemption` function take a parameter indicating if the exact solution or approximation should be used.
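A sketch of the recurrence, assuming (hypothetically) the higher-priority tasks are given as `(C(h), A(h))` pairs and returning `Err(())` on a deadline miss:

```rust
// Exact busy-period: iterate Bp' = B(t) + C(t) + sum C(h)*ceil(Bp/A(h))
// from the base case B(t) + C(t) until a fixpoint is reached, or bail
// out with a deadline miss once the candidate exceeds D(t).
pub fn response_time(b: u32, c: u32, d: u32, higher: &[(u32, u32)]) -> Result<u32, ()> {
    let mut bp = b + c;
    loop {
        if bp > d {
            return Err(()); // diverged: deadline miss
        }
        let next = b + c + higher
            .iter()
            .map(|(c_h, a_h)| c_h * ((bp + a_h - 1) / a_h)) // ceil(Bp/A)
            .sum::<u32>();
        if next == bp {
            return Ok(bp); // fixpoint: this is R(t)
        }
        bp = next;
    }
}
```

The candidate sequence is non-decreasing, so the loop always terminates: either at a fixpoint or by exceeding `D(t)`.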

---

## Grade 4

Your tool should be usable/documented and present a comprehensive "view" of the analysis results. Think about the usability of the tool and write a short description of its use. (You may use `StructOpt` for command line parsing, making your application work like a "professional" tool; a fun and easy exercise.) Thus you can easily implement `-h/--help` for documenting the usage.
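As an illustration only (hypothetical flags, std-only), this is the kind of interface your tool could expose; `StructOpt` derives the equivalent parser, plus `-h/--help` texts, from a plain struct:

```rust
// Hypothetical options for the analysis tool.
#[derive(Debug, PartialEq)]
pub struct Opts {
    pub input: Option<String>, // task set file (e.g., serde-serialized)
    pub exact: bool,           // exact recurrence instead of Bp(t) = D(t)
}

// Bare-bones parser; StructOpt replaces all of this with a derive.
pub fn parse(args: &[String]) -> Opts {
    let mut opts = Opts { input: None, exact: false };
    let mut it = args.iter();
    while let Some(a) = it.next() {
        match a.as_str() {
            "--exact" => opts.exact = true,
            "--input" | "-i" => opts.input = it.next().cloned(),
            _ => {} // ignore unknown flags in this sketch
        }
    }
    opts
}
```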

---

## Grade 5

Pull the latest [rtic_f4xx_nucleo](https://gitlab.henriktjader.com/pln/rtic_f4xx_nucleo). Use your tool (developed for grades 3/4) to analyse the task/resource set of `timing_exam.rs` based on the corresponding WCET data (per task/critical section). (See the file `examples/timing_exam.rs` for further information.) To import the task set/WCETs, look at Rust `serde`.

## Resources

`common.rs` gives the basic data structures, and some helper functions.

`main.rs` gives an example of how `Tasks` can be manually constructed. This is vastly helpful for your development when getting started. Later you will likely want file input using `serde`.

## Tips

When working with Rust, the standard library documentation [std](https://doc.rust-lang.org/std/) is excellent and easy to search (just press S). For most cases, you will find examples on intended use and cross referencing to related data types is just a click away.

Use the `main` example to get started. Initially you may simplify it further by reducing the number of tasks and/or resources. Make sure you understand the helper functions given in `common.rs` (your code will likely look quite similar). You might want to add further `common` types and helper functions along the way to streamline your development.

Generate your own task sets to make sure your code works in the general case, not only for the `Tasks` provided. Heads up: I will expose your code to some other, more complex task sets.

---

## Learning outcomes, Robust and Energy Efficient Real-Time Systems

In this part of the course, we have covered:

- Embedded Programming using the RTIC framework. General tracing and debugging. Cycle accurate timing measurements.

- Software robustness. We have adopted Rust and Symbolic Execution to achieve guaranteed memory safety and defined behavior (panic free execution). With this at hand, we have a strong (and theoretically underpinned) foundation for improved robustness and reliability proven at compile time.

- Real-Time Scheduling and Analysis. SRP provides an execution model and resource management policy with outstanding properties of race- and deadlock-free execution, single blocking and stack sharing. Our Rust RTIC framework provides a correct-by-construction implementation of SRP, exploiting zero-cost (software) abstractions. Using Rust RTIC, resource management and scheduling are done directly by the hardware, which allows for efficiency (zero OH) and predictability.

  The SRP model is amenable to static analysis, which you have now internalized through an actual implementation of the theoretical foundations. We have also covered methods for Worst Case Execution Time (WCET) analysis by cycle accurate measurements, which in combination with Symbolic Execution for test-case generation allows for a high degree of automation.

- Energy Consumption is roughly proportional to the supply voltage (static leakage/dissipation), and exponential to the frequency (dynamic/switching activity dissipation). In the case of embedded systems, low-power modes allow parts of the system to be powered down while retaining sufficient functionality to wake on external (and/or internal) events. In sleep mode, both static and dynamic power dissipation are minimized, typically to the order of µA (in comparison to mA in run mode).

   Rust RTIC adopts an event driven approach allowing the system to automatically sleep in case no further tasks are eligible for scheduling. Moreover, leveraging on the zero-cost abstractions in Rust and the guarantees provided by the analysis framework, we do not need to sacrifice correctness/robustness and reliability in order to obtain highly efficient executables.

Robust and Energy Efficient Real-Time Systems for real. This is the Way!
+31 −5
@@ -118,16 +118,42 @@ Here you learn:


- an easy way to compile and run KLEE on Rust code
- an easy way to replay test cases for Rust programs
- an easy way to find "hard to find" errors in embedded code

---

## Related files and their locations and attributes

- `vcell`: KLEE enable `vcell`
  - version: 0.1.2
  - git: `https://github.com/perlindgren/vcell/blob/trustit`
  - branch: `trustit`
  - features: 
    - `klee-analysis`
      - `get` results in a new (fresh) symbolic object. 
      - `set` is suppressed (no side effect).

- `klee-sys`: low-level bindings for KLEE.
  - version: 0.2.0
  - git: `https://gitlab.henriktjader.com/pln/klee-sys`
  - features:
    - `klee-analysis`
      - KLEE API binds to external functions
    - `klee-replay`
      - KLEE API `klee_make_symbolic` binds to an inline assembly breakpoint. This allows a debugger to catch the halted CPU and insert the test case at the location of the symbolic object. Other KLEE API calls bind to Rust `panic!`.

- `cargo-klee`: `cargo` sub-command.
  - version: 0.3.0
  - git: `https://gitlab.henriktjader.com/pln/cargo-klee`

- `panic_klee`: Binds panic handler to external `abort`
  - version: 0.1.0
  - git: `https://gitlab.henriktjader.com/pln/panic-klee`


---

## Why KLEE on Rust

Out of the box, Rust provides superior memory safety and guarantees of well-defined behavior. However, there are cases where the Rust compiler (rustc) cannot statically (at compile time) ensure these guarantees. For such cases (e.g., division by zero, slice/array indexing, etc.) Rust injects code for run-time verification that emits a `panic!` (with an appropriate error message). While still being safe (the code never runs into memory unsafety or undefined behavior), this limits the reliability (availability) of the system, as it is very hard to recover from a `panic!`. In practice, the developer would typically reboot/reset the system (and store some error code/trace for post-mortem analysis).

With KLEE we can do better! We bind Rust `panic!` to KLEE `abort` (a path termination) and let KLEE do the rest: for each reachable `panic!`, KLEE will provide a concrete test, which we can replay and address in our source code. When done, we have a proof of `panic!` freedom and thus defined behavior, with huge impact on the reliability (and security) of the system at hand.
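To make the class of injected checks concrete, here is a generic illustration (not taken from the examples in this repo) of a reachable panic and one way to address it:

```rust
// Rust injects a run-time check here: `div(x, 0)` panics, and KLEE
// would report a concrete failing input for it.
pub fn div(a: i32, b: i32) -> i32 {
    a / b
}

// One way to address such a KLEE-found panic: make the partiality
// explicit. `checked_div` returns None for b == 0 (and for the
// i32::MIN / -1 overflow case) instead of panicking.
pub fn div_checked(a: i32, b: i32) -> Option<i32> {
    a.checked_div(b)
}
```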
@@ -4,6 +4,10 @@ version = "0.1.0"
authors = ["pln <Per Lindgren>"]
edition = "2018"

# [dependencies.stm32f4]
# version = "0.12.1"
# features = ["stm32f401"]

[dependencies.klee-sys]
git = "https://gitlab.henriktjader.com/pln/klee-sys.git"
version = "0.2.0"

@@ -12,6 +16,16 @@ version = "0.2.0"
git = "https://gitlab.henriktjader.com/pln/panic-klee.git"
version = "0.1.0"

[dependencies.cortex-m]
version = "0.6.1"

[dependencies]
vcell = "0.1.2"

[patch.crates-io]
cortex-m = { git = "https://github.com/perlindgren/cortex-m.git", branch = "trustit" }
vcell = { git = "https://github.com/perlindgren/vcell.git", branch = "trustit" }

[profile.dev]
panic = "abort"

@@ -27,4 +41,8 @@ lto = true # better optimization
codegen-units = 1   # better optimization

[features]
klee-analysis = [
    "klee-sys/klee-analysis",
    "cortex-m/klee-analysis",
    "vcell/klee-analysis"
]
@@ -35,7 +35,7 @@ fn main() {
// test generated in debug/dev mode (un-optimized).
//
// Try to explain in your own words the difference and why?
// (Hint, even if we don't use the result `b`, Rust do not optimize out the call, why?)
//
// [your answer here]
//
@@ -48,11 +48,14 @@ fn main() {
//
// C) In the example, the array is holding only zeroes.
// Figure out a way to make the content symbolic.
// (Hint, declare as mutable, iterate and set each element symbolic.)
// (Hint2, it seems that you can set the whole array symbolic directly
// without messing with an iterator, super!!!)
//
// [Git commit "C"]
//
// D) Analyze the example using KLEE. Now a new (maybe unexpected) error should occur!
// Notice, the error occurs only in `debug/dev` builds.
//
// Explain what caused the error.
//
+250 −0
// cyccnt.rs

#![no_std]
#![no_main]

extern crate cortex_m;
extern crate panic_klee;

#[no_mangle]
fn main() {
    let mut core = cortex_m::Peripherals::take().unwrap();
    // core.DCB.enable_trace();
    // core.DWT.enable_cycle_counter();

    let start: u32 = core.DWT.cyccnt.read();

    // some code to measure
    // ...

    let end = core.DWT.cyccnt.read();

    let _time = end - start;
}

// A) Running KLEE on embedded code:
//
// Let's now revisit the time measure example from `rtic_f4xx_nucleo`.
// For simplicity we omit the `rtic` specifics.
//
// `cortex-m` is an API to the ARM Cortex core peripherals.
// (Under RTIC it was passed to the `init::Context` as `core`.)
//
// In this example we access it directly.
// Since Rust forbids mutable aliasing, we cannot `take` it twice,
// but more about that later...
//
// Let's focus on the access to the core peripherals, and
// the relation to the `klee-analysis` feature.
//
// In the Rust embedded ecosystem, all hardware registers
// are accessed through `vcell`, which ensure `volatile access`.
// (This means that reads and writes goes to memory.)
//
// We override `vcell` by a custom implementation:
//
// Looking at the `Cargo.toml` we find:
//
// [patch.crates-io]
// cortex-m = { git = "https://github.com/perlindgren/cortex-m.git", branch = "trustit" }
// vcell = { git = "https://github.com/perlindgren/vcell.git", branch = "trustit" }
//
// and
//
// [features]
// klee-analysis = [
//     "klee-sys/klee-analysis",
//     "cortex-m/klee-analysis",
//     "vcell/klee-analysis"
// ]
//
// The `klee-analysis` feature is propagated to `vcell`.
// When active:
// - writes are suppressed (they have no effect), and
// - reads generate a fresh (new) unique symbolic value.
//
// In the code above Line 15, we get one symbolic value for `DWT.cyccnt.read()`,
// and in Line 20, we get another unique symbolic value.
//
// Among other things `cortex-m` provides an API to read/write the `PRIMASK`,
// core register. (This controls the global interrupt enable/disable).
//
// We override the `cortex-m` crate, and propagate the `klee-analysis` feature.
// Similarly to `vcell`, when the `klee-analysis` feature is active:
// - writes to PRIMASK are suppressed, and
// - reads generate a fresh (new) unique value.
//
// In the code above Line 11, `cortex-m` executes the `take()` operation
// in a "critical section". In more detail:
// It first reads the `PRIMASK`; if false (we are not already in a critical section)
// it sets the `PRIMASK`.
//
// Now let's run the analysis:
//
// > cargo klee --example cyccnt --release
//
// KLEE: Using Z3 solver backend
// KLEE: WARNING: undefined reference to function: rust_eh_personality
//
// KLEE: done: total instructions = 38
// KLEE: done: completed paths = 1
// KLEE: done: generated tests = 1
//
// Only one path was found (no error occurred along the path).
//
// > ktest-tool target/release/examples/klee-last/test000001.ktest
//
// ktest file : 'target/release/examples/klee-last/test000001.ktest'
// args       : ['/home/pln/courses/d7020e/klee_tutorial/cargo_klee_examples/target/release/examples/cyccnt-012d42640c9faf9d.ll']
// num objects: 3
// object 0: name: 'PRIMASK'
// object 0: size: 4
// object 0: data: b'\x00\x00\x00\x00'
// object 0: hex : 0x00000000
// object 0: int : 0
// object 0: uint: 0
// object 0: text: ....
// object 1: name: 'vcell'
// object 1: size: 4
// object 1: data: b'\x00\x00\x00\x00'
// object 1: hex : 0x00000000
// object 1: int : 0
// object 1: uint: 0
// object 1: text: ....
// object 2: name: 'vcell'
// object 2: size: 4
// object 2: data: b'\x00\x00\x00\x00'
// object 2: hex : 0x00000000
// object 2: int : 0
// object 2: uint: 0
// object 2: text: ....
//
// As expected we read the `PRIMASK` and then read `cyccnt` twice.
//
// KLEE does not know which register was read, just that a `vcell`
// was accessed.
//
// We can replay the test:
//
// > cargo klee --example cyccnt --release --replay --gdb
// ...
// Type "apropos word" to search for commands related to "word"...
// Reading symbols from cyccnt.replay...
// (gdb) break main
// Breakpoint 1 at 0x555555555127: file examples/cyccnt.rs, line 11.
// (gdb) set environment KTEST_FILE=klee-last/test000001.ktest
// (gdb) run
// Breakpoint 1, klee_sys::lib_klee_analysis::klee_make_symbolic<u32> (t=0x7fffffffd75c,
//     name=<optimized out>)
//     at /home/pln/.cargo/git/checkouts/klee-sys-7ee2aa8a1a6bbc46/de8fd90/src/lib_klee_analysis.rs:19
// 19              crate::ll::klee_make_symbolic(
//
// Nice! The very first thing that happens is that we get a symbolic value
// (this is the `PRIMASK`)
//
// (gdb) next
// cortex_m::interrupt::free<closure-0,core::option::Option<cortex_m::peripheral::Peripherals>> (f=...)
//     at /home/pln/.cargo/git/checkouts/cortex-m-514878a7410beb63/07e7543/src/interrupt.rs:75
// 75          let r = f(unsafe { &CriticalSection::new() });
//
// This is the critical section mentioned above:
//
// (gdb) next
// cyccnt::main () at examples/cyccnt.rs:15
// 15          let start: u32 = core.DWT.cyccnt.read();
//
// (gdb) next
// cyccnt::main () at examples/cyccnt.rs:15
// 20          let end = core.DWT.cyccnt.read();
//
// (gdb) next
// 23      }
//
// Pretty much what we could expect.
//
// Now try to `run` again and print the values of `start`/`end`.
//
// What do you get, and why?
//
// [your answer here]
//
// As you should have seen, this was not particularly informative, right?
//
// B) In debug/dev mode
//
// Let's replay the example in debug mode.
//
// > cargo klee --example cyccnt
//
// KLEE: ERROR: /home/pln/.cargo/git/checkouts/panic-klee-aa8d015442188497/3b0c897/src/lib.rs:8: abort failure
// KLEE: NOTE: now ignoring this error at this location
//
// KLEE: done: total instructions = 761
// KLEE: done: completed paths = 4
// KLEE: done: generated tests = 3
//
// WHUT!!!!!
// An error occurred, this was unexpected, right?
//
// Try to figure out what caused the error.
//
// Hint, look at the generated tests and open the `testX.abort.err` file.
// (Each failing test comes with a corresponding error file.)
// 
// The error file contains a backtrace.
// Replay the failing test with a breakpoint set to the failing operation.
// Print the values of `start` and `end`, when hitting the breakpoint.
//
// Value of `start`.
//
// [your answer here]
//
// Value of `end`
//
// [your answer here]
//
// Why do these values cause an error in a debug/dev build but not in a release build?
//
// [your answer here]
//
// C) Fix the problem!
//
// Come up with a solution to the problem, and alter the source code accordingly.
//
// Verify that the program now passes without errors in both debug/dev and
// release builds.
//
// There are numerous ways to solve the problem.
// Argue for your solution in your own words.
//
// [your answer here]
//
// D) Learning outcomes and major takeaways.
//
// In this exercise, you have validated embedded Rust code using KLEE
// and found a typical "hard to find" error.
//
// Why are such errors so hard to find?
//
// Think about it.
//
// What is the condition for triggering the error?
//
// What is the behavior of CYCCNT?
//
// Assume the MCU is running at 8MHz.
//
// How long would lines 16/17 need to take to run in order to trigger the error?
//
// [your answer here]
//
// Of course this is a contrived example, and may not occur in practice.
// But it represents a class of problems/errors/bugs that are
// extremely hard to find by testing.
//
// Ask Jan Van Deventer about the anti-lock brakes he developed
// for Alfa Romeo! (His nick was Fender Bender at the time.)
//
// The same brake controller is present in the Opel Speedster, so you can
// ask Johan at Grepit how it feels when you have zero braking
// after hitting a bump. (He has now bypassed the anti-lock.)
+9 −0
[package]
name = "srp_analysis"
version = "0.1.0"
authors = ["Per Lindgren <per.lindgren@ltu.se>"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
+72 −0
use std::collections::{HashMap, HashSet};

// common data structures

#[derive(Debug)]
pub struct Task {
    pub id: String,
    pub prio: u8,
    pub deadline: u32,
    pub inter_arrival: u32,
    pub trace: Trace,
}

//#[derive(Debug, Clone)]
#[derive(Debug)]
pub struct Trace {
    pub id: String,
    pub start: u32,
    pub end: u32,
    pub inner: Vec<Trace>,
}

// useful types

// Our task set
pub type Tasks = Vec<Task>;

// A map from Task/Resource identifiers to priority
pub type IdPrio = HashMap<String, u8>;

// A map from Task identifiers to a set of Resource identifiers
pub type TaskResources = HashMap<String, HashSet<String>>;

// Derives the above maps from a set of tasks
pub fn pre_analysis(tasks: &Tasks) -> (IdPrio, TaskResources) {
    let mut ip = HashMap::new();
    let mut tr: TaskResources = HashMap::new();
    for t in tasks {
        update_prio(t.prio, &t.trace, &mut ip);
        for i in &t.trace.inner {
            update_tr(t.id.clone(), i, &mut tr);
        }
    }
    (ip, tr)
}

// helper functions
fn update_prio(prio: u8, trace: &Trace, hm: &mut IdPrio) {
    if let Some(old_prio) = hm.get(&trace.id) {
        if prio > *old_prio {
            hm.insert(trace.id.clone(), prio);
        }
    } else {
        hm.insert(trace.id.clone(), prio);
    }
    for cs in &trace.inner {
        update_prio(prio, cs, hm);
    }
}

fn update_tr(s: String, trace: &Trace, trmap: &mut TaskResources) {
    if let Some(seen) = trmap.get_mut(&s) {
        seen.insert(trace.id.clone());
    } else {
        let mut hs = HashSet::new();
        hs.insert(trace.id.clone());
        trmap.insert(s.clone(), hs);
    }
    for trace in &trace.inner {
        update_tr(s.clone(), trace, trmap);
    }
}
+76 −0
mod common;
use common::*;

fn main() {
    let t1 = Task {
        id: "T1".to_string(),
        prio: 1,
        deadline: 100,
        inter_arrival: 100,
        trace: Trace {
            id: "T1".to_string(),
            start: 0,
            end: 10,
            inner: vec![],
        },
    };

    let t2 = Task {
        id: "T2".to_string(),
        prio: 2,
        deadline: 200,
        inter_arrival: 200,
        trace: Trace {
            id: "T2".to_string(),
            start: 0,
            end: 30,
            inner: vec![
                Trace {
                    id: "R1".to_string(),
                    start: 10,
                    end: 20,
                    inner: vec![Trace {
                        id: "R2".to_string(),
                        start: 12,
                        end: 16,
                        inner: vec![],
                    }],
                },
                Trace {
                    id: "R1".to_string(),
                    start: 22,
                    end: 28,
                    inner: vec![],
                },
            ],
        },
    };

    let t3 = Task {
        id: "T3".to_string(),
        prio: 3,
        deadline: 50,
        inter_arrival: 50,
        trace: Trace {
            id: "T3".to_string(),
            start: 0,
            end: 30,
            inner: vec![Trace {
                id: "R2".to_string(),
                start: 10,
                end: 20,
                inner: vec![],
            }],
        },
    };

    // builds a vector of tasks t1, t2, t3
    let tasks: Tasks = vec![t1, t2, t3];

    println!("tasks {:?}", &tasks);
    // println!("tot_util {}", tot_util(&tasks));

    let (ip, tr) = pre_analysis(&tasks);
    println!("ip: {:?}", ip);
    println!("tr: {:?}", tr);
}