Compare revisions

Commits on Source: 11

15 files changed: +413 −50

Files

+2 −1
@@ -3,3 +3,4 @@


ktest/target
runner/target
cargo-trust/target
 No newline at end of file

.vscode/settings.json

0 → 100644
+28 −0
{
    "cSpell.ignoreWords": [
        "backtrace",
        "eabihf",
        "em",
        "fffffffd",
        "file",
        "istats",
        "ktest",
        "ktest file",
        "libc",
        "libcore",
        "lldb",
        "llvm",
        "maybe",
        "maybe uninit",
        "openocd",
        "satisfiability",
        "sigabrt",
        "thumbv",
        "thumbv em",
        "trustit",
        "uninit",
        "vcell",
        "x",
        "x fffffffd"
    ]
}
 No newline at end of file
+3 −3
@@ -35,7 +35,7 @@ version = "0.1.0"
git = "https://gitlab.henriktjader.com/pln/klee-sys.git"
# path = "../klee-sys"
version = "0.1.0"
features = ["inline-asm"]
#features = ["inline-asm"]

# [dependencies.cortex-m-rtfm]
# path = "../cortex-m-rtpro"
@@ -63,8 +63,8 @@ klee-analysis = [
    "cortex-m/klee-analysis",
    "cortex-m-rt/klee-analysis"
]

klee-replay = [ "klee-sys/klee-replay"]
inline-asm = ["cortex-m/inline-asm"]
inline-asm = ["cortex-m/inline-asm", "klee-sys/inline-asm"]
# rtpro = [ "cortex-m-rtfm/klee-analysis", "cortex-m-rt/rtpro", "lm3s6965" ]
f4 = ["stm32f4/stm32f401", "stm32f4/rt", "cortex-m-semihosting", "cortex-m-rt", "cortex-m"]

EXAM.md

0 → 100644
+161 −0
# Home Exam January 2020.

## Grading:

3) Implement the response-time analysis and the overall schedulability test.

4) Generate a report on the analysis results. This could be generated HTML (or XML rendered with an existing engine) or whatever form you feel presents and visualizes the results best, as discussed in class.

5) Integrate your analysis into the “trustit” framework (KLEE + automated test bed). The complete testbed will be provided later.

## Procedure

Start by reading 1, 2 and 3:

1) [A Stack-Based Resource Allocation Policy for Realtime Processes](https://www.math.unipd.it/~tullio/RTS/2009/Baker-1991.pdf), which refers to

2) [Stack-Based Scheduling of Realtime Processes](https://link.springer.com/content/pdf/10.1007/BF00365393.pdf), journal publication based on technical report [3] of the 1991 paper. The underlying model is the same in both papers.

3) [Rate Monotonic Analysis](http://www.di.unito.it/~bini/publications/2003BinButBut.pdf), especially equation 3, which is of interest to us. (It should be familiar from the real-time systems course you have taken previously.)

## Presentation

Make a git repo of your solution(s) with documentation (README.md) sufficient to reproduce your results.

Notify me (Telegram or mail) and we will decide on a time for your individual presentation. 30 minutes should be sufficient.

---

## Definitions

A task `t` is defined by:

- `P(t)` the priority of task `t`
- `D(t)` the deadline of task `t`
- `A(t)` the inter-arrival time of task `t`

A resource `r` is defined by:

- `π(r)` the resource ceiling, computed as the highest priority of any task accessing `r`. SRP allows for dynamic priorities; in our case we have static priorities only.

For SRP-based analysis we assume a task to execute a finite sequence of operations/instructions (a.k.a. run-to-end or run-to-completion semantics). During execution, a task can claim resources `Rj`... in a nested fashion. Sequential re-claiming of resources is allowed, but NOT re-claiming an already held resource in a nested fashion, since that would violate the Rust memory aliasing rules.

E.g., a possible trace for a task can look like:

`[t:...[r1:...[r2:...]...]...[r2:...]...]`, where `[r:...]` denotes a critical section of task `t` holding the resource `r`. In this case the task starts, and at some point claims `r1`; inside that critical section it claims `r2` (a nested claim). At some point it exits `r2`, exits `r1`, and continues executing, at which point it executes a critical section on `r2`, and then finally executes until completion.

## Grade 3

Analysis:

### 1. Total CPU utilization

Worst Case Execution Time (WCET) for tasks and critical sections

In general, determining WCET is rather tricky. In our case we adopt a measurement-based technique that spans all feasible paths of the task. Tests triggering the execution paths are automatically generated by symbolic execution. To correctly take concurrency into account, resource state is treated symbolically: the resource is given a fresh (new) symbolic value for each critical section. Inside the critical section we are ensured exclusive access (and thus the value can be further constrained inside the critical section). The resource model can be further extended by contracts (as shown by the `assume_assert.rs` example).

We model hardware (peripherals) as resources shared with the environment, with *atomic* operations only (read/write/modify). Rationale: we must assume that the state of a hardware resource may change at any time, thus only *atomic* access can be allowed.

For now, we just assume we have complete WCET information, in terms of `start` and `end` time-stamps (`u32`) for each section `[_: ... ]`. We represent that by the `Task` and `Trace` data structures in `common.rs`.
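Since `common.rs` is not reproduced here, the following is only a hypothetical sketch of what such data structures might look like; the field names and the time-stamps in the encoded example are assumptions, not the provided definitions.

```rust
// Hypothetical sketch of trace/task data, mirroring the section notation
// [id: ...] with start/end time-stamps and nested critical sections.
// The authoritative definitions live in the provided `common.rs`.
#[derive(Debug)]
pub struct Trace {
    pub id: String,        // "t" for the task itself, or a resource such as "r1"
    pub start: u32,        // time-stamp at section entry
    pub end: u32,          // time-stamp at section exit
    pub inner: Vec<Trace>, // nested critical sections
}

#[derive(Debug)]
pub struct Task {
    pub id: String,
    pub prio: u32,          // P(t)
    pub deadline: u32,      // D(t)
    pub inter_arrival: u32, // A(t)
    pub trace: Trace,       // worst-case execution trace with time-stamps
}

// The example trace [t:...[r1:...[r2:...]...]...[r2:...]...] could then
// be encoded as (time-stamps chosen arbitrarily for illustration):
pub fn example_trace() -> Trace {
    Trace {
        id: "t".to_string(),
        start: 0,
        end: 40,
        inner: vec![
            Trace {
                id: "r1".to_string(),
                start: 5,
                end: 20,
                inner: vec![Trace {
                    id: "r2".to_string(),
                    start: 10,
                    end: 15,
                    inner: vec![],
                }],
            },
            Trace {
                id: "r2".to_string(),
                start: 25,
                end: 30,
                inner: vec![],
            },
        ],
    }
}
```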

### Total CPU request (or total load factor)

Each task `t` has a WCET `C(t)` and an inter-arrival time `A(t)`. The CPU request (or load) inferred by a task is `L(t)` = `C(t)`/`A(t)`. Ask yourself: what is the consequence of `C(t)` > `A(t)`?

We can compute the total CPU request (or load factor) as `Ltot` = sum(`L(t)`) over all tasks `t` in the task set `T`.

Ask yourself: what is the consequence of `Ltot` > 1?

Implement a function taking `Vec<Task>` and returning the load factor.
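A minimal sketch of such a function, assuming a hypothetical `Task` with only the two fields needed here (the provided `common.rs` types differ):

```rust
// Hypothetical minimal task representation; the field names are
// assumptions, the real `Task` lives in `common.rs`.
pub struct Task {
    pub wcet: u32,          // C(t)
    pub inter_arrival: u32, // A(t)
}

// Ltot = sum of L(t) = C(t)/A(t) over the task set.
pub fn load_factor(tasks: &Vec<Task>) -> f64 {
    tasks
        .iter()
        .map(|t| t.wcet as f64 / t.inter_arrival as f64)
        .sum()
}
```

For example, two tasks with `C`/`A` of 10/100 and 30/200 give `Ltot` = 0.25; a value above 1 means the CPU is over-committed and the task set cannot be schedulable.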

### Response time (simple over-approximation)

Under SRP response time can be computed by equation 7.22 in [Hard Real-Time Computing Systems](
https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/Hard%20Real-Time%20Computing%20Systems_%20Predictable%20Scheduling%20Algorithms%20and%20Applications%20%283rd%20ed.%29%20%5BButtazzo%202011-09-15%5D.pdf).

In general, the response time is computed as:

- `R(t)` =  `C(t)` + `B(t)` + `I(t)`, where
  - `B(t)` is the blocking time for task `t`, and
  - `I(t)` is the interference (preemptions) to task `t`

For a task set to be schedulable under SRP we have two requirements:

- `Ltot` < 1
- `R(t)` < `D(t)`, for all tasks. (`R(t)` > `D(t)` implies a deadline miss.)

#### Blocking

SRP brings the outstanding property of single blocking. In words, a task `t` is blocked for at most the duration of the maximal critical section in which a task `l` with lower priority (`P(l)` < `P(t)`) holds a resource `l_r` with a ceiling `π(l_r)` equal to or higher than the priority of `t`.

- `B(t)` = max(`C(l_r)`), where `P(l)` < `P(t)` and `π(l_r)` >= `P(t)`

Implement a function that takes a `Task` and returns the corresponding blocking time.
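A sketch under the assumption that each task's critical sections have already been flattened into (ceiling, WCET) pairs — a hypothetical representation, not the `common.rs` one:

```rust
// Hypothetical flattened representation; in practice C(l_r) and the
// ceilings would be derived from the traces in `common.rs`.
pub struct Section {
    pub ceiling: u32, // π(r) of the resource held
    pub wcet: u32,    // C(l_r): worst-case length of the critical section
}

pub struct Task {
    pub prio: u32,              // P(t)
    pub sections: Vec<Section>, // critical sections executed by the task
}

// B(t) = max C(l_r) over all lower-priority tasks l holding a resource
// with ceiling π(l_r) >= P(t); zero if no such critical section exists.
pub fn blocking_time(t: &Task, tasks: &[Task]) -> u32 {
    tasks
        .iter()
        .filter(|l| l.prio < t.prio)
        .flat_map(|l| l.sections.iter())
        .filter(|s| s.ceiling >= t.prio)
        .map(|s| s.wcet)
        .max()
        .unwrap_or(0)
}
```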

#### Preemptions

A task is exposed to interference (preemptions) by higher-priority tasks. Intuitively, during the execution of a task `t` (its busy-period `Bp(t)`), each higher-priority task `h` (`P(h)` > `P(t)`) may preempt it (`Bp(t)`/`A(h)` rounded upwards) times.

- `I(t)` = sum(`C(h)` * ceiling(`Bp(t)`/`A(h)`)), for all tasks `h` with `P(h)` > `P(t)`, where
- `Bp(t)` is the *busy-period*

We can over-approximate the *busy-period* as `Bp(t)` = `D(t)` (assuming the worst allowed *busy-period*).

As a technical detail: for the scheduling of tasks of the same priority, the original work on SRP adopted a FIFO model (first arrived, first served). Under Rust RTFM, tasks are bound to hardware interrupts, so we can exploit the underlying hardware to do the actual scheduling for us (with zero overhead). However, the interrupt hardware schedules interrupts of the same priority by their index in the vector table. For our case we can make a safe over-approximation by considering preemptions from tasks with the SAME or higher priority (`P(h)` >= `P(t)`).

Implement a function that takes a `Task` and returns the corresponding preemption time.
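A sketch using the `Bp(t)` = `D(t)` over-approximation and counting same-priority tasks, as discussed above; the `Task` fields are again hypothetical stand-ins for the `common.rs` types:

```rust
// Hypothetical task fields; `id` merely distinguishes tasks.
pub struct Task {
    pub id: u32,
    pub prio: u32,          // P(t)
    pub deadline: u32,      // D(t)
    pub inter_arrival: u32, // A(t)
    pub wcet: u32,          // C(t)
}

fn div_ceil(a: u32, b: u32) -> u32 {
    (a + b - 1) / b
}

// I(t) with the busy-period over-approximated as Bp(t) = D(t).
// Tasks of the SAME priority are counted as well (safe over-approximation).
pub fn preemption_time(t: &Task, tasks: &[Task]) -> u32 {
    let bp = t.deadline;
    tasks
        .iter()
        .filter(|h| h.id != t.id && h.prio >= t.prio)
        .map(|h| h.wcet * div_ceil(bp, h.inter_arrival))
        .sum()
}
```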

Now make a function that computes the response time for a `Task`, by combining `C(t)`, `B(t)` and `I(t)`.

Finally, make a function that iterates over the task set and returns a vector of tuples `(Task, R(t), C(t), B(t), I(t))`. Just a simple `println!` of that vector gives the essential information on the analysis.
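A minimal sketch of the combination and reporting step; here `C`, `B` and `I` are assumed to come from the WCET data and the blocking/preemption functions this exam asks you to implement:

```rust
// R(t) = C(t) + B(t) + I(t).
pub fn response_time(c: u32, b: u32, i: u32) -> u32 {
    c + b + i
}

// One analysis row per task: (task name, R, C, B, I).
// A plain println! over these rows summarizes the analysis.
pub fn report(rows: &[(String, u32, u32, u32, u32)]) {
    for (task, r, c, b, i) in rows {
        println!("{}: R = {}, C = {}, B = {}, I = {}", task, r, c, b, i);
    }
}
```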

#### Preemptions revisited

In equation `7.22` the *busy-period* is computed by a recurrence equation.

Implement the recurrence relation starting from the base case `C(t) + B(t)`. The recurrence might diverge in case `Bp(t) > D(t)`; this is a pathological case where the task becomes non-schedulable, so terminate the recurrence (with an error). You might want to indicate that a non-feasible response time has been reached by using the `Result<u32, ()>` type or some other means, e.g., `Option<u32>`.

You can let your `preemption` function take a parameter indicating if the exact solution or approximation should be used.
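A sketch of the recurrence, with the blocking time passed in precomputed and hypothetical `Task` fields; it iterates from the base case to a fixed point and reports divergence past the deadline as an error:

```rust
// Hypothetical task fields, as elsewhere in this exam; B(t) is passed in.
pub struct Task {
    pub id: u32,
    pub prio: u32,          // P(t)
    pub deadline: u32,      // D(t)
    pub inter_arrival: u32, // A(t)
    pub wcet: u32,          // C(t)
}

fn div_ceil(a: u32, b: u32) -> u32 {
    (a + b - 1) / b
}

// Busy-period recurrence: Bp(t) = C(t) + B(t) + sum over tasks h with
// P(h) >= P(t) of C(h) * ceil(Bp(t)/A(h)), iterated from the base case
// C(t) + B(t) until a fixed point. Err(()) signals divergence past D(t),
// i.e., the task is not schedulable.
pub fn busy_period(t: &Task, blocking: u32, tasks: &[Task]) -> Result<u32, ()> {
    let mut bp = t.wcet + blocking; // base case C(t) + B(t)
    loop {
        let next = t.wcet
            + blocking
            + tasks
                .iter()
                .filter(|h| h.id != t.id && h.prio >= t.prio)
                .map(|h| h.wcet * div_ceil(bp, h.inter_arrival))
                .sum::<u32>();
        if next > t.deadline {
            return Err(()); // diverged: no feasible response time
        }
        if next == bp {
            return Ok(bp); // fixed point reached
        }
        bp = next;
    }
}
```

The iteration terminates because `next` is monotone in `bp` and bounded by the deadline check; an `exact: bool` parameter on your `preemption` function can then select between this fixed point and the `Bp(t) = D(t)` approximation.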

## Grade 4

Here you can go wild and use your creativity to present the task set and analysis results in the most informative manner. We will discuss some possible visualizations during class.

## Grade 5

If you aim for the highest grade, let me know and I will hook you up with the current state of the development. The goal is to derive the task set characterization by means of the automated test-bed (test-case generation + test runner based on the `probe.rs` library). All the primitives are there, and re-implementing (back-porting) previous work based on `RTFM3` is mostly an engineering effort.

---

## Resources

`common.rs` gives the basic data structures and some helper functions.

`generate.rs` gives an example of how `Tasks` can be manually constructed. This is vastly helpful for your development when getting started.

## Tips

When working with Rust, the standard library documentation [std](https://doc.rust-lang.org/std/) is excellent and easy to search (just press S). In most cases you will find examples of intended use, and cross-references to related data types are just a click away.

Use the `generate` example to get started. Initially you may simplify it further by reducing the number of tasks and/or resources. Make sure you understand the helper functions given in `common.rs` (your code will likely look quite similar). You might want to add further `common` types and helper functions along the way to streamline your development.

Generate your own task sets to make sure your code works in the general case, not only for the `Tasks` provided. Heads up: I will expose your code to some other, more complex task sets.

---

## Robust and Energy Efficient Real-Time Systems

In this part of the course, we have covered:

- Software robustness. We have adopted Rust and Symbolic Execution to achieve guaranteed memory safety and defined behavior (panic-free execution). With this at hand, we have a strong (and theoretically underpinned) foundation for improved robustness and reliability, proven at compile time.

- Real-Time Scheduling and Analysis. SRP provides an execution model and resource management policy with outstanding properties of race- and deadlock-free execution, single blocking, and stack sharing. Our Rust RTFM framework provides a correct-by-construction implementation of SRP, exploiting zero-cost (software) abstractions. Under Rust RTFM, resource management and scheduling are done directly by the hardware, which allows for efficiency (zero overhead) and predictability.

  The SRP model is amenable to static analysis, which you have now internalized through an actual implementation of the theoretical foundations. We have also covered methods for Worst Case Execution Time (WCET) analysis by cycle-accurate measurements, which in combination with Symbolic Execution for test-case generation allows for a high degree of automation.

- Energy Consumption. Static (leakage) dissipation is related to the supply voltage, while dynamic (switching) dissipation grows with the clock frequency and switching activity. In the case of embedded systems, low-power modes allow parts of the system to be powered down while retaining sufficient functionality to wake on external (and/or internal) events. In sleep mode, both static and dynamic power dissipation are minimized, typically to the order of µA (in comparison to mA in run mode).

   Rust RTFM adopts an event-driven approach, allowing the system to automatically sleep when no further tasks are eligible for scheduling. Moreover, leveraging the zero-cost abstractions in Rust and the guarantees provided by the analysis framework, we do not need to sacrifice correctness, robustness, or reliability in order to obtain highly efficient executables.

Robust and Energy Efficient Real-Time Systems for real. This is the Way!
+6 −7
@@ -2,14 +2,13 @@


This repo contains a set of usage examples for `klee-sys` low-level KLEE bindings. For more information on internal design behind see the [klee-sys](https://gitlab.henriktjader.com/pln/klee-sys) repo.

See section `Cargo.toml` for detaled information on features introduced.
See section `Cargo.toml` for detailed information on features introduced.

### General dependencies

- llvm toolchain tested with (9.0.1)
- LLVM toolchain tested with (9.0.1 and 10.0.1)
- rustup tested with 1.40.0 (73528e339 2019-12-16)
- rustup tested with rust toolchain 1.47.0 (18bf6b4f0 2020-10-07) and 1.49.0-nightly (ffa2e7ae8 2020-10-24)
- klee tested with KLEE 2.1-pre (https://klee.github.io)
- klee tested with KLEE 2.1 (https://klee.github.io)

- cargo-klee (installed from git)

---
@@ -18,11 +17,11 @@ See section `Cargo.toml` for detaled information on features introduced.

- `paths.rs`

    This example showcase the different path termintaiton conditions possible and their effect to KLEE test case generation.
    This example showcase the different path termination conditions possible and their effect to KLEE test case generation.

- `assume_assert.rs`

    This example showcase contract based verification, and the possibilies to extract proofs.
    This example showcase contract based verification, and the possibilities to extract proofs.

- `struct.rs`


examples/README.md

0 → 100644
+14 −0
# Examples

## assume_assert

Exercises the ability to introduce assumptions in order to prove assertions.

This leads into possibilities for contracts.

## register_test

Exercises the ability to automatically treat hardware read accesses as streams of unknown values,
and hardware write accesses as no operations. In this way, we can *pessimistically* analyse low-level
code such as drivers. More detailed information (assumptions) on the behavior of the hardware can be 
modelled as mocks for the corresponding register blocks.
 No newline at end of file
@@ -74,7 +74,7 @@ fn f1(a: u32) -> u32 {
// So KLEE tracks the "path condition", i.e., at line 18 it knows (assumes) that
// a < u32::MAX, and finds that the assumption a == u32::MAX cannot be satisfied.
//
// This is extremely powerful as KLEE tracks all known "constraints" and all their raliaitons
// This is extremely powerful as KLEE tracks all known "constraints" and all their relations
// and mathematically checks for the satisfiability of each "assume" and "assert".
// So what we get here is not a mere test, but an actual proof!!!!
// This is the way!
@@ -102,7 +102,7 @@ fn f1(a: u32) -> u32 {
//     a + 1
// }
//
// It might even be possible to derive post condtitions from pre conditions,
// It might even be possible to derive post conditions from pre-conditions,
// and report them to the user. Problem is that the conditions are
// represented as "first order logic" (FOL) constraints, which need to be
// converted into readable form (preferably Rust expressions.)
@@ -8,43 +8,51 @@ extern crate panic_halt;


use stm32f4::stm32f401 as stm32;

use cortex_m::{asm, bkpt, iprintln};
use cortex_m::{asm, bkpt};
use cortex_m_rt::entry;

// // use klee_sys::klee_make_symbolic2;
// // Mimic RTFM resources
// static mut X: u32 = 54;
#[entry]
#[inline(never)]
fn main() -> ! {
    let mut x: u32 = 54;
    let mut x = 54;
    // klee_make_symbolic(&mut x);
    // while x == 0 {}
    // // asm::bkpt();
    //
    bkpt!(1);
    asm::nop();

    bkpt!(2);
    klee_make_symbolic(&mut x);
    asm::nop();

    if x == 0 {
        bkpt!();
    }

    loop {
        asm::nop();
    }
}

#[inline(always)]
#[inline(never)]
fn klee_make_symbolic<T>(data: &mut T) {
pub fn klee_make_symbolic<T>(data: &mut T) {
    asm::bkpt();
    // force llvm to consider data to be mutated
    // unsafe { klee_bkpt(data as *mut T as *mut core::ffi::c_void) };
    unsafe {
        asm!("bkpt #0" : /* output */: /* input */ "r"(data): /* clobber */ : "volatile")
    }
}

#[no_mangle]
pub extern "C" fn klee_bkpt(data: *mut core::ffi::c_void) {
    //*data = 0;
    asm::bkpt();
}

// pub fn taint() {
//     unsafe {
//         X = 73;
//     }
// }

// #[no_mangle]
// pub extern "C" fn klee_bkpt(data: *mut core::ffi::c_void) {
//     bkpt!();
// }

// extern "C" {
//     pub fn klee_bkpt(ptr: *mut core::ffi::c_void, // pointer to the data
//     );
// }
// cargo objdump --bin app --release -- -disassemble -no-show-raw-insn

// unsafe { asm!("mov $0,R15" : "=r"(r) ::: "volatile") }
// cargo objdump --example f401_ktest --release --features f4,inline-asm --target thumbv7em-none-eabihf -- -d