Commits on Source (11)
......@@ -3,3 +3,4 @@
ktest/target
runner/target
cargo-trust/target
\ No newline at end of file
{
"cSpell.ignoreWords": [
"backtrace",
"eabihf",
"em",
"fffffffd",
"file",
"istats",
"ktest",
"ktest file",
"libc",
"libcore",
"lldb",
"llvm",
"maybe",
"maybe uninit",
"openocd",
"satisfiability",
"sigabrt",
"thumbv",
"thumbv em",
"trustit",
"uninit",
"vcell",
"x",
"x fffffffd"
]
}
\ No newline at end of file
......@@ -35,7 +35,7 @@ version = "0.1.0"
git = "https://gitlab.henriktjader.com/pln/klee-sys.git"
# path = "../klee-sys"
version = "0.1.0"
features = ["inline-asm"]
#features = ["inline-asm"]
# [dependencies.cortex-m-rtfm]
# path = "../cortex-m-rtpro"
......@@ -63,8 +63,8 @@ klee-analysis = [
"cortex-m/klee-analysis",
"cortex-m-rt/klee-analysis"
]
inline-asm = ["cortex-m/inline-asm"]
klee-replay = [ "klee-sys/klee-replay"]
inline-asm = ["cortex-m/inline-asm", "klee-sys/inline-asm"]
# rtpro = [ "cortex-m-rtfm/klee-analysis", "cortex-m-rt/rtpro", "lm3s6965" ]
f4 = ["stm32f4/stm32f401", "stm32f4/rt", "cortex-m-semihosting", "cortex-m-rt", "cortex-m"]
......
# Home Exam January 2020.
## Grading:
3) Implement the response time analysis and the overall schedulability test.
4) Generate a report on the analysis results; this could be generated html (or xml, rendered by a suitable xml rendering engine) or however you feel the results are best reported and visualized, as discussed in class.
5) Integrate your analysis into the “trustit” framework (KLEE + automated test bed). The complete testbed will be provided later.
## Procedure
Start by reading 1, 2 and 3:
1) [A Stack-Based Resource Allocation Policy for Realtime Processes](https://www.math.unipd.it/~tullio/RTS/2009/Baker-1991.pdf), which refers to
2) [Stack-Based Scheduling of Realtime Processes](https://link.springer.com/content/pdf/10.1007/BF00365393.pdf), journal publication based on technical report [3] of the 1991 paper. The underlying model is the same in both papers.
3) [Rate Monotonic Analysis](http://www.di.unito.it/~bini/publications/2003BinButBut.pdf) , especially equation 3 is of interest to us. (It should be familiar for the real-time systems course you have taken previously.)
## Presentation
Make a git repo of your solution(s) with documentation (README.md) sufficient to reproduce your results.
Notify me (Telegram or mail) and we will decide on a time for an individual presentation. 30 minutes should be sufficient.
---
## Definitions
A task `t` is defined by:
- `P(t)` the priority of task `t`
- `D(t)` the deadline of task `t`
- `A(t)` the inter-arrival of task `t`
A resource `r` is defined by:
- `π(r)` the resource ceiling, computed as the highest priority of any task accessing `r`. SRP allows for dynamic priorities, in our case we have static priorities only.
For SRP based analysis we assume a task to perform/execute a finite sequence of operations/instructions (aka run-to-end or run-to-completion semantics). During execution, a task can claim resources `Rj`... in a nested fashion. Sequential re-claim of resources is allowed, but NOT re-claiming an already held resource (in a nested fashion, since that would violate the Rust memory aliasing rule).
E.g., a possible trace for a task can look like:
`[t:...[r1:...[r2:...]...]...[r2:...]...]`, where `[r:...]` denotes a critical section of task `t` holding the resource `r`. In this case the task starts, and at some point claims `r1` and inside that critical section claims `r2` (a nested claim); at some point it exits `r2`, exits `r1`, continues executing, later enters a critical section on `r2`, and then finally executes until completion.
## Grade 3
Analysis:
### 1. Total CPU utilization
Worst Case Execution Time (WCET) for tasks and critical sections
In general, determining WCET is rather tricky. In our case we adopt a measurement based technique that spans all feasible paths of the task. Tests triggering the execution paths are automatically generated by symbolic execution. To correctly take concurrency into account, resource state is treated symbolically. Thus, the resource is given a fresh (new) symbolic value for each critical section. Inside the critical section we are ensured exclusive access (and thus the value can be further constrained inside of the critical section). The resource model can be further extended by contracts (as shown by the `assume_assert.rs` example).
We model hardware (peripherals) as shared resources (shared by the environment) that are *atomic* (with read/write/modify operations only). Rationale: we must assume that the state of a hardware resource may change at any time, thus only *atomic* access can be allowed.
For now, we just assume we have complete WCET information, in terms of `start` and `end` time-stamps (`u32`) for each section `[_: ... ]`. We represent that by the `Task` and `Trace` data structures in `common.rs`.
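As a sketch of how the WCET falls out of these time-stamps (the `Trace` struct mirrors `common.rs`; the `wcet` helper is a hypothetical addition, not part of the provided code):

```rust
// Mirrors the Trace data structure in common.rs: a section with an
// identifier, start/end time-stamps, and nested critical sections.
#[derive(Debug)]
pub struct Trace {
    pub id: String,
    pub start: u32,
    pub end: u32,
    pub inner: Vec<Trace>,
}

// The WCET of a section is simply end - start; for a task this is the
// WCET of its outermost trace (inner sections are contained within it).
pub fn wcet(trace: &Trace) -> u32 {
    trace.end - trace.start
}

fn main() {
    // T1 from generate.rs: runs from time-stamp 0 to 10, no inner sections
    let t1 = Trace {
        id: "T1".to_string(),
        start: 0,
        end: 10,
        inner: vec![],
    };
    println!("C(T1) = {}", wcet(&t1)); // prints C(T1) = 10
}
```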
### Total CPU request (or total load factor)
Each task `t` has a WCET `C(t)` and inter-arrival time `A(t)`. The CPU request (or load) inferred by a task is `L(t)` = `C(t)`/`A(t)`. Ask yourself, what is the consequence of `C(t)` > `A(t)`?
We can compute the total CPU request (or load factor) as `Ltot` = sum(`L(t)`) over all tasks `t` in the task set.
Ask yourself, what is the consequence of `Ltot` > 1?
Implement a function taking `Vec<Task>` and returning the load factor.
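A minimal sketch of such a function, assuming a simplified stand-in for `Task` that carries `C(t)` and `A(t)` directly (in the real `common.rs`, `C(t)` would instead be derived from the task's trace):

```rust
// Hypothetical, simplified task: only the fields the load factor needs.
struct SimpleTask {
    wcet: u32,          // C(t)
    inter_arrival: u32, // A(t)
}

// Ltot = sum over all tasks t of L(t) = C(t) / A(t)
fn load_factor(tasks: &[SimpleTask]) -> f64 {
    tasks
        .iter()
        .map(|t| t.wcet as f64 / t.inter_arrival as f64)
        .sum()
}

fn main() {
    // Task set mirroring generate.rs: C = 10/30/30, A = 100/200/50
    let tasks = [
        SimpleTask { wcet: 10, inter_arrival: 100 },
        SimpleTask { wcet: 30, inter_arrival: 200 },
        SimpleTask { wcet: 30, inter_arrival: 50 },
    ];
    // 0.1 + 0.15 + 0.6 = 0.85, i.e., Ltot < 1
    println!("Ltot = {}", load_factor(&tasks));
}
```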
### Response time (simple over-approximation)
Under SRP response time can be computed by equation 7.22 in [Hard Real-Time Computing Systems](
https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/Hard%20Real-Time%20Computing%20Systems_%20Predictable%20Scheduling%20Algorithms%20and%20Applications%20%283rd%20ed.%29%20%5BButtazzo%202011-09-15%5D.pdf).
In general the response time is computed as:
- `R(t)` = `C(t)` + `B(t)` + `I(t)`, where
- `B(t)` is the blocking time for task `t`, and
- `I(t)` is the interference (preemptions) to task `t`
For a task set to be schedulable under SRP we have two requirements:
- `Ltot` < 1
- `R(t)` < `D(t)`, for all tasks. (`R(t)` > `D(t)` implies a deadline miss.)
#### Blocking
SRP brings the outstanding property of single blocking. In words, a task `t` can be blocked at most once, by the longest critical section during which a lower-priority task `l` (`P(l)` < `P(t)`) holds a resource `l_r` with a ceiling `π(l_r)` equal to or higher than the priority of `t`.
- `B(t)` = max(`C(l_r)`), where `P(l)`< `P(t)`, `π(l_r) >= P(t)`
Implement a function that takes a `Task` and returns the corresponding blocking time.
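A minimal sketch under the blocking formula above, assuming a hypothetical flat representation of the system's critical sections (in the full solution these would be derived from the `Trace` structures and the ceilings from `pre_analysis` in `common.rs`):

```rust
// Hypothetical flat view of one critical section in the system.
struct CritSect {
    holder_prio: u8, // P(l), priority of the task holding the resource
    ceiling: u8,     // pi(l_r), ceiling of the held resource
    wcet: u32,       // C(l_r), length of the critical section
}

// B(t) = max C(l_r) over sections with P(l) < P(t) and pi(l_r) >= P(t)
fn blocking(prio: u8, sections: &[CritSect]) -> u32 {
    sections
        .iter()
        .filter(|s| s.holder_prio < prio && s.ceiling >= prio)
        .map(|s| s.wcet)
        .max()
        .unwrap_or(0) // no lower-priority section can block us
}

fn main() {
    let sections = [
        CritSect { holder_prio: 1, ceiling: 3, wcet: 10 },
        CritSect { holder_prio: 2, ceiling: 3, wcet: 6 },
        CritSect { holder_prio: 2, ceiling: 2, wcet: 8 },
    ];
    // For P(t) = 3: the first two sections qualify, so B = max(10, 6)
    println!("B = {}", blocking(3, &sections));
}
```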
#### Preemptions
A task is exposed to interference (preemptions) by higher priority tasks. Intuitively, during the busy period `Bp(t)` of a task `t`, each higher priority task `h` (`P(h)` > `P(t)`) may preempt it `Bp(t)`/`A(h)` (rounded upwards) times.
- `I(t)` = sum(`C(h)` * ceiling(`Bp(t)`/`A(h)`)), forall tasks `h`, `P(h)` > `P(t)`, where
- `Bp(t)` is the *busy-period*
We can over-approximate the *busy period* as `Bp(t)` = `D(t)` (assuming the worst allowed *busy-period*).
As a technical detail: for the scheduling of tasks of the same priority, the original work on SRP adopted a FIFO model (first arrived, first served). Under Rust RTFM, tasks are bound to hardware interrupts, so we can exploit the underlying hardware to do the actual scheduling for us (with zero overhead). However, the interrupt hardware schedules interrupts of the same priority by their index in the vector table. For our case we can make a safe over-approximation by considering preemptions from tasks with the SAME or higher priority (`P(h)` >= `P(t)`).
Implement a function that takes a `Task` and returns the corresponding preemption time.
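A sketch using the over-approximation `Bp(t) = D(t)` and the same-or-higher-priority rule (a hypothetical minimal task type; the real solution would take the `Task` from `common.rs` and derive `C` from the trace):

```rust
// Hypothetical minimal task for the interference computation.
struct T {
    id: &'static str,
    prio: u8,
    wcet: u32,          // C(t)
    deadline: u32,      // D(t), used as the over-approximated busy period
    inter_arrival: u32, // A(t)
}

// Integer ceiling of a / b (b > 0).
fn div_ceil(a: u32, b: u32) -> u32 {
    (a + b - 1) / b
}

// I(t) = sum C(h) * ceil(Bp(t)/A(h)) over tasks h != t with P(h) >= P(t),
// with Bp(t) over-approximated by D(t).
fn preemption(task: &T, tasks: &[T]) -> u32 {
    tasks
        .iter()
        .filter(|h| h.id != task.id && h.prio >= task.prio)
        .map(|h| h.wcet * div_ceil(task.deadline, h.inter_arrival))
        .sum()
}

fn main() {
    // Task set mirroring generate.rs
    let tasks = [
        T { id: "T1", prio: 1, wcet: 10, deadline: 100, inter_arrival: 100 },
        T { id: "T2", prio: 2, wcet: 30, deadline: 200, inter_arrival: 200 },
        T { id: "T3", prio: 3, wcet: 30, deadline: 50, inter_arrival: 50 },
    ];
    // I(T1) = 30*ceil(100/200) + 30*ceil(100/50) = 30 + 60 = 90
    println!("I(T1) = {}", preemption(&tasks[0], &tasks));
}
```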
Now make a function that computes the response time for a `Task`, by combining `C(t)`, `B(t)` and `I(t)`.
Finally, make a function that iterates over the task set and returns a vector containing tuples:
`Vec<(Task, R(t), C(t), B(t), I(t))>`. Just a simple `println!` of that vector gives the essential information on the analysis.
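A minimal sketch of the combination step (plain `u32`s and string identifiers stand in for the real `Task`-based signatures; all numbers are illustrative):

```rust
// R(t) = C(t) + B(t) + I(t)
fn response_time(c: u32, b: u32, i: u32) -> u32 {
    c + b + i
}

fn main() {
    // (id, C, B, I) per task -- illustrative numbers only
    let analysed = [("T1", 10u32, 0u32, 90u32), ("T2", 30, 10, 120), ("T3", 30, 0, 0)];
    // Collect (id, R, C, B, I) tuples; a plain println! reports them all
    let results: Vec<_> = analysed
        .iter()
        .map(|&(id, c, b, i)| (id, response_time(c, b, i), c, b, i))
        .collect();
    println!("{:?}", results);
}
```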
#### Preemptions revisited
The *busy-period* is, in equation `7.22`, computed by a recurrence equation.
Implement the recurrence relation starting from the base case `C(t) + B(t)`. The recurrence might diverge in case `Bp(t) > D(t)`; this is a pathological case, where the task becomes non-schedulable, so terminate the recurrence (with an error). You might want to indicate that a non-feasible response time has been reached by using the `Result<u32, ()>` type, or some other means, e.g., `Option<u32>`.
You can let your `preemption` function take a parameter indicating whether the exact solution or the approximation should be used.
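A sketch of the recurrence, iterating to a fixed point and aborting with an error once the busy period passes the deadline (the `(C(h), A(h))` slice for same-or-higher-priority tasks is a hypothetical simplification; the full solution would extract it from the task set):

```rust
// Integer ceiling of a / b (b > 0).
fn div_ceil(a: u32, b: u32) -> u32 {
    (a + b - 1) / b
}

// Recurrence for the busy period:
//   Bp_0     = C(t) + B(t)
//   Bp_{n+1} = C(t) + B(t) + sum C(h) * ceil(Bp_n / A(h)),  P(h) >= P(t)
// `higher` holds (C(h), A(h)) for all tasks h != t with P(h) >= P(t).
// Returns Ok(Bp) at the fixed point, Err(()) if Bp exceeds D(t).
fn busy_period(c: u32, b: u32, d: u32, higher: &[(u32, u32)]) -> Result<u32, ()> {
    let mut bp = c + b; // base case
    loop {
        if bp > d {
            return Err(()); // diverged past the deadline: not schedulable
        }
        let next = c + b
            + higher
                .iter()
                .map(|&(ch, ah)| ch * div_ceil(bp, ah))
                .sum::<u32>();
        if next == bp {
            return Ok(bp); // fixed point reached
        }
        bp = next;
    }
}

fn main() {
    // T1-like task: C = 10, B = 0, D = 100, preempted by (30, 200), (30, 50)
    // 10 -> 70 -> 100 -> 100 (fixed point)
    println!("Bp = {:?}", busy_period(10, 0, 100, &[(30, 200), (30, 50)]));
}
```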
## Grade 4
Here you can go wild, and use your creativity to present the task set and analysis results in the most informative manner. We will discuss some possible visualizations during class.
## Grade 5
If you aim for the highest grade, let me know and I will hook you up with the current state of the development. The goal is to derive the task set characterization by means of the automated test bed (test case generation + test runner based on the `probe-rs` library). All the primitives are there, and re-implementing (back-porting) previous work based on `RTFM3` is mostly an engineering effort.
---
## Resources
`common.rs` gives the basic data structures, and some helper functions.
`generate.rs` gives an example of how `Tasks` can be manually constructed. This is very helpful for getting your development started.
## Tips
When working with Rust, the standard library documentation [std](https://doc.rust-lang.org/std/) is excellent and easy to search (just press `S`). For most cases, you will find examples of intended use, and cross references to related data types are just a click away.
Use the `generate` example to get started. Initially you may simplify it further by reducing the number of tasks and/or resources. Make sure you understand the helper functions given in `common.rs` (your code will likely look quite similar). You might want to add further `common` types and helper functions along the way to streamline your development.
Generate your own task sets to make sure your code works in the general case, not only for the `Tasks` provided. Heads up: I will expose your code to some other, more complex task sets.
---
## Robust and Energy Efficient Real-Time Systems
In this part of the course, we have covered:
- Software robustness. We have adopted Rust and Symbolic Execution to achieve guaranteed memory safety and defined behavior (panic free execution). With this at hand, we have a strong (and theoretically underpinned) foundation for improved robustness and reliability proven at compile time.
- Real-Time Scheduling and Analysis. SRP provides an execution model and resource management policy with outstanding properties of race- and deadlock-free execution, single blocking and stack sharing. Our Rust RTFM framework provides a correct-by-construction implementation of SRP, exploiting zero-cost (software) abstractions. Using Rust RTFM, resource management and scheduling are done directly by the hardware, which allows for efficiency (zero overhead) and predictability.
The SRP model is amenable to static analysis, which you have now internalized through an actual implementation of the theoretical foundations. We have also covered methods for Worst Case Execution Time (WCET) analysis by cycle accurate measurements, which in combination with Symbolic Execution for test-case generation allows for a high degree of automation.
- Energy Consumption is roughly proportional to the supply voltage (static leakage/dissipation), and exponential to the frequency (dynamic/switching activity dissipation). In the case of embedded systems, low-power modes allow parts of the system to be powered down while retaining sufficient functionality to wake on external (and/or internal) events. In sleep mode, both static and dynamic power dissipation is minimized typically to the order of uAmp (in comparison to mAmp in run mode).
Rust RTFM adopts an event driven approach, allowing the system to automatically sleep when no further tasks are eligible for scheduling. Moreover, leveraging the zero-cost abstractions in Rust and the guarantees provided by the analysis framework, we do not need to sacrifice correctness, robustness, or reliability in order to obtain highly efficient executables.
Robust and Energy Efficient Real-Time Systems for real. This is the Way!
......@@ -2,14 +2,13 @@
This repo contains a set of usage examples for `klee-sys` low-level KLEE bindings. For more information on internal design behind see the [klee-sys](https://gitlab.henriktjader.com/pln/klee-sys) repo.
See section `Cargo.toml` for detaled information on features introduced.
See section `Cargo.toml` for detailed information on features introduced.
### General dependencies
- llvm toolchain tested with (9.0.1)
- rustup tested with 1.40.0 (73528e339 2019-12-16)
- klee tested with KLEE 2.1-pre (https://klee.github.io)
- LLVM toolchain tested with (9.0.1 and 10.0.1)
- rustup tested with rust toolchain 1.47.0 (18bf6b4f0 2020-10-07) and 1.49.0-nightly (ffa2e7ae8 2020-10-24)
- klee tested with KLEE 2.1 (https://klee.github.io)
- cargo-klee (installed from git)
---
......@@ -18,11 +17,11 @@ See section `Cargo.toml` for detaled information on features introduced.
- `paths.rs`
This example showcase the different path termintaiton conditions possible and their effect to KLEE test case generation.
This example showcases the different path termination conditions possible and their effect on KLEE test case generation.
- `assume_assert.rs`
This example showcase contract based verification, and the possibilies to extract proofs.
This example showcases contract based verification, and the possibilities to extract proofs.
- `struct.rs`
......
# Examples
## assume_assert
Exercises the ability to introduce assumptions in order to prove assertions.
This leads into possibilities for contracts.
## register_test
Exercises the ability to automatically treat hardware read accesses as streams of unknown values,
and hardware write accesses as no operations. In this way, we can *pessimistically* analyse low-level
code such as drivers. More detailed information (assumptions) on the behavior of the hardware can be
modelled as mocks for the corresponding register blocks.
\ No newline at end of file
......@@ -74,7 +74,7 @@ fn f1(a: u32) -> u32 {
// So KLEE tracks the "path condition", i.e., at line 18 it knows (assumes) that
// a < u32::MAX, and finds that the assumption a == u32::MAX cannot be satisfied.
//
// This is extremely powerful as KLEE tracks all known "constraints" and all their raliaitons
// This is extremely powerful as KLEE tracks all known "constraints" and all their relations
// and mathematically checks for the satisfiability of each "assume" and "assert".
// So what we get here is not a mere test, but an actual proof!!!!
// This is the way!
......@@ -102,7 +102,7 @@ fn f1(a: u32) -> u32 {
// a + 1
// }
//
// It might even be possible to derive post condtitions from pre conditions,
// It might even be possible to derive post conditions from pre-conditions,
// and report them to the user. Problem is that the conditions are
// represented as "first order logic" (FOL) constraints, which need to be
// converted into readable form (preferably Rust expressions.)
......
......@@ -8,43 +8,51 @@ extern crate panic_halt;
use stm32f4::stm32f401 as stm32;
use cortex_m::{asm, bkpt, iprintln};
use cortex_m::{asm, bkpt};
use cortex_m_rt::entry;
// // use klee_sys::klee_make_symbolic2;
// // Mimic RTFM resources
// static mut X: u32 = 54;
#[entry]
#[inline(never)]
fn main() -> ! {
let mut x: u32 = 54;
// klee_make_symbolic(&mut x);
// while x == 0 {}
// // asm::bkpt();
//
//
bkpt!(1);
asm::nop();
asm::nop();
let mut x = 54;
bkpt!(2);
asm::nop();
klee_make_symbolic(&mut x);
if x == 0 {
bkpt!();
}
loop {
asm::nop();
}
}
#[inline(always)]
fn klee_make_symbolic<T>(data: &mut T) {
asm::bkpt();
// unsafe { klee_bkpt(data as *mut T as *mut core::ffi::c_void) };
#[inline(never)]
pub fn klee_make_symbolic<T>(data: &mut T) {
// force llvm to consider data to be mutated
unsafe {
asm!("bkpt #0" : /* output */: /* input */ "r"(data): /* clobber */ : "volatile")
}
#[no_mangle]
pub extern "C" fn klee_bkpt(data: *mut core::ffi::c_void) {
//*data = 0;
asm::bkpt();
}
// pub fn taint() {
// unsafe {
// X = 73;
// }
// }
// #[no_mangle]
// pub extern "C" fn klee_bkpt(data: *mut core::ffi::c_void) {
// bkpt!();
// }
// extern "C" {
// pub fn klee_bkpt(ptr: *mut core::ffi::c_void, // pointer to the data
// );
// }
// cargo objdump --bin app --release -- -disassemble -no-show-raw-insn
// unsafe { asm!("mov $0,R15" : "=r"(r) ::: "volatile") }
// cargo objdump --example f401_ktest --release --features f4,inline-asm --target thumbv7em-none-eabihf -- -d
......@@ -46,8 +46,8 @@ fn main() -> ! {
// Breakpoint 1, main () at examples/f401_minimal.rs:14
// 14 #[entry]
//
// `main` is our "entry" point for the user applicaiton.
// It can be named anything by needs to annoted by #[entry].
// `main` is our "entry" point for the user application.
// It can be named anything but needs to be annotated by #[entry].
// At this point global variables have been initiated.
//
// The `openocd.gdb` script defines the startup procedure, where we have set
......@@ -95,19 +95,19 @@ fn main() -> ! {
//
// Some basic Rust.
// Use https://www.rust-lang.org/learn and in particular https://doc.rust-lang.org/book/.
// There is even a book on embeddded Rust available:
// There is even a book on embedded Rust available:
// https://rust-embedded.github.io/book/, it covers much more than we need here.
//
// Figure out a way to print the numbers 0..10 using a for loop.
//
// Figure out a way to store the numbers in 0..10 in a stacic (global) array using a loop.
// Figure out a way to store the numbers in 0..10 in a static (global) array using a loop.
//
// Print the resulting array (using a single println invocation, not a loop).
//
// (You may prototype the code directly on https://play.rust-lang.org/, and when it works
// backport that into the minimal example, and chack that it works the same)
// back-port that into the minimal example, and check that it works the same)
//
// These two small excersises should get you warmed up.
// These two small exercises should get you warmed up.
//
// Some reflections:
// Why does dealing with static variables require `unsafe`?
......
......@@ -27,7 +27,7 @@ fn main() {
// embedded Rust ecosystem.
//
// When analyzed by KLEE, we make the return value symbolic, thus each access
// givs a new unique symbol. Even if we write a value to it, the next read
// gives a new unique symbol. Even if we write a value to it, the next read
// will still be treated as a new symbol. That might be overly pessimistic
// but is a safe approximation for the worst case behavior of the hardware.
//
......@@ -104,7 +104,7 @@ fn main() {
// object 1: uint: 2
// object 1: text: ....
//
// The first read gives 1, the second 2, and we hit unreacable.
// The first read gives 1, the second 2, and we hit unreachable.
//
// We can replay the last test to gather more information:
// $ cargo klee --example register_test -r -k -g -v
......@@ -136,7 +136,9 @@ fn main() {
// (gdb) print read_2
// $2 = 2
//
// If this does not work its a gdb problem, try lldb the LLVM debugger
// If this does not work it's a gdb problem; try `lldb` (or `rust-lldb`), the LLVM debugger
// (debug info may not be completely compatible)
//
// https://lldb.llvm.org/use/map.html
//
// This is the way!
......@@ -7,7 +7,7 @@ edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
probe-rs = { path = "../../probe-rs/probe-rs", version = "0.3.0" }
probe-rs = { version = "0.3.0" }
ktest = { path = "../ktest", version = "0.1.0" }
failure = "0.1.6"
......
......@@ -19,7 +19,7 @@ use probe_rs::{
fn main() {
println!("read ktest file");
let ktest = read_ktest("test000001.ktest").unwrap();
let ktest = read_ktest("resources/test000001.ktest").unwrap();
println!("ktest {:?}", ktest);
let mut probe = open_probe();
......
// use std::collections::{HashMap, HashSet};
use runner::common::*;
fn main() {
let t1 = Task {
id: "T1".to_string(),
prio: 1,
deadline: 100,
inter_arrival: 100,
trace: Trace {
id: "T1".to_string(),
start: 0,
end: 10,
inner: vec![],
},
};
let t2 = Task {
id: "T2".to_string(),
prio: 2,
deadline: 200,
inter_arrival: 200,
trace: Trace {
id: "T2".to_string(),
start: 0,
end: 30,
inner: vec![
Trace {
id: "R1".to_string(),
start: 10,
end: 20,
inner: vec![Trace {
id: "R2".to_string(),
start: 12,
end: 16,
inner: vec![],
}],
},
Trace {
id: "R1".to_string(),
start: 22,
end: 28,
inner: vec![],
},
],
},
};
let t3 = Task {
id: "T3".to_string(),
prio: 3,
deadline: 50,
inter_arrival: 50,
trace: Trace {
id: "T3".to_string(),
start: 0,
end: 30,
inner: vec![Trace {
id: "R2".to_string(),
start: 10,
end: 20,
inner: vec![],
}],
},
};
// builds a vector of tasks t1, t2, t3
let tasks: Tasks = vec![t1, t2, t3];
println!("tasks {:?}", &tasks);
// println!("tot_util {}", tot_util(&tasks));
let (ip, tr) = pre_analysis(&tasks);
println!("ip: {:?}", ip);
println!("tr: {:?}", tr);
}
use std::collections::{HashMap, HashSet};
// common data structures
#[derive(Debug)]
pub struct Task {
pub id: String,
pub prio: u8,
pub deadline: u32,
pub inter_arrival: u32,
pub trace: Trace,
}
//#[derive(Debug, Clone)]
#[derive(Debug)]
pub struct Trace {
pub id: String,
pub start: u32,
pub end: u32,
pub inner: Vec<Trace>,
}
// useful types
// Our task set
pub type Tasks = Vec<Task>;
// A map from Task/Resource identifiers to priority
pub type IdPrio = HashMap<String, u8>;
// A map from Task identifiers to a set of Resource identifiers
pub type TaskResources = HashMap<String, HashSet<String>>;
// Derives the above maps from a set of tasks
pub fn pre_analysis(tasks: &Tasks) -> (IdPrio, TaskResources) {
let mut ip = HashMap::new();
let mut tr: TaskResources = HashMap::new();
for t in tasks {
update_prio(t.prio, &t.trace, &mut ip);
for i in &t.trace.inner {
update_tr(t.id.clone(), i, &mut tr);
}
}
(ip, tr)
}
// helper functions
fn update_prio(prio: u8, trace: &Trace, hm: &mut IdPrio) {
if let Some(old_prio) = hm.get(&trace.id) {
if prio > *old_prio {
hm.insert(trace.id.clone(), prio);
}
} else {
hm.insert(trace.id.clone(), prio);
}
for cs in &trace.inner {
update_prio(prio, cs, hm);
}
}
fn update_tr(s: String, trace: &Trace, trmap: &mut TaskResources) {
if let Some(seen) = trmap.get_mut(&s) {
seen.insert(trace.id.clone());
} else {
let mut hs = HashSet::new();
hs.insert(trace.id.clone());
trmap.insert(s.clone(), hs);
}
for trace in &trace.inner {
update_tr(s.clone(), trace, trmap);
}
}
use ktest::{read_ktest, KTEST};
pub mod common;
use probe_rs::{
config::registry::{Registry, SelectionStrategy},
coresight::memory::MI,
......