target
**/*.rs.bk
Cargo.lock
{
"cSpell.ignoreWords": [
"alpha",
"ask",
"break",
"can",
"cmake",
"controller",
"core",
"cstr",
"cyccnt",
"datatypes",
"f",
"for",
"grepit",
"in",
"is",
"joh",
"johan",
"ktest",
"libklee",
"libz",
"llvm",
"mkdir",
"nucleo",
"opel",
"present",
"primask",
"romeo",
"rtic",
"runtest",
"rustc",
"same",
"satisfiability",
"skylake",
"so",
"speedster",
"vcell",
"xx",
"you",
"znver"
]
}
# Changelog
Most recent changes:
- 2020-12-20 Home exams: Initial commit.
- 2020-12-14 Some clarifications:
- In `array.rs`, missing `not` in "(Hint, even if we don't use the result `b`, Rust do not optimize out the call, why?)"
- Small clarifications to other examples (see diff for details.)
- 2020-12-12 Update with `cyccnt` example.
- 2020-12-08 Update `README.md` for Ubuntu (like) system with `z3` dependency.
- 2020-12-08 Update `README.md` and `examples/get_sign` to reflect the recent `klee 2.2` and `aur` package.
[package]
name = "klee_tutorial"
version = "0.1.0"
authors = ["pln <Per Lindgren>"]
edition = "2018"
[dependencies.cstr_core]
version = "0.2.2"
default-features = false
[profile.dev]
panic = "abort"
incremental = false # better optimization
lto = true # better optimization
codegen-units = 1 # better optimization
[profile.release]
panic = "abort"
debug = true # better debugging
incremental = false # better optimization
lto = true # better optimization
codegen-units = 1 # better optimization
# Home Exam D7020E January 2021
## Grading:
3) Implement the response time analysis, and overall schedulability test. (Skeleton found under `srp_analysis`.)
4) Generate a report on the analysis results; this could be a generated html/xml file, or however you feel the results are best reported and visualized.
5) Perform measurements on a real RTIC application and analyze with your tool developed for grades 3/4.
Both Lab 4 and 5 are mandatory for passing the course. Grades are incremental, i.e. to get a 5 you must pass grades 3 and 4.
---
## Preparation
Start by reading 1, 2 and 3:
1) [A Stack-Based Resource Allocation Policy for Realtime Processes](https://www.math.unipd.it/~tullio/RTS/2009/Baker-1991.pdf), which refers to
2) [Stack-Based Scheduling of Realtime Processes](https://link.springer.com/content/pdf/10.1007/BF00365393.pdf), journal publication based on technical report [3] of the 1991 paper. The underlying model is the same in both papers.
3) [Rate Monotonic Analysis](http://www.di.unito.it/~bini/publications/2003BinButBut.pdf), especially equation 3 is of interest to us. (It should be familiar for the real-time systems course you have taken previously.)
## Procedure
Assignments for submission will be posted on Canvas.
The procedure follows that of Lab 4/5, i.e:
The files in `srp_analysis` give a skeleton for your solution(s). Implement your solutions for Grade 3, and optionally 4/5.
1) `Home-Exam Initial Submission`: Submit link(s) in Canvas (similarly to Lab 4/5). You will get two reviews and two repositories to review. If you have done work towards grade 5, you also submit a link to your `rtic_fxx_nucleo` repository.
2) `Home-Exam Review`: Perform and submit reviews.
3) `Home-Exam Final Submission`: Address raised issues and make final submission.
(Check the issues you made on fellow repositories and close those that have been addressed to your satisfaction, both regarding the labs and the home exam.)
As the final part of the examination you will individually present your solutions. Grades will be based on the fulfillment of the `Home Exam` grading goals. (We decide time/place for your presentation over Telegram.)
---
## Definitions
A task `t` is defined by:
- `P(t)` the priority of task `t`
- `D(t)` the deadline of task `t`
- `A(t)` the inter-arrival of task `t`
A resource `r` is defined by:
- `π(r)` the resource ceiling, computed as the highest priority of any task accessing `r`. SRP allows for dynamic priorities; in our case we have static priorities only.
For SRP based analysis we assume a task to perform/execute a finite sequence of operations/instructions (a.k.a. run-to-end or run-to-completion semantics). During execution, a task can claim resources `Rj`... in a nested fashion. Sequential re-claim of resources is allowed, but NOT re-claiming an already held resource in a nested fashion (since that would violate the Rust memory aliasing rules).
E.g., a possible trace for a task can look like:
`[t:...[r1:...[r2:...]...]...[r2:...]...]`, where `[r:...]` denotes a critical section of task `t` holding the resource `r`. In this case the task starts, at some point claims `r1` and, inside that critical section, claims `r2` (a nested claim); it then exits `r2`, exits `r1`, continues executing, enters a critical section on `r2`, and finally runs to completion.
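For illustration, such a trace can be encoded with the `Trace` data structure from `common.rs` (see `main.rs` for a complete task set); the identifiers, timestamps, and the `example_trace` helper below are made up for the example:

```rust
// Encoding of [t: ... [r1: ... [r2: ...] ...] ... [r2: ...] ...] using the
// `Trace` type from common.rs; identifiers and timestamps are made up.
fn example_trace() -> Trace {
    Trace {
        id: "t".to_string(),
        start: 0,
        end: 40,
        inner: vec![
            Trace {
                id: "r1".to_string(),
                start: 5,
                end: 20,
                inner: vec![Trace {
                    id: "r2".to_string(),
                    start: 10,
                    end: 15,
                    inner: vec![],
                }],
            },
            Trace {
                id: "r2".to_string(),
                start: 25,
                end: 30,
                inner: vec![],
            },
        ],
    }
}
```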
## Grade 3
Analysis:
### 1. Total CPU utilization
Worst Case Execution Time (WCET) for tasks and critical sections
In general, determining the WCET is rather tricky. In our case we adopt a measurement-based technique that spans all feasible paths of the task. Tests triggering the execution paths are automatically generated by symbolic execution. To correctly take concurrency into account, resource state is treated symbolically; the resource is given a fresh (new) symbolic value for each critical section. Inside the critical section we are ensured exclusive access (and thus the value can be further constrained inside the critical section).
We model hardware (peripheral register accesses) as shared resources (shared between your application and the environment). As such, each *read* generates a fresh (new) symbolic value, while write operations have no side effect.
For now, we just assume we have complete WCET information, in terms of `start` and `end` time-stamps (`u32`) for each section `[_: ... ]`. We represent that by the `Task` and `Trace` data structures in `common.rs`.
### Total CPU request (or total load factor)
Each task `t` has a WCET `C(t)` and a given (assumed) inter-arrival time `A(t)`. The CPU request (or load) inferred by a task is `L(t)` = `C(t)`/`A(t)`. Ask yourself, what is the consequence of `C(t)` > `A(t)`?
We can compute the total CPU request (or load factor) as `Ltot` = sum(`L(t)`), over all tasks `t` in the task set.
Ask yourself, what is the consequence of `Ltot` > 1?
Implement a function taking a `Vec<Task>` and returning the load factor. (See `common.rs` for suitable data structures and inspiration; a sketch follows below.)
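As a starting point, a minimal sketch (assuming the `Task`/`Tasks` types from `common.rs`; the `wcet` helper is a hypothetical name, taking `C(t)` as the span of the outermost trace):

```rust
// Hypothetical helper: C(t), the WCET of a task, taken as the span of its outer trace.
fn wcet(task: &Task) -> u32 {
    task.trace.end - task.trace.start
}

// Ltot = sum of C(t)/A(t) over all tasks, computed in floating point.
fn load_factor(tasks: &Tasks) -> f32 {
    tasks
        .iter()
        .map(|t| wcet(t) as f32 / t.inter_arrival as f32)
        .sum()
}
```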
### Response time (simple over-approximation)
Under SRP response time can be computed by equation 7.22 in [Hard Real-Time Computing Systems](
https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/Hard%20Real-Time%20Computing%20Systems_%20Predictable%20Scheduling%20Algorithms%20and%20Applications%20%283rd%20ed.%29%20%5BButtazzo%202011-09-15%5D.pdf).
In general, the response time for a task `t` is computed as:
- `R(t)` = `B(t)` + `C(t)` + `I(t)`, where
- `B(t)` is the blocking time for task `t`, and
- `I(t)` is the interference (preemptions) to task `t`
For a task set to be schedulable under SRP we have two requirements:
- `Ltot` < 1
- `R(t)` < `D(t)`, for all tasks. (`R(t)` > `D(t)` implies a deadline miss.)
#### Blocking
SRP brings the outstanding property of single blocking. In words, a task `t` can be blocked at most once, by the maximal critical section during which a lower-priority task `l` (`P(l)` < `P(t)`) holds a resource `l_r` with a ceiling `π(l_r)` equal to or higher than the priority of `t`.
- `B(t)` = max(`C(l_r)`), where `P(l)`< `P(t)`, `π(l_r) >= P(t)`
Implement a function that takes a `Task` and returns the corresponding blocking time.
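A minimal sketch, assuming the resource ceilings are taken from the `IdPrio` map produced by `pre_analysis` in `common.rs`, and reusing the hypothetical `wcet` helper above; the signatures are illustrative, not a prescribed API:

```rust
// Longest critical section in `trace` on a resource with ceiling >= `prio`.
fn max_cs(trace: &Trace, prio: u8, ip: &IdPrio) -> u32 {
    let mut max = 0;
    for cs in &trace.inner {
        if let Some(ceiling) = ip.get(&cs.id) {
            if *ceiling >= prio {
                max = max.max(cs.end - cs.start);
            }
        }
        // also consider nested critical sections
        max = max.max(max_cs(cs, prio, ip));
    }
    max
}

// B(t): the longest critical section of any lower-priority task on a resource
// with ceiling >= P(t).
fn blocking(task: &Task, tasks: &Tasks, ip: &IdPrio) -> u32 {
    tasks
        .iter()
        .filter(|l| l.prio < task.prio)
        .map(|l| max_cs(&l.trace, task.prio, ip))
        .max()
        .unwrap_or(0)
}
```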
#### Preemptions
A task is exposed to interference (preemptions) by higher priority tasks. Intuitively, during the *busy-period* `Bp(t)` of a task `t`, each higher-priority task `h` (`P(h)` > `P(t)`) may preempt us ceiling(`Bp(t)`/`A(h)`) times (i.e., `Bp(t)`/`A(h)` rounded upwards).
- `I(t)` = sum(`C(h)` * ceiling(`Bp(t)`/`A(h)`)), forall tasks `h`, `P(h)` > `P(t)`, where
- `Bp(t)` is the *busy-period*
We can over-approximate the *busy-period* as `Bp(t)` = `D(t)` (assuming the worst allowed *busy-period*).
As a technical detail: for the scheduling of tasks of the same priority, the original work on SRP adopted a FIFO model (first arrived, first served). Under Rust RTIC, tasks are bound to hardware interrupts, so we can exploit the underlying hardware to do the actual scheduling for us (with zero overhead). However, the interrupt hardware schedules interrupts of the same priority by their index in the vector table. For our case we can make a safe over-approximation by considering preemptions from tasks with the SAME or higher priority (`P(h)` >= `P(t)`).
Implement a function that takes a `Task` and returns the corresponding preemption time.
Now make a function that computes the response time for a `Task`, by combining `B(t)`, `C(t)`, and `I(t)`.
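A minimal sketch of the approximate analysis, reusing the hypothetical helpers above; the integer expression `(a + b - 1) / b` performs the upward rounding:

```rust
// I(t): interference from higher-priority tasks during the busy-period Bp(t).
fn preemption(task: &Task, tasks: &Tasks, busy_period: u32) -> u32 {
    tasks
        .iter()
        // use >= (excluding `task` itself) for the safe over-approximation
        .filter(|h| h.prio > task.prio)
        .map(|h| {
            // number of preemptions: Bp(t)/A(h) rounded upwards
            let n = (busy_period + h.inter_arrival - 1) / h.inter_arrival;
            n * wcet(h)
        })
        .sum()
}

// R(t) = B(t) + C(t) + I(t), with the busy-period over-approximated as Bp(t) = D(t).
fn response_time(task: &Task, tasks: &Tasks, ip: &IdPrio) -> u32 {
    blocking(task, tasks, ip) + wcet(task) + preemption(task, tasks, task.deadline)
}
```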
Finally, make a function that iterates over the task set and returns a vector containing the entries
`(Task, R(t), C(t), B(t), I(t))`. Just a simple `println!` of that vector gives the essential information on the analysis.
#### Preemptions revisited
In equation `7.22` the *busy-period* is computed by a recurrence equation.
Implement the recurrence relation (equation), starting from the base case `B(t) + C(t)`. The recurrence might diverge in case `Bp(t)` > `D(t)`; this is a pathological case where the task becomes non-schedulable, so terminate the recurrence (with an error indicating a deadline miss). You might want to indicate that a non-feasible response time has been reached by using the `Result<u32, ()>` type, or by some other means, e.g., `Option<u32>`.
You can let your `preemption` function take a parameter indicating if the exact solution or approximation should be used.
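A minimal sketch of the exact computation, reusing the hypothetical helpers above; the recurrence is iterated to a fixed point and a deadline miss is reported as an error:

```rust
// Exact response time via the busy-period recurrence, starting from B(t) + C(t).
fn response_time_exact(task: &Task, tasks: &Tasks, ip: &IdPrio) -> Result<u32, ()> {
    let b = blocking(task, tasks, ip);
    let c = wcet(task);
    let mut bp = b + c;
    loop {
        let new_bp = b + c + preemption(task, tasks, bp);
        if new_bp > task.deadline {
            return Err(()); // diverged past the deadline: deadline miss
        }
        if new_bp == bp {
            return Ok(bp); // fixed point reached
        }
        bp = new_bp;
    }
}
```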
---
## Grade 4
Your tool should be usable/documented and present a comprehensive "view" of the analysis results. Think about the usability of the tool and write a short description of its use. (You may use `StructOpt` for command-line parsing, making your application work like a "professional" tool; a fun and easy exercise.) This way you can easily implement `-h/--help` documenting the usage.
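A minimal sketch of such a command-line interface, assuming the `structopt` crate is added to `Cargo.toml`; the option names are made up for illustration:

```rust
use std::path::PathBuf;
use structopt::StructOpt;

/// SRP-based schedulability analysis.
#[derive(StructOpt, Debug)]
#[structopt(name = "srp_analysis")]
struct Opt {
    /// Path to the task set (e.g., a file read via `serde`)
    #[structopt(parse(from_os_str))]
    tasks: PathBuf,

    /// Use the exact (recurrence based) busy-period instead of the approximation
    #[structopt(long)]
    exact: bool,
}

fn main() {
    // `-h`/`--help` is generated automatically from the doc comments above
    let opt = Opt::from_args();
    println!("{:?}", opt);
}
```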
---
## Grade 5
Pull the latest [rtic_fxx_nucleo](https://gitlab.henriktjader.com/pln/rtic_f4xx_nucleo). Use your tool (that you developed for grades 3, 4) to analyse the task/resource set of `timing_exam.rs` based on the corresponding WCET data (per task/critical section). (See the file `examples/timing_exam.rs` for further information.) To import task-set/wcet, look at Rust `serde`.
## Resources
`common.rs` gives the basic data structures, and some helper functions.
`main.rs` gives an example of how `Tasks` can be manually constructed, which is very helpful for your development when getting started. Later you will likely want file input using `serde`; a sketch follows below.
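A minimal sketch of such file input, assuming `serde` (with the `derive` feature) and `serde_json` are added as dependencies, and that `Task`/`Trace` in `common.rs` are annotated with `#[derive(Serialize, Deserialize)]`; the JSON format is just an example:

```rust
use std::error::Error;
use std::fs;

// Read a task set from a JSON file into the `Tasks` type from common.rs.
fn load_tasks(path: &str) -> Result<Tasks, Box<dyn Error>> {
    let data = fs::read_to_string(path)?;
    let tasks: Tasks = serde_json::from_str(&data)?;
    Ok(tasks)
}
```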
## Tips
When working with Rust, the standard library documentation [std](https://doc.rust-lang.org/std/) is excellent and easy to search (just press S). In most cases you will find examples of intended use, and cross-references to related data types are just a click away.
Use the `main` example to get started. Initially you may simplify it further by reducing the number of tasks and/or resources. Make sure you understand the helper functions given in `common.rs` (your code will likely look quite similar). Along the way you might want to add further `common` types and helper functions to streamline your development.
Generate your own task sets to make sure your code works in the general case, not only for the `Tasks` provided. Heads up: I will expose your code to other, more complex task sets.
---
## Learning outcomes, Robust and Energy Efficient Real-Time Systems
In this part of the course, we have covered:
- Embedded Programming using the RTIC framework. General tracing and debugging. Cycle accurate timing measurements.
- Software robustness. We have adopted Rust and Symbolic Execution to achieve guaranteed memory safety and defined behavior (panic free execution). With this at hand, we have a strong (and theoretically underpinned) foundation for improved robustness and reliability proven at compile time.
- Real-Time Scheduling and Analysis. SRP provides an execution model and resource management policy with the outstanding properties of race- and deadlock-free execution, single blocking, and stack sharing. Our Rust RTIC framework provides a correct-by-construction implementation of SRP, exploiting zero-cost (software) abstractions. Under Rust RTIC, resource management and scheduling are done directly by the hardware, which allows for efficiency (zero overhead) and predictability.
The SRP model is amenable to static analysis, which you have now internalized through an actual implementation of the theoretical foundations. We have also covered methods for Worst Case Execution Time (WCET) analysis by cycle-accurate measurements, which in combination with Symbolic Execution for test-case generation allows for a high degree of automation.
- Energy Consumption is roughly proportional to the supply voltage (static leakage dissipation) and to the switching frequency (dynamic/switching activity dissipation). In the case of embedded systems, low-power modes allow parts of the system to be powered down while retaining sufficient functionality to wake on external (and/or internal) events. In sleep mode, both static and dynamic power dissipation are minimized, typically to the order of µA (in comparison to mA in run mode).
Rust RTIC adopts an event-driven approach, allowing the system to automatically sleep in case no further tasks are eligible for scheduling. Moreover, leveraging the zero-cost abstractions in Rust and the guarantees provided by the analysis framework, we do not need to sacrifice correctness/robustness and reliability in order to obtain highly efficient executables.
Robust and Energy Efficient Real-Time Systems for real. This is the Way!
# KLEE tutorial
- Installing and testing klee.
- Manual use of the KLEE tools for C and Rust programs.
- Simplified usage adopting `klee-sys` and `cargo klee`.
## Dependencies
Under the hood, klee uses a `sat` solver for First Order Logic (FOL).
So first install `z3` on your system (then klee will use that instead of the default solver).
Later, you will also need to have `gdb` installed (under Arch, see below).
### Arch Linux
Under arch with `yay` installed, simply:
```shell
> yay -S z3
```
```shell
> yay -S gdb
```
### Ubuntu (like) systems
```shell
> sudo apt install z3 libz3-4 libz3-cil libz3-dev libz3-java libz3-jni libz3-ocaml-de
```
## Install KLEE from source
The instructions recommend LLVM 9, but the current master (2.2-pre) supports LLVM 11 (which is what you would have under arch as of 2020-12-07).
Under arch you can simply:
```shell
> git clone https://github.com/klee/klee.git
```
If your build fails at some point, consult the docs [building klee](https://klee.github.io/build-llvm9/).
## Install KLEE from `aur` (arch linux)
The `aur` package [klee](https://aur.archlinux.org/packages/klee/) installs KLEE in `/usr/bin` (binaries), `/usr/include` (C include files), and `/usr/lib` (libraries). These are the default system folders, which makes it easier to compile, link, and run the KLEE tools.
```shell
> yay -S klee
> klee -version
KLEE 2.2 (https://klee.github.io)
Build mode: Release (Asserts: ON)
Build revision: 5719d2803e93252e5d4613f43afc7db0d72332f1
LLVM (http://llvm.org/):
LLVM version 11.0.0
Optimized build.
Default target: x86_64-pc-linux-gnu
Host CPU: skylake
```
Notice: if you have previously installed from source but want to use the `aur` package instead, you should remove the source-installed files found in `/usr/local/bin`, `/usr/local/include` and `/usr/local/lib` (please make sure that you only remove the KLEE related files).
---
## Testing a small C function
See the `examples/get_sign` file.
Here you learn:
- how to generate LLVM-IR
- how to run KLEE
- how to inspect generated test
- how to replay test cases
## Testing a small Rust function
See the `examples/get_sign.rs` file.
Here you learn:
- how to generate LLVM-IR from a Rust program
- how to run KLEE on Rust code
- how to replay test cases for Rust programs
---
## `klee-sys` and `cargo-klee`
See the `cargo-klee-examples` folder.
Here you learn:
- an easy way to compile and run KLEE on Rust code
- an easy way to replay test cases for Rust programs
- an easy way to find "hard to find" errors in embedded code
---
## Related files and their locations and attributes
- `vcell`: KLEE-enabled `vcell`
- version: 0.1.2
- git: `https://github.com/perlindgren/vcell/blob/trustit`
- branch: `trustit`
- features:
- `klee-analysis`
- `get` results in a new (fresh) symbolic object.
- `set` is suppressed (no side effect).
- `klee-sys`: low-level bindings for KLEE.
- version: 0.2.0
- git: `https://gitlab.henriktjader.com/pln/klee-sys`
- features:
- `klee-analysis`
- KLEE API binds to external functions
- `klee-replay`
- The KLEE API `klee_make_symbolic` binds to an inline assembly breakpoint. This allows a debugger to catch the halted CPU and insert the test case into the location of the symbolic object. Other KLEE API calls bind to Rust `panic!`.
- `cargo-klee`: `cargo` sub-command.
- version: 0.3.0
- git: `https://gitlab.henriktjader.com/pln/cargo-klee`
- `panic_klee`: Binds panic handler to external `abort`
- version: 0.1.0
- git: `https://gitlab.henriktjader.com/pln/panic-klee`
---
## Why KLEE on Rust
Out of the box, Rust provides superior memory safety and guarantees of well-defined behavior. However, there are cases where the Rust compiler (rustc) cannot statically (at compile time) ensure these guarantees. For such cases (e.g., division by zero, slice/array indexing, etc.) Rust injects code for run-time verification that emits a `panic!` (with an appropriate error message). While still being safe (the code never runs into memory unsafety or undefined behavior), this limits the reliability (availability) of the system (as it is very hard to recover from a `panic!`). In practice, the developer would typically reboot/reset the system (and store some error code/trace for post-mortem analysis).
With KLEE we can do better! We bind Rust `panic!` to KLEE `abort` (a path termination). For all reachable `panic!`s, KLEE will provide a concrete test, which we can replay and address in our source code. When done, we have a proof of `panic` freedom and thus defined behavior, with a huge impact on the reliability (and security) of the system at hand.
/target
**/*.rs.bk
Cargo.lock
[package]
name = "cargo_klee_examples"
version = "0.1.0"
authors = ["pln <Per Lindgren>"]
edition = "2018"
# [dependencies.stm32f4]
# version = "0.12.1"
# features = ["stm32f401"]
[dependencies.klee-sys]
git = "https://gitlab.henriktjader.com/pln/klee-sys.git"
version = "0.2.0"
[dependencies.panic-klee]
git = "https://gitlab.henriktjader.com/pln/panic-klee.git"
version = "0.1.0"
[dependencies.cortex-m]
version = "0.6.1"
[dependencies]
vcell = "0.1.2"
[patch.crates-io]
cortex-m = { git = "https://github.com/perlindgren/cortex-m.git", branch = "trustit" }
vcell = { git = "https://github.com/perlindgren/vcell.git", branch = "trustit" }
[profile.dev]
panic = "abort"
incremental = false # better optimization
lto = true # better optimization
codegen-units = 1 # better optimization
[profile.release]
panic = "abort"
debug = true # better debugging
incremental = false # better optimization
lto = true # better optimization
codegen-units = 1 # better optimization
[features]
klee-analysis = [
"klee-sys/klee-analysis",
"cortex-m/klee-analysis",
"vcell/klee-analysis"
]
# Examples on `cargo-klee` usage
## Install
Clone the [cargo-klee](https://gitlab.henriktjader.com/pln/cargo-klee).
Follow install instructions.
## Examples
- `examples/get_sign.rs`, the same simple example again, but now using the `cargo klee` sub-command. The example also uses the `klee-sys` bindings.
// get_sign.rs
// Showcase how we can automatically interface Rust to KLEE
//
#![no_std]
#![no_main]
use klee_sys::klee_make_symbolic;
use panic_klee as _;
fn sum_first_elements(arr: &[u8], index: usize) -> u8 {
let mut acc = 0;
for i in 0..index {
acc += arr[i as usize];
}
acc
}
#[no_mangle]
fn main() {
let arr = [0u8; 8];
let mut i: usize = 0;
klee_make_symbolic!(&mut i, "i");
let b = sum_first_elements(&arr, i);
}
// A) Array indexing is tricky to analyse at compile time.
// Thus Rust (rustc) will inject code for run-time verification
// `panic`ing on index out of range.
//
// (Compare to C/C++, where a "buffer overflow" might pass unnoticed
// causing all sorts of problems.)
//
// Compare the test generated in release `--release` (optimized) to
// test generated in debug/dev mode (un-optimized).
//
// Try to explain in your own words the difference and why?
// (Hint, even if we don't use the result `b`, Rust does not optimize out the call, why?)
//
// [your answer here]
//
// B) Fix the code so that you don't get an error.
// (It should still compute the sum of the n first elements
// and return the sum of the whole array if the index is larger than the size/length).
// The fix should be in the function (not on the caller side).
//
// [Git commit "B"]
//
// C) In the example, the array is holding only zeroes.
// Figure out a way to make the content symbolic.
// (Hint, declare as mutable, iterate and set each element symbolic.)
// (Hint2, it seems that you can set the whole array symbolic directly
// without messing with an iterator, super!!!.)
//
// [Git commit "C"]
//
// D) Analyze the example using KLEE. Now a new (maybe unexpected) error should occur!
// Notice, the error occurs only in `debug/dev` builds.
//
// Explain what caused the error.
//
// [your answer here]
//
// E) Make a sensible fix to the code.
// Motivate your choice.
//
// [your answer here]
//
// [Git commit "D"]
//
// F) Learning outcome.
// 70% of Microsoft security updates over the last decade are directly related to
// memory safety.
//
// Explain in your own words what Microsoft would gain by using Rust.
//
// [your answer here]
//
// Explain in your own words what Microsoft would gain by using `cargo klee`
// on their Rust code.
//
// And YES, Microsoft is rewriting core system functionality in Rust as we speak!
// cyccnt.rs
#![no_std]
#![no_main]
extern crate cortex_m;
extern crate panic_klee;
#[no_mangle]
fn main() {
let mut core = cortex_m::Peripherals::take().unwrap();
// core.DCB.enable_trace();
// core.DWT.enable_cycle_counter();
let start: u32 = core.DWT.cyccnt.read();
// some code to measure
// ...
let end = core.DWT.cyccnt.read();
let _time = end - start;
}
// A) Running KLEE on embedded code:
//
// Let's now revisit the time measure example from `rtic_f4xx_nucleo`.
// For simplicity we omit the `rtic` specifics.
//
// `cortex-m` is an API to the ARM Cortex core peripherals.
// (Under RTIC it was passed to the `init::Context` as `core`.)
//
// In this example we access it directly.
// Since Rust forbids mutable aliasing, we cannot `take` it twice,
// but more about that later...
//
// Let's focus on the access to the core peripherals, and
// the relation to the `klee-analysis` feature.
//
// In the Rust embedded ecosystem, all hardware registers
// are accessed through `vcell`, which ensures `volatile` access.
// (This means that reads and writes go to memory.)
//
// We override `vcell` with a custom implementation:
//
// Looking at the `Cargo.toml` we find:
//
// [patch.crates-io]
// cortex-m = { git = "https://github.com/perlindgren/cortex-m.git", branch = "trustit" }
// vcell = { git = "https://github.com/perlindgren/vcell.git", branch = "trustit" }
//
// and
//
// [features]
// klee-analysis = [
// "klee-sys/klee-analysis",
// "cortex-m/klee-analysis",
// "vcell/klee-analysis"
// ]
//
// The `klee-analysis` feature is propagated to `vcell`.
// When active:
// - writes are suppressed (they have no effect), and
// - reads generate a fresh (new) unique symbolic value.
//
// In the code above Line 15, we get one symbolic value for `DWT.cyccnt.read()`,
// and in Line 20, we get another unique symbolic value.
//
// Among other things, `cortex-m` provides an API to read/write the `PRIMASK`
// core register. (This controls the global interrupt enable/disable.)
//
// We override the `cortex-m` crate, and propagate the `klee-analysis` feature.
// Similarly to `vcell`, when the `klee-analysis` feature is active:
// - writes to PRIMASK are suppressed, and
// - reads generate a fresh (new) unique value.
//
// In the code above Line 11, `cortex-m` executes the `take()` operation
// in a "critical section". In more detail:
// It first reads the `PRIMASK`; if it is false (we are not already in a critical section),
// it sets the `PRIMASK`.
//
// Now let's run the analysis:
//
// > cargo klee --example cyccnt --release
//
// KLEE: Using Z3 solver backend
// KLEE: WARNING: undefined reference to function: rust_eh_personality
//
// KLEE: done: total instructions = 38
// KLEE: done: completed paths = 1
// KLEE: done: generated tests = 1
//
// Only one path was found (no error occurred along the path).
//
// > ktest-tool target/release/examples/klee-last/test000001.ktest
//
// ktest file : 'target/release/examples/klee-last/test000001.ktest'
// args : ['/home/pln/courses/d7020e/klee_tutorial/cargo_klee_examples/target/release/examples/cyccnt-012d42640c9faf9d.ll']
// num objects: 3
// object 0: name: 'PRIMASK'
// object 0: size: 4
// object 0: data: b'\x00\x00\x00\x00'
// object 0: hex : 0x00000000
// object 0: int : 0
// object 0: uint: 0
// object 0: text: ....
// object 1: name: 'vcell'
// object 1: size: 4
// object 1: data: b'\x00\x00\x00\x00'
// object 1: hex : 0x00000000
// object 1: int : 0
// object 1: uint: 0
// object 1: text: ....
// object 2: name: 'vcell'
// object 2: size: 4
// object 2: data: b'\x00\x00\x00\x00'
// object 2: hex : 0x00000000
// object 2: int : 0
// object 2: uint: 0
// object 2: text: ....
//
// As expected we read the `PRIMASK` and then read `cyccnt` twice.
//
// KLEE does not know which register was read, just that a `vcell`
// was accessed.
//
// We can replay the test:
//
// > cargo klee --example cyccnt --release --replay --gdb
// ...
// Type "apropos word" to search for commands related to "word"...
// Reading symbols from cyccnt.replay...
// (gdb) break main
// Breakpoint 1 at 0x555555555127: file examples/cyccnt.rs, line 11.
// (gdb) set environment KTEST_FILE=klee-last/test000001.ktest
// (gdb) run
// Breakpoint 1, klee_sys::lib_klee_analysis::klee_make_symbolic<u32> (t=0x7fffffffd75c,
// name=<optimized out>)
// at /home/pln/.cargo/git/checkouts/klee-sys-7ee2aa8a1a6bbc46/de8fd90/src/lib_klee_analysis.rs:19
// 19 crate::ll::klee_make_symbolic(
//
// Nice! The very first thing that happens is that we get a symbolic value
// (this is the `PRIMASK`)
//
// (gdb) next
// cortex_m::interrupt::free<closure-0,core::option::Option<cortex_m::peripheral::Peripherals>> (f=...)
// at /home/pln/.cargo/git/checkouts/cortex-m-514878a7410beb63/07e7543/src/interrupt.rs:75
// 75 let r = f(unsafe { &CriticalSection::new() });
//
// This is the critical section mentioned above:
//
// (gdb) next
// cyccnt::main () at examples/cyccnt.rs:15
// 15 let start: u32 = core.DWT.cyccnt.read();
//
// (gdb) next
// cyccnt::main () at examples/cyccnt.rs:15
// 20 let end = core.DWT.cyccnt.read();
//
// (gdb) next
// 23 }
//
// Pretty much what we could expect.
//
// Now try to `run` again and print the values of `start`/`end`.
//
// What do you get, and why?
//
// [your answer here]
//
// As you should have seen, this was not particularly informative, right?
//
// B) In debug/dev mode
//
// Let's now run the analysis in debug mode:
//
// > cargo klee --example cyccnt
//
// KLEE: ERROR: /home/pln/.cargo/git/checkouts/panic-klee-aa8d015442188497/3b0c897/src/lib.rs:8: abort failure
// KLEE: NOTE: now ignoring this error at this location
//
// KLEE: done: total instructions = 761
// KLEE: done: completed paths = 4
// KLEE: done: generated tests = 3
//
// WHUT!!!!!
// An error occurred, this was unexpected, right?
//
// Try to figure out what caused the error.
//
// Hint, look at the generated tests and open the `testX.abort.err` file.
// (Each failing test comes with a corresponding error file.)
//
// The error file contains a backtrace.
// Replay the failing test with a breakpoint set to the failing operation.
// Print the values of `start` and `end`, when hitting the breakpoint.
//
// Value of `start`.
//
// [your answer here]
//
// Value of `end`
//
// [your answer here]
//
// Why do these values cause an error in a debug/dev build but not in a release build?
//
// [your answer here]
//
// C) Fix the problem!
//
// Come up with a solution to the problem, and alter the source code accordingly.
//
// Verify that the program now passes without errors in both debug/dev and
// release builds.
//
// There are numerous ways to solve the problem.
// Argue for your solution in your own words.
//
// [your answer here]
//
// D) Learning outcomes and major takeaways.
//
// In this exercise, you have validated embedded Rust code using KLEE
// and found a typical "hard to find" error.
//
// Why are such errors so hard to find?
//
// Think about it?
//
// What is the condition for triggering the error?
//
// What is the behavior of CYCCNT?
//
// Assume the MCU is running at 8MHz.
//
// How long would the code at lines 16/17 have to run in order to trigger the error?
//
// [your answer here]
//
// Of course this is a contrived example, and may not occur in practice.
// But, it represents a class of problems/errors/bugs that is
// extremely hard to find by testing.
//
// Ask Jan Van Deventer about the anti-lock brakes he developed
// for Alfa Romeo! (His nick was Fender Bender at the time.)
//
// The same brake controller is present in the Opel Speedster, so you can
// ask Johan at Grepit how it feels when you have zero braking
// after hitting a bump. (He has now bypassed the anti-lock.)
// get_sign.rs
// Showcase how we can automatically interface Rust to KLEE
//
#![no_std]
#![no_main]
use klee_sys::klee_make_symbolic;
use panic_klee as _;
fn get_sign(x: i32) -> i32 {
if x == 0 {
return 0;
}
if x < 0 {
return -1;
} else {
return 1;
}
}
#[no_mangle]
fn main() {
let mut a: i32 = 0;
klee_make_symbolic!(&mut a, "a");
get_sign(a);
}
// A) Compile a Rust program for KLEE analysis.
//
// The bindings to klee are abstracted into `klee-sys`.
// The panic handler into `panic_klee` (essentially KLEE `abort`).
// The tedious build process into `cargo klee`.
//
// > cargo klee -help
//
// The default is to generate LLVM-IR and run KLEE on the generated LLVM-IR.
// This enables the `klee-analysis` feature of `klee-sys`.
//
// Using the `-v` option we see verbose output.
//
// > cargo klee -v --example get_sign
// "cargo" "rustc" "-v" "--features" "klee-analysis" "--example" "get_sign" "--color=always" "--" "-C" "linker=true" "-C" "lto" "--emit=llvm-ir"
// ...
//
// KLEE: Using Z3 solver backend
//
// KLEE: done: total instructions = 92
// KLEE: done: completed paths = 3
// KLEE: done: generated tests = 3
// This should look familiar from the build you did in the previous exercise.
//
// Now locate the generated tests. Give the relative path to `klee-last`.
//
// [your answer here]
//
// B) Replay test cases
//
// By giving the `-r` (or `--replay`), you can easily replay the test cases:
//
// > cargo klee -v -r -g --example get_sign
//
// (gdb) set environment KTEST_FILE=klee-last/test000001.ktest
// (gdb) break get_sign
// (gdb) run
//
// What path of the program does this test case trigger?
//
// [your answer here]
//
// Just out of curiosity, you may test the other test cases as well...
//
// B) UX/IX
//
// Compare to the "stock KLEE for C" experience.
//
// How does `klee-sys` and `cargo-klee` score on a 0..5 scale?
//
// [your answer here]
//
// If below 5, what could be done to improve the UX/IX?
//
// [your answer here]
//
// C) Inner workings.
//
// Have a look on the source code of `klee-sys`.
// You will find that it is very similar to the code you saw in the
// previous exercise.
//
// Try to follow the `klee-analysis` feature.
// What modules in `klee-sys` does this feature enable?
//
// [your answer here]
//
// Have a look at the source code of `cargo klee`.
// (The actual sub-command is in the folder `cargo-klee`.)
//
// Try to figure out how the correct `.ll` file is determined.
// Which one will it pick, and why?
//
// [your answer here]
//
// Actually this is one of the "bad seeds" in the internal design.
// Seems there is no "stable" way of controlling/retrieving the "metadata"
// (i.e., the hashing). Ideas welcome!
// > ls /usr/local/lib
// klee libkleeRuntest.so libkleeRuntest.so.1.0
//
// If you installed Klee using the package manager
// the path might be different:
//
// Using `aur` (arch), files are stored in the system default
// folders, `/usr/include` and `/usr/lib`.
//
// If those are ok, then you can compile for replay:
//
// > clang -I /usr/local/include/ -L /usr/local/lib get_sign.c -l kleeRuntest
//
// Or just
// > clang get_sign.c -l kleeRuntest
//
// If the `include` and `lib` paths are the system defaults.
//
// To replay the first test:
//
// We need to add the library path so it can be dynamically loaded:
// Depending on your shell this might look different:
//
// Under `bash` (and `bash` like shells)
// > export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH
//
// Under `fish`
// > set -x LD_LIBRARY_PATH /usr/local/lib/:$LD_LIBRARY_PATH
//
// Once again, if using the system default system folders
// you don't need to add anything to `LD_LIBRARY_PATH`.
// //
// > KTEST_FILE=klee-last/test000001.ktest ./a.out
//
//
// [your answer here]
//
// Why not? Confer to shell error codes:
//
// [your answer here]
//
// First build it with debug symbols (`-g`).
// > clang -g -I /usr/local/include/ -L /usr/local/lib get_sign.c -l kleeRuntest
//
// Or if using system defaults:
// > clang -g get_sign.c -l kleeRuntest
//
// Then start `gdb`:
// > KTEST_FILE=klee-last/test000001.ktest gdb ./a.out
// (gdb) break get_sign
// get_sign.rs
// Showcase how we can manually interface Rust to KLEE
//
#![no_std]
#![no_main]
fn get_sign(x: i32) -> i32 {
if x == 0 {
return 0;
}
if x < 0 {
return -1;
} else {
return 1;
}
}
#[no_mangle]
fn main() {
let mut a: i32 = 0;
klee_make_symbolic!(&mut a, "a");
get_sign(a);
}
// KLEE bindings
#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
// abort symbol caught by LLVM-KLEE
unsafe { ll::abort() }
}
#[inline(always)]
pub fn klee_make_symbolic<T>(t: &mut T, name: &'static cstr_core::CStr) {
unsafe {
crate::ll::klee_make_symbolic(
t as *mut T as *mut core::ffi::c_void,
core::mem::size_of::<T>(),
name.as_ptr() as *const cstr_core::c_char,
)
}
}
#[macro_export]
macro_rules! klee_make_symbolic {
(&mut $id:expr, $name:expr) => {
klee_make_symbolic(&mut $id, unsafe {
cstr_core::CStr::from_bytes_with_nul_unchecked(concat!($name, "\0").as_bytes())
})
};
}
mod ll {
//! Low level bindings
extern "C" {
pub fn abort() -> !;
pub fn klee_make_symbolic(
ptr: *mut core::ffi::c_void, // pointer to the data
size: usize, // size (in bytes)
name: *const cstr_core::c_char, // pointer to zero-terminated C string
);
}
}
// A) Compile a Rust program for KLEE analysis.
//
// In order to run KLEE analysis we need to generate LLVM-IR.
//
// > cargo rustc -v --example get_sign -- -C linker=true -C lto --emit=llvm-ir
//
// Here `rustc` tells cargo that we will call the compiler with arguments.
//
// `-v` that we want verbose output.
//
// Options after `--` are passed directly to the compiler (`rustc`).
//
// `-C linker=true` binds the linker to the `true` binary (a program that just returns).
// This way we prevent the compiler from linking an actual binary (ugly, right!).
//
// `--emit=llvm-ir`, to emit LLVM-IR in `.ll` form.
//
// `cargo` places the generated artifacts under the `target` directory.
//
// > ls target/debug/examples/
// get_sign-85c57be6132dac1d.d get_sign-85c57be6132dac1d.ll get_sign.d
//
// If you look at the compilation above, you find that `cargo` adds
// `extra-filename=-85c57be6132dac1d` (the hash might be different, but it)
// matches the hash of the `.ll` file.
//
// The hash is based on the settings for this particular build.
// This way, `cargo` keeps track of the builds, so you can have several builds
// of the same files with different settings without name collisions.
//
// Now we can run KLEE on the generated LLVM-IR.
// (KLEE can read both `.bc` and `.ll` files, `.ll` files are human readable.)
//
// > klee target/debug/examples/get_sign-85c57be6132dac1d.ll
// KLEE: output directory is "/home/pln/courses/d7020e/klee_tutorial/target/debug/examples/klee-out-0"
// KLEE: Using Z3 solver backend
//
// KLEE: done: total instructions = 92
// KLEE: done: completed paths = 3
// KLEE: done: generated tests = 3
//
// (You need to give the right hash for the `.ll` file.)
//
// What was the generated hash?
//
// [your answer here]
//
// B) Inspecting the test cases.
//
// Figure out how to run `ktest-tool` on the generated test cases.
// (Hint, it is just a matter of paths.)
//
// [your answer here]
//
// C) Replaying your test cases.
//
// Now we need to re-compile the `get_sign.rs` with bindings to
// the `libkleeRuntest`.
//
// The first thing we need to do is to generate an object file `.o`.
//
// > cd target/debug/examples
// > llc -filetype=obj -relocation-model=pic get_sign-85c57be6132dac1d.ll
//
// `llc` is the LLVM compiler, `-filetype=obj` to generate an object file and
// `-relocation-model=pic` is to obtain Position Independent Code (https://en.wikipedia.org/wiki/Position-independent_code)
// (This is needed since we don't know where it is going to be placed/linked later.)
//
// Verify that you now have an object file:
//
// > ls *.o
//
// [your answer here]
//
// Now we need to link it with the `libkleeRuntest`.
//
// > clang get_sign-85c57be6132dac1d.o -l kleeRuntest -o a.out
//
// (See `get_sign.c` for linking options if you don't have KLEE libs in default location.)
//
// Now we can replay the test case. However, we have compiled the example for `no_std`
// (as we target embedded applications).
//
// > KTEST_FILE=klee-last/test000001.ktest gdb a.out
//
// (gdb) break get_sign
// Breakpoint 1 at 0x1165: file examples/get_sign.rs, line 9.
//
// Now run the code in the debugger. Which path was triggered?
//
// [your answer here]
//
// Change to test000002, what path does it trigger?
//
// [your answer here]
//
// And finally change to test000003, what path was triggered?
//
// [your answer here]
//
// D) Remarks and conclusions.
//
// You have now successfully:
//
// - Generated LLVM-IR from Rust code
// - Run KLEE on Rust code to generate tests.
// - Linked with the `kleeRuntest` library.
// - Replayed code in `gdb`.
//
// It was a bit of a hassle, right!
//
// In the next part of the lab we will:
//
// Introduce a Rust library (`klee-sys`) for the KLEE bindings.
// Introduce a cargo sub-command for:
// - building and running KLEE
// - replaying test cases
//
// With this at hand, ergonomics using KLEE is improved.
// (Arguably better than the C/C++ integration.)
//
// So why did we not start directly with `cargo klee`?
//
// Well, this is a course at an advanced level, where you
// learn not only to use the tools, but how they work.
//
// The library `klee-sys` basically extends on the
// primitive KLEE bindings in this file.
//
// The `cargo klee` command basically extends on the
// steps you have taken to compile and run the tools
// in this lab.
//
// If you grasp this lab, the rest is just coding.
// You may find and fix bugs in `klee-sys`,
// improve `cargo klee`, etc.
//
// At that point you are a `computer science master`!
[package]
name = "srp_analysis"
version = "0.1.0"
authors = ["Per Lindgren <per.lindgren@ltu.se>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
use std::collections::{HashMap, HashSet};
// common data structures
#[derive(Debug)]
pub struct Task {
pub id: String,
pub prio: u8,
pub deadline: u32,
pub inter_arrival: u32,
pub trace: Trace,
}
//#[derive(Debug, Clone)]
#[derive(Debug)]
pub struct Trace {
pub id: String,
pub start: u32,
pub end: u32,
pub inner: Vec<Trace>,
}
// useful types
// Our task set
pub type Tasks = Vec<Task>;
// A map from Task/Resource identifiers to priority
pub type IdPrio = HashMap<String, u8>;
// A map from Task identifiers to a set of Resource identifiers
pub type TaskResources = HashMap<String, HashSet<String>>;
// Derives the above maps from a set of tasks
pub fn pre_analysis(tasks: &Tasks) -> (IdPrio, TaskResources) {
let mut ip = HashMap::new();
let mut tr: TaskResources = HashMap::new();
for t in tasks {
update_prio(t.prio, &t.trace, &mut ip);
for i in &t.trace.inner {
update_tr(t.id.clone(), i, &mut tr);
}
}
(ip, tr)
}
// helper functions
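// update_prio: records, for each trace id (task or resource), the highest
// priority of any task executing it (i.e., the ceiling for resources);
// recurses into nested critical sections.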
fn update_prio(prio: u8, trace: &Trace, hm: &mut IdPrio) {
if let Some(old_prio) = hm.get(&trace.id) {
if prio > *old_prio {
hm.insert(trace.id.clone(), prio);
}
} else {
hm.insert(trace.id.clone(), prio);
}
for cs in &trace.inner {
update_prio(prio, cs, hm);
}
}
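// update_tr: records, for task `s`, the set of resource ids claimed in its
// trace, including nested critical sections.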
fn update_tr(s: String, trace: &Trace, trmap: &mut TaskResources) {
if let Some(seen) = trmap.get_mut(&s) {
seen.insert(trace.id.clone());
} else {
let mut hs = HashSet::new();
hs.insert(trace.id.clone());
trmap.insert(s.clone(), hs);
}
for trace in &trace.inner {
update_tr(s.clone(), trace, trmap);
}
}
mod common;
use common::*;
fn main() {
let t1 = Task {
id: "T1".to_string(),
prio: 1,
deadline: 100,
inter_arrival: 100,
trace: Trace {
id: "T1".to_string(),
start: 0,
end: 10,
inner: vec![],
},
};
let t2 = Task {
id: "T2".to_string(),
prio: 2,
deadline: 200,
inter_arrival: 200,
trace: Trace {
id: "T2".to_string(),
start: 0,
end: 30,
inner: vec![
Trace {
id: "R1".to_string(),
start: 10,
end: 20,
inner: vec![Trace {
id: "R2".to_string(),
start: 12,
end: 16,
inner: vec![],
}],
},
Trace {
id: "R1".to_string(),
start: 22,
end: 28,
inner: vec![],
},
],
},
};
let t3 = Task {
id: "T3".to_string(),
prio: 3,
deadline: 50,
inter_arrival: 50,
trace: Trace {
id: "T3".to_string(),
start: 0,
end: 30,
inner: vec![Trace {
id: "R2".to_string(),
start: 10,
end: 20,
inner: vec![],
}],
},
};
// builds a vector of tasks t1, t2, t3
let tasks: Tasks = vec![t1, t2, t3];
println!("tasks {:?}", &tasks);
// println!("tot_util {}", tot_util(&tasks));
let (ip, tr) = pre_analysis(&tasks);
println!("ip: {:?}", ip);
println!("tr: {:?}", tr);
}