 

Using theorem provers to find attacks

I've heard a bit about using automated theorem provers in attempts to show that security vulnerabilities don't exist in a software system. In general this is fiendishly hard to do.

My question is: has anyone done work on using similar tools to find vulnerabilities in existing or proposed systems?


Edit: I'm NOT asking about proving that a software system is secure. I'm asking about finding (ideally previously unknown) vulnerabilities (or even classes of them). I'm thinking like (but am not) a black hat here: describe the formal semantics of the system, describe what I want to attack, and then let the computer figure out what chain of actions I need to use to take over your system.

asked Sep 09 '10 by BCS


6 Answers

Yes, a lot of work has been done in this area. Satisfiability (SAT and SMT) solvers are regularly used to find security vulnerabilities. For example, at Microsoft, a tool called SAGE is used to eradicate buffer overrun bugs from Windows. SAGE uses the Z3 theorem prover as its satisfiability checker. If you search the internet using keywords such as "smart fuzzing" or "white-box fuzzing", you will find several other projects using satisfiability checkers to find security vulnerabilities.

The high-level idea is the following: collect execution paths in your program that you didn't manage to exercise (that is, you didn't find an input that made the program execute them), convert these paths into mathematical formulas, and feed the formulas to a satisfiability solver. Each formula is constructed so that it is satisfiable/feasible only if there is an input that makes the program execute the given path. If the produced formula is satisfiable, the solver will produce an assignment and, with it, the desired input values. White-box fuzzers use different strategies for selecting execution paths; the main goal is to find an input that makes the program execute a path that leads to a crash.
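To make the path-constraint idea concrete, here is a minimal sketch using Z3's Python bindings (the z3-solver package). The target function, buffer size, and branch conditions are invented for illustration; this is not how SAGE itself is driven.

```python
# Sketch of the "path constraint -> SMT formula" idea with Z3's Python API.
# Imagine a C function that checks `if (n < 32)` and then copies n bytes into
# a 16-byte stack buffer. The interesting path is the one where the check
# passes but the copy still overflows the buffer.
from z3 import BitVec, Solver, sat

n = BitVec('n', 32)          # symbolic 32-bit input

s = Solver()
s.add(n < 32)                # branch condition the program actually tests
s.add(n > 16)                # extra condition under which the copy overflows

if s.check() == sat:
    m = s.model()
    print("overflowing input found: n =", m[n])   # e.g. n = 17
else:
    print("this path is infeasible")
```

Real white-box fuzzers generate these constraints automatically from instrumented executions rather than writing them by hand.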

answered by Leonardo de Moura


So, at least in some meaningful sense, the opposite of proving something is secure is finding code paths for which it isn't.

Try Byron Cook's TERMINATOR project.

There are also at least two videos about it on Channel9.

His research is likely to be a good starting point for you to learn about this extremely interesting area of research.

Projects such as Spec# and Typed Assembly Language are related too. In their quest to move safety checks from run time back to compile time, they allow the compiler to detect many bad code paths as compilation errors. Strictly speaking, they don't help with your stated intent, but the theory they exploit might be useful to you.

answered by Jason Kleban


I'm currently writing a PDF parser in Coq together with someone else. While the goal in this case is to produce a secure piece of code, doing something like this can definitely help with finding fatal logic bugs.

Once you've familiarized yourself with the tool, most proofs become easy. The harder proofs yield interesting test cases that can sometimes trigger bugs in real, existing programs. (And for finding bugs, you can simply assume theorems as axioms once you're sure that there's no bug to find; no serious proving is necessary.)

About a month ago, we hit a problem parsing PDFs with multiple / older XREF tables: we could not prove that the parsing terminates. Thinking about this, I constructed a PDF with looping /Prev pointers in the trailer (who'd think of that? :-P), which naturally made some viewers loop forever. (Most notably, pretty much any poppler-based viewer on Ubuntu. Made me laugh and curse Gnome/evince-thumbnailer for eating all my CPU. I think they've fixed it now, though.)
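To give a feel for that bug class, here is a small Python sketch (not the Coq development) of walking a trailer's /Prev chain; the trailer representation and function names are made up for illustration.

```python
# Following /Prev pointers in PDF trailers. The dict-based "trailers" below
# stands in for real trailer parsing; offsets map to parsed trailer dicts.

def collect_xref_offsets_naive(trailers, start):
    """Spins forever if the /Prev chain is cyclic -- the viewer bug above."""
    offsets, offset = [], start
    while offset is not None:
        offsets.append(offset)
        offset = trailers[offset].get("/Prev")   # never terminates on a cycle
    return offsets

def collect_xref_offsets_safe(trailers, start):
    """Same walk, but refuses to revisit an offset, so it always terminates."""
    offsets, seen, offset = [], set(), start
    while offset is not None and offset not in seen:
        seen.add(offset)
        offsets.append(offset)
        offset = trailers[offset].get("/Prev")
    return offsets

# Two trailers whose /Prev pointers form a loop, like the crafted PDF above.
trailers = {100: {"/Prev": 200}, 200: {"/Prev": 100}}
print(collect_xref_offsets_safe(trailers, 100))   # [100, 200]
# collect_xref_offsets_naive(trailers, 100) would loop forever
```

A termination proof forces you to notice exactly this kind of missing cycle check.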


Using Coq to find lower-level bugs will be difficult. In order to prove anything, you need a model of the program's behavior. For stack / heap problems, you'll probably have to model the CPU-level or at least C-level execution. While technically possible, I'd say this is not worth the effort.

Using Splint for C or writing a custom checker in your language of choice should be more efficient.
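As a toy example of the "custom checker" route, here is a short Python sketch that flags calls to eval() as possible injection sinks. The rule is deliberately simplistic and is not what Splint does; it only illustrates the shape of such a tool.

```python
# Minimal custom checker: scan Python source for calls to the builtin eval(),
# a common injection sink. Real checkers are far more sophisticated.
import ast
import sys

def find_eval_calls(source: str, filename: str = "<input>"):
    """Return (line, column) positions of every call to eval()."""
    tree = ast.parse(source, filename=filename)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append((node.lineno, node.col_offset))
    return hits

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path) as f:
        source = f.read()
    for line, col in find_eval_calls(source, path):
        print(f"{path}:{line}:{col}: call to eval() -- possible injection sink")
```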

answered by nobody


STACK and KINT used constraint solvers to find vulnerabilities in many OSS projects, like the Linux kernel and FFmpeg. The project pages point to papers and code.

answered by pwnall


It's not really related to theorem-proving, but fuzz testing is a common technique for finding vulnerabilities in an automated way.
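For illustration, here is a minimal random-mutation fuzzer sketch in Python; parse_pdf is a hypothetical stand-in for whatever parser you actually want to test, and the mutation strategy is the simplest one imaginable.

```python
# Toy mutation fuzzer: flip random bytes of a valid seed input and watch for
# unexpected exceptions ("crashes") from the code under test.
import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Flip a few random bytes of a valid seed input."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parse_pdf(data: bytes) -> None:
    """Hypothetical target; replace with the real parser under test."""
    if data[:5] != b"%PDF-":
        raise ValueError("not a PDF")

def fuzz(seed: bytes, iterations: int = 10_000):
    crashes = []
    for i in range(iterations):
        sample = mutate(seed)
        try:
            parse_pdf(sample)
        except ValueError:
            pass                       # expected rejection, not a bug
        except Exception as exc:       # anything else is a crash to triage
            crashes.append((i, exc, sample))
    return crashes

if __name__ == "__main__":
    seed = b"%PDF-1.4 minimal valid document bytes"
    print(f"{len(fuzz(seed))} crashing inputs found")
```

Tools like the white-box fuzzers mentioned above replace the random mutation step with constraint solving to reach deeper paths.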

answered by Kristopher Johnson


There is the L4.verified kernel project, which is trying to do just that. However, if you look at the history of exploitation, completely new attack patterns keep being found, and then a lot of software written up to that point turns out to be very vulnerable. For instance, format string vulnerabilities weren't discovered until 1999. About a month ago H. D. Moore published his DLL hijacking research, and literally everything under Windows is vulnerable.

I don't think it's possible to prove that a piece of software is secure against an unknown attack. At least not until a theorem prover is able to discover such an attack, and as far as I know this hasn't happened.

answered by rook