I have made a site describing the boolean satisfiability problem and DPLL algorithm. It's got an interactive animation of the algorithm at work (try the Watts-Strogatz graph) and discussions of the problem and algorithm. Made in @rustlang and @vuejs.

https://mortaljoe.gitlab.io/r-sat/ 
The boolean satisfiability problem is gorgeous, but it takes some convoluted setup to get to the payoff. It's a bit like those logic puzzles (https://en.wikipedia.org/wiki/Zebra_Puzzle). Take a bunch of statements: "it is raining", "dogs exist", etc...
Now put the statements together with "or" and "not": "it is raining OR dogs exist", "it is raining OR math is NOT cool". Call these clauses. The problem asks: with some set of these statements and some set of clauses, is it possible for all of these clauses to be true?
The content of the statements is irrelevant and usually abstracted away, leaving you with a (conjunctive normal form) satisfiability problem `S = (x_1 OR x_2 OR -x_3) AND (-x_2 OR x_3 OR -x_4) AND ...`

Is there some combination of x_i=T/F that will make this S true?
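To make that concrete, here's a minimal Rust sketch of one common encoding (DIMACS-style signed integers — an assumption of this sketch, not something from the thread): a literal is a nonzero integer, +i for x_i and -i for NOT x_i, and checking an assignment is just "every clause has at least one true literal".

```rust
/// A clause is an OR of literals; a formula S is an AND of clauses.
/// Literal encoding (DIMACS convention): +i means x_i, -i means NOT x_i.
type Clause = Vec<i32>;

/// True iff `lit` is satisfied by `assignment`, where assignment[i] holds x_{i+1}.
fn lit_true(lit: i32, assignment: &[bool]) -> bool {
    let v = assignment[(lit.unsigned_abs() - 1) as usize];
    if lit > 0 { v } else { !v }
}

/// S is true iff every clause contains at least one true literal.
fn satisfies(formula: &[Clause], assignment: &[bool]) -> bool {
    formula.iter().all(|c| c.iter().any(|&l| lit_true(l, assignment)))
}
```

So `S = (x_1 OR x_2 OR -x_3) AND (-x_2 OR x_3 OR -x_4)` becomes `vec![vec![1, 2, -3], vec![-2, 3, -4]]`.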
Naively, you can try all combinations of the x_i: eventually you will either find an assignment that gives S=T (the problem is satisfiable, or SAT) or you will exhaust all 2^n possibilities (it is UNSAT). But like brute-force guessing a password, that simply takes too long.
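That brute-force loop fits in a few lines of Rust (same signed-integer literal convention as before; purely illustrative, exponential in n by construction):

```rust
type Clause = Vec<i32>;

fn lit_true(lit: i32, a: &[bool]) -> bool {
    let v = a[(lit.unsigned_abs() - 1) as usize];
    if lit > 0 { v } else { !v }
}

/// Try all 2^n assignments of n variables; return one that satisfies every
/// clause, or None if the formula is UNSAT. Only sane for tiny n.
fn brute_force(formula: &[Clause], n: usize) -> Option<Vec<bool>> {
    for bits in 0..(1u64 << n) {
        // Read assignment i out of bit i of the counter.
        let a: Vec<bool> = (0..n).map(|i| (bits >> i) & 1 == 1).collect();
        if formula.iter().all(|c| c.iter().any(|&l| lit_true(l, &a))) {
            return Some(a);
        }
    }
    None
}
```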

The DPLL algorithm is better.
DPLL augments the "guessing" with deduction. Worst-case problems have roughly the same runtime as pure guessing, but DPLL generally performs much better. See the site for how it works, or wikipedia: https://en.wikipedia.org/wiki/DPLL_algorithm.
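A bare-bones recursive DPLL can be sketched in Rust like this — just unit propagation plus branching; the pure-literal rule and modern heuristics are omitted, and this is a sketch, not the site's actual solver:

```rust
type Clause = Vec<i32>;

/// Simplify the formula under the decision `lit = true`: drop clauses
/// containing `lit`, delete `-lit` from the rest. None = conflict
/// (some clause became empty, i.e. falsified).
fn assign(formula: &[Clause], lit: i32) -> Option<Vec<Clause>> {
    let mut out = Vec::new();
    for c in formula {
        if c.contains(&lit) { continue; }                // clause satisfied
        let c2: Clause = c.iter().copied().filter(|&l| l != -lit).collect();
        if c2.is_empty() { return None; }                // conflict
        out.push(c2);
    }
    Some(out)
}

/// DPLL: propagate forced (unit) literals, then branch on a remaining one.
fn dpll(formula: &[Clause]) -> bool {
    if formula.is_empty() { return true; }               // no clauses left: SAT
    if formula.iter().any(|c| c.is_empty()) { return false; }
    // Unit propagation: a one-literal clause forces that literal — deduction,
    // not guessing.
    if let Some(unit) = formula.iter().find(|c| c.len() == 1) {
        return match assign(formula, unit[0]) {
            Some(f) => dpll(&f),
            None => false,
        };
    }
    // Otherwise guess: try x = true, then backtrack and try x = false.
    let x = formula[0][0];
    for lit in [x, -x] {
        if let Some(f) = assign(formula, lit) {
            if dpll(&f) { return true; }
        }
    }
    false
}
```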
Another question: how many random clauses does it generally take before no assignment can satisfy them all? First, say that each clause has exactly three statements; this is called 3-SAT and is just complicated enough to be interesting (and any SAT problem can be reduced to it, which is nice).
Random 3-SAT flips from mostly satisfiable to mostly unsatisfiable at roughly 4.3 clauses per statement. With 1000 statements and 4000 random clauses, you will probably get a problem that is satisfiable. With 4500 clauses, it will likely be UNSAT.
This is a phase transition in the solvability: it is mostly SAT below and mostly UNSAT above. Keeping the ratio of clauses to statements fixed, this transition sharpens as the number of statements and clauses is increased: the critical 50-50 region gets narrower.
This transition also affects the difficulty of solving the problem. Below the threshold it's pretty easy for DPLL to find a solution, and above it proving UNSAT is a bit harder. But the hardest instances sit right on that 4.3 line. (It also takes longer the more statements and clauses you add, of course.)
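You can play with the transition yourself: generate random 3-clauses at a chosen clause-to-statement ratio and feed them to a solver. A dependency-free Rust sketch (the tiny xorshift PRNG is just to avoid external crates; any RNG would do):

```rust
/// Minimal xorshift64 PRNG so this sketch needs no external crates.
/// Seed must be nonzero.
struct Rng(u64);
impl Rng {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
}

/// Generate m random 3-clauses over variables 1..=n: pick three distinct
/// variables, negate each with probability 1/2. At m/n ~ 4.3 the instances
/// sit near the SAT/UNSAT phase transition.
fn random_3sat(n: u64, m: usize, seed: u64) -> Vec<Vec<i32>> {
    let mut rng = Rng(seed);
    let mut formula = Vec::with_capacity(m);
    for _ in 0..m {
        let mut clause: Vec<i32> = Vec::new();
        while clause.len() < 3 {
            let v = (rng.next() % n + 1) as i32;
            if clause.iter().all(|&l| l.abs() != v) {
                let sign = if rng.next() % 2 == 0 { 1 } else { -1 };
                clause.push(sign * v);
            }
        }
        formula.push(clause);
    }
    formula
}
```

E.g. `random_3sat(1000, 4300, seed)` lands you right in the hard region.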
There's a lot to say about this problem; it's pretty central to the most famous computer science conjecture: does P equal NP? https://en.wikipedia.org/wiki/P_versus_NP_problem Roughly, if a problem can be quickly verified (NP), does that mean it can be quickly solved (P)? ("quick" == "in polynomial time")
The 3-SAT problem is "NP-complete", meaning it is provably as hard as any problem in NP, while it is an NP problem itself. This means if you prove there is no way to quickly solve it, P!=NP. If you find a polynomial-time algorithm, P==NP (and you get $1 million + science cred).
All that to say: get that money! Write a DPLL solver, then improve it. Join a competition: http://satcompetition.org  (yes, you compete to solve the boolean satisfiability problem fastest/best). And put it out of your mind that it might be over there in Gödel's incomplete void...
You can follow @ApproximateJoe.