Paper review of BaseSAFE is up on YouTube. Conclusion: really good paper, but with the cases provided there's about a 10x slowdown due to fork()/AFL being slow, plus another ~10x due to slow emulation. Correct thoughts, but it could be ~50x faster.
For reference, this is for small fuzz cases (about 100,000 instructions per case, which is relatively large for a single packet parser but small in the grand scheme of things). I outperform this on the exact same hardware with vectorized emulation by 800x (21M/sec for me, 27k/sec for it).
Even assuming scalar emulation is 8x slower than vecemu (since I'm running 8 VMs in parallel, though each lane definitely isn't doing identical work), there's still at least a 100x speedup here.
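A quick back-of-the-envelope check of the numbers above. All figures are the ones quoted in the thread; the 8x scalar penalty is the stated assumption, not a measurement:

```python
# Sanity-check the speedup claims using the rates quoted in the thread.
vecemu_rate = 21_000_000   # cases/sec with vectorized emulation
basesafe_rate = 27_000     # cases/sec reported for the BaseSAFE setup

total_speedup = vecemu_rate / basesafe_rate
print(f"vecemu vs BaseSAFE: ~{total_speedup:.0f}x")      # ~778x, i.e. ~800x

# Pessimistically assume a scalar emulator is 8x slower than vecemu
# (8 VMs packed into one vector pipeline, lanes not always full).
scalar_speedup = total_speedup / 8
print(f"scalar emu vs BaseSAFE: ~{scalar_speedup:.0f}x") # ~97x, i.e. ~100x
```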
This is why you often see me ranting about scaling and performance of fuzzers. When a lot of people say "AFL scales," they mean you can run it on multiple cores, but in this case running AFL on 192 threads only yields about 3 cores' worth of performance. It just throws it all away.
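To put that scaling claim in concrete terms (192 threads and 3 cores' worth of throughput are the figures from the thread; equating one thread to one core here is a simplification):

```python
# Scaling efficiency implied by AFL on 192 threads delivering
# only ~3 cores' worth of performance.
threads = 192
effective_cores = 3

efficiency = effective_cores / threads
print(f"scaling efficiency: {efficiency:.1%}")  # ~1.6%

# Put differently: ~98% of the machine is spent on overhead
# (per the thread, largely fork()/AFL plumbing) rather than
# on actually executing fuzz cases.
print(f"wasted capacity: {1 - efficiency:.1%}")
```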
Most fuzzing environments have 10-100x speedups available with ~50 hours of development. This is why I think our focus on making byte flippers 5% better is kind of pointless until this is addressed. That being said, fuzzer overheads don't matter for large fuzz cases
... or if you're using persistent mode. I'd hazard that persistent mode is really never needed if you have good harnessing. The overhead for me to do 21 million resets per second is 0.4% of my CPU time; I'd rather pay that than hack in persistent mode and potentially lose determinism.
You can follow @gamozolabs.