You entrust digital products with a lot: your thermostat, your car's informatics, your pacemaker, your email and financial data. Defects in those computers can expose you to potentially enormous risk.

1/
The only thing worse than using a defective product is UNKNOWINGLY using a defective product (having faulty brakes is bad; discovering your brakes are faulty on the highway is much, much worse).

2/
Tech companies have long asserted that they alone have the right to decide who can disclose true facts about defects in their products...for safety. If randos who discover their mistakes disclose them without warning the companies first, then "bad guys" will exploit the bugs.

3/
There's a legitimate ethical debate about the best way to make bug disclosures, but even if you believe that someone should be the official, legal custodian of Bad News About a Company's Products, it's common sense that the company itself should not be that custodian.

4/
It seems obvious that, in the US, the First Amendment protects your right to make truthful disclosures about defective products.

5/
Yet corporations (led by @Oracle) have stretched the disastrously vague, Reagan-era Computer Fraud and Abuse Act (CFAA) to threaten (and, sometimes, imprison) researchers who make these disclosures without permission.

6/
It's not just the CFAA. Section 1201 of the Digital Millennium Copyright Act provides up to 5 years in prison and a $500k fine for a first offense by anyone who "traffics" in a "circumvention device." That means publishing proof-of-concept code demonstrating vulns in systems with DRM is a potential felony.

7/
Enter the Vulnerability Disclosure Program and its free-spending cousin, the Bug Bounty Program. Under these "managed disclosure" systems, companies invite security researchers to reveal their findings to them first.

8/
In theory, this is how we want things to work: rather than coercing researchers into silence, companies entice them into cooperation, say, by promising to publish all reported bugs themselves after a suitable period to investigate and fix them.

9/
Maybe they even pay researchers for going the managed disclosure route.

In practice, though, criminal and civil threats loom large over these programs. Companies offer cash and immunity to researchers as a carrot, but they hold out fines and prison as a stick.

10/
And it turns out that, yup, companies are really shitty stewards of bad news about their own products. When companies get to set the terms on which hackers talk to them first, they set terms that bind researchers to long (sometimes indefinite) periods of silence.

11/
And the companies also reserve the right to decide whether THEY will ever reveal the bugs to us poor suckers who trust their products with our money, privacy and lives, and whether they'll ever patch those products at all.

12/
But a few years back, some people had an idea to turn this bug into a feature: they'd start VC-backed companies to manage bug bounties and disclosure programs on tech companies' behalf. They'd organize researchers, validate findings, manage thorny comms with the companies...

13/
They'd build platforms where researchers could flock and socialize and collaborate and become millionaires (!) by working WITH companies, instead of against them.

14/
That didn't work out so great, because the hackers these platforms were supposed to protect weren't the platforms' customers: the tech companies whose products were being tested were the customers.

15/
Hackers who join these platforms to earn big by doing the right thing instead find that they are required to sign indefinite, one-sided NDAs that prevent them from disclosing ANYTHING, even the fact that they signed an NDA.

16/
And the companies don't have to make any promises (apart from payment...sometimes) to actually do anything about the bugs researchers bring them.

17/
In an excellent piece on this for CSO Magazine, @toholdaquill (J.M. Porup) describes how the VC-backed, growth-oriented bug-bounty platforms are incentivized to, uh, overstate how much money hackers can make by using them, and what kind of results they can expect.

18/
Reading between the lines, and talking with former HackerOne exec @k8em0 (Katie Moussouris), Porup makes a pretty good case that, apart from statistically insignificant outliers, there's not much money to be made on these platforms, and the price of admission is silence and inaction.

19/
Porup also makes the case that bug bounty platforms are potentially violating California's employment law AND the GDPR, and he debunks claims that their operations follow the ISO standards for vulnerability disclosure (which Moussouris co-authored).

20/
I think that the outcome here was entirely predictable. The bug bounty platforms have tacitly endorsed the idea that it is (or should be) illegal to tell the truth about defective products without permission from the products' manufacturers.

21/
Inevitably, deputizing companies to decide who can warn their customers that their products can't be trusted ends with those companies abusing that power. Period. To imagine otherwise is to engage in fantasy.

22/
It's the kind of motivated reasoning that looks great in a VC pitch but is a disaster in the world.

eof/