Look, the question of whose fault bugs are is not actually that complicated once you've got a properly well-rounded background in cognitive systems engineering, moral philosophy, philosophy of moral education, anarchist theory, psychotherapy, phenomenology, Christian theology an-
It might help to know some software development too, IDK.
Three theses:

1. Systems where nobody is willing to take personal responsibility for their maintenance and proper function do not work and inevitably produce awful results.
2. If everyone involved takes personal responsibility, basically every system works well.

BUT
3. If a system relies on people taking personal responsibility in order to ensure normal proper functioning, it will burn out the people who are willing to take personal responsibility; they will leave, and you will be left with only the people who DGAF about proper functioning.
This is prompted by https://twitter.com/hillelogram/status/1265716565632303109 of course. Are bugs programmers' fault?

Often, yes, they are. And programmers who create bugs should show an appropriate amount of guilt over them. Often that appropriate amount of guilt is very small; sometimes it's larger. It depends.
But guilt is not blame. Guilt is an impetus to do better, a sign that there are virtues you need to cultivate. A healthy system treats bugs as a learning opportunity. Sometimes you need to learn to do better as an individual, sometimes as a system.
Hillel cites the safety systems literature, and he is right to do so. Treating an individual person as culpable and deserving of punishment just because the fault is "their" error is self-destructive behaviour in any system that wants to do better.
But the safety systems literature also warns us that the human experts in the system are what create correct functioning, and that overly brittle systems that treat humans as problems to be routed around tend to tie their hands at the worst possible moment.
Additionally, if you've never shipped a bug, you've not shipped enough. The correct number of bugs in the wild is almost never zero. This is true even for most safety-critical software.
In some cases it could be true *even if the bug kills people*. If you take two years longer to deliver some life-saving technology in order to ensure that it is free of bugs, how many people died waiting for you to ship?

This is a hard trade-off, and there is no easy answer.
Bugs are, ultimately, an integral part of the life of any system, and are primarily an opportunity for the system to learn, and to grow, and to self-improve. This includes the system overall, but it also includes the people within it.
The ultimate question is not one of fixing bugs, assigning blame, or changing the system, but the more meta one: When a bug occurs, does your system get better or worse? Does error cause the people within your system to grow better, or does it cause the ones who care to leave?