Code metrics are useless for tracking an engineering team's performance. Not lines of code, not number of commits, not number of deployments, not test coverage. I can't think of a single one that can't be gamed or misconstrued.
They *are* useful in other contexts, usually to the programmers themselves. A bean counter sitting outside the team, trying to find meaning in the average number of methods per class, may as well be reading tea leaves.
But what about the number of bugs reported, you ask? Bugs are a human problem, not a code problem. The process of reporting them is as fallible as the one that created them in the first place. Not all bugs are equal, not all bugs are found, and some turn out to be poorly understood requirements.
Just because something is easy to measure doesn't mean that information is valuable.
This thread brought to you by everyone trying to swindle managers into buying their Git analytics software.