This week's essay is for the coders! Nerds, every last one of you. Especially you. https://link.medium.com/x5sFgobSW5 
I'll do a quick twitter-friendly storm about this later today, so you can get a taste if you're curious.
Ok, I may have snuck a weekend in there, but hey. It was the weekend. Since I'm always working at home these days, it's important to protect the boundaries of when to work, and when to... not.

Anywho, let's go!
I wrote that article, in part, because I've been writing a lot about more meta, hand-wavey team stuff, even as I'm sitting on a wealth of precise technical opinions, tailored over my career. Are they all great? Probably not! But I hope I've learned a thing or two.
And because I've been testing so long, and trying to approach it thoughtfully most of that time, I hope my lessons on the subject are fairly resilient.
So yeah... one of the great virtues of testing is that the test survives. Sometimes it even survives the programmer themselves. *Fondly remembers dearly departed coworkers.* Writing a suite of tests is building a robot version of yourself that'll keep you honest over time.
I've read tests that are over a decade old in order to figure out how things work. This is not some abstract thing that never happens in the wild... if your project succeeds (and I hope you want it to!), you'll earn this problem.
And if you want to *keep* your project successful, making sure you understand why things were built the way they were is important.

Anyway, I digress.

Given that long-livedness, I've discovered in myself the following *ordered priorities* for how to make my tests... better.
So, in order:
1. A test has to validate that the feature works - it has to fail when the feature doesn't.
2. A test must communicate the feature under test - it must read precisely.
3. A test should be self-contained - minimize the knowledge required to 'get it'.
4. A test should be a realistic example of how to use the feature.
5. A test should be brief and focused.
6. A test should restrict itself to *one scenario* - additional scenarios are additional tests.
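To make this concrete, here's a minimal sketch of a test that tries to follow all six rules. The feature under test is hypothetical (the thread doesn't name one), so `parse_duration` is just a stand-in:

```python
import re

# Hypothetical feature under test: a stand-in, since the thread
# doesn't name a specific one.
def parse_duration(text):
    """Parse strings like '2h30m' into total minutes."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?", text)
    if not text or not match:
        raise ValueError(f"unrecognized duration: {text!r}")
    hours = int(match.group(1) or 0)
    minutes = int(match.group(2) or 0)
    return hours * 60 + minutes


def test_parse_duration_combines_hours_and_minutes():
    # Rule 6: one scenario per test. Rules 2-4: the call below reads
    # exactly like real usage, with no hidden setup to hunt down.
    assert parse_duration("2h30m") == 150  # Rule 1: fails if the feature breaks
```

Note how the test body is the whole story: one realistic call, one visible expectation, nothing to chase down elsewhere.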
The way these rules work is essentially the same as Asimov's Laws of Robotics... your test should adhere to rule 2, except when it conflicts with rule 1. Lower priority rules yield to higher priority rules.
For example, if making a test very brief (rule 5) makes it communicate poorly (rule 2), then the brevity's got to go. Accuracy is more important.
But mistake me not! *ALL* of these rules are important! It's just that some matter more than others for a test's long-term health and maintenance.
The more rules your test struggles with, the bigger the hill the programmer of the future has to climb up in order to maintain it.
Working with people over the years, I've found that they intuit most of these rules in some way. But maybe not all of them, and frequently not very clearly... the rules get smuggled into a variety of preferences.
I've seen a lot of conflict coming from people highly focused on *one* rule, and then prioritizing it to the exclusion of all else. I think this is a natural part of the evolution of a developer, to be honest.

The truth? Pretty much all programmer rules of thumb are contextual.
And most of them are related to our observations of other programmers... and our (worst) selves.

Which makes a shocking number of best-practices in the highly precise and technical field of programming... loose observations of human nature, and its temptations.
For example, *global variables are to be avoided* is a law, and it sounds strong and technical.

But (for the most part) the *reason* they are to be avoided is because of how humans use and understand them (which is to say, they're difficult to use and understand correctly).
And the cost of misunderstanding is high.
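A contrived little sketch (not from the thread, just an illustration) of that human problem: with a global, the same call can give different answers depending on what some other function did to shared state, so nobody can reason locally.

```python
discount = 0.0  # module-level global, shared by every caller

def price_after_discount(price):
    # Behavior depends on state that no caller can see at the call site.
    return price * (1 - discount)

def run_promotion():
    global discount
    discount = 0.25  # silently changes behavior for everyone

print(price_after_discount(100))  # 100.0
run_promotion()
print(price_after_discount(100))  # 75.0 - same call, different answer
```

The code is tiny and technically correct; the danger is entirely in what it does to the humans reading and modifying it later.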

So it is with these test principles.
Now, I've thought a lot about *why* I believe these principles should be in the order they are. I'm happy to field questions on the subject if need be.

The principle *behind* the principle is that all code should be effectively self-describing. Even test code.
One of the most frustrating moments when digging through old tests is discovering that *what you thought the test was doing* was not the case.

Sadly, that's become a mark of an experienced programmer... being able to sniff out that the code is leaving out some important context.
And then starting to dig around until you find it. It's an important skill, one that will always be useful (Sherlock Holmes will never go out of style).

But if we pride ourselves on writing clear code, we don't want to put another developer in that situation if we can avoid it.
This is why principles #2 and #3 rank so high - making the content of the test *accurate* and *highly visible* is really important. It's hard to understand something *I can't see*.
This is especially relevant because of temptations presented by various frameworks and coding styles that let you smuggle in a tremendous amount of complexity, without leaving a trace on your test code.
I mean, this is great for getting the line count down. But it doesn't make for very self-documenting code.
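For instance, here's a hedged sketch of that smuggling in pytest-style tests. `make_admin_user` is hypothetical - it stands in for any fixture or helper buried far from the test (say, a conftest.py three directories away):

```python
def make_admin_user():
    # Imagine this living in a distant shared-fixtures file.
    return {"name": "Ada", "admin": True}

def test_admin_can_edit_opaque():
    user = make_admin_user()  # what's actually in here? go hunting.
    assert user["admin"]

def test_admin_can_edit_visible():
    # Rule 3: the setup is visible right where the assertion needs it.
    user = {"name": "Ada", "admin": True}
    assert user["admin"]
```

Both tests pass today; only the second one will still explain itself when it fails years from now.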

And when things need to change (or just plain-old break) in the smuggled content, the blood pressure of the programmer may go through the roof.
I think I'm running out of steam on this thread, but I'll leave with a few final thoughts.

If you can make your test *really truly* read as examples of how to use the system being tested, then you've got something precious. Especially if you're writing a shared library.
Ok, maybe just one thought. If you stumble on this thread and find it useful, let me know! #testing #tdd
You can follow @ZeGreatRoB.