One of the worst things you can do when employing automation as part of your testing strategy and tactics is to treat your automation activities as something you do after performing your testing.
An even worse mistake is to decouple your testing activities from your programming work. Most automation augmentation efforts fail when the feedback loop from those efforts lags so far behind what your build team is working on that the information the team receives is already outdated.
In some of the successful products I've worked with, groups found a way to balance how the team handles known information about the product with how it discovers, processes, and handles unknown information about the product. All of the above is also a reflection of the team's culture.
Most (if not all) Automation Centers of Excellence fail because of a top-down approach in which tools are rammed down unassuming users' throats without knowing the context of the problem being solved.

The tool needs to adapt to the problem being solved.
The other issue that bothers the heck out of me when it comes to Automation Centers of Excellence is how shallow their success measures are. For example, the metric "% of tests automated vs. manual" only works as a gauge of what else could be done within a given timebox. But no, this metric is actually used to gauge completeness. That "vs. manual" part (putting aside unnecessary debates about the validity of the term) is sampling at its best: the pool of possible tests approaches infinity over time, so the ratio can never measure completeness. So yeah, #BadMetric.
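To see why, here is a small, purely hypothetical arithmetic sketch (the numbers are invented): the denominator of that ratio grows as you keep learning about the product, so the same automation effort yields an ever-shrinking percentage.

```python
# Invented numbers: the "% automated" ratio is unstable because the pool of
# possible checks keeps growing as you learn more about the product.
automated_checks = 500

# Snapshot 1: the test ideas the team happens to know about today.
known_test_ideas = 1_000
print(f"Today: {automated_checks / known_test_ideas:.0%} 'automated'")  # 50%

# Snapshot 2: a month later, exploration has surfaced many more test ideas.
known_test_ideas = 5_000
print(f"Later: {automated_checks / known_test_ideas:.0%} 'automated'")  # 10%

# Same suite, very different "completeness" -- the metric reflects current
# awareness of the problem space, not actual coverage of it.
```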
So, where should the focus of measurement be when it comes to your automation efforts? Where can automation in testing bring the most value? Who benefits from the information any metric provides? Is it your customer? Your build team? Senior Leadership? All of the above?
The American psychologist Edwin Garrigues Boring once said,

"Measurements are used to construct a reality for people who were not there. "

Any reality can be important as long as the measurement is actionable. Always ask: why is this metric important, and to whom?
An example metric I can showcase is the time it takes to perform a set of automated checks. I was pulled in to evaluate a project that typically took 2-3 hours for data setup and roughly 30 minutes to execute a suite of tests that check for regression.
After conversations with the team, it was pretty clear that we could improve the test data management process, write separate checks that validate the data, and make the test runner more efficient. The result? The full suite now executes in 4 seconds. Yes, 4 seconds.
The above example can be misleading because there are a LOT of dependencies behind the success of the implementation that haven't been discussed: things like the skill of the team members and the willingness of the product owner to pivot work so the issue could be addressed.
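For illustration only, here is a rough sketch of the kind of change involved, assuming a pytest-style runner; the testdata module and the load_prebuilt_dataset helper are hypothetical stand-ins, not the actual project's code. The idea is to build and validate test data once, outside the regression run, so the regression checks only read data that is already known to be good.

```python
import pytest

# Hypothetical helper: loads a dataset that was built and validated by a
# separate, scheduled data-preparation job, not during the regression run.
from testdata import load_prebuilt_dataset


@pytest.fixture(scope="session")
def accounts():
    # One cheap read per run instead of hours of setup per run.
    return load_prebuilt_dataset("accounts")


def test_dataset_is_valid(accounts):
    # Data validation lives in its own check, so the regression checks
    # can assume good data instead of re-verifying it everywhere.
    assert all(a["id"] for a in accounts)


def test_balance_never_negative(accounts):
    # Regression checks stay small and fast because they assert on
    # behavior only, not on environment setup.
    assert all(a["balance"] >= 0 for a in accounts)
```

Whether anything like this applies depends on the context; the point is that the speedup came from moving expensive, repeated work out of the feedback loop, not from any particular tool.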