A very important thread for people who assume that a game's buggy release means that the testers were bad.

I'll add my own thoughts in this thread. https://twitter.com/Wiisak/status/1307698134622105603
I have worked with QA teams that are *understaffed* and I have worked with QA teams that have *bad leads*, but I have never worked with a QA team that is just inherently "bad".

QA teams tend to be quite large, have a fair amount of turnover, and they're a potluck of humanity.
You get some people that are incredibly smart, disciplined testers, some grade A knuckleheads, and a lot of people in between. Good QA leads know how to focus people on what they do best. Even a relatively poor tester can still do critical testing work on a game if focused well.
So assume that any QA team of a reasonable size with competent leads is capable of doing good testing work. There are two keys to success: 1) the appropriate staff and amount of time for the size of the game (scope & features) 2) appropriate high level management/prioritization.
#1: If a QA team is understaffed, it won't succeed. If a QA team isn't given enough time, it won't succeed. If a dev team does not stop introducing new content or new features, bug burn-down charts will always look Sisyphean because the testing process can never really finish.
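If it helps to see the Sisyphus math, here's a toy Python sketch (mine, not any real studio's tracker, and the numbers are made up): as long as feature work keeps injecting new bugs at roughly the rate the team fixes them, the open-bug count never trends toward zero.

    # Toy model of a bug burn-down chart. Purely illustrative numbers.
    def burn_down(open_bugs, fix_rate, new_bug_rate, weeks):
        """Return the open-bug count per week."""
        history = [open_bugs]
        for _ in range(weeks):
            open_bugs = max(0, open_bugs - fix_rate) + new_bug_rate
            history.append(open_bugs)
        return history

    # Feature work stops: the chart actually burns down to zero.
    print(burn_down(500, 60, 0, 10))   # -> ends at 0
    # Feature work never stops: the chart flatlines. Sisyphean.
    print(burn_down(500, 60, 60, 10))  # -> stuck at 500 forever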
#2: If the people who manage QA's focus do not focus on the right things, the right things will not receive the focus they need. Pretty simple. Testers are given specific things to do. QA teams do not generally "free graze" to find bugs as it's WILDLY inefficient.
There are stages of development (usually early) where free grazing for bugs is appropriate, but the further into development a game gets, the less that happens (if at all). A huge amount of late dev cycle testing is spent on very specific testing plans and regressing old bugs.
Because late development testing tends to be so specialized, how people in charge manage the testing team's focus is critical. Note that a QA lead typically *does not*, at a high level, choose what the team as a whole focuses on - or ignores.
One of the reasons I'm always wary when a publisher handles the majority of testing on a project is that a publisher's priority is OFTEN not the developer's priority. With some exceptions, a publisher's priority is usually *shipping*, which means passing certification standards.
A developer's priority is usually *quality*, i.e., the game is enjoyable from a player perspective, which often does not have that much to do with certification standards.

Note: this doesn't mean publishers don't care about quality and devs don't care about shipping the game on time.
But when push comes to shove and deadlines approach, pressure refines that focus. And this is where things can get ugly. BTW, QA leads can and will advocate and fight like hell for what they think is best, but there is always someone above them who can overrule them.
In the worst of cases, I've seen quality-focused bugs fall into QA black holes because QA leads have been instructed to shelve *all gameplay bugs* indefinitely in favor of certification bugs. A feature feeling clumsy or shitty won't prevent the game from being shipped.
So here's how it goes: a tester (or a dev) sees a stupid bug where physics or something in the animation system goes haywire and a character in the game looks bad or broken. It gets written up and classified appropriately - gameplay, animation, physics, whatever.
It goes to a QA lead/manager of some sort. Whatever that classification is has been *categorically* deferred to be a post-launch issue, so that QA lead/manager marks it and unless something changes it, the appropriate dev *will not see it* until the game ships.
Even if 6 different people note the bug, all of the write-ups will get bundled together or deleted as dupes, falling into the same general post-launch black hole.
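For the programmers following along, here's a minimal sketch of that triage rule. Everything in it is hypothetical (the category names, the signature/milestone fields, the dict standing in for a bug database); it just illustrates how a categorical deferral plus dupe-bundling produces the black hole.

    # Hypothetical sketch of categorical deferral in bug triage.
    SHELVED = {"gameplay", "animation", "physics"}  # per directive from above the QA lead

    def triage(bug, db):
        # Six people filing the same haywire-physics bug collapse into one entry.
        dupe = db.get(bug["signature"])
        if dupe:
            dupe["reports"] += 1
            return dupe
        entry = {**bug, "reports": 1}
        if entry["category"] in SHELVED:
            entry["milestone"] = "post-launch"  # the appropriate dev never sees it pre-ship
        db[entry["signature"]] = entry
        return entry

    db = {}
    report = {"signature": "ragdoll-spaghetti-01", "category": "physics",
              "title": "Character limbs go haywire on ledge grab"}
    for _ in range(6):
        triage(dict(report), db)
    entry = db["ragdoll-spaghetti-01"]
    print(entry["milestone"], entry["reports"])  # -> post-launch 6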

BTW, the certification people can also make the process very frustrating.
You know what it feels like to try to ship a game on tech that has already shipped on a platform only to find out that the certification people previously waived a bunch of massive issues that are still in the tech? And then they tell you that they won't waive them for you?
Well, friends, let me tell you, it is fucking infuriating for two reasons: 1) console manufacturers applying different standards to different projects at different times is obviously unfair, and 2) "Surprise! You've inherited bugs and if you don't fix them, you can't ship!" is bad.
These landmines typically appear late in the dev cycle because neither the console manufacturers nor middleware providers have much incentive to tell devs "Oh BTW there are a hundred TCR/TRC violations in here that you're going to have to fix later LOL."
I've seen a project where halfway through development, the dev team (which had limited resources) had to upgrade a critical piece of middleware that was years out of date because the engine's current revision of that middleware wouldn't compile with recent SDKs.
Dev: Hey, [Console Manufacturer], this wouldn't even have compiled with a recent SDK two years ago when this engine last shipped on your platform.

Console Manufacturer: Oh haha yeah we let it ship on an out of date SDK.

Dev: So can we d-

CM: No LOL.
You get the idea. Obviously, this torpedoes any testing plans or priorities you may have dreamed of having before all the devs' and testers' time and bug focus became 90% dominated by passing certification.
So when this happens, the publisher gets what they want (the game ships) and any bugginess in the game gets blamed on the devs and/or the testers, even if the devs didn't want the game to ship and the testers could not control their priorities.
And while normally it's fair to say, "Well, the devs put those bugs there," it is not at all uncommon for devs on a game team to get stuck fixing hidden legacy bugs in middleware that they -- that no one at their studio, really -- had any hand in making.
Logically, if all of their time gets focused on fixing legacy/cert bugs, they never *get* to fix the bugs that they made.

In conclusion, usually the testers are not to blame for how buggy a game is. That's almost always a function of resources and priorities.
QA leads neither determine what resources they have nor dictate their high-level priorities. That's always on someone else.

BTW, related to this, a bad QA lead is absolutely, devastatingly bad. Most QA leads I've worked with range from good to great.
But let me tell you something: a bad QA lead will make shipping your game an awful experience. A great QA lead will create a beautiful golden road for your team to travel down to the paradise of game shipping. They are wondrous creatures and should be treated appropriately.
I.e., QA leads should be considered the equivalent of a dev team's discipline leads and given the appropriate respect and monetary/benefits compensation.

One of the many reasons QA is viewed as a "stepping stone" job is how testers are compensated and treated.
If you spend your time working as a tester and advancing to a lead position managing a big QA team on a 20, 50, 100 million dollar project and are treated like dog shit and paid the equivalent of a junior dev, WOW no shit of course you're logically going to move into dev.
Which means that those who are skilled/good at being QA leads are monetarily motivated to *stop being QA leads*. It's myopic for companies to do this.

If people genuinely *want* to move into dev and have the chops for it, great. If they want to be QA leads, help them do that.