I'm grading editing tests from intern applicants again, and it's reminding me how much I hate most editing tests.
Too many of them, I think, don't really test editing skill. It's more of a test of how well you can guess what the test maker is thinking.
For example, our department's editing test has a section that is a mix of style and usage questions. The style questions include the possessive of "press," whether the first letter after a colon is capped, and whether "good-natured" is hyphenated before a noun.
Most styles would hyphenate "good-natured" before a noun, so that one is straightforward. But the key says that the first word after a colon should be capped, even though our preferred style manual, Chicago, caps it only when the colon introduces multiple sentences. (In this case, it introduces only one.)
But the choice between "press'" and "press's" isn't a matter of right or wrong; it's just a style choice. AP style prefers the former; Chicago prefers the latter. Should we really mark someone wrong for choosing AP style on that question just because we prefer Chicago?
And then the usage questions are full of issues that we don't consistently enforce in our own editing: none is/are, one of those that is/are, better than she/her, and so on. So if we don't enforce these rules, why should we mark down test takers for failing to enforce them?
I think there's an argument to be made that it's important to know about the issue so that you can make an informed decision about whether or not to follow it, but the test doesn't capture that kind of thinking. (Though sometimes I write in explanatory notes when I take tests.)
The spelling section on our test is pretty fiendish, but I'm not sure it's good. Test takers are simply supposed to circle every word that's spelled wrong. They get extra points if they correct the spelling. But a lot of the "wrong" spellings are just variants.
Is it okay if we as a department prefer "canceled," "collectible," and "judgment" to "cancelled," "collectable," and "judgement"? Sure. Should we mark people down for not knowing our preferences? I don't think so. Again, we're testing editing skill, not mind reading.
Then there's the section where you edit a bunch of sentences, which is just a mix of all the previous problems. Plus, if someone rewrites a sentence in a way that's different from what the key says, how do you score it? It's so hard to quantify editing changes at the sentence level.
I think this kind of test can give you a clear negative—if someone bombs the test, you can usually assume that they're not going to be a good editor. Though maybe that's not true, because we test on paper and don't allow access to references. That's not how real editing works.
Someone might be a really good editor if they have access to a style manual and a dictionary, but they might struggle with some things if they can't look them up. Is it fair to penalize those people?
But even if the test can reliably tell you if someone is a bad editor, I don't think it can reliably tell you if someone is good. I've seen people do well on tests and then really struggle on the job. The rate of false positives is too high.
I'd much rather just give someone a page or two of text and see what they do with it. It would be difficult or impossible to score in a fair and objective way, but it would give us a much better sense of how well someone edits than a bunch of "guess what I'm thinking" questions.
It's because copy editing is such a profitable and highly sought-after profession. So many people want in that we have to screen out the riffraff. It's okay if you're jealous, Ty. https://twitter.com/JamesSACorey/status/1305939640046690304