I've been (re)reading a bunch of @MasteryGrading papers lately, and I'm going to summarize and share thoughts on them as a way to reflect and digest them.
I'll start with "Mastery Based Testing in Undergraduate Mathematics Courses" (Collins et al): https://www.tandfonline.com/doi/full/10.1080/10511970.2018.1488317
I read this one long ago in the Before Times, and things have been a bit hazy for the last year. When I picked this up to reread it today, I was really impressed by how clear and well-written this paper is. It's an *excellent* intro to mastery-based testing in math.
This is not a paper that focuses on nitty-gritty implementation details (for that, check out some of the same authors in the @ReadPRIMUS special issue on mastery grading).
Instead, it begins with a *fantastic* analogy of learning as climbing mountains. What would 70% progress look like? Did you make it 70% of the way up every peak, or fully climb 70% of the peaks and never touch the others?
The authors use this analogy to identify issues with points and partial credit: how they fail to represent what students really know, while also potentially burying students who struggle early on (what I like to call "point traps").
The paper then identifies the key features of MBT (and mastery grading in general): "Concepts" (a broad conception of "objectives"), credit only for mastery, and multiple attempts to meet those expectations.
The authors give a lot of great, and very general, advice about how to implement MBT in almost any class. As I said, this is not a nitty-gritty details paper. Afterwards, you will have some great ideas and a lot of questions remaining.
The paper ends with a short summary of a common survey that the authors gave to students in their MBT classes, across a huge variety of institutions (this is a great idea, and something I'd love to see happen more often).
Key results: students strongly believed that MBT helped them learn ideas more deeply than usual, felt that their results fairly represented their knowledge, and felt that it gave them time to understand ideas and identify places that needed review.
(There's some discussion of low test anxiety, but no comparison to other classes. Plus there's always the issue that *any* timed assessment can produce anxiety -- MBT may reduce it, but it doesn't eliminate it.)
Concrete examples are few, but that's not the goal of this paper. However, the authors do include examples from as wide a range as calculus, discrete math, and real analysis.
They emphasize that MBT can work in a wide variety of contexts, although I'd suggest that something more like Specifications grading is better suited for proof-based classes.
This paper was a joy to read, and provides an excellent introduction to the philosophy and key ideas in mastery-based testing. It's not "here's my course in a box -- go ahead and use it!", but it will get you thinking about your assessments and why you do what you do.
There are a ton of authors, and I recommend checking out their other work too: @jbcolli2, @Am2an7da9, J. Hart, K. Haymaker, A. Hoofnagle, @mkjanssen, J. Kelly, A. Mohr, and @JessicaOShaugh3.
Hm, I never defined mastery-based testing. The key feature of MBT is that it applies mastery grading ideas *only* to tests/quizzes. From that, you can calculate an "exam %" that fits into an otherwise traditionally graded class (alongside homework, etc.).
This makes MBT especially nice as a first step into mastery grading: You can "bolt it on" to an existing class without having to rework all of your assessments.
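For concreteness, here's a minimal sketch (my own, not from the paper) of how that "exam %" might be computed. The concept names, mastery records, and grade weights are all placeholder assumptions:

```python
# Hypothetical sketch of an MBT "exam %": credit is all-or-nothing per concept.
# Concept names and mastery data below are made-up placeholders, not from the paper.

concepts = ["limits", "derivatives", "chain rule", "optimization", "integrals"]

# True means the student demonstrated full mastery on some attempt;
# there is no partial credit to record.
mastered = {
    "limits": True,
    "derivatives": True,
    "chain rule": False,
    "optimization": True,
    "integrals": False,
}

def exam_percent(concepts, mastered):
    """Exam score = fraction of concepts fully mastered, as a percentage."""
    return 100 * sum(mastered[c] for c in concepts) / len(concepts)

# The exam % can then drop into a traditional weighted grade scheme,
# e.g. 70% exams + 30% homework (weights and homework score are arbitrary here).
homework_percent = 85
course_grade = 0.7 * exam_percent(concepts, mastered) + 0.3 * homework_percent
print(exam_percent(concepts, mastered), course_grade)
```

The point of the sketch is just the bolt-on structure: the mastery bookkeeping stays inside the exam component, and everything else in the gradebook is untouched.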