It’s that time of year! Time to brine the turkey and review to what extent DC schools are rated based on the demographics of students they serve. Here we go!
Schools with larger percentages of “at-risk” students (no, I did not make up this term) generally have lower ratings than schools serving few at-risk students. This was true this year and last. I don’t see big differences in the trends, although some dots moved around.
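The downward trend described above can be sketched with a quick correlation check. This is a hypothetical illustration, not the actual STAR data: the school numbers below are invented.

```python
# Hypothetical sketch: checking for a downward trend between a school's
# STAR rating and its share of "at-risk" students using a Pearson
# correlation. All numbers here are invented for illustration.
import numpy as np

at_risk_pct = np.array([10, 25, 40, 60, 80])   # % "at-risk" per school
star_rating = np.array([5, 4, 4, 2, 1])        # 1-5 STAR rating

r = np.corrcoef(at_risk_pct, star_rating)[0, 1]
print(round(r, 2))  # strongly negative, consistent with a downward trend
```

A strongly negative correlation like this is what the scatterplots in the thread are showing visually.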
Same thing for special education: a downward trend between school ratings and the percentage of special education students served.
Why are we basing school quality on student demographics? Good question. I’m optimistic that’s not the intent, but schools’ scores mostly come from either proficiency levels or growth metrics on the PARCC test. Proficiency levels are strongly related to family income, as shown 👇
At least one of our growth metrics, the median growth percentile, also appears to be downwardly biased for higher poverty schools. I don’t fully trust it for this reason.
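For readers unfamiliar with the metric: a median growth percentile ranks each student's current score against "academic peers" with similar prior scores, then takes the school-level median. A minimal sketch of that idea, with entirely hypothetical column names and data:

```python
# Hypothetical sketch of a median growth percentile (MGP). Students are
# grouped by prior-year score ("academic peers"), each student's current
# score is converted to a percentile rank within that group, and a
# school's MGP is the median of its students' percentiles.
# Column names and data are invented for illustration.
import pandas as pd

def school_mgp(df: pd.DataFrame) -> pd.Series:
    """df columns (hypothetical): school, prior_score, current_score."""
    df = df.copy()
    # Percentile rank among peers with the same prior score.
    df["sgp"] = df.groupby("prior_score")["current_score"].rank(pct=True) * 100
    # A school's MGP is the median of its students' growth percentiles.
    return df.groupby("school")["sgp"].median()
```

Note that a real MGP model conditions on prior scores more carefully (e.g., via quantile regression), so this is only the intuition, not the actual calculation.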
If schools serving larger percentages of “at-risk” students really have lower growth on average, then why doesn’t the same downward trend appear when looking only at the performance of the “at-risk” student subgroup?
In other words, it appears that there may not be enough controls for contextual factors in calculating those scores.
Okay, back to the school ratings. School ratings track onto schools’ proficiency levels and growth metrics, with some variation, so changes in either of these metrics will result in changes to a school’s STAR rating.
Proficiency levels have remained pretty stable over time, which makes sense. There is more variation in school growth from one year to the next, which calls into question how much growth is picking up real changes in school quality versus noise/error.
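One rough way to frame that stability question: a metric that measures something real about a school should correlate with itself from one year to the next, while a noisy metric won't. A hedged sketch with invented numbers:

```python
# Hedged sketch: gauging whether a school-level metric reflects stable
# school quality or mostly noise via its year-over-year correlation.
# The data below are invented for illustration.
import numpy as np

def year_over_year_r(year1, year2):
    """Pearson correlation of the same school-level metric across two years."""
    return float(np.corrcoef(year1, year2)[0, 1])

# Proficiency tends to be stable, so its correlation is high;
# a noisier growth metric would show a much lower value.
proficiency_2018 = [30, 45, 60, 75, 90]
proficiency_2019 = [32, 44, 62, 73, 91]
print(round(year_over_year_r(proficiency_2018, proficiency_2019), 2))
```

A low year-over-year correlation for growth wouldn't prove the metric is broken, but it would be consistent with the noise concern raised above.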
The good(?) news is that changes in school ratings from last year to this year don’t appear to be directly explained by changes in student demographics.
I’d make note of schools that had a big shift in demographics; you can see those here.
Also important to recognize which schools don’t get ratings: schools that have not yet been open long enough are missing ratings, as are schools that serve only PK children.
Finally, there were also some changes to the underlying metric data available in the “School Report Card” tab. Notably, suspensions and mid-year mobility were included in the 2018 file but not in the 2019 file. But here are those data for 2018.
So are DC schools improving? Final answer: I honestly don’t know. You would need more information and context to really figure that out.
But more to the point. These scores don't necessarily tell us what's going on in a school building. They tell us more about who the school is serving. And there's something wrong with that.
CORRECTION: The data on suspensions, mid-year mobility, and teacher experience are there, just moved to a different tab, “School Report Card Only Metrics.”
You can follow @betsyjwolf.