Interestingly, just before reading this I was reading a great piece from the Fordham Institute about evidence-based practices.
Let’s talk about educational technology research and “media comparison studies.” (thread) https://twitter.com/mfeldstein67/status/1225406509498208261
For many years, the focus of educational technology research was to apply the standards suggested in this article: randomized assignment of participants to conditions where the variable was the technology (is it present or not).
This resulted in hundreds (and I mean hundreds) of studies that all shake out to what has been termed the “no significant difference phenomenon.” Some of these studies showed tech had a positive influence, some showed it had a negative influence, but those were the tails of the curve, and those studies had clear methodological issues. The large bulk in the middle showed no significant difference, time and again. Thomas Russell (2001) catalogued these on the “No Significant Difference Phenomenon” website and wrote a book of that title.
The large pile of evidence forced the research community to step back and contend with this data and how to interpret it. I could write a separate thread on various interpretations alone (and maybe I should, or maybe EdSurge should pay *me* to write some articles).
But let’s go with Richard Clark here – in 1994 (26 years ago!), Clark advanced the argument that these sorts of studies were methodologically flawed because they attribute outcomes to differences in media rather than differences in method (or instructional decisions).
Well before then, in 1983, Clark was starting to dismantle the media comparison methodology (see Clark, 1983, “Reconsidering research on learning from media,” _Review of Educational Research_, 53(4), 445-459). Today (other aspects of Clark’s argument aside – Jim!),
this view of media comparison studies methodologies remains a steadfast fixture in the educational technology research community. Ranked journals, one of which I edit, will reject media comparison studies, and we discourage students from designing such studies in doc programs.
Some good articles on this I would recommend:
Lockee, B., Moore, M., and Burton, J. (2001). Old concerns with new distance education research. Educause Quarterly, 60-62. (Yes, that citation is an APA mess, but you can still locate the article with that info)
Lockee, B., Moore, M., and Burton, J. (2002). Measuring success: Evaluation strategies for distance education. Educause Quarterly, 20-26. (Someone at VA Tech let Barb know I’m giving her major shoutouts :-D)
Some major meta-analyses:
Bernard, Abrami, Lou, Borokhovski, Wade, Wozney, Wallet, Fiset & Wong (2004);
U.S. Department of Education (2010);
Zhao, Lei, Yan, Lai & Tan (2005)
Mayer (2011) proposed that research focus on three questions:
What works? (Does an *instructional method* cause learning?)
When does it work? (How do contextual variables influence what works?)
How does it work? (What learning processes determine the effectiveness of the method?)
These are not questions about tools. They are questions about instructional decisions and contextual variables. The second question reflects the “messiness” of educational / learning contexts that often frustrates research (especially if the gold standard is the RCT).
That is not to say we cannot investigate tools, but it is to say that it’s not the tool that’s the variable. I continually hearken back to Quintilian's _Institutes of Oratory_, specifically Book X where he talks about the new technology – writing – and its role in the academy.
He explores what writing is good for, what its limitations are, and therefore what place it has in the curriculum. He was not concerned about which pen or parchment was best, but rather framed writing as a strategy (a method) and queried its utility that way.
I’m sorry-not-sorry to frustrate folks who want the simple answer of which pen or parchment to pick. But we have been on the research journey, and it suggests to us we should be asking better questions of and around technology.
Anyone digging for the type of research this article suggests won't find it. Not from educational technology researchers, anyway (you will find it from those in other fields thinking they're inventing our field and repeating some of these hard lessons learned).
All that said, we the research community DO need to do a better job framing our research in terms focused on actual needs and problems of practice. But that's another thread (actually a call to action I'll be sending out in our journal soon).
Fin. (Maybe I have “Twitterized” this summary too much – colleagues, do feel free to add nuance where you think it’s important. Also, perhaps I'm being hard on this particular article, but I still think it warrants this elaboration.)