Hey, communication theorist and frequent peer reviewer here.

If you're going to use a fancy automated computer program to run an analysis on content, please be sure that you have COMPLETELY UNPACKED how it works, what the classifiers are and how they were chosen...
Developing a "highly reliable" tool means nothing to Danna.

Of course it's reliable. It's a goddamned computer program.
If you train it to categorize the word "the" as "a vegetable" ... it will be RELIABLE AND will tell you there are THOUSANDS OF VEGGIE REFERENCES IN EVERY PIECE OF CONTENT YOU EXAMINE.

"American News Media Has Vegetable Fetish," Computational Social Scientist concludes.
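The "the = vegetable" point is exactly the reliability-vs-validity distinction, and it's easy to demonstrate. A minimal sketch (hypothetical toy code, not any real content-analysis tool): a deterministic classifier with an absurd codebook is perfectly reliable — identical output on every run — while being completely invalid.

```python
# Toy illustration: a "content analysis" classifier whose codebook
# maps the word "the" to the category "vegetable".
from collections import Counter

CODEBOOK = {"the": "vegetable"}  # deliberately absurd coding rule

def classify(text):
    """Count category hits per the codebook. Deterministic by construction."""
    tokens = text.lower().split()
    return Counter(CODEBOOK[t] for t in tokens if t in CODEBOOK)

corpus = [
    "The senator said the bill would pass.",
    "The markets fell as the dollar rose.",
]

run1 = [classify(doc) for doc in corpus]
run2 = [classify(doc) for doc in corpus]

# Perfectly "reliable": the two runs agree 100% of the time...
assert run1 == run2
# ...and the tool duly reports a flood of vegetable references.
print(sum(run1, Counter()))  # Counter({'vegetable': 4})
```

Intercoder reliability statistics can't catch this: reliability only measures agreement, not whether the categories mean anything.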
Without a detailed understanding of WHY and HOW these coded constructs are being classified in this way, and what programming was involved in the first place, how do we assess the value of the work at all?
The more I see work like this the more I start to question how much time and attention doctoral programs are dedicating to the philosophy of science, epistemology, and the philosophy of meaning and perception.
Given the perceived need (due to social media) to analyze gigantic quantities of data, folks seem desperate to prioritize efficiency and volume over meaning.

I don't care that you analyzed a corpus of 10,502,203,302 WHATEVERS if I still have no idea what I'm looking at.
If I get the sense that you are prouder of your methods and your shiny new toy than you are about the question you are trying to answer and whether this new toy is even the appropriate way to answer it, I'mma conclude you're DOING SCIENCE WRONG.
You can follow @dannagal.