I'm skeptical of claims of 'underlying complexity'. Often that boils down to bad descriptions. But even if we grant that claim, we can turn to Gell-Mann: "What is the point of studying a complex system that we don’t understand by making a computer model that we don’t understand?" https://twitter.com/dbasanta/status/1297309127807467521
Of course, @dbasanta might respond to Gell-Mann by saying that as the #mathonco community we really _do_ understand these complex computer models. But I'm not sure if that would convince @hilseth_mistrov (or me): https://twitter.com/hilseth_mistrov/status/1297447008530309121
Does that mean I am 'against' complex agent-based models? I've discussed this at length with @PatrickEllswo15 & decided that I am not.

ABMs can be a good way to expand the imagination, especially in cooperative settings. And an expanding imagination is important for new science.
In this way, I view complex ABMs as a kind of science fiction. You can of course view minimal mathematical models as fictions, too. But for most people, a minimal math model is like reading sci-fi in a foreign language: you spend more time translating than expanding your imagination.
The issue with both is when we overclaim: suggesting our models aren't just ways to expand our imaginations, but are 'predictive', 'diagnostic', 'translational', or some other grant-worthy buzzword.

And we do often overclaim, and not always in settings with a safe suspension of disbelief.
This thread, deconstructing a 'digital pregnancy test' to reveal an optical reader attached to a normal paper-strip pregnancy test, can be an allegory for so many things:

https://twitter.com/Foone/status/1301707401024827392

In my case, I think I will use it as an allegory for overly complex agent-based models.