1/ I'm often confronted with skepticism that neural network models of the brain are intelligible, or that they're even proper models at all, considering how "different they look" from real brains.
2/ Sometimes the best approach to a tough science question is to step back and take a philosophical viewpoint. Together with my amazing colleague @luosha (an actual professional philosopher), that's what we've done in two new companion papers.
3/ Part I ( https://arxiv.org/abs/2104.01490 ) is about mechanisms. Philosophers have argued that mechanistic models should map onto an explanandum's parts, mirror their internal organization, and reproduce their activities. Do NN models fit this model-mechanism-mapping ("3M") framework?
4/ We argue that NNs can be meaningfully mechanistic models -- given some innovations in understanding "mapping". We propose "3M++", an expanded framework for assessing brain models that makes explicit the role of abstraction in both computational and experimental neuroscience.
5/ First, NN models require some amount of abstraction, and shouldn't be forced to use the most detailed possible level of description. But the abstraction must be sufficiently fine-grained to produce the key behaviors of the target system. Too coarse an abstraction won't be "runnable".
6/ Second, no two individuals' brains are identical. Whatever mapping transform is needed to judge a given brain area as "similar" across animals, yet distinct from other brain areas within any single animal -- that same transform is what we should use to map models to brain data.
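To make the transform-similarity idea concrete, here's a minimal sketch in Python. It uses synthetic arrays in place of real recordings, and ridge regression as the mapping transform; all names and choices below are illustrative assumptions, not the papers' actual methodology.

```python
# Minimal sketch of "transform similarity", assuming synthetic data in place
# of real recordings. The same class of mapping transform (here, ridge
# regression) aligns one animal's responses to another's, and a model's
# features to an animal's responses -- so the two scores are comparable.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_units = 500, 100

# Hypothetical stand-ins: two animals and one model responding to shared stimuli.
animal_a = rng.normal(size=(n_stimuli, n_units))
animal_b = 0.8 * animal_a @ rng.normal(size=(n_units, n_units)) / np.sqrt(n_units) \
           + 0.2 * rng.normal(size=(n_stimuli, n_units))
model_features = rng.normal(size=(n_stimuli, 256))

def mapped_similarity(source, target, alpha=1.0):
    """Fit the mapping transform on a train split; score generalization on test."""
    Xtr, Xte, ytr, yte = train_test_split(source, target, test_size=0.25, random_state=0)
    pred = Ridge(alpha=alpha).fit(Xtr, ytr).predict(Xte)
    per_unit_r = [np.corrcoef(pred[:, i], yte[:, i])[0, 1] for i in range(target.shape[1])]
    return np.median(per_unit_r)

# The animal-to-animal score sets the bar that the model's mapped similarity
# is judged against, under the very same transform class.
print("animal A -> animal B:", mapped_similarity(animal_a, animal_b))
print("model    -> animal B:", mapped_similarity(model_features, animal_b))
```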
7/ With these two concepts (runnability and transform similarity) in mind, we show that some neural network models are actually pretty good as mechanistic models of certain brain systems.
8/ Of course, NN models are far from perfect. But we think the 3M++ framework's concepts will transcend the specifics of today's NNs, and helpfully apply to improved model classes of the future (with different, and perhaps better, abstractions).
9/ Part II ( https://arxiv.org/abs/2104.01489 ) is about intelligibility. We argue that in certain circumstances deep NN models are intelligible (and not just "black boxes") -- when intelligibility is understood in evolutionary, constraint-based terms.
10/ We argue that task-optimized NN models -- those optimized for a meaningful high-level goal, with good neural predictivity arising as a by-product -- are "intelligible", or at least more intelligible than models that must be fit to the neural data directly.
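Here's a schematic contrast of the two regimes, again as a hedged sketch: the data, the architecture, and the synthetic "neural" target are all placeholder assumptions, meant only to show where the fitting happens in each case.

```python
# Regime 1 fits model parameters to neural data directly; regime 2 optimizes
# the model only for a task, then measures neural predictivity post hoc via
# a fixed linear readout of its internal features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 32))                   # stimuli
labels = (X[:, :4].sum(axis=1) > 0).astype(int)   # a high-level "task"
neural = np.tanh(X @ rng.normal(size=(32, 50)))   # stand-in recordings

Xtr, Xte, ytr, yte, ntr, nte = train_test_split(X, labels, neural, random_state=0)

# Regime 1: parameters are fit to the neural data directly.
direct = Ridge().fit(Xtr, ntr)
print("directly fit:   ", direct.score(Xte, nte))

# Regime 2: the model never sees the neural data during training; only the
# separate linear readout of its hidden features is fit to it afterwards.
task_model = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000,
                           random_state=0).fit(Xtr, ytr)
hidden_tr = np.maximum(0, Xtr @ task_model.coefs_[0] + task_model.intercepts_[0])
hidden_te = np.maximum(0, Xte @ task_model.coefs_[0] + task_model.intercepts_[0])
readout = Ridge().fit(hidden_tr, ntr)
print("task-optimized: ", readout.score(hidden_te, nte))
```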
11/ When task-optimized NN models match neural data, this is probably not a miracle: it is likely because the constraints on both the model and the real organism have led to convergent evolution.
12/ We thus argue for a "contravariance" principle: the harder the constraint, the smaller the set of mechanisms that can satisfy it, and thus the more likely any two solving mechanisms (whether biological or artificial) are to be similar in key ways.
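A toy numerical illustration of contravariance, under invented assumptions: "mechanisms" are random linear maps, and the "constraint" is reproducing a target behavior to within an error tolerance. Tightening the tolerance should leave fewer, and more mutually similar, solvers.

```python
# Toy illustration of the contravariance principle. Everything here is an
# invented assumption: mechanisms are random linear maps, and the constraint
# is matching a target behavior to within a mean-squared-error tolerance.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))    # stimuli
w_true = rng.normal(size=10)
y = X @ w_true                    # target behavior

def solvers_within(tolerance, n_candidates=20000):
    """Sample candidate mechanisms; keep those satisfying the constraint."""
    W = rng.normal(size=(n_candidates, 10))
    err = np.mean((X @ W.T - y[:, None]) ** 2, axis=0)
    return W[err < tolerance]

def mean_pairwise_cosine(W, cap=200):
    """Average pairwise similarity among (at most `cap`) solving mechanisms."""
    W = W[:cap] / np.linalg.norm(W[:cap], axis=1, keepdims=True)
    sims = W @ W.T
    return sims[np.triu_indices(len(W), k=1)].mean()

for tol in [20.0, 12.0, 8.0]:     # harder constraint = lower tolerance
    solvers = solvers_within(tol)
    if len(solvers) > 1:
        print(f"tolerance {tol}: {len(solvers):5d} solvers, "
              f"mean similarity {mean_pairwise_cosine(solvers):.2f}")
```

As the tolerance tightens, the surviving mechanisms cluster around the target and their pairwise similarity rises -- the contravariance intuition in miniature.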