We touched on a lot of interesting subjects at the great Salon yesterday.
But I would like to dig into one where I think we failed to communicate: what it means to be "close" in genetic space, and why I think it's relevant. (1/n)
@criticalneuro @neuro_data @MelMitchell1
Suppose I am trying to solve, e.g., a prediction task, where I take inputs from a set X and try to predict Y. Maybe X is a set of images, and Y is a set of labels.
Now let's say that I try to solve it with my favorite algorithm, and fail. (2/n)
I take it to one of you, and you say, "Oh, I see the problem: you just need to pre-process your images with this edge detector" or "You just need to use weight momentum to make your network converge". (3/n)
And sure enough, I change a few lines of code (out of many thousands), and it works. Yay, thanks!
So I would say that I was "close" to solving the problem, and that a small change (just a few lines of code) was enough to fix it. Right? (4/n)
So in this example, I think we can agree on what "being close" to a solution means. In one case we can summarize the tweak at a pretty high level ("use an edge detector"), whereas in the other case there was a small, harder-to-interpret tweak ("use weight momentum"). (5/n)
But in both cases, we can quantify (or at least put a bound on) how close I was to a solution by how many lines of code I had to change. Right? (6/n)
Let's consider how this relates to animals. Say we have species A that can't solve "cognitive" problem X, and species B that can. Now imagine that species A and B are closely related. What this means is that their genomes don't differ very much. (7/n)
In fact, I can quantify exactly how close they are by measuring exactly how similar their genomes are (in units of nucleotides). Since the genome is just a string of a few billion nucleotides, and there are 4 possible nucleotides (ATGC), distance in nucleotides translates directly into bits (each nucleotide carries 2 bits). (8/n)
So we can say that species A is only some number K of bits from being able to solve the problem. In fact, I could in principle determine the minimum number of changes I'd need to make to the genome of species A in order to make it solve problem X. (9/n)
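A toy sketch of that bit count (my own illustration, not from the thread): assuming two genomes already aligned to equal length, count the differing nucleotides and multiply by log2(4) = 2 bits per site. Real genome comparisons would need sequence alignment and handling of insertions/deletions, which this deliberately ignores.

```python
def genome_distance_bits(genome_a: str, genome_b: str) -> int:
    """Hamming distance between two aligned, equal-length nucleotide
    strings, converted to bits (4 possible nucleotides -> 2 bits/site)."""
    if len(genome_a) != len(genome_b):
        raise ValueError("genomes must be aligned to equal length")
    mismatches = sum(a != b for a, b in zip(genome_a, genome_b))
    return mismatches * 2  # 2 bits per differing nucleotide

# toy "genomes" differing at 2 of 8 positions -> 4 bits apart
print(genome_distance_bits("ATGCATGC", "ATGAATGG"))  # -> 4
```

This is an upper bound in the thread's sense: the minimum number of edits needed to turn genome A into a problem-X-solver can only be smaller than its full distance to genome B.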
Of course, you might find this unsatisfying, and I'd agree with you. Knowing that A is close to B doesn't necessarily tell you much about *how* species B solves it, or what is different between species A and B. (10/n)
For example, it could be something pretty simple, like: problem X requires patience, and species B is less impulsive than species A. Or maybe: problem X requires strong auditory-visual integration, which species A doesn't do. (11/n)
So knowing that B but not A can solve it doesn't necessarily tell you what to do next, but it does reassure you that you're on the right track, and it greatly limits the space of things you might consider, since the diffs between the two species must be pretty simple (in units of genes). (12/n)
And that is why I argue that having a system with mouse intelligence is the way to go. To clarify: I'm not asking for a system that can merely mimic what a mouse does, but rather one that at some deep level does it the "same" way. (13/n)
Unfortunately, I can't tell you ahead of time what I mean by "same way". I do believe that using a neural network is "more similar" than trying to achieve mouse intelligence with symbolic AI. But I only have vague hypotheses about what else is missing from current ANNs. (14/14)