Last week, I gave an overview of imitation learning in our weekly meeting. The goal in imitation learning is to train artificial agents to solve problems by watching demonstrations of experts solving those problems.
For example, you can train a self-driving car to make the same steering and acceleration decisions as a human driver (@deanpomerleau 1989).
Or you can train bots to play your video games for you (Ross et al. 2011),
or robots to put away your dishes (@chelseabfinn et al. 2016),
or robot friends to play ping pong with you (Muelling et al. 2013).
Many neuroscientists are interested in training artificial agents using reinforcement learning and comparing them to animals, e.g., https://arxiv.org/abs/2007.03750. Imitation learning is another paradigm to consider.
Imitation learning makes a lot of sense if you don't know what the reward function is (what is fly/mouse behavior optimal for anyways?), or don't have a good simulation model, or want your artificial agent to behave exactly like the real animal.
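For a concrete sense of the simplest version of this idea (behavioral cloning, as in the self-driving example above), here is a minimal sketch, not from the thread itself: treat imitation as supervised learning on recorded expert state-action pairs. The network size, data dimensions, and hyperparameters are placeholders chosen only for illustration.

```python
# Minimal behavioral-cloning sketch (illustrative only): fit a policy to
# expert state-action pairs with plain supervised learning.
import torch
import torch.nn as nn

# Placeholder "expert demonstrations": states (e.g. sensor readings) and the
# expert's continuous actions (e.g. steering, acceleration). In practice these
# would come from recorded demonstrations, not random numbers.
states = torch.randn(1024, 8)    # 1024 demo steps, 8-dim observations
actions = torch.randn(1024, 2)   # matching 2-dim expert actions

policy = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(policy(states), actions)  # match the expert
    loss.backward()
    optimizer.step()

# At deployment, the learned policy maps new states to actions directly.
new_state = torch.randn(1, 8)
predicted_action = policy(new_state)
```

Note that the Ross et al. 2011 reference above (DAgger) was developed to address a weakness of this naive approach: small prediction errors push the agent into states the expert never visited, so DAgger iteratively queries the expert on the states the learned policy itself reaches.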