1. Overview
Neuroscience-Inspired Artificial Intelligence, a paper by DeepMind published in Neuron (a Cell Press journal)
Why is neuroscience important for AI?
- inspiration
- validation
But in practice, biological plausibility is just a guide, not a strict requirement.
What can we do? Marr and Poggio proposed three levels of analysis:
- top level: the goal of the system
- middle level: the processes and computations, also called the algorithmic level
- bottom level: how the system is physically implemented
What DeepMind does is focus on the top two levels of this analysis.
2. Some History
2.1. Deep Learning
The origins of deep learning lie directly in neuroscience.
Some key milestones:
- Construction of artificial neural networks (McCulloch and Pitts, 1943); later, other researchers added feedback to improve these networks
- The backpropagation algorithm allowed learning in networks with multiple layers (Werbos, 1974; Rumelhart et al., 1985); a minimal sketch follows below
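As a rough illustration (my own sketch, not code from the paper), here is backpropagation for a one-hidden-layer network on a toy task; the layer sizes, learning rate, and data are arbitrary assumptions.

```python
import numpy as np

# Minimal backpropagation sketch: one hidden layer, mean-squared-error loss.
# All shapes and hyperparameters below are arbitrary assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                            # 64 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # toy targets

W1 = rng.normal(scale=0.1, size=(3, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
lr = 0.1

for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1)                 # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2)))     # sigmoid output
    loss = np.mean((p - y) ** 2)

    # Backward pass: chain rule from the output back to each weight matrix
    dp = 2 * (p - y) / len(X)           # dL/dp
    dz2 = dp * p * (1 - p)              # through the sigmoid
    dW2 = h.T @ dz2
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)             # through the tanh
    dW1 = X.T @ dz1

    # Gradient descent update
    W1 -= lr * dW1
    W2 -= lr * dW2
```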
At that time, most AI researchers were focused on building logical processing systems based on serial computation, inspired by symbolic reasoning. Parallel distributed processing (PDP) was a very important approach then, and many of its ideas have had a sustained influence on AI research, for example in NLP and computer vision and in techniques such as dropout.
But stochastic, highly parallelized information processing makes purely symbolic methods impractical.
2.2. Reinforcement Learning
RL methods address the problem of how to maximize future reward by mapping states in the environment to actions, and they are among the most widely used tools in AI research (Sutton and Barto, 1998).
RL methods were originally inspired by research into animal learning. In particular, the development of temporal-difference (TD) methods was intertwined with research into animal behavior in conditioning experiments.
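As a rough illustration of the TD idea (my own sketch, not from the paper), here is the TD(0) value update on a toy chain environment; the environment, step size, and discount factor are assumptions chosen for illustration.

```python
# TD(0) value learning on a toy 5-state chain: the agent moves right until the
# terminal state, which yields reward 1. States, rewards, alpha, and gamma are
# all assumptions for this example.
n_states = 5
V = [0.0] * (n_states + 1)   # V[n_states] is the terminal state (value stays 0)
alpha, gamma = 0.1, 0.9

for episode in range(1000):
    s = 0
    while s < n_states:
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0
        # TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print([round(v, 3) for v in V[:n_states]])  # values grow as states get closer to the reward
```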
3. Nowadays
3.1. Attention
Biological brains are modular, with distinct but interacting subsystems supporting key functions such as memory, language, and cognitive control (Anderson et al., 2004; Shallice, 1988).
Most CNN models work directly on entire images or video frames, but the primate visual system works differently.
Visual attention shifts strategically among locations and objects, centering processing resources and representational coordinates on a series of regions in turn.
In practice, attention-based approaches were subsequently shown to produce impressive performance on difficult multi-object recognition tasks, in terms of both accuracy and computational efficiency.
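As a loose sketch of the underlying idea (not code from the paper), soft attention can be implemented by scoring a set of region features against a query and taking a weighted sum; the feature sizes and the dot-product scoring rule are assumptions for illustration.

```python
import numpy as np

def soft_attention(query, region_feats):
    """Weight region feature vectors by their relevance to a query.

    query:        (d,) vector, e.g. the state of a recurrent controller.
    region_feats: (n_regions, d) features, e.g. one vector per image location.
    Returns the attention weights and the attended (weighted-sum) feature.
    """
    scores = region_feats @ query                      # dot-product relevance scores
    scores = scores - scores.max()                     # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()    # softmax over regions
    context = weights @ region_feats                   # resources focus on relevant regions
    return weights, context

# Toy usage: 6 image regions with 4-dimensional features (arbitrary numbers).
rng = np.random.default_rng(0)
regions = rng.normal(size=(6, 4))
query = rng.normal(size=4)
w, ctx = soft_attention(query, regions)
print(w.round(3), ctx.round(3))
```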
3.2. Episodic Memory
A well-known theme in neuroscience: intelligent behavior relies on multiple memory systems
- reinforcement-based mechanisms: long-term memory
- instance-based mechanisms: episodic memory (a rough sketch follows below)
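As a loose illustration of an instance-based mechanism (my own sketch, not the paper's model), an episodic memory can store state embeddings with their observed returns and estimate the value of a new state from its nearest stored neighbors; the distance metric and the choice of k are assumptions.

```python
import numpy as np

class EpisodicMemory:
    """Instance-based memory: store (state embedding, return) pairs and
    estimate new values by averaging the k nearest stored experiences."""

    def __init__(self, k=3):
        self.k = k
        self.keys = []     # state embeddings
        self.values = []   # returns observed from those states

    def write(self, embedding, value):
        self.keys.append(np.asarray(embedding, dtype=float))
        self.values.append(float(value))

    def estimate(self, embedding):
        if not self.keys:
            return 0.0
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - np.asarray(embedding, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean(np.array(self.values)[nearest]))

# Toy usage with arbitrary 2-D "state embeddings".
mem = EpisodicMemory(k=2)
mem.write([0.0, 0.0], 1.0)
mem.write([1.0, 1.0], 0.0)
mem.write([0.1, 0.1], 1.0)
print(mem.estimate([0.05, 0.0]))  # near the rewarded experiences -> close to 1.0
```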