A recent study examines how adding predictive auxiliary objectives affects learning in artificial neural networks. The results provide new insights into the function of the hippocampus, according to researchers from Columbia University and Google DeepMind.
The team developed a deep reinforcement learning (RL) model that not only learns to solve a task optimally but also predicts how the state of the environment will change as a result of its own actions. In experiments, the model with the predictive auxiliary objectives learned faster and required fewer training iterations than models without this additional task. Predictive learning proved especially helpful in preventing overfitting and representation collapse when computational resources were limited.
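To make the idea concrete, here is a minimal PyTorch sketch of such an agent, assuming a simple vector observation and discrete actions; the layer sizes, module names, and one-step prediction target are illustrative choices, not the study's exact architecture.

```python
import torch
import torch.nn as nn

class PredictiveAgent(nn.Module):
    """Sketch of an RL agent with a predictive auxiliary head.

    All layer sizes and module names here are illustrative assumptions,
    not the study's exact architecture.
    """

    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 128):
        super().__init__()
        # Shared encoder: a stand-in for the network's input module.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        # Standard RL heads: action logits and a state-value estimate.
        self.policy_head = nn.Linear(hidden_dim, n_actions)
        self.value_head = nn.Linear(hidden_dim, 1)
        # Prediction module: maps (current latent state, action) to a
        # guess at the next latent state.
        self.predictor = nn.Sequential(
            nn.Linear(hidden_dim + n_actions, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, obs: torch.Tensor):
        z = self.encoder(obs)
        return self.policy_head(z), self.value_head(z), z


def predictive_aux_loss(agent, obs, action_onehot, next_obs):
    """Mean-squared error between predicted and actual next latent state."""
    _, _, z = agent(obs)
    pred_next = agent.predictor(torch.cat([z, action_onehot], dim=-1))
    with torch.no_grad():  # use the next observation's encoding as the target
        _, _, z_next = agent(next_obs)
    return nn.functional.mse_loss(pred_next, z_next)
```

During training, this auxiliary term would typically be added to the usual policy and value losses with a weighting coefficient, so the shared encoder receives gradients from both the task and the prediction objective.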
The further into the future the predictions reached, the better the model's learned representations captured the global structure of the environment. This made it easier for the model to adapt to new goals in similar environments without retraining.
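Under the same assumptions as the sketch above, extending the objective to a longer horizon could look like the following: the predictor is rolled forward in latent space for several steps, which pushes the representation to encode more of the environment's global layout. The `multistep_loss` name and default `horizon` are hypothetical.

```python
import torch
import torch.nn as nn

def multistep_loss(agent, obs_seq, action_seq, horizon=5):
    """k-step variant of the auxiliary objective (continues the
    PredictiveAgent sketch above): roll the predictor forward `horizon`
    steps in latent space and compare each step against the encoding of
    the actually observed state.

    obs_seq:    sequence of horizon + 1 observation batches
    action_seq: sequence of horizon one-hot action batches
    """
    _, _, z = agent(obs_seq[0])
    loss = 0.0
    for t in range(horizon):
        # Predict the next latent state from the current one and the action.
        z = agent.predictor(torch.cat([z, action_seq[t]], dim=-1))
        with torch.no_grad():  # target: encoding of the real next observation
            _, _, z_target = agent(obs_seq[t + 1])
        loss = loss + nn.functional.mse_loss(z, z_target)
    return loss / horizon
```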
Prediction module resembles animal hippocampus
Interestingly, the activity patterns in the prediction module of the artificial neural network resembled those in the animal hippocampus: the network developed place fields, units that became preferentially active at specific positions in the virtual space. Reward learning in other parts of the network influenced how these place fields were distributed.
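Place fields like these are typically identified by mapping each unit's average activity across locations. Below is a small NumPy sketch of that standard analysis; the binning scheme and function name are assumptions, not taken from the paper.

```python
import numpy as np

def place_field_maps(activations, positions, grid_size=20):
    """Estimate a spatial tuning map per unit: the mean activation of
    each unit within each bin of the 2D environment.

    activations: (T, n_units) unit activity over T timesteps
    positions:   (T, 2) x/y coordinates, normalized to [0, 1)
    returns:     (n_units, grid_size, grid_size) array of tuning maps
    """
    bins = np.clip((positions * grid_size).astype(int), 0, grid_size - 1)
    flat = bins[:, 0] * grid_size + bins[:, 1]    # linear bin index per step
    n_bins = grid_size * grid_size
    sums = np.zeros((n_bins, activations.shape[1]))
    counts = np.zeros(n_bins)
    np.add.at(sums, flat, activations)            # accumulate activity per bin
    np.add.at(counts, flat, 1)                    # count visits per bin
    maps = sums / np.maximum(counts, 1)[:, None]  # mean activation per bin
    return maps.T.reshape(-1, grid_size, grid_size)
```

A unit with a place field appears as a localized bump in its map; a shift in where these bumps cluster after reward learning would correspond to the changed place-field distribution described above.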
The research team also observed learning-related changes in the network's input module, which is functionally analogous to sensory brain areas such as the visual cortex: its units came to respond more selectively to rewarded visual stimuli, mirroring reward-related tuning changes in visual cortex neurons.
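One simple way to quantify such selectivity is a contrast index between mean responses to rewarded and unrewarded stimuli. The sketch below shows one common convention, not necessarily the study's exact metric.

```python
import numpy as np

def reward_selectivity(act_rewarded, act_unrewarded):
    """Per-unit selectivity index in [-1, 1]; positive values indicate a
    stronger mean response to rewarded stimuli. Assumes nonnegative
    activations (e.g., after a ReLU).

    act_rewarded:   (n_trials_r, n_units) responses to rewarded stimuli
    act_unrewarded: (n_trials_u, n_units) responses to unrewarded stimuli
    """
    mu_r = act_rewarded.mean(axis=0)
    mu_u = act_unrewarded.mean(axis=0)
    return (mu_r - mu_u) / (mu_r + mu_u + 1e-8)  # avoid division by zero
```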
The scientists conclude that predictive learning could be a key mechanism by which the hippocampus supplies usefully structured representations to other brain areas. On this account, the hippocampus does not even have to directly generate action sequences or support planning processes to be useful.
Deep RL systems as a model for interacting brain regions
According to the team, the study demonstrates how deep RL systems can serve as a model for how different brain regions interact. The researchers plan to expand this approach in future work.
They intend to study more complex tasks and additional auxiliary learning objectives, and to explore the effects of feedback between the network modules in order to make the models even more biologically realistic.
The results shed new light on the role of the hippocampus in learning and problem-solving. They could also inspire new approaches to more efficient machine learning and more flexible artificial intelligence.