A recent study examines how adding predictive auxiliary objectives affects learning in artificial neural networks. The results provide new insights into the function of the hippocampus, according to researchers from Columbia University and Google DeepMind.

The team developed a deep reinforcement learning (RL) model that not only learns to solve a task but also predicts how the state of the environment will change as a result of its own actions. In experiments, the model with the predictive auxiliary objective learned faster and required fewer training iterations than models without this additional task. Predictive learning was particularly effective at preventing overfitting and a collapse of the learned representations when computational resources were limited.
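
To make the setup concrete, here is a minimal sketch of such an architecture, assuming a standard PyTorch actor-critic: a shared encoder feeds both the RL heads and an auxiliary predictor that forecasts the next latent state from the current latent state and the chosen action. The class name, layer sizes, and the aux_weight parameter are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictiveAgent(nn.Module):
    """Sketch of an actor-critic agent with a predictive auxiliary head
    (illustrative; not the authors' code)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)  # RL: action logits
        self.value_head = nn.Linear(hidden, 1)           # RL: state value
        # Auxiliary "prediction module": next latent from current latent + action
        self.predictor = nn.Sequential(
            nn.Linear(hidden + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )

    def forward(self, obs: torch.Tensor):
        z = self.encoder(obs)
        return self.policy_head(z), self.value_head(z), z

def combined_loss(agent, obs, action_onehot, next_obs, rl_loss, aux_weight=1.0):
    """Total objective = ordinary RL loss + weighted next-state prediction loss."""
    _, _, z = agent(obs)
    with torch.no_grad():                  # target: latent code of the next state
        z_next = agent.encoder(next_obs)
    z_pred = agent.predictor(torch.cat([z, action_onehot], dim=-1))
    return rl_loss + aux_weight * F.mse_loss(z_pred, z_next)
```

The key design choice is that the auxiliary gradient flows back into the shared encoder, so the prediction task shapes the very representations the RL heads consume.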

The further the predictions reached into the future, the better the learned representations in the model captured the global structure of the environment. This made it easier for the model to adapt to new goals in similar environments without being retrained.
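
A hedged sketch of how a longer prediction horizon could be wired into the agent above: unroll the predictor for several steps in latent space and penalize each intermediate prediction against the encoding of the corresponding future observation. The function name and tensor layout are again illustrative assumptions.

```python
def multistep_aux_loss(agent, z0, actions_onehot, future_obs, horizon):
    """Unroll the predictor `horizon` steps in latent space.
    actions_onehot: (horizon, B, n_actions); future_obs: (horizon, B, obs_dim).
    Longer horizons force the latent code to reflect more of the
    environment's global structure, not just the next step."""
    loss, z = 0.0, z0
    for k in range(horizon):
        z = agent.predictor(torch.cat([z, actions_onehot[k]], dim=-1))
        with torch.no_grad():
            z_target = agent.encoder(future_obs[k])
        loss = loss + F.mse_loss(z, z_target)
    return loss / horizon
```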

Prediction module resembles animal hippocampus

Interestingly, the activity patterns in the prediction module of the artificial neural network resembled those in the hippocampus of animals: so-called place fields formed, units that became preferentially active at specific positions in the virtual space. Reward learning in other network areas influenced how these place fields were distributed.
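
The analysis behind such a finding can be sketched as follows: record the prediction module's unit activations while the agent moves through the environment, bin them by position, and look for units whose average activity peaks in one localized region. The NumPy sketch below uses an assumed grid size and normalized coordinates for illustration.

```python
import numpy as np

def rate_maps(activations, positions, grid=20):
    """Average each unit's activation per spatial bin ('rate map').
    activations: (T, n_units); positions: (T, 2) normalized to [0, 1).
    A place-field-like unit shows a single localized peak in its map."""
    n_units = activations.shape[1]
    maps = np.zeros((n_units, grid, grid))
    counts = np.zeros((grid, grid))
    ix = np.clip((positions[:, 0] * grid).astype(int), 0, grid - 1)
    iy = np.clip((positions[:, 1] * grid).astype(int), 0, grid - 1)
    for t in range(activations.shape[0]):
        maps[:, ix[t], iy[t]] += activations[t]
        counts[ix[t], iy[t]] += 1
    return maps / np.maximum(counts, 1)  # avoid dividing by empty bins
```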

The research team also observed learning-related changes in the network's input module, which plays a role comparable to sensory brain areas: units of the network became more selective for rewarded visual stimuli, mirroring an effect reported for neurons in the visual cortex.
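
Such selectivity is often quantified with a simple contrast index between mean responses to rewarded and unrewarded stimuli. The sketch below shows one common way to compute it; it is not necessarily the exact measure used in the study.

```python
import numpy as np

def reward_selectivity(resp_rewarded, resp_unrewarded, eps=1e-8):
    """Per-unit selectivity index in [-1, 1]:
    +1 = responds only to rewarded stimuli, 0 = no preference.
    resp_*: (n_trials, n_units) arrays of non-negative activations."""
    r = resp_rewarded.mean(axis=0)
    u = resp_unrewarded.mean(axis=0)
    return (r - u) / (r + u + eps)
```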

The scientists conclude that predictive learning could be a key mechanism by which the hippocampus supplies usefully structured representations to other brain areas. On this account, the hippocampus would not even need to generate action sequences or support planning processes directly.

Deep RL systems as a model for interacting brain regions

According to the team, the study demonstrates how deep RL systems can serve as a model for the interaction of different brain regions. In the future, the researchers want to further expand this approach.

They plan to conduct studies with more complex tasks and additional auxiliary objectives that support learning. They also intend to explore the effects of feedback between the network modules, to make the models even more biologically realistic.

The results shed new light on the role of the hippocampus in learning and problem-solving. They could also inspire new approaches for more efficient machine learning and more flexible artificial intelligence.

Summary
  • Columbia University and Google DeepMind researchers developed a deep reinforcement learning model that learns to solve tasks while also predicting future changes in the environment based on its own actions. The model with predictive auxiliary objectives learned faster and required fewer training iterations compared to models without this additional task.
  • The activity patterns in the model's prediction module resembled those in the hippocampus of animals, with place fields forming that became active at specific positions in the virtual space. The model's input module also showed learning-related changes, with units responding more selectively to rewarded visual stimuli, similar to neurons in the visual cortex.
  • The study suggests that predictive learning could be a key mechanism by which the hippocampus provides structured representations for other brain areas. The researchers plan to conduct further studies with more complex tasks and additional learning-supportive goals to make the models more biologically realistic and potentially inspire new approaches for efficient machine learning and flexible AI.