
Researchers show that there are some similarities in language processing between the human brain and OpenAI's GPT-2 language AI.

Language AIs like GPT-2 or, more recently, GPT-3 produce believable text without ever having learned the diverse and still incomplete rules that human linguists have developed over decades.

Instead, a language AI learns word completion: trained on huge amounts of text, it predicts the next word in a passage as accurately as possible from the context of the surrounding words.
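To make this concrete, here is a minimal sketch of next-word prediction with the publicly released GPT-2 weights, using the Hugging Face transformers library. This is our illustration, not the researchers' code; the example sentence and the top-5 readout are arbitrary choices.

```python
# Minimal next-word prediction with GPT-2 via Hugging Face transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

context = "The quick brown fox jumps over the lazy"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The logits at the last position score every vocabulary entry as the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top5 = torch.topk(probs, k=5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode(idx.item())!r:>10}  p={p:.3f}")
```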

These context-based language AIs leapfrog earlier hand-programmed language systems as well as vector-based systems without contextual evaluation, such as word2vec.
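The difference is easy to see in code: a word2vec-style model assigns each word one fixed vector, whereas GPT-2's internal representation of the same word changes with its context. A sketch under the same assumed transformers setup:

```python
# The same surface word ("bank") gets different GPT-2 hidden states in
# different contexts; a static word2vec embedding would be identical in both.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def last_token_state(text: str) -> torch.Tensor:
    ids = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**ids).last_hidden_state  # (1, seq_len, 768)
    return hidden[0, -1]  # representation of the final token

a = last_token_state("She deposited the cash at the bank")
b = last_token_state("They had a picnic on the river bank")

sim = torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"cosine similarity of 'bank' across contexts: {sim:.3f}")  # well below 1.0
```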

In a new experiment, neuroscientists have now shown that the human brain also consistently performs word prediction. Based on this finding, researchers speculate that humans learn language similarly to machines.

GPT-2 listens to podcasts

In one experiment, 50 people listened to a 30-minute podcast. As they listened, they were asked to predict each of the nearly 5,000 words spoken.

On average, the participants' prediction accuracy was just under 28 percent, a good result given the myriad possibilities. For about 600 words, the accuracy rate was over 70 percent. According to the researchers, this is the first time that human predictive ability has been measured in such an experiment.

In a comparative test with OpenAI's GPT-2, the neural network's predictive ability proved to be on par with that of the human participants.
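To illustrate what such a comparison involves, the sketch below scores GPT-2's top-1 next-token accuracy on a made-up snippet. Two caveats: it operates on tokens rather than whole words, and the study's actual scoring protocol is not reproduced here.

```python
# Top-1 next-token accuracy of GPT-2 over a text snippet (illustrative only).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "so there I was standing in the kitchen when the phone rang"
ids = tokenizer(text, return_tensors="pt").input_ids[0]

with torch.no_grad():
    logits = model(ids.unsqueeze(0)).logits[0]  # (seq_len, vocab_size)

# The logits at position i predict token i + 1; count exact top-1 matches.
predictions = logits[:-1].argmax(dim=-1)
hits = (predictions == ids[1:]).sum().item()
print(f"top-1 accuracy: {hits}/{len(ids) - 1} = {hits / (len(ids) - 1):.2%}")
```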

The more context GPT-2 can process at once, the better its predictions. To vary this, the researchers changed the number of transformer layers in the neural network; these layers implement the AI's attention mechanism and thus its handling of context.
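A rough way to see the context effect under the same illustrative setup: cap how many preceding tokens the model may use and watch how accuracy changes. The text and window sizes below are arbitrary, not the study's settings.

```python
# Restrict GPT-2's visible context and measure top-1 next-token accuracy.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "so there I was standing in the kitchen when the phone rang again"
ids = tokenizer(text, return_tensors="pt").input_ids[0]

def top1_accuracy(window: int) -> float:
    hits = 0
    with torch.no_grad():
        for i in range(1, len(ids)):
            ctx = ids[max(0, i - window):i].unsqueeze(0)  # truncated context
            pred = model(ctx).logits[0, -1].argmax().item()
            hits += int(pred == ids[i].item())
    return hits / (len(ids) - 1)

for window in (1, 4, 16, 64):
    print(f"context window {window:>2} tokens: {top1_accuracy(window):.2%}")
```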

Brain measurements show that prediction is routine for humans

Another experiment conducted by the researchers shows that the good prediction performance of the human participants is no coincidence.

In the second experiment, the same podcast was played to epilepsy patients, whose implanted clinical electrodes allow brain activity to be recorded directly. These participants were simply asked to listen; they were not told to predict the next word.

The researchers collected electrocorticography (ECoG) data from more than 1,000 electrodes during listening and modeled neural activity using several algorithms, including a GPT-2-based variant.
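The standard tool for this kind of analysis is a linear "encoding model" that predicts each electrode's response to a word from the language model's representation of that word. The sketch below substitutes synthetic arrays for the real ECoG data and GPT-2 states, so only the fitting logic is shown:

```python
# Linear encoding model: ridge regression from (synthetic) word embeddings to
# one (synthetic) electrode's per-word response. Illustrates the method only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, emb_dim = 4000, 768  # roughly the podcast's word count; GPT-2 width

embeddings = rng.standard_normal((n_words, emb_dim))  # stand-in for GPT-2 states
weights = rng.standard_normal(emb_dim)
electrode = embeddings @ weights + rng.standard_normal(n_words)  # fake response

encoder = Ridge(alpha=10.0)
scores = cross_val_score(encoder, embeddings, electrode, cv=5, scoring="r2")
print(f"cross-validated R^2 per fold: {np.round(scores, 3)}")
```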

Using this data, the researchers were able to show that the brain predicts upcoming words before participants perceive them, even without an explicit prediction task. The prediction signals appeared up to 1,000 milliseconds before word onset.
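Schematically, such pre-onset signals can be probed with a lag analysis: correlate word information with neural activity at offsets before and after each word's onset. With the synthetic data below, the correlations hover near zero; in real recordings, reliable signal at negative lags (before the word is heard) is what indicates prediction.

```python
# Lag analysis sketch: correlate a per-word feature with neural activity
# sampled at fixed offsets around word onset. Data here is pure noise.
import numpy as np

rng = np.random.default_rng(1)
activity = rng.standard_normal(10_000)           # one electrode, synthetic
onsets = np.arange(100, 9_900, 10)               # hypothetical onset indices
word_feature = rng.standard_normal(len(onsets))  # e.g., a word's predictability

for lag in (-100, -50, 0, 50):                   # samples relative to onset
    r = np.corrcoef(word_feature, activity[onsets + lag])[0, 1]
    print(f"lag {lag:>4} samples: r = {r:+.3f}")
```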

The measurements also showed that neural responses to words can be predicted significantly better with models like GPT-2 than with older models that do not use context-based learning.
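In encoding-model terms, that comparison amounts to fitting the same regression with two feature sets, contextual versus static, and comparing the cross-validated fit. All arrays below are synthetic stand-ins, constructed so that the contextual features carry more of the signal:

```python
# Compare encoding fits: contextual vs. static (context-free) word features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_words = 4000
contextual = rng.standard_normal((n_words, 768))                    # GPT-2-like
static = contextual[:, :300] + rng.standard_normal((n_words, 300))  # degraded
signal = contextual @ rng.standard_normal(768) + rng.standard_normal(n_words)

for name, X in (("contextual", contextual), ("static", static)):
    r2 = cross_val_score(Ridge(alpha=10.0), X, signal, cv=5, scoring="r2").mean()
    print(f"{name:>10} features: mean R^2 = {r2:.3f}")
```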

A sign of lifelong learning?

The existence of these spontaneous prediction signals could be a sign that predicting the next word supports lifelong human learning, the researchers write.

This is also supported by developmental psychology studies, which suggest that children are exposed to tens of thousands of words in contextualized language every day, providing a large amount of data for learning.

However, the researchers caution that more studies are needed to show that the newly found prediction signals are already present at a young age and are thus actually involved in children's language acquisition. They also expect learning objectives beyond next-word prediction to play a role.

Linguistic competence is not enough

Taken together, the results show that the human brain and language AIs like GPT-2 share certain principles of data processing, such as context-based word prediction.

However, these principles are implemented in radically different neural architectures, the researchers said. Moreover, this commonality between GPT-2 and the human brain is no evidence that GPT-2 can think.

"Although language may play a central organizing role in our cognition, linguistic competence is not sufficient to capture thinking," the researchers wrote.

According to the researchers, language AIs like GPT-2 cannot think, understand, or generate new, meaningful ideas by integrating prior knowledge; they merely reproduce the statistics of their input.

Therefore, a core question in cognitive neuroscience and AI research is how the brain uses its contextualized linguistic representations learned through prediction exercises as a "substrate for generating and articulating new thoughts."

Via: bioRxiv
