
European project for ‘brain-like’ AI

Technology News | By Nick Flaherty

Two French research agencies, CEA and Inria, are working with Meta AI on artificial intelligence (AI) technology that mimics the way the brain works.

The collaboration with the Neurospin neuroimaging centre, part of CEA, is comparing how AI language models and the brain respond to the same spoken or written sentences.

“Of course, we’re only scratching the surface — there’s still a lot we don’t understand about how the brain functions, and our research is ongoing,” said Jean Remi King, Research Scientist at Meta, which owns Facebook.

“Now, our collaborators at NeuroSpin are creating an original neuroimaging data set to expand this research. We’ll be open-sourcing the data set, deep learning models, code, and research papers resulting from this effort to help spur discoveries in both the AI and neuroscience communities.” All of this work is part of research into ‘human level’ AI that learns with little or no supervision.


Functional magnetic resonance imaging (fMRI) studies capture only a few snapshots of brain activity, typically from a small sample of participants. To meet the quantity of data demanded by deep learning, the research uses thousands of brain scans from public data sets recorded with fMRI as well as with magnetoencephalography (MEG), a scanner that takes a snapshot of brain activity every millisecond. Together, these neuroimaging devices provide the volume of data needed to detect where, and in what order, activations take place in the brain.

“With magnetoencephalography, we can identify the brain responses to individual words and sentences at every millisecond. We can then compare areas in the brain to modern language algorithms,” said King.

The data sets were collected and shared by several academic institutions, including the Max Planck Institute for Psycholinguistics in Germany and Princeton University in the US. Meta is at pains to point out that each institution collected and shared the data with the informed consent of the study participants, in accordance with legal policies approved by their respective ethics committees.

“Our comparison between brains and language models has already led to valuable insights,” said King.

Language models that most closely resemble brain activity are those that best predict the next word from the context. Prediction based on partially observable inputs is at the core of self-supervised learning (SSL) in AI and may be key to how people learn language.
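The next-word objective is simple to see in code. The sketch below, in PyTorch, is purely illustrative — the model, sizes and data are stand-ins rather than Meta's actual setup: a small causal model is fed a prefix of each sentence and penalised with cross-entropy on the word that follows.

```python
# Minimal sketch (not Meta's training code) of the self-supervised next-word
# objective: the model only sees the words so far and is scored on how well
# it predicts the word that follows.
import torch
import torch.nn as nn

vocab_size, d_model = 10_000, 256            # illustrative sizes

class TinyCausalLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)   # stand-in for a transformer
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):               # tokens: (batch, seq_len)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)             # logits: (batch, seq_len, vocab)

model = TinyCausalLM()
tokens = torch.randint(0, vocab_size, (8, 32))     # fake token ids
logits = model(tokens[:, :-1])                     # predict from the prefix...
targets = tokens[:, 1:]                            # ...the very next word
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                   targets.reshape(-1))
loss.backward()
```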

“However, we discovered that specific regions of the brain anticipate words and ideas far ahead in time, while most language models today are typically trained to predict only the very next word. Unlocking this long-range forecasting capability could help improve modern AI language models,” he said.
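Continuing the toy model from the sketch above, the long-range variant simply moves the prediction target several positions ahead instead of one; the distance k below is an arbitrary illustration, not a value taken from the study.

```python
# Long-range variant of the objective above (reuses model, tokens, nn and
# vocab_size from the previous sketch): the target is the word k positions
# ahead rather than the immediately next one. k = 8 is illustrative only.
def k_ahead_loss(model, tokens, k):
    logits = model(tokens[:, :-k])       # representation of the prefix up to position t
    targets = tokens[:, k:]              # word at position t + k
    return nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                        targets.reshape(-1))

loss_next = k_ahead_loss(model, tokens, k=1)   # the standard next-word objective
loss_far  = k_ahead_loss(model, tokens, k=8)   # forecasting further into the future
```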

The research works both ways, also providing a better understanding of how the brain functions.

“Our work is a part of the broader effort by the scientific community to use AI to better understand the brain. Neuroscientists have historically faced major limitations in analyzing brain signals — let alone comparing them with AI models,” said King. “Studying neuronal activity and brain imaging is a time- and resource-intensive process, requiring heavy machinery to analyze neuronal activity, which is often opaque and noisy. Designing language experiments to measure brain responses in a controlled way can be painstaking too. For example, in classical language studies, sentences must match in complexity, and words must match in frequency or number of letters, to allow a meaningful comparison of brain responses.”

Working with Inria, the researchers at Meta compared a variety of language models to the brain responses of 345 volunteers, who listened to complex narratives while being recorded with fMRI. Those models were enhanced with long-range predictions to track forecasts in the brain.

“Our results show that specific brain regions, such as the prefrontal and parietal cortices, are best accounted for by language models enhanced with deep representations of far-off words in the future. These results shed light on the computational organization of the human brain and its inherently predictive nature and pave the way toward improving current AI models,” he said.
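The article does not spell out the analysis pipeline, but a standard way to make this kind of comparison is a linear “encoding model”: map the language model’s activations to each brain region’s fMRI signal with ridge regression, then check whether adding representations of upcoming words improves the fit. The sketch below uses random arrays as stand-ins for real activations and recordings, so it illustrates the technique rather than reproducing the study.

```python
# Hedged sketch of an encoding-model comparison (a standard technique, not
# necessarily the study's exact pipeline). All arrays are random stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_timepoints, n_features = 500, 128

X_present = rng.normal(size=(n_timepoints, n_features))   # activations for words heard so far
X_future  = rng.normal(size=(n_timepoints, n_features))   # activations for far-off future words
y = rng.normal(size=n_timepoints)                         # fMRI signal from one brain region

# Cross-validated fit with and without the future-word features.
score_present = cross_val_score(Ridge(alpha=1.0), X_present, y, cv=5).mean()
score_with_future = cross_val_score(Ridge(alpha=1.0),
                                    np.hstack([X_present, X_future]), y, cv=5).mean()

# A region whose score improves when future-word features are added is one
# whose activity is better explained by long-range predictions.
print(score_present, score_with_future)
```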

ai.facebook.com; www.inria.fr; joliot.cea.fr
