New AI system can read your mind!

NEW YORK:  Scientists have developed a new ‘mind reading’ artificial intelligence system that can decode complex human thoughts just by measuring brain activity.

The AI system indicates that the mind’s building blocks for constructing complex thoughts are formed by the brain’s various sub-systems and are not word-based.

“We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of,” said Marcel Just from Carnegie Mellon University (CMU) in the US.

Researchers demonstrated that the brain’s coding of 240 complex events, expressed as sentences such as one describing shouting during a trial, uses an alphabet of 42 meaning components, or neurally plausible semantic features.

These include features such as person, setting, size, social interaction and physical action.
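For illustration only, the five components named above might score the trial sentence something like the sketch below; the feature names come from the article, while the example sentence and the numeric values are invented.

```python
# Illustrative only: a sentence scored against a few of the 42
# "neurally plausible semantic features". The feature names are from the
# article; the sentence and the 0/1 values are invented for this sketch.
sentence = "The witness shouted during the trial."
semantic_features = {
    "person": 1.0,              # a person (the witness) is involved
    "setting": 1.0,             # the event happens in a courtroom setting
    "size": 0.0,                # size is not salient in this sentence
    "social interaction": 1.0,  # shouting at others during a trial
    "physical action": 1.0,     # shouting is a physical action
}
```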

Each type of information is processed in a different brain system, which is also how the brain processes information about objects, researchers said.

By measuring the activation in each brain system, the programme can tell what types of thoughts are being contemplated.

“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” researchers said.

 “Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” Just said.

“This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of,” Just added.

Researchers used a computational model to assess how the brain activation patterns of seven adult participants for 239 sentences corresponded to the neurally plausible semantic features that characterised each sentence.

The programme was then able to decode the features of the 240th, left-out sentence. The researchers repeated this, leaving out each of the 240 sentences in turn, in a procedure called cross-validation.

The model was able to predict the features of the left-out sentence with 87 per cent accuracy, despite never being exposed to its activation before, researchers said.

It was also able to work in the other direction, predicting the activation pattern of a previously unseen sentence from its semantic features alone.
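To make the cross-validation procedure concrete, here is a minimal sketch in Python using synthetic data. The article does not specify the model or the data dimensions beyond the 240 sentences and 42 semantic features, so the voxel count, the ridge-regression decoder and the correlation-based scoring below are illustrative assumptions, not the researchers’ actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

# Synthetic stand-ins for the data described in the article: 240 sentences,
# one brain-activation pattern per sentence, and a 42-dimensional vector of
# semantic features per sentence. The voxel count is an arbitrary assumption.
rng = np.random.default_rng(0)
n_sentences, n_voxels, n_features = 240, 500, 42
activation = rng.standard_normal((n_sentences, n_voxels))   # fMRI patterns
semantics = rng.standard_normal((n_sentences, n_features))  # meaning components

correct = 0
for train_idx, test_idx in LeaveOneOut().split(activation):
    # Train a linear map from activation patterns to semantic features on
    # 239 sentences (ridge regression is an illustrative stand-in; the
    # article does not say which model the researchers used).
    decoder = Ridge(alpha=1.0).fit(activation[train_idx], semantics[train_idx])
    predicted = decoder.predict(activation[test_idx])[0]

    # Count the prediction as correct if it is more similar (by correlation)
    # to the left-out sentence's true features than to any other sentence's.
    similarity = [np.corrcoef(predicted, s)[0, 1] for s in semantics]
    if int(np.argmax(similarity)) == int(test_idx[0]):
        correct += 1

print(f"Identification accuracy: {correct / n_sentences:.2%}")
```

Swapping the roles of `activation` and `semantics` in the same loop would sketch the reverse direction described above, predicting an activation pattern from semantic features alone.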

The study was published in the journal Human Brain Mapping. (AGENCIES)