Taking strides toward understanding how the brain processes stimuli to recognise images, researchers figure out how to project neural activity on to a TV screen. – By Steve Rousseau
How do they do it?
UC Berkeley professor Jack Gallant and his team use functional MRI (fMRI) to track blood-flow changes in a subject’s primary visual cortex – the brain’s largest visual processing centre – as he or she watches a movie. The researchers then build a model of the visual cortex that links the blood-flow pattern to the images the subject is viewing.
Algorithms then compare the brain signals with a catalogue of about 5 000 hours of YouTube video. The clips that correspond most closely to the brain activity are blended into a composite video that resembles the footage the subject originally watched.
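As a rough illustration only – not the team’s actual code – the compare-and-blend step described above might look something like the Python sketch below. It assumes a hypothetical encoding model has already predicted a brain response for every catalogued clip; all array names and shapes are illustrative.

```python
import numpy as np

def reconstruct_frame(measured_response, predicted_responses, clip_frames, top_n=100):
    """Illustrative sketch: rank candidate clips by how well a model's
    predicted brain response matches the measured response, then blend the
    best-matching frames into a composite reconstruction.

    measured_response   : (n_voxels,) measured blood-flow signal
    predicted_responses : (n_clips, n_voxels) model-predicted responses,
                          one per catalogued YouTube clip (assumed precomputed)
    clip_frames         : (n_clips, height, width, 3) a representative
                          frame from each catalogued clip
    """
    # Correlate the measured response with each clip's predicted response.
    m = measured_response - measured_response.mean()
    p = predicted_responses - predicted_responses.mean(axis=1, keepdims=True)
    scores = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m) + 1e-12)

    # Keep the clips whose predicted activity best matches the measurement...
    best = np.argsort(scores)[-top_n:]

    # ...and blend their frames, weighted by match quality, into a composite.
    weights = scores[best].clip(min=0)
    weights = weights / (weights.sum() + 1e-12)
    composite = np.tensordot(weights, clip_frames[best].astype(float), axes=1)
    return composite  # (height, width, 3) blurry "best guess" image
```

The averaging is why the reconstructions look like ghostly approximations rather than sharp replays: many roughly matching clips are superimposed, so only features common to the best matches survive.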
“I think it’s very impressive that they can get these admittedly crude re-creations of our internal representations of video,” says Marcel Just, director of Carnegie Mellon University’s Center for Cognitive Brain Imaging, who was not involved in the study.
What’s it good for?
Such brain-to-video linkages could one day aid communication with stroke or coma patients. Gallant says that once his team’s technique has been refined, it could also record and play back dreams. The main obstacle is understanding how the brain’s visual processing differs when a person is asleep rather than awake. Gallant is confident this will happen: “It’s only a matter of time.”
Video: Gallant explains how his team succeeded in decoding and reconstructing people’s dynamic visual experiences.