Taking strides toward understanding how the brain processes stimuli to recognise images, researchers have figured out how to project neural activity onto a TV screen.
How do they do it?
UC Berkeley professor Jack Gallant and his team use functional MRI (fMRI) to track blood-flow changes in a subject’s primary visual cortex – the brain’s largest visual-processing centre – as he or she watches a movie. The researchers then fit a model of the visual cortex that links the blood-flow pattern to the images the subject is viewing.
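The idea of fitting such an encoding model can be sketched in a few lines. This is a simplified stand-in, not the lab's actual pipeline: the real model used rich motion-energy features of the movie, whereas here the features, voxel counts, and noise level are all invented for illustration, and a plain least-squares fit stands in for their regression procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: each movie time step is reduced to a
# feature vector, and the fMRI scan gives one blood-flow value per
# voxel per time step. Sizes are arbitrary for illustration.
n_time, n_features, n_voxels = 300, 40, 100
features = rng.standard_normal((n_time, n_features))
true_w = rng.standard_normal((n_features, n_voxels))
responses = features @ true_w + 0.1 * rng.standard_normal((n_time, n_voxels))

# Fit one linear encoding model per voxel by least squares:
# predicted voxel response = movie features @ weights.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)

# The fitted model maps any new movie's features to a predicted
# pattern of brain activity.
pred = features @ responses_weights if False else features @ weights
```

Once fitted, the model runs "forwards": given a new clip's features, it predicts the blood-flow pattern that clip should evoke, which is what makes the matching step below possible.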
Algorithms then compare the brain signals with a catalogue of about 5,000 hours of YouTube video. The clips whose predicted responses most closely match the measured brain activity are averaged into a composite video that approximates the footage the subject watched.
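The matching-and-averaging step can be sketched as follows. Everything here is a toy illustration under stated assumptions: the "predicted" responses per clip, the single representative frame per clip, and the choice of correlation as the similarity measure are all simplifications of the lab's actual Bayesian decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the encoding model has already produced a
# predicted brain-response vector for every candidate clip, and each
# clip is summarised by one flattened frame of pixel values.
n_clips, n_voxels, n_pixels = 500, 200, 64
predicted = rng.standard_normal((n_clips, n_voxels))
clip_frames = rng.random((n_clips, n_pixels))
# Simulate a measured response that is a noisy match to clip 42.
measured = predicted[42] + 0.1 * rng.standard_normal(n_voxels)

def reconstruct(measured, predicted, frames, k=10):
    """Average the frames of the k clips whose predicted response
    best correlates with the measured brain activity."""
    z = predicted - predicted.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    m = (measured - measured.mean()) / measured.std()
    corr = z @ m / len(m)               # correlation with each clip
    top = np.argsort(corr)[::-1][:k]    # indices of best-matching clips
    return top, frames[top].mean(axis=0)

top, composite = reconstruct(measured, predicted, clip_frames)
```

Averaging many near-miss clips is why the published reconstructions look blurry: the composite keeps whatever the top matches agree on and washes out the rest.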
In this video, Gallant explains how they succeeded in decoding and reconstructing people's dynamic visual experiences…