Seeing what the brain sees.

Recently, Berkeley scientists used fMRI data to reconstruct images processed by the brain, using a library of YouTube videos to build a computer vision model. Linked above are some of the reconstructions they’ve made using this process.

In 2008, Jack Gallant’s neuroscience lab at Berkeley published work showing that functional magnetic resonance imaging (fMRI) could be used to detect images being processed by the brain (2008 paper here).

But how do you turn raw neuronal activity signals into an actual image? The Gallant lab constructed a vision model on a computer using a library of YouTube videos. A subject would watch a video while their brain activity was measured, and that data became part of a dataset correlating particular patterns of brain activity with particular kinds of video. Then, when the subject watched a novel video that the computer had never seen before, the computer would read the brain activity and superimpose the population of library clips that best correlated with the neuronal signals.
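As a rough illustration of that matching step, here is a minimal sketch in Python. Everything in it (the function names, the array shapes, the plain least-squares fit and correlation ranking) is an assumption for illustration; the lab’s actual encoding and decoding pipeline is considerably more sophisticated:

```python
import numpy as np

# Minimal sketch of the library-matching idea, not the lab's actual code.
# Assumed shapes: activity is (n_clips, n_voxels) fMRI responses to the
# training clips; features is (n_clips, n_features) visual features of
# those same clips.

def fit_encoding_model(features, activity):
    """Least-squares map from clip features to voxel responses."""
    weights, *_ = np.linalg.lstsq(features, activity, rcond=None)
    return weights  # (n_features, n_voxels)

def reconstruct(observed, library_features, library_frames, weights, top_k=100):
    """Rank library clips by how well their *predicted* brain responses
    correlate with the observed activity, then 'superimpose' (average)
    the frames of the best-matching clips."""
    predicted = library_features @ weights             # (n_library, n_voxels)
    p = predicted - predicted.mean(axis=1, keepdims=True)
    a = observed - observed.mean()
    scores = (p @ a) / (np.linalg.norm(p, axis=1) * np.linalg.norm(a) + 1e-9)
    best = np.argsort(scores)[-top_k:]
    return library_frames[best].mean(axis=0)           # blurry composite image
```

Averaging the top matches is also why the published reconstructions look like ghostly composites rather than crisp photographs: the result can only be as sharp as the video library is dense.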

It’s quite striking how well their model works. With more computing power and an even bigger video library, such reconstructed visions could become far more detailed.

This isn’t the first time scientists have been able to metaphorically see through someone else’s eyes. Ten years ago, experiments, also performed at Berkeley, were done with cats (link 1, link 2): electrode arrays were placed in the thalamus of a live cat, and the measured neuronal activity was processed into an image.
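Decoding from electrode recordings like this is often done with a simple linear decoder: fit weights that map firing rates to stimulus pixels, then invert new recordings into an image. A minimal sketch under that assumption (all names and array shapes are illustrative, not the original study’s code):

```python
import numpy as np

# Hedged sketch of linear decoding from an electrode array. Weights are fit
# on stimulus-response pairs, then applied to new recordings to recover an
# image. All names and shapes here are illustrative assumptions.

def fit_decoder(rates, frames):
    """rates: (n_samples, n_neurons) spike counts per time bin.
    frames: (n_samples, n_pixels) flattened stimulus frames shown at those bins.
    Returns (n_neurons, n_pixels) weights mapping rates -> pixels."""
    weights, *_ = np.linalg.lstsq(rates, frames, rcond=None)
    return weights

def decode_frame(rates_now, weights, shape=(32, 32)):
    """Reconstruct one image from a single vector of firing rates."""
    return (rates_now @ weights).reshape(shape)
```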
Below is a video showing some of the moving reconstructed images obtained from a cat’s brain. Note that it may be grotesque to some people, as it contains invasive electrophysiology in a live animal. It’s also creepy how the cat’s thalamic visual processing interprets human faces as catman faces.