No, seriously, I actually do know what you’re thinking. Or I would, anyway, if I subjected you to what has got to be the most alarming development in neuroscience that I have seen lately or ever.
According to a story I read in Gizmodo entitled “Scientists Reconstruct Brains’ Visions Into Digital Video In Historic Experiment,” “UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.”
Okay, my fellow UC Berkeley colleagues, nice trick. But seriously, are you sure you want to continue with this research? I am guessing that most of us have had enough weird dreams (forget waking thoughts) to know that most of them should be permanently shielded from public scrutiny.
Yet according to Professor Jack Gallant, UC Berkeley neuroscientist and coauthor of the research published in the journal Current Biology, “This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.” Lead research author Shinji Nishimoto adds, “…this is the first step to tap directly into what our brain sees and imagines.”
Okay, just stand by for a minute while I scream and run out of the room. I do not think that I want anyone else watching the movies in my mind unless I have personally sold them the admission ticket and told them where to sit. And please, don’t leave Raisinets between the seats.
I mean, yes, it would be cool to be able to remember your own dreams when they can be helpful in figuring out what you need to do to improve your life, I suppose. But when the dream is the one featuring your naked self presenting to a room full of executives, or the one where you are running and running until you ingloriously fall out of bed, I’m thinking those are movies best left on the cutting room floor.
So here’s how this technical breakthrough works. First they stick you inside a functional magnetic resonance imaging (fMRI) system for one hell of a long time while showing you various Hollywood movie trailers (if I’m chosen, I am going to request the entire filmography of George Clooney). As you watch George on-screen, the fMRI system reads the blood flow coursing through your visual cortex and pipes the output into a software program that essentially decodes the brain signals, building a database that translates the shape and motion signals generated by watching the movies into specific, measurable chunks of data. According to the article, as the fMRI sessions progress, the software gets better and better at analyzing the signals and putting them into some sort of order (that in itself would be a huge breakthrough for many people; I can’t imagine a day when all the crap free-floating inside my brain had actual organization to it).
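For the programmers in the audience (both of you), here’s roughly what I gather that decoding database amounts to. Fair warning: this is a toy sketch of the general idea, not the Gallant lab’s actual code; the feature counts, voxel counts, and the choice of ridge regression are all my guesses at how you might wire movie features to fMRI responses.

```python
import numpy as np

# Toy sketch of the "training" phase: learn how shape/motion features
# of a movie map onto fMRI voxel responses. All sizes and the ridge
# regression are illustrative assumptions, not the published pipeline.

rng = np.random.default_rng(0)

n_timepoints = 500   # fMRI samples collected while watching trailers
n_features = 64      # shape/motion features extracted per timepoint
n_voxels = 200       # visual-cortex voxels being recorded

# Stand-ins for real data: features of the trailer frames, and the
# blood-flow signal each voxel produced while the subject watched.
movie_features = rng.standard_normal((n_timepoints, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
voxel_responses = (movie_features @ true_weights
                   + 0.1 * rng.standard_normal((n_timepoints, n_voxels)))

# Ridge regression: for each voxel, learn weights that predict its
# response from the movie features. These per-voxel models are the
# "database" translating pictures into measurable chunks of data.
lam = 1.0
A = movie_features.T @ movie_features + lam * np.eye(n_features)
weights = np.linalg.solve(A, movie_features.T @ voxel_responses)

def predict_brain_activity(features):
    """Predict the voxel pattern a given clip should evoke."""
    return features @ weights
```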
Anyway, here’s where it gets particularly weird. The software was also used to predict how those same people (the ones put inside the fMRI) would respond to 18 million seconds of random YouTube videos, building a database of likely brain activity for each clip. As the article describes, “From all these videos, the software picked the one hundred clips that caused a brain activity more similar to the ones the subject watched, combining them into one final movie. Although the resulting video is low resolution and blurry, it clearly matched the actual clips watched by the subjects.”
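If you squint, that matching step is basically a nearest-neighbor search followed by an average. Here’s an equally toy sketch; the correlation scoring and the plain average of the hundred winners are my assumptions for illustration, not the paper’s exact recipe.

```python
import numpy as np

# Toy sketch of the reconstruction phase. Correlation as the
# similarity measure and a plain average of the top clips are
# assumptions, not the paper's exact method.

rng = np.random.default_rng(1)

n_clips = 1000       # stands in for the 18 million seconds of YouTube
n_voxels = 200
frame_shape = (32, 32)

# Predicted brain activity for each library clip (what the encoding
# model in the previous sketch would output), plus fake video frames.
predicted_activity = rng.standard_normal((n_clips, n_voxels))
clip_frames = rng.random((n_clips, *frame_shape))

# The activity actually measured while the subject watched something.
observed_activity = rng.standard_normal(n_voxels)

# Score every clip by how well its predicted activity matches the
# observed pattern, keep the 100 best, and blend them into one frame.
scores = np.array([np.corrcoef(p, observed_activity)[0, 1]
                   for p in predicted_activity])
top_100 = np.argsort(scores)[-100:]
reconstruction = clip_frames[top_100].mean(axis=0)  # blurry, as promised
```

Blend a hundred different clips together and blur is exactly what you get, which squares with the low-resolution results the article describes.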
Given the number of silly cat videos on YouTube, what are the odds that every output of this system will at some point include a cat flushing a toilet? I’m guessing the odds are high, given that 18 million seconds works out to about 5,000 hours of video and you can’t go more than 12 minutes on YouTube without seeing a cat.
Anyway, the article goes on to say:
“Think about those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what more closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image. Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.”
Shudder. This whole thing reminds me of that old song that goes, “If you could read my mind, love, what a tale your thoughts would tell.” Of course, if you could read my mind right now you would know I am thinking how disappointed I am to be old enough to remember who Gordon Lightfoot is. If I could read my kid’s brain vision right now it would be saying, “Dude, who the hell is Gordon Lightfoot?”
Anyway, according to the UC Berkeley researchers who have dedicated their lives to realizing the potential of the Vulcan Mind Meld, their work is very early and the reconstructed images are coming out blurry (click the YouTube link below to see a sample), but the promise is great. Personally, I am relieved that the image of me presenting naked in front of that room is not being delivered in high definition. On my next trip to UC Berkeley I am thinking of hunting down the lab where this research is being conducted and spinning some dials so it can’t get any crisper. God help us all when they start reading these images in 3D.
http://www.youtube.com/watch?v=nsjDnYxJ0bo