December 7, 2021
How sound changes sight
When we learn to associate an auditory stimulus with a visual stimulus, the perception of that visual stimulus changes, but this phenomenon is not well understood. For the first time, the Keller group has now identified a mechanism in the brain that enables auditory information to influence visual representations. The findings provide fundamental insight into the neural basis of multi-sensory disorders.
Walk down a street in a crowded city center and you will be bombarded by sensory information - the sight of other passers-by, the sound of their laughter, the smell of that pizza restaurant... Your brain seems to do a remarkable job of making sense of it all. However, it can sometimes be tricked into misinterpreting sensory information, as various illusions show. In the McGurk effect, for example, the visual information you get when watching a person speak changes the way you hear the sound (the sound ‘ba’ might be misheard as ‘ga’ when you watch the person’s lips). The illusion occurs because what you see clashes with what you hear, and your sense of vision takes over.
The McGurk effect shows that we integrate auditory and visual information - as well as past experience - to interpret what we hear and see, and that learnt associations (such as the way lips move when they make a certain sound) can change our sensory perception. The effect has been known for several decades; however, the neuronal connections in our brain that underlie it are not well understood.
Aleena Garner, a postdoc in the Keller lab, set out to learn more about the neuronal basis of the connection between vision and sound, or rather between the visual cortex - the area in the brain that processes visual information - and the auditory cortex, where auditory information is processed.
In her study, published in Nature Neuroscience, she let mice explore a virtual environment and exposed them to sequentially paired auditory and visual stimuli, while recording the activity of neurons in their visual and auditory cortices. Over the course of five training sessions, the mice learnt to associate the auditory stimulus with the visual stimulus that followed, and Garner recorded how this so-called associative learning was reflected in the mice's brains.
The experiment showed that a visual stimulus is processed differently when the mice expect to see it than when it comes as a surprise. It allowed Garner to identify a mechanism by which, over the course of learning, neurons in the auditory and visual cortices interact directly through long-range connections to create a specific association between an auditory and a visual stimulus. This association results in a suppression of responses to the predictable visual input, which amplifies responses to unpredictable visual input.
“Our work demonstrates, for the first time, a mechanism by which early visual cortex supplies an interpretation of visual information based upon a learned relationship with auditory information,” says Garner, the first author of the study. “These findings are in line with the theory of predictive processing, which postulates that the brain uses prior information to make predictions about the future.” These predictions are used to calculate prediction errors, which in turn can be used to make better predictions next time - basically to learn about our world - but they can also lead to illusions, the researcher explains.
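The learning loop described here - predict, compare with what actually arrives, use the error to predict better next time - can be sketched as a toy model. This is purely illustrative (a simple delta-rule update, not the cortical circuit model from the study), with all names and values invented for the example:

```python
# Toy illustration of predictive processing: a prediction is compared with
# the actual stimulus, and the resulting prediction error nudges the next
# prediction closer to reality. Not the model used in the study.

def update_prediction(prediction, stimulus, learning_rate=0.5):
    """Delta-rule update: shift the prediction by a fraction of the error."""
    prediction_error = stimulus - prediction
    return prediction + learning_rate * prediction_error

prediction = 0.0        # initial (naive) expectation
stimulus = 1.0          # the visual stimulus that reliably follows the sound
for trial in range(5):  # analogous to repeated training sessions
    prediction = update_prediction(prediction, stimulus)

print(round(prediction, 5))  # prediction has converged toward the stimulus
```

Over repeated pairings the prediction error shrinks, mirroring how the mice came to expect the visual stimulus after the sound; a large residual error then corresponds to a "surprising" stimulus.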
“The study expands our understanding of basic mechanisms of multi-sensory interactions and associative learning,” says Georg Keller. “It also provides fundamental insight into the neural basis of multi-sensory disorders such as dyslexia, sensory discrimination disorder and sensory modulation disorder.” Importantly, he adds, the findings reveal neural pathways that are capable of plasticity and could thus be targeted by potential therapeutic interventions for such disorders.
Aleena Garner setting up the virtual reality environment for her experiment.
Aleena R. Garner and Georg B. Keller. A cortical circuit for audio-visual predictions. Nature Neuroscience (2021). Advance online publication.
About the first author
Aleena Garner was born in Tucson, AZ, USA and obtained her PhD from the University of California, San Diego. She joined the lab of Georg Keller at the FMI in 2016 and says: “How the nervous system accomplishes vision is a question tackled by many labs internationally, but the freedom and encouragement for creative thinking, and the technical ingenuity making the experiments possible, made the Keller lab the ideal place for training.” Aleena will join the faculty at Harvard Medical School as an Assistant Professor in the Department of Neurobiology in March 2022. Her hobbies include dancing and boxing.