
Research finds clue to decoding retinal signals for better bionic eyes

Retinal prosthetics, such as bionic eyes, don't yet produce sharp, accurate pictures, but researchers at Duke University and York University may have found a piece of the puzzle that could improve the messages passed between a prosthetic and the brain, resulting in better images.

Joel Zylberberg

Assistant Professor Joel Zylberberg of the Faculty of Science, a co-author on a study led by Duke University and a core member of York's Vision: Science to Applications (VISTA) program, examined how the retina adapts to varying intensities of light, from daylight to evening light, and whether the brain needs to account for those changes to correctly interpret the retinal output: what we see.

They found that the brain needs to account for changes in the correlations between retinal ganglion cells (the tendency of pairs of cells to be active together) at every light level. If it doesn't, its information processing is severely disrupted, and so is our sight.
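To make the idea concrete, here is a minimal, hypothetical sketch in Python (my illustration, not the study's analysis; the mean responses and covariance values are invented). It simulates two "retinal ganglion cells" whose trial-to-trial noise is correlated, then decodes which of two stimuli produced each response using (a) a correlation-aware Gaussian model and (b) a correlation-blind model that assumes independent noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented mean responses of two cells to stimuli A and B (spikes per trial).
mu = {"A": np.array([5.0, 3.0]), "B": np.array([3.0, 4.0])}

# Strongly correlated trial-to-trial noise shared by the two cells.
cov = np.array([[4.0, 3.6],
                [3.6, 4.0]])
cov_blind = np.diag(np.diag(cov))  # same variances, correlations zeroed out

def log_likelihood(r, mean, sigma):
    # Gaussian log-density of response r, up to a shared constant.
    d = r - mean
    return -0.5 * d @ np.linalg.solve(sigma, d) - 0.5 * np.log(np.linalg.det(sigma))

def decode(r, sigma):
    # Maximum-likelihood guess of which stimulus produced response r.
    return max(mu, key=lambda s: log_likelihood(r, mu[s], sigma))

n_trials = 5000
hits_aware = hits_blind = 0
for stim in ("A", "B"):
    responses = rng.multivariate_normal(mu[stim], cov, size=n_trials)
    hits_aware += sum(decode(r, cov) == stim for r in responses)
    hits_blind += sum(decode(r, cov_blind) == stim for r in responses)

print(f"correlation-aware decoder: {hits_aware / (2 * n_trials):.1%} correct")
print(f"correlation-blind decoder: {hits_blind / (2 * n_trials):.1%} correct")
```

With these made-up numbers, the correlation-aware decoder gets roughly 95 per cent of trials right while the correlation-blind one gets roughly 85 per cent: the kind of failure the study documents when correlations are ignored.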

“Between night and day, there is a trillion-fold change in light intensity. The retina adapts to this change, altering the patterns of neural activity that the retina generates and sends down the optic nerve to tell the brain what the eye is seeing,” says Zylberberg, Canada Research Chair in Computational Neuroscience.

This is important if the optimal encoders used in retinal prosthetics are to produce good, accurate images under both low- and high-light levels. An encoder needs to account for how the brain will interpret the signals it sends. By showing that the brain must account for retinal adaptation, this research shows that the encoders should mimic that adaptation; otherwise, there would be a mismatch between the signals sent by the prosthetic and the ones expected by the brain.
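As a toy illustration of what mimicking adaptation could mean (again a hypothetical sketch, not the authors' encoder model), the encoder below divisively normalizes by the mean light level, so the same visual pattern produces the same firing rates at night and in daylight, while a fixed-gain encoder saturates across the trillion-fold range Zylberberg describes:

```python
import numpy as np

def adaptive_encoder(intensity, gain=20.0):
    """Rate code based on local contrast, adapting to the mean light level."""
    mean_level = intensity.mean()
    contrast = (intensity - mean_level) / mean_level   # Weber contrast
    return np.clip(gain * contrast, 0.0, None)         # rectified rates (Hz)

def fixed_encoder(intensity, gain=20.0, max_rate=200.0):
    """No adaptation: rate tracks absolute intensity, saturating in daylight."""
    return np.clip(gain * intensity, 0.0, max_rate)

pattern = np.array([0.8, 1.0, 1.2])   # one spatial pattern of light
for level, label in [(1.0, "night"), (1e12, "day")]:
    stim = pattern * level
    print(f"{label:>5}  adaptive: {adaptive_encoder(stim)}  fixed: {fixed_encoder(stim)}")
```

The adaptive encoder reports the same rates at both light levels, whereas the fixed encoder's daylight output is pinned at its maximum rate and carries no information about the pattern.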

Previously, researchers had largely thought that the brain did not need to account for these correlations. One reason is that earlier studies examined only a single light level, typically a brighter one, rather than the changes between light levels.

The researchers, including Duke University graduate student Kiersten Ruda, who led the study using a rat model, and Assistant Professor Greg Field of Duke University, show that accounting for correlations among retinal ganglion cells during low-light conditions improved decoding performance by up to 100 per cent.

“This work could have strong implications for brain-machine interface technologies that process signals recorded in the nervous system as these are analogous in many ways to the brain processing the signals it receives from the retina,” says Zylberberg.

That includes technologies like brain-controlled robotic limbs, where implanted electrodes record neural activity in the motor cortex and a decoder tries to infer, from those signals, the movement the person intended to make.

"Success here requires good decoding of movement intention from recorded neural signals, similar to our work on trying to decode the visual stimulus from recorded retinal neural signals," he explains.

In robotic limbs, current decoding doesn't make use of the correlations between neurons. However, much as the retina adapts to light level, the correlations between neurons in the cortex change considerably with a person's behavioural state.

“Our research shows under what circumstances it’s necessary to account for noise correlations in the retina. It also shows that sensory adaptation can have a large impact on decoding requirements on downstream brain areas,” he says.

The research, “Ignoring correlated activity causes a failure of retinal population codes,” is published in Nature Communications.