
How do we know where things are? New study examines visual stabilization

Our eyes move three times per second. Every time we move our eyes, the world in front of us flies across the retina at the back of our eyes, dramatically shifting the image the eyes send to the brain; yet, as far as we can tell, nothing appears to move.

A new study out of York University and Dartmouth College provides new insight into this process known as "visual stabilization." The results are published in the Proceedings of the National Academy of Sciences.

Patrick Cavanagh

"Our results show that a framing strategy is at work behind the scenes all the time, which helps stabilize our visual experience," says senior author Patrick Cavanagh, a senior research fellow in psychology at both Glendon Campus and the Centre for Vision Research at York University and a research professor in psychological and brain sciences at Dartmouth College. "The brain has its own type of steadycam, which uses all sorts of cues to stabilize what we see relative to available frames, so that we don't see a shaky image like we do in handheld movies taken with a smartphone. The visual world around us is the ultimate stable frame but our research shows that even small frames work: the locations of a test within the frame will be perceived relative to the frame as if it were stationary. The frame acts to stabilize your perception."

One such example is when someone waves goodbye to you from the window of a moving bus. Their hand appears to move up and down relative to the window rather than following the snake-like path it actually traces out as the bus moves. The bus window acts as a frame, and the motion of the waving hand is seen relative to that frame.

The study consisted of two experiments that tested how a small square frame moving on a computer monitor affected participants' judgments of location. The experiments were conducted in person with eight individuals, including two of the authors, and also online, due to the COVID-19 pandemic, with 274 participants recruited from York University, of whom 141 provided complete data. The data were very similar for both groups of participants.

In Experiment 1, a white square frame moves back and forth, left and right, across a grey screen, and the left and right edges of the square flash when it reaches the ends of its path: the left edge flashes blue at one end of the travel and the right edge flashes red at the other (see Movie 1), as shown in the figure below. Participants were asked to adjust a pair of markers at the top of the screen to indicate the distance they saw between the flashed edges.

In Experiment 1, the frame moves left and right, but instead of seeing the blue and red edges at the locations where they flash, observers always see them with the blue flash on the left, separated by the width of the frame, as if the frame were not moving. When the frame moves more than its own width, as shown here, the red edge is physically to the left of the blue when they flash at the ends of the frame's travel, and yet the blue still appears to the left of the red, separated again by almost the width of the frame.

Experiment 1 had two conditions: the first evaluated how far apart the outer left and right edges of the square frame appeared; the second assessed the perceived travel of the frame's physical edge.

The data from both conditions of Experiment 1 demonstrated that participants perceived the flashed edges of the frame as if it were stable even though it was clearly moving, illustrating what the researchers call the "paradoxical stabilization" produced by a moving frame.
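The geometry behind this reversal can be worked through with a few numbers. The short sketch below is only an illustration under assumed values (the frame width, the travel distance, and which end of the travel each edge flashes at are assumptions, not figures from the study): it compares where the flashed edges physically land with where they sit relative to the frame, which is where participants reported seeing them.

```python
# Illustrative sketch of the Experiment 1 geometry (assumed values, not study code).
# Coordinates are arbitrary screen units; 0 is the midpoint of the frame's path.

frame_width = 4.0   # width of the square frame
travel = 6.0        # distance the frame's centre moves, chosen larger than the width

# Frame centre at the two ends of its back-and-forth path.
left_end = -travel / 2
right_end = +travel / 2

# Blue flash: the frame's left edge, assumed to flash at the right end of the travel.
blue_physical = right_end - frame_width / 2    # +1.0
# Red flash: the frame's right edge, assumed to flash at the left end of the travel.
red_physical = left_end + frame_width / 2      # -1.0

# Frame-relative positions: where each edge sits if the frame is treated as stationary.
blue_relative = -frame_width / 2               # -2.0 (left edge)
red_relative = +frame_width / 2                # +2.0 (right edge)

print(f"physical:       red ({red_physical:+.1f}) lies to the left of blue ({blue_physical:+.1f})")
print(f"frame-relative: blue ({blue_relative:+.1f}) appears to the left of red ({red_relative:+.1f})")
print(f"apparent separation, roughly the frame width: {red_relative - blue_relative:.1f}")
```

Because the travel exceeds the frame's width, the physical left-right order of the two flashes is the reverse of their frame-relative order, and it is the frame-relative order that observers report.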

Experiment 2 again demonstrated the stabilizing power of a moving frame by flashing a red disc and a blue disc at the same location within a moving frame (see Movie 2). The square frame moves back and forth, left and right, while the disc flashes red and blue in alternation. As in Experiment 1, participants were asked to indicate the perceived separation between the red and blue discs. Even though there is no physical separation between the discs, the moving frame makes them appear shifted to the left and right of their true location, matching their positions relative to the frame at the moments they flashed. In other words, participants perceived the locations of the discs relative to the frame, as if it were stationary, and this held across a wide range of frame speeds, sizes, and path lengths.
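The same frame-relative bookkeeping describes the Experiment 2 result: two flashes at one physical location, presented when the frame is at opposite ends of its travel, occupy different positions relative to the frame, so their apparent separation approaches the frame's travel distance. The sketch below is again only an illustration with assumed values (including which colour flashes at which end of the travel), not the study's analysis code.

```python
# Illustrative sketch of the Experiment 2 logic (assumed values, not study code).

travel = 6.0            # distance the frame's centre moves between the two flashes
flash_location = 0.0    # both discs flash at the same physical spot

# Assumed frame centre at the moment of each flash (opposite ends of the travel).
centre_at_red_flash = -travel / 2
centre_at_blue_flash = +travel / 2

# Position of each disc relative to the frame at the moment it flashed.
red_relative = flash_location - centre_at_red_flash      # +3.0 (right of frame centre)
blue_relative = flash_location - centre_at_blue_flash    # -3.0 (left of frame centre)

physical_separation = 0.0
apparent_separation = red_relative - blue_relative       # roughly the frame's travel

print(f"physical separation: {physical_separation:.1f}")
print(f"apparent separation: {apparent_separation:.1f} (roughly the frame's travel)")
```

Discounting the frame's motion in this way is what the authors mean when they say the flashes are seen relative to the frame as if it were stationary.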

"By using flashes inside a moving frame, our experiments triggered a paradoxical form of visual stabilization, which made the flashes appear in positions where they were never presented," says Cavanagh. "Our results demonstrate a 100 per cent stabilization effect triggered by the moving frames - the motion of the frame has been fully discounted.”

These data, he says, are the first to show a frame effect that matches our everyday experience: each time our eyes move, the motion of the scene across our retinas is fully discounted, making the world appear stable.

"In the real-world, the scene in front of us acts as the anchor to stabilize our surroundings," Cavanagh says. Discounting the motion of the world as our eye move makes a lot of sense, as most scenes (i.e. house, workplace, school, outdoor environment) are not moving, unless an earthquake is occurring.

"Every time our eyes move, there's a process that blanks out the massive blur caused by the eye movement. Our brain stitches this gap together so that we don't notice the blank, but it also uses the motion to stabilize the scene. The motion is both suppressed and discounted so that we can keep track of the location of objects in the world," says Cavanagh.

Based on the study's results, the research team plans to explore visual stabilization further using brain imaging at York University and Dartmouth College.

Mert Özkan, a graduate student in the Department of Psychological and Brain Sciences at Dartmouth; Stuart Anstis, professor emeritus in psychology at the University of California San Diego; Bernard M. ’t Hart, a postdoc at the Centre for Vision Research at York University; and Mark Wexler, Chargé de Recherche at the Integrative Neuroscience and Cognition Center at the Université de Paris, also served as co-authors of the study.
