Nonretinotopic Reference Frames for Dynamic Form and Motion Perception
The spatial representation of a visual scene in the early visual system is well understood. The optics of the eye map the three-dimensional environment onto two-dimensional images on the retina, and these retinotopic representations are preserved in the early visual system. Retinotopic representations and retinotopic processing are among the most prevalent concepts in visual neuroscience. However, it has long been known that a retinotopic representation of the stimulus is neither sufficient nor necessary for perception. Many visual processes (form and motion perception, visual search, attention, and perceptual learning) that were thought to occur in retinotopic coordinates have been found to operate in non-retinotopic coordinates. Based on these findings, our goal was to characterize non-retinotopic representations and their underlying reference frames. We proposed that each retinotopic motion vector creates a perceptual reference-frame field in retinotopic space (analogous to an electromagnetic field) and that interactions between these fields determine the selection of the effective reference frame. To test this theory, we performed a series of psychophysical experiments. We first used the slit-viewing paradigm to investigate how the features of a moving object are attributed to it. Our results support the predictions of the non-retinotopic feature-processing hypothesis and demonstrate the visual system's ability to operate non-retinotopically at a fine level of feature processing. We then used a variant of the induced-motion paradigm to investigate non-retinotopic reference frames for motion perception. We found that the effective reference frame for motion perception is non-retinotopic and emerges from an amalgamation of motion-based, retinotopic, and spatiotopic reference frames. In determining the percept, the influence of relative motion, defined by a motion-based reference frame, dominates those of retinotopic and spatiotopic motion within a finite region.
Moreover, we found that different reference-frame fields interact nonlinearly, and that the way they interact depends on how motion vectors are grouped. Finally, we investigated how various spatiotemporal factors influence reference-frame selection for motion perception. In line with our theory, we found that a motion-based nearest-vector metric can fully account for all the data reported here. Taken together, these findings suggest that the brain actively constructs perceptual space by using motion-based reference frames.
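The core computational idea, selecting a reference frame by a nearest-vector rule and computing motion relative to it, can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the model used in the experiments: the function name, the use of plain Euclidean distance as the "nearest" metric, and the subtraction of a single winning reference vector are all assumptions made here for illustration.

```python
import math

def perceived_motion(target_pos, target_vec, candidate_frames):
    """Illustrative sketch (hypothetical API, not the authors' model).

    target_pos       : (x, y) retinotopic position of the target.
    target_vec       : (vx, vy) retinotopic motion vector of the target.
    candidate_frames : list of (position, motion_vector) pairs, one per
                       candidate reference-frame field.
    """
    # Nearest-vector selection (assumed metric): the reference-frame
    # field anchored closest to the target dominates within its region.
    _, ref_vec = min(candidate_frames,
                     key=lambda frame: math.dist(target_pos, frame[0]))
    # Perceived motion as relative motion: retinotopic motion of the
    # target minus the motion of the selected reference frame.
    return (target_vec[0] - ref_vec[0], target_vec[1] - ref_vec[1])
```

For example, in a classic induced-motion display, a retinotopically stationary dot inside a leftward-moving frame is predicted to appear to move rightward: `perceived_motion((0, 0), (0, 0), [((1, 0), (-1, 0))])` yields `(1, 0)`.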