Capacity, Precision, and Representations of Sensory and Mnemonic Processing
Huynh, Duong Le Vu
Most of our mental processes rely on memory, and dysfunction of memory systems can lead to severe perceptual, cognitive, and motor deficits. Although decades of research have accumulated a large knowledge base on human memory, several critical issues and contentions remain unresolved. This dissertation addresses three such problems.

The first study aimed to determine the bottlenecks of information processing that constrain the capacity of visual memory. We measured observers’ psychophysical performance in a direction-of-motion recall task and, through statistical modeling, analyzed the quantity and quality of information across the early stages of mnemonic processing. In contrast to the long-standing view that visual short-term memory is the only major bottleneck of processing, we found significant losses in both the quantity and quality of information during the initial encoding and sensory-memory stages as well.

The second study aimed to characterize the representational format of visual information in terms of featural dimensions. Specifically, we examined the roles of three features, viz., position, color, and direction of motion, in the construction and maintenance of object representations. Using a cross-cuing paradigm, we showed that features are stored in bound form, such that any feature can serve as an effective cue to retrieve another. However, the pattern of binding strength and cuing effectiveness is asymmetric and reflects the stream specificity (parvocellular vs. magnocellular) of the features. This study further showed that the distribution of information loss across processing stages found in the first study for direction of motion holds for all three features.

The third study aimed to characterize the representational format of visual information in terms of reference frames, viz., retina-based, world-based, or mixed. Using a motion-vector decomposition approach, we analyzed observers’ behavior and compared models with different assumptions about the contribution of processing in each reference frame. With some exceptions, the world-based account best described the data.

Taken together, we propose that the visual system encodes and stores information with quantitative and qualitative losses occurring at multiple processing stages; this loss is mainly tuned to the metrics of the external world, and its magnitude varies with stimulus characteristics and task demands.
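The kind of statistical modeling used in the first study to separate the quantity of retained information from its quality can be illustrated with a standard mixture model for continuous-report recall errors. This is a generic sketch of that family of models, not the dissertation's actual analysis: each response is assumed to be drawn either from a von Mises distribution centered on the true direction (concentration `kappa`, indexing precision/quality) or from a uniform guess distribution (rate `1 - p_mem`, indexing capacity/quantity loss); all names and parameter values are illustrative.

```python
# Hedged sketch: a standard von Mises + uniform mixture model for
# continuous-report recall errors (quantity = p_mem, quality = kappa).
# Illustrative only; not the dissertation's actual model or code.
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0  # modified Bessel function of order 0

def neg_log_likelihood(params, errors):
    """errors: response minus target direction, in radians, in (-pi, pi]."""
    p_mem, kappa = params
    vonmises = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
    uniform = 1.0 / (2 * np.pi)
    lik = p_mem * vonmises + (1 - p_mem) * uniform
    return -np.sum(np.log(lik))

def fit_mixture(errors):
    """Maximum-likelihood estimates of (p_mem, kappa) for the mixture."""
    res = minimize(neg_log_likelihood, x0=[0.8, 4.0], args=(errors,),
                   bounds=[(1e-3, 1 - 1e-3), (1e-3, 100.0)])
    return res.x

# Simulate 1000 trials: 75% remembered (kappa = 8), 25% random guesses.
rng = np.random.default_rng(0)
n = 1000
remembered = rng.random(n) < 0.75
errors = np.where(remembered,
                  rng.vonmises(0.0, 8.0, n),
                  rng.uniform(-np.pi, np.pi, n))
p_mem_hat, kappa_hat = fit_mixture(errors)
```

Fitting such a model separately to responses probed at different delays after stimulus offset is one way to track how the quantity (`p_mem`) and quality (`kappa`) of stored information degrade across encoding, sensory memory, and short-term memory stages.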
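The reference-frame logic of the third study can be sketched with a toy motion-vector decomposition. This is an assumed formulation for illustration, not the dissertation's code: when the eyes move, a stimulus's retinal motion differs from its motion in the world by the eye-velocity vector (v_world = v_retina + v_eye), so a retina-based observer, a world-based observer, and a mixed observer predict different reports. The function names and the weight parameter `w_world` are hypothetical.

```python
# Hedged sketch of reference-frame predictions via vector decomposition.
# Assumption: retinal motion = world motion - eye (pursuit) motion.
import numpy as np

def decompose(v_world, v_eye):
    """Retinal motion vector implied by world motion and eye motion."""
    return np.asarray(v_world) - np.asarray(v_eye)

def predicted_report(v_world, v_eye, w_world):
    """Weighted mixture of world-based and retina-based predictions.
    w_world = 1 -> pure world-based; w_world = 0 -> pure retina-based."""
    v_retina = decompose(v_world, v_eye)
    return w_world * np.asarray(v_world) + (1 - w_world) * v_retina

# Example: stimulus moves rightward at 4 deg/s while the eyes pursue
# upward at 3 deg/s, so the retinal motion vector is (4, -3).
v_world = np.array([4.0, 0.0])
v_eye = np.array([0.0, 3.0])
v_retina = decompose(v_world, v_eye)                     # [4., -3.]
world_report = predicted_report(v_world, v_eye, 1.0)     # [4., 0.]
mixed_report = predicted_report(v_world, v_eye, 0.5)     # [4., -1.5]
```

Comparing observers' reported directions against these competing predictions (e.g., by likelihood or error minimization over `w_world`) is one way such decomposition analyses can adjudicate between retina-based, world-based, and mixed accounts.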