In his research, Professor David Waller investigates how people learn and mentally represent spatial information about their environment. Wearing a head-mounted display and carrying a laptop-based dual-pipe image generator in a backpack, users can walk untethered through immense computer-generated virtual environments within the 24×48 meter gymnasium.



Research Project Examples

Specificity of Spatial Memories

When people learn the locations of objects in a scene, what information gets represented in memory? For example, do people remember only what they saw, or do they commit more abstract information to memory? In two projects, we address these questions by examining how well people recognize perspectives of a scene that are similar, but not identical, to the views they have learned. In a third project, we examine the reference frames that are used to code spatial information in memory. In a fourth project, we investigate whether the biases that people show in their memory for pictures also occur when they remember three-dimensional scenes.

Nonvisual Egocentric Spatial Updating

When we walk through the environment, we realize that the objects we pass do not cease to exist just because they are out of sight (e.g., behind us). We stay oriented in this way because we spatially update (i.e., keep track of changes in our position and orientation relative to the environment). Several SPACELAB projects investigate the processes that underlie spatial updating. One project investigates possible limitations on the number of objects that people can update. Another project examines differences between spatial updating of a well-learned, familiar environment and that of a recently learned immediate environment. A third project investigates the degree to which spatial updating results from processing low-level sensory information versus high-level cognitive information, such as one's intentions to act.
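As a computational analogy only (not a model used by SPACELAB), spatial updating can be sketched as recomputing object coordinates in an egocentric reference frame after a movement. The function below, with made-up names and toy coordinates, shows how a forward step followed by a left turn changes where each unseen object lies relative to the observer:

```python
import math

def update_egocentric(objects, forward, turn_deg):
    """Recompute egocentric (x, y) object positions after the observer
    walks `forward` meters and then turns `turn_deg` degrees to the left.

    Coordinates are egocentric: x is meters to the observer's right,
    y is meters straight ahead. Hypothetical illustration only.
    """
    theta = math.radians(turn_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    updated = {}
    for name, (x, y) in objects.items():
        # Translation: walking forward brings objects closer along y.
        y_t = y - forward
        # Rotation: a left (counterclockwise) turn of theta rotates the
        # scene clockwise by theta in the egocentric frame.
        updated[name] = (x * cos_t + y_t * sin_t,
                         -x * sin_t + y_t * cos_t)
    return updated

# A lamp 3 m straight ahead; walk 1 m forward, then turn left 90 degrees.
# The lamp should now be 2 m to the observer's right.
print(update_egocentric({"lamp": (0.0, 3.0)}, forward=1.0, turn_deg=90.0))
```

The sketch makes the two components of the research question concrete: the translation step relies on low-level self-motion information, while the rotation step depends on knowing how far one has turned.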