Typical eye movement experiments are conducted while the observer sits in front of a computer screen, often with their head restrained in a chin rest. While this approach allows a high level of experimental control, it often lacks comparability to real-world tasks. To address this, we strive to investigate eye movement behavior in increasingly natural contexts, both in virtual reality and in the real world.

How do we move our eyes during perceptual judgments?

Does a box fit into a shelf? In this project, led by my colleague Avi Aizenman, we were interested in the eye movements that accompany such everyday judgments. We used photorealistic virtual reality to present a scene containing everyday objects on a table, and observers had to judge the height, width, or brightness of the objects.

Inspired by research in the haptic domain, which showed specific exploration routines for different tasks, we hypothesized that we would observe eye movement behavior that differs across these tasks. Instead, we found that observers did not show a task-specific pattern of movements, but rather fixated at different locations depending on the task.

For height judgments, observers fixated closer to the top of the object; for width judgments, closer to its center. Interestingly, when we manipulated where observers could fixate on the object in a second, gaze-contingent experiment, performance did not drop when observers were forced to fixate at the center for height judgments or at the top for width judgments. This suggests that for the tested perceptual judgments, observers rely on perceptual processes rather than on potential additional cues from eye movements (for example, looking from the top to the bottom of an object and using this distance as an indication of its size).