Of all our senses, vision is the most studied. However, not everything we want to interact with is based on visual information. When you hear a friend say your name, you know in which direction to orient before you can actually see them. Similarly, when you feel a mosquito on your skin, you do not need visual input to know where to swat.

In addition, while eye movements are interesting as a window into the mind, if we want to interact with the world, we need to move our hands. Across a series of papers, we studied how people coordinate and control eye and hand movements, and how they perform actions towards their own body based on different types of somatosensory information.

The interception of moving objects with impaired velocity signals

Intercepting a moving object is far from trivial: due to our inherent sensory processing delays of around 100 ms, our estimate of the object's position at movement onset always reflects where the target used to be. Therefore, for a successful interaction with a moving object, we need to take the target's velocity into account to predict its correct location.
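As a rough illustration of this extrapolation, the sketch below computes where a target should be right now from a delayed position estimate and a velocity signal. The numbers are illustrative assumptions, not the study's actual parameters:

```python
# Toy sketch: compensating for sensory processing delay by
# extrapolating a moving target's position with its velocity.
# All numbers are illustrative, not taken from the study.

def predicted_position(perceived_pos, velocity, delay):
    """Extrapolate the target's current position from a delayed estimate.

    perceived_pos: position (deg) the visual system reports, which is
                   already `delay` seconds old
    velocity:      target velocity (deg/s)
    delay:         sensory processing delay (s)
    """
    return perceived_pos + velocity * delay

# A target moving at 10 deg/s with a 100 ms processing delay has
# already moved 1 deg beyond the perceived position.
print(predicted_position(perceived_pos=5.0, velocity=10.0, delay=0.1))  # 6.0
```

Without the velocity term, any action aimed at `perceived_pos` would land where the target was 100 ms ago, which is exactly the lag described above.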

In this study, we investigated how eye and hand movements are affected when one needs to interact with a moving target but the estimate of the target's velocity is impaired because only second-order motion cues are available. We observed that in this case eye movements always landed at the position the target had occupied 100 ms before saccade offset, whereas hand movements were accurate.

But does that mean that eye and hand movements were controlled by different types of information? We found that the most parsimonious explanation for this effect is the difference in latency between eye and hand movements. Eye movements typically started after around 150-200 ms, whereas hand movements started after around 400 ms. When we forced eye movements to be delayed to a comparable latency, they also became more accurate. In fact, the accuracy of eye movements across different latencies seemed to reflect the time course of the sensory processing of the second-order motion stimulus: saccades executed early consistently lagged behind the target, whereas saccades executed later became increasingly accurate. Similarly, hand movements with very short latencies showed increased curvature, reflecting this evolution of the sensory signal. Thus, second-order motion can be used to guide goal-directed movements, but it needs more time to be successfully processed than more common stimuli.
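The latency account can be made concrete with a purely illustrative toy model. Everything here is an assumption for illustration, not a fitted result: we assume the second-order velocity signal builds up linearly over some time, so a movement launched early works with an incomplete velocity estimate and compensates for only part of the 100 ms delay.

```python
# Toy model (illustrative assumptions only): a velocity signal that
# builds up linearly over `buildup_time` seconds. A movement launched
# before the signal is complete compensates for only part of the
# sensory delay and therefore lags behind the target.

def effective_velocity_gain(latency, buildup_time=0.4):
    """Assumed linear build-up of the second-order velocity estimate."""
    return min(1.0, latency / buildup_time)

def landing_lag(velocity, latency, delay=0.1, buildup_time=0.4):
    """Residual lag (deg) behind the target at movement end.

    With a complete velocity estimate the delay is fully compensated;
    with an incomplete one, the uncompensated fraction remains.
    """
    gain = effective_velocity_gain(latency, buildup_time)
    return velocity * delay * (1.0 - gain)

# An early saccade (150 ms latency) lags behind the target, while a
# late movement (400 ms, comparable to the hand) does not.
print(landing_lag(velocity=10.0, latency=0.15))  # 0.625 deg lag
print(landing_lag(velocity=10.0, latency=0.40))  # 0.0
```

Under these assumptions, short-latency saccades land behind the target and longer-latency movements do not, mirroring the pattern described above.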

The interaction of eye and hand movements

We investigated how isolated eye, isolated hand, and coordinated eye-hand movements were made towards one's own body with different types of somatosensory information in darkness. Actions had to be performed towards one's own index or middle finger, which was defined either by proprioceptive information alone; by proprioception plus an additional touch on one of the fingers; by proprioception plus kinesthetic information from actively moving the hand to the target location; or by all three together.

We observed a systematic influence of the available somatosensory information on reach performance, while gaze endpoints were unaffected. Reach endpoint precision was poorest when target position was derived solely from proprioceptive input, and it improved when two types of somatosensory information were available. In addition, kinesthetic information from the target digits affected reach endpoints by limiting proprioceptive drift and shifting reach endpoints in the direction of movement. The most interesting result, however, was that compared to the isolated hand movement, in which participants had to keep fixating a fixation cross close to their body, a reaching movement combined with an eye movement was more accurate, despite the experiment taking place in total darkness. This suggests a direct benefit of efferent information about the eye movement for reach control.

Goettker, A., Fiehler, K., & Voudouris, D. (2020). Somatosensory target information is used for reaching but not for saccadic eye movements. Journal of Neurophysiology, 124(4), 1092-1102.

Kinesthetic information helps to perform quicker eye movements

In a setup similar to the one described above, we focused on whether different types of somatosensory information can change movement latency. Here, too, we found no influence on saccade accuracy or precision, but significantly shorter saccade latencies when additional kinesthetic information was available.

We replicated this effect across a set of different control experiments, which suggests that kinesthetic information facilitates saccade planning.

Goettker, A., Fiehler, K., & Voudouris, D. (2020). Somatosensory target information is used for reaching but not for saccadic eye movements. Journal of Neurophysiology, 124(4), 1092-1102.