One-sided Interference Between Speech Production and Visuomotor Learning

Daniel R. Lametti

 

Research examining human limb movements suggests that sensorimotor learning involves dissociable implicit and explicit learning processes. During reaching, the implicit process aims to reduce differences between predicted and actual sensory feedback (i.e., sensory prediction errors). The explicit process reflects the deliberate movement strategies used to ensure that task goals are achieved (e.g., re-aiming to hit a target). Patients with cerebellar damage show deficits in the implicit process, whereas patients with frontal lobe damage fail to use movement strategies to achieve motor goals. In speech, a substantial body of research suggests that sensorimotor learning draws on the same implicit process observed in limb motor learning. Adaptation to real-time alterations in the sound of the voice is driven by sensory prediction errors, and learning in this case depends on the cerebellum. The extent to which speech motor learning also involves an explicit, strategy-based process remains unclear.

 

We recently developed a dual-task paradigm to pair visuomotor learning during hand movements with sensorimotor learning during speech. The task has participants make near-ballistic movements of a fingertip-controlled joystick to move a cursor into targets on a computer screen. In time with these hand movements, they produce a consonant-vowel-consonant word (e.g., “bed”) into a microphone, and their speech is fed back to them through headphones with almost no delay. Feedback is given throughout the task to ensure that peak hand velocity co-occurs precisely with speech production. Using this experimental model, visuomotor learning can be induced by rotating the cursor’s screen position; speech motor learning can be induced by real-time alterations of first formant frequencies; or sensorimotor learning can be induced in both movements simultaneously by rotating the cursor’s screen position and altering first formant frequencies.
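The visuomotor perturbation described above amounts to a 2D rotation applied to the cursor's screen position. The sketch below illustrates the idea; the 30-degree angle and origin-centered rotation are illustrative assumptions, not the study's parameters.

```python
import math

def rotate_cursor(x, y, angle_deg=30.0):
    """Rotate the cursor's screen position about the origin.

    A visuomotor rotation remaps hand motion to cursor motion: the hand
    moves straight toward the target, but the cursor's path appears
    rotated by angle_deg. (30 degrees is an illustrative value only.)
    """
    theta = math.radians(angle_deg)
    x_rot = x * math.cos(theta) - y * math.sin(theta)
    y_rot = x * math.sin(theta) + y * math.cos(theta)
    return x_rot, y_rot

# A purely rightward hand movement appears rotated on screen:
rotate_cursor(1.0, 0.0, 30.0)  # -> (~0.866, ~0.5)
```

Participants adapt by moving their hand in the opposite direction so the rotated cursor still lands in the target; the analogous speech perturbation shifts first formant frequencies rather than screen coordinates.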

 

In a series of three studies using this dual-task paradigm, we found that visuomotor learning during speech production was markedly impaired by the presence of altered auditory feedback. Critically, the impairment was not observed if visuomotor learning was induced by a gradual rotation of the cursor’s screen position, a manipulation known to eliminate the use of explicit movement strategies. In contrast, sensorimotor learning in speech was never impaired by the presence of a visuomotor alteration, regardless of how the formant alteration was applied. The results demonstrate that the explicit component of visuomotor learning is sensitive to error signals in other motor domains (i.e., speech production). The results also suggest that sensorimotor learning in speech may lack the explicit, strategy-based learning process frequently observed during limb motor learning.
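The abrupt versus gradual manipulation can be sketched as two perturbation schedules: an abrupt rotation produces a large, noticeable error that invites an explicit aiming strategy, whereas a gradual ramp keeps each step small enough to go unnoticed, leaving only implicit adaptation. The trial counts and 30-degree endpoint below are illustrative assumptions, not the study's design.

```python
def rotation_schedule(trial, n_trials=100, final_deg=30.0, gradual=True):
    """Return the cursor rotation (degrees) applied on a given trial.

    Abrupt: the full rotation appears at once on the first trial.
    Gradual: the rotation ramps up linearly in small, barely
    detectable steps until it reaches final_deg. (Values are
    illustrative assumptions, not the study's parameters.)
    """
    if gradual:
        return final_deg * min(1.0, (trial + 1) / n_trials)
    return final_deg

rotation_schedule(0, gradual=False)  # -> 30.0 (full rotation at once)
rotation_schedule(0, gradual=True)   # -> 0.3 (a barely detectable step)
```

Only the abrupt schedule was impaired by concurrent altered auditory feedback, consistent with interference targeting the explicit, strategy-based component of learning.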