Local time-warping in auditory feedback alters articulatory timing in connected multisyllabic speech containing vowels, fricatives and stops

Shanqing Cai & Frank H. Guenther, Speech Laboratory, Department of Speech, Language and Hearing Sciences, Boston University

Research on auditory-motor interaction in speech production has focused mainly on quasi-static phonation and articulatory gestures. However, because most speech produced in real life is multisyllabic, it is important to examine the role of auditory feedback in the control of connected multisyllabic articulation. Using formant trajectory manipulation, our group has previously demonstrated the role of auditory feedback in the online control of phonemic and syllabic timing in a six-word utterance consisting of vowels and semivowels (Cai et al., 2011, J. Neurosci.). More recently, we have extended the generality of this finding by applying a new type of timing perturbation to an utterance containing fricatives and stops. Participants produced the sentence “The steady bat gave birth to pups” while, unbeknownst to them, we used the Audapter system to impose a local time dilation that increased the duration of the [s] sound in the word “steady” by ~50 ms. A time compression following the dilation ensured that the feedback timing perturbation was confined to the word “steady”. Preliminary results indicate that normal adult speakers produced significantly lengthened phonemic and syllabic durations after the onset of the local time warping, compared with their productions under no time warping. Surprisingly, this type of duration adjustment was observed not only under normal speech rhythm, but also during metronome-based rhythmic speech. We are also measuring the responses of adults who stutter to the same type of time perturbation and comparing them with normal speakers’ responses. The implications of these results for the mechanisms of speech movement sequencing will be discussed in the framework of the DIVA and GODIVA models.
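Conceptually, the perturbation can be described as a piecewise-linear mapping between feedback (playback) time and original signal time: the targeted segment is dilated, and a subsequent compression removes the accumulated lag so that the rest of the utterance is heard at veridical timing. The Python sketch below is only an illustration of that mapping; the function name `local_time_warp`, the segment boundaries, and the recovery duration are assumptions for demonstration and do not reflect Audapter's actual interface or parameter values.

```python
import numpy as np

def local_time_warp(t, seg_start, seg_end, dilation_s=0.05, recovery_s=0.2):
    """Map feedback (playback) time to original signal time for a local
    piecewise-linear time warp.

    The segment [seg_start, seg_end] (seconds) is played back stretched by
    `dilation_s` seconds; a compression over the following `recovery_s`
    seconds of playback removes the accumulated lag, so later portions of
    the utterance return to their original timing.  `t` is a 1-D NumPy
    array of feedback time points in seconds.
    """
    t = np.asarray(t, dtype=float)
    seg_dur = seg_end - seg_start
    out = np.empty_like(t)

    # Before the warped segment: identity mapping (feedback time == signal time).
    pre = t < seg_start
    out[pre] = t[pre]

    # Dilated segment: playback is slowed, so original time advances more
    # slowly than feedback time (rate < 1).
    dil_end = seg_start + seg_dur + dilation_s
    dil = (t >= seg_start) & (t < dil_end)
    out[dil] = seg_start + (t[dil] - seg_start) * seg_dur / (seg_dur + dilation_s)

    # Recovery interval: playback is sped up (rate > 1) until the lag of
    # `dilation_s` seconds has been eliminated.
    rec_end = dil_end + recovery_s
    rec = (t >= dil_end) & (t < rec_end)
    out[rec] = seg_end + (t[rec] - dil_end) * (recovery_s + dilation_s) / recovery_s

    # After recovery: identity mapping again.
    post = t >= rec_end
    out[post] = t[post]
    return out

# Example: warp applied to a hypothetical [s] segment spanning 0.30-0.45 s.
t_feedback = np.linspace(0.0, 1.0, 11)
print(local_time_warp(t_feedback, seg_start=0.30, seg_end=0.45))
```

In this formulation, the dilation and the compensating compression are local: outside the warped region the mapping is the identity, which corresponds to the constraint described above that the perturbation be confined to the word “steady”.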