Multisensory Integration of Native and Nonnative Speech in Bilingual and Monolingual Adults

Multisens Res. 2024 Oct 8;37(6-8):413-430. doi: 10.1163/22134808-bja10132.

Abstract

Face-to-face speech communication is an audiovisual process during which interlocutors use both auditory speech signals and visual oral articulations to understand one another. These sensory inputs are merged into a single, unified percept through a process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience in integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech. This included an auditory utterance of 'ba' paired with visual articulations of 'ga', which often induces the perception of 'da' or 'tha', a fusion effect that is strong evidence of integration, as well as an auditory utterance of 'ga' paired with visual articulations of 'ba', which often induces the perception of 'bga', a combination effect that is weaker evidence of integration. We compared fusion and combination effects across three groups (N = 20 each): English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals as measured by fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend whereby greater language experience was associated with increased integration, as measured by fusion. These results held regardless of whether the McGurk stimuli were presented as stand-alone syllables or in the context of real words.

MeSH terms

  • Acoustic Stimulation
  • Adult
  • Auditory Perception / physiology
  • Female
  • Humans
  • Language
  • Male
  • Multilingualism*
  • Photic Stimulation
  • Speech / physiology
  • Speech Perception* / physiology
  • Visual Perception* / physiology
  • Young Adult