The impact of prior knowledge and spectral degradation on spoken lexical access in children
In everyday conversation, listeners must rapidly perceive and parse speech input as they receive it. While listeners with normal hearing may do this with relative ease, it is a multi-step process that requires listeners to recognize individual speech sounds as meaningful units and then integrate them in order to understand and respond almost instantaneously. To accomplish this, listeners must be able to efficiently recognize individual phonemes and syllables. To facilitate this process, listeners may also apply top-down knowledge, that is, prior knowledge and other cognitive factors, to incoming perceptual input in order to interpret speech more rapidly (Pichora-Fuller, 2008; Rönnberg et al., 2013). Overall, everyday speech understanding involves combining perceptual input (the speech signal) with previously acquired knowledge and representations (i.e., linguistic and conceptual information). The ability to efficiently use perceptual input and linguistic knowledge develops through exposure to spoken language throughout infancy and early childhood. However, how children process spectrally degraded speech while they are still acquiring their linguistic and conceptual knowledge remains under-explored. This question is particularly important for children who are deaf or hard of hearing and use cochlear implants, as their auditory input is both delayed and consistently degraded. This dissertation investigates how spectral degradation affects children's ability to process spoken language in real time.

The first part of this dissertation (Study 1, Chapter 2) investigated adults' and children's ability to apply phonotactic knowledge to speech input that is spectrally degraded via noise-band vocoding. Participants first completed an English Decision task, in which they selected the higher-probability non-word from non-word pairs that varied in phonotactic distance.
Participants then completed a Lexical Decision task, in which they judged the lexicality of non-words and real words that varied in phonotactic probability. This study presented evidence that spectral degradation impeded adults' ability to use phonotactic information during speech recognition. Children showed varying sensitivity to phonotactic information regardless of stimulus fidelity, and this sensitivity was further modulated by task and age. Overall, these results suggest that children's ability to use phonotactic knowledge during spoken language processing is still developing.

The second part of this dissertation (Study 2, Chapter 3) explored the impact of hearing ability on children's lexical access and how the availability of semantic information contributes to this process. Children with normal hearing and children with cochlear implants completed a two-alternative forced-choice task in quiet, ideal listening conditions while their eye gaze was recorded and quantified. Both groups experienced phonological competition when the two choices shared the same initial phonological onset. However, children with cochlear implants showed greater reliance on available semantic information than their peers with normal hearing, providing insight into how children weight different cues, such as semantic information, during spoken language processing. These results reveal different looking patterns between children with normal hearing and children with cochlear implants, differences that could be further exacerbated in real-world listening environments. Overall, this work provides evidence of spectral degradation's profound impact on language processing, especially for children with varying hearing experience and language exposure.