

Center for Perception and Communication in Children
Past Projects
Researchers carried out the following projects at the Center for Perception and Communication in Children.
Principal Investigator: Sophie Ambrose, Ph.D.
Approximately 2 babies of every 1,000 born have a hearing loss, and as a result are at risk for significant delays in language development, which can lead to negative academic, social, and employment outcomes. Access to spoken language can be improved when children consistently use appropriately fit hearing devices, including hearing aids and cochlear implants, and when they are provided with linguistic input that meets their unique needs. Unfortunately, many families struggle to establish consistent hearing device use and to modify their interactions in ways that support their children's language development. This project sought to better understand the barriers families face in establishing consistent hearing device use and providing their children with optimal language environments, and to develop interventions addressing these barriers.
Over 50% of families indicated they had experienced one or more of the following barriers at least once in the past month: the devices falling off their child's head or ear; being busy with other things happening in the home; the child taking the devices off; the child being sick; fear of losing the devices; and the child not wanting the devices put on. We also found that approximately 15% of parents did not feel confident that their child's hearing devices were critical in helping their child learn to communicate. These children were at especially high risk for being poor device users.
The resulting intervention we developed was named Ears On. The intervention initially focuses on ensuring caregivers: 1) understand their child's hearing loss, 2) understand the potential impact of their child's hearing loss on development if hearing devices are not used consistently, 3) recognize that device use is the primary means of preventing delays associated with hearing loss, and 4) feel empowered to improve device use. The rest of the intervention content focuses on providing families with strategies to address the device use barriers they are personally experiencing. The effectiveness of the intervention was tested in a study that included three families. All three families initially demonstrated poor device use, despite enrollment in traditional early intervention services. Participating in the intervention resulted in increased device use for all three families, indicating that Ears On may have promise for eventually being used in early intervention settings.
Regarding children's language environments, our work indicated most early intervention providers did not have training or experience with Enhanced Milieu Teaching (EMT), an evidence-based intervention used to help young children learn language. We worked with Dr. Ann Kaiser, a renowned scientist whose career has focused on developing and testing EMT, to adapt it to meet the unique needs of young children with hearing loss. The resulting intervention is called COACH (Caregiver's Optimizing Achievement of Children with Hearing Loss). In COACH, parents are taught to respond to their children's gestures, signs, and words in ways that will help their children learn language. They are also taught strategies for increasing how much their children communicate. We are currently testing the effectiveness of COACH and early results indicate parents can learn the skills taught in the intervention.
Principal Investigator: Karina S. Blair, Ph.D.
The Program for Trauma and Anxiety in Children (PTAC) was created in October 2016 as part of the Center for Neurobehavioral Research.
The PTAC strives to be a national leader for research on the impact of maltreatment (abuse and neglect) on the developing brain and on the neurobiological basis of anxiety disorders in adolescents. Our primary research methodology is functional magnetic resonance imaging (fMRI), though other techniques are also used as appropriate.
Interactions of Language Impairment and Childhood Trauma Interventions Examined Using fMRI
Most interventions for maltreatment rely heavily on language. However, the extent to which language level moderates intervention efficacy has received almost no attention. This project will determine the extent to which language ability moderates the efficacy of two interventions for maltreatment: the more language-dependent Trauma-Focused Cognitive Behavioral Therapy [TF-CBT] and the less language-dependent Eye Movement Desensitization and Reprocessing [EMDR]. The results from this project may help guide intervention choice as a function of language ability.
Principal Investigator: Adam Bosen, Ph.D.
Children who use cochlear implants to hear have difficulty remembering sequences of things. Impaired memory is likely one reason they have trouble understanding speech. Our lab seeks to determine the different effects that poor-quality auditory input has on the growth and maintenance of memory. This goal is important because distinguishing these effects will help us determine how to address them. We use a mix of behavioral tests and computational models in our studies, which allows us to use patient performance to predict long-term outcomes.
Short-Term Memory Depends on Auditory Input Quality and the Information to be Remembered
We test short-term memory in children and adults with cochlear implants. We also test listeners with normal hearing with simulated auditory impairment. Listeners hear sequences of digits, words, or non-words and repeat them back. Our results indicate that auditory impairment affects memory differently depending on what listeners have to remember. Tests that only use digits may underestimate the impact of auditory impairment on short-term memory. We will use these results to determine how auditory quality and memory work together to influence speech recognition.
Computational Models Predict How Memory Develops With Hearing Impairment
We compare our experimental results with predictions made by models of verbal short-term memory. Other models have described short-term memory in listeners with typical hearing. We extend their work to make predictions about listeners with hearing loss. We simulate development with auditory degradation, and compare our model predictions to the results obtained in listeners with cochlear implants.
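As a purely illustrative sketch (not the lab's actual model, whose structure and parameters are not described here), a toy simulation of verbal serial recall can show how degraded auditory input compounds across list positions to shrink memory span. All names and parameter values below are hypothetical.

```python
import random

def recall_probability(position, list_length, input_quality):
    """Toy serial-recall model: each item's memory strength scales with
    encoding quality (degraded by hearing impairment), decays with each
    subsequent item, and gets a small primacy boost for early positions.
    Parameters are illustrative, not fitted to data."""
    decay = 0.85                              # strength retained per intervening item
    primacy = 0.1 * max(0, 3 - position)      # boost for the first few items
    strength = input_quality * (decay ** (list_length - 1 - position)) + primacy
    return min(1.0, strength)

def simulate_span(list_length, input_quality, trials=2000, seed=1):
    """Estimate the proportion of lists recalled perfectly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        if all(rng.random() < recall_probability(i, list_length, input_quality)
               for i in range(list_length)):
            correct += 1
    return correct / trials

# Lower encoding quality (e.g., simulated auditory degradation) reduces
# the chance of perfect recall, and the deficit grows with list length.
for quality in (1.0, 0.8, 0.6):
    print(quality, [round(simulate_span(n, quality), 2) for n in (3, 5, 7)])
```

In this kind of sketch, the key design choice is that auditory degradation acts only on encoding strength, so its downstream effect on recall emerges from the memory dynamics rather than being assumed directly.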
Our results will characterize how auditory degradation impacts speech recognition and short-term memory in listeners with cochlear implants. In future work, we will use our findings to determine how to support language and memory development in children with cochlear implants.
Principal Investigator: Marc Brennan, Ph.D.
Hearing loss may reduce access to temporal information in speech, which would have implications for service provision for children with hearing loss. However, the impact of childhood hearing loss on temporal resolution, and its implications for modifications to service provision, are not well understood. The goals of this research were to (1) quantify the effect of auditory experience on the development of temporal resolution in children with hearing loss, (2) evaluate the extent to which amplification can restore measures of temporal resolution, and (3) explore the impact of temporal resolution and amplification on speech recognition. We obtained forward-masked thresholds, gap-detection thresholds, and estimates of speech recognition performance in conditions with and without amplification. Participants included children and adults with hearing loss, and control groups with normal hearing. We also assessed the relationship between temporal resolution and both hearing-aid use and speech intelligibility. Two hypotheses were tested. The first was that children with greater auditory experience show less delay in the development of temporal resolution than children with less experience. The second was that audibility and compression speed can be manipulated to improve temporal resolution for children with hearing loss.
The findings showed that temporal resolution improved throughout childhood. In general, improvements in temporal resolution with age were similar for the children with and without hearing loss. However, children with hearing loss exhibited deficits in some of the measures. We found that while amplification improved access to temporal cues, this improvement was typically greater in adults than in children with hearing loss. Lastly, speech recognition data were obtained using amplification settings that varied in the extent to which they provided access to temporal cues in speech. Superior speech recognition occurred when using settings that better restored access to temporal cues. This work has provided new knowledge about the impact of childhood hearing loss on temporal resolution and has implications for improving speech recognition with amplification for individuals with hearing loss.
As part of this project, Marc Brennan developed software in collaboration with the Technology Core that generated the stimuli, controlled stimuli playback and recorded participant responses. This computer program has been shared with researchers at the University of Toronto and the National Center for Rehabilitative Researchers. Once data from the studies using these programs have been published, we plan to make this program available for others on the BTNRH website.
Principal Investigator: Katherine Gordon, Ph.D.
We know that children vary widely in their ability to learn new words. Some children learn new words quickly. Other children have difficulty learning new words even when they do not have developmental language disorders (DLD).
The overall goal of this project is to understand why preschool-age children vary in how quickly they learn new words and in how long they remember those words. In addition, certain types of training may be particularly helpful for some children but not others. Thus, we will determine which teaching strategies support word learning in the majority of children, and which strategies are particularly helpful for children who struggle with word learning. The results from this project will help teachers support word learning across a variety of learners in their classrooms.
Principal Investigator: Kristen Janky, Ph.D.
The consequences of vestibular loss in adults have been documented to include reduced visual acuity, increased falls, decreased quality of life, and decreased social interactions; however, the extent to which vestibular loss affects children is unknown. In children with hearing loss, the presence of vestibular loss may result in gross motor developmental delay and reduced visual acuity, requiring additional habilitation. However, vestibular testing is not routine in the pediatric population, and vestibular habilitation is seldom considered for children with hearing loss. The long-term goal of this research program is to diagnose and then minimize the impact of vestibular loss in children with hearing loss. This project characterized vestibular loss in children with hearing loss and evaluated the effect of vestibular loss on static and dynamic visual acuity in these children. We focused on children with hearing loss who use a cochlear implant, who had increased risk of vestibular loss due to implantation and the etiology of their hearing loss.
Children with cochlear implants were found to have a significantly higher prevalence of vestibular loss compared to children with normal hearing. Additionally, the relationship between severity of vestibular loss and gross motor performance was investigated. Children with cochlear implants who have vestibular loss performed significantly more poorly on gross motor outcomes compared to children with normal hearing. Moreover, significant vestibular loss could be predicted by performance on gross motor measures. Static and dynamic visual acuity were assessed while sitting, under increasing demands of maintaining upright stance, and with increasing optotype complexity. Children with cochlear implants who have vestibular loss had significantly reduced dynamic visual acuity, but not static visual acuity; however, their static visual acuity was significantly reduced as optotype complexity increased. This work provided new knowledge regarding the prevalence of vestibular loss and the extent to which vestibular loss affects visual acuity in children with hearing loss. It has further advanced our understanding of the natural compensation process of vestibular loss in children with hearing loss and has led to further investigations regarding the identification, implications, and habilitation of vestibular loss in these children.
Principal Investigator: Dawna Lewis, Ph.D.
Children with mild bilateral hearing loss and children with unilateral hearing loss experience greater speech-perception difficulties in noise and reverberation than children with normal hearing, as well as potential delays in speech and language development and educational progress. Mild bilateral hearing loss and unilateral hearing loss may reduce the quantity and quality of auditory experiences. The combined effects of acoustic environment, elevated hearing thresholds, and the lack or limited use of amplification are likely to produce inconsistent audibility during auditory-skill and speech/language development. If these abilities are immature, children with mild bilateral hearing loss and unilateral hearing loss may need to devote greater cognitive effort to understanding speech, leaving fewer resources for other complex processes. The availability and use of visual information also may impact speech understanding for these children. Research examining auditory and audiovisual speech perception in complex listening conditions is limited in children with mild bilateral hearing loss and unilateral hearing loss. This study was designed to document how these children compare to peers with normal hearing on these skills. The overall goal was to improve communication access for children with mild bilateral hearing loss or unilateral hearing loss by evaluating how dynamic features of multi-source environments, in isolation and in combination, impact speech understanding. The project consisted of two aims: first, to identify auditory and audiovisual factors that influence comprehension in complex acoustic environments for children with minimal/mild hearing loss; and second, to examine comprehension in complex acoustic environments for these children.
The results of these studies indicate that children with mild bilateral hearing loss and unilateral hearing loss perform more poorly on tasks that require them to locate talkers in complex acoustic conditions. Depending on the task and environment, performance differences were observed between the two groups of children with hearing loss. This information adds to our understanding of the use of auditory and multimodal skills for listening in complex environments and is a first step toward identifying those children with mild bilateral hearing loss or unilateral hearing loss who may be at greater risk for difficulties.
To appreciate the interaction between auditory skills and real-world listening, it is important to examine performance in realistic environments during complex listening tasks. Laboratory studies only provide an approximation of these environments and testing in actual classrooms lacks experimental control due to their active nature. This project also developed learning tasks representative of typical classroom activities for use in novel laboratory simulations of plausible classroom environments. Studies in our lab have indicated that children with mild bilateral hearing loss and unilateral hearing loss perform more poorly than peers with normal hearing during these tasks. New stimuli have been developed to allow continuation of studies examining performance during complex tasks. Results of an exploratory study of virtual reality technology suggest that it is a viable option for use in simulating real-world visual environments in laboratory settings. The use of complex listening tasks and environments can provide performance information that will lead to changes in the ways children with mild bilateral hearing loss and unilateral hearing loss are served.
Principal Investigator: Kanae Nishi, Ph.D.
The overall goal of this project was to compare bilingual (Spanish/English) and monolingual (English-only) listeners on consonant perception in noise to quantify the performance gap associated with listeners' language background. The focus was on consonants due to their high relevance to speech perception in individuals with hearing loss and to phonological differences between Spanish and English. Adults who have been bilingual since early childhood perform more poorly than monolingual adults on speech perception in adverse listening conditions, even though they can perform on par with monolinguals in quiet. Young children with normal hearing and persons with hearing loss are also known to show a greater detrimental impact of adverse listening conditions than adults with normal hearing. Therefore, treatment of bilingual children with hearing loss presents a greater challenge for speech-language pathologists and audiologists. However, in contrast to the accumulating evidence on the impact of bilingualism on language development and the availability of evidence-based assessment tools, there is no evidence on the synergistic impact of bilingualism and hearing loss on children's language development, let alone assessment tools. As a first step toward addressing this issue, this research project hypothesized that the size and detailed characteristics of the bilingual disadvantage in adverse listening conditions for normal-hearing listeners vary with the contextual cues available in utterances. To test this hypothesis, this project compared recognition of consonants embedded in speech materials with varying contextual cues (non-words, non-words similar to real words, real words, and real words in sentences with high and low context) with and without background noise.
This project consisted of two aims: first, to examine the influence of English language skill levels on consonant perception in Spanish-speaking bilingual listeners as compared to English-speaking monolingual listeners; and second, to examine the influence of hearing loss on consonant perception in Spanish-speaking bilingual listeners as compared to English-speaking monolingual listeners.
Altogether, results showed no striking bilingual disadvantage on average or for individual consonant recognition regardless of the background noise levels tested (-5 dB, 0 dB, 5 dB SNR, and quiet), contextual cues (non-words vs. real words), listener age (6-13 years old), or the addition of simulated mild-to-moderate sensorineural hearing loss. Although limited to the listening conditions tested (i.e., static noise, speech presented at 0° azimuth, within critical distance, no reverberation), these results provide new evidence that the detrimental impact of background noise on phoneme recognition is not greater for bilingual children than for monolingual peers. These results can lead to further investigation as to the interaction between phoneme recognition accuracy and the effective use of contextual cues in speech perception in more realistic adverse listening conditions for bilingual listeners.
This project demonstrated that the effect of simulated mild-to-moderate sensorineural hearing loss on consonant recognition accuracy in non-words is similar between monolingual and bilingual listeners with normal hearing. Further, at the phoneme level, regardless of the background noise levels tested, the lexical status of stimulus words, or listener age, consonant recognition accuracy was similar between monolingual and bilingual listeners with normal hearing. This study also produced a sentence bank offering an unprecedentedly large library of sentences that has been validated with listeners across a wide age range and is appropriate for use with school-age children.
Principal Investigator: Nicholas Smith, Ph.D.
The overall goal of this project was to evaluate how mothers modify their speech when addressing their children under conditions of reduced audibility, such as at various levels of background noise, and to test whether these speech modifications produce perceptual benefits in terms of increased speech intelligibility. Previous work on caregiver speech acoustics had focused primarily on speech to young infants as a means of promoting long-term speech and language development, and had not examined whether child-directed speech enhances the intelligibility of speech for children.
The project consisted of three aims. The first aim was to evaluate how mothers adapt their speech when communicating to children under difficult listening conditions. The second aim was to evaluate whether the mothers' speech adaptations are effective in increasing the intelligibility of speech to children. And finally, the third aim was to evaluate how children adapt their visual strategies in an audiovisual speech-perception task as a function of noise level and hearing loss.
These experiments indicate that mothers modify both the acoustical and visual components of their speech to children. For instance, they use a higher pitch and speak louder and slower when talking to their children than to another adult. The visual modifications consist of exaggerated facial movements related to their speech. This project demonstrated that both the acoustical and visual modifications in child-directed speech lead to increased speech intelligibility in conditions of background noise. These findings are important because they demonstrate that caregivers continue to modify their speech to children throughout early childhood in ways that support children's speech and language perception.
Principal Investigator: Gabrielle Merchant, Au.D., Ph.D.
Our current work is focused on improving the differential diagnosis of ear infections and middle-ear fluid (otitis media). Otitis media is one of the most common childhood diseases, but there is little consensus with respect to treatment options, which include watch-and-wait approaches, antibiotic use, and surgical placement of pressure equalization (PE) tubes. Currently, there is no evidence-based method to determine which treatment option is most appropriate for a given patient. The goal of this project is to develop methods to differentiate variations in otitis media in order to guide treatment decisions. Differentiating cases of otitis media that require treatment from those that do not would represent a substantial advance in terms of public health. Our findings have the potential to influence the understanding and therapeutic management of childhood hearing loss related to otitis media.
This project is funded by a NIH Centers for Biomedical Research Excellence (COBRE) grant (NIH-NIGMS / 5P20GM109023-04).
Principal Investigator: Kaylah Lalonde, Ph.D.
Visual speech helps in many ways. It helps us to know when to listen, fills in missing auditory speech information, and helps to separate speech from similar competing sounds. We are studying how well children at various ages can use visual speech in these different ways. Experiments examine how sensitive children are to different audiovisual cues and how much these different mechanisms contribute to individual differences in children's audiovisual speech enhancement.
This project is funded by a NIH Centers for Biomedical Research Excellence (COBRE) grant (NIH-NIGMS / 5P20GM109023-04).
Principal Investigator: Angela AuBuchon, Ph.D.
What is Working Memory?
Working memory refers to the ability to remember information for short periods of time in order to solve a problem or accomplish a task. Remembering a list of ingredients while navigating the grocery store, following the steps of a recipe, and converting fractions for a double-batch of cookies all require working memory. Working memory isn't only useful for cooking. It's used whenever you have to figure out something new, and it supports long-term learning.
Why is language important to working memory?
Working memory is important for making sense of language. We can only hear or read words one-at-a-time. Therefore, we must hold onto the words—and their order—while simultaneously creating meaning and thinking about our own response. Children with hearing loss must especially rely on working memory. Because the auditory signal they get is sometimes ambiguous, children with hearing loss often problem-solve to work out the meaning of what they hear.
Current Grants:
THE DEVELOPMENT OF "SELF-TALK" AS A MEMORY STRATEGY
It is common for adults to talk to themselves when they need to remember something. When an adult silently says the same word or phrase over and over in an attempt to commit it to memory, we call this rehearsal. Unfortunately, we know much less about how and when children rehearse. In this project, we measure how children's rehearsal changes as the memory task gets harder. We also want to know how children's use of rehearsal changes over time. This project is funded by a NIH Centers for Biomedical Research Excellence (COBRE) grant (NIH-NIGMS / 5P20GM109023-04).
Principal Investigator: Hope Sparks Lancaster, Ph.D.
Participate
We are not recruiting research participants at this time. Please check back later for new and upcoming research studies.