Event Title

Infant multisensory attention to social events predicts children’s expressive vocabulary

Presenter Information

Sheila Santos

Department

Psychology

Faculty Advisor

James Torrence Todd

Start Date

29-9-2020 2:00 PM

End Date

29-9-2020 3:00 PM

Abstract

A significant goal of developmental science is characterizing individual differences in basic skills that predict more complex developmental outcomes. Prior research has demonstrated that multisensory attention skills (e.g., shifting and maintaining attention to audiovisual events, such as a speaking face) provide a foundation for language, social, and cognitive outcomes (e.g., Bahrick & Lickliter, 2012). For example, greater detection of intersensory redundancy (temporally synchronous visual and auditory stimulation) predicts better cognitive and language outcomes in 2- to 5-year-olds (Bahrick et al., 2018). However, because that study assessed only within-age relations, it remains unclear whether multisensory attention skills assessed in infancy (the age at which these skills first emerge) predict later language outcomes. Here, we extend the findings of Bahrick et al. (2018) by assessing longitudinal relations between infant multisensory attention skills at 12 months and a later language outcome, expressive vocabulary size, at 36 months. Specifically, we examined whether infant multisensory attention skills for social events (matching synchronous visual and auditory information from women speaking, i.e., intersensory matching, along with maintaining and shifting attention to faces and voices) would predict children's expressive vocabulary. At 12 and 36 months, multisensory attention skills were assessed with the Multisensory Attention Assessment Protocol (MAAP; Bahrick et al., 2018), a novel individual-difference measure of three basic indices of attention (intersensory matching, attention maintenance, and attention shifting) to audiovisual social and nonsocial events. At 36 months, expressive vocabulary was assessed with the Expressive Vocabulary Test, 2nd edition (EVT-2; Williams, 2007), a standardized measure of vocabulary size. We predicted that better multisensory attention skills at 12 months (longer attention maintenance, faster shifting, and greater intersensory matching of audiovisual speech) would predict larger expressive vocabulary at 36 months. We also assessed multisensory attention skills at 36 months to control for within-age relations between multisensory attention and expressive vocabulary.

Children (N = 44) from a larger sample in an ongoing longitudinal study received the MAAP at 12 and 36 months. Each MAAP trial began with a 3-s silent central visual event (animated geometric shapes), immediately followed by two 12-s lateral events of women speaking (social) or objects being dropped into a container (nonsocial; see Figure 1). The movements of one lateral event were temporally synchronous with its natural soundtrack, while the movements of the other were asynchronous. Three multisensory attention skills were calculated on each trial: intersensory matching, attention maintenance, and speed. Intersensory matching was the proportion of looking directed to the synchronous event: total looking time to the synchronous event divided by total looking time to both the synchronous and asynchronous events. Attention maintenance (sustained attention) was total looking time to the lateral events divided by trial length. Speed was the latency to shift attention from the silent central visual event to one of the lateral events.
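To make the three trial-level indices concrete, here is a minimal sketch of their computation in Python. The function and variable names (maap_trial_indices, look_sync, look_async, and so on) are hypothetical placeholders for illustration, not the MAAP's actual coding scheme.

def maap_trial_indices(look_sync, look_async, trial_length, shift_latency):
    # look_sync / look_async: seconds spent looking at the synchronous /
    # asynchronous lateral event; trial_length: lateral-event duration (12 s);
    # shift_latency: seconds to shift from the central to a lateral event.
    look_lateral = look_sync + look_async
    # Intersensory matching: proportion of lateral looking to the synchronous event.
    matching = look_sync / look_lateral if look_lateral > 0 else float("nan")
    # Attention maintenance: proportion of the trial spent on the lateral events.
    maintenance = look_lateral / trial_length
    # Speed: latency to shift from the central event to a lateral event.
    return {"matching": matching, "maintenance": maintenance, "speed": shift_latency}

# Example trial: 6 s on the synchronous event, 3 s on the asynchronous event,
# and a 0.8-s shift latency -> matching = .67, maintenance = .75, speed = 0.8.
print(maap_trial_indices(6.0, 3.0, 12.0, 0.8))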
At 36 months, the same participants received the EVT-2, which measures expressive vocabulary size in English (Williams, 2007). Children were shown enlarged, colored pictures in an assessment booklet and asked to identify as many as they could. Correlations revealed that 36-month intersensory accuracy for social (but not nonsocial) events predicted concurrent 36-month EVT scores, r = .30, p = .03. Further, 12-month intersensory accuracy for social (but not nonsocial) events was marginally predictive of 36-month EVT scores, r = .26, p = .06. However, after controlling for 36-month intersensory accuracy, 12-month intersensory accuracy was a significant predictor of EVT scores, uniquely accounting for 10% of the variance, p = .02 (total variance accounted for: 22%). In contrast, neither 12- nor 36-month attention maintenance or shift speed significantly predicted 36-month EVT scores.

This study replicates and extends previous research (Bahrick et al., 2018) by demonstrating that infant intersensory matching of social events (but not attention maintenance or speed of shifting) predicts childhood language outcomes. Consistent with our predictions, 12-month intersensory matching was a significant predictor of 36-month expressive vocabulary, and it accounted for a surprising 10% of unique variance. Thus, some of the individual differences in children's language outcomes can be predicted by intersensory processing assessed as early as 12 months. These findings are among the first to demonstrate links between infant intersensory processing and child expressive language outcomes, and early assessments of intersensory processing may aid in identifying children who are at risk for language delays during the early stages of development.
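The control analysis reported above follows the logic of a two-step hierarchical regression. A minimal sketch, assuming a hypothetical per-child file maap_evt.csv with columns evt (EVT-2 score), acc_12, and acc_36 (12- and 36-month intersensory accuracy for social events):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("maap_evt.csv")  # hypothetical file, one row per child

# Step 1: concurrent predictor only (36-month intersensory accuracy).
step1 = smf.ols("evt ~ acc_36", data=df).fit()

# Step 2: add 12-month intersensory accuracy.
step2 = smf.ols("evt ~ acc_36 + acc_12", data=df).fit()

# Unique variance contributed by the 12-month predictor (delta R-squared);
# the abstract reports roughly .10 unique and .22 total.
delta_r2 = step2.rsquared - step1.rsquared
print(f"Total R^2 = {step2.rsquared:.2f}, unique 12-month R^2 = {delta_r2:.2f}")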

File Type

Event
