June 3: Joseph C.Y. Lau and Sandra R. Waxman

Joseph C.Y. Lau will be presenting joint work with Sandra R. Waxman from the Infant and Child Development Center at NU. Sign up for the listserv to receive Zoom details.

Which acoustic features do infants use to link language (and a few other signals) to cognition? A machine-learning approach

Language is central to human cognition. As a hallmark of our species, language is a powerful tool that permits us to go beyond the here-and-now, to establish mental representations, and to communicate the content of our minds to others. Research in our group has tackled the question of how, and how early, infants establish a language-cognition link. Two decades of behavioral studies within our group have documented that by 3 months of age, infants link language (and a few other signals) to cognition (measured by object categorization). Interestingly, this cognitive advantage is evident when infants listen to infant-directed speech (IDS) in their own native language (e.g. English for infants from an English-speaking environment) and in some (e.g. German), but not all (e.g. Cantonese), non-native languages. Decades of studies have shown that speech processing in early infancy is tuned by the language environment. We have shown that this perceptual tuning has downstream conceptual consequences for which signals infants link to cognition. Moreover, the link between language and cognition is disrupted when language samples are perturbed (e.g. presented in reverse). Surprisingly, at 3-4 months, language is not the only signal that supports infant cognition: listening to vocalizations from non-human primates (e.g. blue-eyed Madagascar lemurs), but not birds (e.g. zebra finches), also supports infant object categorization. But which acoustic features, singly or in combination, do infants use to link this small subset of signals to cognition? Addressing this question is crucial to understanding the underpinnings of the language-cognition link. The proposed project tests the hypothesis that there are acoustic properties shared among our identified “privileged” signals (e.g. English IDS, German IDS, and lemur calls), and that these properties are also instrumental in acoustic and speech processing in early infancy.
The goal is to identify these common acoustic features using a data-driven approach: supervised machine-learning models that search across multiple acoustic domains for representations that maximally classify the natural classes of “privileged” and “non-privileged” signals, either separately for linguistic (e.g. English and German IDS vs. Cantonese IDS) and non-linguistic vocalizations (e.g. lemur calls vs. zebra finch songs), or across all signals regardless of their linguistic vs. non-linguistic nature. In addition, by modeling how the classification of “privileged” and “non-privileged” signals differs at 4 months vs. 6 months (e.g. lemur calls are “privileged” at 4 months but “non-privileged” at 6 months), the project seeks not only to pinpoint which acoustic features undergird these striking behavioral findings, but also to model how developmental changes in the salience of acoustic features may subserve behavioral changes from 3 to 7 months. If successful, the project will also shed light on the evolutionary and developmental antecedents of the language-cognition link. Modeling results will also allow us to evaluate the hypothesis that separate but parallel pathways, based on different combinations of acoustic parameters, allow linguistic and non-linguistic signals to facilitate infant cognition. Finally, we will discuss how this study may illuminate the fundamental role of prenatal and postpartum neurophysiological sensory experience in establishing the uniquely human language-cognition link.
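To make the classification idea concrete, here is a minimal, self-contained sketch of what a supervised search for discriminative acoustic features can look like. This is not the project's actual pipeline: the feature names (e.g. f0_mean, harmonicity) and the data are entirely invented, and a simple nearest-centroid classifier stands in for whatever models the project actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-clip acoustic features (names are illustrative, not the
# project's actual feature set).
feature_names = ["f0_mean", "f0_range", "spectral_centroid", "harmonicity"]

# Synthetic data: 200 clips, label 1 = "privileged" (e.g. English IDS,
# lemur calls), label 0 = "non-privileged" (e.g. Cantonese IDS, birdsong).
n = 200
X = rng.normal(size=(n, len(feature_names)))
y = rng.integers(0, 2, size=n)

# Plant a shared acoustic signature in two features, mimicking the
# hypothesis that "privileged" signals share acoustic properties.
X[y == 1, 0] += 1.5  # higher mean pitch
X[y == 1, 3] += 1.0  # greater harmonicity

# Train/test split.
idx = rng.permutation(n)
train, test = idx[:150], idx[150:]

# Nearest-centroid classifier: assign each test clip to the closer
# class centroid in feature space.
mu0 = X[train][y[train] == 0].mean(axis=0)
mu1 = X[train][y[train] == 1].mean(axis=0)
d0 = np.linalg.norm(X[test] - mu0, axis=1)
d1 = np.linalg.norm(X[test] - mu1, axis=1)
pred = (d1 < d0).astype(int)
acc = (pred == y[test]).mean()

# Which features separate the classes: absolute centroid difference
# per feature, a crude stand-in for feature-importance analysis.
sep = np.abs(mu1 - mu0)
top = feature_names[int(sep.argmax())]
print(f"test accuracy: {acc:.2f}, most discriminative feature: {top}")
```

In the real project, the feature vectors would come from acoustic analysis of the recorded stimuli rather than synthetic draws, and the interesting output is which features the classifier relies on, not the accuracy itself.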
