June 3: Joseph C.Y. Lau and Sandra R. Waxman

Joseph C.Y. Lau will be presenting joint work with Sandra R. Waxman from the Infant and Child Development Center at NU. Sign up for the listserv to receive Zoom details.

Which acoustic features do infants use to link language (and a few other signals) to cognition? A machine-learning approach

Language is central to human cognition. As a hallmark of our species, language is a powerful tool that permits us to go beyond the here-and-now, to establish mental representations, and to communicate the contents of our minds to others. Research in our group has tackled the question of how, and how early, infants establish a language-cognition link. Two decades of behavioral studies within our group have documented that by 3 months of age, infants link language (and a few other signals) to cognition (measured by object categorization). Interestingly, this cognitive advantage is evident when infants listen to infant-directed speech (IDS) in their own native language (e.g. English for infants from an English-speaking environment) and in some (e.g. German), but not all (e.g. Cantonese), non-native languages. Decades of studies have shown that speech processing in early infancy is tuned by the language environment. We have shown that this perceptual tuning has downstream conceptual consequences for which signals infants link to cognition. Moreover, this link between language and cognition is disrupted when language samples are perturbed (e.g. presented in reverse). Surprisingly, at 3-4 months, language is not the only signal that supports infant cognition: listening to vocalizations from non-human primates (e.g. blue-eyed Madagascar lemurs), but not birds (e.g. zebra finches), also supports infant object categorization.

But which acoustic features, singly or in combination, do infants use to link this small subset of signals to cognition? Addressing this question is crucial to understanding the underpinnings of the language-cognition link. The proposed project tests the hypothesis that there are acoustic properties shared among our identified “privileged” signals (e.g. English IDS, German IDS and lemur calls), and that these properties are also instrumental in acoustic and speech processing in early infancy. The goal is to identify common acoustic features in a data-driven way, using supervised machine-learning models that search across multiple acoustic domains for representations that maximally classify the natural classes of “privileged” and “non-privileged” signals, separately for linguistic (e.g. English and German IDS vs. Cantonese IDS) and non-linguistic vocalizations (e.g. lemur calls vs. zebra finch songs), and across all signals regardless of their linguistic vs. non-linguistic nature. By also modeling how the classification of “privileged” and “non-privileged” signals changes between 4 months and 6 months (e.g. lemur calls are “privileged” at 4 months but “non-privileged” at 6 months), the project seeks not only to pinpoint which acoustic features undergird the striking behavioral findings, but also to model how developmental changes in the salience of acoustic features may subserve behavioral changes from 3 to 7 months. If successful, this project will also shed light on the evolutionary and developmental antecedents of the language-cognition link. The modeling results will further allow us to evaluate the hypothesis that there exist separate but parallel pathways by which linguistic and non-linguistic signals facilitate infant cognition, based on different combinations of acoustic parameters. I will close by discussing how this study may illuminate the fundamental role of prenatal and postpartum neurophysiological sensory experience in establishing the uniquely human language-cognition link.
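To make the classification step concrete, here is a minimal sketch of a supervised classifier over per-recording acoustic features, with feature importances as a rough proxy for which cues drive the “privileged” vs. “non-privileged” split. This is not the authors' pipeline; the file name, feature names, and label column are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): a supervised classifier over
# per-recording acoustic features, asking which features best separate
# "privileged" from "non-privileged" signals. The file name, feature names,
# and label column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("signals.csv")  # one row per recording (hypothetical file)
features = ["f0_mean", "f0_range", "spectral_tilt", "rhythm_variability"]  # assumed
X, y = df[features], df["privileged"]  # y: 1 = privileged, 0 = non-privileged

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Feature importances as a rough guide to which cues carry the classification.
clf.fit(X, y)
for name, importance in sorted(zip(features, clf.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```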

Online talk May 13: Jeffrey Lamontagne (McGill)

Our meetings this quarter will be held on Zoom. Please sign up for the listserv to receive the Zoom link (instructions in sidebar).


Jeffrey Lamontagne

Finding Grammar Amidst Optionality and Opacity: High-vowel tenseness in Laurentian French
Laurentian French (also commonly called Canadian French, Quebec French or Québécois) is characterised by a complex combination of processes affecting the tense/lax quality of high vowels. Laxing in word-final syllables is completely predictable, but laxing in non-final syllables combines optionality and opacity through harmony (local and non-local), disharmony, retensing, and vowel deletion. While laxing processes have received considerable attention in the literature (e.g. Dumas, 1987; Poliquin, 2006; Fast, 2008; Bosworth, 2011), all quantitative data currently available come from acceptability judgments collected by Poliquin, rather than from production. The lack of production data stems from the fact that tense and lax high vowels cannot be classified using only one or two acoustic dimensions (Arnaud et al., 2011; Sigouin, 2013).
In collaboration with Peter Milne, I trained a forced aligner on tense and lax high vowels in final syllables (where tenseness is fully predictable) and used it to classify tokens in non-final syllables, thereby creating the first corpus annotated for high-vowel tenseness. Drawing on 24,000 words with high vowels in non-final syllables, I refute Poliquin’s (2006) proposal that learners have insufficient input to generate a grammar that includes the phonological processes affecting high-vowel tenseness. I demonstrate that the community-level grammar a learner is expected to acquire largely reflects the broad processes proposed in the literature, but that certain aspects of those processes differ from those suggested in the literature (e.g. the directionality of local harmony). Finally, I argue that these processes are phonological in nature; they cannot be explained purely in terms of undershoot or (non-phonologised) coarticulation.
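The general logic of that step (train where the answer is known, label where it is not) can be sketched as follows. This is only an illustration with a generic acoustic classifier, not Milne's forced aligner, and the column names are assumptions.

```python
# Illustrative only (a generic classifier, not Milne's forced aligner):
# train on final-syllable vowels, where tenseness is fully predictable,
# then label vowels in non-final syllables. Column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

tokens = pd.read_csv("high_vowels.csv")        # hypothetical acoustic table
acoustic = ["F1", "F2", "F3", "duration"]      # assumed multi-dimensional cues

final = tokens[tokens["syllable"] == "final"]        # tenseness known here
nonfinal = tokens[tokens["syllable"] == "nonfinal"]  # tenseness unknown here

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(final[acoustic], final["tense"])     # labels come from predictability

# Annotate the non-final tokens, yielding a corpus coded for tenseness.
nonfinal = nonfinal.assign(tense_pred=model.predict(nonfinal[acoustic]))
print(nonfinal["tense_pred"].value_counts())
```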

Online talk May 20: Melissa Baese-Berk (University of Oregon)

Our meetings this quarter will be held on Zoom. Please sign up for the listserv to receive the Zoom link (instructions in sidebar).

Perception of and adaptation to non-native and unfamiliar speech

Listening to unfamiliar speech, including non-native speech, often results in substantial challenges for listeners. The consequences of these challenges are far-reaching (i.e., costs for comprehension, memory, and other downstream processing), and increased costs for listening to unfamiliar speech exist even when the speech is fully intelligible (e.g., McLaughlin & Van Engen, 2020). I will present a series of studies aimed at investigating what makes perception of non-native speech especially challenging and what factors impact adaptation to this speech. I will also show new data suggesting that social factors, in addition to linguistic properties, can impact adaptation to unfamiliar speech.

Talk March 4: Matt Goldrick

Modeling Liaison using Gradient Symbolic Representations

(Joint work with Paul Smolensky and Eric Rosen, Johns Hopkins University & Microsoft Research)

The Gradient Symbolic Computation framework claims that the mental representations underlying speech are abstract, symbolic, and continuous, such that different symbolic constituents can be present within a structure to varying degrees. I’ll discuss how this framework can be used to model the distribution of liaison consonants in French, proposing an algorithm that learns the relative activation of symbolic constituents.
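As a toy illustration of the gradient-activation idea (my own simplification, not the authors' model): suppose the liaison consonant is lexically present at partial activation, and whether it surfaces falls out of weighted-constraint (Harmonic Grammar) optimization. The weights and the 0.5 activation below are invented for demonstration.

```python
# Toy illustration (my own simplification, not the authors' model): a liaison
# consonant stored at partial activation either surfaces or not, depending on
# weighted-constraint (Harmonic Grammar) optimization. Weights and the 0.5
# activation are invented for demonstration.
W_MAX, W_DEP, W_MARK = 2.0, 1.5, 2.5  # assumed constraint weights

def harmony(pronounce, activation, next_word_vowel_initial):
    """Harmony (negative penalty) of pronouncing or omitting the liaison consonant."""
    h = 0.0
    if pronounce:
        h -= W_DEP * (1.0 - activation)   # boosting a partial symbol to 1.0 costs
        if not next_word_vowel_initial:
            h -= W_MARK                   # markedness: consonant cluster / coda
    else:
        h -= W_MAX * activation           # deleting a partially active symbol costs
    return h

def liaison_surfaces(activation, vowel_initial):
    return harmony(True, activation, vowel_initial) > harmony(False, activation, vowel_initial)

# e.g. "petit ami" (vowel-initial) vs. "petit copain" (consonant-initial)
print(liaison_surfaces(0.5, vowel_initial=True))   # True: liaison [t] is pronounced
print(liaison_surfaces(0.5, vowel_initial=False))  # False: liaison [t] is absent
```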

Talk Feb 5: Ann Bradlow

Global language systems and phonetics

I will present two approaches to language typology and classification that I believe are relevant for our understanding of speech production and perception in the context of extensive multilingualism and language/dialect contact.  Specifically, I will briefly outline (1) the distinction between “Esoteric Languages” and “Exoteric Languages” as discussed in Lupyan and Dale (2010, Language Structure Is Partly Determined by Social Structure, PLoS ONE 5(1): e8559), and (2) the “Global Language System” as developed in de Swaan (2002, Words of the World: The Global Language System). Together, these two views raise a number of issues and questions that are potentially instructive for the evolving field of experimental and corpus phonetics.

Talk Jan 15: Uriel Cohen Priva

Understanding lenition through its causal structure

Consonant lenition refers to a set of seemingly unrelated processes that are grouped together by their tendency to occur in similar environments (e.g. intervocalically) and under similar conditions (e.g. in faster speech). These processes typically include degemination, voicing, spirantization, approximantization, tapping, debuccalization, and deletion (Hock 1986). So, we might ask: what are the commonalities among all these processes, and why do they happen? Different theories attribute lenition to assimilation (Smith 2008), effort-reduction (Kirchner 1998), phonetic undershoot (Bauer 2008), prosodic smoothing (Katz 2016), and low informativity (Cohen Priva 2017). We argue that it is worthwhile to focus on variable lenition (pre-phonologized processes) in conjunction with two phonetic characteristics of lenition: reduced duration and increased intensity. Using mediation analysis, we find causal asymmetries between the two, with reduced duration causally preceding increased intensity. These results are surprising, as increased intensity (increased sonority) is often regarded as the defining property of lenition. The results not only simplify the assumptions associated with effort-reduction, prosodic smoothing, and low informativity, but they are also compatible with phonetic undershoot accounts.
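For readers unfamiliar with mediation analysis, the sketch below shows the standard regression-based decomposition of a total effect into a direct component and a duration-mediated component. It is a simplified stand-in, not the authors' exact analysis, and the data file and column names are assumptions.

```python
# A minimal regression-based mediation sketch (Baron & Kenny style), not the
# authors' exact analysis: is the effect of a leniting environment on
# intensity carried by duration? The data file and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("lenition.csv")  # hypothetical: env (0/1), duration, intensity

a = smf.ols("duration ~ env", data=d).fit().params["env"]        # env -> duration
full = smf.ols("intensity ~ env + duration", data=d).fit()
b = full.params["duration"]                                      # duration -> intensity, given env
direct = full.params["env"]                                      # env -> intensity, given duration
total = smf.ols("intensity ~ env", data=d).fit().params["env"]   # env -> intensity

print(f"total effect:      {total:.3f}")
print(f"direct effect:     {direct:.3f}")
print(f"indirect via dur.: {a * b:.3f}")  # portion mediated by duration
```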

Talk January 8: Anne Pycha

Our next presentation will be by Anne Pycha (University of Wisconsin, Milwaukee) on January 8th, 2020 at 4pm in Cresap 101. As usual, it will be followed by a happy hour at Stacked & Folded Evanston. Here are the title and abstract:

Categoricity of segment representations depends upon word context

Exemplar theories and rule-based theories often make opposing predictions about the nature of segment representations. Exemplar theories predict strong categoricity for segments at morpheme or word boundaries: such segments occur in many environments, so their exemplar clouds include a range of phonetic variants over which listeners can generalize. Rule-based theories, on the other hand, predict strong categoricity for segments that participate in contrast or phonological rules, because these segments interact with other segments regardless of phonetic variation. In two studies, we tested these differing predictions by asking American English listeners to judge differences among phonetic variants of consonants occurring in different word contexts: a) at morpheme boundaries without rules, b) at morpheme boundaries with rules, c) word-internally without rules, and d) word-internally with rules. Preliminary results show that listeners are less sensitive to phonetic variation when the consonant occurs at a morpheme boundary, suggesting that the representations of these consonants are more categorical, in line with the predictions of exemplar theory.
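Sensitivity to phonetic variation in discrimination tasks of this kind is often summarized with d′; the abstract does not specify the analysis, so the sketch below is purely illustrative, with made-up response counts.

```python
# Illustrative only: sensitivity in a discrimination task is often summarized
# with d-prime. The abstract does not specify the analysis; counts are made up.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z-scores finite at proportions of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one listener in two word contexts
print(d_prime(42, 8, 12, 38))   # e.g. consonant word-internally
print(d_prime(30, 20, 18, 32))  # e.g. consonant at a morpheme boundary (lower d')
```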

Talk Nov 20 – Timo Roettger

Our next meeting will be on November 20, 2019 at 4PM, featuring a talk by Timo Roettger, a postdoctoral fellow in the Linguistics department. Title and abstract below. As usual, a happy hour will follow at Stacked & Folded Evanston (824 Noyes).

Preregistration – What is it? Why should we do it? And what’s in it for us?
The current publication system incentivizes neither publishing null results nor direct replication attempts. This state of affairs biases the scientific record toward novel findings that appear to support presented hypotheses (referred to as “publication bias”). Moreover, flexibility in data collection, measurement, and analysis (referred to as “researcher degrees of freedom”) can lead to overconfident beliefs in the robustness of a statistical relationship. This flexibility is particularly pronounced in speech sciences, potentially increasing the rate of false discoveries in our own publication record.
One strategy to systematically decrease publication bias and the harmful impact of researcher degrees of freedom is preregistration. A preregistration is a time-stamped document that specifies how data is to be collected, measured, and analyzed prior to data collection. Preregistration is a powerful tool to reduce bias and to facilitate transparency in decision making. This talk introduces the concept of preregistration and discusses its benefits and potential disadvantages for both our scientific field and individual researchers.

November 20, 2019, 4PM to 5PM
Cresap 101, Cresap Laboratory, 2029 Sheridan Rd

Talk Oct 30 – Kasia Hitczenko

Our next speaker will be Kasia Hitczenko, a postdoctoral fellow in the Department of Linguistics at Northwestern.

How context can help in learning sounds from naturalistic speech

Infants learn the sound categories of their language and adults successfully process the sounds they hear, even though sound categories often overlap in their acoustics. Most researchers agree that listeners use context (e.g. who the speaker was, what the neighboring sounds were, etc.) to help disambiguate overlapping categories, and have put forth a number of theories about how contextual information could be used. However, for the most part these theories have been developed by studying simplified speech (synthetic or well-enunciated, controlled lab speech), so it is unclear to what extent these ideas extend to naturalistic speech. Here, I ask how contextual information could be helpful for processing and learning from naturalistic speech of the type that listeners actually hear. I implement two main ways of using context and test their efficacy in separating overlapping categories on naturalistic speech, focusing on the test case of Japanese vowel length. Our results show that well-established results from lab speech do not necessarily generalize to naturalistic speech, and lead to a new proposal for how infants could learn the sounds of their language. Overall, our results reveal the importance of studying infants’ naturalistic input and highlight the value of tools that allow us to do so.
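As one concrete example of "using context": normalizing vowel durations by a local speech-rate estimate before comparing short and long categories. The sketch below uses simulated data and is not the talk's actual models.

```python
# Illustrative sketch with simulated data (not the talk's models): one way to
# "use context" is to normalize vowel duration by a local speech-rate estimate
# before comparing short and long vowel categories.
import numpy as np

rng = np.random.default_rng(0)
rate = rng.uniform(0.06, 0.14, size=1000)         # crude per-utterance rate proxy
short = rate * rng.normal(1.0, 0.15, size=1000)   # short vowels scale with rate
long_ = rate * rng.normal(1.8, 0.15, size=1000)   # long vowels scale with rate

def separation(a, b):
    """Cohen's d between two duration distributions (higher = less overlap)."""
    pooled_sd = np.sqrt((a.var() + b.var()) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd

print("raw durations:  ", round(separation(short, long_), 2))
print("rate-normalized:", round(separation(short / rate, long_ / rate), 2))
```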

October 30th, 2019, 4PM to 5PM
Cresap 101, Cresap Laboratory, 2029 Sheridan Rd

Talk – Oriana Kilbourn-Ceron

At our next meeting (5/1), Oriana Kilbourn-Ceron (LING) will be talking about "Phonological variability at word boundaries: the effect of speech production planning".

Abstract:
“Connected speech processes have played a major role in shaping theories about phonological organization, and how phonology interacts with other components of the grammar. Presenting evidence from English /t/-realizations and French liaison, we argue that the effect of lexical frequency on variability can be understood as a consequence of the narrow window of phonological encoding during speech production planning. By connecting the study of phonological alternations with the study of factors influencing speech production planning, we can derive novel predictions about patterns of variability in external sandhi, and better understand the data that drive the development of phonological theories.”

Our meeting will take place at the regular time and place on Wednesday 05/01 from 4-5pm in Cresap 101. Afterwards, we will have our happy hour at the World of Beer in Evanston.