Events

Upcoming Events

Carolin Dudschig (Universität Tübingen) on The N400 and Large Language Models: Prediction- vs. count-based methods

On 23 April, Carolin Dudschig from Universität Tübingen will talk about The N400 and Large Language Models: Prediction- vs. count-based methods.

Time: 13:15

On site: SOL A158

Zoom Link: https://lu-se.zoom.us/j/62491331134

Abstract:

The N400 is a well-established event-related potential (ERP) component that is widely studied in cognitive neuroscience. It is typically larger following the detection of a semantic violation or other incongruities within language processing tasks. The N400 provides valuable insights into the cognitive processes underlying language comprehension. For example, it has been used to investigate whether linguistic and world-knowledge violations are integrated in parallel during comprehension (e.g., Dudschig, Maienborn & Kaup, 2016; Hagoort, Hald, Bastiaansen & Petersson, 2004). Nevertheless, it is still under debate which processes or information are reflected in the N400 and whether the integration of basic operators such as negation is reflected in the N400 (e.g., Dudschig et al., 2019). The accounts range from integration views, which hold that the N400 reflects integration processes, to the lexical view, which holds that the N400 is non-combinatorial in nature, and to dominant prediction-based accounts that focus on the predictability of the critical word (for a review, see Kutas & Federmeier, 2011). Recent developments in large language models (LLMs) have opened new avenues for investigating which processes are reflected in the N400 and what insights can be gained from examining it. This presentation investigates the extent to which N400 amplitude is better explained by pre-determined discrete condition labels (e.g., correct vs. world-knowledge vs. semantic violation) or by continuous word-pair embedding measures derived from multiple LLMs. Overall, the presentation aims to bridge the gap between traditional N400 research and the emerging field of natural language modeling.
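To make the model comparison concrete, here is a minimal Python sketch (entirely synthetic data, not the speaker's analysis code) of how discrete condition labels and a continuous embedding-based similarity measure can be compared as predictors of single-trial N400 amplitude; the variable names and simulated effect sizes are illustrative assumptions only.

```python
# Illustrative sketch with synthetic data -- not the speaker's analysis.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 300

# Hypothetical discrete condition labels for each trial.
labels = np.array(["correct", "world_knowledge", "semantic"])
conditions = rng.choice(labels, size=n_trials)

# Hypothetical continuous predictor: similarity between the critical word's
# embedding and its context (simulated here, loosely tied to condition).
base = {"correct": 0.70, "world_knowledge": 0.45, "semantic": 0.25}
similarity = np.array([base[c] for c in conditions]) + rng.normal(0, 0.1, n_trials)

# Simulated single-trial N400 amplitude: more negative when similarity is low.
n400 = -5.0 * (1.0 - similarity) + rng.normal(0, 1.0, n_trials)

# Predictor set 1: one-hot coded condition labels.
X_discrete = (conditions[:, None] == labels[None, :]).astype(float)
# Predictor set 2: the continuous embedding-based similarity.
X_continuous = similarity.reshape(-1, 1)

for name, X in [("discrete labels", X_discrete), ("embedding similarity", X_continuous)]:
    r2 = cross_val_score(LinearRegression(), X, n400, cv=5, scoring="r2")
    print(f"{name:>20}: mean cross-validated R^2 = {r2.mean():.3f}")
```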


Dudschig, C., Mackenzie, I. G., Maienborn, C., Kaup, B., & Leuthold,  
H. (2019). Negation and the N400: Investigating temporal aspects of  
negation integration using semantic and world-knowledge violations.  
Language, Cognition and Neuroscience, 34(3), 309-319.

Dudschig, C., Maienborn, C., & Kaup, B. (2016). Is there a difference  
between stripy journeys and stripy ladybirds? The N400 response to  
semantic and world-knowledge violations during sentence processing.  
Brain and Cognition, 103, 38-49.

Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. (2004).  
Integration of word meaning and world knowledge in language  
comprehension. Science, 304(5669), 438-441.

Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting:  
finding meaning in the N400 component of the event-related brain  
potential (ERP). Annual Review of Psychology, 62, 621-647.

Hanna Lindfors on Similar ERP effects for verbal and pictorial sequences with hierarchical structure

On 28 May, Hanna Lindfors, PhD candidate at Linnaeus University, will talk about Similar ERP effects for verbal and pictorial sequences with hierarchical structure.

Time: 13.15-15.00

On-site: SOL:A158

Zoom Link: https://lu-se.zoom.us/j/62491331134

International Symposium on Speech Processing, organised by Renata Kochančikaitė and Tugba Lulaci

The symposium takes place on 30 May 2024, 13:00-19:00, at Lund University (Room: LUX C121 lecture hall)

About the symposium:

The latest developments in auditory and neurobiological aspects of speech perception will be explored and discussed during this one-day event, which brings together early-career and senior researchers, students, and professors.

Speech processing comprises how we perceive the sounds that make up spoken language, how we comprehend the meaning, and how we respond to it verbally. With recent advances in neuroimaging techniques, the field of speech processing has taken a turn towards broader cognitive science and has growing implications for scientific and societal development. This international symposium will bring prominent scholars from different language and neuroscience labs together in one place to exchange their most recent findings. The brain mechanisms of speech processing will be discussed in an interdisciplinary group, providing fresh perspectives.

Speakers: 

Prof. Yale E. Cohen, Penn Auditory Research Laboratory (University of Pennsylvania)

Dr. M. Florencia Assaneo, Institute for Neurobiology, UNAM (Universidad Nacional Autónoma de México)

Prof. Mikael Roll, Lund Neurolinguistics group (Lund University)

Dr. Pelle Söderström, MARCS Institute for Brain, Behaviour and Development (Western Sydney University) & Lund Neurolinguistics group (Lund University)

Program: 

13:15-13:30   Opening remarks

13:30-14:30   Keynote 1:  M. Florencia Assaneo "Individual Differences in Perceptual-Motor Synchronization Performance"

14:30-15:30   Mikael Roll "Predictive Speech Processing in the Brain"

15:30-16:00   Coffee break

16:00-17:00   Pelle Söderström "Rapid Brain Responses to Spoken Language"

17:00-18:00   Keynote 2:  Yale E. Cohen "Neural Correlates of Auditory Perception"

18:00-18:45   General Discussion: Interdisciplinary perspectives on auditory perception

18:45-19:00   Closing notes

Past Events

Heming Strømholt Bremnes, Norges Teknisk-Naturvitenskapelige Universitet, on the computational complexity of quantifier verification and its neural consequences

The computational complexity of quantifier verification and its neural consequences

Date: 16 April

Time: 14.15

On site: SOL L303b, Zoom: https://lu-se.zoom.us/j/62491331134

Because of their mathematical nature, quantifiers are one of the few types of expressions in natural language for which a purely non-linguistic semantics can be given. This feature has prompted extensive study of quantificational expressions in formal semantics and has yielded several theoretical results. Among these is the fact that quantifiers fall into different classes depending on the complexity of their verification. Of interest to neuro- and psycholinguistics is the corollary that Aristotelian and numerical quantifiers (e.g. 'all' and 'three') can be verified with minimal working memory resources, whereas proportional quantifiers (e.g. 'most') require an algorithm with a dedicated memory component. In a series of three EEG experiments, we demonstrated that this theoretically derived difference is reflected in the evoked potential during sentence processing with verification. In this talk, I will present these experiments and attempt to draw some conclusions about the impact of verification complexity on models of sentence processing.
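For a concrete sense of the complexity distinction the abstract refers to, the following Python sketch (an illustration, not the speaker's materials) contrasts single-pass verification procedures: 'all' and 'at least three' succeed with a fixed, bounded amount of state, whereas 'most' needs a running count that can grow with the input, i.e., a dedicated memory component.

```python
# Illustrative sketch: verification procedures for three quantifier types over
# a stream of truth values (e.g., "is this dot blue?"). Not the speaker's code.
from typing import Iterable

def verify_all(items: Iterable[bool]) -> bool:
    # Constant memory: stop as soon as one counterexample appears.
    for x in items:
        if not x:
            return False
    return True

def verify_at_least_three(items: Iterable[bool]) -> bool:
    # Bounded memory: a counter capped at 3 suffices.
    count = 0
    for x in items:
        if x:
            count += 1
            if count == 3:
                return True
    return False

def verify_most(items: Iterable[bool]) -> bool:
    # Unbounded memory: the running balance can grow with the input size,
    # so no fixed-size state is enough in general.
    balance = 0
    for x in items:
        balance += 1 if x else -1
    return balance > 0

scene = [True, True, False, True, False]   # e.g., which items are blue
print(verify_all(scene), verify_at_least_three(scene), verify_most(scene))
```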

Victor Almeida, LIM-27, Neuroscience lab, Institute of Psychiatry, Faculty of Medicine, University of São Paulo (USP)

Date: 9 April

Time: 13.15-15

On site: SOL L303b

Zoom: https://lu-se.zoom.us/j/62491331134

Victor Almeida from LIM-27, Neuroscience lab, Institute of Psychiatry, Faculty of Medicine, University of São Paulo (USP) combines insights from animal studies, linguistics and neurophysiology to explain electrical potentials associated with prediction and prediction error.

The Predictive Coding (PC) framework was popularised by the renowned neuroscientist Karl Friston following Rao and Ballard's 1999 computational model of extra-classical receptive-field effects in the visual cortex. Since then, canonical tenets of PC theory have infiltrated various subfields of cognitive neuroscience (whether ipsis litteris or under adaptations). Arguably, an epitome of this phenomenon is none other than psycholinguistics, given its multitude of generative models of prediction and prediction error. Regrettably, though, a critical aspect of PC theory has been largely overlooked in our field. Friston was originally inspired by a neurophysiological model which demonstrated how cortical feedback signals (predictions) and feedforward signals (residual, unpredicted error) could shape contextual interactions associated with the peripheral receptive fields of lower visual cortex's pyramidal neurons - namely, in such a way that it mimicked in vivo recordings. Hence, such mesoscopic neural operations constitute the cardinal pillar of the entire cognitive dimension of the PC framework - and much of its appeal in neuroscience. Yet, save for a few exceptions (to my knowledge), the same preoccupation with neural constraints of this nature appears to be lacking in language studies, which, in turn, might be problematic for a few reasons. Firstly, associative and sensory cortices differ quite significantly in microstructure, neurophysiology, and the behaviour of their neural populations, and these differences should ideally be accounted for whenever one conjectures about prediction/prediction error in language, rather than perception. While it is infeasible to observe them via recordings of language processing in animals (for obvious reasons), they can still be safely inferred from neural behaviour during more basic cognitive processes in higher-order regions (e.g., working memory, selective attention, categorisation). Secondly, cognitive models can be extremely appealing even in spite of biological implausibility (as history itself teaches), and this poses a very real danger to the field - that is, it runs the risk of being misled into adopting questionable premises for empirical research, as well as non sequitur conclusions on the resulting data. In this seminar, I will therefore attempt to draw attention to these caveats. Namely, I will cover some of the transdisciplinary literature on the neural basis of prediction and prediction error - viz. as derived from in vivo studies and computational modelling of event-related potentials - and, by the end, I will make a case for a shift towards a more neurocentric approach in the study of language.
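As a point of reference for the feedback/feedforward exchange mentioned above, here is a deliberately toy Python sketch of a single predictive-coding step, loosely in the spirit of Rao and Ballard (1999); it is a didactic illustration with random data, not the speaker's model or code.

```python
# Toy sketch of predictive coding: a higher-level state r generates a top-down
# prediction of the input x; the feedforward signal is the residual error.
import numpy as np

rng = np.random.default_rng(1)
n_input, n_latent = 16, 4

W = rng.normal(0, 0.1, (n_input, n_latent))   # generative (feedback) weights
x = rng.normal(0, 1.0, n_input)               # sensory input
r = np.zeros(n_latent)                        # higher-level representation
lr = 0.1                                      # update step size

for _ in range(50):
    prediction = W @ r            # top-down (feedback) prediction of the input
    error = x - prediction        # bottom-up (feedforward) prediction error
    r += lr * W.T @ error         # adjust the representation to reduce error

print("residual error norm:", np.linalg.norm(x - W @ r))
```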

 

Efthymia Kapnoula on Individual differences in speech perception gradiency: Current insights and future directions

Efthymia Kapnoula, The Basque Center on Cognition, Brain and Language

Date: 12 March

Time: 10.15-12.00

On site: SOL A158

Zoom: https://lu-se.zoom.us/j/62491331134

Abstract: Listeners discriminate speech sounds from different phoneme categories better than they discriminate equivalent acoustic differences within the same category - an empirical phenomenon widely known as Categorical Perception. Based on this phenomenon, it has been hypothesized that listeners perceive speech categorically, i.e., ignoring within-category differences. Despite the long prevalence of this idea in the field, there is now mounting evidence that listeners perceive speech sounds in a gradient manner and that they use subphonemic information to flexibly process speech. In addition, recent work on individual differences in speech perception has shed light on the sources of this speech perception gradiency, as well as its functional role in spoken language processing. In this talk, I will present some key findings from this literature and briefly discuss some ongoing work and future directions.
 

 

Sahel Azizpourlindy on What MEG can tell us about predictive processing during language comprehension

 

On 5 March, Sahel Azizpourlindy, PhD candidate at the Donders Centre for Cognition in Nijmegen, the Netherlands, will talk about what MEG can tell us about predictive processing during language comprehension.

Time: 13.15

On-site: SOL L123

Link to the zoom room: lu-se.zoom.us/j/62491331134

Azizpourlindy combines MEG and large language models to study the neural indices of predictive processing.

The brain uses contextual information and prior knowledge to predict future content during language comprehension. Previously, it has been demonstrated that contextual word embeddings, derived from Large Language Models, can be linearly mapped to brain data. Recently, this method has been used to study neural signatures of predictive processing. One study found that, in a naturalistic listening setting, predictive signatures of an upcoming word can be observed in its pre-onset signal, measured with ECoG. In the fMRI domain, another study has shown that including embeddings of multiple upcoming words improves the model's fit to brain data. This has been interpreted as an indication that the brain encodes long-range predictions. In this study, we examine whether the same predictive information can be found in MEG data, a signal with a lower signal-to-noise ratio than ECoG and higher temporal resolution than fMRI. We show that: 1) the signatures of pre-onset predictions are also detectable in MEG data, similar to ECoG, and 2) contrary to what has been observed in fMRI data, including future embeddings does not improve brain mapping in MEG signals. These findings provide a novel avenue for studying predictive processing during language comprehension with naturalistic stimuli.
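A schematic Python sketch of the kind of linear encoding analysis described above (synthetic data only, not the study's pipeline): word embeddings are mapped to a simulated MEG channel with ridge regression, and a second model adds the next word's embedding to ask whether "future" information improves held-out fit. Because the simulated signal encodes only the current word by construction, the comparison here is purely illustrative.

```python
# Illustrative encoding-model sketch with synthetic data -- not the study's code.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_words, dim = 500, 32

emb_current = rng.normal(size=(n_words, dim))   # embeddings of word t
emb_next = np.roll(emb_current, -1, axis=0)     # embeddings of word t+1 (toy alignment)

# Simulated MEG response driven only by the current word's embedding.
true_weights = rng.normal(size=dim)
meg = emb_current @ true_weights + rng.normal(0, 2.0, n_words)

X_current = emb_current
X_with_future = np.hstack([emb_current, emb_next])

for name, X in [("current word only", X_current),
                ("current + next word", X_with_future)]:
    r2 = cross_val_score(Ridge(alpha=10.0), X, meg, cv=5, scoring="r2")
    print(f"{name:>22}: mean cross-validated R^2 = {r2.mean():.3f}")
```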

Lars Meyer, Max Planck Institute, Leipzig, on How brain electrophysiology shapes language

Date: 13 February

Time: 13.15-15.00
On site: SOL:H402
Zoom: https://lu-se.zoom.us/j/62491331134

Current research into the neurobiology of language puts strong focus on the role of periodic electrophysiological activity—so-called neural oscillations—for auditory and linguistic processing. Electrophysiological cycles are thought to provide processing time windows for acoustic and abstract linguistic units (e.g., prosodic and syntactic phrases, respectively). Most work has studied such functions in response to speech, that is, driven by acoustic or abstract cues available from the stimulus. My presentation turns this perspective around. I am presenting evidence that oscillations shape the comprehension and acquisition of language, as well as language as such, from the inside out. First, I discuss evidence that slow-frequency oscillations time-constrain our ability to form multi-word units during auditory comprehension and reading. Second, I show that the underlying neural rhythm may be reflected in the temporal architecture of prosody and syntax across the world’s languages. Third, I present cross-sectional electrophysiological results that suggest a tight relationship between the ontogenetic acceleration of brain rhythms—from slow to fast—and the gradual refinement of the temporal resolution of acoustic–phonological processing. In sum, I suggest that the built-in pace of brain electrophysiology poses an electrophysiological bottleneck for language acquisition, comprehension, and language as a cultural system.

Joint NLS and English Linguistics Seminar: Sara Farshchi (Lund University) on "ERP responses to confirmed and disconfirmed predictions in negated contexts"

On 6 December, 13:15-15:00, Sara Farshchi (Lund University) will talk about ERP responses to confirmed and disconfirmed predictions in negated contexts. 

On-site: SOL:A158

Zoom: https://lu-se.zoom.us/s/62491331134 

NLS Symposium on Tone and prediction in language, organised by Sabine Gosselke Berthelsen and Mikael Roll

The symposium takes place on 17 November, 9:00-12:30, at Lund University:

On-site: SOL:H402

Zoom: https://lu-se.zoom.us/j/63486401613 

In a series of six talks, the general process of prediction in different aspects of language will be discussed. Below is an overview of the program:

=========================================

09.00-09.30  Mikael Roll, Lund University

          Lexical tone accents and prediction in the brain

09.30-10.00  Pelle Söderström, Western Sydney University 

          Within-word prediction: from tones to segments

10.00-10.30  Sabine Gosselke Berthelsen, University of Copenhagen

          Morphophonological prediction in second language learners

10.30-11.00  Coffee break

11.00-11.30  Pei-Ju Chien, Lund University 

          Neural correlates of lexical tone and intonation perception in Mandarin Chinese

11.30-12.00  Wing Yee Chow, University College London 

          Incremental prediction in real-time language comprehension:
          From meaning to pitch contour

12.00-12.30  Yiling Huo, University College London

Organisers: Sabine Gosselke Berthelsen and Mikael Roll

=========================================

For more information about the talks and abstracts, see the link below:

https://www.sol.lu.se/en/the-department/calendar/event/symposium-tone-and-prediction-language/

Panos Athanasopoulos (Lund University) on Language modulations of pre-attentive categorical perception

On 17 October at 13:15, Panos Athanasopoulos from Lund University will give a talk about "Language modulations of pre-attentive categorical perception".

Location: SOL:H402

Zoom Link: https://lu-se.zoom.us/j/63963142026 

 

Abstract

Modern approaches to the Sapir-Whorf linguistic relativity hypothesis have reframed it from one of whether language shapes our thinking or not, to one that tries to understand the extent and nature of any observable influence of language on perception. One important dimension of this strand of research asks whether language modulates our perception only at a conscious level, or whether such modulations can also be observed outside of conscious awareness, at early pre-attentive stages of visual integration. The current talk will review Event Related Brain Potential (ERP) evidence from three research domains (colour, objects, grammatical gender) that sheds light on these questions. The data shows that it is possible to observe language effects very early in the visual processing stream, thus supporting one of the basic tenets of the linguistic relativity hypothesis, namely that “the 'real world' is to a large extent unconsciously built up on the language habits of the group” (Sapir 1958 [1929], p. 69).

Conference: NLS 2023, 1-2 June

The first NLS conference will take place on 1-2 June, 2023, in Lund.

For more information see:

https://konferens.ht.lu.se/neurolinguistics-in-sweden-2023

Lia Călinescu (NTNU) on Verb-Noun and Adjective-Noun composition in the brain

Title: In search for composition in the brain: ERP and oscillatory effects of Verb-Noun and Adjective-Noun composition

Date: 9 May

Time: 13.15-15

Room: SOL:A158

Zoom https://lu-se.zoom.us/j/62491331134

Abstract:
Research aiming to uncover the neural correlates of composition in the brain has been very productive over the last few decades. However, such a correlate has arguably not been observed to date. In this research, we explore the possibility that this is because the experimental paradigms used in previous research have not been optimal for this aim. At the same time, we raise the question of whether composition might not always be used as a strategy for deriving the meaning of complex expressions. I used a novel paradigm in an EEG experiment testing whether arguments (e.g. the direct object of a transitive verb) and adjuncts (e.g. an adjective modifying a noun) are composed by similar or different mechanisms at the neural level. ERP and oscillatory responses seem to suggest alternative explanations for meaning comprehension that do not rely on compositionality.

Francesca Carota on "A neurobiologically informed theory of language production"

On 2 May, 13.15-15.00, Francesca Carota from the Max Planck Institute for Psycholinguistics & Donders Center for Cognitive Neuroimaging will give the talk "Towards a neurobiologically informed theory of language production".

Location: SOL A158

Link to the zoom room: lu-se.zoom.us/j/62491331134

Yury Shtyrov, Aarhus University, on morphosyntactic interactions through the lens of brain dynamics

Are complex words real mental objects represented in the lexicon as such, or are they learnt, stored and processed as mere combinations of individual morphemes bound together by morphosyntactic rules? Do these mechanisms differ depending on the type of morphology under investigation? Such questions, debated in the (psycho)linguistic literature, can be straightforwardly addressed using neurophysiology. Using MEG and EEG, we have established a distinct double dissociation pattern in neurophysiological responses to spoken language, which can reflect lexical («representational») vs. (morpho)syntactic («combinatorial») processes in the brain. These are manifest as: (1) a larger passive (i.e. obtained without any stimulus-related task) brain response to meaningful words relative to matched meaningless pseudowords, reflecting stronger activation of pre-existing lexical memory traces for monomorphemic words (= lexical ERP/ERF pattern), and (2) a smaller brain response amplitude for congruous word combinations (reflecting priming via syntactic links), relative to incongruous combinations where no priming is possible (= combinatorial pattern). This double dissociation – larger response for auditorily presented simple holistic representations vs. smaller response for well-formed combinatorial sequences – allows, in turn, for clear experimental predictions. Such experiments could test the nature of morphosyntactic processing by presenting the subjects with real complex words and incongruous morpheme combinations in passive auditory event-related designs, and comparing the relative dynamics of their brain responses.

We have used this neurophysiological approach to address a range of morphosyntactic questions: neural processing of compound words, past tense inflections, particle verbs as well as differences between inflectional and derivational morphology and processes of complex word acquisition in L1 and L2. This body of results generally supports a flexible dual-route account of complex-word processing, with a range of strategies involved dynamically, depending on exact psycholinguistic stimulus properties. Furthermore, as these experiments indicate, comprehension of spoken complex words is a largely automatized process underpinned by a very rapid (starting from ~50 ms) neural activation in bilateral perisylvian areas.

Date: 28 February

Time: 13.15-15

Room: SOL A158 or on Zoom: https://lu-se.zoom.us/j/62491331134

 

Mikkel Wallentin on "Sex/gender in language. Large differences with small effects and small differences with large effects"

Tuesday, November 8, 13:15-15, Mikkel Wallentin (Aarhus University) will present his work on "Sex/gender in language" in Lund.

Location: SOL: H402
Zoom link: https://lu-se.zoom.us/s/62491331134

Pei-Ju Chien on "The neural bases of speech intonation and lexical tone in Mandarin Chinese"

Tuesday, October 25, 10:15-12, Pei-Ju Chien on "The neural bases of speech intonation and lexical tone in Mandarin Chinese"

Location: SOL: A158
Link to the Zoom room: https://lu-se.zoom.us/s/62491331134

Pelle Söderström on "Spoken-word recognition in the brain—A case for ubiquitous predictive processing"

Tuesday, October 18, 13:15-15, Pelle Söderström (Lund University & MARCS Institute, Sydney) will present his work on "Spoken-word recognition in the brain"

Location: SOL: H402
Link to the Zoom room: https://lu-se.zoom.us/s/62491331134

Rosario Tomasello on The neuropragmatics of speech acts, Tuesday 14 June

Tuesday, 14 June, 13.00-14.30, Rosario Tomasello will visit Stockholm University via zoom. 
Title: The neuropragmatics of speech acts
 
Abstract: In everyday social interactions, linguistic signs are used as tools allowing effective expression of our intentions to others. These intentions, described by linguistic-pragmatic theories as speech acts, are embedded in a set of complex settings and actions, including associated commitments, that define the specific nature of their actions in context. Here I summarise a series of studies on the brain correlates underlying the fine-grained distinction between different speech act types in written, spoken and gestural modalities, including speech prosody and the role of common ground. I will provide novel insights into the long-standing debate about when brain indexes of linguistic-pragmatic information of communicative functions first occur. Further, by presenting a neuromechanistic model, the Action Prediction Model of Communicative Function, I will argue that understanding a speech act requires the expectation of typical partner actions that follow it and that this predictive knowledge is reflected in the human brain.
 

Elliot Murphy on his ECoG/iEEG work on syntactic composition, 1 June 14.15

Title: A cortical mosaic for linguistic structure: Insights from intracranial recordings

Elliot Murphy (University of Texas Health Science Center)

Wed., June 1, 14:15-15:30 CET

zoom link: https://NTNU.zoom.us/j/94287253224

This is a talk organized by NTNU. 

Katharina Rufener on using tACS to modulate auditory gamma oscillations

Title: Modulating auditory gamma oscillations by means of transcranial alternating current stimulation (tACS) – first evidence on the efficacy and feasibility in individuals diagnosed with developmental dyslexia

Katharina Rufener (Otto-von-Guericke University, Magdeburg, Germany)

Wed., May 25, 13:15-14:15 CET

zoom link: https://lu-se.zoom.us/j/63263453894

Prediction in Brain Potentials, 29 April 2022

Program

13:15-13:30  Introduction

13:30-14:15 Stronger expectations, larger negativities: slow negativities associated with semantic prediction in sentence comprehension. Patricia León-Cabrera

14:30-15:15 Information sampling during self-regulated language learning: Evidence using slow event-related brain components (ERPs). Antoni Rodriguez-Fornells

15:15-15:45 Coffee and snacks

15:45-16:30 The pre-activation negativity. Sabine Gosselke-Berthelsen, Anna Hjortdal & Mikael Roll

16:30-17:00 General discussion

Link to the event in the SOL calendar
