CogSeminar: "Pouring the cold water: Large-Language Models, generative AI, and the limits of mimicry" (Joel Parthemore)

20 November 2025 15:00 to 17:00 Seminar

Our long-term collaborator Joel Parthemore, who has a solid background in cognitive science and AI, will continue our discussions on how the currently fashionable "Large Language Models" simulate semiosis and cognition, and how these simulations differ from the real thing. Welcome to the room, or to the Zoom, but remember to turn on your camera BEFORE and AFTER the talk, for self-presentations and questions.

A number of prominent researchers in cognitive science, notably Sussex University's Ron Chrisley and Tom Froese of the Okinawa Institute of Science and Technology, have cautiously suggested that chatbots based on Large-Language Models can now, in effect, engage in human reasoning and even have the potential to become conscious, or have already begun to achieve some minimal level of consciousness.

One may be tempted to argue that, even if full behavioural equivalence is achieved, these artefactual agents will still not be reasoning agents and still not be conscious. That argument, I think, is not necessary. As Noam Chomsky has written, there are critical observable aspects of cognition that we know, with reasonable certainty, can never be reproduced using this technology. As the functionalists argued in the 1970s and early '80s in rejecting behaviourism, it is impossible to get the surface behaviours right unless one properly accounts for those things one cannot directly see: i.e., the underlying mechanisms (which may, however, per functionalism, be multiply realisable). Seeming linguistic competence does not make one a semiotic agent. Fooling most of the people most of the time is not enough.

About the event:

20 November 2025 15:00 to 17:00

Location:
IRL: room H402, online: https://lu-se.zoom.us/j/61502831303

Contact:
jordan.zlatev@semiotik.lu.se
