Cognitive Semiotics Seminar: "ChatGPT: Searching for the substance amidst the hype" (Joel Parthemore)
In this highly topical seminar, our former colleague Dr. Joel Parthemore, an expert in AI and cognitive science, will discuss practical and theoretical issues concerning the "intelligence" of the currently popular system ChatGPT. The talk will start at 15:15 and last about an hour, followed by what is sure to be a lively discussion. All are warmly welcome, in the physical or virtual room!
ChatGPT has been ubiquitous in the news lately: university lecturers bemoaning their inability ever to mark essays again, journalists gushing about how ChatGPT has "soared past" the Turing test in its pursuit of greater challenges. At a time when world-renowned philosophers are sounding alarms about super-intelligent AI and even "super-intelligence++", it's a good time to look at the reality in contrast to the hype.
Introducing the Imitation Game in his 1950 paper in Mind, Turing makes no claim that an artefactual agent "winning" the game -- which came to be known as the Turing test -- would, by that measure, be intelligent. He presents it there as a thought experiment, not an operational test for intelligence. That said, it is highly unlikely that ChatGPT or any of its competitors could win the game if the human participants were paying attention. For all that it can do -- throwing together a sonnet or speaking with seeming authority on nearly any subject imaginable -- what is perhaps most striking about ChatGPT is what it has in common with Joseph Weizenbaum's Eliza, not what it does differently. I will offer examples of ChatGPT's garbled output; they are not hard to come by. If one sets out to trip it up, ChatGPT is remarkably easy to fool. It fails the functionalist test for intelligence, and I see little prospect of that changing.
Functionalism is misunderstood by many of its supporters and detractors alike. It holds that, if a purported agent can interact in all relevant ways like an intelligent human agent -- and continue to do so over time -- then it is for all intents and purposes intelligent; even, in some sense, human. Functionalism, though, has never claimed that underlying structures don't matter: they matter critically. The key notion is multiple realizability: in a slogan, there's more than one way to be human.
ChatGPT is neither conscious nor alive -- and, notably, no experts in the field claim that it is. The unstated assumption of many is that intelligence can be divorced from life, even though no one seems able to explain what intelligence without life would look like. Perhaps what separates ChatGPT from even the simplest living organisms, with their minimalist expressions of cognition, is not that it is computational and they are not, but that it is computational in the wrong way: dependent, in the end, on essentially context-free formal systems that we can understand, as opposed to the far more expressively powerful, highly context-dependent formal systems we know of that might well be up to the task -- but that outstrip our capacity to understand.