ChatGPT: Second thoughts
Jono Ryan
We come to the concluding post in our series on ChatGPT and second language teaching. In the first, I started by drawing a comparison to the moral panic over electronic dictionaries and calculators, and outlined one effective way to adopt ChatGPT for writing instruction. Maria then consulted ChatGPT itself on what it offers and where its pitfalls lie. Mike then drew on the work of Vygotsky to provide a way of conceptualizing ChatGPT as a mediating tool, putting the onus on the user to ensure its safe use. What I had anticipated doing now was wrapping things up with a few additional thoughts. But that’s not where I’m at.
A month into this conversation, my overriding impression is of how slippery and fast-moving this territory is, and of the folly of trying to offer a wrap-up. It’s too new. And everyone is talking about it: this month I’ve seen three calls for papers for chapters and articles on its applications to language teaching. Meanwhile, as I’ve used it more and more, my own reactions have shifted back and forth, finding it both dazzling and dodgy.
What we can surely agree on is that there are countless applications of ChatGPT to second language learning and teaching. History suggests we are entering a phase in which ChatGPT becomes an undue focus of lessons, with teachers smitten by its novelty. Over time, it will become just another tool, like pen and paper, and the focus will shift to how to get the best out of it. Reflective practice, action research, publications and professional development will all play a role. In this way, second language teaching is surely in safe hands.
My concerns lie elsewhere, particularly in the areas of teacher education, training and accreditation. We know that learning complex new skills and knowledge requires deep cognitive processing of information, and we should all be a little concerned that ChatGPT can spit out assignments so quickly. Yesterday, for example, I asked it to write a 45-minute lesson plan for teaching the English present perfect to a class of pre-intermediate Japanese students. It did a reasonable job, certainly good enough for an entry-level certification course. I then asked it to explain the use of the English present perfect, and again it did a good job. A 2500-word essay on the evolution of language? No problem. Undoubtedly we'll come to terms with this and will use AI to help undergraduates and teacher trainees go further faster. But in the meantime I am worried about young undergraduates and high schoolers who may find it all too easy to leverage ChatGPT for all the heavy lifting, and so never develop the tenacity, perseverance, and perhaps even the processing capability we expect. This is, of course, the same generation that had emergency remote teaching thrust upon it and has missed out on the social experience of campus life. A double whammy.
More strikingly, I've become increasingly alarmed by ChatGPT’s propensity for confabulation. Let me explain. I started off by asking it questions about reference, the topic of my PhD thesis, so I feel qualified to say that it did a pretty good job of summarizing fairly general information. So far so good. Then I asked about one of my secondary interests, conversation analysis. Things started getting very weird very quickly. Its list of the 10 most influential figures in the field had a non-existent (or at least unpublished) person in seventh place. Ridiculous. I asked for information about the controversy surrounding the handling of the unpublished work of the late Harvey Sacks, and it referred me to the book “The Sacks Affair: Reflections on the Politicization of Conversation Analysis,” edited by Graham Button, John R. E. Lee, and Richard Harper and published in 2019. I was surprised and delighted. I’d never heard of it. But. There is no such book. I looked everywhere. So I asked ChatGPT where I could find it, and it provided a link to its supposed page on Routledge’s site. No such page exists on Routledge’s site. ChatGPT apologized.
So then you start to wonder: did ChatGPT have the inside scoop on a book that had been planned but whose publication was abandoned (or delayed by years)? Or did it just make it up? I delved deeper.
By this stage, I had upgraded my subscription to GPT-4 at US$23 a month, which people assured me would yield more accurate results. I wanted to follow up on the publications of John R. E. Lee but – and I put my hand up here – I got his name wrong (typing A. rather than E.). ChatGPT was not to be deterred. It falsely informed me that he was an applied linguist (pandering to my interests?) and then proceeded to provide a phony list of his key works, complete with convincing APA references and even brief summaries. Some were misattributions of other authors’ work. Some were make-believe chapters within real edited books. But most had been conjured entirely out of thin air: phantom articles within real issues of real journals. One was improbably called ‘Socializing’ the Micro-Electronic Student: Advice Giving and Receiving within a Unique Educational Setting. I tried to reverse-engineer that one by providing the title and asking who had written it; it claimed one David C. Kinloch had, but that reference didn’t check out either. Google Scholar reckons there’s no such person. I could go on. It has been a torrent of this word-salad nonsense.
It feels like gaslighting. Right now, I’m sitting here genuinely concerned that a major purveyor of disinformation has been unleashed on us. Let’s hope it’s only picking on the conversation analysts.