Have you tried one yet? An ‘intelligent virtual assistant’ that engages in what can feel like authentic extended conversations. The Beta version of Chat GPT, developed by the company OpenAI, is available to play with and I’ve come across several articles assessing its functionality.
This article from Forbes magazine suggests it can help to brainstorm ideas, understand complex topics, summarise research, act as a writing assistant and translate into 95 different languages. Do all these functions sound useful for a student looking to get help with essay writing? Of course they do!
Michael G Madden at The Conversation sounds a note of caution by testing Chat GPT on Wordle puzzles, at which it is surprisingly poor.
Chat GPT is an example of a Large Language Model (LLM), which is ‘trained’ on 500 billion words of text from open-access websites such as Wikipedia, public-domain books, huge volumes of scientific articles and many other sources. The basic mechanism of LLMs is to predict the most likely next word in a sequence, based on all the collocations of words they have seen. However, they work with numbers, not text, so each word must first be encoded as a number to compare with patterns of numbers in the training database. This means they are poor at recognising the internal structure of words, which is necessary to succeed at Wordle.
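For readers curious about why this encoding hides the letters, here is a minimal sketch in Python. The mini-vocabulary and the greedy matching rule are my own simplifications for illustration, not how Chat GPT actually tokenises, but the principle is the same: words are broken into subword pieces and replaced by numeric IDs before the model ever sees them.

```python
# Toy illustration: an LLM receives token IDs, not letters.
# This mini-vocabulary of subword pieces is a made-up example.
vocab = {"con": 0, "cord": 1, "ancing": 2, "word": 3, "le": 4}

def tokenize(text, vocab):
    """Greedy longest-match split of text into known subword pieces."""
    ids = []
    i = 0
    while i < len(text):
        # Try the longest possible piece first, then shorter ones.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i:]!r}")
    return ids

print(tokenize("concordancing", vocab))  # [0, 1, 2]
print(tokenize("wordle", vocab))         # [3, 4]
# The model sees "wordle" only as the IDs [3, 4]: the individual
# letters w-o-r-d-l-e are invisible to it, which is one reason
# letter-by-letter games like Wordle trip it up.
```

Note that "wordle" arrives as two opaque numbers, so a question like "which words contain the letter L in third position?" has no direct handle on the model's input.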
My friend and colleague, Sue Argent, tried it with the question: How can concordancing contribute to the teaching of EAP? On the face of it she got what I would call a ‘listy’ text:
Concordancing can contribute to the teaching of EAP (English for Academic Purposes) in several ways, including:
1. Vocabulary Development: Concordancing can help EAP learners improve their vocabulary by showing them how words are used in context. By analyzing the concordance lines, learners can learn how individual words are used and how they combine with other words in collocations.
2. Language Awareness: Concordancing can also help EAP learners develop awareness of language features that are typical of academic writing, such as nominalization, passive voice, and hedging. Examining concordance lines can help students identify these features and understand their functions.
3. Corpus-based Teaching: A corpus is a collection of texts that can be analyzed for linguistic patterns. Concordancing is a corpus-based approach to teaching EAP, which provides students with authentic language data. Concordance tools enable students to search the corpus for linguistic features they want to analyze, and hence corpus-based teaching can help learners develop critical thinking skills.
4. EAP Skill Development: Concordancing can help EAP learners develop various skills, such as reading, writing, and listening. It allows them to analyze authentic texts, identify common patterns, and make informed choices when producing their own texts.
In conclusion, concordancing can be a valuable tool for teaching English for Academic Purposes. It can help students develop vocabulary, language awareness, critical thinking skills, and other important EAP skills. Therefore, teachers should consider incorporating concordancing into their EAP teaching activities.
Analysing this result, I found it cohesive but lacking in coherence. Points 1 & 2 seem to make sense, but we might have expected point 3, with its definition of a corpus and general overview of concordance tools for teaching, to come before points 1 & 2 in an essay, especially since the question asks about ‘teaching EAP’. The third sentence in point 3 is a classic example of plugging a linking word or phrase (and hence) into a text and hoping the evidence-conclusion link will miraculously appear. The link between ‘linguistic features they want to analyze’ and ‘develop critical thinking skills’ needs a lot more unpacking to be convincing.
I decided to try an evaluative question: What are the arguments against constructive alignment of course aims, content and assessment? I got a much shorter albeit accurate answer and an honest admission of failure in the last sentence.
Constructive alignment is a teaching approach that systematically aligns the teaching/learning activities, and the assessment tasks to the intended learning outcomes according to the learning activities required in the outcomes. Constructive alignment encourages clarity in the design of the curriculum, and transparency in the links between learning and assessment. In a truly Constructively Aligned curriculum it facilitates deep learning as the activities are designed for that purpose.
I couldn’t find any arguments against constructive alignment of course aims, content and assessment.
As with the Wordle test, this result seems to suggest that one way to prevent students from using Chat GPT and other LLMs to write their essays is to ask questions that require evaluation and argument from particular viewpoints. This would work well for final-year undergraduate or postgraduate students, but foundation and first-year undergraduate students do tend to be asked display questions: What is…? How can…? More creative approaches to the design of assessment questions are clearly needed for these groups.
2 thoughts on “Chat GPT – how worried should we be?”
I’ve just had the chance to try a more sophisticated version of Chat GPT during the Edinburgh Science Festival & it’s made me feel somewhat less complacent about this technology. I asked the same two questions and got the same answer for the first but a much more nuanced response to the second, with some critical commentary. If a student submitted that answer without any further reorganisation, it would be clear – if only from the numbered list – that it had been produced by Chat GPT, but by removing the numbers and doing a bit of structural rearrangement, they could produce an essay that would be hard to identify as AI-generated. So what to do? I guess that one way round it would be to ask for illustrations of concepts from personal experience, e.g., ‘Choose a subject you studied at Middle School & evaluate the extent to which the syllabus was constructively aligned’. Would this be harder for an AI to respond to?
A student wanted to know what I think of ChatGPT both as a tool for learning English and as a way for her to function professionally as an academic. She is a post-doctoral researcher in AI, and so I asked her how she had used it so far. Her strategies are quite interesting.
Her approach to reading, e.g., for a literature review, saves her a lot of time: gather the sources; ask ChatGPT for a summary of each; translate these summaries into Arabic; go back to the sources for further reading, using translation as and when necessary.
I think it’s a bit more problematic when she wants to communicate something in writing (e.g. an email, a personal statement, a bid for research funding). She writes it in Arabic then gets ChatGPT to translate it into English. Then she checks any unfamiliar vocabulary and back-translates the whole text into Arabic to make sure it says what she means. She tinkers around like this until she’s satisfied with the English language version. She showed me a couple of short texts that resulted from this process, an email and a paragraph from a personal statement. On the positive side, the writing was accurate in terms of language and content. On the negative side, each read like a marketing spiel and they were quite verbose — possibly a reflection of the predominant source texts in the training corpus, and possibly the style of the original text.
We emphasise to students the need for their writing to be shaped by purpose, reader and context. I wonder how long it will be before ChatGPT can be similarly directed with any accuracy.