I recently attended the BALEAP Professional Issues Meeting (PIM), hosted by the University of York’s International Pathway College, which explored the concept of ownership in EAP. Most presenters focused on writer’s voice and how to encourage students to develop a confident academic voice in their writing. The PIM was held online and showcased the positive affordances of online meetings: the technology enables colleagues from around the world to contribute and participate, greatly enriching the discussion, and the Wonder.me application allowed people to meet up and chat informally during the lunch break.
The first plenary by Sheena Gardner also highlighted the importance of technology, in this case corpora, for informing teaching content. Sheena presented a recent study (Gardner et al., 2019) using multidimensional analysis to explore clusters of lexico-grammatical features of academic essays across disciplines and levels of study in the BAWE corpus of student writing. The differences uncovered could be used in a general EAP course to raise students’ awareness of discipline-specific requirements rather than teaching one-size-fits-all essay writing.
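To give a flavour of what such corpus work involves, here is a minimal sketch in Python. It is emphatically not Gardner et al.’s multidimensional analysis; it simply counts two illustrative lexico-grammatical features (hedges and boosters, drawn from small word lists I have chosen for the example) in short text samples, the kind of surface counts that a corpus study aggregates and then clusters across disciplines and levels.

```python
# Illustrative sketch only: count two lexico-grammatical features
# (hedges and boosters) in short text samples. The word lists below
# are small examples chosen for this sketch, not a published taxonomy.

HEDGES = {"may", "might", "perhaps", "possibly", "suggests", "appears"}
BOOSTERS = {"clearly", "certainly", "definitely", "undoubtedly", "proves"}

def feature_counts(text):
    """Count hedge and booster tokens after naive lowercasing/tokenising."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    return {
        "hedges": sum(t in HEDGES for t in tokens),
        "boosters": sum(t in BOOSTERS for t in tokens),
    }

cautious = "The data suggests that pollution may possibly harm lake ecosystems."
assertive = "The results clearly show that pollution certainly harms ecosystems."

print(feature_counts(cautious))   # a heavily hedged claim
print(feature_counts(assertive))  # a boosted claim
```

A real analysis would of course use proper tokenisation, far richer feature sets, and statistical clustering across thousands of texts, but even this toy version shows how stance features can be made countable and compared across writing samples.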
The second plenary, a workshop by Mike Groves and Klaus Mundt, explored a recent trend in academic writing: the use of machine translation (MT) tools such as Google Translate. These tools have become so accurate in recent years, at least at the lexico-grammatical level, that their use is advocated for academics who need to publish in English (Luo & Hyland, 2019). My co-author for Access EAP: frameworks, Sue Argent, explored the use of early translation tools in unit 8 section 1 as a carrier topic for helping EAP students to recognise stance in relation to a claim. At the time of writing (2012-14), most authors thought that human translation was superior in terms of speed, accuracy and cost, because translation between languages was certainly not just a matter of replacing words. However, Computer-aided Translation (CAT) was becoming recognised as a useful tool for translating highly repetitive texts such as technical documentation. In 2012, the Duolingo project was just getting started: people signed up to learn a language by translating sentences from web texts, with the resulting translations from hundreds of learners combined to arrive at the best version.
Things have moved on apace in ten years, so that academic institutions now recognise that they need a policy on students’ likely use of machine translation tools to write essays. These tools may soon replace essay mills as the latest bogeyman in the plagiarism pantheon. Mike and Klaus set out in their workshop to answer the question of whether, or how, MT transfers a writer’s voice from one language into another. Participants were asked to choose one of three texts to work on, all translated from different languages using MT. Our task was to discuss aspects of the text that seemed strange to a reader in English. My group chose an environmental science text, the introduction to a study of heavy metal residues in lakes and waterways, from a Malaysian source. It was clear that at the level of lexico-grammar the writing was accurate, but beyond that it quickly became incoherent. The expected genre moves and the general-to-specific development of an introduction were absent, and the sentences did not connect smoothly from given to new information to lead the reader through the text. Other groups discussing different texts commented on academic style, for example the use of hedges and boosters when making claims.
Our findings from the workshop mirrored approaches adopted by colleagues in the department of translating and interpreting where I used to work. Translation students were introduced to translation tools to facilitate quick and efficient translation as a first pass through a text, especially a technical one. However, the main focus of practical translation sessions on the degree was on argumentative texts, where style and nuance of the language played a major role. When it came to assessment, the lecturers had to select texts carefully so that an effective translation could not be achieved with MT alone. From the evaluation of the texts in the PIM workshop, we could see that the discourse macrostructure, at the level of genre, and microstructure, at the level of theme/rheme links between sentences, were not captured by these machine translations. The choice of hedging and boosting appropriate to the specific genre was also inaccurate.
Where does this leave EAP students and their teachers? The workshop task to evaluate examples of machine translated writing was certainly a powerful way of showing students the limits of machine translation. They need to consider more closely the expectations of their academic audience (their lecturers) and how global aspects of structure and style in writing help to establish their voice and get their message across. In the introduction to Access EAP: frameworks unit 8, Sue set out our approach to academic voice:
…the intention is to move the spotlight away from cheating and to show students how to draw on their ideas, knowledge and experience to write with their own voice. A student with ideas will read critically, a student who reads critically will have something to say, a student with something to say will strive for a voice and a student with a voice does not need to steal the ideas or words of others.
Gardner, S., Nesi, H., & Biber, D. (2019). Discipline, level, genre: Integrating situational perspectives in a new MD analysis of university student writing. Applied Linguistics, 40(4), 646–674. https://doi.org/10.1093/applin/amy005 (open access)
Luo, N., & Hyland, K. (2019). “I won’t publish in Chinese now”: Publishing, translation and the non-English speaking academic. Journal of English for Academic Purposes, 39, 37–47. https://doi.org/10.1016/j.jeap.2019.03.003