Continuity or innovation: assessment and languages for specific purposes at the crossroads
Speaker: Anna Soltyska (Ruhr-Universität Bochum)
Abstract:
The social, economic and environmental changes now being experienced in virtually every area of personal and professional life are clearly also shaping the future of language assessment and education. New ways of communicating require the practice of new skills and a rethinking of the test constructs designed to measure them. Innovative, often AI-assisted, ways of learning and assessing languages in an increasingly autonomous and self-directed manner are affecting the role of teachers and placing new demands on different stakeholders in the field of education. Evolving patterns of global movement of people, the rethinking of priorities in the face of various crises, and a widening generation gap are leading to increasingly diverse and heterogeneous groups of learners whose educational needs have yet to be identified and addressed.
At the same time, some constants can still provide guidance and serve as points of reference for reconsidering teaching and assessment practices. O’Sullivan’s (2021) Comprehensive Learning System, for example, could support the alignment of new learning objectives, adjusted teaching methods and revisited assessment schemes, and Weir’s socio-cognitive framework (2005) could be used to inform test development, research and validation. The universal test quality criteria of validity, reliability and objectivity should remain the guiding principles for assessment activities, regardless of the circumstances.
This talk aims to discuss the common challenges that language assessment and the teaching and learning of English for specific purposes are experiencing today, in the face of rapidly changing external circumstances. While it is notoriously difficult to make reliable predictions, reflecting on the current trends affecting the work of English language teachers serves to inform teacher training and professional development, the areas in which IATEFL SIGs can best support their members.
Biodata
Anna Soltyska is a member of the academic staff at Ruhr-Universität Bochum, Germany, where she teaches English for General and Specific Academic Purposes and coordinates the English programme at the University Language Centre. Her current research interests include teaching and testing of languages for academic and specific purposes, the impact of AI-based tools on institutionalised foreign language learning and assessment, promoting multilingualism in higher education and various aspects of assessment-related malpractice and academic integrity. Anna is a member of IATEFL and has been a TEASIG webinar coordinator since April 2020.
Locating the human in a time of machine intelligence: generative AI and its impact on ESP testing and assessment
Blair Matthews
Abstract
Advances in generative artificial intelligence (gen-AI), particularly large language models (LLMs) such as ChatGPT, offer the potential to transform language education. Some argue that AI promises efficient tools which can enhance language learning opportunities (Son et al., 2023; Rusmiyanto et al., 2023). However, many worry that AI may undermine the validity and reliability of student work (Maier, 2022; Anderson and Rainie, 2024; Darvishi et al., 2024). Although the long-term impacts of generative AI are difficult to predict, what is certain is that language learning is now taking place in a very different environment, and there is a need to understand how learners can remain independent and self-determining as language learning becomes increasingly entangled with, and interdependent on, machine intelligence.
In this talk, I explore the ‘working-together’ of human and machine intelligence in order to identify what gen-AI does well and what it does not do well. I argue that, while gen-AI offers new ways of doing things, it may not increase learner autonomy, instead orienting behaviour towards the management of the technology. I discuss the implications of a permanent gen-AI presence for the testing and assessment of ESP. I finish by arguing that data-driven learning practices can be applied to gen-AI, particularly in how it can be used with corpora of specific genres.
Biodata
Blair Matthews is a lecturer in TESOL and International Education at the University of St Andrews, where he teaches English for Academic Purposes and Research Methods. He is interested in student and teacher agency and supervises Masters and Doctoral students in these areas. His website is: https://linktr.ee/blairteacher.
How can ESP practitioners find inspiration to innovate courses and assessment methods?
Clarice Chan
Abstract
For most ESP practitioners, designing courses and assessment methods is a key aspect of their work. To ensure that learners can benefit from well-designed courses that help them meet the challenges of using English in an ever-changing world, innovative design in learning activities and assessment methods is crucial. In this plenary, I will discuss how ESP practitioners can find inspiration for implementing new ideas in their own teaching and assessment by referring to relevant research. Despite their usefulness, research studies are not always consulted, possibly because the vast number of studies in the literature can seem overwhelming, and the relevance of research to course design and assessment may not always be obvious.
To help ESP practitioners overcome such obstacles, I will illustrate how a curriculum development framework, which I proposed in an award-winning article (Chan, 2018), can serve to guide them in identifying relevant research findings. These findings can then provide ideas for innovating various aspects of ESP practice, including course design and assessment. I will also show how practitioners can access research output through various channels, such as open access journals and research summaries.
Reference:
Chan, C. S. C. (2018). Proposing and illustrating a research-informed approach to curriculum development for specific topics in business English. English for Specific Purposes, 52, 27–46. https://doi.org/10.1016/j.esp.2018.07.001
Biodata
Clarice Chan, PhD, SFHEA, is a researcher and practitioner in the areas of ESP, EAP and business communication. She supervises doctoral students in TESOL at the University of St Andrews, UK. Her co-edited book, New Ways in Teaching Business English (TESOL, 2014), was a finalist in the British Council’s 2015 ELTons Award for Teacher Innovation. Her 2018 paper, “Proposing and illustrating a research-informed approach to curriculum development for specific topics in business English”, published in English for Specific Purposes, won an Outstanding Article on Business Communication Award from the Association for Business Communication, USA.
Identifying indigenous criteria for the assessment of air traffic controllers
William Agius
Abstract
This presentation reports on research investigating the features of performance that air traffic controllers value in their peers. Through qualitative analysis of focus group transcripts, a framework of aeronautical radiotelephony communication was developed, from which five new rating criteria were derived. The new criteria emphasise interactional competence and the ability to achieve mutual intelligibility through language accommodation, the process by which air traffic controllers adapt their language output to the perceived limitations of their interlocutors on the radiotelephony frequency, based on their experience and good judgement. The results suggest that the pertinent features of language performance in the target language use domain are underrepresented in the linguistically informed rating scale currently in use for the assessment of air traffic controllers’ English language proficiency, and that the features of performance in the scale do not adequately reflect the linguistic challenges air traffic controllers encounter in their daily workplace routine. The study highlights the need to include subject matter experts at every stage of developing tests designed to measure communicative competence in contexts where language is used for specific purposes and as a lingua franca.
Biodata
William Agius is the deputy head of the Centre for Aviation of the School of Engineering at the ZHAW Zurich University of Applied Sciences. He holds Master’s degrees in English linguistics from the University of Zurich, in Corporate Communication Management from the University of Northwestern Switzerland, and in Language Testing from Lancaster University. William wrote his doctoral thesis on indigenous criteria at Lancaster University, under the supervision of Dr John Pill.
William’s research focusses on test and rating scale development in contexts of language use for specific purposes as a lingua franca. He is the lead developer for the ELPAC test suite for air traffic controllers and pilots, which is currently in use in 64 countries.