Keeping English language testing relevant in the AI era
As AI reshapes how we study, work, and communicate, questions are being asked about the future of English language learning and testing. If translation tools and generative text can produce fluent outputs, what do tests need to measure? And how can stakeholders be confident that test results still reflect readiness for the demands of real life?
Trinity College London’s answer is ISE Digital. This qualification goes beyond proficiency alone, assessing not only English language ability but also the competencies that enable learners to thrive in higher education and professional contexts.
The role of presentations in assessment
One of the clearest examples is the speaking module. Candidates prepare and deliver a short talk on a topic of their choice. This may seem like a simple speaking task, but it requires a much broader range of skills.
Candidates must undertake independent research and organise the information to build a structured argument. They then need to deliver it with clarity and confidence, which are essential skills in academic and professional settings. A follow-up question ensures they go beyond mere memorisation, demonstrating their ability to listen, reflect, and adapt in real time.
As a result, this seemingly straightforward task fosters autonomy, critical decision-making, and the integration of ideas, while boosting confidence in spoken language. No AI tool can replicate or ultimately replace such skills.
Reading and listening as strategic processes
In his book Reading in a Second Language, Grabe highlights how skilled reading draws on strategies such as skimming, scanning, inferencing, and evaluation. Rost and Field, in their respective work on listening, demonstrate that it is an active process combining decoding with meaning construction, one that requires learners to integrate details, context, and world knowledge. Again, these skills remain essential even when interacting with AI tools.
Hence, ISE Digital tasks are designed to encourage the use of reading and listening strategies rather than relying solely on simple recall. For example, a reading question might ask candidates to compare how two texts frame the same issue, while a listening question may require them to infer a speaker’s attitude. Such questions reflect how learners process information in lectures, reports, and other contexts where strategy is more important than surface comprehension.
Writing for digital and academic contexts
If generative tools can already draft an email or produce a report, why should candidates still be tested on writing? The answer lies in what these tools cannot do. They cannot demonstrate judgement, accountability, sensitivity, or the ability to apply knowledge in real-world contexts.
In ISE Digital, candidates compose online texts, such as messages or emails, demonstrating clarity, audience awareness, and appropriate tone, while also engaging in academic writing that requires synthesis, paraphrasing, knowledge transformation, and structured argumentation. Together, these tasks assess critical engagement competencies that are essential for higher education and professional life worldwide, and that AI cannot replace.
Beyond proficiency
Ultimately, ISE Digital does more than place learners on a CEFR scale. It asks whether they can deliver a presentation, engage in dialogue, apply effective reading and listening strategies informed by research in applied linguistics, communicate appropriately in digital formats, and transform knowledge through academic writing. With adaptivity and levelling built into all four skills, tasks are aligned to ability while still requiring candidates to show growth and independence.
Trinity’s ethos is to view assessment not just as a measure of performance but as an opportunity to support ongoing learning. In an age of AI, where fluency can be simulated, ISE Digital highlights the human-centred skills that remain essential: autonomy, critical thinking, confident communication, and the ability to transform knowledge. These are the competencies universities, employers, and communities value most.
By embedding authenticity, adaptivity, and positive washback, ISE Digital provides a measure of language proficiency that also serves as a framework for developing the broader skills learners need to thrive in the 21st century.

About the author: Paraskevi (Voula) Kanistra is associate director/senior researcher at Trinity College London and holds a PhD in Language Testing from the University of Bremen. She specialises in designing English language assessments that combine academic rigour with practical impact, ensuring tests are fair, transparent, and aligned with the needs of learners and institutions worldwide.
Her expertise spans standard setting, CEFR alignment, validation, and measurement, and she has presented her work at major international conferences in Europe and Asia. She has published in leading journals, including Language Assessment Quarterly and Assessing Writing. She is the author of a forthcoming book on (virtual) standard setting, to be published by Peter Lang.