SIGLTA meeting on Thursday 23rd May: Artificial Intelligence (AI) – Changing the Face of Formative and Summative Assessment

You are cordially invited to attend the Special Interest Group in Language Testing and Assessment (SIGLTA) meeting. SIGLTA is a postgraduate student-led reading/research group within the Faculty of Arts and Humanities.

The meeting is at 17:00-18:30 on Thursday 23/05/2019 in room 1173, Avenue Campus (building 65), and will be led by Dr. Rose Clesham, the Director of Academic Standards and Measurement, Global Assessment, Pearson.

Abstract: Millions of English as a Second Language students are taught and assessed each year on both receptive (listening and reading) and productive (speaking and writing) skills for entry into English-speaking universities or professions. These tests are high stakes, and prospective candidates apply from across the world. So how can these skills be tested with high validity, reliability and freedom from bias, while also providing almost immediate feedback, accurate scoring and diagnostic information? This talk will describe and demonstrate how research and advances in Artificial Intelligence (AI) technologies have changed the way one of these testing agencies assesses and measures oral, aural, reading and written skills on a global scale, using large worldwide data sets. Artificial Intelligence as a concept is not new, dating back over seventy years. However, the enormous computing power and algorithmic advances now available enable AI and automated machine decision-making to process big data effortlessly, and it is now applied to many areas of society, from banking to entertainment.

In an educational assessment context, these AI technologies can be used for formative or summative purposes, and may in time replace both national and international tests and assessments. Public perception in this area has often focused on the lack of human interaction and judgement when automated marking technologies are used. This talk will demonstrate that in many ways the opposite is true. The use of AI technologies allows the judgement of hundreds of human assessors to work in unison, increasing validity in terms of broader content representation and removing bias and low-reliability issues. These technologies also significantly reduce teacher workload in terms of marking student work, yet still allow teachers to benefit from diagnostic feedback on their students, and release valuable time to facilitate personalised learning.
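As a loose illustration of the "many assessors working in unison" idea, the sketch below pools the scores of several human raters after standardising each rater's scores to offset their individual severity or leniency. This is not Pearson's actual scoring model, and the raters and scores are invented; it is only a minimal example of how pooling judgements can reduce individual rater bias.

```python
# Illustrative sketch only: pooling several raters' judgements so that
# individual severity/leniency bias is reduced. All data are invented.
from statistics import mean, stdev

def standardize(scores):
    """Z-score one rater's scores to offset their overall severity/leniency."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

# Hypothetical raw scores from three raters marking the same four essays.
raters = {
    "rater_A": [3, 4, 2, 5],   # moderate
    "rater_B": [2, 3, 1, 4],   # severe: consistently one band lower
    "rater_C": [4, 5, 3, 5],   # lenient
}

# Each essay's consensus score is the mean of its z-scores across raters,
# so rater B's systematic severity no longer drags essays down.
z = {name: standardize(s) for name, s in raters.items()}
consensus = [mean(z[name][i] for name in z) for i in range(4)]
print([round(c, 2) for c in consensus])
```

Operational systems are far more sophisticated (they model rater and item effects jointly, and score new responses automatically), but the principle of aggregating many calibrated human judgements is the same.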

The speaker: Rose is the Director of Academic Standards and Measurement, working in Global Assessment at Pearson. Her career started in teaching and teacher education before she moved on to governmental positions responsible for running national assessment programmes in the UK. Her roles at Pearson have included leading Assessment Design and Research teams, carrying out national and international alignment and benchmarking studies, and presenting at major international conferences. Rose has also worked extensively on OECD PISA assessments, co-writing the 2015 Scientific Literacy Framework.

If you require any further information, please contact the organisers by email or see the SIGLTA Facebook page.

SIGLTA meeting on Thursday 28th February: Setting a CEFR cut score on test instruments

You are cordially invited to attend the Special Interest Group in Language Testing and Assessment (SIGLTA) meeting. SIGLTA is a postgraduate student-led reading/research group within the Faculty of Arts and Humanities.

The meeting is at 17:00-18:30 on Thursday 28/02/2019 in room 1095, Avenue Campus (building 65).

Abstract: Standard setting is a decision-making process of setting a cut score – a certain point on a test scale used for classifying test takers into at least two different categories (e.g. pass or fail). The standard setting process usually entails recruiting a group of panellists to complete a variety of tasks in order to recommend a cut score on a certain test instrument. In this presentation, I will discuss what constitutes good practice in setting CEFR standards for language examinations. The most common standard setting methods will be covered as well as their associated challenges.
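The classification role of a cut score described above can be sketched in a few lines. The cut scores and the 0–100 scale below are invented for illustration and do not correspond to any operational CEFR standard-setting outcome; real cut scores are recommended by a panel through the methods the talk covers.

```python
# Minimal sketch: classifying test takers against panel-recommended cut
# scores on a hypothetical 0-100 test scale. Values are invented.
CUT_SCORES = {"B1": 45, "B2": 62, "C1": 80}  # hypothetical cut scores

def classify(score, cuts=CUT_SCORES):
    """Return the highest CEFR level whose cut score the test taker meets."""
    level = "below B1"
    # Check cut scores from lowest to highest, keeping the last one met.
    for lvl, cut in sorted(cuts.items(), key=lambda kv: kv[1]):
        if score >= cut:
            level = lvl
    return level

print(classify(44))  # below B1: one point short of the B1 cut
print(classify(62))  # B2: meets the B2 cut exactly
```

The simplicity of the final classification step is deceptive: the difficulty, and the focus of the talk, lies in the judgemental process by which the panel arrives at defensible cut scores in the first place.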

The speaker: Dr Charalambos (Harry) Kollias received his Ph.D. from Lancaster University. He works as an Assessment Research and Analysis Manager at Oxford University Press. He has over 30 years’ experience in the education sector, in roles ranging from teacher and teacher trainer to assessor trainer and (co)author of examination materials. With over 18 years’ experience in the assessment field, his areas of specialism include measurement analysis (pre- and post-test analysis), test material development, alignment studies, (virtual) standard setting workshops, and research studies. He has presented at several international conferences and facilitated the 12th Annual EALTA pre-conference workshop with Sauli Takala, entitled “Standard setting – how to implement good practice”. His main interests are Rasch measurement theory, (virtual) standard setting, language assessment and validation, and artificial intelligence.

If you require any further information, please contact the organisers by email or see the SIGLTA Facebook page.

SIGLTA meeting on Friday 20th April: An Investigation of Assessment Practices in Mexican EMI Programmes

You are cordially invited to attend the Special Interest Group in Language Testing and Assessment (SIGLTA) meeting. SIGLTA is a faculty-supported postgraduate student-led reading/research group. The meeting is at 17:00-18:00 on Friday 20/04/2018 in room 1097, Avenue Campus (building 65).

Abstract: Assessment is an essential part of teaching and learning practices; indeed, assessment is oriented towards developing students’ academic skills. However, in Higher Education Institutions (HEIs) where instruction is offered in a second language, and where neither the teachers nor the students are native speakers of that language, assessment can represent a double challenge for instructors. On the one hand, they have to design authentic, valid and reliable assessment tools that capture students’ development of skills in the subject being learned; on the other hand, those tools must provide sufficient evidence that the students are improving their language skills.

The aim of this study is to examine the assessment practices of content teachers at a university in Mexico where English is used as the medium of instruction (EMI), and where students are not required to demonstrate language competence at the beginning of their courses. The objective is to evaluate to what extent content teachers consciously include language features in their assessment practices, and the level of integration of content and language in this particular programme. Overall, it is expected that the results of the data analysis and the interviews with content teachers can be used in the design of a framework to help content teachers develop valid and reliable assessment tools in higher education programmes where the main goal is the integration of content and language.

Biodata: Lizbeth Morales-Berlanga is a second-year PhD student at the University of Southampton. Her research project specialises in assessment and language testing in EMI environments in Mexico and Latin America. Her previous research provides information about teachers’ perspectives on assessing speaking skills in English for Academic Purposes (EAP) courses. Her research interests include assessment methods and assessment practices in EAP, EMI, CLIL and ELF contexts.

CGE Research Seminar on 8th November: English as a Lingua Franca and language assessment: Challenges and opportunities


The next Centre for Global Englishes (CGE) seminar will take place on Wednesday 8th November 2017 from 5:00pm in Lecture Theatre C (room 1175), Building 65, Avenue Campus. The seminar will be presented by Dr Luke Harding from Lancaster University. All welcome!

Here is the abstract for this seminar:
English as a Lingua Franca (ELF) communication represents one of the most significant challenges to language testing and assessment since the advent of the communicative revolution. On the one hand, ELF destabilises the place of the native speaker and the notion of assessing against a “stable variety” (Jenkins & Leung, 2014, p.4). At the same time, however, research emerging from ELF studies suggests opportunities for reconceptualising and expanding language constructs. In this talk I will discuss the challenges and opportunities afforded by an English as a Lingua Franca perspective on language assessment. In the first part of the talk, I will describe the two fundamental challenges ELF presents for language assessment, and connect these with broader debates around the nature of communicative competence. I will then discuss how the language testing and assessment community has addressed the ELF challenge thus far, with examples from both scholarship and testing practice. Third, I will sketch an ELF construct for assessment purposes, and present two cases of small-scale studies which have attempted to operationalise this construct. Finally, I will discuss some new directions for research at the interface of ELF and language assessment.

Next CLLEAR seminar: “Measurement Principles and Language Facts”

The next Centre for Linguistics, Language Education and Acquisition Research (CLLEAR) seminar will take place on Wednesday 10 December 2014 from 5:00-7:00pm in Lecture Theatre A, Avenue Campus. The talk is entitled “Measurement Principles and Language Facts” and will be delivered by Professor John De Jong, Senior VP Global Assessment Standards at Pearson, Professor of Language Testing at VU University Amsterdam, and Programme Director of Framework Development for PISA 2015 and 2018. All welcome!

Modern Languages staff to present at the Language Testing Forum 2014


A number of Modern Languages staff including Clare Mar-Molinero, Roumyana Slabakova and Richard Kiely will be presenting at the Language Testing Forum 2014, taking place at the University of Southampton from 21-23 November 2014. This year’s conference will focus on the theme of ‘Language Testing: Engaging multi-disciplinary perspectives’, and offers an opportunity for language testers and language educators to engage with new developments in the field.

For further information about the Language Testing Forum 2014, including the final programme, visit the event page on the Modern Languages website.