Case Study: Disinformation
Exploring the complex online harms arising from people’s encounters with information of questionable veracity, from sources with potentially dubious intentions
Project Summary
The rise of social media has reshaped how people consume news, leading to the “platformisation” of information. Instead of actively searching, users passively encounter content shaped by opaque algorithms optimised for engagement. This reduces user control to post-encounter actions (e.g., liking, reporting) while obscuring why they see certain content, how it’s tailored, or how to assess its reliability. The consequences – misinformation, filter bubbles, and threats to democracy – are worsened by AI advancements. Human-Computer Interaction (HCI) research explores ways to enhance trust, user agency, and understanding of online information. The AGENCY project builds on this by studying young people’s experiences with online misinformation, particularly in pregnancy choice contexts.
Key research activities include:

- Developing methods for automated fact verification using Wikipedia or Google search results as knowledge sources
- Creating methods for detecting AI-generated text and AI-generated images
- Quantifying Systematic Offensive Stereotyping: a bias that occurs when technology meant to detect slurs instead associates victimised groups with them, silencing victims rather than perpetrators
- Exploring how biases in AI language models affect fairness in detecting toxic content, and testing ways to reduce these biases
- Conducting a comprehensive analysis of online hate speech on 4chan's politically incorrect board (/pol/), an anonymous and minimally moderated platform notorious for hosting controversial and extremist content
- Examining how AI-generated profile photos affect the perceived realism and effectiveness of malicious bots
- Proposing a graph-based topic-modelling framework for detecting coordinated disinformation, which captures relationships and sentiment between key entities, enabling investigators to track shifting narratives and identify suspicious patterns
- Measuring potential bias in AI-driven recruitment across different demographic groups
- Building Poli, an AI-powered platform that simplifies dense Terms & Conditions, making legal and financial documents clear and accessible to consumers
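To make the graph-based idea above concrete, here is a minimal sketch of an entity co-occurrence graph with edge-level sentiment. It is purely illustrative and not the project's actual framework: the posts, entity list, and tiny word-count lexicon are hypothetical stand-ins for a trained entity recogniser and sentiment model.

```python
from collections import defaultdict
from itertools import combinations

# Toy sentiment lexicon (illustrative only; a real system would use a
# trained sentiment classifier rather than word counts).
POSITIVE = {"safe", "trusted", "effective"}
NEGATIVE = {"dangerous", "fraud", "hoax", "banned"}

def post_sentiment(tokens):
    """Crude lexicon score for one post: (#positive - #negative) words."""
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def build_entity_graph(posts, entities):
    """Connect entities co-mentioned in the same post; each edge counts
    co-mentions and accumulates the post's sentiment score."""
    graph = defaultdict(lambda: {"count": 0, "sentiment": 0})
    for post in posts:
        tokens = post.lower().split()
        mentioned = sorted(e for e in entities if e in tokens)
        score = post_sentiment(tokens)
        for a, b in combinations(mentioned, 2):
            edge = graph[(a, b)]
            edge["count"] += 1
            edge["sentiment"] += score
    return dict(graph)

posts = [
    "vaccine is dangerous says clinic hoax",
    "clinic calls vaccine safe and effective",
    "vaccine hoax spread by clinic",
]
g = build_entity_graph(posts, {"vaccine", "clinic"})
print(g[("clinic", "vaccine")])  # → {'count': 3, 'sentiment': -1}
```

Re-running this over sliding time windows and watching how edge sentiment shifts is one simple way such a framework could surface changing narratives around the same entities.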
Meet the Team

Abi Durrant
Co-Investigator
Professor of Interaction Design, School of Computing, Newcastle University

Ehsan Toreini
Co-Investigator
Lecturer, School of Computer Science and Electronic Engineering, University of Surrey

Adrian Bermudez Villalva
Research Associate
Research Associate, Royal Holloway, University of London

Karen Elliott
Co-Investigator
Chair (Full Professor) of Practice in Finance and Fintech, Business School, University of Birmingham


Maryam Mehrnezhad
Co-Investigator
Reader in Information Security, Information Security Group (ISG), Royal Holloway, University of London

Maxim Kolomeets
Research Associate
PhD in Computer Science, Department of Computer Science, Durham University

Shrikant Malviya
Research Associate
PhD in Computer Science, Department of Computer Science, Durham University

Stamos Katsigiannis
Co-Investigator
Assistant Professor, Department of Computer Science, Durham University

Vasilis Vlachokyriakos
Co-Investigator
Reader (Associate Professor) of Human-Computer Interaction, School of Computing, Newcastle University

Viana Nijia Zhang
Research Associate
PhD in Computer Science, School of Computing, Newcastle University