Project Summary

The rise of social media has reshaped how people consume news, leading to the “platformisation” of information. Instead of actively searching, users passively encounter content shaped by opaque algorithms optimised for engagement. This reduces user control to post-encounter actions (e.g., liking, reporting) while obscuring why they see certain content, how it’s tailored, or how to assess its reliability. The consequences – misinformation, filter bubbles, and threats to democracy – are worsened by AI advancements. Human-Computer Interaction (HCI) research explores ways to enhance trust, user agency, and understanding of online information. The AGENCY project builds on this by studying young people’s experiences with online misinformation, particularly in pregnancy choice contexts. 

Key strands of this work include:

  • Developing methods for automated fact verification that use Wikipedia or Google search results as knowledge sources (a minimal sketch follows this list).
  • Creating methods for detecting AI-generated text and AI-generated images.
  • Quantifying Systematic Offensive Stereotyping bias, which occurs when technology meant to detect slurs instead associates victimised groups with them, silencing victims rather than perpetrators.
  • Exploring how biases in AI language models affect fairness in detecting toxic content, and testing ways to reduce these biases.
  • Conducting a comprehensive analysis of online hate speech on 4chan’s politically incorrect board (/pol/), an anonymous and minimally moderated platform notorious for hosting controversial and extremist content.
  • Examining how AI-generated profile photos affect the perceived realism and effectiveness of malicious bots.
  • Proposing a graph-based topic-modelling framework for detecting coordinated disinformation, which captures relationships and sentiment between key entities so that investigators can track shifting narratives and identify suspicious patterns (see the second sketch after this list).
  • Measuring potential bias in AI-driven recruitment across different demographic groups.
  • Developing Poli, an AI-powered platform that simplifies dense Terms & Conditions, making legal and financial documents clear and accessible for consumers.
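The fact-verification work retrieves evidence from an external knowledge source and scores the claim against it. The sketch below is a minimal illustration of that pipeline, not the project’s actual system: it assumes the third-party `wikipedia` Python package for retrieval, and it uses a simple lexical-overlap heuristic where the real work would use a trained verification (natural language inference) model.

```python
# Minimal sketch: retrieve Wikipedia evidence for a claim, then score support.
# Assumes the third-party `wikipedia` package (pip install wikipedia).
# The overlap heuristic is a placeholder for a trained verification model.
import re

import wikipedia


def retrieve_evidence(claim: str, max_pages: int = 3) -> list[str]:
    """Fetch short candidate evidence passages from Wikipedia for a claim."""
    passages = []
    for title in wikipedia.search(claim, results=max_pages):
        try:
            passages.append(wikipedia.summary(title, sentences=3))
        except wikipedia.exceptions.WikipediaException:
            continue  # skip disambiguation pages and failed lookups
    return passages


def support_score(claim: str, passage: str) -> float:
    """Crude lexical overlap between claim and passage, in [0, 1]."""
    def tokens(s: str) -> set[str]:
        return set(re.findall(r"[a-z0-9]+", s.lower()))

    claim_tokens = tokens(claim)
    return len(claim_tokens & tokens(passage)) / max(len(claim_tokens), 1)


def verify(claim: str, threshold: float = 0.5) -> str:
    """Label a claim based on its best-supported evidence passage."""
    evidence = retrieve_evidence(claim)
    best = max((support_score(claim, p) for p in evidence), default=0.0)
    return "supported" if best >= threshold else "not enough evidence"


if __name__ == "__main__":
    print(verify("The Eiffel Tower is located in Paris."))
```

Swapping the heuristic for a sentence-pair entailment model is the natural next step; the retrieval stage works the same way whether the knowledge source is Wikipedia or Google search results.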
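The graph-based approach to coordinated disinformation links the key entities mentioned together in posts and attaches sentiment to those links, so shifts in how entities are framed become visible across a collection of posts. The sketch below illustrates only the underlying data structure, under loudly simplified assumptions: entities are matched by string lookup, sentiment comes from a toy word list, and the example posts and entity names are invented for illustration. It assumes the `networkx` package; it is not the project’s framework.

```python
# Illustrative entity graph with co-occurrence and sentiment on the edges.
# Entity matching and sentiment scoring are toy placeholders, not the
# project's framework. Requires the third-party `networkx` package.
from itertools import combinations

import networkx as nx

POSITIVE = {"safe", "effective", "trusted"}
NEGATIVE = {"dangerous", "hoax", "banned"}


def toy_sentiment(text: str) -> int:
    """Very rough sentiment: positive words minus negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


def build_entity_graph(posts: list[str], entities: list[str]) -> nx.Graph:
    """Link entities that co-occur in a post; accumulate sentiment per edge."""
    graph = nx.Graph()
    for post in posts:
        present = [e for e in entities if e.lower() in post.lower()]
        score = toy_sentiment(post)
        for a, b in combinations(present, 2):
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
                graph[a][b]["sentiment"] += score
            else:
                graph.add_edge(a, b, weight=1, sentiment=score)
    return graph


if __name__ == "__main__":
    posts = [
        "Vaccine X is a dangerous hoax pushed by Agency Y",
        "Agency Y says Vaccine X is safe and effective",
    ]
    g = build_entity_graph(posts, entities=["Vaccine X", "Agency Y"])
    for a, b, data in g.edges(data=True):
        print(f"{a} -- {b}: {data}")  # weight = co-mentions, sentiment = net tone
```

Tracking how edge weights and sentiment change over time windows is what would let an investigator spot shifting narratives or suspiciously coordinated framing.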

Methodology

In this section you will find the methodology used in the Disinformation case study.

Knowledge Repository

In this section you can explore knowledge and insights gained from our study.

Data

In this section you can access the data developed for the Disinformation case study.

Future

This section outlines the strategic plan for potential follow-on funding and policy impact.

Meet the Team

Abi Durrant

Co-Investigator

Professor of Interaction Design, School of Computing, Newcastle University

Ehsan Toreini

Co-Investigator

Lecturer, School of Computer Science and Electronic Engineering, University of Surrey

Adrian Bermudez Villalva

Research Associate

Research Associate, Royal Holloway University of London

Becca Owens

Researcher

PhD Candidate in Law, Newcastle Law School, Newcastle University

Helen Orpin

Project Manager

School of Computer Science, University of Birmingham

Karen Elliott

Co-Investigator

Chair (Full Professor) of Practice in Finance and Fintech, Business School, University of Birmingham

Lei Shi

Co-Investigator

Senior Lecturer, School of Computing, Newcastle University

Maryam Mehrnezhad

Co-Investigator

Reader in Information Security, Information Security Group (ISG), Royal Holloway University of London

Shrikant Malviya

Research Associate

PhD in Computer Science, Department of Computer Science, Durham University

Stamos Katsigiannis

Co-Investigator

Assistant Professor, Department of Computer Science, Durham University

Vasilis Vlachokyriakos

Co-Investigator

Reader (Associate Professor) of Human-Computer Interaction, School of Computing, Newcastle University