The Future
Case Study: Disinformation
1. Workshops (Understanding Online Harms and Misinformation)
- Contextual Understanding of Online Harms: More work is needed to understand the context-specific and vulnerability-related complexities of online harms, particularly how situational vulnerabilities (e.g., exposure to pregnancy-related misinformation) shape the impact of online information. This calls for a contextual, idiographic understanding of misinformation harms, centered on how online information affects people and society.
- Community Empowerment and Collective Agency: The focus should shift from an individualistic to a community-level, collective understanding of agency. This means designing for collective empowerment by addressing the fears, concerns, and vulnerabilities shared across groups of people.
- Gen Z and Social Understanding of News: Further research is required into how Gen Z encounters and assesses information online, and specifically into how their social understanding of news can inform designs that increase their agency in the digital space.
2. AI Detection (Fake Faces)
- Cross-Platform Investigation: Extend the analysis to additional social networks (e.g., Twitter, Facebook) and other language communities to assess whether the observed trends hold across different market segments and moderation policies.
- Evolving AI Detection: As generative AI improves, its outputs may become harder to detect. Future work should explore next-generation detectors and adaptive adversarial techniques that evolve alongside AI-driven bot designs.
- Integration of AI-Generated Text: Extend the detection pipeline to jointly analyze profile photos and linguistic signals, focusing on how AI-generated text used in phishing or disinformation can be identified, yielding more robust models of influence-campaign detection (see the fusion sketch after this list).
- Human Factors Research: Additional user studies are needed to understand how users perceive and respond to AI-based deception, both visual and textual, and how user-interface interventions (e.g., warning labels, trust indicators) can mitigate the impact of disinformation.
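One way the joint photo/text analysis could be structured is late fusion: score each modality with its own detector, then learn a small classifier over the two scores. The sketch below is illustrative only; the per-modality detectors are simulated with synthetic scores, and nothing here reflects the project's actual pipeline.

```python
"""Late-fusion sketch for joint photo + text bot detection.

Assumption: two upstream detectors already exist, one scoring profile
photos for generative artifacts and one scoring post text for
AI-generation likelihood. Both are simulated here with synthetic data.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, n)  # 1 = fake/coordinated account

# Simulated detector outputs: fake accounts skew higher on both scores,
# but each modality alone is noisy.
face_score = np.clip(rng.normal(0.35 + 0.30 * labels, 0.20), 0.0, 1.0)
text_score = np.clip(rng.normal(0.40 + 0.25 * labels, 0.25), 0.0, 1.0)
X = np.column_stack([face_score, text_score])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

# Small fusion classifier over the two per-modality scores.
fusion = LogisticRegression().fit(X_tr, y_tr)

print(f"face-only AUC: {roc_auc_score(y_te, X_te[:, 0]):.3f}")
print(f"text-only AUC: {roc_auc_score(y_te, X_te[:, 1]):.3f}")
print(f"fused AUC:     {roc_auc_score(y_te, fusion.predict_proba(X_te)[:, 1]):.3f}")
```

Late fusion keeps the modalities decoupled, so a stronger image or text detector can be swapped in later without retraining the other branch.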
3. AI Detection (AI-Generated CVs)
- Expanding the Bias Map: Investigate new models, including multimodal AI applied to cover letters, CVs, and video interviews, to explore broader hiring biases and enrich the bias map.
- Intersectional Analysis: Explore how multiple demographic attributes interact in hiring decisions, since biases may compound or vary across intersecting groups (illustrated in the sketch after this list).
- Regulatory Integration: Align the bias-detection framework with AI compliance standards (e.g., the EU AI Act, GDPR) to ensure safer AI deployment in HR practices and to strengthen fairness and accountability in AI-driven recruitment.
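A minimal sketch of what an intersectional audit could look like: selection rates are computed per combination of attributes rather than per single attribute, then compared against the best-treated group. The column names, toy data, and the four-fifths threshold are assumptions for illustration, not the project's actual bias map.

```python
"""Intersectional selection-rate audit (toy example).

Assumptions: "shortlisted" is a binary screening outcome, and the
four-fifths rule is used as a disparate-impact heuristic.
"""
import pandas as pd

df = pd.DataFrame({
    "gender":      ["F", "F", "M", "M", "F", "M", "F", "M"] * 50,
    "ethnicity":   ["A", "B", "A", "B", "B", "A", "A", "B"] * 50,
    "shortlisted": [1, 0, 1, 1, 0, 1, 1, 0] * 50,
})

# Rates per intersectional group (gender x ethnicity), so interaction
# effects invisible at the single-attribute level become measurable.
rates = df.groupby(["gender", "ethnicity"])["shortlisted"].mean()
impact_ratio = rates / rates.max()

print("Selection rates:")
print(rates.round(3).to_string())
print("\nImpact ratio vs. best-treated group:")
print(impact_ratio.round(3).to_string())
print("\nGroups below the four-fifths threshold:")
print(impact_ratio[impact_ratio < 0.8].round(3).to_string())
```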
4. ARELight (Disinformation Detection)
- Extending Entity-Relation Extraction: Build on the current framework by extending entity-relation extraction beyond single-sentence contexts to capture broader discourse patterns in long-form posts and conversation threads (see the sketch after this list).
- Domain-Specific Language Models: Integrate domain-specific language models (e.g., trained on health-related or election-related disinformation data) to improve classification accuracy for specialized investigations.
- Explainable AI Components: Incorporate explainable AI to show how particular edges or subgraphs influence a disinformation verdict, strengthening trust and interpretability for researchers and the public.
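To make the thread-level idea concrete, the sketch below aggregates entity co-occurrences across a whole conversation into a weighted graph. It deliberately does not use ARELight's actual API: the keyword "extractor", the entity lexicon, and the posts are toy assumptions, and a real system would substitute proper NER and relation classification.

```python
"""Thread-level entity graph (toy illustration, not ARELight's API)."""
from itertools import combinations

import networkx as nx

ENTITIES = {"VaccineX", "MinistryOfHealth", "ChannelZ"}  # hypothetical lexicon

thread = [
    "ChannelZ claims VaccineX causes outbreaks.",
    "The MinistryOfHealth denies the ChannelZ report.",
    "VaccineX trials were reviewed by the MinistryOfHealth.",
]

G = nx.Graph()
for post in thread:
    found = sorted(e for e in ENTITIES if e in post)
    # Strengthen an edge for every entity pair co-occurring in a post;
    # aggregating over the thread captures discourse-level structure
    # that per-sentence extraction misses.
    for a, b in combinations(found, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Edge weights rank which relations dominate the thread; the same scores
# could feed an explanation layer ("this edge drove the flag").
for a, b, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{a} -- {b}: weight={d['weight']}")
```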
5. Poli: Terms & Conditions Platform
- Enhanced User Testing: Expand pilot studies with diverse user groups, particularly vulnerable consumers, to refine usability metrics and better align the platform with Consumer Duty standards.
- Cross-Industry Deployment: Adapt Poli for other sectors, such as insurance and energy, where lengthy documents hinder user understanding (a toy readability check is sketched after this list), broadening its impact.
- Regulatory Collaborations: Maintain an ongoing dialogue with regulatory bodies such as the FCA to ensure Poli keeps pace with evolving compliance requirements and remains both legally robust and customer-focused.
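As one example of the kind of readability metric such testing could refine, the sketch below flags clauses that exceed simple length and vocabulary thresholds. The thresholds and heuristic are assumptions for illustration and are not Poli's actual analysis.

```python
"""Toy readability gate for terms-and-conditions text.

Assumptions: sentence length and long-word counts serve as rough
proxies for reading difficulty; thresholds are illustrative, not Poli's.
"""
import re

MAX_WORDS = 25        # sentence-length threshold (assumed)
LONG_WORD_CHARS = 10  # "long word" cutoff (assumed)

terms = (
    "The provider may at its sole discretion and without prior notification "
    "amend, suspend, or discontinue any functionality of the service. "
    "You can cancel at any time."
)

for sentence in re.split(r"(?<=[.!?])\s+", terms.strip()):
    words = sentence.split()
    long_words = [w for w in words if len(w.strip(".,;")) >= LONG_WORD_CHARS]
    # Flag clauses that are long or dense with long words as candidates
    # for plain-language rewriting.
    if len(words) > MAX_WORDS or len(long_words) >= 3:
        print(f"FLAG ({len(words)} words, {len(long_words)} long): {sentence}")
```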