Titre / Title: A Model for Reasoning About Human – Bot Interactions in Social Networks

Projet Ciblé (PC) / Targeted Project (PC): PC3 MATCHING – Collaboration with intelligent systems

Commentaires / Comments:

État du sujet / State of the subject: Disponible / Available

Date de publication / Publication date: 18/04/2025

Institution de rattachement / Institutional affiliation: Inria

Résumé / Abstract: Recent behavioral experimental studies have revealed a feedback loop in which human–AI interactions alter the processes underlying human perceptual, emotional and social judgements, subsequently amplifying biases in humans [Glickman 2025]. This amplification is significantly greater than that observed in interactions between humans, due both to the tendency of AI systems to amplify biases and to the way humans perceive AI systems. This multidisciplinary PhD project, combining multi-agent modelling and social science, aims to extend a model of opinion formation under cognitive bias developed by the PhD supervisor Frank Valencia [Alvim et al., 2024] to explicitly incorporate AI-driven social bots (i.e., automated AI agents that participate in discussions) and to analyze their role in bias amplification within social networks. The goal is to develop a mathematical framework to simulate and analyze how interactions with intelligent bots contribute to bias reinforcement, to test the model using real-world social network data, and to design and evaluate strategies to mitigate bias reinforcement by social bots.
This PhD proposal aligns primarily with PC3 MATCHING and also with PC4 CONGRATS. It takes a multidisciplinary approach combining multi-agent models and social science, and the co-supervision by Frank Valencia and Jean-Claude Dreher will allow us to combine their complementary expertise in computational modelling and group-behaviour experimentation. Furthermore, the strategies to mitigate bias reinforcement using social bots address a collaborative aspect of social networks: how can social bots be used to reduce the impact of cognitive biases, rather than amplify them, thus allowing for a more rational civil discourse? The modelling of users with cognitive biases who hold contrasting opinions (i.e., competing over a given subject) while interacting with intelligent bots also relates to PC3 MATCHING Themes 2 and 3.
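To give a concrete flavour of the kind of simulation envisaged, the sketch below implements a minimal DeGroot-style opinion-update rule with a distance-based confirmation-bias weight, plus one fixed-stance bot agent. This is an illustrative simplification, not the actual model of [Alvim et al., 2024]: the function names, the specific bias weighting `1 - |x_j - x_i|`, and the toy network are all assumptions made for the example.

```python
import numpy as np

def step(opinions, adjacency, bot_mask, bot_opinion):
    """One synchronous update of a simplified DeGroot-style model with
    confirmation bias (illustrative only, not the Alvim et al. model).

    opinions:  array of opinions in [0, 1]
    adjacency: 0/1 influence matrix (adjacency[i, j] = 1 means i listens to j)
    bot_mask:  True for bot agents, which never update and always
               broadcast bot_opinion
    """
    opinions = opinions.copy()
    opinions[bot_mask] = bot_opinion           # bots hold a fixed stance
    new = opinions.copy()
    for i in np.flatnonzero(~bot_mask):        # only humans update
        nbrs = np.flatnonzero(adjacency[i])
        if nbrs.size == 0:
            continue
        diff = opinions[nbrs] - opinions[i]
        # confirmation bias: influence decays with opinion distance
        beta = 1.0 - np.abs(diff)
        new[i] = opinions[i] + (beta * diff).mean()
    return np.clip(new, 0.0, 1.0)

# Toy run: 5 humans on a ring, plus one bot every human listens to.
rng = np.random.default_rng(0)
n = 6
adj = np.zeros((n, n), dtype=int)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
adj[:5, 5] = 1                                 # humans listen to the bot
bots = np.array([False] * 5 + [True])
x0 = rng.uniform(0, 1, n)                      # initial opinions
x = x0.copy()
for _ in range(50):
    x = step(x, adj, bots, bot_opinion=0.9)
```

In this toy setting the bot's fixed stance steadily pulls the human opinions toward it, which is the kind of bias-reinforcement dynamic the project aims to quantify; mitigation strategies could then be explored, e.g., by adding a second bot with a moderating stance.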

Détails du sujet / Subject details: PDF

Directeur / directrice de thèse / Main advisor: Frank Valencia (frank.valencia@lix.polytechnique.fr) – LIX, Ecole Polytechnique

Encadrant(e) de thèse / Secondary advisor: Jean-Claude Dreher (dreherjeanclaude@gmail.com) – ISC M. Jeannerod, UMR5229

Autre Encadrant(e) de thèse / Additional advisor:

Pour faire acte de candidature sur ce sujet, veuillez écrire aux auteurs directement / To apply on this subject, please write directly to the authors.