Partner institutions: CNRS, Inria, Université Grenoble Alpes, Université Paris-Saclay,
IMT, Sorbonne Université, Université Claude Bernard Lyon 1, Université de Lille, Université Toulouse 3,
CESI, ENAC, ENSAM, Université de Lorraine, Université Gustave Eiffel, Université de Technologie de Troyes.

Contact: pc3 [at]

3 priority themes:

  1. Modeling and understanding groups of agents with social capabilities in intelligent systems

  2. Modeling & understanding of collaborative or competitive interactions between humans and AI-driven entities

  3. Impact of intelligent systems on expertise and deskilling

Goal: funding of 3-5 Ph.D. theses

Theme 1: Modeling and understanding groups of agents with social capabilities in intelligent systems

Interaction in everyday life or work situations often happens in groups. Here, a group may comprise up to twenty people and include one or several intelligent systems. Group interactions involve complex human behaviors such as turn-taking, embodied synchronization, and imitation; they cannot be reduced to a dyad or even to a set of dyads. Moreover, group interactions involve dynamics that tend to accentuate polarization among group members. Intelligent systems introduced to support group interaction should account for these complex configurations and capture these group dynamics in order to enhance cohesion, sustain collaboration on a joint action, and manage conflicts. The challenge is to ensure that introducing intelligent systems into a group of human users serves the purpose of the collaboration. We will develop measures and metrics to understand how collaboration arises and evolves in a group of human users and intelligent systems, taking into account the complexity of individual behavior in the context of collaboration. The project aims to go beyond dyadic interaction by conceiving and developing models to understand and explore the implications of collaborative interactions between groups of humans and groups of agents. Several topics will be addressed: the sociability of agents, the participation and adaptation of agents to group and collective activities, and the use of AI to strengthen the role of artificial agents in creative tasks.

Keywords: Intelligent Systems; Creative Process; Group; Conflict; Leadership; Collaborative Interactions; Emergence of Collective Agency

Theme 2: Modeling & understanding of collaborative or competitive interactions between humans and AI-driven entities

With the increasing complexity of intelligent systems comes increasing difficulty in understanding them. Indeed, a common observation is that human users interacting with an intelligent system do not understand the elements that drive the system's decisions, an understanding that is necessary to empower individuals and maintain a sense of control. The goal of Explainable AI (XAI) is generally couched in terms of explaining to users how the algorithm works (e.g., its criteria). However, users also seek to understand which aspects of their own behavior the intelligent system understands. More importantly, they want to know what the system will allow them to do next and what the consequences are for the different stakeholders. Conversely, the AI entity should estimate the states and intentions of the users in order to compute the behavior required to assist them. Moreover, human users need to stay in control of these intelligent systems. But as a system takes on more and more complex tasks, it also becomes more and more difficult to control, whether through offline configuration (e.g., recommender systems) or online interaction (e.g., communicating with conversational agents or carrying out collaborative tasks), especially when control depends on proprietary applications. To this end, we will conceive models and frameworks for designing interactions with AI-driven entities that account for users' diversity, affective and/or cognitive states, and willingness to exercise control. Interaction between humans and AI-driven entities requires measuring, understanding, and modeling the internal and external states of the users, based on multimodal signals. This will enable the AI system to adapt the interaction while accounting for users' diversity, representations, and experience. A promising approach consists in modeling human-AI interactions from observations of human-human interactions.
A key challenge is to encompass and integrate the range of cognitive and emotional states of the user resulting from anticipated or ongoing interaction with the system, as well as the user's own agency. In addition, the project will explore new avenues to enhance cooperation within hybrid teams using dialogic communication or action adaptation between virtual and real entities.

Keywords: Complexity; Control; Explainability; User's State; Trust; Privacy; Agency; Diversity

Theme 3: Impact of intelligent systems on expertise and deskilling

Intelligent systems can engage in long-term interactions with human users for a variety of services. By adapting to users, intelligent systems may in turn change users' behaviors. For example, humans who seek recommendations (e.g., movies, diagnoses) may get used to following them rather than exploring alternatives or questioning them; they may instead decide to systematically go against them out of algorithm aversion; finally, they may try to develop intuitions about how the system works (such intuitions are rarely "correct", since humans and machine learning algorithms reason in completely different ways). These long-term interaction mechanisms can lead to incorrect assumptions and expectations, and potentially to deception. Collaboration between humans and intelligent systems can also modify users' behavior outside of the interaction, potentially on a much longer time scale. Indeed, users, especially expert users, come to rely on these systems and begin to lose expertise. This form of deskilling may be benign, or devastating in safety-critical systems (e.g., assistive robotic surgery) when users no longer fully recall how to apply their skills in an emergency. While some guardrails exist in specific contexts (e.g., the continuous training of airline pilots), many users and institutions do not consider this form of deskilling, which can reduce the quality of users' work. The challenge is to adapt collaboration with intelligent systems so as to augment users' capabilities over the long term. Fully or even partially delegating tasks to intelligent agents has an obvious impact on companies and, more generally, on work organizations. Human resources management practices, such as building operators' careers on competences or on the continuous growth of abilities, will need to be carefully adapted to the introduction of new kinds of intelligent automata.
The same observation applies to the experiences of workers, and to the genuine meaning and value of work, when facing this potential new Taylorism (splitting tasks and reattributing a large part of them to machines). This project will tackle these questions through the various contributions of the humanities and social sciences, but also from a technical point of view, in order to better define standards, limits, and conditions of use for AI techniques applied to collective workplaces. The notion of authority sharing could also be addressed and formalized here.

Keywords: Long-Term Interaction; Co-Adaptation; Learning; Authority sharing; Deskilling; Vulnerability