Multimodal Co-Construction of Explanations with XAI Workshop

This workshop aims to bring together two growing and increasingly important research strands. On the one hand, Explainable AI (XAI) is a flourishing field concerned with developing methods to make modern machine learning-based AI systems transparent and “scrutable” for their stakeholders (developers, users, policy makers). Current systems provide seemingly superior solutions to highly complex problems while relying on so-called black-box models, such as deep neural network architectures. In response, the field has begun to develop XAI methods that aim to provide human-understandable explanations of system decisions or behavior. However, such explanations are often generated ad hoc, for individual decisions, and are primarily aimed at developers. Enabling naive users to understand current AI systems, and thus empowering them to use such technology in an informed, self-directed, and responsible manner, remains a challenge [1]. On the other hand, research on multimodal interaction has made significant progress towards meaningful interaction and communication between human users and artificial systems or agents. This field has developed sophisticated methods for processing social signals, generating expressive communicative behavior, and enabling multimodal dialogue between users and AI agents.

With this workshop, we aim to foster scientific exchange and cross-fertilization between these two fields, which we believe is much needed and of great mutual benefit. Currently, XAI relies largely on either visualization or language-based representations (using LLMs). At the same time, initial approaches to explainable autonomous robots (XAR) or embodied agents highlight the need for situated and multimodal forms of explanation [2]. Moreover, it has already been argued that explanations are produced through an interactive and social process, co-constructed by the explainer (the XAI system) and the explainee (the human user) [3]. The communicative means for carrying out this interactive process (e.g., conversational speech, facial expressions, gestures, feedback, interactive repair, turn-taking) are inherently multimodal and require the development and application of advanced methods for processing multimodal behavior and interaction, with a dedicated focus on explanations. The expected outcomes of the workshop are the identification of key research lines and approaches, to be documented in the workshop proceedings, improved networking between researchers in XAI and ICMI, and a better understanding of XAI as a multimodal, interactive co-construction challenge.

Important dates

All dates are Anywhere on Earth (AoE).

  • Submission due: June 3, 2024
  • Notification: July 2, 2024
  • Camera-ready due: August 16, 2024
  • Workshop date: November 4 or 8, 2024 (TBD)

Submission instructions

Please see the Call for Papers for all information on how to submit your work.

Organisers

Hendrik Buschmeier, Stefan Kopp, Teena C. Hassan

  1. Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. DOI:10.1016/j.artint.2018.07.007

  2. Stange, S., Hassan, T., Schröder, F., Konkol, J., & Kopp, S. (2022). Self-explaining social robots: An explainable behavior generation architecture for human-robot interaction. Frontiers in Artificial Intelligence, 5. DOI:10.3389/frai.2022.866920

  3. Rohlfing, K., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H., Buschmeier, H., Grimminger, A., Hammer, B., Häb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A.-C., Schulte, C., Wachsmuth, H., Wagner, P., & Wrede, B. (2021). Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. IEEE Transactions on Cognitive and Developmental Systems, 13, 717–728. DOI:10.1109/TCDS.2020.3044366