1st Multimodal-XAI Workshop: “Multimodal Co-Construction of Explanations with XAI”

Workshop held at ACM ICMI 2024 (hybrid, with online participation possible)

https://dililab.github.io/multimodal-xai-2024/

We cordially invite you to submit papers to the 1st Workshop on Multimodal Co-Construction of Explanations with XAI (Multimodal-XAI), which will be held as part of the 26th ACM International Conference on Multimodal Interaction (ICMI 2024) in San José, Costa Rica, on 4th or 8th November 2024 (to be determined). Online participation via video conferencing will be possible.

Overview:

This workshop aims to bring together two growing and increasingly important research strands — explainable AI (XAI) and multimodal interaction — in order to establish and promote a new field of research that investigates the construction of explanations by means of multimodal interaction between an XAI system (as explainer) and a human user (as explainee). With this workshop, we intend to foster scientific exchange and cross-fertilization between the XAI and ICMI communities in order to better understand XAI as a multimodal, interactive co-construction challenge and to identify key research lines and approaches.

This full-day workshop includes two keynotes, oral paper presentations, and a poster session. Given the interdisciplinary nature of the workshop topic, the keynote speakers, programme committee members, and organisers hail from diverse disciplines, including XAI, Human-Computer/Robot Interaction, Multimodal Interfaces, Cognitive Science, and Digital Linguistics.

Important Dates:

  • Paper submission deadline: 3rd June 2024, 23:59 AoE
  • Paper acceptance notification: 2nd July 2024
  • Camera-ready version due: 16th August 2024, 23:59 AoE
  • Workshop date: 4th or 8th November 2024 (TBD)

Topics of Interest:

We invite paper submissions on several topics, including (but not limited to):

  • Computational approaches for generating multimodal explanations
  • Approaches to multimodal explainable machine learning
  • Multimodal recognition of user explanation needs
  • Multimodal explanation dialogue models and systems
  • Architectures for multimodal explainable AI systems
  • XAI methods (e.g. explainable deep reinforcement learning, XAI for informed machine learning) applied in multimodal systems
  • Multimodal interaction with explainable autonomous robots (XAR) or other autonomous systems
  • Explainable interaction with collaborative agents/robots
  • Studies on multimodal explanations in dyadic or multi-party settings (human-human or human-machine)
  • Multimodal datasets on explanatory interaction
  • Taxonomy and theories for multimodality and co-construction of explanations
  • Empirical studies on the effects of multimodal explanations on users
  • Applications of multimodal XAI systems and technologies

Submission Guidelines:

We invite submissions presenting original work, including work that may be described as ‘towards’ or ‘in progress’, on topics of relevance to this workshop. All submissions will undergo double-blind review.

Two types of submissions are possible:

  • Full papers: 6 pages, excluding references
  • Short papers: 2 pages, excluding references

All submitted papers should be in two-column format and conform to the ACM guidelines. See the sections “Preparing paper with LaTeX” and “Preparing paper with Word” for information on the templates for preparing manuscripts: https://icmi.acm.org/2024/guidelines/

Papers should be submitted through the OpenReview portal: https://openreview.net/group?id=ACM.org/ICMI/2024/Workshop/MultimodalXAI

All papers will be reviewed for relevance, originality, and quality. Full papers accepted after the double-blind review process will be eligible for oral and/or poster presentation; accepted short papers will be eligible for poster presentation only. All accepted papers (full and short) will be published as workshop proceedings in the ACM Digital Library alongside the ICMI proceedings.

Organisers:

  • Hendrik Buschmeier, Bielefeld University
  • Stefan Kopp, Bielefeld University
  • Teena C. Hassan, Bonn-Rhein-Sieg University of Applied Sciences