COVID-19 Update: Please note the revised timeline for submissions.
As Artificial Intelligence (AI) is increasingly adopted into application solutions, the challenge of supporting interactions with humans is becoming more apparent. Partly this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building trust as humans delegate greater competence and responsibility to such systems. The challenge is to find effective ways to characterize, and to communicate, the foundations of AI-driven behavior when the algorithms, and the knowledge on which those algorithms operate, are far from transparent to humans. While XAI at large is primarily concerned with black-box, learning-based approaches, model-based approaches are well suited, and arguably better suited, to explanation, and Explainable AI Planning (XAIP) can play an important role in helping users interface with AI technologies in complex decision-making procedures.
After the success of previous workshops on XAI and XAIP (e.g., at IJCAI 2017, IJCAI 2018, and ICAPS 2018-2019), the mission of this workshop is to mature and broaden this community, fostering continued exchange on XAIP topics at ICAPS. Apart from XAI@IJCAI, the planning-specific XAIP workshop also runs in parallel to sister venues such as EXTRAAMAS@AAMAS and XLoKR@KR (with a stronger focus on agent theory and knowledge representation, respectively), as part of this broader community around explainable AI. To broaden the XAIP community at ICAPS, this year we include an additional set of topics on the role of user interfaces in XAIP, acknowledging the inseparable role of interfacing in explanations.
The workshop includes, but is not limited to, the following topics:
- representation, organization, and memory content used in explanation
- the creation of such content during plan generation or understanding
- generation and evaluation of explanations
- contrastive explanations
- the way in which explanations are communicated and personalized to humans (e.g., plan summaries, answers to questions)
- the role of knowledge and learning in explainable planners
- human vs AI models in explanations
- links between explainable planning and other disciplines (e.g., social science, argumentation)
- use cases and applications of explainable planning
The UX of XAIP
- User interfaces for explainable automated planning and scheduling
- Plan and schedule visualization
- Mixed initiative planning and scheduling
- Emerging technology for human-planner interaction
- Metrics for human readability or comprehensibility of plans and schedules
- Explainable automated planning and scheduling for user interfaces
- Representing and solving planning domains for user interface creation and design tasks
- Plan, activity, and intent recognition of users’ interactions with interfaces
- Developing user (mental) models with description languages and decision processes
Here are a few recent surveys on topics in XAIP (newest first):
- The Emerging Landscape of Explainable AI Planning and Decision Making. Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati. IJCAI 2020. [link]
- Explainable AI Planning (XAIP): Overview and the Case of Contrastive Explanation. Jörg Hoffmann and Daniele Magazzeni. In: Reasoning Web. Explainable Artificial Intelligence. [link]
- Explainable Agents and Robots: Results from a Systematic Literature Review. Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling. AAMAS 2019. [link]
- Explanation in Artificial Intelligence: Insights from the Social Sciences. Tim Miller. In: Artificial Intelligence Journal. [link]
- Paper submission deadline: July 31st, UTC-12 (submissions are open)
- Notification of acceptance: TBD (Before the ICAPS 2020 early registration deadline)
- Camera-ready paper submissions: TBD
- Workshop date: October 26-27, 2020
We invite submissions of the following types:
- Full technical papers making an original contribution; up to 9 pages including references;
- Short technical papers making an original contribution; up to 5 pages including references;
- Position papers proposing XAIP challenges, outlining XAIP ideas, debating issues relevant to XAIP; up to 5 pages including references.
Submissions must be made through the following EasyChair link: https://easychair.org/conferences/?conf=xaip2020
Papers must be prepared according to the instructions for ICAPS 2020 (in AAAI format) available at https://www.aaai.org/Publications/Templates/AuthorKit20.zip. Authors considering submitting papers rejected from the main conference should do their utmost to address the comments given by the ICAPS reviewers. Please do not submit papers to the workshop that are already accepted for the main conference.
Every submission will be reviewed by members of the program committee according to the usual criteria, such as relevance to the workshop, significance of the contribution, and technical quality. Authors can select whether they want their submissions to be single-blind or double-blind (recommended for IJCAI dual submissions) at the time of submission.
The workshop is meant to be an open and inclusive forum, and we encourage papers that report on work in progress or that do not fit the mold of a typical conference paper.
At least one author of each accepted paper must attend the workshop to present the paper. Authors must register for the ICAPS main conference to attend the workshop; there will be no separate workshop-only registration.
Accepted papers will be compiled into post-workshop proceedings and posted on this page. Workshop proceedings are not archival and do not require the transfer of copyright.
XAI 2020 Sync: In addition to the usual workshop proceedings, this year we are also exploring opportunities to cross-pollinate with the XAI Workshop at IJCAI 2020. As part of this, authors of accepted papers at either venue will have the option to present at the other venue as well. More details will be made available after the respective paper acceptance notifications.
- Tathagata Chakraborti IBM Research AI
- Jeremy Frank NASA Ames
- Rick Freedman SIFT
- Claudia V. Goldman General Motors
- Daniele Magazzeni King’s College London
Contact: tchakra2 AT ibm DOT com