HCXAI 2021: CHI Workshop on Operationalizing Human-Centered Perspectives in Explainable AI
Virtual Event, Yokohama, Japan, May 9, 2021
| Conference website | https://hcxai.jimdosite.com/ |
| Submission link | https://easychair.org/conferences/?conf=hcxai2021 |
| Submission deadline | March 26, 2021 |
We are interested in a wide range of topics, from socio-technical aspects of XAI to human-centered evaluation techniques to the responsible use of XAI. We are especially interested in the discourse around one or more of these questions: who (e.g., clarifying who the human is in XAI, and how different "whos" interpret explainability), why (e.g., how social and individual factors influence explainability goals), and where (e.g., how explainability needs differ across diverse application areas). We particularly welcome participation from the Global South and from stakeholders whose voices are under-represented in the dominant XAI discourse. The following list of guiding questions is by no means exhaustive; rather, it is provided as a source of inspiration:
- Who are the consumers and relevant stakeholders of XAI? What are their needs for explainability? What values are reflected, and what tensions arise, in these needs?
- Why is explainability sought? What user goals should XAI aim to support? How are these goals shaped by technological, individual, and social factors?
- Where, or in what categories of AI applications, should we prioritize our XAI efforts? What do we need to understand about the users as well as the socio-organizational contexts of these applications?
- What are we missing from a technocentric view of XAI? Which human-centered and socio-technical perspectives should we bring in to better understand the who, why, and where, and to move towards human-centered XAI?
- How can we develop transferable evaluation methods for XAI? What key constructs need to be considered?
- Given the contextual nature of explanations, what are the potential pitfalls of standardizing evaluation metrics? How can we take the who, why, and where into account in evaluation methods?
- What explainability challenges arise when we move beyond the dominant one-to-one human-AI interaction paradigm? How might a human-centered perspective address these challenges?
- What are the important research questions to answer as we move towards human-centered explainable artificial intelligence? Why is it important to address them now?
- What might operationalizing XAI in the Global South entail? Where are the points of alignment and departure? What insights should we be aware of when considering human-centered XAI in the Global South?
Submission Guidelines
Researchers, practitioners, and policymakers in academia or industry who have an interest in these areas are invited to submit papers of up to 4 pages (not including references). Templates: [Overleaf] [Word] [PDF]
Submissions are single-blind reviewed; i.e., submissions must include the authors' names and affiliations. The workshop's organizing committee will review the submissions, and accepted papers will be presented at the workshop. We ask that at least one author of each accepted position paper attend the workshop. Presenting authors must register for the workshop and at least one full day of the conference.

Submissions must be original and relevant contributions to the workshop theme. We are looking for papers that take a well-justified position and can generate productive and lively discussions during the workshop. Examples include, but are not limited to, research summaries, literature reviews, industrial perspectives, real-world approaches, study results, and work-in-progress research projects.

Since this workshop will be held virtually, we welcome global and diverse participation. We encourage participation from under-represented perspectives and communities in XAI (e.g., lessons from the Global South, and civil liberties and human rights perspectives).
Contact
All questions about submissions should be emailed to hcxai.workshop@gmail.com.
