FARAI2020: Fair & Responsible AI Workshop
Website: http://fair-ai.owlstown.com
Submission link: https://easychair.org/conferences/?conf=workshopforacmsigchi
Abstract registration deadline: February 11, 2020
Submission deadline: February 11, 2020
As AI changes the way decisions are made in organizations and governments, it is ever more important to ensure that these systems work according to the values that diverse users and groups find important. Researchers have proposed numerous algorithmic techniques to formalize statistical fairness notions, but emerging work suggests that AI systems must account for the real-world contexts in which they will be embedded in order to actually work fairly. These findings call for an expanded research focus beyond statistical fairness that includes fundamental understandings of human uses and the social impacts of AI systems, a theme central to the HCI community.
This one-day workshop aims to bring together a diverse group of researchers and practitioners to develop a cross-disciplinary agenda for creating fair and responsible AI systems. We invite academic and industry researchers and practitioners in the fields of HCI, machine learning (ML) and AI, and the social sciences to participate. By bringing together an interdisciplinary team, we aim to achieve the following outcomes:
1) Synthesis of emerging research discoveries and methods. An emerging line of work seeks to systematically study human perceptions of algorithmic fairness, explain algorithmic decisions to promote trust and a sense of fairness, understand human use of algorithmic decisions, and develop methods to incorporate these insights into AI design. How can we map the current research landscape to identify gaps and opportunities for fruitful future research?
2) Design guidelines for fair and responsible AI. Existing AI fairness toolkits aim to support algorithm developers, and existing human-AI interaction guidelines focus mainly on usability and experience. Can we create design guidelines that help HCI and user experience (UX) practitioners and educators design fair and responsible AI?
Submission Guidelines
To participate, submit a 2-4 page position paper in CHI extended abstract format via EasyChair. We welcome diverse forms of submission, including reports of empirical research findings on fair and responsible AI, essays that offer critical stances and/or visions for future work, and show-and-tell case studies of industry projects.
Potential topics include:
- Human biases in human-in-the-loop decisions
- Human perceptions of algorithmic fairness
- Human-centered evaluation of fair ML models
- Explanations & transparency of algorithmic decisions
- Methods for stakeholder participation in AI design
- Decision-support system design
- Algorithm auditing techniques
- Ethics of AI
- Sociocultural studies of AI in practice
Position papers will be reviewed by two organizers and evaluated on their quality, novelty, and fit with the workshop theme. At least one author of each accepted paper must attend the workshop, and all participants must register for both the workshop and at least one day of the conference.
Important dates:
- Position paper deadline: February 11, 2020
- Notification: February 28, 2020
- Workshop at CHI2020: April 26, 2020
Organizing Committee
- Min Kyung Lee, University of Texas at Austin, United States
- Nina Grgic-Hlaca, Max Planck Institute, Germany
- Michael Carl Tschantz, International Computer Science Institute, United States
- Reuben Binns, University of Oxford, United Kingdom
- Adrian Weller, University of Cambridge, United Kingdom
- Michelle Carney, Google, United States
- Kori Inkpen, Microsoft Research, United States
Contact
All questions about submissions should be emailed to minkyung.lee@austin.utexas.edu