FATECV-2019: Fairness Accountability Transparency and Ethics (FATE) in Computer Vision |
Website | https://sites.google.com/google.com/fatecv |
Submission link | https://easychair.org/conferences/?conf=fatecv2019 |
Submission deadline | April 19, 2019 |
Computer vision has ceased to be a purely academic endeavor. From law enforcement to border control, employment, healthcare diagnostics, and the assignment of trust scores, computer vision systems have started to be used in all aspects of society. The past year has also seen a rise in public discourse regarding the use of computer-vision-based technology by companies such as Google, Microsoft, Amazon, and IBM. In research, some works purport to determine a person’s sexuality from their social network profile images, while others claim to classify “violent individuals” from drone footage. These works were published in high-impact journals, and some were presented at workshops in top-tier computer vision conferences such as CVPR.
On the other hand, seminal work published last year showed that commercial gender classification systems have high disparities in error rates by skin type and gender; other studies exposed the gender bias contained in current image captioning systems; and further work both exposed biases in the widely used CelebA dataset and proposed adversarial-learning-based methods to mitigate their effects. Policy makers and other legislators have cited some of these seminal works in their calls to investigate the unregulated use of computer vision systems.
We believe the vision community is well positioned to foster serious conversations about the ethical considerations of some current uses of computer vision technology. We are therefore holding a workshop on the Fairness, Accountability, Transparency, and Ethics (FATE) of modern computer vision, providing a space to analyze controversial research papers that have garnered significant attention. Our workshop also seeks to highlight research on uncovering and mitigating the unfair biases and historical discrimination that trained machine learning models learn to mimic and propagate. We welcome submissions from broadly defined areas and will feature speakers discussing the ethical considerations underlying some of the most contentious recent research papers.
Submission Guidelines
We invite submissions of 4-page abstracts in CVPR format on current or previously published work addressing the topics outlined below. (For previously published works, re-formatting is not necessary.) We encourage interdisciplinary work, position papers, surveys, and other discussions addressing issues that we should consider while conducting and publishing computer vision research.
We will make the accepted submissions available on our website as non-archival reports (there will be no proceedings). The accepted works will be presented at the poster session and some will be selected for oral presentation.
List of Topics
Fairness
- Techniques and models for fairness-aware visual data mining, information retrieval, recommendation, etc.
- Formalizations of fairness, bias, discrimination; trade-offs and relationships between them.
- Defining, measuring, and mitigating problematic biases in datasets and models; improving data collection processes to be more fair, diverse, and inclusive.
- Translation of legal, social, and philosophical models of fairness into mathematical objectives.
- Qualitative, quantitative, and experimental studies on perceptions of algorithmic bias and unfairness.
- Measurement and data collection regarding potential unfairness in systems.
- Understanding how tools from causal inference can help us to better reason about fairness and the interplay between prediction and intervention.
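Several of the topics above concern measuring unfairness in practice, such as the error-rate disparities by skin type and gender found in commercial gender classifiers. As a minimal sketch of what such a measurement can look like, the following computes per-group error rates and their largest gap; the data, function names, and the choice of error rate (rather than, say, false-positive rate) as the metric are illustrative assumptions, not part of the call.

```python
# Illustrative sketch: per-group error rates and their maximum disparity.
# All names and data here are hypothetical, for exposition only.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return each group's error rate (fraction of misclassified samples)."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        errors[g] += int(t != p)
    return {g: errors[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest gap between any two groups' error rates."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: labels, predictions, and a group attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = error_rates_by_group(y_true, y_pred, groups)
print(rates)                 # per-group error rates
print(max_disparity(rates))  # gap between best- and worst-served groups
```

In a real audit, the same per-group comparison would typically be applied to several metrics (false-positive rate, false-negative rate, calibration), since different fairness definitions disagree on which disparity matters.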
Accountability
- Processes and strategies for developing accountable computer vision systems.
- Methods and tools for ensuring that algorithms comply with fairness policies.
- Metrics for measuring unfairness and bias in different contexts.
- Techniques for guaranteeing accountability without necessitating transparency.
- Techniques for ethical A/B testing.
Transparency
- Interpretability of computer vision models.
- Generation of explanations for models and algorithmic outputs.
- Design strategies for communicating the logic behind algorithmic systems.
- Trade-offs between privacy and transparency in computer vision systems.
- Qualitative, quantitative, and experimental studies on the effectiveness of algorithm transparency techniques in promoting goals of fairness and accountability.
Ethics
- Tools and methodologies for conducting audits of computer vision models.
- Empirical results from algorithm audits.
- Frameworks for conducting ethical and legal algorithm audits.
- Analysis of ethical dilemmas presented by recent computer vision works and applications.
- Exclusion and inclusion (e.g., exclusion of certain groups or beliefs, how/when to include stakeholders and representatives for the user population to be served).
- Overgeneralization, undergeneralization, and the cost of different errors (e.g., making false classifications on tasks including facial analysis technologies).
- Exposure (e.g., underrepresentation/overrepresentation of population groups).
- Dual use (e.g., the positive and negative aspects of computer vision applications, the close relationship between government and industry interests and computer vision research).
- Privacy protection (e.g., anonymization of biomedical images, best practices for researchers in industry to ensure the privacy of their users’ data, educating the public about how much industry and government may know about them, privacy protection for data annotated with intrinsic features such as emotion).
Committees
Organizing committee
- Timnit Gebru, Research Scientist, Google AI, USA
- Daniel Lim, Associate Professor of Philosophy, Duke Kunshan University, China
- Yabebal Fantaye, Junior Research Chair, African Institute for Mathematical Sciences, South Africa
- Margaret Mitchell, Research Scientist and Lead of Ethical AI Team, Google AI, USA
- Anna Rohrbach, Postdoctoral Scholar, UC Berkeley, USA
Venue
Long Beach Convention Center
Long Beach, California, USA
Contact
All questions about submissions should be emailed to tgebru - at - gmail - dot - com.