AAAI19-IAW: AAAI 2019 Spring Symposium: Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness
Stanford University, Stanford, CA, United States, March 25-27, 2019
Conference website: https://aaai.org/Symposia/Spring/sss19symposia.php#ss04
Submission deadline: February 4, 2019
Interpretable AI for Well-being: Understanding Cognitive Bias and Social Embeddedness
Description of the symposium
[Aims and new Challenges]
Interpretable AI is artificial intelligence (AI) whose actions can be easily understood by humans. Recently, the European Union’s General Data Protection Regulation (GDPR) has raised concerns about emerging tools for automated individual decision-making. These tools use algorithms to make decisions based on user-level profiles, with the potential to significantly affect users. Recent AI technologies (e.g., deep learning and other advanced machine learning methods) will change the world. However, excessive expectations for AI (e.g., depictions of general-purpose AI in science fiction) and threat narratives (e.g., the fear that AI will lead to unemployment) distort many people’s judgment. Understanding both the potential and the limitations of current AI technologies is therefore very important.
Especially in the human health and wellness domains, interpretable AI remains a huge challenge. For example, evidence-based medicine requires us to show the current best evidence when making decisions about the care of patients. “Why did the system make this prediction?” will be a key question. Even if a system is not accurate, it must be explainable and predictable. Although statistical machine learning predicts the future based on past data, it is difficult for it to respond to a new event that has never been seen before. Training data containing outliers, or adversarially generated inputs, may lead an AI-based system to make wrong predictions (sometimes with high confidence) in life-or-death situations such as medical diagnosis. For AI to be safely deployed, these systems must be well understood. One of the important goals of this year’s symposium is to discuss the technical and philosophical challenges of interpretability for well-being AI.
AI also poses a new risk of amplifying our cognitive biases through machine learning, as we discussed in our previous AAAI 2018 Spring Symposium on “beyond machine intelligence.” As big data becomes increasingly personalized, AI technologies for manipulating one’s cognitive bias are beginning to evolve; examples include social media platforms such as Twitter and Facebook, and commercial recommendation systems. Under the “echo chamber effect,” people with the same opinion tend to form communities, which creates the impression that everyone else shares that opinion. Recently, there have also been moves to exploit such cognitive biases in politics. We welcome discussions of cognitive bias in human or personal-robot communication.
“Social embeddedness” of AI is another important keyword for this symposium, and we welcome diverse discussions of the relationships between AI and society. Relevant topics include “AI and future economics” (e.g., basic income, the impact of AI on GDP) and “the well-being society” (e.g., citizens’ happiness, quality of life). Cognitive bias will be affected by how AI is perceived, particularly at the community (or societal) level. The social embeddedness of AI seems likely to become a significant research area as AI continues to develop.
Scope of Interests
This symposium has the following scope of interests:
- Excessive expectations for AI: understanding the possibilities and limitations of current AI technologies
- Technical and philosophical challenges of interpretability for well-being AI
- “Cognitive bias” and “social embeddedness of AI” in human/robot communication, from socio-cultural/political aspects to technical/practical issues of accuracy and efficiency in health, economics, and other fields
For example, we consider the following research questions in interpretable AI for well-being.
1. Interpretable AI/ML
- How can we develop interpretable machine learning methods for well-being AI that provide ways to manage model complexity and/or generate meaningful explanations?
- How can we use the tools of causal inference to reason about fairness in well-being AI? Can causal inference lead to actionable recommendations and interventions? How can we design and evaluate the effect of interventions?
- What are the societal implications of algorithmic exploration? How can we manage the cost that such exploration might pose to individuals?
2. Unintended consequences of algorithms in well-being AI
- Can we use adversarial conditions to learn about the inner workings of algorithms?
- Can we learn from the ways they fail on edge cases?
- Can we achieve accountability in well-being AI?
- How can we conduct reliable empirical black-box testing for ethically salient differential treatment?
- How can we manage the risks that such unintended consequences might pose to users?
Machine Learning and Other Advanced Analyses for Health & Wellness
The following topics are also within our scope of interest, but submissions are not limited to these:
1. How to quantify our cognitive bias or personal traits.
Word2vec analysis, sleep monitoring, diet monitoring, vital data, diabetes monitoring, running/sport calorie monitoring, personal genome, personal medicine, new types of self-tracking devices, portable mobile tools, health data collection, quantified-self tools, experiments, affective computing, wearables and cognition, brain fitness and training, learning enhancement strategies, sleep, dreaming, relaxation, meditation, yoga, physiology, nutrition, chemicals, electrical stimulation (tDCS, rTMS, CES, EEG, neurofeedback)
2. How to model and analyze our health and wellness.
Discovery informatics technologies: deep learning, data mining and knowledge modeling for wellness, collective intelligence/knowledge, life-log analysis (e.g., vital data analyses, Twitter-based analysis), data visualization, human computation, biomedical informatics, personal medicine.
Cognitive and biomedical modeling: brain science, brain interfaces, physiological modeling, biomedical informatics, systems biology, network analysis, mathematical modeling, disease dynamics, personal genome, gene networks, genetics and lifestyle with microbiome, health/disease risk.
3. How to design better health and well-being spaces.
Social data analyses and social relation design, mood analyses, human-computer interaction, health care communication systems, natural language dialog systems, personal behavior discovery, Kansei, zone and creativity, compassion, calming technology, Kansei engineering, gamification, assistive technologies, ambient assisted living (AAL) technology.
4. Applications, platforms and field studies
Medical recommendation systems, care support systems for the elderly, web services for personal wellness, games for health and happiness, life-log applications, disease improvement experiments (e.g., metabolic syndrome, diabetes), sleep improvement experiments, healthcare/disability support systems, community computing platforms.
5. AI and society (social embeddedness of AI)
Empirical or philosophical discussions on “AI and society” are welcome. Topics include “machine intelligence vs. human intelligence” and “how AI affects our human society or ways of thinking.” Issues related to “cognitive bias” in the recent trend of big data becoming personalized are of particular interest. However, topics are not limited to the above examples.
Format
The symposium will be organized around invited talks, presentations, posters, and interactive demos.
Submissions
Interested participants should submit either full papers (8 pages maximum) or extended abstracts (2 pages maximum). Extended abstracts should state the intended presentation type: long paper (6-8 pages), short paper (1-2 pages), demonstration, or poster presentation. The electronic version of your paper should be sent to aaai2019-iaw@cas.lab.uec.ac.jp by November 23rd, 2018.
Important Dates
The deadline has been extended.
Submission deadline: November 16th, 2018 ⇒ November 23rd, 2018
Author Notification: December 3rd, 2018
Camera-ready papers: January 23rd, 2019 (subject to change)
Registration deadline: March 1st, 2019
Symposium: March 25th-27th, 2019
Invited Speakers
- John C. Havens
Executive Director, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
- Peter Pirolli
Ph.D., Senior Research Scientist, Florida Institute for Human & Machine Cognition.
Additional speakers will be announced soon.
Organizing Committee
- Co-chairs
- Takashi Kido (Preferred Networks, Inc., Japan)
- Keiki Takadama (The University of Electro-Communications, Japan)
- Program committee
- Amy Ding (Carnegie Mellon University, U.S.A.)
- Melanie Swan (DIYgenomics, U.S.A.)
- Katarzyna Wac (Stanford University, U.S.A. and University of Geneva, Switzerland)
- Ikuko Eguchi Yairi (Sophia University, Japan)
- Fumiko Kano (Copenhagen Business School, Denmark)
- Takashi Maruyama (Stanford University, U.S.A.)
- Chirag Patel (Stanford University, U.S.A.)
- Rui Chen (Stanford University, U.S.A.)
- Ryota Kanai (University of Sussex, UK)
- Yoni Donner (Stanford University, U.S.A.)
- Yutaka Matsuo (University of Tokyo, Japan)
- Eiji Aramaki (Nara Institute of Science and Technology, Japan)
- Pamela Day (Stanford University, U.S.A.)
- Tomohiro Hoshi (Stanford University, U.S.A.)
- Miho Otake (RIKEN, Japan)
- Yotam Hineberg (Stanford University, U.S.A.)
- Yukiko Shiki (Kansai University, Japan)
- Yuichi Yoda (Ritsumeikan University, Japan)
- Advisory committee
- Atul J. Butte (University of California San Francisco, U.S.A.)
- Seiji Nishino (Stanford University, U.S.A.)
- Katsunori Shimohara (Doshisha University, Japan)
- Takashi Maeno (Keio University, Japan)
- Hiroshi Maruyama (Preferred Networks, Inc., Japan)
- Robert Reynolds (Wayne State University, U.S.A.)
Contact
Takashi Kido
Preferred Networks, Inc., Otemachi Building 2F, 1-6-1 Otemachi, Chiyoda-ku, Tokyo 100-0004, Japan