WeASeL 2019: Optimizing Human Learning (Workshop eliciting Adaptive Sequences for Learning)
University of the West Indies, Kingston, Jamaica, June 4, 2019

Conference website: https://humanlearn.io/
Submission link: https://easychair.org/conferences/?conf=weasel2019
Abstract registration deadline: April 16, 2019
Submission deadline: April 16, 2019
What should we learn next? In an era where digital access to knowledge is cheap and learner attention is expensive, many online applications have been developed for learning. These platforms collect massive amounts of data across diverse user profiles, which can be used to improve the learning experience: intelligent tutoring systems can infer which activities worked for different types of students in the past, and apply this knowledge to instruct new students. To learn effectively and efficiently, the experience should be adaptive: the sequence of activities should be tailored to the abilities and needs of each learner, keeping them stimulated while avoiding boredom, confusion and dropout. Framed as reinforcement learning, the goal is to learn a policy for administering exercises or resources to individual students.
Educational research communities have proposed models that predict mistakes and dropout in order to detect students who need further instruction. Such models are usually calibrated on data collected offline and may not generalize well to new students. There is now a need to design online systems that continuously learn as data flows in and self-assess their strategies when interacting with new learners. Similar models have already been deployed in commercial online applications (e.g. streaming, advertising, social networks) to optimize interaction, click-through rate, or profit. Can we use similar methods to enhance the performance of teaching and promote lifelong success? When optimizing human learning, which metrics should be optimized? Learner progress? Learner retention? User addiction? The diversity or coverage of the proposed activities? What are the issues inherent in adapting the learning process in online settings, in terms of privacy, fairness (disparate impact, inadvertent discrimination), and robustness to adversaries trying to game the system?
Student modeling for optimizing human learning is a rich and complex task that draws on methods from machine learning, cognitive science, educational data mining and psychometrics. This workshop welcomes researchers and practitioners working on the following topics (the list is not exhaustive):
- abstract representations of learning
- additive/conjunctive factor models
- adversarial learning
- causal models
- cognitive diagnostic models
- deep generative models such as deep knowledge tracing
- item response theory
- models of learning and forgetting (spaced repetition)
- multi-armed bandits
- multi-task learning
- reinforcement learning
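As a toy illustration of the bandit framing listed above, the sketch below shows an epsilon-greedy policy choosing among exercise types to maximize observed student success. All names, the reward model, and the parameter values are hypothetical choices for illustration, not a method prescribed by the workshop.

```python
import random

def epsilon_greedy_tutor(success_probs, n_steps=2000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy policy over exercise types.

    success_probs[i] is the (unknown to the policy) probability that a
    student answers exercise type i correctly. The policy mostly exploits
    the exercise with the highest observed success rate, and explores a
    random exercise with probability epsilon. Returns how often each
    exercise type was administered.
    """
    rng = random.Random(seed)
    n = len(success_probs)
    counts = [0] * n      # times each exercise type was given
    successes = [0] * n   # correct answers observed per type
    for _ in range(n_steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore a random exercise
        else:
            # Unvisited arms get an optimistic rate of 1.0 so each
            # exercise type is tried at least once early on.
            rates = [successes[i] / counts[i] if counts[i] else 1.0
                     for i in range(n)]
            arm = max(range(n), key=rates.__getitem__)  # exploit
        counts[arm] += 1
        if rng.random() < success_probs[arm]:
            successes[arm] += 1
    return counts

# The policy should concentrate on the most effective exercise type
# (index 2 here, with an 80% success probability).
counts = epsilon_greedy_tutor([0.3, 0.5, 0.8])
```

In a real tutoring system the "reward" would be a richer signal than binary correctness (e.g. learning gains estimated by a student model), which is precisely one of the metric-selection questions raised above.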
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Short papers between 2 and 3 pages
- Full papers between 4 and 6 pages
List of Topics
- How to put the student in optimal conditions to learn? e.g. incentives, companion agents, etc.
- When optimizing human learning, which metrics should be optimized?
- The progress of the learner?
- The diversity or coverage of the proposed activities?
- Fast recovery of what the student does not know?
- Can a learning platform be solely based on addiction, maximizing interaction?
- What kinds of activities give enough choice and control to the learner to benefit their learning (adaptability vs. adaptivity)?
- Do the strategies differ when we are teaching to a group of students? Do we want to enhance social interaction between learners?
- What feedback should be shown to the learner to enable reflective learning? e.g. visualization, learning map, score, etc. (Should a system provide fake feedback to further encourage the student?)
- What student parameters are relevant? e.g. personality traits, mood, context (is the learner in class or at home?), etc.
- What explicit and implicit feedback does the learner provide during the interaction?
- What models of learning are relevant? E.g. cognitive models, modeling forgetting in spaced repetition.
- What specific challenges from the ML point of view are we facing with these data?
- Do we have enough datasets? What kinds of datasets are missing?
- How to guarantee fairness/trustworthiness of AI systems that learn from interaction with students? This is especially critical for systems that learn online.
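To make the "modeling forgetting in spaced repetition" topic above concrete, here is a minimal sketch of an exponential forgetting curve in which successful reviews increase memory strength. The functional form, parameter names, and the review boost factor are all illustrative assumptions, not a model endorsed by the workshop.

```python
import math

def recall_probability(elapsed_days, strength):
    """Exponential forgetting curve: p = exp(-t / s).

    strength s plays the role of a memory stability parameter (in days);
    the larger s is, the more slowly recall probability decays.
    This functional form is an illustrative assumption.
    """
    return math.exp(-elapsed_days / strength)

def review(strength, boost=2.0):
    """Each successful review multiplies memory strength, so optimal
    review intervals grow over time: the core idea of spaced repetition.
    The boost factor 2.0 is an arbitrary illustrative value.
    """
    return strength * boost

s = 1.0
p_before = recall_probability(1, s)        # recall one day after first study
s = review(s)                              # strength grows after a review
p_after = recall_probability(1, s)         # higher recall at the same delay
```

A spaced-repetition scheduler built on such a model would pick the next review time so that predicted recall stays above a target threshold, which connects directly to the adaptive-sequencing questions this workshop raises.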
Committees
Program Committee
- François Bouchet (LIP6/Sorbonne Université, Paris)
- Benoît Choffin (Didask & CentraleSupélec/LRI)
- Fabrice Popineau (CentraleSupélec/LRI)
- Julien Seznec (lelivrescolaire.fr & Inria Lille)
- Michal Valko (SequeL, Inria Lille)
- Jill-Jênn Vie (RIKEN AIP, Japan)
Organizing committee
- Fabrice Popineau, CentraleSupélec & LRI, France
- Michal Valko, Inria Lille, France
- Jill-Jênn Vie, RIKEN AIP, Japan
Venue
The conference will be held in Kingston, Jamaica on June 3 or 4, 2019.
Contact
All questions about submissions should be emailed to jill-jenn.vie@riken.jp.