CL-2018: Continual Learning Workshop
NIPS 2018, Montreal, Canada, December 7, 2018

Conference website: https://sites.google.com/corp/view/continual2018/
Submission link: https://easychair.org/conferences/?conf=cl20180
Submission deadline: October 25, 2018
TL;DR: We invite you to our workshop on Continual Learning at this year’s NIPS. Submission deadline for 4-page abstracts is October 19th.
---------------------
Continual learning (CL) is the ability to learn continually from a stream of experiential data, building on what was learned previously, while being able to reapply, adapt, and generalize it to new situations. CL is a fundamental step towards artificial intelligence, as it allows the learning agent to continually extend its abilities and adapt them to a continuously changing environment, a hallmark of natural intelligence. It also has implications for supervised or unsupervised learning. For example, if a dataset is not randomly shuffled, or the input distribution shifts over time, a learned model might overfit to the most recently seen data, forgetting the rest -- a phenomenon referred to as catastrophic forgetting, which is a core issue CL systems aim to address.
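To make catastrophic forgetting concrete, here is a minimal, self-contained sketch (our illustration, not part of the workshop material; the model, data, and all names are made-up assumptions): a logistic-regression classifier is trained on one synthetic task and then on a second, conflicting task without revisiting the first, after which its accuracy on the first task collapses.

```python
# Illustrative sketch of catastrophic forgetting (all data synthetic).
import numpy as np

rng = np.random.default_rng(0)

def make_task(mean_pos, mean_neg, n=500):
    # Two-class Gaussian-blob task; the class means define the task.
    X = np.vstack([rng.normal(mean_pos, 1.0, (n, 2)),
                   rng.normal(mean_neg, 1.0, (n, 2))])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def train(w, b, X, y, lr=0.1, epochs=100):
    # Plain full-batch gradient descent on the logistic loss,
    # continuing from the current parameters (w, b).
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Task B reverses task A's decision boundary, so sequential training
# on B overwrites what was learned on A.
XA, yA = make_task(mean_pos=[+2.0, 0.0], mean_neg=[-2.0, 0.0])
XB, yB = make_task(mean_pos=[-2.0, 0.0], mean_neg=[+2.0, 0.0])

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
print("after task A: acc(A) =", accuracy(w, b, XA, yA))

w, b = train(w, b, XB, yB)  # no access to task A data
print("after task B: acc(A) =", accuracy(w, b, XA, yA),
      "acc(B) =", accuracy(w, b, XB, yB))
```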
Continual learning is characterized in practice by a series of desiderata; a non-exhaustive list includes:
- Online learning -- learning occurs at every moment, with no fixed tasks or data sets and no clear boundaries between tasks;
- Presence of transfer (forward/backward) -- the learning agent should be able to transfer and adapt what it learned from previous experience, data, or tasks to new situations, as well as make use of more recent experience to improve performance on capabilities learned earlier;
- Resistance to catastrophic forgetting -- new learning should not destroy performance on previously seen data;
- Bounded system size -- the agent’s learning capacity should be fixed, forcing the system to use its resources intelligently, gracefully forgetting what it has learned so as to minimize potential loss of future reward;
- No direct access to previous experience -- while the model can remember a limited amount of experience, a continual learning algorithm cannot assume direct access to all of its past experience or the ability to rewind the environment (i.e., time t=0 occurs exactly once); one common way to reconcile this with bounded system size is a small replay buffer, as sketched after this list.
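As referenced in the last desideratum above, a fixed-capacity replay buffer filled by reservoir sampling keeps a uniform random subsample of the stream seen so far while respecting a hard memory bound. The sketch below is an illustrative assumption on our part (the class name and capacity are made up), not a technique prescribed by the workshop.

```python
# Sketch: fixed-capacity replay buffer via reservoir sampling (illustrative).
import random

class ReservoirBuffer:
    def __init__(self, capacity):
        self.capacity = capacity  # hard memory bound (bounded system size)
        self.items = []
        self.seen = 0             # number of stream examples observed

    def add(self, example):
        # After t examples, each one is retained with probability capacity/t.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        # Mini-batch of stored examples, e.g. for rehearsal during training.
        return random.sample(self.items, min(k, len(self.items)))

buf = ReservoirBuffer(capacity=100)
for t in range(10_000):  # stand-in for an unbounded data stream
    buf.add(t)
print(len(buf.items), buf.sample(5))
```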
In the first (2016) edition of this workshop, the focus was on defining a complete list of desiderata for what a CL-enabled system should be able to do. The focus of the 2018 workshop, organized during the NIPS conference in Montreal, will be on:
1. how to evaluate CL methods; and
2. how CL compares with related ideas (e.g., lifelong learning, never-ending learning, transfer learning, meta-learning), and how advances in these areas could be useful for continual learning.
In particular, different desiderata of continual learning seem to be in opposition (e.g., fixed model capacity vs non-catastrophic forgetting vs the ability to generalize and adapt to new situations), which also raises the question of what a successful continual learning system should be able to do. What are the right trade-offs between these different opposing forces? How do we compare existing algorithms in the face of conflicting objectives? What metrics are most useful to report? In some cases, trade-offs will be tightly defined by the way we choose to test the algorithms. What would be the right benchmarks, datasets or tasks for productively advancing this topic?
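As one concrete, commonly reported choice, the sketch below computes the average accuracy, backward transfer, and forward transfer metrics of Lopez-Paz & Ranzato (2017) from an accuracy matrix R, where R[i, j] is test accuracy on task j after training sequentially on tasks 0..i, and b[j] is the accuracy of a randomly initialized model on task j. The numbers are made up purely for illustration; the workshop does not prescribe these metrics.

```python
# Sketch: continual-learning metrics from Lopez-Paz & Ranzato (2017).
import numpy as np

def cl_metrics(R, b):
    # R[i, j]: accuracy on task j after training on tasks 0..i.
    # b[j]:    accuracy of a randomly initialized model on task j.
    T = R.shape[0]
    acc = R[-1].mean()  # average accuracy after the final task
    # Backward transfer: how later training changed earlier-task accuracy
    # (negative values indicate forgetting).
    bwt = np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])
    # Forward transfer: how earlier training helps a not-yet-seen task.
    fwt = np.mean([R[j - 1, j] - b[j] for j in range(1, T)])
    return acc, bwt, fwt

# Toy 3-task example with made-up numbers.
R = np.array([[0.9, 0.5, 0.5],
              [0.7, 0.9, 0.6],
              [0.6, 0.8, 0.9]])
b = np.array([0.5, 0.5, 0.5])
print(cl_metrics(R, b))  # here BWT is negative: the model forgot
```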
We encourage submission of four-page abstracts describing work in progress or completed work on topics (1) and (2) above, including work beneficial to the advancement of CL from related areas, such as:
- Transfer learning
- Multi-task learning
- Meta-learning
- Lifelong learning
- Few-shot learning
Finally, we also encourage the presentation of both novel approaches to CL and implemented systems, which will help concretize the discussion of what CL is and how to evaluate CL systems.
Confirmed speakers:
- Marc’Aurelio Ranzato (Facebook AI Research)
- John Schulman (OpenAI)
- Raia Hadsell (DeepMind)
- Chelsea Finn (Berkeley & Google Brain)
- Yarin Gal (Oxford)
- Juergen Schmidhuber (IDSIA/NNAISENSE)
Dates:
- Submission deadline: Friday, October 19
- Workshop: Friday, December 7
- Location: Montreal, Canada
Submission format: 4-page extended abstracts, which may include previously published work.
More details at the website: https://sites.google.com/corp/view/continual2018/
Submissions will be managed through EasyChair here: https://easychair.org/conferences/?conf=cl20180
We look forward to seeing you in December!
Razvan Pascanu, Yee Whye Teh, Mark Ring and Marc Pickett.