IT4DL: The AAAI-22 Workshop on Information Theory for Deep Learning
Vancouver, Canada, February 28 - March 1, 2022

Conference website: https://www.it4dl.org/
Submission link: https://easychair.org/conferences/?conf=it4dl
Submission deadline: November 26, 2021
Recent years have seen rapid development of advanced techniques at the intersection of information theory and machine learning, such as neural network-based mutual information estimators, deep generative models and causal representation learning, domain adaptation and generalization, and deep reinforcement learning. We believe information-theoretic approaches can provide new perspectives, theories, and methods for the central challenges of deep learning: generalization, robustness, and explainability.
This workshop at AAAI-22 aims to bring together academic researchers and industrial practitioners to share their visions of the intersection between information theory and deep learning.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Full papers: original work up to 8 pages in length (including references).
- Extended abstracts: summaries of recently published journal/conference papers, or preliminary results, up to 2 pages in length.
List of Topics
- Estimation of information theoretic quantities from data
- Information theoretic learning principles and their implementations for the generalization and robustness of deep neural networks
- Interpretation and explanation of deep neural networks with information-theoretic methods
- Information theoretic methods for domain adaptation, out-of-domain generalization, and related problems (such as robust transfer learning and lifelong learning)
- Information theoretic methods for learning from limited labelled data, such as few-shot learning, zero-shot learning, self-supervised learning, and unsupervised learning
- Information theoretic methods in generative models and causal representation learning
- Information theoretic methods for distributed deep learning
- Information theoretic methods for (deep) reinforcement learning
- Information theoretic methods for uncertainty quantification
- Information theoretic methods for multi-view, multi-task and general AI models
Committees
Program Committee
- TBA
Organizing Committee
- Jose C. Principe
- Robert Jenssen
- Badong Chen
- Shujian Yu
Venue
The workshop will be held in Vancouver, Canada.
Contact
All questions about submissions should be emailed to Shujian Yu (yusj9011@gmail.com).