INNF 2019: Workshop on Invertible Neural Nets and Normalizing Flows
ICML 2019, Long Beach, CA, United States, June 14-15, 2019
Conference website | https://invertibleworkshop.github.io/ |
Submission link | https://easychair.org/conferences/?conf=innf2019 |
Submission deadline | May 1, 2019 |
Research on invertible neural networks has recently seen a significant resurgence of interest in the ICML community. Invertible transformations offer two key benefits (both illustrated in the sketch after this list):
- They allow exact reconstruction of inputs and hence obviate the need to store hidden activations in memory for backpropagation
- They can be designed to track the changes in the probability density of the inputs that the transformation induces (in which case they are known as normalizing flows)
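As a concrete illustration of both benefits, here is a minimal NumPy sketch of an affine coupling layer in the style of RealNVP; the toy conditioner and all names below are illustrative assumptions, not a reference implementation from any of the works discussed at the workshop:

```python
import numpy as np

def conditioner(h):
    """Toy conditioner: compute a log-scale and shift from the untouched half."""
    log_s = np.tanh(h)   # bounded log-scale for numerical stability
    t = h ** 2           # arbitrary shift; any function of h is allowed
    return log_s, t

def forward(x):
    """Map x -> y invertibly; also return log|det(dy/dx)|."""
    x1, x2 = np.split(x, 2)
    log_s, t = conditioner(x1)
    y2 = x2 * np.exp(log_s) + t   # affine transform of the second half only
    # The Jacobian is triangular, so log|det| is just the sum of log-scales.
    return np.concatenate([x1, y2]), np.sum(log_s)

def inverse(y):
    """Recover x exactly from y, recomputing the conditioner from y1 == x1."""
    y1, y2 = np.split(y, 2)
    log_s, t = conditioner(y1)
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2])

x = np.random.randn(4)
y, log_det = forward(x)
assert np.allclose(inverse(y), x)  # exact reconstruction: no stored activations
```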
Like autoregressive models, normalizing flows can be powerful generative models that allow exact likelihood computations. With the right architecture, they can also generate data much faster than autoregressive models. As such, normalizing flows have been particularly successful in density estimation and variational inference.
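For example, with a standard-normal base density p_Z, the exact log-likelihood follows from the change-of-variables formula log p_X(x) = log p_Z(z) + log|det(dz/dx)| with z = f(x). Reusing the hypothetical `forward` sketch above:

```python
# Exact log-likelihood under a standard-normal base density, via the
# change-of-variables formula: log p_X(x) = log p_Z(z) + log|det(dz/dx)|.
def log_likelihood(x):
    z, log_det = forward(x)
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2 * np.pi)
    return log_pz + log_det

print(log_likelihood(x))  # a single exact density evaluation
```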
The main goals of this workshop are:
- To provide an accessible introduction to normalizing flows for the wider community
- To create connections among researchers in the field, and to encourage new researchers to enter it
- To track and summarize recent work on invertible neural networks
- To identify existing applications and explore new ones
Submission Guidelines
We invite researchers to submit their recent work on the development, analysis, or application of invertible neural nets and normalizing flows. Submissions should take the form of an extended abstract of 4 pages in PDF format, using the ICML style. Longer submissions are allowed, but reviewers are not expected to read beyond the first 4 pages. Author names do not need to be anonymized. Submissions may include a supplement/appendix, but reviewers are not responsible for reading any supplementary material. Accepted submissions will be presented as posters, and selected submissions will also be considered for contributed talks.
Submissions that are currently under review, or that have recently been accepted for publication at another conference, are permitted.
The submission deadline is May 1, 2019, 23:59 (Anywhere on Earth).
List of Topics
- Proposing new invertible transformations to improve expressiveness and tractability.
- Introducing different training criteria for invertible functions.
- Studying the information regularization of neural networks.
- Theoretical work on the optimization and/or expressivity of invertible networks.
- Regularizing for invertibility and solving inverse problems.
- Generalizations and understanding of the change-of-variables theorem.
- Applying normalizing flows for exact or approximate inference.
- Normalizing flows with discrete distributions.
- Improving scalability of continuous normalizing flows.
- Hierarchical reinforcement learning.
- Exploration via randomized value functions in RL.
- Continuous relaxations of discrete latent variables.
- Probabilistic programming.
Committees
Program Committee
- Chin-Wei Huang
- David Krueger
- Rianne van den Berg
- George Papamakarios
- Aidan Gomez
- Chris Cremer
- Ricky Chen
- Aaron Courville
- Danilo Rezende
Invited Speakers
- Eric Jang
- Laurent Dinh
- Diederik Kingma
- Prafulla Dhariwal
- Jakub Tomczak
- Yulia Rubanova
- Matt Hoffman
- Jörn-Henrik Jacobsen
Contact
Questions? Contact us at invertibleworkshop@gmail.com.