PAC-Bayes 2017: (Almost) 50 Shades of Bayesian Learning: PAC-Bayesian trends and insights
Long Beach Convention Center, Long Beach, CA, United States - December 9, 2017
Conference website: https://bguedj.github.io/nips2017/50shadesbayesian.html
Submission link: https://easychair.org/conferences/?conf=pacbayes2017
Submission deadline: October 27, 2017
Industry-wide successes of machine learning at the dawn of the (so-called) big data era have led to an increasing gap between practitioners and theoreticians. The former use off-the-shelf statistical and machine learning methods, while the latter design and study the mathematical properties of such algorithms. This gap is partly bridged by Bayesian researchers, for whom sound mathematical guarantees often meet efficient implementation and provide model selection criteria. In the late 90s, a new paradigm emerged in the statistical learning community, used to derive probably approximately correct (PAC) bounds on Bayesian-flavored estimators. This PAC-Bayesian theory was pioneered by Shawe-Taylor and Williamson (1997) and McAllester (1998, 1999). It was extensively formalized by Catoni (2004, 2007) and has triggered, slowly but surely, increasing research efforts over the last decades.
We believe it is time to pinpoint the current PAC-Bayesian trends relative to other modern approaches in the (statistical) machine learning community. Indeed, we observe that, while the field has grown on its own, it has taken an undesirable distance from some related areas. Firstly, it seems to us that the relation to Bayesian methods has been forsaken in numerous works, despite the potential of PAC-Bayesian theory to bring new insights to the Bayesian community and to go beyond the classical Bayesian/frequentist divide. Secondly, PAC-Bayesian methods share similarities with other quasi-Bayesian (or pseudo-Bayesian) methods that study Bayesian practices from a frequentist standpoint, such as the Minimum Description Length (MDL) principle (Grünwald, 2007). Last but not least, even though some practical and theoretically grounded learning algorithms have emerged from PAC-Bayesian works, these remain almost unused for real-world problems.
In short, this workshop aims to gather statisticians and machine learning researchers to discuss current trends and the future of {PAC,quasi}-Bayesian learning. From a broader perspective, we aim to bridge the gap between several communities that can all benefit from sharper statistical guarantees and sound theory-driven learning algorithms.
Submission Guidelines
- Submission website: https://easychair.org/conferences/?conf=pacbayes2017
- Page limit: 4 pages (without references)
- Please use the NIPS 2017 submission format
- Please make submissions double-blind
- We encourage original submissions. Please clearly indicate whether the submitted work has been presented or published elsewhere.
All accepted papers will have a poster presentation, and we will select two papers for oral presentations.
Please note that at least one author of each accepted paper must be available to present the paper at the workshop.
List of Topics
- {PAC,quasi}-Bayesian generalization guarantees
- Novel theoretical perspectives on Bayesian methods
- Application of the PAC-Bayesian theory to different learning frameworks
- Learning algorithms inspired by a {PAC,quasi}-Bayesian analysis
Committees
Program Committee
TBA
Organizing committee
- Benjamin Guedj, Inria, France
- Pascal Germain, Inria, France
- Francis Bach, Inria, France
Invited Speakers
- Olivier Catoni, CNRS, France
- Peter Grünwald, CWI, The Netherlands
- François Laviolette, Université Laval, Canada
- Neil Lawrence, Amazon, UK
- Jean-Michel Marin, Université de Montpellier, France
- Dan Roy, University of Toronto, Canada
- Yevgeny Seldin, University of Copenhagen, Denmark
- John Shawe-Taylor, University College London, UK
Venue
The workshop will be part of the NIPS 2017 conference (Long Beach Convention Center, California - December 4-9, 2017).
Contact
All questions about submissions should be emailed to benjamin.guedj@inria.fr and pascal.germain@inria.fr