DeCoDeML2020: Deep Continuous-Discrete Machine Learning
Website: https://sites.google.com/view/decodeml-workshop-2020/home
Submission link: https://easychair.org/conferences/?conf=decodeml2020
Submission deadline: June 9, 2020
Since the beginnings of machine learning – and indeed already hinted at in Alan Turing's groundbreaking 1950 paper "Computing Machinery and Intelligence" – two opposing approaches have been pursued: on the one hand, approaches that relate learning to knowledge and mostly use "discrete" formalisms of formal logic; on the other hand, approaches that, mostly motivated by biological models, investigate learning in artificial neural networks and predominantly use "continuous" methods from numerical optimization and statistics. The recent successes of deep learning can be attributed to the latter, "continuous" approach, and are currently opening up new opportunities for computers to "perceive" the world and to act, with far-reaching consequences for industry, science, and society. The massive success in recognizing "continuous" patterns is the catalyst for a new enthusiasm for artificial intelligence methods. However, today's artificial neural networks are hardly suitable for learning and understanding "discrete" logical structures, and this is one of the major hurdles to further progress.
Accordingly, one of the biggest open problems is to clarify the connection between these two learning approaches (logical-discrete and neural-continuous). In particular, the role and benefits of prior knowledge need to be reassessed and clarified. The role of formal logic in ensuring sound reasoning must be related to perception through deep networks. Further, the question of how prior knowledge can be used to make the results of deep learning more stable, and to explain and justify them, remains to be discussed. The extraction of symbolic knowledge from networks is a topic that needs to be reexamined against the background of the successes of deep learning. Finally, it is an open question whether and how the principles responsible for the success of deep learning methods can be transferred to symbolic learning.
The 2nd Workshop on Deep Continuous-Discrete Machine Learning (DeCoDeML) is a full-day workshop. We are aiming for a genuinely interactive workshop, and we consider the workshop format the right one for this cutting-edge topic with much ongoing work. Note that the workshop focuses on basic research questions (continuous/discrete and learning/knowledge in the era of deep learning), not on their consequences or the like. The workshop will consist of:
- Oral presentations of the accepted papers. Depending on the number, they may range from 10 to 20 minutes each.
- A panel: Open problems in deep continuous-discrete machine learning and how they can be addressed. How can the scientific community organize itself to contribute?
- If possible, a longer invited talk by a well-recognized expert in the field.
Submission Guidelines
The workshop intends to attract papers on how recent deep learning methods can be connected to discrete structures and symbolic knowledge, including areas such as
- Pedagogical, decompositional, or hybrid pedagogical/decompositional knowledge/rule extraction from deep neural networks
- Prior knowledge used as constraints in deep neural networks (e.g., compiled into the architecture/topology, taking advantage of known symmetries, ...)
- Deep symbolic networks, e.g., networks of rules or trees, deep rule sets
- Deep hybrid continuous-discrete networks
- HPC or GPU implementations
- Applications to text, images, sequences, time series, ...
We invite extended abstracts of two to three pages in Springer LNCS style, covering work in progress, finished work, published work, position statements, etc. Author names and affiliations should be included (no blind reviewing). All submissions will be reviewed by at least three PC members. Accepted papers will be published on the workshop webpage. A special issue on the previous edition of the workshop, in the Machine Learning and Artificial Intelligence section of the journal Frontiers in Big Data, is currently in preparation. Further possibilities and future events will be discussed at the workshop.
Committees
Program Committee
- Henrik Boström, KTH Royal Institute of Technology, Sweden
- Ines Dutra, Universidade do Porto, Portugal
- Eibe Frank, University of Waikato, New Zealand
- Johannes Fürnkranz, TU Darmstadt, Germany
- Iryna Gurevych, TU Darmstadt, Germany
- Visvanathan Ramesh, Goethe University Frankfurt am Main, Germany
- Bertil Schmidt, Johannes Gutenberg University Mainz, Germany
- Ivan Titov, University of Edinburgh, UK
- Jochen Triesch, Goethe University Frankfurt am Main, Germany
- Ivan Vulic, University of Cambridge, UK
- Michael Wand, Johannes Gutenberg University Mainz, Germany
- Gerson Zaverucha, Federal University of Rio de Janeiro, Brazil
Organizing committee
- Kristian Kersting (Technical University Darmstadt)
- Stefan Kramer (Johannes Gutenberg University Mainz)
Venue
The 2nd Workshop on Deep Continuous-Discrete Machine Learning will take place on Monday, September 14, or Friday, September 18, 2020, in Ghent, Belgium, co-located with ECML PKDD 2020. Should the conference be held virtually due to the COVID-19 pandemic, the workshop will also be organized as a virtual meeting.
Contact
All questions about submissions should be emailed to kersting@cs.tu-darmstadt.de and/or kramer@informatik.uni-mainz.de.