PDFL20: Workshop on Parallel, Distributed, and Federated Learning

| Website | http://pdfl.iais.fraunhofer.de |
| Submission link | https://easychair.org/conferences/?conf=pdfl20 |
| Abstract registration deadline | June 23, 2020 |
| Submission deadline | June 23, 2020 |
| Acceptance notification | July 9, 2020 |
| Camera-ready deadline | July 26, 2020 |
| Workshop | TBA |
Overview
Many of today's parallel machine learning algorithms were developed for tightly coupled systems such as computing clusters or clouds. However, the data volumes generated by machine-to-machine or human-to-machine interaction, for example on mobile phones or in autonomous vehicles, surpass the amount that can conveniently be centralized, rendering traditional cloud computing approaches infeasible. In order to scale parallel machine learning to these amounts of data, computation needs to be pushed towards the edge, that is, towards the data-generating devices. By learning models directly on the data sources - which often have computing power of their own, for example, mobile phones, smart sensors, and tablets - network communication is reduced by orders of magnitude. Moreover, this facilitates obtaining a global model without centralizing privacy-sensitive data.

This novel form of parallel, distributed, and federated machine learning has gained substantial interest in recent years, both from researchers and practitioners, and may allow for disruptive changes in areas such as smart assistants, machine learning on medical or industrial data, and autonomous driving.

This workshop is the third edition of the successful DMLE workshops at ECMLPKDD 2018 and 2019. We decided to rename the workshop to broaden its scope and integrate novel directions in the field, for example, privacy-preserving federated learning in the medical domain. The workshop aims to foster discussion, discovery, and dissemination of novel ideas and approaches for parallel, distributed, and federated machine learning.
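As an illustration of the setting described above (not part of the call itself), the following sketch shows the core aggregation step of federated averaging: each device trains on its local data and only the resulting model parameters, weighted by local dataset size, are combined into a global model. The function name, the toy parameter vectors, and the sample counts are hypothetical.

```python
import numpy as np

def federated_average(local_models, sample_counts):
    """Combine locally trained parameter vectors into a global model
    via a sample-count-weighted average (the FedAvg aggregation rule).
    Only model parameters leave the devices, not the raw data."""
    total = sum(sample_counts)
    return sum((n / total) * m for n, m in zip(sample_counts, local_models))

# Three hypothetical devices with local parameter vectors and dataset sizes.
local_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sample_counts = [10, 30, 60]

global_model = federated_average(local_models, sample_counts)
print(global_model)  # weighted average of the three local models: [4. 5.]
```

Devices with more data contribute proportionally more to the global model; the server never sees the underlying training examples.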
We invite participation in the 3rd Workshop on Parallel, Distributed, and Federated Learning, to be held as part of the ECMLPKDD 2020 conference. This year, we invite two types of submissions:
- full-length papers (16 pages)
- short papers (8 pages)
The authors of all accepted papers are invited to present their work as a poster. In addition, the authors of 4-6 papers will be invited to give a talk during the workshop. A Best Paper Award, consisting of a certificate and a prize, will be presented.
Topics
Topics of interest include, but are not limited to:
- Federated learning
- Parallel machine learning
- On-device machine learning
- Edge computing for machine learning
- Decentralized deep learning
- In-situ methods
- Communication-efficient learning
- Privacy-preserving learning
- Black-box machine learning
- Distributed optimization
- Theoretical investigations on parallelization
- Large-scale machine learning, massive data sets
- Distributed data mining
- Fairness in federated learning
- Distributed training of generative models
- Resource-constrained machine learning
- Hardware aspects of distributed learning
Submission Guidelines
Authors should submit a PDF version in Springer LNCS style via the workshop's EasyChair site (https://easychair.org/conferences/?conf=pdfl20). The review process is single-blind. Accepted papers will be published in the Springer LNCS ECMLPKDD workshop proceedings. Full papers are limited to 16 pages and short papers to 8 pages, including the bibliography. By submitting a paper, the authors commit that, if the paper is accepted, at least one author will present it at the workshop. Papers not presented at the workshop will not be included in the proceedings.
Organizers
- Linara Adilova, Fraunhofer IAIS, Germany
- Michael Kamp, Monash University, Australia
- Yamuna Krishnamurthy, Royal Holloway University of London, United Kingdom
Invited Speaker
Peter Schlicht studied mathematics with a minor in computer science in Göttingen and received his doctorate in mathematics from the University of Leipzig. After a two-year research stay at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, he joined Volkswagen Group Research in 2016 as an AI architect, where he works on research questions concerning artificial intelligence technologies for automated driving. He is particularly interested in distributed machine learning for autonomous driving, as well as in methods for monitoring, explaining, robustifying, and securing deep neural networks. Peter Schlicht is a member of the Plattform Lernende Systeme working group "Mobilität und intelligente Verkehrssysteme" (mobility and intelligent transport systems) and project manager for AI technologies for autonomous driving.
Program Committee
- Katharina Morik, TU Dortmund
- Stefan Wrobel, Fraunhofer IAIS
- Tamas Horvath, University of Bonn
- Mario Boley, Monash University
- Janis Keuper, Fraunhofer ITWM
- Mark Jelasity, University of Szeged
- Henning Petzka, Lund University
- Michael Mock, Fraunhofer IAIS
- Daniel Paurat, Telekom
- Dino Oglic, King's College, London
- Tim Wirtz, Fraunhofer IAIS
- Pascal Welke, University of Bonn
- Xiaoxiao Li, Yale University
- Rafet Sifa, Fraunhofer IAIS
- Nico Piatkowski, Fraunhofer IAIS
- Jochen Garcke, Fraunhofer SCAI
- Christian Bauckhage, Fraunhofer IAIS
- Sven Giesselbach, Fraunhofer IAIS
- Dorina Weichert, Fraunhofer IAIS