AVFakes19: Synthetic Realities 2019: ICML Workshop on Detecting Audio-Visual Fakes | Long Beach, CA, United States, June 15, 2019 |
Conference website | https://sites.google.com/view/audiovisualfakes-icml2019/ |
Submission link | https://easychair.org/conferences/?conf=avfakes19 |
Abstract registration deadline | May 6, 2019 |
Submission deadline | May 6, 2019 |
With the latest advances in deep generative models, synthesis of images, videos, and human voices has achieved impressive realism. In many domains, synthetic media are already difficult for the human eye and ear to distinguish from real media. The potential for misuse of these technologies is seldom discussed in academic papers; instead, vocal concerns are being raised by media and security organizations as well as governments. Researchers are beginning to experiment with new ways to integrate deep learning with traditional media forensics and security techniques as part of a technological solution.
This workshop will bring together experts from the machine learning, computer security, and digital forensics communities to highlight recent work and discuss future efforts to address these challenges.
Our agenda will alternate contributed papers with invited talks. The invited talks will emphasize connections among the interested scientific communities and the perspectives of institutions and media organizations.
Submission Guidelines
There are two tracks for submission:
- Research Track: Submissions to this track must introduce new ideas or results. Submissions should follow the ICML format and not exceed 4 pages, excluding references.
- Resubmission Track: Papers already published at other venues, with no format constraints.
The submission deadline is May 6, 2019 (AoE). Reviews will be single-blind: reviewers will remain anonymous, but authors' identities will be visible to reviewers.
List of Topics
Topics include, but are not limited to:
- Detection of synthesized and altered images, video and audio
- Explanation and interpretation of detection methods
- Deep synthetic biometric attacks and their detection
- Adversarial training and adversarial examples against detection systems, and defenses
- Audiovisual forensics
- Deep learning for counter-forensics
- Steganography, watermarking and digital signatures with deep neural networks
- Deep learning template protection against synthetic data
- Measuring the effectiveness of synthetic data on automated systems or humans
- Fake news detection from multi-modal sources and prior world knowledge
Programme Committee
- Dr. Battista Biggio, University of Cagliari (Italy)
- Dr. Pavel Korshunov, IDIAP (Switzerland)
- Dr. Thomas Mensink, UvA (the Netherlands)
- Dr. Giorgio Patrini, Deeptrace (the Netherlands)
- Delip Rao, AI Foundation (US)
- Arka Sadhu, USC (US)
- Prof. Shih-Fu Chang, Columbia University (US)
- Prof. Zeno Geradts, Netherlands Forensic Institute (the Netherlands)
- Prof. Marcel Worring, UvA (the Netherlands)
Contact
All questions about submissions should be emailed to g.patrini@deeptracelabs.com and asadhu@usc.edu.