DeepTest 2021: 3rd Workshop on Testing for Deep Learning and Deep Learning for Testing
Co-located with ICSE 2021, Madrid, Spain, May 25-27, 2021

- Conference website: https://conf.researchr.org/home/icse-2021/deeptest-2021
- Submission link: https://easychair.org/conferences/?conf=deeptest2021
- Submission deadline: January 19, 2021
- Notification date: February 22, 2021
- Camera-ready: March 12, 2021
Machine Learning (ML) is widely adopted in modern software systems, including safety-critical domains such as autonomous cars, medical diagnosis, and aircraft collision avoidance. It is therefore crucial to rigorously test such applications to ensure high dependability. However, standard notions of software quality and reliability do not directly apply to ML systems, due to their non-deterministic nature and the lack of a transparent understanding of the models' semantics. ML is also expected to transform software development itself: it is already being applied to devise novel program analysis and software testing techniques for malware detection, fuzz testing, bug finding, and type checking.
The workshop will bring academia and industry together in a quest for well-founded practical solutions. The aim is to gather an international group of researchers and practitioners with both ML and SE backgrounds to discuss their research, share datasets, and help the field build momentum. The workshop will consist of invited talks, presentations based on research paper submissions, and one or more panel discussions, where all participants are invited to share their insights and ideas.
Submission Guidelines
We accept two types of submissions:
- full research papers (up to 8 pages) describing original and unpublished results related to the workshop topics.
- short papers (up to 4 pages) describing preliminary work, new insights into previous work, or demonstrations of testing-related tools and prototypes.
All submissions must be in PDF and conform to the ICSE 2021 formatting instructions, i.e., the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without the compsoc or compsocconf options). The page limit is strict. DeepTest 2021 will employ a double-blind review process, so no submission may reveal its authors' identities. Authors must make every effort to honor the double-blind review process: in particular, authors' names must be omitted from the submission, and references to their own prior work should be in the third person.
If you have any questions, or wonder whether your submission is in scope, please do not hesitate to contact the organizers.
Important dates (AoE)
- Submission deadline: 12 Jan 2021
- Author notification: 22 Feb 2021
- Camera-ready version: 12 Mar 2021
List of Topics
DeepTest is an interdisciplinary workshop targeting research at the intersection of SE and ML. We welcome submissions that investigate:
- how to ensure the quality of ML-based applications, both at a model level and at a system level
- the use of ML to support software engineering tasks, particularly software testing
Relevant topics include, but are not limited to:
- Quality
  - Quality implications of ML algorithms for large-scale software systems
- Application of classical statistics to ML systems quality
- Training and payload data quality
- Correctness of data abstraction, data trust
- High-quality benchmarks for evaluating ML approaches
- Testing and Verification
- Test data synthesis for testing ML systems
- White-box and black-box testing strategies
- ML models for testing programs
  - Adversarial machine learning and adversary-based learning
- Test coverage
- Vulnerability, sensitivity, and attacks against ML
- Metamorphic testing as software quality assurance
- New abstraction techniques for verification of ML systems
- ML techniques for software verification
  - DevOps for ML
- Fault Localization, Debugging, and Repair
  - Quality metrics for ML systems, e.g., correctness, accuracy, fairness, robustness, explainability
  - Sensitivity to data distribution diversity and distribution drift
  - Failure explanation and automated debugging techniques
  - Runtime monitoring
  - Fault localization and anomaly detection
  - Model repair
- The effect of labeling costs on solution quality (semi-supervised learning)
- ML for fault prediction, localization, and repair
- ML to aid program comprehension, program transformation, and program generation
Committees
Organizing committee
- Andrea Stocco, Università della Svizzera italiana (USI)
- Gunel Jahangirova, Università della Svizzera italiana (USI)
- Vincenzo Riccio, Università della Svizzera italiana (USI)
- Onn Shehory, Bar Ilan University, Israel
- Eitan Farchi, IBM Haifa Research Lab, Israel
- Diptikalyan Saha, IBM India Research Lab, Bangalore, India
- Guy Barash, Western Digital, Israel
Steering committee
- Baishakhi Ray, Columbia University, USA
- Corina Pasareanu, NASA Ames, USA
- Sarfraz Khurshid, The University of Texas at Austin, USA
Venue
The workshop will be held online, using the Zoom platform.
Contact
All questions about submissions should be emailed to the organizers.