DeMaL 2021: Data-Efficient Machine Learning | Virtual, CA, United States, August 14-18, 2021 |
Conference website | https://demalworkshop.github.io/kdd2021/index.html |
Submission link | https://easychair.org/conferences/?conf=demal20210 |
Submission deadline | June 4, 2021 |
Description and Objectives
Training, retraining, and deploying large-scale machine learning models requires large amounts of high-quality data. Often, this is achieved via a time-consuming, labor-intensive human annotation process. While large-scale applications offer an abundance of unlabeled, often extremely noisy data, there is a severe lack of high-quality labeled data from which practitioners can train ML models that perform well on customer-facing applications. To this end, it is imperative that ML scientists and engineers devise innovative ways to deal with the constrained setting of small amounts of labeled data, and make the best use of the limited (time and monetary) budget available to obtain annotated data. Thus, one needs to train data-efficient machine learning models. This has led to the proliferation of creative techniques such as data augmentation, transfer learning, self-supervised learning, active learning, and multi-task learning, to name a few. While many of these techniques have been shown to work well under specific settings, web data offers additional challenges: it is multi-modal in nature, it carries implicit signals from user interactions, and it often involves multiple agents.
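To make one of the techniques above concrete, the following is a minimal, illustrative sketch of data augmentation for text, one way to expand a small labeled set without new annotation. All names here (the `augment` function, the token-dropout scheme, the example sentence) are hypothetical and chosen for illustration, not drawn from any particular submission or system.

```python
import random

def augment(tokens, p_drop=0.1, n_copies=4, seed=0):
    """Produce noisy copies of a token sequence by randomly dropping
    tokens. Each copy shares the label of the original example, so a
    small annotated set yields several training instances."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    copies = []
    for _ in range(n_copies):
        kept = [t for t in tokens if rng.random() > p_drop]
        # Never emit an empty example; fall back to the original.
        copies.append(kept or list(tokens))
    return copies

# A single labeled example expands into several perturbed variants.
labeled = ["the", "product", "arrived", "damaged"]
for variant in augment(labeled):
    print(variant)
```

In practice, practitioners combine such perturbations with task-appropriate transformations (synonym substitution for text, crops and flips for images), but the principle is the same: cheap, label-preserving variation in place of costly additional annotation.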
Given the uniqueness, importance, and growing interest in these problems, the workshop on Data-efficient Machine Learning (DeMaL) is a venue to present ideas and solutions to them. The full-day workshop aims to bring together practitioners in both academia and industry working on the collection, annotation, and usage of labeled data for large-scale data mining applications.
Topics of Interest
We identify a broad set of techniques that can be used to learn from limited data. The topics of interest include, but are not limited to:
- Semi-supervised and Self-supervised Learning
- Active Methods: active learning, bandit techniques
- Learning from Similar Tasks: transfer learning, multi-task learning, meta-learning, domain adaptation
- Crowdsourcing: human annotation methods, design of experiments
- Synthetic data: data augmentation, adversarial data generation
Given the data mining focus of this conference, we will also consider, but not limit ourselves to, the following application domains:
- Recommendation models: recommender systems, collaborative filtering, knowledge graphs
- E-commerce: fraud and abuse mitigation, misinformation, advertising
- Social media: misbehavior, sentiment analysis, cyberbullying
- Information retrieval: web search, ranking
- Time Series Analysis
Submission Guidelines
Authors are invited to submit papers of 2-8 pages in length. Papers should be submitted electronically in PDF format, using the ACM SIG Proceedings format, with a font size no smaller than 9pt. Submit papers through EasyChair. Reviewing will be single-blind, and each submission will be peer-reviewed by at least three members of the PC. Papers will be evaluated according to their significance, originality, technical content, style, clarity, and relevance to the workshop. All accepted papers will be presented at the workshop. We encourage both academic and industry submissions, including, but not limited to, the following types:
- Novel research papers in full or short length
- Work-in-progress papers
- Position papers
- Survey papers
- Comparison papers of existing methods and tools
- Case studies
- Demo papers
- Extended abstracts
Important Dates
- Paper Submission Deadline: June 4, 2021
- Acceptance notification: June 25, 2021
- Camera-ready due: June 30, 2021
- Publication of workshop proceedings: July 2, 2021
- Date of workshop: August 14-18, 2021
Organizing Committee
- Sumeet Katariya, Amazon
- Nikhil Rao, Amazon
- Chandan Reddy, Virginia Tech