DLG'22: The Seventh International Workshop on Deep Learning on Graphs: Methods and Applications | Virtual, Canada, February 28, 2022 |
Conference website | https://deep-learning-graphs.bitbucket.io/dlg-aaai22/ |
Submission link | https://easychair.org/conferences/?conf=dlg22 |
Abstract registration deadline | November 12, 2021 |
Submission deadline | November 12, 2021 |
The Seventh International Workshop on Deep Learning on Graphs: Methods and Applications (DLG-AAAI’22)
Vancouver, BC, Canada, Feb 22-March 1, 2022 (in conjunction with AAAI 2022, https://aaai.org/Conferences/AAAI-22/)
Scope
Deep learning models are at the core of Artificial Intelligence research today. It is well known that deep learning techniques that were disruptive for Euclidean data such as images, or sequence data such as text, are not immediately applicable to graph-structured data. This gap has driven a tide of research on deep learning for graphs across tasks such as graph representation learning, graph generation, and graph classification. New neural network architectures for graph-structured data have achieved remarkable performance on these tasks when applied to domains such as social networks, bioinformatics, and medical informatics.
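As a minimal illustration of the node-level representation learning mentioned above, a single GCN-style message-passing layer can be sketched in a few lines of NumPy. This is a hedged sketch, not any particular system described in this call; the toy graph, features, and weights below are hypothetical values chosen only to show the mechanics:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN-style message-passing layer: each node aggregates its
    neighbours' features (with self-loops, symmetrically normalised),
    applies a linear map, then a ReLU nonlinearity."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    deg = a_hat.sum(axis=1)                       # node degrees of A + I
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # D^{-1/2}
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalisation
    return np.maximum(norm_adj @ features @ weights, 0.0)  # ReLU

# Hypothetical toy graph: 3 nodes on a path 0-1-2
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
features = np.eye(3, 2)       # toy 2-dimensional input features
weights = np.ones((2, 2))     # fixed (untrained) weight matrix
embeddings = gcn_layer(adj, features, weights)
print(embeddings.shape)       # (3, 2): one 2-d embedding per node
```

Stacking several such layers, with learned weights, is the basic recipe behind many of the node-level and graph-level embedding methods solicited below.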
This wave of research at the intersection of graph theory and deep learning has also influenced other fields of science, including computer vision, natural language processing, inductive logic programming, program synthesis and analysis, automated planning, reinforcement learning, and financial security. Despite these successes, graph neural networks (GNNs) still face many challenges, namely:
- Modeling highly structured data with time-evolving, multi-relational, and multi-modal nature. Such challenges are profound in applications in social attributed networks, natural language processing, inductive logic programming, and program synthesis and analysis. Joint modeling of text or image content with underlying network structure is a critical topic for these domains.
- Modeling complex data that involves mapping between graph-based inputs and other highly structured output data such as sequences, trees, and relational data with missing values. Natural Language Generation tasks such as SQL-to-Text and Text-to-AMR are emblematic of such challenges.
This workshop aims to bring together academic researchers and industrial practitioners from different backgrounds and perspectives to address the above challenges. The workshop will consist of contributed talks, posters, invited talks, and a panel discussion covering a wide variety of GNN methods and NLP applications. Work-in-progress papers, demos, and visionary papers are also welcome. This workshop intends to share visions of investigating new approaches and methods at the intersection of graph neural networks and real-world applications.
Topics of Interest (including but not limited to)
We invite submission of papers describing innovative research and applications around the following topics. Papers that introduce new theoretical concepts or methods, help to develop a better understanding of new emerging concepts through extensive experiments, or demonstrate a novel application of these methods to a domain are encouraged.
- Graph neural networks on node-level, graph-level embedding
- Graph neural networks on graph matching
- Dynamic/incremental graph-embedding
- Learning representation on heterogeneous networks, knowledge graphs
- Deep generative models for graph generation/semantic-preserving transformation
- Graph2seq, graph2tree, and graph2graph models
- Deep reinforcement learning on graphs
- Adversarial machine learning on graphs
With particular focus on, but not limited to, the following application domains:
- Learning and reasoning (machine reasoning, inductive logic, theory proving)
- Computer vision (object relation, graph-based 3D representations like mesh)
- Natural language processing (information extraction, semantic parsing (AMR, SQL), text generation, machine comprehension)
- Bioinformatics (drug discovery, protein generation)
- Program synthesis and analysis
- Reinforcement learning (multi-agent learning, compositional imitation learning)
- Financial security (anti-money laundering)
Submission Guidelines
Submissions are limited to a total of 5 pages, including all content and references, must be in PDF format, and must be formatted according to the new Standard ACM Conference Proceedings Template. Following the AAAI conference submission policy, reviews are double-blind, and author names and affiliations should NOT be listed. Submitted papers will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well executed, and repeatable. Authors are strongly encouraged to make data and code publicly available whenever possible. The accepted papers will be posted on the workshop website and will not appear in the AAAI proceedings.
Important dates:
Submission deadline: Nov 12, 2021
Author notification: Dec 3, 2021
Camera-ready deadline: Jan 3, 2022
Workshop: Feb 28, 2022
Submission link: https://easychair.org/cfp/DLG22
Workshop Chairs
Lingfei Wu (JD.Com Silicon Valley Research Center)
Jian Pei (Simon Fraser University)
Jiliang Tang (Michigan State University)
Yinglong Xia (Facebook AI)
Proceeding Chair
Xiaojie Guo (JD.Com Silicon Valley Research Center)
Publicity Chair
Yuanqi Du (George Mason University)
Committees
Organizing committee
- Jian Pei, Simon Fraser University, Canada
- Charu Aggarwal, IBM Research AI, USA
- Philip S. Yu, University of Illinois at Chicago, USA
- Xuemin Lin, University of New South Wales, Australia
- Jiebo Luo, University of Rochester, USA
- Lingfei Wu, JD.Com Silicon Valley Research Center, USA
- Yinglong Xia, Facebook AI, USA
- Jiliang Tang, Michigan State University, USA
- Peng Cui, Tsinghua University, China
- William L. Hamilton, McGill University, Canada
- Thomas Kipf, University of Amsterdam, Netherlands
Workshop committee
- Ibrahim Abdelaziz (IBM Research AI)
- Sutanay Choudhury (Pacific Northwest National Lab)
- Lingyang Chu (Simon Fraser University)
- Tyler Derr (Michigan State University)
- Stephan Günnemann (Technical University of Munich)
- Balaji Ganesan (IBM Research AI)
- William L. Hamilton (McGill University)
- Tengfei Ma (IBM Research AI)
- Tian Gao (IBM Research AI)
- Thomas Kipf (University of Amsterdam)
- Renjie Liao (University of Toronto)
- Yujia Li (DeepMind)
- Liana Ling (IBM Research AI)
- Yizhou Sun (University of California, Los Angeles)
- Hanghang Tong (Arizona State University)
- Richard Tong (Squirrel AI Learning)
- Jian Tang (Mila)
- Lingfei Wu (JD.Com Silicon Valley Research Center)
- Qing Wang (IBM Research AI)
- Yinglong Xia (Facebook AI)
- Liang Zhao (Emory University)
- Dawei Zhou (Arizona State University)
- Zhan Zheng (Washington University in St. Louis)
- Feng Chen (University at Albany - State University of New York)
- Yuanqi Du (George Mason University)
- Shen Kai (Zhejiang University)
Contact
All questions about submissions should be emailed to lwu@email.wm.edu