We especially (but not exclusively) call for submissions which will contribute to at least one of the following:
- New Graph Datasets: Novel graph-structured datasets—especially large-scale, application-oriented, and publicly accessible datasets. We also welcome methods and software packages that enable streamlined benchmarking of large-scale graph data, crawling or crowdsourcing for labeled graph data, and generation of realistic synthetic graphs.
- New ML Tasks: New ML tasks and applications on different types of graphs, at different levels (e.g., node, edge, subgraph, or graph), with a special focus on problems of real-world and industrial value.
- New Metrics: New evaluation procedures and metrics of graph learning associated with the various tasks and datasets.
- Benchmarking Studies: Studies that benchmark multiple graph ML methods (especially graph neural networks) on non-trivial tasks and datasets. We explicitly encourage work that reveals limitations of existing models, better matches model designs to problems, or reports other novel findings about the behavior of existing models across tasks and datasets.
Acceptance of contributed papers is decided based on the meaningfulness of the established graph learning tasks/datasets and their potential to be formalized into new benchmarks, rather than on the performance of ML models (old or new) on these tasks. We particularly welcome contributions reporting negative results of popular, state-of-the-art models on a new task/dataset, as these provide novel insights that deepen the community’s understanding of graph ML.
- Submission deadline:
~~Feb 15~~ Feb 22, 2021 (Anywhere on Earth)
- Acceptance notification:
~~Mar 8~~ Mar 15, 2021
- Camera-ready version due:
~~Mar 22~~ Mar 29, 2021
Abstracts and papers can be submitted through CMT:
- A paper no longer than 4 pages (excluding references and the appendix) using the ACM “sigconf” LaTeX template (see the instructions from the Web Conference 2021).
- This workshop is non-archival. Relevant findings that have been recently published are also welcome.
- The submission is single-blind for the ease of data/code sharing: reviewers are anonymous, but authors do not need to anonymize their submissions.
- Authors are strongly encouraged to include the corresponding datasets and code as supplementary materials with their submission. For large datasets or repositories, authors may instead provide an external link through GitHub, Google Drive, Dropbox, OneDrive, or Box. We limit the choice of storage platforms for security reasons. Please email the organizers if none of the listed platforms works for you.
- If the data cannot be made publicly available, an extra section is required explaining how the results of the established benchmark may generalize to other graph data.