Overview
GLB 2023 is the third edition of the Workshop on Graph Learning Benchmarks, encouraged by the success of the previous editions. Inspired by the conference tracks in the computer vision and natural language processing communities dedicated to establishing new benchmark datasets and tasks, we call for contributions that establish novel ML tasks on novel graph-structured data and have the potential to (i) increase the diversity of graph learning benchmarks, (ii) identify new demands of graph machine learning in general, and (iii) provide a better understanding of how concrete techniques perform on these benchmarks. We also welcome contributions on data-centric graph learning, such as novel approaches to collect, annotate, clean, augment, and synthesize graph-structured data.
GLB 2023 will be a non-archival workshop; we are excited to host this edition in person in conjunction with KDD 2023.
Call for Papers
We encourage paper submissions relevant to (but not limited to) the following topics:
- Real-World Datasets: Novel real-world graph-structured datasets—especially large-scale, application-oriented, and publicly accessible datasets.
- Synthetic Datasets: Synthetic graph-structured datasets that are well-supported by graph theory, network science, or empirical studies, and can be used to reveal limitations of existing graph learning methods.
- Software Packages: Software packages that enable streamlined benchmarking of large-scale online graphs, crawling or crowdsourcing of graph data, and generation of realistic synthetic graphs.
- Data Collection: Novel approaches to collect and annotate graph-structured data. Crowdsourcing and sampling methods on large networks.
- Data Processing: Novel approaches to clean and impute noisy/missing graph-structured data. Data augmentation approaches for self-supervision.
- Tasks: New learning tasks and applications on different types of graphs, at different levels (e.g., node, edge, subgraph, graph), with a special focus on real-world science-, health- or industry-oriented problems.
- Metrics: New evaluation procedures and metrics of graph learning associated with the various tasks and datasets.
- Benchmarks: Works benchmarking multiple existing GNNs on non-trivial tasks and datasets. We explicitly encourage works that reveal limitations of existing models or identify better matches between network designs and problems.
- Task Taxonomy: Discussions towards a more comprehensive and fine-grained taxonomy of graph learning tasks.
The contributed papers will be evaluated based on the meaningfulness of proposed tasks or datasets, their potential to become new benchmarks for graph learning, and their contributions to understanding the pros and cons of state-of-the-art graph learning techniques.
Important Dates
All deadlines are in Anywhere on Earth (AoE) time zone.
- Submission deadline: Jun. 8, 2023 (extended from May 30, 2023)
- Acceptance notification: Jun. 23, 2023 (previously Jun. 13, 2023)
- Camera-ready version due: Jul. 5, 2023 (previously Jun. 27, 2023)
- Workshop: Aug. 7, 2023
Submission
Abstracts and papers can be submitted through CMT:
https://cmt3.research.microsoft.com/GLB2023
Format
- For unpublished submissions, please submit a paper no longer than 4 pages (excluding references and appendices) using the ACM “sigconf” LaTeX template (see the instructions from KDD 2023).
The recommended document-class setting for the LaTeX manuscript is
\documentclass[sigconf, review]{acmart}
If your submission includes appendices, they should be included in the same file as the main manuscript; a minimal manuscript skeleton is sketched after this list.
- The review process is single-blind for the ease of data and code sharing: the reviewers are anonymized, but the authors do not need to anonymize their submission.
- This workshop is non-archival. Relevant findings that have recently been published elsewhere are also welcome; already published papers can be submitted in their original format and will be lightly reviewed for their relevance to this workshop.
- Authors are strongly encouraged to include the corresponding datasets and code as supplementary materials in their submission. For large datasets or repositories, the authors can provide an external link through GitHub, Google Drive, Dropbox, OneDrive, or Box. We limit the choice of storage platforms for security considerations. Please email the organizers if none of the listed platforms works for you. We also encourage authors to contribute new datasets and tasks to our benchmark curation platform, Graph Learning Indexer (GLI).
- If the data cannot be made publicly available, an extra section is required to illustrate how the results of the established benchmark may generalize to other graph data.
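For reference, here is a minimal sketch of a manuscript skeleton using the recommended setting. The title, author fields, section names, and bibliography file name are placeholders, and the standard ACM acmart class is assumed to be installed.

% Minimal sketch of a submission skeleton; all names below are placeholders.
\documentclass[sigconf, review]{acmart}

\begin{document}

\title{Your Submission Title}

\author{First Author}
\affiliation{%
  \institution{Your Institution}
  \city{Your City}
  \country{Your Country}}
\email{first.author@example.org}

\begin{abstract}
A short abstract describing the proposed dataset, task, or benchmark.
\end{abstract}

\maketitle

\section{Introduction}
% Main text: at most 4 pages, excluding references and appendices.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\appendix
\section{Appendix}
% Appendices go in the same file as the main manuscript.

\end{document}

The review option in the document class enables line numbering for reviewers; it should be removed for the camera-ready version per the ACM template's usual conventions.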
Organizers
Please contact us through this email address if you have any questions.
A list of organizers can also be found here.