

Workshop

New Frontiers in Graph Learning

Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka

Theater A

Background. In recent years, graph learning has quickly grown into an established sub-field of machine learning. Researchers have been developing novel model architectures, theoretical understandings, scalable algorithms and systems, and successful applications of graph learning across industry and science. In fact, more than 5000 research papers related to graph learning have been published over the past year alone.

Challenges. Despite this success, existing graph learning paradigms have not captured the full spectrum of relationships in the physical and virtual worlds. For example, in terms of applicability, current graph learning paradigms are often restricted to datasets with explicit graph representations, whereas recent works have shown the promise of graph learning methods for applications without explicit graph representations. In terms of usability, while popular graph learning libraries greatly facilitate the implementation of graph learning techniques, finding the right graph representation and model architecture for a given use case still requires substantial expert knowledge. Furthermore, in terms of generalizability, unlike domains such as computer vision and natural language processing, where large-scale pre-trained models generalize across downstream applications with little to no fine-tuning and demonstrate impressive performance, such a paradigm has yet to succeed in the graph learning domain.

Goal. The primary goal of this workshop is to expand the impact of graph learning beyond its current boundaries. We believe that graphs, or relational data, are a universal language for describing the complex world. Ultimately, we hope graph learning will become a generic tool for learning and understanding any type of (structured) data. We aim to present and discuss the new frontiers in graph learning with researchers and practitioners within and outside the graph learning community. New understandings of the current challenges, new perspectives on future directions, and new solutions and applications as proofs of concept are highly welcome.

Scope and Topics. We welcome submissions regarding the new frontiers of graph learning, including but not limited to:
- Graphs in the wild: Graph learning for datasets and applications without explicit relational structure (e.g., images, text, audio, code); see the illustrative sketch after this list. Novel ways of modeling structured/unstructured data as graphs are highly welcome.
- Graphs in ML: Graph representations in general machine learning problems (e.g., neural architectures as graphs, relations among input data and learning tasks, graphs in large language models, etc.)
- New oasis: Graph learning methods that are significantly different from the current paradigms (e.g., large-scale pre-trained models, multi-task models, highly scalable algorithms, etc.)
- New capabilities: Graph representation for knowledge discovery, optimization, causal inference, explainable ML, ML fairness, etc.
- Novel applications: Novel applications of graph learning in real-world industrial and scientific domains (e.g., graph learning for missing data imputation, program synthesis, etc.)
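
As a concrete illustration of the first topic ("graphs in the wild"), the following is a minimal Python sketch of how data without an explicit relational structure, here an image, can be cast as a graph by connecting neighboring pixels. It assumes NumPy and NetworkX are available; the 4-neighbor grid topology and the use of raw pixel intensities as node features are illustrative assumptions, not a method prescribed by the workshop.

    # Minimal sketch: casting an image (no explicit relational structure) as a graph.
    # Assumption: 4-neighbor grid topology, raw pixel intensities as node features.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    image = rng.random((8, 8))            # stand-in for an 8x8 grayscale image

    # Nodes are pixel coordinates (i, j); edges connect 4-neighboring pixels.
    graph = nx.grid_2d_graph(*image.shape)

    # Attach each pixel's intensity as a node feature "x".
    nx.set_node_attributes(
        graph, {(i, j): float(image[i, j]) for i, j in graph.nodes}, name="x"
    )

    print(graph.number_of_nodes(), graph.number_of_edges())  # 64 nodes, 112 edges

The same idea extends to text (tokens linked by co-occurrence or syntax), code (abstract syntax trees), and other modalities listed above; the choice of nodes, edges, and features is exactly the kind of modeling question this topic invites.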

Call for papers

Submission deadline: Thursday, Sept 22, 2022 (16:59 PDT)

Submission site (OpenReview): NeurIPS 2022 GLFrontiers Workshop

Author notification: Thursday, Oct 6, 2022

Camera ready deadline: Thursday, Oct 27, 2022 (16:59 PDT)

Workshop (in person): Friday, Dec 2, 2022

The workshop will be held fully in person at the New Orleans Convention Center, as part of the NeurIPS 2022 conference. We also plan to offer a livestream of the event; more details will come soon.

We welcome both short research papers of up to 4 pages and full-length research papers of up to 8 pages (in both cases excluding references and supplementary materials). All accepted papers will be presented as posters. We plan to select around 6 papers for oral presentations and 2 papers for outstanding paper awards with potential cash incentives.

All submissions must use the NeurIPS template. We do not require authors to include the checklist in the template. Submissions should be in .pdf format, and the review process is double-blind; therefore, papers should be appropriately anonymized. Previously published work (or work currently under review) is acceptable.

Should you have any questions, please reach out to us via email:
glfrontiers@googlegroups.com


Schedule