Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 8.1 – Graph Augmentation for GNNs
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/2XQPDGQ
Jure Leskovec
Computer Science, PhD
In this lecture, we continue discussing the design choices involved in training and evaluating GNNs. First, we cover graph augmentation techniques for improving GNN training, highlighting two kinds of augmentation: 1) graph feature augmentation and 2) graph structure augmentation. For graph feature augmentation, we discuss methods for injecting additional node feature information into the graph. For graph structure augmentation, we discuss adding edges to improve message passing in sparse networks (e.g., via virtual nodes), dropping edges to improve efficiency in dense networks, and neighborhood sampling for reasoning over very large graphs.
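Two of the ideas above can be sketched in a few lines of plain Python. The sketch below is illustrative only (the function names and the toy adjacency-list graph are assumptions, not from the lecture): degree-based feature augmentation assigns each node its degree as a feature when no input features exist, and neighborhood sampling caps how many neighbors a node aggregates from at each message-passing layer.

```python
import random

# Toy undirected graph as an adjacency list (illustrative, not from the lecture)
adj = {
    0: [1, 2, 3, 4],
    1: [0],
    2: [0],
    3: [0],
    4: [0],
}

def augment_with_degree(adj):
    # Feature augmentation: when nodes have no input features,
    # a common fallback is to use each node's degree as its feature
    return {v: [len(nbrs)] for v, nbrs in adj.items()}

def sample_neighborhood(adj, node, k, rng=random):
    # Neighborhood sampling: aggregate over at most k randomly
    # chosen neighbors instead of the full neighborhood
    neighbors = adj[node]
    if len(neighbors) <= k:
        return list(neighbors)
    return rng.sample(neighbors, k)

feats = augment_with_degree(adj)
sampled = sample_neighborhood(adj, 0, k=2)
```

In practice both ideas are applied inside a GNN layer: the augmented features become the layer-0 node embeddings, and the sampled neighbor set replaces the full neighborhood in each aggregation step, bounding the cost per node.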
To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/
0:00 Introduction
0:31 Recap: Deep Graph Encoders
0:57 Recap: A General GNN Framework
3:58 Why Augment Graphs
5:49 Graph Augmentation Approaches
6:41 Feature Augmentation on Graphs
17:53 Add Virtual Nodes / Edges
22:35 Node Neighborhood Sampling
23:49 Neighborhood Sampling Example
