Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 17.3 – Cluster GCN: Scaling up GNNs
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3mrcimE
Jure Leskovec
Computer Science, PhD
Neighbor Sampling, presented in the previous lecture (17.2), constructs a computational graph separately for each node in a mini-batch, which introduces a lot of redundant computation of node embeddings within the mini-batch. A different approach is to sample a subgraph of the large graph that is small enough to fit in GPU memory, and then apply the efficient, non-redundant full-batch GNN over the sampled subgraph. Cluster-GCN is an example of this approach: it first pre-processes the large graph by partitioning it into clusters of nodes; then, during training, it samples a cluster of nodes for each mini-batch and applies the full-batch GNN over the induced subgraph.
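The mini-batch loop described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the random partitioner stands in for a real graph-partitioning algorithm (Cluster-GCN uses METIS), and `gcn_layer` is a simplified mean-aggregation GCN layer with self-loops.

```python
import numpy as np

def partition_nodes(num_nodes, num_clusters, seed=0):
    # Hypothetical stand-in for a real graph partitioner (e.g. METIS):
    # randomly assigns nodes to clusters, for illustration only.
    rng = np.random.default_rng(seed)
    return rng.integers(0, num_clusters, size=num_nodes)

def induced_subgraph(adj, node_ids):
    # Keep only edges whose endpoints both lie in the sampled cluster.
    return adj[np.ix_(node_ids, node_ids)]

def gcn_layer(adj_sub, feats_sub, weight):
    # Mean-aggregation GCN layer with self-loops: H' = ReLU(D^{-1}(A+I) H W).
    a_hat = adj_sub + np.eye(adj_sub.shape[0])
    deg = a_hat.sum(axis=1, keepdims=True)
    return np.maximum(a_hat @ feats_sub @ weight / deg, 0.0)

# Toy graph: 6 nodes connected in a ring.
n, d_in, d_out, k = 6, 4, 3, 2
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
feats = np.random.default_rng(1).normal(size=(n, d_in))
weight = np.random.default_rng(2).normal(size=(d_in, d_out))

assignment = partition_nodes(n, k)          # pre-processing: partition once
for c in range(k):
    nodes = np.flatnonzero(assignment == c)  # one mini-batch = one cluster
    h = gcn_layer(induced_subgraph(adj, nodes), feats[nodes], weight)
    # h would feed into a loss; gradients touch only this small subgraph.
```

Because each mini-batch only ever materializes the induced subgraph of one cluster, memory cost scales with cluster size rather than with the full graph, which is what makes the full-batch GNN applicable here.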
To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/
#machinelearning #machinelearningcourse
