Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 7.1 – A General Perspective on GNNs
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3BjIqNd
Lecture 7.1 – A General Perspective on Graph Neural Networks
Jure Leskovec
Computer Science, PhD
In this lecture, we introduce a general perspective on graph neural networks. The key idea is that a complete GNN is specified by a point in the GNN design space, which consists of the GNN layer, layer connectivity, graph augmentation, and the learning objective. Popular GNN variants such as GCN, GraphSAGE, GAT, and GIN are special cases within this design space. When deploying GNNs to real-world applications, finding the right design in the GNN design space is crucial. A small illustrative sketch of these design dimensions follows the links below.
Paper – Design Space for Graph Neural Networks: https://arxiv.org/abs/2011.08843
Code implementation (GraphGym): https://github.com/snap-stanford/GraphGym
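To make the design-space dimensions concrete, here is a minimal, self-contained PyTorch sketch of the idea: a GNN layer built from a message transformation, a neighbor aggregation, and batch norm plus activation, with stacked layers connected through optional skip connections. It uses a dense adjacency matrix for simplicity, and the class and argument names (e.g., GeneralGNNLayer, GeneralGNN) are illustrative assumptions, not the GraphGym API.

```python
# Illustrative sketch of the GNN design space (not the GraphGym API):
# layer = message transform + neighbor aggregation + BN/activation;
# layer connectivity = optional skip connections between stacked layers.
import torch
import torch.nn as nn

class GeneralGNNLayer(nn.Module):
    def __init__(self, dim_in, dim_out, aggr="mean", use_bn=True):
        super().__init__()
        self.lin = nn.Linear(dim_in, dim_out)   # message transformation
        self.bn = nn.BatchNorm1d(dim_out) if use_bn else nn.Identity()
        self.act = nn.ReLU()
        self.aggr = aggr                        # aggregation choice: "sum" or "mean"

    def forward(self, x, adj):
        # x: [num_nodes, dim_in]; adj: dense [num_nodes, num_nodes] adjacency
        msg = self.lin(x)
        agg = adj @ msg                         # sum of neighbor messages
        if self.aggr == "mean":
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            agg = agg / deg                     # normalize by node degree
        return self.act(self.bn(agg))

class GeneralGNN(nn.Module):
    def __init__(self, dim_in, dim_hidden, num_layers, skip=True):
        super().__init__()
        dims = [dim_in] + [dim_hidden] * num_layers
        self.layers = nn.ModuleList(
            GeneralGNNLayer(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.skip = skip                        # layer connectivity: skip connections

    def forward(self, x, adj):
        for layer in self.layers:
            h = layer(x, adj)
            # apply a residual/skip connection whenever shapes allow it
            x = h + x if (self.skip and h.shape == x.shape) else h
        return x

# Toy usage: 4 nodes with 8-dim features on a small ring graph.
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float)
x = torch.randn(4, 8)
model = GeneralGNN(dim_in=8, dim_hidden=16, num_layers=3)
print(model(x, adj).shape)  # torch.Size([4, 16])
```

Swapping the aggregation, the skip-connection setting, or the per-layer modules corresponds to moving to a different point in the design space; graph augmentation and the learning objective would sit outside this snippet, in the data pipeline and the training loss.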
To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/
