Stanford CS224W: ML with Graphs | 2021 | Lecture 9.1 – How Expressive are Graph Neural Networks
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3GwTmur
Jure Leskovec
Computer Science, PhD
In this lecture, we provide a theoretical framework to analyze the expressive power of GNNs—the ability of a GNN to distinguish different graph structures. Specifically, we consider whether a GNN’s node embeddings can distinguish differences in nodes’ local neighborhood structures. To this end, we introduce the notion of the computational graphs that a GNN uses to generate node embeddings. This view of GNNs leads to a key insight: the expressive power of a GNN can be fully characterized by the expressive power of the neighbor aggregation function it uses.
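A minimal sketch of this insight (an illustration, not code from the lecture): a GNN can only distinguish two local neighborhoods if its aggregation function maps their multisets of neighbor features to different values. Mean-pooling is not injective over multisets, so it collapses some distinct neighborhoods, while sum-pooling keeps them apart.

```python
# Illustrative sketch: compare two simple aggregation functions on
# multisets of (scalar) neighbor features. Function names are ours.

def mean_agg(neighbor_feats):
    """Mean-pooling: not injective over multisets."""
    return sum(neighbor_feats) / len(neighbor_feats)

def sum_agg(neighbor_feats):
    """Sum-pooling: distinguishes multisets that differ in multiplicity."""
    return sum(neighbor_feats)

# Two different neighborhoods whose features form the multisets
# {1.0, 1.0} and {1.0}:
a, b = [1.0, 1.0], [1.0]
print(mean_agg(a) == mean_agg(b))  # True  -> mean cannot tell them apart
print(sum_agg(a) == sum_agg(b))    # False -> sum can
```

A GNN built on the mean aggregator would therefore assign these two nodes identical embeddings, even though their local structures differ; this is the sense in which the aggregation function bounds the network's expressive power.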
To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/
