Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 17.4 – Scaling up by Simplifying GNNs
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3w4ZKoc
Jure Leskovec
Computer Science, PhD
An alternative approach to scaling up GNNs is to simplify the model itself by removing the nonlinearities between layers. With the nonlinearities removed, a GCN reduces to a simple feature pre-processing step, which is extremely efficient and scales well to large graphs. Although this simplification could limit the model's expressive power, the simplified GCN often performs surprisingly well on practical node classification benchmarks. The reason is that these benchmark graphs often exhibit homophily (nodes with the same label tend to be connected), and the simplified GCN's neighborhood-averaged features naturally produce such predictions.
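The collapse of a K-layer GCN into pre-processing can be sketched as follows. This is a minimal illustration with NumPy on a hypothetical toy graph (the adjacency matrix, feature matrix, and K=2 are made-up assumptions, not from the lecture): with no nonlinearity, the K propagation steps reduce to multiplying the features by the normalized adjacency matrix K times, once, before any training.

```python
import numpy as np

# Hypothetical toy graph: 4 nodes with adjacency matrix A and 3-dim features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)

# Add self-loops and symmetrically normalize:
# A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix.
A_tilde = A + np.eye(A.shape[0])
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# With the nonlinearities removed, a K-layer GCN collapses into a single
# pre-processing step: X_pre = A_hat^K @ X. No learning happens here.
K = 2
X_pre = X.copy()
for _ in range(K):
    X_pre = A_hat @ X_pre
```

After this one-time pre-processing, `X_pre` can be fed to any ordinary linear classifier (e.g. logistic regression) trained only on the labeled nodes; the graph is no longer needed during training, which is what makes the approach scale.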
To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/
#machinelearning #machinelearningcourse
