Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 12.2 – Neural Subgraph Matching
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/318qjgF
Jure Leskovec
Computer Science, PhD
We previously covered the definition of subgraphs and motifs, as well as traditional methods for identifying and characterizing them. In this lecture, we introduce a neural architecture that models subgraph relations. The key idea is to use neural networks to exploit the geometry of the embedding space to capture subgraph properties: we embed subgraphs into an order embedding space, whose partial ordering naturally encodes the subgraph isomorphism relationship (a query is a subgraph of a target exactly when the query's embedding is less than or equal to the target's in every coordinate). We conclude with the loss function and training procedure for neural subgraph matching.
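As a rough illustration of the order constraint and loss discussed in the lecture, here is a minimal PyTorch-style sketch (function names and the margin value alpha are illustrative assumptions, not taken from the lecture materials):

import torch

def order_violation(z_q, z_t):
    # E(G_q, G_t) = || max(0, z_q - z_t) ||^2
    # Zero exactly when z_q <= z_t in every coordinate,
    # i.e. when the order constraint (subgraph relation) holds.
    return torch.clamp(z_q - z_t, min=0).pow(2).sum(dim=-1)

def max_margin_loss(z_q, z_t, is_subgraph, alpha=1.0):
    # Positive pairs (G_q is a subgraph of G_t): drive E toward 0.
    # Negative pairs: push E above the margin alpha.
    e = order_violation(z_q, z_t)
    return torch.where(is_subgraph, e, torch.clamp(alpha - e, min=0)).mean()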
To follow along with the course schedule and syllabus, visit:
http://web.stanford.edu/class/cs224w/
0:00 Introduction
1:38 Isomorphism as an ML Task
2:12 Task Setup
2:59 Overview of the Approach
4:29 Neural Architecture for Subgraphs
6:56 Why Anchor?
8:14 Decomposing G_T into Neighborhoods
11:42 Subgraph Order Embedding Space
13:01 Why Order Embedding Space?
15:31 Order Constraint
17:22 Loss Function: Order Constraint
19:57 Training Neural Subgraph Matching
21:00 Training Example Construction
21:32 Training Details
22:19 Subgraph Predictions on New Graphs
23:11 Summary: Neural Subgraph Matching
