CS224W: Machine Learning with Graphs | 2021 | Lecture 17.3 - Cluster GCN Scaling up GNNs
Jure Leskovec, Computer Science, PhD
Neighbor Sampling, presented in the previous lecture (17.2), constructs a computational graph separately for each node in a mini-batch. This creates a lot of redundancy when computing node embeddings within the mini-batch. A different approach is to sample from the large graph a subgraph that is small enough to fit into GPU memory, and then apply the efficient, non-redundant full-batch GNN over that sampled subgraph. An example of this method is Cluster-GCN. Cluster-GCN first pre-processes the large graph by partitioning it into clusters of nodes. Then, during training, it samples a group of clusters in each mini-batch and applies the full-batch GNN over the subgraph induced by those clusters.
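As a minimal sketch of this two-step recipe (partition once, then train on cluster-induced subgraphs), the snippet below uses PyTorch Geometric's ClusterData / ClusterLoader utilities, which implement the Cluster-GCN partitioning via METIS and require a METIS-enabled install. The dataset choice (Cora), the number of partitions, the batch size, and the model dimensions are illustrative assumptions, not settings from the lecture.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import ClusterData, ClusterLoader
from torch_geometric.nn import GCNConv

# A small citation graph as a stand-in for a truly large graph.
dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

# Pre-processing: partition the graph into clusters (METIS under the hood).
cluster_data = ClusterData(data, num_parts=32)
# Mini-batching: each batch is the subgraph induced by a group of clusters.
loader = ClusterLoader(cluster_data, batch_size=4, shuffle=True)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(dataset.num_features, 64, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for batch in loader:
    optimizer.zero_grad()
    # The full-batch GNN runs only on the sampled subgraph, so each node
    # embedding in the batch is computed exactly once (no redundancy).
    out = model(batch.x, batch.edge_index)
    loss = F.cross_entropy(out[batch.train_mask], batch.y[batch.train_mask])
    loss.backward()
    optimizer.step()
```

Note the design point this illustrates: because message passing is restricted to edges inside the sampled subgraph, memory scales with the cluster size rather than with exponentially growing neighborhoods, at the cost of dropping the between-cluster edges that fall outside the batch.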