Rethinking Graph Transformers with Spectral Attention | Researchers explain Graph ML Paper
Join the Learning on Graphs and Geometry Reading Group:
Paper:
Abstract:
In recent years, the Transformer architecture has proven to be very successful in sequence processing, but its application to other data structures, such as graphs, has remained limited due to the difficulty of properly defining positions. Here, we present the Spectral Attention Network (SAN), which uses a learned positional encoding (LPE) that can take advantage of the full Laplacian spectrum to learn the position of each node in a given graph. This LPE is then added to the node features of the graph and passed to a fully-connected Transformer. By leveraging the full spectrum of the Laplacian, our model is theoretically powerful in distinguishing graphs, and can better detect similar sub-structures from their resonance. Further, by fully connecting the graph, the Transformer does not suffer from over-squashing, an information bottleneck of most GNNs.
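The abstract's core mechanism is using the graph Laplacian's spectrum to give each node a position before feeding it to a fully-connected Transformer. Below is a minimal sketch of that idea, with assumptions: it uses the symmetric normalized Laplacian and simply concatenates the k lowest-frequency eigenvectors with node features, whereas the paper's actual LPE learns over the full spectrum with its own Transformer encoder. The function name `laplacian_positional_encoding` and the toy graph are illustrative, not from the paper's code.

```python
# Minimal sketch of a Laplacian-eigenvector positional encoding, in the
# spirit of SAN's LPE. Illustrative reconstruction only: SAN additionally
# processes the eigenpairs with a learned encoder before combining them
# with node features.
import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k lowest-frequency eigenvectors of the symmetric
    normalized graph Laplacian as per-node positional features."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return eigvecs[:, :k]                    # shape: (n_nodes, k)

# Usage: augment toy node features of a 4-cycle with a 2-dim encoding,
# then the result would be passed to a fully-connected Transformer.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.ones((4, 3))                          # toy node features
pe = laplacian_positional_encoding(A, k=2)
X_with_pe = np.concatenate([X, pe], axis=1)
print(X_with_pe.shape)                       # (4, 5)
```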