Publication
Contrastive Multi-View Representation Learning on Graphs
Abstract
We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings does not improve performance, and the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on the Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over the previous state-of-the-art. When compared to supervised baselines, our approach outperforms them on 4 out of 8 benchmarks.