Publication | AAAI Conference on Artificial Intelligence 2021

Cross-Domain Few-Shot Graph Classification

Few-shot learning is a setting in which a model learns to adapt to novel categories from only a few labeled samples. Inspired by human learning, meta-learning addresses few-shot learning by leveraging a distribution of similar tasks to accumulate transferable knowledge from prior experience, which can then serve as a strong inductive bias for fast adaptation to downstream tasks. A fundamental assumption in meta-learning is that the tasks in the meta-training and meta-testing phases are sampled from the same distribution, i.e., that tasks are i.i.d. In many real-world applications, however, collecting tasks from the same distribution is infeasible; instead, datasets are available from the same modality but different domains. In this research, we address this challenge by introducing an attention-based graph encoder that can accumulate knowledge from tasks that are not similar.
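To make the few-shot setting concrete, the sketch below shows how episodic tasks are typically constructed for meta-learning: each episode picks N classes, then K labeled support examples and a handful of query examples per class. This is an illustrative sketch of the standard N-way K-shot protocol, not code from the paper; all names (`sample_episode`, etc.) are our own.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=3, k_shot=2, q_queries=2, seed=0):
    """Sample an N-way K-shot episode from a labeled dataset.

    Picks n_way classes, then k_shot support examples and q_queries
    query examples per class. Returns two lists of (index, class) pairs.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    # Choose which classes appear in this episode.
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        # Draw disjoint support and query examples for class c.
        picked = rng.sample(by_class[c], k_shot + q_queries)
        support += [(i, c) for i in picked[:k_shot]]
        query += [(i, c) for i in picked[k_shot:]]
    return support, query
```

Meta-training repeats this sampling over many episodes so the learner sees a distribution of tasks rather than a single fixed classification problem.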




Kaveh Hassani


We study the problem of few-shot graph classification across domains with nonequivalent feature spaces by introducing three new cross-domain benchmarks constructed from publicly available datasets. We also propose an attention-based graph encoder that uses three congruent views of graphs, one contextual and two topological, to learn representations of task-specific information for fast adaptation and task-agnostic information for knowledge transfer. We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies, and show that, when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy across all benchmarks.
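The metric-based meta-learning frameworks mentioned above classify a query by comparing its embedding to class representatives built from the support set. A minimal sketch of one such step, in the style of prototypical networks (not the paper's implementation; the function name and array shapes are our own assumptions):

```python
import numpy as np

def proto_classify(support_emb, support_y, query_emb):
    """One metric-based few-shot classification step.

    Averages the support embeddings of each class into a prototype,
    then assigns each query to the class of its nearest prototype
    under squared Euclidean distance.
    """
    classes = np.unique(support_y)
    # One prototype per class: mean of that class's support embeddings.
    protos = np.stack([support_emb[support_y == c].mean(axis=0) for c in classes])
    # Pairwise squared distances: (num_queries, num_classes).
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

In the cross-domain setting, `support_emb` and `query_emb` would be produced by the graph encoder, so adaptation to a new task reduces to computing prototypes, with no gradient updates required at meta-test time.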

