Cross-Domain Few-Shot Graph Classification

Few-shot learning is a setting in which a model learns to adapt to novel categories from only a few labeled samples. Inspired by human learning, meta-learning addresses few-shot learning by leveraging a distribution of similar tasks to accumulate transferable knowledge from prior experience, which can then serve as a strong inductive bias for fast adaptation to downstream tasks. A fundamental assumption in meta-learning is that tasks in the meta-training and meta-testing phases are sampled from the same distribution, i.e., that tasks are i.i.d. In many real-world applications, however, collecting tasks from the same distribution is infeasible; instead, datasets are available from the same modality but different domains. In this research, we address this gap by introducing an attention-based graph encoder that can accumulate knowledge from tasks that are not similar.
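
As a concrete illustration of the episodic setup described above, the sketch below samples N-way K-shot tasks from pre-computed graph embeddings and classifies query graphs by their distance to class prototypes, in the spirit of metric-based meta-learning. The function names and the random embeddings are illustrative placeholders, not the implementation used in the paper.

```python
import torch

def sample_task(features, labels, n_way=3, k_shot=5, n_query=10):
    """Sample one N-way K-shot episode from a pool of embedded graphs."""
    classes = torch.randperm(int(labels.max()) + 1)[:n_way]
    support, query, support_y, query_y = [], [], [], []
    for i, c in enumerate(classes):
        idx = torch.nonzero(labels == c, as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx))][: k_shot + n_query]
        support.append(features[idx[:k_shot]])
        query.append(features[idx[k_shot:]])
        support_y += [i] * k_shot
        query_y += [i] * (len(idx) - k_shot)
    return (torch.cat(support), torch.tensor(support_y),
            torch.cat(query), torch.tensor(query_y))

def prototype_accuracy(support, support_y, query, query_y):
    """Classify queries by Euclidean distance to per-class prototypes."""
    n_way = int(support_y.max()) + 1
    protos = torch.stack([support[support_y == c].mean(0) for c in range(n_way)])
    preds = torch.cdist(query, protos).argmin(dim=1)
    return (preds == query_y).float().mean().item()

# Toy usage: random embeddings stand in for the output of a graph encoder.
feats, labs = torch.randn(600, 64), torch.randint(0, 6, (600,))
s, sy, q, qy = sample_task(feats, labs)
print(prototype_accuracy(s, sy, q, qy))
```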

Abstract

Kaveh Hassani

AAAI Conference on Artificial Intelligence 2021

We study the problem of few-shot graph classification across domains with nonequivalent feature spaces by introducing three new cross-domain benchmarks constructed from publicly available datasets. We also propose an attention-based graph encoder that uses three congruent views of graphs, one contextual and two topological, to learn representations of task-specific information for fast adaptation and task-agnostic information for knowledge transfer. We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies. We show that when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy across all benchmarks.
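
The abstract describes an attention-based encoder that combines one contextual and two topological views of each graph. Below is a minimal sketch of attention-weighted fusion over per-view graph embeddings, assuming the three view embeddings have already been produced by separate encoders; the module name and shapes are hypothetical and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    """Fuse per-view graph embeddings with a learned attention weighting.

    Hypothetical sketch: each graph is represented by three embeddings
    (one contextual view and two topological views), and a softmax over
    per-view scores decides how much each view contributes.
    """

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each view embedding

    def forward(self, view_embs):
        # view_embs: (batch, n_views, dim)
        weights = torch.softmax(self.score(view_embs), dim=1)  # (batch, n_views, 1)
        return (weights * view_embs).sum(dim=1)                # (batch, dim)

# Toy usage: 8 graphs, 3 views each, 64-dimensional embeddings.
fusion = ViewAttentionFusion(dim=64)
fused = fusion(torch.randn(8, 3, 64))
print(fused.shape)  # torch.Size([8, 64])
```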

Associated Researchers

Kaveh Hassani

Autodesk Research
