Publication | AAAI Conference on Artificial Intelligence 2021
Cross-Domain Few-Shot Graph Classification
Few-shot learning is a setting in which a model learns to adapt to novel categories from a few labeled samples. Inspired by human learning, meta-learning addresses few-shot learning by leveraging a distribution of similar tasks to accumulate transferable knowledge from prior experience, which can then serve as a strong inductive bias for fast adaptation to downstream tasks. A fundamental assumption in meta-learning is that tasks in the meta-training and meta-testing phases are sampled from the same distribution, i.e., that tasks are i.i.d. In many real-world applications, however, collecting tasks from the same distribution is infeasible; instead, datasets are available from the same modality but different domains. In this research, we address this setting by introducing an attention-based graph encoder that can accumulate knowledge from tasks that are not similar.
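To make the episodic setup concrete, below is a minimal Python sketch of N-way K-shot task sampling. The helper `sample_episode` and its arguments are illustrative assumptions, not code from the paper. In standard meta-learning, meta-train and meta-test episodes would be drawn from the same distribution; in the cross-domain setting studied here, they come from different graph domains.

```python
# Minimal sketch of N-way K-shot episode sampling (hypothetical helper).
import random
from collections import defaultdict

def sample_episode(labeled_graphs, n_way=3, k_shot=5, n_query=10):
    """labeled_graphs: list of (graph, label) pairs from a single domain."""
    by_class = defaultdict(list)
    for g, y in labeled_graphs:
        by_class[y].append(g)
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        graphs = random.sample(by_class[c], k_shot + n_query)
        support += [(g, episode_label) for g in graphs[:k_shot]]
        query += [(g, episode_label) for g in graphs[k_shot:]]
    return support, query

# Standard (i.i.d.) meta-learning samples meta-train and meta-test episodes
# from the same dataset; the cross-domain setting instead meta-trains on
# source-domain episodes and meta-tests on an unseen target domain:
# meta_train_episode = sample_episode(source_domain_graphs)
# meta_test_episode  = sample_episode(target_domain_graphs)
```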
Abstract
Kaveh Hassani
We study the problem of few-shot graph classification across domains with non-equivalent feature spaces by introducing three new cross-domain benchmarks constructed from publicly available datasets. We also propose an attention-based graph encoder that uses three congruent views of graphs, one contextual and two topological views, to learn representations of task-specific information for fast adaptation, and task-agnostic information for knowledge transfer. We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies. We show that when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy across all benchmarks.
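As a rough illustration of the high-level idea, the PyTorch sketch below encodes several congruent views of a graph, fuses the per-view embeddings with attention, and classifies query graphs against class prototypes in the style of metric-based meta-learning (prototypical networks). The encoder modules, dimensions, and fusion details are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch: attention-based fusion of multi-view graph embeddings plus
# ProtoNet-style metric classification. Illustrative only.
import torch
import torch.nn as nn

class MultiViewGraphEncoder(nn.Module):
    def __init__(self, view_encoders, dim):
        super().__init__()
        # One graph encoder per view (e.g., one contextual, two topological).
        self.view_encoders = nn.ModuleList(view_encoders)
        self.attn = nn.Linear(dim, 1)  # scores each view embedding

    def forward(self, graph_views):
        # graph_views: one input per view; each encoder returns (batch, dim).
        embs = torch.stack(
            [enc(v) for enc, v in zip(self.view_encoders, graph_views)], dim=1
        )                                                 # (batch, n_views, dim)
        weights = torch.softmax(self.attn(embs), dim=1)   # (batch, n_views, 1)
        return (weights * embs).sum(dim=1)                # attention-weighted fusion

def proto_logits(support_emb, support_y, query_emb, n_way):
    # Class prototypes = mean support embedding per class; queries are scored
    # by negative squared Euclidean distance to each prototype.
    protos = torch.stack(
        [support_emb[support_y == c].mean(0) for c in range(n_way)]
    )                                                     # (n_way, dim)
    return -torch.cdist(query_emb, protos) ** 2           # (n_query, n_way)
```

In a metric-based framework, these logits are trained with a cross-entropy loss over query labels in each episode, so adaptation to a new task reduces to computing prototypes from the support set.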