Publication
MaskTune: Mitigating Spurious Correlations by Forcing to Explore
Activation visualizations of ERM (middle) and MaskTune (right) for Waterbirds samples, showing how MaskTune forces the model to explore new features. After applying MaskTune, the task-relevant input signals (bird features) are emphasized.
A fundamental challenge for over-parameterized deep learning models is learning meaningful data representations that yield good performance on a downstream task without over-fitting to spurious input features. This work proposes MaskTune, a masking strategy that prevents over-reliance on spurious (or a limited number of) features. MaskTune forces the trained model to explore new features during a single epoch of finetuning by masking previously discovered features. Unlike earlier approaches for mitigating shortcut learning, MaskTune does not require any supervision, such as annotations of spurious features or labels for subgroup samples in a dataset. Our empirical results on biased MNIST, CelebA, Waterbirds, and ImageNet-9 datasets show that MaskTune is effective on tasks that often suffer from spurious correlations. Finally, we show that MaskTune outperforms or matches the competing methods when applied to the selective classification (classification with rejection option) task.
Code for MaskTune is available at https://github.com/aliasgharkhani/Masktune.
MaskTune generates a new set of masked samples by obstructing the features discovered by a model fully trained via empirical risk minimization (ERM). The ERM model is then fine-tuned for only one epoch on the masked version of the original training data, forcing it to explore new features. The features highlighted in yellow, red, and green correspond to the features discovered by ERM, the masked features, and the features newly discovered by MaskTune, respectively.
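The masking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `saliency_fn` stands in for whatever importance-map extractor the ERM model provides (the paper uses a Grad-CAM-style attribution), and the fixed `threshold` is an assumption for the sketch.

```python
import numpy as np

def masktune_mask(images, saliency_fn, threshold=0.5):
    """Occlude the input regions the ERM model relied on most.

    images: array of shape (N, C, H, W).
    saliency_fn: hypothetical callable mapping one (C, H, W) image to a
        per-pixel importance map of shape (H, W) with values in [0, 1].
    Pixels whose importance exceeds `threshold` are zeroed out, so the
    subsequent one-epoch fine-tuning must rely on other features.
    """
    masked = images.copy()
    for i, img in enumerate(images):
        sal = saliency_fn(img)                 # importance map, shape (H, W)
        masked[i][..., sal > threshold] = 0.0  # mask previously used features
    return masked

# Toy usage: a saliency map that flags only the top-left pixel as important.
imgs = np.ones((2, 1, 4, 4), dtype=np.float32)
sal_map = np.zeros((4, 4), dtype=np.float32)
sal_map[0, 0] = 1.0
masked_imgs = masktune_mask(imgs, lambda im: sal_map)
```

The masked dataset `masked_imgs` would then replace the original training data for the single fine-tuning epoch; the original images are left untouched.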