PointMask: Towards Interpretable and Bias-Resilient Point Cloud Processing
This work explores bias-resilient point-cloud classification and provides a tool for interpreting (and debugging) deep point-cloud networks. The method lets a user see which input variables (parts/points) contribute most to a final prediction.
Abstract
Saeid Asgari Taghanaki, Kaveh Hassani, Pradeep Kumar Jayaraman, Amir Hosein Khasahmadi, Tonya Custis
International Conference on Machine Learning 2020
Deep classifiers tend to associate a few discriminative input variables with their objective function, which, in turn, may hurt their generalization capabilities. To address this, one can design systematic experiments and/or inspect the models via interpretability methods. In this paper, we investigate both of these strategies on deep models operating on point clouds. We propose PointMask, a model-agnostic interpretable information-bottleneck approach for attribution in point cloud models. PointMask encourages exploring the majority of variation factors in the input space while gradually converging to a general solution. More specifically, PointMask introduces a regularization term that minimizes the mutual information between the input and the latent features used to mask out irrelevant variables. We show that coupling a PointMask layer with an arbitrary model can discern the points in the input space that contribute the most to the prediction score, thereby leading to interpretability. Through designed bias experiments, we also show that, thanks to its gradual masking feature, our proposed method is effective in handling data bias.
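The abstract describes a masking layer trained with an information-bottleneck-style regularizer. The snippet below is a minimal sketch of what such a layer could look like, assuming a PyTorch implementation with a shared per-point scoring MLP and a simple sparsity penalty standing in for the mutual-information term; the class name, dimensions, and penalty are illustrative and are not taken from the paper's code.

```python
# Sketch of a PointMask-style attribution layer (assumptions: PyTorch,
# a shared per-point scoring MLP, and a sparsity penalty approximating
# the information-bottleneck regularizer; names are illustrative).
import torch
import torch.nn as nn


class PointMaskLayer(nn.Module):
    """Predicts a soft mask over the N input points and applies it."""

    def __init__(self, in_dim: int = 3, hidden: int = 64):
        super().__init__()
        # Shared scoring network applied independently to every point.
        self.score = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points: torch.Tensor):
        # points: (B, N, 3) point cloud.
        logits = self.score(points).squeeze(-1)   # (B, N) per-point scores
        mask = torch.sigmoid(logits)              # soft mask in [0, 1]
        masked = points * mask.unsqueeze(-1)      # suppress irrelevant points
        # Penalty limiting how much of the input passes through the mask,
        # standing in for the mutual-information term in the paper.
        reg = mask.mean()
        return masked, mask, reg
```

In a setup like this, the masked points would be fed to an arbitrary backbone (e.g., a PointNet-style classifier), the penalty added to the task loss with a weight that is increased gradually so that masking tightens over training, and the learned per-point mask values read off as attribution scores.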