Publication
PointMask: Towards Interpretable and Bias-Resilient Point Cloud Processing
This work explores unbiased point-cloud classification and provides an interpretability tool for debugging deep point-cloud networks. The method lets a user see which input variables (parts/points) contribute most to a final prediction.
Abstract
Saeid Asgari Taghanaki, Kaveh Hassani, Pradeep Kumar Jayaraman, Amir Hosein Khasahmadi, Tonya Custis
International Conference on Machine Learning 2020
Deep classifiers tend to associate a few discriminative input variables with their objective function, which, in turn, may hurt their generalization capabilities. To address this, one can design systematic experiments and/or inspect the models via interpretability methods. In this paper, we investigate both of these strategies on deep models operating on point clouds. We propose PointMask, a model-agnostic interpretable information-bottleneck approach for attribution in point cloud models. PointMask encourages exploring the majority of variation factors in the input space while gradually converging to a general solution. More specifically, PointMask introduces a regularization term that minimizes the mutual information between the input and the latent features used to mask out irrelevant variables. We show that coupling a PointMask layer with an arbitrary model can discern the points in the input space which contribute the most to the prediction score, thereby leading to interpretability. Through designed bias experiments, we also show that thanks to its gradual masking feature, our proposed method is effective in handling data bias.
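To make the idea concrete, the following is a minimal, hypothetical sketch of how a mask layer of this kind could be wired in front of an arbitrary point-cloud model. It is not the paper's implementation: the linear scoring function, the sigmoid mask, and the simple sparsity penalty (a crude stand-in for the mutual-information bottleneck term) are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pointmask_layer(points, weights, bias, reg_coeff=0.1):
    """Hypothetical PointMask-style attribution layer (illustrative only).

    points  : (N, 3) array of xyz coordinates
    weights : (3,) scoring weights, a stand-in for a learned scoring network
    bias    : scalar bias for the scores
    Returns the masked points, the per-point soft mask, and a regularization
    term that penalizes keeping many points active; in the paper this role
    is played by a mutual-information bottleneck, not this simple mean.
    """
    logits = points @ weights + bias          # per-point relevance score
    mask = sigmoid(logits)                    # soft mask values in (0, 1)
    masked_points = points * mask[:, None]    # suppress low-relevance points
    reg_loss = reg_coeff * mask.mean()        # encourage a sparse mask
    return masked_points, mask, reg_loss

# Example: mask a small random point cloud before feeding a downstream model.
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
w = np.array([0.5, -0.2, 0.1])
masked, mask, reg = pointmask_layer(pts, w, bias=0.0)
```

In training, `reg_loss` would be added to the downstream classifier's objective, so the mask is pushed toward zero everywhere except the points that genuinely raise the prediction score; inspecting `mask` then gives the per-point attribution.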
Related Resources
2023
CAD-LLM: Large Language Model for CAD Generation
This research presents generating Computer Aided Designs (CAD) using…
2024
Experiential Views: Towards Human Experience Evaluation of Designed Spaces using Vision-Language Models
Exploratory research on helping designers and architects anticipate…
2023
Learned Visual Features to Textual Explanations
A novel method that leverages the capabilities of large language…
2023
WorldSmith: Iterative and Expressive Prompting for World Building with a Generative AI
Using multi-modal generative AI to quickly and iteratively visualize…