
JoinABLe

Learning Bottom-up Assembly of Parametric CAD Joints

An overview of assemblies in the Fusion 360 Gallery assembly dataset.

A critical part of assembly design in Fusion 360 and Inventor is aligning parts to one another to form joints. However, fully defining assembly joints is time-consuming for our customers and is not yet automated. To address this challenge, we developed JoinABLe, a machine learning-based approach that automatically creates joints between pairs of parts in an assembly.


Abstract

JoinABLe: Learning Bottom-up Assembly of Parametric CAD Joints

Karl D.D. Willis, Pradeep Kumar Jayaraman, Hang Chu, Yunsheng Tian, Yifei Li, Daniele Grandi, Aditya Sanghi, Linh Tran, Joseph G. Lambourne, Armando Solar-Lezama, Wojciech Matusik

IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022

Physical products are often complex assemblies combining a multitude of 3D parts modeled in computer-aided design (CAD) software. CAD designers build up these assemblies by aligning individual parts to one another using constraints called joints. In this paper we introduce JoinABLe, a learning-based method that assembles parts together to form joints. JoinABLe uses the weak supervision available in standard parametric CAD files without the help of object class labels or human guidance. Our results show that by making network predictions over a graph representation of solid models we can outperform multiple baseline methods with an accuracy (79.53%) that approaches human performance (80%). Finally, to support future research we release the Fusion 360 Gallery assembly dataset, containing assemblies with rich information on joints, contact surfaces, holes, and the underlying assembly graph structure.
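To make the graph representation mentioned in the abstract concrete, the sketch below builds a face-adjacency graph for a single part: B-rep faces become nodes and two faces are connected when they share an edge. The attribute names and the toy box geometry are illustrative assumptions, not the actual schema used in the dataset or the paper.

# Minimal sketch (illustrative only): a solid model as a face-adjacency graph.
import networkx as nx

def build_face_adjacency_graph(faces, shared_edges):
    # faces: list of dicts of per-face attributes (e.g. surface type, area).
    # shared_edges: (face_index_a, face_index_b) pairs that share a B-rep edge.
    graph = nx.Graph()
    for idx, face in enumerate(faces):
        graph.add_node(idx, **face)   # one node per face, attributes as features
    for a, b in shared_edges:
        graph.add_edge(a, b)          # connect faces that touch along an edge
    return graph

# Toy example: a box-like part with six planar faces, each adjacent to four others.
faces = [{"surface_type": "plane", "area": 1.0} for _ in range(6)]
shared_edges = [(0, 2), (0, 3), (0, 4), (0, 5),
                (1, 2), (1, 3), (1, 4), (1, 5),
                (2, 4), (2, 5), (3, 4), (3, 5)]
part_graph = build_face_adjacency_graph(faces, shared_edges)
print(part_graph.number_of_nodes(), part_graph.number_of_edges())  # 6 12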

CAD assemblies contain valuable joint information describing how parts are locally constrained and positioned together. We use this weak supervision to learn a bottom-up approach to assembly. JoinABLe combines an encoder and joint axis prediction network together with a neurally guided joint pose search to assemble pairs of parts without class labels or human guidance.
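The following minimal sketch, under assumed tensor shapes and module names, illustrates the general idea of scoring candidate joint axes between two parts: entities from each part are encoded into embeddings, every cross-part entity pair is scored by a small head, and the top-scoring pair is taken as the predicted joint axis, after which a pose search would resolve the final placement. This is a hypothetical stand-in, not the released JoinABLe implementation.

# Minimal sketch (illustrative only): pairwise joint axis scoring between two parts.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, feat_dim=16, hidden_dim=64):
        super().__init__()
        # Stand-in encoder: in the real system this would be a graph network
        # operating over each part's face-adjacency graph.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, hidden_dim))
        # MLP head that scores a concatenated pair of entity embeddings.
        self.head = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
                                  nn.Linear(hidden_dim, 1))

    def forward(self, feats_a, feats_b):
        emb_a = self.encoder(feats_a)   # (Na, hidden)
        emb_b = self.encoder(feats_b)   # (Nb, hidden)
        # Build all cross-part entity pairs and score each one.
        pairs = torch.cat([emb_a.unsqueeze(1).expand(-1, emb_b.size(0), -1),
                           emb_b.unsqueeze(0).expand(emb_a.size(0), -1, -1)], dim=-1)
        return self.head(pairs).squeeze(-1)   # (Na, Nb) joint axis scores

model = PairScorer()
scores = model(torch.randn(6, 16), torch.randn(8, 16))   # toy entity features
best = torch.argmax(scores)
print(divmod(best.item(), scores.size(1)))   # index of the predicted entity pair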

