Publication | IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2021
UV-Net
Learning from Boundary Representations
This paper presents a representation and neural network, UV-Net, to learn from Boundary representation (B-rep) data, the industry-wide standard for solid models in computer-aided design (CAD). This research has the potential to unlock numerous data-driven CAD applications, such as auto-completion of modeling operations, smart selection tools, and shape similarity search.
Abstract
Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph G. Lambourne, Karl D.D. Willis, Thomas Davies, Hooman Shayani, Nigel Morris
We introduce UV-Net, a novel neural network architecture and representation designed to operate directly on Boundary representation (B-rep) data from 3D CAD models. The B-rep format is widely used in the design, simulation, and manufacturing industries to enable sophisticated and precise CAD modeling operations. However, B-rep data presents unique challenges for modern machine learning due to the complexity of the data structure and its support for both continuous non-Euclidean geometric entities and discrete topological entities. In this paper, we propose a unified representation for B-rep data that exploits the U and V parameter domains of curves and surfaces to model geometry, and an adjacency graph to explicitly model topology. This leads to a unique and efficient network architecture, UV-Net, that couples image and graph convolutional neural networks in a compute- and memory-efficient manner. To aid future research, we present a synthetic labeled B-rep dataset, SolidLetters, derived from human-designed fonts with variations in both geometry and topology. Finally, we demonstrate that UV-Net generalizes to supervised and unsupervised tasks on five datasets, outperforming alternative 3D shape representations such as point clouds, voxels, and meshes.
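The core idea described above can be sketched in a few lines: each B-rep face is regularly sampled over its (u, v) parameter domain to produce an image-like grid (suitable for 2D convolutions), and faces sharing an edge become connected nodes in an adjacency graph (suitable for graph convolutions). The snippet below is a minimal illustration, not the paper's implementation; the `sample_uv_grid` helper, the toy cube faces, and the fixed [0, 1]² domain are all assumptions made for the example.

```python
import numpy as np

def sample_uv_grid(surface_fn, n=10):
    """Sample an n x n grid of 3D points over a face's (u, v) parameter
    domain, here assumed to be [0, 1]^2. The result is an image-like
    tensor that a 2D CNN can consume as per-face geometry features."""
    u = np.linspace(0.0, 1.0, n)
    v = np.linspace(0.0, 1.0, n)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    pts = np.stack([surface_fn(a, b) for a, b in zip(uu.ravel(), vv.ravel())])
    return pts.reshape(n, n, 3)

# Hypothetical solid: two adjacent planar faces of a unit cube,
# each parameterized by (u, v) -> 3D point.
faces = {
    "bottom": lambda u, v: np.array([u, v, 0.0]),
    "front":  lambda u, v: np.array([u, 0.0, v]),
}

# Topology as a face-adjacency graph: nodes are faces, and an edge
# connects two faces that share a B-rep edge (here, the cube edge y=0, z=0).
adjacency = [("bottom", "front")]

# Geometry as per-node UV-grid features.
node_features = {name: sample_uv_grid(fn) for name, fn in faces.items()}
```

A graph neural network would then propagate the CNN-encoded per-face grids along `adjacency`, which is how the paper's architecture couples the two convolution types.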