Publication | IEEE International Conference on Computer Vision (ICCV) 2021


UVStyle-Net: Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps

This paper is a step toward developing machine learning (ML) models that perceive style and aesthetics. It introduces a model that can compute a style loss (the style difference between two 3D shapes) and the gradients of that loss with respect to the input shape.

Three main points make this paper special for Autodesk Research:

  1. It works directly on B-Reps.
  2. It requires no style labels for training (unsupervised).
  3. It captures each end user's subjective definition of style from just a few example shapes (few-shot learning).


UVStyle-Net: Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps

Peter Meltzer, Hooman Shayani, Amir Khasahmadi, Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph Lambourne

IEEE International Conference on Computer Vision (ICCV) 2021

Boundary Representations (B-Reps) are the industry standard in 3D Computer Aided Design/Manufacturing (CAD/CAM) and industrial design due to their fidelity in representing stylistic details. However, they have been largely ignored in 3D style research. Existing 3D style metrics typically operate on meshes or point clouds, and fail to account for end-user subjectivity by adopting fixed definitions of style, either through crowd-sourcing style labels or hand-crafting features. We propose UVStyle-Net, a style similarity measure for B-Reps that leverages the style signals in the second-order statistics of the activations in a pre-trained (unsupervised) 3D encoder, and learns their relative importance to a subjective end-user through few-shot learning. Our approach differs from all existing data-driven 3D style methods since it may be used in completely unsupervised settings, which is desirable given the lack of publicly available labelled B-Rep datasets. More importantly, the few-shot learning accounts for the inherent subjectivity associated with style. We show quantitatively that our proposed method with B-Reps is able to capture stronger style signals than alternative methods on meshes and point clouds, despite its significantly greater computational efficiency. We also show it is able to generate meaningful style gradients with respect to the input shape, and that few-shot learning with as few as two positive examples selected by an end-user is sufficient to significantly improve the style measure. Finally, we demonstrate its efficacy on a large unlabeled public dataset of CAD models. Source code and data will be released in the future.
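The core idea in the abstract, using second-order statistics (Gram matrices) of encoder activations as a style signature, with per-layer weights tuned from a few user-chosen examples, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the Frobenius-norm normalisation, and the uniform example weights are all assumptions.

```python
import numpy as np

def gram(acts):
    # acts: (N, d) activation matrix for one encoder layer
    # (N sampled points/faces, d channels). The Gram matrix captures
    # the second-order statistics of the activations.
    g = acts.T @ acts / acts.shape[0]            # (d, d)
    return g / (np.linalg.norm(g) + 1e-8)        # scale-normalise (illustrative)

def style_distance(layers_a, layers_b, weights):
    # Weighted sum of per-layer Gram-matrix distances between two shapes.
    # In the paper's setting, `weights` would be learned from a few
    # positive examples selected by the end user (few-shot step).
    return sum(w * np.linalg.norm(gram(a) - gram(b))
               for w, a, b in zip(weights, layers_a, layers_b))
```

A shape compared with itself yields zero distance, and increasing the weight of a layer makes the measure more sensitive to the style statistics captured at that layer.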

Related Resources



3D-Printed Prosthetics for the Developing World

The growing availability of 3D printing has made it possible for…



A Survey of Software Learnability: Metrics, Methodologies and Guidelines

It is well-accepted that learnability is an important aspect of…



Community Enhanced Tutorials: Improving Tutorials with Multiple Demonstrations

Web-based tutorials are a popular help resource for learning how to…



Robotic assembly of timber joints using reinforcement learning

In architectural construction, automated robotic assembly is…
