Publication | IEEE International Conference on Computer Vision (ICCV) 2021

UVStyle-Net

Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps

This paper is a step towards the development of machine learning (ML) models for the perception of style and aesthetics. It introduces a model that can compute a style loss (the style difference between two 3D shapes) as well as the gradients of that loss with respect to the input shape.

Three main points make this paper special for Autodesk Research:

  1. It works on B-Reps.
  2. It does not require style labels for training (unsupervised).
  3. It proposes a simple way to capture each end user's subjective definition of style from only a few example shapes (few-shot learning); see the sketch after this list.
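In the paper, style is measured layer by layer in a deep encoder, and the few-shot step reweights those layers to reflect an individual user's taste. The sketch below is a hypothetical illustration of that reweighting; the function name learn_layer_weights and the pre-computed per-layer distance inputs are assumptions for illustration, not the authors' released code. Given a few user-selected positive examples, per-layer weights are optimised so that positives end up closer to each other than to random negatives.

```python
# Hypothetical sketch of the few-shot step: learn per-layer weights so that a
# handful of user-selected "positive" shapes become close under the weighted
# style distance. Inputs and optimisation details are assumptions, not the
# paper's exact procedure.
import torch

def learn_layer_weights(pos_dists, neg_dists, steps=200, lr=0.1):
    """pos_dists / neg_dists: (num_pairs, num_layers) tensors of per-layer
    style-descriptor distances for positive-positive and positive-negative
    pairs, pre-computed with a fixed encoder."""
    logits = torch.zeros(pos_dists.shape[1], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(logits, dim=0)          # layer weights sum to 1
        # Pull positive pairs together, push positive-negative pairs apart.
        loss = (pos_dists @ w).mean() - (neg_dists @ w).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()
```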

Abstract

UVStyle-Net: Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps

Peter Meltzer, Hooman Shayani, Amir Khasahmadi, Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph Lambourne

IEEE International Conference on Computer Vision (ICCV) 2021

Boundary Representations (B-Reps) are the industry standard in 3D Computer Aided Design/Manufacturing (CAD/CAM) and industrial design due to their fidelity in representing stylistic details. However, they have been ignored in 3D style research. Existing 3D style metrics typically operate on meshes or point clouds, and fail to account for end-user subjectivity because they adopt fixed definitions of style, relying either on crowd-sourced style labels or on hand-crafted features. We propose UVStyle-Net, a style similarity measure for B-Reps that leverages the style signals in the second order statistics of the activations in a pre-trained (unsupervised) 3D encoder, and learns their relative importance to a subjective end-user through few-shot learning. Our approach differs from all existing data-driven 3D style methods since it may be used in completely unsupervised settings, which is desirable given the lack of publicly available labelled B-Rep datasets. More importantly, the few-shot learning accounts for the inherent subjectivity associated with style. We show quantitatively that our proposed method with B-Reps is able to capture stronger style signals than alternative methods on meshes and point clouds, despite its significantly greater computational efficiency. We also show it is able to generate meaningful style gradients with respect to the input shape, and that few-shot learning with as few as two positive examples selected by an end-user is sufficient to significantly improve the style measure. Finally, we demonstrate its efficacy on a large unlabeled public dataset of CAD models. Source code and data will be released in the future.
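As a concrete reading of the abstract, the sketch below shows one way the "second order statistics of the activations" could be turned into a differentiable style distance: each layer's activations are summarised by a normalised Gram matrix, the per-layer descriptor distances are combined using the (few-shot learned) layer weights, and autograd then yields style gradients with respect to the input. The encoder, layer names, tensor shapes, and the helper names gram_descriptor and style_distance are assumptions made for illustration, not the released UVStyle-Net implementation.

```python
# Illustrative sketch only: Gram-matrix ("second order statistics") style
# descriptors per layer and a weighted, differentiable style distance.
import torch
import torch.nn.functional as F

def gram_descriptor(activations: torch.Tensor) -> torch.Tensor:
    """Unit-norm, flattened Gram matrix of one layer's activations.
    activations: (num_samples, channels) features, e.g. sampled over the
    UV-grids of a B-Rep's faces (shape is an assumption for this sketch)."""
    n, _ = activations.shape
    gram = activations.t() @ activations / n      # (channels, channels)
    return F.normalize(gram.flatten(), dim=0)

def style_distance(acts_a, acts_b, layer_weights):
    """Weighted sum of per-layer descriptor distances between two shapes.
    acts_a, acts_b: dicts mapping layer name -> activation tensor."""
    total = torch.zeros(())
    for name, w in layer_weights.items():
        d_a = gram_descriptor(acts_a[name])
        d_b = gram_descriptor(acts_b[name])
        total = total + w * (d_a - d_b).pow(2).sum()
    return total

# The distance is differentiable, so style gradients with respect to the input
# shape's features follow from autograd:
#   loss = style_distance(acts_a, acts_b, layer_weights)
#   loss.backward()   # gradients flow back through the encoder to the input
```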

Related Resources

Publication

2023

XLB: A Differentiable Massively Parallel Lattice Boltzmann Library in Python

This research introduces the XLB library, a scalable Python-based…

Publication

2022

T-Domino: Exploring Multiple Criteria with Quality-Diversity and the Tournament Dominance Objective

A new ranking system for Multi-Criteria Exploration (MCX) that uses…

Publication

2019

Dynamic Experience Replay

We present a novel technique called Dynamic Experience Replay (DER)…

Project

2021

Software Learning

This learning project investigates advanced techniques for assisting…

Get in touch

Something pique your interest? Get in touch if you’d like to learn more about Autodesk Research, our projects, people, and potential collaboration opportunities.

Contact us