Neural Implicit Style-Net
Synthesizing shapes in a preferred style exploiting self-supervision
[Figure: Examples of style transfer results]
Abstract
We introduce a novel approach to disentangle style from content in the 3D domain and perform unsupervised neural style transfer. Our approach extracts style information from 3D input in a self-supervised fashion, conditioning the definition of style on inductive biases enforced explicitly, in the form of specific augmentations applied to the input. This allows us, at test time, to select precisely which features are transferred between two arbitrary 3D shapes, while still capturing complex changes (e.g. combinations of arbitrary geometrical and topological transformations) with the data prior. Coupled with the choice of representing 3D shapes as neural implicit fields, we are able to perform style transfer in a controllable way, handling a variety of transformations. We validate our approach qualitatively and quantitatively on a dataset with font style labels.
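To illustrate the idea of a style/content-conditioned neural implicit field, here is a minimal, hypothetical sketch in a PyTorch style. It is not the authors' implementation: the class name, latent dimensions, network depth, and the single occupancy/SDF output are all illustrative assumptions. The point it shows is that once a shape is represented by separate content and style codes, style transfer at test time reduces to decoding one shape's content code together with another shape's style code.

# Hypothetical sketch (assumed names and sizes, not the published architecture):
# an implicit decoder maps a 3D query point plus a content code and a style
# code to a scalar field value (e.g. occupancy or signed distance).
import torch
import torch.nn as nn

class StyleConditionedImplicitField(nn.Module):
    def __init__(self, content_dim=256, style_dim=64, hidden_dim=512):
        super().__init__()
        # MLP over (query point, content code, style code).
        self.net = nn.Sequential(
            nn.Linear(3 + content_dim + style_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, points, content_code, style_code):
        # points: (B, N, 3); content_code: (B, content_dim); style_code: (B, style_dim)
        B, N, _ = points.shape
        cond = torch.cat([content_code, style_code], dim=-1)            # (B, C+S)
        cond = cond.unsqueeze(1).expand(B, N, cond.shape[-1])           # (B, N, C+S)
        return self.net(torch.cat([points, cond], dim=-1)).squeeze(-1)  # (B, N)

# Style transfer at test time (illustrative): evaluate the field using
# shape A's content code paired with shape B's style code, then extract a
# mesh from the resulting implicit field (e.g. via marching cubes).
# field = StyleConditionedImplicitField()
# values = field(query_points, content_code_A, style_code_B)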