COIL
Constrained Optimization in Learned Latent Space
We show for the first time how generative machine learning can learn a representation corresponding to a valid region of the search space, enabling optimizers to search in the new latent space and always find solutions that satisfy constraints.
Abstract
COIL: Constrained Optimization in Learned Latent Space. Learning Representations for Valid Solutions
Bentley, P. J., Lim, S. L., Gaier, A. and Tran, L.
Genetic and Evolutionary Computation Conference 2022
Constrained optimization problems can be difficult because their search spaces have properties not conducive to search, e.g., multimodality, discontinuities, or deception. To address such difficulties, considerable research has been performed on creating novel evolutionary algorithms or specialized genetic operators. However, if the representation that defines the search space could be altered such that it permitted only valid solutions satisfying the constraints, the task of finding the optimum would become more feasible without any need for specialized optimization algorithms. We propose Constrained Optimization in Latent Space (COIL), which uses a variational autoencoder (VAE) to generate a learned latent representation from a dataset comprising samples from the valid region of the search space according to a constraint, thus enabling the optimizer to optimize the objective in the new space defined by the learned representation. Preliminary experiments show promise: compared to an identical genetic algorithm (GA) using a standard representation that cannot meet the constraints or find fit solutions, COIL with its learned latent representation can perfectly satisfy different types of constraints while finding high-fitness solutions.
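The abstract describes a three-step pipeline: collect samples from the valid region of the search space, train a VAE on them, then run a standard GA over latent vectors and decode each vector back to the original space before evaluating fitness. The sketch below illustrates that pipeline under stated assumptions only; it uses PyTorch and NumPy, and the constraint, objective, network sizes, KL weighting, and GA operators are placeholders chosen for exposition, not the configuration reported in the paper.

```python
# Illustrative sketch of the COIL pipeline, assuming PyTorch and NumPy.
# The constraint, objective, network sizes, and GA operators are placeholders.
import numpy as np
import torch
import torch.nn as nn

DIM, LATENT = 5, 2          # search-space and latent dimensionality (assumed)

def valid(x):               # example constraint: components must sum to <= 1
    return x.sum(-1) <= 1.0

def fitness(x):             # example objective: be close to (0.15, ..., 0.15)
    return -np.linalg.norm(x - 0.15, axis=-1)

# Step 1: collect constraint-satisfying samples (here by simple rejection sampling).
candidates = torch.rand(200_000, DIM)
data = candidates[valid(candidates)]

# Step 2: train a small VAE so its latent space covers only the valid region.
class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(),
                                 nn.Linear(32, 2 * LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(),
                                 nn.Linear(32, DIM), nn.Sigmoid())
    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), mu, logvar

vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(2000):
    batch = data[torch.randint(len(data), (128,))]
    recon, mu, logvar = vae(batch)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    loss = ((recon - batch) ** 2).sum(-1).mean() + 1e-2 * kl   # KL weight is a guess
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3: run a plain GA over latent vectors, decoding each before evaluation.
def decode(Z):
    with torch.no_grad():
        return vae.dec(torch.as_tensor(Z, dtype=torch.float32)).numpy()

pop = np.random.randn(64, LATENT)
for _ in range(100):
    fit = fitness(decode(pop))
    parents = pop[np.argsort(fit)[-32:]]                   # truncation selection
    pop = parents[np.random.randint(32, size=64)] \
          + 0.1 * np.random.randn(64, LATENT)              # Gaussian mutation

final = decode(pop)
best = final[np.argmax(fitness(final))]
print("best solution:", best, "| satisfies constraint:", bool(valid(best)))
```

The mutation-only GA and fixed KL weight are stand-ins: because decoding happens only at evaluation time, any standard evolutionary optimizer can consume the latent vectors without modification, which is the point of learning the representation rather than the algorithm.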
Associated Researchers
Soo Ling Lim
University College London
Linh Tran
Autodesk AI Lab