Publication 2025
Flow-based Domain Randomization for Learning and Sequencing Robotic Skills
Two robots learning a high-dimensional bimanual insertion task under flow-based domain randomization.
Abstract
Domain randomization in reinforcement learning is an established technique for increasing the robustness of control policies trained in simulation. By randomizing environment properties during training, the learned policy becomes robust to uncertainties along the randomized dimensions. While the environment distribution is typically specified by hand, in this paper we investigate automatically discovering a sampling distribution via entropy-regularized reward maximization of a normalizing-flow–based neural sampling distribution. We show that this architecture is more flexible and provides greater robustness than existing approaches that learn simpler, parameterized sampling distributions, as demonstrated in six simulated robotics domains and one real-world domain. Lastly, we explore how these learned sampling distributions, along with a privileged value function, can be used for out-of-distribution detection in an uncertainty-aware multi-step manipulation planner.
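The core idea of entropy-regularized reward maximization over a learned sampling distribution can be illustrated with a minimal sketch. This is not the paper's implementation: the deep normalizing flow is replaced by a single affine layer (z ~ N(0,1), θ = a·z + b), and the quadratic "skill success" reward and its optimum θ* are hypothetical stand-ins chosen so the objective has a closed-form solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "skill success" reward over one randomized environment
# parameter theta (e.g., a clearance or friction value); theta_star marks
# the region of parameter space where the policy succeeds.
theta_star = 2.0
def reward(theta):
    return -(theta - theta_star) ** 2

# One affine layer as a minimal stand-in for a deeper normalizing flow:
# z ~ N(0,1), theta = exp(s) * z + b, so the entropy is H = const + s.
s, b = 0.0, 0.0          # log-scale and shift of the flow
alpha = 0.5              # entropy-regularization weight
lr, steps, batch = 0.05, 2000, 256

for _ in range(steps):
    z = rng.standard_normal(batch)
    a = np.exp(s)
    theta = a * z + b
    # Reparameterized gradient ascent on E[R(theta)] + alpha * H:
    dR = -2.0 * (theta - theta_star)       # dR/dtheta
    grad_b = dR.mean()                     # d/db E[R]
    grad_s = (dR * z).mean() * a + alpha   # d/ds E[R] + d/ds (alpha * H)
    b += lr * grad_b
    s += lr * grad_s

# For this quadratic reward the optimum is b = theta_star, a = sqrt(alpha/2):
# the entropy bonus keeps the distribution from collapsing to a point mass.
print(b, np.exp(s))
```

The entropy term is what distinguishes this from plain reward maximization: without it the learned distribution collapses onto the single easiest parameter setting, whereas the regularizer forces it to cover the widest region over which the policy still succeeds.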
Associated Researchers
Aidan Curtis
MIT CSAIL
Eric Li
MIT CSAIL
Michael Noseworthy
MIT CSAIL
Nishad Gothoskar
MIT CSAIL
Leslie Pack Kaelbling
MIT CSAIL
Related Publications
2025
In-Context Imitation Learning via Next-Token Prediction
This robotics approach allows flexible and training-free execution of…
2024
Toward Automated Programming for Robotic Assembly Using ChatGPT
By using specialized language agents for task decomposition and code…
2024
Bridging the Sim-to-Real Gap with Dynamic Compliance Tuning for Industrial Insertion
A novel framework for robustly learning manipulation skills…
2024
ASAP: Automated Sequence Planning for Complex Robotic Assembly with Physical Feasibility
A physics-based planning approach for automatically generating…