UniMoGen: Universal Motion Generation
UniMoGen generates realistic and diverse character motions in real time, controllable via action type, trajectory, and past motion context. It supports arbitrary skeleton topologies by operating in a skeleton-agnostic manner, and can produce long, smooth motion sequences that transition seamlessly across different styles. The figure shows a sample motion sequence generated by UniMoGen.
Motion generation is a cornerstone of computer graphics, animation, gaming, and robotics, enabling the creation of realistic and varied character movements. A significant limitation of existing methods is their reliance on specific skeletal structures, which restricts their versatility across different characters. To overcome this, we introduce UniMoGen, a novel UNet-based diffusion model designed for skeleton-agnostic motion generation. UniMoGen can be trained on motion data from diverse characters, such as humans and animals, without the need for a predefined maximum number of joints. By dynamically processing only the necessary joints for each character, our model achieves both skeleton agnosticism and computational efficiency. Key features of UniMoGen include controllability via style and trajectory inputs, and the ability to continue motions from past frames. We demonstrate UniMoGen’s effectiveness on the 100style dataset, where it outperforms state-of-the-art methods in diverse character motion generation. Furthermore, when trained on both the 100style and LAFAN1 datasets, which use different skeletons, UniMoGen achieves high performance and improved efficiency across both skeletons. These results highlight UniMoGen’s potential to advance motion generation by providing a flexible, efficient, and controllable solution for a wide range of character animations.
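The skeleton-agnostic behavior described above — processing only the joints each character actually has, rather than padding everything to a fixed maximum skeleton — can be illustrated with a small batching sketch. This is a minimal, hypothetical example in NumPy (the function name, feature dimension, and shapes are assumptions for illustration, not the paper's actual API): per-character joint sets are padded to the batch maximum and a boolean mask records which joints are real, so a single model can consume humans and animals with different topologies in one batch.

```python
import numpy as np

def pad_joints(motions, feat_dim=6):
    """Pad variable-size joint sets to the batch maximum and build a
    validity mask. Illustrative sketch only: a skeleton-agnostic model
    would apply its layers per joint and ignore masked (padding) slots."""
    max_j = max(m.shape[0] for m in motions)
    batch = np.zeros((len(motions), max_j, feat_dim))
    mask = np.zeros((len(motions), max_j), dtype=bool)
    for i, m in enumerate(motions):
        j = m.shape[0]
        batch[i, :j] = m          # copy this character's real joints
        mask[i, :j] = True        # mark them valid; the rest is padding
    return batch, mask

# Two characters with different skeletons: a 24-joint biped and a 30-joint quadruped
human = np.random.randn(24, 6)
animal = np.random.randn(30, 6)
batch, mask = pad_joints([human, animal])
print(batch.shape)       # (2, 30, 6)
print(mask.sum(axis=1))  # per-character real joint counts: [24 30]
```

Because the mask travels with the batch, downstream attention or convolution layers can zero out padding contributions, which is one common way to get both topology flexibility and efficiency when joint counts vary widely across characters.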
Download publication

Related Publications
2026
Motion Generation: A Survey of Generative Approaches and Benchmarks
This research survey presents a structured review of recent motion…
2026
PointAloud: An Interaction Suite for AI-Supported Pointer-Centric Think-Aloud Computing
PointAloud is a suite of novel AI-driven pointer-centric interactions…
2025
A Scalable Attention-Based Approach for Image-to-3D Texture Mapping
A fast transformer-based method that generates 3D textures from a…
2024
Wavelet Latent Diffusion: Billion-Parameter 3D Generative Model with Compact Wavelet Encodings
Addressing a common limitation of generative AI models, WaLa encodes…