Publication | Distributed Autonomous Robotic Systems 2022

A force-mediated controller for cooperative object manipulation with independent autonomous robots

Figure: (A) A robot collective transports an unknown object following a leader’s guidance. (B) Two helper robots handle a small rigid object, guided by a human leader’s force applied to one corner (blue arrow). The left-hand robot experiences a single multi-dimensional wrench at its end-effector, with no way to disambiguate the components arising from the leader, the object’s inertial properties, and the forces exerted by the other agent. (C) Physical testing of cooperative manipulation of a basket using a Franka Emika Panda. (D) An example application scenario: a robot helps a human manipulate a load in a challenging field situation, installing solar panels.

Our research was driven by a perceived need for spontaneous multi-agent collaboration under human guidance. Two control features are required to enable this. First, the non-human agents must be able to cooperate adaptively, eliminating the need for precise and lengthy calibration processes or high-speed direct communication. Second, each robot must have a contextual reasoning layer that allows it to filter potentially ambiguous control input (for example, contact-based guidance) in order to infer the intent of the human operator. This type of control framework will make human-robot cooperation in challenging field settings, such as construction or large-scale assembly, both safer and more flexible.

Download publication

Abstract

Nicole E Carey, Justin Werfel

Distributed Autonomous Robotic Systems 2022

We consider cooperative manipulation by multiple robots assisting a leader, when information about the manipulation task, environment, and team of helpers is unavailable, and without the use of explicit communication. The shared object being manipulated serves as a physical channel for coordination, with robots sensing forces associated with its movement. Robots minimize force conflicts, which are unavoidable under these restrictions, by inferring an intended context: decomposing the object’s motion into a task space of allowed motion and a null space in which perturbations are rejected. The leader can signal a change in context by applying a sustained strong force in an intended new direction. We present a controller, prove its stability, and demonstrate its utility through experiments with (a) an in-lab force-sensitive robot assisting a human operator and (b) a multi-robot collective in simulation.
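The decomposition described above can be illustrated with a minimal sketch: a sensed force is split into a task-space component (allowed motion) and a null-space component (rejected perturbation), and a sustained strong null-space force triggers a context switch toward a new task direction. All names, thresholds, and the orthonormal-basis assumption below are illustrative, not taken from the paper.

```python
# Illustrative sketch of task-space / null-space force decomposition with
# a sustained-force context switch. Thresholds and helper names are
# hypothetical, not from the publication.
import numpy as np

def decompose(force, task_dirs):
    """Split a sensed force into its task-space and null-space parts.

    `task_dirs` is a (3, k) matrix whose columns are assumed orthonormal
    and span the current task space of allowed motion.
    """
    T = np.asarray(task_dirs)
    task_component = T @ (T.T @ force)       # projection onto task space
    null_component = force - task_component  # remainder: rejected perturbation
    return task_component, null_component

def maybe_switch_context(null_history, threshold=5.0, hold_steps=50):
    """Detect a sustained strong null-space force signaling a new context.

    If the null-space force magnitude exceeds `threshold` for `hold_steps`
    consecutive samples, return the mean force direction as the new task
    direction; otherwise return None.
    """
    recent = null_history[-hold_steps:]
    if len(recent) == hold_steps and all(
        np.linalg.norm(f) > threshold for f in recent
    ):
        mean_f = np.mean(recent, axis=0)
        return mean_f / np.linalg.norm(mean_f)
    return None
```

For example, with the task space set to the x-axis, a force of (1, 2, 3) decomposes into an allowed component (1, 0, 0) and a rejected component (0, 2, 3); only a persistent large rejected force would cause the robot to adopt a new task direction.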

A video demonstrating contextual interpretation of control forces using dimensional constraints, along with other aspects of this research.
