
A force-mediated controller for cooperative object manipulation with independent autonomous robots

ABOVE – A: A robot collective transports an unknown object following a leader's guidance. B: Two helper robots handle a small rigid object, guided by a human leader's force applied to one corner (blue arrow). The left-hand robot senses a single multi-dimensional wrench at its end-effector, with no way to disambiguate the components arising from the leader's input, the object's inertial properties, and the forces exerted by the other agent. C: Physical testing of cooperative manipulation of a basket, using a Franka Emika Panda. D: An example application scenario: a robot helps a human manipulate a load in a challenging field situation, installing solar panels.

Our research was driven by a perceived need for spontaneous multi-agent collaboration under human guidance. Two control features are required to enable this: first, the non-human agents must be able to cooperate adaptively, eliminating the need for precise and lengthy calibration procedures or high-speed direct communication. Second, each robot must have a contextual reasoning layer that allows it to filter potentially ambiguous control input (for example, contact-based guidance) in order to infer the intent of the human operator. This type of control framework will make human-robot cooperation in challenging field settings, such as construction or large-scale assembly, both safer and more flexible.

Download publication


A force-mediated controller for cooperative object manipulation with independent autonomous robots

Nicole E Carey, Justin Werfel

Distributed Autonomous Robotic Systems 2022

We consider cooperative manipulation by multiple robots assisting a leader, when information about the manipulation task, environment, and team of helpers is unavailable, and without the use of explicit communication. The shared object being manipulated serves as a physical channel for coordination, with robots sensing forces associated with its movement. Robots minimize force conflicts, which are unavoidable under these restrictions, by inferring an intended context: decomposing the object’s motion into a task space of allowed motion and a null space in which perturbations are rejected. The leader can signal a change in context by applying a sustained strong force in an intended new direction. We present a controller, prove its stability, and demonstrate its utility through experiments with (a) an in-lab force-sensitive robot assisting a human operator and (b) a multi-robot collective in simulation.
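The core idea in the abstract — complying with forces in a task space of allowed motion while rejecting null-space perturbations, and switching context when a strong force is sustained in a new direction — can be illustrated with a minimal sketch. This is not the authors' implementation; the matrix `T`, the threshold, and the duration window are hypothetical stand-ins for the paper's controller details.

```python
import numpy as np

def decompose_force(f, T):
    """Split a sensed force f into task-space and null-space components.

    T: (n, k) matrix whose columns span the currently allowed directions
       of motion (the inferred context).
    """
    # Orthogonal projector onto the task space: P = T T^+ (pseudoinverse).
    P = T @ np.linalg.pinv(T)
    f_task = P @ f        # component the robot complies with
    f_null = f - f_task   # perturbation component the controller rejects
    return f_task, f_null

def maybe_switch_context(f_null, threshold, duration, history):
    """Hypothetical context-switch rule: a sustained strong force in the
    rejected (null-space) directions signals that the leader intends a
    new direction of allowed motion."""
    history.append(np.linalg.norm(f_null) > threshold)
    recent = history[-duration:]
    return len(recent) == duration and all(recent)
```

For example, with a one-dimensional task space along x, a force applied along y is initially rejected; if the leader keeps pushing along y for long enough, `maybe_switch_context` would trigger a re-definition of the task space.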

A video demonstration of contextual interpretation of a control force using dimensional constraints, along with other aspects of this research.

