Publication | Distributed Autonomous Robotic Systems 2022

A force-mediated controller for cooperative object manipulation with independent autonomous robots

Figure – A: A robot collective transports an unknown object following a leader’s guidance. B: Two helper robots handle a small rigid object, guided by a human leader’s force applied to one corner (blue arrow). The left-hand robot experiences a single multi-dimensional wrench at its end-effector, with no disambiguation of the components resulting from the leader, the object’s inertial properties, and forces due to the other agent. C: Physical testing of cooperative manipulation of a basket, using a Franka Emika Panda. D: An example application scenario: a robot helps a human manipulate a load in a challenging field situation, installing solar panels.

Our research was driven by a perceived need for spontaneous multi-agent collaboration under human guidance. Two control features are required to enable this. First, the non-human agents must be able to cooperate adaptively, eliminating the need for precise and lengthy calibration processes or high-speed direct communication. Second, each robot must have a contextual reasoning layer that allows it to filter potentially ambiguous control input (for example, contact-based guidance) in order to infer the intent of the human operator. This type of control framework will make human-robot cooperation in challenging field settings (such as construction or large-scale assembly) both safer and more flexible.


Abstract


Nicole E Carey, Justin Werfel

Distributed Autonomous Robotic Systems 2022

We consider cooperative manipulation by multiple robots assisting a leader, when information about the manipulation task, environment, and team of helpers is unavailable, and without the use of explicit communication. The shared object being manipulated serves as a physical channel for coordination, with robots sensing forces associated with its movement. Robots minimize force conflicts, which are unavoidable under these restrictions, by inferring an intended context: decomposing the object’s motion into a task space of allowed motion and a null space in which perturbations are rejected. The leader can signal a change in context by applying a sustained strong force in an intended new direction. We present a controller, prove its stability, and demonstrate its utility through experiments with (a) an in-lab force-sensitive robot assisting a human operator and (b) a multi-robot collective in simulation.
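The task-space/null-space decomposition described above can be sketched in a few lines. This is an illustrative simplification, not the paper’s actual controller: the measured end-effector wrench is projected onto a subspace of allowed directions, the rejected null-space component is suppressed, and a sustained strong force outside the task space is read as the leader signaling a new context. All function names, thresholds, and the 1-D task basis below are assumptions made for the example.

```python
import numpy as np

def task_null_split(wrench, task_basis):
    """Split a 6-D wrench into task-space and null-space components.

    task_basis: (6, k) matrix whose columns span the allowed directions.
    Returns (task_part, null_part) with wrench == task_part + null_part.
    """
    # Projector onto the column space of task_basis (pinv handles
    # non-orthonormal bases as well).
    P = task_basis @ np.linalg.pinv(task_basis)
    task_part = P @ wrench
    null_part = wrench - task_part
    return task_part, null_part

class ContextSwitcher:
    """Flag a context change when the rejected force stays large.

    A brief disturbance is filtered out; only a force that exceeds the
    threshold for `hold_steps` consecutive updates counts as intent.
    """
    def __init__(self, threshold=10.0, hold_steps=50):
        self.threshold = threshold    # force magnitude, illustrative units
        self.hold_steps = hold_steps  # consecutive steps required
        self.count = 0

    def update(self, null_part):
        if np.linalg.norm(null_part) > self.threshold:
            self.count += 1
        else:
            self.count = 0
        return self.count >= self.hold_steps

# Example: only translation along x is allowed. A persistent sideways
# push is first rejected, then (if sustained) treated as new intent.
basis = np.zeros((6, 1))
basis[0, 0] = 1.0
wrench = np.array([2.0, 15.0, 0.0, 0.0, 0.0, 0.0])
task, null = task_null_split(wrench, basis)   # task keeps the x-force only
```

In the single-direction case the projector reduces to keeping the x component, so `task` is `[2, 0, 0, 0, 0, 0]` and the 15-unit lateral push lands entirely in `null`, where the switcher accumulates evidence before changing context.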

A video demonstrating contextual interpretation of the control force using dimensional constraints, along with other aspects of this research.


