Teaching Robots to Cooperate in Strange New Worlds

Nic Carey


How do we build robust, resilient structures and support systems on other planets, where resources are scarce, communication is difficult, and adaptability is key to survival?

Researchers at Autodesk are collaborating with NASA’s Resilient Extra-Terrestrial Habitat Institute (RETHI) to develop cooperative algorithms that allow robot collectives to work together to transport and manipulate objects in extra-planetary environments. Our key research question is how to achieve maximum spontaneity and task flexibility in situations that may involve uncontrolled and unpredictable environments, and intermittent or non-existent communication and networking infrastructure.

To do this, we are leveraging behaviors seen in groups of humans navigating collective tasks. If a group of people needs to work together to carry a bulky piece of furniture, they do not need to be told the exact weight of the object beforehand, nor the precise location of their fellow helpers, nor even the intended path and goal position (as long as one member of the group has a clear understanding of the destination point). The process by which we do this is fairly sophisticated, relying on a combination of finely tuned neuromuscular signaling and years of experience in assessing, analyzing, and handling unknown objects.

When humans approach and pick up an object, we immediately perceive key features of its shape and weight distribution. We use this to predict how it will respond when we try to manipulate it, and, by comparing it against our own strength and capacity, we can even project the bounds of a safe operational envelope within which we know we can fully support, stabilize, and control the load. If the object is very heavy or unbalanced, we instinctively respond to attempts to move outside this envelope by stiffening our joints, imparting more resistive force, and damping the motion of the load to bring it back to a controllable configuration.
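This stiffen-near-the-boundary reflex can be sketched as a simple gain schedule: the closer the load drifts toward the edge of its safe envelope, the stiffer (and more heavily damped) the virtual spring holding it becomes. The function below is a minimal illustration; the spherical envelope, gain range, and all names are assumptions for this sketch, not the project's actual controller.

```python
import numpy as np

def adaptive_gains(position, envelope_center, envelope_radius,
                   k_min=50.0, k_max=500.0):
    """Scale virtual-spring stiffness (and damping) as the load nears
    the edge of a safe operational envelope.

    All names and constants are illustrative, not taken from any
    published controller."""
    # Normalized distance from the envelope center (0 = center, 1 = edge).
    d = np.linalg.norm(np.asarray(position) - np.asarray(envelope_center))
    ratio = min(d / envelope_radius, 1.0)
    # Stiffen smoothly toward k_max as the load approaches the boundary.
    k = k_min + (k_max - k_min) * ratio**2
    # Damping for a critically damped response, assuming unit effective mass.
    b = 2.0 * np.sqrt(k)
    return k, b
```

A spherical envelope keeps the example short; a fuller controller would use a per-axis envelope around the estimated safe region and schedule rotational stiffness as well.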

Having thus developed an internal model of a shared object’s inertial features, and an estimation of the controllable ‘pose basin’ within which we have confidence in our ability to support and stabilize it, humans can intuit a manipulation goal imparted by a single knowledgeable leader through pure force application. We subconsciously filter out the passive inertial forces experienced due to the load itself, and glean an estimation of the leader’s intent through the forces and torques they apply through the object.
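This intent-filtering step can be pictured as a residual: the difference between what a force sensor measures and what the robot's internal object model predicts the load alone should produce. The point-mass model and function names below are illustrative assumptions; a real implementation would use a full inertial model (mass, center of mass, inertia tensor) and a carefully defined sensor sign convention.

```python
import numpy as np

def leader_intent(measured_force, mass, acceleration,
                  gravity=np.array([0.0, 0.0, -9.81])):
    """Estimate the leader's contribution by subtracting the force the
    object model predicts (here a bare point mass) from the measured
    force. This sketch assumes the sensor reports the force the robot
    applies to the load; sign conventions depend on the sensor frame."""
    measured = np.asarray(measured_force, dtype=float)
    accel = np.asarray(acceleration, dtype=float)
    # Force a point mass needs in order to move as observed: m * (a - g).
    passive = mass * (accel - gravity)
    # Whatever remains is attributed to the leader's guidance.
    return measured - passive
```

For example, holding a 2 kg load statically, the model predicts roughly 19.6 N of pure support force; any measured surplus in, say, the x direction reads out as the leader nudging the load that way.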

Endowing a robot with these skills requires a particular set of hardware capabilities, such as force sensitivity, tunable compliance, and high (ideally redundant) degrees of freedom in its operational space. Fortunately, recent generations of co-bots have begun to incorporate these features. By using an off-the-shelf compliant co-bot platform (the Franka Emika Panda) as the base unit for our robot collective, we developed a framework that lets a group of robots manipulate objects based on human-inspired sensory systems and control algorithms, requiring no knowledge of, or direct communication with, other members of the collective. Guidance through the manipulation space is imparted exclusively through forces transmitted on the shared load, by a single leader. The robots can adapt their behavior according to the amount of contextual information available to them, while ensuring the load always remains stable and controllable. For example, if a robot’s internal model of the shared object is poor, it can reactively stiffen its joints when the object starts to move too fast or into a configuration where the individual robot is unsure it will be able to maintain control. Alternatively, a more accurate model permits a greater degree of controllability in more spatial dimensions, allowing the collective to rotate and twist the shared load to avoid obstacles or place it precisely into a goal location.
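The compliant behavior this framework relies on is commonly realized with a Cartesian impedance law: the robot renders a virtual spring-damper between its end effector and a desired pose, so disturbances meet a proportionate restoring force rather than a rigid position hold. The snippet below is a generic textbook-style sketch of that law, not Franka's API or the project's own controller.

```python
import numpy as np

def impedance_wrench(x, x_des, v, stiffness, damping_ratio=1.0):
    """Cartesian impedance law: a virtual spring-damper between the
    end-effector position x and the desired pose x_des.

    A generic sketch of compliant control on platforms like the Panda;
    parameter names and the unit-mass damping rule are assumptions."""
    K = np.diag(stiffness)
    # Damping matched to the requested ratio, assuming unit effective mass.
    D = np.diag(2.0 * damping_ratio * np.sqrt(stiffness))
    # Spring pulls toward x_des; damper resists the current velocity v.
    return K @ (np.asarray(x_des) - np.asarray(x)) - D @ np.asarray(v)
```

Raising the stiffness values recovers something close to a rigid position hold, while lowering them yields the "springiness" that lets a robot yield gracefully to its partners' forces.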

Because each individual robot does not know the others’ states and positions (or even how many partners share the task), conflicts can easily arise in their estimates of the task parameters, and local environmental conditions can push them into misalignment and disagreement. Traditional industrial robots with rigid, non-compliant joints cannot resolve such conflicts on their own: they require continual high-speed communication so that full state information is always available to every participant, and when disagreements do arise they are forced to a hard stop, requiring reconfiguration and realignment before they can attempt the task again. Using an actively compliant platform allows us to give each robot a degree of elasticity (or ‘springiness’) in its motion and joint control. In this way, conflicts can be accommodated, and through an adaptive control process that attempts to minimize these conflicting forces, the robots gradually bring themselves into alignment and reach a consensus on task goals.
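This force-minimizing consensus can be illustrated with a one-dimensional toy model: each robot pulls a shared load toward its own goal through an identical virtual spring, feels a residual spring force whenever the goals disagree, and relaxes its goal in the direction that shrinks that force. The update rule, gain, and spring model below are assumptions chosen for illustration only.

```python
def simulate_consensus(goals, eta=0.2, steps=50):
    """Toy 1-D consensus: with equal springs, the shared load settles
    at the mean of the robots' goals. Each robot then nudges its goal
    toward the load, shrinking the residual force it feels, so all
    goals drift toward agreement. Purely illustrative."""
    goals = list(goals)
    for _ in range(steps):
        load = sum(goals) / len(goals)  # equal springs -> load at the mean
        # Each robot feels a force proportional to (load - goal_i) and
        # adapts its goal to reduce that force.
        goals = [g + eta * (load - g) for g in goals]
    return goals
```

Because every robot drifts toward the load's settling point, and the load settles at the mean of the goals, the collective converges on a shared target without any robot ever communicating its goal directly.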

The future scenario envisioned by RETHI and Autodesk is a team of low-cost mobile robot ‘helpers’ that can be pooled or distributed as needed across a dynamic, dangerous off-world construction site or built structure. Able to independently assess and respond to local conditions while in service of a larger goal, they can deploy spontaneously to transport, lift, or adjust any object under the guidance of a single leader. Our next research step is to investigate how each robot within the collective can use force signals and active local object manipulation to move from a poor or non-existent object model to a more accurate and precise estimate, improving its own controllability and performance even while engaged in a shared manipulation task.

Nic Carey is a Principal Research Scientist at Autodesk. Read more about this ongoing research project.
