Publication | Conference on Neural Information Processing Systems 2022
Communicating Natural Programs to Humans and Machines
Four ARC tasks; the goal is to correctly infer the unseen output from the given examples.
This study examines how humans use language to instruct each other to perform specific tasks. Autodesk researchers found that many tasks can be specified in natural language (e.g., “Can you align all the bathroom stalls on this floor?”), even when the required output must be very precise.
Abstract
Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos, Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum
Conference on Neural Information Processing Systems 2022 (Featured Presentation)
The Abstraction and Reasoning Corpus (ARC) is a set of procedural tasks that tests an agent’s ability to flexibly solve novel problems. While most ARC tasks are easy for humans, they are challenging for state-of-the-art AI. What makes it difficult to build intelligent systems that can generalize to novel situations such as ARC? We posit that the answer may lie in a difference of language: while humans readily generate and interpret instructions in a general language, computer systems are shackled to a narrow domain-specific language that they can precisely execute. We present LARC, the Language-complete ARC: a collection of natural language descriptions by a group of human participants who instruct each other on how to solve ARC tasks using language alone, which contains successful instructions for 88% of the ARC tasks. We analyze the collected instructions as ‘natural programs,’ finding that while they resemble computer programs, they are distinct in two ways: first, they contain a wide range of primitives; second, they frequently leverage communicative strategies beyond directly executable code. We demonstrate that these two distinctions prevent current program synthesis techniques from leveraging LARC to its full potential, and we give concrete suggestions on how to build the next generation of program synthesizers.
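The contrast between a natural program and an executable program can be sketched with a toy example. Everything below is illustrative, not taken from the paper: the grid task, the `dsl_program` function, and the instruction text are hypothetical stand-ins for the kinds of artifacts LARC contains.

```python
# A natural program, in the spirit of LARC: free-form instructions one
# human participant gives another. It uses everyday primitives ("find",
# "paint") and is not directly executable.
natural_program = (
    "Find the single colored square in the grid and "
    "paint every cell in its row with that color."
)

def dsl_program(grid):
    """A narrow, directly executable counterpart (hypothetical DSL):
    fill the row containing the unique non-zero cell with its color."""
    out = [row[:] for row in grid]  # copy so the input grid is untouched
    for r, row in enumerate(grid):
        for color in row:
            if color != 0:
                out[r] = [color] * len(row)
                return out
    return out

grid = [
    [0, 0, 0],
    [0, 5, 0],
    [0, 0, 0],
]
print(dsl_program(grid))  # [[0, 0, 0], [5, 5, 5], [0, 0, 0]]
```

The natural program is flexible but ambiguous; the DSL program is precise but brittle. Bridging that gap is what the paper argues next-generation program synthesizers must do.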
Associated Researchers
Yewen Pu
Former Autodesk
Samuel Acquaviva
MIT
Marta Kryven
MIT
Theodoros Sechopoulos
MIT
Catherine Wong
MIT
Gabrielle E Ecanow
MIT
Maxwell Nye
MIT
Michael Henry Tessler
MIT
Joshua B. Tenenbaum
MIT
Related Resources
2025
Towards Certification-Ready Designs: A Research Investigation of Digital Twins for High-Performance Engineering
Development and validation of a digital twin for a sensor-equipped UAV…
2023
ANPL: Towards Natural Programming with Interactive Decomposition
Interactive programming system ensures users can refine generated code…
2022
T-Domino: Exploring Multiple Criteria with Quality-Diversity and the Tournament Dominance Objective
A new ranking system for Multi-Criteria Exploration (MCX) that uses…
2023
Language Model Crossover: Variation through Few-Shot Prompting
Pursuing the insight that language models naturally enable an…