International Conference on Machine Learning 2023

Amortizing Pragmatic Program Synthesis with Rankings

Figure 1: Rather than using the RSA algorithm directly in a synthesizer (left), our approach uses the RSA algorithm to generate a simulated communication dataset of partial rankings. We then distill the dataset of partial rankings into a global pragmatic ranking – a single, total ordering of all programs. This global ranking is then used to build a fast pragmatic synthesizer, which is both effective at communicating with end-users and faster than the RSA synthesizer.
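As a concrete (and purely illustrative) reading of this pipeline, the sketch below aggregates simulated partial rankings into a single global ranking using a simple Borda-style count. The function name and the aggregation rule are assumptions for illustration; the paper's actual distillation procedure may differ.

```python
# Minimal sketch of the distillation step in Figure 1 (hypothetical aggregation).
# Each simulated communication episode yields a partial ranking: a list of
# programs ordered from most to least preferred by the RSA listener for some
# example set. We aggregate these partial rankings with a Borda-style count.
from collections import defaultdict
from typing import Hashable, List, Sequence

def distill_global_ranking(partial_rankings: Sequence[Sequence[Hashable]]) -> List[Hashable]:
    """Aggregate partial rankings into a single total order over programs."""
    scores = defaultdict(float)
    for ranking in partial_rankings:
        n = len(ranking)
        for position, program in enumerate(ranking):
            # Higher-ranked programs earn more points (Borda-style).
            scores[program] += n - position
    # Sort by total score, breaking ties deterministically by name.
    return sorted(scores, key=lambda p: (-scores[p], str(p)))

# Example: three simulated episodes producing partial rankings over programs A-C.
ranks = [["A", "B"], ["A", "C", "B"], ["C", "A"]]
print(distill_global_ranking(ranks))  # ['A', 'C', 'B']
```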

Abstract

In program synthesis, an intelligent system takes in a set of user-generated examples and returns a program that is logically consistent with these examples. The Rational Speech Acts (RSA) framework has been used successfully to build pragmatic program synthesizers that return programs which, in addition to being logically consistent, account for the fact that a user chooses their examples informatively. However, the computational burden of running the RSA algorithm has restricted the application of pragmatic program synthesis to domains with a small number of possible programs. This work presents a novel method of amortizing the RSA algorithm by leveraging a global pragmatic ranking: a single, total ordering of all the hypotheses. We prove that for a pragmatic synthesizer that uses a single demonstration, our global ranking method exactly replicates RSA's ranked responses. We further show empirically that global rankings effectively approximate the full pragmatic synthesizer in an online, multi-demonstration setting. In experiments on two program synthesis domains, our pragmatic ranking method yields orders-of-magnitude speedups over the RSA synthesizer while outperforming the standard, non-pragmatic synthesizer.
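To make the contrast in the abstract concrete, the sketch below implements one round of RSA reasoning over a boolean program-example consistency matrix, alongside a ranking-based synthesizer that simply returns the highest globally ranked consistent program. The toy matrix, uniform priors, and function names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: exact RSA pragmatic listener vs. a fast ranking-based synthesizer.
import numpy as np

def rsa_listener(consistency: np.ndarray) -> np.ndarray:
    """One round of RSA on a boolean consistency matrix M[h, e]
    (program h is logically consistent with example e)."""
    M = consistency.astype(float)
    # Literal listener L0: uniform over programs consistent with each example.
    L0 = M / M.sum(axis=0, keepdims=True)
    # Pragmatic speaker S1: prefers examples that best single out each program.
    S1 = L0 / L0.sum(axis=1, keepdims=True)
    # Pragmatic listener L1: Bayesian inversion of the speaker.
    L1 = S1 / S1.sum(axis=0, keepdims=True)
    return L1

def ranking_synthesizer(consistency, global_rank, example):
    """Return the highest globally ranked program consistent with the example."""
    return next(h for h in global_rank if consistency[h, example])

# Toy domain: 3 programs x 2 examples.
M = np.array([[1, 1],
              [1, 0],
              [0, 1]], dtype=bool)
print(rsa_listener(M)[:, 0])                 # full RSA posterior for example 0
print(ranking_synthesizer(M, [1, 2, 0], 0))  # fast lookup via a global ranking
```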


Associated Researchers

Saujas Vaduguru

Carnegie Mellon University

Priyan Vaithilingam

Harvard University

Elena Glassman

Harvard University

Daniel Fried

Carnegie Mellon University


Related Publications

CAD-LLM: Large Language Model for CAD Generation (2023)

This research presents a method for generating Computer Aided Designs (CAD) using…

Hypothesis Search: Inductive Reasoning with Language Models (2023)

We propose to improve the inductive reasoning ability of LLMs by…

Generating Pragmatic Examples to Train Neural Program Synthesizers (2023)

Using neural networks is a novel way to amortize a synthesizer’s…

ANPL: Towards Natural Programming with Interactive Decomposition (2023)

An interactive programming system ensures users can refine generated code…
