Publication 2024

Evaluating Large Language Models for Material Selection

This publication describes the method used to create the corpus of questions submitted to survey participants and to the LLMs, the experiments used to evaluate the LLMs, and the evaluation metrics used to compare the LLM results to the survey responses.

Abstract

Material selection is a crucial step in conceptual design due to its significant impact on the functionality, aesthetics, manufacturability, and sustainability of the final product. This study investigates the use of Large Language Models (LLMs) for material selection in the product design process and compares the performance of LLMs against expert choices for various design scenarios. By collecting a dataset of expert material preferences, the study provides a basis for evaluating how well LLMs can align with expert recommendations through prompt engineering and hyperparameter tuning. The divergence between LLM and expert recommendations is measured across different model configurations, prompt strategies, and temperature settings, allowing a detailed analysis of the factors that influence the LLMs' effectiveness in recommending materials. The results highlight two failure modes and identify parallel prompting as a useful prompt-engineering method when using LLMs for material selection. The findings further suggest that, while LLMs can provide valuable assistance, their recommendations often vary significantly from those of human experts. This discrepancy underscores the need for further research into how LLMs can be better tailored to replicate expert decision-making in material selection. This work contributes to the growing body of knowledge on how LLMs can be integrated into the design process, offering insights into their current limitations and potential for future improvements.
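The abstract does not include code, but the two ideas it names, parallel prompting and measuring divergence from expert scores, can be sketched briefly. In the sketch below, the material list, the numeric scores, and the `mock_llm_score` stub are all illustrative assumptions, not values or functions from the paper; a real implementation would replace the stub with an actual LLM API call issued once per material.

```python
from statistics import mean

# Hypothetical expert ratings (0-10 scale) for candidate materials in one
# design scenario. These values are illustrative, not the paper's data.
expert_scores = {"steel": 7.2, "aluminium": 8.1, "wood": 3.5, "thermoplastic": 5.0}

def mock_llm_score(design, criterion, material):
    """Stand-in for a real LLM call; returns a fixed illustrative score."""
    fake = {"steel": 6.0, "aluminium": 9.0, "wood": 2.0, "thermoplastic": 6.5}
    return fake[material]

# Parallel prompting: issue one independent query per material, rather than
# asking the model to rate every material in a single combined prompt.
llm_scores = {
    m: mock_llm_score("kitchen utensil", "lightweight", m)
    for m in expert_scores
}

# One simple divergence measure: mean absolute difference between the LLM's
# scores and the expert scores across all candidate materials.
divergence = mean(abs(llm_scores[m] - expert_scores[m]) for m in expert_scores)
print(f"mean absolute divergence: {divergence:.2f}")
```

Repeating this measurement while sweeping temperature or swapping prompt strategies would reproduce, in miniature, the kind of configuration comparison the study describes.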


Associated Researchers

Yash Patawari Jain

Carnegie Mellon University

Christopher McComb

Carnegie Mellon University


Related Publications

Material Prediction For Design Automation Using Graph Representation Learning (Publication, 2022)
Successful material selection is critical in designing and…

Conceptual Design Generation Using Large Language Models (Publication, 2023)
Generating design concepts in product design using Large Language…

What’s In A Name? Evaluating Assembly-Part Semantic Knowledge in Language Models through User-Provided Names in CAD Files (Publication, 2023)
The natural language names designers use in CAD software are a…

HG-CAD: Hierarchical Graph Learning for Material Prediction and Recommendation in Computer-Aided Design (Publication, 2024)
This work presents a new Machine Learning architecture to support…
