Publication | International Conference on Machine Learning and Applications 2022

SimCURL

Simple Contrastive User Representation Learning from Command Sequences

SimCURL learns user representations from a large corpus of unlabeled command sequences. These learned representations are then transferred to multiple downstream tasks that have only limited labels available.

This paper is an effort towards user modeling based on the raw command sequences of Fusion360. Proper encoding of commands is crucial for better understanding user behavior and building intelligent software. With SimCURL, we propose a method for learning representations of these command sequences.


Abstract

SimCURL: Simple Contrastive User Representation Learning from Command Sequences

Hang Chu, Amir Khasahmadi, Karl D.D. Willis, Fraser Anderson, Yaoli Mao, Linh Tran, Justin Matejka, Jo Vermeulen

International Conference on Machine Learning and Applications 2022

User modeling is crucial to understanding user behavior and essential for improving user experience and personalized recommendations. When users interact with software, vast amounts of command sequences are generated through logging and analytics systems. These command sequences contain clues to the users’ goals and intents. However, this data is highly unstructured and unlabeled, making it difficult for standard predictive systems to learn from. We propose SimCURL, a simple yet effective contrastive self-supervised deep learning framework that learns user representation from unlabeled command sequences. Our method introduces a user-session network architecture, as well as session dropout as a novel way of data augmentation. We train and evaluate our method on a real-world command sequence dataset of more than half a billion commands. Our method shows significant improvement over existing methods when the learned representation is transferred to downstream tasks such as experience and expertise classification.
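To make the idea in the abstract concrete, the sketch below illustrates contrastive user representation learning with session dropout as the augmentation, assuming PyTorch. The module names (SessionDropout, UserEncoder), the GRU session encoder, and all hyperparameters are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch: contrastive learning of user embeddings from command sessions,
# with "session dropout" as augmentation. Names and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SessionDropout(nn.Module):
    """Randomly drop whole sessions from a user's session set (assumed augmentation)."""
    def __init__(self, p: float = 0.3):
        super().__init__()
        self.p = p

    def forward(self, sessions: torch.Tensor) -> torch.Tensor:
        # sessions: (num_sessions, session_len) of command token ids
        keep = torch.rand(sessions.size(0)) > self.p
        if not keep.any():                       # always keep at least one session
            keep[torch.randint(sessions.size(0), (1,))] = True
        return sessions[keep]


class UserEncoder(nn.Module):
    """Session-level encoder plus pooling over sessions -> user embedding."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.session_enc = nn.GRU(dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, sessions: torch.Tensor) -> torch.Tensor:
        x = self.embed(sessions)                 # (S, L, D)
        _, h = self.session_enc(x)               # h: (1, S, D)
        user = h.squeeze(0).mean(dim=0)          # pool sessions -> (D,)
        return F.normalize(self.proj(user), dim=-1)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE loss: two views of the same user are the positive pair."""
    logits = z1 @ z2.t() / tau                   # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    vocab_size, users, n_sessions, sess_len = 1000, 8, 6, 20
    batch = torch.randint(vocab_size, (users, n_sessions, sess_len))

    aug, enc = SessionDropout(p=0.3), UserEncoder(vocab_size)
    # Two augmented views per user feed the contrastive objective.
    z1 = torch.stack([enc(aug(u)) for u in batch])
    z2 = torch.stack([enc(aug(u)) for u in batch])
    print(f"contrastive loss: {info_nce(z1, z2).item():.4f}")
```

The learned encoder would then be frozen or fine-tuned on the labeled downstream tasks mentioned above, such as experience and expertise classification.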

