
SimCURL

Simple Contrastive User Representation Learning from Command Sequences

SimCURL learns user representations from a large corpus of unlabeled command sequences. These learned representations are then transferred to multiple downstream tasks that have only limited labels available.
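To make the transfer setup above concrete, here is a hedged sketch, not the paper's implementation: freeze a pretrained user encoder and fit a small classifier on its embeddings for a downstream task with few labels. The encoder output dimension, dataset size, and task labels below are illustrative assumptions.

```python
# Sketch of a linear probe on frozen user representations (all values hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
user_embeddings = rng.normal(size=(200, 128))    # stand-in for frozen SimCURL user features
expertise_labels = rng.integers(0, 2, size=200)  # e.g. novice vs. expert (hypothetical labels)

# Fit a lightweight classifier on top of the fixed representations.
probe = LogisticRegression(max_iter=1000).fit(user_embeddings, expertise_labels)
print("train accuracy:", probe.score(user_embeddings, expertise_labels))
```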

This paper is an effort toward user modeling based on the raw command sequences of Fusion 360. Proper encoding of commands is crucial for understanding user behavior and building intelligent software. SimCURL proposes a method for learning representations of these command sequences.
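As a minimal illustration of what "encoding raw command sequences" can look like in practice, the sketch below maps command strings from a logged session to integer tokens that a sequence model could consume. The command names and vocabulary scheme are hypothetical and not taken from the paper.

```python
# Illustrative only: tokenize raw command logs as integer id sequences.
from collections import defaultdict

class CommandVocab:
    """Maps raw command strings to integer ids, reserving 0 for padding."""
    def __init__(self):
        # New commands get the next free id the first time they are seen.
        self.cmd_to_id = defaultdict(lambda: len(self.cmd_to_id) + 1)

    def encode(self, commands):
        return [self.cmd_to_id[c] for c in commands]

vocab = CommandVocab()
session = ["SketchCreate", "ExtrudeCommand", "FilletCommand", "SaveCommand"]  # hypothetical commands
print(vocab.encode(session))  # [1, 2, 3, 4]
```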


Abstract

SimCURL: Simple Contrastive User Representation Learning from Command Sequences

Hang Chu, Amir Khasahmadi, Karl D.D. Willis, Fraser Anderson, Yaoli Mao, Linh Tran, Justin Matejka, Jo Vermeulen

International Conference on Machine Learning and Applications 2022

User modeling is crucial to understanding user behavior and essential for improving user experience and personalized recommendations. When users interact with software, vast amounts of command sequences are generated through logging and analytics systems. These command sequences contain clues to the users’ goals and intents. However, these data modalities are highly unstructured and unlabeled, making it difficult for standard predictive systems to learn from. We propose SimCURL, a simple yet effective contrastive self-supervised deep learning framework that learns user representation from unlabeled command sequences. Our method introduces a user-session network architecture, as well as session dropout as a novel way of data augmentation. We train and evaluate our method on a real-world command sequence dataset of more than half a billion commands. Our method shows significant improvement over existing methods when the learned representation is transferred to downstream tasks such as experience and expertise classification.
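The sketch below illustrates the two ideas named in the abstract under stated assumptions: session dropout, which removes whole sessions from a user's history to create two augmented views of the same user, and a SimCLR-style contrastive loss that pulls the two views together and pushes different users apart. Tensor shapes, the drop probability, the temperature, and the simplified symmetric cross-entropy form of the loss are assumptions for illustration; this is not the authors' implementation.

```python
# Hedged sketch of session dropout and a simplified contrastive objective.
import torch
import torch.nn.functional as F

def session_dropout(sessions: torch.Tensor, p: float = 0.3) -> torch.Tensor:
    """sessions: (batch, num_sessions, feat). Zero out whole sessions at random."""
    keep = (torch.rand(sessions.shape[:2], device=sessions.device) > p).float()
    return sessions * keep.unsqueeze(-1)

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) user embeddings from two augmented views of the same users."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    # Matching views of the same user sit on the diagonal; other users act as negatives.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Usage with a placeholder user encoder `f` and a batch of session tensors `x`:
# v1, v2 = session_dropout(x), session_dropout(x)
# loss = contrastive_loss(f(v1), f(v2))
```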

