This program is tentative and subject to change.

Thu 24 Oct 2024 14:40 - 15:00 at IBR East - Machine Learning and Programming Languages

Large language models have changed the landscape of code completion, but the techniques by which prompts are constructed and model responses are handled fail to fully leverage the wealth of structured semantic information available to modern language servers and, through them, IDEs. We contend that AIs need IDEs, too! This paper integrates LLMs into the Hazel Assistant, which uses static retrieval and error correction techniques driven by the Hazel Language Server. The Hazel Language Server identifies the types and typing contexts of holes in program sketches, with total error correction ensuring that a semantically meaningful program sketch is always available. This allows for semantic contextualization with information that is not lexically local to the cursor but is semantically local to the programmer's intent as expressed through the type system. Contextualized model completions are then iteratively refined via a dialog with the language server, providing static error correction. To validate these methods, we present MVUBench, a dataset of MVU (model-view-update) applications written in Hazel and TypeScript, particularly suited to contrasting methods of static retrieval and contextualization. Further, we port our methods and test suite to TypeScript to validate the applicability of these methods to higher-resource mainstream languages.
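
To make the workflow the abstract describes concrete, here is a minimal TypeScript sketch of the general shape of such a contextualize-and-repair loop: ask a language server for the expected type and typing context at a hole, build a prompt from that semantic context, and iteratively repair the completion using static error feedback. Every name here (LanguageServer, contextAtHole, llmComplete, fillHole, the `?0`-style hole syntax) is a hypothetical illustration, not the paper's actual API.

```typescript
// Illustrative sketch only; all types and functions below are assumptions.

interface HoleContext {
  expectedType: string;             // the type the hole must inhabit
  bindings: Record<string, string>; // in-scope variables and their types
  relevantDefs: string[];           // definitions retrieved by type relevance
}

interface Diagnostic {
  message: string;
}

interface LanguageServer {
  // Expected type and typing context at a hole in the program sketch.
  contextAtHole(sketch: string, holeId: number): HoleContext;
  // Static diagnostics for a candidate program.
  check(program: string): Diagnostic[];
}

// Placeholder for an LLM call; in practice this wraps a completion API.
declare function llmComplete(prompt: string): Promise<string>;

function buildPrompt(ctx: HoleContext, sketch: string, errors: Diagnostic[]): string {
  const bindingLines = Object.entries(ctx.bindings)
    .map(([name, ty]) => `  ${name} : ${ty}`)
    .join("\n");
  return [
    `Fill the hole so that it has type: ${ctx.expectedType}`,
    `In-scope bindings:\n${bindingLines}`,
    `Relevant definitions:\n${ctx.relevantDefs.join("\n")}`,
    errors.length > 0
      ? `The previous attempt had static errors:\n${errors.map((e) => e.message).join("\n")}`
      : "",
    `Program sketch:\n${sketch}`,
  ].join("\n\n");
}

// Iteratively refine a completion using static error feedback, accepting the
// first candidate that checks cleanly. Assumes holes are written as ?0, ?1, ...
async function fillHole(
  ls: LanguageServer,
  sketch: string,
  holeId: number,
  maxRounds = 3
): Promise<string> {
  const ctx = ls.contextAtHole(sketch, holeId);
  let errors: Diagnostic[] = [];
  let candidate = "";
  for (let round = 0; round < maxRounds; round++) {
    candidate = await llmComplete(buildPrompt(ctx, sketch, errors));
    errors = ls.check(sketch.replace(`?${holeId}`, candidate));
    if (errors.length === 0) break; // statically valid: accept
  }
  return candidate;
}
```

Note that the prompt is built from type-directed context rather than from text near the cursor, which is the contrast the abstract draws between lexical and semantic locality.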

Thu 24 Oct

Displayed time zone: Pacific Time (US & Canada)

13:40 - 15:20
Machine Learning and Programming Languages (OOPSLA 2024) at IBR East
13:40 (20m) Talk
CYCLE: Learning to Self-Refine the Code Generation
OOPSLA 2024
Yangruibo Ding (Columbia University), Marcus J. Min (Columbia University), Gail Kaiser (Columbia University), Baishakhi Ray (Columbia University, New York; AWS AI Lab)
14:00 (20m) Talk
Evaluating the Effectiveness of Deep Learning Models for Foundational Program Analysis Tasks
OOPSLA 2024
Qian Chen (Nanjing University), Chenyang Yu (Department of Computer Science and Technology, Nanjing University), Ruyan Liu (Department of Computer Science and Technology, Nanjing University), Chi Zhang (Nanjing University), Yu Wang (Nanjing University), Ke Wang, Ting Su (East China Normal University), Linzhang Wang (Nanjing University)
14:20 (20m) Talk
Knowledge Transfer from High-Resource to Low-Resource Programming Languages for Code LLMs
OOPSLA 2024
Federico Cassano (Northeastern University), John Gouwar (Northeastern University), Francesca Lucchetti (Northeastern University), Claire Schlesinger (Northeastern University), Anders Freeman (Wellesley College), Carolyn Jane Anderson (Wellesley College), Molly Q Feldman (Oberlin College), Michael Greenberg (Stevens Institute of Technology), Abhinav Jangda (Microsoft Research), Arjun Guha (Northeastern University; Roblox)
Pre-print
14:40 (20m) Talk
Statically Contextualizing Large Language Models with Typed Holes
OOPSLA 2024
Andrew Blinn (University of Michigan), Xiang Li (University of Michigan, Ann Arbor), June Hyung Kim (University of Michigan), Cyrus Omar (University of Michigan)
15:00 (20m) Talk
WhiteFox: White-box Compiler Fuzzing Empowered by Large Language Models
OOPSLA 2024
Chenyuan Yang (University of Illinois at Urbana-Champaign), Yinlin Deng (University of Illinois at Urbana-Champaign), Runyu Lu (Huazhong University of Science and Technology), Jiayi Yao (The Chinese University of Hong Kong, Shenzhen), Jiawei Liu (University of Illinois at Urbana-Champaign), Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign), Lingming Zhang (University of Illinois at Urbana-Champaign)