This program is tentative and subject to change.

Thu 24 Oct 2024 12:00 - 12:20 at San Gabriel - Compilers and Optimisation 1

Mainstream compilers perform a multitude of analyses and optimizations on the given input program. Each analysis (such as points-to analysis) may generate a program-abstraction (such as a points-to graph). Each optimization is typically composed of multiple alternating phases of inspection of such program-abstractions and transformations of the program. Upon transformation of the program, the program-abstractions generated by various analyses may become inconsistent with the modified program. Consequently, the correctness of the downstream inspection (and consequent transformation) phases cannot be ensured until the relevant program-abstractions are stabilized; that is, until the program-abstractions are either invalidated or made consistent with the modified program. Existing compiler frameworks generally do not perform automated stabilization of program-abstractions; instead, they leave it to compiler pass writers to handle the complex tasks of identifying which program-abstractions need to be stabilized, the points at which stabilization must be performed, and the exact procedure of stabilization. In this article, we address these challenges by presenting the design and implementation of a novel compiler-design framework called Homeostasis.
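As a concrete illustration of the burden described above, the sketch below (plain Java with entirely hypothetical class and method names; it is not IMOP's or any real framework's API) shows a pass writer manually deciding when a points-to abstraction has become stale and recomputing it before the next inspection.

```java
import java.util.HashMap;
import java.util.Map;

class PointsToGraph {
    private final Map<String, String> pointees = new HashMap<>();
    void addEdge(String pointer, String target) { pointees.put(pointer, target); }
    String pointee(String pointer) { return pointees.get(pointer); }
}

class ManualPass {
    // Program-abstraction built by an earlier points-to analysis.
    private PointsToGraph ptg = new PointsToGraph();
    private boolean ptgValid = true;

    // Transformation phase: rewrites the program and (manually) marks the
    // abstraction as stale -- a step that is easy to forget and hard to place.
    void transform() {
        ptgValid = false;
    }

    // Inspection phase: the pass writer must also pick the stabilization point
    // and the stabilization procedure (here, full recomputation).
    String inspect(String pointer) {
        if (!ptgValid) {
            ptg = recomputePointsTo();
            ptgValid = true;
        }
        return ptg.pointee(pointer);
    }

    // Stand-in for re-running a real points-to analysis on the modified program.
    private PointsToGraph recomputePointsTo() {
        PointsToGraph fresh = new PointsToGraph();
        fresh.addEdge("p", "x");
        return fresh;
    }

    public static void main(String[] args) {
        ManualPass pass = new ManualPass();
        pass.transform();
        System.out.println("p points to " + pass.inspect("p"));
    }
}
```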

Homeostasis automatically captures all the program changes performed by each transformation phase and later, if needed, triggers the required stabilization using the captured information. We also provide a formal description of Homeostasis and a proof of its correctness. To assess the feasibility of using Homeostasis in compilers for parallel programs, we have implemented our proposed idea in IMOP, a compiler framework for OpenMP C programs. Furthermore, to illustrate the benefits of using Homeostasis, we have implemented a set of standard data-flow passes and a set of involved optimizations that remove redundant barriers in OpenMP C programs. None of these implementations in IMOP required any additional lines of code for stabilization of the program-abstractions. We present an evaluation in the context of these optimizations and analyses, which demonstrates that Homeostasis is efficient and easy to use.
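The following sketch (again with hypothetical names, not the actual Homeostasis or IMOP API) illustrates the general idea the abstract describes: mutations to the program are recorded in a central change log, and a stale abstraction is stabilized on demand, just before the next inspection, with no stabilization code in the transformation itself.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Central log that captures every program change made by a transformation phase.
class ChangeLog {
    private final Deque<String> pending = new ArrayDeque<>();
    void record(String change) { pending.add(change); }
    boolean hasPending() { return !pending.isEmpty(); }
    Deque<String> drain() {
        Deque<String> drained = new ArrayDeque<>(pending);
        pending.clear();
        return drained;
    }
}

// A program-abstraction that can stabilize itself from the captured changes,
// either by incremental repair (as here) or by invalidation and recomputation.
class PointsToAbstraction {
    private final Map<String, String> pointees = new HashMap<>();
    void stabilize(Deque<String> changes) {
        for (String change : changes) {
            String[] parts = change.split("=");  // changes encoded as "pointer=target"
            pointees.put(parts[0], parts[1]);
        }
    }
    String pointee(String pointer) { return pointees.get(pointer); }
}

class SelfStabilizingDemo {
    private static final ChangeLog LOG = new ChangeLog();
    private static final PointsToAbstraction PTA = new PointsToAbstraction();

    // Mutating IR operation: the change is captured automatically, so the
    // transformation phase contains no stabilization code of its own.
    static void rewriteAssignment(String pointer, String target) {
        LOG.record(pointer + "=" + target);
    }

    // Inspection entry point: stabilization is triggered lazily, only when the
    // abstraction is actually consulted after pending changes.
    static String query(String pointer) {
        if (LOG.hasPending()) {
            PTA.stabilize(LOG.drain());
        }
        return PTA.pointee(pointer);
    }

    public static void main(String[] args) {
        rewriteAssignment("p", "y");
        System.out.println("p points to " + query("p"));
    }
}
```

One appeal of this lazy scheme is that several back-to-back transformations trigger at most one stabilization, performed only when an abstraction is next consulted.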

Thu 24 Oct

Displayed time zone: Pacific Time (US & Canada)

10:40 - 12:20: Compilers and Optimisation 1 (OOPSLA 2024) at San Gabriel

10:40 (20m, Talk): Compilation of Shape Operators on Sparse Arrays (OOPSLA 2024)
Alexander J Root (Stanford University), Bobby Yan (Stanford University), Peiming Liu (Google Inc), Christophe Gyurgyik (Stanford University), Aart Bik (Google, Inc.), Fredrik Kjolstad (Stanford University)
Pre-print

11:00 (20m, Talk): Compiler Support for Sparse Tensor Convolutions (OOPSLA 2024)
Peiming Liu (Google Inc), Alexander J Root (Stanford University), Anlun Xu (Google), Yinying Li (Google), Fredrik Kjolstad (Stanford University), Aart Bik (Google, Inc.)

11:20 (20m, Talk): Compiling Recurrences over Dense and Sparse Arrays (OOPSLA 2024)
Shiv Sundram (Stanford University), Muhammad Usman Tariq (Stanford University), Fredrik Kjolstad (Stanford University)

11:40 (20m, Talk): Fully Verified Instruction Scheduling (OOPSLA 2024)
Ziteng Yang (Georgia Institute of Technology), Jun Shirako (Georgia Institute of Technology), Vivek Sarkar (Georgia Institute of Technology)

12:00 (20m, Talk): Homeostasis: Design and Implementation of a Self-Stabilizing Compiler (TOPLAS) (OOPSLA 2024)
Aman Nougrahiya (IIT Madras), V Krishna Nandivada (IIT Madras)
Link to publication