This program is tentative and subject to change.

Mon 21 Oct 2024 09:50 - 10:10 at Pasadena - Morning Session Chair(s): Jens Palsberg

Parallel programming is essential to achieving high performance, and numerous works have combined programming languages, runtimes, and compilers to help deploy effective high-performance applications at scale. Many recent programming models allow the programmer to specify a task graph representing the application. For example, the depend clause in OpenMP 4.0 allows programmers to create arbitrary dependences between OpenMP tasks. Habanero Data-Driven Tasks [11] and OCR Event-Driven Tasks and Events [1] provide similar capabilities. While these systems enable the creation of task graphs, they exhibit varying degrees of separation of concerns, that is, decoupling of program correctness from performance.

Concurrent Collections (CnC) is a parallel programming model with an execution semantics influenced by dynamic dataflow, stream processing, and tuple spaces [4]. CnC was developed in part to make parallel programming accessible to non-expert developers. It relies on users to explicitly specify the data and control dependences between tasks, which in turn allows the generation [10] of high-performance parallel programs to be automated for a variety of targets [5], from distributed computers to GPUs [7] and FPGAs [6]. A CnC program has a deterministic semantics: any implementation that follows the dependences specified in the CnC program will produce the same outputs. This deterministic semantics, and the separation of concerns between the domain experts who describe the core application dependences and the tuning experts who map the program to particular hardware, are the primary characteristics that differentiate CnC from other parallel programming models.

In this talk, we will overview the fundamental concepts of CnC, its history, and its achievements in terms of programmability and performance. We will describe several iterations of CnC implementations, including CnC execution models on top of Java [3] and C++ [12,11], as well as several domain-specific uses of CnC, from general-purpose programming [2] to GPGPU-centric [10,7], exascale-focused [8], and large, heterogeneous applications [9].
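As a concrete illustration of the task-graph style mentioned in the abstract (not part of the talk itself), the following minimal C sketch uses OpenMP's depend clause to order two tasks through a producer/consumer dependence on a shared variable. The variable name x and the use of an OpenMP-capable compiler (e.g. gcc -fopenmp) are assumptions for the example only.

    /* Minimal sketch: a two-node task graph built with OpenMP's depend clause. */
    #include <stdio.h>

    int main(void) {
        int x = 0;

        #pragma omp parallel
        #pragma omp single
        {
            /* Producer task: writes x, declared as an "out" dependence. */
            #pragma omp task depend(out: x)
            x = 42;

            /* Consumer task: reads x; the "in" dependence on x forces it
               to run after the producer task completes. */
            #pragma omp task depend(in: x)
            printf("x = %d\n", x);
        }
        return 0;
    }

The runtime schedules the two tasks according to the declared dependences rather than their textual order, which is the same task-graph idea that CnC expresses through explicitly specified data and control dependences.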


Mon 21 Oct

Displayed time zone: Pacific Time (US & Canada)

09:00 - 10:30
Morning Session (VIVEKFEST) at Pasadena
Chair(s): Jens Palsberg University of California, Los Angeles (UCLA)
09:00
10m
Talk
Welcome (Raj Barik/Rajiv Gupta/Jens Palsberg)
VIVEKFEST
Raj Barik Gitar Co., Rajiv Gupta University of California at Riverside (UCR), Jens Palsberg University of California, Los Angeles (UCLA)
09:10
20m
Research paper
Scalable Small Message Aggregation on Modern Interconnects
VIVEKFEST
09:30
20m
Talk
Michael Hind (IBM Research)
VIVEKFEST

09:50
20m
Talk
Concurrent Collections: An Overview
VIVEKFEST
Kathleen Knobe Rice University, Zoran Budimlic, Robert Harrison, Mohammad Mahdi Javanmard Stony Brook University, NY, USA, Louis-Noël Pouchet Colorado State University
10:10
20m
Research paper
Hidden assumptions in static verification of data-race free GPU programs
VIVEKFEST
Tiago Cogumbreiro University of Massachusetts Boston, Julien Lange Royal Holloway, University of London