The ACM SIGPLAN International Conference on Software Language Engineering (SLE) is devoted to the principles of software languages: their design, their implementation, and their evolution.

With the ubiquity of computers, software has become the dominating intellectual asset of our time. In turn, this software depends on software languages, namely the languages it is written in, the languages used to describe its environment, and the languages driving its development process. Given that everything depends on software and that software depends on software languages, it seems fair to say that for many years to come, everything will depend on software languages.

Software language engineering (SLE) is the discipline of engineering languages and their tools required for the creation of software. It abstracts from the differences between programming languages, modelling languages, and other software languages, and emphasizes the engineering facet of the creation of such languages, that is, the establishment of the scientific methods and practices that enable the best results. While SLE is certainly driven by its metacircular character (software languages are engineered using software languages), SLE is not self-satisfying: its scope extends to the engineering of languages for all and everything.

Like its predecessors, the 17th edition of the SLE conference, SLE 2024, will bring together researchers from different areas united by their common interest in the creation, capture, and tooling of software languages. It overlaps with traditional conferences on the design and implementation of programming languages, model-driven engineering, and compiler construction, and emphasizes the fusion of their communities. To foster the latter, SLE traditionally fills a two-day program with a single track, with the only temporal overlap occurring between co-located events.

SLE 2024 will be co-located with SPLASH 2024 and take place in Pasadena, California, United States.

Program

Sun 20 Oct

Displayed time zone: Pacific Time (US & Canada)

09:00 - 10:30
SLE Welcome and Keynote (SLE) at IBR East
Chair(s): Peter D. Mosses Delft University of Technology and Swansea University
09:00
15m
Day opening
SLE Welcome
SLE
General chair: Ralf Lämmel, Universität Koblenz; Program co-chairs: Juliana Alves Pereira, PUC-Rio, and Peter D. Mosses, Delft University of Technology and Swansea University
09:15
75m
Talk
There Is Only One Time in Software (Language) Engineering!
SLE
Keynote speaker: Benoit Combemale, University of Rennes, Inria, CNRS, IRISA
DOI File Attached
10:30 - 11:00
Coffee Break (Catering) at Foyer
10:30
30m
Coffee break
Break
Catering

11:00 - 12:30
Software Language Integration and Composition (SLE) at IBR East
Chair(s): Juliana Alves Pereira Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
11:00
30m
Talk
Cooperative Specification via Composition Control
SLE
Christopher Esterhuyse University of Amsterdam, L. Thomas van Binsbergen University of Amsterdam
DOI Pre-print
11:30
30m
Talk
Aconite: Towards Generating Sirius-Based Graphical Editors from Annotated Metamodels
SLE
Nathan Richardson University of York, Dimitris Kolovos University of York, Antonio Garcia-Dominguez University of York
DOI
12:00
30m
Talk
Towards an In-context LLM-based Approach for Automating the Definition of Model Views
SLE
James Pontes Miranda IMT Atlantique, LS2N (UMR CNRS 6004), Hugo Bruneliere IMT Atlantique, LS2N (UMR CNRS 6004), Massimo Tisi IMT Atlantique, LS2N (UMR CNRS 6004), Gerson Sunyé Nantes Université, LS2N (UMR CNRS 6004)
DOI
12:30 - 14:00
12:30
90m
Lunch
Lunch
Catering

14:00 - 15:30
Software Language Design and Implementation I (SLE) at IBR East
Chair(s): L. Thomas van Binsbergen University of Amsterdam
14:00
30m
Talk
Concrete Syntax Metapatterns
SLE
Luka Miljak Delft University of Technology, Casper Bach Poulsen Delft University of Technology, Rosilde Corvino TNO-ESI
DOI
14:30
30m
Talk
Efficient Demand Evaluation of Fixed-Point Attributes Using Static Analysis
SLE
Idriss Riouak Department of Computer Science, Lund University, Sweden, Niklas Fors Lund University, Jesper Öqvist Cognibotics, Görel Hedin Lund University, Christoph Reichenbach Lund University
DOI Pre-print
15:00
30m
Talk
The Design of a Self-Compiling C Transpiler Targeting POSIX Shell
SLE
Laurent Huberdeau Université de Montréal, Cassandre Hamel Université de Montréal, Stefan Monnier Université de Montréal, Marc Feeley Université de Montréal
DOI
15:30 - 16:00
Coffee Break (Catering) at Foyer
15:30
30m
Coffee break
Break
Catering

16:00 - 17:30
SLE Body of Knowledge (SLEBoK) (SLE) at IBR East
Chair(s): Eric Van Wyk Department of Computer Science and Engineering, University of Minnesota
16:00
30m
Talk
DSLs in Racket: You Want It How, Now?
SLE
Yunjeong Lee National University of Singapore, Kiran Gopinathan National University of Singapore, Ziyi Yang National University of Singapore, Matthew Flatt University of Utah, Ilya Sergey National University of Singapore
DOI
16:30
30m
Talk
Design of Software Representation Languages: a Historical Perspective
SLE
Anthony I. (Tony) Wasserman Software Methods and Tools
DOI
17:00
30m
Talk
The Linguistic Theory Behind Blockly Languages
SLE
Friedrich Steimann Fernuniversität in Hagen, Robin Stunic Fernuniversität in Hagen
DOI

Mon 21 Oct

Displayed time zone: Pacific Time (US & Canada)

09:00 - 10:30
Empirical Studies and Experience Reports (SLE) at IBR East
Chair(s): Benoit Combemale University of Rennes, Inria, CNRS, IRISA
09:00
30m
Talk
Trading Runtime for Energy Efficiency
SLE
Simão Cunha University of Minho, Luís Silva University of Minho, João Saraiva University of Minho, João Paulo Fernandes LIACC, Universidade do Porto, Porto, Portugal
DOI
09:30
30m
Talk
Cloud Programming Languages and Infrastructure From Code: An Empirical Study
SLE
Georg Simhandl University of Vienna, Uwe Zdun University of Vienna
DOI
10:00
30m
Talk
Statically and Dynamically Delayed Sampling for Typed Probabilistic Programming Languages
SLE
Gizem Caylak KTH Royal Institute of Technology, Daniel Lundén Oracle, Viktor Senderov Institut de Biologie de l'École Normale Supérieure, David Broman KTH Royal Institute of Technology
DOI
10:30 - 11:00
Coffee Break (Catering) at Foyer
10:30
30m
Coffee break
Break
Catering

11:00 - 12:30
Software Language Design and Implementation II (SLE) at IBR East
Chair(s): Jeff Smits Delft University of Technology
11:00
30m
Talk
Type Checking with Rewriting Rules
SLE
Dimi Racordon EPFL, LAMP
DOI
11:30
30m
Talk
Trieste: A C++ DSL for Flexible Tree Rewriting (Tool paper)
SLE
Sylvan Clebsch Microsoft Azure Research, Matilda Blomqvist Uppsala University, Elias Castegren Uppsala University, Matthew Johnson Azure Research, Microsoft, Matthew J. Parkinson Microsoft Azure Research
DOI
12:00
30m
Talk
Method Bundles (New Ideas/Vision paper)
SLE
Dimi Racordon EPFL, LAMP, Dave Abrahams Adobe
DOI
12:30 - 14:00
12:30
90m
Lunch
Lunch
Catering

14:00 - 15:30
Analysis and Optimization (SLE) at IBR East
Chair(s): Nico Jansen Software Engineering, RWTH Aachen University
14:00
30m
Talk
Trellis: A Domain-Specific Language for Hidden Markov Models with Sparse Transitions
SLE
Lars Hummelgren KTH Royal Institute of Technology, Viktor Palmkvist KTH Royal Institute of Technology, Linnea Stjerna KTH Royal Institute of Technology, Xuechun Xu KTH Royal Institute of Technology, Joakim Jalden KTH Royal Institute of Technology, David Broman KTH Royal Institute of Technology
DOI
14:30
30m
Talk
Reducing Write Barrier Overheads for Orthogonal Persistence
SLE
Yilin Zhang University of Tokyo, Omkar Dilip Dhawal Indian Institute of Technology Madras, V Krishna Nandivada IIT Madras, Shigeru Chiba University of Tokyo, Tomoharu Ugawa University of Tokyo
DOI
15:00
30m
Talk
Bugfox: A Trace-based Analyzer for Localizing the Cause of Software Regression in JavaScript
SLE
Yuefeng Hu The University of Tokyo, Hiromu Ishibe The University of Tokyo, Feng Dai The University of Tokyo, Tetsuro Yamazaki University of Tokyo, Shigeru Chiba University of Tokyo
DOI Pre-print
15:30 - 16:00
Coffee Break (Catering) at Foyer
15:30
30m
Coffee break
Break
Catering

16:00 - 17:30
Panel Discussion and Awards (SLE) at IBR East
Chair(s): Ralf Lämmel Universität Koblenz
16:00
60m
Panel
AI Effects on Research and Education: A Programming and Software Language Perspective
SLE
Organizer: Ralf Lämmel, Universität Koblenz; Panelists: Shigeru Chiba, University of Tokyo; Felienne Hermans, Vrije Universiteit Amsterdam; Bernhard Rumpe, RWTH Aachen University; João Saraiva, University of Minho
DOI
17:00
15m
Awards
Award Presentations
SLE

17:15
15m
Day closing
SLE Closing
SLE

Accepted Papers

  • Aconite: Towards Generating Sirius-Based Graphical Editors from Annotated Metamodels (DOI)
  • Bugfox: A Trace-based Analyzer for Localizing the Cause of Software Regression in JavaScript (DOI, Pre-print)
  • Cloud Programming Languages and Infrastructure From Code: An Empirical Study (DOI)
  • Concrete Syntax Metapatterns (DOI)
  • Cooperative Specification via Composition Control (DOI, Pre-print)
  • Design of Software Representation Languages: a Historical Perspective (DOI)
  • DSLs in Racket: You Want It How, Now? (DOI)
  • Efficient Demand Evaluation of Fixed-Point Attributes Using Static Analysis (DOI, Pre-print)
  • Method Bundles (New Ideas/Vision paper) (DOI)
  • Reducing Write Barrier Overheads for Orthogonal Persistence (DOI)
  • Statically and Dynamically Delayed Sampling for Typed Probabilistic Programming Languages (DOI)
  • The Design of a Self-Compiling C Transpiler Targeting POSIX Shell (DOI)
  • The Linguistic Theory Behind Blockly Languages (DOI)
  • Towards an In-context LLM-based Approach for Automating the Definition of Model Views (DOI)
  • Trading Runtime for Energy Efficiency (DOI)
  • Trellis: A Domain-Specific Language for Hidden Markov Models with Sparse Transitions (DOI)
  • Trieste: A C++ DSL for Flexible Tree Rewriting (Tool paper) (DOI)
  • Type Checking with Rewriting Rules (DOI)

Call for Papers

Topics of Interest

SLE covers software language engineering in general, rather than engineering a specific software language. Topics of interest include, but are not limited to:

  • Software Language Design and Implementation
    • Approaches to and methods for language design
    • Static semantics (e.g., design rules, well-formedness constraints)
    • Techniques for specifying behavioral/executable semantics
    • Generative approaches (incl. code synthesis, compilation)
    • Meta-languages, meta-tools, language workbenches
  • Validation of Software Language Tools and Implementations
    • Verification and formal methods for language tools and implementations
    • Testing techniques for language tools and implementations
    • Simulation techniques for language tools and implementations
  • Software Language Maintenance
    • Software language reuse
    • Language evolution
    • Language families and variability, language and software product lines
  • Software Language Integration and Composition
    • Coordination of heterogeneous languages and tools
    • Mappings between languages (incl. transformation languages)
    • Traceability between languages
    • Deployment of languages to different platforms
  • Domain-Specific Approaches for Any Aspects of SLE (analysis, design, implementation, validation, maintenance)
  • Empirical Studies and Experience Reports of Tools
    • User studies evaluating usability
    • Performance benchmarks
    • Industrial applications
  • Synergies between Language Engineering and Emerging/Promising Research Areas
    • AI and ML language engineering (e.g., ML compiler testing, code classification)
    • Quantum language engineering (e.g., language design for quantum machines)
    • Language engineering for cyber-physical systems, IoT, digital twins, etc.
    • Socio-technical systems and language engineering (e.g., language evolution to adapt to social requirements)
    • Etc.

Types of Submissions

SLE accepts the following types of papers:

  • Research papers: These are “traditional” papers detailing research contributions to SLE. Papers may range from 6 to 12 pages in length and may optionally include 2 further pages of bibliography/appendices. Papers will be reviewed with an understanding that some results do not need 12 full pages and may be fully described in fewer pages.

  • New ideas/vision papers: These papers may describe new, unconventional software language engineering research positions or approaches that depart from standard practice. They can describe well-defined research ideas that are at an early stage of investigation. They could also provide new evidence to challenge common wisdom, present new unifying theories about existing SLE research that provide novel insight or that can lead to the development of new technologies or approaches, or apply SLE technology to radically new application areas. New ideas/vision papers must not exceed 5 pages and may optionally include 1 further page of bibliography/appendices.

  • SLE Body of Knowledge: The SLE Body of Knowledge (SLEBoK) is a community-wide effort to provide a unique and comprehensive description of the concepts, best practices, tools, and methods developed by the SLE community. These papers can focus on, but are not limited to, methods, techniques, best practices, and teaching approaches. Papers in this category can have up to 20 pages, including bibliography/appendices.

  • Tool papers: These papers focus on the tooling aspects often forgotten or neglected in research papers. A good tool paper focuses on practical insights that will likely be useful to other implementers or users in the future. Any of the SLE topics of interest are appropriate areas for tool demonstrations. Submissions must not exceed 5 pages and may optionally include 1 further page of bibliography/appendices. They may optionally include an appendix with a demo outline/screenshots and/or a short video/screencast illustrating the tool.

Workshops: Workshops will be organized by SPLASH. Please inform us and contact the SPLASH organizers if you would like to organize a workshop of interest to the SLE audience. Information on how to submit workshops can be found on the SPLASH 2024 Website.

Submission

SLE 2024 has a single submission round for papers, including a rebuttal phase in which all authors of research papers will have the opportunity to respond to the reviews of their submissions.

Authors of accepted research papers will be invited to submit artifacts.

Format

Submissions have to use the ACM SIGPLAN Conference Format “acmart”; please make sure that you always use the latest ACM SIGPLAN acmart LaTeX template, and that the document class definition is \documentclass[sigplan,anonymous,review]{acmart}. Do not make any changes to this format!
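
For orientation, a minimal and purely illustrative acmart skeleton matching the class definition above might look as follows; the title, author, affiliation, and bibliography file name are placeholders, and the anonymous option hides the author block during review:

    \documentclass[sigplan,anonymous,review]{acmart}

    \begin{document}

    \title{Paper Title (placeholder)}

    % Author details are suppressed in the PDF by the 'anonymous' class option.
    \author{Jane Doe}
    \affiliation{\institution{Example University}\country{Country}}
    \email{jane.doe@example.org}

    % In acmart, the abstract must appear before \maketitle.
    \begin{abstract}
    One-paragraph abstract (placeholder).
    \end{abstract}

    \maketitle

    % Paper body...

    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references} % placeholder .bib file

    \end{document}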

Ensure that your submission is legible when printed on a black and white printer. In particular, please check that colors remain distinct and font sizes in figures and tables are legible.

To increase fairness in reviewing, a double-blind review process has become standard across SIGPLAN conferences. Accordingly, SLE will follow the double-blind process. Author names and institutions must be omitted from submitted papers, and references to the authors’ own related work should be in the third person. No other changes are necessary, and authors will not be penalized if reviewers are able to infer their identities in implicit ways.

All submissions must be in PDF format. The submission website is: https://sle24.hotcrp.com

Concurrent Submissions

Papers must describe unpublished work that is not currently submitted for publication elsewhere as described by SIGPLAN’s Republication Policy. Submitters should also be aware of ACM’s Policy and Procedures on Plagiarism. Submissions that violate these policies will be desk-rejected.

Policy on Human Participant and Subject Research

Authors conducting research involving human participants and subjects must ensure that their research complies with their local governing laws and regulations and the ACM’s general principles, as stated in the ACM’s Publications Policy on Research Involving Human Participants and Subjects. If submissions are found to be violating this policy, they will be rejected.

Reviewing Process

All submitted papers will be reviewed by at least three members of the program committee. Research papers will be evaluated concerning soundness, relevance, novelty, presentation, and replicability. New ideas/vision papers will be evaluated primarily concerning soundness, relevance, novelty, and presentation. SLEBoK papers will be reviewed on their soundness, relevance, originality, and presentation. Tool papers will be evaluated concerning relevance, presentation, and replicability.

For fairness reasons, all submitted papers must conform to the above instructions. Submissions that violate these instructions may be rejected without review at the discretion of the PC chairs.

For research papers, authors will get a chance to respond to the reviews before a final decision is made.

Artifact Evaluation

To foster a culture of experimental reproducibility, SLE uses an evaluation process to assess the quality of the artifacts on which papers are based. Authors of accepted research papers are invited to submit artifacts. For more information, please see the Artifact Evaluation page.

Awards

  • Distinguished paper: Award for the most notable paper, as determined by the PC chairs based on the recommendations of the program committee.
  • Distinguished artifact: Award for the artifact most significantly exceeding expectations, as determined by the AEC chairs based on the recommendations of the artifact evaluation committee.

Publication

All accepted papers will be published in the ACM Digital Library.

AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

SLE and Doctoral Students

SLE encourages students to submit to the SPLASH doctoral symposium. Authors of accepted doctoral symposium papers on SLE topics will also have the chance to present their work to the SLE audience.

Contact

For additional information, clarification, or answers to questions, please get in touch with the program co-chairs (P.D.Mosses at tudelft.nl and Juliana at inf.puc-rio.br).

Artifact Evaluation

SLE’24 implements a two-round review process that also evaluates the quality of the artifacts supporting accepted research papers: the Artifact Evaluation track.

Authors of research papers accepted for SLE 2024 will be invited (by email) to submit artifacts. Any kind of artifact that is presented in the paper can be submitted (tools, grammars, metamodels, models, programs, algorithms, scripts, proofs, datasets, statistical tests, checklists, surveys, interview protocols, visualizations, annotated bibliographies, and tutorials).

The submitted artifacts will be reviewed by a dedicated Artifact Evaluation Committee (AEC). The approved artifacts will then be made first-class bibliographic objects, easy to find and cite. Depending on its quality, an artifact may be awarded different kinds of “badges” that are visible on the final paper.

Artifact submission is in addition to your already accepted paper at SLE24 and will not have a negative impact on it.

The artifact evaluation process of SLE borrows heavily from processes described at www.artifact-eval.org.

Submission and Reviewing Process

  • Artifact authors need an accepted paper at SLE24.
  • Authors use this submission page: https://sle24ae.hotcrp.com/paper/new
  • Authors need to submit a PDF version of the accepted paper so that artifact-paper consistency can be evaluated.
  • The artifact may be associated with a set of authors different from that of the accepted paper.
  • Authors must submit a single artifact per paper (1-to-1 mapping, paper-to-artifact).
  • Artifacts can be provided as a zip archive or via a DOI.
  • The artifact evaluated by the AEC and the artifact linked in the paper must be precisely the same. The chairs will ensure that DOIs point to the specific version evaluated, or that the SHA is identical.
  • The PDF and the artifact should NOT be anonymized.
  • The process includes a kick-the-tires stage (similar to a rebuttal): reviewers report on possible problems that may prevent artifacts from being properly evaluated. Authors will be given three days to read and respond to the kick-the-tires reports on their artifacts and to resolve the issues (potentially resubmitting their artifacts). Thereafter, the actual reviewing takes place.

Quality Criteria

Submitted artifacts will be evaluated by the AEC against the following criteria. Depending on which criteria are met, different badges are assigned (we limit ourselves to the ‘Evaluated’ and ‘Available’ badges).

Artifacts ‘Evaluated’ (Badge)

  • Documented: At minimum, an inventory of artifacts is included, and sufficient description provided to enable the artifacts to be exercised.

  • Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.

  • Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package, then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included to demonstrate the analysis.)

  • Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.

(These points are evaluated by the AEC.)

There are two quality levels of the ‘Evaluated’ badge:

  • ‘Evaluated Functional’ certifies minimal functionality.
  • ‘Evaluated Reusable’ is awarded to artifacts that exceed minimal functionality.

(This distinction is decided on by the Chairs, based on the comments of the AEC.)

Artifacts ‘Available’ (Badge)

  • Identification: Using DOIs to identify published objects is standard. It is important to use a DOI that points to the specific version with which the results of the paper can be reproduced (for Zenodo: do not use the “always latest” DOI; for FigShare: use a DOI with a version suffix, e.g., “.v1”).

  • Long-Term Availability: It is necessary that the artifacts are archived in a repository that hosts them on a long-term basis, such as the digital libraries of the ACM, Zenodo, etc. (version-control hosting services do not fulfill this requirement, as the hosting company could decide at any time to discontinue the service, as Google did with Google Code).

  • Immutability: It is necessary that the artifact cannot be changed after publication because the reader needs to use the material exactly as the authors did to obtain their result.

Detailed definitions of these badges and the respective evaluation criteria may be found at the ACM Artifact Review Badging site.

(These points are assured by the Chairs.)

Important Dates (Authors and PC)

  • Artifact submission Deadline: 9.09.2024 (AOE)
  • Start Kick-the-tires author response period: 19.09.2024
  • End Kick-the-tires author response period: 23.09.2024 (AOE)
  • Author artifact Notification: 11.10.2024

Important Dates (PC)

  • Start bidding on artifacts: 10.09.2024
  • End bidding on artifacts: 12.09.2024 (AOE)
  • Start Kick-the-tires evaluation (assignment of artifacts to PC): 13.09.2024
  • End Kick-the-tires evaluation: 18.09.2024 (AOE)
  • Start artifact evaluation: 19.09.2024
  • End artifact evaluation: 07.10.2024 (AOE)
  • Discussion and decision: 08-10.10.2024

Further Information

For further information on the artifact evaluation of SLE 2024, feel free to contact the artifact evaluation chairs.

Best regards, Sérgio Medeiros and Johannes Härtel, SLE24 Artifact Evaluation Co-Chairs

Additional Hints

If you are not certain how to prepare and submit your artifact, here are some common hints:

  • Use an archive file (gz, xz, or zip) containing everything needed to support a full evaluation of the artifact. The archive file has to include at least the artifact itself and a text file named README.txt that contains the following information (an illustrative outline is sketched after this list):
    • An overview of the archive file, documenting its content.
    • A setup/installation guide giving detailed instructions on how to set up or install the submitted artifact.
    • Detailed step-by-step instructions on how to reproduce any experiments or other activities that support the conclusions given in the paper.
  • When preparing your artifact, consider that your artifact should be as accessible to the AEC as possible. In particular, it should be possible for the AEC to quickly make progress in the investigation of your artifact. Please provide some simple scenarios describing concretely how the artifact is intended to be used. For a tool, this would include specific inputs to provide or actions to take, and expected output or behavior in response to this input.
  • For artifacts that are tools, it is recommended to provide the tool pre-installed and ready to use in a virtual machine image (for VirtualBox, VMware, or SHARE), as a Docker image, or on a similar widely available platform.
  • Please use widely supported open formats for documents (e.g., PDF, HTML) and data (e.g., CSV, JSON).
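
For illustration only, a README.txt following the structure requested above could be organized as sketched below; the directory names, commands, and version numbers are hypothetical placeholders, not requirements:

    README.txt (illustrative outline)

    1. Overview
       tool/         source code of the tool described in the paper
       benchmarks/   inputs used for the experiments in the paper
       scripts/      scripts that re-run the experiments and regenerate tables/figures
       LICENSE       license of the artifact

    2. Setup / installation
       Requirements (example): Docker 24+ or a Linux VM with 8 GB RAM.
         docker build -t myartifact .      # hypothetical command
         docker run -it myartifact bash

    3. Step-by-step instructions
       ./scripts/run_all.sh                # hypothetical; re-runs all experiments
       Expected result: a results/ directory whose contents match the numbers reported in the paper.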