This workshop, now in its second edition, is about all aspects of unsoundness in type systems, verification tools, and their underlying theories. It is meant to foster a community-wide discussion on possible sources of unsoundness and how to avert, address, and tackle them. We are particularly interested in presentations of previously unknown or lesser-known problems, as well as discussions of well-known soundness holes and how they affect the day-to-day work of programming language researchers and users.

Background

Advanced static type systems are becoming more mainstream than ever. On one end of the spectrum, dynamic languages like JavaScript and Python are adopting sophisticated type systems to precisely describe their idiomatic programming patterns. The soundness of these type systems is often a secondary goal or even an afterthought, causing all sorts of problems for programmers, who may encounter runtime errors in unexpected places. On the other end of the spectrum, formal verification has been gaining traction both within the Programming Languages community and beyond. Soundness in this context is often a safety-critical property that cannot be compromised. Somewhere between the two are statically typed functional and object-oriented programming languages like Haskell, Scala, and OCaml.
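As one small illustration of the kind of surprise gradual typing permits (a minimal Python sketch; the function and JSON shape are invented for the example): mypy accepts the program below, because `json.loads` returns `Any` and `Any` silently flows into the declared `int` return type, so the mistake only surfaces at runtime.

```python
import json

def get_port(config: str) -> int:
    # json.loads returns Any, so the checker lets this value flow
    # into the declared `int` return type without complaint.
    return json.loads(config)["port"]

# Statically an int as far as the type checker is concerned,
# but the JSON actually holds a string:
port = get_port('{"port": "8080"}')
print(type(port).__name__)  # prints "str"
# Evaluating `port + 1` here would raise a TypeError at runtime,
# even though the program type-checks cleanly.
```

Here the unsoundness is by design, since `Any` is gradual typing's escape hatch, but from the user's perspective the effect is the same: a program that type-checks and still fails with a type error at runtime.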

Previously unknown soundness bugs are detected in many of these tools on a regular basis. For example, the previous edition of this workshop, which focused on formal verification, uncovered a fundamental problem with many tools that integrate software verification and object-oriented programming. There are also endless reports of compiler bugs in general-purpose programming languages, and according to a recent survey [1], many of these bugs lie at the heart of the type checker.

A fascinating aspect of this soundness quagmire is that soundness bugs found across programming languages and paradigms often share fundamental characteristics and can often be loosely related to each other, even when they originate in vastly different contexts and formalisms. As just one example, a faulty interaction between GADTs and multi-stage programming (a form of metaprogramming), originally reported in the OCaml compiler, was later found to also exist in Scala [2]: essentially the same bug was found even though Scala’s type system foundations are nothing like OCaml’s – Scala uses generic class hierarchies and does not even have proper GADTs, only a loose encoding of them!

To us, this is evidence that the problem of unsoundness in type systems and verification is of general interest in and of itself and is worth a community-wide conversation. Such a conversation concerns theoreticians and practitioners across several disciplines, paradigms, and application domains of advanced static type systems and programming language research.

Intended Audience

The participants of UNSOUND will be able to share their experience and exploits on how different type system and verification tools can either be broken or expose confusing behavior to users. We think this will be greatly beneficial, not only because it will help all of us iron out those sources of unsoundness, but also because it will facilitate understanding of the foundational differences and commonalities between the underlying models and assumptions of the various research lines.

The current academic environment encourages us to talk about the success cases of our work. In this workshop, we want to address and learn from failure cases, and we want to reinforce the bedrock of our understanding to avert such failures as much as possible in the future. Because of the siloed nature of academic research, it is easy for researchers to focus on specific aspects of type systems and verification while turning a blind eye to others. For example, a line of research focusing on aliasing control in object-oriented languages may not take into consideration the implications of its work for other areas, like termination. We believe that learning from the issues of many type system and verification projects can broaden researchers’ attention to topics that have so far escaped their focused area of research; e.g., from only type correctness to also avoiding stack overflows.

We also believe that this environment will be particularly beneficial for young researchers in search of open questions in type systems and verification. It may provide motivation to dive deep into the details of a particular system or tool, or to expand one’s individual area of expertise and gain a wider, more objective, and more critical view of the area as a whole.

The workshop, now in its second edition, welcomes both people with strong theoretical skills and people who just like hacking things. We do not expect fully polished submissions, and we will not have formal proceedings. Students are especially welcome to attend.

[1] Stefanos Chaliasos, Thodoris Sotiropoulos, Georgios-Petros Drosos, Charalambos Mitropoulos, Dimitris Mitropoulos, and Diomidis Spinellis. 2021. Well-typed programs can go wrong: a study of typing-related bugs in JVM compilers. Proc. ACM Program. Lang. 5, OOPSLA, Article 123 (October 2021), 30 pages. https://doi.org/10.1145/3485500

[2] See “Time-travelling evidence” in https://counterexamples.org/dubious-evidence.html

Goals

The goals of the workshop are:

  • To discover sources of unsoundness in different type systems and verification tools
  • To share experiences and exploits on how different tools can either be broken or expose confusing behavior
  • To broaden the attention of researchers to topics which so far escaped their focused area of research; e.g., from only type correctness to also avoiding stack overflows
  • To challenge assumptions uncritically accepted as valid reasoning principles in the field
  • To connect researchers from different areas of type systems and verification
  • To engage with and encourage the next generation of researchers in verification

Examples of possible contributions include:

  • Definitions of soundness and how they can diverge between languages and tools.
  • Exploring the divergences between user assumptions and actual definitions of soundness.
  • Summarizing common sources of unsoundness and why they emerge.
  • Reporting logic errors in the specification of a verification tool, e.g., universe inconsistencies.
  • Finding bugs in the implementation of type & proof checkers.
  • Discovering overconfident generalizations of sound subsystems to larger settings, e.g., imperative techniques in OO settings.
  • Formally characterizing escape hatches, which most practical systems possess, and finding how to use them without compromising the soundness of the parts of a program that don’t use them.
  • Reporting on unexpected soundness holes in type systems for dynamic languages, which can lead to more surprises at runtime.
  • Disproving soundness statements in published papers.
  • Finding statements proven in published literature that should no longer be trusted because they relied on a broken system.
  • Simply proving False in a verification tool or exhibiting non-termination in a total language; in particular, we are interested in practical ways to trick available tools to accept wrong input.
  • Breaking reasoning about programs with types by breaking the type system of the programming language in new and interesting ways.
  • Bad interactions between axiomatic choices in libraries used in proofs.
  • Impacts of the false sense of security when the chain of trust is broken by subtle unsoundness in verification tools.

Call for Presentations

Contributions:

Submissions should consist of a two-page extended abstract. Additional material (bibliography, related work, and code examples) will not count toward this limit. We strongly encourage authors to include instructions to reproduce results or exploits.

There will be a friendly and open-minded peer review process, focusing on checking that the submitted material is appropriate for presentation at the workshop and likely to spur interesting conversations.

Publication

Accepted extended abstracts will be made publicly available on the workshop webpage. However, presentation at UNSOUND does not count as prior publication; submissions will not appear in formal proceedings and can later be published at a conference of the authors’ choosing.

Instructions to Authors

Submission

Authors should be aware of ACM’s policies on plagiarism https://www.acm.org/publications/policies/plagiarism.

Program Committee members are allowed to submit extended abstracts/presentations.

Papers should be submitted online at:

https://unsound2024.hotcrp.com/

Formatting:

Submitted abstracts should be in portable document format (PDF), formatted using the ACM SIGPLAN style guidelines. Authors should use the acmart format, with the acmsmall sub-format for ACM proceedings. For details, see:

http://www.sigplan.org/Resources/Author/#acmart-format

It is recommended to use the review option when submitting an abstract; this option enables line numbers for easy reference in reviews.

Who is involved?

UNSOUND is currently managed by Lionel Parreaux, Marco Servetto, and Jan Bessai. You can chat with us in the unsound-workshop channel on Discord.