HOR is a forum to present work concerning all aspects of higher-order rewriting. The aim is to provide an informal and friendly setting to discuss recent work and work in progress. The following is a non-exhaustive list of topics for the workshop:
To give a presentation at the workshop, please submit an extended abstract (2 to 5 pages; up to 7 pages for the final version) via EasyChair:
https://easychair.org/conferences/?conf=hor2025
Please use LaTeX and the EasyChair style to prepare your submission:
https://easychair.org/publications/easychair.zip
HOR is a platform for discussing open questions, ongoing research, and new perspectives, as well as new results. Extended abstracts describing work in progress, preliminary results, research projects, or problems in higher-order rewriting are very welcome. In particular, short versions of recently published papers are welcome, and submission to HOR does not preclude formal publication at other venues.
The workshop has informal electronic proceedings that will be made available on this website.
For questions regarding submission, please contact the PC chair Pablo Barenbaum.
TBA
Rewriting Induction (RI) is a formal system in term rewriting for proving inductive theorems. Recently, RI has been extended to higher-order Logically Constrained Term Rewriting Systems (LCTRSs), which makes it an interesting tool for program verification, with inductive theorems serving as an interpretation of program equivalence. A major challenge when proving inductive theorems with RI is the generation of suitable induction hypotheses, preferably automatically; two existing heuristics often fail. Here, we consider another approach: rather than inventing new heuristics for proving individual cases, we consider classes of equivalences. This is achieved by introducing templates for detecting specific tail-recursive and non-tail-recursive shapes. Whenever each of the two programs fits such a template, we can generate an equation that is guaranteed to be an inductive theorem.
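As a purely illustrative instance of such a template pair (the concrete rules below are ours, not taken from the talk), consider a non-tail-recursive and an accumulator-based, tail-recursive definition of summation over unary naturals:

  sum(0) → 0          sum(s(x)) → s(x) + sum(x)
  sumacc(0, a) → a    sumacc(s(x), a) → sumacc(x, s(x) + a)

When the two programs fit the corresponding shapes, a template-based approach can generate a generalized equation such as sumacc(x, a) ≈ a + sum(x), which subsumes the naive goal sumacc(x, 0) ≈ sum(x) and, modulo standard arithmetic lemmas, is exactly the kind of induction hypothesis that generic heuristics struggle to find.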
In most quantum programming languages, measurements are performed in the traditional computational basis. We present work in progress in which we set aside this seemingly obvious choice and instead propose a quantum lambda-calculus where each lambda term is decorated with the basis in which it is to be measured.
This allows us to define a type system in which terms that were previously treated linearly are now treated in a classical fashion. This is done without relying on unitary gates, thus adding an abstraction layer between programs and circuitry. An unexpected benefit emerged while designing the type system: when restricting the uncertainties to their extremes, we were left with a formal system for specification akin to Hoare logic.
To achieve this, we used the realizability technique to extract a lambda-calculus complete with its type system. We present the language and showcase a few examples.
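As a purely hypothetical sketch of what such decorations might look like (the notation below is ours, not the authors'), one could annotate a measurement with its basis, e.g.

  meas{|0⟩, |1⟩}(t)    measurement of t in the computational basis
  meas{|+⟩, |−⟩}(t)    measurement of t in the Hadamard basis

so that the type system can track, term by term, the basis in which each result will be observed.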
I will present relatively recent work on extending the Agda and Rocq interactive theorem provers with user-defined rewrite rules. Such extensions come with their own set of challenges: left unrestrained, they break many expected meta-theoretical properties of the provers. I will explain how we addressed those challenges, focusing in particular on the verification of type preservation and confluence. I will then present a new proposal enabling the use of custom rewrite rules only locally to a definition (or a module), which opens interesting new possibilities for tools like Agda, Rocq, or the Dedukti logical framework.
This will include joint work with Jesper Cockx, Gaëtan Gilbert, Yann Leray and Nicolas Tabareau.
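To fix intuitions with a standard, merely illustrative example: with addition defined by the computation rules 0 + n ↦ n and s(m) + n ↦ s(m + n), the propositional equality n + 0 = n holds by induction but not by computation, and a user may wish to install it as a rewrite rule

  n + 0 ↦ n

so that it fires silently during type-checking. It is precisely rules like this that must be checked for type preservation and for confluence with the existing computation rules if the prover's meta-theory is to survive.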
In “traditional” infinitary rewriting based on ordinal-indexed rewriting sequences and strong Cauchy convergence, a key property of rewriting systems is compression, that is, the fact that rewriting sequences of arbitrary ordinal length can be compressed to sequences of length ω. Famous examples of compressible systems are left-linear first-order systems and infinitary λ-calculi. In this work, we investigate compression in the equivalent setting of coinductive infinitary rewriting, which we recall in a slightly augmented form: we extend it to rewriting of (possibly non-wellfounded) derivations in an arbitrary system of derivation rules. Then we define the coinductive counterpart of compressed rewriting sequences, and we present a general coinductive procedure turning arbitrary infinitary rewriting derivations into compressed ones, without relying on convergence. The coinductive presentations of the two aforementioned examples are endowed with compression lemmas as instances of our general method.
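A standard example of the phenomenon (spelled out here for concreteness): with the rules a → f(a) and b → c, the term g(a, b) admits a rewriting sequence of length ω + 1 that first develops a → f(a) → f(f(a)) → ⋯ for ω steps, converging to g(f^ω, b), and only then performs b → c. Compression interleaves the two, e.g. performing b → c after the first step, yielding a sequence of length ω with the same limit g(f^ω, c).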
We reformulate the open problem of type safety in Hutchins' Pure Subtype Systems (PSS) by proving it under a different conjecture. PSS (hereafter in the singular) harmoniously mixes terms and types, thus enabling a number of advanced language features that combine dependent types with higher-order subtyping. In PSS, terms and types belong to the same kind (everything is a subtype) and the resulting theory is based on subtyping. However, conflating terms and types results in high impredicativity and complicates the meta-theory of PSS. The crucial property of type safety in PSS hinges on the well-known problem of transitivity elimination. Despite his attempts, Hutchins was unable to prove a key commutativity property, and type safety has remained an open problem for more than a decade. We introduce a reformulation of Hutchins' PSS, named Machine-Based Pure Subtype System (MPSS), based on a mechanism reminiscent of the Krivine Abstract Machine, in which the proof of transitivity elimination is direct. Alas, in our MPSS, subtyping is scope-dependent, and type safety rests on the conjecture that subtyping is congruent with respect to covariant contexts. Our reformulation of type safety in PSS uncovers a dilemma about a commutativity/congruence problem, which sheds new light on possible solutions and highlights the difficulty of the challenge proposed by Hutchins.
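Schematically, transitivity elimination is the admissibility of the rule

  A ≤ B    B ≤ C
  ──────────────
      A ≤ C

that is, the fact that every derivation using this rule can be rewritten into one that avoids it, the subtyping analogue of cut elimination. (This rendering is ours; the precise formulation in PSS is more involved.)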
We are interested in the cost of evaluating functional programs implemented by means of rewriting, be it higher- or first-order term rewriting or graph rewriting. Our starting observation is that in such structured rewrite systems the cost of matching (exhibiting a redex-pattern) and of substitution (plugging in the right-hand side) is an a priori non-negligible part of the cost of a rewrite step. To account for that cost we propose to decompose rewrite steps into 3 stages: matching, replacement, and substitution, with the 1st and 3rd stages themselves consisting of rewrite steps in a so-called Substitution Calculus (SC). More precisely, in a structured rewrite system, a rewrite step from a structure s to a structure t arises as C[ρ] : C[l] → C[r] for some structure C[x] and rule ρ : l → r, by substituting the rule symbol ρ (respectively the structures l and r) for x in C[x]. For instance, in the case of higher-order term rewriting, rewrite steps invoke higher-order substitutions for both the matching and substitution stages. To get a handle on the cost of a rewrite step in structured rewriting, we propose that substitution be made explicit, leading to the rewrite step C[ρ] being decomposed into s SC↞ C[l] → C[r] ↠SC t for some substitution calculus SC, with the cost of the step being the sum of the costs of its 3 stages:
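Schematically (our rendering of this accounting): the cost of the decomposed step is the number of SC-steps witnessing s SC↞ C[l], plus one for the replacement step C[l] → C[r], plus the number of SC-steps in C[r] ↠SC t. For beta in the lambda-calculus, for instance, the replacement stage is the root step (λx.M)N → M⟨x ≔ N⟩ introducing an explicit substitution, which the substitution stage then evaluates to M[x ≔ N] by SC-steps.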
The decomposition allows us to investigate whether the cost of the SC-steps in stages 1 and 3 can be accounted for by the replacement step in stage 2, and/or whether SC-steps themselves are unit cost or need to be decomposed further, in a given structured rewrite system. We illustrate this for the archetypal second-order HRS of beta-steps in the lambda-calculus. We show that, of three well-known notions of beta-reduction (weak, leftmost–outermost, and multistep), the cost of the SC-steps is negligible for weak and leftmost–outermost beta, but not for multistep beta.
In the late 2000s Honda and Laurent found an interesting correspondence between the (asynchronous, local, internal) pi-calculus and a variant of (polarized) linear logic proof nets. Ehrhard and Laurent later understood that these were in fact differential proof nets, that is, proofs for an extension of linear logic called differential linear logic. This logical system admits cut-elimination, thus yielding what looks like a concurrent version of the Curry-Howard correspondence. Unfortunately, such a correspondence is imperfect: in Ehrhard and Regnier's original formulation, which is the one considered by Ehrhard and Laurent, the non-determinism of differential proof nets is incapable of soundly matching the non-determinism of the pi-calculus (technically, the natural rewriting systems associated with proof nets and with processes are not bisimilar).
In the meantime, Caires and Pfenning introduced their own concurrent version of the Curry-Howard correspondence, using plain linear logic on one side and the session-typed pi-calculus on the other. The work of Honda, Laurent and Ehrhard was sort of eclipsed by the success of this correspondence, and was not pursued any further.
It never occurred to anyone (including, until recently, myself) that, rather than pointing out some insurmountable fundamental gap, the failure of the first correspondence simply suggests that Ehrhard and Regnier's cut-elimination rules for differential linear logic should be changed. Although it is still unclear (especially to myself) to what extent this idea may be developed, it has already led to a couple of interesting applications of linear-logical techniques to the pi-calculus. It also improved our (or, at least, my) understanding of the relationship between Honda and Laurent's and Caires and Pfenning's correspondences. Albeit still difficult to formalize technically (at least for me), the relationship is much tighter than it looks at first sight (at least to my sight).
I will explain how the pi-calculus suggests alternative cut-elimination rules for differential linear logic, essentially rediscovering Honda and Laurent's proof nets, but with a different syntax and backed by the solid semantic foundations provided by differential linear logic. From there, I will introduce the applications I mentioned above, namely intersection types for the pi-calculus (joint work with Dal Lago, de Visme and Yoshimizu, published a few years ago) and a work in progress with Jaramillo and Perez in which we develop a type system enforcing deadlock-freedom for a very expressive session-typed pi-calculus. Along the way, and if time allows, I will try to give my understanding of the kinship between Honda and Laurent's and Caires and Pfenning's concurrent versions of Curry-Howard.
A popular, effective simplification ordering for proving termination of rewrite rules in a first-order setting is Dershowitz' recursive path ordering (RPO). Its higher-order generalization, the computability path ordering (CPO) by Blanqui, Jouannaud and Rubio, is just as effective, but much less popular. In this paper, we show that it can indeed be used in practice, sparing many complex termination arguments. Our examples originate from a recent proposal of the first two authors using a natural deduction style to describe quantum computations.
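As a textbook first-order illustration (our example, not one from the paper): the rules

  x + 0 → x        x + s(y) → s(x + y)

terminate by RPO with precedence + > s. For the second rule, the precedence reduces the goal to x + s(y) >rpo x + y, which holds because s(y) >rpo y by the subterm property; the first rule is immediate by the subterm property. CPO lifts this style of argument to higher-order rules, which is what makes it attractive for the quantum examples mentioned above.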