HOR 2025
12th International Workshop on
Higher-Order Rewriting
Birmingham, United Kingdom
14th July, 2025
affiliated with FSCD 2025

Overview

HOR is a forum for presenting work on all aspects of higher-order rewriting. The aim is to provide an informal and friendly setting in which to discuss recent work and work in progress.

Information about previous editions can be found here.

Submission

To give a presentation at the workshop, please submit an extended abstract (between 2 and 5 pages; up to 7 pages for the final version) via Easychair:

https://easychair.org/conferences/?conf=hor2025

Please use LaTeX and the Easychair style to prepare your submission:

https://easychair.org/publications/easychair.zip

HOR is a platform for discussing open questions, ongoing research, and new perspectives, as well as new results. Extended abstracts describing work in progress, preliminary results, research projects, or problems in higher-order rewriting are very welcome. In particular, short versions of recently published papers are welcome, and submission to HOR does not preclude formal publication at other venues.

The workshop has informal electronic proceedings that will be made available on this website.

For questions regarding submission, please contact the PC chair Pablo Barenbaum.

Important dates

All times are Anywhere on Earth.

Invited speakers

Proceedings

TBA

Programme

09:00–10:00 Session 1
09:00 Kasper Hagens and Cynthia Kop
Higher-order inductive theorems via recursor templates (Abstract)

Rewriting Induction (RI) is a formal system in term rewriting for proving inductive theorems. Recently, RI has been extended to higher-order Logically Constrained Term Rewriting Systems (LCTRSs), which makes it an interesting tool for program verification, with inductive theorems serving as an interpretation of program equivalence. A major challenge when proving inductive theorems with RI is the generation of suitable induction hypotheses, preferably automatically. Two existing heuristics often fail. Here, we consider another approach: rather than inventing new heuristics for proving individual cases, we consider classes of equivalences. This is achieved by introducing templates for detecting specific tail- and non-tail-recursive shapes. Whenever each of the two programs fits into such a template, we can generate an equation that is guaranteed to be an inductive theorem.
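The tail/non-tail template idea can be pictured with a pair of summation programs. This is an illustrative sketch only, not taken from the paper; all names are hypothetical.

```haskell
-- Illustrative only: two summation programs whose equivalence is an
-- inductive theorem of the shape the recursor templates target.

-- Non-tail-recursive shape: work remains after the recursive call.
sumRec :: [Int] -> Int
sumRec []       = 0
sumRec (x : xs) = x + sumRec xs

-- Tail-recursive shape: work is threaded through an accumulator.
sumAcc :: Int -> [Int] -> Int
sumAcc acc []       = acc
sumAcc acc (x : xs) = sumAcc (acc + x) xs

-- A template would generate the generalized equation
--     sumAcc acc xs = acc + sumRec xs
-- as induction hypothesis, from which the equivalence
--     sumRec xs = sumAcc 0 xs
-- follows.
```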

09:30 Alejandro Díaz-Caro, Octavio Malherbe and Rafael Romero
Basis-Sensitive Quantum Typing via Realizability (Abstract)

In most quantum programming languages, measurements are performed in the traditional computational basis. We present a work-in-progress where we dispose of this seemingly obvious choice and instead, we propose a quantum lambda-calculus where each lambda term is decorated with the basis in which it is to be measured.

This allows us to define a type system in which terms that were previously treated linearly are now treated in a classical fashion. This is done without relying on unitary gates, thus adding an abstraction layer between programs and circuitry. Another unexpected benefit was found while designing the type system: when restricting the uncertainties to their extreme, we were left with a formal system for specification akin to Hoare logic.

In order to achieve this, we use the realizability technique to extract a lambda-calculus complete with its type system. We present the language and showcase a few examples.

10:00–10:30 Coffee Break
10:30–12:00 Session 2
10:30 Invited talk: Théo Winterhalter
Controlling computation in type theory (Abstract)

I will present relatively recent work on extending the Agda and Rocq interactive theorem provers with user-defined rewrite rules. Such extensions come with their own set of challenges: left unrestrained, they break many expected meta-theoretical properties of the provers. I will explain how we addressed those challenges, focusing in particular on the verification of type preservation and confluence. I will then present a new proposal enabling the use of custom rewrite rules only locally to a definition (or a module), which opens interesting new possibilities for tools like Agda, Rocq, or the Dedukti logical framework.

This includes joint work with Jesper Cockx, Gaëtan Gilbert, Yann Leray and Nicolas Tabareau.

11:30 Rémy Cerda and Alexis Saurin
Compression for Coinductive Infinitary Rewriting (A Preliminary Account) (Abstract)

In “traditional” infinitary rewriting based on ordinal-indexed rewriting sequences and strong Cauchy convergence, a key property of rewriting systems is compression, that is, the fact that rewriting sequences of arbitrary ordinal length can be compressed to sequences of length ω. Famous examples of compressible systems are left-linear first-order systems and infinitary λ-calculi. In this work, we investigate compression in the equivalent setting of coinductive infinitary rewriting, which we recall in a slightly augmented form: we extend it to rewriting of (possibly non-wellfounded) derivations in an arbitrary system of derivation rules. Then we define the coinductive counterpart of compressed rewriting sequences, and we present a general coinductive procedure turning arbitrary infinitary rewriting derivations into compressed ones, without relying on convergence. The coinductive presentations of the two aforementioned examples are endowed with compression lemmas as instances of our general method.

12:00 Valentin Pasquale and Álvaro García-Pérez
Towards the type safety of Pure Subtype Systems (Abstract)

We reformulate the open problem of type safety in Hutchins’ Pure Subtype Systems (PSS) by proving it under a different conjecture. PSS (hereafter in the singular) harmoniously mixes terms and types, thus enabling a number of advanced language features that combine dependent types with higher-order subtyping. In PSS, terms and types belong to the same kind (everything is a subtype) and the resulting theory is based on subtyping. However, conflating terms and types results in high impredicativity and complicates PSS meta-theory. The crucial property of type safety in PSS hinges on the well-known problem of transitivity elimination. Hutchins attempted but failed to prove a key commutativity property, and type safety has remained an open problem for more than a decade. We introduce a reformulation of Hutchins’ PSS, named Machine-Based Pure Subtype System (MPSS), based on a mechanism reminiscent of the Krivine Abstract Machine, in which the proof of transitivity elimination is direct. Alas, in our MPSS, subtyping is scope-dependent and type safety rests on the conjecture that subtyping is congruent with covariant contexts. Our reformulation of type safety in PSS uncovers a dilemma about a commutativity/congruence problem, which sheds new light on possible solutions and makes evident the difficulty of the challenge proposed by Hutchins.

12:30–14:00 Lunch Break
14:00–15:30 Session 3
14:00 Invited talk: Vincent van Oostrom
Accounting for the cost of substitution in structured rewriting (Abstract)

We are interested in the cost of evaluating functional programs implemented by means of rewriting, be it higher- or first-order term rewriting or graph rewriting. Our starting observation is that in such structured rewrite systems the cost of matching (exhibiting a redex pattern) and of substitution (plugging in the right-hand side) is an a priori non-negligible part of the cost of a rewrite step. To account for that cost we propose to decompose rewrite steps into three stages: matching, replacement, and substitution, with the first and third stages themselves consisting of rewrite steps in a so-called Substitution Calculus (SC). More precisely, in a structured rewrite system, a rewrite step from a structure s to a structure t arises as C[ρ] : C[l] → C[r] for some structure C[x] and rule ρ : l → r, by substituting the rule symbol ρ, respectively the structures l and r, for x in C[x]. For instance, in the case of higher-order term rewriting, rewrite steps invoke higher-order substitutions for both the matching and substitution stages. To get a handle on the cost of a rewrite step in structured rewriting, we propose that substitution be made explicit, leading to the rewrite step C[ρ] being decomposed into s SC↞ C[l] → C[r] ↠SC t for some substitution calculus SC, with the cost of the step being the sum of the costs of its three stages:

  1. the matching phase, the SC-expansion s SC↞ C[l] exhibiting the to-be-replaced substructure l within s;
  2. the replacement C[l] → C[r] of l by r according to rule ρ;
  3. the substitution phase, the SC-reduction C[r] ↠SC t plugging in the substructure r, yielding t.
We exemplify this idea for higher-order term rewrite systems (HRSs; having the simply typed λαβη-calculus as SC), for first-order term rewrite systems (TRSs; having an SC based on letrec), for term graph rewrite systems (TGRSs; having an SC, the ж-calculus, based on sharing) and for port-graph rewrite systems (PGRSs; having an SC based on indirection).

The decomposition allows us to investigate whether the cost of the SC-steps in stages 1 and 3 can be accounted for by the replacement step in stage 2, and/or whether SC-steps themselves are unit cost or need to be decomposed further, in a given structured rewrite system. We illustrate this for the archetypical second-order HRS of beta-steps in the lambda-calculus. We show that, of three well-known notions of beta-reduction (weak, leftmost–outermost, and multistep), the cost of the SC-steps is negligible for weak and leftmost–outermost beta, but not for multistep beta:

  • For weak and leftmost–outermost beta-reduction the substitutions to be performed are essentially first-order, suggesting that one can implement both by TRSs. Observing further that TRS-steps are not unit cost (due to replication being built into their SC), we then implement TRSs by TGRSs. Since rewrite steps of the latter are obviously unit cost, this allows us (by some careful reasoning) to conclude the same for weak and leftmost–outermost beta.
  • This is then contrasted with multistep beta-reduction, which is not unit cost. We base ourselves on the known fact that reaching a normal form is much more costly than the number of multistep betas needed to reach it, entailing that the cost of substitution must be high (being the difference between the former and the latter). We illustrate this phenomenon for an implementation of multistep beta by means of a PGRS, decomposing every multistep beta into: 4 SC-expansion steps of size 1, 1 replacement step of size 4, followed by a number of SC-reduction steps, a number that necessarily must be large, and indeed is, as we demonstrate.
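The size-versus-steps gap underlying the multistep case can be pictured with a toy duplication system. This sketch is ours, for illustration only; it is not the authors' PGRS encoding.

```haskell
-- Illustrative only: a toy duplication system showing why substitution
-- cost cannot be charged to the step count. After n copying steps the
-- normal form has size 2^(n+1) - 1, so the work of "plugging in" grows
-- exponentially while the number of steps grows only linearly.
data Tree = Leaf | Pair Tree Tree

size :: Tree -> Int
size Leaf       = 1
size (Pair l r) = 1 + size l + size r

-- one "step" that copies its whole argument
dup :: Tree -> Tree
dup t = Pair t t

-- the result of n duplication steps starting from Leaf
normal :: Int -> Tree
normal n = iterate dup Leaf !! n
```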

15:00 Enno Folkerts
Constructing a Curry Algebra where Indeterminates are not Left Cancellative (Abstract)

The origin of this extended abstract is the question of which property of universal algebra is best suited to model the role of free variables in term models of the lambda calculus. In some mathematical contexts, variables and indeterminates are just the same thing. In this paper, however, a variable is a term that can be turned into a semantic object when considering term models. Indeterminates, on the other hand, are defined by a universal property and are purely semantic objects. The main result of this paper is the identification of a central property of universal algebra that holds for variables in the typical term models, while we construct a combinatory algebra where this property does not hold for indeterminates.

15:30–16:00 Coffee Break
16:00–17:30 Session 4
16:00 Invited talk: Damiano Mazza
Revisiting Honda and Laurent's Correspondence between the Pi-Calculus and Linear Logic (Abstract)

In the late 2000s Honda and Laurent found an interesting correspondence between the (asynchronous, local, internal) pi-calculus and a variant of (polarized) linear logic proof nets. Ehrhard and Laurent later understood that these were in fact differential proof nets, that is, proofs for an extension of linear logic called differential linear logic. This logical system admits cut-elimination, thus yielding what looks like a concurrent version of the Curry-Howard correspondence. Unfortunately, such a correspondence is imperfect: in Ehrhard and Regnier's original formulation, which is the one considered by Ehrhard and Laurent, the non-determinism of differential proof nets is incapable of soundly matching the non-determinism of the pi-calculus (technically, the natural rewriting systems associated with proof nets and with processes are not bisimilar).

In the meantime, Caires and Pfenning introduced their own concurrent version of the Curry-Howard correspondence, using plain linear logic on one side and the session-typed pi-calculus on the other. The work of Honda, Laurent and Ehrhard was sort of eclipsed by the success of this correspondence, and was not pursued any further.

It never occurred to anyone (including, until recently, myself) that, rather than pointing out some insurmountable fundamental gap, the failure of the first correspondence is simply suggesting that Ehrhard and Regnier's cut-elimination rules for differential linear logic should be changed. Although it is still unclear (especially to myself) to what extent this idea may be developed, it has already led to a couple of interesting applications of linear-logical techniques to the pi-calculus. It also improved our (or, at least, my) understanding of the relationship between Honda and Laurent's and Caires and Pfenning's correspondences. Albeit still difficult to formalize technically (at least for me), the relationship is much tighter than what it looks like at first sight (at least my sight).

I will explain how the pi-calculus suggests alternative cut-elimination rules for differential linear logic, essentially rediscovering Honda and Laurent's proof nets, but with a different syntax and backed up by the solid semantic foundations provided by differential linear logic. From there, I will introduce the applications I mentioned above, namely intersection types for the pi-calculus (a joint work with Dal Lago, de Visme and Yoshimizu published a few years ago) and a work in progress with Jaramillo and Perez in which we develop a type system enforcing deadlock-freedom for a very expressive session-typed pi-calculus. Along the way, and if time allows it, I will try to give my understanding of the kinship between Honda and Laurent's and Caires and Pfenning's concurrent versions of Curry-Howard.

17:00 Alejandro Diaz-Caro, Gilles Dowek and Jean-Pierre Jouannaud
Proving Termination With CPO (Abstract)

A popular, effective simplification ordering for proving termination of rewrite rules in a first-order setting is Dershowitz's recursive path ordering (RPO). Its higher-order generalization, the computability path ordering (CPO) of Blanqui, Jouannaud and Rubio, is just as effective but much less popular. In this paper, we show that it can indeed be used in practice to spare many complex termination arguments. Our examples originate from a recent proposal of the first two authors using a natural-deduction style to describe quantum computations.
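For illustration, here is a minimal first-order instance of such an ordering: a lexicographic path ordering, one common RPO variant. This sketch is ours, under our own simplifications; it is not the paper's CPO definition.

```haskell
-- Illustrative only: a lexicographic path ordering (LPO), a common
-- first-order instance of Dershowitz's RPO. CPO generalizes such
-- orderings to the higher-order setting.
data Term = V String | F String [Term] deriving (Eq, Show)

vars :: Term -> [String]
vars (V x)    = [x]
vars (F _ ts) = concatMap vars ts

-- lpo p s t: s is strictly greater than t under precedence p
-- (p f g = True means symbol f has higher precedence than g).
lpo :: (String -> String -> Bool) -> Term -> Term -> Bool
lpo _ (V _) _         = False                  -- variables are minimal
lpo _ s@(F _ _) (V x) = x `elem` vars s        -- s > x if x occurs in s
lpo p s@(F f ss) t@(F g ts)
  | any (\si -> si == t || lpo p si t) ss = True       -- subterm case
  | p f g                                 = all (lpo p s) ts
  | f == g                                = lexGt ss ts && all (lpo p s) ts
  | otherwise                             = False
  where
    lexGt (a:as) (b:bs)
      | a == b    = lexGt as bs
      | otherwise = lpo p a b
    lexGt _ _ = False
```

With the precedence add > s, add > 0, both rules of addition on naturals, add(0, y) → y and add(s(x), y) → s(add(x, y)), are oriented left-to-right, which proves the system terminating.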

Committees

Programme Committee:

Steering Committee: