# On Semantical Methods for Higher-Order Complexity Analysis

## Deivid Rodrigues do Vale

Promotor: prof.dr. Herman Geuvers (RU)

Co-promotor: dr. Cynthia Kop (RU)

Radboud University

Date: *19 April 2024*

Thesis: PDF

#### Summary

In 1965, Alan Cobham posed the following broadly formulated questions: “is it always more difficult to multiply than to add?” and “why?”. It turns out that these apparently innocent questions, formulated in the context of basic arithmetic operations, carry a deep connection with the notion of an algorithm and its intrinsic complexity. An algorithm is like a cake recipe: it tells us precisely which steps to execute to achieve an end, in this case the cake. By intrinsic complexity, we mean that no matter how we choose to formulate the recipe, there will be an inherent difficulty to it. Cobham went on to establish a class (informally, a collection) of those algorithms that are “simple”, that is, those we can execute in a small enough number of steps. This collection is today known as the class P, of algorithms that run in polynomial time. Cobham’s thesis states that such algorithms are precisely those that we can execute in practice.

Since then, a myriad of computational models has been proposed, each with its own notion of feasibility. One example is term rewriting systems, which, when subjected to suitable conditions, capture exactly the class of functions computable in polynomial time. This result is interesting in practice, since expressing algorithms in a declarative rewriting style is often easier than explicitly defining them in terms of Turing machines. However, the different characterizations of polynomial time in the rewriting context have hitherto been formulated in terms of first-order rewriting, i.e., systems where functions may not be passed as arguments; for example, functions like map or fold (which are often used in functional programming) are excluded.
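To make the contrast concrete, map can be written as a small higher-order rewriting system; the sketch below uses a common informal notation (the concrete syntax and the application operator vary between formalisms):

```
map(F, nil)         → nil
map(F, cons(x, xs)) → cons(F · x, map(F, xs))
```

Here F is a variable standing for a function, applied to x via ·. Because F ranges over functions, no first-order system can express these rules directly.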

The goal of this thesis is to fill this gap and develop a higher-order complexity analysis framework using higher-order rewriting systems as the underlying computational model. To this end, we introduce a new class of higher-order algebraic semantics: tuple interpretations. Their defining characteristic is that the dual notions of cost and size can be completely split when measuring the complexity of such higher-order systems. This allows for a more fine-grained study of running time, providing often tight upper bounds parametrized by both the cost and the size of the arguments. We then apply this method to a variety of higher-order systems with different operational semantics: full rewriting (no restriction on the evaluation order) and the call-by-value evaluation strategy.
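As a loose illustration of the idea (a sketch of our own, not the thesis’s exact definitions), consider unary natural numbers built from 0 and s, with addition defined by the rules add(x, 0) → x and add(x, s(y)) → s(add(x, y)). A tuple interpretation could assign each term a pair ⟨cost, size⟩, where the cost component bounds the number of rewrite steps and the size component bounds the size of the normal form:

```latex
% Constructors are free: cost 0, and they only build up size.
\llbracket \mathsf{0} \rrbracket = \langle 0,\ 0 \rangle
\qquad
\llbracket \mathsf{s} \rrbracket(x) = \langle 0,\ x_{\mathsf{size}} + 1 \rangle
% add recurses on its second argument, so its cost is linear in
% the size of y, while the size of the result is additive:
\llbracket \mathsf{add} \rrbracket(x, y) =
  \langle\, y_{\mathsf{size}} + 1,\ x_{\mathsf{size}} + y_{\mathsf{size}} \,\rangle
```

With these rules, evaluating add(x, y) indeed takes y_size + 1 steps, and the two components stay separate: the size of an argument may influence the cost, but cost never leaks into size.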

As a main result of this thesis, we provide a characterization of the higher-order complexity class BFF (Basic Feasible Functions) in terms of second-order rewriting systems admitting a polynomially bounded tuple interpretation. Finally, we provide Nijn/Onijn, a certification engine capable of formally certifying, in the Coq proof assistant, termination proofs produced by higher-order termination tools.