In theoretical computer science and mathematical logic, specifically in realizability, a partial combinatory algebra (pca) is an algebraic structure which abstracts a model of computation. The definition of pcas uses an idea from combinatory logic. The realizability topos over a pca is a model of higher-order intuitionistic logic where informally every function is computable in the model of computation specified by the pca.
A partial applicative structure is simply a set $A$ equipped with a partial binary operation $A \times A \rightharpoonup A$ called application. In the context of realizability, this operation is usually denoted by simple juxtaposition, i.e., $(a, b) \mapsto ab$. It is usually not associative; by convention, the notation $abc$ associates to the left as $(ab)c$, matching the standard convention in λ-calculus.[1]: 1
The terms (or expressions) over a partial applicative structure $A$ are defined inductively: every element $a \in A$ is a term, every variable is a term, and if $t$ and $u$ are terms, then so is the application $(tu)$.[1]: 2 [2]: 27
(In other words, they form the free magma over the disjoint union $A + V$, where $V$ is the set of variables.)
A term is closed when it contains no variables. A closed term may be evaluated in the natural way: a constant $a \in A$ evaluates to itself, and if the terms $e_1$ and $e_2$ evaluate to $a_1$ and $a_2$ respectively, then $e_1 e_2$ evaluates to $a_1 a_2$, if this is defined. Note that evaluation is a partial operation, since not all applications are defined. We write $t\downarrow$ to simultaneously express that the term $t$ evaluates to a defined value and to denote this value (this matches standard notation for values of partial functions). We also write $t \simeq u$ when the closed terms $t$ and $u$ either both fail to evaluate to a defined value, or both evaluate to the same value.
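As an illustration (a minimal sketch, not taken from the references), closed terms and their partial evaluation can be modelled in Python by representing terms as nested tuples and an undefined result as None; the tuple encoding and the function names are this sketch's own choices.

```python
from typing import Callable, Optional

# Terms over a partial applicative structure:
#   ('const', a) | ('var', name) | ('app', t1, t2)
Term = tuple

def evaluate(t: Term, app: Callable[[object, object], Optional[object]]) -> Optional[object]:
    """Evaluate a closed term; `app` is the structure's partial application,
    returning None when undefined. The result is None as soon as any application is undefined."""
    tag = t[0]
    if tag == 'const':
        return t[1]                      # a constant evaluates to itself
    if tag == 'var':
        raise ValueError("only closed terms can be evaluated")
    _, t1, t2 = t
    a1 = evaluate(t1, app)
    a2 = evaluate(t2, app)
    if a1 is None or a2 is None:         # a subterm is already undefined
        return None
    return app(a1, a2)                   # may itself be undefined

# Example: a tiny structure whose application is only partially defined.
table = {('i', 'i'): 'i', ('i', 'j'): 'j'}
app = lambda x, y: table.get((x, y))     # None outside the table, i.e. undefined

t = ('app', ('const', 'i'), ('const', 'j'))
print(evaluate(t, app))                                   # 'j'
print(evaluate(('app', t, ('const', 'j')), app))          # None: (ij)j is undefined here
```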
A substitution operation is also defined in the natural way: if $t$ is a term, $x$ is a variable and $u$ is another term, then $t[u/x]$ denotes the term $t$ with all occurrences of $x$ replaced by $u$.
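Continuing with the same tuple representation of terms (again just an illustrative sketch), substitution is a direct structural recursion; since raw terms contain no binders, no renaming is needed.

```python
def subst(t, x, u):
    """Return t[u/x]: the term t with every occurrence of the variable x replaced by the term u."""
    tag = t[0]
    if tag == 'var':
        return u if t[1] == x else t
    if tag == 'const':
        return t
    _, t1, t2 = t
    return ('app', subst(t1, x, u), subst(t2, x, u))

# Example: (x a)[b/x] = b a, for constants a and b.
print(subst(('app', ('var', 'x'), ('const', 'a')), 'x', ('const', 'b')))
# ('app', ('const', 'b'), ('const', 'a'))
```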
The partial applicative structure $A$ is said to be combinatory complete or functionally complete if, for every term $t(x_0, \dots, x_n)$ (that is, a term $t$ whose variables are among $x_0, \dots, x_n$), there exists an element $a \in A$ such that:[2]: 27 [1]: 3
$a a_0 \dots a_{n-1} \downarrow$ for all $a_0, \dots, a_{n-1} \in A$; and
$a a_0 \dots a_n \simeq t(a_0, \dots, a_n)$ for all $a_0, \dots, a_n \in A$, where $t(a_0, \dots, a_n)$ denotes the closed term obtained by substituting $a_0, \dots, a_n$ for $x_0, \dots, x_n$ in $t$.
A partial combinatory algebra (pca) is a combinatory complete partial applicative structure. A total combinatory algebra (tca) is a pca whose application operation is total.
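As a concrete instance of this requirement (an illustrative example, not one taken from the references), applying the definition to the term $t(x_0, x_1) := x_1 x_0$ asks for an element $a \in A$ such that
\[
  a\,a_0 \downarrow
  \qquad\text{and}\qquad
  a\,a_0\,a_1 \simeq a_1\,a_0
  \qquad\text{for all } a_0, a_1 \in A,
\]
that is, an element which swaps its two arguments and whose application to a single argument is already defined.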
Informally, the combinatory completeness condition requires an analogue of the abstraction operation from λ-calculus to exist inside the pca.
Just as λ-terms can be translated to terms of the SKI combinator calculus by eliminating λ-abstractions in favour of combinators, pcas can be characterized by the existence of elements satisfying equations analogous to those of the S and K combinators. Note, however, that some care must be taken in the statement and proof since application is not always defined in a pca.
Theorem:[2]: 28 [1]: 3 A partial applicative structure $A$ is combinatory complete if and only if there exist two elements $K, S \in A$ such that, for all $a, b, c \in A$:
$Kab = a$ (in particular $Ka\downarrow$);
$Sab\downarrow$; and
$Sabc \simeq (ac)(bc)$.
For the proof, in the forward direction, if $A$ is combinatory complete, it suffices to apply the definition of combinatory completeness to the terms $t_K(x, y) := x$ and $t_S(x, y, z) := (xz)(yz)$ to obtain $K$ and $S$ with the required properties.
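Spelling this out (a direct unpacking of the completeness condition stated above), these two applications of the definition yield elements $K, S \in A$ with
\[
  Ka\downarrow, \quad Kab \simeq a, \qquad
  Sab\downarrow, \quad Sabc \simeq (ac)(bc)
  \qquad\text{for all } a, b, c \in A,
\]
and since $a$ is always defined, $Kab \simeq a$ means that $Kab$ is defined and equal to $a$; these are exactly the equations required of $K$ and $S$ in the theorem.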
It is the converse that involves abstraction elimination. Assume we have $K$ and $S$ as stated. Given a variable $x$ and a term $t$, we define a term $\langle x\rangle t$ whose variables are those of $t$ minus $x$, which plays a role similar to $\lambda x\cdot t$ in λ-calculus. The definition is by induction on $t$ as follows:[1]: 3 [2]: 28
$\langle x\rangle x := SKK$;
$\langle x\rangle t := Kt$ if $t$ is a constant or a variable different from $x$;
$\langle x\rangle (t_1 t_2) := S(\langle x\rangle t_1)(\langle x\rangle t_2)$.
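A minimal Python sketch of this abstraction-elimination procedure, assuming the tuple representation of terms used in the sketches above (the names K, S, app and abstract are this sketch's own, not notation from the references):

```python
# Terms: ('const', name) | ('var', name) | ('app', t1, t2).
K = ('const', 'K')
S = ('const', 'S')

def app(t1, t2):
    """Syntactic application of two terms."""
    return ('app', t1, t2)

def abstract(x, t):
    """Bracket abstraction <x>t, by induction on the structure of t."""
    if t == ('var', x):
        return app(app(S, K), K)            # <x>x := SKK (acts as an identity)
    if t[0] in ('const', 'var'):
        return app(K, t)                    # <x>t := Kt when t is a constant or another variable
    _, t1, t2 = t
    return app(app(S, abstract(x, t1)), abstract(x, t2))   # <x>(t1 t2) := S(<x>t1)(<x>t2)

# Example from the text below: <x>(ab) = S(Ka)(Kb) for constants a, b.
a, b = ('const', 'a'), ('const', 'b')
print(abstract('x', app(a, b)) == app(app(S, app(K, a)), app(K, b)))   # True
```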
Beware that the analogy between $\langle x\rangle t$ and $\lambda x\cdot t$ is not perfect. For example, the terms $(\langle x\rangle t)t'$ and $t[t'/x]$ are not equivalent in general: taking a variable $y$ different from $x$ and constants $a, b \in A$, we have $(\langle x\rangle y)(ab) = Ky(ab)$, which cannot be considered equivalent to $y$, because the latter always evaluates to $c$ when $y$ is replaced by a constant $c \in A$, while the former may not, since $ab$ may be undefined.[1]: 4–5
However, if $t'$ is a constant $a$, then $(\langle x\rangle t)a$ is indeed equivalent to $t[a/x]$, in the sense that substituting constants for all variables in these two terms gives the same result (with respect to $\simeq$).[1]: 4
Moreover, substituting constants for the variables of $\langle x\rangle t$ always yields a term that evaluates to a defined result, even if this is not the case for $t$ itself. For example, if $a, b \in A$ are two constants, the term $\langle x\rangle (ab)$ (abstracting a variable which does not appear) is equal to $S(Ka)(Kb)$. By the assumptions on $K$ and $S$, this is well-defined, even though $ab$ may not be.[1]: 4
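A short worked check of both facts for this example, using only the equations for $K$ and $S$ stated in the theorem:
\[
  \langle x\rangle (ab) = S(Ka)(Kb)\downarrow,
  \qquad\text{since } Ka\downarrow,\ Kb\downarrow \text{ and } Suv\downarrow \text{ for all } u, v \in A,
\]
\[
  S(Ka)(Kb)\,c \;\simeq\; (Kac)(Kbc) \;\simeq\; ab
  \qquad\text{for every } c \in A,
\]
so applying the abstraction to any constant $c$ recovers $ab$ (whenever $ab$ is defined), which is exactly the substitution property of the previous paragraph.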
These remarks imply that for every term $t(x_0, \dots, x_n)$, the value $a := \langle x_0\rangle \dots \langle x_n\rangle t\downarrow$ is well-defined and satisfies the two requirements of combinatory completeness.[1]: 4
The first Kleene algebra $\mathcal{K}_1$ consists of the set $\mathbb{N}$ with application $ab := \phi_a(b)$, where $\phi_a$ denotes the $a$-th partial recursive function in a standard Gödel numbering.[1]: 15 [2]: 29
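The following Python fragment is only a toy illustration of the idea behind this application operation, not an actual Gödel numbering: it fixes a handful of made-up program indices by hand and runs the $a$-th one on $b$, with None standing in for an undefined (diverging) computation.

```python
from typing import Optional

def phi(a: int, b: int) -> Optional[int]:
    """Toy stand-in for the a-th partial recursive function applied to b.
    Only a few indices are populated; all others are nowhere defined."""
    if a == 0:
        return b              # index 0: identity
    if a == 1:
        return b + 1          # index 1: successor
    if a == 2:
        return b // 2 if b % 2 == 0 else None   # index 2: halves even inputs, undefined on odd ones
    return None               # every other index: the empty (nowhere defined) function

def apply(a: int, b: int) -> Optional[int]:
    """Application in the spirit of K_1: a . b := phi_a(b), a partial operation."""
    return phi(a, b)

print(apply(1, 41))   # 42
print(apply(2, 3))    # None, i.e. undefined: application here is genuinely partial
```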
This pca can also be relativized to an oracle $D \subseteq \mathbb{N}$: we define a pca $\mathcal{K}_1^D$ with carrier $\mathbb{N}$ by setting $ab := \phi_a^D(b)$, where $\phi_a^D$ is the $a$-th partial recursive function with oracle $D$.[1]: 15 [2]: 30
We can form a pca (in fact a tca) by quotienting the set of closed (untyped) λ-terms by β-equivalence and taking the application to be the one inherited from λ-calculus.[1]: 23 [2]: 30
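For instance (a routine check, writing $[t]$ for the β-equivalence class of a closed λ-term $t$), application $[t][u] := [tu]$ is always defined, and the classes
\[
  K := [\lambda x.\lambda y.\,x], \qquad
  S := [\lambda x.\lambda y.\lambda z.\,(xz)(yz)]
\]
satisfy $K[t][u] = [t]$ and $S[t][u][v] = [(tv)(uv)]$ by β-reduction, so they witness the characterization of combinatory completeness given in the theorem above.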