# An Introduction to Primitive Recursive Functions

Before we get into the definition of primitive recursive functions, let me tell you the two surprising things about them.

1. Primitive recursive functions are more powerful than you'd expect from their definition. They can simulate almost all the functions you care about, including addition, multiplication, isPrime, nthPrime, and more.
2. Primitive recursive functions are less powerful than you'd expect from how many functions they can simulate. In particular, they're not Turing-complete; there exist computable functions which P.R. functions cannot simulate, although you have to look hard for them.

With that, let's get into the definition. Primitive recursive functions are built from five base functions, which we've written in JavaScript below. (If you need a quick JavaScript refresher, there's one in the appendix.)

```javascript
const z = (x) => 0 // zero function
// z(3) => 0

const s = (x) => x + 1 // successor function
// s(3) => 4

const p = (i) => (...x) => x[i] // projection ("selection") function
// p(3)(3, 1, 4, 1, 5) => 1 (the element at index 3)

const c = (g, ...hs) => (...x) => g(...hs.map(h => h(...x))) // composition function
// c(s, z)(5) => s(z(5)) => s(0) => 1
```

```javascript
const rho = (f, g) => (n, x) => { // primitive recursion function
  const self = rho(f, g)

  if (n == 0) {
    return f(x)
  } else {
    return g(self(n - 1, x), n - 1, x)
  }
}
```

The first four are natural, but the fifth (primitive recursion) is a bit harder to understand.

We construct a primitive recursion rho out of two functions: f, which acts as a base case, and g, which we apply on every loop. Each invocation of rho(f, g) then keeps track of n, the number of steps left, and x, the working variable(s).
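To get a feel for rho before the worked examples, here's a minimal sketch of a predecessor function (the name pred and this particular construction are our illustrative choice, not something defined above; the base functions are repeated so the snippet stands alone):

```javascript
const z = (x) => 0
const s = (x) => x + 1
const p = (i) => (...x) => x[i]
const rho = (f, g) => (n, x) => {
  const self = rho(f, g)
  if (n == 0) {
    return f(x)
  } else {
    return g(self(n - 1, x), n - 1, x)
  }
}

// pred(n, x) => n - 1, truncated at zero. The base case ignores x and
// returns 0; the recursive case projects out g's second argument, n - 1.
const pred = rho(z, p(1))
// pred(5, 0) => 4
// pred(0, 0) => 0
```

Note that pred discards the result of the recursive call entirely: p(1) only ever looks at the counter.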

Let's see some examples to internalize how it works. Consider add:

```javascript
const add = rho(p(0), c(s, p(0)))
```

The first argument of rho is called f and handles the base case. Here, f is p(0), so our base case will be p(0)(x), or just x (as invoked on line 5 of rho).

The second argument of rho is called g and handles the recursive case. Here, g is c(s, p(0)), which we can simplify:

```javascript
// equivalent forms of g:
const g = c(s, p(0))
// = (a, b, c) => s(p(0)(a, b, c))
// = (a, b, c) => s(a)
```

As invoked on line 7 of rho, g will resolve like this:

```javascript
// as invoked on line 7 of rho
g(self(n - 1, x), n - 1, x)
// = s(p(0)(self(n - 1, x), n - 1, x))
// = s(self(n - 1, x))
```

which amounts to add(a, b) = s(add(a-1, b)). Overall, then, we have

```javascript
// f handles the n == 0 case
// g handles the n > 0 case
const add = (n, x) => n == 0 ? x : s(add(n - 1, x))
```

Looking at add in action, we see these two working in concert:

```javascript
add(3, 5)

// self(3, 5)
//   Trying to compute g(self(2, 5), 2, 5)...
//   self(2, 5)
//     Trying to compute g(self(1, 5), 1, 5)...
//     self(1, 5)
//       Trying to compute g(self(0, 5), 0, 5)...
//       self(0, 5)
//         base case: f(5)
//       computing s(5)
//     computing s(6)
//   computing s(7)
// 8
```

The important thing to notice is that the first argument to self is always decreasing: (3, 5)...(2, 5)...(1, 5)...(0, 5). We get to apply g exactly n times.

Let's do another example to solidify this idea. How would we write multiplication?

```javascript
const mult = rho(..., ...)
```

The idea is to use repeated addition. The f function is the "base" function, and so ought to return zero. Our g is responsible for adding an x at every iteration.

```javascript
// we need to choose g so that the recursive case we have:
g(self(n - 1, x), n - 1, x)
// becomes the recursive case we want:
add(self(n - 1, x), x)
// so we compose add with the first and third arguments of g:
const g = c(add, p(0), p(2))
```

Combining them, we find a working primitive recursive mult function.

```javascript
const mult = rho(z, c(add, p(0), p(2)))

mult(3, 5)

// self(3, 5)
//   Trying to compute g(self(2, 5), 2, 5)...
//   self(2, 5)
//     Trying to compute g(self(1, 5), 1, 5)...
//     self(1, 5)
//       Trying to compute g(self(0, 5), 0, 5)...
//       self(0, 5)
//         base case: f(5) => 0
// 15
```
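The same pattern climbs one more rung of the ladder: exponentiation is repeated multiplication, exactly as multiplication is repeated addition. Here's a sketch (exp is our illustrative name; the earlier definitions are repeated so the block stands alone):

```javascript
const z = (x) => 0
const s = (x) => x + 1
const p = (i) => (...x) => x[i]
const c = (g, ...hs) => (...x) => g(...hs.map(h => h(...x)))
const rho = (f, g) => (n, x) => {
  const self = rho(f, g)
  if (n == 0) {
    return f(x)
  } else {
    return g(self(n - 1, x), n - 1, x)
  }
}
const add = rho(p(0), c(s, p(0)))
const mult = rho(z, c(add, p(0), p(2)))

// exp(n, x) computes x^n: the base case returns 1 (the successor of zero),
// and each recursive step multiplies the accumulator by x.
const exp = rho(c(s, z), c(mult, p(0), p(2)))
// exp(3, 2) => 8
// exp(0, 7) => 1
```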

So PR functions can clearly express for-loops. Sure enough, they can also express conditionals.

```javascript
const isZero = rho(c(s, z), z)
isZero(5) // 0
isZero(0) // 1
```

The trick is to exploit rho's built-in if-statement. If $n = 0$ we hit f, which is c(s, z) and returns 1. If $n \neq 0$, we hit g, which is just the zero function and returns 0.
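Swapping the roles of f and g gives the complementary sign function (often written sg in the literature; a sketch, with the definitions repeated so it runs on its own):

```javascript
const z = (x) => 0
const s = (x) => x + 1
const c = (g, ...hs) => (...x) => g(...hs.map(h => h(...x)))
const rho = (f, g) => (n, x) => {
  const self = rho(f, g)
  if (n == 0) {
    return f(x)
  } else {
    return g(self(n - 1, x), n - 1, x)
  }
}

// isZero with f and g swapped: n = 0 hits z and returns 0,
// while n > 0 hits c(s, z), which ignores its arguments and returns 1.
const sg = rho(z, c(s, z))
// sg(0) => 0
// sg(5) => 1
```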

We're halfway through our introduction to primitive recursive functions. Hopefully you're convinced that they're quite powerful: they can express for-loops and if-statements. You might begin to suspect that they're powerful enough to express any computable function. Perhaps surprisingly, you'd be wrong.

Let's examine an example of a function that isn't primitive recursive, and try to understand what breaks down. The function we'll examine is the Collatz Conjecture Checker function. Here it is in JavaScript:

```javascript
const checkCollatz = (n) => {
  if (n == 1) { return true }
  else if (n % 2 == 0) { return checkCollatz(n / 2) }
  else if (n % 2 == 1) { return checkCollatz(3 * n + 1) }
}
```

It checks whether, if you start at a given $n$ and then do $n \rightarrow \frac{n}{2}$ when $n$ is even and $n \rightarrow 3n + 1$ when $n$ is odd, you eventually reach $1$. The Collatz Conjecture (still unproven) claims that this sequence eventually reaches $1$ for all starting $n$. For example:

$$10 \rightarrow 5 \rightarrow 16 \rightarrow 8 \rightarrow 4 \rightarrow 2 \rightarrow 1$$

Let's try to write the Collatz Conjecture Checker as a P.R. function.

We begin our attempt to construct the Collatz Conjecture Checker by building nextCollatz, which maps each number to the next one in the sequence.  It shouldn't come as a surprise that subtraction (with non-negative outcome), division, and modulo are primitive recursive.
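For instance, here's one way truncated subtraction ("monus") might be sketched: iterate a predecessor function n times (pred and monus are our illustrative names, not definitions from the text; the base functions are repeated so the block runs on its own):

```javascript
const z = (x) => 0
const p = (i) => (...x) => x[i]
const c = (g, ...hs) => (...x) => g(...hs.map(h => h(...x)))
const rho = (f, g) => (n, x) => {
  const self = rho(f, g)
  if (n == 0) {
    return f(x)
  } else {
    return g(self(n - 1, x), n - 1, x)
  }
}

const pred = rho(z, p(1)) // pred(n) => n - 1, truncated at zero

// monus(n, x) => x - n, truncated at zero: apply pred to x, n times.
const monus = rho(p(0), c(pred, p(0)))
// monus(2, 5) => 3
// monus(7, 5) => 0
```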

```javascript
const nextCollatz = (x) => isZero(x % 2) * (x / 2) + (1 - isZero(x % 2)) * (3 * x + 1)

nextCollatz(3) // => 3 * 3 + 1 => 10
nextCollatz(4) // => 4 / 2 => 2
```

That went pretty well. Now we need to write the part of the checker that goes like this:

```javascript
while (x !== 1) {
  x = nextCollatz(x)
}
```

Perhaps we can do it with $\rho$. Recall the format of $\rho$:

```javascript
const rho = (f, g) => (n, x) => {
  const self = rho(f, g)

  if (n == 0) {
    return f(x)
  } else {
    return g(self(n - 1, x), n - 1, x)
  }
}
```

$\rho$ terminates or not based on the value of $n$, so we definitely need to hold our primary variable in $n$ instead of $x$. And then we'll try to squeeze our nextCollatz logic to operate on $n$.

The problem arises immediately. The only place where we recurse on rho happens on the 7th line of rho, where we call self(n - 1, x). But there, we don't have arbitrary control over the transformation that happens to $n$ (for example, to pass along nextCollatz($n$) instead of just $n-1$). Nor can we somehow squeeze the primary variable into $x$, which is passed along unchanged. Our only chance to apply transformations happens after self returns, and by extension after the total number of recursive steps (exactly $n$ of them) have already been determined.

That's what dooms the attempt to build a primitive recursive Collatz Conjecture Checker, which needs to decide after every step whether to take another iteration.

What we notice is that if you call a P.R. function with $n=100$, it's destined to do exactly 100 (primary) recursions. And since there's no way to change that dynamically, P.R. functions can simulate for loops but not while loops.

```javascript
const unk = rho(f, g) // regardless of the value of f and g
unk(100, 100) // unk calls itself (and g) 100 times.
```
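One consequence worth spelling out: a *bounded* Collatz checker is within reach, because "try at most k steps" is a for-loop. Here's a plain-JavaScript sketch of the idea (reachesOneWithin is our name; a faithful P.R. encoding would thread the step budget through rho's n):

```javascript
const nextCollatz = (x) => x % 2 == 0 ? x / 2 : 3 * x + 1

// Check whether x reaches 1 in at most k Collatz steps. The iteration
// count is fixed up front, which is exactly what P.R. functions allow.
const reachesOneWithin = (k, x) => {
  for (let i = 0; i < k; i++) {
    if (x == 1) { return 1 }
    x = nextCollatz(x)
  }
  return x == 1 ? 1 : 0
}
// reachesOneWithin(6, 10) => 1   (10 → 5 → 16 → 8 → 4 → 2 → 1)
// reachesOneWithin(3, 10) => 0   (after 3 steps we're only at 8)
```

The unbounded checker is what's out of reach: it would have to choose k before knowing how many steps the sequence needs.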

## Appendix

### JavaScript refresher

Two non-obvious features of JavaScript come up in our definitions. The first is curried functions: functions that take their arguments one at a time, with each stage separated by => and invoked with multiple sets of parens:

```javascript
const add = a => b => a + b
const addFive = add(5)
// addFive(4) => 9
```
The second is the spread operator ..., which has two meanings. A ... in a parameter list collects a variable number of arguments into an array:

```javascript
const intoList = (...x) => x
// intoList(3, 4, 5) => [3, 4, 5]
```
Whereas a ... in a function invocation spreads an array into separate arguments:

```javascript
const addThree = (x, y, z) => x + y + z
// addThree(...[4, 5, 6]) => 15
```
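Both meanings of ... appear together in the composition function c from the main text. As an illustrative check (demo is our name):

```javascript
const c = (g, ...hs) => (...x) => g(...hs.map(h => h(...x)))
// (...hs) and (...x) collect arguments into arrays;
// g(...) and h(...x) spread them back out.

const demo = c((a, b) => a + b, x => x * 2, x => x + 1)
// demo(5) => (5 * 2) + (5 + 1) => 16
```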