$(n\,\omega)$ vs. $(\omega\, n)$

Marvin Borner

2023-07-28

General assumptions in this post

While procrastinating by analyzing various rapidly growing sequences in pure lambda calculus, I stumbled upon $(n\,\omega)$ and $(\omega\, n)$. These two sequences are very similar yet behave very differently, which I found interesting – so maybe someone else will, too.

$n$, $\omega$?!

The $\omega$ combinator is defined as $\lambda(0\ 0)$. It therefore duplicates any term it is applied to. $(\omega\ \omega)$ will result in an infinite loop.

For this sequence to work, the number $n$ is encoded in lambda calculus using the Church encoding. Church numerals are defined by an $n$-fold composition of their first argument (the variable bound by the outermost abstraction): $$\langle 2\rangle\equiv\lambda\lambda(1\ (1\ 0))$$ Similar to $\omega$, these terms will also duplicate their first argument – this time $n$ times.
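To make the reductions below easy to replay, here is a minimal sketch of de Bruijn terms with a leftmost-outermost (normal-order) reducer in Python. All names in it (`Var`, `Abs`, `App`, `normalize`, `church`) are my own illustrative choices; the actual measurements in this post come from bruijn, not from this snippet.

```python
# A minimal sketch of de Bruijn terms with a normal-order reducer,
# enough to replay the reductions in this post.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    idx: int        # de Bruijn index

@dataclass(frozen=True)
class Abs:
    body: "object"  # λ body

@dataclass(frozen=True)
class App:
    fun: "object"
    arg: "object"

def shift(t, d, c=0):
    """Shift free indices >= c by d."""
    if isinstance(t, Var):
        return Var(t.idx + d) if t.idx >= c else t
    if isinstance(t, Abs):
        return Abs(shift(t.body, d, c + 1))
    return App(shift(t.fun, d, c), shift(t.arg, d, c))

def subst(t, s, j=0):
    """Substitute s for index j in t."""
    if isinstance(t, Var):
        return s if t.idx == j else t
    if isinstance(t, Abs):
        return Abs(subst(t.body, shift(s, 1), j + 1))
    return App(subst(t.fun, s, j), subst(t.arg, s, j))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if isinstance(t, App):
        if isinstance(t.fun, Abs):
            return shift(subst(t.fun.body, shift(t.arg, 1)), -1)
        f = step(t.fun)
        if f is not None:
            return App(f, t.arg)
        a = step(t.arg)
        return App(t.fun, a) if a is not None else None
    if isinstance(t, Abs):
        b = step(t.body)
        return Abs(b) if b is not None else None
    return None

def normalize(t):
    while (n := step(t)) is not None:
        t = n
    return t

omega = Abs(App(Var(0), Var(0)))  # λ(0 0)

def church(n):                    # ⟨n⟩ ≡ λλ(1 (1 … (1 0)))
    body = Var(0)
    for _ in range(n):
        body = App(Var(1), body)
    return Abs(Abs(body))
```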

$(n\,\omega)$

In this reduction, $\omega$ gets cloned $n$ times. After that, $\omega$ duplicates itself repeatedly.

$$
\begin{align*}
(\langle2\rangle\omega)\equiv&\ (\lambda\lambda(1\ (1\ 0))\ \lambda(0\ 0))\\
\leadsto&\ \lambda(\lambda(0\ 0)\ (\lambda(0\ 0)\ 0))\\
\leadsto&\ \lambda(\lambda(0\ 0)\ (0\ 0))\\
\leadsto&\ \lambda((0\ 0)\ (0\ 0))
\end{align*}
$$

As $\omega$ duplicates itself $n-1$ times, the growth of $(n\,\omega)$ is in $\mathcal{O}(2^n)$.

For higher $n$, the reduced term will look like this: $$\lambda\underbrace{(((0\ 0)^2)^2\dots)^2}_{n-1\ \mathrm{times}}$$
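A quick way to watch the doubling, reusing the hypothetical helpers from the sketch above (the node count of the normal form works out to $2^{n+1}$):

```python
# Size of the normal form of (⟨n⟩ ω) doubles with each increment of n.
def size(t):
    if isinstance(t, Var):
        return 1
    if isinstance(t, Abs):
        return 1 + size(t.body)
    return 1 + size(t.fun) + size(t.arg)

for n in range(1, 7):
    print(n, size(normalize(App(church(n), omega))))
# 1 4, 2 8, 3 16, 4 32, 5 64, 6 128
```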

Fun fact: The exact bit count of its binary lambda calculus encoding is $2^{n+2}$.
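One way to check this, again with the helpers sketched above: the standard binary lambda calculus encoding writes `00` for an abstraction, `01` for an application, and $1^{i+1}0$ for the variable with de Bruijn index $i$.

```python
# Encode a term in binary lambda calculus and compare bit counts
# against the 2^(n+2) claim.
def blc(t):
    if isinstance(t, Var):
        return "1" * (t.idx + 1) + "0"
    if isinstance(t, Abs):
        return "00" + blc(t.body)
    return "01" + blc(t.fun) + blc(t.arg)

for n in range(1, 7):
    print(n, len(blc(normalize(App(church(n), omega)))), 2 ** (n + 2))
# both columns agree: 8, 16, 32, 64, …
```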

$(\omega\, n)$

This term is just the inverted application of $(n\,\omega)$. Therefore, $n$ first gets duplicated by $\omega$, and then $n$ clones itself $n$ times:

$$
\begin{align*}
(\omega\langle2\rangle)\equiv&\ (\lambda(0\ 0)\ \lambda\lambda(1\ (1\ 0)))\\
\leadsto&\ (\lambda\lambda(1\ (1\ 0))\ \lambda\lambda(1\ (1\ 0)))\\
\leadsto&\ \lambda(\lambda\lambda(1\ (1\ 0))\ (\lambda\lambda(1\ (1\ 0))\ 0))\\
\leadsto&\ \lambda(\lambda\lambda(1\ (1\ 0))\ \lambda(1\ (1\ 0)))\\
\leadsto&\ \lambda\lambda(\lambda(2\ (2\ 0))\ (\lambda(2\ (2\ 0))\ 0))\\
\leadsto&\ \lambda\lambda(\lambda(2\ (2\ 0))\ (1\ (1\ 0)))\\
\leadsto&\ \lambda\lambda(1\ (1\ (1\ (1\ 0))))\\
\equiv&\ \langle4\rangle
\end{align*}
$$

Due to its repeated internal application, $(\omega\, n)$ turns out to represent $n^n$: $$\lambda\lambda\underbrace{1\circ\dots\circ1}_{n^n-1\ \mathrm{times}}\circ(1\ 0)$$
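Reading the normal form back as a number, with the same hypothetical helpers as before, confirms $n^n$:

```python
# Read back a Church numeral: strip λλ, then count applications of 1.
def unchurch(t):
    t = t.body.body
    k = 0
    while isinstance(t, App):
        k, t = k + 1, t.arg
    return k

for n in range(1, 5):
    print(n, unchurch(normalize(App(omega, church(n)))))
# 1 1, 2 4, 3 27, 4 256
```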

Fun fact: The exact bit count of its binary lambda calculus encoding is $5\cdot n^n + 6$. The factor $5$ is the result of encoding repeated applications of $1$s.
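The same `blc` helper from above reproduces this count: each application of a `1` costs `01` plus `110`, i.e. five bits, and the surrounding $\lambda\lambda\dots0$ accounts for the remaining six.

```python
# Compare encoded bit counts of (ω ⟨n⟩) against 5·n^n + 6.
for n in range(1, 5):
    print(n, len(blc(normalize(App(omega, church(n))))), 5 * n ** n + 6)
# 1 11, 2 26, 3 141, 4 1286
```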

$(\omega_i\, n)$ and beyond

Adding more applications to the $\omega$ combinator will result in a higher power tower. Using $(\omega_3\, n)$ with $\omega_3=\lambda((0\ 0)\ 0)$, for example, will reduce to $n^{n^n}$. $(\omega_3\,\langle 3\rangle)$ already can't easily be normalized by most traditional reducers – its normal form alone would consist of $3^{27}\approx 7.6\cdot 10^{12}$ applications.
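With the helpers from above, the small case $(\omega_3\,\langle 2\rangle)$ is still easy to check:

```python
# ω₃ ≡ λ((0 0) 0); (ω₃ ⟨2⟩) should reduce to ⟨2^(2^2)⟩ = ⟨16⟩.
omega3 = Abs(App(App(Var(0), Var(0)), Var(0)))
print(unchurch(normalize(App(omega3, church(2)))))  # 16
```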

Detour: After getting caught in the depths of Googology, I now know that these power towers have quite interesting notations. Using Knuth's up-arrow notation, $n^{n^n}$ can be written as $n\uparrow\uparrow3$. One can escalate this even further: $n\uparrow\uparrow\uparrow3$ would equal $n\uparrow\uparrow(n\uparrow\uparrow n)$. Graham's number abuses this notation to create an unimaginably huge number. Yet, to come back to lambda calculus, numbers even bigger than Graham's number can be represented in just 8 bytes of binary encoded lambda calculus! See this Googology entry and this codegolf post.
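For the curious, a tiny up-arrow evaluator (my own helper, only usable for very small inputs) sanity-checks the identity above for $n=2$:

```python
# a↑^k b: k=1 is exponentiation; otherwise peel one arrow recursively.
def arrow(a, k, b):
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, k - 1, arrow(a, k, b - 1))

def tower(a, b):  # a↑↑b as an explicit power tower
    return 1 if b == 0 else a ** tower(a, b - 1)

print(arrow(2, 3, 3))         # 2↑↑↑3        = 65536
print(tower(2, tower(2, 2)))  # 2↑↑(2↑↑2)    = 65536
```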

However, these numbers are still basically zero compared to the growth of busy beavers. One of my next posts will discuss some interesting aspects of the busy beaver of lambda calculus, $\mathrm{BB}_\lambda$.

Thanks for reading. Suggest improvements or fixes via email. Support on Ko-fi.

Methods used in this article include bruijn’s :time and :length commands.

