Bowers' Arrays are very useful and interesting tools for producing absolutely immense numbers. This note will tell you about this insanely fast-growing function, so you too can experience the fun of really big numbers.

Anyway, let's start with the simplest array, \(\{ \}\) (the empty array). This is defined as:

\(\{ \}\) = \(1\)

Not very big, is it? Don't run away yet, and instead observe the might of a one entry array!

\(\{ a \}\) = \(a\)

I'm not really making a good impression, am I? Don't worry, because they get bigger.

\(\{ a,b\}\) = \({ a }^{ b }\)

That's more like it! These three arrays (\(\{ \}\), \(\{ a\}\), and \(\{ a,b\}\)), along with their definitions, make up Rule 1 of Bowers' Arrays. Rule 1 is used to deal with arrays of length 0, 1 or 2.

Rule 2 is simply \(\{ a,b,\dots ,n,1\}\) = \(\{ a,b,\dots ,n\}\). In other words, any trailing \(1\)s in the array can and must be removed without affecting its value.

Rule 3 is \(\{ a,1,\dots ,n\}\) = \(a\). If the second entry in the array is a \(1\), then the overall value of the array is \(a\). This is somewhat similar to how \({ a }^{ 1 }\) = \(a\).
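As a quick check of Rules 2 and 3 with concrete numbers: \(\{ 3,4,1,1\}\) = \(\{ 3,4\}\) = \({ 3 }^{ 4 }\) = \(81\) (Rule 2 strips the trailing 1s, then Rule 1 applies), and \(\{ 5,1,9\}\) = \(5\) (Rule 3, since the second entry is a \(1\)).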

Rule 4 is where the more interesting behaviour of Bowers' Arrays comes in, and it is much more complex than the previous three rules. \(\{ a,b,1,\dots ,1,m,\dots ,n\}\) = \(\{ a,a,\dots ,a,\{ a,b-1,1,\dots ,1,m,\dots ,n\} ,m-1,\dots ,n\}\).

In natural language: if the third entry is \(1\), then the final \(1\) in the unbroken chain of 1s starting at the third entry (e.g. the 6th entry in \(\{ a,b,1,1,1,1,n\}\), and the 3rd entry in \(\{ a,b,1,m,1,n\}\)) is replaced by a copy of the entire array with its second entry decremented by \(1\). The entry immediately after the chain of 1s is decremented by \(1\), and all entries before the copy are replaced with \(a\).

As a simple example of the fourth rule in action, \(\{ 3,3,1,3,1,3\}\) = \(\{ 3,3,\{ 3,2,1,3,1,3\} ,2,1,3\}\).

The fifth and final rule is that \(\{ a,b,c,\dots ,n\}\) = \(\{ a,\{ a,b-1,c,\dots ,n\} ,c-1,\dots ,n\}\). In natural language: if rules one through four do not apply, then the second entry is replaced by a copy of the array with its second entry decremented by \(1\), and the third entry is decremented by \(1\).
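Because the five rules are purely recursive, they translate almost line-for-line into code. Below is a minimal Python sketch of an evaluator (the name `bowers` is my own choice, not standard notation, and this is for understanding the rules only; even an input as small as \(\{ 3,3,3\}\) is far too large to ever finish computing, so only try it on very small arrays).

```python
def bowers(arr):
    """Evaluate a linear Bowers array [a, b, c, ...] by the five rules.

    Warning: a sketch for understanding the rules, not a practical
    calculator -- even an input like [3, 3, 3] is astronomically large.
    """
    arr = list(arr)
    # Rule 2: remove trailing 1s.
    while arr and arr[-1] == 1:
        arr.pop()
    # Rule 1: arrays of length 0, 1 or 2.
    if len(arr) == 0:
        return 1
    if len(arr) == 1:
        return arr[0]
    if len(arr) == 2:
        return arr[0] ** arr[1]
    a, b = arr[0], arr[1]
    # Rule 3: if the second entry is 1, the value is a.
    if b == 1:
        return a
    # Rule 4: the third entry is 1.
    if arr[2] == 1:
        # Find the last 1 in the unbroken chain starting at index 2.
        # (A non-1 entry must follow, since trailing 1s were stripped.)
        k = 2
        while arr[k + 1] == 1:
            k += 1
        nested = list(arr)
        nested[1] = b - 1  # copy of the array with the second entry decremented
        # The last 1 becomes the nested copy, the entry after the chain is
        # decremented, and everything before the copy becomes a.
        return bowers([a] * k + [bowers(nested), arr[k + 1] - 1] + arr[k + 2:])
    # Rule 5: replace the second entry by a copy of the array with its
    # second entry decremented, and decrement the third entry.
    inner = list(arr)
    inner[1] = b - 1
    return bowers([a, bowers(inner), arr[2] - 1] + arr[3:])

print(bowers([2, 3, 2]))   # 16
print(bowers([3, 3]))      # 27
```

For instance, \(\{ 2,3,2\}\) = \(\{ 2,\{ 2,2,2\} ,1\}\) = \(\{ 2,4\}\) = \(16\), which the evaluator confirms.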

As a quick task, try simplifying a small array like \(\{ 3,3,3\} \) by repeated application of these rules. You will soon find that it becomes unmanageably huge. And if a three entry array with small numbers in it already makes such a huge number, imagine how big a ten entry array full of tens would be!
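If you would like a hint, here are the first few steps of that simplification. By Rule 5, \(\{ 3,3,3\}\) = \(\{ 3,\{ 3,2,3\} ,2\}\). The inner array simplifies as \(\{ 3,2,3\}\) = \(\{ 3,\{ 3,1,3\} ,2\}\) = \(\{ 3,3,2\}\) = \(\{ 3,\{ 3,2,2\} ,1\}\) = \(\{ 3,\{ 3,2,2\}\}\), and since \(\{ 3,2,2\}\) = \(\{ 3,\{ 3,1,2\}\}\) = \(\{ 3,3\}\) = \(27\), this is \(\{ 3,27\}\) = \({ 3 }^{ 27 }\) = \(7625597484987\). So \(\{ 3,3,3\}\) = \(\{ 3,7625597484987,2\}\), which works out to a power tower of \(7625597484987\) threes.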

## Comments

What got you interested in Bowers' Arrays? They are not a topic that people usually come across. – Calvin Lin Staff · 2 years, 11 months ago


I was once a fan of higher dimensional shapes, and in my googling for more information I stumbled across Bowers' website. I explored that website a bit, and soon discovered the Infinity Scrapers page. From there, I was hooked. – Thomas Jones · 2 years, 11 months ago

Question - what if the second entry is an array?

Like, what about this: {3, {3, 2, 3}, 2}

Just making sure I got the idea down for this. – Yitz Deng · 2 years, 11 months ago


You can't simplify the whole array yet. But you can simplify the inner array to {3, {3, 1, 3}, 2} = {3, 3, 2}. – Ivan Koswara · 2 years, 11 months ago

Do you know if Bowers' arrays are always computable and do they always (in principle) end up yielding a number? What might I use Bowers' arrays to do (other than construct fantastically large quantities)? – Josh Silverman Staff · 2 years, 11 months ago


For the first, yes, as long as each entry is greater than or equal to 1. For the second, I haven't a clue. That one's left as an exercise for the reader. – Thomas Jones · 2 years, 11 months ago

I think he is referring to Turing computability. This array computation seems to have a remote connection with the computability of the Ackermann function. – Abhishek Sinha · 2 years, 11 months ago