Category Archives: Numbers

What are the p-adic numbers?

The p-adic numbers are a completion of the rational numbers with respect to the p-adic norm, and they form a non-Archimedean field. They stand in contrast to the real numbers, which are the completion of the rationals with respect to the ordinary absolute value and which do satisfy the Archimedean property. The p-adic numbers were first described in 1897 by Kurt Hensel and later generalized in the early 20th century by József Kürschák, which paved the way for a myriad of mathematical work involving the p-adics in the century since.

To start talking about the p-adics it makes sense to start with the Archimedean property, which, as mentioned above, the p-adics do not satisfy. First we need an absolute value function, or valuation, |·|, which gives some notion of magnitude to an element of a field (it is a map from the field to the non-negative real numbers). Such a function must send the zero element, and only the zero element, to zero, and the valuation of a product of two elements must equal the product of their individual valuations. The last condition (which determines whether or not the valuation is Archimedean) is that there is a constant C, independent of the choice of x, such that whenever |x| ≤ 1 we have |1 + x| ≤ C. If this condition holds with C = 1, then the valuation satisfies the ultrametric inequality: the valuation of a sum of two elements is less than or equal to the maximum of the valuations of each element, |x + y| ≤ max(|x|, |y|). A valuation satisfying the ultrametric inequality is called non-Archimedean; otherwise it is Archimedean.

While this is a bit opaque, it makes more sense when we move on to defining Archimedean fields: a field with an associated valuation is Archimedean if, for every non-zero element x of the field, there exists some natural number n such that |x + x + ⋯ + x| (n copies of x) > 1. Otherwise, the field is non-Archimedean and the ultrametric inequality holds. Roughly, this means that in a nice Archimedean world where we can measure distance, if we keep walking forward we can get as far away as we want. In a non-Archimedean world, no matter how many steps we take, we can never get more than a bounded distance from where we started.

Now that that’s out of the way and (hopefully) the weirdness of a non-Archimedean world has been established, it’s time to talk about the p-adics. Any non-zero rational number x may be expressed in the form x = p^r(a/b), where a and b are integers relatively prime to some fixed prime p and r is an integer. Using this, the p-adic norm of x is defined as |x| = p^(-r), which is non-Archimedean. For example, when p = 3, |6| = |2·3| = 1/3, |9| = |3^2| = 1/9, and |6 + 9| = |15| = |3·5| = 1/3 ≤ max(1/3, 1/9). Or when p = 5, |4/75| = |4/(5^2·3)| = 25 and |13/250| = |13/(2·5^3)| = 125, while |4/75 + 13/250| = |79/750| = |79/(2·3·5^3)| = 125 ≤ max(25, 125). So now that we have this, we can proceed exactly as in the construction of the real numbers from the ordinary absolute value and define Q_p as the set of equivalence classes of Cauchy sequences with respect to the p-adic norm. After some work it can be shown that every element of Q_p can be written uniquely as Σ_{k=m}^∞ a_k p^k, where a_m does not equal zero, each a_k lies in {0, 1, …, p−1}, and m may be any integer (possibly negative).
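The p-adic norm is easy to compute mechanically: strip out the powers of p from numerator and denominator. Here is a small sketch in Python (the helper name p_adic_norm is my own), checking the 3-adic examples above and the ultrametric inequality:

```python
from fractions import Fraction

def p_adic_norm(x, p):
    """p-adic norm of a rational x: write x = p**r * (a/b) with a, b
    coprime to p; the norm is p**(-r), and |0| = 0 by convention."""
    if x == 0:
        return Fraction(0)
    x = Fraction(x)
    r = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # factors of p in the numerator raise r
        num //= p
        r += 1
    while den % p == 0:   # factors of p in the denominator lower r
        den //= p
        r -= 1
    return Fraction(1, p**r) if r >= 0 else Fraction(p**(-r))

# The 3-adic examples from the text:
assert p_adic_norm(6, 3) == Fraction(1, 3)    # 6 = 2*3
assert p_adic_norm(9, 3) == Fraction(1, 9)    # 9 = 3^2
assert p_adic_norm(15, 3) == Fraction(1, 3)   # 6 + 9 = 3*5
# Ultrametric inequality: |x + y| <= max(|x|, |y|)
assert p_adic_norm(15, 3) <= max(p_adic_norm(6, 3), p_adic_norm(9, 3))
```

The same function handles the 5-adic examples: p_adic_norm(Fraction(4, 75), 5) gives 25 and p_adic_norm(Fraction(13, 250), 5) gives 125.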

The most common use of p-adics I found was in showing the existence (or lack thereof) of rational or integer solutions to problems. For example, the Hasse principle (also known as the local-global principle) was developed by Helmut Hasse in the 1920s and attempts to give a partial converse to the statement that if a polynomial equation of the form Σa_ij x^i y^j + Σb_i x^i + c = 0 has a rational solution, then it has a solution in every completion of Q. The Hasse principle asserts that if such a polynomial has a solution in R and in every Q_p, then it has a solution in Q. An example of this is x^2 − 2 = 0, which has the (irrational) solution √2 in R. However, it has no solution in Q_5, and therefore it has no solution in Q, which we know to be true. Another fairly interesting use of the p-adics is in transferring standard real or complex polynomial equations to their tropical counterparts (the tropical semiring is the reals with an added infinity element under the operations of addition and min (or max)), a process which runs into issues because the ultrametric inequality is required.
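Why does x^2 − 2 = 0 have no solution in Q_5? A solution would in particular give a square root of 2 modulo 5, and a quick exhaustive check (a sketch, not a full Hensel's-lemma argument) shows none exists:

```python
# x^2 = 2 can have a solution in Q_5 only if 2 is a square modulo 5.
# Checking all residues shows it is not.
squares_mod_5 = {x * x % 5 for x in range(5)}
assert squares_mod_5 == {0, 1, 4}
assert 2 not in squares_mod_5  # so x^2 - 2 = 0 has no solution in Q_5
```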



Leonardo of Pisa – The Great Fibonacci

Figure 1-Fibonacci. Image: Public domain, via Wikimedia Commons.

Most mathematically inclined people are familiar with the famous and unique Fibonacci sequence. Defined by the recurrence relation (*) F_n = F_(n-1) + F_(n-2) with initial values F_1 = 1 and F_2 = 1 (or sometimes F_0 = 1 and F_1 = 1), the Fibonacci sequence is an integer sequence (1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …) with many remarkable mathematical and real-world applications. However, it seems that few are as well informed on the man behind this sequence as they are on the sequence itself. Did you know that Fibonacci didn’t even discover the sequence? Of course not! Predating Fibonacci by almost a century, the so-called “Fibonacci sequence” was actually the brainchild of Indian mathematicians interested in poetic forms and meter who, through studying the arithmetic properties of certain linguistic sequences and syllable counts, derived a great deal of insight into some of the most fascinating mathematical patterns known today. But add a little bit of time (a few hundred years), some historical distortion, inaccurate accreditation,[1] and a healthy dose of blind western ethnocentrism, and voila! Every high school kid in America now thinks there is a connection between Fibonacci and pizza. Or is it Pisa? (That’s a pun, laugh.) While often given more credit than he deserves for the “discovery” of the sequence, Fibonacci was nonetheless an instrumental player in the development of arithmetic sequences, the spread of emerging new ideas, and the advancement of mathematics as a whole. We thus postpone discussion of Fibonacci’s sequence – don’t worry, we shall return – to examine some of the other significant and often overlooked contributions of the “greatest European mathematician of the middle ages.”[1]

Born around the year 1175 in Pisa, Italy, Leonardo of Pisa (more commonly known as Fibonacci) would have been 840 years old this year! (Can you guess the two indexing numbers between which Fibonacci’s age falls?[2]) The son of a customs officer, Fibonacci was raised in a North African education system under the influence of the Moors.[3] Fibonacci’s fortunate upbringing and education gave him the opportunity to visit many different places along the Mediterranean coast. It is during these travels, historians believe, that Fibonacci may have first developed an interest in mathematics and come into contact with alternative arithmetic systems. Among these was the Hindu-Arabic number system – the positional number system most commonly used in mathematics today. We owe Fibonacci a great deal of respect here, for prior to the introduction of the Hindu-Arabic system, the predominant number system in Europe relied on the far more cumbersome Roman numerals. It is interesting to note that while the Hindu-Arabic system may have been introduced to Europe as early as the 10th century in the Codex Vigilanus, it was Fibonacci who, aided later by the spread of printing, helped the new system gain broad acceptance. In his book Liber abbaci,[4] Fibonacci explains how the arithmetic operations (addition, subtraction, multiplication, and division) are carried out in the new system and the advantages that come with its adoption.

Figure 2-Golden spiral. Image: Weisstein, Eric W. "Golden Spiral." From MathWorld--A Wolfram Web Resource.

Whereas the number system most familiar to us uses the relative position of digits (i.e., the 1’s, 10’s, 100’s, 1000’s, … place) to represent variable quantities, Roman numerals rely on a set of standard symbols which, in combination with others, can be used to express any desired quantity. The obvious problem with this approach is that it severely limits the numbers that can be reasonably represented by the given set of symbols. For example, the concise representation of the number four hundred seventy-eight in the Hindu-Arabic system is simply 478, in which “4” is in the hundreds place, “7” is in the tens place, and “8” is in the ones place. In the Roman numeral system, however, this same number takes the form CDLXXVIII. As numbers grow arbitrarily large, so does the complexity of their Roman numeral representation. The adoption of the Hindu-Arabic number system was, in large part, the result of Fibonacci’s publications and public support for this new way of thinking. Can you imagine trying to do modern mathematical analysis with numbers as clunky as MMMDCCXXXVIII??? Me neither. Thanks, Fibonacci!
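Even mechanically decoding Roman numerals shows off their clunkiness: you cannot just read off place values, you have to scan for subtractive pairs. A small sketch of a Roman-numeral-to-integer converter (the function name is my own), checking the two examples above:

```python
ROMAN_VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50,
                'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    """Subtractive pairs (IV, CD, ...) subtract; everything else adds."""
    total = 0
    values = [ROMAN_VALUES[ch] for ch in numeral]
    for i, v in enumerate(values):
        # A symbol smaller than the one after it is subtracted (e.g., CD = 400).
        if i + 1 < len(values) and v < values[i + 1]:
            total -= v
        else:
            total += v
    return total

assert roman_to_int('CDLXXVIII') == 478
assert roman_to_int('MMMDCCXXXVIII') == 3738
```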

Fibonacci’s other works include publications on surveying techniques, area and volume measurement, Diophantine equations, commercial bookkeeping, and various contributions to geometry.[4] But among these works nothing stands out more than Fibonacci’s sequence – yes, we have returned! Among the more interesting mathematical properties of Fibonacci’s sequence is undoubtedly its connection to the golden ratio (defined shortly). To illustrate, we look momentarily at the ratios of successive Fibonacci numbers. Beginning with F_1 = 1 and F_2 = 1, we see that the ratio F_2/F_1 = 1. Continuing in this manner using the recurrence relation (*) from above or any suitable Fibonacci table, we find that F_3/F_2 = 2, F_4/F_3 = 3/2, F_5/F_4 = 5/3, F_6/F_5 = 8/5, F_7/F_6 = 13/8, F_8/F_7 = 21/13, … As the indexing number tends to infinity, the ratio of successive terms converges to the value 1.6180339887… (the golden ratio), denoted by the Greek letter φ (phi). We may thus concisely represent this convergent value by the expression φ = lim n→∞ (F_(n+1)/F_n). Studied extensively, the golden ratio is a special value appearing in many areas of mathematics and in everyday life. Intimately connected to the concept of proportion, the golden ratio (sometimes called the golden proportion) is often viewed as the optimal aesthetic proportion of measurable quantities, making it an important feature in fields including architecture, finance, geometry, and music. Perhaps surprisingly, the golden ratio has even been documented in nature, with pine cones, shells, trees, ferns, crystal structures, and more all appearing to have physical properties related to the value of φ (e.g., the arrangement of branches around the stems of certain plants seems to follow the Fibonacci pattern). While an interesting number no doubt, we must not forget that mathematics is the business of patterns, and all too often we draw conclusions and make big-picture claims that are less supported by evidence and facts than we may believe.
There is, in fact, a lot of “woo” behind the golden ratio, and the informed reader is encouraged to be wary of unsubstantiated claims and grandiose connections to the universe. It is also worth mentioning that, using relatively basic linear algebra techniques, it is possible to derive a closed-form solution for the n-th Fibonacci number.
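The convergence of the successive ratios described above is easy to check numerically; a quick sketch in Python:

```python
from math import sqrt

# Checking numerically that ratios of successive Fibonacci numbers
# approach the golden ratio phi = (1 + sqrt(5)) / 2.
phi = (1 + sqrt(5)) / 2

a, b = 1, 1  # F_1, F_2
for _ in range(40):
    a, b = b, a + b  # step the recurrence F_n = F_(n-1) + F_(n-2)

ratio = b / a
assert abs(ratio - phi) < 1e-12
```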

Figure 3-Computing the 18th Fibonacci Number in Mathematica.


Omitting the details (see link for a thorough derivation), the n-th Fibonacci number may be computed directly using the formula F_n = (φ^(n+1) + (-1)^(n-1)/φ^(n-1)) / (φ^2 + 1).[5] While initially clunky in appearance, this formula is incredibly useful for determining any desired Fibonacci number as a function of the indexing value n. For example, the 18-th Fibonacci number may be calculated as F_18 = (φ^19 + (-1)^17/φ^17) / (φ^2 + 1) = 2584. Comparing this value to a list of Fibonacci numbers and to a Mathematica calculation (see picture above), we see that the 18-th Fibonacci number is, indeed, 2584. Without having to determine all previous numbers in the sequence, this formula allows us to calculate any desired value in the sequence directly, saving substantial amounts of time and processing power.
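The closed form can be checked against the recurrence in a few lines of Python (function names mine; round() absorbs floating-point noise):

```python
from math import sqrt

# F_n = (phi**(n+1) + (-1)**(n-1) / phi**(n-1)) / (phi**2 + 1),
# compared against the recurrence F_n = F_(n-1) + F_(n-2).
phi = (1 + sqrt(5)) / 2

def fib_closed(n):
    return round((phi**(n + 1) + (-1)**(n - 1) / phi**(n - 1)) / (phi**2 + 1))

def fib_recurrence(n):
    a, b = 1, 1  # F_1, F_2
    for _ in range(n - 2):
        a, b = b, a + b
    return b

assert fib_closed(18) == 2584  # the example from the text
assert all(fib_closed(n) == fib_recurrence(n) for n in range(2, 40))
```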

From the study of syllables and poetic forms in 12th-century India to a closed-form solution for the n-th Fibonacci number via modern linear algebra techniques, our understanding of sequences and the important mathematical properties they possess is continuing to grow. Future study may reveal even greater mathematical truths whose applications we cannot yet conceive. It is thus the beauty of mathematics and the excitement of discovery that push us onward, compel us to dig deeper, and to learn more from the world we inhabit. Who knows, you might even be the next Leonardo of Pizza – errrrr Pisa. What patterns will you find?
[1] French mathematician Edouard Lucas (1842-1891) was the first to attribute Fibonacci’s name to the sequence. After which point little is ever mentioned of the Indian mathematicians who laid the groundwork for Fibonacci’s research.

[2] Answer: n=15 –> 610 and n=16 –> 987.

[3] Medieval Muslim inhabitants of the Maghreb, Iberian Peninsula, Sicily, and Malta.[2]

[4] Translation: Book of Calculation[3]


 [1] Knott, Ron. Who Was Fibonacci? N.p., 11 Mar. 1998. Web. 27 Apr. 2015.

[2] “Moors.” Wikipedia. Wikimedia Foundation, n.d. Web. 27 Apr. 2015.

[3] Leonardo Pisano – page 3: “Contributions to number theory”. Encyclopædia Britannica Online, 2006. Retrieved 18 September 2006.

[4] “Famous Mathematicians.” The Greatest Mathematicians of All Time. N.p., n.d. Web. 28 Apr. 2015.

 [5] Grinfeld, Pavel. “Linear Algebra 18e: The Eigenvalue Decomposition and Fibonacci Numbers.” YouTube. YouTube, 2 Dec. 2014. Web. 28 Apr. 2015.

 Figure 1: Fibonacci. Digital image. Wikimedia Foundation, n.d. Web. 27 Apr. 2015.

Figure 2: Golden Spiral. Digital image. Mathworld. Wolfram, n.d. Web. 1 May 2015.

Figure 3: Ross, Andrew Q. Closed-Form Computation of Fibonacci. Digital image. Mathematica, 28 Apr. 2015. Web. 28 Apr. 2015.

Convergence of a Divergent Series and other Tests

In class we have been toying with the idea of classifying divergent infinite series, such as the sum Σ k (k = 1,∞) = 1 + 2 + 3 + …, which, as we add it up, continues on to infinity. We also messed around with some series notations and came to the conclusion that the sum adds up to -1/12. Now, I have no intention to claim, nor prove, that it equals -1/12. In fact, I would like to do the opposite; I would like to show that it in no way converges. If an infinite series converges, that means it sums to a real number s: Σ a_k = s. Likewise, an infinite series diverges if the sum of the series equals ±∞ or does not settle on any value. There are a few methods I will use to prove that this series diverges, and these methods can also be used to determine whether any infinite series converges or diverges.

The first test I want to look at is the Root Test. In this test, we take the k-th term a_k, raise it to the power 1/k, and take the limit of that as k→∞. If this limit is greater than 1, the series diverges; if the limit is less than 1, it converges. For the sum of all positive integers, that gives us the limit as k→∞ of k^(1/k). As k→∞, k^(1/k) → 1. Unfortunately the limit equals 1, which means the test is inconclusive; there is still a chance that this does indeed sum up to -1/12. Let’s leave the score at 0-0.

I will now put the series through the Ratio Test. It is called the Ratio Test because we take the ratio of the (k+1)-th term to the k-th term and take the limit of that as k→∞: the limit of a_(k+1)/a_k. Similar to the Root Test, if the limit is greater than 1 the series diverges, and if the limit is less than 1 it converges. For the dubious series under examination, that ratio is (k+1)/k, and as k→∞ the limit equals 1. Once again, the test is inconclusive. The score is still 0-0.

Now it’s time to get serious; so far every test I’ve done has been disappointingly empty of an answer. The next test is the Comparison Test: for series of non-negative terms, if b_k ≥ a_k for all k and Σ b_k converges, then Σ a_k must also converge; and if b_k ≤ a_k and Σ b_k diverges, then Σ a_k diverges as well. My claim is that Σ k (k = 1,∞) diverges, and I will compare it with the series Σ 1 (k = 1,∞), whose partial sums are Σ 1 (k = 1,n) = n. We can see that 1+1+1+… grows term by term no faster than 1+2+3+…, so we can use it as a comparison. By another test, the Term Test, if a series converges then lim a_n = 0; the series must therefore diverge if the limit doesn’t equal 0. The limit of 1 as n→∞ is 1 ≠ 0, thus Σ 1 (k = 1,∞) diverges. Since Σ 1 (k = 1,∞) ≤ Σ k (k = 1,∞) term by term, the latter series must also diverge. Now the score is 1 for divergence, and 0 for -1/12.

While it can now be seen that Σ k (k = 1,∞) does indeed diverge, the Comparison Test relies on already knowing that a similar infinite series converges or diverges. In this case, I have to know that Σ 1 (k = 1,∞) diverges before I can compare it to the original series. I used the Term Test to show that Σ 1 (k = 1,∞) diverges, but I can also use it directly on Σ k (k = 1,∞): the limit as k→∞ of k is ∞, so by the Term Test the series diverges. The problem with the Term Test is that the limit of a_k can equal zero while the series still diverges. Therefore, this test is only useful when the limit does not equal 0, or as a sanity check on a series believed to converge.
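The divergence is also easy to see numerically: the partial sums of Σ k are the triangular numbers n(n+1)/2, which only grow. A quick sketch in Python:

```python
# Partial sums of 1 + 2 + 3 + ... are the triangular numbers n*(n+1)/2;
# they increase without bound and never approach -1/12.
partial_sums = []
total = 0
for k in range(1, 1001):
    total += k
    partial_sums.append(total)

assert partial_sums[-1] == 1000 * 1001 // 2  # closed form for n = 1000
# Strictly increasing: each new term a_k = k is positive (Term Test fails).
assert all(s2 > s1 for s1, s2 in zip(partial_sums, partial_sums[1:]))
```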

By both the Term and Comparison Tests, I was able to show that Σ k (k = 1,∞) diverges and is not equal to -1/12. In class, though, we determined that the infinite series itself doesn’t sum to a negative number – there is no possible way that adding large positive whole numbers together would result in a negative rational number – but rather that -1/12 represents a value or categorization of the infinite sum Σ k (k = 1,∞). How did this number come about? The idea is definitely not “fringe mathematics” and has some excellent arguments behind it. It stems from Σ (-1)^k (k = 0,∞) = 1 – 1 + 1 – 1 + … The mathematician Srinivasa Ramanujan gave this sum the “value” ½, since its partial sums jump between one and zero equally, with ½ as the average.

The second sum, Σ (-1)^(k-1) k (k = 1,∞) = 1 – 2 + 3 – 4 + …, is given the value ¼, since adding this sum to a copy of itself shifted by one term gives Σ (-1)^k (k = 0,∞): 1 + (–2 + 1) + (3 – 2) + (–4 + 3) + … = 1 – 1 + 1 – 1 + … = ½, so twice the sum is ½. Finally, Σ k (k = 1,∞) – 4(Σ k (k = 1,∞)) = 1 + (2 – 4) + 3 + (4 – 8) + 5 + … = 1 – 2 + 3 – 4 + … = Σ (-1)^(k-1) k (k = 1,∞) = ¼. In other words, -3(Σ k (k = 1,∞)) = ¼, which gives Σ k (k = 1,∞) = -1/12.

Ramanujan discovering the value -1/12 for the infinite series a_k = k. Image: Srinivasa Ramanujan via Wikimedia Commons.

The specific stipulation given is that the series must go to its limit. The partial summations of any of these series will produce a number unlike the total summation. For the majority of mathematics, to say Σ k (k = 1,∞) = ∞ makes more sense and requires significantly less head-scratching than -1/12.

There is one more test for convergence that I did not talk about, as the infinite series I was examining does not satisfy its hypotheses, and that is the Integral Test. For Σ a_k, take a function f with f(k) = a_k. The reason I could not apply it is that it only works for positive, non-increasing functions on [1,∞). If the improper integral of f(x) over [1,∞) is a real number, then the series converges, whereas if that integral diverges, then the series diverges. The first case I will look at is Σ 1/k (k = 1,∞), where f(x) = 1/x is a positive, non-increasing function. The integral of 1/x is log(x), evaluated from 1 to ∞. This integral diverges, and therefore the series also diverges. The second case is similar: Σ 1/k^2 (k = 1,∞), where f(x) = x^(-2). The integral of 1/x^2 is -(1/x), evaluated over the same interval. The difference of the two boundary values is 0 – (-1) = 1. Since the improper integral converges, the series also converges.
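The contrast between these two series shows up clearly in their partial sums; a small numerical sketch (the harmonic partial sum tracks log(n), while the 1/k^2 sums stay bounded below π²/6):

```python
from math import log, pi

# Partial sums: the harmonic series grows like log(n) (diverges),
# while the sum of 1/k^2 stays bounded, approaching pi^2/6 = 1.6449...
n = 100_000
harmonic = sum(1 / k for k in range(1, n + 1))
basel = sum(1 / k**2 for k in range(1, n + 1))

# The harmonic partial sum sits within a constant of log(n) and keeps growing.
assert log(n) < harmonic < log(n) + 1
# The 1/k^2 partial sums are bounded above by pi^2/6.
assert 1.64 < basel < pi**2 / 6
```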

These tests help determine whether a series converges or not. I used them to prove with basic mathematics that Σ k (k = 1,∞) diverges, rather than converging to a negative rational number. While the values given to divergent infinite series can provide an idea of how they relate to each other, they require a fair amount of assumption and a lot of counterintuitive work to calculate. It is far easier and more practical to state that Σ k (k = 1,∞) diverges.

Source: Foundations of Analysis by Joseph L. Taylor.

Drop the Base


A demonstration of exactly how a shift of base can change our perception of time. Image: Jeremy Keith via flickr.

When we were first introduced to numbers as kids, they were just something we memorized, learned to accept, and started using on a regular basis. While numbers seem almost second nature to most of us, there was a time when the idea of a number system was new and, like all new things, it was invented multiple times by different people with different setups. One of the more interesting areas of variation between different number systems is the base that each system uses.

Before going into detail about what a base is, it is important to understand that base systems are primarily used by number systems which also use position to determine how large a number is. For example, the Arabic numeral system is positional because I can use the same symbol in a different position to change the value of the number. While 01 is only the value one in this system, by switching these numbers around to 10, I have changed the value now to ten. This is different from something like the Roman numeral system which, for the most part, wouldn’t be considered a positional system because in two different numbers, like X and XIII, the value of the symbol X doesn’t change.

Now, what does this have to do with the base of a number system? The base of a positional number system is the number of different symbols you can have in any single position. For example, the Arabic numeral system is base 10 because we can have ten different symbols in a single position (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). In addition to defining how many different symbols can appear in any one position, the value of the base also determines how much the value of a symbol changes based on its position. As I mentioned earlier, different number systems have different bases, most likely simply because their inventors counted in different ways. A base 10 system is the most common, and a lot of people credit that to the fact that we have ten fingers between our two hands and people like to count on their fingers. On the other hand, the Mayans had a numeral system with base 20. Unlike most people from Europe, the Mayans didn’t wear shoes, which meant they could count using ten fingers and ten toes. The Babylonians even had a numeral system with base 60. I honestly couldn’t say why, but I am sure they had a good reason for doing so.

Even present-day computers use a base other than 10. Computers count using base 2, which means they can only have a 0 or 1 in any position. How can something like this work? The reason different numeral systems can have different bases is that all positional systems use the base to determine how much a symbol contributes from its position, which also makes it easy to convert from any base into another. For example, if I want to convert the binary number 100010 into a base 10 number, all I need to do is figure out the base 10 value at each position and add them together. Since this is base 2, the digit in position i (counting from 0 on the right) is multiplied by 2^i. This means:

100010 -> 1*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 0*2^0 -> 1*32 + 0*16 + 0*8 + 0*4 + 1*2 + 0*1 -> 32 + 2 = 34
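The positional rule above is the same for every base; here is a small sketch in Python (the function name is my own), checked against Python's built-in base-2 parser:

```python
# Each digit is multiplied by base**position, counting positions
# from 0 on the right, and the results are summed.
def to_base_10(digits, base):
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += digit * base**position
    return total

assert to_base_10([1, 0, 0, 0, 1, 0], 2) == 34
assert to_base_10([1, 0, 0, 0, 1, 0], 2) == int('100010', 2)  # built-in check
assert to_base_10([1, 2, 0], 3) == 15  # works for any base, e.g. base 3
```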

To make things even more bizarre, a base can be found in more than just numeral systems. Another great example of a system with a base is the alphabet. In our case, the Roman alphabet is a base 26 system, or base 52 if you include capitalization. In addition, many of our measuring units have their own bases: there are 12 inches in a foot, 16 ounces in a pound, 60 seconds in a minute, and even 12 months in a year. And yet, for all of these, we use a base 10 counting system instead of creating our own symbols for each unit. Then again, imagine how confusing that would be. Most people realize how difficult it can be to constantly convert from one base to another, which is why measuring systems like the metric system use a constant base of 10 between unit sizes to make things easier.

In the end, the point is that different bases are used everywhere. Whether you are dealing with numbers or some other system entirely, you will usually be able to find a base of some kind connected to the system. While it may be difficult to have to constantly deal with different kinds of bases, bases are necessary for people to be able to have such a large variety with such a limited number of symbols. Bases are here and they are here to stay.


What even is distance? p-adic numbers and completion of the set of real numbers

I chose this topic for a blog post based on a tangent mentioned in class: p-adic numbers. I was intrigued that there was a different way to complete the rational numbers since it took me long enough to accept that irrational numbers got to be on the number line at all. If it weren’t for 45-45-90 triangles with rational side lengths that needed hypotenuses, I honestly don’t think I would have ever accepted irrationals as more than some theoretical, mathematical construct, that while consistent, had no value to me personally beyond computation. So I was really excited to see if p-adic numbers would make more intuitive sense to me. Sadly, after some initial research it just wasn’t clicking. But after a bit more digging, I found this video:

The direct comparison of the completion of the real numbers using irrational numbers and p-adic numbers was really helpful to ground my understanding of the p-adic numbers. Hopefully this video and my discussion below will also help some of you get a feel for p-adic numbers.

Before we jump into the p-adic completion of the rationals, I’d like to discuss how we can complete the set of rational numbers in general. When we talk about completing the rational numbers, we’re really talking about completing a metric space. A metric space is defined as a set, X, with a metric, or global distance function, g(x,y), such that for every two points x and y in X, the distance between them is a non-negative, real number. A metric space must also satisfy three other properties:

• g(x,y) = 0 iff x=y
• g(x,y) = g(y,x)
• g(x,y) + g(y,z) ≥ g(x,z) (The Triangle Inequality)

This results in a metric space (X,g). (For our familiar completion of the rational numbers, we will use the absolute value function as our global distance function, g(x,y) = |x-y|, which we will refer to as d(x,y).)

Now, a metric space (X,g) is called complete if every Cauchy sequence in (X,g) converges in (X,g). This means (spoilers) that if a Cauchy sequence converges to a value not in the metric space, the metric space is not complete. A Cauchy sequence is a sequence whose elements become arbitrarily close to one another as the sequence is extended. Luckily, we already have our metric, g(x,y), which determines the non-negative distance between any two points in X, and which we will use to measure “closeness.” Thus, we call a sequence {x1, x2, x3, …} Cauchy if for every positive real number E, there is some positive integer N such that for all natural numbers m and n greater than N, g(xm,xn) < E.

Because of this, the set of rational numbers is not a complete metric space. Take for example the sequence {2, 2.2, 2.23, 2.236, 2.2360, 2.23606, …}. We will use d(x,y) as our global distance function. By definition, this is a Cauchy sequence of rational numbers, but its limit is √5. Because this is not an element of the rational numbers, the sequence does not converge in (Q,d). Therefore the set of rational numbers cannot be a complete metric space.
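The truncation sequence above can be generated exactly with rational arithmetic; a sketch in Python (helper name mine) showing that the terms are rational and Cauchy even though the limit √5 is not rational:

```python
from fractions import Fraction
from math import isqrt

# The sequence {2, 2.2, 2.23, 2.236, ...} of decimal truncations of sqrt(5):
# every term is rational and the terms get arbitrarily close together,
# but the limit, sqrt(5), is not a rational number.
def truncation(n):
    """sqrt(5) truncated to n decimal places, as an exact rational."""
    return Fraction(isqrt(5 * 10**(2 * n)), 10**n)

terms = [truncation(n) for n in range(20)]

# Cauchy under d(x, y) = |x - y|: consecutive gaps drop below 1/10^n.
for n in range(19):
    assert abs(terms[n + 1] - terms[n]) < Fraction(1, 10**n)

# Every term's square undershoots 5; no rational squares exactly to 5.
assert all(t * t < 5 for t in terms)
```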

To complete this metric space, we simply add the limits of all the Cauchy sequences that can be constructed from the rationals to the rationals to form what we know as the reals. This fills in all the gaps in the metric space so every Cauchy sequence in (R,d) converges in (R,d).

Now for the formation of p-adic numbers. The primary change when working with p-adic numbers is the definition of distance. Before, we used the positive distance between two numbers on a number line, d(x,y) = |x-y|, as the metric for distance. The p-adic numbers are formed using a fundamentally different conception of distance that is still consistent with all the requirements of a metric needed to define a metric space.

We start the formation of a p-adic number by choosing and fixing a prime, p. Every non-zero rational number x can be written in the form x = (p^n)·(r/s), where p is the fixed prime, n is an integer, and r and s are integers not divisible by p. Using this form, we can represent every rational in terms of the fixed prime p. We can then define the p-adic absolute value, which gives a different definition of distance to be used as a metric later on. This absolute value is defined as:

|x|_p = |(p^n)·(r/s)|_p = p^(-n)

We also define |0|_p = 0 for consistency. Notice that this definition means the p-adic absolute value can only take values that are integer powers of p (1, p, p^2, … and the unit fractions 1/p, 1/p^2, …) or zero. As a result, taking the difference between two rational numbers x and y gives a very different notion of distance. The video posted above gives a great example:

When p = 7, 28814 and 2 are ‘closer together’ than 3 and 2.

This works because |28814 – 2|_7 = |28812|_7 = |7^4·(12/1)|_7 = 7^(-4) = 1/2401 and |3 – 2|_7 = |1|_7 = |7^0·(1/1)|_7 = 7^0 = 1. Since 1/2401 < 1, the distance between 28814 and 2 is less than the distance between 3 and 2. In general, as the difference between two numbers contains larger and larger powers of p as factors, the distance between them decreases.

So to see if the rationals are complete using this new metric, we must check to see if every Cauchy sequence of rational numbers has a limit that is itself a rational number. Because each p-adic formation of the rationals is unique based on the initial choice of p (written Qp) and because there are infinitely many primes, it is impossible to check every p-adic formation of the rationals. But we can write a general case using p to show one example where the rationals are incomplete which is sufficient to show that the rationals are not complete using any p-adic metric.

{p^0, p^0 + p^1, p^0 + p^1 + p^2, p^0 + p^1 + p^2 + p^3, p^0 + p^1 + p^2 + p^3 + p^4, …}

Because each successive term in the sequence adds a higher power of p to the previous term, we are adding smaller and smaller pieces to each term under the p-adic metric: the distance between successive terms is |p^n|_p = p^(-n), which shrinks as n increases. Thus, this is a Cauchy sequence. Interestingly, though, this particular sequence actually converges in the rationals: since (1 − p)(1 + p + … + p^n) = 1 − p^(n+1) and |p^(n+1)|_p → 0, the sequence converges to 1/(1 − p) under the p-adic metric. To find a Cauchy sequence with no rational limit, we can instead take partial sums whose p-adic digits never become periodic, such as the partial sums of Σ p^(k^2); the limit of such a sequence cannot be rational, because the p-adic expansion of a rational number is eventually periodic. So again, we find ourselves in a situation where we must add the limits of such Cauchy sequences to the rationals to complete them under the p-adic metric.

The hardest thing for me to wrap my head around while studying the p-adic completion of the rationals is that this completion does not result in the set of real numbers that result from adding the limits of the Cauchy sequences using d(x,y) to the rationals. Unlike infinite decimals that we represent as irrational constants or roots, p-adic numbers have a finite decimal component and can expand indefinitely to the left. So any p-adic number can be represented as a power series such as:

x_(-m)p^(-m) + x_(-m+1)p^(-m+1) + … + x_0 + x_1p^1 + x_2p^2 + …

Which can be written in decimal-like positional form, with the digits continuing forever to the left, as … x_2 x_1 x_0 . x_(-1) … x_(-m).


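To see such a left-infinite digit string appear, here is a small sketch (my own Python, not from the video; it needs Python 3.8+ for the modular inverse pow(b, -1, p)) that computes the first few base-p digits of a rational whose denominator is prime to p:

```python
from fractions import Fraction

def padic_digits(x, p, n):
    """First n p-adic digits of x (least significant first), for a rational
    x whose denominator is not divisible by p."""
    x = Fraction(x)
    digits = []
    for _ in range(n):
        # current digit: x mod p, computed via the modular inverse of the denominator
        d = (x.numerator * pow(x.denominator, -1, p)) % p
        digits.append(d)
        x = (x - d) / p  # strip the digit and shift one place
    return digits

print(padic_digits(-1, 7, 6))               # [6, 6, 6, 6, 6, 6], i.e. …666666
print(padic_digits(Fraction(-1, 6), 7, 6))  # [1, 1, 1, 1, 1, 1], i.e. …111111
```

The second example is the geometric series from above in disguise: 1 + 7 + 7^2 + … converges 7-adically to 1/(1 − 7) = −1/6, so −1/6 has 7-adic expansion …111111.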
This idea was so foreign to me that I needed it to be explicitly spelled out in the video before I could even consider it a valid way to describe a number. I’m still trying to decide if I like it. It’s definitely cool, but it’s also scary – like nuclear fission. I’m still wary of the repercussions of the p-adic definition of absolute value, but I’m glad to have explored the Wonderland-like world that it creates.



Fermat’s Last Troll

The Urban Dictionary defines the word troll as:


  1. (n.) Sometimes compared to the Japanese ‘Oni’, a troll is a supernatural creature of Scandinavian folklore, whose race was thought to have carried massive stones into the countryside. Lives in hills, mountains, caves, or under bridges. They are stupid, large, brutish, hairy, long-nosed, and bug-eyed, and may also have multiple heads or horns. Trolls love to eat people, especially small children.
  2. (n.) An ugly little plastic doll with big hair.
  3. (v.) To fish by dragging bait behind a moving motor-boat.
  4. (v.) The act of posting a deliberately provocative message to a newsgroup or message board with the intention of causing maximum disruption and argument

Based on the final definition, I would like to propose that Pierre de Fermat pulled off the greatest troll in mathematical history (possibly unintentionally).

Problem II.8 in the 1621 edition of the Arithmetica of Diophantus. On the right is the margin that was too small to contain Fermat’s alleged proof of his “last theorem”.

Fermat was a French lawyer and mathematician who is remembered for things like discovering an original method of finding the greatest and the smallest ordinates of curved lines, and researching number theory, analytic geometry, probability and optics. Basically, he did a lot of math stuff; however, his most famous contribution to mathematics is referred to as Fermat’s Last Theorem or FLT. This marvelous contribution was written in the margin of a book. Fermat simply wrote:

“It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvellous proof of this, which this margin is too narrow to contain.”


An incredibly poignant theorem, but he just didn’t have space to write out its proof. Fermat later proved FLT for the specific case n = 4, but left his too-wide-for-the-margin general proof unwritten. For the next 358 years mathematicians worked night and day to figure out exactly what couldn’t fit in that little margin.

On Monday 19 September 1994, Andrew Wiles, an English mathematician with a childhood fascination with Fermat’s Last Theorem, finalized a correct and elegant proof for FLT. Almost 400 years of work had finally culminated in a seemingly magical proof. It was a national holiday and people ran from their houses yelling “FERMAT IS SOLVED!!! THE WORLD IS AT PEACE!!!”. I kid. That didn’t actually happen, but someone did write a poem about it:

Fermat’s Last Theorem Proven

By Marion Cohen

Fermat said the proof was too large
to fit in the right or left marg-.
True, back of the paper
or proof made to taper
might help, but he said, “I’m in charge.”

Now Wiles didn’t mind paper waste.
In fact, it was true to his taste
to use up whole reams
to realize his dreams
and he crossed out instead of erased.

Fermat was all snickers and smiles
as he smugly stayed clear of the aisles
and he thought “they’ll be glum
“but that proof will succumb
“though it’s going to take quite a-Wiles.”

Cohen takes the point of view that Fermat was conscious of his choice to never write down his proof. This raises the question – did Fermat actually have a feasible proof? If he didn’t, was he planning on upsetting the mathematical community for 400 years?  Let’s look a little deeper.

Fermat’s note first appeared in a collection of his edited works that was published by his son Samuel in 1670. This wasn’t the only time that Fermat claimed to have a proof, but didn’t have the time or paper or something to write it out. More than a century later, mathematicians like Euler had to reconstruct proofs for many of Fermat’s theorems.

One way to address his note is to claim that Fermat’s comment was simply a private note that he never intended to publish. Simply a little “Hey self, write this awesome, brilliant proof down later. You didn’t write it now because you had no paper. Love, me”. This wouldn’t really make sense. In his day and age it was pretty normal to write and publish commentaries on ancient works. The publishers intentionally left large margins in such works so that modern commentators could write their comments and show their contemporaries how much smarter they were than their forefathers. Basically, we can be pretty confident that Fermat was at least considering that his work would be published.

When I was researching this subject, I came to the realization that this story could have ended differently. Fermat didn’t publish a lot of his work. What if he had decided to publish his commentary on the Diophantus after all? What if he decided to publish his commentary after he had worked out the proof for n=4  and then decided that he didn’t have a proof for the theorem after all? What if he had edited out his comment? We wouldn’t have Fermat’s Last Theorem. He wouldn’t have sent mathematicians on a blind-leading-the-blind hunt in the forest. Wiles may have never had a reason to devote his life to proving FLT. Essentially, it was fortunate for the development of number theory that Fermat wasn’t prone to editing his papers. Or did he intentionally avoid editing so as to troll future generations — make them think he had all these brilliant ideas and proofs, but end up just making them work and argue down (what he thought) was a rabbit hole?

Sir Andrew Wiles. Image copyright C. J. Mozzochi, Princeton N.J.

Wiles’s proof involves mathematics that wasn’t invented or discovered (that’s the topic of an entirely different blog post) until centuries after Fermat’s death. The mathematical historian Howard Eves once said that “Fermat’s Last Theorem has the peculiar distinction of being the mathematical problem for which the greatest number of incorrect proofs have been published.” Did Fermat use his version of an online forum to post his version of a troll? Or did he genuinely believe he had a solution that would have been proved incorrect? Or did he actually have a brilliant proof that doesn’t use any of Wiles’s modern math? We will never know, but I think the image of Fermat as the first comment troll is the most fun.


Poem: Discovering Patterns in Mathematics and Poetry, by Marcia Birken and Anne Christine Coon

Binary numbers: the base of modern science

In class, we learned about the strange base-60 system of Babylon, and I wondered whether there were any other counting systems that seem unfamiliar to us.

Because I am taking a programming class, the first thing that came to mind was the binary system. Binary numbers represent values using only two different symbols: 0 (zero) and 1 (one). This system seems simpler than the base-10 system, because we only need to remember two symbols to express all integers. For instance, the first 10 integers of the decimal system (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) can be expressed as: 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001.

  1. Origin of binary numbers

Although it occupies more space, expressing numbers in the binary system seems easier than in the decimal system. So who first invented it? Gottfried Leibniz, the German mathematician and philosopher famous for co-inventing calculus, is credited with first describing the modern binary number system, in his article “Explanation of the Binary Arithmetic”. Leibniz also pointed out that binary-like patterns appear in the “I Ching”, attributed to the ancient Chinese ruler Fuxi; there, the hexagrams were used to divine the fate of ancient Chinese people, for those people believed that the mysterious secrets of the universe lay in these simple patterns of signs.

Image: BenduKiwi and Machine Elf 1735, via Wikimedia Commons.

  2. The arithmetic of binary numbers

Like the decimal system, binary numbers also have their arithmetic.

  • Addition

Addition is the simplest operation in the binary system. Adding two single-digit binary numbers is relatively easy, like this:

0 + 0 → 0

0 + 1 → 1

1 + 0 → 1

1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + (1 × 2^1))

Here, when adding two 1s, we carry: the sum is divided by 2, the remainder stays in the current position, and the quotient is added to the next position to the left. So multiple-digit addition looks like this:

0 1 1 0 1

+   1 0 1 1 1


= 1 0 0 1 0 0
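The carry rules above translate directly into a short loop. This is my own illustrative Python sketch (the name binary_add is mine), assuming non-negative binary strings:

```python
def binary_add(a, b):
    """Add two binary strings digit by digit, carrying exactly as above."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)   # pad to equal length
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):  # rightmost digits first
        s = int(x) + int(y) + carry
        result.append(str(s % 2))   # remainder stays in this position
        carry = s // 2              # quotient moves one position left
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(binary_add('01101', '10111'))  # 100100
```

Note that 01101 is 13 and 10111 is 23, and 100100 is indeed 36.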

  • Subtraction

The subtraction of binary numbers is the inverse operation of addition. For two single digits, the rules are:
0 − 0 → 0

0 − 1 → 1, borrow 1 (from the next position to the left)

1 − 0 → 1

1 − 1 → 0

So, likewise, subtraction of multiple-digit numbers looks like this:

*   * * *   (starred columns are borrowed from)

1 1 0 1 1 1 0

−    1 0 1 1 1


1 0 1 0 1 1 1
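The borrow rules can be sketched the same way. This is my own illustrative Python (assuming the first number is at least as large as the second), mirroring the starred borrow columns above:

```python
def binary_sub(a, b):
    """Subtract binary string b from a (assumes a >= b), borrowing as above."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    result, borrow = [], 0
    for x, y in zip(reversed(a), reversed(b)):  # rightmost digits first
        d = int(x) - int(y) - borrow
        if d < 0:
            d += 2          # borrow 1 (worth 2) from the next position left
            borrow = 1
        else:
            borrow = 0
        result.append(str(d))
    return ''.join(reversed(result)).lstrip('0') or '0'

print(binary_sub('1101110', '10111'))  # 1010111
```

Here 1101110 is 110 and 10111 is 23, and 1010111 is indeed 87.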


  • Multiplication

Multiplication in binary is simpler than in the decimal system. For two numbers A and B, there are only two rules:

  1. If the digit in B is 0, the partial product is also 0
  2. If the digit in B is 1, the partial product is equal to A

For example, the binary numbers 1011 and 1010 are multiplied as follows:

1 0 1 1   (A)

× 1 0 1 0   (B)


0 0 0 0   ← Corresponds to the rightmost ‘zero’ in B

1 0 1 1     ← Corresponds to the next ‘one’ in B

0 0 0 0

+ 1 0 1 1


= 1 1 0 1 1 1 0

And for division, it is the inverse operation of multiplication.

  3. Conversion between binary numbers and decimal numbers

How do we convert a binary number into a decimal number? For example:

11011(2) = 1*2^4 + 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 27(10)

And the inverse conversion counts the powers of 2 present in a decimal number, like:

36(10) = 1*2^5 + 0*2^4 + 0*2^3 + 1*2^2 + 0*2^1 + 0*2^0 = 100100(2)
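Both directions of the conversion can be sketched in a few lines of Python (my own illustrative code; the function names are mine):

```python
def bin_to_dec(s):
    """Sum each binary digit times its power of 2, as in the example above."""
    return sum(int(d) * 2**i for i, d in enumerate(reversed(s)))

def dec_to_bin(n):
    """Peel off remainders mod 2, least significant digit first."""
    if n == 0:
        return '0'
    digits = []
    while n:
        digits.append(str(n % 2))
        n //= 2
    return ''.join(reversed(digits))

print(bin_to_dec('11011'))  # 27
print(dec_to_bin(36))       # 100100
```

The two functions are inverses of each other, matching the worked examples above.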

  4. The application of binary numbers

The reason I introduce binary numbers is that they are the base of modern science, especially computer science. The basic element of a computer is the logical circuit, which has only two basic states: 0 for switched off, and 1 for switched on. As the old saying goes: less is more. The binary system fits the nature of the logical circuit perfectly (0 for no, and 1 for yes), and the logical circuit led to the invention of the computer. When calculating, for example, a computer translates numbers into binary form, does the operations, and then converts the answer back to a decimal number, as shown above.

  • Conclusion

It is remarkable to think that the powerful computer is based on the binary system, and considering the huge amount of work computers have done so far, we can say that the binary system is a key to modern science and technology. Even as I type this article, binary numbers are at work in my computer.


From Obscurity to Fame: The Man Behind the Proof

Andrew Wiles. Image: Denise Applewhite, Princeton University Office of Communications.

In class, we have talked about the importance of context. We cannot simply look at the mathematics of an ancient document to understand what its purpose is. We must also look into the time period, the background, the situation, the author and the many other facts of the culture to be able to understand the true importance of the document presented. This is easily understood when talking about ancient documents, but often seems to be lost when talking about current proofs. That’s why I decided to look into who Andrew Wiles really is.

Andrew Wiles was born on April 11, 1953 in Cambridge, England. His father was Maurice Frank Wiles, a Regius Professor of Divinity at the University of Oxford. When Andrew was born, his father was also Chaplain at Ridley Hall, Cambridge. A Regius Professor (because I didn’t know either) is a very prestigious level of professorship given within British universities specifically by a British monarch. They can be appointed in various fields, but must go through a rigid interview process with both the university and the national government. Andrew Wiles’ mother was Patricia Wiles. I tried to find more information on her, but couldn’t find any specifics. I also couldn’t find any information on whether he had siblings or not.

Andrew grew up with a deep love of Mathematics. In an interview with Nova he said, “I loved doing problems in school. I’d take them home and make up new ones of my own.” Talk about an ambitious kid. At the age of 10, Andrew found a book that would anchor the next 40 years of his life. He was one day wandering through the math books at the library and happened upon a book that explained Fermat’s last theorem. Of this experience he says, “This problem had been unsolved by mathematicians for 300 years. It looked so simple, and yet all the great mathematicians in history couldn’t solve it. Here was a problem, that I, a 10 year old, could understand, and I knew from that moment that I would never let it go. I had to solve it.” During his teens and college, he tried many different ways to solve this problem but to no avail.

When Andrew became a researcher, he put his dream of solving Fermat’s Last Theorem on the shelf and began to focus on other things. Andrew got his bachelor’s degree in (of course) mathematics in 1974 from Merton College, Oxford. He went on to get his PhD in 1980 from Clare College, Cambridge. He then held positions at the Institute for Advanced Study in Princeton, New Jersey, at Princeton University, and at the Institut des Hautes Études Scientifiques and the École Normale Supérieure in Paris. From 1988 to 1990, Andrew taught at Oxford and then returned to Princeton.

It’s just before Andrew went to Oxford that things began to get interesting. In 1986, Andrew heard of the link between the Taniyama-Shimura conjecture and Fermat’s Last Theorem. He says, “I knew that moment that the course of my life was changing because this meant that to prove Fermat’s Last Theorem all I had to do was to prove the Taniyama-Shimura conjecture. It meant that my childhood dream was now a respectable thing to work on. I just knew that I could never let that go.” At this point, Andrew kept himself in isolation. He told no one else of his work and day after day continued to try and prove the Taniyama-Shimura conjecture.

During this time, he married Nada Canaan. And even with her, he told her nothing of his work with Fermat until they were on their honeymoon. She had heard of Fermat’s last theorem, but had no idea of its importance, especially for Andrew. I can imagine that would have been an interesting conversation at that time.

Finally, in 1993, he found the crucial breakthrough. A single line caught his eye, and he knew that it was the key for his proof. Though unknown to him at the time, his proof did contain an error. It took another year of close examination to correct that error and then finally, it was complete. 357 years after Pierre de Fermat had stated his original frustrating comment in the margin of Arithmetica, his great proof had finally been completed.

Andrew’s name quickly gained popularity within the mathematical community. But it is intriguing to think of his addition to the mathematical world in a wider viewpoint. As we look at his life and his work, will they be preserved one hundred, one thousand or even a hundred thousand years from now? Will we remember his name and why he was important? Now that Fermat’s theorem has been proved and is no longer an active query, will we eventually forget why he was important and what Andrew Wiles did for us?

This is a concept that we have discussed in the viewpoint of the Rhind Papyrus and Plimpton 322. Through the years, we have lost the history behind these items; the understanding of why they were written and what purpose they served. With the better documentation of today’s time, will we be able to access these facts on Wiles’ proof years down the road or will culture itself deem them unimportant and forget them? It is important to preserve the history of mathematics now so that generations afterward can learn from the findings of today.

Fun Facts about Andrew Wiles after his proof of Fermat’s Last Theorem:

  • In 1999, an asteroid was named after Andrew: asteroid 9999 Wiles
  • In 2000, Andrew became Sir Andrew Wiles and was made a Knight Commander of the Order of the British Empire by the Queen
  • In Star Trek: The Next Generation (aired while Andrew was working on his proof), it was stated that Fermat’s Last Theorem was still unproved. In Star Trek: Deep Space Nine, they corrected themselves and mentioned Andrew’s proof.
  • Andrew’s proof is mentioned in The Girl Who Played With Fire, and also The Girl Who Kicked the Hornets’ Nest by Stieg Larsson
  • Both Tom Lehrer and BATS have written songs about Andrew’s proof.
  • In 2001, a musical about Andrew Wiles and his proof of Fermat’s Last Theorem, called “Fermat’s Last Tango”, was released. You can watch it here:
    • By the way, it’s hilarious. Daniel Keane (who plays Andrew) meets Fermat and goes to the “Aftermath” to meet Gauss, Euclid and Newton.


Russian Peasant Multiplication versus modern base 2 multiplication in computers

At first, it may sound impossible, but there exist many ancient forms of arithmetic whose methodology falls very close in line with how the modern day computer processing unit (CPU) performs arithmetic. As a software engineer at Google, I have a lot of hands on experience with computers and how they function. While taking this class I quickly realized that it had managed to strike up interest in my mind that I had not expected. As we began to dive into ancient cultures and arithmetic, the similarities between Egyptian and Babylonian computation and the modern day arithmetic logic units inside the CPU came to life.

The similarities are both in the way they performed the same mathematical operations and in the general idea that there are clever ways to do arithmetic. I say clever mostly because they exist in a number system with a different base than our current working number system which is base 10. Computers, as many people know, at their very core, are restricted to the domain of base 2 numbers. I believe that most people are familiar with the fact that computers use a series of 1’s and 0’s denoted as “bits” to signify a particular bit either being turned on or turned off. Each bit is acting as a placeholder for a greater number. When a bit is turned on, it is denoting the presence of the number in that position. Here is a simple chart showing the first 16 numbers of binary, 0 – 15.


To me, systems of multiplication and division are a lot more interesting than simple addition or subtraction, so I’m going to talk specifically about systems of multiplication. You can imagine that both in ancient times and in a computer, multiplication could be as simple as adding a number together many times to reach your desired result. For example, the problem of 2 being multiplied by 4 could be represented as 2 + 2 + 2 + 2 = 8. For small numbers this works out. But for both ancient mathematicians and computers, the time it takes to calculate is very much of the essence. So in both cases, clever algorithms have been developed to speed up the process. My personal favorite of the ancient multiplication techniques is one very similar to the Egyptian multiplication that we learned in class. It goes by the name of Russian Peasant Multiplication. As best I can tell, this name was given to it after accounts of western travellers witnessing the technique being used by peasants in Russia.

An example of this algorithm that I found in “The Russian Peasant Multiplication Algorithm: A Generalization” by Beverly J. Gimmestad shows it well:


At first glance this may look like garbage, but I will explain it. Given that we want to multiply the numbers 37 and 6, place them at the heads of their respective columns. Begin by dividing the left column by 2 and rounding the result down to the nearest integer. You can see in the example that we come to the number 18. In the column on the right, we take our number and instead of halving it, we double it, producing a 12. We then continue this process for as many loops as it will take for the column on the left to produce a 1 as its value. This brings us to the second table. Our next operation is to take every row that begins with an even number on the left and discard it. We can now gather the remaining numbers from the column on the right and add them together to reach our desired result. 37 x 6 = 6 + 24 + 192 = 222. It is worth noting here that the algorithm used by Egyptians for multiplication uses a very similar system of doubling. Perhaps it is possible that there is a connection between the two techniques.
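The halve-and-double table above can be written as a short loop. This is my own Python sketch of the algorithm (not code from Gimmestad’s paper):

```python
def russian_peasant(a, b):
    """Halve a (rounding down), double b, and add b whenever a is odd.
    The odd test is what 'keep only rows starting with an odd number' means."""
    total = 0
    while a >= 1:
        if a % 2 == 1:   # this row survives, so its right column is added
            total += b
        a //= 2          # halve the left column
        b *= 2           # double the right column
    return total

print(russian_peasant(37, 6))  # 222, i.e. 6 + 24 + 192
```

The rows that get discarded are exactly the ones whose left entry is even, which is why only 6, 24, and 192 contribute.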

So how is Russian Peasant Multiplication similar to what computers are doing? From a popular book in the field of Computer Science: “Computer Organization and Design” by Patterson and Hennessy, we get a very accurate description of the multiplication algorithm found in most modern day Arithmetic Logic Units inside of processors. We start with 3 numbers: A multiplicand, a multiplier, and a product which is initially 0. We will build up this product as we go. This figure I drew up shows a simple flowchart to assist with understanding:


Moving from least significant bit (the rightmost bit denoting the presence of a 1 in the number), to most significant bit in the multiplier we first determine whether the bit in the multiplier is a 0 or a 1. If we encounter a 1 in the rightmost position of the multiplier, this signifies that in this round, we need to add the multiplicand to our product. Once we have done that, we shift the multiplicand left by one bit essentially doubling it. We then shift the multiplier right by 1 bit throwing away its rightmost bit and halving the number. Sound familiar? (Russian Peasant Multiplication). We loop this process until we have examined each bit.


The example above is also taken from “Computer Organization and Design” and illustrates this algorithm in a grade-school-like fashion which is frankly much easier to understand. Here we are multiplying the numbers 8 and 9. To break it down, the bit in the multiplier determines whether the multiplicand should be added for that round. In any round, the multiplicand is double what it was the round before, hence the shifting to the left. To give a bit of contrast, in our number system, a shift left represents a multiplication by 10 (e.g. 3 -> 30 -> 300). We add all the rounds together and reach our final product of 72, which is indicated by the 64 bit and the 8 bit being set.
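The shift-and-add loop can be sketched in a few lines of Python (my own illustration of the idea, not the book’s code or the actual hardware); notice that it is literally Russian Peasant Multiplication written with bit shifts:

```python
def shift_add_multiply(multiplicand, multiplier):
    """ALU-style multiplication: examine multiplier bits right to left,
    adding the (shifted) multiplicand whenever the current bit is 1."""
    product = 0
    while multiplier:
        if multiplier & 1:            # rightmost bit set this round?
            product += multiplicand   # add the multiplicand to the product
        multiplicand <<= 1            # shift left: doubling
        multiplier >>= 1              # shift right: halving, dropping the bit
    return product

print(shift_add_multiply(8, 9))  # 72
```

Testing the bit with `& 1` is the same as the peasant algorithm’s “is the left column odd?”, and the two shifts are its halving and doubling steps.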

It may be hard to see at first, but with a little thought you can see that this is not only close, but it is exactly what is performed in the Russian Peasant Multiplication algorithm. How fascinating it is that an ancient technique can be so useful centuries later in an application that they couldn’t have possibly imagined!

Source material:

Patterson, D., & Hennessy, J. (2012). Computer organization and design: The hardware/software interface (4th ed.). Waltham, MA: Morgan Kaufmann.

Gimmestad, B. (1991). The Russian Peasant Multiplication Algorithm: A Generalization. The Mathematical Gazette, 75(472), 169-171.

Exploring Up-Arrow Notation

One of the greatest things about math is that there are so many ways to do the same thing. We can express the same equation in multiple fashions and still keep the meaning the same. If you stop and think for a moment, you too will realize how awesome this is! How many different ways can you write the word “the” in English? Take a moment to mull this one over; it’s a tough one. Now if you came up with more than one way, you and I need to have a sit-down conversation, because you are most likely a genius. Anyways, I digress; the point is that the flexibility math allows us in the ways we represent things is beyond amazing.

Take a look at the following exponential expression: 2^3. If you are familiar with how exponents work, you’ll recognize that this equals 8. Unfortunately, if you are unfamiliar with this particular area of mathematics, this expression is basically meaningless to you. Luckily for us, math allows us to put this into a different form. So now we can take 2^3 and represent it instead as 2 × 2 × 2. This shows us that 2 raised to the power of 3 is really just 2 multiplied by 2 multiplied by 2. Still not simple enough for you? Well hallelujah, we can express it in another fashion. We can change 2 × 2 × 2 to the simpler version of 2 + 2 + 2 + 2, since 2 × 2 × 2 is just 4 × 2. So now we have 2 plus 2 plus 2 plus 2 for the grand total of 8. At the beginning we only had one way to look at this expression, but by the end we have 3. There are so many ways to represent things in math that people began to push the limits of that fact. Much like exponential or scientific notation, other mathematicians came up with their own notations (ways of representing expressions in math). The one that I am going to focus on today is Knuth’s Up-Arrow notation.

Here we have ourselves a very fun way of representing math. The closest thing it can be compared to is exponential notation. Looking at the above example, we can draw a couple of conclusions about the mathematical expressions we used. First, multiplication is just a series of addition operations. What I mean by that is, the expression 4 × 2 is really just a short way of writing 2 plus 2 plus 2 plus 2 (a series of addition operations). Can you imagine how awful it would be if every time you had to double something, you had to write out every addition operation it took? It would be pure anarchy! Along the same vein, exponential notation is just a way of expressing a series of multiplication operations. So in this instance, 2^3 would be the short way of writing 2 × 2 × 2. So now enter Knuth’s Up-Arrow notation; this notation gives us a way to represent a series of exponential operations.

The whole idea of multiple exponential operations can be a little daunting, so let’s go over an example. Let’s take a regular exponential operation that we are used to seeing, like 3^2. This is easy enough to understand, so let’s convert it into up-arrow notation. As you can guess from the name, up-arrow notation uses an “up-arrow” as its symbol, so 3^2 turns into 3↑2. And there you have it, up-arrow notation! Just kidding, we have barely scratched the surface of the awesomeness that is up-arrow notation. That was an easy example, so let’s take it one step further. Let’s say that you wanted to write out 3 raised to the power of 3 raised to the power of 3 (3^3^3); that wouldn’t be too bad, right? What if you wanted to take it out one more step? How about two more steps? Eventually you are going to hit the limit of how small you can actually write. But don’t you fret your little mathematician head; up-arrow notation is here to save the day. 3 raised to the power of 3 raised to the power of 3 (3^3^3) becomes the nice and simple expression 3↑↑3. Whew, that was a whole lot easier and shorter to write out. This could continue until you were blue in the face. For example, 3↑↑4 doesn’t translate to 3 raised to the power of 3 raised to the power of 4 (3^3^4); it is actually equivalent to 3 raised to the power of 3 raised to the power of 3 raised to the power of 3, or 3^3^3^3. So as you can see, these numbers begin to get bigger very quickly!

Moving on to something even more complex (yay for complexity!!! Wait….), let’s look at when we add a third arrow into the mix. Basically when you add an arrow, you create a series of up arrow operations. So if you have “n” arrows, you can expand it out into a series of (n-1) arrow operators. So looking at the example 3↑↑↑2, let’s expand this out into a series. The problem would then look like 3↑↑↑2 = 3↑↑3 = 3↑3↑3 = 7625597484987. So we had n=3, so when we expanded it out, it became a series of two arrow operations, and then we expanded that out to a series of one arrow operations. So now when we change this to 3↑↑↑3, we can again do this expansion. This time we get 3↑↑↑3 = 3↑↑(3↑↑3) = 3↑↑(3↑3↑3). If we look back at our double arrow example, we know that we will have 3 raised to the power of 3 raised to the power of 3 and so on 7625597484987 times! So as you can see, the triple arrow form grows unbelievably faster than even the double arrow form.
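The expansion rules above translate directly into a recursive definition. Here is a small Python sketch (the function name up_arrow is mine); be warned that anything beyond tiny arguments will never finish computing:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: one arrow is plain exponentiation,
    and n arrows expand into a chain of (n-1)-arrow operations."""
    if n == 1:
        return a ** b            # base case: a ↑ b = a^b
    if b == 0:
        return 1                 # empty chain convention: a ↑^n 0 = 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3↑↑3 = 3^(3^3) = 7625597484987
print(up_arrow(3, 3, 2))  # 3↑↑↑2 = 3↑↑3 = 7625597484987
```

Trying up_arrow(3, 3, 3) reproduces the expansion 3↑↑(3↑↑3) described above, a power tower of height 7625597484987, so don’t actually run it.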

I hope that you are beginning to see the usefulness of this notation. You can take unnecessarily complicated expressions and shorten them to something much easier to read and much easier to write. Up-Arrow notation even goes as far as a quadruple arrow notation, but we will save that for another time. In the future, I hope you’ll use what you’ve learned here to confuse a friend or show off at a classy math party. Until then, just enjoy this fun little notation.