
What are the p-adic numbers?

The p-adic numbers are a completion of the rational numbers with respect to the p-adic norm, and they form a non-Archimedean field. They stand in contrast to the real numbers, which are the completion of the rationals with respect to the usual absolute value and which do satisfy the Archimedean property. The p-adic numbers were first described in 1897 by Kurt Hensel, and the underlying theory of valuations was generalized in the early 20th century by József Kürschák, paving the way for the large body of mathematical work involving the p-adics in the century since.

To start talking about the p-adics, it makes sense to start with the Archimedean property, which, as mentioned above, the p-adics do not satisfy. First we need an absolute value function, or valuation, $|\cdot|$, which is a function that gives some notion of magnitude to each element of a field (so it is a map from the field to the non-negative real numbers). Such a function should send only the zero element to zero, and the valuation of a product of two elements should equal the product of their individual valuations. The last condition (which determines whether or not the valuation is Archimedean) is that there is some constant $C$, independent of the choice of $x$, such that whenever $|x| \leq 1$ we also have $|1 + x| \leq C$. If this last condition holds with $C = 1$, then the valuation satisfies the ultrametric inequality: the valuation of a sum of two elements is at most the maximum of their individual valuations. If the ultrametric inequality is satisfied, the valuation is non-Archimedean; otherwise it is Archimedean.
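
Written out compactly (this is just the prose above in symbols, with $F$ the field and $|\cdot| : F \to \mathbb{R}_{\geq 0}$):

```latex
% The conditions on a valuation, restated in symbols.
|x| = 0 \iff x = 0, \qquad |xy| = |x|\,|y|, \qquad
\exists\, C > 0 \ \text{such that}\ |x| \le 1 \implies |1 + x| \le C.
% If C = 1 works, the third condition is equivalent to the ultrametric
% inequality, and the valuation is non-Archimedean:
|x + y| \le \max\bigl(|x|,\, |y|\bigr).
```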

While this is a bit opaque, it makes more sense when we move on to defining Archimedean fields: a field with an associated valuation is Archimedean if for every non-zero element $x$ of the field there exists some natural number $n$ such that $\left|\sum_{k=1}^{n} x\right| = |nx| > 1$. Otherwise the field is non-Archimedean and the ultrametric inequality holds. Roughly, this means that in a nice Archimedean world, where we can measure distance, we can walk forward as far as we want by taking enough steps; in a non-Archimedean world, no matter how many steps we take, we at best stay in place and may even end up closer to where we started.
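
To make the walking analogy concrete (jumping slightly ahead to the 3-adic norm defined in the next paragraph): with the usual absolute value, adding a fixed non-zero step to itself eventually exceeds 1, but with the 3-adic norm it never does.

```latex
% Ordinary absolute value: steps of size 1/2 eventually pass 1.
\left|\tfrac{1}{2} + \tfrac{1}{2} + \tfrac{1}{2}\right| = \tfrac{3}{2} > 1.
% 3-adic norm: n steps of size 3 never pass 1, no matter how large n is,
% since |n|_3 <= 1 for every integer n.
\left|\underbrace{3 + 3 + \cdots + 3}_{n\ \text{times}}\right|_3
  = |3n|_3 = |3|_3\,|n|_3 \le \tfrac{1}{3} < 1.
```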

Now that that’s out of the way and (hopefully) the weirdness of a non-Archimedean world has been established, it’s time to talk about the p-adics. Any non-zero rational number $x$ may be written in the form $x = p^{r}\,\frac{a}{b}$, where $a$ and $b$ are integers relatively prime to some fixed prime $p$ and $r$ is an integer. Using this, the p-adic norm of $x$ is defined as $|x|_p = p^{-r}$, which is non-Archimedean. For example, when $p = 3$: $|6|_3 = |2 \cdot 3|_3 = 1/3$, $|9|_3 = |3^2|_3 = 1/9$, and $|6 + 9|_3 = |15|_3 = |3 \cdot 5|_3 = 1/3$. When $p = 5$: $|4/75|_5 = |4/(5^2 \cdot 3)|_5 = 25$ and $|13/250|_5 = |13/(2 \cdot 5^3)|_5 = 125$, while $|4/75 + 13/250|_5 = |79/750|_5 = |79/(2 \cdot 3 \cdot 5^3)|_5 = 125$; in both cases the norm of the sum is at most the maximum of the two norms, as the ultrametric inequality requires. With this norm in hand we can proceed exactly as when constructing the real numbers from the absolute value and define $\mathbb{Q}_p$ as the set of equivalence classes of Cauchy sequences of rationals with respect to the p-adic norm. After some work it can be shown that every non-zero element of $\mathbb{Q}_p$ can be written uniquely as $\sum_{k=m}^{\infty} a_k p^k$, where the digits $a_k$ lie in $\{0, 1, \ldots, p-1\}$, $a_m \neq 0$, and $m$ may be any integer (possibly negative).
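
These norm computations are easy to sanity-check in code. Below is a minimal sketch of the definition (the function name padic_norm and the use of Python's fractions module are my own choices, not anything from the sources):

```python
from fractions import Fraction

def padic_norm(x, p):
    """p-adic norm |x|_p of a rational x: write x = p**r * a/b with p dividing
    neither a nor b, and return p**(-r). The norm of 0 is 0 by convention."""
    if x == 0:
        return Fraction(0)
    x = Fraction(x)
    r = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:          # factors of p in the numerator raise r
        num //= p
        r += 1
    while den % p == 0:          # factors of p in the denominator lower r
        den //= p
        r -= 1
    return Fraction(p) ** (-r)

# The examples from the text, p = 3 and p = 5:
print(padic_norm(6, 3), padic_norm(9, 3), padic_norm(6 + 9, 3))   # 1/3 1/9 1/3
x, y = Fraction(4, 75), Fraction(13, 250)
print(padic_norm(x, 5), padic_norm(y, 5), padic_norm(x + y, 5))   # 25 125 125
# Ultrametric check: |x + y|_5 <= max(|x|_5, |y|_5)
assert padic_norm(x + y, 5) <= max(padic_norm(x, 5), padic_norm(y, 5))
```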

The most common use of the p-adics I found was in showing the existence (or lack thereof) of rational or integer solutions to problems. For example, the Hasse principle (also known as the local-global principle) was formulated by Helmut Hasse in the 1920s and attempts to give a partial converse of the statement that if a polynomial of the form $\sum a_{ij}x_i x_j + \sum b_i x_i + c = 0$ has a rational solution then it has a solution in every completion of $\mathbb{Q}$. The Hasse principle asserts that if such a polynomial has a solution in $\mathbb{R}$ and in every $\mathbb{Q}_p$, then it has a solution in $\mathbb{Q}$. An example is $x^2 - 2 = 0$, which has the (irrational) solution $\sqrt{2}$ in $\mathbb{R}$ but has no solution in $\mathbb{Q}_5$, since 2 is not a square modulo 5; the easy direction then tells us it has no solution in $\mathbb{Q}$, which we know to be true. Another fairly interesting use of the p-adics is in transferring standard real or complex polynomial equations to their tropical counterparts (the tropical semiring is the reals together with an infinity element, under the operations of addition and min, or max), a process which runs into issues over $\mathbb{R}$ or $\mathbb{C}$ precisely because it needs the ultrametric inequality, which the p-adic norm provides.
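
The claim that 2 is not a square modulo 5 is the whole obstruction here: any solution in $\mathbb{Q}_5$ would have to be a 5-adic unit, and its square would reduce to a square mod 5. A two-line check (my own illustration) confirms no such square exists:

```python
# List the squares modulo 5; if 2 is not among them, then x^2 = 2 has no
# solution in Q_5 (a 5-adic solution would reduce to a square root of 2 mod 5).
p = 5
squares_mod_p = sorted({(x * x) % p for x in range(p)})
print(squares_mod_p)          # [0, 1, 4] -- 2 does not appear
print(2 in squares_mod_p)     # False, so x^2 - 2 = 0 has no solution in Q_5
```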

Sources:

http://www.cut-the-knot.org/blue/LocalGlobal.shtml

http://en.wikipedia.org/wiki/P-adic_number

http://mathworld.wolfram.com/p-adicNumber.html

http://mathworld.wolfram.com/p-adicNorm.html

http://mathworld.wolfram.com/Valuation.html

http://www1.spms.ntu.edu.sg/~frederique/antchap5.pdf

http://www.math.ucla.edu/~i707107/HasseMinkowski.pdf

The path to Analytic Geometry (Or a few of the many geniuses it took to learn 5th grade math)

Analytic geometry is the study of geometry using a coordinate system. Basically, it is the idea of expressing geometric objects such as a line or a plane as an algebraic equation; think $y = mx + b$ or $ax + by + cz = k$. This may be done using the familiar Cartesian coordinates, using something such as polar coordinates, or using just about any system for defining coordinates in a Euclidean space. The Common Core introduces the concept of graphing in 5th grade and graphing simple functions in 8th grade. It is quite interesting that something which took brilliant men so long to develop is now introduced to ten-year-olds.

The earliest evidence of anything resembling analytic geometry comes from the Greek mathematician Menaechmus (380–320 BC), who was a student of Eudoxus and a tutor of Alexander the Great. Proclus and Eutocius both report that Menaechmus discovered the ellipse, hyperbola, and parabola, and that these were initially called the “Menaechmian triad”. He used these curves, along with something resembling analytic geometry, to solve the Delian problem: given the edge of a cube, construct the edge of a cube with double the volume. Most of what we know of Menaechmus and his exact solution is second hand, as his original work was lost, but it appears that he argued his solution for doubling the cube using proportions between side lengths and areas, which fairly quickly leads to conics.
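
A standard modern reconstruction of the argument (the notation is mine, not Menaechmus’s): to double a cube of edge $a$, insert two mean proportionals $x$ and $y$ between $a$ and $2a$; each pair of ratios is a conic, and their intersection gives the desired edge.

```latex
% Two mean proportionals between a and 2a:
\frac{a}{x} = \frac{x}{y} = \frac{y}{2a}.
% The first equality gives a parabola, the outer equality a hyperbola:
x^2 = a y, \qquad x y = 2 a^2.
% Eliminating y shows the intersection point solves the Delian problem:
x^3 = 2 a^3.
```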

Another early manifestation of analytic geometry came from Omar Khayyám, whom we have mentioned in class. He drew a connection between algebra and geometry in his solution of general cubic equations. His idea was to give a geometric construction for a cubic equation by treating the unknown as the edge of a cube and constructing a pair of curves from whose intersection a solution could be read off. While it might seem far removed from Cartesian coordinates, it was a significant leap in connecting the separate concepts of algebra and geometry.
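
As one concrete case (a common modern rendering of one of Khayyám’s constructions, so treat the exact form as illustrative rather than as his own notation): the cubic $x^3 + b^2 x = b^2 c$ is solved by intersecting a parabola with a circle.

```latex
% Parabola x^2 = b y and the circle with diameter c on the x-axis,
% x^2 + y^2 = c x. Substituting y = x^2 / b into the circle:
x^2 + \frac{x^4}{b^2} = c x
\;\Longrightarrow\; x^3 + b^2 x = b^2 c,
% so the x-coordinate of the intersection point is a root of the cubic.
```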

Analytic geometry was more or less formalized in the early 17th century independently by René Descartes and Pierre de Fermat. Descartes published first and so is commonly credited as the sole creator, which leads to analytic geometry often being called Cartesian geometry. As Fermat has already been much discussed, I’ll skip his background and instead jump to Descartes. René Descartes was a French mathematician and philosopher who is best known as the (co-)creator of analytic geometry and as the father of modern philosophy. He is the source of the well-known quote “Je pense, donc je suis”, or “I think, therefore I am”, which appeared in his Discours de la méthode (Discourse on the Method).

While the Fermat and Descartes constructions are equivalent, they differ in several ways, primarily stemming from which direction each man worked. Fermat started with the algebraic equation and described the analogous geometric curve, while Descartes worked in reverse, starting with the curve and finding its equation. The way most of us learn analytic geometry is much closer to Fermat than to Descartes: we learn to recognize that a degree-1 polynomial represents a straight line and then learn how to find that line, next that a quadratic function represents a parabola, and so on. If we were instead to learn the way Descartes worked, we would start with a straight line and then discover that it is represented by a degree-1 polynomial, which so far looks much like Fermat’s order; but continuing in that direction it makes little sense to jump straight to parabolas, and more sense to talk about conics and all degree-2 polynomials, with no reason to single out parabolas.

In 1637, Descartes published his method of connecting arithmetic, algebra, and geometry in the appendix La géométrie (The Geometry) of Discourse on the Method. However, given Descartes’s opaque writing style (intended to discourage “dabblers”), as well as The Geometry being written in French rather than in the more common (for academic purposes) Latin, the book was not very well received until it was translated into Latin in 1649 by Frans van Schooten, with added commentary clarifying certain arguments. Interestingly, though Descartes is credited with the invention of the coordinate plane, since he describes all the necessary concepts, no equations are in fact graphed in The Geometry, and his examples used only one axis. It was not until the Latin translation that the concept of two axes was introduced, in van Schooten’s commentary.

One of the most important early uses of analytic geometry was to help argue for the heliocentric theory of planetary motion, the (then) theory that the planets orbit the Sun. As analytic geometry was one of the first methods that could be used to actually make computations about curves, it was used to model elliptical orbits and so demonstrate the correctness of the theory. Analytic geometry, and particularly Cartesian coordinates, were also instrumental in the creation of calculus. Just consider how you might calculate something like the “area under a curve” without the concept of the curve being described by some algebraic equation. Similarly, the idea of the rate of change of a function at a particular moment becomes much clearer when thought of as the slope of a tangent line, but to do this we need to think of the function as having some representation in the plane, for which we need analytic geometry.

Sources:

https://www1.maths.leeds.ac.uk/~sta6kvm/omar.pdf

http://www.math.wichita.edu/history/Men/descartes.html

http://www.encyclopedia.com/topic/Pierre_de_Fermat.aspx

A. D. Aleksandrov and A. N. Kolmogorov, Mathematics: Its Content, Methods and Meaning (Dover Books on Mathematics, 1999)

http://www-history.mcs.st-and.ac.uk/Biographies/Menaechmus.html

http://academic.sun.ac.za/mathed/Shoma/MATUNIT12_02.htm

http://www.cut-the-knot.org/WhatIs/WhatIsAnalyticGeometry.shtml

http://en.wikipedia.org/wiki/Menaechmus

http://en.wikipedia.org/wiki/Discourse_on_the_Method

http://en.wikipedia.org/wiki/La_G%C3%A9om%C3%A9trie

The Lucas-Lehmer Test and Why It’s Pretty Neat

What is it?

The GIMPS Logo. Public domain, via Wikimedia Commons.

The Lucas-Lehmer test is a test for the primality of Mersenne numbers, $M_n = 2^n - 1$. It states that for an odd prime $p$, $M_p$ is prime if and only if $s_{p-2} \equiv 0 \pmod{M_p}$, where $(s_i)$ is the sequence defined by $s_i = s_{i-1}^2 - 2$ with $s_0 = 4$. In other words, calculate the $(p-2)$th term of this sequence, and if $M_p$ divides it, then you have found a Mersenne prime.
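
Since the whole test is just a short recurrence plus one divisibility check, it translates directly into code. Here is a minimal sketch (the function name and structure are my own, not from any of the sources below); note that in practice each $s_i$ is reduced modulo $M_p$ at every step so the numbers never get large:

```python
def lucas_lehmer(p):
    """Return True if M_p = 2**p - 1 is prime, for an odd prime exponent p.
    Implements the recurrence s_i = s_{i-1}^2 - 2 with s_0 = 4, reducing
    modulo M_p at every step, then checks whether M_p divides s_{p-2}."""
    m = (1 << p) - 1          # M_p = 2^p - 1
    s = 4
    for _ in range(p - 2):    # compute s_1, ..., s_{p-2} modulo M_p
        s = (s * s - 2) % m
    return s == 0

# Exponents 3, 5, 7, 13, 17, 19 all give Mersenne primes; 11 does not.
for p in [3, 5, 7, 11, 13, 17, 19]:
    print(p, (1 << p) - 1, lucas_lehmer(p))
```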

The basis for the test was built by Édouard Lucas in the 1870s, and in 1876 he used a rough version of what would become the test to prove that $2^{127} - 1 = 170141183460469231731687303715884105727$ is prime. This stood as the largest known prime for 75 years and remains the largest prime ever found by hand calculation. However, Lucas was only able to prove the case where $p$ is congruent to 1 mod 4. In the 1930s, Lehmer extended Lucas’s result to all odd primes $p$ and formulated it into the “Lucas-Lehmer test”, which is still used for proving the primality of Mersenne numbers.

Some examples

The first 7 terms of $s_i$ are 4, 14, 194, 37634, 1416317954, 2005956546822746114, and 4023861667741036022825635656102100994, which quite clearly demonstrates how quickly $s_i$ grows. Given this speed, let us simply look at a few more easily calculable (though still somewhat dubious) cases whose outcomes are obvious: $M_3$, $M_4$, and $M_7$. First, $M_3 = 2^3 - 1 = 7$, and from the list above $s_1 = 14 = 2 \cdot 7$, so the test tells us that 7 is prime, which is true. Next, $M_4 = 2^4 - 1 = 15$, which is not prime, and since $s_2 = 194 = 13 \cdot 15 - 1$, the test correctly reports that 15 is not prime (note that 4 is not prime, so this case falls outside the test as stated, which is why I call it dubious). Lastly, for a larger example, $M_7 = 2^7 - 1 = 127$: none of 2, 3, 5, 7, or 11 divides 127, so it is clearly prime. Performing the test, $s_5 = 2005956546822746114 = 127 \cdot 15794933439549182$, which confirms that 127 is prime; however, in this particular case, checking the handful of primes I listed is far easier than dividing a 19-digit number by a 3-digit number.
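
These hand calculations are easy to reproduce. The snippet below (again, just my own illustration) generates the unreduced terms listed above and repeats the three divisibility checks:

```python
# Generate the first 7 unreduced terms of s_i and repeat the divisibility
# checks from the examples above (illustration only; real implementations
# reduce modulo M_p at every step, as in the earlier sketch).
terms = [4]
for _ in range(6):
    terms.append(terms[-1] ** 2 - 2)
print(terms)                      # 4, 14, 194, 37634, 1416317954, ...

print(terms[1] % (2**3 - 1))      # 0  -> M_3 = 7 is prime
print(terms[2] % (2**4 - 1))      # 14 (nonzero) -> 15 is not prime
print(terms[5] % (2**7 - 1))      # 0  -> M_7 = 127 is prime
```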

Clearly in these cases checking by trial division would be easier than using the test; however, as you check larger and larger $M_p$, the number of primes you would need to check starts to become a bit extreme. For example, the smallest prime dividing $M_{59}$ is 179951, which is the 16336th prime; in cases like this the Lucas-Lehmer test is quite a bit easier than checking 16336 primes. Even in a less extreme example such as $M_{13} = 8191$, particularly when working without computer assistance, the Lucas-Lehmer test, which requires computing 11 terms of the sequence and then dividing once, looks like a better option than trial division, which would entail dividing by every prime up to $\sqrt{8191} \approx 90$; each of those divisions is individually easier to compute, but I certainly wouldn’t enjoy doing all of them.
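
The $M_{59}$ claim is also easy to check by brute force, though this is exactly the kind of work the test lets you avoid (a throwaway illustration, not a serious factoring method):

```python
# Find the smallest prime factor of M_59 by trial division. The first divisor
# greater than 1 that turns up is automatically prime.
m59 = (1 << 59) - 1
d = 3
while m59 % d != 0:
    d += 2          # M_59 is odd, so only odd candidates need checking
print(d, m59 // d)  # 179951 3203431780337
```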

Proof

The proof of sufficiency given by J. W. Bruce for the Lucas-Lehmer test is the reason I find the statement so memorable and, frankly, really cool. While I can’t find how Lehmer originally proved the test, Michael Rosen published a proof in 1988 of which the majority of other proofs (including Bruce’s) are simplifications, and though I won’t copy it here, I will explain the lemma the proofs hinge on, which is the part I find so interesting.

Lemma: Let $w = 2 + \sqrt{3}$, $v = 2 - \sqrt{3}$, and $a = 2^m$. Then $s_m = w^a + v^a$.

This is shown fairly easily by induction and is then used in the proof to derive some useful equalities under the assumption that $M_p$ divides $s_{p-2}$. The proof then uses these equalities, along with the fact that $w$ is a unit in $\mathbb{Z}[\sqrt{3}]$, to find a contradiction by looking at the multiplicative group of $\mathbb{Z}[\sqrt{3}]/q\mathbb{Z}[\sqrt{3}]$, where $q$ is a prime divisor of $M_p$. I don’t know about anyone else, but when asking whether ridiculously large numbers of the form $2^n - 1$ are prime, I find the leap from an integer sequence to the ring $\mathbb{Z}[\sqrt{3}]$ to be a very non-intuitive step. I think it’s fascinating how much realizing this “strange” relation simplifies a seemingly very complex problem into something very straightforward.
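
For anyone who wants to see why the lemma holds, the induction takes only a couple of lines, using the fact that $wv = (2+\sqrt{3})(2-\sqrt{3}) = 1$:

```latex
% Base case, then the induction step (with w v = 1):
s_0 = 4 = w + v = w^{2^0} + v^{2^0},
\qquad
s_{m+1} = s_m^2 - 2
        = \bigl(w^{2^m} + v^{2^m}\bigr)^2 - 2
        = w^{2^{m+1}} + 2\,(wv)^{2^m} + v^{2^{m+1}} - 2
        = w^{2^{m+1}} + v^{2^{m+1}}.
```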

In Conclusion

Well, I hope the takeaway from this is that if you ever need to figure out whether a 100+ digit number of the form $2^n - 1$ is prime and you don’t have access to the internet (or a similar resource), you now have a way to do it without determining a lot of primes and dividing a bunch of times. In all seriousness, though, I hope I’ve given a reasonable explanation of what the Lucas-Lehmer test is and why it’s so useful, and I hope that anyone reading this (yeah right) will take a look at Rosen’s and Bruce’s proofs, as they are really quite cool (even bigger yeah right).

Sources:

Rosen’s proof of the entire test: http://www.jstor.org/stable/2322904

Bruce’s Proof of Sufficiency: http://www.jstor.org/stable/2324959

http://primes.utm.edu/notes/by_year.html

http://www.mersennewiki.org/index.php/Lucas-Lehmer_Test

http://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_primality_test

http://primes.utm.edu/mersenne/