Tag Archives: Mathematics

Who knew that an unlikely friendship and a few games of cricket with one of the greatest mathematicians in the early 20th Century could lead to a breakthrough in population genetics?

Today, it is almost commonplace for us in the scientific community to accept the influence natural selection and Mendelian genetics have on one another; however, for the majority of human history this was not the case. Up until the early 1900s, many scientists believed that these concepts were nothing more than two opposing and unassociated positions on heredity. Scientists were torn between a theory of inheritance (a.k.a. Mendelian genetics) and a theory of evolution through natural selection. Although natural selection could account for variation, which inheritance could not, it offered no real explanation of how traits were passed on to the next generation. For the most part, scientists could not see how well Mendel’s theory of inheritance worked with Darwin’s theory of evolution because they did not have a way to quantify the relationship. It was not until the introduction of the theorem of genetic equilibrium that biologists acquired the mathematical rigor needed to show how inheritance and natural selection interacted. One of the men who helped provide this framework was G.H. Hardy.

G. H. Hardy. Image: public domain, via Wikimedia Commons.

Godfrey Harold (G.H.) Hardy was a renowned English mathematician who lived from 1877 to 1947 and is best known for his accomplishments in number theory and for his work with another great mathematician, Srinivasa Ramanujan. For a man who was such an outspoken supporter of pure mathematics and abhorred any practical application of his work[5], it is ironic that he should have such a powerful influence on a field of applied mathematics and help shape our very understanding of population genetics.

How did a pure mathematician come to work on population genetics? Well, it all started with a few games of cricket. While teaching at the University of Cambridge, Hardy would often interact with professors in other departments through friendly games of cricket and evening common meals [1]. It was through these interactions that Hardy came to know, and develop a close friendship with, Reginald Punnett, cofounder of the genetics department at Cambridge and developer of the Punnett squares that are named for him [13].

Punnett, being one of the foremost experts in population genetics, was in the thick of the debate over inheritance vs. evolution. His interactions with contemporaries like G. Udny Yule made him wonder why the dominant variations of a gene (such variations are known as alleles) did not eventually crowd every other allele out of a population’s genotype. This was the question he posed to Hardy in 1908, and Hardy’s response was nigh on brilliant. The answer was so simple that it almost seemed obvious. Hardy even expressed that “I should have expected the very simple point which I wish to make to have been familiar to biologists” [4]. His solution was so simple, in fact, that unbeknownst to him, another scientist, Wilhelm Weinberg, had reached the same conclusion around the same time in Germany [17]. In time, this concept would be known as Hardy-Weinberg Equilibrium (HWE).

In short, HWE asserts that when a population is not experiencing any genetic changes that would cause it to evolve, such as genetic drift, gene flow, selective mating, etc., then the allele frequencies (af) and genotype frequencies (gf) will remain constant within a given population (P’). To calculate the gf, we simply look at the proportion of each genotype in P’. Being a proportion, each frequency satisfies

0 ≤ gf ≤ 1

To calculate the af, we count the alleles themselves: a genotype is either homozygous, carrying two copies of the same allele (dominant AA or recessive aa), or heterozygous, carrying one copy of each allele (Aa). P’ achieves “equilibrium” when these frequencies do not change.

Hardy’s proof of these constant frequencies for humans, a diploid species that receives two complete sets of chromosomes from its parents, is as follows[1][4]:

If parental genotypic proportions are p AA : 2q Aa : r aa, then the offspring’s would be (p + q)² : 2(p + q)(q + r) : (q + r)². With four equations (the three genotype frequencies and p + 2q + r = 1) and three unknowns, there must be a relation among them. “It is easy to see that . . . this is q² = pr.”

Which is then broken down as:

q = (p + q)(q + r) = q(p + r) + pr + q²

Then to:

q² = q(1 − p − r) − pr = 2q² − pr,   which gives   q² = pr

In order to fully account for the population, the gf and af must each sum to 1. And, since each subsequent generation inherits the same pool of alleles, the frequencies remain constant, following a binomial distribution for two alleles (or a multinomial distribution for more).
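Hardy’s point can also be checked numerically. Here is a minimal Python sketch (the function name is my own) of one round of random mating for a two-allele locus; note that the proportions stop changing after a single generation:

```python
def next_genotype_freqs(p_AA, p_Aa, p_aa):
    """Genotype frequencies after one round of random mating.

    Allele frequencies: f(A) = p_AA + p_Aa/2 and f(a) = p_aa + p_Aa/2.
    Under random mating, offspring genotypes follow the binomial
    expansion (f(A) + f(a))^2 = f(A)^2 + 2 f(A) f(a) + f(a)^2.
    """
    fA = p_AA + p_Aa / 2
    fa = p_aa + p_Aa / 2
    return fA**2, 2 * fA * fa, fa**2

# Start far from equilibrium: all homozygotes, no heterozygotes.
freqs = (0.5, 0.0, 0.5)
for _ in range(3):
    freqs = next_genotype_freqs(*freqs)
    # (0.25, 0.5, 0.25) after the first generation, and unchanged after.
```

With p = 0.25, 2q = 0.5, r = 0.25, Hardy’s relation q² = pr holds: 0.0625 = 0.0625.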

One important thing to keep in mind, however, is that almost every population is experiencing some form of evolutionary change. So, while HWE shows that the frequencies don’t change or disappear, it is best used as a baseline model to test for changes or equilibrium.

When using the Hardy-Weinberg theorem to test for equilibrium, researchers divide the genotypic expressions into two homozygous events: HHο and hhο. The union of the two events’ frequencies (f) is then calculated to give the estimated number of alleles (Nf). In this case, the expression for HWE could read something like this:

Nf = f(HHο) ∪ f(hhο)

However, another way to view this expression is to represent the frequency of each homozygous event as a single variable, i.e. p and q. Using p to represent the frequency of one dominant homozygous event (H) and q to represent the frequency of one recessive homozygous event (h) gives the following: p = f(H) and q = f(h). It then follows that p² = f(HHο) and q² = f(hhο). Using the Rule of Addition and the Associative Property to calculate the union of the two events’ frequencies, we are left with F = (p+q)². Given that the genotype frequencies must sum to one, the prevailing expression for HWE emerges when F is expanded:

F = p² + 2pq + q² = 1

Using this formula, researchers can create a baseline model of P’ and then identify evolutionary pressures by comparing any subsequent frequencies of alleles and genotypes (F′) to the baseline F. The data can then be visually represented as a change of allele frequency with respect to time.
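As a concrete illustration of using p² + 2pq + q² = 1 as a baseline, here is a minimal sketch (the genotype counts are invented for illustration; real studies use a formal chi-square or exact test, e.g. Guo and Thompson [3]):

```python
def hwe_expected(n_AA, n_Aa, n_aa):
    """Expected genotype counts under Hardy-Weinberg equilibrium,
    given observed counts at a single two-allele locus."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # frequency of allele A
    q = 1 - p                         # frequency of allele a
    return n * p**2, n * 2 * p * q, n * q**2

def chi_square(observed, expected):
    # Larger values mean a bigger departure from the HWE baseline.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

obs = (298, 489, 213)                 # hypothetical sample of 1000 people
exp = hwe_expected(*obs)
stat = chi_square(obs, exp)           # compare against chi-square, 1 d.f.
```

A population exactly at equilibrium, such as counts (250, 500, 250), yields a statistic of zero.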

HWE describes the curious situation that populations experience when their allele frequencies do not change. This situation is identified by first assuming complete dominance, then calculating the frequency of alleles, and then using the resultant number as a baseline against which to compare any subsequent values. Although there are some limitations on how we can use HWE (namely, the need to identify complete dominance), the model is very useful for detecting evolutionary pressures a population may be experiencing and is one of the most important principles in population genetics. Developed, in part, by G.H. Hardy, it connected two key theories: the theory of inheritance and the theory of evolution. Although, mathematically speaking, his observation was almost trivial, Hardy provided the mathematical rigor the field sorely needed to see that genotypes do not simply disappear and, in turn, forever changed the way we view the fields of biology and genetics.


  1. Edwards, A. W. F. “G. H. Hardy (1908) and Hardy–Weinberg Equilibrium.” Genetics 179.3 (2008): 1143-1150.
  2. Edwards, Anthony WF. Foundations of mathematical genetics. Cambridge University Press, 2000.
  3. Guo, Sun Wei, and Elizabeth A. Thompson. “Performing the exact test of Hardy-Weinberg proportion for multiple alleles.” Biometrics (1992): 361-372.
  4. Hardy, Godfrey H. “Mendelian proportions in a mixed population.” Science 28.706 (1908): 49-50.
  5. Hardy, G. H., & Snow, C. P. (1967). A mathematician’s apology. Reprinted, with a foreword by CP Snow. Cambridge University Press.
  6. Pearson, Karl. “Mathematical contributions to the theory of evolution. XI. On the influence of natural selection on the variability and correlation of organs.” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character (1903): 1-66.
  7. Pearson, K., 1904. Mathematical contributions to the theory of evolution. XII. On a generalised theory of alternative inheritance, with special reference to Mendel’s laws. Philos. Trans. R. Soc. A 203 53–86.
  8. Punnett, R. C., 1908. Mendelism in relation to disease. Proc. R. Soc. Med. 1 135–168.
  9. Punnett, R. C., 1911. Mendelism. Macmillan, London.
  10. Punnett, R. C., 1915. Mimicry in Butterflies. Cambridge University Press, Cambridge/London/New York.
  11. Punnett, R. C., 1917. Eliminating feeblemindedness. J. Hered. 8 464–465.
  12. Punnett, R. C., 1950. Early days of genetics. Heredity 4 1–10.
  13. Snow, C. P., 1967. G. H. Hardy. Macmillan, London.
  14. Stern, C., 1943. The Hardy–Weinberg law. Science 97 137–138.
  15. Sturtevant, A. H., 1965. A History of Genetics. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY.
  16. Weinberg, Wilhelm. “Über Vererbungsgesetze beim Menschen.” Molecular and General Genetics MGG 1 (1908): 440-460.
  17. Weinberg, W. “On the demonstration of heredity in man.” Boyer SH, trans (1963) Papers on human genetics. Prentice Hall, Englewood Cliffs, NJ(1908).

Figure: Wikimedia Commons

Is Math an Invention or a Discovery?

A few months ago I was sitting at home watching one of those shows about the universe, you know, the kind that tries to condense everything there is to know about our world into a few short episodes. This particular episode was about Isaac Newton and all of his work, and it discussed how he invented calculus and forever changed the way that we understand our universe. I was pretty intrigued when my boyfriend raised a question I had never really given much thought to. He said to me, “Do you believe math is something we discovered or something we invented?” My immediate reaction was that math was a discovery; there is no way that we just made all of this up! After this conversation, I noticed that I kept returning to the question, but I never really could come up with a solid answer. So I will raise the same question again: was math invented or discovered?

Fibonacci Sequence in a sunflower. Image: Ginette, via Flickr.


Let’s start with the discovery side of things; there are many different mathematicians who believed that math was a discovery, such as Plato and Euclid. Mathematical Platonism is “the metaphysical view that there are abstract mathematical objects whose existence is independent of us and our language, thought, and practices.”[1] This philosophical viewpoint states that our universe is made up entirely of math. When we begin to understand math, we are allowing ourselves to understand more about how the world around us works [2]. Have you ever thought about how math occurs in nature, that there are patterns and sequences all around us? Euclid believed that nature was a physical manifestation of math [3]. Examples of mathematics in nature include honeycombs, the wings of insects, shells, and flowers. We also find the opposite of patterns in nature: uniqueness. The theory that no two snowflakes are the same is an example of uniqueness occurring in nature. Another, more modern theory that supports the notion that math is a discovery is the mathematical universe hypothesis, proposed by the cosmologist Max Tegmark. This theory states, “Our external physical reality is a mathematical structure.”[4] Basically, he is saying that math is not merely used to describe our universe; rather, our universe is one mathematical object. I find this theory very intriguing, and it would explain why math can be applied to everything that we know.

On the other side we have the belief that math is an invention. The most common version is that math is a completely human construct, which we made up in order to help us have a better understanding of the world around us. This position is called intuitionism. It is a rejection of Mathematical Platonism and states that “The truth of a mathematical statement is a subjective claim: a mathematical statement corresponds to a mental construction, and a mathematician can assert the truth of a statement only by verifying the validity of that construction by intuition.”[5] Opposing the mathematical universe hypothesis is Gödel’s first incompleteness theorem, which states that any consistent formal system whose axioms can express basic arithmetic contains true statements that cannot be proven within the system. [6] This suggests that math itself is like one giant loop. Every time we solve one problem based on assumptions, we gain another problem that we must now base on assumptions we made from the last problem. This cycle will continue to repeat itself over and over and is inexhaustible.

Another common observation about math is how we actually carry out the process. If math were a discovery, would we always have the same method for each problem? As shown in class, the Egyptians had a completely different way to multiply, one that can be more effective than our current system of multiplication because it involves less memorization. Are our different methods for the same math problem enough to show that math is an invention? Or is it enough that we can get to the same solution, so the process isn’t as important? There is even the possibility that there are more discoveries to be made which could end our need for different methods to reach the same solution. There could be a missing link in our chain that we have to work around in order to get the solutions we need, but if we found that missing link, we would only need one method to solve our mathematical problems.

In my own opinion, the recurring theme of mathematics in nature is evidence enough for me to believe that math is a discovery and not an invention. With that said, there are compelling arguments on both sides, and it may take us years, if ever, to really prove whether math is a discovery or an invention.

[1] http://plato.stanford.edu/entries/platonism-mathematics/

[2] http://science.howstuffworks.com/math-concepts/math4.htm

[3] https://www.youtube.com/watch?v=X_xR5Kes4Rs

[4] http://en.wikipedia.org/wiki/Mathematical_universe_hypothesis

[5] http://en.wikipedia.org/wiki/Intuitionism

[6] http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems

The Passionate Statistician

Fig. 1 Portrait of Florence Nightingale

Almost everyone has heard of Florence Nightingale: the Nurse; very few people have heard about Florence Nightingale: the Statistician. Ironically, however, she is one of the most important statisticians to have ever graced the field. Yes, her improvements to sanitation were revolutionary, and surely saved countless lives, but how she was able to bring about those improvements was equally innovative.

Picture this: It’s 1855 during the Crimean War. The air is rank and humid, filled with the smell of blood and sulfur—the fresh, salty aroma of the adjacent Black Sea long forgotten. Although you are required to help care for the many wounded soldiers, you are also charged with collecting, accurately keeping, and even analyzing army mortality records.

After months of attending to these records and conducting your analyses, you are faced with an appalling, yet undeniable, truth: more soldiers are dying from poor sanitation than from combat. In fact, the mortality rates from disease are so great that “during the first seven months of the Crimean campaign, a mortality rate of 60 per cent. per annum from disease alone occurred, a rate of mortality which exceeded even the Great Plague in London…”[1]

Florence Nightingale (FN) believed that allowing 16,000 men[2] to die from causes that were easily prevented with improved sanitation was almost akin to murder, and it would be equally criminal to do nothing to prevent these needless deaths from happening again[3]. She also felt it was downright disgraceful, if not scandalous, for a nation that considered itself the epitome of civilization to be this neglectful of their sanitation policy[4]. Sadly, these poor sanitary conditions were not just associated with the battlefield. As the war ended and FN returned home, she found that army barracks and even the hospitals experienced equally appalling mortality rates from disease[5].

She knew sanitation policies needed to be improved. She knew her statistical analysis was the best tool she could use to convince others of this need. Yet, she also knew that she would have to develop a better way to convey her data to the general public. She realized that although the empirical evidence would easily convince those who knew how to read the data, publishing it in the traditional way would severely limit the amount of people who could actually utilize the information. A smaller group of supporters meant that it would take more time to bring about the necessary improvements, and more time meant that more people would die. No, if her campaign was to succeed, she needed to have something that enabled almost everyone to easily draw the same conclusion she had for herself: their nation’s sanitation methods were claiming more lives than their enemy’s artillery.

She decided to model her data as a graphical illustration, specifically as a Rose chart (also known as a Polar Area or Coxcomb chart).

Fig. 2 Nightingale’s 1858 Rose chart, which graphically illustrates mortality rates. Text: The Area of the blue, red, & black wedges are each measured from the centre as the common vertex. The blue wedges measured from the centre of the circle represent area for area the deaths from Preventible or Mitigable Zymotic diseases; the red wedges measured from the centre the deaths from wounds; & the black wedges measured from the centre the deaths from all the other causes. The black line across the red triangle in Nov. 1854 marks the boundary of the deaths from all other causes during the month. In October 1854, & April 1855; the black area coincides with the red; in January & February 1856, the blue coincides with the black. The entire areas may be compared by following the blue, the red & the black lines enclosing them.

At the time, these illustrations were quite an extraordinary feat, and very few statisticians had previously attempted to use such representations. This is because if the statistician was not careful with their calculations, the representations could very easily mislead the observer. In fact, even FN was misled at one point. Her initial analysis of the army’s mortality records led her to conclude that it was malnutrition, and not sanitation, that was the major cause of death during the Crimean campaign[6].

As can be seen in the link here [7], a Rose chart consists of multiple wedges. These wedges are called sectors, and each one represents a different category in one’s data. It is important to note that the individual value of each category is represented by the sector’s area, and not its radius. This is actually what caused FN to misinterpret her data at first, as she used the radius, instead of the area, to represent the value of each sector. Despite the fact that the calculations can make it tricky to accurately represent numerical data, the chart can, on the other hand, simplify comparisons and enable observers to easily identify causation[8].
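The radius-versus-area pitfall is easy to state in code. Here is a small sketch (the function name and the twelve-wedge monthly layout are my own assumptions, not Nightingale’s actual figures): to make a wedge’s area equal its value, the radius must grow with the square root of the value.

```python
import math

def wedge_radius(value, n_wedges=12):
    """Radius of a polar-area (rose) wedge whose AREA equals `value`.

    Each wedge spans an angle of 2*pi/n_wedges, so
    area = (angle / 2) * r**2, which gives r = sqrt(2 * value / angle).
    """
    angle = 2 * math.pi / n_wedges
    return math.sqrt(2 * value / angle)

# A death count four times larger should look four times larger:
r1 = wedge_radius(100)
r4 = wedge_radius(400)          # radius only doubles, so area quadruples
# Plotting the raw value AS the radius instead would inflate the visual
# ratio to sixteen-fold -- exactly the kind of distortion FN first hit.
```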

Florence Nightingale was a true devotee of both statistical analysis and improved health care. She was an innovative woman who pioneered the means to effectively communicate statistical findings about social phenomena. She was both revered and admired by her compatriots in the field of mathematics, and her great admiration for statistics earned her the nickname “The Passionate Statistician.”


[1] Pearson, Egon Sharpe, Maurice George Kendall, and Robert Lewis Plackett, eds. Studies in the History of Statistics and Probability. Vol. 2. London: Griffin, 1970.

[2] Nightingale, F. (1858) Notes on Matters Affecting the Health, Efficiency, and Hospital Administration of the British Army Harrison and Sons, 1858

[3] Nightingale, Florence. Letter to Sir John McNeill. 1 March 1857. Letter H1/ST/NC3/SU74, copy ADD MSS 45768 f29 of Florence Nightingale: The Crimean War: Collected Works of Florence Nightingale, Vol.14. Ed. Lynn McDonald. Ontario, Canada: Wilfrid Laurier Univ. Press, 2010. 498-500. Print.

[4] Nightingale, F. (1859) A Contribution to the Sanitary History of the British Army during the Late War with Russia London: John W. Parker and Son.

[5] Nightingale, Florence. Notes on hospitals. Longman, Green, Longman, Roberts, and Green, 1863.

[6] Cohen, I.B. The Triumph of Numbers: How Counting Shaped Modern Life, W. H. Norton, 2006.

[7] ims25. “Mathematics of the Coxcombs.” Understanding Uncertainty, 11 May 2008. Web. 18 Dec. 2014.

[8] Small, Hugh (1998). Florence Nightingale: Avenging Angel. Constable, London.

[9] Heyde, Christopher C., and Eugene Seneta, eds. Statisticians of the Centuries. Springer, 2001.

[10] Farr, William. Letter to Florence Nightingale. (1857-1912). MS.8033, Nos. 1-20. Florence Nightingale and William Farr Collection. The Wellcome Library, Archives. London. Print.

[Fig. 1] Wikicommons: Duyckinick, Evert A. Portrait Gallery of Eminent Men and Women in Europe and America. New York: Johnson, Wilson & Company, 1873.

[Fig. 2] Wikicommons: Public Domain

Three Centers of a Triangle

There’s far too little geometry—excluding topology and non-Euclidean stuff—on this blog, so let’s add a little.

Euler Line

Euler line HU. Points H, U, and S are
respectively the circumcenter, centroid,
and orthocenter. Image: Rene Grothmann at the German Language Wikipedia.

Our goal is to get to the Euler line, a line that passes through a triangle’s circumcenter, centroid, and orthocenter. The line is only determined for non-equilateral triangles; the points coincide in the equilateral case. We’ll look at the three points above.

The circumcenter, centroid, and orthocenter are all “centers” of a triangle. But what is the center of a triangle? Surely, it’s not a point equidistant from all points on the triangle; our “triangle” would be a circle in that case.

The circumcenter of a triangle ABC is the center O of the circle K that triangle ABC is inscribed in.


Circumcenter O of triangle ABC. Image drawn by me.

The circumcenter is actually the intersection of the three perpendicular bisectors of the triangle: FE, IG, and DH. To see this, first suppose that triangle ABC has a circumscribed circle K with center O. Draw radii AO, BO, and CO to each of the triangle’s vertices. This creates three smaller triangles AOB, BOC, and AOC. In each of these smaller triangles, drop an altitude from O. For example, in triangle AOB, altitude OD would be dropped. This splits AOB into two smaller triangles that are congruent: they share OD, have right angles at D, and have equal hypotenuses AO = BO, both radii of K. Line OD is perpendicular to AB by construction, and AD = DB. Hence OD is indeed a perpendicular bisector of side AB. Repeating this for the other sides shows that the center of the circumscribed circle is the intersection of ABC’s perpendicular bisectors.

Moreover, the intersection of any two perpendicular bisectors is equidistant from each of the triangle’s vertices. The reader can see this by considering triangle AOC. Perpendicular bisector IG splits AOC into triangles that are congruent by SAS. It follows that lengths AO and OC are equal. Repeat for the other sides. We then see that the intersection of the perpendicular bisectors is equidistant from the triangle’s vertices. Thus the perpendicular bisectors of a triangle uniquely determine its circumcenter.
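The equidistance claim is easy to sanity-check numerically. A quick sketch (my own function, using the standard coordinate formula for the intersection of the perpendicular bisectors, which the post doesn’t derive):

```python
import math

def circumcenter(A, B, C):
    """Circumcenter from the two conditions |OA| = |OB| and |OA| = |OC|
    (each condition is the equation of one perpendicular bisector)."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

triangle = [(0, 0), (4, 0), (0, 2)]
O = circumcenter(*triangle)                  # (2.0, 1.0) for this triangle
radii = [math.dist(O, P) for P in triangle]
# All three radii are equal, as the perpendicular-bisector argument predicts.
```

For this right triangle the circumcenter lands on the midpoint of the hypotenuse, another classical fact.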

The centroid is the intersection of a triangle’s three medians, lines drawn from a vertex that bisect the opposite side. As said in class, the centroid is the center of mass for a thin, triangular solid with uniformly distributed mass.


Centroid O of triangle ABC. Drawn by me.

The reader may wonder whether the three medians of a triangle actually meet at a single point. Clearly two of the medians intersect; otherwise our triangle ABC would be a line. But the full proof is a little tedious. The proof involves assuming that two medians AF and CE intersect and drawing a parallelogram using the midpoints of the medians. We link to some proofs: http://jwilson.coe.uga.edu/EMAT6680Fa06/Chitsonga/MEDIAN/THE%20MEDIANS%20OF%20A%20TRIANGLE.htm uses classical geometry and http://math.stackexchange.com/questions/432143/prove-analytically-the-medians-of-a-triangle-intersect-in-a-trisection-point-of uses vectors.
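In coordinates, the centroid has a famously simple form: it is the average of the three vertices, and it sits two-thirds of the way along each median. A small sketch (helper names are my own):

```python
def centroid(A, B, C):
    # The intersection of the medians is the average of the vertices.
    return ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

def midpoint(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

A, B, C = (0, 0), (6, 0), (0, 9)
G = centroid(A, B, C)                        # (2.0, 3.0)
M = midpoint(B, C)                           # foot of the median from A
# G sits two-thirds of the way from A to M -- the familiar 2:1 split:
trisection = (A[0] + 2 * (M[0] - A[0]) / 3, A[1] + 2 * (M[1] - A[1]) / 3)
# trisection equals G
```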


Four congruent triangles using midpoints. Image drawn by me.

Interestingly, the midpoints of the sides of triangle ABC—the ends of the medians—cut the triangle into four congruent triangles. We will prove this in a roundabout way. Let E be the midpoint of AB. Draw a line EF parallel to AC where F intersects BC. Similarly draw FD parallel to AB. By construction, EFDA and EFCD are parallelograms. Then AD = EF = DC, so D is the midpoint of AC. Similarly, F is the midpoint of BC. The reader can see that the triangles are congruent by repeatedly applying SAS.

Our final center is the orthocenter, the intersection of the three altitudes of a triangle. An altitude is a segment drawn from a vertex that is perpendicular to the opposite side. As with the two previous centers, the intersection of the altitudes at a single point isn’t immediately obvious.


Orthocenter O of triangle ABC. Drawn by me.

We show that the altitudes of triangle ABC intersect. Construct triangle DEF with triangle ABC inscribed in it by making sides DF, FE, and DE parallel respectively to BC, AB, and AC. Draw altitude BK, where K is its foot on AC. Since AC is parallel to DE, BK is perpendicular to DE. Moreover, ADBC and BACE are parallelograms, so DB = AC = BE. Hence BK is a perpendicular bisector of DE. We repeat the argument for the other altitudes of triangle ABC. Then the altitudes of ABC intersect because the perpendicular bisectors of DEF intersect.

There are a few other centers of a triangle that are either irrelevant to the Euler line or take too long to construct (i.e. I’m tired of drawing diagrams). The incenter is the center of the circle inscribed within a triangle. The incenter also turns out to be the intersection of the triangle’s angle bisectors. The Euler line doesn’t pass through the incenter.

The nine-point circle is the circle that passes through the feet of the altitudes (the end of each altitude that isn’t at a vertex) of a triangle.

Nine-Point Circle

Nine-point circle of ABC. Image: Maksim, via Wikimedia Commons.

Strangely, the circle also passes through the midpoints of the sides of its triangle. But that’s not all. The circle passes through the Euler points, the midpoints of the segments joining the triangle’s vertices to the triangle’s orthocenter. Thus the nine-point circle does indeed pass through nine special points of a triangle. The center of the nine-point circle lies on the Euler line.

After all this, we still haven’t proved that the circumcenter, centroid, and orthocenter lie on the same line. We won’t prove this here. Here’s a video of the proof by Salman Khan: https://www.youtube.com/watch?v=t_EgAi574sM. The proof uses a few facts about the centers we haven’t discussed, but these facts aren’t too hard to show. Refer back to my four congruent triangles picture. Let O, K, and L respectively be the circumcenter, centroid, and orthocenter of triangle ABC. Then Khan proves that triangle DOK is similar to triangle BLK. This implies angles DKO and BKL are equal, which means O, K, and L lie on the same line.
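Short of a proof, we can at least verify the collinearity numerically for a sample triangle. A self-contained sketch (my own functions; the circumcenter uses the standard coordinate formula, and the orthocenter is found by intersecting two altitudes):

```python
def circumcenter(A, B, C):
    # Standard coordinate formula: intersection of perpendicular bisectors.
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

def orthocenter(A, B, C):
    # Intersect the altitudes from A and B, each written as
    # (direction of the opposite side) . (X - vertex) = 0, a 2x2 system.
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d1x, d1y = cx - bx, cy - by            # direction of side BC
    d2x, d2y = cx - ax, cy - ay            # direction of side AC
    det = d1x * d2y - d1y * d2x
    r1 = d1x * ax + d1y * ay
    r2 = d2x * bx + d2y * by
    return ((r1 * d2y - r2 * d1y) / det, (d1x * r2 - d2x * r1) / det)

A, B, C = (0, 0), (5, 0), (1, 4)
O = circumcenter(A, B, C)
H = orthocenter(A, B, C)
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)   # centroid
# Collinear iff the cross product of G - O and H - O vanishes (to rounding):
cross = (G[0] - O[0]) * (H[1] - O[1]) - (G[1] - O[1]) * (H[0] - O[0])
```

The nine-point center, being the midpoint of O and H, then lies on the same line automatically.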

Sources and cool stuff:

H.S.M. Coxeter and Samuel L. Greitzer’s Geometry Revisited

Paul Zeitz’s The Art and Craft of Problem Solving (Chapter 8 is called “Geometry for Americans”)

Wolfram on the nine-point circle: http://mathworld.wolfram.com/Nine-PointCircle.html

A fun way to play with the Euler line: http://www.mathopenref.com/eulerline.html

Khan’s Euler line video: https://www.youtube.com/watch?v=t_EgAi574sM

Wolfram on the Euler line: http://mathworld.wolfram.com/EulerLine.html

Classical median proof: http://jwilson.coe.uga.edu/EMAT6680Fa06/Chitsonga/MEDIAN/THE%20MEDIANS%20OF%20A%20TRIANGLE.htm

Vector median proof: http://math.stackexchange.com/questions/432143/prove-analytically-the-medians-of-a-triangle-intersect-in-a-trisection-point-of

Illiterate Math or ∫ ? what? ≠ :)

In class the other day, a student wondered, “Why don’t they just use symbols? Didn’t Euclid or Eudoxus notice how much easier it would be to use symbols and pictures? All of this language is so difficult.”


I believe no one would deny that mathematics is its own language. In fact, I think most people would categorize modern mathematics as a very foreign language. Symbols, operations, variables, etc. are mashed together in such a way that only a mathematician can understand what is being expressed or asked. This is absolutely necessary on some level. Mathematicians, and other users of mathematics, need an efficient and effective way to represent operations and the language of computation. On the other hand, it is quite sad to me that a “universal” language is so foreign. Indeed, if mathematics is a universal language, why is it understood by so few, and even despised by so many?

Mathematics didn’t start out as a foreign language. As described in Boyer and Merzbach, the earliest algebra did not use symbols. It was referred to as rhetorical algebra, and an equation like x+2-z=3 would be written as something like: a thing plus two and minus another thing is equal to three. Sure, this description seems a little weird and clunky, but for most people, I argue, it is easier to figure out and less “scary” or off-putting than the symbolic form.

Book I, Proposition 41 of Euclid’s Elements describes a result in language that all people could attempt to understand.

Thus, if a parallelogram has the same base as a triangle, and is between the same parallels, then the parallelogram is double (the area) of the triangle.

The above sentence may be difficult to understand, but it is approachable: all of the words can be defined or looked up. Representing mathematics with words allows everyone to be mathematically literate. I believe the loss of language in mathematics is a hindrance to its accessibility and popularity.

Why was Euclid’s Elements so popular? Everyone seems to have read it, from presidents to peasants, for at least 1000 years. Part of the reason for this may be that it was actually readable: anyone who could read could read Euclid’s Elements. It is hard to imagine my calculus text being written in such a way, but it is important to ask: is there value in writing it that way? Would it be a worthwhile undertaking to attempt to translate mathematics into words? Would that make it more accessible, more enjoyable, and more relevant to everyone? Would non-mathematicians be able to pick up a book and attempt to understand it? I know very few people who could pick up this and understand it:

(1/2)⁰ + (1/2)¹ + (1/2)² + … + (1/2)⁷, written compactly as the sum of (1/2)^k for k = 0 to 7
However, most people might understand, or could at least attempt to understand: write out one-half eight times. Between each one-half write a plus symbol. Now, starting at zero and continuing until seven, raise each iteration of one-half to a power: the first one-half is raised to zero and the last one-half is raised to seven. Compute each iteration, and then add it all together. This is called summing a sequence of numbers from zero to seven. If I had read this description when I first saw the notation, I would not have been so afraid of it. I also might have tried to do it. Instead, seeing this foreign language scared me off. It just seemed too complicated to understand and figure out.
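In fact, the verbal recipe above is itself nearly a program. A one-line Python version, offered purely as illustration:

```python
# Spell the verbal description out in code: add (1/2)**k for k = 0 through 7.
total = sum((1 / 2) ** k for k in range(8))
# total == 1.9921875, i.e. 2 - (1/2)**7 -- just shy of 2.
```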

Most of my favorite math books do just that (Zero: The Biography of a Dangerous Idea; 100 Essential Things you didn’t know you didn’t know). They are books about math that contain numerous verbal descriptions of operations, ideas, and calculations. Sure, they also contain real formulas, but that is a supplement to the written description of the idea. Finding ways to use language to write about and describe math is important. Math should be accessible to all even if it is not computable by all. Wouldn’t approaching math in this way allow people to be at least as literate in mathematics as they are in English?

The Thirteen Books of Euclid’s Elements, translated from the text of Heiberg with introduction and commentary by Sir Thomas L. Heath. Oxford, Volume III, Books X–XIII and Appendix: https://archive.org/details/JL_Heiberg___EUCLIDS_ELEMENTS_OF_GEOMETRY

A History of Mathematics, Jan 11, 2011 by Carl B. Boyer and Uta C. Merzbach

equation taken from http://www.math.utah.edu/grad/exam/DiffEquatF2014.pdf

Turing, Leibniz and Hilbert’s Entscheidungsproblem

Alan Turing. This image is in the public domain in the US because its copyright has expired.


What’s the first thing you think of when you hear the name Gottfried Leibniz? Let me guess: calculus.  Now what do you think when you hear of Alan Turing?  You might think of codebreaking during World War II, or the new movie coming out about him (The Imitation Game), or maybe you haven’t heard of him.  So why would I mention these two together? Computers of course! Wait, what do these two have to do with computers? Well let’s take a look and see.

The Entscheidungsproblem’s origins start with Gottfried Leibniz in the seventeenth century.  Leibniz had successfully created a mechanical calculating machine, one of the first of its kind.  This machine led him to wonder whether a machine could be built that could determine the truth values of mathematical statements.  In his research, he found that one would first have to devise a formal language for such a machine to work with.  In 1900, David Hilbert, a German mathematician, included the following in his list of 23 then-unsolved problems designed to further the discipline of mathematics:

“10. Determination of the solvability of a Diophantine equation.  Given a diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: to devise a process according to which it can be determined by a finite number of operations whether the equation is solvable in rational integers. (Mary Winston Newson’s translation of Hilbert’s original problem, as quoted in D. Joyce)”

By 1928, Hilbert had broadened his question about Diophantine equations to a much more general one: is there an algorithm that can decide, for any mathematical statement, whether that statement is provable? This question became known as the Entscheidungsproblem, the “decision problem.” In May 1936, Alan Turing published a paper called “On Computable Numbers, with an Application to the Entscheidungsproblem.”  In this paper, Turing built on Kurt Gödel’s results on the limits of proof and computation.  He described a hypothetical device now known as the Turing machine and went on to prove there is no solution to the Entscheidungsproblem.  He did this by using his Turing machine to show that the halting problem is undecidable: it is impossible, in general, to determine whether a program will finish running or continue forever.

The Turing machine itself represents a computing machine.  It changes symbols on a strip of tape according to a table of rules.  A Turing machine has three main components.  The first is an infinite tape, divided into cells, each of which can hold one symbol.  The second is a head, which accesses one cell at a time and can move either left or right along the tape.  The third is a state register, which stores the machine’s current state, drawn from a fixed, finite set of states.  With these three components, the machine can perform three actions at each step: 1) write a symbol, 2) move either left or right, and 3) update its current state.  Formally, a Turing machine is defined as a 7-tuple.  The seven elements of the tuple are as follows: a set of states, an input alphabet, the tape alphabet, the start state, a unique accept state, a unique reject state, and a transition function.
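To make the pieces concrete, here is a minimal (and entirely toy) Turing machine simulator in Python. The machine, its states, and its symbols are my own illustrative choices, not anything from Turing’s paper; this one simply flips every bit on the tape and halts:

```python
# A minimal Turing machine sketch: a transition table maps
# (state, symbol) -> (new_state, new_symbol, move).
def run_turing_machine(tape, transitions, start, accept, blank="_"):
    cells = dict(enumerate(tape))  # a sparse dict simulates the infinite tape
    head, state = 0, start
    while state != accept:
        symbol = cells.get(head, blank)
        state, new_symbol, move = transitions[(state, symbol)]
        cells[head] = new_symbol               # 1) write a symbol
        head += 1 if move == "R" else -1       # 2) move left or right
        # 3) the current state was updated above
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy machine that flips every bit, then accepts when it hits a blank.
flip = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("done", "_", "R"),
}
print(run_turing_machine("0110", flip, "scan", "done"))  # prints 1001
```

Even this tiny example shows the essential idea: an unbounded tape, a head, a finite set of states, and a transition function are enough to carry out any computation a digital computer can.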

A Turing Machine, without infinite tape. Image: Rocky Acosta, via Wikimedia Commons.


Turing’s work on the Entscheidungsproblem and the Turing machine can be thought of as the birth of computer science and digital computers.  During World War II, the ideas behind the Turing machine were simplified and realized in actual electronic computers.  They also led to related abstract machines such as the counter machine, register machine, and random access machine.  All of these machines launched us even further into the computer era.

It is interesting to see that the birth of the modern computer came from the Entscheidungsproblem, an idea that Leibniz had first thought of.  Why do I think this is interesting?  Leibniz had also worked on binary numbers and arithmetic, which is similar to what is used in modern computing today. It seems that Leibniz was ahead of his time, and that Alan Turing took his ideas and brought them into our era.  Without Turing we wouldn’t have modern computers the way they are, and we wouldn’t be able to do any math that requires a computer to help with computations.  Think: how many times have you used your computer to access the Internet to get the answer to a math problem you were unable to solve? Not only that, but studying math wouldn’t be as easy.  Knowledge passed through the Internet wouldn’t be possible without computers: no YouTube to show how to solve math problems, no Khan Academy or Wolfram Alpha, and no easy access to the knowledge in past essays and papers.

Sadly, Turing’s end wasn’t a happy one.  Prosecuted in 1952 for homosexuality, which was then a crime in England, he died in 1954 at the age of 41 in what was ruled a suicide.  Leibniz lived almost twice as long as Turing.  It makes you wonder what further computing machines or ways of thinking about computational mathematics we might have had if Turing had lived a full life.


History of Turing’s life – http://www.math.rutgers.edu/courses/436/Honors02/turing.html

Hilbert’s Problems – http://aleph0.clarku.edu/~djoyce/hilbert/problems.html http://mathworld.wolfram.com/HilbertsProblems.html

Turing’s paper “On Computable Numbers, with an Application to the Entscheidungsproblem” – http://plms.oxfordjournals.org/content/s2-42/1/230.full.pdf+html

Gottfried Wilhelm Leibniz on wikipedia – http://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz

Wikipedia article on the Turing Machine – http://en.wikipedia.org/wiki/Turing_machine

Math Tricks and Fermat’s Little Theorem

So you think you’re a math whiz. You storm into parties armed with math’s most flamboyant tricks. You can recite the digits of π and e to 50 digits—whether in base 10 or 12. You can calculate squares with ease, since you’ve mastered the difference of squares x² – y² = (x + y)(x – y). In tackling 57², simply notice that 57² – 7² = (57 + 7)(57 – 7) = 64*50 = 3200. Adding 7² = 49 to both sides gives 57² = 3249.
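The squaring trick works for any offset d, not just 7. Here is a quick Python sketch of the identity at work (the function name is mine, chosen for illustration):

```python
# Difference-of-squares squaring trick: to square n, pick an offset d that
# makes n + d or n - d a round number, use n^2 - d^2 = (n + d)(n - d),
# then add d^2 back.
def square_trick(n, d):
    return (n + d) * (n - d) + d * d

print(square_trick(57, 7))  # 64 * 50 + 49 = 3249
```

Picking d so that one factor lands on a multiple of ten is what makes the mental arithmetic easy.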

Image by Hashir Milhan, from Wikimedia Commons, under Creative Commons.

You can also approximate square roots using the truncated Taylor series √x ≈ c + (x – c²)/(2c), where c² is the closest perfect square less than or equal to x. So √17 ≈ 4 + (17 – 16)/(2*4) = 4.125, whereas √17 = 4.123105 . . ..
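The square-root approximation is also easy to sketch in Python (again, the function name is my own):

```python
import math

# First-order Taylor approximation of sqrt(x) around the nearest perfect
# square c^2 at or below x: sqrt(x) ~ c + (x - c^2) / (2c).
def approx_sqrt(x):
    c = math.isqrt(x)  # largest integer c with c^2 <= x
    return c + (x - c * c) / (2 * c)

print(approx_sqrt(17))  # 4.125, versus math.sqrt(17) = 4.1231...
```

The approximation is best when x sits close to the perfect square, since the neglected higher-order terms grow with the distance x – c².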

But do you know what number theory is? It’s not taught in high school, and everyone’s repertoire of math tricks needs some number theory. Mastering modular arithmetic—the first step in number theory—will make you the life of the party. Calculating 83 mod 7 just means finding the remainder after dividing by 7: 83 = 11*7 + 6, so 83 ≡ 6 mod 7. But it’s actually easier to note that 83 = 7*12 + (−1), so 83 ≡ −1 ≡ 6 mod 7. Modular arithmetic reveals the secrets of divisibility. Everybody knows the trick for checking whether 3 divides a number: you just add the digits and check whether 3 divides that sum. The reasoning becomes obvious when you write m = 10ⁿaₙ + 10ⁿ⁻¹aₙ₋₁ + . . . + 10a₁ + a₀, where the aᵢ are the digits of m. Each 10ᵏ has a remainder of 1 modulo 3, so m ≡ aₙ + aₙ₋₁ + . . . + a₁ + a₀ mod 3. Using this method generates tricks for other integers.

For instance, if 13 divides m, then 13 divides a₀ – 3a₁ – 4a₂ – a₃ + 3a₄ + 4a₅ + a₆ – 3a₇ – 4a₈ + . . . and the pattern continues. This is because

10 ≡ −3 mod 13,

10² ≡ 10*10 ≡ (−3)(−3) ≡ 9 ≡ −4 mod 13,

10³ ≡ 10²*10 ≡ (−4)(−3) ≡ 12 ≡ −1 mod 13,

10⁴ ≡ 10³*10 ≡ (−1)(−3) ≡ 3 mod 13,

10⁵ ≡ 10⁴*10 ≡ 3*(−3) ≡ −9 ≡ 4 mod 13,

and 10⁶ ≡ 10⁵*10 ≡ 4*(−3) ≡ −12 ≡ 1 mod 13.

From 10⁶ onwards the pattern repeats. In fact, calculating 10ⁿ mod k for successive n will reveal the divisibility rule for k.
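That suggestion, computing successive powers of 10 modulo k, can be automated. The following Python sketch (function name mine) reports each residue in its “balanced” form, the representative closest to zero, so that 10 mod 13 prints as −3 just as in the list above:

```python
# Generate divisibility-rule coefficients for k by computing 10^n mod k,
# reported as the balanced residue (the representative nearest zero).
def divisibility_coefficients(k, terms=8):
    coeffs, power = [], 1
    for _ in range(terms):
        r = power % k
        coeffs.append(r - k if r > k // 2 else r)  # e.g. 10 mod 13 -> -3
        power = power * 10 % k
    return coeffs

print(divisibility_coefficients(13))  # [1, -3, -4, -1, 3, 4, 1, -3]
print(divisibility_coefficients(3))   # [1, 1, 1, 1, 1, 1, 1, 1]
```

The output for 13 reproduces the coefficient pattern 1, −3, −4, −1, 3, 4, 1, . . . exactly, and the output for 3 recovers the familiar digit-sum rule.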

Then comes Fermat’s little theorem, the key to solving seemingly impossible calculations.

Pierre de Fermat. Image from Wikimedia Commons, under public domain.

The theorem states that for a prime p and integer a, aᵖ ≡ a mod p. If p doesn’t divide a, then aᵖ⁻¹ ≡ 1 mod p. I’ll illustrate the power of this little result in a computation. Let’s find 2³⁷¹ mod 5. We’ll be using 2⁴ ≡ 1 mod 5, which we get from Fermat’s little theorem. Now 2³⁷¹ = 2³⁶⁸*2³ = (2⁴)⁹²*2³, so by the theorem, 2³⁷¹ = (2⁴)⁹²*2³ ≡ 1⁹²*2³ ≡ 8 ≡ 3 mod 5. Exploiting Fermat’s little theorem can impress your friends, but try to avoid questions. Computing residues modulo a composite number—calculating aᵏ mod n for a composite number n—may require paper and ruin the magic.
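The 2³⁷¹ mod 5 party trick can be double-checked with Python’s built-in three-argument pow. The exponent reduction is valid here because 2 and 5 are coprime, so Fermat’s little theorem applies:

```python
# Fermat's little theorem in action: since 2^4 = 1 mod 5, reduce the
# exponent 371 modulo 4, then compare with the direct computation.
p, a, e = 5, 2, 371
reduced = e % (p - 1)        # 371 mod 4 = 3
print(pow(a, reduced, p))    # 2^3 mod 5 = 3
print(pow(a, e, p))          # same answer the long way: 3
```

Of course, pow(a, e, p) already uses fast modular exponentiation internally; the point is that the theorem lets a human shrink the exponent before any computation starts.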

Leonhard Euler proved a more general version of Fermat’s little theorem; it’s called the Euler-Fermat theorem. This theorem isn’t for parties; explaining it to the non-mathematically inclined will always require paper and some time. Nonetheless, it will impress at dinner if you have a napkin and pen.

Understanding this theorem requires Euler’s totient function φ(n).

Leonhard Euler. Image from Wikimedia Commons, under public domain.

The number φ(n) for some n is the number of positive integers coprime with n that are less than or equal to n. Two numbers a and b are coprime if their greatest common factor is one. Hence 14 and 3 are coprime because their biggest shared factor is 1, but 21 and 14 aren’t coprime because they have a common divisor of 7. Moreover, φ(14) = 6 because 14 has six numbers less than or equal to it that are coprime with it: 1, 3, 5, 9, 11, and 13. Notice that if p is prime, φ(p) = p – 1 because every number less than p is coprime with p.

Now the Euler-Fermat theorem states that a^φ(n) ≡ 1 mod n whenever a and n are coprime, which looks similar to aᵖ⁻¹ ≡ 1 mod p for a prime p. In fact, if n = p is prime, then φ(p) = p – 1 and the theorem reduces to Fermat’s little theorem.
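Both the totient and the Euler-Fermat theorem are easy to check by brute force. This Python sketch implements φ straight from the definition in the text (the function name is mine):

```python
from math import gcd

# Euler's totient by definition: count 1 <= k <= n with gcd(k, n) == 1.
def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(14))              # 6, matching the list 1, 3, 5, 9, 11, 13
# Euler-Fermat: a^phi(n) = 1 mod n whenever gcd(a, n) == 1.
print(pow(3, phi(14), 14))  # 1
```

Note that the brute-force count is fine for party-sized n; for large n one would factor n instead, since φ is multiplicative.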

Fermat’s little theorem has another generalization, Lagrange’s theorem. Joseph-Louis Lagrange corresponded extensively with Euler and later succeeded him in Berlin. Lagrange’s theorem generalizes both of the previous theorems and doesn’t even require numbers. But due to the required background in group theory, I won’t go over the theorem. You can find links to more information on Lagrange’s theorem below.

Remember, a math whiz doesn’t need props like a magician does. Hook your audience with some modular arithmetic, and reel the people in with Fermat’s little theorem. If you want to get complicated, the most you’ll need is a pen and some paper.

Sources and cool stuff:

Math tricks: http://www.businessinsider.com/x-math-party-tricks-that-will-make-you-a-rockstar-2013-6?op=1

Modular arithmetic: http://nrich.maths.org/4350

Proof of Fermat’s little theorem: https://primes.utm.edu/notes/proofs/FermatsLittleTheorem.html

Euler-Fermat theorem and its proof: http://www.artofproblemsolving.com/Wiki/index.php/Euler%27s_Totient_Theorem

Lagrange’s theorem (only for the brave): http://cims.nyu.edu/~kiryl/teaching/aa/les102903.pdf

Fermat’s bio: http://www-history.mcs.st-and.ac.uk/Biographies/Fermat.html

Euler info: https://3010tangents.wordpress.com/2014/10/05/leonhard-euler-eulers-identity-fermats-last-theorem-and-the-basel-problem/

Lagrange’s bio: http://www-history.mcs.st-and.ac.uk/Biographies/Lagrange.html

Number theory textbooks: Gordan Savin’s Numbers, Groups, and Cryptography and George E. Andrews’s Number Theory

Interesting sources of math tricks and problems: Paul Zeitz’s The Art and Craft of Problem Solving, The USSR Olympiad Problem Book, and What Is Mathematics? by Richard Courant and Herbert Robbins

Early Chinese Mathematics

Math is something that is found all throughout history.  It was used for many different reasons, in many different cultures.  What I find interesting is how these different cultures learned some of the same ideas without even having knowledge of the others’ work. These works could be anything from counting systems to Pascal’s triangle.  It can also include how one culture passed its knowledge on to another. This makes you wonder how some ideas that were known in western civilization could also be found in Asia.  As I was looking into this I found some very interesting facts about mathematics in China. Some small examples of math found in China begin with something called oracle bone script: characters carved into animal bones or turtle shells. These carvings contain some of the oldest records in China.  Like the clay tablets of Babylonian times, they had many different uses, including math.  Chinese culture also had something called the six arts: rites, music, archery, charioteering, calligraphy, and mathematics.  Men who excelled in these arts were known as perfect gentlemen.

In China, like in India, one finds the use of a base ten numeral system.  This is quite different from the Babylonians’ base 60, which makes it seem like there must have been some conduit of knowledge between India and China.  In China, around 200 BCE, they used something called “rod numerals.”  Rod numeral counting is very similar to what we use today.  This counting system consisted of digits from one to nine, as well as nine more digits to represent the first nine multiples of ten.  The numbers one through nine were represented by rods placed vertically, while the multiples of ten were horizontal.  This means that every other digit was horizontal while its neighbor was vertical.  For example, 215 would be represented like this: ||—|||||.  To represent zero, one used an empty space, something that can also be seen in the Babylonian counting system.  As with the Babylonians, a symbol was eventually adopted for zero.  Interestingly enough, before there was a symbol for zero, counting rods already included negative numbers. A number was positive or negative depending on its color: black or red.  The idea of negative numbers didn’t appear in another culture until around 620 CE in India.  It seems quite apparent that several ideas that originated in China could have been passed on to neighboring countries.
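The alternating vertical/horizontal convention is simple enough to sketch in code. The converter below is a toy rendering of my own (it prints spaces between places for readability, and it ignores the special compact forms that genuine rod numerals used for the digits six through nine):

```python
# Toy rod-numeral renderer: units, hundreds, ... are vertical strokes "|";
# tens, thousands, ... are horizontal strokes; zero is an empty space.
def to_rods(n):
    places = []
    for i, digit in enumerate(reversed(str(n))):
        d = int(digit)
        if d == 0:
            places.append(" ")          # empty space stands in for zero
        elif i % 2 == 0:
            places.append("|" * d)      # vertical rods for units, hundreds...
        else:
            places.append("—" * d)      # horizontal rods for tens, thousands...
    return " ".join(reversed(places))

print(to_rods(215))  # || — |||||
```

The output for 215 matches the example in the text: two vertical rods for the hundreds, one horizontal rod for the ten, five vertical rods for the units.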

Rod numerals. Image: Gisling, via Wikimedia Commons.


The use of counting rods as a counting system brought about another very interesting mathematical concept: the decimal system.  China first used decimal fractions in the 1st century BCE.  Fractions were used like they are today, with one number on top of another.  For example, today the fraction one half is written like this: 1/2.  Using rod numerals you can do the same thing: | / ||.  A number could also be written as a decimal.  To do this, one would simply write the number out and insert a special character to show where the whole number ended.  For example, to write 3.1213, you would first write it out as a whole number: |||—||—|||.  To show where the integer part ends, you would mark the digit to the left of the decimal point, in this case the first 3, with a symbol underneath it.  To me the use of rod numerals is so similar to how we use our numbers today that even the arithmetic can be done easily by someone in our culture.  Addition was done almost the same way, except working from left to right.  Multiplication and division were used as well.   The use of base ten along with rod numerals made complicated calculations much easier to carry out, enabling work with polynomials and even Pascal’s triangle.


The triangle known as “Pascal’s” in the west, in a Chinese manuscript from 1303 CE. Image: Public domain, via Wikimedia Commons.

Centuries before Pascal, the Chinese knew about Pascal’s triangle.  Shen Kuo, a polymathic Chinese scientist of the 11th century CE, was known to have used it.  It appears that knowledge of the triangle begins even earlier: the first trace of Pascal’s triangle was in ancient India, around 200 BCE.  We can see this idea sprouting up in different cultures, from Persia to China to Europe.  This again makes one wonder how this knowledge was passed around from one culture to another.  Lacking historic details, it is hard to tell whether Pascal’s triangle was thought up independently in each place or whether the concept was somehow passed from one culture to another.
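The construction that all of these cultures discovered is the same: each entry is the sum of the two entries above it. A short Python sketch (function name mine) builds the rows exactly this way:

```python
# Build the first n rows of the triangle: each new row is 1, then the sums
# of adjacent entries of the previous row, then 1.
def pascal_rows(n):
    row, rows = [1], []
    for _ in range(n):
        rows.append(row)
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return rows

for r in pascal_rows(5):
    print(r)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
```

These are the same rows that appear in the 1303 Chinese manuscript pictured above, three and a half centuries before Pascal.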

It seems that in all cultures there is a need for counting, which in turn brings about the need for math.  The cultural implications could mean being considered a “perfect gentleman” for one’s mathematical knowledge, or they could lead to a greater body of knowledge passed on to other cultures.  In China, we see that many ideas about numbers and mathematics were developed independently, without other cultures’ ideas intervening.  We can also see that the knowledge that was passed on was able to thrive and turn into something even more intriguing.  It is apparent that we can always learn, and teach others, to help our knowledge grow.


Boyer, Carl B., and Uta C. Merzbach. A History of Mathematics. 3rd ed. Hoboken, NJ: Jon Wiley and Sons, 2010. Print.





Musings: The Poincaré Conjecture

Mathematics is no stranger to unsolved problems. Time and time again, equations, conjectures, and theorems have stumped mathematicians for generations. Perhaps the most famous of these problems was Fermat’s Last Theorem, which stated there is no solution for the equation xᵖ + yᵖ = zᵖ, where x, y, and z are positive integers and p is an integer greater than 2. Pierre de Fermat proposed this theorem in 1637, and for over three hundred fifty years, it baffled mathematicians around the globe. It was not until 1994 that Andrew Wiles finally solved the centuries-old theorem.

Though the most famous, Fermat’s Last Theorem was by no means the only unsolved problem in mathematics. Many problems remain unsolved to this day, driving many institutions throughout the world to offer up prizes for the first person to present a working solution for any of the problems. Some few are general questions, such as “Are there infinitely many real quadratic number fields with unique factorization?” However, most of the problems are specific equations proposed by a single or multiple mathematicians and are generally named after their proposer(s), such as the Jacobian Conjecture or Hilbert’s Sixteenth Problem.

One such problem, proposed by Henri Poincaré in 1904 and thus named the Poincaré Conjecture, remained unsolved until 2002.  To encourage work on the conjecture, the Clay Mathematics Institute made it one of its seven Millennium Prize Problems, a list of some of the most difficult open problems in mathematics. A proof of any of the problems, including the Poincaré Conjecture, comes with a reward of one million US dollars. To this day, the Poincaré Conjecture remains the only problem solved.

The Poincaré Conjecture is a problem in geometry, but it concerns a concept that, for many, is difficult to comprehend and all but impossible to visualize. The best way to approach it is to imagine a sphere, perfectly smooth and perfectly proportioned. Now, imagine an infinitesimally thin, perfectly flat sheet of cardboard cutting into the sphere. If you were to take a pen and draw on the cardboard where the sphere and the cardboard intersect, you would produce a circle. If you were to move the sheet of cardboard up through the sphere, the circle where it and the sphere intersect would gradually shrink. Eventually, just as the cardboard reaches the top of the sphere, the circle will have shrunk to a single point.

Plane-sphere intersection. Image: Zephyris and Pbroks13, via Wikimedia Commons.

Note that in the field of topology, this visualization applies to any shape that is homeomorphic to an ordinary sphere in three-dimensional space (referred to as a 2-sphere in topology, since its surface locally looks like a two-dimensional plane, much as the Earth appears flat while standing on its surface). Homeomorphism is a concept in topology concerning, essentially, the continuous distortion of a shape. For instance, one of the simplest examples in three dimensions is that a cube is homeomorphic to a sphere: if you were to compress and mold the cube (much as you would your childhood Play-Doh), you could eventually shape it into a sphere. However, in topology you are not allowed to create or close holes in a shape. This is why shapes such as a donut or a cinder block are not homeomorphic to a sphere, due to the holes that go through them.

Poincaré proposed a concept concerning homeomorphism and the previously described visualization, and it is here that imagining the problem is no longer possible. We live in a three-dimensional world, where any position in space can be plotted relative to three mutually perpendicular axes. To imagine a fourth spatial dimension perpendicular to those three is mentally impossible, as is any shape of higher dimension, and yet many problems in geometry and physics concern a fourth and even higher dimensions. The Poincaré Conjecture relates to these higher-dimensional shapes, specifically closed 3-manifolds (shapes whose surface is locally three-dimensional). It states that if every loop drawn on a closed 3-manifold can be contracted to a single point, much like the intersection of the cardboard plane and the sphere in the aforementioned example, then the closed 3-manifold is homeomorphic to a 3-sphere, the set of points equidistant from a central point in four dimensions (Morgan).

If the concept of the Poincaré Conjecture is difficult to conceive, its solution by Russian mathematician Grigori Perelman in 2002 is almost incomprehensible. Due to the number of variables involved, one could not simply set up a system of equations between a three-dimensional space and a 3-sphere. Instead, Perelman used a differential geometry concept called Ricci flow, developed by American mathematician Richard Hamilton. In short, Ricci flow deforms the geometry of a manifold over time, smoothing out irregularities in its curvature, and it proved to be the precise tool needed to prove the Poincaré Conjecture. (THIS video does a good job of explaining it in layman’s terms) (Numberphile)

An example of Ricci flow. Image: CBM, via Wikimedia Commons.

Interestingly, despite the immense difficulty of solving such an abstract problem as the Poincaré Conjecture, Perelman refused the prize awarded to him for his accomplishment. His solution to the problem was an exercise in his own enjoyment, and as he later stated upon being offered the Fields Medal (the mathematician equivalent of the Nobel) and the immense monetary prize,  “I’m not interested in money or fame; I don’t want to be on display like an animal in a zoo.” Later, he also argued that his contribution to the solution of the Poincaré Conjecture was “no greater than that of… Richard Hamilton,” and that he felt the organized mathematical community was “unjust.” (BBC News, Ritter)

To this day, the Poincaré Conjecture remains the only Millennium Problem solved. Its proof wound up leading to the solution of various other related geometrical problems and closed a century-old mystery. As the field of mathematics continues to grow and progress, it is only a matter of time until other unsolved problems come to resolution.

Works Cited

Morgan, John W. “RECENT PROGRESS ON THE POINCARÉ CONJECTURE AND THE CLASSIFICATION OF 3-MANIFOLDS.” The American Mathematical Society 42.1 (2004): 57-78. The American Mathematical Society. The American Mathematical Society, 29 Oct. 2004. Web. 9 Oct. 2014. http://www.ams.org/journals/bull/2005-42-01/S0273-0979-04-01045-6/S0273-0979-04-01045-6.pdf

Jaffe, Arthur M. “The Millennium Prize Problems.” The Clay Mathematics Institute. The Clay Mathematics Institute, 4 May 2000. Web. 09 Oct. 2014.

Numberphile. “Ricci Flow – Numberphile.” YouTube. YouTube, 23 Apr. 2014. Web. 09 Oct. 2014.

“Russian Maths Genius Perelman Urged to Take $1m Prize.” BBC News. BBC, 24 Mar. 2010. Web. 09 Oct. 2014.

Ritter, Malcom. “Russian Mathematician Rejects $1 Million Prize.” Russian Mathematician Rejects $1 Million Prize. The Associated Press, 1 July 2010. Web. 09 Oct. 2014.

Mathematical Mindset

Image: Tom Blackwell, via flickr.


Upon discussing ancient Indian math, the question was brought up of whether or not it would be beneficial to learn Vedic math, which involves various techniques for rapid mental calculation. The basic argument is that it provides quick arithmetic tricks and may be able to get more people interested in mathematics. However, with these techniques only the computation gets mastered, and the idea behind the computation is lost. I was reminded of this again while studying Diophantus and his book, Arithmetica, because the text is largely computational and gives many worked-out examples while forgoing formal proofs. Mathematics is such a broad and universal subject that everyone has a unique experience and mindset when it comes to math. Some people gravitate toward formal proof and generality, while others gravitate toward computation and specific solutions. The fact that Arithmetica was more a collection of problems in algebra’s applications than an algebra textbook shows how ancient this dichotomy of mathematical mindset is. It is especially evident among different professions: an accountant and an electrical engineer, or a physicist and a mathematician, may see the same problem in a completely different light.

From a physicist’s viewpoint, mathematics is a model. Philip Davis and Reuben Hersh’s book, The Mathematical Experience, devotes a section to a physicist’s viewpoint on mathematical rigor. In the section they interview one anonymous physicist whose scientific attitudes they considered typical. I found this physicist’s thoughts on mathematical proof interesting: “To him, proofs were relatively uninteresting and they were largely unnecessary in his personal work. […] Proof is for cosmetic purposes and also to reduce somewhat the edge of insecurity on which one always lives.” The physicist also describes how questions of generality and truth are not important to him because all scientific work is “of provisional nature.” Physicists and engineers simply view mathematics as a toolkit to move them from one end of a problem to the other. It doesn’t need to be rigorously proven and generalized so long as the answer makes sense in the specific circumstance. Computation is critical when dealing with problems of a physical nature, but how important is it when it comes to teaching and learning difficult concepts?

Perhaps the most important aspect of having different mathematical mindsets is how they affect those that we teach. Communicating a mathematical idea can be just as difficult as learning it in the first place, so it is critical to understand how effective imposing one mindset or the other is. These different frames of mind are brilliantly presented in a paper written by Frederick Reif from the University of California at Berkeley titled, Interpretation of Scientific or Mathematical Concepts: Cognitive Issues and Instructional Implications. Reif discusses various ways to interpret a scientific concept and the advantages and disadvantages each have.

Reif refers to the mindset that seeks generality across all solutions of a problem as declarative specification. Declarative specification relies on stored knowledge that defines a concept explicitly by its characterizing features, as in the proof of a theorem. Reif states that while declarative specifications can be precise and general, the disadvantage lies in translating the interpretation, because it may be ambiguous and lead to faulty intuition. An example would be interpreting the concept of acceleration as the rate of change of velocity with respect to time, written more compactly as dv/dt. While this example is explicit and seems elementary to an expert, it is important to look at it objectively and consider how others might interpret it. Someone who is just learning calculus or physics may not fully understand earlier concepts such as derivatives or vectors, and this new concept can then be misinterpreted.

Reif refers to the interpretation process based on computation and example as procedural specification. The interpretation involves “implementing [a] procedure in the particular instance of interest.” Procedural specifications provide more detailed and explicit specifications for a problem and often an easier starting point. In the example of interpreting the concept of acceleration, one would implement specific steps to break down the general idea into smaller problems. They would measure velocity at different points in time, take the difference in velocity, take the difference in time, and then take the ratio of the differences. Some of the disadvantages of this type of interpretation are that it is often more lengthy and certain aspects of the general case may be brushed over.
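The procedural specification of acceleration described above translates almost word for word into code. This Python sketch (names and numbers are my own illustration) measures velocity at two times, takes the differences, and then takes their ratio:

```python
# Procedural specification of acceleration: the ratio of the difference in
# velocity to the difference in time between two measurements.
def average_acceleration(v1, v2, t1, t2):
    return (v2 - v1) / (t2 - t1)

# A cart speeds up from 2 m/s to 8 m/s over 3 seconds:
print(average_acceleration(2.0, 8.0, 0.0, 3.0))  # 2.0 m/s^2
```

Notice how the code mirrors Reif’s point: each step is concrete and easy to start, but the general, instantaneous concept dv/dt only emerges as the measurement interval shrinks.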

My own experience involves both of these mindsets. During my first semester at the University I had a teaching assistant in my calculus class that insisted that if I was able to understand the general case I would be able to solve any problem. This turned out to be true for many of the topics in my math classes. However, this was not the case for my physics courses, where if I had not had significant practice working through specific examples I would not have understood the concept at hand. Experts often teach solely towards the general case because that is how they were taught to view a problem. I think it would be beneficial to incorporate both ways to view a problem to appeal towards all mindsets.  Elaboration toward specific examples with specific solutions may cause others who get lost by the generality and abstraction to further understand and see the larger picture. Both viewpoints have their benefits and together they provide a good balance while working to solve problems. It could also be beneficial for someone who is entirely engrained in their viewpoint to try and understand a topic from another perspective. Computation and theory are equally important and each provide meaning for the other.


Bellos, Alex. “Nirvana by Numbers.” BBC Radio. British Broadcasting Corporation, 28 Oct. 2013. Web. 1 Sept. 2014.

Boyer, Carl B., and Uta C. Merzbach. A History of Mathematics. 3rd ed. Hoboken, NJ: Jon Wiley and Sons, 2010. Print.

Davis, Philip J., and Reuben Hersh. The Mathematical Experience. Birkhauser Boston, 1981. Print

Reif, Frederick. Interpretation of Scientific or Mathematical Concepts: Cognitive Issues and Instructional Implications. Cognitive Science, Volume 11, Issue 4. Oct. 1987. Online. http://onlinelibrary.wiley.com/doi/10.1207/s15516709cog1104_1/abstract