Category Archives: European mathematics

Who knew that an unlikely friendship and a few games of cricket with one of the greatest mathematicians in the early 20th Century could lead to a breakthrough in population genetics?

Today it is almost commonplace for the scientific community to accept the influence natural selection and Mendelian genetics have on one another; for the majority of human history, however, this was not the case. Up until the early 1900s, many scientists believed that these concepts were nothing more than two opposing and unassociated positions on heredity. Scientists were torn between a theory of inheritance (Mendelian genetics) and a theory of evolution through natural selection. Although natural selection could account for variation, which inheritance could not, it offered no real explanation of how traits were passed on to the next generation. For the most part, scientists could not see how well Mendel’s theory of inheritance worked with Darwin’s theory of evolution because they had no way to quantify the relationship. It was not until the introduction of the theorem of genetic equilibrium that biologists acquired the mathematical rigor necessary to show how inheritance and natural selection interacted. One of the men who helped provide this framework was G.H. Hardy.

G. H. Hardy. Image: public domain, via Wikimedia Commons.

Godfrey Harold (G.H.) Hardy was a renowned English mathematician who lived from 1877 to 1947 and is best known for his accomplishments in number theory and for his work with another great mathematician, Srinivasa Ramanujan. For a man who was such an outspoken supporter of pure mathematics and abhorred any practical application of his work [5], it is ironic that he should have such a powerful influence on a field of applied mathematics and help shape our very understanding of population genetics.

How did a pure mathematician come to work on population genetics? Well, it all started with a few games of cricket. While teaching at the University of Cambridge, Hardy would often interact with professors in other departments through friendly games of cricket and evening common meals [1]. It was through these interactions that Hardy came to know, and develop a close friendship with, Reginald Punnett, cofounder of the genetics department at Cambridge and developer of the Punnett squares that are named for him [13].

Punnett, being one of the foremost experts in population genetics, was in the thick of the debate over inheritance vs. evolution. His interactions with contemporaries like G. Udny Yule made him wonder why the dominant variations, known as alleles, of a particular gene did not eventually take over a population’s genotype, the set of genes found in each person. This was the question he posed to Hardy in 1908, and Hardy’s response was nigh on brilliant. The answer was so simple that it almost seemed obvious. Hardy even expressed that “I should have expected the very simple point which I wish to make to have been familiar to biologists” [4]. His solution was so simple, in fact, that unbeknownst to him, another scientist had reached the same conclusion around the same time in Germany [17]. In time, this concept would be known as Hardy-Weinberg Equilibrium (HWE).

In short, HWE asserts that when a population is not experiencing any genetic changes that would cause it to evolve, such as genetic drift, gene flow, selective mating, etc., then the allele (af) and genotypic (gf) frequencies will remain constant within a given population (P’). To calculate the gf for humans, a diploid species that receives two complete sets of chromosomes from its parents, we simply look at the proportion of genotypes in P’.

0 < gf < 1

To calculate the af, we look at the case where the gene variation is either homozygous, containing two copies of the same allele (dominant AA or recessive aa), or heterozygous, with one copy of each allele (Aa). P’ achieves “equilibrium” when these frequencies do not change.

Hardy’s proof that these frequencies remain constant is as follows [1][4]:

If parental genotypic proportions are p AA : 2q Aa : r aa, then the offspring’s would be (p + q)² : 2(p + q)(q + r) : (q + r)². With four equations (the three genotype frequencies and p + 2q + r = 1) and three unknowns, there must be a relation among them. “It is easy to see that . . . this is q² = pr.”

Which is then broken down as:

q = (p + q)(q + r) = q(p + r) + pr + q²

Then to:

q² = q(1 − p − r) − pr = 2q² − pr   ⟹   q² = pr

In order to fully account for the population, the gf and af must each sum to 1. And, since each subsequent generation will carry the same gene frequencies, the frequencies remain constant and follow either a binomial or multinomial distribution.
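Hardy’s one-generation argument is easy to check directly. The sketch below (my own illustration, not taken from the sources cited here) uses exact rational arithmetic to confirm that a single round of random mating produces proportions satisfying q² = pr, after which the frequencies stop changing:

```python
from fractions import Fraction

def next_generation(p, q, r):
    """One round of random mating for genotype proportions
    p (AA) : 2q (Aa) : r (aa), following Hardy's 1908 argument."""
    A = p + q          # frequency of allele A
    a = q + r          # frequency of allele a
    return A * A, A * a, a * a  # new p, q, r

# Start away from equilibrium, with p + 2q + r = 1.
p, q, r = Fraction(1, 2), Fraction(1, 10), Fraction(3, 10)
p1, q1, r1 = next_generation(p, q, r)
p2, q2, r2 = next_generation(p1, q1, r1)

print(q1 * q1 == p1 * r1)            # Hardy's relation q^2 = pr holds
print((p1, q1, r1) == (p2, q2, r2))  # frequencies then stay constant
```

Running it prints `True` twice: equilibrium is reached after one generation and then maintained, which is exactly Hardy’s point.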

One important thing to keep in mind, however, is that almost every population is experiencing some form of evolutionary change. So, while HWE shows that the frequencies don’t change or disappear, it is best used as a baseline model to test for changes or equilibrium.

When using the Hardy-Weinberg theorem to test for equilibrium, researchers divide the genotypic expressions into two homozygous events: HHο and hhο. The union of each event’s frequency (f) is then calculated to give the estimated number of alleles (Nf). In this case, the expression for HWE could read something like this:

Nf = f(HHο) ∪ f(hhο)

However, another way to view this expression is to represent the frequency of each homozygous event as a single variable, i.e. p and q. Using p to represent the frequency of one dominant homozygous event (H) and q to represent the frequency of one recessive homozygous event (h) gives the following: p = f(H) and q = f(h). It then follows that p² = f(HHο) and q² = f(hhο). Using the Rule of Addition and the Associative Property to calculate the union of the two events’ frequencies, we are left with F = (p + q)². Given that the genotype frequencies must sum to one, the prevailing expression for HWE emerges when F is expanded:

F = p² + 2pq + q² = 1

Using this formula, researchers can create a baseline model of P’ and then identify evolutionary pressures by comparing any subsequent frequencies of alleles and genotypes (F′) to the baseline F. The data can then be visually represented as a change of allele frequency with respect to time.
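As a rough sketch of how such a baseline comparison might look in practice (the sample counts here are invented for illustration), one can compare observed genotype counts against the Hardy-Weinberg expectations with a simple chi-square statistic:

```python
def hwe_expected(n_AA, n_Aa, n_aa):
    """Expected genotype counts under Hardy-Weinberg equilibrium,
    estimated from the observed allele frequencies."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)  # frequency of allele A
    q = 1 - p                        # frequency of allele a
    return p * p * n, 2 * p * q * n, q * q * n

# Hypothetical sample of 200 genotyped individuals.
observed = (90, 80, 30)
expected = hwe_expected(*observed)

# A simple chi-square statistic: large values suggest the population
# is not at equilibrium, i.e. some evolutionary pressure is acting.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 3))
```

The statistic would then be compared against a chi-square threshold to decide whether the deviation from the baseline is meaningful.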

HWE represents the curious situation that populations experience when their allele frequencies do not change. This situation is realized by first assuming complete dominance, then calculating the frequency of alleles, and then using the resultant number as a baseline with which to compare any subsequent values. Although there are some limitations on how we can use HWE, namely identifying complete dominance, the model is very useful in identifying any evolutionary pressures a population may be experiencing and is one of the most important principles in population genetics. Developed, in part, by G.H. Hardy, it connected two key theories: the theory of inheritance and the theory of evolution. Although, mathematically speaking, his observation was almost trivial, Hardy provided the mathematical rigor the field sorely needed in order to show that the genotypes didn’t completely disappear and, in turn, forever changed the way we view the fields of biology and genetics.


  1. Edwards, A. W. F. “G. H. Hardy (1908) and Hardy–Weinberg Equilibrium.” Genetics 3 (2008): 1143–1150.
  2. Edwards, Anthony W. F. Foundations of Mathematical Genetics. Cambridge University Press, 2000.
  3. Guo, Sun Wei, and Elizabeth A. Thompson. “Performing the exact test of Hardy-Weinberg proportion for multiple alleles.” Biometrics (1992): 361–372.
  4. Hardy, Godfrey H. “Mendelian proportions in a mixed population.” Science 706 (1908): 49–50.
  5. Hardy, G. H., & Snow, C. P. (1967). A Mathematician’s Apology. Reprinted, with a foreword by C. P. Snow. Cambridge University Press.
  6. Pearson, Karl. “Mathematical contributions to the theory of evolution. XI. On the influence of natural selection on the variability and correlation of organs.” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character (1903): 1–66.
  7. Pearson, K., 1904. Mathematical contributions to the theory of evolution. XII. On a generalised theory of alternative inheritance, with special reference to Mendel’s laws. Philos. Trans. R. Soc. A 203: 53–86.
  8. Punnett, R. C., 1908. Mendelism in relation to disease. Proc. R. Soc. Med. 1: 135–168.
  9. Punnett, R. C., 1911. Mendelism. Macmillan, London.
  10. Punnett, R. C., 1915. Mimicry in Butterflies. Cambridge University Press, Cambridge/London/New York.
  11. Punnett, R. C., 1917. Eliminating feeblemindedness. J. Hered. 8: 464–465.
  12. Punnett, R. C., 1950. Early days of genetics. Heredity 4: 1–10.
  13. Snow, C. P., 1967. G. H. Hardy. Macmillan, London.
  14. Stern, C., 1943. The Hardy–Weinberg law. Science 97: 137–138.
  15. Sturtevant, A. H., 1965. A History of Genetics. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY.
  16. Weinberg, Wilhelm. “Über Vererbungsgesetze beim Menschen.” Molecular and General Genetics MGG 1 (1908): 440–460.
  17. Weinberg, W. “On the demonstration of heredity in man.” Boyer SH, trans. (1963) Papers on Human Genetics. Prentice Hall, Englewood Cliffs, NJ (1908).

Figure: Wikimedia Commons

Differential & Integral Calculus – The Math of Change

Most will remember their first experience with calculus. From limits to derivatives, rates of change, and integrals, it was as if the heavens had opened up and the beauty of mathematics was finally made clear. There was, in fact, more to the world than routine numerical manipulation. Numbers and symbols became the foundational building blocks with which theories could be written down, examined, and shared with others. The language of mathematics was emerging and with it a new realm of thinking. For me, calculus marked the beginning of an intellectual awakening. It is therefore perhaps worthwhile to examine the early development of our modern calculus and to provide some concrete historical context.

The method of exhaustion. Image: Margaret Nelson, illustration for New York Times article “Take it to the Limit” by Steven Strogatz.

The distinguishing feature of our modern calculus is, undoubtedly, its unique ability to utilize the power of infinitesimals. However, this power was only realized after more than a millennium of intense mathematical debate and reformation. To the early Greek mathematicians, the notion of infinity was but a paradoxical concept lacking the geometric backing necessary to put it on a rigorous footing. It was this initial struggle to provide both a convincing and proper proof for the existence and usage of infinitesimals that led to some of the greatest mathematical development this world has ever seen. The necessity for this development is believed to be the result of early attempts to calculate difficult volumes and areas of various objects. Among the first advancements was the use of the method of exhaustion. First used by the Greek mathematician Eudoxus (c. 408–355 BC) and later refined by the Chinese mathematician Liu Hui in the 3rd century AD,[1] the method of exhaustion was initially used as a means of “sandwiching” a desired value between two known values through repeated application of a given procedure. A notable application of this method was its use in estimating the true value of pi by inscribing and circumscribing a circle with higher-degree n-gons.[1] With the age of Archimedes (c. 287–212 BC) came the development of heuristics – practical mathematical methods not guaranteed to be optimal or perfect, but sufficient for the immediate goals.[2] Combined with later advancements by Indian mathematicians on trigonometric functions and summations (specifically work on integration), the groundwork for modern limiting analysis began to unfold, and with it the relevance of infinitesimals to the mathematical world.
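The sandwiching idea is easy to reproduce in a few lines. The following sketch (a modern floating-point illustration, not the arithmetic Eudoxus or Archimedes actually used) starts from a hexagon inscribed in a unit circle and repeatedly doubles the number of sides, trapping pi between the inscribed and circumscribed half-perimeters:

```python
import math

def sandwich_pi(doublings=10):
    """Method of exhaustion: trap pi between the half-perimeters of
    inscribed and circumscribed regular n-gons, starting from a hexagon
    and repeatedly doubling the side count."""
    n, s = 6, 1.0  # a hexagon inscribed in the unit circle has side 1
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))  # side of the 2n-gon
        n *= 2
    lower = n * s / 2                         # inscribed half-perimeter
    upper = lower / math.sqrt(1 - s * s / 4)  # circumscribed half-perimeter
    return lower, upper

lo, hi = sandwich_pi()
print(lo, hi)  # pi is trapped between these two values
```

Ten doublings (a 6144-gon) already pin pi down to better than a millionth, which gives a sense of why the method was so convincing long before limits were formalized.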

Isaac Newton. Image: Portrait of Isaac Newton by Sir Godfrey Kneller. Public domain.

By the turn of the 17th century, many influential mathematicians including Isaac Barrow, René Descartes, Pierre de Fermat, Blaise Pascal, John Wallis, and others had already been applying the results on infinitesimals to the study of tangent lines and differentiation.[2] However, when we think of modern calculus today, the first names to come to mind are almost certainly Isaac Newton and Gottfried Leibniz. Before 1650, much of Europe was still in what historians refer to as the Hellenistic age of mathematics. Prior to the contributions of Newton and Leibniz, European mathematics was largely an “informal mass of various techniques, methods, notations, and theories.”[2] Through the creation of a more structured and algorithmic approach to mathematics, Newton and Leibniz succeeded in transforming the heart of the mathematical system itself, giving rise to what we now call “the calculus.”

Both Newton and Leibniz shared the belief that the tangent could be defined as a ratio, but Newton insisted that it was simply the ratio between ordinates and abscissas (the y and x coordinates, respectively, in the plane in regular Euclidean geometry).[2] Newton further added that the integral was merely the “sum of the ordinates for infinitesimal intervals in the abscissa” (i.e., the sum of an infinite number of rectangles).[4] From Leibniz we gain the well-known “Leibniz notation” still in use today. Leibniz denoted infinitesimal increments of abscissas and ordinates as dx and dy and the sum of infinitely many infinitesimally thin rectangles as a “long s”, which today constitutes our modern integral symbol ∫.[2] To Leibniz, the world was a collection of infinitesimal points, and infinitesimals were ideal quantities “less than any given quantity.”[3] Here we might draw the connection between this description and our modern use of the Greek letter ε (epsilon) – a fundamental tool in modern analysis, in which assertions can be made by proving that a desired property holds provided we can always produce a value less than any given (usually small) epsilon.

From Newton, on the other hand, we get the groundwork for differential calculus, which he developed through his theory of fluxionary calculus, first published in his work Methodus Fluxionum.[2] Initially bothered by the use of infinitesimals in his calculations, Newton sought to avoid them by instead forming calculations based on ratios of changes. He defined the rate of generated change as a fluxion (represented by a dotted letter) and the quantity generated as a fluent. He went on to define the derivative as the “ultimate ratio of change,” which he considered to be the ratio between evanescent increments (the ratio of fluxions) exactly at the moment in question – does this sound familiar as the instantaneous rate of change? Newton is credited with saying that “the ultimate ratio is the ratio as the increments vanish into nothingness.”[2][3] The word “vanish” best reflects the idea of a value approaching zero in a limit.

The derivative of a function.
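Newton’s “ultimate ratio” of vanishing increments is exactly what we now compute as a limit of difference quotients. A minimal numerical sketch (my own, for illustration):

```python
def derivative_at(f, x, h=1e-6):
    """Approximate Newton's 'ultimate ratio' -- the limit of the ratio of
    evanescent increments -- with a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Watch the ratio settle as the increment h 'vanishes into nothingness'.
f = lambda x: x ** 2
for h in (0.1, 0.01, 0.001):
    print(h, (f(3 + h) - f(3)) / h)  # forward quotient approaches 6

print(derivative_at(f, 3.0))         # close to the true derivative, 6
```

For f(x) = x² the forward quotient at x = 3 is exactly 6 + h, so shrinking h makes the extra term vanish, which is the limit idea in miniature.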

Contrary to popular belief, Newton and Leibniz did not develop the same calculus, nor did they conceive of our modern calculus. Both aimed to create a system in which one could easily manage variable quantities, but their initial approaches varied. Newton believed change was a variable quantity over time, while for Leibniz change was the difference ranging over a sequence of infinitely close values.[3] The historical debate has therefore been: who invented calculus first? The current understanding is that Newton began work on what he called “the science of fluents and fluxions” no later than 1666. Leibniz, on the other hand, did not begin work until 1673. Between 1673 and 1677, there exists documented correspondence between Leibniz and several English scientists (as well as Newton himself) in which it is believed that he may have come into contact with some of Newton’s unpublished manuscripts.[2] However, there is no clear consensus on how heavily this may have actually influenced Leibniz’s work. Eventually both Newton and Leibniz became personally involved in the matter and in 1711 began to formally accuse each other of plagiarism.[2][3] Then in the 1820s, following the efforts of the Analytical Society, Leibnizian analytical calculus was formally accepted in England.[2] Today, both Newton and Leibniz are credited with independently developing the foundations of calculus, but it is Leibniz who is credited with giving the discipline the name it has today: “calculus.”

The applications of differential and integral calculus are far reaching and cannot be overstated. From modern physics to neoclassical economics, there is hardly a discipline that does not rely on the tools of calculus. Over the course of thousands of years of mathematical development and countless instrumental players (e.g. Newton and Leibniz), we now have at our disposal some of the most advanced and beautifully simple problem solving tools the world has ever seen. What will be the next breakthrough? The next calculus? Only time will tell. What is certain is that the future of mathematics is, indeed, very bright.

Works Cited

[1] Dun, Liu; Fan, Dainian; Cohen, Robert Sonné (1966). “A comparison of Archimedes’ and Liu Hui’s studies of circles”. Chinese Studies in the History and Philosophy of Science and Technology 130. Springer. p. 279. ISBN 0-7923-3463-9.

[2]“History of Calculus.” Wikipedia. Wikimedia Foundation, n.d. Web. 14 Mar. 2015.

[3]“A History of the Calculus.” Calculus History. N.p., n.d. Web. 14 Mar. 2015.

[4] Valentine, Vincent. “Editor’s Corner: Voltaire and The Man Who Knew Too Much, Que Sera, Sera, by Vincent Valentine.” Editor’s Corner: Voltaire and The Man Who Knew Too Much, Que Sera, Sera, by Vincent Valentine. ISHLT, Sept. 2014. Web. 15 Apr. 2015.

Henri Poincaré: A twentieth century polymath

Many of the first scientists would now be considered madly interdisciplinary. Aristotle’s fields of study ranged from mechanics and optics to medicine and the classification of animals, not to mention philosophy and other fields outside the natural sciences. Archimedes not only was fascinated by proving mathematical principles, he also applied them to physics, astronomy, and engineering. Newton invented principles which now are part of calculus while developing his theory of motion. Leonardo da Vinci and other famous Renaissance men were notoriously broad in their fields of knowledge and investigation. Gradually, mathematicians and scientists became more specialized. Darwin focused on biology, Cauchy on mathematics, Einstein on physics, and so on. Now, we recognize some academics as experts in such fields as number theory, particle physics, or Lie groups.

Henri Poincaré was one of the last of the generation of Renaissance men. While he was principally a mathematician, some of his work extended firmly into the world of physics. On the side he was a mining engineer and a philosopher. To see how varied and numerous his contributions were, see this list of things named after him, most of which are mathematical or physical topics.

Henri Poincaré
Image: Connormah via Wikimedia Commons.
Public domain.

Classical physics works very well for large objects with low speeds. In the late 1800s, physicists realized that their understanding of the universe utterly failed to explain the behavior of small objects or fast objects. Two theories forever revolutionized our understanding of the universe: relativity, which explains fast-moving objects, and quantum mechanics, which explains the behavior of very small objects like electrons. Poincaré contributed mathematically to both of them. Hendrik Antoon Lorentz derived the famous Lorentz transforms, which explain relativity on a simple level. Lorentz discovered the Lorentz transforms without collaborating with Poincaré. However, Poincaré did critique Lorentz’s papers and offer additional input, ideas, and encouragement. It was this relationship with Lorentz that would later lead Poincaré into quantum mechanics.
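As an aside (a sketch of my own, not drawn from the sources below), the Lorentz transform for motion along one axis is compact enough to state in code; its defining property is that it leaves the spacetime interval unchanged:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz(x, t, v):
    """Transform event coordinates (x, t) into a frame moving at
    velocity v along the x-axis (the 'simple level' Lorentz transform)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

# The interval (ct)^2 - x^2 comes out the same in both frames.
x, t = 4.0e8, 2.0
xp, tp = lorentz(x, t, 0.6 * C)
print((C * t) ** 2 - x ** 2)
print((C * tp) ** 2 - xp ** 2)  # equal up to rounding
```

That invariance of the interval is the geometric fact Poincaré helped bring out of Lorentz’s algebra.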

Out of quantum mechanics and relativity, quantum mechanics has by far influenced the world more. It contributed to several major developments, including the understanding of atoms, nuclear power, and semiconductors. Of course, to semiconductors we owe much of our modern society. The development of the transistor would not have been possible without quantum mechanics. Transistors enabled the building of modern computers, cell phones, and the Internet.

For these reasons, Poincaré’s contributions to quantum mechanics are among his most important contributions to math and science. Poincaré was invited by Lorentz to the first Solvay Conference on quantum theory in 1911. This appears to be the first time Poincaré was exposed to the new theory. In spite of this, his energetic participation in the discussions at the conference was noted by the other participants. At that conference, Max Planck presented a new theory about black body radiation.

Participants in the First Solvay Conference, 1911.
Image: Fastfission via Wikimedia Commons.
Public domain.

Black body radiation simply refers to the light given off by all objects by virtue of their temperature. By 1911, enough experiments had been done that the wavelengths of light emitted from black bodies of different temperatures were known. However, classical physics failed to explain these results. Planck attempted to explain them by introducing the idea of “resonators” which could produce electromagnetic radiation. Although Planck didn’t consider matter to be made up of these resonators, this is a natural extension of his theory. Poincaré thought of this and questioned how Planck’s theory could explain the transfer of heat within an object. He quickly got to work rederiving Planck’s result and putting it on more solid theoretical ground. In keeping with quantum theory, his reasoning used probability rather than absolute knowledge about particles. He did arrive at the same result as Planck, although he was more rigorous in doing so.
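For reference, the result in question is Planck’s black-body law. The brief sketch below (my illustration, not from the post’s sources) evaluates it and checks that the peak of the curve falls roughly where Wien’s displacement law predicts:

```python
import math

h = 6.626e-34  # Planck constant, J*s
k = 1.381e-23  # Boltzmann constant, J/K
c = 2.998e8    # speed of light, m/s

def planck(nu, T):
    """Planck's law: spectral radiance of a black body at frequency nu (Hz)
    and absolute temperature T (K)."""
    return (2 * h * nu ** 3 / c ** 2) / (math.exp(h * nu / (k * T)) - 1)

# The curve rises, peaks, and falls off -- the shape classical physics
# could not reproduce. Locate the peak numerically for a 5000 K body.
T = 5000
freqs = [i * 1e12 for i in range(1, 2000)]  # 1 THz to ~2000 THz
peak = max(freqs, key=lambda nu: planck(nu, T))

# Wien's displacement law puts the peak near 2.82 * k * T / h Hz.
print(peak)
```

The numerically located peak sits within a grid step of the Wien prediction, which is the kind of internal consistency Poincaré’s rederivation made rigorous.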

Unfortunately, just eight months after the First Solvay conference, Henri Poincaré passed away without living to see the impact his research would have on math and physics.


McCormmach, Russell (Spring 1967), “Henri Poincaré and the Quantum Theory”, Isis 58 (1): 37-55, doi:10.1086/350182

Planck’s Law on Wikipedia

Henri Poincaré on Wikipedia

Poincaré’s original paper on Planck’s theory (in French) can be seen here.

János Bolyai

A portrait, allegedly* of János Bolyai, by Mór Adler. Image: Pataki Márta and Szajci, via Wikimedia Commons.

One of my favorite things that we’ve been able to learn about this semester has been the different mathematicians that we’ve studied. It fascinates me to hear of their interactions with each other and how they affected one another’s work. As much as I love the math, the historical aspect is something that I had never heard before and something that I love learning about. One of the mathematicians I wanted to know more about was János Bolyai. Of him Gauss would say, “I regard this young geometer Bolyai as a genius of the first order.” Coming from someone like Gauss, that is quite the compliment. I wanted to know what made Gauss say that about such a young mathematician.

János Bolyai was born in Kolozsvár (which is now the city of Cluj in Romania) to Zsuzsanna Benkö and Farkas Bolyai, who was also a great mathematician, physicist and chemist at the Calvinist College. Like many fathers, Farkas wanted his son to follow in his footsteps and perhaps to achieve more than even Farkas himself had achieved in the field. So, he raised János with that goal in mind. However, Farkas was a firm believer that a strong body would lead to a strong mind, so in János’s younger years, most of the attention was spent developing his physical body. (O’Connor & Robertson, 2004)

János quickly became a child prodigy. According to Barna Szénássy in History of Mathematics in Hungary until the 20th Century, “… when he was four he could distinguish certain geometrical figures, knew about the sine function, and could identify the best known constellations. By the time he was five [he] had learnt, practically by himself, to read. He was well above the average at learning languages and music. At the age of seven he took up playing the violin and made such good progress that he was soon playing difficult concert pieces” (Szenassy, 1992). Bolyai’s childhood and adolescence were fascinating. His father wanted to send him to live with Gauss as a student in order to accelerate his mathematical education, but Gauss would not agree to it. Because the Bolyai family didn’t have the financial assets to send János to an expensive university, they made the decision to send him to the Royal Academy of Engineering at Vienna to study military engineering. He truly was a “jack of all trades.” He finished the seven-year engineering program in just four years, became an excellent sportsman, and even performed as a violinist in Vienna. He then served in the military for eleven years, where he became known as the greatest swordsman and dancer in the Austro-Hungarian Imperial Army (O’Connor & Robertson, 2004). It wasn’t until 1820 that he began intense study of Euclid’s parallel postulate and the development of hyperbolic geometry. One of János Bolyai’s most recognized quotations comes from a letter to his father in which he said that he had “created a new, another world out of nothing.”

The story is well-known of the publication of Bolyai’s work on hyperbolic geometry. During János’s military service, his father read the mathematical work that his son had sent him previously and then went to where János was stationed. Farkas then encouraged his son to publish his work. János later said, “Had my father not happened to urge or even force me at Marosvásárhely, on my way to duty in Lemberg, to immediately put things to paper, possibly the contents of the Appendix would never have seen the light of day.” When Farkas sent a copy of his son’s work to his old friend, Gauss, Gauss responded by saying, “To praise it would amount to praising myself. For the entire content of the work … coincides almost exactly with my own meditations which have occupied my mind for the past thirty or thirty-five years.”

Bolyai’s work on the parallel axiom led to the development of what would become known as the “pseudosphere,” an object that extends infinitely but has a finite volume. This object was constructed by Beltrami many years later and is now seen as an embodiment of hyperbolic geometry.

The story of János Bolyai ends as a sad one. He did not manage his money very well, gave very little care or attention to the family estate he had inherited, and finally left his wife and children. Years after his work on hyperbolic geometry, he found the works of another geometer named Lobachevsky, whom he believed to be fictitious, a cover that Gauss had created in order to steal his work on hyperbolic geometry. He quit working on mathematics entirely and focused on “a theory of all knowledge” (O’Connor & Robertson, 2004). Although he may not have felt that he received the credit he deserved for his work, János Bolyai was indeed, as Gauss called him, “a genius of the first order.” He gave the world of mathematics a new way of understanding the concept of parallelism and the way in which mathematics relates to our natural world.

*Editor’s note: The portrait here, which also appears on postage stamps honoring János Bolyai, has long been associated with the mathematician but is not authentic. For more information, see “The Real Face of János Bolyai” by Tamás Dénes.

Works Cited

O’Connor, J., & Robertson, E. (2004, March). János Bolyai. Retrieved from MacTutor History of Mathematics:

Szenassy, B. (1992). History of Mathematics in Hungary until the 20th Century. New York: Springer-Verlag Berlin Heidelberg.

Just Enough to Notice

Portrait of Ernst Heinrich Weber.

Ernst Heinrich Weber
Among the different fields of science, mathematics is the umbrella over them all. It amazes me how we can mathematically make sense of anything around us. Although it may take quite a bit of time to fully understand how something functions, the possibility is there. Because of this, problems both physical and psychological can be deconstructed. This train of thought eventually led me to Ernst Heinrich Weber (1795–1878). His works and legacy proved to be phenomenal and led to many new studies in the fields of psychology, physiology, and anatomy. His dedication to his career earned him great recognition in the scientific community, and he became known as one of the founders of psychophysics and experimental psychology. Experimental psychology itself gave many psychologists a new perspective for creating research studies and making discoveries. Unquestionably, Ernst Heinrich Weber was one of the most intuitive people of his time. Not only was he one of the pioneers of experimental psychology, he also produced what we now know as Weber’s Law.

Weber’s Law
Some of Ernst Weber’s best known work comes from his studies on human physiological perception. During his research, he claimed that humans can distinguish the relative difference between two items, not the absolute difference. We can essentially differentiate two objects by using our senses and perception to notice if one object is bigger, heavier, or louder than the other. This kind of intuitive statistics is what Weber called the Just-Noticeable Difference (JND), also known as the difference limen, differential threshold, or least perceptible difference. Weber’s first experiments with this notion compared weights: he wanted to know the minimum difference between two weights needed to tell them apart. What he discovered was that the smallest noticeable difference was about 3% of the original weight. This discovery eventually led him to produce the equation that is now known as Weber’s Law.

Weber’s Law is shown as follows:
ΔR/R = k

Here R is the amount of existing stimulation present, and ΔR is the change in stimulation needed to produce a Just-Noticeable Difference. Lastly, k represents a constant that is different for every sense being measured. Something to note is that at very high intensities the law has been found to break down. However, one of Ernst Weber’s colleagues, Gustav Theodor Fechner, was able to build on Weber’s findings and develop his own formula covering all types of Just-Noticeable Difference senses.

Examples of Weber’s Law
To understand Weber’s Law a little more, there are a couple of examples that can be made.

Image: Roman Oleinik, via Wikimedia Commons.

Suppose there are 50 pennies and each penny weighs 2.5 grams (g). The total weight is then 125 g. Taking 3% of 125 g as our Just-Noticeable Difference gives about 3.75 g. Therefore, if we have two bags of pennies and put 50 (125 g) in one and 52 (130 g) in the other, we should be able to tell which bag is heavier just by holding one in each hand.
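The same arithmetic can be packaged as a small function (a hedged sketch of mine; the 3% Weber fraction for lifted weights is the figure quoted above):

```python
import math

def pennies_to_notice(n_pennies, penny_weight=2.5, weber_fraction=0.03):
    """How many pennies must be added to a bag before the weight
    difference becomes just noticeable, per Weber's Law (dR/R = k)."""
    total = n_pennies * penny_weight      # existing stimulation R, in grams
    jnd = weber_fraction * total          # delta-R needed to notice
    return math.ceil(jnd / penny_weight)  # whole pennies to add

print(pennies_to_notice(50))   # 2 extra pennies (3.75 g JND / 2.5 g each)
print(pennies_to_notice(100))  # heavier bag needs more: 3 extra pennies
```

Note how the required increment grows with the starting weight, which is the heart of Weber’s claim that perception is relative, not absolute.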

A good visual example of this is through this link on YouTube:

In the video, the person is using the stereo system in his car and adjusting the volume to show how Weber’s Law can be applied. He shows that it is difficult to distinguish a difference when the volume is loud or quiet. Although he doesn’t show a precise way to measure and find the Just-Noticeable Difference, his presentation gives a clear idea of how Weber’s Law applies.

With a little bit of curiosity, we can find hundreds of different applications of mathematics. Ernst Heinrich Weber is an example of taking conventional math and applying it to something that, at the time, seemed unmathematical. Because of his ability to combine physics and psychology, he was able to create a new field of study and even has a law named after him. As we continue to study mathematics and its history, we can see that all fields of study can be viewed through a mathematical lens.


The Forgotten Mathematician

Pierre Wantzel was born on the 5th of June, 1814, in Paris. His father, after serving in the army, was a professor of mathematics at the École Spéciale du Commerce, and so Pierre began life with a natural love for mathematics. He attended school in the town of Ecouen, where he demonstrated this love: when he was only 9 years old, his teachers would turn to him for help when judging the difficulty of problems. His parents recognized his love and skill for mathematics and sent him to the École des Arts et Métiers de Châlons. He was only 12 years old when he went there, far younger than most students. His teacher was the well-known Étienne Bobillier, a mathematician known for his work on polar curves and algebraic surfaces. This helped kindle Pierre’s mathematical skill, but it did not last long, because in 1827 the school was reorganized; France itself was facing revolts and other political turmoil. The school became less academic, and this caused Pierre to take his studies elsewhere.

In 1828 he traveled to the Collège Charlemagne to continue his studies and receive language coaching. He would later marry the daughter of his language coach, but before that he accomplished many feats of genius, including editing the second edition of Reynaud’s Treatise on Arithmetic at only 15, in 1829. The book featured a method for finding square roots that had never been proved; Wantzel proved it, and in doing so received his college’s first prize for dissertation. Later he took the entrance exams for the École Polytechnique and for the science section of the École Normale, placing first in both, something never before achieved. Furthering his education, he went on to the École des Ponts et Chaussées, an engineering school, but did not stay long: after a year he journeyed to the Ardennes in 1835, and, following a similar pattern, moved on to Berry after only a year there. After studying engineering he decided that teaching mathematics was his true calling. To achieve this he took leave from his occupation and became a lecturer at a school from his past, the École Polytechnique. He later became a professor of applied mathematics at the École des Ponts et Chaussées, though not before qualifying as an engineer in 1841. Continuing with his true interests, he began teaching classes not only on mathematics but on physics as well, and in 1843 he became the entrance exam examiner. He was not confined to his university, however, as he traveled around Paris teaching at many other schools too.

The tools of Pierre Wantzel. Image: Mcgill via Wikimedia Commons.

Pierre achieved fame when he published what would become his most important works, on the subjects of radicals and the solvability of equations, which addressed some of the most famous problems of the time. Publishing in Liouville’s Journal, he was the first to prove that it is impossible to duplicate the cube or trisect an arbitrary angle with compass and straightedge. Gauss had stated that these constructions were impossible but offered no proof. Wantzel supplied one in his 1837 paper, where he traced each construction back to a cubic equation whose roots require cube roots, something those tools cannot produce. This built on the work of others, yet it went beyond what had previously been done. Continuing his work, Pierre delved into equations, and from this he produced new impossibility proofs for algebraic equations: problems settled not by providing a solution but by proving that no solution exists. In 1845 he gave a revised proof of Abel’s theorem, which states that there is no general solution in radicals for equations of degree five or higher. He also filled in details of many vague arguments on the subject proposed by famous mathematicians such as Ruffini. Pierre published over 20 works during his life, a few of which branch out into physics, specifically dealing with extreme pressure differences.
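To see where the cube roots enter, here is a standard modern sketch of the trisection argument (not Wantzel’s own notation): trisecting the constructible 60° angle would require constructing cos 20°, and the triple-angle identity forces that number to satisfy a cubic.

```latex
% Triple-angle identity relating an angle to its third:
\cos\theta = 4\cos^3\!\left(\tfrac{\theta}{3}\right) - 3\cos\!\left(\tfrac{\theta}{3}\right)
% Setting \theta = 60^\circ and x = \cos 20^\circ gives
8x^3 - 6x - 1 = 0
% This cubic has no rational root, so it is irreducible over the
% rationals; its roots therefore cannot be obtained from nested
% square roots, which are all that compass and straightedge yield.
```

So a single concrete angle, 60°, already suffices to rule out a general trisection procedure.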

Pierre was a driven man who focused on his work so intensely that he sacrificed sleep and meals to do so. He did not live out a full life: he overworked himself, relying on coffee and opium to sustain his lifestyle, and this ultimately contributed to his demise. He died in 1848 at the age of 33, and the world lost a great mind. His works were very important, yet he is not remembered as well as others. This is commonly attributed to the classical nature of the problems he is famous for: several other mathematicians had mentioned them without giving proofs, and Max Simon’s work from 1906, which does discuss Pierre’s proof, was published as a supplement to another work rather than on its own. Another reason is his early death; with great potential but little time to achieve true greatness, he remains less known, and sadly he was never elected a member of the Académie des Sciences. His achievements were great, and had he lived longer he would surely have achieved much more.

John Wallis: a Man before his Time

Stipple engraving of John Wallis by R. Cooper via Wikimedia Commons

John Wallis was born to Joanna Chapman and Reverend John Wallis in Ashford, Kent in 1616. He was the third of five children, but was raised primarily by his mother, because his father died when John was 6 years old. He later moved with his mother to Tenterden, Kent after an outbreak of the plague in his home town. In Tenterden he attended James Movat’s grammar school, and here he showed his true potential. When he turned 13 he felt he was ready for university, and within a year he was attending Martin Holbeach’s school in Felsted. He took many classes and learned several languages, yet mathematics became his passion. His family’s plan was for him to become a doctor, but one night he found an arithmetic book and, over a few weeks, mastered it with his brother’s help. This was his first step toward becoming one of the most valuable mathematicians of the time, and along the path to discovering many things so revolutionary they seem ahead of their time.

Later in his life Wallis attended Emmanuel College, Cambridge, where finally no one could disrupt his mathematical studies. He also studied many other subjects, including astronomy, medicine, and cryptography. Contrary to his family’s plan he never wished to become a doctor, but he did spend time studying medicine, and it is worth noting that he was the first person to defend Francis Glisson’s theory of blood circulation. After graduating with a Bachelor of Arts degree in 1637, he followed with his master’s in 1640. He was later given a fellowship at Queens’ College, Cambridge, where he took orders and assisted in deciphering royalist messages. He opposed the execution of Charles I, feeling it would provoke lasting hatred of the Independents. During the civil war he was a great aid to the Parliamentarians, and because of this they placed him in charge of the Church of St Gabriel in Fenchurch Street. That same year his mother died and he inherited the family estate. Not only a genius, but one who applied his genius to aid his country, Wallis captivates my interest as a dedicated man. Being a man of brilliance, he frequently performed large mental calculations, a habit born of the insomnia that kept him awake for many long hours. He occupied those hours by calculating while lying in bed, honing not only his skill but also his memory, as he would recite the results the next morning. One of his greatest feats of mental arithmetic was calculating the square root of a 53-digit number in his head. I personally find this fascinating: he not only revolutionized many fields, but was so dedicated that nearly all of his time was spent on mathematics. His mathematical feats alone raise him above his peers, but this dedication and knowledge further his status as a man beyond his time.

He married Susanna Glyde in March of 1645, but in doing so he forfeited his fellowship at Queens’ College. From there he traveled back to London and started small group meetings of scientists, which would later grow into the Royal Society of London. His true passion for mathematics was revealed during the Society’s meetings, sparked when Oughtred’s Clavis Mathematicae was read aloud. Wallis mastered the book in a matter of weeks and went on to write his own book, the Treatise of Angular Sections, which went unpublished for 40 years. He also developed solutions to 4th-degree equations, similar to those of Thomas Harriot, another mathematician of the time, who had graduated from Oxford. Wallis was appointed to the chair of geometry at Oxford in 1649, where he remained for the rest of his life.

The first page of the Arithmetica Infinitorum, 1656, via Wikimedia Commons.


During his lifetime he produced many important mathematical works, including his book Arithmetica Infinitorum and his use of the method of indivisibles. The Arithmetica Infinitorum, published in 1656, extends the methods of analysis used by Descartes and Cavalieri. It soon became a standard textbook because of how far it expanded the field, and it is recognized as his most important work. It focuses on conic sections, showing that the planes made from them can be represented with algebraic coordinates, and it features Wallis’s work on integral calculus, evaluating the integrals of powers such as x^(-1) and x^n. The book also gives an accurate value for π, obtained by representing π as an infinite product and evaluating it. Wallis was also the first to use the principle of interpolation, which constructs new data points within the range of known discrete points in order to reach a solution; this method was used by mathematicians throughout the 17th century and is still used in engineering today.

It was in this book that the symbol ∞ for infinity first appeared. Wallis selected it because its curve can be traced out infinitely many times; the symbol derives from the Ouroboros, the snake biting its own tail, which represented endlessness. Wallis’s greatest work influenced his later works as well as those of others. In 1659 he applied its principles in his work on cycloids, an idea previously proposed by Pascal, but it was Wallis who reached a solution. Consequently, in doing so he also applied his principles to algebraic curves, which allowed a solution to the semi-cubical parabola, something that had troubled previous mathematicians; that solution was found by William Neile. The Arithmetica Infinitorum alone is a work of greatness, as it allowed several of the most troubling problems of the day to be solved. There are few other works to compare it to that had such an impact; it revolutionized the field and was truly ahead of its time.
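The infinite product for π mentioned above is now known as the Wallis product. Here is a small sketch of its modern statement, computing a partial product numerically (the function name and the number of factors are my own choices for illustration):

```python
def wallis_product(n_factors):
    """Partial Wallis product: pi/2 = (2/1)(2/3)(4/3)(4/5)(6/5)(6/7)...
    Multiplying the first n paired factors and doubling approximates pi."""
    product = 1.0
    for k in range(1, n_factors + 1):
        product *= (2 * k) / (2 * k - 1) * ((2 * k) / (2 * k + 1))
    return 2 * product

print(wallis_product(10_000))  # close to pi, though convergence is slow
```

The slow convergence is part of the story: the product was a theoretical landmark rather than a practical way to compute π to many digits.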

In his lifetime Wallis also published another book, titled Treatise on Algebra. Here he shows that polynomials can have negative and complex roots, and that a polynomial can be factored into roots featuring complex numbers. The book also challenges Descartes’ rule of signs, and because of this Wallis received much criticism from the mathematical community. Wallis’s view of negative numbers was different from what we accept today: rather than regarding them as quantities less than nothing, he argued they were greater than infinity. This did not hamper his work, however, as he demonstrated a sophisticated command of them.

Wallis published one final book in his life, called Algebra. Well known to be ahead of its time, it features the systematic use of formulae. Wallis uses this notation to analyze where a particle moving at constant velocity is at any given time, working with ratios of the space covered rather than the absolute quantities many previous mathematicians had used; this revolutionary idea opened up many possibilities for solving problems. The book also had a second edition, titled Opera, notably expanded from the first with many more examples. The amount of work that went into all of the books Wallis published, along with the concepts inside them, bewilders me, and shows his dedication to mathematics.

Wallis also did work in the field of physics. When the theory of the collision of bodies was proposed to the Royal Society, Wallis, alongside other mathematicians, was tasked with sending feedback and candidate solutions in support of the theory, which went on to become what is now called conservation of momentum. Wallis was the only one to consider a situation other than perfectly elastic bodies, doing work with imperfect ones as well. Later he made other contributions to physics, such as his work on the center of gravity and on dynamics. A fantastic mathematician and a brilliant mind, Wallis rightfully deserves to be remembered for his many contributions to mathematics and physics.