At first it may sound impossible, but many ancient forms of arithmetic fall closely in line with how the modern central processing unit (CPU) performs arithmetic. As a software engineer at Google, I have a lot of hands-on experience with computers and how they function. While taking this class, I quickly realized that it had struck an interest in me that I had not expected. As we began to dive into ancient cultures and their arithmetic, the similarities between Egyptian and Babylonian computation and the arithmetic logic units inside modern CPUs came to life.

The similarities lie both in the way they performed the same mathematical operations and in the general idea that there are clever ways to do arithmetic. I say clever mostly because these systems work in a number base different from our familiar base **10**. Computers, as many people know, are at their very core restricted to the domain of base **2** numbers. Most people are familiar with the fact that computers use a series of **1’s** and **0’s**, called “bits,” where each bit is either turned on or turned off. Each bit acts as a placeholder for a power of two; when a bit is turned on, it denotes the presence of that value in that position. Here is a simple chart showing the first **16** binary numbers, **0 – 15**.
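The chart can also be regenerated with a minimal Python sketch (the four-bit width is my own choice, since **15** is the largest value that fits in four bits):

```python
# Print the first 16 binary numbers, 0-15, next to their decimal values.
# The 04b format pads each value to four bits, e.g. 5 -> 0101.
for n in range(16):
    print(f"{n:2d} = {n:04b}")
```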

I find systems of multiplication and division a lot more interesting than simple addition or subtraction, so I’m going to talk specifically about multiplication. You can imagine that, both in ancient times and in a computer, multiplication could be as simple as adding a number to itself repeatedly until you reach your desired result. For example, **2** multiplied by **4** can be represented as **2 + 2 + 2 + 2 = 8**. For small numbers this works out, but for both ancient mathematicians and computers, the time it takes to calculate is of the essence, so in both cases clever algorithms have been developed to speed up the process. My personal favorite of the ancient multiplication techniques is one very similar to the Egyptian multiplication that we learned in class. It goes by the name of Russian Peasant Multiplication. As best I can tell, the name comes from accounts of western travellers witnessing the technique being used by peasants in Russia.
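The repeated-addition idea can be sketched in a few lines of Python (the function name is my own):

```python
def multiply_by_repeated_addition(a, b):
    """Multiply a by b using nothing but addition (b must be a
    non-negative integer). Slow: it takes b separate additions."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply_by_repeated_addition(2, 4))  # 2 + 2 + 2 + 2 = 8
```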

An example of this algorithm that I found in “The Russian Peasant Multiplication Algorithm: A Generalization” by Beverly J. Gimmestad shows it well:

At first glance this may look like garbage, but I will explain it. Given that we want to multiply **37** and **6**, we place them at the heads of their respective columns. Begin by dividing the left column by **2** and rounding the result down to the nearest integer; you can see in the example that we arrive at **18**. In the right column, instead of halving our number, we double it, producing **12**. We continue this process for as many rounds as it takes for the left column to reach **1**, which brings us to the second table. Our next operation is to discard every row that begins with an even number on the left. We can now gather the remaining numbers from the right column and add them together to reach our desired result: **37 x 6 = 6 + 24 + 192 = 222**. It is worth noting that the algorithm the Egyptians used for multiplication relies on a very similar system of doubling, so perhaps there is a connection between the two techniques.
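The steps above translate directly into code. Here is a minimal Python sketch of the algorithm (the function name is my own):

```python
def russian_peasant(left, right):
    """Russian Peasant Multiplication: repeatedly halve the left column
    (rounding down) and double the right column, then sum the right-hand
    values on the rows where the left value is odd."""
    total = 0
    while left >= 1:
        if left % 2 == 1:   # odd left value: keep this row
            total += right
        left //= 2          # halve, rounding down
        right *= 2          # double
    return total

print(russian_peasant(37, 6))  # 6 + 24 + 192 = 222
```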

So how is Russian Peasant Multiplication similar to what computers are doing? From a popular book in the field of Computer Science, *Computer Organization and Design* by Patterson and Hennessy, we get a very accurate description of the multiplication algorithm found in the Arithmetic Logic Units of most modern processors. We start with three numbers: a multiplicand, a multiplier, and a product that is initially **0**. We build up this product as we go. This figure I drew up shows a simple flowchart to assist with understanding:

Moving from the least significant bit (the rightmost bit, denoting the presence of a **1** in the number) to the most significant bit of the multiplier, we first determine whether the current bit is a **0** or a **1**. If we encounter a **1** in the rightmost position of the multiplier, it signifies that in this round we need to add the multiplicand to our product. Once we have done that, we shift the multiplicand left by one bit, essentially doubling it. We then shift the multiplier right by one bit, throwing away its rightmost bit and halving the number. Sound familiar? (Russian Peasant Multiplication.) We loop this process until we have examined each bit.
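As a rough Python sketch of the shift-and-add loop described above, using Python’s bitwise operators in place of hardware shifts (the function name is my own):

```python
def shift_and_add_multiply(multiplicand, multiplier):
    """Shift-and-add multiplication: examine the multiplier's low bit,
    conditionally add the multiplicand, then shift both operands."""
    product = 0
    while multiplier != 0:
        if multiplier & 1:       # rightmost bit is 1: add this round
            product += multiplicand
        multiplicand <<= 1       # shift left one bit = double
        multiplier >>= 1         # shift right one bit = halve, drop low bit
    return product

print(shift_and_add_multiply(8, 9))  # 72
```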

The example above is also taken from *Computer Organization and Design* and illustrates the algorithm in a grade-school fashion, which is frankly much easier to understand. Here we are multiplying the numbers **8** and **9**. To break it down, each bit of the multiplier determines whether the multiplicand should be added in that round. In any round, the multiplicand is double what it was the round before, hence the shifting to the left. For a bit of contrast, in our base-**10** number system a shift left represents a multiplication by **10** (e.g. **3** -> **30** -> **300**). We add all the rounds together and reach our final product of **72**, which is indicated by the bits with place values **64** and **8** being set (**64 + 8 = 72**).
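A small Python sketch of the **8 x 9** breakdown (variable names are my own) shows which rounds actually contribute a partial product:

```python
multiplicand, multiplier = 8, 9  # 9 = 0b1001: bits 0 and 3 are set
# Collect the shifted multiplicand for every set bit of the multiplier.
partials = [multiplicand << i
            for i in range(multiplier.bit_length())
            if (multiplier >> i) & 1]
print(partials)       # [8, 64]: the place values 8 and 64
print(sum(partials))  # 72
```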

It may be hard to see at first, but with a little thought you can see that this is not only close, it is *exactly* what is performed in the Russian Peasant Multiplication algorithm. How fascinating that an ancient technique can be so useful centuries later, in an application its inventors could not possibly have imagined!

**Source material:**

Patterson, D., & Hennessy, J. (2012). *Computer organization and design: The hardware/software interface* (4th ed.). Waltham, MA: Morgan Kaufmann.

Gimmestad, B. (1991). The Russian peasant multiplication algorithm: A generalization. *The Mathematical Gazette*, *75*(472), 169–171.