Now more than ever, society depends on computers. From creating documents to playing video games, computers make these tasks simple. As time has gone by, we have also built more “powerful” computers that can do things that were never possible before. But what is actually behind this “power”? The most obvious answer people will give is that current computers have better processors that allow more to be done in less time. While that is certainly a big part of it, another major reason we have more “power” is mathematics.

Everything a computer does can be boiled down to numbers; most likely binary numbers, but numbers nonetheless. All the processor within a computer does is manipulate these numbers in specific ways, and in most cases those manipulations amount to simple mathematics. A large part of mathematics is creating proofs. Once we have proved a statement, we no longer need to re-derive it when building other proofs or functions. The Pythagorean Theorem is an excellent example of a simple mathematical result that translates directly into a few number manipulations by the processor.

But what if we didn’t have the Pythagorean Theorem? Even without it, we could still figure out the length of the hypotenuse of a right triangle. One possible approach would be to step through every possible length for the hypotenuse and check whether it connects the two sides that form the right angle. While this looks like a ludicrous way for a human to solve the problem, a computer could do this kind of computation relatively quickly. The key point, though, is that, just like you, a computer can solve the problem far faster if it can use the Pythagorean Theorem, which in turn makes the computer more “powerful”. While this example used a result that almost everyone knows, there are many far more obscure proofs that may seem pointless for a human to use but actually make certain mathematical computations faster for a computer.
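To make the contrast concrete, here is a minimal sketch in Python. The function names, the step size, and the tolerance are all illustrative assumptions, not anything prescribed above; the point is only that the brute-force search takes thousands of steps where the theorem takes one.

```python
import math

def hypotenuse_search(a, b, step=0.001, tolerance=1e-6):
    """Brute force: try candidate hypotenuse lengths, growing by
    a small step, until one satisfies c^2 = a^2 + b^2 (roughly)."""
    target = a * a + b * b
    c = 0.0
    while c * c < target - tolerance:
        c += step
    return c

def hypotenuse_formula(a, b):
    """Direct computation using the Pythagorean Theorem."""
    return math.sqrt(a * a + b * b)

print(hypotenuse_search(3, 4))   # roughly 5.0, after thousands of steps
print(hypotenuse_formula(3, 4))  # 5.0, in a single operation
```

Both approaches arrive at the same answer; the theorem simply replaces a long search with one square root.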

One of the most useful areas of mathematics for a computer is linear algebra. Linear algebra spends a great deal of time dealing with the number values inside vectors of various dimensions. The reason this is so useful for computers is that the primary way numbers are stored on a computer is in multi-dimensional arrays, which behave just like vectors. This means that any proof that makes certain vector manipulations easier for us will also make them easier for computers. A good example is how linear algebra handles systems of linear equations with the Gaussian elimination algorithm. Unlike in high school, where the main strategy was figuring out exactly which variables to eliminate in which order, Gaussian elimination is much more straightforward: it treats each equation as a row vector and eliminates variables systematically. While this doesn’t really cut down the time needed to solve the problem, it makes it much easier for programmers to implement linear equation solvers without accidentally doing unnecessary work. The result is a cleaner implementation of linear equations with less likelihood of programmers accidentally creating bugs in the code.
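As a sketch of the idea, the rows below are exactly the “equations as vectors” described above: each row of the augmented matrix holds one equation’s coefficients plus its right-hand side, and the algorithm never has to decide which variable to chase first. The function name and the example system are assumptions made up for illustration.

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by treating each equation as a row vector:
    forward-eliminate to upper-triangular form, then back-substitute."""
    n = len(A)
    # Build the augmented matrix [A | b], one row per equation.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    # Forward elimination with partial pivoting for numerical stability.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for row in range(col + 1, n):
            factor = M[row][col] / M[col][col]
            for k in range(col, n + 1):
                M[row][k] -= factor * M[col][k]
    # Back substitution, from the last row up.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        tail = sum(M[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (M[row][n] - tail) / M[row][row]
    return x

# Example system: 2x + y = 5 and x + 3y = 10, whose solution is x = 1, y = 3.
print(gaussian_elimination([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```

Notice that the procedure is entirely mechanical, which is precisely what makes it easy to implement without the accidental extra work the paragraph above describes.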

The best illustration of processor speed versus mathematics comes from comparing America and the Soviet Union between the 1940s and the 1990s. During World War II, both the Americans and the Soviets began building computers to help them crack codes and perform other important tasks. The big difference between America and the Soviet Union was that America had far greater access to the components needed to build computers, so it could build more of them and make them run faster. The Soviet Union, on the other hand, did not have access to nearly as many resources because of its extreme isolation from the rest of the world [1]. It had to build its computers with its own resources and knowledge, or copy designs from Western models. Computers were therefore a far scarcer resource and had to be used in the best way possible. If the computer at hand didn’t have the speed necessary to run a program, the programmers had to write a better program.

When the Soviet Union fell, many of its mathematicians emigrated to America. Once they arrived, the number of papers produced by Americans working in the same areas as a Soviet immigrant dropped significantly [2]. I believe this is largely because the only way the Soviet Union could compete with America in computational power was by building its programs on a stronger mathematical base, which in turn meant the Soviet Union placed a heavy emphasis on mathematics.

For the longest time, America created more “powerful” computers by increasing the speed of the processor. However, we have now hit a barrier in producing faster processors: instead of a single faster processor, computers now come with multiple processors. While this can still lead to more “powerful” computers, the power of a computer now rests on the programs people write for it and the mathematical proofs those programs use. In conclusion, now more than ever, the power of a computer relies on our continued expansion of mathematics.

Sources:

[1] Wikipedia, “History of computer hardware in Soviet Bloc countries” (http://en.wikipedia.org/wiki/History_of_computer_hardware_in_Soviet_Bloc_countries). The Soviet Union did not have access to the same computers that capitalist countries had; it either had to build its own or create copies of Western models.

[2] K. Doran et al. (http://www3.nd.edu/~kdoran/Doran_Math.pdf). After the fall of the Soviet Union, Soviet immigrant mathematicians produced more mathematics papers than Americans working in the same fields.