
The Diffie-Hellman Key Exchange

Scenario:

Let’s say we have a crowded room of people. Anyone in the room can talk to anyone else, but when they do, everyone else in the room can hear them. If you wanted to exchange secret information with another person, anyone could lean in, listen, and steal your secrets. So maybe you think about using a cipher of some kind to mangle your message so that only its intended recipient could unscramble it and understand its meaning. The obvious problem with this idea is that you would first have to tell that person the key to your cipher, and you would be back to square one: anyone could listen, hear the key, and decrypt the messages for themselves. What if I were to tell you, however, that there is a way for two people in the room to each shout a special message and from there be able to encrypt all of their future messages with a cipher that no eavesdropper could feasibly break? By the end of this post, you will see how this is possible. Some of you may already know where this is going, and for those who don’t, allow me to introduce the magic that is known as the Diffie-Hellman Key Exchange.

Introduction:

What is now known as the Diffie-Hellman Key Exchange was first published by its namesakes, Whitfield Diffie and Martin Hellman, in 1976. Much later, it became known that the British signals intelligence agency, GCHQ, had developed the same scheme as early as 1975. Until that point, the cryptography community agreed that the only way to establish a secure cipher between two communicating parties was to first exchange a cipher key in private. Obviating the need for this initial private meeting was a very big deal, especially given the implications for the Internet, where computers communicate across long physical distances. To return to the example above, the Internet can fairly be described as the largest crowded room of people ever. I am now more than 300 words into this post without tying it to any subject material from our class, however. It’s time to talk about the mathematical techniques that make this all possible.

Modular arithmetic in the context of cryptography:

I think it was really fortunate that we had a lesson in class on the beginnings of modular arithmetic. As a computer scientist, it has opened up a huge realm of blog post topics that I can speak about from a knowledgeable perspective, since so many CS topics rely on modular operations. My favorite layman’s description of modular arithmetic is taking a number of hours and translating them onto a clock. A clock, of course, only has 12 possible positions for the hour hand. So given a measurement of time that is greater than 12, what are we to do? Anyone can see how 10 hours measured on a clock would point the hour hand at the number 10. Likewise, it’s easy to see how 22 hours would wrap around and also point the hour hand at the number 10. Our brains have gotten good at automatically doing the modular arithmetic in this situation. In terms of math, all we are doing is dividing our number by 12 and taking the remainder: 22 / 12 = 1 remainder 10. We write this as 22 mod 12 = 10. Without really thinking about it, we have described a mechanism at the heart of almost all modern ciphers: a function that is easy to compute in one direction but, for all practical purposes, impossible to reverse. Given the number 10, is there any way to tell what number of hours we had originally? It could be 10, it could be 22, it could be 34, or it could be any multiple of 12 with 10 added.
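
As a quick illustration, here is a tiny Python sketch of the clock example (just the % operator; nothing cryptographic yet):

    # Clock arithmetic: hours wrap around modulo 12.
    for hours in (10, 22, 34, 46):
        print(hours, "hours on the clock ->", hours % 12)
    # All four lines print 10. Given only the answer 10, there is
    # no way to tell which of the original inputs produced it.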

Diffie-Hellman Key Exchange example:

So how does the Diffie-Hellman key exchange actually work? Let’s lay the key equations out first: party A computes Sa = g^a mod p, party B computes Sb = g^b mod p, and both arrive at the same shared secret SS = Sb^a mod p = Sa^b mod p.

These are a lot of terms that mean nothing yet, but I will define them as we go. Let’s walk through the steps of the exchange (a runnable sketch follows the list):

      1. The numbers g and p are predefined and known by everyone. For security’s sake and deeper reasons I won’t go into, p should be prime and g should be a primitive root modulo p.
        • For our example, let’s use the common values for this problem: g = 5 and p = 23.
      2. Each party involved chooses a secret number they will keep private.
        • Let’s say that A chooses their number to be a = 6, and B chooses b = 15.
      3. Party A then computes their shared value Sa = g^a mod p.
        • In this example, g^a mod p = 5^6 mod 23 = 8.
      4. Party B then computes their shared value Sb = g^b mod p.
        • In this example, g^b mod p = 5^15 mod 23 = 19.
      5. The parties now shout their shared values Sa and Sb to each other, not caring if anyone hears.
        • A shouts their number 8 to B, and B shouts their number 19 to A.
      6. Now we return to the equations above. Each party calculates the final shared secret SS, using SS = Sb^a mod p for party A, and SS = Sa^b mod p for party B.
        • Party A computes SS = Sb^a mod p = 19^6 mod 23 = 2.
        • Party B computes SS = Sa^b mod p = 8^15 mod 23 = 2.
        • Now both parties have a shared secret SS = 2 that they were able to compute without ever sending it over the wire!
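
For anyone who wants to see the whole exchange run, here is a minimal Python sketch of the steps above (toy numbers only; real deployments use primes hundreds of digits long):

    # Public parameters, known to everyone in the room.
    g, p = 5, 23

    # Private numbers; each party keeps theirs secret.
    a, b = 6, 15

    # Steps 3 and 4: each party computes a value safe to shout publicly.
    Sa = pow(g, a, p)   # 5^6  mod 23 = 8
    Sb = pow(g, b, p)   # 5^15 mod 23 = 19

    # Step 6: each party combines the other's public value with its own secret.
    ss_for_A = pow(Sb, a, p)   # 19^6 mod 23 = 2
    ss_for_B = pow(Sa, b, p)   # 8^15 mod 23 = 2
    assert ss_for_A == ss_for_B == 2   # same secret, never sent over the wire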

These are a lot of numbers to try and make sense of, and the numbers alone don’t make it obvious why the scheme works. My favorite, easier-to-understand illustration of why it works involves colors and mixtures of paint; the diagram comes from professor A.J. Han Vinck at the University of Duisburg-Essen.


Image: A.J. Han Vinck

If we can convince ourselves that raising one number to the power of a second number and then taking the modulus is similar to mixing two colors of paint together, this example makes a lot of sense. We start out with everyone knowing a common paint color; this corresponds to our values of g and p. Both Alice and Bob then choose a secret color: these are a and b from our example above. Each then mixes the color of paint they are going to send over the wire. These are the light orange and blue colors we see traveling over public transport, known in our example as Sa and Sb. On the other side, each party adds their own secret color to the mixture they received from the other. In the end, the exact same ingredients have been added to both parties’ mixtures of paint, so intuitively the end results are the same. As a third party, even if you knew the two colors of paint that were publicly shared, you could never separate out those colors to learn the secret colors of Alice and Bob, and you would never be able to construct the all-important final color.

Once a secret is established between the two parties, they can use it as the key for a wide variety of ciphers, from deriving keys for standard symmetric ciphers to simply using it to mangle a single message (in the style of a one-time pad). Practical uses of Diffie-Hellman place a few restrictions on the numbers g, p, a, and b in order to keep the keys secure. For instance, p must be a very large prime number. With the proper requirements in place, though, you can find this trick employed all over the internet: versions of it are included in almost all “secure” or “encrypted” network connections.
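
As a toy illustration of that last idea, the sketch below uses the shared secret from our example to seed a keystream and XOR it with a message. The SHA-256 derivation here is my own assumption for the sketch, not part of Diffie-Hellman itself, and a real system would use a proper key derivation function and a far larger secret:

    import hashlib

    def keystream(shared_secret, length):
        # Stretch the shared secret into pseudo-random key bytes.
        # (Toy derivation, for illustration only.)
        out = b""
        counter = 0
        while len(out) < length:
            data = str((shared_secret, counter)).encode()
            out += hashlib.sha256(data).digest()
            counter += 1
        return out[:length]

    def xor_cipher(message, shared_secret):
        # XOR is its own inverse, so the same call encrypts and decrypts.
        ks = keystream(shared_secret, len(message))
        return bytes(m ^ k for m, k in zip(message, ks))

    secret = 2  # the shared secret computed in the example above
    scrambled = xor_cipher(b"meet at noon", secret)
    print(xor_cipher(scrambled, secret))  # b'meet at noon'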

Source material: color scheme image by A.J. Han Vinck, University of Duisburg-Essen (SVG version by Flugaal), from A.J. Han Vinck, Introduction to Public Key Cryptography, p. 16.


How I Learned to Build Programs and Love Mathematics

A photo of an old computer. Image: Arkadiusz Sikorski via flickr.

Our society’s use of computers is greater now than ever before. From creating documents to playing video games, computers are to thank for how simple these tasks have become. As time has gone by, we have been able to create more “powerful” computers that can do things we were never able to do before. But what is actually behind this “power”? One of the most obvious answers people will give is that current computers have better processors that allow more to be done in less time. While that is definitely a big reason, another major source of this “power” is mathematics.

Everything a computer does can be boiled down to numbers; binary numbers, in fact, but numbers nonetheless. All the processor within a computer does is manipulate these numbers in specific ways, and in most cases the manipulations are based on simple mathematics. A large part of mathematics is producing proofs. Once we have a proof of a statement, we can rely on it when building other proofs or functions without re-deriving it. The Pythagorean Theorem is an excellent example of a simple mathematical result that translates into just a few number manipulations by the processor.

But what if we didn’t have the Pythagorean Theorem? Even without it, we could still figure out the length of the hypotenuse of a right triangle. One possible approach would be to step through every possible length for the hypotenuse and check whether it matches the right angle produced by the other two sides. While this looks like a ludicrous way for a human to solve the problem, a computer could do this kind of computation relatively quickly. The key point, though, is that, just like you, a computer can solve the problem much more quickly if it can use the Pythagorean Theorem, which in turn makes the computer more “powerful”; a sketch of both approaches follows below. While this example focuses on a result almost everyone knows, there are many far more obscure proofs that may seem pointless for a human to use but make certain computations much faster for a computer.
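
Here is a Python sketch of both approaches; the brute-force search is deliberately naive, just to make the contrast visible:

    import math

    def hypotenuse_search(a, b, step=0.0001):
        # Naive search: grow the candidate hypotenuse until
        # c^2 reaches a^2 + b^2.
        c = 0.0
        while c * c < a * a + b * b:
            c += step
        return c

    def hypotenuse_theorem(a, b):
        # One line, thanks to the Pythagorean Theorem.
        return math.sqrt(a * a + b * b)

    print(hypotenuse_search(3, 4))   # ~5.0, after about 50,000 loop steps
    print(hypotenuse_theorem(3, 4))  # 5.0, in a handful of operations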

One of the most useful areas of mathematics for a computer is linear algebra. Linear algebra spends a large amount of time dealing with the values inside vectors of different dimensions. The reason this is so useful for computers is that the primary way numbers are stored on a computer is in arrays of various dimensions, which work just like vectors. This means that any proof that makes certain vector manipulations easier will also make them easier for computers. A good example is how we deal with systems of linear equations using the Gaussian elimination algorithm. Unlike in high school, where the main method was figuring out exactly which variables to eliminate in which order, Gaussian elimination is much more straightforward: it treats each equation as a row vector and eliminates variables in a fixed, mechanical order. While this doesn’t really cut down the time needed to solve the problem, it makes it much easier for programmers to implement linear equation solving in their code without accidentally doing unnecessary work. This leads to a cleaner implementation with less likelihood of programmers introducing bugs.
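
As a sketch of how mechanical the algorithm is, here is a minimal Python version with partial pivoting (a real program would lean on a library such as NumPy and handle singular systems, which this sketch does not):

    def gaussian_elimination(A, b):
        # Solve Ax = b; A is a list of row lists, b a list of constants.
        # Both are modified in place.
        n = len(A)
        for col in range(n):
            # Partial pivoting: move the largest entry into the pivot row.
            pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[pivot] = A[pivot], A[col]
            b[col], b[pivot] = b[pivot], b[col]
            # Eliminate this variable from every row below.
            for row in range(col + 1, n):
                factor = A[row][col] / A[col][col]
                for k in range(col, n):
                    A[row][k] -= factor * A[col][k]
                b[row] -= factor * b[col]
        # Back substitution, from the last equation upward.
        x = [0.0] * n
        for row in range(n - 1, -1, -1):
            known = sum(A[row][k] * x[k] for k in range(row + 1, n))
            x[row] = (b[row] - known) / A[row][row]
        return x

    # x + y = 3 and 2x - y = 0  ->  x = 1, y = 2
    print(gaussian_elimination([[1.0, 1.0], [2.0, -1.0]], [3.0, 0.0]))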

The best comparison of processor speed versus mathematics in computing comes from the 1940s through the 1990s, comparing America to the Soviet Union. By the 1940s, World War II was underway, and both the Americans and the Soviets started building computers to help them crack codes and do other important tasks. The big difference was that America had much greater access to the components needed to build computers, which meant it could build more of them and run them faster. The Soviet Union, on the other hand, did not have access to nearly as many resources due to its isolation from much of the rest of the world [1]. The Soviets had to build computers with the resources and knowledge they had alone, or copy designs from Western models. Computers were therefore a much scarcer resource and had to be used in the best way possible: if the computer they had didn’t have the speed necessary to run the program they created, they had to write a better program.

When the Soviet Union fell, many of its mathematicians emigrated to America. Once they arrived, the number of papers produced by Americans working in the same areas as Soviet immigrants dropped significantly [2]. I believe this is largely because the only way the Soviet Union could compete with America in computational power was by having a stronger mathematical base in the programs they used, which meant the Soviet Union put a large emphasis on mathematics.

For the longest time, America created more “powerful” computers by increasing the speed of the processor. However, we have now hit a barrier in producing faster processors; instead, computers now ship with multiple processors. While this can lead to more “powerful” computers, the power of a computer now depends heavily on the programs people write for it and the mathematical results those programs use. In conclusion, now more than ever, the power of a computer relies on our expansion of mathematics.

Sources:

[1] http://en.wikipedia.org/wiki/History_of_computer_hardware_in_Soviet_Bloc_countries. The Soviet Union didn’t have access to the same computers that capitalist countries had; it either had to build its own or create copies of Western models.

[2] http://www3.nd.edu/~kdoran/Doran_Math.pdf. After the fall of the Soviet Union, Americans working in the same fields as Soviet immigrant mathematicians published significantly fewer papers.

How Fermat’s theorem contributes to modern computer science

After reading several different sources on math history, I was drawn to Fermat’s story and his theorems about prime numbers. I realized that I am interested in these numbers, and anything related to them, after digging into the computer science major for a few years. Why? I think the primary reason is that prime numbers are widely used in computer science for security. For example, the RSA algorithm, a public-key cryptosystem, uses prime numbers to generate public keys. If you are not familiar with RSA, I will explain a little here. Basically, the RSA algorithm generates a public key based on two large prime numbers, and the prime numbers stay secret. Anybody is able to use the public key to encrypt a message. When I first learned the RSA algorithm in my algorithms class, I felt like it was magic! After taking the computer networking class, I realized that prime numbers were everywhere in my life. For example, when you go to amazon.com to buy some stuff, you need to log in to your account and pay the bill online; behind the scenes, there are prime numbers securing your account and transactions. This interest brought me a new question: how do you generate a big prime number? Another way to think about this question is: how do you determine whether a number is prime?
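
For readers who have not seen it, here is a sketch of textbook RSA in Python with tiny primes, purely to show the role the primes play (nothing like a production implementation, which pads messages and uses primes hundreds of digits long):

    # Two secret primes generate everything else.
    p, q = 61, 53
    n = p * q                 # 3233: published as part of the public key
    phi = (p - 1) * (q - 1)   # 3120: must stay secret, like p and q
    e = 17                    # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)       # 2753: private exponent (Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)  # anyone can encrypt with (n, e)
    print(pow(ciphertext, d, n))     # 65: only the holder of d can decrypt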


FLT and modern computer science. Image by the author.

Because of my curiosity about prime numbers, I enjoyed reading and thinking about FLT. In class, FLT meant Fermat’s Last Theorem, but in this blog post it means Fermat’s Little Theorem. Basically, FLT says that if p is a prime number, then for any integer a, a^p − a is a multiple of p. If a is not divisible by p, this is equivalent to saying that a^(p−1) − 1 is a multiple of p (Wikipedia). How can this theorem help a computer science student? I have solved some programming questions related to prime numbers. For example: generate all prime numbers less than n. If n is 10, my program should return 2, 3, 5, 7 as the result. Another question was to write a primality test, a program that determines whether a number is prime: given an input number, it should return true if the number is prime and false otherwise. I was able to solve such questions using an inefficient algorithm, but if a number was very big, it took more than an hour to get the final result. Obviously, I did not want such a slow algorithm. By bringing FLT into the programming, I can write a much more efficient algorithm to solve those questions.
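
My slow first attempt looked roughly like this trial-division sketch, which is fine for small inputs and hopeless for very large ones:

    def is_prime_trial(n):
        # Trial division: test every candidate divisor up to sqrt(n).
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def primes_below(n):
        return [k for k in range(2, n) if is_prime_trial(k)]

    print(primes_below(10))    # [2, 3, 5, 7]
    print(is_prime_trial(10))  # False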

Computer science majors might wonder how a Fermat primality test works. First, given an integer n, we choose some integers a coprime to n and calculate a^(n−1) mod n. If the result is different from 1, then n is composite. If the result is 1, then n may or may not be prime. For example, let’s use n = 341 and a = 2: we find 2^340 ≡ 1 (mod 341), yet 341 is not prime, because 341 = 11 × 31. We therefore call 341 a pseudoprime to base 2. Even worse, there are composite numbers, called Carmichael numbers, that pass the test for every base coprime to them. This is the primary reason we cannot use FLT directly as a primality test: it can be fooled by pseudoprimes and Carmichael numbers. That means we have to do something extra to make it work for solving programming questions! There are only 21,853 pseudoprimes to base 2 for n from 1 to 25 × 10^9. How does this help us? If 2^(n−1) mod n = 1, then n is either a prime number or one of those 21,853 pseudoprimes. So we can build a list of the pseudoprimes before running our primality test: when we need to check a number, we first check whether it is one of the pseudoprimes in the list, and if not, the Fermat test’s answer can be trusted. FLT is like the root of a binary tree, from which people have branched out left and right children; in our case, one of the children is the Fermat primality test. At this moment, I cannot stop digging deeper into this tree. I want to traverse the whole tree, just like running a breadth-first search in my brain.
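
Here is a minimal Python sketch of the Fermat test (probabilistic: a "composite" answer is certain, while a "probably prime" answer can be fooled, exactly as 341 shows for base 2):

    import random

    def fermat_test(n, rounds=20):
        # Returns False if n is definitely composite, True if n is
        # prime or a Fermat pseudoprime for every base we tried.
        if n < 4:
            return n in (2, 3)
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False  # Fermat's Little Theorem is violated
        return True

    print(pow(2, 340, 341))  # 1, yet 341 = 11 * 31: a base-2 pseudoprime
    print(fermat_test(97))   # True: 97 is prime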

Number theory is a mystery! After reading the articles, I asked myself, “How many unknown theorems are still out there?” It might be like the universe: people have only explored a tiny part of it, and it may hide many more secrets that will take time to reveal. Probably in a few years, scientists will discover many more theorems, just like FLT.

Grace Hopper: a Woman of Many Firsts

Commodore Grace M. Hopper, USNR. Official portrait photograph by James S. Davis. Public domain, via Wikimedia Commons.

Grace Hopper was a woman of many firsts: she was the first woman promoted to the rank of rear admiral in the Navy, she helped develop the first compiler, she was awarded the first Computer Sciences Man of the Year award from the Data Processing Management Association, she was the first woman to receive the National Medal of Technology, and she was the first American and the first woman ever recognized as a Distinguished Fellow of the British Computer Society. Her vision for the future of computing, and her dogged pursuit of that vision in spite of continued opposition, have had an undeniable impact on the pervasive technology we sometimes take for granted today. She operated under the philosophy of “go ahead and do it, you can apologize later,” a philosophy that can easily be taken to heart by anyone working in computer science today.

Early life and education

Grace Brewster Murray was born December 9, 1906. She was a curious child, dismantling seven alarm clocks at the age of seven to see how they worked (her mother limited her to tinkering with only one clock after finding out what she was doing). She graduated Phi Beta Kappa from Vassar in 1928 with a BA in mathematics and physics. After graduating, she accepted a position at Vassar as an assistant professor, a position she held while also earning her master’s in mathematics at Yale in 1930 and her PhD in 1934. She remained a professor at Vassar through 1943, when she took a leave of absence to join the US Naval Reserve. Her enlistment was a hard fight: at 34 she was considered too old, she was underweight for an enlistee, and the Navy believed her position as a civilian mathematics professor was too valuable. After fighting for exceptions, she was accepted into the WAVES (Women Accepted for Volunteer Emergency Service) program.

That Famous “Computer Bug” Was an Actual Bug!: World War II and After

Grace Hopper was sent to train at the Naval Reserve Midshipmen’s School and graduated first in her class in 1944. Because of her strong mathematics background, she was assigned to the Bureau of Ordnance Computation Project at Harvard, where she worked on the Mark series of computers. She was the third person to program the Mark I, and she quickly recognized the potential future of computers. After WWII she resigned her position at Vassar to join Harvard’s Computation Laboratory as a research fellow, working on the Mark II and Mark III computers.
It was during this period that she encountered the infamous computer bug: a moth that had shorted out the Mark II. She is credited with popularizing the term “computer bug” (though not necessarily coining the term itself).


The first computer bug. Image: U.S. Naval Historical Center Online Library Photograph, via Wikimedia Commons.

After the war she remained in the Navy as a reserve officer and joined the Remington Rand company, overseeing programming of their UNIVAC computer. In 1952 her team created the first compiler, the A-0 system, and she fought for years to convince the industry that machines could be “taught” to understand human-readable programming languages. The language she later helped create, FLOW-MATIC, was the precursor to COBOL and the first programming language written using English-like instructions instead of machine code or languages close to machine code, like assembly language. She retired from the Navy in 1966 but was recalled at age 60 to help standardize communication between the different languages being developed and used by the Navy and the wider military. She was forced by her age to retire in 1986; by then she had been made rear admiral and was the oldest serving officer in the service. Saying that she would be “bored stiff” if she stopped working completely, she accepted a position as a senior consultant in the private sector, where she remained until she passed away at the age of 85. She was laid to rest in Arlington National Cemetery. Her (incomplete) list of honors and awards on the Navy webpage stretches for more than a page. Below is a small sampling:

1963 – Fellow, American Association for the Advancement of Science

1972 to 1986 – thirty-seven honorary doctorates from institutions across the country

1973 – Elected to membership in the National Academy of Engineering

1973 – Legion of Merit

1973 – Distinguished Fellow of the British Computer Society

1976 – Distinguished Member Award, Washington D.C. Chapter, ACM [Association for Computing Machinery]

1980 – Meritorious Service Medal

1983 – Institute of Electrical and Electronic Engineers Computer Pioneer Medal

1983 – Association for Computing Machinery Distinguished Service Award

1985 – The Grace Murray Hopper Service Center built at NARDAC [Navy Regional Data Automation Center] San Diego

1986 – Defense Distinguished Service Medal

1990 – National Medal of Technology

Grace Hopper and the impact of the Compiler: Her Contributions to Computing

Her contributions to the world of computer science cannot be overstated. While championing the adoption of compilers that would allow programmers to write software in English, she was constantly told that “things weren’t done that way,” and so they shouldn’t change. She never accepted this as the way things should be, and because of her we write code in expressive languages like Java and SQL instead of trying to speak the machine’s language of 0s and 1s. She simply didn’t accept the common notion of the time: that computers would never be capable of advancing beyond arithmetic operations written in machine language.

In 1952 Grace Hopper began working on A-0, often called the first compiler and a precursor to modern programming languages. A-0 was a collection of subroutines recorded on a tape, each with a call-number identifier. She created a compiled program by writing a series of call numbers, which the computer would locate on the tape. After presenting her work, she was met with the opinion that “computers couldn’t write their own programs.” It took two years before this view changed. Grace Hopper said this was because “…people are allergic to change, you have to go out and sell the idea.”

Between 1955 and 1959 at Remington Rand, Grace Hopper worked on FLOW-MATIC after finding that business people were uncomfortable with the mathematical notation then required for data processing. Her response was to create a programming language that allowed data processing with English keywords. FLOW-MATIC was the first language to use English-like statements and the first to separate the description of data from the operations on it. It was the precursor to COBOL (COmmon Business-Oriented Language), a language still used today (for better or worse) and one of the most widely deployed languages in history.

Conclusion

After reading numerous biographies and articles about the incredible accomplishments of the woman nicknamed “Amazing Grace,” I was struck by Grace Hopper’s remark that her greatest contribution had been “all the young people she had taught.” She was brilliant, passionate, talented, and driven, and she was keenly aware of the need to educate as many people as possible so that innovative ideas could spread like wildfire. There is a reason the annual celebration of women in computing, the largest gathering of its kind to date (2014’s Grace Hopper Celebration saw over 8,000 attendees), is named in her honor. She is an inspiration not only to women in computer science, but to anyone passionate about science and engineering, and to anyone who dares to try new things in spite of opposition and doubt.

References:

http://www.biography.com/people/grace-hopper-21406809

http://www.cs.yale.edu/homes/tap/Files/hopper-story.html

http://www.history.navy.mil/bios/hopper_grace.htm

http://www.thocp.net/biographies/hopper_grace.html

https://www.sdsc.edu/ScienceWomen/hopper.html

http://www.greatwomen.org/women-of-the-hall/search-the-hall/details/2/78-Hopper

http://en.wikipedia.org/wiki/FLOW-MATIC

http://www.computinghistory.org.uk/det/5487/Grace-Hopper-completes-the-A-0-Compiler/