Prime numbers are of immense importance in cryptography, computational number theory, information science and computer science. There are several algorithms to test if a number is prime. Some of them are fast, but no fast algorithm to factorize a number is known.
A primality test is deterministic if, with probability 1, it outputs True when the input is prime and False when the input is composite. Otherwise, the primality test is probabilistic. A probabilistic primality test is often called a pseudoprimality test.
In the following, $n$ is assumed to be a positive integer greater than $1$.
This is one of the simplest deterministic primality tests, naively checking the condition of a number being prime. It uses the fact that a prime number is not divisible by any positive integer other than itself and $1$, so we can turn it into its contrapositive: if a number is divisible by some other positive integer besides itself and $1$, it is composite.
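A sketch of this naive test in Python might look like the following (checking every candidate divisor from $2$ to $n-1$):

```python
def is_prime_naive(n: int) -> bool:
    """Naive trial division: check every candidate divisor from 2 to n - 1."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False  # found a divisor other than 1 and n
    return True
```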
However, it is not necessary to check all numbers from $2$ to $n-1$.
Your friend has written a program to check whether or not a number $N$ is prime. It is very simple and works by checking if $N$ is divisible by each number from $2$ all the way to $N-1$. When you see this, you curse your friend and tell him he's wasting time.

If your friend didn't want to waste time, what is (approximately) the biggest number that he needs to check before he can determine that any number $N$ is prime?
Suppose that $n$ is composite, so $n = ab$ for $1 < a \le b < n$. We claim that at least one of $a, b$ is not greater than $\sqrt{n}$. Indeed, if both are greater than $\sqrt{n}$, then $ab > \sqrt{n} \cdot \sqrt{n} = n$, a contradiction. Thus whenever $n$ is composite, one of its factors is not greater than $\sqrt{n}$, so we can modify the range endpoint above:
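The modified test might be sketched as follows, stopping once the candidate divisor exceeds $\sqrt{n}$:

```python
def is_prime_trial(n: int) -> bool:
    """Trial division, checking divisors only up to the square root of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:  # equivalent to d <= sqrt(n), avoiding floating point
        if n % d == 0:
            return False
        d += 1
    return True
```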
We loop through $2, 3, \ldots, \lfloor \sqrt{n} \rfloor$, so the loop runs $O(\sqrt{n})$ times, multiplied by the time complexity of a division (about as fast as a multiplication, using the Newton-Raphson division algorithm). This gives poor performance for large values of $n$.
Wilson's theorem is an interesting number theoretic result that can be turned into a primality test.
A positive integer $n > 1$ is prime if and only if $(n-1)! \equiv -1 \pmod{n}$.
Thus we can simply compute $(n-1)! \bmod n$ to check whether $n$ is prime.
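A sketch of this test in Python, keeping the running product reduced modulo $n$:

```python
def is_prime_wilson(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n)."""
    if n < 2:
        return False
    f = 1
    for i in range(2, n):
        f = f * i % n  # keep the factorial reduced mod n
    return f == n - 1  # n - 1 is -1 mod n
```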
However, this is absurdly slow; this takes about $n$ multiplications, even slower than the trial division above. (This is comparable to the unmodified trial division, checking up to $n-1$ instead of $\sqrt{n}$.)
This primality test uses the idea of Fermat's little theorem.
Let $p$ be a prime number and $a$ be an integer not divisible by $p$. Then $a^{p-1} - 1$ is always divisible by $p$, or $a^{p-1} \equiv 1 \pmod{p}$.
The idea of the Fermat primality test is to use the contrapositive: if for some $a$ not divisible by $n$ we have $a^{n-1} \not\equiv 1 \pmod{n}$, then $n$ is definitely composite.
However, it's not true that if $n$ is composite, then any $a$ works! For example, consider $n = 341 = 11 \cdot 31$. We can compute that $2^{340} \equiv 1 \pmod{341}$, so if we pick $a = 2$, we cannot conclude that $341$ is composite, even though $3^{340} \equiv 56 \not\equiv 1 \pmod{341}$. So we need to choose an $a$ that makes the test fail, marking $n$ as composite.
But how do we choose such $a$? The solution is simple: make it probabilistic; that is, we pick $a$ randomly. To give high confidence, we repeat the test several times, say $k$ times.
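A sketch of this probabilistic test, repeating $k$ times with random bases:

```python
import random

def fermat_test(n: int, k: int = 20) -> bool:
    """Fermat primality test: declare n probably prime if
    a^(n-1) ≡ 1 (mod n) for k randomly chosen bases a."""
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randint(2, n - 2)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness: n is certainly composite
    return True  # probably prime (or a Carmichael number!)
```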
If $a^{n-1} \equiv 1 \pmod{n}$ but $n$ is composite, then $a$ is called a Fermat liar for $n$, and $n$ is called a Fermat pseudoprime to base $a$. Otherwise, $a$ is called a Fermat witness for $n$.
To make things worse, there are numbers which are Fermat pseudoprimes to all bases! That is, $a^{n-1} \equiv 1 \pmod{n}$ for all $a$ relatively prime to $n$, but $n$ itself is composite. These numbers are called Carmichael numbers, the first of which is $561 = 3 \cdot 11 \cdot 17$.
The good thing is that Carmichael numbers are pretty rare. The bad thing is that there are infinitely many of them, so it's not feasible to simply store them all, which makes this test rarely used on its own. Another good thing is that if $n$ is not a Carmichael number, then at least half of the integers $a$ in the range $2 \le a \le n-2$ are Fermat witnesses, so each passed iteration halves the probability that a composite $n$ slips through. The probability that a composite non-Carmichael number is mistakenly called prime after $k$ iterations is at most $2^{-k}$.
The running time of the above algorithm is $O(k)$ modular exponentiations for $k$ tests, each taking $O(\log n)$ multiplications (by repeated squaring), multiplied by the time for one multiplication (schoolbook multiplication uses $O((\log n)^2)$ time). Although this test has flaws (being probabilistic, and the Carmichael numbers), it runs in polylogarithmic time, which is fast.
The Miller-Rabin primality test is, in some sense, a more advanced form of the Fermat primality test. Here is the algorithm first, with the explanation to follow.
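A Python sketch of the algorithm (the details, in particular the `for`-`else` construct, are explained below):

```python
import random

def miller_rabin(n: int, k: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test with k random bases."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(k):
        a = random.randint(2, n - 2)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue  # this base is not a witness
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is certainly composite
    return True  # probably prime
```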
To understand the algorithm, we need a couple of concepts.
First, suppose $p$ is prime, and consider the modular equation $x^2 \equiv 1 \pmod{p}$. What are the solutions to this equation? We know that $x^2 - 1 \equiv 0 \pmod{p}$, or $(x-1)(x+1) \equiv 0 \pmod{p}$. Since $p$ is prime, $p$ has to divide either $x-1$ or $x+1$; this leads to $x \equiv \pm 1 \pmod{p}$.
Now, suppose $p$ is an odd prime and $a$ is an integer relatively prime to $p$. By Fermat's little theorem, we know that $a^{p-1} \equiv 1 \pmod{p}$. The idea is that we repeatedly divide the exponent by two: as long as the exponent is even, say $a^{2m} \equiv 1 \pmod{p}$ for some $m$, we can invoke the above result, giving $a^m \equiv -1 \pmod{p}$ or $a^m \equiv 1 \pmod{p}$. In the latter case, we can invoke the result again, until either we get the first case or the exponent is odd.
In other words, we have the following theorem:
Let $p$ be an odd prime, and let $p - 1 = 2^s d$, where $d$ is an odd integer and $s$ is a positive integer. Also let $a$ be a positive integer coprime to $p$. Then at least one of the following must hold:
- Some of $a^d, a^{2d}, a^{4d}, \ldots, a^{2^{s-1}d}$ is congruent to $-1 \pmod{p}$.
- $a^d$ is congruent to $1 \pmod{p}$.
The Miller-Rabin primality test uses the contrapositive of this theorem. That is, if for some $a$ neither of the above holds, then $n$ is clearly not prime. This explains the bulk of the algorithm:
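The heart of the test, for a single base $a$, can be sketched as a small helper (assuming the caller has already written $n - 1 = 2^s d$ with $d$ odd):

```python
def is_witness(a: int, n: int, s: int, d: int) -> bool:
    """Return True if base a proves that n is composite.
    Assumes n is odd, n > 3, and n - 1 == 2**s * d with d odd."""
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False  # one of the theorem's two conditions holds
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            break  # first condition holds: some a^(2^r d) ≡ -1 (mod n)
    else:
        return True  # neither condition holds: n is certainly composite
    return False
```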
Here, we compute $x = a^d \bmod n$. (In the algorithm, we name the integer $n$ instead of $p$, as we don't know whether it's a prime yet.) If $x \equiv 1 \pmod{n}$, then the second point in the theorem is true. Otherwise, we test whether $x \equiv -1 \pmod{n}$, which means the first point is true; if not, we square $x$, giving $a^{2d}$. We repeat this, checking whether it is congruent to $-1$; if it isn't, we square it to $a^{4d}$, and so on up to $a^{2^{s-1}d}$. Since we know $a^{2^s d} = a^{n-1} \equiv 1 \pmod{n}$ if $n$ is prime, there is no need to check one more time. If we ever encounter the `if x == n-1: break` line, then the first point is true; otherwise, neither point is true and thus $n$ is certainly composite. Note that we're using Python's `for`-`else` construct here: if the `for` loop exits because of a `break`, the `else` clause is not run; otherwise it is run. So when the `break` is not encountered, we know that neither point is true, so $n$ is certainly composite, and we go to the `else` clause and immediately return False.
Now, the problem is finding the value of $a$. How do we select it? There is no good way to do so with certainty, so we simply select $a$ at random and repeat the test several, say $k$, times, as in the Fermat primality test. Just as with the Fermat primality test, there is corresponding terminology: if for a composite $n$, $a$ fails to detect the compositeness of $n$, then $a$ is called a strong liar for $n$ and $n$ is a strong pseudoprime to base $a$, while if $a$ manages to detect it, it is called a witness for $n$.
For any odd composite $n > 9$, at least $3/4$ of the integers $a$ in the range $2 \le a \le n-2$ are witnesses for $n$. This is better than the Fermat primality test: that test has Carmichael numbers, for which none of the integers are witnesses, and even if $n$ is not a Carmichael number, only half of the integers are guaranteed to be witnesses. For the Miller-Rabin primality test, the probability that a composite $n$ mistakenly passes as a prime after $k$ iterations is at most $4^{-k}$.
However, it's still a flaw that primality is not guaranteed. If we have an upper bound on $n$, a solution is simply to fix the values of $a$. For example, by simply testing $a \in \{2, 3, 5, 7, 11, 13, 17\}$, the test is guaranteed correct as long as $n < 341550071728321 \approx 3.4 \times 10^{14}$, so if we can guarantee that $n$ is lower than this, checking those seven values of $a$ is enough. There are many such deterministic variants.
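A sketch of such a deterministic variant, assuming the seven-base bound quoted above:

```python
def is_prime_det(n: int) -> bool:
    """Deterministic Miller-Rabin for n < 341550071728321,
    using the fixed bases 2, 3, 5, 7, 11, 13, 17."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for a in (2, 3, 5, 7, 11, 13, 17):
        if a >= n:
            continue  # for very small n the remaining bases are irrelevant
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # base a witnesses compositeness
    return True
```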
The running time of this algorithm is composed of two parts: preparation, when we compute $n - 1 = 2^s d$, and testing, when we pick random values of $a$. The preparation is simply $O(\log n)$ divisions by two. The testing is $O(k)$ for $k$ trials, multiplied by $O(\log n)$ multiplications for the modular exponentiation, multiplied by the time for one multiplication ($O((\log n)^2)$ with schoolbook multiplication); this is exactly the same running time as the Fermat primality test.
So far, the primality tests discussed are either prohibitively slow (polynomial in $n$ instead of $\log n$, the number of digits of $n$) or only probabilistic (not guaranteed to be correct). This made people wonder whether the problem PRIMES, determining the primality of a number, is actually solvable "quickly" enough (in time polynomial in the number of digits) or not; that is, whether it is in the complexity class P or not.
In 1976, Gary Miller (who, together with Michael Rabin, gives his name to the Miller-Rabin primality test above) described the "Miller test," a deterministic variant (actually the original version) of the Miller-Rabin primality test. This test runs in polynomial time, but its correctness depends on the so-far unsettled generalized Riemann hypothesis, and thus the test is unsatisfactory.
However, in 2002, Manindra Agrawal, Neeraj Kayal, and Nitin Saxena at the Indian Institute of Technology Kanpur settled this problem affirmatively by constructing a deterministic test that runs in polynomial time and doesn't depend on any unproven hypotheses. Their test is called the AKS test, given in a paper titled simply PRIMES is in P.
The idea of the algorithm relies on the simple fact that $(x + a)^n \equiv x^n + a \pmod{n}$ for all $a$ coprime to $n$, if and only if $n$ is prime. Note that the two sides are polynomials in $x$; that is, each coefficient must be compared. The proof follows from Fermat's little theorem as used in the Fermat primality test, together with the fact that $\binom{n}{k} \equiv 0 \pmod{n}$ for all $0 < k < n$ if and only if $n$ is prime.
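This basic fact can be checked coefficient by coefficient; a sketch (for illustration only, since it is far too slow to be a practical test):

```python
from math import comb

def poly_fermat(n: int, a: int) -> bool:
    """Check (x + a)^n ≡ x^n + a (mod n) coefficient by coefficient.
    For gcd(a, n) = 1, this holds iff n is prime. The coefficient of
    x^k on the left is C(n, k) * a^(n-k); on the right, the middle
    coefficients are all 0 and the constant term is a."""
    if n < 2:
        return False
    for k in range(1, n):  # middle coefficients must vanish mod n
        if comb(n, k) * pow(a, n - k, n) % n != 0:
            return False
    return pow(a, n, n) == a % n  # constant term: a^n must equal a
```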
The above is a primality test by itself, but checking all the coefficients takes exponential time in the number of digits, so the paper uses a similar fact: $(x + a)^n \equiv x^n + a \pmod{x^r - 1,\ n}$, which means $(x+a)^n - (x^n + a) = (x^r - 1) f(x) + n \, g(x)$ for some polynomials $f, g$. If $r$ is small (polynomial in $\log n$), this fact can be checked in polynomial time. This fact is still satisfied by all primes, but some composites also pass for certain values of $a$ and $r$. The paper thus proves that there exist a small enough $r$ and a small enough set of values $a$, such that if the relation is satisfied for all those $a$, then $n$ cannot be composite, proving that it is prime.
Currently, the running time of this algorithm is $\widetilde{O}((\log n)^6)$ (that is, $O((\log n)^6 \cdot (\log \log n)^c)$ for some constant $c$). The paper originally had $12$ in the exponent, but this was quickly reduced by the work of various authors. The test is still slower than the Miller-Rabin primality test, as well as more complex, which is why the Miller-Rabin test is more widely used at the moment for numbers of practical size, but theoretically this algorithm is of great interest for being the first to prove that primality is not a hard problem after all.
You've stumbled onto a significant vulnerability in a commonly used cryptographic library. It turns out that the random number generator it uses frequently produces the same primes when it is generating keys.
Exploit this knowledge to factor the (hexadecimal) keys below, and enter your answer as the last six digits of the largest factor you find (in decimal).
Key 1: 1c7bb1ae67670f7e6769b515c174414278e16c27e95b43a789099a1c7d55c717b2f0a0442a7d49503ee09552588ed9bb6eda4af738a02fb31576d78ff72b2499b347e49fef1028182f158182a0ba504902996ea161311fe62b86e6ccb02a9307d932f7fa94cde410619927677f94c571ea39c7f4105fae00415dd7d Key 2: 2710e45014ed7d2550aac9887cc18b6858b978c2409e86f80bad4b59ebcbd90ed18790fc56f53ffabc0e4a021da2e906072404a8b3c5555f64f279a21ebb60655e4d61f4a18be9ad389d8ff05b994bb4c194d8803537ac6cd9f708e0dd12d1857554e41c9cbef98f61c5751b796e5b37d338f5d9b3ec3202b37a32f
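The intended attack can be sketched as follows: if two RSA moduli share a prime factor, a single gcd computation reveals it. The demo below uses small stand-in moduli sharing the prime 101; applying `shared_factor` (a hypothetical helper name) to the two hex keys above should likewise expose their common prime, assuming the keys really do share one.

```python
from math import gcd

def shared_factor(key1_hex: str, key2_hex: str) -> int:
    """If two moduli share a prime factor, gcd finds it instantly,
    with no need to factor either modulus from scratch."""
    return gcd(int(key1_hex, 16), int(key2_hex, 16))

# Demo with small stand-in moduli sharing the prime 101:
n1 = 101 * 103  # 10403
n2 = 101 * 107  # 10807
p = shared_factor(format(n1, "x"), format(n2, "x"))
print(p)        # 101, the shared prime
print(n1 // p)  # 103, the other factor of the first modulus
```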