P versus NP
\(P\ vs.\ NP\), the question of whether \(P = NP\) or \(P \neq NP\), is one of the most famous unsolved problems in computer science. It is one of the seven Millennium Prize Problems, whose solution carries a $1,000,000 prize awarded by the Clay Mathematics Institute. Although \(P\) and \(NP\) are just two classes within the broader spectrum of complexity classes studied in complexity theory, the majority of common problems computed by computers fall into one of these two categories.
The question is whether a computer that can quickly check the validity of a solution to a hard problem can also find that solution quickly. Problems in \(P\), or polynomial time, can be both solved and verified quickly by computers; problems in \(NP\), or non-deterministic polynomial time, are fast to verify, but no fast way to solve them is known. \(P = NP\) would mean that problems in \(NP\) can be computed as quickly as problems in \(P\): any problem whose solution can be verified quickly could also be solved quickly.
Formally speaking, problems in \(P\) are those that can be solved in polynomial time by an abstract computational model known as a deterministic Turing machine; since a polynomial-time computation can use at most a polynomial amount of memory, these problems also lie in polynomial space, \(PSPACE\). Problems in \(NP\) are those that can be solved in polynomial time by a non-deterministic Turing machine, and they likewise lie in non-deterministic polynomial space, \(NPSPACE\).
The interest in settling this problem lies in the implications that would follow if \(P = NP\); the heaviest being that cryptographic algorithms, the algorithms that secure confidential data, would be easy to break, as their security relies on the presumed computational difficulty of certain problems in \(NP\). If solutions to the underlying problems could be found easily, data that has been encrypted for security, from the transactions of the entire modern worldwide e-commerce infrastructure to governmental documents hosted online, would be exposed.
P Algorithms
The vast majority of computer science theory deals with improving the speed and memory taken to compute algorithms, as faster algorithms leave time for more computations, and smaller memory footprints may even allow parallel computing. The time it takes a computer to solve a problem is called its time complexity, commonly expressed in Big O notation, while the space taken is called its space complexity. Both are usually measured as functions of the size of the input, that is, the number of elements the algorithm has to manipulate.
Suppose a teacher is trying to find the tallest student in the class. The only way for the teacher to find this student is to look at every single one of them, keeping track of the name of the tallest student seen so far. For this reason, finding the tallest student in a class of \(n\) students is said to take time proportional to \(n\), the number of students. Now, if the students were not in the classroom but out at recess, and the teacher called in two of them at a time, compared their heights, noted the winner, and repeated this for every possible pair of students, this pairwise comparison would take time proportional to \(n^2\) while revealing the same information needed to compute the solution.
Now, suppose the teacher wants to record how much memory space her algorithm takes. In the linear-time algorithm, which takes time proportional to \(n\), she can write down the name of each student who is taller than all of the previous ones. In the worst case, she'd be writing down \(n\) names. However, she realizes that she doesn't need to keep track of every student who was taller than their predecessors; she only needs the tallest student seen so far, so she can erase the previous name each time she writes a new one. This reduces the space her algorithm takes to a single slot in memory, or space proportional to \(1\).
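As a short illustration, here is a Python sketch of the teacher's two approaches; the class data and function names are made up for the example. The first function scans the students once and remembers only the current tallest, taking time proportional to \(n\) and a single slot of memory; the second compares every pair of students, taking time proportional to \(n^2\).

def find_tallest_linear(heights):
    """One pass over the class: time proportional to n, one slot of memory."""
    tallest_name, tallest_height = None, float("-inf")
    for name, height in heights.items():
        if height > tallest_height:  # remember only the current tallest
            tallest_name, tallest_height = name, height
    return tallest_name

def find_tallest_pairwise(heights):
    """Compare every pair of students: roughly n^2 comparisons for the same answer."""
    names = list(heights)
    beaten = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            beaten.add(a if heights[a] < heights[b] else b)
    return next(name for name in names if name not in beaten)

students = {"Ana": 152, "Ben": 160, "Cai": 148}  # hypothetical class
print(find_tallest_linear(students))    # Ben
print(find_tallest_pairwise(students))  # Ben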
Algorithms whose time and space grow polynomially in the size of the input \(n\), such as \(n^1\), \(n^2\), or even \(n^{99}\), where \(n\) is raised to a constant power, are said to be in the class \(P\), for polynomial time.
NP Algorithms
Unlike problems in \(P\), very complex problems take computers an extremely long time, and a large amount of space, to solve; time that grows exponentially as the number of elements in the input increases. Exponential time means a constant raised to the \(n^{th}\) power, such as \(2^n\).
Although the range of running times within \(P\) already seems quite large, the time taken by exponential-time algorithms is of an entirely different magnitude. Take, for instance, a linear-time algorithm that takes 1 second per element to solve a problem containing 100 elements (\(n = 100\)). This problem will take 100 seconds to solve. That is polynomially faster than an algorithm that takes time proportional to \(n^2\), which would need almost three hours to process 100 elements. However, an algorithm whose execution time is proportional to \(2^n\) would take more than 300 quintillion years! That discrepancy only gets larger as \(n\) grows indefinitely [1].
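To make the gap concrete, the back-of-the-envelope Python sketch below counts the steps each kind of algorithm would perform for \(n = 100\), assuming (as above) one second per step; the numbers are illustrative, not benchmarks.

n = 100
seconds_per_year = 60 * 60 * 24 * 365

for label, steps in [("n", n), ("n^2", n ** 2), ("2^n", 2 ** n)]:
    seconds = steps  # one second per step, as in the example above
    if seconds < seconds_per_year:
        print(f"{label:>4}: {seconds:,} seconds")
    else:
        print(f"{label:>4}: roughly {seconds / seconds_per_year:.1e} years")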
Problems categorized as \(NP\), which stands for non-deterministic polynomial time, are problems for which no polynomial-time algorithm is known, yet the validity of a proposed solution can be verified in polynomial time. One can think of problems in \(NP\) as riddles: they are very difficult to solve, yet once the answer is given, it seems rather obvious.
Prime Factorization
One of the most famous problems in \(NP\) is prime factorization: finding the prime numbers that multiply together to form a larger number. Finding these primes is usually very time consuming, since no polynomial-time algorithm is known. The most common way to find the prime factors of a number involves trial and error, dividing the number into smaller and smaller chunks until only primes remain.
As noted, no polynomial-time algorithm is known for factoring an \(n\)-bit number. However, given a set of prime numbers, testing whether they multiply together to give a larger number takes only a polynomial number of steps. Therefore, prime factorization is in \(NP\).
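A minimal Python sketch of this contrast (the numbers and function names are just for illustration): trial division searches for factors, which can take on the order of \(2^{b/2}\) steps for a \(b\)-bit number, while checking a claimed factorization only requires multiplying the factors back together and testing each for primality.

def is_prime(p):
    """Simple primality test; a real verifier would use a polynomial-time test
    such as Miller-Rabin, but trial division is enough for a small example."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def trial_division(n):
    """Find the prime factors of n by trial and error: slow, since the loop can
    run on the order of sqrt(n) times, which is exponential in the bits of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify_factorization(n, factors):
    """Check a claimed answer: every factor is prime and their product is n."""
    product = 1
    for p in factors:
        if not is_prime(p):
            return False
        product *= p
    return product == n

print(trial_division(8051))                  # [83, 97]
print(verify_factorization(8051, [83, 97]))  # True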
Much of computer science aims at reducing the time and space complexity of algorithms as input sizes grow toward infinity. Although the range of running times within \(P\) already seems quite large, an algorithm in \(NP\) is generally beyond what can be expected of a computer. For this reason, distinguishing between \(P\) and \(NP\), and reducing the complexity of an \(NP\) problem to something akin to a problem in \(P\), would save computers, and the humans performing those calculations, a tremendous amount of time to be used for other computations.
An important note is that \(P\) problems are generally considered easy to solve, while \(NP\) problems are considered hard; however, \(P\) problems can have very large constants or exponents, making them practically intractable for modern computers. Conversely, some \(NP\) problems (such as the knapsack problem) can be solved reasonably quickly in practice, even though no polynomial-time algorithm is known for them in general.
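For instance, the classic dynamic-programming approach to the 0/1 knapsack problem, sketched below in Python, runs in time proportional to the number of items times the capacity. This is "pseudo-polynomial": quick whenever the capacity is a modest number, even though the capacity can be exponential in the number of bits needed to write it down, so the general problem remains hard. The item values here are made up for the example.

def knapsack(values, weights, capacity):
    """Best total value achievable without exceeding capacity (0/1 knapsack).
    Runs in time proportional to len(values) * capacity: pseudo-polynomial."""
    best = [0] * (capacity + 1)  # best[c] = best value using capacity c
    for value, weight in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# hypothetical items as (value, weight) pairs and a knapsack of capacity 50
print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220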
Additionally, since problems in \(NP\) are defined as all problems that can be solved by a non-deterministic Turing machine, \(P\) is a subset of the complexity class \(NP\): the machinery for solving \(NP\) problems can also solve problems in \(P\). The central question of whether \(P = NP\), then, asks whether the converse is true: whether problems in \(NP\) can be solved with the machinery used to solve problems in \(P\).
For further examples of \(NP\) problems, see the travelling salesman problem, the clique problem, the SAT and 3-SAT problems, or any of the graph coloring problems.
Reductions and Problems in NP-Complete
In the effort to answer the \(P\ \text{vs}\ NP\) question, computer scientists have found a subset of \(NP\) problems that are at least as difficult to solve as any other \(NP\) problem, meaning that an efficient solution to one of these problems could be used to solve every \(NP\) problem. This set of problems is known as the NP-Complete problems. The majority of research on the question \(P = NP\) deals with \(NP\text{-Complete}\) problems.
An NP-Complete problem has two basic properties:
1) It is in NP.
2) Every problem in NP is reducible to it in polynomial time.
Reductions are at the core of the \(P\ \text{vs}\ NP\) question, as they help generalize solutions from one problem to an entire set of problems. A reduction is an algorithm for transforming one problem into another: if problem A is reduced to problem B, and a solution to problem B is known, that solution can be used as a subroutine to solve problem A efficiently.
Reduction Example
Bob just moved into town and is trying to find the closest shoe store. Since he has yet to purchase a phone or Internet connection, he decides to knock on Mary’s door in order to get some directions.
After introductions, and Bob stating his inquiry, Mary asks whether he passed the gas station on his way into town. Since Bob did see the gas station, Mary tells him to go back to the gas station, head one block north, and find the shoe store on the northeast corner.
Because Mary knew that Bob could easily get to the gas station, Mary reduced the problem of finding the shoe store to one Bob already knew how to solve, skipping the instructions needed to reach the gas station. Similarly, if an \(NP\text{-Complete}\) problem can be reduced in polynomial time and space to a problem in \(P\), then \(P = NP\): the bulk of the work is done in the reduction, which takes polynomial time, and the resulting problem in \(P\) is then solved in polynomial time as well, so the original \(NP\) problem takes a combined total of polynomial time.
By reducing complex problems to well-known, better-understood problems, generally ones in \(NP\text{-Complete}\), a solution to the \(NP\text{-Complete}\) problem yields a solution to the reduced problem as well, with similar mechanisms and similar time complexity. For this reason, if a polynomial-time solution to an \(NP\text{-Complete}\) problem is found, it can be adapted to solve all the others.
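As a concrete instance of the pattern, here is a textbook reduction (not one discussed above) sketched in Python: the Independent Set problem reduces to the Clique problem by taking the complement of the graph, so any Clique solver, here a brute-force stand-in, immediately solves Independent Set. The graph and names are made up for the example.

from itertools import combinations

def complement(vertices, edges):
    """Edges of the complement graph: pairs NOT connected in the original."""
    edge_set = {frozenset(e) for e in edges}
    return {frozenset(p) for p in combinations(vertices, 2)} - edge_set

def has_clique(vertices, edges, k):
    """Brute-force Clique solver (exponential; stands in for any solver)."""
    edge_set = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in edge_set for p in combinations(group, 2))
               for group in combinations(vertices, k))

def has_independent_set(vertices, edges, k):
    """Polynomial-time reduction: an independent set of size k in G is exactly
    a clique of size k in the complement of G."""
    return has_clique(vertices, complement(vertices, edges), k)

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d")]  # a small path graph
print(has_independent_set(V, E, 2))       # True, e.g. {a, c}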
To build this toolbox, researchers established the very first \(NP\text{-Complete}\) problem, known as Circuit-SAT, where the goal is to decide whether there is an assignment of inputs that makes a given Boolean circuit output a 1. Circuit-SAT was then reduced to the SAT problem, SAT was reduced to the 3-SAT problem, and the chain of reductions grew, categorizing as many problems as possible as \(NP\text{-Complete}\). These problems range from graph coloring, to airline scheduling, to bin packing, to protein folding, to auction pricing, to VLSI design, to minimizing soap films, to winning at Super Mario Bros [2]. Building the list of NP-Complete problems is like building a toolbox for short-circuiting the solution of problems in \(NP\), and this has been the primary way computer scientists and mathematicians have attempted to answer whether problems in \(NP\) can be solved as fast as problems in \(P\).
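The SAT problem itself illustrates why these problems sit in \(NP\): checking a proposed assignment against a formula is fast, while the obvious way to find one tries all \(2^n\) assignments. The small Python sketch below uses a made-up formula encoded as clauses of literals.

from itertools import product

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3);
# positive numbers stand for a variable, negative numbers for its negation
formula = [[1, -2], [2, 3], [-1, -3]]

def verify(formula, assignment):
    """Polynomial-time check: every clause contains at least one true literal."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in formula)

def brute_force_sat(formula, num_vars):
    """Exponential search: try all 2^n assignments and verify each one."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None

print(brute_force_sat(formula, 3))  # e.g. {1: False, 2: False, 3: True}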
Status of Research
In a 2002 poll, 61 mathematicians and computer scientists said that \(P\) probably does not equal \(NP\), whereas only 9 said otherwise, some of them, as they later told reporters, mainly to contradict the popular sentiment [1]. Scott Aaronson, a renowned theoretical computer scientist, argues that the obvious routes to proving \(P = NP\), namely finding a single polynomial-time link between any of the tens of thousands of problems known to be in \(P\) and any of the tens of thousands of problems known to be \(NP\text{-Complete}\), have been explored to no avail; whereas, like any other successful scientific hypothesis, the \(P \neq NP\) hypothesis has passed several tests that it had no good reason to pass were it false [2].
The \(P\ vs.\ NP\) problem is extremely important for deepening our understanding of computational complexity. Much of RSA cryptography, which is commonly used to secure Internet transactions, was developed on the assumption that prime factorization is hard: it lies in \(NP\), and finding a solution by brute force would take attackers an impractically long time. However, new approaches in quantum computing can, in principle, factor numbers extremely efficiently, in polynomial time.
Peter Shor, a member of the Computer Science and Artificial Intelligence Laboratory's Theory of Computation Group (TOC) at MIT, devised a quantum factoring algorithm, and recent implementations run it on computers whose quantum bits are atoms held in an ion trap. These computers use laser pulses to carry out the algorithm on each atom, correctly factoring large numbers. The system is designed so that adding more atoms and lasers improves the efficiency of the machine, allowing it to factor increasingly large numbers [3]. Quantum computers are at the forefront of efforts to solve problems in \(NP\) as efficiently as problems in \(P\), and some scientists believe that in the near future such computations will become routine.
References
- Hardesty, L. Explained: P vs. NP. Retrieved June 2016, from http://news.mit.edu/2009/explainer-pnp
- Aaronson, S. The Scientific Case for P not equal to NP. Retrieved from http://www.scottaaronson.com/blog/?p=1720
- Chu, J. The beginning of the end for encryption schemes?. Retrieved March 2016, from http://news.mit.edu/2016/quantum-computer-end-encryption-schemes-0303