P versus NP, or whether P = NP, is one of the most famous computer science problems that has not yet been solved. It is an open problem, and one of the seven Millennium Prize Problems, whose solution comes with a $1,000,000 prize awarded by the Clay Mathematics Institute. Although P and NP are just two classes of the entire spectrum of complexity classes within the field of complexity theory, the majority of common problems computed by computers fall under one of these two categories.
The question is whether a computer that quickly checks the validity of a solution to a hard problem can also find that solution quickly. Solutions to P, or polynomial-time, problems can be both found and verified quickly by computers, while solutions to NP, or non-deterministic polynomial-time, problems are fast to verify yet extremely time-consuming to find. P = NP would therefore imply that problems in NP can be computed just as fast as problems in P: problems whose solutions can be verified quickly could also be solved quickly.
Formally speaking, problems in P can be solved using an abstract computational model known as the deterministic Turing machine, and usually take a polynomial amount of space, known as polynomial space (PSPACE); whereas problems in NP can be solved using non-deterministic Turing machines, and lie in the complexity class called non-deterministic polynomial space (NPSPACE).
The interest in settling this problem lies in the implications that would arise if P = NP; the heaviest one being that cryptographic algorithms, the algorithms that secure confidential data, would be easy to break, as their security relies on the computational complexity of problems in NP. If solutions to said cryptographic algorithms could be easily found, many documents that have been encrypted for security purposes would be exposed, compromising the entire modern worldwide e-commerce infrastructure as well as governmental documents hosted online.
The vast majority of computer science theory deals with improving the speed and memory taken to compute algorithms, as faster algorithms leave extra time for more computations, and a smaller memory footprint may even allow parallel computing. The time it takes a computer to run an algorithm is called its time complexity, commonly expressed in Big O notation, while the space taken is called its space complexity. Both are usually measured as functions of the size of the input, the number of elements the algorithm has to manipulate.
Suppose a teacher is trying to find the tallest student in the class. The only way for the teacher to find this student is to look at every single one of them, keeping track of the name of the tallest student seen so far. For this reason, finding the tallest student in a class of n students is said to take time proportional to n, the number of students. Now, if the students were not all in the classroom but instead at recess, and the teacher called in two of them at a time, compared their heights, found the winner, then repeated with a different pair of students, this pairwise comparison would still take time proportional to n, revealing the information necessary to compute the solution.
Now, suppose the teacher wants to record how much memory her algorithm takes. In the linear-time algorithm, which takes time proportional to n, she can write down the name of each student who is taller than all of their predecessors. In the worst case, she would be writing down n names. However, she realizes that she does not need to keep every name on that list: she can keep track of only the tallest student overall, erasing the names of previous students she wrote down. This reduces the space her algorithm takes to a single slot in memory, or constant space, O(1).
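The teacher's strategy is exactly a linear scan with one slot of extra memory; a minimal sketch in Python, with made-up names and heights for illustration:

```python
def tallest_student(students):
    """Scan the class once, remembering only the current tallest.

    Runs in O(n) time (one look at each student) and O(1) extra
    space (a single slot for the best candidate seen so far).
    """
    tallest_name, tallest_height = None, float("-inf")
    for name, height in students:
        if height > tallest_height:
            tallest_name, tallest_height = name, height
    return tallest_name

# Hypothetical class roster for the example.
students = [("Ana", 150), ("Ben", 172), ("Cleo", 165)]
print(tallest_student(students))  # -> Ben
```

Replacing the running maximum instead of appending to a list is precisely the space optimization the teacher discovered.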
Algorithms whose time and space grow polynomially with the input size n, such as n, n^2, or even n^100, where n is raised to a constant power, are said to be in the class “P”, for polynomial.
Very complex algorithms, unlike algorithms in P, are algorithms that take computers an extremely long time and a great deal of space to solve; time that grows exponentially as the number of elements in the input increases. Exponential time is described as a constant raised to the nth power, such as 2^n.
Although the range within problems in P already seems quite large, the time taken by exponential algorithms is of an entirely different magnitude. Take, for instance, a linear-time algorithm that takes 1 second per element to solve a problem containing 100 elements (n = 100). This problem will take 100 seconds to solve. It is polynomially faster than an algorithm that takes n^2 time to run, as time proportional to n^2 will take almost three hours to process 100 elements. However, an algorithm whose execution time is proportional to 2^n will take on the order of 10^22 years! That discrepancy gets much larger as n grows indefinitely.
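A quick back-of-the-envelope calculation makes the gap concrete; this sketch assumes the article's rate of 1 second per step:

```python
# Rough running times for n = 100 elements at 1 second per step.
n = 100
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

linear_seconds = n                           # O(n) steps
quadratic_hours = n**2 / 3600                # O(n^2) steps, in hours
exponential_years = 2**n / SECONDS_PER_YEAR  # O(2^n) steps, in years

print(linear_seconds)              # 100
print(round(quadratic_hours, 1))   # 2.8
print(f"{exponential_years:.1e}")  # 4.0e+22
```

One hundred seconds, versus almost three hours, versus a number of years that dwarfs the age of the universe.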
Problems categorized as NP, which stands for non-deterministic polynomial time, are problems that, as far as anyone knows, take exponential time to solve, yet whose solutions can be verified in polynomial time. One can think of problems in NP as riddles: very difficult to solve, yet once the answer is given, it seems rather obvious.
One of the most famous problems in NP is prime factorization, or finding the prime numbers that multiply together to form a larger number. Finding these primes is usually a very time-consuming effort, since no polynomial-time algorithm is known. The most common way to find the prime factors of a number involves trial and error, dividing the number up into smaller and smaller chunks until only primes remain.
As shown, no polynomial-time algorithm is known for factoring an N-bit number. However, given a set of prime numbers, testing whether they multiply to the larger number is a polynomial-step process. Therefore, prime factorization is in NP.
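A minimal Python sketch of the trial-and-error approach described above, alongside the fast verification step (the function names are illustrative, not from any standard library):

```python
def prime_factors(n):
    """Factor n by trial division: slow for huge inputs, since the
    work grows exponentially in the number of bits of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # peel off every copy of the divisor d
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

def verify_factorization(n, factors):
    """Checking a claimed factorization is fast: multiply and compare.
    (A complete check would also confirm each factor is prime.)"""
    product = 1
    for f in factors:
        product *= f
    return product == n

print(prime_factors(84))                       # [2, 2, 3, 7]
print(verify_factorization(84, [2, 2, 3, 7]))  # True
```

The asymmetry on display, slow to find, fast to check, is exactly what places factorization in NP.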
The majority of computer science aims at reducing the time and space complexity of algorithms as input sizes grow toward infinity. Although the range within algorithms in P seems quite large, an exponential-time algorithm is generally beyond what can be expected of a computer. For this reason, distinguishing between P and NP, and reducing the complexity of an NP problem to that of an algorithm in P, would save computers, and the humans waiting on those calculations, a tremendous amount of time to be used for other computations.
An important note is that P problems are generally easy to solve, while NP problems are considered hard; however, P problems can have very large constants, making them nearly impossible for modern computers to solve in practice. Additionally, some NP problems (such as the knapsack problem) can be solved reasonably quickly in many practical cases, although their worst-case time complexity still places them in the class NP.
Additionally, since problems in NP are defined as all problems that can be solved with a non-deterministic Turing machine, note that P is a subset of the complexity class NP, as the machinery for solving NP problems can also be used to solve problems in P. The central question of whether P = NP, then, asks whether the opposite is true: whether problems in NP can be solved with the machinery used to solve problems in P.
In the effort to answer the question, computer scientists have found a subset of NP problems that are at least as difficult to solve as any other problem in NP, meaning that an efficient answer to one of these problems would solve every NP problem. This set of problems is known as the NP-Complete problems. The majority of research regarding the question of whether P = NP deals with NP-Complete problems.
NP-Complete problems have two basic properties:
1) It is in NP.
2) Every problem in NP is reducible to it in polynomial time.
Reductions are at the core of the P = NP question, as they help generalize solutions from one problem to an entire subset of problems. A reduction is an algorithm for transforming one problem into another: if problem A is reduced to problem B, and a solution for problem B is known, that solution can be used as a subroutine to solve problem A efficiently.
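As a toy illustration of one problem's solver serving as a subroutine for another (a made-up example with hypothetical function names, not one from the literature), duplicate detection can be reduced to sorting:

```python
def sort_numbers(xs):
    """Problem B: sorting, for which an efficient solution is known."""
    return sorted(xs)

def has_duplicate(xs):
    """Problem A: duplicate detection, reduced to sorting.

    The reduction: sort the list using problem B's solver, then a
    single linear pass finds any duplicate next to its twin."""
    ordered = sort_numbers(xs)
    return any(a == b for a, b in zip(ordered, ordered[1:]))

print(has_duplicate([3, 1, 4, 1, 5]))  # True
print(has_duplicate([2, 7, 18, 28]))   # False
```

Any improvement to the sorting subroutine immediately speeds up duplicate detection too, which is the point of a reduction.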
Bob just moved into town and is trying to find the closest shoe store. Since he has yet to purchase a phone or Internet connection, he decides to knock on Mary’s door in order to get some directions.
After the two introduce themselves and Bob states his inquiry, Mary asks Bob if he passed the gas station on his way into town. Since Bob did see the gas station, Mary tells him to get back to the gas station, head one block north, and find the shoe store on the northeast corner.
Because Mary knew that Bob could easily get to the gas station, Mary reduced the problem of finding the shoe store by skipping the instructions necessary to get to the gas station. Similarly, if a problem in NP can be reduced in polynomial time and space to a problem in P, then the bulk of the work is done in the reduction, which takes polynomial time, and the problem in P is then solved in polynomial time as well, solving the original problem in a combined total of polynomial time.
By reducing complex problems to well-known, easier problems, generally in P, if the easier problem is solved, it can be assumed that the reduced problem can be solved as well, with similar mechanisms and similar time complexity. For this reason, if a polynomial-time solution to an NP-Complete problem is found, it can be adapted to solve all the others.
To save time, researchers first established a single starting point: a problem known as Circuit-SAT, where the goal is to decide whether a given Boolean circuit has an input assignment that makes it output a 1, was proven NP-Complete directly. Circuit-SAT was then reduced to the more general SAT problem, SAT was in turn reduced to the 3SAT problem, and the chain of reductions began to categorize as many problems as possible as NP-Complete. These problems range from graph coloring, to airline scheduling, to bin packing, to protein folding, to auction pricing, to VLSI design, to minimizing soap films, to winning at Super Mario Bros. Building the list of NP-Complete problems is like building a toolbox that can help short-circuit the solution to problems in NP, and this has been the primary way computer scientists and mathematicians have attempted to answer whether problems in NP can be solved as fast as problems in P.
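The contrast between exponential solving and polynomial verification can be sketched for SAT itself. A minimal sketch, using a common clause encoding (integers for literals, negative for negation); the brute-force search is illustrative only:

```python
from itertools import product

def satisfies(clauses, assignment):
    """Polynomial-time verification: every clause must contain at
    least one true literal. Literal 3 means x3; -3 means NOT x3."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, num_vars):
    """Exponential-time solving: try all 2^n truth assignments."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if satisfies(clauses, assignment):
            return assignment
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(brute_force_sat(clauses, 3) is not None)  # True
```

Checking a proposed assignment touches each clause once; finding one may require visiting all 2^n possibilities, which is exactly the gap the P versus NP question asks about.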
In 2002, a poll of mathematicians and computer scientists found that 61 believed P probably does not equal NP, whereas only 9 said otherwise, and, as some later told reporters, mainly to contradict the popular sentiment. Scott Aaronson, a renowned theoretical computer scientist, argues that the majority of techniques to prove that P = NP, namely, finding a single polynomial-time link between any of the tens of thousands of known NP-Complete problems and any of the tens of thousands of problems in P, have been explored to no avail; whereas, like any other successful scientific hypothesis, the P ≠ NP hypothesis has passed several tests that it had no good reason to pass were it false.
The P versus NP problem is extremely important for deepening our understanding of computational complexity. Much of RSA cryptography, which is commonly used to secure Internet transactions, has been developed based on the assumption that prime factorization is very complex, lying in NP, so that finding a solution by brute force would take attackers many years. However, new approaches in quantum computing have been discovered that can factor numbers extremely efficiently, in polynomial time.
An algorithm discovered by Peter Shor, a member of the Computer Science and Artificial Intelligence Laboratory's Theory of Computation Group (TOC) at MIT, runs on computers with a large number of quantum bits, which are atoms in an ion trap. These computers use laser pulses to carry out the algorithm on each atom, correctly factoring large numbers. The system is designed so that adding more atoms and lasers improves the efficiency of the machines, allowing them to factor increasingly large numbers. Quantum computers are at the forefront of moving problems such as factoring from exponential time to polynomial time, and scientists believe that one day in the near future this speedup will be practical.
- Hardesty, L. Explained: P vs. NP. Retrieved June 2016, from http://news.mit.edu/2009/explainer-pnp
- Aaronson, S. The Scientific Case for P ≠ NP. Retrieved from http://www.scottaaronson.com/blog/?p=1720
- Chu, J. The beginning of the end for encryption schemes? Retrieved March 2016, from http://news.mit.edu/2016/quantum-computer-end-encryption-schemes-0303