Complexity Classes
Complexity classes are at the heart of complexity theory, a central topic in theoretical computer science. A complexity class is a set of problems that take a similar range of space and time to solve, for example "all problems solvable in polynomial time with respect to input size," "all problems solvable with exponential space with respect to input size," and so on. Problems are usually shown to belong to a particular complexity class by analyzing them on an abstract computational model, usually a Turing machine. Some of the largest unsolved problems in computer science ask whether two complexity classes are equal; the most notable example is the infamous P = NP problem, in which scientists have so far been unable to prove or disprove the equality of the class P and the class NP.
There are hundreds of complexity classes, and this page will describe a few of the most commonly encountered classes.
Complexity Classes
Complexity classes help computer scientists group problems based on how much time and space they require to solve problems and verify solutions. For example, a complexity class can describe how many steps it would take a Turing machine to decide a problem \(A\). Having a solid grasp of Big-O notation is necessary for understanding complexity classes.
Some complexity classes are a subset of others. For example, the class of problems solvable in deterministic polynomial time, \(P\), is a subset of the class of problems solvable in nondeterministic polynomial time \(NP\).
The time complexity of an algorithm usually describes the number of steps the algorithm needs to take to solve a problem, but it can also describe how long it takes to verify an answer (more on this in the NP section). There are many ways of finding time complexity: one could count how many times a particular line of code executes in a program, or count how many steps a Turing machine takes when solving the problem. Knowing the time complexity of an algorithm tells programmers and computer scientists whether the algorithm is efficient and roughly how long it will take to give an answer.
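For example, here is a minimal sketch in Python (the list and target are made up for illustration) of counting how many times the comparison line in a linear search executes; in the worst case that line runs once per element, so the search takes \(O(n)\) time, where \(n\) is the size of the input.

```python
def linear_search(data, target):
    """Return the index of target in data (or -1), counting key comparisons."""
    comparisons = 0
    index = -1
    for i, value in enumerate(data):
        comparisons += 1              # the line whose executions we count
        if value == target:
            index = i
            break
    print(f"{comparisons} comparisons for an input of size {len(data)}")
    return index

linear_search([3, 1, 4, 1, 5, 9, 2, 6], 7)   # 8 comparisons, returns -1
```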
The space complexity of an algorithm describes how much memory the algorithm needs in order to operate. In terms of Turing machines, the space needed to solve a problem corresponds to the number of cells of the Turing machine’s tape used while solving it.
For more information about space complexity and time complexity, see the space hierarchy theorem and the time hierarchy theorem.
Complexity classes are useful ways of organizing similar types of problems. For many complexity classes there are open problems, such as whether one class is equal to another. For some of these questions, an answer of "yes, they are equal" would have huge theoretical consequences; the entire polynomial hierarchy of complexity classes would collapse. There are many practical consequences too. For example, if it turns out that \(P = NP\), many of the security algorithms that protect our computers, banks, and more will be obsolete.
P
The class \(P\) contains decision problems (problems with a “yes” or “no” answer) that are solvable in polynomial time by a deterministic Turing machine. Sometimes this is phrased as: \(P\) is the class of languages that are decidable in polynomial time by a deterministic Turing machine.
Many problems can be solved using a brute-force approach, but this often takes exponential time. If a problem has a polynomial-time algorithm, it can be solved much more efficiently than by brute force. Many of the algorithms actually used in practice are polynomial-time algorithms; exponential-time algorithms are typically impractical and are avoided as much as possible.
A problem in \(P\) can be solved in \(O(n^k)\) time for some constant \(k\), where \(n\) is the size of the input.
Once a polynomial-time algorithm is discovered for a problem, it is often possible to improve its running time. For example, the naive algorithm for multiplying two \(n\)-digit numbers has a running time of \(O(n^2)\). The Karatsuba algorithm improved on the naive algorithm and can multiply two \(n\)-digit numbers in \(\Theta(n^{\log_2 3}) \approx \Theta(n^{1.585})\) time. Both running times are polynomial, but the Karatsuba algorithm has a smaller exponent, so it is asymptotically faster.
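Here is a minimal sketch of Karatsuba multiplication on nonnegative integers, showing how each product is computed from three recursive sub-multiplications on numbers with roughly half as many digits, which is where the \(\Theta(n^{\log_2 3})\) running time comes from:

```python
def karatsuba(x, y):
    """Multiply two nonnegative integers using three recursive sub-multiplications."""
    if x < 10 or y < 10:                      # base case: a single-digit operand
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** half)     # split each number around 10**half
    high_y, low_y = divmod(y, 10 ** half)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * half) + z1 * 10 ** half + z0

print(karatsuba(1234, 5678) == 1234 * 5678)   # True
```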
Problems in P
Finding if numbers are relatively prime
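For instance, two numbers are relatively prime exactly when their greatest common divisor is 1, and the Euclidean algorithm computes the gcd in time polynomial in the number of digits of the inputs. A minimal sketch:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

def relatively_prime(a, b):
    """Two numbers are relatively prime exactly when their gcd is 1."""
    return gcd(a, b) == 1

print(relatively_prime(35, 64))  # True:  35 = 5 * 7 and 64 = 2^6 share no prime factors
print(relatively_prime(21, 14))  # False: both are divisible by 7
```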
NP and coNP
The class \(NP\) contains decision problems (problems with a “yes” or “no” answer) that are solvable by a nondeterministic Turing machine in polynomial time; on a deterministic machine, these problems may take anywhere from polynomial time up to exponential time to solve. While they can take a long time to solve, problems in \(NP\) can be verified by a Turing machine in polynomial time. This means that if given a “yes” answer to an \(NP\) problem together with a short proof of it, you can check that the answer is right in polynomial time. This short proof is often called a witness or a certificate.
For example, factoring large numbers is an \(NP\) problem. If you are given a large number and asked what its prime factorization is, it will probably take you a long time to find one. However, if someone claims they know the prime factorization of a given large number, it is really easy to verify the claim: just take the factors they have produced and multiply them together to see if the product is the original number.
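A minimal sketch of this verification (the numbers below are made-up examples): multiply the claimed factors and compare the product with the original number, which takes polynomial time even though finding the factors may not.

```python
def verify_factorization(n, claimed_factors):
    """Polynomial-time check that the claimed factors multiply back to n.

    (A full verifier for a *prime* factorization would also check that each
    factor is prime, which can likewise be done in polynomial time.)
    """
    product = 1
    for factor in claimed_factors:
        if factor < 2:                 # 1 and smaller values are not valid factors
            return False
        product *= factor
    return product == n

print(verify_factorization(221, [13, 17]))  # True:  13 * 17 = 221
print(verify_factorization(221, [13, 19]))  # False: 13 * 19 = 247
```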
Proving that a problem is in \(NP\) typically involves showing that you can provide a witness string that can be verified in polynomial time.
\(P\) is contained within \(NP\), which can be written \(P \subseteq NP\), but it is unknown whether \(P\) and \(NP\) are equal. However, most theoretical computer scientists believe that \(P \ne NP\).
coNP
If a problem \(X\) is in \(NP\), then its complement \(\overline{X}\) is in \(coNP\). \(coNP\) contains problems that have a polynomial-time verification for “no” answers: given a witness for a “no” answer, it is easy to check that the answer really is “no.”
A problem that is in \(coNP\) is a satisfiability problem called \(coSAT\), where \(coSAT = \{b : b \text{ is a boolean expression with no satisfying assignments}\}\). To show that a particular expression is not in \(coSAT\), the witness is an assignment that satisfies the expression; this can be checked in polynomial time. If the proposed assignment satisfies the expression, we know that the expression is not a member of \(coSAT\). However, if the witness string does not provide a satisfying assignment, that does not prove that the expression is in \(coSAT\); a witness that works might simply not have been proposed yet.
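A minimal sketch of that check, representing a boolean expression as a Python function of its variables (an illustrative assumption): evaluating the expression on one proposed assignment takes polynomial time.

```python
def satisfies(expression, assignment):
    """Check in polynomial time whether a proposed assignment satisfies the expression."""
    return expression(**assignment)

# An illustrative expression: (x OR y) AND (NOT x OR z)
expr = lambda x, y, z: (x or y) and ((not x) or z)

witness = {"x": True, "y": False, "z": True}
print(satisfies(expr, witness))       # True: this shows expr is NOT in coSAT

bad_witness = {"x": True, "y": True, "z": False}
print(satisfies(expr, bad_witness))   # False: proves nothing either way
```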
For \(NP\) and \(coNP\) problems, you don’t need to be able to verify all of the “yes” or “no” answers at once in polynomial time; you just need to be able to verify a particular “yes” or “no” answer in polynomial time in order for a problem to be in \(NP\) or \(coNP\), respectively[2]. Proving that a problem is in \(coNP\) usually involves providing a witness string for a “no” answer that can be verified in polynomial time.
Some problems are in \(NP \cap coNP\). Problems in this class have the property that both “yes” and “no” witness strings can be checked in polynomial time. This includes all problems in \(P\). In other words, \(P\) is a subset of both \(NP\) and \(coNP\).
For example, take the problem \(PRIME\) of determining whether a number \(x\) is a prime number. This is a problem in \(coNP\) because it is really easy to check that a number is not prime: the witness string is just a nontrivial factor of the number. If the factor is a number other than \(x\) and \(1\) that divides \(x\), we have successfully verified a “no” answer. It is also the case that \(PRIME\) is in \(NP\), meaning that a "yes" witness can be verified in polynomial time too. In fact, testing primality is a problem in \(P\); proving this was a huge advance in computer science.
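Verifying such a “no” witness is just a single divisibility check, for example:

```python
def verify_not_prime(x, factor):
    """Verify a "no" witness for PRIME: a nontrivial factor of x."""
    return 1 < factor < x and x % factor == 0

print(verify_not_prime(91, 7))   # True:  7 divides 91, so 91 is not prime
print(verify_not_prime(97, 5))   # False: 5 does not divide 97, so this witness fails
```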
It is unknown if \(NP = coNP\), though most theoretical computer scientists believe that they are not equal.
Key Relationships
\(P \subseteq NP\)
\(P \subseteq coNP\)
Problems in NP
Boolean satisfiability (Circuit-SAT, SAT, 3-SAT, etc)
NP-Complete Problems
A problem is \(NP-complete\), sometimes written \(NP-C\), if it is both in \(NP\) and \(NP-hard\).
NP-Hard
An \(NP-hard\) problem is at least as hard as the hardest problems in \(NP\). A problem \(A\) is \(NP-hard\) if for every problem \(L\) in \(NP\), there is a polynomial-time reduction from \(L\) to \(A\). More generally, a problem \(A\) is reducible to a problem \(B\) if an algorithm for solving problem \(B\) could also be used to solve problem \(A\).
\(NP-hard\) problems do not have to be in \(NP\), and they do not have to be decidable. For example, the halting problem is an \(NP-hard\) problem, but is not an \(NP\) problem.
NP-complete
\(NP-complete\) problems are very special because any problem in the \(NP\) class can be transformed, or reduced, into an \(NP-complete\) problem in polynomial time. This means that if you can solve an \(NP-complete\) problem, you can solve any other problem in \(NP\); in particular, if you could solve an \(NP-complete\) problem in polynomial time, then you could solve every \(NP\) problem in polynomial time.
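The sketch below shows this pattern with hypothetical function names: if reduce_to_A maps instances of any \(NP\) problem \(L\) to instances of an \(NP-complete\) problem \(A\) in polynomial time, then a solver for \(A\) immediately yields a solver for \(L\).

```python
def solve_L(instance, reduce_to_A, solve_A):
    """Solve an NP problem L by reducing it to an NP-complete problem A.

    reduce_to_A: a polynomial-time transformation of L-instances into A-instances
                 that preserves yes/no answers (hypothetical, supplied by the reduction).
    solve_A:     any decision procedure for the NP-complete problem A.
    """
    transformed = reduce_to_A(instance)   # polynomial-time reduction
    return solve_A(transformed)           # if solve_A runs in polynomial time, so does solve_L
```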
The Cook-Levin theorem shows that the Boolean satisfiability problem (SAT) is \(NP-complete\).
Problems that are NP-complete
Boolean satisfiability (Circuit-SAT, SAT, 3-SAT, etc)
RP and coRP
The class \(RP\), randomized polynomial time, is the class for which there exists a probabilistic Turing machine with the following properties:
It always runs in polynomial time in the input size
If the correct answer is “no”, it always returns “no”
If the correct answer is “yes,” then it returns “yes” with probability at least \(\frac{1}{2}\)*, otherwise, it returns “no”.
This means that if the correct answer is “no” then the algorithm will definitely return “no” but if the correct answer is “yes,” the algorithm might return the wrong answer (“no”) up to half of the time. If the algorithm does return a “yes,” then the answer is definitely “yes” — a “yes” answer is always true, but a “no” answer might be wrong.
*The \(\frac{1}{2}\) here is chosen arbitrarily; the class \(RP\) is conventionally defined with one half. Since the error is one-sided, running many trials can reduce the error; this is called amplification. If the algorithm is repeated \(t\) times independently, accept the string \(x\) if any of the \(t\) runs accept, and reject \(x\) otherwise. The probability that the algorithm returns the wrong answer after all \(t\) runs is at most \(\frac{1}{2^t}\), which goes to zero as \(t\) increases.
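A minimal sketch of this amplification (the one-sided test rp_test below is a hypothetical stand-in): accept as soon as any run accepts, since a “yes” answer from an \(RP\) algorithm is never wrong.

```python
import random

def amplify(rp_test, x, t):
    """Repeat a one-sided-error test t times; accept if any run accepts.

    If x is a "no" instance, rp_test never accepts, so we always correctly reject.
    If x is a "yes" instance, each run accepts with probability >= 1/2, so the
    chance of wrongly rejecting after t runs is at most (1/2)**t.
    """
    return any(rp_test(x) for _ in range(t))

# Hypothetical one-sided test: accepts a "yes" instance only about half the time.
def rp_test(x):
    return x == "yes-instance" and random.random() < 0.5

print(amplify(rp_test, "yes-instance", 20))  # almost certainly True
print(amplify(rp_test, "no-instance", 20))   # always False
```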
Similar to \(NP\) and \(coNP\), there is a class containing the complement of \(RP\) called \(coRP\).
\(coRP\) is defined as follows:
It always runs in polynomial time in the input size
If the correct answer is “yes”, it always returns “yes”
If the correct answer is “no,” then it returns “no” with probability at least \(\frac{1}{2}\), otherwise, it returns “yes”.
This means that if the answer is "yes" then the algorithm will definitely return "yes" but if the answer is "no" the algorithm might return an incorrect answer.
Some problems exist in both \(RP\) and \(coRP\); that is, they lie in the intersection \(RP \cap coRP\).
\(P\) is a subset of \(RP\). It is unknown if \(P = RP\). \(P\) is also a subset of \(coRP\) and it is not known if \(RP = coRP\). \(RP\) is a subset of \(NP\), and \(coRP\) is a subset of \(coNP\).
Key Relationships
\(P \subseteq RP\)
\(RP \subseteq NP\)
\(coRP \subseteq coNP\)
\(RP\) and \(coRP\) describe problems solvable by randomized algorithms with one-sided error.
BPP and ZPP
BPP
\(BPP\) (bounded-error probabilistic polynomial time) is the class for which there exists a probabilistic Turing machine with the following properties:
It is guaranteed to run in polynomial time
On any given run of the algorithm, it has a probability of at most \(\frac{1}{3}\) of giving the wrong answer, whether the answer is “yes” or “no.”
This means that if the correct answer is “yes,” the algorithm will output “yes” with probability \(\geq \frac{2}{3}\) and “no” (the wrong answer) with probability \(\leq \frac{1}{3}\). Likewise, if the correct answer is “no,” the algorithm will output “no” with probability \(\geq \frac{2}{3}\) and “yes” (the wrong answer) with probability \(\leq \frac{1}{3}\).
Monte Carlo algorithms are in the \(BPP\) class.
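Amplification works for \(BPP\) as well, but with a majority vote over independent runs instead of a single accepting run. A minimal sketch, with a hypothetical two-sided-error test bpp_test:

```python
import random

def amplify_majority(bpp_test, x, t):
    """Run a two-sided-error test t times and return the majority answer.

    If each run is correct with probability >= 2/3, the majority of many
    independent runs is wrong with probability that shrinks exponentially in t.
    """
    yes_votes = sum(1 for _ in range(t) if bpp_test(x))
    return yes_votes > t // 2

# Hypothetical test: returns the correct answer with probability 2/3 on every input.
def bpp_test(x):
    correct_answer = (x == "yes-instance")
    return correct_answer if random.random() < 2 / 3 else not correct_answer

print(amplify_majority(bpp_test, "yes-instance", 101))  # almost certainly True
print(amplify_majority(bpp_test, "no-instance", 101))   # almost certainly False
```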
It is unknown if \(BPP = P\), but many computer scientists believe that they are equal. Discovering whether \(P = BPP\) would be as important as solving the \(P\) vs. \(NP\) problem: it would show whether every polynomial-time probabilistic algorithm can be replaced by a deterministic one.
ZPP
\(ZPP\) (zero-error probabilistic polynomial time) is the class for which there exists a probabilistic Turing machine with the following properties:
It is guaranteed to run in polynomial time
It may return a “yes,” “no,” or “I don’t know” answer
If it returns a “yes” or a “no”, then that is the correct answer
It returns “I don’t know” with probability \(\leq \frac{1}{2}\)
This means that it always returns either the correct answer or “I don’t know,” and the machine will return “I don’t know” with probability less than or equal to one half.
The class \(ZPP\) is exactly equal to the intersection of \(RP\) and \(coRP\).
Las Vegas algorithms are included in the \(ZPP\) class.
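A minimal sketch of this pattern (the routine zpp_run below is a hypothetical stand-in that may answer “I don’t know”): rerun until a definite answer appears, so the answer is always correct and the expected number of runs is at most 2.

```python
import random

def las_vegas(zpp_run, x):
    """Rerun a zero-error routine until it gives a definite answer.

    Each run returns True, False, or None ("I don't know"), and None occurs with
    probability at most 1/2, so the expected number of runs is at most 2.
    """
    while True:
        answer = zpp_run(x)
        if answer is not None:
            return answer

# Hypothetical zero-error routine: correct answer half the time, "I don't know" otherwise.
def zpp_run(x):
    if random.random() < 0.5:
        return None
    return x == "yes-instance"

print(las_vegas(zpp_run, "yes-instance"))  # True, possibly after several retries
```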
Key Relationships
\(P \subseteq BPP\)
\(ZPP = RP \cap coRP\)
\(ZPP \subseteq BPP\)
\(RP \subseteq BPP\)
\(coRP \subseteq BPP\)
\(RP \cap coRP \subseteq BPP\)
PSPACE, NPSPACE, and EXPSPACE
The class \(PSPACE\) is the set of decision problems that can be solved by a deterministic Turing machine using a polynomial amount of space with respect to the input size. \(NPSPACE\) is the set of decision problems that can be solved by a nondeterministic Turing machine using a polynomial amount of space with respect to the input size. Similarly, \(EXPSPACE\) is the set of decision problems that can be solved by a deterministic Turing machine using an exponential amount of space with respect to the input size.
By Savitch's theorem \(PSPACE = NPSPACE\) and \(EXPSPACE = NEXPSPACE\).
\(P\) and \(NP\) are contained in \(PSPACE\), but it is unknown if any of those are equivalent to each other.
Key Relationships
\(PSPACE = NPSPACE\)
\(EXPSPACE = NEXPSPACE\)
\(P \subseteq NP \subseteq PSPACE\)
\(PSPACE \subseteq EXP\)
References
- Jacobs, K. Theory of Computation. Retrieved June 26, 2016, from http://ocw.mit.edu/courses/mathematics/18-404j-theory-of-computation-fall-2006/
- coNP. Retrieved June 26, 2016, from https://en.wikipedia.org/wiki/Co-NP
- P np np-complete np-hard.svg. Retrieved June 29, 2016, from https://en.wikipedia.org/wiki/File:P_np_np-complete_np-hard.svg
- Complexity subsets pspace. Retrieved June 29, 2016, from https://en.wikipedia.org/wiki/File:Complexity_subsets_pspace.svg