Complexity Theory
Complexity theory is a central topic in theoretical computer science. It is closely tied to computability theory and uses models of computation such as Turing machines to measure the complexity of problems. Complexity theory helps computer scientists relate problems to one another and group them into complexity classes. Sometimes, if one problem can be solved, it opens a way to solve other problems in its complexity class. Complexity helps determine the difficulty of a problem, often measured by how much time and space (memory) it takes to solve with respect to the input size. For example, some problems can be solved in a polynomial amount of time while others take an exponential amount of time.
Complexity theory has real-world implications too, particularly in algorithm design and analysis. An algorithm can be analyzed in terms of its complexity, which is often described in big-O notation. Programmers often want to write efficient algorithms, and being able to tell whether an algorithm runs in polynomial time or exponential time can tell them whether it is the best choice or not. Complexity theory has applications for biologists studying neurons, electrical engineers who design hardware, linguists who study languages and grammars, and physicists building quantum computers.[1]
Both in theory and in practice, complexity theory helps computer scientists determine the limits of what computers can and cannot do.
In many cases, problems can be modeled as decision problems, which are problems that can be answered with a “yes” or a “no.” For example: “Is this number prime?”, “Does this graph have a Hamiltonian path?”, or “Is there an assignment of variables such that a given set of constraints is satisfied?” Examples of decision problems include the traveling salesperson problem, 3SAT, and primality testing. Decision problems can be simulated on computational models such as Turing machines.
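To make the idea of a decision problem concrete, here is a minimal sketch in Python of primality testing by trial division. The function name is_prime and the trial-division approach are illustrative choices for this example, not part of any standard library.

```python
# A minimal sketch of a decision problem: primality testing by trial division.

def is_prime(n: int) -> bool:
    """Decide the question "is n prime?" with a yes/no answer."""
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False  # "no": a nontrivial divisor exists
        divisor += 1
    return True  # "yes": no divisor up to sqrt(n) was found

print(is_prime(97))   # True  -> "yes"
print(is_prime(100))  # False -> "no"
```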
Background topics for complexity theory
Complexity theory can be one of the more challenging topics in theoretical computer science since it requires a fair amount of background. To really appreciate complexity theory, one should be familiar with the following topics:
- Regular languages, context-free grammars, and context-free languages. These topics provide the vocabulary for describing the problems that complexity theory deals with.
- Turing machines are the usual model for testing where a problem belongs in the complexity hierarchy, so you should be familiar with how Turing machines are defined and how they work.
Complexity Classes
Here is a brief overview of complexity classes; find out more on their wiki page.
Complexity classes are used to group together problems that require similar amounts of resources. For example, problems that can be solved in polynomial time make up the class P, while problems that require an exponential amount of space are in the class EXPSPACE. Some classes are contained within other classes; for example, if a problem can be solved in polynomial time, it can certainly be solved in exponential time too. The image below describes the relationship of many common complexity classes.
Determining whether classes are actually equivalent, rather than merely contained in one another, is a key problem in theoretical computer science.
(Image: the relationships of many popular complexity classes.)
Complexity and Algorithms
Complexity is often used to describe an algorithm. One might hear something like “my sorting algorithm runs in oh of \(n^2\) time.” In complexity notation, this is written as \(O(n^2)\) and is a polynomial running time. Complexity is used to describe resource use in algorithms. In general, the resources of concern are time and space. The time complexity of an algorithm represents the number of steps it has to take to complete. The space complexity of an algorithm represents the amount of memory the algorithm needs in order to work.
The time complexity of an algorithm describes how many steps it needs to take with respect to the input. If, for example, each of the \(n\) input elements is operated on only once, the algorithm takes \(O(n)\) time. If the algorithm only ever operates on one element, no matter the input size, it is a constant-time, or \(O(1)\), algorithm, since only one thing is done regardless of the input size. If an algorithm does \(n\) operations for each of the \(n\) elements given to it, then it runs in \(O(n^2)\) time.
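As a rough sketch of these three growth rates, the Python functions below (whose names are made up for this example) run in constant, linear, and quadratic time respectively.

```python
# Illustrative sketches of constant, linear, and quadratic running time.

def first_element(items):
    # O(1): touches one element regardless of input size
    return items[0]

def total(items):
    # O(n): touches each of the n elements once
    s = 0
    for x in items:
        s += x
    return s

def count_pairs_with_sum(items, target):
    # O(n^2): n operations for each of the n elements
    count = 0
    for a in items:
        for b in items:
            if a + b == target:
                count += 1
    return count

data = [3, 1, 4, 1, 5]
print(first_element(data), total(data), count_pairs_with_sum(data, 6))
```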
In algorithm design and analysis, there are three types of complexity that computer scientists think about: best-case, worst-case, and average-case complexity.
Best, Worst, and Average-case Complexity
Complexity can describe both time and space. This wiki will speak in terms of time complexity, but the same concepts can be applied to space complexity.
Let’s say you are sorting a list of numbers. If the input list is already sorted, your algorithm probably has very little work to do — this could be considered a “best-case” input and would have a very fast running time. Let’s take the same sorting algorithm and give it an input list that is entirely backwards, and every element is out of place. This could be considered a “worst-case” input and would have a very slow running time. Now say you have a random input that is somewhat ordered and somewhat disordered (an average input). This would take the average-case running time.
If you know something about your data, for example if you have reason to expect that your list will usually be mostly sorted, you might count on your best-case running time and choose an algorithm with a great best case, even if it has a terrible worst and average case. Usually, though, programmers need to write algorithms that can handle any input efficiently, so computer scientists are generally most concerned with the worst-case running times of algorithms.
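As a concrete sketch, insertion sort is a standard algorithm whose best and worst cases differ sharply: an already sorted list needs roughly \(n\) comparisons, while a reversed list needs roughly \(n^2/2\). The comparison counter in the Python sketch below is added purely for illustration.

```python
# Insertion sort with a comparison counter, to show best- vs. worst-case inputs.

def insertion_sort(items):
    items = list(items)
    comparisons = 0
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if items[j] > key:
                items[j + 1] = items[j]   # shift the larger element right
                j -= 1
            else:
                break
        items[j + 1] = key
    return items, comparisons

n = 100
_, best = insertion_sort(list(range(n)))           # already sorted input
_, worst = insertion_sort(list(range(n, 0, -1)))   # reversed input
print(best, worst)   # 99 comparisons vs. 4950 comparisons
```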
Complexity of Important Problems
Complexity has theoretical applications as well. Many important problems in computer science, such as the P vs. NP problem, are stated in terms of complexity theory. Many important theoretical computer science questions essentially boil down to one of the following:
- “Are these two problems reducible to each other; does an answer to problem A help us solve problem B?”
OR
- “Is complexity class X equivalent to complexity class Y?”
Answers to these types of questions often have profound implications for theoretical computer science and real world applications. For example, if P were proven to be equal to NP, most of our security algorithms, like RSA, would be incredibly easy to break.
In the 1970s, Cook and Levin proved that Boolean satisfiability is an NP-complete problem, meaning that every other problem in NP can be transformed (reduced) into it. In other words, the satisfiability problem can model any other problem in NP. This shows us a powerful consequence of complexity theory: by determining how to solve the satisfiability problem and using the Cook-Levin theorem, we know how to approach every other problem in NP. Additionally, this means that if computer scientists figure out how to solve any NP-complete problem in polynomial time, all other problems in NP could be solved in polynomial time; in other words, P would equal NP.
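As a rough illustration of why satisfiability is believed to be hard, the sketch below decides a CNF formula by trying all \(2^n\) truth assignments. The encoding used here (a list of clauses, each a list of signed variable indices) is an arbitrary choice for this example.

```python
from itertools import product

# Brute-force satisfiability check: try every one of the 2^n truth assignments.
# A clause is a list of signed integers: +i means variable i, -i means "not variable i".

def is_satisfiable(clauses, num_vars):
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True   # found a satisfying assignment
    return False          # all 2^n assignments fail

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(is_satisfiable([[1, 2], [-1, 3], [-2, -3]], 3))  # True
```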
Testing Complexity
To “test” a problem’s complexity, computer scientists try to solve the problem on a Turing machine and see how many steps (time complexity) and how much tape (space complexity) it requires to decide the problem.
We can use a Turing machine to solve an instance of a problem or to verify a proposed answer. For example, the problem might be the traveling salesperson problem, and an instance would be a particular graph that the salesperson must travel. One instance could have the salesperson traveling in New York City and another could have the salesperson traveling in London. In either case, the goal of the salesperson is the same, even though the specific graph is different.
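The sketch below illustrates this on a made-up four-city instance of the traveling salesperson problem, stated as a decision problem with a distance budget: verifying a proposed tour takes polynomial time, while the brute-force decider tries every ordering of the cities. The distance matrix is invented purely for illustration.

```python
from itertools import permutations

# One small, made-up instance: distances between 4 cities.
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(tour):
    """Length of a round trip visiting the cities in the given order."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def verify(tour, budget):
    """Polynomial-time check of a proposed answer: is this tour short enough?"""
    return sorted(tour) == list(range(len(dist))) and tour_length(tour) <= budget

def decide(budget):
    """Brute force over all orderings: is there any tour within the budget?"""
    return any(tour_length(list(p)) <= budget
               for p in permutations(range(len(dist))))

print(verify([0, 1, 3, 2], 80))  # True: 10 + 25 + 30 + 15 = 80
print(decide(80), decide(70))    # True, False
```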
The running time of a particular problem, like the traveling salesperson problem, may depend on the particular instance; for example, larger instances may require more time to solve. This is why the complexity of a problem is calculated as a function of the size of the instance. Usually, the size of the input is measured in bits, and in general, complexity theory deals with how algorithms scale as the input size grows.[3] Instances are encoded as strings of bits that follow particular patterns or rules (similar to regular languages and context-free languages). The problem is modeled as a language, and the Turing machine is fed an encoded instance and decides whether it belongs to that language.
While we saw on the Turing machine wiki that a Turing machine takes in a program and operates on an input according to that program, in complexity proofs we usually abstract away the specific Turing machine program. The Church-Turing thesis says that any computable problem can be computed on a Turing machine, so we can safely assume that if a problem is computable, there is a program for it. The time complexity of a problem is determined by how many steps the Turing machine takes to solve the problem, and the space complexity is how many cells on the tape the machine needs.
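A minimal sketch of this counting idea: the simulator below runs a tiny single-tape Turing machine (a made-up machine that flips the bits of its input) and reports how many steps it took and how many tape cells it visited. The transition-table format is an assumption for this example.

```python
# A minimal single-tape Turing machine simulator that counts steps (time)
# and tape cells visited (space). The machine below is a made-up example
# that flips every bit of its input and then halts.

def run(transitions, tape, start, accept, blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    state, head = start, 0
    steps, visited = 0, {0}
    while state != accept:
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        visited.add(head)
        steps += 1
    return tape, steps, len(visited)

# Made-up machine: in state "flip", invert each bit and move right; halt on blank.
flip_bits = {
    ("flip", "0"): ("flip", "1", "R"),
    ("flip", "1"): ("flip", "0", "R"),
    ("flip", "_"): ("done", "_", "R"),
}

tape, steps, cells = run(flip_bits, "10110", "flip", "done")
print(steps, cells)   # 6 steps, 7 cells for a length-5 input: linear time and space
```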
References
- Scheideler, C. 1 Introduction to Complexity Theory. Retrieved July 10, 2016, from http://www.cs.jhu.edu/~scheideler/courses/600.471_S05/lecture_1.pdf
- Jacobs, K. Theory of Computation. Retrieved June 26, 2016, from http://ocw.mit.edu/courses/mathematics/18-404j-theory-of-computation-fall-2006/
- Wikipedia. Computational complexity theory. Retrieved July 12, 2016, from https://en.wikipedia.org/wiki/Computational_complexity_theory
- Wikipedia. File:Turing machine 2b.svg. Retrieved July 12, 2016, from https://en.wikipedia.org/wiki/File:Turing_machine_2b.svg