The #P-complete problems (pronounced "sharp P complete" or "number P complete") form a complexity class in computational complexity theory. The problems in this complexity class are defined by having the following two properties:
The problem is #P-hard, meaning that every other problem in #P has a Turing reduction or polynomial-time counting reduction to it. A counting reduction is a pair of polynomial-time transformations, one from inputs of the other problem to inputs of the given problem and one from outputs of the given problem to outputs of the other problem, allowing the other problem to be solved using any subroutine for the given problem. A Turing reduction is an algorithm for the other problem that makes a polynomial number of calls to a subroutine for the given problem and, outside of those calls, uses polynomial time. In some cases, parsimonious reductions, a more restrictive type of reduction that preserves the exact number of solutions, are used.
The problem is in #P, the class of counting problems that ask for the number of accepting computations of a nondeterministic polynomial-time Turing machine.
#P-complete problems are at least as hard as NP-complete problems.[1] A polynomial-time algorithm for solving a #P-complete problem, if it existed, would solve the P versus NP problem by implying that P and NP are equal. No such algorithm is known, nor is a proof known that such an algorithm does not exist.
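To illustrate how a counting subroutine settles the corresponding decision problem, the following minimal Python sketch decides Boolean satisfiability with a single Turing-reduction-style call. The counting oracle count_sat is a hypothetical black box introduced only for illustration, not an existing library function: a formula is satisfiable exactly when its number of satisfying assignments is nonzero.

```python
def count_sat(formula):
    """Hypothetical #SAT oracle: returns the number of satisfying
    assignments of `formula`.  Assumed here as a black-box subroutine,
    not an existing library function."""
    raise NotImplementedError

def is_satisfiable(formula):
    # One call to the counting subroutine decides the NP-complete
    # decision problem: the formula is satisfiable iff the count of
    # satisfying assignments is nonzero.
    return count_sat(formula) > 0
```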
Examples
Examples of #P-complete problems include:
How many different variable assignments will satisfy a given general Boolean formula? (#SAT)
How many different variable assignments will satisfy a given DNF formula?
How many different variable assignments will satisfy a given 2-satisfiability problem?
These are all necessarily members of the class #P as well. As a non-example, consider the case of counting solutions to a 1-satisfiability problem: a series of variables that are each individually constrained but have no relationships with each other. The solutions can be efficiently counted by multiplying the number of options for each variable in isolation. Thus, this problem is in #P, but it cannot be #P-complete unless #P=FP. This would be surprising, as it would imply that P=NP=PH.
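A minimal sketch of the counting-by-multiplication argument, under the assumed encoding that each 1-SAT clause is a single literal (+i forces variable i to be true, -i forces it to be false); the function name and representation are illustrative, not part of any standard library.

```python
def count_1sat(clauses, num_vars):
    """Count satisfying assignments of a 1-SAT instance in linear time.

    `clauses` is a list of single literals: +i forces variable i true,
    -i forces it false.  Variables are numbered 1..num_vars.
    """
    forced = {}  # variable -> required truth value
    for lit in clauses:
        var, value = abs(lit), lit > 0
        if forced.get(var, value) != value:
            return 0          # a variable is forced both true and false
        forced[var] = value
    # Each unconstrained variable doubles the count; each constrained
    # variable contributes exactly one option.
    return 2 ** (num_vars - len(forced))

# Example: (x1) AND (not x2) over 3 variables; x3 is free, so 2 assignments.
print(count_1sat([1, -2], 3))  # 2
```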
Easy problems with hard counting versions
Some #P-complete problems correspond to easy (polynomial-time) problems. Determining the satisfiability of a Boolean formula in disjunctive normal form is easy: such a formula is satisfiable if and only if it contains a satisfiable conjunction (one that does not contain a variable and its negation), whereas counting the number of satisfying assignments is #P-complete. Likewise, deciding 2-satisfiability is easy, while counting the number of satisfying assignments is #P-complete. Topological sorting is easy, in contrast to counting the number of topological orderings. A single perfect matching can be found in polynomial time, but counting all perfect matchings is #P-complete. The perfect matching counting problem was the first counting problem corresponding to an easy P problem shown to be #P-complete, in a 1979 paper by Leslie Valiant, which also defined the class #P and the #P-complete problems for the first time.[3]
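The easy decision side of the DNF example can be sketched directly from the criterion above. In the sketch below, a DNF formula is assumed to be represented as a list of conjunctions, each a list of integer literals (+i for variable i, -i for its negation); this representation and the function name are illustrative choices.

```python
def dnf_satisfiable(conjunctions):
    """Decide satisfiability of a DNF formula in polynomial time.

    `conjunctions` is a list of terms, each a list of literals
    (+i for variable i, -i for its negation).  A term is satisfiable
    exactly when it does not contain both a literal and its negation.
    """
    for term in conjunctions:
        literals = set(term)
        if not any(-lit in literals for lit in literals):
            return True   # this term can be satisfied on its own
    return False

# (x1 AND not x1) OR (x2 AND x3) is satisfiable via the second term.
print(dnf_satisfiable([[1, -1], [2, 3]]))  # True
```

Counting the satisfying assignments of the same formula, by contrast, is #P-complete, because the terms overlap and the overlaps must not be double-counted.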
Approximation
There are probabilistic algorithms that return good approximations to some #P-complete problems with high probability. This is one demonstration of the power of probabilistic algorithms.
Many #P-complete problems have a fully polynomial-time randomized approximation scheme, or "FPRAS", which, informally, will produce with high probability an approximation to an arbitrary degree of accuracy, in time that is polynomial with respect to both the size of the problem and the degree of accuracy required. Jerrum, Valiant, and Vazirani showed that every #P-complete problem either has an FPRAS or is essentially impossible to approximate: if any polynomial-time algorithm consistently produces an approximation of a #P-complete problem that is within a ratio polynomial in the size of the input of the exact answer, then that algorithm can be used to construct an FPRAS.[4]
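Counting the satisfying assignments of a DNF formula is one #P-complete problem known to admit an FPRAS. The sketch below follows the well-known Karp–Luby Monte Carlo estimator; the input representation, function name, and fixed sample count are illustrative assumptions rather than part of the original result.

```python
import random

def approx_count_dnf(terms, num_vars, samples=10000):
    """Karp-Luby style estimator for the number of assignments
    satisfying a DNF formula.

    `terms` is a list of conjunctions, each a list of literals
    (+i for variable i true, -i for it false), over variables
    numbered 1..num_vars.  Returns an unbiased estimate of the count.
    """
    # Solution count of each individual term: 2^(number of free variables),
    # or 0 if the term contains a contradictory pair of literals.
    sizes = []
    for term in terms:
        lits = set(term)
        if any(-lit in lits for lit in lits):
            sizes.append(0)
        else:
            sizes.append(2 ** (num_vars - len({abs(l) for l in lits})))
    total = sum(sizes)
    if total == 0:
        return 0

    hits = 0
    for _ in range(samples):
        # Pick a term with probability proportional to its solution count,
        # then a uniformly random assignment satisfying that term.
        i = random.choices(range(len(terms)), weights=sizes)[0]
        assignment = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
        for lit in terms[i]:
            assignment[abs(lit)] = lit > 0
        # Count the sample only if no earlier term is also satisfied, so
        # each satisfying assignment is counted exactly once overall.
        earlier_satisfied = any(
            all(assignment[abs(lit)] == (lit > 0) for lit in terms[j])
            for j in range(i) if sizes[j] > 0
        )
        if not earlier_satisfied:
            hits += 1
    return total * hits / samples
```

A number of samples polynomial in the number of terms and in the inverse of the desired relative error suffices for a good estimate with high probability, which is what makes such a scheme fully polynomial.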