In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a function {\displaystyle \mathrm {erf} :\mathbb {C} \to \mathbb {C} } defined as:[1] {\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\int _{0}^{z}e^{-t^{2}}\,\mathrm {d} t.}
The integral here is a complex contour integral which is path-independent because {\displaystyle \exp(-t^{2})} is holomorphic on the whole complex plane {\displaystyle \mathbb {C} }. In many applications, the function argument is a real number, in which case the function value is also real.
In some old texts,[2] the error function is defined without the factor of {\displaystyle {\frac {2}{\sqrt {\pi }}}}. This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations.
In statistics, for non-negative real values of x, the error function has the following interpretation: for a real random variable Y that is normally distributed with mean 0 and standard deviation {\displaystyle {\frac {1}{\sqrt {2}}}}, erf x is the probability that Y falls in the range [−x, x].
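This interpretation can be checked numerically. The following minimal C sketch (the helper name Phi is illustrative, not from a particular library) uses the C99 erf and erfc from math.h together with the relation erf x = 2Φ(x√2) − 1 stated later in the article:

```c
#include <math.h>
#include <stdio.h>

/* Standard normal CDF, Phi(x) = erfc(-x / sqrt(2)) / 2 (see the relation to Phi below). */
static double Phi(double x) { return 0.5 * erfc(-x / sqrt(2.0)); }

int main(void) {
    /* For Y ~ N(0, 1/sqrt(2)), P(-x <= Y <= x) equals erf(x).
       Equivalently, with Z standard normal, erf(x) = P(|Z| <= x*sqrt(2)) = 2*Phi(x*sqrt(2)) - 1. */
    for (double x = 0.5; x <= 2.0; x += 0.5) {
        double via_erf = erf(x);
        double via_phi = 2.0 * Phi(x * sqrt(2.0)) - 1.0;
        printf("x = %.1f   erf(x) = %.12f   2*Phi(x*sqrt(2)) - 1 = %.12f\n",
               x, via_erf, via_phi);
    }
    return 0;
}
```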
Two closely related functions are the complementary error function {\displaystyle \mathrm {erfc} :\mathbb {C} \to \mathbb {C} }, defined as
{\displaystyle \operatorname {erfc} z=1-\operatorname {erf} z,}
and the imaginary error function {\displaystyle \mathrm {erfi} :\mathbb {C} \to \mathbb {C} }, defined as
{\displaystyle \operatorname {erfi} z=-i\operatorname {erf} iz,}
where i is the imaginary unit.
The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors."[3] The error function complement was also discussed by Glaisher in a separate publication in the same year.[4] For the "law of facility" of errors whose density is given by f ( x ) = ( c π ) 1 / 2 e − c x 2 {\displaystyle f(x)=\left({\frac {c}{\pi }}\right)^{1/2}e^{-cx^{2}}} (the normal distribution), Glaisher calculates the probability of an error lying between p and q as: ( c π ) 1 2 ∫ p q e − c x 2 d x = 1 2 ( erf ( q c ) − erf ( p c ) ) . {\displaystyle \left({\frac {c}{\pi }}\right)^{\frac {1}{2}}\int _{p}^{q}e^{-cx^{2}}\,\mathrm {d} x={\tfrac {1}{2}}\left(\operatorname {erf} \left(q{\sqrt {c}}\right)-\operatorname {erf} \left(p{\sqrt {c}}\right)\right).}
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X ~ Norm[μ,σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ, it can be shown via integration by substitution: {\displaystyle {\begin{aligned}\Pr[X\leq L]&={\frac {1}{2}}+{\frac {1}{2}}\operatorname {erf} {\frac {L-\mu }{{\sqrt {2}}\sigma }}\\&\approx A\exp \left(-B\left({\frac {L-\mu }{\sigma }}\right)^{2}\right)\end{aligned}}}
where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√(ln k), then:
{\displaystyle \Pr[X\leq L]\leq A\exp(-B\ln {k})={\frac {A}{k^{B}}}}
so the probability goes to 0 as k → ∞.
The probability for X being in the interval [La, Lb] can be derived as {\displaystyle {\begin{aligned}\Pr[L_{a}\leq X\leq L_{b}]&=\int _{L_{a}}^{L_{b}}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\,\mathrm {d} x\\&={\frac {1}{2}}\left(\operatorname {erf} {\frac {L_{b}-\mu }{{\sqrt {2}}\sigma }}-\operatorname {erf} {\frac {L_{a}-\mu }{{\sqrt {2}}\sigma }}\right).\end{aligned}}}
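As an illustration, the interval probability above translates directly into code. A minimal C sketch (the function name prob_in_interval and the example parameters are illustrative choices, not from the source):

```c
#include <math.h>
#include <stdio.h>

/* P(La <= X <= Lb) for X ~ N(mu, sigma), using the erf identity above. */
static double prob_in_interval(double La, double Lb, double mu, double sigma) {
    double s = sigma * sqrt(2.0);
    return 0.5 * (erf((Lb - mu) / s) - erf((La - mu) / s));
}

int main(void) {
    /* Example: X ~ N(0, 1); P(-1 <= X <= 1) should be about 0.6827. */
    printf("P(-1 <= X <= 1) = %.6f\n", prob_in_interval(-1.0, 1.0, 0.0, 1.0));
    return 0;
}
```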
The property erf(−z) = −erf z means that the error function is an odd function. This directly results from the fact that the integrand e^(−t²) is an even function (the antiderivative of an even function which is zero at the origin is an odd function and vice versa).
Since the error function is an entire function which takes real numbers to real numbers, for any complex number z: {\displaystyle \operatorname {erf} {\overline {z}}={\overline {\operatorname {erf} z}}} where {\displaystyle {\overline {z}}} denotes the complex conjugate of z.
The integrand f = exp(−z²) and f = erf z can be visualized in the complex z-plane using domain coloring.
The error function at +∞ is exactly 1 (see Gaussian integral). On the real axis, erf z approaches 1 as z → +∞ and −1 as z → −∞. On the imaginary axis, it tends to ±i∞.
The error function is an entire function; it has no singularities (except at infinity) and its Taylor expansion always converges. For x ≫ 1, however, cancellation of leading terms makes the Taylor expansion impractical.
The defining integral cannot be evaluated in closed form in terms of elementary functions (see Liouville's theorem), but by expanding the integrand e^(−z²) into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as: {\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}-{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}-\cdots \right)\end{aligned}}} which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.
For iterative calculation of the above series, the following alternative formulation may be useful: {\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }\left(z\prod _{k=1}^{n}{\frac {-(2k-1)z^{2}}{k(2k+1)}}\right)\\[6pt]&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z}{2n+1}}\prod _{k=1}^{n}{\frac {-z^{2}}{k}}\end{aligned}}} because −(2k − 1)z²/(k(2k + 1)) expresses the multiplier to turn the kth term into the (k + 1)th term (considering z as the first term).
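A minimal C sketch of this iterative formulation (the function name erf_series, the iteration cap, and the stopping tolerance are illustrative choices); each new term is obtained from the previous one by the multiplier just described:

```c
#include <math.h>
#include <stdio.h>

/* erf(x) from the Maclaurin series, using the term-to-term multiplier
   -(2k-1)*x^2 / (k*(2k+1)) described above.  Adequate for moderate |x|;
   for large |x| the asymptotic expansion given later is preferable. */
static double erf_series(double x) {
    double term = x;      /* n = 0 term (before the 2/sqrt(pi) factor) */
    double sum  = x;
    for (int k = 1; k <= 200; ++k) {
        term *= -(2.0 * k - 1.0) * x * x / (k * (2.0 * k + 1.0));
        sum  += term;
        if (fabs(term) < 1e-17 * fabs(sum)) break;   /* series has converged */
    }
    return 2.0 / sqrt(acos(-1.0)) * sum;
}

int main(void) {
    for (double x = 0.5; x <= 3.0; x += 0.5)
        printf("x = %.1f   series = %.15f   erf = %.15f\n", x, erf_series(x), erf(x));
    return 0;
}
```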
The imaginary error function has a very similar Maclaurin series, which is: {\displaystyle {\begin{aligned}\operatorname {erfi} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z+{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}+{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}+\cdots \right)\end{aligned}}} which holds for every complex number z.
The derivative of the error function follows immediately from its definition: {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {erf} z={\frac {2}{\sqrt {\pi }}}e^{-z^{2}}.} From this, the derivative of the imaginary error function is also immediate: {\displaystyle {\frac {d}{dz}}\operatorname {erfi} z={\frac {2}{\sqrt {\pi }}}e^{z^{2}}.} An antiderivative of the error function, obtainable by integration by parts, is {\displaystyle z\operatorname {erf} z+{\frac {e^{-z^{2}}}{\sqrt {\pi }}}+C.} An antiderivative of the imaginary error function, also obtainable by integration by parts, is {\displaystyle z\operatorname {erfi} z-{\frac {e^{z^{2}}}{\sqrt {\pi }}}+C.} Higher order derivatives are given by {\displaystyle \operatorname {erf} ^{(k)}z={\frac {2(-1)^{k-1}}{\sqrt {\pi }}}{\mathit {H}}_{k-1}(z)e^{-z^{2}}={\frac {2}{\sqrt {\pi }}}{\frac {\mathrm {d} ^{k-1}}{\mathrm {d} z^{k-1}}}\left(e^{-z^{2}}\right),\qquad k=1,2,\dots } where H are the physicists' Hermite polynomials.[5]
An expansion,[6] which converges more rapidly for all real values of x than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem:[7] {\displaystyle {\begin{aligned}\operatorname {erf} x&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left(1-{\frac {1}{12}}\left(1-e^{-x^{2}}\right)-{\frac {7}{480}}\left(1-e^{-x^{2}}\right)^{2}-{\frac {5}{896}}\left(1-e^{-x^{2}}\right)^{3}-{\frac {787}{276480}}\left(1-e^{-x^{2}}\right)^{4}-\cdots \right)\\[10pt]&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+\sum _{k=1}^{\infty }c_{k}e^{-kx^{2}}\right).\end{aligned}}} where sgn is the sign function. By keeping only the first two coefficients and choosing c1 = 31/200 and c2 = −341/8000, the resulting approximation shows its largest relative error at x = ±1.40587, where it is less than 0.0034361: {\displaystyle \operatorname {erf} x\approx {\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+{\frac {31}{200}}e^{-x^{2}}-{\frac {341}{8000}}e^{-2x^{2}}\right).}
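The two-coefficient form above is easy to evaluate directly. A minimal C sketch (the function name erf_buermann is illustrative) comparing it with the library erf:

```c
#include <math.h>
#include <stdio.h>

/* Two-coefficient Buermann-type approximation with c1 = 31/200 and
   c2 = -341/8000, as quoted above (largest relative error about 3.4e-3
   near x = 1.41). */
static double erf_buermann(double x) {
    double s = (x >= 0.0) ? 1.0 : -1.0;
    double e = exp(-x * x);
    double sqrt_pi = sqrt(acos(-1.0));
    return (2.0 / sqrt_pi) * s * sqrt(1.0 - e) *
           (sqrt_pi / 2.0 + (31.0 / 200.0) * e - (341.0 / 8000.0) * e * e);
}

int main(void) {
    for (double x = 0.25; x <= 3.0; x += 0.25)
        printf("x = %.2f  approx = %.6f  erf = %.6f  rel.err = %.2e\n",
               x, erf_buermann(x), erf(x), fabs(erf_buermann(x) - erf(x)) / erf(x));
    return 0;
}
```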
Given a complex number z, there is not a unique complex number w satisfying erf w = z, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique real number denoted erf⁻¹ x satisfying {\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}x\right)=x.}
The inverse error function is usually defined with domain (−1,1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series[8] {\displaystyle \operatorname {erf} ^{-1}z=\sum _{k=0}^{\infty }{\frac {c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},} where c0 = 1 and {\displaystyle {\begin{aligned}c_{k}&=\sum _{m=0}^{k-1}{\frac {c_{m}c_{k-1-m}}{(m+1)(2m+1)}}\\[1ex]&=\left\{1,1,{\frac {7}{6}},{\frac {127}{90}},{\frac {4369}{2520}},{\frac {34807}{16200}},\ldots \right\}.\end{aligned}}}
So we have the series expansion (common factors have been canceled from numerators and denominators): {\displaystyle \operatorname {erf} ^{-1}z={\frac {\sqrt {\pi }}{2}}\left(z+{\frac {\pi }{12}}z^{3}+{\frac {7\pi ^{2}}{480}}z^{5}+{\frac {127\pi ^{3}}{40320}}z^{7}+{\frac {4369\pi ^{4}}{5806080}}z^{9}+{\frac {34807\pi ^{5}}{182476800}}z^{11}+\cdots \right).} (After cancellation the numerator and denominator values are those in OEIS: A092676 and OEIS: A092677 respectively; without cancellation the numerator terms are the values in OEIS: A002067.) Since the error function's value at ±∞ is equal to ±1, the inverse error function tends to ±∞ as z → ±1.
For |z| < 1, we have erf(erf⁻¹ z) = z.
The inverse complementary error function is defined as {\displaystyle \operatorname {erfc} ^{-1}(1-z)=\operatorname {erf} ^{-1}z.} For real x, there is a unique real number erfi⁻¹ x satisfying erfi(erfi⁻¹ x) = x; the inverse imaginary error function is defined accordingly.[9]
For any real x, Newton's method can be used to compute erfi⁻¹ x, and for −1 ≤ x ≤ 1, the following Maclaurin series converges: {\displaystyle \operatorname {erfi} ^{-1}z=\sum _{k=0}^{\infty }{\frac {(-1)^{k}c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},} where ck is defined as above.
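The same Newton iteration applies to erf⁻¹ on (−1, 1), using the derivative d/dw erf w = (2/√π)e^(−w²) given above. A minimal C sketch (the function name erf_inv, the starting guess, and the iteration cap are ad hoc illustrative choices, not a production-quality implementation):

```c
#include <math.h>
#include <stdio.h>

/* Inverse error function on (-1, 1) by Newton's method, using
   d/dw erf(w) = (2/sqrt(pi)) * exp(-w^2). */
static double erf_inv(double y) {
    double sqrt_pi = sqrt(acos(-1.0));
    double w = 0.0;                       /* crude starting guess */
    for (int i = 0; i < 60; ++i) {
        double step = (erf(w) - y) * (sqrt_pi / 2.0) * exp(w * w);
        w -= step;
        if (fabs(step) < 1e-15) break;
    }
    return w;
}

int main(void) {
    for (double y = -0.9; y <= 0.95; y += 0.3)
        printf("y = %+.2f   erfinv(y) = %+.12f   erf(erfinv(y)) = %+.12f\n",
               y, erf_inv(y), erf(erf_inv(y)));
    return 0;
}
```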
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is {\displaystyle {\begin{aligned}\operatorname {erfc} x&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\left(1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {1\cdot 3\cdot 5\cdots (2n-1)}{\left(2x^{2}\right)^{n}}}\right)\\[6pt]&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}},\end{aligned}}} where (2n − 1)!! is the double factorial of (2n − 1), which is the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as an asymptotic expansion is that for any integer N ≥ 1 one has {\displaystyle \operatorname {erfc} x={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{N-1}(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}}+R_{N}(x)} where the remainder is {\displaystyle R_{N}(x):={\frac {(-1)^{N}\,(2N-1)!!}{{\sqrt {\pi }}\cdot 2^{N-1}}}\int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t,} which follows easily by induction, writing {\displaystyle e^{-t^{2}}=-{\frac {1}{2t}}\,{\frac {\mathrm {d} }{\mathrm {d} t}}e^{-t^{2}}} and integrating by parts.
The asymptotic behavior of the remainder term, in Landau notation, is {\displaystyle R_{N}(x)=O\left(x^{-(1+2N)}e^{-x^{2}}\right)} as x → ∞. This can be found by {\displaystyle R_{N}(x)\propto \int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t=e^{-x^{2}}\int _{0}^{\infty }(t+x)^{-2N}e^{-t^{2}-2tx}\,\mathrm {d} t\leq e^{-x^{2}}\int _{0}^{\infty }x^{-2N}e^{-2tx}\,\mathrm {d} t\propto x^{-(1+2N)}e^{-x^{2}}.} For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc x (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).
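A minimal C sketch of the truncated asymptotic sum (the function name erfc_asymptotic and the fixed truncation order N are illustrative; in practice one stops adding terms before they start growing). The accuracy improves rapidly as x increases:

```c
#include <math.h>
#include <stdio.h>

/* Truncated asymptotic expansion of erfc for large x:
   erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum_{n=0}^{N-1} (-1)^n (2n-1)!! / (2x^2)^n. */
static double erfc_asymptotic(double x, int N) {
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < N; ++n) {
        term *= -(2.0 * n - 1.0) / (2.0 * x * x);   /* multiply by (2n-1)/(2x^2), alternate sign */
        sum  += term;
    }
    return exp(-x * x) / (x * sqrt(acos(-1.0))) * sum;
}

int main(void) {
    for (double x = 2.0; x <= 5.0; x += 1.0)
        printf("x = %.0f   asymptotic (N=5) = %.12e   erfc = %.12e\n",
               x, erfc_asymptotic(x, 5), erfc(x));
    return 0;
}
```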
A continued fraction expansion of the complementary error function was found by Laplace:[10][11] {\displaystyle \operatorname {erfc} z={\frac {z}{\sqrt {\pi }}}e^{-z^{2}}{\cfrac {1}{z^{2}+{\cfrac {a_{1}}{1+{\cfrac {a_{2}}{z^{2}+{\cfrac {a_{3}}{1+\dotsb }}}}}}}},\qquad a_{m}={\frac {m}{2}}.}
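For real z > 0 the continued fraction can be evaluated bottom-up from a fixed depth. A minimal C sketch (the function name erfc_cf and the truncation depth are illustrative choices; convergence is best for moderately large z, while for small z the Maclaurin series above is preferable):

```c
#include <math.h>
#include <stdio.h>

/* Laplace's continued fraction for erfc, truncated at a fixed depth and
   evaluated from the bottom up. */
static double erfc_cf(double z, int depth) {
    /* Partial denominators alternate z^2, 1, z^2, 1, ...; numerators a_m = m/2. */
    double f = (depth % 2 == 0) ? z * z : 1.0;          /* b_depth */
    for (int m = depth; m >= 1; --m) {
        double b = ((m - 1) % 2 == 0) ? z * z : 1.0;    /* b_{m-1} */
        f = b + (m / 2.0) / f;
    }
    return z / sqrt(acos(-1.0)) * exp(-z * z) / f;
}

int main(void) {
    for (double z = 1.0; z <= 4.0; z += 1.0)
        printf("z = %.0f   cf = %.12e   erfc = %.12e\n", z, erfc_cf(z, 40), erfc(z));
    return 0;
}
```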
The inverse factorial series {\displaystyle {\begin{aligned}\operatorname {erfc} z&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\sum _{n=0}^{\infty }{\frac {\left(-1\right)^{n}Q_{n}}{{\left(z^{2}+1\right)}^{\bar {n}}}}\\[1ex]&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\left[1-{\frac {1}{2}}{\frac {1}{(z^{2}+1)}}+{\frac {1}{4}}{\frac {1}{\left(z^{2}+1\right)\left(z^{2}+2\right)}}-\cdots \right]\end{aligned}}} converges for Re(z²) > 0. Here {\displaystyle {\begin{aligned}Q_{n}&{\overset {\text{def}}{{}={}}}{\frac {1}{\Gamma {\left({\frac {1}{2}}\right)}}}\int _{0}^{\infty }\tau (\tau -1)\cdots (\tau -n+1)\tau ^{-{\frac {1}{2}}}e^{-\tau }\,d\tau \\[1ex]&=\sum _{k=0}^{n}\left({\frac {1}{2}}\right)^{\bar {k}}s(n,k),\end{aligned}}} where {\displaystyle z^{\bar {n}}} denotes the rising factorial, and s(n,k) denotes a signed Stirling number of the first kind.[12][13] There also exists a representation by an infinite sum containing the double factorial: {\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-2)^{n}(2n-1)!!}{(2n+1)!}}z^{2n+1}}
Abramowitz and Stegun give several approximations of varying accuracy for erf x with x ≥ 0; in order of increasing accuracy:
{\displaystyle \operatorname {erf} x\approx 1-{\frac {1}{\left(1+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4}\right)^{4}}},\qquad x\geq 0} (maximum error: 5×10⁻⁴)
where a1 = 0.278393, a2 = 0.230389, a3 = 0.000972, a4 = 0.078108
{\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+a_{3}t^{3}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}},\qquad x\geq 0} (maximum error: 2.5×10⁻⁵)
where p = 0.47047, a1 = 0.3480242, a2 = −0.0958798, a3 = 0.7478556
{\displaystyle \operatorname {erf} x\approx 1-{\frac {1}{\left(1+a_{1}x+a_{2}x^{2}+\cdots +a_{6}x^{6}\right)^{16}}},\qquad x\geq 0} (maximum error: 3×10⁻⁷)
where a1 = 0.0705230784, a2 = 0.0422820123, a3 = 0.0092705272, a4 = 0.0001520143, a5 = 0.0002765672, a6 = 0.0000430638
{\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+\cdots +a_{5}t^{5}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}}} (maximum error: 1.5×10⁻⁷)
where p = 0.3275911, a1 = 0.254829592, a2 = −0.284496736, a3 = 1.421413741, a4 = −1.453152027, a5 = 1.061405429
All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf x is an odd function, so erf x = −erf(−x).
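As an illustration, the last approximation above (maximum error about 1.5×10⁻⁷), extended to negative arguments via the odd symmetry, can be coded directly. A minimal C sketch (the function name erf_approx is illustrative):

```c
#include <math.h>
#include <stdio.h>

/* Five-term approximation with the coefficients quoted above, extended to
   negative x via erf(-x) = -erf(x). */
static double erf_approx(double x) {
    static const double p  = 0.3275911;
    static const double a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741,
                        a4 = -1.453152027, a5 = 1.061405429;
    double ax = fabs(x);
    double t  = 1.0 / (1.0 + p * ax);
    /* Horner evaluation of a1*t + a2*t^2 + ... + a5*t^5 */
    double y  = 1.0 - (((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t) * exp(-ax * ax);
    return (x >= 0.0) ? y : -y;
}

int main(void) {
    for (double x = -2.0; x <= 2.0; x += 0.5)
        printf("x = %+.1f   approx = %+.9f   erf = %+.9f\n", x, erf_approx(x), erf(x));
    return 0;
}
```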
Winitzki[32] gives a "global Padé approximation" {\displaystyle \operatorname {erf} x\approx \operatorname {sgn} x\cdot {\sqrt {1-\exp \left(-x^{2}{\frac {{\frac {4}{\pi }}+ax^{2}}{1+ax^{2}}}\right)}},\qquad a={\frac {8(\pi -3)}{3\pi (4-\pi )}}\approx 0.140,} valid for all real x. This approximation can be inverted to obtain an approximation for the inverse error function: {\displaystyle \operatorname {erf} ^{-1}x\approx \operatorname {sgn} x\cdot {\sqrt {{\sqrt {\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)^{2}-{\frac {\ln \left(1-x^{2}\right)}{a}}}}-\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)}}.}
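A minimal C sketch of this closed-form inverse (the function name erf_inv_approx is illustrative; it uses a = 8(π − 3)/(3π(4 − π)) from the approximation above, and its accuracy is only on the order of a percent or better on (−1, 1)):

```c
#include <math.h>
#include <stdio.h>

/* Closed-form approximation of the inverse error function quoted above. */
static double erf_inv_approx(double x) {
    double pi = acos(-1.0);
    double a  = 8.0 * (pi - 3.0) / (3.0 * pi * (4.0 - pi));
    double s  = (x >= 0.0) ? 1.0 : -1.0;
    double l  = log(1.0 - x * x);
    double u  = 2.0 / (pi * a) + l / 2.0;
    return s * sqrt(sqrt(u * u - l / a) - u);
}

int main(void) {
    for (double x = -0.9; x <= 0.95; x += 0.3)
        printf("x = %+.2f   approx = %+.6f   erf(approx) = %+.6f\n",
               x, erf_inv_approx(x), erf(erf_inv_approx(x)));
    return 0;
}
```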
The complementary error function, denoted erfc, is defined as
{\displaystyle {\begin{aligned}\operatorname {erfc} x&=1-\operatorname {erf} x\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{x}^{\infty }e^{-t^{2}}\,\mathrm {d} t\\[5pt]&=e^{-x^{2}}\operatorname {erfcx} x,\end{aligned}}} which also defines erfcx, the scaled complementary error function[26] (which can be used instead of erfc to avoid arithmetic underflow[26][27]). Another form of erfc x for x ≥ 0 is known as Craig's formula, after its discoverer:[28] {\displaystyle \operatorname {erfc} (x\mid x\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}\right)\,\mathrm {d} \theta .} This expression is valid only for positive values of x, but it can be used in conjunction with erfc x = 2 − erfc(−x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the erfc of the sum of two non-negative variables is as follows:[29] {\displaystyle \operatorname {erfc} (x+y\mid x,y\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}-{\frac {y^{2}}{\cos ^{2}\theta }}\right)\,\mathrm {d} \theta .}
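Because the integration range in Craig's formula is fixed and finite, it is straightforward to evaluate with elementary quadrature. A minimal C sketch (the function name erfc_craig, the midpoint rule, and the number of subintervals are illustrative choices):

```c
#include <math.h>
#include <stdio.h>

/* Craig's formula for erfc(x), x >= 0, evaluated with a plain midpoint rule
   over the fixed interval [0, pi/2]. */
static double erfc_craig(double x) {
    const int n = 2000;
    double pi = acos(-1.0), h = (pi / 2.0) / n, sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double theta = (i + 0.5) * h;           /* midpoint of each subinterval */
        double s = sin(theta);
        sum += exp(-x * x / (s * s));
    }
    return (2.0 / pi) * sum * h;
}

int main(void) {
    for (double x = 0.5; x <= 3.0; x += 0.5)
        printf("x = %.1f   Craig = %.12e   erfc = %.12e\n", x, erfc_craig(x), erfc(x));
    return 0;
}
```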
The imaginary error function, denoted erfi, is defined as
{\displaystyle {\begin{aligned}\operatorname {erfi} x&=-i\operatorname {erf} ix\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{t^{2}}\,\mathrm {d} t\\[5pt]&={\frac {2}{\sqrt {\pi }}}e^{x^{2}}D(x),\end{aligned}}} where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow[26]).
Despite the name "imaginary error function", erfi x is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function: {\displaystyle w(z)=e^{-z^{2}}\operatorname {erfc} (-iz)=\operatorname {erfcx} (-iz).}
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ (also called norm(x) in some software languages), as they differ only by scaling and translation. Indeed,
{\displaystyle {\begin{aligned}\Phi (x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{\tfrac {-t^{2}}{2}}\,\mathrm {d} t\\[6pt]&={\frac {1}{2}}\left(1+\operatorname {erf} {\frac {x}{\sqrt {2}}}\right)\\[6pt]&={\frac {1}{2}}\operatorname {erfc} \left(-{\frac {x}{\sqrt {2}}}\right)\end{aligned}}} or rearranged for erf and erfc: {\displaystyle {\begin{aligned}\operatorname {erf} (x)&=2\Phi {\left(x{\sqrt {2}}\right)}-1\\[6pt]\operatorname {erfc} (x)&=2\Phi {\left(-x{\sqrt {2}}\right)}\\&=2\left(1-\Phi {\left(x{\sqrt {2}}\right)}\right).\end{aligned}}}
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as {\displaystyle {\begin{aligned}Q(x)&={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} {\frac {x}{\sqrt {2}}}\\&={\frac {1}{2}}\operatorname {erfc} {\frac {x}{\sqrt {2}}}.\end{aligned}}}
The inverse of Φ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as {\displaystyle \operatorname {probit} (p)=\Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1)=-{\sqrt {2}}\operatorname {erfc} ^{-1}(2p).}
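A minimal C sketch collecting these relations (the helper names Phi, Q, erf_inv, and probit are illustrative; erf_inv reuses the Newton iteration sketched in the inverse-function section, since the C standard library provides no inverse error function):

```c
#include <math.h>
#include <stdio.h>

/* Standard normal CDF, tail probability, and quantile expressed through erf/erfc. */
static double Phi(double x)  { return 0.5 * erfc(-x / sqrt(2.0)); }
static double Q(double x)    { return 0.5 * erfc( x / sqrt(2.0)); }

static double erf_inv(double y) {               /* Newton iteration, as sketched earlier */
    double sqrt_pi = sqrt(acos(-1.0)), w = 0.0;
    for (int i = 0; i < 60; ++i) {
        double step = (erf(w) - y) * (sqrt_pi / 2.0) * exp(w * w);
        w -= step;
        if (fabs(step) < 1e-15) break;
    }
    return w;
}
static double probit(double p) { return sqrt(2.0) * erf_inv(2.0 * p - 1.0); }

int main(void) {
    printf("Phi(1.96) = %.6f   Q(1.96) = %.6f\n", Phi(1.96), Q(1.96));
    printf("probit(0.975) = %.6f (should be about 1.96)\n", probit(0.975));
    return 0;
}
```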
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function): {\displaystyle \operatorname {erf} x={\frac {2x}{\sqrt {\pi }}}M\left({\tfrac {1}{2}},{\tfrac {3}{2}},-x^{2}\right).}
It has a simple expression in terms of the Fresnel integral.
In terms of the regularized gamma function P and the incomplete gamma function, {\displaystyle \operatorname {erf} x=\operatorname {sgn} x\cdot P\left({\tfrac {1}{2}},x^{2}\right)={\frac {\operatorname {sgn} x}{\sqrt {\pi }}}\gamma {\left({\tfrac {1}{2}},x^{2}\right)}.} where sgn x is the sign function.
The iterated integrals of the complementary error function are defined by[30] {\displaystyle {\begin{aligned}i^{n}\!\operatorname {erfc} z&=\int _{z}^{\infty }i^{n-1}\!\operatorname {erfc} \zeta \,\mathrm {d} \zeta \\[6pt]i^{0}\!\operatorname {erfc} z&=\operatorname {erfc} z\\i^{1}\!\operatorname {erfc} z&=\operatorname {ierfc} z={\frac {1}{\sqrt {\pi }}}e^{-z^{2}}-z\operatorname {erfc} z\\i^{2}\!\operatorname {erfc} z&={\tfrac {1}{4}}\left(\operatorname {erfc} z-2z\operatorname {ierfc} z\right)\\\end{aligned}}}
The general recurrence formula is {\displaystyle 2n\cdot i^{n}\!\operatorname {erfc} z=i^{n-2}\!\operatorname {erfc} z-2z\cdot i^{n-1}\!\operatorname {erfc} z}
They have the power series {\displaystyle i^{n}\!\operatorname {erfc} z=\sum _{j=0}^{\infty }{\frac {(-z)^{j}}{2^{n-j}j!\,\Gamma \left(1+{\frac {n-j}{2}}\right)}},} from which follow the symmetry properties {\displaystyle i^{2m}\!\operatorname {erfc} (-z)=-i^{2m}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac {z^{2q}}{2^{2(m-q)-1}(2q)!(m-q)!}}} and {\displaystyle i^{2m+1}\!\operatorname {erfc} (-z)=i^{2m+1}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac {z^{2q+1}}{2^{2(m-q)-1}(2q+1)!(m-q)!}}.}
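The recurrence gives a direct way to compute the iterated integrals from erfc and ierfc. A minimal C sketch (the function name iterated_erfc is illustrative; the forward recurrence can lose accuracy for large n or large |z|):

```c
#include <math.h>
#include <stdio.h>

/* i^n erfc(z) via the recurrence 2n * i^n erfc(z) = i^(n-2) erfc(z) - 2z * i^(n-1) erfc(z),
   seeded with i^0 erfc = erfc and i^1 erfc = ierfc. */
static double iterated_erfc(int n, double z) {
    double prev2 = erfc(z);                                      /* i^0 erfc */
    double prev1 = exp(-z * z) / sqrt(acos(-1.0)) - z * erfc(z); /* i^1 erfc */
    if (n == 0) return prev2;
    if (n == 1) return prev1;
    double cur = 0.0;
    for (int k = 2; k <= n; ++k) {
        cur = (prev2 - 2.0 * z * prev1) / (2.0 * k);
        prev2 = prev1;
        prev1 = cur;
    }
    return cur;
}

int main(void) {
    double z = 0.5;
    /* i^2 erfc(z) should equal (erfc(z) - 2*z*ierfc(z)) / 4, as stated above. */
    double ierfc_z = exp(-z * z) / sqrt(acos(-1.0)) - z * erfc(z);
    double direct  = 0.25 * (erfc(z) - 2.0 * z * ierfc_z);
    printf("i^2 erfc(%.1f): recurrence = %.12f   direct = %.12f\n",
           z, iterated_erfc(2, z), direct);
    return 0;
}
```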
In the C language, erf and erfc are declared in the standard header math.h and implemented in libm, together with the float and long double variants erff, erfl, erfcf, and erfcl; some libraries also provide the logarithm of the error function, log(erf). The libcerf library provides the complex error functions cerf, cerfc, and cerfcx, as well as erfi and the scaled complementary error function erfcx.
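A minimal usage sketch of the C99 error-function family listed above (the printed precisions are arbitrary choices):

```c
#include <math.h>
#include <stdio.h>

/* Minimal use of the C99 error-function family declared in math.h. */
int main(void) {
    double      x  = 1.0;
    float       xf = 1.0f;
    long double xl = 1.0L;

    printf("erf(%g)   = %.17g\n", x, erf(x));
    printf("erfc(%g)  = %.17g\n", x, erfc(x));
    printf("erff(%g)  = %.9g\n", (double)xf, (double)erff(xf));
    printf("erfcl(%g) = %.20Lg\n", (double)xl, erfcl(xl));
    return 0;
}
```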