Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962.[1] It is probably the first recursive adaptive algorithm for numerical integration to appear in print,[2] although more modern adaptive methods based on Gauss–Kronrod quadrature and Clenshaw–Curtis quadrature are now generally preferred. Adaptive Simpson's method uses an estimate of the error that results from calculating a definite integral by Simpson's rule. If the error exceeds a user-specified tolerance, the algorithm subdivides the interval of integration into two halves and applies adaptive Simpson's method to each subinterval recursively. The technique is usually much more efficient than composite Simpson's rule, since it uses fewer function evaluations in places where the function is well approximated by a cubic function.
Simpson's rule is an interpolatory quadrature rule which is exact when the integrand is a polynomial of degree three or lower. Using Richardson extrapolation, the more accurate Simpson estimate S(a, m) + S(m, b) for six function values is combined with the less accurate estimate S(a, b) for three function values by applying the correction [S(a, m) + S(m, b) − S(a, b)]/15. The resulting estimate is exact for polynomials of degree five or less.
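The factor 1/15 can be motivated by a short Richardson-extrapolation argument (a sketch; E denotes the unknown error of the coarse estimate):

```latex
\int_a^b f(x)\,dx = S(a,b) + E, \qquad
\int_a^b f(x)\,dx \approx S(a,m) + S(m,b) + \frac{E}{16}
```

Since Simpson's error on an interval scales as the fifth power of its width, each half-interval contributes about E/32, so the two-panel estimate carries roughly E/16 of the original error. Subtracting the two equations gives S(a, m) + S(m, b) − S(a, b) ≈ (15/16)E, so the residual error E/16 of the finer estimate is approximately [S(a, m) + S(m, b) − S(a, b)]/15.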
A criterion for determining when to stop subdividing an interval, suggested by J.N. Lyness,[3] is

|S(a, m) + S(m, b) − S(a, b)| < 15ε

where [a, b] is an interval with midpoint m = (a + b)/2, while S(a, b), S(a, m), and S(m, b), given by Simpson's rule, are the estimates of ∫_a^b f(x) dx, ∫_a^m f(x) dx, and ∫_m^b f(x) dx respectively, and ε is the desired maximum error tolerance for the interval.
Note that the tolerance is halved with each subdivision: ε_{i+1} = ε_i/2.
To perform adaptive Simpson's method, do the following: if |S(a, m) + S(m, b) − S(a, b)| < 15ε_i, add S(a, m) and S(m, b) to the sum of Simpson's rules which are used to approximate the integral; otherwise, perform the same operation on each half, testing |S(a, (a + m)/2) + S((a + m)/2, m) − S(a, m)| < 15ε_{i+1} on [a, m] and |S(m, (m + b)/2) + S((m + b)/2, b) − S(m, b)| < 15ε_{i+1} on [m, b] in place of |S(a, m) + S(m, b) − S(a, b)| < 15ε_i.
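The procedure above can be sketched directly in Python (a minimal, unoptimized sketch; the function names are illustrative, and function values are re-evaluated rather than reused):

```python
from math import sin, cos

def simpson(f, a, b):
    """Simpson's rule S(a, b) on a single interval."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def adaptive_simpson(f, a, b, eps):
    """Recursive adaptive Simpson: subdivide until Lyness's criterion holds."""
    m = (a + b) / 2
    whole, left, right = simpson(f, a, b), simpson(f, a, m), simpson(f, m, b)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:            # Lyness's stopping criterion
        return left + right + delta / 15  # Richardson correction
    # halve the tolerance for each subinterval (eps_{i+1} = eps_i / 2)
    return adaptive_simpson(f, a, m, eps / 2) + adaptive_simpson(f, m, b, eps / 2)

print(adaptive_simpson(sin, 0, 1, 1e-9))  # close to 1 - cos(1)
```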
Some inputs will fail to converge quickly in adaptive Simpson's method, causing the tolerance to underflow and produce an infinite loop. Simple ways of guarding against this problem include adding a depth limitation (as in the C sample and in McKeeman), verifying that ε/2 ≠ ε in floating-point arithmetic, or both (as in Kuncir). The interval size may also approach the local machine epsilon, giving a = b.
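The underflow is easy to demonstrate in double precision: repeatedly halving a tolerance eventually yields ε/2 == ε (a standalone Python illustration, not part of any of the implementations below):

```python
eps = 1e-9
halvings = 0
while eps / 2 != eps:  # the guard tested before recursing with eps/2
    eps /= 2
    halvings += 1
print(eps, halvings)   # eps has underflowed to 0.0 after roughly a thousand halvings
```

Without such a guard, a recursion that halves ε on every level eventually passes ε = 0 down, and the criterion |Δ| < 15ε can never be met.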
Lyness's 1969 paper includes a "Modification 4" that addresses this problem in a more concrete way:[3]: 490–2
The epsilon-raising maneuver allows the routine to be used in a "best effort" mode: given a zero initial tolerance, the routine will try to obtain the most precise answer possible and report the error level actually achieved.[3]: 492
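One plausible realization of the best-effort idea is sketched below (a hypothetical Python sketch, not Lyness's exact Modification 4): whenever subdivision must stop, the local result is accepted and its error estimate |Δ|/15 is accumulated into a reported error level.

```python
from math import sin, cos

def best_effort_asr(f, a, b, eps, depth=20):
    """Hypothetical best-effort adaptive Simpson: returns (integral, error).

    When subdivision must stop (depth exhausted) the local result is accepted
    and its error estimate |delta|/15 is added to the reported error level.
    """
    def simpson_mem(a, fa, b, fb):
        m = (a + b) / 2
        fm = f(m)
        return m, fm, abs(b - a) / 6 * (fa + 4 * fm + fb)

    def helper(a, fa, b, fb, eps, whole, m, fm, depth):
        lm, flm, left = simpson_mem(a, fa, m, fm)
        rm, frm, right = simpson_mem(m, fm, b, fb)
        delta = left + right - whole
        if depth <= 0 or abs(delta) <= 15 * eps:
            # accept; |delta|/15 estimates the remaining error
            return left + right + delta / 15, abs(delta) / 15
        i1, e1 = helper(a, fa, m, fm, eps / 2, left, lm, flm, depth - 1)
        i2, e2 = helper(m, fm, b, fb, eps / 2, right, rm, frm, depth - 1)
        return i1 + i2, e1 + e2

    fa, fb = f(a), f(b)
    m, fm, whole = simpson_mem(a, fa, b, fb)
    return helper(a, fa, b, fb, eps, whole, m, fm, depth)

# zero tolerance: work until the depth cap, then report the achieved error
print(best_effort_asr(sin, 0, 1, 0.0, depth=12))
```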
A common implementation technique, shown below, is to pass down f(a), f(b), and f(m) along with the interval [a, b]. These values, already used for evaluating S(a, b) at the parent level, are reused for the subintervals. Doing so cuts the cost of each recursive call from 6 to 2 evaluations of the input function. The stack space used stays linear in the recursion depth.
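The saving can be measured by counting calls. In the Python sketch below (the `Counter` wrapper and the two mini-implementations are illustrative, not part of the article's code), the naive variant recomputes endpoint values on every call while the memoized variant passes them down; both follow the same subdivision tree.

```python
from math import sin, cos

class Counter:
    """Wraps f and counts how many times it is evaluated."""
    def __init__(self, f):
        self.f, self.calls = f, 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

def _simpson(f, a, b):
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def asr_naive(f, a, b, eps, whole):
    """Re-evaluates f at the five points: 6 new evaluations per call."""
    m = (a + b) / 2
    left, right = _simpson(f, a, m), _simpson(f, m, b)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:
        return left + right + delta / 15
    return (asr_naive(f, a, m, eps / 2, left) +
            asr_naive(f, m, b, eps / 2, right))

def asr_memo(f, a, fa, b, fb, m, fm, eps, whole):
    """Passes f(a), f(m), f(b) down: only 2 new evaluations per call."""
    lm, rm = (a + m) / 2, (m + b) / 2
    flm, frm = f(lm), f(rm)
    left = (m - a) / 6 * (fa + 4 * flm + fm)
    right = (b - m) / 6 * (fm + 4 * frm + fb)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:
        return left + right + delta / 15
    return (asr_memo(f, a, fa, m, fm, lm, flm, eps / 2, left) +
            asr_memo(f, m, fm, b, fb, rm, frm, eps / 2, right))

naive, memo = Counter(sin), Counter(sin)
r1 = asr_naive(naive, 0, 1, 1e-9, _simpson(naive, 0, 1))
a, b = 0, 1
fa, fb, fm = memo(a), memo(b), memo((a + b) / 2)
whole = (b - a) / 6 * (fa + 4 * fm + fb)
r2 = asr_memo(memo, a, fa, b, fb, (a + b) / 2, fm, 1e-9, whole)
print(naive.calls, memo.calls)  # the memoized variant needs far fewer calls
```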
Here is an implementation of adaptive Simpson's method in Python.
from __future__ import division  # python 2 compat

# "structured" adaptive version, translated from Racket
def _quad_simpsons_mem(f, a, fa, b, fb):
    """Evaluates the Simpson's Rule, also returning m and f(m) to reuse"""
    m = (a + b) / 2
    fm = f(m)
    return (m, fm, abs(b - a) / 6 * (fa + 4 * fm + fb))

def _quad_asr(f, a, fa, b, fb, eps, whole, m, fm):
    """
    Efficient recursive implementation of adaptive Simpson's rule.
    Function values at the start, middle, end of the intervals are retained.
    """
    lm, flm, left = _quad_simpsons_mem(f, a, fa, m, fm)
    rm, frm, right = _quad_simpsons_mem(f, m, fm, b, fb)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:
        return left + right + delta / 15
    return _quad_asr(f, a, fa, m, fm, eps/2, left , lm, flm) +\
           _quad_asr(f, m, fm, b, fb, eps/2, right, rm, frm)

def quad_asr(f, a, b, eps):
    """Integrate f from a to b using Adaptive Simpson's Rule with max error of eps."""
    fa, fb = f(a), f(b)
    m, fm, whole = _quad_simpsons_mem(f, a, fa, b, fb)
    return _quad_asr(f, a, fa, b, fb, eps, whole, m, fm)

from math import sin
print(quad_asr(sin, 0, 1, 1e-09))
Here is an implementation of the adaptive Simpson's method in C99 that avoids redundant evaluations of f and quadrature computations. It includes all three "simple" defenses against numerical problems.
#include <math.h>   // include file for fabs and sin
#include <stdio.h>  // include file for printf and perror
#include <errno.h>

/** Adaptive Simpson's Rule, Recursive Core */
float adaptiveSimpsonsAux(float (*f)(float), float a, float b, float eps,
                          float whole, float fa, float fb, float fm, int rec) {
    float m = (a + b)/2, h = (b - a)/2;
    float lm = (a + m)/2, rm = (m + b)/2;
    // serious numerical trouble: it won't converge
    if ((eps/2 == eps) || (a == lm)) { errno = EDOM; return whole; }
    float flm = f(lm), frm = f(rm);
    float left  = (h/6) * (fa + 4*flm + fm);
    float right = (h/6) * (fm + 4*frm + fb);
    float delta = left + right - whole;

    if (rec <= 0 && errno != EDOM) errno = ERANGE;  // depth limit too shallow
    // Lyness 1969 + Richardson extrapolation; see article
    if (rec <= 0 || fabs(delta) <= 15*eps)
        return left + right + (delta)/15;
    return adaptiveSimpsonsAux(f, a, m, eps/2, left,  fa, fm, flm, rec-1) +
           adaptiveSimpsonsAux(f, m, b, eps/2, right, fm, fb, frm, rec-1);
}

/** Adaptive Simpson's Rule Wrapper
 *  (fills in cached function evaluations) */
float adaptiveSimpsons(float (*f)(float),   // function ptr to integrate
                       float a, float b,    // interval [a,b]
                       float epsilon,       // error tolerance
                       int maxRecDepth) {   // recursion cap
    errno = 0;
    float h = b - a;
    if (h == 0) return 0;
    float fa = f(a), fb = f(b), fm = f((a + b)/2);
    float S = (h/6)*(fa + 4*fm + fb);
    return adaptiveSimpsonsAux(f, a, b, epsilon, S, fa, fb, fm, maxRecDepth);
}

/** usage example */
#include <stdlib.h>  // for the hostile example (rand function)
static int callcnt = 0;
static float sinfc(float x) { callcnt++; return sinf(x); }
static float frand48c(float x) { callcnt++; return drand48(); }

int main() {
    // Let I be the integral of sin(x) from 0 to 2
    float I = adaptiveSimpsons(sinfc, 0, 2, 1e-5, 3);
    printf("integrate(sinf, 0, 2) = %lf\n", I);  // print the result
    perror("adaptiveSimpsons");  // Was it successful? (depth=1 is too shallow)
    printf("(%d evaluations)\n", callcnt);

    callcnt = 0; srand48(0);
    I = adaptiveSimpsons(frand48c, 0, 0.25, 1e-5, 25);  // a random function
    printf("integrate(frand48, 0, 0.25) = %lf\n", I);
    perror("adaptiveSimpsons");  // won't converge
    printf("(%d evaluations)\n", callcnt);
    return 0;
}
This implementation has been incorporated into a C++ ray tracer intended for X-Ray Laser simulation at Oak Ridge National Laboratory,[4] among other projects. The ORNL version has been enhanced with a call counter, templates for different datatypes, and wrappers for integrating over multiple dimensions.[4]
Here is an implementation of the adaptive Simpson method in Racket with a behavioral software contract. The exported function computes the definite integral of a given function f between the bounds L and R.[5]
;; -----------------------------------------------------------------------------
;; interface, with contract
(provide/contract
 [adaptive-simpson
  (->i ((f (-> real? real?)) (L real?) (R (L) (and/c real? (>/c L))))
       (#:epsilon (ε real?))
       (r real?))])

;; -----------------------------------------------------------------------------
;; implementation
(define (adaptive-simpson f L R #:epsilon [ε .000000001])
  (define f@L (f L))
  (define f@R (f R))
  (define-values (M f@M whole) (simpson-1call-to-f f L f@L R f@R))
  (asr f L f@L R f@R ε whole M f@M))

;; the "efficient" implementation
(define (asr f L f@L R f@R ε whole M f@M)
  (define-values (leftM  f@leftM  left*)  (simpson-1call-to-f f L f@L M f@M))
  (define-values (rightM f@rightM right*) (simpson-1call-to-f f M f@M R f@R))
  (define delta* (- (+ left* right*) whole))
  (cond
    [(<= (abs delta*) (* 15 ε)) (+ left* right* (/ delta* 15))]
    [else (define epsilon1 (/ ε 2))
          (+ (asr f L f@L M f@M epsilon1 left*  leftM  f@leftM)
             (asr f M f@M R f@R epsilon1 right* rightM f@rightM))]))

;; evaluate half an interval (1 func eval)
(define (simpson-1call-to-f f L f@L R f@R)
  (define M (mid L R))
  (define f@M (f M))
  (values M f@M (* (/ (abs (- R L)) 6) (+ f@L (* 4 f@M) f@R))))

(define (mid L R) (/ (+ L R) 2.))