March 8, 2016 --- Class 17 --- Numerical Integration in One Dimension,
Error Estimation, Simpson's Rule Rederived, Examples
Numerical Integration
We continued our discussion of some of the standard methods for
numerical integration in one dimension. This is discussed in
Section 11.1 of CSM. Additional material is available in
Numerical Recipes or in Abramowitz and Stegun. In the last class
we derived the rectangle rule, trapezoid rule, and Simpson's rule.
Simpson's rule, with Mathematica
There is a Mathematica notebook showing how integrals needed for
the derivation of Simpson's rule can be easily done. Of course,
the integrals can also be done by hand. The notebook can be found
in ~sg/Documents/SimpsonsRule2016.nb and there is a pdf in
~sg/Documents/SimpsonsRule2016.pdf.
Error estimates
We derived error estimates for the rectangle and trapezoid rules by
Taylor expanding the function. For the rectangle rule,
total error = (1/2) h (b-a) <f'>
where h is the bin width, sometimes called \Delta x, the integral
goes from a to b, and <f'> stands for the average of the derivative
of the function.
For the trapezoid rule, the corresponding formula is
total error = (1/12) h^2 (b-a) <f''>
where <f''> is the average of the second derivative of the function.
An interesting point about the trapezoid rule is that the error
expansion contains only even powers of h, i.e., the next order term
is h^4, not h^3. This will be important later.
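The h^2 scaling of the trapezoid rule's error is easy to check numerically. The sketch below (hypothetical helper names, not the class code) halves the bin width and verifies that the error drops by about a factor of 4:

```python
# Sketch: empirically check that the composite trapezoid rule's
# error falls like h^2 by halving the bin width.
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n bins of width h = (b-a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

exact = 2.0  # integral of sin(x) from 0 to pi
err_n  = abs(trapezoid(math.sin, 0.0, math.pi, 100) - exact)
err_2n = abs(trapezoid(math.sin, 0.0, math.pi, 200) - exact)
ratio = err_n / err_2n  # should be close to 4 = 2^2
print(ratio)
```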
The derivation for Simpson's rule is a little different in that we
just Taylor expand the function around the midpoint of the pair of
bins and get an expression that involves the second derivative of f
and indicates an error of order h^5 (in the pair of bins). Then, we
needed to use a finite difference approximation for the second
derivative. We rederived Simpson's rule and found the local error
is order h^5, including the error in the approximation to the 2nd
derivative. The global error is then order h^4. This is two orders
better than the trapezoid rule. This remarkable benefit comes from
the symmetry that arises in using two adjacent bins. Another way to
see this comes from considering what we might be neglecting when we
used a parabolic function to approximate f(x), which was the first
way we derived Simpson's rule. Our parabolic approximation to f(x)
can differ from the true function by cubic, quartic and higher order
polynomials, but all those polynomials must vanish at x0, x1 and x2.
It is convenient to translate the x axis by x1, so our integration
region goes from -h to +h. In that case, the higher order
polynomials vanish at -h, 0, and +h. The lowest order polynomial
that does so is the cubic (x+h)x(x-h). Multiplying by another power
of x gives a quartic that vanishes at the same places. The
Mathematica notebook mentioned above integrates both these
functions. The integral of the cubic is zero by symmetry. The
integral of the quartic does not vanish, but it is proportional
to h^5, thus confirming the local error estimate above.
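These two polynomial integrals can also be checked without Mathematica. The sketch below (hypothetical helper names) uses exact rational arithmetic to integrate the cubic (x+h)x(x-h) and the quartic x^2(x+h)(x-h) over [-h, +h]:

```python
# Sketch: integrate the cubic (x+h)x(x-h) and the quartic
# x^2(x+h)(x-h) exactly over [-h, +h] with the power rule.
from fractions import Fraction

def integral_monomial(k, h):
    """Exact integral of x^k over [-h, h]."""
    if k % 2 == 1:
        return Fraction(0)  # odd powers vanish by symmetry
    return 2 * Fraction(h) ** (k + 1) / (k + 1)

def integral_cubic(h):    # (x+h)x(x-h) = x^3 - h^2 x
    return integral_monomial(3, h) - h**2 * integral_monomial(1, h)

def integral_quartic(h):  # x^2(x+h)(x-h) = x^4 - h^2 x^2
    return integral_monomial(4, h) - h**2 * integral_monomial(2, h)

print(integral_cubic(1))    # 0: vanishes by symmetry
print(integral_quartic(1))  # -4/15
# doubling h multiplies the quartic's integral by 2^5 = 32, i.e. ~ h^5
print(integral_quartic(2) / integral_quartic(1))
```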
Simpson's Rule Rederived
For the following discussion, let N be the number of bins.
The leading error in the trapezoid rule is of order 1/N^2. We can
use an idea similar to what we did to remove sample-size bias,
taking into account that the dependence of the error on N is
different.
Starting from the trapezoid rule, we used the formulae below
for correcting the leading 1/N^2 errors.
S_N = A + E/N^2
S_2N = A + E/(2N)^2
so A = (4/3) S_2N - (1/3) S_N
This technique can be extended in the method known as
Romberg integration. This is expanded upon in Numerical Recipes.
Practical codes may be found there and are discussed below.
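As a rough illustration of the idea (a sketch, not the Numerical Recipes code), Romberg integration can be written as a small table of trapezoid estimates with repeated Richardson extrapolation:

```python
# Sketch of Romberg integration: build trapezoid estimates with
# 1, 2, 4, ... bins, then repeatedly apply Richardson extrapolation
# to cancel successive even powers of h.
import math

def romberg(f, a, b, levels):
    R = []  # R[j][k]: level-j trapezoid estimate after k extrapolations
    for j in range(levels):
        n = 2 ** j
        h = (b - a) / n
        t = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        row = [h * t]
        for k in range(1, j + 1):
            # cancel the h^(2k) error term of the previous column
            row.append(row[k-1] + (row[k-1] - R[j-1][k-1]) / (4**k - 1))
        R.append(row)
    return R[-1][-1]

approx = romberg(math.cos, 0.0, math.pi / 2, 6)
print(approx)  # integral of cos(x) on [0, pi/2] is 1
```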
Applying this to a one-dimensional integral, one finds that the
weighted sum of the two trapezoid estimates on the right hand side
coincides exactly with the Simpson's rule formula. We previously
found that Simpson's Rule has a leading error of order 1/N^4. Since
the approach above removes the leading 1/N^2 error and since the
error of the trapezoid rule only has even powers of 1/N (asserted, but
not proved), this is not surprising.
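This identity is easy to verify numerically. The sketch below (hypothetical function names, not the class routines) compares (4/3) S_2N - (1/3) S_N, built from trapezoid estimates, against the composite Simpson's rule formula:

```python
# Sketch: verify that (4/3) S_2N - (1/3) S_N, with S the trapezoid
# estimate, reproduces the composite Simpson's rule to roundoff.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i*h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i*h) for i in range(2, n, 2))
    return h * s / 3

f, a, b, n = math.exp, 0.0, 1.0, 8
richardson = (4 * trapezoid(f, a, b, 2 * n) - trapezoid(f, a, b, n)) / 3
print(abs(richardson - simpson(f, a, b, 2 * n)))  # roundoff-sized
```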
Testing Trapezoid, Simpson and Romberg Methods
In ~sg/integration_test/double_precision, you will find source code
and output files. The source code has been modified so we can see
how the methods converge. We went over the source code in a bit of detail.
In particular, we discussed how static variables are used in the
evaluation of the trapezoid rule to reduce the bin width and only
evaluate the function at new points.
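The class code itself is not reproduced here; as a sketch of the same refinement idea, the previous estimate can be passed in explicitly rather than kept in static variables, so that each level only evaluates the function at the new midpoints:

```python
# Sketch of the refinement idea: going from level j to j+1 halves the
# bin width, and only the function values at the new midpoints are
# computed; the previous estimate is reused.
import math

def refine_trapezoid(f, a, b, prev, j):
    """Level j=1 gives the 1-bin estimate; for j > 1, refine `prev`
    by adding the 2^(j-2) new midpoints."""
    if j == 1:
        return 0.5 * (b - a) * (f(a) + f(b))
    n_new = 2 ** (j - 2)        # number of new points at this level
    h = (b - a) / n_new         # spacing between the new points
    x = a + 0.5 * h
    s = sum(f(x + i * h) for i in range(n_new))
    return 0.5 * (prev + h * s)

est = 0.0
for j in range(1, 11):
    est = refine_trapezoid(math.cos, 0.0, math.pi / 2, est, j)
print(est)  # converges toward 1
```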
The integral of cos(x) from x=0 to x=pi/2 is calculated by xqsimp2.
Here is the output:
1 7.853981633974e-01 3.333333333333e+29
2 9.480594489685e-01 1.002279877492e+00
3 9.871158009728e-01 1.000134584974e+00
4 9.967851718862e-01 1.000008295524e+00
5 9.991966804851e-01 1.000000516685e+00
6 9.997991943200e-01 1.000000032265e+00
7 9.999498000921e-01 1.000000002016e+00
8 9.999874501175e-01 1.000000000126e+00
9 9.999968625353e-01 1.000000000008e+00
10 9.999992156342e-01 1.000000000000e+00
The first column is the level at which we call the routine, i.e., with
2^(j-1) intervals in the integration range. The second column is the
trapezoid rule and the third column is for Simpson's rule. Note that
the j=1 value for Simpson's rule is bogus since we need to go at least
to j=2 to use Simpson's rule. In cos.ax, you will find the error
plotted vs. j. The crosses are for the trapezoid rule, the diamonds
are for Simpson's rule. You will also find three points from
the Romberg routine.
We found that from j=1 to j=10 the error for the trapezoid rule
falls by nearly 6 orders of magnitude. This corresponds to
increasing the number of intervals by a factor of 2^9 = 512. Since
the error for the trapezoid rule is supposed to fall like N^{-2},
and 512^2 is about 2.6 x 10^5, this makes sense.
Simpson's rule is seen to converge much more quickly, its error
falling by about 12 orders of magnitude. This is the expected behavior. With the
Romberg method, the error is already about 10^{-12} for j=5 and
drops below 10^{-15} by j=7. Quite an impressive improvement!
A Second Example
A second example integrates sqrt(1-x^2) from 0 to 1. Curiously, it
looks very different. This is because of the singularity in the
derivative of the integrand at x=1. Simpson's rule no longer
converges at its usual higher order. Plot sqrt_1minus_xsqr.ax or plot both graphs
together. We recalled that the expected error for the trapezoid rule
depends on the average value of the second derivative of the function.
A Third Example
In the third example, we again integrate sqrt(1-x^2), but from 0 to 0.5.
This avoids the region with high derivative and the result follows
the pattern of the first example where Simpson's rule is much better
than the trapezoid rule. Look at
~sg/integration_test/double_precision/newexample.ax .
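A sketch of the same comparison on the restricted interval (hypothetical names, not the class code): away from x=1 the integrand is smooth, and Simpson's rule should recover its factor-of-16 error reduction when the bin width is halved:

```python
# Sketch: on [0, 0.5] the integrand sqrt(1 - x^2) is smooth, so
# Simpson's rule recovers its h^4 behavior.
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i*h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i*h) for i in range(2, n, 2))
    return h * s / 3

f = lambda x: math.sqrt(1.0 - x * x)
# antiderivative: (x*sqrt(1-x^2) + asin(x)) / 2
exact = (0.5 * math.sqrt(0.75) + math.asin(0.5)) / 2
err_n  = abs(simpson(f, 0.0, 0.5, 16) - exact)
err_2n = abs(simpson(f, 0.0, 0.5, 32) - exact)
print(err_n / err_2n)  # close to 2^4 = 16
```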