February 2, 2012 --- Class 8 --- Error of the Euler-Richardson Algorithm,
Extrapolation to zero height, Root finding
Activities:
Error of the Euler-Richardson Algorithm
To find how the error of the Euler-Richardson algorithm
depends on the step size, we wanted to use larger step
sizes than before. It was easy
to modify the runem.csh script to do that. We plotted
the position at t=140 for various step sizes. The dependence
seemed nonlinear, but we really wanted to find the error.
We then discussed how we can easily use awk to find the
error. plotem2.csh leaves a file temp1.ax with the position
as a function of step size. It looks like this:
[sw246-05:~/chap3] sg% mo temp1.ax
2.0 6.454932e+03
1.0 6.454655e+03
0.5 6.454585e+03
0.2 6.454566e+03
0.1 6.454563e+03
0.05 6.454562e+03
0.02 6.454562e+03
0.01 6.454562e+03
0.002 6.454562e+03
0.005 6.454562e+03
Clearly, given the limited number of digits we printed out,
the error is zero by the smallest step size, so we can use the
position on the last line as the correct value and subtract
that from the previous values to get the error. How do
we get awk to do the subtraction on lines with larger step
size? An easy way is to store what we read in and do the
subtraction at the end. This is complicated enough that it
deserves to be in a file of its own.
Here is ~sg/chap3/subtract_last.a
# remember the step size and position from every line
{step[NR]=$1; y[NR]=$2}
# at the end, subtract the last (most accurate) position from each one
END{const=y[NR]; for(i=1; i<=NR;++i) print step[i],y[i]-const}
The result of running this awk script on temp1.ax
is this output
2.0 0.37
1.0 0.093
0.5 0.023
0.2 0.004
0.1 0.001
0.05 0
0.02 0
0.01 0
0.002 0
0.005 0
To see if the error obeys a power law, we would like to make
a log-log plot and see whether it is a straight line. However,
we can't do that with the data above, because no computer
likes to calculate the log of zero. Fortunately, if we use the
head command, we can restrict the input of axis to the first
5 lines.
awk -f subtract_last.a temp1.ax |head -5 | axis x l y l |xplot
does the trick! You will see a perfectly straight line that
indicates that Euler-Richardson errors are of order (Delta t)^2.
This is really great, because it means that if we reduce
the step size by a factor of two, the error is four times smaller.
That is a big saving.
Extrapolation to zero height
The code integrated until the height was negative, but we need
the velocity when the height is zero. When I first wrote the code,
I used a constant-velocity extrapolation to get back to zero
height. However, that is dumb, and when the print interval was a
second or more it could result in a significant error. It is
much smarter to recall elementary mechanics for uniform
acceleration. In this way, we compute the speed at
zero height as sqrt(v*v+2*g*y).
Adaptive step size algorithm
We looked at the code ~sg/chap3/adapt_step_nonlin_interp_2012.c
This code can change its step size to keep the velocity error
small. We don't have to be too careful about the step
size we set since the code will adjust.
We ran the code in adapt_step.c to see how it increases
or decreases the step size depending on how accurately our two
estimates of the velocity agree with each other.
With different initial step sizes and print times, we
found that the step size could increase or decrease depending
on whether we picked too large or too small an initial step
size. Sometimes the print time prevented the step size
from increasing because there would not be an integer number
of steps between print times.
Root finding
Since we really want to find the initial height that
gives v_true/v_naive=0.99, we are really trying to
find the root of an equation. Let ratio(height)=
v_true/v_naive. We are trying to solve ratio(height)-0.99=0.
In the program adapt.c, the secant method is used to solve
the problem of what height must the object be dropped from.
The secant method is easy to implement because only the
function, not its derivative, is needed. Another advantage
of this method is that it is not necessary to know that the
root lies in a particular range. However, the secant method
may be unstable.
Some of the other methods require knowing that the root lies in
a particular interval. One knows there is a root because
the function changes sign. The bisection method evaluates
the function at the middle of the interval. From the sign
of the function at the midpoint, it can be determined which
half of the interval contains the root. The function is
then evaluated in the middle of that half interval, etc.
This method doesn't have any stability problems, but it is
slow to converge since the error only decreases by a factor
of 2 each time the interval is cut in half. After all, in
this method we are only using the sign of the function at
the endpoints of the current interval.
In the method of false position, we approximate the
function by a straight line between the endpoints of the
interval and use the root of the straight line to guess
where the root is. We evaluate the function at the guessed
root and replace the endpoint with the same sign of the
function with the guess. We then use the two current
endpoints in the same procedure. Convergence can be slow.
The method of false position is a lot like the secant method
except that in the secant method we always keep the two most
recent points. When we do that, we might no longer be in the
original interval that brackets a root. (In fact, we don't
have to start off bracketing the root with the secant method.)
This is what leads to the potential instability.
We also talked about the Newton-Raphson method, which is
a good one, but it requires being able to calculate both the
function and its derivative. Since we don't know the
derivative, another method is preferred for this problem.
We will test some of the root finding methods in the next class.