February 25, 2013 --- Class 14 --- Application of the Jackknife
Method, Hist, Blockerr
Activities:
Biased Estimators and the Jackknife method
We want to test the idea of bias reduction by doing a numerical
experiment. In the last class, we discussed a program that uses
pseudorandom numbers to simulate the process of measuring the
velocities uniformly distributed between zero and one.
A program ~sg/jackknife/jackknife.c implements the jackknife method
of bias reduction. The jackknife program takes up to two
arguments: the first is the sample size; the second, optional,
argument is the number of samples to create. Each line of
output contains the sample estimate of 1/(mean of the x_i), i.e.,
n/(sum of the x_i), and the jackknife value, which is corrected for bias.
For sample size five, I ran the program a few times and we saw
that the number varied. I then ran
jackknife 5 100
to produce 100 instances of the experiment with sample size 5. The
class spent some time discussing how we would look at the data.
We know that for infinite sample size, the correct answer is 2, but
that for finite sample size the answer is not even 2 on average. We
also want to know if the bias reduction really works.
[sw246-00:~/jackknife] sg% jackknife 5 10
3.000586 2.913277
2.216128 2.177672
1.885894 1.680463
1.866940 1.760818
1.968507 1.794492
1.777420 1.644192
1.891794 1.675787
1.708738 1.600792
1.493175 1.391724
2.159595 1.920740
Here are ways to analyze the data:
1) Create a scatter plot and see if the 2nd value is closer to 2 on
average.
2) Produce a histogram of the first column and histogram of the 2nd
column. Compare them.
3) Compute the average value of the first column and the average
value of the second column. Compare them.
A scatter plot is very easy with axis; you just need the argument -m 0 .
It was also easy to add a diagonal line and to see that most points
fell below the diagonal, i.e., the first number is usually larger than
the second. Since we know from the integral approach that sample size 5
gives 2.1696 on average, we know the naive (biased) value is too big.
The fact that the bias-reduced values are smaller is encouraging.
I love to make histograms and I have a great program to do that which
you are welcome to copy and use. The program is called hist. Here are
some comments that explain how to use it.
Hist for Making Histograms
hist is a program for making histograms. The source (hist.c) may be found in
~sg//src/misc
and the executable is in ~sg/bin. The options
for hist are described in the comments at the top of hist.c:
/* Make histogram of a list of data *
* option -n specifies number of bins *
* option -s specifies size of bin *
* option -x specifies lower and upper limits of histogram *
* option -g specifies output in form suitable for "graph" *
* or "axis" *
* if -g 1 we get a full bar for each bin *
* if -g 2 the bars don't go down to the origin *
 * option -m specifies the line type for use with "axis"      */
hist takes its input from the standard input and uses
10 bins by default. So, if you have a file of numbers called file,
hist < file will produce a 10-bin histogram.
hist samples_$i
echo -n Naive $i " " >>! results
awk '{print $1}' samples_$i | blockerr 1 |awk '{print $8, $10}' >>results
echo -n Jackknife $i " " >>! results
awk '{print $2}' samples_$i | blockerr 1 |awk '{print $8, $10}' >>results
end
This is another example of how we can figure out a useful procedure
interactively and then automate it, which is a great advantage when
the same analysis must be repeated for different cases.
The results file can easily be displayed with the command:
awk '{print $2,$3,$4}' results |axis e |xplot
The upper set of values is from the naive estimate, and the lower
is the jackknife corrected values. One immediately sees that the
jackknife values much more quickly approach 2, the asymptotic
value.
The file ~sg/jackknife/jackknife.ax on the Swain cluster was
prepared before class and includes both the above results
and the analytic results derived in the last
class. The crosses are from doing the integral numerically with
Mathematica. One can see excellent agreement between the
statistical approach here and the analytic results.
In our next class, we will implement the procedure from my jackknife.c
code in Mathematica and compare the effort to do this analysis there.
We will also consider how the jackknife method can be used to estimate
the error and start our study of chaos.
The following supplementary material was not covered in class.
Other approaches to removing the sample bias include applying the
basic idea behind jackknife bias reduction to the Mathematica
generated results, and applying a "second order" jackknife technique
to the jackknifed results to remove the 1/n^2 errors.
(Assuming there are error terms e/n + e'/n^2, derive a formula
involving S_n, S_(n-1) and S_(n-2) that removes both error terms to
get a second order bias reduced result.)
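In case it is useful as a check on the exercise, one way the algebra can go (a sketch, writing S for the infinite-sample value):

```latex
Assume $S_n = S + e/n + e'/n^2$.  The first-order jackknife
\[
  J_n = n S_n - (n-1) S_{n-1}
      = S + e'\Bigl(\tfrac{1}{n} - \tfrac{1}{n-1}\Bigr)
      = S - \frac{e'}{n(n-1)},
\]
so the $e/n$ term is gone.  Combining $J_n$ and $J_{n-1}$ the same
way removes the remaining term, since
$n J_n - (n-2) J_{n-1} = 2S$ exactly under this model.  Hence
\[
  S \approx \frac{n J_n - (n-2) J_{n-1}}{2}
    = \frac{n^2 S_n - 2(n-1)^2 S_{n-1} + (n-2)^2 S_{n-2}}{2}.
\]
```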
Applying the jackknife to the Mathematica results works
quite nicely as you can see by the following command:
cat jackknife.ax j.math.ax |axis |xplot
The lower crosses are the results from Mathematica. The commands
for doing the analysis can be found in ~sg/jackknife/bias2.math