For more than a century, the inherent difficulty of formula-based inferential statistics has baffled scientists, induced errors in research, and caused millions of students to hate the subject.
Complexity is the disease. Resampling (drawing repeated samples from the given data, or from a population suggested by the data) is a proven cure. The bootstrap, permutation tests, and other computer-intensive procedures have revolutionized statistics. Resampling is now the method of choice for confidence limits, hypothesis tests, and other everyday inferential problems.
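To make the idea concrete, here is a minimal sketch of a bootstrap confidence interval for a mean, using only Python's standard library. The sample values and the 10,000-repetition count are illustrative assumptions, not taken from any dataset discussed here:

```python
import random

random.seed(0)  # for a reproducible illustration
sample = [12.1, 9.8, 11.4, 10.2, 13.0, 8.7, 10.9, 11.6]  # hypothetical data
n_reps = 10_000

means = []
for _ in range(n_reps):
    # Draw a resample of the same size, with replacement, from the data.
    resample = [random.choice(sample) for _ in sample]
    means.append(sum(resample) / len(resample))

means.sort()
# The 2.5th and 97.5th percentiles of the resampled means give an
# approximate 95% confidence interval.
lo = means[int(0.025 * n_reps)]
hi = means[int(0.975 * n_reps)]
print(f"approximate 95% bootstrap interval for the mean: ({lo:.2f}, {hi:.2f})")
```

No formula or table is consulted: the interval comes directly from the variability visible in the resamples themselves.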
In place of the formidable formulas and mysterious tables of parametric and non-parametric tests based on complicated mathematics and arcane approximations, the basic resampling tools are simulations, created especially for the task at hand by practitioners who completely understand what they are doing and why they are doing it. Resampling lets you test hypotheses and calculate confidence intervals for most sorts of data, even those that cannot be analyzed with formulas.
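As one example of such a purpose-built simulation, the sketch below runs a simple permutation test for a difference between two group means. The group scores are made up for illustration; the logic, not the numbers, is the point:

```python
import random

random.seed(0)  # for a reproducible illustration
group_a = [31, 28, 35, 30, 33, 29]  # hypothetical treatment scores
group_b = [26, 27, 25, 30, 24, 28]  # hypothetical control scores

observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

pooled = group_a + group_b
n_a = len(group_a)
n_reps = 10_000
count = 0
for _ in range(n_reps):
    # Shuffle the pooled scores and split them into two new groups,
    # simulating the null hypothesis that the group labels don't matter.
    random.shuffle(pooled)
    diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
    if diff >= observed:
        count += 1

# Proportion of shuffles producing a difference at least as large
# as the one observed: a one-sided p-value.
print(f"one-sided p-value: {count / n_reps:.4f}")
```

Every step of the procedure mirrors a question the practitioner can state in plain words, which is precisely the transparency the formula-based approach lacks.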
The growing stream of scientific articles using resampling techniques, both as a basic tool and for difficult applications, testifies to resampling's value. And the swelling literature in mathematical statistics shows its acceptance on a theoretical basis, after many years in the wilderness.
Solid research demonstrates that students taught resampling learn statistics more fully and actually enjoy introductory courses (see the Teaching section). They say that resampling is the way statistics should be taught. Read what students and teachers have to say for themselves.