
Setting an Optimal α That Minimizes Errors in Null Hypothesis Significance Tests
Published:
Tuesday, February 28, 2012
Authors:
Joseph F. Mudge, Leanne F. Baker, Christopher B. Edge, Jeff E. Houlahan
Null hypothesis significance testing has been under attack in recent years, partly owing to the arbitrary nature of setting α (the decision-making threshold and probability of Type I error) at a constant value, usually 0.05. If the goal of null hypothesis testing is to present conclusions in which we have the highest possible confidence, then the only logical decision-making threshold is the value that minimizes the probability (or occasionally, cost) of making errors. Setting α to minimize the combination of Type I and Type II error at a critical effect size can easily be accomplished for traditional statistical tests by calculating the α associated with the minimum average of α and β at the critical effect size. This technique also has the flexibility to incorporate prior probabilities of null and alternate hypotheses and/or relative costs of Type I and Type II errors, if known. Using an optimal α results in stronger scientific inferences because it estimates and minimizes both Type I errors and relevant Type II errors for a test. It also results in greater transparency concerning assumptions about relevant effect size(s) and the relative costs of Type I and II errors. By contrast, the use of α = 0.05 results in arbitrary decisions about what effect sizes will likely be considered significant, if real, and results in arbitrary amounts of Type II error for meaningful potential effect sizes. We cannot identify a rationale for continuing to arbitrarily use α = 0.05 for null hypothesis significance tests in any field, when it is possible to determine an optimal α.
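The idea of minimizing the average of α and β at a critical effect size can be sketched numerically. The example below is an illustration, not the authors' implementation: it assumes a one-sided, one-sample z-test with known standard deviation, and finds by grid search the α that minimizes a weighted average of α and β(α), where β(α) = Φ(z₁₋α − d√n/σ) for critical effect size d. The function and parameter names (`optimal_alpha`, `prior_h1`, `cost_ratio`) are hypothetical; with equal priors and costs the objective reduces to (α + β)/2, as described in the abstract.

```python
import math


def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection; adequate for illustration."""
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0


def optimal_alpha(effect, sigma, n, prior_h1=0.5, cost_ratio=1.0, grid=10000):
    """
    Grid-search the alpha minimizing the weighted average of Type I and
    Type II error for a one-sided one-sample z-test, evaluated at the
    critical effect size `effect`.

    beta(alpha) = Phi(z_{1-alpha} - effect * sqrt(n) / sigma)

    `prior_h1` is the prior probability of the alternate hypothesis and
    `cost_ratio` the cost of a Type I error relative to a Type II error;
    the defaults give the plain average (alpha + beta) / 2.
    """
    ncp = effect * math.sqrt(n) / sigma  # noncentrality at the critical effect
    w1 = (1.0 - prior_h1) * cost_ratio   # weight on Type I error
    w2 = prior_h1                        # weight on Type II error
    best_alpha, best_obj = None, float("inf")
    for i in range(1, grid):
        alpha = i / grid
        z_crit = norm_ppf(1.0 - alpha)
        beta = norm_cdf(z_crit - ncp)
        obj = (w1 * alpha + w2 * beta) / (w1 + w2)
        if obj < best_obj:
            best_alpha, best_obj = alpha, obj
    return best_alpha, best_obj
```

For example, with a critical effect size of 0.5 standard deviations and n = 30, `optimal_alpha(effect=0.5, sigma=1.0, n=30)` yields an optimal α near 0.086 rather than 0.05, with β of roughly the same magnitude, so the two error rates are balanced at the effect size deemed biologically or practically important.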


