Alpha Values and P-values
In conducting a test of significance or hypothesis test, there are two numbers that are easy to confuse. They are easily confused because both are probabilities between zero and one. One number is called the p-value of the test statistic. The other number of interest is the level of significance, or alpha (α). We will examine these two probabilities and determine the difference between them.
Hypothesis Test
There are two types of hypotheses:
- Null Hypothesis
- Alternative Hypothesis
The null hypothesis of a test states that there is no effect or no relationship between variables.
The alternative hypothesis states your research prediction of an effect or relationship.
In hypothesis tests, two types of error are possible: Type I and Type II.
Type I error: supporting the alternative hypothesis when the null hypothesis is true.
Type II error: not supporting the alternative hypothesis when the alternative hypothesis is true.
Alpha Values
The number alpha (α) is the threshold value against which we measure p-values. It tells us how extreme observed results must be in order to reject the null hypothesis of a significance test.
The value of alpha (α) is associated with the confidence level of our test.
Example:
For results with a 90 percent level of confidence, the value of alpha (α) is 1 − 0.90 = 0.10.
In general, for results with a C percent level of confidence, the value of alpha (α) is 1 − (C/100).
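The conversion above can be sketched in a few lines of Python; the function name here is illustrative, not from any particular library.

```python
def alpha_from_confidence(c_percent):
    # For a C percent confidence level, alpha = 1 - (C/100).
    return 1 - (c_percent / 100)

print(alpha_from_confidence(90))  # 0.10 for 90% confidence (up to float rounding)
print(alpha_from_confidence(95))  # 0.05 for 95% confidence (up to float rounding)
```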
The most commonly used alpha value (α) is 0.05, or 5%. This is partly because consensus shows that this level is appropriate in many cases and partly because, historically, it has been accepted as the standard.
The alpha value gives us the probability of a type I error. Type I errors occur when we reject a null hypothesis that is actually true. Thus, in the long run, for a test with a level of significance of 0.05 = 1/20, a true null hypothesis will be rejected one out of every 20 times.
Why is an alpha level of 0.05 commonly used?
Since the alpha (α) level is the probability of making a Type I error, it seems to make sense to make this area as small as possible. For example, if we set the alpha (α) level at 10%, there is a large chance that we might incorrectly reject the null hypothesis, while an alpha (α) level of 1% would make that chance tiny. So why not use a tiny area instead of the standard 5%?
The smaller the alpha (α) level, the smaller the region in which you would reject the null hypothesis. With a tiny rejection region, there is a greater chance that you will NOT reject the null hypothesis when in fact you should, which is a Type II error. In other words, the more you try to avoid a Type I error, the more likely a Type II error becomes. Scientists have found that an alpha (α) level of 5% strikes a good balance between these two risks.
P-Values
A p-value is also a probability. Every test statistic has a corresponding p-value: the probability of obtaining a statistic at least as extreme as the one observed, assuming that the null hypothesis is true.
Since there are many different test statistics, there are many different ways to find a p-value. In some cases, we need to know the probability distribution of the population.
The p-value of the test statistic is a way of saying how extreme that statistic is for our sample data. The smaller the p-value, the more unlikely the observed sample.
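As a concrete sketch, assuming the test statistic follows a standard normal distribution under the null hypothesis, the two-sided p-value can be computed with the complementary error function; the function name below is our own.

```python
import math

def two_sided_p_value(z):
    # P(|Z| >= |z|) for Z ~ N(0, 1), via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2))

# The more extreme the statistic, the smaller the p-value.
print(two_sided_p_value(1.0))  # about 0.317
print(two_sided_p_value(2.0))  # about 0.046
print(two_sided_p_value(3.0))  # about 0.003
```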
Difference Between P-Value and Alpha Values
To determine whether an observed outcome is statistically significant, we compare the values of alpha (α) and the p-value. Two possibilities emerge:
- The p-value is less than or equal to alpha (α). In this case, we reject the null hypothesis and say that the result is statistically significant. In other words, we are reasonably sure that something besides chance alone produced the observed sample.
- The p-value is greater than alpha (α). In this case, we fail to reject the null hypothesis and say that the result is not statistically significant. In other words, we are reasonably sure that our observed data can be explained by chance alone.
The implication is that the smaller the value of alpha (α), the more difficult it is to claim that a result is statistically significant. On the other hand, the larger the value of alpha (α), the easier it is to make that claim.
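The decision rule above can be sketched directly; the function name is ours, not a standard API.

```python
def is_significant(p_value, alpha=0.05):
    # Reject the null hypothesis when the p-value is at most alpha.
    return p_value <= alpha

print(is_significant(0.03))               # True: significant at the 5% level
print(is_significant(0.03, alpha=0.01))   # False: a smaller alpha is harder to meet
```

Note how the same p-value of 0.03 is significant at α = 0.05 but not at α = 0.01, illustrating that a smaller alpha makes significance harder to claim.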