
What Is the Difference Between Alpha and P-Values?
By Courtney Taylor, About.com Guide


In conducting a test of significance or hypothesis test, there are two numbers that are easy to confuse. One number is called the p-value of the test statistic. The other number of interest is the level of significance, or alpha. These numbers are easily confused because they are both numbers between zero and one and are, in fact, probabilities.

Alpha – The Level of Significance

The number alpha is the threshold value that we measure p-values against. It tells us how extreme observed results must be in order to reject the null hypothesis of a significance test.

The value of alpha is associated with the confidence level of our test. The following lists some common confidence levels with their related values of alpha:

For results with a 90% level of confidence, the value of alpha is 1 - 0.90 = 0.10.
For results with a 95% level of confidence, the value of alpha is 1 - 0.95 = 0.05.
For results with a 99% level of confidence, the value of alpha is 1 - 0.99 = 0.01.

In general, for results with a C% level of confidence, the value of alpha is 1 – C/100. Although in theory and practice many numbers can be used for alpha, the most commonly used is 0.05. The reason for this is partly consensus that this level is appropriate and partly that it has historically been accepted as the standard.

The alpha value gives us the probability of a type I error. Type I errors occur when we reject a null hypothesis that is actually true. Thus, in the long run, for a test with a level of significance of 0.05 = 1/20, a true null hypothesis will be rejected one out of every 20 times.
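To see this long-run behavior concretely, here is a minimal simulation sketch in Python (the sample size, random seed, and use of scipy's one-sample t-test are illustrative choices of ours, not part of the original discussion). We repeatedly test a null hypothesis that is true by construction and count how often it gets rejected:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 10_000
rejections = 0

for _ in range(n_tests):
    # The null hypothesis is true by construction: the population mean is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value <= alpha:
        rejections += 1

# Over many repetitions this fraction settles near alpha, i.e. about 1 in 20.
print(f"Observed type I error rate: {rejections / n_tests:.3f}")
```

With alpha set to 0.05, the printed rate should land close to 0.05, which is exactly the type I error rate described above.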

P-Values

The other number that is part of a test of significance is a p-value. A p-value is also a probability, but it comes from a different source than alpha. Every test statistic has a corresponding probability or p-value. This value is the probability of obtaining, by chance alone when the null hypothesis is true, a test statistic at least as extreme as the one we observed.

Since there are a number of different test statistics, there are a number of different ways to find a p-value. In some cases we need to know the probability distribution of the population.

The p-value of the test statistic is a way of saying how extreme that statistic is for our sample data. The smaller the p-value, the more unlikely the observed sample is under the null hypothesis.
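As one illustration of going from a test statistic to a p-value, here is a short sketch, assuming a z-test where the statistic follows a standard normal distribution under the null hypothesis; the observed value 2.1 is made up for the example:

```python
from scipy import stats

# Hypothetical observed z statistic (the value 2.1 is made up for illustration).
z = 2.1

# Two-sided p-value: the probability, under the null hypothesis, of a
# standard normal statistic at least as extreme as |z| in either tail.
p_value = 2 * stats.norm.sf(abs(z))
print(f"p-value: {p_value:.4f}")  # roughly 0.036
```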

Statistical Significance

To determine if an observed outcome is statistically significant, we compare the values of alpha and the p-value. There are two possibilities that emerge:



The p-value is less than or equal to alpha. In this case we reject the null hypothesis. When this happens, we say that the result is statistically significant. In other words, we are reasonably sure that something besides chance alone gave us the observed sample.

The p-value is greater than alpha. In this case we fail to reject the null hypothesis. When this happens, we say that the result is not statistically significant. In other words, we are reasonably sure that our observed data can be explained by chance alone.

The implication of the above is that the smaller the value of alpha, the more difficult it is to claim that a result is statistically significant. On the other hand, the larger the value of alpha, the easier it is to claim that a result is statistically significant. Coupled with this, however, is the higher probability that what we observed can be attributed to chance.
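The comparison itself is simple enough to write down directly. The following is a minimal sketch of the decision rule just described; the function name is our own, not from any standard library:

```python
def is_significant(p_value: float, alpha: float = 0.05) -> bool:
    """Reject the null hypothesis when the p-value is at most alpha."""
    return p_value <= alpha

print(is_significant(0.036))              # True: significant at alpha = 0.05
print(is_significant(0.036, alpha=0.01))  # False: not significant at alpha = 0.01
```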

[Figure: Probability distribution for the sum of two dice. Credit: C.K. Taylor]

If you spend much time at all dealing with statistics, pretty soon you run into the phrase “probability distribution.” It is here that we really get to see how much the areas of probability and statistics overlap. Although this may sound like something technical, the phrase probability distribution is really just a way to talk about organizing a list of probabilities. A probability distribution is a function or rule that assigns probabilities to each value of a random variable. The distribution may in some cases be listed. In other cases it is presented as a graph.

Example

Suppose that we roll two dice and then record the sum of the dice. Sums anywhere from 2 to 12 are possible. Each sum has a particular probability of occurring. We can simply list these as follows:

The sum of 2 has a probability of 1/36.
The sum of 3 has a probability of 2/36.
The sum of 4 has a probability of 3/36.
The sum of 5 has a probability of 4/36.
The sum of 6 has a probability of 5/36.
The sum of 7 has a probability of 6/36.
The sum of 8 has a probability of 5/36.
The sum of 9 has a probability of 4/36.
The sum of 10 has a probability of 3/36.
The sum of 11 has a probability of 2/36.
The sum of 12 has a probability of 1/36.

This list is a probability distribution for the probability experiment of rolling two dice. We can also consider the above as a probability distribution of the random variable defined by looking at the sum of the two dice.
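For readers who like to check such a list by computation, here is a short Python sketch that reproduces the distribution above by enumerating all 36 equally likely outcomes of two fair dice:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely (die1, die2) outcomes and tally each sum.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
distribution = {total: Fraction(c, 36) for total, c in sorted(counts.items())}

for total, prob in distribution.items():
    print(f"P(sum = {total:2d}) = {prob}")  # e.g. P(sum = 7) = 1/6

# The probabilities in any distribution add up to exactly 1.
assert sum(distribution.values()) == 1
```

Note that Fraction reduces each probability to lowest terms, so 6/36 prints as 1/6.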

Graph of a Probability Distribution

A probability distribution can be graphed, and sometimes this helps to show us features of the distribution that were not apparent from just reading the list of probabilities. The random variable is plotted along the x-axis, and the corresponding probability is plotted along the y-axis:

For a discrete random variable, we will have a histogram.
For a continuous random variable, we will have the area under a smooth curve.
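As an illustrative sketch (assuming matplotlib is available), the dice distribution from the example above can be drawn as a bar chart, with each bar one unit wide so that its area equals the probability it represents:

```python
import matplotlib.pyplot as plt

sums = list(range(2, 13))
# P(sum = s) for two fair dice: the count of ways to roll s, divided by 36.
probs = [min(s - 1, 13 - s) / 36 for s in sums]

plt.bar(sums, probs, width=1.0, edgecolor="black")
plt.xlabel("Sum of two dice")  # the random variable on the x-axis
plt.ylabel("Probability")      # the corresponding probability on the y-axis
plt.title("Probability distribution for the sum of two dice")
plt.show()
```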

The rules of probability are still in effect, and they manifest themselves in a few ways. Since probabilities are greater than or equal to zero, the graph of a probability distribution must have y-coordinates that are nonnegative. Another feature of probabilities, namely that one is the maximum that the probability of an event can be, shows up in another way.

Area = Probability

The graph of a probability distribution is constructed in such a way that areas represent probabilities. For a discrete probability distribution, we are really just calculating the areas of rectangles. In the dice graph above, the areas of the three bars for the sums four, five and six give the probability that the sum of our dice is four, five or six. The areas of all of the bars add up to a total of one.

In the standard normal distribution, or bell curve, we have a similar situation. The area under the curve between two z values corresponds to the probability that our variable falls between those two values. For example, the area under the bell curve for -1 < z < 1 accounts for approximately 68% of the total area. This area is much more complicated to calculate than that of a rectangle, which is why calculus and other advanced mathematics are necessary in order to work with most continuous probability distributions.
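The 68% figure can nonetheless be checked without doing the calculus by hand. Here is a short sketch, assuming scipy is available, that computes the area via the cumulative distribution function of the standard normal distribution:

```python
from scipy import stats

# Area under the standard normal curve between z = -1 and z = 1.
area = stats.norm.cdf(1) - stats.norm.cdf(-1)
print(f"P(-1 < Z < 1) = {area:.4f}")  # about 0.6827, i.e. roughly 68%
```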

Source: http://statistics.about.com/od/Inferential-Statistics/a/What-Is-The-Difference-Between-AlphaAnd-P-Values.htm
