Tests for Differences Between Means
There are several tests to determine whether groups (means) are different. We will concentrate
this week on t tests and the analysis of variance (ANOVA). There are independent and dependent
forms of both tests for situations where samples are independent or dependent, respectively.
Both the t test and ANOVA will consider the differences between groups as well as the
variability associated with the groups. The larger the mean difference and the less variability, the
more likely the groups will be significantly different. On the other hand, the smaller the mean
difference and the more variable each group is, the less likely the groups will be different.

This figure shows three examples where the mean difference between 2 groups is the same but the variability associated with each group is different. In which case would you predict the means would MOST likely be significantly different?
The information in this module will focus on the following:

Independent t test (test for differences between 2 independent groups)
Dependent t test (test for differences between 2 dependent groups)
One-way Analysis of Variance (test for differences between 3 or more independent groups)
Analysis of Variance for Repeated Measures (test for differences between 3 or more dependent groups)

T-tests
Independent t test


When to use: An independent t test is used to test for statistically significant differences between 2 groups that are not related. Examples are: males vs females; Caucasian vs Hispanic; treatment vs control.
Assumptions: The two samples are independent, were randomly selected, and represent the population from which they were drawn; the two groups are from the same population; the data are normally distributed; and the variability for each group is similar.
What if assumptions are violated in any given situation: If the sample is not random, generalizations are limited. If the data are not normal, they can be mathematically adjusted to normalize them (such as taking the logarithm of every score), or you can choose a different statistical test that does not require normality (the Mann-Whitney U test is one example). For variability, a Levene's test for the equality of variances is typically performed to check this assumption; if the variances are different, an adjusted t test can be performed (look at your SPSS printout from the t test).
How to calculate: The equation is below and is the ratio of the mean difference between groups to the pooled standard deviation for the groups. The equation generates a t statistic. Subscripts T and C stand for treatment and control, var is the variance (the standard deviation squared), and n is the number of subjects (a short worked example in code follows this list):

t = \frac{\bar{X}_T - \bar{X}_C}{\sqrt{\frac{var_T}{n_T} + \frac{var_C}{n_C}}}

This "calculated t" must be greater than the "critical t" for the groups to be statistically different. The critical t is a number from a table (see Table A.2) based on the mean difference that can be expected by chance for a given sample size (represented by "degrees of freedom"). The actual difference must exceed that expected by chance.

The numerator is also called the true variance, the real difference between means. The denominator is also known as the error variance, the variation about the mean. Remember that both samples come from the same population. If the means are similar, the true variance divided by the error variance equals 1.0. Since there is variability in populations, the result is often greater than 1.0. The value that can be expected by chance is the critical t. If the two groups have a large mean difference, the calculated t will be larger than the critical t.

This can also be thought of as a "signal to noise" ratio. The difference between the means is the signal, and the denominator, or error variance, is the noise. Groups of people (i.e., samples) will vary in the population just due to chance (error variance). A treatment group and a control group must differ by more than this variability for a researcher to say that the treatment made a difference in the treatment group, in other words, that the treatment "worked".
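To make the arithmetic concrete, here is a minimal sketch in Python (not part of the original module) that computes the independent t from hypothetical treatment and control scores, pulls the critical t from the t distribution instead of Table A.2, and cross-checks against SciPy's built-in test. The data values are made up purely for illustration.

```python
# Independent t test: a minimal sketch with hypothetical (made-up) scores.
import numpy as np
from scipy import stats

treatment = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.9, 14.7, 13.5])
control   = np.array([10.4, 11.9, 12.2, 10.8, 11.5, 12.0, 10.9, 11.3])

# t = (mean_T - mean_C) / sqrt(var_T/n_T + var_C/n_C)
mean_diff = treatment.mean() - control.mean()
error_var = treatment.var(ddof=1) / len(treatment) + control.var(ddof=1) / len(control)
t_calc = mean_diff / np.sqrt(error_var)

# Critical t from the t distribution (two-tailed, alpha = 0.05) instead of Table A.2
df = len(treatment) + len(control) - 2
t_crit = stats.t.ppf(1 - 0.025, df)
print(f"calculated t = {t_calc:.2f}, critical t = {t_crit:.2f}")

# Check the equal-variance assumption (Levene's test), then run SciPy's test;
# equal_var=False gives the adjusted (Welch) t mentioned above.
print(stats.levene(treatment, control))
print(stats.ttest_ind(treatment, control, equal_var=False))
```

With equal group sizes the pooled and unpooled forms give the same t statistic; passing equal_var=True would request the classic pooled-variance version instead.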

Dependent t test

When to use: A dependent t test is used to test for statistically significant differences between 2 groups that are related. Usually the groups are the same people measured before and after an intervention or under different conditions; however, dependent groups can also be separate groups of people that have been matched on a set of characteristics.
Assumptions: The groups must be randomly selected from the population and
the data normally distributed.
What if assumptions are violated in any given situation: If the sample is not random, generalizations are limited. If the data are not normal, they can be mathematically adjusted to normalize them (such as taking the logarithm of every score), or you can choose a different statistical test that does not require normality (the Wilcoxon signed rank test is one example).
How to calculate: The formula for the dependent t is below, where n is the number of pairs of scores and D is the difference between each pair of scores (a short worked example in code follows this list):

t = \frac{\sum D}{\sqrt{\frac{n\sum D^{2} - (\sum D)^{2}}{n - 1}}}

Notice that several of the terms in this equation look like those from the calculation of the standard deviation. The numerator is the sum of the differences between paired scores, and the denominator is roughly the standard deviation of the difference between scores (the variability of the difference between scores).

As in the independent version of the t test, a t statistic is generated in the dependent t test. The calculated t must be greater than the critical t for there to be a significant difference between the means. Use Table A.2 and df = n - 1 (the number of pairs minus 1) to get the critical t.
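As an illustration only, here is a minimal Python sketch (hypothetical pre/post scores, not taken from the module) that applies the sum-of-differences formula above and cross-checks it against SciPy's paired test; the Wilcoxon signed rank alternative for non-normal data is shown as well.

```python
# Dependent (paired) t test: a minimal sketch with hypothetical pre/post scores.
import numpy as np
from scipy import stats

pre  = np.array([140, 152, 138, 147, 160, 149, 155, 142])
post = np.array([135, 150, 134, 140, 158, 147, 148, 139])

D = pre - post                 # difference between each pair of scores
n = len(D)                     # number of pairs

# t = sum(D) / sqrt( (n * sum(D^2) - (sum(D))^2) / (n - 1) )
t_calc = D.sum() / np.sqrt((n * (D ** 2).sum() - D.sum() ** 2) / (n - 1))

# Critical t with df = n - 1 (two-tailed, alpha = 0.05)
t_crit = stats.t.ppf(1 - 0.025, n - 1)
print(f"calculated t = {t_calc:.2f}, critical t = {t_crit:.2f}")

# Cross-check with SciPy's paired test, plus the Wilcoxon signed rank alternative
print(stats.ttest_rel(pre, post))
print(stats.wilcoxon(pre, post))
```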

One-way ANOVA

When to use: This is also known as a Single Factor ANOVA. It is used when there are 3 or more group means you are trying to compare. For example, you want to see which exercise program lowers cholesterol more: an 8 week flexibility program, an 8 week aerobic exercise program, or an 8 week strength training program. You would recruit a sample of people and randomly assign them to one of the 3 groups. At the end of the study, you'd compare the cholesterol levels of each group using a one-way ANOVA.
Assumptions: The assumptions are similar to those of the t test. Samples must be
randomly drawn from a normally distributed population and the variances of the
samples must be similar. If the data are not normally distributed, a test such as
the Kruskal-Wallis one-way ANOVA by ranks test can be used.
How to calculate: Similar to the t test, the ANOVA calculation gives you a number that you compare to a table of "critical" values. However, ANOVA gives an F value (sometimes called an F ratio; it's called F because a guy named Fisher figured this out). If your calculated F is greater than the critical F, there is a significant difference between means. If not, there is no difference. Remember that the critical F from the table gives you roughly what can be expected to occur by chance for the number of people you've measured. To use the table you need the number of groups, the total number of scores, and the number of people in each group. Based on that you find the critical F. More specifically, there are two degrees of freedom that are calculated, df1 (columns of the table) and df2 (rows of the table), where df1 equals the number of groups minus 1 and df2 is the overall number of scores minus the number of groups. For example, in the Faigenbaum article, df1 is 2 (3 groups minus 1) and df2 is 51 (54 scores minus 3). Reading Table A.3 you get a critical F of about 3.19 (halfway between the rows for 40 and 60). Look at the results section and you'll see that the F values are reported. Those greater than 3.19 are "significant" and those below are not.

The actual formula is a multi-step, lengthy one. I can post it for you if you'd like to see it; it would be optional material.

IMPORTANT! An ANOVA gives you one overall p-value, which tells you whether there is a mean difference somewhere among the groups. If you have 3 groups, there are several combinations of means you could compare, and as the number of groups increases, the number of possible comparisons between any 2 means grows. But ANOVA by itself doesn't tell you specifically which 2 means are different. So, if you get a p-value less than 0.05 from the ANOVA, you must take one more step and perform a post hoc test. Several of these exist, and each has pros and cons. Essentially, a post hoc test tells you which 2 means are different. Notice that in the Faigenbaum paper, Table 2 gives you a p-value from the ANOVA, and superscripts indicate which means are different between groups. (A short worked example in code follows.)
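Here is a minimal Python sketch (not from the module) of the cholesterol example above: hypothetical values for the three exercise groups, the overall F from SciPy, the critical F from the F distribution instead of Table A.3, and a Tukey HSD post hoc test (one of several post hoc options) from statsmodels.

```python
# One-way ANOVA: a minimal sketch with hypothetical cholesterol values (mg/dL).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

flexibility = np.array([210, 205, 198, 215, 202, 208])
aerobic     = np.array([185, 178, 190, 182, 176, 188])
strength    = np.array([200, 195, 205, 192, 198, 203])

# Overall F test: one p-value that says whether SOME means differ
f_calc, p_value = stats.f_oneway(flexibility, aerobic, strength)

# Critical F from the F distribution: df1 = groups - 1, df2 = total scores - groups
k = 3
N = len(flexibility) + len(aerobic) + len(strength)
f_crit = stats.f.ppf(0.95, k - 1, N - k)
print(f"F = {f_calc:.2f}, critical F = {f_crit:.2f}, p = {p_value:.4f}")

# If p < 0.05, a post hoc test shows WHICH pairs of means differ
scores = np.concatenate([flexibility, aerobic, strength])
groups = ["flex"] * 6 + ["aerobic"] * 6 + ["strength"] * 6
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```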

Repeated Measures ANOVA

When to use: ANOVA for repeated measures is used to test for differences in group means when the people in the sample are measured repeatedly over time. For example, if you wanted to determine whether music influenced aerobic performance, you would recruit a sample of people and have them perform a maximal aerobic test under different conditions: no music, fast music, and slow music. Each person would be tested 3 times on non-consecutive days and would perform under each condition in random order. The average aerobic performance would be compared between the 3 conditions using a repeated measures ANOVA.
Assumptions: Samples must be randomly drawn from a normally distributed
population and the variances of the samples must be similar. If the data are not
normally distributed, a test such as the Friedman two-way ANOVA by ranks test
can be used.
How to calculate: The same idea as above, but with slightly different formulas to take advantage of the repeated measures. In essence, each person serves as his or her own control, which is a powerful statistical design. (A short worked example in code follows.)
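As a final illustration (again, not part of the original module), here is a minimal Python sketch of the music example using hypothetical aerobic scores and the AnovaRM routine from statsmodels; a by-hand version would follow the same sums-of-squares logic as the one-way case.

```python
# Repeated measures ANOVA: a minimal sketch with hypothetical aerobic test scores
# for the same 8 subjects under three music conditions.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject":   list(range(1, 9)) * 3,
    "condition": ["none"] * 8 + ["slow"] * 8 + ["fast"] * 8,
    "score":     [41, 38, 45, 40, 37, 43, 39, 42,    # no music
                  42, 39, 46, 41, 38, 44, 40, 43,    # slow music
                  45, 41, 48, 44, 40, 47, 42, 45],   # fast music
})

# Each subject appears once per condition, i.e. serves as his or her own control
result = AnovaRM(data, depvar="score", subject="subject", within=["condition"]).fit()
print(result)
```

For data that are not normally distributed, scipy.stats.friedmanchisquare provides the ranks-based Friedman alternative mentioned under the assumptions.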
