Quantitative data collection and analysis

Statistical tests - parametric

Calculating a Z-score (or standard score) allows you to compare data from more than one distribution.

It is a standardised measure which lets you compare values across two different distributions, e.g. if you want to know who did better in an exam but are looking at two different exams whose results are distributed differently.

A comparison of the raw test scores alone would not be useful, i.e. 60% on one exam could represent a better performance than 60% on another.

The Z-score measures how many standard deviations a score lies from the mean.

Most Z-scores will lie in the range -2 to +2 (in a normal distribution roughly 95% of values fall within two standard deviations of the mean).

If the Z-score is positive (+ve), the observation is greater than the mean, i.e. above average.

If the Z-score is negative (-ve), the observation is lower than the mean, i.e. below average.

A Z-score of 0 indicates that the observation equals the mean.

Note that the Z-score assumes a normally distributed sample or population.

 

The Z-score is the observed value minus the mean, divided by the standard deviation:

Z = (x − x̄) / s

where:

Z = Z-score

x = observed value from your sample

x̄ = mean

s = standard deviation
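
As a minimal sketch of the formula, assuming exam scores held in a plain Python list (all values invented for illustration):

```python
import statistics

def z_score(x, scores):
    """How many standard deviations x lies from the mean of scores."""
    mean = statistics.mean(scores)
    s = statistics.stdev(scores)  # sample standard deviation
    return (x - mean) / s

exam_a = [52, 60, 48, 70, 55, 63]
print(z_score(60, exam_a))  # positive value: above this exam's average
```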


A standard normal table allows you to work out the proportion of the area under a normal curve between the mean and a given Z-score, for example:

  • to find out what proportion of people did better on an exam than yourself;
  • to find out the mark which 40% or more of students achieved.
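
Both look-ups can also be done in software rather than from the printed table. A sketch using SciPy's normal distribution, with a hypothetical exam mean and standard deviation:

```python
from scipy.stats import norm

mean, sd = 58, 12      # hypothetical exam mean and standard deviation
your_mark = 70

# Proportion of people who did better than you: area to the right of your Z-score
z = (your_mark - mean) / sd
print(1 - norm.cdf(z))                  # here Z = 1.0, so about 0.159

# The mark which 40% or more of students achieved, i.e. the 60th percentile
# (60% of marks lie below it, so the top 40% lie at or above it)
print(norm.ppf(0.60, loc=mean, scale=sd))
```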

A t-test is a parametric test (see under samples and population) that can tell you how significant the difference between the means of two groups is, e.g. did the difference just occur by chance or is there a real difference?

A large t-score indicates that the groups are different.

A small t-score indicates that the groups are similar.

Every t-value has an associated p-value. This is the probability of obtaining the results from your sample by chance if the null hypothesis is true, e.g. p = 0.05 (5%). A lower value indicates that the results are unlikely to have occurred by chance, e.g. p = 0.001 indicates that there is only a one in a thousand chance that your result arose from sampling error (given that the null hypothesis is true), i.e. an effect has been detected.

 

There are 3 types of t-test:

  • Independent samples (between participants or unrelated design) - compares the means for two groups (participants perform in only one of two conditions).
  • Paired sample (within participants or related) t-test - compares the means from the same group at different times (participants perform in both conditions).
  • A one sample t-test - compares the mean of a single group against a known mean.

There are many tools to help calculate the t-test.
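
For example, the SciPy library provides all three types, as in this sketch (the scores and known mean are invented for illustration):

```python
from scipy import stats

group_a = [23, 25, 28, 30, 22, 27]   # invented scores
group_b = [31, 29, 35, 32, 30, 34]
known_mean = 26

# Independent samples: participants perform in only one of two conditions
t, p = stats.ttest_ind(group_a, group_b)

# Paired samples: the same participants measured in both conditions
t, p = stats.ttest_rel(group_a, group_b)

# One sample: a single group compared against a known mean
t, p = stats.ttest_1samp(group_a, known_mean)
```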

Procedure for interpreting the t-test score (a code sketch follows the list):

  • Calculate the t-value test statistic (see under Testing Hypotheses - Significance Testing).
  • Compare the obtained t-value with the critical value listed in a t-values table (you need to know the degrees of freedom and select a level of significance).
  • If the obtained value is greater than the critical value in the table, the null hypothesis (i.e. which states that the means are equal - there is no difference) is not the best explanation for any observed differences.
  • If the obtained value is less than the critical value in the table, the null hypothesis can be accepted as the best explanation for any observed differences.
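
A sketch of the comparison steps using SciPy's t distribution in place of a printed t-values table (the obtained t-value, degrees of freedom and significance level are hypothetical):

```python
from scipy import stats

t_obtained = 2.45        # hypothetical t-value from your test
df = 18                  # degrees of freedom
alpha = 0.05             # chosen significance level

# Critical value for a two-tailed test (alpha split across both tails)
t_critical = stats.t.ppf(1 - alpha / 2, df)

if abs(t_obtained) > t_critical:
    print("Obtained value exceeds the critical value: reject the null hypothesis")
else:
    print("Obtained value is below the critical value: retain the null hypothesis")
```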

Alternative tests that do not make assumptions regarding the distribution (a code sketch follows the list):

  • Wilcoxon signed-ranks test (an alternative to the paired samples t-test)
  • Mann-Whitney U test (an alternative to the between subjects t-test)
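
A sketch of both tests using SciPy (all measurements invented for illustration):

```python
from scipy import stats

before = [12, 15, 11, 14, 13, 16]    # invented paired measurements
after  = [14, 17, 12, 15, 15, 18]

group_a = [23, 25, 28, 30, 22, 27]   # invented independent samples
group_b = [31, 29, 35, 32, 30, 34]

# Wilcoxon signed-ranks: distribution-free alternative to the paired samples t-test
stat, p = stats.wilcoxon(before, after)

# Mann-Whitney U: distribution-free alternative to the between subjects t-test
stat, p = stats.mannwhitneyu(group_a, group_b)
```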

 

ANOVA (Analysis of Variance) is a parametric test (see samples and population).

It is used to determine whether there are any statistically significant differences between the means of two or more independent (unrelated) groups.

One of the advantages of using ANOVA over a t-test is that it can be scaled up to more complex research designs, for instance if you need to compare three levels of a variable (rather than two), e.g. to test whether exam performance differed based on test anxiety levels amongst students, dividing students into three independent groups (e.g. low, medium and high-stress students).

One-way design: e.g. the effect on language development of being in pre-school for 5, 10 or 20 hours per week.

  • The treatment variable here is the number of hours in pre-school (the grouping or between-groups variable). There are 3 levels: 5, 10 and 20 hours (i.e. 3 possibilities).
  • Language development is the outcome measure.

Factorial design (a more complex design): there is more than one treatment factor, e.g. if we also look at gender differences in the above example. You would have a matrix table, e.g. 3 x 2, giving 6 possibilities in this situation.

Number of hours of pre-school participation:

Gender  | Group 1 (5 hours)               | Group 2 (10 hours)              | Group 3 (20 hours)
Male    | Language development test score | Language development test score | Language development test score
Female  | Language development test score | Language development test score | Language development test score

Example taken from Salkind, N.J. (2017) Statistics for people who (think they) hate statistics. 6th edn. London: SAGE, p. 246.
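
As a sketch of how the 3 x 2 factorial design above could be analysed as a two-way ANOVA, assuming the pandas and statsmodels libraries and invented test scores:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 3 x 2 factorial data: hours of pre-school x gender
df = pd.DataFrame({
    "hours":  [5, 5, 10, 10, 20, 20] * 2,
    "gender": ["male"] * 6 + ["female"] * 6,
    "score":  [71, 68, 75, 74, 82, 80, 70, 72, 77, 76, 83, 85],
})

# Two-way ANOVA: main effects of hours and gender plus their interaction
model = ols("score ~ C(hours) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```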

 

The test statistic for ANOVA is the F-value.

It is the ratio of the variability among groups to the variability within groups (i.e. the variance between groups divided by the variance within groups).

Between subjects - where you are comparing one variable for several groups

Within subjects - where you are comparing the values of several variables for one group
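
A sketch of a one-way ANOVA for the pre-school example using SciPy (all scores invented for illustration):

```python
from scipy import stats

# Invented language development scores for the three pre-school groups
hours_5  = [70, 68, 72, 69, 71]
hours_10 = [74, 76, 73, 77, 75]
hours_20 = [81, 79, 83, 80, 82]

# One-way ANOVA: F is the between-groups variance over the within-groups variance
f_value, p_value = stats.f_oneway(hours_5, hours_10, hours_20)
print(f_value, p_value)
```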

 

Interpreting the F-value (a code sketch follows the list):

  • Work out the F-value.
  • Determine the number of degrees of freedom for the numerator and the denominator.
  • Locate the critical value in an F-table by reading across to locate the degrees of freedom of the numerator and down to locate the degrees of freedom of the denominator. The critical value is at the intersection of these two values.
  • If the obtained value is greater than the critical value (tabled value), the null hypothesis (that the means are equal to one another) is not the best explanation for any observed differences.
  • If the obtained value is less than the critical value, then we can accept the null hypothesis as being the best explanation.
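
A sketch of the look-up using SciPy's F distribution in place of a printed F-table (the obtained F-value and degrees of freedom are hypothetical):

```python
from scipy import stats

f_obtained = 4.8          # hypothetical F-value from your ANOVA
df_between = 2            # numerator: number of groups - 1
df_within = 12            # denominator: total observations - number of groups
alpha = 0.05

# Critical value at the intersection of the two degrees of freedom
f_critical = stats.f.ppf(1 - alpha, df_between, df_within)

if f_obtained > f_critical:
    print("Obtained value exceeds the critical value: reject the null hypothesis")
else:
    print("Obtained value is below the critical value: accept the null hypothesis")
```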

Distribution-free alternatives to ANOVA (a code sketch follows the list):

  • Kruskal-Wallis
  • Friedman
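
A sketch of both using SciPy (Friedman expects repeated measures on the same participants, so the three equal-length lists are read as three conditions; all values invented):

```python
from scipy import stats

group_1 = [70, 68, 72, 69, 71]   # invented scores
group_2 = [74, 76, 73, 77, 75]
group_3 = [81, 79, 83, 80, 82]

# Kruskal-Wallis: distribution-free alternative for independent groups
stat, p = stats.kruskal(group_1, group_2, group_3)

# Friedman: distribution-free alternative for repeated measures on one group
stat, p = stats.friedmanchisquare(group_1, group_2, group_3)
```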