
PSY2206 Methods and Statistics Tests Cheat Sheet (DRAFT) by

Statistical tests using SPSS

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Glossary terms

Correlational designs
Examine the relationship between existing variables as they occur naturally; often analysed using regressions.
Independent groups design
Most appropriate when comparing the differences between two independent groups. Parametric test. Often indicates that separate independent groups are tested, with no participant taking part in more than one condition or level.
Independent t-test
Looks at the difference between two groups of participants on a particular variable. The parametric statistic used is the independent t-test, used to assess the difference between two independent groups on an interval/ratio level variable. Tests the null hypothesis that there is no significant difference between the groups against the alternative hypothesis that there is a significant difference. The equation for the t-test takes into account the variability/differences and the sample size of the data. The value is denoted by t; the bigger the value of t, the more likely you are to find a statistically significant difference.
Independent t-test Assumptions
Assumes a normal distribution (if the distribution is not normal and the sample is small, consider performing a Mann-Whitney test instead) and homogeneity of variance (Levene's test).
t statistic
The bigger the value of t, the more likely you are to find a statistically significant difference. There is no standardised t-value/distribution that signifies statistical significance, so you must consider both t and the degrees of freedom. Can be positive or negative (depending on how you ordered your groups); you usually report it as a positive number.
Mann-Whitney test
Non-parametric equivalent of the independent t-test. Can be used to test the null hypothesis that there is no significant difference between two independent groups. Can be used when the DV is ordinal. The data are ranked and these ranks are added up, allowing a mean rank for each group; this can be used to assess whether there is a significant difference. Note that by ranking the data you lose some information, making the test less powerful, so you should always prefer the parametric test unless the data violate its assumptions.
Levene's Test for Equality of Variance table
Test for homogeneity of variance. The t-test output provides two rows of results: equal variances assumed and equal variances not assumed. Which one to interpret depends on the result of Levene's test: if Levene's test is statistically significant (less than 0.05), read the t-test result from the bottom row; if it is greater, read the t-test result from the top row.
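Levene's statistic can be thought of as a one-way-ANOVA-style F computed on each score's absolute deviation from its group mean. A minimal stdlib-Python sketch of that idea (illustrative only; `levene_w` is a made-up helper name, and SPSS is what you would actually use):

```python
from statistics import mean

def levene_w(groups):
    """Levene's W: an ANOVA-style F ratio computed on absolute
    deviations from each group's mean (mean-centred form)."""
    dev_groups = [[abs(x - mean(g)) for x in g] for g in groups]
    all_devs = [d for g in dev_groups for d in g]
    grand = mean(all_devs)
    k = len(dev_groups)                  # number of groups
    n = len(all_devs)                    # total sample size
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in dev_groups)
    ss_within = sum((d - mean(g)) ** 2 for g in dev_groups for d in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large W (and a significance value below 0.05) indicates the variances differ, which is when you read the "equal variances not assumed" row of the SPSS output.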

Glossary terms

ANOVA
Analysis of variance test. Always has only one DV. One-way ANOVA = only one IV, two-way ANOVA = two IVs, etc. Only indicates whether there is a statistically significant difference somewhere among the groups; you need to conduct a post hoc test or planned comparisons to find where.
Between groups ANOVA
Focuses on independent groups; used to explain differences between groups.
One-way Between-Groups ANOVA
Examines differences between two or more independent groups on one IV. Tests the null hypothesis that the mean scores for all groups are equal, which we test by analysing the variance. The IV should be categorical, the DV should be measured at the interval/ratio level, groups should have approximately equal variances, and residual scores should follow an approximately normal distribution.
Two-way Between-Groups ANOVA
Examines the differences associated with two independent variables, each of which often has a number of levels.
Planned Comparisons
Only make specific comparisons between groups which have been decided in advance; usually driven by theory.
Post Hoc
Compares every possible pair of groups; usually driven by trawling the data looking for significant findings.

Interpreting the Output:

N
Number of participants in each group
Mean
Mean of each group
Std. Deviation
The average deviation of scores in your data set. Indicates the extent to which the scores on a variable deviate from the mean score.
Std. Error of the Mean
Obtained by dividing the standard deviation by the square root of the sample size for each group. Used to help calculate the significance.
t
The bigger the value of t, the more likely you are to find a statistically significant difference. There is no standardised t-value/distribution that signifies statistical significance, therefore you must consider both t and degrees of freedom.
Degrees of Freedom (df)
A reflection of the sample size. For an independent t-test, df is always equal to 2 less than the total sample size (e.g. 45 males and 17 females in your sample: 62-2=60).
Sig. (2-tailed test)
Any value less than 0.05 is statistically significant. If the independent t-test result is significant then the result is unlikely to be due to chance.
Sig. (one-tailed test)
Divide the two-tailed significance value by 2.
Std. Error Difference
Obtained by dividing the mean difference by the t-value.
F ratio
F ratio = between-groups mean square / within-groups mean square. If the F ratio is greater than 1 it indicates a difference between groups. A p value accompanies the F ratio to tell you whether the difference is statistically significant.
Error row
Term used for the within-groups information. The error mean square value is the value used as the denominator in the F-ratio calculation.
Decision (Mann-Whitney)
Tells you whether to retain or reject the null hypothesis. If the significance value is greater than .05 you are advised to retain it.
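Several of the rows above (Std. Error of the Mean, one-tailed Sig., df) are simple arithmetic. A short stdlib-Python sketch of those formulas (illustrative; the helper names are made up):

```python
from math import sqrt
from statistics import stdev

def sem(scores):
    """Std. Error of the Mean: SD divided by the square root of n."""
    return stdev(scores) / sqrt(len(scores))

def one_tailed_sig(two_tailed_sig):
    """Sig. (one-tailed): halve the two-tailed significance value."""
    return two_tailed_sig / 2

def df_independent_t(n_a, n_b):
    """df for an independent t-test: total sample size minus 2."""
    return n_a + n_b - 2
```

For example, with 45 males and 17 females, `df_independent_t(45, 17)` gives the 60 used in a report such as t(60)=3.92.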

Independent t-test Guide:

1) Analyze
Analyze > Compare Means > Independent Samples t-test
2) Identify and move DV
Move DV to the test variables box
3) Identify and move IVs
Move IVs into the grouping variable box
4) Click Define Groups button
Enter the two numbers that were used to code the independent variable (e.g. you may have coded/assigned numbers to genders: Female 1, Male 2) > Continue > OK
5) Interpret the output
Choose which t row to read using Levene's test of equality of variance: if Levene's test is statistically significant (less than 0.05) read the t-test result from the bottom line; if it is greater, read the t-test result from the top line. Then work out your degrees of freedom and identify the probability.
6) Report your results
The t value should be rounded to 2 decimal places, followed by the df in brackets and the significance level.
Example answer
E.g. In this study there was a 'statistically significant' difference between 'males' and 'females' on 'the statistics anxiety scores', t(60)=3.92, p<.001. You may also comment on the data found in the Group Statistics table: 'females' had a 'higher' mean 'statistics anxiety score' of 8.71 (SD=1.78) compared to the mean 'male score' of 6.65 (SD=2.03).
Test for homoge­neity of variance
Testing normality of residuals
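The arithmetic SPSS performs in the steps above can be sketched outside it. Below is an illustrative pure-Python version of the pooled (equal-variances-assumed) independent t; `independent_t` is a made-up helper name and any data you feed it here would be made up too:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(group_a, group_b):
    """Pooled-variance independent t statistic (equal variances assumed)."""
    na, nb = len(group_a), len(group_b)
    # Pool the two sample variances, weighting each by its degrees of freedom
    pooled = ((na - 1) * variance(group_a)
              + (nb - 1) * variance(group_b)) / (na + nb - 2)
    # t = difference between the means / standard error of the difference
    t = (mean(group_a) - mean(group_b)) / sqrt(pooled * (1 / na + 1 / nb))
    df = na + nb - 2  # df: total sample size minus 2
    return t, df
```

This mirrors the reporting format above: round t to 2 decimal places and quote it with its df, e.g. t(60)=3.92.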

Mann-Whitney test

1) Analyze
Analyze > Nonparametric Tests > Independent Samples
2) Non-parametric test window
Fields > Automatically compare distributions across groups > Run
3) Specify variables
Move the DV to the test variables box (this is the variable you hypothesise a difference would be present on) > Move the IV to the grouping variable box > Run
4) Interpret Output
Tables vary slightly (see the Interpreting the Output column).
5) Results
Report the U value to 2 d.p., followed by the significance level, e.g. U=66, p=.53
Example
There was no statistically significant difference between psychologists and psychiatrists on the rating scores of vegetotherapy, U=66, p=.48. The psychologist group reported a median rating of 1.00 (interquartile range=2.5) and the psychiatrist group had a higher median rating of vegetotherapy of 2 (interquartile range=2).
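The rank-then-sum procedure the glossary describes can be sketched by hand in Python (illustrative only; `mann_whitney_u` is a made-up helper name, and SPSS would also supply the significance value, which this sketch does not):

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U via rank sums; tied values share the average rank."""
    # Pool both groups, remembering which group each score came from
    combined = sorted((v, g) for g, grp in enumerate((group_a, group_b))
                      for v in grp)
    values = [v for v, _ in combined]
    n = len(values)
    rank_sum_a = 0.0
    pos = 0
    while pos < n:
        end = pos
        while end + 1 < n and values[end + 1] == values[pos]:
            end += 1                      # extend over a run of tied values
        avg_rank = (pos + end) / 2 + 1    # ranks are 1-based
        for k in range(pos, end + 1):
            if combined[k][1] == 0:       # score belongs to group A
                rank_sum_a += avg_rank
        pos = end + 1
    na, nb = len(group_a), len(group_b)
    u_a = rank_sum_a - na * (na + 1) / 2  # U for group A from its rank sum
    return min(u_a, na * nb - u_a)        # report the smaller U
```

As the glossary notes, ranking discards some information, which is why the parametric t-test is preferred when its assumptions hold.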

One-way Between-groups ANOVA Guide:

1) Analyze
Analyze > General Linear Model > Univariate
2) Identify and move variables to appropriate boxes
Move DV to the Dependent Variable box > Move IV into the Fixed Factors box
3) Descriptive stats and homogeneity information
Options > Display > tick descriptive stats > tick homogeneity tests > significance level .05 > Continue
4) Save
Select Save (to save residual scores that are useful for checking the assumptions) > tick Unstandardized box (under the Residuals list) > Continue > OK
5) Interpret output
The most important rows in the ANOVA table are the IV-named row and the Error row (the error mean square is the denominator value used in the F-ratio calculations). If the probability associated with the F ratio is less than .05, reject the null hypothesis. You require the F ratio rounded to 2 decimal places, two dfs (one for the between-groups mean square and one for the within-groups mean square, separated by a comma) and an appropriate effect size statistic. Also report the standard deviation and mean values.
6) Write up results
There 'was/was no' statistically significant difference between the 'three' groups in terms of their 'intelligence scores', F(2,27)=0.07, p=.94.

Important steps in Between Groups ANOVA

Normal calculation of variance
Divide the sum of squares by n-1 (n = the number of values used to calculate the sum of squares). This is DIFFERENT FOR ANOVA.
Calculating Sums of Squares - provides a sense of the amount of variation between groups and within groups
1) Within groups: subtract the group mean from each score within that group, then square the result (this calculates the squared deviation of each individual score from its group mean). Then add up the values obtained (this is the sum of squared deviations within groups).
2) Between groups: follow the same principle to calculate the sum of squared deviations of each group mean from the grand mean (the mean of all scores): replace the individual score with the group mean, subtract the grand mean, then square this deviation. Do this for everyone in the data set, then add them all up to get the sum of squares between groups.
Calculating Mean Square
Between groups: divide the sum of squares by its df (df = number of groups - 1; e.g. with 3 groups, df is 3-1=2). Within groups: df = number of individuals in the analysis - number of groups (e.g. with 30 participants and 3 groups, 30-3=27, so it would be sum of squares/27).
Calculating the F ratio
F ratio = between-groups mean square / within-groups mean square. If the F ratio is greater than 1 it indicates a difference between groups. A p value accompanies the F ratio to tell you whether the difference is statistically significant.
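The three steps above (sums of squares, then mean squares, then the F ratio) can be sketched end-to-end in stdlib Python (illustrative only; `one_way_f` is a made-up helper name):

```python
from statistics import mean

def one_way_f(groups):
    """One-way between-groups F ratio from sums of squares."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    # Between-groups SS: squared deviation of each group mean from the
    # grand mean, counted once per participant in that group
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups SS: squared deviation of each score from its group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1               # number of groups - 1
    df_within = len(all_scores) - len(groups)  # total n - number of groups
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within          # the "error mean square"
    return ms_between / ms_within, df_between, df_within
```

The two dfs this returns are the pair quoted in a write-up such as F(2,27)=0.07.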

Two-way between groups ANOVA

1) Plot
Plots > move one variable to the Horizontal Axis box > move the other variable to the Separate Lines box > Add > Continue > OK. It doesn't matter which variable is put on the horizontal axis.
2) Interpret Output
The most important rows are the ones that correspond with the names of the variables and the Error row. The mean square found in the Error row is used as the denominator for the F ratio. Make sure to read the p value for each of the variables; this will determine whether they are significant or not.
Example (train of thought when interpreting the results)
The p value for the interaction term (e.g. the variable * variable row) is 0.001, which means there is a statistically significant interaction (e.g. the effect of cowboy preference on the scores is influenced by gender).
3) Write up results
Report in the same way as a one-way ANOVA. The only difference is that you need to clarify whether the results are from a main effect or an interaction.
Example results write-up
A 3x2 ANOVA with cowboy preference (JH, CE, none) and gender (M, F) as between-subjects factors revealed no main effect for cowboy preference, F(2,24)=0.11, p=.89, or for gender, F( ), p= . However, there was an interaction effect, F( )..... The interaction plot suggests..... Also report the mean and standard deviation.
To examine the difference between two or more independent groups on two independent group variables. It uses the same method as the one-way ANOVA but requires you to place all IVs into the Fixed Factors box. It is also key to use the interactions box. Additional plots instructions...
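Under the hood, a balanced two-way between-groups ANOVA partitions the variation into the two main effects, their interaction, and error. A hedged stdlib-Python sketch of that partitioning (illustrative only; `two_way_f` is a made-up helper, and it assumes every cell has the same number of scores):

```python
from statistics import mean

def two_way_f(cells):
    """Balanced two-way between-groups ANOVA.
    `cells` maps (a_level, b_level) -> list of scores, all equal length."""
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))        # scores per cell (balanced)
    all_scores = [x for g in cells.values() for x in g]
    grand = mean(all_scores)
    # Marginal means for each level of factor A and factor B
    mean_a = {a: mean([x for b in b_levels for x in cells[(a, b)]])
              for a in a_levels}
    mean_b = {b: mean([x for a in a_levels for x in cells[(a, b)]])
              for b in b_levels}
    ss_a = n * len(b_levels) * sum((mean_a[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((mean_b[b] - grand) ** 2 for b in b_levels)
    # Cell variation minus the two main effects leaves the interaction
    ss_cells = n * sum((mean(g) - grand) ** 2 for g in cells.values())
    ss_ab = ss_cells - ss_a - ss_b
    ss_within = sum((x - mean(g)) ** 2 for g in cells.values() for x in g)
    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    df_within = len(all_scores) - len(cells)   # total N minus number of cells
    ms_within = ss_within / df_within          # error mean square (denominator)
    return (ss_a / df_a / ms_within,           # F for main effect of A
            ss_b / df_b / ms_within,           # F for main effect of B
            ss_ab / (df_a * df_b) / ms_within) # F for the A x B interaction
```

Each of the three F values is read against the same error mean square, matching the Error row described in the interpretation step above.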