
Anth 485 Final Exam Cheat Sheet (DRAFT)

This is a draft cheat sheet. It is a work in progress and is not finished yet.

One-Way ANOVA

Between-Group Mean Square
1) (subtract the overall mean of the population from each group's mean)²
2) multiply each squared difference by that group's sample size
3) compute the degrees of freedom (number of groups minus 1)
4) between-groups mean square = (between-groups sum of squares) / (df)

Within-Group Mean Square
1) compute each group's (sample) variance
2) multiply each variance by that group's (n - 1)
3) calculate the grand sum of these products (the within-groups sum of squares)
4) calculate the degrees of freedom total (N - number of groups)
5) within-groups mean square = (sum of squares) / (degrees of freedom total)

F-Ratio
1) F = (between-groups mean square) / (within-groups mean square)
2) if F ≈ 1, the between-groups & within-groups variances are similar: fail to reject H0
3) if F is large enough to exceed the critical value (not merely greater than 1), reject H0
- Analysis of Variance (compares means between 3+ samples)
- Does not indicate which group(s) differ from which other group(s)
- Parametric test
- Bonferroni post hoc test reveals which specific means differ; use it for pairwise comparisons if the ANOVA was significant
- It multiplies each of the significance levels from the LSD test by the number of tests performed; if this value is greater than 1, a significance level of 1 is used
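A minimal sketch of these steps in Python, assuming scipy and numpy are available; the three group arrays are made-up example data, not course data:

import numpy as np
from scipy import stats

# made-up example data: three independent groups
g1 = np.array([4.1, 5.0, 5.6, 4.8])
g2 = np.array([6.2, 7.1, 6.8, 7.4])
g3 = np.array([5.5, 5.9, 6.1, 5.2])

groups = [g1, g2, g3]
grand_mean = np.concatenate(groups).mean()
k = len(groups)                      # number of groups
N = sum(len(g) for g in groups)      # total sample size

# between-groups mean square: squared deviation of each group mean from
# the grand mean, weighted by group size, over df = k - 1
ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)

# within-groups mean square: (n - 1) * sample variance summed across
# groups (the grand sum), over df total = N - k
ms_within = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / (N - k)

f_manual = ms_between / ms_within
f_stat, p_value = stats.f_oneway(g1, g2, g3)   # same F-ratio
print(f_manual, f_stat, p_value)               # reject H0 if p < 0.05

The scipy result agrees with the hand computation; a significant F still needs a post hoc test (e.g. Bonferroni) to say which groups differ.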

Chi-Square Test

1) calculate the expected frequency (E) for each cell = (row total)(column total) / total sample N
2) for each cell, find (difference between observed & expected counts)²
3) divide each squared difference by that cell's expected count, then sum the results
4) df = (number of rows - 1)(number of columns - 1)
5) check the χ² table for significance at the 0.05 alpha level

Standardized Residuals: reveal which cell adds the most statistical value to the test
Phi (Φ): measures the strength of association of a chi-square test (2x2 table)
Cramer's V: measures the strength of association of a chi-square test (greater than 2x2 table)
- Dependent & independent variables: nominal/nominal or nominal/ordinal data
- H0 = no relationship between the variables; expected counts for each cell = observed counts
- Assumes n ≥ 20 and no expected frequencies ≤ 5 in 20% or more of the cells
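A minimal sketch of the same steps with scipy, on a made-up 2x3 table of observed counts:

import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[10, 20, 30],
                     [15, 25, 10]])                  # made-up counts
chi2, p, dof, expected = chi2_contingency(observed)  # E = (row)(col)/N per cell

# standardized residuals: which cells add the most to the statistic
std_resid = (observed - expected) / np.sqrt(expected)

# Cramer's V for a table larger than 2x2 (Phi is the 2x2 special case)
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))
print(chi2, p, dof, cramers_v)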

Fisher's Exact Test for Chi-Square

- Use when the chi-square assumptions are violated (expected frequencies ≤ 5 in more than 20% of the cells)
- Use for very small samples
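A minimal sketch with scipy, using a made-up 2x2 table with very small counts (scipy's fisher_exact only handles 2x2 tables):

from scipy.stats import fisher_exact

table = [[3, 1],
         [1, 5]]               # made-up small-sample counts
odds_ratio, p = fisher_exact(table)
print(odds_ratio, p)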

Spearman's Rank Correl­ation

1) turn raw scores into ranks
2) find d² = (difference between rankings)²
3) add up the d² column to obtain Σd²
4) calculate Spearman's rank correlation coefficient (rho): rs = 1 - (6Σd²) / (N³ - N); df = n - 2

Rho varies from -1 to +1:
-1 = a perfect negative correlation (as X increases, Y decreases)
0 = no association
+1 = a perfect positive correlation (as X increases, Y increases)
- Measure of association for two ordinal variables: whether a relationship exists, how strong it is, and the direction/pattern of the relationship (what happens to one variable happens to the other)
- Nonparametric version of the Pearson correlation coefficient
- H0 = no significant relationship
- independent = X; dependent = Y
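A minimal sketch with scipy on made-up, tie-free scores; the manual line mirrors the rs formula above (which assumes no tied ranks):

import numpy as np
from scipy.stats import spearmanr, rankdata

x = np.array([35, 23, 47, 17, 10, 43, 9, 6, 28])   # made-up scores
y = np.array([30, 33, 45, 23, 8, 49, 12, 4, 31])

rho, p = spearmanr(x, y)

d = rankdata(x) - rankdata(y)   # difference between rankings
n = len(x)
rho_manual = 1 - (6 * np.sum(d ** 2)) / (n ** 3 - n)
print(rho, rho_manual, p)       # the two rho values match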

Pear­son's R Correl­ation Coeffi­cient

- r (rho) = measure of association (-1 to +1)
- assumes X and Y are normally distributed & linearly related
- (Pearson's r)² = PRE statistic (strength of predicting the amount of variance in Y based on X)
- r² = % of variance in the dependent (Y) explained by the independent (X)
- usually interval/ratio level data
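A minimal sketch with scipy on made-up interval-level data:

import numpy as np
from scipy.stats import pearsonr

x = np.array([1.2, 2.4, 3.1, 4.8, 5.0, 6.3])    # independent (made-up)
y = np.array([2.0, 4.1, 5.9, 9.0, 10.2, 12.1])  # dependent (made-up)
r, p = pearsonr(x, y)
r_squared = r ** 2   # PRE: % of variance in Y explained by X
print(r, r_squared, p)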

Parametric vs. Non-pa­ram­etric Tests

Parametric:
- interval or ratio data
- One-way ANOVA
- Pearson's R Correlation Coefficient

Non-Parametric (distribution free):
- nominal and/or ordinal data
- Wilcoxon Signed-Rank Test for Two Related Conditions
- Mann-Whitney U Test for Two Independent Conditions
- Wilcoxon Rank Sum Test for Two Independent Conditions
- Chi-Square Test
- Kruskal-Wallis
- Spearman's Rank Correlation

Wilcoxon Rank-Sum & Mann-Whitney U Tests

- nonparametric equivalent of the independent-sample t-test
- nominal and/or ordinal data
- tests two independent conditions
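A minimal sketch with scipy; the two lists are made-up scores from two independent conditions. The rank-sum and U tests are equivalent formulations, though small samples may give slightly different p-values (exact vs. normal approximation):

from scipy.stats import mannwhitneyu, ranksums

a = [12, 15, 14, 10, 8]       # condition A (made-up)
b = [22, 25, 17, 24, 16]      # condition B, independent of A
u_stat, p_u = mannwhitneyu(a, b, alternative='two-sided')
z_stat, p_z = ranksums(a, b)  # Wilcoxon rank-sum version
print(p_u, p_z)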

Wilcoxon Signed-Rank

- Use this test for two related conditions (paired, matched)
- ordinal data
- nonparametric equivalent of the dependent-sample t-test
- H0 = the two groups are identically distributed
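A minimal sketch with scipy on made-up paired scores (e.g. the same cases measured before and after):

from scipy.stats import wilcoxon

before = [125, 115, 130, 140, 140, 115, 140, 125]  # made-up paired data
after  = [110, 122, 125, 120, 141, 124, 123, 137]
stat, p = wilcoxon(before, after)
print(stat, p)   # small p: reject H0 of identical distributions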

Kruskal-Wallis

- nonparametric equivalent of one-way ANOVA
- nominal or ordinal data, but more than two independent samples
- uses the chi-square distribution
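A minimal sketch with scipy on three made-up independent samples:

from scipy.stats import kruskal

g1 = [27, 2, 4, 18, 7, 9]     # made-up independent samples
g2 = [20, 8, 14, 36, 21, 22]
g3 = [34, 31, 3, 23, 30, 6]
h_stat, p = kruskal(g1, g2, g3)   # H is compared to chi-square, df = k - 1
print(h_stat, p)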

Regression

- Predicts the dependent (Y) based on the value of the independent (X)
- Regression line: the line that makes the sum of squares of the vertical distances of the data points from the line as small as possible
- Principle of least squares: finds estimates of the parameters in a statistical model based on observed data
- y = a + bx; a = y-intercept; b = slope
- interval/ratio level data
- assumes a linear relationship
- observes the independent (X)
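A minimal sketch of fitting a least-squares line with scipy, on made-up interval data:

import numpy as np
from scipy.stats import linregress

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent (observed, made-up)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # dependent (predicted from x)
res = linregress(x, y)

# y = a + bx, with a and b chosen to minimize the summed squared
# vertical distances from the points to the line
predicted = res.intercept + res.slope * x
print(res.intercept, res.slope)

Predictions should stay inside the range of observed X values.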

Correl­ation

Tests for:
- how well X predicts Y
- how tightly the predicted values fit the regression line
- to what degree X covaries with Y

Difference between r and r²:
- r = Pearson's correlation coefficient = measure of association
- r² = PRE statistic (strength of predicting the amount of variance in Y based on X)
- r² = % of variance in the dependent (Y) explained by the independent (X)

Assumptions:
- for each independent (X) value, the dependent (Y) must be normal
- dependent variable variances are the same for all independent values (homoscedasticity)
- avoid predictions outside the observed values; beware extremes; relationships must be linear over all values
- linear relationship; observes the independent (X)
- usually interval/ratio level data
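A minimal sketch of the r vs. r² distinction with scipy, on made-up data; the last line checks that r² equals the share of Y's variance the regression line explains:

import numpy as np
from scipy.stats import pearsonr, linregress

x = np.array([2.0, 3.5, 4.1, 5.6, 6.0, 7.2, 8.1])   # made-up
y = np.array([3.1, 4.0, 5.2, 6.8, 6.5, 8.3, 9.0])
r, _ = pearsonr(x, y)

res = linregress(x, y)
resid = y - (res.intercept + res.slope * x)
explained = 1 - resid.var() / y.var()   # equals r ** 2
print(r, r ** 2, explained)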