
Psych 2260 Cheat Sheet (DRAFT) by

Cheat sheet for an introductory research methods course in psychology; covers the statistics content.

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Ch. 3

Correlation Coefficient: # that tells the degree of correlation (r)
Strength: small ±.10, med ±.30, large ±.50; r = [Σ(Zx)(Zy)]/(N - 1)
Linear Correlation: Relation that roughly follows a straight line
Curvilinear correlation: Not a straight line
Cross-product: Multiplying a score on one variable by a score on another
Cross-product of Z scores: The same, but using z-scores instead of raw scores
Variables: predictor is X and criterion is Y
Prediction Model: Using z-scores to make predictions
Predicted Zy = (β)(Zx)
Raw Score Prediction:
Form 1: Predicted Y = a + (b)(X)  Form 2: Predicted Y = (SDy)(Predicted Zy) + My
Correlation Matrix: Table of correlations set up so each variable is listed down the left and across the top
Multiple Regression: Making predictions w/ multiple correlations
Predicted Zy = (β1)(Zx1) + (β2)(Zx2) + (β3)(Zx3) ...
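A minimal Python sketch of the formulas above (r from cross-products of Z scores, then a Z-score and raw-score prediction). The scores and variable names are mine, not from the course:

```python
# r from cross-products of Z scores, then a prediction (made-up data).
from statistics import mean, stdev

x = [1, 3, 4, 6, 8]   # predictor (X)
y = [2, 3, 5, 7, 9]   # criterion (Y)
n = len(x)

mx, my = mean(x), mean(y)
sdx, sdy = stdev(x), stdev(y)          # sample SDs (N - 1 in the denominator)

zx = [(xi - mx) / sdx for xi in x]     # Z scores for X
zy = [(yi - my) / sdy for yi in y]     # Z scores for Y

# r = [Σ(Zx)(Zy)] / (N - 1)
r = sum(zxi * zyi for zxi, zyi in zip(zx, zy)) / (n - 1)

# Prediction model with one predictor: predicted Zy = (β)(Zx), where β = r
beta = r
new_x = 5
z_new_x = (new_x - mx) / sdx
pred_zy = beta * z_new_x

# Raw-score prediction, Form 2: predicted Y = (SDy)(predicted Zy) + My
pred_y = sdy * pred_zy + my
print(round(r, 3), round(pred_y, 2))
```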

Ch. 10

Chi-Square Tests: For when the variable of interest is a nominal variable; the scores represent frequencies
Frequencies: How many people/observations fall into different categories
Chi-Square Test for Goodness of Fit: Chi-square test involving levels of a single nominal variable
Goodness of Fit: X² = Σ[(O - E)²/E]
O = observed frequency, E = expected frequency
df for X² test: df = NCategories - 1
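A minimal Python sketch of the goodness-of-fit formula, with made-up observed counts and equal expected proportions as the illustrative null hypothesis:

```python
# X² = Σ (O - E)² / E with df = NCategories - 1 (made-up data).
observed = [30, 50, 20]                 # O for each category
expected_props = [1/3, 1/3, 1/3]        # null-hypothesis proportions
n = sum(observed)
expected = [p * n for p in expected_props]   # E for each category

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                  # df = NCategories - 1
print(round(chi_square, 2), df)         # compare against the Table A-4 cutoff
```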
Chi-Square Test for Independence: Chi-square test involving 2 nominal variables, each w/ several categories
Independence: Refers to a lack of relation between 2 nominal variables
Expected frequencies for X² for independence: the # expected in each cell, E = (R/N)(C), where R = row total and C = column total
Figuring X² for independence: Same as goodness of fit, but uses the scores from each cell of the contingency table
df for X² for independence: df = (NColumns - 1)(NRows - 1)
For cutoff scores: use Table A-4
Phi Coefficient (φ): Measure of association between two dichotomous nominal variables; effect size for a X² for independence w/ a 2x2 contingency table
φ = √(X²/N)
Cramer's Phi: Extension of phi, used when the contingency table is larger than 2x2; AKA Cramer's V, denoted φC or VC
Cramer's φ = √{X²/[(N)(dfSmaller)]}
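A minimal Python sketch of the X² test for independence plus Cramer's phi, using a made-up 2x3 contingency table:

```python
# X² for independence and Cramer's phi (made-up contingency table).
from math import sqrt

table = [
    [20, 30, 10],   # row 1 observed cell frequencies
    [25, 15, 20],   # row 2
]
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = (row_totals[i] / n) * col_totals[j]   # E = (R/N)(C)
        chi_square += (observed - expected) ** 2 / expected

df = (len(col_totals) - 1) * (len(row_totals) - 1)       # (NColumns-1)(NRows-1)

# Effect size: phi for a 2x2 table; Cramer's phi (same formula, dfSmaller) otherwise
df_smaller = min(len(row_totals), len(col_totals)) - 1
cramers_phi = sqrt(chi_square / (n * df_smaller))
print(round(chi_square, 2), df, round(cramers_phi, 2))
```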
Data Transformation: Math procedure applied to each score in a sample, usually done to make the sample dist closer to normal
Square-Root Transformation: Taking the √ of each score in a sample to make the distribution closer to normal
Log Transformation: Taking the logarithm of each score to make the sample dist closer to normal
Rank-Order Transformation: Changing the set of scores to ranks so that the lowest score is 1, the next lowest is 2, and so on
Rank-Order Test: Hyp test procedure that uses rank-ordered scores; sometimes called distribution-free or non-parametric tests
Rank-Order Tests Corresponding to Parametric Tests:
Mann-Whitney U: Rank-order test. U1 = (N1)(N2) + [N1(N1 + 1)/2] - ΣR1 // U2 = (N1)(N2) + [N2(N2 + 1)/2] - ΣR2
Where: U1/U2 = the U statistic for each group, N1/N2 = sample size of each group, ΣR1/ΣR2 = sum of the rank orders for each condition
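A minimal Python sketch of the rank-order transformation and the Mann-Whitney U formulas, with made-up scores (tied scores get the average of the ranks they span):

```python
# Rank-order transformation, then U1 and U2 (made-up data).
group1 = [12, 15, 9, 20]
group2 = [14, 8, 11, 16, 10]

combined = sorted(group1 + group2)

def rank(score):
    # average rank for tied scores (lowest score = rank 1)
    positions = [i + 1 for i, s in enumerate(combined) if s == score]
    return sum(positions) / len(positions)

n1, n2 = len(group1), len(group2)
sum_r1 = sum(rank(s) for s in group1)
sum_r2 = sum(rank(s) for s in group2)

u1 = n1 * n2 + n1 * (n1 + 1) / 2 - sum_r1
u2 = n1 * n2 + n2 * (n2 + 1) / 2 - sum_r2
print(u1, u2)   # the smaller U is the one usually compared against the table cutoff
```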

 

Ch. 4

Inferential Statistics: Conclusions that go beyond the particular group of research participants studied
Normal curve/dist: Variables follow a unimodal, roughly symmetrical, bell-shaped dist
Central Limit Theorem: Principle that the distribution of sums/means of scores taken at random from any dist of individuals will tend to form a normal curve
Haphazard Selection: Picking for convenience (i.e., whoever happens to be available)
Population Parameters: M, SD², and SD of a pop
Sample Stats: M, SD², and SD figured for the scores in a sample
Relative Freq: # of times something happens relative to the # of times it could happen
Probability: p = successful outcomes / all possible outcomes
Response rate: Proportion of individuals approached for the study who actually participated in the study
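A tiny Python illustration of the probability formula, using a standard deck of cards as a made-up example:

```python
# p = successful outcomes / all possible outcomes
successful_outcomes = 13     # hearts in a standard deck
possible_outcomes = 52       # cards in the deck
p = successful_outcomes / possible_outcomes
print(p)                     # 0.25
```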

Ch. 5

Theory: Set of principles that attempt to explain 1+ facts/relationships/events
Hypothesis testing process:
Step 1- Restate the question as research/null hypotheses. Step 2- Determine the chara of the comparison distribution. Step 3- Determine the cutoff sample score. Step 4- Determine the sample's score on the comparison distribution. Step 5- Decide whether to accept/reject the null hypothesis
Comparison Distri­bution: Represents the population situation if the null hypothesis is true
Meta-a­nal­ysis: Combo of results from multiple diff studies
Direct­ional Hypoth­esis: Study that focuses on a specific direction of effect
Decision Errors: Correct procedures leading to faulty results
Type I Error: Concluding the study supports the research hypothesis when it is actually false (rejecting a true null hypothesis)
Type II Error: Failing to reject (accepting) a null hypothesis that is actually false
Not Signif­icant: NS

Ch. 8

t test for independent means: uses scores obtained from 2 separate groups that are independent of each other
Distribution of differences between means: comp dist used in a t test for independent Ms; we are not using difference scores, and instead compare 1 group's M to the other group's M
Weighted Avg: An average weighted by the amount of info that each sample provides
Pooled estimate of pop SD²: S²Pooled = (df1/dfTotal)(S1²) + (df2/dfTotal)(S2²)
SD² of each dist of Ms: For pop 1: S²M1 = S²Pooled/N1; For pop 2: S²M2 = S²Pooled/N2
SD² of the dist of differences between Ms: S²Difference
S²Difference = S²M1 + S²M2
SD of the dist of differences between Ms: SDifference
SDifference = √S²Difference
df for t test for independent Ms: dfTotal = df1 + df2
t test for independent Ms: t = (M1 - M2)/SDifference
Hyp Test Procedure: Find S1² + S2² → S²Pooled → S²M1 + S²M2 → S²Difference → SDifference → cutoff → M1 + M2 → t
Effect Size for independent Ms t: Est effect size = (M1 - M2)/SPooled
Harmonic M: Gives the equivalent sample size of equal-sized groups when the actual group sizes are unequal (used for est effect size when group sizes aren't even). Harmonic M = [(2)(N1)(N2)]/(N1 + N2)
t test shown in research: t(dfTotal) = t score, p < .01
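A minimal Python sketch of the whole Ch. 8 procedure (pooled SD², distribution of differences, t, effect size), with made-up scores for the two groups:

```python
# t test for independent means, following the chain above (made-up data).
from math import sqrt
from statistics import mean, variance

group1 = [10, 12, 9, 14, 11]
group2 = [8, 7, 9, 6, 10, 8]

n1, n2 = len(group1), len(group2)
m1, m2 = mean(group1), mean(group2)
s2_1, s2_2 = variance(group1), variance(group2)   # unbiased estimates (df = N - 1)

df1, df2 = n1 - 1, n2 - 1
df_total = df1 + df2

# S²Pooled = (df1/dfTotal)(S1²) + (df2/dfTotal)(S2²)
s2_pooled = (df1 / df_total) * s2_1 + (df2 / df_total) * s2_2

# Variances of each dist of means, then of the dist of differences between means
s2_m1 = s2_pooled / n1
s2_m2 = s2_pooled / n2
s2_diff = s2_m1 + s2_m2
s_diff = sqrt(s2_diff)

t = (m1 - m2) / s_diff                     # compare against the t cutoff at dfTotal
effect_size = (m1 - m2) / sqrt(s2_pooled)  # est effect size = (M1 - M2)/SPooled
print(round(t, 2), df_total, round(effect_size, 2))
```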
 

Ch. 6

Distribution of Means (DoM): The distribution of the means of each of many samples of equal size, all randomly selected from the same population
3 Chara of DoM: 1. Its M 2. Its spread (SD² + SD) 3. Its shape
Rules: Rule 1- PopMM (M of DoM) = PopM (M of pop). Rule 2a- Pop SD²M = SD²/N. Rule 2b- Pop SDM = √SD²M. Rule 3- The shape of a DoM is approx normal if either a) each sample has 30+ participants or b) the dist of the pop of individuals is normal
Z Test: Hyp test in which the sample's Z score is checked against the normal curve
Effect Size: The amount that the pops (exp and non-exp) are separated/don't overlap
Cohen's d: d = [μ1 (M of exp group) - μ2 (M of known pop)]/σ (SD of known pop)
d effect size conventions: small 0.2, med 0.5, large 0.8
Type I Error: Rejecting the null hypothesis when the null hypothesis is actually true
Type II Error: Accepting the null hypothesis when the null hypothesis is false, aka beta error
Type III Error: Concluding that there is a sig diff in one direction when the true effect is in the other direction
Statistical Power: Likelihood that a study will correctly detect a real treatment effect; i.e., the likelihood that the study will correctly reject a false null hypothesis
Hypothesis testing steps: Step 1- Develop hypotheses, e.g. H0: μ1 ≤ μ2, H1: μ1 > μ2. Step 2- Determine chara of the comp pop: σM = σ/√N. Step 3- Determine the cutoff score. Step 4- Determine the sample's score on the comp dist: Z = (M - μM)/σM. Step 5- Decide whether to reject or accept the null hypothesis
Power Calculation Steps: Step 1: Turn the Z cutoff score into a raw score, M = (Z)(σM) + μM. Step 2: Figure the Z score for that cutoff M on the predicted (research-hypothesis) distribution, Z = (M - μM)/σM. Step 3: Use Table A-1 to determine the probability of getting a score more extreme than the result from Step 2. Power = 1 - beta
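A minimal Python sketch of the five hypothesis-testing steps and the power steps above, with made-up population values, sample size, and predicted mean; the one-tailed .05 cutoff and Table A-1 lookups are replaced by the normal distribution in Python's statistics module:

```python
# Z test on a distribution of means, then power = 1 - beta (made-up numbers).
from math import sqrt
from statistics import NormalDist

mu = 100            # known population M (null-hypothesis population)
sigma = 15          # known population SD
n = 25              # sample size
sample_m = 106      # observed sample M
predicted_mu = 108  # M predicted under the research hypothesis (for power)

# Step 2: characteristics of the comparison distribution (a distribution of means)
sigma_m = sigma / sqrt(n)          # σM = σ/√N

# Step 3: cutoff Z for a one-tailed test at p < .05
z_cutoff = NormalDist().inv_cdf(0.95)

# Step 4: the sample's Z score on the comparison distribution
z = (sample_m - mu) / sigma_m
reject_null = z > z_cutoff         # Step 5

# Power: raw-score cutoff, then its Z on the predicted distribution
raw_cutoff = z_cutoff * sigma_m + mu                     # M = (Z)(σM) + μM
z_on_predicted = (raw_cutoff - predicted_mu) / sigma_m   # Z on predicted dist
beta = NormalDist().cdf(z_on_predicted)                  # prob of missing a real effect
power = 1 - beta                                         # power = 1 - beta

d = (predicted_mu - mu) / sigma    # Cohen's d for the predicted effect
print(round(z, 2), reject_null, round(power, 2), round(d, 2))
```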

Ch. 7

t Tests: Hyp test procedures used when the pop SD² is unknown (AKA Student's t)
1-sample t test: scores from one sample where the comparison pop has a known M but an unknown SD²
1-sample t hyp test: In Step 2 we have to find the unbiased estimate of the pop SD², S² = [Σ(X - M)²]/df, and SM = √(S²/N); in Step 3 we use Table A-2 instead; and in Step 4 we calculate a t score, t = (M - Pop M)/SM, to compare against our cutoff score
Degrees of Freedom: df = N - 1
Repeated-Measures design: Research situation where 2 scores are taken from each person in the sample (within-subjects design)
t test for dependent means: Each person has 2 scores; we use difference scores for the participants (1 score minus the other) and we assume the pop M is 0
For the t test for dependent Ms, calculate the difference scores before doing the hyp test
Est. Effect Size (for t test w/ dep Ms): Mean of the difference scores divided by the SD of the pop of difference scores; Est effect size = M/S
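A minimal Python sketch of the t test for dependent means (difference scores, S², SM, t, effect size), with made-up before/after scores:

```python
# t test for dependent means on difference scores (made-up data).
from math import sqrt

before = [20, 18, 25, 22, 19]
after  = [24, 21, 27, 22, 25]

diffs = [a - b for a, b in zip(after, before)]   # difference score per person
n = len(diffs)
df = n - 1
m = sum(diffs) / n

# S² = [Σ(X - M)²]/df  (unbiased estimate of the pop SD²)
s2 = sum((d - m) ** 2 for d in diffs) / df
s = sqrt(s2)
s_m = sqrt(s2 / n)         # SM, the SD of the distribution of means

t = (m - 0) / s_m          # comparison pop M assumed to be 0
effect_size = m / s        # est effect size = M / S
print(round(t, 2), df, round(effect_size, 2))   # compare t against Table A-2 at df
```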

Ch. 9

ANOVA: Stat procedure for testing the variation among the Ms of >2 groups
The null hyp for ANOVA is that the several pops being compared all have the same M
Within-group est of the pop SD²: Averaging the pop SD² estimates from each sample into a single pooled est; gives an average of estimates figured entirely from the scores within each of the samples
Between-group est of the pop SD²: Est of the pop SD² figured from the variation among the Ms of the samples
Treatment effect: Different treatment received by the groups causes the groups to have different Ms
F Ratio: The between-groups est divided by the within-groups est
F Distribution: Mathematically defined curve that is the comp dist used in an ANOVA
Before testing, find M and S² for each group of participants
Within-groups SD² est: S²Within = (S1² + S2² + ... + SLast²)/NGroups
Grand M: The overall M of all the scores; GM = ΣM/NGroups
Est of the SD² of the dist of Ms: S²M = [Σ(M - GM)²]/dfBetween
Comparison of figuring the SD² of a dist of Ms from the SD² of a dist of individuals: from dist of individuals → dist of Ms: S²M = S²/N; from dist of Ms → dist of individuals: S²Between = (S²M)(N)
F Ratio: Ratio of the between-groups est of the pop SD² to the within-groups est of the pop SD²; F = S²Between/S²Within; use Table A-3 for the cutoff
Between-groups df: Numerator df; dfBetween = NGroups - 1
Within-groups df: Denominator df; dfWithin = df1 + df2 + ... + dfLast
Hyp Test Procedure: Find S² + M for each group → S²Within → GM → dfBetween → dfWithin → S²M → S²Between → F
Effect size for ANOVA: R²
R² = [(S²Between)(dfBetween)]/{[(S²Between)(dfBetween)] + [(S²Within)(dfWithin)]}
R² effect size conventions: small .01, med .06, large .14
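A minimal Python sketch of the one-way ANOVA procedure and R², with three made-up groups of equal size (the S²Between = (S²M)(N) step assumes equal group sizes):

```python
# One-way ANOVA: S²Within, S²Between, F, and R² (made-up data, equal group sizes).
from statistics import mean, variance

groups = [
    [8, 7, 9, 6, 10],
    [5, 4, 6, 5, 5],
    [9, 10, 8, 11, 12],
]
n_groups = len(groups)
n_per_group = len(groups[0])

group_means = [mean(g) for g in groups]
group_vars = [variance(g) for g in groups]     # unbiased S² for each group

# Within-groups estimate: S²Within = (S1² + S2² + ... + SLast²)/NGroups
s2_within = sum(group_vars) / n_groups

# Between-groups estimate: S²M = Σ(M - GM)²/dfBetween, then S²Between = (S²M)(N)
grand_mean = sum(group_means) / n_groups
df_between = n_groups - 1
s2_m = sum((m - grand_mean) ** 2 for m in group_means) / df_between
s2_between = s2_m * n_per_group

df_within = sum(len(g) - 1 for g in groups)    # df1 + df2 + ... + dfLast
f_ratio = s2_between / s2_within               # compare against the Table A-3 cutoff

# R² = (S²Between)(dfBetween) / [(S²Between)(dfBetween) + (S²Within)(dfWithin)]
r2 = (s2_between * df_between) / (s2_between * df_between + s2_within * df_within)
print(round(f_ratio, 2), df_between, df_within, round(r2, 2))
```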
Factorial ANOVA: ANOVA for a factorial research design
Interaction Effect: The effect of one grouping variable on the results depends on the level of the other (denoted var1 × var2)
Two-way ANOVA: Considers the effects of 2 variables that separate groups
Grouping Variables/Independent Variables: Variables that separate groups
One-Way ANOVA: Considers the effect of only one grouping variable
Different ANOVA Means: Cell Ms- M of the scores in each cell; Marginal Ms- M of all scores at one level of a grouping variable (vertical/horizontal grouping)
Dependent Variable: Represents the effect of the experimental procedure
One-Way ANOVA in research: F(dfBetween, dfWithin) = F ratio score, p < .01
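A tiny Python sketch of cell Ms vs. marginal Ms for a made-up 2x2 factorial layout:

```python
# Cell means vs. marginal means for a 2x2 factorial design (made-up data).
from statistics import mean

cells = {                      # (row grouping level, column grouping level) -> scores
    ("A1", "B1"): [4, 6, 5],
    ("A1", "B2"): [7, 9, 8],
    ("A2", "B1"): [3, 2, 4],
    ("A2", "B2"): [6, 5, 7],
}
cell_means = {key: mean(scores) for key, scores in cells.items()}

# Marginal M for one level of a grouping variable = M of all scores at that level
marginal_a1 = mean(cells[("A1", "B1")] + cells[("A1", "B2")])
marginal_b1 = mean(cells[("A1", "B1")] + cells[("A2", "B1")])
print(cell_means, marginal_a1, marginal_b1)
```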