
Research COMP Cheat Sheet (DRAFT)

William James College research comprehensive assessment

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Overview

Ways of Knowing: Tradition or History, Authority, Personal Observation, Rational Analysis, Scientific Method
Quality Research: Can be replicated, Is generalizable to other settings, Is based on a reasonable rationale and linked to a theory or theories, Is not based on political beliefs, Is objective (though still subject to human error)
Common errors researchers make: Inaccurate observation, Overgeneralization, Selective observation, Illogical reasoning
Research Philosophies Often Used in Counseling: Post-Positivism assumes that individuals’ perceptions of the social environment and of an event influence how they behave (including researchers, who try to “bracket” and remove biases); also assumes we can only approximate the “truth” with research. Constructivism assumes there is no true reality, but multiple, socially-constructed realities; research is about accessing participants’ “lived experience”; values of researchers cannot be removed from the research process. Scientific Realism assumes the world is composed of layers of causal structures; some causal structures are easy to observe and others are not; the researcher’s job is to identify causal structures and how they interact to produce an effect.
Types of Knowledge: Research Approaches: Description: consists of attempts to describe natural, social, or psychological events; focus on assessment, which allows people to describe identified events. Prediction: involves developing ways to predict identified outcomes. Improvement: involves developing information designed to determine the effectiveness of interventions. Explanation: researchers frame questions and problems in terms of theories or explanations of phenomena.
Practitioner-Scientist Model: Emphasis is on practice first, with the use of research/science as a foundation for conducting practice. Can involve using steps of the scientific method to solve problems in clinical work (i.e., identify the problem; operationalize the problem; identify principles, theories, and research applicable to solving the problem; identify desired outcomes; select & implement a strategy; monitor outcomes)

Ethics and Research

Ethical Theories: Utilitarianism: the end justifies the means. Deontological: the outcome is less important than following a rule or principle (more concerned with principles of right or wrong).
General principles of ethics: Autonomy, Beneficence, Justice, Nonmaleficence, Fidelity
Ethical Guidelines: APA, ACA, Laws (state & federal)
ACA Guidelines for Scholarly Work: Need to balance the goal of extending knowledge with ethical principles. Accurately and reliably plan and conduct the study, consistent with ethical guidelines. Report results accurately, including unfavorable results. Report errors. Minimize bias and respect diversity in designing and implementing research. Provide info that describes the extent to which results are applicable to diverse populations. Make original research information available to other researchers who want to check it. No duplicate publication. Give credit adequately and accurately. Don’t plagiarize.
Professional Codes of Ethics: The ACA Code covers research responsibilities of the counselor, rights of research participants (the importance of informed consent), reporting of results, and issues related to publication.
Laws (State and Federal): Federal legislation requires that investigators who are associated with institutions that receive federal funds must receive IRB approval before conducting studies.
Questions to ask to determine the ethical quality of research: What possible negative implications or harm does this study have on the population? What are the possible benefits of the research results for the population? Are the sample and population studied fairly representative of the general population?
ACA Guidelines: Participants: Identify & eliminate/minimize potential sources of risk to persons (incl. physical & mental discomfort, harm, danger). Get informed consent from participants. Participation of students/supervisees has to be optional. Don’t use deception unless there’s no other option and the prospective value of the research justifies it; physical/emotional harm is never justified; debrief ASAP. No sexual or romantic interactions/relationships with research participants; don’t sexually harass participants. Maintain privacy/confidentiality of participants.
Identifying Research Topics: Research ideas can extend previous studies or investigate areas that have not been researched before. (The goal is to contribute to knowledge in a meaningful way and motivate others to learn more about the topic.) Strategies for Identifying a Topic: Personal: think about experiences that have raised your curiosity. Interpersonal: brainstorm with others (classmates, professors, supervisors). Printed Sources: what have you read but would like to learn more about? Computer Strategies: do PsycInfo and other online searches.
Research Question & Hypothesis: Research questions and hypotheses guide research. Research questions explore the relations among or between constructs. Research hypotheses state specific expected relationship(s) between constructs. 3 general categories: Descriptive, Difference, Relationship
Operational Definitions & Research Variables: Variable: any characteristic, behavior, event, or other phenomenon that is capable of varying on at least two different levels or conditions. Independent Variable: a variable that is believed to affect or change the status of another variable (the DV). Dependent Variable: the variable whose status seems to “depend on” the status of another variable (the IV).

Program Evaluation

Through program evaluation, we can gather information about our programs, determine whether programs are effective, and improve our programs.
Research v. Program Evaluation: Research purpose: test theories, develop practices and procedures. Evaluation purpose: decision making and information for social programs. Research audience: professional and scientific community. Evaluation audience: specific groups or communities such as funding sources or governmental agencies. Method (same for both): true experimental, quasi-experimental, nonexperimental (descriptive), qualitative, etc.
Professional Standards and Guidelines for Program Evaluation: Joint Committee on Standards for Educational Evaluation (1994): 30 standards in four categories: Utility, Feasibility, Propriety, Accuracy. The Guiding Principles of Evaluators (American Evaluation Association, 2004): Five basic principles: Systematic inquiry, Competence, Integrity and honesty, Respect for people, Responsibilities for general and public welfare.
Types of Evaluations: Need: designed to identify the discrepancy between actual conditions and what is sought or desired. Formative: concerns whether a program is implemented as designed. Summative: evaluation of outcome; concerns how successful a program is in achieving designated goals.
Needs Assessment: Typically conducted before designing programs to determine what is needed. Formative Evaluation (aka: Implementation/Process): Focus on the amount and quality of effort needed to implement the program; results typically provided to those implementing the program. Summative Evaluation (aka: Effectiveness/Outcome): Results typically provided to policy makers and/or funding sources.
Evaluation Models: Case study model: usually involves the use of qualitative methods; involves a deeper investigation into the processes of a program. Consumer-oriented approach: evaluation is focused on determining the worth or value of a program; the use of checklists is key. Context, input, process, and product (CIPP) evaluation model: focused on both formative and summative evaluation.
Steps in Evaluation: Step 1: Formation of the evaluation team (individual v. team & insiders v. outsiders). Step 2: Identification of relevant stakeholders. Step 3: Determination of a focus for the evaluation. Step 4: Identification of evaluation model and methods. Step 5: Selection of evaluation methods and designs. Step 6: Selection of measures for the evaluation. Step 7: Collection of the data. Step 8: Analysis of the data. Step 9: Reporting of the results to relevant stakeholders.

Program Evaluation

Identify the stakeholder(s) who were contacted about the PE. Summarize questions/goals stated by referral source(s)/stakeholders.
Describe the agency/organization where the program is taking place (location, mission, types of services provided, clientele served, etc.)
Describe the program that is being evaluated (purpose, structure/format, clientele in program, etc.)
Provide a history of the current program and the question/issue that is prompting the evaluation (history of the current issue, summary of previous interventions, current state of the issue)
Review articles on the topic in an integrative way, summarizing what we know about each topic (comparing and contrasting)
Highlight strengths/limitations/gaps in the previous literature.
Evaluating Articles in the Professional Literature: A general purpose of the literature section is to provide an argument for conducting the study. It is important to note any biases in the literature review. The researcher needs to define new concepts in the literature review.

Hourglass Form of Research Articles

Introduction/Literature Review: Introduction to the problem, development of the framework of the study, statement of purpose, research questions, hypotheses; provides an argument for conducting the research. Should include relevant definitions and descriptions of unknown concepts.
Method: Describes how hypotheses were tested, including how all aspects of the study were conducted. Usually includes: Participants, Measures/Variables/Instruments/Materials, Design, Procedures
Results: Summarizes data and results of statistical analyses
Discussion: Explains whether results supported hypotheses, states conclusions drawn by the authors and relates them to previous research, describes study limitations, suggests implications and future research

Annotated Bibliographies and Literature Review

What to Include in Annotation: Summary: main topic/ideas discussed, purpose and justification, methodology and major findings. Critique/Evaluation: goal and credibility of the source, major strengths and weaknesses, value/use of the work. Reflection: how this was useful, how it fits with other works you reviewed, what is relevant to your study and how you plan to integrate the information into your work.
Introduction/Literature Review: Purpose is to concisely convey the rationale and objectives of the study. Goes from very broad to very specific: states the topic of the study and why it is important, reviews existing literature relevant to this particular study in an integrative way, places the study in the context of what we know and what we don’t know about this topic, should very clearly show why this particular study is needed (gaps & how this study meaningfully extends current knowledge), ends with a statement of purpose and hypotheses/RQs
Purpose Statement: Should provide information about the design of the study. Should be testable or researchable (variables should be defined). The population being studied should be identified.
Hypothesis: A hypothesis is a formally stated expectation of outcomes based on theory, previous research, or personal experience. Data can support a hypothesis, but data cannot prove anything; we can only draw conclusions based on an accumulation of evidence.
Types of Hypothesis: Null hypothesis: assumes that there is no difference between groups or no relationship between variables. Alternative hypothesis (two types): Non-directional: posits that there is a difference between groups or a relationship between variables, but does not indicate which group will be more or less or the direction of the relationship. Directional: posits that one group will be more or less than at least one other group, or that there is a positive or negative relationship between two or more variables. (A worked example in symbols follows below.)
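As a minimal worked example (assuming a simple two-group comparison, with μ1 and μ2 standing for the hypothetical group means), the three forms can be written as:

$$H_0: \mu_1 = \mu_2 \quad \text{(null: no difference)}$$
$$H_1: \mu_1 \neq \mu_2 \quad \text{(non-directional alternative)}$$
$$H_1: \mu_1 > \mu_2 \quad \text{(directional alternative)}$$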

Validity

Broadly, validity is about how well-founded the conclusions are that we can make about a research study
Internal Validity: The extent to which the intervention can be considered to account for the results (as opposed to a confounding variable). Relates to the amount of control researchers had over the study.
Threats to Internal Validity:
History: Any event during the time of the study other than the independent variable that could account for the results (control with a no-treatment group & random assignment)
Instrumentation: Changes in the measuring instrument or measuring procedures over time (develop standard procedures for rating, train observers/raters prior to data collection)
Diffusion or Imitation of Treatment: The intervention given to one group is unintentionally provided to another group (tell participants not to talk to each other)
Maturation: Changes over time that result from physical/psychological processes within participants (no-treatment group, random assignment)
Testing: The effects that taking a test once can have on subsequent performance (no-treatment group, reduce the number of test administrations, use a post-test-only design)
Statistical Regression: The tendency for extreme scores on any measure to revert to the mean of a distribution when the measure is readministered (no-treatment or wait-list control group)
Selection Biases: Systematic differences between groups before any experimental manipulations or interventions (random assignment)
Attrition: Loss of participants; a problem in longitudinal studies (participants are more likely to remain in a study if they are doing something interesting, something that has little or no cost or adverse side effects, seems plausible, and is effective)
Combination of Selection and Other Threats: Another threat combines with selection bias. Differential Selection occurs when the experimental and control groups are selected based on different criteria, or when participants are assigned to groups differentially and not by random assignment.
Special Treatment or Reactions of Controls: The control group may receive some special treatment to offset their feelings about not receiving a desirable treatment
External Validity: The extent to which the results can be generalized to circumstances other than those in the particular experiment. Relates to how true to life (generalizable) the study is.
Threats to External Validity:
History/Treatment Interaction: Events occurring at the time of treatment/intervention may affect the outcome (hard to control, but can be addressed by extending the study beyond the impact of the uncontrolled event)
Reactivity of Assessment: Awareness of being assessed leads people to respond differently (include some unobtrusive measures)
Sample Characteristics: The extent to which characteristics of the sample represent the target population (control with random selection)
Interaction Between Sample and Treatment: Specific personal characteristics of participants may interact with treatment and influence the outcome in a way that is not representative of the population (include relevant sample characteristics in the design)
Stimulus Characteristics and Settings: Features of the study with which the intervention or condition may be associated (use multiple experimenters, settings, stimuli)
Reactivity of Experimental Arrangements: The influence of the participants’ awareness that they are participating in an experiment (archival data, observational data)
Multiple-Treatment Interference: In some studies, participants are exposed to multiple treatment conditions (counter-balance)
Novelty Effects: The possibility that the effects of an intervention may in part depend on their innovativeness or novelty in the situation
Test Sensitization: The effect of a previous test on subsequent performance (various group designs: post-test only, Solomon four-group)
Timing of Measurement: The results of an experiment may depend on the point in time that assessment devices are administered
Construct Validity: The conceptual basis (construct) underlying the effect. Threats impact the conclusions that can be drawn from the findings.
Threats to Construct Validity
Experimenter Expectancies: Expectations could lead to changes in tone of voice, posture, facial expressions, delivery of instructions, and adherence to the prescribed procedures
Single Operations and Narrow Stimulus Sampling: The intervention includes features that the investigator considers irrelevant to the study, but that may introduce ambiguity in interpreting the findings (use a wide range of conditions associated with treatment delivery)
Attention and Contact with the Clients: Differential attention across groups may be the basis for group differences (to control for this, you would need to include a placebo group and ensure that experimenters are blind to the conditions to which participants are assigned)
Cues of the Experimental Situation: May include information conveyed to prospective participants prior to their arrival at the experiment, instructions, procedures, and any other features of the experiment
Statistical Conclusion Validity: The extent to which a relation is shown and the extent to which the experiment detects effects if they exist
The Experimenter Effect: When the experimenter communicates to the participants, most often subtly, what outcomes they would like to achieve
Other
Interaction between history and treatment effects refers to how events occurring at the time of the intervention or treatment affect the outcome (a threat to external/ecological validity)

Methods

Types of Populations: Target population: all individuals or objects the researcher is interested in and to which the study results will be applied. Accessible population: the segment of the population that is accessible to the researcher.
Selecting Participant Samples: Determine the characteristics of interest. Identify the relevant demographic characteristics. Select a sample that is representative of the larger population.
Sample Size: Several factors influence the sample size: the design of the study, and whether the researcher is using qualitative or quantitative methods. For quantitative methods, generally, more is better. Rule of thumb: Categorical: 15 participants per group. Continuous (descriptive): 10-15 participants per variable.
Power analysis: 4 factors: Power: the likelihood of finding an effect that actually exists; .80/80% minimum. Effect size: the magnitude of the effect/difference/relationship (expressed as small, medium, or large). Alpha: the probability of finding an effect when one does not exist (generally set at .05, i.e., a 5% chance that the results are due to factors other than the variables in the study). Sample size: the number of participants; setting any three of these factors determines the fourth. (See the sketch below.)
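To make the interplay of these four factors concrete, here is a minimal sketch using the statsmodels library (an assumption about available tooling, not part of the cheat sheet); it solves for the per-group sample size of an independent-samples t-test given a medium effect, alpha = .05, and power = .80:

```python
# Minimal power-analysis sketch (assumes statsmodels is installed)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # Cohen's d (medium effect)
    alpha=0.05,               # Type I error rate
    power=0.80,               # 1 - beta
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```

Fixing any three of the four values determines the fourth; for example, supplying nobs1 and leaving power unspecified would instead return the power achievable with that sample size.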
Clarifying Methods: When the purpose statement denotes an attempt to identify phenomena, events, or exercises
Sampling Methods Used in Quantitative Research
Simple random sampling: Every single individual in the population has an equal chance of being chosen. Advantages: If completed adequately, the results can be generalized readily back to the population. Disadvantages: It is difficult to ensure that every individual in the population has an equal chance of being chosen.
Systematic random sampling: A finite list of those in the population in which every nth person is selected. Advantages: An easy, simple method to use. Disadvantages: Difficult to identify everyone in a population.
Stratified sampling (proportional and nonproportional): Selection of individuals from the population who represent subgroups. Advantages: Ensures adequate representation on relevant variables (e.g., ethnicity). Disadvantages: May focus on a variable that is not as important as the others.
Cluster sampling: Random selection of intact groups (e.g., whole classrooms). Advantages: Allows the researcher to conduct studies with naturally intact groups. Disadvantages: The researcher cannot study differences between individuals in intact groups.
Convenience sampling: Selection of a portion of the population that is convenient and accessible to the researcher. Advantages: Convenient; reduces costs and amount of effort in conducting a study. Disadvantages: Severely limits potential for generalizing the results back to the population. (A brief code sketch of the first three methods follows this list.)
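As a minimal sketch (hypothetical sampling frame and strata, purely for illustration), the first three quantitative methods might look like this in code:

```python
import random

random.seed(1)                               # reproducible illustration
population = list(range(1, 101))             # hypothetical frame of 100 participant IDs

# Simple random sampling: every individual has an equal chance of selection
simple = random.sample(population, k=10)

# Systematic random sampling: every nth person from a random starting point
interval = len(population) // 10
start = random.randrange(interval)
systematic = population[start::interval]

# Stratified (proportional) sampling: draw the same fraction within each subgroup
strata = {"urban": population[:60], "rural": population[60:]}   # hypothetical strata
stratified = []
for label, members in strata.items():
    stratified += random.sample(members, k=len(members) // 10)  # 10% from each stratum

print(simple, systematic, stratified, sep="\n")
```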
Sampling Methods Used in Qualitative Research
Some methods are the same as those used in quantitative research: Simple random sampling, Systematic random sampling, Stratified random sampling, Convenience sampling
Purposive sampling: based on their special knowledge or expertise about a group, researchers select participants who represent the population
Maximal variation sampling: a form of purposive sampling; researchers select participants who differ on some characteristic or trait to obtain a sample with maximum variation
Typical sampling: a form of purposive sampling; researchers choose participants with the intent of using those who represent what one may expect to be “normal”
Snowball sampling: researchers identify people who have relevant characteristics/traits and then ask those people to identify other people with the same characteristics/traits
Quota sampling: researchers establish characteristics of interest and then determine how many participants the researchers need in each characteristic/cell
Issues
Sampling error: the extent to which the sample does not accurately represent the population (occurs by accident; cannot be controlled)
Sampling bias: occurs when the researcher actively selects a sample that differs from the target population (can be controlled)
Both affect external validity (generalizability)

Instruments

There are generally two types of approaches to scoring a formal measure
Criterion referenced: based on a predetermined level of performance (criterion)
Norm referenced: scores are interpreted based on the comparison of one group’s performance with other groups’ performance, all of whom represent a clearly defined population
Researchers must select appropriate instruments to measure the construct of interest. When researchers select inappropriate instruments, their results can be impacted by test bias, and they can experience problems interpreting their results. Historically, many instruments were designed for use with White, English-speaking, middle-class men. All professional ethics codes require that mental health professionals use culturally fair tests.
Reliability of Instruments
Reliability refers to the consistency/accuracy of an instrument. It is very important.
Test scores are determined by two factors: True score, Error
Reliability is reported as a correlation ranging from -1 to +1
Positive: as one goes up, the other goes up. Negative: as one goes up, the other goes down
0: there is no relationship between the scores obtained on any given administration. Generally, .80 or above is a strong indicator of reliability. (See the formula below.)
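In classical test theory terms (a standard formulation, added here for context), the observed score is the sum of a true score and error, and reliability is the proportion of observed-score variance that is true-score variance:

$$X = T + E, \qquad r_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = 1 - \frac{\sigma^2_E}{\sigma^2_X}$$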
Ways to Determine Reliability
Test-retest: Tests the consistency of scores over time. Compares scores from multiple administrations of the same instrument to the same person.
Alternate form: Measures the consistency of a measure based on content. Compares two different forms of the same instrument to ensure they measure the same thing.
Inter-rater: Measures the consistency of ratings across different raters. Measures whether a scoring key/manual is developed and used consistently.
Internal consistency: Measures whether items in a measure are correlated; measures how consistent responses are within a measure (e.g., Cronbach’s alpha; see the sketch below).
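A minimal sketch of one internal-consistency estimate (Cronbach's alpha) computed on made-up item responses; the data below are purely hypothetical:

```python
import numpy as np

# Hypothetical responses: 5 participants x 4 Likert-type items
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 4],
    [3, 3, 3, 2],
    [4, 4, 5, 5],
])

k = items.shape[1]                            # number of items
item_vars = items.var(axis=0, ddof=1)         # variance of each item
total_var = items.sum(axis=1).var(ddof=1)     # variance of participants' total scores

# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")      # values around .80+ suggest strong consistency
```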
Validity of Instruments
Validity is how well an instrument measures what it purports to measure
Construct validity: the extent to which an instrument measures the construct of interest
Content-related sources: concerns the extent to which responses to test items represent a particular content
Convergent/divergent sources: concerns the extent to which scores on a test correlate with other tests that measure the same construct (convergent or concurrent) or different constructs (divergent)
Criterion-related sources: concerns how well a test predicts outcomes based on a particular behavior or skill
Approaches to Measuring the DV
Quantitative: Direct observation and behavioral measures, Self-report and inventories, Ratings of others’ behavior, Physiological approaches, Interviews
Qualitative: Direct observation, Interview, Triangulation methods, Documents, Audiovisual materials
Direct observation: Trained observers or scorers are used to evaluate behavior. Strengths: Measurement is more objective. Limitations: Possibility of bias among the observers/raters.
Self-report inventories: Rate the extent to which an identified behavior, attitude, or feeling is present. Strengths: Ease of time and administration; do not require extensive training; achieve access to non-observable events. Limitations: Possibility of distortion bias by the responder attempting to achieve socially desirable responses; the responder may not be aware of certain feelings, attitudes, or behaviors.
Ratings: Use of some standardized method of scoring the behavior of others. Strengths: Measurement is not time-consuming; increases the accuracy of the measurement. Limitations: Rater may interject own biases into the process.
Physiological methods: Assessment of responses such as heart rate, ECG, blood pressure, GSR, EEG, and immune system response. Strengths: Measurement is objective and accurate. Limitations: Method is generally time consuming; measuring the variable of interest is expensive.
Interviews: Interviewers ask respondents a set of questions that may involve brief or detailed responses. Strengths: Flexibility in the way information and data are collected; in-depth information may be collected. Limitations: The method is time consuming and costly; it is difficult to achieve a standard approach to scoring responses.
Procedures
The procedures section should be a step-by-step description of how the researcher(s) conducted the study
Also gives information to practitioners about how to implement the treatments in the target population
Researcher(s) should describe (in chronological order) every step of the study: Recruitment of research assistants, Recruitment of participants, What the participants experienced (Informed consent, Assignment to groups/conditions, Administration of study materials, Any interventions/treatments, Debriefing)

Statistical Conclusion Validity

Statistical Conclusion Validity: The extent to which a relation is shown and the extent to which the experiment detects effects if they exist. Basically, how “correct” or “reasonable” are the conclusions we are drawing about the data.
Important Concepts
Alpha (α): the probability of rejecting the null hypothesis when that hypothesis is true (risk of a Type I error)
Beta (β): the probability of accepting the null hypothesis when it is false (risk of a Type II error)
Power (1−β): the probability of rejecting the null hypothesis when it is false (the likelihood of finding an effect if it actually exists)
Effect Size: the size/magnitude of the difference or relationship (see the formula below)
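One widely used effect-size index (Cohen's d, a standard formula rather than anything specific to this sheet) expresses a mean difference in pooled standard-deviation units; by convention, d of roughly 0.2 is small, 0.5 medium, and 0.8 large:

$$d = \frac{\bar{X}_1 - \bar{X}_2}{s_{pooled}}, \qquad s_{pooled} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$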
Threats: Variability in the Procedures, Subject Heterogeneity, Unreliability of the Measures, Low Statistical Power, Multiple Comparisons and Error Rates
Validity Threats
Variability in the Procedures: Were the procedures administered consistently across groups?
Unreliability of the Measures: The measure may have characteristics that foster error or variability. Performance on the measure may vary widely from item to item within the measure because items are not equally clear or consistent in what they measure.
Subject Heterogeneity: Variation among participants. The more diverse your sample is, the less likely it is that you will find a difference between groups. (Address this by choosing heterogeneous samples, but ensure that the effect of selected participant characteristics can be evaluated in the design.)
Low Statistical Power: Affected by alpha level (Type I error), sample size, effect size, and error/noise. The most straightforward way of increasing power is to increase sample size.
Multiple Comparisons and Error Rates: More tests = more chance of a Type I error (see the worked example below).
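A quick worked example of why error rates inflate (assuming independent tests, each run at α = .05): the chance of at least one Type I error across m tests is

$$1 - (1 - \alpha)^m, \quad \text{e.g., for } m = 10: \; 1 - (0.95)^{10} \approx 0.40$$

so ten uncorrected comparisons carry roughly a 40% familywise chance of a false positive, which is why corrections such as Bonferroni (testing each comparison at α/m) are applied.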

Results

Results are presented mathematically. Generally both descriptive and inferential statistics are used.
Descriptive statistics: statistics that describe the sample; not connected to understanding or generalizing back to the population. Continuous data are most often presented using the mean and standard deviation; categorical data are usually presented using frequencies and percentages. (A brief sketch follows below.)
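A minimal sketch of those two presentations on made-up data (the scores and labels below are hypothetical):

```python
import statistics
from collections import Counter

# Hypothetical sample data
anxiety_scores = [12, 15, 9, 20, 14, 11, 16]   # continuous variable
gender = ["F", "M", "F", "F", "M", "F", "M"]   # categorical variable

# Continuous data: mean and standard deviation
mean = statistics.mean(anxiety_scores)
sd = statistics.stdev(anxiety_scores)
print(f"Anxiety: M = {mean:.2f}, SD = {sd:.2f}")

# Categorical data: frequencies and percentages
counts = Counter(gender)
for category, n in counts.items():
    print(f"{category}: n = {n} ({100 * n / len(gender):.1f}%)")
```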
Results in Quantitative and Single-Subject Studies
Inferential statistics: statistics that are designed to make it possible to make inferences about the larger population and generalize back to the target population
Terms: Normality: data are normally distributed; data fit a normal bell curve. Homogeneity of variance: whether the variability of scores is similar or different across the comparison groups. Independence: there should be no relationship between the data points (between-subjects data are independent; repeated-measures/within-subjects data are not independent). Linearity: when graphed, the data (on average) form a line instead of a curve.
Results in Qualitative Research
The results section of a qualitative study is very different from the results section of a quantitative study. The results section in a qualitative study is characterized by descriptions of categories and themes, with quotations.
Data are reported using language instead of numbers. The authors should describe using some kind of qualitative data analysis procedure to explain how they coded the data.
The Discussion Section
Restate the purpose of the study. Present the results in plain English and in the context of previous literature. Provide alternative explanations when the results are not consistent with expected outcomes or with the theories addressed (this usually includes identification of threats to validity). Identify limitations of the study and how limitations impact interpretation of results. Discuss the implications (i.e., practical applications) of the results for practice, training, policy, theory, etc. Offer suggestions for future research.

Quantitative Research Designs

Quantitative research = research that is based on measurement and quantification of data (i.e., all data turned into numbers)
Variables
Independent variable: An event, condition, or measured attribute or characteristic that the researcher manipulates
Dependent variable: Changes as a result of changes in independent variables
Extraneous variable: Uncontrolled and/or unknown variables that can impact the dependent variable.
Control variable: An extraneous variable that the researcher has identified and addressed in the method.
Three methods for controlling for extraneous variables: Build the variable into the design and control for its effects (make it an independent variable), Remove possible effects of the extraneous variable (sample from one level of the variable), Control through statistical methods after the study has been conducted
True-Experimental Designs
Two characteristics: Random assignment to groups, Manipulation of the independent variable
Types: Pretest-posttest equivalent group, Posttest-only group, Solomon Four-Group Design
Symbols used in describing research designs: R = random assignment of participants to groups, X = exposure of the group to treatment or manipulation of a targeted condition, O = observation or measurement of the DV (see the example below)
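For example, in that notation the pretest-posttest equivalent (control) group design is conventionally written as two randomly assigned rows, one receiving the treatment and one not:

R  O  X  O   (treatment group: pretest, intervention, posttest)
R  O      O   (control group: pretest, no intervention, posttest)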
Quasi-Experimental Designs
Designs in which the researcher cannot randomly assign participants to conditions
Types: Pretest-posttest design, Time series design, Multiple time series design
Preexperimental Designs
Have no random assignment, but do have an intervention
Includes two types of designs: One-group pretest-posttest design, Static group comparison
Descriptive Designs
These designs involve no random assignment to groups and no manipulation/intervention. These designs are used to describe characteristics or the effects of events for an identified population.
Four approaches: Survey, Observational, Correlational, Causal comparative
Single-Case and Single-Subject Research Designs
Best suited to applied research. Types: AB design, ABA design, ABAB design, Multiple baseline design, Alternating treatment design (AB1AB2 or ABAC). A = baseline, B = intervention.

Qualitative Research Methods

Five qualities: A naturalistic approach: Involves the collection of data in the natural environment. The use of descriptive data: Data are collected and presented through language and pictures. An emphasis on process: Focus on the way things are done rather than the outcomes/accomplishments. An inductive approach: Researchers explore what comes up, as opposed to the deductive approach, in which researchers hypothesize what is there and look for support for the hypothesis. A focus on meaning: Understanding the meaning that certain things in the environment have for the people in that environment.
Research Designs: Case study, Multiple case study, Ethnographic, Grounded Theory, Phenomenological, Historical
Six steps in using a case-study approach: Establishing the boundaries of the case, Identifying themes of emphasis, Focusing on specific patterns of data, The use of triangulation in data interpretation, Considering alternative views, Determining the appropriate generalizations from the case
Ethnographic: Focusing on investigating cultural patterns in behavior and commonalities of a given culture; determining how members of a culture define and derive meaning from the experiences and events occurring within that culture; studying these cultural behaviors and patterns in their natural environment. Participant observer: the investigator enters the social system and lives among those he or she is studying. Memoir/ethnographic genre: based on his or her experience as a participant observer, the researcher writes about the experience as if it were a memoir.
Grounded Theory: The primary approach is the dynamic interaction of identifying categories, which are analyzed and reconstituted into more complex ones with each continuous level of analysis
Phenomenological: The focus is to understand how humans develop a way of knowing the world. The intent is to describe phenomena as they happen. This design helps researchers understand an individual’s personal perspective.
Historical: The purpose is to systematically understand past events and phenomena to obtain a clearer understanding of current issues. May involve the use of systematic methods (e.g., diaries, oral records, relics). Four Steps: Define the problem or develop a hypothesis, Identify potential sources of historical data, Evaluate the historical sources, Report and summarize the results.

Writing

Method
Design: Need at least one sentence describing (a) what type of PE you are conducting and (b) the design of the study.
Participants: Any/all descriptors or characteristics required of the participants go here.
Measures: List and describe each measure to be used. Measures are realistic and/or already exist. Usually each one gets its own subheading and a brief description. Be sure to describe each one with as much detail as possible. If you are using a measure that has already been developed, be sure to cite the article you took it from. (Normally you cite the article where the measure was published, but that’s an extra step I’m not asking you to do.)
Procedure: Give the step-by-step process of doing the study, starting with participant recruitment, like it’s a recipe. If there is a treatment/intervention, be sure to describe it in detail.
Throughout, use future (“the participants will...”) or conditional (“the participants would...”) tense. Either is fine, just be consistent.
Anticipated Findings: Write a few sentences describing what the findings would be if you were to actually conduct this study.
Discussion
Rationale For Selected Design: Explain how/why the design used in the Method was selected (e.g., how it fits with previous research, RQ/hypotheses, referral question).
Strengths and Limitations: Should be about the selected design(s) in general (e.g., limited external validity for a qualitative study) AND about your study in particular (e.g., participants might begin to fill out measures carelessly if they are asked to do them every week for 6 months). Should include threats to validity, potential researcher & participant bias, ethical/multicultural considerations, etc.
Implications of Anticipated Findings: Say what the implications of your anticipated findings would be for the agency/stakeholders. Make suggestions for future research.

Common Factors

Background/History: In a 1952 study, Hans Eysenck concluded that talk therapy had no effect. Thirty years later, Smith, Glass & Miller (1980) found that at the end of treatment, the average treated person is better off than 80% of the untreated sample. Since the mid-80s, the number of therapists has increased 275%. The number of DSM diagnoses went from 66 to 286 in the DSM-IV. There are over 200 therapy models (a 600% increase from the 1960’s!) and over 400 therapy techniques. Comparative studies of treatment type routinely show none to be significantly superior to the others (Dodo Bird Hypothesis: they all can be successful). So, what makes therapy work? If they all work equally well, there must be things common to all of them that are making them work. Enter Common Factors...
Common Factors: First proposed by Saul Rosenzweig in 1936, who argued that the effectiveness of different therapy approaches had more to do with their common elements than with the theoretical tenets on which they were based.
Ingredients of a Healing Relationship: An emotionally charged, confiding relationship between healer and client. A healing setting. A rationale, conceptual scheme, or myth that provides a plausible explanation for the client’s symptoms/distress and prescribes a ritual or procedure for resolving them. Active participation by both client and healer in that ritual/procedure that both believe to be the means of restoring the client’s health.
Common Factors In Therapy
Client Variables/Extratherapeutic Factors (account for 40% of variance in outcome): Client participation (goals, motivation, expectation that it will help). Experience of the client (time/place to focus on self, personality of the therapist, having someone who cares and listens, having someone who encourages and gives advice). Make use of the natural healing process (interactive, think together).
Therapeutic relationship (accounts for 30% of variance in outcome): The relationship is formed early (working alliance in Session 3 is predictive of outcome!). Clarify expectations/perceptions. Solicit feedback on the helpfulness of the sessions. Ruptures occur (so discuss them directly). Dispositional characteristics influence the relationship; awareness of self helps.
Placebo, hope, expectancies (account for 15% of variance in outcome): Pathways thinking: thoughts about the ability to produce one or more workable routes. A therapeutic ritual/procedure; the therapist’s confidence in the method enhances client belief in potential healing. Agency thinking: thoughts about the ability to begin and continue movement toward goals. An emotional, confiding relationship with a therapist who is hopeful and determined to help. A therapeutic setting that reinforces the perception of the therapist as a helper who is effective. A way to increase hope is to help the client find a new goal, pathway, or sense of agency.
Model/Techniques (account for 15% of variance in outcome): Little evidence to support technique-based training; remember the Dodo Bird. (Exceptions include exposure therapy for some anxiety disorders and behavioral treatments for sexual dysfunction.) Therapists who are flexible in their responses/interventions to clients have increased potency. Skill and experience matter.
Blending Common Factors and Empirically-Supported Treatment (EST)
Disciplined Inquiry Model (Peterson, 1991): Assessment of the client based on theory/guiding conception. Assessment used by the practitioner to create a specific formulation of the client’s situation, often involving a reframe of the client’s situation. Assessment and formulation rely on practitioner knowledge of relevant empirical research and a mental storehouse of similar cases. Formulation leads to a treatment plan; research is important to choosing this. Monitoring process used, and formulation, plan, and treatment altered as needed. Case added to the knowledge base of the practitioner.
Local Clinical Scientist (Stricker & Trierweiler, 1995): Bring the attitudes and knowledge base of a scientist to clients’ problems. (Observe, test hypotheses, reflect, conclude, repeatedly.) Understanding the “local situation” is at least as important as knowing something about clients or techniques in general.