Quantitative Research Methods Glossary

Travis Dixon

The following is a pretty comprehensive list of key terms for research methods in psychology. Please post a comment/question if you notice double-ups, errors and/or omissions.

“True” Experiment: an experiment that takes place in a controlled environment in which the researcher/s manipulate the independent variable to create conditions (e.g. treatment and control groups) and measure the dependent variable. It aims to eliminate confounding variables and isolate the IV as the single factor affecting the DV, in order to determine cause and effect relationships between variables.

Animal Welfare: when studying animals, researchers should give them adequate food, water and shelter and generally look after them well. This prevents unnecessary suffering.

APA: American Psychological Association. This is an organization of psychologists in the United States. It publishes its own ethical guidelines and can review proposals for research to be conducted in America.

Bias: When researchers view or interpret behaviour in a way that supports their pre-existing ideas and/or beliefs.

Bidirectional ambiguity: This exists in correlational studies where the direction of the relationship is unknown. In a positive correlation, for example, as one variable increases so does the other. An inference is often drawn that variable A is influencing variable B. However, sometimes it is unclear which variable is influencing which. Here’s a hypothetical example: there is an increasing amount of violence on TV and violent crimes are also increasing. This could be because (a) the violent TV is causing more violent acts, or (b) because society is increasingly violent, more TV shows are depicting this violence. This is “bidirectional ambiguity”.

BPS: British Psychological Society. This is the British equivalent of the APA.

Brain Imaging Technology: technologies such as fMRI, MRI and PET are large pieces of machinery used to measure various aspects of the brain’s structure and activity. They can be used as part of an experiment, a longitudinal study or a case study.

Case Study: an in-depth investigation into an individual or group. A case study combines numerous research methods to gather the data.

Confidentiality: a participant’s private information or specific data in a research study should not be made public. Their data, if it is published, should remain anonymous.

Confounding Variable: any possible factor (other than the IV) that could have influenced the dependent variable, i.e. a variable that differs between conditions but was not manipulated by the researcher. Confounding variables reduce the ability to draw cause-effect conclusions.

Constancy of Condition: Ensuring that the conditions in which the experiment takes place are the same for all participants.

Contamination: When events from outside of the research study affect (i.e. contaminate) the results of the study itself.

Control Group: The group that receives no “treatment”, or that may receive a placebo. This allows the researchers to compare the results of the treatment and control groups to analyse the cause and effect relationship.

Correlational Study: collecting and analyzing data to try to find a relationship between two variables.
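
For those who like to see the numbers, here’s a minimal Python sketch of how a correlation between two variables might be calculated. The sleep and memory scores are made up for illustration and don’t come from any real study.

```python
# Hypothetical data: hours of sleep and memory test scores for ten participants.
from statistics import correlation  # available in Python 3.10+

sleep_hours = [6.0, 7.5, 8.0, 5.5, 7.0, 6.5, 8.5, 5.0, 7.5, 6.0]
memory_score = [14, 18, 20, 12, 17, 15, 21, 11, 19, 13]

# Pearson's r: a value near +1 or -1 suggests a strong linear relationship,
# while a value near 0 suggests little or no linear relationship.
r = correlation(sleep_hours, memory_score)
print(f"Pearson's r = {r:.2f}")
```

Remember that even a strong correlation doesn’t tell us which variable is influencing which (see bidirectional ambiguity above).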

Counter-balancing: Designing the experiment in a way that reduces confounding variables such as order-effects. It usually involves the splitting of the groups in some manner.
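
As a rough, hypothetical illustration, here’s a short Python sketch of one common counter-balancing approach: half the participants complete condition A then B, and the other half complete B then A, so practice and fatigue effects are spread across both conditions.

```python
import random

# Hypothetical participant IDs.
participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)

# Half the sample completes the conditions in the order A -> B,
# the other half in the order B -> A.
half = len(participants) // 2
order_AB = participants[:half]
order_BA = participants[half:]

print("A then B:", order_AB)
print("B then A:", order_BA)
```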

Data: Information gathered that provides feedback on a certain aspect or criterion.

Debriefing: telling the participants at the end of the study what the aims and results of the research were.

Deception: when participants are wrongly led to believe something in a research study that is untrue.

Demand Characteristics: this is when the participants form an opinion as to the aim of the study and their behaviour changes as a result. This is similar to the Hawthorne Effect, which is when people behave differently simply because they are being observed.

Dependent Variable: the factor that the researchers are measuring. It is the effect of the independent variable.

Double blind study: when the participants and the researchers are unaware who is in the treatment or control group.

Ecological validity: the extent to which the environment of the research and the task reflect real life. If the environment is too artificial, or the task is not natural, the study may lack ecological validity because the same results might not apply in real life.

Ethics: an ethic is a rule or principle that determines what is acceptable or unacceptable in regards to a particular area or field. It is a similar idea to morals, except ethics (in this context) applies to particular groups and professions (e.g. Psychologists). A moral depends on an individual’s beliefs and/or values; an ethic may change depending on the situation.

Euthanizing: putting an animal out of its misery by killing it. This is more humane than allowing its suffering to continue. Example: Hetherington and Ranson’s lesioning of the hypothalamus in rats.

Experiment: when the effect of one variable (the IV) on another (the DV) is measured so that a cause and effect relationship can be determined.

Experimental (Treatment) Group: In an experiment this is the group that receives the “treatment”. The treatment is often what the researchers have hypothesized will affect the DV.

External Validity: how accurately the data can be generalized to people outside the sample (e.g. the target population).

Field experiment: This is when the independent variable is manipulated in a natural environment (e.g. a train station or on the street).

Generalizability: the extent to which the results and conclusions of the research can apply to populations (i.e. people) outside of the sample involved in the research.

Independent Samples: when participants (or subjects) in the sample are divided into separate groups and they receive different treatments. Example: Schachter and Singer’s Two Factor Experiment.

Independent Variable: the factor that the researcher wants to investigate the effects of. The researcher will change this variable to create different conditions in the experiment to compare the different effects on the dependent variable.

Informed consent: before research is conducted, participants are made aware that they are being asked to participate in an experiment and they give their approval.

Institutional Review Board: A board is a group of people formed for an official purpose. An Institutional Review Board is formed to read, review and offer suggestions on psychological research proposals so that they meet the ethical standards of a relevant organization (e.g. APA, BPS).

Internal Validity: this refers to the quality of the actual research methodology, including the methods used to obtain the data. Basically, did the study investigate and measure what it originally intended to?

Inter-rater reliability: When more than one researcher collects the data and their data are similar. E.g. during Bandura’s Bobo doll research, more than one researcher observed the children and they compared their data to ensure that they had similar results. This reduces the potential for researcher bias.
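
Here’s a simple, hypothetical Python sketch (with made-up ratings, not Bandura’s actual data) of one basic way to check inter-rater reliability: the percentage agreement between two observers.

```python
# Hypothetical ratings from two observers coding the same eight children
# as "aggressive" (1) or "not aggressive" (0).
rater_a = [1, 0, 1, 1, 0, 0, 1, 0]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0]

# Percentage agreement: the proportion of cases on which the raters agree.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a) * 100
print(f"Inter-rater agreement: {percent_agreement:.0f}%")
```

More formal statistics, such as Cohen’s kappa, also take agreement by chance into account.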

Interview: This is when data is collected from participants by asking questions directly (i.e. through talking). It can be done in person, over the telephone or even through Skype (or other modern technology). Interviews can be structured, semi-structured, narrative or unstructured. (HL students will learn more about these.)

Justification: in animal studies, there needs to be a clear and worthwhile purpose to test on animals, especially if harm or death will be inflicted upon the animals.

Laboratory experiment: when the independent variable is manipulated in a controlled environment with the aim of eliminating possible confounding variables and establishing a cause and effect relationship.

Longitudinal Study: this is any study that is conducted over a long period of time. Case studies and observations are typically the methods used during a longitudinal study.

Manipulate: the researcher creates different conditions in which the independent variable is different in each one, while the way the dependent variable is measured stays the same. For example, in an experiment investigating the effects of sunlight on plant growth, the researchers would “manipulate” (i.e. change) the level of light for different groups of plants and would measure their growth (DV) to see the difference.

Matched Pairs: When participants are matched based on a particular characteristic and then randomly assigned to different conditions in the experiment. For example, Bandura used matched pairs by pairing the children up based on their pre-existing aggressive tendencies and then randomly allocating them to one of the conditions of the Bobo doll experiment.
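
As an illustration only (the names and scores are invented, and this is not Bandura’s actual procedure), here’s a short Python sketch of matching participants on a pre-test score and then randomly allocating each member of a pair to a condition:

```python
import random

# Hypothetical pre-test aggression scores for eight children.
children = {"Ann": 3, "Ben": 9, "Cal": 5, "Dee": 8,
            "Eli": 2, "Fay": 7, "Gus": 4, "Hana": 6}

# Sort by the matching variable and pair adjacent children,
# so the two children in each pair have similar scores.
ranked = sorted(children, key=children.get)
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

# Within each pair, randomly allocate one child to each condition.
groups = {"model": [], "control": []}
for pair in pairs:
    random.shuffle(pair)
    groups["model"].append(pair[0])
    groups["control"].append(pair[1])

print(groups)
```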

Maturation: This occurs when the participants change in some manner during the course of the experiment, which may influence the results. For example, participants may get better at doing a particular skill that is tested in the experiment simply because they have had more time to practice.

Meta-Analysis: this is not technically a research method in itself. Researchers collect data from numerous studies that have already been conducted and compare the results to draw conclusions.
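
As a very rough, hypothetical sketch of the idea (real meta-analyses use more sophisticated statistics), a simple sample-size-weighted average effect size could be pooled across studies like this in Python:

```python
# Hypothetical effect sizes (Cohen's d) and sample sizes from five studies.
studies = [(0.40, 30), (0.55, 120), (0.20, 45), (0.65, 80), (0.35, 60)]

# A simple sample-size-weighted average of the effect sizes; published
# meta-analyses typically use more careful weighting (e.g. inverse variance).
weighted_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(f"Pooled effect size: d = {weighted_d:.2f}")
```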

Mundane reality: A study is said to have “mundane reality” when the tasks the participants are asked to perform are reflective of real life or realistic behaviours. Does Asch’s study have mundane reality?

Natural experiment: A natural experiment has a naturally occurring independent variable, i.e. one that existed before the study.

Observer Bias: Researchers who are observing behaviour may interpret behaviour differently based on their pre-existing beliefs, ideas and/or values. E.g. Two observers may view “violent” behavior differently because of their views on what “violence” is.

Observer-Expectancy Effect: This is when the researcher subconsciously influences the participant/s behaviour because they have a particular set of expectations about what the results of the study should show.

Opportunity sample: using participants who happen to be available at the time the research is being conducted (this is also called convenience sampling).

Order effects: if using repeated measures design, sometimes the order in which a participant does a task may alter the results. For instance, they may get better with practice and this could disrupt the results, or they remember something from the first condition that may alter their results. This is similar to maturation, where participants get better on the second or third trial simply because they have practiced the skill.

Participant Expectations: This is also known as the “expectancy effect” which is when participants may guess or have figured out the researcher/s aim of the study and may act in a way that they think they are expected to act.

Participants: The individuals involved in the study (sometimes referred to as ‘subjects’, especially in older studies).

Placebo Effect: The effect caused by administering a placebo.

Placebo: a ‘treatment’ that has no physiological effect, except for that which is caused by the recipient’s belief of its effects. Example: red light placebo group in Avery’s melatonin experiment.

Purposive sample: directly asking people who the researcher believes will provide the best data.

Quasi-experiment: when there is an independent and a dependent variable but the confounding variables cannot be controlled or the IV is not manipulated by the researchers.

Questionnaire: These are a series of written questions that participants have to answer. They may use open or closed questions, or a combination of both.

Random allocation: when participants in the sample are chosen to be in the treatment or control group by a systematically random method.
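
Here’s a minimal, hypothetical Python sketch of randomly allocating a sample into treatment and control groups:

```python
import random

# Hypothetical sample of ten participants.
sample = [f"Participant {i}" for i in range(1, 11)]
random.shuffle(sample)

# Split the shuffled sample down the middle: every participant has an
# equal chance of ending up in either group.
treatment_group = sample[:5]
control_group = sample[5:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```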

Random sample: where every member of the target population has an equal chance of participating in the study and the participants are chosen using a systematically random method.
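
A hypothetical Python sketch of drawing a random sample, where every member of a made-up target population has an equal chance of being selected:

```python
import random

# Hypothetical target population: every student at a school.
target_population = [f"Student {i}" for i in range(1, 501)]

# Each member of the population has an equal chance of being chosen.
sample = random.sample(target_population, k=30)
print(sample)
```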

Reliability: how consistent and replicable a research study’s results are, i.e. whether the same results would be obtained if the study were repeated.

Repeated Measures: when participants in an experiment experience all the conditions (i.e. all changes in the IV).

Representative Sample: A sample that reflects the characteristics of the target population. That is to say, there is a strong probability that the results can be accurately generalized to the target population.

Research Method: A systematic and structured way of collecting and analyzing data.

Researcher Bias: when a researcher’s own views, beliefs or opinions influence the research method. They may influence the research at any stage, such as design, data gathering or analysis. This is a major issue in qualitative studies but also in quantitative ones.

Right to withdraw: at any time participants should be allowed to leave or stop participating in a research study.

Sample Size: How many participants are in the sample. Written as (n) in reports.

Sample: the group of participants that are selected from the target population to study.

Self-selected sample: when people sign-up by themselves to be participants in a study.

Single blind study: When the participants don’t know if they are in the treatment or control group.

Social desirability effect: People want to be liked and so they may alter their behavior so they are not viewed negatively or as foolish. For example, in an interview about stereotypes, participants may not be honest about the stereotypes they have for fear of being viewed as racist.

Stratified Sampling: When the demographic of the sample accurately reflects that of the target population. Samples can be stratified in terms of age, gender, ethnicity, etc.
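
Here’s a hypothetical Python sketch of stratified sampling: participants are drawn from each stratum in proportion to that stratum’s share of a made-up target population, so the sample’s make-up mirrors the population’s.

```python
import random

# Hypothetical target population grouped into strata by year level:
# year 1 makes up 50%, year 2 makes up 30% and year 3 makes up 20%.
strata = {
    "year_1": [f"Y1-{i}" for i in range(1, 101)],  # 100 students
    "year_2": [f"Y2-{i}" for i in range(1, 61)],   # 60 students
    "year_3": [f"Y3-{i}" for i in range(1, 41)],   # 40 students
}

sample_size = 20
total = sum(len(group) for group in strata.values())

# Sample from each stratum in proportion to its share of the population
# (rounding may shift the exact totals slightly).
sample = []
for name, group in strata.items():
    n = round(sample_size * len(group) / total)
    sample.extend(random.sample(group, n))

print(sample)
```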

Survey: These can be either in interview or questionnaire form. Participants are given open or closed questions (e.g. via email, telephone or on paper) and their results are gathered and collated. A small-scale survey is anything under 300 participants, while anything over this is considered a large-scale survey.

Target Population: the total population that you are interested in researching. The target population is usually too large to test everyone, so a sample is taken from that group.

Test-retest reliability: in order to test the reliability of results, one method is to replicate (copy) the same experiment again to see if the same results occur. If the experiment is repeated and the same results occur, it is said to have test-retest reliability.
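
As a hypothetical illustration in Python (the scores are invented), test-retest reliability can be checked by correlating participants’ scores on the same measure taken at two different times:

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical scores from the same participants on the same memory test,
# taken two weeks apart.
test_1 = [12, 18, 15, 20, 9, 14, 17, 11]
test_2 = [13, 17, 15, 21, 10, 15, 16, 12]

# A high positive correlation between the two sets of scores suggests the
# measure produces consistent (reliable) results over time.
r = correlation(test_1, test_2)
print(f"Test-retest correlation: r = {r:.2f}")
```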

Treatment group: a section of the sample in an experiment that receives a type of treatment from the researcher.

Validity: how accurately the research measures what it was intended to measure, so that meaningful conclusions can be drawn from the results.