Internal Validity: And why I don’t teach it…

Travis Dixon | Curriculum, Internal Assessment (IB), Research Methodology

With no prescribed learning outcomes, we now have the freedom to devise our own. (via: bigstock)

I’d love to hear how you feel about my rationale for not teaching students how to evaluate studies based on internal validity. There is one exception, however: their IA. I only introduce the concept of internal validity during the analysis of their IA results and procedures, as this is the only study about which I expect them to make valid evaluative remarks regarding internal validity.

Why do I feel this way? Here’s how I explain it in A student’s guide…

The internal validity of a study refers to the extent to which the study actually demonstrates the relationship it intended to. Here, explaining a limitation of a study could involve identifying possible confounding variables that might have affected the results. For example, if I conducted my Rememberol™ study (an example I use throughout the introduction) on students and measured its effect on test scores in English, a critique of internal validity would involve investigating my methodology. Were the questions in my test fair? Was the test a good measure of memory? Were all extraneous variables controlled for, or might there be some other variables influencing my results?


I like to use a fictional drug (Rememberol) to teach basic research concepts in the introductory lessons and during the study of quantitative methods and the IA.

It is far easier to explain the strengths of the studies we use in this course in relation to internal validity than it is to explain their potential limitations. After students learn about controls later in the course, when they are designing and conducting their own experiment, they’ll be able to identify controls more easily in other examples of research. When asked to evaluate a study, therefore, they’ll be able to see how researchers aimed to ensure internal validity by employing such controls. This allows them to explain the strengths of the study, which is an important part of evaluation.

It is extremely difficult to explain the limitations of the studies used in this course in terms of internal validity, for a number of reasons. First of all, the studies we use have been published in peer-reviewed journals, carefully designed by experienced professional researchers and closely scrutinized by other psychologists. It would be very difficult for a first-year high school psychology student to notice a limitation that someone else hadn’t already noticed. Therefore, the only real critique of internal validity that students could be expected to offer is one that was probably proposed by someone else (e.g. in a textbook). In this case, students are not demonstrating their own critical thinking; they are demonstrating their ability to regurgitate someone else’s.

This is why ready-made evaluations cannot be found in my textbook. Telling students what others have concluded about the strengths and limitations of theories and studies would only increase the content of the course and limit the development of students’ own thinking skills. I feel it’s far better for students to figure out how to think critically for themselves than to memorise all the possible statements of strengths and limitations of all the research they need to understand; there is simply too much content for that. Besides, students can provide excellent evaluations by focusing on external validity only, which they can apply far more easily (I’ll post on this tomorrow).

A second reason why I strongly discourage trying to evaluate studies based on internal validity is that it greatly increases the amount of methodology students have to remember if they are going to use it in an exam. In this course students read about new studies almost every lesson. Evaluating internal validity would require in-depth descriptions and scrutiny of each study’s methodology, yet our primary purpose in using studies in the first place is to develop conceptual understandings of significant relationships between variables, behaviours and cognitive processes, so we only need to look at methodology in as much detail as is required to show that relationship. Evaluating research is secondary to developing those conceptual understandings, so internal validity simply adds a lot of potentially unnecessary content.


In my curriculum design decisions I try to consider multiple factors, but the main ones are time and student workload. I aim to have fewer building blocks and stronger relationship chains. Internal validity involves many new building blocks.

Students will be expected to critically evaluate a study based on its internal validity when they conduct their own experiment for the internal assessment. It is during that chapter, therefore, that they will learn about evaluating methodology in terms of internal validity, and it is in their report that they will be expected to demonstrate this ability. Higher Level students may also demonstrate it in the third question of Paper Three.

So that’s why I don’t teach students to focus on internal validity in the research we investigate. The sheer amount of methodological detail required, and the time it would take to explore internal validity in peer-reviewed, replicated experiments, would simply add superfluous content. If we had fewer complex concepts and more time, I would love to teach students how to analyze a study’s internal validity, but frankly, I don’t think we do.

I’d love to hear your thoughts…