How to solve the reproducibility crisis in Psychology

Travis Dixon | Curriculum, Research Methodology

Psychology is in crisis, especially social psychology, where the findings of hundreds of classic studies are failing replication. Here's an idea for how we can solve this problem.

The “reproducibility crisis” (or “replicability crisis”) is the term used to describe the recent discovery that many classic psychology studies are failing to have their results reproduced. In fact, the whole of psychology, especially social psychology, seems to be in a crisis regarding its credibility as a source of knowledge. Other fields, like economics and other sciences, are facing the same problem. But I think I have a simple solution.

The Problem

In psychology, studies are our key source of knowledge. But not every study is created equal, and there are many ways to assess the credibility of a study. One important way we can determine the reliability of a study is its replicability – the extent to which other researchers have copied the study's methodology and obtained the same or similar results.

A famous real-life example of a study lacking replicability is Cuddy’s study on the effects of power posing. This was a revolutionary and hugely popular study, sparking a book and lecture tours and making Cuddy a celebrity (with the help of her widely popular TED Talk). Because of that study, psychologists thought they “knew” that power posing could increase confidence. But the study has failed many replications – other psychologists just can’t get the same results (Read more: Science Daily).

Read more: 

  • 7 ways to evaluate a study (Link)
  • So you want to assess ecological validity? (Link)
  • So you want to assess population validity? (Link)

So our knowledge of power posing is actually very different now – based on the replications, psychologists would be more likely to say “we know power posing has no effect.”

But Cuddy’s research isn’t the only study failing replication. There is an ongoing project called “The Reproducibility Project” which involves hundreds of psychologists from around the world working together to replicate important studies. Findings in one report showed that only 50% (14/28) of the original studies had their results replicated (Source: The Atlantic). This is now a common finding, and it’s why we have a reproducibility crisis.

The Solution

So how can we solve this problem? I have what I think is a simple solution. It’s so simple that there must be a reason why it doesn’t happen already, so I urge someone with more first-hand research experience to pop a note in the comments explaining why. And if it does happen, post a link to the study.

Here’s the solution: replicate the study in the original publication. Simple.

Most original studies you’ll read have only one sample and one set of results recorded. Some articles may include multiple variations of the same experiment, but they’re variations; they’re not replications.

If you’re a researcher and you’ve found something worth writing about and trying to get published, why on earth would you not repeat the study with a second set of participants? Well, there are a few reasons, but I’m going to refute them all.

  • Excuse #1: “It takes more time to conduct a second trial of an experiment.”
  • Answer #1: Well, I don’t care. Don’t waste my time by telling me your latest discovery if you haven’t bothered to take the time to replicate the study yourself.


  • Excuse #2: “Because most studies happen on college campuses, it could be the samples that are the problem and the results aren’t generalizing to other groups of people (e.g. students at other colleges, in other countries or older people).”
  • Excuse #3: “Most studies happen in colleges. By the time you do a replication, the word may be out and students could talk about it. This ‘contaminates’ the sample and jeopardizes internal validity.”
  • Response: Collaborate with colleagues at other universities. Find another colleague or two or three (the more successful replications, the more reliable and credible the study!) and swap replications – you replicate theirs, and they replicate yours. Collaborating internationally can also solve the problem of WEIRD bias in samples (Western, Educated, Industrialized, Rich and Democratic).


  • Excuse #4: “There’s no fame or glory in replications.”
  • Response: With this new system, now there can be. Collaborators can have their names published on the original journal articles (or listed as sub-contributors, if professorial egos mean the glory and praise can’t be shared).


  • Excuse #5: “What if we spend all that time, money and effort and the study fails replication?”
  • Response: And that, kids, is why this reproducibility problem exists in the first place – researchers don’t want to run the risk of replicating their own study, because if they don’t get the same results, all their work was a waste of time.
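The arithmetic behind demanding built-in replication is worth spelling out. Here is a minimal simulation sketch, assuming a conventional 5% false-positive rate and a hypothetical effect that doesn't actually exist (like power posing, on the sceptics' reading): a fluke finding passes one study about 5% of the time, but passes a study *and* an independent replication far more rarely, because the probabilities multiply.

```python
import random

random.seed(42)

ALPHA = 0.05      # conventional significance threshold (assumed for illustration)
TRIALS = 100_000  # simulated research projects

def significant() -> bool:
    # With no real effect, a study comes out "significant" only by chance,
    # with probability ALPHA.
    return random.random() < ALPHA

# Proportion of fluke findings that would get published under each rule.
single = sum(significant() for _ in range(TRIALS)) / TRIALS
replicated = sum(significant() and significant() for _ in range(TRIALS)) / TRIALS

print(f"False positives surviving one study:          {single:.4f}")
print(f"False positives surviving study + replication: {replicated:.4f}")
```

Under these assumptions the replication requirement cuts the false-positive rate from roughly 5% to roughly 0.25% (0.05 × 0.05) – which is exactly why self-replication is an effective safeguard, and exactly why a researcher banking on a fluke would dread it.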


Perhaps deep down psychologists like Cuddy knew there was a chance of replication failure. After all, which young professor clamouring for fame doesn’t want to be the next Zimbardo, Milgram or Bandura? Sexy studies and exciting results are the way towards tenure, book deals, lecture tours, private gigs and celebrity status.

Or maybe the motivations are less selfish. Cuddy had a personal reason to want to study power posing and increasing people’s confidence. I imagine most psychologists end up specializing in their fields because they want to make a difference. They believe their results are going to help and they don’t want a pesky thing like replicability to stand in their way.

Researchers can easily safeguard their studies and help solve what I’d call a credibility crisis in psychology by simply being responsible for replicating their own studies.

But this solution may exacerbate another problem – publication bias. But that is a problem to be solved another day.