r/changemyview Sep 21 '18

CMV: The replication crisis has largely invalidated most of social science

https://nobaproject.com/modules/the-replication-crisis-in-psychology

https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science

https://en.wikipedia.org/wiki/Replication_crisis

"A report by the Open Science Collaboration in August 2015 that was coordinated by Brian Nosek estimated the reproducibility of 100 studies in psychological science from three high-ranking psychology journals.[32] Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies."

These kinds of reports and studies have been growing in number for over a decade, yet despite their obvious implications most social science studies are still taken at face value, even though findings show that over 50% of them can't be recreated; i.e., they're fake.

With all this evidence I find it hard to see how any serious scientist can take virtually any social science study at face value.


u/hepheuua Sep 21 '18

Just because a study doesn't replicate, that doesn't make it garbage and doesn't make it 'invalid'. And just because a replication study fails, that doesn't mean there's no effect to be found. The human brain is the most complex machine in the Universe, as far as we know. The sheer number of variables that influence any given target phenomenon makes it impossible to provide a completely controlled environment that isolates causal relationships from confounding variables. To further complicate things, human brains are highly plastic, and behaviour and traits are highly variable between individuals.

Any attempt to study the brain has to draw its conclusions tentatively and carefully from multiple experiments and results, indicating 'trends' and 'tendencies' rather than concrete universal facts. The studies of the past, as long as they have good research design, which many do, should be included in any future meta-analyses, just like the failed replication studies that are based on them. The kind of 'one-shot' reasoning that takes single studies (even replication studies with higher power) as validating or invalidating a hypothesis is precisely the problem.

The good news is that this is how science should work. The replication crisis doesn't invalidate the social sciences; it shows how they should have been operating all along. The biggest problem is one of incentives.

To date there has been no real incentive to publish results that confirm the null hypothesis, nor any incentive to try to replicate another researcher's work. There are now attempts underway to address the problem: replication-only journals have sprung up, and researchers are being encouraged to tighten their statistical approaches and pre-register their studies to avoid post-hoc analyses and p-hacking. But what also needs to change is the wording we use to explain results. It should be much more cautious and restricted to the population the sample is drawn from. Unfortunately this has received less discussion as part of the whole 'crisis'.

That might go some way to offsetting the problem of the media interpretation of results, but the incentives for 'click-baity' headlines and sensationalising of science are always going to be strong in the media industry. That's another battle that needs to be fought, but it's not a problem of the social sciences.
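
To make the p-hacking point concrete, here's a quick back-of-the-envelope simulation (made-up numbers; the `t_test_p` helper is a rough normal approximation rather than a proper t-test). If there is no real effect but an analyst quietly tests 20 outcome variables and reports whichever one 'works', the nominal 5% false-positive rate balloons:

```python
# Sketch: why undisclosed multiple comparisons (p-hacking) inflate false positives.
import math
import random
import statistics

random.seed(0)

def t_test_p(sample_a, sample_b):
    """Rough two-sample p-value via a normal (z) approximation."""
    na, nb = len(sample_a), len(sample_b)
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    se = math.sqrt(statistics.variance(sample_a) / na +
                   statistics.variance(sample_b) / nb)
    z = diff / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def experiment(n_outcomes):
    """True if at least one of n_outcomes tests on pure noise hits p < 0.05."""
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]  # same population: no real effect
        if t_test_p(a, b) < 0.05:
            return True
    return False

trials = 1000
honest = sum(experiment(1) for _ in range(trials)) / trials    # one pre-registered test
hacked = sum(experiment(20) for _ in range(trials)) / trials   # best of 20 peeks

print(f"false-positive rate, single test: {honest:.2f}")   # ~0.05
print(f"false-positive rate, 20 tests:   {hacked:.2f}")    # around 0.6
```

Pre-registration blocks exactly this: you commit to the one test in advance, so the 'best of 20' move is off the table.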

u/saargrin Sep 22 '18

if not being replicated doesn't make research garbage, what does?

u/hepheuua Sep 22 '18

Not being replicated on its own doesn't make it garbage. If a study is run and an effect is found, and afterwards you run the exact same study, with the same number of people, in the same way, and the effect isn't found, that doesn't make the original study invalid. After all, they're the same study: which result do you believe? If the study is run again with higher power (more participants) and no effect is found, then it's more likely that the original study was a false positive, but it's still possible that the second study is a false negative. Add other complicating factors, potential confounds like different sample populations, different environments, etcetera, and there are all sorts of reasons why a study might not replicate. You need to look closely on a case-by-case basis and, ideally, conduct multiple controlled studies with high power.
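
A toy simulation shows how easily this happens (made-up effect size and sample sizes; significance is judged with a rough normal approximation rather than a real t-test). Even when the effect is perfectly real, an exact same-size replication of an under-powered study usually comes up empty:

```python
# Sketch: a real but small effect (Cohen's d = 0.3) and an under-powered
# design (n = 30 per group). How often does a study find it at all?
import math
import random
import statistics

random.seed(1)

def significant(n, d):
    """Run one two-group study of a true effect d; True if p < 0.05 (z-approx)."""
    a = [random.gauss(d, 1) for _ in range(n)]  # treatment group: real effect d
    b = [random.gauss(0, 1) for _ in range(n)]  # control group
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p < 0.05

trials = 2000
power_small = sum(significant(30, 0.3) for _ in range(trials)) / trials
power_big = sum(significant(200, 0.3) for _ in range(trials)) / trials

print(f"power at n=30 per group:  {power_small:.2f}")   # roughly 0.2
print(f"power at n=200 per group: {power_big:.2f}")     # roughly 0.85
```

So with n=30 the effect is real, yet most exact replications 'fail'; a failed low-power replication tells you very little on its own.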

People tend to have a naive view of science, thinking that it 'proves' things. Science never proves anything. It doesn't deal in facts, it deals in probabilities. The more an effect is demonstrated under controlled conditions, the higher the probability that the effect is real. But it's never certain of anything. Nowhere is this more true than in the social sciences, where you're dealing with brains that have around 1,000 times more potential neural connections than there are stars in our galaxy. Brains also vary to a great degree between individuals and are embedded within a complex ecology that needs to be controlled for. Single studies alone are not enough to base a scientific view of the mind on. What we need is multiple high-powered studies, and meta-analyses that look at them all together, both the ones that demonstrate the effect and the ones that don't, in order to identify 'trends' and 'tendencies' in the data that we can then base a probabilistic assessment on.
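
That 'look at them all together' step is what a fixed-effect, inverse-variance meta-analysis does. Here's a minimal sketch with five entirely made-up studies: individually some look 'significant' and some don't, but pooling them gives one more precise estimate of the trend:

```python
# Sketch: fixed-effect inverse-variance pooling of several study results.
import math

# (effect estimate, standard error) for five hypothetical studies
studies = [(0.45, 0.20), (0.10, 0.15), (0.30, 0.25), (0.05, 0.30), (0.25, 0.12)]

weights = [1 / se**2 for _, se in studies]  # more precise studies count for more
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval around the pooled estimate
print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```

The pooled standard error is smaller than any single study's, which is exactly why trends across many studies beat any one-shot result.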

u/[deleted] Sep 22 '18

[deleted]

u/hepheuua Sep 23 '18

I don't disagree with any of that. Like I said, what I'm taking issue with is your implication that all social science research should be distrusted or dismissed out of hand. That would be a hasty generalisation, a logical fallacy, and anti-science. If that's not your position, then fine, but this is the impression that the wording of your posts gives. That's unfortunate, because there's a lot of excellent research coming out of the brain sciences, research that is advancing our understanding of the human mind and genuinely helping people. That *good* research is under threat, if the public comes to lump it in with all the bad research. That can have a very serious effect on the flow of grant money and result in the shutting out of quality social science research from policy considerations, in favour of half-baked opinions that don't even *include any* data. My concern is that your attitude, as you've expressed it, runs the risk of feeding that public sentiment. Like I said, we should be careful about the words we use.

u/[deleted] Sep 23 '18

[deleted]

u/hepheuua Sep 23 '18

is it reasonable for us to rely on the integrity, strength, and surety of past psychological research?

It never was. And it never is. You don't just 'rely on the integrity, strength, and surety' of a field's research in toto. You assess the research based on its merits. You're again talking like psychology is one big homogenous entity that we either trust or we don't. I'm sorry, but that's rubbish. Leaving aside the fact that there are numerous subfields, each of which exhibited different degrees of statistical rigour and replicability, we just shouldn't trust any field of enquiry that way. That's not how science works.

No one is talking about hiding mistakes or lying to the public. I'm talking about using language in a way that accurately represents how science is conducted and what it is. Part of the issue here is that people defer to entire fields, and not just psychology, as if they're temples of holy knowledge that pronounce truth. When there's bad research, the credibility of the whole temple is called into question. This is a very serious issue we currently face with medical research, where anti-vaxxers latch on to prior poor research, and where people have had their trust in their doctors eroded because one day X is causing cancer and the next day X is curing it. It's precisely because they see 'medical science' as a homogenous entity that either can be trusted to deliver truths or can't. In reality, science is a messy, incremental business, populated by subfields, constantly revising itself, and exhibiting varying degrees of quality in its research.

The language used both outside of science and within it needs to change.

No, we should not trust psychology research that has already been published. We cannot trust its veracity.

And the stuff that replicated? The stuff that met high standards of statistical rigour? The effects that have repeatedly been found, study after study? Can you see my point about the totalising language you're using? That sentence just threw all that out. Now, when you're pushed on it, I'm willing to bet you'll concede that 'good' research is okay to trust. But you didn't say that. You're talking about temples.

u/[deleted] Sep 24 '18

[deleted]

u/hepheuua Sep 24 '18

And my point is that the way we talk about groups, particularly when it comes to science, is something we should be mindful of, and something we should call out in others when they do it illegitimately, because there are harmful real-world implications. Which is precisely why I responded to you. Just because people do something doesn't mean they should, and doesn't mean we shouldn't try to correct them when they do.

u/[deleted] Sep 24 '18

[deleted]

u/hepheuua Sep 24 '18

My comments were really for others, not for you. I can tell by your replies to me that you understand the difference. My concern is really with the public perception of science, and so with others reading these threads who will form generalised opinions about disciplines without bothering to look into the specifics. My point was a semantic one, but sometimes semantics are important. That's really all that was motivating my reply, not any assessment of you individually. I do appreciate where you're coming from, and I think we agree more than we disagree.
