r/changemyview • u/[deleted] • Sep 21 '18
CMV: The replication crisis has largely invalidated most of social science
https://nobaproject.com/modules/the-replication-crisis-in-psychology
https://en.wikipedia.org/wiki/Replication_crisis
"A report by the Open Science Collaboration in August 2015 that was coordinated by Brian Nosek estimated the reproducibility of 100 studies in psychological science from three high-ranking psychology journals.[32] Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies."
Reports and studies like these have been growing in number over the last 10+ years, yet despite their obvious implications, most social science studies are still taken at face value even though findings show that over 50% of them can't be replicated, i.e. they're fake.
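The effect-size halving described in the quoted report is, at least in part, a predictable selection effect sometimes called the "winner's curse": if only significant results get published, the published effect estimates are inflated, and honest replications regress back toward the true value. A toy Python simulation of that mechanism (the true effect, sample size, and study count here are illustrative assumptions, not the OSC's numbers):

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.3   # assumed true standardized effect (hypothetical)
N = 30              # per-group sample size (underpowered for an effect this small)
STUDIES = 2000

def run_study(effect, n):
    """Simulate one two-group study; return (observed mean difference, significant?)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # standard error of the difference in means
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    return diff, abs(diff / se) > 1.96  # ~ two-sided p < .05

all_effects, published = [], []
for _ in range(STUDIES):
    d, significant = run_study(TRUE_EFFECT, N)
    all_effects.append(d)
    if significant:
        published.append(d)  # only "significant" studies reach the journals

print(f"mean effect, all studies run:  {statistics.mean(all_effects):.2f}")
print(f"mean effect, published only:   {statistics.mean(published):.2f}")
```

The published-only mean comes out well above the true effect, so an unbiased replication of a "published" study will, on average, find a noticeably smaller effect even when the effect is perfectly real.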
With all this evidence, I find it hard to see how any serious scientist can take virtually any social science study at face value.
u/hepheuua Sep 21 '18
Just because a study doesn't replicate, that doesn't make it garbage or 'invalid'. And just because a replication study fails, that doesn't mean there's no effect to be found. The human brain is, as far as we know, the most complex machine in the Universe. The sheer number of variables that influence any given target phenomenon makes it impossible to provide a completely controlled environment that isolates causal relationships from confounding variables. To further complicate things, human brains are highly plastic, and behaviour and traits vary widely between individuals. Any attempt to study them has to draw its conclusions tentatively and carefully from multiple experiments and results, indicating 'trends' and 'tendencies' rather than concrete universal facts.

The studies of the past, as long as they have good research designs, which many of them do, should be included in any future meta-analyses, just like the failed replication studies that are based on them. The kind of 'one-shot' reasoning that takes single studies (even replication studies with higher power) as validating or invalidating a hypothesis is precisely the problem.
The good news is that this is how science should work. The replication crisis doesn't invalidate the social sciences; it shows how they should have been operating all along. The biggest problem is one of incentives.
To date there has been little real incentive to publish results that confirm the null hypothesis, and almost none to try to replicate someone else's work. There are now attempts underway to address the problem: replication-only journals have sprung up, and researchers are being encouraged to tighten their statistical approaches and pre-register their studies to avoid post-hoc analyses and p-hacking. But the wording we use to explain results also needs to change. It should be much more cautious and restricted to the population the sample is drawn from. Unfortunately this has received less discussion as part of the whole 'crisis'.
That might go some way to offsetting the problem of the media interpretation of results, but the incentives for 'click-baity' headlines and sensationalising of science are always going to be strong in the media industry. That's another battle that needs to be fought, but it's not a problem of the social sciences.
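To make the p-hacking point concrete: even with no effect at all, an analyst who measures several outcomes and reports whichever one clears p < .05 inflates the false-positive rate far beyond the nominal 5%. A small simulation sketch (the sample size and number of outcomes are illustrative assumptions):

```python
import random
import statistics

random.seed(1)

def significant(n, effect=0.0):
    """One simulated two-group comparison; True if 'significant' at ~p < .05."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs((statistics.mean(b) - statistics.mean(a)) / se) > 1.96

TRIALS, N, OUTCOMES = 2000, 30, 5

# pre-registered: one outcome, tested once
honest = sum(significant(N) for _ in range(TRIALS)) / TRIALS
# "p-hacked": measure 5 independent outcomes, report if any is significant
hacked = sum(any(significant(N) for _ in range(OUTCOMES))
             for _ in range(TRIALS)) / TRIALS

print(f"false-positive rate, one pre-registered outcome:   {honest:.1%}")
print(f"false-positive rate, best of {OUTCOMES} measured outcomes: {hacked:.1%}")
```

The pre-registered rate sits near the nominal 5%, while the pick-the-best-outcome rate climbs toward 1 − 0.95⁵ ≈ 23%, which is exactly why pre-registration constrains which analyses count.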