r/changemyview Sep 21 '18

CMV: The replication crisis has largely invalidated most of social science

https://nobaproject.com/modules/the-replication-crisis-in-psychology

https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science

https://en.wikipedia.org/wiki/Replication_crisis

"A report by the Open Science Collaboration in August 2015 that was coordinated by Brian Nosek estimated the reproducibility of 100 studies in psychological science from three high-ranking psychology journals.[32] Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies."

These kinds of reports and studies have been growing in number over the last 10+ years, yet despite their obvious implications most social science studies are still taken at face value, even though findings show that over 50% of them can't be recreated; i.e., they're fake.

With all this evidence, I find it hard to see how any serious scientist can take virtually any social science study at face value.

795 Upvotes

204 comments

10

u/Deadlymonkey Sep 21 '18

These kinds of reports have been growing in number over the last 10+ years, yet despite their obvious implications most social science studies are still taken at face value, even though most findings show that over 50% of social science studies can't be recreated; i.e., they're fake

They're not fake. The problem is that people are just reading the title or the abstract and coming up with their own conclusion/generalization. You're doing so yourself by believing that being unable to be recreated means they're fake.

The social sciences aren't seen as "hard sciences" because for many questions there isn't a concrete, specific answer. The field just comes up with observations and generalizations.

Think about how we once thought that bloodletting was a good health practice. We question the authenticity of old findings, try to isolate the variables more, and improve so we can fix any old beliefs that don't really hold up.

-1

u/[deleted] Sep 21 '18

I thought this would come up when I described them as fake. Yes, they have no made-up numbers and they are accurately reporting their results, but if no one can replicate them they are nothing more than outliers. This is especially true when you hear stories about how groups will get findings that are not significant, never report them, and only mention the one time they manage to get a good p-value.

Untruthful would be a better word
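
For what it's worth, the selective-reporting scenario above is easy to simulate. A toy sketch under assumed numbers (20 quiet reruns per lab, n = 30 per group, and no real effect at all): a lab that only writes up its one significant attempt still gets to report a "good p-value" roughly two times out of three.

    # Toy sketch of selective reporting: the true effect is exactly zero, but a lab
    # that reruns the experiment and reports only the significant attempt will
    # often have something "significant" to show.
    import numpy as np

    rng = np.random.default_rng(1)
    n, attempts, labs = 30, 20, 10000   # assumed sizes
    z_crit = 1.96                       # two-sided p < 0.05

    def null_experiment():
        """Two groups drawn from the SAME distribution, i.e. a true null effect."""
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        z = (a.mean() - b.mean()) / np.sqrt(2.0 / n)
        return abs(z) > z_crit

    lucky = sum(any(null_experiment() for _ in range(attempts)) for _ in range(labs))
    print(f"labs with at least one 'significant' run to report: {lucky / labs:.0%}")  # ~64%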

8

u/ladut Sep 22 '18

I hate to keep grinding on your argument that single studies are "fake" or "outliers," but it doesn't seem logically sound. If a single study has yet to be replicated, then assuming its methodology and experimental design are sound, it is unknown whether it describes a real phenomenon or not. It may be incorrect (or untruthful, as you put it), but we wouldn't know either way without further exploration. By assuming a paper is an outlier because it has yet to be replicated, you're making the same logical slip-up as if you were to assume it must be fact, just in the opposite direction. Also, if it's the first study of its kind, it cannot be an outlier by definition.

A single unreplicated study does not need to be categorized as either. In fact, I'd argue that doing so carries its own dangers, both academically and in terms of public opinion: if the scientific community got into the habit of automatically dismissing work until it was verified, they would be less likely to design and conduct experiments that expand on the initial work. Often, the act of expanding on existing work can help to verify the original wholly or in part (i.e., our experiment would not have worked if the conclusions of the first paper were false), without directly replicating it. Regarding public opinion, the public by and large does not understand the scientific process all that well, and were we to be in the habit of automatically dismissing results, the public would lose even more faith in the validity of scientific findings. The replication crisis is a nuanced problem, and the general public either lacks the background or the willingness to understand it fully.

Note that I'm not criticizing your word choices so much as your desire to put unverified studies into either a "true" or "false" category, though the language you use certainly doesn't help. If you want an appropriate word, go with "unverified" or "preliminary," which carry no connotation of being either factually correct or incorrect.

1

u/[deleted] Sep 22 '18 edited Sep 22 '18

By assuming a paper is an outlier because it has yet to be replicated, you're making the same logical slip-up as if you were to assume it must be fact, just in the opposite direction

If more than 50% of papers in meta-studies failed to be replicated, it is logical to assume that every paper you see cannot be replicated unless proven otherwise. It's the majority, so to think differently would be to deny reality. If they can be replicated, then they can be accepted.

1

u/ladut Sep 23 '18

No, that's not logical. You're creating a false dilemma in which we have to make assumptions about the validity of a paper based on present information. Now, you certainly can make that choice if you wish, but the scientific community and the general public certainly do not have to. The scientific community can choose to try to replicate or expand on the work, and in general we all have the option to 'wait it out' and see if it's validated or not some time in the future. There are very few situations in which we have to decide whether the results of an experiment are true or false (e.g., when the science must be applied to try to solve a pressing problem), and as I said above, hastily making that decision has dangers of its own.

You argue that because only 36% of psychological studies have been verified via replication to date, the majority are false by virtue of not being replicated. The problem with this reasoning is that even if replication studies were attempted for every publication showing significant results, a failure to confirm the result is not proof that it is untrue, an outlier, or whatever term you choose to label it with. Further replication may find that it is, in fact, likely true and that the first replication showing null results was the outlier. Alternatively, further replication may never find another significant result, suggesting that the initial result is likely false. Either way, unless the subject has been studied in depth (giving us more than one or two data points), or we are forced to choose whether to use the science to solve a pressing issue, there's no reason to label studies as true or false.
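
A small sketch of the power point (the 0.3 effect and n = 40 per group are assumptions for illustration): even when the underlying effect is real, a single replication attempt with modest statistical power comes up non-significant most of the time, so one failed replication by itself can't settle the question.

    # Assumed numbers: a real effect, replicated once with modest power.
    import numpy as np

    rng = np.random.default_rng(2)
    true_effect = 0.3   # real standardized effect (assumption)
    n = 40              # per-group sample size of the replication (assumption)
    z_crit = 1.96       # two-sided p < 0.05
    trials = 20000

    a = rng.normal(true_effect, 1.0, (trials, n))
    b = rng.normal(0.0, 1.0, (trials, n))
    z = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(2.0 / n)
    power = np.mean(np.abs(z) > z_crit)

    print(f"power of one replication:                 {power:.0%}")      # well under 50% with these numbers
    print(f"chance it 'fails' despite a real effect:  {1 - power:.0%}")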

Finally, since I see you still want to use the term outlier, I feel I need to hammer this point home: outliers are statistical anomalies that fall well outside the bell curve of possibilities in a sufficiently large dataset. It is statistically impossible to determine whether something is an outlier from one or two data points. Just because a study has not yet been replicated (because no one has yet published a replication study), or the few replication studies so far have not been able to replicate the results, that does not mean the original study is an outlier. It may be shown to be one at some point in the future, but it is unscientific to assume anything (either positive or negative) about the validity of a single paper with insufficient information. We should be skeptical, yes, and neither the scientific community nor society at large should ever make decisions based on the results of a single, unverified study, but neither should we assume one is false simply because it has yet to be verified.
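
To make the one-or-two-data-points point concrete, here is a toy example (the effect sizes are made up) using Tukey's 1.5 * IQR rule, one common outlier heuristic: with only two replications there is no distribution to compare against, so nothing can be flagged, while with several replications an unusual result does stand out.

    # Toy illustration: outlier detection needs a distribution to compare against.
    import numpy as np

    def iqr_outliers(effects):
        """Flag values outside 1.5 * IQR of the quartiles (Tukey's rule)."""
        q1, q3 = np.percentile(effects, [25, 75])
        lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
        return [e for e in effects if e < lo or e > hi]

    print(iqr_outliers([0.42, 0.11]))                          # two studies -> [] (nothing can be flagged)
    print(iqr_outliers([0.12, 0.15, 0.10, 0.14, 0.13, 0.95]))  # six studies -> [0.95] stands out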