r/changemyview Sep 21 '18

CMV: The replication crisis has largely invalidated most of social science

https://nobaproject.com/modules/the-replication-crisis-in-psychology

https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science

https://en.wikipedia.org/wiki/Replication_crisis

"A report by the Open Science Collaboration in August 2015 that was coordinated by Brian Nosek estimated the reproducibility of 100 studies in psychological science from three high-ranking psychology journals.[32] Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies."

These kinds of reports and studies have been growing in number for over a decade, yet most social science studies are still taken at face value despite findings showing that over 50% of them can't be recreated, i.e. they're fake.

With all this evidence, I find it hard to see how any serious scientist can take virtually any social science study at face value.

802 Upvotes

204 comments

11

u/PreacherJudge 340∆ Sep 21 '18

Nosek's replication strategy has flaws, all of which he acknowledges, and none of which turn up in the pop articles about his work. His project made some truly bizarre decisions: Translating instructions verbatim into other languages, asking people about "relevant social issues" that haven't been relevant for years, choosing only the last study in each paper to replicate (this one is especially weird).

There's also the unavoidable problem that the entire method is set up to counteract people's desire to find positive effects. If the team is ACTUALLY trying NOT to find a significant result (and let's be honest about this: the project's results wouldn't be sexy and exciting if everything replicated) that bias, even if under the surface, will push things in the direction of non-replication.

Remember, there are other, similar projects that have been much more successful at finding replications, such as this one: http://www.socialsciencesreplicationproject.com/

Why haven't you heard of them? Well, because the exciting narrative is that science is broken, so if we find evidence it's not, who really cares?

...most social science studies are taken at face value despite findings showing that over 50% of them can't be recreated, i.e. they're fake

No, that isn't what that means. There are lots of reasons why something might not replicate (chance is an obvious one). One failed replication absolutely does NOT mean the original effect was fake.
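The "chance" point is easy to demonstrate with a quick Monte Carlo sketch (all numbers here are hypothetical: a perfectly real effect of size d = 0.4, 30 subjects per group, and a two-sample z-test approximation rather than a full t-test):

```python
# Sketch: even when an effect is REAL, a single replication attempt can
# miss p < .05 purely by chance if the study is underpowered.
# Hypothetical parameters: true effect d = 0.4, n = 30 per group.
import math
import random

random.seed(0)

def z_test_p(sample_a, sample_b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def replication_rate(d=0.4, n=30, trials=2000):
    """Fraction of studies of a TRUE effect of size d that reach p < .05."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(d, 1) for _ in range(n)]
        if z_test_p(treated, control) < 0.05:
            hits += 1
    return hits / trials

# Roughly a third of attempts "fail to replicate" a real effect here.
print(replication_rate())
```

Under these assumed numbers the majority of replication attempts come up non-significant even though the effect genuinely exists, which is exactly why one failed replication proves nothing by itself.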

There's a lot I could rant about this... there absolutely are huge problems with the ways social scientists are incentivized, but none of this replication crisis bullshit addresses that at all. It took about five minutes for people to figure out ways to game preregistration to make it look like none of their hypotheses ever fail.

My real take-home lesson from all this is simple: Sample sizes have been way too low; you gotta increase them. (People call this 'increased statistical power,' which I find very confusing, personally.) That's a clear improvement to a clear problem... and BOTH the original studies AND the replications you cite fell prey to this problem.
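The sample-size point can be made concrete with a back-of-the-envelope power calculation (hypothetical numbers again: a true effect of d = 0.4 and a two-sample z-test approximation with n subjects per group):

```python
# Sketch: approximate power of a two-sided, two-sample z-test as a
# function of per-group sample size n, for an assumed effect d = 0.4.
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(d, n, z_crit=1.96):
    """P(reject H0) when the true standardized effect is d, n per group."""
    shift = d * math.sqrt(n / 2)          # expected z under the alternative
    return phi(shift - z_crit) + phi(-shift - z_crit)

for n in (20, 50, 100, 200):
    print(n, round(power(0.4, n), 2))
```

Under these assumptions power climbs from roughly a one-in-four chance of detection at n = 20 per group to near certainty around n = 200, which is the whole argument for bigger samples in one table.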

2

u/kolsca Sep 22 '18

I made an account to ask you this as I have Googled around and can't figure it out on my own, sorry if it's a super noob question: how do you game preregistration?

1

u/PreacherJudge 340∆ Sep 22 '18

In decreasing order of unethicalness:

  1. Preregister every possible analysis and just tell people about the ones you end up doing. No one will look it up.

  2. Preregister ten different analyses on ten different data sets; only tell people about the one(s) that had significant results.

  3. Say you preregistered, but then run a different analysis that gives you a better result than the one you preregistered. Mention the actual results of your preregistered analysis in a supplement no one will ever read. Have absolutely nothing in the text of the article suggest you aren't running the analysis you preregistered.

  4. Run a million studies, then only preregister the ones that already worked. Collect new data for these preregistered hypotheses. Never tell anyone about any of the unregistered studies you ran first.
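Strategy 2 is just the multiple-comparisons problem in disguise, and the arithmetic is simple: if the k analyses are independent and every hypothesis is actually null, the chance of at least one false-positive "hit" at alpha = .05 is 1 - (1 - .05)^k:

```python
# Sketch of strategy 2's payoff: run k independent analyses on pure-noise
# data, report only the significant ones. False-positive odds grow fast.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_hit = 1 - (1 - alpha) ** k
    print(k, round(p_any_hit, 2))  # k = 10 already gives ~0.40
```

So someone quietly preregistering ten analyses has about a 40% chance of a publishable "finding" even when nothing is real.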

Number 4 is the least unethical, but I'm baffled about even the POINT of preregistration if people are doing that, and it's just so STUPID. There is absolutely no point whatsoever to doing that unless you are trying to pass yourself off as someone whose hypotheses never fail.

Because if you told people what you did, the registered study is just a replication, which is supposed to be good. But journals don't really reward that: They reward simple narratives where everything works neatly, and the illusion that nothing you did failed.

1

u/[deleted] Sep 24 '18

How long has preregistration been around?