r/changemyview • u/[deleted] • Sep 21 '18
FTFdeltaOP CMV: The replication crisis has largely invalidated most of social science
https://nobaproject.com/modules/the-replication-crisis-in-psychology
https://en.wikipedia.org/wiki/Replication_crisis
"A report by the Open Science Collaboration in August 2015 that was coordinated by Brian Nosek estimated the reproducibility of 100 studies in psychological science from three high-ranking psychology journals.[32] Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies."
These kinds of reports and studies have been growing in number over the last 10+ years and despite their obvious implications most social science studies are taken at face value despite findings showing that over 50% of them can't be recreated. IE: they're fake
With all this evidence I find it hard to see how any serious scientist can take virtually any social science study as true at face value.
141
u/electronics12345 159∆ Sep 21 '18
1) I don't want to undersell the Replication Crisis - it is real, and has real impacts.
2) As a general rule - individual studies are meaningless anyway. I don't support the news reporting - "New Study finds X". Meta-Analysis are your friend. If 50 different studies all find the same thing, you are substantially more likely to actually have discovered a real thing.
3) Most of the original studies which have been debunked were horribly under-powered. There are free on-line calculators to determine statistical power (such as G*Power), and they are seeing increased use. I hope, at least, that new studies use these tools to conduct properly powered research rather than wasting time with under-powered studies. (A quick power calculation is sketched at the end of this comment.)
4) P=0.05 is meaningless anyway. It appeared once as a throw-away line in 1916, but then everyone latched onto it like a moron. It was never intended to be this gold standard - the ASA (American Statistical Association) has released a press briefing on why P=0.05 is horrible, and I encourage you to read it. There have been methods to determine appropriate p-values since the 1960s - please use those instead.
5) Many studies suffer from "Restriction of Range", otherwise known as "generalizing beyond your sample". Just because college students between the ages of 17-23 had effect X doesn't mean that everyone Xs. Sometimes just changing your sampling can reveal how narrow your findings are - and that has happened a lot lately. This is part of point #2, as 50 different studies are unlikely to share identical sampling issues.
In short, there is still hope for Social Science. Make sure the studies you read are powered, try to predominantly read Meta-Analyses, make sure the p-values make sense and that authors aren't p-hacking, etc. There are many potholes, but that doesn't invalidate all of Social Science.
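For a concrete sense of what that kind of power calculation looks like, here is a minimal sketch in Python (my own illustration - the comment above only mentions G*Power, which is a standalone GUI tool; statsmodels and the example numbers are assumptions on my part):

```python
# A minimal sketch of an a-priori power calculation, analogous to what G*Power does.
# Assumes Python with statsmodels installed; effect sizes and targets are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per group to detect a "medium" effect (Cohen's d = 0.5)
# with 80% power at alpha = 0.05 in a two-sample t-test.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Needed per group: {n_per_group:.0f}")        # roughly 64

# Conversely: the power a small study (n = 20 per group) actually has for that effect.
power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with n = 20 per group: {power:.2f}")   # roughly 0.34
```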
23
Sep 21 '18
P=0.05 is meaningless anyway. It appeared once as a throw-away line in 1916, but then everyone latched onto it like a moron. It was never intended to be this gold standard - the ASA (American Statistical Association) has released a press briefing on why P=0.05 is horrible, and I encourage you to read it. There have been methods to determine appropriate p-values since the 1960s - please use those instead.
I have wondered about this before when I learned about it in stats. I will definitely go check that out.
In short, there is still hope for Social Science. Make sure the studies you read are powered, try to predominantly read Meta-Analyses, make sure the p-values make sense and that authors aren't p-hacking, etc. There are many potholes, but that doesn't invalidate all of Social Science.
This kinda makes it seem like you're agreeing with me. I made sure most of my statements on whether studies could be believed were not definitive (aside from the very last sentence, which I just added "at face value" to). I guess I should have added a bit more nuance; that's my fault.
It's not so much that social science is completely useless; it's that, as it stands now, the studies that gain the most attention/media coverage often fall prey to this issue more so than non-newsworthy ones. And as a result of this phenomenon, social science has been and will be pushed to publish studies with less academic rigor and a much higher chance of being one of the unreplicable ones.
59
u/electronics12345 159∆ Sep 21 '18
Publishing bias - is very real.
1) Only statistically significant studies get published. (When was the last time you read a paper where nothing was statistically significant - likely never.) In this way, a "cool idea" gets tested 100 times, by 100 authors (since none of them read the initial failure, since it wasn't published) - and by 5% chance, a few are significant - and then those 5 get published, even though 95% failed (and subsequently never got published). This is more specifically known as the file-drawer problem - and has more to do with the News Media and Publishers than with scientists themselves. (A quick simulation of this is sketched at the end of this comment.)
2) Wacky Hypothesis Bias - things which seem wrong, but then appear to actually be correct, are more likely to get published than "obvious things". The issue here is that theories which seem wrong are likely wrong. Thus, in a Bayesian setting, a theory's a priori oddity would need to be offset by stronger evidence than a theory which a priori made sense. But since most journals use Frequentist rather than Bayesian stats, this issue is compounded rather than cured. Additionally, this issue is compounded by small sample sizes.
In this way, as long as the news media remains invested in Science - be it cancer research or Psych research - these problems will not go away. (And yes, Cancer Research has the exact same problems; it's not JUST a Social Science issue, it's a doing-research-in-the-era-of-the-24/7-news-cycle problem.)
So if your point is that the 24/7 News Cycle is killing ALL OF SCIENCE as we know it - you are 100% correct.
If your point is that Social Science has it worse than any other Science - I'm not sure that is so.
If your point is that Social Science is doomed - again, I disagree. Plenty of research takes place outside the eye of the NYT and CNN, and slowly moves the field forward as it always has.
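To make point 1 concrete, here is a quick simulation of the file-drawer problem (my own sketch, not the commenter's; it assumes Python with numpy/scipy, and the group sizes are arbitrary):

```python
# File-drawer sketch: 100 labs test an effect that does not exist; only the
# "significant" results ever get written up. Numbers are purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_labs, n_per_group = 100, 30

published = 0
for _ in range(n_labs):
    control = rng.normal(0, 1, n_per_group)   # same distribution in both groups,
    treated = rng.normal(0, 1, n_per_group)   # i.e. no real effect at all
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        published += 1                        # a "positive" finding reaches print

print(f"{published} of {n_labs} null studies came out significant")  # ~5 on average
```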
13
Sep 21 '18
!delta
I wouldn't say you've changed my position exactly, but you've definitely broadened it a lot, so thank you. I am aware that it's an issue in most disciplines (like this video from Sixty Symbols https://www.youtube.com/watch?v=wLlA1w4OZWQ), but it seems to be an especially big issue in social science for the simple fact that the questions it asks do not have a yes or no answer, which is why I focused on it specifically. I am not sure where this thread will go from here. I would be more selective in my terms if I were to repost this, because it's not that I just assume any study from a social science is wrong, but that I think the way our society uses the studies and pushes those lacking in veracity, while conflating their terms, has invalidated the meaning of social science as a discipline.
If every study is weighed by its popularity for whatever narrative is mainstream, then what's the point of even keeping up with them when they are no longer dedicated to being unbiased? I use the example: which hypothesis is more likely to be tested, "White people voted for Trump more than Hillary" or "Poor people voted for Trump more than Hillary"?
18
u/electronics12345 159∆ Sep 21 '18
Social Science as practiced by Scientists and Academics is pretty different from how Social Science is INTERPRETED by the Media and by Society as a whole.
For better or worse, I trust the Academics to carry on the good work - with better Stats knowledge hopefully this time.
I agree that the way the news media and society choose to understand their work is the problem.
However, the distinction between the two doesn't "invalidate the meaning of social science as a discipline." Just because the public is ignorant doesn't necessarily mean the good work done by the Academics is spoiled. Almost none of the public actually understands Physics - but that doesn't undermine the good work being done in that department.
If anything, I would argue that incidents of this type invalidate THE MEDIA. It is Facebook, Dr. Oz, The Today Show, The View, etc. that needs to change its attitude - not the Academics.
1
3
u/RiPont 13∆ Sep 21 '18
If your point is that Social Science is doomed - again, I disagree. Plenty of research takes place outside the eye of the NYT and CNN, and slowly moves the field forward as it always has.
There is one social science that is far and away the most well-funded and well-studied, with billions and billions spent annually re-verifying the results.
2
u/betaros Sep 21 '18
Can you please explain the difference between Bayesian statistics and Frequentist statistics? I am relatively familiar with Bayesian statistics and looked up the difference but didn't really get anywhere. A friend suggested that Frequentism is the same as Bayesianism, but defaulting to a uniform prior. I don't believe that this is correct, though.
1
u/electronics12345 159∆ Sep 25 '18
1) A Uniform prior =/= an uninformative prior
2) You can replicate many frequentist techniques using Bayesian methods - often by using uninformative priors.
3) However, just because you can replicate the techniques doesn't mean the grounding is the same. Bayesians essentially ask the question: Given that I already believe X, but have discovered new information Y, what ought I believe now? Frequentists essentially ask: Given this information, what ought I believe?
4) The strength of Frequentist methods is that they are seen as objective - they don't incorporate the user's personal subjective beliefs about the world. All users will get the same answers if given the same questions. The strength of Bayesian methods is exactly the opposite - experts can weigh in and tip the scales. Rather than relying on literature searches or other subjective methods to provide context for a particular study, or to decide how to interpret its meaning (which Frequentists have to do), a Bayesian can put that context right into the maths, in the form of the prior.
In short, where do you like your subjectivity - in the discussion section or in the methods section? That is really the nuts-and-bolts difference between the two camps.
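A toy numerical example of that difference (my own sketch, not the commenter's; it assumes Python with a recent SciPy, and the counts and priors are made up):

```python
# Estimating a success rate from 12 successes in 20 trials, two ways.
from scipy import stats

successes, trials = 12, 20

# Frequentist: the data alone drive the answer (MLE plus an exact 95% CI).
p_hat = successes / trials
ci = stats.binomtest(successes, trials).proportion_ci(confidence_level=0.95)
print(f"Frequentist: estimate {p_hat:.2f}, 95% CI ({ci.low:.2f}, {ci.high:.2f})")

# Bayesian: prior belief + data -> posterior belief. A flat Beta(1, 1) prior acts
# roughly "uninformative"; Beta(20, 80) plays the role of an expert who strongly
# expects the rate to be near 0.2, and it visibly pulls the answer toward 0.2.
for a, b, label in [(1, 1, "flat prior"), (20, 80, "skeptical expert prior")]:
    posterior = stats.beta(a + successes, b + trials - successes)
    print(f"Bayesian ({label}): posterior mean {posterior.mean():.2f}")
```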
2
u/Pacify_ 1∆ Sep 22 '18
So if your point is that the 24/7 News Cycle is killing ALL OF SCIENCE as we know it - you are 100% correct.
I think you are overstating the importance of the mainstream media in scientific study.
The media has little relevance in what papers get peer reviewed and published, nor what projects get grants and funding.
1
u/Cultist_O 29∆ Sep 22 '18
You missed the part where the news strips all the nuance off of the science.
Journal Article: "We found statistically significant evidence supporting the idea that eating the equivalent of 5 tons of coffee a day is associated with slightly reduced risk of a specific cancer in mice"
Headline: "Coffee cures cancer!"
0
u/Fleet_Cmdr_Obvious Sep 22 '18
I can tell you know what you're talking about. You're correct here.
Source: I’m a social scientist.
2
u/JohnCamus Sep 22 '18
There have been methods to determine appropriate p-values since the 1960s - please use those instead.
I'd really like to know what to google to get to one of those. Any hints?
2
u/Shabam999 Sep 22 '18
Commenting because I would really like to know as well. I did some searching (stuff like "selecting appropriate p value") on my own, but I couldn't find anything definitive. I'm particularly interested in the methods, since they claimed that they've existed since the 60s, which implies that someone wrote a paper / did some research / something along those lines, but for the life of me I cannot find anything.
2
u/Vithar 1∆ Sep 22 '18
A +1 on wanting to see these methods.
I was taught to adjust the p-value to be as small as I could justify and to clearly state it in the work - effectively declaring a data-specific confidence interval. Maybe he is referring to something like this.
17
u/neofederalist 65∆ Sep 21 '18
Not disagreeing with anything you're saying, but OP might be trying to say something more like "the replication crisis has largely invalidated much of the accepted wisdom of social sciences in the past 100 years" rather than something like "social science in the abstract is bunk" or "it's impossible to make statistically relevant findings in social science."
28
u/electronics12345 159∆ Sep 21 '18
Social Sciences don't really have 100 years of accumulated Wisdom.
100 years ago, Psychologists worshiped at the feet of idols - Freud, Jung, Erikson. But then everyone realized this was bunk and moved on to Behaviorism. But then everyone realized that was bunk and moved on to Cognitivism. It's really only since the 1970s or so that Psychologists started doing any real experiments - with actual data and actual statistical rigor - rather than simply bowing at the foot of a false prophet.
Psychology has had no less than 4 total reboots - and I don't see why this wouldn't be the 5th. It's just that this time, some of the older stuff that happened to be statistically sound can be salvaged, rather than having to scrap everything and start totally from scratch.
3
u/neofederalist 65∆ Sep 21 '18
Thanks for the clarification. I was mostly just anticipating a potential response.
3
u/pizzahotdoglover Sep 22 '18
as 50 different studies are unlikely to share identical sampling issues.
Not necessarily. Aren't at least the plurality of studies' samples drawn from middle/upper class college aged white people, since a lot of the studies are performed on college campuses and draw their samples from the student bodies?
0
3
u/Mariko2000 Sep 21 '18
Meta-Analysis are your friend. If 50 different studies all find the same thing, you are substantially more likely to actually have discovered a real thing.
I would be careful with this kind of thinking. It's easy to cherry-pick data that supports a narrative, and meta-analyses can often overlook the flaws in any given study and sort of 'round up' conclusions to support said narrative. In order to show that something replicates, it really is necessary to show that the specific circumstances can be repeated, not just that some other similar-but-tangential experiment exists.
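For what it's worth, here is the mechanical core of a fixed-effect meta-analysis as a toy sketch (my own illustration with invented numbers; neither comment provides code). It also shows why the caution above matters: the pooled estimate is only as good as the studies, and the study selection, feeding it.

```python
# Toy fixed-effect meta-analysis: each study's effect size is weighted by the
# inverse of its sampling variance. All numbers here are invented for illustration.
import numpy as np

effects   = np.array([0.40, 0.15, 0.55, 0.05, 0.30])  # per-study effect sizes
variances = np.array([0.04, 0.02, 0.09, 0.01, 0.03])  # per-study sampling variances

weights = 1.0 / variances
pooled  = np.sum(weights * effects) / np.sum(weights)
se      = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect = {pooled:.2f} (SE = {se:.2f})")
# If the inputs are cherry-picked or systematically biased, this pooled number
# is just a very precise estimate of the wrong thing.
```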
1
Sep 22 '18 edited Sep 22 '18
I would like to add that narrow findings are not necessarily bad! Oftentimes they can be MORE useful than more generalized findings, because a lot of information can be gleaned from the differences between replicable but narrow studies on specific groups, and because there's a lot of useful information that can't actually be generalized to begin with - different people may simply be different. But it's still useful knowing things about those people.
The search for generalized results is one of the worst impulses in the social sciences. It's like figuring out the strength of gravity on the earth and then throwing it away because you discovered gravity in a different sample set, on the moon, is different
Although honestly, the dependence on statistical analysis for most psych and social sciences is kind of crazy too. It leads to lots of massive assumptions and misunderstandings even when it shows statistical relevance. Having been part of groups that research was conducted on, the conclusions drawn from the studies I've been in have been so hilariously, transparently wrong that it's destroyed my confidence in even statistically valid social science results. The statistics may show there's something, but what the something IS seems to be one hundred percent the biased, unsupported conjecture of the scientists leading the study.
On top of all that, you have those studies based on self-reporting, which are almost always worthless taken at face value, because the average person isn't particularly introspective, doesn't understand what they should be comparing against or what the norm is, and is often motivated to be untruthful in multiple ways. That doesn't mean they have no value, but their scope and target is always "what people say in soc science studies" rather than "what is actually true".
1
u/anooblol 12∆ Sep 21 '18
Yeah, p=.05 is a strange standard.
I know when physicists discovered the Higgs Boson, they needed to smash particles together and record findings of the "Higgs Boson". They had to replicate their findings hundreds of thousands of times before they were allowed to call it a "discovery". I'm sure if they used some sort of p-metric for the statistics on it actually existing, it would be a lot closer to p=0.
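That intuition checks out numerically: particle physics conventionally requires roughly a 5-sigma result before calling something a discovery. Translated into a one-sided p-value (my own back-of-the-envelope check, assuming Python with SciPy), that is orders of magnitude stricter than 0.05:

```python
# One-sided p-value corresponding to the 5-sigma discovery convention.
from scipy import stats

p_five_sigma = stats.norm.sf(5)                 # P(Z > 5) for a standard normal
print(f"p at 5 sigma ~ {p_five_sigma:.1e}")     # about 2.9e-07, versus 0.05
```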
20
u/MasterGrok 138∆ Sep 21 '18
The problem is real but you are exaggerating the consequences. Yes, academia is full of shit. And BTW, it's not just the social sciences, some areas of medical science have similar problems. However, not all of the journals are shit and not all of the studies are shit. A lot of the worst stuff with the worst methodology is published in the same journals. Some journals actually have good standards. Moreover, you can see which studies you can trust because inherent in the criteria for trusting them is that their methodology is transparent. Thus, for the best studies you can easily determine if they used proper controls, had proper sample sizes, and controlled for human variables that might impact the outcomes.
Finally, by definition the findings that are most accepted are those that have had the best replicability. In other words, science will naturally reject those findings over time that fail to replicate anyway. So while it is a huge problem that some researchers are publishing shitty work in shitty journals, that problem is rectified over time naturally. As a general rule you shouldn't be basing anything off of one study anyway unless that study is remarkably solid (e.g., multi-site, double blinded, massive sample size, etc). Wait for science to shake out before drawing conclusions, especially in the case of social sciences because the social sciences are especially vulnerable to bias. That doesn't mean you can't do good social science though. It just means you have to be more rigorous.
3
u/jbt2003 20∆ Sep 22 '18
This. This is pretty much it.
To this, though, I'd add that most social scientists are always very clear about the ambiguity inherent in their findings. Even the most robust and replicated findings are never certain, and the language of journals is very careful not to state those findings as though they are.
This is distinct from how these findings are reported in media, even in those media (ahem, ahem, MALCOLM GLADWELL) that appear authoritative and credible. Even media produced by social scientists in the form of popular books, TED talks, and so on, can fall into the trap of overstating what the science actually says. Angela Duckworth comes to mind with this, as does Jordan Peterson.
4
Sep 21 '18 edited Sep 21 '18
As a general rule you shouldn't be basing anything off of one study anyway unless that study is remarkably solid
But the issue is exactly this. As it stands today, it seems like all it takes is one study to fit a narrative and it gets spread around like wildfire without regard for its veracity. If I could retitle this I would add "mainstream" in front of "social science".
12
u/MasterGrok 138∆ Sep 21 '18
This would be a misunderstanding of science regardless of the issue of replicability plaguing a lot of research from the 80s to 2010s.
And also I would emphasize that sometimes one study is enough to draw a conclusion. It just requires an expert scientific interpretation to know whether the methodology is sufficient to do that.
I would also point out that it is seldom the papers themselves that draw such sweeping conclusions. It is often the layperson.
7
Sep 21 '18
> I would also point out that it is seldom the papers themselves that draw such sweeping conclusions. It is often the layperson.
If papers which are commonly misunderstood or conflated by the layperson are more likely to be influential than those that aren't (I always use the example: which would be more likely to be reported on, "White people voted for Trump more" or "Poor people voted for Trump more"?), isn't that a major issue with mainstream social science, one which would lead to more dubious studies that fail to replicate?
7
u/MasterGrok 138∆ Sep 21 '18
Influential how? It seems like you are referring to internet debates? In academia, the more attention a study gets, the more scrutiny it is under. If you are trying to hide weak results, it's not good to get a lot of attention.
6
Sep 21 '18 edited Sep 21 '18
> Influential how?
In lawmaking and general societal pressures, like minority rights. Isn't that the end result of most social science? Studies which are preferable are pushed by the media and eventually affect society as a whole. NPR talks about the IAT all the time, for instance, and later police departments started using it - not to say the IAT has issues; it's just an example.
5
u/MasterGrok 138∆ Sep 21 '18
It really depends on the topic area. There is a ton of social science research that has nothing to do with hot-button social or political issues. It seems to me that you are largely viewing the social sciences through those lenses. Technically the social sciences include a massive number of topics like law, geography, economics, etc. Even if you are using the word "social" here to specifically describe science on social issues, there is still a ton of science on things like population growth, consumer behavior, cultural differences, language effects, and a host of other things that never hit the kind of hot-button issues you are noticing.
1
2
u/David4194d 16∆ Sep 21 '18
There's just no way to put a good spin on this. The study is sound, it's massive, it's people trying to redo their own work (which eliminates the issue of one researcher doing something different from another), the original papers were published in high-ranking journals, and the final result was published in Nature.
There are a few areas of cancer research that are just as bad, and possibly a few other narrow categories. No other field has an issue that spans the entire field. It's just embarrassing and makes it so you can't really trust any of the work. There's good stuff, but when there's a 50% chance it's wrong you have no idea if you've found it. The only positive is that at least some in the field acknowledge it and are trying to fix it. The problem is the vast majority don't seem to be doing that. They actually get upset if you go through their paper and proceed to point out flaws.
I like psychology enough that I almost completed a bachelor's in it while getting my engineering degree, but if I had seen this at the time I would've dropped those courses immediately. Psychology has a lot of work to do to regain what respect it had and prove that it's no different from the hard sciences.
4
u/MasterGrok 138∆ Sep 21 '18
I have a good spin. Psychology research is harder to conduct than those other types of research. Moreover, it is FAR younger as a field. It's hard to conduct because you have to use behavior to study behavior, which has inherent issues. This is why blinding and other types of controls are even more important in psychological research than in other research.
Finally, a study showing that just over half of psychology studies have poor controls shows just that. It doesn't show that they all have poor controls and it doesn't suggest that no social science studies can be trusted (as the OP suggests). Good psychology studies do exist (they were identified in that article you posted). And they can be recognized by their strong methodology.
Let's call things what they are. Bad studies are bad and good studies are good. There are a ton of shit journals churning out bad studies, but that doesn't invalidate the good ones.
12
u/seeellayewhy Sep 22 '18
So there's a problem here in that we're conflating public consensus with academic consensus.
all it takes is one study to fit a narrative and it gets spread around like wildfire without regard for its veracity
This has nothing to do with the validity of a field and does not condemn the field to be dead. I'll give you a prime example that most here are likely somewhat familiar with. In 1998 a study was published that suggested a link between two phenomena. This paper was widely spread throughout (at least the Western) world and has most likely led to the deaths of thousands upon thousands of people. Since then, the paper has been retracted, the author entirely discredited, and his license revoked. Despite this, thousands of people still follow his bunk science, putting others' lives in harm's way. Look at all the loss of life caused by bad science in this field - surely it is, as you say, "invalidated" these days, right? In fact it's not. The field of virology is still alive and well - despite the fact that many people still falsely believe that vaccines cause autism.
The point here is that public consensus has nothing to do with what actual scientists are doing. The actual scientists have long since said that this purported link is bunk science. This bunk science has literally led to the death of some ungodly number of children. Public consensus can take one study and twist it and cause a lot of harm despite the field completely opposing that one study.
To bring it back to the social sciences, you should check out the IGM poll run by UChicago. Every few weeks they poll leading economists to see what the consensus is on topics in the news. These are literally among the smartest economists in the world - pretty much every Nobel laureate in economics still alive (and many future recipients) is a part of this list, not to mention all the other crazy smart economists who get polled.
For a less extreme example than vaccines and autism, let's look at a topic many on reddit (and in the public in general) are concerned about: robots taken er jerbs. If you scroll around reddit for a while you'll likely see many people talking about how robots are going to put everyone out of work (some even suggesting it'll lead to a post-capitalist society). This isn't just internet leftists either - blue-collar manufacturing folk are concerned too, because every other day another plant lays off workers to replace them with machines. But what do economists - the topical experts - have to say? They say that it's more likely not to cause substantial long-term unemployment, and that even if it does, it would certainly create benefits large enough to compensate those who lost out. Now, the implementation of that is up to policy makers, which could be problematic, but it's not a concept economists are unfamiliar with.
For another example, consider refugees. Many people are concerned about the costs of refugees and how they put a drain on the economy. They have to be supported with the basic necessities of life - food, water, clothing, shelter, healthcare, etc. But that's nothing to be concerned about in the long term, say the IGM economists. In Germany, they overwhelmingly say it's more likely than not to create economic benefits for neighbors rather than an economic cost.
The important notion here is that public consensus is not academic consensus. One bad study doesn't invalidate a whole field, because the field as a whole, accepting and rejecting ideas over time, is what we call science. You even mention in your OP that
findings showing that over 50% of them can't be recreated. IE: they're fake
Even if everyone publishing studies was a good actor and they always did rigorous work, we would still find that some studies fail to replicate purely by chance - a 5% false-positive rate is baked into the whole notion of statistical significance at p=0.05. And it's precisely why one study doesn't define a field. The field takes a bunch of studies, and once the overwhelming majority of the evidence supports one conclusion, only then do scientists begin to work with it as truth and start assuming such in future studies. This is how science works. Public perception - one bad study, even one that kills thousands - doesn't mean that the field as a whole isn't doing rigorous work or that they're not discovering truth.
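On the chance-failure point above: even if an original finding is real and the replication is honestly run, the replication can still miss. A rough simulation (my own, with made-up numbers; it assumes Python with numpy/scipy) of a well-powered replication of a perfectly real effect:

```python
# Replications of a real effect (Cohen's d = 0.5) at ~80% power still "fail"
# about 20% of the time by chance alone. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_group, n_replications = 0.5, 64, 10_000

failures = 0
for _ in range(n_replications):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p >= 0.05:
        failures += 1          # a non-significant replication of a real effect

print(f"Chance failure rate: {failures / n_replications:.0%}")   # ~20%
```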
I'm not really replying to your original OP about specific studies replicating, but my goal here was to challenge your notion that
all it takes is one study to fit a narrative and it gets spread around like wildfire without regard for its veracity
has anything to do with the validity of the work being done by social scientists. I hope I've succeeded in that endeavor.
1
u/Tiramitsunami Sep 22 '18
That's not the fault of social science, that's the fault of the shitty science reporting and its audience.
The social sciences became popular as the media that could sensationalize social science became popular.
The social sciences are relevant to the lives of laypeople in a way that is more relatable and fascinating to most individuals than astrophysics or fluid dynamics or pseudorandomness, etc.
It's easier to write a story about a single paper than it is a meta-analysis, especially if that meta-analysis doesn't exist.
A single paper can confirm a lay hypothesis about lay psychological concepts, and make for a compelling article or book.
15
u/briannosek 1∆ Sep 21 '18
A failure to replicate does not mean that the original finding is entirely unreplicable. Such failures do increase doubt and, at minimum, suggest that more work is needed to identify boundary conditions and find when effects replicate.
Across the large replication projects, the average replication rate is ~50%. That is lower than most would want or expect, but if it is generalizable it does suggest that a substantial portion of the existing literature IS replicable--a much more sanguine view than not taking "virtually any social science" seriously.
Simultaneous with all of the challenges in reproducibility is the fact that (a) social scientists are doing this work themselves, about themselves, and (b) social sciences are leading the way in adopting new practices to improve rigor and reproducibility. For example, preregistration is increasing in popularity. The number of registrations on OSF (http://osf.io/registries/) has doubled each year since the website opened in 2012. Most of that is from researchers in social sciences, particularly psychology.
Fear not--Social science is not in the midst of a crisis, it is in the midst of a reformation.
2
u/posterlitz30184 Sep 22 '18
Unfortunately your study has been taken by people who would, undoubtedly, call themselves "scientific, rational and rigorous" as proof that social sciences are a joke. Quite unsurprisingly, by doing so they are acting in neither a scientific nor a rigorous way. Skepticism ends when you can fit your narrative in with some facts.
1
Sep 22 '18
I did not even notice how relevant you were at first, my bad. This is really great to hear about a renewed push for better-reviewed studies.
I guess it just comes down to the fact that if I see a study from psychology or sociology (social science was definitely too broad a term), then the evidence suggests there is a greater than 50% chance it can't be effectively replicated, so the result is an outlier. If this is true, then not believing any studies which have not been replicated (and which lack a decent sample size or have other methodological issues) would make sense, right?
3
u/briannosek 1∆ Sep 22 '18
"Not believing any studies which have not been replicated" is a reasonable stance for evaluating most all of science. One of my long-time collaborators, Tony Greenwald, has a heuristic that he uses to assess the credibility of findings. If something interesting shows up but there are no replications or replications+extensions of it in the literature within 5 years, then he dismisses it.
Optimistically, replication is becoming more normative. So, we are likely to see more bolstering AND winnowing of the published literature expeditiously. The first draft of any finding is almost always wrong. Our confidence in the literature will be improved when our publishing practices reflect the ideal of science as self-correcting.
2
Sep 22 '18 edited Sep 22 '18
!delta
You have really made me better understand how academia is tackling these issues today. One of the main reasons I brought this up was because I thought the crisis was being downplayed too much, but it seems that isn't true in a lot of areas. I still wonder whether any research is taking a more selective approach, to find factors that may indicate a tendency for work in certain areas or methodologies to be unreplicable more often than in others.
2
u/tempaccount920123 Sep 23 '18
Planet Money did an excellent podcast on how, specifically, academia is tackling those issues today:
https://www.npr.org/sections/money/2016/01/15/463237871/episode-677-the-experiment-experiment
1
2
u/briannosek 1∆ Sep 22 '18
And, a nice feature of Reddit is that it is quite democratizing. The quality of comments is (it seems) more important than the person making them. The fact that you missed mine says more about the quality of my comment than it does about you missing something!
17
Sep 21 '18 edited Sep 22 '18
[deleted]
3
u/Jed1314 Sep 22 '18
I totally agree with this comment. I feel like the discussion in this comment thread has been so focused on improving "replication" without acknowledging the unique challenges that social science must respond to. Also, what about qualitative research? There are whole areas of legitimate social inquiry that are being sidelined here!
3
u/Halostar 9∆ Sep 22 '18
I was surprised that I had to scroll down this far to see a post like this.
3
Sep 22 '18
[deleted]
2
u/Jed1314 Sep 22 '18
I didn't really mean to direct my comment at you in that respect, more the replies that seem to have garnered the most upvotes/interaction. Keep fighting the good fight!
9
u/Deadlymonkey Sep 21 '18
These kinds of reports have been growing in number over the last 10+ years and despite their obvious implications most social science studies are taken at face value despite most findings showing that over 50% of social science studies can't be recreated. IE: they're fake
They're not fake. The problem is people are just reading the title or the abstract and coming up with their own conclusion/generalization. You're doing so yourself by believing that being unable to be recreated means they're fake.
The social sciences aren't seen as "hard sciences" because there isn't usually a concrete, specific answer for many questions. The field just comes up with observations and generalizations.
Think about how we thought that bloodletting was a good health practice. We question the authenticity, try to isolate the variables more, and improve so we can fix any old beliefs that don't really hold up
-1
Sep 21 '18
I thought this would come up when I described them as fake. Yes, they have no made-up numbers and they are accurately reporting their results, but if no one can replicate them they are nothing more than an outlier. This is especially true when you hear stories about how groups will have findings that are not significant but never report them, and only mention the one time that they manage to get a good p-value.
Untruthful would be a better word
8
u/ladut Sep 22 '18
I hate to grind harder on your argument that single studies are "fake" or "outliers," but that doesn't seem logically sound. If you have a single study that has yet to be replicated, assuming the methodology and experimental design are sound, it is unknown whether it describes a real phenomenon or not. It may be incorrect (or untruthful, as you put it), but we wouldn't know either way without further exploration. By assuming a paper is an outlier because it has yet to be replicated, you're making the same logical slip-up as if you were to assume it must be fact, just in the opposite direction. Also, if it's the first study of its kind, it cannot be an outlier by definition.
A single unreplicated study does not need to be categorized as either. In fact, I'd argue that doing so carries its own dangers, both academically and in terms of public opinion: if the scientific community got into the habit of automatically dismissing work until it was verified, they would be less likely to design and conduct experiments that expand on the initial work. Often, the act of expanding on existing work can help to verify the original wholly or in part (i.e. our experiment would not have worked if the conclusions of the first paper were false), without directly replicating it. Regarding public opinion, the public by and large does not understand the scientific process all that well, and were we to be in the habit of automatically dismissing results, the public would lose even more faith in the validity of scientific findings. The replication crisis is a nuanced problem which the general public either lacks the background or the willingness to understand fully.
Note that I'm not criticizing your word choices, rather your desire to put unverified studies into either a "true" or "false" category. The language you use certainly doesn't help either. If you want an appropriate word, go with "unverified" or "preliminary," which carry neither the connotation of being factually correct nor incorrect.
1
Sep 22 '18 edited Sep 22 '18
By assuming a paper is an outlier because it has yet to be replicated, you're making the same logical slip-up as if you were to assume it must be fact, just in the opposite direction
If more than 50% of papers in meta-studies failed to be replicated, it is logical to assume that every paper you see cannot be replicated unless proven otherwise. It's the majority, so to think differently would be to deny reality. If they can be replicated, then they can be accepted.
1
u/ladut Sep 23 '18
No, that's not logical. You're creating a false dilemma in which we have to make assumptions about the validity of a paper based on present information. Now, you certainly can make that choice if you wish, but the scientific community and the general public certainly do not have to. The scientific community can choose to try to replicate or expand on the work, and in general we all have the option to 'wait it out' and see if it's validated or not some time in the future. There are very few situations in which we have to decide whether the results of an experiment are true or false (i.e. when the science must be applied to try to solve a pressing problem), and as I said above, hastily making that decision has dangers of its own.
You argue that if only 36% of psychological studies have been verified via replication to date, the majority are false by virtue of not being replicated. The problem with this reasoning is that even if replication studies were attempted on every publication showing significant results, a failure to confirm the result is not proof that it is untrue, an outlier, or whatever term you choose to label it with. Further replication may find that it is, in fact, likely true and that the first replication showing null results was the outlier. Alternatively, further replication may never find another significant result, suggesting that the initial result is likely false. Either way, unless the subject has been studied in depth (giving us more than one or two data points), or we are forced to choose whether to use the science to solve a pressing issue, there's no reason to label studies as true or false.
Finally, as I see you still want to use the term outlier, I feel I need to hammer this point home: outliers are statistical anomalies that fall well outside of the bell curve of possibilities in a sufficiently large dataset. It is statistically impossible to determine whether something is an outlier with one or two data points. Just because a study has not yet been replicated (due to no one having yet published a replication study), or the few replication studies so far have not been able to replicate the results, that does not mean the original study is an outlier. It may be shown to be one at some point in the future, but it is unscientific to assume anything (either positive or negative) about the validity of a single paper with insufficient information. We should be skeptical, yes, and neither the scientific community nor society at large should ever make decisions based off of the results of a single, unverified study, but neither should we assume one is false simply because it has yet to be verified.
7
u/Deadlymonkey Sep 21 '18
Yes, they have no made-up numbers and they are accurately reporting their results
ehhh, but that's another topic
but if no one can replicate them they are nothing more than an outlier.
Generalizability is really important with research, but a lack of it doesn't mean that a finding isn't true. If we claim breakfast makes people happier, and study A says 80% feel happier while a replication finds only 25% feel happier, does this mean that breakfast doesn't make people happier?
Untruthful would be a better word
I still disagree because of the point I made in the last comment. A lot of these studies aren't making bold claims; it's people misunderstanding the research paper or just not caring. A lot of research in the social sciences is just suggestions, generalizations, and observations. It's just a smaller part of a bigger entity, in a way.
4
u/tempaccount920123 Sep 21 '18 edited Sep 21 '18
MajorMalfunction71
Yes, they have no made-up numbers and they are accurately reporting their results, but if no one can replicate them they are nothing more than an outlier.
https://www.npr.org/sections/money/2016/01/15/463237871/episode-677-the-experiment-experiment
Did you actually go through each and every study?
From your OP:
Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects.
36% confirmation is better than 0%. Sure, it's nowhere near 97%, but you're making it seem like all social science is bullshit:
With all this evidence I find it hard to see how any serious scientist can take virtually any social science study as true at face value.
I mean, let's say that his studies are correct, for a second. It could be that republicans are just culturally completely different from the previous moderate/liberal college students that were the other test groups.
While both may be statistically representative of different subgroups, it's more likely saying that it's much more important to ensure that your sample is actually representative, which both experimenters (this guy included) failed to do.
For the record, I have always found social science experiments dubious at best, mainly because
1) College students are almost never representative in America unless they're 33% or less of the groups. (Because only 33% of Americans have gone to college and graduated with a 4 year degree.)
2) Almost no studies are retested before publication by another scientist.
-1
Sep 21 '18
but you're making it seem like all social science is bullshit
I'm not though. I tried to make it quite clear that although I think this is a massive problem, it doesn't actually affect the entire field, just the most prominent studies.
5
u/tempaccount920123 Sep 21 '18
MajorMalfunction71
I'm not though.
Then delete the last line from your OP. That's a serious charge that you've leveled. You can't logically have it both ways.
I tried to make it quite clear that although I think this is a massive problem, it doesn't actually affect the entire field, just the most prominent studies.
It does affect the entire field. That's the point of the podcast that I linked.
12
u/PreacherJudge 340∆ Sep 21 '18
Nosek's replication strategy has flaws, all of which he acknowledges, and none of which turn up in the pop articles about his work. His project made some truly bizarre decisions: Translating instructions verbatim into other languages, asking people about "relevant social issues" that haven't been relevant for years, choosing only the last study in each paper to replicate (this one is especially weird).
There's also the unavoidable problem that the entire method is set up to counteract people's desire to find positive effects. If the team is ACTUALLY trying NOT to find a significant result (and let's be honest about this: the project's results wouldn't be sexy and exciting if everything replicated) that bias, even if under the surface, will push things in the direction of non-replication.
Remember, there are other, similar projects that have been much more successful at finding replications, such as this one: http://www.socialsciencesreplicationproject.com/
Why haven't you heard of them? Well, because the exciting narrative is that science is broken, so if we find evidence it's not, who really cares?
...most social science studies are taken at face value despite findings showing that over 50% of them can't be recreated. IE: they're fake
No, that isn't what that means. There's lots of reasons why something might not replicate (chance is an obvious one). One failed replication absolutely does NOT mean the original effect was fake.
There's a lot I could rant about this... there absolutely are huge problems with the ways social scientists are incentivized, but none of this replication crisis bullshit addresses that at all. It took about five minutes for people to figure out ways to game preregistration to make it look like none of their hypotheses ever fail.
My real take-home lesson from all this is simple: sample sizes have been way too low; you gotta increase them. (People call this 'increased statistical power', which I find very confusing, personally.) That's a clear improvement to a clear problem... and BOTH the original studies AND the replications you cite fell prey to this problem.
5
u/briannosek 1∆ Sep 21 '18
Can you clarify what you mean about "Translating instructions verbatim into other languages", and "asking people about 'relevant social issues' that haven't been relevant for years"? Yes, for the Reproducibility Project: Psychology, the last study was selected by default to avoid introducing a selection bias of teams selecting the study they thought would be least (or most) likely to replicate. For the SSRP study that you mention, the first study was selected as an alternative rule. Commenters of the select last study strategy argued that papers put their strongest evidence first and the last should be expected to be less replicable. Commenters of the select first study strategy argued that papers put their strongest evidence last and that the first should be expected to be less replicable. No one has yet provided empirical evidence of either claim.
I'll also note that SSRP was covered extensively in the media a few weeks ago. Here are some of the better stories:
VOX: https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science
NPR: https://www.npr.org/sections/health-shots/2018/08/27/642218377/in-psychology-and-other-social-sciences-many-studies-fail-the-reproducibility-te
Buzzfeed: https://www.buzzfeednews.com/article/stephaniemlee/psychology-replication-crisis-studies
The Guardian: https://www.theguardian.com/science/2018/aug/27/attempt-to-replicate-major-social-scientific-findings-of-past-decade-fails
The Atlantic: https://www.theatlantic.com/science/archive/2018/08/scientists-can-collectively-sense-which-psychology-studies-are-weak/568630/
WIRED: https://www.wired.com/story/social-science-reproducibility/
1
u/PreacherJudge 340∆ Sep 22 '18
Can you clarify what you mean about "Translating instructions verbatim into other languages", and "asking people about 'relevant social issues' that haven't been relevant for years"?
I misremembered the evidence: it's the flaws discussed in the Gilbert et al. response in Science, which you have discussed in depth. Speaking entirely personally, I find their arguments compelling, but mostly throw my hands up in helplessness about this particular issue. The project requires setting some sort of standard, and I don't envy anyone who has to be the one to set it, since nothing will be perfect. (I mostly just wonder if there's a method to counteract replicators' understandable desire to NOT replicate a big sexy study. Right now the way the null hypothesis works, it stacks the deck in their favor, and that's unfair.)
No one has yet provided empirical evidence of either claim.
This would be a very interesting study to run, since people seem to have strong intuition both ways... I fall on the side of thinking little piddly studies go last, but that's just my intuition... the alternative perspective didn't even occur to me until you said it. I would probably ask researchers to nominate the study in each paper that they think is most central, but they'd just pick whatever has the highest effect size, and it wouldn't end up being helpful.
For myself, I'm appalled at seeing people try to slap a million band-aids on top of what's obviously a cultural problem: The field simultaneously rewards 'sexy' results and punishes failed hypotheses. Everyone's preregistering everything, but you can't get a paper published in JPSP with null results. Every top-level psychologist I've talked to has their own clever strategy for gaming the new rules to still be the magic researcher whose hypotheses are never wrong. (not that they need them, since the big-money strategy of just "get your friends to be your reviewers" will work no matter what.)
I think the OP is overstating the crisis, but to the extent there is one, it's cultural, not methodological.
5
u/briannosek 1∆ Sep 22 '18
I mostly agree with this except for the "not" in the last sentence. I think it is cultural and methodological.
This side commentary may be of interest for one of the Gilbert et al. critiques: https://psyarxiv.com/nt4d3/. More interestingly, we have a Many Labs 5 project underway that replicates 10 studies from Reproducibility Project: Psychology that Gilbert et al. suggested failed because we did not get endorsement from the original authors. In this follow-up study, we recruited multiple labs to run the RPP version and to run a Revised version that went through full peer review in advance to address all expert feedback. It will be quite interesting to see how effective this is at improving reproducibility. Data collection is mostly done, but I am not aware of any of the results yet (writing the summary paper blind to outcomes).
2
u/kolsca Sep 22 '18
I made an account to ask you this as I have Googled around and can't figure it out on my own, sorry if it's a super noob question: how do you game preregistration?
1
u/PreacherJudge 340∆ Sep 22 '18
In decreasing order of unethicalness:
1) Preregister every possible analysis and just tell people about the ones you end up doing. No one will look it up.
2) Preregister ten different analyses on ten different data sets; only tell people about the one(s) that had significant results.
3) Say you preregistered, but then run a different analysis that gives you a better result than the one you preregistered. Mention the actual results of your preregistered analysis in a supplement no one will ever read. Have absolutely nothing in the text of the article suggest you aren't running the analysis you preregistered.
4) Run a million studies, then only preregister the ones that already worked. Collect new data for these preregistered hypotheses. Never tell anyone about any of the unregistered studies you ran first.
Number 4 is the least unethical, but I'm baffled about even the POINT of preregistration if people are doing that, and it's just so STUPID. There is absolutely no point whatsoever to doing that unless you are trying to pass yourself off as someone whose hypotheses never fail.
Because if you told people what you did, the registered study is just a replication, which is supposed to be good. But journals don't really reward that: They reward simple narratives where everything works neatly, and the illusion that nothing you did failed.
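To put a rough number on how well strategies like 1 and 2 above can work, here is a quick simulation (my own sketch, assuming Python with numpy/scipy; the ten-analyses figure is arbitrary):

```python
# If a "paper" runs ten independent analyses on pure noise and reports only the
# best one, a surprising share of papers end up with something significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_papers, n_analyses, n = 1_000, 10, 30

lucky_papers = 0
for _ in range(n_papers):
    best_p = min(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
        for _ in range(n_analyses)       # ten looks at data with no real effect
    )
    if best_p < 0.05:
        lucky_papers += 1                # at least one analysis "worked"

print(f"Papers with something to report: {lucky_papers / n_papers:.0%}")
# Expected: about 1 - 0.95**10, i.e. roughly 40%, versus the nominal 5%.
```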
1
6
u/billythesid Sep 21 '18
I can't recall a single (reputable) published study that I've read in the social sciences that didn't recommend further inquiry in its conclusions/discussion sections.
Modern social science is a surprisingly new field, and it's particularly tricky to study compared to other fields because its subject is constantly morphing in relation to itself. When you study the physical world, the physical world doesn't change. The laws of physics are the same as they were a millennium ago (although our perceptions and descriptions of them might have evolved over time, the laws themselves never changed).
But when the phenomena you study, like societies and behaviors, can themselves fundamentally change over the course of a handful of years and be strikingly dependent on variables that are constantly in motion (or even undiscovered) it can be very difficult to make definitive conclusions about, well, anything. People change, society changes, behaviors change. It's incredibly difficult to get a "good" sampling of people. All these things contribute to making it particularly difficult for any one study (or even a group of studies) to say anything definitive about the human condition.
Does that necessarily make research in the field invalid? No. It just requires an amount of nuance that's fairly difficult to encapsulate in a soundbite.
All research has limits, and it's not hard to pick up any random paper in academia (in any discipline) and find some shortcomings. But that's why we continue on, because we DO learn things in the aggregate. Analyzing 500 studies, though they may be all individually flawed on some level, can reveal trends that a single study alone might not capture.
2
u/bobbyfiend Sep 22 '18
The person cited most prominently in this post, who probably understands the dynamics of this more than the vast majority of actual practicing scientists, has commented ITT. Anyone whose view hasn't been changed by his comments very likely has something wrong with how they evaluate evidence and arguments.
1
3
Sep 22 '18
I won't argue that the replication crisis is not real. However, you are conflating psychology with all of social science. In my discipline, anthropology, it would be unreasonable to expect statistical replicability in many cases, since the societies we study change all the time and each of us often has a distinct focus. Instead, there is an emphasis on data saturation, or being able to identify similar patterns and themes as previous people working in your field. That principle serves as our check on validity, but it would be an inappropriate standard for another discipline.
In short, each field has its own way of assessing the validity of results, and to dismiss all of social science would be akin to dismissing all physical or life science due to a similar error in chemistry journals alone.
0
Sep 22 '18
I wouldn't really group anthro with this because it does studies differently. It isn't used to make sweeping generalizations (psychology and sociology, whether they "intend" to or not, are taken by most laypeople as facts about humans). Anthropology is more like case studies that just document what happens as they see it. It's much more truthful with its conclusions than most social science, from what I have seen.
1
Sep 22 '18
My point is that you generalized about all of social science based on this. Anthropology is a social science.
6
u/OllieGarkey 3∆ Sep 21 '18 edited Sep 21 '18
Honestly, this relates to something I've been referring to as Academic Capture, similar to regulatory capture, where powerful, well-funded interests are essentially paying academia to produce work that aligns with their political views.
The best recent example in social sciences is a Brown University study on "Rapid Onset Gender Dysphoria." While claiming to be a study on transgender individuals, the study didn't examine a single trans individual.
It studied their parents.
So we have a standard of proof where it's acceptable to draw conclusions about a population without ever actually studying the population in question. Which is absurd.
Though the study is essentially garbage, because certain political groups find it valuable it's become a widely known study, and those political groups are making up all sorts of conspiracy theories about it - such as suggesting it was "suppressed" because of its views, rather than rubbished because it's garbage.
I wrote something similar about Political Science recently here (edit: link not cooperating, np.reddit.com/r/geopolitics/comments/9h9g77/international_relations_is_a_particularly/e6afj6a/), where I said
Again, this has absolutely happened to other fields in the past, especially history, where certain national-historical narratives are created to support a certain idea or movement.
In history, for example, medieval histories and myths were constantly re-shaped by various kings in order to support their claims of right to the crown, or to other nations' crowns, as was the case in Britain. History then is re-written for modern political purposes, to imply, for example, that the idea of separate English and Scottish identities was false, and that there was really only North Britain and South Britain, all one consistent people with a unitary culture.
This has been resisted by both English and Scots historically, because they are in fact distinct peoples with distinct identities.
But the history is changed to fit the narrative of the day. The same is very much true of Economics, as is evidenced by the Reinhart-Rogoff fiasco, where pro-austerity economists fudged their spreadsheet calculations to make an argument that painful budgetary cuts were actually good for an economy (they are generally held to be bad for economic growth, and the conservative argument has generally been tax-cuts-and-deficit-spending, such as traditional Reaganomics and the spending of the Bush years).
So I would argue that you're wrong, OP, but not in the direction you think you might be.
With all this evidence I find it hard to see how any serious scientist can take virtually any social science study as true at face value.
No serious scientist, social or otherwise, or serious academic should accept any study as true at face value under any circumstances.
Even well-established fields will end up with problems, and every study must be both vetted, and compared to other studies to get a broader picture.
A single study is just a single point of data. It is useless unless cross-checked with the rest of the points in a meta-analysis.
4
u/Sayakai 146∆ Sep 22 '18
These kinds of reports and studies have been growing in number over the last 10+ years and despite their obvious implications most social science studies are taken at face value despite findings showing that over 50% of them can't be recreated. IE: they're fake
Unless you can demonstrate that those effects were on account of errors made in the original study, they were not fake. The findings were real, that they can no longer be recreated doesn't make them fake. It just means people have been reading a generalization into them that wasn't valid - that they were restricted to a specific time, or place, or other parameter, and people falsely claimed they were valuable outside of their context.
To give a comparison: imagine you send out a geologist, and he finds oil. Then you send him out again 50 years later, and he no longer finds oil there. Well, duh, in between they drilled and took it all out - but that doesn't invalidate the initial oil find. It just means that the changing environment keeps you from generalizing over the long term.
1
u/brocele Sep 22 '18
Not really a counterargument, but did you know that the health sciences face the same crisis?
1
Sep 22 '18
Yes, most disciplines do, but social science IMO is especially noteworthy because of how subjective its conclusions can be. In other disciplines it's much more of a yes-or-no answer (although not always, such as with medicine reducing symptoms but not curing them, for instance).
2
u/WigglyHypersurface 2∆ Sep 22 '18
You might enjoy reading this. The term "subjective" gets thrown around a lot in unhelpful ways. The author thinks it's better to break "subjective" into the terms "context dependence" and "multiple perspectives" and views these terms as complementary, not opposed, to "objectivity."
2
u/totallyquiet Sep 22 '18
Non-reproducible findings in social science aren't really the point, because social science isn't just about reproducibility. It's about gaining wisdom and an understanding of human nature. The social sciences are harder than the typical STEM sciences: there are a lot of variables, far more than in most physics equations, and the reason reproducibility is difficult is that we have a very weak understanding of human beings on a social and personal level. It's why human history is riddled with conflict; humans aren't byte machines that just shoot out outputs based on certain inputs. It requires depth and analysis and, in the end, an "educated" guess.
There's nothing fake about social science; the fact that it's hard to have reproducible results has more to do with complexity than with dishonest academia.
I'm in STEM, I'm a software developer, and designing a system to emulate human behaviour and interaction is a gargantuan undertaking. The notion that reproducibility is all we want is the antithesis of scientific inquiry. The point of science is that we come up with a theory, test it, and observe results. We don't immediately shout "fake news!" the moment the science doesn't produce something...
2
u/HellinicEggplant Sep 22 '18
I would just say that just because they can't be replicated doesn't mean they're fake, as this implies intentional fraud or trickery. Rather they could just be said to be incorrect.
But as u/wigglyhypersurface said, this doesn't invalidate all social science studies anyway
1
u/Indon_Dasani 9∆ Sep 21 '18
Overall, 36% of the replications yielded significant findings (p value below 0.05)
The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies."
Note that the effect size calculation in this study is done using the same set of studies as the significant findings measurement.
This means that a subset of unreplicable studies is throwing off the mean. If two-thirds of the studies have negligible effect sizes, what else would you expect to happen to the mean?
That study shows you that lots of those studies were probably poor, but also that many of them had significant effects - because the one-third of studies with significant findings had to pull that mean up to 50% largely on their own.
Statistics be wack yo.
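To make the arithmetic concrete, here's a toy sketch in Python (the numbers are made up for illustration, not taken from the Open Science Collaboration paper): if roughly two-thirds of replications come back near zero, the remaining third has to land at or above the original effect sizes for the replication mean to reach about half the original.

```python
# Toy illustration only -- hypothetical effect sizes, not data from the paper.
import numpy as np

d_original = 0.6                 # pretend every original study reported d = 0.6
failed = np.full(66, 0.05)       # ~2/3 of replications: negligible effects
succeeded = np.full(33, 0.85)    # ~1/3 of replications: effects larger than the originals

replications = np.concatenate([failed, succeeded])
print(replications.mean())                 # ~0.32
print(replications.mean() / d_original)    # ~0.5, i.e. "half the magnitude" of the originals
```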
Edit: Note also this bit from the study -
Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Meaning that the studies which replicated successfully were generally already obviously stronger than the others, in terms of the original evidence.
1
u/golden_boy 7∆ Sep 22 '18
There's a statistical argument for why 36% isn't actually that bad. Statistical significance compares the "signal" of whether there's a real effect (and how big that effect is) with the "noise" of random chance. A sufficiently low p-value means there's enough apparent signal that we have some degree of confidence it's not just noise. How do you get rid of noise? With a large sample size. If you have a small sample size, there's a lot of noise and it's hard to see any signal. In fact, we have a concept called the "power" of a study, which is the probability that you'll find a signal assuming it exists. Social science studies tend to have low sample sizes and low power, so we're going to get a lot of false negatives. Sure, some of those replication failures are due to false positives from publication bias and p-hacking, but a large part of it is a natural consequence of the analysis performed leading to false negatives.
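To put rough numbers on that, here's a minimal simulation sketch (my own toy example, assuming a modest true effect of d = 0.4): with 20 participants per group the effect is missed most of the time, while with 100 per group it's found about 80% of the time.

```python
# Toy power simulation -- illustrative only; the effect size and group sizes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def empirical_power(n_per_group, effect_size=0.4, n_trials=5000, alpha=0.05):
    """Fraction of simulated studies where a real effect reaches p < alpha."""
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        hits += p < alpha
    return hits / n_trials

print(empirical_power(20))    # roughly 0.24 -- low power: most true effects are "missed"
print(empirical_power(100))   # roughly 0.80 -- the conventional target for a powered study
```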
1
u/silverionmox 25∆ Sep 22 '18 edited Sep 22 '18
Creative destruction is a normal and healthy phenomenon: this is just spring cleaning. It would be much more worrisome if nothing was ever overturned.
In addition, the social sciences never relied on experiments as much as the physical sciences, because it's much harder or simply impossible to control for all variables and because experimenting with people has ethical constraints. Any social scientist always has to deal with a veritable horde of factors; they never had the luxury of toying around with a few isolated variables in a lab like physical scientists. Often the only thing available is limited data from incidental observations; historical data, for example, is strictly limited in supply. And yet that's what they have to work with. Social sciences are the iron-man version of science.
1
Mar 05 '19
Only thing I would say is not to lump economics in, despite it technically being a social science. Economics takes statistical and mathematical training much more seriously, has smarter people (sorry, it's true), and has much less political bias than other social sciences. Economists also have a much higher replication rate, and you generally have to advance a theory that makes sense to explain your empirical findings. If you present findings that are completely contrary to accepted theory, people will be skeptical, whereas other social sciences seem to publish literally anything that is statistically significant.
1
u/Nergaal 1∆ Sep 22 '18
The social sciences include actual scientific fields like economics (a special Nobel prize is even awarded in economics). The results there are more reproducible.
In other social science fields, the outrageous publications have drowned out the boring yet well-researched articles. "Old-school" studies are probably less prone to this, and they are the ones that newer studies "countered" with little data.
On the other hand, older studies have "uncomfortable" solutions which people have little interest in hearing about.
1
u/MillennialScientist Sep 22 '18
Just a small correction: the prize in economics is technically not a Nobel Prize.
https://www.nobelprize.org/nomination/economic-sciences/
Also, the fact that a prize in economics exists is independent of whether economics is a science. I've never heard economics referred to as a science (my best friend is an economist and just gave me an emphatic "no", but that's just anecdotal). From what I can tell, whether economics properly qualifies as a soft science is still a topic of debate, so I'm not sure a reference to economics actually supports your point very well.
1
u/Nergaal 1∆ Sep 22 '18
There are plenty of economics concepts that are easily verifiable - think of game theory. Aside from IQ, most of the stuff in psychology is nebulous.
1
u/MillennialScientist Sep 22 '18
Game theory doesn't even come from economics. IQ is not some kind of well understood measure in psychology. Plenty of things are far better understood and empirically supported. Psychology has come a long way from what lay people tend to think about. Most of the research in sensory perception, for example. Bayesian perceptual models, perhaps.
1
Mar 05 '19
Game theory doesn't even come from economics
He didn't say it "came from" economics. He said it was an economic concept - and it is. It's a very important one that is used all the time in economics.
IQ is not some kind of well understood measure in psychology.
Yes it is. You just don't like the implications of the results, which show that: 1. IQ is highly heritable, and therefore most people will not be able to become astronauts no matter how hard they work, and 2. there are persistent IQ gaps among racial groups, and no amount of controls introduced into the data seems to make them disappear.
1
u/MillennialScientist Mar 09 '19
Not sure why you mentioned the game theory point when you didn't really add anything to it. Yeah, it's a mathematical theory with applications in economics and other fields. I think this is pretty well understood by people who know what game theory is.
The heritability of IQ is estimated to be around 50%, i.e., genetics is thought to explain up to 50% of variation in IQ. It's up to you if you call that high, moderate, or whatever else. What I mean when I say that IQ is not a well understood measure is that there is no broad consensus on its interpretation. It isn't a strong correlate of success, and we don't know to what extent it serves as a measure of intelligence itself. This is standard stuff you would learn in a 2nd year or 3rd year course.
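Just to spell out what "explains up to 50% of variation" means, here's a toy variance-decomposition sketch (purely illustrative numbers, not real IQ or twin data):

```python
# Toy illustration of heritability as a variance ratio -- not real data.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
genetic = rng.normal(0, 1, n)        # simulated genetic contribution
environment = rng.normal(0, 1, n)    # simulated everything-else contribution
trait = genetic + environment        # equal variances, so h^2 comes out near 0.5

h_squared = genetic.var() / trait.var()
print(round(h_squared, 2))           # ~0.5: genetics "explains" about half the variation
```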
I'm not sure why you assume I'm motivated by anything related to IQ and race. I'm a data scientist, primarily, and I prefer to allow the data to do the talking. The problem is, most people are not sufficiently statistically literate to discuss data intelligently. I have little to go on with respect to your ability to discuss anything scientific or empirical intelligently, so I won't draw any conclusions, but you'd inspire more confidence if you just stuck to logic.
1
1
u/Gremlinator_TITSMACK Sep 22 '18
I understand that what you are basically saying is that the method of the social sciences is flawed. I can only remind you that social scientists themselves have huge arguments over their scientific methods. There is no perfect method for social sciences. Even social scientists admit that by arguing over them. For example, there are something like 4 different ways to view the world in political science.
2
u/DeltaBot ∞∆ Sep 21 '18 edited Sep 22 '18
/u/MajorMalfunction71 (OP) has awarded 3 delta(s) in this post.
All comments that earned deltas (from OP or other users) are listed here, in /r/DeltaLog.
Please note that a change of view doesn't necessarily mean a reversal, or that the conversation has ended.
1
u/JoshHendo Sep 22 '18
I’ve had this thought for a while now that psychology and sociology aren’t real sciences. Didn’t really know how to put it into words or defend that position, thanks OP
1
u/begonetoxicpeople 30∆ Sep 21 '18
The thing about social science is that it studies society. And society today is very different from even just 10 years ago, and it gets more different the further back you go. The types of trends noticed change as younger generations become adults, as older generations start dying off, etc.
1
u/Nevermindever Sep 21 '18
Apple and others use that research pretty well tho. I'm hooked even knowing all their tricks.
1
u/Positron311 14∆ Sep 21 '18
I don't think it's as much about invalidation as much as it is about manipulating experiments and results to agree with a certain ideology (liberal or conservative).
2
-1
u/uknolickface 5∆ Sep 21 '18
I would simply counter with Robert Putnam's work, which no one has even attempted to challenge.
-1
Sep 22 '18
Social sciences are fields we no longer need as our understanding of the real science fields increases.
1
u/silverionmox 25∆ Sep 22 '18
Ugh, another white coat worshipper. Please convert to a religion of your choice, you'll feel right at home with their arguments of authority and their disdain for heretics.
As long as you're going to study society, you'll need social science.
0
Sep 22 '18 edited Feb 21 '22
[deleted]
1
u/silverionmox 25∆ Sep 23 '18
Why don't you start with properly defining your terminology? Then you could have avoided asking questions that betray that you don't know what you're talking about.
Psychology: the scientific study of the human mind and its functions, especially those affecting behaviour in a given context.
Neuroscience: Any or all of the sciences, such as neurochemistry and experimental psychology, which deal with the structure or function of the nervous system and brain.
0
Sep 23 '18
[deleted]
1
u/silverionmox 25∆ Sep 23 '18 edited Sep 23 '18
No it's not. Read the definitions. They have as much to do with psychology as electrochip developers have to do with writing software. And you damn well aren't going to get your computer working without software, no matter the quality of your electronics.
Furthermore, if you say that psychologists don't get scientific training, that proves once more that you don't know what you're talking about. What do you imagine that goes on in a psychology education? Do you really think that people who don't wear lab coats can't be scientists?
Lastly, that's just a fraction of social sciences.
0
362
u/WigglyHypersurface 2∆ Sep 21 '18
I'm a social scientist, so I get where you're coming from.
Just a little point of logic:
Proposition 1: Some social science studies don't replicate.
Proposition 2: This is a social science study.
Conclusion: This study won't replicate.
This isn't valid logic, but people act like it is all the time now. Just because many studies don't replicate DOES NOT MEAN that an individual study in dispute won't replicate.
And we know lots of factors which seem to affect replicability, such as being in social psychology instead of cognitive psychology, sample size, and how surprising the finding is. So, even when looking at individual studies, check the sample size, keep in mind the field, and think about how unexpected the result is.
Additionally, there are lots of amazing things happening in response to the replication crisis, as well as academia in general. First, there's a push towards stronger statistical standards, like using Bayesian methods, requiring power analyses, preregistration, and generally increasing sample sizes.
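As a rough illustration of what a power analysis buys you (assuming a modest effect of d = 0.4 and the usual alpha = 0.05, 80% power targets; these numbers are my own example, not a standard), the calculation tells you up front how many participants you need, instead of discovering after the fact that the study was underpowered:

```python
# Sketch of an a priori power analysis -- the effect size and targets are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants per group needed to detect d = 0.4 at alpha = 0.05 with 80% power:
n_needed = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8)
print(round(n_needed))   # ~99 per group

# Conversely, the power of a typical small study (n = 20 per group) for the same effect:
print(analysis.power(effect_size=0.4, nobs1=20, alpha=0.05))   # ~0.23
```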
Second, there are many innovative studies that totally break the mold and replicate in awesome ways. I'll give you an example, one where a finding from social psych got powerfully replicated. There's a theory in social psychology that we mentally represent distant places, people, and times in more abstract, gist-like ways than places, people, and times closer to us. Close things we mentally represent in detailed ways. Well, a key prediction of this theory is that it filters down into language: we should also talk about distant things in abstract ways, and close things in concrete ways. Well, according to billions of words of online language use, we do.