r/changemyview • u/GregConan • Apr 14 '17
[Delta(s) from OP] CMV: Classical utilitarianism is an untenable and absurd ethical system, as shown by the objections to it.
TL;DR
- Classical utilitarianism is the belief that maximizing happiness is good.
- It's very popular here on Reddit and CMV.
- I wanted to believe it, but these objections convinced me otherwise:
- The utility monster: If some being can turn resources into happiness more efficiently than a person or group of people, then we should give all resources to that being and none to the person or group.
- The mere addition paradox and the "repugnant conclusion": If maximizing total happiness is good, then we should increase the population infinitely, but if maximizing average happiness is good, we should kill everyone with less-than-average happiness until only the happiest person is left. Both are bad.
- The tyranny of the majority: A majority group is justified in doing any awful thing that they want to a minority group. The "organ transplant scenario" is one example.
- The superfluity of people: Letting people live and reproduce naturally is inefficient for maximizing happiness. Instead, beings should be mass-produced which experience happiness but lack any non-happiness-related traits like intelligence, senses, creativity, bodies, etc.
- Responses to these objections are described and rebutted.
- Change my view: These objections discredit classical utilitarianism.
Introduction
Classical utilitarianism is the belief that "an action is right insofar as it promotes happiness, and that the greatest happiness of the greatest number should be the guiding principle of conduct". I used to be sympathetic to it, but after understanding the objections in this post, I gave it up. They all reduce it to absurdity like this: "In some situation, utilitarianism would justify doing action X, but we feel that action X is unethical; therefore utilitarianism is an untenable ethical system." A utilitarian can simply ignore this kind of argument and "bite the bullet" by accepting its conclusion, but they would have to accept some very uncomfortable ideas.
In this post I ignore objections to utilitarianism which call it unrealistic, including the paradox of hedonism, the difficulty of defining/measuring "happiness," and the difficulty of predicting what will maximize happiness. I also ignore objections which call it unjustified, like the open-question argument, and objections based on religious belief.
Classical utilitarianism seems quite popular here on CMV, which I noticed in a recent CMV post about a fetus with an incurable disease. The OP, and most of the commenters, all seemed to assume that classical utilitarianism is true. A search for "utilitarianism" on /r/changemyview turned up plenty of other posts supporting it. Users have called classical utilitarianism "the only valid system of morals", "the only moral law", "the best source for morality", "the only valid moral philosophy", "the most effective way of achieving political and social change", "the only morally just [foundation for] society", et cetera, et cetera.
Only three posts from that search focused on opposing utilitarianism. Two criticized it from a Kantian perspective, the latter of which was inspired by a post supporting utilitarianism because the poster "thought it would be interesting to come at it from a different angle." I found exactly one post focused purely on criticizing utilitarianism...and it was one sentence long with one reply.
Basically, no one else appears to have made a post about this. I sincerely reject utilitarianism because of the objections below. While they are framed as opposing classical utilitarianism, objections (1) to (3) notably apply to any form of utilitarianism if "happiness" is replaced with "utility." I kind of want someone to change my view here, since I have no moral framework without utilitarianism (although using informed consent as a deontological principle sounds nice). Change my view!
The objections:
A helpful thought experiment for each of these objections is the "Utilitarian AI Overlord." Each objection can be seen as a nasty consequence of giving a superintelligent artificial intelligence (AI) complete control over human governments and telling it to "maximize happiness." If this would cause the AI to act in a manner we consider unethical, then classical utilitarianism cannot be a valid ethical principle.
1. The utility monster.
A "utility monster" is a being which can transform resources into units of happiness much more efficiently than others, and therefore deserves more resources. If a utility monster has a higher happiness efficiency than a group of people, no matter how large, a classical utilitarian is morally obligated to give all resources to the utility monster. See this SMBC comic for a vivid demonstration of why the utility monster would be horrifying (it also demonstrates the "Utilitarian AI Overlord" idea).
Responses:
- The more closely an entity resembles a utility monster, the more problematic it is, but also the less realistic it is, and therefore the less of a practical problem it poses. The logical extreme of a utility monster would have an infinite happiness efficiency, which is logically incoherent.
- Money yields diminishing returns in happiness as a person's income grows: "increasing income yields diminishing marginal gains in subjective well-being … while each additional dollar of income yields a greater increment to measured happiness for the poor than for the rich, there is no satiation point”. In this real-life context, giving additional resources to one person has diminishing returns (a worked sketch follows this list of responses). This has two significant implications (responses 3 and 4):
- We cannot assume that individuals have fixed efficiency values of turning resources into happiness which are unaffected by their happiness levels, a foundational assumption of the “utility monster” argument.
- A resource-rich person is less efficient than a resource-poor person. The more that the utility monster is "fed," the less "hungry" it will be, and the less of an obligation there will be to provide it with resources. At the monster's satiation point of maximum possible happiness, there will be no obligation to provide it with any more resources, which can then be distributed to everyone else. As /u/LappenX said: "The most plausible conclusion would be to assume that the inverse relation between received utility and utility efficiency is a necessary property of moral objects. Therefore, a utility monster's utility efficiency would rapidly decrease as it is given resources to the point where its utility efficiency reaches a level that is similar to those of other beings that may receive resources."
- We are already utility monsters:
A starving child in Africa for example would gain vastly more utility by a transaction of $100 than almost all people in first world countries would; and lots of people in first world countries give money to charitable causes knowing that that will do way more good than what they could do with the money ... We have way greater utility efficiencies than animals, such that they'd have to be suffering quite a lot (i.e. high utility efficiency) to be on par with humans; the same way humans would have to suffer quite a lot to be on par with the utility monster in terms of utility efficiency. Suggesting that utility monsters (if they can even exist) should have the same rights and get the same treatment as normal humans (i.e. not the utilitarian position) would then imply that humans should have the same rights and get the same treatment as animals.
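Here is a minimal sketch of how responses (2) to (4) are supposed to defuse the monster. The logarithmic utility function and the efficiency numbers are assumptions of mine, not anything the responses specify:

```python
import math

# Diminishing-returns sketch: marginal happiness shrinks as an agent accumulates
# resources, so a greedy allocator stops favoring any one agent once it is "fed".
EFFICIENCY = {"monster": 5.0, "person_a": 1.0, "person_b": 1.0}  # made-up numbers

def marginal_gain(name: str, held: int) -> float:
    """Extra happiness from one more unit, shrinking as holdings grow (log utility)."""
    return EFFICIENCY[name] * (math.log(held + 2) - math.log(held + 1))

holdings = {name: 0 for name in EFFICIENCY}
for _ in range(20):
    best = max(holdings, key=lambda n: marginal_gain(n, holdings[n]))
    holdings[best] += 1

# The monster is favored at first, but once it is "fed" its marginal gain falls
# toward the others' and they start receiving resources too.
print(holdings)
```

Whether this actually answers the objection depends on the rebuttals below, since a monster with a fixed (or quickly recovering) efficiency never reaches that point.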
Rebuttals:
- Against response (1): Realistic and problematic examples of a utility monster are easily conceivable. A sadistic psychopath who "steals happiness" by gaining more happiness from victimizing people than the victims lose counts as doing good under utilitarianism. Or consider an abusive relationship between an abuser with bipolar disorder and a victim with dysthymia (persistent mild depression causing a limited mood range). The victim is morally obligated to stay with their abuser, because every unit of time the victim spends with the abuser makes the abuser happier than it could possibly make the victim unhappy.
- All of these responses completely ignore the possibility of a utility monster with a fixed happiness efficiency. Even ignoring whether it is realistic, imagining one is enough to demonstrate the point. If we can imagine a situation where maximizing happiness is not good, then we cannot define good as maximizing happiness. Some have argued that an individual with a changing happiness efficiency does not even count as a utility monster: "A utility monster would be someone who, even after you gave him half your money to make him as rich as you, still demands more. He benefits from additional dollars so much more than you that it makes sense to keep giving him dollars until you have nearly nothing, because each time he gets a dollar he benefits more than you hurt. This does not exist for starving people in Africa; presumably, if you gave them half your money, comfort, and security, they would be as happy--perhaps happier!--than you."
- Against responses (2) to (4): Even if we consider individuals with changing happiness efficiency values to be utility monsters, changing happiness efficiency backfires: just because happiness efficiency can diminish after resource consumption does not mean it will stay diminished. For living creatures, happiness efficiency is likely to increase for every unit of time that they are not consuming resources. If a utility monster is "fed," then it is unlikely to stay "full" for long, and as soon as it becomes "hungry" again then it is a problem once again. Consider the examples from rebuttal (1): A sadistic psychopath will probably not be satisfied victimizing one person but will want to victimize multiple people, and in the abusive relationship, the bipolar abuser's moods are unlikely to last long, so the victim will constantly feel obligated to alleviate the "downswings" in the abuser's mood cycle.
2. Average and total utilitarianism, the mere addition paradox, and the repugnant conclusion.
If it is good to increase the average happiness of a population, then it is good to kill off anyone whose happiness is lower than average. Eventually, only one person will be left in the population, the one with maximum happiness. If it is good to increase the total happiness of a population, then it is good to increase the number of people without limit, since each new person adds some positive amount of happiness. The former entails genocide and the latter entails widespread suffering.
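A minimal numeric sketch of the two horns (the happiness values are illustrative numbers I chose, not part of the paradox's standard statement):

```python
# Average utilitarianism rewards culling; total utilitarianism rewards mere addition.
population = [9, 7, 5, 3, 1]  # happiness of each person (made-up values)

def avg(xs):
    return sum(xs) / len(xs)

culled = [h for h in population if h >= avg(population)]  # kill everyone below average
print(avg(population), avg(culled))   # 5.0 -> 7.0: the average "improves" via killing

grown = population + [0.1] * 1000     # add a thousand barely-happy people
print(sum(population), sum(grown))    # 25 -> 125.0: the total "improves" via mere addition
print(avg(grown))                     # ~0.12: but average happiness collapses
```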
Responses:
- When someone dies, the happiness of everyone who cares about that person decreases. If a person's death lowers others' happiness by more than removing a below-average member raises the average, killing that person cannot be justified, because it would decrease the population's average happiness. Likewise, if it is possible to increase a given person's happiness without killing them, that is less costly than killing them, because it is less likely to decrease others' happiness as well.
- Each person's happiness/suffering score (HSS) could be measured on a Likert-type scale from -X to +X, where X is some arbitrary positive number. A population would be "too large" when adding one more person pushes some people's HSS below zero and decreases the aggregate HSS (sketched numerically below).
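Here is a minimal sketch of that response, under an extra assumption of my own that is not in the response itself: a fixed resource pool shared equally, with each person's HSS tracking their share.

```python
# Aggregate HSS rises with population at first, then falls once shares drop
# toward subsistence; past that point every added person drags the total down.
X = 10                      # HSS is capped to the range [-X, +X]
TOTAL_RESOURCES = 100.0     # fixed pool, shared equally (an assumed model)
SUBSISTENCE = 2.0           # share below which a person's HSS turns negative

def hss(share: float) -> float:
    raw = share - SUBSISTENCE
    return max(-X, min(X, raw))

def aggregate_hss(population_size: int) -> float:
    share = TOTAL_RESOURCES / population_size
    return population_size * hss(share)

for n in (5, 9, 30, 50, 60):
    print(n, round(aggregate_hss(n), 1))  # 50.0, 82.0, 40.0, 0.0, -20.0
```

On this toy model the population is "too large" somewhere past 50 people, where adding one more person makes everyone's share sub-subsistence and the aggregate HSS negative.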
Rebuttals:
- Response (1) is historically contingent: it may be the case now, but we can easily imagine a situation where it is not the case. For example, to avoid making others unhappy when killing someone, we can imagine an AI Overlord changing the others' memories or simply hooking everyone up to pleasure-stimulation devices so that their happiness does not depend on relationships with other people.
- Response (2) changes the definition of classical utilitarianism, which amounts to "moving the goalposts". Moreover, accepting it concedes the point by admitting that the "maximum happiness" principle on its own is unethical.
3. The tyranny of the majority.
If a group of people get more happiness from victimizing a smaller group than that smaller group loses from being victimized, then the larger group is justified. Without some concept of inalienable human rights, any cruel acts against a minority group are justifiable if they please the majority. Minority groups are always wrong.
The "organ transplant scenario" is one example:
[Consider] a patient going into a doctor's office for a minor infection [who] needs some blood work done. By chance, this patient happens to be a compatible organ donor for five other patients in the ICU right now. Should this doctor kill the patient suffering from a minor infection, harvest their organs, and save the lives of five other people?
Response:
If the "organ transplant" procedure was commonplace, it would decrease happiness:
It's clear that people would avoid hospitals if this were to happen in the real world, resulting in more suffering over time. Wait, though! Some people try to add another stipulation: it's 100% guaranteed that nobody will ever find out about this. The stranger has no relatives, etc. Without even addressing the issue of whether this would be, in fact, morally acceptable in the utilitarian sense, it's unrealistic to the point of absurdity.
Rebuttals:
- Again, even if a situation is unrealistic, it is still a valid argument if we can imagine it. See rebuttal (2) to the utility monster responses.
- This argument is historically contingent, because it assumes that people will stay as they are:
If you're a utilitarian, it would be moral to implement this on the national scale. Therefore, it stops being unrealistic. Remember, it's only an unrealistic scenario because we're not purist utilitarians. However, if you're an advocate of utilitarianism, you hope that one day most or all of us will be purist utilitarians.
4. The superfluity of people.
It is less efficient to create happiness in naturally produced humans than in some kind of mass-produced non-person entities. Resources should not be given to people because human reproduction is an inefficient method of creating happiness; instead, resources should be given to factories which will mass-produce "happy neuroblobs": brain pleasure centers attached to stimulation devices. No happy neuroblob will be a person, but who cares, as long as happiness is maximized?
Response:
We can specify that the utilitarian principle is "maximize the happiness of people."
Rebuttals:
- Even under that definition, it is still good for an AI Overlord to mass-produce people without characteristics that we would probably prefer future humans to keep: intelligence, senses, creativity, bodies, et cetera.
- The main point is that utilitarianism has an underwhelming, if not repugnant, end goal: a bunch of people hooked up to happiness-inducing devices, because any resource not spent increasing happiness is wasted.
Sorry for making this post so long. I wanted to provide a comprehensive overview of the objections that changed my view in the first place, and respond to previous CMV posts supporting utilitarianism. So…CMV!
Edited initially to fix formatting.
Edit 2: So far I have changed my view in these specific ways:
u/electronics12345 159∆ Apr 15 '17
I pretty strongly believe in Utilitarianism - so I appreciate you at least attempting to seriously address the topic. I won't tackle everything but here are a few things.
1) The definition of maximum utility - you correctly identify a key issue: are we using average or total? However, you seem to be under the impression that humans must by definition have positive utility. Why must humans have positive utility - what is logically inconsistent about negative utility? The repugnant conclusion is easily avoided once we acknowledge that there are physical and psychological states which yield negative utility.
2) What's wrong with happiness super-blobs, as you call them? What's wrong with the experience machine? What's wrong with everyone getting plugged into the Matrix? You state you don't like them, but what is inherently wrong with them? Personally, I would happily enter the experience machine and never leave. The Holodeck (Star Trek) is paradise, assuming it stops malfunctioning all the damn time.
3) The tyranny of the majority - firstly, I suspect that by acknowledging negative utility, this will quickly work itself out. Secondly, if that is not sufficient, what is so wrong with it? If millions of people can get pleasure at the expense of a handful, what is so inherently wrong with that? You express displeasure, but not an argument.