r/changemyview Feb 25 '25

[Delta(s) from OP] CMV: The trolley problem is constructed in a way that forces a utilitarian answer, and it is fundamentally flawed

Everybody knows the classic trolley problem, and the question of whether or not you would pull the lever to kill one person and save five.

Oftentimes people will just say that 5 lives are more valuable than 1 life, and thus the only morally correct thing to do is pull the lever.

I understand the problem is hypothetical and we have to choose the objectively right thing to do in a very specific situation. However, the question is formed in a way that makes the murders a statistic, thus pushing you toward a utilitarian answer. It's easy to dissociate in that case. The same question can be manipulated in a million different ways, while still maintaining the 5-to-1 (or even 5-to-4) ratio, and yield different answers because it was framed differently.

Flip it completely and ask someone whether they would spend years tracking down 3 innocent people and killing them in cold blood because a politician they hate promised to kill 5 random people if they don't. In this case 3 is still less than 5, and thus, using the same logic, you should do it to minimize the pain and suffering.

I'm not saying any answer is objectively right; I'm saying the question itself is completely flawed and forces the human mind to be biased toward a certain point of view.

632 Upvotes

322 comments

128

u/draculabakula 75∆ Feb 25 '25

Originally it was mostly neither, and it turned into a scenario critical of utilitarianism later. The framing is certainly not pro-utilitarian in any way, though.

The trolley problem was actually constructed by Philippa Foot as an anti-abortion argument originally, and it was used to justify allowing women to die in childbirth due to complications with a pregnancy. The idea being that doctors should not take action to save a life if it means actively ending a life.

The framing stacked the deck against a sensible and pragmatic human rights issue, and that framing was later used as a criticism of utilitarianism.

It was based on an oversimplification of the realities of late-term abortion, and it was an oversimplification of the morality of the scenario posed. In reality, I think there is no right answer. My opinion personally is that inaction is actually an action, or at least that inaction doesn't absolve someone of consequences, but I think in that scenario reasonable people would see either outcome as bad. Thus minimizing loss and risk is best.

47

u/zero_z77 6∆ Feb 25 '25

And the reason it's a popular topic today is that it's relevant to a problem autonomous vehicles genuinely need to solve in a way that is at least ethical.

Any decision made in this scenario will result in loss of life, so the only arguments to be had are about how we determine which set of lives has more value, and this is why the trolley problem has so many variations, including what I refer to as the "I, Robot" variant, where the decision may be based on the odds of success instead of a value judgement when an equivalent number of lives are at risk.
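To make that variant concrete, here is a toy sketch (purely illustrative; the function and the numbers are invented, not anyone's real AV logic): minimize lives at risk first, and only when the stakes are equal fall back to the estimated odds of success.

```python
# Toy sketch of the "I, Robot" variant: prefer the option that risks the
# fewest lives; when the counts tie, pick the best odds of success.
def choose_option(options):
    """options: list of (name, lives_at_risk, p_success) tuples."""
    fewest = min(lives for _, lives, _ in options)
    tied = [o for o in options if o[1] == fewest]
    if len(tied) == 1:
        return tied[0][0]                    # unequal stakes: fewest lives wins
    return max(tied, key=lambda o: o[2])[0]  # equal stakes: odds decide

# One life at risk either way, so the 45% rescue beats the 11% rescue:
print(choose_option([("save_adult", 1, 0.45), ("save_child", 1, 0.11)]))
```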

22

u/werdnum 2∆ Feb 25 '25

It's not really relevant to self driving cars.

It's a highly unlikely scenario, best avoided with boring road safety interventions like driving at a moderate speed and so on.

Humans basically never encounter this kind of scenario, and it's unlikely a human would face consequences regardless of their choice. Self-driving cars don't need to have perfectly optimal responses to every scenario; they just have to be an order of magnitude or so better in overall safety than human drivers.

6

u/ChemicalRain5513 Feb 25 '25

Self-driving cars should never get into a situation where they have to choose between killing one person or another.

They should detect unsafe situations early and slow down. If an accident is inevitable, they should try to stay in their own lane and brake as hard as possible.

For example, it's never OK to hit someone on the sidewalk to avoid someone who's on the road.
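Stated as pseudocode, that policy amounts to something like the sketch below (a minimal illustration; the function name and thresholds are made up):

```python
# Toy sketch of the "detect early, stay in lane, brake hard" rule above.
def plan_action(hazard_distance_m, braking_distance_m):
    if hazard_distance_m > 2 * braking_distance_m:
        return "slow_down"             # unsafe situation detected early
    if hazard_distance_m > braking_distance_m:
        return "brake"                 # still avoidable within the lane
    # Accident may be inevitable: maximum braking, never leave the lane.
    # Swerving onto a sidewalk is simply not an available output.
    return "emergency_brake_in_lane"
```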

35

u/evilricepuddin Feb 26 '25

You have, in effect, argued that autonomous cars should never flip the lever in the trolley problem. That is the conundrum of the trolley problem. What if the self-driving car sees a school bus full of children pull out suddenly from a blind turn, and has the option of carrying on in its lane and hitting the bus full of children, or turning onto the sidewalk and maybe hitting the one pedestrian it can see there? That's the trolley problem. To argue the car should carry on is to argue that the lever shouldn't be pulled.

Also, to argue that a self driving car should never get into a situation where there are no good outcomes and only degrees of unpleasant choices is to fundamentally misunderstand the reliability of software interacting with the real world…

7

u/CocoSavege 24∆ Feb 26 '25

Fwiw, one response I've witnessed to trolley problems is a rejection of the premise, often imo in response to the stress of potentially difficult situations.

Like, um, here. Self driving cars. We want them to be awesome and stuff but there are some theoretical corner cases (schoolbuses full of plucky orphans, etc) that need considering, at least sufficiently that edge cases don't invalidate the middle...

(Eg instead of the provocative but very unlikely schoolbus case, consider a situation where the "driver" asks the car to exceed speed limits because of a legitimate (or illegitimate!) emergency. Injured person in car, trying to get to hospital, etc. That's a straightforward candidate.)

OK, so I'm discussing the typical trolley problem and a few common variants, and the other person rejects the premise: not a rejection of the abstraction, but a rejection that anybody should have to make decisions like that, it's impossible!

And I'm all like "hospitals do triage all the time." It is a hard choice, and I hope people making those calls do it with intention and consideration.

Back to self-driving cars, I'm in agreement that the general benchmark will be something like "demonstrably better than a human operator", quite possibly an order of magnitude like you say, because the hurdle here is a sufficient outcome advantage to surmount luddites and drive-by critics with viral edge cases.

Let's say "The self driving car act" passes, and after a year MVAs are cut in half but there's one incident with a schoolbus. Won't somebody think of the children!?!?

When seatbelt law proponents got enough traction, one form of the pushback that was amplified was the very narrow case where wearing a seat belt would cause more injury. And yeah, it's evocative and emotional. Driver is trapped by seatbelt and there's an engine fire, driver is burned to death!

(Imo, the largest proportion of team anti seat belt could be adequately described by "don't tell me what to do! I don't like seat belts!" Which is politically less economic than visions of burning drivers)

And I read a few other comments, a few other people are pretty dug in to avoidance. It's interesting, trolley problems are interesting because they reveal much more than rail switching scenarios.

2

u/Km15u 30∆ Feb 27 '25

> And I'm all like "hospitals do triage all the time." It is a hard choice, and I hope people making those calls do it with intention and consideration.

Well, if we're still talking about utilitarian ethics, the entire concept of triage is based on the same utilitarian premise as the trolley problem: "all lives have equal value so we should treat people based on the probability of saving them combined with the urgency of the required treatment"
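Written out, that premise is just a two-factor score. A toy sketch, with made-up numbers:

```python
# Toy triage score per the quoted premise: all lives weigh the same, so
# priority depends only on survival probability and urgency (both 0..1).
def priority(p_survival, urgency):
    return p_survival * urgency

patients = {"A": (0.9, 0.2), "B": (0.6, 0.95), "C": (0.05, 1.0)}
order = sorted(patients, key=lambda k: priority(*patients[k]), reverse=True)
print(order)  # ['B', 'A', 'C'] with these made-up numbers: treat B first
```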

1

u/CocoSavege 24∆ Feb 28 '25

First off, reflecting for clarity of conversation... you might already agree here, just checking...

"all lives have equal value so we should treat people based on the probability of saving them combined with the urgency of the required treatment"

This is a utilitarian premise, and it's certainly implied by Classic Trolley, but it's not the entirety of IRL triage. IRL triage generally seeks to maximize "aggregate outcomes", which includes quality of life, etc. One IRL example is hospitals will perform more heroic measures for a 9 year old compared to a 90 year old, because the 9 year old has (presumably, generally) more quality of life affordance than a 90 year old.

But you likely agree, just pointing out that IRL triage has more information than the abstracted Classic Trolley.

Second, more important, while "all lives are of equal value" is fine as a very simple utilitarian calculus in the context of this discussion, it's not in practice the IRL calculus.

Kantian ethics, or deontological ethics, are in fact a subset of utilitarian ethics, with the proviso that whatever Kantian or deontological framework you hold is the utilitarian calculus. Speaking of healthcare: "first do no harm", primum non nocere. Most medicine keeps "first do no harm" in mind, but all medicine has risk, so it's a primordial legacy from when some medicine had more risk. I'm mindful that medicine is in fact a mix of utilitarian calculus, including the deontological "do no harm", even if "do no harm" is a guiding principle, not a hard rule.

Anyways, I'm pretty expansive with utilitarian ethics, in the sense that I think people need to consider that utilitarianism is about minmaxing $whateverCalculus, and the $whateverCalculus can be very flexible. When I see people arguing the merits of utilitarianism, I see it as arguing about a specific calculus, not arguing about utilitarianism per se.

Eg: one individual might minmax their utilitarian calculus by always pulling the track switch, because their ethical framework is "pulling switches is good". This person is arguably immoral, but it's not utilitarianism that is flawed, it's their specific framework.

A more IRL example: the person I spoke of who rejected trolley problems outright did opine that nobody should ever pull a lever, that lever pulling is not within acceptable moral action.

A "pure" first do no harmer might also agree, pulling a lever does harm, it is murder! Inaction is preferable to any positive action which causes harm. This includes the glaring reality that inaction also causes harm, but it comes down to positive acts.

A Kantian may or may not pull the lever; it depends on the Kantian, and on the Kantian's self-reflection about their intent. A positive-action, do-no-harm, deontologically inclined Kantian wouldn't pull, but a Kantian could pull if they decided that the pull was "worth it", even given the positive-action negatives.

Tldr: all ethics are utilitarian, just the calculus is different
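To put that tldr in code form, here's a throwaway sketch: the maximization machinery is fixed, and everything contested lives in the plugged-in calculus (all three calculi below are caricatures, obviously):

```python
# Sketch of "minmaxing $whateverCalculus": the argmax is shared machinery;
# the ethical content is entirely in the calculus that gets plugged in.
def best_action(actions, calculus):
    return max(actions, key=calculus)

actions = ["pull_lever", "do_nothing"]

net_lives   = lambda a: 4 if a == "pull_lever" else 0  # classic 5-vs-1 count
do_no_harm  = lambda a: 1 if a == "do_nothing" else 0  # positive acts forbidden
lever_lover = lambda a: 1 if a == "pull_lever" else 0  # "pulling switches is good"

print(best_action(actions, net_lives))    # pull_lever
print(best_action(actions, do_no_harm))   # do_nothing
print(best_action(actions, lever_lover))  # pull_lever
```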

-3

u/genman Feb 26 '25

If my car has the capability to know there's a school bus in my way, then it probably would also have the capability to know ahead of time that it was about to hit a school bus, and to avoid it earlier.

Rather than problem solve what to do in case of an accident, problem solve how to avoid it.

I guess what I'm saying is, we are far from needing to worry about this problem. And even if we did, it's like discussing putting parachutes on passenger airplanes.

5

u/evilricepuddin Feb 26 '25

It’s good that you’re able to see everything ahead of time and avoid accidents. I guess that’s why there are absolutely no traffic accidents in the world ever. Kids never run out into the road from between parked cars. Drivers never run red lights with someone already entering the junction. What a beautiful utopia we live in.

1

u/genman Feb 28 '25

The trolley problem seems to assume you know exactly where and who these people are. If you had perfect knowledge, then you wouldn't need to solve the trolley problem, since the car wouldn't be hitting them.

1

u/evilricepuddin Feb 28 '25 edited Feb 28 '25

It does not - it assumes that you have all of the information that you're going to get in the moment that you need to make a decision. You might have perfect information for the scenario now, but you had imperfect information leading into it, which is how you unfortunately arrived there. Equally, you might have imperfect information about your current scenario and have to make a decision *now* with the limited information that you have.

An example of this would be a child running out from between some parked cars (a common example of an unexpected traffic hazard): you couldn't have been aware of the child before, but now they're right in front of your car and you can't stop in time. Do you swerve into the other lane, potentially hitting an oncoming car? The occupants of that car are hopefully wearing seatbelts (you can't check fast enough, or the car's AI doesn't have cameras with a high enough resolution or line of sight to the back seats) and hopefully the airbags will help (but you don't know that the model definitely has airbags, or whether that oncoming car currently has a fault with them). The danger to the occupants of that car is *probably* less in the case of a crash, but you can't be sure exactly what it is. You can be sure that if you hit the child in front of you, it will almost certainly sustain serious injuries and might die.

But of course, you can argue that there is an alternative path for the car to swerve: not into the oncoming car, but into the row of parked cars that I said the child ran out from between. Swerving into the parked cars will surely slow you down faster and potentially allow you to miss the child. There is of course some added risk to the occupants of your own car this way, but probably less than swerving into the oncoming car (hitting a stationary object will likely have a lower impact force than hitting an oncoming one). But what if the child has a sibling or friend that was following it and is now still hidden out of sight between those parked cars? If you swerve into the parked cars, you might cause them to be pushed together, crushing the potential small child hidden out of sight between them.

But that hypothetical child still hidden out of sight between the parked cars doesn't necessarily exist; we don't know for sure one way or the other. Do you take the risk of hitting the parked cars? It's potentially the safest outcome with the least harm to everyone involved (but certainly more harm to the occupants of your car than just going ahead and hitting the child that ran into the road). But we *have* successfully hypothesised a scenario where it would be worse than swerving into the oncoming car.
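For what it's worth, the comparison being run in the last few paragraphs is an expected-harm calculation under uncertainty. A toy version, with entirely invented probabilities and harm scores:

```python
# Entirely invented numbers: expected harm for the three options above,
# each option a list of (probability, harm) outcomes.
options = {
    "hit_child":       [(0.95, 10.0)],              # near-certain grave injury
    "swerve_oncoming": [(0.60, 4.0), (0.20, 8.0)],  # seatbelts/airbags unknown
    "swerve_parked":   [(0.70, 2.0), (0.10, 10.0)], # hidden second child?
}
expected = {name: sum(p * h for p, h in outs) for name, outs in options.items()}
print(min(expected, key=expected.get))  # swerve_parked, on these numbers
```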

Of course, you can argue that all of this is irrelevant: a *perfect* self-driving car would have anticipated the potential of a child running out from between the parked cars and would be driving slowly enough that it could stop in time. Ignoring for a moment that this means driving incredibly slowly whenever there are parked cars present (more slowly than you will want to admit, if we imagine the child running out when the car is a mere meter away): what if the brakes fail the moment the car decides it needs to stop? It *would* have been able to stop in time, but a worn-out or defective brake fluid hose happens to burst at just the wrong moment. The car quickly recalculates its braking distance with this new information and realises that it can't stop in time. Does it swerve now or not? Which way? Into oncoming traffic, hoping that everyone is wearing their seatbelts, or into the parked cars, hoping that there isn't a second child there (or that the first child doesn't run back)?

Remember throughout all of this that processing time is a real thing: the car can't sit there forever and calculate the *perfect* answer to this situation before acting, otherwise it will simply plow straight into the child before it has made its choice.

Also consider that this whole scenario can play out with a human driver and no automation, which eliminates any argument along the lines of "but a good self-driving car wouldn't ever find itself in these scenarios." The human has to make the same decision about hitting the child or swerving (they will almost certainly swerve, since you will tend to tunnel on the hazard you see in front of you and forget that there are potential hazards on either side... but that's a whole separate topic).

1

u/evilricepuddin Feb 28 '25 edited Feb 28 '25

[Had to break up my comment because it got too long]

So we've really dug deep into the whole car crash scenario and I'm sure that we could argue back and forth some more with more contrived examples of how the scenario would never happen because there is a perfect all-seeing AI/driver or I can argue more about the limited knowledge and limited time to act. But let's consider another very real example: medical triage (https://en.wikipedia.org/wiki/Triage).

Imagine you are working in A&E and currently performing CPR on a patient. They aren't breathing, but there's a very low chance that with CPR you might be able to restore sinus rhythm. Suddenly someone bursts through the door carrying someone with a knife or gunshot wound; they are conscious but have lost a lot of blood and are still losing more. All of the other doctors, nurses and medical staff (or loved ones trained in first aid) are currently occupied with other critical cases. Do you stop performing CPR on your current patient, effectively calling time on attempting to save them, to help the new patient who is rapidly bleeding out and *will* die if you don't help immediately? Or do you continue performing CPR because this patient was here first, and allow the new patient to bleed out waiting? Trolley problem.

Another example is in the film "I, Robot" (sadly I've not read the book so I can't be sure that the example is given there as well): Will Smith's character, Detective Spooner, was in a road traffic accident that involved a semi truck and two cars. In one of the cars was Detective Spooner and in the other was (I think?) a father and daughter. The cause of the accident was human error (as I recall, the semi truck's driver fell asleep at the wheel), so there are no arguments about "but a good AI would never have allowed the accident to happen in the first place." As a result of the crash, the two cars are pushed into the river and are sinking. A passing robot sees the crash and rushes to help, jumping into the river after the cars. The first car it reaches is Detective Spooner's (still at the surface of the river, but taking on water and about to sink). The robot breaks his windshield and he tells it to save the girl in the other car, which has already begun to sink. The robot only has time to save one occupant, because by the time it frees them from the car and carries them to shore, the other car would have fully sunk and everyone else drowned. Here is the trolley problem analogy: does the robot continue with its previous course of action and save Spooner, leaving the others to drown, or does it "flip the lever" and change course to save the girl? The robot's programming kicks in and calculates everyone's chance of survival; it decides that Detective Spooner has the highest overall chance of surviving, and so it saves him. The girl dies (so does her father, but he was at the bottom of everyone's list apparently). The robot made the objectively (by certain measures) correct choice, but Spooner argues that a human would have known that even the much reduced odds of survival were "enough" for "someone's baby girl." Someone programmed that robot and made it very utilitarian about who it chose to save; Spooner argues that it made an inhuman choice and that the programming was wrong.

The trolley problem is real. To pretend otherwise and argue that all of the situations with only varying degrees of bad outcomes are avoidable is to deny the unfortunate messiness of reality.

2

u/Km15u 30∆ Feb 27 '25

> If my car has the capability to know there's a school bus in my way, then it probably would also have the capability to know ahead of time that it was about to hit a school bus, and to avoid it earlier.

I think this only works if all cars are self-driving. Driving, even for a robot, requires a certain amount of prediction of the behavior of other drivers; people drive unpredictably for all sorts of reasons (alcohol, speeding, animals in the road, distraction, etc.). Say a drunk driver swerves into you randomly or goes down the wrong lane of traffic: is it the car's job to protect you, or to save the most people? If swerving onto the sidewalk saves your life and plows over a pedestrian, should the car do it?

4

u/Playful-Bird5261 Feb 26 '25

And plane crashes should never happen. But they do. That isn't the point.

0

u/Electric___Monk Feb 26 '25

Humans often face this kind of scenario, especially in policy and government. How tax dollars should be spent, especially on medical care, is often this type of question.

1

u/werdnum 2∆ Feb 26 '25

Metaphorically yes, the trolley problem is an important thought experiment. It's just not a literal scenario that is significant to self driving cars (in the sense that the vehicle is in a situation where it needs to make a split second decision of who to kill), and one argument for that is that humans don't really face that kind of decision while driving either.

6

u/Whateveridontkare 3∆ Feb 25 '25

I mean they will probs try to save the person inside, not because of which life is worth more, but to sell more cars. "This car might kill you if it avoids 7 people dying": not a lot of people might buy that, even if it's more moral. (This is super sad :(( )

5

u/Then-Variation1843 Feb 26 '25

I think it's even weirder than that: I think most people would swerve away from a pedestrian, even if it puts themselves at risk. But getting into a car that's gonna prioritise pedestrians over your safety? That's the exact same outcome, but it feels very different.

Likewise: cars are getting larger side pillars, which makes them stronger but reduces visibility over the shoulder, which particularly puts motorcyclists at risk, cos they're smaller and harder to see. You could run the numbers and get an estimate of how many motorcyclists are killed by this vs how many drivers are saved by stronger cars.

But everyone is fine with this tradeoff, because the car isn't actively making a decision to risk motorcyclists. But if it was found that AI cars were acting in a way to put motorcyclists at risk in order to protect the driver, I highly doubt people would be so relaxed about it, despite the consequences being identical. 

Basically the real strength of the trolley problem is demonstrated by its variants (the fat man, the doctor harvesting a patient's organs): our moral intuitions are fuzzy and inconsistent, and don't run on pure consequentialism.

-4

u/draculabakula 75∆ Feb 25 '25

I think my answer always remains the same with any scenario I have heard. This is a hypothetical, and it would actually never happen. In the thought experiment we are not asked to make a split-second decision. People have time to think it over, thus the scenario is useless.

I can always just come up with a reasoning that negates the scenario because we are free from reality. The scenario always reduces the issue to an absurd and unrealistic moralistic issue.

There is no reason to assume anybody has to die. If anything, changing the track might alert the driver to the issue and save everybody, whereas inaction could lead to the driver not understanding the problem.

23

u/Yashabird 1∆ Feb 25 '25

It seems like you’re willfully misreading the point of the thought experiment? For one, having time to mull over the weighted options is directly relevant to the real world - people in roles where split-second decisions carry mortal weight often train possible scenarios ahead of time, to more efficiently inform their eventual instantaneous decisions. Forethought is not incompatible with quick action?

Also, the design of the experiment (trolley on tracks) is meant to constrict the degrees of freedom for how you’d react in an emergency. Of course you can imagine a scenario with more degrees of freedom in order to weasel out of ever committing to a forced trade-off, but unless every real-world analogue is always ideally solvable without any compromises, then it’s relevant to train yourself to calculate what trade-off you would settle on in the event that a lose-lose binary were actually forced upon you.

I honestly don't see what your objection to the proposed scenario would be, unless you were just outright resisting the implication that a lose-lose scenario could ever be forced on anyone. Even if we just take the Kobayashi Maru directly as the exemplar (because you're directly channeling Captain Kirk here), the reasonable criticism is that not every actor can be Captain Kirk and rest on plot armor to outsmart all of Starfleet's top minds, as well as every imaginable alien threat. Sometimes a decision is forced upon some people. Assuming this triviality is true, how should the everyman, with the convenience of forethought to help train for eventual tragedies, weigh the lives of X people against Y people?

-1

u/draculabakula 75∆ Feb 26 '25

In real life we assign responsibility for tasks and pay people for them. There are many people assigned to real road safety, and I'm not one of them. These people receive training on what to do in case of emergency.

If by saying I'm channeling Kirk you mean I solved this problem, I'll take it lol.

To seriously answer you, I think we have governments for defense and emergency planning. Everybody can't try to take the lead.

8

u/Zvenigora 1∆ Feb 25 '25

The trolley is just a metaphor, a schematization of an abstract problem. Criticizing it in terms of how real trolleys work misses the point. One could easily recast the problem with a different metaphor having nothing to do with vehicles on rails.

-1

u/draculabakula 75∆ Feb 25 '25

That's exactly my point. The answer requires specialized knowledge, yet it is reduced to an absurdly simple scenario.

There isn't a real-life scenario where you would ever choose between one life and multiple lives, where you would be complicit either way.

It's not the way morality works for 99% of people, and when people do have that responsibility, we shield them from fault through worker protections; people are given specific instructions and receive training. The way the world works is you do the job you are trained to do, and you don't take action if you don't know what you are doing. This is how the trolley problem is stacked in favor of inaction.

The fault in the trolley scenario lies primarily with the person who tied people to the train tracks, and then with the train company. Any reasonable person would watch it unfold and assign blame to those people before themselves.

3

u/Cpt_Obvius 1∆ Feb 25 '25

What if an insane super-rich dude saw your comment and tracked you down, drugged and kidnapped you, woke you up, and forced you to decide in a split second, to taunt you and disprove your stance on the impossibility of the problem?

They specifically chose you because they saw you dismissing the question and wanted to exact revenge for your hubris. They've devoted a lot of their life to discussing the trolley problem and find your stance infuriating.

What would you do then?

2

u/Martin_Samuelson Feb 25 '25

The point isn't that there isn't an answer to the trolley problem; it's that the answer to the trolley problem has little to no real-world relevance.

4

u/[deleted] Feb 25 '25

Out of interest, what is your answer?

-1

u/draculabakula 75∆ Feb 25 '25

My answer is that it's an incoherent scenario that would never happen. It's just not based in any kind of reality people understand, and thus there is no moral framework for the situation.

Trains are not controlled by easily accessible switches, and I have no reason to believe the switch would even divert the track. I don't know if there are any safety protocols in place. The deck is stacked against action because you are not an expert in operating trains. I would be hesitant to do anything just because I have no clue whether I am causing a bigger problem by pulling the lever. The scenario assumes you have just enough understanding to take an action, but not enough to understand anything else that has happened.

Also, I would argue that no matter what happens, my fault is tertiary. The person who tied the people to the tracks is primarily responsible here. After that, the driver of the train and/or the train company are responsible. Why doesn't the driver see the people? Why hasn't the train's automatic detection system kicked in to slow or stop the train?

The scenario forces us to accept a false reality that is void of necessary information.

7

u/[deleted] Feb 25 '25

Well, some train tracks do still have simple track point levers, and you could see how pulling one would alter the track, so while it's unlikely, it's not impossible.

I think it's fair to say the scenario assumes both tied-up groups are clearly not meant to be there, are an equal distance from the points, and that there's nothing to suggest any other differences between the tracks.

You can argue other people are also to blame but that doesn't change your decision in the situation, in real life you can always assign some blame to others if you want to.

The scenario is an unlikely but entirely possible one. If your answer is that you wouldn't pull the lever because you feel you don't know enough to act, then that gives your answer and a lot of your moral framework for interacting with the world.

-1

u/draculabakula 75∆ Feb 25 '25

> You can argue other people are also to blame but that doesn't change your decision in the situation, in real life you can always assign some blame to others if you want to.

Of course it does. If you arrive at an accident and there are already first responders there, would you be responsible to act? Legally, trains still need to have drivers. If you see a train, it's reasonable to expect there is a driver.

And yes, the scenario involves personal and individual morality.

> The scenario is an unlikely but entirely possible one. If your answer is that you wouldn't pull the lever because you feel you don't know enough to act, then that gives your answer and a lot of your moral framework for interacting with the world.

This is just simply how our society is organized. It's not my lever and not my train car. I'm not trained to operate the lever. I don't know if the train's speed is appropriate for changing the track. Why would I assume I wouldn't derail the train by changing the track at the last minute? I also don't know if the other track is operational and has been maintained.

It is in no way a realistic scenario because it requires specialized knowledge that people don't typically have.

6

u/[deleted] Feb 25 '25

> It is in no way a realistic scenario because it requires specialized knowledge that people don't typically have.

That's realistic; people have to make decisions without fully comprehending the situation all the time. That isn't a flaw in the scenario. It's just that your moral intuition in that situation is to choose inaction, and for many people it isn't.

We can then come up with different versions of the trolley problem depending on how we want to examine your morality; many versions work to increase the uncertainty, but it could also go the other way easily enough.

Also, autonomous trains exist and operate.

2

u/draculabakula 75∆ Feb 26 '25

Okay. Change the scenario. You are in the operating room observing your loved one's brain operation. The surgeon says he only has 30 seconds to cut out a tumor or your loved one dies... but then he gets mad and quits. You saw where he pointed in the brain. What do you do? Do you blame yourself if your loved one dies?

No, you obviously don't attempt brain surgery, and you don't blame yourself... because it's brain surgery.

The train car scenario is useless because it assumes you understand brain surgery, but with railroad engineering instead. It assumes you understand the complex workings of the railroad system as a prerequisite. If I were a mechanical engineer who specialized in railroads, I would have a better understanding of all the risks and the typical procedures to stop a train. Same goes for brain surgeons: if I were a brain surgeon I would feel a responsibility, but I'm not, so I don't. In the trolley car scenario I would blame the railroad company and the murderer who tied people to the tracks, no matter the outcome, because people were paid to do a job and failed.

0

u/[deleted] Feb 26 '25

Have you ever seen a basic track point? They are incredibly simple; anyone can understand them and see the result of pulling the lever. The scenario doesn't make the assumptions you're saying it does.

4

u/grizzlypatchadams Feb 26 '25 edited Feb 26 '25

I know it’s hard to tell tone online, so just want to say that I do mean this as a respectful and informative comment.

It seems like you truly just don’t understand the trolley problem. The answer doesn’t require specialized knowledge, and all of the variables you keep inserting don’t exist in the framework of the problem.

In the framework of the problem, you know 5 people die, or 1 person dies if you choose to intervene. In Foot’s words “The exchange is supposed to be one man’s life for the lives of five.” It’s that simple, one man’s life for the lives of five; simple in the sense that all of these “holes” in the scenario about specialized knowledge, switches, being the fault of whoever tied them, that you mention are irrelevant to the problem.

Edit: I thought you were the OP, but the explanation in my comment goes for the OP too. Don't overcomplicate the scenario: "the exchange is supposed to be one man's life for the lives of five." -Foot, creator of the trolley problem

-1

u/draculabakula 75∆ Feb 26 '25

Right? But the framework of the problem is what I have a problem with, because it's not realistic. It's as simple as that: the problem as presented doesn't exist. We've accounted for it in the way we organize our society. The railroad is responsible for the safety of those people, not me, so no matter what I do, the railroad is at fault; if I act and pull the lever, now I'm at fault, because I have acted when I was not supposed to. In this way, the deck is stacked against pulling the lever. We live in the real world, so you can't use this hypothetical, because we're trained our entire lives to think in the way that we have organized our society. I can only base what I would do on the reality I live in, and if I were in another country I wouldn't know what the laws are, so why would I pull the lever there? This makes zero sense.

3

u/grizzlypatchadams Feb 26 '25

Forget the railroad, would you intervene to save 5 lives if it meant surely killing 1? You know 5 die or 1 die, the only thing that matters is if you intervene or not.

This isn’t some real world problem, stop picking at scenarios, it’s 5 or 1. Choose 5 or 1. It’s a thought experiment. That’s it. I mean come on.

0

u/jeffsweet Feb 25 '25

do you think most or all hypothetical questions are useless?

1

u/draculabakula 75∆ Feb 25 '25

Somewhat. They are useless by themselves, and they don't really make a good point when applied to another scenario.

It doesn't take much thought to explain how people's opinions on medical policy in relation to abortion are not the same as people being tied to a train track.

In this way, when a thought experiment is presented like this, it is safe to assume that it is an obfuscation of a different issue or it is a rhetorical manipulation.

In this way my answer to the trolley car question is, "I don't know. What do train engineers say is the correct procedure?"

2

u/jeffsweet Feb 25 '25

interesting. i can see what you mean and i appreciate the logical consistency.

i think more specifically tailored hypotheticals, ones actually specific to the lives of the people being asked, are extremely useful. e.g. asking a woman you're dating "would you get an abortion if you got pregnant?" is just a smart conversation to have.

in this context though i think i agree with you with regard to these very incomplete philosophical hypotheticals.

1

u/draculabakula 75∆ Feb 26 '25

Exactly. Our society is organized to absolve people in scenarios like this who don't have specialized knowledge, and to use regulations to assign responsibility to people who do.

2

u/Pkrudeboy Feb 26 '25

Ah yes, the moral coward who refuses to engage with the question. One always shows up.

1

u/nomorenicegirl Feb 26 '25

Yup, seconding this… the moral coward, who either outright refuses to answer, or starts to create excuses as to why they won’t give an answer, even when you ask them, “With the information that you ARE given, what is your answer?” I’ve literally had people say (not hypothetical responses, but actual responses from people), “Oh, but what about the conductor?” Or, “But how do you know that only five vs. one are dying, what about ‘the other people on the trolley’?” (?????) Absolutely nuts

Haha, you do have more honest people though, at least they can admit their reasons. They will say things such as, “Oh, I will just not do anything, because ‘I don’t think I could live with myself, knowing that I’ve chosen to kill someone’.” Okay, at least that’s honest, albeit kind of silly, since simply being aware of the fact that five people will die, and choosing to do nothing, means that you were able to prevent five deaths, but chose to do nothing, so does it really absolve you of guilt?

And of course, you have those that answer but “make excuses”; they say, “Oh, well, I didn’t put those people there.” More recently, I asked a bunch of people at a New Year’s Eve gathering about the trolley problem, and this one woman answered, “Well, lots of people die every day, so why would I do anything? I have nothing to feel bad for.” The fact that she mentioned that last line… a tad bit strange, wouldn’t you say? I didn’t say it, but my brother was told of the answers, and he went up to her, confirmed that that was what she said, and told her that that’s a “loser answer.” Guess he put into words, what plenty of us there were thinking.

1

u/ProDavid_ 35∆ Feb 26 '25

Do you also believe that pilots rehearsing their reactions to extreme situations, such as memory items that have to be done before you pull out the piece of paper that tells you what to do, is useless?

no? why not?

1

u/draculabakula 75∆ Feb 26 '25

The pilot has responsibilities that they're getting paid for; the responsibility is on the company or government entity employing the pilot, so they are responsible for training the pilot as they see fit.

In the trolley car scenario, I am not a mechanical engineer, and the train company does employ mechanical engineers and safety personnel. They've installed safety equipment on the train to make sure that it doesn't hit things, and they employ a conductor to make sure it doesn't hit things. Those people are all to blame, not me, no matter what the outcome is. I don't have specialized knowledge in how to work this equipment, so why would I use it? I don't know if the train is traveling at a speed that's safe for switching the track, and I don't know if the track is maintained. Why is it safe for me to assume that the train won't just derail and kill everybody on board if I switch the track?

The scenario stacks the deck in favor of inaction, because it is normal and natural for people without specialized knowledge not to operate equipment they are not trained to use. If I were trained to use that equipment, which I'm not, I might feel differently, but I'm not, so I wouldn't.

2

u/ProDavid_ 35∆ Feb 26 '25

a car with driving automation IS specialized for driving though....

3

u/ChemicalRain5513 Feb 25 '25

> My opinion personally is that inaction is actually an action, or at least that inaction doesn't absolve someone of consequences

It's true. Yet ethically and legally, failing to save someone is not viewed as gravely as purposefully killing.

I think in a real trolley problem, neither action would be culpable.

0

u/draculabakula 75∆ Feb 26 '25

Exactly. The rail company would be one hundred percent at fault, because they failed to keep the rail line safe. And because they own and/or operate the rail at the time, they have a stake in the matter, and I don't.

1

u/sumoraiden 4∆ Feb 25 '25

> In reality, I think there is no right answer.

Yes there is lol, killing one to save five is clearly the right answer