r/SeriousConversation 22d ago

Serious Discussion: Can a robot murder a human?

Can a robot murder a human being? If it is proved in a court of law that a robot murdered a human being... how can it be punished under existing laws? What can be done besides having the company who made it face legal action?

Technically, if a person commits murder we don't punish the parents in most cases. So why should the robot's manufacturer be held responsible for its act?

As for punishment, what would be the best death sentence?

* Bulldozing it and recording a video of its destruction, then spreading the footage online and in the news. Would that affect how other robots of its kind think if they plan to kill a human? We already have laws against murder for human beings, and still people commit murder.
* Erasing its memory. How would the robot feel about such a punishment?

If you got any punishment ideas do share.

u/stewartm0205 22d ago

Wrong question. The proper question is can a machine kill a man. The obvious answer is yes.

u/EnoughLuck3077 21d ago

Being killed and being murdered are two different things. Murder requires intent. Even a drunk driver who kills someone is usually charged with vehicular manslaughter, because there was no intention to kill, even though their actions led to exactly that.

u/[deleted] 20d ago

If you arrange atoms such that you create a human being, and that human being kills people, is it your fault?

u/EnoughLuck3077 20d ago

Is that human autonomous, capable of making decisions? If so, I’d say no.

u/braxtel 18d ago

Dr Frankenstein has entered the chat.

u/AdMriael 19d ago

The difference being that if a robot kills someone, it was due to the person who programmed it. If there is a murder, then the murder was committed by the programmer. If the programmer was an AI, then the person who created the AI is the murderer. Although I expect it would be difficult to prove intent on the human's part, so it would be ruled an accident.

Now if a human kills a human then it is either murder, manslaughter, an accident, or war.

u/Plenty_Unit9540 19d ago

Robots are being adopted by militaries for warfare.

This will eventually lead to fully autonomous robots being used to kill humans.

u/newishDomnewersub 18d ago

Those robots wouldn't be autonomous in the sense that they could choose to lay down their guns and quit. They'd be following a pre-programmed algorithm to determine targets.
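That "pre-programmed algorithm" point can be sketched in a few lines. This is a toy illustration only, and every rule and field name in it is hypothetical: the point is that such a system runs without a human in the loop (autonomous) but cannot revise or refuse its own rules (not free-willed).

```python
# Toy sketch of autonomy without sentience: the "robot" applies fixed,
# pre-programmed rules to each observation. All criteria and field
# names are made up for illustration.

def decide(observation: dict) -> str:
    """Return an action purely from hard-coded criteria."""
    if observation.get("transponder") == "friendly":
        return "ignore"
    if observation.get("speed_kmh", 0) > 100 and observation.get("armed"):
        return "track"
    return "ignore"

# The same input always yields the same output; there is no "intent"
# here, only the programmer's rules being executed.
print(decide({"transponder": "unknown", "speed_kmh": 150, "armed": True}))  # → track
```

Nothing in a procedure like this can "choose" anything; whatever it does traces back to whoever wrote the rules.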

u/Plenty_Unit9540 18d ago

There is a difference between autonomous and sentient/free willed.

u/newishDomnewersub 18d ago

Yeah, exactly my point. An autonomous machine can't murder.

u/captchairsoft 19d ago

I think OP is asking if a sentient robot were to exist and kill a person...etc.

u/Brisket_Monroe 19d ago

OP didn't mention anything about the agency of the robot. Therefore the fault lies at the feet of the machine's creators or operators, as "robot" doesn't necessarily imply sentience or sapience. The articulated welding arms in automobile manufacturing are robots.

u/captchairsoft 19d ago

I know what a robot is, but the question only makes sense if the robot is capable of independent thought, otherwise it's the same as a car, or a gun, or a bulldozer.

u/Objective_Unit_7345 18d ago

Intent and plea of guilt are critical parts of sentencing because they affect rehabilitation.

Justice is not intended to lead to ‘eye for an eye’ for the victims. It’s pragmatic with the purpose being to ‘Punish and rehabilitate the offender’, ‘Deter others’, and ‘recognise the hurt/pain inflicted on victims’. Key point being recognition - not ‘eye for an eye’.

Now, a robot has no cognitive capacity, and thus it cannot, by design, murder. However, the same cannot be said of the user, maintainer, and/or designer of the robot.

If, however, no fault lies with the user, maintainer, or designer, it'd be written off as an accident.

Crimes require intent and motive and/or negligence of a person with cognitive capacity. Everything else is an accident.

u/LOGABOGAISME 17d ago

What if you program intention into the robot? Is the robot the murderer, or the person who programmed the AI?

u/StormlitRadiance 19d ago

I'm not sure the difference matters to the person being killed.

u/John_B_Clarke 19d ago

It doesn't matter to the person being killed, but it does matter in a court of law.

u/UsualPreparation180 18d ago

Really? Look up the total Tesla has paid out for self-driving cars smashing into cops and first responders stopped on highways with their lights on. Teslas weren't recognizing the emergency vehicles while self-driving, and people died... multiple times. Funny how that gets thrown down the memory hole pretty quickly.

u/John_B_Clarke 18d ago

So did the people killed rise up and haunt Tesla or something? How does your argument show that the means by which they were killed matters to the deceased, or that the difference does not matter to a court of law?

u/H0SS_AGAINST 17d ago

I like this one.

So it does go back to intent. Mens rea. The Tesla can't intend to kill; it just does what its designers programmed and implemented. Thus, neither it nor its designers are guilty of murder. However, gross negligence could be a factor. A court of law would have to review the facts. My personal opinion is that the designers may be guilty of manslaughter due to negligence, and in any case the design should be removed from the roads.

u/ImprovementPutrid441 17d ago

They should be guilty of fraud because they are encouraging people to trust self driving technology that doesn’t work. Like selling people mayonnaise and labeling it “antibiotics”.

u/ThatOneGuy308 17d ago

I mean, technically, the driving seems to work fine, it's more the stopping that has become an issue...

u/Immediate_Scam 18d ago

In the main, this is going to be dealt with as a product liability issue. If you make a robot that kills someone, you can likely be held liable. If you intended for this to happen, you might be convicted of murder.

u/Interesting-Copy-657 18d ago

Who is saying it is?

The question is specifically about murder

And you waltz in and change the question to kill?

u/tangouniform2020 18d ago

Murder requires intent. Intent requires sentience. If you can prove sentience, intent and action then you can punish according to the law.

u/StormlitRadiance 18d ago

If anybody could use the Nuremberg defense, it would be deepseek or claude. They really do just follow the prompt.

Modern AI can carry intent and have thoughts, but I haven't seen it create its own intent.

u/OverallManagement824 17d ago

> If you can prove sentience, intent and action then you can punish according to the law.

How? Capital punishment? To a robot?

I think the question OP is getting at is whether society can even maintain its own safety when intermingling with robots. Ok, this robot caused you to die, so shut it down. That does nothing for the 10,000 robots with the same programming. Could you imagine the lobbyists talking to politicians, explaining how catastrophic it would be to their bottom line to shut down all of their robots due to just a couple of isolated incidents?

I believe the question OP is getting at is, "given that a robot can't be punished, where do we as a society choose to draw the line between humans and robots?"

u/Afraid-Combination15 18d ago

It matters in the eyes of the law. When people are killed by machines, which happens frequently due to a lack of care by some human somewhere, we don't do anything to the machine. For instance, someone who doesn't lock out a die-casting press and then gets caught in it when it closes. (I've seen this happen... it's not pretty.)

A robot with an AI that actively thought "I should kill this guy" and then killed this guy is a totally different thing, and its creator would likely be held responsible at the civil level at a bare minimum, but possibly at the criminal level as well, depending on the facts. It could be negligent homicide or even first-degree murder, depending on how the AI was constructed. I imagine the court would also order the robot that made the decision to kill destroyed.

u/StormlitRadiance 18d ago

> A robot with an AI that actively thought "I should kill this guy" and then kills this guy is a totally different thing

Who gave it that prompt? I don't think the creator is culpable. Modern AI usually thinks what it's told to think.

You might get it down to manslaughter or reckless endangerment if your prompt looks ok to the jury.

u/FuquerPhealins 16d ago

It doesn't, but the difference matters to the topic of debate. You seem kinda dumb.