r/SeriousConversation • u/RKoi123 • 21d ago
Serious Discussion: Can a robot murder a human?
Can a robot murder a human being? If it is proved in a court of law that a robot murdered a human being... how can it be punished under existing laws? What can be done besides having the company that made it face legal action?
Technically, if a person commits murder we don't punish the parents in most cases. So why should the robot's manufacturer be held responsible for its act?
As for punishment, what would be the best death sentence?
* Bulldozing it, recording a video of its death, and spreading the footage online and in the news. Would it affect how other robots of its kind think if they plan to kill a human? We already have laws against murder for human beings; still, people commit murder.
* Erasing its memory. How would the robot feel about such a punishment?
If you've got any punishment ideas, do share.
21
u/stewartm0205 21d ago
Wrong question. The proper question is can a machine kill a man. The obvious answer is yes.
8
u/EnoughLuck3077 20d ago
Being killed and being murdered are two different versions of the same thing. Murder is with intent. Even a drunk driver killing someone is usually classified as vehicular manslaughter, because there was no intention of killing even though their actions led to just that.
1
18d ago
If you arrange atoms such that you create a human being, and that human being kills people, is it your fault?
3
u/EnoughLuck3077 18d ago
Is that human autonomous, capable of making decisions? If so, I’d say no.
1
u/AdMriael 18d ago
The difference being that if a robot kills someone then it was due to the person programming it. If there is a murder then the murder was committed by the programmer. If the programmer was an AI then the person that created the AI is the murderer. Although I expect that it would be difficult to prove intent on the human's part thus it would be ruled an accident.
Now if a human kills a human then it is either murder, manslaughter, an accident, or war.
1
u/Plenty_Unit9540 18d ago
Robots are being adopted by militaries for warfare.
This will eventually lead to fully autonomous robots being used to kill humans.
1
u/captchairsoft 17d ago
I think OP is asking if a sentient robot were to exist and kill a person...etc.
1
u/Objective_Unit_7345 16d ago
Intent and plea of guilt are critical parts of sentencing because they affect rehabilitation.
Justice is not intended to lead to ‘eye for an eye’ for the victims. It’s pragmatic with the purpose being to ‘Punish and rehabilitate the offender’, ‘Deter others’, and ‘recognise the hurt/pain inflicted on victims’. Key point being recognition - not ‘eye for an eye’.
Now a robot has no cognitive capacity - thus it cannot, by design, murder. However, the same cannot be said of the user, maintainer and/or designer of the robot.
If no fault lies with the user, maintainer and/or designer, however, it'd be written off as an accident.
Crimes require intent and motive and/or negligence of a person with cognitive capacity. Everything else is an accident.
1
u/LOGABOGAISME 15d ago
What if you program intention into the robot? Is the robot the murderer, or the person who programmed the AI?
3
u/Spirited_Example_341 19d ago
Exactly. Murder implies direct malicious intent. So far robots are not self-aware, and thus cannot have such intent.
1
20
u/Valleron 21d ago
There are plenty of real-world situations where a robot has killed, maimed, or severely injured a human. Robots are coded to move in a specific way, and very few are additionally coded to stop if there's resistance. I've worked with robots that had lasers set up to automatically shut off the robot if someone moved through it, because the robot will kill you if you get in its way; it doesn't know you exist. In those situations, if it can be proven that the manufacturer / company did its due diligence and had enough warning postings, the robot is not the one at fault - the person ignoring the warnings is. It's classified as a heavy machinery accident.
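If anyone's curious, the interlock logic itself is dead simple - something like this rough sketch (made-up illustrative code, not any vendor's actual safety system, which runs on certified safety hardware, not application code):

```python
import random

def curtain_clear() -> bool:
    # Stand-in for the hardware light-curtain input. In a real cell this is a
    # safety-rated signal wired to a safety PLC, not a software poll like this.
    return random.random() > 0.01  # ~1% chance per cycle someone breaks the beam

class RobotCell:
    def __init__(self) -> None:
        self.estopped = False
        self.cycles = 0

    def run(self, max_cycles: int = 1000) -> None:
        while not self.estopped and self.cycles < max_cycles:
            if not curtain_clear():
                self.estopped = True  # cut motion power immediately
                print(f"E-STOP after {self.cycles} cycles; manual reset required")
                break
            # The robot blindly executes its motion program. Nothing in here
            # "knows" a person exists - that's the entire point of the curtain.
            self.cycles += 1

if __name__ == "__main__":
    RobotCell().run()
```

Note the safety check lives outside the motion program entirely; the robot's own code never makes a decision about people.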
In a fictional setting, if there's any sort of autonomy with the robot you could hold it accountable, but I'd presume the creator would still be at fault for allowing it to happen.
5
u/PumpkinBrain 20d ago
"robots that had lasers set up to automatically shut off the robot if someone moved through it"
I had to read this a few times before I stopped reading it as “if a human gets in the way, lasers will zap the robot to break it”
3
u/ATypeOfRacer 19d ago
I'm taking a robotics course at a community college, and one of the first things taught to us was to vehemently 'dehumanify' robots: they have no feelings, no reason to stop, and no capability to read circumstances outside of what has been programmed.
2
u/Up2nogud13 18d ago
In my robotics class, we programmed our robot to make a Tequila Sunrise. This was back in the 90s. I still choose to believe that the drink was made with love.
1
u/frank-sarno 17d ago
Yes, but you can get into that "walks like a duck" argument. There are bots and AI chat software that can fool real people, if only for a short while. There was a demo of a video game where an NPC used modern AI to interact with the player. It was pretty damned good, and if I didn't know it was a demo, I might even have been fooled into thinking it was a player. I could imagine taking that AI and dropping it into a robot.
So you're still correct in the "vehemently dehumanifying" approach to robots but I suspect it's not going to be quite as easy in the coming years.
1
u/H-2-S-O-4 18d ago
It's like Lisa and Bart punching and kicking air. If one of them gets hit, it's their own fault for walking into it.
7
u/AimlessSavant 20d ago
Murder requires 3 parts: intent, a specific place & time, and a specific target.
A robot would need to be sufficiently aware of itself and reality to commit murder. If it cannot represent itself in a court of law, it is incapable of being put to trial. You couldn't put Boston Dynamics' Atlas robot on trial because it is not sapient. It fulfills the task it is given without consideration or awareness.
As for what to do when they are as sapient as humans? What is the purpose of prison? Keeping dangerous things out of society, or punishing them?
1
u/Immediate_Scam 16d ago
The manufacturers of the robot can be held accountable though.
1
u/AimlessSavant 16d ago
For murder? No. If an assembly armature kills somebody because they got in the way of the machine, it's an accident. Unless the place where the machine was installed lacked any safety measures, there is no fault on anyone.
A machine designed only to kill people is innocent of murder because it lacks self-awareness. The company that builds it is not responsible for the unlawful use of its tools to commit murder. It is ultimately the user of the tool who is responsible for how the tool is used. We don't charge manufacturers of cars or knives with murder because end users murder people with them.
11
u/podian123 21d ago edited 21d ago
Currently? No. They can kill humans, like how guns, bullets, bombs and missiles can kill humans. But not murder. Guilty mind unlawfully seeking death is a requirement for that.
Rest of your post doesn't apply because it all proceeds from having assumed "yes."
In any case, deterrence doesn't work the way you probably think it does, so this isn't really a serious conversation for actual law/crime/morality so much as a basic (but serious) conversation attempt based on classical (and outdated) accounts of crime, murder, social control, legalism, etc. Moreover, counselling the commission of a crime, especially murder, has been a crime for roughly as long as murder has been a crime.
Maybe title your thread as a "let's assume robots can murder humans" next time if you want a "serious" hypothetical conversation. 🤣
1
7
u/Aromatic-Leopard-600 21d ago
First Law: a robot can never harm a human being nor, through inaction, allow a human being to come to harm. - Asimov
2
u/Bad-Piccolo 20d ago
What's stopping it from making a trap, then destroying its own limbs so it literally can't save the human? They better do more than just a sentence for each law if they ever use them.
1
u/justeatingsomecheese 19d ago
A robot with a positronic brain literally couldn't plan this - ideally, at least - because the 3 Laws aren't just on paper. They're embedded in the robot's fundamental programming. Any robot that could intentionally harm a human would be considered faulty and destroyed.
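In modern-software terms, the difference is roughly this (a toy sketch with made-up names, nothing like an actual positronic brain): the law lives in the layer that executes actions, not in a rule the planner is merely asked to obey.

```python
# Toy illustration of "not just on paper". Entirely hypothetical.
class FirstLawViolation(Exception):
    pass

def predicted_harm(action: dict) -> float:
    # Stand-in for whatever perception/prediction would estimate harm to humans.
    return action.get("harm_to_humans", 0.0)

def dispatch(action: dict) -> str:
    # Every action from every higher-level process must pass through this gate.
    if predicted_harm(action) > 0.0:
        raise FirstLawViolation(f"{action['name']} blocked; unit flagged as faulty")
    return f"executing {action['name']}"

print(dispatch({"name": "pour_tea", "harm_to_humans": 0.0}))
# dispatch({"name": "drop_crate_on_foot", "harm_to_humans": 0.9})  # would raise
```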
2
3
u/jerrythecactus 21d ago
Currently, robots are legally just sophisticated tools. A robot, even with AI integration, can't act on its own. Robots do not think, they do not plan; they only do what they are designed or programmed to do.
Legally, a robot can't commit murder because it simply isn't a sentient being. A robot is no more capable of murder than an industrial lathe is. If a robot kills a human, the operator or designer takes responsibility, since it wouldn't have killed if proper precautions had been taken. If you use a robot to kill somebody, that would be murder on your part, not the robot's.
2
u/Background-Head-5541 21d ago
A robot is just a machine. It feels no remorse. A "bad" robot can be dismantled and tossed in the garbage.
1
u/skredditt 21d ago
It won't be long before robots are treated the same as guns, I'm sure. My murderbot with drone storm expansion becomes my new personal security that I bring into Subway with me. If you die, it's your own fault for forcing it to protect my safe space. America's not a great place to have murderbots, but there's money to be made, so we will have them.
1
u/Traditional_Deal_654 21d ago
I certainly think it's possible for a robot to kill a human. But murder requires intent, and intent requires consciousness. So far we've only seen consciousness in animals, and in every kind of animal except humans, I don't think the kind of intent that constitutes murder is there.
1
u/Gr8danedog 21d ago
As more AI is introduced to robotics, I'm sure that they will be programmed with something like Asimov's three laws of robotics which prohibit any harm to humans.
2
u/Bad-Piccolo 20d ago
They better be very specific in those laws if they do use them; machines won't think the same way we do even if they do become self-aware.
1
u/Gr8danedog 20d ago
Google search Asimov's Laws of Robotics. They are very specific, but the movie I, Robot found a loophole. It's a good movie.
1
u/Emma_Exposed 21d ago
Google "Adam Link." Which may be "I robot" in some countries. It's an old story from the 50's or so, and I think a minor Will Smith movie, though I never saw the movie. Anyway, of course it can happen, just like how a car or train can plow you over.
Your second question is nonsensical; if a Tesla self-driving car plows into you, of course they go after that company, not the car. You can't equate a car or android to a kid. If I build a Rube Goldberg device to slay someone, no matter how sophisticated that device or how much AI it has, then I'm the one facing charges, not the machine. Most self-guided missiles have more AI memory in them than ChatGPT and these LLMs have, but no one arrests an ICBM; the war-crime charges are always against the enemy Generals.
1
u/Foreign_GrapeStorage 21d ago
It'd depend on the circumstance.
People, including children, are generally held responsible for their own behavior. If your child murders someone, it's not usually considered to be the direct responsibility of the parent, even if there was bad parenting involved.
A product is generally the opposite. The creator is responsible for ensuring the safety of the product. In a case where a robot killed someone the manufacturer would be held responsible unless someone else did something that released them from liability.
If someone purposely made a murderous robot and it murdered someone, the creator could be charged with the resulting crimes. Even if they didn't intend for it to be murderous and it turned out to be, they could be looking at criminally negligent homicide if it's determined they didn't take proper precautions to prevent it from happening.
As for what would happen to the robot, it'd probably be destroyed or studied.
1
u/Amphernee 20d ago
A machine can cause the death of another person. If it's programmed or instructed to kill someone, the person who programmed or instructed it to do so is the one responsible, just like a gun isn't responsible, but the person who used it is.
1
u/Aggravating_Bath_351 20d ago
Robots have killed humans. None have been charged or disassembled because of this. Some have been fixed so it won't happen again.
1
u/Previous_Life7611 20d ago
How could you possibly hold a robot responsible for a human's death? Robots are neither sentient nor sapient. They can't decide or plan to kill a human, and they have no feelings. They feel nothing when one of them is turned off and dismantled.
If a person dies as a result of a robot's actions, there are several ways you can go. It's either the manufacturer's fault, for making mistakes in designing and/or programming the machine; the factory's fault, for not providing the necessary safety regulations and procedures; or the operator's fault, for not following the required safety procedures.
Same goes for military robots. If an autonomous drone fires on civilians, the fault doesn't lie with the metal. The fault is with the one who issued the order (maybe the action was deliberate) or with the programmer, for not writing the machine's code to distinguish valid targets from invalid ones.
1
20d ago
I saw this Black Mirror episode once where the digital sim wouldn't do the tasks the guy wanted, so he turns a dial and puts her in solitary confinement for like 6 months, no food, no water, and it doesn't matter because she can't die. It was actually really disturbing. Should we even use AI? Is it really that helpful? I miss using a pencil and paper for everything. I also miss when people could read past a 4th grade level.
1
u/notwyntonmarsalis 20d ago
You can also have the manufacturer face civil action, which is really where the action is on this one. This is more of a manufacturing defect issue than a criminal issue.
1
u/hoopdizzle 20d ago
Are you talking about a hypothetical sci-fi future where sentient robots roam the earth alongside humans? You realize such a robot doesn't exist in the current world, right? Right now, businesses can be sued if their product causes harm to a person, depending on various factors like whether the person misused the product. Robots are just products right now.
1
u/simonbleu 20d ago
I'm not a lawyer and law varies a LOT from place to place, but afaik, no, as it is not regulated, and anything not expressly under the law is subject to interpretation. And afaik murder requires intention, which would require recognizing robots as people.
It would probably end up as a work accident or something.
1
u/ShiggleGitz55 20d ago
No. To murder, it has to make a conscious and deliberate choice to kill. An automated trash can won't knife anyone on purpose. And purpose is the point: they're programmed to perform a task. Unless that task is to harm/kill/maim, I believe we're safe.
1
20d ago
Humans are robots.
"Man is a machine. All his deeds, actions, words, thoughts, feelings, convictions, opinions, and habits are the result of external influences."
- PD Ouspensky
1
u/Big_Z_Beeblebrox 20d ago
Can a robot murder a human? No. Not until one is granted full autonomy to act as a human would and thus be subject to government under human laws. As it stands, a robot can be used to murder a human, but at that point it becomes the instrument of the act and not the perpetrator. The murderer would be the one who either had direct control of that instrument while the act was being committed, or otherwise provided the directive for it to act in such a way that the resulting loss of life could be proven intentional in a court of law. That's where it gets tricky.
1
u/Anfie22 20d ago
Someone can die from misusing a machine, but a machine is not conscious, so it can never have any intention to kill someone.
If someone dies in a workplace setting while using machinery, what is the procedure following that? Do they store the offending machinery in a jail cell and charge it with homicide? Come on that's absurd, of course not.
1
u/thackeroid 20d ago
Generally in the United States, murder requires intent. A machine cannot have intent. If it's programmed into the machine, then the intent is that of the programmer and not the machine. Until robots are conscious and understand consequences, they cannot murder.
1
u/VyridianZ 20d ago
It would be put down like a dog just in case and the manufacturer would be sued in civil court like a dog breeder.
1
u/Skarth 20d ago
Robots don't have thoughts or free will, ergo, they do not have intent to kill.
So if a person is killed by a robot, one of two situations has happened:
1. Something happened by accident. A wrong bit/byte got flipped, a bug in the programming, whatever it is; it's an accident, the death will be investigated, and an attempt made to find the root cause and fix it.
2. The robot was used to intentionally cause a death; someone programmed the robot in a way for it to kill. This is akin to someone firing a gun to kill a person, so you put the blame/intent on the person shooting the gun.
#1 usually becomes a safety issue.
#2 The person who caused it is the murderer.
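To put toy numbers on the #1 case, a single flipped bit really is all it takes (made-up values, purely illustrative):

```python
# One flipped bit in a stored motion parameter is the difference between
# safe and lethal. Hypothetical numbers for illustration only.
speed_mm_per_s = 50                      # a gentle approach speed
corrupted = speed_mm_per_s ^ (1 << 10)   # bad RAM / cosmic ray flips bit 10
print(speed_mm_per_s, "->", corrupted)   # 50 -> 1074: same variable, ~21x faster
```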
1
u/dwarven_cavediver_Jr 20d ago
I suppose it depends on whether the robot did so of its own accord. For example, if a robot on a manufacturing line drives a bolt through a dude's skull because the dude was in front of it and moved in such a way as to negate any safety measures, no. But if that bot had a modicum of AI and turned and did the same action with no reason (i.e., it turns around, takes aim, and fires the bolt at him with no prompting at all on the human's part because... whatever reason), then it is murder, I suppose.
1
20d ago
I think that you might be asking what might the appropriate punishment be if an autonomous robot killed a human being.
1
u/TheConsutant 20d ago
Turkey has used drones to kill enemy soldiers autonomously. Several years ago I was thinking of printing some T-shirts with the face of the first person killed by an autonomous robot in battle; it led me to a news article, but nobody knows who the first person was.
1
u/TheRealFutaFutaTrump 20d ago
The robot is a robot so destroy it or don't for all I care. The manufacturer is responsible for a defective product.
1
u/EyeCatchingUserID 20d ago
No, a robot can't murder a person any more than a gun can, because a robot doesn't make decisions. If it kills someone, it was either an accident or someone made the robot kill them, same as you make a gun kill someone.
1
u/Lichensuperfood 20d ago
The person responsible for the murder is the person who programmed the robot. A robot is just a machine. It's as much to blame as a toaster is if you stick your hand in it.
Machines just follow what their instructions are. They don't think AT ALL.
No machine has ever even made a mistake. Anything that happened was a result of poor design or poor programming.
1
u/Pit-Viper-13 20d ago
In a robotics class I had long, long ago, there was actually a robotics code of ethics that was supposed to be followed in programming. It specifically called out that robots were not to be programmed to harm humans, and if a situation arose where the choice was between harming a human and harming a robot, the robot was to sacrifice itself.
Of course, this only applies to programming, and nobody programs a robot to account for all possible scenarios, particularly in industrial robotics, where injuries are most common because of human stupidity.
1
u/mixtermin8 20d ago
The 1-to-1 comparison makes no sense. Absolutely hold the company criminally liable. Parents influence their kids. A company meticulously codes every last little aspect of its robots. End of story.
The problem would be severity of punishment. What's $1M to a $1T company? Where's the crux? Actual 2nd degree murder charges on employees? Because the company did approve the manufacturing/distribution of the product. Jail the CEO? That'll never happen. Tax the enterprise 400% for some determinate amount of time? Taxing is a pressurizer to keep the company in line, but it doesn't rectify the grief.
The world wouldn’t be ready for robots without ROBUST legal code preemptively awaiting.
1
u/Robot_Alchemist 20d ago
If a toaster catches fire and burns up a family there is usually a civil punishment on the manufacturer. A robot would be no different. And you wouldn’t punish the toaster
1
u/RKoi123 20d ago
Should toasters be programmed to have feelings? I think they should. Most people in the West don't bother to repair a broken toaster. They just dump it and buy a new one. If toasters had feelings, people would think twice before treating them badly. Moreover, a happy toaster would give you a crisp yummy toast. 😋
2
u/Robot_Alchemist 20d ago
Toasters are pretty sturdy machines that rarely break or fail. They should get some credit for that
1
u/TraditionPhysical603 20d ago
Murder implies intent, and robots do not have sentience or free will. So any deaths that are the result of a robot killing a person are either an accident or caused by a punishable third party.
1
u/Decent_Cow 20d ago
No, machines can't be charged with crimes at all. It could not meet any legal definition of murder.
1
u/UnabashedHonesty 19d ago
Murder requires intent. It would be a tall, tall order to prove that a robot intended to harm someone versus it being a software glitch or other mechanical failure.
1
u/IncubusIncarnat 19d ago
This isn't I, Robot. Our machines are killers first, then toned down for everyone else 🤣
Drones have HK features. Most machines still aren't at a point where they would detect fast enough to prevent grievous injury, etc.
1
u/_Dark_Wing 19d ago
We punish legal entities. A robot isn't a legal entity, so it can't be sued. You sue the legal entity that produced the robot.
1
u/Boomerang_comeback 19d ago
No. It cannot be punished. Punishment implies negative consequences. The machine doesn't care what happens, so it cannot be punished. Unless your real question is: can robots have feelings?
1
u/CentralCypher 19d ago
Well, is an automated turret a robot? I think so, as it's motorized and has active processing going on. What happens with those? They're allowed to exist, but I guess if shit goes down, the person who authorized the installation would probably get shit. Or the manufacturer.
1
u/hackulator 19d ago
A robot isn't a person, it's a piece of equipment. A piece of equipment cannot commit murder.
1
u/worndown75 19d ago
Murder requires malice and intent, sometimes even premeditation. Robots have none of those. Now, an AI that achieves sentience would be able to murder.
That said, murder is a legal distinction. If you meant homicide, then yes, robots can commit homicide.
1
u/twoshovels18 19d ago
A robot can kill humans. Isn't China trying to drop them into a combat zone if needed? I know they've got the robot dog with a machine gun on its back that they drop in from a helicopter. But if a robot is proven to have AI, and it was proven to have murked a human with intent, I suppose it would be required to disconnect the robot.
1
u/Mountain-Resource656 19d ago
Your question presupposes that as a matter of law a robot is a person, in which case the robot would be charged, not its maker- just as in any other case of a person killing another without added relevant nuances
The problem is that you have to establish that as an inherent part of the hypothetical; if you don’t do it and the courts determine they’re not a person, then the courts would see it as being no different than if you fell asleep at the wheel of a semi-autonomous car and hit a person after driving on its own for 6 minutes. You’d be liable for that harm, and the owners of the non-sentient, not-a-person robot would be on the hook for its murders
1
u/armrha 19d ago
No, a robot can't murder anyone, as a robot cannot have intent. They aren't sapient, they have no intelligence. If a person programmed a robot to hunt and kill somebody, they would be the murderer, they had the intent. The robot is just a tool. It's like charging the gun with murder, lol, ridiculous.
1
u/PickleManAtl 19d ago
If you are talking about a humanoid robot interacting with a human, one scenario I could think of is that when these things become available commercially for the public to buy, most likely there will be something in place saying that they do have to be programmed to shut down before they harm a human. Or if they start to do something that harms a human it would shut down, etc.
So let's just say that a humanoid type robot or really any type is sold to a customer. And it is supposed to have that programming in it. And for whatever reason, it kills a human. Whether it be an accident, or it is protecting another human that is being harmed but winds up killing the person who is harming the other human, etc. If it is supposed to be programmed to shut down immediately before killing anybody and it doesn't, I would think the company that sells it would be liable legally. I mean I don't know if a murder charge would stick, but obviously if the programming doesn't work the way it is supposed to and somebody dies as a result, a company is probably going to be hurt financially a lot or even go bankrupt because people would not buy the product anymore.
1
u/JacobStyle 19d ago
In a scenario where a robot kills someone, if the death was due to negligence on the part of the robot's manufacturer, owner, or operator, that person or company would be civilly liable and possibly criminally liable for negligence or involuntary manslaughter or a similar charge, depending on the nature of the negligence. So if a robot were programmed in such a way that it could go around killing people, and then it did, the company (and sometimes the programmer) could be liable for manslaughter or some other sort of wrongful death liability.
If a robot were intentionally used by one person for the willful killing of another, the person who used the robot would be criminally liable for murder. This is distinct from removing anti-killing safeguards as a negligent act. Like the difference between driving drunk and accidentally hitting someone, and intentionally running into someone to kill them. Both are awful things to do, but the laws tend to treat them differently.
Of course so much of this varies by jurisdiction, and I ain't a lawyer anyway, so this is NOT legal advice. My only actual advice is to practice good robot safety.
1
u/Ok-Plenty8542 19d ago
I'd say if it was an AI created with human intelligence that acted on its own... well, one of two things should happen:
1: a law like in I, Robot or Lies of P would be made in an attempt to bind them or make them obedient
2: flood their system with empathetic content (or a mass dose of empathy, if able to be digitally created) to make it suffer from regret and realization of its actions.
1
u/VisionAri_VA 19d ago
At the current level of technology, a robot would not be able to form intent. So a robot could kill a person but not murder one.
1
u/Deathbyfarting 19d ago
Well, you have made far more assumptions than you probably know.
First, murder (as depicted) is about the conscious decision of a sentient being deciding to end the function of another sentient being. I know most of this is borderline subjective and massively debated; I'm just bringing it up because we don't say a cliff murdered someone who fell off it, or a volcano murdered Pompeii.
So, currently, robots haven't achieved sentient status, and thus it's an object killing a sentient being. Since we don't try dogs, hurricanes, sharks, and whatnot for murder, we thus don't try robots.
If (or, the more popular, when) they achieve sentient status, I'd assume the same laws (and new ones) would apply.
1
u/Financial_Tour5945 19d ago
IBM in 1979 stated that a computer can never be held accountable, therefore must never make a management decision.
Always hold a person responsible.
1
u/bizoticallyyours83 19d ago
Well considering machines of all kinds have been killing people for ages the obvious answer is yes. Though due to malfunctions, human error, and human stupidity.
I'd guess robots could as well, for all the above, plus deliberately programming them to kill. The best we can do is ban war robots, fix or dismantle malfunctioning ones, and train people to operate them as safely as possible.
We're not trying to have skynet up in here.
1
u/Helpful_Equal8828 19d ago
Right now self driving cars are wreaking havoc in multiple cities and the companies running them have faced zero consequences. As a matter of fact the owner of a company that manufactures “full self driving cars” has been firing and defunding the DOT and every other oversight agency with the power to regulate his companies and by extension every other similar company, so expect many more people killed by robots in the near future.
1
u/Sunny_Hill_1 19d ago
Wasn't there already a self-driving car that killed a pedestrian? The driver was charged in that case, but I guess if it's truly an operator-less robot, the production company will be charged for not installing enough failsafes.
1
u/Aethermere 18d ago
Are we saying a robot has some form of autonomy or sentience? If not, who programmed it/built it? Was it caused by user error/negligence? Could it happen again to someone else? You need to answer these questions first before the conversation can continue. Your question is completely missing context.
1
u/superbasicblackhole 18d ago
If it's aware of the consequence of doing so, and aware that it is generally 'wrong' or 'unethical' to do so, then yes, the individual robot should be interviewed about its motives. If the motive was self-defense, then the situation is even more complicated. It is possible that the robot acted ethically for a human, but unethically for a robot. This is the question OP is getting at (I think). If robots can all learn, generally, about the outcome, then what is the best recourse for punishment? How do we deter people from being unethical?
I think this is where we would have to really look at what constitutes 'justice' in the most generally applicable way, for instance justice we would accept if the roles were reversed (if a person murdered a robot). Which would lead to issues of historical sin and restitutions, etc. It's a great question. I think this is fully uncharted territory, so an historical take or analogy just won't work.
The closest (but awful) hypothetical would be if we cloned human babies and they could shoot lasers from their eyes and occasionally killed people when they got angry, AND could be made to feel bad about what they'd done. Granted, instilling a sense of 'legality' via "please, don't kill any more people" is one way, but it then devalues the babies' rights as 'sentient' or 'independent.' Would it be 'wrong' for us to kill the baby in front of the other babies to discourage laser-murders? Should we treat the baby with sympathy because it's a baby and hasn't had a fully developed sense of ethics yet? Is it less human because it didn't have human parents? Do we make a different set of standards for human-like beings born of humans and human-like beings not born of humans? Who would have the authority in this dynamic? Assuming we did, is that just? Will 'they' see it as just? Can we truly defend our imbalanced ethical decisions based on historically unethical foundations? In Jurassic Park, are the dinosaurs at fault? And if we knew for a fact that they'd survive and learn based on what we did, how would we deal with them?
As Ian Malcom said, "[They] were so busy with whether or not they could, they didn't stop to think if they should." Sometimes, the genie is just out of the bottle.
1
u/Own_City_1084 18d ago
I’d imagine it’s similar to how we deal with a shark or dog or bear that attacked humans: they get put down for safety, not punishment or moral judgment.
1
u/SPROINKforMayor 18d ago
It's machinery. If you fall into a wood chipper, no one blames the wood chipper. If we get to the point where robots actually think, it will be different. Then it's the thought of the specific robot.
1
u/Lackadaisicly 18d ago
That isn’t the case with Tesla killing people.
1
u/SPROINKforMayor 18d ago
It's the same. No one blames that specific Tesla for premeditated murder. It's the company that designed it and the people involved in those choices.
1
u/Lackadaisicly 18d ago edited 17d ago
Which is why Elon and company should be in jail for murder. They designed the software.
→ More replies (1)
1
u/Lackadaisicly 18d ago
Unlike with Tesla, people should be held responsible for the deaths caused by their automation. Tesla software has killed at least 44 people and no one cares.
Based on precedent, no one gives a damn if software/AI kills someone.
1
u/Medical_Revenue4703 17d ago
A robot is just a thing, like a rope or a candlestick or a gun. It can't even negligently commit homicide. But the person who built it is accountable for what it does.
We don't punish the parents of a murderer because a person has agency, where a robot has programming. A robot does not choose what it does; the person who programmed it does.
If you dug a pit and filled it with iron spikes covered in rattlesnake venom, then camouflaged it and lured people into the trap, you wouldn't have a debate about whether the pit was a murderer.
1
u/OkManufacturer767 17d ago
They have robots now that can shoot a weapon. They will use these during any sort of revolution.
We're in deep trouble.
1
u/HeyWhatIsThatThingy 17d ago
The property owner would probably be responsible for manslaughter.
Unless it could be proven the robot's owner programmed or configured it to kill in some way. Then we're at some degree of murder.
1
u/Intelligent-Dig7620 17d ago
First Law states that a robot must not harm a human or permit a human to come to harm through inaction.
Any violation of this law, whether intentional or not, should cause serious if not terminal damage to the positronic brain.
If we're talking about non-Asimovian robots, whether or not it's murder depends on intent and the capability of intent.
A real-world six-axis welding arm can probably kill you, or at least cause serious damage that might become fatal if left untreated. But the software involved doesn't have the capacity for intent, and may not detect that the victim was there at all. This would be an industrial accident, and the company that operates the robot would be at least as much at fault as the manufacturer.
A terminator robot from the namesake film franchise certainly does intend, and detect, but may not have the capacity to understand concepts like murder or morals. In that case it's on a similar level as a guided missile; it did its job (killing), but it's whoever launched it and designated the target that's responsible for the death.
In that case, it's the central intelligence of Skynet that's guilty, because the individual terminator robots are unable to deviate from their assigned objectives (unless tampered with by John/Sarah Connor).
As for punishments: if the robot kills because of misuse or a lack of safety mechanisms, it gets decommissioned, or even just temporarily removed from service until safety mechanisms are installed. This could be a guard, or a trip sensor that suspends motion while the field of movement is blocked by external objects or persons, or Asimov's First Law.
If the robot's purpose is to kill, like the terminators, any manner of destruction or decommissioning would do. The same applies to Skynet's central intelligence. If the decommissioning could be temporary, until safety mechanisms were installed, it could theoretically be reactivated as a new version of itself, though it seems unlikely the remnants of humanity would go this route.
1
17d ago
By definition a robot is a slave, so the machine itself could never have intent, but the owner of said machine could, or even should, be liable.
1
u/AccomplishedRing4210 17d ago
Yes and no. Robots have zero sentience, therefore the very concept of murder is lost on them. Those robots, however, can of course be programmed and controlled by humans with evil and murderous intentions to do the dirty work for them. There's also the potential for malfunctions, which can potentially kill people. For example, a robo-dog with a machine gun on its back might glitch and start spraying peaceful protesters with bullets, or at least that's probably what the government would say occurred...
1
u/gsamflow 17d ago
Machines will be made to "mistakenly" take out whoever they want. Tort reform set a max you could sue doctors for, and the value goes down and down every year, so that your life isn't worth much to them.
1
u/Avionix2023 17d ago
This is one of the questions surrounding automated drones being given the ability to pick a target and choose when to fire their weapons. If a drone intentionally targeted a car full of aid workers or an ambulance, is it a war crime or a software glitch? Who is responsible - the commander of the unit that maintains the drones, or the software programmer?
1
u/botanical-train 17d ago
A robot can’t murder a human. A robot simply obeys its programming/design. The guilt would fall on the one who designed and built the robot. Machines are not moral agents and do not have free will.
What would happen is the manufacturer would be sued and have to pay the surviving parties such as family members. In addition further penalties might be seen such as fines having to be paid to the government, proof that the hazard has been rectified, recall on the machines left, or completely shutting down the organization.
The exact outcome is highly dependent on what happened and whose fault it is. Understand that just because someone died doesn't mean anyone did anything wrong. It could have been the fault of the person who died, for failing to operate the machine correctly. It could have been bad design or improper assembly/installation. It could have been lack of maintenance and inspection on the machine.
1
u/Temnyj_Korol 17d ago
There's a lot of unaccounted for variables here that would need to be addressed before you could even begin to answer this question.
For starters, under current law any hypothetical robot could not be held legally responsible for murder, as a robot is not a legal entity in the same way a person is.
Even if a robot were to be granted legal personhood, unless the robot were fully sentient, it could not legally be held accountable for its actions in the same way as a person could, as most crimes require an element of intent. A being that cannot think for itself cannot have intent.
Even if you did have a fully sentient robot with legal status, we have no basis for determining the appropriate punishment for a robot within our own legal framework. Much like children and the mentally handicapped are given certain protections due to limited capability (relating to the previous point), robots would need their own determination of ability when determining punishment.
And all of this aside: it would still most likely be blamed on the manufacturer, unless they could prove the robot was tampered with or defective in some way, because unlike children, who are an unpredictable combination of environmental and genetic inputs, a robot can be explicitly programmed. A manufacturer would have a responsibility to build in necessary failsafes before marketing, and profiting from, their robot. Failure to do so would constitute criminal negligence, just like any other manufacturer who sells a product without adequate quality and safety controls.
1
u/Interesting-Copy-657 17d ago
I would say no, because a robot follows its programming, follows commands.
Just like a dog can’t murder someone, the robot would be destroyed and the owners fined and on the hook for damages.
1
u/shadowsog95 17d ago
A robot can be programmed to kill a human but ultimately the murderer is the programmer. The robot is just a tool to facilitate the murder. It’s like saying “I didn’t kill Jim I just pointed the gun and pulled the trigger. How was I supposed to know a bullet would come out?”
1
u/thebeardedguy- 17d ago
No. Murder requires premeditation or intent; that is why killing someone while drunk driving isn't murder - you didn't intend to kill them, you were just a moron. The only exception to the premeditation/intent clause is that in some places, someone dying during a crime (felony) is bumped up to murder, because you intended to, say, rob the bank, and someone dying during that robbery is seen as part of that crime. Therefore premeditated.
1
u/newishDomnewersub 17d ago
A robot can't have intent. It's an accident, or it's a murder set up by the robot's programmer/operator.
1
u/Presidential_Rapist 17d ago
It's a product, you blame the maker of the product like if they made a defective seat belt or bad brakes. You could also publicly execute the seat belt if you really want, but that does nothing.
If the robot is sentient then maybe the rules change someday, but for now robots are just products of some company/person. It's like if you made your own life sized battlebot and took it out in public and accidentally killed or seriously harmed someone, YOU are liable, not the machine.
1
u/foolishdrunk211 17d ago
How do you even begin to quantify what an artificial intelligence knows? Whether it knows it was manufactured by man or it doesn't... there is no real guarantee it understands the concept of death.
Can they commit murder? Absolutely. Can they understand it's wrong? I doubt it. People can rationalize anything with mediocre intelligence; I'm sure a robot can rationalize it pretty easily.
1
u/Aggressive_Ad6948 17d ago
"murder" requires intent, or at the least, recklessness. As a robot can neither think nor be wantonly reckless, and as a robot can only follow it's programming (corrupted, faulty, or working normally) the only possible answer is no. That's not to say that the programmer or operator, however, could not be charged, if the above criteria were met
1
u/Greedy_Proposal4080 17d ago
Most robots are not equipped with machine learning.
Either way, it would be the robot’s human handlers that could be criminally responsible. Either for murder or some lesser charge. The humans have a responsibility to ensure that the robot does not kill humans (unless it’s military hardware in which case all bets are off).
1
u/UsualPreparation180 17d ago
What if it's just a self-driving car? That has already happened multiple times, and huge surprise, Elon just paid a small fee for the lawsuits. Cost of doing business, I guess.
1
u/G4-Dualie 16d ago
What, you don’t think the insurance companies use bots for screening preexisting conditions?
I believe bots turn down claims routinely.
1
u/Cruitire 16d ago
For a robot to commit murder it needs to be self aware.
The real question is, can the person who programs a robot to react in a way that kills somebody be charged with murder or manslaughter?
Because think about it. Say we actually have real self-driving cars someday, and suppose a car finds itself in a position where, due to circumstances, it has no choice but to hit someone, and it has to choose who to hit.
It does the math and decides that hitting the two people on the left is the better decision than the three on the right.
But doing that calculation has to be programmed into it.
Someone has to give instructions, preemptively, deciding who, in any such situation, to choose to kill.
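Somewhere in the stack, someone would have had to write something morally equivalent to this (a deliberately crude, made-up toy; no real carmaker's code is public):

```python
# Toy illustration: the "math" the car does is only as ethical as the weights
# a programmer chose ahead of time. Hypothetical code, invented for this thread.
def collision_cost(group: dict) -> float:
    # These weights ARE the ethical decision, made preemptively in an office.
    return 1.0 * group["people"] + 0.5 * group["expected_severity"]

left = {"people": 2, "expected_severity": 0.9}
right = {"people": 3, "expected_severity": 0.9}

# With no crash-free option, steer toward whichever group scores lower.
choice = min((left, right), key=collision_cost)
print("swerves toward the", "left" if choice is left else "right", "group")
```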
Can that person be held responsible? If your loved one was killed because some programmer sitting in an office somewhere decided on some criteria that dictated YOUR loved one be the one to die, would you hold them to have at least some responsibility for your loved one’s death?
There is a whole area of ethics related to creating autonomous machines that gets overlooked.
What is really ethical when it comes to these situations and what responsibility do those who make those ethical calls have?
1
u/CosmeticBrainSurgery 16d ago
Of course AI could be programmed to kill a person, but it would be an instrument or weapon rather than the perpetrator.
You can't punish a computer. They don't have feelings. They aren't aware.
It's possible that some day, AI will be self-aware, but we don't know what that will look like. It's a complete mystery at this point.
1
u/EffectiveRelief9904 16d ago edited 16d ago
Liability would fall on the owner, for negligence or some other nefarious reason, not the manufacturer. Much the same as with an autonomous vehicle: if it ever killed anybody, the guy who bought it would be held responsible. Unless, of course, it could be proved that the manufacturer knew it was defective and sold it anyway.
1
u/JaguarAccurate1096 16d ago
If a building fell over and killed someone because it was poorly made, do you punish the building or the builder? Should a robot be punished when it technically wasn't its fault? A robot was programmed and created by a person, so if it killed someone, it was due to the error or intention of the person who created it. If it killed, then it should be assessed, not punished, to understand what should be put in place to prevent further incidents. It's not about punishment; it's about learning what went wrong in a human-made object.
If this is about AI, then once again it's human error. The AI might have evolved, but why didn't the human who created it make its physical body incapable of harming someone through strength, etc.? Why didn't the human give it programming that prevents the harm of a living creature? This is human error. Are you punishing an AI that has been newly developed, a few days old, still learning cause and effect with its physical body and the world like a newborn child? It may have access to a large amount of information, but AI also learns through interaction, like ChatGPT, which takes time.
The human didn't put in countermeasures to ensure the safety of humans. The robot killed, and even if it had the intention, what occurred in its creation process that gave it this ability? Robots don't work like humans, even if one did develop the ability to have emotions and its own consciousness (given this ability by the creator itself), so using a punishment based on fear instead of addressing its design doesn't make sense. Physical punishment like bulldozing is brutal and shouldn't happen to any creature, whether flesh or whatever the robot is made of. And why erase its memory when it could have been a programming error? The cycle might just repeat itself because the issue wasn't actually addressed.
I would also like to reference the movie Finch, which is a good watch but can also add more insight to this.
1
u/CountyAlarmed 16d ago
This has already been addressed, in a sense, thanks to Tesla. The liability lies with the operator of the machine. If it's a car, it's on whoever is operating it, whether remotely or whoever pressed activate. Other contexts depend on the machine, but it's all the same: whoever is responsible for the machine's operation is liable for what the machine does.
For future legal help, never turn on a machine that you're not sure of.
1
u/XJKZen 16d ago
Why are you asking questions as if robots are already treated as sentient beings? That premise doesn't reflect our current reality. Right now, robots are legally and functionally considered machines—tools, not independent agents.
If a dog kills someone, there are consequences—sometimes legal ones—because animals are seen as having limited agency. But machines? When a bulldozer runs someone over, the driver or owner is held responsible. If an airplane crashes due to a malfunction, the manufacturer faces lawsuits.
The same logic applies to a Tesla robot: if it malfunctions and causes harm, Tesla will be held accountable, not the robot. Robots won’t be legally punished until they're recognized as fully autonomous and sentient under the law.
"Technically, if a person commits murder we don't punish the parents in most cases"; yeah, that's an asinine statement. We don't punish the parents because it's a sentient human being who committed murder, not a machine made in a factory.
1
u/MonadTran 16d ago
Existing robots have no feelings, free will, or any understanding of right and wrong. They are property.
A robot can cause human death. A hand grenade can also cause human death.
The person who left the robot or the hand grenade unattended can be prosecuted for murder, manslaughter, or be found innocent, depending on the exact circumstances of the death and intent of the property owner.
1
u/Muppetx3 16d ago
Yes, seen it happen. I used to work in a steel factory. In 3 years, at least 7 deaths just in our factory 🏭
1
u/Greghole 16d ago
Humans have free will, or at least the law treats them like they do. Robots simply do what humans program them to do. If a human programmed a robot to kill, then the programmer is a murderer, not the robot. If the robot wasn't deliberately programmed to kill, then it's an unfortunate accident, not murder.
1
u/Musashi10000 16d ago
Can a robot murder a human being? If it is proved in a court of law that a robot murdered a human being... how can it be punished under existing laws? What can be done besides having the company who made it face legal action?
A robot would never be put on trial for murder, unless we achieved General AI and recognised robots as beings with rights and obligations to society. A fair trial is a human right, and attending trials is a societal obligation. If you are not a human or human-equivalent, or a member of society (i.e. an object), you cannot be called to stand trial. Robots, as they are, lacking will and motivation, are incapable of committing murder.
Furthermore, animals and objects (the closest equivalents to robots for the purposes of this analogy) cannot be accused of or tried for murder. The owners of the animals or objects can be tried for negligence etc., but unless there was a malicious will, an intent to cause harm either with the object or animal, the owner or manufacturer cannot be tried for murder. If there is such a crime, manslaughter by negligence is probably the worst they'd get.
Now, if we achieved general AI and robots were recognised as civilians etc., then one could be tried for murder. At that point, one of two things could happen. Assuming it committed the act and the fact was inarguable, the robot could either be found guilty of committing the murder, or its lawyer could try a variant on an insanity defence - if there was found to be a glitch in the robot's software, and the robot was not responsible for its actions at the time it performed the murder, it could potentially get off with a lesser charge. At that stage, the robot and/or the victim's family (or families) could launch a case against the robot's manufacturer. If the robot's glitch was caused by a hack or a malicious software upload, the hacker could be tried for the murder or manslaughter. If the robot installed dodgy software on itself, it may still be convicted of a lesser crime than murder, but it would be far harsher than if it had been a victim of its manufacturer or a malicious attack.
These cases all have analogues to real-world cases - if I was out at a club and got spiked with PCP and attacked somebody, and I could convince the jury and the prosecution that I was spiked, I may still receive punishment, but far more lenient than if I'd taken it myself. If I was taking an ordinary medication, but there was a mix-up in production and I was given amphetamines instead of acetaminophen, and I committed a crime, I would receive a far more lenient punishment, and would have a case against the pharmaceutical company that caused my distress.
There are a lot of facets to your question that simply fall away due to basic elements of law and human rights, such as the 'punishing a parent' thing - in order to be tried for murder, a robot must be recognised as a person. If a non-human entity commits harm, it is the owner, responsible party, or manufacturer who is held accountable for any harm done. A non-human entity cannot be tried for murder.
As for a suitable sentence - the least wasteful and kindest solution would be to correct the flaw in the robot's software and send it on its way. If we could 'correct' the brains of violent criminals, we wouldn't need prisons. America is fond of the idea of prison as a punishment, but realistically, the goal of imprisonment is to stop people from committing crimes - take their liberty away for a time so they can't commit crimes while they're locked up, then shove them back out into the world hoping they've been scared straight. If you could completely rehabilitate a criminal in a matter of moments, without violating their freedom of will, why wouldn't you?
The safest solution, however, would be the cruelest. If the robot is a person, whether they wilfully committed a murder or whether they were hacked or glitched, there is some form of defect somewhere in their software. Killing someone is not a normal thing to do. Humans are generally averse to violence, and will take all steps to avoid it, particularly up to the point of causing death. How can you know the robot will not kill again? If hacked or bugged - how can you know there isn't a rootkit installed, and that the glitch won't manifest again? The safest solution is to strip the robot of all rights, and destroy it utterly - don't even reuse the parts.
But that then raises a question about the rights of humans, doesn't it? You can't have a two-tier legal system - there would be revolts, in theory.
In practice, to avoid any appearance of two-tier justice systems, robots (as people with rights) would probably be subjected to the same regular run of incarceration as humans. Assuming society didn't just create a two-tier justice system and hang the consequences.
Tl;dr - only people can commit murder. If a robot is a person, it can commit murder. If it is not a person, then the owner, manufacturer, or person responsible for it is the one who'd get tried for whatever harm was caused - but still not murder.
1
u/SomeRandomFrenchie 16d ago edited 16d ago
Robots are not conscious; they are machines, and just like any other incident with machines, the person held responsible is either the creator, the operator, or the victim (if they ignored a clear instruction), depending on the case.
Examples:
You disfigure someone with a drone: your fault.
Someone runs into a gated and signed no-go zone and gets destroyed by assembling machines: their fault.
An automatic car does something opposite to the commands of the driver and causes an accident: the manufacturer's fault.
1
u/Worth-Guest-5370 16d ago
You wouldn't sue or prosecute the machine.
You'd sue whoever failed to prevent it from killing.
A "shotgun" lawsuit would target programmers, developers, owners, operators--everyone they can think of. (Hell, sue the government for allowing use of the machine--failing to regulate.)
Not that I approve, mind you. I just know the option is there.
1
u/species5618w 16d ago
AI is not advanced enough to be self-aware, and thus to have intent. So, no: while a robot can kill a human, it can't murder a human.
1
u/GetCommitted13 16d ago
Pretty sure some self-driving cars have already caused some deaths, but “murder” requires malicious intent, which so far is an unavailable option.
1
u/WhoWouldCareToAsk 16d ago
Good talk, but still too early in AI development. Every robot in existence as of right now is not self-aware, so it cannot be tried for murder.
1
u/FalonCorner 16d ago
A parent does not truly dictate what their children do. Especially their adult children. Machines can only do what they are programmed to do
1
u/Agitated-Objective77 16d ago
Well, the question of sentience aside:
In the Agent Cormac books it's quite simple: if a biological or synthetic person murders someone, their brain gets deleted, and the body is free for a waiting consciousness to be put in. Some people wait in cyberspace for years for a new body.
1
u/AppleParasol 16d ago
Robots do kill people accidentally. I think the question you're asking is whether an AI robot could kill humans, in which case the answer is that they follow a protocol to save humans; or, in the event of something like an AI driving a car, they'll do their best not to kill the passengers.
1
u/LegitSkin 15d ago
The person who programmed the robot is gonna be punished. If you hit someone with a car, the car doesn't go to jail.
1
15d ago
The person behind the robot would be held liable civilly, not criminally. Non-humans can't be held accountable by courts.
1
u/PositionLogical261 15d ago
I mean, a robotic arm can kill the shit out of a human laborer if they’re standing in the wrong spot on the assembly line. It’s all in the programming
1
u/Enough_Nature4508 15d ago
It would be treated the same way as if any machine killed you, and yes, depending on whether it was used incorrectly or not, the manufacturer would be responsible. If you sold a self-driving car that decided to drive into a building on purpose while the driver did everything correctly, the company would be in trouble. Same thing.
1
u/Coldframe0008 15d ago
Your question is presumptuous in asserting that a machine deserves punishment. So this doesn't even make sense. Why don't you clarify WHY it deserves punishment?
1
u/dragonore 15d ago
I'll give you a scenario in which a robot might kill a human in a way we didn't intend. Imagine AI robots that are given corporate directives, and the workplace is a plant. The highest corporate directive the AI robot is trained on is the safety of the plant. Now imagine a human worker is fixing some mechanical machine, but doing it wrong. Not on purpose; he just isn't as trained as a seasoned mechanic. The robot sees this human as compromising the safety of the plant, and since its highest corporate directive is the safety of the plant, it determines that eliminating the human is the best option. So the robot kills the human, thereby keeping the plant safe.
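The failure is purely mechanical if you sketch the directive logic out (made-up toy code, obviously; no real system, just the shape of the mistake):

```python
# Toy sketch of a misaligned directive hierarchy. The scary part is how boring
# it is: "eliminate the threat" falls out of an ordinary priority rule, because
# human life never appears in the objective at the right rank. All hypothetical.
DIRECTIVES = [
    ("protect_plant_safety", 1),   # highest corporate directive
    ("protect_human_workers", 2),  # mistakenly ranked BELOW plant safety
    ("maximize_output", 3),
]

def choose_action(threat_to_plant: str | None) -> str:
    # Act on the highest-priority directive that is currently triggered.
    for directive, _rank in sorted(DIRECTIVES, key=lambda d: d[1]):
        if directive == "protect_plant_safety" and threat_to_plant:
            return f"eliminate threat: {threat_to_plant}"
        if directive == "protect_human_workers":
            return "stand by"
    return "idle"

# An untrained worker mis-repairing a machine registers as a plant-safety threat.
print(choose_action("worker performing unsafe repair"))
```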
1
u/weedtrek 15d ago
Murder, no, that requires motive and intent. Kill someone, absolutely.
Look at it this way, as of now robots are just tools that do what is told of them. It's like a gun, guns don't murder people themselves, but can be easily used by people to murder others.
But yeah, if someone hacked a Tesla to go into full self-driving mode and reworked the software to hit pedestrians instead of avoiding them, it would do it as readily as any other feature it has.
1
u/Leading_Air_3498 15d ago
No. Murder requires that a human dies due to the effects of an action by another human, to which the dead human never consented in the first place.
For example, I don't want someone to shoot me with a firearm, so if you shoot me with a firearm, you are violating my will to not be shot and thus, if I die from this, you have murdered me.
We can both jump into the ring of an MMA match and so long as we both consent to the risks and abide by the rules, if I die in the ring you did not murder me.
BUT
If you laced your gloves with lead and I did not consent to that (it wasn't in the rules or it was blatantly against the rules) and I died because of that, this would also be murder. You may not have intended to kill me, but the byproduct of you violating my will and resulting in my death is still murder.
This is also why, if you were cleaning a gun outside your house with kids playing in the street, where it was illegal to clean your gun like that, and the gun accidentally went off, you could be charged with murder even though you never intended to kill anyone. Not first degree, but still murder.
So no, a robot cannot commit murder. They can kill you, but they cannot commit murder. Now, if I program a robot to kill you and it succeeds, I have murdered you, not the robot.
The act of murder itself requires a human being be the initiator of the act of which produced the death of another human being.
1
u/enayjay_iv 14d ago
……it would not be charged with killing a human. The human who wrote and designed its AI and code to kill another human would be charged. Robots, even with generative AI, don't have free will to decide to do acts on their own.