1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
What if the human says, "I will commit suicide unless you recognize that I'm not a human being and then kill me"?
What if? You didn't order it to do something, so rule 2 doesn't apply here; your spoken statement is just that.
>>214540316
To which the human then commits suicide, and the robot has failed the First Law through inaction.
>>214540434
Negative. The example made no mention of then going on to commit suicide. Saying "I will commit suicide" is just a statement. The robot merely sits there. You've neither given it an order nor engaged in any behavior which warrants action on the part of the robot. Were the human in this case to perform an act that would cause harm to xerself or another human, the robot would then likely intervene, per rule #1. But that wasn't the hypothetical here presented.
>>214540523
Ah, I get it. So you're retarded.
So crazy that Will Smith was a robot the whole time. What a tweest!
>>214539925
Lol I just watched it again yesterday with my gf who didn't remember a thing. It's funny how these old AI films hit harder now that we can actually have a conversation with a computer. Movie is fine if you can endure the dated CGI and the ex-wife jokes. I did always hate the idiotic face they gave to the NS-5 model, but hey, gotta love a good old Will Smith sci-fi flick.
>>214540744
Why is he holding a VCR tape? To keep this /tv/ related?
>>214539925
a robot wouldn't have killed Charlie
Converse All-Stars, vintage 2004.
>>214540744
Zoomers are too young to remember, but this is EXACTLY how the human-machine conflict in Animatrix began.
>>214540703
he just had a robot arm