
Is it morally right for a robot to make the decision to kill?



  1. #16
    Rad_Archer (Senior Member)
    Join Date: Dec 2006
    Location: The Cold Water - Santa Cruz
    Posts: 3,058


    A robot should never be given this responsibility. It violates the idea of "consciousness."

    Personal ethics in life come down to one's own judgment, based on one's own integrity, morality, discipline, and spiritual, religious, or mental ideals.

    When one cannot function within a group because of conflicts with these larger ideals, one finds oneself at odds with the greater number, and so laws are put in place. Laws are usually unnecessary at the highest levels of education and morality (utopic, but not realized). When laws are broken, one's morality is enforced through justice; this is done by others because one was unable to restrain oneself, due to a lack of morality, judgment, intelligence, or sanity, or some combination of all of these (in general).

    Both of these aspects have limitless variation and therefore require conscious thinking. Life is not a pattern made solely of 0s and 1s, which is all a computer or robot can compute.

    Unlimited examples can be given, from something as simple as a child's petty theft to the accidental, unintentional killing of another. If all killing were punishable by death, or if all theft required capital punishment, then you can see why the robot would have to have a conscience.

    Another issue, if one were to try to create an "AI," would be how to base it. Because we are talking about life and death (or crime and punishment), we are also speaking in terms of government. In life there are classes; this is true of all societies, albeit some are obviously fairer than others. In a militant dictatorial society (such as China or Cuba), the government itself is super-wealthy. This allows the government to spend as it wishes without public involvement. In the USA we have what is supposed to be a "Republic," with the idea that we will elect the most intelligent or experienced to represent us. We hope that these elected officials will represent our wishes and look out for our best interests (which in general they do not - but that's another topic). Although all people should have rights, a person with an IQ of 40 and one with an IQ of 160 are not going to operate in life the same way, nor will they have the same values (in general).

    So there is another question: who will decide which standard this AI gets its level of judgment from? Would it be the ideals of everyone in general, or the view of the most intelligent? Again, one would have to have the conscience, experience, and education to make a decision of this nature, something an AI, I do not believe, can ever possess without the life factor...

  2. #17
    Senior Member
    Join Date: May 2006
    Location: your TV
    Posts: 5,497


    Machines used to kill people are called weapons. Every robot known to humanity exists in order to help its owner or designer. And it wouldn't be morally right to give someone the job of programming a judgment scheme into a robot; that would be too much power for one person's responsibility. How would an American program the "bad person" scheme? Or a Japanese person? Or even a terrorist? Robots with ethical thinking of their own, such as the Terminator, A.I. (the Steven Spielberg movie), and RoboCop, are still utopian today. But I'm worried about the future.

    As for unmanned flying drones, I rather like them, especially because they reduce loss of life. But don't forget that the most perfectly working "machine" is the human, with his instincts, his brain in general, and his strong body. No computer will work as well as a human, not in a thousand years.

  3. #18
    Dro (Senior Member - "As hot sauce on your taco")
    Join Date: Mar 2006
    Location: Your place or mine? *wink*
    Posts: 2,916


    Robots can't be allowed to kill because they can't develop a sense of ethics; not the one imposed by society, but the concept of deciding what is best for yourself in every possible way, which involves the idea of a moral decision. Also, they don't have the sense of compassion that comes from bearing and raising children. Hence the law that a robot can do no harm, for otherwise they have nothing that tells them it's not right, or in which cases it should be allowed.

  4. #19
    Psy (Senior Member)
    Join Date: Mar 2006
    Location: Rhodesia
    Posts: 1,518


    If planetary distance delays the time between a command and the act of a robot killing, then yes, it is justified for it to kill something if that is its purpose. However, robots on Earth should not be allowed to kill anything unless someone is held accountable (no, not a group or entity, but a real-life person who "pulls the trigger" or orders the trigger to be pulled). Robots with AI on Earth should not be allowed to kill unless that is their purpose. Due to my limited knowledge, I can't say for sure whether my opinion will be the same ten years from now.
    Last edited by Psy; 10-24-2009 at 05:28 PM.

