A robot should never be given this responsibility. It violates the idea of "consciousness."
Personal ethics in life is a matter of one's own judgment, based on one's integrity, morality, discipline, and spiritual, religious, or mental ideals.
When one cannot function within a group because of conflicts among these larger ideals, one finds oneself at odds with the greater number, and thus laws are put in place. Laws are usually unnecessary at the highest levels of education and morality (utopic, but not realized). When laws are broken, morality is enforced by use of justice; this is done by others because one was unable to restrain oneself, due to a lack of morality, judgment, intelligence, or sanity, or some combination of all such things (in general).
Both of these aspects have limitless variation and therefore require conscious thinking. Life is not a pattern solely of 0's and 1's, which is the computation of a computer or robot.
Unlimited examples can be given, from something as simple as a child's petty theft to the accidental, unintentional killing of another. If all killing were punishable by death, or all theft required capital punishment, then you see where the robot would have to have a conscience.
Another issue, if one were to try to create an "AI," would be what to base it on. Because we are talking about life and death (or crime and punishment), you are also speaking in terms of government. In life there are classes; this is true of all societies, albeit some are obviously fairer than others. In a militant dictatorial society (such as China or Cuba), the government itself is super wealthy. This allows the government to spend as it wishes without public involvement. In the USA we have what is supposed to be a "Republic," with the idea that we will elect the most intelligent or experienced to represent us. We hope that these elected officials will represent our wishes and look out for our best interests (which in general they do not, but that's another topic). Although all people should have rights, a person with an IQ of 40 and one with an IQ of 160 are not going to operate in life the same way, nor will they have the same values (in general).
So there is another question: who will decide which standard this AI gets its level of judgment from? Would it be the ideals of everyone in general, or the view of the most intelligent? Again, one would have to have the conscience, experience, and education to make a decision of this nature, something an AI, I do not believe, can ever possess without the life factor...