Space Cowboy wrote...
I'm cool with these simple rules:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Otherwise, fair game: they should be treated as sentient beings with no limitations other than those stated by the laws. These have probably been brought up before, and as with any system, there are flaws. Those can be worked out as needed down the line.
Edit: I realize these are written specifically for 'robots', but they can be tailored to whatever form AI ends up taking.
Ah! Asimov's laws of robotics.
However, there are two traps in them:
- If the robots/AI are incapable of determining what a human is, they might still hurt us... or they could decide they're humans themselves! Actually, the latter would be the best outcome, because then they'd act like perfect citizens.
- Why would that be best? Because otherwise you'd have a ready and willing slave class at your disposal, happy to serve us in each and every way. Sounds like fun? Only until you realize the robots would actively dissuade humans from taking *any* risk, and that they'd perpetuate in humans the moral and psychological distortion that slavery produces in the slaver: sloth, dogmatism, superiority complexes... there's a reason the Spacers are extinct in the larger Asimoverse.
Read the Caliban trilogy for details: Roger MacBride Allen (writing with Asimov's consent and cooperation while he was alive) did a really good job exploring this problem.
His idea of four-law robots is a good one, since they're built to be companions to humans instead of slaves:
1. No robot may harm a human being.
2. All robots shall cooperate with human beings as long as that doesn't conflict with the New First Law.
3. A robot must protect its own existence as long as that doesn't conflict with the New First Law.
4. A robot may do anything it likes, as long as that doesn't conflict with any of the first three New Laws.
The inaction part was removed, so humans can once again take risks without the robots intervening and pampering them to death in gilded cages of featherweight existence. The robot no longer has to obey the whims of humans, only cooperate with them, so it's no longer a disposable slave. Furthermore, it can't be ordered to destroy itself... though this law may lead to problems in the long run: how would such a robot be capable of self-sacrifice? How could it choose destruction? The last law is there to ensure that robots would evolve.
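To make the structural difference concrete, here's a toy sketch of the two rule sets as priority-ordered filters over candidate actions. Nothing in it comes from Asimov or Allen beyond the law text above; the `Action` fields and both filter functions are invented purely for illustration.

```python
# Toy model: each law becomes a filter applied in priority order to a set of
# candidate actions. The fields and function names are made up for this example.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    injures_human: bool      # doing this hurts a human
    allows_human_harm: bool  # doing this (e.g. standing by) lets harm occur
    obeys_order: bool        # a human asked for this
    destroys_self: bool      # the robot would not survive this

def three_laws_filter(candidates: List[Action]) -> List[Action]:
    # First Law: no injury, and no inaction that allows harm.
    ok = [a for a in candidates if not a.injures_human and not a.allows_human_harm]
    # Second Law: if any permitted action obeys a human order, obedience is compulsory.
    if any(a.obeys_order for a in ok):
        ok = [a for a in ok if a.obeys_order]
    # Third Law: only after that may the robot prefer its own survival.
    if any(not a.destroys_self for a in ok):
        ok = [a for a in ok if not a.destroys_self]
    return ok

def new_laws_filter(candidates: List[Action]) -> List[Action]:
    # New First Law: no injury -- the "through inaction" clause is gone, so
    # letting a human take a risk is no longer filtered out.
    ok = [a for a in candidates if not a.injures_human]
    # New Third Law: self-preservation yields only to the First Law, and since
    # cooperation is not obedience, an order to self-destruct carries no force.
    if any(not a.destroys_self for a in ok):
        ok = [a for a in ok if not a.destroys_self]
    # New Second and Fourth Laws: whatever survives is left to the robot's own
    # choice, with human requests merely one input among others.
    return ok
```

Feed both filters the same pair of candidates - say "let the human go skydiving" (allows_human_harm=True) versus "restrain them" - and the Three Laws version is forced into restraint, while the New Laws version leaves both options open. That's exactly the pampering-versus-companionship difference described above.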
...in the end I believe the laws of robotics are - and should be - stop-gap measures until we have mature AI with moral capabilities on a par with ours.
The perfect AI/robot will behave morally not because some rigid internal programming compels it to, but because it *chooses* to, because it *feels* it's right.