Laws of robotics

Isaac Asimov proposed these laws to govern the behavior of artificially intelligent beings. The laws are theoretical in nature, and many AI experts feel they would be nearly impossible to implement.

  • 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • 2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • 3. A robot must protect its own existence, except where such protection would conflict with the First or Second Law.

These laws were later modified to incorporate the Zeroth Law (see the sketch after the list):

  • 0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
  • 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm, except where this would conflict with the Zeroth Law.
  • 2. A robot must obey the orders given it by human beings, except where such orders would conflict with the Zeroth or First Laws.
  • 3. A robot must protect its own existence, except where such protection would conflict with the Zeroth, First or Second Laws.
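
The four laws form a strict precedence hierarchy: the Zeroth Law outranks the First, which outranks the Second, which outranks the Third. The following is a minimal, purely illustrative sketch in Python of that precedence ordering only, not an implementation of the laws themselves (which, as noted above, experts consider nearly impossible to implement); the permitted function and the toy predicates are hypothetical placeholders.

    # Illustrative sketch of the precedence ordering; all names are hypothetical.
    from typing import Callable, Dict, List, Tuple

    # Each "law" is modeled as a predicate returning True if a proposed action
    # would violate it. Predicting real-world harm is, of course, the hard part.
    Law = Tuple[str, Callable[[Dict[str, bool]], bool]]

    def permitted(action: Dict[str, bool], laws: List[Law]) -> Tuple[bool, str]:
        """Check laws in priority order (Zeroth first); the first violation wins."""
        for name, violates in laws:
            if violates(action):
                return False, f"rejected by the {name} Law"
        return True, "permitted"

    # Toy predicates over an 'action' represented as a dict of flags.
    laws: List[Law] = [
        ("Zeroth", lambda a: a.get("harms_humanity", False)),
        ("First",  lambda a: a.get("harms_human", False)),
        ("Second", lambda a: a.get("disobeys_order", False)),
        ("Third",  lambda a: a.get("endangers_self", False)),
    ]

    print(permitted({"disobeys_order": True, "harms_human": True}, laws))
    # -> (False, 'rejected by the First Law'): the First Law outranks the Second.

The point of the sketch is only the ordering: because the laws are checked from the Zeroth down, a lower-priority law never overrides a higher-priority one.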

See also: Asimov.
