Three laws of AI
In his Edge article "Turing's Cathedral" (November 24, 2005), George Dyson quotes a Google employee as saying of the Google Book Search (formerly Google Print) project:
"We are not scanning all those books to be read by people... We are scanning them to be read by an AI."
I have to admit, this gave me the willies. Should it? Is there an AI equivalent to Asimov's Three Laws of Robotics?
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I'd sleep better knowing that someone a whole lot smarter than I am is thinking this through. Will AIs be the benign helpmates envisioned by Ray Kurzweil in The Age of Spiritual Machines, or the nemesis of humanity described in Dan Simmons's Hyperion novels and Clarke's 2001?
Is my friendly little PowerBook about to say to me: "I'm sorry, Doug. I'm afraid I can't do that."?