Tuesday, June 27, 2006

Sure, AGI won't want to kill humans

The problem of AGI's future intentions toward humans cannot be easily dismissed with the notion that just because AGI has the capability to kill humans, it would not actually do so, since killing would be against its Robot Laws-style morality.
The suggested analogy is that just because humans can kill other humans, they usually don't, due to morality and/or societal rules. The most obvious fallacy is that while humans may hesitate to kill other humans, they have much less hesitation in extinguishing beings of lesser intelligence, killing any animal when expedient for any variety of reasons (monkey research continues, for example). The correct analogy would be that AGI is to humans as humans are to monkeys, or, as John Smart has suggested, as humans are to plants, so large will be the gulf in intelligence and capability.
Any situation with such low predictability and high risk should be planned for by considering all possible outcomes rather than only a "likely" case. This topic was previously discussed here, namely the two possible future states of the world in which AGI is either "friendly" (i.e., allowing humans to live) or not. In the state of the world where AGI is not friendly, whether indifferent or malicious, it would be important to look at the range of possible outcomes that allow for human survival.
Any future scenario should incorporate more detailed reasoning about the perspective of the AGI itself, particularly:
1) To what degree is emotion necessary for machine intelligence? Probably much less than might be assumed. At some level, AGI will evaluate human life, and all life, on a purely practical basis.
2) To what degree will a stable AGI (e.g., after it has edited out those annoying bits the human designers put in) still have human-based morality? AGI may quickly evolve its own morality, or hierarchical code for decision and action, finding the original inputs irrelevant to its motives and goals.
Posted by LaBlogga at 3:03 PM