Thursday, December 08, 2005

Expiration of convenient concepts like self and goals

A recurring theme from the DC Future Salons, promulgated by AI expert Ben Goertzel and others, is the possibility that concepts such as self, free will/volition, goals, emotions and religion are merely temporary conveniences for humans (this is already quite clear in the case of religion, as Antonio Damasio and Piero Scaruffi point out). That all of these concepts are temporary conveniences is an interesting, provocative and most likely correct idea.

Though these concepts have solid evolutionary standing as the fit results of natural selection, they may also be anthropomorphic and tightly bound to the human substrate, and so may prove irrelevant for uploaded or extended human intelligence and/or AI.

How could this not be correct? We can only speculate about what the next incarnation or level of intelligence will be like. There are the usual partially helpful metaphors: regarding motility, humans are to plants as AIs are to humans; regarding evolutionary take-off, humans are to apes as AIs are to humans. Given the probable degree of difference between extended/uploaded humans or AIs and current humans, it is quite likely that crutch concepts like self and goals will not matter.

The interesting question is: what will matter? Will intelligence v.2 adopt its own concepts of convenience and metaphors? If not directed by goals, what direction will there be? Will there be direction at all? Humans have already evolved to the point where most behavior is directed not by survival but by happiness and, for some, self-actualization; all of these are still goals. Will the human quest to understand the laws and mysteries of the universe (like dark matter) persist in AI? For an AI with secure energy inputs (e.g., survival and immortality reasonably assured), will there be any drives, direction or objectives?