Wednesday, May 31, 2006

Non-embodied AGI is preferable

The whole point of AGI is its broad flexibility: it can take many forms, both resembling and greatly extending human-level intelligence.

An unfortunate artifact of the human substrate is that so much processing is devoted to sensory input and output and the other demands of having a body, and so little is left for higher-level intellectual processing and knowledge extension.

The computer substrate is of tremendous benefit to AGI. Not only does the AGI lack the physical, sensory and processing constraints of a body, it also lacks the time-frame constraints of biology; it can evolve at exponential rates that would never be possible in biological substrates. Further, the human sense of identity and consciousness is body-grounded: awareness and control extend only to the boundaries of the human body. In a computer network, the domain and locus of control is potentially as broad as the network itself, and can be fluid, changing and distinct by function. This broad and malleable domain of perception and control may lead to the development of consciousnesses quite different from human consciousness.

Even the term 'embodiment' is anthropomorphic in that it implies a form that can be seen by humans, particularly a human-like form such as a robot; AGIs can in theory take on many embodiments, even several at once, but in their highest form need none.

Some argue that embodiment is necessary to produce and evolve AGIs, and certainly this is one approach; at this early stage of AGI development, all approaches should be tried, including non-embodied models.

2 comments:

Armchair Anarchist said...

So I'm guessing you're in the queue for early uploading too? ;)

I think the issue of embodiment in AI is an interesting one, but it will probably turn out to be the 'nature v. nurture' of its field. Only time will tell.

I seem to remember reading that the thinking behind embodiment as a necessity for AI (which may well have been from Minsky or Moravec, though I could be wrong) centers on the idea that human thought is inherently a product of our environment and our relationship to it through the sensorium, including the simple biological realities of having a 'meat' body, which has a huge (though largely unnoticed) effect on the way we perceive our external reality.

It may well be that once the concept is cracked, embodiment could be done away with, but to really emulate *human* thought? I'm no scientist, but I'm guessing we'll need a substrate at first.

Of course, this all goes out of the window if you want to create non-human intelligence forms. But where would we start to build such a model? Even our computers are built in a way that mimics the models of the human mind (albeit in a very simple way, at present).

But I concur with you totally that we need to be pushing in all directions at first, at least until a pathway becomes clear. I long to see AI in my lifetime (or, more importantly, before human idiocy destroys the race, the planet, or both at once).

Good blog you have here; been reading for a few weeks now, and you're in my blogroll. Pop by and visit sometime; I rarely publish anything as insightful and academic as this post of yours, but I dig up some good links. Plus you like good science fiction, so you may well find some common ground!

VelcroCityTouristBoard

Michael Anissimov said...

I agree with the general idea that non-embodied AGI is preferable... perhaps instead of non-embodied we should say 'virtually-embodied', as any intelligence will have a series of input/output devices which can be characterized as a body. The thing about having such a flexible body/mind is that it makes a nice starting point from which to specialize for a specific physical environment or task.

Of course, being raised in a virtual environment does not automatically imply that an AGI will develop exponentially. (You subtly imply this in your description.) Exponential self-enhancement is likely to occur only after a certain intelligence/action threshold, whose exact value we can't know at the moment. Will the root of an AGI's intelligence mainly derive from direct human programming, or from the accrual of experience in virtual environments? Primarily the former, I think. A virtual agent will be able to learn nothing without the appropriate cognitive complexity to engage in open-ended learning.