Tuesday, July 31, 2007

Alt approaches to AGI

More than fifty years of attempts at creating AGI have not succeeded. It is possible that AGI cannot be produced with current methods and technologies; the wrong tool may be in use, somewhat like trying to build a 747 with a toothbrush. Electromagnetism, silicon and Von Neumann architectures may never have the capacity to achieve AGI, even allowing for continued increases in processing, storage and memory, and for architectural shifts such as parallelism.

Other substrates might work
To get around the rigidity of Von Neumann machines and of mathematical, logic-based, symbolic, computational approaches, other computational substrates might work: quantum computing, DNA computing, and others that humans have not yet invented, discovered or exploited for this purpose, such as light, air, memes and information. There must be other substrates, and other viable approaches, that are not constrained by mathematics and logic.

Information as a substrate
Narrowly, the only existing example of general intelligence is the human brain, and two basic requirements of AGI are self-replication and self-improvement. Considering self-replication, there are many examples of more effective self-replicators than humans: memes, diseases and microbes, for example. Considering self-improvement, memes also self-improve more effectively than humans, as they are refined through repetition, and, like a true AGI, they have no obvious ceiling on improvement.

Taking advantage of these self-replicating and self-improving properties by using memes and information as a novel computing substrate might be one way to extend AGI progress.

Information as a substrate could be developed symbiotically with a very broadly applicable new understanding of the laws of physics based on information and entropy as opposed to mass and energy.
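The meme argument above rests on replication with variation plus selective retention. As a toy sketch only (nothing here is proposed in the post itself; the bit-string "memes" and the fitness measure are illustrative assumptions), the standard evolutionary loop shows how replicators "refined through repetition" can improve without an explicit designer:

```python
import random

random.seed(0)

MEME_LEN = 20  # each "meme" is a string of 20 bits (an arbitrary stand-in)

def fitness(meme):
    # crude stand-in for meme "quality": the count of 1-bits
    return sum(meme)

def replicate(meme, mutation_rate=0.05):
    # copy with occasional variation, like a meme retold imperfectly
    return [bit ^ (random.random() < mutation_rate) for bit in meme]

def evolve(generations=50, pop_size=30):
    # start from a random population of memes
    population = [[random.randint(0, 1) for _ in range(MEME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # each meme replicates twice; only the fitter copies are retained
        offspring = [replicate(m) for m in population for _ in range(2)]
        population = sorted(offspring, key=fitness, reverse=True)[:pop_size]
    return max(fitness(m) for m in population)
```

Running `evolve()` drives the best fitness from roughly half the maximum toward the maximum; the point is only that replication, variation and selection suffice for open-ended refinement, which is the property the post attributes to memes.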

7 comments:

Michael Anissimov said...

Couldn't it just be because our knowledge of statistical inference is only getting where it needs to be today, or because computing power is just now approaching the human level, or because we didn't know enough about the brain before, or that AI is a discipline that inherently requires 50 years to make progress on? The past successes of AI suggest that the von Neumann architecture is quite sufficient.

LaBlogga said...

Hi Michael,

Thank you for the comment. There can certainly be many reasons why we have not yet created AGI, and we may be getting closer, or at least think we are. However, the traditional Von Neumann architecture has not yet delivered AGI, though, as you point out, it has produced numerous successful narrow AI results.

I am suggesting that novel approaches and substrates should be considered and might offer better solutions for creating AGI.

mark seery said...

From my perspective there are at least three fundamental issues:

1. Single components don't have enough processing capacity (the processing within the ear alone is equivalent to one billion instructions per second).

2. No computer ever built has even close to the amount of parallel processing as the brain, and I suspect not even close to the amount of memory either.

3. Humans are quite unique, compared to computers, in their ability to learn in a hierarchical and abstract fashion (some computer scientists think they have an answer for this one).

If a different substrate can solve the first two problems then great, bring it on.......

As to whether Von Neumann is a dead end, that point of view probably requires some explanation. I suspect that some people believe a biological substance is more likely to express emergent qualities than the current approach, but I suspect that is not necessarily true (current computer simulations express emergent behavior, for example).

LaBlogga said...

Hi Mark,

Thanks for the comment. I agree with your points about the challenges of replicating the complexity of human-level functions in a computing environment.

re: learning plasticity, yes, this is an important feature of humans that we have not been able to generate in machines yet.

A key limitation of Von Neumann architectures may be their linear one:one relationships, as compared with the many:many relationships found in biology (cells, the brain, etc.). Emergent behavior arises from many phenomena, but it does not necessarily evolve into intelligence.

At some point there may well be a different substrate that ushers in a new era of computing from which AGI could more obviously emerge.

mark seery said...

"A key limitation of Von Neumann architectures may be the linear one:one relationships, as compared with the many:many relationships existing in biology (cells, the brain, etc.)."

I personally don't see that as much of a challenge as emulating chemical reactions. Molecular (and perhaps to some extent bioelectrical) interfaces have a range of expression as well as reactivity that bits do not. So what we really need is the ability to build a very dense array of many-to-many, multilayer (in the case of neocortical emulation) packet processors (not bit gates - too granular). Single monolithic CPU subsystems like we build today are not the way to go, but from my understanding there is some debate about whether going away from that fundamentally breaks the principles of the Von Neumann architecture.

Bottom line though, I do agree that the current material science is limiting us. Not only from a processing perspective, but also from a power per instruction perspective (consider the milliwatts required to operate 1 billion instructions per second in the ear for example).

LaBlogga said...

Thanks, Mark, great points, I agree

Anonymous said...

You are right in saying that AI needs new approaches. In fact, AI needs a completely new paradigm, with a different theory of information, mathematics, action and time. This new paradigm is so unusual that it is almost hard to believe. To understand it, one would require several levels of knowledge-concept bootstrapping in many scientific domains. If somebody told you the solution, you would not believe or accept it unless it were served as the "cooked pudding".

BTW, nice blog; keep up the good work and you may have a chance to stumble upon something interesting.