Sunday, April 16, 2006

Machine intelligence to evolve non-human senses?

Jeff Hawkins has a recipe for building intelligent machines in his 2005 book, "On Intelligence." Step one is creating a set of senses that conduct pattern recognition over their environment. Hawkins notes that machines could have new senses, senses that do not just correspond to the five human senses.

Certainly machines can be better at receiving sight and sound input (electromagnetic radiation and sound waves), discerning a wider range than humans have evolved to perceive unaided (e.g., radio waves, X-rays, infrared). Taste is not relevant. Smell is relevant, notably in sensor applications for detecting air composition: smoke, chemicals, etc. Initially, intelligent machines are most likely to be single-purpose and stationary (i.e., not robots), so human sensory input related to mobility would be unnecessary. Machines are already responsive to "touch," though mouse, keyboard and other input means are a different experience than the human nervous system's version of touch.

What non-human senses would be useful and natural to a machine? Which patterns in what environments will machines be processing that would warrant additional senses? A machine sense, if parallel to a human sense, would consist of a specialized sensor bank for receiving a particular set of external patterns or type of information.

From the machine's-eye view, the distinction between using a tool (e.g., an electron microscope over a network), having a perceptive capability and having a full-blown sense seems somewhat artificial (no pun intended). It is easier to identify some useful areas of perceptive capability for machines, but for now (the future could be different) these seem more like functionality extensions than full-blown new senses.

Useful perceptive capabilities for machines are of two sorts. First, regarding the digital information world: for example, measuring information density, information usefulness/relevancy (as Google purports), signal-to-noise ratio, time and entropy waves, etc. Second, regarding the physical world: for example, measuring the precursors of earthquakes and hurricanes, gravity, particle behavior, electromagnetism and the strong and weak nuclear forces, etc.
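Of the digital-world measurements above, signal-to-noise ratio is the most concrete. As a minimal sketch (the post names no specific method; this simply illustrates the conventional power-ratio definition in decibels), a machine "sense" for information quality could estimate SNR by comparing a clean reference signal to its noisy observation:

```python
import math

def snr_db(signal, noisy):
    """Estimate signal-to-noise ratio in decibels, given a clean
    reference signal and a noisy observation of it."""
    noise = [n - s for s, n in zip(signal, noisy)]
    p_signal = sum(s * s for s in signal) / len(signal)  # mean signal power
    p_noise = sum(n * n for n in noise) / len(noise)     # mean noise power
    return 10 * math.log10(p_signal / p_noise)

# Example: a sine wave corrupted by a small constant offset as "noise"
clean = [math.sin(2 * math.pi * k / 100) for k in range(1000)]
noisy = [s + 0.05 for s in clean]
print(round(snr_db(clean, noisy), 1))  # → 23.0
```

In practice a sensor would rarely have the clean signal available and would estimate noise power statistically instead, but the power-ratio definition is the same.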

Some of these additional perceptual capabilities for machines can be created with software alone, and the ones that require specialized sets of sensors for information input are still more like tools than senses in the human connotation of the word. Especially given the distributed nature of digital information networks, a specialized sensor input seems like a peripheral, not an integral part of machine intelligence. However, it is still early in the game, and Jeff Hawkins may have something specific in mind when he refers to machines that could have new non-human senses.


Michael Anissimov said...

The most useful sense that comes to mind for an AI is of course a codic modality.

I have thought that an AI might develop towards a unified sense "organ" and make distinctions only further down the line, after some cognitive processing has already been performed. For example, a finely tuned vision sense would also allow an AI to perform spectroscopy measurements and determine the chemical makeup of a target merely by sight. A superpowered vision sense would arguably be capable of replacing all other senses with a new monolithic apparatus.

Considering that an AI might be introduced to a virtual world operating millions of times more rapidly than the real world, it might end up making sense inferences about the real world as a mere afterthought, with the virtual world as a primary proxy! For example, an infant AI might visualize an earthquake as "something that causes a server to go down" rather than a primary sort of disaster. "Primary disaster" as a cognitive category might be reserved for events taking place within the virtual world.

Sensory equipment might be one of those things that basically gets "solved" by life/intelligence and only incremental upgrades are forthcoming. The real complexity lies in the combinatorial explosion of possible interpretations for sense data.

LaBlogga said...

Thank you for the comments. Yes, a perceptual mode for code is probably the most obvious non-human sense for an AI. Some humans could be said to already possess this perceptual capability.

I agree, "supersenses" are appropriate in the AI context: an electromagnetic-perceiving sense, an air-wave-perturbation-perceiving sense (e.g., sound waves), a gravity-perceiving sense, an n-dimensional object-modeling sense, etc. would be more relevant to a more meta-capable entity.

The differences between the digital world(s) and the physical world from the AI's viewpoint are quite interesting. You're right, what we call the "physical" world would be a remote, hopelessly slow-moving and mainly less important connection to the AI. Of course, that is part of the whole point: with its more rapid timescale and enhanced modeling capabilities, it could predict earthquakes and allow weather management and other Kardashev Type I activities.

I agree that the combinatorial explosion is the key point: the power of the AI comes from the meta-perspective, the greater processing capacity, the million-times-quicker speed, and the nexus and n-dimensional explosion of physical-world and digital-world perceptual senses and meta-senses.