One dream of the future is to augment the human brain via direct linkage to electronics. Brain-computer interfaces could provide two levels of capability. The first is allowing machines to be controlled directly by the brain; this has already been demonstrated with invasive implants for motor sensing and vision systems and with non-invasive EEG-based helmets for basic game play, but has been elusive in avatar control (the Emotiv Systems helmet is not quite working yet). The second level is augmenting more complex cognitive processes such as learning and memory, which is the goal of the Innerspace Foundation.
On-board processing
The broader objective is bringing information, processing, connectivity and communication on-board [the human]. Some of this is ‘on-board’ right now, in the sense that mobile phones, PDAs, books, notebooks, and other small handheld peripherals are carried by or clipped to people.
Many forms of non-invasive wearable computing could advance. Information recording and retrieval could be improved with better consumer-product lifecamming rigs to capture and archive audio and video life streams. Other applications are underway in smart clothing, WiFi-connected processing-enabled contact lenses, cell phones miniaturized as jewelry (for the communications, GPS and other functions that do not require a display), EEG helmets with greater functionality and an aesthetic redesign from Apple, and hair gel nanobots. A slightly more invasive idea is using the human bacterial biome as an augmentation substrate, and there are a host of still more invasive ideas in body computing, implantable devices, evolved and then reconnected neural cell colonies, and other areas.
Cognition valet
After information recording and retrieval, the next key stage of on-board computing is real-time or faster-than-real-time (FTR) processing, particularly automated processing. Killer apps include facial recognition, perceptual-environment adjustments (e.g., brighter, louder), action simulators, and social cognition recommendations (algorithms making speech and behavior recommendations). Ultimately, a full cognition valet would be available, modeling the reasoning, planning, motivation, introspection and probabilistic behavioral simulation of the self and others.
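As a concrete illustration, here is a minimal sketch of what a cognition valet's programming interface might look like; the class, method and field names are hypothetical, invented for this example rather than drawn from any existing system.

```python
from dataclasses import dataclass

# Hypothetical data structures -- invented for illustration, not an existing API.
@dataclass
class PerceptScene:
    faces: list[str]          # identities recognized in the current field of view
    ambient_light: float      # 0.0 (dark) to 1.0 (bright)
    ambient_noise_db: float   # measured sound level in decibels

@dataclass
class Recommendation:
    kind: str                 # "speech", "behavior", or "perceptual-adjustment"
    detail: str
    confidence: float         # probabilistic confidence in the suggestion

class CognitionValet:
    """Sketch of the automated, faster-than-real-time layer described above."""

    def recognize_faces(self, scene: PerceptScene) -> list[str]:
        # A real system would call a facial-recognition model here.
        return scene.faces

    def adjust_perception(self, scene: PerceptScene) -> list[Recommendation]:
        # Simple threshold rules standing in for perceptual-environment adjustments.
        recs = []
        if scene.ambient_light < 0.3:
            recs.append(Recommendation("perceptual-adjustment", "brighter", 0.9))
        if scene.ambient_noise_db < 40:
            recs.append(Recommendation("perceptual-adjustment", "louder", 0.8))
        return recs

    def recommend_social_response(self, speaker: str, utterance: str) -> Recommendation:
        # Placeholder for modeling the other person's reasoning, motivation and likely behavior.
        return Recommendation(
            "speech", f"Acknowledge {speaker} and ask a follow-up question", 0.6)

valet = CognitionValet()
scene = PerceptScene(faces=["colleague"], ambient_light=0.2, ambient_noise_db=35.0)
print(valet.adjust_perception(scene))  # suggests "brighter" and "louder"
```

The division of labor is the point: the perceptual adjustments are cheap rules that could run faster than real time, while the social-recommendation layer stands in for the much harder probabilistic modeling of the self and others.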
Protocols and Etiquette of the future: “my people talk to your people” becomes “my cognition valet interface messages or tweets with your cognition valet interface.”
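A hedged sketch of what that valet-to-valet traffic might look like as a message exchange; the schema and field names below are invented for illustration, not a proposed standard.

```python
import json
from datetime import datetime, timezone

# Hypothetical valet-to-valet message; every field name here is an assumption.
def compose_valet_message(sender: str, recipient: str, intent: str, proposal: str) -> str:
    return json.dumps({
        "from_valet": sender,
        "to_valet": recipient,
        "intent": intent,          # e.g. "schedule-meeting" or "share-context"
        "proposal": proposal,
        "sent_at": datetime.now(timezone.utc).isoformat(),
    })

# One valet proposes a meeting to another without either human paying attention.
msg = compose_valet_message("alice.valet", "bob.valet",
                            "schedule-meeting", "Tuesday 15:00, 30 minutes")
```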
Distributed human processing
Augmenting the brain could eventually lead to distributed personal intelligence. In a scenario reminiscent of David Brin’s “Kiln People,” a copy of my digital mindfile backup runs Internet searches and works on a research project while my attention is not focused on online activities; simultaneously, a neural cell culture derived from my physical brain focuses on a specific task, and the original me is off pursuing its usual goals.
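Stripped of the biology, the computational pattern in this scenario is a fan-out of tasks to independent instances of the self and a later fan-in of their results. The sketch below shows only that pattern; the instance names come from the scenario above and everything else is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def run_instance(instance: str, task: str) -> str:
    # Placeholder for whatever computation or activity the instance performs.
    return f"{instance} finished: {task}"

# Tasks assigned to the three instances described in the scenario above.
assignments = {
    "digital mindfile copy": "run Internet searches and work on the research project",
    "neural cell culture": "focus on one specific task",
    "original self": "pursue its usual goals",
}

# Fan out the work, then gather the results once each instance reports back.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_instance, inst, task) for inst, task in assignments.items()]
    for future in futures:
        print(future.result())
```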