Showing posts with label brain-computer interface. Show all posts

Sunday, February 07, 2010

Integrating life and technology with body-area networks

Long before brain-computer interfaces (BCIs) and brain co-processors are available, acceptable, and appropriate for general enhancement use, body-area networks (BANs) could be a key means of integrating life and technology. Processing and communications could be brought on-board the person for medical, consumer electronics, entertainment, and other applications. At present, BANs consist of one or a few wearable or implanted biosensors gathering basic biological data and transmitting it wirelessly to a computer. The relevant IEEE communications standard for BANs is 802.15.6.
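To make the hub-side processing concrete, here is a minimal sketch of what a BAN aggregation point might do with incoming readings: collect values from wearable sensor nodes and flag anything outside a normal range. The sensor names and ranges are invented for illustration and are not taken from any of the products below.

```python
from dataclasses import dataclass

# Hypothetical normal ranges for the vital signs a medical BAN might report.
# These figures are illustrative only, not clinical reference values.
NORMAL_RANGES = {
    "blood_ph": (7.35, 7.45),
    "glucose_mg_dl": (70.0, 140.0),
    "spo2_pct": (95.0, 100.0),
    "temp_c": (36.1, 37.8),
}

@dataclass
class Reading:
    sensor: str    # which vital sign this wearable node reports
    value: float

def triage(readings):
    """Return the sensors whose latest reading falls outside its
    normal range -- the kind of lightweight processing a BAN hub
    could run before relaying data onward."""
    alerts = []
    for r in readings:
        lo, hi = NORMAL_RANGES[r.sensor]
        if not (lo <= r.value <= hi):
            alerts.append(r.sensor)
    return alerts
```

In a real BAN the hub would also handle radio framing, retransmission, and encryption per the 802.15.6 MAC/PHY; this sketch covers only the application layer.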

Medical BANs

  • Toumaz: wireless digital plaster; externally-worn disposable medical BANs for measuring blood pH, glucose, oxygen levels, and temperature
  • CardioMEMs: implantable wireless sensing devices less than one tenth the size of a dime for monitoring heart failure, aneurysms, and hypertension

Consumer BANs

A consumer application of BANs is health activity monitors such as the Fitbit, DirectLife, and WIN Human Recorder, and to some extent smartphones. All contain accelerometers that can measure movement and activity.
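How do these devices turn raw accelerometer data into activity counts? A common simple approach (a sketch, not the proprietary algorithm of any product named above) is to count upward crossings of an acceleration-magnitude threshold:

```python
import math

def count_steps(samples, threshold=11.0):
    """Count steps as upward crossings of an acceleration-magnitude
    threshold. Gravity contributes ~9.8 m/s^2 at rest, so a stride
    peak pushes the magnitude above that baseline; 11.0 m/s^2 is an
    illustrative cutoff, not a calibrated value.

    `samples` is a list of (ax, ay, az) tuples in m/s^2.
    """
    steps = 0
    above = False  # debounce: one step per excursion above threshold
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1
            above = True
        elif mag <= threshold:
            above = False
    return steps
```

Real pedometers add filtering, adaptive thresholds, and cadence checks, but the magnitude-threshold idea is the core of it.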

The next phases of BANs could be enabled by continued electronics miniaturization and next-generation communications networks (WiMAX, 4G, and beyond). In the farther future, BANs could include larger, more complex networks of intercommunicating sensors and eventually autonomous sensors with two-way broadcast.

Biocompatibility and bandwidth are important concerns for human-machine integration interfaces, particularly implanted interfaces. However, the biggest challenge is energy: providing adequate ongoing power to devices. Several interesting methods of power generation are being investigated, including thermal, vibrational, radio frequency (RF), photovoltaic (PV), and biochemical energy. ATP could possibly power implanted devices, for example by using DNA nanotechnology to synthesize ATP with nanoscale rotary motors, or nanodevices to produce ATP from naturally circulating glucose.
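The energy constraint can be made concrete with a back-of-the-envelope duty-cycle calculation: if harvested power is small, the device must sleep most of the time. The formula below is standard power budgeting; the microwatt figures in the test are illustrative assumptions, not measurements of any real harvester or sensor.

```python
def max_duty_cycle(harvested_uW, active_uW, sleep_uW):
    """Largest fraction of time d a sensor node can spend active if
    its average draw must not exceed harvested power:

        d * active + (1 - d) * sleep <= harvested
        =>  d <= (harvested - sleep) / (active - sleep)

    All powers in microwatts; result clamped to [0, 1].
    """
    d = (harvested_uW - sleep_uW) / (active_uW - sleep_uW)
    return max(0.0, min(1.0, d))
```

For example, a node harvesting ~100 uW against a 10 mW active draw can be awake well under 1% of the time, which is why burst-transmit-then-sleep designs dominate BAN hardware.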

Sunday, December 07, 2008

Brain-computer interfacing and the cognition valet

One dream of the future is to augment the human brain via direct linkage to electronics. Brain-computer interfaces could provide two levels of capability: first, allowing machines to be controlled directly by the brain. This has already been demonstrated in invasive implants for motor sensing and vision systems and in non-invasive EEG-based helmets for basic game play, but has been elusive in avatar control (the Emotiv Systems helmet is not quite working yet). The second level is augmenting more complex cognitive processes such as learning and memory, as is the goal of the Innerspace Foundation.
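The first level of capability, EEG-based control, typically works by mapping power in a frequency band (e.g., 8-12 Hz alpha, which rises when the user relaxes or closes their eyes) to a discrete command. A toy sketch, assuming a brute-force DFT rather than the Welch-style spectral estimation and per-user calibration a real BCI would use:

```python
import math

def band_power(signal, fs, lo, hi):
    """Crude DFT-based power in the band [lo, hi] Hz.
    `signal` is a list of samples, `fs` the sampling rate in Hz."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def command(signal, fs, threshold):
    """Map strong alpha-band power to a binary 'select' command."""
    return "select" if band_power(signal, fs, 8.0, 12.0) > threshold else "idle"
```

The hard problems in practice are artifact rejection (eye blinks, muscle noise) and the low bit rate of such a channel, which is why helmet-based game control remains basic.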

On-board processing
The broader objective is bringing information, processing, connectivity and communication on-board [the human]. Some of this is ‘on-board’ right now, in the sense that mobile phones, PDAs, books, notebooks, and other small handheld peripherals are carried with or clipped to people.

There are many forms of non-invasive wearable computing that could advance. Information recording and retrieval could be improved with better consumer lifecamming rigs to capture and archive audio and video life streams. Other applications are underway in smart clothing, Wi-Fi-connected processing-enabled contact lenses, cell phones miniaturized as jewelry (the communications, GPS, and other functions not requiring a display), EEG helmets with greater functionality and an aesthetic redesign from Apple, and hair-gel nanobots. A slightly more invasive idea is using the human bacterial biome as an augmentation substrate, and there is a host of more invasive ideas in body computing, implantable devices, evolved and then reconnected neural cell colonies, and other areas.

Cognition Valet
After information recording and retrieval, the next key stage of on-board computing is real-time or faster-than-real-time (FTR) processing, particularly automated processing. Killer apps include facial recognition, perceptual-environment adjustments (e.g., brighter, louder), action simulators, and social cognition recommendations (algorithms making speech and behavior recommendations). Ultimately, a full cognition valet would be available, modeling the reasoning, planning, motivation, introspection, and probabilistic behavioral simulation of the self and others.

Protocols and Etiquette of the future: “my people talk to your people” becomes “my cognition valet interface messages or tweets with your cognition valet interface.”

Distributed human processing
Augmenting the brain could eventually lead to distributed personal intelligence. In a scenario reminiscent of David Brin’s “Kiln People,” I use a copy of my digital mindfile backup to run Internet searches and work on a research project while my attention is focused elsewhere; simultaneously, a neural cell culture from my physical brain focuses on a specific task, and the original me is off pursuing its usual goals.

Sunday, November 23, 2008

Advanced technology and social divisiveness

What would the world look like with even more dramatic technological change? What if accelerating change in technology not only continues but also heightens in depth and magnitude? One dramatic change, for example, would be a 100x or 1,000x improvement in human capability (thought, memory, learning, lifespan, healthspan, etc.). The definition of what it is to be human may evolve, as the transhuman and posthuman concepts explore. There have not yet been “different kinds of humans” or “different kinds of intelligences” co-existing in civilization.

These dramatic changes are distinct from the more general quality-of-life and more modest capacity improvements delivered by technology so far (the Internet, cell phone, medical transplant technologies, electricity, steam engine, immunization, etc.).

One possible future could be the organization of society into voluntary social groupings based on outlook and adoption or non-adoption of technology; some obvious dividing line technologies could be human genetic engineering and brain-computer interfaces.

A simple societal lens that can be applied at present is technology adopters and non-adopters.

Luddites are different from Those Who Don’t Use Cell Phones
Some non-adopters abstain deliberately and on principle: Luddites, the Amish and other religious groups, etc. Others have simply not had the access (practical, technical, financial, or otherwise), willingness, or perception of value (e.g., a killer app) required to adopt. So far in democracies, both types of non-adopters have been accommodated into society and are generally able to continue their practices, for example the complete medical non-intervention practiced by some religions.

Peaceful coexistence of adopters and non-adopters
Participatory political regimes will tend to avoid paternalism in technology adoption, while economic and social incentives and universal access will tend to trigger adoption (example: the cell phone). Simultaneously, mature societies tend to accept and accommodate non-adopters. Two main dynamics could challenge this peaceful coexistence: first, the perceived threat of new technology, particularly to those who can control its adoption; and second, times of economic scarcity and pronounced competition for resources.