Tuesday, December 13, 2005
Durability of the convenient construct of self
Self may be a convenient construct for intelligence in human packages... how could this change in the future?
In the case of uploading, where a human is copied to a digital substrate (via digital capture of personality, value system, experience history, memory, thoughts, etc. and/or a full copy and simulation of all neurons and other brain functions), would the concept of self be different?
Self has at least two dimensions: corporeal and mental. First is the distinction between my body and other objects. With my mind, I can direct the movement of my body. Without employing my body (or more advanced metaphysics), I cannot direct the movement of other objects. This physical agent is "my body" or "my self."
Second is the distinction between my mind and the minds of others. The tableau of my thinking is not overtly visible or perceivable to others. From communication with others, I have the perception that they have thoughts as well, and that their thoughts are not directly visible to me.
In short, a working definition of self is the perception and control (agency) of the entity that I can direct, as distinct from my surroundings.
In the case of an uploaded human copied into any other physical body with the capability for action, self-direction and movement, it would still be useful to have the concept of the physical agent self and the mental self, although for a highly functional partial self, a partial set of mental capabilities may be appropriate (an interesting ethical question). If the uploaded human resides exclusively in digital storage files and does not have physical agency, the concept of the physical agent self would not be useful, but the concept of the mental self may still be. The mental self concept may evolve in many directions, including the idea of different permissioned access tiers to the mind/processing core and other areas.
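As a minimal illustration of what such access tiers might mean in practice, here is a hypothetical sketch; the tier names, regions and ordering are invented for illustration, not drawn from any real design:

```python
from enum import Enum, auto

class AccessTier(Enum):
    """Hypothetical tiers of access to an uploaded mind's regions."""
    PUBLIC = auto()     # e.g. memories the mind has explicitly shared
    TRUSTED = auto()    # e.g. close contacts or collaborators
    SELF_ONLY = auto()  # the processing core and private thought

class MindRegion:
    def __init__(self, name, required_tier):
        self.name = name
        self.required_tier = required_tier

def may_access(requester_tier, region):
    """Grant access only if the requester's tier is at least as privileged."""
    return requester_tier.value >= region.required_tier.value

core = MindRegion("processing core", AccessTier.SELF_ONLY)
print(may_access(AccessTier.TRUSTED, core))  # False: only the self reaches the core
```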
What would it be like to be an identity in digital storage without physical agency, particularly knowing that previously "you" had had physical agency? Many aspects of physical being can presumably be simulated: the usual pain, pleasure and other sensory experiences we are familiar with, and probably some new ones. The more important aspect of physical agency relates to survival and the ability to act on the surrounding environment in order to survive. The digital copy would need assurance and control over the elements of its survival. This could be a complicated process, but on the surface it seems to mainly entail access to a variety of power supplies, computing resources and backups.
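At that surface level, the requirement could be pictured as a simple redundancy checklist. The sketch below is purely hypothetical; the resource names and the minimum-redundancy threshold are invented assumptions, not a specification:

```python
# Hypothetical survival checklist for a digital copy; resource names and
# the redundancy threshold are illustrative assumptions only.
SURVIVAL_RESOURCES = {
    "power_supplies": ["grid", "battery", "solar"],
    "compute_hosts": ["primary_cluster", "failover_cluster"],
    "backups": ["onsite_snapshot", "offsite_snapshot", "archival_copy"],
}

MIN_REDUNDANCY = 2  # assume at least two independent options per category

def survival_assured(resources):
    """True only if every survival category has redundant options."""
    return all(len(options) >= MIN_REDUNDANCY for options in resources.values())

print(survival_assured(SURVIVAL_RESOURCES))  # True for this illustrative setup
```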
Is a saved digital copy of a human alive? Probably not, unless it's being run. When a digital copy of a human is being "run," how is it known/shown that it is "alive"? As with any new area, definitions will be important. What is alive or not for a running simulation of a human is an interesting topic to be covered later. Here, it is assumed that there are some cases where a simulation is deemed to be alive and others where it is not.
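To make the stored-versus-running distinction concrete, here is a toy sketch, under the (enormous) assumption that a mind could be reduced to serializable state; the field names and the stepping loop are invented for illustration:

```python
import json
import time

# A saved copy is inert state on disk; nothing is experienced or computed.
def save_copy(state, path):
    with open(path, "w") as f:
        json.dump(state, f)

# "Running" the copy means the state is actively being stepped forward in time.
def run_copy(state, steps=3):
    for _ in range(steps):
        state["subjective_time"] = state.get("subjective_time", 0) + 1
        time.sleep(0.01)  # stand-in for actually simulating a mind
    return state

snapshot = {"name": "uploaded_human", "subjective_time": 0}
save_copy(snapshot, "snapshot.json")   # stored: a static file, arguably not alive
running = run_copy(dict(snapshot))     # running: state evolves from step to step
print(running["subjective_time"])      # 3
```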
There do not appear to be any ways in which a human simulation could be alive without having self-awareness in the current definition: being able to distinguish between itself and its environment, at least mentally and in some cases physically. This analysis does point up the possibility of adapted or more rigorous definitions of being 'alive.'
So the concept of the corporeal and mental self may still be useful in the future, but it will likely be expanded upon and may exist as one of multiple metaphors for describing the agency of thought and action.
Posted by LaBlogga at 2:35 PM 0 comments
Monday, December 12, 2005
Entities becoming conscious?
Is it accurate or anthropomorphic to attribute the possibility of eventual self-awareness and consciousness to multi-agent entities such as corporations, governments and mountain ranges? Anthills and beehives exhibit collective behaviors that are not fully explained, but after years of evolution, the only place they become conscious is in sci-fi.
Conscious entities are one flavor of FutureFear: not only the possibility that computers (AIs) become conscious and potentially take over the world, but also that large, currently (as far as we know) non-self-aware entities become self-aware and conscious and potentially take over the world. There is the further complication that humans may not be able to perceive other, higher forms of consciousness, and that this possibly goes unnoticed for some time, until the physical structure of the visible world changes and is controlled by something else, analogous in a basic way to the ape's-eye view of human evolution.
The logic is that the evolutionary lens that produced axons and synapses, which alone are not self-aware but together are, will continue expanding and evolving consciousness, possibly jumping to occupy other substrates. Culture is a key evolutionary differential component initially developed by humans. Culture is continually evolving. If memes, not genes, are the basis for competitive transmission and survival, then the speed of cultural evolution is superseding that of human biological evolution, and more complex collective organizations of groups of people and potentially other entities may indeed "wake up" and become self-aware and conscious. As life is an emergent property of chemicals, and consciousness is an emergent property of life, there may be no end to the meta levels up; it is hard to argue to the contrary.
If this were to be the case for corporations and governments, for example, then initially (as we see now for meme transmission) there must be a symbiosis, or really a dependence, of the meme, government or corporation on the human substrate. Memes can evolve toward a machine-based substrate as AIs emerge (with conscious entities, of course, as another substrate); corporations can evolve toward electronic markets and true drone robots as their primary substrate; governments are an interesting case, but the substrate could be machine/virtually based with people still as the governed.
If they wanted to continue to participate in a society of human individuals, as would be wise initially, these entities, like AIs and the genetically enhanced apes of some sci-fi stories, would start arguing that they are self-aware, self-organizing, self-replicating, evolving, living, conscious entities and demand the corresponding legal rights.
Posted by LaBlogga at 1:38 PM 0 comments
Saturday, December 10, 2005
More than emergence needed for physics breakthrough
Basic science, especially physics, seems stalled in a variety of ways. String Theory is likely not the answer. Particle accelerators are too big, too expensive ($5B) and take too long to build and use; they are a clumsy approach, just the only current approach. With computing improvements, hopefully physics can make the jump to informatics and experience a Phase Change as discussed by Douglas Robertson.
Emergence has been touted as a panacea concept for the future of science for the last several years. Finally, some scientists are starting to explain with greater depth what emergence is and can provide to our study of science.
Robert Laughlin, in his March 2005 book A Different Universe: Reinventing Physics from the Bottom Down, notes the existence and necessity of the shift in scientific mindset and approach from reductionism to emergence. The shift has occurred partly due to the full exploration, and the ineffectiveness, of reductionism. Focusing more on the abstract rather than the concrete is an important step, since the next ideas are most likely significantly different paradigms from the current status quo. Assumes broad and innovative thinking.
Santa Fe Institute external faculty member, synthetic biology startup leader and 2005 Pop!Tech speaker Norman Packard points out that emergence is the name given to critical properties or phenomena that are not derivable from the original (Newtonian, etc.) laws: phenomena such as chaos, fluid dynamics, life and consciousness (itself an emergent property of life). The idea is to think about new properties in ways unto themselves, not as derivations of the initially presented or base evidence. Assumes new laws and unrelatedness.
Robert Hazen, an astrobiologist at Washington DC's Carnegie Institution, in his September 2005 book Gen-e-sis: The Scientific Quest for Life's Origins, thinks that macro-level entropy is symbiotic with micro-level organization. Micro-level organization (of ants into collective behavior; of axons and synapses into consciousness) is emergence. An interesting idea. Unclear if correct, but a nice example of larger-scale systemic thinking and the examination of potential interrelations between different levels and tiers of a (previously assumed to be unrelated) system. Assumes relatedness of seemingly unrelated aspects.
The point is to applaud the increasingly meaty application of emergence as an example of the short-list of new tools and thought paradigms required to make the next leaps in understanding physics and basic science.
Posted by LaBlogga at 8:22 AM 0 comments
Thursday, December 08, 2005
Expiration of convenient concepts like self and goals
A recurring theme from the DC Future Salons, promulgated by AI expert Ben Goertzel and others, is the possibility that concepts such as self, free will/volition, goals, emotions and religion are merely temporary conveniences to humans (this is already quite clear in the case of religion, as Antonio Damasio and Piero Scaruffi point out). That all of these concepts are temporary conveniences is an interesting, provocative and most likely correct idea.
Though the concepts have solid evolutionary standing as the fit results of natural selection, it may also be that they are anthropomorphic and highly related to the human substrate, and will prove irrelevant in the instance of uploaded and even extended human intelligence and/or AI.
How could this not be correct? We are only speculating about what the next incarnation/level of intelligence will be like. There are the usual partially helpful metaphors: regarding motility, humans:plants as AI:humans; regarding evolutionary take-off, humans:apes as AI:humans. With the probable degree of difference between extended/uploaded humans/AIs and current humans, it is quite likely that crutch concepts like self and goals may not matter.
The interesting question is: what will matter? Will there be concepts of convenience and metaphors adopted by intelligence v.2? If not directed by goals, what direction will there be? Will there be direction? Already, most human behavior is directed not by survival but by happiness and, for some, self-actualization. All are still goals. Will the human objective of the quest to understand the laws and mysteries (like dark matter) of the universe persist in AI? For an AI with secure energy inputs (i.e., survival and immortality reasonably assured), will there be any drives, direction and objectives?
Posted by LaBlogga at 5:16 PM 0 comments
Wednesday, December 07, 2005
Meme self-propagation improves with MySpace
Those memes are getting better and better at spreading themselves! Quicker than the posts and reference comments circling the blogosphere (already an exponential improvement over traditional media) is the instant distribution offered by the large and liquid communities on teen and twenties identity portal MySpace.
MySpace is a website where users can create pages with blogs, photos and other creative endeavors and interact with others. Site designers have also successfully positioned and promoted MySpace as a music distribution site for small bands. The site is home to many mainstream and long-tail communities, including 90,000 Orange County (TV show) fans and 23,000 Bobby Pacquiao (super-featherweight boxer) fans. Another benefit of MySpace, and a property of all user-created content sites, is going global instantaneously: Friedman's Flat World in action.
What will the next level of meme spreading tools be like? What should we (humans) create next? (While we still can!) How long will we be able to perceive meme-spreading platforms and be necessary participants as the transmission substrate?
Posted by LaBlogga at 4:45 PM 0 comments
Sunday, December 04, 2005
AI and human evolution transcend government
Autonomy (both the experience and the discussion of it) in early metaverse worlds like Second Life is most interesting in the sense that it is presumably a precursor to what autonomy in digital environments will be like with fully uploaded human minds.
In the crude early stages of these metaverse worlds, an unfortunate theme is facsimile of reality. Digital facsimile of the physical world is evident in the visual appearance dimension (how avatars, objects and architecture look), in the dynamics of social interaction and community building, and in conceptual themes. The tendency is to recreate similitudes of the physical world and slowly explore the new possibilities afforded by the digital environment. Presumably, dramatically more exploration will occur in the future and in freer digital environments without as many parameters established by the providers.
Analogous to the physical world is the theme of the check and balance between autonomy and community in digital environments. There is freedom to a degree, along with norms, and enforceable codes if norms and laws are not maintained. This is seen in all existing virtual worlds: Amazon, eBay, Second Life, etc.
In the near term, humans will likely continue to install and look to a governing body for the enforcement of laws. If their power base can be shaken, governing bodies will hopefully become much more representative and responsive (say, the full constituency voting daily on issues via instant messenger). Governing bodies could also improve by being replaced by AIs, which would not have the agency problem.
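As a toy illustration of that parenthetical idea (not a proposal), a daily instant-messenger referendum could be tallied as simply as this; the message format and issue name are invented for the example:

```python
from collections import Counter

# Hypothetical daily referendum collected over instant messenger.
incoming_messages = [
    "issue-42 yes",
    "issue-42 no",
    "issue-42 yes",
    "issue-42 abstain",
]

def tally(messages, issue):
    """Count the choices submitted for a single issue."""
    votes = Counter()
    for msg in messages:
        voted_issue, choice = msg.split()
        if voted_issue == issue:
            votes[choice] += 1
    return votes

print(tally(incoming_messages, "issue-42"))
# Counter({'yes': 2, 'no': 1, 'abstain': 1})
```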
Post-upload, physical location as a basis of governance will be quite different, and, in fact, human intelligence can presumably evolve to a point of not needing external governance.
A separate issue is whether non-human intelligence will be the governor of human intelligence, and this is probably not the case, for two reasons: first, the usual point that machine intelligence would find human intelligence largely irrelevant; and second, that machine intelligence would understand that system incentives, not rules, govern behavior.
Posted by LaBlogga at 2:46 PM 3 comments
Thursday, December 01, 2005
Classification Obsession – Invitation to the Future
One almost cannot help noticing a theme in current mainstream thought – an obsession with classification. Are priests gay or not, or more precisely, how gay are gay priests? Are decorated public space trees holiday trees or Christmas trees? Should cities have trees – holiday, Christmas or whatever? Should researchers be allowed to donate eggs for stem cells? Is the war in Iraq a war in Iraq?
At the meta level, in an increasingly complex world, a struggle for classification, delineation and categorization is quite normal. Is there more meaning to the recent classification game than the natural human pattern finding tendency?
Classification is not just pattern finding/assignation; it can also be read as an attempt to impose order on an increasingly shifting and expanding world…an early warning sign of futureshock.
More than futureshock, classification is the first step in perceiving, assessing and absorbing new situations. It is humans grappling with the degree of newness available in the world today, the inevitable cant of progress and the invitation to participate.
Posted by LaBlogga at 8:40 PM 0 comments