Showing posts with label ethics.

Sunday, December 07, 2014

Bergson-Deleuze: Incorporating Duration into Nanocognition

French philosophers Bergson and Deleuze bring to nanocognition and machine ethics interfaces the philosophical conceptualizations of image, movement, time, perception, memory, and reality that can be considered for implementation in tools for both cognitive enhancement and subjectivation (the greater actualization of human potential).

From the standpoint of an Ethics of Perception of Nanocognition, Bergson and Deleuze stress the need to see perception in itself, and machine ethics interfaces could possibly help us do this through the concept of Cinema 3: the perception-image. Having had only one default (undoubled) means of perception (taking the actualized perceptions of daily life as the only kind of perception, just as we have taken linear, spatialized, narrative time as the only form of time) has meant that we have not considered that there may be multiple ways to perceive, and that these might exist on a virtual plane of possible perceiving, and coalesce through difference into actual perception. At minimum, our nanocognitive prosthetics might be able to introduce and manage the notion of multiplicity in virtual and actual perception.

Bergson-Deleuze exhorts us to notice the doubled, internal, qualitative, subjective experience of lived phenomena like movement, time, perception, reality, and ourselves. In particular, nanocognition allows us to see the full doubling of perception, because there cannot be a doubling if there is only one unexamined mode, if perception in itself cannot be seen. It is only through duration - the doubled, subjective experience of perception (the experience of perception itself) - that its virtuality and multiplicity (possibility) can be seen. Importantly, the consequence of seeing the doubled side of perception and reality is that it allows us to tune into the possibility of possibility itself. The real goal of Bergson-Deleuze is not just seeing different possibilities for ourselves, but seeing possibility itself; this is the ultimate implication for nanocognition - conceiving of nanocognition as pure possibility in and of itself.

Tuesday, June 03, 2014

Emerging Techs Nanotechnology, Synthetic Biology, and Geoengineering in the Governance Eye

The second annual Governance of Emerging Technologies conference, held in Phoenix, AZ, May 27-29, 2014, discussed a variety of governance (regulatory), legal, and ethical aspects of three areas of emerging technology: nanotechnology, synthetic biology, and geoengineering (climate management).

The prevailing attitude in nanotechnology is much like that in artificial intelligence: “no new news” and some degree of weariness after a few hype-bust cycles, coupled with the invisibility frontier. The invisibility frontier is reached when an exciting emerging technology becomes so pervasive and widely deployed that it becomes invisible. There are numerous nanotechnology implementations in a range of fields including materials, computing, structures, nanoparticles, and new methods, similar to the way artificial intelligence deployments are also widely in use but ‘invisible’ in fraud detection, ATM operation, data-management algorithms, and traffic coordination.

Perhaps the biggest moment of clarity was that different groups of people with different value systems, cultures, and ideals are coming together with more frequency than historically to solve problems. The locus of international interaction is no longer primarily geopolitics, but shifting to be much more one of collaboration between smaller groups in specific contexts who are inventing models for sharing knowledge that simultaneously reconfigure and extend it to different perspectives and value systems.

Monday, May 26, 2014

Futurist Ethics of Immanence

The ethics of the future could likely shift to one of immanence. In philosophy, immanence describes situations where everything comes from within a system, world, or person, as opposed to transcendence, where specifications are externally determined. The traditional models of ethics have generally been transcendent in the sense that there are pre-specified ideals posed from some point outside of an individual’s own true sense of being. The best anyone can ever hope to achieve is regaining the baseline of the pre-specified ideal (Figure 1). Measuring whether someone has reached the ideal is also problematic and tends to be imposed externally. (This is also an issue in artificial intelligence projects; judgments of intelligence are imposed externally.)

Figure 1: Rethinking Ethics from 1.0 Traditional to 2.0 Immanence.

There has been a progression in ethics models, moving from act-based to agent-based to, now, situation-based. Act-based models judge actions themselves (the Kantian categorical imperative versus utilitarianism (the good of the many) or consequentialism (the end justifies the means)). Agent-based models hold that the character of the agent should be predictive of behavior (dispositionist). Now social science experimentation has validated a situation-based model (the actor performs according to the situation, i.e., could behave in different ways depending on the situation). However, all of these models are still transcendent; they take the form of externally pre-specified ideals.

Moving to a true futurist ethics that supports freedom, empowerment, inspiration, and creative expression, it is necessary to espouse ethics models of immanence (Figure 1). In an ethics of immanence, the focus is the agent, where an important first step is tuning in to true desires (Deleuze) and one’s own sense of subjective experience (Bergson). Expanding the range of possible perceptions, interpretations, and courses of action is critical. This could be achieved by improved mechanisms for eliciting, optimizing, and managing values, desires, and biases.

As social models progress, a futurist ethics should move from what can be a limiting ethics 1.0 of judging behavior against pre-set principles to the ethics 2.0 of creating a life that is affirmatory and expansive.

Slideshare presentation: Machine Ethics: An Ethics of Perception in Nanocognition

Sunday, April 27, 2014

Bergson: Free Will through Subjective Experience

Advances in science always help to promulgate new ideas for addressing long-standing multidisciplinary problems. Max Tegmark's recent book, Our Mathematical Universe, is just such an example of new and interesting ways to apply science to understanding the problem of consciousness. However, before jumping into these ideas, it is important to have a fundamental knowledge of different theories of perception, cognition, and consciousness.
 
One place to turn for a basic theory of cognition is French process philosopher Henri Bergson (1859-1941). Although we might easily dismiss Bergson in our shiny modern era of real-time fMRIs, neo-cortical column simulation, and spike-timing calculations, Bergson's theories of perception and memory still stand as some of the most comprehensive and potentially accurate accounts of the phenomena.

Bergson's view is that there are two sides to experience: the quantitative measurable aspect, like a clock's objective ticking in minutes, and the qualitative subjective aspect, like what time feels like when we are waiting, or having fun with friends.

Bergson's prescription for more freedom and free will is tuning into subjective experience. In the example of time, it is to 'live in time,' experiencing time as duration, as internal themes and meldings of time. We must tune into the subjective experience of time to exercise our free will. In practice, we are more disposed to freedom and free will when we choose spontaneous action, which happens when we are oriented toward the qualitative aspects of internal experience and see time as a dynamic overlap between states, not as boxes on a calendar.

Considering that we may espouse a futurist ethics that supports freedom, empowerment, inspiration, and creative expression of the individual in concert with society, the Bergsonian implementation would be ethics models that facilitate awareness of subjective experience, a point that Deleuze subsequently takes up in envisioning societies of individuals actualized in desiring-production.

Tuesday, July 09, 2013

Ethics of Perception and Nanocognition (Nanorobot-aided Cognition)

It is not too soon to consider what kinds of ethics nanorobotic cognitive aids should have, and what kinds of ethics our QS (quantified self) gadgetry in general should have. Ethics is meant in an Ethics 2.0 sense of enablement, empowerment, and coordination of new ways of living as opposed to an Ethics 1.0 sense of judging and circumscribing behavior.

Cognitive nanorobots, an analog to medical nanorobots, could have applications in cognitive enhancement and perceptual aid such as bias reduction, memory management (access, suppression), and personalized ethics optimization.

In defining an ethics of perception, a number of core philosophical questions arise such as the possibility and desirability of knowing a true and objective reality, and selecting different realities.

Hear more and discuss this topic:
"Ethics of Perception and Nanocognition (Nanorobot-aided Cognition)" Terasem's 9th Annual Workshop on Geoethical Nanotechnology, July 20, 2013, 1PM – 4PM EDT, Terasem Island, Second Life

Sunday, August 14, 2011

Scaling citizen health science and ethical review

Many things are needed to scale citizen science from small cohorts on the order of a few individuals to medium and large cohorts: building trust in online health communities, motivating sustained engagement from study participants, and lower-cost, easier-access blood tests, among others.

Legal and ethical issues are also a challenge. Independent ethical review is appropriate but the current IRB (Institutional Review Board) requirement for funding and journal publication is a barrier to crowdsourced study growth. In 23andMe's early studies, there was a definitional debate as to whether their research constituted 'human subjects research,' and whether there was a difference in interacting with subjects in-person versus over the internet.

The U.S. HHS (Health and Human Services) definition of 'human subjects research' is research that "obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information" (45 CFR 46.102(f)). The strict reading is that any research obtained by 'interacting' with a human subject (e.g., likely all personalized health collaboration community research) would require an IRB for the funding needed to do it at scale.

Acknowledgement: Thank you to Thomas Pickard for providing background research

Sunday, July 11, 2010

Ethics of historical revivification

Thought experiment: Assuming a world or worlds without basic resource constraints, and if it were technologically possible, would it be more humane or less humane to revive dead persons from history? Even those recently dead could be out of sync with the current milieu. Obviously, there would need to be rehab programs, as contemplated in science fiction, for example,

"Life 101: Introducing Genghis Khan to the iPhone"
It is arguable that some large percentage of dead persons would find enjoyment and utility in revivification.

The interpretation of the core rights of the individual could be different in the future. “Life, liberty, and the pursuit of happiness” seems immutable, even when considered in the possible farther future context of many worlds, uploaded mindfiles, and human/AI hybrid intelligences. However, how these principles are applied in practice could seem strange from different historical viewpoints.

Attributes that might be important to an individual now, for example embodiment or corporeality (being physically instantiated in a body), could well be moot in the future. On-demand instantiation could be a norm to complement digital mindfiles.

It could be queried whether revived historical persons should have the option to re-die. Dying and suicide could be conceptually much different in a digital upload culture. Choosing not to run your mindfile could be legal, but deleting it (and all backups) could be the equivalent of suicide, which is generally illegal in contemporary society.

Sunday, July 26, 2009

Ethics of brainless humans

As a thought experiment, if it were possible, would it be ethical to make humans without brains for research purposes?

The idea arises since a more accurate model of humans for drug testing would be quite helpful. Drugs may work in mice, rats, and monkeys but not in humans, or in some humans but not others. Human biology is more complex, and the detailed pathways and mechanisms are not yet understood.

Of course by definition, a brainless human is not really a human; a human form without a brain would be more equivalent to a test culture of liver cells than a cognitive agent.

Tissue culturing, regenerative medicine and 3D organ printing
Less contentious versions of the idea of growing brainless humans are currently under initial exploration: taking tissue from a human, growing it in culture, and testing drugs or other therapies on it. A further step up is regenerative medicine, producing artificial organs from a person’s cells, such as the Wake Forest bladder and Gabor Forgacs's 3D organ-printing work.

Brain as executive agent may be required
The next steps for testing would be creating systems of interoperating tissues and organs (e.g., how would this person’s heart and liver respond to this heart drug?) and possibly a complete collection of human biological systems sans brain. One obvious issue is that this might not even work: the brain is a critical component of a human, and a brainless human might not be buildable without some sort of executive organizing system like the brain. Medical testing would also need to include the drug's impact on the brain and the brain’s role in and interaction with the other biological systems.

Ethical but impractical
While it is quite clear that generating a full living human for research purposes would be unethical, it is hard to argue that generating a brainless human, a complex collection of human biological systems without a brain, which is not really human and does not have consciousness or personhood, would be unethical. Certainly some arguments could be made to the contrary regarding the lack of specific knowledge about consciousness and concepts of personhood, but these would seem to be outweighed.

Unlikely to arise
It is extremely unlikely that the situation of manufacturing brainless humans for research purposes would ever arise: first, because a lot of testing and therapy may be possible with personalized tissue cultures and regenerative medicine, informed by genomic and proteomic sequencing. Second, in an eventual era where it might be possible to construct a brainless human or a collection of live interacting tissues and organ systems, it would probably be more expedient to model the whole biological system digitally.

Sunday, July 12, 2009

Ethics of the future: self-copies

Just as the future of science and technology is rife with legal opportunities and psychological study possibilities, so it is with ethical issues. One interesting example is the case of individuals having multiple copies of themselves, either embodied or digital.


1. Can I self-copy?
The first issue is how different societies will set norms and legal standards for having copies. The least offensive first level would be having a backup copy of mindfiles for emergency and archival purposes, much like computer backups at present. People take pictures and videos of their experiences; why not of their minds? At the other extreme, the most liberal societies might allow all manner of digital and embodied copies. The notion of regulating copies brings up an interesting potential precedent: currently, the creation of children is largely unregulated on a global basis.

2. When and where can I run my self-copy(ies)?
A second issue is, given copies, under what circumstances they can and should be run. A daily backup is quite different from unleashing hundreds of embodied copies of oneself. Physically embodied copies would consume resources just as any other person in the world does, and there would likely be some stiff initial regulations, since a national population doubling, trebling, or more overnight would be a shock society could not usefully absorb. Not to mention the difficulty of quickly obtaining and assembling the required resources for a full human copy, despite potential advances in 3D human tissue and organ home printers by then.

Digital copies are the more obvious opportunity for running self-copies and could be much more challenging to regulate. In the early days, the size and processing requirements of uncompressed mindfiles would likely be so large that a runtime environment would not be readily available on any home machine or network but would instead require a supercomputer.

3. Am I a copy?
A third interesting problem is whether it would be moral for copies to know that they are copies, and the related legal issues regarding memory redaction, as explored in Wright's "Golden Age" trilogy. Depending on how interaction between originals and copies is organized, it may not matter; psychologically, for both the originals and the copies, it may matter a lot. The original may 'own' the copies, or the copies may have self-determination rights. In the case of an embodied copy, it is hard not to argue for full personhood, but somehow a digital instance seems to have fewer rights, although it may come to be that shutting down an instance of a digital mind, even with a recent full memory backup and integration, is just as wrong as a physical homicide.

Interesting ethical issues could arise for originals and copies alike as to what to share with the others; should horrifying experiences be edited out, as Brin's Kiln People do at times? There would be both benefits and costs to experiencing the death of a self-copy, for example. It would not seem ethical to make self-copies explicitly for scientific research purposes, to garner information from their deaths, but it does seem fully ethical to have multiple self-copies with different lifestyles, some healthier and some less healthy, to investigate a) whether a healthy lifestyle matters and b) to selfishly share exciting experiences from less risk-averse copies back with the longer-lived, healthier copy.

Indeed, in the new medical era of a systemic understanding of health and disease, where n=1, what better control examples to have than of yourself! However, epigenetic mutations and post-translational modifications may be much harder to equalize across copies than memories and experiences.

The issue of the definition of life arises, as some people may want the abridged meta-message or take-away from experiences (indeed, this is one of the great potential benefits of multiple copies), while others may wish to preserve the full resolution of all experiences. A standard could accommodate both, with the summary as the routine information transfer and the detail archived for on-demand access.

4. What can I do with my self-copies?
Societies might like to establish checks and balances to prevent originals from selling copies of themselves or others into slavery to reap economic benefits, as dystopically portrayed in Ballantyne's "Capacity". Especially in a potential realm of digital minds, there are many future challenges with rights determination and enforcement.

The 'AI abdication' defense is the argument that societies sufficiently advanced to run self-copies would also have other advancements developed and in use, such as some sort of consciousness sensor identifying existing and emerging sentient beings and looking after their well-being: a beneficent policing. There are numerous issues with the AI abdication defense, including its unlikely existence from a technical standpoint, whether humans would agree to use such a tool, and whether a caregiving AI could be hacked. However, technology does not advance in a vacuum, and society generally matures around technologies, so it is likely that some detriment-balancing counter-initiatives would exist.

For example, would it be moral to create sub-sentient beings as sex slaves or personal assistants? This may be an improvement over the current situation but is not devoid of moral issues. At some point, as more about consciousness has been characterized and defined, a list of intelligence stratifications and capabilities could be a standard societal tool. Animals, humans and AIs would be included at minimum. A future world with many different levels of sentience seems quite possible.




Sunday, October 12, 2008

Prime Directive redux

As a follow-up to the last post, Technology intervention is moral, it could be quite useful for advanced civilizations to develop a rigorous set of Principles of Societal Interaction, to be ready for any potential future communications. Current Earth-based treaties and norms, as well as Star Trek's Prime Directive, as Hiro Sheridan points out, could be drawn upon for ideas.

Star Trek's Prime Directive espouses a strict non-interference policy towards other societies and identifies a key technological pivot point, the development of the warp drive allowing interstellar space travel.

The Prime Directive is an interesting blueprint; however, alternatives could be evaluated for at least three reasons: practicality, reality, and moral imperative.

  • First, as a practical matter, the Earth-based examples have been cases of societies being aware of each other, and often interfering. A clean, invisible, non-interference model is probably not practical, even for societies scattered through space. Cognizant of the limited frame of human reasoning, it still seems that if there are multiple intelligent societies in the universe, it is at least possible that they will start finding each other through SETI-type programs and other means, either intentionally or accidentally. At minimum, it is not ascertainable that any and all advanced societies would have, and be able to successfully execute, a non-interference and non-awareness policy.
  • Second, in responding to the complicated nuances of reality, there is a difference between non-interference in the internal affairs of another society, in the Prime Directive and Westphalian sovereignty sense (supportable), and complete non-interaction (less supportable). There could be many types of interaction, such as diplomatic-mission technology sharing, as has been the historical precedent for Earth-based societies, whose objectives would not contravene Westphalian sovereignty. Over time, it may even be that Earth-based intelligent society evolves a universal bill of rights for all intelligent life, irrespective of nation-state or other jurisdiction, such that concepts like Westphalian sovereignty become outmoded.
  • Third, as argued in Technology Intervention is Moral, it is not clear that non-interference is the most moral course, and an advanced society may consider it a moral imperative to offer certain types of suffering-alleviating, quality-of-life-improving technology to less advanced societies, as vaccination has been on Earth.


Sunday, October 05, 2008

Technology intervention is moral

Advanced civilizations may have policies for interacting with civilizations deemed less advanced than their own. On Earth, there is currently no cohesive national or global view on contact with any non-Earth-based intelligent societies. Among Earth-based societies, interventionism has been the norm.

Assuming safe interaction and communication can occur and intelligence or proto-intelligence has been established,

it is the moral obligation of any more advanced society to interact with any less advanced society.

It is a moral obligation to intervene for the purpose of technology-sharing, first and most importantly to ameliorate suffering and improve quality of life; consider vaccines, for example. Second, it is patronizing to decide whether or not to expose the less advanced society to more advanced technologies. The moral and respectful path is to expose the new technology and let the other decide.

Third, a broad goal of humanity is to lift all intelligent beings to an optimized state of fulfillment and contribution, so absent existential risk to the more advanced civilization, there is no reason not to share technology. Fourth, considering the 'do unto others' principle, the majority of humans would likely support intervention.

An alternate but less tenable view is that intervention is immoral and that the independence of the other civilization should be respected: the more advanced society does not have the right to interfere, and it is better to let someone learn for themselves instead of teaching them; forget the matches and wait a few more centuries for lightning to zap the meat. However, even if the intervention is resented later, it is still more moral to intervene in the sense of alleviating suffering and improving quality of life.

Tuesday, January 01, 2008

Is it moral to kick a robot?

As long as robots are non-sentient, non-feeling beings, it would only be immoral to kick a robot in the sense of potential property damage to others. It would be like kicking a couch or a car.

If robots were sentient, emotional beings, it would certainly be immoral to kick one. It would be like kicking a human.

If a robot were sentient but non-emotional, non-feeling, would it be immoral to kick it? Yes, for both the potential physical damage and an as yet undefined area of implied morality amongst sentient beings. Even if the robot did not ‘feel’ a kick as physical pain in the same way a human or animal would, the robot would have some sort of sensor network awareness that perceived and coded the action as inappropriate and possibly dangerous and illegal. A sentient robot would likely be able to take some action against the mistreatment.

How sentient or feeling does a robot need to become for it to be immoral to kick it? There will be early stages as robots develop the beginnings of sentience and emotion. If the kicker knows that the robot is or could be sentient or emotion-feeling, then it would be immoral to kick the robot. This would be similar to situations between adults and children: the former are assumed to have an uneven advantage of control and influence over the latter, who may not be consciously aware and able enough to perceive damage and protect themselves.

Sunday, November 25, 2007

Ethics of an Advanced Civilization

What kinds of policies is an advanced civilization likely to have regarding interacting with other societies it encounters?

Thinking of the most interesting case, a discretionary interaction, a more advanced society could probably easily identify ways of preventing pain or hardship in a less advanced society with its more advanced technology; spreading the smallpox vaccine would be an example in the human case.

If there were no existential risk to either civilization, and the more advanced society could adequately communicate with the less advanced society, to what degree, if any, should the advanced society be morally obligated to share its advanced technology?

Presumably, high among an advanced society’s goals would be the furthering of knowledge and technology, and presumably this could be better accomplished with additional agents. So an advanced society would be more rather than less likely to share advanced technology, most likely overtly, even if it could adversely impact the culture of the less advanced society.

How the technology would be shared could be an interesting question; if the possessing society were advanced enough, presumably it would be shared non-pecuniarily. Identifying which technologies should be shared could also be an interesting question, as there will likely still be some competition for status and resources, but probably nearly all technology could be shared as less advanced societies advance to parity.

The foresightedness and cohesion required to consider the possibility of encountering other societies and have a universally agreed upon policy for this situation would seem to be one early mark of an advanced civilization.