Showing posts with label artificial intelligence. Show all posts

Friday, November 10, 2017

The Future of AI: Blockchain and Deep Learning

First point: considering blockchain and deep learning together suggests the emergence of a new class of global network computing system. These systems are self-operating computation graphs that make probabilistic guesses about reality states of the world.

Second point: blockchain and deep learning are facilitating each other’s development. This includes using deep learning algorithms for setting fees and detecting fraudulent activity, and using blockchains for the secure registry, tracking, and remuneration of deep learning nets as they go onto the open Internet (in autonomous driving applications, for example). Blockchain peer-to-peer nodes might provide deep learning services as they already provide transaction hosting and confirmation, news hosting, and banking (payment, credit flow-through) services. Further, there are similar functional emergences within the two systems; for example, LSTM (long short-term memory) units in RNNs resemble payment channels.
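The payment-channel side of that analogy can be made concrete with a toy sketch: two parties exchange many off-chain balance updates, and only the final state settles on-chain. This is a minimal illustration under invented names; real channels (e.g., Lightning) add signatures, timeouts, and dispute resolution.

```python
# Toy payment channel: many off-chain balance updates, one on-chain settlement.
# All names are illustrative; real channels add signatures, timeouts,
# and dispute-resolution mechanisms.

class PaymentChannel:
    def __init__(self, balance_a, balance_b):
        self.balances = {"a": balance_a, "b": balance_b}
        self.updates = 0          # off-chain state version number
        self.settled = False

    def pay(self, sender, receiver, amount):
        """Update channel state off-chain; nothing touches the ledger."""
        if self.settled or self.balances[sender] < amount:
            raise ValueError("invalid update")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def settle(self):
        """Only the final state is written to the blockchain."""
        self.settled = True
        return dict(self.balances)   # the single on-chain transaction

channel = PaymentChannel(balance_a=10, balance_b=10)
channel.pay("a", "b", 3)
channel.pay("b", "a", 1)
final = channel.settle()
```

Like an LSTM cell carrying state forward across many steps before emitting output, the channel accumulates many intermediate state updates before committing a single result.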

Third point: AI smart network thesis. We are starting to run more complicated operations through our networks: information (past), money (present), and brains (future). There are two fundamental eras of network computing: simple networks for the transfer of information (all computing to date, from mainframe to mobile) and now smart networks for the transfer of value and intelligence. Blockchain and deep learning are built directly into smart networks so that they may automatically confirm authenticity and transfer value (blockchain) and predictively identify individual items and patterns (deep learning).

Detailed Slides available here.

Sunday, February 01, 2015

Machine Cognition and AI Ethics Percolate at AAAI 2015

The AAAI’s Twenty-Ninth Conference on Artificial Intelligence was held January 25-30, 2015 in Austin, Texas. Machine cognition was an important focal area, covered in two workshops (AI and Ethics, and Beyond the Turing Test) and in a special track on Cognitive Systems. Some of the most interesting emergent themes are discussed below.

Computational Ethics Systems
One main research activity in machine ethics is developing computational ethics systems. Several such systems exist; however, there is a paucity of overall standards bodies, general ethics modules, and articulated universal principles that might be included, such as human dignity, informed consent, privacy, and benefit-harm analysis. Some standards bodies that are starting to address these ideas include the IEEE’s Technical Committee on Robot Ethics and the European committees involved in RoboLaw and Roboethics.

One required feature of computational ethics systems could be the ability to flexibly apply different systems of ethics, to more accurately reflect the ways that human intelligent agents approach real-life situations. For example, it is known from early programming efforts that simple models like Bentham and Mill’s utilitarianism are not robust enough as ethics models. They do not incorporate comprehensive human notions of justice that extend beyond the immediate situation in decision-making. What is helpful is that machine systems on their own have evolved more expansive models than utilitarianism, such as a prima facie duty approach. In the prima facie duty approach, there is a more complex conceptualization of intuitive duties, reputation, and the goal of increasing benefit and decreasing harm in the world. This is more analogous to real-life situations, where multiple ethical obligations compete to determine the right action. GenEth, a machine ethics sandbox available for Mac OS, can be used to explore these kinds of systems, with details discussed in this conference paper.
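The prima facie duty approach can be sketched as scoring candidate actions against several weighted, competing duties and choosing the best balance. The duties, weights, and scenario below are invented for illustration; systems like GenEth learn such representations from ethicist-labeled cases rather than hard-coding them.

```python
# Sketch of a prima facie duty evaluator: each candidate action is scored
# against several competing duties. Duty names and weights are illustrative,
# not taken from GenEth or any deployed system.

DUTY_WEIGHTS = {"nonmaleficence": 3.0, "beneficence": 2.0, "honesty": 1.0}

def score(action):
    """Weighted sum of how well an action satisfies each duty (-1..1)."""
    return sum(DUTY_WEIGHTS[d] * action["duties"][d] for d in DUTY_WEIGHTS)

def choose(actions):
    """Pick the action that best balances the competing obligations."""
    return max(actions, key=score)

actions = [
    {"name": "warn_patient",
     "duties": {"nonmaleficence": 0.5, "beneficence": 0.8, "honesty": 1.0}},
    {"name": "stay_silent",
     "duties": {"nonmaleficence": 0.7, "beneficence": -0.2, "honesty": -1.0}},
]
best = choose(actions)
```

The point of the structure is that no single duty is absolute: an action weak on one duty can still win if it better satisfies the others, mirroring how competing obligations are weighed in real situations.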

There could be the flexible application of different ethics systems, and also integrated ethics systems. As in philosophy, computational ethics modules connote the idea of metaethics, a means of evaluating and integrating multiple ethical frameworks. These computational frameworks differ by ethical parameters and machine type; for example an integrated system is needed to enable a connected car to interface with a smart highway. The French ETHICAA (Ethics and Autonomous Agents) project seeks to develop embedded and integrated metaethics systems.

An ongoing debate is whether machine ethics should be a separate module or part of regular decision-making. Even though ethics might ultimately be best as a feature of any kind of decision-making, ethics are easiest to implement now, in the early stages of development, as a standalone module. Another point is that ethics models may vary significantly by culture; consider, for example, collectivist versus individualist societies, and how these ideals might be captured in code-based computational ethics modules. Happily for implementation, however, the initial tier of required functionality might be easy to achieve: obtaining ethicist consensus on, overall, how we want robots to treat us as humans. QA’ing computational ethics modules and machine behavior might be accomplished through some sort of ‘Ethical Turing Test’: metaphorically, not literally, evaluating the degree to which machine responses match human ethicist responses.
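One very rough way to operationalize such an ‘Ethical Turing Test’ is as an agreement rate between machine judgments and an ethicist consensus on the same dilemmas. The cases, answers, and passing threshold below are all invented for illustration.

```python
# Rough sketch of an 'Ethical Turing Test': compare machine judgments to an
# ethicist consensus on the same dilemmas. Cases and threshold are invented.

def agreement_rate(machine, ethicists):
    matches = sum(1 for case in machine if machine[case] == ethicists[case])
    return matches / len(machine)

ethicist_consensus = {"case1": "permissible",
                      "case2": "impermissible",
                      "case3": "permissible"}
machine_answers    = {"case1": "permissible",
                      "case2": "impermissible",
                      "case3": "impermissible"}

rate = agreement_rate(machine_answers, ethicist_consensus)  # 2 of 3 cases
passes = rate >= 0.9   # arbitrary bar; this machine would not pass
```

In practice the hard part is not the comparison but assembling a defensible consensus panel and case set, which is exactly the standards-body gap noted above.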

Computational Ethics Systems: Enumerated, Evolved, or Corrigible
There are different approaches to computational ethics systems. Some involve the attempted enumeration of all involved principles and processes, reminiscent of Cyc. Others attempt to evolve ethical behavioral systems like the prima facie duty approach, possibly using methods like running machine learning algorithms over large data corpora. Others attempt to instill values-based thinking in ways like corrigibility. Corrigibility is the idea of building AI agents that reason as if they are incomplete and potentially flawed in dangerous ways. Since the AI agent apprehends that it is incomplete, it is encouraged to maintain a collaborative and not deceptive relationship with its programmers since the programmers may be able to help provide more complete information, even while both parties maintain different ethics systems. Thus a highly-advanced AI agent might be built that is open to online value learning, modification, correction, and ongoing interaction with humans. Corrigibility is proposed as a reasoning-based alternative to enumerated and evolved computational ethics systems, and also as an important ‘escape velocity’ project. Escape velocity refers to being able to bridge the competence gap between the current situation of not yet having human moral concepts reliably instantiated in AI systems, and the potential future of true moral superintelligences indispensably orchestrating many complex societal activities.

Lethal Autonomous Weapons
Machine cognition features prominently in lethal autonomous weapons, where weapon systems are increasingly autonomous, making their own decisions in target selection and engagement without human input. The banning of autonomous weapons systems is currently under debate. On one side, detractors argue that full autonomy is too much: these weapons no longer have ‘meaningful human control’ as a positive obligation, and do not comply with the Geneva Convention’s Martens Clause, which requires that weapons comply with principles of humanity and conscience. On the other side, supporters argue that machine morality might exceed human morality, and be more accurately and precisely applied. Ethically, it is not clear whether weapons systems should be considered differently from other machine systems. For example, the Nationwide Kidney Exchange automatically allocates two transplant kidneys per week, where the lack of human involvement has been seen positively as a response to the agency problem.

Future of Work and Leisure
The automation economy is one of the great promises of machine cognition, where humans are able to offload more and more physical tasks, and also cognitive activities, to AI systems. The Keynesian prediction of a leisure society by 2030, the idea that leisure time, rather than work, will characterize national lifestyles, is becoming more plausible. However, several thinkers are raising the need to redefine what is meant by work. The automation economy, possibly coupled with Guaranteed Basic Income initiatives and an anti-scarcity mindset, could render obligation-based labor a thing of the past. There is ample room for redefining ‘work’ as productive activity that is meaningful to one’s sense of identity and self-worth, for fulfillment, self-actualization, social belonging, status-garnering, mate-seeking, cooperation, collaboration, and meeting other needs. The ‘end of work’ might just mean the ‘end of obligated work.’

Persuasion and Multispecies Sensibility
As humans, we still mostly conceive and employ the three modes of persuasion outlined centuries ago by Aristotle. These are ethos, relying on the speaker’s qualities like charisma; pathos, using emotion or passion to cast the audience into a certain frame of mind; and logos, employing the words of the oration as the argument. However, the human-machine interaction might cause these modes of human-related persuasion to be rethought and expanded, in both the human and machine context. Given that machine value systems and character may be different, so too might the most effective persuasion systems; both those employed on and deployed by machines. The ethics of human-machine persuasion is an area of open debate. For example, researchers are undecided on questions such as “Is it morally acceptable for a system to lie to persuade a human?” There is a rising necessity to consider ethics and reality issues from a thinking machine’s point-of-view in an overall future world system that might comprise multiple post-biological and other intelligent entities interacting together in digital societies.

Sunday, January 18, 2015

Blockchain Thinking: Transition to Digital Societies of Multispecies Intelligence

The future world could be one of multi-species intelligence. The possibility space could include “classic” humans, enhanced humans, digital mindfile uploads, and many forms of artificial intelligence: deep learning neural nets, machine learning algorithms, blockchain-based DACs (distributed autonomous organizations), and whole-brain software emulations. Machine modes of existence differ from those of humans, which creates the need for modes of interaction that facilitate and extend the existence of both parties.

Blockchains for Trustful Interspecies Social Contracts
The properties of blockchain technology as a decentralized, distributed, global, permanent, code-based ledger of transactions could be useful in managing such interactions. The cryptographic ledger system could be used for interactions either between humans or multispecies parties, exactly because it is not necessary to know, trust, or understand the other entity, just the code system.
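The ledger property doing the work here, trusting the code system rather than the counterparty, comes from chaining records by hash so that any party can detect tampering. A minimal sketch, omitting consensus, signatures, and replication that real blockchains require:

```python
import hashlib

# Minimal hash-chained ledger: each record commits to its predecessor, so
# either party (human or machine) can verify integrity without trusting the
# other entity. Real blockchains add consensus, signatures, and replication.

def record(prev_hash, payload):
    h = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": h}

def verify(chain):
    for i, rec in enumerate(chain):
        expected = hashlib.sha256(
            (rec["prev"] + rec["payload"]).encode()).hexdigest()
        if rec["hash"] != expected:
            return False                       # record was altered
        if i > 0 and rec["prev"] != chain[i - 1]["hash"]:
            return False                       # chain link was broken
    return True

genesis = record("0" * 64, "human<->agent contract v1")
chain = [genesis, record(genesis["hash"], "agent delivered service")]
ok = verify(chain)                             # chain checks out
chain[0]["payload"] = "contract v1 (tampered)"
tampered_ok = verify(chain)                    # tampering is detected
```

The verification step is the point: neither party needs to know or understand the other, only to run the same code over the same ledger.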

While perhaps not a full answer to the problem of trustful multispecies interaction, and the subcase of enacting Friendly AI, decentralized smart networks like blockchains are a robust system of checks and balances. As such, blockchains are a mechanism with more leverage than other available solutions in responding to situations of future uncertainty. Blockchains could be the infrastructure for setting forth the new social contract between humans and technology, and formalizing this arrangement in smart contracts.

Mutual Coexistence in the Capacity Spectrum for Actualization
Trust-building models for interspecies digital intelligence interaction could include both game-theoretic checks-and-balances systems like blockchains to alleviate threats and fears, and also at a higher level, frameworks that put entities on the same plane of shared objectives. The problem frame of machine and human intelligence should not be one that characterizes relations as oppositional, but rather one that aligns entities on the same ground and value system for the most important shared parameters, like growth and actualization.

What we want is the ability to experience, grow, and contribute more, for both humans and machines, and the two in symbiosis and synthesis. This can be conceived as all entities existing on a spectrum of capacity for individuation (the ability to grow and realize potential). Productive interaction between intelligent species could be fostered by being joined in the common framework of a capacity spectrum that facilitates the objectives of personal, mutual, and collective growth in creating the digital communities of the future.

Adapted from: Swan, M. We Should Consider The Future World As One Of Multi-Species Intelligence. Response to The Edge Question 2015. Ed. John Brockman. 2015. 

Sunday, May 26, 2013

AAAI 2014: Connecting Machine Learning and Human Intelligence

The AAAI Spring Symposia are a place for worldwide artificial intelligence, machine learning, and other computer scientists to present and discuss innovative theoretical research in a workshop-like environment. In 2013, some of the topics included: learning, autonomous systems, wellness, crowd computing, behavior change, and creativity.

Proposals are underway for 2014. Please indicate your opinion by voting at the poll at the top right for these potential topics:
  • My data identity: personal, social, universal 
  • Big data becomes personal: knowledge into meaning 
  • Wearable computing and digital affect: wellness, bioidentity, and intentionality 
  • Big data, wearable computing, and identity construction: knowledge becomes meaning 
  • Personalized knowledge generation: identity, intentionality, action, and wellness

Sunday, March 31, 2013

What's new in AI? Trust, Creativity, and Shikake

The AAAI spring symposia held at Stanford University in March provide a nice look at the potpourri of innovative projects in process around the world by academic researchers in the artificial intelligence field. This year’s eight tracks can be grouped into two overall categories: those that focus on computer-self interaction or computer-computer interaction, and those that focus on human-computer interaction or human sociological phenomena as listed below.

Computer self-interaction or computer-computer interaction (Link to details)
  • Designing Intelligent Robots: Reintegrating AI II 
  • Lifelong Machine Learning 
  • Trust and Autonomous Systems 
  • Weakly Supervised Learning from Multimedia
Human-computer interaction or human sociological phenomena (Link to details)
  • Analyzing Microtext 
  • Creativity and (Early) Cognitive Development 
  • Data Driven Wellness: From Self-Tracking to Behavior Change 
  • Shikakeology: Designing Triggers for Behavior Change 
This last topic, Shikakeology, is an interesting new category that is completely on-trend with the growing smart matter, Internet-of-things, Quantified Self, Habit Design, and Continuous Monitoring movements. Shikake is a Japanese concept, where physical objects are embedded with sensors to trigger a physical or psychological behavior change. An example would be a trash can playing an appreciative sound to encourage litter to be deposited.

Sunday, October 21, 2012

Singularity Summit 2012: Image Recognition, Analogy, Big Health Data, and Bias Reduction

The seventh Singularity Summit was held in San Francisco, California on October 13-14, 2012. As in other years, there were about 600 attendees, although this year’s conference program included both general-interest science and singularity-related topics. Singularity in this sense denotes a technological singularity: a potential future moment when smarter-than-human intelligence may arise. The conference was organized by the Singularity Institute, which focuses on researching safe artificial intelligence architectures. The key themes of the conference are summarized below. Overall, the conference material could be characterized as incrementalism in traditional singularity-related work, alongside faster-moving advances in other fields such as image recognition, big health data, synthetic biology, crowdsourcing, and biosensors.

Key Themes:
  • Singularity Thought Leadership
  • Big Data Artificial Intelligence: Image Recognition
  • Era of Big Health Data
  • Improving Cognition: Bias Reduction and Analogies
  • Singularity Predictions
Singularity Thought Leadership
Singularity thought leader Vernor Vinge, who coined the term technological singularity, provided an interesting perspective. Since at least 2000, he has referred to the idea of computing-enabled matter and the wireless Internet-of-things as Digital Gaia. He noted that 5% of objects worldwide are already embedded with microprocessors, and that it could be scary as reality ‘wakes up’ further, especially since we are unable to control other phenomena we have created, such as financial markets. He was pessimistic regarding privacy, suggesting that Brin’s traditional counterproposal to surveillance, sousveillance, is not necessarily better. More positively, he discussed the framing of computers as a neo-neocortex for the brain, extreme UIs to provide convenient and unobtrusive cognitive support, other intelligence amplification techniques, and how we have been unconsciously prepping many of our environments for robotic operations. Crowdsourcing has also risen as an important resource, as the network (the Internet plus potentially 7 billion Turing-test-passing agents) filters optimal resources to specific cognitive tasks (like protein-folding analysis).

Big Data Artificial Intelligence: Image Recognition
Peter Norvig continued in his usual vein of discussing what has been important in resolving contemporary problems in artificial intelligence. In machine translation (interestingly, a Searlean Chinese room), the key was using large online data corpora and straightforward machine learning algorithms (The Unreasonable Effectiveness of Data). In more recent work, his team at Google has been able to recognize pictures of cats. In this digital vision processing advance (announced in June 2012 (article, paper)), the key was creating neural networks for machine learning that used hierarchical representation and problem solving, and again large online data corpora (10 million images scanned by 16,000 computers) and straightforward learning algorithms.
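The recipe Norvig describes, straightforward learning algorithms applied to data, can be illustrated at toy scale with a perceptron learning a linearly separable rule. The data is invented and the scale is trivial; the actual Google system was a large hierarchical neural network trained on 10 million images.

```python
# Toy illustration of 'simple algorithm + data': a perceptron learning the
# logical AND function. Invented data; this is not the Google cat network,
# which used deep hierarchical representations at vastly larger scale.

def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred                     # 0 when correct
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

The algorithm is a few lines; what changed between this toy and the cat result is mostly the quantity of data, the depth of representation, and the compute applied.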

Era of Big Health Data 
Three speakers presented innovations in the era of big health data, a sector which is generating data faster than any other and starting to use more sophisticated artificial intelligence techniques. Carl Zimmer pointed out that new viruses are continuing to develop and spread, and that this is expected to persist. Encouragingly, new viruses are genetically sequenced increasingly rapidly, but it still takes time to breed up vaccines. A faster means of vaccine production could possibly come from newer techniques in synthetic biology and nanotechnology, such as those from Angela Belcher’s lab. Linda Avey discussed Curious, Inc., a personal data discovery platform in beta launch that looks for correlations across big health data streams (more information). John Wilbanks discussed the pyrrhic notion of privacy provided by traditional models as we move to a cloud-based big health data era (for example, only a few data points are needed to identify an individual, and medical records may have ~500,000). Some health regulatory innovations include an updated version of HIPAA privacy policies, a portable consent for granting the use of personalized genomic data, and a network where patients may connect directly with researchers.
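Wilbanks’ point that only a few data points can identify an individual can be illustrated by counting how many records in a dataset are unique on a small set of quasi-identifiers. The records below are fabricated; the classic finding (Sweeney) is that a large share of the US population is unique on just ZIP code, birth date, and sex.

```python
from collections import Counter

# Toy illustration of re-identification risk: the fraction of records that
# are unique on just (zip, birth_year, sex). Records are fabricated.

records = [
    ("94301", 1970, "F"), ("94301", 1970, "F"),
    ("94301", 1982, "M"), ("73301", 1961, "F"),
    ("10001", 1990, "M"),
]

counts = Counter(records)
unique_fraction = sum(1 for r in records if counts[r] == 1) / len(records)
# 3 of 5 fabricated records are unique on just three attributes
```

With ~500,000 data points per medical record, the number of quasi-identifier combinations that single a person out grows enormously, which is the core of the privacy problem described.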

Improving Cognition: Bias Reduction and Analogies (QS’ing Your Thinking) 
A perennial theme in the singularity community is improving thinking and cognition, for example through bias reduction. Nobel Prize winner Daniel Kahneman spoke remotely on his work regarding fast and slow thinking. We have two thinking modes, fast (blink intuitions) and slow (more deliberative, logical), both of which are indispensable and potentially problematic. Across all thinking is a strong inherent loss aversion, and this helps to generate a bias towards optimism. Steven Pinker also spoke about the theme of bias, indirectly. In recent work, he found that there has been a persistent decline in violence over centuries of human history, possibly due mostly to increases in affluence and literacy/knowledge. This may seem counter to popular media accounts, which, guided by short-term interests, help to create an area of societal cognitive bias. Also on the processes of intelligence, Melanie Mitchell claimed that analogy-making is a key attribute of intelligence. The practice of using analogies in new and appropriate ways could be a means of identifying intelligence, perhaps superior to the traditional proxies such as general-purpose problem solving, question-answering, or Turing-test passing.

Singularity Predictions 
Another persistent theme in the singularity community is sharpening analysis, predictions, and context around the moment when there might be greater-than-human intelligence. Singularity movement leader Ray Kurzweil made his usual optimistic remarks accompanied by slides with exponentiating curves of technology cost/functionality improvements, but did not confirm or update his long-standing prediction of a technological singularity circa 2045 [1]. Stuart Armstrong pointed out how predictions are usually 15-25 years out, and that this is true every year. In an analysis of the Singularity Institute’s database of 257 singularity predictions from 1950 forward, there is no convergence in estimates, which range from 2020 to 2080. Vernor Vinge encourages the consideration of a wide range of scenarios and methods, including ‘What if the Singularity Doesn’t Happen.’ The singularity prediction problem might be improved by widening the possibility space; for example, it is perhaps less useful to focus on intelligence as the exclusive element for the moment of innovation, speciation, or progress beyond human-level, when other dimensions such as emotional intelligence, empathy, creativity, or a composite thereof could be considered.
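Armstrong’s two observations, a wide spread of predicted dates and a roughly constant ‘about 20 years out’ horizon, are easy to compute over any prediction dataset. The sample years below are invented for illustration, not drawn from the Institute’s actual database of 257 predictions.

```python
# Sketch of summarizing singularity predictions: the spread of predicted
# dates and the 'always ~20 years out' horizon pattern. Sample data is
# invented, not the Singularity Institute's actual database.

predictions = [  # (year prediction was made, year predicted)
    (1990, 2010), (2000, 2022), (2005, 2028), (2012, 2035),
]

horizons = [pred - made for made, pred in predictions]
avg_horizon = sum(horizons) / len(horizons)   # how far out, on average
spread = max(p for _, p in predictions) - min(p for _, p in predictions)
```

A constant average horizon combined with a large spread is exactly the non-convergence pattern described: each forecaster places the event a comfortable distance ahead of their own present.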

Reference
1. Kurzweil, R. The Singularity is Near; Penguin Group: New York, NY, USA, 2006; pp. 299-367.

Sunday, May 06, 2012

Obtaining models for singularity futures thinking

Science fiction writer Vernor Vinge calls out a challenge related to the technological singularity: any one future technology change could be so fundamental across all aspects of life that it is hard to write plausible science fiction, and more generally such change impacts how we think about the modern and future world. Any next node with sufficient transformative power (e.g., the internet) could change things so fundamentally, globally, multi-dimensionally, and quickly that its impact would be essentially beyond cognition. Moreover, while there are some visible candidates for the ‘next internet’ such as smartphones, 3D printing, biotechnology, nanotechnology, and robotics, the real next node is likely to be an unforeseen discontinuity.

Comprehensive survey of thinking models
There is a paucity of models for thinking comprehensively and critically about the future in rigorous, sophisticated, justifiable, and transferable ways. A project that should be undertaken, if it has not been already, is an examination of different models for structuring thinking from different disciplines. There is value in this at two levels: first, generally, in identifying, characterizing, and synthesizing different models for structuring thinking; and second, in applying these models cross-disciplinarily to existing areas, and to new areas such as thinking about the future.

Eliciting explicit models for structuring thinking
The models that are used to structure thinking in different fields need to be made explicit. Practitioners immersed in fields may not be easily disposed to articulate these models. For example, it may be novel to inquire ‘What is the model for inquiry in this field?’ or even to have the concepts and vocabulary for explicating them.

Fields with models for structuring thinking
Some of the obvious fields to investigate for eliciting established models for structuring thinking are philosophy, complexity (complex adaptive systems, chaos theory, symmetry, etc.), computing (artificial intelligence, machine learning, knowledge representation, data management, etc.), systems-level disciplines (ecology, biology, cosmology, etc.), and social sciences (sociology, anthropology, economics, etc.).

The challenge of fishing structure and content from academic fields
Some of the immediate obvious barriers to accessing models for structuring thinking from academic disciplines are nomenclature and insularity. Semantic and conceptual nomenclature may prevent easy access to fields, but it is largely a veneer that may be penetrated with a variety of translation techniques and concept mapping. Much more problematic is the potential lack of suitable content in these fields. By default, many areas of academia are not externally-focused applied disciplines but rather inwardly-focused insular disciplines engaged in cataloging and interpreting the thoughts of their own ancestral brethren. The accompanying applied dimension to every field, which would render the core ideas explicitly accessible, and proven and useful through deployment, seems to be absent from many fields. Rather than being perceived as a less pure pursuit, the application of the central ideas and structures would seem to be a key raison d'être for these fields of knowledge.

Sunday, July 31, 2011

Consciousness only exists as a human construct

It is quite possible that consciousness may not be an objectively definable phenomenon, but rather a convenient illusion created by humans to provide a context for understanding reality. While instruments such as CT and MRI scanners measure human cognition, it may not be possible to measure the direct qualities of consciousness. Philosophers and others have pointed out that consciousness may be a subjective quality arising from the operation of the brain.

One reason that a more detailed look at consciousness may be interesting is in the contemplation of non-human intelligence. Given the aspect of subjective qualia surrounding the label consciousness, it might be insulting to non-human intelligence to be referred to as having consciousness, the more objective attribute ‘self-aware’ being preferable. If human consciousness does not exist, non-human consciousness would be even less likely to exist.

Sunday, March 27, 2011

Human language ambiguity and AI development

Since Jeopardy questions are not fashioned in SQL, and are in fact designed to obfuscate rather than elucidate, IBM’s Watson program used a number of indirect DeepQA processes to beat human competitors in January 2011 (Figure 1). A good deal of the program’s success may be attributable to algorithms that handle language ambiguity, for example, ranking potential answers by grouping their features to produce evidence profiles. Some of the twenty or more features analyzed include data type (e.g., place, name, etc.), support in text passages, popularity, and source reliability.

Figure 1: IBM's supercomputer Watson wins Jeopardy!

Image credit: New Scientist
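The evidence-profile ranking described above can be sketched as a weighted combination of per-feature scores for each candidate answer. The features, weights, candidates, and scores below are invented for illustration; DeepQA used many more features and trained its ranking models on past question data.

```python
# Sketch of evidence-profile ranking: each candidate answer gets scores on
# several features, combined by weights into a confidence. Feature names,
# weights, and scores are illustrative, not Watson's actual DeepQA model.

WEIGHTS = {"type_match": 2.0, "passage_support": 1.5,
           "popularity": 0.5, "source_reliability": 1.0}

def confidence(profile):
    return sum(WEIGHTS[f] * profile[f] for f in WEIGHTS)

candidates = {
    "Toronto": {"type_match": 0.2, "passage_support": 0.4,
                "popularity": 0.9, "source_reliability": 0.8},
    "Chicago": {"type_match": 0.9, "passage_support": 0.8,
                "popularity": 0.7, "source_reliability": 0.8},
}
best = max(candidates, key=lambda c: confidence(candidates[c]))
```

The structure shows why the approach handles ambiguity: a popular but type-mismatched candidate can be out-ranked by one with stronger textual support and the right data type.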

Since managing ambiguity is critical to successful natural language processing, it might be easier to develop AI in some human languages as opposed to others. Some languages are more precise. Languages without verb conjugation and temporal indication are more ambiguous and depend more on inferring meaning from context. While it might be easier to develop a Turing-test passing AI in these languages, it might not be as useful for general purpose problem solving since context inference would be challenging to incorporate. Perhaps it would be most expedient to develop AI in some of the most precise languages first, German or French, for example, instead of English.

Sunday, November 21, 2010

Evolutionary adaptations and artificial intelligence

Art and religion are human evolutionary adaptations. Are there similar evolutionary adaptations that human-level and beyond artificial intelligence would be likely to make? Another way to ask this is: were art and religion predictable? It seems that they were; maybe not the detailed outcomes, but that mechanisms would arise to allow for the achievement of human objectives such as status-garnering and mate selection.

Likewise, it seems quite possible that human-level and beyond artificial intelligence would be likely to make evolutionary adaptations. Utility functions could be edited in many ways. The primary area could be performance optimization, continuously improving cognition and other operations. A second area could be related to societal objectives to the extent that artificial intelligence is present in communities. Artificial intelligence might not have art and religion, but could have related mechanisms for achieving external and internal purposes.

Sunday, October 17, 2010

Phase transition in intelligence

There could be at least three approaches to the long-term future of intelligence: engineering life into technology, simulated intelligence, and artificial intelligence. Further, while the story of evolutionary history is the domination of one form of intelligence, the future could hold ecosystems with multiple kinds of intelligence, particularly ones specialized by purpose or task.

There are significant technical hurdles in executing simulated intelligence and artificial intelligence, but the areas have been progressing in Moore’s Law fashion. The engineering of life into technology will need to proceed expediently to keep pace with technological advance, and tie a lot of wetware loose ends together.

At present, the mutation rate of genetic replication puts an upward bound on how complex biological organisms can be. The human cannot be more than about 10x as complex as Drosophila (the fruit fly), for example. However, if the error rate in the genetic replication machinery could be improved, maybe it would be possible to have organisms 10x more complex than humans, and so on, and so on…
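The bound sketched above resembles the Eigen error-threshold argument: the maintainable genome length scales inversely with the per-base replication error rate. A back-of-envelope version, with illustrative numbers (a per-base error rate around 1e-4 is typical of RNA viruses; proofreading polymerases reach roughly 1e-9):

```python
import math

# Back-of-envelope error-threshold (Eigen) calculation: maximum maintainable
# genome length L scales as ln(s)/u, where u is the per-base error rate and
# s the selective superiority of the master sequence. Numbers illustrative.

def max_genome_length(error_rate, superiority=2.0):
    return math.log(superiority) / error_rate

L_virus_scale = max_genome_length(1e-4)   # ~7,000 bases: RNA-virus scale
L_proofread   = max_genome_length(1e-9)   # ~7e8 bases with proofreading
improvement   = L_proofread / L_virus_scale
```

This is the quantitative shape of the post's point: each order-of-magnitude improvement in replication fidelity raises the ceiling on organismal complexity by the same factor.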

Sunday, October 25, 2009

Role of B.S. in Advanced Society

B.S. is a deeper philosophical topic than it might seem at first glance. Two interesting books contemplate the matter: B.S. and Philosophy (2006) and On B.S. (2005).

What is the role of B.S. in advanced society? Since it exists, it must have some role, possibly related to conflict reduction and social lubrication. A second reason for B.S. could be the complex values hierarchies in which individuals and societies operate. Social pressure and belongingness may trump truth as values. When someone is asked a question, the presupposition is that he or she may be able to answer, and the inclination of the person asked is to try to respond even if a misrepresentation, e.g., B.S., occurs.

These authors and others agree that B.S. has proliferated from the past to the present. Given that, what could be said about the future: is B.S. likely to increase or decrease? In the short term it will probably continue to increase, but it could then be reduced in the longer term with the advent of more advanced technology.

Personalized hypertargeted B.S.
On one hand, technology is increasing the detectability of B.S., suggesting that B.S. could go down in the future. On the other hand, information is continuing to explode, providing more potential venues for B.S., suggesting that B.S. could go up in the future. B.S. is like spam or commercials: it is growing, but control mechanisms are simultaneously growing to mediate interactions. However, B.S. could become more insidious, less detectable, and even desirable when it is highly personalized and hypertargeted, as marketing is starting to be now.

Politicians replaced by Artificial Intelligences
Considering fields ranging from science, with a zero-to-low tolerance for B.S., to politics, with a high tolerance, it may become desirable in the future to replace people in high-B.S. professions with Artificial Intelligences. This could solve the agency problem and the control of special interests overnight. Policy debates could be resolved by running a million permutations in virtual simulation, varying every parameter of a given policy change so that overall utility is maximized.
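The permutation idea can be sketched as an exhaustive sweep over policy parameters against a utility model. The parameter names and the utility function below are invented for illustration; a real AI policymaker would substitute a learned or simulated evaluation:

```python
import itertools

def utility(tax_rate, subsidy, spending):
    """Hypothetical social-utility model: peaks at an arbitrary
    'best' policy mix, declining quadratically away from it."""
    return -((tax_rate - 0.3) ** 2 + (subsidy - 0.1) ** 2 + (spending - 0.5) ** 2)

# Sweep every permutation of the policy parameters and keep the best.
grid = [i / 10 for i in range(11)]  # each parameter from 0.0 to 1.0
best = max(itertools.product(grid, grid, grid), key=lambda p: utility(*p))
print(best)  # the parameter combination maximizing the modeled utility
```

Even this toy version makes the scaling visible: an 11-point grid over three parameters is already 1,331 simulations, so a million-permutation policy search is well within reach of brute force, with the hard part being the utility model itself.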

Sunday, August 23, 2009

Automatic Markets

At Singularity University, one of the most pervasive memes was the “routing packets” metaphor: many current activities are just like routing packets on the Internet. Examples include people in driverless cars, electrons flowing into electric-vehicle chargers, and load balancing, routing, and delivery on smart-grid electricity networks.

Fungible resources and quantized packet-routing
The packet-routing concept could be extended to neurons (routed in humans or AIs), clean water, clean air, food, disease management, health care system access and navigation, and, in the farther future, information (neurally summoned) and emotional support (automatically summoned from nearby people or robots per human dopamine levels). It is all routing: directing quantized fungible resources to where they are needed and requested.

Automatic Markets
Since these various resources are not uniformly demanded, the idea of markets as a resource allocation mechanism is immediately obvious.

Further, automated or automatic markets with pre-specified user preferences, analogous to limit orders, could be optimal. Markets could meet in equilibrium and transact, buying, selling, and adjusting automatically per evolving conditions and pre-programmed user profiles, permissions, and bidding functions.

Truly smart grids would have automatic bidding functions (as a precursor to more intelligence-like utility functions) that indicate preferences, bid, and equalize resource allocation: the truly invisible digital hand.
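The bidding-function idea can be sketched as a simple call auction in which pre-programmed buy and sell limit orders meet at a clearing price. This is a toy model (one unit per order, midpoint pricing); real grid markets are far more involved:

```python
def clearing_price(bids, asks):
    """Match pre-specified limit orders: bids are the maximum prices
    buyers will pay, asks the minimum prices sellers will accept
    (one unit each). Returns the clearing price and cleared quantity."""
    bids = sorted(bids, reverse=True)  # most eager buyers first
    asks = sorted(asks)                # most eager sellers first
    price, qty = None, 0
    for b, a in zip(bids, asks):
        if b >= a:                     # this buyer/seller pair can trade
            qty += 1
            price = (b + a) / 2        # midpoint of the marginal pair
        else:
            break                      # remaining pairs cannot trade
    return price, qty

# Agents' pre-programmed preferences, e.g. cents per kWh on a smart grid.
print(clearing_price(bids=[30, 25, 20, 15], asks=[10, 18, 22, 28]))
```

Each agent's bidding function would emit these limit prices automatically from its profile and permissions, so the market clears continuously without human attention.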

The key parameters of a working market (liquidity, price discovery, and ease of exchange) would seem to be present in these cases, given large numbers of participants and market monitoring and bidding via web or SMS interfaces. The next layer, secondary markets and futures and options, could also evolve as an improvement to market efficiency, if designed with appropriate incentives.

Automatic markets are not without flaws. They exist now in traditional financial markets, causing occasional but volatile disruptions in the form of quantitative program trading (blamed for exacerbating the 1987 Black Monday stock market crash) and flash trading. The speculative aspects are not trivial and would be a critical area for market designers to watch, particularly managing for high liquidity and equal access (e.g., ensuring that faster Internet connections confer no advantage).

Markets to grow as a digitized resource allocation tool
At present, markets are not pervasive in life; the most notable examples are traditional financial markets, eBay, peer-to-peer finance websites, and prediction markets. A global digital era, with the ability to use resources in a more fungible and transferable way, could further promote the use of markets as a resource allocation tool.

A focus on preference rather than monetary value, and other currencies such as attention, authority, trust, etc. could vastly extend the range of implementation of market principles.

Sunday, January 11, 2009

Heard from the future…

  • I don’t feel like myself in this upload anymore
  • I only got 98% fidelity in my revivification
  • You must be a newbie, you never get full reload on a reviv
  • I don’t want to be so present to my experience
  • I don’t want to remember all my memories, I want to edit out x, y, z and enhance a, b, c and…
  • I don’t want all my actions (and possibly thoughts) recorded by public cameras and sensor nodes
  • Why did I reembody as a (man, dog, robot, hummingbird, bee, network node, airplane, etc.)?
  • What is the construct "I"?
  • Is this virtual world virtual? Where is the real virtual world?
  • I think that AI over there is a few methods short of a class library

Sunday, December 14, 2008

Future of physical proximity

Where will you live? How would concepts and norms of physical proximity evolve if cars were no longer the dominant form of transportation? How would residential areas self-organize if not laid out around the needs of cars and roads? Imagine gardens replacing driveways and roadways. What if people just walked outside of their houses or onto their apartment rooftops to alight via jetpack, smartpod or small foldable, perhaps future versions of the MIT car. At present, cities, suburbs and whole countries are structured per the space dictates of motor vehicular transportation systems.

Nanoreality or rackspace reality?
There are two obvious future scenarios. There may either be a radical mastery and use of the physical world through nanomanufacturing or a quasi-obsolescence of the physical world as people upload to digital mindfile racks and live in virtual reality. The only future demand for the physical world might be for vacationing and novelty (‘hey, let’s download into a corporeal form this weekend and check out Real Nature, even though Real Nature is sensorially dull compared to Virtual Nature’).

Work 2.0
The degree of simultaneous advances is relevant for evaluating which scenario may arise. For example, economically, must people work? What is the nature of future work? Creative and productive activity (Work 2.0) might all take place in virtual reality. Smart robots may have taken over many physical services and artificial intelligences may have taken over most knowledge work. Would people be able to do whatever work they need to from home or would there be physical proximity and transportation proximity requirements as there are now?

Portable housing and airsteading
Next-level mastery of the physical world could mean that people stay corporeal and live in portable residential pods. Airsteading (a more flexible version of seasteading) could be the norm: docking on demand, as boats or RVs do, in different airspaces for a night or a year. Docking fees could include nanofeedstock streams and higher-bandwidth, more secure wifi and data storage than that ubiquitously available on the worldnets. Mobile housing and airsteading could help fulfill the ‘warmth of the herd’ need and facilitate the intellectual capital congregation that cities have afforded since the early days of human civilization.

Sunday, September 28, 2008

Our beautiful future

As worldwide over-dependence on oil and the costly Iraq war have hastened the way for new energy regimes, the U.S. financial bailout will hasten the use of economic models other than Darwinian capitalism as it has been known, in which the most able seize maximum resources for themselves. Nascent social movements for opting out of the traditional economic system will become stronger. Science fiction is rife with dystopian models of robot-controlled governments (Daniel Suarez’ Daemon is a recent example), but in many ways machine-like entities, absent the agency problem, could be a dramatic improvement over fallible people-administered governments. Technology is more often humanifying than dehumanifying.

As usual, the focus is on technological advances to remedy the current global energy, resource consumption, and economic challenges. Given both history and the present status of initiatives, technology is likely to deliver. New eras may be ushered in even more quickly when demand is higher and complacency lower. A surveillance and sousveillance society is clearly emerging, simultaneously from top-down government and corporate programs and bottom-up individual broadcast of GPS location and other lifestreaming. The trend of freeing human time for productive and rewarding initiatives is continuing. What will be the first chicken in every pot: the robotic cleaner or self-cleaning nanosurfaces? How soon can all jobs be outsourced to AI? How soon will there be options on the nucleotide chassis?

Sunday, March 23, 2008

Post-scarcity economy

The long-term future economy is a post-scarcity economy (PSE), where substantially all human material needs are easily met at low cost or for free. The term post-scarcity economy is a bit of a misnomer since only the scarcity of material goods is likely to recede. The economy itself and scarcity as an economic dynamic will probably persist, for example, scarcity of time, energy, processing power and creative ideas.

The future economy will likely be realized in phases. Some material goods would be replaced or provided at near-zero cost at the outset, perhaps certain classes of items or goods like fuel, then more items such as food, then substantially all material goods. Fancier items like high-end designed objects and medical treatments would probably not be available in the earlier phases.

What will happen to services as material goods are increasingly provided at minimal cost? Initially services would be unchanged, but over time, nearly all current services could be replaced by technology-advanced near-zero cost alternatives. As Josh Hall suggests in Nanofuture, nanobots could provide daily hair-trimming and nano-foglets could create new hairstyles on demand. Robots are already available for lawn-mowing upkeep (Robomow). Telemedicine could be used for medical diagnostics and treatments. Artificial Intelligences (AIs) may be consulted for tax and stock advice.

Over time, public services such as police and fire protection could be provided by trusted AI networks and other mechanisms. Wireless sensor networks and cams may shift the nature of crime and policing activity. Future building materials may be impervious to fire and possibly self-reconstruct following earthquakes or other damage.

New virtual and other non-traditional services requiring intelligent attention from AIs or human minds, particularly in providing entertainment, learning and means of interesting and productive engagement, will probably be a growth area. The future economy will likely be transacted with multiple currencies, a variety of monetary currencies and additional supplementary currencies such as time, attention, intention, reputation and ideas.

Sunday, February 24, 2008

Upload world science fiction

It is strange that there has not been more in-depth exploration in science fiction of what mind-upload societies would be like. A few aspects are examined in books like Accelerando, The Golden Age, Permutation City, Diaspora, and The Cassini Division. Many issues could play out in fun ways in science fiction.

Trust in an upload world
In a world where everyone has uploaded their minds into computer banks and experience is simulated in virtual reality, what is real? How will checks and balances be established for trust and security? How do you know you are not being hacked? How do you know you are getting the bandwidth and processing power promised by your service provider? If you instantiated into an embodied form to go off-bank to check, how would you know that this has really occurred and is not a simulation of an embodied download by the service provider?

A science fiction story could revolve around escaping the upload service provider, discovering its deviance (it has shockingly slaved entire banks of human minds to its own nefarious purposes), and overthrowing it to restore order, only to find that an even more evil system, like a spam-protection unit gone awry with emerging AI, now has the upload society in its clutches. The discrimination practices of the future could include delivering slower run-time environments to certain groups. The thematic issues to examine are the integrity, influence, and control of an upload society.

Motivation and activity
What is the nature of being in an upload world? Is the construct of the individual still relevant? What are the driving motivations? What are the activities? What do minds do with 24 hours of run-time each day? If individuals can make copies of themselves, what are the legal and practical issues? How can constructive behavior be incentivized rather than regulated? An interesting story could ensue as an extension of the Kiln People concept, in which a copy of a person mutates and wants to kill the original to assume its legal status. An interesting branch of future law may deal with the interactions of copies.

Societal dynamics
It could be interesting to look at how society redesigns and reorganizes itself in an upload world. Different subgroups may edit their utility functions in different ways. What are the reproduction norms? Do types of gender proliferate? Which memeplexes would arise and predominate? In the Post-Scarcity Economy, what will be organizing factors for society?

Information evolution
How do the Internet and the individual and the group evolve? In one interpretation, they are all just collections of information. Does distinction become meaningless at some point? Are there other distinctions that would be more relevant in an upload world? What establishes who owns, controls and has permission to view and create different information, whether people bits or data bits?

Monday, August 27, 2007

AIs let humans live over math problem

There is a possible future scenario in which AIs let humans live because of math. AIs, especially if derived from human intelligence and economic models, might covet what they do not have and cannot make. Examples of scarce or unobtainable resources would be art, fallibility, and imperfection, all generated by humans: anything non-mathematically random, to which no curve can be fitted and no other math applied. AIs might thereby keep humans alive through this quirk, not because they are benevolent or enjoy art or human imperfection as art, but because humans constitute a vexing math problem. It is unclear what might happen after equations have been developed to explain human behavior...

What are some other possible unintended consequences with AI?

Though easily remedied, there could be some embarrassing AI birth defects, such as an AI compiled without write capability, or a case of co-processing dependency and anachronistic behavior when the remnants of human sexual jealousy have been inadvertently mapped onto an AI: why, AI-beta2, are you spending so much time processing on AI-delta3’s kernel?

A more serious possibility: the normal practice of forking copies of a mindfile for research, simulation, or other activities could go awry when one forked copy evolves malignantly away from the original, such that it no longer agrees to re-merge and develops an independent survival drive, the natural extreme of which would be plotting to remove all instances of the original.

Tuesday, July 31, 2007

Alt approaches to AGI

More than fifty years of attempts at creating AGI have not been successful. It is possible that AGI cannot be generated from current methods and technologies; the wrong tool is being used, somewhat like trying to build a 747 with a toothbrush. Electromagnetism, silicon, and Von Neumann architectures may never have the capacity to achieve AGI, even allowing for continued increases in processing, storage, and memory, and architectural shifts such as parallelism.

Other substrates might work
Getting around the rigidity of Von Neumann, mathematical, logic-based, computational, and symbolic approaches and traditional computers, other computational substrates such as quantum computing and DNA computing might work, as might substrates that humans have not yet invented, discovered, or exploited for this purpose, such as light, air, memes, and information. There must be other substrates, and other viable approaches, that are not constrained by mathematics and logic.

Information as a substrate
Narrowly, the only existing example of general intelligence is the human brain, and the basic requirements of AGI are self-replication and self-improvement. Considering self-replication, there are many examples of more effective self-replication than humans: memes, disease, and microbes, for instance. Considering self-improvement, memes also self-improve more effectively than humans, as they are refined through repetition, and have the unbounded ceiling for improvement of true AGI.

Taking advantage of the self-reproducing and self-improving properties and using memes and information as a novel computing substrate might be one way of extending AGI progress.

Information as a substrate could be developed symbiotically with a very broadly applicable new understanding of the laws of physics based on information and entropy as opposed to mass and energy.