Showing posts with label complexity. Show all posts

Friday, January 29, 2016

Cloudworld: A Hegelian Theory of Complexity and Algorithmic Reality

Philosophy could be an important conceptual resource in shaping human-technology interactions for several reasons. First, philosophy concerns the topics of world, reality, self, society, aspirations, and meaning, all of which we are hoping to reconfigure and accentuate in our relations with technology. Improving human lives is, after all, one of the main purposes of technology. Second, philosophy relates to thinking, logic, reasoning, and being, which are the key properties we would like our technology entities to have. We would like our technology entities to be more like persons: pre-uncanny-valley but fully-fledged tech others; thinker helpers, empathic listeners, coaches, optimizers; a new kind of technology-presenced companion. However, ensconced in recent computational advances, the field has neglected thinking about thinking as a primary resource. Third, philosophy treats the grasping and naming of new things in the world, which is precisely what is needed in the case of new and quickly-emerging technological realities.

A Hegelian position could be helpful in considering the governance of emerging technologies. This is because the Hegelian reference point is a moving, dialogical expansion rather than a pre-specified response to unfolding situations. The Hegelian method involves triads: there is the thing itself, its negation, and a bigger third position that sublates the truth content of the two previous positions into a new shape of consciousness. This kind of conceptual robustness could help in articulating more nuanced positions regarding emerging technologies, moving beyond stark binaries like ‘adopt-or-don’t-adopt,’ the technological dualism that ‘any technology has both good and evil uses,’ and a seemingly inevitable hopelessness in the face of existential risk.

The current situation of emerging technology is one of algorithmic reality. Not only do more new kinds of technology entities have a substantial presence in our human reality, where we interact with them on a regular basis, but there is also a sense of a quickening progression of these entities. There are drones, self-driving cars, personal home robots, quantified-self gadgets, Siri-commanded mobile phones, blockchain smart contract DACs, tradenets, deep-learning algorithms, big data clouds, brain-computer interfaces, neural hacking devices, augmented reality headsets, and deep-learning gaming worlds. Further, each of these technology classes is itself a platform, network, and app store, and the implication is cloudworld. Cloudworld is the notion of a deep multiplicity of networks as a compositional element of new algorithmic realities, where every network is a Turing-complete general computational substrate for every other. Any technology can immediately ‘grok,’ simulate, and run any other; what this means from the human standpoint is vastly unclear. Derivatively, any sort of cloudmind (clustered interactions between multiple human minds or entities such as artificial intelligences, coordinated via the Internet cloud) might run on any platform.

A Hegelian theory of algorithmic reality is a complexity philosophy position, meaning that it has the properties of a complex adaptive system: it is nonlinear, emergent, dynamic, open, unknowable, self-organizing, and interdependent. A complexity philosophy position is required to correspond congruently to an underlying reality that is itself complex. Algorithmic reality is not just an increasing degree of human-technology entity interaction but a multiplicity and proliferation of classes of network technology entities. The Hegelian position is exactly one that might constitute a bigger yes-and collaboration space that expansively accommodates all parties.

Inspiration: Minsky's legacy in the context of contemporary and near-future AI

Sunday, January 11, 2015

Blockchains as an Equality Technology

The advent of blockchain technology has prompted the questioning of many concepts that have been taken for granted for years such as money, currency, markets, economics, politics, citizenship, governance, authority, and self-determination.

We have become accustomed to the hierarchical structures of the contemporary world. These structures and models were significant advances at the time of their derivation, hundreds of years ago, facilitating the large-scale orchestration of the different operations of society so that life could be conducted in a safe and productive manner.

While hierarchical models have served as a significant node in the overall progress of humanity, their imperfect value proposition has been waning, especially rapidly in the current era of science and technology. Now contemporary information technology is not just facilitating a more efficient life through technology (off-loading both physical and mental drudgery), but also allowing the models for large-scale societal coordination to be rethought.

Large-scale decentralized (i.e., non-hierarchical) orchestration models like blockchain technology are starting to become available, and this could configure a completely new era in human progress. This is because decentralized models are equality technologies: technologies that allow more possibility for individual liberties, freedoms, rights, actualization, expression, and self-determination than has been possible in hierarchical models. Further, equality technologies imply not just more liberties for individuals and an eradication of illiberty, but a better equalization or calibration of liberties amongst individuals and societies.

It is not that a complete revolution to decentralized models is afoot; it is that decentralized models are a striking new entrant in the possibility space of models for large-scale coordination. The longer-term future is likely to be a space with many different centralized, decentralized, and hybrid models, and other new forms of models, where the important dynamic becomes tuning the orchestration system to the requirements of the underlying situation.

Sunday, January 04, 2015

The Philosophy of Complexity: Are Complex Systems Inherently Tyrannical?

The philosophy of complexity is developing as a field of philosophical inquiry to accompany, support, and question advances in the science of complex systems. This is warranted given that the issues surfaced by science findings signal a full slate of philosophical questions in the three main areas of ontology (existence), epistemology (knowledge), and axiology (valorization and ethics). The fast pace of technological innovation has been substantiating the need for various new philosophies explicitly examining these issues in technology, information, cognition, cognitive enhancement, big data, and complexity.

How much total Liberty is in the System?
A philosophy of complexity would operate both internally and externally to the practice of complexity science: at the level of the theory of the practice, and at the abstraction of the impact and meaning of the practice more broadly in society. One issue for investigation is a philosophical characterization of complex systems themselves, including parameterizing different features such as range-boundedness. For example, in French politics, there was the revolution and the subsequent process of republics being founded, failing, and enduring. The question is how to measure the total liberty available in the system, how it has changed over time, and what predictions can be made, or, more importantly, what beneficial changes might be catalyzed for the future.

Persistent Mathematical Behavior across Complex Systems
Fifteen or so criteria that are mathematically persistent across complex systems (fat tails, power laws, high degrees of correlation, fractal behavior, etc.) have been identified. However, it seems that even while expanding and contracting over time, complex systems may display cyclic and range-bound behavior. This could be inherently mean-reverting, and potentially tyrannizing, or at least limiting, to system participants, and it should be measured and evaluated. Is there a fixed amount of liberty available in the French political system? To what degree do complex systems as a format have limitations, and is this a block to progress? Both the quantitative and qualitative aspects of complex systems need to be measured, with an identification of where and how these limits can be and have been broken (beyond traditional symmetry-breaking).
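This range-bound, mean-reverting quality can be made concrete with a toy model. The sketch below simulates an Ornstein-Uhlenbeck process, a standard minimal model of mean reversion; it is purely illustrative (not fitted to any political or social system), and all parameter values are arbitrary:

```python
import random
import statistics

def simulate_ou(theta=0.5, mu=0.0, sigma=0.3, dt=0.1, steps=5000, seed=42):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process:
    dx = theta*(mu - x)*dt + sigma*dW. The drift always pulls x back
    toward mu, so the path stays range-bound despite constant noise."""
    rng = random.Random(seed)
    x, path = 5.0, []          # start far from the mean to show the reversion
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        path.append(x)
    return path

path = simulate_ou()
late = path[1000:]             # discard the initial relaxation toward mu
# Despite perpetual fluctuation, the long-run average hugs mu = 0
print(round(statistics.mean(late), 2), round(statistics.stdev(late), 2))
```

However far the system starts from its mean, it is pulled back; the "liberty" of the trajectory is bounded by the stationary spread sigma/sqrt(2*theta), which is the sense in which such dynamics can be limiting to participants.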

Bergsonian Information, Illiberty, and Rethinking Thinking
The philosophical questions concern the ontology, epistemology, and axiology of complex systems. For example, does complexity have a qualitative side? There is a need to investigate the idea of ‘Bergsonian information,’ the extension of duration-as-time and duration-as-consciousness/self to the internal doubled experience of information, in the context of complexity. Likewise, liberty, illiberty (the absence of liberty), and potentiality in complex systems should be explored, especially in cognition, neuroscience, and connectome-mapping, areas which are just starting to be accessible to the complexity discipline. There can be an examination of how we can rethink thinking and intelligence (biological and artificial) per deep learning, symbolic systems methods, and the philosophy of complexity.

Monday, August 25, 2014

Complexity Science: Does Autocatalysis Explain the Emergence of Organizations?

One of the newer complexity science books is The Emergence of Organizations and Markets by John F. Padgett and Walter W. Powell (2012).

At first glance, the book might seem like just another currently popular social network analysis dressed up in complexity language. The book presents the claim that the chemistry concept of autocatalysis is the explanatory model for the emergence and growth of organizations. The argument is that autocatalysis (the catalysis of a reaction by one of its products) is like the process of individuals acquiring skills which thereby transform products and organizations: “Skills, like chemical reactions, are rules that transform products into other products” (pp. 70-1). The process is reciprocal and ongoing: actors create relations in the short term, and relations create actors in the longer term.
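The autocatalytic mechanism itself is easy to sketch. Below is a minimal illustrative simulation (not taken from the book) of the reaction A + X → 2X, in which the product X accelerates its own production until the substrate A runs out, yielding the characteristic S-curve of self-reinforcing growth; rate constants and concentrations are arbitrary:

```python
def autocatalysis(a0=1.0, x0=0.01, k=2.0, dt=0.01, steps=1000):
    """Euler integration of the autocatalytic reaction A + X -> 2X.
    The reaction rate k*a*x grows with the product X itself, so growth
    is slow at first, then explosive, then saturates as A is exhausted."""
    a, x, history = a0, x0, []
    for _ in range(steps):
        rate = k * a * x      # rate rises with the concentration of the product
        a -= rate * dt
        x += rate * dt
        history.append(x)
    return history

xs = autocatalysis()
# X grows slowly, accelerates, then saturates near a0 + x0 (mass conservation)
print(round(xs[100], 3), round(xs[500], 3), round(xs[-1], 3))
```

The analogy in the book is that a skill, like X, both consumes inputs and reproduces the conditions for its own further exercise; whether that analogy carries explanatory weight for organizations is the critical question raised below.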

One response of a critical reader would be to ask to what degree autocatalysis has explanatory power over the formation and persistence of organizations. Absent the consideration of other models, or of the extent to which autocatalysis does not fit, it is hard to assess where this model falls on the anecdotal-to-accurate spectrum. This is a potential problem with all attempts, however valiant, to transplant the models and structures of one field into another. Going beyond interesting associations to correlations and even causal links is challenging.

Also not uncommonly, the authors postulate that the interesting, novel, and value-contributing aspects of a system (in this context, an organization) occur in the interstices, edges, and anomalies of the system. In actuality, this might be just one possibility (and not the principal element according to thinkers like Simondon, for whom novelty most directly emerges from the central interaction of the components, features, and functionality). Worse, seeking the interstice forces the focus onto identifying borders, edges, and interstices, that is, onto defining the phases of inherently non-definable dynamical systems. Also with a Simondonian eye, this is to miss the nature and contribution of dynamic processes at the higher level: it tries to corral them into identifiable morphologies instead of apprehending their functionality.

Monday, January 20, 2014

Systems-level Thinking Helps to Address Protein Folding

Deciphering protein folding is critical to a fundamental understanding of biology as proteins conduct most cellular operations, and since misfolded proteins are often causally implicated in disease.

The status of protein folding research is that, as of January 2014, the main resource, the Protein Data Bank, lists 96,000 known protein structures (proteins as folded into their final 3D shape). There has been much technological advance in determining the static and dynamic features of protein structures, including in X-ray crystallography, NMR (nuclear magnetic resonance spectroscopy), cryo-electron microscopy, small-angle X-ray scattering, and other spectroscopic techniques.

This sounds like good progress; however, taking the human example, only 24,000 of the Protein Data Bank's 96,000 listed proteins are human, and this is of an estimated total of 2 million human proteins. Further, all of the protein conformations have been determined empirically (i.e., experimentally) rather than predicted computationally. It has long been proposed that, given the amino acid string, it should be possible to predict the 3D conformation of the finally folded protein, but this has proved elusive. Both scientific and crowdsourced efforts are looking at the problem. Crowdsourced projects include games like fold.it, community projects like Folding@home, and prediction contests like CASP (Critical Assessment of Protein Structure Prediction), a community-wide, worldwide experiment for protein structure prediction, the next one taking place May – August 2014. All of these projects focus on the unitary folding of one target, such as TM019, as opposed to more universal system dynamics.
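To see why prediction from the amino acid string alone is hard, consider even the maximally simplified HP lattice model, in which residues are only hydrophobic (H) or polar (P), folds are self-avoiding walks on a 2D grid, and the score is the number of hydrophobic contacts. Even this cartoon requires search over an exponentially growing space of conformations. The sketch below (the sequence is arbitrary, and this is not how any real prediction method works) brute-forces a short chain:

```python
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def fold_hp(seq):
    """Exhaustive search over 2D lattice self-avoiding walks for the
    lowest-energy fold of an HP sequence. Each pair of non-consecutive,
    lattice-adjacent H residues contributes energy -1 (a buried contact)."""
    n = len(seq)
    best = (0, None)

    def energy(coords):
        e = 0
        pos = {c: i for i, c in enumerate(coords)}
        for i, (x, y) in enumerate(coords):
            if seq[i] != 'H':
                continue
            for dx, dy in MOVES:
                j = pos.get((x + dx, y + dy))
                # count each H-H contact once; skip chain neighbors
                if j is not None and j > i + 1 and seq[j] == 'H':
                    e -= 1
        return e

    def extend(coords, occupied):
        nonlocal best
        if len(coords) == n:
            e = energy(coords)
            if e < best[0]:
                best = (e, list(coords))
            return
        x, y = coords[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in occupied:   # self-avoidance: no site visited twice
                coords.append(nxt)
                occupied.add(nxt)
                extend(coords, occupied)
                occupied.remove(nxt)
                coords.pop()

    extend([(0, 0)], {(0, 0)})
    return best

e, fold = fold_hp("HPHPPHPH")
print(e)   # the most negative energy = the most hydrophobic contacts buried
```

Exhaustive search is only feasible for chains this short; the number of self-avoiding walks grows exponentially with chain length, which is the toy-model version of why sequence-to-structure prediction remains elusive.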

Complexity scientist Sandra D. Mitchell presented work at Stanford on January 15, 2014 suggesting that we need a plurality of conceptual representations and models (any model is only partial in some way), and that any complex problem should be addressed with an integrated multiplicity of approaches. Many (and possibly most) complex biological issues such as cancer and aging are now understood as deeply dynamic and systemic phenomena. Similarly, proteins do not fold in isolation; the local environment is highly involved, with protein chaperones and other signaling processes (one example is the intricate interplay between heat shock proteins (HSPs) and toxic amyloid).

Monday, December 10, 2012

Application of Complexity Theory: Away from Reductionist Phase Transitions

Reductionism persists as a useful node in the possibility space of understanding and managing the world around us. However, the possibility space is now expanding to higher levels of organization, such as a focus on complex systems. Learning and tools are ratcheting forward in lockstep.

Some of the key complexity-related concepts in understanding collective behavior in real-life physical systems like the burning of a forest fire include:
  • Organization and Self-Organization: Self-orchestration into order in both living and non-living systems, for example: salt crystals, graphene, protein molecules, schools of fish, flocks of birds, bee hives, intelligence and the brain, social structures 
  • Order and Stability of Systems: Measurements of order, stability, and dynamical breakdown in systems, such as entropy, symmetry (and symmetry-breaking), critical points, phase transitions, boundaries, and fractals (101 primer)
  • Tunable Parameters: An element or parameter which doesn’t control the system, but can be tuned to influence the performance of the system (for example, temperature is a tunable parameter in the complex system of water becoming ice) 
  • Perturbation and Reset: How and how quickly systems reset after being perturbed is another interesting aspect of complex systems 
 
Complexity science is not new as a field. What is new is, first, a more congruous conceptual application of complexity thought, in the sense of appreciating the overall continuum of systems phenomena rather than grasping for the specific moment of a phase transition. Exemplary of this more comprehensive systems-level thinking are Marcelo Gleiser’s reframing of the Grand Unified Theory problem and Sara Walker’s reframing of the Origins of Life problem. The other new aspect is the idea of working in an applied manner with complex systems, particularly with tools that are straightforward to implement: the math tools of non-linear dynamics, networks, chaos, fractals, and power laws (many inspired by the work of Steven Strogatz), and software tools like NetLogo, a multi-agent programmable modeling environment, and ChucK, a digital audio programming language.
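As a taste of these straightforward math tools, the logistic map is a one-line nonlinear system in which a single tunable parameter r carries the system from a stable fixed point through period-doubling into chaos. This is the classic textbook example from nonlinear dynamics; the r values below are standard illustrative choices:

```python
def logistic_orbit(r, x0=0.2, skip=500, keep=8):
    """Iterate the logistic map x -> r*x*(1-x), discarding transients,
    to see how the tunable parameter r reshapes long-run behavior."""
    x = x0
    for _ in range(skip):          # burn in: let transients die out
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.8))   # settles onto a single fixed point
print(logistic_orbit(3.2))   # oscillates between two values (period 2)
print(logistic_orbit(3.9))   # wanders aperiodically (chaos)
```

Here r plays exactly the role of the tunable parameter described above: it does not control the system's trajectory directly, but tuning it qualitatively changes what the system does, including pushing it through phase-transition-like thresholds.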

 

Sunday, May 06, 2012

Obtaining models for singularity futures thinking

The challenge, as called out by science fiction writer Vernor Vinge, relates to the technological singularity: any one future technology change could be so fundamental across all aspects of life that it is hard to write plausible science fiction, and more generally it impacts how we think about the modern and future world. Any next node with sufficient transformative power (like the internet) could change things so fundamentally, globally, multi-dimensionally, and quickly that its impact would be essentially beyond cognition. Moreover, while there are some visible candidates for the ‘next internet’ such as smartphones, 3D printing, biotechnology, nanotechnology, and robotics, the real next node is likely to be an unforeseen discontinuity.

Comprehensive survey of thinking models
There is a paucity of models for thinking comprehensively and critically about the future in rigorous, sophisticated, justifiable, and transferable ways. A project that should be undertaken, if not already underway, is an examination of different models for structuring thinking from different disciplines. There is value in this at two levels: first, generally, in identifying, characterizing, and synthesizing different models for structuring thinking, and second, in applying these models cross-disciplinarily to existing areas, and to new areas such as thinking about the future.

Eliciting explicit models for structuring thinking
The models that are used to structure thinking in different fields need to be made explicit. Practitioners immersed in fields may not be easily disposed to articulate these models. For example, it may be novel to inquire ‘What is the model for inquiry in this field?’ or even to have the concepts and vocabulary for explicating them.

Fields with models for structuring thinking
Some of the obvious fields to investigate for eliciting established models for structuring thinking are philosophy, complexity (complex adaptive systems, chaos theory, symmetry, etc.), computing (artificial intelligence, machine learning, knowledge representation, data management, etc.), systems-level disciplines (ecology, biology, cosmology, etc.), and social sciences (sociology, anthropology, economics, etc.).

The challenge of fishing structure and content from academic fields
Some of the immediate obvious barriers to accessing models for structuring thinking from academic disciplines are nomenclature and insularity. Semantic and conceptual nomenclature may prevent easy access to fields, but it is largely a veneer that may be penetrated with a variety of translation techniques and concept mapping. Much more problematic is the potential lack of suitable content in these fields. By default, many areas of academia are not externally-focused applied disciplines but rather inwardly-focused insular disciplines engaged in cataloging and interpreting the thoughts of their own ancestral brethren. The accompanying applied dimension to every field, which would explicitly render the core ideas accessible, and proven and useful through deployment, seems to be absent from many fields. Rather than being perceived as a less pure pursuit, the application of the central ideas and structures would seem to be a key raison d'être for these fields of knowledge.

Sunday, December 04, 2011

Anxious uncanniness drives technological phase change

There is an interesting link between philosophy, technology innovation, and complexity theory. A claim has arisen that people may be feeling unsettled, that they no longer belong to certain normative groups like ‘Americans’ or ‘doctors.’ One reason could be the fast pace of technology innovation and adoption which has been creating a pervasive, accelerating, and possibly irreversible culture of biotechnicity. Paradoxically, technology innovation may also be the resolution.

This feeling of being unsettled is that of experiencing an anxious uncanniness about what it means to be a doctor, a Christian, a New Yorker, or whatever. This has long been identified by philosophers (e.g., Socrates, Plato, and Kierkegaard, and more recently Jonathan Lear) as (a lesser-known definition of) irony: when individuals experience a sense of dissimulation.

A further claim is not that anxious uncanniness is harmful or undesirable, but rather that ironic uncanny experiences should be cultivated as the only way out, a key means of growth. Growing by pushing out of one’s comfort zone is parallel to dynamics in the cycles of technology innovation and complexity theory. In technology innovation, the chaotic foment at the end of a paradigm (like the vacuum tube or perhaps oil) forces innovation into a new paradigm. In complex systems, after symmetry-breaking and the development of entropy, adding energy helps to rebalance systems to attain the next node of progress.

Monday, September 12, 2011

Human augmentation substrate: the microbiome

The human microbiome, comprising roughly ten times as many cells as the human body itself, is interesting not only for its significant role in determining health, disease, drug response, and individuality, but also for possibly being a less-invasive human augmentation substrate, for example, bringing nanoscale connectivity and memory processing modules onboard via the microbiome.

New research has identified that human-associated microbes come from surprisingly few lineages: Firmicutes, Bacteroidetes, Actinobacteria, Proteobacteria, and a handful of other phyla, which is striking compared to the diversity of microbial phyla on Earth. However, within these lineages there are many strains and species, for example 1,600-2,000 species of microbial bacteria in each person's distal gut, only 7% of which were known previously (paper). Gut bacteria are critical to human functioning; one activity is producing butyrate, which colon epithelial cells use to maintain energy homeostasis. (paper, article)

The microbiome is a complex adaptive system: resilience and vulnerability
Research extends beyond characterization: an investigation of perturbations to the human microbiome has shown resilience in recovery following a disturbance. However, there is vulnerability under persistent perturbation. The human microbiome may not reassume its initial state unless the disturbance is at a frequency that the system has experienced before and for some time. In this case, the system may get stuck in an alternative state or local attractor. (paper)
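This alternative-state behavior can be illustrated with a toy bistable model (purely schematic, not fitted to any microbiome data): a community with two stable states separated by an unstable threshold recovers from a brief disturbance but flips to, and remains in, the alternative state under a sustained one.

```python
def community_state(press_duration, a=0.3, dt=0.05, total=4000, forcing=-0.25):
    """Toy bistable dynamics dx/dt = x*(1-x)*(x-a): two stable states
    (x=1 'initial', x=0 'alternative') separated by an unstable
    threshold at x=a. A constant negative forcing (the disturbance)
    is applied for `press_duration` steps, then removed."""
    x = 1.0
    for step in range(total):
        dx = x * (1 - x) * (x - a)
        if step < press_duration:
            dx += forcing             # disturbance pushes the state downward
        x += dx * dt
        x = min(max(x, 0.0), 1.0)     # keep the abundance variable in [0, 1]
    return x

print(round(community_state(press_duration=50), 2))    # brief pulse: recovers
print(round(community_state(press_duration=2000), 2))  # sustained press: stuck
```

A brief pulse leaves the state above the threshold, so the system's own dynamics pull it back; a sustained press drags it across the threshold into the basin of the alternative attractor, where it remains even after the disturbance ends.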

Sunday, September 04, 2011

Time, complexity, entropy, and the multiverse

FQXi, the Foundational Questions Institute, held a multidisciplinary meeting investigating the Nature of Time in Scandinavia August 27 – September 1, 2011 (Figure 1). FQXi promulgates original thinking and research on fundamental questions in physics and cosmology through research grants and essay-writing contests on topics such as “The Nature of Time” and “Is Reality Digital or Analog?”

Figure 1. Multidisciplinary topics covered at the FQXi Time Conference


Time is familiar in the sense of the three space dimensions and the one time dimension around which human affairs in the physical world are organized. Additionally, each person has a subjective and identifiable relationship to time, even though this may be little more than a convenient construct. In science, time has been developed to the greatest degree in physics and cosmology, and in the philosophy of science. Other fields too are starting to consider time more robustly, including complexity, biology, and computation.

The conference addressed the issue of the arrow of time from many perspectives. While most fundamental laws of nature are time-symmetric, some areas have a time arrow flowing in one direction such as thermodynamics, quantum theory, radiation, and gravity. This can be problematic to explain. A suggested analysis structure involving the trade-offs between complexity and entropy as systems evolve over time served as a useful model for analyzing different aspects of time throughout the meeting.

Monday, August 29, 2011

Inciting Brownian motion at the macro-scale

Entropy is a measure of disorder; its increase is the movement from order to disorder, for example one's desk becoming cluttered after being cleaned. In many cases, lower entropy states are desirable as they connote greater order. Without doing work to decrease entropy, it generally increases at the macro level (the spacetime of objects that humans encounter on a daily basis). Entropy increases and time appears to move only forward.

At the micro scale of atoms, Brownian motion occurs (the constant jiggling of particles), and it creates an important case where the Second Law of Thermodynamics (heat eventually dissipates; systems move from warm to cold) holds only statistically. Brownian motion at the micro scale allows fluctuations in the arrow of time and in entropy; i.e., time may appear to flow forwards and backwards, and there may be fluctuations towards lower as well as higher states of entropy. This can be seen not just at the very-very small Planck scale and the atomic scale of statistical mechanics (for example, atoms jiggling in a gas), but also at the level of cells in the body, and, in another example, pollen grains suspended just the right way in water.
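A minimal simulation makes the point concrete. In the sketch below (particle counts and step counts are illustrative), a small cloud of random-walking particles spreads out on average, yet its spread repeatedly shrinks from one step to the next: at small scales, entropy-like quantities fluctuate in both directions.

```python
import random

def spread_series(n_particles=50, steps=200, seed=7):
    """Random-walk (Brownian) toy: track the spatial spread (variance)
    of a cloud of particles over time. The trend is outward diffusion,
    but with few particles the spread transiently decreases -- a
    fluctuation 'against' the usual direction of increasing disorder."""
    rng = random.Random(seed)
    positions = [0] * n_particles
    series = []
    for _ in range(steps):
        positions = [p + rng.choice((-1, 1)) for p in positions]
        mean = sum(positions) / n_particles
        series.append(sum((p - mean) ** 2 for p in positions) / n_particles)
    return series

s = spread_series()
shrinks = sum(1 for a, b in zip(s, s[1:]) if b < a)
print(s[-1] > s[0], shrinks)   # net growth overall, yet many step-to-step decreases
```

With only fifty particles the fluctuations are large relative to the trend; averaged over Avogadro-scale numbers of particles, the same dynamics produce the effectively one-way entropy increase of the macro world.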

That Brownian motion can occur at the comparatively larger scale of cells suggests that it may occur, or be induced to occur, at even more macro levels too. For example, in complex adaptive systems, an economy has phases of Brownian-like motion, when rational agents are jiggling constantly to make the invisible hand of supply and demand meet. Perhaps incentive structures, including policy, may be used to facilitate the persistence of Brownian motion and the reduction of entropy in macro-level systems.

Sunday, August 21, 2011

A system is a balance: complex systems design

The Inconsistency Robustness symposium held at Stanford August 16-18, 2011 featured discussion of a number of challenges that arise in the design of complex systems, and potential solutions to them. The dialogue ranged from computer science details (for example, message passing in concurrent systems) to systemic assessments (for example, ecological concerns and homeostasis). Researcher attendees applied their varied backgrounds to the discussion.

A universal point in complex systems design is the importance of expecting and incorporating inconsistency and potential points of failure into a system. Flexible robust systems may be dynamical and adaptive within boundaries (Figure 1 - "Complex Dynamical") provided there is some mechanism for identifying and monitoring potential out-of-bounds conditions.

Figure 1. Three patterns of behavior in complex dynamical systems (Source)

Sunday, October 17, 2010

Phase transition in intelligence

There could be at least three approaches to the long-term future of intelligence: engineering life into technology, simulated intelligence, and artificial intelligence. Further, while the story of evolutionary history is the domination of one form of intelligence, the future could hold ecosystems with multiple kinds of intelligence, each particularly specialized by purpose or task.

There are significant technical hurdles in executing simulated intelligence and artificial intelligence, but these areas have been progressing in Moore’s Law fashion. The engineering of life into technology will need to proceed expeditiously to keep pace with technological advance, and to tie together a lot of wetware loose ends.

At present, the mutation rate of genetic replication puts an upper bound on how complex biological organisms can be. The human cannot be more than about 10x as complex as Drosophila (the fruit fly), for example. However, if the error rate in the genetic replication machinery could be improved, maybe it would be possible to have organisms 10x more complex than humans, and so on.
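This line of reasoning echoes Eigen's error-threshold argument from quasispecies theory: a replicator with per-base mutation rate mu can maintain a genome of at most roughly ln(sigma)/mu bases, where sigma is its selective advantage over mutants. The sketch below applies that formula; the sigma value is an arbitrary illustration, and the mutation rates are round numbers rather than measured values:

```python
import math

def max_genome_length(mu, selective_advantage=10.0):
    """Eigen error-threshold estimate: information is maintained against
    mutation only while genome length L stays below ln(sigma)/mu, i.e.
    while the error-free copying probability (1-mu)**L exceeds 1/sigma."""
    return math.log(selective_advantage) / mu

# Hypothetical illustrative rates (not measured values):
for mu in (1e-3, 1e-5, 1e-9):
    print(f"mu = {mu:g}  ->  max genome length ~ {max_genome_length(mu):,.0f} bases")
```

The qualitative conclusion matches the paragraph above: each order-of-magnitude improvement in replication fidelity raises the maintainable genome length, and hence the upper bound on organismal complexity, by roughly an order of magnitude.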