Monday, August 25, 2014

Complexity Science: Does Autocatalysis Explain the Emergence of Organizations?

One of the newer complexity science books is The Emergence of Organizations and Markets by John F. Padgett and Walter W. Powell (2012).

At first glance, the book might seem like just another currently fashionable social network analysis dressed up in complexity language. In fact, it makes a stronger claim: that the chemistry concept of autocatalysis is the explanatory model for the emergence and growth of organizations. The argument is that autocatalysis (the catalysis of a reaction by one of its products) is analogous to the process by which individuals acquire skills and thereby transform products and organizations: “Skills, like chemical reactions, are rules that transform products into other products” (pp. 70-1). The process is reciprocal and ongoing: actors create relations in the short term, and relations create actors in the longer term.
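
To make the analogy concrete, here is a toy simulation of autocatalysis as a model of skill reproduction. It is my own illustrative simplification, not Padgett and Powell’s formal model: each “skill” is a rule transforming one product into another, and a rule can reproduce only if some other rule is currently supplying its input. Rules on a closed production loop sustain one another, while a rule whose input no one produces slowly dies out.

```python
import random

# Hypothetical rule set (illustrative): each rule maps an input product
# to an output product. A->B->C->A forms a closed autocatalytic loop;
# E->A is a dead end because nothing produces E.
rules = [("A", "B"), ("B", "C"), ("C", "A"), ("E", "A")]
counts = {r: 25 for r in rules}  # copies of each rule currently "alive"

def production(product):
    """Total abundance of rules whose output is `product`."""
    return sum(n for (i, o), n in counts.items() if o == product)

random.seed(42)
for _ in range(10_000):
    # A rule reproduces at a rate proportional to its own abundance
    # times the supply of its input product (the autocatalytic coupling).
    weights = [counts[r] * production(r[0]) for r in rules]
    if sum(weights) == 0:
        break
    winner = random.choices(rules, weights=weights)[0]
    counts[winner] += 1
    # Balanced random decay keeps the total population constant.
    victim = random.choices(rules, weights=[counts[r] for r in rules])[0]
    counts[victim] -= 1

print(counts)
```

Running this, the rules on the closed loop reinforce one another while the dead-end rule, which only decays, drifts toward extinction, a crude analogue of self-sustaining skill sets persisting in an organization.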

A critical reader might ask to what degree autocatalysis actually explains the formation and persistence of organizations. Without consideration of competing models, or of the cases autocatalysis does not fit, it is hard to assess where this model falls on the spectrum from anecdote to accuracy. This is a potential problem with all attempts, however valiant, to transplant models and structures from one field to another: moving beyond interesting associations to correlations, let alone causal links, is challenging.

Also, not uncommonly, the authors postulate that the interesting, novel, and value-contributing aspects of a system (here, an organization) occur in its interstices, edges, and anomalies. In actuality, this might be just one possibility (and not the principal one according to thinkers like Simondon, for whom novelty emerges most directly from the central interaction of a system’s components, features, and functionality). Worse, seeking the interstice forces the focus onto identifying borders, edges, and boundaries, defining phases of inherently non-definable dynamical systems. Through a Simondonian lens, this misses the nature and contribution of dynamic processes at the higher level: it tries to corral them into identifiable morphologies instead of apprehending their functionality.

Monday, August 18, 2014

Intracortical Recording Devices

A key future use of neural electrode technology envisioned for nanomedicine and cognitive enhancement is intracortical recording devices, which would capture the output signals of multiple neurons related to a given activity, for example signals associated with movement or the intent to move. Intracortical recording devices will require the next generation of more robust and sophisticated neural interfaces, combined with advanced signal processing and algorithms to properly translate spontaneous neural action potentials into command signals [1]. Capturing, recording, and outputting neural signals would be a precursor to intervention and augmentation.
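
As a minimal sketch of one common first step in translating extracellular recordings into command signals, consider threshold-based spike detection. The synthetic signal, sampling rate, threshold rule, and refractory period below are all illustrative assumptions on my part, not taken from the cited work.

```python
import math
import random

FS = 30_000  # samples per second, a typical intracortical sampling rate
random.seed(0)

# Synthetic 100 ms recording: Gaussian background noise plus three
# injected "spikes" (brief large deflections) at known sample indices.
signal = [random.gauss(0.0, 1.0) for _ in range(FS // 10)]
spike_times = [500, 1200, 2400]
for t in spike_times:
    signal[t] += 8.0  # spike amplitude well above the noise floor

# Detect threshold crossings at 4x the estimated noise standard
# deviation, with a refractory period so one spike is not counted twice.
noise_sd = math.sqrt(sum(x * x for x in signal) / len(signal))
threshold = 4.0 * noise_sd
refractory = 30  # samples (~1 ms at 30 kHz)

detected, last = [], -refractory
for i, x in enumerate(signal):
    if x > threshold and i - last >= refractory:
        detected.append(i)
        last = i

print(detected)  # sample indices of detected spikes
```

Real systems then sort the detected spikes by waveform shape and decode firing rates into movement commands; this crossing-detector is only the front end of that pipeline.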

Toward the next-generation functionality necessary for intracortical recording devices, Bink et al. demonstrated flexible organic thin-film transistors, using organic rather than inorganic semiconductors, with sufficient performance for neural signal recording and the ability to be interfaced directly with neural electrode arrays [2].

Since important brain network activity exists at temporal and spatial scales beyond the resolution of existing implantable devices, high-density active electrode arrays may be one way to provide a higher-resolution interface with the brain to access and influence this activity. Integrating flexible electronic devices directly at the neural interface might enable thousands of multiplexed electrodes to be connected with far fewer wires. Active electrode arrays have been demonstrated using traditional inorganic silicon transistors, but these may not be cost-effective to scale to large array sizes (e.g., 8 × 8 cm).
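
Back-of-the-envelope arithmetic shows why on-site multiplexing matters for wire count. The sketch below is illustrative only (real wire counts depend on the specific design): a passive N × N electrode grid needs one wire per electrode, while a row/column-addressed active array needs on the order of 2N shared lines.

```python
def wires_passive(n_rows: int, n_cols: int) -> int:
    # One dedicated wire per electrode.
    return n_rows * n_cols

def wires_multiplexed(n_rows: int, n_cols: int) -> int:
    # Shared row-select lines plus shared column-readout lines.
    return n_rows + n_cols

print(wires_passive(64, 64))      # 4096 wires
print(wires_multiplexed(64, 64))  # 128 wires
```

A 64 × 64 array thus drops from 4096 wires to 128, which is what makes thousands of electrodes plausible through a thin implanted cable.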

Also toward neural signal recording, Keefer et al. developed carbon-nanotube-coated electrodes, which increased the functional resolution, and thus the localized selectivity and potential influence, of implanted neural electrodes. The team electrochemically populated conventional stainless steel and tungsten electrodes with carbon nanotubes, which amplified both the recording of neural signals and the electrical stimulation of neurons (in vitro, and in rat and monkey models). The clinical electrical excitation of neuronal circuitry could be of significant benefit for epilepsy, Parkinson’s disease, persistent pain, hearing deficits, and depression. The team thus demonstrated an important advance for brain-machine communication: improving the quality of electrode-neuronal interfaces by lowering the impedance and raising the charge transfer of electrodes [3].

Full Article: Nanomedical Cognitive Enhancement

[1] Donoghue, J.P., Connecting cortex to machines: Recent advances in brain interfaces. Nat. Neurosci. 5 (Suppl), 1085–1088, 2002.
[2] Bink, H., Lai, Y., Saudari, S.R., Helfer, B., Viventi, J., Van der Spiegel, J., Litt, B., and Kagan, C., Flexible organic electronics for use in neural sensing. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2011, 5400–5403, 2011.
[3] Keefer, E.W., Botterman, B.R., Romero, M.I., Rossi, A.F., and Gross, G.W., Carbon nanotube coating improves neuronal recordings. Nat. Nanotechnol. 3(7), 434–439, 2008.

Sunday, August 10, 2014

Escaping the Totalization of my own Thinking

One of the highest-order things that we can do for ourselves and others is try to escape our own thinking style. Each of us has a way of thinking, a default of which we may not even be aware. Even if we are aware that we each have a personal thinking style, we may not think to identify it, contrast it with other thinking styles, consider changing it, or ask what it might mean to move between thinking styles.

This is a form of the totalization problem: being completely within something, it is hard to see outside the totality of that thing. If we are thinking through our own minds, how can we possibly think or see anything that is not within this realm? By definition, this seems an impossible conundrum: how are we to see what is beyond what we can see? How can we become aware of that of which we are not aware?

The totalization problem has been an area of considerable philosophical focus: whether there is an exteriority (an outside) to concepts like world and reality, and if so, whether it is reachable. Philosophers like Jacques Derrida thought that yes, escaping totalization (any system that totalizes) would indeed be possible. One way is through literature, which offers its own universe (a totalization) but also, inevitably, a hook to the outside (our world). Another way is through the concept of yes, assent, in which a hearing party affirms and a speaking party asserts in a dynamic process that cannot be totalized.

In a less complicated way, in our own lives there can be other means of escaping from the totalization of our thought into an exteriority, an outside where we can see things differently. Explicitly, we can try different ways of experiencing the world by learning how other people apprehend reality, and by noticing that more joy may come from experiencing the journey than from attaining any endpoint. Perhaps most important is being attuned to new ideas and new ways of thinking and being, especially those that don’t automatically make sense.

Sunday, August 03, 2014

Machine Ethics Interfaces

Machine ethics is a term used in different ways. The basic use is in the sense of people attempting to instill some sort of human-centric ethics or morality in the machines we build, like robots, self-driving vehicles, and artificial intelligence (Wallach 2010), so that machines do not harm humans either maliciously or unintentionally. This trend may have begun with Asimov’s Three Laws of Robotics. However, there are many philosophical and other issues with this definition of machine ethics, including the lack of grounds for anthropomorphically assuming that a human ethics would be appropriate as a machine ethics beyond the context of human-machine interaction.

There is another, broader sense of the term machine ethics, which means any issue pertaining to machines and ethics, including how a machine ethics could be articulated by observing machine behavior and, in the sense of French philosopher Gilbert Simondon, how different machine classes might evolve their own ethics as they themselves develop over time.

There is yet a third sense of the term machine ethics - to contemplate human-machine hybrids, specifically how humans augmented with nanocognition machines might trigger the development of new human ethical paradigms, for example an ethics of immanence that is completely unlike traditional ethical paradigms and allows for a greater realization of human capacity.

Machine ethics interfaces, then, are interfaces (software modules for communication between users and technologies such as machines, devices, software, and nanorobots) with ethical aspects deliberately designed into them. This could mean communication about ethical issues, user selection of ethically-related parameters, ethical issues regarding machine behavior, and ethical dimensions transparently built into the technology (like a kill switch in the case of malfunction). Machine ethics interfaces are the modules within machines that interact with living beings regarding ethical issues, pertaining to the ethics of machine behavior or the ethics of human behavior.
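
A hypothetical software sketch may make the idea concrete. All names and parameters here are my own illustrative assumptions, not drawn from the text: a thin layer between user and machine that exposes ethically relevant settings, logs ethically relevant decisions, and provides a transparent kill switch.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsInterface:
    # User-selectable, ethically related parameters (illustrative).
    share_telemetry: bool = False     # privacy-relevant data sharing
    max_autonomy_level: int = 1       # 0 = manual .. 3 = fully autonomous
    audit_log: list = field(default_factory=list)
    halted: bool = False

    def authorize(self, action: str, autonomy_required: int) -> bool:
        """Record the request and allow it only within the user's limits."""
        allowed = (not self.halted) and autonomy_required <= self.max_autonomy_level
        self.audit_log.append((action, autonomy_required, allowed))
        return allowed

    def kill_switch(self) -> None:
        """Transparent failsafe: halt all machine-initiated actions."""
        self.halted = True

iface = EthicsInterface(max_autonomy_level=2)
print(iface.authorize("reroute_vehicle", autonomy_required=2))  # True
iface.kill_switch()
print(iface.authorize("reroute_vehicle", autonomy_required=2))  # False
```

The design point is that the ethical parameters and the audit trail live in a dedicated, inspectable module rather than being scattered implicitly through the machine’s control code.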

Machine Ethics: 1) (conventional) technology designers attempting to incorporate models of human-centric morality into machines like robots, self-driving vehicles, and artificial intelligence to prevent humans from being harmed either maliciously or unintentionally, 2) any issue pertaining to machines and ethics, 3) the possibility of new ethical paradigms arising from human augmentation and human-machine hybrids.

Machine Ethics Interfaces: Interfaces (software modules for communication between users and technologies (machines, devices, software, nanorobots)) with ethical aspects deliberately designed into them. This could mean communication about ethical issues, user selection of ethically-related parameters, and ethical dimensions transparently built into the technology (like a kill switch in the case of malfunction).

Wallach, W. and Allen, C. (2010). Moral Machines: Teaching Robots Right from Wrong. Oxford, UK: Oxford University Press.