Showing posts with label philosophy. Show all posts

Friday, January 29, 2016

Cloudworld: A Hegelian Theory of Complexity and Algorithmic Reality

Philosophy could be an important conceptual resource in shaping human-technology interactions for several reasons. First, philosophy concerns the topics of world, reality, self, society, aspirations, and meaning, all of which we are hoping to reconfigure and accentuate in our relations with technology. Improving human lives is, after all, one of the main purposes of technology. Second, philosophy relates to thinking, logic, reasoning, and being, which are the key properties of what we would like our technology entities to do. We would like our technology entities to be more like persons: pre-uncanny-valley but fully-fledged tech others; thinker helpers, empathic listeners, coaches, optimizers; a new kind of technology-presenced companion. However, amid recent computational advances, thinking about thinking has been neglected as a primary resource. Third, philosophy treats the grasping and naming of new things in the world, which is precisely what is helpful in the case of new and quickly-emerging technological realities.

Hegel could offer a helpful position in considering the governance of emerging technologies. This is because the Hegelian reference point is specifically a moving, dialogical expansion in response to unfolding situations, not a pre-specified moment. The Hegelian method involves triads: there is the thing itself, its negation, and a bigger third position that sublates the truth content of the two previous positions into a new shape of its own consciousness. This kind of conceptual robustness could help in articulating more nuanced positions regarding emerging technologies and in moving beyond stark binaries like ‘adopt or don’t adopt,’ the technological dualism that ‘any technology has both good and evil uses,’ and a seemingly inevitable hopelessness in the face of existential risk.

The current situation of emerging technology is one of algorithmic reality. Not only do more new kinds of technology entities have a substantial presence in our human reality, where we interact with them on a regular basis, but there is also a sense of a quickening progression of these entities. There are drones, self-driving cars, personal home robots, quantified-self gadgets, Siri-commanded mobile phones, blockchain smart contract DACs, tradenets, deep-learning algorithms, big data clouds, brain-computer interfaces, neural hacking devices, augmented reality headsets, and deep-learning gaming worlds. Further, each of these technology classes is itself a platform, network, and app store, where the implication is cloudworld. Cloudworld is the notion of a deep multiplicity of networks as a compositional element of new algorithmic realities, where every network is a Turing-complete general computational substrate for every other. Any technology can immediately ‘grok,’ simulate, and run any other; what this means from our human standpoint is vastly unclear. Derivatively, any sort of cloudmind (clustered interactions between multiple human minds or entities such as artificial intelligences, coordinated via the Internet cloud) might run on any platform.

A Hegelian theory of algorithmic reality is a complexity philosophy position, meaning that it has the properties of a complex adaptive system in being nonlinear, emergent, dynamic, open, unknowable, self-organizing, and interdependent. A complexity philosophy position is required to correspond congruently to the underlying reality, which is itself complex. Algorithmic reality is not just an increasing degree of human-technology entity interaction but a multiplicity and proliferation of classes of network technology entities. The Hegelian position is exactly one that might constitute a bigger yes-and collaboration space that expansively accommodates all parties.

Inspiration: Minsky's legacy in the context of contemporary and near-future AI

Sunday, November 29, 2015

Magic Blockchains, but for Time? Blocktime Arbitrage

There is no doubt that blockchains are a reality-making technology, a mode and means of implementing as many flavors of our own crypto-enlightenments as we can imagine! This includes newer, flatter, more autonomous economic, political, ethical, scientific, and community systems. But the reconfiguration need not stop at familiar human social constructs like economics and politics; it might extend to physical realities too, like time. Blocktime’s temporal multiplicity and malleability suggest a reality feature we have never had access to before – making more time.

Blocktime: A General Temporality of Blockchains
Blocktime, blockchains’ own temporality, allows the tantalizing possibility of rejiggering time and making it a malleable property of blockchains. The in-built clock in blockchains is blocktime, the chain of time by which a certain number of blocks will have been confirmed. Time is specified in units of transaction block confirmation times, not minutes or hours as in a human time system. Block confirmation times are convertible to minutes, but these conversion metrics might change over time.
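As a rough illustration of the conversion between blocktime and human time, the sketch below assumes a hypothetical Bitcoin-style 10-minute block target; actual confirmation intervals drift with hash rate and difficulty retargeting, which is exactly why the conversion metrics can change over time:

```python
# Sketch: converting blocktime (block counts) to estimated wall-clock time.
# The 600-second target interval is an assumption (Bitcoin-style); real
# confirmation times vary around whatever target a given chain uses.

TARGET_BLOCK_INTERVAL_SECONDS = 600  # assumed 10-minute block target

def blocks_to_minutes(n_blocks: int, interval_s: float = TARGET_BLOCK_INTERVAL_SECONDS) -> float:
    """Estimate how many wall-clock minutes n_blocks of blocktime represent."""
    return n_blocks * interval_s / 60

def minutes_to_blocks(minutes: float, interval_s: float = TARGET_BLOCK_INTERVAL_SECONDS) -> int:
    """Estimate how many blocks correspond to a wall-clock duration."""
    return round(minutes * 60 / interval_s)

print(blocks_to_minutes(144))   # 1440.0 minutes, i.e. ~one day of blocktime
print(minutes_to_blocks(60))    # 6 blocks per hour at a 10-minute target
```

At a 10-minute target, 144 blocks of blocktime correspond to roughly one day, but the realized wall-clock time varies – which is the source of the blocktime/human-time differential.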

Blocktime Arbitrage
One key point is that the notion of blocktime, as an extension of computing clocktime more generally, creates a differential. Blocktime and human time already exist as different time schemas. A differential suggests that the two systems might be used to reinforce each other, or that the differential could be exploited, arbitraging the two time frameworks. The differential is also the way to ‘make more time,’ by accessing events in another time trajectory. The conceptualization of time in computer science is already different from human time: computing clocktime has more dimensions (discrete time, no time, asynchronous time, etc.) than human physical and biological time, which is continuous. What is different with blocktime is that it builds in even more variability, and the future assignability of time through dapps and smart contracts. For example, MTL (machine trust language) time primitives might be assigned to a micropayment channel dapp as a time arbiter.
Time has not been future-specifiable before, in the way that it can be assigned in blocktime smart contracts.

Temporality as a Smart Contract Feature
Time speed-ups, slow-downs, event-waiting, and event-positing (a true futures-class technology) could become de rigueur blocktime specifications. Even the blocktime regime itself could be a contract-specifiable parameter per drop-down menu, just like legal regime. Temporality becomes a feature as smart contracts are launched and await events or changes in conditions to update contract states. Time malleability could itself be a feature, arbitraging blocktime with real time. An example of a time schema differential could be a decentralized peer-to-peer loan that is coming due in blocktime, but where there have not been enough physical-world time cycles to generate the ‘fiat resources’ to repay the loan.
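To make the loan example concrete, a temporal condition can be sketched as a toy contract whose due date is denominated in blocktime (block height) rather than calendar time. Everything here is a hypothetical illustration, not any real contract platform's API:

```python
# Toy sketch of a smart contract state machine keyed to blocktime.
# 'Loan', 'due_height', and 'current_height' are illustrative names.

from dataclasses import dataclass

@dataclass
class Loan:
    principal: float
    due_height: int          # due date expressed in blocktime (a block height)
    repaid: bool = False

    def status(self, current_height: int) -> str:
        """Contract state as a function of the chain's current block height."""
        if self.repaid:
            return "closed"
        if current_height >= self.due_height:
            # Due in blocktime, regardless of how many physical-world
            # time cycles were available to generate fiat resources.
            return "defaulted"
        return "active"

loan = Loan(principal=1000.0, due_height=420_000)
print(loan.status(current_height=419_000))  # "active"
print(loan.status(current_height=420_000))  # "defaulted"
```

The differential appears the moment blocks arrive faster or slower than the human calendar the borrower is living in: the due height can arrive 'early' or 'late' in wall-clock terms.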

Blocktime Standards
In blocktime, the time interval at which things are done is by block. This is the time that it takes blocks to confirm, so blockchain system processes like those involving smart contracts are ordered around the conception of blocktime quanta or units. This is a different temporal paradigm than human lived time (whether Bergsonian doubled duration (the internal sense of time passing) or external measurable clocktime). The human time paradigm is one that is more variable and contingent. Human time is divided and unitized by the vagaries of human experience, by parameters such as day and night; week, weekend, and holiday; seasons; and more contingently, crises, eras, and historical events.

Since blocktime is an inherent blockchain feature, one of the easiest ways to programmatically specify future time intervals for event conditions and state changes in blockchain-based events is via blocktime. Arguably, it is easier, and more congruent and efficient, to call a time measure from within a system rather than from outside. It could be prohibitively costly, for example, to specify an external programmatic call to NIST or another time oracle. The emerging convention could possibly be to call NIST as a backup, confirmation, or comparison for blocktime. Currently, blockchain systems do not necessarily synchronize their internal clocktime with NIST, but the possibility of a vast web of worldwide smart contracts suggests the value and necessity of external time oracles, and raises new issues about global time measurement more generally. Especially since each different blockchain might have its own blocktime, there could be some standard means of coordinating blocktime synchronizations for interoperability, maybe via a time sidechain, for example.
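One way to picture the backup/comparison role of an external time oracle is to compute the drift between the time implied by the chain's own blocktime and the time the oracle reports. This is a minimal sketch: the genesis timestamp and 10-minute target are Bitcoin-style assumptions, and the oracle value is a hypothetical input rather than a real NIST API call:

```python
# Sketch: comparing internal blocktime against an external time oracle.
# Assumed parameters: Bitcoin's genesis timestamp (2009-01-03 UTC) and a
# 10-minute target interval. 'oracle_now' stands in for an oracle reading.

GENESIS_TIMESTAMP = 1_231_006_505   # Unix time of the Bitcoin genesis block
TARGET_INTERVAL_S = 600             # assumed target seconds per block

def blocktime_estimate(height: int) -> int:
    """Internal estimate of 'now' implied by the chain height alone."""
    return GENESIS_TIMESTAMP + height * TARGET_INTERVAL_S

def time_drift(height: int, oracle_now: int) -> int:
    """Differential in seconds between the oracle and blocktime.
    Positive: blocks arrived slower than target; negative: faster."""
    return oracle_now - blocktime_estimate(height)

# If the oracle says 300 s more have passed than blocktime implies:
print(time_drift(height=100, oracle_now=GENESIS_TIMESTAMP + 100 * 600 + 300))  # 300
```

A time sidechain coordinating several chains' blocktimes would, in effect, be publishing and reconciling exactly this kind of drift figure across chains.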

Novel Temporalities of Computing (Discontinuous) and Big Data (Predictive)
First, computing clocktime made time malleable through its different discontinuous forms. Then machine learning and big data facilitated a new temporality, one oriented to the present and future instead of just the past. There was a shift from only being able to react to events retrospectively, after they had passed, to being able to model, simulate, plan, and act in real time as events occur, and proactively structure future events. The current change is that blockchains, and particularly smart contracts, add exponential power to this; they are in some sense a future reality-making technology on steroids. Whole classes of industries (like mortgage servicing) might be outsourced to the seamless orchestration of blockchain dapps and DACs in the next phases of the automation economy. While Bitcoin is the spot market for transactions in the present moment, smart contracts are a robust futures market for locking in the automated orchestration of vast areas of digital activity.

Blockchain Historicity: Computer Memory of Human Events
Blockchain logs are a human event memory server. Blockchains are already event history keepers, and now with blocktime they have even more responsibility as the memory computer of human events. It is now possible to think in terms of blockchain time sequences, rather than human time scales and events, in the anticipation and scoping of future events and activities as blockchain reality unfolds. For example, there are normal human time sequences, like a one-year lease agreement. Other sequentiality is based on human-experienced conditions like ‘the park is open until dark,’ which makes little sense in a blocktime schema. There are time guidelines that vary per lived experience in human realities; likewise, there could be analogs in lived experience in blockchain realities. Different events could mark the historicity of blockchains, for example, the time elapsed since the genesis block, and other metrics regarding the number, amount, and speed of transactions. In cryptophilosophy, Hegel, Benjamin, Hölderlin, and Heidegger’s conceptions of historicity and temporality might be instantiated in the blocktime paradigm, where, in ecstatic temporality, historicity is the event from the future reaching back to the present now (Heidegger, Being and Time, 474).
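Historicity metrics like these are directly computable from a chain's own log. A minimal sketch, using an illustrative list of (height, timestamp) pairs rather than real chain data:

```python
# Sketch: simple 'historicity' metrics readable from a chain's own record:
# elapsed time since the genesis block, and average confirmation speed.
# The block list below is hypothetical sample data.

blocks = [  # (height, unix timestamp)
    (0, 1_000_000_000),   # genesis block
    (1, 1_000_000_620),
    (2, 1_000_001_180),
    (3, 1_000_001_800),
]

def elapsed_since_genesis(chain) -> int:
    """Seconds of history the chain has recorded since its genesis block."""
    return chain[-1][1] - chain[0][1]

def mean_confirmation_time(chain) -> float:
    """Average seconds per block across the recorded history."""
    return elapsed_since_genesis(chain) / (len(chain) - 1)

print(elapsed_since_genesis(blocks))    # 1800 seconds of chain history
print(mean_confirmation_time(blocks))   # 600.0 seconds per block
```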

Related Crypto-philosophy Talk: Swan, M. “Bergson’s Qualitative, Kant’s Time and Imagination, and Blocktime Smart Contracts.” Spatiality & Temporality Conference. 11-13 December 2015. Warsaw, Poland. 

Tuesday, September 29, 2015

Blockchain Crypto-economics: The Actualization Economy of Immanence

Phase I: P2P Economies
There is considerable room for exploration in defining what the new possibility space of personalized, self-defined, emergent economic systems might comprise. Opening up economic systems could have different stages and phases. The first position could be having the same structure of current economic systems, but opening up the parties, interaction types, and business models. The idea of ‘decentralized reddit’ is an example of one such first position. It is still the same Internet pipes, providing the same news items to consumers. What could be different is the hosting, pricing, and business model. The web property reddit could be hosted in a decentralized manner, p2p-hosted by community peers, as opposed to being centrally-served by the company, reddit, Inc. Once the content is hosted by peers, the business model too can change. Instead of indirect advertising-supported centralized models coordinating the serving of eyeballs to vendors, direct pay-for-consumption or freely-contributed content models could go more naturally with a p2p-based content community. This means perhaps leaner economic models with greater price rationalization and value assessment of consumption by users.

Phase II: Rethinking Economic Systems as Coordination Systems
However, what is possible is not just different economic systems from a business model perspective, but something more fundamentally radical: a blueprint for a new economy. The whole first position, ‘decentralized reddit,’ no matter how decentralized, still sits within the traditional structure of how economics has been conceived – of some parties producing goods of value consumed by others for some price (including for free in gift economies). Extending this, the fully-fledged second position challenges and redesigns what is meant by economic systems, and claims that the purpose and value of economic systems is much broader. Markets have been the only application of economic systems, but the concept is more extensive.
Economics is a coordination system, of resources, but more broadly, of reality. 
Economics is a mediating and coordination system of our interactions with reality. Elements of economic theory might still make sense, like inputs, outputs, and resources, within this broader conceptualization of mediating reality. Resources could be more expansively defined, such as 'what resources are needed as inputs to brains being able to have ideas' as opposed to 'number of units of lumber sold.' Economics, instead of being defined as the production and consumption of scarce goods and services, could be reconceived more generally as a facilitation response to reality, concretized as a discovery and interaction process where something is discovered and valorized by a party, possibly in acknowledgement, interaction, and exchange with another party.

Phase III: Crypto-economics Facilitates the Shift from the Labor Economy to the Actualization Economy
Reconceiving economics as the more generalized form of (ontologically) what it is, a coordination system, allows its purpose to be substantially opened. The primary focus of what economics is about can shift. The locus of focus can change from how scarce goods are produced and distributed to instead, something much more generalized, to what our experience of reality is, and therefore to what kinds of responses to reality we would like to facilitate and enable. The notion of reality mediation design is so greenfield that the first question is 'what is important?' Economics can become a greenfield design frame about what might be possible in general in the world.

Yes-and! Abundance Economies of Immanence expand Reality
There are arguably two levels of ‘what is important’ – sustenance and actualization. First, certainly one dimension that is important is a post-scarcity situation for the material inputs required for healthy, flourishing human lives. The blockchain automation economy is making great strides towards this. Second, once basic needs are met, the focus can become one of immanence: open-ended expansion up from baseline survival to actualization in terms of growth, learning, creativity, collaboration, and contribution. True abundance is having these two levels; not just having survival-level needs met but also and more importantly, entering more fully into an existence of immanence, of open-ended upside potentiality - the actualization economy - and spending more cognitive time in this space. Abundance Theory Studies recognizes both of these dimensions: the immanent potentiality upside of existence, together with the baseline-attaining post-scarcity situation for material goods. True Abundance Economies focus on expanding the position of yes-and improvisation energy directed to self-expression, creativity, and novelty; expanding reality in ways that matter.

Sunday, June 07, 2015

CryptoSustainability: Reinventing Economics

The new ‘Sensibility of the Cryptocitizen’ is about rethinking our relationship with authority, and political and economic life design. It includes personal digital security practices like backing up our money, and more profoundly it is not just about rethinking relationships of authority and power, and economic resources and exchange, but about reinventing the models by which we use them. Perhaps never before have we brought such creativity to designing and trying different models and modes of life; prototyping as a life practice.

True autonomy is setting our own rules, economic, political, social, etc.; in every domain; setting our own rules for life per our own purposes and value systems. We are inventing new models that more directly address our local individual needs rather than accepting status quo models from the structured world.

Part of the ‘reinventing economics’ mindset is thinking more modularly and portably about resources, and where and how everything can be accomplished more fungibly and effectively, with a new responsiveness to meet needs dynamically. What if every resource had the Uber-like conceptualization of immediate resource delivery on demand? Not just food, transportation, products, and valet services [1], but more foundational resources too that involve space like lodging, showers, team coworking space, and office meeting space; on-demand pod space; portable mobile resource use.

There are some exciting examples of fungible, distributed autonomous space. Distributed autonomous mobile space includes the concept of a mobile AirBnB, embodied by the Blackbird Bus, which uses city streets as a commons for on-demand locational parking of a 68-passenger school bus that has been converted into a luxury mobile office/living space for a startup company, and offers co-housing nights via AirBnB. Another example is Houslets: modular, portable, reconfigurable, open-source living structures which can be fixed or mobile or anything in between. Yet another is using space and economic designability to competitive advantage, such as autonomous political/judicial zones within countries, like the Zones for Economic Development and Employment (ZEDEs) in Honduras. Existing space-on-demand offerings like Liquid Space (hourly, daily, weekly, or monthly booking of office space) could be further extended and enhanced by delivering mobile office pods to locations.

[1] The plethora of eDelivery services: Amazon Prime. Munchery. Postmates. Seamless. EAT24. GrubHub. Safeway.com. Whole Foods. DoorDash. Washio. TaskRabbit. FreshDirect. Homejoy. Uber. Google Express. Alfred.

Sunday, April 05, 2015

Philosophy of Big Data

Big data is growing as an area of information technology, service, and science, and so too is the need for its intellectual understanding and interpretation from a theoretical, philosophical, and societal perspective.

The ways that we conceptualize and act in the world are shifting now due to increasingly integrated big data flows from the continuously-connected multi-device computing layer that is covering the world. This connected computing layer includes wearables, Internet-of-Things (IOT) sensors, smartphones, tablets, laptops, Quantified Self-Tracking devices like the Fitbit, connected car, smarthome, and smartcity.

Through the connected computing world, big data services are facilitating the development of more efficient organizing mechanisms for the conduct and coordination of our interaction with reality.

One effect is that our stance is moving from being constrained to reactive response to now being able to engage in much more predictive action-taking in many areas of activity.

Another effect is that a more efficient world is being created, automating not just mechanical tasks, but also cognitive tasks. This paper discusses how a philosophy of big data might help in conceiving, creating, and transitioning to data-rich futures.

More Information: Presentation, Video, Paper

Sunday, March 15, 2015

Cognitive Enhancement can Integrate Man and Machine

Cognitive enhancement should be conceived as the philosophical issue of the greater subjectivation possibilities for man, as opposed to primarily a bioethical concern. The current world is one in which man and technology are increasingly interlinked. One high-stakes endeavor is cognitive enhancement, of which there are different working definitions. A precise account is that cognitive enhancement is the augmentation of human skills, attributes, and competencies through the use of technology, medicine, and therapy designed to increase human performance capability (Hildt). Another is that it is the amplification or extension of core capacities of the mind through improvement or augmentation of internal or external information processing systems (Bostrom). Another is that it refers to any expanded or new capacity of a human being (Buchanan). The salient distillation is that cognitive enhancement is the targeted improvement of natural human cognitive abilities.

The motivation for cognitive enhancement could be twofold. First, there are the obvious practical benefits of improved perception and memory. Beyond this, however, the more profound reason for seeking improved cognition is the implication that it can facilitate our own growth and development as humans, actualizing ourselves and our potential more rapidly and effectively. Cognitive enhancement is an important topic for investigation because it examines our existence and also the human-technology relation. Increasingly powerful science and technology tools are emerging that may have the potential to dramatically enhance human performance, and perhaps redefine what it is to be human. Technology advances at a much higher rate than man subjectivates, and man and technology are increasingly being integrated, with technology no longer seen as an external tool but as an embedded presence, such that man and technology are co-evolving and subjectivating together. The right kinds of cognitive enhancement applications might benefit both humans and technology entities, and their potential integration.

References
Bostrom, N., and Sandberg, A. (2009). "Cognitive enhancement: methods, ethics, regulatory challenges." Sci. Eng. Ethics 15, 311–341.
Buchanan, A. (2013). Beyond Humanity: The Ethics of Biomedical Enhancement. Oxford UK: Oxford University Press.
Hildt, E. and Franke, A.G., Eds. (2013). Cognitive Enhancement: An Interdisciplinary Perspective. Dordrecht DE: Springer.

Sunday, January 04, 2015

The Philosophy of Complexity: Are Complex Systems Inherently Tyrannical?

The philosophy of complexity is developing as a field of philosophical inquiry to accompany, support, and question advances in the science of complex systems. This is warranted given that the issues surfaced by science findings signal a full slate of philosophical questions in the three main areas of ontology (existence), epistemology (knowledge), and axiology (valorization and ethics). The fast pace of technological innovation has been substantiating the need for various new philosophies explicitly examining these issues in technology, information, cognition, cognitive enhancement, big data, and complexity.

How much total Liberty is in the System?
A philosophy of complexity would operate both internally and externally to the practice of complexity science, at the level of the theory of the practice, and at the abstraction of the impact and meaning of the practice more broadly in society. One issue for investigation is a philosophical characterization of complex systems themselves, including parameterizing different features such as range-boundedness. For example, in French politics, there was the revolution and the subsequent process of republics starting, failing, and enduring. The question is how to measure the total liberty available in the system, how it has changed over time, and what predictions can be made, or, more importantly, what improved changes might be catalyzed for the future.

Persistent Mathematical Behavior across Complex Systems
Fifteen or so criteria that are mathematically persistent across complex systems (fat tails, power laws, high coefficients, degrees of correlation, fractal behavior, etc.) have been identified. However, it seems that even while expanding and contracting over time, complex systems may be displaying cyclic and range-bound behavior. This could be inherently mean-regressing, and potentially tyrannizing or at least limiting to system participants, and this should be measured and evaluated. Is there a fixed amount of liberty available in the French political system? To what degree do complex systems as a format have limitations, and is this a block to progress? Both the quantitative and qualitative aspects of complex systems need to be measured, with an identification of where and how these limits can be and have been broken (beyond traditional symmetry-breaking).

Bergsonian Information, Illiberty, and Rethinking Thinking
The philosophical questions concern the ontology, epistemology, and axiology of complex systems. For example, does complexity have a qualitative side? There is a need to investigate the idea of ‘Bergsonian information,’ the extension of duration-as-time and duration-as-consciousness/self to the internal doubled experience of information, in the context of complexity. Likewise, liberty, illiberty (the absence of liberty), and potentiality in complex systems should be explored, especially in cognition, neuroscience, and connectome-mapping, areas which are just starting to be accessible to the complexity discipline. There can be an examination of how we can rethink thinking and intelligence (biological and artificial) per deep learning, symbolic systems methods, and the philosophy of complexity.

Sunday, December 21, 2014

Bergson, Free Will, and the Philosophy of Cognitive Enhancement

Bergson claims that free will exists. It occurs in moments when a living being experiences duration, which is tuning into the internal sense of an experience, and a freely-determined action flows from this state. His reasoning is that “if duration is heterogeneous [if we are tuned into the internal sense of experience], the relation of the psychic state to act is unique, and the act is rightly judged free.” An act is free if it flows from an internal qualitative experience. He suggests we understand this by considering an example in our lives of having made a serious decision, where even searching for such an example already starts to evoke the qualitative aspects, unique psychic states, and then the free action undertaken as a result. The crux is that “We should see that, if our action was pronounced by us to be free, it is because the relation of this action to the state from which it issued could not be expressed by a law, this psychic state being unique of its kind and unable ever to occur again.” Turning inward to our unique experience causes freely undertaken action to flow as a result, even if this action is a formulation of our mental state.

Philosophy of Cognitive Enhancement
The reason that Bergson is useful for the philosophy of cognitive enhancement is that he provides a reasonable ontological explanation for free will with prescriptive recommendations for its achievement. He draws our attention to the qualitative and characterizes it in usable detail instead of dismissing it as inaccessible due to being subjective (as did Kierkegaard). This could help in developing a philosophy of cognitive enhancement by articulating some of the goals and experience of what it might mean for humans to engage in such practices. It is not necessary to agree with Bergson's claim in favor of free will to implement some of the underlying ideas. What is important to us in cognitive enhancement (the targeted improvement of natural human cognitive abilities) is not just better memories, but accelerated subjectivation - the ability to extend our capacity by becoming ‘more’ of who we are and can be more quickly.

Cognitive Enhancement Tools
One first cognitive enhancement application of Bergson might be in having greater activation of free will; catalyzing more ‘living now’ moments where free will could be realized. Our everyday acts are quantitative and undoubled but cognitive enhancement tools might be able to help with greater activation of the qualitative experience of life. Another application could be exploring the emergence of freedom as a property of internal experience. Bergson does not discuss whether we are free to perceive our inner states in different ways, or just one default way. It would seem that qualitative multiplicity could extend to having discretion over experience. This could inspire an ethics of perception, and an ethics of reality, as topics of cognitive enhancement philosophy. Another application area could be quantitative-qualitative transitions; tracing how the quantitative becomes the qualitative, as this might be a way into a richer, ongoing, free will-activated experience of life. There could be many applications trying to help improve our psychic states, both in their quality and accessibility, and in our awareness and perception of them.

Reference: Bergson, Henri. (2001 [1889]). Time and Free Will: An Essay on the Immediate Data of Consciousness (Essai sur les données immédiates de la conscience). Mineola, NY: Dover Publications.

Sunday, December 07, 2014

Bergson-Deleuze: Incorporating Duration into Nanocognition

French philosophers Bergson and Deleuze bring to nanocognition and machine ethics interfaces the philosophical conceptualizations of image, movement, time, perception, memory, and reality that can be considered for implementation in tools for both cognitive enhancement and subjectivation (the greater actualization of human potential).

From the standpoint of an Ethics of Perception of Nanocognition, Bergson and Deleuze stress the need to see perception in itself, and machine ethics interfaces could possibly help us do this through the concept of Cinema 3: the perception-image. Having had only one default (undoubled) means of perception (taking the actualized perceptions of daily life as the only kind of perception, just as we have taken linear, spatialized, narrative time as the only form of time) has meant that we have not considered that there may be multiple ways to perceive, and that these might exist on a virtual plane of possible perceiving, and coalesce through difference into actual perception. At minimum, our nanocognitive prosthetics might be able to introduce and manage the notion of multiplicity in virtual and actual perception.

Bergson-Deleuze exhorts us to notice the doubled, internal, qualitative, subjective experience of lived phenomena like movement, time, perception, reality, and ourselves. In particular, nanocognition allows us to see the full doubling of perception, because there cannot be a doubling if there is only one unexamined mode, if perception in itself cannot be seen. It is only through duration - the doubled, subjective experience of perception (the experience of perception itself) that its virtuality and multiplicity (possibility) can be seen. Importantly, the consequence of seeing the doubled side of perception and reality is that it allows us to tune into the possibility of possibility itself. The real goal of Bergson-Deleuze is not just seeing different possibilities for ourselves, but seeing possibility itself; this is the ultimate implication for nanocognition – conceiving of nanocognition as pure possibility in and of itself.

Sunday, October 19, 2014

iSchools: Contemporary Information Technology Theory Studies

Academic rigor and contemporary thinking come together in the concept of iSchools, which give practical consideration and interesting learning opportunities to the most relevant issue of our time: information. So far there are over 50 iSchools worldwide; a global pool, like bitcoin for academia. The March 2014 conference was held in Berlin and the March 2015 conference will be at UC Irvine. With higher education under reinvention pressure from all directions, the possibility of making institutional learning relevant again cannot be overstated.

iSchools are the perfect venue to take up not just the practical agenda of the information technology field but also the theoretical, philosophical, and societal dimensions of its impact. Conferences on ‘big data theory’ have started to appear (Theory of Big Data, University College London, January 2015), along with calls for such theory (‘Big Data Needs a Big Theory to Go with It,’ Scientific American; ‘Rise of Big Data underscores need for theory,’ Science News). These efforts are good, but they mostly concern theory that explains the internal operations of the field, not its greater societal and philosophical effect. In addition to how ‘big data theory’ is currently being conceptualized, an explicit consideration of the general theoretical and social impact of information technology is needed. Floridi’s distinction regarding the philosophy of information is apt: the main focus is how the field changes society, not the internal methods of the field.

Research Agenda:
Contemporary Information Technology Theory Studies 
Here is a thumbnail sketch of a research agenda for Contemporary Information Technology Theory Studies. Early examples of topics taken up at institutes and think tanks (like Data&Society) are a good start and should be expanded and brought into the academic setting. A more appropriately robust agenda would consider the broad theoretical, social, and philosophical impact of the classes of information technology below that are dramatically reshaping the world, including specifically how our ideas of self, world, and future possibility are changing.

Sunday, August 10, 2014

Escaping the Totalization of my own Thinking

One of the highest-order things we can do for ourselves and others is to try to escape our own thinking style. Each of us has a way of thinking, a default of which we may not even be aware. Even if we are aware that we each have a personal thinking style, we may not think to identify it, contrast it with other thinking styles, consider changing it, or ask what it might mean to be portable between thinking styles.

This is a form of the totalization problem: being completely within something, it is hard to see outside the totality of that thing. If we are thinking through our own mind, how can we possibly think or see anything that is not within this realm? By definition, this seems an impossible conundrum: how are we to see what is beyond what we can see? How can we become aware of that of which we are not aware?

The totalization problem has been an area of considerable philosophical focus: whether there is an exteriority (an outside) to concepts like world and reality, and if so, whether it is reachable. Philosophers like Jacques Derrida thought that escaping totalization (any system that totalizes) is indeed possible. One way is through literature, which offers its own universe (a totalization) but also, inevitably, a hook to the outside (our world). Another way is through the concept of yes, or assent, in which a hearing party affirms and a talking party asserts in a dynamic process that cannot be totalized.

In a less complicated way for our own lives, there can be other means of escaping the totalization of our thought into an exteriority, an outside where we can see things differently. Explicitly, we can try different ways of experiencing the world by learning other ways in which people apprehend reality, and by noticing that more joy may come from experiencing the journey than from attaining any endpoint. Perhaps most important is being attuned to new ideas and new ways of thinking and being, especially those that don’t automatically make sense.

Monday, May 26, 2014

Futurist Ethics of Immanence

The ethics of the future could well shift to one of immanence. In philosophy, immanence describes situations where everything comes from within a system, world, or person, as opposed to transcendence, where there are externally-determined specifications. Traditional models of ethics have generally been transcendent in the sense that there are pre-specified ideals posed from some point outside an individual’s own true sense of being. The best anyone can hope to achieve is regaining the baseline of the pre-specified ideal (Figure 1). Measuring whether someone has reached the ideal is also problematic and tends to be imposed externally. (This is likewise an issue in artificial intelligence projects, where judgments of intelligence are imposed externally.)

 Figure 1: Rethinking Ethics from 1.0 Traditional to 2.0 Immanence.

There has been a progression in ethics models, moving from act-based to agent-based to, now, situation-based. Act-based models judge actions (the Kantian categorical imperative versus utilitarianism (the good of the many) or consequentialism (the end justifies the means)). Agent-based models hold that the character of the agent should be predictive of behavior (the dispositionist view). Now social science experimentation has validated a situation-based model: the actor performs according to the situation (i.e., could behave in different ways depending on the situation). However, all of these models are still transcendent; they take the form of externally pre-specified ideals.

Moving to a true futurist ethics that supports freedom, empowerment, inspiration, and creative expression, it is necessary to espouse ethics models of immanence (Figure 1). In an ethics of immanence, the focus is the agent, where an important first step is tuning in to true desires (Deleuze) and one’s own sense of subjective experience (Bergson). Expanding the range of possible perceptions, interpretations, and courses of action is critical. This could be achieved by improved mechanisms for eliciting, optimizing, and managing values, desires, and biases.

As social models progress, a futurist ethics should move from what can be a limiting ethics 1.0 of judging behavior against pre-set principles to the ethics 2.0 of creating a life that is affirmatory and expansive.

Slideshare presentation: Machine Ethics: An Ethics of Perception in Nanocognition

Sunday, May 18, 2014

Wearables-Mobile-IOT Tech creates Fourth Person Perspective

So far the individual has almost always existed in the context of a society of others. This could change in the farther future as individuals might be in the form of a variety of digital and physical copies in different stages of augmentation. It could become more difficult to find ‘like-others.’ My claim is that the function of alterity (an awareness of others that triggers subjectivation) would need to persist for individuals to fully become themselves, but it would not need to come from others that are like us.

All that is needed is some sort of external otherness that can show us ourselves in a new way to facilitate a moment of development. There is nothing in the function of alterity to suggest that it must be an ‘other’ that is like us. It is just that it has been this way historically, because other humans have been the ready form of ‘the other.’ It has been easiest and most noticeable when another human serves as a device like a mirror allowing us to see ourselves in a new way.

However, it is quite possible that the alterity function could be fulfilled in many other ways that do not involve a self-similar subject. One mechanism that is already allowing us to see ourselves in new ways is quantified self-tracking gadgetry. The ensemble of QS gadgets creates a fourth-person perspective, an objective means of seeing ourselves via exteriority and alterity that can trigger a moment of subjectivation. Now understanding the alterity function as such, there could be many alternative means of fulfilling it. 

Longer video on the topic: Posthuman Interpretation of Simondon's Individuation

Sunday, April 27, 2014

Bergson: Free Will through Subjective Experience

Advances in science often help to promulgate new ideas for addressing long-standing multidisciplinary problems. Max Tegmark's recent book, Our Mathematical Universe, is just such an example of new and interesting ways to apply science to the problem of consciousness. However, before jumping into these ideas, it is important to have a fundamental knowledge of the different theories of perception, cognition, and consciousness.
 
One place to turn for a basic theory of cognition is French process philosopher Henri Bergson (1859-1941). Although we might easily dismiss Bergson in our shiny modern era of real-time fMRIs, neo-cortical column simulation, and spike-timing calculations, Bergson's theories of perception and memory still stand as some of the most comprehensive and potentially accurate accounts of the phenomena.

Bergson's view is that there are two sides to experience: the quantitative measurable aspect, like a clock's objective ticking in minutes, and the qualitative subjective aspect, like what time feels like when we are waiting, or having fun with friends.

Bergson's prescription for more freedom and free will is tuning into subjective experience. In the example of time, this means 'living in time,' experiencing time as duration, as internal themes and meldings of time. We must tune into the subjective experience of time to exercise our free will. In practice, we are more disposed to freedom and free will when we choose spontaneous action, which happens when we are oriented toward the qualitative aspects of internal experience and see time as a dynamic overlap between states, not as boxes on a calendar.

Considering that we may espouse a futurist ethics that supports freedom, empowerment, inspiration, and creative expression of the individual in concert with society, the Bergsonian implementation would be ethics models that facilitate awareness of subjective experience, a point that Deleuze subsequently takes up in envisioning societies of individuals actualized in desiring-production.

Sunday, April 13, 2014

Big Data: Reconfiguring and Empowering the Human-Data Relation

A strong new presence in contemporary life is big data (the collection and use of personal data by large institutions). As individuals, we can feel powerless in our relation with data.

At present, the human-data relation is one of fear, distance, powerlessness, lack of recourse, and diminished agency. There is an asymmetry of touch in the human-data relation where data can see and touch us without our noticing or being able to touch back. What is missing from the human-data relation is the capacity for humans to touch data in a meaningful way. The asymmetry of touch leads to an incomplete subjectivation of both the human and the data: big data creates a false composite in trying to model and understand the whole individual from just a few electronically-traceable activities, while humans have almost no sight or conceptualization of the entity that is big data.

There are at least two ways to humanize and improve the human-data relation. One is reconceptualizing subjectivation and personal identity as a malleable and dynamic association of elements and capacities; the other is reconfiguring the power relation between humans and data. To balance the power relation so that humans are more empowered, non-profit institutions, watchdog organizations, and community groups could be created for the defense of personal data, and privacy could be acknowledged as a practical impossibility and recast into a system of access rights and responsibilities conferred upon the data.
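The idea of access rights conferred upon the data itself can be pictured as a small policy object attached to each data record. The sketch below is purely hypothetical; the class and field names are illustrative inventions, not any existing standard.

```python
# Hypothetical sketch: privacy recast as access rights attached to the
# data rather than as secrecy. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class DataAccessPolicy:
    owner: str
    # accessor -> set of granted rights, e.g. {"research_lab": {"read_aggregate"}}
    grants: dict = field(default_factory=dict)

    def grant(self, accessor: str, right: str) -> None:
        """Confer a right on an accessor (e.g. a watchdog or research group)."""
        self.grants.setdefault(accessor, set()).add(right)

    def allows(self, accessor: str, right: str) -> bool:
        """Check whether an accessor holds a given right over this data."""
        return right in self.grants.get(accessor, set())


policy = DataAccessPolicy(owner="alice")
policy.grant("research_lab", "read_aggregate")

print(policy.allows("research_lab", "read_aggregate"))  # True
print(policy.allows("advertiser", "read_raw"))          # False
```

The design choice worth noting is that the policy travels with the data and enumerates positive grants, rather than relying on the data being hidden.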

Presentation: The Philosophy of Big Data
Video (in French): La reconfiguration de la relation humaine-données par le toucher

Sunday, March 30, 2014

Personhood Beyond the Human: the Subjectivation Scale of Future Persons

Philosophical concepts are useful for considering a potentially diverse landscape of future persons.

One important question is subjectivation – how individuals form and what constitutes an individual. The less helpful approach is focusing on classification and definition, which is discriminatory and doomed to death by detail. A more fruitful approach is Simondon’s theory of individuation.

For Simondon, the current and future world is an environment of dynamic processes like individuation. Individuals participate in but do not cause individuation. Most importantly, individuals exist on a spectrum of capacity for action with other living beings including animals, human persons, and possibly a variety of future persons.

‘Capacity for action’ (a Spinoza-inspired concept) is crucial because it focuses on degrees of capability (related to a particular quality or skill) as opposed to underlying nature. Capacity for action has all of the possibility and mobility of a future-looking frame, and none of the fixity and discrimination of classification.

Sunday, March 23, 2014

Big Data becomes Personal: Knowledge into Meaning

One of the most significant shifts in the contemporary world is the trend towards obtaining and analyzing ‘big data’ in nearly every venue of life.

However, one of the biggest outstanding challenges is turning these large volumes of impersonal quantitative data into qualitative information that can impact the quality of life of the individual in a multiplicity of areas such as happiness, well-being, goal achievement, stress reduction, and overall life satisfaction.

For this reason, I have helped to organize the AAAI Spring Symposium this week (Big data becomes personal: knowledge into meaning) at Stanford March 24-26 to explore exactly this question of turning personal data into meaning as related in Figure 1.

Figure 1: Turning big data into personal meaning.

Sunday, March 02, 2014

Illiberty in Biohacking, Personal Data Rights, Neuro-diversity, and the Automation Economy

Illiberty is a new concept that names the lack of liberty. In one way it is strange that a word for the opposite of liberty and freedom does not exist, given how strongly these ideals feature in social, political, and cultural life. However, illiberty is quite subtle; it does not have the bluntness of the freedom-slavery opposition. Illiberty is the sense of a lack of liberty, particularly where there should be liberty. The justice-injustice pairing is similar to the liberty-illiberty relation.

As individuals, we continue to wake up to higher levels of consciousness in constituting ourselves as subjects, and now is the time to develop an awareness and response for new situations of illiberty.

Illiberty extends the familiar equity, social justice, privilege, and access struggles and covers a larger conceptual ground. Here are some new cases of illiberty, many of which we may be unaware: 
  • Labor Rights: Mind liberation from working in the corporation, working for others
  • Personal Data Rights: rights and responsibilities conferred upon personal data, especially big health science data streams: personal genome data, pacemaker data, biometric data, quantified self-tracking gadgetry data, neuro-data streams
  • Citizen Scientist Rights: Non-institutional conduct of scientific research, biorights, biohacking, the biocitizen, community labs
  • Neural Rights: Neurodiversity, ASD (autism spectrum disorder), introversion, mind emancipation 
  • Economic Rights: Basic income guarantee (JET Vol 24, Issue 1), automation economy, post-scarcity economy
  • Augmentation Rights: Rights and responsibilities of augmented persons
Illiberty Studies – Research Agenda 
  1. Develop the illiberty concept drawing on: Derrida (democracy to come – inherent illiberty in the conceptualization of liberty), Rancière (emancipation), de Soto (responsibility-taking maturation), Dussel (liberty recast as liberation), Foucault (self-imposed disciplinary power) and Deleuze (micro-fascisms in one’s own thinking). 
  2. Identify the conceptual shifts and argument structure in the historical development of equality philosophies (decolonialism, feminism, queer theory, transgender, marriage equality, neurodiversity, polyamory) 
Illiberty Working Group

Sunday, February 23, 2014

Microbots: Automation Revolution Continues with Miniaturized Electronics

Ratcheting down technology’s price-performance curve, we have seen computers evolve from the size of a room to the PC, the smartphone, the credit-card-sized microcontroller, and the smartwatch, to the point now where they are almost invisible (Figure 1).

It is not likely to be the big robots of automotive factories that ‘take over the world’ or at least continue to take over labor, but rather microbots.

A recent trend in scientific advance has been microbots such as termite robots that build houses, nanomotors being controlled for the first time in living cells, Google’s electronic contact lenses, blood tests 2.0 (finally! more immediate and orders of magnitude cheaper, though still via physician hegemony), and personalized drone delivery services.

This all points to the ongoing miniaturization of computing, including new use cases and interesting philosophical and ethical problems that could arise when technology is invisible. We are generally aware of technology in our environment now, think of the UK’s ubiquitous surveillance cameras, or the trackability of web-surfing history, but a new conceptual adjustment may be required when technology is more pervasively integrated and invisible.

Figure 1:  Miniaturization Trend, Next Node: Microbots (Source)

Sunday, February 02, 2014

Turning Big Data into Smart Data

A key contemporary trend is big data - the creation and manipulation of large, complex data sets that must be stored and managed in the cloud because they are too unwieldy for local computers. Big data creation is currently on the order of zettabytes (1000⁷, or 10²¹, bytes) per year, in roughly equal amounts from four segments: individuals (photos, video), companies (transaction monitoring), governments (surveillance, e.g., the new Utah Data Center), and scientific research (astronomical observations).

Big data fanfare abounds: we continuously hear announcements that more data was created last year than in the entire history of humanity, and that data creation is on a two-year doubling cycle. Cheaper, faster storage has been the historical answer to the ever-growing capacity to generate data, but it is not necessarily the best solution. Much collected data is already thrown away unexamined (e.g., CCTV footage, real-time surgery video, and genome sequencing data). Much of the data that is stored remains unused and is never cleaned into a human-usable form, since doing so is costly and challenging (de-duplication being a primary example).
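Incidentally, the two headline claims can be checked against each other with a little geometric-series arithmetic; a minimal sketch (the figures are illustrative, not sourced):

```python
# Consistency check on two "big data" headlines:
#  (1) annual data creation doubles every two years;
#  (2) more data was created last year than in all prior history.
# Under (1), annual creation grows by r = 2 ** (1/2) ≈ 1.414 per year.

r = 2 ** 0.5  # per-year growth factor implied by a two-year doubling cycle

# If last year's creation is 1 unit, prior years were 1/r, 1/r**2, ...
# This infinite geometric series sums prior history to 1 / (r - 1).
prior_history = 1 / (r - 1)

print(f"prior history ≈ {prior_history:.2f}x last year's creation")
# Claim (2) would require prior_history < 1, i.e. a per-year growth
# factor above 2 -- a doubling time of under one year, not two.
```

On these assumptions the two announcements are not mutually consistent, which is part of why such fanfare deserves scrutiny.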

Turning big data into smart data means moving away from data fundamentalism - the idea that data must be collected, and that data collection is in itself an end rather than a means. Advancement comes from smart data, not more data: being able to cleanly extract and use the salient aspects of data (the ‘diffs,’ for example identifying the relevant genomic polymorphisms from a whole genome sequence), not just generating data and then discarding or mindlessly storing it.
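The ‘diffs’ idea can be illustrated with a toy sketch: store only the positions where a sample differs from a reference, rather than the whole sequence. (Real variant calling works on aligned sequencing reads, not naive string comparison; the sequences below are made up.)

```python
# Toy sketch of "smart data": keep only the salient diffs against a
# reference instead of storing the full record.

def diffs_against_reference(reference: str, sample: str):
    """Return (position, ref_base, sample_base) for each mismatch."""
    return [
        (i, r, s)
        for i, (r, s) in enumerate(zip(reference, sample))
        if r != s
    ]


reference = "GATTACAGATTACA"
sample    = "GATTACGGATTGCA"

variants = diffs_against_reference(reference, sample)
print(variants)  # [(6, 'A', 'G'), (11, 'A', 'G')]
```

Two tuples replace the fourteen-character sample; at genome scale, the same move is what makes a list of polymorphisms so much smaller than a whole genome sequence.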