Showing posts with label computation.

Monday, August 10, 2015

Smartgrid Life: Blockchain Cryptosustainability

The contemporary era of blockchains as an implementation mechanism for decentralization suggests a new overall conceptualization of life as being supported by any number of smartgrids. Distributed network grids are a familiar idea for resources such as water, electricity, health services, and Internet access, and the model might be extended to other resources, literally and conceptually. One example is on-demand microcoaching, such as guidance for playing a certain guitar solo with Piano++. Jeremy Rifkin, in Zero Marginal Cost Society and other books, outlines the grid paradigm, contemplating three smartnetworks: Internet communications grids, energy grids, and logistics grids. Evelyn Rodriguez adds another grid, local fresh produce, in the notion of Food as Distributed Commons, possibly in the form of a blockchain-based PopupFarm Grid DCO (distributed collaborative organization).

PopupFarm Grid
The idea of the PopupFarm Grid is that the 'urban farm grid' or 'fresh food grid' is like an energy grid. Just as the energy grid could become participative, with solar panel installers selling unused power back to the grid, urban fresh food could become a peer-based collaborative production. Anyone could purchase a hydroponics unit and make its capacity and outputs flexibly available on the urban food grid for community consumption. There could be an Uber-like mapping app to find the local hydroponics units with items fresh and maturing today, in an on-demand, real-time reservation-taking system. This could lead to better utilization of fresh produce, improved health, and local community sustainability, knowledge transfer, and self-sufficiency.

Consumers could own cryptoshares in virtual food cooperatives (like permacredits, the concept of a global affinity currency supporting local operations), and arrive and pay with community tokens. There could be dynamic supply-demand management and rebalancing at the community level. A series of smart contracts could onboard/offboard the diverse use cases and bridge time gaps. For example, land permitted for 2018 could already join the P2P network (similar to Lazooz drivers pre-earning tokens), to start earning community participation against future capacity. The paper route of the future could be kids learning and participating in container maintenance for neighborhood urban food units. There could be decentralized exchange with software from OpenBazaar or BitMarkets (decentralized versions of Craigslist). Another piece of the value chain could be idle Uber drivers (Lazooz drivers in the decentralized model), TaskRabbit gophers, and the like fulfilling delivery on demand.
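
As a concrete illustration of how such token-mediated participation might work, here is a minimal sketch in plain Python (not an actual smart-contract language); all names, rates, and rules are hypothetical placeholders:

```python
# Hypothetical sketch of a PopupFarm-style community ledger: participants
# pre-earn tokens against future growing capacity and spend them on produce.
# Illustrative only, not an actual smart-contract implementation.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    tokens: float = 0.0              # community token balance
    future_capacity_kg: float = 0.0  # pledged output not yet harvested

@dataclass
class FoodGridLedger:
    members: dict = field(default_factory=dict)

    def join(self, name, pledged_capacity_kg=0.0, token_rate=1.0):
        """Onboard a member; pledged future capacity pre-earns tokens
        (similar in spirit to drivers pre-earning tokens before launch)."""
        m = Member(name, tokens=pledged_capacity_kg * token_rate,
                   future_capacity_kg=pledged_capacity_kg)
        self.members[name] = m
        return m

    def reserve(self, buyer, grower, kg, price_per_kg):
        """Consumer reserves fresh produce; tokens move buyer -> grower."""
        cost = kg * price_per_kg
        b, g = self.members[buyer], self.members[grower]
        if b.tokens < cost:
            raise ValueError("insufficient community tokens")
        b.tokens -= cost
        g.tokens += cost
        g.future_capacity_kg -= kg

# Example: a unit permitted for a future season joins now and pre-earns tokens.
ledger = FoodGridLedger()
ledger.join("hydroponics-unit-7", pledged_capacity_kg=50)
ledger.join("resident-alice")
ledger.members["resident-alice"].tokens = 10  # e.g. purchased cryptoshares
ledger.reserve("resident-alice", "hydroponics-unit-7", kg=2, price_per_kg=1.5)
print(ledger.members["hydroponics-unit-7"].tokens)  # 53.0
```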

Cryptocitizen Mentality and Cryptosustainability Communities
The emerging cryptocitizen mentality is a new level of self-responsibility-taking: designing, iterating, and participating in community sustainability initiatives, including self-defining economic models. Cryptosustainability means sustainability in the low-footprint, mindful use of environmental resources, and also sustainability in human society organization models, where the idea is to build, prototype, and iterate new and innovative ways of doing things at various levels of scale. There can be a lot of energy when a community comes together at the beginning of a project, but sustaining that energy over years and across different phases of the project can be elusive. Blockchains are useful here because they can help build community trust and transparency by keeping information and record-keeping accessible. Blockchain-based cooperatives can build trust through the transparency and auditability of community operations: anyone can check the record anytime. This could be useful for distributed decentralized governance and the coordination of cooperative shared ownership. Blockchain smart contracts can also help facilitate ongoing community processes, for example with modules for voting and decision-making with liquid democracy (e.g., on-demand participative democracy) in proposal development, coordination, and voting; demand planning ahead of time regarding the amount and type of food wanted this year, with a prediction market such as Augur; and P2P dispute resolution and moderation with a PrecedentCoin module.
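
To make the liquid-democracy module more concrete, here is a minimal, hypothetical sketch of how delegated votes could be tallied: each member either votes directly or delegates to another member, and delegation chains are followed until they reach a direct vote.

```python
# Minimal liquid-democracy tally sketch (hypothetical, illustrative only):
# each participant either casts a direct vote or delegates to someone else;
# a delegation chain is followed until it reaches a direct vote.
def tally(direct_votes, delegations):
    """direct_votes: {member: choice}; delegations: {member: delegate}."""
    counts = {}
    for member in set(direct_votes) | set(delegations):
        seen = set()
        current = member
        # Follow the delegation chain, guarding against cycles.
        while current in delegations and current not in direct_votes:
            if current in seen:          # delegation cycle: vote is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        choice = direct_votes.get(current)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Example: carol delegates to bob, who votes directly.
print(tally({"alice": "greenhouse", "bob": "rooftop"}, {"carol": "bob"}))
# {'greenhouse': 1, 'rooftop': 2}
```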

Sunday, April 05, 2015

Philosophy of Big Data

Big data is growing as an area of information technology, service, and science, and so too is the need for its intellectual understanding and interpretation from a theoretical, philosophical, and societal perspective.

The ways that we conceptualize and act in the world are shifting now due to increasingly integrated big data flows from the continuously connected multi-device computing layer that is covering the world. This connected computing layer includes wearables, Internet-of-Things (IoT) sensors, smartphones, tablets, laptops, quantified self-tracking devices like the Fitbit, the connected car, the smarthome, and the smartcity.

Through the connected computing world, big data services are facilitating the development of more efficient organizing mechanisms for the conduct and coordination of our interaction with reality.

One effect is that our stance is shifting from being constrained to reactive response toward being able to engage in much more predictive action-taking in many areas of activity.

Another effect is that a more efficient world is being created, automating not just mechanical tasks, but also cognitive tasks. This paper discusses how a philosophy of big data might help in conceiving, creating, and transitioning to data-rich futures.

More Information: Presentation, Video, Paper

Sunday, March 21, 2010

Semiconductor roadmap updates

The working group documents and presentations are now available from the most recent International Technology Roadmap for Semiconductors (ITRS) 2009 Winter Conference held December 16, 2009 in Hsinchu City, Taiwan.

One of the most important updates from the ITRS 2009 meeting is a shift out in the time scale for the next expected computing nodes. There is a focus on both FLASH memory half-pitches and the usual DRAM half-pitches, as smaller nodes are expected to be achieved with FLASH before DRAM. Specifically for FLASH, 22 nm is estimated for 2013, 16 nm in 2016, and 11 nm in 2019. For DRAM, 32 nm is estimated for 2013, 22 nm in 2016, and 16 nm in 2019.
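
For context, each step in those node sequences corresponds roughly to the traditional ~0.7x linear shrink, i.e. about a doubling of transistor density per node. A quick illustrative check of the quoted figures (assumption: simple half-pitch ratios as a proxy for scaling):

```python
# Rough check of the ITRS node sequences quoted above, assuming the classic
# ~0.7x linear shrink per node (~2x density). Illustrative arithmetic only.
flash = [22, 16, 11]   # nm half-pitch, 2013 / 2016 / 2019
dram  = [32, 22, 16]
for name, nodes in [("FLASH", flash), ("DRAM", dram)]:
    for a, b in zip(nodes, nodes[1:]):
        shrink = b / a
        density_gain = 1 / shrink**2
        print(f"{name}: {a} nm -> {b} nm, linear shrink {shrink:.2f}, "
              f"~{density_gain:.1f}x transistor density")
```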

An important architectural shift is underway for packing more transistors onto chips: moving from planar to multidimensional architectures. Another big industry focus is implementing 450 mm wafers for chip manufacturing, up from the current 300 mm standard (Figure 1).

Figure 1: One of the world's first 450 mm wafers

In lithography, a key bottleneck area, the two main technologies that will probably be in use for the current and next few nodes are Extreme Ultraviolet Lithography (EUV) and 193 nm immersion lithography with half-pitch double patterning. EUV is less expensive. For later nodes (22 nm, 16 nm, and 11 nm), EUV and double patterning, together with ML2 (maskless lithography), imprint lithography, directed self-assembly, and interference lithography, may be used.

An important challenge is the top-down (traditional engineered electronics) meets bottom-up (evolved molecular electronics) issue of how nodes 15 nm and smaller will be designed given quantum mechanical effects at that scale. The Emerging Research Devices (ERD) and Emerging Research Materials (ERM) working groups presented some innovative solutions; however, the majority of the roadmap focus is on the nearer term, the next couple of nodes.

Sunday, August 02, 2009

Bio-design automation and synbio tools

The ability to write DNA could have an even greater impact than the ability to read it. Synthetic biologists are developing standardized methodologies and tools to engineer biology into new and improved forms, and presented their progress at the first-of-its-kind Bio-Design Automation workshop (agenda, proceedings) in San Francisco, CA on July 27, 2009, co-located with the computing industry’s annual Design Automation Conference. As with many areas of technological advancement, the requisite focus is on tools, tools, tools! (A PDF of this article is available here.)


Experimental evidence has helped to solidify the mindset that biology is an engineering substrate like any other, and the work is now centered on creating standardized tools that are useful and reliable in an experimental setting. The metaphor is very much that of computing: just as most contemporary software developers work at high levels of abstraction and need not concern themselves with the 1s and 0s of machine language, in the future, synthetic biology programmers would not need to work directly with the As, Cs, Gs, and Ts of DNA or understand the architecture of promoters, terminators, open reading frames, and such. However, with synthetic biology being in its early stages, laying the groundwork to define and assemble these abstraction layers is the task currently at hand.

Status of DNA synthesis
At present, the DNA synthesis process is relatively unautomated, unstandardized, and expensive ($0.50-$1.00 per base pair (bp)); at those prices it would cost $1.5-3 billion to synthesize a full human genome. Synthesized DNA, which can be ordered from numerous contract labs such as DNA 2.0 in Menlo Park, CA and Tech Dragon in Hong Kong, has been improving at a Moore's Law-like pace (actually faster: Carlson curves double at about 2x/yr vs. 1.5x/yr for Moore's Law), but is still slow compared to what is needed. Right now, short oligos (oligonucleotide sequences up to 200 bp) can be reliably synthesized, but a low-cost, repeatable process for genes and genomes extending into the millions of bp is needed. Further, design capability lags synthesis capability, being about 400-800-fold less capable and allowing only 10,000-20,000 bp systems to be fully forward-engineered at present.
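
That cost estimate follows directly from the per-base price and a haploid human genome of roughly 3 billion bp; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope cost of synthesizing a full human genome at the
# quoted per-base-pair prices (assumes ~3 billion bp per haploid genome).
genome_bp = 3e9
for price_per_bp in (0.50, 1.00):
    total = genome_bp * price_per_bp
    print(f"${price_per_bp:.2f}/bp -> ${total / 1e9:.1f} billion")
# $0.50/bp -> $1.5 billion
# $1.00/bp -> $3.0 billion
```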

So far, practitioners have organized the design and construction of DNA into four hierarchical tiers: DNA, parts, devices and systems. The status is that the first two tiers, DNA and parts (simple modules such as toggle switches and oscillators), are starting to be consistently identified, characterized and produced. This is allowing more of an upstream focus on the next two tiers, complex devices and systems, and the methodologies that are needed to assemble components together into large-scale structures, for example those containing 10 million bp of DNA.
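
As a purely illustrative sketch (hypothetical names, not any particular group's toolchain), the four-tier hierarchy can be thought of as nested composition, with sequence length accumulating from parts up to devices and systems:

```python
# Hypothetical sketch of the four-tier design hierarchy described above:
# DNA sequences compose into parts, parts into devices, devices into systems.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Part:                        # e.g. a toggle switch or oscillator
    name: str
    sequence: str                  # raw DNA tier
    def length_bp(self): return len(self.sequence)

@dataclass
class Device:
    name: str
    parts: List[Part] = field(default_factory=list)
    def length_bp(self): return sum(p.length_bp() for p in self.parts)

@dataclass
class System:
    name: str
    devices: List[Device] = field(default_factory=list)
    def length_bp(self): return sum(d.length_bp() for d in self.devices)

switch = Part("toggle_switch", "ATG" * 400)   # placeholder sequences
osc = Part("oscillator", "GCA" * 600)
controller = Device("controller", [switch, osc])
print(System("demo_system", [controller]).length_bp())  # 3000
```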

Standardizing the manipulation of biology
A variety of applied research techniques for standardizing, simulating, predicting, modulating and controlling biology with computational chemistry, quantitative modeling, languages and software tools are under development and were presented at the workshop.

Models and algorithms
In the models and algorithms session, there were some examples of the use of biochemical reactions for computation and optimization, performing arithmetic computation essentially the same way a digital computer would. Basic mathematical models such as the CME (Chemical Master Equation) and SSA (Stochastic Simulation Algorithm) were applied and extended to model, predict and optimize pathways and describe and design networks of reactions.
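
For readers unfamiliar with the SSA, here is a minimal Gillespie-style simulation of a toy birth-death reaction network, in the standard textbook formulation (illustrative only, not code presented at the workshop):

```python
# Minimal Gillespie SSA for a toy birth-death network: 0 -> X (rate k1),
# X -> 0 (rate k2 * X). Standard textbook algorithm, illustrative only.
import random

def gillespie(k1=10.0, k2=0.1, x0=0, t_end=50.0, seed=1):
    random.seed(seed)
    t, x, trajectory = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = k1, k2 * x                 # reaction propensities
        a_total = a1 + a2
        if a_total == 0:
            break
        t += random.expovariate(a_total)    # time to next reaction
        if random.random() < a1 / a_total:  # choose which reaction fires
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = gillespie()
print(traj[-1])  # final (time, copy number); should hover near k1/k2 = 100
```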

Experimental biology
The experimental biology session considered some potential applications of synthetic biology. The first was the automated design of synthetic ribosome binding sites to make protein production faster or slower (with the finding that the translation rate can be predicted if the Gibbs free energy (delta G) can be predicted). Second, an in-cell disease protection mechanism was presented in which synthetic genetic controllers were used to prevent the lysis that normally occurs when the lysis-lysogeny switch is turned on in the disease process (lysogeny is the no-harm state and lysis is the death state).
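
The intuition behind that prediction is Boltzmann-like: translation initiation rate falls off exponentially as ribosome binding becomes less favorable. A hedged sketch of that relationship (the constants here are illustrative placeholders, not the calibrated parameters of any published model):

```python
# Illustrative Boltzmann-style relation between ribosome binding free energy
# and translation initiation rate: rate ~ exp(-beta * delta_G). The constants
# below are placeholders, not calibrated RBS design parameters.
import math

def relative_translation_rate(delta_g_kcal_per_mol, beta=0.45, reference_rate=1.0):
    """More negative delta_G (tighter ribosome binding) -> faster initiation."""
    return reference_rate * math.exp(-beta * delta_g_kcal_per_mol)

for dg in (-10.0, -5.0, 0.0, 5.0):
    rate = relative_translation_rate(dg)
    print(f"delta_G = {dg:+.1f} kcal/mol -> relative rate {rate:.2f}")
```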

Tools and parts
In the tools and parts session, several software-based frameworks and design tools were presented, many of which are listed in the software tools section below.

Languages and standardization
The languages and standardization session had discussions of language standardization projects such as the BioStream language, PoBol (Provisional BioBrick Language) and the BioBrick Open Language (BOL).

Software tools: a SynBio CrunchUp
Several rigorous computer-aided design and validation software tools and platforms are emerging for applied synthetic biology, many of which are freely available and open-source.

  • Clotho: An interoperable design framework supporting symbol, data model and data structure standardization; a toolset designed in a platform-based paradigm to consolidate existing synthetic biology tools into one working, integrated toolbox
  • SynBioSS - Synthetic Biology Software Suite: A computer-aided synthetic biology tool for the design of synthetic gene regulatory networks; computational synthetic biology
  • RBS Calculator: A biological engineering tool that predicts the translation initiation rate of a protein in bacteria; it may be used in Reverse Engineering or Forward Engineering modes
  • SeEd - Sequence Editor (work in progress): A tool for designing coding sequence alterations, a system conceptually built around constraints instead of sequences
  • Cellucidate: A web-based workspace for investigating the causal and dynamic properties of biological systems; a framework for modeling modular DNA parts for the predictable design of synthetic systems
  • iBioSim: A design automation software for analyzing biochemical reaction network models including genetic circuits, models representing metabolic networks, cell-signaling pathways, and other biological and chemical systems
  • GenoCAD: An experimental tool for building and verifying complex genetic constructs derived from a library of standard genetic parts
  • TinkerCell: A computer-aided design software for synthetic biology

Future of BioCAD
One of the most encouraging aspects of the current evolution of synthetic biology is the integration the field is forging with other disciplines, particularly electronics design and manufacture, DNA nanotechnology, and bioinformatics.

Scientists are meticulously applying engineering principles to synthetic biology, while realizing that novel innovations are also required since there are issues specific to engineering biological systems. Some of these technical issues include device characterization, impedance matching, rules of composition, noise, cellular context, environmental conditions, rational design vs. directed evolution, persistence, mutations, crosstalk, cell death, chemical diffusion, motility, and incomplete biological models.

As happened in computing, and is happening now in biology, the broader benefit of humanity having the ability to develop and standardize abstraction layers in any field can be envisioned. Clearly there will be ongoing efforts to manipulate and create all manner of biology and matter at ever finer granularity. Some of the subsequent areas where standards and abstraction hierarchies could be useful, though not immediately, are the next generations of computing and communications, molecular nanotechnology (atomically precise construction of matter from the bottom up), climate, weather, and atmosphere management, planet terraforming, and space colony construction.

(Image credits: www.3dscience.com, www.biodesignautomation.org)

Sunday, July 05, 2009

Next-gen computing for terabase transfer

The single biggest challenge presently facing humanity is the new era of ICT (information and communication technology) required to advance the progress of science and technology. This constitutes more of a grand challenge than disease, poverty, climate change, etc. because solutions are not immediately clear, and are likely to be more technical than political in nature. Raw capacity in information processing and transfer is required, and also the software to drive these processes at higher levels of abstraction to make the information usable and meaningful. The computing and communications industries have been focused on incremental Moore's Law extensions rather than new paradigms, and do not appear to be cognizant of the current needs of science, particularly their magnitude.

Computational era of science
One trigger for a new ICT era is the shift in the way that science is conducted. Traditional trial-and-error lab experimentation has been supplemented with informatics and computational science for characterizing, modeling, simulating, predicting, and designing. The life sciences are the most prominent area of science requiring ICT advances, for a variety of purposes including biological process characterization and simulation. Genomics is possibly the field with the most ICT urgency: genomic data is growing at roughly 10x/year vs. Moore's Law at 1.5x/year, for example. However, nearly every field of science has progressed to large data sets and computational models.
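
The gap implied by those growth rates compounds quickly; a simple illustrative comparison using the quoted 10x/year and 1.5x/year figures:

```python
# Compounding gap between genomic data growth (~10x/year) and Moore's Law
# (~1.5x/year), using the rates quoted above. Illustrative arithmetic only.
for years in (1, 3, 5):
    data_growth = 10 ** years
    compute_growth = 1.5 ** years
    print(f"after {years} yr: data x{data_growth:,.0f}, "
          f"compute x{compute_growth:.1f}, gap x{data_growth / compute_growth:,.0f}")
# after 5 years the gap is roughly 13,000-fold
```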

Saturday, January 22, 2005

Which sciences are important now and why?

This is a quick and dirty look at which sciences are most important right now. These sciences are: physics, astronomy, nanotechnology, biotechnology, semiconductors, computation and information theory.

The important areas of physics are particle physics at accelerator and detector labs, and anything related to quantum theory, unified field theory, and many-worlds theory. These areas will help us to understand more about, and control the properties of, the smallest pieces of matter, which are not atoms or quarks but strings or other entities.

Astronomy is important because we need to find out more about the rest of the universe and how it works, including black holes, dark energy, and dark matter, and how to harness them; in short, understanding the physics of that which we do not currently understand. This is important for many reasons, including our eventual need to move off the Earth (4.5 billion years to the Earth's engulfment by the sun) and out of the galaxy (10 billion years until Andromeda collides with the Milky Way).

Nanotechnology is important because it will allow us to build and create objects from the ground up, truly mastering our world by creating matter. In addition there are interesting novel properties of atomic level matter which may offer better knowledge of quantum behavior.

Biotechnology is important for being able to improve and redesign ourselves as humans, in the near term getting to baseline by eradicating disease and then expanding from baseline into new enhanced capabilities and forms. As we look toward emigrating to other worlds and space colonies, biological transformation will be key.

Semiconductors and computation are important because we need to reach successive tiers of computing capability to further master knowledge about how our universe works, especially regarding large phenomena like galaxies and small phenomena like brains. Software is a harder problem than hardware, and with greater processing power, many results can be achieved by brute-force computation rather than waiting for and relying on more complicated, teamwork-dependent software.

Information theory is increasingly critical as a new conceptual model by which the universe is being explained. Seth Lloyd is one of the foremost authors on this. The idea is that there are many forms of information storage and computation in the universe, including life (plants, animals, etc.), which stores DNA and computes from it; even rocks reacting to their environment are said to process information. Black holes also take in information, process it, and return output, but this phenomenon is not yet well tested or understood.

These sciences share the theme of helping us answer the biggest outstanding questions the fastest. Who are we? Where did we come from? What is the nature of this universe? Are there other universes? What are the physical laws that govern all matter and phenomena of this universe?