
Sunday, November 09, 2014

Bitcoin 1.0, 2.0, and 3.0: Currency, Contracts, and Applications, beyond Financial Markets

Bitcoin 1.0 is currency - the deployment of cryptocurrencies in applications related to cash, such as currency transfer, remittance, and digital payment systems. Bitcoin 2.0 is contracts - the whole slate of economic, market, and financial applications of the blockchain that are more extensive than simple cash transactions: stocks, bonds, futures, loans, mortgages, titles, smart property, and smart contracts. Bitcoin 3.0 is blockchain applications beyond currency, finance, and markets, particularly in the areas of government, health, science, literacy, culture, and art. 
Bitcoin and blockchain technology are much more than a digital currency; the blockchain is an information technology, potentially on the order of the Internet (‘the next Internet’), but even more pervasive and more quickly configurable. 
Prevalence of Decentralized Models 
Even if the currently developing models of Bitcoin and blockchain technology are not the final paradigm (they have many problematic flaws), the bigger trend, decentralized models as a class, could have a pronounced impact. If not the blockchain industry, something else would probably fill this role, and in fact there will probably be other complements to the blockchain industry anyway. It is just that the blockchain industry is one of the first identifiable large-scale implementations of decentralization models, conceived and executed at a new and more complex level of human activity.

Decentralized models have the potential to reorganize all manner of human activity, and quickly, because they are trustless: the friction of the search and trust-establishment process required in previous models of human interaction is eliminated. This could mean greatly accelerated rates and levels of activity, on a much greater humanity-level scale. The blockchain (a decentralized network coordination technology) could emerge as a fundamental infrastructure element in the model for scaling humanity to its next orders-of-magnitude-larger levels of progress.

Sunday, September 29, 2013

Digital Literacy: Learning Newtech for its Own Sake

Digital literacy is a new capability and feature of our modern world: consciously or unconsciously, there is now a category in our lives called ‘learning newtech.’

There are two levels: first, the basic skill acquisition and conceptual understanding required to learn a newtech; and second, the psychology of the digital learning curve, which includes evaluating and justifying the time investment and utility of learning au courant digital literacy tools with the appreciation that they will be almost immediately obsolescent.

We might complain about the effort required to master contemporary areas of digital literacy like mobile app development, the statistical computing language R, and JavaScript tools such as Node.js and jQuery. At the same time, we forget our many existing digital proficiencies, and the time invested to acquire them: previous generations of digital tools like file sharing, photo uploading, Excel macros, Microsoft Word, Prezi presentations, file archival, and system restoration.

It is arguable that we should devote explicit effort to digital literacy, and further that digital literacy for its own sake could also be an objective. Taking Stanford University as an example, all incoming students must take a software programming class; pedagogically the language requirement is still in place, but it has shifted from French, Spanish, or German to C++, Java, or Python.

Sunday, November 18, 2012

Supercomputing Increments Towards the Exaflop Era

The November 2012 biannual list of the world’s fastest supercomputers shows the winner incrementally improving over the last measure. The Titan (a Cray XK7, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x) is leading with 17.6 petaflops of maximum processing power. This was only an 8% increase in maximum processing speed, compared with other recent increases of 30-60%, but a continued step forward in computing power.

Supercomputers are used for a variety of complicated modeling problems where simultaneous processing is helpful, such as weather forecasting, quantum physics, and predicting degradation in nuclear weapons arsenals.

Figure 1. World's Fastest Supercomputers. (Source)
Increasingly, supercomputing is being seen as just one category of big data computing, along with cloud-based server farms running simple algorithms over large data corpora (for example, Google’s cat image recognition project), crowd-based distributed computing networks (e.g., protein folding with Folding@home, at 5 petaflops of computing power), and crowdsourced labor networks (e.g., Mechanical Turk, oDesk, CrowdFlower - theoretically comprising 7 billion Turing test-passing online agents).
 

Sunday, August 19, 2012

Supercomputing: 16 petaflops, schmetaflops?

Supercomputing capability continues to grow exponentially – the world’s best machine (IBM’s Sequoia, a BlueGene/Q installed in the U.S. at LLNL) currently has 16 petaflops of raw compute capability.

Figure 1. Data Source: Top 500 Supercomputing Sites


As shown in Figure 1, the curve has been popping – up from 2 to 16 petaflops in just two years! However, for all its massive scale, supercomputing remains a linear endeavor. While the average contemporary supercomputer has much greater than human-level capability in raw compute power, it cannot think, which is to say pattern-match and respond in new situations, and solve general rather than special-purpose problems.
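
As a rough sanity check on that growth, the implied doubling time can be computed from the figures above (a minimal sketch in Python, assuming the 2-to-16-petaflop jump took roughly 24 months):

```python
import math

# Implied doubling time for the #1 machine, assuming ~2 petaflops -> ~16 petaflops
# over roughly two years (24 months), per the figures in the post.
start_pflops, end_pflops, months = 2.0, 16.0, 24.0

doublings = math.log2(end_pflops / start_pflops)   # 3 doublings
months_per_doubling = months / doublings           # ~8 months per doubling

print(f"{doublings:.0f} doublings, ~{months_per_doubling:.0f} months per doubling")
```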

For the future of intelligence and cognitive computing, the three-way horse race continues between enhancing biological human cognition, reverse-engineering and simulating the human brain in software, and hybrids of these two.





Sunday, January 01, 2012

Top 10 technology trends for 2012

1. Mobile is the platform: smartphone apps & device proliferation
2. Cloud computing: big data era, hadoop, noSQL, machine learning
3. Gamification of behavior and content generation
4. Mobile payments and incentives (e.g., Amex meets FourSquare)
5. Life by Siri, Skyvi, etc. intelligent software assistants
6. Happiness 2.0 and social intelligence: mindfulness, calming tech, and empathy building
7. Social graph prominence in search (e.g., music, games, news, shopping)
8. Mobile health and quantified self-tracking devices: towards a continuous personal information climate
9. Analytics, data mining, algorithms, automation, robotics
10. Cloud culture: life imitates computing (e.g., Occupy, Arab Spring)

Further out - Gesture-based computing, Home automation IF sensors, WiFi thermostat, Enterprise social networks

Is it ever coming? - Cure for the common cold, Driverless cars


Looking back at Predictions for 2011: right or wrong?

  • Right: Mobile is the platform, Device proliferation, Big data explosion, Group shopping
  • On the cusp: Crowdsourced labor, Quantified self tracking gadgets and apps, Connected media and on-demand streaming video
  • Not yet: Sentiment engines, 3-D printing, Real-time economics

Sunday, June 05, 2011

Time malleability

There are differences between the conceptualization of time in computing systems and the human conceptualization of time. At the most basic level in computing, time is synonymous with performance and speed. At the next level, there are “more kinds of time” in computing than in the human and physics perspective, where time is primarily continuous. In computing, time may be discrete, synchronous or asynchronous, absolute or relative, or not elapsing at all.
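
A small sketch of these different kinds of time as they appear in an ordinary program (Python here; the logical-clock counter is only an illustration of discrete, event-driven time, not a reference to any particular system):

```python
import time
from itertools import count

# Absolute (wall-clock) time: seconds since the Unix epoch; it can jump if the clock is reset.
wall = time.time()

# Relative (monotonic) time: only differences are meaningful; it never goes backwards.
t0 = time.monotonic()
time.sleep(0.1)
elapsed = time.monotonic() - t0

# Discrete, logical time: a counter that only "ticks" when an event occurs,
# in the spirit of the logical clocks used to order events in distributed systems.
logical_clock = count()
tick = next(logical_clock)

print(f"wall={wall:.0f}s  elapsed={elapsed:.3f}s  logical tick={tick}")
```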

Concurrency trend in contemporary computing
Computing is now making time even more malleable as a side effect of the quest to develop concurrent systems (multi-cores and multi-processors, and cluster, grid, and cloud computing), in at least four ways. One technique is using functional languages such as Haskell, LISP, Scheme, Clojure, and F#, where sets of items and processes may not need to be temporally ordered. A second method is enhancing existing computer languages with new constructs like ‘happens-before’ relations and concurrency-control mechanisms like ‘lock-free queues’ to manage multiple threads of code operating simultaneously. A third means is creating new models with less time dependency, like MapReduce, which automatically parallelizes large data problems by applying a function independently across the data (‘map’) and then aggregating the intermediate results (‘reduce’). A fourth technique is extending alternative models such as clock-free methods and asynchronous computing, and restructuring problems to be distributed for more expedient resolution.
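
As an illustration of the third technique, here is a minimal map/reduce-style word count using only Python's standard library (a sketch of the pattern, not any particular MapReduce framework): the map steps are independent and need no temporal ordering, and the reduce merges their results in any order.

```python
from concurrent.futures import ProcessPoolExecutor
from collections import Counter
from functools import reduce

documents = [
    "time is malleable in computing",
    "computing makes time malleable",
    "map and reduce need no total ordering in time",
]

def map_phase(doc):
    """Map: process each document independently - no temporal ordering needed."""
    return Counter(doc.split())

def reduce_phase(a, b):
    """Reduce: merge intermediate counts; the merge order does not matter."""
    return a + b

if __name__ == "__main__":
    # The map calls can run concurrently because no map depends on another's result.
    with ProcessPoolExecutor() as pool:
        partial_counts = list(pool.map(map_phase, documents))
    totals = reduce(reduce_phase, partial_counts, Counter())
    print(totals.most_common(3))
```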

Building intelligent systems
The building of intelligent systems is a special problem in computing. There are many approaches, ranging from attempts to model human thinking, including the conceptualization of time, to attempts to build intelligent systems from scratch. All models might benefit from incorporating biological time models such as temporal synchrony, the notion of a high-level background synchronization of processes.

Conclusion
Computers are already great time-savers. Computing approaches to contemporary problems like concurrency and building intelligent systems are increasing the ability to manipulate time. Ultimately, humans may be able to greatly extend the control of time, for all intents and purposes creating more time.

From “The conceptualization of time in computing”

Sunday, January 02, 2011

Top 10 technology trends for 2011

1. Mobile is the platform; mobile payment ubiquity could be next
2. Device proliferation continues; tablets, e-book readers, etc.
3. Connected media and on-demand streaming video, IPTV, live event interaction
4. Social shopping: group purchasing, commenting, recommendation, LBS
5. Sentiment engines (ex: Pulse of the Nation, We Feel Fine) are ripe for being applied much more broadly to other keyword domains; sentiment prediction
6. Big data era explosion: machine learning, cloud computing, clusters, supercomputing
7. Labor-as-a-service: microlabor, on-demand labor, global task fulfillment
8. Quantified self tracking gadgets and apps (ex: WiThings scale, myZeo, BodyMetRx, medication reminder, nutrition intake, workout coordination, DIYgenomics, etc.)
9. Personal manufacturing, digital fabrication, 3D printing ("atoms are the new bits"); slow but important niche growth
10. Real-time economics: blippy, crowdsourced forecasting, stock market prediction

(Review predictions for 2010)

Sunday, April 25, 2010

Supercomputing and human intelligence

As of November 2009, the world’s fastest supercomputer was the Cray Jaguar, located at the U.S. Department of Energy’s Oak Ridge National Laboratory and operating at 1.8 petaflops (1.8 x 10^15 flops). Unlike human brain capacity, supercomputing capacity has been growing exponentially. In June 2005, the world’s fastest supercomputer was the IBM Blue Gene/L at Los Alamos National Laboratory, running at 0.1 petaflops. In less than five years, the Jaguar represents an order of magnitude increase, the latest culmination of capacity doublings every few years. (Figure 1)

Figure 1. Growth in supercomputer power
Source: Ray Kurzweil with modifications

The next supercomputing node, one more order of magnitude at 10^16 flops, is expected in 2011 with the Pleiades, Blue Waters, or Japanese RIKEN systems. 10^16 flops would possibly allow the functional simulation of the human brain.

Clearly, there are many critical differences between the human brain and supercomputers. Supercomputers tend to be modular in architecture and address specific problems as opposed to having the general problem solving capabilities of the human brain. Having equal to or greater than human-level raw computing power in a machine does not necessarily confer the ability to compute as a human. Some estimates of the raw computational power of the human brain range between 10^13 and 10^16 operations per second. This would indicate that supercomputing power is already on the order of estimated human brain capacity, but intelligent or human-simulating machines do not yet exist.
The digital comparison of raw computational capability may not be the right measure for understanding the complexity of the brain. Signal transmission is different in biological systems, with a variety of parameters such as context and continuum determining the quality and quantity of signals.
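
To make the comparison concrete, a back-of-the-envelope calculation using the figures cited in this post (a sketch; the brain estimates are rough order-of-magnitude numbers, not measurements):

```python
# Back-of-the-envelope comparison of supercomputer throughput vs. rough
# estimates of the brain's raw computational capacity (figures from the post).
jaguar_flops = 1.8e15                 # Cray Jaguar, Nov 2009 (1.8 petaflops)
brain_low, brain_high = 1e13, 1e16    # widely varying estimates, ops/sec

print(f"vs. low brain estimate:  {jaguar_flops / brain_low:,.0f}x the estimate")
print(f"vs. high brain estimate: {jaguar_flops / brain_high:.0%} of the estimate")
# ~180x the low estimate, but only ~18% of the high estimate -
# 'on the order of' human capacity, depending on which estimate one takes.
```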

Sunday, January 03, 2010

Top 10 technology trends for 2010

Some of the freshest ideas in 2009 were botnet futures (Daemon, Daniel Suarez), a variety of neuro scanning applications (The Neuro Revolution, Zack Lynch), a systems approach to Earth (Whole Earth Discipline, Stewart Brand), accelerating economic development through charter cities (Charter Cities, Paul Romer), automatic markets for fungible resource allocation (Broader Perspective, Melanie Swan), and the notion that the next generation of technology needed to solve intractable problems could be non-human-understandable and come from sampling the computational universe of all possible technologies (Conversation on the Singularity, Stephen Wolfram).

Heading into a brand new decade, there are several exciting technology areas to watch. Many are on exponential improvement curves, although from any viewpoint on an exponential curve, things may look flat. Most of this blog’s big predictions for 2009 came true. Here’s what could happen in the next year or so:

1. Closer to $100 whole human genome
Third-generation DNA sequencing company Pacific Biosciences estimates that it is still on track for a late 2010 release of single-molecule real-time sequencing technology that could eventually lead to less-than-$100 whole human genome sequencing.

2. Mobile continues to be the platform
There will likely be broader launch and adoption of addictive location-based services (LBS) like FourSquare, Gowalla and Loopt, together with social networking, gaming, and video applications for the mobile platform. Continued trajectory of smartphone purchases (one in four in the U.S.). iPhone and Android app downloads double again. Gaming expands on mobile and on the console platform with Avatar and possibly other 3-D console games. Internet-delivered content continues across all platforms.

3. 22nm computing node confirmed for 2011
Intel possibly confirming and providing more details about the 22nm Ivy Bridge chip planned for commercial release in the second half of 2011. The September 2010 Intel Developer Forum may feature other interesting tidbits regarding plans for 3-D architectures and programmable matter that could keep computing on Moore’s Law curves.

4. Supercomputers reach 15% human capacity
Supercomputing capacity doublings have been occurring every few years and are likely to continue. As of November 2009, the world’s fastest supercomputer was the Cray Jaguar, running at 1.8 petaflops (1.8 x 10^15 flops), approximately 10% of the estimated compute capacity of a human.

5. Confirmation of synthetic biology fuel launch for 2011

Pilot plants are running, and the commercial launch of the first killer app of synthetic biology, synthetic fuel, could be confirmed for 2011. Sapphire Energy and Synthetic Genomics are generating petroleum from algal fuel; LS9, petroleum from microbes; Amyris Biotechnologies, ethanol; and Gevo, biobutanol.

6. Smart grid and smart meter deployment
In energy, more utilities are moving to deploy internal smart grid network management infrastructure and starting to replace consumer premises equipment (CPE) with advanced metering infrastructure (AMI) for automated meter reading and customer data access. Dozens of efforts are underway in the U.S. (Figure 1).



7. Increased choice in personal transportation
More electric vehicle offerings, greater launch of alternative fuels, a potential Tesla IPO announcement, and more widespread car share programs (e.g., City CarShare, Gettaround).

8. Real-time internet search dominates
More applications allow real-time search functionality through content aggregation, standards, and more granular web searches. Search could be 40% real-time, 40% location-based, 20% other.

9. Advent of health advisors and wellness coaches
Hints of personalized medicine start to arrive with the unification of health data streams (e.g., genomics, biomarkers, family and health history, behavior, and environment) into personalized health management plans. Early use of health monitoring devices (e.g., FitBit, DirectLife) as a prelude to biomonitors.

10. WiMax roll-out continues
Clear adds more markets to its current 26. Increasing importance of integrated data stream management (video, voice, etc.) on fixed and mobile platforms.

Probably not happening in 2010 but would be nice…
Still waiting for significant progress regarding…
  • 4G/LTE roll-out
  • Driverless cars, on-demand personal rapid transport systems
  • Ubiquitous sensor networks
  • OLEDs

Sunday, August 02, 2009

Bio-design automation and synbio tools

The ability to write DNA could have an even greater impact than the ability to read it. Synthetic biologists are developing standardized methodologies and tools to engineer biology into new and improved forms, and presented their progress at the first-of-its-kind Bio-Design Automation workshop (agenda, proceedings) in San Francisco, CA on July 27, 2009, co-located with the computing industry’s annual Design Automation Conference. As with many areas of technological advancement, the requisite focus is on tools, tools, tools! (A PDF of this article is available here.)


Experimental evidence has helped to solidify the mindset that biology is an engineering substrate like any other, and the work is now centered on creating standardized tools that are useful and reliable in an experimental setting. The metaphor is very much that of computing: just as most contemporary software developers work at high levels of abstraction and need not concern themselves with the 1s and 0s of machine language, in the future, synthetic biology programmers would not need to work directly with the As, Cs, Gs and Ts of DNA or understand the architecture of promoters, terminators, open reading frames and such. However, with synthetic biology being in its early stages, the groundwork to define and assemble these abstraction layers is the task currently at hand.

Status of DNA synthesis
At present, the DNA synthesis process is relatively unautomated, unstandardized and expensive ($0.50-$1.00 per base pair (bp)); it would cost $1.5-3 billion to synthesize a full human genome. Synthesized DNA, which can be ordered from numerous contract labs such as DNA 2.0 in Menlo Park, CA and Tech Dragon in Hong Kong, has been following Moore’s Law (actually faster than Moore’s Law: the Carlson Curves are doubling at 2x/yr vs. 1.5x/yr), but is still slow compared to what is needed. Right now, short oligos (oligonucleotide sequences up to 200 bp) can be reliably synthesized, but a low-cost, repeatable basis for genes and genomes extending into the millions of bp is needed. Further, design capability lags synthesis capability, being about 400-800-fold less capable and allowing only 10,000-20,000 bp systems to be fully forward-engineered at present.
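
A quick sanity check of the genome-scale figure above, using the post's own per-base-pair prices (a minimal sketch):

```python
# Cost to synthesize a full human genome at the quoted per-base prices (post's figures).
genome_bp = 3e9                  # ~3 billion base pairs in a human genome
for price_per_bp in (0.50, 1.00):
    total = genome_bp * price_per_bp
    print(f"${price_per_bp:.2f}/bp -> ${total / 1e9:.1f} billion")
# -> $1.5 billion to $3.0 billion, matching the estimate in the text.
```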

So far, practitioners have organized the design and construction of DNA into four hierarchical tiers: DNA, parts, devices and systems. The status is that the first two tiers, DNA and parts (simple modules such as toggle switches and oscillators), are starting to be consistently identified, characterized and produced. This is allowing more of an upstream focus on the next two tiers, complex devices and systems, and the methodologies that are needed to assemble components together into large-scale structures, for example those containing 10 million bp of DNA.

Standardizing the manipulation of biology
A variety of applied research techniques for standardizing, simulating, predicting, modulating and controlling biology with computational chemistry, quantitative modeling, languages and software tools are under development and were presented at the workshop.

Models and algorithms
In the models and algorithms session, there were some examples of the use of biochemical reactions for computation and optimization, performing arithmetic computation essentially the same way a digital computer would. Basic mathematical models such as the CME (Chemical Master Equation) and SSA (Stochastic Simulation Algorithm) were applied and extended to model, predict and optimize pathways and describe and design networks of reactions.
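
For readers unfamiliar with the SSA, here is a minimal Gillespie-style stochastic simulation of a single degradation reaction (a sketch for illustration only; the reaction and rate constant are invented, and real tools handle networks of many coupled reactions):

```python
import random

# Minimal Gillespie SSA for a single reaction, A -> (degraded), with rate constant k.
def ssa_decay(a0=100, k=0.1, t_end=50.0, seed=42):
    random.seed(seed)
    t, a = 0.0, a0
    trajectory = [(t, a)]
    while a > 0:
        propensity = k * a                      # total propensity of the reaction
        t += random.expovariate(propensity)     # exponentially distributed waiting time
        if t > t_end:
            break
        a -= 1                                  # fire the reaction: one molecule of A degrades
        trajectory.append((t, a))
    return trajectory

if __name__ == "__main__":
    traj = ssa_decay()
    t_final, a_final = traj[-1]
    print(f"{len(traj) - 1} reaction events; {a_final} molecules remain at t = {t_final:.1f}")
```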

Experimental biology
The experimental biology session considered some potential applications of synthetic biology. First, the automated design of synthetic ribosome binding sites to make protein production faster or slower was presented (finding that the translation rate can be predicted if the Gibbs free energy (delta G) can be predicted). Second, an in-cell disease protection mechanism was presented in which synthetic genetic controllers were used to prevent the lysis normally occurring when the lysis-lysogeny switch is turned on in the disease process (lysogeny is the no-harm state and lysis is the death state).
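
The ribosome-binding-site relationship referred to above is often modeled with a Boltzmann-like expression, in which the translation initiation rate scales as exp(-beta * delta G); the sketch below illustrates that idea (the beta value and delta G inputs are illustrative placeholders, not data from the talk):

```python
import math

# Thermodynamic-style model: translation initiation rate proportional to exp(-beta * dG),
# i.e. a more favorable (lower) binding free energy predicts faster protein production.
BETA = 0.45  # 1/(kcal/mol); illustrative apparent Boltzmann factor, an assumption here

def relative_translation_rate(delta_g_total):
    """Relative (unitless) translation initiation rate from total binding free energy."""
    return math.exp(-BETA * delta_g_total)

for dg in (-8.0, -4.0, 0.0, 4.0):   # kcal/mol, hypothetical designs
    print(f"dG = {dg:+.1f} kcal/mol -> relative rate {relative_translation_rate(dg):.2f}")
```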

Tools and parts
In the tools and parts session, several software-based frameworks and design tools were presented, many of which are listed in the software tools section below.

Languages and standardization
The languages and standardization session had discussions of language standardization projects such as the BioStream language, PoBol (Provisional BioBrick Language) and the BioBrick Open Language (BOL).

Software tools: a SynBio CrunchUp
Several rigorous computer-aided design and validation software tools and platforms are emerging for applied synthetic biology, many of which are freely available and open-source.

  • Clotho: An interoperable design framework supporting symbol, data model and data structure standardization; a toolset designed in a platform-based paradigm to consolidate existing synthetic biology tools into one working, integrated toolbox
  • SynBioSS - Synthetic Biology Software Suite: A computer-aided synthetic biology tool for the design of synthetic gene regulatory networks; computational synthetic biology
  • RBS Calculator: A biological engineering tool that predicts the translation initiation rate of a protein in bacteria; it may be used in Reverse Engineering or Forward Engineering modes
  • SeEd - Sequence Editor (work in progress): A tool for designing coding sequence alterations, a system conceptually built around constraints instead of sequences
  • Cellucidate: A web-based workspace for investigating the causal and dynamic properties of biological systems; a framework for modeling modular DNA parts for the predictable design of synthetic systems
  • iBioSim: Design automation software for analyzing biochemical reaction network models including genetic circuits, models representing metabolic networks, cell-signaling pathways, and other biological and chemical systems
  • GenoCAD: An experimental tool for building and verifying complex genetic constructs derived from a library of standard genetic parts
  • TinkerCell: Computer-aided design software for synthetic biology

Future of BioCAD
One of the most encouraging aspects of the current evolution of synthetic biology is the integration the field is forging with other disciplines, particularly electronics design and manufacture, DNA nanotechnology and bioinformatics.

Scientists are meticulously applying engineering principles to synthetic biology, while realizing that novel innovations are also required since there are issues specific to engineering biological systems. Some of these technical issues include device characterization, impedance matching, rules of composition, noise, cellular context, environmental conditions, rational design vs. directed evolution, persistence, mutations, crosstalk, cell death, chemical diffusion, motility and incomplete biological models.

As happened in computing, and is happening now in biology, the broader benefit to humanity of being able to develop and standardize abstraction layers in any field can be envisioned.
Clearly there will be ongoing efforts to more granularly manipulate and create all manner of biology and matter. Some of the subsequent areas where standards and abstraction hierarchies could be useful, though not immediately, are the next generations of computing and communications, molecular nanotechnology (atomically precise matter construction from the bottom up), climate, weather and atmosphere management, planet terraforming, and space colony construction.

(Image credits: www.3dscience.com, www.biodesignautomation.org)

Sunday, July 05, 2009

Next-gen computing for terabase transfer

The single biggest challenge presently facing humanity is the new era of ICT (information and communication technology) required to advance the progress of science and technology. This constitutes more of a grand challenge than disease, poverty, climate change, etc., because solutions are not immediately clear, and are likely to be more technical than political in nature. Both raw capacity in information processing and transfer and the software to drive these processes at higher levels of abstraction are required to make the information usable and meaningful. The computing and communications industries have been focused on incremental Moore’s Law extensions rather than new paradigms, and do not appear to be cognizant of the current needs of science, particularly their magnitude.

Computational era of science
One trigger for a new ICT era is the shift in the way that science is conducted. Old trial-and-error lab experimentation has been supplemented with informatics and computational science for characterizing, modeling, simulating, predicting and designing. Life sciences is the most prominent area of science requiring ICT advances, for a variety of purposes including biological process characterization and simulation. Genomics is possibly the field with the most ICT urgency: genomic data is growing at roughly 10x/year vs. Moore’s Law at 1.5x/year. However, nearly every field of science has progressed to large data sets and computational models.
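
To illustrate how quickly that gap compounds, a short sketch using the growth rates quoted above (illustrative only):

```python
# How quickly genomic data outgrows compute if the quoted rates hold:
# data grows ~10x/year, Moore's-Law-style compute ~1.5x/year.
data_growth, compute_growth = 10.0, 1.5

for years in (1, 3, 5):
    gap = (data_growth / compute_growth) ** years
    print(f"after {years} year(s): data/compute gap grows ~{gap:,.0f}x")
# After 5 years the gap is already ~13,000x - the core of the ICT challenge described here.
```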

Sunday, May 24, 2009

Expanding notion of Computing

As we push to extend inorganic Moore’s Law computing to ever-smaller nodes, and simultaneously attempt to understand and manipulate existing high-performance nanoscale computers known as biology, it is becoming obvious that the notion of computing is expanding. The definition, models and realms of computation are all being extended.

Computing models are growing
At the most basic level, how to do computing (the computing model) is certainly changing. As illustrated in Figure 1, the traditional linear Von Neumann model is being extended with new materials, 3D architectures, molecular electronics and solar transistors. Novel computing models are being investigated, such as quantum computing, parallel architectures, cloud computing, liquid computing and the Cell Broadband Engine architecture used in the IBM Roadrunner supercomputer. Biological computing models and biology as a substrate are also under exploration with 3D DNA nanotechnology, DNA computing, biosensors, synthetic biology, cellular colonies and bacterial intelligence, and the discovery of novel computing paradigms existing in biology, such as the topological equations by which ciliate DNA is encrypted.

Figure 1. Evolving computational models (source)

Computing definition and realms are growing
At another level, subtly but importantly, where to do computing is changing, from specialized locations the size of a large room in the 1970s to the desktop, the laptop, the netbook, and the mobile device and smartphone. At present computers are still made of inorganic materials, but introducing a variety of organic materials and computing mechanisms helps to expand the definition of what computing is. Ubiquitous sensors, personalized home electricity monitors, self-adjusting biofuels, molecular motors and biological computers do not sound like the traditional concept of computing. True next-generation drugs could be in the form of molecular machines. Organic components or organic/inorganic hybrid components, as the distinction dissolves, could be added to many objects such as the smartphone. A mini-NMR or mini-imager for mobile medical diagnostics from a disposable finger-prick blood sample would be an obvious addition.