Showing posts with label supercomputing.

Sunday, February 15, 2015

Blockchains as a Granular Universal Transaction System

Blockchain technology introduces a new model for large-scale coordination, owing to several key features. First, a blockchain is an open, universal transaction system. Every transaction worldwide is processed the same way and posted to the blockchain, where it is available for viewing; the transaction ledger is publicly inspectable on demand at any future moment.

Second, blockchains are trustless in the sense that it is not necessary to find or trust any of the other parties to a transaction; it is only necessary to trust the system. This suggests that orders-of-magnitude more transactions may be possible in trustless systems, since the architecture allows anyone to transact with anyone, anywhere; geographical proximity, personal knowledge, and the search problem are all reduced or resolved. Conceptually, this is the next step in the progression by which Amazon (a global system) opens up trading in a way that Craigslist (geographically local) does not.

Third, blockchains are a universal tracking system that might accommodate vastly more granularity than has previously been feasible or cost-effective to monitor. The optimal level of transaction detail to post directly to the blockchain (thus invoking the expensive mining operation for its recording) is being sorted out across different ecosystem tiers. The overall blockchain ecosystem is developing ways to avoid bloating the blockchain with too many micro-transactions: special-purpose sidechains, decentralized off-chain storage (for example with MaidSafe), batched transactions (such as batched notary sidechains that register large groups of legal documents), and Merkle trees (confirming and storing a whole corpus of data with one meta-hash, as sketched below).
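
To make the Merkle-tree point concrete, here is a minimal sketch (in Python, with illustrative document names) of how a whole corpus can be reduced to a single meta-hash, so that only one value needs to be registered on-chain while any individual document can later be proven against it.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(documents: list[bytes]) -> bytes:
    """Reduce a corpus of documents to one meta-hash (the Merkle root)."""
    if not documents:
        return sha256(b"")
    level = [sha256(doc) for doc in documents]
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Registering only this root on-chain attests to the whole corpus of documents.
docs = [b"deed-001", b"deed-002", b"deed-003"]
print(merkle_root(docs).hex())
```
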
Blockchains are a Supercomputer for Reality, a Mechanism for Orchestrating Quanta
The key idea of blockchains as a universal transaction system is that they are an automated computational mode of operation, a seamless universal infrastructural element for coordinating activity at any granularity. Blockchains could be a universal transaction system on a scale never before imagined, one that could possibly coordinate the whole of human and machine activity. In this sense, blockchain technology is a supercomputer for reality. Any and all phenomena that can be quantized (defined in discrete units or packages) can be encoded and transacted in an automated fashion on the blockchain. Just as big data seeks to eventually model and predict all phenomena, natural and otherwise, so too might blockchains accompany big data in tracking and administering all phenomena.

One summary and prognostication of this dynamic and the potential universal applicability of blockchains is that anything that can be decentralized will be. This carries an implied assumption that, in certain situations, the blockchain model is inherently more efficient, beneficial, and potentially superior. Decentralization is ‘where water goes’ (where water flows naturally, along the path of least resistance and least effort). The blockchain is an Occam’s razor, a natural efficiency process.

Blockchains are thus an intriguing model for coordinating the full transactional load of any large-scale system, whether the whole of human activity in its various forms (social systems) or any other system, such as a brain. A brain involves quadrillions of transactions that could perhaps be handled by the universal transactional architecture of a blockchain, as in Blockchain Thinking models.

Further, it is not just the transaction-handling capability of the blockchain as a universal coordination system that matters, but also other properties that can be applied throughout, such as demurrage-based incentives for dynamic resource redistribution across the system. In Blockchain Thinking, this could mean redistributing brain currencies such as ideas and potentiation (a toy sketch of the demurrage idea follows below). Thus it is not the mere orchestration features of universal blockchain systems but their enhancement possibilities that are perhaps the more interesting point. Not only can we better organize larger-scale existing activity with blockchains, but we may also open up new classes of as-yet unimagined functionality and potentiality.
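
As a toy illustration of the demurrage idea (the decay rate and the even-redistribution rule here are invented for the sketch, not taken from any particular system): each period, every balance decays by a small percentage and the collected amount is redistributed, nudging idle resources back into circulation.

```python
# Toy demurrage model: balances decay each period and the collected amount
# is redistributed evenly across participants (illustrative parameters only).
def apply_demurrage(balances: dict[str, float], rate: float = 0.02) -> dict[str, float]:
    collected = sum(b * rate for b in balances.values())
    share = collected / len(balances)
    return {who: b * (1 - rate) + share for who, b in balances.items()}

balances = {"alice": 100.0, "bob": 10.0, "carol": 1.0}
for _ in range(12):                      # twelve demurrage periods
    balances = apply_demurrage(balances)
print(balances)                          # holdings drift toward the mean over time
```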

More information: Swan, M. (2015). Blockchain: Blueprint for a New Economy. O'Reilly Media.

Tuesday, December 17, 2013

Supercomputing Processing Speed Nearly Doubles in One Year

The Top500 November 2013 biannual list of the world’s fastest supercomputers shows China's Tianhe-2 still at nearly twice the capacity of the second fastest, with virtually no change since the machine vaulted onto the list in June 2013.

Tech Specs: Tianhe-2 (MilkyWay-2) - TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200GHz, TH Express-2, Intel Xeon Phi 31S1P. The machine was constructed by the National University of Defense Technology (NUDT).

Tianhe-2's maximum processing power (Rmax) is 33.8 petaflops, with a theoretical peak (Rpeak) of 54.9 petaflops. The nearest competitor is the US DOE's Titan, a Cray XK7 (Opteron 6274 16C 2.200GHz) with a maximum of 17.6 petaflops. Processing power is firmly on a steep growth curve, accelerating since RIKEN's K Computer pushed the top mark past 8 petaflops in June 2011 (Figure 1); a rough doubling-time check appears below.

Figure 1: Supercomputing Processing Capacity (Source: Top500)
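
As a back-of-the-envelope check on that growth rate, using only figures quoted in these posts (roughly 8 petaflops for the K Computer in June 2011 and 33.8 petaflops for Tianhe-2 in June 2013), the implied doubling time for the #1 system works out to about a year:

```python
import math

def doubling_time_years(perf_start_pf: float, perf_end_pf: float, span_years: float) -> float:
    """Implied doubling time given start/end performance over a span of years."""
    doublings = math.log2(perf_end_pf / perf_start_pf)
    return span_years / doublings

# ~8 PF (June 2011) to 33.8 PF (June 2013) is about two doublings in two years.
print(doubling_time_years(8.0, 33.8, 2.0))   # ~0.96 years per doubling
```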

Supercomputers, although they have exceeded the raw processing speed of the human brain for some time now, work in a massively parallel but essentially linear fashion, which is not at all how the brain functions. However, while supercomputers do not mimic the human brain, trying to understand it is one of their key uses, along with other traditional prediction problems such as weather forecasting, physics phenomena, and energy modeling. Now firmly in the big data era, processing astronomical data is also a key use for supercomputing. Perhaps Tianhe-2 will compute findings from lunar data gathered by China's recently landed Jade Rabbit rover. In other astronomical applications, astronomers expect to process 10 petabytes of data every hour from the Square Kilometer Array telescope under development in Australia and South Africa, with a total collecting area of one square kilometer (a quick sense-of-scale calculation follows).
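
For a sense of scale, that quoted figure of 10 petabytes per hour corresponds to a sustained data rate in the terabytes-per-second range (simple unit arithmetic on the post's own number):

```python
# Sustained data rate implied by 10 petabytes of data per hour.
petabytes_per_hour = 10
bytes_per_second = petabytes_per_hour * 1e15 / 3600
print(bytes_per_second / 1e12)        # ~2.8 terabytes per second
print(bytes_per_second * 8 / 1e12)    # ~22 terabits per second
```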

Sunday, November 18, 2012

Supercomputing Increments Towards the Exaflop Era

The November 2012 biannual list of the world’s fastest supercomputers shows the leader improving only incrementally over the previous list. Titan (a Cray XK7, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x) leads with 17.6 petaflops of maximum processing power. This is only an 8% increase in maximum processing speed, compared with other recent increases of 30-60%, but a continued step forward in computing power.

Supercomputers are used for a variety of complicated modeling problems where simultaneous processing is helpful such as weather modeling, quantum physics, and predicting degradation in nuclear weapons arsenals.

Figure 1. World's Fastest Supercomputers. (Source)
Increasingly, supercomputing is seen as just one category of big data computing, along with cloud-based server farms running simple algorithms over large data corpora (for example, Google’s cat image recognition project), crowd-based distributed computing networks (e.g., the Folding@home protein-folding network, with 5 petaflops of computing power), and crowdsourced labor networks (e.g., Mechanical Turk, oDesk, CrowdFlower - theoretically comprising 7 billion Turing-test-passing online agents).
 

Sunday, August 19, 2012

Supercomputing: 16 petaflops, schmetaflops?

Supercomputing advances continue on an exponential curve – the world’s best machine (IBM’s Sequoia - BlueGene/Q, installed in the U.S. at LLNL) currently has 16 petaflops of raw compute capability.

Figure 1. Data Source: Top 500 Supercomputing Sites


As shown in Figure 1, the curve has been popping – up from 2 to 16 petaflops in just two years! However, for all its scale, supercomputing remains a linear endeavor. While the average contemporary supercomputer has far greater than human-level capability in raw compute power, it cannot think, which is to say pattern-match and respond in new situations, or solve general rather than special-purpose problems.

For the future of intelligence and cognitive computing, the three-way horse race continues between enhancing biological human cognition, reverse-engineering and simulating the human brain in software, and hybrids of these two.





Sunday, January 01, 2012

Top 10 technology trends for 2012

1. Mobile is the platform: smartphone apps & device proliferation
2. Cloud computing: big data era, hadoop, noSQL, machine learning
3. Gamification of behavior and content generation
4. Mobile payments and incentives (e.g., Amex meets FourSquare)
5. Life by Siri, Skyvi, etc. intelligent software assistants
6. Happiness 2.0 and social intelligence: mindfulness, calming tech, and empathy building
7. Social graph prominence in search (e.g., music, games, news, shopping)
8. Mobile health and quantified self-tracking devices: towards a continuous personal information climate
9. Analytics, data mining, algorithms, automation, robotics
10. Cloud culture: life imitates computing (e.g.; Occupy, Arab Spring)

Further out - Gesture-based computing, Home automation IF sensors, WiFi thermostat, Enterprise social networks

Is it ever coming? - Cure for the common cold, Driverless cars


Looking back at Predictions for 2011: right or wrong?

  • Right: Mobile is the platform, Device proliferation, Big data explosion, Group shopping
  • On the cusp: Crowdsourced labor, Quantified self tracking gadgets and apps, Connected media and on-demand streaming video
  • Not yet: Sentiment engines, 3-D printing, Real-time economics

Sunday, July 03, 2011

World supercomputing capability more than triples

In a breath of good news for Japan this year, RIKEN's supercomputer "K Computer" vaulted to the top slot in world supercomputing in June 2011 as tracked by Top 500 Supercomputer Sites.

Remarkably, capability more than tripled to over 8 petaflops (8 quadrillion calculations per second, measured as the maximal LINPACK performance achieved), after supercomputer performance had plateaued between roughly 1 and 2 petaflops over the previous three years (Figure 1). China's Tianhe-1A at the National Supercomputing Center in Tianjin was in second place, and the US's Jaguar Cray at Oak Ridge National Lab in third.

The tripling of capacity brings obvious potential benefits to scientific computing, the realm of applications for which supercomputers are used. It is hoped that these kinds of quantitative changes may eventually lead to qualitative changes in the way other problems are investigated, for example how the brain works.

Figure 1. Source: Top 500 Supercomputer Sites

Sunday, January 02, 2011

Top 10 technology trends for 2011

1. Mobile is the platform; mobile payment ubiquity could be next
2. Device proliferation continues; tablets, e-book readers, etc.
3. Connected media and on-demand streaming video, IPTV, live event interaction
4. Social shopping: group purchasing, commenting, recommendation, LBS
5. Sentiment engines (ex: Pulse of the Nation, We Feel Fine) are ripe for being applied much more broadly to other keyword domains; sentiment prediction
6. Big data era explosion: machine learning, cloud computing, clusters, supercomputing
7. Labor-as-a-service: microlabor, on-demand labor, global task fulfillment
8. Quantified self tracking gadgets and apps (ex: WiThings scale, myZeo, BodyMetRx, medication reminder, nutrition intake, workout coordination, DIYgenomics, etc.)
9. Personal manufacturing, digital fabrication, 3D printing ("atoms are the new bits"); slow but important niche growth
10. Real-time economics: blippy, crowdsourced forecasting, stock market prediction

(Review predictions for 2010)

Sunday, December 12, 2010

Supercomputers surpass 2.5 petaflops

The biannual list of the world's fastest supercomputers was released on November 13, 2010. For the first time, supercomputing capability surpassed 2.5 petaflops, with the world's fastest supercomputer, the Tianhe-1A (NUDT TH MPP, X5670 2.93GHz 6C, NVIDIA GPU, FT-1000 8C, NUDT), at the National Supercomputing Center in Tianjin, China, clocking in at just over 2.5 petaflops.

Figure 1 illustrates how supercomputing power has grown over the last five years, starting at (a paltry) 136.8 teraflops in June 2005 and experiencing four solid doublings since (a quick check of that arithmetic appears below). This rate of progress is expected to continue and usher in the exaflop era of supercomputing by mid-decade. The IBM Roadrunner at Los Alamos was the first to achieve speeds over one petaflop, in June 2008, and held the fastest-computer seat for three measurement periods before being surpassed by the Cray Jaguar at Oak Ridge for two measurement periods. China has now captured the fastest-supercomputer ranking with its NUDT MPP.
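
The "four doublings" can be checked with one line of arithmetic from the figures in this post (136.8 teraflops in June 2005 to roughly 2.5 petaflops, i.e. 2,500 teraflops, in November 2010):

```python
import math

# Doublings between the June 2005 leader (136.8 teraflops) and the
# November 2010 leader (roughly 2,500 teraflops).
doublings = math.log2(2500 / 136.8)
print(round(doublings, 1))   # ~4.2 doublings over five and a half years
```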

Figure 1. Growth in Supercomputing Capability: Jun 2005 - Nov 2010

The world's supercomputers are working on many challenging problems in areas such as physics, energy, and climate modeling. A natural question arises as to how soon human neural simulation may be conducted with supercomputers. It is a challenging problem, since neural activity has a different architecture than supercomputing activity. Signal transmission is different in biological systems, with a variety of parameters such as context and continuum determining the quality and quantity of signals. Distributed computing systems might be better geared to processing problems in a fashion similar to that of the brain. The largest current project in distributed computing, Stanford's Folding@home protein-folding network, reached 5 petaflops of computing capacity in early 2009, just as supercomputers were reaching 1 petaflop. The network continues to focus on modeling protein folding but could eventually be extended to other problem spaces.

Sunday, April 25, 2010

Supercomputing and human intelligence

As of November 2009, the world’s fastest supercomputer was the Cray Jaguar located at the U.S. Department of Energy’s Oak Ridge National Laboratory, operating at 1.8 petaflops (1.8 x 10^15 flops). Unlike human brain capacity, supercomputing capacity has been growing exponentially. In June 2005, the world’s fastest supercomputer was the IBM Blue Gene/L at Lawrence Livermore National Laboratory, running at 0.1 petaflops. In less than five years, the Jaguar represents an order-of-magnitude increase, the latest culmination of capacity doublings every few years. (Figure 1)

Figure 1. Growth in supercomputer power
Source: Ray Kurzweil with modifications

The next supercomputing node, one more order of magnitude at 10^16 flops, is expected in 2011 with the Pleiades, Blue Waters, or Japanese RIKEN systems. 10^16 flops would possibly allow the functional simulation of the human brain.

Clearly, there are many critical differences between the human brain and supercomputers. Supercomputers tend to be modular in architecture and address specific problems, as opposed to having the general problem-solving capabilities of the human brain. Having raw computing power equal to or greater than human level does not necessarily confer on a machine the ability to compute as a human does. Some estimates of the raw computational power of the human brain range between 10^13 and 10^16 operations per second. This would indicate that supercomputing power is already on the order of estimated human brain capacity, yet intelligent or human-simulating machines do not exist.

The digital comparison of raw computational capability may not be the right measure for understanding the complexity of the brain. Signal transmission is different in biological systems, with a variety of parameters such as context and continuum determining the quality and quantity of signals.
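
To make the comparison concrete, here is the simple arithmetic behind that claim, using only the estimates quoted above (these are the post's own rough figures, not settled numbers):

```python
# Where 2009-era supercomputing sits within the quoted range of brain estimates.
jaguar_flops = 1.8e15                 # Cray Jaguar, November 2009
brain_low, brain_high = 1e13, 1e16    # estimated brain range quoted in the post

print(jaguar_flops / brain_low)       # ~180x the low-end brain estimate
print(jaguar_flops / brain_high)      # ~0.18, i.e. about 18% of the high-end estimate
```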

Sunday, January 03, 2010

Top 10 technology trends for 2010

Some of the freshest ideas in 2009 were botnet futures (Daemon, Daniel Suarez), a variety of neuro scanning applications (The Neuro Revolution, Zack Lynch), a systems approach to Earth (Whole Earth Discipline, Stewart Brand), accelerating economic development through charter cities (Charter Cities, Paul Romer), automatic markets for fungible resource allocation (Broader Perspective, Melanie Swan), and the notion that the next-generation of technology needed to solve intractable problems could be non-human understandable and come from sampling the computational universe of all possible technologies (Conversation on the Singularity, Stephen Wolfram).

Heading into a brand new decade, there are several exciting technology areas to watch. Many are on exponential improvement curves, although from any viewpoint on an exponential curve, things may look flat. Most of this blog’s big predictions for 2009 came true. Here’s what could happen in the next year or so:

1. Closer to $100 whole human genome
Third-generation DNA sequencer Pacific Biosciences estimates that they are still on track for a late 2010 release of single-molecule real-time sequencing technology that could eventually lead to less than $100 whole human genome sequencing.

2. Mobile continues to be the platform
There will likely be a greater launch and adoption of addictive location-based services (LBS) like FourSquare, Gowalla and Loopt, together with social networking, gaming, and video applications for the mobile platform. Continued trajectory of smartphone purchases (one in four in the U.S.). iPhone and Android app downloads double again. Gaming expands on mobiles and on the console platform with Avatar and maybe other 3-D console games. Internet-delivered content continues across all platforms.

3. 22nm computing node confirmed for 2011
Intel may confirm and provide more details about the 22nm Ivy Bridge chip planned for commercial release in the second half of 2011. The September 2010 Intel Developer’s Forum may feature other interesting tidbits regarding the plans for 3-D architectures and programmable matter that could keep computing on Moore’s Law curves.

4. Supercomputers reach 15% human capacity
Supercomputing capacity doublings have been occurring every few years and could likely continue. As of November 2009, the world’s fastest supercomputer was the Cray Jaguar, running at 1.8 petaflops (1.8 x 10^15 flops), approximately 10% of the estimated compute capacity of a human.

5. Confirmation of synthetic biology fuel launch for 2011

Pilot plants are running and the commercial launch of the first killer app of synthetic biology, synthetic fuel, could be confirmed for 2011. Sapphire Energy and Synthetic Genomics are generating petroleum from algae; LS9, petroleum from microbes; Amyris Biotechnologies, ethanol; and Gevo, biobutanol.

6. Smart grid and smart meter deployment
In energy, more utilities move to deploy internal smart-grid network management infrastructure and begin to replace customer premises equipment (CPE) with advanced metering infrastructure (AMI) for automated utility reading and customer data access. Dozens of efforts are underway in the U.S. (Figure 1).



7. Increased choice in personal transportation
More electric vehicle offerings, greater launch of alternative fuels, a potential Tesla IPO announcement, and more widespread car-share programs (e.g., City CarShare, Gettaround).

8. Real-time internet search dominates
More applications allow real-time search functionality through content aggregation, standards, and more granular web searches. Search could be 40% real-time, 40% location-based, 20% other.

9. Advent of health advisors and wellness coaches
Hints of personalized medicine start to arrive with the unification of health data streams (e.g., genomics, biomarkers, family and health history, behavior, and environment) into personalized health management plans. Early use of health monitoring devices (e.g., FitBit, DirectLife) as a prelude to biomonitors.

10. WiMax roll-out continues
Clear adds more markets to its current 26. Increasing importance of integrated data stream management (video, voice, etc.) on fixed and mobile platforms.

Probably not happening in 2010 but would be nice…
Still waiting for significant progress regarding…
  • 4G/LTE roll-out
  • Driverless cars, on-demand personal rapid transport systems
  • Ubiquitous sensor networks
  • OLEDs

Sunday, December 21, 2008

Top 10 Computing Trends for 2009

Here is a quick list of my top computing and communications predictions for 2009 ranging from smartphones to supercomputers.

1. Smartphone AppMania continues
The explosion of application development on smartphone platforms like the iPhone and G1 continues, particularly in location-based services, social interaction and gaming. More computer science departments offer smartphone application development classes. There is more standardization of USB, earphone and other ports. U.S. ARPU is over $100/month.

2. Twitter is the platform
Despite renowned technical glitches, thousands more flock to messaging leader Twitter, and the fastest-growing user group of the microblogging notification system is non-human tweeters using the service as a data platform (example: Kickbee). Web 2.0 continues to bring back network computing, turning the web into the computer, and human- and object-based messaging becomes the new RSS.

3. Minis go mainstream
Mini PCs such as the Asus Eee PC, MSI Wind and Dell Inspiron continue to proliferate. Minis are fingertip candy: a travel machine for the on-the-go tech-savvy and, at $200-$400, cheap enough to be affordable for everyone else.

4. Supercomputers achieve 8% human capacity
With IBM’s RoadRunner and Cray’s Jaguar running at just over 1 petaflop/s currently, the world’s fastest supercomputers could reach 1.5 petaflop/s in 2009 (unconfirmed results here), about 8% of the total processing capacity of the average human.

5. Chips: 32nm node rolls out amidst sales declines
Intel rolls out its 32nm node 1.9 billion transistor chip despite worldwide industry sales declines. Gartner forecasts a 4% decrease in chip sales in 2008 vs. 2007 and a 16% decrease in chip sales in 2009 vs. 2008. The biggest speedups continue to come from hardware, not software, and there could be additional breakthroughs in memory (flash, NRAM), magnetic disk storage, batteries and processor technology.

6. iWorld persists
The 200 millionth iProduct is sold before Apple’s CEO succession plan is in place.

7. WiMAX roll-out still stalled
WiMAX services could roll-out to 1-2 cities beyond Baltimore by year-end if Sprint and Clearwire’s operational and legal challenges are resolved. WiMAX would help to stratify connectivity offerings with a recession-attractive price point and bandwidth package (2-4 Mbps download, 1 Mbps upload speed; 6 month introductory price of $25/month, then $35/month).

8. More flexible media consumption models
More models for flexible on-demand pay/free video content viewing are launched for Tivo, Netflix, DVR, media PC and Internet consumers.

9. Video gaming grows
Video game titles, types and hours growth continues as escapism and low-cost entertainment options flourish.

10. Extended use of virtual worlds
Virtual world penetration and proliferation continues (Sony’s recent launch: PlayStation Home) at a slow and steady pace for both entertainment and serious use. The largest platform, Second Life, saw a 50% year-over-year increase in total hours and a 100% year-over-year increase in land ownership (much less exposure to virtual subprimes), and this rate of growth could easily continue in 2009. In the natural evolution of the Internet, virtual worlds continue expanding from the 3 Cs (communication, collaboration and commerce) to more advanced rapid prototyping, simulation and data visualization.


Other advances that could be around the corner:


Still waiting, a few other (non-comprehensive) opportunities:

  • Semantic web
  • Natural language processing
  • VLT (very long-term) laptop batteries
  • Wireless power
  • Ubiquitous free Wi-Fi
  • Paper-thin reader for newspapers, eBooks and any printed content
  • Cognition valet and other AI services