Sunday, December 28, 2008

Status of life sciences

Right now is an exciting time in the life sciences. The field is advancing, growing and changing in nearly every dimension, not just in content but also in structure. On the content side, tremendous material is coming forth in the form of key research findings, affordable new technologies and simultaneous holistic and reductionist expansion via systems biology approaches and new sub-field branching. On the structure side, life science is changing in three important ways: in the concept of life science itself, in how science is conducted and in the models by which health and health care are understood and realized.

Conceptually, over time the life sciences have transitioned from an art to a science to an information technology problem to, now, an engineering problem. The way science is conducted is also shifting. Science 1.0 was investigating and enumerating physical phenomena and doing hypothesis-driven, trial-and-error experimentation. Science 2.0 adds two steps to the traditional enumeration and experimentation to create a virtuous feedback loop: mathematical modeling and software simulation, and building actual samples in the lab using synthetic biology and other techniques.

A second aspect of Science 2.0 is the notion of being in a post-scientific society, where innovation is occurring in more venues, not just government and industrial research labs but increasingly at technology companies, startups, small-team academic labs and in the minds of creative individual entrepreneurs.

Sunday, December 21, 2008

Top 10 Computing Trends for 2009

Here is a quick list of my top computing and communications predictions for 2009 ranging from smartphones to supercomputers.

1. Smartphone AppMania continues
The explosion of application development on smartphone platforms like the iPhone and G1 continues, particularly in location-based services, social interaction and gaming. More computer science departments offer smartphone application development classes. There is more standardization of USB, earphone and other ports. U.S. ARPU is over $100/month.

2. Twitter is the platform
Despite well-known technical glitches, thousands more flock to messaging-leader Twitter, and the fastest-growing user group of the microblogging notification system is non-human tweeters using the service as a data platform (example: Kickbee). Web 2.0 continues to bring back network computing, turning the web into the computer, and human- and object-based messaging becomes the new RSS.
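As a rough illustration of the "objects as tweeters" idea, here is a minimal sketch of how a Kickbee-style sensor might publish events as short status messages. The endpoint URL, credentials and field names are placeholders I have made up for the sketch, not Twitter's actual API.

```python
# Hypothetical sketch: a device posting sensor events to a microblogging
# service as short status messages. Endpoint, credentials and payload
# names are illustrative assumptions, not a documented API.
import base64
import time
import urllib.parse
import urllib.request

STATUS_ENDPOINT = "https://example-microblog.invalid/statuses/update.json"  # placeholder
USER, PASSWORD = "kickbee_demo", "secret"  # placeholder credentials


def post_status(text: str) -> None:
    """POST a short (<=140 character) status message on behalf of the device."""
    data = urllib.parse.urlencode({"status": text[:140]}).encode()
    request = urllib.request.Request(STATUS_ENDPOINT, data=data)
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")  # simple 2008-era basic auth
    urllib.request.urlopen(request)  # fire-and-forget for the sketch


def on_sensor_event(magnitude: float) -> None:
    """Format a sensor reading as a human-readable tweet and post it."""
    stamp = time.strftime("%H:%M:%S")
    post_status(f"Kick detected at {stamp} (strength {magnitude:.1f})")
```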

3. Minis go mainstream
Mini PCs such as the Asus Eee PC, MSI Wind and Dell Inspiron Mini continue to proliferate. Minis are fingertip candy: a travel machine for the on-the-go tech-savvy and, at $200-$400, cheap enough to be affordable for nearly everyone else.

4. Supercomputers achieve 8% human capacity
With IBM’s RoadRunner and Cray’s Jaguar running at just over 1 petaflop/s currently, the world’s fastest supercomputers could reach 1.5 petaflop/s in 2009 (unconfirmed results here), about 8% of the total processing capacity of the average human.
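A quick back-of-the-envelope check of that percentage, assuming the commonly cited (and much debated) estimate of roughly 20 petaflop/s for whole-brain processing; that brain figure is my assumption, not from the post.

```python
# Rough check of the "about 8% of a human" claim, assuming a ~20 petaflop/s
# estimate for whole-brain processing capacity (a debated assumption).
supercomputer_pflops = 1.5          # projected fastest machine, 2009
human_brain_pflops = 20.0           # assumed whole-brain estimate
fraction = supercomputer_pflops / human_brain_pflops
print(f"{fraction:.1%}")            # -> 7.5%, i.e., roughly the 8% cited
```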

5. Chips: 32nm node rolls out amidst sales declines
Intel rolls out its 32nm node 1.9 billion transistor chip despite worldwide industry sales declines. Gartner forecasts a 4% decrease in chip sales in 2008 vs. 2007 and a 16% decrease in chip sales in 2009 vs. 2008. The biggest speedups continue to come from hardware, not software, and there could be additional breakthroughs in memory (flash, NRAM), magnetic disk storage, batteries and processor technology.

6. iWorld persists
The 200 millionth iProduct is sold before Apple’s CEO succession plan is in place.

7. WiMAX roll-out still stalled
WiMAX services could roll-out to 1-2 cities beyond Baltimore by year-end if Sprint and Clearwire’s operational and legal challenges are resolved. WiMAX would help to stratify connectivity offerings with a recession-attractive price point and bandwidth package (2-4 Mbps download, 1 Mbps upload speed; 6 month introductory price of $25/month, then $35/month).

8. More flexible media consumption models
More models for flexible on-demand pay/free video content viewing are launched for Tivo, Netflix, DVR, media PC and Internet consumers.

9. Video gaming grows
Video game titles, types and hours growth continues as escapism and low-cost entertainment options flourish.

10. Extended use of virtual worlds
Virtual world penetration and proliferation continues (Sony’s recent launch: PlayStation Home) at a slow and steady pace for both entertainment and serious use. The largest platform, Second Life, saw a 50% year-over-year increase in total hours and a 100% year-over-year increase in land ownership (much less exposure to virtual subprimes), and this rate of growth could easily continue in 2009. In the natural evolution of the Internet, virtual worlds continue expanding from the 3 Cs (communication, collaboration and commerce) to more advanced rapid prototyping, simulation and data visualization.


Still waiting, a few other (non-comprehensive) advances and opportunities that could be around the corner:

  • Semantic web
  • Natural language processing
  • VLT (very long-term) laptop batteries
  • Wireless power
  • Ubiquitous free Wi-Fi
  • Paper-thin reader for newspapers, eBooks and any printed content
  • Cognition valet and other AI services

Sunday, December 14, 2008

Future of physical proximity

Where will you live? How would concepts and norms of physical proximity evolve if cars were no longer the dominant form of transportation? How would residential areas self-organize if not laid out around the needs of cars and roads? Imagine gardens replacing driveways and roadways. What if people just walked out of their houses or onto their apartment rooftops to take off via jetpack, smartpod or small foldable vehicle, perhaps a future version of the MIT car? At present, cities, suburbs and whole countries are structured per the space dictates of motor vehicle transportation systems.

Nanoreality or rackspace reality?
There are two obvious future scenarios. There may either be a radical mastery and use of the physical world through nanomanufacturing or a quasi-obsolescence of the physical world as people upload to digital mindfile racks and live in virtual reality. The only future demand for the physical world might be for vacationing and novelty (‘hey, let’s download into a corporeal form this weekend and check out Real Nature, even though Real Nature is sensorially dull compared to Virtual Nature’).

Work 2.0
The degree of simultaneous advances is relevant for evaluating which scenario may arise. For example, economically, must people work? What is the nature of future work? Creative and productive activity (Work 2.0) might all take place in virtual reality. Smart robots may have taken over many physical services and artificial intelligences may have taken over most knowledge work. Would people be able to do whatever work they need to from home or would there be physical proximity and transportation proximity requirements as there are now?

Portable housing and airsteading
Next-level mastery of the physical world could mean that people stay corporeal and live in portable residential pods. Airsteading (a more flexible version of seasteading) could be the norm: docking on-demand as boats or RVs do, in different airspaces for a night or a year. Docking fees could include nanofeedstock streams and higher-bandwidth, more secure wifi and data storage than that ubiquitously available on the worldnets. Mobile housing and airsteading could help fulfill the 'warmth of the herd' need and facilitate the intellectual capital congregation possibilities that cities have afforded since the early days of human civilization.

Sunday, December 07, 2008

Brain-computer interfacing and the cognition valet

One dream of the future is to augment the human brain via direct linkage to electronics. Brain-computer interfaces could provide two levels of capability: first, allowing machines to be controlled directly by the brain. This has already been demonstrated in invasive implants for motor sensing and vision systems and in non-invasive EEG-based helmets for basic game play, but has been elusive in avatar control (the Emotiv Systems helmet is not quite working yet). The second level of capability is augmenting more complex cognitive processes such as learning and memory, as is the goal of the Innerspace Foundation.

On-board processing
The broader objective is bringing information, processing, connectivity and communication on-board [the human]. Some of this is ‘on-board’ right now, in the sense that mobile phones, PDAs, books, notebooks, and other small handheld peripherals are carried with or clipped to people.

There are many forms of non-invasive wearable computing that could advance. Information recording and retrieval could be improved with better consumer lifecamming rigs to capture and archive audio and video life streams. Other applications are underway in smart clothing, wifi-connected processing-enabled contact lenses, cell phones miniaturized as jewelry (the communications, GPS and other functions not requiring a display), EEG helmets with greater functionality and an aesthetic redesign from Apple, and hair gel nanobots. A slightly more invasive idea is using the human bacterial biome as an augmentation substrate, and there are a host of more invasive ideas in body computing, implantable devices, evolved and then reconnected neural cell colonies and other areas.

Cognition Valet
After information recording and retrieval, the next key stage of on-board computing is real-time or FTR (faster than real-time) processing, particularly automated processing. Killer apps include facial recognition, perceptual-environment adjustments (e.g., brighter, louder), action simulators and social cognition recommendations (algorithms making speech and behavior recommendations). Ultimately, a full cognition valet would be available, modeling the reasoning, planning, motivation, introspection and probabilistic behavioral simulation of the self and others.

Protocols and Etiquette of the future: “my people talk to your people” becomes “my cognition valet interface messages or tweets with your cognition valet interface.”

Distributed human processing
Augmenting the brain could eventually lead to distributed personal intelligence. In a scenario reminiscent of David Brin's "Kiln People," I use a copy of my digital mindfile backup to run some Internet searches and work on a research project while my attention is not focused on online computer activities; simultaneously, a neural cell culture from my physical brain focuses on a specific task, and the original me is off pursuing its usual goals.

Sunday, November 30, 2008

Future of energy

There is a lot of discussion and ideation regarding the future of energy. The only thing everyone generally agrees upon is the quantitative underpinnings: that current worldwide energy demand is roughly 15 TW (terawatts) of continuous power and is expected to double to 30 TW by 2030. It is clear that fossil fuel alternatives are necessary. No matter how low the price of oil may fall (light crude is $50 per barrel at present in November 2008, down from $145 per barrel four months ago in July 2008), fossil fuel emissions, growth in worldwide energy demand and the oil independence interests of some nation-states warrant alternatives.
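For scale, doubling demand between 2008 and 2030 implies compound growth of a bit over 3% per year; a minimal sketch of that arithmetic (the 22-year horizon is simply taken from the 2008-2030 framing above):

```python
# Implied compound annual growth rate if world energy demand doubles
# from ~15 TW (2008) to ~30 TW (2030).
start_tw, end_tw = 15.0, 30.0
years = 2030 - 2008
cagr = (end_tw / start_tw) ** (1 / years) - 1
print(f"{cagr:.1%} per year")   # -> about 3.2% per year
```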

Different technologies have different proponents. A comprehensive strategic plan with consideration for installed base technologies, improvements thereto and incorporation of new technologies is lacking. There are many unanswered questions about how the constellation of possibilities should fit together and how trade-off decisions should be made. For example, should resources be devoted to the redevelopment and retrofitting of the 500 oldertech PCC (pulverized-coal combustion) coal plants in the U.S. fleet with nanofilters to better collect CO2 and other emissions from the flue gas, or instead use the resources to install the newertech IGCC (integrated gasification combined cycle) coal plants? Should coal plants be scrapped altogether in favor of solar, nuclear, wind, wave, geothermal and other renewable sources?

Is solar power, photovoltaics or thermal, terrestrial or space-based, point or line, dish or tower, trough or linear fresnel, the way to go? Is nuclear fission or fusion, traditional or pebble bed, the way to go? Is the fuel cell an unrealizable dream or a potential answer? Are batteries too toxic and developed from increasingly scarce materials or are nanoalloys, interleaved materials layers and nanocoatings making them viable?

Proposal #1: Presidential think-tank to develop comprehensive U.S. Energy Strategy
Energy, like agriculture, is a special-interest politics game. It would be helpful to have a presidentially-appointed think-tank of diverse members without political background or agenda to develop a comprehensive strategic long-term Energy plan for the U.S.

Proposal #2: Key U.S. states to generate and sell energy
So far the thinking has been small: Nevada and Arizona have undertaken renewable energy initiatives for their own power needs, but with their solar and land resources, they could potentially join California as one of the world's ten largest economies by generating and transmitting energy to other states. Regional and eventually national power grids would need to be developed. This could work for Texas too: oil field land could be redeployed for solar power, with oil derricks replaced by linear Fresnel towers.

Sunday, November 23, 2008

Advanced technology and social divisiveness

What would the world look like with even more dramatic technological change? What if accelerating change in technology not only continues but also heightens in depth and magnitude? One dramatic change, for example, would be a 100x or 1,000x improvement in human capability (thought, memory, learning, lifespan, healthspan, etc.). The definition of what it is to be human may evolve, as the transhuman and posthuman concepts explore. There have not yet been "different kinds of humans" or "different kinds of intelligences" co-existing in civilization.

These dramatic changes are distinct from the more general quality of life and more minor capacity improvements delivered by technology so far (the Internet, cell phone, medical transplant technologies, electricity, steam engine, immunization, etc.).

One possible future could be the organization of society into voluntary social groupings based on outlook and adoption or non-adoption of technology; some obvious dividing line technologies could be human genetic engineering and brain-computer interfaces.

A simple societal lens that can be applied at present is technology adopters and non-adopters.

Luddites are different from Those Who Don’t Use Cell Phones
Some non-adopters abstain deliberately and out of principle: Luddites, the Amish and other religious groups, etc. The other portion of non-adopters has just not had the access (practical, technical, financial or otherwise), willingness or perception of value (e.g., a killer app) required to adopt. So far in democracies, both types of non-adopters have been accommodated into society and are generally able to continue their behaviors, for example, the practice by some religions of complete medical non-intervention.

Peaceful coexistence of adopters and non-adopters
Participatory political regimes will tend to avoid paternalism in technology adoption, while economic and social incentives and universal access will tend to trigger adoption (example: the cell phone). Simultaneously, mature societies tend to accept and accommodate non-adopters. Two main dynamics could challenge the peaceful coexistence of adopters and non-adopters: first, the perceived threat of new technology, particularly to those who can control its adoption, and second, times of economic scarcity and pronounced competition for resources.

Sunday, November 16, 2008

Economics taboo in life sciences

It is considered impolite at best to ask life sciences companies about their cost structure and pricing strategies. Life sciences executives can often appear naive, incognizant and uncaring about the basic economics of their industry. They appear to exclusively and superficially target profit maximization and wholly proprietary IP development and protection, which is ironic given the greater goals of healthcare. Life sciences as an industry seems to be at least twenty years behind high tech industries such as computing and communications in terms of understanding and delivering economic value to a wide audience of end consumers, and in terms of openness and collaboration.

Fixed and variable costs, pricing strategies and quantitative aspects of customer demand are much better known and more openly shared and discussed by companies in the high tech industries. That critical piece of entrepreneurialism, understanding the specific economic value of a product or service to the end consumer, is absent in life sciences. The problem, of course, is the "third party pays" dynamic in life sciences, where a third party, the insurer, pays for services consumed by patients. If patients knew, or perhaps were even paying, prices, their behavior would likely be much more rational, and health services offerings would have to become much more rational as well. Price is not discussed and is rarely even available at the doctor's office.

Sunday, November 09, 2008

Your double

If you had a double, would you hang out with him or her? A double here is defined as an identical copy of yourself. The format could be interesting. Presumably an identical physical copy of yourself would be a bit stranger than a digital copy, advising you or conversing with you from the discretion of a computer screen. With yourself as a friend, some people might not bother to interact with others at all anymore. Others might prefer zero interaction with their double.

Theoretically, there would be no reason to stop at two instances of yourself. What about more, either digital or physical? If the doubles are caught up in their own goals and objectives, they might not be able to be the objective advisors that could be nice when someone knows you so well, but their insights and activities could be quite interesting.

Some good SciFi examples that examine the idea of having one or more doubles are John C. Wright’s Golden Age, David Brin’s Kiln People and Richard Morgan’s Altered Carbon.

There would be several aspects to be sorted out. Would experience be linked, shared, merged or kept separate? If integrated into a composite, a really good 3D merge-and-difference-finder program would be needed. What about legal agreements with your double, and security/privacy concerns and etiquette? The People's Court of the future could feature cases between multiple instances of the same person!

If experience could not be merged and shared, it could still be interesting to have one or more doubles even if those people would be continually diverging from you due to having their own experiences. It might be like having additional close - really close - family members. The others would be people of their own with the legal rights, economic needs, dreams, goals and activities of any other individuals.

There could be interesting tests to pass to demonstrate the impact of initiating a double in the physical world and its contribution vs. draw down of resources; although, this analysis is generally absent from the deliberation of current-day parents.

Sunday, November 02, 2008

Examining tool complexity

Tools and the science findings they enable evolve in lock-step. Many tools have been quietly transforming into complex entities of their own over the last several years. Exemplar contemporary tools on the landscape include many forms of the microscope, mass spectrometer, chromatograph, flow cytometer, and telescope.

The complex tools of today involve a hardware component together with many layers of software for operating, enumerating and analyzing. The analytics software layer has become critical as mathematical modeling, simulation, automation, statistical computation and informatics are expected features. For example, the new biology extends traditional enumeration and experimentation with the additional steps of mathematical modeling and software simulation, and building test biological machines in the lab.

The increasing complexity of tools means that it is no longer possible to just wait for hardware speedups: software is the weakest link (open source collaboration helps, but only modestly), mathematical advances have been figuring most prominently, and the cultural divide between hard science professionals and computer science, mathematics and statistics experts inhibits progress.

Sunday, October 26, 2008

Bye-bye Bush

Historical Oil Prices: 2004 - 2008

Sunday, October 19, 2008

Can social capital markets move from niche to core?

The 700+ participants in this week's first-of-its-kind Social Capital Markets conference think so. Session summaries are available here. Social capital markets denote markets and economic transactions where not only financial but also social and environmental aspects are of concern.

Right now there are three significant factors impacting the development of social capital markets:

First, the current failure of traditional capital markets models. We are in a moment of refashioning the global economy and the values and principles of social capital markets are being demanded: accountability, transparency, sustainability and governance. Social capital markets companies have a great opportunity to step in and help build the new world economic order.

Second, a broad social consciousness has developed. It started with Al Gore, Paul Hawken and others. Like all human behavior, economics is another area where deep awareness about the social and environmental impact of actions is necessary and increasingly available. Further, there is the idea of using the principles of business and economics as a tool for change; getting developing world populations actively involved as entrepreneurs receiving microloans has been vastly more successful in alleviating poverty than 30 years of foreign aid programs.

Third, the tools are now in place for realizing social capital markets. Web-based marketplace platforms and offerings are available for all manner of social economic transactions including investing (SRI public equity, social venture capital, debt, loans, microfinance, real estate and prediction markets), philanthropy (from donations to mission-related investments), purchasing (goods and services marketplaces) and income generation (jobs, projects and ideas marketplaces). Granular attribute selection can be used to allocate capital, which makes transactions more empowering for all participating parties.

In summary, the three factors driving the next stage of social capital markets (new models for rebuilding traditional capital markets, the development of a broad social consciousness in support of green markets, and the tools to execute and monitor these transactions) could help the area evolve from niche to core.

The true moment of progress could come when Social Capital Markets are no longer distinct from traditional capital markets but are rather merely a feature or attribute of all capital markets.

Sunday, October 12, 2008

Prime Directive redux

As a follow-up to the last post, Technology Intervention is Moral for advanced civilizations, it could be quite useful to develop a rigorous Principles of Societal Interaction to be ready for any potential future communications. Current Earth-based treaties and norms, as well as Star Trek's Prime Directive, as Hiro Sheridan points out, could be drawn upon for ideas.

Star Trek's Prime Directive espouses a strict non-interference policy towards other societies and identifies a key technological pivot point, the development of the warp drive allowing interstellar space travel.

The Prime Directive is an interesting blueprint; however, alternatives could be evaluated for at least three reasons: practicality, reality and moral imperative.

  • First, as a practical matter, the Earth-based examples have been cases of societies being aware of each other, and often interfering. A clean, invisible, non-interference model is probably not practical, even for societies scattered through space. Cognizant of the limited frame of human reasoning, it still seems that if there are multiple intelligent societies in the universe, it is at least possible that they will start finding each other through SETI-type programs and other means, either intentionally or accidentally. At minimum, it is not ascertainable that any and all advanced societies would have, and be able to successfully execute, a non-interference and non-awareness policy.
  • Second, in responding to the complicated nuances of reality, there is a difference between non-interference in the internal affairs of another society, in the Prime Directive and Westphalian sovereignty sense (supportable), and complete non-interaction (less supportable). There could be many types of interaction and diplomatic-mission technology sharing, as has been the historical precedent for Earth-based societies, whose objectives would not be in contravention of Westphalian sovereignty. Over time, it may even be that Earth-based intelligent society evolves a universal bill of rights for all intelligent life, irrespective of nation-state or other jurisdiction, such that concepts like Westphalian sovereignty become outmoded.
  • Third, as argued in Technology Intervention is Moral, it is not clear that non-interference is the most moral course, and an advanced society may consider it a moral imperative to offer certain types of suffering-alleviating, quality-of-life-improving technology to less advanced societies, as has been the case with vaccinations on Earth.


Sunday, October 05, 2008

Technology intervention is moral

Advanced civilizations may have policies for interacting with civilizations deemed to be less advanced than their own. On Earth, there is currently no cohesive national or global view on contact with any non-Earth based intelligent societies. In the case of Earth-based societies, interventionism has been the norm.

Assuming safe interaction and communication can occur and intelligence or proto-intelligence has been established, it is the moral obligation of any more advanced society to interact with any less advanced society.

It is a moral obligation to intervene for the purpose of technology-sharing, first and most importantly to ameliorate suffering and improve quality of life; consider vaccines, for example. Second, it is patronizing for the more advanced society to decide on the other's behalf whether or not to expose it to more advanced technologies. The moral and respectful path is to offer the newtech and let the other society decide.

Third, a broad goal of humanity is to lift all intelligent beings to an optimized state of fulfillment and contribution, so absent existential risk to the more advanced civilization, there is no reason not to share technology. Fourth, considering the 'do unto others' principle, the majority of humans would likely support intervention.

An alternate but less tenable view is that intervention is immoral and that the independence of the other civilization should be respected; the more advanced society does not have the right to interfere. It is better to let someone learn for themselves instead of teaching them; forget the matches and wait a few more centuries for lightning to zap the meat. However, even if the intervention is resented later, it is still more moral to intervene in the sense of reducing suffering, improving quality of life, etc.

Sunday, September 28, 2008

Our beautiful future

Just as worldwide over-dependence on oil and the costly Iraq war have hastened the way for new energy regimes, the U.S. financial bailout will hasten the use of economic models other than Darwinian capitalism as it has been known, where the most able seize maximum resources for themselves. Nascent social movements for opting out of the traditional economic system will become stronger. Science fiction is rife with dystopian models of robot-controlled governments (Daniel Suarez's Daemon is a recent example), but in many ways machine-like entities absent the agency problem could be a dramatic improvement over fallible people-administered governments. Technology is more often humanifying than dehumanifying.

As usual, the focus is on technological advances to remedy the current global energy, resource consumption and economic challenges. Given both history and the present status of initiatives, technology is likely to deliver. New eras may be ushered in even more quickly when demand is higher and complacency lower. A surveillance and sousveillance society is clearly emerging, simultaneously from top-down government and corporate programs and bottom-up individual broadcast of GPS location and other lifestreaming. The trend to freeing human time for productive and rewarding initiatives is continuing. What will be the first chicken in every pot, the robotic cleaner or self-cleaning nanosurfaces? How soon can all jobs be outsourced to AI? How soon will there be options on the nucleotide chassis?

Sunday, September 21, 2008

Synthetic biology advances

The realization of synthetic biology, one of the cornerstone fields in this century’s life science revolutions, is a step closer this year with three important advances.

First, synthetic biology movement leader Drew Endy has arrived at Stanford from MIT. Knowing that people and tools are critical to the area’s development, he is assembling a world class curriculum and department to tackle the challenges of synthetic biology, estimating that Stanford is four years behind.

Second, more than 85 university teams worldwide have entered this year's iGEM (international genetically engineered machines) competition. An estimated 900 students will be at MIT for the November 8-9 presentation of their work and the contest's culmination. Previous years' novel synthetic designs have included wintergreen- and banana-scented E. coli bacteria, virtual machine-like computational platforms in cells, and microbial cameras or light-programmable biofilms.

Third, record attendance is expected at the fourth annual Synthetic Biology conference, which will take place October 10-12 at the Hong Kong University of Science and Technology.

Biology investigation, modeling, simulation and building
Synthetic biology is starting to have more process and rigor, particularly as articulated by Martyn Amos in Genesis Machines. Several areas have been simultaneously improving and coming together: biological system and process enumeration, 3D software modeling and simulation, and biological machine building. As CAD and EDA allowed semiconductor designers to achieve new levels of productivity and automate complex circuit design and test, so too are software tools aiding biology.

Bio-SPICE (Biological Simulation Program for Intra- and Inter-Cellular Evaluation) is an open source framework and software toolset for the modeling and simulation of spatio-temporal processes in living cells. The innovation process for synthetic biologists is now:

  • investigate the biological phenomenon or mechanisms
  • mathematically model the existing or novel phenomenon
  • use software simulation to test the model (see the sketch after this list)
  • build it in the lab with standardized, off-the-shelf biological parts of synthesized DNA
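As a toy illustration of the "model, then simulate" steps (not Bio-SPICE itself, just a minimal sketch of one constitutively expressed gene with protein degradation; the rate constants are assumed for illustration):

```python
# Toy model-and-simulate sketch for a single constitutively expressed gene:
#   d[protein]/dt = synthesis_rate - degradation_rate * [protein]
# Parameter values are illustrative assumptions, not measured data.
import numpy as np
from scipy.integrate import odeint

synthesis_rate = 2.0      # protein molecules produced per minute (assumed)
degradation_rate = 0.1    # fraction degraded per minute (assumed)


def model(protein, t):
    """Rate of change of protein concentration."""
    return synthesis_rate - degradation_rate * protein


t = np.linspace(0, 120, 121)            # two hours at minute resolution
protein = odeint(model, y0=0.0, t=t)    # start with no protein present

print(f"steady state ~ {protein[-1, 0]:.1f} molecules "
      f"(analytic: {synthesis_rate / degradation_rate:.1f})")
```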

Sunday, September 14, 2008

Status of public transport

The second annual Bay Area TransitCamp was an interesting venue for public transportation employees, community representatives, software developers and interested citizens to discuss all manner of subjects relating to public transportation, in particular technical, planning, communications and policy aspects. The informal venue allowed high-quality information sharing, education and brainstorming from multiple viewpoints.

Public transport companies are trying to understand how to improve their service, and communications and interactions with riders and the public in general. Political and other community representatives are trying to understand how to improve transit solutions and decrease local traffic congestion. The upcoming $10 billion California ballot initiative for high-speed rail construction is pointing up what appears to be a lack of coordinated long-term strategic planning and integration of the nine Bay Area transit authorities.

Software developers are working on applications, services and standards.

There are two kinds of transit applications: schedule information and real-time updates
Third party developers like iCaltrain are providing more user-friendly platform-portable schedule information. NextBus is moving to the next layer by providing real-time GPS-enabled vehicle location information and service alerts, nationwide, as transit providers make data available. BART is twittering service alerts and other information. MyBart is overlaying public transit data with event information, discounts and sponsor offerings.
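For the schedule-information side, here is a minimal sketch assuming a GTFS-style feed (the comma-separated stops.txt/stop_times.txt files that many agencies publish); the feed directory and stop ID below are placeholders.

```python
# Minimal sketch: list upcoming departures for one stop from a GTFS-style
# schedule feed. Assumes the feed has been unzipped locally; the path and
# stop_id below are placeholders.
import csv

FEED_DIR = "gtfs_feed"          # placeholder directory with extracted feed files
STOP_ID = "12345"               # placeholder stop identifier


def departures_after(stop_id: str, after: str = "08:00:00", limit: int = 5):
    """Return the next few scheduled (departure_time, trip_id) pairs for a stop."""
    times = []
    with open(f"{FEED_DIR}/stop_times.txt", newline="") as f:
        for row in csv.DictReader(f):
            if row["stop_id"] == stop_id and row["departure_time"] >= after:
                times.append((row["departure_time"], row["trip_id"]))
    return sorted(times)[:limit]


if __name__ == "__main__":
    for departure_time, trip_id in departures_after(STOP_ID):
        print(departure_time, trip_id)
```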

The main Internet-enabled service is the formalization and expansion of ride share and carpool systems. There are already a few dozen of these services, such as Zimride, Avego, RideNow, eRideShare and GoLoco, for dynamic or planned, local or long-distance rides, and, as usual, already a need for a meta ride-share site.

TripML and DynamicRidesharing are two standards efforts under development and promulgation.

It still looks like quite a wait for PRT (personal rapid transit).

Sunday, September 07, 2008

Cost of each new drug: $1.3 billion

The Tufts Center for the Study of Drug Development¹ estimated that it was costing $1.3 billion in 2006 to bring each new drug to market. Why is it so expensive and why does the cost keep growing precipitously? There have been some technology advances, but they are expensive and have helped to raise the number of discoveries but not the number of approved drugs. Little cost-scaling is available at present for the clinical trial and production process bottlenecks of current drug development.

Quadrupling from $318 million (1987) to $1.3 billion (2006).
Tufts Center for the Study of Drug Development¹

Biggest cost component: clinical trials
The biggest cost in bringing a new drug to market is human clinical trials, and these costs continue to grow. Without standardized electronic health records and other obvious initiatives, it is time-consuming and costly to source and enroll appropriate clinical trial participants. The cocktail problem is also in effect, as people have had more varied health issues and remedies over time. The amount of detail to be collected and assessed increases, and homogeneous, isolated-factor patient comparisons become more difficult.

Increased complexity and amortization of failed drugs

The low-hanging fruit drugs have already been discovered. The diseases currently studied have less readily identifiable and more complex target molecules in the body. The target molecules have more intricate biological interactions and less easily matchable compounds for therapies. Each successful drug includes the cost of failed drugs as only one in five marketed drugs is able to pay for its R&D costs.

No cost decreases for biologic drugs

One of the main kinds of drugs produced since 1998 is biologic drugs, drugs that mimic the effects of substances naturally made by the body. The fixed time, process and other costs required to produce these genetically engineered substances mean that economies of scale do not ensue at larger volumes. This contrasts with traditional drugs, which became cheaper to produce over time, helping to offset the cost of new discoveries.

Unclear benefits of newtech
How can the paradox of technology advances yet constant numbers of approved drugs be explained? Technology advances have been proliferating in areas such as mass spectrometry, protein crystallography, chromatography, flow cytometry, microfluidics, genetic scanning and synthesis and atomic force microscopy, all of which are helpful but expensive. The ongoing cost to maintain a state of the art lab has skyrocketed. Newtech has meant that the rate of discovered substances and medicines in development is increasing (2,700 compounds are in development in 2007 vs. 2,000 in 2003)², but the complete process of creating viable therapies and moving them through clinical trials to approval is the bottleneck.


¹ J.A. Dimasi and H.G. Grabowski, “The Cost of Biopharmaceutical R&D: Is Biotech Different?,” Managerial and Decision Economics 28 (2007): 469-479
² Adis R&D Insight Database, 27 February 2008

Sunday, August 31, 2008

The long arm of the corporate agenda

One of the most interesting places to look for social commentary is … in corporate annual reports. The medium is ripe for gaffes, unmasked agendas and misdeeds, for example, Big Oil's unabashed proxy statement solicitations to shareholders each year to vote against greenhouse gas emissions goals and environmental impact reports (documented in this post and comments).

General Mills. America’s processed food provider.
The 'fattening of America' trend has treated General Mills well: the stock is up 15% year to date (vs. the S&P 500's 12% decline) and up 40% since 2005.


General Mills 2008 Annual Report - front cover

You be the judge - is the General Mills 2008 Annual Report a triumph of demographic marketing or would it be over the top to see it as a perpetuation of racial stereotyping?

The Latina is featured center front with a fajita product (go Hispanic demographic!). Why is she the only one sitting? The fit-looking Caucasian woman, in pearls, is holding snack bars (because as a size 2, she is only entitled to diet food?). The African American man is holding nothing (what is the messaging here? That General Mills does not have products (or jobs, see back cover) for African Americans?).


General Mills 2008 Annual Report - back cover

Flipping to the back cover, another healthy looking Caucasian woman is holding party mix (sorority alumnae?). The older guy with the paunch is leaning on several cases of sugar-coated cereal, and the balding corporate executive is holding reports, not food items. Only the women and the overweight men have food? Where is that lucrative gay demographic? Or maybe the African American male is doing double duty as a metrosexual.


General Mills 2008 Annual Report
Chairman and CEO photo

Inside, the story continues. The Chairman and CEO jauntily appear, strangely without food products, though collegially in shirtsleeves.

Sunday, August 24, 2008

Economic fallacies II

Fallacy #3: The singularity is a great investment opportunity
A technological revolution like that brought about by the PC or the Internet is a great investment opportunity. Current possibilities for this kind of compound growth in technology-driven financial returns include alternative energy, genomics, personalized medicine, anti-aging therapies, 3d data manipulation tools and narrowly-applied artificial intelligence.

A technological singularity is not necessarily a great investment opportunity. A technological singularity implies change so radical and diffuse that prior models for understanding and exploiting or profiting from the world will no longer work. There is a substantial risk that financial markets as they are known today could disappear. Growth, alpha and superior financial returns may be irrelevant in a post-traditional financial markets era. Planning for the possibility of a technological singularity suggests a much broader definition of what the assets of the future may be and allocating to these areas, a substantial shift away from today's mindset of asset preservation and financial returns that outpace inflation over the long run.


Fallacy #4: Economic systems become irrelevant in a post-scarcity economy
This is the notion that economies and markets go away in a post-scarcity economy for material goods. At present, an increasing number of goods and services are becoming available for free or offered via modern business models such as the freemium. In the future, substantially all material needs may be met easily, at low cost or for free, in a molecular-nanotech society, but scarcity as an economic dynamic is likely to persist, and economic systems in general are also likely to continue.

Scarcity would be perceived in whatever material resources were not yet plentifully available and in any finite resources such as time, ideas, attention, emotion, reputation, quality, etc. Economic system dynamics could change substantially; for example, property tax would not make sense in a world where nanotech could rapidly build or absorb structures. Unless economics and markets are superseded as the most effective means of resource distribution, they are likely to endure.


Fallacy #5: Social capital markets need not deliver competitive returns
The conventional notion is that it is acceptable for social capital market investments to deliver lower returns than traditional financial instruments. Social capital market investment products include SRI equity funds, corporate governance initiatives, social capital venturing (private equity), fair trade coffee and organic products. On average, consumers are willing to spend 5% more for attribute products (products with affinity attributes such as fair trade, local, organic, etc.) and investors have been willing to sacrifice 5% or more in financial return for socially responsible investments.

However, after some implementation time lag, social capital could have equal or higher returns. Sustainable socially responsible businesses should be more profitable not less. Both direct tangible economic benefits can accrue as well as the indirect benefits of marketing and market-knowledge that the business is more principled and sustainable. Corporate governance and other green or social initiatives should benefit the bottom line, not penalize it. The notion that return and social good are mutually exclusive is a fallacy.

The article with all nine fallacies is available here

Sunday, August 17, 2008

Fallacies when thinking about the economics of future technology

Future technologies seem so impactful and fabulous that it is easy to jump to incorrect conclusions about what things would be like with their advent.


Fallacy #1: Molecular assemblers will have a worldwide overnight rollout
The conventional assumption is that once humans are able to make one molecular assembler, it will be able to self-replicate, and therefore within twenty-four hours everyone worldwide will have one. It is far more likely that a molecular assembler would follow the usual s-curve adoption pattern of any other newtech: early versions are expensive and clunky with minimal functionality, and continued improvement iterations make the newtech more relevant and usable.

The first molecular assemblers may be like a next generation 3d printer, printing the T-shirt a friend sent as an email attachment. Only early adopters will have the utility (read: money and interest) to purchase the first molecular assemblers. Also, the first molecular assemblers will not be able to self-replicate as the intricate molecular manufacturing processes will need to be conducted at special facilities.

Finally, the full newtech ecosystem needs to be considered: while carbon and other basic elements could be obtained easily from dirt piles delivered to suburban driveways, industrial utility solutions are needed for the 50% of the world that is urbanized. Cartridge supply for specialty elements (think Gillette) will be required. Matter decompiling will need to be a feature of the molecular assembler, or there will need to be some other means of recycling. More here, here and here.


Fallacy #2: Don’t develop newtech if it’s not cheap enough for universal access
This is the view that we should not develop any beneficial newtech unless it can be immediately accessible worldwide at a low price. "Folks, let's not make the ENIAC since not everyone can have one." However noble this view may be, it again ignores the historical precedent of technology development, rollout and penetration. A fundamental property of technology is that it may be expensive at the outset, but then price drops, functionality improvements and re-purposing to new markets occur over time. For example, those currently paying $100,000 a year for life extension treatments are hopefully helping to rationalize, standardize and develop a broader market for these services.

Work can still be done on open-source and universal accessibility models, and diligence applied to clearing public goods into non-IP-protected regimes (e.g., the human genome), but with the understanding that traditional technology development models (cost drops over time) will continue to drive progress.

In fact, there can be benefits in not adopting newtech immediately: costs are higher, unintended consequences are unknown, early adopters can work out the kinks (e.g., the first generation iPhone cost $600, while the second generation iPhone 3G with expanded functionality emerged a year later at $199) and older technology generations like landline telephony can be skipped. World-is-flat cycle time speed-ups and new business models (e.g., OneWorldHealth as a non-profit pharmaceutical company directed at developing world disease) illustrate market efficiency in applying traditional technology development in today's world.

The article with all nine fallacies is available here

Sunday, August 10, 2008

Human augmentation via bacterial biome

Human augmentation of physical and mental capabilities by bringing electronics on board seems a likely future. The early stages have already been realized: 10% of Americans are cyborgs today in the sense of having synthetic items permanently implanted (hearing aids, teeth, pacemakers, hip and knee replacements, RFID chips, etc.). Cochlear implants interfacing with hearing cognition for deaf children are routine. Neuroengineering research has been progressing in the implementation of electroencephalography-based computer controllers. Brain cap video game headsets may become the norm.

There are at least three ways of achieving human-electronic interfaces: physical implants, wearables and a third, as yet largely unconsidered, possibility, exploiting the human bacterial biome.

The 1,000 trillion bacteria that are part of each human (10x the number of human cells) could be an ideal augmentation substrate.
There are trillions of them, they are already on board and pass easily and unobtrusively in and out of the human. They are easy to obtain, test and experiment on in the lab. They are expendable. Functionality could be enabled individually, or distributed over the 500 – 100,000 classes of bacteria.

Augmentation applications: communications and processing
The two most important augmentation applications are communications and processing, both of which could potentially be conducted via the human's microscopic bacteria. Communication is required both between the bacteria, which they already do to some extent, and externally to the Internet, using wireless, Bluetooth or some other protocol, possibly yet to be developed. What a vast improvement on-board connectivity would be: never having to depend on the vagaries of PC wireless cards, modems and Bluetooth devices.

The second main application is processing, the initial killer app being a memory aid. Coordinating the bacterial biome into a distributed biocomputer for searching, downloading, accessing and delivering information would be an obvious first goal. Other applications could include continuous lifelogging perhaps (literally) through eye cam bacteria, personalized biosensing and remediation of the external environment as it interfaces with humans, virtual reality and nutrient and waste cycling. Electricity from the body could possibly facilitate these computations.

Easy upgrade and maintenance
The continual turnover, ingress and egress of bacteria in humans means that upgrade cycles and retirement of dead or non-functioning elements could occur seamlessly. Bacterial updates could be printed regularly from a 3d printer or automatically dispensed in smarthome air or water. Mechanically, the updates might be delivered through the air, in the shower, or by a nutrient blanket during sleep.

Nanobot intermediaries
Enhancing the human bacterial biome would really just be extending the life support functionality it already provides, and could be a nice intermediary step toward the more robust bionanodevices and nanobots envisioned in molecular manufacturing. Existing bacteria could be enhanced (much of the human microflora does not appear to be doing anything anyway), or additional bacteria could be engineered and brought on board.

Sunday, August 03, 2008

VC life extension opportunity redux

The VC life extension investment opportunity could happen in at least three areas: first and most importantly, translational medicine, second, health social networks and third, distantly, standardized longevity treatments and delivery systems.

Translational medicine
The biggest and most obvious longevity play for VCs is in translational medicine, shepherding and commercializing science findings from basic research to patient therapies. The vast majority, perhaps even 90% or more, of basic research findings never go beyond the lab or journal publication. A particularly high-profile example of an early stage longevity company that rocketed from test tube to IPO to big pharma acquisition is Sirtris, bought by GlaxoSmithKline for $720M in June 2008. Some other early stage translational medicine longevity startups are Elixir Pharmaceuticals, Juvenon and Sierra Sciences. Amid the plethora of remedies claiming scientific support, it is critical to tightly link the research evidence to the intervention. For example, in this year's exploding brain fitness market, there are many claims of scientific support but little published clinical trial evidence.

Health social networks
Another VC play is health social networks (for example, PatientsLikeMe, CureTogether, DailyStrength, HealthChapter, Experience Project and peoplejam). Not only can patients connect and generate group-synthesized, curated and moderated knowledge, but health social networks can also facilitate the development of personalized medicine by being a repository for the quantitative data of genomics, ongoing biomarker measurements and electronic healthcare records. Big pharma can approach patient communities for field studies and clinical trials.

Longevity treatment delivery
Standardized longevity treatments and delivery programs via private clinics are not a traditional VC play, but some interesting 10x business models may be possible. In the several years before longevity treatments are more proven and automatically administered via traditional healthcare channels, these services can be provided by private clinics as they are now.

What would improve the longevity treatment market is standardization: standardization of doctor qualifications, certifications, validations, services and treatments, with treatments supported by scientific research clearly evidenced to consumers. Currently, longevity doctors offer heterogeneous suites of services, which are challenging for consumers to parse. Positioned appropriately, there is more than adequate demand despite the current lack of insurance reimbursement. Since longevity treatments are currently outside the purview of traditional medicine, third party certification (for example, led by the Methuselah Foundation) could help validate doctors and treatment programs, and contribute to industry standards.

Sunday, July 20, 2008

Next big VC market: life extension?

Life extension is a growing market and could be the next significant industry targeted by venture capitalists and private investment as alternative energy and clean tech eventually wane. The opportunity is made obvious by continuously soaring costs in the world's largest industry, healthcare; unfunded Medicare-type liabilities in every industrialized country; the demographic aging of populations and below-replacement fertility rates; and massive demand and willingness to spend on longevity remedies.

What is the Life Extension Market?
The life extension market is the commercialization of scientific findings from stem cell, immunology, cancer, regenerative medicine and other areas of research. The linkage between products and the underlying research will hopefully become stronger and more standardized. For example, many longevity remedies available today claim scientific support. This research could and should be linked to the products online (23andMe is a nice example of research linkage) so consumers and other interested parties can research the products themselves. Bloggers and other independent intermediary watchdogs could synthesize the scientific research and confirm or deny the product claims.

Longevity Docs Needed
It is not clear that traditional physicians will be those prescribing longevity remedies. Specialist longevity docs are needed and will likely arise and market themselves as such; there are a few examples of this today. Most traditional physicians do not currently have expertise in new areas such as longevity and personalized genomics, or the enhancement-and-prevention vs. cure mindset.

Supplements, Hormones and Enzymes
The first step in life extension treatments is supplements, ranging from a daily multivitamin to the 200 or more supplements per day taken by futurist Ray Kurzweil. The next step is hormone and enzyme replacement therapies, which must generally be overseen by a physician. The variety of treatments undertaken has depended on the shifting legal climate, the fact that not everyone wants to be restored to the hormonal levels of their twenties, and other reasons.

Longevity Social Network
It would be great to have a health social network (HSN), like PatientsLikeMe and CureTogether for the longevity community. First, people could share the different interventions they are trying. Second, they could upload their ongoing bio-marker test data into an aggregated electronic health record, similar to what Google Health is contemplating, to track and possibly share the impact of the interventions. Third, companies with research and therapies targeting this market could contact an aggregated group to propose field studies, clinical trials and offerings. For example, the 23andme Parkinson’s community has been contacted for such research.
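A minimal sketch of the kind of biomarker log such a longevity health social network might aggregate, and the simplest before/after comparison it could support; the marker name, dates and values are made up for illustration.

```python
# Minimal sketch of tracking one biomarker before and after an intervention.
# Marker names, dates and values are illustrative placeholders.
from statistics import mean

log = [
    {"date": "2008-01-15", "marker": "CRP", "value": 2.4},
    {"date": "2008-03-15", "marker": "CRP", "value": 2.6},
    # intervention started 2008-04-01
    {"date": "2008-06-15", "marker": "CRP", "value": 1.8},
    {"date": "2008-09-15", "marker": "CRP", "value": 1.6},
]

INTERVENTION_START = "2008-04-01"

before = [e["value"] for e in log if e["date"] < INTERVENTION_START]
after = [e["value"] for e in log if e["date"] >= INTERVENTION_START]
print(f"CRP mean before: {mean(before):.2f}, after: {mean(after):.2f}")
```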

Sunday, July 13, 2008

Mobile surveillance?

Are location-based services (LBS) quickly becoming the next Facebook? Loopt would be one prominent example. As Internet- and GPS-enabled mobile phones proliferate with the spread of the iPhone and the two-year upgrade cycle for mobile devices, about 45% of mobile devices in the U.S. now have wireless broadband access. Local search and other mobile applications can finally be realized.

Location-based services, also called location-aware services, are about finding out what is nearby and who is nearby.

The what is fairly straightforward. This is more granular and possibly dynamic information about the places nearby; restaurant and store reviews, movie times, historical information for self-tours, etc. and the ability to take action, to make reservations, buy tickets, etc.

The who is related to people you know and people you don’t know. One social application is enabling your geographical location to be seen to friends in your network so they can easily find you. Another application is chat networks for people that are nearby, waiting somewhere for example, that would like to meet or just chat anonymously. There are also public-space interaction projects, games, art, etc. where people can use their cell phones to interact with a billboard or display, playing a video game against someone else in Times Square or making billboard art, either directly with call or text input or ambient algorithms following network participants as they pass through the area.
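At the heart of both the "what" and the "who" is a proximity query. Here is a minimal sketch of the distance filter such a service might run, using the standard haversine great-circle distance; the names and coordinates are arbitrary examples.

```python
# Minimal sketch of a "who/what is nearby" filter using the haversine
# great-circle distance. Coordinates and names are arbitrary examples.
from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km


def nearby(me, others, radius_km=1.0):
    """Return the names of entries within radius_km of my location."""
    return [name for name, (lat, lon) in others.items()
            if haversine_km(me[0], me[1], lat, lon) <= radius_km]


friends = {"alice": (37.7793, -122.4193), "bob": (37.8044, -122.2712)}
print(nearby((37.7749, -122.4194), friends))   # -> ['alice']
```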

Age-tiering of technology
Twenty-somethings may be happy to permission their friend networks to see where they are, and senior monitoring could be desirable, but are the 30-70 age tiers as interested in this functionality? Is the last modicum of privacy breached when your friends and family can see that you are at the gym, at the dry cleaner, not at your office, etc., or is it too plebeian to care if everyone does it? Cell phones mean we can be reached everywhere, but do we want to be seen everywhere?

Sunday, July 06, 2008

Twittering Dante

Is the medium sacred, or could any content be served up in any medium? As vast content libraries become increasingly available on the web, content consumption should take whatever form is convenient and preferable to the user.

User-generated content is already here; user-specified content is the next logical step. The amount of content available in multiple forms is growing; examples include Audible books, and talks and interviews presented as transcripts, podcasts and videos.

User-specified content
In a robust platform, users could select content topic, format, level, detail tier and social dimension. Content topic is the main parameter chosen at present. Delivery formats could range from the text formats of book, academic paper, article, blog post and tweet to the multimedia formats of audio, video, slidecast, video game, etc. The level of content could be, most basically, popular vs. expert. The detail tier could be summary, outline, article, full detail or annotated version. The social dimension would include comments and reviews by others. These could all be drop-down menus at the top of Wikipedia.
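
A minimal sketch of what such a user-specified content request might look like as a data structure, using the dimensions above as parameters; the names and allowed values are illustrative only, not any real platform's API:

    # Sketch of a user-specified content request; parameter names mirror the
    # dimensions described above and are purely illustrative.
    from dataclasses import dataclass

    FORMATS = {"book", "academic paper", "article", "blog post", "tweet",
               "audio", "video", "slidecast", "video game"}
    LEVELS = {"popular", "expert"}
    DETAIL_TIERS = {"summary", "outline", "article", "full detail", "annotated"}

    @dataclass
    class ContentRequest:
        topic: str
        format: str = "article"
        level: str = "popular"
        detail: str = "summary"
        include_social: bool = True   # comments and reviews by others

        def validate(self):
            assert self.format in FORMATS and self.level in LEVELS and self.detail in DETAIL_TIERS

    request = ContentRequest(topic="Dante's Inferno", format="tweet", detail="summary")
    request.validate()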

Medium purists will insist that the only way to view the Mona Lisa is to hike up to the humid corridor at the Louvre and squeeze in to peer at the small painting with the rest of the crowd, but others are moving with the times. Who will be the first entrepreneur to do a twelevator pitch - twitter a business plan - to a VC, and which VC will be the first to request twelevator pitches? If you can't explain your business in 140 characters or less, fahgetaboutit!

New literacy content and medium synesthesia
The new literacy is that the educated person of today is able to express ideas in a variety of media – in some combination of the traditional reading and writing AND in computer programs, 3d virtual worlds, synthetic biology, video games, 3d printing and visual storytelling. Does the new literacy mean that there could be a representation of all content in all media? Is there an opera of Cornell fab@home objects? What does an Indiana Jones movie look like in synthetic biology? What is a kinesthetic experience of protein folding?

Tuesday, July 01, 2008

Status of Research on Human Aging

Longevity is the new alternative energy
With $10 million quickly raised by the Methuselah Foundation, VCs just beginning to see the opportunity and healthcare costs continuing to soar, the longevity market could easily become as big as the alternative energy/climate change solutions market is today.

Longevity research status
Grossly generalizing, the main focus in aging research is figuring out how to get processes that already occur (in the young and in cancer, for example) to occur at other times, in the old. The optimum approach may include both reverse engineering and forward engineering in the form of synthetic biology, an approach that has been successful in other areas of biological research such as gene synthesis.

Aging is multidisciplinary, comprising at minimum the study of stem cells, immunology, cancer, DNA damage, tissue engineering, genetic engineering, regenerative medicine and micronutrients.

A comprehensive collection of anti-aging research findings was presented at the Aging 2008 conference, held June 27-29 at UCLA. The current developmental stage of aging research is early, perhaps the second inning. Groundwork is being laid, phenomena are being documented, an understanding of general mechanisms is being sought, existing processes are being enumerated and early cycles of testing have begun, primarily in flies and mice.

The seven primary causes of aging are DNA mutations in the cell nucleus and in mitochondria, junk that builds up inside and outside cells, cells sticking together, and cell loss and death. These are described at length, together with potential solutions, in aging research pioneer Aubrey de Grey's book Ending Aging and in the journal Rejuvenation Research. De Grey's organization, the Methuselah Foundation, provides grants to anti-aging researchers. Some of the freshest thinking so far has included biomedical remediation (therapeutic organisms purpose-catalyzed in the body) and the possibility of removing the overly damage-prone mitochondrial DNA.

Generalized summary of Aging 2008 research findings:

  • Applying (non-individual specific) substances from the young to the old appears to work
  • With aging, not only does "good stuff" (cells, processes, etc.) decline but "bad stuff" also arises
  • The quality of the biological environment facilitates or inhibits activity and repair
  • Treatments may be most effective when begun in youth or middle age
  • The goal is to extend healthspan not just lifespan

DIY biohacking and the cocktail problem

Every bit as interesting as the scientific talks were the informal discussions of the wide range of interventions, treatments, supplements and other anti-aging remedies in use by conference participants. The cocktail problem is that multiple remedies taken in concert may interact with one another in ways that have not been studied. Never has there been a market with such demand and so few offerings as anti-aging remedies.
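
One way to see the scale of the cocktail problem is to count possible pairwise interactions, which grow roughly with the square of the number of remedies; a quick illustrative calculation:

    # n remedies imply n*(n-1)/2 possible pairwise interactions,
    # before even considering higher-order combinations.
    from math import comb

    for n in (5, 20, 200):   # 200 is roughly Kurzweil's reported daily supplement count
        print(n, "remedies ->", comb(n, 2), "possible pairwise interactions")
    # 5 -> 10, 20 -> 190, 200 -> 19900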

Sunday, June 22, 2008

Paradigm shifts in thinking

Biology has shifted from an art to a digital information science to an engineering problem.

Medicine has shifted from treatment to prevention and from normalization to enhancement.

Health has shifted from passive response to proactivity.

Scientific method has shifted from hypothesis and experiment to simulation and empiricism.

Literacy has shifted from reading and writing to being able to express ideas in different forms of media such as software, 3d printing, virtual worlds and synthetic biology.

Sunday, June 15, 2008

Social media and Enterprise 2.0

There is a much deeper application of Web 2.0 technologies and concepts possible in the enterprise than is currently being contemplated and implemented. Some companies have made early efforts to use some of the tools but have not noticed that the concepts themselves can be applied to generate significant benefit. Worse, misapplication is also occurring, such as the creation of Social Media Officers who are oblivious to the bottom-up rather than top-down nature of social media.

Future of social media
The long-term future of social media is lifelogging - the auto-capture and permissioned auto-posting or archiving of every person’s every thought and experience. Feedhavior’s digital footprints continue to drive individual actions. The corporation goes away. Artificial intelligence becomes the most efficient form of outsourcing. People and organizations spend more time in simulation worlds than physical worlds. Entrepreneurs and organizations provide goods and services by making offerings proactively to groups of potential customers aggregated through their web-based interest communities. Marketing must be relevant to avoid being perceived as advertising.
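
As a toy illustration of what permissioned auto-posting might involve, here is a sketch that routes each captured event according to an owner-defined policy; the event kinds and policy format are invented for the example:

    # Toy sketch of permissioned auto-posting for lifelogged events;
    # the policy structure is hypothetical.
    def route_event(event, policy):
        """Post, archive or drop a captured event according to the owner's policy."""
        action = policy.get(event["kind"], "archive")   # default: keep privately
        if action == "post":
            return ("posted", event)
        if action == "archive":
            return ("archived", event)
        return ("dropped", None)

    policy = {"location": "post", "purchase": "archive", "health": "drop"}
    print(route_event({"kind": "location", "value": "at the gym"}, policy))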


Applying Web 2.0 Technologies to the Enterprise
There is no part of the firm at present that cannot make use of Web 2.0 and social media technologies. There are two dimensions for application:

External and Internal
Externally, a firm can use Web 2.0 and social media technologies for branding, re-inventing and testing business models, product and service sales, customer relationship management (CRM), partner ecosystem management, R&D outsourcing and recruiting. Internally, firms can use Web 2.0 and social media technologies for communication, collaboration, work assignment, task and project management, resource allocation and performance feedback.

Tools, Concepts and Values
Some examples of the direct application of social media and Web 2.0 tools are using blogs to supplement or replace marketing, APIs to supplement or replace business development, and crowdsourced ideagoras to supplement or replace R&D. Applying the concepts means, for example, not just using Digg for the firm's industry news feeds but applying Digg-style functionality to vote work assignments or performance feedback up and down. Using Web 2.0 in the enterprise is not just consuming mash-ups but mashing up internal applications, for example putting a virtual-world front end on a data application to represent the information in a high-resolution way. Internal trainings and meetings are conducted as open-space unconferences. Everyone can participate in everything.
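
A minimal sketch of what Digg-style voting on internal work assignments could look like, assuming simple up/down votes per item; the identifiers and functions are hypothetical, not Digg's actual API:

    # Hypothetical Digg-style voting applied to internal work assignments.
    from collections import defaultdict

    votes = defaultdict(int)

    def vote(item_id, up=True):
        """Record one up- or down-vote for a work item."""
        votes[item_id] += 1 if up else -1

    def ranked(items):
        """Order work assignments by net votes, highest first."""
        return sorted(items, key=lambda i: votes[i], reverse=True)

    for item, up in [("refactor-billing", True), ("refactor-billing", True), ("update-wiki", False)]:
        vote(item, up)
    print(ranked(["refactor-billing", "update-wiki"]))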

The values of social media also apply internally and externally: authenticity, openness, transparency, participation, creativity, perpetual beta, new linkages, asking the wisdom of crowds (web, Twitter), acknowledgement that everyone can have good ideas and contribute, and the use of freemium and open-source business models (free + fee-based).