Sunday, April 06, 2008

Advanced civilizations forgo simulation?

Nick Bostrom's Simulation Argument articulates three future possibilities: “either almost every civilization like ours goes extinct before reaching technological maturity; or almost every mature civilization lacks any interest in building Matrices; or almost all people with our kind of experiences live in Matrices.” He suggests that, based on what we know now, there is about a 20% chance that we are living in a Matrix.

Considering these three possibilities, the second looks most probable: technologically mature civilizations are not interested in running simulations. This case seems plausible both on its own merits and because simulations with more primitive self-aware participants (ourselves) may hold little value for such civilizations.

Future irrelevance of simulations with self-aware participants
It is quite possible that technologically advanced societies will be able to achieve their objectives more effectively by means other than running simulations. These more efficient means could include pure mathematics, higher levels of abstraction, greater intelligence, and better tools.

The main reasons a non-technologically-advanced society like ours thinks running simulations would be useful are to gain a deeper understanding of ourselves and our behaviors, to explore alternative histories, and to provide leisure and entertainment, either as observers or as participants.

It is quite possible that future societies and entities will understand themselves so well that simulation will not be necessary. For example, augmented-reality overlays could display or predict another party's utility function and provide a 3D data visualization of their likely future behaviors.

Alternative histories (e.g., Napoleon not defeated at Waterloo, no Yucatan impact asteroid 65 million years ago, the Greenland ice sheet not melting in the 2000s) would not need to be run; they would simply be known or predictable. They would be obvious to a superintelligence in the same way that I know my shoe will come untied if I pull on the lace: I do not need to actually try it. This may also be true for clinical trials, all areas of biology, and most scientific experiments. Simulations with self-aware participants could be useful to a future society, but only in a forward-looking sense, for situations more complex than the then-current technology can handle directly, and the self-aware participants would be contemporary to that era, not historical (us).

There are other situations where the objective is not obtaining information, for example the ever-burgeoning entertainment field. Entertainment simulations with self-aware participants would certainly be in demand. However, the primitive level of current humans would render us uninteresting in interactions with future society. For time tourism back into history, again, there is no reason to include primitive self-aware agents such as ourselves; artificial intelligences or non-sentient simulations could realize the experience more adequately.

Finally, there is the possibility that new reasons for running simulations could emerge as society advances, and it would be for these as-yet-unknown reasons that a future society would run them. There is some non-zero probability of this.

Conclusion
It could be quite likely that future societies will have technologies more advanced than simulation, and even in the cases where it is interesting for a future society to run simulations with self-aware participants, those participants would likely be contemporary, not primitive (us). There may therefore be less than a 20% chance that we are living in a simulation.

7 comments:

Roko said...

The reason that Bostrom's argument is so hard to argue against is that there only has to be one person in [for example] the year 3000 who wants to run ancestor simulations in order for the number of simulated humans to vastly outnumber real ones.

This person could be mentally ill, could have a weird sense of humor, etc, etc, etc. The computing power will be so cheap that even a social outcast or criminal would be able to do it.
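This "it only takes one" point can be made concrete with a back-of-the-envelope calculation. All the numbers below (real population size, simulations per simulator, population per simulation) are purely illustrative assumptions, not figures from Bostrom's paper; the point is only that the simulated fraction dominates almost regardless of the specific values chosen:

```python
# Back-of-the-envelope sketch of Roko's point: one motivated simulator
# is enough for simulated observers to swamp real ones.
# All quantities are hypothetical assumptions for illustration.

real_humans = 10**10          # assumed population of the base reality
simulators = 1                # a single eccentric individual suffices...
sims_per_simulator = 10**6    # ...if computing power is cheap enough
humans_per_sim = 10**10       # each ancestor simulation hosts a full population

simulated_humans = simulators * sims_per_simulator * humans_per_sim

# Under a principle of indifference, the probability that a randomly
# chosen observer is simulated equals the simulated fraction of all observers.
p_simulated = simulated_humans / (simulated_humans + real_humans)
print(p_simulated)  # overwhelmingly close to 1 under these assumptions
```

Varying the assumed numbers by several orders of magnitude barely moves the result, which is why the argument hinges on whether *any* simulator exists rather than on how many.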

Consider the following analogy: today, there are some people who string cats up on washing lines and beat them to death with baseball bats. There are not many people who do this - it is a very rare occurrence - but it happens. All of the arguments that you just made as to why people of the future will not run ancestor simulations apply to this case as well. For example, you said:

However, the primitive level of current humans would render us uninteresting in interactions with future society.

which corresponds to

However, the primitive level of current cats would render them uninteresting in interactions with modern humans.

- which is clearly false! [you could replace "cat" with "mouse" if you think that cats are nearly as complex as humans. There are lots of pet mice around. It seems to me that the primitiveness of domestic animals is, to a certain extent, part of their appeal]

You said: They would be obvious to a super-intelligence in the same way that I know my shoe will come untied if I pull on the lace, I do not need to actually try it.

As if it is not obvious to people who beat a cat around the head with a baseball bat that the cat will die...

The overall lesson is that it's not enough to argue that most people in the future will not run ancestor simulations. You have to argue why ALL people in the future won't run those simulations. The only way I can see this happening is if it is forbidden to do so by law, and that that law is ruthlessly and vigorously enforced. Some kind of participatory panopticon would be a good way to achieve this.

Roko said...

Following on from the pet analogy, here's a scary thought I had: lots of people keep pet snakes. They like the snake even though it is a vicious killer and requires live prey. They have to keep feeding the snake, so they need a supply of some poor little mammal, for example some kind of mouse, to feed it with. Note that the mouse is more closely related to the human than the snake is.

It is not impossible that some very advanced posthuman will keep a pathologically unfriendly AI as a pet and will need a ready supply of simulated human civilizations to unleash it on...

Anonymous said...

We also might want to consider how broad the term "simulation" can be. When I imagine what will happen at work tomorrow, that is a form of simulation. When we dream we are creating extremely convincing simulations.

It could be that at some point there is no difference between imagination and simulation. If that were the case then creating simulations might just be another part of the cognitive process of posthumans.

LaBlogga said...

Hi roko, thanks for the detailed comments.

You point out a good reason why primitive self-aware agents (e.g., us) in simulations may exist.

This raises some interesting follow-on questions such as - could it be assumed that all or most simulations with primitive self-aware agents run by future societies are run by 'evil puppet-masters,' the equivalent of the animal experimenters/torturers you mention? How should knowing that it is likely we have an evil puppet-master change our behavior?

It is possible that future society would be so advanced that simulating humans would be too boring even for malevolent or other tinkerers; maybe like us simulating amoebae, not interesting for long.

I think enforcement of nearly anything continues to get harder to achieve, despite any presence of a top-down or bottom-up panopticon.

Your second comment could be an interesting sci-fi story - AIs are trained and let loose to run amok on live simulations of human society! However, training AIs or superhumans or transhumans on simulations would more likely use then-contemporary self-aware agents, not historical ones like us.

In summary, cases two and three of Bostrom's argument could be true simultaneously: that most technologically-advanced society participants are not interested in running sims [but some are], AND we are living in a simulation.

LaBlogga said...

Hi anonymous, thanks for the comment.

I agree that simulation is a plastic word and there can be many different kinds of simulations. I am discussing the case of simulations with conscious autonomous agents that are live and self-directing in the simulation.

I wonder if you are thinking that external autonomous agents' actions would be connected into the imagination, which would make it indistinguishable from simulation. Karl Schroeder's "Lady of Mazes" has a similar concept: copies of people's mindfiles are saved into one's cognitive space, and one can realistically simulate their presence and actions.

Jazo said...

I hope you don't mind the redundancy, but I cross-commented over from roko's blog.

What about the possibility that we are AIs being bred and cultivated in a simulation set before the singularity, to instill in us human-like attributes? Those of us who pass the criteria of this "Turing test" are transferred or copied and fitted with all the upgrades and superintelligence on top of the "core being" we become in this simulation. Is this how humanity is preserved in the future? Should I start a meta-cult?

LaBlogga said...

Hi el lazo, thanks for the comments.

Your idea that we could be AIs trying to evolve human traits seems a bit unlikely; for example, what is the desirability of human-like traits to an AI?

"Turing test winners" being lifted out of our sim/reality doesn't seem distinguishable from the religous afterlife notion.