# Arguments

This page lists the main arguments that support Simulism and those that may contradict it.

# Arguments

## The Simulation Argument

This site/article by Nick Bostrom argues, in a very academic way, that at least one of the following must be true: almost no civilizations survive to a posthuman stage, posthuman civilizations run almost no ancestor simulations, or we are almost certainly living in a simulation.

## The Expanding Simulation

This essay takes some of the features of the universe around us, and maps them to Simulism.

## Universal mastery

Technological advances could overcome most objections.

## Religion

Simulism is in line with familiar concepts from most major religions.

## Technology

See how technology is evolving to the point where a simulated reality seems increasingly plausible.

## 'Motivational, Ethical and Legal Issues'

This paper assumes that, based on Ray Kurzweil's projections of Moore's Law, the necessary technology will be available on a widespread basis by about 2050.

Abstract:

A future society will very likely have the technological ability and the motivation to create large numbers of completely realistic historical simulations and be able to overcome any ethical and legal obstacles to doing so. It is thus highly probable that we are a form of artificial intelligence inhabiting one of these simulations. To avoid stacking (i.e. simulations within simulations), the termination of these simulations is likely to be the point in history when the technology to create them first became widely available, (estimated to be 2050). Long range planning beyond this date would therefore be futile.

(More on 'stacking' can be found here.)

# Counter Arguments

## Bostrom's formula is incorrect

The core of the simulation argument is a simple mathematical argument based on a formula for the fraction of all observers that inhabit simulations. Reproduced from the original paper (with slight modification):

Given the following notation:

$f_{P}$ Fraction of all human-level technological civilizations that survive to reach a posthuman stage

$N\;$ Average number of ancestor-simulations run by a posthuman civilization

$H\;$ Average number of individuals that have lived in a civilization before it reaches a posthuman stage

The actual fraction of all observers with human-type experiences that live in simulations is then:

\begin{align} f_{sim} = \frac{f_{P}NH}{f_{P}NH + H} = \frac{f_{P}N}{f_{P}N + 1} \tag{$\star$} \end{align}

A disjunction of 3 statements, or trilemma, is then derived from the formula:

By inspecting $(\star)$ we can then see that at least one of the following three propositions must be true:

(1)  $f_{P} \approx 0$

(2)  $N \approx 0$

(3)  $f_{sim} \approx 1$

A simple counter-example is sufficient to expose an error in the above argument. Suppose there are only 2 civilizations: the first is posthuman, has a historical population of $X$, and runs $N$ ancestor simulations (each recreating its own history, so each contains $X$ individuals); the second has a historical population of $9X$ and neither runs ancestor simulations nor is posthuman. The formula $(\star)$ gives $f_{sim}=\frac{N}{N + 2}$, since $f_{P} = \frac{1}{2}$. But if we count the real and simulated people ourselves we find $f_{sim}=\frac{N}{N + 10}$: the number of real individuals is $9X + X = 10X$ and the number of simulated individuals is $NX$, so the fraction of simulated individuals is $\frac{NX}{NX + 10X}$, and $X$ cancels to give $\frac{N}{N + 10}$.

The error is in the definition and use of $H$. In $(\star)$, $H$ stands in for both the average population per real civilization and the average population per simulation, which is wrong for several reasons. An average population per simulation is a weighted average, with each distinct population weighted by the number of times a simulation with that population is run; the definition of $H$ above is insensitive to the number of simulations run and therefore cannot serve in that role. The definition of $H$ also neglects individuals from non-posthuman civilizations, as the counter-example shows. Thus the definition of $H$ given above can be used for neither the average population per real civilization nor the average population per simulation.

An amended formula would take the form:

\begin{align} f_{sim} = \frac{f_{P}NH_{S}}{f_{P}NH_{S} + H_{R}} = \frac{f_{P}N}{f_{P}N + \frac{H_{R}}{H_{S}}} \tag{$\star\star$} \end{align}

Where $H_{R}$ and $H_{S}$ are the average population per real civilization (regardless of a civilization's status as posthuman or not) and the average population per simulation respectively. Given this new formula, the trilemma posed above no longer necessarily holds. If the ratio $\frac{H_{R}}{H_{S}}$ is relatively large then all 3 propositions of the trilemma may be false.
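The counter-example can be checked numerically. The sketch below assumes, as the direct count does, that each ancestor simulation recreates the posthuman civilization's own history (so each simulation contains $X$ individuals); the values of $N$ and $X$ are arbitrary:

```python
from fractions import Fraction

N = 7     # number of ancestor simulations (arbitrary)
X = 1     # historical population of the posthuman civilization (cancels out)

f_P = Fraction(1, 2)            # 1 of the 2 civilizations is posthuman
H_R = Fraction(X + 9 * X, 2)    # average population per real civilization
H_S = Fraction(X)               # average population per simulation

# Original formula (star): a single H cancels, giving N / (N + 2).
f_sim_original = (f_P * N) / (f_P * N + 1)

# Amended formula: distinguishes H_R from H_S, giving N / (N + 10).
f_sim_amended = (f_P * N * H_S) / (f_P * N * H_S + H_R)

# Direct head-count of simulated observers vs. all observers.
f_sim_direct = Fraction(N * X, N * X + 10 * X)

print(f_sim_original, f_sim_amended, f_sim_direct)
# -> 7/9 7/17 7/17  (the amended formula matches the direct count)
```

Whatever values of $N$ and $X$ are chosen, the amended formula agrees with the direct count while the original does not.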

A 2011 article by Bostrom attempts to address the problem outlined above, providing 2 patches which he claims are each individually sufficient to repair the flaw in the core of the argument:

The first patch involves assuming that the average number of people living in the pre-posthuman phase is not astronomically greater for non-simulating civilizations than for civilizations that end up running significant numbers of ancestor-simulations. The second patch involves assuming that our type of experiences occur predominantly at a certain stage of history, so that even if the pre-posthuman phases lasted astronomically longer for non-simulating civilizations, they would nevertheless not on average contain vastly more people with our type of experiences than do the pre-posthuman phases of simulating civilizations.

Contrary to Bostrom, I do not believe patch 1 to be sufficient: it stands as mere assumption and does not dismiss the possibility that the world we live in, whether it is real or part of a "simulation hierarchy", has a large ratio $\frac{H_{R}}{H_{S}}$. To motivate this possibility, consider a scenario in which potential simulators are interested in running simulations of their complete history, but are more interested in specific time periods and in how changed conditions or different choices in those periods might have altered events. An efficient way of doing this would be to run a base simulation of the complete history and then use time slices from it as the starting states for new simulations of the time periods of interest, with whatever modifications are required. The number of base simulations need not be small in this scenario; as long as the number of shorter simulations is greater than the number of complete ancestor simulations, the ratio $\frac{H_{R}}{H_{S}}$ will be large. There are of course other ways in which this ratio might be large.

Patch 2 is more interesting because it brings into question the arbitrariness of the reference class over which $f_{sim}$ is calculated. Previously Bostrom had asked us to reason as if we might be any individual from the pre-posthuman period of any "human-type" civilization. In patch 2 he asks us to reason instead as if we might be any individual from a much more constrained reference class, introducing the concept of "computer age birth rank" (the person whose birth was closest in time to the creation of the first processor capable of operating at a clock speed of at least 1 MHz has rank 1, the next rank 2, and so on). The reference class then consists only of individuals with the same rank in their respective real and simulated histories, so $H_{R}$ and $H_{S}$ would both necessarily equal 1, solving the above problem. The issue with this is the absence of any justification for choosing such a reference class over any other; the choice is entirely ad hoc. If we select this one attribute of our experience to define the reference class, by what reason do we exclude other attributes, such as hair color and name, and not choose a reference class that includes only individuals with the same birth rank, hair color and name? To make the problem clearer, note that restricting the reference class to only those with the same birth rank would dismiss individuals with experience identical to our own in all ways but birth rank, while permitting those with radically different experiences but the same birth rank, an unacceptable result. Without a justification that isn't ad hoc, it seems reasonable to reject this patch. For more on the reference class issue in similar problems, see Bostrom's book "Anthropic Bias: Observation Selection Effects in Science and Philosophy".

## Bostrom's estimation of the number of operations to run an ancestor simulation is inadequate

The premiss of the simulation argument is that the world we experience might be a simulation running on a computer somewhere. To support this premiss, Bostrom provides a loose estimate of the number of operations such a computer would have to perform, offered as suggestive evidence that such simulations are within our future capabilities:

$100$ billion humans $\times$ $50$ years/human $\times$ $30$ million secs/year $\times$ $[10^{14}$, $10^{17}]$ operations in each human brain per second $\approx$ $[10^{33}$, $10^{36}]$ operations.
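The multiplication can be reproduced directly; as a minor aside, the literal product lands roughly an order of magnitude above the rounded bounds Bostrom quotes, a discrepancy that is immaterial to the argument here:

```python
# Direct product of the quantities quoted above.
humans = 100e9                      # ~100 billion simulated humans
years_per_human = 50
secs_per_year = 30e6
brain_ops_per_sec = (1e14, 1e17)    # lower and upper per-brain estimates

for ops in brain_ops_per_sec:
    total = humans * years_per_human * secs_per_year * ops
    print(f"{total:.1e} operations")
# -> 1.5e+34 and 1.5e+37 operations
```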

The upper and lower bounds are based on 2 independent estimates of the number of operations per second a human brain is capable of:

One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we already understand and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of $\sim10^{14}$ operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of $\sim10^{16}-10^{17}$ operations per second.

But the above concerns only the operational complexity of the brain and ignores the computational cost of simulating everything else. Depending on the level of detail necessary to provide an accurate representation of human experience, a more realistic estimate of the number of operations required per ancestor simulation would be many orders of magnitude greater than $10^{36}$, casting doubt on the idea that ancestor simulations would be computationally cheap for future humans.

## Energy considerations

Bostrom starts his paper with the premise that future humans will have the technology for ancestor simulations readily available, leading him to suggest that $N$, the number of simulations a posthuman civilization runs, will be very large. We can examine this premise by calculating the technological requirements for such simulations, given three variables: the number of operations required to run an ancestor simulation, the length of time we wish the simulation to run, and the power supply we wish to run it on.

Consider current technology: the K computer, for instance, has an operating speed of $8.162 \times 10^{15}$ ops/s and consumes $9.89 \times 10^{6}$ J/s, an operating efficiency of $\sim10^{-9}$ J/op. An ancestor simulation ($10^{36}$ operations) running on it would take $\sim10^{20}$ s, longer than the current age of the universe, and would require $\sim10^{27}$ J (for comparison, the total energy output of human civilization in 2008 was $\sim10^{20}$ J).

A reasonable estimate of a feasible ancestor simulation might run over $10^{2}$ yrs (on the order of the lifetime of a human, a simulation running well over the lifetime of a human would seem unlikely) and run on a power supply of $\sim10^{9}$ J/s (no more because we require that $N$ be very large). Using Bostrom's estimate of $10^{36}$ ops for an ancestor simulation we can determine that an operating speed of $\sim10^{26}$ ops/s and an operating efficiency of $\sim10^{-18}$ J/op would be required. We might appeal to Moore's law and say that an improvement in operating speed of $10^{10}$ is plausible, but is an improvement in operating efficiency of $10^{9}$ plausible? Not by any known possible technology. Remember as well that Bostrom's estimate of the requisite operations to run an ancestor simulation is likely too small by many orders of magnitude and the requisite increase in operating efficiency would correspondingly have to be many orders of magnitude larger.
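The figures in the last two paragraphs follow from a few lines of arithmetic, using the K computer specifications and the assumed $10^{36}$-operation simulation from above:

```python
# Feasibility arithmetic for a 10^36-operation ancestor simulation.
OPS_TOTAL = 1e36        # Bostrom's upper estimate of required operations

# Current technology: the K computer.
k_speed = 8.162e15      # ops/s
k_power = 9.89e6        # J/s
runtime_s = OPS_TOTAL / k_speed
energy_j = runtime_s * k_power
efficiency = k_power / k_speed      # J/op
print(f"{runtime_s:.1e} s, {energy_j:.1e} J, {efficiency:.1e} J/op")
# ~1.2e+20 s (universe age ~4e17 s), ~1.2e+27 J, ~1.2e-9 J/op

# A "feasible" simulation: ~100 years of runtime on a ~1 GW power supply.
target_runtime = 100 * 3.15e7               # seconds in ~100 years
target_power = 1e9                          # J/s
required_speed = OPS_TOTAL / target_runtime           # ops/s
required_efficiency = target_power / required_speed   # J/op
print(f"{required_speed:.1e} ops/s, {required_efficiency:.1e} J/op")
# roughly 3e+26 ops/s and 3e-18 J/op, matching the ~10^26 and ~10^-18 above
```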

We might attempt to counter this by appealing to hypothetical technologies that operate at the Landauer limit ($\sim10^{-21}$ J/op at room temperature), such as this, or even surpass it via techniques such as reversible computing. But Bostrom poses his argument in empirical terms, and hypothetical technologies are irrelevant unless we wish to reduce the simulation argument to purely hypothetical terms and strip away its veneer of empirical plausibility.

Using Bostrom's estimate of the number of operations required and a hypothetical computer running at typical satellite temperatures operating at the Landauer limit, the minimum energy cost of an ancestor simulation would be $\sim5 \times 10^{14}$ J. As Bostrom's estimate is inadequate, per the previous section, the true lower bound on the energy cost of an ancestor simulation would be higher by many orders of magnitude, significantly decreasing the likelihood that $N$ would be very large.
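For reference, the Landauer bound is $kT\ln 2$ joules per bit erased. A sketch of the calculation behind the $\sim5 \times 10^{14}$ J figure, assuming an operating temperature of $\sim$50 K (a guess at "typical satellite temperatures") and one bit erasure per operation:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
T = 50.0              # assumed cold-space operating temperature, K
OPS = 1e36            # Bostrom's estimate; one bit erased per op assumed

j_per_op = K_B * T * math.log(2)    # Landauer limit at temperature T
total_j = j_per_op * OPS
print(f"{j_per_op:.2e} J/op, {total_j:.2e} J total")
# -> 4.78e-22 J/op, 4.78e+14 J total (~5e14 J, as quoted above)
```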

Note: These numbers are rudimentary and there is obvious play in them. The time over which a simulation is run may be decreased, correspondingly increasing both the required operating speed and the required operating efficiency; or the power supply might be increased, in which case the required gain in energy efficiency would lessen, though any increase in the power supply depends on hypothetical technological advances. Note also that this discussion concerns only the processing unit of a computer running an ancestor simulation; it does not consider the other hardware that would be necessary, each component with its own energy requirements.

## Complexity

The world around us is so immensely complex that it is unlikely a simulation could capture it.

Chaotic physical phenomena such as weather resist simplification.

Because we have no idea about the scale of the computational power available to the simulators, this argument is weak. Furthermore, the simulation might be able to take a considerable number of shortcuts (e.g., don't bother simulating something unless someone is looking at it) that would vastly reduce the processing power required. (See: Optimization)

## Awareness

One of the counterarguments is that if we were living in a simulation, we would already have been told so. There are, however, good reasons why a simulation would be designed to hide knowledge of its own existence from its subjects. Consider software simulators like Subterfugue, which, in order to fulfill their purpose, attempt to provide a convincing environment that appears to the subject of the simulation not to be a simulation. (One could also argue that the mere fact that we are thinking about such things might be a sort of joke being perpetrated by the operators of the simulation we're living in. This is hardly evidence, but it is an amusing thought.)

One simplification of this argument follows, labelled for discussion:

1. If we are living in a simulation generated by operators who have motives, one of the following must be true: (A) the operators do not want us to know we are simulations, or (B) the operators don't care if we find out we are simulations.
2. If (B) is true, we could test for being in a simulation by simply asking "Am I in a simulation?" Try it. Unless you got a "Yes", (B) is false.
3. If (A) is true, the operators would censor any clues (e.g. flaws in the simulation) that would help us detect our status as simulations.
4. Given that the operators have not censored publications and discussions on the topic of simulation, it follows that (A) is false.
5. Since both (A) and (B) are false, we are not living in a simulation generated by operators who have motives.

This argument, however, has some weaknesses:

Statement (1) is a false dilemma. There may be a spectrum of preferences from "the operators will take all necessary steps to prevent us from knowing" to "the operators will allow us to discover our situation, but only through their preferred process" to "the operators will immediately notify us that we are simulations."

The 'question' posed to disprove (B) is not really solid, though, as it raises some questions:

- Who or what would you ask? Asking another person within the simulation simply moves the test to that person; ultimately someone must put the question to the operators outside the simulation.
- This can be viewed as similar to asking 'Does god exist?'. If another person answered 'yes' to your question, would you believe it? If not, by what means do you expect the operators and/or god(s) to respond? If they wanted to respond, they could do so by altering the simulation in any of the ways a virtual reality can be programmatically altered, such as causing an object bearing a message to spontaneously appear, or introducing an unsourced sound such as the word "Yes".

Also, there are historical events that might be interpreted as attempts to tell people they are living in a simulation. More on this on the 'Awareness' page.

Option (A) is claimed false because the operators have not censored any publications, yet no proof of this is to be found; it could be that they do not care about discussions and publications as long as nothing is actually proved true. Finally, if the simulation is run for historical and/or educational purposes, then even though the operators would not want the inhabitants to know about the simulation, they might not want to intervene, as that would ruin the accuracy of the simulation.

## 'Against the argument that we are living in a simulation'

In his article 'Against the argument that we are living in a simulation', Henry R. Sturman gives 8 arguments why it is unlikely that we are living in a simulation. The article itself, as well as a discussion about it, can be found here.

## Bostrom's Argument & the Liar's Paradox

Bostrom bases his premises on our collective experience of this 'reality'. From this he deduces three possibilities, at least one of which must hold. One of these is that this 'reality' is in all probability a simulation. If this is true, then the premise of the argument is false, as we cannot extrapolate to events outside this simulation, and propositions (1) and (2) have no validity.

In fact Bostrom disputes this, claiming that if this were a simulation, it merely establishes that at least one simulation exists, confirming his point (3), and if we are not in a simulation, then the argument follows, and we are highly likely to be living in a simulation.

However, Bostrom's argument has at its heart two distinct flaws: Flaw (A) refers to the way that Bostrom calculates the total number of human-type experiences. He assumes that the average number of individuals that have lived in a civilisation before it reaches a posthuman stage is the same no matter whether that civilisation is real, or whether it is simulated. But why should this be? It could take a very long time for civilisations to reach posthuman-type situations, or it might be that simulations are only run for very short periods. Whatever is the case, it is unlikely that the time would be approximately the same. This means that Bostrom's probability calculation needs to be amended. Interestingly, when one follows this argument through, it merely has the effect of slightly reducing the probability that we are living in a simulation, rather than reducing it to zero. The only way that Bostrom's original argument holds is if civilisations tend to run ancestor-type simulations for extended periods of time, much longer than their evolution time to posthumanity, which seems unlikely.

Flaw (B) is that within the probability argument, Bostrom claims that "The average number of ancestor-simulations run by interested civilisations is extremely large", and attributes this to their computing power. If we are in a simulation, then we cannot know anything about the number of technologically-capable or interested civilisations, other than that at least one exists. Following the anthropic reasoning principle, this 'reality' will have been manufactured specifically to contain us, and therefore we are predisposed to thinking that simulations are commonplace. It may be that there are a myriad other civilisations out there who have not created simulations; the fact that we are living in a simulation cannot be used as evidence for the argument one way or the other. This negates Bostrom's counter-argument.

A clearer way to understand Bostrom's argument is to simplify it, and to reformulate it as follows:

Only one of the following is true:

(1) Almost all Civilisations reach a point where they are either incapable of, or lack interest in creating artificial worlds.

(2) This 'reality' which we inhabit is an artificial world created by a civilisation about which we have no knowledge.

The probability argument which this rests on is identical to Bostrom's, and we conclude the probability of (1) is approximately zero or the probability of (2) is approximately 1.

However, if (2) is true, then we cannot discuss whether it is (1) or (2) that is true, as this precludes us from knowing anything about other civilisations. Therefore we cannot conclude anything.

If however (2) is false, then we are living in a 'reality' rather than a simulation of one. This allows us to draw conclusions about the way that civilisations behave. In other words, we can pursue Bostrom's argument and reduce it to the two-stage argument given above. From the earlier discussions, we can draw conclusions from our experience of this 'reality': it would appear that civilisations will become both technologically mature enough and interested enough in creating artificial worlds. In other words, (1) is false. However, Bostrom's reduced argument would then have us conclude that (2) is true. This clearly leads to a contradiction.

So the conclusions are: Either the argument is self-contradictory, or we cannot draw conclusions from it. In either case the argument does not hold water.

## Many Worlds

These counterarguments are superficially similar, but completely independent. In principle, either, both, or neither could be true.

### Quantum

If the quantum many-worlds theory is correct, then each civilization is multiplied by a quantum branching factor. This is a staggeringly huge number, superficially like a googolplex: a tower of 4 exponents, evaluated from the top down. There are then 3 possibilities:

1. The simulations are run on hardware (i.e., quantum computing) which has the same or a higher branching factor; in this case, the simulation argument still holds or is strengthened. However, any plausible design for quantum computing uses fewer quantum states for computing than are present in its substrate. This suggests that simulating a planet in this way would take a computer larger than a planet, possibly by a large factor. Thus, there are credible reasons to doubt this possibility.
2. The simulations are run on hardware which has an exponential branching factor, but one significantly lower than that of the civilization running it. In this case, the "outer" branching factor is dominant to the point where we can safely ignore the "inner" one. This leads us to:
3. The simulations are run on traditional Von Neumann hardware, or on hybrid traditional/quantum hardware in which the quantum part must "decohere" in order to interact with the traditional part. In this case, each set of "initial" program conditions (including any ongoing inputs) gives a finite number of "simulated experiences" far smaller than the outer branching factor. So the question becomes: is the number of "initial" program conditions proportional to the number of outer branches which run those simulations? For any conceivable traditional computing architecture, even a galaxy-sized computer with one transistor per atom, the answer is a resounding no. Thus, the number of simulated experiences is massively dominated by the number of outer branches. Unless the theory of consciousness somehow reduces the measure (number of experiences) implied by those outer branches by a huge factor, the simulation argument would no longer hold.

### Strong

From a blog post:

Assumptions:

- The strong many-worlds theory is correct (i.e. all consistent mathematical systems exist as universes, a.k.a. "everything exists")
- The many-worlds immortality theory is correct (i.e. for every conscious state there is at least one smooth continuation of that state in the many-worlds)

Given these assumptions, it doesn’t matter if we are in a simulation because our conscious state exists in many simulations and many non-simulated worlds that look identical to us (but are different in imperceptible ways). Even if all the simulations stopped, there would still be a continuation of our conscious state in a non-simulated world consistent with our observations to date.

Further, it seems that there are more non-simulated worlds than simulated worlds. This is because there are many ways a mathematical system can exist such that it cannot be formulated in a finite way, and is therefore not simulatable by an intelligent entity. It might even be that simulatable worlds are of measure zero in the many-worlds.