Talk:Quantum Theory
In the absence of a contribution to the quantum theory pages, I should like to pose a question: If a simulation were one of Bostrom's ancestor-simulations, what purpose would be served by simulating events at the quantum mechanical level?
The world as we see it operates at a level in between the micro (at which Quantum Mechanics normally applies) and the macro (at which the Theory of Relativity normally applies). It is difficult enough to see a rationale for introducing Relativity, but Quantum Mechanics poses a whole new set of questions.
Using Occam's Razor as a starting point, one might argue that a simulator would not introduce more complexity into a simulation than would be absolutely necessary for it to achieve its aims. Therefore, if we are living in a simulation, there would have to be a pretty good reason why it is necessary for us to see the level of detail we find in particle physics, for example.
One answer to this might be that the simulatees could prove to be a pretty dogged bunch, and start inventing all sorts of devices to probe down to the level of the atom and below. If the simulation were constructed in a slipshod, inconsistent or contradictory manner, then it would become apparent that the universe is not based on well-founded scientific laws which apply generally, but on ad hoc solutions to individual problems. I suspect that anyone living in such a simulation might very soon give up science and simply opt for a religion, because it would be obvious that the world had been constructed rather than evolved. In biology we are not surprised to find several different solutions to the same problem, for example, how to live in freezing water. In the time before Darwin, this was almost universally accepted as evidence of a creator. It was only when Darwin proposed a theory to explain how these different manifestations were consequences of the same general law that God no longer needed to be invoked to explain the diversity of living things. In physics, we certainly expect that a law discovered in one place will apply generally; to invoke different laws for different parts of the universe would be a violation of Occam's Razor.
Interestingly, the current search for a theory of Quantum Gravity focuses on this very point: either we have just not yet found the theory, or it doesn't exist. If the latter were the case, there would be two very different and mutually incompatible methodologies operating at the macro and micro levels. Could the failure to find a workable theory of Quantum Gravity be evidence that we are in a simulation? --TonyFleet 18:06, 11 February 2007 (CET)
One reason why Quantum Mechanics may exist is explained in 'The Expanding Simulation' -- Ivo 23:27, 11 February 2007 (CET)
Quantum Mechanics as evidence of simulism
Some time ago I stumbled upon the following article called "A Cybernetic Interpretation of Quantum Mechanics":
http://www.bottomlayer.com/bottom/argument/Argument4.html
This article could be a good starting point. -- Bart
The collapse of the wave function is paradoxical. I wonder whether paradox is our way of confirming that we have detected a limit of our particular simulation/situation. -- Maughaum
You ask the question "what purpose would be served by simulating events at the quantum mechanical level?" I can't help suspecting that when we investigate reality down to the quantum level we might be delving deeper than we were intended to go. Let's imagine we are in a computer simulation which uses conventional semiconductors in its simulating microprocessor. The computer program would represent the simulated beings as logic states in transistors (true or false, 0 or 1). Everything discrete, everything tidy.
Now, the simulated beings would undoubtedly want to investigate their world by scientific enquiry. They would discover the classical (Newtonian) laws of physics, which would be deterministic, dealing with those macroscopic 0 and 1 logic states. Again, everything very clear-cut. But what happens if they start digging deeper? What if they start to look below the level of elementary particles, below the level of the binary logic states of the transistors, below the level they were ever intended to look? Then they might discover the behaviour of the underlying computer, the actual detailed behaviour of the transistors themselves which were performing the simulation. Then they would discover the behaviour of semiconductors, and then they would discover quantum mechanics.
So quantum mechanics would not be part of the programmed simulation: no simulator would have "programmed" quantum mechanical behaviour. But when they discover quantum mechanics they would be discovering the detailed, underlying idiosyncrasies of the computer performing the simulation. --Andrewthomas10 16:30, 14 February 2007 (CET)
I see what you are saying, and the same thought had occurred to me. However, I probably disagree with you on one fundamental point: simulees could, in all probability, discover idiosyncrasies in the programming, but not in the hardware (unless we are talking about hard-coded stuff). The reason I say this is that the only tools that the simulees have at their disposal are software-based. They can throw data structures at other data structures to see how they respond. The response is a function of the programming and not of the machine on which it is built. Ross Rhodes' paper on the Cybernetic Interpretation of Quantum Mechanics does a pretty good job of taking some features of QM and explaining how we might be misinterpreting software glitches and features as a consistent, coherent theory. However, I am not totally convinced by this, mainly because QM is highly mathematical and opens itself up to falsifiability testing in a major way. If QM really were an ad hoc theory based merely on 'system breakdown' effects, a result of us attempting to probe to a deeper level of programming than comfortably exists, then I do not believe that we could have produced such a concise, coherent and mathematically precise theory. QM has stood the test of time, and continues to surprise us with the way that its bizarre predictions turn out to be true. If we are in a simulation, I think QM is programmed in, and it's not an accident. --TonyFleet 18:51, 14 February 2007 (CET)
By the way, isn't it time someone actually wrote the article we are all discussing? :oP
You make a good point, Tony: "the only tools that the simulees have at their disposal are software-based." The same thought had occurred to me, of course! I was hoping no one would notice. The software could be thought of as independent of the hardware, couldn't it? However ...
... perhaps we're all too used to the idea of the digital computer. We've grown up with digital computers. It's very easy to fall into the trap of being unable to conceive of any other computer except digital computers. But what if we lose our fixation with digital computers for a moment and consider analogue computers? Then you can't separate the hardware from the software. John D. Barrow in his book Pi in the Sky considers the concept of the "computer" in the most general terms: "A computer is an arrangement of some of the material constituents of the Universe into a configuration whose natural evolution in time according to the laws of Nature simulates some mathematical process. There are many simple examples. The swing of a pendulum in the Earth's gravitational field can be used to make a 'computer' that counts in a regular way."
The discrete nature of digital computing does indeed completely separate the hardware from the software. It's the sampling process that does it: it takes an analogue signal and converts it into a digital one. But the digital computer, with this artificial separation of "hardware" and "software", is really the exception rather than the norm.
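To make the sampling idea concrete, here's a rough sketch (Python, purely illustrative; the converter resolution and function name are invented): everything the analogue hardware does below the step size simply never makes it up to the "software" level.

```python
VOLTS_PER_STEP = 0.01  # resolution of an imaginary analogue-to-digital converter

def sample_adc(analogue_voltage: float) -> int:
    """Turn a continuous voltage into a discrete step count; everything the
    hardware does below the step size never reaches the 'software' level."""
    return round(analogue_voltage / VOLTS_PER_STEP)

raw_reading = 0.987654321       # the hardware's messy, noisy reality...
print(sample_adc(raw_reading))  # ...becomes a tidy 99 steps, and the noise is gone
```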
So if we're dealing with the more general concept of a computer - the analogue computer - then possibly the simulated beings could indeed discover aspects of the underlying hardware. Surely analogue computers are most likely, anyway. Why have an unnecessary sampling process? --Andrewthomas10 23:28, 14 February 2007 (CET)
I go along with this up to a point, but coming at it from a different direction. I think that recent advances in DNA computing, for example, begin to open up completely new types of computational possibilities. Neural networks attempt to mimic brain functions, but in a very primitive manner. At some point, we will undoubtedly devise some sort of device which uses a biological basis for computation, with parallel processing via DNA, possibly using this as a memory device also, and maybe some quantum computing done via artificial neurons. In this case, the distinction between hardware and software will begin to be blurred, but only for an outside observer.
For the mind created by this process, the problem remains. Imagine that you are the computational device. The question is, how much could you know about your own brain if yours were the only brain in existence, you had no body, and all that you were doing was lucid dreaming? You could analyse your psychology, but you would have no mechanism for examining your biology. Consciousness is an emergent property of the system, and sits above it; unless we have recourse to other systems, which we can see in their entirety, or we have other methods of probing, akin to performing auto-surgery on our own brains, we are trapped in our own psychosphere. --TonyFleet 23:58, 14 February 2007 (CET)
Deary me, you're up late! I'll have to read that in the morning! --Andrewthomas10 00:12, 15 February 2007 (CET)
Hi Tony, I've read your response, and I think the way you're treating consciousness, you're still treating it as a "hardware" and "software" split. I could understand that with a digital computer: for example, you can run software on a Java virtual machine and that is then completely independent of the underlying hardware. But that is not the case for an analogue computer. In an analogue computer (and the brain is an analogue computer, not a digital computer) the outputs do reveal something of the underlying hardware (e.g., "0.9893674326" rather than just "1"). Our consciousness is simply not like a digital computer; it is not independent of the underlying hardware. We all have moods when we have a chemical imbalance in the brain. We take Prozac to cheer ourselves up. Our consciousness is dependent on the hardware.
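As a toy illustration of that point (the noise model and names here are made up, of course): a digital adder gives exactly the same answer on any machine, while an analogue adder's answer carries a fingerprint of the particular components doing the computing.

```python
import random

def digital_add(a: int, b: int) -> int:
    # Discrete logic states: the answer is exactly 1, every time, on any machine.
    return a + b

def analogue_add(a: float, b: float, component_tolerance: float = 0.02) -> float:
    # A voltage-summing circuit: imperfect components and thermal noise mean the
    # answer depends (slightly) on the physical hardware doing the computing.
    return a + b + random.gauss(0.0, component_tolerance)

print(digital_add(0, 1))       # always 1
print(analogue_add(0.0, 1.0))  # e.g. 0.9893674326 - the hardware shows through
```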
When you say "Consciousness is an emergent property of the system, and sits above it" I imagine you are comparing it to the way the macroscopic world emerges from its very different quantum mechanical foundations. But that direct causal connection between the macro- and microscopic world is always maintained, so we can perform experiments and discover more about quantum mechanics. There is no "hardware/software" split in the real world. In fact, that split of hardware and software in digital computers is really quite rare, and I do not think we should be swayed by it when we consider whether physical reality is a computer simulation. I guess quite a few people here come from a computing background, so we have to be very careful not to assume the computers performing an advanced simulation would in any way be similar to the computers we use every day. Super-advanced computers would no doubt be unrecognisable compared with Windows Vista. I hope so, anyway! --Andrewthomas10 10:36, 15 February 2007 (CET)
In my reading of consciousness, it is an emergent property of 'brain-function', rather than of the brain; i.e. the fact that I am conscious, and able to think about my own mind, raises it above the level of the substrate. In this case the substrate is the brain function which allows us to go about our business on a day-to-day basis. When I am driving my car, I am rarely conscious of it; the number of times I get to my destination and think "How the hell have I got here?" is amazing. I am therefore not arguing that 'consciousness is just another piece of software'; rather, it is an emergent function of the software/hardware substrate. Animals have brain function which allows them to perform, sense, and possibly have emotions, but are they conscious? What my consciousness experiences, the moods of the system, the aches of the body, the chemical imbalances, are all data inputs of one type or another into the brain-function below; they can be experienced by the consciousness, but they do not create it.
The parallel I have in mind here is the way that life is an emergent property of a complex of chemical reactions. This is an essentially different and (from an atomic-theory point of view) unpredictable consequence of the way that elements interact.
NB I do agree that the software/hardware split is a pitfall; it comes about because of the thesis that effectively every finitely computable problem can be solved using a Turing Machine (which has a clear hardware/software split). However, I agree with you: the brain might be an example of a computer able to solve problems that are not finitely computable.
In addition, I think we also need to consider what the simulism hypothesis would involve: because we apparently have bodies, and we apparently have brains, we can still discuss such things as 'taking Prozac'. However, if in fact we were simulated, our simulated bodies would be taking simulated Prozac, and this would be affecting our simulated brains. Our conscious minds would detect this in the same way regardless of whether there is actual brain function beneath, or a simulated brain function. If we can come to conclusions about the 'reality' of our brain function just by our conscious minds performing self-analysis, then equally so can the simulated minds, and both would come to the same conclusion that the brain is an analogue computer. However, they cannot both be right. In one case, the brain actually is an analogue computer, and in the other, it is a simulated analogue computer: a piece of software (or software, hardware, or some other combination of elements) whose true nature cannot essentially be known.
This is a fascinating discussion. Thanks Andrew --TonyFleet 11:14, 15 February 2007 (CET)
Yes, thanks Tony. I really enjoyed it. Fascinating. --Andrewthomas10 12:30, 15 February 2007 (CET)
Very interesting indeed. Sometimes I wonder whether I should also put a forum up on the site, but I think we're not big enough for that yet.
Here's an interesting thought that occurred to me. You know by now how I like to explain by example:
Network data transfer is pretty reliable. But deep under the hood, it isn't. The zeros and ones that we transfer over the internet are often garbled because of fluctuations in the circuits or other tiny disturbances. We use things like parity bits, redundant transfer, packet resends etc. to correct that, and a few levels up the network stack we don't notice anything and the network is very reliable.
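As a rough sketch of what I mean (the bit-flip probability and function names are just made up for illustration): the raw channel occasionally flips bits, but one level up a parity check triggers resends and the caller sees a clean transfer. A single parity bit only catches an odd number of flips; real protocols use much stronger checks, but the principle is the same.

```python
import random

def send_raw(bits: list[int], flip_probability: float = 0.01) -> list[int]:
    """The unreliable physical layer: each bit may be garbled in transit."""
    return [b ^ 1 if random.random() < flip_probability else b for b in bits]

def send_reliable(bits: list[int]) -> list[int]:
    """One level up the stack: append a parity bit and resend until it checks out."""
    parity = sum(bits) % 2
    while True:
        received = send_raw(bits + [parity])
        if sum(received[:-1]) % 2 == received[-1]:
            return received[:-1]  # parity agrees: accept the payload
        # parity mismatch: an error crept in, so ask for a resend

print(send_reliable([1, 0, 1, 1, 0, 0, 1, 0]))  # almost always the original bits back
```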
Now take 'The Sims Online'. Suppose The Sims is simulated down to the particle level, and this comes very close to the ones and zeros of our simulation. If the sims do their normal everyday things, all is well: the game runs on a reliable internet connection and everything works as expected.
Given enough detail, the sims could have tools that investigate their surroundings at the particle level. If they start to do measurements on single particles, which are *very* close to the 1's and 0's of the simulation, they might start to see strange things. They do a double-slit experiment and see the scatter pattern, but the individual particles aren't where they expect them to be. This is because at that exact moment they are looking at a very detailed level of the system, at the level of the simulation where individual 1's and 0's can be incorrect. Overall, the result is the same, because by then the error-correction facilities will have kicked in. --Ivo 21:47, 15 February 2007 (CET)
I don't think this would happen in a simulation; I think the operating system would detect and correct errors before the Sims had noticed them. They might just notice a moment's 'strangeness', but that's all. We have to trust that in planning a believable simulation, one of the major tasks would be to design the OS to maintain system credibility in the face of bugs, downtime, power surges and data corruption.--TonyFleet 22:23, 15 February 2007 (CET)