The Simulation Argument’s Linda Fallacy

The Pen Of Darkness
Nov 9, 2020

We are capable of thinking about the possibility of being in a simulation. This thought is made possible by certain experiences that we synthesize with concepts: the experience, for instance, of teleology or grand overarching design, like the idea of our complex social or cognitive behavior following the same pure-physics principles as fundamental particles. Most of these are pareidolia, but in many instances we find legitimate parallels between two phenomena that shouldn’t technically be related, like gravity and social networks, or the strong nuclear force and sexual fertilization. While we can cognitively access the possibility that physical laws are not prevented from being intelligently designed, by a God or by a programmer, it is a whole other ballgame when talking about phenomenology. Why does it feel a certain way to experience gravity? How would you intelligently design or program that?

Either you put all the parts together, generate an awareness of these parts and then allow the designed mind to create experience in situ, just as nature does anyway, or you experience it yourself, measure it, conceptualize it and abstract it to the realm of pure reason, such that you’ve perfectly captured the essence of experience in order to then program it successfully. In both cases, what comes first is a naturally created organism that experiences the world. The simulation argument then needs a whole set of new assumptions over and above this one, i.e. the advancement of technology to a state where simulating experience and reality is feasible or even physically possible. Occam’s Razor says to shave away extraneous additional assumptions when fewer assumptions do just as good a job of explaining the phenomenon.

Our failure to do so when it comes to the simulation argument is similar to the conjunction fallacy, more popularly known as the Linda problem. Here, Linda is our experience of reality. What is more likely? A) That such experience of reality is possible, or B) That such experience of reality is possible AND we’d be able to synthesize and replicate it. The simulation argument rests on our falling for this fallacy, and the fallacy seems to be mediated by 2 connected mechanisms.
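To spell out the arithmetic the fallacy obscures (A and B here are just shorthand labels I’m attaching to the two options above):

```latex
% A = "such an experience of reality is possible"
% B = "we can synthesize and replicate that experience"
% The conjunction rule: a joint event can never be more probable
% than either of its conjuncts.
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
\qquad \text{since } 0 \le P(B \mid A) \le 1.
```

However representative option B feels, the conjunction can never be more probable than option A on its own.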

  1. Kahneman and Tversky postulated that we make mistakes with the Linda problem because the mathematically less probable answer, “Linda is a bank teller AND a feminist activist”, is more representative of our idea of Linda, given the information we are given about her. So what makes the simulation argument’s conjunction more representative of our idea of reality, given the information we are given about it? One of the tricks in the Linda problem is that the 2nd predicate (the conjunct) of feminist/activist is stronger than the 1st of bank teller. By strength we can again defer to probability, and reason that given the information about Linda, it is more likely she is a feminist/activist than a bank teller. When asked to choose between statements, therefore, having done the math on the predicates, we pick the one that includes the more probable predicate. We mean to say 2 > 1; instead we end up saying 1+2 > 1. This is a neat trick, not just because of the slippery wording but also because representativeness has nothing to do with statistics, and everything to do with our perception of statistics, driven by other heuristics like the availability bias, powerful prejudice and the framing effect of the introductory information. The simulation argument plays a similarly neat trick. First we are given information, and made to think, about computing, virtual reality, and our personal experience with technology. Given our close association with reality-rendering software, whether 3D movies, 4K cameraphones or video games, we are given 2 predicates: 1) we often experience reality, 2) we often experience technology-mediated reality. The fact that 2 is more representative does not make it more likely, and even if 2 were more likely than 1, that does not make the conjunction of 1 and 2 more likely than 1 alone; this is the bait and switch that is performed in order for us to commit the conjunction fallacy.
  2. We fail to use Occam’s razor also due to our discomfort with the notion of probability. Bayes is the mathematician and computer engineer’s friend. It allows us to be more fluid with truth, less certain about an inherently uncertain universe. Instead of shaving away unnecessary assumptions with Occam’s Razor, we batter the truth with the unsubtle ram of probability. Sure, you can assign a vanishingly small probability to the possibility of developing technology advanced enough to support and create simulations, but then under that branch of the tree there are infinitely many nodes that can be created by the simulation. I seem to have derived here how this is a faulty application of inverse probability, but noting that I can’t understand what I’ve said there or remember what I was smoking while saying it, I’ll move on instead to why the probability problem occurred to me in the first place. Probability is an artificial construct. It’s a forward projection of empirical frequency. The coin you’re about to flip cannot be a 50% tail; that figure is just a reflection of the fact that 50 of the last 100 tosses have been tails. Your flight cannot 0.1% crash, you cannot pick a sock that is 25% black, and I will not hate 75% of the next Justice League movie. Sure enough, when research subjects are given a rephrased version of the Linda problem in which they’re told to think in terms of frequency rather than probability (how many out of 100 Lindas are bank tellers, vs how many of them are bank tellers AND feminists), the conjunction fallacy is less common (see the sketch after this list). This brings us to the territory of Kant vs Leibniz vs Hume, linking lies, damned lies, statistics, and reality. Hume maintained that reason comes from ideas, and ideas come from sensory experience, so nothing is innately true except that which has been demonstrated. Induction therefore is a flawed exercise, and the project of abstracting some sort of pattern out of human experience is doomed to failure. While this is good for anti-simulation arguers like myself, rubbishing the entire concept of deriving synthetic a priori truths from a future-statistic like infinite recursive simulation, it rests on an uneasily empirical foundation of reality that we instinctively find incomplete, or at least unsatisfactory. Thankfully, that writer to end all writing, that thinker to end all thought, and that man to end all men, Kant, reconciles the idealism of Leibniz, the empiricism of Hume, as well as the fatalism of Musk/Bostrom, through his idea of the transcendental deduction. As per Kant’s synthesis, experience without reason (Hume) is content without form. Reason without experience (Leibniz) is form without content. There is no knowledge that does not have both. For instance, we could consider space and time not as concepts but as forms of intuition that structure concepts, because concepts have pluralistic instances, whereas there is only 1 space and 1 time. These forms are synthesized with experience to arrive at objective knowledge. But concepts want to be applied unconditionally; this is ‘pure reason’, which usurps these functions and results in illusions called ‘ideas’. The simulation argument is one such idea, resulting from the pure reason that our experience has patterns, that there are relationships between the laws of physics and our complex social behavior, and that we can sometimes gain a perspective-less view of reality that seems to us like looking at source code. Unfortunately it is not backed up by sensory experience and therefore does not constitute a meaningful statement at all.
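As a toy illustration of that frequency reframing, here is a short sketch; the 100-person population and the 10%/90% rates in it are invented for the example, not taken from Kahneman and Tversky’s data:

```python
# Toy illustration of the frequency reframing of the Linda problem.
# The population size and rates below are purely illustrative.
from dataclasses import dataclass
import random

random.seed(0)

@dataclass
class Linda:
    bank_teller: bool
    feminist: bool

# Hypothetical population: feminism is far more "representative" than
# bank-telling, mirroring the intuition the problem exploits.
lindas = [Linda(bank_teller=random.random() < 0.10,
                feminist=random.random() < 0.90)
          for _ in range(100)]

tellers = sum(l.bank_teller for l in lindas)
tellers_and_feminists = sum(l.bank_teller and l.feminist for l in lindas)

# Counting makes the conjunction rule tangible: the joint count can
# never exceed the single-predicate count, however representative
# the conjunct feels.
print(f"bank tellers: {tellers}")
print(f"bank tellers AND feminists: {tellers_and_feminists}")
assert tellers_and_feminists <= tellers
```

Framed as counts out of 100, the inequality is obvious in a way that the probability phrasing is not.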
