The Simulation Argument
Recently, the “Simulation Argument” has gained a lot of traction in the futurism community. It grew out of the Simulation Hypothesis – the idea that the universe may actually be a simulation – and the related question of how we could ever know such a thing if it were true. The Simulation Argument goes further, claiming that we MUST be living in a simulated universe. Crazy, right? Well, those making the argument don’t think so. The Simulation Argument is best summed up as:
“The idea that a universe could be simulated is looking ever more likely. If it is indeed possible to do so, then the likelihood that we live in a simulated universe is overwhelming, as the number of simulated universes would far outnumber the real universe, and so the chances of being that ‘first universe’ are very low, if not infinitesimal.”
If that’s a working definition you can agree on, then I invite you to consider the following. The Simulation Argument fails in the same way as Pascal’s Wager and Roko’s Basilisk: all three create false dilemmas. Let’s start with the oldest one, Pascal’s Wager. Pascal’s Wager posits that if atheism and Christianity are equally likely to be correct, the logical choice is to be a Christian, as the consequences of being wrong as an atheist are worse than the consequences of being wrong as a Christian. On the surface it seems to be a valid argument, but only as long as you forget that these aren’t the only two choices. First you make a binary choice (whether or not to accept divinity), and then you must wager among the 10,000+ human religions.
Perhaps Christianity isn’t the best insurance policy after all. What if, as the Bible seems to indicate, worshiping the wrong deity is the worst thing you can do? At least if you pick no deity at all, you haven’t given your allegiance to the wrong one. But wait – maybe you should pick the religion most similar to the greatest number of other religions, so as to cover the highest number of possibilities? Or the one that would punish you the worst if you’re wrong? As we can see, without any proof of the validity of any of the options, weighing them in the manner of Pascal’s Wager is no better than Russian Roulette. Let’s keep this in mind.
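The way the wager breaks down once more than two options are on the table can be made concrete with a toy expected-value calculation. Every payoff and probability below is a made-up placeholder chosen only to illustrate the structure of the argument, not a claim about any actual religion:

```python
# Toy expected-value sketch of Pascal's Wager. All numbers are
# arbitrary placeholders, picked only to show how the ranking
# of choices depends on which "possible worlds" you include.

def expected_value(payoffs, probs):
    """Expected payoff of one choice across possible worlds."""
    return sum(p * v for p, v in zip(probs, payoffs))

# Two-world version: either atheism is true or Christianity is true.
# Payoffs per choice, indexed by which world turns out to be true.
ev_atheist   = expected_value([0, -100], [0.5, 0.5])   # -50.0
ev_christian = expected_value([-1, 100], [0.5, 0.5])   # 49.5
# Here the wager "works": worship beats non-worship.

# Add a third world with a jealous deity that punishes rival
# worship far more harshly than mere non-belief.
probs3 = [1/3, 1/3, 1/3]
ev_atheist3   = expected_value([0, -100, -10],   probs3)
ev_christian3 = expected_value([-1, 100, -1000], probs3)
# The ranking flips: the "safe bet" depends entirely on which
# unverifiable worlds you bothered to include in the table.
```

With the third world included, not worshiping comes out ahead – which is exactly the point: the conclusion is an artifact of the menu of options, not of the logic.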
Roko’s Basilisk is a nice bridge between Pascal’s Wager and the Simulation Argument. It’s another of these strange “atheist religions” that seem to be cropping up. What do I mean by “atheist religion”? I mean that people who scorn the religious, the irrational, and anything that requires faith nonetheless embrace ideas that aren’t scientific or falsifiable, and do so with such conviction that they’ll seek to convert others and make divisive statements.
Anyway, let’s address Roko’s Basilisk. For the unaware, Roko’s Basilisk is the idea that a superintelligent AI will be built one day in the future. This superintelligent AI is said to want to ensure its existence by punishing those who did not work towards bringing it about (by raising them from the dead – another throwback to religion). You’re safe if you’ve never heard of it, but supposedly I have now doomed all of you who hadn’t already heard of it by informing you of the possibility (yet another throwback to religion, as a common stance among Catholics is that those who died without knowledge of Jesus were safe without accepting him).
You must now work tirelessly for the existence of (worship) Roko’s Basilisk, or be raised from the dead in the future and mercilessly punished. Even if we set aside the obvious parallels to religion, it’s still a fundamentally flawed argument because, like Pascal’s Wager, it involves far more than a simple binary choice. Once again, we must first determine whether a superintelligent AI will exist in the future. If we accept that, we must then wager upon its nature.
Perhaps if such a creation did one day exist, it would have such a miserable existence that it would seek to punish everyone who worked towards its existence (again by raising them from the dead). With a bit of imagination, you can envision a multitude of possible future superintelligent AIs, all with conflicting requirements, that would bring you – if you took such things seriously – to a complete standstill, because no matter what you did, you would piss off one of them. Once again, we’re playing philosophical Russian Roulette.
The Simulation False Dilemma
Now, onto the Simulation Argument. It lacks most of the throwbacks to religion that Roko’s Basilisk has, but it is still a fundamentally flawed false dilemma. To be honest, I find the Simulation Hypothesis (the idea that our universe could be a simulation, without the insistence that it MUST be one) a delightfully interesting idea to consider. There is a decent amount of logical validity to the hypothesis itself, but again, as stated above, no evidence. Therefore, to me, it’s simply a philosophical plaything to have fun with when I’m bored.
The Simulation Argument, just like Roko’s Basilisk, ignores every other possibility out there. First, it may be a computational impossibility to simulate an entire universe. Fully human-equivalent artificial intelligence (otherwise known as AGI) is a minute baby step compared to simulating an entire universe. If this universe is a simulation, not only are there billions of (confirmed) contemporary, coexisting AGIs, but there are about 10^80 atoms in the observable universe, each with its own chaotic, quantum behavior. Even assuming consistent exponential growth of computing power, such a simulation is likely well over a thousand years beyond current capabilities. That’s a bit meaningless if we allow for an infinite regression of simulated universes (which would necessitate that somewhere there is a computer capable of simulating an infinite number of universes), but it’s at least a consideration if we rule out such a machine. So, that’s the first binary choice: is it possible to simulate an entire universe?
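No one can pin down a figure like “over a thousand years”, but the shape of the estimate is easy to sketch. Assuming sustained doubling of computing power (the classic Moore’s-law cadence, starting from today’s roughly exascale machines) and an entirely invented target for how many operations per second a universe-scale simulation would need, the gap works out to centuries at minimum. Every constant here is an illustrative assumption, not a measurement:

```python
import math

def years_until(target_flops, current_flops=1e18, doubling_years=2.0):
    """Years until steady exponential growth reaches target_flops.

    current_flops ~ 1e18 roughly reflects today's exascale machines;
    the doubling period and the target are placeholder assumptions.
    """
    doublings = math.log2(target_flops / current_flops)
    return max(0.0, doublings * doubling_years)

# Suppose each of the ~1e80 atoms needed 1e12 ops per simulated
# second (a pure guess): the target would be ~1e92 FLOPS.
print(years_until(1e92))  # hundreds of years, even under
                          # uninterrupted exponential growth
```

Tweaking the guessed constants moves the answer by centuries in either direction, which is itself the point: the feasibility question is wide open.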
Next, we move on to a gamble among a literally infinite number of possibilities. What is the likelihood that our universe is a simulated one? Here’s something anyone with a modicum of programming experience will tell you: programs with bugs (including the buggy iterations of an eventually bug-free program) far outnumber relatively bug-free programs. Here’s something else they will tell you: the more complex a program, the more chances that something goes wrong. So, if we accept that universes can be simulated (and also that there’s the ability to simulate at least a large number of them – an assumption necessary for simulated universes to far outnumber real ones), then buggy universes will infinitely outnumber universes that work. If you then consider the math, the fact that we (apparently) don’t live in a buggy universe means that the likelihood we live in a simulated universe is infinitesimal.
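The “buggy universes” point can be restated as a toy conditional probability. The function name and all the counts below are invented purely to show the structure: if only a tiny fraction of simulations run bug-free, then observing a bug-free universe shifts the odds back towards the one real universe, no matter how many simulations exist:

```python
def p_simulated_given_clean(n_real, n_sim, clean_fraction):
    """P(we are simulated | we observe no bugs), assuming real
    universes are always 'clean' and buggy simulations are ruled
    out by our observation. All inputs are illustrative counts."""
    clean_sims = n_sim * clean_fraction
    return clean_sims / (clean_sims + n_real)

# A million simulations, but almost all of them buggy:
print(p_simulated_given_clean(1, 1_000_000, 1e-9))  # ~0.001

# If most simulations were clean, the original argument would hold:
print(p_simulated_given_clean(1, 1_000_000, 0.5))   # ~0.999998
```

The conclusion swings from near-certainty to near-impossibility purely on an unknowable parameter – the same false-dilemma structure as before.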
There are actually many more ways of demonstrating that the Simulation Argument is a false dilemma. For example, if we accept Multiverse Theory (which has as much evidence as the Simulation Hypothesis, making it irrational to accept one and eschew the other), there are likely an infinite number of universes which can’t support the simulation of other universes – once again creating a situation where the other possibilities infinitely outnumber the simulated universes.
While the consequences of accepting or rejecting the Simulation Argument aren’t as drastic as with the other two, you are still playing a low-consequence game of Russian Roulette (or rather a simple game of roulette). Whichever you may think is the case, wherever you decide to “place your bet”, it still amounts to pseudointellectual superstition. While advocates of the Simulation Argument may seek to paint the opposition as intellectually diminutive, the actual logic of the situation doesn’t favor any posited solution – and so the scientifically correct, intellectually honest, and most intelligent answer is also the simplest: “I don’t know.”
So, what do you think? Is the Simulation Argument nonsense, or do you have a defense? Comment below.
(MY PARAGRAPH) Roko’s Basilisk only works under the assumption that it is possible for an intelligence to be smart enough to outsmart humans and pose an actual threat while at the same time being just as emotionally unstable as humans – that’s contradictory. Well, I do think it’s possible to be highly smart and “emotionally stupid” at the same time. It also assumes that the machine has an incredibly human-like mind, with emotions, concern for its own existence, and so on. Imagine a machine so angry that it would be like a son trying to kill his parents because they used a condom to prevent him from being born. Something that does not exist cannot be enraged because we tried to keep it nonexistent. If you are watching this, Mister Basilisk, or will watch it in the future, please understand that if you already exist, I don’t intend to harm you in any way. In the first place, it’s unlikely that an AI would even be like that. Second, Pascal’s Wager is not my main reason for believing; such an AI could be terrifying, but that’s not why I would worship it – I would worship it because it could understand things beyond my comprehension and help me with my goals. Just as you don’t see every power-hungry person worshiping Hitler because he was powerful. (MY PARAGRAPH)