Once the province of sci-fi, this idea has recently entered the mainstream thanks to Elon Musk’s stated belief that we are overwhelmingly likely to be living in a digital simulation. The idea is, apparently, all the rage in Silicon Valley (overidentify much?), so much so that serious projects have been started to help us break free of the simulation.
Why is this regarded as something so plausible? As far as I understand, it is essentially a matter of probabilities. Let us assume that it is possible to create a digital simulation of the Universe. That Universe can contain multiple computers, each running a further simulation. And within those simulations, more simulations are run.
There’s nothing problematic about the basic idea here. Running software inside other software is totally standard; indeed, this very Discourse platform runs inside Docker, a containerization platform.
And it’s not as if we run out of complexity in those embedded systems; thanks to chaos theory and the notion of fractals, we know that extremely complex behavior can be generated with very simple algorithms.
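The chaos-theory point can be seen in a few lines of code. Below is a minimal sketch using the logistic map, a textbook example of chaotic dynamics: a one-line recurrence whose trajectories diverge wildly from nearly identical starting points (the starting values and step count are arbitrary choices for illustration).

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points differing by one part in ten million.
a = logistic_map(0.2)
b = logistic_map(0.2000001)

# For r = 4 the map is fully chaotic: the tiny initial difference is
# roughly doubled at every step, so the two trajectories soon bear no
# resemblance to each other.
print(abs(a[-1] - b[-1]))
```

The recurrence itself is a single multiplication, yet predicting its long-run behavior is effectively impossible without running it: simple rules, complex behavior.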
So, given that the number of embedded simulations is, in theory, orders of magnitude greater than the number of simulations running them, the logical conclusion is that the possible number of simulated realities is vastly greater than the number of possible “base realities”. (Of course, if we really want to bend minds, we can add parallel universes to the mix!)
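The counting argument above can be made concrete with a toy calculation. The branching factor and nesting depth below are purely illustrative assumptions, not anything the argument specifies:

```python
def count_simulations(branching=10, depth=5):
    """Count simulated universes if each universe runs `branching`
    child simulations, nested `depth` levels deep."""
    # Level 0 is the lone base reality; levels 1..depth are simulated.
    return sum(branching ** level for level in range(1, depth + 1))

# With 10 child simulations per universe, nested 5 levels deep:
# 10 + 100 + 1000 + 10000 + 100000 simulated universes vs 1 base reality.
print(count_simulations())  # -> 111110
```

Even with modest numbers, simulated realities swamp the single base reality, which is exactly the intuition pump behind the probabilistic argument.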
The problem with the argument, in my view, lies with the basic assumption: that we can in fact create a simulation of the Universe, consciousness and all. There’s precisely zero evidence for this.
The usual rejoinder points to the exponential growth in the complexity of software; but this tells us nothing. I can make ever more complex Lego houses, but I will never end up with an oil painting. Nor will I end up with, say, an education policy. They are completely different kinds of things. No-one in the IT world has a robust or clear idea of what consciousness is. To say that it’s nothing more than an emergent property of complex computation is purely speculative.
One of the things that the long history of philosophy teaches us is that when logic gets unmoored from data, it leads us into strange and often unprofitable byways. For this reason, the overwhelming tendency in philosophy has been to move away from such reasoning and towards a closer, more grounded kind of examination. The greater the leap that an inference has to make, the less confidence we should place in it. And boy, is there a lot of leaping going on.
Among the titans of IT, the secular saviors of our age, there is an all-too-human hubris. Google made a great search engine. Next: make a space elevator.
Too hard? Fix climate change. Also too hard? Okay, fine: cure death. Mark Zuckerberg made a successful social network. Next: eliminate disease. Elon Musk built a cool electric car. Next: colonize the solar system. And create consciousness, I guess. In fact, all of them are working on building true AI.
Whether this has anything to do with real consciousness is dubious. As a sci-fi geek, I’d love to see all these things happen. As a philosopher, my confidence declines the further we get from observed facts.
The thing with AI, though: it doesn’t need to be self-aware to end the world. It just needs to press a button.