A couple of commentators on a previous post pointed me to an Arc Digital article by Thomas Metcalf which contends that the Simulation Argument (SA) ought to be taken more seriously. (Metcalf’s article wasn’t written in response to mine, although it appeared a week or so afterward: post hoc sed non propter hoc.)
I don’t think there’s anything in the article that poses any problems for the arguments I gave in my post. Rather than respond to every point, I’ll just quote a few sections and make some comments.
Metcalf observes that the SA has two key premises:
The Empirical Premise: Most of the “people” who think they’re real, flesh-and-blood humans are actually conscious computer programs.
The Indifference Premise: If most people are simulated, then you are probably simulated.
As I explained in my earlier post, I think the first premise is false, and necessarily so: it’s not metaphysically possible for a computer to be conscious, assuming that the computer in question is a purely physical mechanism. (I think Metcalf actually commits a category error in his statement of the Empirical Premise. A computer program is abstract in nature; it’s a set of instructions that can be run by one or more physical computers. So it couldn’t be the program that’s conscious; at most it would be the computer running the program, if it were possible for a computer to be conscious at all.)
The second premise looks problematic too. Metcalf elaborates:
The idea behind the Indifference Premise is simple: If most people have some feature, then absent other evidence, you should guess that you probably have that feature.
Most people have the following feature: not being me. Should I therefore guess that I, too, am probably not me? Perhaps the “absent other evidence” clause is supposed to foreclose such trivial counterexamples. But what kind of evidence is in view here? Observational evidence? Surely that’s not the kind of evidence that would confirm my self-identity; self-identity is known a priori. Couldn’t I also know a priori that I’m not a computer simulation? Well, if I can know a priori that no purely material object can be conscious, then I can know a priori that I’m not a computer simulation. All this to say, both premises of Metcalf’s SA seem to hang on the controversial assumption that a computer can be conscious.
One common argument against substance (mind-body) dualism runs as follows. We know that consciousness is dependent on the brain, because when the brain is damaged it adversely affects consciousness and mental function. (You can prove this point to yourself experimentally by hitting yourself hard on the head with a brick.) Furthermore, it is argued, when brain function ceases altogether, consciousness disappears. (Don’t try to prove this latter point to yourself experimentally; just take it on trust.) Therefore, contra substance dualism, the mind — if it’s a real entity at all — must be ontologically dependent on the physical structures of the brain. We should be physicalists of some kind.
I come across this argument all the time in the writings of naturalists, but it strikes me as a blatant non sequitur. At most it shows that there’s a causal relationship between the mind and the body, which substance dualists insist upon anyway. (The so-called “interaction problem,” which is concerned with how there can be causation between physical and non-physical substances, is a different challenge to dualism, one I don’t propose to address here.) The fact that increasing damage to the brain leads to increasing mental impairment doesn’t at all imply that the mind cannot exist apart from the brain.
Here’s an analogy to elucidate why that’s so. Imagine a spaceship of the kind familiar from sci-fi movies. In this spaceship, the cockpit doubles up as an escape pod. In normal operation, the cockpit is attached to the main ship; whenever the ship moves, the cockpit moves with it, just as it should. If the ship is attacked with (say) photon torpedoes, the cockpit is buffeted about along with the rest of the spacecraft. When the ship is damaged, all of its systems can be affected; thus the operation of the cockpit can be impaired by damage to the ship in which it is housed.
If the ship becomes so badly damaged that it can’t move at all, the cockpit is stuck along with it, since it’s fixed to the ship. But if the spaceship is completely blown apart, the cockpit functions as an escape pod: it can detach from the doomed ship, and once detached, it can move freely again. (In line with a Christian eschatology, we could even extend the analogy such that if the parts of the ship are recovered and reassembled, the cockpit can be reattached — but that’s not necessary for the point I’m making here.)