A couple of commentators on a previous post pointed me to an Arc Digital article by Thomas Metcalf which contends that the Simulation Argument (SA) ought to be taken more seriously. (Metcalf’s article wasn’t written in response to mine, although it appeared a week or so afterward: post hoc sed non propter hoc.)
I don’t think there’s anything in the article that poses any problems for the arguments I gave in my post. Rather than respond to every point, I’ll just quote a few sections and make some comments.
Metcalf observes that the SA has two key premises:
The Empirical Premise: Most of the “people” who think they’re real, flesh-and-blood humans are actually conscious computer programs.
The Indifference Premise: If most people are simulated, then you are probably simulated.
As I explained in my earlier post, I think the first premise is false, and necessarily so. It’s not metaphysically possible for a computer program to be conscious, assuming that the computer in question is a purely physical mechanism. (I think Metcalf actually commits a category error in his statement of the Empirical Premise. A computer program is abstract in nature; it’s a set of instructions that can be run by one or more physical computers. So strictly speaking, a computer program couldn’t be conscious; the conscious thing, if a computer could be conscious at all, would be the computer running the program.)
The second premise looks problematic too. Metcalf elaborates:
The idea behind the Indifference Premise is simple: If most people have some feature, then absent other evidence, you should guess that you probably have that feature.
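To see what the premise asserts, it can be put in probabilistic terms as a self-sampling claim (this formalization is my gloss, not Metcalf’s): if there are $N$ people in total, of whom $N_{\text{sim}}$ are simulated, then absent other evidence you should reason as if you were randomly drawn from that population, so that

$$P(\text{I am simulated}) \approx \frac{N_{\text{sim}}}{N},$$

which approaches 1 if nearly everyone is simulated. The question is whether that “absent other evidence” qualification can be satisfied.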
Most people have the following feature: not being me. Should I therefore guess that I’m probably not me too? Perhaps the “absent other evidence” clause is supposed to foreclose such trivial counterexamples. But what kind of evidence is in view here? Observational evidence? Surely that’s not the kind of evidence that would confirm my self-identity. Self-identity is known a priori. Couldn’t I also know a priori that I’m not a computer simulation? Well, if I can know a priori that no purely material object can be conscious, then I can know a priori that I’m not a computer simulation.

All this to say, both premises of Metcalf’s SA seem to hang on the controversial notion that a computer can be conscious.