Skepticism, Pragmatism, and Zebras

In 1970 Fred Dretske published a paper about a fairly technical issue in epistemology.1,2 In that paper he gave a “silly example” (his words) to illustrate a point about skepticism. Imagine that you take your kid to the zoo to see the zebras. Now, how do you know that the animals you are looking at are zebras? Dretske points out that most of us wouldn’t hesitate to say that those animals are zebras:

We know what zebras look like, and, besides, this is the city zoo and the animals are in a pen clearly marked “Zebras.” Yet, something’s being a zebra implies that it is not a mule and, in particular, not a mule cleverly disguised by the zoo authorities to look like a zebra. Do you know that these animals are not mules cleverly disguised by the zoo authorities to look like zebras? If you are tempted to say “Yes” to this question, think a moment about what reasons you have, what evidence you can produce in favor of this claim. The evidence you had for thinking them zebras has been effectively neutralized, since it does not count toward their not being mules cleverly disguised to look like zebras. Have you checked with the zoo authorities? Did you examine the animals closely enough to detect such a fraud? You might do this, of course, but in most cases you do nothing of the kind. You have some general uniformities on which you rely, regularities to which you give expression by such remarks as, “That isn’t very likely” or “Why should the zoo authorities do that?” Granted, the hypothesis (if we may call it that) is not very plausible, given what we know about people and zoos. (1015-6)

It turns out that Dretske’s example isn’t as silly as he thought it was: allegedly, a zoo in Egypt painted some donkeys (not mules, but close enough) to look like zebras.

The context of Dretske’s “silly example” was a type of argument for skepticism. Within philosophy, the term “skepticism” usually refers to attempts to systematically undermine knowledge claims. According to skeptics, we cannot know what we believe we know about some (usually broadly construed) topic. According to moral skeptics, for example, we can have no moral knowledge (that is, no knowledge about what is right or wrong). Such philosophical skepticism is usually based on one (or more) of three kinds of arguments. In the following I intend to show that two of these kinds of skeptical arguments fail, and that the third depends on an unscientific and impossible demand for absolute certainty which the (hypothetical) skeptic shares with most non-skeptical philosophers. But before I turn to these arguments I want to make a few brief remarks about the concept of “skepticism”.

For the ancient Greeks, skepticism was – in the first place – a philosophy of life. Skeptics believed that we should suspend judgment whenever we could not be certain about the right answer or the right thing to do, and that our lives would be better this way. Nowadays, the term “skeptic” generally has a slightly different meaning – skepticism is usually understood to be an attitude focused on doubt (rather than suspension of judgment), but the term is also sometimes abused to refer to denial rather than doubt. There are organizations and publications in many countries that call themselves “skeptical”. These organizations usually investigate all kinds of strange claims that conflict with modern science – from homeopathy to UFOs and from Bigfoot to astrology. The term “skeptical” is appropriate in the case of these organizations: they doubt, they suspend judgment, and they investigate. If there is overwhelming evidence to reject some strange claim, then they usually will indeed reject that claim, but not before they have assessed that evidence, and with the typical caveat of the scientist: new evidence may lead to different conclusions. This, then, is an open-minded and scientifically motivated kind of skepticism. It contrasts in almost every respect with the attitude of denial that is also sometimes called “skepticism”. That attitude of denial is the attitude of the so-called “climate skeptic”, who is not a skeptic in any sense of the term. “Climate skeptics” bluntly deny findings of climate science that are supported by overwhelming evidence and by almost everyone working in the field. A climate skeptic doesn’t suspend judgment, but judges. A climate skeptic doesn’t doubt, but denies. And climate skeptics aren’t motivated by science, but by ideology (and money – or perhaps especially by money). In short, climate skeptics are not skeptics.

Skepticism in philosophy is of a somewhat different nature. A philosophical skeptic isn’t as malicious as a so-called “climate skeptic”, but isn’t anything like an open-minded investigator of strange claims either. The role of the (often hypothetical) philosophical skeptic is to undermine our theories and knowledge claims. Skeptical arguments aim to show that we cannot know (with the certainty that “knowing” implies) what we believe we know, and thus, for any philosophical claim or theory, the skeptic’s “job” is to show that we cannot know something that we do (or should) know according to that claim or theory. Many philosophical theories are developed in response to – real or hypothetical – skeptical objections, and sometimes philosophers propose outrageous “solutions” in response to outrageous (and usually hypothetical) skeptical doubts. I don’t think this is the right way to do philosophy, however. Doubt can be unreasonable, and unreasonable doubt should not be taken seriously. I will return to this point below, after assessing the aforementioned three skeptical arguments.

“some” does not imply “possibly all”

The most common type of skeptical argument (which can be found in Descartes’ Meditations, for example) comes in two varieties. The first variety reasons from “some” to “possibly all”, the second from “any” to “all”. The first looks something like this:

  • Some of our beliefs are (or turn out to be) mistaken.
  • Therefore, all of our beliefs could be mistaken.

(Note that in philosophy, “belief” just means something you hold true. I believe that I’m typing this sentence on my old and dying laptop computer, for example.) The second variety goes approximately like this:

  • Any of our beliefs could be mistaken.
  • Therefore, all of our beliefs could be mistaken.

Both varieties are invalid. Simon Blackburn offers a nice analogy to show this in Think,3 possibly the best introduction to philosophy: “Some banknotes are forgeries. So for all we know, they all are forgeries” (p. 22). Or, rephrasing to make it even more similar to variety 1:

  • Some banknotes are (or turn out to be) forgeries.
  • Therefore, all banknotes could be forgeries.

And in the mold of variety 2:

  • Any banknote could be a forgery.
  • Therefore, all banknotes could be forgeries.

Blackburn points out that “the conclusion is impossible, since the very notion of a forgery presupposes valid notes or coins. Forgeries are parasitic upon the real. Forgers need genuine notes and coins to copy.” (p. 22) Hence, the conclusion cannot possibly follow from the premise (in either variety), which means that the arguments are invalid. And given that these arguments about banknotes are logically analogous to the original arguments (meaning that they “work” in exactly the same way), the original arguments are invalid as well. In other words, neither from the fact that we are sometimes mistaken nor from the fact that any of our beliefs could be mistaken does it follow that everything that we believe (i.e. that we think we know) could be mistaken. Thus, the first kind of skeptical argument fails.

the grammar of justification

The second type of skeptical argument is based on the “closure principle of justification”. This is the argument that Dretske argued against with his “silly example”. Knowledge is traditionally defined as justified true belief: to know something, you must believe it, you must have justification for that belief (such as evidence or a good argument), and your belief must be true. Edmund Gettier became famous for publishing two counterexamples (that is, two cases of justified true belief that don’t appear to count as knowledge),4 but most philosophers still hold that something like justification is necessary for knowledge. If you don’t have proper justification, then you don’t know.

According to the closure principle of justification, if you have justification for believing that \(p\) (regardless of what \(p\) stands for), and \(p\) implies \(q\), then you are also justified to believe \(q\). It is this principle that leads to a skeptical argument, which is most easily illustrated by means of Dretske’s “silly example”.

Let \(p\) stand for “the zoo animal I’m looking at is a zebra”, and \(q\) for “the zoo animal I’m looking at is not a disguised mule” (or donkey, if you prefer the Cairo zoo to Dretske’s hypothetical example). It obviously is the case then that \(p\) implies \(q\): if I’m looking at a zebra, then I’m not looking at a disguised mule. Now, according to the closure principle, if I have justification for my belief that what I’m looking at is a zebra, then that justification carries over to the implication (i.e. to \(q\)), and I am, therefore, justified to believe that what I am looking at is not a disguised mule. However, the skeptic argues, I am actually not justified to believe that what I am looking at is not a disguised mule, because the disguise might be very clever (and indeed, the disguise of the donkeys in the Cairo zoo fooled most visitors).5 By implication, I am not justified to believe that I am looking at a zebra. If this argument is generalized, I am not justified to believe anything, and therefore, I don’t know anything, which is, of course, the skeptic’s point.

Let’s spell out the argument:
1) If I’m looking at a zebra, then I’m not looking at a disguised mule.
2) If I am justified to believe that I’m looking at a zebra, then I’m justified to believe that I’m not looking at a disguised mule. (This follows from the closure principle.)
3) I am not justified to believe that I’m not looking at a disguised mule. (Because the disguise might be very clever.)
4) Therefore, I am not justified to believe that I’m looking at a zebra.

The first line (1) sets up the closure principle, which is used in line (2). Lines (2) to (4) are the main argument. The form of this argument is called modus tollens, which is a perfectly valid argument form. (Arguments in modus tollens follow the pattern “if \(p\) then \(q\); not \(q\); therefore not \(p\)”. For example, “If it rains then the streets get wet. The streets aren’t wet. So it didn’t rain.”) Whether it is a good argument doesn’t depend on validity alone, however, but also on whether the premises, (2) and (3), are true. If you look back at the quote by Dretske at the beginning of this post, you’ll see that Dretske rejects (3). He writes that the evidence for (3) is also evidence for the first half (the “antecedent”) of (2) – that I am justified to believe that I’m looking at a zebra – and that that evidence is therefore “neutralized”. However, if that evidence is neutralized, then we don’t have evidence for believing that I’m looking at a zebra in the first place, and we would end up rejecting this argument for skepticism while landing in skepticism anyway, because “neutralized” evidence isn’t evidence for anything.
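To make the modus tollens structure of lines (2) to (4) explicit, here is a minimal schematic rendering (with \(Z\) abbreviating “I am justified to believe that I’m looking at a zebra” and \(M\) abbreviating “I am justified to believe that I’m not looking at a disguised mule”):
$$ Z \rightarrow M; \quad \neg M; \quad \text{therefore} \quad \neg Z $$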

Like Dretske, I think that (3) is false, but not because the evidence is somehow “neutralized”. On the contrary, in most cases the evidence would justify me in believing both that I’m looking at a zebra and that I am not looking at a disguised mule. This is not my main reason for rejecting this kind of skeptical argument, however. (I’ll return to this issue in the next section.) My main reason is that I think (2) – in this form – is false as well.

Most critics of the closure argument for skepticism reject (2), and usually they reject it because they believe that there is something wrong with the closure principle of justification. There is indeed – or actually, there are several things wrong with it. One problem that is often pointed out is that necessary truths are logically implied by anything, but fixes for that problem have been proposed. There is a much more fundamental problem, however, and it will take a bit of a technical detour to explain it.

In the sentence “John eats an apple” the verb “to eat” is transitive, meaning that it takes two arguments: John and the apple. We can represent this formally as \(eat(j,a)\), in which \(j\) stands for John and \(a\) for the apple. If John eats an apple, then that apple is being eaten. In other words, the active voice sentence “John eats an apple” implies the passive voice sentence “an apple is (being) eaten”.6 And conversely, the passive voice sentence “an apple is (being) eaten” implies that there is someone or something eating that apple. For that reason, a formal representation of the passive voice substitutes “there is something that/who” for the subject of the active voice sentence. Thus, a correct formalization of “an apple is (being) eaten” is \(\exists x [eat(x,a)]\), in words: “there is some \(x\) such that \(x\) eats \(a\)”. One could, of course, propose alternative formalizations along the lines of \(being\text{-}eaten(a)\), but such a formalization obscures the relation between “eat” and “being eaten” and – more importantly – it obscures the essential fact that there is someone or something doing the eating. The passive voice does not imply that there isn’t anyone eating the apple; it just doesn’t mention who or what is doing the eating (because it doesn’t matter in the context, or because it is unknown, or because it is assumed to be known by the audience, or …).
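Incidentally, the inference from active to passive here is just existential generalization, a standard rule of predicate logic:
$$ eat(j,a) \;\Rightarrow\; \exists x\,[eat(x,a)] $$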

“To justify” is a transitive verb: a certain body of evidence justifies a certain belief. Let’s formalize this as \(J(e,b)\), in which \(J\) stands for “justifies”, \(e\) for the evidence, and \(b\) for the belief that this evidence justifies. To say that a certain belief is justified is a passive voice sentence. It should be formalized – similarly to “an apple is (being) eaten” – as \(\exists x[J(x,b)]\), or in words: “there is something (i.e. some evidence) that justifies belief \(b\)”. In the same way that the passive voice of “to eat” doesn’t make the eater magically disappear (it just leaves the eater unmentioned), saying that a belief is justified doesn’t make whatever justifies it magically disappear.

Now, let’s have another look at the closure principle of justification. Traditionally, it is taken to have the following form: “if \(J(p)\) and \(p\) implies \(q\), then \(J(q)\)” (in words: “if \(p\) is justified and \(p\) implies \(q\), then \(q\) is justified”). This turns out to be incorrect, because “justify” is a transitive verb that takes two arguments rather than one. If we take that into account, the principle becomes: “if \(\exists x[J(x,p)]\) and \(p\) implies \(q\), then \(\exists x[J(x,q)]\)”, or in words: “if there is something that justifies \(p\) and \(p\) implies \(q\), then there is something that justifies \(q\)”. The important thing to note, however, is that the first and the second \(x\) are not necessarily the same thing – that is, the evidence that justifies \(p\) does not have to be the same evidence that justifies \(q\). All the principle says is that there is evidence for \(q\), not what that evidence is.
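Displayed side by side (writing \(\wedge\) for “and” and \(\rightarrow\) for “implies”), the difference between the traditional one-place reading and the two-place reading is this:
$$ \text{traditional:} \quad J(p) \wedge (p \rightarrow q) \;\Rightarrow\; J(q) $$
$$ \text{two-place:} \quad \exists x[J(x,p)] \wedge (p \rightarrow q) \;\Rightarrow\; \exists x[J(x,q)] $$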

Nevertheless, we can say more than that. The evidence for \(p\) is a collection of sentences. For example, if \(p\) stands for “the zoo animal I’m looking at is a zebra”, then the evidence that justifies this belief includes sentences like “the zoo animal I’m looking at has black and white stripes”, “zebras have black and white stripes”, and so forth. If \(q\) stands for “the zoo animal I’m looking at is not a disguised mule”, then \(q\) is justified by a subtly different collection of sentences, however. That is, the “evidence” for \(q\) consists of the evidence for \(p\) plus a few further sentences that state that \(p\) implies \(q\) and that this in turn implies that the justification for \(p\) carries over to \(q\) (i.e. the closure principle). In other words, the evidence that justifies \(q\) includes the evidence for \(p\), but doesn’t coincide with it (in the same sense that a car includes a steering wheel but isn’t identical to it). And therefore, \(p\) and \(q\) are indeed justified by different evidence. But if that is the case, then representing the closure principle as “if \(\exists x[J(x,p)]\) and \(p\) implies \(q\), then \(\exists x[J(x,q)]\)” is still misleading – we need to be more specific about the “evidence”. So what we should have instead is something like this: “if \(J(e,p)\) and \(p\) implies \(q\), then \(J(\{e,c\},q)\)”, or in words: “if the belief that \(p\) is justified by evidence \(e\) and \(p\) implies \(q\), then the belief that \(q\) is justified by evidence consisting of \(e\) and \(c\)”, in which \(c\) stands for the necessary additions, namely that \(p\) implies \(q\), and a representation of the closure principle itself.7

If we apply this to the closure argument for skepticism, then that argument falls apart. (2) then states that if I am justified to believe that I’m looking at a zebra by visual evidence and my knowledge of zebras (for example), then I am justified to believe that I am not looking at a disguised mule by the same evidence plus the fact that zebras are not disguised mules and the closure principle. And (3) states that I am not justified to believe that I am not looking at a disguised mule by visual evidence and my knowledge of zebras (because the disguise might be very clever). Recall that the conclusion follows by modus tollens: “if \(p\) then \(q\); not \(q\); therefore not \(p\)”. This depends on identity between what follows “not” in (3) and what follows “then” in (2), but that identity has disappeared: what follows “not” in (3) is “justified to believe that I am not looking at a disguised mule by visual evidence and my knowledge of zebras”, whereas what follows “then” in (2) is “justified to believe that I am not looking at a disguised mule by visual evidence and my knowledge of zebras plus the fact that zebras are not disguised mules and the closure principle”. The implication is that (2) and (3) are talking about subtly different things (and subtle differences matter!), and therefore, nothing follows from these two sentences taken together.
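Schematically, with \(e\) for the visual evidence plus my knowledge of zebras and \(c\) for the closure-related additions, (2) now has the form (given (1)):
$$ J(e,p) \;\Rightarrow\; J(\{e,c\},q) $$
while (3) has the form \(\neg J(e,q)\). Modus tollens would require \(\neg J(\{e,c\},q)\) – a different sentence – and so \(\neg J(e,p)\) does not follow.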

Hence, the skeptical argument based on the closure principle fails, but we still have a problem. (3) says that I am not justified to believe that I am not looking at a disguised mule, and this raises two closely related questions: When am I justified to believe something? And when am I justified to discard alternative explanations of the same evidence? (That I’m looking at a disguised mule rather than a zebra is an example of such an alternative explanation of the same visual evidence.) According to the third and most radical skeptical argument, the answer to either question is “never”.

any doubt is fatal

According to the third skeptical argument, if there is any (genuine) reason to doubt some belief, then that belief is not justified (and therefore, not knowledge). Since it is always possible to come up with (more or less exotic) reasons to doubt a belief, no beliefs are justified, and thus, we don’t know anything.

This argument depends on the idea that justification requires certainty – only absolutely certain knowledge is real knowledge. The quest for absolute certainty is a religious quest, however. Science doesn’t produce certainties – if you are looking for certainties, you should turn to religion. Unfortunately, most of philosophy sides with religion in this respect: like religion, philosophy is a quest for absolute certainty. The main (perhaps only) difference is that religion settles for dogma while philosophers never stop searching (ideally, that is, as there are plenty of dogmas to be found in the history of philosophy). Like most other philosophers, skeptics are on a quest for certainty, and they believe they have found it in the certainty that nothing else can be known for certain. They hang on to that one certainty as if their lives depended on it.

W.V.O. Quine, probably the greatest philosopher of the 20th century, argued that we should accept philosophical theories in the same way that we accept scientific theories: provisionally, knowing that new evidence may require us to give up any of our (theoretical) beliefs.8 Quine’s “naturalism” makes philosophy a part of science, and adopts a broadly scientific attitude towards philosophical theories – an attitude that accepts uncertainty, and thus contrasts with the more religious attitude that continues the never-ending quest for absolute certainty. This approach to philosophy is the product of the Pragmatic tradition that started in the second half of the 19th century in the US with philosophers like Charles Sanders Peirce and William James.

I’m not interested in the quest for certainty – I’ll gladly leave that quest to religions and religiously minded philosophers. We should accept a theory – regardless of whether it is a philosophical, biological, economic, or astronomical theory – if we have overwhelming evidence for it. But this raises the question, of course, of what exactly that means – when is evidence “overwhelming”? There isn’t a single, simple answer to that question, but if I had to pick a general framework for assessing evidence, I’d pick Bayes’ theorem.

Bayes’ theorem is a formula to calculate the probability of some hypothesis \(H\), given evidence \(E\): “\(P(H|E)\)”. This is (one version of) Bayes’ formula:
$$ P(H|E) = { { P(H) \times P(E|H) } \over { P(E) } } $$ What it says is that \(P(H|E)\), the probability of hypothesis \(H\) given evidence \(E\), is the product of the probability of \(H\) and the probability of the evidence \(E\) if \(H\) is true, divided by the probability of the evidence \(E\).

Let’s apply this to the zebra case to clarify it. What we want to know is \(P(H|E)\), the probability that the zebra-like “thing” we’re looking at actually is a zebra. The hypothesis \(H\) is that it is a zebra; the evidence \(E\) is that it looks like a zebra. \(P(H)\) and \(P(E)\) are, respectively, the probability of encountering a zebra in the zoo, and the probability of encountering something that looks like a zebra in the zoo. \(P(E)\) is slightly larger than \(P(H)\) because the donkeys in the Cairo zoo also look like zebras (except for their rather un-zebra-like ears and noses), and so does a particularly well-made zebra statue, for example. \(P(E|H)\), finally, is the probability of seeing something that looks like a zebra when looking at a zebra (or in other words, the chance of recognizing a zebra as a zebra). Since zebras are rather iconic, \(P(E|H)\) is close to 100%, and this implies that \(P(H|E)\) is almost equal to the ratio of \(P(H)\) and \(P(E)\), which is the percentage of zebra-like “things” in the zoo that are actually zebras. Obviously, this percentage depends on how zebra-like these things have to be (to count as “zebra-like”). If they have to look very much like a zebra, then the percentage will be very close to 100% (because, aside from real zebras, there are very few things that look very much like zebras); if they only have to be somewhat zebra-like, then the percentage will be much lower. Note that the extent of zebra-likeness is essentially the quality of the evidence \(E\), and thus a specification of \(E\). If the evidence \(E\) is that what we’re looking at looks very much like a zebra, then \(P(H|E)\), the probability of it actually being a zebra given that evidence, is close to 100%. If \(E\) is that it is merely somewhat zebra-like, then that probability will be much lower.
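As a one-line derivation of the approximation just described: if \(P(E|H) \approx 1\), then
$$ P(H|E) = { { P(H) \times P(E|H) } \over { P(E) } } \approx { { P(H) } \over { P(E) } } $$
which is exactly the ratio of actual zebras to zebra-like “things” in the zoo.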

If, given the evidence, the probability that what we’re looking at is a zebra is close to 100%, then we are justified to believe that it is a zebra. How close exactly it needs to be can be debated. Social scientists usually want to be at least 95% sure before they conclude something, but there may be cases in which we would want the threshold to be higher. If what we’re looking at in the zoo very much looks like a zebra, then the probability that it actually is a zebra is much higher than 95%, however, so we don’t really have to worry about what the exact threshold should be.

But where are the mules or donkeys? In Dretske’s “silly example”, the cleverly disguised mules are an alternative explanation of the same (visual) evidence \(E\). It is not the only possible alternative explanation, however. We could also be hallucinating, or looking at a zebra statue, among many other more or less plausible options. The probability of the evidence, \(P(E)\), is the sum of the probabilities of all the things that could produce that evidence, each multiplied by the probability that it would actually produce that evidence. So, for every possible explanation \(x\) of the evidence (seeing something that looks like a zebra), we need to figure out the probability of that explanation, \(P(x)\), and the probability that it would produce the evidence, \(P(E|x)\), and then multiply these two numbers. \(P(E)\) is the sum of these products over all possible explanations: \(P(E) = \sum \big( P(x) \times P(E|x) \big)\).

We have two kinds of explanations, however: our hypothesis \(H\), and a bunch of alternative explanations \(A\) (such as mules, donkeys, hallucinations, and statues). Hence,
$$ P(E) = P(H) \times P(E|H) + \sum ( P(A) \times  P(E|A) ) $$ Substituting this for \(P(E)\) in Bayes’ theorem results in this:
$$ P(H|E) = { { P(H) \times P(E|H) } \over { P(H) \times P(E|H) + \sum { \big( P(A) \times P(E|A) \big) } } }$$ This may look complicated, but it is actually fairly simple. We now have “\(P(H)\times P(E|H)\)” both above and below the line, which means that the smaller \(\sum(P(A)\times P(E|A))\) is relative to \(P(H)\times P(E|H)\), the more likely it is that our hypothesis \(H\) (given evidence \(E\)) is true. Or in other words, the smaller the likelihood of alternative explanations relative to our hypothesis, the more likely it is that our hypothesis is true. An obvious problem is that \(\sum(P(A)\times P(E|A))\) includes all possible other explanations, and it is not likely that we know all of those. However, the more exotic an alternative explanation, the lower \(P(A)\) or \(P(E|A)\) (or both), and the less it contributes to \(\sum(P(A)\times P(E|A))\). In other words, “explanations” that are extremely unlikely and/or very unlikely to produce \(E\) can be safely ignored (unless there are so many of them that adding them all up results in a significant number anyway).
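For readers who prefer code to formulas, here is a minimal Python sketch of this expanded formula (the function name and the representation of alternatives as a list of pairs are my own illustrative choices):

    def posterior(p_h, p_e_given_h, alternatives):
        # P(H|E) according to the expanded Bayes formula:
        #   P(H|E) = P(H)*P(E|H) / ( P(H)*P(E|H) + sum of P(A)*P(E|A) )
        # 'alternatives' is a list of (P(A), P(E|A)) pairs, one pair per
        # alternative explanation of the evidence E.
        numerator = p_h * p_e_given_h
        alt_mass = sum(p_a * p_e_a for p_a, p_e_a in alternatives)
        return numerator / (numerator + alt_mass)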

Back to the zoo. Heat-induced hallucinations may be possible this summer,9 but the chance of hallucinating a zebra specifically appears pretty small. Donkeys disguised as zebras occur, and apparently they fool most visitors, so their \(P(E|A)\) is high, but there is only one suspected case in the more than a thousand zoos worldwide, so \(P(A)\) is extremely low (especially if time is taken into account). Zebra statues are probably more common, but are less likely to be confused with real zebras (i.e. they have a low \(P(E|A)\)). And so forth. In other words, for all of these alternative explanations \(P(A)\times P(E|A)\) is an extremely small number. Zebra statues probably have the highest \(P(A)\times P(E|A)\), but even that number is unlikely to exceed 0.1% if the evidence \(E\) we’re talking about is seeing something that very much looks like a living zebra. \(\sum(P(A)\times P(E|A))\) is the result of adding up all of those very small numbers, most of which are in fact so small that they can be ignored. I’d be extremely surprised if that sum exceeds 1% – in fact, I suspect it to be well below 0.1%.

But let’s say that \(\sum(P(A)\times P(E|A))\) is 1%. And let’s assume that 50% of all zoos have zebras and that there is a 98% chance that when facing a zebra, I’m seeing a zebra.10 Then, if we plug in those numbers, we get 50% × 98% divided by (50% × 98% + 1%), which is 49% / 50% = 98%. That last number is \(P(H|E)\), the probability that what looks like a zebra actually is a zebra. If \(\sum(P(A)\times P(E|A))\) is 0.1%, then that probability rises to 99.8%. But I’m happy with 98% – if there is a 98% probability that given evidence \(E\) my hypothesis \(H\) is correct, then – in most cases – I am justified to believe \(H\).
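For what it’s worth, the arithmetic checks out (a quick self-contained verification using nothing but the numbers in this paragraph):

    # P(H) = 0.50, P(E|H) = 0.98
    numerator = 0.50 * 0.98                   # = 0.49
    print(numerator / (numerator + 0.01))     # 0.98   (alternatives sum to 1%)
    print(numerator / (numerator + 0.001))    # ~0.998 (alternatives sum to 0.1%)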

evil demons and mad scientists

The problem, of course, is that in philosophy we often don’t know how probable supposed alternative explanations are, and how likely they are to produce the same (or sufficiently similar) evidence. Descartes suggested an “evil demon” as an alternative explanation for reality as it appears to me. According to this explanation, reality is nothing like I think it is – rather, everything I experience is the result of deception by this evil demon. A modern variant of this idea is the brain-in-a-vat (BIV) scenario. Rather than reality being as it appears to be, I’m really just a BIV with a bunch of electrodes connected to it. Through those electrodes some mad scientist (not to be confused with the Mad Professor) creates the illusion of all the experiences I take to be real.

It seems to be fundamentally impossible to estimate how probable such “alternative explanations” are. Given everything we think we know about the universe, they appear to be extremely unlikely, but of course, if they are true, then everything we think we know – including the unlikeliness of these explanations – is put in our heads by the evil demon or mad scientist, so our beliefs about the unlikeliness of these scenarios may themselves be a deception. It seems that all we can do is to assume that most of what we believe is true, and to assess \(P(A)\) and \(P(E|A)\) against that assumption. And that is indeed what we should do. Given everything we know – and given that we cannot base our judgment on anything else – \(P(A)\times P(E|A)\) for BIVs and evil demons is extremely low (perhaps even zero, because given everything we know, these scenarios may very well be impossible).

This is pretty much how science works. We test hypotheses against the background of everything we think we already know, because that is the only thing we can do. And by implication, it may turn out at some point that smaller or bigger chunks of our assumed knowledge are false and thus need to be rejected or adjusted. This has happened several times in history – Thomas Kuhn called such events “scientific revolutions” – and it will probably happen again. There is no certainty in science, just provisional acceptance of the best (i.e. most well-supported) available theories. And as Quine argued, at least in principle, none of our beliefs is immune from potential revision.

Many philosophers seem (or seemed?) to think that philosophy is fundamentally different – that philosophy is concerned with a special kind of knowledge and uses a special method – but Pragmatists reject such ideas, and for good reasons. The choice for philosophy is between a pointless, religious quest for absolute certainty and the adoption of something like the scientific method, accepting uncertainty and accepting that conclusions are always provisional. The Pragmatist, Neo-Pragmatist, or Quinean naturalist chooses the second option, and that is also my choice.

Nevertheless, the Pragmatist doesn’t refute the skeptic, but – as Richard Rorty once phrased it – merely tells him to get lost.11 From the Pragmatist point of view, the skeptical “alternatives” (like the evil demon or BIV) are too unlikely to be taken seriously. But there is a second reason for a Pragmatist to ignore such supposed “alternative explanations”. To have any significant effect on the probability of hypothesis \(H\) given evidence \(E\) (for example, that reality really is as it appears to be according to our evidence), an alternative explanation must have a high probability and a high \(P(E|A)\), but if something has a sufficiently high \(P(E|A)\), then in practice it is indistinguishable from \(H\), because all we can rely on is the evidence \(E\). Or in other words, the only reason to take something like the evil demon scenario seriously would be that it is very likely to produce the exact same experiences (of apparent reality) as those we actually have – that is, if in practice the evil demon scenario and reality as we think it is are indistinguishable. But if some supposed alternative explanation is effectively indistinguishable from the more commonsensical hypothesis that most of us accept (namely, that reality is more or less like we think it is), then there is no reason to take that alternative seriously after all. If something were different in the evil demon scenario, then that scenario would be demonstrably false; but if nothing would be different, then what is the point of even considering that scenario?

Notes

  1. Epistemology is the branch of philosophy that is concerned with the nature, possibility, and sources of knowledge.
  2. Fred Dretske (1970). “Epistemic Operators”, The Journal of Philosophy 67.24: 1007-1023.
  3. Simon Blackburn (2011). Think: A Compelling Introduction to Philosophy (Oxford University Press).
  4. Edmund Gettier (1963). “Is Justified True Belief Knowledge?”, Analysis 23: 121–123.
  5. This is assuming that they actually were disguised donkeys, which, as far as I know, has not been confirmed yet.
  6. Or strictly speaking, what the first sentence expresses implies what the second sentence expresses.
  7. Or alternatively, ∃x[ if J(x,p) and p implies q, then J({x,c},q) ]; in words: “if there is some x such that p is justified by x and p implies q, then the belief that q is justified by evidence consisting of x and c”.
  8. See, for example, W.V.O. Quine (1948). “On What There Is”. In (1964), From a Logical Point of View (Cambridge MA: Harvard University Press), 1-19. And W.V.O. Quine (1960). Word & Object (Cambridge MA: MIT Press).
  9. This post was written during the 2018 heatwave.
  10. I have no clue about what percentage of zoos has zebras. It may be much higher than 50%, but it could also turn out to be a little lower. 98% seems a rather low estimate for P(E|H), on the other hand. I’d say that the chance of recognizing a zebra is much closer to 100% than that.
  11. Actually, Rorty said this about Donald Davidson, who perhaps wasn’t a Pragmatist in a strict sense of that term (because Davidson defies labeling), but Davidson was a student of Quine and his philosophy was also heavily influenced by pragmatism. (As was Rorty himself.) See: Richard Rorty (1986). “Pragmatism, Davidson and Truth”. In Ernest LePore (ed.), Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson (Oxford: Blackwell), 333-355.