Pursuing Truth through Intuition

Consider these famous thought experiments:

A sheriff in a small town faces a crisis: a brutal crime has inflamed public outrage, and an innocent man—widely believed to be guilty—has been arrested. The evidence against him is weak, but the townspeople are restless, verging on mob violence. If the sheriff releases the suspect, riots will likely erupt, leading to multiple deaths and widespread destruction. But if he lets the man be convicted and executed in order to preserve public order, he knowingly condemns an innocent person. And as difficult as the choice may be, condemning an innocent man seems clearly wrong.

In philosophy, knowledge is typically defined as justified true belief. For us to say that a person knows something, it is not enough that they believe it, nor even that they believe it and it is true; they must believe it, it must be true, and they must have a justification (reasoning) behind their belief. Yet suppose a man is driving in the countryside and sees what looks exactly like a barn. Accordingly, he thinks he is seeing a barn—and in fact, he is. What he does not know is that the surrounding countryside is full of fake barns—facades designed to look exactly like real barns when viewed from the road. Had he been looking at one of those, he would have been unable to tell the difference, so his "knowledge" that he was looking at a barn seems poorly founded. It seems he doesn't really know anything; he just got lucky.

Imagine a machine that could provide whatever desirable or pleasurable experiences a subject could want. Psychologists and bioengineers have figured out how to stimulate a person's brain to induce these good experiences in a way indistinguishable, to the subject, from the real thing. It is suggested that, because all that matters is the pursuit of pleasure and the avoidance of pain, all one needs for a good life is to be connected to this machine indefinitely. Yet hooking oneself up to such a machine seems not only hollow but possibly even wrong.

Suppose a Chinese speaker is told that they can slip paper under a door in order to communicate with someone on the other side. They do this, sliding a sheet with Chinese writing under the door, and after a moment they receive back a sheet with Chinese characters written on it. The Chinese speaker carries on a conversation like this for a while, until you walk up and tell them that on the other side of the door is not a person but a computer. The Chinese speaker is shocked but quickly grants that this amazing computer has reached a level where it not only understands Chinese but understands human communication. (If you're unfamiliar with the Turing Test, it is essentially this—the suggestion that if one communicated with a computer but was unable to tell that one was talking with a computer rather than another person, this would be such an achievement that we could rightfully say the computer has become intelligent.) But what if the reality was that I (a man who does not know a word of Chinese) was on the other side of the door? Suppose I was given an English manual of instructions, along with plenty of pencils, paper, erasers, and filing cabinets, and told that when Chinese characters were slipped under the door, I should follow the step-by-step program, which would eventually instruct me to slide new Chinese characters back out under the door. Although the scenario is a bit farfetched with me in it (and not at all unbelievable with the computer), it is hard to see any real difference between me following instructions and a machine running a program. And assuming we would never say that I understand Chinese, it seems the computer doesn't understand Chinese either.


Do you notice anything these thought experiments have in common in how they make their arguments? Among other things, each scenario makes its final claim by appealing to an intuition in the reader—That computer doesn't know Chinese, because we would never say a man shuffling paper knows a language… Something seems off about forgoing the real world in favor of simulated pleasure… There's just something wrong about condemning an innocent man, no matter how much good it achieves. Arguments of this sort are common in philosophy and ethics, often taking a reductio ad absurdum structure: some initial premise cannot be true, or else it would lead to an obvious contradiction with a cognitive or moral intuition we all share.

Years ago when it was first pointed out to me that many arguments in philosophy rely ultimately on intuition, I was upset. Intuitions are fuzzy things. There’s no guarantee that they’ll be the same from person to person or even moment to moment. What is more, they’re not based on anything—they come from the gut. So appealing to them seems to run counter to the logical rigor that I thought was the point of our philosophical inquiry in the first place. What makes these intuitions, about ethics, about meaning, better than the erroneous intuitions we’re trying to fix? If we’re trying to get at the nature of things, the really real, then it just seems wrong to base our arguments and philosophical systems on those murky ideas hovering at the back of our brains (which I suppose is itself an intuition).

Of course, we’re free to ignore these intuitions. In philosophical terms, it’s called “biting the bullet.” Yeah, it feels wrong to condemn an innocent man, but that is in fact the right thing to do in that scenario… it doesn’t matter if he was lucky or not, it was truly a barn he was looking at and he did in fact know it. It doesn’t matter that we don’t want these conclusions to be true; what matters is that they line up with our best current theories for how things are, so we should start getting used to their conclusions. Unfortunately for this “biting the bullet” approach, I associate it most with edgy philosophy students and, for lack of a better term, moral monsters. The sort of people who ignore the intuitions we all share, the sort of intuitions that tell us condemning an innocent person is never justified, that life is more than sensory pleasure, that all people are to be treated equally and with dignity—these people live in a different moral world than the rest of us and are, you know, monsters. 

It should be obvious if you read last week’s post that I’ve grown to take these intuitions more seriously. Not only in a linguistic sense—What are we really asking when we ask if something is human? What do we really mean by “person”?—but also in terms of the sort of arguments I want to bring up: Do we suppose that heaven is mostly populated by souls that never saw the light of day? Is it too late to attribute personhood only after memories begin to form? In any case, I think those gut feelings do a lot more work and reveal a lot more about how the world operates than our puny attempts at analytic arguments.

This is nowhere more clear than at the end of life, when a family looks at a loved one who has died or will die soon. At that point, the loved one is not much of a person—they have little or nothing of what seems to count. But even if their heart has stopped and their brain no longer functions, no one in that family looks down at a corpse devoid of value, at a mass of tissue that no longer serves a purpose other than to be disposed of. Their intuition tells them that they are looking down at family, at a person (though deceased) whom they still love.


Discover more from Religion & Story
