What Should We Do?

A Meditation on Choice

By Stephen Raithel

 

Should I study for my test or go play basketball? Should I go to Old Kenyon tonight or stay in my New Apt? Should I bother waiting in the omelet line? As a creature capable of some degree of forethought, what tools do I have at my disposal to navigate these decisions?

In a popular anecdote from How We Decide, Jonah Lehrer gives a plug for the emotions. He tells the story of Lieutenant Commander Michael Riley, who manned the radar station on HMS Gloucester, a British destroyer. Riley spent hours on end staring at seemingly identical blips, which usually signified passing Allied fighter jets, until something new caught his eye. Something about this radar blip seemed different, but he couldn’t quite say what. It might be just another Allied jet flying overhead, but it could also be a missile coming in at high altitude. Should he shoot it down or not? Making the wrong choice, blowing up an ally’s jet or letting an enemy missile sink a ship, would incur tremendous loss. After staring at the blip for almost a minute, unable to distinguish between the two possibilities, Riley ordered the object shot down. And he was right.

Riley, though assured in his choice, considered himself lucky. He reviewed the tapes and still couldn’t see any sign indicating whether it had been a missile or a friendly jet. To Riley, it seemed that his mind didn’t know what his gut did. Many years later, an expert reviewed the tapes and realized that in the first moments after the blip appeared, it could clearly be identified as a missile. Only after those first moments did the blip become ambiguous. Riley’s unconscious mind picked up on this transient discrepancy, nudging him toward the right decision.

Lehrer’s ultimate point in this instance is about the power of emotions and intuitions in individuals honed by extensive training. Riley made the right choice because he picked up on something of which he wasn’t even consciously aware. The anecdote is an interesting one, and a relevant one when we consider the merits of, and our capacity for, rationality in our decisions.

But what about the more mundane questions that started this article? What about decisions for which we haven’t been trained and in which we don’t have much confidence in our gut?

These questions prompted some research that eventually culminated in my Senior Exercise on a technical aspect of the branch of mathematics known as statistical decision theory. To illustrate the power and elegance of this type of mathematical analysis, let’s consider a simplified medical example.

Specifically, let’s consider a rural doctor who sees the whole gamut of patients. Some patients present clear signs—their hand is cut and needs stitches, or their leg is broken and needs to be set. But this is hardly the usual picture. Often, a patient presents ambiguous signs or symptoms that could imply a variety of diseases and appropriate treatments.

In this light, imagine a patient who approaches our rural doctor and complains of shortness of breath. In a vastly simplified world, it could be either heart disease or lung disease. Importantly, the treatments for one disease are grossly inappropriate for the other: heart surgery would not do any good for asthma; an inhaler wouldn’t help a heart defect. What should the doctor do? On what grounds should he make his decision? In decision-theory jargon, which decisions are admissible?

Like Michael Riley in Lehrer’s anecdote, the doctor might very well make a decision based on how he feels, but most of us would like some rational justification from a doctor before we undergo surgery. What are the pertinent bits of information? If we were to analyze the problem more fully, we would first ask for some data to understand how likely it is that the patient has each disease. Maybe we could run a stress test to gauge the likelihood of heart disease, but we would also need to know the test’s weaknesses: how often it returns an incorrect result.
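
To make this concrete, here is a minimal sketch in Python, with entirely made-up numbers, of how a test result would update the doctor’s belief about the diagnosis using Bayes’ theorem. The function name and the particular sensitivity and false-positive figures are illustrative assumptions, not properties of any real stress test.

```python
# A toy Bayesian update: how a stress-test result changes the probability
# that the patient's shortness of breath is due to heart disease.
# All numbers are hypothetical.

def posterior_heart(prior_heart, sensitivity, false_positive_rate, test_positive):
    """Probability of heart disease given the stress-test result."""
    prior_lung = 1 - prior_heart
    if test_positive:
        p_result_given_heart = sensitivity           # test catches a true heart case
        p_result_given_lung = false_positive_rate    # test wrongly flags a lung case
    else:
        p_result_given_heart = 1 - sensitivity
        p_result_given_lung = 1 - false_positive_rate
    numerator = p_result_given_heart * prior_heart
    denominator = numerator + p_result_given_lung * prior_lung
    return numerator / denominator

# Hypothetical numbers: half of such patients have heart disease, the test
# catches 85% of true heart cases, and it wrongly flags 10% of lung cases.
print(posterior_heart(0.5, 0.85, 0.10, test_positive=True))   # ~0.89
print(posterior_heart(0.5, 0.85, 0.10, test_positive=False))  # ~0.14
```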

Next, we would try to quantify the different damages done by correctly and wrongly treating each disease. For example, maybe giving heart surgery to the asthmatic does five times as much damage to the patient as giving an inhaler to the individual with a weak heart. Maybe leaving the asthma untreated is half as dangerous as leaving the heart condition untreated. Even if the diagnosis were a toss-up between heart disease and lung disease, we would want to know the stakes for the decision game we play.
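
Those stakes can be written down as a small loss table. The units below are hypothetical, chosen only so that the first ratio above holds (a needless surgery five times as harmful as a needless inhaler); in this stripped-down two-treatment world, the cost of leaving a disease untreated is folded into the cost of giving the wrong treatment.

```python
# A hypothetical loss table for the two-disease, two-treatment toy problem.
# Zero loss for the correct treatment; wrongly operating is five times as
# bad as wrongly prescribing an inhaler.
LOSS = {
    # (true disease, treatment given): loss
    ("heart", "surgery"): 0,    # correct treatment
    ("heart", "inhaler"): 10,   # weak heart left untreated
    ("lung",  "inhaler"): 0,    # correct treatment
    ("lung",  "surgery"): 50,   # needless surgery on an asthmatic
}
```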

Finally, after we know the stakes of the decision and the probabilities of the different outcomes, after we know the likelihood that the test is wrong, we would need to know what we want. Are we trying to get the best result on average, even at the risk of an occasional disaster? Are we trying to avoid the worst-case scenarios? These different goals imply different algorithms for reaching a solution. I want to sweep the reason for this, along with the actual mathematics and its probability theory and calculus, under the rug and jump to the conclusion. Once we put the appropriate numbers in place and jump through the right hoops, there is an answer. A computer can crunch the numbers and spit out the answer. You’ll have to trust me on this.
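
For the curious, here is roughly what that number crunching could look like in this toy example, reusing the hypothetical posterior_heart and LOSS pieces sketched above. It contrasts the average-loss criterion with a worst-case (minimax) one; neither is offered as the one true answer.

```python
# Pick a treatment given the post-test probability of heart disease and the
# loss table above, under two different criteria.

def best_treatment(p_heart, loss, criterion="expected"):
    treatments = ("surgery", "inhaler")
    p = {"heart": p_heart, "lung": 1 - p_heart}
    if criterion == "expected":
        # minimize the loss we expect on average
        score = lambda t: sum(p[d] * loss[(d, t)] for d in p)
    else:
        # "minimax": minimize the worst loss that could possibly happen
        score = lambda t: max(loss[(d, t)] for d in p)
    return min(treatments, key=score)

# With the made-up numbers above, a positive stress test (posterior ~0.89)
# points to surgery on average-loss grounds, a negative one (~0.14) to the
# inhaler, while the minimax criterion never risks the needless surgery.
print(best_treatment(0.89, LOSS))                       # surgery
print(best_treatment(0.14, LOSS))                       # inhaler
print(best_treatment(0.89, LOSS, criterion="minimax"))  # inhaler
```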

We could write a rule for the physician to follow about when to give heart surgery and when to give an inhaler. The rule would depend on the power and accuracy of the stress test. It would also depend on how we think the different possible bad outcomes compare, and on how bad it is to get unnecessary surgery. Of course, as with any field that deals with decisions under uncertainty, the physician will sometimes be wrong while following our decision rule, but, given our assumptions, no other rule could be expected to do less damage (or more good).
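
In this two-disease toy, such a rule even collapses to a single cutoff on the post-test probability of heart disease. The sketch below derives that cutoff from the hypothetical loss table, and it is only as good as those assumed numbers.

```python
# Operate only when the evidence outweighs the extra harm of needless surgery:
# surgery has lower expected loss than the inhaler exactly when
# P(heart | test) exceeds this threshold.
surgery_error = LOSS[("lung", "surgery")]   # 50: needless surgery
inhaler_error = LOSS[("heart", "inhaler")]  # 10: heart left untreated
threshold = surgery_error / (surgery_error + inhaler_error)
print(threshold)  # 0.833..., so the ~0.89 posterior after a positive test clears it
```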

One limitation of the decision-theoretic paradigm is that it is time-consuming. There is a reason that not even the most intrepid mathematician gets out paper and pencil for every choice. Another is that all the possibilities have to be known and their probabilities estimated; unfortunately, in reality, we don’t know what we don’t know, and there might be outcomes we don’t anticipate at all. This hyper-Spock-like rationality also turns on the ability to map a real-life consequence onto a single number representing the “loss,” which might damningly oversimplify the complexities of decisions in a moral realm.

This said, decision-theoretic paradigms are used in fields from forest management to portfolio allocation, from medicine to experimental design. And, by understanding this mathematically justified way to make decisions, perhaps we gain some new insight into the anecdote concerning Lieutenant Commander Michael Riley.

Though Riley could not consciously tell what the blip on the radar signified, his decision had nevertheless been shaped by it. There is nothing mystical about how Riley decided to shoot down the missile. He made a choice based on the data available, though it was data that his unconscious collected and hid from him. He also based his choice, in part, on the severity of the different consequences of being wrong. In short, it sounds like his problem was one that could be analyzed by statistical decision theory.

I am not saying that our brains run through an algorithm like the one I’ve glossed over in this paper. In fact, from a tourist’s perspective of evolutionary psychology, I really doubt it. But while our brain might not have this algorithm, another “brain” could. It’s a problem in computation that a computer could conceivably solve. Indeed, many engineers and computer scientists are working on exactly this kind of decision problem, so that computers may eventually solve problems like the medical example I’ve highlighted.

Think of the wonderful strangeness of this topic: how cleanly mathematics could answer a problem that might otherwise seem intractable; how mathematics can actually relate to and model the real world at all; the complexity inherent in our own circuitry programming other circuitry to hopefully solve problems better than we can. It’s weird. And if you ask me, it’s pretty cool too.
