
How Can Science Help Us Make Better Choices?

I will argue here that science can most definitely help us make better choices—but only within a particular narrow sense of “better choices” used in cognitive science.  Whether science aids us in a broader sense of the term “better choices” is much less certain.

To a psychologist “making better choices” means better human decision-making—and good decision-making to a cognitive scientist means rational decision-making.  Philosophers define two types of rationality—instrumental and epistemic.  To think rationally means taking the appropriate action given one’s goals and beliefs (instrumental rationality), and holding beliefs that are commensurate with available evidence (epistemic rationality).  It’s handy to think of rationality as being about what to do and what is true.  What to do—instrumental rationality; and what is true—epistemic rationality.  Both facilitate good decision-making.  High epistemic rationality helps indirectly because good decisions are based on beliefs about the world that match reality.  Instrumental rationality is present when we make our choices by combining our goals with our beliefs in a way that maximizes goal achievement.

Science aids in making rational choices in one indirect way and in one direct way.  The indirect way is that science allows us to get our beliefs in line with the world.  After all, the quest of science is a true description of the world.  Because decisions made based on true beliefs will be better ones, science at least gives us a chance to make rational choices by portraying a true picture of the world as it is.  Science cannot force people to use these true beliefs in their instrumental decision-making, but it at least can make them available.

Science aids rational decision-making in a much more direct way, though.  Through the discovery of normative models of decision-making—those that help us maximize our goal fulfillment—science provides specific tools that we can use in decision-making.  Economists and cognitive scientists have refined the notion of optimization of goal fulfillment into the technical rules of decision theory.  The model of rational judgment used by decision scientists is one in which a person chooses options based on which option has the largest expected utility.  One of the fundamental advances in the history of modern decision science was the demonstration that if people’s preferences follow certain patterns (the so-called axioms of utility theory) then they are behaving as if they are maximizing utility—they are acting to get what they most want.
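
To make the expected-utility rule concrete, here is a minimal Python sketch; the probabilities and utility numbers are purely illustrative assumptions, not drawn from any study:

# Minimal sketch of choosing by expected utility; all numbers are hypothetical.
def expected_utility(outcomes):
    # Sum the probability-weighted utilities over an option's possible outcomes.
    return sum(p * u for p, u in outcomes)

# A sure payoff of utility 40 versus a 50/50 gamble between utility 100 and 0.
options = {
    "sure thing": [(1.0, 40)],
    "gamble": [(0.5, 100), (0.5, 0)],
}
for name, outcomes in options.items():
    print(name, expected_utility(outcomes))
best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)

Here the gamble has the higher expected utility (50 versus 40), so the rule selects it.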

Framing and Decision-Making

At a practical level, the rules of rational choice are not technical—they are basic strictures such as to obey transitivity (if you prefer option A to option B, and option B to option C, then you should prefer option A to option C) and to avoid having decisions affected by irrelevant context.  Interestingly, psychologists have found that people sometimes violate these strictures of instrumental rationality.  Humans are quite prone to having their decisions influenced by totally irrelevant contextual factors.  Advertisers are quite aware of this flaw in human cognition.  They know that the claim “96% Fat Free!” will sell more than “Contains only 4% Fat”.
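
As an illustration of the transitivity stricture, the following Python sketch uses a hypothetical set of pairwise preferences and flags any intransitive (cyclic) pattern:

# Hypothetical pairwise preferences; (X, Y) means "X is preferred to Y".
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # note the cycle: C is preferred to A

def transitivity_violations(prefers):
    # Return triples (x, y, z) where x > y and y > z but x > z does not hold.
    items = {i for pair in prefers for i in pair}
    return [(x, y, z)
            for x in items for y in items for z in items
            if (x, y) in prefers and (y, z) in prefers and (x, z) not in prefers]

print(transitivity_violations(prefers))   # non-empty: this chooser violates transitivity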

Cognitive psychologists have studied these so-called framing effects in detail.  One of the most compelling framing demonstrations asks subjects to imagine that health officials are preparing for the outbreak of an unusual disease which is expected to kill 600 people.  Two alternative programs to combat the disease have been proposed:  If Program A is adopted, 200 people will be saved.  If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.  Most people, when given this problem, prefer Program A—the one that saves 200 lives for sure.  However, in a typical experiment of this sort, another group of subjects is given the same scenario with the following two choices: If Program C is adopted, 400 people will die.  If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.  Now people say they prefer Program D.  The problem is that each group of subjects has simply heard a different description of the same situation.  Programs A and C are the same.  That 400 will die in Program C implies that 200 will be saved—precisely the same number saved (200) in Program A.  Likewise, the two-thirds chance that 600 will die in Program D is the same two-thirds chance that 600 will die (“no people will be saved”) in Program B.  If people preferred Program A in the first choice, they should have preferred Program C in the second.  Instead, the most preferred program depended on how an identical choice was framed.
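
To see the equivalence at a glance, here is a small Python sketch that plugs in the numbers from the scenario and computes the expected number of lives saved under each description:

def expected_saved(outcomes):
    # Expected lives saved, given (probability, lives_saved) pairs.
    return sum(p * saved for p, saved in outcomes)

programs = {
    "A": [(1.0, 200)],                        # 200 saved for sure
    "B": [(1/3, 600), (2/3, 0)],              # one-third chance all 600 saved, else none
    "C": [(1.0, 600 - 400)],                  # "400 die for sure," restated as lives saved
    "D": [(1/3, 600 - 0), (2/3, 600 - 600)],  # "nobody dies" vs. "all 600 die," restated
}
for name, prog in programs.items():
    print(name, expected_saved(prog))   # all four come out to 200 expected lives saved

Programs A and C are not merely equal in expectation; they describe exactly the same certain outcome, just as B and D describe exactly the same gamble.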

Framing effects occur because people are lazy information processors—they accept problems as given and do not tend to build alternative models of situations.  In short, humans tend to be cognitive misers.  Humans are cognitive misers because their basic tendency is to default to so-called heuristic processing mechanisms of low computational expense.  This bias to default to low-power cognitive mechanisms, however, means that humans are sometimes less than rational.  Heuristic processes often provide a quick solution that is a first approximation to an optimal response.  But modern life often requires more precise thought than this.  Modern technological societies are in fact hostile environments for people reliant on only the most easily computed automatic response.  Thus, the tendency to be cognitive misers will sometimes prevent people from achieving their goals.

The cognitive miser tendency represents a processing problem of the human brain.  The second broad reason that humans are less than rational is a content problem: the knowledge structures needed to sustain rational behavior are never learned by many people.  The tools of rationality—probabilistic thinking, logic, scientific reasoning—represent mindware (a term coined by cognitive scientist David Perkins) that is often incompletely learned or not acquired at all.  For example, assigning the right probability values to events is a critical aspect of rational thought.  It is involved, for instance, in medical diagnosis.  Consider the following problem, on which both medical personnel and laypersons have been found to make a critical thinking error due to a mindware gap:

Imagine that the XYZ virus causes a serious disease that occurs in 1 in every 1,000 people.  Imagine also that there is a test to diagnose the disease that always indicates correctly that a person who has the XYZ virus actually has it.  Finally, imagine that the test has a false-positive rate of 5 percent—the test wrongly indicates that the XYZ virus is present in 5 percent of the cases where it is not.  Imagine that we choose a person randomly and administer the test, and that it yields a positive result (indicates that the person is XYZ-positive).  What is the probability that the individual actually has the XYZ virus?

The point is not to get the precise answer so much as to see whether you are in the right ballpark.  The answers of many people are not.  The most common answer given is 95 percent.  Actually, the correct answer is approximately 2 percent!  Why is the answer 2 percent?  Of 1000 people, just one will actually be XYZ-positive.  If the other 999 are tested, the test will indicate incorrectly that approximately 50 of them have the virus (.05 multiplied by 999) because of the 5 percent false-positive rate.  Thus, of the 51 patients testing positive, only one (approximately 2 percent) will actually be XYZ-positive.  In short, the base rate is such that the vast majority of people do not have the virus.  This fact, combined with a substantial false-positive rate, ensures that, in absolute numbers, the majority of positive tests will be of people who do not have the virus.  Gaps in knowledge structures such as these represent a second major class of reasoning error (in addition to miserly processing).   Rational thinking errors due to such knowledge gaps can occur in a potentially large set of coherent knowledge bases in the domains of probabilistic reasoning, causal reasoning, knowledge of risks, logic, practical numeracy, financial literacy, and scientific thinking (the importance of alternative hypotheses, etc.).
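
For readers who want to check the arithmetic, the same calculation can be written out as a short Python sketch; the numbers come directly from the problem statement:

base_rate = 1 / 1000        # 1 in every 1,000 people actually has the XYZ virus
sensitivity = 1.0           # the test always detects a true case
false_positive_rate = 0.05  # 5 percent of virus-free people nonetheless test positive

# Bayes' rule: P(virus | positive test)
p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_virus_given_positive = (sensitivity * base_rate) / p_positive

print(round(p_virus_given_positive, 4))   # about 0.0196, i.e., roughly 2 percent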

Rational Thinking, Intelligence and Conflicting Desires

Cognitive science has provided a roadmap of the multifarious skills that comprise rational thinking.  Importantly, most of these rational thinking skills are not assessed on intelligence tests, and they are only modestly related to measured intelligence.  Thus, individual differences on IQ tests are not proxies for individual differences in rational thinking.  If we want to assess differences in rational thinking, we will need to assess the components of rational thought directly.  We know the types of thinking processes that would be assessed by such an instrument, and we have in hand prototypes of the kinds of tasks that would be used in the domains of both instrumental rationality and epistemic rationality.  There is no technological limitation on constructing a rational-thinking test, or RQ test.  Indeed, this is what our research lab is doing with the help of a three-year grant from the John Templeton Foundation.  Specifically, we are attempting to construct the first prototype of an assessment instrument that will comprehensively measure individual differences in rational thought.  Our instrument will assess the many ways in which people fail to think rationally and, hopefully, pinpoint where a person’s thinking needs remediation.  In this way, the research from our program will be another instance of scientific knowledge facilitating better human choices.

While I have been optimistic about the potential of science for fostering better human choices, I must place the caveat here that we have been talking about instrumental rationality in only a narrow sense—one where a person’s desires are taken as given.  The strengths of such a narrow theory of rationality are well-known.  For example, if the conception of rationality is restricted in this manner, many powerful formalisms (such as the axioms of utility theory mentioned previously) are available to serve as standards of optimal behavior.  However, there are pitfalls in relying exclusively on such an approach.  In not evaluating desires, a narrow instrumental theory of rationality might determine that Hitler was a rational person as long as he acted in accordance with the basic axioms of choice as he went about fulfilling his grotesque desires.

So-called broad theories of rationality attempt to evaluate goals and desires, and psychologists have found that this involves the mental process of meta-representation and that it creates unique conflicts involving higher-order cognition.  Most people are accustomed to conflicts between their first-order desires (if I buy that jacket I want, I won’t be able to buy that iPod that I also desire).  However, a person who forms ethical preferences creates possibilities for conflict that involve second-order mental states.  So, for example, I watch a television documentary on small Pakistani children who are unschooled because they work sewing soccer balls, and I vow that someone should do something about this.  However, I also find myself at the sporting goods store two weeks later instinctively avoiding the more expensive union-made ball.  A new conflict has been created for me.  Either I attempt the difficult task of restructuring my first-order desires (e.g., learning not to automatically prefer the cheaper product), or I must ignore a newly formed second-order preference (I would prefer not to prefer cheaper products).

Actions out of kilter with a political, moral, or social commitment create inconsistency.  Values and commitments create new attention-drawing inconsistencies that are not there when one is only aware of the necessity of scheduling action to efficiently fulfill first-order desires.  Here, science (and more specifically, the technology and efficient economies that science spawns) can be an impediment to true reflection on our second-order goals and desires. There are a number of ways in which technology and markets impede reflection on our first-order desires. Economies of scale make fulfilling short-leashed genetic goals cheap and easy and, other things being equal, people prefer more cheaply fulfilled desires because they leave more money for fulfilling other desires.

Adaptive preferences are those that are easy to fulfill in the particular environment in which a person lives.  Efficient markets have as a side effect the tendency to turn widespread, easily satisfied, first-order desires into adaptive preferences.  If you like fast-food, television sit-coms, video-games, recreating in automobiles, violent movies, and alcohol, the market makes it quite easy to get the things you want at a very reasonable cost because these are convenient preferences to have.  If you like looking at original paintings, theater, walking in a pristine wood, French films, and fat-free food you can certainly satisfy these preferences if you are sufficiently affluent, but it will be vastly more difficult and costly than in the previous case.  So preferences differ in adaptiveness, or convenience, and markets accentuate the convenience of satisfying uncritiqued first-order preferences.

Of course, people can express more considered, higher-order preferences through markets too (e.g., free-range eggs, fair-trade coffee), but such preferences are far less common, harder to trigger via advertising, and lack economies of scale.  However, the positive feedback loop surrounding unconsidered, first-order desires can even affect people’s second-order judgments—“Well, if everyone is doing it, it must not be so bad after all.”  Many symbolic and ethical choices must be developed in opposition to first-order goals that technology-based markets are adapted to fulfilling efficiently.  In this way, science and technology might actually undermine our rationality, if rational choices are broadly defined.

Discussion Questions

1.  As science and technology have provided you with more choices, have you felt more or less at ease with the choices that you have made?

2.  Do you spend commensurately larger amounts of time on the important decisions in life (e.g., job, marriage, pension allocation, mortgage) compared to the amount of time you spend on the small decisions of life (e.g., what to order from Netflix, whether to buy a new pair of shoes)?

3.  Have you ever made a decision in life that you knew was wrong even as you made it?  If so, what is the sense of the phrase “knew was wrong”?  Do you often intend to choose the wrong thing?  If not, then why did you—even though you knew it was wrong?

Discussion Summary

At the heart of my essay was the distinction between narrow and broad views of rationality. My argument was that when rationality is taken in the narrow sense, the conclusion is clear cut. Namely, that science in general and the science of decision-making in particular can aid people in attaining narrow standards of rationality. My major caveat was that when rationality is taken in a broader sense, the role of science in facilitating it is much more ambiguous.

The bulk of the discussion concerned issues of facilitating rationality in the narrow sense.  I was optimistic on that score, pointing out that we have discovered strategies of cognitive change that facilitate better decision-making.  I also pointed out that often we can facilitate better choices by changing the environment in a way that does not require changes in the cognitive capabilities of people.  My primary example of the former point—cognitive change—was the ability to learn the strategy of considering alternative hypotheses.  Many studies have attempted to teach this strategy by instructing people in the simple habit of saying to themselves the phrase “think of the opposite” in relevant situations.  This strategic mindware can help prevent a host of thinking errors, such as anchoring biases, overconfidence effects, hindsight bias, confirmation bias, and self-serving biases.  Probabilistic thinking and the knowledge of statistics provide another large domain of learnable strategies and knowledge that greatly facilitates rational thinking.

Another question prompted me to emphasize that we do not always have to change thinking in order to facilitate decision-making that is more rational.  It is sometimes easier to change the environment than it is to change people.  One commenter mentioned the use of the so-called black hat in the financial sector (designating an individual whose sole role is to challenge).  My familiarity with such a technique comes from Danny Kahneman’s discussion of Gary Klein’s idea of the premortem.  This is an example of an environmental intervention that actually makes use of our miserly tendency to model only our own position.  Premortems and black hats are examples of just such changes to the environment that do not require cognitive change by individuals.  People doing premortems and black-hat challenges are free to engage in one-sided thinking (a characteristic of the cognitive miser), but now the one-sided thinking is in the service of a process that is rational overall.

In my books I have discussed other examples of environmental alterations.  Perhaps the best known is Thaler and Benartzi’s suggested reform involving getting employees to increase their 401(k) contributions by asking them to commit in advance to having a proportion of their future raises allocated to additional 401(k) contributions.  This strategy ensures that the employee will never experience the additional contribution as a loss, because the employee never sees a decrease in the paycheck.  Of course, the contribution is the same in either case, but such a procedure encourages the employee to frame it in a way that makes it less aversive.  Thaler and Benartzi have developed a savings program called Save More Tomorrow™ (SMarT).  The important point for our discussion here is that it represents an example of inoculation against irrational behavior by changing the environment rather than people.  The SMarT program demonstrates that some of the difficulties that arise because of cognitive miser tendencies can be dealt with by changes in the environment.
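
As a rough illustration, the Python sketch below uses entirely hypothetical salary, raise, and contribution figures to show why the committed contribution is never experienced as a loss: take-home pay still rises with every raise, just by somewhat less than it otherwise would.

# Hypothetical illustration of the Save More Tomorrow idea: a share of each future
# raise is diverted to the 401(k), so the paycheck never decreases.
salary = 50_000.0
contribution_rate = 0.03       # starting 401(k) contribution rate (assumed)
annual_raise = 0.03            # assumed yearly raise
share_of_raise_saved = 1 / 3   # assumed share of each raise committed to savings

for year in range(1, 5):
    old_take_home = salary * (1 - contribution_rate)
    raise_amount = salary * annual_raise
    salary += raise_amount
    contribution_rate += (raise_amount * share_of_raise_saved) / salary
    new_take_home = salary * (1 - contribution_rate)
    print(year, round(new_take_home - old_take_home, 2))   # positive every year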

One commenter raised many issues that are at the interface of a narrow and a broad view of rationality.  A broad view of rationality takes the decision-maker’s goals and desires not as given, but as a subject for critique.  Here, I admitted that scientific and material advance may be a two-edged sword.  In fact, I went further than that.  I admitted that science and material progress may be impediments to true reflection on our goals and desires.  I discussed a number of ways in which technology and markets impede reflection on the nature of our first-order desires.  In my opinion, this would be the most fruitful direction for further discussion in JTF forums.  One question that I raised originally, and that now makes an even more appropriate jumping-off point, appears below.

Two New Big Questions

1. Have you ever made a decision in life that you knew was wrong even as you made it?  If so, what is the sense of the phrase “knew was wrong”?  Do you often intend to choose the wrong thing?  If not, then why did you—even though you knew it was wrong?

And secondly,

2. If many symbolic and ethical choices must be developed in opposition to first-order goals that markets are adapted to fulfilling efficiently, then how might we make markets more friendly to our ethical preferences?