How Can Science Help Us Make Better Choices?


I will argue here that science can most definitely help us make better choices—but only within a particular narrow sense of “better choices” used in cognitive science.  Whether science aids us in a broader sense of the term “better choices” is much less certain.

To a psychologist “making better choices” means better human decision-making—and good decision-making to a cognitive scientist means rational decision-making.  Philosophers define two types of rationality—instrumental and epistemic.  To think rationally means taking the appropriate action given one’s goals and beliefs (instrumental rationality), and holding beliefs that are commensurate with available evidence (epistemic rationality).  It’s handy to think of rationality as being about what to do and what is true.  What to do—instrumental rationality; and what is true—epistemic rationality.  Both facilitate good decision-making.  High epistemic rationality helps indirectly because good decisions are based on beliefs about the world that match reality.  Instrumental rationality is present when we make our choices by combining our goals with our beliefs in a way that maximizes goal achievement.

Science aids in making rational choices in one indirect way and in one direct way.  The indirect way is that science allows us to get our beliefs in line with the world.  After all, the quest of science is a true description of the world.  Because decisions made based on true beliefs will be better ones, science at least gives us a chance to make rational choices by portraying a true picture of the world as it is.  Science cannot force people to use these true beliefs in their instrumental decision-making, but it at least can make them available.

Science aids rational decision-making in a much more direct way, though.  Through the discovery of normative models of decision-making—those that help us maximize our goal fulfillment—science provides specific tools that we can use in decision-making.  Economists and cognitive scientists have refined the notion of optimization of goal fulfillment into the technical rules of decision theory.  The model of rational judgment used by decision scientists is one in which a person chooses options based on which option has the largest expected utility.  One of the fundamental advances in the history of modern decision science was the demonstration that if people’s preferences follow certain patterns (the so-called axioms of utility theory) then they are behaving as if they are maximizing utility—they are acting to get what they most want.
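To make the decision rule concrete, here is a minimal sketch in Python (the language choice, the option names, and the probability and utility numbers are all hypothetical, chosen purely for illustration): expected utility is the probability-weighted sum of an option's utilities, and the rule is to pick the option whose expected utility is largest.

```python
# A minimal sketch of expected-utility maximization.
# The options, probabilities, and utility numbers are hypothetical.
options = {
    "take_the_safe_job":  [(1.0, 50)],               # a certain, modest payoff
    "start_the_business": [(0.2, 300), (0.8, 10)],   # risky: big win or small payoff
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one option."""
    return sum(p * u for p, u in outcomes)

# Decision theory's rule: choose the option with the largest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print({name: expected_utility(o) for name, o in options.items()})
print("choose:", best)   # the risky option, with expected utility 68 versus 50
```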

Framing and Decision-Making

At a practical level, the rules of rational choice are not technical—they are basic strictures such as obeying transitivity (if you prefer option A to option B, and option B to option C, then you should prefer option A to option C) and not letting decisions be affected by irrelevant context.  Interestingly, psychologists have found that people sometimes violate these strictures of instrumental rationality.  Humans are quite prone to having their decisions influenced by totally irrelevant contextual factors.  Advertisers are quite aware of this flaw in human cognition.  They know that the claim “96% Fat Free!” will sell more than “Contains only 4% Fat.”
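A transitivity violation is easy to detect mechanically. The following sketch is a hypothetical illustration (not drawn from any particular study): it checks a small set of pairwise preferences for triples that break the rule.

```python
# A minimal check for transitivity violations in a set of pairwise preferences.
# The preferences below are hypothetical and deliberately form a cycle.
from itertools import permutations

prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # A over B, B over C, C over A

def transitivity_violations(prefers):
    """Return triples (x, y, z) where x is preferred to y and y to z,
    but x is not preferred to z."""
    items = {i for pair in prefers for i in pair}
    return [(x, y, z)
            for x, y, z in permutations(items, 3)
            if (x, y) in prefers and (y, z) in prefers and (x, z) not in prefers]

# The cycle above produces three violating triples; a transitive preference
# ordering would produce an empty list.
print(transitivity_violations(prefers))
```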

Cognitive psychologists have studied these so-called framing effects in detail.  One of the most compelling framing demonstrations asks subjects to imagine that health officials are preparing for the outbreak of an unusual disease that is expected to kill 600 people.  Two alternative programs to combat the disease have been proposed:  If Program A is adopted, 200 people will be saved.  If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.  Most people, when given this problem, prefer Program A—the one that saves 200 lives for sure.  However, in a typical experiment of this sort, another group of subjects is given the same scenario with the following two choices: If Program C is adopted, 400 people will die.  If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.  Now people say they prefer Program D.  The problem is that each group of subjects has simply heard a different description of the same situation.  Programs A and C are the same.  That 400 will die in Program C implies that 200 will be saved—precisely the same number saved (200) in Program A.  Likewise, the two-thirds chance that 600 will die in Program D is the same two-thirds chance that 600 will die (“no people will be saved”) in Program B.  If people preferred Program A in the first choice, they should have preferred Program C in the second.  Instead, the most preferred program depended on how an identical choice was framed.
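The equivalence of the two framings is straightforward arithmetic. A back-of-the-envelope check in Python (using only the numbers given in the scenario) shows that every program has the same expected outcome of 200 lives saved, and that A matches C and B matches D outcome for outcome:

```python
# Expected number of lives saved (out of 600) under each description of the
# same disease scenario.  Outcomes are (probability, lives saved) pairs.
TOTAL = 600

programs = {
    "A: 200 saved for sure":            [(1.0, 200)],
    "B: 1/3 all saved, 2/3 none saved": [(1/3, 600), (2/3, 0)],
    "C: 400 die for sure":              [(1.0, TOTAL - 400)],
    "D: 1/3 nobody dies, 2/3 all die":  [(1/3, TOTAL - 0), (2/3, TOTAL - 600)],
}

def expected_saved(outcomes):
    return sum(p * saved for p, saved in outcomes)

for name, outcomes in programs.items():
    print(name, "->", expected_saved(outcomes), "expected lives saved")
# All four print 200.  A and C are identical outcome for outcome, as are B and D;
# only the wording differs.
```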

Framing effects occur because people are lazy information processors—they accept problems as given and do not tend to build alternative models of situations.  In short, humans tend to be cognitive misers: their basic tendency is to default to so-called heuristic processing mechanisms of low computational expense.  This tendency to default to low-power cognitive mechanisms, however, means that humans are sometimes less than rational.  Heuristic processes often provide a quick solution that is a first approximation to an optimal response.  But modern life often requires more precise thought than this.  Modern technological societies are in fact hostile environments for people reliant on only the most easily computed automatic response.  Thus, being cognitive misers will sometimes impede people from achieving their goals.

The cognitive miser tendency represents a processing problem of the human brain.  The second broad reason that humans are less than rational is a content problem—the knowledge structures needed to sustain rational behavior are never learned by many people.  The tools of rationality—probabilistic thinking, logic, scientific reasoning—represent mindware (a term coined by cognitive scientist David Perkins) that is often incompletely learned or not acquired at all.  For example, assigning the right probability values to events is a critical aspect of rational thought; it is involved, among other things, in medical diagnosis.  Consider the following problem, on which both medical personnel and laypersons have been found to make a critical thinking error due to a mindware gap:

Imagine that the XYZ virus causes a serious disease that occurs in 1 in every 1,000 people.  Imagine also that there is a test to diagnose the disease that always indicates correctly that a person who has the XYZ virus actually has it.  Finally, imagine that the test has a false-positive rate of 5 percent—the test wrongly indicates that the XYZ virus is present in 5 percent of the cases where it is not.  Imagine that we choose a person randomly and administer the test, and that it yields a positive result (indicates that the person is XYZ-positive).  What is the probability that the individual actually has the XYZ virus?

The point is not to get the precise answer so much as to see whether you are in the right ballpark.  Many people are not.  The most common answer given is 95 percent.  Actually, the correct answer is approximately 2 percent!  Why is the answer 2 percent?  Of 1,000 people, just one will actually be XYZ-positive.  If the other 999 are tested, the test will indicate incorrectly that approximately 50 of them have the virus (.05 multiplied by 999) because of the 5 percent false-positive rate.  Thus, of the 51 patients testing positive, only one (approximately 2 percent) will actually be XYZ-positive.  In short, the base rate is such that the vast majority of people do not have the virus.  This fact, combined with a substantial false-positive rate, ensures that, in absolute numbers, the majority of positive tests will be of people who do not have the virus.  Gaps in knowledge structures such as these represent a second major class of reasoning error (in addition to miserly processing).  Rational thinking errors due to such knowledge gaps can occur across a potentially large set of coherent knowledge bases: probabilistic reasoning, causal reasoning, knowledge of risks, logic, practical numeracy, financial literacy, and scientific thinking (the importance of alternative hypotheses, etc.).
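The 2 percent figure can be reproduced either by counting cases, as in the paragraph above, or with Bayes' rule. Here is a short sketch of both calculations, using only the numbers given in the problem:

```python
# The XYZ-virus problem worked two ways, using the numbers given in the text.
base_rate      = 1 / 1000   # 1 in 1,000 people has the virus
sensitivity    = 1.0        # the test always detects a true case
false_pos_rate = 0.05       # 5% of virus-free people nonetheless test positive

# 1. Counting cases in a population of 1,000, as in the paragraph above.
population      = 1000
true_positives  = population * base_rate * sensitivity            # 1 person
false_positives = population * (1 - base_rate) * false_pos_rate   # ~50 people
print(true_positives / (true_positives + false_positives))        # ~0.02

# 2. The same answer via Bayes' rule: P(virus | positive test).
p_positive = base_rate * sensitivity + (1 - base_rate) * false_pos_rate
print(base_rate * sensitivity / p_positive)                       # ~0.02
```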

Rational Thinking, Intelligence and Conflicting Desires

Cognitive science has provided a roadmap of the multifarious skills that comprise rational thinking.  Importantly, most of these rational thinking skills are not assessed on intelligence tests, and they are only modestly related to measured intelligence.  Thus, individual differences on IQ tests are not proxies for individual differences in rational thinking.  If we want to assess differences in rational thinking, we will need to assess the components of rational thought directly.  We know the types of thinking processes that would be assessed by such an instrument, and we have in hand prototypes of the kinds of tasks that would be used in the domains of both instrumental rationality and epistemic rationality.  There is no technological limitation on constructing a rational-thinking test—an RQ test.  Indeed, this is what our research lab is doing with the help of a three-year grant from the John Templeton Foundation.  Specifically, we are attempting to construct the first prototype of an assessment instrument that will comprehensively measure individual differences in rational thought.  Our instrument will assess the many ways in which people fail to think rationally and, hopefully, pinpoint where a person’s thinking needs remediation.  In this way, the research from our program will be another instance of scientific knowledge facilitating better human choices.

While I have been optimistic about the potential of science for fostering better human choices, I must place the caveat here that we have been talking about instrumental rationality in only a narrow sense—one where a person’s desires are taken as given.  The strengths of such a narrow theory of rationality are well known.  For example, if the conception of rationality is restricted in this manner, many powerful formalisms (such as the axioms of utility theory mentioned previously) are available to serve as standards of optimal behavior.  However, there are pitfalls in relying on such an approach exclusively.  Because it does not evaluate desires, a narrow instrumental theory of rationality might deem Hitler a rational person as long as he acted in accordance with the basic axioms of choice as he went about fulfilling his grotesque desires.

So-called broad theories of rationality attempt to evaluate goals and desires, and psychologists have found that this involves the mental process of meta-representation and that it creates unique conflicts involving higher-order cognition.  Most people are accustomed to conflicts between their first-order desires (if I buy that jacket I want, I won’t be able to buy that iPod that I also desire).  However, a person who forms ethical preferences creates possibilities for conflict that involve second-order mental states.  So, for example, I watch a television documentary on the small Pakistani children who are unschooled because they work sewing soccer balls, and I vow that someone should do something about this.  However, I also find myself at the sporting goods store two weeks later instinctively avoiding the more expensive union-made ball.  A new conflict has been created for me.  I can either attempt the difficult task of restructuring my first-order desires (e.g., learn not to automatically prefer the cheaper product) or ignore a newly formed second-order preference (I would prefer not to prefer cheaper products).

Actions out of kilter with a political, moral, or social commitment create inconsistency.  Values and commitments create new attention-drawing inconsistencies that are not there when one is only aware of the necessity of scheduling action to efficiently fulfill first-order desires.  Here, science (and more specifically, the technology and efficient economies that science spawns) can be an impediment to true reflection on our second-order goals and desires. There are a number of ways in which technology and markets impede reflection on our first-order desires. Economies of scale make fulfilling short-leashed genetic goals cheap and easy and, other things being equal, people prefer more cheaply fulfilled desires because they leave more money for fulfilling other desires.

Adaptive preferences are those that are easy to fulfill in the particular environment in which a person lives.  Efficient markets have as a side effect the tendency to turn widespread, easily satisfied, first-order desires into adaptive preferences.  If you like fast food, television sitcoms, video games, recreating in automobiles, violent movies, and alcohol, the market makes it quite easy to get the things you want at a very reasonable cost because these are convenient preferences to have.  If you like looking at original paintings, theater, walking in a pristine wood, French films, and fat-free food, you can certainly satisfy these preferences if you are sufficiently affluent, but it will be vastly more difficult and costly than in the previous case.  So preferences differ in adaptiveness, or convenience, and markets accentuate the convenience of satisfying uncritiqued first-order preferences.

Of course, people can express more considered, higher-order preferences through markets too (e.g., free-range eggs, fair-trade coffee), but such preferences are statistically much rarer, are harder to trigger via advertising, and lack economies of scale.  However, the positive feedback loop surrounding unconsidered, first-order desires can even affect people’s second-order judgments—“Well, if everyone is doing it, it must not be so bad after all.”  Many symbolic and ethical choices must be developed in opposition to first-order goals that technology-based markets are adapted to fulfilling efficiently.  In this way, science and technology might actually undermine our rationality, if rational choices are broadly defined.

Discussion Questions

1.  As science and technology have provided you with more choices, have you felt more or less at ease with the choices that you have made?

2.  Do you spend commensurately larger amounts of time on the important decisions in life (e.g., job, marriage, pension allocation, mortgage) compared to the amount of time you spend on the small decisions of life (e.g., what to order from Netflix, whether to buy a new pair of shoes)?

3.  Have you ever made a decision in life that you knew was wrong even as you made it?  If so, what is the sense of the phrase “knew was wrong”?  Do you often intend to choose the wrong thing?  If not, then why did you choose it—even though you knew it was wrong?

Discussion Summary

At the heart of my essay was the distinction between narrow and broad views of rationality. My argument was that when rationality is taken in the narrow sense, the conclusion is clear cut. Namely, that science in general and the science of decision-making in particular can aid people in attaining narrow standards of rationality. My major caveat was that when rationality is taken in a broader sense, the role of science in facilitating it is much more ambiguous.

The bulk of the discussion concerned issues of facilitating rationality in the narrow sense. I was optimistic on that score, pointing out that we have discovered strategies of cognitive change that facilitate better decision-making. I also pointed out that we can often facilitate better choices by changing the environment in a way that does not require changes in the cognitive capabilities of people. My primary example of the former point—cognitive change—was the ability to learn the strategy of considering alternative hypotheses.  Many studies have attempted to teach thinking of the alternative hypothesis by instructing people in a simple habit of saying to themselves the phrase “think of the opposite” in relevant situations.  This strategic mindware can help prevent a host of thinking errors, such as anchoring biases, overconfidence effects, hindsight bias, confirmation bias, and self-serving biases. Probabilistic thinking and the knowledge of statistics provide another large domain of learnable strategies and knowledge that greatly facilitate rational thinking.

Another question prompted me to emphasize that we do not always have to change thinking in order to facilitate decision-making that is more rational. It is sometimes easier to change the environment than it is to change people. One commenter mentioned the use of the so-called black hat in the financial sector (designating an individual whose sole role is to challenge). My familiarity with such a technique comes from Danny Kahneman’s discussion of Gary Klein’s idea of the premortem.  This is an example of an environmental intervention that actually makes use of our miserly tendency to model only our own position.  Premortems and black hats are examples of just such changes to the environment that do not require cognitive change by individuals.  People doing premortems and black hat challenges are free to engage in one-sided thinking (a characteristic of the cognitive miser), but now the one-sided thinking is in the service of a process that is rational overall.

In my books I have discussed other examples of environmental alterations.  Perhaps the best known is Thaler and Benartzi’s suggested reform involving getting employees to increase their 401(k) contributions by asking them to commit in advance to having a proportion of their future raises allocated to additional 401(k) contributions.  This strategy ensures that the employee will never experience the additional contribution as a loss, because the employee never sees a decrease in the paycheck.  Of course, the contribution is the same in either case, but such a procedure encourages the employee to frame it in a way that makes it less aversive.  Thaler and Benartzi have developed a savings program called Save More Tomorrow™ (SMarT).  The important point for our discussion here is that it represents an example of inoculation against irrational behavior by changing the environment rather than people.  The SMarT program demonstrates that some of the difficulties that arise because of cognitive miser tendencies can be dealt with by changes in the environment.

One commenter raised many issues that are at the interface of a narrow and a broad view of rationality. A broad view of rationality takes the decision-maker’s goals and desires not as given, but as a subject for critique. Here, I admitted that scientific and material advance may be a two-edged sword.  In fact, I went further than that.  I admitted that science and material progress may be impediments to true reflection on our goals and desires. I discussed a number of ways in which technology and markets impede reflection on the nature of our first-order desires. In my opinion, this would be the most fruitful direction for further discussion in JTF forums.  One question I raised originally, which now makes an even more appropriate jumping-off point, is below.

Two New Big Questions

1. Have you ever made a decision in life that you knew was wrong even as you made it?  If so, what is the sense of the phrase “knew was wrong”?  Do you often intend to choose the wrong thing?  If not, then why did you choose it—even though you knew it was wrong?

And secondly,

2. If many symbolic and ethical choices must be developed in opposition to first-order goals that markets are adapted to fulfilling efficiently, then how might we make markets more friendly to our ethical preferences?

7 Responses

  1. Ansley Roan says:

    Professor Stanovich,

    I realize that you are working on a way to measure differences in rational thought, but I wonder, are there ways to encourage more rational thought (and better choices) now? And if so, what might they be?

    Thank you.

  2. Keith Stanovich says:

    Yes, there are many ways to encourage more rational thought (and better choices) right now.  The good news here is that we already know that many components of rational thought are malleable and can be acquired.  For example, disjunctive reasoning is the tendency to consider all possible states of the world when deciding among options or when choosing a problem solution in a reasoning task.  It is a rational thinking strategy with a high degree of generality.  People make many suboptimal decisions because of the failure to flesh out all the possible options in a situation, yet the disjunctive mental tendency is not mentally taxing (it is not computationally expensive, to use the cognitive science jargon).


    The tendency to consider alternative hypotheses is, like disjunctive reasoning, strategic mindware of great generality.  Also, it can be implemented in very simple ways.  Many studies have attempted to teach thinking of the alternative hypothesis by instructing people in a simple habit.  People are given extensive practice at saying to themselves the phrase “think of the opposite” in relevant situations.  This strategic mindware does not stress computational capacity and thus is probably easily learnable by many individuals.  Several studies have shown that practice at the simple strategy of triggering the thought “think of the opposite” can help to prevent a host of the thinking errors studied in the heuristics and biases literature, including but not limited to anchoring biases, overconfidence effects, hindsight bias, confirmation bias, and self-serving biases.


    Various aspects of probabilistic thinking represent mindware of great generality and potency.  However, as any person who has ever taught a statistics course can attest, some of these insights are counterintuitive and unnatural for people—particularly in their application.  There is nevertheless evidence that they are indeed teachable—albeit with somewhat more effort and difficulty than strategies such as disjunctive reasoning or considering alternative hypotheses.  The aspects of scientific thinking necessary to infer a causal relationship are also definitely teachable.


    Much of the strategic mindware I have mentioned here represents learnable strategies in the domain of instrumental rationality (achieving one’s goals).  Epistemic rationality (having beliefs well calibrated to the world) is often disrupted by contaminated mindware.  However, even here, there are teachable strategies that can reduce the probability of acquiring mindware that is harmful.  For example, the principle of falsifiability provides a wonderful inoculation against many kinds of nonfunctional beliefs.  It is a tool of immense generality.  It is taught in low-level methodology and philosophy of science courses, but could be taught much more broadly than this.  Many pseudoscientific beliefs represent the presence of contaminated mindware that causes irrationality.  The critical thinking skills that help individuals to recognize pseudoscientific belief systems can be taught in high-school courses. In my book Rationality and the Reflective Mind, I include a long table in Chapter 10 listing dozens of rational thinking micro-skills that have been shown to be teachable.

  3. Partington says:

    I am fascinated by the potential application of these ideas to the workplace, e.g., in health care or in my field, the financial sector. I recently heard a suggestion that at any meeting where risk decisions are made, there should be a designated “black hat” individual whose sole role is to challenge. Do you think this would be an effective way to close mindware gaps or identify cognitive miserliness?  If so, is there training that could make it easier for the black hats to identify problems?

    • Keith Stanovich says:

      Most definitely, I feel that the “black hat” concept you describe using in your organization would be very effective.  Research would support the idea that it is an effective way of countering the cognitive miserliness that results in too few alternatives being considered, and of countering the miserliness that results in overly optimistic projections about actions decided upon too soon.  On page 264 of his brilliant book, Thinking, Fast and Slow, Danny Kahneman discusses what he calls the premortem, a technique that he attributes to the applied psychologist Gary Klein.  It is used at a point where the organization feels that it is moving toward a particular action but has not yet committed itself to that action.  The decision-makers are asked to imagine that they are a year into the future and that the outcome has been a disaster.  They are then asked to take ten minutes to write a brief history of the disaster.  People usually have no trouble doing so.  And when they write their histories, they inadvertently reveal some of the critiques of the proposed action that had not heretofore been articulated.  Kahneman argues that discussions taking place after a decision has tipped in a particular direction usually are not critical enough and lead some decision-makers to suppress doubts.  He argues that one of the things the premortem mechanism does is legitimize doubt.  So Klein’s idea of the premortem in decision making is very much like the black hat that you describe using in your organization.  Research would support both of these related mechanisms, because there is much research indicating that suboptimal outcomes are often the result of the failure to flesh out information on alternatives other than the one chosen.

      • Keith Stanovich says:

        The mention of mechanisms like the premortem and the black hat brings to mind the point that sometimes, in order to prevent irrational behavior, it is easier to change the environment than to change people.  For example, if the cognitive miser is easily framed, responds to the most vivid stimulus present, and accepts defaults as given, then the behavior of misers will be shaped by whoever in their world has the power to determine these things (how things are framed, what the most vivid stimulus is, and what the default is).  Phrased in this manner, the state of affairs seems somewhat ominous.  But maybe there is an upside here.  Yes, a malicious controller of our environment might choose to exploit us.  But perhaps a benevolent controller of our environment could help us—could save us from our irrational acts without our having to change basic aspects of our cognition. As I said above, the upside is that for certain cognitive problems it might be easier to change the environment than to change people.  Because in a democracy we in part control our own environment, as a society we could decide to restructure the world so that it helped people to be more rational. Premortems and black hats are examples of just such changes to the environment that do not require cognitive change by individuals.  People doing premortems and black hat challenges are free to engage in one-sided thinking (a characteristic of the cognitive miser), but now the one-sided thinking is in the service of a process that is rational overall.

  4. wondering14 says:

    Education in probabilistic math is important, but so is fill in the blank. History or travel may provide background for a developed wisdom, which I suppose differs from rationality. The world runs on more than rationality. Is that bad?  Maybe we should develop an Irrationality Quotient test as counterpoint and see if it and the RQ test meet anywhere.

    Heuristics will always be necessary. Will RQ test results help make one’s heuristics more on target?

    Street smarts differ from IQ smarts. A low IQ tester may have different rationales than high scorers. Life-learned rationalities may lead to different solutions than book-learned ones. For some, Machiavelli is rational, for others he is not. But there, as the author suggests, we get into the difficult but pervasive area of ethics.  But is it rational to design around it? One affects the other.

    Outside a defined rationality that is designed into an RQ test, lie undefined ones. When will the next decision-tree-like batch of tools be developed? When will people be able to assign better probabilities to tree branches?

    In today’s quicker world, rational decision-making differs from that a century past. What is envisioned for a century hence?

    Current tests let people know their preferences. I’ve taken some and they don’t help me go in one direction just because a test says that is where I want to go. Preferences are often fuzzy and changing, as fast as the next advertisement or TV documentary.

    Take two rational people, two with equal RQ scores, give them the same complex life problem, give them the same instrumental tools, and will both arrive at the same solution?

    If one were 100% rational, would one be able to determine if God exists?  Where does rationality leave off?

    • Keith Stanovich says:

      Many of your questions seem to concern the overall theme of the fluidity of standards of rationality. This is an important issue that I have addressed in several of my books. I will just mention here two deep conceptual issues that relate to many of your questions. The standard strictures of narrow instrumental rationality that I outlined at the beginning of the essay are inherently relativistic.  As many commentators have noted, by taking a person’s preferences and goals as given, narrow instrumental rationality acknowledges the differing contexts of individuals.  This is because the differing contexts of individuals will quite naturally lead to different life goals.  Instrumental rationality, as traditionally conceived, implicitly acknowledges this context by taking current goals as given and not critiquing the historical origins of those goals.  Thus, perhaps surprisingly, the most traditional notion of instrumental rationality in fact quite clearly accepts a variety of human contexts and is not at all constrained to a specific cultural context.  That is one of its surprising strengths.  It is often not recognized that the traditional notion of instrumental rationality contains such contextual flexibility.


      The other aspect of the fluidity of rational norms that is suggested in your questions is the historical one. The rational norms of the present day most definitely differ from those of centuries ago, and could possibly differ from those of the future, because rational norms are a cultural achievement. Rational standards for assessing human behavior are social and cultural products that are preserved and stored independently of the genes. The development of probability theory, concepts of empiricism, logic, and scientific thinking throughout the centuries has provided humans with conceptual tools to aid in the formation and revision of belief and in their reasoning about action. These represent the cultural achievements that foster greater human rationality when they are installed as mindware. As societies evolve, they produce more of the cultural tools of rationality, and these tools become more widespread in the population. A college sophomore with introductory statistics under his or her belt, if time-transported to the Europe of a few centuries ago, could become rich “beyond the dreams of avarice” by frequenting the gaming tables (or by becoming involved in insurance or lotteries).


      Several of your other questions sit at the interface of the narrow and broad views of rationality that I discussed in the essay. A Machiavelli-type person may well be deemed rational on the narrow view, but their attitudes might come in for much critique under a broader view of rationality, one that critiques a person’s desires.


      Finally, the term street smarts leads interestingly to many of the points we have been making in our writings introducing the need for an RQ test.  The folk phrase street smarts, in most usages, has connotations of rationality.  Demarcating it from intelligence, as you do in this comment, makes our point exactly: rational thinking and the types of abilities displayed on intelligence tests are not the same thing.  As in my comment above, the term also has a subtle interplay with the broad versus narrow view.  It is possible that many high-level urban gang members are instrumentally rational in the narrow view—in the sense that they efficiently take actions based on their goals and beliefs.  However, a broader view of rationality might critique the goals that they hold.  Likewise, it might be possible to show that their epistemic standards are low—that their beliefs are not well determined by evidence.