Can Our Capacity for Moral Reasoning Be Strengthened?


Of course our ability to engage in moral reasoning can be improved, though there are grounds for modesty about our prospects. Recent “scientific” pessimism on this score claims that what we take to be moral reasons are mere rationalizations; and that when we think we are reasoning with others, we are actually engaged in unreasoned persuasion. While such pessimism about the role of moral reasons and reasoning rests on some important insights, it ignores respects in which we can be responsive to good reasons, and it overreaches in its conclusions. This extravagant view treats half-truths about the difficulty of moral reasoning as if they were the whole truth about its impossibility. The case for such strong pessimism rests not on science but scientism: a narrow-minded view of what facts can support evaluative claims, which rests on dubious hidden premises and is not mandated by a scientific worldview.

In the first place, the human capacity for moral reasoning can be strengthened because our general reasoning capacities are amenable to improvement. This is most readily observed by considering how people develop as they mature from children to adults. We improve at logic, whether or not we learn the fancy Latinate names for rules of inference. We also get better at understanding and explaining events, and thereby become more adept at the common form of reasoning called inference to the best explanation. An inference to the best explanation is a hypothesis about what best explains some event or phenomenon, and we make such inferences all the time. If the lawns and streets are wet this morning, the best explanation is that it rained last night while we were asleep. Such inferences are especially useful for making one’s view of the world more accurate. They help us see through many popular conspiracy theories, for instance: those that require implausibly exquisite competence and secrecy from a large group of people. Thus abstract principles of reasoning, in conjunction with observation and general knowledge, can generate substantial conclusions about the world. Some basic training in logic and statistical reasoning doesn’t hurt our ability to reason, either.

Two initial objections may occur to the reader. First, haven’t behavioral economists and psychologists shown us that people are very bad at reasoning? We humans are predictably irrational in various respects, often succumbing to errors about value and probability that are relatively easy to exploit. Second, what do these reflections on general-purpose reasoning have to do with specifically moral thought? Both worries are reasonable but not decisive. What economists and psychologists have shown is that people unreflectively adopt heuristics—rough-and-ready simplifying principles—that work pretty well in a wide variety of common contexts. When interested parties, including both marketers and scientists, figure out the heuristics people use, they can exploit circumstances where those heuristics fail. We commonly chase sunk costs, overvalue things that belong to us, and respond differently to equivalent scenarios depending on how they are framed. Although these failures of rationality are fascinating and important, concentration on them can obscure the fact that our reasoning works well in many other circumstances.

What does all this have to do with specifically moral reasoning? Plenty of general reasoning is morally relevant, in that it can be applied to morally significant cases and yield practical conclusions. Hence in an important sense we can improve our moral reasoning simply by improving our general-purpose reasoning and applying it to moral questions. A narrower conception of moral reasoning would focus on our ability to reason with moral concepts such as fairness. Again it is helpful to look at bad moral reasoners: children just learning how to apply moral concepts. Kids learn early that “It’s not fair” is more powerful than “I don’t want to” or just “No.” Once introduced to the power of appeals to fairness, children will start to make them when they previously would have said no or just cried, as if “unfair” just meant “contrary to what I want.” But quickly enough one learns that claims about fairness have to be disciplined in certain ways. You can only make a claim of fairness when you can offer reasons others should accept as binding regardless of who benefits in any specific case. This is just one example of how specifically moral reasoning can be improved.

Admittedly, general principles about fairness and other moral concepts only go so far, and they cannot determine how to balance fairness against other goods, such as welfare. This is not a question that can be given a persuasive answer in the abstract. While it is easy to doubt whether judgments about problem cases are justified, in part because intuitions diverge, it is hard to deny the superiority of certain answers in many ordinary cases. Hence although there are grounds for a modest pessimism about the limits of moral reasoning, the radical proposal under consideration has deeply implausible ramifications.

Recently several arguments have been offered as scientific grounds for such strong pessimism about moral reasoning. I will focus here on two of the most significant. First, it is claimed that the function of the brain is advocacy rather than discovery; the idea is that evolution built the brain to win arguments rather than to find truth. Second, some hold that moral reasoning is fraudulent because we typically engage in it when a moral judgment has already been made on other grounds, primarily emotional ones. Such reasoning amounts to no more than a search for arguments for a pre-established conclusion. In this view, what passes for moral reasoning is really post hoc rationalization.

The idea that the human brain is a machine built to win arguments rather than to discover the truth seizes on the fact that people are biased in myriad ways, and these biases influence their evaluation of evidence. (This tendency is not quite as dismal as it seems, since it makes sense to hold on to core beliefs and values firmly rather than continually reevaluating them, which would have significant psychic costs.) But the claim that the brain’s primary function is advocacy rather than discovery is not credible. Most intellectual tasks involve problem solving rather than persuasion; you don’t argue with a bear but hunt, fight, or flee from it. Even in social contexts, where persuasion is most important, there are obvious costs to being proven wrong. Convince the tribe that bears are harmless and your reputation is likely to suffer. This picture of human thought looks badly distorted. As with much of the case for pessimism, this is an overstatement that derives illicit support from its cynical appeal.

The second claim is directed specifically at moral reasoning, which it holds to be mere rationalization. In this view, even though general reasoning skills can be improved, these improvements do not carry over to moral thinking because the moral domain is shot through with strong interests and emotions. Worse yet, sharp reasoning skills can improve people’s ability to justify whatever they want to do. But the evidence for such an extraordinary conclusion turns out to be surprisingly weak. The strongest case comes from the well-documented human capacity to confabulate about our reasons, telling neat stories about what we do and why, which do not hold up under scrutiny. Although confabulation happens in various contexts, many of which have nothing to do with value judgment, no one thinks that this phenomenon supports a global pessimism about reasons. Even if we sometimes get it wrong about what we’re doing, everyone grants that in most ordinary contexts we know both what we are doing and why. Moreover, sometimes when people confabulate a false causal story, they are still sensitive to reasons that they cannot articulate. In one famous study, subjects who were unconsciously tipped off by an experimenter about how to solve a puzzle often told demonstrably false stories about how they realized the solution—but they were nonetheless responsive to a clue in their environment that helped solve the problem.

Consider one of the most frequently cited experiments concerning moral judgment specifically. Social psychologists claim to have found that subjects were morally dumbfounded by various “offensive yet harmless” scenarios, such as eating one’s dead pet dog and cleaning the toilet with the flag. That is to say, although they were quite sure there was something wrong with these actions, the subjects could not give reasons in support of their judgments. Or so it is claimed. This is put forward as evidence that we typically or always make moral judgments on the basis of irrational emotions and then search for bogus reasons to support them afterwards.

While the dumbfounding experiment has been deeply influential, it has serious problems, the worst of which is that it presupposes an extremely narrow conception of what can count as a good practical reason: a reason to act or forbear from acting. This should be clear, since there are obviously good reasons not to perform those offensive actions previously described or—to take another example from the original experiment—to cannibalize a corpse in a medical laboratory so as to avoid wasting meat. In the dumbfounding scenarios, the experimenters stipulate that there are no harmful consequences of actions that stir up strong aversive reactions. But although a fortunate outcome can be stipulated about a fictional scenario, one cannot simply stipulate that a type of action isn’t dangerous—that is, likely to be harmful in realistic contexts—or that it does not violate well-founded rules, such as the rule laboratories have against the desecration of corpses. The general tendencies of actions are matters of fact rather than stipulation. This point is crucial, because good rules and sound intuitions are based on such generalizations about the likely consequences of an action, not on its specific results, which are often unpredictable.

Moreover, the psychological literature on moral dumbfounding presupposes that the only thing that can count as a practical reason is harm. It then adopts an untenably narrow conception of what counts as harmful that ignores danger, treats well-founded rules as mere suggestions, and ignores painful emotions even when they are predictable. This is not science but scientism, which purports to rest on purely empirical grounds but actually relies on hidden and implausible moral premises.

It is mere scientism to insist that deeply held human aversions and attractions, such as our sensitivity to the expressive aspects of our actions, are irrational taboos to be dismissed as magical thinking. Yet this literature does just that. There is nothing inherently magical about being averse to sticking pins in a doll constructed to resemble your child, for instance. Magical thinking requires some false causal belief, such as belief in the power of voodoo; but one need not labor under any such illusion in order to prefer not to deface an image of your beloved. Similarly, people are reluctant to do things with a symbol of something they care about (such as a flag) which suggests indifference or hostility toward what it symbolizes. Most of us would not want to drink water that has had a sterilized roach dipped into it, even though we know the roach did not add any germs to the water, simply because such “roached” water is disgusting. While there is a science of disgust, there is no science of the disgusting—that is, of what merits disgust—and the tacit assumption that only germs can be disgusting leads to some obviously absurd conclusions. But these are just the cases that the psychological literature takes to show that we are in the grip of taboos and magical thinking—not just in certain instances of moral judgment but typically.

Most ordinary people are aware of these points intuitively, even if they cannot say more about why they are averse to drinking roached water, desecrating corpses, or eating their dead pet. The dumbfounding literature simply assumes that the offensiveness and disgustingness of certain actions do not provide reason to avoid them—so long as they are stipulated to be, in some narrow and artificial sense, harmless. Indeed, harm itself is not a scientific concept but a moral one; yet that does not undermine its significance. Though one could attempt to formulate an empirical notion of what counts as an injury, say, all that would do is demonstrate that there are other sorts of harms than injuries. It is simply not in the purview of science to discover what humans ought to care about.

What we are left with, after the hyperbole, is that there are real worries specific to moral reasoning. When people’s interests are involved in an argument, we can expect them to be biased. And when people are in the grip of a strong emotion, they are often unreasonable. These observations are true but banal. What is more, if moral reasoning is required to be untainted by anything contingently human—such as our attachment to specific people and projects, our sensitivity to symbolism and emotional expression, and our special concern with the consequences of our own actions—then conceptions of morality that impose this requirement will inevitably be disappointed by the inability of humans to live up to them. But none of this is to say that it is impossible for people to act against their self-interest, knowingly, or to constrain their behavior on grounds of fairness or other moral concepts. We can and do, often—albeit not as often as we flatter ourselves in thinking.

But the question was not whether moral reasoning is difficult to engage in honestly and, at times, even harder to follow. It was whether it is possible to strengthen our capacity for moral reasoning—or whether, as some pessimists claim, moral reasoning is fraudulent or pointless at its core. The modestly pessimistic claim is true but rather obvious to those who are not naïve about human nature. The strongly pessimistic claim is exaggerated and as simplistic as its naively optimistic counterpart. We should reject it.

Questions to consider in the comments:

  1. What are the strongest grounds for pessimism about moral reasoning?
  2. Can you formulate a claim weaker than that moral reasoning cannot be strengthened but stronger than that moral reasoning is difficult, which can be stated clearly and evaluated with evidence?
  3. What challenges are specific to moral reasoning, and what sort of strategies might be employed to meet them?
  4. Is there anything inherently irrational in caring about expressive and symbolic aspects of our action? Would more rational creatures than humans not care about such things? Do you suppose that we humans could rid ourselves of such cares and, if we could, why think we should do so?
  5. What would you say about someone who engaged in cannibalism or ate a dead pet, not because he was starving but simply in order to avoid wasting edible meat or to try something new? What about someone who did something that risked serious emotional harm but which, in the event, proved harmless? Are these actions OK or is there something wrong with them?

Discussion Summary

Most of the comments on the essay focused less on moral reasoning than on moral judgment generally. Perhaps this should not be surprising. Although the essay was primarily concerned with recent scientifically based arguments for pessimism about moral reasoning, those arguments tend to take for granted the answers to the most basic issues of moral metaphysics. That is to say, they assume that moral language is meaningful, that some moral claims are true, and that moral knowledge exists, even if they disagree about the nature of such truth. The psychologists tend to accept moral intuitions as they stand, despite thinking them to be driven by emotion rather than sensitive to reason, by adopting a form of moral relativism. The philosophers tend to reject commonplace moral intuitions, precisely because they depend on emotion, but they want to replace these tainted judgments with self-evident, rational intuitions. Thus both embrace pessimism about ordinary moral reasoning despite being optimists (that is, realists) about moral judgment.

These new arguments that arise from the empirical ethics movement are novel and interesting, but they presuppose certain things that several of the commenters wanted to call into question. It might help wrap up the discussion to examine these presuppositions. Consider first the challenge that moral judgments are just expressions of the speaker’s emotions and therefore neither true nor false. This is the view associated with logical positivism, though it need not adopt the positivists’ radical account of meaningfulness, which has largely been abandoned. The biggest problem with the simple story of moral judgment as expression of emotion is that nobody treats moral judgments—their own or other people’s—in this way. In making moral judgments we attempt to persuade others to feel as we do. At the very least, then, what we express is not just approval or disapproval; we also urge others to share it.

This is the point at which moral reasoning enters the picture. Another fact about how moral discourse actually takes place is that we do not simply make judgments but offer reasons for them, which purport to justify those judgments. We do not simply say that abortion is always wrong or permissible (or some more qualified claim). We also say why: because it stops a beating heart or because it is my body and my choice—to take two bumper-sticker-quality reasons, for example. Although many people hold skeptical theories about moral judgment, few can consistently treat moral judgments the way those theories seem to require. And it does seem like some reasons are better than others. Most people who have thought about the issue should be able to offer a first line of response to both of those bumper-sticker reasons. This shows how we treat moral judgments: as claims that stand in need of justification, and that can be justified with reasons. To be sure, those reasons give out pretty quickly. It’s hard to see what more could be said to defend the claim that pain is bad, for instance, but it’s also hard to see what further justification of that claim is necessary.

What needs to be the case in order for moral reasoning to be anything like what it purports to be: evidence in favor of some moral judgment? It need not be the case that there are objective answers to all moral questions, especially not answers that are independent of anything distinctively human yet can speak to all rational beings. There are many domains of evaluative judgment—concerning aesthetics, for instance—where it seems clear that something human must be implicated in truths about beauty. A rational being with fundamentally different sensory equipment would not see any point to our concept of the beautiful. Nor does it seem like we need to believe in a final aesthetic truth, where, say, all painters are ranked in order of their quality, in order to be confident in the relative merit of Rembrandt versus Rockwell as painters.

It seems instead that there are more modest ways to improve our ability to reason about moral matters. We can point out relevant similarities between one case and another—for instance, between abortion and capital punishment—and then find disanalogies between the cases as well. Then we can examine the similarities and dissimilarities and see whether some are more pertinent than others. There is no guarantee that we will agree about this, or that we will not find ourselves, at the end of the day, unsure about our judgments. But this is true about disagreement and reasoning in other areas as well.

Two New Big Questions:

1. When is the opinion of experts more likely to get the right answer than mass opinion, and when does the “wisdom of crowds” exceed that of individual experts?

2. How do universities promote or inhibit diversity of opinion?

21 Responses

  1. Wilhelmus says:

    In the last 30 years the number of people on earth has doubled. This also means that the amount of negative thoughts/pessimism has doubled. It is like a newspaper: we always see the bad news and are rarely nourished with good news, which is why the pessimism “seems” to take over (economy, politics). However, the fact that humanity has doubled also means that LOVE has doubled; we love our children, and this is not a thing only for the happy few. The moral of love has strengthened; it is only that most of the attention goes to the deviations, because they represent the “news”. We have to be aware of these deviations, of course, to stay observant and attentive to the real morals that mankind needs.


  2. twomeyw2 says:

    I don’t think you can separate morality from emotion.  Certain actions feel right or wrong based on genetic dispositions and cultural ones that have been developed over the centuries and that we have been taught throughout childhood.

    I think the way to strengthen our moral reasoning is to better understand our emotions and how they are impacting our reasoning so we can isolate the negative emotions (ego, revenge, personal gain, etc.) that may be negatively affecting our reasoning.  As well as learning the historic background for why certain cultural standards have come to be.

    But, I believe that a purely rational (i.e. no emotion) moral view is impossible.  How can you possibly value one thing over another without emotion?  The universe after all, simply is.

    I think there is danger in even trying to reach such an (emotionless) intellectual state.  The book “The Mind’s I” makes a good argument that consciousness cannot exist without emotion.  A purely rational brain would just be a lump of matter without at least some drive (such as curiosity) to push it along.  Yet, it seems like some of the scientists you mention would prefer to reach such a state.

    • Daniel Jacobson says:

      I agree with you that the emotions play a crucial role in moral thought. Or, rather, they play several crucial roles. It’s helpful to differentiate between two ideas: the idea that emotions can guide you (as you put it, that “certain actions feel right or wrong”) and the idea that emotions can motivate you (as vengeful feelings motivate people). In both respects, emotions can work for better or worse. They can guide you poorly or well; they can motivate you to good or evil. This much is largely (but not entirely!) uncontroversial.
      In fact, I think there is an even deeper connection between emotions and evaluative thought, in that certain distinctively human values are in part constituted by emotionally driven ways in which we humans see the world, for instance as disgusting, shameful, funny, worthy-of-pride, and so forth. This idea is more controversial and is characteristic of the school of moral philosophy known as sentimentalism. Some of the most important figures working in this tradition arose out of the Scottish Enlightenment: Francis Hutcheson, David Hume, and Adam Smith.
      One thing I find interesting about the pessimism about moral reasoning that I addressed in the essay, which arises from what is known as the empirical ethics movement, is that its champions come to radically different views about morality (as opposed to moral reasoning). That is, they are all pessimistic about our capacity for moral reasoning, but they draw very different conclusions from this pessimism. Some think we should throw out the emotions as “garbage”: vestiges of our contingent evolutionary and cultural history that have nothing useful to teach us about morality. They can be optimistic about morality because they believe in purely rational intuitions that are self-evident (such as that pain is bad and pleasure good). Others think we should embrace our emotional responses, but not because they track moral reality; rather, however we happen to respond determines how things are. This is a form of moral relativism. I think both these approaches are too extreme.

  3. barrycooper says:

    I wrote a piece on Goodness dealing with this rough topic, which I will link at the end of this post.  In my view, the academic search for singular best answers in the moral realm is futile, just as it is futile to search for final qualitative gestalts in any realm of human endeavor.  We live in a universe without a top or bottom, in which up is defined solely by the presence of gravity.  We must reason, then, as bubbles in an endless ocean.  Our advantages are that we are self aware bubbles, and we are aware of one another.

    Logically, in any purposive activity, one must define one’s goal.  The simplest and most obvious goal in human life is happiness.  The next question is: are there grades and types of happiness?  My answer is that, yes, there are.  The happiness of a parent seeing a child succeed is in my view qualitatively higher than spending time with a prostitute.  The pride of success in a long, hard fought battle is better than intoxication.

    Logically, since I cannot inhabit other people’s minds, all such reason must proceed from my own experience.  If my experiences are shared, then I will generate recognition in others.  I am not stipulating general rules; I am, rather, saying “this is true for ME, and I believe that you will find it true for YOU also.”  Such a thing may be an approximate general rule, with exceptions.

    In my view, there is no room for ontology, per se, but rather for tendencies and directions and approximations.  I call a moral order a Telearchy: it is an order–a complex order, a formally “chaotic” order–based upon chosen aims and principles.

    Within my own moral ecology all moral decisions are local, imperfect, and necessary.  It will not be necessary for me to render a decision on whether or not to eat my cat until the cat dies.  And if I simply choose not to eat my cat because I don’t want to, that is fine.  Nothing further need be said, as this is not even an important decision.

    Your capacity to pursue your own rational self interest–a combination of temporal simple pleasures and higher grade, more difficult “flow” sorts of experiences–is dictated by your character.  In many cases, it is easier to make a decision which does not best support your own long term best interests.  This means that a properly moral disposition will have the capacity to reject self pity, and the capacity to persevere in the face of difficulty.  I therefore make these two habits immutable principles within my own creed.

    My third core principle is what I call Perceptual Breathing, which is the constant habit of reconciling abstractions with concrete realities, and more generally constantly pursuing UNDERSTANDING on all the levels on which it operates: kinesthetic, emotional, cognitive (both in terms of patterns of thinking and actual knowledge) and in my view spiritual.

    Thus, in answer to your question as to whether or not moral reasoning can be improved, I would say both no and yes.  No, because I don’t think you can “do” morality in the abstract.  I do not think it is a useful activity.  Yes, because characters can be improved, judgement improved, knowledge gained.  But what is being improved is a complex moral gestalt that is unstable, but oriented through movement in a chosen direction.

    A few thoughts.  My piece (I hesitate to call it an essay) is here:–modified.pdf

    You may find the rest of the website of some interest as well.  Morality is the rough subtext of everything on there.  Even when I deal with economics, I am trying to develop a better understanding of the effects of specific types of policies on generalized human well being.  That website is

  4. Lime says:

    As a logical empiricist I find that all values come from our emotions. It is pointless to say that my emotions are better than yours so I will proceed to improve them. It is literally nonsense.

    • Daniel Jacobson says:

      Surely it isn’t “literally nonsense” to criticize certain emotions and endorse others. It makes good sense to say that some fears are phobias, whereas others are well justified because they are directed at things that are really dangerous. Don’t we all make such discriminations, all the time? Don’t you too, in living your life?

      Go back and re-read Ch. 6 of Language, Truth, and Logic, and I think you’ll find that even A. J. Ayer finds a point to moral discourse! More sophisticated expressivists such as Stevenson and Gibbard go much further. I’m a Michigander, so I have a soft spot for sophisticated expressivism; but before we can talk about that, I have to move you off of your simple theory.

  5. Lime says:

    Yes, I too have read A.J. Ayer; in fact I too am from Michigan and had Stevenson as a prof. My main point, however, is that no moral value is any “better” than any other. That is why there are no logical positivists writing thousands of words telling us that their morals are any “better” than others’ or telling others how to strengthen their “moral reasoning.”

    • Daniel Jacobson says:

      Another Michigander! Good. For the sake of other readers, then, let’s first agree that Stevenson in particular (but also Ayer) held that although moral judgments are expressions of emotions — and hence are neither true nor false, strictly speaking — there is a very important point to making them: we use evaluative judgments for purposes of persuasion.


      This then might be put forward as grounds for pessimism about moral reasoning; indeed, it has similarities to both the arguments I canvassed in the essay. But Richard Brandt (another Michigander) criticized Stevenson for holding a view that implies that one reason can be better than another only by being more persuasive, not by actually justifying an emotion any better than another putative reason does. Notwithstanding my regard for Stevenson, who performed a great service by focusing on this “dynamic” role of evaluative discourse, I find this complaint very telling.


      So let us return to the crux of the matter. Do you disagree with my claim that some reasons to feel an emotion are better than others? That is, better not in the sense of being more persuasive, but in the sense of better justifying an emotional response. Take fear, for example. Do you not in fact differentiate between fitting fear (of a nearby grizzly bear, say) and unfitting, phobic fear (of a harmless spider)? Moreover, can you deny that some reasons, such as that grizzlies eat people and common spiders pose no danger, give us grounds for fearing the one and not the other? And doesn’t this hold more generally for other emotions too, such as guilt and anger? Someone who grants this point – and I don’t see how to avoid it – inches closer to granting that some norms about the justification/warrant/rationality (call it what you will) of emotions are better than others.


      But I’ll stop there for now and ask whether you dispute the claim that some emotions are, and others are not, justifiable in this sense.


      For the sake of other readers, I’ll add one more thing. I might have brought out the big guns initially, by pointing out that Lime’s view has the consequence that a moral view advocating genocide is no worse (or better) than your own. That seems to most of us like an absurd conclusion, to which only those in the grip of a theory – here, a theory about meaning and moral judgment – could be drawn. And what theory is more credible than the conviction that some moral views are, indeed, worse than others?

      Thanks for this line of argument, Lime, which is exactly on point. I hope you’ll continue the conversation. To lay my cards on the table, the view I’m aiming at (eventually) is the sort of sentimentalism that I find most persuasive: one that can accommodate the commitments we all have (I contend) to drawing distinctions between fitting and unfitting emotional responses, and between good and bad reasons for having them. One might call such a view rational sentimentalism.

  6. twomeyw2 says:


    Your view seems a bit hypocritical.

    We (as in our consciousness) require emotion to function. Assuming you value functioning (which apparently you do) you can’t rule out all emotion as valueless, or all emotion as equal.

    A being that had reached the pinnacle of evolution (the ability to adapt and evolve in real time as he willed) would not turn off all emotional drives, or he would cease to exist. He would instead tailor his emotional drives to achieve a superior, more efficient level of consciousness.

    Humans obviously haven’t reached the pinnacle of evolution, but our minds are somewhat adaptable and we are capable of using knowledge and reason to better deal with and utilize the emotions we have. It therefore makes perfect sense to strive for improvement to our moral reasoning.

    Dismissing all emotion as equal or valueless is dismissing the idea of self improvement or even human improvement (going along with the idea of a sort of collective human conscious).

    If you really disagree with this (driven by some feeling of your own I presume) why bother at all?

  7. Benson says:

    The question “Can our capacity for moral reasoning be strengthened?” needs to be decomposed into the questions: What is morality? What is moral reasoning? How should we understand the strengths of moral reasoning? And, finally, how can we affect it?

    Morality is a set of rules and attitudes that deal with interactions between people, stemming, in part, from the combination of the drives for self-protection and for group empathy, as well as from experience and inculcation of group attitudes. Apparently “moral reasoning” is a code to indicate that empathy is primary. Discussions of this sort often contain implicit assumptions that the authors detect behavior in others which does not accord with their own principles, and consequently that it is necessary to find a means of extending the reach of what they consider correct. Attitudes can be affected in two principal ways; namely, by repeated statement and example, or by threat of punishment.

    • Daniel Jacobson says:

      Definitions of basic concepts are very difficult to give. Part of what you offer as characterization of morality seems definitional, but another part seems like a (plausible) speculation about why morality exists. No doubt morality involves a set of rules and attitudes that deal with interactions between people. But so do other sets of norms, such as etiquette and the rules of football. I should think that the concept of obligation, or at any rate of right and wrong, are central to morality; whereas other norms and attitudes — even some that stem from empathy and the motive of self-protection — are not most helpfully seen as moral. But this is controversial, and some of my friends try to convince me otherwise! If I am right then many important rules and attitudes are non-moral (not to say immoral).
      I like Mill’s characterization of morality: “we call any conduct wrong, or employ, instead, some other term of dislike or disparagement, according as we think that the person ought, or ought not, to be punished for it; and we say that it would be right to do so and so, or merely that it would be desirable or laudable, according as we would wish to see the person whom it concerns, compelled, or only persuaded and exhorted, to act in that manner.” But there are problems with every attempt to define the realm of the moral, and ultimately any definition will be revisionary of some ordinary thought.
      I’m not sure why moral reasoning indicates anything about empathy. (And if there’s any code here, I don’t know it!) By ‘reasoning’ I just mean thinking that proceeds in steps, on the basis of reasons that are taken to justify each step. As opposed to something like perception, say. Those who champion morality but disparage moral reasoning think that we simply “intuit” moral conclusions, like we simply see what is in front of us. Those who criticize such intuitionism tend to complain that it illicitly borrows the epistemic credentials of perception.
      But I certainly don’t say that something isn’t a piece of moral reasoning on the grounds that I disagree with the conclusion! Or that it is moral reasoning because I agree with it. Just like reasoning outside of the moral domain, some of it is good and some of it is bad.
      I agree with you that two of the principal ways in which we change people’s attitudes are by exhortation and by punishment. But those are not exclusive, surely: we can use carrots as well as sticks, giving positive as well as negative incentives. And, I believe, we use reasoning as well as mere exhortation (or “repeated statement”). Do you want to deny that we can reason people into better moral views? That denial would be in line with many of the pessimists I’m discussing here. If that’s your view, I’d like to invite you to follow up and explain why you’re skeptical about this possibility.

  8. Lime says:

         As clever as Jacobson is, he hasn’t changed my views. Facts remain facts; values remain values. The same fact can cause different values: a doctor who cures is loved. If he causes pain, he is hated. One can create clever arguments, but values have lives of their own. If all of our facts are the same, we value alike. We create realities for people who disagree, call them crazy, etc. Argument is fun, intellectually stimulating, but values have lives of their own, coming from our emotion or unfree will, if one listens to the current psychological fad.

  9. barrycooper says:

    Here is a question: to what extent are philosophical arguments best understood in aesthetic and not utilitarian terms?

    I will be honest and admit that I sometimes am tempted to go to Mensa meetings for want of intellectual companionship, but find that this is the sort of thing that smart people do.  I vastly prefer the company of construction workers, who do not feed on uncertainty and confusion, or waste time comparing one unanchored abstraction to another.

    • Daniel Jacobson says:

      Please help me understand your question. What would it be to understand philosophical arguments in aesthetic terms rather than utilitarian terms? Do you mean to evaluate actions as beautiful or cruel, based on their intentions or the motivation behind them, rather than simply in terms of their consequences? Or do you have something else in mind?

      • barrycooper says:

        First off, please forgive my tone.  I am by nature impatient. It is a moral flaw because it causes me to engage in ways with people that damage my own best interests.

        What I mean is this: the PROCESS of philosophizing becomes for many an end in itself.  Philosophy should not in my view be an academic subject.  It is virtually impossible to measure progress.  And if one looks at the current state of affairs in our universities and intellectual life generally, it seems clear that what it CAN do, when done badly, is make things worse. 

        What many academic philosophers really live for is the thrill of the thought, particularly the new thought.  For my own purposes I distinguish intellectuals, whom I define as cognitive aesthetes, from what I term “thought workers”, which is to say people who want to solve specific problems, then burn the bridge; who, in other words, are attached to the outcome, not the process.

        The only measure of a philosopher I can see, in our modern world, is the extent of their acceptance by their peers.  The wider world will never know they existed, not in the way we know, for example, that energy can come from an atom.

        I am perhaps being irritable.  You are after all doing your job.  In any event, I will leave it at that.  I doubt I am adding anything.

        • Daniel Jacobson says:

          Wittgenstein is supposed to have said something like, “The human mind is drawn to philosophical error as the human body is drawn to sin.” I agree. Speaking for myself, I am not drawn to philosophy simply for the sake of arguing. I want to get things right. Some philosophical positions are clearly wrong. Not just positions held by professional philosophers, but also by scientists waxing philosophical, by laypeople, and by politicians. The job of the philosopher, at least the moral philosopher, is to correct bad philosophy. This is an important task, which has real ramifications.

          For instance, logical positivism rests on an untenable theory of meaning, and it wrongly disparages emotions as being insensitive to reason. And moral relativism wrongly asserts that we cannot properly criticize moral attitudes and norms that are sufficiently well entrenched in a culture. Both of these views would have important ramifications, if true. Both are false, and the arguments for them inadequate. (Or so I am here merely asserting.)

          That said, we philosophers do not build bridges but work in the field of ideas. You may underestimate the importance of ideas, however. Recall Keynes: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”

          • barrycooper says:

            Your response is reasonable in the best sense.  For my part, I tend to view the halls of academia as having been overrun by barbarians, and forget that there are people out there still trying to protect what needs protecting.  This is an important and needed task.

            As to the topic, I would submit that the process of reasoning when it comes to moral issues is one of tying together behavior with likely EMOTIONAL outcomes.  I would submit that reason supports emotion, not the contrary.  Why are we not cruel to others?  Because it makes us feel bad, if we are capable of positive social interactions.  It isolates us, and the process of rationalization disconnects us from our deepest positive feelings.

            I don’t have time to say more, but will link a post I just made on what constitutes progress.  I include there a discussion of what I consider moral progress:

          • Daniel Jacobson says:

            Thanks, Barry. There are ideas and intellectual virtues that need protecting, and I agree with you that many of them are currently under assault. 

  10. Justin says:

    It seems as though many of the commenters are resisting the idea that our capacity for moral reasoning can be strengthened due to doubts about whether there are any final moral truths toward which reasoning is directed. Perhaps they are assuming: only if there are such final moral truths could there be a meaningful notion of strengthened reasoning, since strengthening would be assessed in terms of getting us closer to those truths. If so, that seems hasty. We could presumably imagine that the human capacity for singing could be strengthened – more people could have stronger voices, sing more often on key, etc. – without there being a finally best way of singing. And while some possible changes in human singing might be ones that only some would regard as improvements, others might be ones that would be nearly universally recognized as such. It looks like Jacobson has given us some ways of thinking about similar incremental improvements in moral thinking that don’t require a fixed endpoint.

    • Daniel Jacobson says:

      That seems like a shrewd diagnosis, doctor. I would add: although there may be no final moral truths, there are surely some false moral views. Including many with actual adherents. Often these views can be undermined by reasoning — either moral reasoning or general reasoning that is morally relevant.

  11. twomeyw2 says:

    I really like Justin’s music example.

    I wonder if those as hardcore as Lime refuse to make or discuss value judgments on music, food, movies, etc., as well.  Conversation must be pretty boring.  But I guess boredom is just another emotion…