Thursday, May 13, 2010

Does Moral Action Depend on Reasoning? Part 1

The Templeton Foundation has a wonderful collection of essays, from different perspectives, in response to the question: "Does Moral Action Depend on Reasoning?" Here are some thoughts and highlights, in the first of two posts, from my POV. Note: you can order the entire booklet of essays for free.

Michael Gazzaniga ("Not Really")
  • Our moral "decisions" are the result of "largely automatic and deterministic processes." 
  • "The largely unquestioned modern scientific view is that the brain enables the mind—that is, the physical organ gives rise to the hard-to-define collection of mental mechanisms governing our cognitive existence. On this view, the brain is widely believed to be a deterministic system, with millions of interacting parts that produce reliable and automatic responses to environmental challenges. Moral judgments and choices are mental phenomena that fit this general pattern."
  • "Most scientific research shows that morality is largely universal, which is to say, cross-cultural."
  • "All decision processes resulting in behaviors, no matter what their category, are carried out before one becomes consciously aware of them. Whether driven by internally determined and evolved structures or by learning and experience, these behaviors are executed by the brain in an orderly and automatic way."
  • Because of this, self-conscious reasoning, in the sense of "figuring out" or "problem solving," does not even take place.
  • The brain's left hemisphere has an "interpreter" that "seeks to understand the rationale behind the pattern of behavior observed in others and/or oneself."
  • Does the fact that the brain carries out its work before we are consciously aware of an action cause us to reject the idea that we are personally responsible for our behavior? Not at all, says Gazzaniga. He writes: "The very concept of personal responsibility implies that one is participating in a social group whose rules can be learned. When our brains integrate the myriad information that goes into a decision to act, prior learned rules of behavior are part of that information flow. In the vast number of decisions that a brain makes, playing by the rules usually pays off."
  • Now watch Gazzaniga's conclusion, esp. re. "free will": "These recent advances in understanding how the brain works in producing moral behavior do not challenge or make obsolete the value of holding people in a society accountable for their actions. Though it does suggest that the endless historical discussion of free will and the like has little or no meaning, it does not suggest in any way that we, as mental agents, are merely accessory to our brain activity. Indeed, in beginning to understand how the mind emerges from the brain, we are also realizing how the mind constrains the brain."
  • I confess to finding this view incoherent. Gazzaniga uses words that say we are morally responsible for our choices. He says all decision processes that result in behavior are carried out before we are consciously aware of an action. Help! I don't get it! Perhaps it's not "gettable"? Perhaps my brain already decided I would "think" like this, and so my words are merely the expression of my already-determined brain?
Rebecca Goldstein ("Yes and No - Happily")
  • We all have "proto-moral" emotions, like the emotion of personal outrage. This is the primal sense that "I matter." So far, as regards this, reason plays no part.
  • Where reason comes in is with the moral idea that "Others matter." Goldstein defines "reason" this way: "Reason is our capacity for teasing out implications and testing inconsistencies, and an emotion like personal outrage has implications for how we ought to think of others."
  • Reason "adds to the proto-morality of personal outrage." Here Goldstein cites Kant's "categorical imperative" as an example.
  • Speaking in a Kantian way Goldstein concludes: "Morality can be summarized with a paraphrase: Reason without moral emotions is empty, moral emotions without reason are blind."
Aref Ali Nayed ("No, it does not!")
  • Nayed says "moral action depends on compassion." "Compassion is the necessary and sufficient condition on which moral action depends."
  • Where there is compassion, reasoning is not needed. "Parents need no reasoning to nourish their children in loving-kindness. Children need no reasoning to lovingly care for their aging parents."
  • "Actions are most truly moral when they spring from so deep in our compassionate humanity that they defy merely calculative logic." 
  • Moral actions precede calculative and deliberative reasoning.  
  • Nayed asks, "Where does compassion come from?" He gives two answers: 1) God; and 2) our mothers.
  • Compassion is a pre-understood, prethematic thing, and the root and foundation of all moral action. As such, it is not "rational" in the sense of being self-consciously derived from a reasoning process.
  • Nayed's views are clearly theological, and remind me of the Reformed epistemology notion of "properly basic beliefs."
Alfred Mele ("Only if we're free")
  • "We are morally responsible for a substantial share of our actions, and this would not be true if we never reasoned about them."
  • Mele acknowledges that many will disagree with him because of the current popularity of the view that "free will is an illusion."
  • Mele looks at the current, growing scientific view that the physical brain "decides" on actions up to ten seconds before people are aware that they made a choice.
  • Mele questions studies that purport to show this. He adds that the bar for "free will" has been set impossibly high. One can affirm free will while believing that "mind" is substantially indebted to the physical brain. With this in mind, Mele asks and answers: "So are we free enough to reason about many of our moral actions? Certainly."
  • After reading Mele's response my sense is that he did not really answer the question.
Stanley Fish ("It depends...")
  • Fish begins: "If the question is 'Do those who make moral decisions have reasons at the ready when asked to justify them?,' the answer is 'sometimes yes, sometimes no.'"
  • Many people report coming to a moral decision "instinctively." Therefore, not as a result of a reasoning process (arguing from premises to a conclusion).
  • Persons live within "autopoietic systems." This means that "What is or is not a reason—what will be heard as a reason and not as something flying in from left field—will be a function of an ongoing conversation or tradition of inquiry in which certain propositions, but not others, count as weighty arguments in the process of decision-making." This may be similar to the idea of "noetic frameworks," and even "grand narratives."
  • It also seems similar to Kuhnian "paradigms," especially since Fish cites Kuhn in his response.
  • Fish concludes: "So does moral action depend on reasoning? Yes. Does knowing that help you make moral decisions? No. You are at once on your own and always already owned." I like this response. Which says, I think: we are free to make moral decisions within the noetic framework that we belong to (which "owns" us).
Christine M. Korsgaard ("Yes, if...")
  • We don't have to go through a "process" of reasoning every time we make a moral decision. Sometimes we just "know." I am certain this is true.
  • Korsgaard then says: "But moral action does not merely depend on reason. Moral action is rational action, because the moral law is a law of reason."
  • Korsgaard makes a distinction between "intelligent animals" and "rational animals." Intelligent animals "learn" from experience. Rational animals are aware of the grounds of their beliefs and actions. A rational animal "is able to ask herself whether the forces that incline her to believe or to do certain things amount to good reasons to believe or do those things, and then to determine what she believes and does accordingly."
  • "Because we can make these assessments, rational animals can exert a kind of control over our beliefs and actions that other animals, even very intelligent ones, cannot. It is because of this control that human beings, probably alone among the animals, are responsible for what we do."
  • We don't have to possess free will in some "impossible" sense (we don't have to make the bar for free will impossibly high) to claim that we can be motivated to act on the basis of some principle. To do that would be to reason about moral actions, and choose on the basis of such reasoning.
  • I like Korsgaard's reasoning. Like when she writes: "When I say that moral action is rational, then, I mean that it is action in which this extra level of conscious control, the deliberate regulation by rational principles, is exercised. To put it more simply, moral action is not mere behavior that is altruistic, cooperative, or fair. It is action governed by a conception of the way that you ought to act. To be a moral being is to be capable of being motivated to do what you ought to do because you believe you ought to do it. And that requires reason."
  • And, of course, we often fail to measure up to the high standards of morality.