Cc: evol-psych 
From: "Phil Roberts, Jr." 
Date: Wed, 26 May 2004 23:30:53 -0400
Subject: Re: [evol-psych] Evolution of morality

Mark Flinn wrote:

> There is a fine distinction between the 'is vs. ought' naturalistic 
> fallacy and what we are discussing here.  A current understanding of 
> moral systems, based on evolutionary theory (e.g., Alexander 1987), 
> does not suggest that what exists in biological reality ('is') has a 
> higher moral standing ('ought').  We are trying to move beyond the 
> old Spencerian ideas.  The anti-sociobiology groups that tried to 
> discredit evolutionary approaches with those political tactics were 
> wrong.
> But that is not to say that 'ought' does not have a scientific 
> underpinning.  Folk beliefs are not random, and unless we want to 
> place our unquestioning faith in the Koran or the Bible or the Shaman 
> of your choice, then we need to try and understand why humans have 
> 'oughts'.

I have a couple of trial balloons to offer on this 'ought'
business.  No doubt someone in the group will be able to
find a fly or two in my ointment.  Here goes:

'Ought's are entailments or predictions of causal hypotheses.
These come in two basic flavors, with a subdivision in the second:

   1. Run-of-the-mill causal 'ought's, e.g., If it rains, the grass
      'ought' to turn green.
   2. Rational agent causal 'ought's
      a. prudential 'ought's e.g., One 'ought' not drink to excess.
      b. moral 'ought's e.g., One 'ought' to love his neighbor as he
                              loves himself.

Underlying both the prudential and moral 'ought's is an implicit
theory of rationality we all share, in which rationality is simply
a matter of valuative objectivity; indeed, valuative objectivity
in one form or another underlies prudential and moral norms the
world over (exceptions to my assumed rule would be most appreciated,
if someone has one or two to offer).

Tying this into evolutionary theory, the chief benefit of morality
is NOT physical, but rather emotional, in that our self-esteem
is heavily dependent on our being able to view ourselves as
rational, i.e., as being relatively more valuatively objective
than non-rational creatures.  The reason we need to nourish
our self-esteem is that the value we attach to ourselves
is just another way of talking about 'the will to survive' in
a species that accomplishes this through long-range planning (in
contrast to blind responses to stimuli), and that we
ourselves have become a little too valuatively objective (too
rational, according to our implicit theory of same) for our own
good, and now require EVIDENCE (justification) for the rationally
inordinate sense of importance nature would like us to maintain in
order to ensure our survival.

Implicit in this is the contention that nature has not been
selecting for sociality in man, but rather for rationality,
with sociality merely one of its many manifestations (i.e.,
the need for evidence in the form of the opinions of others
that one has worth in order to sustain one's will to survive).


> We will not discover moral 'facts' analogous to the facts regarding a 
> hydrogen atom.  There are no absolute ultimate morals akin to the 
> physical laws of the universe.  

In an earlier post I actually did offer what I view as a
science-like derivation of a moral 'ought' from an epistemic 'is'.
I say science-like in the sense that many philosophers of science
I am familiar with have increasingly come to champion explanationism,
a la Peirce, Harman, Lycan and Thagard, as a more realistic view of
the epistemology underlying the scientific enterprise.  Assuming
this is so, then my science-like derivation goes as follows:

1.  Assume that 'being rational' is NOT simply a matter of
       'being efficient' (means/end theory)
       'being logical' (computationalism)
       'being self-interested' (egoism)
       'being happy' (pragmatism)
       'being strategically logical' (game theory)
       'following a universalizable maxim' (Kant)
       'fulfilling one's desires' (hedonism)
       'maximizing global happiness' (utilitarianism)
       'truth or falsehood' (Hume)
     but simply a matter of
        'being able to "see" what is going on' [non-formalizable]
     with the metaphor unpacked to
        'being rational' = 'being objective', not only cognitively,
        but valuatively as well.

2.  Corroborate the epistemic credentials (the "is" component of the
     'ought' derivation) of the above "theory" in terms of its ability
     to achieve greater explanatory coherence than any of its
     competitors (means/end, egoism, etc.).

     For example:

     a. The theory can "explain", at least in a conceptual framework not
        available from the perspective of competing theories, both the
        excessive altruism and the emotional instability (volatility in
        self-worth) observable in Homo sapiens, in that they can both
        be construed as two different sides of the same valuative
        objectivity coin (an equalizing of value between the interests
        of others and oneself).  Since in the above theory, rationality
        correlates with valuative objectivity, Homo sapiens would be
        construed as having become MORE RATIONAL than the predicted norm
        (ruthless selfishness).  While this doesn't offer us an immediate
        causal account of the anomalies in question (altruism and emotional
        instability), it certainly offers one a new conceptual framework
        for thinking about them which, in turn, might lead to an improved
        causal account (e.g., they are maladaptive byproducts of the
        evolution of rationality).

     b. The theory can shed new light on a number of rationality paradoxes
        such as Newcomb's Problem, Prisoner's Dilemma, etc., in that all
        such paradoxes stem from the assumption that rationality is a
        strategic attribute.  

     c. The theory can circumvent the logical paradoxes of rational
        irrationality, similar to the example offered by Derek Parfit
        on page 12 of 'Reasons and Persons'.

     d. The theory can explain the chaos of the Cohen symposium on
        rationality ('The Behavioral and Brain Sciences', 1981, 4,
        317-370) by sharpening the distinction between logic and
        reasoning (I won't go into this here).

     e. The theory can offer intersubjectively reproducible empirical
        evidence (feelings of worthlessness) that mother nature's most
        rational species is beginning to show signs of "standing outside
        the system" (Lucas), corroborating the Lucas and Penrose position
        on the implications of Gödel's incompleteness theorem (i.e., minds
        are not machines).  (Again, a bit too complex an issue to explain
        in this particular post.)

     f. The theory is compatible with what is the currently
        accepted paradigm for practical rationality, the 'equal
        weight' criterion, albeit extended beyond the periphery
        of self-interest:

          My feelings a year hence should be just as important to me as
          my feelings next minute, if only I could make an equally sure
          forecast of them.  Indeed this equal and impartial concern for
          all parts of one's conscious life is perhaps the most prominent
          element in the common notion of the _rational_.  (Henry
          Sidgwick, 'The Methods of Ethics').

          All these theories [of rational self-interest] also claim that,
          in deciding what would be best for someone, we should give equal
          weight to all the parts of this person's future.  Later events
          may be less predictable; and a predictable event should count for
          less if it is less likely to happen.  But it should not count
          for less merely because, if it happens, it will happen later
          (Derek Parfit, 'Reasons and Persons').

3.  Derive the 'ought' component via the syllogism:

         Given that one chooses to be rational,
         MORAL MAXIM:
         one ought to 'love (value) their neighbor as they love (value)
         themselves'.

   Notice that this moral maxim does not contain any values in the
   maxim itself.  It merely states that, given one values X such
   and such an amount, one ought to value Y such and such an
   amount.  But that does not mean that the ought doesn't have
   underlying premises.  Indeed, it would seem to have both a
   cognitive AND a valuative premise.  The cognitive premise is that
   the underlying theory of rationality is "true", and the valuative
   premise is that the individual in question values rationality.
   And guess what?  I don't think this is actually MY theory of
   rationality, in that I suspect the maxim itself rings a bell in
   just about anyone capable of reflective thought.
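
   For what it's worth, the premise structure just described can be
   sketched as a single inference (the labels T, R and M below are my
   own shorthand for the premises named above, not part of the
   original argument):

```latex
% T : the cognitive premise  -- the implicit theory of rationality
%     (rationality = valuative objectivity) is "true"
% R : the valuative premise  -- the individual in question values
%     rationality (chooses to be rational)
% M : the moral maxim -- one values one's neighbor as one values oneself
%
% The derivation is then a simple modus ponens:
\[
  \frac{T \qquad R \qquad (T \land R) \rightarrow \mathrm{Ought}(M)}
       {\mathrm{Ought}(M)}
\]
```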

   My conclusion, then: moral 'ought's are entailments of an implicit
   theory of rationality we humans have been subconsciously entertaining
   for the past several thousand years (as evidenced by the widespread
   acceptance of the moral maxim).  As such, they are entailed by the
   implicit cognitive premise that our shared theory of rationality is
   "true" and the shared valuative premise that rationality is itself of
   intrinsic worth, or at least of sufficient worth that humans will
   often sacrifice their well-being, and at times their very lives
   (e.g., self-immolating Buddhist monks), in the pursuit of moral
   objectives.

                        Rationology 101
             How the Author of Genesis Got It Right
              (and the Golden Rule Got It Wrong)

Phil Roberts, Jr.