From: "Phil Roberts, Jr." 
Newsgroups: sci.bio.evolution
Subject: Re: Robot Evolution
Date: Fri, 5 Jan 2007 01:57:11 -0500 (EST)


Tim Tyler wrote:

 >
 > The argument from Godel's theorem really is totally dead.
 >
 > If you don't see why, I recommend consulting the numerous
 > refutations of the argument on the internet until you
 > understand exactly what is wrong with it.
 >
 > There /may/ be other reasons for thinking machines
 > cannot match the computational powers of humans -
 > but the argument from Godel's theorem is simply defunct.
 >
 > It has been dead since the moment it was proposed - and
 > only continues its zombie existence in the minds of those
 > who don't understand it :-(

Stripped to its bare bones, I suspect the Godel argument
amounts to something like:

a.  We have reason to believe that Peano arithmetic is
    consistent.
b.  Therefore we have reason to believe that its Godel
    sentence cannot be proven within the system.
c.  Therefore we have reason to believe that its Godel
    sentence is "true".
d.  Since the machine is restricted to formal proofs, we
    can "see" something that is "true" that cannot be
    proven by the machine.
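For what it's worth, the standard formal skeleton behind steps (a)-(d) can be sketched roughly as follows (this is just the textbook first-incompleteness setup, where G is PA's Godel sentence and Prov_PA its provability predicate):

```latex
\begin{align*}
&\text{(fixed point)} && \mathrm{PA} \vdash G \leftrightarrow
    \neg\mathrm{Prov}_{\mathrm{PA}}(\ulcorner G \urcorner)\\
&\text{(a)} && \mathrm{Con}(\mathrm{PA}) \quad \text{(assumed on evidence)}\\
&\text{(b)} && \mathrm{Con}(\mathrm{PA}) \rightarrow
    \mathrm{PA} \nvdash G \quad \text{(first incompleteness)}\\
&\text{(c)} && \text{from (a), (b), and the fixed point: } G
    \text{ is true (in the standard model)}\\
&\text{(d)} && \mathrm{PA} \nvdash G,\ \text{yet an observer who
    accepts (a) can accept } G
\end{align*}
```

Note that the observer's extra step is exactly the assumption of Con(PA) - something the machine, working inside PA, cannot supply for itself.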

This is certainly not a
proof, not even a formal argument.  "Minds are
different from machines" (Lucas) is an empirical
assertion.  One does not prove empirical assertions;
one marshals evidence.  It's also why I am a bit
skeptical of Dennett's assertion about Penrose
PROVING that mathematicians are not employing a
knowably sound algorithm.

I see what I have referred to, for lack of a better
term, as 'the Godel argument' as more analogous to a
controlled experiment in which, under certain
conditions, it is possible to observe something
interesting in nature in clearer relief than is
common.  In this case, it is an occasion in which
reasoning appears to actually go beyond logic, or
perhaps to employ the same logic at a higher level,
and for that very reason avoid inconsistency.  This
suggests that, in addition to an internal form (e.g.,
classical logic), rationality may have holistic
properties that, before Godel, were not so easily
noticed.

Here is a quote from one of my papers in which I
suggest something similar to the Godel thingy above,
only in the realm of practical rationality.

[quoting my paper]
  On pages 13 and 14 of Reasons and Persons, Derek Parfit
  offers a hypothetical scenario in which there is a
  significant likelihood that a robber will inflict grave
  harm on someone's family irrespective of whether the
  individual conforms to the robber's demands or not,
  and in which, given the specific circumstances, far and
  away the best alternative would be to take a "special
  drug" that causes one to become temporarily irrational:

    "While I am in this state, I shall act in ways that are
    very irrational.  There is a risk that, before the police
    arrive I may harm myself or my children.  But, since I
    have no gun, this risk is small.  And making myself
    irrational is the best way to reduce the great risk
    that this man will kill us all.

    On any theory of rationality, it would be rational for
    me, in this case, to cause myself to become irrational.
    An acceptable theory about rationality can tell us to
    cause ourselves to do, what in its own terms, is
    irrational."(Parfit)

  I would argue that what Professor Parfit is actually
  consulting here, is not any of the current theories of
  rationality, many of which would indeed sanction rational
  irrationality in the above scenario and thereby qualify
  as self-defeating (i.e., false), but rather a shared
  implicit theory in which 'being rational' is simply a
  matter of 'being objective', and in which no objective
  is rational in any but a relative sense of the term.
  This would explain how he could get away with asserting
  a logical contradiction that none of us finds cognitively
  dissonant, in that underlying the absolutist terminology
  (rational vs. irrational) would be the shared
  understanding that in the above scenario the individual
  would actually be opting to become relatively less
  rational for a time as a means to a relatively more
  rational end (protecting his family), and in which
  the temporary reduction in rationality is merely an
  extension of the "irrationality" (lack of objectivity)
  that is part and parcel of fixating on a supremely
  valued end irrespective of the context.
[endquote from my paper]


                Rationology 101
      How the Author of Genesis Got It Right 
        (and the Golden Rule Got It Wrong)
            http://www.rationology.net


PR