> Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).
> [...] there will be truths that the computer can simply never reach.
It's true that if you give a computer a list of consistent axioms and restrict it to output only what its rules of inference can derive, then there will be truths it will never write -- that's what Gödel's Incompleteness Theorem proves.
But those are not the only kinds of programs you can run on a computer. Computers can (and routinely do!) output falsehoods. And they can be inconsistent -- and so Gödel's Theorem doesn't apply to them.
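Silly but concrete: here's a minimal Python sketch (mine, not anything from Penrose) of a program that enumerates every finite string, and therefore eventually prints every arithmetic sentence -- true, false, or undecidable in your favorite axiom system:

```python
from itertools import count, product

# ALPHABET is an assumption of mine -- any finite set of symbols
# rich enough to spell sentences of arithmetic will do.
ALPHABET = "0123456789+*=()<>~&|x "

def all_strings():
    """Yield every finite string over ALPHABET, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# Runs forever by design: every sentence -- provable, unprovable,
# true, or false -- eventually appears in the output.
for sentence in all_strings():
    print(sentence)
```

The catch, of course, is that it has no idea which of its outputs are true -- which is exactly the gap between outputting a truth and proving one.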
Note that nobody is saying that it's definitely the case that computers and humans have the same capabilities -- it MIGHT STILL be the case that humans can "see" truths that computers will never be able to. But this argument involving Gödel's theorem simply doesn't work to show that.
I don’t see the logic of your argument. The fact that you can formulate inconsistent theories - where all falsehoods will be true - does not invalidate Gödel’s theorem. How does the fact that I can take the laws of basic arithmetic and add the axiom “1 = 0” to my system mean that Gödel doesn’t apply to basic arithmetic?
Gödel's theorem only applies to consistent systems. From Wikipedia[1]:
> First Incompleteness Theorem: Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F.
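Or, stated compactly in symbols (my own paraphrase, with the standard hypothesis that F is effectively axiomatized made explicit):

$$
F \text{ consistent, effectively axiomatized, arithmetically adequate}
\;\Longrightarrow\;
\exists\, G_F :\; F \nvdash G_F \ \text{ and }\ F \nvdash \lnot G_F
$$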
If a system is inconsistent, the theorem simply doesn't have anything to say about it.
All this means is that an "inconsistent" program is free to output unprovable truths (and obviously also falsehoods). There's no great insight here, other than trivially refuting Penrose's claim that "there are truths that no computer can ever output".
You’re equating computer programs producing “wrong results” with inconsistency - a technical property of formal logic systems. That is not what inconsistency means. An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it. Such formalizations are simply not relevant to the discussion or the argument.
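Concretely, under any classical logic (which includes the principle of explosion) such a system collapses into something like this toy sketch of mine -- inconsistent_prover is a made-up name, not real theorem-prover code:

```python
def inconsistent_prover(statement: str) -> str:
    """Toy 'prover' whose axioms contain a contradiction.

    By the principle of explosion, the contradiction entails
    everything, so every query gets the same useless answer.
    """
    return "yes that's true"

print(inconsistent_prover("1 = 0"))      # yes that's true
print(inconsistent_prover("1 + 1 = 2"))  # yes that's true
```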
I think much of the confusion arises from mixing up the object language (computer systems) and the meta language. Fairly natural, since the central “trick” of Gödel’s proof is to allow statements at the meta level to be expressed using the formal system itself.
> An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it.
That's only true if you make the program answer by following the rules of some logic that contains the principle of explosion. Not all systems of logic are like that. A computer could use fuzzy logic. It could use a system we haven't thought of yet.
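For example, in a fuzzy logic a contradiction just gets a low truth degree instead of entailing everything. A toy sketch, assuming Zadeh's min/max connectives (f_not and f_and are my own helper names):

```python
# Toy fuzzy-logic connectives (Zadeh's min/max operators);
# truth values are degrees in [0, 1].
def f_not(a: float) -> float:
    return 1.0 - a

def f_and(a: float, b: float) -> float:
    return min(a, b)

p = 0.75                            # "P" is mostly true
contradiction = f_and(p, f_not(p))  # P AND NOT P has degree 0.25

q = 0.1                             # some unrelated statement

# Under classical logic the contradiction would entail q outright.
# Here it merely has degree 0.25, and q keeps its own value.
print(contradiction, q)             # 0.25 0.1
```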
You're imposing constraints on how a computer should operate, and at the same time allowing humans to "think" without similar constraints. If you do that, you don't need Gödel's theorem to show that a human is more capable than a computer -- you've just built computers that way.
I’m not imposing any constraints - the point is that inconsistent formalizations are not interesting or relevant to the argument, no matter what system of rules you look at. This has nothing to do with any particular formalism. I think the difficulty here is that words like completeness and inconsistency have very specific meanings in the context of formal logic - which do not match their use in everyday discussion.
I think we're talking past each other at this point. You seem to have brushed past without acknowledging my point about systems without the principle of explosion, and I'm afraid I must have missed one or more points you tried to make along the way, because what you're saying doesn't make much sense to me anymore.
This is probably a good point to close the discussion -- I'm thankful for the cordial talk, even if we ultimately couldn't reach common ground.
Yes! I think this medium isn’t helpful for understanding here, but it’s always pleasant to disagree while remaining civil. It doesn’t help that I’m trying to reply on my phone (I’m traveling at the moment) - an environment which isn’t conducive to subtle understanding. All the best to you!