Maybe we can neglect QM when talking about how the brain forms consciousness, and maybe we can't, but geckos' sticky toes have nothing to do with the matter.
If you want to, say, upload your mind to a "somewhat faulty" neural net, I won't try to stop you.
But however faulty (or random) you design a symbolic logic processor to be, it will still be limited to manipulating propositions solely in terms of their algorithmic syntax, devoid of any meaningful interpretation. This is why such devices cannot confirm the truth of Godel's propositions: in order to do so, you must comprehend the semantic meaning of those propositions.
Far from "showing that we can't exist", the fact is that we humans can indeed understand and contemplate the implications of Godel's proof. And that is one of the essential abilities that distinguish our thought processes from the algorithmic machinations of computing devices.
And your proof of those two baseless assertions is...?
http://www.cl.cam.ac.uk/~lp15/Pages/G%C3%B6del-slides.pdf http://www.cl.cam.ac.uk/~lp15/Pages/G%C3%B6del-ar.pdf
The first incompleteness theorem has been formalized at least three times, the first of them in 1986.
I am not qualified to check his work -- it is still under review by other mathematicians -- but it's not the sort of endeavor that is facially absurd, like a new try at squaring the circle.
It's not my proof, it's Godel's. It's only when you take the time to understand it that the significance of his insight becomes clear.
a) You wrote:
One proof of Godel's Second Incompleteness Theorem involves the construction of a logical proposition that is self-evidently true, yet cannot be constructed by any algorithmically consistent symbolic logic processor.
It sounds like you're indicating something that is self-evidently true, but not describable in formal terms. In that case how can one actually say it is definitely true? How could a human, let alone a computer, know with certainty that it was true? Am I not understanding something? Are there any examples of this (that I could halfway understand)?
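The closest thing I can find to a concrete example is the Gödel sentence, which (as I understand it) actually arises in the proof of the first incompleteness theorem. A standard sketch, in notation I'm borrowing rather than anything from the parent post:

```latex
% Prov_F(x) says: x is the Gödel number of a sentence provable in system F.
% The diagonal lemma yields a sentence G_F such that
F \vdash \; G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
% If F is consistent, then F does not prove G_F; hence what G_F asserts
% ("I am not provable in F") is true, yet unprovable within F.
```

Note that "we can see G_F is true" already assumes F is consistent, which is itself something F cannot prove about itself (that's the second theorem).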
b) A lot of things we take as "self-evidently true" are actually pretty foggy.
e.g. for starters, Descartes' "cogito ergo sum" requires "I" to be some kind of irreducible entity... Which it is not. The molecules of my body are not the same as last year. The gross structure has remained much the same, but the brain has changed, and "I" think differently due to my new experiences. When you get down to it, "I" am a loosely defined structure with a loosely defined set of behaviors. My memory is (arguably, somewhat) contiguous, but that doesn't prove anything; the "I" looking out through my eyes doesn't have to be the same as the one looking through my eyes yesterday.
My behaviors exist. My memories exist. My body exists. But "I", it seems to me, is a convenient abstraction, like Newtonian physics.
To be honest, I don't see how anything can be accepted as 100% certain, let alone on an intuitive basis.
(OT: I also realize that what I describe above is dangerously close to a kind of nihilism. Still working that one out!)
c) Here's where I really don't get it:
As far as we can tell, the human mind is a property of a purely physical mechanism. (How anything can be not purely physical is another question, but anyway...) Which means there's no reason an artificial one couldn't, in theory, be created. Or, for that matter, emulated on a Turing-complete computer of sufficient processing power. (Even down to QM related glitches, which are unavoidable in any system.)
We're going way beyond "intelligence as an emergent property of carbon" here. This is basically an argument for the existence of the soul. Which would be okay, except that it conflicts hugely with observation. Cognitive maturation coincides with brain growth, brain injury causes cognitive deficits, treatment of psychiatric conditions is reflected in changes in brain structure, etc. etc.
Could be I'm misunderstanding your argument, though. Is it possible to explain the Second Incompleteness Theorem in anything approaching lay terms?
BTW, believe me when I say I'd love to buy your argument (as I'm understanding it). But it seems a little too good to be true.
http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec
The anti-mechanistic argument is not even close to being proven by his theorems. My guess is that just like Kurzweil doesn't like the thought of dying, there are a bunch of otherwise smart people out there who feel icky about the notion of being clockworks...
Second... Godel's lesser anti-mechanistic argument requires that the computational mind be finite. There is every reason to believe that a self modifying neural net is a non-finite state machine. Yes it is being computed by a finite calculator, but it itself is an abstraction on top of an abstraction. The abstraction itself is non-finite unless the universe it is observing is itself finite.
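As a purely illustrative toy example (my own, with made-up names and parameters, not anything from this thread): a net whose weights change on every input has, under the usual real-valued abstraction, a state that is a point in R^n rather than one of finitely many symbols, so the abstraction keeps visiting fresh states even though a finite machine is computing it:

```python
import numpy as np

def hebbian_step(weights, x, lr=0.01):
    """One self-modifying update: the 'program' (the weights) changes with each input."""
    y = np.tanh(weights @ x)                  # activation of each unit
    weights = weights + lr * np.outer(y, x)   # Hebbian weight change
    return weights, y

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))   # initial weights
x = rng.standard_normal(4)        # a fixed input, repeated
states = set()
for _ in range(100):
    w, y = hebbian_step(w, x)
    states.add(w.tobytes())       # record the exact weight state after each step

print(len(states))                # every update produced a previously unseen state
```

Of course the floating-point implementation is still technically a (huge) finite-state machine; the point is only that the useful level of description is the real-valued abstraction, not the bit pattern.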
Lastly, and admittedly this is my own assertion from observation, there are very few processes going on inside a neuron, at a scale large enough to matter to thinking, that cannot be abstracted away to a floating-point number with a noise-generating randomizer. In most instances I do not see the randomizer as being helpful to the process.
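A minimal sketch of that abstraction (function names and constants are mine, purely illustrative): a neuron reduced to a weighted sum squashed into a single floating-point activation, with an optional Gaussian noise term standing in for the unmodeled sub-cellular processes:

```python
import math
import random

def neuron(inputs, weights, bias=0.0, noise_sd=0.0, rng=random):
    """Abstracted neuron: weighted sum -> sigmoid activation in (0, 1).
    noise_sd > 0 adds Gaussian noise standing in for sub-cellular randomness."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    if noise_sd > 0:
        z += rng.gauss(0.0, noise_sd)
    return 1.0 / (1.0 + math.exp(-z))

# With noise_sd=0 the neuron is fully deterministic:
a = neuron([1.0, -0.5], [0.8, 0.4], bias=0.1)   # z = 0.7, so a = sigmoid(0.7)
```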
No, Godel's proposition is explicitly and properly formed in the language of the symbolic logic processor, and human beings can confirm it to be true (because we understand what it means). Nevertheless, Godel proved that such propositions cannot be confirmed true by any algorithmically consistent symbolic logic processor.
"As far as we can tell, the human mind is a property of a purely physical mechanism... Which means there's no reason an artificial one couldn't, in theory, be created."
LOL at the conflation of human ignorance with godlike hubris.
"Is it possible to explain the Second Incompleteness Theorem in anything approaching lay terms?"
Yes, there are colloquial English versions that you can read and follow yourself (e.g. Hofstadter and Penrose). When you actually do so, you will witness your own mind experiencing an insight that no algorithmically consistent symbolic logic processor can replicate.
I also don't think that Gödel's theorems have the limitation you think they do for artificial intelligence, but I'm not an expert. Google tells me experts seem to be arguing both sides still.
It is not the proven assertions of the theorems themselves that demonstrate the inability of an algorithmically consistent symbolic logic processor to fully replicate human thought processes. It is the exclusively human act of comprehending the truth of the propositions specified by Godel's theorems that demonstrates how human thought processes differ fundamentally from the simulations of Turing Machines.
"There is every reason to believe that a self modifying neural net is a non-finite state machine."
Ha ha, more godlike hubris from an apparently non-finite source of such faith-based assertions.
Semantic meaning comes from associating words with their lived experience values. When we have built a strong AI, all of its symbology will have come to it through sensed experience and not through human programming. We will have simply created the substrate on which it flourished.
I believe that we will absolutely build a strong AI within the next couple of decades. I also believe that we will absolutely not have any control over it nor ever have a full understanding of how it functions. Neither will it.
Also, do please stop throwing around accusations of "godlike hubris" at people. It's rude, and the humour value goes away quickly.
@Oren: I've always figured that strong AI will be extremely hard to build. I mean, it took almost a billion years of biology bootstrapping itself to get just a few highly intelligent species. Evolving computer/software design is only relatively faster.
@lishlash: At this point I don't know if you're arrogant in your superior knowledge, or just trolling, but either way being impolite is not going to sway people. I'm interested in learning, not exchanging insults.
(Also, as I said, I would love to believe things from your viewpoint. It being true would mean a much happier future, I think.)
Anyway, it strikes me that it's much easier to alter the way we think than to create something from scratch that thinks. Neuroscience is going to blur the line between psychiatry and mind control quite a bit in the coming years... Things could get extremely ugly.
I used these considerations from the late '60s in my arguments against those who proposed that human-equivalent AI was soon to come - or that brains were just wet-ware computers. These facts, along with recent research on the internal processing of neurons, related to microtubules and possibly quantum involvement, make that line of argumentation that much more persuasive. But we will get there eventually, and build a lot of neat stuff based on what we DO know in the meantime.
I suspect that the most logical route to AI will not be uploading directly, but via augmentation, internal with chips and sensors and external via the interpersonal cloud that is emerging. At some point, it won't matter if the last of our bio-mind dies, because so much of the processing will have already been shifted to non-bio systems.