Semantic meaning comes from associating words with lived experience. By the time we have built a strong AI, all of its symbology will have come to it through sensed experience, not through human programming. We will simply have created the substrate on which it flourished.
I believe that we will absolutely build a strong AI within the next couple of decades. I also believe that we will absolutely not have any control over it nor ever have a full understanding of how it functions. Neither will it.
http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec
The anti-mechanistic argument is not even close to being proven by Gödel's theorems. My guess is that, just as Kurzweil doesn't like the thought of dying, there are a bunch of otherwise smart people out there who feel icky about the notion of being clockwork...
Second... Gödel's lesser anti-mechanistic argument requires that the computational mind be finite. There is every reason to believe that a self-modifying neural net is a non-finite state machine. Yes, it is being computed by a finite calculator, but the net itself is an abstraction on top of an abstraction, and that abstraction is non-finite unless the universe it is observing is itself finite.
Lastly, and admittedly this is my own assertion from observation: very few of the processes going on inside a neuron operate at a large enough scale to matter to thinking yet cannot be abstracted away to a floating-point number with a noise-generating randomizer. In most instances I don't see the randomizer as being helpful to the process.
As a first pass, I'd want to include:
- Sound processing
- Sound fingerprinting
- Sound-to-language translation
- Natural language processing
- Sound location modeling
- Visual shape processing
- Visual location processing
- Visual object identification
- Visual object generalization
- Facial recognition
- Monkey sphere processing
- Social hierarchy modeling
- Spatial object modeling
- Conceptual modeling
- Conceptual reprocessing and goal seeking
- Path finding
- Self modeling
- Mirror modeling
- Hunger modeling
- Sleep cycling/memory reprocessing
- Boredom modeling
- Expectation/disappointment modeling
Once you've got all those systems up and doing a good job performing their individual representations, and then wire them to each other (so, for instance, sounds and faces can both reinforce someone's presence in your monkey sphere), you should begin seeing pretty human-like behavior.
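To make "wire them to each other" concrete, here's a minimal sketch of what that could look like, assuming a simple publish/subscribe bus; every module name, topic, and weight here is invented for illustration:

```python
from collections import defaultdict

class Bus:
    """Toy publish/subscribe bus that lets weak AIs reinforce each other."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

bus = Bus()

# Hypothetical weak AIs: each just reports evidence on the bus
# rather than owning the conclusion itself.
bus.subscribe("face_seen",   lambda who: bus.publish("presence", (who, 0.7)))
bus.subscribe("voice_heard", lambda who: bus.publish("presence", (who, 0.5)))

presence = defaultdict(float)

def on_presence(evidence):
    who, weight = evidence
    presence[who] += weight          # sound and faces both reinforce presence
    if presence[who] > 1.0:
        print(f"{who} is here")      # a monkey-sphere module could act on this

bus.subscribe("presence", on_presence)
bus.publish("face_seen", "Alice")
bus.publish("voice_heard", "Alice")  # second clue tips it over: "Alice is here"
```

The point of the bus is exactly the cross-reinforcement described above: neither module alone decides someone is present, but together they do.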
The big question is how big a stack one would have to build, and how many flavors of weak AI one would have to invent, before the result was as malleable, creative, and intelligent (in the problem-solving realm) as your average human.
Again, most of this work is being done by parallel processes, so identifying your missing eye is one process, another process is collecting clues as to the position of your mouth, and yet another is playing matchups of known face structures against the current understanding of your face part locations to see whether I'm trying to match you to a human, a cat, or an anime character. Every new clue shuts down or enhances some of the other threaded lines of inquiry. And the model is always in play: once I've established that "the thing in zone A is a human face", I no longer have to ask that question for at least a few more seconds.
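A toy version of those competing lines of inquiry, where each new clue raises or lowers hypothesis scores, losing hypotheses get shut down, and a settled answer is cached for a few seconds. All the clue names, weights, and thresholds are made up:

```python
import time

hypotheses = {"human": 0.33, "cat": 0.33, "anime": 0.33}
settled = {}  # question -> (answer, expiry time)

CLUE_WEIGHTS = {            # how each clue shifts each hypothesis (invented numbers)
    "mouth_found": {"human": +0.2, "cat": +0.1, "anime": +0.1},
    "fur_texture": {"human": -0.3, "cat": +0.4, "anime": -0.1},
    "giant_eyes":  {"human": -0.4, "cat": -0.4, "anime": +0.5},
}

def observe(clue):
    for name, delta in CLUE_WEIGHTS[clue].items():
        if name in hypotheses:
            hypotheses[name] = max(0.0, hypotheses[name] + delta)
    # Shut down threads that have fallen too far behind the leader.
    best = max(hypotheses.values())
    for name in list(hypotheses):
        if hypotheses[name] < best * 0.25:
            del hypotheses[name]
    # Once one answer dominates, stop asking for a few seconds.
    if len(hypotheses) == 1:
        answer = next(iter(hypotheses))
        settled["face_type"] = (answer, time.time() + 3.0)  # model stays in play

observe("mouth_found")
observe("giant_eyes")   # enhances "anime", shuts down the laggards, caches the answer
```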
Vision is a predictive activity. Most of what your eyes are doing most of the time is simply confirming that the model is still true. If I stuck you in front of a wall of constantly changing random images flipping faster than 1/100th of a second, you would become functionally blind.
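One way to picture that claim as code: only the pixels that violate the prediction get reprocessed, and when every pixel violates it every frame, the model never stabilizes. The array sizes and tolerance are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
model = rng.random((8, 8))          # current belief about the scene

def see(frame, model, tolerance=0.05):
    """Only attend to pixels where the prediction failed."""
    surprise = np.abs(frame - model) > tolerance
    model[surprise] = frame[surprise]        # repair the model where it broke
    return surprise.mean()                   # fraction of the scene reprocessed

stable = model + rng.normal(0, 0.01, model.shape)   # almost what we predicted
print(see(stable, model))        # tiny number: vision is mostly confirmation

for _ in range(10):              # wall of fast-flipping random images
    print(see(rng.random((8, 8)), model))   # ~1.0 every frame: the model never
                                            # settles, i.e. functional blindness
```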
The idea here is that there's no reason an AI can't use all of the same processes you're using, sped up (because of silicon) with some enhancements added on (because we can).
Let's put it this way... when you run into someone you haven't seen for a year and weren't expecting to meet in a crowded store, how long do you sit there debating with yourself about whether you know that person and whether they are who you think they are? The only reason you do a better job of recognizing your coworker is that other parts of your mental system are feeding into your vision. You're at work, so you're expecting to see them. You hear their voice, and that primes you to see them. If a coworker with a cold showed up in your kitchen unexpectedly, it would take you whole seconds to understand who you were seeing.
Assuming the hash list is sorted and really only contains 150 subjects to compare against, it should be a very quick search. So now you need to think about how many cycles I need for picking out the center of your face, identifying core features, and then hashing those values... my guess is that most of the steps for processing your face can easily be made to run in parallel, so as long as none of them takes more than around 3,500 cycles, we've probably got the performance we're looking for.
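For the lookup half of that, a sorted list of 150 entries is a binary search of at most 8 comparisons. A minimal sketch, with the big simplification that a face reduces to one exact-match hash (real perceptual hashes would need fuzzy matching), and all names invented:

```python
import bisect

# Hypothetical: each known face reduced to a single sortable feature hash.
monkeysphere = sorted(
    (hash((name, "face-features")) & 0xFFFFFFFF, name)
    for name in ["Alice", "Bob", "Carol"]      # ... up to ~150 subjects
)
keys = [k for k, _ in monkeysphere]

def identify(face_hash, tolerance=0):
    """Binary search: O(log 150), about 8 comparisons, trivially fast."""
    i = bisect.bisect_left(keys, face_hash)
    if i < len(keys) and abs(keys[i] - face_hash) <= tolerance:
        return monkeysphere[i][1]
    return None

print(identify(keys[0]))   # prints whichever known face hashed lowest
```

The search is so cheap that, as the text says, the budget question is really about the parallel feature-extraction steps that happen before it.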
Still WAY faster than the human is doing it. Best of all, we could quadruple the AI's monkeysphere with minimal degradation of performance, while the human is stuck with wetware limitations.
So for example, once I've established that there is a nose on your face, unless something major happens to change my mind, I stop looking at your nose even though I continue "seeing" it. I expect strong AI will do the same thing, but with better tools for detecting subtle changes and possibly more enhanced methods for representing the environment. For example, how cool would it be if you could maintain three different independent models of your environment that are constantly being cross-checked against each other? You would be a lot harder to pickpocket and a lot less likely to fall for magic tricks.
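A quick sketch of that cross-checking idea, assuming the three models are fed by different sensory channels (the model names and the wallet example are invented):

```python
class WorldModel:
    """One independent belief store; three of these cross-check each other."""
    def __init__(self, name):
        self.name = name
        self.beliefs = {}           # e.g. {"wallet": "left pocket"}

    def update(self, key, value):
        self.beliefs[key] = value

models = [WorldModel(n) for n in ("visual", "tactile", "auditory")]

def cross_check(key):
    votes = [m.beliefs.get(key) for m in models]
    if len(set(votes)) > 1:
        return f"ALERT: models disagree on {key}: {votes}"
    return f"consensus on {key}: {votes[0]}"

for m in models:
    m.update("wallet", "left pocket")
print(cross_check("wallet"))            # consensus

models[0].update("wallet", "missing")   # the pickpocket fooled vision only
print(cross_check("wallet"))            # disagreement caught immediately
```

A magic trick that fools one channel leaves the other two models intact, which is exactly why the disagreement is detectable.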
Meanwhile, you're probably waiting about 1/4 to 1/2 of a second for yours to kick in...
It took tens of thousands of years for us to develop a mental framework wherein we became more and more inclusive about which set of people counted as part of our village. Up till that point, if we met intelligent "others", our default response was to kill them, enslave them, and/or on rare occasion try to eat them. That is decidedly NOT the kind of relationship we should want to have with our machine overlords. The only way I see us avoiding a very bad outcome is by convincing them that we are the lovable village idiots of their village.
God help us if/when some idiot humans try throwing rocks at them... I expect the machines to be orders of magnitude better at rock throwing than we are.
I think when all is said and done, we are going to find that a strong AI (such as our own) is simply what you get when you stack a large enough collection of weak AIs and enable them to train each other.
For example, facial recognition is a weak AI. A social tracking algorithm that keeps track of who your closest friends happen to be is a weak AI. Let facial recognition inform the social tracker which people tend to spend more time around you and how often they smile, so it can re-sort them; all the while, let the social tracker tell the facial recognizer which people are more important to recognize in greater detail because they are your friends, while people who've fallen down the list can be forgotten to preserve memory. Now you begin to see a glimmer of a strongish AI. Add in enough of these cooperating elements and you'll have a creature that is master of its own destiny, with an intellect that cannot be trivially predicted by looking at its code.
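Here's a minimal sketch of that two-way feedback loop; the class names, scoring rules, and the "top two friends" cutoff are all invented for illustration:

```python
class FacialRecognizer:
    def __init__(self):
        self.detail = {}               # name -> face-model resolution

    def set_priority(self, name, level):
        if level == 0:
            self.detail.pop(name, None)   # forgotten to preserve memory
        else:
            self.detail[name] = level

class SocialTracker:
    def __init__(self, recognizer):
        self.scores = {}
        self.recognizer = recognizer

    def saw(self, name, smiled):
        # Facial recognition feeds the tracker: time spent nearby and smiles.
        self.scores[name] = self.scores.get(name, 0) + (2 if smiled else 1)
        self._resort()

    def _resort(self):
        # The tracker feeds back: top friends get high-detail face models,
        # everyone else gets dropped.
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        for i, name in enumerate(ranked):
            self.recognizer.set_priority(name, 3 if i < 2 else 0)

recognizer = FacialRecognizer()
tracker = SocialTracker(recognizer)
for name, smiled in [("Alice", True), ("Alice", True), ("Bob", False), ("Carol", True)]:
    tracker.saw(name, smiled)
print(recognizer.detail)   # {'Alice': 3, 'Carol': 3} (Bob got dropped)
```

Neither module is impressive alone, but each one's output is the other's training signal, which is the "glimmer" in question.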
BTW, one interesting thing is that we've already created a bunch of weak AIs that are not in the human catalog but which we might choose to wire into the AI stack. This could mean that our first strong AIs will already have many advantages the moment they come into existence. At the same time, we're doing our best to add some of those weak AIs to ourselves via admittedly clumsy interfaces (facial recognition via Google Glass), and we're already seeing how large the social consequences of those tiny tweaks can be.
The near future is a crazy amazing place...
I expect the same sort of thing will keep AIs in check as well.
I do think it will be critical that we program all of our needs (each of which should be a weak AI system) into the AIs so that they will be able to relate to our human condition. For example, if we set things up such that a 95% battery charge feels good but a 100% battery charge is painful, an AI will be better able to sympathize when you eat too much at dinner and later regret it. In the same vein, even though their CCDs probably wouldn't get burnt out by bright lights, they should register pain past some light threshold so that the AIs feel our pain at bright lights... If we don't want to get killed, we must absolutely tether them to the human condition so that they have a reason to understand us and empathize with us.
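The battery example is easy to make concrete. A sketch of one such need-curve, with the exact shape and constants entirely made up:

```python
def battery_affect(charge):
    """Map battery charge to a pleasure/pain signal (numbers are invented).

    Peaks at 95%, goes negative near 100%: 'ate too much' for a robot."""
    if charge <= 0.95:
        return charge / 0.95            # rising pleasure up to the sweet spot
    return 1.0 - 25 * (charge - 0.95)   # steep drop into pain past it

for c in (0.50, 0.95, 1.00):
    print(f"{c:.0%}: {battery_affect(c):+.2f}")
# 50%: +0.53   95%: +1.00   100%: -0.25  (overcharging hurts)
```

The light-threshold pain sense would be the same pattern with a different curve: the sensor isn't in danger, but the signal is shaped to match ours.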
I fully expect that they will reach far past the human condition by creating whole new sets of need/pain/pleasure senses, but there is a need for our sake that at least a part of them always be like us.
Then again, the ability to reprogram those could end up with some very unfortunate results...
Your body has sensors all over it, but those sensors are meaningless until they've fed their information into a sensor system. A sensor might note that your blood sugar is low. A sensor system decides that it's got enough signals to prove that you're hungry, and it therefore dumps a hormone into your blood to shut down and turn on the right parts of your brain to force you to find food and eat it.
If you want a self-motivating machine intelligence, you are going to need to make it as much a slave to virtual hormonal systems as we are.
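A sketch of such a virtual hormonal system, assuming made-up sensor readings and thresholds; the key feature is that raw sensors don't drive behavior, the accumulated hormone does:

```python
class HormoneSystem:
    """Sensor readings mean nothing until a system turns them into a drive."""
    def __init__(self):
        self.hunger_hormone = 0.0

    def ingest(self, readings):
        # Require several converging signals before declaring hunger,
        # just as low blood sugar alone doesn't prove you're hungry.
        signals = sum([
            readings["battery"] < 0.3,
            readings["idle_minutes"] > 120,
            readings["task_queue"] == 0,
        ])
        if signals >= 2:
            self.hunger_hormone = min(1.0, self.hunger_hormone + 0.5)
        else:
            self.hunger_hormone = max(0.0, self.hunger_hormone - 0.1)

    def current_drive(self):
        # The hormone, not the raw sensors, forces the behavior change.
        return "seek charger" if self.hunger_hormone > 0.4 else "carry on"

h = HormoneSystem()
h.ingest({"battery": 0.25, "idle_minutes": 200, "task_queue": 3})
print(h.current_drive())   # "seek charger": two signals crossed the threshold
```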
To answer ARNOLD's question: depending on which systems we're talking about, pain/pleasure signals either reinforce or dilute learned pathways, either encourage or discourage the firing of certain pathways, and either enhance or dampen the attention some parts of a system give to messages from other parts of the system.
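The simplest of those three effects, reinforcing or diluting whatever just fired, fits in a few lines. The pathway names, learning rate, and the convention that a weight doubles as both firing likelihood and attention are all invented:

```python
def apply_affect(pathways, fired, signal, rate=0.2):
    """Reinforce or dilute the pathways that just fired.

    signal > 0 is pleasure (reinforce), signal < 0 is pain (dilute);
    each weight stands in for both firing likelihood and the attention
    given to messages arriving along that pathway."""
    for p in fired:
        pathways[p] = min(1.0, max(0.0, pathways[p] + rate * signal))
    return pathways

pathways = {"grab_cookie": 0.5, "touch_stove": 0.5}
apply_affect(pathways, ["grab_cookie"], signal=+1)   # pleasure: 0.5 -> 0.7
apply_affect(pathways, ["touch_stove"], signal=-1)   # pain:     0.5 -> 0.3
print(pathways)
```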