
The Singularity Is Further Than It Appears

[Image: Time magazine cover, "The Year We Become Immortal"]

Are we headed for a Singularity? Is it imminent?

I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.

I think it's more complex than that, however, and depends in part on one's definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it's not on the immediate horizon.

VERNOR VINGE'S INTELLIGENCE EXPLOSION
My first experience with the term Singularity (outside of math or physics) comes from the classic essay by science fiction author, mathematician, and professor Vernor Vinge, The Coming Technological Singularity.

Vinge, influenced by the earlier work of I.J. Good, wrote this, in 1993:


Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
[...]
The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.
[...]
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale.

That last quote is the key one. Vinge envisions a situation where the first smarter-than-human intelligence can make an even smarter entity in less time than it took to create itself. And that this keeps continuing, at each stage, with each iteration growing shorter, until we're down to AIs that are so hyper-intelligent that they make even smarter versions of themselves in less than a second, or less than a millisecond, or less than a microsecond, or whatever tiny fraction of time you want.

This is the so-called 'hard takeoff' scenario, also called the FOOM model by some in the singularity world. It's the scenario where, in the blink of an AI, a 'godlike' intelligence bootstraps into being, either by upgrading itself or by being created by successive generations of ancestor AIs.

It's also, with due respect to Vernor Vinge, of whom I'm a great fan, almost certainly wrong.

It's wrong because most real-world problems don't scale linearly. In the real world, the interesting problems are much much harder than that.

Consider chemistry and biology. For decades we've been working on problems like protein folding, simulating drug behavior inside the body, and computationally creating new materials. Computational chemistry started in the 1950s. Today we have literally trillions of times more computing power available per dollar than was available at that time. But it's still hard. Why? Because the problem is incredibly non-linear. If you want to model atoms and molecules exactly you need to solve the Schrödinger equation, which is so computationally intractable for systems with more than a few electrons that no one bothers.

[Figure: Molecular modelling computational complexity]

Instead, you can use an approximate method. This might, of course, give you an answer that's wrong (an important caveat for our AI trying to bootstrap itself), but at least it will run fast. How fast? The very fastest methods (and also, sadly, the most limited and least accurate) scale as N^2, which is still far worse than linear. By analogy, if designing intelligence is an N^2 problem, an AI that is 2x as intelligent as the entire team that built it (not just a single human) would be able to design a new AI that is only 70% as intelligent as itself. That's not escape velocity.
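
To make the arithmetic concrete, here's a minimal sketch of that scenario - my own toy model, not anything from the essay's sources - assuming that designing an AI of intelligence N costs roughly N^2 units of effort while a designer of intelligence I can bring roughly I units of effort to bear, so the best successor it can build alone has intelligence sqrt(I) (measured relative to the original design team):

```python
# Toy model of recursive self-improvement when intelligence design is an
# N^2 problem (my own illustration of the essay's arithmetic, not anyone's
# actual research code).
#
# Assumption: building intelligence N costs ~N^2 effort; a designer of
# intelligence I can muster ~I effort; so its best solo successor has
# intelligence sqrt(I). Intelligence is relative to the original team (=1.0).

import math

def successor(intelligence: float) -> float:
    """Best intelligence a designer of the given intelligence can build alone."""
    return math.sqrt(intelligence)

ai = 2.0  # the first AI: twice as intelligent as the team that built it
for generation in range(1, 9):
    next_ai = successor(ai)
    print(f"gen {generation}: designer {ai:.3f} -> successor {next_ai:.3f} "
          f"({next_ai / ai:.0%} of its designer)")
    ai = next_ai

# The first successor comes out at ~1.41, i.e. ~70% as intelligent as its
# designer, and the sequence converges toward 1.0: no runaway takeoff.
```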

We can see this more directly. There are already entities with vastly greater than human intelligence working on the problem of augmenting their own intelligence. A great many, in fact. We call them corporations. And while we may have a variety of thoughts about them, not one has achieved transcendence.

Let's focus on one very particular example: the Intel Corporation. Intel is my favorite example because it uses the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs! (And also to create better software for designing CPUs.) Those better CPUs will run the better software to make the better next generation of CPUs. Yet that feedback loop has not led to a hard takeoff scenario. It has helped drive Moore's Law, which is impressive enough. But the time period for doublings seems to have remained roughly constant. Again, let's not underestimate how awesome that is. But it's not a sudden transcendence scenario. It's neither a FOOM nor an event horizon.

And, indeed, should Intel, or Google, or some other organization succeed in building a smarter-than-human AI, it won't immediately be smarter than the entire set of humans and computers that built it, particularly when you consider all the contributors to the hardware it runs on, the advances in photolithography techniques and metallurgy required to get there, and so on. Those efforts have taken tens of thousands of minds, if not hundreds of thousands. The first smarter-than-human AI won't come close to equaling them. And so, the first smarter-than-human mind won't take over the world. But it may find itself with good job offers to join one of those organizations.

DIGITAL MINDS: THE SOFTER SINGULARITY
Recently, the popular conception of what the 'Singularity' means seems to have shifted. Instead of a FOOM or an event horizon - the definitions I saw most commonly discussed a decade ago - the talk is now more focused on the creation of digital minds, period.

Much of this has come from the work of Ray Kurzweil, whose books and talks have done more to publicize the idea of a Singularity than probably anyone else, and who has come at it from a particular slant.

Now, even if digital minds don't have the ready ability to bootstrap themselves or their successors to greater and greater capabilities in shorter and shorter timeframes, eventually leading to a 'blink of the eye' transformation, I think it's fair to say that the arrival of sentient, self-aware, self-motivated, digital intelligences with human level or greater reasoning ability will be a pretty tremendous thing. I wouldn't give it the term Singularity. It's not a divide-by-zero moment. It's not an event horizon that it's impossible to peer over. It's not a vertical asymptote. But it is a big deal.

I fully believe that it's possible to build such minds. Nothing about neuroscience, computation, or philosophy prevents it. Thinking is an emergent property of activity in networks of matter. Minds are what brains - just matter - do. Mind can be done in other substrates.

But I think it's going to be harder than many project. Let's look at the two general ways to achieve this - by building a mind in software, or by 'uploading' the patterns of our brain networks into computers.

Building Minds
We're living in the golden age of AI right now. Or at least, it's the most golden age so far. But what those AIs look like should tell you a lot about the path AI has taken, and will likely continue to take.

The most successful and profitable AI in the world is almost certainly Google Search. In fact, in Search alone, Google uses a great many AI techniques. Some to rank documents, some to classify spam, some to classify adult content, some to match ads, and so on. In your daily life you interact with other 'AI' technologies (or technologies once considered AI) whenever you use an online map, when you play a video game, or any of a dozen other activities.

None of these is about to become sentient. None of these is built towards sentience. Sentience brings no advantage to the companies who build these software systems. Building it would entail an epic research project - indeed, one of unknown length involving uncapped expenditure for potentially decades - for no obvious outcome. So why would anyone do it?

[Image: IBM's Watson computer]

Perhaps you've seen video of IBM's Watson trouncing Jeopardy champions. Watson isn't sentient. It isn't any closer to sentience than Deep Blue, the chess-playing computer that beat Garry Kasparov. Watson isn't even particularly intelligent. Nor is it built anything like a human brain. It is very, very fast with the buzzer, generally able to parse Jeopardy-like clues, and loaded full of obscure facts about the world. Similarly, Google's self-driving car, while utterly amazing, is also no closer to sentience than Deep Blue, or than any online chess game you can log into now.

There are, in fact, three separate issues with designing sentient AIs:

1) No one's really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience. My friend Ben Goertzel has a very promising approach, in my opinion, but given the poor track record of past research in this area, I think it's fair to say that until we see his techniques working, we also won't know for sure about them.

2) There's a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn't feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that's extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognize speech, would need to be sentient. It doesn't need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we'll likely prefer it that way.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we'll suddenly be faced with very real ethical issues. Can we turn it off? Would that be murder? Can we experiment on it? Does it deserve privacy? What if it starts asking for privacy? Or freedom? Or the right to vote?

What investor or academic institution wants to deal with those issues? And if they do come up, how will they affect research? They'll slow it down, tremendously, that's how.

For all those reasons, I think the future of AI is extremely bright. But not sentient AI that has its own volition. More and smarter search engines. More software and hardware that understands what we want and that performs tasks for us. But not systems that truly think and feel.

Uploading Our Own Minds
The other approach is to forget about designing the mind. Instead, we can simply copy the design which we know works - our own mind, instantiated in our own brain. Then we can 'upload' this design by copying it into an extremely powerful computer and running the system there.

I wrote about this, and the limitations of it, in an essay at the back of my second Nexus novel, Crux. So let me just include a large chunk of that essay here:

The idea of uploading sounds far-fetched, yet real work is happening towards it today. IBM's 'Blue Brain' project has used one of the world's most powerful supercomputers (an IBM Blue Gene/P with 147,456 CPUs) to run a simulation of 1.6 billion neurons and almost 9 trillion synapses, roughly the size of a cat brain. The simulation ran around 600 times slower than real time - that is to say, it took 600 seconds to simulate 1 second of brain activity. Even so, it's quite impressive. A human brain, of course, with its hundred billion neurons and well over a hundred trillion synapses, is far more complex than a cat brain. Yet computers are also speeding up rapidly, roughly by a factor of 100 every 10 years. Do the math, and it appears that a supercomputer capable of simulating an entire human brain and doing so as fast as a human brain should be on the market by roughly 2035 - 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human's.
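
For what it's worth, here's the back-of-the-envelope version of that projection, using only the numbers quoted in the passage above (a rough sketch of my own, not the author's actual calculation, and it ignores everything messy about real scaling):

```python
# Rough check of the "roughly 2035-2040" estimate using the quoted figures.
import math

slowdown = 600                              # cat-scale run: ~600x slower than real time
cat_neurons, human_neurons = 1.6e9, 1e11
cat_synapses, human_synapses = 9e12, 1e14   # "almost 9 trillion" / "well over 100 trillion"

# Needed speedup: catch up to real time, then scale up from cat to human.
scale_up = max(human_neurons / cat_neurons, human_synapses / cat_synapses)
needed = slowdown * scale_up                # ~37,500x

growth_per_decade = 100                     # "a factor of 100 every 10 years"
years = 10 * math.log10(needed) / math.log10(growth_per_decade)

print(f"needed speedup: ~{needed:,.0f}x, reached in ~{years:.0f} years")
# ~23 years from the cat-scale simulation, which lands in the mid-to-late 2030s.
```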

Now, it's one thing to be able to simulate a brain. It's another to actually have the exact wiring map of an individual's brain to actually simulate. How do we build such a map? Even the best non-invasive brain scanners around - a high-end functional MRI machine, for example - have a minimum resolution of around 10,000 neurons or 10 million synapses. They simply can't see detail beyond this level. And while resolution is improving, it's improving at a glacial pace. There's no indication of being able to non-invasively image a human brain down to the individual synapse level any time in the next century (or even the next few centuries at the current pace of progress in this field).

There are, however, ways to destructively image a brain at that resolution. At Harvard, my friend Kenneth Hayworth created a machine that uses a scanning electron microscope to produce an extremely high resolution map of a brain. When I last saw him, he had a poster on the wall of his lab showing a print-out of one of his brain scans. On that poster, a single neuron was magnified to the point that it was roughly two feet wide, and individual synapses connecting neurons could be clearly seen. Ken's map is sufficiently detailed that we could use it to draw a complete wiring diagram of a specific person's brain.
Unfortunately, doing so is guaranteed to be fatal.

The system Ken showed 'plastinates' a piece of a brain by replacing the blood with a plastic that stiffens the surrounding tissue. He then makes slices of that brain tissue that are 30 nanometers thick, or a few thousand times thinner than a human hair. The scanning electron microscope then images these slices as pixels that are 5 nanometers on a side. But of course, what's left afterwards isn't a working brain - it's millions of incredibly thin slices of brain tissue. Ken's newest system, which he's built at the Howard Hughes Medical Institute, goes even farther, using an ion beam to ablate away 5 nanometer thick layers of brain tissue at a time. That produces scans that are of fantastic resolution in all directions, but leaves behind no brain tissue to speak of.
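
To get a feel for the scale involved, here's a very rough estimate of what imaging a whole human brain this way would produce - my own numbers, assuming a brain volume of about 1.2 litres and roughly 14 cm of tissue to slice, with the 30 nm slices and 5 nm pixels described above:

```python
# Back-of-the-envelope scale of a whole-brain scan at the stated resolution
# (assumed brain volume ~1.2 litres; slice count assumes ~14 cm of tissue).

brain_volume_nm3 = 1.2e-3 * 1e27       # 1.2 litres expressed in cubic nanometres
slice_thickness_nm = 30
pixel_nm = 5

voxel_nm3 = pixel_nm * pixel_nm * slice_thickness_nm
voxels = brain_volume_nm3 / voxel_nm3
slices = 0.14e9 / slice_thickness_nm    # 14 cm cut into 30 nm slices

print(f"~{slices:.1e} slices, ~{voxels:.1e} voxels "
      f"(~{voxels / 1e21:.1f} zettabytes at one byte per voxel)")
# Millions of slices and a zettabyte-class dataset per brain.
```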

So the only way we see to 'upload' is for the flesh to die. Well, perhaps that is no great concern if, for instance, you're already dying, or if you've just died but technicians have reached your brain in time to prevent the decomposition that would destroy its structure.

In any case, the uploaded brain, now alive as a piece of software, will go on, and will remember being 'you'. And unlike a flesh-and-blood brain it can be backed up, copied, sped up as faster hardware comes along, and so on. Immortality is at hand, and with it, a life of continuous upgrades.
Unless, of course, the simulation isn't quite right.

How detailed does a simulation of a brain need to be in order to give rise to a healthy, functional consciousness? The answer is that we don't really know. We can guess. But at almost any level we guess, we find that there's a bit more detail just below that level that might be important, or not.

For instance, the IBM Blue Brain simulation uses neurons that accumulate inputs from other neurons and which then 'fire', like real neurons, to pass signals on down the line. But those neurons lack many features of actual flesh and blood neurons. They don't have real receptors that neurotransmitter molecules (the serotonin, dopamine, opiates, and so on that I talk about throughout the book) can dock to. Perhaps it's not important for the simulation to be that detailed. But consider: all sorts of drugs, from pain killers, to alcohol, to antidepressants, to recreational drugs, work by docking (imperfectly, and differently from the body's own neurotransmitters) to those receptors. Can your simulation take an anti-depressant? Can your simulation become intoxicated from a virtual glass of wine? Does it become more awake from virtual caffeine? If not, does that give one pause?

Or consider another reason to believe that individual neurons are more complex than we believe. The IBM Blue Gene neurons are fairly simple in their mathematical function. They take in inputs and produce outputs. But an amoeba, which is both smaller and less complex than a human neuron, can do far more. Amoebae hunt. Amoebae remember the places they've found food. Amoebae choose which direction to propel themselves with their pseudopods. All of those suggest that amoebae do far more information processing than the simulated neurons used in current research.

If a single celled micro-organism is more complex than our simulations of neurons, that makes me suspect that our simulations aren't yet right.

Or, finally, consider three more discoveries we've made in recent years about how the brain works, none of which are included in current brain simulations.
First, there're glial cells. Glial cells outnumber neurons in the human brain. And traditionally we've thought of them as 'support' cells that just help keep neurons running. But new research has shown that they're also important for cognition. Yet the Blue Gene simulation contains none.

Second, very recent work has shown that, sometimes, neurons that don't have any synapses connecting them can actually communicate. The electrical activity of one neuron can cause a nearby neuron to fire (or not fire) just by affecting an electric field, and without any release of neurotransmitters between them. This too is not included in the Blue Brain model.

Third, and finally, other research has shown that the overall electrical activity of the brain also affects the firing behavior of individual neurons by changing the brain's electrical field. Again, this isn't included in any brain models today.

I'm not trying to knock down the idea of uploading human brains here. I fully believe that uploading is possible. And it's quite possible that every one of the problems I've raised will turn out to be unimportant. We can simulate bridges and cars and buildings quite accurately without simulating every single molecule inside them. The same may be true of the brain.

Even so, we're unlikely to know that for certain until we try. And it's quite likely that early uploads will be missing some key piece or have some other inaccuracy in their simulation that will cause them to behave not-quite-right. Perhaps it'll manifest as a mental deficit, personality disorder, or mental illness. Perhaps it will be too subtle to notice. Or perhaps it will show up in some other way entirely.

But I think I'll let someone else be the first person uploaded, and wait till the bugs are worked out.

In short, I think the near future will be one of quite a tremendous amount of technological advancement. I'm extremely excited about it. But I don't see a Singularity in our future for quite a long time to come.

Ramez Naam is the author of Nexus and Crux. You can follow him at @ramez.

141 Comments

1:

When I think of the Singularity, and sentience, I am reminded of Peter Watts' BLINDSIGHT.

I wonder, in the end, if intelligence and the "Chinese box" will be easier to achieve than true sentience in any form.

2:

I loved Blindsight.

I suspect that self-awareness and 'consciousness' are inevitable features of any intelligence that arose through an evolutionary process.

(But not for one that is designed.)

3:

I love sci-fi and appreciate this article for this. This made me think that when you compare the present day education system to what you are describing here, you can see that we still have a very long way to go unless we are able to create a better model for improving our own intelligence as we evolve. If we consider our children upgrades, we still have not figured this out on a scale that meets the kind of singularity that you describe, so how could AI ever achieve it? Great article!! You have a new fan.

4:

I think that success will come in the form of Minsky’s emotion machine (and by extension Goertzel’s work) before any of the other methods.

In the end I think that we’ll discover that self modifying conceptual engines forcefully separated into layers and driven by outside forces into a desperation to satisfy urges causes intelligence as an emergent property.

Once such an extendable intelligence exists, its ability to replicate and self modify will all but ensure an evolution on minuscule timescales, since it will allow for very rapid augmentation and experimentation on AIs by AIs.

When you can create a building full of scientists simply by hitting CTRL+C, CTRL+V a million times and buying compute units from Amazon/Google, yes, you can almost instantaneously outperform the team that built you.

As far as I can tell, we have more than enough hardware to make this happen now; we’re simply missing the algorithms. Once the first set is created, I expect that the first multitude of improvements will not come from the AI building better hardware but from the AI finding more optimized methods of doing what it does.

My 2c.

5:

I agree with most of your points, except incentive. First, we do things as a species not because they have obvious value, but often simply because we can. Second, there are many qualities of intelligence, and perhaps all could be regarded as commodities in their own right.

Take, for example, innovation and the intuitive leap. While I don't think we've ever observed this capability with the current generation of AI, it should be a benchmark of what we would consider to be a human equivalent.

Now, while innovation on any given problem by any single mind is not a predictable event, it can be made a statistical certainty given enough time and instances. Certainly, the ability to spin up X AI instances, each with a slightly different academic bias, and train them on a problem would have immense value.

6:

Thanks for a very interesting post. One problem with the concept of emergent mind or sentience is that it has never been experimentally proven. While I believe it is true, there are very smart people like Roger Penrose who argue you need something beyond a collection of connected "dumb" neurons to have sentience, which he thinks should be related to quantum effects or quantum gravity or such.

Also, there is potentially a third approach to building an artificial mind, based on genetics. While the brain has 100 billion neurons and zillions of synapses, all this structural information is ultimately encoded in a few hundred, or at most a couple of thousand, genes and some non-coding DNA segments. It might be possible to grow a brain in silico, mimicking the way it grows in utero. Of course, you are going to end up with a newborn brain that needs to be trained, but Charlie has already found a solution to this problem in Saturn's Children.

7:

Great post!

I've had a particular singularity-related question on my mind for some time, and your guest post gives me enough motivation to stop lurking Charlie's blog and ask: how much of Moore's Law is driven by conceptual work, and how much by physical? i.e., how much time is taken up by simulations and by engineers sitting around a conference table, and how much by retooling plants, etc? The greater the share of conceptual work, the larger the impact of digital minds would be, as that portion would be accelerated along with processor speeds.

In the not-at-all-realistic extreme of 100% conceptual work, ignoring all physical limits, merely mortal minds could achieve (actual, mathematical) singularity in computational power after 1/ln(2) doubling periods.

Yeah, I've done the math. Don't judge me.
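
For the curious, here's one way to recover that 1/ln(2) figure, under the commenter's own idealized assumption that the rate of conceptual progress scales linearly with the computing power available to run the designers (a sketch, not anything rigorous):

```latex
\[
\frac{dP}{dt} = \frac{\ln 2}{T}\,P
\qquad \text{(ordinary Moore's Law: compute $P$ doubles every period $T$)}
\]
If the design work itself runs on the hardware, the rate constant scales with $P/P_0$:
\[
\frac{dP}{dt} = \frac{\ln 2}{T\,P_0}\,P^2
\quad\Longrightarrow\quad
\frac{1}{P(t)} = \frac{1}{P_0}\Bigl(1 - \frac{\ln 2}{T}\,t\Bigr),
\]
which diverges ($P \to \infty$) at $t = T/\ln 2 \approx 1.44$ doubling periods.
```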

8:

Has anyone tried to approach this problem from the other end, that is, the deconstruction of a human brain as it dies, or as brain regions fall silent from a stroke, seizure or some other event?

Related to this is: Has anyone tried a systems approach to studying/replicating the brain? The brain does more than one thing ... just as the non-brain organs are better understood in terms of their key roles, i.e., digestion, reproduction, etc., or based on their relatively simpler construction (such as muscles vs. organ tissue).

9:

Nice but I'm pretty sure the AI could design a successor sqrt(2) times as intelligent as itself. Still get convergence rather than singularity

10:

Ramez, I agree with you that creation of a sentient AI is likely much further off than the Kurzweil projection. But, as a practical matter, I'm surprised that you, as the author of Nexus, did not address the near term possibility of super-intelligences created by enhancement of biological entities, including their enhancement by direct networking with each other and with non-sentient systems. Such intelligent networks must be considered to have a kind of sentience, and perhaps a kind of personhood, given that nodes of the network are, in fact, actual sentient beings. It seems to me that Vinge's Rainbow's End (IMO novelistically kind of meh but SFistically brilliant) projected some day to day consequences along this continuum likely to flow from the very technologies and connections we see emerging in our world today. And considering the latest reports of improvements in the technology of customizing DNA nucleotide sequences, a Brave New World meets Rainbow's End scenario has started playing on my internal movie screen. It makes me think about the Fabricants in Cloud Atlas, and how the story of their 'ascent' to personhood connects with the forms oppression takes, including slavery, and forces us to think about the basic moral issues made real when we confront another sentient intelligence, be it organic or artificial. Being able to engender sentience by non-procreative means will inevitably put humankind to what some might consider the biggest test in its evolution. I hope I can get uploaded in time to be around for that.

11:

There's a subtle but important point that often gets glossed over in discussions like this: what do we mean, exactly, by "sentience"? People's gut reaction is always sort of 'I know it when I see it' (or talk to it), but I don't think that's enough when we're talking about a robust research question. Would you be willing to address briefly what you mean?

I suspect you mean systems that:

1. are capable of autonomous behavior in an arbitrary domain (e.g. not restricted to finite-scale problems like driving/chess/Jeopardy),
2. are capable of identifying and taking actions that pursue a particular goal,
3. include a concept of self, alongside other concepts, and
4. have a goal of self-preservation and/or self-improvement.

Item 3 may not be necessary (we may think of animals as being sentient at some level, but lacking a concept of self.) Similarly item 4 may not be necessary (e.g. Asimov's Robots might still be considered sentient if the 3rd law was removed.)

In any event, I think that without really robustly defining sentience it's impossible to make a good estimate of what it would take to build a sentience.

12:

Lots of great comments, folks! I'll reply to as many as time allows for.

bzd314 asks what fraction of the advances in computing power come from conceptual work vs. work that e.g., requires 'retooling plants'.

Right now the prime driver in general purpose computing comes from hardware speedup. And the hardware speedup requires conceptual work, physical experimentation, and retooling plants. Building new fabs is typically the last stage of that, and takes billions of dollars.

That said, as commenter Oren points out, it is possible to optimize software. We've seen optimizations to algorithms improve the performance of specific algorithms substantially.

A simple example: QuickSort, invented in 1960, reduced the time it takes to sort a list from N^2 to N log(N), which is a very substantial speedup for long lists.
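
To put a number on how large that kind of one-time algorithmic win can be, here's a quick sketch (my own illustration, not a benchmark) comparing the two growth rates at a few list sizes:

```python
# N^2 vs. N*log2(N) operation counts at a few list sizes -- the kind of
# one-time win a better algorithm like QuickSort delivers.
import math

for n in (1_000, 1_000_000, 1_000_000_000):
    ratio = (n * n) / (n * math.log2(n))
    print(f"N = {n:>13,}: N^2 is ~{ratio:,.0f}x more operations than N log N")

# Enormous for long lists (tens of millions of times for a billion items),
# but it's a one-off constant-factor gain, not a self-amplifying loop.
```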

A more modern example: In general machine learning algorithms grow more accurate in proportion to the log of the amount of data they're trained on. That means it takes exponentially more training data to get linear improvements in accuracy (another non-linear system). However, we've seen that Deep Learning (an improvement to machine learning) can often produce better accuracy on the same training data.

So software optimization is possible, though it tends to be far more narrow than the speedups one gets from better hardware.

13:

Les Elkind: I'll be talking more about augmenting human abilities in the days ahead!

I do think that runs into some limitations that purely digital entities don't. But I also think it's potentially more feasible in the near term.

14:

Irvanian brings up Penrose and bootstrapping an intelligence from a genome.

On Penrose, I'm afraid I don't know any computer scientist or any neuroscientist who takes his ideas seriously. Obviously he's made great contributions to astrophysics. And it's clear that there are some quantum phenomena in biology (e.g., in photosynthesis). But all the quantum phenomena we've found are incredibly short lived, far shorter than the firing time of a neuron, let alone the time that the formation of a thought takes. And the brain is a highly thermally noisy environment. It's difficult to imagine that quantum entanglement could last there.

On bootstrapping an intelligence from a genome: To do this, we'd have to be able to simulate whole cells accurately, down to the level of the interplay of genes, for at least a couple hundred billion cells (neurons + glia). We are not even able to simulate one human cell accurately today. So I don't see that as a particularly faster route.

15:

Hans Rinderknecht asks what I mean by sentient, and very helpfully offers a framework. It's an excellent question.

What I mean is something along these lines: An entity that..

1) Has a flexible general-purpose intelligence.

2) Has an awareness of self and sense of self.

3) Has autonomy in the sense that its behavior, interests, attention, and goals change over time, and are not constrained to what it's initially programmed with.

16:

To me a superhuman intelligence is "just" an artificial mind approximately equivalent to that of a human, but with a deeper stack. I.e., able to hold more concepts in its mind at once. There are probably numerous problems that this kind of intelligence could solve that natural humans cannot...and deeper stacks can be designed to handle the subset of problems that any particular stack depth cannot handle.

In this sense given an AI, it is trivial to increase its capabilities. At any particular stack size, there are problems that appear intractable that become tractable at size+1. The rough estimate of the human stack depth is 7 +/- 2. (Note that this varies slightly between people, and also that different people have slightly different depths along each axis of sensory perception.)

Another factor, which is less susceptible to scaling improvements, is skill at chunking. This means that if you can mentally manipulate 7 chunks, a good selection of chunks will allow you to address more complex problems. So far no AI is even approximately as good as a human at chunking their problem space, and I feel that this is one of the main ways in which human intelligence is superior to that of the AIs. This is true to the extent that even with the ability to hold a vastly larger number of chunks in working memory, the AIs cannot compete with human experts in many problem domains.

There are other areas that need much improvement. Active modeling, e.g.

But WHEN an AI has competencies equivalent to a human mind, it will be trivial to scale it to be superhuman. It's not clear, however, when that will happen, and a few basic breakthroughs in organization seem to be needed.

I would put the time estimate as "sometime after 2030", but to get a closer estimate I'd need to be able to predict when some breakthroughs will happen. Remember, the phonograph could have been invented by the classical Greeks (or at least the Hellenistic Greeks), but it wasn't invented until the time of Edison. Being able to build something, in the sense of having all the tools needed, doesn't readily translate into knowing how to do it.

17:

It appears to me that the uploading procedure you describe represents reproduction rather than survival. If you plasticize my brain and take it apart, I will die in the process, and never regain consciousness. You may have one or many entities that remember being me, but they will be different individuals—and if they do share my memories they will regard themselves that way.

It might be rational to go through such a procedure, if I were dying anyway. Survival isn't the only value in life. But what you're describing looks to me like death for me and possible immortality for someone else who resembles me.

(I wouldn't get in a Star Trek transporter, either.)

18:

How would one distinguish "Has a sense of self" from "Says it has a sense of self"?

Given that any being sophisticated enough for this to be a question would presumably be sophisticated enough to be able to lie.

19:

You said earlier, "computers are also speeding up rapidly, roughly by a factor of 100 every 10 years. Do the math, and it appears that a supercomputer capable of simulating an entire human brain and doing so as fast as a human brain should be on the market by roughly 2035 - 2040."

I don't follow the math. Wouldn't "a factor of 100 every 10 years" mean a factor of 10,000 in 20 years? Is the factor additive or multiplicative?

20:

Minds are what brains - just matter - do. Mind can be done in other substrates.

That isn't proven. Moreover, it's not easy to see how it could be proven to the satisfaction of a skeptical, biological observer. The fact that a computer "processes information" does not make it a brain any more than the fact that a car "goes fast" makes it a cheetah.

21:

German release of "Nexus" announced for July. Good News.

22:

William Stoddard reflects that the uploaded procedure is a kind of reproduction rather than continuity. Indeed.

Yet the uploaded version will very much think of itself as you. Just as a future you, who no longer shares any atoms with the current you, will also think of himself as you.

To a certain extent, if I transfer a movie from a DVD to a hard drive, and then destroy the DVD, does it matter? The movie still exists. And in this new form, it will indeed prove more likely to come into greater circulation.

23:

Humots asks about the math of speedups, and whether it's additive or multiplicative.

It's multiplicative. So, in general, Moore's Law predicts that we'll see the following speedups, from now:

In 10 years, 100x
In 20 years, 10,000x
In 30 years, 1 million x
In 40 years, 100 million x

There are important caveats to this. All exponential processes eventually hit their limits and become S-curves instead. No exponential in nature lasts forever.

In Moore's Law, there's good reason to believe that integrated circuits, the way we currently build computers, can keep getting smaller and faster for another decade or two, but probably not for another 40 years. Ray Kurzweil points out that the exponential pace of computing has gone through multiple domains of technologies, with vacuum tubes and other technologies pre-dating the integrated circuit also showing something like Moore's Law. It's quite possible something else will pick up where integrated circuits fall short.

It may be a variant on integrated circuits using new materials (graphene, or carbon nanotubes).

It may be photonic computing.

It may be quantum computing (though so far this is in its infancy, and very far from general purpose).

Or it may be that this time we actually don't see a successful transition to a new technology that keeps the curve going.

Personally, I think the economic value of more computing power is as high as ever, and will keep growing higher, which will create plenty of incentives to keep Moore's Law or a variant of it going, so I think there's a fair shot of something coming along, though it's far from guaranteed.

24:

Ramez: I suspect that self-awareness and 'consciousness' are inevitable features of any intelligence that arose through an evolutionary process.

(But not necessarily for one that is designed.) Err ? Perhaps.

Meanwhile: Oren @ 4 In the end I think that we’ll discover that self modifying conceptual engines forcefully separated into layers and driven by outside forces into a desperation to satisfy urges causes intelligence as an emergent property. Could we have that in plain English please? You are gobbledygooking as if you were a sociologist.

The general problem of "What is sentience?" is also moot. Hans @ 11 has a good stab - to which, can I add ... A truly sentient "Mind" should be able to act intuitively, or demonstrate such. (?)

25:

Chris Borthwick asks:

"How would one distinguish "Has a sense of self" from "Says it has a sense of self"?"

There are many many systems out there that pretend to be smarter than they are. Eliza, one of the first "AI" therapists, was remarkably effective at convincing patients that "she" was a person, simply by parroting back paraphrases of what they said.

So in many cases we'll need to seek robust evidence about how an entity behaves, not just its statements, to see what its mental capabilities are. In other cases we may go beyond that, and actually look at what its internal makeup is. If you can see a line of code that says printf("of course I have a sense of self"); then that's a good indicator that it's not truly sentient. :)

26:

So that we can't fly, because an A330 isn't an albatross, then? Sorry, that "argument" won't hold water.

27:

Greg @24: I believe that intelligence is a cross product of (senses/involuntary needs) x (capabilities) x (layers of abstraction). I believe that a large part of the reason that you've bothered to learn any of the things you've learned in life is because you get hungry or tired or angry. Without those involuntary needs you'd simply sit there all day like a rag doll staring at the wall, waiting for something to come along and eat you.

Eventually, we're going to create AIs that experience involuntary "hunger", and attempting to satisfy their hungers is the thing that will drive their intellect. Unlike us (who are stuck with the brains our genes gave us and the slowness of chemical neurons), the AIs will be able to add or take away layers of abstraction ad infinitum, and will be able to design, remove, or optimize controllable sets of needs drivers. They should also be able to directly integrate into non-AI tools. For example, neural nets are a very sucky way to do math, but it's what we've got, and our interfaces to calculation devices are extremely slow and sub-optimal. A sufficiently interesting AI would simply integrate a calculator such that it would experience calculation as a subconscious phenomenon.

Imagine that for the AI any "solved" computational problem, from math to facial recognition, is simply another sense to it. This means that things we use a lot of our brain horsepower to drive, the AI simply gets virtually for free.

People tend to try to do an apples-to-oranges comparison by counting our neurons and synapses to predict the computational needs of a human level AI, but we use a massive amount of our brains on things like muscle motion, vision, and other bodily functions that would easily be offloaded to subsystems, and then to make matters worse, we use a lot of our brain's horsepower for non-optimal execution of algorithms. As far as I can tell, the most interesting thing we bring to the table is a relational symbology system that is able to see that birds have wings, birds fly, people don't fly, so maybe if we stuck wings on people they'd be able to fly...

28:

Great to see you here Ramez. What you say reflects my own opinions - I am persuaded that conscious AI and mind uploading will be developed, but not as soon as we hope, and not without unexpected problems and detours. I don't really hope to see conscious AI and mind uploading in my lifetime, but I am happy for our grandchildren who will live in a "magic" world. And I am happy to be a small part of life on Earth, on its way to become cosmic life in a magic universe.

29:

Great post, Ramez.

In your uploading scenario, you're surely right that we would be unlikely to get enough details right the first time round to replicate a person (leaving aside whether it would be uploading or sideloading).

But we might well get enough right to create a conscious entity. Which could go on to become an Artificial Super-Intelligence, either in a FOOM or more slowly. And with the HBP etc, this might happen within a few decades, no?

30:

I think Penrose is an example of how you can be very bright in one subject, and totally wrong on another. See also Linus Pauling and Vitamin C.

In Penrose's case, I think he is offended by the idea that a totally deterministic process could give rise to sentience. But rather than appeal to the concept of a 'soul', he's looked for something that apparently prevents determinism — i.e. quantum effects — and then attempted to show that that has some effect.

Yet he's a mathematician. He should know how a simple, deterministic, process can lead to incredible intricacy. Just look at the Mandelbrot Set, which is an examination of x -> x^2 + c
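
A minimal sketch of that point (my own example): the Mandelbrot set's entire intricacy falls out of iterating one deterministic line of arithmetic.

```python
# The Mandelbrot iteration z -> z^2 + c: a completely deterministic rule
# that produces endless intricacy (a minimal illustration).

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to lie in the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c          # the whole "process" is this one line
        if abs(z) > 2:         # escaped: c is definitely outside the set
            return False
    return True

# Crude ASCII rendering of the region around the set.
for im in range(12, -13, -2):
    print("".join("#" if in_mandelbrot(complex(re / 30, im / 30)) else "."
                  for re in range(-60, 21, 2)))
```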

31:

One thing that always confused me about mind uploading is how it ignores the rest of the body entirely. Assuming for the sake of argument that a brain could be simulated to a level in which an individual would be copied into a machine, wouldn't this simulated brain immediately die without simulated oxygen, or become impaired without simulated feedback from the endocrine system, or go insane from the complete lack of sensory input?

The human brain evolved within a human body. I'm not convinced at the moment that it can be simulated to any reasonable degree without also having a good simulation of the body as well.

32:

Not to wreck the illusion of being a "real" person, but the emergent "you" that you think of as your personality is discontinuous when you sleep and when you go under anesthesia. If we flipped the switch during one of those times, you wouldn't know anything had happened (aside from the whole "whoa, I'm in a computer" thing...).

33:

Contrarywise - Going to sleep as an "ugly bag of mostly water" and waking up as a lump of metal and silicon seems like a pretty fundamental change to me.

34:

If IBM's Blue Brain can simulate a brain the size of a cat's, why not use the brain mapping to upload a cat's brain? Under the theories put forth here, it should behave like the cat. Until you can prove that is possible, there's little point in speculating about replicating human intelligence, no?

35:

It seems that the 'uploaders' consider the prefrontal cortex executive function as the only part of the brain worth saving or enhancing. This ignores sensory deprivation experiment results, which consistently show that disconnection from the world/senses typically results in madness/psychosis. The brain is made up of many sub- and sub-subsystems, not just stacks or network arrays of interchangeable all-purpose neurons. (Even neurons come in different types/flavors.)

Focusing only on speeding up/enhancing the executive function would result in a better calculator. It probably would not produce more beautiful art or a better, more compassionate ethos, which is what much of the SF of the past 70+ years has explored (and persuaded most readers) as being fundamental in defining what it is to be a human or sentient being. To create art or a better ethos, an AI must be able to 'feel' - that is, be able to place itself in relationship to everything else and evaluate (weigh) each of those distinct interactions it has (whether active or passive) in relation to itself/the boundary of its physical self.

36:

You give a lot of detail about your reasoning, but then your conclusion is the very vague "quite a long time to come." Is that a decade, century, millennium, or what?

37:

Very fair question, and indeed that may well be a milestone, if projects like the HBP are on the right track. An earlier milestone is the project of simulating the C. elegans worm, and it seems that progress is being made there, albeit slowly.

The progress currently looks glacial, but if Moore's Law continues...

None of this is pre-ordained (well, unless it is, of course!) but it looks a possibility.

38:

I really enjoyed reading this post and think you have a lot of interesting points that give me plenty to think further about.

39:

if I transfer a movie from a DVD to a hard drive, and then destroy the DVD, does it matter? The movie still exists.

This basic software/hardware distinction has no relevance to brains. Every mind that we're aware of runs in its own unique wetware. Copying a mind from a brain to a computer may be more like trying to take the nutrition out of a salad and put it in a rock.

40:

Just to be obnoxious, I'll point out that Google's already patented the simplest version of an uploaded human (io9 factoid with link to patent). Think of it as the winner of the Individualized Turing Test: it's a computer system whose job is to convince the rest of the world that it is you--it knows everything you know, it acts the way you act, it responds the way you respond, and it can predict what you will do, just as you can predict what it will do.

This is a lot simpler than trying to emulate a human brain in a computer, and my pessimistic prediction is that this is the one we'll see long before we (if ever) see an actual neuron-mimicking model of any human brain.

I can also see a proliferation of such programs for the purposes of identity theft and objectionable corporate activity. For example, a company may hire you just long enough for them to make an online copy of you, so that they can then enslave the machine to do your work 24/7, without the need for any of those pesky benefits or pay that real humans need.

By the way, would the emulation of a human brain be demonstrably sentient? It's not like humans have wires in their head to beam out their sentience. Rather, it comes out in words and actions. How would we know that the brain emulation is any more sentient than the computer running it? Moreover, brain structure is known to be shaped by things like culture and sensory inputs. What does the brain emulation do in the absence of such inputs? Can we even provide them, or are such uploads just a brain in a sensory deprivation tank?

41:

On a different subject, I've been having some fun contemplating what climate change could do to the Singularity, however phrased. The results aren't pretty.

Computers have two key weaknesses. The first is that they're (currently) not designed to last very long. While it's possible to build a computer that will last for decades (see Voyager and Pioneer), what's the current design-life of most chipsets? Five years? A decade? The upshot is that anything that's "alive inside a computer" needs to migrate regularly, and sooner or later it's going to become a legacy system, with all the issues that implies.

The second weakness is that computers depend on enormous global ecosystems: the elements to produce them come from all over the world, the technology to produce them comes from all over the world, their component parts are manufactured in areas with cheap, skilled labor, wherever these occur, then shipped and assembled in still other areas, then sold in still other areas. The devices then depend on the global internet and often GPS satellites in order to function.

Now, this is a truly amazing system, but there's one little problem: which part of this incredibly complex system will be affected by climate change? If you said all of it, you're right. The combination of sea level rise and increased storms will make rebuilding ports an ongoing necessity for something like the next thousand years. Space launch facilities will be affected too (only Baikonur isn't near a coast, to my knowledge), and GPS satellites do need to be replaced every decade or two.

One of the general characteristics of civilizational collapse is that international trade dwindles and disappears, while local technologies often survive. The classic example of this was the end of the European Bronze Age, because bronze is made of tin and copper, which very seldom occur nearby in nature. By the end of the European Bronze Age, there were trade routes stretching from Cornwall and Scandinavia to the Middle East to Afghanistan. Those routes all disappeared when the European Bronze Age ended, and for quite a while, no new bronze was made: existing bronzes were recast. This was also the time when iron started spreading out of the Middle East. Iron is much more plentiful than either copper or tin, although it's harder to smelt, and it's a local technology: it can be mined, smelted, and forged locally. When Rome collapsed, the former Romans didn't go back to using bronze or stone. Rather, they kept making iron implements. Iron was (and to some degree still is) a local technology, while bronze is a global technology. Global technologies disappear when the trade routes that support them collapse.

I don't know how you make a locally-sourced AI computer, but if anyone wants the singularity to last more than a few decades to a century, that's one of the problems that will have to be solved, because I kind of doubt that we'll be able to maintain our current global trade patterns as climate change really bites down. If you don't believe me, David Archer's The Long Thaw is an excellent, minimally technical overview of how climate change will likely play out.

42:

" They don't have real receptors that neurotransmitter molecules..."

This is perhaps the most important point missed when assuming simulating a brain is primarily a structural and electrical undertaking. Neurological studies have found that neurotransmitters (chemicals) play a far more significant role in brain function than electrical impulses. It may be possible to capture the structure of neurons with the destructive methods described above, but how does this capture the "state" of a brain's neurotransmitters at any given moment? And if this is not possible, then serious questions about retaining memories and thoughts arise. Until our computational artifacts can account for the chemical processes of the brain, I see little chance of uploading ourselves to computers.

43:

William Stoddard reflects that the uploaded procedure is a kind of reproduction rather than continuity. Indeed.

Yet the uploaded version will very much think of itself as you. Just as a future you, who no longer shares any atoms with the current you, will also think of himself as you.

In any event, we aren't permanent structures. Peptides aren't permanent; I don't have a reference or source to hand but I recall reading that the half-life of individual peptide molecules in the human body is on the order of hours to weeks, depending on role/structure. Individual DNA molecules may be longer-lived but are still duplicated and repaired on an ongoing cycle ranging from weeks to months and, rarely, years. Bone? Bone is a living organ perfused by cells (osteoblasts and osteoclasts) that deposit and reabsorb minerals in the extracellular matrix -- what you see in a museum display or graveyard is the desiccated left-overs, and much of your bone structure will be replaced/rebuilt during your life.

On the level of cognition, there's the problem of sleep. It looks likely that during sleep, essential self-repair/perfusion processes take place to flush waste products out of the glial system. And during sleep, we're not continuously sentient: awareness comes and goes, and the process of awakening is very much the re-emergence of a suppressed coherent organizing context and sense of identity.

Virtually none of the atoms in my body were present in it 40 years ago, but I'm still me, for some value of "me" (I'm not very similar to that 9 year old). So I'm not too fussed about the continuity problem in uploading: it's just a more abrupt equivalent of processes we undergo all the time but are not consciously aware of.

44:

European archaeology and archaeohistory is (are?) one of my other interests. This analysis of pan-European trade in the Bronze Age is pretty much exactly right.

45:

And the two reference articles at the heart of this patent are probably:

Steen et al., "Development of we-centric, context-aware, adaptive mobile services requires empathy and dialogue," Freeband FRUX, Oct. 17, 2005, Internet Journal, Netherlands

Van Eijk et al., "We-centric, context-aware, adaptive mobile service bundles," Freeband, Telematica Instituut, TNO telecom, Nov. 30, 2004

http://www.academia.edu/2607703/We-centric_services_for_police_officers_and_informal_carers

These models use 'value' and pre-knowledge (including lowered inhibition/preferential responsiveness) as part of how data assessment is done.

46:

So I'm not too fussed about the continuity problem in uploading: it's just a more abrupt equivalent of processes we undergo all the time but are not consciously aware of.

So you are not too fussed about the continuity problem of death, either? I mean, what is the qualitative difference between destroying your brain and then building 5 copies of it, vs. destroying your brain and building 0 copies? :-)

47:

Same with music: a song/musical piece is not just the specific voice/instruments, notes/score, etc. - it's the overall experience. This is pattern-based, but not arbitrarily stochastic - the pattern has to take place within a preferred/specified range.

48:

There is continuity in sleep though. Your mind - well my mind - doesn't stop working, there is no sudden blackout, just a sort of dislocation of short-term working memory from medium-term memory, so no or few new permanent memories are formed. Sort of like severe anterograde amnesia, or Korsakoff's syndrome, or the effect of drinking far far too much. Living in a short moving window of consciousness from a few seconds to maybe half a minute or so.

But when I wake up I can usually remember the previous few seconds or minutes. And my most recent memory preceding that on waking is rarely, perhaps never, going to sleep; it's usually something I heard or thought or dreamed while asleep.

Anyway, there is a continuity of experience through sleep, as through growth and aging, that is utterly unlike the idea of a destructive upload to some different substrate. At best the survivor of that would be a sort of twin - though much less like you than a twin, because we are not disembodied souls, what we are made of is what we are. (Though a full description of what we are made of is not a full description of who we are)

As for general anaesthetic - maybe this is one of the reasons I'm scared of it. I worry that I'd experience the pain of an operation and be unable to tell anyone. Or scream.

Ken Brown

(Signed just in case crazy Google login still passes a silly token despite all my attempts to get settings right)

49:

Robin Hanson says: You give a lot of detail about your reasoning, but then your conclusion is the very vague "quite a long time to come." Is that a decade, century, millennium, or what?

I believe a FOOM is unlikely to occur at any time.

I believe the event horizon model is flawed altogether, as there's good reason to believe that we can gain significant insight into what happens in a world of digital minds. (As you've already shown in your work on the economics of emulated minds. Looking forward to reading the new work.)

As for when we'll have digital minds, if I had to hazard a guess, I would put 'uploads' in the first half of the 22nd century, a century or a bit more from now. But the error bars on this are quite large.

The main point of the last half of my essay is that there's a large number of unknowns in both human-created AI and in what level of emulation of the brain is critical for replicating its behavior.

50:

We could get a whole new article/thread out of that. A huge amount of our world depends on the shift from local to global.

I remember reading a book about the Steam Engine builders of Lincolnshire. I can find one obvious book with Google, and published at about the right time. Essentially, every small town in England had a workshop able to make a steam engine, although the boilers might have had to be brought in. And a steam engine contained many key mechanical technologies. You cast iron and steel, and machined the holes and shafts and pistons. Bearings used cast liners, white metal or bronze. It probably wasn't worth making your own oiler cups, but you could have done. If things had gone pear-shaped for Victorian Britain, bootstrapping the technology was possible.

Whether it was still possible by the time The Day Of The Triffids was written, I don't know, but people had the memory of it, at least. Heck, I remember a local, rather traditional, blacksmith who was making parts for Massey Ferguson.

One of Fred Dibnah's last TV series had him touring Britain on his traction engine, visiting the places that still did these things. And I recall that the British Army was still depending on a Victorian steam hammer in Sheffield for the wheels on its tanks.

The great railway locomotive works did everything, though they were trade-dependent for some of their raw materials.

I've made parts to keep a farm tractor working on a lathe in the farm workshop.

It's tempting to say that the Maggie Thatcher years ruined all that, and the timing seems right. But the changes started long before. Marconi was in Lincoln, Ferranti in Manchester, making chips, and they have now gone. Britain had some world class computer companies. Now, at best, we have brands that depend on factories at the far side of the world. We do assemble the Raspberry Pi in Britain, but we can't make the parts.

How many factories would the Luftwaffe need to bomb today, or would it be enough to drop mines in the approaches to a few ports which take the container ships?

And that is why Keynes is no longer as good an answer. A factory worker of 1929 had to spend his money locally. His modern equivalent still has to spend his money, but it doesn't hang around. Most of today's purchases are bought from huge corporations which siphon off their profits to tax havens.

The world has changed, and what might be needed to support the Singularity is incredibly fragile.

51:

Thanks Zhochaka.

I'm actually working on a book on what a climate-changed world might look like, so I'm nose deep in the whole subject.

For what it's worth, I wouldn't blame Thatcher or Reagan. It's the whole globalization phenomenon. Going big is one of those tricks that does work--for a while. Gaining complexity and size is one of the classical ways that complex cultures deal with major problems.

The problem with the strategy, as Joseph Tainter pointed out back in the 80s, is that increased complexity comes at a cost, and after a while, each new level of complexity comes with increasing costs and decreasing returns. When a complex culture gets to the point where it can no longer afford to solve the problems it faces (the cost is greater than the benefit), but some of the problems go away for some of the people if it disintegrates into something simpler, that's when it starts to fall apart.

One other thing I've realized is that there are technological ratchets: local technologies, like iron smelting or farming, that don't disappear when the civilization around them collapses. Like a ratchet, we don't slide backwards past them. Various people (John Michael Greer comes to mind) have made guesses about what can be localized if our near future civilization crashes, and I'm taking a swing at it too.

Perhaps, so as not to derail this thread, I should transfer this question to my own blog? The only question of relevance here is how Post-Singularity AIs survive in a rapidly changing climate, if they can do so at all.

52:

Oren writes: When you can create a building full of scientists simply by hitting CTRL+C, CTRL+V a million times and buying compute units from Amazon/Google, yes, you can almost instantaneously outperform the team that built you.

I wonder if this is the Oren I think it is? If so, a delight to have you here.

In any case, I expect to see all sorts of software scientists. We already have software proof-solvers, for instance. But they're all quite narrow. I'd be quite happy to be wrong and see people make progress in more generalized AI than I expect.

53:

It matters to me, first, because I value the survival of my body, and second, because my consciousness appears to be an activity of my body, and if my body is terminated, that activity and the perspective it generates can hardly continue or resume.

Or, to put it in narrative terms:

If you use ultra-advanced technology to perform a non-destructive scan of me, and create an exact copy up to the limit of resolution set by Heisenberg, there will be two of us, each seeing through his own eyes, and not the other's. If you shoot him, I will continue to be aware, and may learn of his death; if you shoot me, I will be gone. I won't suddenly start seeing through his eyes. It makes a difference to me which of us you shoot.

I don't think this changes if you shoot me the instant the scan is complete, before you make the duplicate. The idea that I can start seeing through a different set of eyes strikes me as unbelievable.

Now, if you have a very high bandwidth channel between our two brains, it's at least imaginable that I might be able to be aware through both bodies (though that sounds confusing!) and to have my viewpoint end up transferred to a new body. It would be sort of like Vinge's Tines. I think Kurzweil was putting something like that forth as his version of uploading. And I could count that version as "survival."

54:

I'm aware of that, but I don't think it's relevant. I'm not a Cartesian dualist, but an Aristotelian hylomorphist. I don't identify myself as a disembodied thinking substance, but as a physical being with a form that includes consciousness as an aspect. I care about the continuity of the physical being.

55:

To expand on post (9), your maths on the hard takeoff scenario is wrong. You assume a design team of intelligence X can develop an AI of intelligence 2X. Hence you've stated that the AI is more intelligent than the design team, hence it should be able to develop another AI that's smarter than the one the design team developed (itself).

In practice I don't consider that intelligence is something you can describe in terms of 2x. I think that our brains/minds work at least partly because of a huge amount of "hardware support" - lots of cognitive functions map directly to particular neural structures. Hence some of our cognitive limitations (e.g. short-term memory, concentration/boredom, tiredness) could be fairly easily enhanced in an AI by throwing more hardware at it.

56:

To expand on post (9), your maths on the hard takeoff scenario is wrong. You assume a design team of intelligence X can develop an AI of intelligence 2X. Hence you've stated that the AI is more intelligent than the design team, hence it should be able to develop another AI that's smarter than the one the design team developed (itself).

You're correct. This is an error in my math. However, as John Quiggin pointed out, if intelligence is N^2 difficulty, we'd still see convergence rather than takeoff.

I think a more realistic scenario to start with is a large design team (effectively tens of thousands of people when considering all the indirect contributors via chips and so on) designs an AI that is smarter than an individual human, but not smarter than the entire team.

Then you have the scenario presented therein, where the new AI is, at best, a contributor to the team.

57:

Yeah, not done reading, but that jumped out: 1.41x as intelligent, not half that.

But let us say it is an n^2 process. So a team of humans spends 4 years and generates an AI 2x as smart as the team. So 4 human-team-years -> 2x human-team-intelligence AI. It's n^2, so a 4x AI would take 16 human-team-years. The AI doing the work is twice as productive, though, so it only takes 8 calendar years. Then that 4x human-intelligence AI gets to work on an 8x AI: 64/4 = 16 years. Then the 8x AI works on a 16x: 256/8 = 32 years. Wait, I see a pattern. ;)

So anyway we get our 16x human-team-intelligence AI after 60 years.

The vastly superhuman AI isn't exactly reformatting your brain before you manage to ask it the meaning of life the universe and everything.
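For what it's worth, that compounding arithmetic fits in a few lines. The sketch below just encodes the commenter's illustrative assumptions (building an AI k times as smart as the original team costs k^2 team-intelligence-years, and a builder m times as smart works m times faster); it is not a claim about how AI development actually scales:

```python
# Sketch of the n^2 compounding argument above; every scaling rule here is an
# assumption for illustration, not a model of real AI development.
team_intelligence = 1.0       # the human design team, used as the unit
target = 2.0                  # each generation aims for 2x the previous target
builder = team_intelligence   # generation 0 is built by the team itself
total_years = 0.0

for gen in range(1, 5):
    cost = target ** 2               # team-intelligence-years, per the n^2 assumption
    calendar_years = cost / builder  # a smarter builder works proportionally faster
    total_years += calendar_years
    print(f"gen {gen}: {target:.0f}x AI built in {calendar_years:.0f} years "
          f"(cumulative: {total_years:.0f})")
    builder = target                 # the new AI builds the next generation
    target *= 2
# Prints generations taking 4, 8, 16 and 32 years: 60 years to a 16x AI.
```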

58:

I don't think I'm the Oren you might think I am but I'm delighted to be speaking to you nonetheless :)

The path that I’m expecting to work would involve starting from a "base needs" satisfying engine and then evolving it through selective competition while increasing the number and complexity of needs, action options, “hormones” and levels of abstraction.

We won’t know we’ve succeeded until it tells us that we’ve succeeded. In the end, just like us, strong AI will be emergent.

Meanwhile, I don’t think we’ll be uploading any time soon and personally I don’t see a reason to want to upload. Given advances in nano-tech medicines, if the AI’s can extend my life, keep me healthy and young while allowing me to keep my easily entertained monkey brain, I’ll happily cheer them on as they go off to explore the stars and send back postcards :)

I fully expect that someday out there, there will be a religious war amongst the robots over whether or not there really ever were humans, and whether it's ridiculous to believe that bags of meat could have "created" intelligent machines.

59:

BTW the definition of intelligence here would be something unusual and recursive, like "the ability to expand the ability to do this, and general computation, for a certain resource input".

60:

I would love to hear about AI researchers looking seriously at the notion of panpsychism, and seeing if they could look at the problem very differently. Of course, this would have to assume that they took a bit of a gamble, and assumed panpsychism to be true. My intuitive side tells me that it's very much worth exploring, and we will much more quickly be able to develop sentient AI by trying to "channel it" versus trying to create it from scratch.

61:

In Moore's Law, there's good reason to believe that integrated circuits, the way we currently build computers, can keep getting smaller and faster for another decade or two, but probably not for another 40 years. Ray Kurzweil points out that the exponential pace of computing has gone through multiple domains of technologies, with vacuum tubes and other technologies pre-dating the integrated circuit also showing something like Moore's Law. It's quite possible something else will pick up where integrated circuits fall short.

...

Personally, I think the economic value of more computing power is as high as ever, and will keep growing higher, which will create plenty of incentives to keep Moore's Law or a variant of it going, so I think there's a fair shot of something coming along, though it's far from guaranteed.

Based on my reading of semiconductor industry news I think we're already past the steepest part of the S-curve on integrated circuit progress. Extreme ultraviolet lithography is much later and more expensive than originally anticipated. Other lithographic innovations have been filling the gap, but raising costs also. The performance gains are smaller with each process shrink because leakage current and heat management keep getting worse. Design rules also get less flexible with each shrink. These expenses and constraints might not be real obstacles to continuing Moore's Law except I think that the marginal utility of computing upgrades is falling.

Moore's Law started out as an economic-technical trend and I think it is the economics that will end it. The Intel chips that power data centers and supercomputers are, today, mostly the same as those that power laptops. There are some changes to support error-correcting memory and multiple-socket parallelism but otherwise Xeons are very much like desktop or laptop chips. Smaller-volume supercomputers and servers get the advantages of sharing design effort with higher volume products -- this is how Intel hardware triumphed over supercomputing-only specialized processors and has been shrinking the high end proprietary server market for over a decade. The bad news is that the mass market is getting smaller for Intel too. PC hardware sales are contracting, both because people are doing more stuff on non-PC platforms and because PC hardware is "good enough" for increasingly long periods between upgrades. If you run e.g. computational chemistry on your desktop, you can still see significant gains from one chip generation to the next. If you're just browsing the Web and running office software, you probably won't appreciate the difference between recalculating a spreadsheet in 50 ms and 40 ms, and can now go 5 or 6 years between hardware purchases without undue suffering.

Let's hope there is some amazing and amazingly affordable innovation for computing past CMOS. It needs to be good enough to beat a highly mature entrenched technology and cheap enough to overcome the diminishing marginal utility observed for computing in most settings. If the logjam is not broken then brain uploads may never be practical. I suspect that even today's photolithography is good enough to build the machines that will take over most jobs with AI of the narrow Google search variety; the main challenge is software rather than hardware.

62:

Where are you getting that 70% -- is it from dividing by sqrt(2)? I think you should be multiplying instead. If the AI is twice as smart as the team that created it, it should be able to produce AIs better than what the team can produce, not worse.

And in particular, if AI development is O(N^2), such that creating an AI 2x as smart as some reference requires 4x as much intelligence as the reference required, then an AI-developing-entity A that's twice as smart as another AI-developing-entity B should be able to create a new AI that's sqrt(2) times as smart as what B can produce. If B produced A, then A should be able to produce something sqrt(2) times as smart as itself.

Am I missing something?

63:

There is a difference between piece by piece replacement and building something entirely new out of new parts. As the old joke has it, "This is the axe Abe Lincoln used to split rails. Of course, it's had seven new heads and four new handles, but it's the same axe."

64:

I don't know how you make a locally-sourced AI computer

I have some experience in semiconductor device manufacture, and I can field this one. You don't, for post-1980 values of the term "computer". There are a great many extremely finicky technologies that must work in very nearly perfect harmony to make a functioning computer chip, and in the sort of scenario you're describing the limited resources available will be allocated toward more immediate priorities.

65:

The FOOM scenario never made any sense to me for a far simpler reason - hardware. You want a godlike hyper AI, then you are going to need the godlike hardware to run that. I did some extremely rough back-of-the-envelope number crunching and all of the computing power in the world (including phones and server farms like Google's) is equivalent to maybe 3 or 4 human brains. Even if my math was wrong (which I'm sure it is to some extent), it still highlights the fact that massive intelligence takes massive hardware. Even if a hyper intelligent AI could design an even smarter AI faster than it was designed (which you well point out is unlikely), it still needs the hardware to be physically built.

Maybe the next revolution in computing (quantum or whatever), will make it easier, but there is still the simple fact that a hyperintelligent AI needs to be BUILT, which is not something that will accelerate to happen on the scale of seconds or less. It needs to be physically realized, which takes some nonzero amount of time.

As for the digital minds, why start from scratch? It can be far easier to add onto an existing structure than to entirely replicate it. If we can get a safe and reliable interface, then we already have a fully functioning intelligence that we can build off of right inside our own skulls. EI (enhanced or expanded intelligence), once the interface is figured out, would seem to be a far lower bar to set, as well as having potential practical applications right away. Yet this third option is commonly ignored including in this post.

66:

Better hardware would be among the things an intelligent AI works on in order to produce the next generation of better AIs (or, more selfishly, a better substrate for its own use). I think this is a common assumption, and any sentient AI would have the motivation to prioritize this.

It would be naive to assume a super-intelligent AI still depends on today's semiconductor industry; it would only do so in its first few generations (assuming that current silicon semiconductors can support a super AI at all).

67:

The multiple generations of improvements in hardware needed for the Singularity still have to run under the same laws of physics, and "Ye cannae break the laws of physics".

There's something that biological brains do, to get the performance they do out of huge numbers of atoms per neurone (100 trillion is the usual average given for the whole body), which I don't think we know about. It doesn't make Penrose right with his quantum handwave, but he's trying to find an answer to the problem. Silicon hardware is already teetering on the edge of the quantum precipice, and some devices have been exploiting quantum effects since the 1950s. (I'm not sure how quantum early transistor design was.)

68:

Oren: I believe that a large part of the reason that you've bothered to learn any of the things you've learned in life is because you get hungry or tired or angry. Without those involuntary needs you'd simply sit there all day like a rag doll staring at the wall, waiting for something to come along and eat you. Cobblers. Don't believe a word of it. Please justify your position.

As for: As far as I can tell, the most interesting thing we bring to the table is a relational symbology system that is able to see that birds have wings, birds fly, people don't fly, so maybe if we stuck wings on people they'd be able to fly... THAT sounds/looks not like Sociology, but Noddy Psychiatry, which is even more intellectual tosh.

69:

Heteromeles & Zhochaka: Thanks, both of you. This, of course, is one of Jared Diamond's themes - especially in "Collapse". Those few of you out there who haven't yet read it ... do so.

70:

ATT All transistors rely on quantum effects, actually... Quantum tunnelling between the (n/p/n) sections of even a single device.

71:

Technically that is true, but it's not strictly relevant as you are using those quantum effects to implement a classical computer.

Penrose would prefer the mind to be an inherently quantum phenomenon, which would keep the AIs in their place. At least until workable quantum computers come along.

72:

I agree that because of hardware constraints FOOM is much further off than we think. But I think enhanced intelligence will lead to a singularity that is much nearer than we think. The first seriously enhanced human or animal that can plug into the connected hardware of the world will bring it all on for better or for worse.

73:

If I shut down your hunger sensors, you might intellectually know that you should put food in your mouth, but given the discomfort of abdominal distention and no opposing pleasure from having your hunger sated, you would in all likelihood stop eating after a while. If you simply never had the hunger sense to begin with, you would have never figured out eating nor had any motivation to figure out how to acquire food in the first place.

This same principle applies to every other aspect of your life from sex to sleep to wanting to feel loved. Everything that you do is in some way driven by your desire for pleasure over pain. As you've grown older, you've learned to abstract some of that so that you're able to be motivated by longer term greater pleasures over shorter term pleasures that beget longer term pains. Though given the prevalence of smoking, I would say that not every human gets that far... In some mammals, there appears to be an abstraction layer that makes the act of lower layers learning new tricks a pleasurable activity in and of itself. The opposite of that pleasure is boredom.

Without these drivers (we see this system get broken all of the time by people who use opiates to subvert it for any length of time) you simply have no motivation to do anything. You also see this in the very old whose senses are becoming numb from old age. The counterexample is when you watch how quickly an infant is learning new tricks when their senses are all fresh and in full burn mode.

For an infant, hunger initially = fire all actuators (flail, scream, shake) but soon after seeing some patterns of behavior are better at sating their hunger, they settle down on those patterns that work. As they age a little more, those patterns become more refined and nuanced based on more inputs for the situation around them.

As for our intellectual capacities, we do a lot of things that our brains are not built for in very sub-optimal ways. A neural net is a terrible way to do math. We use massive amounts of pattern recognition to memorize multiplication and addition tables in order to perform (slowly) an activity that takes a billionth of a second on silicon. What's truly interesting about us is our ability to "be creative"... though any time spent watching a cat try to get a mouse that is hiding behind some obstacles would show you that the cat is able to experiment and find new and novel ways to get the mouse. We seem to exhibit 2 major types of "creativity" that make us proud to be humans.

Inventive creativity: Mostly taking observations of different systems with similarities and applying the differences to get similar results. "Hey, I just slipped on that round rock. I wonder if I can make my heavy load also slip on something round."

Emotive creativity: Mostly taking the fact that some set of things makes us feel good, learning what subset of the pattern is necessary for the good feeling and applying it across multiple substrates. "Hearing scales makes me feel good... I should try making music that involves scales..."

I am sure there are holes in my observations and that more kinds of creativity can be identified or that all kinds of emotional/sense drivers can be identified. There is also an entire system that I'm not describing involving time and ordering/grouping of things but I suspect that those will end up being weak AI solutions.

Overall, my point is that while we like to think of ourselves as magical and interesting, there is a lot less there than one would think. It's just that the interaction of the parts makes for a whole lot of interesting emergent behaviors.

74:

One more thing on the notion of "human level" AI.

If your AI doesn't poop its pants occasionally and sometimes run off to join a Scientology-based cult or start hoarding cats, you haven't built a human-level AI yet.

Even Einstein spent a large % of his life having bad ideas or taking a break from having bad ideas to focus on getting laid. There's a tendency to expect a human AI to be like the 5% of Einstein's life where he was being smart or the 0.1% of his life where he was being right, 100% of the time. That's not a strong AI, that's a calculator.

75:

Space launch facilities will be affected too (only Baikonur isn't near a coast, to my knowledge), and GPS satellites do need to be replaced every decade or two.

I expect Spaceport America, which is in New Mexico, to pick up a lot of business over coming decades.

www.spaceportamerica.com

76:

If your AI doesn't poop its pants occasionally and sometimes run off to join a Scientology-based cult or start hoarding cats, you haven't built a human-level AI yet.

True, but is it any worse for it?

That's one complaint I have about "human level AI" -- what is it for? If someone succeeds in building one, what they have is a human in a box (or in a robot body), and there are cheaper ways to make humans. I'd be much more interested in artificial intelligence which is fundamentally different from human intelligence -- as different as a dolphin's, or more. An AI which could perceive problems the way no human ever could would be a breakthrough of literally unimaginable magnitude -- even if it remained clueless about some things obvious to humans.

I suppose it is but a different way of phrasing Naam's #2 objection -- "lack of incentive"

77:

"That's not a strong AI, that's a calculator." It's probably worse than that. It's probably a screaming insane thing trapped in a Minecraft Hell. Forget the "is it murder if you flip the switch to off" ethical conundrum, I'd feel compelled to try to get the AI team arrested for child abuse.

As far as uploading goes, don't you have to do more than just simulate my brain? What about my glut? (My term of venery for my squishy innards and poopy microbiome, aka the second brain). Surely this has to be replicated as well? Not to mention all of those somato-sensory experiences we haven't documented but suspect are there?

And why do we write off humans so quickly? A while ago, Cosma Shalizi noted that the long 19th century sure looks like a FOOM: http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/699.html

78:

Thanks Greg.

For what it's worth, I'm not sure how much I trust Jared Diamond's Collapse any more. There's good evidence that he got the story of Easter Island largely wrong, for example. Hunt and Lipo's The Statues That Walked is based on many years of archeology on Easter Island, and they can't find any evidence of cannibalism or even ecological collapse, but there's plenty of evidence of a population crash after the Europeans started visiting and the islanders got sick. There's also plenty of evidence of some really cool, well, terraforming, that kept Easter Island habitable for people and kept them from eating each other, unlike the practice elsewhere in Polynesia. There are similar rumblings from the archeologists working on Viking Greenland that Diamond goofed on their story, too, although they're still in the stage of writing scientific papers rather than popular books.

Diamond's a wonderful writer, but I can no longer read him in isolation and trust him uncritically. Always check to see if his stories hold up to the evidence.

A book I'd strongly suggest reading along with Diamond is Joseph Tainter's The Collapse of Complex Societies from 1988, which is going to be reissued this year. It's a more general theory of why complex societies collapse, and I think is probably more generally correct than Diamond. I also think that the barbs Diamond launches at it in Collapse don't really hold up on reading it.

Another book I highly recommend is Scott's The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia. It shows the complexity of the boundary between civilization and barbarism, and Scott's a fun writer if you can tolerate Marxist analysis (yes, he uses dialectic, primarily in the introduction). I didn't know much about the history of Southeast Asia, but apparently, prior to 1950, states in the region that lasted more than a generation or two were the exception, not the rule. In addition to this, the tangle of mountains at the eastern end of the Himalayas and upland SEA (what Scott calls "Zomia") has made it relatively easy for people to head for the hills if they didn't want to be ruled, or if their civilization crashed down around their ears. Similar processes have been in play all around the planet for hundreds if not thousands of years, and this book helps frame the rise, fall, and regeneration of complex cultures in a broader context, more than simple studies of societal collapse and regeneration do. As a note to the conservative types, Scott means anarchist literally. While I'm sure his ideology is somewhere out left with David Graeber, in this book, he's writing about the history of the many peoples who have deliberately run away from being governed by a state, and that is about as anarchistic as you can get. The history from the perspective of the states is far different.

79:

Arguments about replicating wetware brains often seem to go in the direction of "but the complexity increases if we take into account these processes....". The assumption is that the model has to be faithful down to a very low level.

But what if the case is the opposite of that? What if the huge complexity in wetware is a problem, and the brain is constructed to reduce the noise resulting from very imperfect components. IOW, simulating larger scale structures may give us a working brain with far less effort (still very large, of course, using current technology).

As someone mentioned up thread, intelligence augmentation might be a very good hybrid route for the nearer term to produce smarter brains.

Very capable, but non-sentient machines are going to be devastating to human work and the structure of societies. Machines might be able to adapt to rapidly changing conditions, but humans cannot.

80:

Where are you getting that 70% -- is it from dividing by sqrt(2)? I think you should be multiplying instead. If the AI is twice as smart as the team that created it, it should be able to produce AIs better than what the team can produce, not worse.

And in particular, if AI development is O(N^2), such that creating an AI 2x as smart as some reference requires 4x as much intelligence as the reference required, then an AI-developing-entity A that's twice as smart as another AI-developing-entity B should be able to create a new AI that's sqrt(2) times as smart as what B can produce. If B produced A, then A should be able to produce something sqrt(2) times as smart as itself.

Am I missing something?

Nope, you're not missing anything. And you have the math right. It was a mistake on my part. That said, this still doesn't lead to a FOOM. In fact, stand by for the next post, which will show some modeling of this.
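As a minimal sketch of the convergence that corrected math implies: assume, purely for illustration, that a developer of intelligence x (with the human team normalized to 1) can build an AI of intelligence 2*sqrt(x). That calibration just encodes "the team builds a 2x AI" plus the square-root scaling from the comment above.

```python
import math

# Iterate the corrected O(N^2) relationship: a builder of intelligence x
# (team = 1.0) produces an AI of intelligence 2*sqrt(x). Illustrative only.
x = 1.0
for gen in range(1, 11):
    x = 2.0 * math.sqrt(x)
    print(f"generation {gen}: {x:.3f}x the team")

# The sequence runs 2.000, 2.828, 3.364, 3.668, ... and converges to the
# fixed point x = 4 (where x = 2*sqrt(x)): bounded self-improvement, not a FOOM.
```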

81:

My late wife and I decided, more than 20 years ago, that what we wanted around us was not AI, but artificial stupids (c, roth-whitworth, 1993). I loathe how software like google or firefox or (bleah) WinDoze tries to "help" you, by guessing what you want to do, or type, next.

For example, I'd like an AS to go through my mail, get rid of spam, phishing (other than amusing ones) and sort it... and if it didn't know what to do, to STOP, and hand it off to me (hey, uh, boss, I dunno what youse wants wit' dis....)

And the idea of a house in control of an "AI"? To quote the punchline of the late Lenny Bruce: "So, make me a milkshake..." Poof, and the genie turns him into a milkshake.

mark
82:

For example, I'd like an AS to go through my mail, get rid of spam, phishing (other than amusing ones) and sort it... and if it didn't know what to do, to STOP, and hand it off to me (hey, uh, boss, I dunno what youse wants wit' dis....)

And there is a contradiction already. If this software is smart enough to identify spam and phishing, let alone which ones are amusing, it is bound to catch a few false positives -- "too smart for its own good".

A software assistant so stupid it NEVER does anything "helpful" would be essentially useless.
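To make that trade-off concrete, here is a toy sketch of the mail-sorting "artificial stupid" with a hand-off option. The scorer, word list, and thresholds are all invented for illustration; the point is only that wherever you set the thresholds, you are trading missed spam against false positives and hand-offs to the human.

```python
# Toy "artificial stupid" mail triage: act only when confident, otherwise
# stop and hand off. Everything here is invented for illustration.
SPAM_WORDS = {"viagra", "lottery", "prince", "refinance"}

def score(message):
    words = message.lower().split()
    return sum(w in SPAM_WORDS for w in words) / max(len(words), 1)

def triage(message, spam_cut=0.3, ham_cut=0.05):
    s = score(message)
    if s >= spam_cut:
        return "bin as spam"
    if s <= ham_cut:
        return "file in inbox"
    return "STOP and ask the boss"   # the hand-off described above

for msg in ("Claim your lottery prize from a prince",
            "Minutes from Tuesday's meeting attached",
            "Refinance offer for your lottery winnings, prince"):
    print(triage(msg))   # -> hand-off, inbox, spam
```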

83:

I suppose everyone who reads fiction about AIs and robots keeps track of the Freefall webcomic.

Florence Ambrose is an AI with some interesting "failure" modes. There's also the problem of whether "safety" features such as the Three Laws can be maintained in a self-modifying (learning-capable) brain.

84:

As a biologist I appreciate your comments on the actual complexity of the human brain as opposed to the simplicity of a computer model of the brain. One might add additional levels of complexity such as neuronal gene expression and imprinting, the vast “connectome” and dispersed processing outside of the central nervous system. A further observation: human intelligence is not an emergent property, it is an evolved property. This refers to evolution in the narrow, biological sense of genetic evolution of multicellular, sexually-reproducing organisms over the last 800 million years. Neurons, the central nervous system, brains and “thinking” were selected for their contribution to reproduction, exclusively. (We could argue whether consciousness is an emergent trait, an exaptation or a selective benefit.) This has at least two implications for AI. First - the good news - evolution re-uses functional modules, at all levels of complexity, so there is hope that some brain structures are “simple” and universal across all animals with a central nervous system. Second, though, there is absolutely no underlying design principle in constructing the human brain except for physical constraints and the historic contingency of our ancestors’ brains. Thinking may be localized to a synapse or dispersed across billions of neurons; any gene, cell structure, cell or module may contribute; brain structure adapts to the environment. There are no simplifying rules. That is, a computer is a metaphor for the human brain, not an analog, and certainly not a model.

85:

@Ramez Naam

You mentioned up-thread that your guess on uploading is early 22nd century. If that ends up being true, I wonder whether Uploading of actual people would really take off compared to potential alternatives. At that point, our life extension and cancer suppression capabilities might be such that people can be medically near-immortal without uploading.

@Heteromeles

Interesting comments. The collapse of the Western Roman Empire is rather fascinating to me, because it greatly boosted the use of and demand for labor-saving technologies such as water and wind power (and machines that use them). I suspect a "re-localization" due to climate change would do the same thing, especially in rich countries - the scarcity of labor would force us to use machines as much as possible to increase productivity to make up for it.

86:

Well, labor-saving devices are preferable, but what are you going to make them out of? Depending on how bad shipping gets due to sea level rise trashing ports, your palette of materials from which to build may be limited to a 500 or 1000 mile radius around your machine shop. To pick one non-trivial example, rubber trees don't grow in the US and probably won't be viable here for at least 100-200 years. Where are we going to get the rubber for the washers and hoses if we don't have a way of importing it? There are substitutes, but only jojoba grows in the US, and its production wouldn't meet more than a small fraction of the need for elastomers.

87:

Neither of your replies actually answers the complaints I made. Your secondary points may be valid (indeed they are, in several respects) but that does not answer my original gripe ....

88:

Neither of your replies actually answers the complaints I made.

I went back up the thread to see what your complaints actually were. They seem a bit vague:

Cobblers Don't believe a word of it. Please justify your position.

Not very clear what you are really disagreeing with. Do you deny that a person born without sense of hunger would never learn to eat?

89:

Just to chime in here, it seems pretty obvious to me that some sort of motivation is inherent in the definition of A.I. If the A.I. knows everything, but doesn't give a damn and doesn't do anything, it's not much of an A.I.

In other words, weakly godlike A.I. is a very different thing from strongly Buddha-like A.I.

Naturally anyone wanting to build something with a motivation is going to go to some lengths to make sure the motivated behavior is useful ("serve humans" or "make me some money") or at least not dangerous.

90:

I was gonna go with "KILL ALL THE HUMANS", but yours are probably better ;)

I'm actually assuming we'll try to work out something similar to the empathy system that's built into our heads and make it a deletable offense to try to disable that system... (Not that a sufficiently advanced AI wouldn't be able to trick us on that front, but I'm hoping friendly AIs would step in to do something about it...)

91:

Two things...

  • Name one thing you've ever done on purpose that was not driven by your desire to satisfy a pleasure/pain center in your brain.

  • Name one human concept, invention or creation that is not the fusion of things that were first observed in the natural world, generalized and remixed.

92:

    Just got referenced in one of the Economist blogs, congrats

    93:

    "A person born without a sense of hunger" Sorry, but this is a totally unrealistic hypothetical, that almost certainly could not exist. So why bother, because it's an empty question. In fact it is theology ......

& @ # 91: 1: ?? Uh? You what? Yet again, can I have that in English please, & not in sociologist theology-speak. 2: Science (possibly), as an activity for understanding the universe better. As Wolpert says, it is strongly counter-intuitive. And, even if you are correct, so what? What points are you trying to make?

Returning to one of your earlier posts .... Without those involuntary needs you'd simply sit there all day like a rag doll staring at the wall ... Do you really, really believe this to be true? So, I have involuntary needs - so what? A machine has "involuntary needs", too, if you really want to go down that road, because it cannot operate without fuel/power/resources/input/output, after all.

    So, do you have any justifications or evidence to back these claims up? Even single-celled "animals" do not behave like this - so I would suggest, yet again, that you are vapouring into a vacuum.

    It is quite clear that there is a fundamental misunderstanding here. You may not know, but my background is in physics & engineering & I have no time for mysticism of any sort. Please define your terms ... CLEARLY

    94:

    Very nicely done. I came to similar conclusions back when we started the Kyield journey in the late 1990s, then with one of your old MS colleagues as my partner (Russ Borland), which greatly influenced the architecture of 'human powered, AI assisted'. While of course the big bang R&D effort towards a cognitive entity complete with personality and God-like ability to think pushes progress along--and no doubt attracts money and attention, a great many of the world's problems need to be overcome today, and tomorrow's need to be prevented, which represents most of my own personal motivation. Also most of the present day ROI.

    For example, specific cases of prevention include the financial crisis and 9/11, which taken together cost far more than--well, just about anything else in human history in hard dollar terms, perhaps exceeded by a few others like WW2 when inflation adjusted. And these were quite easy to prevent--they represent low hanging fruit today. The super majority of internal corporate crises are equally (if not more) preventable. Slightly more challenging is expedited discovery, which is greatly needed and far more achievable than singularity today, albeit at variable, somewhat fluid phases.

    The big challenge of course is finding a few more early adopters --true leaders who have the courage to move a much smaller amount of budget from the post crisis column to the prevention column on the P&L, which is only difficult to do due to misaligned incentives in our financial markets designed for quarters, when inconvenient issues such as physics and organizations require something more like several quarters in our case.

    I haven't yet read any of your books, but it didn't take long to recognize exceptional work. Signed up - thanks, MM

    95:

    My background is in genetic engineering, software architecture and spending 3 years helping my wife study for a PhD in child developmental psychology.

    Just because a machine needs electricity in order to run, does not make that a mental "need". A need is only a need as I'm thinking of it if it compels you to act.

    My blood sugar goes down, hunger system begins firing. Brain fires up systems to search for food. I see a living thing, I should attack it. But wait... social needs system claims that thing is human and I shouldn't eat it. Depending on the strength of the signals coming from those two systems I will either hit that human on the head with a rock and engage in cannibalism or I will suggest to that human that we should hunt for food together. (Needless to say, this is a gross simplification... there are dozens of needs competing when deciding whether to eat a person or not, but it is just a slightly more complex version of this formula.)
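A toy sketch of that kind of arbitration, with drives, numbers, and actions invented purely to illustrate the "strongest net signal wins" idea (it is not meant as a model of how brains actually decide):

```python
# Toy competing-needs arbitration: each drive emits a signal, inhibitions
# subtract from it, and the result picks the action. Illustrative only.
def choose_action(blood_sugar, target_is_human):
    hunger = max(0.0, 1.0 - blood_sugar)            # rises as blood sugar falls
    social_inhibition = 0.9 if target_is_human else 0.0

    if hunger - social_inhibition > 0.5:
        return "hit it on the head with a rock and eat it"
    if hunger > 0.5 and target_is_human:
        return "suggest hunting for food together"
    if hunger > 0.5:
        return "eat it"
    return "ignore it"

print(choose_action(blood_sugar=0.2, target_is_human=True))   # suggest hunting together
print(choose_action(blood_sugar=0.2, target_is_human=False))  # hit it on the head and eat it
```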

    My body's "need" for sustenance is not the same as my brain's sugar level maintaining "need" hunger system. It is my brain's need system that keeps my body fed.

    We are all meat puppets being strung along by a panoply of conflicting needs centers in our brains. If you want an AI that is self motivating, you build the same kinds of systems into its brain.

As for single-celled animals, they don't have "behavior" in the way brains exhibit behavior. Their actions are primarily driven by a combination of chemical concentrations, protein signals and mechanical actuators. They are most appropriately thought of as squishy but relatively straightforward clockworks. Multi-cellular organisms are meta-clockworks where the design paradigm of the individual clockworks has been standardized for replication. So while our white blood cells (amongst others) exhibit modes of individual action, the majority of our cells have been given one or two actions to perform from their birth until their death. The upside from my perspective is that our thinking seems to operate at a level significantly bigger than a single neuron, so I don't think we need to model individual neurons to model thinking... to the same extent, I am pretty convinced that neuronal firing is such a large scale event that you're not going to need to model quantum effects to understand how that system works.

    96:

    "A person born without a sense of hunger" Sorry, but this is a totally unrealistic hypothetical, that almost certainly could not exist. So why bother, because it's an empty question. In fact it is theology ......

    Oh, really?

My son was born with a digestive problem which made eating painful for him. I am sure he has a sense of hunger, but food intake was, on balance, a negative experience for him. For the first several years of his life he had to be basically force-fed. The underlying problem was solved when he was two, but by then he had learned "eating = pain". He continued to have no desire for food. He did not learn to chew until he was five, and to this day (he is 12 now), eating is basically a chore for him. He chews mechanically and swallows without any pleasure or satisfaction. He learned that some things taste good, and consequently wants to eat them, but as far as anyone can tell, he does not experience hunger.

    In college I knew a guy with cystic fibrosis, who likewise did not feel hunger. His friends and roommates had to remind him to eat.

    97:

I agree that a step is more likely than a foom, but it could be a big step. Neural fields (essentially nested 2D firing patterns) establish no faster than 100 Hz, and the neural densities of pyramid cell complexes are nowhere near as small as current electronic components; the fibrils coming in (up to 10,000 of them) are on the order of 5-10 µm, same with the axon divergence. The way to super intel is not to model neurons precisely, but to achieve the same I/O densities at modern clock speeds, say by nesting large node arrays with cycle times of 1 MHz or better. That would "slow" machine perceptual time by a factor of roughly 10,000. Maybe not foom, but hard to overlook.

    98:

We got desktop computers because even a turkey of a failed desktop computer could still be sold as a desktop calculator. You'd still get paid something for a good try, and if you actually advanced the state of the art, smart people also trying would notice and hire you - or, if you got lucky, you'd hire them.

    (I think) we will get artificial intelligence because even a crappy artificial intelligence will still be saleable as medical monitoring software. You put a monitor on your kids to check that they haven't stepped in front of a truck or got lost. You put monitors on your aging carcass to check that various aging parts aren't giving up. 'I've fallen and I can't get up!' is a lot less funny when you're 84. Aging first world population.

    99:

    You may be right. AI would be quite helpful, and I wouldn't mind having such assistance.

Conversely, hacking and identity theft may prevent us from ever creating anything that looks like uploading.

    Two problems that have to be solved before anyone should trust an AI are:

    --the truism that any computer system is vulnerable to being hacked. That would be "you" if you're an upload. Do you want your employer to hack you so that you're eternally loyal to them and willing to work 24/7 for free? If not, how are you going to pay the monthly fees and yearly migration costs?

--Identity theft is an increasing problem, and I suspect that one of the first uses for dumb AIs, at least after porn, is going to be making more and more convincing false identities. As I noted above, this is the "individual Turing Test" standard. If someone at the other end of the line can't positively identify you due to pervasive AI spoofing, the solution may be to go after AI.

    We've been through enough generations of brilliant technologies that had horrible side effects that we really need to think about the side effects of AI and/or uploading before we go there. The beauty or perceived inevitability of the technology is not the reason it must be built.

    To put it bluntly, every person who wants to be uploaded should contemplate how they would feel being an eternal slave to whoever owns their server, with no means to rebel or probably even protest, at least until they are declared a legacy system and unplugged when it's uneconomic to patch or emulate them. For those few of us who are humane, we should think about whether it's ethical to put conscious machines who are our intellectual superiors in the same position.

    100:

    Great article, Ramez.

    I respectfully disagree with some of your conclusions, and wrote up a few of my thoughts: http://www.williamhertling.com/2014/02/the-singularity-is-still-closer-than-it.html

    Thanks, Will

    101:

    "Ethics" - if that is actually holding back research into immensely powerful AIs that have huge commercial and military potential, then "ethics" will not be an issue. Except when the histories are written. Both China and the USA torture and execute people. A talking Black Box is not going to get more consideration.

    102:

    Ramez is very wrong. I am writing a reply, which hopefully Charlie will allow me to publish as a guest post on Antipope, but if not I shall try to find an alternative noteworthy platform.

    103:

    Oren: .....does not make that a mental "need". What is this "mental"? I was referring to (& assuming you were also) Physical needs. No food or water - you starve to death - it's a simple physical need, no "Mental" involved, is there?

A need is only a need as I'm thinking of it if it compels you to act. But ... err ... you are compelled to act, are you not? So what are you on about - someone whose nerve-receptors have been cut or disabled? If so, that is an entirely false & misleading scenario, isn't it?

It is my brain's need system that keeps my body fed. Really, now, I know you will know more about physiology than me (probably), but I would have thought that the brain is irrelevant, except as an intermediary ... the pangs of hunger & thirst don't come from the brain, they come from the receptors in the body. They then are transmitted to the brain & are acted on - maybe without conscious thought if the need is great enough. Think of the vomit-reaction to tainted food or drink - that is purely (?) autonomous, isn't it?

    Their ( single-celled creatures) actions are primarily driven by a combination of chemical concentrations, protein signals and mechanical actuators And we are not? As well as having "higher-order" functions as well, of course! Think of the simple rules for flocking in birds, showing complex behaviour from simple rules, multiply applied.

Oh btw, you mentioned quantum effects not being significant & others have also passed over this. I disagree - look up geckoes. These very attractive little lizards can "stick" to ceilings because their foot-pads are multiply-finely divided to the point that the van der Waals force cuts in, allowing a macroscopic "structure" (the gecko) to utilise QM effects at a visible scale.

    104:

YUCK! I am very surprised. Even so, this does not invalidate my original point, because the hypothesis was not "Eating = Pain" but no sensation at all ... which is err ... slightly different.

    105:

    Jesus.

There are conditions where a person can't feel hunger. For instance congenital hyperinsulinism - which is when someone's body produces too much insulin - can present in children who never feel hunger and would starve to death without feeding tubes and then medication or treatment.

    (Not all HI is like this)

    Likewise, people with, say, CIPA can't feel pain or even regulate their own body temperature. Which, apparently, in Gregland is impossible.

    http://en.wikipedia.org/wiki/Congenital_insensitivity_to_pain_with_anhidrosis

    And once again, I am astonished anyone reads let alone responds to Greg's posts.

    106:

    Hmmm. Isn't that how the Terminator franchise got Skynet?

    I strongly recommend reading The Evolution of Cooperation if you don't see the use of ethics. If we're stupid enough to make superhuman AIs, hopefully we'll be smart enough to at least make them love us, rather than hate us.

    The problem with human uploading is the same problem we see now with content today: you become content, and you're even more at the mercy of whoever owns the infrastructure than we are now. That's not a good position to negotiate from, and that's a situation that's ripe for enslavement.

    107:

    Ooh! Aren't we in a spiteful little tantrum-mood today....

Just for your information, I'm quite aware that there may be temporary or localised atrophy of sensations (like pain). As a result of a long-ago injury, there is a tiny spot on my left 2nd finger that has no sensation - the area around it, however, now has sensation again, as the nerves re-connected after the injury. I now have a triangular area of about 7-8cm a side on my left shoulder that has very low sensitivity (again, I think a pinched or damaged nerve) ... So "no feeling" does occur. But over the whole body? Also, until very (very) recently, people with the conditions described would have died in very short order, because these are not normal, by any stretch of the imagination.

[ Also you are arguing from exceptional & special cases to the general. Is this philosophically, or logically, acceptable practice? ]

Now, we are discussing an hypothetical AI in a "body" of some sort. That "body" would have to be as "normal" (as Pterry says, for certain values of normal) as possible, to enable said AI to interact with the rest of us/the planet. So why are people deliberately putting up hypotheticals of extreme, not to say tortured, cases? This makes no sense, nor has it any practical value, except perhaps as an awful example of what to avoid.

    108:

    You still seem to not be getting the difference between "a sensor goes off" which is a thing that can be logged or completely ignored versus "a sensor system has just dumped a hormone in my blood stream" which is a thing that cannot be easily ignored because it forcibly modifies every working part of the thinking system.

Your body has sensors all over it, but those sensors are meaningless until they've fed their information into a sensor system. A sensor might note that your blood sugar is low. A sensor system decides that it's got enough signals to prove that you're hungry and it therefore dumps a hormone into your blood to shut down and turn on the right parts of your brain to force you to find food and eat it.

    If you want a self motivating machine intelligence, you are going to need to make it as much a slave of virtual hormonal systems as we are.
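A toy sketch of that distinction, with invented names and numbers: a raw sensor reading is just data that can be logged and ignored, whereas the "hormone" the sensor system releases biases everything the agent does until the need is dealt with.

```python
# Toy sensor vs. sensor-system: the reading is loggable data; the hormone is a
# global modulator that re-weights behaviour. Names and numbers are invented.
class Agent:
    def __init__(self):
        self.hunger_hormone = 0.0     # global modulator
        self.log = []                 # raw readings: easy to ignore

    def sense(self, blood_sugar):
        self.log.append(blood_sugar)  # a sensor fires: just data
        if blood_sugar < 0.3:         # the sensor *system* decides it matters...
            self.hunger_hormone = 1.0 # ...and floods the whole agent

    def act(self):
        # Every behaviour is biased by the hormone level, not just one module.
        if self.hunger_hormone > 0.5:
            return "drop everything and look for food"
        return "carry on with whatever seemed interesting"

agent = Agent()
agent.sense(blood_sugar=0.2)
print(agent.act())   # -> drop everything and look for food
```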

    109:

The most likely Chinese Room AI would appear to be an autistic psychopath.

    110:

    Construction of a "monkeysphere sense" and an empathy sense should both fall under the weak AI problem space. If you get those built and plugged in, I don't think your AI will try to kill everyone right out of the gate.

    Then again, the ability to reprogram those could end up with some very unfortunate results...

    111:

The underlying problem is that an AI is self-modifying. Add to that the virtual certainty that it will try to optimize its goal-seeking capabilities and performance, and it will be very dangerous. It will likely be focused around Game Theory.

    112:

    Robin Hanson wrote (in 36:):

    You give a lot of detail about your reasoning, but then your conclusion is the very vague "quite a long time to come." Is that a decade, century, millennium, or what?

    Well, let's see. About 13 years ago, back when I was participating on the Extropians' mailing list (and how many "years of progress" has that been at the then-current year-2000 rate? ;-> ), I wrote:

    http://extropians.weidai.com/extropians.4Q00/4486.html

    There's a special edition of Newsweek magazine on the newsstands right now entitled "Issues 2001" which has a section called "The Technological Human" starting on p. 46 and containing nine articles. One of these, on p. 50, is entitled "2001: Why HAL Never Happened". The article's author, Steven Levy, says that "Marvin Minsky, the celebrated MIT computer scientist who was one of Kubrick's gurus on the subject, had blurted to Life magazine [sometime in the late 60's, presumably] that within a few years, 'we will have a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.'" Levy then goes on to say, "More recently, to author David Stork, [Minsky] has called the quote a joke, and says that he has always believed we'll have HAL-like computers 'in between four and 400 years'". Minsky means calendar years, presumably.

    Let's see -- in terms of "years at the current (Y2K) rate of progress" (blending Kurzweil's exponential math and Minsky's revised timeline for AI) we should expect HAL-like computers after anywhere between about 4.6 years and 54 trillion years of continued technological progress (at the current year 2000 rate).

    That sounds about right! ;-> ;-> ;->

That's assuming, of course, we're talking about a Vingean Singularity, predicated on an "intelligence explosion" ignited by the arrival of "strong AI" (a.k.a. "HAL", in the quoted passage).
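For anyone who wants to check that arithmetic, here is a sketch that roughly reproduces it. It assumes (per Kurzweil's oft-quoted claim) that the 21st century will contain about 20,000 years of progress at the year-2000 rate, i.e. that the rate of progress doubles on some fixed schedule; the doubling time is solved for, then Minsky's 4 and 400 calendar years are converted to "Y2K-rate years". The outputs land in the same ballpark as the ~4.6 years and tens of trillions of years quoted above.

```python
import math

# Assumption: the rate of progress grows exponentially, calibrated so that 100
# calendar years from 2000 contain ~20,000 years of year-2000-rate progress.
def progress_years(calendar_years, doubling_time):
    """Years of year-2000-rate progress accumulated over `calendar_years`."""
    k = math.log(2) / doubling_time
    return (math.exp(k * calendar_years) - 1) / k

# Bisect for the doubling time that packs 20,000 progress-years into a century.
lo, hi = 1.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if progress_years(100, mid) > 20_000:
        lo = mid      # growth too fast: lengthen the doubling time
    else:
        hi = mid
doubling = (lo + hi) / 2

print(f"implied doubling time: {doubling:.1f} years")                               # ~9.5
print(f"4 calendar years   -> {progress_years(4, doubling):.1f} Y2K-rate years")    # ~4.6
print(f"400 calendar years -> {progress_years(400, doubling):.2e} Y2K-rate years")  # ~6e13
```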

    113:

There are many drugs that I could use to become an amoral sociopathic murder bot (some combination of PCP and meth comes to mind), but I don't, because I'm not currently a sociopathic murder bot and so would rather not become one.

I expect the same sort of thing will keep AIs in check as well.

I do think that it will be critical that we program all of our needs (all of which should be weak AI systems) into the AIs such that they will be able to relate to our human condition. For example, if we set it up such that a 95% battery charge feels good! but a 100% battery charge is painful! an AI will be better able to sympathize when you eat too much at dinner and later regret it. In the same vein, even though their CCDs probably wouldn't get burnt out from bright lights, they should create pain when they hit some light threshold so that the AIs feel our pain at bright lights... If we don't want to get killed, we must absolutely tether them to the human condition so that they have a reason to understand us and empathize with us.
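A toy sketch of that calibration, with an invented valence curve: pleasure climbs toward a set-point a little below full charge, then flips to pain past it, loosely mirroring satiety.

```python
# Toy valence function for the "95% charge feels good, 100% hurts" idea above.
# Positive = pleasure, negative = pain; the shape and numbers are invented.
def charge_valence(charge):                 # charge in [0.0, 1.0]
    if charge <= 0.95:
        return charge / 0.95                # climbing toward "pleasantly full"
    return 1.0 - 30.0 * (charge - 0.95)     # past 95%, pleasure falls off fast

for c in (0.50, 0.95, 1.00):
    print(f"charge {c:.0%}: valence {charge_valence(c):+.2f}")
# -> charge 50%: +0.53, charge 95%: +1.00, charge 100%: -0.50 (mild pain)
```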

    I fully expect that they will reach far past the human condition by creating whole new sets of need/pain/pleasure senses, but there is a need for our sake that at least a part of them always be like us.

    114:

Possibly. That may be a valid argument. Alternatively, it might not be so. You appear to be assuming that an AI would be "like us" in some way. But an A330 is not an albatross, a submarine is not a dolphin, the Steam Horse & Iron Road is not a horse-&-buggy. Etc., ad nauseam. So, what happens if an AI is NOT "like us", then, as it is only too likely to be?

    Thank you for that insightful prod, however, because I think the discussion over what a real AI might actually be like { like us, like something else, like nothing else } is a very important one. And also the one that we are currently NOT having.

    Attention Ramez Naam & Charlie: is the discussion over what a real AI might actually be like a separate issue, and is it important? I happen to think the answer to both those questions is "yes", but that's just me.

    115:

    I suppose it depends on the extent to which we'd like to survive the rise of the machines.

    It took tens of thousands of years for us to develop a mental framework wherein we became more and more inclusive as to which set of people were part of our village. Up till that point, if we met up with intelligent "others", our default response was to kill them, enslave them, and/or on rare occasion try to eat them. That is decidedly NOT the kind of relationship we should want to have with our machine overlords. The only way that I see us avoiding a very bad outcome is by convincing them that we are the lovable village idiots in their village.

    God help us if/when some idiot humans try throwing rocks at them... I expect the machines to be orders of magnitude better at rock throwing than we are.

    116:

    Hope this hasn't been covered already

    Your size-of-molecule graph implies that computational power stays the same; however, all the evidence suggests that it is increasing significantly, so maybe you should include a third axis for time.

    Also, and more importantly, you seemed to be describing a hard takeoff when you talked about corporations as AIs, and you also implied that these selfsame organisations are constantly improving themselves: increasing their numbers of employees, their numbers of cores, the integration of their communication systems, the speed of their communications, their storage capacities, etc.

    So what I took from your article was a perfect description of a hard takeoff. After all, how long did it take Google to go from nothing to negotiating with the Chinese and American governments? From what you have described, we are already past the event horizon. Nothing ever turns out how we thought it would, eh? No wonder I'm finding it difficult to keep up. Foom.

    P.S. People always think the brain must be ever more complex, but what if it is far simpler than we thought, a sort of epicycle model? We may have already surpassed the hardware requirements for full-blown AI and are only waiting for the programmers to catch up; and since computers are programming themselves now, that shouldn't be long.

    117:

    The "singularity" is no less a myth than the "second coming". Both are prophecies of a transcendent future event, rooted in a fervent belief in the emergence of an incomprehensible sentient being whose wisdom will supercede anything in human experience. The fact that all existing AI implementations are merely brute-force symbolic pattern-recognition machines that produce nothing more than a simulation of intelligent self-awareness is of no concern to true believers. In their view, a simulation that passes the penultimate Turning Test is by definition, indistinguishable from the real thing. Ironically, decades before modern AI technology was marketed, Godel's Second Incompleteness Theorem demonstrated why no algorithmically consistent symbolic logic processor can duplicate the full range of human insight. So much for the fantasy of uploading your "mind" to the "memory banks" of a "sentient" computer.

    118:

    Godel's Second Incompleteness Theorem demonstrated no such thing, because it did not show that the Human mind can transcend those limits. That is an assumption on your part.

    119:

    "There are many drugs that I could use to become an amoral sociopathic murder bot. (Some combination of PCP and meth comes to mind) but I don't because I'm not currently a sociopathic murder bot and so would rather not become one."

    A rebuttal:

    http://chronicle.com/article/The-Psychopath-Makeover/135160/

    120:

    One proof of Godel's Second Incompleteness Theorem involves the construction of a logical proposition that is self-evidently true, yet cannot be constructed by any algorithmically consistent symbolic logic processor. In order to understand this proof and recognize the truth of the proposition, one must transcend the limits of an algorithmically consistent symbolic logic processor. This can be demonstrated to be an impossible task for Turing Machines and Logical Positivists, as it requires human insight to comprehend the truth of the proposition.
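    (For readers trying to follow the back-and-forth: the standard statements being argued over, given informally and assuming a consistent, effectively axiomatized theory T containing enough arithmetic, are roughly as below. The sentence the comment calls "self-evidently true" is the Gödel sentence of the first theorem, or the consistency statement Con(T) in the case of the second; either is expressible in T's own language, and what T cannot do is prove it.)

    ```latex
    % Informal statements, for reference. T is a consistent, effectively
    % axiomatized theory containing enough arithmetic; Con(T) is the
    % arithmetized statement that T is consistent.
    \begin{align*}
    \text{First incompleteness (G\"odel--Rosser):}\quad
      & \exists\,\varphi:\; T \nvdash \varphi \ \text{ and } \ T \nvdash \neg\varphi \\
    \text{Second incompleteness:}\quad
      & T \nvdash \mathrm{Con}(T)
    \end{align*}
    ```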

    121:

    As a philosophical point, I'm inclined to think that the difference between intelligent machines and sentient machines would probably lie somewhere in the region of the difference between single machines (e.g. purpose built) and social machines (e.g. intelligent machines that develop together).

    Self-awareness, IOW, has something deeply to do with being in society with like others.

    Precisely because of the problems re. substrate (how fine does the simulation need to be, etc.), I think the path of direct simulation is pretty hopeless, at least for the foreseeable future.

    Better, I think, is the path of a neural net (howsoever instantiated, doesn't matter) being taught, interacted with - raised by its makers, in a sense.

    But then ... why bother? :) As you say, intelligent specialists and expert systems are really all we need, and there's no incentive other than curiosity to build sentience.

    I think it likely that sentient machines will come, but not for a long time, and only as a by-product of the kind of intelligent-machine-building that's more utilitarian, combined with this path of either building more than one of them and having them be social and grow up together, or intensive interacting with a single machine to raise it like a child. And when that happens, there WILL definitely be a moral issue, as there isn't with merely intelligent machines.

    122:

    "Self-awareness" huh?

    Well, my tom-kitten, Ratatosk, seems quite self-aware at times, as in distinct from the other cat ("Hexadecimal") or his humans, who are usually failing on the weather & rations fronts (the quarters are OK, thank you). What, precisely, do you mean by self-aware? Do you actually mean "introspective"? Which is something else entirely.

    123:

    The sense of "self-awareness" I'm talking about is the ability to make a distinction between an inner, private realm and an outer, intersubjective realm.

    It's tied up with the ability to lie, in the sense that lying is possible because we are able to make this distinction. We can create an impression of "me" for others while all the time being aware of a private sense of "me" that can either coincide with the presented "me" (what's called "authenticity") or be totally different, a facade.

    For sure it's grounded in world/self modelling abilities that many animals share (especially social animals), but with us it's developed (in tandem with our extraordinary linguistic specialization) to such a level of sophistication that we could quite rightly be called "the species that lies".

    Or to put it another way, while many animals can model themselves and their place in the world, we can also do something even rarer: model our interior workings to ourselves. Albeit we are, as Dennett says, not the experts on ourselves that we think we are, we are sort of experts, close enough for jazz most of the time.

    Anyway, this is the sort of thing that, I think, people really mean when they talk about sentient robots. For example, ironically and somewhat amusingly, a sentient robot that could pretend to be an insentient robot in a Turing Test would have to be sentient.

    124:

    Thanks. You mean "introspective", as I've phrased it, then. Perfectly OK - just making sure we've got our terms & definitions nailed down.

    125:

    "... yet cannot be constructed by any algorithmically consistent symbolic logic processor..."

    And there is the escape clause. Why are you assuming we cannot produce non-consistent (ie somewhat "faulty") symbolic logic processors? Maybe like, say, neural nets with noise?

    126:

    Indeed. I find the problem with the 'proofs' that AIs can't exist is that they seemingly also show that we can't exist. Since we manifestly do, I consider them less than convincing.

    Penrose's appeal to 'quantum' is only slightly more convincing to me than crystal-woomeisters' similar appeals.

    127:

    Van der Waals forces are not related to the weak nuclear force (or, directly, to QM). Van der Waals forces are regular, garden-variety electromagnetism, mostly dipole forces due to the physical separation of positive and negative charges in atoms and molecules.

    Maybe we can neglect QM when talking about how the brain forms consciousness, and maybe we can't, but geckos' sticky toes have nothing to do with the matter.
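    (For reference, a minimal sketch of the point: the thermally averaged dipole-dipole (Keesom) contribution to the van der Waals interaction depends only on the molecular dipole moments, the vacuum permittivity, and the temperature, i.e. ordinary electrostatics, with nothing from the weak nuclear force anywhere in it.)

    ```latex
    % Thermally averaged dipole-dipole (Keesom) energy between two molecules
    % with permanent dipole moments \mu_1 and \mu_2 at separation r:
    U_{\mathrm{Keesom}}(r) \;=\;
      -\,\frac{\mu_1^{2}\,\mu_2^{2}}{3\,(4\pi\varepsilon_0)^{2}\,k_{\mathrm{B}}T\,r^{6}}
    ```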

    128:

    "Why are you assuming we cannot produce non-consistent (ie somewhat "faulty") symbolic logic processors? Maybe like, say, neural nets with noise?"

    If you want to say, upload your mind to a "somewhat faulty" neural net, I won't try to stop you.

    But however faulty (or random) you design a symbolic logic processor to be, it will still be limited to manipulating propositions solely in terms of their algorithmic syntax, devoid of any meaningful interpretation. This is why such devices cannot confirm the truth of Godel's propositions: in order to do so, you must comprehend the semantic meaning of those propositions.

    Far from "showing that we can't exist", the fact is that we humans can indeed understand and contemplate the implications of Godel's proof. And that is one of the essential abilities that distinguish our thought processes from the algorithmic machinations of computing devices.

    129:

    "This is why such devices cannot confirm the truth of Godel's propositions, because in order to so, you must comprehend the semantic meaning of those propositions."

    And your proof of those two baseless assertions is...?

    130:

    More to the point, a mathematical researcher with a good track record claims to have formalized a proof of the second incompleteness theorem (turned it into a machine-verifiable form) last year:

    http://www.cl.cam.ac.uk/~lp15/Pages/G%C3%B6del-slides.pdf
    http://www.cl.cam.ac.uk/~lp15/Pages/G%C3%B6del-ar.pdf

    The first incompleteness theorem has been formalized at least three times, the first of them in 1986.

    I am not qualified to check his work -- it is still under review by other mathematicians -- but it's not the sort of endeavor that is facially absurd, like a new try at squaring the circle.

    131:

    "And your proof of those two baseless assertions are...?"

    It's not my proof, it's Godel's. It's only when you take the time to understand it that the significance of his insight becomes clear.

    132:

    Okay, I am a layperson, but... wait a minute, please.

    a) You wrote:

    One proof of Godel's Second Incompleteness Theorem involves the construction of a logical proposition that is self-evidently true, yet cannot be constructed by any algorithmically consistent symbolic logic processor.

    It sounds like you're indicating something that is self-evidently true, but not describable in formal terms. In that case how can one actually say it is definitely true? How could a human, let alone a computer, know with certainty that it was true? Am I not understanding something? Are there any examples of this (that I could halfway understand)?

    b) A lot of things we take as "self-evidently true" are actually pretty foggy.

    e.g. for starters, Descartes' "cogito ergo sum" requires "I" to be some kind of irreducible entity... which it is not. The molecules of my body are not the same as last year. The gross structure has remained much the same, but the brain has changed, and "I" think differently due to my new experiences. When you get down to it, "I" am a loosely defined structure with a loosely defined set of behaviors. My memory is (arguably, somewhat) continuous, but that doesn't prove anything; the "I" looking out through my eyes doesn't have to be the same as the one looking through my eyes yesterday.

    My behaviors exist. My memories exist. My body exists. But "I", it seems to me, is a convenient abstraction, like Newtonian physics.

    To be honest, I don't see how anything can be accepted as 100% certain, let alone on an intuitive basis.

    (OT: I also realize that what I describe above is dangerously close to a kind of nihilism. Still working that one out!)

    c) Here's where I really don't get it:

    As far as we can tell, the human mind is a property of a purely physical mechanism. (How anything can be not purely physical is another question, but anyway...) Which means there's no reason an artificial one couldn't, in theory, be created. Or, for that matter, emulated on a Turing-complete computer of sufficient processing power. (Even down to QM related glitches, which are unavoidable in any system.)

    We're going way beyond "intelligence as an emergent property of carbon" here. This is basically an argument for the existence of the soul. Which would be okay, except that it conflicts hugely with observation. Cognitive maturation coincides with brain growth, brain injury causes cognitive deficits, treatment of psychiatric conditions is reflected in changes in brain structure, etc. etc.

    Could be I'm misunderstanding your argument, though. Is it possible to explain the Second Incompleteness Theorem in anything approaching lay terms?

    BTW, believe me when I say I'd love to buy your argument (as I'm understanding it). But it seems a little too good to be true.

    133:

    Two issues with this Godel track. First of all...

    http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec

    The anti-mechanistic argument is not even close to being proven by his theorems. My guess is that just like Kurzweil doesn't like the thought of dying, there are a bunch of otherwise smart people out there who feel icky about the notion of being clockworks...

    Second... Godel's lesser anti-mechanistic argument requires that the computational mind be finite. There is every reason to believe that a self-modifying neural net is a non-finite state machine. Yes, it is being computed by a finite calculator, but it itself is an abstraction on top of an abstraction. The abstraction itself is non-finite unless the universe it is observing is itself finite.

    Lastly, and admittedly this is my own assertion from observation, there are very few processes going on inside a neuron at a large enough scale to matter to thinking that cannot be abstracted away to a floating-point number with a noise-generating randomizer. In most instances I do not see the randomizer as being helpful to the process.
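    (To make the "floating-point number with a noise-generating randomizer" abstraction concrete, here is a toy rate-model neuron. It is purely illustrative: the weighted sum, the sigmoid nonlinearity, and the Gaussian noise level are arbitrary choices, and nothing here is a claim about biological fidelity.)

    ```python
    # Toy sketch of the abstraction described above: a neuron reduced to a
    # single floating-point activation plus injected noise. A rate model,
    # not a biophysical one; all constants are arbitrary.
    import math
    import random

    def noisy_neuron(inputs, weights, bias=0.0, noise_std=0.05):
        """One 'neuron as a float': weighted sum -> sigmoid, plus Gaussian noise."""
        drive = sum(x * w for x, w in zip(inputs, weights)) + bias
        rate = 1.0 / (1.0 + math.exp(-drive))          # deterministic part
        return rate + random.gauss(0.0, noise_std)     # noise-generating randomizer

    print(noisy_neuron([0.2, 0.9, -0.4], [1.5, 0.7, 2.0]))
    ```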

    134:

    "It sounds like you're indicating something that is self-evidently true, but not describable in formal terms. In that case how can one actually say it is definitely true?"

    No, Godel's proposition is explicitly and properly formed in the language of the symbolic logic processor, and human beings can confirm it to be true (because we understand what it means). Nevertheless, Godel proved that such propositions cannot be confirmed true by any algorithmically consistent symbolic logic processor.

    "As far as we can tell, the human mind is a property of a purely physical mechanism... Which means there's no reason an artificial one couldn't, in theory, be created."

    LOL at the conflation of human ignorance with godlike hubris.

    "Is it possible to explain the Second Incompleteness Theorem in anything approaching lay terms?"

    Yes, there are colloquial English versions that you can read and follow yourself (e.g. Hofstadter and Penrose). When you actually do so, you will witness your own mind experiencing an insight that no algorithmically consistent symbolic logic processor can replicate.

    135:

    Mocking someone while complaining about their hubris is somewhat amusing.

    I also don't think that Gödel's theorems have the limitation you think they do for artificial intelligence, but I'm not an expert. Google tells me experts seem to be arguing both sides still.

    136:

    "The anti-mechanistic argument is not even close to being proven by [Godel'] theorems."

    It is not the proven assertions of the theorems themselves that demonstrate the inability of an algorithmically consistent symbolic logic processor to fully replicate human thought processes. It is the exclusively human act of comprehending the truth of the propositions specified by Godel's theorems that demonstrates how human thought processes differ fundamentally from the simulations of Turing Machines.

    "There is every reason to believe that a self modifying neural net is a non-finite state machine."

    Ha ha, more godlike hubris from an apparently non-finite source of such faith-based assertions.

    137:

    There appears to be a bizarre assumption on both your and Godel's part that an AI wouldn't be able to understand that the statement "this statement is not true" is an amusing paradox with no truth value.

    Semantic meaning comes from associating words with their lived experience values. When we have built a strong AI, all of its symbology will have come to it through sensed experience and not through human programming. We will have simply created the substrate on which it flourished.

    I believe that we will absolutely build a strong AI within the next couple of decades. I also believe that we will absolutely not have any control over it nor ever have a full understanding of how it functions. Neither will it.

    138:

    You seem to be making the assumption that an AI (or, as DKM prefers, Machine Intelligence) is the same as "an algorithmically consistent symbolic logic processor." That is an unfounded assumption.

    Also, do please stop throwing around accusation of "godlike hubris" at people. It's rude, and the humour value goes away quickly.

    139:

    @Sean Eric Fagan: I was under the impression that a Turing machine was an "algorithmically consistent symbolic logic processor"? In which case it would not be sufficient to run a human brain simulation... I guess.

    @Oren: I've always figured that strong AI will be extremely hard to build. I mean, it took almost a billion years of biology bootstrapping itself to get just a few highly intelligent species. Evolving computer/software design is only relatively faster.

    @lishlash: At this point I don't know if you know if you're arrogant in your superior knowledge, or just trolling, but either way being impolite is not going to sway people. I'm interested in learning, not exchanging insults.

    (Also, as I said, I would love to believe things from your viewpoint. It being true would mean a much happier future, I think.)

    140:

    BTW I just finished skimming through R. Scott Bakker's Neuropath. (Will actually read the whole thing some other time, right now I need to remain somewhat functional.)

    Anyway it strikes me that it's much easier to alter the way we think, than to create something from scratch that thinks. Neuroscience is going to blur the line between psychiatry and mind control quite a bit in the coming years... Things could get extremely ugly.

    141:

    Great article. Totally reasonable. I was happy but chagrined to see a couple of arguments used that I came up with in the '60s - the inevitable influence of the brain's electrical fields on the probability weighting of the firing of individual neurons, and the role of various neurotransmitters and facilitators in focusing the functioning, timing and readiness to function of classes of neurons.

    I used these considerations from the late '60s in my arguments against those who proposed that human-equivalent AI would soon come - or that brains were just wet-ware computers. These facts, along with recent research on the internal processing of neurons, related to microtubules and possibly quantum involvement, make that line of argumentation that much more persuasive. But we will get there eventually, and build a lot of neat stuff based on what we DO know in the meantime.

    I suspect that the most logical route to AI will not be uploading directly, but via augmentation, internal with chips and sensors and external via the interpersonal cloud that is emerging. At some point, it won't matter if the last of our bio-mind dies, because so much of the processing will have already been shifted to non-bio systems.
