Back to: Why AIs Won't Ascend in the Blink of an Eye - Some Math | Forward to: Can We Merge Minds and Machines?

A Rebuttal of 'The Singularity is Further Than it Appears'

My friend and fellow science fiction author William Hertling disagrees with me that the Singularity is further than it appears.

Will has spent some time thinking about this, since he's written three fantastic near-future novels about a world going through an AI Singularity.

He's written a rebuttal to my "The Singularity is Further Than it Appears" post.

Here's his rebuttal: The Singularity is Still Closer Than It Appears.

If you've seen other thoughtful rebuttals or responses out there, please leave links to them in the comments.

Ramez Naam is the author of Nexus and Crux. You can follow him at @ramez.

24 Comments

1:

Exponential improvements in artificial intelligences may be a physical certainty - see Tipler's omega point cosmology

http://en.m.wikipedia.org/wiki/Frank_Tipler

It means we are probably living in a simulated reality

http://www.simulation-argument.com

2:

Ahahaha!

I read "The Physics of Immortality".

Up-side: it makes testable predictions.

Down-side: it's an astrophysicist trying to square the circle of Christian doctrine with modern cosmology. Two flavours that go together to make something not unlike dog shit vindaloo.

(No, seriously, it goes barking hat-stand off-the-wall so fast it's not even funny. I've got some time for Nick Bostrom, but Tipler? Please don't go there.)

3:

Yup, Tipler once upon a time made sense. Now, he hasn't so much lost his marbles as thrown them into a concrete-mixer & expected the result to be the Glenfinnan Viaduct.

However, a much more likely route to a slow singularity (see previous posts elsewhere) is through augmentation + medical research + life-extension + increased parallel-computing power (or should I say cross-linked computing power?).

4:

Not sure about Glenfinnan Viaduct, but given appropriate formers, the result could definitely be Horseshoe Curve. ;-) What? All I've done is pick a viaduct that definitely is one of Concrete Bob's!

5:

I don't have any time for Christianity either, but the slowing of subjective time in a universe that contracts is a plausible mechanism for exponential increases in computational power.

David Deutsch has time for this, but also suggests that massively parallel computing across all the instantiations of the multiverse using quantum computers is a runner too. He's got some testable hypotheses for this as well.

There are also intriguing possibilities with evolutionary universes that are selected to produce black holes - http://www.ox.ac.uk/media/news_stories/2013/130504.html

If we create black holes and they spawn self-contained universes, this increases the probability of one of those universes solving the computational problems of creating fast AI. One of those universes will develop Vinge-type AI and thereby confirm the simulation hypothesis.

Overall, if there is any room anywhere for exponential increases in computing power, we're likely in a digital simulation.

6:

There seems to be a common pattern in arguments about what superintelligent AIs will do:

1. It would make for a cool SF story if superintelligent AIs in the future did X.
2. ???
3. Therefore, in the future there will be superintelligent AIs and they will do X.

Sometimes it seems like the certainty increases the more ridiculous the hypothetical feats get. (Bring back the dead by simulating all possible universes? Sure, why not?)

7:

Yes! ("On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.")

8:

Yep.

Imagine, for example, what the NSA would do with a bunch of AGIs hardwired into their data centers, with no way to complain about what they're told to find in the data they analyze.

The argument that something is technically possible and therefore must happen has been used as a chew toy around here, and I'm perfectly willing to keep that up.

9:

I would say that a "hard take-off" is not out of the question; but my feeling is that we will probably see a gradual ramp-up of AI capability from one generation to the next. If I were to make a case for a hard take-off, I would argue that it might arise from algorithmic improvements, not from improvements on the hardware side or even from new AI theories. The problem with this, however, is that nobody can predict when the improvements will occur.

In what follows, I will explain how some of these algorithmic improvements could work. This discussion will be technical, and will include a few references. Apologies in advance for this. The high-level details should be easy enough to read without much background:

It is true that many existing optimization algorithms don't scale well -- they either scale exponentially, or scale polynomially (and much worse than linear). If, however, hard optimization problems could be solved very rapidly, then at least some of the components of an AI's brain (e.g. Max-SAT solvers that might serve as part of a reasoning engine) could work dramatically faster than the previous generation's using the same amount of computing power. Other components that don't depend on hard optimization (e.g. search through memory) might not improve so rapidly. Overall, however, the next-generation machine might be much smarter than the previous one; and, depending on how quickly these algorithms could be discovered (by the machines themselves, in later generations), it could resemble something like a hard take-off.

Those algorithms need not require that P = NP. There are several other ways that you can get tremendous improvements on your particular problem:

  • It could still be that P is not equal to NP, but that a new algorithm solves a particular hard problem in sub-exponential (though still non-polynomial) time. And this really does happen -- e.g. there was a recent breakthrough algorithm for solving a particular class of discrete log problems, problems which, although not directly relevant to existing cryptosystems, were still widely believed to be intractable:

http://ellipticnews.wordpress.com/2013/06/21/quasi-polynomial-time-algorithm-for-discrete-logarithm-in-finite-fields-of-smallmedium-characteristic/

Discrete Log in this setting, or any setting, is not known or even believed to be NP-complete; but, as I said, it was believed to be intractable.

And there was the recent success on improved Max-Flow (not a problem requiring exponential time using the previous best algorithms, but still one where any improvement ripples through to lots of applications -- see the toy sketch after this list):

http://phys.org/news/2014-01-algorithm-solutions-max-problem.html

Some other examples include fast matrix multiplication, sparse FFTs, the Knapsack Problem, and the Approximate Planar Traveling Salesman Problem.

There are even algorithmic improvements of this kind that are directly applicable to A.I.:

http://arxiv.org/abs/1310.6343

And while we are on the subject of neural nets: some algorithms are starting to look brain-like in their ability to do fast object detection and work out invariant representations -- how easy would it be to tweak them to make them better than human?

http://media.nips.cc/nipsbooks/nipspapers/paper_files/nips26/1417.pdf

  • There could be fast approximation algorithms. This is not true in general for NP-complete problems, as there are ways of using something called the PCP Theorem to prove that, for some problems, even getting approximately the right answer is as hard as getting the exact answer. But for some optimization problems, you can indeed get a good approximation efficiently.

  • The instances of optimization problems that are important to real-world applications may turn out to be easy, even if the general case is hard. For example, even though the Knapsack Problem is NP-complete, for a randomly-generated instance where the numbers you are working with happen to have many digits relative to the number of numbers, you can solve the Knapsack Problem very efficiently, with high probability. See, for example, the work of Lagarias and Odlyzko. This, and earlier work by Shamir, undermined trust in the basic Merkle-Hellman Knapsack encryption scheme; later knapsack encryption algorithms may be more secure.

Another class of examples here where many real-world instances are "easy" is SAT (= Satisfiability Problem) problems: SAT-solvers now routinely handle formulas with millions of variables and clauses (a minimal solver sketch appears after this list). In the mid-to-late '90s they were only able to handle formulas with a few hundred variables and clauses -- they've come a long way.

Yet another is certain classes of Planar Traveling Salesman problems -- see the work of Bill Cook.

  • Researchers may simply change the problem: there are two basic ways that researchers deal with hard optimization problems. They either confront the problem directly (perhaps seeking approximations to the optimal), or they change the problem to one whose solution would still be useful to the larger problem lurking in the background. It may be that whole new frameworks of optimization problems present themselves, where the solutions are easy to find and the applications to AI are immense.
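To make the Max-Flow point above concrete, here's a minimal sketch in Python of the kind of call site such an improvement ripples through, using the networkx library (the toy network, capacities, and node names are invented for illustration; this is not the new algorithm itself):

    import networkx as nx

    # A toy flow network: edges carry 'capacity' attributes.
    G = nx.DiGraph()
    G.add_edge('s', 'a', capacity=4)
    G.add_edge('s', 'b', capacity=2)
    G.add_edge('a', 'b', capacity=1)
    G.add_edge('a', 't', capacity=3)
    G.add_edge('b', 't', capacity=4)

    # networkx picks a max-flow algorithm internally; a faster algorithm
    # swapped in here would speed up every caller with no code changes.
    flow_value, flow_dict = nx.maximum_flow(G, 's', 't')
    print(flow_value)  # 6 for this toy network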
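Likewise, a minimal sketch of driving a modern SAT solver, assuming the third-party python-sat package (imported as pysat); the three-clause formula is invented for illustration, and production runs feed in millions of clauses the same way:

    from pysat.solvers import Glucose3  # assumes the 'python-sat' package

    # DIMACS-style integer encoding: a positive number is a variable,
    # a negative number is its negation. The formula below is
    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3).
    solver = Glucose3()
    solver.add_clause([1, -2])
    solver.add_clause([2, 3])
    solver.add_clause([-1, -3])

    if solver.solve():
        print(solver.get_model())  # a satisfying assignment, e.g. [-1, -2, 3]
    solver.delete()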

10:

Apart from Defense, I think that space exploration is the field likeliest to involve and benefit from AI. So given this scenario, what do you want that AI to be like, and what types of decisions should you allow it to make? Now, let's say that AI=Ender ...

11:

I tend to think you're right--if we get that far. The problem with human evolution to date is that the evolving system has included the Earth, and we're not at all well built to live away from Earth.

Still, the problem I cited a few posts back, about how hard it is to build an AI out of locally sourced raw materials, becomes rather worse when those raw materials must be extracted from an asteroid or a comet. Planets are rather handy, in that certain geological and biological processes concentrate certain minerals, and certain other biological processes (like rubber trees) provide very useful industrial ingredients (like rubber and other insulators). When you're away from the planet, you don't have any of these processes to rely on.

To pick one example, separating atoms of necessary rare earth elements out of homogeneous asteroids is a non-trivial problem, especially if you have to fly all the equipment to the rock and minimize payload weight. It probably makes filtering gold out of seawater look like a paying proposition.

That said, if we loose asteroid-colonizing AIs onto the universe, I'd suggest two ideas. One is that we need to build them to love us unconditionally. Whatever you think of keeping orcas in captivity, the only way to train them is by positive feedback, and we'll need to treat any AI the same way. It's much simpler and far more useful to have them fawning over us than trying to dominate them first and then trying to destroy them later.

The other thing is that I hope that future space-faring AIs find it useful to set up terraria--small Earth biospheres capable of supporting humans--as demonstrations of their wealth, power, and technology, much as rich humans build opera houses and similar extravagances. That's the most likely way humans will ever colonize other planets, by, erm, being great pets.

12:

I agree it's a forced comparison, but in some ways he may be correct. If simulation is to be used to resurrect the dead then it is extremely likely there will be a "Judgement Day", simply because we don't want some of them back.

As a corollary of the Halting Problem, it may not be possible to do a resurrection in a simulation without running through an entire life simulation; i.e. you probably cannot just "jump to the end" and resurrect the "just dead" point of the past.

13:

There is one other factor you do not take into account. When someone produces the first human-level AGI, there will be an "arms race" to create superhuman AGI. How much hardware can $10 trillion buy? Because no price will be too high for the perceived edge that it will give its "owner". The cost of the Iraq Wars? Peanuts...

14:

I don't subscribe to any of this resurrection stuff. I just suggest it is very possible we are digital simulations in a very realistic simulation of a physical universe that existed billions of years before the period in which the simulation's authors wrote it.

15:

And I think most of us find that suggestion the most egregiously odd flouting of Occam's Razor ever. With the possible exception of the Boltzmann Brain hypothesis.

16:

Or it could be a paraphrasing of Douglas Adams?

17:

Okay, so we 'socialize' our AI ... at some point our AI will notice that its Mom/Dad are virtually indistinguishable from the other 7+ billion 'mostly bags of water'. First, who is going to be chosen as Mom/Dad, and what will be the rationale for that choice -- if it's an individual or group of individuals? And will that rationale remain sufficient for our AI to continue taking direction from 'Mom/Dad' as it grows up? What happens when/if our AI finds someone that it'd prefer as its Mom/Dad? Biological progeny are allowed to grow up, move out and get on with their own lives -- will we do the same for our silicon progeny?

Then there's ... Assume our intent for creating an AI is space exploration, but we first socialize our AI because of basic Frankenstein fear ... What is going to happen to this AI's neural social circuits (and the rest of its 'brain') when it gets launched into space and both it and humanity know with absolute certainty that it will never be able to communicate with its Mom/Dad again? Solitary confinement for eternity ...

I think that if we decide to build an AI, we first need to figure out how to raise a healthy, happy, non-dependent AI, and what its needs are likely to be for its 'lifetime'. This is a moral duty, both as parents and as co-sapients.

18:

William Hertling writes: "Indeed, even Google Maps has an 'opinion' of the right way to get somewhere that often differs from my own. It's usually right...If we have an autonomous customer service agent, we'll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience"

Wait, did he just argue that Google Maps is already sentient?

This seems to me to be fundamentally confusing "opinions" in the sense of personal preferences with "opinions" in the sense of professional estimations. If I see a second doctor to get a "second opinion" on my condition, I am not comparing their subjective preferences; I am seeking confirmation that the first doctor didn't make an objective error of fact. Computers have had "opinions" in the professional sense for decades, but I don't see how that gets them any closer to "opinions" in the personal sense (or vice versa).

19:

Might want to go read David Brin's Lungfish in regard to this issue...

20:

Here's a quantity that's going to diverge to infinity and cause us trouble: years of human intellectual labor required to pay for a year of human leisure.

For example, if I eat $10K of food per year, and I earn $100K, then I have $90K to spare. I need to pay $10K for my year of leisure, so I have to work $10K/$90K = 1/9 year to pay for my year of leisure.

If the AI researchers go make a human-equivalent AI that can do a year of human intellectual labor for $200K in electricity and amortized hardware costs, it isn't competitive in a marketplace that has lots of humans, so it doesn't affect my income and the number is still 1/9.

If Moore's law happens and that human-equivalent AI can do a year of human intellectual labor for $20K, then the market value of my labor becomes $20K. If we assume the physics of growing a potato is unchanged and I still eat $10K of food per year, I save $10K per year and can buy one year of leisure with that, so the number is 1.

If Moore's law happens some more and the cost of the AI goes down to $10K to produce a year of human intellectual labor, the number diverges to infinity, so we have our singularity. No leisure for me.

If the cost of the AI goes down to $1K to produce a year of human intellectual labor, I cannot pay for my food.

Notice that we don't need an AI that's smarter than me for this problem to happen. We only need one that's cheaper.
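Here's a sketch of the arithmetic above in Python (the incomes and the $10K annual cost of living are the figures from this comment; the function name is invented):

    def labor_years_per_leisure_year(income, living_cost=10_000):
        # Years of labor needed to fund one year of leisure, given annual
        # income and annual cost of living ($10K of food, per the comment).
        surplus = income - living_cost
        if surplus <= 0:
            return float('inf')  # diverged: no amount of work buys leisure
        return living_cost / surplus

    # Market income tracks what a human-equivalent AI charges per year.
    for income in (100_000, 20_000, 10_000):
        print(income, labor_years_per_leisure_year(income))
    # -> 100000 0.111... (1/9 of a year), 20000 1.0, 10000 inf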

21:

Already happening, in the form of "expert systems", e.g. small law firms are going to the wall (in the UK) in large numbers, because said systems allow huge numbers of people to do their own wills/probate/disposal of assets, where formerly the lawyers took a cut. CNC machine tools ... etc., ad nauseam.

22:

@Charlie re "an astrophysicist trying to square the circle of Christian doctrine with modern cosmology"

So what?

Leonardo tried to square the circle of Greek myths of flying heroes like Icarus with technology. He failed because the technology of his time wasn't good enough, but the Wright brothers succeeded.

23:

Hi Giulio! The thing about Transhumanist ideas coupled with religion is that, unlike traditional religions, if we are wrong now we might be able to make it right later. If Heaven does not exist, we can build it. If the Tree of Life is mythology, anti-aging tech may not be. If transmigration of souls is bullshit, we might be able to do uploads instead. If the Messiah never existed, maybe we can engineer one as a superhuman AGI. If the Gods do not exist now, maybe they will in the future.

24:

Right, Dirk. As Arthur Clarke said, perhaps we are not supposed to worship God, but to create Him.
