In my previous post on why the Singularity is Further Than it Appears, I argued that creating more advanced minds is very likely a problem of non-linear complexity. That is to say, creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1.
The difficulty might go up exponentially. Or it might go up 'merely' with the cube or the square of the intelligence level you're trying to reach.
Blog reader Paul Baumgart took it upon himself to graph how the intelligence of such an AI changes over time, depending on the computational complexity of increasing its intelligence. I thought the results were worth sharing with you.
The blue line on the left is a model very much like Vernor Vinge's. In this model, making an intelligence 10x smarter is only 10x as hard. This is the linear model, and it does show the runaway AI scenario, in which an AI (or upload, or other super-intelligence) makes itself smarter, and is then so smart that in an even shorter time than before it can make itself smarter still, ad infinitum. You can see this in the fact that the slope of the line keeps rising; it arcs upward. The super-intelligence gains more intelligence in each period of time than it did in the period before.
That was Vernor Vinge's original conception of a "Singularity", and it does deserve the name, because when you graph it you get a vertical asymptote: essentially a divide-by-zero point, a moment in time when you go from the realm of ordinary intelligence to infinity. The intelligence of the AI diverges.
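Here's a minimal sketch of that runaway dynamic. The model is my own simplification for illustration, not Paul's spreadsheet: each generation, an AI of intelligence I builds a successor twice as smart (linear difficulty), and a smarter AI works proportionally faster, so each generation takes 1/I units of time.

```python
# Toy model of the linear-difficulty (Vinge-style) runaway. The assumptions
# are mine for illustration, not Paul's spreadsheet: each generation doubles
# intelligence, and a smarter AI finishes its successor proportionally faster.

intelligence = 1.0
time_elapsed = 0.0

for generation in range(20):
    time_elapsed += 1.0 / intelligence  # generation time shrinks as I grows
    intelligence *= 2.0                 # linear difficulty: each step doubles I
    print(f"t = {time_elapsed:.4f}  intelligence = {intelligence:g}")
```

Intelligence grows without bound, while the elapsed time converges toward t = 2 and never passes it. That finite limit is the vertical asymptote: infinite intelligence in finite time.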
Every other model Paul put into his spreadsheet showed convergence instead of divergence. Almost any non-linear difficulty in boosting intelligence means that no runaway occurs. (Note that these models *do not* include the benefit of getting new hardware over time, or the general speedup from Moore's Law for as long as that continues. But they *do* include the benefit of the AI designing new hardware for itself, or any speedup it can itself cause in Moore's Law.)
The bottom line, in green, is exponential difficulty (e^x). Many real-world problems become exponentially difficult as they grow in size. The 'traveling salesman' problem is exponential (at least to solve exactly). Modeling quantum mechanical systems is exponential. Even some important scenarios of protein folding are exponentially difficult. So it's not at all unlikely that boosting intelligence would fall into this category. And as you can see, if intelligence is exponentially difficult, the super-intelligence barely ascends at all before converging.
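To see how hard exponential scaling bites, here's the same kind of toy recurrence. The calibration is my own assumption, not the spreadsheet's: reaching intelligence x costs effort proportional to e^x, an AI of intelligence I can apply effort proportional to I, and the constants are set so the intelligence-1 team can just barely build an intelligence of 2. That gives I_next = 2 + ln(I).

```python
import math

# Toy model of exponential difficulty (my calibration, as described above):
# reaching level x costs ~e**x effort, and available effort scales with I.
intelligence = 1.0
for generation in range(1, 11):
    intelligence = 2.0 + math.log(intelligence)
    print(f"generation {generation}: intelligence = {intelligence:.4f}")
```

The sequence runs 2, 2.69, 2.99... and stalls just below 3.15, the fixed point of I = 2 + ln(I). Convergence, not a runaway.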
The next line up shows a polynomial difficulty of x^2. x^2 means that to achieve twice as much, it's four times as hard; to achieve 10 times as much, it's 100 times as hard. Many real-world problems are actually much harder than this: some tricky, approximate molecular modeling techniques scale at x^4 or even x^7. So x^2 is actually quite generous. And yet, as John Quiggin quickly pointed out, with x^2 difficulty the AI does not diverge.
[Note that there was an error in my math in the original post. I wrote that an AI twice as smart as the entire team that built it would be able to produce a new intelligence only 70% as smart as itself. That's incorrect. It should have been 140% as smart as itself. That's the first step on this curve, which quickly converges.]
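Here's that corrected arithmetic worked through, in the same toy framing as the sketches above (my assumptions, again: reaching intelligence x costs x^2 effort, available effort scales with intelligence, calibrated so the level-1 team builds a level-2 AI). That gives the recurrence I_next = 2 * sqrt(I):

```python
import math

# The x**2 case under the same toy assumptions: I_next = 2 * sqrt(I).
intelligence = 2.0  # the AI the human-level team built
for step in range(8):
    successor = 2.0 * math.sqrt(intelligence)
    print(f"{intelligence:.3f} -> {successor:.3f} "
          f"({successor / intelligence:.0%} as smart as its creator)")
    intelligence = successor
```

The first step takes the intelligence-2 AI to 2 * sqrt(2), about 2.83, roughly 140% as smart as itself, and the sequence converges to the fixed point I = 4.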
The other curves on this graph represent progressively easier levels of difficulty. The prominent red curve in the middle, which goes quite far but still doesn't diverge, assumes that the problem scales as x to the power 1.2. That's saying that creating an intelligence 100x as great is about 251 times as hard (100^1.2 ≈ 251) as creating an intelligence of level 1. Personally, I suspect that vastly underestimates the difficulty, but we can hope.
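In the same toy framing as the sketches above (my calibration, not Paul's spreadsheet), a difficulty of x^p gives the recurrence I_next = 2 * I^(1/p), which for any p > 1 plateaus at the fixed point 2^(p/(p-1)). A few examples:

```python
# Plateaus of the toy recurrence I_next = 2 * I**(1/p) for difficulty x**p
# (calibrated, as before, so that intelligence 1 builds intelligence 2).
for p in (1.2, 2.0, 4.0, 7.0):
    plateau = 2.0 ** (p / (p - 1.0))
    print(f"difficulty x^{p:g}: intelligence converges to {plateau:.2f}")
```

In this framing x^1.2 climbs all the way to 64 before leveling off, which is why the red curve goes so far, while x^2 stops at 4 and the harder polynomial scalings barely move. Only the strictly linear case runs away.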
Many thanks to Paul Baumgart for putting this together.
He's also made a spreadsheet with his math available here. (That link will open the spreadsheet directly.)