My expectation is that AI development would look like an S curve. There may be low-hanging fruit at the beginning (e.g. new cognitive algorithms, the gains from higher processing speed in silicon compared to neurons), leading to a steep ascent, but then AIs will run into fundamental limits and the curve will level off as in the above models.
If that model is correct, then the question of the possibility of a runaway intelligence explosion hinges on how long the steep part of the S curve is -- how much low-hanging fruit is there, and where are the fundamental limits?
And in particular, if AI development is O(N^2), such that creating an AI 2x as smart as some reference requires 4x as much intelligence as the reference required, then an AI-developing entity A that's twice as smart as another AI-developing entity B should only be able to create a new AI that's sqrt(2) times as smart as what B can produce. And if B produced A, then A should be able to produce something sqrt(2) times as smart as itself -- each generation's improvement factor is the square root of the previous one's.
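Under that assumption, the recursion can be sketched numerically (this is my own toy model, not something from the comment; the function name and parameters are illustrative). If each generation's improvement factor is the square root of the previous generation's, the factors form the sequence 2, sqrt(2), 2^(1/4), ..., and total intelligence converges to 2^(1 + 1/2 + 1/4 + ...) = 4x the baseline rather than diverging:

```python
import math

def intelligence_after(generations, first_factor=2.0, baseline=1.0):
    """Toy model of O(N^2)-cost recursive self-improvement.

    Assumes (per the quadratic-cost premise) that each generation's
    improvement factor is the square root of the previous generation's.
    """
    intelligence = baseline
    factor = first_factor
    for _ in range(generations):
        intelligence *= factor
        factor = math.sqrt(factor)  # next AI improves by sqrt of the last factor
    return intelligence

for g in (1, 2, 5, 20):
    print(g, intelligence_after(g))
# The sequence climbs from 2.0 toward a ceiling of 4.0 and levels off.
```

On this toy model the "explosion" is self-limiting -- consistent with the S-curve picture above -- since the geometric series of exponents converges.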
Am I missing something?