Worse, all of the relevant people I spoke to (nobody eminent) either deprecated it or were extremely vague about what they believed. My understanding is that it is probably not distinguishable from several other formulations of quantum mechanics, and is more a metaphysical theory than a physical one. Wikipedia seems to confirm this.

One aspect where almost all (all?) of its proponents have definitely not thought it through is the cardinality of the set of worlds. They all seem to assume a countable number of worlds, because they have difficulty getting their heads around a continuum, but I can see no reason why that should be the case.

That's also very relevant to the next blog entry (i.e. OGH's space opera). All of the stories that do use a continuum (e.g. Niven's For A Foggy Night) rely on collapse, to discretise the worlds. I once tried thinking of how to use a many worlds continuum in a science fiction story without relying on wave-function collapse, but my head hurt :-)

God alone knows what one would call the inheritance of such things as the bush ivy characteristics over vegetative reproduction. Over to Heteromeles :-)

But it is essential to note that I am NOT referring to the basic expanding universe hypothesis, nor that it started out as something as dense and hot as hell, but the claim that the conditions there (as estimated by extrapolating the currently favoured formulae) created the rules of the universe we live in. I.e. exactly comparable to the religious assumption.

Personally, I favour the fourth origin hypothesis: we are a figment of our own imaginations :-)

I agree. Some computer systems have shown traces of the latter, but we don't really have a clue how to write a program that has them, and they are NOT an inevitable consequence of increased computational power.

"That falls over the same problem as the religious believers refuse to accept ... WHich happens to be a thoroughly testable & falsifyable proposition."

That's what I said, though it's ALSO true of the claims of the Big Bang Baloney artists. Unfortunately, it's NOT testable or falsifiable under any conditions that WE can control; as I said, those three hypotheses are not distinguishable.

Yes and no. We know of plenty of computational models that are strictly more powerful than Turing machines, though they are (a) currently beyond our ability to analyse and (b) not constructible, as far as we know, using any physics that we can control. Consider, for example, a Prolog-like declarative system with continuous (i.e. true real number) time and true real number values. It isn't implausible that we are being simulated on one of them. But that's also irrelevant, because there is no way to distinguish the following hypotheses, so debating the issue is comparable to arguing how many angels can stand on the head of a pin:

The universe and its rules were created for some inscrutable reason by some deity or other.

The universe and its rules came into existence by 'chance' as the result of obeying some meta-rules or other.

The universe and its rules are being simulated by alien entities that live in a super-universe with a strictly more powerful set of rules.

I don't blame you. I was responding to Mateus Araújo, and assuming a fairly advanced knowledge of the mathematical issues.

"I've never read anyone talk about the Singularity as if it has something to do with computational complexity, ..., which when mashed together turn into some newfangled form of the Christian rapture."

I fully agree with the latter. And that was the point of my first paragraph. Those people are abusing a standard mathematical term for bullshitting purposes.

"In a sense, given their axioms, the fact that their conclusions are so bizarre isn't really surprising. After all, garbage-in, garbage-out. ...

But none of this really has anything to do with computational decidability or the benefits of BQP for factoring and so on."

Actually, it does, but you need to be a (practical) algorithmist to know why and how. But talking about the theoretical underpinnings for something only slightly more plausible than the ravings of von Daniken and Velikovsky is probably a mistake ....

"Have I completely missed the real thread of your conversation somehow?"

No. You have understood FAR better than 90% of the people who blither on about it.

Dammit, even a simple theory like general relativity became dogma long before it had any real proof from experiment. Its two undetermined constants accounted for the speed of light and Mercury's precession, and an equally simple explanation of the bending of light was that energetic mass has twice the gravitational effect of rest mass.

The second is that BPP is a VERY strict form of probabilistic computing, especially in the requirement for completion in deterministic polynomial time. Every statistician can tell you that things get a lot weirder as you move away from such strict models. Inter alia, probability is an infinite limit concept, and two infinite limits do not generally commute; I have seen several computer science papers that got that one wrong.

As an example, there is no BPP test program that can distinguish a true random bit generator from an unknown pseudorandom bit generator; but there is, if I recall correctly, a pseudorandom bit generator that will test as true random against any fair BPP test program. Abandon the requirement for BPP, and things get a LOT more 'interesting'; I did some analysis of that at one stage.
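To make the "distinguishing generators" point concrete, here is a toy frequency test (nothing remotely like a real BPP distinguisher, and the LCG parameters are just illustrative): a deliberately weak pseudorandom generator passes a crude monobit test just as comfortably as the OS entropy source, so a test has to be much cleverer than this to tell them apart.

```python
import os

def monobit_fraction(bits):
    # Fraction of 1-bits; close to 0.5 for anything even vaguely random.
    return sum(bits) / len(bits)

def lcg_bits(seed, n):
    # A deliberately weak linear congruential generator (toy parameters).
    state, out = seed, []
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append((state >> 16) & 1)  # take one mid-range bit of the state
    return out

def os_bits(n):
    # Bits from the OS entropy source, as a stand-in for a 'true' generator.
    data = os.urandom((n + 7) // 8)
    return [(byte >> i) & 1 for byte in data for i in range(8)][:n]

N = 100_000
lcg_frac = monobit_fraction(lcg_bits(1, N))
os_frac = monobit_fraction(os_bits(N))
# Both fractions come out close to 0.5: this crude test cannot separate them.
print(round(lcg_frac, 3), round(os_frac, 3))
```

Any single cheap statistic like this is blind to the structure in the LCG; separating the two in general is exactly the hard problem described above.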

The third is that polynomial does not mean realistic; 10^10^10*x^10^10 is polynomial, for example. There are lots of practical problems where the asymptotically fastest algorithm is not the fastest in practice.

The fourth is that almost all computer science complexity theory is based on the language recognition model, and quite a lot of it does NOT extend to practical problems. Most theoretical algorithms oversimplify practical problems, and there are usually (yes, usually) practical heuristics that make an immense difference.

The fifth is that few practical problems require perfect answers. The original (numerical analysis) complexity theory took that on board, but it got very badly neglected with the rise of computer science complexity theory. This links to the second point, which is similar.
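A standard textbook example of settling for an imperfect answer (not from the comment above, just an illustration): minimum vertex cover is NP-hard exactly, but a trivial greedy method gives a cover at most twice the optimal size in linear-ish time.

```python
# Greedy 2-approximation for minimum vertex cover: take both endpoints of an
# arbitrary uncovered edge, repeat. Exactness is NP-hard; 'within a factor of
# two' is cheap.
def approx_vertex_cover(edges):
    cover, remaining = set(), list(edges)
    while remaining:
        u, v = remaining[0]
        cover.update((u, v))  # take both endpoints of some uncovered edge
        remaining = [(a, b) for (a, b) in remaining
                     if a not in cover and b not in cover]
    return cover

# Small example graph (made up for illustration).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = approx_vertex_cover(edges)
print(sorted(cover))  # covers every edge; optimum here is {0, 3}, size 2
```

On this graph the greedy cover has four vertices against an optimum of two, i.e. exactly the factor-two bound; for most practical purposes that trade is well worth it.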

The sixth is the most mathematically interesting. Even if you know that P=NP, that does not give you an algorithm for finding an algorithm that solves NP problems in polynomial time. Further, despite claims by the NP brigade, the fact that program proving is in P doesn't help, because Goedel's theorem can be phrased as saying that there are statements that are true but not provable within a system.

No. Finding a way around the Turing/Goedel constraint WOULD be a game changer; merely proving P=NP (or even finding a usable but inefficient algorithm for all NP problems) would not.

That's going back a bit :-) I have been using the Internet only since 1979, though I was using wide-area networking a bit before that.

Yeah. Fair use doesn't mean what most laymen think it does. If I recall, there was some discussion (when?) about constraining the proportion that any one agent could use but it was abandoned as a hopeless task. Some nodes have done (do?) it, but it's impossible to specify precisely. It's now politically impossible, as well as technically.

Those are two aspects of the same phenomenon. I get very bored with people talking about the increasing rate of IT or other change, and I point out (to no avail) that it is slower than when I was younger and is currently slowing down. My point stands, unchanged.

I can assure you that there IS speculation about the absence of a Turing/Goedel limit. And, while I could explain why what I posted is correct, I suspect that you would have difficulty following it. I am aware of that proof, but it is not that aspect I am referring to. And, as I said, explicitly, it is very unlikely.

Your last paragraph shows that you don't understand computational or complexity theory. Even were that class of AI to be more intelligent than us, that is only in the sense that one human is more intelligent than another. And my remarks about NP were referring to the whole of that area, including BPP. Again, I could explain, but I suspect that you would have difficulty following.

If anyone is interested, the issues that I referred to as tricky are NOT primarily to do with the actual computation, but the class of questions the models are considering, though the computational primitives have an effect on that. Specifically, yes/no versus an actual value, and deterministic versus probabilistic - and you will NOT find the latter in computer science textbooks.

It is POSSIBLE that different computational models do not have a Turing/Goedel limit, but that's not the way the smart minds think. Quantum computing is hyped up to be a strictly more powerful model, but I have seen no evidence that it is (in the sense of what can be computed). Computing with (true) real numbers is, but is not realisable. I know that people have worked on this, but nobody has ever found one that is both strictly more powerful and realisable.

There are more powerful, realisable models in the sense that they can solve problems in a reasonable time that are intractable in the simple Von Neumann model, but they can't actually calculate anything that the latter can't. If anyone delivers a working quantum computer, it may be one such, probably for a VERY limited set of problems.
