
I may be being unduly optimistic ...

... but I think this means we're probably going to see room temperature quantum computing on integrated circuitry within the not-too-distant future. (For values of 5 years << NTDF << 20 years.)

So, what are the second-order implications[*] of being able to manufacture and deploy room temperature chips able to perform > 1 billion quantum operations per second on n >= 32 qubits per register and n >= 1 billion qubits of on-chip storage, for about the price of a present day high-end Intel server grade CPU? (i.e. US $100-1000 per unit and power consumption in the range 10-100W, suitable for embedding in commodity servers and high-end laptops?)

And thinking further, what are the implications of yadda for about the price of a present day ARM core (i.e. US $0.5 - $50 per unit, and power consumption in the range 10-500mW, suitable for embedding in cheap handheld devices like mobile phones)?

[*]Yes, yes, I know: public key crypto suddenly gets a lot harder. And we learn to live with much better, albeit non-deterministic, solutions to the travelling salesman problem and the blind knapsack packing problem, etc. And Roger Penrose has to come up with another argument to support his prejudice that consciousness is non-computable. What I want to know is, what else happens?

194 Comments

1:

Like most solid state systems, the decoherence time is way too short. You can see the oscillations damping out in a few dozen cycles; worthwhile QC needs something like 10,000 cycles.

The other, more subtle, problem with defect based systems is that each one is slightly different; the local environment (at the range of a few atoms) is reproducible, but there is no way to control the long range charge and strain gradients the defect is immersed in.

As QC depends on synchronisation of multiple freely evolving subsystems, matching of the qubit properties is vital. Making an analogy with conventional spectroscopy, this requires that the inhomogeneous line broadening is very small.

2:

The PlayStation Q, available in all good stores by 2021. Call of Duty 14 looks more realistic than Saving Private Ryan.

3:

IF/WHEN we get products that are not sold at a price of 10x over existing systems.

True room- or area-controlled HVAC systems, where you only heat and/or cool spots as needed, which cost what whole-house systems do today. Ditto lighting systems that work intelligently with you, way smarter than the existing motion detectors that turn on a light when someone enters a room. While such things exist today, they cost too much, are temperamental, and are very stupid about figuring out what the people in a house are doing.

With enough compute power a house should be able to learn the habits of a family and integrate such things as national and local holidays, school calendars, neighborhood events, personal calendars, pet habits, etc.

Now toss in a Siri-style voice interface to deal with exceptions, and allow this system to ask questions, and it starts to get interesting. It would do things like ask me if I want to turn off the air compressor in the shed since it appears not to be in use. (Something that happens to me several times a year. Or even just turn off the shed light when it's left on.) Turn off ceiling fans when no one has been in a room for a bit, but turn them back on without asking if people return within a reasonable amount of time. And learn who wants a stiff wind vs. who wants a gentle breeze vs. those who just don't like air blowing on them.

For those of us with street-side mail boxes, the box would let us know when there's mail in it. If we still have paper mail.

Analyse my yard and turn on my sprinklers if needed, but do it when rates are cheap, follow whatever restrictions are in effect, and use my collected rain water. And notice if rain is predicted and just wait for it if reasonable to do so.

Some issues? Does this run standalone in my house, or does it use a Google-like remote database to analyse things? In the US we'd likely need to break up the ISP last mile from services to make this affordable, as we are very much sliding into a duopoly situation where data limits are a road to more profits and thus a hindrance to innovative uses for more data.

Also, what about privacy? How secret can I keep this diary of my family's life?

4:

> Write me a 3 volume trilogy, in the style of Charlie Stross, on the topic of the economics of early days of quantum computing, set in the alternate time-line where Churchill's Anglo-French Empire won WWII. Hero should be me, as usual, and there should be a mysterious red-headed woman.

<< ok, that will take 3 minutes - here's the first chapter to read while I do the rest.

5:

Taking the MWI literally, multiple copies of the machine are involved in the computation. For 32 bits it's around 4 billion copies. However, there is an assumption that the space is continuous and infinitely divisible. I am interested in whether there is a limit to the number of worlds one can use, i.e. whether QM turns digital at some point and (say) 1000-qubit machines just don't work.

6:

Massive, fuzzy, difficult to model problems suddenly become loads easier --- such as machine vision, world modelling, and weak AI. The end result: robots that can actually operate usefully in the real world.

We're already seeing some of this, as simple brute force computation on traditional processors becomes cheap enough to apply in bulk. e.g. self-driving cars.

Driving cars around on roads full of traffic is a good benchmark for a real-world nasty problem. Once we can do that reliably, we're a massive step down the path to von Neumann machines and a proper robotic economy.

7:

Also, such a machine might show up any non-linearities in QM. That would open up a whole host of possibilities, including communication across worlds. http://www.technologyreview.com/blog/arxiv/25494/

8:

There are a lot of misconceptions about QC. The most widespread is that it gets an advantage from "using multiple worlds". In the general case, distributing the system over many configurations (worlds) is not a win, because you get a weak mixture of many answers, with little probability weight on any one.

The useful QC algorithms correspond to special cases where there is some interference phenomenon, so that although the system evolves through intermediate states that are widely distributed, the final state (answer) involves the probability weight being concentrated.

For an explanation of how this works see http://www.scottaaronson.com/blog/?p=208

10:

The implications are almost non-existent in the not-too-distant future. The fun only really starts 20-30 years after it has become available and it has become clear which problems can indeed be solved by such a contraption and which can't.

Among the likely candidates are such things as reliable pattern recognition - with possible implications for the development of slightly stronger AI. But this depends on the actual performance of the actual product - for which we don't even have a reliable metric as far as I know.

11:

Isn't all this pessimism a little unusual? Maybe the thread is too young yet.

Charlie, from someone who doesn't know anything about QC: does your question translate to "What happens when [amazing computing resources] become cheaply (or even more cheaply) available?"

Just want to understand what you're asking for. I have no idea what kinds of problems QC would otherwise be limited to…

12:

For the short-to-medium future, lots of new job openings for CS researchers as companies and universities try to come to grips with how to actually use such machines in a productive way. Look at Connection Machine, Transputer and other early attempts at massively parallel machines and how they failed commercially as methods to make use of their power just weren't available.

These don't really magically solve things we don't know how to solve today. Machine vision doesn't suddenly become a solved problem, for instance. It's a hard problem, not because we lack speed or ways to handle fuzzy states in classical machines, but because at a fundamental level we still don't really know enough about biological vision systems to know how to tackle the full problem in a robust manner.

13:

"Total Information Awareness" might finally work as intended -- and naturally, lead to unintended consequences.

14:

Iirc elliptic curve cryptography is not affected by advances in quantum computing, as far as we know.

15:

The major disappointment in QC on the theoretical side has been how relatively few interesting computational problems turn out qualitatively easier in quantum as opposed to classical computing. When the field started off (and when I was a QC grad student) the factorisation and search algorithms had just been discovered, and everyone expected that this was the tip of the iceberg. Fifteen years later, and people haven't really found that much else.

Having said that (and this gets back to the original question), perhaps it's not that surprising that quantum computers aren't generally all that much extra help when it comes to doing classical computation problems. The most interesting line I've had from people in the field I talk to is that the real power of quantum computers is going to be in simulating quantum systems. At the moment, even rather simple quantum interactions - say, all but the simplest problems in chemistry - turn out to be intractably hard on a classical computer. Think of it this way: the end result of the chemical interaction is the sum of lots of interfering processes. The only straightforward classical way to simulate this is to run all the processes in sequence. A quantum computer can do it all at once.

So my impression is that once we have quantum computation, it's going to revolutionise chemistry (and also quantum field theory, though that's less likely to be technologically significant).
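
A quick back-of-the-envelope sketch of the classical intractability mentioned above (my illustration, assuming each quantum degree of freedom is treated as a two-level system and each complex amplitude takes 16 bytes): the classical state vector grows as 2^n, so even modest quantum systems exhaust any conceivable memory.

```python
# Memory needed to store the full quantum state of n two-level systems on a
# classical machine, at 16 bytes per complex amplitude (rough estimate only).
for n in (10, 30, 50):
    print(n, "systems:", 16 * 2**n / 1e9, "GB")
# 10 systems: ~1.6e-05 GB; 30 systems: ~17 GB; 50 systems: ~18,000,000 GB (about 18 petabytes)
```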

16:

See http://arxiv.org/abs/quant-ph/0301141 for attack on elliptic curve cryptography.

There are crypto algorithms that are resistant to QC attacks - search for "post-quantum cryptography" for details.

18:

Oh dear, poor Nixar. Two references to a rebutting paper in less than ten minutes.

19:

Simulations. Very detailed simulations of nondeterministic scenarios.

Advertisements in 2020 will simulate your cortex's response to 10**20 permutations of an ad to find the one with the highest probability of sparking your interest in the brand/product leading to an eventual sale, given your immediate mental and physical state. Advertisers will bid highest for the lucrative just-drifting-off-to-sleep time slots.

The Gattaca scenario will become possible. Parents will choose their kids' genes so that they dance well, have straight teeth, and minimize teenage acne. It's just a lot of protein folding.

High frequency trading comparable to today's best algorithms will move into the sub-nanosecond range, meaning that Wall Street collapses into one cubic decimeter (limited by speed of light). Meanwhile, at the 10s of microseconds range where today's HFTs operate, trading algorithms will simulate one another in ever increasing detail, causing several of them to fall in love.

Accurate weather prediction will be "just a few years away".

20:

Forgive if these are ignorant questions, but...

Has anyone actually figured out how to program a quantum computer, other than theoretically? For commercial use, that is. I'd imagine that making an operating system that takes advantage of four state qubits will be difficult until one is built.

Anyone who says it will lead to AI sooner is jumping the gun. What makes you think it would be any easier with QC?

And I don't think it would have much impact, to begin with, on sales of current types of computer, unless there's a scramble for the newest thing and companies decide to dump their old product. Even assuming that something other than diamond (even artificial diamond) is found that this works in, it would be some time before a laptop becomes available.

21:

Here's my flippant answer: Pocket calculators that give results before you finish entering the equation. A generation of children grow up not knowing basic arithmetic.

22:

It'll mean AI will only be 5 years away rather than 10. :-)

23:

I was talking about the original definition of artificial intelligence - with intelligence as in "military intelligence".

And basically anything that can do better pattern recognition will improve that. No more, no less. I surely wasn't talking about the emergence of weakly godlike or anthropic entities.

24:

Parents will choose their kids' genes so that they dance well, have straight teeth, and minimize teenage acne.

Will that be available retroactively?

More to the point, though, it won't tackle the problem of whether they dance or not, which is more fundamental.

25:

As I understand it, qbits are basically just switches that can have values other than 0 (completely off) or 1 (completely on). They relate in such a way that, if one bit is 50% on and the next bit is 50% on, collectively they are 25% on, and so forth.

At some point you are limited by your ability to distinguish between a circuit that's 0.1% on (or whatever the threshold happens to be, but 0.1% is only about 10 bits deep for 50% qbits) and a circuit that's off. My intuition is that this winds up making quantum computers nearly useless for most applications that can't be brute-forced with modern normal computers.

26:

Data mining becomes even more efficient and privacy becomes a quaint historical niche subject even sooner than without QC.

27:

My little comment/question wasn't aimed at you, or anyone in particular. It was more to the thought that someone was likely to say that it'll make strong AI easier. My main point was that, even if it becomes relatively easy to build quantum computers, it will be a while before it's really understood how to use them effectively and make them commercial. Not as long as, say, from Univac to my iPod touch, but it won't be quick.

28:

qubits are not analogue values that vary between 0% and 100%. They are entities with two (rarely more) discrete quantum states. What varies is the probability of being in each state; in a complex system with many qubits you then start thinking about the correlations between various qubits, and the phases of those correlations. At no point is any qubit ever in anything other than a coherent superposition of the two basis states.
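
As a minimal sketch of that picture (my illustration, not part of the comment): a qubit is a normalized complex amplitude vector, probabilities come from squared magnitudes, relative phase is extra information that single-qubit probabilities alone don't reveal, and an entangled two-qubit state carries correlations that no pair of independent qubits can reproduce.

```python
import numpy as np

# One qubit: complex amplitudes over the basis states |0>, |1> (not an analogue "percent on").
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # equal superposition
print(np.abs(plus) ** 2)                                # measurement probabilities: [0.5, 0.5]

# A different relative phase gives the same single-qubit probabilities but a distinct state.
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
print(np.abs(minus) ** 2)                               # also [0.5, 0.5]

# Two qubits: amplitudes over |00>, |01>, |10>, |11>. This Bell state is entangled:
# it cannot be factored into two independent single-qubit states.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(np.abs(bell) ** 2)                                # [0.5, 0, 0, 0.5]: outcomes perfectly correlated
```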

The big limitation, and the reason why none of the currently proposed QC systems is within light years of being useful, is decoherence. If the delicate correlations between states decay, you just end with statistical mixture, rather than a superposition. For even the simplest algorithms, you need systems that are coherent for thousands of cycles; very few candidates get anywhere near this.

Remember, conventional computing works because at each step all the details (noise etc) get collapsed; the system is dissipative and irreversible. The volume of phase space occupied keeps being recompressed. In a QC, everything evolves freely; you cannot try to shrink the size of the uncertainty blur without decoherence setting in.

29:

Another simulation idea.

"Smart foods" will design themselves in real time to taste and feel like the thing you most want to eat at the moment you bite in. Inexplicably, 99% of the time, this will be a marshmallow with a hint of artificial bacon flavor.

The silicon carbide quantum computers in the food won't digest. Instead, they will collect on the linings of our guts. As the quantum computers accrete, they'll cluster into much more powerful computers. These super QC clusters, armed with microscopic biochem labs and immense computing power, will start redesigning our bodies from the inside out to be more hospitable environments for super QC clusters. Those humans who survive will rapidly morph into thousands of new species.

30:

David, I'm not sure you understood the question ...

31:

Well, stuff that strikes me as likely to become tractable would include realtime solutions to the tertiary protein folding problem -- currently a hard one to tackle because it involves solving the wave function for something with mass in the kilo-Dalton to mega-Dalton range. Which in turn suggests the possibility of things like designer antibodies that can be produced in near real time to deal with stuff expressed by cancer cells. (Shorter form: personalized cancer defenses using genetic engineering to produce "silver bullets" targeted on whatever cell line is causing trouble right now.)

Better weather forecasting -- accurate out to maybe as much as 3-4 days :)

32:

@6 - self-driving cars is an interesting one - not because it would be a good use of quantum computing, but because it's an example of the fact that AI can't just be 'as good as a human' but needs to be near-perfect before people will accept non-human-driven cars - even if the latter would be far safer, and able to use additional 'senses'.

33:

We end up with a high heeled shoe stuck up our bum?

34:

Custom, rapidly produced antibodies for colds and flu would be a significant game changer for levels of productivity between those societies that can afford such and those that can't.

35:

Fusion power is now only 50 years away.

36:

Agreed on machine vision. I'd gotten the strong impression that the real problem with machine vision that replicated human vision was that areas outside our visual cortices account for something like 80-90% of vision. A machine that saw the world the way a human does would be fooled by magic tricks and subject to all the illusions that humans routinely fall for. That's not always a good thing.

More to the point, I have to ask the contrary question: given how much money someone could make by creating a useful quantum computer, why did they publish this article? I'd bet even odds that this is one of those "neat in the lab, fails in the real world" experimental reports.

37:

Let's see, real world implications: --There will probably be new forms of spam, phishing, and similar broad-scale attacks that we haven't thought of yet. Emergent spam?

--We may well see a generation of high-efficiency solar power units. The idea here is that photosynthesis relies on some interesting quantum effects, and hacking or replicating these to produce a useful electric current would be useful.

--Cladistics gets done. While I'm not sure, I suspect that the algorithms that make trees out of data might be an area where QC is useful. If that's the case, expect to see bigger phylogenetic trees with more organisms and more genes (and they're already experimenting with "genomic trees"). We may get to the point where we have The One Tree (as the evolutionary biologists call the ultimate phylogeny) and evolutionary biology turns primarily to studying evolution in action right now. This may have second-order effects in such non-trivial areas as plant and animal breeding. We'll see.

--Community restoration might become viable. To me, it appears that both ecological restoration and urban restoration suffer from many of the same problems: limited data, limited resources, too many potential pathways, too many missing pieces, and people who cover up failures. The typical restoration tactic right now is either some variation of "spray and pray" or persuading someone to make restoring a small area their life's work. If QC makes these complicated problems more tractable, we may see things like more livable cities in the US (through more realistic urban planning), more diverse restored forests (prairies, reefs, whatever), and possibly even more complex industrial farms.

38:

You're right. Delete it.

Shouldn't respond when I wake up too early and am reading in bed.

Moderator: done

39:

Comprehensive, rational analysis of widespread social problems becomes computationally trivial... and the solutions are still ignored by a populace which prefers 'reasoning' by anecdote and gut impressions.

40:

Climate models become incredibly more powerful, and denial becomes even shriller.

I also think that science-fictional digital personalities become commonplace, taking over a good deal of the mental drudgery needed to function in an increasingly complex society. Everyone has multiple digital assistants, in 3D, and they look fabulous.

41:

Schrödinger's kitten videos go viral...

42:

For the most part, the effects are not going to be what people think.

Quantum computers just aren't that good at solving most problems that can't already be solved very well. You get a speedup on a very small class of problems, and there is good reason to think that class is not very general. Quantum computers will not allow you to solve general exponential problems or NP-complete problems in tractable time. They won't let you simulate millions of parallel brain states, they won't let you predict next week's weather with pinpoint precision.

The obvious things this will do is force us to change our encryption technology (note, change, not abandon) and it will improve certain kinds of search algorithms a lot. Doubtless a few other specialized algorithms are out there waiting for us.

However, that's not the interesting part.

The biggest effect, and most unobvious IMHO except if you're In The Know, is that it will allow us to simulate quantum systems far faster. Why would this be of interest? If you're trying to design generalized molecular manufacturing systems (i.e. SFish nanotechnology featuring molecular robots that can do just about anything, not stuff like stain resistant "nanocoatings" for jeans), this would radically improve the rate at which you could do engineering work because you could get reliable computer simulations in short periods of time. That might not seem very glamorous, but MNT is potentially the most disruptive technology humans have ever seen, and making it happen decades faster would be a giant change.

43:

That matches my memory of what I've read. (Of course, there are always new algorithms being developed, so that may no longer be true.)

But all current financial records are secured by encryption that relies on prime factorization being hard. And those all fall. You can't retroactively change the encryption mechanism. (Unless you can recall all copies of the files in question.)

44:

Every problem that's made simpler to solve by quantum computing will ease the creation of a general AI system.

I don't believe in general intelligence. Not among computers, but also not among people. What I believe is that there are LOTS of specialized problem solving modules, and that some of them solve problems of coordination among problem solving modules, and some of them solve ... N.B.: In my model consciousness is NOT a controlling element. It's a serializing element. It's part of the "store this in memory in a way that makes it easy to find again" machinery. (Recent research seems to partially back this up, in that actions are decided before they become conscious.)

SOME of those problems can be more easily solved with quantum computation. Whether it's significant enough to be worth the extra bother of including a quantum computer in the AI depends partially on how significant they are, and partially on how much bother it is. Just because it becomes possible doesn't make it worthwhile. Sorting is something that computers already do pretty well, and we don't often need to factor large numbers. I'm sure that there are other things of significance. Perhaps pattern recognition. (Current pattern recognition mechanisms are complex, slow, and need improvement. It's not clear that only quantum computing can improve them. It's also not clear how much quantum computing can improve them. [To me. I'm definitely not a specialist in that field.] But it might be worthwhile. If including it wasn't too much of a burden.)

But do note that the pattern recognition problem is one of the basic AI problems. If pattern recognition were cheap and efficient, then there's lots of thing that could more easily be designed using it than can with current approaches. Scale and angle invariant object recognition is only one of a number of problems that this might make MUCH easier.

45:

puts grumpy hat on

All that happens is that the price of any given computation goes from vanishingly small to slightly closer to zero.

For almost all practical purposes, we're not limited by cost of computation. All this does is make all the other resource constraints more obvious, that is energy, land, water, and the ability of our biosphere to cope with all the crap we're throwing at it.

Now, if you're in the mood for undue optimism, then I'll raise the Rossi energy catalyser. Low energy nuclear reactions means fusion-levels of power without much of that nasty radioactivity. That's world-changing. Sadly, however, it seems to break the laws of physics, as these kind of things often do.

46:

Maybe we'll finally get a solution to the n body problem!

47:

I'm not sure it does much of anything. This seems vaguely like a rehash of the perennial general purpose vs dedicated processor market. The dedicated processors always seem to lose out because their limited market results in eventual low priced general purpose processors due to continued scale/experience curves.

In addition, within that 5-20 year time frame, aren't there a number of very interesting technologies that could offer very low power and high performance general processors?

48:

Computer generated music becomes so stupendous that nobody bothers listening to human composed music any more.

49:

The intended consequences are bad enough.

Hans

50:

I see this shortened form being written here and there, as if:

QC = Quantum Computing

For me those letters have always meant something else:

QC = Quality Control

Which makes me think of the quality of current computer programs. I extrapolate from this to the applications of Quantum Computing and I think that the only major changes that will appear concern treatment of mass quantities of already known, easy to access data. Better Googling, better astronomical research but not necessarily better biomedical applications because we still don't have large quantities of useful steady data in a lot of things concerning human biology. We're constrained by the relatively small number of microbiologists really doing Science and finding out where the data is "even" and massive and where it is not. Same thing with meteorologists.

There's also a third QC which springs to my mind these days.

QC = Questionable Content (a webcomic)

Interestingly, it went through a long story arc exploring some of the consequences of machine intelligence, a few weeks ago:

http://questionablecontent.net/view.php?comic=1994

Except for those "Anthro PCs" like Momo, the "alternate universe" of QC is pretty much the same as ours.

51:

Crypto becomes worthless, hence internet commerce dies when Chechnyans get QC kit.

Actors all laid off when QC makes CGI versions of old faithfuls like James Stewart and Leo DiCaprio available on request. Real actors go bankrupt attempting to recover royalties. Money now spent on developing worthwhile scripts. Okay, that last one is too far-fetched.

TV Movie Ratings become defunct when public demand for QC-enabled video preprocessing kit outstrips directors' ability to defend copyright thanks to VCR-era "fair use" statutes. Don't feel Johnny Depp's dialogue is fitting for the audience at hand? Gone, seamlessly gone in a few jiga-qflops. Watching "Fear and Loathing in Las Vegas" with the auto-censor set to "Sesame Street" becomes the new doper entertainment du jour.

Elections seamlessly rigged in QC voting machine roll-out. Damage moot since all candidates now QC-authored sims anyway. Isaac Asimov sim goes broke trying to recover royalties from this situation and is farmed out to SciFi Channel to recover costs, but clever sim of clever author gets revenge by writing purest bilge. Audience doesn't notice.

QC-originated blogspam completely defeats any and all prevention measures (including those produced using QC tech). Death of blogosphere results.

Online gaming achieves better-than-life levels of reality limited only by the players' ability to afford the display kit (QC-enabled, of course). Online game fori now a wasteland of murderous, superfast AI sims running on University-owned QC hardware, killing subscriber base and online game industry wholesale.

Upside? P3n1z p1llz that work, details figured out using whole-human sim run on Sperry-Pfizer QC mainframe. Finding a partner to try out the results could be challenging, given that you don't know who's a sim, and the prospective partner won't believe you aren't. Luckily, you'll have access to QC-enabled cyber partners thanks to Realer-than-Real Doll™.

Where's the Tylenol?

52:

Replace them with new financial data, and declare the old data invalid. Then look out for ways to fake identity data for the new financial data from the old data.

53:

Whilst I'm no expert in the topic, I do find it strange that some people seem to think that quantum computing is synonymous with strong AI...

54:

All privacy and crypto will be retroactively destroyed. It won't be just that crypto is harder, but that all public-key communications are breakable. The past 20 years of archived secrets just became readable. (What was archived, though?)

Crypto also just got unbreakable. Now people can communicate actually securely. Also, it is possible that they will be able to know not just that their communication is encrypted securely, but they will also know whether their communication was bugged along the way! (Although I believe that result depends on being able to transmit qubits, rather than just use them locally for computation, but I am unsure).

Databases also got a lot faster, particularly those databases in which data is not stored in sorted order. "select key from rows" now runs in O(sqrt(n)) time instead of O(n) time.
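
A toy classical simulation of the kind of speedup being referred to here (my sketch of a textbook Grover search, not anything from the comment): about (pi/4)*sqrt(N) iterations concentrate the amplitude on the marked record, versus roughly N/2 probes for an unsorted classical scan.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classically simulate Grover's algorithm over N = 2**n_qubits items."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))           # uniform superposition over all items
    iterations = int(np.pi / 4 * np.sqrt(N))
    for _ in range(iterations):
        state[marked] *= -1                       # oracle: flip the phase of the marked item
        state = 2 * state.mean() - state          # diffusion: inversion about the mean
    return iterations, abs(state[marked]) ** 2

iters, p = grover_search(10, marked=123)          # N = 1024 unsorted "rows"
print(iters, p)                                   # ~25 iterations, success probability ~0.999
```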

We can also simulate quantum physics in a short amount of time with a high degree of accuracy. Who knows what advances will occur...

If Roger Penrose is right that our brains are inherently quantum, then we will be able to simulate neurons quickly as well, with all of the attendant social weirdness around that idea.

Other than that

55:

Actually, on protein interaction levels quantum effects are not that important; you can do fairly impressive classical simulations. Scale is still a problem, but not one QC inherently helps with (no huge scaling law speedup, at least). Look up some of the work done by D.E. Shaw Research; here's an interesting paper on their machine. http://mags.acm.org/communications/200807/?folio=91#pg93

56:

Since there seems to be a lot of confusion as to what quantum computing is and what benefits it could provide, let me provide a quick FAQ:

Will QC mean uncrackable crypto? No. Quantum cryptosystems have already been cracked.

Will QC replace all other kinds of computing? No. As Perry@41 pointed out, QC is only useful for a (currently) small class of problems, those for which we have quantum algorithms. Note that we don't have to have quantum computers to develop those algorithms, though it might help, since so far only a few extremely knowledgeable people have been able to come up with any.

Does QC use superposition of states for its speedup? Yes, but it also uses entanglement of multiple Qbit states, something that was not at all well understood by physicists until after it became experimentally accessible in the 1980s.

Does QC allow us to generate many solutions to a problem in parallel? No, in fact it can only generate one solution for each run of a program. Getting the solution requires measuring the quantum state of the appropriate Qbits, which reduces all measured superpositions to some eigenstate. If you want to get all the solutions (for instance all the prime factors of some number), you have to run the program over again; each run will produce one factor (not always different factors, so the number of runs is usually greater than the number of expected solutions). The reason some quantum algorithms might be practical is that they make the probability that the final state of the solution Qbits is a real solution as close to 1 as the hardware will allow.
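
A toy illustration of that one-answer-per-run point (my own sketch with made-up numbers; real algorithms don't sample factors uniformly): if each run returns a single solution drawn from the output distribution, collecting every solution is a coupon-collector problem, so the number of runs usually exceeds the number of distinct answers.

```python
import random

solutions = {3, 5, 7, 13}                        # e.g. the distinct prime factors of 1365
seen, runs = set(), 0
while seen != solutions:
    runs += 1                                    # one program run ...
    seen.add(random.choice(sorted(solutions)))   # ... yields one measured answer
print(runs)                                      # typically more than 4: repeats are common
```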

What are the obstacles to practical QC?

  • Short decoherence times don't allow for Qbit memory to be stable for long, and require that program runs take very little time. This is why most research has been done at ultra-low temperatures where decoherence times are longer. Fixing this requires QC hardware that isolates Qbit states from their environment by several more orders of magnitude than currently obtainable.
  • High error rates, requiring large numbers of Qbits for error detection and correction. Last I heard it took 7 Qbits to do error correction for every data Qbit in a simulation of a practical quantum computer with 32 Qbits per register. Some new hardware techniques, plus new computing circuits that use measurement as part of the computation may fix this.
  • There currently is no practical technology for moving Qbit states between main memory, registers, logic circuits, and I/O (if you can't get the result out of the computer it isn't worth much). Quantum teleportation is the obvious candidate, but nobody has yet demonstrated a real system using it.
  • There are very few quantum algorithms known. Only two that I can think of have practical applications at the moment.

Does QC use Many Worlds? Maybe. But you can view QC as using the Feynman sum-over-history abstraction just as easily. This may change as we learn more about multi-Qbit interactions, in which case we may get experimental evidence to choose one of the competing interpretations of quantum theory (I'm not very sanguine about this possibility).

    57:

    Thank you. That was very helpful.

    58:

    So my answer to Charlie's question? First, I think his timeframe is about right, 5 years to laboratory demo, and 20 years to commercial application [1].

    In the commercial timeframe, I expect expert systems to be built using quantum algorithms [2], allowing AI systems that will not be generally intelligent, but will be far better at particular tasks than any human. Those will include medical diagnosis, disease etiology and epidemic control, network and computer system administration, battlefield command-and-control support and logistics, air and ground traffic control, and stock and commodity market manipulation. And if history is any guide, the systems to do those things will be accepted by humans as infallible operators, so oversight will be at best perfunctory. And once the systems are in place, even serious failures as a result of the lack of oversight and reality-checking will not cause the systems to be taken out of service or even seriously questioned.

    [1] Looking at the electron-spin in crystal defect technology that inspired the question, I see some possible ways to control the quality and characteristics of the defects. They require that defects that are within a few atomic layers of a surface be amenable to use as Qbits; then existing techniques for growing layers of crystal on nano-lithographic resist could create large arrays of functionally identical defects.

    [2] Expert systems basically take input data, match them against the if clauses of some large corpus of if/then rules, and fire the then clause of any rules that carry a greater than given probability of being the desired result. ISTM there ought to be a way to design a quantum algorithm that considers all rules matching a given input simultaneously and results in a measurement which yields the most probable then clause.
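
For contrast with the quantum idea sketched in that footnote, here is a minimal classical version of the rule-matching loop being described (the rule format and names are my own illustration): match the input facts against each rule's if clause in turn and fire the best-scoring then clause.

```python
# Classical expert-system step: scan the if/then rules one by one and fire the
# then clause with the highest probability above a threshold. (Illustrative only;
# the quantum variant proposed above would weigh all matching rules at once.)
rules = [
    ({"fever", "cough"}, ("flu", 0.7)),
    ({"fever", "rash"}, ("measles", 0.6)),
    ({"cough"}, ("common cold", 0.4)),
]

def fire(facts, rules, threshold=0.5):
    matches = [(then, p) for conds, (then, p) in rules if conds <= facts and p >= threshold]
    return max(matches, key=lambda m: m[1], default=None)

print(fire({"fever", "cough", "headache"}, rules))   # ('flu', 0.7)
```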

    59:

    Rule 34 still applies, even in the quantum domain...

    60:

    I was at a talk by Suzanne Gildert a couple of years ago when she told us about the work Google has done running a pattern recognition neural network on a DWave QC to identify cars. You could hear the jaws dropping. At the time there was some question whether the DWave machine was a true QC, but last I heard it seems to be resolving in their favour. I also hear they have a 1024 Qubit machine in the works.

    61:

    Dirk Bruere @ 59:

    Something is wrong with that. The very first DWave machine was shipped a couple of months ago (to Lockheed IIRC). Google may have been working with some kind of simulator, but there's no way they had real QC hardware. Also, there's still some question about whether the DWave machine is a real QC; no one outside DWave or Lockheed has seen one yet. And I don't believe for a moment that a 1024 Qbit computer will be available for actual use (as opposed to debugging and screaming in frustration at the error rate) for several years at least.

    62:

    Google had access to the machine. I was talking to one of the people working on it.

    63:

    Quoth Wikipedia,

    On Tuesday, December 8, 2009 at the Neural Information Processing Systems (NIPS) conference, a Google research team led by Hartmut Neven used D-Wave's processor to train a binary image classifier.

    64:

    http://www.physorg.com/news180107947.html

    For the past three years, Google researchers have been investigating how quantum algorithms might provide a faster way to recognize a specific object in an image compared with classical algorithms. In their recent demonstration, the researchers used quantum adiabatic algorithms discovered by Edward Farhi and collaborators at MIT. To find hardware for implementing these algorithms, Google approached the technology company D-Wave, based in Vancouver, Canada. As Hartmut Neven, Google's Technical Lead Manager in Image Recognition, wrote on Google's research blog, the algorithm is run on a D-Wave C4 Chimera chip. "D-Wave develops processors that realize the adiabatic quantum algorithm by magnetically coupling superconducting loops called rf-squid flux qubits," wrote Neven. "This design realizes what is known as the Ising model which represents the simplest model for an interacting many-body system and it can be manufactured using proven chip fabrication methods." At the NIPS 2009 conference, Google demonstrated how its algorithm could recognize cars in images. First, the researchers trained the system by showing it 20,000 photographs, half of which contained cars that had boxes drawn around them, while the other half had no cars. After the training, the researchers presented the algorithm with 20,000 new photos, with half containing cars. The algorithm could recognize which images had cars significantly faster than the algorithms used by any of Google's conventional computers. ...

    65:

    Redesigning the human genome to create the perfect human? As a high school science fair project? :)

    66:

    To find hardware for implementing these algorithms, Google approached the technology company D-Wave, based in Vancouver, Canada. As Hartmut Neven, Google's Technical Lead Manager in Image Recognition, wrote on Google's research blog, the algorithm is run on a D-Wave C4 Chimera chip.

    Except - as has been pointed out already - there's a good deal of skepticism that D-Wave really has anything like QC. In particular, since he's already been mentioned, Scott Aaronson over at Shtetl-Optimized has said that D-Wave's claims are just so much hype and self-promotion:

    But first, let me anticipate the question that at least one commenter will ask (I mean you, rrtucci). No, I don’t have any regrets about pouring cold water on D-Wave’s previous announcements, because as far as I can tell, I was right! For years, D-Wave trumpeted “quantum computing demonstrations” that didn’t demonstrate anything of the kind; tried the research community’s patience with hype and irrelevant side claims; and persistently dodged the central question of how it knew it was doing quantum computing rather than classical simulated annealing. So when people asked me about it, that’s exactly what I told them.

    Good man, Scott Aaronson. I was gratified but not particularly surprised that this was yet another blog a lot of people here seem to read regularly. He's also a good aggregator for some of the blogs I follow, stuff like N-Category Cafe, Terence Tao, et al.

    67:

    Theoretical arguments about what the D-Wave machine actually is may be valid. But given the scientific papers (Google, others) about things that researchers are doing with them, it appears that whatever they are, they work. It may not be QC as some people define it, but it is doing something...

    68:

    The key word in my quote is "simulated". Yes, you can simulate a QC-based machine on a vanilla PC if you like and then run the quantum algorithm on the virtual machine. And you can even realize some sort of performance goals that aren't currently achievable with the usual visual recognition programs.

    But that's not really exploiting an actual physical implementation of QC now, is it?

    That's more like doing a digital implementation of Martin Gardner's famous SLAM analogue sorter and then claiming the problem of sorting in sublinear (in fact, constant) time has been solved.
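
To make that point concrete, here is a sketch (mine, not the commenter's) of what a digital re-enactment of the SLAM/spaghetti sorter actually buys you: repeatedly grabbing the tallest remaining rod is just selection sort, so the "constant time" lives entirely in the analogue hardware, not in the simulated algorithm.

```python
def spaghetti_sort(values):
    """Digital re-enactment of the analogue spaghetti/SLAM sorter."""
    rods = list(values)            # cut one rod per value: O(n)
    out = []
    while rods:
        tallest = max(rods)        # the flat hand "instantly" finds the tallest rod...
        rods.remove(tallest)       # ...but on a digital machine each grab is an O(n) scan
        out.append(tallest)
    return out                     # values in descending order, O(n^2) overall

print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [9, 6, 5, 4, 3, 2, 1, 1]
```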

    69:

    I don't believe for a moment that a 1024 Qbit computer will be available for actual use (as opposed to debugging and screaming in frustration at the error rate) for several years at least.

    There's your problem; I've got computer hardware for debugging and swearing at today!

    (And I remember the SLAM analogue sorter, too!)

    70:

    Anyone remember the O(1) solution for the traveling salesman problem using a model made of wooden dowels and string? I haven't been able to convince google to tell me anything about it, but I think I saw it in Scientific American many years ago.

    71:

    Ok, so basically this is what happens, I think:

    Old datasets encrypted with existing public key cryptosystems become decryptable. Any data you let out in that mode is retroactively at risk. Symmetric key data is fine.

    There's some churn as everyone has to switch to a better public key system. There are a number that are not thought to be solvable in BQP, so there will be the requisite Very Serious Meetings to pick one and then a lot of job security for software engineers to switch.

    The end result is that crypto works functionally the same, we just get to pay for a Y2k-style rewrite of a lot of code. Most software is somewhat pluggable though, so it's a lot cheaper. Your browser gets updated three months before the last Very Serious Meeting and turns out to be vaguely incorrect.

    More importantly, we get tremendously better at quantum mechanics simulations, which leads to amazing discoveries in chemistry and physics. However, everyone is so busy complaining about software upgrades and random cultural stuff that the general public doesn't notice.

    For most computational fields (eg. AI), not much happens. It turns out that BQP is not that great, and just using better Von Neumann based designs is mostly sufficient.

    Overall, it's not big news. New trendy portable devices and networking turn out to continue to be much more important.

    Forty years later, the amazing advances in chemistry and physics result in a never-ending series of astounding advances that transform all aspects of society, but nobody credits quantum computers for any of it.

    Quantum computer engineers quietly fume about this and many blog posts are written.

    72:

    Question: Is all this QC computing still SERIAL? If so, then, apart from having more "muscle" so to speak, what's the point? Yes, I know, some really difficult (but still not ?P=NP?) problems, previously uncrackable, will fall .....

    Is it PARALLEL, or better still INTERLINKED? If interlinked, i.e. the equivalent of each mini-processor connected to all its (8-faces+8-vertices+12edges) 28 nearest-neighbours, then you've probably got AI. But, if done properly (IIRC no-one's tried it) this should also be possible with "classical" computing.

    73:

    90% of all known governments announce (pre-emptive) legal bans on public ownership of quantum computers, on the grounds that they might be used by terrorists. Special licenses to use them for research purposes are available, but mysteriously much more easily obtainable for arms companies than for universities doing basic research. Even before some genius has the idea of selling them to the highest bidder in order to raise money.

    74:

    Your interlinking shape is impossible... V+F-E=2 for Euler's formula. I think you mean 6 faces (the other numbers are right for a cube) and 26 potential linkages.

    75:

    Community restoration might become viable.

    My first thought was, central planning becomes viable.

    There are a lot of 3rd order consequences to that, some of which our host has already played with...

    The biggest one is getting humans out of thinking of things as zero-sum games. Or maybe the robots do take over.

    76:

    You are thinking of spaghetti sorting (which other people mentioned as SLAM sorting), which was written about by Alexander Dewdney in the Computer Recreations column in SciAm.

    I don't know which issue it was in, though. I later read about it in one of the book collections.

    77:

    Never mind. I just realized that you were talking about the peg board and string solution to traveling salesman. Though, I think that was in the same column.

    78:

    Ha! " peg board and string solution to traveling salesman. Though, I think that was in the same column."

    Once upon a time, when I was young Technician, the Dept Of Business Management of a newly formed U.K. Polytechnic of which I was a minor Technical Officer did employ a New and Shiny Technician at the behest of the then Acting Head of Department who had an Enthusiasm for Queuing Theory but lacked the Budget for IBM Main Frames that would be underpowered by the standards of the Strange device upon which I now type. And so the Acting Head decided to hire a Model Maker to construct a DEVICE to demonstrate The Principles of Queuing Theory using a Thing made out of Wood that, using springs, did propel ball bearings about a Track ..... yea, all Right it was a sort of Gianourmous Pin Ball Machine made out of wood and with a bit of imagination, and also little ramps made out of WOOD, you could make the Ball bearings Fly at the nearby walls upon which you could, say, hang Targets. To this day I'm uncertain about Queuing Theory - beyond that it would be a doddle to demo it with the most basic of PCs - but I do know that ball bearings powered by springs will dent walls to a surprising depth.

    79:

    Actually, one of the cheaper solutions to the traveling salesman problem appears to be plasmodial slime molds (link to story): they eat the bacteria on oat flakes, so you simply lay out the oat flakes at the destination cities on a diagram made of agar gel in the appropriate configuration, seed it with bits of slime mold, and the organism will form a plasmodium in the shape of the optimal answer (which is something like a minimum surface area answer). All this for the cost of some oats, some slime mold, and a place to grow the mold.

    I figured this might be possible back in grad school, but never tried it even though I had plenty of slime mold to play with. Oh well, it's fun to have weird regrets.

    80:

    We get one step closer to the aliens deciding to either assimilate or obliterate us before we start chewing up the Milky Way.

    81:

    Note to mice who are running Deep Thought: who needs humans as computing elements when you can use slime molds?

    My introduction to slime molds was via a paper I came across as a freshman at Brandeis University (the significance of the location is left as an exercise for the reader): "Slime Mold Morphogenesis and the Jewish Question".

    82:

    Is all this QC computing still SERIAL?

    Yes and no. Computations are carried out by quantum gates, and (abstractly, since the gates aren't necessarily separate pieces of hardware) each computation requires the output of the previous computation and sends its output to the next computation. But, these computations transform one quantum state to another, and as those states may be superpositions of observable states, they effectively perform their operations on all the states of the superposition, producing a superposition of all the results. As an example, consider an eight-Qbit quantum OR circuit, which takes 8-bit input states and outputs the superposition consisting of all the states which are the OR values of the states in the input superposition. This is, in a sense, parallel computation.

    Of course, there's no reason (other than a complete inability to build the technology as of yet) you couldn't take a whole bunch of quantum memory and logic and wire them up to perform parallel computations, just as you can build an SIMD or MIMD classical computer out of standard logic and memory.

    But, if done properly (IIRC no-one's tried it) this should also be possible with "classical" computing.

    Oh, the multi-connected thing (interlinked) has been tried, lots of times. Google "Connection Machine", or "HyperCube" [1], or "NCube", or "Pyramid Computer Architecture". Mostly they've used N-dimensional unit cubes with the vertices connected, or pyramids with arities of 2 or 4.

    [1] I worked on two versions of this, one at IMSAI in 1976, where we took up to 16 IMSAI 8-bit microcomputers and wired them into a 4-dimensional hypercube, and the other at Intel, where we put a (metric) shitload of Intel single board computers into hypercubes up to N=8. The guys who founded NCube (not the marketing fuckwads who run it now) came from Intel; they built hypercubes of cheap RISC chips up to N=10.
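
As a concrete aside on the hypercube wiring mentioned in that footnote (my sketch, not the actual IMSAI or Intel implementation): in an N-dimensional hypercube each node's neighbours are the addresses that differ from it in exactly one bit, and routing can simply correct one differing bit per hop.

```python
def hypercube_neighbours(node, dims):
    """Addresses adjacent to `node` in a dims-dimensional hypercube: flip one bit each."""
    return [node ^ (1 << d) for d in range(dims)]

def route(src, dst):
    """Dimension-order routing: fix one differing address bit per hop."""
    path, cur, diff, d = [src], src, src ^ dst, 0
    while diff:
        if diff & 1:
            cur ^= 1 << d
            path.append(cur)
        diff >>= 1
        d += 1
    return path

print(hypercube_neighbours(0b0101, 4))   # [4, 7, 1, 13]: the four one-bit-away nodes
print(route(0b0000, 0b1011))             # [0, 1, 3, 11]: three hops for three differing bits
```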

    83:

    One point no one's made yet: it's very likely that quantum computers will be built as co-processors for classical computers, that take data from the host computers and perform specific quantum algorithms on them. They'll be specialized hardware, not replacements for all computing systems.

    And a possible application of QC that would have a long-term effect on society. Given cheap, room-temp QC and algorithms to analyze individual human genomes, it might be possible to compute the epigenetic operation of all the parts of the genome that control the expression of the protein-synthesis genes (and the meta-expression genes that control the expression genes, and so on recursively). That's the only way I can think of that cheap, quick control of both the individual genotype and the phenotype is likely to happen. If it does happen, we get radiation of Phylum Homo into as many niches as we want, and probably the only way humans are likely to colonize space in any large numbers.

    85:

    David Wallace said:

    So my impression is that once we have quantum computation, it's going to revolutionise chemistry (and also quantum field theory, though that's less likely to be technologically significant).

    Charlie Stross said:

    David, I'm not sure you understood the question ...

    If you were replying to david.given, you can ignore this. But if you were replying to David Wallace, please reconsider the implication, Charlie. You asked what else happens. I was only passingly interested in QC for nearly a decade. It’s a fascinating engineering challenge, to be certain, but the underlying physics appear quite straightforward. Then, as some initial progress was made with trapped ions, cavity QED, and now BECs and other gate tech in the noughties (far earlier than my cautious self expected), it occurred to me. If you could build a quantum simulator, you could model, in detail, experiments no classical computer could tackle on its own. And since quantum algorithmic logic is clear even without yet being able to build enough stable circuits to run them, one can begin planning the outlines of those experiments now. Quantum computing could revolutionize not just QFT, but theoretical physics as we know it. And while that may not seem like a watershed consequence on the surface, consider: when have major revolutions in theoretical physics not shaken civilization? Suffice it to say that I no longer look down my nose at QC experimentalists.

    86:

    On that much computing power, the Eschaton evolves just to the point of being able to ring Charlie up and ask for a sequel as further input data.

    {g,d&rlh}

    87:

    I'm only beginning to assimilate Scott Aaronson on this, so these are shots in the dark.

    For a start, we develop a whole new computational vocabulary, something like functional programming. Think of a block diagram for a little model CPU: it's a network of Boolean gates. It seems to me that our central "QPU" with its 32 qubits/register comprises a single gate which operates on the 2^32-dimensional state space of a 32-qubit register. We load the QPU with the appropriate gate, send in the data from one register and then catch the result in another.

    I'd guess there will be quite the bestiary of possible gates, so that we become much better at talking about Hilbert space operators. I could be wrong, because even the classic Controlled NOT gate (it goes back to Feynman) works on just two qubits (a four-dimensional state space). But 32 qubits should make for a lot of variety.
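
A small sketch of that gate bestiary (my illustration, not part of the comment): gates are unitary matrices acting on the register's state space, and stringing a Hadamard and a controlled-NOT together already produces an entangled Bell state from |00>.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard on one qubit
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],      # two-qubit controlled-NOT:
                 [0, 1, 0, 0],      # flips the second qubit iff the first is |1>
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.zeros(4, dtype=complex)
state[0] = 1.0                       # start in |00>
state = np.kron(H, I2) @ state       # Hadamard on the first qubit
state = CNOT @ state                 # entangle the pair
print(np.round(state, 3))            # (|00> + |11>)/sqrt(2), a Bell state
```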

    Now what can we practically map onto (that is, model in) 32 qubits? Atoms and molecules. A multi-electron orbital, say a covalent bond, lives in a space spanned by some few electrons. It's a probability distribution (amplitude distribution, strictly) over that space. If we can simulate it in one of our registers -- or even just sample it -- then we've captured something of atomic behaviour, and we can operate on it. We can probably do a lot of unphysical things to our model, but there may be some things there which we can actually do to atoms by means of lasers, spin-waves, Casimir forces etc.

    This could take us a long way toward programmable matter; things like tunable photovoltaics, reactive materials, smart molecular filters, and nanophase goodies like that.

    Eventually we go for proteins. We're not going to fit a protein molecule in a register; its reactive sites alone comprise many electrons. But perhaps with a billion registers' worth of projections or samples we could build some kind of composite analogue. Or hologram.

    Remember, we can't simply write out a list of base pairs, ACGGTAC, and calculate the form of the protein it encodes. Proteins are made by ribosomes. But hey, if we could simulate the ribosome ...

    Another big prize would be precise models of the reaction of neurotransmitters and synapses. Who knows what kinds of devil are in those details?

    88:

    I think we've learned a lot about entanglement from experiments aimed primarily at understanding how to use it in QC. And I think we'll learn more about 3, 4, and more state interactions because of QC. It may be that some 3-state interactions are as different from 2-state entanglements as those are from single-state superpositions, and we're more likely to learn about that because examining multi-state interactions exhaustively will probably be useful in learning how to build QC, but might not be a priority in pure physics investigation.

    While modeling natural systems can be an extremely useful methodology, I'm fascinated by the idea behind the work that Kip Thorne and his colleagues have been doing in modeling systems that could be artifacts of a very advanced technology. That's how they saw the work on wormholes, for instance. Although it's true that often there's no distinction (the natural uranium reactor in Africa for instance), there are many systems that physics can investigate which aren't likely to occur in nature. For instance, it seems unlikely that Bose-Einstein Condensates are likely to exist anywhere in this era of the universe, or to have existed at any time in the past, since they require sub-1-Kelvin temperatures, and there aren't likely to be many places in the universe that are colder than 2.7 Kelvin for very long.

    89:

    Assuming the usual size reduction in the hardware, your main benefit is that you can pack a lot more computing power into a smaller space than before.

    Independently operating robots might become practical, from specialized gardening or cleaning machines to general-purpose servants similar to those in Asimov's stories.

    90:

    I was replying to David L (comment #3, which it appears one of the other mods unpublished due to it being pretty rubbish, really: I've put it back so you can see it).

    91:

    For instance, it seems unlikely that Bose-Einstein Condensates are likely to exist anywhere in this era of the universe, or to have existed at any time in the past, since they require sub-1-Kelvin temperatures, and there aren't likely to be many places in the universe that are colder than 2.7 Kelvin for very long.

    Yes, but if the universe does turn out to be expanding steadily (i.e. no ever-accelerating Big Rip, and no contraction to a Big Crunch) there will come a time when the cosmic background will drop below 2.2 Kelvin and helium-4 superfluidity becomes possible, and a time somewhat later when it drops low enough for Bose-Einstein Condensates to dominate! I'd expect both these points to occur after the stelliferous era ends but before the time frame for proton decay or black hole evaporation, let alone the [hypothetical] appearance of Boltzmann brains. And I am now wondering what on earth a large, flat universe where the remaining stellar radiation sources are red or brown dwarfs or black hole accretion disks, illuminating very cold (and rare) lumps of condensed matter where quantum effects dominate, is going to look like ...

    92:

    Everyone talks about the weather, but nobody does anything about it. A lot of you have mentioned that QC would help predict the weather, but if you can predict then you should also be able to influence the weather in a constructive way.

    The question is what would happen when there are several actors trying to influence the weather with conflicting interests. Would that lead to an arms race in qbits?

    93:

    Remember, we can't simply write out a list of base pairs, ACGGTAC, and calculate the form of the protein it encodes

    Yes we can. L2biology.

    94:

    Yes we can. L2biology.

    You are mistaken.

    The issue is not the polypeptide sequence, but the conformation the peptide chain assumes after synthesis -- the tertiary protein structure. Different side-chains of amino-acids exhibit differential solubility in lipids and aqueous phases, depending on whether they're hydrophobic or hydrophilic. In addition, disulphide bridges between cysteine residues may stabilize conformations that are not at thermodynamic equilibrium, and chaperones (including heat shock proteins) help modify the tertiary structure further. As tertiary structure determines the conformation of active sites in enzymes, it's very important to get it right -- but protein folding remains a computationally difficult problem (hence the need for distributed approaches like the Folding@home project run by Stanford).

    95:

    You are mistaken.

    I'm not mistaken, you just misunderstood. I know about the folding problem, but you don't need to simulate the ribosome for that (what Jonathan Burns @87 wrote). Gene sequence gives us the amino acid sequence, and we can simulate from there.

    Chaperones do not modify the tertiary structure, they just help the protein reach it faster and without entangling with other molecules.

    96:

    I think you're both quibbling over a tiny error of words.

    Computational solutions to protein folding problems don't normally simulate the ribosome, but they are difficult to do. If you feed in a gene sequence or an amino acid sequence, it's still a massively difficult problem to solve and often takes weeks.

    From what I understand of quantum computing (and it's a long, long way from my area of expertise; protein folding is closer) it's probably better at solving these kinds of massively parallel difficult problems where shifting any one thing can massively impact the whole. Reliable largish-scale QC will make the solutions much easier to find... possibly easy enough to do the other things suggested in the post when you can get solutions in hours or less.

    97:

    Well, to be fair, David L himself at #38 requested it be deleted.

    98:

    Yeah but ... please don't unpublish stuff that's already been replied to by me or another moderator? (I think this needs to be an explicit add-on to the moderation policy.)

    99:

    In retrospect, yes. Even leaving a 'this post now deleted by request' would have led to less confusion.

    100:

    it's still a massively difficult problem to solve and often takes weeks.

    I think currently it's more like years...

    101:

    Here's my prediction: QM will "break" somewhere between 128 and 1024 qubits.

    102:

    I imagine we will see a resurgence in the use of expert systems, because now things like prolog that simply model decision trees can be executed in foo time (constant <= foo <= linear), rather than n^2 or n^3 or even recursing indefinitely like we have today. This might result in everything from weak 'neat' AI of the opencyc vein appearing in software agents (causing the funny 1960s pop culture ideas about AI to become amusingly accurate) to drastic shifts in how profitable various CS programs are (theory-heavy programs run by math departments and dominated by professors with boners for horn clauses suddenly become centres of applied industry now that a prolog program can complete quickly without copious clever use of cuts). I imagine that between those two things the culture shock would be pretty crazy.
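    For anyone who hasn't met the idea, here's a minimal forward-chaining sketch of what an expert system actually does (in Python rather than Prolog, and purely illustrative -- the facts and rules are invented). The speculation above is that QC-ish hardware could make this kind of exhaustive clause matching cheap even when the rule base is enormous; nothing in the sketch depends on that.

```python
# Minimal forward-chaining rule engine -- an illustrative sketch only.
# Rules and facts are invented for the example; a real expert system
# (Prolog, CLIPS, OpenCyc...) would have thousands of clauses.

facts = {"has_fever", "has_cough"}

# Each rule: (set of premises, conclusion)
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
    ({"has_cough"}, "drink_fluids"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    until no new conclusions appear (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(facts, rules))
# adds 'possible_flu' and 'drink_fluids' to the input facts;
# 'see_doctor' never fires because 'short_of_breath' isn't known
```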

    103:

    MWI can't be used for parallelization because it explicitly disallows entanglement between unzipped continuities. If information flows between different continuities the model breaks, no longer agreeing with any of the other models (or with experiment), and so it doesn't work. [This is the one thing in QM that even the hardest and most knowledgeable scifi authors have to ignore for the sake of story]

    104:

    If that were true then the mere fact of quantum computation would disprove the interpretation. That has not happened. Indeed, it has reinforced the MWI.

    http://en.wikipedia.org/wiki/The_Fabric_of_Reality#The_Many_Worlds_interpretation_of_quantum_mechanics

    More in Deutsch's book, Fabric of Reality

    105:

    Whoops. Should be: I imagine we will see a resurgence in the use of expert systems, because now things like prolog that simply model decision trees can be executed in foo time (constant less-than-or-equal-to foo less-than-or-equal-to linear). This has effects ranging from the existence of rules-based weak AI in software agents (making the children of Siri an amusing justification of the odd 1960s robot stereotypes that still float around) to making theory-heavy university CS programs with a focus on things like provability and FOL become strongly applied without changing their programs (a big cultural shift due purely to languages like prolog suddenly becoming feasible for Real Work).

    106:

    I see no reason why QC would require MWI. Only collapse states that can occur do occur, and so QC is a constraint system. Presumably, the answers produced by a QC would differ in different continuities based on how the collapse occurred, but all of those answers would reflect the constraints (in other words, each continuity would have a correct answer, rather than having some kind of action at a distance that removed all invalid continuities retroactively).

    107:

    QC does not require MWI, but if you want any kind of "real" picture abstracted from the mathematics then it's the only one

    108:
    I'd expect both these points to occur after the stelliferous era ends but before the time frame for proton decay or black hole evaporation, let alone the [hypothetical] appearance of Boltzmann brains.

    If I'm reading this paper correctly, assuming an open universe with the current cosmological constant (if it's increasing, as observations seem to show, then things will happen more quickly) the effective background temperature will be well below 1 Kelvin within the next 150 billion years, as the recombination era recedes beyond the visible horizon of the universe. This is long before proton decay, which can't happen before about 10^34 years from now based on current experiments, and may not happen at all.

    Incidentally, I find the idea of Boltzmann brains rather less than convincing. It seems to me that believing in them requires the same sort of misunderstanding of the nature of probability that the arguments for the Doomsday Argument and the Simulation Universe require. (Anyone who wants to debate these things, please give Charlie a break and go to your own blog, or mine if you insist, but don't do it here.)

    And I am now wondering what on earth a large, flat universe where the remaining stellar radiation sources are red or brown dwarfs or black hole accretion disks, illuminating very cold (and rare) lumps of condensed matter where quantum effects dominate, is going to look like ...

    Assuming that the accretion of mass in galactic black holes keeps galaxies and local clusters of galaxies from expanding themselves, so that the average distance between black holes decreases and stars continue to orbit them, and all the big stars supernova and the resultant stellar black holes aggregate over time, I'd expect gravitational lensing to become a visible part of the sky. So as things move around, your view of them would waver (on a fairly long time scale, granted) as if you were underwater.

    109:

    "Incidentally, I find the idea of Boltzmann brains rather less than convincing. It seems to me that believing in them requires the same sort of misunderstanding of the nature of probability"

    Is a BB more convincing if it arises from a relatively low mass state machine cycling through possibilities? A lot can happen in infinite time. Surely all configurations of phase space must occur infinitely many times?

    110:

    What about a significant speed-up in proteome mapping, as current computing allowed for fast genome mapping?

    111:
    QC does not require MWI, but if you want any kind of "real" picture abstracted from the mathematics then its the only one

    This doesn't make any sense to me. Could you try to explain that again?

    Btw, basic QM makes much more sense - read, understandable - when you employ the proper (and really not very hard) mathematical formalism. That'd be, uh, linear algebra. Since a fundamental requirement of QM as it is currently formulated is that "everything is linear" ("operators" like position, momentum, etc. are represented by matrices), it almost immediately follows that there won't be any messages going back and forth between these different worlds any time soon. And in fact it is that very linearity which gives you the "superposition of states" which we can't ordinarily interact with . . . speaking very imprecisely of course!

    Otoh, talking about MWI as if they were real worlds separated by some sort of wibbly wobbly timey wimey dimensional stuff is more than slightly misleading; those other worlds occupy the exact same three dimensions that we do. Bear in mind that things like temperature, electrical resistance, etc. are treated as if they were dimensions all the time by scientists; that doesn't mean that temperature is some spooky extra dimension orthogonal to our three regular ones now, does it?

    112:
    Incidentally, I find the idea of Boltzmann brains rather less than convincing. It seems to me that believing in them requires the same sort of misunderstanding of the nature of probability that the arguments for the Doomsday Argument and the Simulation Universe require.

    I'm assuming you're talking about the "paradox" whose premises force you to accept the notion that everything we see is overwhelmingly likely to be the hallucinations of an intelligence that spontaneously assembles itself in a rather nasty dark place, and yes, agree completely that it seems to be nothing more nor less than not understanding basic probability.

    Take a kilogram of any old stuff; what's the probability that it spontaneously assembles a whole chicken egg? Something rather lower than 10^(-100) I've heard it said. Now take 10^31 kilograms of that same rubbish; you'd think that the probability of getting fresh eggs is still something on the order of less than one in a trillion trillion trillion. Except that's not true; 10^31 kilograms of material just floating around in space will quickly sort itself out into a sun and its attendant planets, and you're looking at eggs after only waiting ten billion years or so.

    In any event, that's the consensus of the scientific community, who mostly wish those string theorists would just shut the hell up and actually do some useful work for a change (You need BB's or at least the possibility of them for string theory to make any sense.)

    113:

    Argue with David Deutsch

    114:

    I think you will find that infinity is rather larger than any improbable number you can name. That's rather the crux of the argument.

    115:

    Dirk, I literally have no idea what you're going on about in either of your last two posts. If you have a point to make, could you please post something a little longer and include more details? Something concrete?

    116:

    a) The mathematical formalism does not indicate "what is really happening" - that's why there are interpretations. Your school of thought is called "shut up and calculate". Of the rest, the MWI is the leading candidate amongst physicists. For one of the best arguments in favour of the MWI read Deutsch

    b) Your argument against BBs is that they are incredibly improbable. However, in infinite time "incredibly improbable" crops up infinitely many times.

    117:
    Surely all configurations of phase space must occur infinitely many times?

    No, all reachable configurations of phase space must occur infinitely many times. That's a rather important difference. As scentofviolets pointed out, you can't get 10^31 kg of eggs spontaneously appearing because that mass will collapse into a star (rather faster than a cloud of gas would, because it's denser). Also, if that happened to be about 5*10^31 kg, it would burn for a few million years, then go supernova, and a good part of those non-eggs would be turned into a stellar black hole, to be locked up for another 10^60 years or so. Even if that happened an infinite number of times, you'd still never see all those eggs, or any Boltzmann Brains.

    118:

    Personally, I'd like a gadget that can open my kitchen window whenever my cat wants letting in or out. I'm not able to have a cat flap, and can't leave a window open that wouldn't allow a burglar in. So a cheap window-opening device with cat face recognition (don't want the neighbours' cats coming in and stealing all the food) would seem a trivial thing to provide with disposable QC available.

    119:

    One of the arguments that the D-wave stuff is spurious is that other groups trying to work with the same system (junction-based superconductor devices, in particular SQUIDs), like the Delft and SUNY groups, have published fairly detailed accounts of what decoherence times they can get. The best results are impressive; up to about 3,000 cycles. To do this they run at milli-kelvin temperatures, while the D-wave guys claim that their stuff works at LHe temperatures (4.2K).

    The other scalability problems with superconductor based circuits are trapped flux and junction non-reproducibility. Despite heroic efforts, conventional (non-quantum) superconductor circuits are essentially impossible to get yield on at about the 2,000 junction level, which gets you a few hundred qubits in most schemes.

    120:

    The BB argument, as I understand it, is not literally about brains materializing out of vacuum energy fluctuations. Rather, it's about the fact that (a) we live in an expanding universe, (b) we can extrapolate in one temporal direction right back to a singularity (the Big Bang), therefore (c) if time is infinite in the opposite direction (from the Big Bang) then we would expect, going by the principle of mediocrity, that we would be observing a perfectly flat, homogeneous cosmos (because in infinite spacetime, the overwhelming majority of observers are going to be vacuum energy fluctuations in the middle of nowhere, not physical observers bumping up against one of the walls -- or at least, within 14 billion years of a wall).

    So either something is wrong with the principle of mediocrity, or we are very improbable indeed, or time has an end as well as a beginning.

    (Am I getting anything miserably wrong, here?)

    121:

    Either there's something wrong with the principle of mediocrity, or it's very unlikely that we'd appear by immediate vacuum fluctuation, or time has an end. It makes no comment on the probability of our appearance by natural selection.

    122:

    If the principle of mediocrity applies, why aren't you a hunter or subsistence farmer?

    Because if you select a random human from the whole of history-so-far, that's what you'll almost certainly get. Professional authors, teachers, scientists, etc - and indeed people with internet access - are trivial minorities. Everyone reading this is demonstrably a member of one very unlikely group, so there's no reason to assume that they aren't members of another, larger, unlikely group.

    123:

    The problem with the principle of mediocrity argument when so used is that no matter what point in time is picked, the same argument applies, because time is apparently unbounded in one direction only. This is paradoxical, and when you encounter a paradox, it's usually a sign that the argument in question is at base problematic.

    (It strongly reminds me of the Hitchhiker's Guide argument that we don't really exist, which is also based on infinities and vanishingly small probabilities.)

    In this case, the problem with the argument is also one of dividing an infinity by a finite number, and anyone who seriously advances such an argument should be laughed down.

    124:

    But given that a vacuum fluctuation is far more likely to produce something with the mass of an egg rather than a star, one would expect more egg-value masses than star-value masses. OTOH, it may be that a perfectly formed egg is a rarer occurrence than that of a star's mass of hydrogen. But in infinite time both will occur infinitely many times.

    125:

    That's rather assuming you are not a BB. Crudely, a raw brain popping into existence in a cold vacuum is not going to last long enough to think anything. A realistic BB must come with some kind of environment suitable to sustain it. When one applies anthropic selection I would guess the type of BB, viewed from the "inside", is quite limited and will resemble a universe of sorts. Omphalos Hypothesis

    126:

    (I edited your comment to make the quoted portion obvious.)

    You know, I think you might just have stumbled across a new solution to the Fermi paradox?

    127:

    I think the only viable answer to the Fermi Paradox has got to be some variety of Simulation Argument, of which the BB is one improbable type

    128:

    Which is of course a subset of the Zoo Hypothesis: http://en.wikipedia.org/wiki/Zoo_hypothesis

    129:

    At the risk of sounding dumb, why do we want to solve protein folding? We already have an incredibly fast way of solving it -- make the protein, watch it fold.

    Surely what we're after is understanding the actual rules so that we can work the other way and create proteins of specific shapes?

    Or have I completely misunderstood what the end goal is?

    130:

    At the risk of sounding dumb, why do we want to solve protein folding?

    You can't design a protein with an active site that does something useful unless you understand how proteins fold. That's what it's about.

    131:

    'Make the protein - watch it fold'.

    That's a little hand-waving. Remember, we want to know where the individual atoms in the protein end up, because it's not just the overall shape of the resulting molecule that matters but also where the active sites on it are.

    As I understand it, the proteins currently being studied are pre-existing proteins, so you don't even have to make them. There are X-ray diffraction pattern images, but unfortunately, it's extremely difficult, verging on impossible, to work backwards from such a pattern to the structure that creates it. On the other hand, if you have a hypothetical structure, you can notionally shine an X-ray beam at it, and work out what the pattern would be.

    So, you take an unfolded protein molecule, and let it fold according to various rules. There will probably be lots of different ways it could fold, and those will give different structures. So you take all those candidate structures, simulate an X-ray diffraction for each, and then look to see which (if any) match the real-life pattern. If you've got a match, then you can be pretty damned sure you've got the actual structure, and then you can work on the resulting biochemistry.
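    As a toy illustration of that "fold, simulate, compare" loop (the coordinates, scattering vectors, and scoring are all invented for the sketch; real crystallography uses per-atom form factors, crystal symmetry, and vastly more data), the matching step looks roughly like this:

```python
# Toy version of the "fold, simulate a diffraction pattern, compare" loop
# described above. All numbers are random stand-ins; the point is only the
# shape of the workflow, not the physics details.
import numpy as np

rng = np.random.default_rng(0)

def intensities(coords, qvecs):
    """|F(q)|^2 for identical point scatterers at the given coordinates."""
    phases = np.exp(1j * qvecs @ coords.T)      # shape (n_q, n_atoms)
    F = phases.sum(axis=1)                      # structure factor per q
    return np.abs(F) ** 2

qvecs = rng.normal(size=(200, 3))               # sampled scattering vectors

true_structure = rng.normal(size=(30, 3))       # the "real" fold (unknown to us)
observed = intensities(true_structure, qvecs)   # stand-in for the measured pattern

# Candidate folds from some folding simulation; the last one is near the truth.
candidates = [rng.normal(size=(30, 3)) for _ in range(5)]
candidates.append(true_structure + 0.01 * rng.normal(size=(30, 3)))

scores = [np.mean((intensities(c, qvecs) - observed) ** 2) for c in candidates]
best = int(np.argmin(scores))
print("best candidate:", best, "score:", scores[best])
```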

    132:

    "So either something is wrong with the principle of mediocrity, or we are very improbable indeed, or time has an end as well as a beginning."

    I think "time has an end as well as a beginning" needs to be a placeholder for "cosmological models featuring a finite period of Interesting Stuff followed by an indefinitely long period of Boring Stuff, can't be correct". Introducing a finite end-point to time (by fiat or by a closed-universe classical cosmology) will do it, but prima facie you could imagine other non-classical cosmologies that would too (something like a baby-universe model, for instance, if you tuned the parameters right).

    133:

    Otoh, talking about MWI as if they were real worlds separated by some sort of wibbly wobbly timey wimey dimensional stuff is more than slightly misleading; those other worlds occupy the exact same three dimensions that we do.

    That's going to depend on the details of how quantum gravity works. On fairly plausible assumptions, the quantum-mechanical branching leads to spacetime branching fairly quickly. You're quite right about MWI as an interpretation of extant physics, though.

    Bear in mind that things like temperature, electrical resistance, etc. are treated as if they were dimensions all the time by scientists; that doesn't mean that temperature is some spooky extra dimension orthogonal to our three regular ones now, does it?

    Sure, linearity doesn't suffice to justify many-worlds talk; no serious advocate of the position thinks otherwise. What's doing the work is two extra features:

    (i) entanglement; or, put another way, the way subsystems combine. In classical electromagnetism, say, even if system 1 and system 2 are both in superpositions, both terms in the superposition in system 1 interact with both terms in the superposition in system 2. In quantum theory, we can thread systems together so that we get multiple-system superpositions; indeed, the way the dynamics goes, that's generically what you'd expect to happen. (In terms of linear algebra, that's what you get when your system-combination rule uses the tensor product, not the direct product.)

    (ii) Dynamical suppression of interference. While of course you're right that linearity means that each term in a superposition can be evolved separately, it's still true in general that physical phenomena are contributed to by each term in the superposition, and different terms can cancel out or reinforce; in that indirect sense, there really are interactions between terms, despite the linearity. In quantum theory, if we try to create large-scale superpositions then the interference very rapidly gets suppressed, so that the different terms in the superposition effectively evolve independently. That combination of effective independence with arbitrarily high dynamical complexity is what motivates the "many-worlds" terminology in (modern versions of) the MWI.

    Incidentally, from this point of view, one way to think about the quantum factorisation algorithm is that we temporarily suppress interference between branches so we can do a calculation in each branch. Then we allow the branches to interfere so that we can extract some collective property of all the calculations.
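    To make point (i) above concrete, here's a tiny numpy sketch (purely illustrative): subsystems combine via the tensor product, and a Bell state is a vector in that combined space which cannot be factored back into single-qubit states, which is what "entangled" means in linear-algebra terms.

```python
# Sketch of point (i): composite quantum states live in the tensor product
# of the subsystem spaces. A product state factorises; a Bell state doesn't.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Product state |0>|1>: just the tensor (Kronecker) product of the parts.
product_state = np.kron(ket0, ket1)

# Bell state (|00> + |11>)/sqrt(2): a superposition of two product states.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

def is_product(state, tol=1e-9):
    """A two-qubit pure state is a product state iff the 2x2 matrix of its
    amplitudes has rank 1 (equivalently, a single nonzero singular value)."""
    m = state.reshape(2, 2)
    singular_values = np.linalg.svd(m, compute_uv=False)
    return np.sum(singular_values > tol) == 1

print(is_product(product_state))  # True  -- factorises into two qubits
print(is_product(bell))           # False -- entangled: no factorisation exists
```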

    134:

    QM is linear right up until a measurement is performed. If it is assumed QM is linear at all times, we get the MWI

    135:

    I get that qbits aren't just resistors, and the whole superposition of states thing. But in an actual computer, you have electrical inputs going to qbits and electrical outputs coming out. If you test a qbit's state by putting a voltage on its input (and you manage to do this without decohering it), it will pass n% as much current as it would if the bit were fully on. If you test two qbits, then you'll get (n1*n2)% (or, depending on how the qbits are wired together, (1-n1)n2%, or n1(1-n2)%, or (1-n1)(1-n2)%) of the current output that you'd expect from two comparable normal bits.

    The dream of quantum computing is to search a huge possibility space for optima that account for a vanishingly small fraction of it. I just doubt that, on an engineering level, the transition from normal bits to qbits will be as powerful as it seems to mathematicians. The real world isn't as scalable as mathematics is.

    136:

    If you test a qubit by putting a voltage over it, or by any other process, you've already decohered it: pretty much by definition, any process which gives you information about the state of a quantum system also decoheres it.

    And if you have a qubit that's in the state you're calling "n% on", it won't pass n% of the current that a fully-on qubit would pass. It'll either pass 100% of it (with probability n%), or none of it (with probability (1-n)%).

    Ultimately, a qubit just isn't anything much like a classical analogue bit.
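    A toy Monte Carlo of that point (nothing quantum in the code, it's just the statistics of the outcomes): each individual shot is all-or-nothing, and only the average over many shots reflects the preparation probability.

```python
# Toy Monte Carlo of the point above: a qubit prepared with "on" probability
# p never yields p% of a current on any single shot; each measurement is
# all-or-nothing, and only the average over many shots tends towards p.
import random

random.seed(7)

def measure(p_on, shots=10_000):
    outcomes = [1 if random.random() < p_on else 0 for _ in range(shots)]
    return outcomes[:10], sum(outcomes) / shots

first_ten, average = measure(0.3)
print(first_ten)   # individual shots: always 0 or 1, never 0.3
print(average)     # ~0.3, but only as an aggregate over many shots
```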

    137:
    If you test a qbit's state by putting a voltage on its input (and you manage to do this without decohering it),

    You can't do this, because by definition a measurement ("testing the state") can only give you an eigenstate, not a superposition. The voltage you measure will be one or another of the superposed states, with probabilities determined by the amplitudes of those states. Same for any number of entangled or non-entangled qbits.

    Your second paragraph is therefore not at all consistent with experimentally verified QM.

    138:
    Incidentally, from this point of view, one way to think about the quantum factorisation algorithm is that we temporarily suppress interference between branches so we can do a calculation in each branch. Then we allow the branches to interfere so that we can extract some collective property of all the calculations.

    Exactly so. The algorithm is designed so that the final quantum state just before measurement has (in the limit of perfect hardware and application of operators) probability of 1 that all of its terms are solutions of the factorization.

    139:

    Another problem with the DWave narrative is that no quantum logic circuit is perfect: there can be errors in the initial state of a qbit (say a 49.9% / 50.1% superposition instead of exactly 50% for each superposed state) and errors in the application of an operator (which are typically pulses of RF or photons of a specific energy or rotations of a magnetic field, etc., etc., and can have errors in pulse length, energy, intensity, total angle, etc., etc.). These errors tend to accumulate, and need to be detected and corrected in order for the algorithms to work correctly. Simulating practical circuits so far has required very high numbers of error-handling qbits (6 or 7 times the number of data qubits in many cases). There are potential fixes for this problem, but they were developed recently, and haven't been applied to working systems, to my knowledge.
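    For intuition about why you pay that overhead, here's a purely classical analogue (a sketch, not anything like a real quantum code): encoding each data bit in several physical bits and voting trades extra bits for a lower logical error rate.

```python
# Toy classical analogue of the error-correction overhead mentioned above:
# protect each "data" bit with a 3-bit repetition code and a majority vote.
# Real quantum codes are far more involved (they must also handle phase
# errors, and can't simply copy or read the data), but the basic trade --
# extra qubits in exchange for a lower logical error rate -- is the same.
import random

random.seed(42)
p = 0.05              # chance that any single physical bit flips per round
rounds = 100_000

def flipped():
    return random.random() < p

unprotected_errors = sum(flipped() for _ in range(rounds))

protected_errors = 0
for _ in range(rounds):
    copies = [flipped() for _ in range(3)]   # 3 physical bits per logical bit
    if sum(copies) >= 2:                     # majority vote gives the wrong answer
        protected_errors += 1

print("unprotected error rate:", unprotected_errors / rounds)  # ~0.05
print("protected error rate  :", protected_errors / rounds)    # ~3*p^2, about 0.007
```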

    140:

    @ 122 NO You are more likely to get a peasant of some sort - quite possibly a peasant living in a favela ... 7 billion alive now. How many dead have passed before us, and what proportion in the last century??? Do the maths again, I think.

    As for improbable, don't we want to just brew a really fresh cup of Tea?

    141:

    While no one knows, I've seen the number quoted as ~100 billion humans have ever lived, and there are 7 billion now.

    As for Boltzmann Brains and improbability, things get weird. There are multiple intelligent species (to some degree of intelligence): dolphins, apes, some monkeys, various parrots, elephants, and many others. I'm using the low bar of "intelligence" as having culture as a significant component of their critical adaptations to their local environment. In other words, without learning from their mother or caregiver, they couldn't survive.

    The point here is that intelligence is not unusual. We differ in degree, not in kind.

    Additionally, for most of anatomically modern human history (last 40-50,000 years), we've been nomadic, highly adaptable hunters and gatherers. There are some who suggest our excursion into civilization is a temporary blip on this particular record.

    While it's fun to think that we'll rapidly progress to Boltzmann Brains and interstellar badassdom, it's equally probable that the Fermi Paradox is caused by civilization being an evanescent blip in intelligent species' otherwise stable hunting and gathering evolved niche. It may simply be that intelligent species can't colonize space, because it's practically impossible, even if it's theoretically feasible.

    142:
    It may simply be that intelligent species can't colonize space, because it's practically impossible, even if it's theoretically feasible.

    Or it's so hard that no species has yet been able to do it before a major catastrophe (asteroid impact, supervolcano eruption, runaway greenhouse effect, pick your extinction level event) killed them off.

    143:

    The problem with the principle of mediocrity is that it insists that you have to be whatever is most probable for you to be, and that that is whatever is most common. But in reality, you are whatever you are with probability 1, and if the principle of mediocrity doesn't like that, well, too bad.

    Probabilities are statements about ensembles of events, not individual events. No matter how improbable an individual event may be, if it occurred, it occurred.

    144:

    A few decades ago there was a suspicion that internal warfare would be the civilization blocker, given that we'd figured out fusion weapons before moon colonies (and organized into two or three big factions). These days the Big War has gone out of fashion, happily, but there are still plenty of civilization ending scenarios that don't require suicide.

    If no society has migrated into space, that would suggest that uploading is a Hard Problem. We've got robots out there now, have at least some appreciation for the difficulties of running industry in space, and if we could put workers on-site without the usual problems of putting canned monkeys on spaceships, we might be within a few generations of getting something really interesting built.

    145:

    Yes, yes, a thousand times yes. Too many very intelligent people have completely forgotten that all those arguments need to start with "given the existence of".

    146:

    In the infinite multiverse we are all inevitable

    147:

    Including Catholic-version Jezus, Satan, Ptath, and Cthulhu?

    Colour me unenthused ...

    148:

    All possible variations must exist.

    149:

    As heteromeles observed, the vast majority (probably 85-90%) of humans are believed to have lived prior to the 20th century. But most of the people who lived during that century will have been hunters or subsistence farmers. And if you don't think a peasant is a subsistence farmer, your definition of one or the other may be different to mine. Even where they're indentured serfs working land belonging to their lord, they're still both (a) farming and (b) not producing much surplus over their own family's survival requirements, which makes them a subsistence farmer by any reasonable definition.

    Most (identifiable, anatomically modern) humans to date have been scratching a living from the soil, either in hunter-gatherer societies or as subsistence farmers.

    150:
    So either something is wrong with the principle of mediocrity, or we are very improbable indeed, or time has an end as well as a beginning.

    Or (d), something else.

    It is possible to have a series containing an infinite number of terms that still sums to a finite number ... for instance, 0.5 + 0.25 + 0.125 + ... = 1. By the same principle, it may be that the probability of an occurrence per unit of time drops away quickly enough that the expected number of occurrences doesn't become infinite over an infinite period. At that point, the singularity in the equation disappears.
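    Spelled out (just the arithmetic of that point, not a cosmological claim): if \(p_t\) is the chance of the event in epoch \(t\), then the expected number of occurrences over unbounded time is finite whenever the series converges,

\[
\sum_{t=1}^{\infty} p_t \;<\; \infty ,
\qquad\text{e.g.}\qquad
\sum_{t=1}^{\infty} 2^{-t} \;=\; 1 .
\]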

    (This assumes a naive meaning to time. However, given that some people deny it even exists, I'm easy with that.)

    151:

    Yes. If the universe is open and the cosmological constant is increasing monotonically (the current orthodoxy in physics), then it's entirely possible that the energy density of the universe decreases fast enough that some (perhaps very many) "possible" configurations will never occur because the energy isn't available for them to assemble and/or the density of matter is insufficient to support them. This is one of the things I meant when I talked about "reachable" configurations upthread.

    For instance, if the universe expands so rapidly that within the average lifetime of protons the average distance between elementary particles is larger than the diameter of the visible universe (i.e., the width of a particle's past lightcone) then denser configurations have such a low probability that they won't occur while matter still exists.

    And, as I said before, something is wrong with the application of the principle of mediocrity.

    152:

    I think we're somewhat stretching the meaning of "possible" here.

    153:

    The BB argument refers to vacuum fluctuations in an effectively empty universe http://en.wikipedia.org/wiki/Boltzmann_brain

    154:

    The article you reference never mentions vacuum fluctuations or virtual particles. It refers specifically to "stochastic fluctuations in the level of entropy", which are what we've been talking about: random paths through the phase space of the universe's configurations of matter and energy. I've seen other arguments for Boltzmann Brains that talk about random vacuum fluctuations of the sort that (perhaps) initiated the Big Bang, but the same objection applies to them: there's no known mechanism that would allow a vacuum fluctuation to take on the structure of a Boltzmann Brain or anything similar to it in the sense of being a configuration of matter and energy with no reachable antecedent states. In an infinite amount of time, in theory all possible things can happen, but impossible things can't.

    155:

    Any arbitrary random configuration of matter can spontaneously arise given enough time, including random arrangements that mimic a brain complete with memories. Admittedly, the odds against it must be googolplex to the googolplex or whatever, but that's still nothing compared to infinity. Of course, it is far more likely that a BB will arise as part of something like a universe, or perhaps a state machine cycling through a simulation of a universe.

    156:
    Any arbitrary random configuration of matter can spontaneously arise given enough time, including random arrangements that mimic a brain complete with memories.

    Really? How? Have you got a citation for this?

    157:

    If Big Lambda IS increasing monotonically, what is either driving it, or caused it to increase, assuming no "resistance/friction"???

    Suggests the standard model has a hole or two, but we knew that anyway.

    Interesting times indeed.

    158:

    Inherent in the definition of "random"

    159:

    For random = "occurring by pure chance", not "computerised random number generator", I presume.

    160:

    I meant to comment on this earlier but I had family stuff to attend to, so apologies if this is a bit late, but first let me say something about the principle of mediocrity and how it's (often improperly) applied:

    The principle of mediocrity simply says that if you randomly select members from a population it is reasonable to conclude that on average (or for the selected members to conclude, if the population happens to be human), those members are the "most representative" of that population. In this particular application of the principle, most of the selected members could accurately conclude that they were "the most representative", i.e., a random selection here would consist mostly of poor farmers who could each accurately conclude that they were "typical". There would of course be some doughy middle-aged well-educated white-bread types in the mix making the same inference of course, and they'd be wrong. But for the most part, most people in the sample would be correct in their conclusion. That part's fine.

    Now here's the thing: None of the sample members can know whether they are correct or not until after they have access to information about the composition of the sample. The farmer is right, the nerd is wrong, but there's no way for either to know this.
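    (A toy simulation of that sampling story, with invented numbers, just to make the "right on average, but no individual can tell" point concrete:)

```python
# Toy version of the sampling story above (numbers invented): 90% of the
# population are "farmers", 10% are "nerds". If every sampled member
# concludes "I'm probably a farmer", the inference is right about 90% of
# the time on average -- but no individual can tell which group they're
# in from that reasoning alone.
import random

random.seed(0)
population = ["farmer"] * 90 + ["nerd"] * 10

sample = [random.choice(population) for _ in range(10_000)]
fraction_correct = sum(member == "farmer" for member in sample) / len(sample)
print(fraction_correct)   # ~0.9
```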

    Now it's obvious where the fallacy lies in so many of these arguments appealing to the principle of mediocrity, namely treating it as a way to generate data upon which one can draw further inferences, as opposed to merely a supposition which must be independently confirmed. That is, they incorrectly use it as a way to generate data out of thin air rather than actually going about the business of data collection.

    Yes, you can assume for the sake of argument that the principle of mediocrity holds, then prove or disprove it by testing for the real-world consequences that assumption implies. That's fine. You just can't - as so often happens in these situations - proceed as if it's already been proven that the principle of mediocrity holds in your argument and therefore certain far-out scenarios must necessarily be true.

    Sorry to be so long-winded; I have a stats class to teach in a couple of hours :-)

    161:

    No. "Random" is a mathematical term applying to the statistical distribution of an ensemble of samples. It is not a statement about what paths are or are not possible in the traversal of a phase space. The Law of Eternal Return, as normally stated, was created and is repeated mostly by philosophers, not mathematicians or physicists. The Poincaré recurrence theorem in mathematics, which is the nearest thing to that Law I know of in actual science, states that a system whose dynamics are volume-preserving and which is confined to a finite spatial volume will, after a sufficiently long time, return to an arbitrarily small neighborhood of its initial state (reference). Note that there is absolutely no guarantee that the universe's dynamics are volume-preserving (I suspect the opposite is true), nor that it is confined to a finite volume.

    162:

    The real problem with Λ for modern cosmologists is not why it's changing, but why it's so damn small. No one knows why it's not either 0.0000, or something like 10^120 times larger than it is. But it's not quite 0, but almost so, so there must be something that makes it the value it is. The rest of us are just glad that it is as small as it is, because otherwise the universe would have expanded to an infinitesimally rarified vacuum immediately after the Big Bang.

    163:

    Well, I guess you've just knocked all the nonsense about Boltzmann Brains on the head. Just submit it to a physics journal and end all this nonsense. Obviously nobody else has spotted this apparent flaw.

    164:

    Careful with the sarcasm, please. Polite disagreements are fine.

    165:

    Sometimes nitpicking irrelevancies annoy me. There is substantial literature on the BB problem, and people ought to read it first.

    166:
    Well, I guess you've just knocked all the nonsense about Boltzmann Brains on the head. Just submit it to a physics journal and end all this nonsense. Obviously nobody else has spotted this apparent flaw.

    Dirk, if you read those journals and followed that particular community you'd know that most physicists already consider BB's to be utter nonsense, and that their partisans as often as not display a belligerent ignorance of some rather basic concepts in probability (there's a reason for that which I'll explain in a bit).

    The whole thing comes down to the arrow of time. In this particular era it has a rather strongly preferred direction; why is that? The standard reply is that this is because the universe started in a low entropy state. This just begs the question of course, replacing the original one with why did the universe happen to have this rather unlikely beginning?

    So far, no one has a good answer for that one, with the usual reply that it was just a random fluctuation as plausible as any other.

    It's at this point that the BB nonsense starts up. Believers say that small fluctuations are more likely than larger ones and that by their calculations, it is far, far more likely (something rather more than 10^500 to 1) that we should be a BB briefly hallucinating an ordered cosmos. Since as a matter of persistent empiricism this doesn't seem to be the case, the notion that the universe is nothing but a random fluctuation in the primordial ylem must be wrong.

    So goes the argument, and to say that those "calculations" are met with a certain skepticism rather understates the state of affairs. Making implausible assumptions, disregarding the significance of conditional probabilities, etc. is the politest way of phrasing the criticisms. Calling the proponents hacks looking for ways to justify their pet theories is harsher, but probably closer to the truth.

    Did I mention that the people pushing the BB argument are string theorists? Or that said theorists are quick to point out that string theory just happens to offer a solution to this "paradox" ;-)

    Read the journals, Dirk, follow the comments and who says what. You'll see pretty quickly that affairs are pretty much as I described them.

    167:
    There is substantial literature on the BB problem, and people ought to read it first.

    And why do you assume that we haven't just because we don't agree with your view of it? scentofviolets is correct: a large number of physicists think that the Boltzmann Brain scenario is not physically realistic, and is based on at least 2 misunderstandings about the mathematics that supposedly supports the idea. And I would point out that string theory has yet either to produce an explanation of any idiosyncratic feature of the universe around us (if you plug in all the constants we see, they make sense, but nothing in string theory explains why they have the values they do) or to make any prediction that has been verified experimentally. So using an argument from string theory really doesn't buy a lot of credibility in the real universe.

    Next time you want to make a point, try a logical argument instead of snark. It might get you a bit more respect.

    168:

    Bruce Cohen writes:

    The real problem with Λ for modern cosmologists is not why it's changing, but why it's so damn small. No one knows why it's not either 0.0000, or something like 10^120 times larger than it is. But it's not quite 0, but almost so, so there must be something that makes it the value it is. The rest of us are just glad that it is as small as it is, because otherwise the universe would have expanded to an infinitesimally rarified vacuum immediately after the Big Bang.

    The Weak Anthropic Principle plays into this - universes (or sections of universes) which can't support life as we know it won't have many observers...

    Whether this fundamentally is that we're one of many universes in a multiverse, and hit the lottery on random initial constants, or whether there's something deeper going on and we're unitary or less than highly multiplicious, I don't know. We are several layers of key insight (at least) away from a deep cosmology that's complete enough to answer those questions, and still lacking testable theories to drive deeper insight.

    But if we weren't here, we would neither notice the unlikelihood nor care much about it.

    169:

    >>>The Weak Anthropic Principle plays into this - universes (or sections of universes) which can't support life as we know it won't have many observers...

    They may have billions of observers. We are not there, how can we know? There is no way to study the probability of our universe without contacting other universes in some way.

    170:

    Oh? If the universe in question does not support life, and quite possibly no complex form of matter at all, then what sort of observer do you expect it to contain?

    (If it does support life, it's not one of the ones without observers.)

    171:

    universes (or sections of universes) which can't support life as we know it

    Key, very key IMO, point emboldened.

    172:

    Agreed, but as you say the Weak Anthropic Principle doesn't tell us much beyond the fact that it's possible for us to be here, which we knew anyway. Regrettably, we don't really have any good leads on why the universe is the way we see it.

    I'm rather sad that String Theory hasn't been predictive; I have a fondness for crazy speculative ideas that turn out to be useful [1]. And what could be crazier than the idea that we're made out of oscillating strings of something that are stuck to a membrane in a higher-dimensional space?

    [1] I was really rooting for John Wheeler's concept of "It from Bit", which tried to fit consciousness into QM without the adhocery of the "collapse of the wave function". But the idea didn't survive a better understanding of the mechanisms of decoherence and entanglement. I think some of the spirit of it still survives in the theories of Fredkin, Deutsch, and Lloyd about the physics of computation.

    173:
    Agreed, but as you say the Weak Anthropic Principle doesn't tell us much beyond the fact that it's possible for us to be here, which we knew anyway. Regrettably, we don't really have any good leads on why the universe is the way we see it.

    Which is part and parcel with the abuse of the principle of mediocrity I was referring to earlier; you're either in one unique universe, in which case there are a huge number of "OMG! Why do we live in a universe with such an improbable fill-in-the-blank property?" questions. Or you're in one universe that's part of a much larger ensemble which you have no direct access to; in fact the theory that posits these extra universes also requires that they be inaccessible. Cue the endless noodlings that never go anywhere.

    Why do we live in a universe with 3 dimensions? What a low and arbitrary number! Why do we live in a universe with a Euclidean metric? Why not an L1 or L-infinity norm? And what a low and arbitrary number!

    Blah blah blah . . . That's not physics; that's metaphysics.

    174:
    Why do we live in a universe with 3 dimensions? What a low and arbitrary number! Why do we live in a universe with a Euclidean metric? Why not an L1 or L-infinity norm? And what a low and arbitrary number!

    The Weak Anthropic Principle actually might help here. See Max Tegmark's On the dimensionality of spacetime.

    175:

    I will make the appropriate ooh and ah noises now; I've only glanced at that article briefly, but it looks as if it rewards careful reading.

    I've read that wave propagation is suitable-for-life in universes with odd numbers of spatial dimensions, but don't have a reference available.

    More than one time dimension would be tricky to address in fiction. Our authors, not to mention our readers, are stuck in only one. The challenge of formatting such a story I leave to a typographer with too much imagination...

    176:

    I think they've disproved the holographic (2-D plus time) universe, since the predicted grain size hasn't shown up. Still, that looked reasonable too, so I'm not readily buying arguments about the ideal dimensionality of reality just yet.

    As for >1 time dimension, I've got to suggest reading The Ghosts of Deep Time. It's set in the Terran Chronoplex. The idea is that Earth's history is a tree (with not very many branches--other things control the number and extent of the multiple worlds phenomena), and history branches in two temporal dimensions. Of course, you could also say that the temporal dimension in this story is fractal, manipulable and varying between 1 and 2...

    177:

    Why do we live in a universe with 3 dimensions? What a low and arbitrary number!

    You think it's too low, I think it's too high. Why are there dimensions at all? Why are there matter and space? Why is there anything at all? That's the question.

    178:

    Anatoly @ 177

    Wrong question(s) Completely.

    You are asking: WHY? Which is a religious phenomenon, and false.

    The proper question is: HOW? By what mechanism?

    OK?

    179:

    The whole point of having a quantum bit is that, to some tests, it looks like it's in a superposition of two states rather than in one state or the other. That's the difference between a qbit and a random bit. Working out how to interrogate a set of qbits without collapsing the state is exactly what it means to build a quantum computer.

    My basic point of disbelief is that one could interrogate the bits precisely enough to identify the small number of hugely improbable states that are relevant to any practical problem, in a background of huge numbers of superimposed, yet irrelevant, states.

    180:

    Anatoly,

    Nobody knows why, but it's pretty obvious that three (very nearly) orthogonal spatial dimensions and one temporal dimension exist. The empirical evidence is overwhelming. Beyond those four dimensions, evidence gets thin quickly.

    181:
    how to interrogate a set of qbits without collapsing the state is exactly what it means to build a quantum computer.

    Nope, you can't do that, QM won't let you. As soon as you interrogate a qbit in any way that would allow the result to be propagated out to the world, it "collapses" [1] the quantum state to one of its observable eigenstates. The way you get multiple results from a quantum computation is to run the computation multiple times, getting (hopefully) different results each time. The way a quantum algorithm deals with "the small number of hugely improbable states that are relevant to any practical problem" is to manipulate the superpositions so as to make the probability of the correct result high, and the probabilities of anything else low. Wikipedia has a reasonably good description of the Shor quantum factoring algorithm; see the section headed "Finding the Period" for information about ensuring the probability of the result.

    [1] I'm not a fan of this terminology; it smacks too much of the Copenhagen Interpretation, which always seemed like a cop-out to me.
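    For what it's worth, here's a sketch of the classical wrapper around Shor's algorithm (the quantum period-finding step is faked by brute force so it runs on anything, and N = 15 is deliberately tiny). It shows both halves of the point above: the quantum subroutine only has to make the period likely to come out of a measurement, and if a run hands you a useless period you simply re-run with a different base.

```python
# Classical wrap-around of Shor's algorithm: the quantum subroutine is only
# used to find the period r of f(x) = a^x mod N (with high probability);
# everything else is ordinary arithmetic. Here the "quantum" step is faked
# by brute force so the sketch actually runs. N = 15 is the toy example.
from math import gcd
from random import randrange

def find_period_classically(a, N):
    """Stand-in for the quantum period-finding step (only valid for gcd(a, N) == 1)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N):
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d, N // d          # lucky guess: a shares a factor with N
        r = find_period_classically(a, N)
        if r % 2 == 1:
            continue                  # useless run: pick a new 'a' and retry
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                  # another useless run
        p, q = gcd(y - 1, N), gcd(y + 1, N)
        if p > 1 and q > 1 and p * q == N:
            return p, q

print(shor_factor(15))   # (3, 5) or (5, 3)
```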

    182:

    I think the problem is not that you are too optimistic. It seems that you have some misconceptions about the effectiveness of quantum computation.

    There are only a limited number of real-world problems where quantum computers can be much more effective than classical computers. To get a speedup over a classical computer you need to rely on quantum interference. Getting the answers out of the superposition usually slows the quantum computer down to something very close to a classical computer. In general you can expect only a square-root speedup instead of an exponential one. This is not a technical limitation, it's a fundamental limitation.

    Here is a good intro to the subject: Scott Aaronson: Quantum Computing and the Limits of the Efficiently Computable - 2011 Buhl Lecture, PowerPoint here

    Summary: a quantum computer is not like a massively parallel classical computer; Bounded-Error Quantum Polynomial Time (BQP) can't solve NP-complete problems fast; and "The No-SuperSearch Postulate" says there is no physical means to solve NP-complete problems in polynomial time.

    What this means is that you should not expect quantum computers in every laptop. Not because the technology is so hard (well, it is), but because there is not much use for it. You can't solve arbitrary matrix calculations or run normal programs faster than on classical machines.
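    As a concrete illustration of the square-root point, here's a toy state-vector simulation of Grover's search (pure linear algebra, nothing to do with real hardware): for N = 1024 items the marked item's probability peaks after roughly (pi/4)*sqrt(N), about 25 iterations, versus roughly 512 classical guesses on average.

```python
# Toy state-vector simulation of Grover's search, to make the
# "square root, not exponential" point concrete. This is just arithmetic
# on an N-dimensional vector of amplitudes, not a model of any hardware.
import numpy as np

N = 1024                 # size of the search space (10 "qubits"-worth)
marked = 123             # index of the item we're looking for

state = np.full(N, 1 / np.sqrt(N))                # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~25 for N = 1024
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

print("Grover iterations:", iterations)
print("probability of measuring the marked item:", state[marked] ** 2)
# ~0.999 after ~25 iterations, versus ~512 classical guesses on average
```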

    183:

    It looks like I misspoke, and probably misthought. Consider a set of entangled qbits (for lack of a better word, a qbyte). A quantum computer should be able to make measurements of the qbyte that do not disentangle any of the qbits, and these measurements should be consistent with a coherent superposition of states on each qbit. As an analogy, interrogating the qbyte is like looking at the results of a double (or multiple) slit experiment, and interrogating a qbit is like covering one of the slits.

    Extending the analogy, trying to guess a 256-bit decryption key with a quantum computer is like running a 256-slit experiment, covering various slits, and trying to match a known output. It's not impossible, but it wouldn't take much noise to screw it up.

    184:

    I don't think you understand that the term "measurement" has a particular meaning in QM that contradicts what you're saying. By definition a measurement causes the measured quantum state to "collapse" into one of its eigenstates, after which any superposition or entanglement in the original state is lost. If QM is a correct theory (and it's one of the most precisely verified theories in all of physics), there is no way a quantum state can survive measurement.

    185:

    Use another word, if you prefer. You can (sense? observe?) the light passing through a double slit, and you can (verb?) the light passing through each single slit. You've said enough that I'm pretty sure you understand the relationship of one to the other.

    186:

    Erm, can I remind you that R. Feynman would have nothing to do with "Collapsing the wave-function mysticism"?

    I agree that Copenhagen is codswallop, but what is obviously wrong is our interpretation/understanding of QM, not the results. Hence the still unexplained presence, carefully ignored if at all possible, of obvious hidden variables/factors in QM: as is revealed by the results when you try a slow drip of single photons through a double slit ... and see how the result appears to change over time as the count builds up.

    187:

    Yesterday on BBC Radio4 .. 05.45->05.59 interview with Ronald Pearson (a mech Eng) .... Claiming prediction for integration of QM & Relativity, and (without saying so) appearing to resolve the renormalisation question.

    Unfortunately, IF it is the same Pearson, he is also following Crookes' loopy mysticism, according to "Google"

    I really don't know what to make of it, without actual numbers/data/publications.

    188:

    The moment you see someone from that far outside any field attempting to 'sort it all out', your crank radar should be on full alert. It's not absolutely impossible he's got the answer, but it's also not absolutely impossible that I've just won the lottery.

    189:

    Precisely!

    Except the renormalisation and QM/Relativity problems are there. Oh dear.

    190:

    " .. but it's also not absolutely impossible that I've just won the lottery. "

    Indeed ? But do you enter the lottery ? And IF you do which lottery do you enter? Surely, even in the Wobbly World of Advanced Mathematics, there has to be some sort of basic 'I do THIS and so This must follow'?

    If you don't Bet then, surely all bets are off.

    Where is your money placed upon the Great Roulette Wheel of Chance and Weird Science ?

    191:

    What will happen? Dunno

    What could happen?

    Hard takeoff

    192:

    Let's imagine that the human mind uses quantum computation, and let's imagine that we have a gigaqbit to work with (high? low? not really the point...)

    From this point of view, all of a sudden a 32 bit quantum computer becomes something of a curiosity -- we would have a lot of experience, from the inside, of how to deal with quantum computation, and the new perspective of setting the thing up from the outside would give us some interesting perspectives (though, ultimately, rather mundane sorts of things that would require a serious entrepreneur to turn into something a lot of people find attractive).

    Put differently, we might wind up with search mechanisms that require extended training to tune properly...

    193:

    Unfortunately reality slams the door in your face on this one... having given up my gig at the local university repairing their 50-year-old nuclear research reactor, I have since returned to the relatively boring world of microprocessors as a test engineer at AMD. (Intel and AMD are woefully behind where SUN Microsystems was 10 years ago, for starters.) So where does this marvelous new tech come from? Certainly not the major players in the semi-conductor industry (who actually have the capital and resources to pull it off to start with)... Most of these companies are now spending their time and effort ensuring their DFT features work rather than concentrating on whether the actual processor works correctly. Seems a bit idiotic to me, but that is the state of the industry at the moment.

    My biggest lament at the moment is that I now have to work for the dark side (in terms of computing), but as I oft heard quoted, "they made me an offer I couldn't refuse ..."

    194:

    D-Wave have sold a 128-qubit adiabatic quantum computer to Lockheed Martin, and another is in place at USC's supercomputer center in Marina del Rey.

    http://dwave.wordpress.com/2011/10/31/historic-opening-of-the-worlds-first-quantum-computing-centre/

    They have already solved some problems on the 128 qubit processor which do not appear solvable using current compute hardware.

    The problem I'm most interested in is the solution of the binary classifier problem (e.g. is there a picture of grandma in this photo or not, or should the car make a left turn or not). To make these decisions you put together a bunch of small simple questions -- is that pixel blue, is the pixel next to it grey, etc. -- together with a set of weights, and then run a simple calculation which says yes/no. The hard part is calculating the weights when you have a large number of simple classifiers. This is classified as an NP-hard problem; for large numbers of weights it cannot be solved exactly on a normal computer.

    The D-Wave computer appears to solve this in polynomial time.

    But the kicker is that the output (the weights) can then be used by even a simple (smaller than an iPhone) computer to perform the classification more accurately than ever before.
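    A sketch of that division of labour (the data, the weak classifiers, and the brute-force search are all invented stand-ins; a real system would hand the weight search to an annealer or a boosting heuristic): choosing the 0/1 weights blows up combinatorially, while applying the chosen weights afterwards is trivially cheap.

```python
# Sketch of the split described above: choosing the 0/1 weights is the
# expensive combinatorial part (here brute-forced over 2^10 possibilities,
# which is only feasible because the example is tiny); applying the chosen
# weights afterwards is trivially cheap.
from itertools import product
import random

random.seed(1)
n_examples, n_weak = 200, 10

labels = [random.choice((-1, 1)) for _ in range(n_examples)]
# Each weak classifier is a column of +/-1 votes that agrees with the
# true label about 60% of the time.
votes = [[lab if random.random() < 0.6 else -lab for lab in labels]
         for _ in range(n_weak)]

def predict(weights, i):
    s = sum(w * votes[k][i] for k, w in enumerate(weights))
    return 1 if s >= 0 else -1

def errors(weights):
    return sum(predict(weights, i) != labels[i] for i in range(n_examples))

best = min(product((0, 1), repeat=n_weak), key=errors)   # the hard part
print("best weights:", best, "errors:", errors(best), "out of", n_examples)
# Once 'best' is known, predict() is cheap enough for any phone-class device.
```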

    Imagine a future not far from now when every Android device is trained specifically to recognize all of your friends' photos, or understand your specific speech patterns, and maybe does it better than even your friends or family do.

    The device just needs a set of patterns which are calculated by a D-Wave processor and then stored for use on all your Google-connected devices.

    This is why Google is interested in this. Quantum computing doesn't just let us solve hard problems in protein folding or cryptography; it lets us turn existing hardware into 'smart' hardware by pushing at it a small set of bits that take an enormous amount of compute resources to calculate.
