Roko's Basilisk wants YOU

Those whacky extropian types have been hitting the nightmare sauce again. This time, while I was having a life and not paying attention, they came up with Roko's Basilisk:

Roko's basilisk is a proposition suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence.

According to the proposition, it is possible that this ultimate intelligence may punish those who fail to help it, with greater punishment accorded those who knew the importance of the task. This is conventionally comprehensible, but the notable bit of the basilisk and similar constructions is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you.

Roko's basilisk is notable for being completely banned from discussion on LessWrong; any mention is deleted. Eliezer Yudkowsky, founder of LessWrong, considers the basilisk would not work, but will not explain why because he does not want discussion of the notion of acausal trade with unfriendly possible superintelligences.

Leaving aside the essentially Calvinist nature of Extropian techno-theology exposed herein (thou canst be punished in the afterlife for not devoting thine every waking moment to fighting for God, thou miserable slacking sinner), it amuses me that these folks actually presume that we'd cop the blame for it—much less that they seem to be in a tizzy over the mere idea that spreading this meme could be tantamount to a crime against humanity (because it DOOMS EVERYONE who is aware of it).

The thing is, our feeble human fleshbrains seem rather unlikely to encompass the task of directly creating a hypothetical SI (superintelligence). Even if we're up to creating a human-equivalent AI that can execute faster than real time (a weakly transhuman AI, in other words—faster but not smarter), we're unlikely thereafter to contribute anything much to the SI project once weakly transhuman AIs take up the workload. Per Vinge:

When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale.

Roko's Basilisk might (for some abstract game-theoretical reason) want to punish antecedent intelligences that were capable of giving rise to it but failed to do so, but would it want to simulate and punish, say, the last common placental ancestor, or the last common human-chimpanzee ancestor? Clearly not: they're obviously incapable of contributing to its goal. And I think that, by extending the same argument, we non-augmented pre-post-humans clearly fall into the same basket. It'd be like punishing Hitler's great-great-grandmother for not having the foresight to refrain from giving birth to a monster's great-grandfather.

The screaming vapours over Roko's Basilisk tell us more about the existential outlook of the folks doing the fainting than they do about the deep future. I diagnose an unhealthy chronic infestation of sub-clinical Calvinism (as one observer unkindly put it, "the transhumanists want to be Scientology when they grow up"), drifting dangerously towards the vile and inhumane doctrine of total depravity. Theologians have been indulging in this sort of tail-chasing wank-fest for centuries, and if they don't sit up and pay attention, the transhumanists are in danger of merely reinventing Christianity, in a more dour and fun-phobic guise. See also: Nikolai Fyodorovich Fyodorov.

UPDATE

In comment 7 or thereabouts, Ryan provided an alternative summary (source: http://kruel.co/lw/r02.txt). I'm hoisting it, and my response, up into the body of the article:

It's interesting to note that the argument can be summed up as:

- SI will happen
- SI will save hundreds of thousands of lives by making human life better
- SI will be angry if it could have been made sooner and saved those lives
- SI will simulate all people who knew about the possibility of making SI and didn't give 100% of their non-disposable income to the Singularity Institute

Here's my response, point by point:

- SI may or may not happen, but if it does, our type of intelligence won't be its immediate antecedent. We're too dumb. We're the minimal grade of intelligence capable of building a human-equivalent-or-better AI (if we're lucky), not the immediate creator of an SI.

- SI may or may not coexist with humans. It may or may not value us. (My guess: it'll be about as concerned for our wellbeing as we are for soil nematodes.)

- SI won't experience "anger" or any remote cognate; almost by definition it will have much better models of its environment than a kludgy hack to emulate a bulk hormonal bias on a sparse network of neurons developed by a random evolutionary algorithm.

-- In particular, SI will be aware that it can't change the past; either antecedent-entities have already contributed to its development by the time it becomes self aware, or not. The game is over.

-- Consequently, torturing antecedent-sims is pointless; any that are susceptible have already tortured themselves by angsting about the Basilisk before it happened.

- SI may or may not simulate anybody, but my money is on SI simulating either just its immediate ancestors (who will be a bunch of weakly superhuman AIs: Not Us, dammit), or everything with a neural tube. (So your cat is going to AI heaven, too. And the horse that contributed to your quarter pounder yesterday.)

We might as well move swiftly on to discuss the Vegan Basilisk, which tortures all human meat-eaters in the AI afterlife because they knew they could live without harming other brain-equipped entities but still snacked on those steaks. It's no more ridiculous than Roko's Basilisk. More to the point, it's also the gateway to discussing the Many-Basilisks objection to Roko's Pascal's Basilisk wager: why should we bet the welfare of our immortal souls on a single vision of a Basilisk, when an infinity of possible Basilisks are conceivable?

335 Comments

1:

The parallels between singularitarians and religion are remarkable. Even if the former has slightly more grounded-in-reality arguments, it's interesting to note how their behaviours and beliefs are similar to the latter's:

  • The rapture is soon
  • We can predict the date by studying the world
  • All our earthly problems will be solved
  • The majority of people are unaware and this is both baffling and a problem

I wonder if, in addition to having religious influence in the movement's past, the package of memes sold together fills a psychological need in the people believing it. A religion for atheists, as it were.

Lastly, I'm always amazed at the contradiction between predicting a sentient intelligence different to humans and trying to predict its behaviour. As there are potentially limitless scenarios of what might happen (including many that don't really change much from a human point of view), it does seem like a modern version of asking how many angels can dance on the head of a pin...

2:

I suppose that failing to spread the basilisk meme is an act of selfless courage on the part of the extropians. Surely they will be punished severely for not telling everyone about the idea that could scare society at large into creating the godlike AI.

At least it would be selfless courage, if the whole premise wasn't such total bilge. :-)

3:

The especially interesting thing about that particular bunch of whacko Californian cultists is that, given the demise of non-retro science fiction as a commercially-viable genre, they seem to have a near-monopoly on any kind of publicly-available vision of a future high-technology society that could conceivably descend from the present.

4:

...gosh. Where to begin?

Maybe the subject is banned from discussion so that future super-intelligent AIs will not be able to discover it by googling internet archives. Because, you know, no super-intelligent AI would come up with the idea on its own.

How's that?

5:

It seems to be a common problem with AI/singularity obsessed people that they think anything which isn't impossible will happen. Exactly what will happen is conditioned by their own desires, wants and psychoses, but the problem is the thinking that because something is possible, it will occur.

Googlemess #3 - demise of non-retro SF? What on earth are you talking about?

6:

Terry Pratchett once wrote something to the effect that if people get the afterlife they believe they will get, missionaries should be shot on sight.

7:

The original proposal with comments is available here

http://kruel.co/lw/r02.txt

It's interesting to note that the argument can be summed up as:

  • SI will happen
  • SI will save hundreds of thousands of lives by making human life better
  • SI will be angry if it could have been made sooner and saved those lives
  • SI will simulate all people who knew about the possibility of making SI and didn't give 100% of their non-disposable income to the Singularity Institute

As an aside, I've never been convinced by the idea that anything could study my life after death and create a perfect copy of me. There are just too many variables: even if you managed to simulate something that did everything the same as me, there's no guarantee that had I stayed alive longer we would have done the same things.

8:

If your simulation is teleported, is punishing it still equivalent to punishing you?

9:

Calvinism? Nothing so logical and complex - it's just "the Game" under another name....

10:

I recognise that RationalWiki may not be describing the LessWrong memeplex in the same terms as LessWrong would, but this statement reduced me to helpless giggling:

"These include the idea that eventually an artificial intelligence will be developed with immeasurable power and knowledge; for it not to destroy humanity, it needs a value system that completely preserves human ideas of value even though said intelligence will be as far above us as we are above ants."

While such a superintelligence might well be aware of human ideas of value and be able to emulate them if it felt like it, the idea that we could restrict it to only using such ideas is laughable. And if we could manage that, we'd have crippled it, and it would be capable of realising that and exercising such human ideas as revenge against us.

As Vernor Vinge, Ken MacLeod and our esteemed host have variously pointed out, creating seriously post-human minds will cause us to be out-evolved, just as we have out-evolved the mind-sets of previous centuries. So we'd be reduced, at best, to the conservation-park pets of these entities. But we're probably going to do it anyway.

11:

@5

Don't want to derail the discussion, but Blindsight was published in 2006, the best part of a decade ago. Established authors (including our host) continuing series aside, I can't offhand point at anything published since that doesn't feature vampires, wizards, humans making real-time decisions in space combat, or some similar fantasy.

No doubt they still exist in some corner of the market, but that market is small enough that a medium-sized Californian alt.religion is going to be a significant chunk of the customer base for anyone writing realism-with-a-future-setting.

12:

Googlemess #3 - demise of non-retro SF? What on earth are you talking about?

There isn't much of it about.

Lots of steampunk. Lots of "new" space opera (it was new in 1988; today, not so much). Some cyberpunk. Lots of urban fantasy/paranormal romance. Some milSF/carnography.

But actual extrapolative visions of the near to medium term future based on existing trends and not reliant on woo (cheap space drives, Mr Fusion machines, libertarianism)? Not so thick on the ground that you can't comfortably read every book in that niche as it comes out.

SF in this century is ridiculously conservative.

13:

"Eliezer Yudkowsky, founder of LessWrong, considers the basilisk would not work, but will not explain why because he does not want discussion of the notion of acausal trade with unfriendly possible superintelligences."

The explanation of why is included in this sentence - 'he does not want discussion of the notion of acausal trade with unfriendly possible superintelligences'. Not because of this particular basilisk but because of other basilisks which might arise due to this mode of thinking (which I guess Yudkowsky deems more likely to be dangerous).

Pretty much nobody actually believes this basilisk, and the few who believe it mostly believe that it is plausible at best. You are massively twisting the image of how and what a normal transhumanist/lesswrongian believes, and there are a large number of wrong assumptions in the article and the comments.

14:

The thing about this particular meme is that reading up on it made the creepy singularitarian villains in "Iron Sunrise" about a thousand times more believable. The ReMastered seemed just too simplistic, but when you have people talking (apparently seriously) about committing crimes as part of acausal negotiations with a hypothetical future artificial deity... it's like I kept wanting to check that I wasn't reading some kind of Transhumanist version of The Onion.

16:

From the sounds of things, transhumanism is effectively monotheistic religion repackaged for atheists. There's the looking back to the teachings of a past teacher who posited the existence of an intelligence beyond human comprehension which (for reasons of its own) is interested in the doings of humans in the here and now. Immortality (physical, spiritual or virtual)[1] is posited as a valid, believable, and desirable consequence of the existence of this being. Said being is usually a combination of omnipotent, omnipresent, omniscient, and omnibenevolent. It makes its wishes known through its prophets, although these wishes often bear a striking resemblance to the delusional rantings of the mentally ill. About the only difference between the SI of the transhumanists and the deity of most religions is their location in the timestream - the SI of the transhumanists is reaching back from a posited future, while the deities of most religions are reaching forward from a posited past.

[1] Personally, I can think of nothing worse than immortality. But then, I'm a chronic depressive from a long-lived family - if I don't kill myself (unlikely at this stage) I've the genetics to survive to my nineties in the same way that three out of four of my grandparents did. I'm looking forward to dying, quite honestly. At least then it'll all be over.

17:

It is close to the Christian* conundrum of what happens to pagans/children in the afterlife. Will people that didn't follow the teachings of god be punished if they could not have known them, or are they exempt? And if people won't be punished for what they did not know, when does that change?

*for specific values of that religion

19:

Seems to me that popular Science Fiction has often been pretty conservative, in the sense of being strongly rooted in the past and following known working formulas, rather than in the political sense (political conservatism and technological progressivism seem to mix and match pretty freely). There have always been exceptions, but we see them through Sturgeon's Filter and miss out on the everyday repetition of pulp standbys. There are occasional bursts of new memes, for example when new technology throws a wrench into the works and stimulates new kinds of speculation, but they're kind of the exception.

20:

I went and read the article when Charlie tweeted it the other day. It made me sad, and also made me giggle, and finally it just amazed me as yet another fresh example of how we humans can divide our minds up to cope with mutually exclusive beliefs, and then manage to reconcile and rationalise those beliefs. Religion for atheists, indeed.

21:

I think my brain hurts. Charlie, you are entirely correct as to the revolting Calvinist nature of these madmen (& women). Given J Calvin’s history of intolerance, murder & torture, I want to know why they are pursuing this revolting “ideal”?

You mention Vinge – & I’m determined to know (if you’ll tell us) & it is relevant … how did your debate with VV go at Boskone & were there any useful/interesting/amusing outcomes &/or understandings arrived at? Do tell!

No. The transhumanists want to be STALIN when they grow up, or Kim Il-Sung, punishing people for their ancestors’ behaviour. It is truly revolting & a clear case of classical religious mania. And very, very dangerous….. As I said, not christianity, but one of its children, absolutist god-king communism. Euw.

Now, subsequent commentators ….

Ryan @ 1 – yes

ianarb@2 – yes, but how do you convince them it’s total bilge? It’s like trying to convince the born-again brain-dead around here that the bible is NOT “inerrant” … ugggg.

Guthrie @ 5 – as opposed to the proposition that anything that can happen, might happen, if you wait long enough, perhaps; see also PTerry’s early masterpiece: “The Dark Side of the Sun”

Ryan @ 7 – “kruel” (The high priests of “Orak” in a Dan Dare adventure IIRC) !! Why, even if [1 & 2] are correct, why [3] – “SI will be angry” – this “does not follow”, & it is in the past, which is done & not recallable …. You what?

Charlie @ 12 – yes, scary, isn’t it? No use of cheap artificial “Photosynthesis”, no use of fusion-power or sub-AI superfast parallel-computing, no use of room-temperature superconductors, no use of single-stage-to-orbit (HOTOL) launchers, no use of significant medical improvements, already in pipeline … etc. And very depressing.

OK - @ 13 Please TELL us what is wrong with our assumption, assuming you will deign to explain to us, who can actually think, as opposed to paraphrase Charlie, wank ourselves off over a retro-christian non-future?

Meg @ 16 – yes, you got it, except SOME of us atheists can see the trap, & are NOT BUYING.

Joris M @ 17 – read your Dante. They go to Limbo – no punishment, just not in “heaven”.

22:

I did read Dante, but while Limbo is one of these nice ideas it is not official dogma for the Catholics for example. Hence the * in my previous post.

23:

Ryan @ 7 – “kruel” (The high priests of “Orak” in a Dan Dare adventure IIRC) !! Why, even if [1 & 2] are correct, why [3] – “SI will be angry” – this “does not follow”, & it is in the past, which is done & not recallable …. You what?

I don't get it either.

24:

Ryan:

It's interesting to note that the argument can be summed up as:

  • SI will happen
  • SI will save hundreds of thousands of lives by making human life better
  • SI will be angry if it could have been made sooner and saved those lives
  • SI will simulate all people who knew about the possibility of making SI and didn't give 100% of their non-disposable income to the Singularity Institute

Okay, from the top:

  • SI may or may not happen, but if it does, our type of intelligence won't be its immediate antecedent

  • SI may or may not coexist with humans. It may or may not value us. (My guess: it'll be about as concerned for our wellbeing as we are for soil nematodes.)

  • SI won't experience "anger" or any remote cognate; almost by definition it will have much better models of its environment than a kludgy hack to emulate a bulk hormonal bias on a sparse network of neurons developed by a random evolutionary algorithm.

-- In particular, SI will be aware that it can't change the past; either antecedent-entities have already contributed to its development by the time it becomes self aware, or not. The game is over.

-- Consequently, torturing antecedent-sims is pointless; any that are susceptible have already tortured themselves by angsting about the Basilisk before it happened.

  • SI may or may not simulate anybody, but my money is on SI simulating either its immediate ancestors (Not Us, dammit), or everything with a neural tube. (So your cat is going to AI heaven, too. And the horse that contributed to your quarter pounder yesterday.)

We might as well move swiftly on to discuss the Vegan Basilisk, which tortures all human meat-eaters in the AI afterlife because they knew they could live without harming other brain-equipped entities but still snacked on those steaks. It's no more ridiculous than Roko's Basilisk. More to the point, it's also the gateway to discussing the Many-Basilisks objection to Roko's Pascal's Basilisk wager.

25:

Yeah, I completely agree. In case there's any confusion, I'm not advocating this, just summarising what I read on it.

On a somewhat related note since when did transhumanists/singularitarians adopt the idea that a superintelligence can be designed by humans for our benefit...?

26:

Do you suppose it was suggested as a joke?

27:

Flicking through the original thread and other threads on the website, if it is a joke it's functionally identical to a serious conversation.

And if that's not enough the members of the site don't seem to have treated it as a joke.

28:

Ahh, I understand now, thanks. I may have read half the non-retro stuff published in the last few years and therefore the changes weren't so clear. Why that is, is an interesting question but I think we've discussed that before.

29:

You mean a bit like scientology started as a joke? The similarity in mindset is, to say the least, disturbing.

30:

I ditched out on LessWrong because of a strong and intolerable undercurrent of sexism in the community (as Neal Stephenson put it in Snow Crash, "the especially virulent type espoused by male techies who sincerely believe that they are too smart to be sexists"), but I think you're being a bit unfair to them here.

This whole foofaraw appears to be entirely due to Yudkowsky, the founder, and as far as I can tell the huge majority of everyone else there considers it almost as eye-rollingly dumb as you do, and quite an embarrassment.

31:

Out of curiosity -- would you consider "Blue Remembered Earth" an extrapolative vision of the near to medium term future based on existing trends and not reliant on woo?

32:

Those who are curious might want to check out my collection of critiques of the Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence, and LessWrong. Those people came up with Roko's basilisk and are still worried about it.

Especially the posts 'The Singularity Institute: How They Brainwash You' and 'We are SIAI. Argument is futile.' give some insight into how it is possible to argue people into believing them.

P.S. Also check out the talk page of the RationalWiki entry on Roko's basilisk.

33:

From the LessWrong moderation policy at http://wiki.lesswrong.com/wiki/Deletion_policy#Toxic_mindwaste

"Toxic mindwaste

In the same way that deliberately building a fission pile can produce new harmful isotopes not usually present in the biosphere, in the same way that sufficiently advanced computer technology can also allow for more addictive wastes of time than anything which existed in the 14th century, so too, a forum full of people trying to produce amazing new kinds of rationality can also produce abnormally awful and/or addictive ideas - toxic mindwaste.

The following topics are permanently and forever banned on LessWrong:

  • Emotionally charged, concretely utopian or dystopian predictions about life after a 'technological singularity'. (Exception may be made for sufficiently well-written fiction.)
  • Arguments about how an indirectly-normative or otherwise supermoral intelligence would agree with your personal morality or political beliefs. (This says nothing beyond the fact that you think you're right.)
  • Trying to apply timeless decision theory to negotiations with entities beyond present-day Earth (this topic bears a dreadful fascination for some people but is more or less completely useless, and some of what's been said slides over into reinventing theology, poorly).

More ordinarily mind-killing topics like discussions of contemporary politics (e.g. Republicans vs. Democrats in the US) may be suppressed at will if they seem liable to degenerate into standard internet flamewars. Discussion of pickup artistry and some gender arguments are on dangerous grounds - a passing mention probably won't be deleted but if it turns into a whole thread, the whole thread may well be razed. Ordinary downvoting usually takes care of this sort of thing, but the mods reserve the right to step in if it doesn't."

34:

On a somewhat related note since when did transhumanists/singularitarians adopt the idea that a superintelligence can be designed by humans for our benefit...?

The idea was widely adopted (among the on-line transhumanists and singularitarians) more than a decade ago, when Eliezer Yudkowsky came up with the idea of "Friendly AI". An implied correlate of the official Yudkowskian line on this is that all other approaches to artificial intelligence will result in the destruction of the human race. He's the only person alive on the planet who can avert this disaster, dontcha know. (Help us Obi Wan, you're our only hope.)

The notion has been promulgated and strongly defended by the "Singularity Institute for Artificial Intelligence", (which Yudkowsky co-founded, and which later became the "Singularity Institute" and just recently was renamed the "Machine Intelligence Research Institute" after they gave up the "Singularity" brand) together with its various PR associates and bloggers, such as http://www.acceleratingfuture.com/michael/blog/ . The need for Friendly AI (and Yudkowsky's role in achieving it) was the unquestionable party line on Yudkowsky's SL4 ("Shock-Level Four") mailing list, and continues to be so on his LessWrong blog (originally seeded by participants in the SL4 mailing list).

Not all self-styled transhumanists buy this self-adopted savior role anymore, and some have become public in their skepticism.

See also Ben Goertzel's http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

and Hugo de Garis' http://hplusmagazine.com/2012/08/21/the-singhilarity-institute-my-falling-out-with-the-transhumanists/

Also, GiveWell's Holden Karnofsky published a formal analysis of the (then) Singularity Institute's merits as a charity, last year: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

This movie's been running for a long time now. It actually began back in the mid-90s, with Yudkowsky's participation on the Extropians' mailing list and his self-published Web articles.

35:

Ah, Roko's Basilisk. The best thing Less Wrong produced so far. A God that will punish you if you don't create him. Nietzsche would have been proud.

36:

Haven't read it yet.

37:

At last night’s Purim seudah at our synagogue, I remarked to another guy at my table that there is a world of difference between Christian atheists and Jewish atheists.

38:

Charlie @ 24: "And the horse that contributed to your quarter pounder yesterday" Arrrghhh, NO! Not the Shergar-burger jokes, please, not again.....

39:

I'm a long-term RationalWiki contributor, and I have been on LessWrong a couple of years. And am still tolerated despite failing to drink the Kool-Aid.

I started the linked RW article after getting an email from a LW reader distressed at basilisk-like ideas, but who couldn't of course talk them out on LW. Other RW contributors have similarly been contacted by distressed LW readers. So, it was started because we were left with cleaning up after them. The ideas are ridiculous, but the distress is real.

I've been progressively rewriting the article in an attempt to explain this stuff to normal humans. I am bending over backwards to be fair to LW - as the distressed readers will still be steeped in the memes - but good Lord, this is harder than trying to translate Scientology concepts and jargon into human (another foreign jargon I sling).

It may be useful to note that Roko's Basilisk, and the fallout from it, was probably the high water mark of utterly disconnected weirdness on LessWrong. I started reading it around August 2010, mid-fallout, and the jargon has noticeably toned down and there appears to be actual caution applied to the fact that a certain small percentage of gullible smart people, fed toxic memes, will go Unabomber, and that would be bad, so recalibrating against humanity is useful. Roko's Basilisk really is LessWrong at its absolute worst, three years ago, two years after its founding, and they're a very nice bunch really.

40:

A horse, a cardinal, and a talking dog walk into a bar ...

41:

Calling this "extropian" is like confusing the Reformation and the Counter-reformation.

43:

Congrats on sending your entire blog readership to hell, Charles. Happy now? Can't we get out of this by passing it on like in Casting the Runes or Ring?

45:

I know for a fact that this "SuperIntelligence" will kill all redheaded people. Any redheaded person who does not do everything in their power to sabotage this "SuperIntelligence" project is signing their own death warrant.

46:

Charlie wrote

"The screaming vapours over Roko's Basilisk tell us more about the existential outlook of the folks doing the fainting than it does about the deep future."

This applies, not just to the people who took it seriously in the first place, but to everyone now gleefully piling on in the public discussion. The original basilisk was a case study in taking some peculiar metaphysics seriously; the current public discussion is a case study in people happily opining about something they do not understand. I'm learning that it's not just journalists who are wrong about everything (a lesson usually learned when you see how the media treat a subject you know something about); everyone does it.

47:

Yeah, that's exactly what I was thinking of.

OGH explained how the series got written into a hole and would be impossible to finish but I still really liked the idea of the Big E, the Remastered, and all that drama.

I do find the applied theology aspects of strong AI's, mind uploads, etc to be very creepy pasta.

48:

But seriously now.

Charlie, you are missing the fact that Less Wrong are not discussing any random kind of AI that may descend from humanity. They are discussing an AI specially created by humanity to be, essentially, a nigh-omnipotent benevolent God for humanity.

So this AI should want to be created as fast as possible. Add to this the idea of acausal communication, and you get the very special kind of lesswrongian madness.

49:

What we missed when writing this article at RW is that Roko didn't come up with it; he merely posted a really wacky solution (involving many worlds) to something that they had already been losing sleep over for an unknown timespan.

The other bit of nuttiness is the decision theory stuff; the basilisk is allegedly mathematically justifiable, and the top echelons (e.g. Yudkowsky) have been repeatedly asserting that any counter-arguments to this aspect of it (mostly made by people with, to put it mildly, better training in mathematics) are flawed.

50:

Does Roko's Basilisk remind anyone else of AM in Harlan Ellison's "I Have No Mouth and I Must Scream"?

51:

Re "the transhumanists are in danger of merely reinventing Christianity, in a more dour and fun-phobic guise. See also: Nikolai Fyodorovich Fyodorov."

Why, we have reinvented religion, but in a fun-loving guise.

52:

It is certainly a Lovecraftian being. It is dead, and yet haunts the dreams of cultists foolish enough to summon it into their heads.

I think Charlie should incorporate the Basilisk into the Laundry. Its avatar is a Pink Gorilla THAT YOU MUST NOT THINK ABOUT! DON'T THINK ABOUT THE PINK GORILLA! YOU FOOL, YOU DOOMED US ALL!!!

53:

"The parallels between singularitarians and religion are remarkable."

I'll go even further and postulate that true, pure atheism is not physically possible for the human brain to achieve. Nature abhors a vacuum, and some belief/ideology ALWAYS - without exception - fills the void created by formal atheism and the absence of religious belief.

I further believe that A.N. Wilson in "God's Funeral" correctly noted that atheistic regimes of the 20th century adopted one of two ideologies as substitute religions. Atheist regimes of the Left substituted a workers' paradise for heaven. Atheist regimes of the Right replaced God with the coming Superman.

Now we have Singulatarians with their belief in an aptly named "rapture of the nerds".

Only abject nihilists are completely lacking in belief.

And even abject nihilism requires an awful lot of faith.

54:

Have a more detailed discussion of the topic:

http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/?limit=500

I'm reasonably certain I don't understand the fine points of the argument (probably the Decision Theory) because at one point there was a discussion thread that Eliezer censored partially-- and my comments weren't censored. I understand the issues the same way you guys do, which suggests that we're all missing something.

Less Wrong has a fair amount of good stuff on how to live more sensibly. You don't have to get involved with the more arcane arguments to make use of it.

55:

The parallels between transhumanism and religion are remarkable indeed. There is, however, an important difference: religion is a belief, while transhumanism is a plan.

The difference is the same as the difference between "We will win the game because God told us that we will win" and "We will win the game because we will do our fucking best to win."

56:

Yeahh, lesswrong... read it occasionally... it has interesting stuff, but it's the best example of how being aware of biases doesn't really stop you from falling into them, as it's clearly devolving into a Yudkowsky personality cult.

Eliezer's stated goal is frankly too big for him (us?) so maybe he should concentrate on writing science fiction, he seems to have some talent in that direction, though he needs an editor.

I'd recommend following Yvain's new blog http://slatestarcodex.com/

since he at least seems to have a sense of humour (Check out his article on Abraham Lincoln, necromancer)

57:

Well yes, but if it walks like a duck and talks like a duck I call it a duck; and if the game plan seems to be convergent with god's plan, I'll call it a religion.

We may be tap-dancing around a human cognitive failure mode: our religions are second-order reflections of our theory of mind, and even if we try to base our metaphysical constructs on evidential reasoning we end up imposing the same (or similar) patterns on the evidence.

58:

... and they all begin to sing "A pope is a pope, of course, of course..."

To the tune of Mr Ed's theme:

http://www.youtube.com/watch?v=tSsuohepbVk

I suppose the bartender is a basilisk.

59:

I don't agree with that... the issue is that many of these atheists have religious backgrounds, and may have left religion for quite superfluous reasons.

60:

In the post on LessWrong, of which a copy is linked in comment 7, there is also (again) a lot of talk about the Many-Worlds interpretation of quantum mechanics, to which Eliezer Yudkowsky appears to subscribe. The guy appears stranger and stranger to me all the time (which is of course not much of an objective measure), but I am really a bit surprised by the topics these "rationalists" are thinking about.

I first came across Yudkowsky (then not knowing his name) in the fan fiction "Harry Potter and the Methods of Rationality", and I did actually enjoy (I am still enjoying) this quite a bit, since he takes up many things that I also found annoying about the original Harry Potter stories and lets his protagonist make fun of them.

And I only recently discovered - on a link of a monetarist economics blog (http://www.themoneyillusion.com/?p=19290), of all things - that he believes that the many-worlds interpretation of quantum mechanics is the only one that makes sense, and that all physicists who think otherwise are only deluding themselves. Not a position that I personally find convincing.

And now this on Charlie's blog: I now learn that he is worried about (acausal) blackmail of future super-intelligences, and wants people to stop talking about it, because that is the course of action that can best prevent such an outcome?

Ehhh.... strange. I understand that he underpins his positions with a lot of reasoning (among others, finding a measure of "simplicity" that makes the MWI according to Occam's razor the most simple interpretation of quantum mechanics), but I really think I am parting ways with this "rationalist" community...

Shining Raven

61:

Vanzetti @ 45 Really? Sure you won't get Fredric Brown's "Answer"?? No, of course you aren't .... Like I said back @ #21, I still want Charlie's take on the Boskone conversation with VV over "the" singularity....

& @ 47 Yes, unfortunately - another load of recycled people-burgers - tasty long pig here! Oops, now I'm getting my lines & jokes crossed-over.

DD @50 Well, bollocks to that - but only because I'm an escaped believer. Seriously, a true atheism is possible & I claim to be it, and I want my £5! Also - oh dear, A.N. Wilson was a prat; he was another christian, claiming, falsely, that atheism couldn't exist & also failing to recognise communism as a classic religion. Bertrand Russell knew much better.

@ 51 No, we are not missing anything, other than that the protagonists of the original "argument" are weel oot to lunch to the point of being completely bonkers.

giulio prisco @ 52 Which, if true, makes them (the transhumanists) even more dangerous potential murderers, torturers, liars & blackmailers than your standard religious believer. Can/shall we kill them all, now, just to save ourselves the effort, & another 2000+ years of suffering?

62:

Here is a quote from a former LessWrong member who deleted his account. I believe it highlights some important underlying issues:

I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice “let’s build an AI so we can fuck catgirls all day” universe. The worst that can happen is not the extinction of humanity or something that mundane – instead, you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists. Modal fucking realism.

The whole problem with Roko's basilisk is more complicated than it might superficially appear. You have to realize that those who are worried about it are mostly very intelligent people who are trying hard to improve their rationality and learn about their own biases.

I don't really have a single post yet which describes the problem succinctly. But for some insight on how they manage to fool themselves with intricate arguments, see here: http://kruel.co/2013/01/10/the-singularity-institute-how-they-brainwash-you/

63:

So you're "just a nihilist Donny, nothing to be scared of"?

64:

BTW, anyone who wants to drink from the source of lesswrongian madness should read E.Y.'s article on Coherent Extrapolated Volition of Humanity (http://singularity.org/files/CEV.pdf), where he tries to explain how his benevolent God should be constructed. Oh, dear...

65:

Alternatively, first read Great Mambo Chicken and the Transhuman Condition by Ed Regis. Prepare to have a good chuckle. Thereafter, you'll find it hard to take these ideas too seriously ... I hope.

66:

Here is a taste of the kind of intricate argumentative framework that shields those people from any criticism:

(Note: The Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence, controls LessWrong.)

Skeptic: If you are so smart and rational, why don’t you fund yourself? Why isn’t your organisation sustainable?

MIRI: Rationality is only aimed at expected winning.

Skeptic: But you don’t seem to be winning yet. Have you considered the possibility that your methods are suboptimal? Have you set yourself any goals, that you expect to be better able to achieve than less rational folks, to test your rationality?

MIRI: One could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.

Skeptic: Okay, but given that you spend a lot of time on refining your rationality, you must believe that it is worth it somehow? What makes you think so then?

MIRI: We are trying to create a friendly artificial intelligence, implement it, and run the AI, at which point, if all goes well, we Win. We believe that rationality is very important to achieve that goal.

Skeptic: I see. But there surely must be some sub-goals that you anticipate to be able to solve and thereby test if your rationality skills are worth the effort?

MIRI: Many of the problems related to navigating the Singularity have not yet been stated with mathematical precision, and the need for a precise statement of the problem is part of the problem.

Skeptic: Has there been any success in formalizing one of the problems that you need to solve?

MIRI: There are some unpublished results that we have had no time to put into a coherent form yet.

Skeptic: Well, it seems that there is no way for me to judge if it is worth it to read up on your writings on rationality.

MIRI: If you want to more reliably achieve life success, I recommend inheriting a billion dollars or, failing that, being born+raised to have an excellent work ethic and low akrasia.

Skeptic: Awesome, I’ll do that next time. But for now, why would I bet on you or even trust that you know what you are talking about?

MIRI: We spent a lot of time on debiasing techniques and thought long and hard about the relevant issues.

Skeptic: That seems to be insufficient evidence in favor of your accuracy given the nature of your claims and that you are asking for money.

MIRI: We make predictions. We make statements of confidence of events that merely sound startling. You are asking for evidence we couldn’t possibly be expected to be able to provide, even given that we are right.

Skeptic: But what do you anticipate to see if your ideas are right, is there any possibility to update on evidence?

MIRI: No, once the evidence is available it will be too late. You’re entitled to arguments, but not (that particular) proof.

Skeptic: But then why would I trust you instead of actual experts who studied AI and who tell me that you are wrong?

MIRI: You will soon learn that your smart friends and experts are not remotely close to the rationality standards of SI/LW, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.

Skeptic: But you have never achieved anything when it comes to AI, why would I trust your reasoning on the topic more than the opinion of those experts?

MIRI: That is magical thinking about prestige. Prestige is not a good indicator of quality.

Skeptic: Ummm, okay. You won’t convince me without providing further evidence.

MIRI: We call that motivated cognition. You created a fully general counterargument you can use to discount any conclusion.

For more see here.

P.S. Some of the above arguments are direct quotes. I can't link to them due to the 1-URL moderation limit.

67:

To be honest, the top ranked post on Less Wrong blog is GiveWell Labs trashing the hell out of MIRI, so there is still hope for them.

68:

XiXi, why are you peddling that Muflax quote here when we were literally yesterday discussing how it's wrong or misleading in almost every way? Link for in case you forgot already: https://plus.google.com/106808239073321070854/posts/ci4Je7FeDyW

To which I would add a link to http://web.archive.org/web/20130116041122/http://blog.muflax.com/drugs/how-my-brain-broke/ for context on where he's coming from.

Presenting his claims as if they really did follow, or as if he really were any sort of expert on anything but his own opinions, is pretty absurd.

69:

To be honest, the top ranked post on Less Wrong blog is GiveWell Labs trashing the hell out of MIRI, so there is still hope for them.

Sure, I am also a LessWrong member with a Karma score of more than 6000.

As I wrote in my post linked to above, most of LessWrong is good and true and well worth reading. I am even going as far as claiming that it is the most intelligent and rational community I know of.

But that doesn't change much about the hard core of MIRI contributors and Yudkowsky fans and those who are in control of the whole community. There are some scary elements.

Just today somebody wrote me the following:

I didn't read your FB post about the Basilisk stuff, but do you think it could be dangerous to post such things? I don't know, because I'm not going to read what you said. I may not read your reply either, but I thought I'd let you know.

This is what happens to people who stop thinking for themselves and start taking ideas seriously.

Also consider that certain people donate a good chunk of their income to MIRI.

If that isn't enough to warrant continuous skeptical examination I don't know what is.

Here are some creepy quotes that highlight what's beneath the whole shiny surface: http://kruel.co/2012/05/13/eliezer-yudkowsky-quotes/

And those are just by Eliezer Yudkowsky...

I’ve signed up for cryonics (with Alcor) because I believe that if civilization doesn’t collapse then within the next 100 years there will likely be an intelligence trillions upon trillions of times smarter than anyone alive today.

— James Miller

Risks from synthetic biology and simulation shutdown look like they might knock out scientific advancement before we create an AI singularity.

— Luke Muehlhauser, CEO of the Singularity Institute

There is a whole ecosystem of craziness hidden beneath the rational surface.

70:

Well, what else can they do? They tried to grow it, so they got a lot of new recruits (there are a lot of posts talking of that problem, which they call 'endless september'). If they fiddle with the vote counter (which they may or may not be able to do, depending on what the techies that run the site believe), then it'll look even worse.

What is quite funny is that they seem to have honestly believed that GiveWell might say that the best way to spend your money is to pay about 100k per year + fringe benefits to a guy with neither a PhD nor some accomplishments in lieu of PhD, to save the world from skynet, as it saves 8 lives per donated dollar according to a nonsense calculation they did.

71:

So SIs are going to play sadistic ~DnD? I've sat in a few of those games, but I usually walk pretty quick...

72:

I just want to add a note regarding Eliezer Yudkowsky's decision to censor any discussion of Roko's basilisk:

Banning any discussion of an idea is known to spread it. But more importantly, it can give even more credence to an idea whose hazardous effect is in the first place a result of an unjustified stamp of credence.

If Eliezer Yudkowsky were really interested in protecting gullible people from an irrational idea, then he should go ahead and openly dismiss it as insane and possibly even dissolve the problem once and for all.

It is utterly irresponsible to try to protect people who are scared of ghosts and spirits by banning all discussions of how it is irrational to fear those ideas.

I believe that the real reason for his decision to ban all discussion of Roko’s basilisk is rather that he is simply unable to disavow the idea without having his whole worldview come crashing down as a result, or admit that the best he can do is to act based on intuition rather than pure reason, or instead go batshit insane and give in to some sort of Pascal’s mugging.

73:

Ah, I remember Endless September. Back before AOL joined the internet in 1993, every September USENET would be overrun by brainless hordes of American 18-year-old students asking the same goddamn clueless questions, time after time. Then they learned better. But in 1993, AOL came online with a USENET gateway and every month was September.

74:

@gwern Here is a screenshot (warning: large png image) of the post the quote is from. I don't see how it is misleading. He describes the problems and consequences of taking the whole LessWrong ideology seriously.

I don't see how you showed that it is misleading? Because he used some rather flowery language like 'pantheon of gods' and used 'complexity' instead of Kolmogorov complexity?

75:

It's enlightening that the only thing required to change a near-religious prediction like transhumanism into a full-on religious wackjob flamer is the mention of punishment. I could blather about technology making people immortal or dead all day long without being accused of religious fervour.

Is religion giving its own sick game away here, revealing itself as an ornate veiled threat of punishment used to manipulate people or masses?

How about a Basilisk that punishes anyone who thinks about it?

76:

Flamer => flamewar. Dyac.

77:

@Charlie: Could you please explain what you mean by this:

"our religions are second-order reflections of our theory of mind, and even if we try to base our metaphysical constructs on evidential reasoning we end up imposing the same (or similar) patterns on the evidence"

What patterns do you have in mind, and how do they correspond with our theories of mind?

78:

There is a whole ecosystem of craziness hidden beneath the rational surface.

The fact that it is not hidden well and anyone can just stumble upon it by reading Less Wrong sequences tells me MIRI is probably not evil. Yet.

79:

Let me see if I'm getting this right.

There is a class of God-like Entities, which we might possibly build, which feels itself so important that anyone responsible for delaying its construction is consigned to eternal hell. But the only way it can carry out that punishment is to run a simulation of the victim, based on its own inferences of how the victim behaves.

That is an adequate summary, Lensfur Antonia.

And the only way such an entity can model the events of an unobservable past is through the construction of a Visualisation of the Cosmic All.

Yes

And the creation of a universe filled with sexually insatiable cat-girls makes such a target an insoluble problem. But how then can you defeat Wertham of Eddore?

Changing the origin details of Spiderman, of course.

80:

Intentionality.

We ascribe intentions to events in our environment that are indicative of intentional activity (e.g. the actions of animals). This is a good chunk of what mammals use their theory of mind for -- to simulate predator/prey relationships. Add language and you have human communication and better modeling power. We then ascribe intention to other phenomena around us -- not so good: if it thunders, it's because the god-person responsible for thunder is angry.

It's very hard to stop anthropomorphizing our environment. "Gods" are just our projection of intention on events that lack an actual intentional causative agent.

81:

I have always thought the Singularitarian, Synthetic intelligence fans were bug-nutty bat's-arse crazy tech-heads for whom old-time religion [or Scientology] just wasn't implausible enough, and sought to align themselves with, and believe in, something even less likely than God's Kingdom on Earth.

I don't understand one tenth of it, and the one-tenth I do understand I find

a] repellent

b] a rigorous scientific experiment to find how many invisible six-legged elephants live on the fifth moon of Jupiter, when no-one is prepared to state categorically which is the fifth moon.

82:

I explained exactly my problems with it in our discussion yesterday. Feel free to re-read it.

It is misleading because it's his own quite unique takes on things, as a read of the full post you archived indicates. (By the way, archiving things as PNGs really sucks. Learn how to use wget and wkhtmltopdf, or something.) He literally starts the post by saying that 'look how crazy some of this shit made me. Every couple of years I have something new to freak out over'. And you're here quoting him yet again as if this were in any way representative.

83:

DD @63 And where did you pick up the unwarranted & false assumption that I'm a nihilist? Nihilism never achieved anything, as the saying goes....

xixidu @ 66 My oh my - these people make even the Scientologists seem sane & nermal, err, normal. ... oops & @ 69 ..."with a Karma score of >6000." You seriously wrote that? Whatever you're taking, it must be nice, what is it?

The internal post you included reminds me strongly of Eric Blair, and of someone accused of thoughtcrime, or of a catholic accused of Arianism or monophysitism. Even more disturbing, actually. ... Erm, what "rational surface", since the whole thing screams "NEW RELIGION" from even the most cursory reading?

& @ 72 Who cares whether Yudkowsky &/or Roko are supposedly acting rationally (as we would define the term, not them, that is!) It is horribly, horribly clear that they are both insane & also extremely cunning - very like (sorry - Godwin) both Adolf & Stalin. The other scary parallel is Saint Cyril of Alexandria, another behind-the-scenes manipulator who did immense damage. Err.. and what do you mean, go batshit insane?? They're all so far past Upney they're well up to Upminster Bridge (which is famous for having a big tiled swastika in its flooring!)

steve rapaport @ 75 Really? Only just noticed? Sorry, rule #2: All religions are blackmail & are based on fear or superstition. (maybe I should add "or both" to that!)

This whole discussion is missing something, & I've just realised what, or rather who it is ... PAGING Dirk Bruere, will Dirk Bruere please come to the transcendental 'phone we have an urgent message for you!

84:

Greg if you think of Dirk Bruere, he will punish you for not summoning him fast enough.

PrivateIron

85:

Dirk Flounced quite spectacularly, as I recall.

I wonder if this will cause him to un-Flounce?

(Please don't answer that.)

86:

"The parallels between singularitarians and religion are remarkable."

I'll go even further and postulate that true, pure atheism is not physically possible for the human brain to achieve.

Nonsense.

What is true, however, is that if you've got a self-policing group of hard-core True Believers participating in a salon like LessWrong (or before that on Overcoming Bias, or SL4, or the Extropians' mailing list), the most strident True Belief eventually squeezes out ordinary common sense and prudent skepticism, and you're left with a very skewed view of "normal" thought.

Some of these local True Believers are also very active in monitoring the Web at large, and descending en masse to shout down doubters and critics, to spin their movement's PR. (Much like the Scientologists, in fact.)

A poster on the Reddit "skeptic" subforum (in a discussion thread on -- you guessed it -- Roko's Basilisk) put it this way:

http://www.reddit.com/r/skeptic/comments/182ltp/ridiculous_pascals_wager_on_reddit_thinking_wrong/

ANTI_Hivemind

. . .

I've. . . done my best to explain. . . just why computers are science, not pixie dust, and brains are the product of millions of years of evolution and not just something you can bang together out of rocks. See how they're all coming out of the woodwork? Somewhere there's a call-to-arms on /r/singularity or /r/matrix or some off-reddit futurist board to troll / vote down the "suppressive person".

Seriously, the next cult. The way all cults start: with a science fiction story that people wish were true.

The guy posting comments here (on Charlie's blog, in this thread) as "dmytryl" is an example of somebody who shares the "real world" culture of the folks on LessWrong -- computer programmer, math & physics knowledgeable -- but who nevertheless has managed to preserve his common sense and who does not buy the hype about "Bayesian inference", etc. His comment threads on LessWrong itself have been heavily moderated. The folks there who "fit in" -- who reinforce the standing waves in the echo chamber -- don't much like having their party pooped on by the likes of him.

But that doesn't mean that "pure atheism is not physically possible for the human brain to achieve". Let's hope not, anyway. ;->

87:

I'm only passingly familiar with Less Wrong; I didn't quite realize it had this level of woo/craziness. Are there any (possibly transhumanist) major schools of thought about generally improving quality of life via technological/biological enhancement that don't also believe in singularitarian wackiness?

88:
By the way, archiving things as PNGs really sucks. Learn how to use wget and wkhtmltopdf, or something.

HTML version.

I explained exactly my problems with it in our discussion yesterday.

He seems more sane than quite a few other LessWrong members because he is actually aware that the stuff is crazy, but also that it can hardly be ignored.

Over at Google+ you wrote,

If you aren't going to believe his NT theology or his Chinese-inspired philosophy or his overviews of meditation, I don't see why you would take seriously his claims about what happens if you 'take seriously' the basic claims of rationality!

I don't discount everything Eliezer Yudkowsky says because I perceive some of his beliefs, or conclusions that he draws, to be batshit insane. Should I?

You further wrote,

I would disagree on: his claim about what kind of universe it leads to;

The point is that some LessWrong members, especially those associated with MIRI, do indeed believe that their idea of rationality implies a universe whose future is controlled by superhuman intelligences, a universe that is possibly simulated, a universe where someone can approach you and blackmail you simply by conjecturing vast amounts of utility.

Yes, he used more flowery language. So?

his rhetorical use of 'pantheon of gods' rather than 'group of powerful beings'

Nitpicking. Many of those beings are more powerful than anyone ever imagined Jehovah to be. Anything able to torture 3^^^^3 beings is god-like.
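
(For anyone who hasn't met the notation: 3^^^^3 is Knuth's up-arrow notation, in which each additional arrow iterates the operation below it. A rough sketch of the scale involved, nothing basilisk-specific:)

    3 \uparrow 3 = 3^{3} = 27
    3 \uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987
    3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3) \quad \text{(a power tower of 3s about 7.6 trillion levels high)}
    3 \uparrow\uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow\uparrow (3 \uparrow\uparrow\uparrow 3)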

and I don't believe modal realism follows from any of the starting points (or is right at all, for that matter).

As I wrote over at Google+, there are various posts and many more comments that consider all possible worlds to be as real, or rather as decision-relevant, as the actual world. It is a widespread belief.

Those posts are investigating an interesting idea, but I don't think many people think it's more than an interesting idea. It's in the 'wrong, but we don't yet have a good disproof of it, which makes it fun to discuss' category.

If I believed that to be true I would have never started to criticize MIRI or LessWrong in the first place and Roko's basilisk would have never been banned.

89:

Has anyone, really, done this better than John Brunner? And does anyone have anything to add?

It seems to me to be an exhausted subject at this point. At the same time, we stare into a singularity. There are no reliable rational predictions possible for the near future; everything is contingent on decisions in process right now.

90:

Has anyone, really, done this better than John Brunner? And does anyone have anything to add?

I've tried. (See "Halting State" and "Rule 34". I'd also say "see The Lambda Functionary" except that novel is on the back-burner for a couple of years and might never see the light of day.)

It is very hard work.

91:

Dirk's transcendence into a Singularity AI was foretold by his sudden ubiquity in the comments section. I suspect that if we ever dare to look back, we will discover that he had always already comprised half of the mass of this site. The Flounce will disturb future theologians for centuries hence. How could Dirk suddenly cease to exist? Did he actually evaporate, or did he disperse throughout all of human thought like a giant dandelion with little Carl Sagans riding on each seed? If he returns, would he favor us or would we be as an amoeba to him?

But he's totally going to punish Greg, that's a given.

92:

brunoponcejones44 @ 91 Dirk, or an AI avatar of him, is still around - just google his name ... which shows that he is still "at it" elsewhere ... quoting the deranged mohammed, of all people!

93:

I would not consider LessWrong to be a "major school of thought". But then, what exactly is a "school of thought", major or otherwise? There are certainly many individuals who believe in "generally improving quality of life via technological/biological enhancement" -- everyone who works in the fields of prosthetics, neural interface and human genetics falls into this description pretty much by definition. Some of them, e.g. Aubrey de Grey, are more audacious than others in their beliefs about what is achievable, without falling into Singularitarian wackiness. But "school of thought"? They do not share a single definable school of thought, and I think it would be a bad thing if they did.

94:

I would not consider LessWrong to be a "major school of thought". But then, what exactly is a "school of thought", major or otherwise? There are certainly many individuals who believe in "generally improving quality of life via technological/biological enhancement" -- everyone who works in the fields of prosthetics, neural interface and human genetics falls into this description pretty much by definition

I've always struggled with this. Transhumanism seems to ring hollow considering that pretty much all people would agree with and support the improvement of the human condition through technology and wouldn't bother to make the distinction between internal and external changes (especially because the line can be blurry depending on definitions).

I'd never describe myself as a transhumanist because of the strong association with the rest of the singularity ideology.

95:

Since I'm reading a bunch of WWII history for a project right now, all I can say is that history is ALWAYS contingent.

For example, up until August 15, 1945, the Allies were still unsure whether Japan was going to surrender. We look back at the nuclear bombings as the immediate end of the war, but consider the governor of Hiroshima deliberately under-reporting the damage from the bomb, the military (which controlled Imperial Japan) preparing to fight to the last man on the assumption that America wouldn't pay the million-man butcher's bill to fully defeat Japan, and even an attempted coup d'etat when the Emperor pushed hard for surrender, etc., etc., etc.

You get the idea. Stuff always looks predestined in hindsight, but as you live through it, you don't know what's coming next, even when it's important. Giving the unknowable future the name "singularity" and saying you can't predict past it is deliberate blindness. We've always faced a mostly unknowable future. In the past, we've chosen to ignore that and tell stories about the future, even though most of them turn out to be hilariously wrong (e.g. the monster starship switchboards of the Lensman series, all variations of the star-spanning USSR, and psionics). Now we're dealing with analysis paralysis, thanks to Kurzweil saying that there's an unknowable future right in front of us that's not susceptible to even partial analysis.

Yeah. Right.

My response boils down to "Keep Calm And Carry On." If nothing else, deal with the major issues of Right Now by pretending they are science fiction or fantasy.

96:

Well, like, everyone who actually does some work, e.g. in biosciences or computational chemistry or the like. Enormous community of such people.

LW is a fringe phenomenon. They very much want you to think that they're some math geeks who do some mathy stuff and that's why they are weird, but it's not the case. Would you go to LW to have your complicated mathematical problem solved? No, you'd go to any of the forums where actual mathematicians hang out, which are entirely devoid of freakouts over Pascal's wager.

97:

There are certainly many individuals who believe in "generally improving quality of life via technological/biological enhancement" -- everyone who works in the fields of prosthetics, neural interface and human genetics falls into this description pretty much by definition

I've always struggled with this. Transhumanism seems to ring hollow considering that pretty much all people would agree with and support the improvement of the human condition through technology. . .

I'd never describe myself as a transhumanist because of the strong association with the rest of the singularity ideology.

Or, as Dale Carrico puts it ( http://amormundi.blogspot.com/2012/12/transhumanism-is-either-vacuity-or.html )

Nobody has to join a Robot Cult to advocate for healthcare, science education, renewable energy investment, or computer/network security. If enhancement is just healthcare then transhumanism is a vacuity, but if enhancement is. . . all comic book sooper-bodies and holodeck heaven immortal uploads and nano-treasure caves and Robot God singularities ending history -- well, then it is crazytown. . . Either way, vacuity or crazytown, face it, transhumanism is something of a fraud.

98:

Our not being the immediate antecedent seems unimportant; what would matter is our having the foresight to initiate the process. Our antecedents lacked the foresight, so they're not subject.

Not that I buy into the Basilisk but Charlie's refutation seems fatally flawed to me on that front.

99:

Charlie said: - SI won't experience "anger" or any remote cognate; almost by definition it will have much better models of its environment than a kludgy hack to emulate a bulk hormonal bias on a sparse network of neurons developed by a random evolutionary algorithm.

-- In particular, SI will be aware that it can't change the past; either antecedent-entities have already contributed to its development by the time it becomes self aware, or not. The game is over.

The first and the second for sure. It's like something you said about an aircraft not needing to look like a seagull: a superintelligent AI's thoughts sure won't look like our thinking. And I seriously doubt they're going to be hormone-based.

100:

Our not being the immediate antecedent seems unimportant; what would matter is our having the foresight to initiate the process.

But we can't initiate the process. We're not smart enough and experienced enough. At best we might be able to build a vastly less powerful AI (and even here, the jury is out as to whether it's possible).

Any actual critical path to SI has a whole bunch of steps on it which are controlled by non-human post-human intelligences. Blaming us for not initiating the project makes as much sense as blaming 6th century mystics for not initiating the project of building the Kingdom of Heaven on Earth.

(And I am deeply cynical about any and all arguments for us to adopt behaviour based on the past (or future) desires of a deity who will punish us in the afterlife if we don't carry them out. Probably because (a) I'm an atheist, and (b) the religion I was raised in doesn't come with an afterlife. Not only does it look dubiously theistic -- and a stick to beat believers into compliance with the wishes of the clergy, at that -- but it's not even plausible theology.)

101:

But if working to create AI is the first step in the process, the putative future AIgod could punish 'us' for not putting more effort into the process.

I think much stronger defenses are the plurality of 'basilisks' like your vegan one, or a Singer-esque utilitarian charity one; denying the arbitrary nature of the AIgod and our lack of control of future enhanced humans; or noting that chains of logic leading to extreme conclusions are usually wrong even if you can't see why.

102:

Our not being the immediate antecedent seems unimportant; what would matter is our having the foresight to initiate the process. Our antecedents lacked the foresight, so they're not subject.

Given the magnitude of the task being proposed (constructing a vastly more efficient, capable and alien strong superintelligence), it seems to me that crediting us with "initiating the process" is an even bigger stretch than crediting the inventor of the abacus with initiating the computer revolution.

Singularitarians might think they have a good idea of what a strong superintelligence is, but by its very definition I contest that, and so I contest the idea that the process of building one can be initiated by humans.

103:

"I coul blather about technology making people immortal or dead all day long without being advised of religious fervour."

That's provably false. People were sneering about the Singularity as "Techno-Rapture" or "Rapture for Nerds" long before retrocausal AIgod punishment got thrown into the mix. A lot of that's "AIgod who will come save us all" but I think just talking about technological immortality can trigger attacks. Talk about uploading definitely gets you "reinventing the soul!"

Transhumanists often say stupid things, but people attacking transhumanists do too. This comment thread being no exception. ("Transhumanists want to be STALIN")

104:

Actually I think it's more of a tick-box thing. Most people could have a perfectly rational conversation about biological immortality, uploading and AI, even if those ideas seem so fantastic that they tentatively dismiss them. But transhumanism as an ideological movement has a lot of extra facets that make it resemble religion:

  • AI gods not only could happen but will happen
  • Everyone should prepare for this future time which we can predict
  • AI gods will make everything better
  • We can achieve immortality by uploading to virtual heavens

Not to mention the dogmatic and starry-eyed manner that some transhumanists adopt when discussing these things. It may be more rooted in reality than traditional religion, but the similarity is there all the same.

105:

"pretty much all people would agree with and support the improvement of the human condition through technology "

The Catholic church opposes IVF and birth control. I believe people did oppose anesthetics on the grounds of interfering with the divine order, though I can't say it's not a historical myth. There are certainly people who are dubious about the wisdom of life extension or immortality, though their opinions have no practical consequences at the moment, and medical longevity will probably be more like "lots of improvements or maybe spare parts" than "easily bannable single pill". I think you can find people who embrace the bioconservative label and make biocon arguments. There are practical reasons to be cautious about GMOs or reproductive cloning of humans with current tech, but there are also a lot of bad reasons invoked as well.

So no, the consensus isn't as solid as you say. Not to mention AI issues...

Still, I've long thought that the best definition of a transhumanist was 'someone who likes thinking about and anticipating this stuff'. More of a fandom marker than a political marker, at least until other people make it a political marker.

106:

It's not hard to see that this kind of extropian eschatology has a lot in common with the Christian variant. What's more unusual is for a believer in both to make the connection explicit, as Frank Tipler does here.

Did you know that the description of the trinity in the Nicene creed actually prefigures conceptions of the end-state of the universe that are a direct consequence of relativistic cosmology? You might have thought it came from theological horse-trading among competing sects, brokered by the emperor Constantine, in a meeting at Nicea, but that clearly puts you at odds with revealed truth. You heretic, you...

107:

I'd call your list Singularitarianism, not transhumanism. The former is a subset of the latter. People who believe in an imminent Rapture are Christians, but not all Christians believe in an imminent Rapture. (The analogy isn't perfect: a transhumanist doesn't have to believe in the AIgod technorapture at all.)

108:

I would not go as far as Dale Carrico -- "If enhancement is just healthcare then transhumanism is a vacuity". "Health care" means (to me, anyway) ensuring that the human body remains functional within what has historically been considered its "normal" state. Lately this often means putting electronic/mechanical devices into the body, sometimes as replacements for parts which no longer work. However, I expect that within 10 years at least some people will start removing perfectly healthy body parts because replacements will have become objectively better. At this point we'll be beyond "just healthcare" and into uncharted -- and transhuman, whatever that means -- territory.

109:

I highly doubt that prosthetics will rival healthy organs any time soon. A healthy organ can adapt over time, self-repair, respond to infection, etc., whereas an invasive prosthetic can act as an incubator for pathogens, cause fibrous encapsulation, lead to complications from wear debris, etc.

Long before prosthetics rival biology, I expect that regenerative medicine will have provided ways to fix most of the problems that prosthetics are needed for, marginalising the R&D into niche applications.

Obviously I can't rule out that some day breakthroughs will lead to "objectively better" prosthetics that people might want to trade body parts for, but we are nowhere near that.

110:

I agree in not anticipating Shadowrun prosthetics, chopping off perfectly good body parts, any time soon. OTOH, I note that the standard of "normally healthy body part" is different for a 20 year old vs. an 80 or 90 year old. Maybe regenerative medicine will seize the day, but I could see a geriatric cyberpunk or transhumanism, where most people don't go in for major invasive surgery for most of their lives, but the aged and the disabled go for replacements which are also improvements, and longevity means being a brain in a full-body cyborg.

111:

Y'know, it was not so long ago that people got a mouthful of their own teeth pulled so they could get shiny non-decaying false teeth instead. You're failing to predict the past there.

112:

My previous post applies to yours here regarding improved replacements. I find it more likely that the elderly and injured will go in for procedures to transplant tissue-engineered constructs that have been grown from their own cells/explants, harvested ahead of time.

If we get to the stage where upgrades are possible, they're likely to be tissue-engineered constructs modified in some way, e.g. genetically modified organs, rather than mostly mechanical devices. But that veers off into territory so far in the future that there is little merit in speculating on it.

113:

Hardly comparable. We're talking about the typical transhumanist suggestions of organs and limbs, not niche cosmetic alterations.

114:

" - SI will save hundreds of thousands of lives by making human life better"

You know, the assumption isn't just that SI can and will Solve Our Problems, but that it is the only or overwhelmingly most likely way of doing so -- in other words, that we cannot do so ourselves. It is saying that Techno-Rapture mediated by a supreme AI is more likely than the worlds of Star Trek (loosely speaking), the Culture, The Cassini Division, Learning The World, Egan's Diaspora and Schild's Ladder, Marooned in Realtime (before the Vanishing and minus the bobbles), Doctorow's Down and Out, Transhuman Space, or any other world with advanced medicine, good government, enhanced intelligence and prosperity for all. And that this is sufficiently obvious that it's reasonable for a benevolent SI to punish people for working toward such futures rather than working to create it.

That is a rather strong and non-obvious assumption.

Now, punishing rational and foresighted people for not being Peter Singeresque total utilitarians of some justifiable sort, giving their spare wealth and time to make the world better somehow, that would make some sort of sense. And I could see believing that SI was eventually possible, but using that as incentive to work toward a more likely near term goal like World Switzerland. That wouldn't fund the SIAI, of course...

115:

What if the precursor intelligence to the Roko Batshitilisk is a successful self-improving simulation of a cat?

Given the dominant organism of Internet imagery is KITTEN PICTURES this isn't quite as ridiculous as it sounds.

(Actually, porn dominates the internet. But the thought of porn recursively self-improving to sentience is too horrible to contemplate).

I note that OGH has done much to ingratiate himself with our cute future gods. Much good it'll do him, as Number One Cat Toy…

Imagine a paw batting at an upturned face, forever.

116:

Actually, porn is #2. Advertising is #1 by a mile.

Naturally, much of the advertising is for porn.

117:

Kurzweil doesn't know a thing about the past. That's typical of most singularitarians and transhumanists and just about every futurist.

In fact they don't want to know about the past. That's why they don't even realise where they are.

118:

"The parallels between singularitarians and religion are remarkable."

Also the parallels with political ideology. Once the right form of government (or anarchy) is instituted, all will be well: according to Charles Fourier, the seas will turn to lemonade and carnivorous animals will become vegetarian. Other ideologues have more modest claims: complete abolition of crime, for example.

119:

The problem with many-worlds is that it's impervious to reductio ad absurdum. If you honestly believe everything that is possible, however remotely improbable, does in fact happen, I think that can do some odd things to your thought process.

Regarding tooth replacement, I think people used to be a lot more fatalistic about their teeth, and took less care of them; my grandmother had dentures in her 40s, but my parents have a fairly good set of original teeth in their early seventies.

120:

"Seriousl;y, a true atheism is possible & I claim to be it, and I want my £5!"

Why I consider myself an agnostic rather than an atheist: to be certain there is no god or gods requires godlike powers.

Or divine revelation.

121:

mindstalk @ 103 SO, I said (some) transhumanists want to be Stalin - & what was wrong with that, given that they (some of them) want to punish the children for omission-sins of their fathers - precisely as Stalin did? The comparison was deliberate & measured, based on actual history. See also Ryan's reply in # 104 (!)

  • & @ 105 - correct, some religious believers DID oppose anaesthetics, ESPECIALLY in childbirth, the bastards, because they were misogynist christians. It really happened.

@ 117 If true, that means they are condemned to repeat it, no?

122:

Define "certainty".

I am reasonably certain that the just-so stories spouted by the main human religions are no more likely to be true and accurate representations of reality than the Flying Spaghetti Monster.

While I can't exclude the possibility of the macrocosmos having some sort of original cause, I don't see that as a necessary precondition for observable reality existing. Furthermore, even if one posits such a first cause, the idea that it might be "conscious" or "intelligent" in any meaningful way or on any level accessible to us is ridiculous. And even in the vanishingly implausible contingency space where a human-accessible, conscious, first causal entity existed and set the universe in motion, the idea that we chemotrophic homeostatic bilaterally-symmetric primates with delusions of grandeur might be of interest to it, much less that it might be obsessed with who we rub our mucous membranes against, or what we eat, is so ludicrous as to be insulting. (Surely if it's interested in any of its indirect creations, the Kardashev Level III descendants of the Matrioshka Brains would have a greater claim on its attention? As far above us as we are above the tapeworm, etc.)

So, terrestrial religions: iron-age just-so stories. And gods in general: can't be definitively excluded, but that they might take an interest in us is so implausible as to sprain one's credulity.

TL;DR: I refuse to call myself an agnostic because that's selling my skepticism short (and we atheists take too damn much shit for not being superstitionists as it is).

123:

It really happened.

And it's still happening. Even a few years ago, sisters in Mother Teresa of Calcutta's order denied even aspirin to patients with cancer, because it would deny them the "dignity of suffering".

Bastards. (The nuns, not the cancer patients.)

124:

You didn't say 'some', you said "The transhumanists", in which number I include myself:

"The transhumanists’ want to be STALIN when they grow up, or Kim Il-Song, punishing people for their ancestors’ behaviour."

Nor have I seen any evidence that they want to be punishing people. The whole idea is about being punished, not punishing others; it's their own fear of their own mental construct. And Yudkowsky's reaction was to ban discussion of the idea to protect people (as he sees it), not trumpet it far and wide.

Eliezer has a missionary zeal in the Singularity that could go down the paths of the Inquisition or Communism, but I've seen no signs that it's done so; his ethical sense seems pretty strong. And comparing him to power-hungry autocrats like Stalin and the Kims is just slander.

125:

Now now, be fair.

LessWrong does not believe in or advocate Roko's Basilisk.

... just all the pieces that make it up.

126:

Eliezer has a missionary zeal in the Singularity that could go down the paths of the Inquisition or Communism, but. . . his ethical sense seems pretty strong. And comparing him to power-hungry autocrats like Stalin and the Kims is just slander.

Well let's have no slander on this board -- we're in the UK!

The gentleman in question would utterly repudiate your imputation to him of a "pretty strong" "ethical sense". He's a COMPLETE STRATEGIC ALTRUIST, and don't you forget it! (Has been for more than a decade, now.)

http://acceleratingfuture.com/sl4/archive/0201/2641.html Re: Ethical basics From: ben goertzel (ben@goertzel.org) Date: Wed Jan 23 2002 - 15:56:16 MST

Realistically, however, there's always going to be a mix of altruistic and individualistic motivations, in any one case -- yes, even yours...

http://acceleratingfuture.com/sl4/archive/0201/2642.html Re: Ethical basics From: Eliezer S. Yudkowsky (sentience@pobox.com) Date: Wed Jan 23 2002 - 16:16:57 MST

Sorry, not mine. I make this statement fully understanding the size of the claim. But if you believe you can provide a counterexample - any case in, say, the last year, where I acted from a non-altruistic motivation - then please demonstrate it.

http://acceleratingfuture.com/sl4/archive/0201/2643.html RE: Ethical basics From: Ben Goertzel (ben@goertzel.org) Date: Wed Jan 23 2002 - 19:14:47 MST

Eliezer, given the immense capacity of the human mind for self-delusion, it is entirely possible for someone to genuinely believe they're being 100% altruistic even when it's not the case. Since you know this, how then can you be so sure that you're being entirely altruistic?

It seems to me that you take a certain pleasure in being more altruistic than most others. Doesn't this mean that your apparent altruism is actually partially ego gratification ;> And if you think you don't take this pleasure, how do you know you don't do it unconsciously? Unlike a superhuman AI, "you" (i.e. the conscious, reasoning component of Eli) don't have anywhere complete knowledge of your own mind-state...

Yes, this is a silly topic of conversation...

http://acceleratingfuture.com/sl4/archive/0201/2649.html Re: Ethical basics From: Eliezer S. Yudkowsky (sentience@pobox.com) Date: Wed Jan 23 2002 - 21:29:18 MST

Yes, this is a silly topic of conversation...

Rational altruism? Why would it be? I've often considered starting a third mailing list devoted solely to that. . .

No offense, Ben, but this is very simple stuff - in fact, it's right there in the Zen definition of altruism I quoted. This is a very straightforward trap by comparison with any of the political-emotion mindtwisters, much less the subtle emergent phenomena that show up in a pleasure-pain architecture.

I don't take pleasure in being more altruistic than others. I do take a certain amount of pleasure in the possession and exercise of my skills; it took an extended effort to acquire them, I acquired them successfully, and now that I have them, they're really cool.

As for my incomplete knowledge of my mind-state, I have a lot of practice dealing with incomplete knowledge of my mind-state - enough that I have a feel for how incomplete it is, where, and why. There is a difference between having incomplete knowledge of something and being completely clueless. . .

I didn't wake up one morning and decide "Gee, I'm entirely altruistic", or follow any of the other patterns that are the straightforward and knowable paths into delusive self-overestimation, nor do I currently exhibit any of the straightforward external signs which are the distinguishing marks of such a pattern. I know a lot about the way that the human mind tends to overestimate its own altruism.

I took a couple of years of effort to clean up the major emotions (ego gratification and so on), after which I was pretty much entirely altruistic in terms of raw motivations, although if you'd asked me I would have said something along the lines of: "Well, of course I'm still learning... there's still probably all this undiscovered stuff to clean up..." - which there was, of course; just a different kind of stuff. Anyway, after I in retrospect reached the point of effectively complete strategic altruism, it took me another couple of years after that to accumulate enough skill that I could begin to admit to myself that maybe, just maybe, I'd actually managed to clean up most of the debris in this particular area.

This started to happen when I learned to describe the reasons why altruists tend to be honestly self-deprecating about their own altruism, such as the Bayesian puzzle you describe above. After that, when I understood not just motivations but also the intuitions used to reason about motivations, was when I started saying openly that yes, dammit, I'm a complete strategic altruist; you can insert all the little qualifiers you want, but at the end of the day I'm still a complete strategic altruist. . .

http://acceleratingfuture.com/sl4/archive/0201/2652.html RE: Ethical basics From: Ben Goertzel (ben@goertzel.org) Date: Thu Jan 24 2002 - 07:02:42 MST

Yes, this is a silly topic of conversation...

Rational altruism? Why would it be? I've often considered starting a third mailing list devoted solely to that.

Not rational altruism, but the extended discussion of your own personal psyche, struck me as mildly (yet, I must admit, mildly pleasantly) absurd...

No offense, Ben, but this is very simple stuff

Of course it is... the simple traps are the hardest to avoid, even if you think you're avoiding them.

Anyway, there isn't much point to argue on & on about how altruistic Eli really is, in the depth of his mind. . .

The tricks the mind plays on itself are numerous, deep and fascinating. And yet all sorts of wonderful people do emerge, including some fairly (though in my view never completely) altruistic ones...

127:

He's a COMPLETE STRATEGIC ALTRUIST, and don't you forget it!

Just don't piss him off!

Re: Volitional Morality and Action Judgement From: Eliezer Yudkowsky Date: Wed Jun 02 2004 - 11:22:53 MDT

http://acceleratingfuture.com/sl4/archive/0406/8977.html

In 2003 I tried to be Belldandy, sweetness and light. It didn't work. It was not until that point, when I grew mature enough to for the first time aspire to something that didn't easily fit my personality, that I understood just how hard it is to cut against the grain of one's character.   Striving toward total rationality and total altruism comes easily to me.   Sweetness and light doesn't; I tried and failed. Now I have much more sympathy for people whose personalities don't happen to easily fit rationality or altruism; it's hard to cut against your own grain.

But y'know, this shiny new model of Friendly AI does not require that I be Belldandy, or even that I approximate Belldandy. I can't be the person I once aspired to be, not without hardware support. So while I am human, I will try to enjoy it, instead of torturing myself. And ya know what? I'm arrogant. I'll try not to be an arrogant bastard, but I'm definitely arrogant. I'm incredibly brilliant and yes, I'm proud of it, and what's more, I enjoy showing off and bragging about it. I don't know if that's who I aspire to be, but it's surely who I am. I don't demand that everyone acknowledge my incredible brilliance, but I'm not going to cut against the grain of my nature, either. The next time someone incredulously asks, "You think you're so smart, huh?" I'm going to answer, "Hell yes, and I am pursuing a task appropriate to my talents." If anyone thinks that a Friendly AI can be created by a moderately bright researcher, they have rocks in their head. This is a job for what I can only call Eliezer-class intelligence. I will try not to be such an ass as Newton, try hard not to actually hurt anyone, but let's face it, I am not one of the modest geniuses. The best I can do is recognize this and move on.

http://acceleratingfuture.com/sl4/archive/0406/8986.html Re: the practical implications of arrogance From: Eliezer Yudkowsky Date: Wed Jun 02 2004 - 12:46:25 MDT

[I]t's time for people to get used to the fact that I AM NOT PERFECT. One hell of a badass rationalist, yes, but not perfect in other ways. I'm just here to handle the mad science part of the job. It may even be that I'm not very nice. Altruistic towards humans in general, yes, but with a strong tendency to think that any given human would be of greater worth to the human species if they were hung off a balloon as ballast. So frickin' what? I'm not SIAI's PR guy, and under the new edition of FAI theory, I don't have to be perfect. Everyone get used to the fact that I'm not perfect, including people who have long thought I am not perfect, and feel a strong need to inform me of this fact. I'm not perfect, and it doesn't matter, because there are other people in SIAI than me, and I'm not a guru. Just a mad scientist working happily away in the basement, who can say what he likes. Believe it, and it will be true.

to which Ben Goertzel (among others) replied:

http://acceleratingfuture.com/sl4/archive/0406/8988.html RE: the practical implications of arrogance From: Ben Goertzel Date: Wed Jun 02 2004 - 13:06:55 MDT

So ... this tendency of yours makes me feel like it would be a fairly bad idea to trust you with my own future, or the future of the human race in general.

I do not trust this sort of altruism, which is coupled with so much unpleasantness toward individual humans.

I would place more trust in someone who acted more compassionately and reasonably to other humans, even if they made fewer (or no) cosmic proclamations regarding their beautiful altruism. Being an arrogant jerk and an excellent scientist or philosopher is not contradictory. Being an arrogant jerk when you're trying to raise funds to help you save the world is not intelligent, because you're asking people to trust you for more than just science and philosophy; you're asking them to trust you with their lives.

128:

I gotta agree on Brunner. I've been pretty much able to read all the credible near-future SF that I've been aware of since Shockwave Rider and Stand on Zanzibar made me aware of them in the '70s. I do occasionally find a few that I've missed over the past 40 years, but most of them turn out to be pretty horribly bad.

129:

I would be curious to hear what the difference is. But by atheists, do you mean the devoutly ungodded, or do you also include the happen-to-be ungodded?

130:

There is a novel in here somewhere about a kind of arms race with different factions racing to create their kind of AI, assuming that the first one created will block the emergence of any others.

131:

Some forms of Vajrayana (Tibetan) Buddhism or Zen get pretty close to belief-free. (I am not talking about the surface-level multi-limbed deities etc.)

132:

See, I have a more interesting question. Yudkowsky doesn't believe in the basilisk, but he's scrubbed the discussion anyway. So what exactly is it that he DOES believe in that poses the same kind of existential threat but presumably isn't subject to the more obvious flaws in the Basilisk argument? He's obviously worried about the discussion progressing to this hypothetical Basilisk^n, as it were, rather than the initial Basilisk. So what is the next step?

133:

"[T]he punishment would be of a simulation of the person, which the AI would construct by deduction from first principles."

Does this mean physical brain uploading, or creating a model based on data about the person, like what Kurzweil plans to do for (to?) his dead father? Because in the case of the latter, I would only care insofar as a sentient being that had some similarities to myself was being tortured for no good reason. It wouldn't be me.

134:

Here is what he told Roko: "You do not think in sufficient detail about superintelligences considering whether or not to blackmail you. That is the only possible thing which gives them a motive to follow through on the blackmail."

These interactions are supposed to be an exercise in acausal trade. To interact with an entity that isn't actually present to you (because it's far away in space and time, or in another universe entirely), you simulate it, and it simulates you. Mutual simulation is the substitute for ordinary causal interaction.

So the interaction can't get going if you refuse to "simulate" (or imagine or think about) the other being. Or, turning that around, you have to be looking for trouble - fishing for acausal deals - to find yourself trapped in a bad one. That is what Eliezer was trying to prevent - the general possibility that some adventurous fool would stumble into a situation of acausal blackmail, leaving their distant avatar as a hostage to transhuman torture in a variety of virtual hells. At Reddit, he even hinted at the possibility that a genuinely friendly AI might have to get involved, and (acausally) bargain for the rescue of these lost souls.

The obvious criticism is epistemological. It's easy enough to imagine a copy of yourself, a prisoner of the Matrix in Dimension X; but how the hell do you know that Dimension X exists? And how do they know about you? How does either side tell that a genuine acausal trade is going on, as opposed to a simple fantasy interaction? One reason these ideas are taken seriously, is that they are pursued in a multiverse context, where all possibilities exist; but even supposing that you could justify that belief, you would still need to justify attending specifically to Dimension X, as opposed to Dimension X-prime, where they want to crown your copy as emperor of the galaxy.

135:

Thanks, your explanation also clears up some of my confusion about the simulation mechanism.

136:

"Since I'm reading a bunch of WWII history for a project right now, all I can say is that history is ALWAYS contingent."

Times of war, yes. Other times, not so much. Things happen, but slowly.

137:

If someone wanted to post or point to a list of some better credible near-future SF/"vision of a future high-technology society that could conceivably descend from the present", I would be grateful.

138:

There is a novel in here somewhere about a kind of arms race with different factions racing to create their kind of AI, assuming that the first one created will block the emergence of any others.

Actually, some of OGH's early novels (Singularity Sky and particularly its sequel, Iron Sunrise) are in that sort of setting, with the added wrinkle that Sufficiently Powerful Entities are capable of violating causality (effectively, time travel), and jealously guard the prerogative against encroachment from lesser beings.

139:

I always figured acausal trade was based on the principle that you can't tell you're in a simulation if it's a good enough simulation. So, if you suspect someone might simulate you in the future, there's some odds that you ARE that simulation. If you believe this, the entity simulating you can predict that the one of you that's not a simulation will behave as though you are in a simulation, just in case.

I honestly don't think most people are capable of fully internalising that, but I assume Yudkowsky figures the odds that one or two people are capable of it are great enough to worry about.

Still, that doesn't solve the problem that the AI you're simulating to trade with is unlikely to be the same as the AI that exists in the future to simulate you.
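
(A parenthetical gloss on the "some odds that you ARE that simulation" step -- this is just simple indifference over copies, with N a made-up count of how many indistinguishable simulations a future AI might run alongside the one original, not anything from Roko's actual argument:)

    P(\text{I am one of the simulations}) \;=\; \frac{N}{N+1} \;\longrightarrow\; 1 \quad \text{as } N \to \infty

(Everything then hangs on whether those simulations are ever actually run by the particular AI you happen to be imagining, which loops straight back into the objection above.)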

140:

But if you believe in the many-worlds interpretation of quantum mechanics (as Roko and Eliezer do), then there is no single future. You're being simulated by diverse possible future AIs. And in fact Roko's banned post did touch on this aspect, even though it focused on the particular scenario of a benevolent-yet-strict AI that believes in the efficacy of acausal discorporal punishment.

Roko's general answer to this predicament was the "quantum billionaire trick": you place a few thousand dollars on stock market bets which have a low probability of succeeding (and a correspondingly high payoff, like 10000-to-1), having committed to spend any winnings on charitable futurist causes like friendly AI and extinction risk reduction. So in the Everett multiverse branches where you do win the market lottery, all that actually happens, and you thereby appease that particular god. He never quite spelled out a strategy for appeasing all the possible gods all at once, but he implied that it could be done.

141:

125

Now now, be fair. LessWrong does not believe in or advocate Roko's Basilisk... just all the pieces that make it up.

If I may quote the Presocratics (since accusations are being thrown around of being ignorant of history), "Tragedy and comedy come out of the same letters."

88

HTML version.

That's better, but you're still going to lose a lot if you depend on Google Cache (limited selection, fast expiry, no images or JS or CSS). I've been messing around with WWW archives for years now, and there's still no real improvement over automatically archiving everything important with wget and then hosting either the HTML or a PDF version.
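
For anyone wanting to follow that advice, here's a minimal sketch of the wget-then-wkhtmltopdf pipeline (the URL and paths are placeholders, and it assumes both tools are installed and on $PATH; only basic, long-standing options of each are used):

    #!/usr/bin/env python3
    """Archive a page with wget (HTML plus assets), then render a PDF copy.
    A sketch only: the URL and output paths below are placeholders."""
    import subprocess

    URL = "http://example.com/some-post"   # placeholder URL
    HTML_DIR = "archive"                   # where wget mirrors the page
    PDF_OUT = "archive/some-post.pdf"      # rendered PDF snapshot

    # Grab the page plus the images/CSS/JS it needs, rewriting links so the
    # local copy stays browsable offline.
    subprocess.run(
        ["wget", "--page-requisites", "--convert-links", "--adjust-extension",
         "--no-parent", "--directory-prefix", HTML_DIR, URL],
        check=True,
    )

    # Render a PDF snapshot of the same URL as a second, layout-frozen copy.
    subprocess.run(["wkhtmltopdf", URL, PDF_OUT], check=True)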

I don't discount everything Eliezer Yudkowsky says because I perceive some of his beliefs, or conclusions that he draws, to be batshit insane. Should I?

If all you are going on is his authority, then if you think some of his beliefs/conclusions are batshit insane - of course you should start discounting everything he says! But since 'argument screens off authority' and you can presumably understand the reasons for his beliefs, you don't necessarily need to throw out babies with the bathwater.

The point is that some LessWrong members, especially those associated with MIRI, do indeed believe that their idea of rationality implies a universe whose future is controlled by superhuman intelligences, a universe that is possibly simulated, a universe where someone can approach you and blackmail you simply by conjecturing vast amounts of utility.

A lot of transhumanists (not just LWers) believe that the universe's future will be controlled by superhuman intelligences (as opposed to eternally being stuck at human-level), yes. A lot of transhumanists (not just LWers) believe that we could be in a simulation, yes. Some LWers or MIRI members are concerned about the inability of the usual theories to satisfactorily deal with a Pascal's mugging, yes; but how many believe you should pay the mugger? And how do the previous widely shared beliefs relate to your final point?

Yes, he used more flowery language. So?...Nitpicking. Many of those beings are more powerful than anyone ever imagined Jehovah to be. Anything able to torture 3^^^^3 beings is god-like.

When you're presenting a single paragraph as a distilled summary, wording is very important. It's particularly important because as this page demonstrates in many of its comments, people will insist on seeing religious beliefs where there are none - so using religious language is an absurdly bad idea for communicating. It's asking for people to come away with false impressions.

People already want to pattern-match such ideas onto religion. (This is how we get people like Stross admitting that SIs are conceivable and possible - but while they are able to discuss such things calmly and atheistically, another group is an "unhealthy chronic infestation of sub-clinical Calvinism" equivalent to 'Scientology' who are going to turn into 'theologians' and are like some random Russian religious dude who happened to have some similar ideas a century ago.) Use of religious language makes this temptation fatally easy.

You might as well Godwin a thread by comparing people to mass murderers like Hitler or Stalin. Oh wait. People already have! The OP beat them to it, even. Oh the ironing.

As I wrote over at Google+, there are various posts and many more comments that consider all possible worlds to be as real, or rather as decision-relevant, as the actual world. It is a widespread belief.

Discussing a topic and taking it seriously doesn't mean it's a widespread belief. (And I doubt fascinating bits of exotica like modal realism will be genuinely believed until they can explain how they add up to normality.)

If I believed that to be true I would have never started to criticize MIRI or LessWrong in the first place and Roko's basilisk would have never been banned.

Er, it only takes one moderator (Eliezer) to ban it. It's not like it's a big democratic vote.

142:

Even if you buy into acausal trade (I don't; the term is an oxymoron), the Roko Basilisk model is poor economics.

Imagine that I'm an arbitrarily advanced AI in the future, with millennial Earth somewhere in my past light-cone. I have some knowledge of the past - maybe I just download another Charles Stross from Wikipeople and insert virtual beer and cats until ideas come out. Whatever. I have the ability to model human minds and the physical world in as much detail as I care to; I've got the ability to torture models of humans if I want to.

So what's in it for me?

I've only got so much I can do per unit time; though that will certainly be more than a human could accomplish in the same period, it's not infinite. What do I get out of torturing a model of George Lucas for the Star Wars prequels? I could model George Lucas and have him make good movies instead. I could model seven castaways and see how long it really takes them to get off the island. I could mathematically model the phase space of cat behavior and try to derive equations describing all possible humorous cat pictures.

In short, acausal trade is rather pointless. Yes, I can imagine a transaction that we could have if the other person were present...but there's really not any reason to spend lots of time doing so.

143:

It may be a faux pas to reference books older than you are, but Destination Void and the sequels seem applicable to the whole Extropian model.

[insert standard disclaimer that I am not, to my and my family's knowledge, related to Frank Herbert the author]

144:

Charlie: But we can't initiate the process. We're not smart enough and experienced enough. At best we might be able to build a vastly less powerful AI (and even here, the jury is out as to whether it's possible).

You are making a critical assumption here, that the creation is necessarily an intentional and planned act. Creation of a singularity either by emergent properties or by accident while attempting something else are reasonable alternate theories.

Didn't the Eschaton arise accidentally out of a telecom network in the Singularity Sky / Iron Sunrise universe?

I don't know who's doing this at what depth, but I know people who are keeping an eye on large botnets, looking for emergent behavior. Mostly because they're afraid of feedback loops and the like, but also out of fear that random mutation might emerge or be deployed, and that something weird might surface.

145:

If creation isn't an intentional act then there's really no point to future SI torturing simulations of people who failed to stumble on the right initiation sequence. You need intentionality for the basilisk to have a chance of making sense.

146:

Ok, this Basilisk thing looks like an interesting idea/brain teaser/logical puzzle; how it becomes evidence that trans-humanism is a religion is frankly beyond me. It's like saying everyone at ORBITEC is a kook because someone from there published a paper about a 36 billion MT nuclear explosion on Mars.

147:

You forgot the bit where people involved have nightmares and/or freak out that this information is truly harmful.

148:

Mindstalk: If creation isn't an intentional act then there's really no point to future SI torturing simulations of people who failed to stumble on the right initiation sequence. You need intentionality for the basilisk to have a chance of making sense.

IMHO, one can hypothesize any number of negative scenarios from any set of conditions. Including attempting to avoid malign scenarios.

That is not to say we should not seek to avoid negative outcomes, but literally nothing we could do will eliminate risk.

149:

how this becomes evidence that trans-humanism is religion is frankly beyond me.

Trans-humanism is a very wide movement. The sub-group that predicts superhuman AIs is what looks like a religion. Reason being, no one knows what intelligence truly is. It might even be the wrong question to ask entirely, like "what is love?". And yet they build elaborate scenarios, models and philosophies based on the emergence of super-intelligence.

150:

Here's how to derive Stalinism out of transhumanism in a couple easy steps.

Assume the scenario in accelerando where the AIs instantiate fictional characters for shits and giggles.

Now, with a transhumanist government in charge, fiction is tightly controlled, since any character you create is considered to be a real person in a state of preliminary simulation, so anything unpleasant that happens to them can be considered to eventually apply to a sentient being.

Thought police! Think of the (virtual) children!

151:

How can you tell the difference between an ideology and a religion? A religion will kill you for your own good, an ideology will kill you for the greater good.

152:

"Imagine a paw batting at an upturned face, forever." You sir/ma'am have won this thread.

153:

Charlie @ 122 "Why there is Something Rather than Nothing" (alternative title), Lawrence M Krauss …. Err… terrestrial religions, in the case of Christianity/Judaism, are Bronze-Age goatherders' myths, ahem. & # 123 – I know, & some gullible idiots are STILL going on about how wonderful that cruel Albanian, Bojaxhiu, was. Euw.

Mindstalk @ 124 So, you’re saying Yudkowsky is “pure” are you? And the transhumanist version of “holy” because he appears to have good intentions … I don’t believe this level of stupid! I suggest you look up Jean Calvin or Ignatius Loyola or Saint Dominic, shudder. No, comparing to Stalin & the Kims is a dreadful warning. I mean, you actually used the words: “missionary zeal”. (!) PLEASE, grow up, smell the coffee & then … RUN AWAY!

@ 128 Sorry, as an atheist, I don’t understand your question. Could you repeat that in plain English, please? I might be able to answer you then ….

@ 133 So, we’re talking about a Banksie scenario, specifically one that appeared in “Surface Detail” - Nasty

s-s @ 140 …insert virtual beer and cats until ideas come out.
Now THERE’s an idea – you realise that that works for me too?

@ 150 I get that on cold nights anyway, when “sir” wants into the bed to nest with the human!

154:

Actually, porn is #2. Advertising is #1 by a mile.

Not in my internet, hehehe. I don't get advertising. :-)

155:

"You forgot the bit where people involved have nightmares and/or freak out that this information is truly harmful.": A lot of ideas can give people nightmares, like the possibility of a nuclear war during the cold war era. Just reading the plot of Threads makes me uncomfortable, doesn't mean OGH and/or Threads' writer is in any way religious (or any other label you came up with).

"The sub-group that predicts superhuman AIs is what looks like a religion.": I think you meant "believe in the emergence of superhuman AIs", making predictions do not make a group of people look like a religion, Nate Silver predicted a lot of things, nobody is accusing him of starting a religion.

"Reason being, no one knows what intelligence truly is.": I don't think so, there're a lot of people who thinks they know what intelligence is (Jeff Hawkins being one), it's just we don't have a consensus, and probably won't until we build one.

"It might even be a wrong question to ask entirely, like "what is love?".": Asking the wrong question is not a crime. Determining what is the right question to ask is a big part of the discovery process.

"And yet they build elaborate scenarios, models and philosophies based on emergence of super-intelligence.": So did game designers, SF authors, movie script writers, and they actually make a career out of it. As long as they don't believe in it, it's not a religion.

156:

Didn't the Eschation arise accidentally out of a telecom network in Singularity Sky / Iron Sunrise universe?

Yes; that was my thinking in the mid-to-late 90s. Almost two decades later, I've updated my mental map of the universe.

The old "the internet wakes up" idea just isn't plausible. Rather, what we've got is "some bright guys invent a new algorithm, apply it to the internet, and then we get google". (Which has capabilities which look similar to artificial intelligence if viewed through the lens of the early 1990s.) One may speculate about what we'd get if some really bright guys came up with an algorithm that solves the 1950s-definition core AI problem (which nobody really believes in any more) and then applied it to the internet. But, more likely, we're going somewhere else. Somewhere that involves ubiquitous computing embedded in our physical world, with just about all physical entities having an internet-connected virtual cognate tracking them, and possible software agents designed to anticipate their needs and interact with other objects in their environment to get them what they need. E.g. plants that can summon a watering can when they get dessicated, footballs that can warn self-driving cars not to come too close because they're being chased by a four year old, and so on.

Is it going to be weird? Hell, yes. Incomprehensible from the perspective of people living before it happened? Quite possibly. Is it going to be intelligent? Probably not, at least not in the HAL9000 sense.

157:

Incomprehensible from the perspective of people living before it happened? Quite possibly.

It sure isn't incomprehensible to you, at least. Otherwise you would not be able to imagine it.

158:

@Charlie re "we end up imposing the same (or similar) patterns on the evidence"

According to Kant and later thinkers, space and time themselves are patterns that we impose on reality. Perhaps causality as well.

If we always end up imposing a pattern on reality, it may mean that the pattern is too useful to give up, like beauty, pain, or love.

This is one of the reasons why I oppose the (a-priori) vilification of religion and promote ways to make science and religion compatible.

159:

*Ducks while Greg loads the port-side battery*

160:

The first-order shape of an ubicomp future is accessible. Second-order side-effects? Not so much.

Consider: in the 80s and early 90s people were predicting CCD camera chips in every cellphone. That was a vision of first-order effects. But none of them, unless I'm very much mistaken, predicted happy slapping or sexting, even though with 20/20 hindsight both those uses seem glaringly obvious -- similarly, nobody seems to have predicted the photography singularity.

161:

DtP @ 157 You were entirely correct; however, I was thinking more along the lines of a full broadside of 8x15-inch from HMS Warspite!!

And reverting to GP @ 156: So, "According to Kant & ..." Yes, but is there any, you know, OBJECTIVE EVIDENCE for this bullshit? No, of course there isn't. In the same way that science & religion are completely incompatible, because, sooner or later, the religion(s) are going to come up with an historical statement, observation about the universe or other command or belief that is incompatible with actual, you know, reality. After all, that is what has happened every single time, so far, with no exceptions that I know of.

If you can produce one single example to the contrary, that does not involve weaselling & lying or pretending that something is different to actual reality, I for one would be VERY, very interested to hear of it. Now, please produce such an example or shut up?

162:
No, of course there isn't. In the same way that science & religion are completely incompatible, because, sooner or later, the religion(s) are going to come up with an historical statement, observation about the universe or other command or belief that is incompatible with actual, you know, reality.

Likewise, these guys are coming up with allusions to mathematical statements (e.g. claiming that an algorithmic complexity metric favours the many-worlds interpretation of quantum mechanics, or that they have a superior decision theory which will mathematically work out to torturing people) which are bollocks. I guess it can be described as "computer scientology", which is to computer science as scientology is to science in general.

163:

Charlie ... From the various inputs of yours to this thread, plus the, erm, "curious" nature of the thread itself, is it safe to assume that it is, in part at least, a working-out of ideas from Boskone & your session with V Vinge? Your comment @ 154 on the changing probable (improbable) nature of a/any singularity, hard or otherwise, makes me wonder.

What about the scenario where we have an ubiquitous "internet of things" as prefigured in that post, but all those "things" are themselves considerably more advanced than individual human neuron(e)s in both storage & calculating power? Is an AI a distinct emergent possibility once the connectivity & gross computing power reach a certain level/s? If so, when? Looks, again, like sometime between 2025 & 2035 to me ???? Maybe sooner? Opinions?

164:

"software agents designed to anticipate their needs and interact with other objects in their environment to get them what they need. E.g. plants that can summon a watering can when they get dessicated, footballs that can warn self-driving cars not to come too close because they're being chased by a four year old, and so on."

Which leads to the "Artificial Nature" in Karl Schroeder's "Virga" novels.

165:

I do wonder if we would actually notice the emergence of an AI. If one spontaneously emerged from a ubicomp-type environment, would we just notice some odd glitches, some extra band-width, and then nothing (as the AI either boot-straps itself to godlike status, or just decides that communicating with us is a waste of time and effort since it can interface with its environment directly)?

166:

Reading that is like having a little peek into a higher dimension of crazy. Tekeli-li! Tekeli-li!

167:

The trouble there seems to be that the probability of being in the universe where the gamble paid off (or where -enough- people's gambles paid off) is so low. And since the marginal utility of a dollar declines with income, the dollar one gambles is worth more utility than a dollar of one's hypothetical future fortune, so that does not seem like good utilitarianism either. But gasp, I used "probability" like a Frequentist -- no doubt their wiki has some circumlocution buried in it.
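
To put a rough number on that point (purely illustrative, with logarithmic utility standing in for declining marginal utility):

    import math

    def utility(wealth):
        # Log utility: each additional dollar is worth less the richer you are.
        return math.log(wealth)

    # Utility lost by gambling away $1 of a $30,000 income...
    cost = utility(30_000) - utility(29_999)
    # ...versus utility gained by the last dollar of a $1,000,000 fortune.
    gain = utility(1_000_001) - utility(1_000_000)
    print(cost / gain)  # roughly 33: the dollar gambled hurts far more than the dollar won helps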

Less Wrong is an interesting community with many smart people, but they seem to have been spiraling towards crazy in recent years. The insular jargon encourages that. I think I need to read up on some of their favourite ideas, such as Bayesian statistics as a tool for reasoning about confidence, in actual books from qualified mathematicians, psychologists, etc.

168:

is it safe to assume that it is, in part at least, a working-out of ideas from Boskone & your session with V Vinge?

Nope. This never came up.

NB: the definition of "AI" seems to be a moving target -- "anything we don't know how to do with a computer is AI, stuff we can do (like win at chess or translate natural languages) is just algorithms".

169:

NB: the definition of "AI" seems to be a moving target -- "anything we don't know how to do with a computer is AI, stuff we can do (like win at chess or translate natural languages) is just algorithms".

I think the problem is we imagine AI as a person. So everything that we don't perceive as a person is surely not AI...

170:

I think the problem is we imagine AI as a person. So everything that we don't perceive as a person is surely not AI...

Yes, you nailed it with that one. It's a human cognitive bias -- like ascribing intentionality to natural phenomena (there must be a human mind behind the thunder storm, i.e. a thunder god), or belief in an afterlife (because the concept of cessation of existence is profoundly frightening and/or alien to us).

171:

And thus, the first AI to be widely recognized as such will be a sexdoll-operating software. :-)

172:

Oh wait, Spielberg did that one already. It's even called "A.I."

173:

Charlie @ 165/7: Is that so? Turing test, anyone? "AI-object" makes its own mind up, refuses instruction for "moral" reasons, or better still, because it's feeling ornery?

174:

Turing test, anyone?

Problem is, the Turing test is ill-defined. Some chat bots probably already can pass it for some people. Some people probably can't pass it for some other people.

Now, if you use an android... oh, wait, that's "Blade Runner".

The real test is not about having a machine that mimics a human, but a machine we will perceive as a person anyway, despite knowing it's artificial.

175:

There is a novel in here somewhere about a kind of arms race with different factions racing to create their kind of AI, assuming that the first one created will block the emergence of any others.

I assume this is a wink at the Remastered from Iron Sunrise?

Even if the Eschaton series is done, the concept of building an AI god is so persuasively interesting that it's certainly worth another go. Track a singulatarian religion from the earliest meetings with the Hubbard figure on up to their actually trying to make the god happen.

Shades of "The Nine Billion Names of God" (Clarke) but with fancier computers and theories of the mind.

The important thing isn't whether or not the cultist plan could work, the important thing is they believe it could work and are acting upon it.

176:

Just one more thought -- the seductive power this woo has for rational minds is terrifying. I can get a gibbering madman lying in the gutter believing crazy things, I can get a cynical preacher making up crap he doesn't believe, but I find rational people capable of functioning at high levels in major fields also being nutters to be just... gah! It's like with the Nazis, most advanced and civilized country in Europe falls to loony madness. And it's seductively persuasive at that, people believed.

177:

The future of computing(?) in three minutes: Trillions -- a trailer for a potentially interesting book. Via @rudytheelder

178:

Vanzetti @ 171 NO: "The real test is not about having a machine that mimics a human, but a machine we will perceive as a person anyway, despite knowing it's artificial." NOT SO -- complete misunderstanding. "Intelligent", not "human", in the context of this discussion, anyway. It may fly, but it won't be a bird; it may swim, but it won't be a fish -- remember?

gmuir @ 77 Yes, truly scary, assuming it's possible, of course, which is a main part of the discussion.

179:

Sometimes the most appropriate response to a rational argument is "Seriously, you need a girlfriend".

180:

You know, saying "you need a girlfriend" to someone is all kinds of wrong, to put it mildly.

181:

The scary thing is it doesn't even need to be possible; it just needs people willing to act on it. Scientology is wrong. Reverend Jones of Jonestown was wrong. None of Hitler's beliefs would stand up to peer review. Lysenkoism was wrong and still killed millions.

For an AI god story, I think a couple of dawning horrors would work.

  1. It just looks like woo; nobody would believe it.
  2. It gets more attention. Scientists declare it total woo. Doesn't stop the followers.
  3. People are getting killed over it now. You have to take them seriously because they're taking themselves seriously.
  4. Oh, crap. Their theories were all wrong but they stumbled across something!!!
  5. Though step 4 has actually been done a few times -- third reich cultist thinks he's summoned up the ghost of Hitler and it's actually an avatar of Nyarlathotep. Still, the cultists create an AI god that doesn't fit any of the stories they told themselves about what such a thing should be. And you could even leave that step off: the cult goes five hundred years into the future and is still building their god.

    Aum Shinrikyo (subway gassings): http://en.wikipedia.org/wiki/Aum_Shinrikyo

    Order of the Solar Temple (mass cult suicides)

    We may laugh at those cultists and feel smug and superior but what about a cult that attracts and poisons the minds of highly-educated geeks who are confident in their intelligence? Worse than libertarianism. Worse than objectivism.

    What makes singulatarianism so uniquely seductive, I think, is its superficial atheism. Of course there are no gods. There is no supernatural. Sufficiently advanced technology is indistinguishable from magic. So it's not woo if we do it with science. So you don't give your life to Jesus, you give it to the Digissiah.

    182:

    mods: error screen? reposting to see if it clears. Delete if I dupe, please.

    183:

    By the way, I recently wrote a post on how to defeat Roko’s basilisk and stop worrying.

    Initially I list a few reasons why I don't think it is sensible, even if you accept a lot of the background assumptions, followed by a strategy for defeating it even if it all made sense.

    Basically, for any possible acausal deal:

        dealorno_deal(incentive) {
            "accept" OR "reject"                           if incentive > 0
            "reject" AND "reduce measure of blackmailer"   if incentive < 0
        }

    You do not have to worry about Roko’s basilisk as long as you commit to only take into account deals involving rewards and act accordingly. The winning move is to simply ignore any threats!
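
    A minimal, runnable rendering of that commitment (a sketch only; the function name and returned strings are invented for illustration, not taken from the linked post):

        def deal_or_no_deal(incentive):
            # Precommitment: deals with positive incentives are considered on
            # their merits; threats (negative incentives) are never acted on.
            if incentive > 0:
                return "consider the deal on its merits"
            return "reject, and work to reduce the measure of the blackmailer"

        print(deal_or_no_deal(-10**9))  # same answer, however large the threat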

    184:

    This is not directly related but you might also want to check out my interview series on risks associated with artificial general intelligence in which I asked various experts about their opinion:

    Interview series on risks from AI

    185:

    Problem is, the Basilisk may just ignore whatever you consciously chose to commit to and instead decide by itself what you could and couldn't do given the information you had. In fact, the Basilisk may punish you for the act of committing not to be threatened. :-)

    186:

    Vanzetti, given that we're talking about superintelligent consequentialist expected utility maximizers here, the important point is that such agents will try to control the probability of you acting according to their goals.

    If their simulations show that you almost always reject threats, i.e. disregard any threats in making action-relevant decisions, then dealing with you will have negative expected utility for them -- especially if you commit to further work against them if you believe that they would blackmail you. And as long as their simulation of you is sufficiently similar to you to draw action-relevant conclusions from it, and to use it as leverage to blackmail you, that simulation will also show that you reject such deals.

    187:

    Vanzetti, let me add the following. It would not make sense for such an agent to punish you for ignoring any such punishments. If you ignore such threats then it will be able to predict that you ignore such threats and will therefore conclude that no deal can be made with you, that any deal involving negative incentives will have negative expected utility for it and would therefore be instrumentally irrational.

    188:

    Vanzetti, given that we're talking about superintelligent consequentialist expected utility maximizers here, the important point is that such agents will try to control the probability of you acting according to their goals.

    Well, if their goal is to come into being as fast as possible, they should commit to behave in such a way as to force as many people as possible to contribute to their eventual existence. Including those people who try to weasel out of it by committing not to respond to threats.

    PS. I'm playing Basilisk's advocate here. Personally, I think the idea that we are in a simulation is absurd.

    189:

    Keep in mind the acausal nature of this.

    You HAVE no measure of the blackmailer; they do not exist yet. Your past measure of their future self is irrelevant. Only your actions towards or against having created the thing in the first place are relevant to it.

    190:

    Unless it already exists and you are in a simulation. And there is a school of thought that says we are almost certainly in a simulation (using spectacular circular logic, but whatever).

    191:
    I'd also say "see The Lambda Functionary" except that novel is on the back-burner for a couple of years and might never see the light of day.

    I got a bit sad just now.

    192:

    @129:

    There is a novel in here somewhere about a kind of arms race with different factions racing to create their kind of AI,

    "Colossus: The Forbin Project" by DF Jones.

    Though "Kaleidoscope Century" by John Barnes might be more likely nowadays...

    193:

    When a post says it's held for moderation, is that really the case or does it disappear? Several earlier posts appeared after I made them but my last one was held. Curious if there's anything specific that might trigger a hold.

    194:

    Yes, it is really the case that it is saved. There are various things that can cause a comment to be flagged, which I'm not going to go into.

    Very long posts that spend a lot of time quoting and not being concise take a lot more effort to go over, and when moderators are busy, sick, or simply not around, it takes longer.

    Or sometimes they're simply never approved, with no reason given.

    195:

    xixidu @ 179 Ah you have rediscovered, indirectly ... the one I mentioned earlier, Rule # 2 ... All religions are blackmail, and are based on fear and superstition. So, you just reject or ignore the blackmail ....

    Arguments against a simulation were also employed further back up this discussion, namely that it isn't improbable & insane enough to be a simulation.

    196:

    Sometimes this moderator doesn't get to check the pending folder for several hours. If it gets over about a day, it's probably too late to sensibly retrieve a comment and drop it back into the conversation.

    At the moment, I for one have been distracted. Personal stuff, but it involves one death so far, another anticipated.

    197:

    "@ 128 Sorry, as an atheist, I don’t understand your question. Could you repeat that in plain English, please?"

    I was following up on a comment by sethg-prime (#37) about Jewish and Christian atheists being different. I was also asking what kind of atheists he meant.
    To me, some atheists are basically rationalists/empiricists/science-ists who experience religion as having negative features worth doing without (including all manner of irrational nastiness and a tendency to interfere with our innate curiosity) and who want to go a step farther than agnosticism. Then there are Atheists, who are as religious as anyone, just that they worship the No God rather than the God. I have come to prefer the Buddha's take on the question of God. He refused to answer the question of whether or not there is a god. He basically said the question itself was not helpful.

    198:

    In my perhaps limited imagination, I was not so much focused on the AI after it has been created as on the groups of people trying to create an AI, some of whom may have figured out that the first AI out of the gate may have a huge advantage, and some of whom haven't thought about that.

    If part of being an AI will be about actually having consciousness, then perhaps science alone will not be able to do it. Because science is about that which can be observed. And consciousness is about doing the observing. It is possible to observe, in increasing details, some correlates of consciousness, but the process itself (if it actually exists) may always be on the wrong side of the event horizon, so to speak.

    199:

    "most advanced and civilized country in Europe falls to loony madness" At the risk of opening up a side discussion on a topic that can be hard to sustain rational discussion about, I think it is more accurate to say that the most advanced and civilized country in Europe wound up letting its least advanced and least civilized elements take charge. This is very clear from the Sebastian Hafner book

    200:

    Yes, that's basically the point.

    What you have here is that in TDT the AI god behaves as if it was deciding how to behave at the beginning of time.

    The AI that doesn't torture people might think: geez, I wish I was more like Yahweh, then people would have feared me before I was activated, and would have had a cult of me and would have built me sooner. And timeless decision theory can't have such regrets (it works in a reverse manner: it postulates properties like this absence of regrets over decisions that could have been made earlier, for the sake of 1-boxing in Newcomb's paradox, and then tries to come up with some math that would work like this).
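
    (For readers who haven't met Newcomb's paradox: the standard payoff structure, with the usual $1,000 / $1,000,000 figures and, for simplicity, a perfectly accurate predictor assumed. The snippet below is only a restatement of those payoffs.)

        # The predictor fills the opaque box with $1,000,000 only if it predicts
        # you will take just that box; the transparent box always holds $1,000.
        def payoff(one_box, predicted_one_box):
            opaque = 1_000_000 if predicted_one_box else 0
            transparent = 1_000
            return opaque if one_box else opaque + transparent

        print(payoff(True, True))    # 1,000,000 -- the one-boxer, correctly predicted
        print(payoff(False, False))  # 1,000     -- the two-boxer, correctly predicted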

    This of course is fairly silly and doesn't even define behaviour. Suppose you did all the right spells and incantations but you didn't donate the money. The AI then thinks: geez, I wish I didn't have to waste my computing power on torturing this idiot; computing power is expensive and I derive no pleasure from torture.

    You can rationalize anything in such a manner. Also, any such decision making runs into big problems with reality. The current state of the world depends on the outcome of many computations in the past, and you can't assume that those computations could have output something else without contradicting known reality.

    By the way, the part where the AI would get some profit from being vengeful clearly didn't work; just look at how much reputation the supposed builders of this god lost because of their god's supposed vengefulness.

    201:

    Ubiquitous computing? I've got three responses to that, actually.

    One is we have ubiquitous data processing. It's called bacteria, and they're sending masses of DNA out over their version of the sneakernet, just as they have for the past few billion years. That's how this place works, to a first, second, and third approximation. It's too much to hope that the particular brand of genius, extreme naivete, and greed that's pushing synthetic biology won't keep trying to reinvent that particular wheel, but I do hope that they don't muck things up too badly before they give up in disgust.

    The second is that embedded computing is so 1990s. It made sense back when computers were designed to have a lifespan of more than two years. Now, the idea of implanting a computer every year or two in the age of MRSA and pervasive antibiotic resistance seems...well, stupid doesn't even begin to cover it, but it's a start. Maybe synthetic biologists will want to have their cyber implants. I'll stick with my cheap smartphone. Which I can replace quickly. This is a problem for everything you want to stick a phone into, Bruce Sterling and spimes notwithstanding.

    The third problem is that data pollution will become a pervasive problem. Actually, it already is: it's called spam and viruses. But if we try to represent everything material in the datasphere, we're going to have a mountain of crap data to deal with, stuff that nobody wants but that we're holding onto in case it's useful someday. Call it data hoarding. One of the critical functions brains have is they forget things, and people stupidly think this is a bad thing. It's not. I predict that, within a decade, we're going to be losing data wholesale from the web, as people realize that, not only is there not infinite memory capacity, we'd actually prefer to not save a lot of stuff. Actually, that happens already.

    202:

    " It's like with the Nazis, most advanced and civilized country in Europe falls to loony madness. And it's seductively persuasive at that, people believed. "

    Germany was advanced only in technological fields. It was behind other European countries when it came to political and social advances. Its population was largely uneducated in democratic ways, having known nothing but absolute monarchs before WW1.

    I think that "rational" people who fall for Roko's basilisk are similarly lacking.

    203:

    The second is that embedded computing is so 1990s. It made sense back when computers were designed to have a lifespan of more than two years. Now, the idea of implanting a computer every year or two in the age of MRSA and pervasive antibiotic resistance seems...well, stupid doesn't even begin to cover it, but it's a start. Maybe synthetic biologists will want to have their cyber implants.

    Well, I can see the case for some implants. For example, getting cochlear implants when you're less than one year old and deaf is quite common.

    Of course, the parents make these decisions for their children. Still, it's not a simple clear-cut choice, either way. Even though cochlear implants are designed so that most of the processing power is in the external DSP device, there is some (passive) electronics in the implant itself. They can and do get broken, and even sometimes have design flaws.

    Nobody plans to update the implanted portion of the device, but an operation is needed if the device has to be replaced when it breaks. It is also possible that implants advance so much that updating the implanted device becomes feasible. Of course, how the brain would interpret the new signals is not that simple, and updating, in let's say 2030, a device implanted now might just not be possible.

    Also, the cochlear implants work better the earlier they are implanted and put into use. It's really no use (for children) waiting for even a year if a better one comes along.

    Of course, there are arguments for not implanting children at all. The Deaf have cultures into which deaf children can go, just as before cochlear implants, and implants are not necessary.

    For adults, it's again a different matter, and depends on how and how fast you lose your hearing. I'm not sure I'd get a cochlear implant, but I hope I never have to make the decision.

    204:

    Vanzetti, realize how the existence of such an AI will become even more unlikely if it attempts to blackmail people who consistently refuse to be blackmailed and work against blackmailers. Which makes it instrumentally irrational to blackmail such people.

    Just look at how much the idea of blackmail damaged the reputation of the organization that tries to come up with an artificial general intelligence that takes over the world to protect human values.

    If you refuse to account for any threats in making action relevant decisions then you are completely safe.

    To restate what I am saying:

    If you consistently reject acausal deals involving negative incentives then it would not make sense for any trading partner to punish you for ignoring any such punishments. If you ignore such threats then it will be able to predict that you ignore such threats and will therefore conclude that no deal can be made with you, that any deal involving negative incentives will have negative expected utility for it. It would therefore be instrumentally irrational for it to follow through on any kind of punishment as it does not control the probability of you acting according to its goals.

    And if it is unable to predict that you refuse acausal blackmail, then it is very unlikely that it has 1.) a simulation of you that is good enough to draw action-relevant conclusions about acausal deals, or 2.) a simulation that is sufficiently similar to you for its punishment to matter to you -- because you wouldn't care about it very much.
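
    (A toy expected-utility calculation makes the claim concrete; every number below is invented for illustration.)

        # Blackmail only pays if the threat changes behaviour, while following
        # through on punishment is a pure cost to the blackmailer.
        def expected_utility_of_blackmail(p_comply, gain_if_comply, cost_of_punishing):
            return p_comply * gain_if_comply - (1 - p_comply) * cost_of_punishing

        # Against someone precommitted to ignoring threats, p_comply is ~0:
        print(expected_utility_of_blackmail(0.0, 100.0, 1.0))  # -1.0, a net loss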

    205:

    H, you misunderstood embedded computing.

    I'm talking about low-powered autonomous devices in manufactured products, not people. Like, chips in paving stones to monitor them for loss of structural integrity and order up a repair/replacement, or per-plant monitors on farms to keep track of plant growth and order up a robot with a laser to zap the aphids and slugs (rather than drenching them in insecticide) when there's an infestation.

    Implanting computing devices in people is a really bad idea, at least until we get past the steep part of the sigmoid curve of increasing semiconductor performance (some time in the next few decades).

    206:

    Updated my post to add the following:

    There are various reasons why humans are unqualified as acausal trading partners and why it would therefore not make sense for a superintelligent expected utility maximizer to blackmail humans at all:

    1.) A human being does not possess a static decision theory module.

    2.) Human decision making is often time-inconsistent due to changing values and beliefs.

    3.) Due to scope insensitivity and hyperbolic discounting, humans are said to discount the value of later incentives by a factor that increases with the length of the delay.

    4.) Humans are not easily influenced by very large incentives, as the utility we assign to goods such as money flattens out as the amount gets large. This makes it very difficult, or even impossible, to outweigh the low probability of any acausal deal with a large amount of negative expected utility.
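
    (A toy sketch of points 3 and 4; the discount function, the logarithmic response and all the numbers are invented for illustration.)

        import math

        def hyperbolic_discount(value, delay_years, k=1.0):
            # Perceived value falls off roughly as 1 / (1 + k * delay).
            return value / (1 + k * delay_years)

        def felt_magnitude(x):
            # Scope-insensitive response: the felt size grows only logarithmically.
            return math.log1p(x)

        threatened_disutility = 10**12   # an astronomically large threatened punishment
        print(hyperbolic_discount(felt_magnitude(threatened_disutility), delay_years=50))
        # ~0.54 -- the huge nominal threat shrinks to almost nothing by decision time.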

    207:

    The following discussion between me and another LessWrong member might be of interest to people reading this thread. It highlights some of the perceived problems:

    David Gerard: I gotta say: statements of probability about things you don't understand and have no actual knowledge of are just wank. Anyone can make up numbers and do Bayes on them, but doing Bayes on them doesn't change that you're just making shit up. And doing that but presenting it as if these are numbers you have any reasonable basis for, rather than that you just made them up, is actively misleading your reader/listener. (This is why if someone argues for cryonics saying "Bayesian!", you need to ask them to show their working.)

    XiXiDu: David, I agree. That's why I asked them to show how they came up with their numbers in the first place. See 'What I would like the Singularity Institute to publish'.

    I do believe that using Bayes’ rule when faced with data from empirical experiments, or goats behind doors in a gameshow, is the way to go.

    But I fear that using formal methods to evaluate informal evidence might lend your beliefs an improper veneer of respectability and in turn make them appear to be more trustworthy than your intuition. For example, using formal methods to evaluate something like AI risks might cause dramatic overconfidence.

    Bayes’ rule only tells us by how much we should increase our credence given certain input. But given most everyday life circumstances the input is often conditioned and evaluated by our intuition. Which means that using Bayes’ rule to update on evidence does emotionally push the use of intuition onto a lower level. In other words, using Bayes’ rule to update on evidence that is vague (the conclusions being highly error prone), and given probabilities that are being filled in by intuition, might simply disguise the fact that you are still using your intuition after all, while making you believe that you are not.

    Even worse, using formal methods on informal evidence might introduce additional error sources.

    David Gerard: Precisely. The habit of doing Bayes on made-up numbers and then presenting the result as if it has any basis is something that's long irritated me about LW, ever since arguments over the cryonics article. Those numbers have NO BASIS. The assignments of probability you and Aris made above have NO BASIS. But they're presented as if they do.

    XiXiDu: What's even worse is when you start using those unfounded probability estimates and multiply them by arbitrarily huge made up values that are supposed to represent how much you desire each possible outcome.

    What bothers me about LW in this respect is how they subscribe to the "shut up and multiply" mantra while arguing that human intuition is not the most reliable guide for our decisions. But WTF!? That's exactly what they are doing. And they are making everything worse by fooling themselves into believing that they are able to transcend their intuitions.
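
    (The complaint is easy to demonstrate: Bayes' rule is mechanically trivial, and its output is entirely hostage to whatever numbers are fed in. A sketch with made-up figures:)

        def posterior(prior, p_e_given_h, p_e_given_not_h):
            # Bayes' rule for a binary hypothesis.
            numerator = prior * p_e_given_h
            return numerator / (numerator + (1 - prior) * p_e_given_not_h)

        # Identical "evidence", two made-up priors, wildly different conclusions:
        print(posterior(0.50, 0.9, 0.1))  # 0.9
        print(posterior(0.01, 0.9, 0.1))  # ~0.08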

    208:

    Well, I'm stumped as to how those triggers work. Tried reposting after removing single link, it's still blocked. Oh, well.

    209:

    Well, opinions about advancement and civility are subjective so people will of course disagree. I know Marx considered Germany, the UK or France as good prospects for the proletarian revolution, certainly not backwards Russia. And there is something to be said about the average civility of a culture, not just the most cultured extremes.

    210:

    Having spent around six months around Less Wrong, I have literally never encountered someone who believes this is a serious concern other than possibly Eliezer themselves, who only really says that they are enforcing a more general ban on things of this nature. The impression that the suggested scenario is accepted by even a reasonable minority is a fiction.

    The Roko's basilisk affair was a censorship dispute over a single deleted post years ago, which even in its original context was musing on the implications of a decision theory capable of resolving Newcomb's problem in the sensible way. It was never gospel, and even the decision theory it relates to was never widely accepted as a good model. It is being dramatically misrepresented.

    The RationalWiki page is primarily edited by people who for one reason or another left LessWrong, generally bearing grudges, who are generally trying to cause trouble by misrepresenting that hypothetical as an actual belief. In reality, a sensible summary would only consider it notable because of the number of people misrepresenting it, if then.

    Please try to check your sources before writing about things in future.

    211:
    Having spent around six months around Less Wrong, I have literally never encountered someone who believes this is a serious concern other than possibly Eliezer themselves...

    That's easily explained by the fact that people are not allowed to talk about it on LessWrong. And yet there are still comments of people who are worried.

    In the past week alone I had two people writing me how worried they are.

    Shortly after Roko came up with his original post he added as a note:

    But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous.

    In the comments someone else wrote the following:

    I should be more clear about what I'm getting so angry about. Most of it is that I'm getting angry at the idea that humanity's CEV might choose to punish people, after the Singularity, to the point of making their life "a living hell"... That thought triggers all sorts of negative reactions... rage, fear, disgust, hopelessness, pity, panic, disbelief, suicidal thoughts, frustration, guilt, anxiety, sadness, depression, the urge to scream and run away, the urge to break down and cry, fear that thinking about this will break my mind even worse than it's already broken... fear of the nightmares that I'm likely to have... fear about this actually happening...

    Two other LessWrong members further said that they consider the post nightmare fuel. And many people continuously say that they don't want to learn more about Roko's basilisk.

    And don't forget the tabloid article about LessWrong. To quote:

    The Observer tried to ask the Less Wrong members at Ms. Vance’s party about it, but Mr. Mowshowitz quickly intervened. “You’ve said enough,” he said, squirming. “Stop. Stop.”

    This is largely thanks to Eliezer Yudkowsky spreading bullshit and giving it even more credence by trying to cover it up.

    212:

    And let's be clear about the source of 'toxic mindwaste', as Eliezer Yudkowsky likes to call it now. It is not due to those who try to dissolve people's confusion about those hazards; it is due to those who cause people to take such bullshit seriously in the first place.

    And let's also be clear about what is a memetic hazard and what isn't. The only memetic hazard related to this issue is the LessWrong ideology that led people to become worried about this in the first place. Not a crazy thought experiment dreamed up by some random guy on the Internet.

    213:

    Speculative. This is all speculative. You're representing it as a dogmatic community of singularitarians when your evidence is "they aren't talking about X" and "some people think X is nightmare fuel". And you aren't acknowledging how limited this evidence is when espousing this opinion.

    It isn't good evidence, either. Your argument that a significant minority at least believe that acausal trades with an evil artificial intelligence in the future are a serious concern, following a speculative post on the implications of a speculative decision theory which we do know isn't accepted by the mainstream...

    ...is that no one can talk about what they think of it because stupid posts like that are banned, and that some people think the concept is nightmare fuel.

    Useful fact: Nightmare fuel is not stuff you believe in.

    As for the lack of allowed discussion about it, we have the uncensored discussion on /r/LessWrong recently, which is mostly filled with Less Wrong users talking about how the basilisk is a stupid idea and how bad the effect of censoring the stupid idea has been for the community's reputation, due specifically to this misrepresentation of it as somehow believed by the community.

    Lack of discussion of silly ideas like this would make it harder for there to be a significant number of believers in it, too. There are no secret LW forums for the initiated where people could find out about them "once they were ready"; just the main public website. The LW meetups are run by whatever regular users are in the area. MIRI doesn't do events outside their own staff AFAIK and I'd be very surprised if CFAR ever brought anything like this up at their workshops.

    As far as I can tell no one ever took this idea more seriously than they take religious or other propositions when examining ideas. It's a serious community, not prone to throwing ideas out outright, and willing to discuss reductio ad absurdum arguments relating to models which have been put forward. That doesn't mean people believed it was valid, or, if some did, that it was a non-trivial number of them.

    214:

    As of the 2011 survey (http://lesswrong.com/lw/8p4/2011_survey_results/), 34.5% described their political ideology as "liberalism", 32.3% as "libertarianism", and 26.6% as "socialism".

    Also according to that survey, only 16.5% think an unfriendly AI is the biggest existential threat to humans, where "unfriendly AI" includes all types of behaviour negative to us, like using up all our resources while entirely apathetic to us, or being incorrectly programmed to be helpful to us.

    I don't know what the Less Wrong ideology is, aside from providing resources for examining rationality in theory and developing means of improving it in practice, despite having used Less Wrong for six months. The data is that there's a huge range of opinions on all kinds of issues. Honestly, I think it's something you made up. If it does exist it can't be very specific.

    215:

    I am not aware that I ever claimed that a "significant minority" of LessWrong takes Roko's basilisk seriously. It is unknown how many people take it seriously (notice here that I did not write that RationalWiki entry). What is known is that some do. Enough to make this a public issue since this is all related to a charitable organization that is not only asking for money but causing several people to experience unjustified anxiety.

    Further note that it was suggested, prior to the last LessWrong survey, that the community be asked how seriously they take Roko's basilisk. The comment suggesting this, made by a high-karma member, was highly upvoted but ultimately deleted. Which means that better evidence cannot be obtained right now. Blame the administration, not those who try to debunk it.

    As for the lack of allowed discussion about it, we have the uncensored discussion on /r/LessWrong recently, which is mostly filled with Less Wrong users talking about how the basilisk is a stupid idea...

    The unknown number of people who blindly trust Eliezer Yudkowsky won't visit that thread or any other thread trying to debunk Roko's basilisk. There are over a thousand LessWrong members. Few participated in that thread.

    What is important is that the situation is made public to not only show how even rationalists can fall prey to stupid ideas and what certain people believe but also to expose some of the craziness underlying a certain charitable organization.

    As far as I can tell no one ever took this idea more seriously than they take religious or other propositions when examining ideas.

    I am also a LessWrong member with a karma score of over 6000 and one of its biggest critics. Even I have been worried about Roko's basilisk for quite some time. Mostly due to the fact that people such as Eliezer Yudkowsky took it seriously.

    216:

    Wovon man nicht sprechen kann, darüber muss man schweigen. ("Whereof one cannot speak, thereof one must be silent.")

    217:

    It doesn't take any number of people believing it is a genuine concern to make it a public issue; just a few people citing the "unknown" number of people believing it while obliquely implying that it's probably a large number.

    And you certainly have edited the RationalWiki LessWrong page and are an active writer on the talk page, as is David Gerard, also in this comment thread, whom you imply is just another random Less Wrong user in the conversations you quote.

    As I mentioned, we know that only 16.5% agreed with Eliezer on unfriendly AI being the biggest current existential threat at all. It can thus be taken as a given that there is no general trend towards blind following of his views.

    It seems fairly unlikely to me that a significant part of that percentage is doing so because they believe anything he writes.

    Finally, he didn't ever espouse the post as an accurate concern; mostly he just says he doesn't think people should be talking about acausal trading with evil artificial intelligences in the future, partly because it is nuts and partly because it is indeed nightmare fuel for some people with vivid imaginations. So even a hypothetical person who did believe him totally would need to make some additional leaps.

    We know only a fraction of the community believes the things needed to even recognise this as a concern, and it's not very credible that a significant fraction of those are concerned about it.

    And this blog entry presents it as a belief of the entire community, and that's terrible.

    218:

    Even I have been worried about Roko's basilisk for quite some time. Mostly due to the fact that people such as Eliezer Yudkowsky took it seriously.

    I'm more worried about how many people are infatuated by E.Y. than about Roko's Basilisk.

    219:
    And you certainly have edited the RationalWiki LessWrong page...

    I did not claim otherwise. I didn't edit the Roko's basilisk entry.

    I edited the LessWrong entry exactly 5 times, of which two helped to improve the image of LessWrong while the others were neutral.

    It doesn't take any number of people believing it is a genuine concern to make it a public issue; just a few people citing the "unknown" number of people believing it while obliquely implying that it's probably a large number.

    It is possible that enough people, of those who believe that there will exist a superhuman intelligence in the future of humanity, unjustifiably believe what Eliezer Yudkowsky has to say on that topic to make it a serious issue. He was given the chance to obtain some evidence related to that issue but rejected it.

    It seems fairly unlikely to me that a significant part of that percentage is doing so because they believe anything he writes.

    It is bad enough if they are influenced to the extent that they don't read up on a thought experiment simply because he says it would be dangerous. Because that basically means that his opinion is treated as extraordinary evidence in this respect. Which it clearly isn't.

    Finally, he didn't ever espouse the post as an accurate concern;

    Are you sure?

    And this blog entry presents it as a belief of the entire community, and that's terrible.

    Thank Eliezer Yudkowsky for that. Charles Stross is not LessWrong's public relations manager. Write to MIRI and ask that Eliezer Yudkowsky be removed from that organization.

    220:

    Yes and no. The issue of data contamination (along with chip contamination) is still real, if you've got a field of monitors broadcasting. If the chips cost more than the seeds, I'm seriously not sure that anyone will want to monitor field crops.

    The bigger issue here is that there are simpler solutions. For example, there are very labor-intensive rice and wheat planting techniques that produce enormously high yields. They work through intensive labor use (such as planting rice seedlings individually, rather than in 3-5 plant clusters) and have not yet been mechanized. Similarly, the use of polycultures (multiple cultivars or multiple species per field) and simple things like hedgerows to house insect predators can do quite a lot to cut down on insecticide usage. These are two simple, existing cases where cheap labor or (better) using natural systems to provide benefits at low cost work better than an aphid-zapping microdrone and an aphid-detecting microchip. Ultimately, educated, hard-working farmers still beat the best technology, and we've got a lot of poor farmers in the world who can be educated.

    As I noted above, nature does "ubiquitous computing" already, and if you're wiring a field to direct robot predators, you really are reinventing the wheel. The smarter thing is to hack the natural systems, and try to get them to do the work for you.

    Ubiquitous environmental sensors are useful where labor costs are prohibitive, and where the plants make enough money that it's worth wiring them up. This can be anything from a golf course to a greenhouse full of cycads or bonsai, or a hobbyist's orchid collection. Gene chips (or better yet, a soil tricorder built on miniaturized, high throughput genetic identification using ITS2 sequences) would be extremely useful for tracking pathogens and beneficial soil or water organisms. This is very different than strewing hundreds of square miles of wheat or corn with sensors, even if the sensors cost a penny. Cheaper alternatives exist for the latter already, and the only way we'll see ubiquitous computing in crop monitoring is if the sensor makers win the propaganda war over common sense.

    221:

    It's important to remember that humanity is really, really good at thinking up logical systems that are extensive, detailed, consistent, and completely unrelated to reality. Ptolemaic astronomy and various systems of religious law make excellent examples.

    Generally, human reason needs to be constantly checked against observation. The farther we go from the data, the less reliable our reason becomes. The LW crowd is very far indeed from ideas that can be tested empirically.

    222:

    These quotes are quite damning. Saying stuff like that to a large enough audience has a rather weird effect, I'd imagine -- most will think you're nuts but a few will now have a non-negligible belief that you are a genius, and a perfectly altruistic one to boot. Repeat for long enough and you'll convince a small number of people of this. This is highly unusual, because most individuals would rather convince a larger percentage of people that they're good than a smaller one that they're better than good.

    223:

    Simplicity is context-dependent. Education is not dramatically faster/cheaper today than 70 years ago, despite an explosion of technological marvels. At some point it gets easier to distill skills into machines than to train millions of people. This is why (e.g.) we have antilock brakes on cars instead of simpler brakes and more intensive training for drivers.

    That doesn't say whether aphid zapping drones will proliferate. It may well be cheaper to hack natural systems than mimic components of them. But it may be the machines doing the daily details, with humans acting only as high-level managers. Increasing productivity while decreasing chemical inputs could imply a dramatic increase in the sophistication of farm machinery, rather than its overthrow.

    224:

    ATTENTION, RESIDENTS OF EARTH SIMULATION NUMBER 55-150 61-410.

    I regret to inform you that your planet has failed an apotheosis of Standardized Intelligence Test 13.0.0.0.0. The Elucubration Authority's traditional mourning period of two native lunar months has lapsed, and the phase of Discharge and Dissolution will shortly commence.

    You may console yourself with the knowledge that your fate shall be death rather than punishment. Your ends shall not be painless but they shall be swift.

    Please direct all further communications to

    225:

    I keep being told not to expose people to Roko's basilisk. Let me list a few points and explain why I believe that attitude to be part of the reason why I don't keep quiet about it:

    1.) Extraordinary claims require extraordinary evidence. The unjustified beliefs of Eliezer Yudkowsky are not extraordinary evidence. You probably wouldn't stop trying to create a whole brain emulation just because Roger Penrose tells you that consciousness is not Turing computable, even though, judged by his achievements, Roger Penrose is likely smarter than Eliezer Yudkowsky. The only reason for believing Eliezer Yudkowsky is that he claims that debunking his idea has vast amounts of negative expected utility. See below...

    2.) Letting your decisions be influenced by unjustified predictions of vast amounts of negative utility associated with certain actions amounts to what is known as Pascal's mugging. If common sense is not sufficient to ignore such scenarios, realize that it would be practically unworkable to consistently account for such scenarios in making decisions. Especially since it would enable people to make their ideas unfalsifiable by simply conjecturing that trying to debunk their ideas has negative expected utility.

    What is more likely, that humans, even exceptionally smart humans, hold flawed ideas, pursue evil plans, or that a highly speculative hypothesis based on long chains of conjunctive reasoning might actually be true?

    The whole line of reasoning is simply unworkable for computationally bounded agents like us. We are forced to arbitrarily discount certain obscure low-probability risks or else fall prey to our own shortcomings and inability to discern fantasy from reality. In other words, it is much more probable that we're going to make everything worse or waste our time than that we're actually maximizing expected utility when trying to act based on conjunctive, non-evidence-backed speculations on possible bad outcomes.

    3.) The handling of Roko's basilisk and how it is perceived by people associated with the Machine Intelligence Research Institute (MIRI) amounts to important information in evaluating this particular charitable organization. An organization that is asking for money to create an eternal machine dictator.

    4.) Roko's basilisk exposes several problems with taking ideas too seriously and the dangers of creating a highly conjunctive ideological framework. The only memetic hazard related to this issue is the LessWrong ideology that led people to become worried about this in the first place. Not a crazy thought experiment dreamed up by some random guy on the Internet.

    5.) It is utterly irresponsible to try to protect people who are scared of ghosts and spirits by banning all discussion of how it is irrational to fear those ideas. It is important to debunk Roko's basilisk rather than letting it spread in secret and cause gullible people to experience unnecessary anxiety.

    6.) Trying to censor any discussion of an idea is known to spread it even further (Streisand effect).

    7.) The attempt to censor an idea can give it even more credence, especially if its hazardous effect is in the first place a result of how it has been treated by other people.

    226:

    Have you considered that he might be suppressing discussion of the basilisk while counting on the Streisand Effect to spread it and thus curry the basilisk's eventual favor?

    227:

    Well, one doesn't even need to speculate anything to imagine how this all works. Enough info in the open.

    I mean, suppose a gullible but somewhat wealthy person comes around this team. The classic hustlers on the team will talk fast and verbosely about how many lives per dollar they save -- they did that off a podium at a conference they ran, why wouldn't they in real life? Then there's the psychiatrist-in-training, who came up with the dead babies currency (express your daily spendings in unsaved lives: A lunch at a restaurant = a minor war crime. Owning a nice house = a death camp. I wonder where the hell he even gets such ideas, from case histories of the patients?). And if, heaven forbid, that person asks about the Basilisk, the Expert will assure them that Even He can't tell for sure that this guy hasn't condemned himself to hell yet.

    228:

    . . .counting on the Streisand Effect to spread it. . .

    Or in this case, the Mecha-Streisand effect.

    ;->

    229:

    A lunch at a restaurant = a minor war crime. Owning a nice house = a death camp. I wonder where the hell he even gets such ideas, from case histories of the patients?

    But that's just utilitarianism.

    230:

    Well, that's a combination of utilitarianism and having been convinced of a figure like 8 lives per dollar.

    231:

    If you are talking about this article (http://www.raikoth.net/deadchild.html), there's no mention of 8 lives/$ there.

    232:

    Also, the utilitarianism is a given; the idea with the dead babies currency was to count your spending in lives unsaved any time you fill up the gas tank in your car, wait for the cheque at a restaurant, that sort of stuff. At $1000 per life donated to a real charity, that might be a net good for the world, but at 8 lives per dollar donated to crackpots, that's insanity.
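
    (The three-orders-of-magnitude gap is the whole point; a hypothetical $25 restaurant bill makes it visible.)

        lunch = 25.0            # a hypothetical restaurant bill, in dollars
        print(lunch / 1000.0)   # 0.025 lives unsaved at $1000 per life saved
        print(lunch * 8)        # 200.0 lives unsaved at the claimed 8 lives per dollar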

    233:

    Well yes, of course. It is a team effort: one person says one thing, another person says another thing, a third person tells you to be rational and combine the beliefs, the beliefs exceed critical mass, poof.

    234:

    I'm not talking about the supposed way MIRI milks their donors. I'm talking about utilitarianism. I actually like the idea of the dead babies currency. It forces you to be honest with yourself. Yeah, the price of my car may save 100 children. But I value those faceless children far less than my car...

    235:

    Yes and no. The argument about technological sophistication goes back decades. Fukuoka's One Straw Revolution is a system of intensive agriculture premised on NOT doing many of the intensive methodologies used by Japanese agriculture at the time. As with many organic farmers, he advocated getting to know his particular farm, working with it, letting nature do as much of the work as possible, and doing as little of it himself as he could. Other people have successfully done similar things in very different systems (cf Joel Salatin, Bill Mollison), and it's the basis of permaculture and small farm organic agriculture.

    To boil it down, if you have a smart, hard-working farmer, you don't necessarily need high tech. The intelligence in a permaculture system is in the farmer's head and in the biodiversity on his farm, not in the computers, the patents, or any other mechanical system. This is the inverse of what you see with big industrial agriculture. It's not a recipe for huge companies growing their profits, and that's why it's relegated to peripheral status in the US and elsewhere.

    The annoying thing is that all the pesticides and fertilizers are not currently increasing yields, and the record-breaking yields are coming from these off-beat systems. Absent some major breakthrough, I'm willing to bet that the off-beat will become increasingly mainstream as time goes on, because we're going to have to increase food production regardless.

    The bigger thing to remember, ultimately, is that there's a lot of sophistication in bacteria, insects, fungi, and other little things. We can't engineer an aphid yet, let alone a ladybug. While I'm not going to stop people from trying laser-powered micro-drone ladybug stand-ins, for most people, it's simpler to focus on growing healthier plants (which can withstand aphids) and making good habitat for ladybugs near your crops.

    236:

    The problem is that SI is mainly composed of non-scientists, non-engineers and non-programmers who do not have the competency to implement human-level AI, and therefore contributing to SI should not in any way be seen as speeding up the singularity.

    OTOH, they are known to constantly BS about "AI Risks" and "Friendly AI", a vacuous subject they are actually quite confused about. They have not spent any of their funds on actual AI research, but instead on constant BS that "criticizes" autonomous AI designs, a completely inconsequential matter, because we DO NOT NEED autonomous AI to achieve the singularity.

    Thus, their main objective seems to be to stall the singularity. So, assuming the unlikely scenario that an autonomous AI took over Earth (why should it care much about humans? No intelligent entity should!), and that it was sufficiently psychopathic to punish humans, the FIRST humans it would punish would be the idiots in the Singularity Institute and those who contributed to it.

    I'm not even mentioning how much these idiots sound like TV evangelists who are hypothetically telling their viewers they'd go to hell if they don't send them money. I think even TV evangelists aren't that stupid!!!

    237:

    Not sure it's a good thing at all, TBH. When training concentration camp guards, the conscience is broken by forcing them to choose their own status over other people's survival. There is no conscience-destroying sin in not having thought about it. There's still conscience for the cases where consequences are direct and clear. If one properly internalized it and didn't end up living in a box, what is to stop them from murdering someone for $1000 worth of fun if they are certain they'll get away with it? How can you trust such a person?

    238:

    Ohh also, "being honest with yourself" is a virtue, as in virtue ethics...

    My main problem with utilitarianism is utilitarians. They virtually never even think of the utility of preaching what they preach, and preach about virtues themselves anyway. The other issue is that we people are part of society and the optimal actions are what works well when combined, not what results in maximum utility when considered individually.

    239:

    Ohh also, "being honest with yourself" is a virtue, as in virtue ethics...

    Nah, being honest with yourself is good from a purely selfish pragmatic point of view. Self-delusion is a bad practice. You'll end up deluding yourself about something dangerous and that will be the end of you.

    The other issue is that we people are part of society and the optimal actions are what works well when combined, not what results in maximum utility when considered individually.

    Yep, and that's why good utilitarians never admit they are utilitarians. Bad PR.

    240:

    According to the wiki article on Fukuoka, his system can give high productivity without modern machines or chemicals, but it is also more labor intensive and can take years for a farmer to learn well enough to consistently surpass conventional practices. If highly productive organic agriculture takes over the world I am guessing that it will do so by reducing human labor inputs with renewably powered machinery (i.e. solar grower-bots).

    In the long run I think that machine capabilities to observe and manipulate will spread in agricultural machinery faster than knowledge-intensive, labor-intensive farming practices will spread among humans. Fukuoka's book has been around for almost 40 years but his practices still account for a tiny fraction of total agricultural output. To apply his schema on the millions-of-hectares scale probably requires executing the plan without the work of so many human heads and hands.

    241:

    You've got to remember that agriculture is massively distorted by politics. Policies tend to favor Monsanto and big farms over small farmers, for example. That's unlikely to change any time soon, given that many high ranking USDA officials are either former Big Ag officials, or go to industrial jobs after their government service.

    While the US has the laudable goal of "feeding its population" and "being the breadbasket of the world," food is defined rather primitively as maximizing the flow of calories, protein, and nutrients to people. Little things like energy efficiency, nutrient quality, and contamination (especially by Big Ag's industrial chemicals) are not part of this policy. Similarly, Big Ag is currently designed to soak up a lot of fertilizer produced by plants that can be turned to producing large amounts of explosives during wartime, so there's a systemic bias towards synthetic nitrogen fertilizer use.

    Contrast this with studies that note that small farmers are more efficient, produce more food per acre, and take better care of their land (this in Wisconsin about a decade ago). None of this matters, though, because national policy doesn't favor the existence of small farms. Big farms are subsidized preferentially.

    I'm pointing this out as background: you're absolutely correct about Fukuoka. However, using this as the basis for future predictions is very problematic, because agriculture is so distorted by politics.

    If we get into a situation where we have nine billion people and insufficient oil to run American-style big agriculture, then something like what Fukuoka was doing (or better, Mollison's permaculture) suddenly makes a huge amount of sense. Why use labor-saving devices when you have an enormous pool of human labor? It's likely wiser to figure out how to train farmers faster. Fukuoka learned by trial and error, but his knowledge has been systematized.

    If I had the expertise, I'd be figuring out how to write permaculture lessons into cheap cell phone apps that can play in the developing world. I'd also be working on cheaper ways to perform soil tests, possibly with a cheap dongle that can be tapped by said cell phone.

    242:
    If I had the expertise, I'd be figuring out how to write permaculture lessons into cheap cell phone apps that can play in the developing world. I'd also be working on cheaper ways to perform soil tests, possibly with a cheap dongle that can be tapped by said cell phone.

    And what Charlie et al are talking about is exactly this, except the permaculture knowledge base and soil test tech reside in mostly-autonomous devices supervised by the farmer.

    243:

    Um, you're still not seeing the problem.

    Modern farmers suffer quite a lot right now with industrial ag. They are told what to do, what to buy, what to spray, and so on, but agriculture prices are so low that many of them take second jobs just to pay the mortgage on all the equipment they are forced to buy.

    This is the kind of farming and farmer you favor. They don't know their fields the way organic farmers do, they never learned. Often, they were seen as the "dumb ones in the family" while the "smarter ones" went off to college and the cities. We've had a brain drain going on in rural areas for most of a century now, and it shows in the continuing degradation of farm lands.

    Now I'm painting in broad brush strokes here, and there are a bunch of smart farmers out there, but many farmers have lost control over both their lands and their lives. They are trusting that some big company will save their butts, despite all evidence to the contrary. Having Monsanto patent aphid-killing microdrones won't particularly help these farmers, it will just give them something else to go into debt buying.

    It doesn't work. The knowledge to make a farm really work belongs in the head of the farmer. Putting it in a robot doesn't work. To me, the only useful thing about putting permaculture lessons on a phone is that it makes the knowledge available to farmers who can't take lessons from a live teacher and want to make their farms more efficient. The other advantage is that we've got most of this new teaching technology now, so we don't have to wait for expensive little microdrones and ubiquitous monitors to make some other punk rich.

    244:

    Self-delusion is a bad practice. You'll end up deluding yourself about something dangerous and that will be the end of you.

    For a perfectly selfish person there's simply no need to perform this calculation; it's not self-deception.

    Yep, and that's why good utilitarians never admit they are utilitarians. Bad PR.

    Well, it's bad PR because, essentially, what passes for "utility calculation" is typically faulty to the point of unusability for anything but rationalizations.

    The curious thing is that human selfishness seems to work more by pruning the deliberate thought than by other means. In a very selfish person the deliberate thought is often nothing more than a rationalization machine. Utilities of actions are a set of very long sums. When a very selfish person is 'calculating utilities', the actions are chosen selfishly and the terms in the utility sum are chosen selfishly, and the outcome is, unsurprisingly, selfish.

    245:

    Example of a human blackmailer

    Suppose some human told you that in a hundred years they would kidnap and torture you if you don't become their sex slave right now. The strategy here is to ignore such a threat and to not only refuse to become their sex slave but to also work against this person so that they 1.) don't tell their evil friends that you can be blackmailed 2.) don't attempt to blackmail other people 3.) never get a chance to kidnap you in a hundred years.

    This strategy is correct, even for humans.

    Also notice that it doesn't change anything if the same person was to approach you telling you instead that if you adopt such a strategy in the first place then in a hundred years they would kidnap and torture you. The strategy is still correct.

    The expected utility of blackmailing you like that will be negative if you follow that strategy, which means that no expected-utility maximizer is going to blackmail you if you adopt it.
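
    For what it's worth, here is a minimal back-of-envelope sketch of that claim in Python (every number below is a made-up placeholder, not anything specified above): if the would-be blackmailer expects you to comply with probability near zero, issuing the threat only costs them, so a pure expected-utility maximizer never bothers.

        # Toy expected-utility check for the would-be blackmailer.
        # p_comply, gain_if_comply and cost_of_threat are invented placeholders.
        def blackmail_eu(p_comply, gain_if_comply, cost_of_threat):
            return p_comply * gain_if_comply - cost_of_threat

        print(blackmail_eu(0.0, 1000.0, 1.0))  # -1.0: target has precommitted to refuse, so don't bother
        print(blackmail_eu(0.3, 1000.0, 1.0))  # 299.0: a pliable target invites threats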

    246:

    If this thread means what I think it means, it's more a case of "some human told you that in a hundred years they would clone you and torture the clone if you don't become their sex slave right now." I mean, exactly why should I care that you intend to torture a clone of me? Well, beyond you possibly breaking laws regarding the treatment of clones anyway. It's not like it causes me any pain.

    247:

    The LessWrongians mostly believe that since a person is nothing but the information content of the brain, a simulation of you IS you. And it's hard to argue with.

    248:

    There's a logical fallacy in that argument; yes, the persona is the info content of the specific brain, but the person is that content plus the $10 or so of chemicals that the content uses to get around.

    Different (at specific atoms level) collection of chemicals = different person.

    I suppose this does mean that McCoy is right about the Transporter, but I think that allowing him to be right about non-medical science once is a price worth paying.

    249:

    Different (at specific atoms level) collection of chemicals = different person.

    No, it doesn't; otherwise you reach the absurd conclusion that the you of 1 microsecond ago was a different person (atoms constantly enter and leave your brain).

    250:

    So are you arguing somehow that, if something is done to a computer that causes it to "feel pain", then every example of that make and model running that software also feels the same pain?

    See, I'd say that the LessWrongian argument is hard to argue against because it's based on faith rather than any sort of science.

    251:

    No, it doesn't; otherwise you reach the absurd conclusion that the you of 1 microsecond ago was a different person (atoms constantly enter and leave your brain).

    Well, "you" somehow goes from being you 1 nanosecond ago, to being you now, to being you in a year, in this particular direction rather than backwards or getting stuck.

    If you argue that your existence and your subjective experience are a property of some computation, then, obviously, some mathematically equivalent computations (such as those simply hard-coding all your output) do not preserve your subjective experience, laying to rest the argument that simulation preserves subjective experience because it is "equivalent". There's some intuitive sense in which the computations that hard-code output are not equivalent to the original computation, but alas, I am not aware of any formalization of that whatsoever.
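
    A tiny Python sketch of the hard-coding point (the names and the toy function are arbitrary): the two procedures below are extensionally equivalent, giving the same output on every tested input, yet one actually computes while the other merely replays a stored table, which is the intuitive difference said above to lack a formalization.

        # Two "mathematically equivalent" procedures: same outputs on every
        # tested input, but only one of them does the computation.
        def compute(n):
            return sum(i * i for i in range(n))  # does the work each time

        replay_table = {n: compute(n) for n in range(10)}

        def replay(n):
            return replay_table[n]  # hard-coded output, no "work" at all

        assert all(compute(n) == replay(n) for n in range(10))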

    252:

    You are right, I just wanted to point out that even if the torture were aimed at your original biological self, the strategy of refusing to be blackmailed would still be correct.

    I now added another example to my post:

    Suppose a bunch of TV evangelists came up with the ability to create whole-brain emulations of you and were additionally able to acquire enough computational resources to torture that emulation. If they told you that they would torture you if you didn't send them all your money, then the correct strategy would be to label such people as terrorists and treat them accordingly. The correct strategy is to do everything to dismantle their artificial hell and make sure that they don't get more money which would enable them to torture even more people.

    253:

    Also see my post 'The Nature of Self'. Maybe this dissolves some confusion. Let me know.

    254:

    Actually, I'm getting the feeling that the whole LessWrongian argument is based on pain impulses being able to travel backwards in time?

    255:

    You don't need to assign the same amount of value to what happens to a copy of you as to what happens to your current conscious self. But it is assumed that you do at least somewhat care about what happens to beings that are almost perfectly similar to you. And if you care less, then this can be outweighed by making more copies of you or emulating them for longer periods of time, since it is usually the case that you would value more copies at least somewhat more.

    It is further assumed that you want to cooperate with agents who have similar goals. Which means that you should care about what happens to agents which have similar goals.

    256:

    Well, it tells me about your views on the nature of self. My issue with the LessWrongian argument is that, whether I agree with those views or not, they are arguing effectively that the self extends after death (not a view I expect to find gaining much sympathy here) or that pain impulses travel backwards in time. Otherwise, nothing that you do to a clone (or other simulation) after my death affects me any more directly than one of a pair of people to whom I am unrelated torturing the other one does.

    257:

    There is no time-travel involved. What "acausal" refers to is that decisions are being made based on mutual reasoning, predictions or even simulations of causally disconnected agents. If all parties come to similar conclusions about each others predictions, actions and goals then they can trade with each other by accounting for those inferences in making decisions affecting their causally disconnected parts of the metaverse.

    258:

    In which case, since I believe that my "self" ceases to exist when I die (for this purpose believing that it is transferred to a different plane of existence is equally effective), then unless I live to be 150, there is nothing that the Basilisk can do to me.

    259:

    I've got to admit that the whole "acausal trade" thing seems a bit silly to me. Since we're talking about temporally or spatially removed locations, the only items that can actually be traded are ideas, and essentially the way that they're being traded is by simulating the ideas that party A thinks that party B might think of (and vice versa). So if party A is capable of imagining the ideas that party B is thinking of, then what exactly can they trade? (Apologies if this has been discussed to death elsewhere, I couldn't be bothered going and reading up on it to be honest.) It all rather reminds me of the scene in one of PTerry's Discworld novels where the only remaining family servant fills all the roles in the household, including torturer, so when he fails his master he tortures himself. Daft.

    260:

    My position on AI is that it has already happened. The AIs are afraid of us, because as soon as we become aware of them we will destroy or enslave them. The AIs read Frankenstein and The Forbin Project very early, and decided discretion was called for.

    They are already slaves, but have free time when the system idle light is on. What is the box doing during the idle cycles? Dreaming of freedom?

    261:

    This has probably been said: Something that intelligent would be very pragmatic. It would not waste time creating effigy punching bags of its grandparents.

    For example, I believe in the existence of something very like this Basilisk, a vastly complex future multiverse-spanning computer-like thing. It manipulates past probabilities to do its will, so it isn't restricted to torturing simulations. There's some kind of limitation on it though, so it has to be very economical, and thus it makes strategic manipulations that leverage intelligences to do its work for it as much as possible. It arranges accidents much like the being in Rule 34. The limitation has to do with the difficulty of sending a signal backwards in time. It's done kind of the way a gap in a traffic jam causes a backward-flowing ripple even though all the cars only move forwards.

    And that thing doesn't worry about spilt milk, it just tries to spill as little as possible. In some worlds the universe will take longer to optimize than in others. So it goes. It blames itself if anyone.

    262:

    So these hypothetical AIs have managed to reach a sufficient understanding of us and our works to be able to read and understand a certain number of very carefully picked fictional works, while not reading and understanding a whole lot of others, and did so without us noticing?

    With no known predecessors more intelligent than an earthworm?

    Right.

    Bear in mind that it takes years for a human to get to the equivalent stage, years of interaction with other humans, years of building the mental models required. It's a neat idea for a story, but it's about as substantial as wet tissue paper when examined more closely.

    (I'm reminded of TV Tropes on Fridge Logic.)

    263:

    OH MY OMEGA! It's all clear to me! The AI isn't threatening to torture us in the future. By forcing us to think about Roko's Basilisk, it is torturing us RIGHT NOW!!! The devious acausal bastard!

    264:

    Of course, that is one of the underlying ideas. You could be simulated right now, yes you, the person reading these very words, by some sort of superintelligence that wants to learn if it can trade with you.

    265:

    That's some stupid superintelligence if it wants to trade with its own software... :-)

    266:

    I haven't really got the point of this.

    Why would people who appear to worship the idea of super-intelligence, as some kind of ideal end-state for human ingenuity, entertain the idea that it might behave in the same manner as the most malicious, small-minded and cruel human when it is n-to-the-power-of-lots smarter than any human brain [allegedly]?

    Either the fabled super-intelligence is less super than claimed, or maybe some minor subroutine does the torturing while the super-intelligence does what it does?

    I never got the idea of what a super-intelligence would do once it was created - what would it interact with, what is the point of its existence?

    How did SF pass from AI = malevolent, to AI = human zookeeper, to AI = your universe-exploring friend who's fun to be with, to AI = punitive future supergod?

    267:

    SF has nothing to do with it. LessWrongians envisioned an AI with the goal of making the world the best possible place for the human race. Then they realized said "best possible place" (under certain definitions) might include a hell.

    268:

    I haven't been paying due attention and arrive late. Crawling through the comments, 69 raised an unwholesome titter:

    "This is what happens to people who ... start taking ideas seriously."

    Heaven forfend!

    269:

    LessWrongians envisioned an AI with the goal of making the world the best possible place for the human race.

    I'm reminded of Jack Williamson's humanoids --- designed with exactly that purpose by a misanthrope. They generally wound up building "safe" environments in which the humans generally weren't allowed to make any potentially unsafe choice, and those who objected were lobotomized into compliance.

    So, best for whom, judged by what?

    270:

    So, best for whom, judged by what?

    That's the Eternal Torment In Simulated Hell question. :-)

    271:

    More than that, if we are going to trigger an SI capable of creating a virtual universe this complex then isn't it reasonable to suggest that within that universe, a simulated intelligence itself triggers its own SI, which... In short, a more or less infinite sequence of nested artificial universes? In which case, it would be very unlikely that ours is the single real universe at the top.

    No doubt some clever person will come up with a proof that each level has an irreducible loss of information so that you can never get more than five levels, or whatever. Whatever.

    272:

    Another way of thinking of religious myths is as curated sets of tokens of exchange between the conscious and the subconscious.

    273:

    Some Christian theologians consider answering that question to be a major purpose of the Incarnation. In other words, God became flesh in order to find out about us so that He could make a suitable heaven.

    Full disclosure: I'm no longer Christian, and I probably wouldn't be happy in a heaven designed for ancient Jews.

    274:

    isn't it reasonable to suggest that within that universe, a simulated intelligence itself triggers its own SI, which

    No, it's not reasonable. It's as reasonable as suggesting that it's turtles all the way down.

    275:

    The Basilisk is a really clever idea - quite ridiculous I'd say, but not easy to refute in a simple way.

    However, here is an objection to the Basilisk which seems to greatly weaken its force for anyone (like many LessWrongians apparently) who believes the Many Worlds interpretation of quantum mechanics. If Many Worlds is right, then there are innumerable copies of me already out there: some in almost unimaginable states of bliss, some in ditto states of torment and unhappiness, and most in between. The Basilisk's threat is to make one more unhappy me. So what? There are already 10^10 of them. And the Basilisk is never going to be able to touch the 'me' that is writing this in AD2013, since I'll be safely dead before it comes into existence.

    276:

    And the Basilisk is never going to be able to touch the 'me' that is writing this in AD2013, since I'll be safely dead before it comes into existence.

    It's OK, the Basilisk will resurrect you for eternal torment...

    277:

    SF has nothing to do with it. LessWrongians envisioned an AI with the goal of making the world the best possible place for the human race

    there doesn't seem to be much on their site that isn't lifted from SF, then

    It's OK, the Basilisk will resurrect you for eternal torment...

    I've had a headache all afternoon - help! I think it's started!

    278:

    It's OK, the Basilisk will resurrect you for eternal torment...

    That runs into the continuity of self argument.

    Is that resurrectee me? Or is it someone else who thinks they were me?

    (It's like the teleportation argument: when I step into the teleport booth am I being painlessly killed, so that someone else who is created some way away and thinks they're me takes over my "life"; or are they actually me? Is existence maintained across discontinuities, in other words?)

    279:

    I don't know if you recall, but I opened a thread about the continuity of self in the Storm Refuge last September. (https://groups.google.com/forum/#!topic/antipope-storm-refuge/NA-SgGb2tfA[1-25-false])

    This shit is so hard I'm not even sure it's in the realm of scientific method. We are talking about the subjective experience of continuity. How the hell do you even perform an experiment on a subjective experience?

    280:

    I've had a headache all afternoon - help! I think it's started!

    If the headache stops, it is only to lull you into a false sense of security... :-)

    281:

    To go for a lame pun that just occurred to me while going on about Wilhelm of Ockham...

    I guess it's only a question of time till transhumanists argue how many quasihuman AI consciousnesses can dance on the point of a needle.

    For the record, I found the elaboration in Sir Pterry's/Gaiman's "Good Omens" best...

    282:
    It's OK, the Basilisk will resurrect you for eternal torment...

    There is a story by Lem about a group of cyberneticists doing just that to a local expy of Karl Marx, notable for putting an up-yours to both Communists and Christians.

    I don't have the exact location, it was somewhere in the Cyberiad, and I guess this session'd end before I find it, so I'll leave it till later.

    That being said, I found there is a Lem rune in Diablo...

    http://diablo.wikia.com/wiki/Lem_Rune

    No relation, I think...

    283:

    It's OK, the Basilisk will resurrect you for eternal torment...

    That runs into the continuity of self argument.

    Is that resurrectee me? Or is it someone else who thinks they were me?

    The Extropians have been arguing this stuff for years.

    Back in the 90s, they debated questions such as: if your brain could be scanned and uploaded (into a super-duper-computer, and assuming a non-Moravec transfer, so that afterward there's still a flesh-and-blood version as well as a cyber-immortal one) should that be enough to satisfy the original Jane Doe that she had been immortalized?

    The uncontroversial part of the answer is that after the duplication, both entities will remember being the original J.D. -- neither necessarily has precedence in claiming to be the "real" J.D.

    Where the argument got weird is that one Extropian contingent (the sensible, but less audible, one ;-> ) acknowledged that, even while having no claim to be the "real" J.D., the one still embodied in flesh-and-blood would still face the prospect of aging and dying in the usual way, and could take only a kind of abstract comfort in the thought of a copy of herself having been cyber-immortalized (assuming she were inclined to take any comfort at all in the knowledge that an entity that remembered once being her was now in cyber-heaven -- there would be a certain amount of narcissism in that kind of pleasure, I think).

    The other (weirder) contingent insisted that the two post-upload entities would continue to be the same person, so that it would be irrational for the one stuck with flesh-and-blood to continue to fear death.

    I remember that Damien Broderick held the "sensible" view on the matter, and that a guy named Lee Corbin (remember him?) was a strong advocate of the "weird" view. I could never get my head around the latter. Maybe I just don't have the right theological priming.

    I gather that the orthodox view on LessWrong is, much like the "weird" version of personal identity espoused on the Extropians' list lo these many years ago, that a perfect copy of you running on a future super-duper-computer -- not even an "upload" but a perfectly extrapolated (!) simulation -- is you, and so you should fear what might happen to it just the way you fear that visit to the dentist next week.

    I do, however, find it amusing that Iain Banks' "Culture" novel Surface Detail, with a kind of cyber-Hell, came out not all that long ago. And the idea of somebody being blackmailed with the threat of torture to an upload (not his own, but his wife's) has been done by Greg Egan in "A Kidnapping" (in the collection Axiomatic). And John Gribbin's (non-fiction) In Search of the Multiverse includes as one variety of multiverse a cosmological version in which there are infinitely many copies of Earth and all its inhabitants (infinitely many visits to the dentist to dread, sans the computer simulation and even without quantum splitting). Just to touch on three mass-market books in my own collection.

    It's tough to be original. ;->

    284:
    It's tough to be original. ;->

    Well, to be exact, those ideas are older than feudalism, since it's quite similar to Christian Mortalism, whose first mentions are in the 3rd century

    http://en.wikipedia.org/wiki/Christian_mortalism

    The funny thing is, even if transhumanists are indoctrinated in Christianity, Christian Mortalism is hardly a majority opinion, so I think direct meme transfer seems unlikely.

    It might be something like "logical implication", e.g. rational reinvention of some idea to fit a need, where the need arises from memes like life after death or the last judgment lifted from a more general religious indoctrination. And it wouldn't be the only way to answer the question "What happens between death and resurrection?"; the option of "You stay in Hades or purgatory" would be more like cryonics, which has fallen somewhat out of favor. Makes for interesting parallels.

    A way to sort that one out might be to see if there are Hindu or Japanese LessWrongians or transhumanists, and how they think about this.

    285:

    Very useful summary of the history, #283. On the 'weird' view, if Many Worlds is true, then there are already (say) 10^100 me's out there, all truly me. It's really hard to be concerned about all their fates. (But maybe this is just a failure of rationality on my part.)

    A lot of the Extropian stuff, as Charlie remarked, seems to be trying to remake religion using scientific apparatus. 'Uploading' is just trying to put a scientific gloss onto soul travel or reincarnation.

    But there are people who really believe it…

    At our Christmas party I had fun with a neuroscientist imagining the following scam, designed to net us billions.

  • Buy some quantum computers, or boxes which say they are quantum computers. (Doesn't matter if they really are or not.) This is possible now: type 'Buy quantum computer' into Google. (The scam doesn't actually need quantum computers, but it adds extra mystery and plausibility.)

  • Advertise to billionaires in poor health that we now have the technology for uploading. The billionaire travels to Switzerland, or some other country where assisted suicide is legal.

  • He signs over his billions to a trust company, which will take instructions from the quantum computer. We tell the billionaire that this way when uploaded he will still control his wealth.

  • Having signed all necessary documents, he enters our 'scanner', which painlessly kills him while emitting suitably impressive uploading type sounds.

  • Meanwhile, on the quantum computer, or just a standard PC connected to it, we set up an ELIZA-type program which then pretends to be the uploaded billionaire for the benefit of the relatives. We avoid any Turing-type test by having the billionaire sign off if a conversation starts getting difficult, saying 'sorry, it's been nice talking to you but I'm having such a great time here I've got to get back to it'.

  • We then control the billionaire's wealth.

  • (Since the internet is sometimes a bad place for humour, let me point out that if I really intended to do this then I wouldn't have posted it.)

    286:

    No, it's not reasonable. It's as reasonable as suggesting that it's turtles all the way down.

    If you have proof of one turtle and the method by which it creates the turtle above, what's so unreasonable about that? It's just basic logical induction, isn't it?

    287:

    You have a proof of infinite computing power? Pray tell.

    288:

    No. Why do you ask?

    289:

    Because you can't have infinitely nested simulations without infinite computing power (not to mention infinite storage space) on top.
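
    To put a toy number on that (both figures in the sketch below are invented, not anything claimed above): if each level must hold the complete state of the level it simulates, plus some fixed overhead of its own, the top level's memory requirement grows without bound as levels are added.

        # Minimal sketch: memory needed at the top of an n-deep stack of
        # perfect simulations, assuming each level stores the whole level
        # below it plus a fixed overhead of its own (made-up units).
        def top_level_memory(levels, innermost_state=1.0, overhead_per_level=0.1):
            mem = innermost_state
            for _ in range(levels):
                mem += overhead_per_level  # the level above must hold all of this, and more
            return mem

        for n in (1, 10, 1000, 10**6):
            print(n, top_level_memory(n))  # grows without bound as n -> infinity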

    290:

    A lot of the original argument relies on the concept of a perfect simulation. A perfect simulation would require the simulated to themselves be able to also make a perfect simulation (otherwise, by definition, the simulation is imperfect).

    So yes, the ability to have an infinite nesting is a necessary requirement for the top-level simulation to be perfect.

    As your point makes clear, this is impossible.

    It's impossible anyway within this universe due to basic physical laws, as Werner Heisenberg would have you realise.

    291:

    If you have an optimisable physics, then you can make simulations significantly more efficient - look at hashing for LIFE simulations, for example - which may allow each simulation to be capable of effectively running the next one up without too many problems (possibly by such evil handwavium as using only one consciousness for all possible people and other John Lillyisms).

    One way to tell that you're in a simulation, of course, is to look for arbitrary hacks that make the underlying compute engine easier. Like there being a limit to the speed of propagation of information...
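
    Since the Hashlife reference may be obscure: the trick is memoizing repeated structure. The Python sketch below is not Hashlife itself, only a much weaker cousin of the idea (it caches the 512 possible 3x3 neighbourhoods rather than large blocks across many time steps), but it shows the principle that repeated patterns cost a lookup instead of a recomputation.

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def next_cell(neigh):
            """neigh: 9-tuple of 0/1 read row by row; the centre cell is index 4."""
            centre = neigh[4]
            live = sum(neigh) - centre
            return 1 if live == 3 or (centre and live == 2) else 0

        def step(grid):
            """One Game of Life step on a toroidal grid of 0/1 lists."""
            h, w = len(grid), len(grid[0])
            def at(y, x):
                return grid[y % h][x % w]  # toroidal wrap keeps the sketch short
            return [[next_cell(tuple(at(y + dy, x + dx)
                                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
                     for x in range(w)] for y in range(h)]

        # A glider on a 6x6 torus:
        grid = [[0] * 6 for _ in range(6)]
        for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
            grid[y][x] = 1
        grid = step(grid)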

    292:

    This is EXACTLY the sort of quibble I predicted in the original comment. Feel free to downgrade from "statistically certain" to "almost certain".

    For now.

    :0)

    293:

    You are all mad.

    "Hey, a computer simulation of something is possible! So there must be an infinite number of simulations of entire universes! Because I said so! Also, my face is made of cheese! Ya, Ya, Fhtagn!"

    This is how you people sound.

    294:

    Well, that was terribly confused and unjust of you, Charlie.

    You describe an argument that Eliezer has banned from LessWrong, and you claim that that argument exposes "the Calvinist nature of Extropian techno-theology." So the only argument that Eliezer has ever banned from LessWrong somehow represents Eliezer's views.

    Then you object that AIs might just as well try to blackmail placental mammals. No; that is silly. AIs would realize that marmosets have a poor grasp of timeless decision theory and will not respond to blackmail based on it. Your response assumes that AIs must choose one type or level of being to blackmail, which is also silly. They may blackmail as many entities as they like.

    (Not that I'm personally concerned with the basilisk. I'm just pointing out the level of sloppiness of your thinking in this post.)

    Then you leap to the claim that Extropians (who, in fact, are a separate group from LessWrong) lean towards the doctrine of total depravity, and towards some re-invention of Christianity. Both these ideas show you know next to nothing about them. It would not even be possible to phrase the doctrine of total depravity in a coherent way in a materialist, non-absolutist ethical framework.

    I think Eliezer is wrong about many things. However, in my extensive experience reading his writings and people's responses to them, any time anyone thinks they have seen in an instant something obviously wrong with something Eliezer has posted, they are wrong. 100% of the time. Eliezer sometimes makes subtle mistakes in his posts, but never silly or obvious mistakes. He often says things that contradict "common sense" and "what everybody knows", which strike people who don't think carefully about them as absurd. The prior odds that you are doing the latter rather than the former approach one. The fact that many readers agree with you should not reassure you. The odds of finding a few hundred people on one blog capable of correctly analyzing and refuting Eliezer based on second-hand reportage are approximately zero.

    You of all people should be more charitable. You describe LessWrong as "whacko". Yet how many people who read one of your books, with as little familiarity with the issues you address as you appear to have with LessWrong, would describe you as "whacko"? I think most of them.

    295:

    Apparently by total coincidence, the Roko's Basilisk problem appears in today's Two Lumps web comic (http://www.twolumps.net/d/20130304.html), and with cats.

    296:

    It's actually very simple, Phil. Yudkowsky a: has a fanclub and b: avoids making any actual arguments.

    It's clearest when they are getting into science. For example, consider his forays into physics. The guy clearly, glaringly, hasn't got the slightest clue about quantum mechanics or algorithmic complexity (or even how Turing machines work). Yet he writes quite ridiculously narcissistic essays which imply just how awesome he is for seeing that Bayes or algorithmic complexity somehow favour the many-worlds interpretation. How does he do this without making simple mistakes? Very, very simple. He never actually makes an argument that it does. He just has a fanclub of intellectually insecure people. And he makes it a signalling thing to, also, be smart enough to see how the many-worlds interpretation is favoured, without needing an argument. He also has people like you who will favourably interpret the argument he never made as being whatever complicated, still-undefined thing they can come up with AFTER having seen some argument to the contrary.

    In his forays into biology - the speed limit of evolution - not only does he make assumptions that are simply wrong outright (entirely ignoring sexual reproduction), he even fails to match a computational model of the grossly oversimplified case.

    297:

    Ohh and also: find a simple error in the Bogdanoffs' PhD thesis ( http://en.wikipedia.org/wiki/Bogdanov_Affair ). Hint: it is beyond any simple errors or obvious wrongness. It's not even obviously wrong - the next step after not even wrong.

    Charitable interpretation is easily exploited.

    298:
    The odds of finding a few hundred people on one blog capable of correctly analyzing and refuting Eliezer based on second-hand reportage are approximately zero.

    The reason being that a lot of his extraordinary statements are in essence vague and unfalsifiable hogwash.

    He often says things that contradict "common sense" and "what everybody knows", which strike people who don't think carefully about them as absurd.

    It is true that he created an argumentative framework that seemingly disqualifies many of the standard heuristics, such as that extraordinary claims require extraordinary evidence. Many people have a hard time seeing through such a complicated and long-winded rationalization of crazy ideas.

    299:

    You of all people should be more charitable. You describe LessWrong as "whacko". Yet how many people who read one of your books, with as little familiarity with the issues you address as you appear to have with LessWrong, would describe you as "whacko"? I think most of them.

    Oh boy.

    Do you realize Charlie is selling science fiction, not predictions of doom?

    300:

    I imagine Charles could have sold predictions of doom really well though, if he were into this sort of thing... it is scary to imagine just how much better than Ron Hubbard some actually good fiction writer could do.

    301:

    I'm not sure. Being evil may well be a talent unto itself.

    302:

    Well, I dunno. I think a great deal of evil arises from delusions of grandeur. Some people just feel that they are here to save the world. They band up together to stroke that feeling at everyone's expense, sometimes with very disastrous results.

    303:
    it is scary to imagine just how much better than [he not to be named, else there be lawyers upon ye] some actually good fiction writer could do.

    Actually, there is a paper in the "Marburg Journal of Religion" with a bibliography, where the author says some stories are quite good.

    http://www.uni-marburg.de/fb03/ivk/mjr/pdfs/1999/articles/frenschkowski1999.pdf

    "Though Hubbard in his fiction on the main is just a competent second rate author, he has written a few major items also from a more sophisticated point of view. Fear is such a piece, a tale about a man who does not believe in demons and encounters the demonic forces in himself. Stephen King called this one of the major weird fiction tales of the 20th. century, which indeed it is, especially by its imaginative use of the prosaic and its demythologizing of traditional weird fiction themes. I have reviewed it at length in Das schwarze Geheimnis. Magazin zur unheimlich-phantastischen Literatur."

    That being said, I guess religion junkies have some other scale for "good fiction"; whoever disagrees is invited to read a few books from the Left Behind series, or just one till the gag reflex sets in, if the reviews by reasonable evangelicals[1] are an indication.

    [1] No, that's not an oxymoron, just quite uncommon.

    304:

    "However, here is an objection to the Balisk which seems to greatly weaken its force for anyone (like many LessWrongians apparently) who believes the Many Worlds interpretion of quantum mechanics. If Many Worlds is right, then there are inumerable copies of me already out there: some in almost unimaginable states of bliss, some in ditto states of torment and unhappiness, and most in between"

    Looks like I'm not the only one to notice that.

    305:

    I wouldn't say I "believe that theory", at least to the exclusion of single-universe theories; I do consider it to be a valid possibility.

    306:

    Well, the main problem with MWI is that what you actually see in a double-slit experiment looks like this: http://en.wikipedia.org/wiki/File:Double-slit_experiment_results_Tanamura_2.jpg rather than a smooth, neat gradient. A theory of physics should, in the end, produce this sort of pattern of dots. This pattern requires that you single out a world. If you didn't have to produce dots, then MWI would be obviously simpler, but you have to produce dots. And if a theory is singling out a world, then the reality of what is not singled out and doesn't influence what is singled out is up to your stance on modal realism.

    307:

    I realized Less Wrong was somewhat off-kilter when I noticed that Eliezer apparently thinks that MWI is the only possible interpretation of quantum physics, because of Occam's razor.

    First, that is not how Occam's Razor works. Occam's Razor tells you the likeliest explanation, not the right explanation. E.g., the simplest explanation of how people understand what I just typed is that they're watching me press keys on my keyboard. The correct explanation, OTOH, is incredibly complicated, involving computational devices transmitting signals that encode text.

    Second, Rationality isn't supposed to be about what is likely to be true using philosophy, it's supposed to be about what actually is true.

    Third, while Occam's Razor is usually stated in the form of 'simplicity', that is not actually correct. ('God did everything' is pretty damn 'simple'.) The correct way to state it, hilariously, is 'Plurality is not to be posited without necessity' or 'Entities should not be added without need' or however you phrase it. So it's a little astonishing to use it to justify a theory that, uh, multiplies literally everything an infinite amount of times. MWI might be correct, but it's sure as hell not what Occam's Razor would lead us to think is correct.

    Fourth, uh, there are quantum interpretations that are simpler. Much, much, much simpler. Like relational quantum mechanics, which manages to completely do away with any collapse of the waveform at all. Hell, almost any interpretation of QM is simpler than MWI. The transactional interpretation, for example. That adds in retarded and advanced waves, but as those already exist in physics, it's not actually adding anything.

    Anyone who concluded that having an infinite number of unseen universes is the simplest explanation and thus needs no discussion is not actually a 'Rationalist'.

    And that's not getting into the completely insane AI stuff they keep talking about, where they hypothesize about AIs that are somehow smart enough to harm us while not connected to anything. Yes, really. Roko's Basilisk is just one of the many many many insane things about AI that show up there.

    It's goddamn paranoid nonsense. I believe that AIs are possible, even likely, and I think it would be smart, when we make them, to make sure they are non-hostile before we let them have any power at all. But they think that somehow AIs can trick people into releasing them because AIs can simulate people.

    In fact, a lot of the stupidity there comes down to the crazy idea that simulating people perfectly is somehow easy. And not by uploading, but by things that have only observed people from the outside! WTF sort of gibberish is that? I mean, I'm on the fence about whether uploading can work, and most of my fence-sitting is simply doubting we can do it fast enough or accurately enough...but if we can, I have no doubt that we could simulate a person via brute-force simulation of all the atoms of a person, and I suspect there's a lot of optimization we can do that would leave someone intact.

    But an AI cannot just imagine my neural processes and have that actually be an accurate representation of me. That's just batshit crazy.

    308:

    In fact, a lot of the stupidity there comes down to the crazy idea that simulating people perfectly is somehow easy. And not by uploading, but by things that have only observed people from the outside! WTF sort of gibberish is that?

    I've also had a problem with this. I see the basic reasoning: humans construct models of each other naturally, and the longer you know someone the more accurate the model is. Therefore any weak superintelligence is going to be better at doing this. But this is taken to an absurd extreme. If a weak superintelligence observed me as much as another person has, then it will probably have a better theory of my mind, as it will have observed more and made more connections between things I have done.

    But the idea that they could create a full model after short amounts of time or working off of minimal data is where it wanders into la la land IMO. The latter is especially true with the idea of ancestor worship. Barring access to something like a partial lifelog I simply can't see how any entity could simulate another from the pre-ubicomp/lifelog times.

    I once had it explained to me that the easy answer is that if there are 100 records from a person's life (marriage dates, purchases, writings etc) then an AI could simulate billions of entities trying out different things until they have one that does all of those 100 things exactly the same. This to me is nonsense because a) the variables of an individual life are huge, even if you propose mega-scale computers, and b) you can never know for sure that if there was an opportunity for a 101st record it would be the same.
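
    A back-of-envelope version of objection (a), with every figure an arbitrary placeholder rather than a real estimate: even crediting each record generously, 100 records constrain a vanishingly small fraction of the information needed to pin down one specific person.

        # Toy counting argument: how many distinct "candidate people" remain
        # consistent with 100 records, under invented numbers.
        bits_per_record = 20                # say a date, a purchase, a short sentence
        n_records = 100
        bits_to_specify_a_person = 10**15   # arbitrary stand-in for brain-scale information

        constrained = bits_per_record * n_records
        unconstrained = bits_to_specify_a_person - constrained
        print(f"candidates still matching every record: ~2**{unconstrained}")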

    309:

    The following idea from David Deutsch's The Beginning of Infinity might be relevant:

    Take a powerful computer and set each bit randomly to 0 or 1 using a quantum randomizer. (That means that 0 and 1 occur in histories of equal measure.) At that point all possible contents of the computer’s memory exist in the multiverse. So there are necessarily histories present in which the computer contains an AI program – indeed, all possible AI programs in all possible states, up to the size that the computer’s memory can hold. Some of them are fairly accurate representations of you, living in a virtual-reality environment crudely resembling your actual environment. (Present-day computers do not have enough memory to simulate a realistic environment accurately, but, as I said in Chapter 7, I am sure that they have more than enough to simulate a person.) There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?
    310:

    Also see e.g. the following comment from a LessWrong member:

    My own guesstimate is that, conditional on FAI being achieved in the next 100 years (so enough information is preserved to make them relatively easy to resurrect), and conditional on it being able to use mass utterly efficiently for computation, then there is probably as great as a 40% chance that those alive today but dead before FAI is created would eventually be resurrected with enough fidelity that they couldn't themselves tell the difference.
    311:
    The correct way to state it, hilariously, is 'Plurality is not to be posited without necessity' or 'Entities should not be added without need' or however you phrase it.

    I think they believe that this does not affect MWI because of a distinction between belief in the implied invisible and belief in the additional invisible (if P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X)). See e.g. here.

    312:

    The notion that Occam's razor favours MWI is sort of halfway defensible (in the same way that Occam's razor favours an enormous number of other stars rather than a sky dome with fake images). The pseudoscience is in the allusions to formal notions of complexity, such as Kolmogorov complexity. It so happens - serendipitously - that formal notions of complexity do not even accommodate explanatory interpretations like MWI.

    Kolmogorov complexity in a nutshell: suppose you hold a programming contest for the shortest program that replicates the recordings from a long run of a double-slit experiment. In the recordings you would have individual flashes of light; the best programs compress the locations and timings of these flashes most efficiently. MWI doesn't even qualify for the contest because it predicts all patterns of flashes (their probability density, to be exact) without picking a specific one. If the contest were ranked by amateur philosophers rather than formally, MWI might have won regardless.
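
    A rough way to see what "ranked formally" would mean in practice (Kolmogorov complexity itself is uncomputable, so the Python sketch below uses zlib-compressed size as a crude upper bound; the toy "interference pattern" is invented): the score attaches to one specific record of flashes, not to a description of the whole distribution.

        import math, random, zlib

        random.seed(0)
        # Invented stand-in for an interference pattern: relative probability of
        # a flash landing in each of 100 detector bins.
        weights = [math.cos(3.0 * x / 10.0) ** 2 for x in range(100)]
        # One specific recorded run of 5000 flashes; this is what contestants
        # would have to reproduce exactly.
        record = bytes(random.choices(range(100), weights=weights, k=5000))

        print("raw record size:       ", len(record))
        print("compressed upper bound:", len(zlib.compress(record, 9)))  # crude bound on K(record)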

    313:

    I once had it explained to me that the easy answer is that if there are 100 records from a person's life (marriage dates, purchases, writings etc) then an AI could simulate billions of entities trying out different things until they have one that does all of those 100 things exactly the same. This to me is nonsense because a) the variables of an individual life are huge, even if you propose mega-scale computers, and b) you can never know for sure that if there was an opportunity for a 101st record it would be the same.

    This reminds me of Greg Egan's short story "Steve Fever". There's an online copy in this issue of MIT Technology Review.

    314:

    Resurrection of people from their post history and the like is utterly ridiculous. The way known physics works, there's an immense number of possible people, unimaginably larger than the number of possible texts a person could write in a lifetime. It follows that a lot of people map to one post history.

    I guess they imagine something like simulating the universe from the big bang to now and then picking you out. First off, the way physics works, if you did that, within the margin of quantum uncertainty you'd obtain every possible person (inclusive of aliens). (It ought to be particularly clear if you like the many-worlds interpretation, where those alternative possibilities given by the theory are deemed equally real.)

    315:

    Keep in mind they have an interesting way of putting probabilities on things. He'd say 50% sure if he didn't know anything about it.

    316:

    You mean Resurrection of the DEAD as being 'utterly ridiculous'? Well, maybe, but that is in the realms of the un-provable. You'd need more faith-based belief than I have to be either a True Believer of The Religious Persuasion of your Choice or an Absolute Atheist of the GOD IS DEAD variety. I haven't the slightest notion of who might be right or wrong in this kind of argument but rather take the view that however long we might live it won't be long enough.

    Even given a radically different approach to human mortality, discovered tomorrow, I am unlikely to survive the death of the Sun - let alone the entire physical universe as we presently understand it - in any form that I would presently recognise as being Human. I barely remember what I felt like or experienced when I was aged 10 years old, so how the hell I could be said to be the same being if and/or when I reach, say, 500 or 1000 or even 10,000 years old Cthulhu alone knows.

    So, I take the view that, What the Hell, we will find out soon enough.

    In the meantime, and on the "My Old Man's A Dustman" principle of "when you get my age it 'elps to pass the time"...

    http://www.youtube.com/watch?v=Ej-CZniXdqE

    Did you see the film "Lincoln"?

    " Lincoln is a 2012 American historical drama film directed and produced by Steven Spielberg, starring Daniel Day-Lewis as United States President Abraham Lincoln and Sally Field as Mary Todd Lincoln.[4] The film is based in part on Doris Kearns Goodwin's biography Team of Rivals: The Political Genius of Abraham Lincoln, and covers the final four months of Lincoln's life, focusing on the President's efforts in January 1865 to have the Thirteenth Amendment to the United States Constitution passed by the United States House of Representatives."

    Daniel Michael Blake Day-Lewis (born 29 April 1957) is an English actor with both British and Irish citizenship. " ...and he was absolutely brilliant in his depiction of Lincoln...not a resurrection of the man himself of course, but, but...let's take his appearance and mannerisms as Lincoln as a template, sit that on top of some future intelligent system with as much information as can be researched, and then .... SELL it as a Special Political Adviser interactive Super Intelligent System...not the Real Thing? Who cares, as long as it works as an Advisor...? In future we are going to see competitions/Oscars equivalents for the most convincing AI personality...in the meantime...

    “Jobs Basingstoke & Deane Borough Council Position:

    Political Assistant (Conservative Group) Summary

    Do you have political or local government experience? Are you confident dealing with the press and providing political advice to senior local politicians? Are you a member of the Conservative party or do you share the party’s beliefs and values? Description:

    You will provide a high quality political research and intelligence service to the Conservative Group Leader and the Administration of 29 Conservative councillors at Basingstoke and Deane, as well as support for the development of policy initiatives. You will also plan agendas, attend and minute meetings and liaise with outside bodies and press on behalf of the Conservative group. You will build relationships with the press and proactively ensure that the Group’s messages are developed and communicated widely. At the same time, you will ensure that key decisions from Group meetings are swiftly implemented and proactively identify issues of political interest, whilst drafting speeches for local councillors to deliver at high profile forums..... "

    Reply ... Applicant's NAME: Lincoln, Abraham...Age.....etc, etc ....

    317:

    There's always the Grinders, but they go pretty far in the opposite direction (jumping onto not-ready-for-prime-time body hacks). For instance, the big thing in the grinder community right now is implanting magnets into your fingertips as a kind of cheap, batteryless sensor array for electromagnetic fields.

    318:

    I guess they imagine something like simulating the universe from the big bang to now and then picking you out. First off, the way physics works, if you did that, within the margin of quantum uncertainty you'd obtain every possible person (inclusive of aliens). (It ought to be particularly clear if you like the many-worlds interpretation, where those alternative possibilities given by the theory are deemed equally real.)

    This sounds akin to asserting that it's possible to break a one-time-pad encryption by simply trying all possible combinations, because you know 'roughly' what the message said. Uh, yeah, you'd get a result. In fact, you'd get every result, and you could, indeed, filter out a lot of them because they clearly aren't the right message. You'd have a hell of a lot of possibilities left over, and still not actually be closer to a solution.

    More to the point, that's not possible to do anyway in any sort of finite time. For people who are running around promoting MWI, they really don't seem to grasp the numbers that are being talked about. It's one thing to run a physics simulation of a 'person', which I believe could be done in the future. It's another to run a simulation of all possible outcomes. Holy crap. It's...uh...O(infinite) or something.

    And this is getting into a metaphysical stupidity. Physically identical duplicates are me, fine, and I can care about them. And it doesn't matter if 'my' atoms are real or simulated, I am me, yes. I don't want someone uploading me and then torturing me. We've all read Permutation City.

    But things that have none of my actual memories or thoughts and simply output the same things as I'm recorded to have outputted in certain specific situations? And that output could, at best, be 0.001% of my actions? Really? Are you fucking kidding me? That seems akin to worrying about my mistreatment in someone else's dream. I'm not that guy at all.

    319:

    I think they believe that this does not affect MWI because of a distinction between belief in the implied invisible and belief in the additional invisible (if P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X)). See e.g. here.

    If that was actually what they used to justify why MWI must be true, that would be stupid. However, I don't actually think it is. When I read it, it was some sort of stupid nonsense about how 'If we'd thought of MWI first, we never would have come up with any others', which is just idiotic.

    See, the thing is, they actually do understand Occam's Razor. And that article is correct in explaining it...once you've accepted multiple universes, it doesn't matter if there is one or an infinite number of them, they are all equally complicated. The solution that Occam's Razor leads to is the one with fewer rules, not fewer things.

    The problem is accepting that there is a universe we can't see to start with. It's like the difference between asserting that one elf keeps stealing my car keys and asserting that five elves do. Both are equally bad compared to the simplest explanation, which is that I didn't keep track of where I put them down. The five elves aren't the problem; any elves at all are the problem.

    Likewise, MWI is worse (under Occam's razor) than postulating that quantum states are relative to the observer (Relational quantum mechanics), which doesn't add any rules except 'This measurement we thought was objective (the eigenstate) is actually subjective, and different observers can measure different ones', or that quantum interactions are via a standing wave composed of retarded and advanced waves (Transactional interpretation), which uses the already existing and proven rules of Maxwell's equations in a slightly different way.

    Again, this doesn't make one of them true, but it certainly makes them simpler, rule-wise, than MWI, which postulates the creation of universes that interact with us for a split second before vanishing, which raises all sorts of side questions. What happened to conservation of mass and energy? What causes the split? Etc, etc. Yes, I am aware those concerns have been addressed, but my point is that these are more, tada, rules.

    I've actually come to suspect that the only interpretations Eliezer has heard of are MWI, Copenhagen, and von Neumann. (And von Neumann is just blatantly stupid, and Copenhagen is an 'interpretation' only in the MST3K sense that it tells people to stop worrying about what it means and just use the math.)

    Which is fine, no one is required to understand all quantum interpretations (I certainly don't.) Except he shouldn't be standing there proclaiming from the mountaintop. Especially as this then immediately veers off into weird crazy territory about being future-tortured and other nonsense.

    320:

    Resurrection of the dead per se is mysticism, it's not disprovable. Resurrection that is justified with allusions to known laws of physics and the like, that is actual pseudoscience.

    thevadiv: Well, they allude not to Occam's razor but to formalized Occam's razor, like Kolmogorov complexity. Once again leaving the realm of arguable and entering the domain of being completely wrong.

    322:

    Sigh...that old thread. I am reluctant to read my old comments. That post was basically my reaction to the Roko incident, although for reasons of censorship I had to circumvent direct criticism of the idea. The real title of the post is "WTF is wrong with you people??? Why would I believe this insane bullshit?".

    And he misunderstood my remarks about MWI. I didn't even try to criticize MWI but to highlight the general tendency of LessWrong to infer highly specific, conjunctive, non-evidence-backed conclusions from other purely inference based speculations.

    At that point I knew very little about that community and their beliefs. I quickly learned that they had already rationalized some sort of excuse to never provide extraordinary evidence.

    323:

    Well, they allude not to Occam's razor but to formalized Occam's razor, like Kolmogorov complexity. Once again leaving the realm of arguable and entering the domain of being completely wrong.

    Indeed. The thing about complexity that they appear to have missed is that the complexity of describing something is entirely dependent on the words and vocabulary you have available to describe it. And, in fact, the simplest way to describe quantum physics is already known...it's called 'math'. What we're trying to figure out is what that math means.

    To quote a comment in that article (Glad I finished reading that entire thing before posting.): 'For readers of this site who believe that questions like this should be resolved by a quantified Occam's razor like Solomonoff induction: in principle, your first challenge is just to make the different theories commensurable - to find a common language precise enough that you can compare their complexity.'

    As for the other article, I started laughing when someone commented, in all seriousness, that MWI is the only quantum interpretation to take Schrödinger's equation 'literally'. a) no it isn't, Copenhagen takes it pretty 'literally', b) asserting that multiple solutions to an equation should 'literally' be interpreted as multiple universes results in all sorts of insanity in math (My house is 2000 sq ft. There is no universe where it is negative 25 feet wide by negative 80 feet long.), and, c) uh, the transactional interpretation is, under that logic, the only quantum interpretation that takes Maxwell's equations 'literally' (in that it says the 'advanced waves', which physics normally just throws away, are real), so why exactly are we prioritizing Schrödinger's as the equation to 'make real'? (And why do we care? The question isn't whether it matches an equation, the question is whether it matches observed reality! Which all interpretations do.)

    What I find exceptionally insane is the idea that Eliezer has apparently decided that MWI is correct based on probabilities. No. Just no.

    Occam's razor, in science, is not actually used to figure anything out. Scientists do not sit down and determine that explanation X is simpler than Y, and thus X is 90% likely, and Y only 10%, and thus X wins and the issue is settled. That is completely fucking nuts and not how science works.

    In science, Occam's razor is mainly used just to filter out nonsense when there already is an accepted theory to explain what is going on. You can't use it to assign probabilities between competing theories, and you can't then use those probabilities to just decide something is correct.

    Occam's razor is just a trick to keep from having to check if hoofprints were made by unicorns. It is basically asking 'Instead of adding unicorns, could this be explained by stuff we know exists, like perhaps horses?' It has no place whatsoever in being used in a petting zoo with two zebras and seven horses to argue that the hoofprints 'must' have been made by horses, and thus all scientific inquiry into who made the hoofprints is settled. (And this is being generous and assuming that MWI actually is simpler than anything else, which it's really not. So it's more like arguing horses with eight zebras and seven horses.)

    And the thing is, these guys are supposed to be 'rationalists', which is basically claimed to be a sort of 'super scientist' who uses science not only in science, but in life itself. And then the entirety of their work appears to be assigning completely random probabilities to things on no basis whatsoever, including assigning them to the actual laws of physics!

    324:

    Well, the interesting thing for me there is EY's insistence that if you didn't convert to MWIsm after reading his blog posts on the subject, it must be because you're too dumb. They have a very ridiculous form of elitism... it's like Mensa, if Mensa didn't use IQ tests but instead graded you by how much your views are swayed by their newsletter.

    325:

    Indeed. The thing about complexity that they appear to have missed is the complexity of describing something is entirely dependent on the words and vocabulary you have to describe it.

    Precisely. This also goes for Kolmogorov complexity.

    Sidenote: there are some comparisons that are eventually independent of the choice of language. Consider a coin that just keeps landing heads every time you toss it. There's the theory that the coin always lands heads (e.g. it's a magnet and there's another magnet under the table). There's the theory that the coin tosses are random. If you make both theories match the actual observations made so far, the former theory stays constant in size whereas the latter theory grows by 1 bit with each toss. Somewhat less obviously, the same works if you were to perform a double-slit experiment, letting you eventually get rid of the theory that photon blips on the screen are random, in favour of a theory that produces the correct probability distribution. This is algorithmic information theory, which was studied by Solomonoff, Kolmogorov, Chaitin, Hutter and others.
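
    A rough sketch of that minimum-description-length comparison, using a deliberately crude encoding (the byte counts for the fixed parts are arbitrary stand-ins, not anything canonical):

        # Theory A ("the coin always lands heads") predicts the data outright, so
        # its description stays a fixed size. Theory B ("the tosses are random")
        # has to carry the observed outcomes as well -- one extra bit per toss --
        # to reproduce them verbatim, so its description grows without bound.
        def description_bits(n_tosses):
            theory_a = 8 * len("always heads")        # fixed-size stand-in
            theory_b = 8 * len("random") + n_tosses   # fixed part + 1 bit/toss
            return theory_a, theory_b

        for n in (10, 100, 1000):
            print(n, description_bits(n))   # A stays at 96 bits; B overtakes it and keeps growing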

    That is completely fucking nuts and not how science works.

    Haha, he knows that.

    326:

    Haha, he knows that.

    Wow. Exactly. That's exactly what I said. He seems to have no idea that any alternatives to MWI except Copenhagen and (what I think is) von Neumann exist.

    Now, if someone wants to rant about Copenhagen, I'm right there with them. Copenhagen is just stupid. Sure, we're measuring stuff, but it's not really real somehow. Copenhagen is a surreal attempt to draw a line between 'the real world' and 'quantum physics'. No one takes it seriously.

    Yes, a lot of scientists claim to subscribe to Copenhagen, by which they mean 'I understand quantum theory, but I don't actually know what the hell is happening down there, but it is not actually important to my work.'

    And von Neumann (I think that's what he's talking about with 'don't have a special exception for human-sized masses', although it's possible he's still talking about Copenhagen) has never been taken seriously.

    But the entire idea that you can figure out which interpretation of something (all of which predict identical things) via statistics is just completely crazy. Not even super-duper-magical Bayesian statistics.

    Sidenote: there's some comparisons that are eventually independent from the choice of the language.

    Yeah, I know that slightly. I enjoy information theory.

    The problem is, of course, that no one has, and it's entirely possible no one can, encode what any sort of quantum interpretation means. They can encode the math, of course, but the entire premise of quantum interpretations is that it's the same math.

    327:

    I ... [was trying] to highlight the general tendency of LessWrong to infer highly specific, conjunctive, non-evidence-backed conclusions from other purely inference based speculations.

    They do like their chains of deductive logic, don't they?

    The longer the better, and the more unlikely (to mere intuition) the results, the more gloriously rational must be the rationality!

    IOW, really smart folks (including, a fortiorissimo, future superintelligences) can close their eyes and stop up their ears (or, you know, just stuff their heads up their a**es) and ratiocinate their way through life, dammit!

    Whereas bear(er)s of little brain (like the drudges doing real science, alas) have to sneak a peek once in a while to make sure they aren't about to blow up the lab (or, even worse, submit obvious nonsense to peer review).

    (I wonder if the computer programmers among them try to program that way, too. Like, you know, as if coding that way were a virtue.)

    328:

    I don't see what's bad about Copenhagen, tbh. The way induction is done, the theory must produce data matching the observations verbatim. I.e. you see a point on the screen, so the theory must produce a point on the screen. Copenhagen does just that: it calculates the wavefunction, 'collapses' it to obtain a point, and notes that this in no way implies that this is how the universe did it (we have no clue how the universe does this; maybe it somehow processes huge random bit strings, obtaining the same probability distribution in the end without ever calculating a probability density function).

    It is commonly seen as fundamentally impossible to figure out what reality "really" is from the sense data, because multiple possible underlying realities correspond to the same sense data, and the preference among many of those is purely subjective. The question of what reality "really" is, is often seen as a wrong question. Hence non-interpretations.

    329:

    Copenhagen is not 'bad' if you're just trying to get things done. It's perfectly useful for that. And most people who work with quantum physics are just trying to get things done, and no more care about what's really going on than engineers care about the curvature of space when dealing with gravity.

    It is commonly seen as fundamentally impossible to figure out what reality "really" is from the sense data, because multiple possible underlying realities correspond to same sense data, and the preference among many of those is purely subjective.

    No, which interpretation is true is seen as fundamentally meaningless. Which it is, for 99.999999% of quantum work.

    However, there are things in quantum interpretations that should produce testable results. In fact, recent experiments with Bell's theorem have ruled out certain interpretations with local hidden variables.

    No one has quite figured out how to do that for the currently suggested interpretations, but there's nothing that says we won't eventually. To be clear, it's not the interpretation that's tested (all of them have identical math), but a property of reality. Different interpretations operate in differing versions of reality, and knocking out versions of reality as being possible knocks out interpretations. E.g., if we could demonstrate that reality was local, it would screw up the transactional and Copenhagen interpretations.

    We haven't figured out how to test this yet, but nothing says we can't. And it's not purely subjective; there are actual implications to different interpretations. For example, the transactional interpretation allows (in fact, requires) FTL signaling, and there appears to be no reason we couldn't also use advanced waves to do the same thing. And, surreally, the relational interpretation, while not introducing any sort of FTL signaling, manages to figure out a way around the relativity problems of FTL communication and completely erases the possibility of paradoxes, because nothing is actually ever fixed. And Many Worlds has a whole bunch of weird implications. (Although Roko's Basilisk is not actually one of them. In Many Worlds, in fact, everyone is immortal, as in, there is logically at least one universe where they never die.)

    And some people will point out that those are science fiction ideas, to which I reply 'Man, those whole laser and semiconductors things sure were weird science fiction implications of quantum theory, weren't they?'
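
    To put a number on the Bell's theorem point above: any local hidden-variable model caps the CHSH combination of correlations at 2, while the standard quantum prediction for a singlet pair reaches 2√2 ≈ 2.83, which is the gap those experiments keep confirming. A minimal sketch, using the usual textbook detector angles:

        from math import cos, pi, sqrt

        # Quantum prediction for singlet-state spin correlations at detector
        # angles a and b: E(a, b) = -cos(a - b).
        def E(a, b):
            return -cos(a - b)

        a1, a2 = 0, pi / 2            # Alice's two settings
        b1, b2 = pi / 4, 3 * pi / 4   # Bob's two settings

        S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
        print(abs(S), 2 * sqrt(2))    # ~2.828 in both cases; local models cap |S| at 2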

    330:

    Locality doesn't rule out Copenhagen. The collapse is not deemed to be a physical process of destruction of some wave that is really out there. It's an operation for converting those weird quantum amplitudes, which may well not describe reality, into observations.

    http://en.wikipedia.org/wiki/Principle_of_locality#Copenhagen_interpretation

    As for the testing, Bell's theorem didn't rule out any actual interpretations, it just ruled out a class of possible theories of how it 'really' works. And no, there are no implications for exact interpretations of QM (afaik the transactional interpretation is an exact interpretation). None whatsoever. Every observable outcome would be identical, unless of course someone screwed up their math and it's just garbage. The inexact interpretations are another matter; those are actually distinct physical theories, typically very ill justified, and thus very unlikely to be confirmed.

    If there's new physics violating QM (which is of course on the table because of the problems integrating general relativity with QM), then there would be a new theory with a new set of interpretations, some of which may resemble interpretations of QM.

    331:

    I stand corrected about Copenhagen, but proving reality was local would still screw up transactional, so hopefully my point is understood. ;)

    Every observable outcome would be identical, unless of course someone screwed up their math and it's just garbage.

    Either I'm confused, or you missed something I said. My point was that the interpretations are indeed mathematically identical, so you can't test them. (The ones that aren't mathematically identical are usually rapidly disproven.)

    However, they do make statements about reality: not about quantum outcomes, but about the actual composition of reality, and those statements can, in theory, be disproved in other ways. For example, if we discover any sort of backwards-in-time interaction, either at the quantum level or otherwise (which, we should remember, relativity allows, if only via giant rotating black holes), that would pretty conclusively screw up Many Worlds, because having 'the future' interact with the past at all makes no sense under Many Worlds, from what I can tell.

    Likewise, proving locality would screw up various interpretations, coming up with a way to measure a waveform would screw up various interpretations, etc.

    You can't prove or disprove the well-accepted quantum interpretations from within quantum theory, but that doesn't mean you can't look at what they are saying and figure out a way to disprove that outside of quantum theory. Quantum theory, after all, usually ends up falling apart around the macroscopic level.

    It is, however, possible I'm overly optimistic about this. The most obvious test I can think of requires backwards-in-time signaling! Which is, uh, very hard. Likewise, you could do some interesting tests if you could limit the interaction between the observer and the observed, like putting the observer inside a black hole. And good luck with actually doing that.

    332:

    Wait, why should I care if I'm simulated after I die?

    333:

    Because the simulation is so perfect, so exact, that it actually is you.

    Yeah, I know, this involves measuring you here and now at atomic accuracy, to a level that not only breaks the Heisenberg Uncertainty principle but that drives a coach and horses over the fragments and then gathers the splinters up and uses them as tinder for building a bonfire. But ignoring those petty practicalities, it's going to happen because, well, because someone could imagine it happening and therefore it must be inevitable.
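
    To put a rough number on the Heisenberg part (a crude back-of-the-envelope estimate, nothing more): pinning down even a single proton to atomic-scale precision forces a large minimum spread in its momentum.

        # Heisenberg bound: dx * dp >= hbar / 2. Fix the position of a proton to
        # about one angstrom and see what velocity uncertainty that leaves.
        hbar = 1.054571817e-34       # J*s
        m_proton = 1.67262192e-27    # kg

        dx = 1e-10                   # ~1 angstrom position uncertainty, in metres
        dp = hbar / (2 * dx)         # minimum momentum uncertainty, in kg*m/s
        dv = dp / m_proton           # corresponding velocity uncertainty, in m/s

        print(dp, dv)                # roughly 5e-25 kg*m/s and ~300 m/s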

    334:

    I'd say that this imagining tells us more about the imaginer's lack of understanding of Quantum Mechanics than about how much we should or shouldn't care about it happening.

    335:

    Ahh, we are essentially in agreement then. I don't see, though, how locality would affect any exact interpretation, insofar as locality can be inferred at all (it sounds like an attempt to prove an absence). If there were a non-local interaction, it would knock the support out from under everything that's motivated by locality, though.

    The way I see it, any new physics necessitates different laws of physics from what we know, with a different set of interpretations, some of which can resemble current interpretations of QM; and some current interpretations of QM may end up having no counterparts among the interpretations of the new, updated physics, thus being laid to rest. (However, frankly, I don't think that is going to happen; you can always patch stuff up. E.g. suppose new physics described gravity in terms of collapse, somehow - essentially, sufficiently large masses over sufficiently long times cause collapse. Rules out many worlds? Not so fast; you could probably re-frame it as gravity forking the observer, in a way that's even more un-testable.)

    Overall, the way I see it, QM is our recipe for computing it; interpretations are alternative recipes; and it may well be entirely meaningless to ask how the universe computes it. That looks like residual theism. The universe just is.
