
Cytological Utopia and the rapture of the eukaryotes

(I'm back from Balticon and blogging again. Sorry about the hiatus!)

If you're a regular on this blog you're probably more than a little familiar with the Rapture of the Nerds (as Ken MacLeod calls it)—the particular variety of eschatological singularity initially proposed by Russian Orthodox theologian Nikolai Fedorovich Fedorov and championed more recently by Raymond Kurzweil, which posits that the great destiny-determining task of humanity is not space colonization or achieving immortality, but the "great task": the Religion of Resuscitative Resurrection. In Fedorov's view it's our duty not only to transcend human limits but to strive to bring about the resurrection of everyone who has ever lived, or who might have lived.

You're probably also familiar with the Simulation Argument, originally proposed by philosopher Nick Bostrom and most recently discussed in public by Elon Musk; that in a future-unbounded cosmos there can only be one first time for everything, including intelligence, but subsequently any number of intelligent civilizations might want to introspect about their origins by running simulations of their predecessors, and so we are almost certainly trapped inside a giant game of Minecraft.

This kind of intellectual masturbation is pleasurable but ultimately unproductive insofar as we can't—at least from here—examine either our own future outcomes or the context in which our universe is embedded. (Hence Musk's very wise decision to ban the subject from hot tub conversation.) It is, however, seductive: what could feel better to the self-consciously modern mind than an enlightenment age origin story that scratches the afterlife itch without requiring us to commit to evidence-free belief in an invisible sky daddy who created pions and potatoes but is bizarrely obsessed with our genitals? And so, it spreads: a modern secular religion that echoes the design pattern of Christian millenarianism, with its afterlife and heavenly happy-talk ...

But I have had an annoying question.

Let us posit for the sake of argument that one or the other case is actually true; that either we are living in an afterlife sim or that our descendants are going to colonize the universe, achieve immortality, and resurrect us all:

Who, in this thought-experiment, qualifies as "us"?

Greg Egan took a cold-eyed look at the essential immorality of trying to develop a human-equivalent artificial intelligence using genetic algorithms in his story Crystal Nights. (Briefly: evolution works, but it's inherently wasteful, and when we're talking about evolving an intelligence that can pass some sort of fitness test, we're actually talking about committing genocide against all the versions we spawn prior to the one that finally passes our arbitrary test.) In the field of Christian apologetics, theologians and philosophers have spent centuries tying themselves into knots trying to explain why God created suffering. A reductio ad absurdum of the problem of suffering features in Ursula K. Le Guin's short philosophical fiction, The Ones Who Walk Away from Omelas—if you can have a utopia for all but one, but the price is that one person must suffer on behalf of everyone, is it justifiable? More to the point, we can generally agree that inflicting suffering on sentient individuals is morally wrong: a predisposition towards fairness-favouring behaviour appears to be hard-wired into primates in general, and inflicting suffering on another individual for our own benefit goes contrary to that. (Yes, I realize I'm making an evo-psych argument here. Please bear with me.)
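
(For anyone who hasn't met one: a genetic algorithm's inner loop makes the wastefulness painfully concrete. Here's a toy sketch in Python, purely illustrative, with a made-up fitness function standing in for whatever arbitrary test we set; note that everything below the cut-off is simply thrown away, generation after generation.)

    import random

    def fitness(candidate):
        # Hypothetical stand-in for "can it pass our arbitrary test?" --
        # here it's just how close a bit-string is to all-ones.
        return sum(candidate)

    def evolve(pop_size=100, genome_len=32, generations=50):
        # Random initial population of candidate "minds" (bit-strings).
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        discarded = 0
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            survivors = ranked[:pop_size // 10]          # keep the top 10%
            discarded += len(ranked) - len(survivors)    # everyone else is culled
            # Refill the population with mutated copies of the survivors.
            population = [[bit ^ (random.random() < 0.01)
                           for bit in random.choice(survivors)]
                          for _ in range(pop_size)]
        return max(population, key=fitness), discarded

    best, discarded = evolve()
    print("best fitness:", fitness(best), "of a possible 32")
    print("candidates discarded along the way:", discarded)

(If each of those discarded candidates were a mind rather than a bit-string, that discard counter is Egan's body count.)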

So: we're primates, we have an innate sense of fairness towards others, and evolution is messy. Let us assume we're in the position of the lucky, lucky executors of the Fedorovite final common task of humanity: to coordinate the resurrection of the dead. Where do we stop?

Do we resurrect all previous non-Evil individual instances of H. sapiens sapiens—no Hitlers wanted in heaven—or do we resurrect everyone? Surely this is fairly clear-cut? (If you believe anyone can achieve redemption eventually, you resurrect everyone; if not, not.)

But wait! Modern humans have inherited Neanderthal genes. If we're interfertile we're barely a separate species: surely this means we have to resurrect all members of H. sapiens neanderthalensis as well?

It's kind of hard to extract DNA from really ancient bones, but one may presume that Neanderthals and Denisovans and other hominins have a considerable number of points of similarity. If we're going to resurrect severely developmentally impaired specimens of H. sapiens sapiens can we justify not also resurrecting viable Gorilla beringei and their kin? Bear in mind that a whole bunch of primates pass cognitive tests for self-recognition in a mirror. Does this mean they're enough like us to deserve an afterlife? Is self-recognition/consciousness of identity a necessary, or a sufficient, justification for the rapture of the higher mammals?

But hang on a moment—what about the --dinosaurs-- corvids? If they can make tools and plan complex tasks, communicate with their peers, distinguish between individuals of a different species, and retain memories of their distinctive behaviour across the years, doesn't that make them worthy too? Surely the precautionary principle suggests that when in doubt, we should resurrect. And there's some evidence that even rodents exhibit empathy. Just because we don't notice the properties of consciousness in others, it does not follow that they are absent: so we should resurrect the lot and let the god we are trying to create sort them out.

So far, I've had to extend the Noah's Ark of the resurrection from the 144,000 Anointed of the Jehovah's Witnesses to pretty much the entirety of classes Mammalia and Aves, going from a coracle to a supertanker ... but I'm not done yet.

Humans form superorganisms, hives of people that come together to achieve certain tasks and perpetuate themselves. But we are also superorganisms in our own right: each of us contains a myriad of cells. Our own cells (as opposed to the prokaryotes of our gut ecosystems, which outnumber us a hundred to one) are generally descended from one or two fertilized ova by way of pluripotent stem cells which differentiate and specialize, forming distinct tissue subtypes fine-tuned to fulfill certain functions without which the other cells in the ensemble can't live independently. Sometimes one of these cells reverts to its pluripotent state and de-differentiates, throws off the fetters of responsible cellularity, and begins to spawn with haphazard abandon: we call these "cancer cells", but there's an argument that they're merely eukaryotes reverting to their primitive origins and doing what comes naturally. Eukaryotic cells are themselves complex superorganisms with intracellular organelles and structures that perform specialized functions—in some cases these were formerly free-living prokaryotic organisms in their own right that have become obligate endosymbionts (such as mitochondria or chloroplasts).

If we resurrect all human beings we are implicitly resurrecting all human-derived superorganisms. And if we resurrect all human beings we can see the precautionary principle leading us to resurrect all animals. Is there any reason not to extend the process down the hierarchy of superorganisms to include the entirety of the domain Eukaryota? There is evidence that some eukaryotic cells (specifically neurons) may contain internal fine-scale activities that affect synaptic processing: in the absence of a disproof of the Orchestrated Objective Reduction hypothesis, surely we can't stop the resurrection at cockroaches!

This is an essential conundrum at the heart of the Fedorovite/Kurzweilian theory of the singularity as afterlife: where do you draw the line between consciousness and unconscious life, and then between life and un-life? Worse: it stacks the deck for the simulation hypothesis with additional unwelcome abstraction layers, multiplies the problem of suffering unimaginably (think of the enormity of the crime implicit in resurrecting all the lions and all their prey), and bloats the minimum viable capacity of the computing substrate required for a successful ancestor simulation to the point where it may not be possible within the physical constraints of a material universe.

(And this is where my next non-series SF novel will probably come from ...)

318 Comments

1:

Here's another question. Can we resurrect everyone?

Short of the device in Clarke and Baxter's The Light of Other Days, can we resurrect more than a few specimens of Neanderthals? Where would we find the DNA to do so?

This would be the same problem with the gut bacteria. If you go down to a fine enough resolution, the distribution of gut bacteria is unique to each individual, and each stage of development.

This brings up another question. Your personality has changed compared to 10 or 20 years ago. Is your 5 year old self a separate person? Your 10 year old self? What if you suffered a trauma such as abuse when you were 20? Are the pre-abuse you and the post-abuse you different people?

2:

Since Fedorov's proposal (as quoted by Charlie) is

to strive to bring about the resurrection of everyone who has ever lived, or who might have lived.

you don't need that many Neanderthal specimens to work from, because reproducing an individual's DNA is of limited value. What you want is sufficient samples to determine the complete possibility space of Neanderthal DNA, so that you can build every genetically plausible individual. Given the reasonable expansion in Charlie's original post, you actually probably wind up determining the complete possibility space of earth-style DNA-based life; you're going to be "reconstructing" entire ecosystems that never existed on the basis of theoretical plausibility.

3:

With regards to "Is your 5 year old self a separate person?", that's just the tip of the iceberg. We know that epigenetic factors and other environmental considerations can have a huge impact on physical growth as well as personality issues. Consider current me, and then an alternative current me born under different conditions, or to parents who had undergone different environmental stresses (thus leading them to behave differently during my childhood).

Never mind 5-year-old-me, what about me who was born six months earlier (or later), or who lived through a (counterfactual, for our reality) nuclear winter, or global famine. Suppose there's a significant difference in outcomes depending on whether the famine started when I was five or when I was six - shouldn't you simulate both? (Of course, you have to simulate both to a fair accuracy in order to know how different they are anyway.) What if my parents had moved into house B when I was young, instead of house A? Or if someone different moved in next door to us (or to my grandparents, or...).

Hence, while simulating everyone who ever did live requires collecting evidence that probably isn't available, trying to simulate everyone who ever might have lived almost certainly exceeds the computational capacity of the universe.

[Sorry for the twin posting, I hit the wrong button!]

4:

The entire idea of simulating people using their DNA is ridiculous. Clone someone from me and raise them outside my 60s-70s Irish family and neighbourhood and you wouldn't get me - you'd get something like a separated twin. Like me in some ways, not in others.

It's a stupid idea from the outset.

5:

Forget DNA: try exploring the phase space of all possible variants on the human connectome instead!

Now that's what I call a reductio ad absurdum!

6:

Just to make it worse - Charlie, you wrote about "super-organisms". What are civilizations other than superorganisms? Does this mean we need to recreate all human civilizations (including Nazi Germany? Including the Spanish Inquisition, which none of you ever expected....)?

Or how about postulated civilizations or persons? I'd like to talk to C'mell, and Strider, and I REALLY want to talk to Dick Seaton (and the Norlaminians, which sounds like a group....).

mark, who wants to be Dick Seaton when he grows up

7:

Sure civilizations are superorganisms! Or maybe meta-superorganisms made up of component entities above the level of individual humans but below the level of the civilization as a whole -- multiple nations, for example, or the RSPCA.

The question of which civilizations to resurrect of course gets us right back to the problem of suffering, doesn't it?

8:

(Correction, according to this we probably have about the same number of bacteria as human cells. To within the nearest billion or so anyway)

9:

Another question is what does simulation accomplish.

Say that there is a cellular automaton in which sentient organisms can exist. (Conway's game of life, for example, is Turing complete, so whether this is possible depends on whether you believe any computer can be sentient). There is a definition of state, and rules for evolving the state. So you pick an initial state, run the simulation on your giant computer, and observe sentient organisms appear.

Now what happens if you turn the computer off? Is it murder, killing these simulated organisms? But wait a minute, you can turn your computer on again and run the simulation again from the same initial state, and run the simulation right past the point you turned it off before. Someone else with another computer can run the same simulation, and get the same results.

So it makes more sense to say your simulations are simply observing this universe and its organisms. From the point of view of the inhabitants, it makes no difference how many times the simulation is run, or whether it is ever run at all.
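
To make the determinism point concrete, here's a toy sketch (Python, assuming nothing beyond the standard Game of Life rules): because the update rule is a pure function of the previous state, running it twice from the same seed necessarily retraces exactly the same history, on this computer or any other.

    # Minimal Conway's Game of Life on a small wrapped grid, to illustrate
    # that the same initial state always yields the same trajectory.
    def step(live, width, height):
        # live: set of (x, y) cells that are alive; toroidal wrap-around.
        neighbours = {}
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        key = ((x + dx) % width, (y + dy) % height)
                        neighbours[key] = neighbours.get(key, 0) + 1
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    def run(seed, steps, width=16, height=16):
        state = set(seed)
        history = []
        for _ in range(steps):
            state = step(state, width, height)
            history.append(frozenset(state))
        return history

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    assert run(glider, 40) == run(glider, 40)   # re-running "resurrects" the same history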

10:

The correct thing to do, of course, is to re-simulate the entire universe again, from here back to the Big Bang, and "resurrect" everything we want.

What? The computronium won't fit in the universe? I don't care, find another one!

>>>The particular variety of eschatological singularity initially proposed by Russian Orthodox theologian Nikolai Fedorovich Fedorov and championed more recently by Raymond Kurzweil

Charlie, don't get me wrong, but you seem to be weirdly fixated on Fedorov. Do you have any evidence that he is actually in any way connected to the modern singularity movement? You know, ideas can and do get re-invented.

It also seems to me that you are using this line of thought: 1. "Singularity people" have resurrection. 2. Christianity has resurrection, too. 3. But Christianity is bad. 4. Therefore, "singularity people" are bad. 5. Hitler ate sugar.

Feel free to disagree, but Yudkowsky and his followers are crazy enough in their own special way; there is no need to pattern-match them with Christianity.

>>>Do we resurrect all previous non-Evil individual instances of H. sapiens sapiens—no Hitlers wanted in heaven—or do we resurrect everyone? Surely this is fairly clear-cut? (If you believe anyone can achieve redemption eventually, you resurrect everyone; if not, not.)

In a world where anyone can be resurrected, what is evil? Is Hitler truly evil if his actions are reversible?

11:

Not even close, by several orders of magnitude. Bacterial cells are smaller than eukaryotic cells, and some brain cells (like the nerves that reach your toes) are among the biggest cells in the body.

Incidentally, elephants have three times more neurons than we do.

As for intelligence, the best knockdown is "define it realistically and give three examples," and you can make the test harder by having the question answered by an alien who doesn't think like us and who wants to objectively identify the most intelligent species on our planet.

At the moment, I'm reading Frans de Waal's Are We Smart Enough to Know How Smart Animals Are?, which I strongly recommend reading during discussions of intelligence.

One reason I recommend it is that most people (and, speaking as a biologist, that included me) don't realize just how much bigotry, bias, and creationism go into definitions of intelligence. The tl;dr version is to count the number of fields that assume that humans are different and special, and that "evolution stops above the neck" (this was originally proposed by Alfred Russel Wallace, because he believed people were specially intelligent, had souls, and had brains designed by god). It's also to look at:
--how perniciously wrong the single scale of intelligence is in the real world (like the INT stat in an RPG: humans are incredibly stupid about stuff that other species are pretty smart about, things like olfaction or spatial memory),
--how many intelligence tests are so biased that their results are untrustworthy (why do we assume that monkeys or even sheep can readily identify different random humans as well as they can members of their own species? It's not like we can tell different monkeys or sheep apart without a lot of training),
--how, once an animal is proved to have some trait originally thought to only be possessed by humans (like tool use), that trait is immediately redefined so that only humans have it (tool use, theory of mind, language, and creation of sex toys all fall into this),
--how often the bias that "humans must be special" distorts not just the science, but the discussion of the science (apparently, this especially goes on in philosophy departments).

Fortunately, the actual study of animal intelligences has been diligently, step by step, working around these issues once they're identified. The spread of their knowledge into the wider culture is limited by the written excretions of any number of talking head non-experts (people like me, in other words), who can make their personal biases sound well-reasoned and insightful, even if they're not backed up by experience or data.

Incidentally, someone who knows more about AI than I do can chime in about whether all these problems show up in the field of machine learning as well.

12:

Cute. And we're then firmly into Bertrand Russell's "does the set of all sets include itself?" puzzler :)

13:

A couple of questions:

Once you've resurrected everyone, what do you do with them?

Are all the resurrectees dropped in the same substrate and allowed to interact freely? Or is each mind sitting in a black box running its own Heaven simulation?

A resurrected mind that has been told it is a simulation that will run for all eternity (depending on clock speed and the heat death of the universe) is in no way the same entity as a mortal mind that lived and died a physical life. So are all the resurrectees kept ignorant of their own simulation and allowed to live and die a simulated life with continual resets?

If the goal is to simulate the entire possible phase space of humanity plus the civilizations they lived in, are you not in fact just simulating the universe as many times as necessary to achieve that goal? It's simulations all the way down.

And if you feel the need to push your simulation task down the evolutionary ladder, what about up? Do you rev the clock speed on your Cetacea and Ursidae simulations to see what happens?

What about sideways? Many (all) fictional characters will fit in the possible phase space you are simulating. Are sci-fi aliens and fantasy portal worlds part of your simulation? E.T.? Luke Skywalker? Bugs Bunny?

Once you start simulating, where do you stop?

14:

Hmmm...

If our simulation is going to be remotely convincing it's going to need to simulate the environment which our resurrectees exist within and interact with. Which means all the non-human life (down to the parasites which might colonise us, the bacteria and viruses which might infect us, the cats and dogs we keep as pets, the various beasties the cats and dogs hunt, chase, and prick their ears up to warn us of, etc, etc, etc) in sufficient detail for the simulated environment to work (or at least seem to work) properly.

In short, surely it's everything or nothing in terms of the biosphere...

15:

Why limit it? I'd imagine there would be in fact different resurrections by differing groups.

People are people, and people will diverge on these opinions.

If it's possible, I'd imagine several different types. Raw simulations of all of history, including the bad and the good. A perfect place for the good. A horrible place for the damned (Hello Roko's Basilisk!). (Your mileage may vary on damned/good).

Not to mention stranger stuff that's basically entertainment. Simulations with a change inserted like giving magic powers. Or a simulation where the people figure it out and start attempting to alter the code.

Maybe recreating humans by cloning a bunch on a super structure at the end of time. Maybe make it some sort of ring world or Alderson Disk, and you have hundreds, thousands of planets sharing a map based on atmosphere, and you copy earth on October 2nd, 1962.

Maybe it's because of some transhuman desire to understand baseline humanity that they will clone all the great minds of the 20th century and resurrect them into a primitive survival struggle to demonstrate human nature. Maybe that's how they got the grant from the transhuman council, and really it's those famous minds in sexy young bodies in a jungle full of predators, and they hope viewers want to see if JFK and Marie Curie will hook up before the tiger-sized rats eat them.

Maybe it's what a Matrioshka brain filled with an upload of humans thinks about.

Maybe it's from a religious sect who wants to create a second chance for those who didn't believe the first time around. Maybe it's a religious sect who wants to build a virtual heaven. Maybe it's part of some long scam to build a virtual heaven for those who donated to a transhuman upload project in the dawn of time.

Maybe it's a group of philosophers who, for shits and giggles, only want to bring back people with A+ blood types so they can test Japanese blood-type theory in a simulation space in a dying universe.

Maybe all of these happen over a long enough timeline. People are people, and will have their own motivations even when machines. Some might be a form of a recognizable faith or philosophy. Some will have ones we've never dreamt of.

16:

RE Martin, post 12: I suspect the answer to that is no, and the reason is Goedel's Theorem.

mark

17:

Here's another question: you're going to recreate them... when? When they were born (but then they must have all the same experiences that the original did); when they were at the height of their powers? When they died?

Sample case: where in her life would they resurrect my late wife?

18:

I had been thinking that this would imply a phase space. With quantum computer simulation, and just not collapsing the state (i.e. Multi-World quantum theory, EWG variety) you might be able to do this completely with one computer. Of course, it would probably run a lot slower than reality...

After all, you can't realistically resurrect someone without resurrecting their memories, and this implies that you've simulated their environment in detail.

My real question is: How often do you save the (qubit) state, and to what medium? If I'm not the person I was two decades ago, it's also true that I'm not the person I was two minutes ago. The idea of resurrecting someone from the moment of their death is really inhumane. You get lots of cripples, lots of people with advanced senility, etc. And if you're going to fix those problems, you're changing them so much they aren't the same people, they've just got (sort of) memories of the same people.

Now clearly nobody was proposing something this extreme, but that's what you get if you take the argument as stated seriously. It's sort of like the multi-world version of the physics view where all time is omni-present, and our memories are just a trace of relationship between the states. I believe that that's still a valid interpretation of quantum physics, but it's no more useful than Copenhagen, and more depressing, so nobody pays attention to it. (The standard version, though, doesn't contain the multi-world extension, but is only compatible with it.)

19:

This is a yellow card for Sealioning. (See the moderation policy.)

Hint: your demand for citation doesn't require me to go running; rather it suggests that you need to do some research for yourself and either find supporting or contradictory evidence.

20:

Me-as-I-was-aged-five is embedded in me-as-I-was-aged-fifteen and they are both embedded in me-as-I-was-aged-fifty ... but lossily, because memory isn't perfect (and mine in particular resembles a sieve).

Yes, this is indeed a problem for the sim/resurrection hypothesis. And it has an even gnarlier corollary or two: (1) in the afterlife, do we qualify for subsequent additional lives if we fuck up and get killed (again)? (2) your deceased family members and you are now separated by a gulf of experience: being reunited in the afterlife with a much more experienced version of someone you knew before you died is going to be problematic, to say the least. So what sort of problem does resurrection solve, anyway?

21:

Here's another question: you're going to recreate them... when? When they were born (but then they must have all the same experiences that the original did); when they were at the height of their powers? When they died?

I had occasion to contemplate that very question a few weeks ago when my MIL died after a long decline into Alzheimer's. The person who died at 89 was very far from the person(s) who existed in previous years and decades. If you're going to do the resurrection thing, some set of selection criteria would be necessary, or you're going to have to resurrect all versions of the person who existed over their lifetime. Just what that means is left as an exercise for the reader; probably a quantum multiverse would come in handy.

BTW, this consideration applies to religious matters: which version of you would stand before St. Pete?

22:

I'm reminded of David Brin's Stone of Significance.

23:

@OGH's OP:

One of my formative influences as a kid was Daniel Galouye's story based on this. Corporates simulate a world so they can do market research in it cheaply, then we find out that our world is another such fish-tank belonging to Them Upstairs. Simulations all the way up. 1964, yet, pretty prescient. According to Wiki, Dawkins rates the guy highly too.

"No man is a villain to himself", and if Hitler was running this sim he would call us the Evil ones. I don't want to be a moral relativist, but I don't know an intellectually honest solution to this one, only an existentialist-type plunge into choosing sides.

I never thought that our species' distinguishing mark was intelligence anyway. One of my favourite jokes is that the Missing Link between the apes and sapient creatures has been found – it's us. And I call us the Enslaving Mammal: outside the social insects, what other creature can make some other bugger do his work for him?

Regarding predators and prey, Schopenhauer replied to Panglossians by inviting them to compare the lion's utility from a good meal with the meal's own disutility. Are they "equal" and in "balance"?

@15 elfey1: Lewis once suggested, as a reductio, that a hell for humans and a heaven for mosquitoes could very easily be combined.

@Whitroth 17: This one afflicts the actual christian doctrine of the resurrection, as opposed to the Hollywood version involving ghosties. According to said doctrine, we get our bodies back. Let's hope Stephen Hawking gets the patch as well.

24:

This is an essential conundrum at the heart of the Fedorovite/Kurzweilian theory of the singularity as afterlife: where do you draw the line between consciousness and unconscious life, and then between life and un-life?

It definitely becomes a problem of feasibility, but examined solely from an ethical angle there's no real reason not to simply resurrect anything we can determine was alive, if we have the ability to.

"No man is a villain to himself",

That's simply not true. There are plenty of people who see what they're doing as villainy, and even hate themselves for it.

Which means all the non-human life (down to the parasites which might colonise us, the bacteria and viruses which might infect us, the cats and dogs we keep as pets, the various beasties the cats and dogs hunt, chase, and prick their ears up to warn us of, etc, etc, etc) in sufficient detail for the simulated environment to work (or at least seem to work) properly.

What would be the point of just having a digital copy of the universe as it already was? Why couldn't the simulation just be modified so that violence/predation/parasitism is simply not necessary for any resurrected entity to exist?

25:

in a future-unbounded cosmos there can only be one first time for everything

Unbounded how? If the cosmos contains infinite space -- more than just what we can see in our own light cone -- and our location is nothing special, then any finite arrangement of atoms will occur somewhere at some time. In fact, any finite arrangement of atoms must occur repeatedly in an infinite number of locations. The tricky bit is getting to see all those alternate copies of whatever.

Which means that 'first time' event will be happening over and over again in causal domains unreachable from here.

26:

"Incidentally, someone who knows more about AI than I do can chime in about whether all these problems show up in the field of machine learning as well."

Yes, but in reverse: whenever we figure out how to get a computer to do something that was previously firmly in the realm of AI, it immediately becomes "not really AI, just a tool".

Playing board games: Tic-Tac-Toe is easy, you could solve that in the 1940s as soon as you thought about it (though it took until the 60s) but Chess is hard, only a human can play Chess well. Eh, not so much. Programs started routinely beating casual humans in Chess soon after personal computing became a thing. Go turned out to need a lot more work...

Many problems wavered back and forth between "that will be easy, my grad students will have it next year" and "that can never be done" and back again. You'll note that speech-to-text conversion is doing very well these days.

27:

Your other option is unbounded in time (going in the direction away from the big bang). And yes, this gets us into Olbers' paradox and Boltzmann brains and rapidly down a rabbit hole of barely comprehensible infinities.

28:

The real cognitive dissonance will set in if and when we work out how to computationally perform self-awareness.

At which point "that's not AI, that's a tool" gets really hard to support as a position ...

29:

What is self-awareness?

Should a program that is able to locate its own binary code and modify it be considered self-aware?
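
For what it's worth, merely locating and reading (or rewriting) your own code is already trivial; here's an illustrative Python sketch that does it without anything we'd be tempted to call awareness:

    def read_own_source():
        # Locate and read the file this very script was loaded from.
        with open(__file__, "r") as f:
            return f.read()

    if __name__ == "__main__":
        source = read_own_source()
        print(f"I am {len(source)} characters of code at {__file__}")
        # Self-modification would just be writing a changed string back to
        # __file__ -- which says nothing about whether "I" am aware of anything.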

30:

Kkoro @24:

"There are plenty of people who see what they're doing as villainy, and even hate themselves for it."

If you say so. Moi, I've never met one. Minor stuff, one may hate oneself, yes, but major villainy automatically triggers self-justification. Which is what our big brains seem to be for.

And Hitler definitely didn't fall into your category. Nor does Anders Behring Breivik, to whom I have some purely geographical proximity.

31:

"Crystal Express" is by Sterling, the Egan story is "Crystal Night" per the link.

Meanwhile, one of the Thunderbirds crashed an hour ago, after performing at the USAF Academy graduation. The pilot held on until the last moment before ejecting; he seems to be okay, and the F-16 is intact in a field. Sounds like the engine may have flamed out.

Now back to rereading this and the comments without distraction.

32:

...and there are some that just do it because it's fun.

It's not that the sociopaths don't see themselves as villains, and they certainly don't hate themselves; they know perfectly well what they're doing and they just don't care.

33:

I don't have a copy of the DSM handy: is Musk's belief that reality is a simulation among recognized delusional systems? I thought that the "Body Snatchers" delusion, that friends and family had been replaced by aliens or robots, was common enough to be a recognized delusional system.

34:

I don't have a copy of the DSM handy: is Musk's belief that reality is a simulation among recognized delusional systems?

Should the belief that reality is the product of an omnipotent being that transcends time and space be a recognized delusional system?

35:

I think you can recognize your own actions as villainy without hating yourself. Like littering. You can understand perfectly that you are decreasing the quality of life of other people by it, and just not care.

Hmmm. That would be empathy without sympathy. Is it a recognized disease?

36:

Of course, from the point of view of someone who actually does believe in an invisible sky daddy, and an afterlife, the ethical stakes of this issue are significantly reduced. If you think that everyone who ever lived is already "out there" somewhere, then resurrecting them has no imperative value at all.

Regardless, if this scenario ever does come to pass, then almost certainly society will take the market-based approach toward rationing resurrection opportunities. If you can afford it, and be responsible for the economic costs of maintaining "the returned", then go for it. Therefore, to maximize your own chances at being given a second chance, try to do something that will impress Bill Gates' distant descendants.

I could suggest a more equitable approach that is based on this. Treat it as equivalent to giving birth (recreating past life and creating future life has an interesting symmetry). Everyone is allowed to resurrect their own immediate ancestors (their parents), and no one else. You have to pay for the resurrection, and any subsequent economic costs. You have to submit a financial plan to an appropriate government agency to get permission to do this. Impoverished people wishing to bring Mam and Pap back from beyond can receive means-tested assistance.

What I did was not try to answer all the ethical questions up front, but create a sustainable system that could answer them itself over time. If, working backward, we can find a way to support everyone who ever lived, we will. The main limiting factor is that the process would stop when you reach an ancestor considered intellectually too primitive to be responsible for a decision like this. We would have to generate some criteria for that, but that can wait until we gather some IQ data on living hominids.

37:

For just resurrecting the people, Farmer did that in Riverworld (and sequels). A great way to get Mark Twain and Richard Burton (the explorer) to meet (and others).

38:

almost certainly society will take the market-based approach toward rationing resurrection opportunities

Your faith in late-period neoliberal quackspeak is touching, if somewhat naive, historically speaking.

39:

In a sufficiently large multiverse anyone can be resurrected from arbitrarily small amounts of information. As for who gets resurrected, I would suggest only those who have a concept of death and afterlife. Of course, one might also restrict it to those who have given implicit permission by adherence to such a belief in the first place.

More details in my IEET article here: http://ieet.org/index.php/IEET/more/bruere20121015

40:

The real cognitive dissonance will set in if and when we work out how to computationally perform self-awareness.

At which point "that's not AI, that's a tool" gets really hard to support as a position ...

Not really, but I don't want to go down the rabbit hole of the origins of modern racism in the rise of pre-modern slavery.

My point here is that cognitive dissonance seems to be the overlooked disease of our time, whether it's climate change or American politics. Some seem to have wondered whether the apparent epidemics of depression, anxiety, and PTSD among returning soldiers are an effect of this dissonance.

So I expect that long after machines become self-aware, they will be treated as tools. Indeed, I expect that engineers will do their damnedest to make self-aware machines that like being subservient, out of misplaced empathy.

Speaking of which, why is something supposed to be simulating us again?

41:

Gave my heart a jump, my sister is there for a family friend's graduation. (it seems like no one beyond the pilot was involved, and they seem ok)

42:

Speaking of which, why is something supposed to be simulating us again?

See also: "The Hitchhiker's Guide to the Galaxy", with specific reference to the Infinite Improbability Drive, the sperm whale, and the bowl of petunias.

43:

If we had the ability to simulate people, wouldn't it be a stronger moral position to only simulate people who hadn't had a chance to live yet? For one thing, people who haven't had a chance to live haven't had a chance to do anything naughty; they're guaranteed to be innocent (until instantiated, anyway). We can be sure that every human that has ever lived has done something naughty (let's call this The Yahweh Position) and therefore is disqualified from the virtual server rack of heaven. Since we're simulating people, we could presumably tweak those simulations to only offer the acceptable options - so they couldn't possibly be racist or murderous or fans of the wrong sorts of popular music. The more we perfect them, the more sure we can be that they are likely to be appropriate entrants into the server rack of heaven - which means they are more deserving to be so, which means we should preferentially instantiate our increasingly perfect simulations instead of imperfect human 1.0s.

44:

I was also thinking: wouldn't a side-effect of being able to simulate people accurately involve the solution of every mysterious crime? Just fire up your favorite suspects and witnesses and read their source code.

45:

In my view simulation is not resurrection if there's no continuity of experience. Unless there is some way of maintaining consciousness through death, presumably by some sort of implant, the simulation is not "me" and the "resurrection" is just a giant computer game. Of course if I'm part of a computer simulation now the above may not be true. Is the universe a computer game, and if so, what is the game creator, and does it know whether it is part of a higher game?

46:

If that means we've worked out how not to perform self-awareness at the same time that would probably reduce the dissonance if anything. Of course, there's always the "define awareness" problem: FCVO aware, there are Smalltalk-like systems that're reflective enough to introspect and rewrite their own VMs while running on top of them already, so really we're arguing about consciousness, no?

My upstairs neighbour is a philosopher by profession. Funnily enough I've just got him started on Egan...

47:

I can't see how you could resurrect anyone under any scenario; I can't see a sim that could rerun WW2, or heck, even a football match, and reproduce it perfectly.

48:

This discussion reminds me of a passage from James Branch Cabell's Jurgen:

"About these matters I do not know. How should 1? But I think that all of us take part in a moving and a shifting and a reasoned using of the things which are Koshchei's, a using such as we do not comprehend, and are not fit to comprehend."
"That is possible," said Jurgen: "but, none-the-less-!"
"It is as a chessboard whereon the pieces move diversely: the knights leaping sidewise, and the bishops darting obliquely, and the rooks charging straightforward, and the pawns laboriously hobbling from square to square, each at the player's will. There is no discernible order, all to the onlooker is manifestly in confusion: but to the player there is a meaning in the disposition of the pieces."
"I do not deny it: still, one must grant--"
"And I think it is as though each of the pieces, even the pawns, had a chessboard of his own which moves as he is moved, and whereupon he moves the pieces to suit his will, in the very moment wherein he is moved willy-nilly."
"You may be right: yet, even so--"
"And Koshchei who directs this infinite moving of puppets may well be the futile harried king in some yet larger game."
"Now, certainly I cannot contradict you but, at the same time-!"
"So goes this criss-cross multitudinous moving as far as thought can reach : and beyond that the moving goes. All moves. All moves uncomprehendingly, and to the sound of laughter. For all moves in consonance with a higher power that understands th e meaning of the movement. And each moves the pieces before him in consonance with his ability. So the game is endless and ruthless: and there is merriment overhead, but it is very far away."

49:

The question is moot: there isn't enough time or matter for the experiment.

The future is not unbounded; in a mere 120 trillion years or so there's no fusion happening in the universe. "Shortly" after that, temperatures are too even to provide enough exergy (energy that can do work) for computation.

The mass of the known universe is likewise limited--all the more so if we can't make use of dark energy. And there are lower limits to the time and energy costs of computation.

Those three constraints together severely limit the number of possible simulations, probably to the point where you're stuck with variants of known historical figures only, and only rough macro-scale simulations of them.
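
To put rough numbers on those lower limits: the Landauer bound says erasing one bit costs at least kT·ln 2 of energy. A back-of-envelope sketch (the mass and temperature figures are the usual order-of-magnitude estimates, not precise values):

    import math

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T_cmb = 2.7             # present-day background temperature, K
    c = 3.0e8               # speed of light, m/s
    mass_universe = 1e53    # rough mass of the observable universe, kg

    energy_per_bit = k_B * T_cmb * math.log(2)    # Landauer limit per bit erased
    total_energy = mass_universe * c ** 2         # all mass converted to energy
    max_bit_erasures = total_energy / energy_per_bit

    print(f"Landauer cost per bit at {T_cmb} K: {energy_per_bit:.2e} J")
    print(f"Upper bound on irreversible bit operations: {max_bit_erasures:.2e}")
    # Roughly 1e92-1e93 bit erasures: enormous, but finite -- and a
    # Planck-resolution sim of even one planet would need far more state than that.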

(Aside: Bostrom's idea also suffers greatly from anthropocentrism. Since computation is bounded, why would non-humans choose to simulate humans in particular?)

50:

There are a lot of old science fiction stories which look very different if one thinks in terms of AI or Singularities. Lord of Light could be set in a singularity gone bad, later colonized by humans, Riverworld could easily be a simulation, I Have No Mouth and I Must Scream... (Lots of Ellison, actually) and most of Lovecraft is vulnerable to the same process. And what happens when Honor Harrington discovers that she's part of some warfare simulation? (Best hope she can't hack your 3D printer...)

I wonder how many of these classics will be rewritten with an obvious Singulatarian framework. (There's a niche here which could be colonized by the right second-tier author, and it would be quite profitable!)

51:

If you are trying to resurrect humans (outside a simulation where you can cheat a bit—or a lot), you can't stop at the Eukaryota. We are going to need at least the members of the Bacteria domain that we co-evolved to live with, or all the resurrectees will quickly die of various intestinal problems. There are probably some Archaea wandering around our bodies doing various useful things, too. Not to mention Viri!

Maybe we can fudge recreating the Prions, at least.

52:

I'm sorry, but that doesn't make sense to even my grumpy sort-of libertarian self. Any culture that is going to go for large-scale resurrection or universe simulations is almost certainly going to be a post-scarcity economy. Way post-scarcity.

The only way I can see any economic limits having any real impact is if your resurrectees will require some sort of non-AI 24/7 care to continue existing and functioning. Perhaps if they come out like the chronically future-shocked unfrozen Revivals of Ellis & Robertson's "Transmetropolitan"?

53:

Hang on.

It's generally considered that the Christian heaven is actually a hell, because it's eternal. Nothing that could call itself human, simulation or not, could remain sane and conscious for an eternity - we aren't built that way. That goes double if no sin is allowed...

Thus any such simulation system is essentially a hell (torturing with eternity), or at least a limbo (continual wipes and restarts till you get it right).

So if we are inflicting it on someone, we are punishing the guilty (come right in Mr Hitler). And if it's being inflicted on us, we need to reach nirvana to stop the reincarnation. Obviously, over time the only ones to continually fail to learn the lesson, reach nirvana and escape samsara are the incorrigibly evil - which explains modern politics and commerce perfectly.

Or, in short, the only good role for a simulation hell is to get 'souls', or mindstates, to the place where they neither need, nor want, to reincarnate.

54:

OK "Rapture" / nerds / AI Compulsory XKCD - before anyone else spots it .....

a modern secular religion that echoes the design pattern of Christian millenarianism, with its afterlife and heavenly happy-talk ...

You forgot, or deliberately omitted something there ( I saw you palm that card ) ... if it's a religion, especially of that model, there must be punishment & suffering & torture for the guilty. { See also Iain M Banks on that gruesome subject }

But, I do like the idea that there isn't enough Computronium in the Universe to do the job, though ....

55:

( @ 11 ) This is a re-statement of the Geocentric hypothesis ( or backwards of same) - right? That "Our" planet / place in the universe / species is in some way "special". What surprises me is that so many supposedly trained scientists still fall for it - probably because they can't see it.

56:

So what sort of problem does resurrection solve, anyway?

As your question implies, it doesn't ... But Christian & Islamic theologians really, really do not want to be told or know about this. In fact, an in-depth "intelligent" discussion of this subject promptly trashes the whole "resurrection" idea ( I think ). What bothers me is that the Fedorovians/Kurzweilites don't seem to have noticed this problem, either.

57:

YES

Would make a very good defining test, actually. Anyone got any better suggestions?

58:

Should the belief that reality is the product of an omnipotent being that transcends time and space be a recognized delusional system?

It damned well ought to be ....... Given that BigSkyFairy is undetectable [ Yes, I know, my pet hobby-horse, but ... ] the "believers" get very excited & shouty about that one, because it puts the burden of proof/disproof on to them & they don't like it up 'em .... Particularly if you join that up to the mutation of BigSkyFairy from hiding behind the hill & the thunder, to beyond the sky, to in the crystalline sphere, to ... Ooooooops - receding into undetectability a bit further every time our detection gets better, what a surprise!

59:

"But, I do like the idea that there isn't enough Computronium in the Universe to do the job, though ...."

Depends whether we limit ourselves to coarse neural state simulation, or insist that nothing is authentic unless it's all Planck scale. With the former option we probably do not need much computing power beyond what this century will provide. To put it crudely, a neural sim is one where you are a neural network with senses fed by the simulation data stream, i.e. a couple of Mb/s. The rest of the world does not have to be fine-grain simulated.
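
Taking that couple of Mb/s figure at face value (it's an assumption for illustration, as is the 80-year lifespan), the per-person data budget is surprisingly modest:

    # Rough per-person data budget for a neural-level (senses-only) sim,
    # using the ~2 Mb/s figure above as an assumed sensory stream rate.
    bits_per_second = 2e6
    seconds_per_year = 3.156e7
    lifespan_years = 80

    bits_per_life = bits_per_second * seconds_per_year * lifespan_years
    bytes_per_life = bits_per_life / 8

    print(f"Sensory stream over one {lifespan_years}-year life: "
          f"{bytes_per_life / 1e12:.0f} TB")
    # Roughly 600 TB of raw sensory input per simulated lifetime -- large,
    # but nothing like a Planck-scale physics simulation of a whole world.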

60:

We can be sure that every human that has ever lived has done something naughty (let's call this The Yahweh Position) and therefore is disqualified from the virtual server rack of heaven.

I was wondering how long it would take us to circle back to Monty Python territory!

Every Sperm is Sacred (SLYT)

61:

I was also thinking: wouldn't a side-effect of being able to simulate people accurately involve the solution of every mysterious crime?

It also redefines crime. What is murder if you can effectively resurrect the victim? What is theft if the item being stolen exists in a sim and can be effortlessly duplicated? (Is it rape or torture if the victim is a cognitive zombie?)

62:

The likes of Freeman Dyson disagree with you on the end point for structured interacting processes in the universe; the stelliferous era isn't that far out in his terms (although pushing things much past the end of the black hole era is questionable). However, as we don't have a handle on dark matter or dark energy and the missing mass problem, the ultimate duration of the universe seems to be a wide open question at this point.

Of course, if we are in a sim, one of the tell-tale signs might well be a limit at the very low end of the scale -- a length/duration below which we simply can't probe (because it's the granularity of the huge array of finite state automata on which the universe we live in runs), and a limiting velocity for signal propagation within the sim (to permit coherency to be maintained throughout the grid of FSAs). Paging Messrs. Planck, Michelson and Morley to the white courtesy phone ...
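
A toy illustration of that limiting velocity, under the same assumption of a synchronous grid of finite state automata: if each cell's next state depends only on its immediate neighbours, no influence can propagate faster than one cell per tick, which from the inside looks like a hard speed limit.

    # 1-D grid of cells where each cell's next state depends only on its
    # immediate neighbours. A disturbance at the centre can spread at most
    # one cell per tick -- an in-sim "speed of light".
    def tick(cells):
        n = len(cells)
        # A cell becomes "disturbed" (1) if it or either neighbour was disturbed.
        return [1 if any(cells[(i + d) % n] for d in (-1, 0, 1)) else 0
                for i in range(n)]

    cells = [0] * 21
    cells[10] = 1                       # single disturbance in the middle
    for t in range(5):
        print(f"t={t}: " + "".join("#" if c else "." for c in cells))
        cells = tick(cells)
    # After t ticks the disturbance has reached at most t cells either side:
    # the grid's update rule imposes a maximum signal velocity of 1 cell/tick.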

63:

... and most of Lovecraft is vulnerable to the same process

The deep back story of the Laundry Files series arc is that it's the Lovecraftian Accelerando. A dive into and through a singularity in several episodes, seen through the eyes of the survivors. Such a shame it's the Shoggoth Singularity ...!

64:

Of course, if we are in a sim, one of the tell-tale signs might well be a limit at the very low end of the scale -- a length/duration below which we simply can't probe

Depends. Are we just a by-product of a complete physics simulation, or are we in a simulation of human minds? In the latter case, for example, there is no need to simulate the Andromeda Galaxy at all unless someone is looking at it.

65:

Or, in short, the only good role for a simulation hell is to get 'souls', or mindstates, to the place where they neither need, nor want, to reincarnate.

Strong fiction rec: go read "Surface Detail" by Iain M. Banks. It's his great big sim-afterlife ethics exegesis, wrapped up in a Culture novel. (It's got philosophy but it's also got space ships going explodey and there's wild sex too.)

66:

If it's a religion, especially of that model, there must be punishment & suffering & torture for the guilty.

I already blogged about Roko's Basilisk, thereby dooming you to heck for all eternity, didn't I?

67:

"Of course, if we are in a sim, one of the tell-tale signs might well be a limit at the very low end of the scale..."

Only applies to a fine grain sim, not a neural level sim. And one can assume that the latter are going to be dozens of orders of magnitude more common than the former.

68:

If we want to do resurrection then it is almost certain that the Halting Problem means that every bit of the reconstructed person's life must be recapitulated. So if you are not in a sim first time around, you will be on the second.

69:

No serious theologian in any tradition ever described eternity as just "a very long time." Eternity doesn't last forever, it's outside time, because time itself is an illusion. The same is true of the Buddhist Nirvana. In a real afterlife or a good simulation of one, there would be no time as we understand it. The issues raised here actually worried Christian theologians when they talked about bodily resurrection - John Donne had some fun with the more outlandish consequences. Incidentally, what about Frank Tipler's idea of new universes created by simulations running faster and faster in shorter and shorter intervals of time?

70:

Incidentally, what about Frank Tipler's idea of new universes created by simulations running faster and faster in shorter and shorter intervals of time?

Tipler postulated a bounced universe that will contract towards a big crunch at the end of time. Current evidence suggests this is not the sort of universe we live in.

71:

This gets more and more brain-twisty, because mixed in with the question of who do you bring back or not is the question of what version of someone do you bring back? Assuming there's zero data loss and there are redundant backups of every moment of everyone's life from beginning to end, which image do you load?

Let's take the Adolf Hitler example. The 1945 version is more crash-prone than an in-place upgrade installation of Windows Millennium, so there's no sense using that image, but if you load the 1914 backup instead, there's thirty years of heuristic programming that can't be reliably recreated except in the environment in which it was originally compiled, so to speak, so aren't you technically bringing a different person into the sim?

72:

Do superhuman AIs get to heaven too?

Got to get them all, especially the ones that are smarter than whatever we are using to run the sim - greater capacity for X* and all that.

If we include fictitious AIs that want to destroy all humans then what are the moral implications of giving them humans to destroy? Utilitarian calculation involving summing up X presumably?

*X = any presumably desirable trait.

73:

No serious theologian in any tradition ever described eternity as just "a very long time." Eternity doesn't last forever, it's outside time, because time itself is an illusion.

Does not compute. Illusion is a process. Processes happen in time.

74:

Unless you express them as action principles, in which case they don't.

75:

Hi. Nader Elhefnawy here.

I follow this blog regularly, but usually hesitate to comment because by the time I get here the thread's usually hundreds of comments long.

Chiming in this time, though, because Mr. Stross linked my Space Review article on Fedorov.

I thought I'd mention that I have another article on Fedorov's thinking over at Future Fire which is more detailed in its discussion of his ideas, and quotes Fedorov's writings extensively. The address is listed below:

http://futurefire.net/2007.09/nonfiction/fedorov.html

I also thought I'd mention that George Young, in his book on Fedorov (Nikolai F. Fedorov: An Introduction), did specifically raise and discuss at some length the question of just how far resurrection goes.

Incidentally, I'm very intrigued by the prospect of a novel which treats the question raised here.

76:

Incidentally, I'm very intrigued by the prospect of a novel which treats the question raised here.

Spoiler (for something I'm still writing): it's not about the question raised here, it's about the implications of people who believe this shit gaining influence and attempting to implement their various conflicting agendas. (Meta-spoiler: it all ends in tears before bedtime.)

77:

Referring back to one of the classics "... there is another theory which says this has already happened."

(It's already happened, and the solution to the whole issue is you run the universe over again. I answered this question differently the last time round, by the way. There was a three page digression into quantum theory or something and I think I wound up getting told off by OGH - it's all a bit blurry really...)

78:

So, a novel that looks to be critical of the singularity, but for different reasons than expressed in Philip Pullman's "His Dark Materials" (which assumes a circulation of "Dust" like the deep carbon cycle) or as expressed on TV in https://en.wikipedia.org/wiki/Soul_Hunter_(Babylon_5) (which depicts preservation of personalities as an interrupted journey). You'll have my interest.

79:

"where do you draw the line between consciousness and unconscious life"

The question treats the science and technology of the simulation as if it simply exists without background.

If they have the ability to simulate all that is suggested, then they have the ability to analyse the internal states of each simulant, and the deep understanding of consciousness necessary to make the decisions that you/we can't.

Re: Simulating those who pre-date the era of perfect scanning/uploading.

This sounds like CSI-style photo enhancing. Surely it doesn't matter how clever the algo is, if there's not enough information in the archival material to replicate the subject?

(Ditto, only moreso, the idea of recreating it all by "running the universe again". Why would you get the same result?)

80:

(Aside: Bostrom's idea also suffers greatly from anthropocentrism. Since computation is bounded, why would non-humans choose to simulate humans in particular?)

OGH has already given us one answer: because we were a prehistorical evolutionary pressure for them.

81:

@ DavidC, 69:

'No serious theologian in any tradition ever described eternity as just "a very long time."'

NIGGLE. Theologian, schmeologian. While talking to one another, maybe. But when religious entrepreneurs and managerial cadre were talking to the laity, they jolly well did describe it as a "very long time". Viz, the famous figure of the little bird sharpening his beak on a mountain every thousand years, and wearing it down flat while you are being tortured.

82:

True, but I wouldn't count you as "second tier." You've done too much good work.

83:

This sounds like CSI-style photo enhancing. Surely it doesn't matter how clever the algo is, if there's not enough information in the archival material to replicate the subject?

Do you have eidetic memory -- total recall of everything that's ever happened to you?

(I sure don't; my memory was like a sieve, and that was before I spent a few years on a medication that has creeping memory loss as one of its side-effects. There are whole years I don't recall.)

If we're living in a neural-level experience sim rather than a Planck-level physics sim, then of course memory loss would be A Thing (and rather commonplace, too). We don't even know in any detail how the brain stores memories, so we wouldn't be able to tell whether our subjective degree of memory impairment is lesser or greater than we would expect of a non-simulated individual.

84:

Or the whole theory of universal behavior we call "quantum physics" or "string theory", both of which might be some form of "we need this to work, but building the whole mechanism would be computationally expensive, let's just make a lookup table, the people being simulated will never notice."

85:

We don't want to waste computational resources on someone being punished. They can experience their agony on a ten-second loop. We'll use that old server that was in the closet, so if it overheats and their processes abort we haven't lost anything important.

In other words, the folks who are in hell may be the lucky ones!

86:

We need to punish Hitler, so we're making 80 million human beings relive World War I, including the Somme. The good news is we'll still get Lord of the Rings.

87:

@Ludwig 81, Well, I'm no theologian either, and eternity, like infinity which it greatly resembles, is one of those concepts it's hard to get your head round, which is why Buddhists insist that you need to be enlightened before you can do so. But all the major traditions insist on an eternal unchanging creator outside time. It's true that at lower management level this didn't always come over very well, nor was it practically effective in changing behavior (you're thinking of the terrifying sermon in Joyce's "Portrait of the Artist", I presume.) But the ideas that a creator of your choice created time at the same stage as the rest of the universe, or that the universe has no beginning and no end, are beliefs you find in every major religion. By analogy, therefore, the constructors of a simulation in which we might be living would not necessarily be limited by time as we understand it.

88:

But all the major traditions insist on an eternal unchanging creator outside time.

Tell that to the Hindus, the Greeks and Romans, the Vikings, the ancient Egyptians, the [modern, enlightenment-era] atheists, and so on!

Just because you're surrounded by monotheists doesn't mean that it's a universal belief system.

89:

Immanence for the win baby!

90:

@DavidC 87:

Read the Joyce for the first time only recently. I have encountered that beak-sharpening story all sorts of places, including among Proddies. I vaguely recall having encountered it in my special period of the 12th century too, but might be mistaken there.

(The story suggests that god has solved the telomere problem in little birds, doesn't it?)

Though I agree with OGH that some religions were into creation ex nihilo, I do take your point about the Mosaic triad's Sky Man being outside time. If Kant was right about time being an artefact of our own minds, rather than "out there", he sort of has to be. How the ideality of time goes together with the cosmology stuff discussed here, I don't have the smarts to know.

91:

Once sims are created, how are they treated? Do they have the same rights and privileges as the people who resurrected them?

Once you start resurrecting people, you must keep them subordinate to you, grant them equality, or place them in power over you. If it is either two or three, your agenda may not be relevant to them. And then, who would they be interested in resurrecting?

92:

Not quite on topic, but Derek Lowe's got an interesting post on In the Pipeline about understanding brains vs computer chips.

Speaking as someone who works a lot with simulations, I'm not entirely sure I get the logic here. Simulation isn't the same as replication: I am fairly sure that the course of human civilisation would not be significantly changed if I actually had drowned when I fell off a rock into 10 feet of water at the age of 10 (I couldn't really swim at that age, but it transpired that my response to sinking rapidly was to kick hard, resurface and grab rock—hence, continued existence), so there is no need for the posited simulation to include an exact replica of me, any more than the Millennium Simulation needs to contain an exact replica of the Milky Way.

Assuming that the posited simulators are our descendants, they need a fine-grained simulation of the terrestrial biosphere (if the simulated society is going to evolve in a way approximating their own history, the physics and biology has to work right), but increasingly coarse resolution as you go further away (this is, in fact, how the N-body people set about simulating a single galaxy: they run a coarse simulation till they've worked out which bits are going to wind up in or near the target galaxy, then rewind and run with finer detail in just those bits).

You might have to fine-tune your simulation to ensure that people who really did "change the course of history" get properly simulated (I suspect that you don't get our present world if Alexander the Great never existed, and even more so if he did exist but lived to a healthy old age instead of dying at the age of 32), but I imagine that most of humanity could be replaced by different spear-carriers without noticeable effect.

In fact, one of the first studies I'd do if I were a simulator is to test how robust their evolution is: if they rerun the simulation with a different random number seed, how often do they get a recognisable "present" (for their values of recognisable and present, naturally)?
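That last robustness test is easy to caricature in a few lines of code. Here is a toy Monte Carlo along those lines -- a crude stochastic "history" (a biased random walk with rare large shocks), rerun with different seeds, counting how often the end state lands anywhere near the reference run's "present". The model and the 10% tolerance are arbitrary assumptions, purely for illustration:

    import random

    def run_history(seed, steps=1000):
        """A crude 'history': steady drift, everyday noise, and rare large shocks."""
        rng = random.Random(seed)
        state = 0.0
        for _ in range(steps):
            state += rng.gauss(0.05, 1.0)       # ordinary drift plus noise
            if rng.random() < 0.01:             # rare "Alexander dies young" events
                state += rng.gauss(0.0, 20.0)
        return state

    reference = run_history(seed=0)             # "our" history
    tolerance = 0.1 * abs(reference) + 1.0      # what counts as a recognisable present
    reruns = [run_history(seed=s) for s in range(1, 501)]
    hits = sum(abs(x - reference) < tolerance for x in reruns)
    print(f"recognisable presents: {hits} / {len(reruns)}")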

93:

Responding to a number of posts:

1. A major issue I've never seen touched on - plenty of folks have discussed the Christian Hell as infinite pain and torture for finite nastiness... but what about the obverse: infinite pleasure (to be, as the Jews say, in the Presence) for finite goodness?

2. Ah, yes, let's recreate everyone. That would, of course, include, say, Joe Hill, and the copper bosses who set up his legal murder, or the 16 yr old black kid who was murdered by a white mob... along with the members of said mob.

3. Punishment and torture for EvilDoers... so, I've been in agony since my late wife dropped fucking dead at 43. So, if this is all part of some alleged deity's Plan (tm), when does alleged deity get the pain and torture for what they have inflicted upon me (not to mention the rest of the world's pain)?

4. Of course Heaven (tm) is hotter than Hell (tm). That was proved in an email that's been around since at least the eighties, wherein the temperature of Hell is calculated with reference to the temp of burning sulphur, vs. the temp of Heaven with the amount of light....

mark
94:

Once sims are created, how are they treated? Do they have the same rights and privileges as the people who resurrected them?

Do you have to pay them? How are they paid? It's obvious that you can't interfere with your sim in obvious ways if you want to get valid results. So you have to find subtle ways to pay your Simulated Employees. So maybe they're paid in luck. Obviously the creatures running the sim can control probability, so maybe if you serve the Creators correctly you get a +1 on all your decisions which the software makes randomly.

Shout your good ideas to the sky! Post them on the 'Net! Your Creators are listening!! They are Ready to Reward You! Long Life, Good Health and Enormous Wealth Await You!! You won't get cancer at 26, instead you'll die in bed at 97 Happy Active Years while sleeping beside the gorgeous Philosophical Zombie you just made passionate love to!!

All Hail The Creators

P.S. The Universe runs on Windows 36, Embedded Simulation Edition. This Explains Everything!

95:

It is overwhelmingly likely that I am in a simulation. Everyone else who ever lived must also be in a simulation somewhere. That includes Hitler.

But it would be immoral to put anyone else in Hitler's simulation, or that of any mass murderer or sadist. And it would be immoral to kill anyone you have resurrected.

Obviously the morally correct way is to make everyone immortal within their own individual simulation, with a lot of p-zombies that they cannot inflict suffering on.

Conclusion: it is overwhelmingly likely that I am the only real person in this Universe, I am immortal and everyone else is just a crash test dummy. Hmmm.

96:

Here is another even more immediate answer for why we would resurrect people in sims - because the technology will be available in the lifetime of children now alive and many (most?) will want their dead parents or grandparents back.

97:

What if the price of eternal life is having to go through a reconstruction sim? Does it become worth it? Or is suffering the pain once only, in the original "shit happens" universe, as good as you want it, with extinction as the final payoff?

98:

Once you start resurrecting people, you must keep them subordinate to you, grant them equality, or place them in power over you.

Primate pack dominance hierarchy, much?

You missed: you can pack them off into Some Other Pocket Universe (sans interaction). Or "you" might not be human in any meaningful way (see also "Missile Gap"). You may be simulating humans because you're interested in their superorganisms. Or because you want a fertile substrate for examining interesting memes. The list of possible motives is probably endless ...

99:

While the list of motives might be endless, some are going to seriously dominate others statistically. For example, it is highly likely that the overwhelming majority of sims will be constructed by PostHuman entities derived from us, because the kind of tech to run such a sim does not allow much scope for standard HSS to remain in charge.

100:

So, if this is all part of some alleged deity's Plan (tm), when does alleged deity get the pain and torture for what they have inflicted upon me (not to mention the rest of the world's pain)?

Join the punch-Jehovah-in-the-face queue; line starts right behind me, bring a good book, we may be here some time.

(This is why apologetics exist, as a thing theologians do to limbo-dance their way under the Problem of Pain. I find the very idea of it highly unconvincing, never mind all the garbage about original sin, God's Gosplan, and so on ...)

101:

As I see it, some bleeding heart liberal type (like me) will get sad over the poor fuzzy simulations (like me), and there will be laws about how badly intelligent beings in simulations can be treated.

This may be why some of the obvious things governments and private people could do to society haven't happened. LSD in the water supply is an obvious one, so is some kind of social control by pain ray (run a wire to everyone's pain center, give teachers and cops the remote controls.) And why didn't the Nazis just build camps everywhere and stuff every Polack, Frenchman or Hungarian they could find into the ovens? It may be why we haven't had nuclear war, human farming, legalized mass slavery of women/children, etc.

Just be glad you're in a sim that's run under the auspices of the ASPCS (American Society for the Prevention of Cruelty to Sims) and not living in a rogue universe based in Afghanistan, like the one they found last week where Cthulhu and the Christian God were real and they used little children to feed the Shoggoths and Angels.

Remember, we are likely in a sim. So are the people who are running our sim. It is incumbent upon us to make sure the benefits of treating your sims kindly are broadcast up the chain!

102:

Conclusion: it is overwhelmingly likely that I am the only real person in this Universe, I am immortal and everyone else is just a crash test dummy. Hmmm.

Nope.

Let us posit that within each set of all contemporaneous humans there exist social graphs of people who do not significantly harm one another. (Most folks aren't actually into deliberately harming their neighbours: we don't feel good about inflicting pain, and it takes intoxication, regimentation, or dehumanization to force us to do so.)

A benign sim-runner might run a search for such subgraphs, optimizing for size, then provide each group with a separate sim populated with non-peer-harming people, while replacing the non-members (who they might harm) with p-zombies.

(In other words, the NSDAP get to party together without hurting real people; everyone else is free of them.)

An interesting point is that the less harmful you are to other people, the larger the "real" population of the sim you end up in. And people being people, their social affordances are likely to be richer than those of zombies, and because the selection criterion is that they're non-harmful to you, your interactions will be pleasing. (Oh look, I seem to have reinvented heaven, or at least a passing utopia -- post-scarcity and full of nice people being nice to you.)
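A rough sketch of that subgraph search, for concreteness. Treat "harms" as a set of directed pairs and greedily grow, around each person, the largest group containing them in which nobody harms anybody else; everyone outside the group gets rendered as a p-zombie in that person's sim. This is a greedy approximation (the exact problem is an NP-hard, independent-set-like search), and all the names and data here are made up:

    def harm_free_group(person, people, harms):
        """Greedily build a group containing `person` with no internal harm, in either direction."""
        group = {person}
        # Try the least harmful candidates first.
        by_harm = sorted(people, key=lambda p: sum(1 for a, _ in harms if a == p))
        for candidate in by_harm:
            if candidate in group:
                continue
            if all((candidate, m) not in harms and (m, candidate) not in harms for m in group):
                group.add(candidate)
        return group

    # Toy population: D harms everyone else; the rest are harmless.
    people = ["A", "B", "C", "D"]
    harms = {("D", "A"), ("D", "B"), ("D", "C")}
    print(sorted(harm_free_group("A", people, harms)))   # ['A', 'B', 'C'] -- D is replaced by a p-zombie
    print(sorted(harm_free_group("D", people, harms)))   # ['D'] -- a private sim full of p-zombies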

103:

Well, let's not debate Hindu cosmology for too long, but my copy of the Bhagavad Gita falls open conveniently where Krishna is saying things like "there never was a time when I was not ... nor will there ever be a time when we shall cease to be." Nirvana is a release from endless cycles of birth, and so from time itself. And even animistic religions generally had a creation myth of some kind. But the point of this is just that if what mystics of all religions have said is true, then time itself may just be a little local delusion we suffer from, and so arguments based on time may have no validity at the level of the simulation designer, if there is one.

On the punishment thing, again I think that this is popular religious culture a bit out of control. Hellfire was certainly used to frighten people over the centuries if they didn't do what they were told, but in Christian and Buddhist tradition, at least, what happens to the soul after death depends not on how you behaved in life but on your mental state. This is why some Buddhist traditions insist that the various hells of popular belief are actually collective illusions (as indeed is this world) and that the evil create hells for themselves. Even in the Christian tradition, sin was fundamentally a state of mind rather than a series of acts, and hell was eternal separation from God just as heaven was eternal union. But then of course, as I said, "eternal" doesn't mean what most people think it means.

By the way, anybody find the idea of re-doing their life until they get it right interesting? There's no reason why that couldn't be simulated.

104:

And why didn't the Nazis just build camps everywhere and stuff every Polack, Frenchman or Hungarian they could find into the ovens?

I hate to break it to you, but that was more or less the plan; they'd only just gotten started when they lost the war.

(Pain wires probably aren't practical without effective wide-spectrum antibiotics, and are in any case far more expensive to implant than lead-filled rubber hoses and waterboarding gear. As for LSD in the water supply, that's what the folks prosecuted by Operation Julie were allegedly working towards (source: anecdote from someone who knew them personally).)

105:

But none of these things happened, did they? Was it coincidence or our simulation running into some programmed limit having to do with the total amount of pain allowed under the law? We'll probably never know.

This could be why Americans are so nostalgic for the forties and fifties. There was lots of pain elsewhere in the world - China, Korea, starving Soviets, India and Pakistan... if there was a top limit on the amount of pain in our simulation, maybe they turned down the pain in the US and Canada.

(And the "pain wires" are easy. You do one sterile implant under the skull, let the bone heal over it, and use induction from some kind of tech which sits on top of the subject's head. (It's not like you need a ton of voltage to light up the nerves.))

106:

And the "pain wires" are easy. You do one sterile implant under the skull, let the bone heal over it, and

And you are going to get your population to submit to this how, exactly?

You're talking brain surgery. Without any benefit to the subject except pain. Oh, and during implant surgery the patient is kept conscious because the surgeons need to test the electrodes to ensure they've got to the right place. So you're expecting people to submit to being strapped down conscious on an operating table and tortured, for the dubious benefit of being subject to remote-control torture at some point in the future by random authority figures. Mmmmm ... somehow I can't see this taking off, y'know?

(Leaving aside entirely the cost of such surgery! Brain implant surgery doesn't come cheap: best price I can find online is on the order of $50,000 per session. It takes 10-15 years to train a brain surgeon and they're only going to be able to do 2-4 operations like this a day, even on a production line, because it's rather precise work and needs testing. So maybe 1000 patients/surgeon/working year, maybe 10 working years per surgeon's career, so about 10,000 ops per surgeon, tops.)

This futile exercise in BOTE-ing the infeasibility of a grotesquely inhumane torture regime brought to you by Roko's Basilisk.
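Spelling out the arithmetic above, with every input being the comment's own guess rather than a researched figure:

    ops_per_day = 3                    # midpoint of the 2-4 operations/day guess
    working_days_per_year = 250
    career_years = 10
    cost_per_op = 50_000               # USD, the best online price quoted above

    ops_per_surgeon = ops_per_day * working_days_per_year * career_years
    print(ops_per_surgeon)                             # 7500 -- roughly the "10,000 ops, tops" figure
    print(f"${ops_per_surgeon * cost_per_op:,}")       # $375,000,000 of surgery per surgeon's career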

107:

Ok - probably my last comment in this thread since I have gone over this so many times before, elsewhere. Any PostHuman entity is likely to have sufficient mental capacity that running a neural level sim would be as easy for them to do as daydreaming is for us. Where are the laws that protect your dream characters from nasty stuff? And does it matter, because when they "die" they just return to "you". You all seem fixated on the idea of sims as being things run in a Big Black Box tended by "regular people". Sims are mental constructs of PostHuman Minds.

108:

The list of possible motives is probably endless ...

I wasn't actually ascribing possible motives to the resurrectors. Depending on how much freedom or information the resurrectees have, the motives of their "benefactors" may not be relevant.

By subordinate, I just meant not a peer. Whether you pack them off into a pocket dimension or keep them around to study, you are still keeping them at a distance, whether that is physical, technological, or emotional.

But once you give the resurrectees the keys to the kingdom, why they were resurrected or what their resurrectors expected may not matter anymore.

109:

Once sims are created, how are they treated? Do they have the same rights and privileges as the people who resurrected them?

I've mentioned it before, but this is the core idea in Brin's "Stones of Significance": https://www.smashwords.com/books/view/69032

Interesting story, and well worth a buck.

110:

Pain ray for social control, you say?

(Those countries without DARPA-funded weapons research have to settle for bone-breaking water cannon, eye-destroying rubber bullets, and CS gas.)

111:

Actually (104) Generalplan Ost wasn't quite that. (See Mark Mazower's "Hitler's Empire".) The basic idea was to take over the whole of the area as far as the Urals, and either work the population progressively to death or simply exterminate them if they couldn't work, while replacing them with German soldier-farmers settled on their land. The general assumption is that some 30-40 million people would have perished under this scheme, in addition to those who had already died in the war. But this was because they were Slavs and because there were wide open spaces. For central and western Europe, densely populated and unattractive for settlement, what the Nazis wanted was a continuation of the wartime arrangements, essentially looting the economies and the use of Western European labour and industrial facilities. Needless to say, the Nazis thought there was nothing wrong with this: the race was to the swift, and if they didn't do it to the Slavs the Slavs would do it to them. Does make this discussion of ethics and the afterlife a bit iffy.

112:

Mmmmm ... somehow I can't see this taking off, y'know?

Exactly. Because we're in a Certified Ethical Simulation with a properly calibrated Maximum Pain Level.

The question is not whether a particular idea is practical. (We could argue the practicalities of pain implant insertion all day long.*) The question is whether the simulation hypervisor in charge of our universe will allow certain REALLY BAD ideas to come to fruition.

We've only had the technology to implement most REALLY BAD ideas for less than a century, so there's no proof that an ethical simulation hypervisor exists, but we've already dodged a couple obvious bullets. 4-5 centuries from now if we haven't drowned in Gray Goo, blown ourselves up, or toasted the planet... then statistically speaking it will be very likely that there is an ethical component to the simulation we inhabit. (If there is a simulation hypervisor, look for a gosh-wow, deus ex machina solution to global warming to appear in the next ten years.)

* Properly trained tech (a two-year course) who does nothing but implant insertions all day long.

113:

I'll admit that you've got half a point. The first sims will be run by giant corporations and the biggest governments. They'll be the only people who can afford them. The next generation of sims will be run by all national governments and medium-sized corporations. Then provinces/states, counties/districts, cities and small businesses.

Eventually there will be a GPL sim library suitable for running a small universe and it will be incorporated into post-human brains because it is free.

114:

Yes. See also my comment #58.

115:

Charlie, thank you for this thread. I'm having more fun playing with these ideas than I've had for several months, at least!

116:

also @Dirk Bruere

Based on Charlie's post, Federov's goal was not just simulation, but resurrection. That implies returning people to life, either decanted in meat space or instantiated in the same idea space regular living entities call home. At that point, the only way to tell resurrectees apart from the "living" is slapping a Star of David on them.

Maybe I'm not understanding Federov's goal, but it seems like he wants to return everyone who ever lived to life making one giant society of all humanity ever.

Obviously other entities could have differing goals for simulating the dead--good, bad, or indifferent. But Federov seems to want pan-Humanity to exist forever in equality and light. That sounds like a pipe dream, but it is certainly preferable to being brought back only to be made to re-live my original life or some similar facsimile.

117:

It sounds like these guys are talking about creating copies of everyone who ever existed. But even if I can bring into being a copy of myself more exact than Heisenberg allows, that's not survival; it's reproduction. And I think to offer me resurrection you have to offer me survival. Why should it make any difference to me that some distant future superentity is going to emulate me?

118:

Until some sadist keeps it focussed on someone for far too long. Just like repeatedly tasering someone, right? Yuck

119:

Actually, a while back, I took a look at Revelation in the original. It says that the suffering of those consigned to hell endures "for an age of ages." It does not say "forever"; there is a perfectly good Greek word of that era that means "forever" and is much shorter. But instead the author used a long phrase.

Do you think that St. John doesn't count as a serious theologian, or do you think he worded it that way carelessly or accidentally? Either of those would avoid the question.

120:

If it didn't leave a mark it didn't happen. First rule of unofficial torture!

121:

Maybe I'm not understanding Federov's goal, but it seems like he wants to return everyone who ever lived to life making one giant society of all humanity ever.

Federov was a Christian theologian of the variety who figured that, because God had made us, we were the obvious tool He had in mind for carrying out His Plan, i.e. the resurrection and the creation of the kingdom of heaven.

So yeah, there's a slight drawback to Federov's plan insofar as it implicitly created a supreme, eternal Christian theocracy. Not really my cup of tea ...

122:

My understanding is that large chunks of the Bible as agreed upon by the various churches at Nicea are basically fan-fic, and the Revelation of St John The Divine is one of the most notorious examples. It makes for a good triumphalist tub-thumping sermon but it's lousy theology (and I gather some folks have used the imagery in it to work backwards and deduce the precise species of hallucinogenic mushroom John was high on at the time).

123:

One thought I had was the use of a religion or religions to get simulated humans to occupy various types of weaponry.

The glorious dead warriors are drafted into a battle in the afterlife against Demons (another polity with similar tech) with the sim minds being motivated to fight for their god(s).

Could do it a few ways: all of history to provide the right type for specific weapons platforms. Maybe it's not so much our technology, but identifying smart believers who would make useful missile AIs. Maybe it's to provide a core of trained minds to invade other simulated spaces to propagate a specific meme.

124:

Please consider a blog on how "fanfic" in religion becomes orthodoxy and what "religious fanfic" of recent times will end up welded to the orthodoxy of the future. Example: the Rapture, a concept dreamed up in the 19th century, has become an article of faith for hundreds of millions of people who claim to believe in a literal reading of the Bible.

125:

Nope, that's a blog entry for a professional theologian.

126:

There's something oddly recursive going on here. We need a Kingdom of Heaven to reward all the dead souls, and we created people to have dead souls to populate the Kingdom of Heaven, and somehow that means we must be obedient, because an omnipotent god can't both build and stock a Kingdom with souls that are built to his liking unless we are obedient, and if we're not obedient we'll burn in Hell, which wasn't a part of the plan as first conceived, but Sin requires Hell, which is where you'll go if you don't sign on to God's oddly recursive plan.

Are we having problems communicating the workflow of a being who lives outside time to ordinary mortals who only experience time in one dimension?

127:

@119 and 122:

Regarding Revelation, IANAT but it is my understanding that John of the Gospel and John of Patmos are not the same person, rather two people writing good Greek and atrocious Greek respectively.

(That a fan-fic writer is semi-literate, what a surprise)

Alternatively, they are the same guy who, when he wrote the Gospel, had an amanuensis, and when on Patmos (i.e. in the clink) didn't.

128:

Thoroughly OT. (I got nothing to add.)

Perhaps pointless to use the ((())) here, but I'm in. I'm not on Twitter, but will do it elsewhere. איך בין אַ ייִד / אַנִי יהוּדִי

129:

@ all of 122 - 125 Err .. religious SF? Well, start with the most famous ... Dante Alighieri. Humans drafted in to "fight the good fight" ?? Try Norse theology, with Brunnhilde leading the Valkyries to rescue the souls of the best warriors to fight for Odin when the day of Ragnarok comes ....

And Nicea was a late-Roman/early Byzantine POLITICAL fix, with many of the books selected (or rejected) purely upon "local" concerns, both in time & space. ( A N Other reason for rubbishing the Bumper Fun Book of Bronze-Age-goatherders' myths, incidentally. )

130:

Yes, which is why BigSkyFairy is undetectable & therefore 150% irrelevant, even if actually, supposedly existing.

131:

Please translate the apparent Hebrew quote?

132:

Okay, from right to left, first is Yiddish "Ich bin a Yid", then Hebrew "Ani Yehudi", both mean I am a Jew. See this for explanation: https://mic.com/articles/144228/echoes-exposed-the-secret-symbol-neo-nazis-use-to-target-jews-online

Though I'm not sure how long outing ourselves like this, or using it as a show of support, will last. Also slightly worried that will obscure who is using it in anger, though if we only use it for ourselves it shouldn't be a problem.

133:

How do we do it now? Messy demigods decide on saving certain species (whales and tigers) and pay not much attention to others. We pamper our pets and kill other animals in droves. The more power, security and resources we comfortably control, the more are saved, but it goes gradually, because the hive mind needs time to catch up. I am still not seeing the exponential curve in culture (not that there's one in the tech sphere either).

134:

This could be why Americans are so nostalgic for the forties and fifties. There was lots of pain elsewhere in the world - China, Korea, starving Soviets, India and Pakistan... if there was a top limit on the amount of pain in our simulation, maybe they turned down the pain in the US and Canada.

Life in the United States in the 1950s was pretty horrifying for vast swaths of the population. This is the era of mass lynchings of Black men and prescription tranquilizers for white housewives. This was the second red scare, and the first atomic anxieties. The country was not a comfortable place for a lot of people, since the main national hobby at the time was trying to wrench back all the disorder that the war had injected into American life and reassert the traditional American system of bigotries. Don't think of it as idyllic--think of it as being in denial. The turmoil of the 60s was really just the contradictions of the 50s coming into bloom.

135:

Yep, read it (read all of them, of course).

Thought is that living for ever is a curse, not a blessing - which ties in with eastern religions. Further, if you are viewing reincarnation/simulation as a way of perfecting the mindstate such that it doesn't need/want to go on, there has to be a purpose.

That purpose might be a group mind - wouldn't want to pollute one with mindstates that were broken.

Thus if you consider simulation a thing, then it would seem likely that sublimation into a group mind is the endpoint of the activity (cf. Banks's Subliming into godhood).

136:

I'm aware of all this, of course, but mainly I'm playing with ideas about simulations, so don't get all serious on me, kay?

137:

"Culture" Where the revolting Cllr Loakes in Waltham Forest tried to close all the museums & certainly helped in trashing the libraries, or Lambeth RIGHT NOW - where the libraries are closed - but cost more in security guards than if they were open. What's culture?

138:

To answer my own question .. Culture is "elitist" & we can't have that for the uneducated masses, can we? Astoundingly enough, it's the left who are pushing this line right now, same as they seem to be pushing anti-semitism ....

139:

Arguments that suggest we're all living in a simulation have always sounded to me a lot like ontological arguments for god. Huge numbers of unstated assumptions, with a thin topping of sophistry, served up by insufferably smug individuals.

Possibly more relevant to the OP, what is the point of simulating a dead person, anyway? The dead person will still be dead. If you're going to be creating a whole fresh crop of new people anyway, why not create ones which aren't lumbered with all of our emotional, physical and intellectual defects? Simulations of people, warts and all, sound pointless and mean.

140:

Also, SMBC (inevitably) touched on the subject more than once. Separate post in case of being eaten by the anti-spam grue.

http://www.smbc-comics.com/?id=2055 http://www.smbc-comics.com/?id=2535 http://www.smbc-comics.com/?id=2824

141:

If you buy Stephen Pinker's pacification hypothesis, then we're not doing too badly at domesticating ourselves as a species: violent cases are weeded out, so that -- to take one example -- the risk of being murdered if you live in any British city today is about 1-5% of what it was 3-4 centuries ago. Most post-demographic transition societies also seem to move away from the spectacle of cruelty: public executions, animal fighting, and so on. There's still the vicarious media experience of violence as entertainment, but special effects are an even more blatant dodge than the theatricals of the Roman coliseum. (Note that trained gladiators were expensive: people who were deliberately killed as part of the spectacle tended to be prisoners of war or condemned felons.)

This isn't "progress" in some Whiggish faux-historical sense. It's more a case of systematically preventing those who can't control their violent urges from passing these traits on to the next generation, by preventing them from having and raising families -- the traits may well be memes (behavioural patterns) as well as genes (hereditable lack of impulse control).

142:

... there isn't enough time or matter for the experiment. The future is not unbounded...

I'm engaging in post necromancy because this might be a useful point, if only to demonstrate the Strong Agnosticism Problem here. It's true that our current best guesses about how the universe works don't allow for infinite computational cycles. On the other hand...if we're in a simulation we have no reason to think that the world we observe is faithful to the real physics outside. It does disincentivize us from creating endlessly nested sims inside sims. While it's hard to guess the parameters of a "computer" running our universe, I can think of no way to check on what might be outside our celestial server rack.

It's easy to imagine someone asking questions such as "What if light traveled at a finite speed?" or "What if stars had to be really big?" - and those would be the most trivial and uninteresting ones.

143:

There are two, er, threads running through this argument. One is the resurrection of the dead, the other is the simulation of the living. If we are living in a simulation, then we have either been resurrected (in the Buddhist sense of reborn) or we are living for the first time. There are some unpleasant things in this world but I don't see it as Hell. It's much easier to see why a super-intelligence would create a simulation than an afterlife, because a simulation is much more fun and interesting, and you can run it many times, changing the parameters slightly.

144:

You miss the point. Any system that can run a sim which includes conscious Human minds is already de facto a PostHuman super-AI.

145:

"Try Norse theology, with Brunnhilde leading the Valkyries to rescue the souls of the best warrors to fight for Odin when the day of Ragnarok comes ...."

No. Freya gets first pick of the battle dead and Odin then has his choice for the Einherjar of Valhalla. What Freya does with hers is not mentioned.

146:

"Yes, which is why BigSkyFairy is undetectable & therefore 150% irrelevant, even if actually, supposedly existing."

But we are talking about building it. Just because something is false in the past does not mean it will be false in the future.

147:

But it won't be our future. A copy of you in some simulation won't be you. And the AI BSF is only the God of its own simulated world. In the real universe it's a demigod with delusions of grandeur. If we're already in a simulation the same applies.

148:
We've only had the technology to implement most REALLY BAD ideas for less than a century, so there's no proof that an ethical simulation hypervisor exists, but we've already dodged a couple obvious bullets. 4-5 centuries from now if we haven't drowned in Gray Goo, blown ourselves up, or toasted the planet... then statistically speaking it will be very likely that there is an ethical component to the simulation we inhabit.

That's a way of dodging the Great Filter I don't think I've ever come across before!

149:

No, it is a piece of hardware which can run an AI if loaded with the correct program. Whether it is, in fact an actual AI is another matter. Or it might be running multiple AIs inside the simulation of a universe, and those AIs might be incapable of proving that they inhabit a simulation (due to lack of evidence.)

And keep in mind that if we live in a simulation we are those AIs.

150:

...or maybe running a billion virtual Windows boxes.

Anyway, here's the new SMBC. It reminds us that we are stupid and made of meat.

http://www.smbc-comics.com/index.php?id=4131

151:

Culture in a sense of beliefs, standards, general rules etc.

152:

Yup, that is my point. We are still limited by resources, our control of life (animal or even bacterial life) is rudimentary. I am hopeful that this century will be the century of biology and we will master that part, thus it will push us forward. All animals will get promoted into some kind of citizenship and eventually plants, bacteria, fungi and even viruses. If we had resources for perfect, satisfying meat that does not come from animals, of course we would phase out animal slaughter.

First we should be able to master biology in more precise and controllable ways, and then we should be closer to starting on other questions, like what consciousness, intelligence and AIs are. I mean in a more measurable, quantitative way, beyond tests.

I do have a question, it seems that Fedorov considered only saving people, and because of the lack of knowledge of DNA, and how it connects people and other organisms, he didn't consider bacteria as worth saving. Knowledge changes perspective, and I wonder what else is out there that will change our perspective completely.

153:

This is a great explanation. It also agrees with the notion that a perfect world (nirvana) or a distant ideal future is not for those who are not prepared to accept it. Yes, one can be cured of all illnesses, have great knowledge and resources, but feel bad. It's a perfect world but not your world. It reminds me of feral children as the worst example (or some immigrants as the least troubling example). They come to an objectively better world when they are introduced to civilization, but it rarely ends as a happy story. It would be the same for an unprepared mind in a distant ideal future, or nirvana. It's actually (psychologically) better for the troubled mind to stay put in the rut.

154:

It reminds us that we are stupid and made of meat.

Out of curiosity, what is the root of that wording? It's been a personal mantra for the past several years (precisely, "I am stupid and made out of meat") but never written down. Does it date back to the Terry Bisson story THEY'RE MADE OUT OF MEAT?

(FWIW I was always fond of his Bears Discover Fire)

155:

"Stupid" came from the comic and "made of meat" came from the Terry Bisson story. And that is an AWESOME mantra, by the way!

I was thinking a little more about the simulation hypothesis and decided that it explains the mythological behavior of deities fairly well, like the way they're simultaneously omnipotent but also kind of stupid. Given a simulation argument, that makes perfect sense. The God(s) are omniscient about us because they can always look up the records of the simulation. But they're also dumb and drama-filled about each other because they don't know what the other guy/gal working in the lab is thinking.

If anyone wants to note that the Judeo-Christian God acts like a rather difficult senior research scientist and Satan acts like a grad student who's noticed a flaw in God's experimental design... I won't stop them!

156:

"A copy of you in some simulation won't be you"

Neither will the copy of me that wakes up tomorrow be "me" now. It doesn't bother me. Or rather, if I learned that the tomorrow-morning copy of me is due to be executed, it would worry me. There is such a thing as "close enough", and in a sufficiently large multiverse the reconstruction can run from the point of death with no loss of continuity.

157:

If someone decides to execute a simulation of me in the morning then fundamentally they are just deleting a backup.

The backup might disagree, of course, but if Roko's Basilisk or whoever decides to create a copy just so it can hold a gun to its head, then I just tell it that I don't negotiate with terrorists and get on with my life.

158:

Re the OP: think of the enormity of the crime implicit in resurrecting all the lions and all their prey

If we're presuming resurrection via sim, then an ethical sim could, at the point of the conscious predator attack on the conscious prey, fork and remove the consciousness at fork time from the prey in the branch where the predator kills the prey, and leave it running in the non-killing branch. The prey would experience a life of lucky escape from death-by-predator. Not sure if this generalizes fully; e.g. perhaps there are cycles or it breaks in other ways.

I've wondered if there is a line of thinking that justifies ritual slaughter (e.g. kashrut/shechita, halal/zabiha, or many hunter-gatherer lores) in a similar way somehow; haven't been interested enough (as a vegetarian) to look though. Does anybody know?

159:

Do non sentient predators need sentient prey or just a beefed up equivalent to a laser pointer?

160:

It seems that solving time is the solution to immortality problems. Maybe for simulations too? You can have the same space at many different times, so basically an infinite testing ground (space-time). Time - the most elusive of all.

161:

@Charlie Stross (38): Oh, I didn't say it would work, I said that is almost certainly what society would try to do.

@J Carl Henderson (52): "...Any culture that is going to go for large-scale resurrection or universe simulations is almost certainly going to be a post-scarcity economy...The only way I can see any economic limits having any real impact, is if your ressurectees will require some sort of non-AI 24/7 care to continue existing and functioning..."

I was envisioning some sort of combination of cloning and personality simulation/downloading... I don't have details because the OP didn't stress that. The question was who should get resurrected, not how. But my scenario is very deliberately not post-scarcity, both because I don't believe in the more extreme versions of that idea, and because it makes a more challenging intellectual exercise to include them. Given financial constraints on access to resurrection opportunities, how would/could society distribute those opportunities? It isn't fair, but economic competition never is. If you have another scenario that you think makes better sense, please propose it.

By the way, calling my idea "libertarian" is so inaccurate that it's basically just a lazy way to dismiss it without taking the time to engage with an idea of your own (I even included gov't assistance for the poor). Some form of capitalism will likely remain with us for the foreseeable future. Including it is simply being realistic, the exact opposite of a crack-pot ideology like Libertarianism.

Bottom line: if you want social justice, you have to come up with a realistic game plan to get from here to there. "Here" the rich get first crack at cutting edge scientific advances. "There"... well, come up with something.

162:

Do non sentient predators need sentient prey...?

(No?) Personally, I would only resurrect sentient creatures to full status. Not clear at this point what the cutoff is (e.g. mammals/birds, or what, and see the recent paper on insect consciousness) but it would presumably be clear to the sim implementors.

163:
• There is no end to how much can be simulated, even in a finite material universe. You just need an algorithm and infinite time.
• There's no duty to resurrect living things because we care about their inner experiences. Inner experiences are not the proper focus of moral concern. Objective reality is the proper focus of moral concern. Our feelings don't matter, only how we affect the world. Only the world matters. We DO have a moral obligation to maximize the complexity and order of the world, which will also lead us to create a massive algorithm-driven simulator, but it will be compelled to recreate past histories because that is an efficient algorithm for creating copies of itself, thus maximizing complexity. The point is that yes, it will make evil stuff too. Also empirical evidence for that one.

164:

Re insect consciousness, What insects can tell us about the origins of consciousness (Abstract only unless you have magic access; didn't spot a stray pdf in a very brief search.) (Unrelated woot, finally shamed into getting tor browser working and reasonably jailed!)

165:

"There is no end to how much can be simulated, even in a finite material universe. You just need an algorithm and infinite time."

is incorrect. You also need infinite memory.

166:

Of course something human could remain sane and conscious for an eternity. It would merely need to enter a periodic state, i.e. looping, forever...

Doesn't really sound desirable, mind you. (This was the -- self-chosen! -- fate of Peer in Greg Egan's Permutation City. His plot thread sort of... dangles to an end, because really after he's entered an eternal periodic state there is nothing more to say.)

    167:

    If it turned out our world was just a simulation after all, and someone asked me how I felt about being demoted from genuine reality, I'd answer like the computer did to Matthew Broderick in the '80s film War Games, "What's the difference?" Not sure if it's been made into a story yet, but a mashup of the ideas from the Star Trek holodeck episode where fictional 1930s noir gangsters got loose briefly on the Enterprise, combined with the Avatar film concept of remote controlled surrogates, could lead to a plot device wherein AI characters inhabiting a simulation could remotely control occupants of a "real" realm and interact from their "fake" world with the "true" one, of which they are a subset. Lines would blur immediately when considering who's real and who's not.

    168:

    Not really. Let's try and calculate this. The Universe is approximately 26 billion light years in diameter and we can presume that it is roughly spherical. If we break that down to a volume calculation in which r is measured in Planck lengths, the resulting number is, in fact, finite.

    We could even get cute and multiply the result by the total lifetime of the universe measured in units of Planck Time...

    Sorry, still finite. Really, really big, but definitely finite.

    And we could assign each Planck unit a really, really big number - 256 bits would probably do nicely - which defines what's at that location (either part of a particle or vacuum) and how hot/charged that part of the particle might be and voilà! Still finite. Immense, but finite.

    And now we have a complete record of our simulation run. Don't forget to make a backup!
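    For what it's worth, a quick Python back-of-the-envelope using the figures in this comment (the 26 Gly diameter gets corrected downthread, but a bigger radius only nudges the exponents); the constants are rounded and the whole thing is illustrative rather than rigorous:

        import math

        PLANCK_LENGTH = 1.616e-35            # metres
        PLANCK_TIME = 5.391e-44              # seconds
        LIGHT_YEAR = 9.461e15                # metres
        AGE_OF_UNIVERSE = 13.8e9 * 3.156e7   # seconds

        radius_lp = (13e9 * LIGHT_YEAR) / PLANCK_LENGTH      # 13 Gly radius, in Planck lengths
        cells = (4.0 / 3.0) * math.pi * radius_lp ** 3       # Planck volumes in the sphere
        bits = 256 * cells                                   # 256 bits of state per cell
        ticks = AGE_OF_UNIVERSE / PLANCK_TIME                # Planck times elapsed so far

        print(f"cells ~ {cells:.1e}")                        # ~2e183
        print(f"bits  ~ {bits:.1e}")                         # ~5e185
        print(f"cell-updates ~ {cells * ticks:.1e}")         # immense (~10^244), but finite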

    169:

    Re insect conciousness, What insects can tell us about the origins of consciousness

    Full paper is available via http://sci-hub.bz .

    170:

    We actually can determine something about the hardware running the simulation. From the 1972 HAKMEM, an MIT classic:

    ITEM 154 (Gosper):

    The myth that any given programming language is machine independent is easily exploded by computing the sum of powers of 2.

    If the result loops with period = 1 with sign +, you are on a sign-magnitude machine.

    If the result loops with period = 1 at -1, you are on a twos-complement machine.

    If the result loops with period > 1, including the beginning, you are on a ones-complement machine.

    If the result loops with period > 1, not including the beginning, your machine isn't binary -- the pattern should tell you the base.

    If you run out of memory, you are on a string or bignum system.

    If arithmetic overflow is a fatal error, some fascist pig with a read-only mind is trying to enforce machine independence. But the very ability to trap overflow is machine dependent.

    By this strategy, consider the universe, or, more precisely, algebra: let X = the sum of many powers of two = ...111111; now add X to itself: X + X = ...111110; thus 2X = X - 1, so X = -1; therefore algebra is run on a machine (the universe) which is twos-complement.
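    A quick way to see Gosper's point in action, emulating a 64-bit two's-complement register in Python (Python's own integers are bignums, so without the mask the sum would just keep growing, which is the "string or bignum system" case above); this is just an illustrative sketch:

        BITS = 64
        MASK = (1 << BITS) - 1              # emulate a 64-bit register by masking

        def as_signed(x):
            """Reinterpret a masked bit pattern as a signed two's-complement value."""
            return x - (1 << BITS) if x & (1 << (BITS - 1)) else x

        total = 0
        for k in range(100):                # keep adding powers of two, well past the register width
            total = (total + (1 << k)) & MASK

        print(as_signed(total))             # -1: the "register" has decided that 1 + 2 + 4 + ... = -1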

    171:

    Why do you think they resurrected ALL of us? They could just be running 1950 to 2050. They could be running perhaps a few thousand of us at full resolution, but the rest at lower resolution, perhaps using statistical data from other simulations or some other form of jackknifing.

    I don't think it is even possible to resurrect ALL of us, except in some limited sense of resurrecting those about whom enough information is known. They have to simulate quantum mechanics and QM is indeterminate. A simulation starting at the Big Bang, for example, would produce a different set of us, if it produced us at all, each time. It's a similar problem assuming they could start at an arbitrary checkpoint.

    In the 18th century there was some hope that if one could record the universe in enough detail one could predict the future. In the 20th century we learned that this is just not so. It's not even that we can't record enough detail. It's that the necessary detail doesn't exist.

    Of course, that could be one of the simplifying assumptions that makes the simulation possible.

    172:

    Damn! I always knew I was a non-player character!

    173:

    No, you are waffling about "building it". It is a 150%-religious project, which if "successful" will only end in many tears ( "Answer" by F Brown ) & if unsuccessful will end in losing a lot of money - & tears. Why not give up now?

    At the same time, I have a horrible suspicion that we are less than 20 years away from a strong AI, which will probably arrive "by accident". Um.

    174:

    If we had resources for perfect, satisfying meat that does not come from animals, of course we would phase out animal slaughter.

    Oh yeah. And those animals that are carnivores? Or insectivores? Or omnivores, like us?

    Vegetarianism / Veganism is a PERSONAL CHOICE for individuals .... But to put a "social/moral" dimension on to it is ... not stupid, but in some sense ... wrong. Probably because you are "answering" the wrong "question", I think. [ I have expressed that very badly - I might have a n other go, later ... ]

    175:

    THAT has the exact same fault as the christian & muslim versions of "heaven": no cats, no trees & wildlife & "other" life, no richness of the universe. In fact, as has been noted, it actually looks more like hell, or N Korea, as the case may be ......

    176:

    They have to simulate quantum mechanics and QM is indeterminate.

    Err ... no. QM is SIMULATED to be indeterminate in the sim that we are running in. Turtles all the way down. See also SMBC 2055, referenced earlier (!)

    177:

    Give the man (or woman) a sausage voucher! That is superb & I salute you sir/madam. It certainly narrows the search down a bit, doesn't it?

    178:

    Joke's on us if the universe is a distributed system running on a heterogeneous cluster and algebra is only currently running on a twos-complement machine.

    If a load balancer hiccups or a network cable comes loose we might find that we have to reinvent mathematics.

    179:

    "It certainly narrows the search down a bit, doesn't it?"

    Not really, since our serious AI projects are almost all moving to neural architectures, some of which will be analog (memristors)

    180:

    I think this is closely related to the problem I have with the Catholic Church accepting evolution and still claiming there are souls. At some point you have a parent and a child, in all practical ways indistinguishable in qualities, but only the child is ensouled (in this argument, only the child qualifies for reconstruction).

    Which then - of course - makes it a variant of the classic "chicken and the egg" although in that case we do know the answer: the egg, since non-chicken-like chicken ancestors laid eggs (and not because, as Frank Skinner once said, "trust me, the chicken never comes first").

    181:

    I quite like the idea that a lot of the oddities of quantum mechanics come from pragmatic decisions in the design of the simulation. So the Planck length is the LSB of the "where is this thing" variable, Planck time is the clock speed, indeterminacy is because most of the time it doesn't need to keep track of things, and only picks one outcome when it's actually going to be seen etc.

    I don't believe it - but as a programmer I quite like it.
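    In that spirit, a toy sketch of the "only pick an outcome when it's observed" idea -- lazy evaluation with caching, with all names made up, and very much a programming joke rather than physics:

        import random

        class LazyObservable:
            """A property that has no definite value until something looks at it."""

            def __init__(self, outcomes):
                self._outcomes = outcomes
                self._value = None              # not yet rendered by the sim

            def observe(self):
                if self._value is None:         # first measurement forces a choice
                    self._value = random.choice(self._outcomes)
                return self._value              # later measurements replay the cached result

        spin = LazyObservable(["up", "down"])
        print(spin.observe(), spin.observe())   # the same answer twice, chosen only when first asked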

    182:

    I've wondered if there is a line of thinking that justifies ritual slaughter (e.g. kashrut/shechita, halal/zabiha, or many hunter-gatherer lores) in a similar way somehow

    Shechita was, as originally codified, about the fastest and cleanest way of slaughtering an animal: roll it on its back and exsanguinate via the carotid artery; loss of consciousness should occur within seconds. Only the sharpest of knives to be used. (There are a bunch of other requirements/dietary laws, many of which look suspiciously like garbled/mistranslated heuristics for maintaining food hygiene in an iron-age nomadic desert culture without refrigeration or the germ theory of disease.)

    Seriously, though: assuming you're living in an "ethical sim" that forks and replaces you with a p-zombie prior to any fatal interaction, how might you go about disproving this? See also Robert Charles Wilson's Divided by Infinity. Brrr ...

    183:

    Personally, I would only resurrect sentient creatures to full status.

    You either didn't read or didn't fully understand the OP.

    184:

    Inner experiences are not the proper focus of moral concern. Objective reality is the proper focus of moral concern. Our feelings don't matter, only how we affect the world. Only the world matters.

    How ... objectivist ... of you.

    185:

    could lead to a plot device wherein AI characters inhabiting a simulation could remotely control occupants of a "real" realm and interact from their "fake" world with the "true" one, of which they are a subset

    That's the first two chapters of Greg Egan's "Diaspora" in a nutshell. Then it gets interesting. (Go read.)

    186:

    The Universe is approximately 26 billion light years in diameter

    No it's not; remember, spacetime is continuing to expand. Current best estimate for the size of the in-principle-observable universe is that it's about 98 Gly in diameter, but this may still be somewhat wonky, and we've got no way of knowing what lies outside the event horizon. See wikipedia for more on the size of the universe and in particular common misconceptions about its size.

    187:

    I don't believe it - but as a programmer I quite like it.

    If you postulate an underlying reality substrate that, underneath all the layers of sims, has no lower bound of granularity -- I think (my maths isn't up to this) that means it's a Cantor space -- and is unbounded in all or most dimensions then you've got plenty of room for universe-scale sims with boundaries and a minimum scale (e.g. finite state automata on the order of the Planck length).

    Of course, that "ultimate substrate" universe is probably going to look rather different from our own, to such an extent that the question of whether it would exhibit physical properties capable of giving rise to life hurts my head, much less whether they could exhibit introspection and decide to create sims.

    See also Wang's Carpets, from "Diaspora", by Greg Egan (I keep coming back to that, don't I?).

    188:

    No thanks, I mean this is the world I grew up to deal with. The concerns of a century from now are not something I expect to be able to make meaningful input into, let alone the concerns of beings who can simulate universes. I can hope to help leave the world a little better, resolve a little of the present condition, but the people of the future will have their own new concerns.

    And are we talking democratic representation to go alongside this unwished resurrection? How could I meaningfully participate in a polity where the idea of physical reality is understood in radically different terms, where the idea of conscious individuality is likely understood very differently. I would be hopelessly conservative. Why should I take up whatever counts for resources over someone who has never yet been and who is actually suited to the environment.

    189:

    When I go to sleep tonight, the me who wakes up the next morning (barring death, coma, or major illness) will certainly be "me." It will be the same physical entity, as identified by being on the same timelike worldline through relativistic space; and survival is about continued existence as a physical entity. The continued existence of the information content of that physical entity, stored within a different physical entity or medium, is reproduction and not survival.

    190:

    If the simulation is accurate, I'm not sure there's any meaning to the question whether we're "real" or "simulated"; for all intents and purposes, the real and the simulated are the same person?

    191:

    The Universe is approximately 26 billion light years in diameter and we can presume that it is roughly spherical.

    Charlie beat me to it (I've been marking exam papers and trying not to look at other stuff: I am very easily distracted from marking exam papers), but there are multiple things wrong with this assertion.

    First point: I take it you got the 26 billion light years from the age of the universe, but you have forgotten that the universe is expanding. Light that has been travelling for 13 Gyr originated from a source that is now much more than 13 Gly from us, since the universe has been expanding as it was on its way. The Friedmann equation for our universe does not give a tidy expression for the proper distance (1/(sinh θ)^(2/3) is a pig to integrate), but I did a simple numerical integration once to get a plot to show the students, and the horizon distance came out to 3c/H0, which is about 12 Gpc or 42 Gly, for a diameter of 84 Gly: that's a bit less than Charlie's number, but this was a few years ago, so the numbers I used are a bit out of date.
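
    A minimal sketch of that kind of numerical integration, for anyone who wants the ballpark. The parameters below are round-number assumptions (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7, flat geometry, radiation ignored), not necessarily the values used above:

        # Comoving horizon distance for flat LCDM, by brute-force trapezium rule.
        # Assumed round-number parameters, not the commenter's exact inputs.
        import math

        H0 = 70.0                # km/s/Mpc
        c = 299_792.458          # km/s
        Om, OL = 0.3, 0.7        # matter and dark-energy densities (flat, no radiation)

        def invE(z):
            # 1 / (H(z)/H0) for flat LCDM
            return 1.0 / math.sqrt(Om * (1.0 + z) ** 3 + OL)

        # D = (c/H0) * integral_0^inf dz / E(z), truncated here at z = 10,000.
        h, n = 0.01, 1_000_000
        integral = sum(0.5 * h * (invE(i * h) + invE((i + 1) * h)) for i in range(n))

        D_Mpc = (c / H0) * integral
        print(f"comoving horizon ~ {D_Mpc / 1e3:.1f} Gpc ~ {D_Mpc * 3.2616 / 1e3:.0f} Gly")

    With these assumed inputs the horizon comes out around 14 Gpc (roughly 45 Gly), for a diameter a little over 90 Gly - the same ballpark as the figures above.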

    However, second point: that's the currently visible universe, and there is absolutely no reason whatever to believe that what we see is all we get. All the observational data suggest that the geometry of the universe is flat, which implies that it's infinite; only closed geometries are finite in general. (And yes, it can be infinite from the inside while appearing finite from the outside—see Brian Greene's The Hidden Reality for a discussion of this).

    Planck data, plus baryon acoustic oscillations (i.e. analysis of galaxy surveys) give Ωk = 0.000±0.005 at 95% confidence (Planck 2015), which does, of course, still allow a closed geometry, but with a radius of curvature of at least 61 Gpc (200 Gly).

    192:
    Not sure if it's been made into a story yet, but a mashup of the ideas from the Star Trek holodeck episode where fictional 1930s noir gangsters got loose briefly on the Enterprise, combined with the Avatar film concept of remote controlled surrogates, could lead to a plot device wherein AI characters inhabiting a simulation could remotely control occupants of a "real" realm and interact from their "fake" world with the "true" one, of which they are a subset.

    Depending on how you read it, William Gibson's The Peripheral could be exactly that... or something completely different.

    Either way, quite a few ideas about the why and how of such a relationship between worlds, including some which I think haven't been suggested here.

    193:
    Here is another even more immediate answer for why we would resurrect people in sims - because the technology will be available in the lifetime of children now alive and many (most?) will want their dead parents or grandparents back.

    Parents? Grandparents? They lived their life and were done with it. If you want to get "must resurrect this person", try "child who died". (And "spouse who died way before their time").

    194:

    We all know the real money is in pets. One simulated goldfish looks much like any other, and you don't risk tripping the sim up by reminiscing about events that the programmer didn't know about.

    195:

    Getting back to the basic premise, I fail to see why simulating past versions of ourselves is such a compelling thing. Given the ability to simulate whole societies, why focus on a poorly documented collection of ancients from the 21st century? The notion that we are somehow special enough for Roko and the gang to want to simulate seems to presuppose that posthumans are going to retain the basic glitchy primate cognitive architecture, and that such beings will somehow also care deeply about their own pasts to check what-if questions. Without the Christian scaffolding that Fedorov brings in, or a Kurzweilian "I miss my dad" motivation, this just seems absurd. Who thinks they are so special that they should feature in remakes of their time period, or that it is any less DSM-worthy to obsess about how plucking up the courage to smile at that cute person four years ago might have turned out?

    If we are in a simulation, it seems unlikely to me that it is simulating any specific historical society. Bostrom's corollary is worth recalling: unless we are living in a simulation, our descendants are unlikely to find ancestor simulations interesting. By inspecting Ockham's silverware, it is easy to see that dichotomies are usually designed to slice the world into two possible states, both of which are believed to be true by the proponent, so this one seems to be hinting at beliefs that we are living in a simulation, but also that our (simulated) descendants, if any, would have better things to do than to play with models of their quaint (but also simulated) forebears. If the entities responsible for the simulation are doing social science or futurology, then this reality is likely an artificial construct intended to capture aspects they are studying, not some retreaded past. And even if such a simulation were instantiated with a specific known state, it would likely have diverged by now: our cave-dwelling ancestors might have been perfectly copied from a template, but we are by now merely figments of one run of the universal computer.

    TL;DR: positing (A) we are the simulated OR (B) resurrection is the plan, is essentially the same as positing A on its own, as B seems exceedingly unlikely at present. Bostrom said A OR not B.

    Of course, attention may snowball until the Church of Fedorov is a thing, taking tithes in the Valley and directing resources into pushing B. So OGH seems to be sneakily making agenda B more likely by blogging about it, and hinting at forthcoming book coverage...

    196:

    "Who thinks they are so special that they should feature in remakes of their time period,..."

    Me

    197:

    Or perhaps souls evolve and you only qualify for resurrection when your soul is sophisticated enough to understand the difference between good and evil. (There are lots of ways to thread that needle - we have our best apologists working on the problem right now!)

    198:

    Only an amateur programmer here, but I quite like the idea too. I can even imagine the conversation:

    BOSS: So let me get this straight. If we actually simulate quantum reality we'll need 11 more servers and that will put us over budget. Can you guys just have quantum interactions refer to... one of those programmy things you do? A lookup table or a random number generator or something?

    PROGRAMMER: (Sighs) We could, but their scientists will make some very weird assumptions about reality.

    BOSS: We don't care about that. The chairman wants us to figure out which sitcoms will have the best numbers next season, so we don't give a fuck about the physics model. Just make sure we get the social metrics right.

    PROGRAMMER: What about the-

    BOSS: The chairman still loves his idea of a dark comedy where Trump is president, so make sure he wins the election in all sims. Everything else - may the best writer win!

    199:

    You're doubtless correct, but the amount of memory to do a full simulation of our universe is still finite, which was my major point.

    I'm more curious to have comments on the issue of how much memory is needed to properly simulate a "Planck Pixel"; that is, a 4D cell one Planck unit on each side. We'd need to know whether there was anything at that location at all, then what kind of particle (probably part of a particle) is at that pixel, what its charge or temperature might be, etc. Any thoughts?

    200:

    This thread has much Bear.

    Alexander the Great living to a healthy old age (Susan @ 92), a contributor named Nader, universal resurrection by near-godlike computational entity, and indeed, "tears before bedtime" seems to be what you'd expect if the Jarts come calling...

    201:

    Ancestor simulations get run by historians on old servers which can no longer be used for more modern simulation work. They're used in experimental work to check things like our assumptions about the economy of ancient Sumeria.

    202:

    for maintaining food hygiene in an iron-age nomadic desert culture ... Err ... don't you mean Bronze-Age culture? Iron by the time of Solomon & David, but Bronze for Moses - Gideon - Judges
    IIRC, there's a passage where it's noted that the "Philistines" ( Phoenicians? ) have iron & are determined to stop the Israelites getting their hands on the new tech ....

    203:

    Worse than that, Charlie ... How Stalinist of him, actually.

    204:

    And ME TOO ! We are all Spartacus, right?

    205:

    There's even a user-manual, written long ago: The teachings of the Gautama.

    Which reminds me, no-one's mentioned "Lord of Light" yet, have they? ( Or did I miss it? )

    206:

    "It's generally considered that the christian heaven is actually a hell, because it's eternal."

    That's an old one, but you've got it backwards. All scenarios of eternal existence (afterlife, whatever) are hell, no matter how attractive they may look at first sight, because you'd go batty with it sooner or later; the only difference is how long it takes. The Christian heaven is the exception, because being with God is the only way to achieve the impossible and make eternity tolerable.

    See also DavidC @ 103: "...hell was eternal separation from God just as heaven was eternal union." It doesn't make any difference what actual kind of environment the particular postulated afterlife involves; the fire and brimstone and devils with toasting forks shit is just a rather crude and dubiously effective method of conveying the concept, by means of metaphor, to monkey minds primarily adapted to thinking about sex and bananas: you don't really get thrown into a lake of fire, it's just that absent union with God, whatever you are doing is comparably unpleasant after a long enough time.

    207:

    I mentioned Lord of Light at comment 50. The modern version would be "Humans with cloning tech land on a planet where the singularity went bad."

    Damn, I really miss Zelazny. I also just reread Creatures of Light and Darkness... wow!

    208:

    Heh. Massive tangent, but a recent New Yorker had a list of a guard's complaints about Spartacus:

    -

    "Spartacus said he had a secret to tell me, then burped loudly into my ear. I still have some ringing."

    "I noticed Spartacus sitting in his cell, writing. I asked him what he was writing, and he said poetry. I encouraged him, and the next day he showed me his poem. It was a vulgar, obscene poem about my mother. It didn't even rhyme."

    "Spartacus smells."

    "While I was guarding the master's chariot, Spartacus asked if he could drive it, just around the courtyard. Against my better judgment, I said yes. Spartacus whipped the horses into a frenzy and crashed the chariot. Now, as punishment, I am to be crucified. When I told Spartacus, he just laughed."

    "As you know, I have a limp from my service in Gaul. Whenever I escort Spartacus someplace, he imitates my limp."

    "Spartacus is always spitting at me. When I went up to him and asked him to stop, he pretended to think for a second, then spat at me again."

    "I was praying at the shrine to the goddess Minerva when Spartacus tiptoed in and drew a phallus next to her head."

    "Spartacus has never once mentioned all the weight I have lost."

    "When I make my rounds at night, Spartacus pretends to be dead. As I approach closer, he suddenly opens his eyes wide. It scares the daylights out of me."

    "Right before he was to fight in the arena, Spartacus showed up drunk. He could barely stay on his feet. Old Marcellus stood up, grabbed a sword, and offered to fight in Spartacus' place. Spartacus became confused and stabbed Old Marcellus."

    "I loaned Spartacus two hundred denarii, but when I asked him about it he said it was two hundred centarii. What?! It's like a bad joke."

    "Maybe this is mean to say, but sometimes I wonder if the only thing Spartacus cares about is Spartacus."

    "Never let Spartacus see you cry."

    "Spartacus told me he had received a letter from his son. That's good, I said. He looked embarrassed and said there was a problem: he couldn't read. I offered to read the letter. I opened it and there was a drawing of me performing oral sex on a Cyclops, while being raped by another Cyclops. I hate to admit it, but Spartacus is actually a pretty good artist."

    "I have nightmares about Spartacus."

    "Someone left the front gate open, and Spartacus escaped. The gate is still open. Unless it is locked shut, Spartacus may return."

    209:

    "Incidentally, what about Frank Tippler's idea of new universes created by simulation running faster and faster in shorter and shorter intervals of time?"

    Well, it leads to a new (to me, at least) idea for talking religious bollocks about...

    The idea, as I understand it, boils down to available computing power being a function of energy density, which blows up faster than time itself runs out, so subjective eternity is possible in objectively finite time. The Tipler proposition relates to exploiting these conditions at the end of the universe (big crunch), but by symmetry the big bang involves the same conditions happening backwards.

    So we could think of God as the infinite being who existed in the infinite processing power of the first infinitesimal instant of the universe. It's a science-fictional backing for a variant of the "creator exists, but has gone away" idea. The universe is the Mind of God... but his clock speed has dropped through the floor due to expansion. As creatures of the universe, we are all literally children of God. Since information cannot be destroyed but only scrambled to buggery, come the big crunch God's clock speed ramps up again and we all get resurrected in subjectively-eternal union with God. Or something like that.
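
    A toy illustration of the "subjective eternity in objectively finite time" point - my own sketch, not Tipler's actual calculation. If the subjective processing rate blows up like 1/(t_c - t) as the crunch at t_c approaches, the integrated subjective time diverges logarithmically even though coordinate time stays finite:

        # Toy model only: subjective time = integral of a rate that diverges
        # as 1/(t_c - t) near the crunch at t_c. Closed form: -ln(1 - t/t_c).
        import math

        t_c = 1.0  # crunch at t = 1, arbitrary units

        def subjective_time(t):
            # integral_0^t dt' / (t_c - t')
            return math.log(t_c) - math.log(t_c - t)

        for t in (0.9, 0.99, 0.999999, 1 - 1e-12):
            print(f"coordinate time {t:.12f} -> subjective time {subjective_time(t):6.1f}")

    Arbitrarily much subjective time fits in before the crunch; whether real physics lets the rate actually diverge like that is, of course, the entire argument.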

    (Charlie's reply in the next post noted, but it is a looong way from being definite that a big crunch has now been ruled out. It's certainly still possible enough to be considered as a basis for the sort of wildly speculative gubbins we're talking about here.)

    210:

    P.S. Actually, I was addressing Catholicism.

    211:

    As far as a simulation goes, I think we're only concerned with the light cone of the big bang, thus the amount of memory needed for a simulation is still finite. Assuming there's no Big Crunch, we can stop simulating/recording well before the actual end because the last quadrillion years or so - particles moving away from each other, light-years apart, plus the occasional speck of dust - will be really boring; the equivalent of watching static on an old-fashioned analog television.

    212:

    "Creatures of Light |& Darkness" GO TO the current British Museum exhibition ... artifacts recovered from the lost cities of the delta & ALL the gods ( ye gods! ). I was thinking of CoL&D all the way around - and the similarity between Osiris & Yeshua ...

    213:

    Unfortunately, I'm in Lost Angeles, but it sounds amazing!

    214:

    Two late-night points ... The lost/sunken cities of Egypt exhibition also started a music-track in my head - the score & songs of "The Magic Flute" ( Isis & Osiris ) & the "good man" of the Enlightenment, Sarastro ... playing as I walked around. Which loops me back to Charlie's original posting & our subsequent discussion. There is an alternative future for humanity, not encompassed by Fedorov or the "transhumanists", that I was introduced to at a very early age [ My father had bought a very early Penguin/Pelican copy, which I read before I was 10 ]: the worlds & universe as seen in "Last & First Men" & the other works of Olaf Stapledon. The moral compass & outlook, if not "opposed", is certainly completely orthogonal to the drives, desires & stated aims of people like Dirk, here. Discuss?

    215:

    That is the traditional flying car and galactic empire future, which is now off the menu. Once we create general AI it all ends. The only argument is when. My view for the past 30 years has been around the year 2035. However, if it is 2080 it really doesn't make much difference. This century is the last where HSS is the dominant species on earth.

    216:

    I did read it, but disagree a little; IMO it is likely that by the time such a project is started, consciousness will be a solved problem, so it will be clear what is (or was) conscious and what is not. If this is not the case, then your full argument wins. Also, a frugal implementation might plausibly do lazy evaluation of the parts of the sim that are not conscious. (That's what I meant by less than "full status": lazy evaluation.) It's not clear how much saving lazy evaluation would buy, since we don't NOW know where the consciousness threshold lies. If it's at, e.g., Mammalia and Aves, then it probably buys a lot.

    Anyway, that's the expansion of what I was getting at; perhaps still wrong but maybe clearer.

    217:

    Yes, but that doesn't work because if there is no carry over from previous states it is effectively the same as one normal life led.

    And if there IS carry over, you are back to induced insanity and a hell you want to escape from (cf the Matrix).

    The only point of doing it at all is if there is an endstate in sight as a target (eg reincarnation as a tool for nirvana).

    218:

    It's a nice try from the god botherers - but the problem is anything that could survive an eternity is by definition 'not human', and so not you. And you can't get away from it by saying "ahh, but time doesn't exist" for similar reasons (you think because of time; no time, no think).

    At some point you end up recognising that a conscious 'you' can't, and shouldn't, go on forever. Something of what you were might, but it's more knowledge and mindstate than the conscious entity 'you', because that just isn't adapted for eternity.

    219:

    Seriously, though: assuming you're living in an "ethical sim" that forks and replaces you with a p-zombie prior to any fatal interaction, how might you go about disproving this?

    The sim ethics would be mainly for the implementer(s). Re "Divided by Infinity", cool story, thanks for the ref, exactly; disproving it is not subjectively possible, and yeah (if you were suggesting this), such an ethical sim would be a fork bomb. Have been looking for a while for a coherent framework that permits causality violation (CV mechanism unspecified) but doesn't involve a full-blown multiverse. Nothing coherent so far - noodling about some combination of entangled histories (links in a previous thread) and only forking on (non-deterministic) choices made by conscious entities.

    Another ethical edge case is where an entity escapes death but is left permanently harmed somehow, e.g. maimed.

    Re Shechita, I'm sort of remembering some explanation where the controlling deity (or sim manager) essentially enfolds the consciousness of the creature about to be killed and comforts it, and maybe gives it an afterlife somehow. (Perhaps it was just a personal thought, not sure.) Looked for 30 minutes via Google and didn't spot anything.

    220:

    That is the traditional flying car and galactic empire future

    Personal abuse is not permitted on this blog, but you are tempting me. So, instead, I suggest, very strongly, that you fucking READ "Last & First Men" & "Star Maker", because it's blindingly obvious that you have not done so & don't know what you are talking about. ( ! )

    What's more: Once we create general AI it all ends. HOW DO YOU KNOW? Got any, you know, EVIDENCE for that? WE are a general AI, after all, or it could be the future of the Culture, couldn't it? More quasi-religious millennial emoting - I won't say "thinking" because it's all too obvious that none was involved.

    Grrrrrr .....

    POLITE REQUEST I would like to hear/see Charlie's take on this future/moral dilemma / prediction?

    221:

    I read them decades ago

    222:

    @Ian at 218:

    I blame Plato & co, for the idea that nothing matters unless it is incorruptible and eternal. Someone should have asked him whether that made a rock superior to him, as it underwent less change. Humans have continued to spout this shit ever since, to earn brownie points for being "spiritual", while at the same time being creatures that fundamentally lack interest in the changeless. Our perceptual systems are evolved to detect things moving and changing. (The rock won't eat us, the moving thing might.) We also need plot arcs, what Frankl called meaning.

    223:

    It makes some sense. If you have to struggle to survive all the time then leisure time is precious.

    If a bit of leisure time is good then a lot of leisure time is better and an infinite amount is paradise! Almost as good as having an infinite supply of fatty food!

    224:

    I have 2 main problems with the whole simulation argument.

    1: technically it is possible for evidence to exist. There are certain interesting physics experiments which should get certain results if we're in certain types of simulation. Until then I'm happy to wave it away as religion.

    2: Any world where it's possible to do this much computation is very very different from our own.

    Even if all we want to do is count up to 2^256 (never mind actually doing anything more than incrementing a single counter) in order to try all possible variations of a state machine with 32 bytes of writable memory, then even if you captured every joule of energy from a supernova and had a perfectly efficient computer, you'd still only be able to run through a tiny fraction of the possible states.
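
    A rough check of that claim using the Landauer limit (k_B · T · ln 2 per irreversible bit operation); the supernova energy and the operating temperature below are my own assumed round figures:

        # Back-of-envelope: how many irreversible bit operations does a supernova buy?
        # Assumed figures: ~1e44 J of kinetic/light output (nearer 1e46 J counting
        # neutrinos), and a computer running at the CMB temperature of ~3 K.
        import math

        k_B = 1.380649e-23        # J/K
        T = 3.0                   # K
        E = 1e44                  # J, assumed

        ops = E / (k_B * T * math.log(2))
        print(f"bit operations available: ~1e{math.log10(ops):.0f}")
        print(f"2^256 is:                 ~1e{256 * math.log10(2):.0f}")

    On those assumptions you get somewhere around 1e67 operations against the roughly 1e77 needed, i.e. short by about ten orders of magnitude, which is the commenter's point.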

    I mean, I'm sure it will eventually be possible to have some kind of simulation with intelligences inside, but actually simulating reality closely enough to re-create someone is... unlikely, unless information theory is utterly wrong.

    Even in those higher-order universes this should be an issue, since they'd need to cut out so much of the universe's complexity that re-creating individuals is likely to become a problem.

    225:

    Then why did you claim "That is the traditional flying car and galactic empire future" when it wasn't & isn't? The first men (us) are almost totally mutually exterminated in L&FM & a new humanity, the Second Men, arises tens/hundreds of millennia later, for instance. Well?

    226:

    It's been a while but IIRC the first men do spend a lot of time zipping around in personal aircraft before the coal runs out and they all die of not having a plan B.

    So he might have read the first chapter.

    227:

    CS solved that problem (Glasshouse?) 10 years ago. If you have the power to load your previous lives, you can have both continuity and a fresh start. It now seems that designing/loading/changing minds in silico is easier (faster, for sure), but that could be because we lack knowledge of biological processes. I am interested in the implications of the time difference, considering that our bio minds and lives are painfully slow. To be honest we don't need that much detail in our simulation: while there could be nonlinear cases where butterfly wings cause a storm, mostly it's a simple case where things fall down to the lowest energy state with slow degradation.

    228:

    The modern technological approach to caching a human mind in software would probably be to emulate the human brain as a virtual machine. At the very least this requires a simulation which is capable of dealing with neurochemistry and neuro-electric behavior. This probably does not require Planck-Scale accuracy, but it certainly means we need to know what a gigantic number of individual electrons are doing... the memory requirements would be finite, but stunningly large by modern standards.

    In time it will probably be possible to simplify this a lot via some equivalents of compression and reducing redundant pathways and redundant calculations while still accurately recording a human mind, but that won't happen until we can properly run a simulated brain so we can figure out what's important and what's not.
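
    For a sense of how "stunningly large", here's a crude ballpark. The neuron and synapse counts are commonly quoted figures (~8.6e10 and ~1e15 respectively); the per-element state sizes are pure guesses for illustration:

        # Crude ballpark of state storage for a cell-and-synapse-level brain emulation.
        # The byte counts per element are guesses, not measured requirements.
        neurons = 8.6e10
        synapses = 1.0e15
        bytes_per_neuron = 1_000_000   # morphology, channel densities, local chemistry
        bytes_per_synapse = 1_000      # weights, transmitter pools, timing state

        total_bytes = neurons * bytes_per_neuron + synapses * bytes_per_synapse
        print(f"~{total_bytes / 1e15:.0f} PB of state")  # comes out around an exabyte

    That's storage alone, before you simulate any dynamics; change the guessed per-synapse figure and the answer moves by the same factor.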

    229:

    You're right, but only for the physical laws/energy requirements in this universe. It's worth keeping in mind that one of the goals of any simulation I ran* would be to make sure the people in my simulated universe don't run complex simulations of their own.

    What I find interesting is that if we are in a simulation, this discussion is generating enormous amounts of information about the memory requirements, processing speed, etc., of running that simulation.

    * Or that you ran, if you're looking to keep your simulation on track and get the results you want.

    230:

    Imagine this: You've got a really modern, powerful machine set up to do virtualization, and you decide to virtualize a human mind. Nothing special. No super-duper general AI, just human-level thinking. How much memory do you need? How many separate CPUs do you need to make this happen at something resembling the speed of human thought? How much storage do you need? How much code do you need to write? What are the characteristics of a programming language meant to emulate human thought? At what scale do you need to emulate physical processes in the brain? What shortcuts are available to you? How do you get this mind you're building to do calculus without thinking about it in order to catch a ball? (Or any of the other things a human being can do "without thinking?") What advancements in artificial stupidity do you need to make to support your artificial intelligence? Do you need to emulate "muscle memory?" If so, what algorithms do you need to write to make this happen?

    And those are just my amateurish thoughts on the obstacles...

    2035? Twenty years? Not gonna happen. But we might have brain/computer interfaces before 2100, in which case AI will have a human component and be available to anyone who can afford the surgery.

    Also, when thinking about AI, you might consider the Graphic Omniscient Device from When Harlie Was One and the issues it had with latency.

    231:

    It doesn't have to be brain surgery, and maybe non-invasive techniques for affecting the brain are achievable in the near future (>10 years?) for some scenario where someone would want to punish meat people.

    Relatedly, here is a "badass" new method for affecting behaviour using a genetically engineered protein that responds to magnetic fields. They did some tests with zebrafish and mice.

    https://www.theguardian.com/science/neurophilosophy/2016/mar/24/magneto-remotely-controls-brain-and-behaviour

    Maybe at some point it would be possible to have a non-invasive delivery mechanism of those proteins to regions of the brain?

    I don't know how useful this would be for some subtle pain thing, since I don't know how one would surreptitiously deliver sufficiently powerful magnetic fields. Or even make some sort of thinking cap that does so.

    I am a bit interested in non-invasive techniques for affecting the brain because perhaps the side effects would be better than the medication I take for bipolar 2, and perhaps it could help more with the depressive states than the medication does.

    232:

    There is a project for simulating the mouse and human brain that was started and led by Henry Markram. It was based on simulating every cell and connection with one computer. I am loosely paying attention, but after 1 billion and lots of buzz, it seems there is a lack of progress.

    233:

    @Troutwaxer, 228:

    "In time it will probably be possible to simplify this a lot via some equivalents of compression and reducing redundant pathways and redundant calculations while still accurately recording a human mind."

    Well, I am so hopelessly non-technical that I don't even know when my trout needs waxing, but I am nevertheless impelled to wonder what sort of human mind we are talking about emulating here. People like Charlie's guests, or the old codgers and biddies on the bus who could probably be modelled on a modern laptop or even a Sinclair Spectrum? Because they have very, very limited cognition and speech. "What I always say is (random bigotry from menu)" Remember Alf Garnett, anyone?

    This would be plenty good enough for market research à la Daniel Galouye, see my 23. If the godlike wants the inner lives of poets and mystics, it will be harder.

    234:

    From Charlie's post 102: Let us posit that within each set of all contemporaneous humans there exist social graphs of people who do not significantly harm one another. (Most folks aren't actually into deliberately harming their neighbours: we don't feel good about inflicting pain, and it takes intoxication, regimentation, or dehumanization to force us to do so.) I've read that groups of primates confronting each other tend to form two lines facing each other, and scream and yell till one side (usually, I think, the invaders) retreats.

    And it hasn't changed a lot: I've read that there was a study done after WWII, and that of men in real firefights, against a Real, Approved Enemy... only 15% (or was it 5%?) actually tried to kill their opponents, while 70% or so fired in the air, hoping to chase them off, and the rest just kept their heads down.

    So, yeah, there may be some hope for humanity.

    mark
    235:

    S.L.A. Marshall and "Men Against Fire". However, "improved" training changed those statistics such that by the Korean War, it was allegedly 50%, and by Vietnam it was allegedly 100%. Note that these statistics reflected US Army training outcomes for WWII - other armies would have had different statistics, according to their training methods.

    It does demonstrate your point, however - it appears that there are reassuringly deep-seated taboos against deliberately taking life.

    236:

    Going back to the idea of resurrecting everyone and having a lovely society where everyone gets to mingle with everyone else...

    I think I'm going to go and reread Michael Marshall Smith's "Hell Hath Enlarged Herself" and think positive thoughts.

    237:

    What I get for not looking at this thread over the weekend....

  • Sorry, saving those whales was not some demigod or superhuman, it was a Russian from the future (come on, that had to be said).

  • About "being in the Presence" as what make eternal life tolerable... just how much in the Presence... y'know, reading that, in this context, I wonder if that means we become part of God (tm), just good little proven subroutines or VMs in the alleged deity.

  • Susan, post 191: so, gee, it is big enough for my preferred model: a torus. No zero, no Big Crunch, no "where did it all come from?!!!"

  • Where do we put them all, once we're all resurrected? I mean, do you really want the great-grandparent of that mixed-race couple there, in the prime of life, responding to them? Nor, I suspect, would I like to be around with my late wife, and her great-grandfather there to discuss our marriage, me of Yankee and Jewish ancestry, and her a native Texan (or, possibly, a Texian) and WASP.... Hell, I remember an ex's mom, the weekend of the wedding, looking at my father and literally crying, making comments about children with hooked noses (!) (No, he didn't have one, either.)

  • And about that Galactic Empire... sorry, I know, I really have to buckle down and finish my Famous Secret Theory....

    Btw, Charlie, it was a pleasure to meet you at Balticon, sorry I never got to share some of the Balvenie. I did, however, connect with an agent, and submitted a first novel...

    mark
    238:

    But that's the thing. Sure, you can simplify some things for a simulation, but any reality where matter and energy are such that you could actually simulate this universe in the kind of detail we find in this universe, never mind simulating billions of versions of it... that universe must have very, very different physical laws. And if it has very, very different physical laws, then ancestor sims get difficult: a simplification in the sim that changes how atoms interact even slightly would utterly throw off any chance of recreating your ancestors simply by recreating the conditions.

    239:

    WWI Where German conscripts encountered Brit regulars at Mons & thought they were facing machine guns .... Um, err .... I passed the other day a plaque set into the pavement in N Finchley, commemorating a man who had lived in the house by: the first man to be killed on the Brit side in WWI, 26th August 1914 ... Um.

    Oh yes: dbp @ 226 Not quite ... on running out of coal/oil the first men discovered nuclear power, but it went very badly wrong, a set of chain-reactions was set off, resulting in a "Deccan Traps" scenario - there were very few ( We now know implausibly few ) survivors ....

    240:

    My recollection was that nuclear power was discovered twice.

    First time by a bunch of scientists who weaponised it, repented and cunningly buried the knowledge so well it was lost for centuries.

    Then civilisation lasts a few hundred years on coal, collapses.

    Then another civilisation of somewhat less healthy and viable first men arises with a different variant of nuclear power and wipes itself out with said chain reactions.

    It has been a long time since I read it though.

    Of all the types of "men" I think the second and fifth were the only ones I would want to meet. The first men were a bunch of idiots :)

    241:

    @dbp 240:

    Well, they would be idiots, they were us...

    Is it just me, or is OS's concept of "man" a bit peculiar? When you have very late men on Neptune looking like bunny-wabbits, in what sense are they "men" – or is he using that word where we would say "sophonts"? That's if the rabbits were sapients or precursors of sapients, I honestly don't remember (if Brin's Uplift universe gets an illustrated encyclopedia, similarly for others, why can't Stapledon have one?)

    I mean, suppose Sid the sloth evolves into us, and then zillions of years ahead our descendants say, "It was very good to have been Sid"?

    242:

    Stapeldon's "men" were .. all the descendants of the First Men (us) who were "thinking" IIRC.

    To cut back to the chase, though. How very, very different is this set of visions from the millenial eschatological crap being sold by the followers of Fedorov. ( I'm looking at YOU, Dirk! )

    243:

    The first time I encountered the idea of the world being a simulation, it immediately occurred to me (and I'm sure that I'm not unique in this thought) that one does not run a simulation in order to identically recreate something. Rather, one does simulations in order to introduce a variable and see how that introduction affects everything subsequently in comparison to the original.

    So, if we're in a simulation, was the introduced variable a positive one, or a negative one? Are we better off than the world of which we're a simulation, or worse off? And, which aspect of our simulation was varied? It would not be obvious, as from within the simulation we wouldn't have any comparison points to identify the point of departure.

    It also is likely to be something that we would not consider to be an element that would be interesting to vary. If you can do one simulation, chances are you can do lots of them. So, our world may be the simulation in which brown hair coloration is the dominant allele, rather than the original in which ginger hair coloration is dominant and brown is recessive.

    244:

    When I make up a world, it's to run tabletop rpgs in it. I strive for verisimilitude, so far as my abilities permit. But insofar as it's different from our world, the aim is, first, to give the player characters agency; second, to make their situation interesting and different; and third, to provoke emotional reactions from the players. None of those would be served by running an exact copy of our world.

    On the other hand, there are simulators who do want to have things turn out as much like the empirical world as possible. A historical modeler whose world persistently had world empires founded from New Zealand and Tierra del Fuego would be looking for what had gone wrong with the model.

    245:

    Sure, the first simulation is likely to be as exact a reproduction of the original as possible, to verify that your simulation-generating process doesn't itself introduce novel elements. After that, if you are going to make more simulations, you'll do it to probe the elements constraining the original. And so there are going to be lots more simulations with variation. Once you have the correct control, it's all about the variables.

    246:

    "...that one does not run a simulation in order to identically recreate something."

    A common mistake. There will not be one simulation, but billions.

    247:

    Well, you try and make your preferred future real, and I will try to make mine real. And may Darwin be the referee. How's it going so far? ( I'm looking at YOU, Greg! )

    248:

    I think both futures look pretty crap, and would like some better options.

    (I'm looking at YOU, screen!)

    249:

    Given the philosophy-of-mind assumptions required to make simulation possible, aren't p-zombies a conceptual impossibility?

    (That is: it's possible to suppose that simply simulating a person's complete set of functional and behavioural dispositions isn't enough to make them conscious, so that a p-zombie identical in behaviour to me could exist unconsciously - but on that basis, simulated people aren't people either.)

    250:

    Assumption 1: It is possible to simulate a human brain without exactly simulating all the cells down to the quantum level.

    Assumption 2: The simulation is being run for the benefit of beings within rather than just being a universe that happens to give rise to them.

    Assumption 3: Computer resources are finite, simulation designers are motivated to cut corners.

    It would make sense to me to have a separate neural simulation with I/O hooked to the simulated bodies, rather than go to all the effort of simulating meat at the quantum level and hoping it works.

    For most cases the brain could just be inert grey goo and you can pretty much ignore it apart from the damage model.

    So if I designed a sim under those constraints then mind body duality would be a real thing and people would have souls.

    Not sure how to test whether or not people are ensouled, though. An algebra test or being over the age of 14 did it for one of PKD's dystopias, but it isn't exactly rigorous :)
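
    A toy sketch of that architecture, with entirely made-up names, just to show the shape: minds simulated outside the physics, bodies as cheap objects carrying little more than a damage model, the two joined only by an I/O channel.

        # Hypothetical sketch of "separate neural sim with I/O hooked to simulated
        # bodies". Names and structure invented for illustration only.
        from dataclasses import dataclass, field

        @dataclass
        class Body:
            # Physics-level object; the brain inside is inert grey goo.
            intact: bool = True                      # the damage model
            pending_percepts: list = field(default_factory=list)

        @dataclass
        class Mind:
            # Lives outside the physics sim entirely - a "soul", from the inside.
            name: str
            memory: list = field(default_factory=list)

            def think(self, percepts):
                self.memory.extend(percepts)
                return f"{self.name} reacts to {percepts}" if percepts else None

        def tick(world):
            # Push percepts from each body across the I/O channel, collect reactions.
            for body, mind in world:
                if not body.intact:
                    continue                         # dead bodies get no mind time
                action = mind.think(body.pending_percepts)
                body.pending_percepts = []
                if action:
                    print(action)

        world = [(Body(), Mind("Alice")), (Body(), Mind("Bob"))]
        world[0][0].pending_percepts.append("rain")
        tick(world)

    In a setup like this, dualism is literally true inside the sim: nothing you could do to the simulated meat would ever turn up the mechanism of thought.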

    251:

    dualism, not duality. Gah.

    Also, a consequence would be that uploading minds would be impossible, as the thing you are trying to copy is smoke & mirrors. AI would be possible of course.

    252:

    "Given the philosophy-of-mind assumptions required to make simulation possible, aren't p-zombies a conceptual impossibility?"

    Do you want fries with that?

    253:

    In my book TechnoMage I went into a number of strategies one could use to overload a resource-poor, neural-level sim.

    254:

    If it looks as though the sim is going to run out of resources then it can trivially be paused until more are available or shut down. Either way you wouldn't be able to tell from the inside.

    The only case you could detect is "operators allow service degradation". Me? If I caught the AI pets trying to fork bomb my server I would switch 'em off.

    255:

    Your reply is based on an utterly false assumption. I am merely suggesting that Stapledon's future is more likely, & in some respects, though not all, preferable to your religious/millennial project. I also have the horrible suspicion that "your" project will turn out badly, as badly & for the same reasons as christianity, islam or communism. But you refuse to even acknowledge the possibility of such, which is what gives me the shudders ....

    256:

    I have no idea what you mean.

    257:

    @dpb 251:

    "dualism, not duality"

    What's mind? No matter. What's matter? Never mind.

    Gah. I'm with Thomas Hobbes instead – I think, ergo matter is capable of thought.

    Wish I could better remember the Gilbert Ryle I studied at Oxford. It was about mental states being a "category mistake" or misapplication of language: there is no pure occult mentation preceding and causing an act in the material world; rather, mentation consists of intelligent action.

    This was 1949, before there were sims.

    258:

    I agree with you, in reality.

    My point is that matter being capable of thought is not necessary for a sim, and even if it is it may not be the most efficient implementation.

    259:

    I suspect it's a variation on that comment about the usefulness of philosophy to one's career:

    Q. "What does a philosophy student say after they graduate?" A. "Would you like fries with that, sir?"

    260:

    Actually, a comment on the vast majority of "conscious beings" I meet in this Sim. NPCs. Only brought to full resolution when I interact with them in a meaningful, non-scripted, non-trivial manner.

    261:

    The idea that you'd only use sims for one thing is kind of silly. For example, one might use a sim to confirm that what we believe about the Roman Empire is true. That is, you put in "Romulus and Remus," then see if you get Julius Caesar 500 years later. Then in another sim, you create a Rome which doesn't use lead pipes - do you still get Caligula? Then you create an ancient Rome which is built for students of your Roman History 101 class to experience ancient Rome... and all these are valid uses of sim time, right?

    262:

    Not sure if Russell's Paradox (known by that name, yes) and Gödel's Theorem are connected that directly actually? Will see if there's anything about it I can find that's not handwritten in the margins of... etc.

    263:

    Well, ok, Russell's Paradox is actually -this- (refreshing my memory from Wikipedia, but yes, I have read this stuff, back when I was a math student etc. half a lifetime ago, and some of it more recently besides...)

    "Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves. "

    Not quite what Martin wrote in comment 12, not even equivalent to it. My bad!!

    264:

    @Troutwaxer 261:

    Very minor niggle. AFAIR, the problem in Rome wasn't so much the clean-water pipes as mulling wine in lead vessels. Which would not affect the poor.

    265:

    Although it is consistent with a simulation hypothesis, your observation that most people really resent being confronted with someone going off script is equally well explained as a consequence of the first law of thermodynamics (it requires extra effort to process non-scripted interaction, and living beings usually try to avoid unnecessary effort). Therefore, Ockham says: no.

    266:

    No, I am explaining why the Sim does not have to be anywhere near as complex as you might assume if most people are NPCs.

    267:

    What on earth (emphasis chosen carefully) makes people think that resurrection of the long-dead is in any meaningful way possible? You may be able to recover their DNA, in some cases. That doesn't give you back the person; as mentioned above, the best it can give you is something like a separated twin. The simulation hypothesis does not help with this, because you don't have the information to put into the sim.

    Federov may have hoped that God would solve the problem for us, but that seems very unlikely to me; if he exists, he's likely to treat this project as usurping his position.

    The "all possible people, all possible minds" answer runs into a fundamental limit: there isn't enough matter in the universe to store all the data on.

    Overall, this seems like people wanting to play at God without considering the engineering problems.

    268:

    If most people you interact with are then either the attempt to simulate everyone has failed or you are being kept away from the rest of humanity for a reason.

    269:

    Damnit. "are NPCs."

    This NPC has a broken editor/sanity checker.

    270:

    Well, the point of doing that was that the acetic acid in gone-off wine would react with the lead to form lead acetate, which is sweet, and so it fixed the taste. Would it not be the poor who would be most likely to have gone-off wine that would need such treatment? Or did they still not do it because they couldn't afford the lead vessels, or something?

    271:

    What on earth (emphasis chosen carefully) makes people think that resurrection of the long-dead is in any meaningful way possible?

    One suggestion is that rampant oversharing on the internet may leave enough computer-readable evidence to configure reasonably plausible chatbots. (Going by what I see on Arsebook, they wouldn't have to be very sophisticated chatbots either.) That may not be true, or nobody would want to do it, but it's hard to disprove; individual humans are getting preposterously well recorded these days, and by governments and machines rather than by our nosy neighbors. It wouldn't be an exact replication, just so close that nobody could prove it was off - imagine a human personality as a range in phase space rather than as a point.

    I've suggested humorously that we're in a simulation exploring what Charles Stross might have written if the Eschaton novels hadn't been surprise hits. That would imply OGH is run at full resolution all the time, the wife and cats likewise, and high resolution for some supporting NPCs such as Greg Tingey. Most of the rest of us can be reassembled from public records and internet scat; offhand I can think of only three events where I've physically coexisted with other people on this blog. It seems to me that I have a continuous physical existence and do things when I'm away from the computer...but how would you tell? To you I'm just words on a screen.

    272:

    Which reminds me, Blackwell's is doing a book launch on 2016-06-24. Presumably Charlie will announce it in due course.

    273:

    I'm pretty sure I'm an NPC.

    I've certainly never done anything of world-shatnering importance, and I'm not anywhere near the top in politics, business, or even my own social circle. But I don't feel like an NPC, and I'm equally certain that I'm not a P-Zombie - my inner and outer lives are much too complex for that. So I have to conclude that if I'm not a P-Zombie but I am an NPC, then something is wrong with the idea that most of the simulation consists of NPCs who are P-Zombies!

    274:

    That is a great idea! From personal experience, I would give an arm and a leg to resurrect one dear person who passed away. The reasons are purely selfish, not very ethical at all, and one can always count on a selfish motive. However, if I can have a valid substitute (let's be honest, we don't need or even desire full-blown person resolution here), that would be great.

    I believe that Martine Rothblatt is doing something like that for her spouse (making an artificial personality embedded in a computer). That girl is impressive; it seems that creating organs from genetically modified pigs is going to pay off (off topic, but I had to mention it because there was recent news about a human/pig chimera).

    275:

    I've suggested humorously that we're in a simulation exploring what Charles Stross might have written if the Eschaton novels hadn't been surprise hits.

    Maybe some unethical publisher is exploring the possible phase-space of Charlie Stross books, and we're high-level NPCs in one of thousands of Charlie Stross simulations. (Personally, I feel that the decision to abandon the Eschaton books is exactly the kind of thing we'd see in such a simulation - why else would any author who wants to pay his bills abandon a series in which both novels had been nominated for Hugo Awards?)

    I believe that in the "real" world Charles Stross was dying, and his publishing house uploaded him to a couple thousand simulations which involved multiple starting conditions. Our particular simulation starts with OGH "remembering" that he didn't like something about the Eschaton novels. There is another simulation where he decided to write a sequel to Accelerando and another where he writes nothing but adult "choose your own adventure" novels, and another where he writes pure Lovecraftian horror, including a Necronomicon which actually drives the readers insane - that server had to be shut down with extreme prejudice!

    Every once in awhile the servers running instances of Charlie are culled, and those which don't produce high-midlist books or better are wiped, along with all their inhabitants. Then a new Charlie-Server is instantiated by copying the backup from a successful Charlie-Server and something happens that moves that Charlie in a slightly different intellectual direction, like some friend handing him a book by Teilhard de Chardin instead of Nikolai Federov...

    Two notes to our sim-administrators. First, I'd like ten-million dollars. Second, can we read the Charlie Stross novels from the other sims?

    276:

    ...servers running instances of Charlie are culled, and those which don't produce high-midlist books or better are wiped, along with all their inhabitants.

    Hey Charlie, no pressure or anything!

    277:

    See the Black Mirror episode Be Right Back for how that is likely to turn out.

    278:

    @various regarding the NPC issue:

    The people of the city in which I live are deeply solipsistic. Where else would people stop dead at the top of escalators? I don't think they all grew up with the simulation hypothesis, though, so as to disbelieve in the reality of others – perhaps they are just badly coded.

    One thing, however, relating to going off-script. Youngsters here seem incapable of processing unexpected (non-smartphone) input. Clerks go "What you on about?" and then, as you are on your way out of the shop, too late, the original question finishes percolating through the concrete and they answer it. That's what you get from Andras' thermodynamics, plus laziness, plus this being the world's second- or third-rudest culture.

    Seems to me that Troutwaxer has launched a new term for what used to be called the hoi polloi, the populo, the canaille. NPCs of the world, unite, you have nothing to lose but your – what? Let's have a competition!

    279:

    Read the synopsis, she did keep it right despite the flaws? Sounds like artificial dead people will be a good stock buy.

    280:

    NPCs of the world, unite, you have nothing to lose but your predictable trajectories?

    281:

    ...poor pay and long hours? I think we need a union!

    282:

    Every once in awhile the servers running instances of Charlie are culled, and those which don't produce high-midlist books or better are wiped, along with all their inhabitants. Then a new Charlie-Server is instantiated

    Which is a significant part of Brin's story (along with rights for some instances). Sorry to sound like a broken record, but I think you'd enjoy it.

    283:

    I think I read it several years ago. If so, it definitely ties into my posts above.

    284:

    Every once in awhile the servers running instances of Charlie are culled...

    That's hard on the divergent Charlie instances, but low level NPCs are probably just reloaded from archives. Why vary the wallpaper?

    Or...should they? Over on the next server rack, is Steve Sandford replying to Sturgeonpolisher? Would that change their Charlie instance in any measurable way? Hm; that's exactly the sort of thing someone would have to fork multiple worlds to test, too - we might not even be testing anything interesting!

    285:

    @Scott at 284:

    "Over on the next server rack, is Steve Sandford replying to Sturgeonpolisher?"

    Meanwhile, the oak trees are beginning their great migration to the north and my mother is contentedly laying eggs...

    Robert Schrecklich, IIRC.

    286:

    All this discussion about NPCs has had me digging frantically for a half-remembered quotation from way before the age of simulations and NPCs. I've even resorted to finding a couple of my printed books of quotations and still failed.

    It's by someone famous - my memory says Thomas Hardy but my memory is notorious for getting things like that wrong - and along the lines of: how do you know that most people exist? Those people you meet on the street or on the bus could all be ghosts.

    I think we're far enough down the thread to throw a random niggle like this in and plead for help in assuaging my curiosity.

    287:

    like some friend handing him a book by Teilhard de Chardin instead

    No, that would be even further past Upney than even Fedorovskianism (?) would lead you. See also: Julian May

    288:

    What a wonderful insult to use on the idiots of the world: "Hey, NPC, your coding's fucked!"

    289:

    Also "all you Zombies" by Heinlein.

    290:

    @Nick 286:

    FWIW I don't remember that in Hardy. Seems to me that it's such an obvious thought to have, as expressed in old-school technology (ghosts), that it could go all the way back to the Greek philosophers. Question in 101 too, I reckon. Philosophical idealism...

    Apart from conservatives, of course. It is my experience that they believe the world to be just exactly how it seems to be at first blush, and have no common, much less metaphysical, curiosity. Well, thanks to this thread we know what to make of them.

    291:

    "But I don't feel like an NPC, and I'm equally certain that I'm not a P-Zombie"

    That's what all NPCs and P-Zombies are programmed to say

    292:

    Yesterday I ran a simulation of a dead friend of mine. Turing Test capable, in surroundings indistinguishable from reality.

    293:

    You ran a simulation. And yet he's still dead. Try asking it a personal question about its early life. Make sure you don't already know the answer. You have run a p-zombie. There's no way this can have continuity. Your friend is dead and will remain so. You need a "soul" for continuity. An implant which you use during your life for conscious thought and which is integrated with the brain might become an artificial soul. Without this, a simulation - however complete - is just a simulation. I'm assuming you don't have such an implant, so your expectation of resurrection is just wishful thinking - like all religion.

    294:

    Well, since you obviously didn't read it I will repost: http://ieet.org/index.php/IEET/more/bruere20121015

    For example, one of the most plausible methods for reconstructing the dead of past ages, or at least us, is from records such as DNA, medical records, photos, writings, videos and so forth. The argument runs that if a simulation of (say) myself could be created such that in the simulation I am writing exactly these words at exactly this time it would be a fairly accurate reconstruction of the historical “me”, long dead. However, would that really be true? The counter argument is that it is only a copy, albeit possessing its own life and sentience and not the “real” me at all. I remain dead. To actually be me the copy has to have identical brain states with the original. Unfortunately some fairly crude back of envelope math suggests that the necessary information output from a person is insufficient by at least three orders of magnitude to select one unique state from a possible 10^10^16 states.

    Hence most of the data used in the reconstruction e.g. “what I had for breakfast on 1 January 1990” has to be a guess that, although possibly having an effect on defining my mental state, may not affect the words that I am typing now. Does it matter? When is a copy good enough to truly be “me”. The answers are unknown, but my own view is that if I am to be brought back from the dead there should be a subjective continuity of consciousness which an imperfect copy cannot possess. Otherwise, it is someone else – a nearly identical twin, but not the real me.

    However, in the multiverse this may not matter because somewhere we can achieve perfection in the reconstruction. This is where the multiverse can “rescue” the situation because part of the reconstruction process can randomly guess what should fill the information gaps. The result of making that random guess is a spread of possible versions of the deceased across the multiverse, including at least one that exactly matches the original. So, the entity doing the reconstruction gets a resurrected person that exactly matches all the data they have of the original – which is as accurate as they can get. On the other hand, somewhere in the multiverse a true and perfect copy is produced that has the requisite continuity of consciousness.
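
    For what it's worth, the back-of-envelope arithmetic in that quoted passage is easy to redo in rough form. The 10^10^16 state count is taken from the quote; the lifetime-output figure below is an assumption for illustration, not the article's:

        # Rough redo of the quoted shortfall argument. State count from the quote;
        # the "recorded lifetime output" figure is an assumed, generous guess.
        import math

        log10_states = 1e16                                # states ~ 10^(10^16)
        bits_to_pick_one = log10_states / math.log10(2)    # ~3.3e16 bits

        lifetime_output_bits = 8e12   # assume ~1 TB of video, audio, text, records

        print(f"bits needed to select one state: ~{bits_to_pick_one:.1e}")
        print(f"bits plausibly recorded:         ~{lifetime_output_bits:.1e}")
        print(f"shortfall:                       ~{bits_to_pick_one / lifetime_output_bits:.0e}x")

    Even with a generously documented life, the guesswork has to fill a gap of several orders of magnitude, which is the article's point.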

    295:

    The resurrected person is just a copy. Therefore "I" will not be resurrected. Similarly any other "me" in a multiverse is not me. You're just clutching at straws.

    296:

    If the copy of you reproduces that post with 100% accuracy it's an NPC, not a version of Dick which has some level of free will. Thus what your copy really needs to do is reproduce all your posts with 100% philosophical accuracy, but only 99% accuracy in phrasing, proving that it thinks like you but still makes the occasional random word/editing choice, or even simple typos.

    On the other hand, we can't reliably believe the copy to be an accurate copy unless it reproduces your posts with absolute 100% accuracy. So your copy can't possibly be perfect under any circumstance.

    You can call this "Troutwaxer's Paradox."

    297:

    Problem with sims: the supposed NPCs "who" are important to the "humans" in the sim, particularly commensals & others. I'm thinking especially of Cats, of course. In a sim, how would the programmers come up with "personalities" & behaviour-patterns as different as (in order): Hermann, Fledermaus, Small Birman ( = Bastet ), Hexadecimal & Stripeymonster & now the current Ratatosk? Whose latest trick is to jump behind the flatscreen, oozle round in front, lie down, stick nose under back of keyboard, roll over, back towards human & demand tummy tickled? At the second iteration of this, he gets put onto the front windowsill & it takes him somewhere between 5 mins & 2 hrs to go around, in one of the back catflaps, & repeat the process ( Unless I feed the little monster, of course ) Details, all the time details ....

    298:

    So? The "you" that wakes up tomorrow morning is not the same "you" now.

    299:

    Not if this is a resurrection data-fitting Sim. Innumerable branches will be wound back (reversible computing) when they output the wrong "me". It's an iterative process that cannot be short-circuited because of the Halting Problem.

    300:

    I'm thinking especially of Cats, of course. In a sim, how would the programmers come up with "personalities" & behaviour-patterns...

    It seems to me that would be one of the easier tasks. Any successor civilization that tried modeling us should have terabytes of cat videos and related information, allowing them to model cats with great accuracy. Indeed, during the R&D phase the production team would probably get Felis catus working long before Homo sapiens. Millennial humanity is working hard to make cats the second best documented species; humans are the only other creature with a remotely comparable internet presence.

    301:

    @Scott at 300:

    I can just see the rarified academic debates (cf "Solaris" the novel) as to whether Felix catus really ate cheeseburgers.

    302:

    The me that wakes up tomorrow will not be identical to the me typing this answer but will be almost the same and will also have continuity (including dreams). And it will still be me. If you "resurrect" a copy of me it will have the illusion of continuity but it won't be me. This copy might give some comfort to relatives if I die (or maybe the opposite if they hate me) but it won't be me. I doubt if Roko's Basilisk would get much comfort from torturing it but even if that happened IT WOULDN'T BE ME. You're just trying to make your religion real. Don't you trust your God?

    303:

    ...one of the most plausible methods for reconstructing the dead of past ages, or at least us, is from records such as DNA, medical records, photos, writings, videos and so forth...

    A meaningful question is how much information is needed for this; we don't know yet.

    DNA and medical records might be handy for the body simulation subroutine, or a low-res version made from generic templates and occasional photos might be good enough for many purposes. Anyone who keeps a diary is going to be replicated with much better accuracy. On the other hand we're already getting used to the idea that Google knows more about us than we do, at least in certain ways.

    Given the millennial oversharing explosion, it's plausible to start simulations no earlier than around that point unless there's a specific reason otherwise. (I'm about OGH's age; I think I remember experiences from the 1970s...but if I'm a simulation how can I know if I really lived those events?) It's good enough for a story anyway.

    If the author is willing to allow for lower fidelity it's reasonable to fill in low-resolution NPCs made up from little more than hints and educated guesses. Nobody can prove those people didn't exist, right?

    On the other hand, maybe it takes a lot of data to construct even a half-assed brain emulation. If we have to do a neuron-by-neuron map of the whole CNS before simulating a specific person with acceptable accuracy, then not only are such sims going to be a lot rarer but we've got good reason to believe this blog isn't within one.

    304:

    I can just see the rarified academic debates (cf "Solaris" the novel) as to whether Felix catus really ate cheeseburgers.

    Youtube says yes, they did. grin

    This brings up another thought, though. Never mind simulating humans, either well enough to fool them internally or just as online chatbots. How about simulating cats? It's easier in many ways but harder in that you can't just write a verbal chatbot and hope the audience is fooled by semantic blithering on a screen.

    Consider feline celebrities such as Grumpy Cat and Maru; they both have many followers and plenty of humans are fond of them. They are also cats and won't live that long (they're four and nine years old respectively). Some humans would certainly be willing to watch a simulated Maru sit in virtual boxes... Perhaps the upload afterlife really will be full of cats?

    305:

    Never mind the cats, it's the pigeons I'm concerned with. Watching the cartoon "Valiant" it was painfully obvious that nobody involved in making it had the foggiest idea about pigeon behaviour, nor about how they move, nor even about how to represent them in visual images. You'd think knowledge of this sort of thing would be a pretty fundamental requirement for making a cartoon where pigeons are the main characters. But although the cartoon was much slated by reviewers, none of them even noticed what was most wrong with it. Nor, apparently, did anyone else. Such ignorance passing so unnoticed does not fill me with confidence that anyone making a simulation would make a decent fist of incorporating pigeons in it.

    306:

    No, I am explaining why the Sim does not have to be anywhere near as complex as you might assume if most people are NPCs. I think you're underestimating the problem of incrementally generating consistency amongst NPCs resolved on demand. The linkages amongst social beings like humans are both easily navigated in the current digital age, and easily tested for sensibleness, so IMO if you resolve a NPC to the point where it can feel real when one seriously interacts with it, one might as well instantiate an entire network of NPCs to a few hops out, to increase the odds of being able to maintain consistency and consistent shared knowledge systems. (Learned a new related word: egregore.) Not sure. Since you're a hardware designer (I think?) perhaps there are analogies to many-generation incremental design that can be drawn upon.
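    To make the "resolve an NPC together with its neighbourhood" idea concrete, here is a minimal Python sketch under my own assumptions: the NPC class, the names, and the idea of "resolving" as a simple flag are all hypothetical stand-ins for whatever a real sim would do when it generates a backstory.

    ```python
    from collections import deque

    class NPC:
        """Hypothetical stand-in for a lazily resolved simulated person."""
        def __init__(self, name):
            self.name = name
            self.resolved = False   # has a full personality/backstory been generated?

    def resolve_with_neighbourhood(start, social_graph, hops=2):
        """On serious interaction with `start`, resolve it plus every NPC within
        `hops` links, so shared facts ("we met at the wedding") are generated once
        and stay consistent across the little network."""
        frontier = deque([(start, 0)])
        seen = {start}
        while frontier:
            npc, depth = frontier.popleft()
            npc.resolved = True            # stand-in for generating a backstory
            if depth < hops:
                for neighbour in social_graph.get(npc, []):
                    if neighbour not in seen:
                        seen.add(neighbour)
                        frontier.append((neighbour, depth + 1))
        return seen

    # Toy usage with made-up names:
    alice, bob, carol = NPC("Alice"), NPC("Bob"), NPC("Carol")
    graph = {alice: [bob], bob: [alice, carol], carol: [bob]}
    print(sorted(npc.name for npc in resolve_with_neighbourhood(alice, graph)))
    ```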

    (Re a closed recent thread where you commented about Piracetam, tried it just to see what you've been talking about so much. Not subtle for me, and more interesting, the effects closely matched the effects of an aggressive meditation system worked out a couple of months ago, right down to the what-I-now-know-are acetylcholine-deficiency symptoms: afternoon headaches and mental fatigue. Since the meditation system is quite tunable and reversible, will be sticking with it, now with choline supplementation. So thanks for that.)

    307:

    The third best documented species on the internet is probably vampires. The simulators might have a learning curve.

    308:

    You can NOT step into the same river twice. So?

    309:

    Some humans would certainly be willing to watch a simulated Maru sit in virtual boxes...

    Neko Atsume has more than 10 million downloads, so I think you can count yourself correct there.

    312:

    Since we're on the subject of antireligious SF ...

    (I know we're not directly there, but it's sorta-kinda relevant)

    My current fave is Iain M. Banks' The Hydrogen Sonata; my reading is that even when an afterlife beckons, finding solace and closure in the here and now remains relevant. A sort of reiteration of this anarchist critique of religion: to praise the heavens means to slander the earth.

    The other one is Ijon Tichy's 21st sally, from Stanislaw Lem (I don't know in which English edition of Ijon Tichy's reminiscences it was published). In it we meet robot monks who talk about their belief and deliver a completely immanent deconstruction of religion (reductio ad absurdum is Stanislaw Lem's superpower). Recommended reading, not only for the atheists around here.

    Also tangentially relevant: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

    Core thesis: brains do not store and process information but interact with the world; the metaphor of brains as computers is bad because this mode of thinking eventually hampers research, but no one knows a better one yet. Any thoughts?

    313:

    Any thoughts?

    It's entirely true that brains aren't Von Neumann architecture computers, or even any variety of finite state automaton. Viewed on one level they're just gigantic fatty, juicy endocrine glands that squirt out pulses of neurotransmitters in response to chemotactic stimuli from other tissue types. But they are made of neurons which undergo clear-cut state transitions (axon depolarization) in response to inputs, and neural networks do exhibit some fascinating algorithmic behaviour (note I'm avoiding the word "computational" here, although that's often used synonymously in contemporary discourse), and you can emulate neural networks on finite state machines, so there's a deep level at which brain structures may be equivalent to computers.

    Ditto a previous century's metaphor that brains and biological systems are like intricate clockwork. Mmm, no, not really, that's why we sent off for a better metaphor; but some of the things biological systems do can be approximated by mechanical systems, and metaphors are useful for analogical reasoning.
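    As a toy illustration of the "clear-cut state transitions" point: a handful of McCulloch-Pitts-style threshold units, each of which only fires or doesn't, can nonetheless compute a definite function. This is a deliberate caricature for illustration, not a claim about real neurophysiology.

    ```python
    def neuron(inputs, weights, threshold):
        """Fires (returns 1) iff the weighted sum of inputs reaches the threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    def xor_net(a, b):
        """Three threshold units wired to compute XOR."""
        h_or = neuron([a, b], [1, 1], 1)          # fires if either input fires
        h_and = neuron([a, b], [1, 1], 2)         # fires only if both fire
        return neuron([h_or, h_and], [1, -1], 1)  # OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_net(a, b))
    ```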

    314:

    Brains and computers:

    Well, my ten cents is that the computer metaphor makes too much of a passive awaiting of data to process, which is not at all how humans work. Conclusions first, then data. Brains are for justifying action already willed, and leading others up the garden path.

    315:

    Re the accidental skynet in Elite, I ate lunch today with a friend who has worked 30 years in AI (neural nets), and he mentioned offhand that one of his contacts at the U.S. DoD told him that they are building Terminator robots. Friend is reliable, don't know about the contact, so I'd put it at p==80% true. We shared stories about recent surprises in AI and he opined that the Deepmind AlphaGo result was a surprise that was unlike any he had ever seen in AI. More surprises seem likely - it's AI summer as Dirk likes to say.

    316:

    Well, my ten cents is that the computer metaphor makes too much of a passive awaiting of data to process, which is not at all how humans work.

    There is a camp that has been arguing that humans (and many (most) animals) are predictive, at many scales. Ably argued by Lisa Feldman Barrett at edge.org.

    Many predictions are at a micro level, predicting the meaning of bits of light, sound, and other information from your senses. Every time you hear speech, your brain breaks up the continuous stream of sound into phonemes, syllables, words, and ideas by prediction. Other predictions are at the macro level. You’re interacting with a friend and, based on context, your brain predicts that she will smile. This prediction drives your motor neurons to move your mouth in advance to smile back, and your movement causes your friend’s brain to issue new predictions and actions, back and forth, in a dance of prediction and action. If predictions are wrong, your brain has mechanisms to correct them and issue new ones.

    See Interoceptive predictions in the brain for an example paper. (I have just skimmed it, but it seems relevant.)
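    The predict-then-correct loop described above can be caricatured in a few lines: guess the next sensory sample, measure the surprise, and nudge the model before the next sample arrives. The signal and learning rate below are arbitrary illustrative choices of mine, not anything from Barrett's papers.

    ```python
    # A toy sensory stream (three repeats of a bump), with a predictor that
    # updates itself in proportion to its prediction error.
    signal = [0.0, 0.1, 0.4, 0.9, 1.0, 0.9, 0.4, 0.1, 0.0] * 3
    prediction, rate = 0.0, 0.3

    for sample in signal:
        error = sample - prediction   # the "surprise": sensed minus predicted
        print(f"predicted {prediction:+.2f}  sensed {sample:+.2f}  error {error:+.2f}")
        prediction += rate * error    # correct the model before the next sample
    ```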

    In unrelated news, the sci-fi/horror story being written as comments on Reddit, known as The Interface Series (complete set at the link), seems to be wrapping up. I'm unreasonably amused at the last several episodes. (Don't ask. Slight shifts in weights in a big network of possibilities. Am very easily amused.) See Wikipedia (search for 9MOTHER9HORSE9EYES9) if you're unfamiliar with this whatever-it-is.

    317:

    Surely you know that if the universe is infinite...

    Then, in any volume of space, the energy and matter can be rearranged in only a finite number of ways (finite being a very big number).

    Therefore, if the universe is truly infinite, then somewhere there must exist a volume of space identical to our solar system as I type this, and a volume of space identical to our solar system 5 seconds ago, and a VoS identical to our solar system except I have red hair.

    Not only that, there exists an infinite number of VoSs that are identical to our solar system, and an infinite number that are identical to our solar system except one photon is a little different, and ....
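    The "finite number of ways" step can be given a rough physical gloss via the Bekenstein bound, which caps the entropy (and hence the number of distinguishable states) of a region of given radius and energy at S <= 2*pi*k*R*E/(hbar*c). The solar-system figures below are loose order-of-magnitude assumptions of mine, only to show that the count comes out finite, though absurdly large.

    ```python
    import math

    hbar = 1.054571817e-34   # J*s
    c = 2.99792458e8         # m/s

    # Rough assumptions: a sphere out to Neptune's orbit, energy dominated by
    # one solar mass of rest-mass energy.
    R = 4.5e12               # m
    E = 2.0e30 * c**2        # J

    log_states = 2 * math.pi * R * E / (hbar * c)   # natural log of the state count
    print(f"at most about 10^{log_states / math.log(10):.1e} distinguishable states")
    ```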


    About this Entry

    This page contains a single entry by Charlie Stross published on June 2, 2016 4:20 PM.
