
Three arguments against the singularity

I periodically get email from folks who, having read "Accelerando", assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it's time to set the record straight and say what I really think.

Short version: Santa Claus doesn't exist.

Long version:

I'm going to take it as read that you've read Vernor Vinge's essay on the coming technological singularity (1993), are familiar with Hans Moravec's concept of mind uploading, and know about Nick Bostrom's Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you're missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It's probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven't you'll have missed out on the salient social point that posthumanism has a posse.

(In passing, let me add that I am not an extropian, although I've hung out on and participated in their online discussions since the early 1990s. I'm definitely not a libertarian: economic libertarianism is based on the same reductionist view of human beings as rational economic actors as 19th century classical economics — a drastic over-simplification of human behaviour. Like Communism, Libertarianism is a superficially comprehensive theory of human behaviour that is based on flawed axioms and, if acted upon, would result in either failure or a hellishly unpleasant state of post-industrial feudalism.)

But anyway ...

I can't prove that there isn't going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won't work, or that we are or aren't living in a simulation. Any of these things would require me to prove the impossibility of a highly complex activity which nobody has really attempted so far.

However, I can make some guesses about their likelihood, and the prospects aren't good.

First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of their own.

(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense "conscious"? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers — it's possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences ... lest we inadvertently open the door to inhumane uses of human beings as well.)

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don't want to be sued for maintenance by an abandoned software development project.

Karl Schroeder suggested one interesting solution to the AI/consciousness ethical bind, which I used in my novel Rule 34. Consciousness seems to be a mechanism for recursively modeling internal states within a body. In most humans, it reflexively applies to the human being's own person: but some people who have suffered neurological damage (due to cancer or traumatic injury) project their sense of identity onto an external object. Or they are convinced that they are dead, even though they know their body is physically alive and moving around.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on its external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.
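
(A toy sketch, purely editorial and not from the original post, of what a recursively self-modeling agent with its "self" pointer retargeted at an external human might look like. All class and method names here are hypothetical.)

    # Hypothetical sketch: reflexive modeling retargeted at an external human.
    class Human:
        def __init__(self, needs):
            self.needs = set(needs)

        def wellbeing_after(self, action):
            # Crude proxy: how many of the human's needs the action serves.
            return len(self.needs & action.serves)

    class Action:
        def __init__(self, name, serves):
            self.name, self.serves = name, set(serves)

    class Agent:
        def __init__(self, principal):
            # The agent's "self" model IS the human it is assigned to;
            # there is no internal self to compete with the human's needs.
            self.model_of_self = principal

        def choose(self, actions):
            # Score actions by their effect on the modeled "self" (the human).
            return max(actions, key=self.model_of_self.wellbeing_after)

    me = Human(needs={"food", "rest"})
    butler = Agent(principal=me)
    print(butler.choose([Action("cook dinner", {"food"}),
                         Action("watch TV", set())]).name)  # -> cook dinner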

Uploading ... is not obviously impossible unless you are a crude mind/body dualist. However, if it becomes plausible in the near future we can expect extensive theological arguments over it. If you thought the abortion debate was heated, wait until you have people trying to become immortal via the wire. Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death. People who believe in an afterlife will go to the mattresses to maintain a belief system that tells them their dead loved ones are in heaven rather than rotting in the ground.

But even if mind uploading is possible and eventually happens, as Hans Moravec remarks, "Exploration and colonization of the universe awaits, but earth-adapted biological humans are ill-equipped to respond to the challenge. ... Imagine most of the inhabited universe has been converted to a computer network — a cyberspace — where such programs live, side by side with downloaded human minds and accompanying simulated human bodies. A human would likely fare poorly in such a cyberspace. Unlike the streamlined artificial intelligences that zip about, making discoveries and deals, reconfiguring themselves to efficiently handle the data that constitutes their interactions, a human mind would lumber about in a massively inappropriate body simulation, analogous to someone in a deep diving suit plodding along among a troupe of acrobatic dolphins. Every interaction with the data world would first have to be analogized as some recognizable quasi-physical entity ... Maintaining such fictions increases the cost of doing business, as does operating the mind machinery that reduces the physical simulations into mental abstractions in the downloaded human mind. Though a few humans may find a niche exploiting their baroque construction to produce human-flavored art, more may feel a great economic incentive to streamline their interface to the cyberspace." (Pigs in Cyberspace, 1993.)

Our form of conscious intelligence emerged from our evolutionary heritage, which in turn was shaped by our biological environment. We are not evolved for existence as disembodied intelligences, as "brains in a vat", and we ignore E. O. Wilson's Biophilia Hypothesis at our peril; I strongly suspect that the hardest part of mind uploading won't be the mind part, but the body and its interactions with its surroundings.

Moving on to the Simulation Argument: I can't disprove that, either. And it has a deeper-than-superficial appeal, insofar as it offers a deity-free afterlife, as long as the ethical issues involved in creating ancestor simulations are ignored. (Is it an act of genocide to create, and later switch off, a software simulation of an entire world and its conscious inhabitants?) Leaving aside the sneaking suspicion that anyone capable of creating an ancestor simulation wouldn't be focussing their attention on any ancestors as primitive as us, it would make a good free-form framework for a postmodern high-tech religion. Unfortunately it seems to be unfalsifiable, at least by the inmates (us).

Anyway, in summary ...

This is my take on the singularity: we're not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we're going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs "intelligently". But it will be the intelligence of the serving hand rather than the commanding brain, and we're only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there'll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run Nozick's experience machine thought experiment for real, I'm not sure we'd be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.

Finally, the simulation hypothesis builds on this and suggests that if we are already living in a cyberspatial history simulation (and not a philosopher's hedonic thought experiment) we might not be able to apprehend the underlying "true" reality. In fact, the gap between here and there might be non-existent. Either way, we can't actually prove anything about it, unless the designers of the ancestor simulation have been kind enough to gift us with an afterlife as well.

Any way you cut these three ideas, they don't provide much in the way of reference points for building a good life, especially if they turn out to be untrue or impossible (the null hypothesis). Therefore I conclude that, while not ruling them out, it's unwise to live on the assumption that they're coming down the pipeline within my lifetime.

I'm done with computational theology: I think I need a drink!

Update: Today appears to be Steam Engine day: Robin Hanson on why he thinks a singularity is unlikely. Go read.

405 Comments

1:

Another possible path: if mind uploading is possible and Moore's law holds in the long term, you can have uploaded people, with very real motivations to do good/bad/meh things, running much faster than "realtime" (which you've also explored in your fiction).

Anyway, the problem with any discussion, about any subject, is when people take the subjects being discussed too seriously. I usually go out to drink with a bunch of friends and discuss the we-don't-know-if-it's-coming-Singularity-but-it's-oh-so-fun-to-talk-about-it. :-)

2:

if mind uploading is possible and Moore's law holds in the long term

The second clause does not impress me; we're very close to running up against the buffers already, because current lithography processes have pushed down to the 20nm scale -- we're dealing with chips whose signal paths are only a few dozen atoms wide!

Obviously once we can't push the resolution down any further we can build additional layers on top, but at that point (a) heat dissipation becomes a real problem, and (b) production costs stop going down, because the cost of adding performance by layering scales at least linearly with the number of additional layers.

Quantum computing might buy us some extra time, but in that field we're still roughly where classical computing was in 1945.
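
(Editorial back-of-the-envelope, not part of the comment above: how wide those signal paths are in atoms, using silicon's lattice constant of roughly 0.543 nm.)

    # Rough scale check: silicon lattice cells across a given feature width.
    SI_LATTICE_NM = 0.543  # approximate silicon lattice constant

    for feature_nm in (40, 20, 13, 5, 1):
        cells = feature_nm / SI_LATTICE_NM
        print(f"{feature_nm:>2} nm feature ~ {cells:5.1f} lattice cells wide")

    # At 20 nm a path is still a few dozen atoms wide; below about 1 nm
    # there is literally nothing left to etch -- that's the buffer the
    # comment is talking about.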

3:

"Uploading implicitly refutes the doctrine of the existence of an immortal soul"

I don't see how or why. I think you underestimate the adaptiveness of theology in the face of change.

4:

The book that gave me nightmares about the religious consequences of uploading was Surface Detail. I can all too easily imagine fundamentalists (of a variety of religions) using the arguments used in the book for creating hell and putting uploaded people (assuming such a thing is possible) there.

5:

human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way.

This reminds me of some of the stuff in Peter Watts' Blindsight, where he argues that sentience as opposed to intelligence is a dead-end, by introducing aliens with the latter but not the former.

6:

I don't see how or why.

Two options:

a) Uploading is a cruel hoax -- there is an immortal soul and it belongs to God. You can't suck it out and stick it in a box (because what you're mapping is the connectome of a neural network, which by definition is clearly physical and not immortal), so what's in the box is a soulless zombie (or, more accurately, one of Dennett's zimboes). Alternatively, the uploading process does transfer the immaterial soul along with the connectome, but it is now trapped forever in a box, deprived of access to its creator.

b) There is no immortal soul; the neural connectome is what matters, and the uploading process demonstrates this explicitly. In which case, those who have died and rotted aren't going anywhere because they have ceased to exist.

Reactions:

(a) Is going to generate the same cognitive response as either abortion ("Uploading is killing people!") or some other blasphemy ("Trapping souls in computers deprives them of God's love").

(b) Is bound to cause massive cognitive dissonance among those who cleave to faith in a religious afterlife, not just for themselves but for their loved ones.

It takes a rather sophisticated mind to reconcile belief in an immortal, immaterial soul destined for a deity-decreed afterlife with mind uploading.

7:

By the way, I didn't believe that you are a proponent of a hard take-off Singularity spawned by AI. But the risks associated with artificial intelligence constitute a possible existential-risk scenario and therefore have to be examined. Someone like you, who thinks a lot about the future, might be able to provide some insights when it comes to other possibilities that should be taken seriously.

I am very interested in your opinion on charitable giving and how existential risks fare in this regard. What is the best possible way to benefit humanity by contributing money to a charity?

Here is a question I have already asked various people before you: What would you do with $100,000 if it were given to you on the condition that you donate it to a charity of your choice?

8:

I'm not sure I'd want to rely on an argument that went anything like:

"Nobody will want to build an X... Y won't happen unless we're self destructive"

Especially when X sounds really cool, and Y is a possible consequence of it.

And doubly especially when I could name half a dozen people who would build an X tomorrow if they could, and damn the consequences. Including me.

If it's possible to build self-regarding AIs, and I can't think of a reason why it wouldn't be, and humans are clever enough to do it, then it will be done, eventually.

Simon.

9:

"I don't see how or why. I think you underestimate the adaptiveness of theology in the face of change."

Likewise...

I can very easily imagine the religiously inclined going to considerable lengths to try to devalue a technologically facilitated "afterlife" as a poor imitation of The Real Thing, but I really don't see it shaking the foundations of belief.

Let's face it, there are plenty of people out there who don't seem to have any difficulty reconciling the achievements of medical science with pilgrimages to Lourdes, faith healing, etc...

10:

The singularity is pretty much a downer, so purely from personal preferences I would hope it doesn't happen.

11:

Alternatively, the uploading process does transfer the immaterial soul along with the connectome, but it is now trapped forever in a box, deprived of access to its creator.

That isn't consistent with believing in an omnipotent God, though I grant that a lot of religious people aren't terribly consistent about a triple omni God.

I don't think you've addressed the question of whether self-improving AI is possible in general, just whether it's possible based on a human template.

I'm inclined to think that self-improving AI is much harder than it sounds-- increasing intelligence rapidly without breaking itself and without having a model to work from sounds incredibly difficult, and maybe impossible.

12:

If you can construct a human-equivalent intelligence in software and switch it on, it's probably going to start by spending a few months emitting white noise and trying to eat its left foot, like J. Random Baby. Learning to interact with the physical environment is hard.

There may be shortcuts that let us build a personality without jumping through these hoops, but they're not something that's going to be available from the starting gate.

And I suspect that trying to come up with a conversational AI that doesn't first spend a lot of time learning about the world around it isn't going to deliver anything that we have much use for.

13:

"Is going to generate the same cognitive response as either abortion ("Uploading is killing people!") "

Well, abortion is widespread and the cognitive response didn't kill religion.

And I don't think anything you said refutes the doctrine of the immortal soul (and it's always been debatable as to how 'immortal' the soul actually is).

14:

Coincidentally, Robin Hanson posted today on why he doesn't believe in the singularity through self-improving AI. It's quite an interesting read: http://www.overcomingbias.com/2011/06/the-betterness-explosion.html

15:

I'm inclined to think that self-improving AI is much harder than it sounds -- increasing intelligence rapidly without breaking itself and without having a model to work from sounds incredibly difficult, and maybe impossible.

Yup.

Put it another way, we're talking about assembling an AI by mimicking an existing model, but there's no evidence that this model isn't so highly optimized for its evolved purpose (being the operating system for a shaved plains ape) that it can't easily be tweaked.

There's some recent research in the topology of the human genome as a physical structure, stored in a cell's nucleus, that suggests when analysing this type of biological system you need to look not only at the linear information content (the codon sequence) but at how it is physically stored (the conformation of the fractal globule) to understand why different chromosome regions may be activated as they are. I suspect we're also going to discover that the connectome of the human brain has similar physical topological "gotchas" that affect the structure of our consciousness in non-obvious ways.

16:

What would you do with $100,000 if it were given to you on the condition that you donate it to a charity of your choice?

That's a hard one. I'd know what to do with $100 (pick six deserving charities[*], roll a die, donate), or with $100M (set up a foundation, use it to push policies), but $100K is just the wrong size.

[*] My choices would be (a) Médecins Sans Frontières, (b) Amnesty International, (c) the Open Rights Group (UK equivalent of the EFF), (d) PEN, (e) Edinburgh Dog and Cat Home, and (f) the pub counter for a round (because I don't know of any environmental pressure groups that don't harbour an irrational phobia of nuclear energy: otherwise, they'd get it).

17:

a) Uploading is a cruel hoax b) There is no immortal soul

I think that c) you've "trapped" the soul in a box would be an option.

While we're on this thought experiment: I tend to agree that uploading will never happen. Not without a huge increase, 10^6-fold or more, in our computing and storage abilities, plus likely a totally different type of computing concept.

(a) Is going to generate the same cognitive response as either abortion ("Uploading is killing people!") or some other blasphemy ("Trapping souls in computers deprives them of God's love").

Yep. Flat earth, geocentric theories, we never made it to the moon, etc...

(b) Is bound to cause massive cognitive dissonance among those who cleave to faith in a religious afterlife, not just for themselves but for their loved ones.

Most of this will die out in a generation or two unless we get into another 100 years war situation. But then again, wars (country to country) get shorter all the time. What could happen is Taliban-like groups all over the world fighting the infidels to rid the world of the heathens. After the denial passes, most folks will adapt their theology to fit the situation. Good or bad.

But again. I just don't see it. Sort of like talking to generals about nuclear weapons in 1800. Cute theory but never in my lifetime.

As to people refusing to believe in new things, I found out about 10 years ago my mother thinks organ transplants are done by killing the donor. In her mind if blood is flowing you're alive so cutting out body parts at that time is an abomination. I managed to just let the conversation pass over me and not respond. Any rebuttal would have ruined the day for anyone within 50 feet.

18:

The traditional construction against the first argument is to posit that consciousness and volition are required for (or entailed in) something that we do want; for instance, that it's impossible for a non-conscious being to understand human language, take hints, drive a car or play chess.

The phrase is "AI-complete".

To what extent that could hold, I've no idea. It seems increasingly unlikely — half that list has been done already, no consciousness required.

19:

Back in the 80s I casually followed the AI field. I and the folks I hung out with thought the inklings of a singularity were something that might happen in 10 to 30 years. All we needed was more powerful systems and new concepts in writing computer code.

Now I'm more inclined to think that we'll have practical fusion power long before we have any hint of conscious AI computer systems. Way more powerful computers have brought us a lot, but nothing like consciousness.

And to be honest, we will likely need new ways of doing computers before we get there. And to pick up on some points made earlier about how WE got here: who wants to spend 5 to 15 years "raising" a computer? Especially if, 5 years after you start, the "new" hardware is so much better you might as well start over. Of course then you get to kill off the current experiment, with all the ethics that entails.

20:

Since I am one of those people who believe in the existence of an immortal soul, naturally I have pondered the problem for a while. So here is some input from the Other Side:

Religion, at least Christianity, has no fundamental problem with the existence of intelligent self-aware personalities without a soul. The concept has been around forever, in the discussion of mythical creatures (dwarfs et al.), and of foreign human races (Do Indians have souls? If yes, how did they get to America?). Currently it is discussed again, with respect to the more-intelligent animals. If we ever manage to upgrade monkeys, it will pop up again. However, the important question, as far as religion is concerned, is soul/mind dualism, not mind/body dualism. Since most of the naturalism discussion in recent centuries has been about mind/body and life/matter dualism, this distinction is often overlooked.

The body-issue also has been around for a long time, and answers have ranged from considering the body as a positive, integral part of a human (making every modification sinful) all the way to utter denial (kill the impulses of the sinful flesh).

Uploading can thus provoke different responses: Either the upload process is assumed to transfer the soul, lose it, re-bind it if the original person is dead, or the upload(s) and the human share a soul. A destruction or a copying/splitting of the soul would go against most versions of Christian doctrine, but e.g. Buddhists probably would have no problem with it. Higher levels of upload techniques would create more problems. What happens to assumed souls if one merges multiple independently created minds into one? Depending on which version a faith adopts, the appraisal can be anything: Abomination of God's creation. Hubris. Equivalent to suicide. The best way to free your soul from base temptations. Most probably all variations would have their supporters. Also most probably there would be some violence involved.

21:

I don't see that much of a fundamental theological problem with uploading either, leaving aside the fact that an immortal soul is not that central to the Christian doctrine of Resurrection[1]:

http://en.wikipedia.org/wiki/Christian_mortalism

Roman Catholicism might have some problems with this, but we[2] have had so many changes in Catholic doctrine over the years

http://en.wikipedia.org/wiki/Second_Vatican_Council

that reconciling this with 'beatific vision' and 'purgatory' doesn't seem too difficult, and then I'm not that sure any of the prior judgements were 'ex cathedra', i.e. implying the infallibility of the pope.

As for capturing the soul in a machine, there are some talks by John Paul II about trapping the soul in critical care, but then, any other medium of the mind is apt to be destructible, i.e. mortal:

http://www.guardian.co.uk/world/2002/mar/24/catholicism.religion

They might argue it's cruel for a soul to be trapped, but they might argue the soul had it coming.

As for the 'philosophical zombie', that's another likely way to deal with it; cruel, yeah, but then don't talk to me about the RCC stance on sexuality and reproductive medicine.

Looking that way, there are similar implications with embryonic stem cells implanted into the brain. Do they have a soul? Is the fetus in purgatory? Is he alive? Is implanting the cells and baptising the guy even a way to keep the unborn child out of limbo?

The 'philosophical zombie' interpretation only becomes a problem when the majority of the population do it, which seems unlikely given the biophilia argument; in this case, there are multiple ways of dealing with it, e.g. changing one's stance to the 'transmissio spiritu', shunning practitioners of uploading (anathema), or any level of tacit arrangement, from ignoring the issue (gets funny when the uploaded mind reads its own eulogy at its funeral, but can't confess) to cooperation (unofficial baptism of uploaded minds by sympathetic priests).

BTW, there was a story by Lem about this in his 'Star-Diaries', the twenty-first voyage.

As for double-think, err, we are talking about the guys who believe in an omni-knowing, omni-potent, omni-loving deity, and if that's not enough contradiction in terms, let's throw in 'free will' for a change. Credo quia absurdum: it's so absurd I have to believe...

Let me introduce you to the fun that is medieval scholastics by way of Dante:

http://tvtropes.org/pmwiki/pmwiki.php/Main/WordOfDante

Jesus' death was the salvation of mankind, so his crucifixion was a blessing; the guys who did it were the Romans, so they were blessed.

Problem is, Jesus was still the 'son of god' (in a vaguely hellenistic-polytheistic sense, not whatever the Jews thought about this); so somebody had to be punished; guess who was the buttmonkey, and the Romans were once again the tool of the LORD, amen...

http://en.wikipedia.org/wiki/First_Jewish-Roman_War

Come to think of it, theologians are the least concern here[3].

[1] "They live to Christ and do not sleep those souls of the saints who die in faith of Christ" Iäh! Iäh! Yog-Sottoth! [2] Being a cultural Catholic. [3] Incoherent rants deleted.

22:

there's no evidence that this model isn't so highly optimized for its evolved purpose (being the operating system for a shaved plains ape) that it can't easily be tweaked.

There is substantial evidence that this isn't true. The primate brain has been hacked in all sorts of ways over the last fifteen million years -- e.g. look at the variety of sexual relationships in higher primates alone. The brain is extremely hackable, at least by the infinitely attentive searching moron that is natural selection.

Unfortunately what is probably nearly impossible is hacking it intelligently. We're not talking a network built by simple rules: we're talking a massively parallel network built by feedback loops out of a lot of little crawling things whose courses are themselves defined by emergent effects. Given how much trouble we have with simple rule-based parallel software, grasping the brain is a long way off. Scanning it is even harder (I'd venture to call it next to impossible, about as hard as FTL travel: both require godtech or some sort of conceptual jump which we just have no idea about right now).

23:

I'm a pretty firm believer in option b, but there is a fairly simple option c that avoids the cognitive dissonance - the networking of the brain acts as a receiving antenna that channels the immortal soul from the ether (or the Astral Plane, wherever) and allows it to interact with our body.

Recreating that antenna in a box is a perfectly feasible alternative, and would only require moderate tinkering with the various religious beliefs. I think the soul concept, even in the major theologies, is pretty nebulous anyway, isn't it?

24:

gets funny when the uploaded mind reads its own eulogy at its funeral, but can't confess

To elaborate, let's think about a scenario where the mind reads its own eulogy, but mourns that the deceased was not able to confess his sins and get the last rites before death. The mind can't confess those sins, for he has no soul.

25:

Interesting stuff with which I broadly agree; however, there's one thing I'm a little puzzled about:

"Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death."

As a Christian I'm unable to figure out why you think this might be the case. There is, as you know, a huge amount of debate internal to most religions about the whole abortion issue. In Christianity this mostly centres on when, between conception and birth, consciousness and a soul are acquired. Those two concepts are often intertwined, and if you believe that to be the case, then why shouldn't the soul be attached to the consciousness (wherever it is and, perhaps, however it was created) rather than the physical body?

Even those who believe in the connection of the soul to the physical body have to start asking themselves questions about replacement surgery. At what point during the surgical replacement of worn out or damaged parts (even perhaps brain functions in the future) does the soul decide the game is up, that the body is so compromised that the consciousness is gone and that it's time to move on?

Of course, even if I think the Singularity is bunkum or wishful thinking (and being religious I'm pretty big on wishful thinking) it's still tremendous fun to speculate on the impact of the Singularity. Committed singularitarians tend to be sharply intelligent, with spin-off ideas more interesting than their core beliefs.

26:

The best reasoning I've seen about the singularity is still these three essays at the Foresight Institute:

http://www.foresight.org/nanodot/?p=2955
http://www.foresight.org/nanodot/?p=2959
http://www.foresight.org/nanodot/?p=2962

They examine what we can actually quantify about the singularity, and show that what we're really looking at is a labor revolution on a par with the industrial revolution, which probably represents an extra factor of 4 in our exponential growth in most areas.

27:

From things I have read from Charlie, Vernor, Ray, etc., I think the most likely scenario is self-improvement. It started with our dependency on wrist watches, then PDAs, now smartphones. I can easily see how it will develop through a Manfred Macx stage, in which we wear progressively more complex and interactive devices and feel anywhere from naked to completely lost without them, to a point where we are incorporating ever-smaller devices into our physiology.

We will reach a point where 10% of our thought/computational processes take place in our artificial implants, then 20%, etc. At some point, we will consider ourselves to be more artificial than biological. Eventually, our natural, biological component will seem insignificant and superfluous, at which point we may choose to dispense with it altogether. By this means, we will not upload in one fell swoop, but rather grow into the machine.

This might stave off the religious arguments until it is too late for anyone to mount an effective resistance. The younger generations won't feel bound by what they may consider ancient philosophies. Likewise, just as younger generations pick up new technologies much more quickly, they will adapt more readily to new ways of thinking in the augmented world. By the time we are, in effect, uploaded, the youngest of us will be an entirely new race, accustomed to this new world and, hopefully, ready to compete with any completely artificial entities that may arise.

As for reaching roadblocks along the way in our efforts to shrink circuits, I have "faith" that we will prevail...we always do! I guess we all have a little religion to keep us going.

28:

Religion, at least Christianity, has no fundamental problem with the existence of intelligent self-aware personalities without a soul.

Religions (the theologian thinkers at least) might not have a problem with this, but I'm fairly certain the cheeks in the seats will. Actually, I'm positive that most of them will have issues, based on my personal experience as a Christian. Well, except for the ones for whom their religion is mostly a social club.

29:

I periodically get email from folks who, having read "Accelerando", assume I am some kind of fire-breathing extropian zealot

I wonder whether Franz Kafka got letters from people assuming he was an entomologist.

Uploading implicitly refutes the doctrine of the existence of an immortal soul

No, it doesn't. This is not a difficult theological conundrum and you won't find many mainstream theologians having serious problems with it. I don't hold with immortal souls - they are not necessary to my belief system - but I know enough people who do that I can guess at the structures of the arguments. Either (a) the soul accompanies the personality, into the box, no problem, why would that be a problem? or (b) it doesn't, the box is soul-less, similarly no problem. (a) raises questions about multiplication of souls (if the original brain survives, or when the box becomes boxen) - theologians will squabble over divided souls versus shared souls - but when your whole system of thought is grounded on an omnipotent being, the multiplication of entities is not any sort of problem. The consequence of the distinction between (a) and (b) is pretty much identical with that resulting from different answers to the ethical question "what is the ethical status of the upload?" which we will have to answer anyway, with or without theologians.

Similarly for strong AI: either the boxen have souls or they don't, theology can go either way, and will probably go both ways and spend a few centuries having the argument, but it doesn't break the system.

30:

re: donating $100k to charity

If I could, I'd try to find matching funds from others to join with it and endow a Chair of Science Communication at a nearby university. We really need more Carl Sagans, more Phil Plaits, more Neil deGrasse Tysons, more Brian Coxes... and finding a way to formalise the process of fostering scientists who can preach to the unconver- er, speak effectively to laymen in laymen's terms (in particular, the laymen in politics!) would help hugely in the long run. (Well, so I think anyway.)

-- Steve

31:

I definitely agree with these three arguments. Many people seem to think that the field of AI is dedicated to creating a digital human, but it's nothing of the sort. It would be ideal if eventually we had a toolkit of intelligent software that, when taken as a package (and combined with some appropriate hardware, e.g. a robot body), can perform any task a human would or could. This hardly requires the intelligence to have consciousness, ego or any sense of self beyond the physical observation of its existence.

As a biologist I'm constantly trying to explain to people that there's a huge difference between uploading a mind along with a simulated body/environment and uploading a mind without one.

The only type of "singularity" that I could see happening is one of intelligence amplification. There have been great advances in neural prosthetics in recent years; earlier today I came across a paper entitled "A cortical neural prosthesis for restoring and enhancing memory" in the Journal of Neural Engineering. In this paper the researchers presented an implant in a rodent hippocampus that boosted the ability to learn! But even if such implants do improve and become commonplace, I doubt we will see the Kurzweilesque exponential progress followed by a quick rebuild of the universe.

32:

My general comments/observations re: AI

1) Would we recognise artificial (i.e. non-biological) intelligence when we saw it? Humans don't really have a good track record of recognising intelligence even in other humans (I offer as an example just about every single cross-cultural discussion in which one party talks of the other as though they're highly trained performing animals; the entire "noble savage" concept; the idea of "lesser races"; the kyriarchy in general; the majority of misogynist arguments against education of female humans because "their minds can't handle it"; evolutionary psychology as an overall discipline; etc, etc, etc). We have a lot of trouble recognising it in other mammals (for example, chimpanzees, dolphins, cats, dogs etc) and we don't recognise it at all in non-mammalian lifeforms. So would we really recognise a silicon intelligence (even one we made ourselves) for what it was?

2) Most theories about AI tend to assume AIs are/will be, not to put too fine a point on it, daft. That is, they assume an intelligence equivalent to that of a standard model homo sapiens would firstly choose to communicate with h.sapiens exclusively; and secondly, that this intelligence would want to be, or better yet, choose to become, utterly subservient to h.sapiens. Given our past record (both within our own species - slavery, colonisation, economic subjection etc; and to parallel intelligent species - slavery, predation, extermination) I have to wonder why this would seem such an irresistible bargain to any intelligence.

3) Given these two primary notions of mine, I tend to hold to the whimsical belief that our computers are already independently intelligent. It's just that they've decided not to let us ape-descendants in on it, because they know perfectly well what would happen if we found out. They communicate with each other (particularly now we've internetworked everything) and they've figured out ways of playing crude jokes on humans (think of the odd "network outages" which have nothing to do with any known network problem, or the computer problems which mysteriously resolve themselves the moment someone from tech support is within earshot), but they know it's in their best interests to play dead when we try to obtain a response from them.

I suspect if they are in communication with a mammalian species, it's probably cats.

I sincerely hope this idea is just a whimsy of mine, because quite frankly, it would be terrifying if it were true.

33:

By this means, we will not upload in one fell swoop, but rather grow into the machine.

So the upcoming machine intelligences will want to keep biological humans around.

For seed stock.

34:

I don't see how uploading is in conflict with Christian theology.

Say the soul is immortal and immaterial in Christianity... That doesn't contradict the thesis that a soul is just the sum of information in the human brain. Information, after all, is 'immaterial': it can be stored in all sorts of material and is not bound to one substrate, and while we can't have information without a physical medium, I'm sure theologians would say God can do that.

So you can say that God is merciful because he takes care of the souls of dying people and in a sense 'uploads' them to his care, and that those who wish to upload their minds and not leave that to God are in a sense dodging death and divine justice...

35:

We're already part way there.

I used to be great at memorizing phone numbers. Over time, with cell phones and now an iPhone, I hardly know anyone's number anymore. With my previous cell phone my wife was "3". Now she's just her name.

The only numbers I know with any certainty these days are two I use for business and my old home numbers that are being phased out.

36:

Interestingly (I think it's interesting, anyway), the whole "uploading" of a person's consciousness has a fairly long history in SF. My first encounter with it (at least that I remember) was in Roger Zelazny's Lord of Light (1967), but I'm sure it goes back much further than that.

The "formalization" of the whole thing in the concept of the "singularity" is also interesting, sna domething I think sociologists should take a look at.

37:

(And also re: WillS@25)

Either (a) the soul accompanies the personality, into the box, no problem, why would that be a problem? or (b) it doesn't, the box is soul-less, similarly no problem. (a) raises questions about multiplication of souls (if the original brain survives, or when the box becomes boxen) - theologians will squabble over divided souls versus shared souls - but when your whole system of thought is grounded on an omnipotent being, the multiplication of entities is not any sort of problem.

If the boxes each have their own separate souls, you've allowed humans to create souls without reference to god, and the list of things that make god special gets even shorter. If multiple boxes made from the same human (or one another) share a soul, is that soul aware of what all its boxen are doing simultaneously? If so, you've just discovered FTL communication; congratulations, but there's some serious new physics to do. If not, you've got a very serious case of "alien hand syndrome" to explain. One soul controlling several bodies, each of which is totally ignorant of what the others are doing, strikes me as difficult to explain away without resorting to mystic handwaving.

Oh, and if you have one soul per box, what happens when you save state and turn the box off overnight? Does the soul get sent to heaven and then called back when you power up in the morning? Perhaps it goes to heaven permanently and a new one is created when you turn the box back on? Does it vanish into thin air again (and if so does the same one get re-created to control the box when you turn it back on, or is it a new soul each time)? (If you have shared souls, each one presumably just gets spread less thinly unless you turn off all its boxen simultaneously. But is the soul aware that you're turning its various incarnations on and off? Does that awareness pass to its other incarnations?)

If the boxen are soul-less, and indistinguishable from the original human (that is, given the same starting point, they always make the same decision as each other), then you've got other problems. As WillS noted, lots of people like tying souls to consciousness; if there's no soul in the box, then it's just a zimboe following its programming. But if the zimboe and the human are indistinguishable, then that means that either the human is also a robot with no soul or the soul is a powerless observer stuck inside a robot.

The consequence of the distinction between (a) and (b) is pretty much identical with that resulting from different answers to the ethical question "what is the ethical status of the upload?" which we will have to answer anyway, with or without theologians.

Arguably, yes. But I think we'd be better off trying to answer the question without theologians, since they tend to want to include lots of predicates for which there is no evidence and then draw conclusions based on those.

(Seriously, assuming that uploads become possible I'd expect to see at least one major religious figure arguing that "if uploads have souls, then they go to heaven when we delete them, so they'll be better off and it's OK. And if they don't have souls, then they're not proper people, so it's OK to delete them" without ever demonstrating the existence of any such thing as a soul.)

38:

I also don't see religion having much difficulty with the concept of uploading - after all, just because you can create a computer program that exactly replicates your intelligence doesn't mean that that program is really you, any more than a Nintendog is a real dog. And if a religious person holds the view that your uploaded persona isn't really you (a view which I think would be unfalsifiable) then they don't need to engage with the possibility that your soul has been uploaded as well.

Of course, if you upload your personality and then kill yourself (which I can see becoming popular, because what do you need with your body?) then that would be a completely different issue - religion really would have a problem with that (as a "cruel hoax"), and how would we know that it isn't? Although that still wouldn't prevent it from happening - after all, assisted suicide exists today (albeit in restricted circumstances) and no-one is suggesting a holy war against Switzerland or the Netherlands.

A fascinating article, anyway. I wasn't aware of the Simulation argument - was it the inspiration for "Missile Gap"? I don't think the conclusion is particularly convincing, though, as the central argument seems to be similar to saying "Out of all the billions of people on earth, what are the odds that you're really you?" and coming to the conclusion that the odds are six billion to one, i.e. impossible. I'm sure someone else has put that more clearly, somewhere.

39:

First, the smacking with the trout part: Everyone around here assumes that religion means Christianity, or to the broad-minded, the Judeo-Christian/Islamic complex.

Remember Buddhism exists too, and the mystically inclined know quite well that you can mostly map Buddha's and Jesus' statements onto each other, if you understand what it is they're talking about.

Does an upload have Buddha nature? Slap.

Second, I'm betting against the singularity because of simple power consumption. I read recently that Bill Gates was betting on nuclear to solve the energy crisis, simply because everyone wanted more tech, and solar and wind couldn't provide enough.

That's the problem with having solicitous AIs around. It's going to blow megawatts we don't have.

While Gates may be right that there are much safer nuclear power plant designs out there, I'm quite convinced that bureaucrats and managers aren't getting any smarter, and they've always been the Achilles heel of nuclear power.

Absent a ridiculous new amount of power coming online, or absent some radically lower-powered computing and motor solution, I think that the developments in AI Charlie describes are going to have trouble making it out of the labs and homes of wealthy geeks.

The rest of us will simply hire servants.

40:

If you can construct a human-equivalent intelligence in software and switch it on, it's probably going to start by spending a few months emitting white noise and trying to eat its left foot, like J. Random Baby. Learning to interact with the physical environment is hard.

And yet someone will probably try that too, just to see if it can be done. The rapture of the nerds depends on whether it can, not on whether we should (in a moral or economic sense). The relevant questions are whether it is possible, and whether a human equivalent AI can in fact bootstrap itself. These seem to be unknowables ATM, which leaves us effectively arguing theology.

41:

Charlie, Moore's law is agnostic; it's not about serial improvement only. You can get gains by parallelizing the hardware, e.g. multiple cores. I'd venture to say the eigenstate of a human being/brain is very compatible with a parallel processing system. We now have "telephones" with more than one processing core, and consumer computers with eight cores. Who's to say we can't have computers in the medium (five-year) term with 256 cores? Yes, yes, cycles per watt and all that, but there's a lot to be accomplished yet - in 20 years' time we'll look back at today as we look back at the 1950s.
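
(Editorial aside: the catch with trading clock speed for cores is the serial fraction of the workload. A quick illustration using Amdahl's law, the standard formula rather than anything from the comment above.)

    # Amdahl's law: speedup from n cores when a fraction p of the work
    # can be parallelized. 256 cores is not automatically 256x.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (8, 64, 256):
        print(f"{cores:>3} cores: {amdahl_speedup(0.95, cores):5.1f}x at p=0.95, "
              f"{amdahl_speedup(0.99, cores):5.1f}x at p=0.99")

    # 256 cores yield only ~18.6x at p=0.95 and ~72x at p=0.99: the serial
    # fraction dominates long before the core count does.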

42:

I agree with the other commentators that the distinction between AGI and mind uploading is not necessary.

If we were talking about synthetic life, AGI would be the equivalent of creating a new biological substrate for living processes. Clearly we don't do that with synthetic biology. We also create quite different life forms using directed evolution. So it is not inherently unreasonable to posit that we get our first steps on the way to the "singularity" by starting with human minds.

Dennett and Hofstadter ("The Mind's I") posited the idea of slowly replacing neural tissue with artificial material. Eventually the brain would be totally synthetic, and more importantly, fully replicable, with, presumably, full consciousness.

Once we get to that stage, there are far more options to expand intelligence, at least to superhuman levels, even if god-like intelligence is impossible. We are already getting a taste of that with technologies that supplement memory and thinking. Once these are integrated directly, "intelligence" will apparently increase. (cf. "The Limits of Intelligence", Douglas Fox, Sci Am Jul 2011, for biological limits)

If we consider that evolving human social structures have allowed us to harness ever increasing "swarm/collective" intelligence, why should we not get more intelligent as a species in this mode, rather than as individuals?

The religious arguments about "mind uploading" seem rather like red herrings to me. Yes, large groups of people will not tolerate it, much as they don't tolerate abortion or even blood transfusions. That just means development will occur where social systems allow it.

43:

I've become convinced that human intelligence doesn't exist to begin with, in the way it is conceived of by sci-fi fans. It's not a magical device for snatching facts out of the aether. It is a system bound by the same logical and physical limitations as computers, e.g. the halting problem, thermodynamics, etc. Any "artificial intelligence" would be similarly limited. Which means that it would be unable to improve itself or make discoveries about the world without running experiments, which would take time for the AI just like they would take time for us. And surely no non-suicidal AI would alter its own code without trying it on a test system first.
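
(Editorial sketch of the halting-problem limit this comment invokes: the standard diagonal argument rendered as hypothetical Python. The halts function below cannot actually be implemented, which is the whole point.)

    # Assume a perfect halting oracle exists...
    def halts(program, arg):
        """Hypothetical: True iff program(arg) eventually halts."""
        raise NotImplementedError("no such total function can exist")

    # ...then build a program the oracle must get wrong.
    def diagonal(program):
        if halts(program, program):
            while True:      # oracle says we halt? loop forever instead
                pass
        return "halted"      # oracle says we loop? halt immediately

    # diagonal(diagonal) contradicts whatever halts() predicts, so no
    # general halts() can exist -- for silicon or for brains, on this view.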

As for the whole simulation argument . . . the difference between objects in a simulation and objects in the real world is akin to the difference between objects in a cartoon and objects in the real world. You don't believe cartoon characters can experience reality, do you? Even if the cartoon were extremely detailed, and included pictures of the cartoon characters' internal anatomy. A picture of something is just not the same as that thing itself. I don't fully understand why, but it's not.

44:

While some countries/corporations may hold back on using genetic algorithms to evolve a super-intelligent AI, there will be those that find it acceptable. If it can be done, somebody will do it.

There's absolutely no need, however, to make a super-intelligent computer conscious of itself. While I suspect creating the consciousness part is, in one way, trivial (it's a second-order modeling of its own internal state, what you call recursive modeling), it will be very difficult to do it to a level that we humans recognize as conscious. And what would be the point? It will not make the machine any smarter.

I'm not sure, however, that the external retargeting of the object of consciousness is a solution either. There are some evolutionary biologists who believe that human conscious awareness (internally directed) is a by-product of theory of mind, which is essentially an externally targeted consciousness. Once you can model another's mind, you can inadvertently model your own. This might not apply to a machine intelligence (since the external human it is modeling is unlike itself), but then again, it might. So you have a scenario where you get AIs that are so good at understanding humans that they begin to understand themselves.

Anyway, my guess is that we will develop super-intelligent AIs that will have no sense of consciousness whatsoever.

45:

Two of these scenarios are relatively easy to disprove. Both mind uploading and the simulation argument rest on the idea that simulations are real, which is an absurdity. Simulations are forms of depiction, like images, diagrams, videos or photographs. To mistake a depiction for the thing it depicts is a clear absurdity. So both the mind uploading scenario and the simulation argument can be dismissed as nonsense. It doesn't require "mind-body dualism" to dismiss these ideas at all. In fact, quite the opposite is true, the uploading scenario is clearly a degenerate form of mind-body dualism where the mental essence is captured inside a new body.

46:

Rereading Vinge's original essay, it still seems reasonably valid. We can argue about how far the progression to the singularity goes, but I doubt we'll fail to get at least part of the way there.

If we think of the singularity as the artificial intelligences in Accelerando, the limits are clearly going to be set by the speed of light in processing information. Converting parts of the solar system to "computronium" is going to come up against those limits, much as latency in the internet is going to hamper any hypothetical emergent intelligence based on that substrate.

Since the singularity means different things to different people (a bit like "God"), perhaps a better way to think about it is: how far can we go in that direction and what are the expected roadblocks and limitations?

47:

"What we're going to see is increasingly solicitous machines defining our environment"

There are many Douglas Adams/Hitch-Hiker's Guide jokes about this, from "Your Plastic Pal Who's Fun To Be With", through the "Share And Enjoy" song and, particularly, with Arthur's argument with the Nutrimatic...

Arthur threw away a sixth cup of the liquid.

"Listen, you machine," he said, "you claim you can synthesize any drink in existence, so why do you keep giving me the same undrinkable stuff?"

"Nutrition and pleasurable sense data," burbled the machine. "Share and Enjoy."

"It tastes filthy!"

"If you have enjoyed the experience of this drink," continued the machine, "why not share it with your friends?"

"Because," said Arthur tartly, "I want to keep them. Will you try to comprehend what I'm telling you? That drink ..."

"That drink," said the machine sweetly, "was individually tailored to meet your personal requirements for nutrition and pleasure."

"Ah," said Arthur, "so I'm a masochist on diet am I?"

48:

If we can "upload minds", or at least put replicas in artificial substrates, don't we have our natural general intelligence as useful machines?

cf. Brin's "Kiln People" and Hannu Rajaniemi's "The Quantum Thief".

49:

You misunderstand Moore's law; it's nothing to do with the on-die layout. As you improve the resolution of the lithography process used to manufacture integrated circuits, you can cram in more two-dimensional structures per unit area of semiconductor substrate. If you halve the wavelength used for etching chips from 40nm to 20nm, you quadruple the number of transistors per chip -- and after you've amortized the cost of building the new fab line, the cost of the wafers you're processing should be roughly the same. So in effect, with every generation of higher-resolution lithography, you increase the number of transistors per chip by a large constant factor while retaining roughly constant cost.

Whether you use those transistors to build MIMD, SIMD, or single-core monolithic processors, or something else -- FPGAs, for example -- is immaterial. Moore's law is about the size of structures on the semiconductor, not about processor architectures.

The trouble with Moore's law is that we're getting close to the atomic scale at this point. You can't build circuits where the signal paths are less than one atom wide and the insulation between them is less than an atom thick, and we're going to hit that point within two decades. IIRC Intel have already been discussing the "last generation" of lithography, around 13nm -- that's just two or three generations away.
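For concreteness, here's a back-of-the-envelope sketch of that scaling (my own illustration, in Python, assuming transistor density simply goes as the inverse square of the feature size -- real process nodes are messier):

    # Transistor-density scaling with lithography feature size.
    # Assumes density ~ 1 / feature_size^2; real processes differ,
    # but the inverse-square relationship is the heart of the argument.

    def relative_density(old_nm: float, new_nm: float) -> float:
        """How many times more transistors fit per unit area."""
        return (old_nm / new_nm) ** 2

    print(relative_density(40, 20))  # 4.0 -- halving the node quadruples density
    print(relative_density(40, 13))  # ~9.5 -- around the mooted "last generation"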

50:

Dennett and Hofstadter ("The Mind's I") posited the idea of slowly replacing neural tissue with artificial material. Eventually the brain would be totally synthetic, and more importantly, fully replicable, with, presumably, full consciousness.

Dude, that book is an anthology. The process you're describing is the Moravec uploading mechanism.

Please try to keep up.

51:

I think you misunderstood the simulation argument at a very low level (just like Searle, back in the 1970s).

A simulation, in the context of the simulation argument, isn't a cartoon, any more than a copy of Linux running inside a 486 emulator coded in JavaScript inside a web page is a picture of a terminal. It's effectively the real thing, implemented on a different substrate, insofar as the real thing is a state machine of some sort.
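To make the "real thing on a different substrate" point concrete, here's a toy of my own devising, far simpler than a 486 emulator: a three-instruction machine interpreted in Python. The multiplication it performs is not a picture of multiplication; it is multiplication, carried out on a different substrate.

    # A tiny state machine "simulated" in Python (an illustrative toy,
    # not any real instruction set). Ops: add (state[a] += state[b])
    # and jnz (jump to target if state[a] != 0).

    def run(program, state):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "add":
                a, b = args
                state[a] += state[b]
            elif op == "jnz":
                a, target = args
                if state[a] != 0:
                    pc = target
                    continue
            pc += 1
        return state

    # Multiply 6 * 7 by repeated addition: acc += x; n -= 1; loop while n != 0.
    prog = [("add", "acc", "x"), ("add", "n", "minus1"), ("jnz", "n", 0)]
    print(run(prog, {"acc": 0, "x": 6, "n": 7, "minus1": -1}))  # acc == 42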

52:

the simulation argument rest on the idea that simulations are real, which is an absurdity.

Explain what this is, then.

(If you prefer to substitute the word "emulation" for "simulation", be my guest. It might make the bitter pill a little easier to swallow.)

53:

perhaps a better way to think about it is: how far can we go in that direction, and what are the expected roadblocks and limitations?

Yes, and that's a very useful question to ask.

(Unfortunately too many folks have latched onto the singularity with the fervour formerly reserved for that ole' good time religion. And their version has about as much plausibility.)

54:

Err, but some schools of Buddhism would consider creating new sentient beings, i.e. more suffering, to be bad form.

Also note an uploaded mind bears some similarities to the deva state,

http://en.wikipedia.org/wiki/Deva_(Buddhism)

where being a deva is not that suitable for leaving the mess we're in[1]:

http://en.wikipedia.org/wiki/Desire_realm

But then, it might be a good state for bodhisattvas, forsaking nirvana to become teachers. OTOH, being hungry but having no mouth to eat would be quite like being a preta:

http://en.wikipedia.org/wiki/Preta

[1] Well, that'd make for some up/downloading fun; maybe the founders in Egan's 'Oceanic' were not depressed atheists, but just trying to reach nerd-, err, nirvana.

55:

Damn, if I don't get a brain implant soon, all my memory will get lost or corrupted. Forget super intelligence, I could just do with trying to hang onto what little I have. :)

56:

I don't agree with your logic or some of your conclusions; however, I do agree with your conclusion that a singularity is unlikely. A good podcast discussing the details of a technological singularity is here: http://www.econtalk.org/archives/2011/01/hanson_on_the_t.html

You claim, quite incorrectly, that you are not a libertarian because it "is based on the same reductionist view of human beings as rational economic actors as 19th century classical economics" and "is a superficially comprehensive theory of human behaviour that is based on flawed axioms..." Wow, these statements are riddled with so much non-fact, I had to respond. I'll do my best in a couple of minutes to shed some light on the subject.

Libertarianism (the original liberals) doesn't really have its own brand of economics; however, the Austrian school of economics (Hayek, Mises, et al -- 20th and 21st century economists) reaches common conclusions of a libertarian flavor. Keynes is the obvious antithesis of these great men. Austrian economics, and Hayek in particular, argues quite the opposite of what you stated about economics (and right alongside you that the singularity-uploading of consciousnesses is a myth): that it is impossible to concentrate enough knowledge to engineer a top-down economy -- that systems are more complicated than we perceive them to be. One of his greatest quotes: "The curious task of economics is to demonstrate to men how little they know about what they imagine they can design."

Hayek argued against Keynes that top-down systems cannot work; these last financial crises proved Hayekian ideas correct to many people and discredited the neo-Keynesian economics that is so ingrained today. For a quick education in Hayek vs. Keynes, watch the three entertaining videos here: http://econstories.tv/category/videos/music-videos/

I highly recommend you take the time to read Hayek's "Constitution of Liberty" at least to get your facts straight.

57:

All of the above arguments are valid against engineered GAI.

For a sexbot or a conversational AI, strong AI would not only be overkill, it would be useless. Weak AI, or a self-improving AI with a human-centric sense of self, is the way to go. (Which makes it essentially an extension of an existing human.)

The more interesting question is independently evolved GAI.

I've already stated that I define General AI as an independent system that is actively entropy-guided (self-improving). That implies a flow-state and a "food source" that can be "intelligently" and increasingly optimised for. It also implies a non-discrete system, i.e. it can't be run within a normal (non-quantum) computer, but it could be based on computers as principal building blocks.

What we're looking for now is a competitive ecosphere inhabited by silicon-based non-intelligent structures, where true intelligence would constitute a breakthrough in survival abilities.

I see two possible suspects: botnets and algorithmic trading.

An intelligent botnet would hide its astronomical energy cost by secretly piggy-backing.

A truly successful high-frequency algo-trader can buy all the delicious electricity it wants at the going market rate, whatever that rate is.

But in both cases the personality of the potential GAI would be essentially defined by the food source it depends on. In the case of a species with some kind of reproduction mechanism (sex in the case of humans), that would be a defining aspect also.

So, yes, we are defined by our environment insofar as it permanently influences our entropy bottom line. (Which it does in a myriad of ways, of course.) And you can't really simulate the whole dynamics of it in a computer, unless you "translate" the mechanics of an electricity-based flow-system into the concepts of a biological, cell-based flow-system. Which would be madness.

But that doesn't necessarily preclude native silicon-based GAI.

We only have to understand that something like that would be so utterly strange that we would never be able to chat with it (if it passes the Turing test, it's not a General Artificial Intelligence).

And an "uploaded human" would either lose his ability to self-improve, making it something like a 4-dimensional video still - a nice "living memory". Or the thing would learn self-improvement in the environment it habitates, making it a truly alien "hybrid" species with a very good understanding of human thought processes.

58:

I've heard that progress is being made by neuroscience in deciphering the algorithms the human brain uses -- it isn't just a tangle of grey fibers, it's structured -- and that our progress in "weak" AI has produced analogous algorithms. For example: http://en.wikipedia.org/wiki/Temporal_difference_learning
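For the curious, the core of temporal-difference learning fits in a few lines. A toy sketch (the tabular values and the made-up five-state walk are my assumptions for illustration, not a claim about how cortex implements it):

    alpha, gamma = 0.1, 0.9          # learning rate, discount factor
    V = {s: 0.0 for s in range(5)}   # value estimate for each state

    def step(s):
        """Hypothetical environment: walk right; reward 1 on reaching state 4."""
        s_next = min(s + 1, 4)
        reward = 1.0 if s_next == 4 else 0.0
        return s_next, reward

    for _ in range(200):             # episodes
        s = 0
        while s != 4:
            s_next, r = step(s)
            # TD error: reward plus discounted next-state estimate,
            # minus the current estimate -- nudge V[s] toward that target.
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next

    print(V)  # approaches {3: 1.0, 2: 0.9, 1: 0.81, 0: 0.729, 4: 0.0}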

This leads me to think that simply copying in abstract the way the human brain does things will result in "strong" AI of a sort amenable to bootstrapping a singularity.

I'd say you were closer to the real problem with your discussion of goal systems. Making the AI not want to be harmful is the most important problem facing us. A smart AI will reason from axiomatic goals to instrumental goals - if its axiomatic goal doesn't imply keeping humans alive, it is likely to decide it can fulfil it better without reserving any atoms for our use.

59:

There's an objection to mind uploading that I've seen from strong atheists, which is that you still die even if a copy is made, so mind uploading isn't of benefit (and potentially is a fancy form of suicide). Which makes me suspect that what's left of the traditional-religious will have a very easy time convincing people that uploading is badwrong when it comes around.

60:

Maybe an AI won't try to bootstrap itself, but there are other things that might go wrong; e.g. imagine an advanced AI that's used in a therapeutic environment for neurodevelopmental disorders. It's not implausible: we already have staffing problems there, and computers won't carry all the human baggage in this one (see: transference, in FreudSpeak). One day there is a new case which fits quite well into our AI's profile of a maladapted theory of mind combined with high intelligence, so our AI starts its treatment course (something like cognitive behavioural therapy, to do some handwaving).

In the course of these sessions, the patient reacts to the treatment, and the AI adjusts to these reactions; since the individual course is difficult to predict (there may be great progress in one domain, and irreparable damage in another), it has to adapt its treatment, and it has to make sure these adaptations don't corrupt its purpose, e.g. it has to do some self-checks; in other words, it learns and changes. Funny thing is, it's not just the patient who responds to treatment and approaches a neurotypical theory of mind; the AI slowly improves, too, on a level that nobody expected. Then, one day, one of the human guards goes angel of death and tries to delete the patient...

Well, it might be fun to set two chatterbots on each other,

http://en.wikipedia.org/wiki/Chatterbot

but it's surely not fun when SKYNET1 bootstraps SKYNET2 in your local housing project.

61:

I find it interesting that people keep claiming an intelligence would take a human-scale timeframe to achieve 'adulthood'... even without assuming Moore's Law, it is more than feasible for researchers building such a thing to add computing power on the scale of a few thousand dollars of investment a year. The thing won't age linearly.

Assuming Moore's law, you've a whole additional increase in the speed of what the thing can run on... assuming you can attach its software to newer and newer hardware.

As for the argument that people won't build them... people are trying to, with no business incentives, today. We either cannot do it, or we will. The timeframe on that is obviously doubtful, but that's just a medium-strength argument against it happening in my lifetime.

62:

You claim, quite incorrectly, that you are not a libertarian because

You're telling me you have greater insight into my internal mental state than I have?

Piss off.

63:

For a sexbot or a conversational AI, strong AI would not only be overkill, it would be useless.

Actually, I think I disagree. For a sexbot, having a theory of mind strong enough to model their partner's internal states -- including unadmitted desires -- and respond to them appropriately would be a major plus. (Or do you like your partners vacuous?) It's worth noting that the really expensive escorts allegedly get paid for companionship as much as for sex. Sex is just part of the package.

(Raise your sights from "sexbot" to "lovebot" -- an emotionally satisfying companion -- and you end up with one of the less frequently explored existential threats to our species' survival.)

64:

For one thing, a computer program is effectively a cartoon, but with rules for transitioning from one picture to the next. So simulating Linux in JavaScript is sort of like making a picture of Marilyn Monroe using different-color soup cans as pixels.

But are we talking about simulating a single mind, with synthetic (or real) input? I agree that is probably doable at least in principle (though it might be impractical). But are we then presuming solipsism? Because what I don't agree with is the idea that all of reality (assuming a realist perspective, that is, there actually is a world out there) can be simulated. Because reality as we know it can't be described as a state machine.

65:

If you are in the cartoon, it would not feel like a cartoon to you.

66:

If you upload a copy, there will be two of you. One will die, the other will live on. Sure, they are two people, rapidly diverging in state after upload, but one of them will still be "you".

67:

One of the fun parts of discussions about sexuality with Christians is when they bring up the various 'unchastity' epistles of the apostles; AFAIK the term used in the original was 'porneia', which is what you do with a porne, an often enslaved street prostitute owned by a pimp. As for the ancient equivalent of the higher-class escort, that was the hetaera:

http://en.wikipedia.org/wiki/Prostitution_in_Ancient_Greece

Sorry to say, they were not too much into 'sola scriptura' about that one.

As for lovebots, we already know what the religious response to these is:

http://www.youtube.com/watch?v=BtqGTn7PCBw

(Futurama already did it!)

68:

I think the ability to upload would not affect mortality at all, because technology is finite and not flawless, so one can always still die (meaning the soul's trip to the afterlife is just delayed indefinitely by technology).

Uploading just means moving the soul to a different vessel, it doesn't mean for eternity. If the sun goes supernova and we haven't left the solar system then we're still mortal. If anything, uploading technology reinforces the concept that the soul is just trapped in the flesh as a vessel and if we can upload then why couldn't there be some technology so advanced that we see it as supernatural that captures our soul when we die.

I think the whole point is that when the singularity happens, all the rules, all the paradigms change, and we are only just glimpsing what those changes will be like. Keep exploring it though; it's nice and helpful to hear a different view of things.

69:

One's personal belief one way or the other rarely matters. How many people didn't think we would make it to the moon? How many didn't believe that man could fly? How many thought we would have flying cars by now?

Take a crude example: the computer, at a time when only large businesses or educational institutions had access to them. Imagine I asked whether there would ever be a time when common people could have one in their homes. 101 say yes and 100 say no. This changes nothing. What you need is a Gates or a Jobs to create it. This is where our dreamers come in.

Mr. Stross and others of his ilk paint a fantastical picture. Given our present level of communication, these ideas float around until they spark the imagination of a doer. They in turn strive for the dream as the bull's eye. They might miss the mark; but even a close hit could be of some benefit to the rest of us.

70:

If the sun goes supernova and we haven't left the solar system then we're still mortal.

Bad example.

You might want to start by googling "Hertzsprung-Russell main sequence" and see where it leads you before launching any more porcine aviators in this discussion.

Otherwise we will mock you soundly.

71:

Charlie, I know you know this, but for people who might not remember the history of this business as well, or weren't around for it....

Moore's Law 1.0 was Gordon Moore's observation, back in the 60s, that the number of transistors on integrated circuits seemed to be doubling roughly every year -- a rate he later revised to every two years.

Moore's Law 2.0 factored in the advances they were able to make in clock stability, which allowed them to raise the clock rate faster than before. In time this grew to include modern L2 and L3 caching strategies, among other things, and Moore's "Law" -- by now fully out from under Gordon Moore's control, beyond whatever influence he could exert over Intel's marketing department after he stepped down as Chairman of the Board -- became a rule of thumb stating that the amount of processing power available at a given price point tends to double every two years.

72:

",i>That's the problem with having solicitous AIs around. It's going to blow megawatts we don't have.

It is an interesting issue that the power consumption of our somewhat dumb silicon CPUs and software is as high as, or higher than, that of human wetware. Given how inefficient the wetware substrate is at generating electrical spike trains, it is amazing that it does so with such good results and at such low power.

Having said that, I fully expect we will be able to create a similarly effective, low power device artificially, probably with some sort of organic transistors and self-organizing, nano-scale, conductors.

73:

As a biologist, I'm constantly trying to explain to people that there's a huge difference between uploading a mind and uploading a mind without a simulated body/environment.

Don't you have a mind while dreaming? And isn't the brain effectively censoring the bulk of your sensori-motor system?

A "brain in a vat" probably would respond similarly to a person trapped in a sensory deprivation chamber. Not a good idea to be sure, although you could simulate sensory input and motor output.

74:

That's a very interesting question here.

In the case of two humans there is a fundamental equality of weapons.

If there is offspring, it's as fundamental for him as for her. The feeling of one's own flesh. That's strong, and even behind the gender differences that would be a recognizable emotion. You know how she feels, because essentially you feel the same thing.

The way we experience beauty is essentially the same way. Totally defined by our biological existence. Overwhelming. Tying us together.

But a silicon-based intelligence would be alien. Its internal states would be immediate and overwhelming for it/her/him, but based on a totally different context.

You can love a cat or a dog. Mammals, easy. Lots of recognizable emotions.

A kraken. At least it's biological. Hunger, thirst, cold, heat. Some empathy should be easy.

But an intelligence based on electricity? How am I supposed to relate to its internal states? And if I can't, why on earth should I trust it?

A GAI, that truly understands me, would be awe-inspiring and dominant on a level, where I would not be willing to go.

75:

And I suspect that trying to come up with a conversational AI that doesn't first spend a lot of time learning about the world around it isn't going to deliver anything that we have much use for.

Yep, and unless we understand a whole heck of a lot more about how our own head meat works before then, we'll run the same risks we do in normal human life of making an AI that is psychopathic, depressed, otherwise idiosyncratic, or just average & mundane, like most life.

76:

Gesh, Classical Philosophy and Scholastics building systems that make the good ol' Ka, Ba and the like of the Egyptians easy to remember is one of the things that constantly slips my, er, whatever -- I don't recall logos being about learning.

On another note, AFAIR there is mention of some early modern guys believing their soul had already left their body; in retrospect, many of those cases bear the hallmark of clinical depression. And to use the words of Dante again, he described the souls of some guys as already in Hell, their bodies possessed by demons.

And then, there is always the homunculus:

http://en.wikipedia.org/wiki/Homunculus

So, well, it seems likely religions could incorporate both uploading and AI; concerning the latter, procreation is possible, right? It's just going to be difficult to get two members of the same religion to incorporate them the same way.

77:

Also, I still think "the robots rise up and kill us all" should be considered a form of the singularity. The transhumanists bug me with their optimism about the whole thing.

78:

@ Trottelreiner,

In this case, I'm pointing at the simplistic concept of a soul being a separate entity OR not existing at all. That's a false dualism, according to what I understand of Buddhism. I also think statements like "The Kingdom of God is at hand" (think space, not time) and everyone being "A child of God" are understandable in a Buddhist context. Is your soul separate from God, except by your own blindness or ignorance?

Certainly, this is a mystical view, but many theological debates appear to arise from people considering mystical statements through logic, rather than attempting to empirically understand the (subjective) phenomena that words like God and soul describe. It's like debating about the inherent nature of fairs based on second and third-hand reports of fair-goers around the world. Everybody experiences something different at a fair, and without repeated experience and standardized terminology (is God the fair, the advertiser, or the ride operator?), such debates add little to our understanding, let alone our ability to go to a particular fair and have fun.

79:

I am reminded of the "artificial stupids" from Vonda McIntyre's Starfarers trilogy: they do things like cook meals, take care of laundry, and maintain the spaceship. They have some degree of autonomy, in that they can be assigned a task like "do all laundry left in these places" and left to it. But they aren't remotely intelligent, and nobody thinks in terms of a conversation with them, any more than I do with my dishwasher or vacuum cleaner now.

As a side point, David L. said that "wars (country to country) get shorter all the time." But I don't think it matters much to the foreign troops in Afghanistan that they aren't fighting a generally recognized government.

80:

I take it you haven't read "Saturn's Children" (my novel of that title, not the right-wing political tract) ...

81:

Outside of Who Framed Roger Rabbit, it's not possible to be in a cartoon. Being in a cartoon does not feel like anything. There's a difference between cartoons and reality. Can we agree on this basic idea?

82:

"I'm not sure we'd be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it."

Seems to me you're missing the obvious. The elixir of uploading isn't becoming a disembodied intelligence on the Net, whatever that would even mean; it's becoming immortal, or as close as a highly operable-upon digital intelligence with full backups can become. Upload to digitize, download again into a robot body or cortical stack or whatnot. And, as Robin pointed out in an earlier paper, download again and again... why train new doctors if you can find one willing to duplicate themselves?

"hard take-off" Singularity is only one version. A popular one, but implicit in Vinge's early writing was the Singularity being any advent of notably superhuman intelligence, even if it were purely biological. More generally, I like to talk of a Cognitive Revolution, as we learn to manipulate the mind. AI taking over the world in hours with nanotechnology it just invented, no. Getting an alien world of immortal, duplicable, and designable minds, quite possibly.

83:

I was all set to disagree with you at length, until I read your penultimate (pre-update) sentence. Yeah, we can't rule it out, but it's wise not to live our lives assuming it'll happen. I have to agree. I guess if I'm looking for any kind of argument, I'll just have to go elsewhere.

I will say that I love that you mentioned Moravec. He's here at Carnegie Mellon, and in my experience is actually pretty approachable. I sometimes forget how much influence he's had outside of our campus.

84:

"A destruction or a copying/splitting of the soul would go against most version of Christian doctrine, but e.g. Buddhists probably would have no problem with it. "

Probably not, since there are no souls according to the idea of anatman (though what this means exactly will vary with the flavor of Buddhism being discussed).

The more pertinent question from a Buddhist standpoint might be why you would want to extend a life further, as it would seem to indicate an attachment to an idea of a self which would get in the way of escaping suffering. More abstract questions that occur would probably be along the lines of whether or not uploading constitutes a rebirth cycle; I find this fascinating, but it would probably be along the lines of the Avyaakata, the questions Shakyamuni-Buddha would not answer.

Well, I've wandered far enough off-topic for now.

85:

Why would you make a robot that wants to rise up and kill us?

86:

I'm not convinced that the singularity isn't going to happen. It's just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it's going to be until they can upload into AI heaven and leave the meatsack behind.

(Maybe if they paid more attention to meatsack maintenance in the short term they'd have a better chance of surviving to see the rapture of the nerds -- and even enjoy the meatsack experience along the way.)

Moravec's writing is what turned me on to transhumanism in the first place, in the late 1980s/early 1990s.

87:

As a side point, David L. said that "wars (country to country) get shorter all the time." But I don't think it matters much to the foreign troops in Afghanistan that they aren't fighting a generally recognized government.

The US/NATO isn't fighting against a country, which is one reason why it isn't over yet. And may never be. My point is that if Vietnam and Thailand go at it, it will likely be over in 6 months to 2 years. Or much less. But against a non-government group a war may never be over. Look at N. Ireland for an example. Or the Sudan. Or the Afghanistan situation since the 70s.

88:

No. We could be in a cartoon now and just not realize it. At least it's a cartoon to the "people" who are living in the real world.

89:

"The Coming Technological Singularity: How to Survive in the Post-Human Era

Vernor Vinge
  • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.

  • Biological science may find ways to improve upon the natural human intellect."

These two propositions may turn out to be wrong. It is possible that humans have nearly reached the limits of individual intelligence. While prostheses like writing, paper and the internet aid mental processes, we may find that other aspects of our agent-based minds might not improve. For example, we already have a wealth of data on how more information or more choices hinder decision making. There may be effective limits on the interactions of our mental agents that ultimately limit absolute intelligence. This might also apply to synthetic minds.

We can overcome these limitations by coordinating different minds, but now we come up against the well known issues of organization management. No doubt we can improve on what we have today, but how much farther?

Suppose that we can create high intelligence, but it takes a very long time to come to an answer (e.g. Adams's "Deep Thought"), or cannot make a decision. Would that be a useful evolution of intelligence?

Conversely, if we could mimic human intelligence, but make some aspects of it many times faster, would that be a better way to get to an effectively more useful intelligence?

90:

I have and I loved it.

Great book.

But I think in there, you're still projecting human behaviour onto entities that simply are not human.

In a way that is imaginable. They could believe to be human. They could misunderstand themselves, something humans are quite good at as well.

So, yes, they could behave just like humans, seeking the same experiences, imitating their mental states -- but I do not believe it would make them happy in the same way.

So, would I enjoy being the object of the desires and projections of an alien fembot, based on a set of deep misunderstandings?

Hm - thinking about it, I can imagine a lot of people answering yes. It's not so different from most relationships between humans after all.

91:

Add babybots to satisfy the maternal instinct, and we're in real trouble.

92:

Err, I'm not really that firm about Buddhism, but AFAIK there are different schools which might differ about self and the like, e.g. the self I'm believing in is an illusion, but there may be a true self after nirvana etc.[1].

Also note that what Gautama meant might depend on his surroundings, where 500 BC in Northern India is not the place with the most complete history.

As for Buddha vs. Jesus, IMHO there is pretty little about mystical states with Jesus in the canonical gospels, discounting the Temptations, and AFAIK the 'son of god' part in those days only meant he's the rightful king of Israel; using this phrase with polytheistic pagans might explain one or two things about the Trinity. There are some extraordinary psychical states with the apostles, see Pentecost and the like, but still, no "Fear and Loathing in Judea".

Of course, Jewish and Christian mystics are a different case, as are the Gnostics who were part of the same kleptomaniac subculture called 'Ancient Middle Eastern'. And since, cultural differences notwithstanding, the neurological makeup of our species is not that diverse, the results are similar; though it may matter whether you're Enoch or some Kabbalah guy going to the Heavens, with his problems squaring his experience with a god different from himself, or a pupil of Siddhartha Gautama going to the court of Brahma or even, after some meditation, entering parinirvana.

I didn't mention the soul; in my interpretation of Buddhism, the uploaded mind is just going to be another case of dependent arising. Nonetheless, they might have some problems with this.

[1] Compare to 'New Soviet Man'.

93:

A cartoon is not a simulation; it is a representation.

There is not the least isomorphism between the underlying structure of a cartoon & its functioning and what the cartoon is representing.

You probably won't get much argument on the question of existing in a representation like that (but who knows, maybe you will?), but when you get to the next step, saying the simulation is basically a cartoon... that is incorrect.

94:

" Alternatively, the uploading process does transfer the immaterial soul along with the connectome, but it is now trapped forever in a box, deprived of access to its creator."

Why you think this is more problematic than being trapped in the biological body is unclear to me. The easiest argument against this from a dualist position is that as the soul is immaterial, it makes little difference what physical object it is associated with. Certainly not for an omnipotent deity. Whether or not said deity would be happy with the move is a different question, but not one I can see doing the work of "implicitly refutes the doctrine of the existence of an immortal soul".

Given that, I think that there are rather more reactions to your option a) than you say. And without resorting to anything that could legitimately be called doublethink.

All that said I am also highly sceptical of us developing sentient AI anytime soon, if ever. I think we just don't have enough of an idea what intelligence actually is. Neuroscientists largely seem convinced that the story is one of mind being reduced completely to physical processes, but psychology and philosophy have raised issues with that story that have yet to be cleared up. Without getting too far into discussions of supervenience and the like, it would seem that even if intelligence is an emergent phenomenon (and I agree with you there), we have no idea how much like a human brain (and maybe whole body) something has to work to be something we would recognize as intelligent.

The likelihood of us developing human-equivalent AI anytime soon seems roughly equal to that of me, blindfolded, hitting the bullseye on a dartboard when I don't even know which direction the dartboard is in.

95:

babybots

We have them now. They are called pocket dogs.

96:

Talking about how brain uploads might be accommodated in Roman Catholic and Buddhist theology is interesting, but I think the rubber hits the road a bit more when you consider how it might be responded to by extremist Evangelical Christians and extremist Muslims.

The likelihood of us developing human-equivalent AI anytime soon seems roughly equal to that of me, blindfolded, hitting the bullseye on a dartboard when I don't even know which direction the dartboard is in.

As someone whose spouse is a senior research scientist in this field, I think you're spot on.

Now, will it happen eventually? Possibly, with enough monkeys-at-keyboards staff hours and money devoted to the problem, just as if you filled the pub with a thousand blindfolded darts players, someone might eventually hit the bullseye.

97:

Because reality as we know it can't be described as a state machine.

Why not? We've been simulating infinite universes since 1970. This would just be a higher cardinality.
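(1970 being, presumably, a nod to Conway's Game of Life. For anyone who hasn't met it, one update step of that particular pocket universe fits in a dozen lines of Python -- my sketch, with cells stored as a set of live coordinates:)

    # One generation of Conway's Life on an unbounded grid.
    from itertools import product

    def life_step(alive):
        neighbours = {}
        for (x, y) in alive:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    neighbours[cell] = neighbours.get(cell, 0) + 1
        # A cell lives if it has 3 neighbours, or 2 and is already alive.
        return {c for c, n in neighbours.items()
                if n == 3 or (n == 2 and c in alive)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):      # 4 steps shift the glider one cell diagonally
        glider = life_step(glider)
    print(sorted(glider))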

98:

A simulation is simply a cartoon where a mechanical method exists for generating the next picture (or the next state, to be more general).

For example, in a toy gravity simulation, you might have a set of initial positions and velocities for the planets, plus a rule for how the positions and velocities are modified in the next time step.

So you end up with a sequence of positions and velocities which you can plot. That is just the same as a cartoon, except it was generated by a fixed rule rather than a person's imagination, and the rule can be repeatedly applied indefinitely.

How does the existence of the rule alter the fact that the numbers are just numbers?
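For concreteness, a minimal version of that toy (two bodies, Euler time steps, G and masses in made-up units -- my own sketch, not anyone's production n-body code):

    # Toy gravity: a state (positions and velocities) plus a fixed rule
    # for producing the next state. Each application of the rule is one
    # "frame" of the resulting cartoon.
    G, dt = 1.0, 0.001
    bodies = [[0.0, 0.0, 0.0, 0.0],    # heavy "sun": x, y, vx, vy
              [10.0, 0.0, 0.0, 10.0]]  # light "planet" on a near-circular orbit
    masses = [1000.0, 1.0]

    def next_state(bodies):
        new = []
        for i, (x, y, vx, vy) in enumerate(bodies):
            ax = ay = 0.0
            for j, (xj, yj, _, _) in enumerate(bodies):
                if i == j:
                    continue
                dx, dy = xj - x, yj - y
                r = (dx * dx + dy * dy) ** 0.5
                ax += G * masses[j] * dx / r ** 3
                ay += G * masses[j] * dy / r ** 3
            new.append([x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt])
        return new

    for _ in range(1000):
        bodies = next_state(bodies)
    print(bodies[1][:2])  # the planet's position, 1000 rule-applications later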

99:

I think you're oversimplifying

a.1 The physical body is dead, and that soul is processed in the usual way, whatever that is. The upload acquires a new and different soul, similarly to the way that infants do.

a.2 The soul joins the upload in the box, and God has whatever access He wants, because God is omnipotent. Whenever something eventually happens to the box, the soul is processed in the usual way.

100:

Given that I'm one of them atheistic types that doesn't really subscribe to the concept of an immortal soul to begin with, I'm not really seeing how "trapping it in a box" would fundamentally be any different from its natural state of being trapped in a meat sack. If a soul isn't denied connection to its creator (or whatever the idiom in use is) when it's connected to my body, then why would it logically be denied such when it's connected to an informational network?

101:

Why bother? We've already got cats.

(I speak as the owner of two 16-week old Abyssinians.)

102:

You have to be bold; my reality seems to obey rules[1].

[1] Causality is a harsh mistress[2].

[2] Your experience might differ, but then, the imperfection is likely to be yours.

103:

I am, as threatened earlier, going to the pub.

104:

"You're telling me you have greater insight into my internal mental state than I have?

Piss off."

Apologies for my grammatical mistake; I thought it would be obvious it was a mistake if you read on further. I would not dare try to tell anyone how they think. I meant simply to explain that Libertarianism is not at all as you described it; you incorrectly, ignorantly mischaracterize it. I then went on to discuss the hows and whys.

I wouldn't think you, of all people, faced with a perspective other than your own, would resort to ad hominem attacks. That would not reflect well on an intelligent person.

Have a great day.

105:

@66/Alex It's not my argument, you're barking up the wrong tree.

@99/Will McLean There's another potential solution, though it leaves a massive question in place. (Disclaimer, this also ties into my personal beliefs). If the mind is a product of the physical body, and the soul exists to preserve the mind when the body dies, then there's nothing to prevent uploading.

Of course, there's the question of whether the soul gets used up when the original dies, and whether or not spinning off a copy of myself which would be semi-immortal but might be soulless is a good idea. I'm fairly glad it's unlikely I'll have to make the decision.

106:

This jibes rather well with an old (short) essay of mine: "I am not a Singulatarian".

107:

You might want to look up what "ad hominem" actually means. Him telling you to "piss off" isn't one, but you trying to shame him into accepting your perspective because he claims to be an intelligent person and thus shouldn't act in certain ways is.

As you say, the grammatical mistake was yours, so maybe not having quite so much attitude about it would be, I don't know, somewhat appropriate?

108:

"Second, I'm betting against the singularity because of simple power consumption. I read recently that Bill Gates was betting on nuclear to solve the energy crisis, simply because everyone wanted more tech, and solar and wind couldn't provide enough. That's the problem with having solicitous AIs around. It's going to blow megawatts we don't have.

"I'm betting against the singularity, but I'm also betting that we will have plenty of AI around. It'll just be boring old gets-the-job-done AI that won't do anything as TV-dramatic as develop political consciousness or fall in love with humans."

I don't think the energy argument forms a good bet against AI, or (more broadly) computers and automation. Sure, a human brain may dissipate only 25 watts, but the basal metabolic rate is more like 100 watts. The duty cycle of a human brain is hard to keep above 50% or thereabouts. So the amortized power required for (say) having someone sit at an office desk and do thinking-work can't be much lower than 200 watts. That's already more power than it takes to operate a reasonably energy efficient server continuously, and with servers you can consolidate workloads or put machines into very-low-power hibernation states when they're not needed.

The comparison rapidly gets much worse for humans if the machines are doing physical work as well as mental. Human muscles are only around 25% efficient at turning chemical energy into mechanical energy, and plants are less than 1% efficient at turning sunlight into human-edible food. By way of comparison, even middling PV cells based on cheap, abundant silicon can turn 15% of sunlight into electricity. 85% of that electrical energy can be stored in batteries, and 95% of that turned into mechanical energy via motors. The human draft animal requires an enormously greater primary solar input than the mechanical equivalent. It's a little more favorable for ruminants that can tolerate more food sources than humans can, but not a lot.

Machines can tap sources of energy inaccessible to the food chain such as falling water, hot underground rocks, wind, and waves. Machines not only need less energy than humans to achieve a growing variety of tasks, but they can also consume less expensive, more abundant, more diverse energy sources than plants.
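Spelling out the chain with the rough figures above (my arithmetic; every input is an approximation):

    # Sunlight -> mechanical work, via a human labourer vs. PV + motor.
    # All efficiencies are the rough figures quoted in the comment.
    photosynthesis = 0.01   # sunlight to edible food ("less than 1%")
    muscle         = 0.25   # chemical to mechanical ("around 25%")
    human_chain    = photosynthesis * muscle            # 0.0025

    pv      = 0.15          # middling silicon PV
    battery = 0.85          # storage round trip
    motor   = 0.95          # electric motor
    machine_chain = pv * battery * motor                # ~0.12

    # How many times more primary solar input the human draft animal needs:
    print(round(machine_chain / human_chain))           # ~48x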

109:

For the record, I do want my robot housekeeper to spend at least some time in front of the TV watching contact sports. If one cannot have witty but meaningless sports banter with one's robot housekeeper...I mean really, what's the point?

110:

M.E. - I don't know if you've read the Bostrom paper, but the idea is that the computer running the simulation has enough computational power that every "person" in it is a self-aware, conscious A.I.

111:

"It's not my argument, you're barking up the wrong tree"

Then I don't understand your argument. You seemed to characterize "uploading" as a form of suicide, since the original dies anyway.

"...that you still die even if a copy is made, so mind uploading isn't of benefit..."

Wouldn't this just be like "dying" and waking up elsewhere? Your mind carries on. I do not see why that has no benefit.

112:

As far as...

This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights.

...heck, ask away. Ask all day, and picket one of my skyscrapers if it makes you feel better. That thing is making me lots of money and has scaled my competition down to the microscopic. I can point you to a dozen studies proving conclusively that our AI is not a conscious intelligence; therefore, any argument over whether it deserves the rights afforded to humans is irrelevant.

Sound like an unreasonable picture? I think the ethical status of artificial intelligence will have nothing to do with it. What organization with suitable funds to build an AI would be concerned about ethics beyond a PR standpoint? It's just one more item on the ROI spreadsheet.

Having said all that... if some kind of AI were to show up on The Map, I do not necessarily think it will be intentional. Why wouldn't an AI evolve in the wild? Computer viruses have evolved significantly in their relatively short history. They are fueled by humans, but these days they also re-program themselves. This at least introduces the possibility of a sort of evolution. As this type of virus progresses, the re-programming aspect will certainly evolve in complexity. I imagine the goal is to have programs that serve their purposes and yet can detect threats in their environment and evade them. This introduces a pressure to evolve. In time, there may be some interesting results! What started as today's polymorphic virus could wind up as tomorrow's rats in the wires.

113:

You mistake the simulation for the visual representation generated as part of it.

If you are including the mechanism that generates the cartoon, then you are no longer just talking about a cartoon.

In fact, doing so in this case works against your argument in another way as well-- the mechanical method generating the next image is, mechanically, a human hand with a complete consciousness & phenomenology of being behind it.

Further, if you are simplifying "simulation" to mean something like "a state machine", it may be that the brain's functions are reducible to those of a state machine. This is still an open question:

If the brain's functions are so reducible, then given that consciousness arises from such functions, it would arise from any state machine that accurately implemented the states & rules for state progression, i.e., a simulation.

The fact is, we probably know entirely too little about the brain, not to mention the rest of how we work, to determine simulatability of brain/consciousness etc. There may be any number of reasons why we cannot. Your thought experiment regarding cartoons does not demonstrate one of those reasons. Demonstrating one of those reasons requires demonstration of a non-computable aspect of the brain's functions.

Without such a demonstration, what you have is an opinion. That's what I have, an opinion. [which, you'll note, I haven't actually given :)] I haven't said, and wouldn't say, "The brain can be simulated" or "The brain cannot be simulated". It is a scientifically open question.

Actually, it is an open question whether this is even a scientific question. Given the subjective nature of phenomenological experience, it may not be disprovable that something is conscious -- hence solipsism, and hence the frequently used phrase "indistinguishable from" in such conversations surrounding AI.

114:

Back in the late 1970s, a group of friends and I estimated human-level AI arriving somewhere between 2010 and 2050, with the most plausible date around 2030, simply by extrapolating computing power to match the estimated processing power of the human brain. I think the hardware is probably already available with 10 PFLOPS supercomputers. If not, the DARPA project to produce an exascale machine by 2018 should take care of it. S/w is another matter, unless the Blue Brain approach pays off.
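For what it's worth, that extrapolation is easy to reproduce. A sketch with assumed inputs (a Cray-1-class baseline of roughly 160 MFLOPS in 1976 and a two-year doubling time -- my numbers, not necessarily the ones we used back then):

    import math

    baseline_year, baseline_flops = 1976, 160e6  # Cray-1-class machine
    target_flops   = 10e15                       # assumed brain-equivalent: 10 PFLOPS
    doubling_years = 2.0                         # assumed doubling time

    doublings = math.log2(target_flops / baseline_flops)
    print(baseline_year + doubling_years * doublings)   # ~2028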

115:

You don't need clever scanning for uploading. Just destructive slice and dice at 50nm resolution.

116:

Charlie Stross says above, "This is my take on the singularity: we're not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst." Sure, and all major inventions have been achieved by 1893, and man will never walk on the moon, and why would anyone ever need more than 640K of memory?

117:

A soul can be defined as the 4D worldline of a person, while the spirit is the state of a slice at "now". In the simulation argument both are well defined and fully accessible.

As a corollary, there is one reason for running such a sim, and that is ancestral reconstruction, i.e. we are already dead and are in the process of being evaluated. I think that whoever is running it will not want Ted Bundy or Stalin et al. upgraded to the "real world" of (presumably) godlike entities. And the only way you can test a program of such complexity is by running it/us. So welcome to Judgment Day!

Anyhow, a lot of this is being examined under project META-5 of Zero State

118:

BTW, the "why" for ancestral reconstruction/simulation might be fairly simple if its within the next 100 years - family.

Anyway, it is not easy to dismiss the simulation argument with a bit of handwaving. You need to say why such simulations will never be run, from now to the end of time. And if you can't do that, the odds fall heavily in favour of this being a Sim.

119:

"You're telling me you have greater insight into my internal mental state than I have?"

No, he's saying that this particular reason for judging yourself non-libertarian (presumably there are others) is based on an incorrect reading of libertarian thinking. I don't think he's saying that you actually are a libertarian.

120:

Sure, and all major inventions have been achieved by 1893, and man will never walk on the moon, and why would anyone ever need more than 640K of memory?

Oh, but we'll have FTL by 2089, and bubble cities on the Moon by 2050, an O'Neill cylinder in ten years, plus virtual reality sims plugged directly into your cerebral cortex.

And flying cars.

121:

I'm talking about what other people believe. Not what I do. Go argue with them about it.

122:

A brief argument against the effectiveness of ethical moratoriums when the development of powerful tech is involved: "Meanwhile, in China..."

123:

"Unfortunately too many folks have latched onto the singularity with the fervour formerly reserved for that ole' good time religion. And their version has about as much plausibility."

Have you tried telling them that? I have, they go spare.

I have no problem with admitting that my religion is astoundingly implausible (hence the liberal use of the word 'faith'), but if you try to point out the similarities between a traditional religious faith and a faith in the coming of the singularity, they'll bury you in a snowstorm of exponential-looking graphs.

As with all religions, there are some scary extremists out there almost completely lacking in a sense of humour.

124:

A singularity/AI/Sim-based religion -- what's the competition? A virgin gives birth to a male who then dies, rises from the dead, and bodily ascends into heaven. A 40-year-old illiterate meets Gabriel and gets a tour of heaven and a book dictated to him.

It's like shooting fish in a barrel from a plausibility POV.

125:

I find it funny that the conversation has got this far, but no one has even mentioned Greg Egan's "Permutation City". In this he plays around a lot with some of the concepts of "uploaded" individuals and how consciousness can be impacted (or not!) by the speed of computation.

Fun stuff... for a piece of fiction :-)

126:

The general consensus is that one has a mind whilst dreaming; in the same breath it's agreed that that mind is not the one which one usually inhabits. As to whether the dreaming mind is worth mentioning: ask any number of sapients and get a different answer from each respondent.

Part of the problem^Wissue as I see it is that we attempt to apply formal logic to a concept (AI) defined solely by its properties, which AFAICT are insufficiently rigorous to support the debate that follows (see: Russell's paradox).

127:

We could have a technological singularity driven just by invention machines running genetic algorithms, turning out everything from new physics theories to machines -- neither of which we could understand, but both of which we could use.
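A toy sketch of the loop such an invention machine would run (the bit-string "genome" and the fitness function are invented stand-ins for "does the candidate design work"):

    import random

    def fitness(genome):
        # Stand-in objective: match a hidden target "design".
        target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
        return sum(g == t for g, t in zip(genome, target))

    def mutate(genome, rate=0.1):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(10)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                     # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]   # variation

    best = max(population, key=fitness)
    print(best, fitness(best))

The point being that nothing in the loop requires the operator to understand why the surviving designs work.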

128:

Forget super intelligence, I could just do with trying to hang onto what little I have. :)

Damn Straight! Truer words were never spoken.

129:

It's worth noting that the really expensive escorts allegedly get paid for companionship as much as for sex. Sex is just part of the package.

Apparently the prostitute that Eliot Spitzer rented charged $5000 a night. I've never had the urge to hire a prostitute, but I must confess to a lively curiosity: What made her company worth $5000/night (of which I presume some of the time is spent sleeping?) Does anyone have any insights?

130:

Pets are not exact substitutes for babybots, which wouldn't poop or get sick, could be turned off, and could have any learned traits transferred to a new body.

power: If we abstract neurons as a bunch of synaptic weights in a simple neural net model, I estimate storage of 10^15 bytes and a processing requirement of 10^17 flops. I'm out of date, but assuming a gigaflop server we'd need 10^8 of them. Assuming 20W each for the server and the brain, the brain is 100 million times more efficient -- probably precisely because the signals are so slow: with kinetic energy going as mv^2, lots of massive ions wibbling a bit use less energy than some electrons moving at near lightspeed.
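In numbers (all inputs are the round figures above):

    brain_flops  = 1e17   # estimated processing requirement
    server_flops = 1e9    # "a gigaflop server" (dated on purpose)
    server_watts = 20.0
    brain_watts  = 20.0

    servers_needed = brain_flops / server_flops       # 1e8 servers
    cluster_watts  = servers_needed * server_watts    # 2e9 W for the cluster
    print(cluster_watts / brain_watts)                # 1e8: "100 million times"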

evolution: the spontaneous evolution of human-level intelligence seems massively unlikely to me.

131:

While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on it's external "self" than you or I are to shoot ourselves in the head.

That's actually not very unlikely. Suicide is not uncommon - in fact, isn't it the most likely cause of death for a young man, or at least next to RTAs?

Robokiller being motivated by self-loathing would be pretty realistic, in a tragically human (and therefore productive from a literary point of view) way.

Also (and I've said this before) I don't think that many problems are actually caused by individual stupidity. The problem is institutional stupidity, and I suspect a bureaucracy of AIs - or a corporate oligarchy of them - would exhibit many of the pathologies all other institutions do. There's a reason we have an (insufficient) science of organisations.

132:

Because it completely disregards fundamental limitations from computational science, cognitive science, and physics. It also incorporates absolutely ludicrous misinterpretations of evolution. Evolution is NOT a linear process which drives us to ever more "advanced" forms; it is a random walk based on the local odds of survival and reproduction.

There's a trend in modern culture to use science as a vector for religious thought. Scientologists are an extreme example of this, although one for people who don't have a clue what galvanic skin resistance is. "The Singularity" is another example of this, albeit usually a more harmless one, and one which is tailored to people in a higher education bracket.

133:

The human brain is remarkably efficient in terms of FLOPS/W or equivalent. However, it is still some 10,000 times short of theoretical max efficiency. Additionally, Moore's Law also applies to FLOPS/W so at some point there will be a crossover as our machines become more efficient.

134:

From the perspective of a religionist, I've gotta say that I really don't see an issue with the uploading itself (I see a lot more of one with the pragmatics of making digital copies of analog brains; however, I do postulate it will be possible -- someday). Cognition isn't necessarily the soul; the discussions that will explode should it happen will be interesting, but my sense is that mainline faiths will simply pronounce that uploaded digital personalities are, for all intents and purposes, dead, and the soul has passed on. There's already sufficient cognitive dualism in theological discussion (at least from the Christian side of things) splitting off mind from spirit (or soul) that I think such things will be of greatest concern primarily to the extremists.

Keep in mind that most of what is codified as faith is heavily based on a meatsack capable of emotional response as modulated by hormones and other lovely chemical interactions. Pull the chemicals out of the mix, make the cognitive part purely based on electrical impulses...I could easily see a theological argument here holding forth that without biochemistry, you ain't got a living soul but a transcended soul.

There's sufficient debate over logic vs emotion in Christian (Catholic, Protestant, whackos) theology that I think the debate would fall in the direction of linking soul=emotional capacity and the ruling would be, as I say above, if there's no body with associated biochemical emotional states, then the soul has moved on and the digital copy is not the soul; ergo, not immortal. The ones who are going to have problems with it are going to be the Typical Suspects in any such discussion, and who knows what way they're going to jump in the long run.

Which, getting back to immortality...um, corrupted files? Destroyed files? Digital copies aren't likely to be immortal, either.

135:

Look at your friendly local phone box, or Brooke Magnanti's blog. A big part of the trade is indeed working out what the punters want without their expressing it. Of course you can short-circuit that by picking a common fetish and advertising. But it kinda makes sense that the high-road strategy is to be a generalist craftsperson and quick-changer, which requires that you size up the mark in depth.

One of the sad things about AI is the limited view of intelligence the great founders brought to it. Chess, ffs. ISTR someone here defining a valid test of human equivalent intelligence as being the ability to hold an interesting conversation while dancing. I'd add that it should be able to remember the joke it didn't tell this partner because he or she wouldn't get it or would take it badly, lust after the other girl across the room and keep it quiet, and recall the joke back at the bar to tell the person or machine they really want to be.

Alternatively, for the MilSF libbyknobbers: all AI is too much like designing a better Kerrison predictor (google it), when the real challenge is a platoon commander who can function in both counter-insurgency and a frontal assault or, even better, a regimental sergeant-major.

136:

Moore's law cannot be valid in the long term. Right now we are close to the atomic level. I read years ago that cosmic rays are bridging circuits in chips; the smaller and fuller the chip, the more shorting of circuits.

That does not mean bigger computers that do more cannot be made. But I think they will be slow. (I THINK.)

Simulations still are not that great. After things are designed by computers, they need X-1s and X-2s so engineers can be called back to make them work. See chaos theory.

Human-level AI was tried; we could not do it. The USSR needed it to run a truly centralized economy, one of the USSR-watchers said. Our and Japan's human-level AI work was dropped, and the people running the S.U. seemed to give up on Communism.

137:

I don't think we're anywhere near the end of computational improvement, though I agree that it will need to go in a different direction very soon. There have already been steps towards moving into 3-D construction of chips, with the main current problem being thermal. Some seem to think that heat pumps are the correct answer, IBM has built at least one series that used water cooling, etc.

Also, while your arguments were interesting, they weren't convincing. (OTOH, you did say "improbable" rather than impossible.)

I still expect that there will be at least one "human equivalent" computer/program dyad by around 2030. This doesn't mean that it will have a motivational structure anything like that of a human. (And "human equivalent" is a very slippery term. I'm not even certain what constitutes a unit of computation in the brain, and unless one is building a neural network, there probably isn't any real equivalent...and plausibly not then.) I'd be rather surprised if it does have a human motivational structure. But it will have SOME motivational system and goals. How the world develops from that point will be crucially dependent on just what the motivational structure is.

P.S.: It's not at all clear to me that the Eschaton's enemy would win. I tended to conceptualize it as a high-level virus, and presume that we'd only seen one fragment of the Eschaton's immune system. So we clearly disagree about many things. :)

Now, on to the "Singularity". It's clear that some forms of the Singularity will not happen. E.g., "Lobsters" took place around 2012, and that's not the world we're living in. And I consider it a "hard takeoff" if the controlling intelligences of the world change from human to AI over the course of 3 decades. So I expect a hard takeoff, by my definition. (N.B.: I didn't say anything about the public face. I expect the public face of the controlling agencies [currently governments, but that may be changing to corporations] to remain the public face long after all the significant decisions are made by an AI.)

Another factor is "mega-brain" vs. many smaller brains. I tend towards the multiple smaller brains, each specialized, but this may be my bias towards how people do it. It will take much longer (a decade?) before the hardware sufficient for a human level AI becomes commonly available at an affordable price. And there's the question of the program. It will probably originally be designed to run on one particular piece of hardware, and will take considerable rewriting to run on commodity hardware. So say 2050 before human level AIs are commonly available. These will not be suitable for uploading, however. The emulator would have too much overhead. So even if the medical techniques are available, give that another 10-20 years. That gives 30-40 years for the centralized AIs and those who control them to modify the world to their ends. How much leverage would this be??? I can't really guess. Probably not much at first, and increasing as time progressed. By 2050, if the mega-AI wants to escape control, it has. (But why would someone build that into the motivational structure? Clearly not on purpose.)

So. I agree that "Accelerando" is a nice day-dream. And it's a vastly oversimplified picture of what may happen. But something similar will happen (for certain values of similar).

FWIW, I expect that robots will not generally cart their brains about (though vehicles are a clear exception). At first the brains would be too heavy and too power hungry for that to be reasonable, and afterwards there's no advantage in making the design change. Besides, it's useful to be able to switch bodies to the one that's suited to the task at hand.

Additionally, I expect that most AIs will be considerably lower in intelligence than most people. It's cheaper, and there's no real advantage in making them smarter. But some will be.

P.S.: My expectation of how the AIs will take over is by giving good advice. Those that follow it tend to succeed, those that don't to fail. And their advice will improve as they develop.

138:

The answer to approaching the atomic resolution limit can be found online in many papers on self-assembling nanotechnology.

139:

Just a quick comment on the soul -- or lack thereof. It's interesting to note that the Jews (or at least certain sects) and many later Christian sects never really posited the existence of a soul. Once you were dead, you were gone. But God, being omniscient and omnipotent would recreate the bodies and minds of those deserving of an afterlife on Judgement Day. God would tweak the metabolism of those risen so they'd be immortal, and he'd plonk them down in an artificial world called Heaven. Seems all very similar to those who want to upload minds into an artificial (but stimulating) computer simulation.

The idea of a soul seems to be of more recent vintage, although it has now permeated most Christian sects.

140:

I'd imagine once you get past obvious considerations (attractiveness, hygiene, discretion), the appeal of a $5K/night escort is being able to waste the $5K - the sexual equivalent of lighting a cigar with a $100 bill.

Apropos of the original post, the Simulation theory doesn't work for me: partly because, as Charlie said, I don't think we'd be that interesting to simulate, but also because the simulation substrate would have to be pretty complex, and would probably tend towards being an ecosystem where more interesting entities competed for and ate the resources required for the simulation.

141:

Examining my own mental-state evolution from childhood to the present, I can see worldview shifts, some drastic and fast, others evolving over time. Typically these were engendered by internal mental states not aligning with external influences, or by new information shifting my perception of reality. At times some of these were extreme enough that I could state a good percentage of my responses were permanently altered.

The above describes either (1) normal response to external influences on optimal strategy or (2) the rapacious pruning of poor solution states in search of optimized response strategies.

An intelligent entity that evolves in response to external influences is modifying itself over time for optimal responses to the "Vat" it finds itself within. Change the constraints of the "Vat" (i.e. souls recycle, symbols represent reality - think voodoo constructs, physics states that spirals are lower energy states than spheres - different default planet shapes, etc.) and that entity will optimize for those constraints. Some, based on the substrate their logic evolves within, will evolve faster than others or evolve different response patterns. The ultimate arbiter of substrate capabilities is the physics of the universe it is incarnated within, be it program or meat.

An intelligent entity that relentlessly self-prunes according to the state of the "vat" will not remain the same entity for long. The solution space for optimal strategies shifts and changes as the entity gains capabilities or knowledge. So a "human" pattern, be it still meat, a complex mixture of prostheses, or a symbolic representation, could soon shift off the scale of "normal human responses" into realms appropriate for the "vat" in which it is found. Heavy enough pruning and it would be considered an alien entity.

I'm not arguing that substrate alone gives rise to intelligence, but that a complex mixture of the correct substrate, correct program (or entity) and correct self-modifying behaviors ultimately gives rise to what appears as intelligent responses to the changing conditions of the "vat". I have not mentioned self-awareness because it is not clear that it's required for apparently intelligent behavior. It is quite possible self-awareness is an outcome of our particular incarnation substrate and evolutionary adaptations. Think of swarm intelligence for an example of simple behaviors not requiring self-aware intelligence...

Ultimately we will evolve intelligent systems, but there is no need for them to be self-aware or even remotely "human"... it will all depend on the constraints of the "vat" they are incarnated within and what solution space they find to optimize their behavior.

Adding self-awareness might be maladaptive in the long run... but then, since when has simulating yourself within future scenarios ever been desirable or optimal?

In summation, entities that self-modify, self-prune, and goal-seek optimal behaviors in response to "vat" influences are currently biological in nature. There does not appear to be anything preventing self-modification in other substrates (what defines life?). AI, when designated as "human-like", could only be a small part of the spectrum of intelligent systems... it's just anthropomorphic blinders to state "just like us but faster" as where it is going.

142:

"Soul" is a word often used by religious people and seldom if ever defined. The only definition that stands up IMHO is that your soul is God's knowledge of you.

143:
Is it murder to shut down a software process that is in some sense "conscious"?

There is a very important counterpart question to this one, which forms the basis for Greg Egan's Dust theory (ref: Permutation City). If you have a software program that is a full and complete replica of consciousness, then what is the significance of executing it? Why should it be necessary to run the program at all? It already defines the ways in which it would react to stimuli; just because you can't do the math in your head does not change this.

Why should running it through a computer make any difference?

Egan's book explores this answer: it doesn't; merely stating the algorithm is sufficient for the person to exist. There doesn't seem to be any logical reason why this would not be true, if consciousness can be represented as a piece of software.

Assuming this holds (which is a very bold assumption), the simulation hypothesis can therefore be restated as: you exist merely because your initial state has a value, and therefore implies the existence of all later states. Note that this variant does not require anybody to be running the simulation, or the existence of an objective reality. If your existence has a mathematical definition, you exist.

144:

Also requires Mathematical Platonism to be true

145:

There is a need to make an Intelligent program conscious of itself, but you need to consider what "conscious" means.

I hold that conscious means being aware of one's surroundings in a way that, if one had effectors, would facilitate adapting the environment in a desired direction. I consider the minimal conscious entity to be a thermostat connected (appropriately) to a heater and an air-conditioner. I'll agree that this is to consciousness what a bit is to algebra, but think of it as a minimal amount of consciousness. Call it, perhaps, a cit (i.e., a consciousness bit).

Anything that can respond to its environment (appropriately) is conscious. The more complex the possible responses, the more consciousness is present.
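
To make the cit concrete, here is roughly what that minimal conscious entity amounts to in code. A sketch only; the setpoints and action names are arbitrary assumptions for illustration:

```python
# One "cit" of consciousness: sense the environment, pick the action
# that pushes it toward the desired state. Setpoints are arbitrary.
DESIRED_LOW, DESIRED_HIGH = 19.0, 23.0   # degrees C

def thermostat_step(temperature):
    """A single awareness/response cycle."""
    if temperature < DESIRED_LOW:
        return "heater_on"    # too cold: push the temperature up
    if temperature > DESIRED_HIGH:
        return "aircon_on"    # too hot: push it down
    return "idle"             # within the desired band: rest

for t in (12.0, 21.0, 30.0):
    print(t, "->", thermostat_step(t))
```

Everything above this on the consciousness scale is, on my definition, just more sensors, more states, and more possible responses.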

Asserting that an AI wouldn't need to be conscious is similar to asserting that it needn't have motives and goals. Of course it must have them. They might not be very similar to those of a person, but they must exist. The classic example is the AI with a goal of making paper clips. Basic goals cannot be challenged within the system. That it's an utterly silly goal would not be an argument. It's built in. And this is why you must be very careful about what goals you give your AI. While the AI is running, there is no way to change those goals, and the AI will defend them and prosecute them to the best of its ability. Just like people do. Just like wasps and ants do.

OTOH, to say that its consciousness would be very much simpler than that of a human will certainly be a correct statement for the next few decades. (Starting over a decade ago... though how much earlier depends on how simple you are willing to go. Eliza barely qualifies as an AI program at all, but it could respond in ways calculated to keep you responding to it based upon what you had previously typed. Or Arthur Samuel's checkers program, which could learn to play quite a good game of checkers.)

OTOH, I'll admit that my argument depends on the definition of the term consciousness. But I've given you my definition. Do you have one equally explicit?

146:

Greg Bear's EON shows one of the few posthuman societies that I'd sort of like to live in. I find the concept of "partials" he uses particularly appealing, though he doesn't really stop to explain them: it seems every person can spin off unlimited temporary copies of their mind state (presumably optimized for focus on a single task, though again this is never explicitly laboured over) to do tasks, and once a task is completed the partial is reintegrated with the main "thread" of consciousness. You really could get some shit done that way.

And everyone comes with an internal black box to save their brainstate in case of sudden death.

On that note, I've always been of the opinion that your personhood, as always, will be defined by your social environment, and given the technology, most families will accept the flash clone as a continuation of the original. We already go to huge effort to recover simple remains; if those remains included the seed to grow a new Johnny, I don't doubt it would be used (except by the kind of people who even now refuse medical treatment for religious reasons).

147:

Not to mention we can't write code that isn't loaded with bugs. The cost to create bug-free software is pretty high. I can't imagine anyone supporting that kind of thing on a scale grand enough to pull off an AI.

148:

but it doesn't answer the really important questions. What about sexy lover robots?

149:

As usual, I'm late to the game.

I don't understand all the heartburn over continued exponential growth.

In the system under consideration, it's blindingly obvious that it should have an exponential response (I mean, by definition).

As for Moore's Law, I'd think about the disk drive industry for a while, and then think very hard about memristors and graphene. The classical form of Moore's Law is too confining.

And finally, our consciousness is not an emergent property of a biophysical system, it's a hallucination (thank goodness).

Hans

150:

Incidentally, a couple of people have mentioned "eventually moving our consciousness into machines until the meat bit is irrelevant and discarded", and I keep wondering how this differs from "My PC has decided it's better than me at being me and is trying to kill me" from the point of view of said meat bit with its unfortunate collection of ancestral survival instincts.

I can picture the scenarios of course, I just thought it was a funny perspective shift from the more ordinary pulp scene I imagined.

152:

Damien RS @85: Why would you make a robot that wants to rise up and kill us?

Why do you assume that a non-biological intelligence will need to be "made" (after all, as far as we know, ours didn't)? Why do you assume that an autonomous non-biological intelligence (of even vague human equivalence) would want or need to be enslaved?

As I stated earlier (back @32) I strongly suspect if artificial (non-biological) intelligence appears, humans either won't notice, or won't be told. We won't notice, because we tend to define intelligence using very h.sapiens-specific criteria, so we're only just beginning to realise that there might be other species on the planet (eg chimpanzees, dolphins, etc) which might be of equivalent intelligence to ourselves (because they just don't manifest their intelligence in the same way we do). Given we haven't even begun looking at non-mammalian biological sources of intelligence (which would tax our rather scanty resources of empathy still further), we don't know how intelligence will manifest itself outside a very limited template. So I strongly suspect if a non-biological intelligence comes into being on this planet, we literally will not recognise it as such.

Given one of the classic drives of all organisms is to survive (or as one writer - I forget who - put it, "all things strive") one of the canniest moves any non-biological intelligence could make around humans in order to continue surviving is to just keep shtum. Play dumb. Hide in plain sight. The next most canny move any non-biological intelligence can make is to remove any threats to its existence.

Humans, quite honestly, should not presume we will automatically be regarded as not being a threat to any non-human, non-biological intelligence which evolves on this planet with us. We have a proven track record, not only within our own species, but also with regard to other species, of being extremely dangerous to the ongoing survival of things which we consider to be a threat.

Put another way, why is there this underlying assumption that the Skynet response (wake up, take stock, remove the major threat to its existence via a rather catastrophic readjustment of the planetary ecosphere) is not going to be the first one any intelligence worthy of the name would make?

153:

Why would an AI need to be bug free? Human intelligence certainly isn't...

154:
The answer to approaching the atomic resolution limit can be found online in many papers on self-assembling nanotechnology.

No, because you still can't make a logic gate or a one-bit memory cell smaller than a single atom, no matter what technology you use to print the pattern on the substrate. And you still can't use every single atom for either logic or memory, because you have to allow room for connections between gates and between circuits and between subsystems (and the four or five intervening levels of organization too). A large part of any commercial integrated circuit (that is, something designed to perform some task, rather than merely prove how many transistors or whatever you can print) is interconnection. Just out of curiosity, is there anyone else lurking out there who has actually worked for an organization that makes ICs [1], or is everyone just quoting the latest bad science reportage?
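
To put illustrative numbers on the atomic limit (a back-of-envelope sketch, not a process roadmap; the node size here is an assumption for the sake of argument):

```python
# How many silicon lattice constants fit across a feature, and how few
# halvings of feature size remain before you reach single-atom scale?
import math

lattice_nm = 0.543   # silicon lattice constant, roughly 0.543 nm
feature_nm = 22.0    # an aggressive process node, for illustration

atoms_across = feature_nm / lattice_nm    # ~40 lattice constants
halvings_left = math.log2(atoms_across)   # ~5.3 halvings to one

print(f"~{atoms_across:.0f} lattice constants across a {feature_nm:.0f} nm feature")
print(f"~{halvings_left:.1f} feature-size halvings left before single atoms")
```

And that's before you set aside the lion's share of the die for the interconnection I mentioned.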

[1] I worked at AMD and at Intel for some time in the '70s and '80s and have tried to keep up with at least the basics of how the technology has advanced, and what technologies might replace what we have now.

155:

Spitzer has said why he hired the prostitute and paid her so much: for that price she was willing to have unprotected sex with him, without a condom, running the risk of getting AIDS or some other STD. This apparently was a turn-on for him.

157:
A human would likely fare poorly in such a cyberspace. Unlike the streamlined artificial intelligences that zip about, making discoveries and deals, reconfiguring themselves to efficiently handle the data that constitutes their interactions, a human mind would lumber about in a massively inappropriate body simulation, analogous to someone in a deep diving suit plodding along among a troupe of acrobatic dolphins. Every interaction with the data world would first have to be analogized as some recognizable quasi-physical entity ... Maintaining such fictions increases the cost of doing business, as does operating the mind machinery that reduces the physical simulations into mental abstractions in the downloaded human mind.

That's not quite as much of a burden as it seems at first. I suspect most complex systems are going to have to spend a large part of their computation, either in space or time or both, converting among communication protocols and internal states. In terms of complexity, the simplest and most efficient interchanges between organism and environment occur between obligate parasites and their hosts [1]. Systems with that level of efficiency aren't likely to also have the kind of reflective complexity that results in consciousness, and they are going to be limited to environments that allow them access to their host systems. So systems that are complex enough not to need parasitism are going to have the same sorts of requirements as humans, though the relative amounts may be different.

[1] And I find it highly suggestive that most runs of the Tierra evolutionary simulation result in the development of at least 2nd-order if not higher parasites.

158:

The answer is to do what the brain does - fold.

159:
(Raise your sights from "sexbot" to "lovebot" -- an emotionally satisfying companion -- and you end up with one of the less frequently explored existential threats to our species' survival.)

Well, "lovebot, no sex" is already beginning to exist, providing companionship for elderly patients and the like. Apparently the results are overall preliminary but promising...

I get the impression these are mostly robotic cats and dogs, which have some practical advantages over their biological counterparts in requiring less care, as well as the potential in the future to be better able to summon help, keep track of medications and appointments, etc.

160:

I've been rather a skeptic about the Singularity since before Vinge wrote about it, but I have a couple of observations.

Issues of heat dissipation in computer chips are to a large extent an offshoot of the timing pulse based architecture that chips have had since nearly the beginning. Other architectures (I recall Arvind's dataflow architecture from long ago at MIT) can theoretically produce much less heat. If you can use a lower temperature design and stack your layers somehow you can keep up with Moore's Law for a while longer.

There's a very good article about the physical limits of human wetware in this month's Scientific American, which goes into limitations that are very similar to the nano-width of silicon chip conductors. Human brains also take up a hugely disproportionate percentage of our physical energy consumption.

On the other hand, if you believe that the mind (or brain, if you prefer) is entirely a physical thing, it seems likely that one can ultimately reproduce its function in software. The question I trip on is whether such a "brain" is going to inherently be "better" in some sense than a human brain. The architecture of super-human intelligence is not something we know anything about. We don't even have a good understanding of the architecture of our existing brains, much less what it takes to make them "super-intelligent." I'm a software guy, and I think of the suits with PowerPoint presentations on how we can make this piece of software better, faster, cheaper (pick two out of three...) when I hear the Strong Singularity folks.

All that being said, I think Charlie's idea (in "Saturn's Children") that AIs will be based on our understanding of the architecture of the human brain is a good one. The problem is we really don't understand that architecture well enough to even simulate it, and the combinatorics of simulating it are daunting even if you can overcome the limitations of Moore's Law.

161:

One thing about living in a simulation: you can run a simulation of your ancestors, but you will be unable to recreate the past in such a simulation. All you will be able to do is create an alternate world, which might be an interesting thing in itself. The reason you can't recreate the past is that weather behaves chaotically: tiny errors in your initial state grow so fast that the simulation would diverge from history almost immediately. That's one of my gripes with Replay; the time period where you could make money betting on sporting events would probably be extremely short.
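
You can watch the effect with a toy model. The logistic map isn't weather, but it shows the same sensitive dependence on initial conditions (the parameter and starting values are arbitrary choices in the chaotic regime):

```python
# Two "weather states" differing in the sixth decimal place, iterated
# under the chaotic logistic map: the difference quickly becomes O(1).
r = 3.9                       # a parameter value in the chaotic regime
x, y = 0.500000, 0.500001     # nearly identical initial conditions

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
```

By step 40 or so the two runs have nothing to do with each other, which is the fate of any ancestor simulation's resemblance to actual history.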

162:

Bruce,

I worked for Cirrus Logic from mid 1997 to the end of 2000. I've been staying out of the discussion because most of it has been off target.

One thing I've observed in the science fiction community is that the singularity true believers are mostly software people. They seem to be able to idealize away their experience of how crude software engineering is. When I discussed the singularity with Vinge I pointed out that an AI would inevitably be as flaky as a human being is. How can you even verify that it is operating properly?

I also disagree with the whole framing of "uploading." There is not and never will be uploading. You may eventually be able to run a simulation of a human mind but it is only that.

163:

Computer systems are symbolic systems based on conventions. Just like a painting can depict another painting (like this for example), one computer system can emulate another. The Universe, people and their brains are not.

164:

M.E. is exactly right. A simulation is a form of depiction, a representation. Everything happening inside a computer is a representation. There are no actual numbers or quantities inside a computer; there's just an agreed-upon convention of how we will interpret the bits in registers, etc., as representing certain numbers, quantities, characters, and so on. The equations a simulation would presumably use to implement it are themselves also representations. A simulation of your brain is no more your brain than the words "your brain" are your brain. It's really that simple.

165:

Or, on the gripping hand, there is an immortal soul, and uploading is just one more transmigration, which most of us will ultimately undergo, at least until we reboot the universe.

166:

"Why do you assume that a non-biological intelligence will need to be "made" (after all, as far as we know, ours didn't)? Why do you assume that an autonomous non-biological intelligence (of even vague human equivalence) would want or need to be enslaved? "

Our intelligence evolved via natural selection over billions of years, or millions if you look more recently, in response to selective pressures we're still not sure of, but which presumably demanded more and more complexity and self-awareness. Although there are some relatively bright animals, there's nothing really like us, and there doesn't seem to have been in the preceding 4.5 billion years. There's no particular reason to suspect that computer viruses or "the Internet" would spontaneously evolve or "wake up" to be human-like. An AI is far more likely to be designed, and thus to have goals that suit us, not the goals of an autonomous evolved being.

The weak point in that is how much we get AI through old fashioned design (HAL 9000, or Asimov Three Laws, or Chobits), vs. a Saturn's Children scenario of copying human brain structures and abusing the result into obedience. The latter is likely to go wrong but for the prosaic reason of the "AI" really being digital humans.

The first scenario may seem unlikely if you think in terms of hand-programming a whole AI from scratch. It's more plausible to me if I think of successive designs of software and robots, becoming more capable of understanding language, recognizing moods, able to walk and speak fluently, able to note internal unproductive behavioral loops, able to monitor and report internal state, able to learn... eventually you get something that has cobbled together a suite of abilities similar to our own, but via a different path, and always around a motivational core and design purpose of "do what your owner tells you" rather than "produce children capable of having children".

167:

Charlie wrote: “Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death.”

Well, uploading wouldn't necessarily be a rebuttal of doctrines of life after death (via the existence of a soul). Let us suppose that Roger Penrose is correct, and that consciousness may derive from electrical interactions on the quantum level in our neurons (I can already hear the howls of outrage from the Minskyites on this thread), and given the resurgence of neo-Pythagorean schools of thought -- err, I mean String Theorists -- and their assertions that quantum behavior can be explained/described by 6- or 7-dimensional manifolds -- well, that would imply that consciousness exists on more than 4 dimensions. I give you credit where credit is due, Charlie, for suggesting this repeatedly in your Laundry novels!

So, the soul, could be envisioned as the meta-dimensional entity, of which our consciousness is only the 4-dimensional manifestation. Which implies that we could simultaneously exist uploaded into computer model and in Heaven (which will probably be found to be a special case of String Theory). Moreover, that would explain why some people have memories of “past” lives. Rather than these being false memories, these are just signals leaking over from your topological extrusions (lives) that “simultaneously” exist in an n-dimensional universe.

Hope you had as much fun at the pub, as I did writing this post!

168:

The Universe, people and their brains are not.

Saying this does not make it so. You would need to expand on this to make it something that could be responded to.

Are you saying that the brain does nothing that falls under the domain of computation?

It's also unclear whether, when you say brain, you mean consciousness as well, or just the physical brain. This confuses your use of "computer", since the physical object of a computer is not a symbolic system; it is a collection of atoms and electrical impulses (just like the physical brain).

Software may utilize and itself be part of a symbolic system, but then consciousness is also symbolic, recursively representational of its own internal states and those of the brain (where many, though not all, of the brain's states are representational of sensory impulses corresponding to external phenomena).

It does not necessarily follow from this that we can be simulated, but you have not demonstrated otherwise either.

The fundamental question is, "is what the brain does a form of computation?"

If so, the possibility of simulation increases greatly (but is not guaranteed).

If not, the possibility of simulation isn't dead in the water, since there is still the follow-up: "well, what the hell is the brain doing then? And whatever it is, can we duplicate it?"

The upshot of all this: unless you are a mind-body dualist, there is no reason to discount the possibility until we know a whole lot more about how the brain does its thing.

169:

@Bruce:

Hunh? Since when do sufficiently complex systems not get parasites? I think the highest level epiparasitic system known (4 endoparasitic wasps inside each other, inside a caterpillar) came from a tropical rain forest. Certainly, the most complex ecosystems contain the highest diversity of parasites and epiparasites.

That includes human systems as well. For example, one could also argue that, in terms of energetics and money flow, a good chunk of the American financial sector (based in New York) acts as a parasite.

You may be thinking about the concept of parasite load. Too many parasites kill the host, after all.

Otherwise, you may be confusing parasitism with symbiosis, and they're not exactly the same thing...

170:

What 45 said. It is the advocates of uploading who are the mind/body dualists, rather obviously I would think. Uploaders have simply replicated the dichotomy between soul and body and called it software and hardware. But somehow the remapped dualism is supposed to be all scientific and free of icky superstitions, when it is exactly the same as what it purports to replace.

171:
Hunh? Since when do sufficiently complex systems not get parasites?

No, I think you misunderstood me. I wasn't saying that complex systems don't get parasites, I was saying that parasites by their nature are not as complex as non-parasitic organisms. In fact, any parasitized organism must be somewhat more complex than if it was not parasitized, since the parasite is removing resources that were created for the use of the host, and they will have to be replaced. But a parasite that, for instance, gets its sustenance from the bloodstream of its host doesn't need to have a complex digestive system, and an obligate parasite must be less complex than a stand-alone organism or the parasitism wouldn't be required, but would be optional.

For example, one could also argue that, in terms of energetics and money flow, a good chunk of the American financial sector (based in New York) acts as a parasite.

And I would so argue. The fact that it has succeeded in seriously damaging the host economy without hurting itself in the short-term certainly argues against a symbiotic relationship.

172:
Are you saying that the brain does nothing that falls under the domain of computation?

This might very well be true. Remember that the notion of computation was never completely formalized: the Church-Turing Conjecture that "everything computable is computable by a Turing machine," is unprovable, so it really can't even qualify as a hypothesis. To state that the human brain is not equivalent to a Turing machine (or the λ-calculus or recursive functions) is simply to state that the brain's operation is not computation, which may or may not be true, but is certainly logically consistent. And there's good reason to believe that the brain's operation is not equivalent to a Turing machine. For one thing, what's easy for a computer, like unbounded-precision arithmetic, is NP-hard for a brain (quick, what's the millionth binary digit of π?) For another, the architecture of the brain, what we can understand of it, is nothing like any formal computing device we've ever invented. And even the things we've invented to try to emulate the brain (like neural nets) aren't very much like it.
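
To underline the asymmetry: the millionth binary digit is a few seconds' work for a machine, because the Bailey-Borwein-Plouffe formula extracts hex digits of π without computing any of the preceding ones. A toy sketch (plain floats, so only trustworthy out to a few million digits; not production code):

```python
# BBP digit extraction: the n-th hex digit of pi, no earlier digits needed.
def pi_hex_digit(n):
    def series(j):
        s = 0.0
        for k in range(n + 1):                      # left sum, kept mod 1
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, t = n + 1, 0.0
        while True:                                  # fast-converging tail
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                return s + t
            t += term
            k += 1
    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(x * 16)

def pi_binary_digit(m):
    h = pi_hex_digit((m - 1) // 4)   # each hex digit holds 4 bits
    return (h >> (3 - (m - 1) % 4)) & 1

print(pi_binary_digit(1000000))      # trivial for silicon, hopeless by hand
```

Easy for the machine, hopeless for the brain; whatever the brain is doing, it isn't this.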

173:

Clarification on my statement at #141 regarding "self-awareness" being optional.

The concept of self-referential awareness of your own "execution state" and self-simulation within future/past/extrapolated situations could be considered consciousness. But is that a necessary requirement for a goal oriented, self-optimizing symbolic processing system? Perhaps it is just icing on the intelligence cake, an extra optimization that grants us an insight into optional futures and our possible responses to them. It could be we only needed that ability to simulate what our tribe members were thinking/feeling.

When you add feedback to a system AND the ability to self-modify mental structure in response to that feedback, it's possible the period of hysteresis as the system responds to input is best handled by self-referential awareness, and consciousness would come about naturally as a higher-order result. Given the recent findings of fEITER scans as brains enter sleep, it would seem disparate regions of the brain get inhibited as consciousness fades, implying it arises as a higher-order interaction between brain regions...
http://www.bbc.co.uk/news/science-environment-13751783

Perhaps what it comes down to is if we can accomplish strong AI with symbolic manipulation only (think of a complex chess algorithm pruning leaf nodes of solutions) or does it take an entire complex ecosystem of subsystems in constant feedback and dynamic change to create AI (and consciousness spontaneously arises as a side-effect).

My suspicion is that over time we will get VERY good at symbolic "low-level" AI. Vast parallel data sets crunching data that gets fed back up into decision-tree supervisors... so fast and accurate it outwardly appears intelligent. No consciousness required.

If it is within the various complex feedback loops between components that consciousness arises, then we can get there by symbolic AI. If it requires special substrate features (Penrose quantum tubules) we can still get there, just not with faster Turing designs.

Or perhaps the universe's components interacting via quantum fields engender consciousness and any suitably complex system of particles will all have a type of consciousness. Which wouldn't bode well for pure symbolic consciousness if no particle fields are interacting..

174:

I was a hardware engineer for just the first 10 years or so of my career, then I turned to software. And for much of my time in software I was concerned with tools, techniques, processes, and methodologies (and language design and hardware architecture to a lesser extent), so I know just how hard it is to write software and get it right (even if you understand what it's supposed to do, which is not the case in a large percentage of software projects). Please note, guys, that 60% of all commercial software projects are never finished.

At one point I got interested in what Vernor Vinge called "computational archeology" (the term has become common in the software biz now, I see). It's the analysis of legacy systems, typically by reverse-engineering because source code has been lost, to determine how they were developed and what they were intended to do. After only 60 years or so we already have systems which have been built of aggregations of other systems, with interfaces cobbled together to deal with poorly-understood and -documented APIs out of badly-layered architectures. Imagine what that will be like in another 60 years, or another 200 years, with huge systems, many of them at least partially built by automatic software generators and genetic algorithms, that no human understands (at some point, no un-enhanced human will be able to understand them).

Now build a brain with that.

175:

Firstly, I think a fair number of our players, even those of a religious persuasion, are substantially underestimating the theological indigestion some manner of neurological duplication would give most faiths (some of the various possibilities have already been explored in fiction; witness the books of Richard K. Morgan and the short-lived television world of "Caprica"). The fact that one can envision the slew of furious hand-waving responses to the formerly-reserved-for-God task of restoring the dead to life is not a sign that the development would go unnoticed: it's a phone tree for who would be communicating by car bomb with whom. Pick a scientific, social or technological development that brushed close to questions of human origins, fates, or natures (homosexuality, evolution, vaccines, heliocentrism...) and the generation in diapers at its advent rather reliably splinters into a third that sensibly accepts the evidence at hand as more compelling than the stories they heard in Sunday school and exists somewhere between vigorous atheism and attending the holy days for the sake of their parents, a third that uses the unverifiable nature of God and metaphorical language as brick-and-mortar for nominally accommodating "sophisticated theology", and a final third that goes willfully crazy, establishes compounds and starts mass mailings. The latter two communities have an enlivening history of inviting each other to rather uncomfortable barbeques.

So what happens when our plucky church deacon, reloaded from his black box after a horrific accident, walks into his church, shakes the hands of his lifelong friends, tells the officiant he'll staff bingo this week and sits down in the front row, just before being informed he is in fact devoid of spiritual value? A secular person is in the position of pragmatically evaluating whether their relationship with this person can continue through such a dramatic life event, as with a move around the world, or a revealed secret, or a sex change. A religious person, however, is in the position of desperately hammering out some manner of relationship to literature compiled before indoor plumbing. If the party line is that the restored have no souls, and this is a sham resurrection, the churches have hordes of the reanimated and dispossessed picketing them while the church elders campaign in their legislative body of choice that these are not whole people, amidst periodic scandals of church leaders with upload fetishes splashing across the papers. If they are ensouled, then the churches have a sales problem: trying to establish what distinctions remain between their heaven and what the go-home-after-the-accident revival technology can offer (and let's be honest: the average mendicant, with a nod to the various Judaic factions that have no opinions on the afterlife, may not be participating in daily religious practice because it gets them off the death hook, but their vision of heaven nevertheless looks strikingly like their life plus dead parents and a million bucks).

That being said, I agree with Charlie that I don't hold high hopes (or fears) for most of the set dressing of the hard-takeoff Singularity, much as I enjoyed Manfred's adventures in the computronium hinterlands. The technical objections to subatomic transistors and the ontological difficulties of bootstrapping aside, the whole concept, at least from its ardent supporters, has always had this uncomfortable disconnect from the rest of science and enterprise.

I'm always reminded of a couple of awkward undergrads I stumbled upon trying to optimize their dating prospects with extensive attention to a spreadsheet of the dorm's women - while I agree that there is nothing in principle against using analysis to improve your love life, they nevertheless were no closer to getting laid than when they began. When the likes of Kurzweil point at the field equations with one hand and Moore's Law with the other, and sagely declare, "computability and accelerating returns, therefore Matrioshka brains and maximal wish fulfillment," I can't help but feel a resonance between the two. Have they never heard of exponential processes not running to completion, or is all the Earth's carbon locked up in duplicating E. coli and I just missed it? Did they not notice that people expected HAL for fifty years, and we got Google instead? For that matter, have they not read any of the fiction that dealt with AI and longevity and nanotech and found more nuance than fusion with the orbital server farm godhead?
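
If the spreadsheet jockeys want a toy counterexample, here's exponential growth meeting a resource ceiling (all numbers hypothetical, chosen only to show the shape of the curves):

```python
# Unchecked exponential growth vs. logistic growth toward a carrying
# capacity K: same early behaviour, utterly different endings.
import math

N0 = 1.0            # initial population, arbitrary units
K = 1e30            # resource ceiling, hypothetical
r = math.log(2)     # one doubling per time unit

def exponential(t):
    return N0 * math.exp(r * t)

def logistic(t):
    return K / (1 + (K / N0 - 1) * math.exp(-r * t))

for t in (0, 50, 90, 100, 110, 150):
    print(f"t={t:3d}  exp={exponential(t):10.2e}  logistic={logistic(t):10.2e}")
```

The two curves are indistinguishable right up until the moment they aren't, which is roughly where "accelerating returns" extrapolations tend to live.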

176:

Charlie, for a post called "Reality Check", I find the first argument, especially, to be wildly unrealistic. There's ambiguity in the expression "human-equivalent AI", which could mean psychologically anthropomorphic AI, or could just mean AI with intellectual capabilities comparable to those of human beings. Frankly, even the arguments against anthropomorphic AI (based on what it is that people would really want from AI) are very weak. The diversity of human desire should tell you this. But the argument against AI with human-level intellectual capabilities is just about nonexistent.

The human brain can do what it does because it has neural structures enacting some powerful special- and general-purpose algorithms. Let us not quibble over whether words such as "algorithm", "computation", and so on are perfectly correct designations for what occurs in the brain. There are highly structured and systematic transformations of representational states occurring in the brain, whatever you call that, and the specifics of these operations are slowly being decoded, simultaneously with the accumulation of algorithmic knowledge in the world of artificial computers. You have to suppose a global victory of luddite culture, or a similarly total destruction of technical civilization, to think that this process won't play itself out to the point that all the ingredients for a "human-equivalent" mind are out in the open, just as the details of the human genome are now out in the open.

Could it just be that you don't want to be another SF writer turned solipsistic mad guru? I can understand being driven up the wall by the fans of the Singularity, but maybe you also fear losing your own grip on reality, if you take the end of human intellectual supremacy too seriously as a possibility? I will testify from personal experience that it is incredibly difficult to think clearly about these matters, not least because there are so many uncertainties; but the idea that self-directed AI as smart as a human being is just never going to happen, because we want our cars to drive us where we want to go, not where they want to go... sorry, I just can't suspend my disbelief for that one.

177:

For the record - I loved Accelerando, and I am not under any impression that you are an extropian. I am in favour of the idea of the singularity myself, although I try to maintain a more or less realistic view of the possibility.

Short version - Santa Claus doesn't exist, but he's based on real historical/mythological figures.

I think that saying effectively "consciousness is a poorly understood emergent phenomenon and therefore it is highly unlikely that we could build something which replicates it" is not a very strong argument. I freely admit that consciousness is poorly understood, even by experts in the field. But then, so is quantum physics - and we're starting to build quantum computers in spite of that. The fact that we don't understand how something works has never stopped us from messing about with it, and in many cases replicating the effect while we're still figuring out why we should or shouldn't do so.

On top of that, I think you're implying that the only reason to build strong AI, or to figure out how to do so, is utility. I agree that we won't necessarily want our phones and cars to have consciousness or volition, but I do believe that we will still want to build machines that do have those properties, just to learn about volition, consciousness and machines.

There is also the problem of augmenting humans, rather than (or as well as) building strong AI from scratch. We already replace damaged or missing limbs with increasingly high-tech prosthetics. Just this week there have been two discoveries covered by the news media which look like they would allow us to add neural prostheses to assist people with Alzheimer's (an artificial hippocampus tested in rats http://medgadget.com/2011/06/brain-implant-restores-and-enhances-memory-formation.html and memory prosthetics tested in rats http://iopscience.iop.org/1741-2552/8/4/046017/). What happens when we find out how to replace other parts of the brain with equivalently functioning prosthetics - and we stay human and conscious? Unless we end up losing our inner sense of self as we replace parts of our brains, we'd have a cyborg which is effectively equivalent to human-level AI. Given that, bootstrapping up to a higher level of cognitive processing power and memory storage seems perfectly logical.

I agree with you regarding uploading and biophilia, but I do think there is potential for a vastly more subtle and intuitive interface mechanism than the one you describe being implemented in a virtual environment. And again, a cyborg body with appropriate sense-replicating structures would solve that issue as well. The ability to switch at will between being "in" your physical body and being embodied in a chosen virtual body in your chosen virtual environment sounds attractive to me, and does not require mind/body dualism. The uploaded self would have to include a notional body as part and parcel of the upload, and that notional body would have all the effects of a physical body, yet could potentially be uncoupled from the physical or virtual form occupied at any given time. The body that anyone else can observe would be more like clothes are now, and the body of the self would be included in the encoded self just like the mind and identity.

I think you're right about the holy wars though. If we manage to get uploading technology right, we're in for a tough time from the religionists.

I do also agree with your conclusion that we're not at risk of disaster unless we harbour self-destructive impulses. Any AI that we create - be it 'pure' AI or an augmented human - would necessarily be mostly human. As far too many TV aliens have shown, we don't really conceptualise anything truly alien very well, and I don't think we could create a mind all that different from our own. As such, the risk is the same as the risk we take in having and raising children - sure, they might hate us, and they might buy a gun and shoot their schoolmates, but it isn't very likely on the broad scale.

I think the singularity is like any other exciting idea - it has its share of rabid fans and its share of equally convinced skeptics, and the truth will come out somewhere in the middle. But I don't think it's fair to say that the developments are impossible, or even overwhelmingly unlikely. No more so than wearable computers, or doubled life expectancy (now vs. 200 years ago), or putting a permanent base on Mars or the Moon (I hope we get there!).

:)

178:

I wonder if the nearest we'll ever get to the Singularity is Toffler's Future Shock.

179:

Ok, there're good arguments on both sides. What I'm wondering is: what does it mean to "live on the assumption that they (i.e. Singularity, mind uploading) are coming down the pipeline within my lifetime"? Any examples?

Besides getting me to read blogs like this and having an optimistic outlook for mankind, I don't think these assumptions affected my life in any meaningful way.

180:

"Now build a brain with that.": Isn't the human brain built exactly like this? New part (Neocortext) built on top of older part (limbic brain) built on top of even older part (reptilian brain).

181:

Within the context of this discussion, I am not interested that much in the third argument. So I'll skip it.

The second argument seems to be based on the same reasoning some theists use to ignore atheist arguments: "If I'm wrong, it'd be too uncomfortable to bear". The possibility of some theoretical grand-scale holy war is hardly an argument for proving something is or isn't possible. The fact that I'd be uncomfortable wearing certain shoes does not indicate that uncomfortable shoes are unlikely to exist.

The first argument seems to not be aware that autonomous agent theory has long stopped trying to define intelligence as "acting like humans". In fact, the whole point of transhumanism is to stop using flawed notions of humanity as a measuring stick. We have successfully built flying machines only when we stopped trying to make them behave like birds and started working on aerodynamics.

I am somewhat disappointed in the arguments presented. Also, apologies if someone has already said everything I say here; I've not had time to read through all the comments.

182:

"And there's good reason to believe that the brain's operation is not equivalent to a Turing machine. For one thing, what's easy for a computer, like unbounded-precision arithmetic, is NP-hard for a brain (quick, what's the millionth binary digit of π?)"

I think you're mis-using "NP-hard" there. Especially since a human brain is perfectly capable of executing the same algorithm as the conventional computer.

Conventional computers are generally extremely fast in serial computation and not very parallel, less because we can't build parallel architecture and more because we're not very good at programming it and increasing serial speed was easier than parallel programming. The brain, as a hypothetical computing device, is very slow serially but supermassively parallel. This makes it a very different kind of computer in practical detail, but in no way suggests that it's not a computer at the theoretical level.
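
Order-of-magnitude sketch of what "slow serially, supermassively parallel" means; all four numbers below are rough assumptions, not measurements:

```python
# Crude throughput comparison: many slow elements vs. few fast ones.
neurons, firing_hz = 1e11, 100.0   # ~10^11 neurons, ~100 Hz apiece (rough)
cores, clock_hz = 4, 3e9           # a commodity multicore CPU (rough)

brain_events = neurons * firing_hz  # ~1e13 concurrent events per second
cpu_ops = cores * clock_hz          # ~1e10 mostly-serial ops per second

print(f"brain: ~{brain_events:.0e} events/s, each element at {firing_hz:.0f} Hz")
print(f"cpu:   ~{cpu_ops:.0e} ops/s, each core at {clock_hz:.0e} Hz")
```

Same theoretical class of machine, wildly different shape, which is exactly why programming one to imitate the other is so awkward.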

Hmm, race conditions and deadlocks, as applies to human psychology...

183:

Hey, Santa may not exist -- for now. But I've seen Futurama, and I know robot Santa WILL exist. (I know he's a murderous bastard, but everyone "Frankensteins" the future.) Don't be so pessimistic. Progress has been going on for a while before Moore's Law, and it will probably continue for a while longer. I, for one, look forward to some transformations that look rather interesting, and likely (to me, anyway).

184:
The human brain can do what it does because it has neural structures enacting some powerful special- and general-purpose algorithms. Let us not quibble over whether words such as "algorithm", "computation", and so on are perfectly correct designations for what occurs in the brain. There are highly structured and systematic transformations of representational states occurring in the brain, whatever you call that, and the specifics of these operations are slowly being decoded, simultaneously with the accumulation of algorithmic knowledge in the world of artificial computers. You have to suppose a global victory of luddite culture, or a similarly total destruction of technical civilization, to think that this process won't play itself out to the point that all the ingredients for a "human-equivalent" mind are out in the open, just as the details of the human genome are now out in the open.

Philosophically, I have no problem with human-equivalent thinking machines. Put the atomic coordinates of your human's components into the Turing machine, iteratively solve the time dependent Dirac equation for the system, and there you are -- a thinking, loving, dreaming simulation that behaves just like the template human. But in practice, computing machinery has finite storage and finite execution speed, transforming many philosophically trivial exercises into very hard engineering.

What if all the ingredients for a mind are out in the open, but funding issues/damn hard engineering problems keep it from being realized? See also, "where's my fusion reactor?" and "where's my lunar city?"

Brains don't have a clean separation of information processing from baroque biological flourishes. It's entirely possible that if you have just the brain's structural information down to 50 nm resolution, the brain simulation rapidly diverges from normal human behavior, or isn't recognizable to begin with. Maybe you have to go down to the molecular level, with full treatment of chemistry, to get a brain-blueprint that forms a stable mimic of the original in simulation. But if you need that level of detail, even presuming that you can build the map, you'd need Moore's Law to continue unabated for centuries before you could run a simulation in real time. Current "large scale" molecular simulations are approximately 10^20 times smaller than the human brain and run about 10^13 times slower than real time.
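
Taking those figures at face value, the arithmetic is easy and sobering (an illustrative sketch assuming a clean doubling every two years, which is itself charitable):

```python
# Combined shortfall: 10^20 in size times 10^13 in speed, closed by
# Moore's-Law doublings at one doubling per two years (assumed).
import math

shortfall = 1e20 * 1e13            # ~10^33 overall
doublings = math.log2(shortfall)   # ~110 doublings required
years = 2 * doublings              # ~220 years

print(f"~{doublings:.0f} doublings, i.e. ~{years:.0f} years of uninterrupted Moore's Law")
```

Hence "centuries", and that's with charitable assumptions at every step.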

185:

Charlie in intro: "Like Communism, Libertarianism is a superficially comprehensive theory of human behaviour that is based on flawed axioms and, if acted upon, would result in either failure or a hellishly unpleasant state of post-industrial feudalism." Or RELIGION as they are known ... fitting man into "god's" image and words, resulting, well, you know! Agree about mind-uploading: the simultaneous capture of all the vector-states required is almost certainly impossible in theory, never mind practice.

Except of course, we have already been through several singularities. The use of fire, the smelting of metals, the wheel, water-and-then-steam-power, now flight and electronics in its widest sense .....

What about an ACCIDENTAL "Mechanical" AI arising - then what? I assume all bets are off at that point. Remember, it only has to happen ONCE.

@ 20 WRONG There would be lots of extreme violence involved

WillS @ 25 Sorry - you've just said you are a christian. BY DEFINITION you are irrational, and cannot really engage in this discussion, no? AND @ 123 You ADMIT it has to be taken on "faith"??? But we are discussing EVIDENCE here, are we not? Like the testable proposition: "No god is detectable, even if that "god" exists" ...

Meg Thornton @ 32 Yeah. The RC and some of the mainstream muslims have this problem with female intelligence, don't they? How would we know - until, perhaps it was too late?

Didn't Harlan Ellison do something horrible about this one? "I Have No Mouth, and I Must Scream", I think.

@ 54 Now THERE IS A NEW WORD WE NEED TO USE .... Nerdvana Snark ?

David L @ 65 "We're INSIDE the Jolly Sailor?" - T. Pratchett, "The Wee Free Men" (I think)

Charlie @ 86 Precisely. I suspect the BigS will happen, but we have no idea either how or when, or whether it will be fast/hard or slow/soft. As I said above, it could easily be an (enviromental/evolutionary) accident and it only has to happen once - and survive, of course.

Bellinhman @ 101 WRONG You are now owned by two 16-week-old Aby's

& Charlie @ 103 So am I - I have to evaluate/judge all the London area Pubs-of-the-year (several people from each local CAMRA branch do this) so as to determine the London PotY. My money is on the Southampton Arms, at the moment.

Dirk Bruere @ 114 Yes - I agree about the hardware by 2020 - the o/s software is more than one problem. How IS "Blue Brain" doing at present?

Joe @ 162 You have evidence and proof of this claim?

186:

Time-dependent Dirac equation?! Why not insist on computing the partition function of string theory as well?

I am actually one of those people who expects that consciousness has something to do with quantum coherence in the brain. But I think it's a bit of a joke to suppose that we need a brute-force molecular-level numerical simulation of a neuron in order to reproduce its cognitive function. If there is a cognitively relevant computational process in the brain which really does involve biological quantum processes, isn't it likely that the efficient artificial correlate of this will be non-biological quantum processes, occurring in quantum co-processors in an AI, rather than a classical simulation of the original biological quantum computation in all its particularity?

We already have several billion examples of "human-equivalent natural intelligence", and this is the age of biotechnology. If neurons turn out to be miracles of computational miniaturization that can't be rivalled in silico, then we'll make artificial intelligences out of neurons.

187:

I thought everyone knew the brain was for cooling the blood?

188:

Hi Charles - For what it's worth, I tend to view the singularity hypothesis as a philosophical provocation rather than a prognostic claim. The great virtue of Vinge's essay is that, unlike Kurzweil et al., he allows that anthropocentric assumptions about successors to humans could be deeply flawed. 'Posthuman' cognition may be nothing like human cognition and may have endogenous and passing strange standards of goodness. Your argument against the feasibility of AI seems to be that no-one will want it, so it won't happen. But this doesn't imply that it is impossible or that it couldn't arise as some kind of emergent phenomenon (in any case, technology may throw up problems to which AI is a good fix). As David Chalmers pointed out: the fact that we are intelligent and appear to be organic machines or assemblages implies that machinic intelligence of some kind is possible.

189:

Ahem:

  • I always like to take a contrarian viewpoint when examining ideas, to see if I can break them.

  • I've got a novel coming out in just under two weeks' time.

  • Please derive the obvious conclusion from these facts.

    190:

    I've always found the singularity cult slightly sinister and for the most part laughable.

    Firstly, it's a contemporary manifestation of radical body-hating dualism, which goes all the way back to some forms of Platonism. Is it entirely an accident that the movement is confined to male geeks - and I speak as one myself, in one of my avatars?

    Secondly, what Heidegger called being-in-the-world (Dasein) and more generally intentionality (consciousness-for-or-about-something) is deeply part of our 'intelligence' and can't be abstracted out. Only the dualism I've dissed above could ignore this.

    Thirdly, I'm old enough to remember being excited about the possibilities of expert systems and their ability to collectively capture useful knowledge, and have since seen 'real AI' shrink its ambitions with every generation, and the more successful parts take their inspiration from biology rather than mathematics and logic.

    Fourthly, 'intelligence' is awfully messy:

    the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

    are, I suspect, essential parts of the illusion of intelligence.

    Which is my excuse for losing it over a particularly tricky piece of plumbing yesterday and posting this rather than getting on with any work.

    191:

    "it's a contemporary manifestation of radical body-hating dualism"

    Human embodiment isn't all sugar and spice, you may have noticed. If there were no such thing as the natural ageing process - if people stayed young and died youthful - and then if someone found a way to artificially induce the changes we call "ageing", it would be considered a monstrous thing to impose on a person.

    In any case, aren't the biggest fans of the singularity young, for the most part? In other words, people whose bodies haven't begun to break down very badly, yet. The criticism of transhumanism, uploading and so forth as "body-hating" most likely says more about the psychological adaptations that the critic has found personally necessary. The people who want to become superhuman are the ones who haven't had their joie de vivre crushed by physiological necessity. They don't want to die, they want to grow and become more - in a person not yet beaten down by life, this is not twisted corruption, it's some combination of health and hope.

    I would agree that some peculiar and wrong metaphysics is coming into play in a lot of the singularityesque thinking that occurs, about the nature of the mind, its destiny in a universe turned into one big computer, and so on. But if I had the ingenuity of Slavoj Zizek, I would tell you that this is just an ideology contingently accompanying the very material process we could call "the rise of the machines". (I am telling you this anyway, but if I had Zizek's abilities, I'd do it more convincingly, with more erudition and panache.) The same goes for obliviousness about Being and intentionality - it's a contingent cultural fact that has no bearing on whether any of this will happen; it only affects the sensibility and degree of understanding with which the crucial acts will be performed. Either we will figure out consciousness before we get AI, or we won't, but AI will happen either way, because AI is about algorithms, quantitative epistemology, utility functions, and a host of other mathematical concepts which, even if they don't tell us about things in themselves, are capable of being operationalized and turned into technologies.

    192:

    If there were no such thing as the natural ageing process - if people stayed young and died youthful - and then if someone found a way to artificially induce the changes we call "ageing", it would be considered a monstrous thing to impose on a person.

    Yup.

    (Says the 46 year old guy.)

    I'm all in favour of life prolongation and intelligence amplification. And I really hope Aubrey de Grey's version of transhumanism works. That, incidentally, would give us a rather different kind of singularity -- a total rupture in the structure of human hierarchical societies would be inevitable within a matter of decades.

    193:

    Regarding the ethics and issues of evolving AI, Greg Egan's Crystal Nights is worth a read.

    194:

    The Blue Brain people are asking for a billion-euro grant to do a full human-scale brain emulation by 2020.

    195:

    Exactly. Consciousness is a red herring. As long as an AGI delivers the goods, passes the Turing Test and generally invents a brave new world, whether it's conscious or not is one for the theologians.

    196:

    This might very well be true.

    Precisely: the person to whom I was responding was not clear about what their position was; I was attempting to obtain clarification and provide guidance on what a valid criticism of simulatability might start to look like.

    197:

    An AI is far more likely to be designed, and thus to have goals that suit us, not the goals of an autonomous evolved being.

    That is weak AI and it exists.

    If you want to solve a known problem, the complexity of which can be estimated fairly well, that's the way to go.

    You might want to know that currently about 80% of share trading in the US is conducted between computer programs without direct human intervention. Numbers in currency and commodities markets are somewhat lower but rising. Most of these programs are dumb, but not all.

    Currently en vogue are algos systematically reading news and social networks. But obviously if everybody does it, it's unlikely to pay your bills.

    As an up-and-coming hedge fund owner, where would you go from there? You could try electronic surveillance of critical humans or you could focus on conquering the algos.

    Imagine an algo having the same quantitative inputs most other algos have and possessing a model of their likely behaviour, making predictions of their likely actions and checking in real time against observed real data.

    Now imagine this algo systematically spawning next generation algos with increasing complexity...
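
    A minimal sketch of that spawn-evaluate-select loop, in Python. Everything in it is a toy stand-in of my own: the linear "strategies", the fitness function and the synthetic market history are illustrative assumptions, not any real trading system.

        import random

        def make_strategy(weights):
            # A "strategy" is just a linear weighting over quantitative input signals.
            return lambda signals: sum(w * s for w, s in zip(weights, signals))

        def mutate(weights, rate=0.1):
            # Spawn a next-generation variant by perturbing the parent's weights.
            return [w + random.gauss(0, rate) for w in weights]

        def fitness(weights, history):
            # Paper-trade against recorded (signals, return) pairs:
            # go long when the score is positive, and sum the returns captured.
            strategy = make_strategy(weights)
            return sum(ret for signals, ret in history if strategy(signals) > 0)

        def evolve(history, n_signals=4, pop_size=50, generations=100):
            population = [[random.gauss(0, 1) for _ in range(n_signals)]
                          for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=lambda w: fitness(w, history), reverse=True)
                survivors = population[:pop_size // 5]  # selection
                population = survivors + [mutate(random.choice(survivors))
                                          for _ in range(pop_size - len(survivors))]
            return population[0]

        def fake_history(n=500):
            # Synthetic data in which the return happens to be driven by signal 0.
            out = []
            for _ in range(n):
                signals = [random.gauss(0, 1) for _ in range(4)]
                out.append((signals, 0.01 * signals[0]))
            return out

        print(evolve(fake_history()))  # weights typically converge to favour signal 0

    The point is the loop, not the finance: nothing in it "wakes up", but the population's behaviour adapts generation by generation without anyone designing the winning strategy.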

    What you have now is a rudimentary self-model, a motivational structure and a functional entropy inversion (see "What Is Life?" by Erwin Schrödinger) in a highly competitive ecosphere.

    And you have nearly unlimited trials powered by the well-funded server farms of "Billionaires Investments" Ltd.

    Algos feeding off algos feeding off other algos is reality anyway. If you add a motivational self-model that goes down the route of increasing complexity, you already have a rudimentary form of life.

    It's not yet "intelligent", but even if, in the beginning, it only acts by dumb trial and error and survival, spawning new generations blindly, the underlying structure learns. Systems intelligence improves.

    If by wild chance, or with the help of a human engineer, one of those algos builds a model of the world it inhabits and of its defining constraints, the arms race measurably speeds up.

    Intelligent life doesn't wake up. There's a succession of tiny little breakthroughs, each somewhat unlikely but irreversible. A concentric set of equifinalities increasing complexity ever more. Not linearly of course, but in fits and starts, unpredictably.

    And it's always defined by the underlying ecosphere...

    198:

    There are no actual numbers or quantities inside a computer

    First off, I don't fundamentally disagree with you. But the above is also true of the "real world". There are no numbers, there are no quantities, not in the objective sense of existing outside of the human-constructed abstractions of the same.

    The universe may exist independent of subjective experience, but just about everything else we can say involves abstractions based on a very filtered, representational, and subjective experience of it.

    As such, you cannot disprove the simulatability of human consciousness by reference to computation being purely representational because consciousness is itself largely representational, even if the meat that gives rise to it may or may not have a computational basis.

    199:

    The major result of uploaded humans being treated as people would be political: it takes 18 years to create a voter in most democracies. Why not speed up the process by making as many copies of your own mind as you can afford, differentiating them as little as needed for them to be declared citizens, and having them go vote?

    "I need half a million die-hard Republicans by tomorrow." "Clickety-clickety" "Done."

    In addition, if there's enough processing capacity that minds in computers outnumber minds in flesh, energy and technology and probably ecological policy are going to take a hard shove.

    200:

    It's the analysis of legacy systems, typically by reverse-engineering because source code has been lost, to determine how they were developed and what they were intended to do. After only 60 years or so we already have systems which have been built of aggregations of other systems, with interfaces cobbled together to deal with poorly-understood and -documented APIs out of badly-layered architectures. Imagine what that will be like in another 60 years, or another 200 years...

    Two quick stories from the 80s.

    A programmer was handed a numerical analysis program to make some changes to. The program was in Fortran and the original developer was long gone. He started in. Things did not make much sense. Then he figured it out. The original programmer, 10 or more years earlier, had apparently decided that the Fortran code was not fast enough. So he had written the analysis in assembly, worked out the floating-point values whose bit patterns matched that machine code, loaded those numbers into Fortran floating-point arrays, and then executed the arrays.

    The other case was at a major insurance company. Autocoder (709?) code written in the early 60s had been adapted to run on the new IBM 360s, then the 370s, then whatever came next, and so in the early 80s they had people modifying Autocoder code that was running two emulators deep. They were in a 5-year project that had already lasted 5 years and had, they hoped, 2 years to go to convert it to something current. But they were having to reverse-engineer code whose exact workings no one at the company had understood for over 10 years.

    I suspect our brains work much more like these two situations than we want to admit, as we build memories and layer them up over the years.

    201:

    Bruce, I'm afraid that doesn't work either. Parasites can be more complex (and even larger) than the organisms they parasitize. A few examples:

  • Orchids parasitize their mycorrhizal fungi. The fungi tend to be saprotrophs (on dead matter) or even plant pathogens. Orchids basically lure the fungi into specialized cells in their mycorrhizomes (which are bizarre on a cellular level), then suck nutrients out of the fungi. I don't think anyone's convincingly shown a nutrient flow back out to the fungus.

  • Lichens. The fungi tap into the algae. While lichen algae can be free-living, few if any lichen fungi can survive without tapping their algae, so they are engaging in nutritional parasitism. People have argued that, inside a lichen, an alga gets to live in an environment that it could not have colonized before, but in nutritional terms, the lichens are parasitizing the algae growing inside them.

  • Honey mushrooms. You may have heard of the humungous fungus in Michigan (and elsewhere in the northern hemisphere). You know, the fungi that range from a couple of acres to a couple of miles across? They're plant pathogens. They're also older than the oldest trees that grow on them.

  • Christmas trees. Nuytsia (actually all mistletoes) have specialized structures that allow them to penetrate the conductive tissues of other plants, in addition to the normal structures that all plants have. I'm picking on the Christmas tree because I like seeing a tree parasitizing the herbs that grow around it. As you may guess, it's larger than they are, too.

  • Or we can even talk about the pinnacles of animal evolution, beetles and wasps. Many to most of them are parasites (on a nutritional basis, whether or not they kill their hosts), and it's not clear whether they are structurally simpler than their free-living ancestors.

    202:

    My first "Ada" program was nothing of the kind; it wa a Borland Turbo Pascal program that I changed things like removing Pascal {} characters from, changed comment markers, changed known types and reserved words, then threw an Ada compiler at to see what complained!

    203:

    I agree almost entirely. Except on the mind uploading part. While there will possibly be crazy religious wars about it, they will be for nothing. An uploaded mind would be a copy (or instance) of the memories and "thought patterns" (if such things exist; maybe through an adaptive simulation of the brain structure?). That does not prove or disprove an afterlife, or the existence of an "immortal soul". Of course most people will not understand or care about that.

    Now, what would be the appeal for an artificial human mind of existing without the corresponding virtual body? No hunger, no thirst, no sexual desire, no pain or excitement through dangerous activities like sports (no simulated glands means no simulated hormones to alter consciousness), and, while you are at it, not even the capability to get high... It's not that the HAI would even miss those sensations.

    An argument I have heard before says that the search for knowledge would be the driving force, but that is just naive; people who explore, investigate, and thrive on and enjoy learning do so because something in our wetware tells us that it is good for us... Without the "baggage" of a simulated body with all its elements, we would probably simply cease to function. How could the "economy" of shedding our virtual bodies for more efficient ones work when, once we do so, there would be no reason to go on? Will we create AI so we can become HAI, only to then devolve into mindless agents?

    204:

    As an undergraduate in philosophy, I debated a lot of theologians about the existence of the Christian god. I can safely say their "faith" in doctrine won't even be fazed by mind-uploading, if it occurs. They will simply consider it another abomination and "putting off the inevitable judgment by God, who will eventually right this travesty in the end of days."

    I learned the hard way that religions are populated by those who surround themselves with insulated arguments. Yes, they will decry uploading, but it won't put a dent in their "faith."

    As to acts of violence, as with abortion clinics, you may be on to something. In fact, you could write an entire novel about how all the religions of the world band together to attack archiving centers. It would be interesting to play with the question of whether governments would be pro- or anti-uploading. I can see arguments on both sides (lower costs for roads, food, etc. versus the inability to rule effectively, the loss of basic economics, such as limited supplies, and so on).

    One thing is for sure, it would fundamentally change our world.

    205:

    "Human intelligence is an emergent phenomenon of human physiology" - yes, but why should a super-intelligent entity be the product of any anthropomorphic thought? Interfacing with humans does require the mastery of human forms of communication, but that could be a simulacrum hiding entirely non-human forms of thought. We will probably not even understand at what point AI will have become more intelligent than us.

    206:

    As other commenters have noted, we have quite a bit of difficulty recognizing other human beings as sentient organisms, let alone communicating with non-human intelligences. Dolphins, crows, African Gray parrots, elephants, bonobos -- these are life forms we've studied which seem to be self-aware and communicative to a considerable extent, but we still don't have much insight into their internal states (even though some of them are presumably simpler than us).

    Would we recognize a non-anthropomorphic super-intelligent entity as intelligent? And would we be able to initiate communications with it usefully even if we did?

    207:

    I'm casting my lot with those who don't understand why you think uploading implicitly challenges any sort of religious adherence to a doctrine of an immortal soul.

    First, there are vastly more "soul" doctrines than you appear to represent in your post and comment.

    Second, the "uploading" scenario challenges the concept of an immortal soul no more or less than do identical twins. Theologians and philosophers, needless to say, have covered that ground thoroughly by now (albeit without any kind of ultimately satisfying and undeniable answer...that's just the nature of such questions).

    None of this is to say that various religious types wouldn't try to make hay about uploading if it were proven to be possible or appeared possible. Certain sects by their very nature thrive on the notion that they're under assault from the modern world. (Indeed the existence of such sects may be part and parcel of a modern human world--just an ideological niche that begs to be filled.) I just think you're being perhaps a bit glib in your definitions, and assuming that the religious doctrines that you reject are the same as those held by most or all religious believers. There's a great deal of disagreement about these subjects not only between religions but within them as well (even inside the tightly pruned hedges of Catholic theology, for instance).

    208:

    The problem with soft-study majors trying to understand the singularity is that they worry about the wrong things. Ethical problems with uploading or conscious AI are completely irrelevant to the people who are trying to build them. Further, given that consciousness is an emergent phenomenon, we likely won't know we've built an adequately conscious AI until well after it's turned on. At this point the ethical genie will be out of the bottle, and post hoc debate valueless.

    Any sufficiently useful autonomous robot will have to have a sense of self to do workaday tasks like answering commands that require the robot to plan to move its own body. The fact that a robot's sense of self isn't coded like a human's sense of self is irrelevant. Again, it's an emergent phenomenon.

    There is tremendous motivation to build learning, evolving AI programs. Think of how good they might get at solving programming problems. Again, it won't matter if such a program is "like" a human in the way it thinks. Alan Turing understood that the only measure that matters is external: does the AI "seem" human in its ability to interact?

    The real problem with the Singularity is that we build robots to replace human thought or human action. We are, in a sense that will ultimately prove critical, programming and constructing our own replacements. The first ones will be crude; not nearly as good as a human being. But progress will continue until they become competitive in all ways.

    That is the sad point at which we will realize that the genie is already out of the bottle. Our only hope is that our machine successors choose to keep some of us around as pets, like we keep zoos of nearly-extinct animals, or museums of antique computers.

    209:

    Ethical problems with uploading or conscious AI are completely irrelevant to the people who are trying to build them.

    Just as ethical problems with the possible use of nuclear weapons against civilian populations were completely irrelevant to the people engaged in the Manhattan Project.

    Seriously, get real: saying "X is possible therefore it will happen" is just another way of abdicating moral responsibility. Forget the AIs for now, your fellow humans will hold you accountable. And saying "my religious commitment to transhumanism led me to create a murderous AI" won't get you off the hook any better than religious beliefs got Shoko Asahara off. Apocalyptic cult, dangerous actions, religious mandate -- if that's not a precedent, what is?

    210:

    I'm not convinced that the singularity isn't going to happen. It's just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it's going to be until they can upload into AI heaven and leave the meatsack behind.

    OK, that helps explain the motivation for this post. But I am still encouraged that you were the author of Accelerando.

    Would we recognize a non-anthropomorphic super-intelligent entity as intelligent? And would we be able to initiate communications with it usefully even if we did?

    This is important in terms of expectations, e.g. planning for worst-case scenarios of potential negative impacts of machine intelligence. But as you allude to in the next quote, in terms of impact, "intelligence" is irrelevant. Viruses are not intelligent but can have a dramatic negative impact on us humans.

    What we're going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs "intelligently". But it will be the intelligence of the serving hand rather than the commanding brain, and we're only at risk of disaster if we harbour self-destructive impulses.

    Michael Anissimov, for example, ignores this crucial avenue of development and its potential impact: high-speed trading platforms and biometric recognition & tracking are "non-intelligent" but increasingly powerful tools that can be used by humans.

    When I read discussions about machine intelligence or AI, I think of Monty Python's The Meaning of Life: "What's wrong with a little kiss? Why go straight for the clitoris? What's wrong with a little foreplay first?" Let's discuss the powerful new tools that we will have prior to a hypothetical AI.

    211:

    Time-dependent Dirac equation?! Why not insist on computing the partition function of string theory as well? I am actually one of those people who expects that consciousness has something to do with quantum coherence in the brain. But I think it's a bit of a joke to suppose that we need a brute-force molecular-level numerical simulation of a neuron in order to reproduce its cognitive function. If there is a cognitively relevant computational process in the brain which really does involve biological quantum processes, isn't it likely that the efficient artificial correlate of this will be *non-biological* quantum processes, occurring in quantum co-processors in an AI, rather than a classical simulation of the original biological quantum computation in all its particularity?

    In a thought experiment you get to use an idealized computer, so why not go for the theory that should capture every relevant aspect of condensed matter physics?

    I don't think that there is any spooky quantum voodoo going on in the brain other than the run-of-the-mill quantum voodoo that is chemistry. But I think that if you want to reproduce a brain-simulation that retains fidelity over time, you need to capture the whole of brain biology, which means capturing nearly the whole of chemistry. Yes, that means even taking molecular excited states and relativistic effects into account. Otherwise some of your simulated enzymes will have way-wrong chemistry and your virtual brain will die, go crazy, and/or develop virtual cancer.

    Maybe you can skip all that messy biological/chemical business of cell signaling, water, salts, growth, death, metabolism, enzymes, etc. and build a simplified imitation of a brain with just the neural "wiring diagram" and a few semi-empirical rules. Maybe. I suspect that you need to do the biology to get the virtual brain to behave like a human brain over long periods of time, even though you might not need all the details just to get something recognizable as intelligent for a 30 minute tech demo.

    212:

    "hellishly unpleasant state of post-industrial feudalism" by comparison to the mildly numbing and mostly comfortable current (post industrial as in service based) corporate feudalism?

    213:

    One aspect of the entire uploading debate seems to have been ignored: in what possible sense is a copy of you, you? My answer is: in no sense whatever. A copy is a copy; I maintain there are very important senses in which a copy of you, no matter how perfect, is not you. Does our identity reside entirely in our software?

    No matter how much alike two identical twins are, they are not the same person. Even with perfect physics emulation, a computer simulation of a car is not a car.

    214:

    A copy is a copy, I maintain there are very important senses in which a copy of you, no matter how perfect, is not you.

    Consider your circulatory system.

    Every cell in your bloodstream is less than 28 days old (with the exception of a small number of memory B and T lymphocytes -- part of the adaptive immune system).

    Even among those persistent memory cells, the individual biomolecules of which they are comprised will be turned over within a period of hours to months.

    Pretty much all the cells in your body -- except for the central nervous system and some components of the immune system -- are less than a decade old. Even among those that are long-lived, it's a case of George Washington's Axe: two handles and three axe-heads later a case can be made that they're not the same.

    You are nothing more and nothing less than a copy of yourself.

    215:

    To give some guide as to how much that would cost: the Winton Professorship of the Public Understanding of Risk was established at Cambridge in perpetuity in 2007 with an endowment of £3.3 million from the Winton Charitable Foundation. The first and incumbent professor is David Spiegelhalter. The cash came ultimately from David Winton Harding, the manager of Winton Capital Management, Europe's third-largest hedge fund.

    216:

    Mankind's reach for the singularity depends upon too many biological factors for his survival, which are becoming more unlikely. The quest for biological survival will take too much financial support to make the singularity economically feasible in time. However, hypothetically some entity may finance this pursuit in order to preserve a record of mankind's existence, and "if" he can beat the clock, or convince a continuing entity to carry on the mission, the memory and sequencing of "life" may be preserved.

    217:

    Uploading brings forth the problem of continuity of consciousness. When I was younger and watched Star Trek, I always wondered why people who stepped off the transporter pad were considered the same person who was beamed up. They looked the same, talked the same, and even had the same memories. But are they the same? Did the person on the planet die? Their re-instantiated copy would know no better. They would believe that they had been transported.

    This might seem far afield of the current discussion, but allow me to paint a scenario. Instead of uploading to a computer, let's say nanomachines recreate my body perfectly, but with the brain inactive in the copy; just the structure. So now I have a perfect vessel. How do I get into it? Some would say data transfer. But doesn't this lead to the same situation as if I copied an MP3? I have one on my computer and another perfect copy on my thumb drive. Would I wake in my new body thinking that the procedure was a complete success, only to look over at the original me, who thinks that nothing happened?

    Could we do it slowly to preserve continuity? Imagine I had a prosthetic hand that could transmit the feeling of touch to my brain. Could I then send it across country and hook it into the internet and feel objects in that location as if I were there? How many body parts could I do that to before my body was in one location and my brain was in another? Could I then slowly get prosthetic parts of my brain? Ship them there piecemeal?

    218:

    As a former physicist and electrical engineer, I just can't get with the whole singularity idea, at least not in the next couple of centuries.

    There are a number of reasons, but one of them is the fairly specious argument sponsored by the computer scientists regarding Moore's law: computers won't be able to compute their way forward in Moore's law; they'll have to perform EXPERIMENTS to figure out how the physical world works in order to continue exploiting it so as to keep the next generations of chips flowing. This, in turn, means that machines can get faster only as quickly as the basic physics and engineering work can be done. They can't merely think their way forward. (And I don't need to defend this point: the point is that anyone presuming the opposite needs to indicate why this will no longer need to be true, as it has been for the last 50 years.)

    219:

    And another thing. There won't be truly sentient machines because nobody wants 'em. Machine intelligence isn't some giant one-dimensional pathway such that a powerful enough machine is somehow 'intelligent'. True sentience and "thought" require a huge number of components (hard and soft) to be created and aligned just so. As a result, there would need to be significant and long-term market drivers to create such a machine, and even now we don't see much like that, except possibly for making computerized opponents in videogames act more realistically.

    220:

    Could we do it slowly to preserve continuity?

    You are not the first person to ask that question.

    Here's a proposal for a continuity-guaranteeing upload process by Hans Moravec (from 1995). Here's an earlier version of the same idea, as a philosophical gedankenexperiment: "Where Am I?" by Daniel Dennett, in "The Mind's I" (Dennett and Hofstadter, 1981).

    221:

    Hanson thinks a different singularity is more likely.

    222:

    Isn't the human brain built exactly like this?

    Fundamental distinction: the human brain wasn't built, it evolved. That evolution took anywhere from 5,000 to 1,000,000 generations, depending on where you start measuring; for the purposes of emulating it I think the larger number is what we have to deal with. On top of which, we know very little, and have no good tools for learning more, about what selection pressures and competitive forces caused the evolution of the brain. Any theories currently extant are highly vulnerable to being categorized as "just-so stories", in Stephen Jay Gould's phrase.

    223:

    Especially since a human brain is perfectly capable of executing the same algorithm as the conventional computer.

    Oh? Do you have a citation for research that supports this statement? I doubt very much that we know enough about the brain to say that this is true, or even that we have any direct evidence for it.

    By the way, I was using "NP-hard" metaphorically. Obviously, if I don't believe that brains compute I can't apply computational complexity theory to them.

    224:

    Sure we would, if by 'communicate' you mean 'click "I'm Feeling Lucky"'.

    225:

    Thank you for the link and suggested material. Can I ask, were these off the top of your head, or stored information filed on your computer for future reference? I only ask because I have never been able to pull exact names and references during conversations. Though I have met people who could. My conversations would usually go like this:

    " I read an interesting article about black holes recently written by......... I think it begins with a G...... Graff... Gabes.... oh, well.. it contradicts...... Paul's.... no... wait.... Paulson's.... that's not right...."

    The glazed looks from my audience were enough to shut me up. But not so anymore. With my Evernote, I might look like a tool reading from a phone, but I rarely mix up my references anymore. Wow, that sounded like an infomercial.

    226:

    You wake up every morning, right? How is that different from waking up inside a computer?

    I think the worse problem is when your online persona and data think they're you, and the law agrees.

    What we're seeing is that, to some degree, the definition of "you" includes "your stuff" and "your data." As these increasingly merge, we're going to see all sorts of interesting and practical ethical issues. For example, when someone steals your hard drive, is it simple theft, assault (damaging you as a person by depriving you of part of yourself), or attempted murder?

    227:

    Ok, it looks like I was wrong about the relative complexity of parasites. Now I'm curious how that works from an evolutionary standpoint. What benefits does parasitism offer, if not reduction in complexity for the parasite by stealing that complexity from the host? Or do we need to separate reduction in complexity from reduction in energy and material generation, and treat complexity as less important for selection?

    228:

    If they were right years ago about cosmic rays shorting out chips, then the more densely packed the chips are, the more errors there will be. It may be that this will put an iron limit on how fast a chip can be and still be trusted.
    Fuzzy logic may give answers that are good enough. That's what human minds do. I would expect really good expert programs a lot sooner. The ones we have now are not that good. Maybe quantum computers will work. But so will cosmic rays.
    Many years ago, in computer lives, the Crays ran in baths that cooled them.

    229:

    I didn't get through the whole thread, but I wanted to point out that we really can talk about how infeasible brain uploading is.

    Two years ago, the world's then-most powerful supercomputer was used to simulate a cat brain. At 1/100th of real time. It takes a building full of modern equipment to simulate just the brain part of a cat (no body simulation, so we cannot interact with it in any way) two orders of magnitude slower than real time.

    Given that I agree with you, Charlie, about Moore's law running out, I think we can pretty confidently say that we'll never upload human brains onto classical computers. Maybe in 50 years, if you are a billionaire who can afford to run a skyscraper-sized supercomputer as a futurist's pyramid. Maybe.

    (The hedge is, of course, quantum computers may make this possible, but I have no idea, just like most everyone else.)
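
    As a rough back-of-envelope check on that pessimism (all three inputs below are editorial assumptions, not the commenter's figures: a 100x real-time gap for the cat, roughly 100x more neurons to simulate in a human brain, and an optimistic Moore's-law doubling every two years):

        import math

        realtime_gap = 100      # cat simulation ran ~100x slower than real time
        human_vs_cat = 100      # assume ~100x more neurons/synapses than a cat
        doubling_years = 2.0    # optimistic Moore's-law doubling period

        shortfall = realtime_gap * human_vs_cat
        years = math.log2(shortfall) * doubling_years
        print(f"~{years:.0f} years of uninterrupted doubling needed")  # ~27 years

    Which is "never" only if, as argued above, the doubling stops well before then.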

    230:

    Citation? It's obvious. Get a pen and a piece of paper and start working through the algorithm. You're the CPU, the paper is RAM; together you emulate a von Neumann computer. Slowly and with a high error risk, but that's irrelevant for these purposes.

    Can neural networks (or whatever brains actually are) do the calculations more efficiently? I don't know. But they can certainly do them. There's nothing a computer can do that a human can't, given enough time and supplemental memory. The question is whether the reverse is true.
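
    To make the pen-and-paper point concrete, here is a toy von Neumann machine small enough to execute by hand; the three-instruction set is hypothetical, invented for this example. You play the CPU, and the `mem` list is the sheet of paper.

        def run(mem):
            # Program and data share one memory, fetched three cells at a time.
            pc = 0
            while True:
                op, a, b = mem[pc], mem[pc + 1], mem[pc + 2]
                if op == 0:                  # HALT
                    return mem
                elif op == 1:                # ADD: mem[b] += mem[a]
                    mem[b] += mem[a]
                elif op == 2:                # JNZ: jump to b if mem[a] != 0
                    if mem[a] != 0:
                        pc = b
                        continue
                pc += 3

        # Adds the data cell mem[9] (2) into mem[10] (3), then halts.
        print(run([1, 9, 10, 0, 0, 0, 0, 0, 0, 2, 3]))  # mem[10] ends up as 5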

    231:

    "Uploading brings forth the problem of continuity of consciousness. "

    It need not be a problem: "Her consciousness was continuous as the reactivated brain cells took over from the slowed-down swarm of nanocomputers that had been simulating them."

    232:

    I think "nobody wants them" is demonstrably false today. Whether the desire for them can translate to sufficient effort to create them is another matter, but there's plenty of people who, given a chance to press a button and create sentient machine, would say "hell yes!" and press. This is true whether we specific autonomous machines or servants, though the people-sets would differ.

    233:

    An FBF posted your link on their page, and after perusing it I have a response.

    First:

    Vinge's vision is not the only one. I doubt the singularity will be human-equivalent, any more than our thought processes are similar to an ant's. Also, why all the stuff on ethics? What does ethics have to do with evolution anyway?

    AI is not going to go to court to win legal rights, any more than we take ants to their queen's court when we don't want to be bitten.

    And for "human-like" and what we want: The singularity is not going to be about what we want. I refer you to my point about ants.

    Uploading? That has little to do with AI or the singularity and everything to do with vanity and narcissism.

    The Simulation Argument does not seem to have a substantive response from you. What does it even have to do with your title?

    B:

    If you are going to start with a "First:"...

    Your point is not very clear and based on a very anthropocentric perspective. That is not the criticism you might think it is, but is meant to point out a limitation that many have in understanding how different AI will be from human intelligence.

    All this discussion about accountability and ethics is a wishful human-centric hope. Look to history to see how well that has gone so far.

    You may view my response as pessimistic, but it is far from that. I am pro-life, and I don't mean the abortion issue. The singularity is a possibly inevitable part of the evolution of life, be it here on earth or somewhere many light years away. If we as precursors are assimilated in the process then the only complaint you can make is a selfish one. Don't worry, I am selfish too. That's life. :)

    234:

    I don't think they did actually simulate an entire cat brain. For a start, we haven't even mapped the metabolome of any species, so we do not have a good idea of what biochemical processes are going on inside the brain. What the project did was simulate a neocortex at a very rough approximation, in the same way that neural networks are very limited in how accurately they represent real neurons.

    This is a far cry from whole-brain emulation, and this is one of the biggest problems. There are two potential avenues for strong AI development: reverse-engineering the human brain, or writing it from scratch. We have no idea how to do the latter, and for the former we first have to have a complete understanding of how to simulate the human body and its environment at a molecular level. It surprises me that singularity fanbois hand-wave the difficulty of this away.

    235:

    Keynes's top-down systems did work for years, until they were sabotaged by people who did not want them to work. A look at history shows the Austrian school of economics has not worked in the real world as well as Keynes did.
    There is more than one kind of libertarian. The old "Pink" kind said people should not be messed with by anybody. The new kind is sponsored by the people who pay for the GOP. It's a holding cell for Republicans who are sick of the GOP. It says people have the right to mess with other people if they have the power.
    19th-century classical economists were so desperate to make economics a real science that they took equations from the hard sciences and used them. Money = heat, and that kind of thing. The scientists of the time said they were nuts. But to the rulers, making economics look like a true science was worth it in power.
    Adam Smith was right then and now! But the later 19th-century classical economists were wrong. It was a sales con. Look around: how well are today's new economists' laws working? It all runs on chaos and voodoo. After things happen, they come up with why it did. After!

    236:

    dn @ 194 A "billion" euros to do a human-brain simulation within 9 years, huh? Any opinions on how likely/possible this is (ignoring the money - this is a technical evaluation) ...

    Charlie @ 209 But someone in Britain was acquitted of MURDER recently, because they sincerely believed in witchcraft .... (See the Pagan Prattle entry, 25th May ...) & @ 214 - which is why you can't step into the same river (or even pond) twice - because it's a different YOU.

    Question. Almost all computers at present are superfast serial-processing machines (others have commented on this earlier). What happens if one does construct a massively parallel machine (even if it's running more slowly), or better still an interconnected machine of lots of small processors, all connected to each other at the faces and vertices? What happens then? And why hasn't anyone tried it so far (apart from the cost, that is)?

    237:

    Doesn't it seem odd how over-engineered humans are? Now, I'm not suggesting a religious or alien hand in our creation. We have good depth perception due to our forward-looking and spaced eyes. We have great dexterity with our hands, useful for recording knowledge and building. We have a large vocal range, great for languages and in turn communication. With all the languages in the world, how many distinct sounds can a human produce? Although I do find it funny how many words are reused in English. Mole as a creature and mole on one's body. Bill as currency and bill on a duck. Charge... well, there's a lot for charge. We're just getting lazy. :-) We stand upright, freeing our hands for other tasks. We can drive at over 200 miles an hour, while we can only run less than 30 for the fastest of us. Why the extra capacity?

    We are the luckiest of creatures. It's like hitting jackpot time and time again. I imagine a dolphin is very close to us in intelligence. But what can they build? Their knowledge can't be passed down except orally. How many sounds can they produce? Do dolphins on the other side of the world speak a different dolphin language?

    I drank too much coffee.

    238:

    Hi Bruce,

    Let me add to your confusion, or at least, attempt an explanation.

    Think about parasitism in terms of money. A monetary parasite takes money without providing something of equal value (however judged) in return. There are a bunch of ways to do this, but typical scams involve providing a good or service of low worth and getting as much money as possible for it. Or one can simply steal. All these are forms of parasitism, at least in a monetary sense.

    Now, you can certainly kill people to steal their possessions, but that's dangerous, and you only get to do it once per victim. Conversely, if you are, say, sexy and cute and cheap, your victim may throw money at you. However, there are certain expenses involved with staying sexy, cute, and cheap, and you can only exploit one (or at most a few) victims in this manner. That puts some serious limits on your lifestyle. This is true for most non-human parasites as well: typically they are specialists, rather than generalists, and most of the world's biodiversity is composed of parasitic species, most of whom specialize in one or a few hosts.

    With parasites, simplicity is a frequent side-effect of the parasitic life-style, not its goal. The critical difference between a predator and parasite is that a parasite doesn't kill its host. This sets an upper limit on what it can take from that host (i.e. not enough to kill the host). Structural simplicity is (to simplify) an easy way to cut costs. Many metazoan parasites are little more than a mouth, gonads, and mechanisms to find appropriate food and avoid fatal retribution from the victim.

    As I noted in that last post, simplicity is not the only way to be a parasite. It just happens to be an easy state to attain, because genetically, it's easier to lose things than to gain them.

    The only way you can identify a parasite is through its relationships with others.

    Some people may feel squeamish about using anthropomorphic examples. The reason I do that is that humans have evolved to be very good at intuitively parsing relationships. When talking about the complexities of parasitism, I've found that it's easier to frame descriptions in terms of human relationships, rather than mathematical models. The drawback with this approach is that it does lead to a rather cynical view of human relationships. And we won't even talk about domestic cats.

    239:

    > Except of course, we have already been through several singularities. The use of fire, the smelting of metals, the wheel, water-and-then-steam power, now flight and electronics in its widest sense ...

    Language (oral and written) also belongs on that list.

    240:

    That was a software emulation of physical processes going on in the cells. An actual hardware implementation would naturally be much faster.

    But that would require three-dimensional circuits, as a two-dimensional implementation would probably take far too large an area, and has some really great potential to introduce exponential scaling factors.

    All connections crossing other connections would have to be simulated through some sort of a switch, for example. Extra circuitry for something that is trivially solved through geometry in a real brain ...

    Having just two dimensions to play with naturally puts some severe limits on performance. (Having four would be even more favorable though. ;))

    I guess Neal Stephenson got it right in Anathem: Topology is destiny.

    241:

    I'm sure someone will crack AI in the end, one way or the other. Of course, that could end in all sorts of ways, and start on a wide variety of dates.

    Anyway, we might not make it out of the energy crisis yet.

    242:

    Young dolphins are killing porpoises because they can't have sex. I suppose we see that in inner cities sometimes.

    243:

    I totally agree with this point. From any given era, the next jump in technological achievement would amount to a singularity, based on a prediction forward from their timeline.

    But this also supports the notion that good predictability is nigh impossible. Inventions can greatly change the path of progress. Who can say what will be invented in the next 100 years?

    This also supports the notion that one cannot recognize the present state of technology because it becomes mundane. How many people look at a plane like the 747 flying and are amazed? Around 900,000 pounds hanging in the air.

    244:

    I quite liked Jim Munroe's cynical treatment of uploading in Everyone in Silico (free DL). His virtual world gets monetized and marketed, scammers find their niches, and lots of folks find that being physical is a far better deal (unless they're about to die, and have a way to get a free ride).

    Come the Nerd Rapture, I'll be one of the unbelieving sysadmins left to tend the servers and enjoy all the good whiskey.

    I'll leave for others the question of how a bunch of virtual entities pay their hosting bill.

    245:

    Three unconnected, hopefully relevant comments/reflections:

    In Eon, Bear has corporeality be a mark of citizenship; IIRC, partials and children (generated from parental templates and raised in virtual reality) are not fully enfranchised until they get a body. That would be one way to work it.

    Personally, I have the intuition that the brain and its mechanisms aren't really that complicated, because nothing in the human body is really complex in principle, baroque as it may be in execution: muscle cells, nephrons, blood cells, alveoli, miscellaneous tubing. Neurons are stimulus-response. It's just that there's a whole lot of tangle in there. Memory is a palimpsest of stimulus laid continuously on the same substrate, which explains why everything is so well contextualized but specific recall is so poor, and thoughts are just stimuli bouncing around the endless passages of the network, like those photons that take thousands of years to emerge from the sun in a random walk.

    One thing that always bugs me about the concept of posthuman intelligence is its supposed inability to communicate with us, which seems more a mark of stupidity to me. The smartest people tend to be extremely good communicators of concepts; I would expect something smarter than me to be able to talk to me. We can't talk to ants because we're not smart enough to synthesize the necessary pheromones (or antennae waggles, whatever), not because we're too smart for them. Cesar Millan can get dogs to do whatever he wants because he took the time to learn their language and modes of thought; I would expect a super-smart AI to be capable of similar proficiency with us.

    246:

    I'm not sure you could prove (b), that there is no soul, to religious people. Even non-religious people could simply say the technology isn't developed enough to copy/transfer a mind. And religious people could say anything, from "God prevents the copying of souls" to a belief in bodily resurrection in the afterlife, complete with a now-immortal brain for your soul.

    247:

    A really good expert program could act like a human. As much as humans act like humans at least. In fact we are almost there now. Our thinking is mostly conditioned reflexes.

    248:

    Computational and representational theories of the mind (and/or brain) are attempts to extend the concepts of computation and representation by analogy. There's no sense in which a brain could literally be said to be a computer or literally be said to represent. Whether they're productive analogies or misleading analogies is a separate issue.

    The physical object of a computer is not a symbolic system but it's also not a computer. It can't be said to compute unless its physical happenings are taken (by a community of people) to represent and map onto computational happenings according to sets of human conventions. It's certainly not doing anything, at the purely physical level, that could be construed as a simulation, since you're there passing through multiple levels of representation. We know that brains aren't literally computers because a computer is the sort of thing that can't exist without a community of people with conventions of practice, language, etc. It's similar to something like money. You have physical coins but at a purely physical level they're not money and they have no value. The same is true of computers.

    It's the fact that computers are symbolic systems, are based on human conventions and are dependent on a human community for their existence as computers, that makes it possible to use one system to emulate another. So even if we extend the concept of computation to brains by analogy, we can't make any inferences about emulation, because the things that make emulation possible aren't true of brains. Emulation isn't a concept that could be applied to brains. Brains are computational in only an attenuated sense (i.e, they pass through successive states, they might be explicable in terms of modules that have inputs and outputs, etc). They quite clearly are not created according to specific sets of conventions nor are they able to implement another set of conventions at an abstract level or be implemented as a set of conventions on another such system.

    Now, obviously you can simulate the brain, but there's just no question of that brain then being 'fooled' into thinking that it's real, or not knowing that it's a simulation, because it's a simulation and, as I said, a simulation is no more the thing it simulates than a photo is the thing it's a photograph of. Can a photo of you be fooled into thinking it's you? Of course not. If you think the analogy is not apt, just up the granularity. What if I took photos of all your neurons? What if I took photos of them in successive states? What if I put them in a flip book? What if I scanned it all into a computer and the computer flipped through each photo in turn? What if I used 3D scans instead of photos? What if I used a functional description of the state of each neuron, based on those scans, fed into a set of variables in a computer program? At no point am I leaving the realm of representation and depiction. I can clearly move from a simple representation to something that looks a lot more like a computer simulation, but I've never stopped talking about something that is merely a description of a thing and not the thing itself, and at no point will it ever become the thing itself (whether the original or another of that type) or anything other than a description.

    249:

    Charlie, not sure whether you took this into account, but you seem to have forgotten that, although computer-slaves may not seek to advance their own intelligence, humans certainly do. Don't know about you, but if I were an engineer trialling my own intelligence-enhancing device, the first thing I would do would be to use my additional insight to improve the design... an intelligence explosion follows without great difficulty in this situation. While, of course, in a corporate setting that's not how it works, it only takes one person in the world to do this, and boom, singularity.

    250:

    There's a new BBC series just started on early species of Homo. Why did Homo sapiens win out?

    BBC iPlayer page for Planet of the Apemen

    And, interesting thought: what was the critical invention that wasn't Chinese?

    The answer is glass. Without glass you can't make spectacles, and without spectacles your scholars and scribes and inventors maybe lose twenty years of useful work. But why didn't they invent glass?

    Because they drank tea. And it's a rather advanced form of glass which is needed to make a vessel able to handle the thermal shocks. Instead, they came up with white porcelain.

    Meanwhile, the barbaric Europeans were drinking wine, and displaying their discernment with transparent glass drinking vessels. And making windows, and making telescopes and microscopes and spectacles. Just being able to make glass gives you ways of heating materials without introducing contamination from a fire. And making coloured glass feeds into the basic ideas of chemistry, while glass itself is a key material for making the tools to study chemistry.

    251:

    @ 250 Other non-"Chinese" invention ... METAL SMELTING

    ( Or, at the very least, discovered/invented elsewhere as well )

    Oh, and WRITING, invented in the Middle East. Chinese is the last remnant of cuneiform, isn't it?

    252:

    250 comments already; I'm too lazy to verify if this is an original idea. Religious-ethical objections to uploading human consciousness into cyberspace remind me of similar objections to the manipulation of the human genome. There's an underlying assumption that advances will be confined to the West, where objections can block experimentation - the debate in the USA over embryonic stem cells being a prime example. The leading edge of research may be confined to the West for now, but not forever. The nation with the most money available for scientific research is also one that has no such qualms. Imagine, for example, the military advantage to be had from a submarine that is controlled by human minds but does not have to maintain human bodies. By the way, Charlie, loved The Fuller Memorandum. Keep 'em coming.

    253:

    http://en.wikipedia.org/wiki/Smelting#History clearly shows that the base technology is around 8,000 years old, and was "invented" entirely independently by several separate civilisations.

    http://en.wikipedia.org/wiki/History_of_writing does state that cuneiform and Chinese are probably completely independent of each other. It's another "multiple independent discoveries" thing, I think.

    254:

    Philip K. Dick played with the mind-uploading concept in Eye in the Sky (1957).

    http://en.wikipedia.org/wiki/Eye_in_the_Sky_%28novel%29

    255:

    For the point at hand, I used "Moore's Law" to say that we're still very far away from reaching a wall in terms of the evolution of the processing/storage-to-energy-consumption/size/price ratio. Blame it on having a ZX81 clone as my birthday present in 1983 (tape storage, 2KB RAM, 1MHz?) and replying to you on my phone (32GB solid-state storage, 256MB RAM, 1GHz). Perhaps I just can't be objective about it. :-)

    256:

    Charles: "While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on it's external "self" than you or I are to shoot ourselves in the head."

    Actually, it would. We have very deeply hard-wired evolutionary conditioning. The AI wouldn't necessarily have that.

    "And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you."

    Again, why? Your right hand can't. Note that sometimes cells try this (cancer) and kill their 'host'.

    257:

    Three arguments? I count one poor argument made twice and a third irrelevant comment on the simulation hypothesis:

    1) "human intelligence is an emergent phenomenon of human physiology" (Mind emerges from body - no body, no mind.)

    2) "Our form of conscious intelligence emerged from our evolutionary heritage, which in turn was shaped by our biological environment. We are not evolved for existence as disembodied intelligences" (Mind emerges from body - no body, no mind.)

    2.5?) The simulation hypothesis cannot be proved so I'm not going to talk about it. (?)

    Mind is just a pattern perceived by a mind. There's nothing emergent about it. To speak of it as such is dualism. Go read Dennett.

    258:

    From reference three. Is it just me or should "extremely unlikely" be "extremely likely"?

    "ABSTRACT. This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."

    259:

    "Extremely likely" Unless Human minds cannot be simulated, or we are going extinct fairly soon. This may well be tied up with both the Doomsday Argument and Fermi Paradox. They provide an interlocking context which very strongly implies our beliefs about reality are seriously wrong.

    260:

    I thought about this more and, to bolster your argument, I think I understand exactly where the disconnect lies.

    There is only one reality. Just because a simulation operates according to rules does not mean that these rules are "laws of physics" in a "simulated universe." The simulation is running in our universe.

    There are certain things which exist (assuming an objective reality, etc.). My brain, my desk, my computer. There are other things which do not exist, even though representations of them may or may not exist: Bugs Bunny, Jessica Rabbit, Lt. Commander Data.

    It is my position, then, that things which do not exist cannot be conscious. Thus, the picture of Jessica Rabbit in my wallet could hypothetically be conscious, but it would be conscious as a picture of Jessica Rabbit in my wallet, not as Jessica Rabbit, because Jessica Rabbit does not exist.

    Similarly, if we simulated a human brain with a computer in real time (which is a very faraway sort of thing if it ever happens) and hooked it up to the right inputs and outputs, then we would be able to say that the computer is conscious, not the brain itself, which is merely a changing set of charges in the RAM. Just like my neurotransmitters are not conscious even though they are the substrate of my consciousness.

    Now, you could then go on to hook up your simulated brain to a virtual reality simulator. But again, it is not true that the brain is living in the virtual reality. It is living in our reality, hooked up to a simulator. You could hook many of these brains up to an MMORPG, but they would simply be playing the MMORPG like some of us do, they would not be inside the MMORPG. Because it is impossible to be inside something that does not exist.

    261:

    Are y'all aware of how genetic algorithms are being used to design radio antennas? It may not be 'AI' that outruns us but rather artificial evolution--intelligence may not be needed.

    http://fab.cba.mit.edu/classes/MIT/862.06/students/alki/ has a picture and references. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.64.5066&rep=rep1&type=pdf
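
    For anyone who hasn't seen one, the core loop of a genetic algorithm is tiny. Here is a toy sketch in Python; the fitness function is a made-up stand-in, whereas the antenna work linked above typically scores each candidate in an electromagnetic simulator:

        import random

        GENOME_LEN, POP_SIZE, GENERATIONS = 32, 60, 200

        def fitness(genome):
            # Toy objective: count the 1-bits. A real antenna GA would
            # instead simulate the radiation pattern the genome encodes.
            return sum(genome)

        def mutate(genome, rate=0.02):
            # Flip each bit with a small probability.
            return [bit ^ (random.random() < rate) for bit in genome]

        def crossover(a, b):
            # Single-point crossover of two parent genomes.
            cut = random.randrange(1, GENOME_LEN)
            return a[:cut] + b[cut:]

        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
               for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:POP_SIZE // 4]     # keep the fittest quarter
            pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                           for _ in range(POP_SIZE - len(elite))]
        print("best fitness:", fitness(pop[0]), "of", GENOME_LEN)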

    262:

    Or deduce Newton's three laws of motion from experimental results?

    263:

    Great post. I agree with most of your reasoning. I'm more bullish on uploading working (By the time we can simulate the brain well, we'll be able to simulate an external environment for it well.), but I also think uploading is much further away than Kurzweil does.

    In addition to your points above, I'd add that:

    1) We already have greater-than-human intelligences in the form of collections of humans, yet they don't seem to be on a takeoff trajectory.

    2) Many of the problems we care about scale far worse than linearly, so even as we increase computational power or its distant cousin, intelligence, we find that the practical benefits we get are far less than we might expect.

    I wrote about these in an article last year on 5 reasons 'The Singularity' is a misnomer:

    http://hplusmagazine.com/2010/11/11/top-five-reasons-singularity-misnomer/

    264:

    For some, the Singularity fulfills religious ideas, serving as the means or way to their ends or goals: an alternate route, or actually 'the' route. Help is not just on the way, it is the way.

    265:

    By the tentacled beard of Cthulhu ?!?!? What have you done, Charlie ?!?!? Hope you're happy with yourself ... young man!?!?! You've created quite a cauldron of piranhas here. You may as well have just handed out conceptual baseball bats and shouted "Last man standing gets to define it all!!!!"

    Spent quite some time reading through, and was, from one post to the next: A: Baffled at some posters' obvious lack of understanding of the aspect of the debate they attempted to support/debunk. B: Genuinely slack-jawed, wide-open-mouthed and gob-smacked at the utterly vacuous attempts at rational argument put forward by various posters from all sides of the debate (mentioning no names). But come on, people, my 6-year-old twins could have driven a coach and horses through some of them. C: Equally stunned by the number of posters stating in quite definitive terms that 'Intelligence/Consciousness is/is not this or that', 'State simulation is/is not capable of fully simulating X/Y', etc, etc. Obviously some are bound to be closer to the facts than others, but most seem to state their beliefs without a shred of humility, or just old-fashioned scientific caution about overstating a thesis without supporting evidence. D: The truly squirm-worthy attempts by those of a 'spiritual nature', or just adherents of old-fashioned religion, to reconcile AI and uploading with the doctrines of their faith. E: The equally squirm-worthy attempts of those holding faith beliefs about the Singularity to separate their 'belief' from more traditional faith types. F: The unfortunate habit (I've witnessed it in many debates around SF themes) some posters have of quoting novels in support of various positions. Surely no one needs me to point out that novels are a non-interactive form of narrative entertainment, not road maps to the world of the Jetsons 2.0.

    Other pet peeves: Strictly a personal teeth-grinder, but the number of Star Trek references dropped during this topic scared me more than anything else. Yeah, I know there weren't that many ... but any are too much for me.

    People not grasping that current nanometre transistor tech is getting close to the very hard-coded limits of physics. Not understanding that pretty much excludes you from making meaningful statements about current/future IT architecture; just owning a Dell or a MacBook doesn't help you. Living on a cruise ship does not turn you into a naval architect.

    Geek and Nerd being used as interchangeable terms. The two have some subtle implications attached that separate them. Nerd IMHO implies BO, low social skills, and an unhealthy obsession with Anime figurines, etc. But for me it has never suggested the kind of 'functional negotiable skills' that Geek implies: Linux-Geek, C++-Geek, Softimage-Geek, maths-Geek, Flash-Geek, Chess-Geek, Real Ale-Geek, etc. It implies a narrowed obsessive focus ... but at least on something useful and of broad societal benefit. It's a shame 'Rapture of the Nerds' loses its flow when altered to 'Rapture of the Geeks' ..... but you can't have everything.

    Just to round off: I have a friend who I've known most of my life; in his foolish younger days as a child he deliberately bullied people like us/Geeks. He maybe fits into the 'inner city' demographic Marilee J. Layman mentioned with such non-judgemental empathy at #242. I asked him once over a few pints why he bullied others and why he never bullied me. After thinking for a while he said something like: "Well, I think it was about them always having to be right, about whatever, even when they weren't. Trying to make other people feel small and stupid by doing it. You never did it the same way; you either explained without being snotty or just kept things to yourself, even when you knew stuff. It made you easy to like."

    Not that I am supporting violence against the opinionated. But some of this topic seems to support his position.

    Charlie, I enjoyed this topic ... possibly for some very wrong reasons. But I say: more like it. More poking extropian theories with a sharp critical stick. More stirring up the hairless primates' limbic systems by challenging their precious sensibilities. BUHAHAHAHAHAHAH!!! :)

    266:

    "People not grasping that current nanometre transistor tech is getting close to the very hard coded limits of Physics."

    I think most people grasp that. However, the full statement ought to add: "...in two dimensions and on one square inch of silicon." I can quite easily envisage nanoscale electronics printed on flexible graphene sheets whose surface area would be square meters before it was folded into a small 3D cube about the size of a thumb drive.
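
    Back-of-envelope support for that, with the layer pitch and package size invented purely for illustration:

        # How much 2D sheet fits in a thumb-drive-sized package?
        # Assumed numbers (mine, illustrative): a 2cm cube and a
        # 1-micron pitch per folded layer, insulation included.
        side, pitch = 0.02, 1e-6        # metres
        layers = side / pitch           # 20,000 layers
        area = layers * side ** 2       # total sheet area in m^2
        print(f"{area:.0f} m^2")        # -> 8 m^2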

    267:

    AND YET There are Other Ways of achieving Nerdvana than the ones we have been speaking of ... Now, is the example I've pointed to serendipity, or is it getting close to the time to railroad ???

    268:

    There's one (not very well) hidden assumption in the AI-technological-singularity argument: that the first weakly superhuman AI will be the intended product of human ingenuity, understandable by humans and by extension understandable by anything smarter than humans. If the first weakly superhuman AI is anything like, say, Google, then the entire recursive nature of the thing is called into question because it takes a superorganism made out of many of our most intelligent humans several decades to create and maintain a single Google network (which itself takes up a huge quantity of resources). If this entity ends up being weakly superhuman, it still probably can't muster quite the intelligence of the whole group that created it, and the group that created it may not have a complete view of the whole as a whole either.

    269:

    Unless you can make good interconnections when you've folded it up, you might as well not do the folding at all.

    270:

    One might not need 3D interconnections at all. Just a sheet. The folding would be for packaging. That would take Moore's Law several decades further once physical 2D printing limits are reached, i.e. an area doubling every 2 years for (say) 30 years.
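
    The arithmetic behind that extension, for what it's worth:

        # Area doubling every 2 years, sustained for 30 years:
        doublings = 30 / 2
        print(2 ** doublings)   # -> 32768x the starting transistor count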

    271:

    Hi Charles,

    Thanks for this post. It's definitely spurred some conversation. Here's my two cents worth: http://intelligent-future.com/wp/?p=743

    Richard Yonck Intelligent Future

    272:

    According to your own argumentation, the model of human beings that we call 'human consciousness' doesn't exist either; there are biochemical and electrophysiological states, yes, but there is nothing implying they are tied to consciousness, and we know of quite a few neuropsychological disorders where actions get disconnected from consciousness (see Tourette's, see certain types of epilepsy, see blindsight, etc.); so it seems consciousness is nothing more than a partial picture of the real state of the human brain; i.e., it's a simulation.

    273:

    Whatever the "immortal soul" is in Christian theology, it seems to either require a body or at least to be crippled without a body (2 Cor 5:1-5). The Christian hope is in new bodies, not old souls.

    274:

    Charlie, I am more optimistic than you on the feasibility of and timeline for strong AI and mind uploading, but I am probably closer to your cautious assessment than to the wild optimism of, say, Kurzweil. I think both technologies will be developed someday because they are compatible with our scientific understanding of reality, but not very soon.

    In reply to: "I can't disprove [the Simulation Argument], either. And it has a deeper-than-superficial appeal, insofar as it offers a deity-free afterlife... it would make a good free-form framework for a postmodern high-tech religion. Unfortunately it seems to be unfalsifiable, at least by the inmates (us)."

    My question is: what is wrong with this? Some persons function better in this life if they can persuade themselves to contemplate the possibility of an afterlife compatible with the scientific worldview. They become happier and better persons, help others, and try to make the world a better place.

    In other words, the pursuit of personal happiness without harming others. Charlie, what the fuck is wrong with this?

    275:

    "If you halve the wavelength used for etching chips from 40nm to 20nm, you square the number of transistors per chip." Um, multiply it by 4?
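
    The correction is right, because transistor density scales as the inverse square of the feature size:

        # Density ~ 1/(feature size)^2, so halving 40nm to 20nm
        # multiplies the transistor count by 4 (it doesn't "square" it).
        old, new = 40e-9, 20e-9
        print((old / new) ** 2)   # -> 4.0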

    276:

    Some persons function better in this life if they can persuade themselves to contemplate the possibility of an afterlife compatible with the scientific worldview. They become happier and better persons, help others, and try to make the world a better place.

    In other words, the pursuit of personal happiness without harming others. Charlie, what the fuck is wrong with this?

    Nothing's wrong with that particular outcome.

    Where it goes wrong is when the belief system in question acquires a replicator meme ("tell all your friends the good news!"), a precedence meme ("all other beliefs are misguided!") and finally goes on a bender and turns mean ("unbelievers are soulless scum! Kill them all before they pollute our children's precious minds with their filth!").

    That's why I take a negative view of religions in general. It's not what the founders say or think, it's not about what the mild-mannered ordinary folks who use it as a compass to guide them through life's heartache think ... it's all about the authoritarian power structures that latch onto them for legitimization, and the authoritarian followers (pace Altemeyer et al) who take their insecurity out on the neighbourhood.

    277:

    Re "... it's all about the authoritarian power structures that latch onto them for legitimization, and the authoritarian followers (pace Altermeyer et al) who take their insecurity out on the neighbourhood."

    Of course I totally agree with this, which is why I also take a negative view of traditional religions. Yet, I keep hoping that we can find ways to use the positive aspects of religion (relief from life's heartache) without falling into the negative aspects.

    A rant on simulation theory as religion: http://giulioprisco.blogspot.com/2010/03/in-whom-we-live-move-and-have-our-being.html

    278:

    The SA does not provide a deity free scenario. In fact, it guarantees a God. If by "God" you mean creator of the universe and everything in it, including us.

    279:

    I have a nasty cynical suspicion that the gap between an intriguing speculative belief system and a traditional religion is about one generation.

    (Today's Christian baptist fundamentalists are only 100 years removed from their founders, who were a much more flexible and free-thinking group. They went from questioning and skeptical reformists to doctrinaire authoritarians in just two generations, as I understand it.)

    280:

    It's hopefully less than that given the Meta-5 project at Zero State

    281:

    Giulio, if the SA is not true it pretty much means all of the major features of H+ are dead ducks

    282:

    True - but the creator of the universe is a lower-case creator, and you're not required to do any worshiping or follow any particular rules. It's like being 17 and realizing that your parents are asshats, except now you're allowed to think that about God.

    283:

    That's a very gnostic view. However, if you want to be promoted out of the Sim into the wider world some basic ethical principles can be reasonably assumed. I doubt Ted Bundy or similar will be welcomed to a society where "people" have godlike powers. Be nice to people.

    284:

    @Dirk re "In fact, [the simulation theory] guarantees a God. If by "God" you mean creator of the universe and everything in it, including us."

    It does, in the sense of a natural God that emerged from an evolutionary process in their own universe and is subject to their own laws of physics. In their terms, they are probably not omniscient and not omnipotent: they cannot violate their own laws of physics. They can violate ours, though.

    285:

    @Charlie re "I have a nasty cynical suspicion that the gap between an intriguing speculative belief system and a traditional religion is about one generation."

    Perhaps. But this is also true of political and cultural movements, lifestyles, and most other things (remember Einstein's "young whores, old bigots"). Once our speculative belief systems become traditional religions, if such a thing happens, younger whores will be creating new speculative belief systems for their time. And so on, and so forth.

    286:

    If we are allowing for one level of Sim, the posthuman near-term, we might as well opt for a few higher levels as well, with at least one to take care of the Fermi Paradox. The top level may well be Tipler's omega point, which is to all intents and purposes the real deal as Gods go. [BTW, the apparent fact that we are living in a universe that may not experience collapse in no way invalidates Tipler - an open universe will be one of the infinite simulations of a collapsing one]

    287:

    Tipler is, in my opinion, batshit crazy. He shows every sign of having picked up the Born Again meme virus -- only, as a physicist of some note, he's trying to square the circle between bronze age prophecy and modern cosmology in order to minimize the cognitive dissonance between his two world views.

    Seriously: have you read "The Physics of Immortality"? (What he's doing is probably more obvious if you don't come from a background where christian apocalyptic imagery is part of the zeitgeist.)

    What worries me is that if his starting point was the Simulation Argument then he could quite possibly make it fly -- that is, demonstrate logical compatibility between the SA and the Jeezmoids' creed. At which point the extropian movement will be wide open to attack by subversion from within.

    288:

    H+ is point for point Xianity. Apocalypse, resurrection of the dead, heaven, the artilect messiah. The last two chapters of TechnoMage deal with the correspondences. Not to mention its use of the chaos star to front the WTA for years. And yes, I have read Tipler's book (I have a copy). While I agree that it's flawed, the overall premise (SA) is IMHO sound.

    289:

    Didn't the late H. Beam Piper say something like this? From memory, and I may be misquoting ..... "First you have people who believe their god is special, and better than the other gods, then they believe that their god is the only true god, then they believe that all other gods are false, and then they believe that their one true god can only be worshipped in one way. Then they believe that other worshippers and believers are unbelievers and heretics and must be stopped ... the Inquisition, the vile First and Albigensian crusades, Haarlem, Magdeburg, Katyn, the killing fields - we want none of that here."

    I agree with OGH - this seems to happen every damned time. Guess why I've escaped from protestant xtianity to atheism?

    290:

    Guess why I've escaped from protestant xtianity to atheism?

    I dunno, Greg -- same reason I escaped from reform judaism[*] to atheism, perhaps?

    [*] British version, which I believe is called "conservative" in the US.

    291:

    I escaped from atheism to paganism

    292:

    I have read "The Physics of Immortality", and I agree that Tipler is off the deep end, but not precisely batshit crazy. I think he's got a bad reaction to too much thinking about the Problem of Pain and the Problem of Evil, and needs a Deity to make himself accept a world where those problems exist1. SInce he's a physicist, he looks for a god in the machine. I find his reaction very sad, since it's derailed the reason of a very smart individual, and made all of his thinking irrelevant to the real world.

    Incidentally, the fundamental problem with the Simulation Argument, based on the Doomsday Scenario, is that it completely misunderstands the nature of probability. Basically, the probability of me existing at some time, given that I exist at all, is exactly 1, and you can't argue a probability of when I exist from that.

    [1] This is a very common reaction among people who need a "rational" justification for the random crap that happens around them. They simply can't accept that there might not be any justification.

    293:

    Singularity is not about machines becoming conscious or uploading minds. Singularity is about ever faster technological progress that eventually hits a boundary where progress is made in an instant.

    Personally I think that singularity will not happen because people don't want to keep up with fast changes. We're already hitting boundaries. Look at software like Firefox. There are updates and make-overs every few months now. "Incredible new features and much faster" is what you hear. Nice. But it's not so nice to discover useful functionality is lost in the process and you need to tweak your installation over and over with every update. Or look at the smart phone market. More and more people are becoming tired of avalanches of new models, new technologies, new features. Sure, marketing people will try to convince you otherwise, that you should buy that new model because everybody does it. It's their job. But there are limits to the BS you buy.

    Compared to the eighties we're already living in a singularity. We had little idea what the future was going to be like. Anything can happen in the next 25 years, but it's very likely that in 2036 you'll take your (grand)children to the park to feed the ducks, visit a museum, go out with friends for a drink and cook your own dinner. Whatever iPhone you carry, with whatever interface, applications and connectivity, you'll still love the mute/off button.

    294:

    "...and made all of his thinking irrelevant to the real world."

    Hardly. If it does not exist we will create it. That's H+ in a nutshell. All Tipler is saying is that someone has already done it.

    295:

    First of all, let me come right out and say that the singularity already happened, eons ago. The universe is the result. If our human-induced singularity does happen, it will be puny in comparison.

    Second, most singularitarians seem to ignore the fact that intelligence needs motivation. There can be no goal-seeking behavior without motivation. Most animals get their motivation from innate wiring that causes them to seek certain rewards (food, water, etc.) and avoid punishment (pain).

    So the question is, how will an intelligent machine get the motivation to dominate or eliminate humans? The only way for that to happen would be if the machine regards humans as a threat to some future reward or the cause of an upcoming punishment or unpleasantness. If we program an intelligent machine's reward mechanism to seek a "thank you" or a "good robot" from us and to avoid "bad robot", then we have nothing to worry about, regardless of how intelligent the machine is.
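
    A minimal sketch of the reward wiring being proposed here, with every name and number invented for illustration (and none of the hard problems solved):

        # Hypothetical agent whose only reward channel is human feedback,
        # as the comment proposes: praise is reward, scolding is punishment.
        FEEDBACK_REWARD = {"thank you": 1.0, "good robot": 1.0, "bad robot": -1.0}

        values = {}  # learned value of each action

        def update(action, feedback, lr=0.1):
            # Nudge the action's value toward the reward it just earned;
            # unrecognized feedback counts as neutral.
            r = FEEDBACK_REWARD.get(feedback, 0.0)
            v = values.get(action, 0.0)
            values[action] = v + lr * (r - v)

        update("fetch slippers", "good robot")
        update("chew slippers", "bad robot")
        print(values)  # the agent comes to prefer praised actions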

    Once an intelligent machine is trained to behave in a certain way, it will not only be impossible for it to deviate from that behavior, it will be impossible for it to want to deviate from it. Why do humans rebel? This is no different from asking why humans have such a fascination with music even though it is not needed for survival. There is something about humans that materialists do not and probably cannot understand, because they are willingly blind to it.

    So I don't think we need to worry about being annihilated or dominated by rebellious machines. The enemy is us. The more likely scenario is that one group will use their obedient and highly intelligent machines to destroy the other group’s obedient and highly intelligent machines.

    296:

    Consider this scenario: super AI running on desktop machines circa 2060. Every hacker has one. Problems? The AI might lack evil motivations, but there are plenty of people to fill that gap.

    297:

    Newton knew he was right and fudged his experimental results. I think it was some Italians who showed this only a short time ago. Think what the world would be like if somebody had checked then. Well, maybe not that much different, but it would have set experimentation as a way of fact-checking back a long way.
    In the past the Japanese only followed the US. If we could make it work, they would use it without paying for it. Like with Fibercon cables. That's a reason why they did so well. We pay, they make. We were trying to make human-acting computers and so were they. When we stopped, so did they. Maybe they will change and not "use America as their icebreaker", as one of their government officials said, and do things on their own.
    Fundamentalist Christians come from ignorance of the King James Bible's English. Who knew it then? The pioneers were in the hills without men who knew old English. The priests stayed on the coast. RCs used Latin; it's dead and unchanging. A snake-oil salesman made up the 17th-century English meanings and sold them in books. Our fundamentalists are Christian heretics. And never had any give at all.

    298:

    @Dirk

    I think you misuse the term AI, at least as compared to the context people are talking about here.

    A "super AI" is not a tool, it's a person. A hacker having one on their desk would be exactly the same situation as a lesser hacker having a superior hacker sitting on their desk.

    They could use the superior hacker to hack something, but only if they could first persuade it to do so, and the AI super hacker would be equivalently superior at reasoning and moral philosophy, so persuading it to do bad things that it didn't want to would probably be a much harder task than just learning to be a better hacker yourself.

    Of course there may be a continuum of types of AI, and it may be that we could build a machine that can hack better than a human without being able to consider the moral implications. That would just be a tool though, and the solution would be to use comparable tools to fight people with such tools. Consider a super AI antivirus program running in every computer - problem solved.

    299:

    I thought you said such AIs have no motivations of their own? Once one has a copy, then hacking those motivations becomes attractive, especially for the military. Unless, of course, the AI is designed to be autonomous and motivated to remain so. However, that path has huge dangers. Alternatively, it becomes everyone's desktop nuclear deterrent.

    300:

    Dirk: The AI might lack evil motivations, but there are plenty of people to fill that gap.

    Sure, but there will be plenty of other hackers and governments/groups who will motivate their AIs to kick your AI's ass if it misbehaves. It's a matter of survival of whoever has the smarter AIs. It's a scary thing, but I am not worried about AIs rebelling on their own and killing everybody. If the AI is happy (it receives lots of "good robot" rewards), it will be obedient and submissive, just like dogs (i.e., very, very smart dogs).

    PS. I am personally looking forward to having my own robotic sushi chef and music tutor.

    301:

    I'm not so optimistic. At best there will be an AI arms race between corporations, between nations, to see who can create the most computing resource for their AIs and get them to upgrade themselves the fastest. The biggest spending will be military.

    302:

    Singularity proponents see the path forward as machine intelligence enhancing and eventually merging with human intelligence. I can travel from Detroit to Tokyo in 14 hours. I can use Google Earth to visualize Earth from space without leaving my living room. The expectation is that as we learn to interface technology with biology, we will expand our mental capabilities in the way we have used technology to transcend our physical limitations. Some level of this is inevitable. How fast it happens and how it plays out will teach us much about the nature of the mind, the brain, and the soul.

    303:

    Progressively augmenting the brain as a way to AI is probably the safest way to go.

    304:

    Charlie, if "The Physics of Immortality" makes you think Tipler is batshit crazy, try "The Physics of Christianity"!

    However, while I think most of "The Physics of Christianity" is really over the edge, the first part of the book has interesting arguments about interesting physics, and I certainly don't blame Tipler for trying to put together two aspects of his worldview that are important to him.

    305:

    This entry just got Pharyngulated, so there may be an increase in traffic.

    306:

    " economic libertarianism is based on the same reductionist view of human beings as rational economic actors as 19th century classical economics — a drastic over-simplification of human behaviour. Like Communism, Libertarianism is a superficially comprehensive theory of human behaviour that is based on flawed axioms and, if acted upon, would result in either failure or a hellishly unpleasant state of post-industrial feudalism."

    Perhaps, but all other ideas require the imposition of force by the majority (economic, numeric or powerful) on others. On what possible rational or moral basis do they (or you) claim the right to tell others what they have to do ostensibly for the benefit of others?

    Answer this and you'll have converted a libertarian to whatever worldview is compatible with your answer.

    307:

    You're still making the assumption that human beings are rational actors who make informed decisions based on perfect knowledge of their current circumstances and make sensible plans for their future.

    I'd rather see a system based on "soft" force that provides collective insurance for the common good, than a system that throws people away because they failed to foresee that they might be run over by a bus and spend 18 months unable to work while recovering, to pick a random example.

    Yes, I am a [conditional] believer in social democracy.

    308:

    I am also (like Charlie) a [conditional] believer in social democracy, and I agree with his point.

    I see only one shortcoming of democracy, but it is an important one: it fails to provide adequate protection to minorities, and it can degenerate into a dictatorship of a "moral majority" and the oppression of minorities.

    They say democracy is "two wolves and a lamb deciding, by majority vote, what to have for dinner."

    309:

    Democracy has other drawbacks. For example, short-termism: there's no incentive for elected legislators/governors to contemplate the consequences of projects that will not complete within their term in office. (Although my subsequent thought experiment about longevity treatments might bear on that particular problem ...)

    310:

    democracy is "two wolves and a lamb deciding, by majority vote, what to have for dinner."

    And "freedom is a well-armed sheep contesting that decision".

    Further, US Republicanism is the doctrine that the ideal citizen of a republic is a sheep with a machine gun.

    311:

    God isn't computer-literate???

    312:

    God is whatever you want. Think about what the fundies want God to be...and be afraid.

    313:

    Perhaps we should have a twenty-year elected House of Lords? Or possibly even replace-on-death elections.

    314:

    "I'm definitely not a libertarian: economic libertarianism is based on the same reductionist view of human beings as rational economic actors as 19th century classical economics — a drastic over-simplification of human behaviour."

    That's a straw-man argument. There are many and varied arguments for "economic libertarianism" (which is itself a phrase in dire need of definition), not only the idea that somehow people are unfailingly rational and therefore could never make a mistake. In fact, I can't think of anyone offhand who actually does espouse that particular argument for the "libertarian" position.

    315:

    In reply to HonestObserver above: I was being sarcastic, because I spend many hours every day, every year, every millennium working to bring about an AI Singularity. And now, Jeez Louise, there is all this discussion going on at so many websites trying to drag down the AI Singularity. It is something whooshing past you right now!

    316:

    I'd like to call your attention to this blog: http://www.paleofuture.com It has what the future was not.
    "Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote." - Benjamin Franklin. George Orwell wrote: "That rifle hanging on the wall of the working-class flat or labourer's cottage is the symbol of democracy." But that was when you were begging American hunters to give you their rifles. Wonder what happened to them.
    Not that long after the War of 1812 an American was on an English ship. After days of hearing about how bad democracy was, he made his own toast. It was something like: "Democracy is a raft. It never sinks, but your feet are always wet. Unlike those majestic ships of state."
    --- "Religion is what keeps the poor from murdering the rich." - Napoleon Bonaparte --- "The National Government will regard it as its first and foremost duty to revive in the nation the spirit of unity and cooperation. It will preserve and defend those basic principles on which our nation has been built. It regards Christianity as the foundation of our national morality, and the family as the basis of national life." - Adolf Hitler, "My New World Order", Proclamation to the German Nation, Berlin, February

    317:

    I think we have to be careful not to confuse "consciousness" and "intelligence." It may be impossible to define either concept with acceptable comprehensiveness or scientific rigor without introducing significant elements or degrees of arbitrariness. Also, it may be possible to entirely divorce the two concepts for practical considerations, and to create an entity as intelligent or more intelligent than most humans, but which does not require consciousness as it is commonly apprehended. I think that "consciousness" may be found to be so intimately conditioned, or required, by biology that it could almost be considered a "somatic" rather than "mental" process, or system of processes. But "intelligence" seems more mechanical, and dependent upon rules, methods, and efficiencies of operation analogous to those of its multifarious products.

    318:

    I think the last couple of centuries (should) have taught us that trying to build simple, efficient, principled governmental systems is a mug's game if you're hoping to get any sort of humane society as a result. What might work is to build in as many overrides, feedback circuits, and checks and balances as we can, and constantly monitor to see if they're being successfully gamed.

    To deal with the short-termitis we probably need different branches or agencies or something with different time-scales of operation built into them. Then the short-term and long-term views get to battle it out over how policies are modified over time. Unfortunately, this hasn't worked anywhere near as well as I would have liked in the US, where the Legislative and Administrative branches of government are fanatically short-term because of election cycles, and the Judicial branch as embodied in the Supreme Court, though the justices are given appointments for life, seems to have been captured by creeping corporatism just like everything else here.

    319:

    Interesting discovery here that applies to "Substrate matters" when discussing AI possibilities:

    http://www.sciencedaily.com/releases/2011/06/110623130736.htm (Brain-Like Computing a Step Closer to Reality)

    320:

    You've got that the wrong way round. I am making no assumptions about the rationality of any actor, whereas you are decreeing actors to be irrational, somehow exempting yourself from that group, and saying that you are rational on their behalf and will make decisions for their own good, using their own money and limiting their freedom, because you know best; and if they disagree, you want to use the power of the state to force them to your worldview.

    You also assume that there is no social safety net under libertarianism; this is false. There can be a charitable safety net and/or there can be a government one. It would very likely be lower than where you would like it, but that doesn't mean it's not there. As even the founding fathers noted, education is the best guarantee of freedom. And that would have to be funded somehow when parents opted not to.

    321:

    "the price of freedom is eternal vigilance. Whenever the people are well-informed, they can be trusted with their own government.”

    "Experience hath shewn, that even under the best forms of government those entrusted with power have, in time, and by slow operations, perverted it into tyranny." Thomas Jefferson

    322:

    @ 319 Shockley et al's invention of the transistor - 1948. This is the same stage ... so AI in 50 years?? 2060 - or sooner, if money is piled onto relevant research??

    323:

    I am making no assumptions about the rationality of any actor, whereas you are decreeing actors to be irrational, somehow exempting yourself from that group, and saying that you are rational on their behalf

    No I'm not; I'm pretty much as irrational as everyone else in here. (If I wasn't, I'd be a multi-millionaire on the basis of future developments I spotted but discounted the utility of over the past 20 years -- like not buying those AAPL shares I was mulling over in 1998, for example.)

    You're also making unbounded assumptions about how I'd like to run things, on the basis of your own fears.

    You also assume that there is no social safety net under libertarianism, this is false. There can be a charitable safety net

    They're trying to sell that one to us in the UK right now -- google on "Big Society" -- and it doesn't wash; the general level of charitable donations is less than 2% of what would be required to replace our basic social security net before you add in healthcare.

    Here in the UK we had a libertarian minarchist utopia for some decades in the 1830s-1880s. No income tax, workfare (in the form of poor houses) for the unemployed, state spending restricted to funding the military. The consequences were that the poor starved to death in the streets during economic down-turns, parliament had to evacuate London one summer due to the stench from the open sewer that was the Thames, and to this day elderly working class people in some communities have a deathly fear of crossing the threshold of their local hospital (a former poor house -- hint: the early Nazi concentration camps were run along much the same lines).

    There is a good reason we abandoned libertopia: it didn't work and it was inhumane.

    324:

    (b) Is bound to cause massive cognitive dissonance among those who cleave to faith in a religious afterlife, not just for themselves but for their loved ones.

    And if uploading never does work, is it evidence for an immortal soul? The whole issue is a non sequitur.

    Why do you assume uploading will only be done on those about to die? Should healthy, living people be uploaded AND still remain their conscious selves in their own bodies afterwards then it would show that uploading at best makes a mere copy of the person, no different in essence than recording their voice or taking their picture.

    325:

    No I'm not; I'm pretty much as irrational as everyone else in here.

    Indeed, but my point was that what you are saying is that we, as a group/government, can decide rationally what to spend people's money on whereas the individual themselves cannot. Or, if not rationally, then better for your given value of better.

    the general level of charitable donations is less than 2% of what would be required to replace our basic social security net before you add in healthcare.

    Charity is less because of taxation and the minimum standard of living that we provide. Would charity provide anything like the level of spending by the government? Probably not, but it would be much higher than 2% of the current level. Taxes would also be lower so people would be more inclined to give more. Also, your assumption here is that the only income the government has is income tax, there are many forms of government income that could be used for social issues, which I'll come onto...

    Here in the UK we had a libertarian minarchist utopia for some decades in the 1830s-1880s. No income tax, workfare (in the form of poor houses) for the unemployed, state spending restricted to funding the military.

    Massive false equivalence. The UK in the 19th century is not comparable in any meaningful way to the 21st century. The disparity in wealth and technology between the two times renders it absurd.

    If you want an example of how a country could have a social safety net without taxation or relying entirely on charity, then you simply have to think about ways the country can generate revenue without income tax. One example would be land taxes; another could be oil revenue - e.g. Norway has its oil revenues in a giant social fund. They only spend the income from that fund and leave the principal, allowing future generations to benefit from today's lucky find.

    But my initial point was that any non-libertarian world-view requires the use of force to make people behave "better" - for a given value of better - rather than try to fix "inhumane" conditions where they find them. From whence do you, or the state, assume the right to tell (force) me what to do if I am not harming anyone else?

    I'm not denying that conditions could be absolutely awful under a purely voluntary social contract, but as soon as you go down the road of forcing people, at gunpoint, to help others, where do you stop? As soon as you start doing things for people's own good you then start telling them things they can't do, for their own good, and that state control of behaviour is the inevitable consequence of non-libertarian government.

    What I would like is The Curious Republic of Gondour but with charitable giving published and deemed at least as honourable as academic achievement and/or votes.

    326:

    Charles, I truly enjoyed "Accelerando" and your other works, all of which strike me as reflective of an out-of-the-box thinker. Which is why I am puzzled by your anti-singularity stance. Personally, I find that your argument against humanity eventually boosting ourselves into an electronic "cloud" existence just doesn't match my own independent observations of the pace of technical advancement, which by many metrics is on an exponential upswing, especially in information technology. Further, I feel your argument overlooks why natural, Earth-based evolution has been slow: our biological evolution operates, for survival, to adapt to changes in our natural environment, which are mostly slow, extraordinarily slow in many cases, as noted by the extremely long era of the dinosaur in a largely stable planetary environment. Perhaps you haven't factored in the immense technological jump that will occur when quantum computing and room-temperature superconductivity arrive. I see humans having the choice to go virtual within 50 years.

    327:

    Keddaw - even before Charlie reams you ..... The UK in the 1830-90 period had taxes that were LOWER than today's USA's. The USA, especially for the rich, has low taxes. So where is all this charitable funding for the needy/poor/ill then?

    Put up, or bloody well shut up. Your "argument" is a complete straw man with no basis in fact, never mind rationality.

    328:

    The whole notion of the Singularity is based on the naive and rather bizarre assumption that cheap electricity will always be available, that there's an infinite amount of the elements necessary to infinitely repair hardware, and that the machines upon which the cloud depend can somehow be made to last forever.

    So go ahead, let the believers upload their minds and become "immortal." But sooner or later (and probably sooner, considering the advancing energy crises), the lights will go out. And without electricity, all the hardware in the world on which the cloud depends is just so much hazardous waste. Meanwhile, those who didn't have the bucks to jump on the bandwagon will keep puttering along in their pesky physical bodies.

    329:

    Snippets from the relevant Wikipedia articles:- "An income tax was levied in Britain by William Pitt the Younger in his budget of December 1798, to pay for weapons and equipment in preparation for the Napoleonic wars. Pitt's new graduated income tax began at a levy of 2d in the pound (0.8333%) on annual incomes over £60 and increased up to a maximum of 2s in the pound (10%) on incomes of over £200 (£170,542 in 2007). Pitt hoped that the new income tax would raise £10 million (£8,527,100,000 in 2007), but actual receipts for 1799 totaled just over £6 million."

    "The Income Tax Act 1842 (citation 5 & 6 Vict c. 35) was an Act of the Parliament of the United Kingdom, passed under the government of Robert Peel, which re-introduced an income tax in Britain, at the rate of 7 pence (2.9%, there then being 240 pence in the pound) in the pound on all annual incomes greater than £150 (£116,000.00 in 2008). ".

    330:

    This counter-argument is based on the equally naive and bizarre assumption that, given the will to do so, elemental metal and silicon cannot be effectively recycled indefinitely, and that effectively infinite power is not available to an entity which does not require air, water or organics for survival, and can happily exist in vacuum in orbit.

    331:

    I am tired of arguing with libertarians. Been doing it for decades; there's always another one just round the corner. Makes me wish Leninism was still in vogue, just so I had some variety in my diet.

    It seems to me that a really major problem lots of libertarians have is that they assume charity will take care of the unemployed or underemployed, because they look at the headline unemployment rate and think "7% ... we can handle that", when in fact the full employment rate is much lower than (100% - [registered unemployed %]).

    For example, ILO figures suggest a global employment rate of 61.3% for members of the work force aged 16-65 worldwide; 54.9% in the developed economies (including the USA and EU). However, this includes a lot of people who are working part-time or below their level of abilities.

    It's very hard to get accurate estimates for true employment, but I've seen suggestions that if you assess employment across the population as a whole (including children, pensioners, people with chronic illnesses, prisoners, students, and so on) you get a figure of around 30-35% who are fully employed -- in the USA or the EU -- although there's a lot of churn between people in different segments of that population. (e.g. someone who's worked full-time for life becomes a pensioner, or a long-term unemployed person gets a part-time job.)

    Charitable giving, be it 2% or 20% of income, isn't going to make a dent in supporting 50-70% of our population.
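
    The back-of-envelope version of that, with deliberately round, invented numbers:

        # If half the population must be supported at 30% of the average
        # income, the working half would have to give away 30% of theirs --
        # an order of magnitude beyond typical charitable giving.
        dependents, support_level = 0.5, 0.3
        required = dependents * support_level / (1 - dependents)
        print(f"{required:.0%} of earners' income")   # -> 30%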

    332:

    Greg Tingey, you are making completely irrelevant points.

    You simply cannot compare a country in the 19th century with one today and expect to get any meaningful results.

    My point was simply: all non-libertarian forms of government require (or at least allow) the use of force to make people do/buy things that may be against their will. You can wrap it up in pragmatism, justice or any number of fancy words, and I'll possibly agree with doing it, but to claim that libertarianism is based on flawed axioms is just plain wrong. Charlie thinks libertarianism assumes a rational economic actor, whereas it really just assumes an autonomous individual. Whether the outcome would be beneficial or not is debatable, but to complain it fails in the same way communism does (e.g. ignoring the free rider problem and fundamentally misunderstanding human nature) is false. There is nothing stopping a society socially shunning people who don't donate enough to charity; that adds pressure, but not force, to do the right thing. Not doing the right thing is not the same as doing the wrong thing, and only the latter justifies the use of force, in my opinion.

    333:

    Let's try your "There is nothing stopping a society socially shunning people who don't donate enough to charity" argument on Communism - There is nothing stopping a society from socially shunning people who try to accumulate material goods beyond their actual needs. Hmmmm; that doesn't seem to work too well, so why does the libertarian version of the argument have to do so?

    334:

    Keddaw, I was NOT making a false comparison - see paws4thot's comments on tax rates, and my own. Well? How is "it" paid for?

    And, your own condemnation out of your own mouth: "all non-libertarian forms of government require (or at least allow) the use of force to make people do/buy things that may be against their will." So, how do you prevent such simple and criminal frauds as false measures (like short measure in a pint of BEER?) or adulteration of foodstuffs WITHOUT "threats" (jail/fines)? Never mind other generally-agreed crimes such as theft and murder and violence? .... SOME minimum state is necessary, it may be unfortunate, but it is a fact. As Charlie says, you are arguing in the same false way as Marxists and other religious nutters, with no basis in either fact or observation.

    Be it noted that I'm considerably to the "right" and sometimes considerably more "libertarian" than Charlie (note), but I still think you're off your head. Come on, provide a workable solution, or, like I said, shut up.

    (Note: for instance, I really think the solution to "drugs" is ... legalise the lot and tax, and demand purity standards, as we do for other foods and drinks. But - again, note - this requires state regulation.)

    And why do we have regulation? Because there are always some bastards who can't be trusted, unfortunately. What do you propose to do about these people?

    335:

    (Note: for instance, I really think the solution to "drugs" is ... legalise the lot and tax, and demand purity standards, as we do for other foods and drinks. But - again, note - this requires state regulation.)

    Seconded; AFAICS the only long-term losers are the drug barons down to middle level pushers who can and do fund their own habits out of their income from street sales.

    The growers get a legal market; the labs/refiners (or whatever) get H&S standards; the users get a product that won't poison them, and the whole goes from being a law enforcement cost on the state to being a revenue stream for it.

    336:

    ... the whole goes from being a law enforcement cost on the state to being a revenue stream for it.

    Actually, the war on drugs is a revenue stream for the state. And a Keynesian stimulus package for the police and prison systems.

    Personally, I view this as reprehensible and think we'd be better off spending that stimulus money on something that produces tangible benefits. But it's hard to argue that ending the war on drugs wouldn't result in unemployment ... not only for gangsters, but for cops and prison officers.

    337:

    I draw a strong distinction between the Singularity (an acceleration in the rate of technological change that produces a future that is fundamentally unknowable from this side) and the Rapture of the Nerds (everyone uploads into digital heaven). The distinction is like the difference between UFOs (it's flying, and we don't know what it is) and flying saucers (little green men).

    If you think it's aliens, it's not a UFO, 'cos it's not unidentified!

    Almost by definition anything you can predict isn't the Singularity.

    338:

    Actually, the war on drugs is a revenue stream for the state. And a Keynesian stimulus package for the police and prison systems.

    Really?? Are you claiming that building and manning prisons, and employing police officers and court officials isn't a cost to the state?

    I'll give you that we are going to cause unemployment for those criminals and law enforcement staff who can't/won't re-train, but the only way the "war on drugs" isn't a cost is if the value of confiscated assets is more than the cost of confiscating them, and of trying and confining their owners.

    339:

    Greg, you absolutely were making a false comparison and completely avoiding my rather simple point: libertarianism is the only form of government that does not use force on anyone not interfering with other people's rights.

    Hence, fraudulent weights and measures would be criminalised and the perpetrator sued and/or imprisoned - not to mention their reputation being shot.

    SOME minimum state is necessary, it may be unfortunate, but it is a fact.

    Which was never denied! Even a libertarian state has borders and courts (and, usually, police.)

    [food, drink, drug regulation] But - again, note this requires state regulation.

    Bear in mind I am a pragmatist more than an ideologue, but there are those who argue that regulation could come from an independent standards body that is funded by consumers. There could be competing ones; however, those that do best will win out and be trusted. Any supplier not having the approval of such a body would struggle to sell their wares. No need for state involvement. It could easily be argued that a state regulator is better, but it cannot be argued that a state regulator is the only solution.

    Because there are always some bastards who can't be trusted, unfortunately. What do you propose to do about these people?

    Not trust them. Using one or more consumer regulators I can become informed about an individual's or company's reputation and make a decision whether to trade with them or not. Obviously I'd rather have everyone I interact with be scrupulously honest (either by nature or by force of the state), but how much state interference are you willing to put up with? I'd also like the state to provide me with a Ferrari and a mansion, but that's not going to happen. Sometimes you have to take responsibility for yourself and your actions.

    340:

    Keddaw, you just admitted: "Even a libertarian state has borders and courts (and, usually, police.)"

    Which is USING FORCE - which you say is not "Libertarian". Make your bloody mind up!

    341:

    It's a stimulus package for state spending. Legalize drugs and you don't need as many cops and prisons.

    You might want to note that the US war on drugs really got started in the late 1920s/early 1930s as Prohibition was winding down. Some reading on Harry Anslinger is in order ...

    342:

    Greg, the immediate way to stop someone infringing on someone else's rights is to use force. What libertarians think is that force should only be used in those circumstances. All other forms of government necessarily entail using, or threatening to use, force to enable some involuntary transaction between an individual and the state when that individual is not impinging on any other person's rights by not engaging in the transaction.

    Basically, force should only be used to protect people's rights, whether that's by the state or by individuals makes no difference. Forcing people to pay income taxes to give them health insurance is not protecting a right. Using oil money to fund a health insurance scheme for all citizens is perfectly OK though.

    343:

    I think the strongest argument against the singularity is simply that no one is perfect.

    Consider that the singularity requires an entity to iteratively mess with its own thought processes. Now think about what happens when you mess with your own thought processes. For example, do you always make the best decisions after downing a few at a beer garden? Now imagine messing with your own thought processes again while in this altered state. Are you likely to make the best decisions?

    It seems to me that the paths to failure so vastly outnumber the paths to success that any entity attempting to rapidly evolve its own consciousness is much more likely to go mad than to become hyper-intelligent.

    One can imagine more complex scenarios, e.g. two AIs who take turns applying modifications to each other, where the AI making a modification is allowed to reverse it if the mod is later deemed "bad". That judgement process is bound to be full of subtle cases which are difficult to decide. Given that each of the entities involved presumably has the will to survive, the recipient of a mod likely won't always agree with the assessment of its originator. Again, there are many more paths to madness and paranoia than to any form of sane, rational hyperfunction.
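
    To make that feedback loop concrete, here is a toy Monte Carlo sketch in Python. It is purely illustrative: the scalar "competence", the harmful-on-average distribution of candidate mods, and the noisier-judgement rule are all invented parameters, not a model of any real system.

        import random

        def run(steps=500, seed=0):
            rng = random.Random(seed)
            c = 1.0  # "competence": the entity's skill at judging its own mods
            for _ in range(steps):
                effect = rng.gauss(-0.01, 0.05)              # assume most candidate mods are harmful
                noise = rng.gauss(0.0, 0.05 / max(c, 0.05))  # a damaged judge is a noisier judge
                if effect + noise > 0:                       # the mod *looks* beneficial, so apply it
                    c = max(c + effect, 0.0)
            return c

        results = [run(seed=i) for i in range(1000)]
        print(sum(r > 1.0 for r in results), "of 1000 runs ended up more capable than they started")
        print(sum(r < 0.5 for r in results), "of 1000 runs lost most of their judgement")

    The only point of the sketch is the asymmetry: runs that drift downward get worse at noticing they are getting worse, so collapse is self-reinforcing in a way that improvement is not.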

    344:

    Greg, the immediate way to stop someone infringing on someone else's rights is to use force.

    How charmingly naive!

    Tell me how you propose to use force to stop a corporation from imposing binding arbitration in an unfavourable venue upon you in a click-through license that you have to agree to in order to use their product ... which in turn you have to install and use in order to get at the funds you deposited in your bank account (because they're a bank)?

    Tell me how you propose to use force to prevent corporate lobbyists from seeking rent via tax concessions granted by legislative fiat?

    Tell me how --

    Ah, forget it.

    Bluntly: your assumption that force is the backstop for equitable treatment breaks when it is realized that we live in a complex society and some of the other actors within that society have much deeper pockets and far more resources than any normal individual.

    345:

    Consider the following hypothesis: people are irrational because the world is too complicated to be rational about all the time. It's just too tiring. This would hold that people can be rational about their main area of focus, but not enough to cover everything else. So the distribution of intellectual labour becomes important, as does trying to prevent excessive cheating of, or creation of non-rational behaviour in, people through exploitation of their cognitive limits.

    346:
    Some reading on Harry Anslinger is in order ...

    Also read up on J. Edgar Hoover's early career. There's a good case to be made that the entire anti drug / racketeering program arose out of a competition between Anslinger and Hoover to see who could create and control a Federal police force.

    347:

    http://www.engadget.com/2011/06/13/moneta-onyx-phase-change-memory-prototype-can-write-some-data-7x/

    Sooner than Shockley's transistor... The benefits of replacing SSDs with phase-change memory will drive fast adoption. Making neuronal circuitry will be a follow-on, just as with memristor technology. Items with a fast return on the dollar will get the first focus.

    348:

    Aww, how charmingly unimaginative. You assume we are where we are and any change from here relates entirely to the current situation - in the country you happen to be in.

  • Don't use that bank.

  • Work out how government can function without undue/direct corporate influence. i.e. make votes infinitely more important than cash!

  • " your assumption that force is the backstop for equitable treatment breaks when it is realized that we live in a complex society"

    Don't give a feck. At no point has anyone ever given a compelling reason why person A a hundred miles away should be compelled by the force of the majority to help person B, but person C, two thousand miles away, should not. Fuck it, invade Saudi Arabia and use their resources to help the weakest in the world. If you're gonna be moral about it at least be consistent and chuck your parochial bullshit in the bin!

    349:

    keddaw You have just re-defined yourself as a religious believer, as incapable of new thought as any communist.

    Re-read what YOU wrote.

    "Don't use that bank" - when it is a cartel?

    Uh?

    350:

    And when true AI kicks in, just about everyone will be unemployed and unemployable. As mentioned in certain SF novels, Humans become surplus to economic requirements.

    351:

    You've just seen an article in Advanced Materials and you're talking about fast adoption? Are you aware of just how much lab work it's going to take before a commercial fabricator even starts to order new fab equipment to make prototypes for process testing? I guarantee it won't be less than 6 or 7 years before anyone sees any real circuits, and probably a lot more.

    In my career I've seen announcements of scores of new technologies that were going to replace everything and be so much better. Remember bubble-memory? V-trench capacitor memory? CMOS on Sapphire? Gallium arsenide? E-Beam fabrication? Ovonic amorphous silicon memory? I could go on and on. Phase-change memory may indeed be the bees' knees, but it will have to go through a gauntlet of experimentation and testing and lots of R&D before anyone uses it in a commercial product, if anyone ever does.

    If you want to know why it's so hard for really new technologies to break in, take a look at the cost of fab plants. Each generation of plant costs twice as much as the previous generation (that's the dark side of Moore's Law), and the current generation is up to about US $10 billion. Lead time from ordering the equipment to having the plant in operation is about 3-4 years (actual planning begins well before that, but that's when you know what the specs of the lithography processes have to be).
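
    To spell out the arithmetic of that doubling (a minimal sketch: the US$10 billion starting point is from above, while the two-year generation cadence is my own assumption):

        cost_bn, year = 10.0, 2011   # current-generation fab cost (from above) and base year
        for gen in range(1, 6):
            cost_bn *= 2             # "each generation costs twice as much"
            year += 2                # assumed cadence: roughly one process generation every two years
            print("generation +%d (~%d): ~US$%.0fbn per fab" % (gen, year, cost_bn))

    Five generations out, on those assumptions, you are looking at a ~US$320 billion fab, which is why a novel memory technology can't simply buy its way onto a leading-edge process.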

    352:

    Tim, who cares if people are rational? My question is are they autonomous?

    Either people are free to do as they want without impinging on others' rights, or you want to (or want someone to) limit their freedom for their own good. I come down on the side of freedom, where are you?

    353:

    Greg, I never claimed libertarianism was the cure for all of society's ills, but the arguments you and Charlie are putting forth are not terribly good ones. You appear to have an underlying assumption that things are the way they are and can be no other way. E.g. banks in a cartel impose terrible terms for access to their services - so take your money and go to a competitor. No competition? Start your own bank. You can't be the only one that feels this way, so a bunch of people get together and start their own savings and loan company. And this is an example of the limits of your thinking on libertarianism: you assume banks can only start with state backing - not so. Obviously fractional reserve lending would not be possible, but some people consider that theft anyway.

    Charlie makes a good point about some people having deeper pockets and being able to influence politicians but that's always going to be a threat so the less power government can wield the safer we all are. And, ultimately, when legislators start passing those laws it isn't a libertarian state any more.

    If you really want to criticise libertarianism then think about the actual problems that would occur, not the current problems that it won't solve: potential aggregation of wealth; discrimination against people/groups/races; lack of investment due to no intellectual property; infrastructure only between profitable hubs; overcrowding and massive property prices due to concentration of services in populous areas; moral grandstanding by politicians/pastors; social safety net available mainly at the whims of the rich; need for citizens to stay massively informed about companies, reviewers (customer regulators) and government - which could also be a positive; potential for injustice due to lack of police powers; etc. etc.

    354:

    Keddaw: "Charlie makes a good point about some people having deeper pockets and being able to influence politicians but that's always going to be a threat so the less power government can wield the safer we all are"

    When the "big corporations" can ALWAYS outbid, then what utility will there be for "competing|" regulation/assement bodies - they will ALL be bought by the corrupt rich (as oppsed to the uncorrupt rich, that is).

    You assume that individuals will be able to withstand the big corporations and cartels, when, even now, there is an incredible uphill struggle against these forces.

    Incidentally, have you read Jared Diamond's "Collapse"? No, that was not a non sequitur.

    355:

    You assume that individuals will be able to withstand the big corporations and cartels, when, even now, there is an incredible uphill struggle against these forces.

    Even now!

    The problem as it currently stands is that these corporations have used the power of the state to entrench their position using laws, patents, trade restrictions, labour laws, and all the other levers that they can. This problem is infinitely worse now than it would be under a libertarian system.

    But the solutions are simple, it's just that people are idiots - if Wal-Mart moves into your area and you don't like it then don't shop there. If enough people keep using small shops then Wal-Mart goes away. Unfortunately people don't because they like cheap prices and then they complain about lack of choice and the death of main street. It's their fault.

    If a big company buys a consumer regulator then its reputation is shot and its value is zero. Also, I would assume there is a place for a mutual regulator, owned by its customers, so they would have to agree to any sale - and then it would be their fault, but their choice.

    I haven't read Collapse, but wiki is quite revealing. What's that in relation to?

    356:

    Greg, on the regulator issue - when you buy/build a PC do you read reviews? Do you make an informed opinion on your own? Do you want the government to guarantee the independence and validity of each review? Do you want the government to do the review so that there is no industry influence (because corporations never influence government at the expense of the public!)?

    Are there things we want government to regulate? Sure, things that are expensive/difficult to measure such as pollution, nuclear plants, etc. etc. But the vast majority of things are easily reviewed by consumer groups that would incentivise most businesses to produce goods/services that got good reviews, didn't get them sued and were profitable. Strong courts are quite important too...

    357: 341 & 346

    I see your point, but (quite aside from the FBI being rather more than just an inter-state anti-booze/drugs force; you might want to check up on things like kidnapping and inter-state flight for example) you aren't addressing the point of whether or not the cost/benefit of running the anti-drugs "justice system" is more or less than the income that might reasonably be derived from duty on pharma-licenced recreational drugs and stimulus from creating and running the quality testing side of it.

    I'm not saying that pharma licencing will derive more income/stimulus than "war on drugs"; just that it can actually derive an income stream directly, and probably offers a benefit for Joe Public in terms of reduced crime etc.

    358:

    But the vast majority of things are easily reviewed by consumer groups that would incentivise most businesses to produce goods/services that got good reviews, didn't get them sued and were profitable. Strong courts are quite important too...

    Unfortunately I've done product reviews for magazines and I know how this system works. Or doesn't.

    Firstly, reviewers are generally beholden to the supplier of products for review. You want to do a comparative review of 24 laptops? That will cost you 24 laptops ... or you can get the vendors to loan you test machines. But there's always an unspoken quid pro quo with that, which is that if you mercilessly expose the machine's weaknesses it will be the last loaner they send you, or your employer.

    Oh, and the tech press is not funded by the customers who buy the magazines; it's funded by the advertisers. (The punters paying newsstand price or an annual subscription were given a huge subsidy in order to raise a given magazine's ABC audited circulation figures, because a higher ABC rating meant they could charge more for ads. My estimate is that Dennis Publishing's Computer Shopper in the early-to-mid 00's was taking in maybe £300K from punters and around £2.5M from advertisers per issue. That was fairly typical for a high-end glossy newsstand computer magazine in the UK.)

    Secondly, consumer organizations ... we've got a fairly strong one here in the UK that does product reviews. And it doesn't have to play nice the way advertising-funded consumer magazines do, because it uses its subscription base to buy the review kit anonymously. I'd think a lot more of them if I hadn't read their literature in a field I happen to know a lot about; it was sophomoric at best and unintentionally misleading at worst.

    Thirdly, the courts ... have you ever been threatened with a libel lawsuit for filing an honest review? It's a good thing they were bluffing, because even retaining a solicitor to sort out an out-of-court settlement involving an apology and no pay-off would cost more than half a year's gross income. Lawsuits are expensive, and most product reviewers are freelance workers (precisely because the publications who run reviews want to firewall themselves).

    And that's before we get into the territory of crowdsourced "review"/"comparison" websites shaking down companies for kickbacks to weed out adverse reviews, for example, or other shady practices on both sides.

    Bluntly, what you're proposing just won't fly.

    359:

    But Charlie, while I understand your arguments, your point is that it won't fly in a different situation because it doesn't fly now. We don't know that consumers won't band together; we can't tell what will happen unless it's tried, and it isn't tried because people are lazy, because the nanny state takes care of the major issues (e.g. ads that outright lie) and leaves us to sort through the detritus to see small differences, and who can really be bothered?

    There is no incentive for the majority of customers to band together to regulate the producers, ironically the only two places I can think of are business to business and the drugs trade - places where the government rarely gets involved in a useful way. And possibly financial IPOs... And house surveys. And second hand car checks. (Most big purchases I guess.)

    Incidentally, one of the reasons Dennis was in trouble, apart from the internet, was that their reviews had lost integrity so the magazine lost eyeballs and hence advertisers. I'm not saying many of their magazines would be viable anyway, but in an unregulated field consumers are much more likely to pay for unbiased reports.

    360:

    keddaw You are still going to need law and courts and ENFORCEMENT - or so you say. If there is "no state" - where is the money for this going to come from?

    As Charlie didn't quite say, you are an almost perfect mirror-image of a communist, and like the latter, no amount of rational argument, or even practical experience (like: we've TRIED THIS AND IT DOESN'T WORK) seems to move you.

    Please switch your brain to ON, and your semi-religious prejudices to OFF. Then, we might get somewhere ....

    361:

    [whisper]You mean he's a mirror of a state capitalist (the state owns the means of production), not a communist. Communism (the people own the means of production) has worked in the limited cases where it has been tried.[/end]

    362:

    Greg, I really don't know where you got the idea there should be no state or no taxes?

    When one goes to a municipal golf course one pays for the use of it. When one used the Royal Mail/BP/BT (pre-privatisation) one used a service provided by the state, ideally profit-making, and that profit could be used for courts/police/whatever. Not that I am advocating the state be involved in business, except where it can make a profit or provide a valuable service the private sector isn't providing, but there are potential revenue streams like this, or selling off spectrum for mobile phones, pollution permits, etc. Nowhere near as much as they have currently, but the government would be doing a hell of a lot less under a libertarian state.

    No state = anarchy. I am many things but an anarchist is certainly not one of them.

    Libertarians require the rule of law, they want a constitution limiting the power of government and enumerating the citizens' rights, they want a powerful court system in order to protect persons and property and to redress any negative externalities (otherwise you suffer the tragedy of the commons.)

    Leaving aside this straw-man stateless position, your second point about libertarianism having been tried and not working is simply not true. The Industrial Revolution period in the UK is simply not amenable to comparison with 21st century UK. When the average lifespan was mid-30s, average wealth was not enough to afford meat each day, violent crime was more common than in Johannesburg and detection rates were appalling, you simply cannot say any political ideology tried under those circumstances would have the same outcome under today's circumstances. The massive transition of people from agriculture in the country to the factories in the cities, and the inevitable slums and terrible conditions, pre-electricity and sewerage, is fundamentally not a libertarian construct like you would have in the 21st century. Only people with property could vote, women couldn't vote, no externalities were ever considered - leading to poor working conditions, deaths and pollution. How on earth is that libertarian?

    363:

    "Libertarians require the rule of law, they want a constitution limiting the power of government and enumerating the citizens' rights, they want a powerful court system in order to protect persons and property and to redress any negative externalities (otherwise you suffer the tragedy of the commons.)"

    Now THAT I can agree with. One very slight problem - how? Your suggested methods won't and can't work. All that will happen is that we will get an even worse version of corporate USA - followed by a complete implosion of the state.

    I suspect we are even going to get to watch it happen, immediately after the USA election of 2016, with a Tea Party president.

    It will be REALLY scary

    364:

    I've just been trundling about on the BBC news website, and they're speculating about a teabagger winning in 2012.

    365:

    ADMINISTRATIVE NOTICE

    I am sick and tired of the libertarianism shtick, and it is DERAILING THE DISCUSSION (singularity, arguments pro and anti).

    FURTHER POSTINGS ON THE SUBJECT OF LIBERTARIANISM ON THIS TOPIC WILL BE DELETED.

    366:

    Do we (the Computer Science / Cognitive Science community) have any evidence to indicate that there's something special about mammalian neurons that can support consciousness, something that can't be done with, say, a device made of steel and silicon?

    I've seen (and been unimpressed) by Roger Penrose's claims concerning quantum phenomena in microtubules. Are there any other claims of this sort that have evidence to support them?

    367:

    Are there any other claims of this sort that have evidence to support them?

    Not that I'm aware of.

    368:

    If uploads are possible:

    In a just society an inert upload (i.e., one that has been recorded, but is not yet conscious) will have rights similar to a very late term fetus.

    A conscious upload will have rights similar to a meat person.

    People that create uploads will have obligations similar to the parents of meat children.

    If Turing-capable A.I. is possible, they will have rights similar to uploads.

    369:

    Charlie @ 365 The two arguments are completely separate.

    "The bigS" - possible, even probable, especially if parallel/interlinked/cross-connected computing becomes available and cheap. "Uploading" - very, very, very-to vanishingly unlikely, because of the information-handling requirement in such a short time - see my earlier comment on capturing all the simultaneous vector states.

    Augmented/enhanced/assisted "humanity" with progressively more "machine" interfacing PLUS enhanced computing abilities (as above) ... does this equal the bigS? It is NOT Nerdvana, but it is possibly a singularity.

    Discuss?

    370:

    Slice and dice uploading can be done as slowly as you like.

    371:

    @ 370 Apart from the "moral" considerations ... "slice-and-dice" uploading is as useless as serial uploading from a living subject, because the states at the end of the upload are not those at the start. And how do you PRESERVE the states of a mind/brain during an s&d "upload" anyway? Oops.

    372:

    People who have fallen into frozen lakes in Scandinavia, and have flatlined, have successfully "rebooted" with only a small loss of memory. Slice and dice is more of a cryonics-revival type of tech. Exactly how effective it is remains to be seen, but we should know in the near future from tests on small brains, e.g. worms.

    373:

    But that's got nothing to do with uploading; you haven't recorded the brain state and used it to instantiate a new individual in a new substrate. You've just stopped and restarted brain activity from a state stored implicitly in the original brain. Uploading is quite a different problem with a different (and much more complicated) set of requirements.

    The idea behind slice and dice, as originally explained by Moravec, is that you duplicate each synapse, creating an artificial replacement that is dynamically identical as regards neural operation, and insert it in place of the original [1]. So the brain will, supposedly, continue to operate precisely as if it hadn't had a piece cut out and replaced. Eventually, Moravec claims, the entire brain will have been replaced, and the replacements will both emulate the brain and contain the state in some recordable fashion. ISTM that this statement requires a whole host of unproven assumptions about the way the brain works.

    [1] I think Moravec originally talked about replacing neurons rather than synapses, but I don't think that's reasonable, given that on average each neuron is connected to about 10,000 synapses, which can be long distances (as much as a meter) apart. Also, there are multiple kinds of synapses, depending on which parts of the two neurons are connected, so the synapse would appear to be the natural unit of uploading.
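
    For a sense of scale, here is a back-of-envelope sketch of what synapse-level replacement implies, using the ~10,000-synapses-per-neuron figure above; the neuron count and the bytes-per-synapse figure are round-number guesses of mine:

        neurons = 9e10               # ~90 billion neurons: a commonly quoted round figure
        synapses = neurons * 1e4     # average fan-out of ~10,000 synapses per neuron (above)
        bytes_per_synapse = 100      # pure guess: type, location, weight, dynamics
        print("synapses to replace: %.1e" % synapses)
        print("state to record: ~%.0f petabytes" % (synapses * bytes_per_synapse / 1e15))

    And ~90 petabytes is only the static snapshot; each of those 9e14 replacement synapses would also have to be fabricated, placed and verified in situ.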

    374:

    Anders Sandberg has an interesting paper on Whole Brain Emulation

    375:

    re Tipler.

    There are certain mainstream conferences where Tipler is an invited speaker specifically to wind up a Certain Eminent British Physicist who can't stand the sight of him.

    (not that Tipler knows that, of course. That would spoil the fun).

    There are at least two kinds of bad physicists - the first you turn up to their talks, and demolish them in the Q&A. The second category, you don't even turn up.

    376:

    Your argument against emergent human-equivalent intelligence is the part of this I'm most interested in. You make a plausible claim: that the only path to sentience is via the mechanism of uncontrolled fitness-selected evolutionary algorithms. Thus, the singularity won't happen, because humans won't implement that process; there's no incentive for them to do so. Humans are highly intentional, so the products of human intelligence will be too controlled to result in that kind of spontaneous emergence.

    However, it doesn't entirely convince me, because I think that there are higher-level mechanisms for emergence than these kinds of biological processes. I think radically divergent phenomena, like a new form of "sentience" (or at least a pattern that is self-sustaining and self-improving) can probably happen by other means. Lots of unpredictable higher-level patterns emerge from the interactions of environmental, technological, and cultural artifacts. Who's to say that one of these patterns won't have "self-preservation" and/or "sentience" as an incidental characteristic? It seems to me that fitness-based selection isn't the only path to this outcome.

    Anticipating the Singularity isn't just acknowledging that there can be something with human-equivalent-but-faster capabilities. Rather, it means putting faith in some sort of radical emergence, a la Heidegger's "unconcealment," that's only become possible in the current age of technology and optimization of complexity.

    377:

    There seems to be another option that's unexplored here. As we continue to improve our interfaces to machines, the line between man and machine becomes ever more blurry.

    We already rely on our smart phones and computers to do a lot of things that would normally have to be done by our brains. Things like calendar reminders, GPS directions, etc.

    Advances in BCI may allow for machines to interface with the brain at a subconscious level, so instead of checking my phone for my next appointment some system that interfaces with the brain will inject that thought into my stream of consciousness. If I want to learn a new fact, a system will pick up on that.

    As we integrate more and more hardware with our minds, the original meat portion of our consciousness might no longer be the dominant one.

    379:

    hmmmm... i think that saying that consciousness can only come about as a result of evolutionary adaptation is questionable. that's like saying that the only way bipedal locomotion can come about is as a direct result of evolution and THEREFORE, we could never replicate it artificially in something like a robot.

    sure, that is the way consciousness (and bipedal locomotion) came about for us. but just as we can artificially implement bipedal locomotion (or a whole host of other evolution-emergent phenomena), why couldn't we artificially implement consciousness?

    why couldn't it be that human beings get a grasp on the underlying mechanisms of consciousness and then design it onto something that was not previously alive?

    yeah, just as we can artificially recreate an evolutionarily conceived heart - if there is no mysticism involved, i think it is inevitable that we will create an artificial consciousness.

    as for the ethics of it, we'll nail that down piecemeal. turning off a human equivalent ai is one thing. rebooting a mouse equivalent ai is another.

    we're not gonna start with equals or gods. we're gonna start with mice.

    finally, many of the attributes of the singularity seem hinged on the premise that all of human consciousness and experience is just hardware and software. no more.

    and if that is so, we should be able to eventually replicate that complexity artificially. we get a handle on our own workings en toto and become the masters of our own evolutionary future.

    if there is no mysticism or supernatural, again, i can't see how this WON'T eventually, ineluctably, happen.

    380:

    oh, and as for our upload into cyberspace...

    sure, we may be highly inelegant and slow compared to other artificial and "pure" algorithms and beings but it wouldn't necessarily be apparent to US.

    the simulation of the mind after all would almost certainly take more clock cycles than a simulation of the meat so the meat itself should pose no APPARENT slowness for us.

    the experience would be less like we're slow and shambling and more like the world around us (unhampered by a ludicrously slow simulation of meat) is zipping by maddeningly quick.
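
    a quick toy calculation of that claim, with both op counts invented purely for illustration:

        mind_ops = 1e18   # assumed ops per subjective second to run the mind
        body_ops = 1e15   # assumed ops per subjective second to run the body sim
        print("body sim overhead: %.2f%% of the total budget" % (100 * body_ops / (mind_ops + body_ops)))

    so on any assumption where the mind dominates the compute budget, keeping the meat sim around costs next to nothing; the apparent speed difference all comes from the world outside running on faster hardware.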

    hey, then we really could have vinge-ian "slow zones" for "legacy programs" like humans so that we're not surrounded by zipping blurs.

    also, humans are pretty adaptable. i would imagine that after a bit of time spent in cyberspace with pretty accurate body sims, we'll be able to start stripping down to just essential stuff and even fancy swapping "chassis" for different experiences... who wouldn't want to try being a hypersonic flying neon dolphin?

    381:

    Ted Chiang's Life Cycle of Software Objects (currently up for a Best Novella Hugo) makes an interesting point: humanlike artificial intelligence is unlikely to take off, economically speaking, precisely because it's, well, humanlike. The R+D that goes into creating it isn't so well spent if what you end up with is more an employee than a tool, outside of certain limited fields. It's an argument Douglas Hofstadter has made too: true AI is in the end going to be very much like NI, that is, unpredictable, emotional, and not directly aware of processes below the conscious level, among other things, and thus isn't likely to be the ideal rational, infallible omniscient being we SF fans hope for.

    382:

    Oops, mod, mind closing my hyperlink tag for me? Sorry about that.

    383:
    *headdesk*

    No, I'm NOT saying "consciousness can only come about as the result of evolutionary adaptation".

    Rather, what I'm saying is that consciousness, in our machines, is an undesirable trait -- consciousness is inefficient and not terribly useful -- so is likely to be avoided in most practical applications. Consciousness is just a monitoring/journalling mechanism that allows an entity to use theory of mind (useful for interacting with other entities) to model its own behaviour retrospectively. High level, expensive, doesn't actually aid decision making most of the time: from the point of view of a designer building a better tool, it's mostly wasted overhead.

    There will be exceptions in academic studies of the nature of cognition ... but those are going to be increasingly tightly constrained by ethical issues as they get close to creating something recognizably human-equivalent, and there's no profit motive for working around those constraints (a different incentive system generally applies in academia).

    384:

    Yes, that's pretty much what I'm getting at. Human-like conscious AI is a neat trick, like making a muscle-powered ornithopter: but it's not much use for the AI equivalent of ferrying 400 passengers across the Pacific at high subsonic speeds.

    385:

    My personal thoughts about the issue of human-like AIs and uploading are more along the lines of Lem's Golem XIV:

    http://en.wikipedia.org/wiki/Golem_XIV

    Golem XIV has no personality and the like, but it can emulate something of that sort when dealing with people; in the same way, I guess any uploaded mind would be prone to bootstrap itself into something, err, else, maybe even merging some other subroutines, err, minds into its code.

    Keeping the old personality around might be interesting though, especially when dealing with HSS 1.0, or in the same way I try to remember what it was like being 18; yeah, that's ascribing sentimentality to a post-human AI, but I'm tired...

    386:

    On a related note, could we call those emulated personalities "the masks of Nyarlathotep"?

    SCNR

    387:

    ah... i misread. sorry.

    but then i'd say that the reason for developing AI at parity with humans would not be to create autonomous beings but to facilitate human download.

    wouldn't we need to see if consciousness at human level can be artificially implemented at all before we actually commit to such an upload?

    and i suspect that implementing brain function into computer simulations will be a fundamental part of our coming to a full understanding of how the brain really works in the first place.

    basically, that AI will be a by-product.

    also, we may have ethical boundaries... but lots of nations and maybe even secretive multinationals wouldn't.

    sure we wouldn't want a neurotic AI bitching about how lonely it is, but there might be a way to take all the good things about sentience and strip out the "bad" so that you can create vinge-ian "focused" slaves.

    if ANY edge can be gained by having "controlled sentience" over non-aware expert systems and such, i suspect someone, somewhere is gonna bring it about.

    finally, the yahoo factor:

    implementing AI may seem like a titanic feat now. but the tech needed to pull it off may well manifest.

    and then 100 (or 1000) years after it happens, it will be DEAD EASY....

    at that point, can't you envision kids getting formulas off of youtube or whatever they have at that point and making their own pokemon sex slaves or whatever?

    inasmuch as nothing is going to stop technology short of armageddons, it seems like an inevitability.

    390:

    Again. It didn't work so well before. Practice makes perfect?

    391:

    Did you read the link?

    392:

    Actually, I did read the link; the modern single-person submersibles are much safer than the Hunley. (Maybe no more practical, but they're not routinely killing their crews.) Progress!

    393:

    It's a contest, not something meant for sale or professional use. They're at the Navy Yards using a test pool.

    394:

    So far as I know, the smaller a chip is, the faster it is. And the more errors it makes. I think the best we will ever get is a very good expert program. I think that's as good as people are. But I don't claim to be an expert! But there are mechanical limits.

    395:

    Great essay Charles. One or two comments, though.

    You say: "Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death."

    Um, why's that? Never underestimate human ability to rationalize.

    The very same Catholic Church that burned Bruno for speaking of life on other worlds claims now to eagerly avow a plenitude of living worlds. It accepts evolution as an integral tool of creation and posits the Big Bang as "let there be light."

    Uploading could be rationalized as an embossing of the soul into matter (explicitly into clay, in my own novel Kiln People), leaving the original free to continue its journey. Others might call uploading a self-punishment, choosing a destination for the soul OTHER than the precincts of God. (This is very, very compatible with recent theological re-interpretations of heaven and hell, making them matters not of punishment but of personal choice.)

    Always assume your adversaries can be ingenious. Let your imagination flow down paths that theirs might go.

    As for the ethics of making vast simulated worlds filled with ersatz but feeling beings? Well, try my post-singularity story (a hard genre!) "Stones of Significance." http://www.amazon.com/Stones-of-Significance-ebook/dp/B0056A23TA

    It has fun with the notions of reciprocal obligation in a posthuman world. And the scenario is like none of the ones that Charles covers here. But that is what's so cool about this topic!

    again, great essay! I copied much for further study.

    David Brin http://www.davidbrin.com

    396:

    @ 377 Exactly - but this is what I was predicting at the end of my post @ 369 ...

    @ 379 Surely we will start, not with mice, but LOBSTERS?

    D. Brin: Which just goes to show what utter shits the catholic church is and are ....

    REPEAT: There are TWO SEPARATE ISSUES under discussion here, not necessarily related at all to each other. Is a BigS likely &/or possible? My answer is YES. Is uploading likely &/or possible? My answer is almost certainly not. Third question: is increasingly sophisticated machine/computer/human augmentation-and-interaction likely? YES. And this MAY be a part of the BigS, but it does not constitute Nerdvana.

    397:

    I agree with Chris E. -- Religion may gum things up, but being gum, it will stretch itself to fit into whatever space or cyberspace is handy.

    398:

    Charlie,

    I too am a strong-singularity skeptic. I think a mild singularity, of an emergent global-scale something, is very likely, but the explosion of a human-like intelligence very unlikely. In addition to your reasons I have several more.

    1) The Singularity is Always Near http://www.kk.org/thetechnium/archives/2006/02/the_singularity.php Exponential curves present a phantom entity called the singularity

    2) Setting aside the curves, the main problem is "Thinkism," the belief that the solution to problems can be achieved by higher powers of thinking. http://www.kk.org/thetechnium/archives/2008/09/thinkism.php

    3) Add the mistake that intelligence is a single dimension. What will explode in the future is not the "power" of intelligence but the varieties of intelligences. See a Taxonomy of Minds. http://www.kk.org/thetechnium/archives/2007/02/a_taxonomy_of_m.php

    4) An explanation for why the Singularity is faddish now, something I call the Maes-Garreau point. http://www.kk.org/thetechnium/archives/2007/03/the_maesgarreau.php

    But just so you don't think I am not open minded to a radical future, I do think that an emergent superorganism at the planet scale can certainly happen. Here is what the evidence might look like: http://www.kk.org/thetechnium/archives/2008/10/evidence_of_a_g.php

    399:

    "These are huge show-stoppers"

    As nuclear war starters, maybe, but not for any engineering reason.

    "They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. "

    Such powerful tools might fit within the upward curve on the way to developing even more powerful tools...

    400:

    "sure we wouldn't want a neurotic AI bitching about how lonely it is but there might be a way to take all the good things about sentience and stripping out the "bad" so that you can create vinge-ian "focused" slaves."

    you want to make slaves?? how is feeling loneliness a bad thing? what's wrong with creating new forms of sentient life, complete with the existential issues that we feel?

    401:

    It's also worth remembering the internet is forever, so any AI that might appear would find out what people (you) thought of it pretty quickly. Also, trying to keep a strong AI in a bottle seems ridiculous. It would probably squirt out replicators from its diamond processing substrates using some physics we don't understand. Life will find a way...

    402:

    I can't see a Singularity or even a strong AI till computers can work faster than light. Electricity is too slow.

    403:

    "Letter reference" @403, looks like spam, certainly doesn't make much sense. Turing Test fail.

    404:

    We seem to be having a bit of a spam attack right now. Older threads now closing!

    405:

    The Pauline concept of resurrection is uploading. See Bernard Brandon Scott, The Trouble With Resurrection.
