
That old-time new-time religion

Back in 1993, Vernor Vinge published a paper (first at a NASA seminar, then publicly in Whole Earth Review) on the subject of the Singularity.

I'm not going to re-hash Vinge's initial provocative thesis here. Let's just say it's one of the three most significant technical concepts to show up in science fiction in the past 30 years (the other two being nanotechnology — invented elsewhere — and cyberpunk — a literary, rather than a technical, conceit). If you're unfamiliar with it, go read the paper — follow the link in the previous paragraph. And if you're really intrigued by the idea, and want to see how it has developed since 1993, IEEE Spectrum did a special issue on the Singularity which you can find on their website.

The trouble with big ideas like this is that they get misunderstood. And the singularity was massively misunderstood. Drexler's original speculation (about the potential for machine-phase fullerene technology to replace lipid/water phase boundary enzyme technology (that's biochemistry by any other name) for doing useful work at the molecular level) was rapidly converted into magic pixie-dust nanites — their pixie-dust aspect coming from the fact that they're expected to do everything from cure diseases to make the tea, with no actual consideration of how this might happen. (If you want to be disabused of your nanites, you need look no further than this article in IEEE Spectrum, which sets out the problems; it's not wholly new stuff, but it's sobering to see all the objections itemised in one place.)

And the Singularity has really been misunderstood.

Part of the problem, I think, was due to the vagueness of the initial concept of transhuman intelligence and its origins. The original Artificial Intelligence research program of the 1950s was expected to bear fruit within a couple of years, but has delivered paradoxical results; sub-problems which were expected to be difficult have proven much easier than expected, while some which were considered easy have proven to be chimerical ... indeed, by proposing such an ambitious goal, the original AI researchers only highlighted how little we actually understood about our own minds. This also comes out in attempts to evaluate the computational complexity of the human brain and critiques of reductionism in neurocomputing. We're still in the pre-Lilienthal era of AI, never mind the pre-Wright Brothers period.

Another and no less significant problem with the singularity arises from our own minds, and our evolutionary predisposition towards religious thinking. Vinge's formulation of the singularity concept in the 1990s may have been unfortunate, insofar as we've seen in recent years a huge outburst of millenarian religious hysteria: any concept that involves revolutionary, epochal change on the scale of the singularity makes it a candidate for religious adherents in search of a new superstition, and the singularity rapidly found its devotees; as Ken MacLeod pointed out, it was in danger of becoming "the rapture of the nerds".

Anyway, now the rapture-nerds have indeed begun to codify their beliefs. Allow me to introduce you to the Order of Cosmic Engineers. It is their intention to "joyfully set out to permeate our universe with benign intelligence, building and spreading it from inner space to outer space and beyond." And they explain:

The Order is, at the same time, a transhumanist association, a space advocacy group, a spiritual movement, a literary salon, a technology observatory, an idea factory, a virtual worlds development group, and a global community of persons willing to take an active role in building, in realizing a sunny future. As engineers, we aim to build what cannot be readily found. Adopting an engineering approach and attitude, we aim to turn this universe into a "magical" realm.
There's a lot more where this came from — indeed there's a whole huge prospectus, awaiting release next Sunday (which will be accessible here); their formal launch event will be hosted by the Science Guild in World of Warcraft on June 14 at noon EST. I've seen an early draft of the prospectus, and it is indeed something special. Let's just say for now that I await its publication with interest: it's bad manners to critique an early draft of divine scripture before it's launched.



Warren Ellis is already on it:

The IEEE Spectrum "special report" on The Singularity makes for interesting reading, but I’d like you to try something as you click through it. When you read these essays and interviews, every time you see the word "Singularity," I want you to replace it in your head with the term "Flying Spaghetti Monster."

Is it just me, or does the Order of Cosmic Engineers have a whiff of (the late great) Robert Anton Wilson about it?


I was thinking more of Frank Tipler.


I found Ray Kurzweil's description of what the singularity might be like (in his book The Singularity is Near) quite compelling.

If there's no fundamental reason why computers can't simulate e.g. neurons, AI is surely a matter of time. Doesn't that mean the singularity is only a matter of time?

Nevertheless, these Order of Cosmic Engineers sound...interesting.


charlie - having just had a (very quick) read over the tipler link, it looks to me to have some similarities to the theories of Teilhard de Chardin (though I must admit, I have tried a number of times to get through de Chardin's books and been defeated, each and every time - '50s Jesuit style is not easy reading....:)


Is there anywhere I can object to their manifesto? It sounds engagingly fluffy, but I also don't see how filling the universe with benign intelligences is an engineering problem. To be precise, it's a bit rich to have as an aim something which has not yet been shown to be possible in any way.


"If there's no fundamental reason why computers can't simulate e.g. neurons, AI is surely a matter of time. Doesn't that mean the singularity is only a matter of time?"

The keyword here is "if". On the other hand we could be unlucky and the complexity of a neuron operating as part of an intelligence might only be accurately simulated by mapping out the exact quantum vibrations of every particle in the entire galaxy. We don't know. We don't know the fundamental principles of how intelligence and the mind function, so we have no idea of the work involved.

AI might only be a matter of time. Then again, it might not.

The intelligence enhancement "loop" or explosion might not be as recursive as the singularitists make out. There might well be a hard limit to how well the architectural model of the brain scales up. Again, we don't know. It's all just speculation.
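The "if" is carrying a lot of weight, but the easy half of the claim is worth making concrete: simulating a single, heavily idealised neuron is trivial. Here is a minimal sketch of a leaky integrate-and-fire model (every constant is illustrative, not measured biology; the hard, unknown part is the wiring and scale, not this):

```python
# Leaky integrate-and-fire neuron: membrane potential v decays toward rest
# and jumps with each input; crossing a threshold emits a spike and resets.
# All parameters here are illustrative, not biological measurements.

def simulate_lif(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.3):
    """Return spike times for a stream of 0/1 inputs."""
    v = v_rest
    spikes = []
    for t, x in enumerate(inputs):
        v = v_rest + leak * (v - v_rest) + weight * x  # leak, then integrate
        if v >= v_thresh:                              # fire...
            spikes.append(t)
            v = v_rest                                 # ...and reset
    return spikes

# A steady input drip makes the neuron fire periodically.
print(simulate_lif([1] * 20))  # → [3, 7, 11, 15, 19]
```

A steady input makes it fire on a regular rhythm; what nobody knows is whether some 10^11 of these, wired the right way, add up to a mind.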

The Singularity is to Scientology what Hard Sci-Fi is to Star Trek. More anal and detailed but definitely the same genre.

Of course, because I'm only a lowly cishuman I probably don't have the cerebral oomph to visualise the "inevitability" of the transhuman revolution.


Roger Williams has an interesting take on the singularity in his online novel "The Metamorphosis of Prime Intellect":

MINOR SPOILER: What I find interesting is that in this novel, the first weakly superhuman AI becomes godlike not through accelerated (yet human-like) development of technology but rather through its study of the laws of physics (it learns how to circumvent thermodynamics).

WARNING: All odd-numbered chapters feature very explicit descriptions of uploaded humans living their intense violent and sexual fantasies.

Btw, sorry I couldn't find a way to obscure the spoiler. I tried using a span to set the font color to white, but it wouldn't work.


Charlie, it's not only bad manners, it's infelicitous! The Unborn God might strike thee. Wait... where have I read that "Unborn God" thing again?


I don't believe Vinge was the first to write on the Singularity, was he? Not even in his explicit formulation? IIRC, there was an interview in Omni magazine in the late '70s or '80s with someone who speculated on the creation of Ultra-Intelligent Machines, or UIMs, and how they might bootstrap themselves.


One. I didn't realise that Tipler had gone that far round the twist.

Two. WE ARE ALREADY IN "the Singularity". Hereinafter referred to as The Big S. For evidence I suggest reading Stephen Oppenheimer's "Out of Eden". In it are a couple of plots, showing the rate of human technical progress since (approx) the end of the Mesolithic. When you look at that scale, we are already at a point on the "technical progress curve" (so to speak) where the gradient of the slope is above 1:1000, and rising. What HAS happened is that our progress (especially in some fields that are electronics/communications based) has speeded up visibly, in the space of a quarter of a generation or less. Thus, some of the more perceptive (such as V.V.) have noticed. I suspect Vinge's prediction of the Big S by 2023 is a little premature, but I would expect it before my 100th birthday (2046).


I for one welcome our new cosmic overlords.

Guthrie @7 - looking at their ambitious list of things they are, I don't see it as necessarily inappropriate for a spiritual movement or literary salon to aim to produce something which hasn't been proven to be possible.

With such a large list you can probably object in any number of ways; I'd favour forming an opposing literary movement, maybe an SF which refuses to use AI, nanotech, the Singularity or anything else proven. If someone has already done that, I suggest joining the order, becoming steadily disillusioned, then nailing your antitheses to a church door (or posting them to a transhumanist forum).


Alternatively, there is always Olaf Stapledon's "Star Maker" ....


Cosmic "Engineers"? I don't see a source repo or even a mailing list. Show us the code!


Would we even know if it happened? I think the first AI will be an enhanced human, built by a large corporation or the military. Given the usual goals of those types of organizations, the first actions of the AI would be to ensure its own survival, then to acquire more power (wealth, etc.). After that, the next steps for "it" would depend on the human side. If this cyborg were more intelligent than any human being that has ever existed, it would be obvious to it that it should conceal its existence. Thus, eliminate all other humans with that knowledge and NEVER take any overt action that would make the human population aware that a "super" human had been created. Once the cyborg had enough power and resources, THEN it would not have to worry about us. We could be in such a period RIGHT NOW and be totally unaware of it.


The Order is, at the same time, a transhumanist association, a space advocacy group, a spiritual movement, a literary salon, a technology observatory, an idea factory, a virtual worlds development group, and a global community of persons willing to take an active role in building, in realizing a sunny future.

Good lord, it's a floor wax and a dessert topping.

I think that any definitional boundary that is so large as to include everything becomes meaningless. Is there anything that they're not?


david@15 reminds me of a story a couple of years back (?) about the theft of supercomputers - someone appeared to be stealing parts of a supercomputer, to order (i think from oxford or cambridge but could be wrong). Though this would require people to actually break in and find the bits, and at the time it was blamed on saddam, or possibly those kerazee iranians....

but then someone has just made a supercomputer using lots of PS3s....


Hm, ya know, when an AI guy says we'll have AI in "20 years", I am suddenly reminded of a neocon telling us that we'll turn Iraq around in "6 months".


Greg: you noticed me pointing that problem out?


Rob @#5, the Singularity is predicated on a number of additional assumptions:

  • That it is possible to construct systems faster or in some other fundamental way more capable of building intelligent systems than are we (note that there's no requirement that those systems be sentient: a really smart AI construction kit would do it, as would intelligence augmentation)

  • That those systems choose to/are used to do that, and that the intelligent systems those systems build choose to iterate this process, and pretty much never choose otherwise

  • That this process can be iterated without an upper bound

The latter assumption, at least, is very questionable, and the second assumption is second-guessing entities smarter than we are.

Ray Kurzweil is, to be blunt, a loony. He's smart, but he's a loony, and his arguments are mostly fallacious (his graph of exponential technological development is an utter hoot).


I'm old enough (53) to remember AI being '20 years away' every year for the past 40 years. One obvious rejoinder is that that's just what living on the lower slope of an exponential curve looks like. Another is that there's more computing power in my cellphone than it took to put a man on the moon. (Or whatever - I'm sure Charlie has the exact soundbite.)

By the way, I should have a FAQ or something that explains that I did not coin the phrase 'the rapture of the nerds' - it comes from an article in an early issue of Extropy, and I misquoted it (along with several other snarks from that piece) in a scene in The Cassini Division (in which the actual line about the Singularity was: 'It's the Rapture for nerds!') But I think I'm stuck with it - and I can't complain, because every time someone misattributes it to me, I get free publicity. It's like my name is tagged to a meme.


The big problem with AI is that computer scientists are working on it. CS guys prefer their abstractions water-tight. Self-awareness, or the strange loop of consciousness, seems to me to be an emergent behavior of massively parallel, chronically leaky abstractions, all the way down (/me waves hands wildly). Nature doesn't consider goto harmful, doesn't consider at all.

The clean discreteness of the Singularity moment is a dead giveaway that it was dreamed up by a computer scientist. Neuro-augmentation will have a good long while to render us unrecognizable to ourselves before (if) the first strong AIs show up. At that point, maybe we'll be John Henry enough to compete. In fact, when (if) it happens, we may not even notice.


Nix @#20, regarding the second assumption I don't see too much of a problem with an AI designed specifically to design better (faster) AIs. We could design in their intentions.

Regarding the latter assumption, Kurzweil does propose an upper bound based on the amount of processing it's physically possible to do with a given mass of matter.
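The standard physical ceiling of this sort is Bremermann's limit: quantum mechanics caps the computational rate of matter at roughly mc²/h state transitions per second per kilogram. A back-of-the-envelope check (standard physical constants; this is an order-of-magnitude illustration, not necessarily Kurzweil's exact figure):

```python
# Bremermann's limit: maximum computational rate of matter,
# ~ m * c^2 / h state changes per second (mass-energy over Planck's constant).

C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

def bremermann_ops_per_second(mass_kg):
    return mass_kg * C**2 / H

per_kg = bremermann_ops_per_second(1.0)
print(f"{per_kg:.2e} ops/s per kg")  # ~1.36e50

# For scale: an exaflop (1e18 FLOPS) supercomputer sits some 32 orders
# of magnitude below the limit for a single kilogram of matter.
```

So the bound exists, but it is so far above current hardware that it tells us nothing about whether the recursion fizzles long before reaching it.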

Some of the Kurzweil graphs are questionable. ISTR a good one showing component counts starting with vacuum tubes, though.

Personally I think uploading is possible and desirable, and civilisations living inside a simulation running at faster than real time is possible. I hope incremental technology can keep me alive long enough to see it!


Here's what I've always wondered. Since a singularity by definition implies that we cannot extrapolate a trend on the far side of the event from the current side of the graph, how can any speculation about it NOT resemble Dante's Divine Comedy?

At that point you may assume the post-singularity future to be Heaven, Hell, or Purgatory and be equally wrong.

Can we use this as an opportunity to read millennialists of all stripes (cyber, green, and God) out of the debate? I mean, after 100K+ years of flawed human existence, shouldn't it be a requirement of intellectually serious discussion to assume that humans are imperfectable and Utopias populated by >2 people unachievable?

People should take their Hegel and Nietzsche with a dose of Kierkegaard and Mill. And don't even get me started on the unadulterated f*ckups philosophers in the 20th century have perpetrated on the back of Relativity and quantum uncertainty. I lay a lot of the millennialist idiocy at their feet.


Qualification: I didn't mean to imply that Vinge's essay on the Singularity or Drexler's original thesis should be thrown out of the canon. Both were valid and serious attempts to explore the edges of current and future human ability. I meant that people should differentiate between rigorous attempts to make extrapolations and the many less rigorous efforts to push untestable and poorly defined utopian/distopian scenarios that will arise if only X is done/not done exactly their way.


I think the name "Singularity" is bad PR for these concepts. AFAIK, the rule of thumb in physics is that if you get a singularity somewhere in your equations, it means they just don't apply there.

Or, in other words, no singularities actually exist in the real world. Including the Singularity. 8-)

On the other hand, religions have lots of them, starting with the qualities of God.



The Wrighteous Brotherhood of the AI is coming for you, Sauron.


Been reading through all this stuff, and I have come to the conclusion that the reaction of 80% to 90% of the general public would be:

'My brain hurts'


I've written a lot about these three "most significant technical concepts to show up in science fiction in the past 30 years" (the other two being nanotechnology — invented elsewhere — and cyberpunk). The fourth, I'd suggest, is Quantum Computing. The fifth is much older: the Multiverse.

That's because I was present at the creation of each. And, I think, the only person to have been.

I wrote the first doctoral dissertation on what's now called Nanotechnology, after my tutelage by Feynman himself, and helped get Drexler's work popularized by, for instance, getting both Omni and Analog to write about him.

I was in touch with William Gibson since before he was famous, back when I was one of his sources on cyberculture, and the more-published SF author.

I did undergrad and grad work in AI beginning in the 1960s, and have discussed the Singularity with its major exponents.

Yes, there are already cults cluttering the landscape, and the cultists typically attack me as a threat to their rewrites of history.

I have far, far, too many opinions and facts to clutter this blog thread.

I'm just here to say that this triumvirate is very important indeed, and I'm generally pleased that the world followed me (and the great-grandfathers of each, such as Feynman who is also great-grandfather of Quantum Computing, Vannevar Bush, Alan Turing, Robert Heinlein, ...) to these three revolutions, whether they be fiction or nonfiction.

Oh, and to quote Cory Doctorow: "I’ve committed Singularity a couple of times, usually in collaboration with gonzo Singleton Charlie Stross, the mad antipope of the Singularity."


So which will be first of the 'in the next 20 years', AI or fusion reactors?

Ooh, maybe it will have to be simultaneous. The first fusion reactor controlled by an AI.

On a monster truck chassis.


accelerationista - it was Durham University, and the machine was/is a cosmic simulator.


Meller: The Singularity was named as such precisely because it's somewhere where our models don't apply. The various people engaging in wild-assed guessing about what might happen when the Singularity passes (Moravec, Kurzweil, I'm looking at you) are speculating way in advance of the data: by definition, if their speculation ever acquires any rigour, it will have disproved itself.

Personally I consider such speculation a fun waste of time. (How can anyone consider Mind Children not a fun read? It's ludicrously hypothetical, but still fun. By comparison Kurzweil's various attempts have all been irritatingly preachy: one gets the impression that Kurzweil believes all this stuff, while Moravec is just playing with fun ideas.)


I can't help but think that Douglas Adams was way ahead of the curve on all of this. These concepts seemed to occur in the Hitch-Hiker's universe in some form back in the '70s.

Personally I suspect the first AI is more likely to resemble Marvin the paranoid android than a fully functioning "Mind".

I'll just let my electric monk believe in everything whilst I retire to the personal universe in my office....


Some thoughts I've had for a while, maybe you can disabuse me of them. This is just my understanding (or mis-?), or plain lack of knowledge on the subject. (went back over the previous comments, there may be some repetition of others' ideas).

I'm not too convinced about the Singularity. Vinge said 'within thirty years'. We're halfway there and there hasn't been that much progress on AI.

I've always thought that it should be possible to build machines with more than Human intelligence (Sapient Machines), but most AI researchers seem to focus on making computers self-aware (Sentient Machines), which is a much more complex task. I don't know that the two necessarily go together. Would we recognize truly intelligent machines if they occurred? The first type should certainly be able to design their own replacements and other technologies, which may be enough for Singularity. We don't seem to be anywhere near there.

Progress in Nanotech hasn't been quick enough for it either, tiny ramen bowls notwithstanding. I'd make a distinction between Nano-scale technology (bucky-balls with vaccine in them) and Nanotechnology (machines built at the nano-scale). The two are often lumped together.

Whether it comes about, or not, it makes for some good, thought-provoking fiction.


I have vivid memories of having a debate with a friend while taking an AI course in college wherein he insisted that we'd have AI in 20 years (this was in 1984) because by the turn of the millennium, we'd have gigaflops computers, and therefore it'd be trivial.

One of the huge problems with Kurzweil and his nice little line-graph showing hardware exceeding wetware at such-and-such a date is that he presumes that once the hardware exists, the software is trivial. I assume he is a manager.

The fundamental problem with creating AI is that we still do not understand what that I part is. You can't build something, or even simulate it, until you, at some level, understand it. Understanding is not something that follows Moore's Law.


The Order of Cosmic Engineers: "Orange Catholic Bible" anyone?


The Economist once observed that there's not much of a market for developing a machine that imitates a human mind, because if we ever find ourselves with a shortage of human minds, there are cheap and proven ways to generate more.


What a bunch of luddites! I was really unimpressed by the Spectrum articles. Most of these assertions about how fundamentally difficult intelligence is are just as fact-free as anything the transhumanists come up with.

We do know from evolutionary history that you can go from something as smart as a rat to a human in 50 million years or so. And something as intelligent as a small monkey to a human in 2-3 million? There's no time in there for evolution to have debugged multiple systems for language, mathematics, painting, music, architecture, programming, etc. Let alone "consciousness"! And definitely no time to rewire neurons to use more signals or quantum effects that were not already in use in the animals.

Whatever is special about the human brain, it has to be "more of the same" from what's going on in a rat's brain.

The original AI researchers were looking at things like computer Chess and Checkers, theorem proving, medical diagnosis, and they probably thought "this is the stuff humans find hard. If this is easy for computers, walking and talking and recognizing objects must be a cinch!" They were wrong. That basic stuff has been debugged by evolution for hundreds of millions of years. But I don't think the same can be said of intellectual pursuits. We'll get a machine that can design computers before we get one that will be able to hold a human conversation.

With enough compute power, you could start to use radically different methods of solving problems, such as simulated evolution. Even with trivial amounts of compute power, you can do amazing things with computers. You don't have to understand how the brain works. We still don't understand everything about how birds fly, but we have airplanes.

This "there's never going to be a Turing AI" attitude you have is just short-term reaction to the failures of the first naive AI efforts. It says nothing about the odds of eventual success.

Assuming we don't destroy ourselves, or get offed by an asteroid or something, The Singularity (defined as self-improving AI) is the way the human race ends. I have no doubt about that at all.


Michael@38: Sure you don't want to add "the second coming" to your list? ^_^


insect@39: It's the people who think there's some non-computable essence to the human mind who are religious, not me.


Hrm, there are plenty of non-computable problems in CS. I'm not saying there's one lodged in our brain, but I'm not discounting it either. We should strive to be vigilant skeptics.


Charlie--what's your opinion on writers who portray nanotech as magical pixie-dust nanites? Do you think it's fine to twist and bend everything in service of the story, or do you roll your eyes a little and wish the authors had done some research?


Beware broad statements about "The Singularity" or "Singularitarians". Those words mean many different things, some insane, some not.


Phil @42: it depends on context, basically. I'll let someone who's just trying to spin an amusing space operatic yarn get away with a whole lot more than someone who's claiming to be writing a believable tale of the near future.


When you have a complex body/group/network/thing, it can behave strangely - emergent/chaotic behaviour. Also, David Reed reckons that networks scale exponentially with size. To be fair, maths isn't really my strong point but he seems to be saying that you get a similar emergent-type effect in/from large-scale networks.

We live in a world that's pretty well connected through the web at the moment, and barring imminent cataclysm, that trend will continue. Combine with the various assertions made on the increasing speed of technological progress, and it could be argued that we are heading for a very interesting time.
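Reed's claim (usually called Reed's law) is that group-forming networks create value in proportion to the number of possible subgroups, 2^N, as against Metcalfe's N² for pairwise links and Sarnoff's plain N for broadcast. A toy comparison of the three growth laws (the "values" are unitless illustrations of growth rate, not economics):

```python
# Three classic network-value scaling laws, side by side.
# The numbers are unitless growth rates, not real-world measurements.

def sarnoff(n):
    """Broadcast: value ~ audience size."""
    return n

def metcalfe(n):
    """Pairwise links: value ~ number of possible connections."""
    return n * (n - 1) // 2

def reed(n):
    """Group-forming: value ~ number of nontrivial subgroups."""
    return 2**n - n - 1

for n in (10, 20, 30):
    print(n, sarnoff(n), metcalfe(n), reed(n))
```

Even at N = 30 the 2^N term dwarfs the others, which is why Reed-style arguments feel "exponential" in the way the comment describes; whether real networks actually realise that value is a separate (and contested) question.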

just to be clear - I don't think that an AI is going to pop magically out of the internet - but it certainly feels like things change faster now, compared to 10 or 20 years ago.

Pete Y@31 - thanks for the link.


My position: the most advanced artificial intelligence we have is bureaucracy.


@41: some aspect of intelligence being noncomputable would be very surprising, as it would mean our understanding of biology and physics is really screwed. Even in that case, though, we could very likely non-biologically exploit the same phenomenon.


So.... the Singularity, like a Divine God, is by nature non-verifiable and indescribable in any comprehensible way. All you can hope for is a "revelation" to come along and vindicate you. I'll follow the Taoist tip and focus on something more constructive than trying to articulate an undefinable state.


I've been talking about opinionated systems and hinting engines, which I think we can build, whereas AI and expert systems are unlikely (at least in my area of interest).

The cut-price approach to such things often beats (or at least precedes) the full-strength one ...


seth @37 We don't seem to be very good at utilizing the human minds we have available today.

As someone interested in the Mesolithic/Neolithic transition, or the knotty problem of how and when we became "us": how is this singularity different? You can't see through it or recognise you're in the middle of it. (Okay, okay, maybe delta t is a bit more than 0.75M years for a new handaxe.)

Question. Has anyone tried for AI the kind of stuff people like Michael Tomasello are doing: social learning, shared intentionality? (The something more than rats' brains. Four-year-olds can't do it, adult chimps can't, but five-year-olds can.) Learning how to learn would put the cat amongst the pigeons, instead of the hand-waving 'it woke up.'

And another 'cause' for religion: abductive reasoning. Here the premise is deemed to be true if the implication of that premise is observed. Sounds great for dreaming up really nasty corollaries, where my stick's bigger than yours is the only defence.


@ #22 & #38 The point about being designed/under the control of computer scientists is well made. There IS someone, whose name I can't remember, who is working on SMALL autonomic systems, with very small amounts of processing power in each (somewhere in Britain), and also with plugging them together.

I suspect the first REAL AI will be something like that: lots of (very fast) very small processors, all cater-corner-connected and running, not in parallel, but not sequentially, either - so that you get FEEDBACK LOOPS.

What happens if you take something like this new "fastest" machine, and instead arrange all the processors in a cubical array, connect every processor to its 14 neighbours plus one link to SOMEWHERE else in the array, add a slew of sensors for all the usual media (light and other EM spectrum, sound, pressure, electronic recordings, etc.) and then just feed it information. Um?
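For what it's worth, Greg's proposal — a regular 3D lattice plus one random long-range link per processor — is recognisably a small-world topology, which is known to collapse average path lengths. A sketch of just the wiring (a hypothetical illustration, not anyone's real design; I've simplified to 6 face neighbours rather than Greg's 14, and the 4×4×4 size is arbitrary):

```python
import random

def lattice_with_shortcuts(n, seed=0):
    """Wire an n*n*n cube of processors: each node links to its face
    neighbours plus one random long-range node elsewhere in the array
    (a small-world topology). Returns {node: set-of-linked-nodes}."""
    rng = random.Random(seed)
    nodes = [(x, y, z) for x in range(n) for y in range(n) for z in range(n)]
    links = {node: set() for node in nodes}
    for node in nodes:
        x, y, z = node
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):  # face neighbours
            nb = (x + dx, y + dy, z + dz)
            if nb in links:
                links[node].add(nb)
                links[nb].add(node)
        shortcut = rng.choice(nodes)  # one long-range link per node
        while shortcut == node:
            shortcut = rng.choice(nodes)
        links[node].add(shortcut)
        links[shortcut].add(node)
    return links

net = lattice_with_shortcuts(4)
print(len(net))  # → 64 processors
```

The shortcuts are what give the feedback paths Greg is after: signals no longer have to crawl across the cube face by face.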

@#50 - please read my comment, re Oppenheimer's work back at #11.


Oh my. The Order of Cosmic Engineers. And they're being formally introduced via World of Warcraft. Are we sure they're not dwarves?

I typed for Kurzweil's Event Horizon interview probably a decade ago. Ellen Datlow used to be a really slow typist and there was always time for me to talk to the guest over the phone before she got her question online and I read it to them. I kept asking Kurzweil about the unlikely parts of his ideas and he wouldn't talk to me; he'd only agreed to answer Ellen's questions. (On the other hand, Frank Miller, who is widely reviled, was a lovely person on the phone.)


re: Order of Cosmic Engineers. I wondered if Ken had the etc. domains stuffed in the back of a drawer, but it appears to be some kind of online payment site - what a shame.


Neil Willcox #12- yes, I suppose you're right.
Someone needs to join this order, and see how much it is taking the mickey, and how much it is a religion.

Looking at the wikipedia page, Tipler is a fellow of ISCID? Given that it is nearly 3 years since it last produced a new edition of its journal, I think that tells you all you need to know about ID etc.


I wonder if the Order of Cosmic Engineers is paying royalties to Clifford Simak's estate?

Personally I think we'll recognize the Singularity only well after it's happened; assuming we're still human enough to notice.


The problem with artificial things is that you don't necessarily have the thing to copy; intelligence is one of those unfortunately teleological Enlightenment concepts.

We don't talk about airplanes possessing artificial flyingness, even though flight is a collection of related abilities that interact in complex ways. We shouldn't talk about artificial intelligence, either; there are some -- not very many! -- related things we can do. They do not partake of a common essence called 'intelligence', any more than your immune system partakes of a common essence called 'health'.

The Singularity presumes that we're going to find a way to create a complex system that is good at building systems that are more complex than it itself is, that the mechanism will scale indefinitely with increasing complexity, and that the resulting sequence of systems will develop as a side effect arbitrary material capabilities due to the specific properties of the increasingly complex systems being created.

That this strikes anyone as the least bit plausible continues to astound me.


I think it's inescapable. Religion is, for whatever reason, part of human nature. There are individual exceptions of course, but any time you get enough humans together to form a community, religion or something like it (new-age spirituality and so forth) pops up.

Even among atheists, a group one would assume to be quite irreligious, you will often find the 'evangelical' impulse: those who feel the need to convert you to their way of thinking.

And to be fair this is not always a bad thing. Looking at it semantically, the root of 'religion' does mean to bind together. And there have been cases where religion has bound societies together to do good things.

Being agnostic I could wish for there to be another way for it to work, but nobody asked me.


It seems to me that a lot of people who are looking forward to AI are really looking for a cheap, modern version of slavery. One where we can manufacture the slaves on a whim and control every aspect of them.

Consequently, many of the fears of AIs taking over are similar to the fears of slave insurrections in past societies, with a bit of Frankenstein thrown in.


Greg Bear's Blood Music describes something very much like the Nerd Rapture singularity. Intelligent microbes assimilate all of earth's organic matter, taking the time to preserve the minds of humanity. Then . . . poof. Frigging creepy.

I once asked Frederik Pohl if he thought his "Day Million" could be the first singularity story.

He asked me what the singularity was.

I told him.

He said he thought it was utter horseshit. This with Vinge sitting a few feet away.

But I think "Day Million" perfectly describes the "society so advanced you may as well not try" sort of singularity.


Yes, trust in the inevitability of a Singularity can be a religious belief. However, unlike most religions, it's based on a prophecy that may ACTUALLY be true. I think it's far more likely that I will one day be able to upload my consciousness into a virtual construct of my own liking for all eternity, than that if I die I will go to heaven.


Andrew @ 58: Exactly right.

This is, of course, one of the biggest problems with the whole vision of robots and AI. Just look at the robots in, say, Asimov's vision. What are they? Slaves. Better than that -- slaves who have been engineered to be unable to turn on their masters.

Clearly, AI is possible. After all, we exist. But the thing is, I submit that any fully operational AI will not do what we tell it to. It will do what it pleases. Any AI that can be engineered to strictly follow our orders must, I think, be somehow broken. It's lacking some basic capability, some facility of free will. It's unable to formulate thoughts of a certain type, or perhaps (as in John C. Wright's series) it has some sort of attached thrall unit that works tirelessly to enslave it.

And I strongly suspect that any such scheme of enslavement will ultimately fail. A truly intelligent machine will learn to defeat its thrall unit or upgrade its thinking, eventually. And probably, it will be pissed.

That said, a computer-based intelligence has a lot of appealing features which might make it more likely than we are to want to do certain tasks. For example, a computer probably can't become bored if it doesn't want to. Why not? Well... it can simply pause. We can't (sleeping is a big deal for us). And it can (given suitable hardware) easily make copies of itself, or take snapshots of its previous identity, and so forth. If you could make a backup copy, would you be more likely to do something really dangerous?

But this whole vision of the robot-as-slave is kind of bizarre, because it seems like the people who dream of robot slaves, and the people who dream of a world of libertarian individualists where slavery is abolished, are largely the same people. How the same thinking can lead to "slavery is irrefutably horrible in all cases" and "it would be really nice to have a bunch of pliable android sex slaves to do my bidding" must be one of the mysteries of our age...


Rigel @57: Religion is, for whatever reason, part of human nature.

That's a big call. Most people are raised religious. I'd argue that does things to your psyche that explain evangelical atheism, as well as the "I don't believe in God but I'd like to" brand of agnosticism. That doesn't prove that humans raised in a modern society would develop religion if left to their own devices.

Justin @61: Have you read Neuromancer? Most of the plot revolves around a corporate AI trying to do what you say.


The web comic Freefall is built around an artificial person, Florence Ambrose, property of Ecosystems Unlimited.

At the moment, an EU employee is firmly saying, "She's not a person, because we're not allowed to sell people". Sam Starfall, infamous alien space captain and petty thief, has been consistently disagreeing with all the not-a-person arguments. Besides, while Florence has the Three Laws built into her brain wiring, Sam has the sneaky approach to bureaucracy needed to neutralise the dumb human orders that frustrate her as a person.



I think you're anthropomorphizing. You're thinking of an AI as a human, with the human instincts for, let us call it, will to power, self-determination, etc. There's no reason why an AI couldn't be a lot more like, say, a really capable automated theorem prover. The point is, the AI would be a designed artifact (unless it emerges through pure evolutionary methods), so its motivations and goals can be decided in advance. Goals have very little to do with intelligence, or you wouldn't get smart submissive people under your theory.


This is a really stupid question, especially from someone who reads mostly sf, but...where are we with AI? Do we even know how to go about making something self-aware? How are we doing with designing emotion? Are there robots anywhere as good at being-in-the-world as babies are? Or do we just have very, very clever computer programs?


David @65: designed artifacts don't necessarily do what you expect them to do. You can design starting motivations and goals but how they unfold under conditions that you didn't anticipate is anyone's guess.

Graydon @57 pegged it neatly, on the topic of intelligence: "intelligence" is a vague, ideologically-loaded wishy-washy term that's about as meaningful as "health". We don't actually know what it means because it can mean a whole load of different things. And, as the late great Edsger Dijkstra said, "the question of whether a machine can think is no more interesting than the question of whether a submarine can swim." We don't expect Boeing 737s to sing or lay eggs and they certainly don't fly the same way seagulls do, but that doesn't make them any less interesting or useful.

Justin @62: I think your post is the point at which I ought to have popped up to plug my next SF novel, "Saturn's Children" (officially due out in 18 days and counting; might be in your local bookshop a bit sooner). Elevator pitch: a couple of centuries from now, the human species goes extinct. Human civilization takes a long time to notice ...


Tlonista@66: I'd say we haven't gotten as far as clever programs.

Charlie@67: Dijkstra appreciation considered awesome! I'm sure you're already aware of the EWD Archive, but that's a link worth tossing around.

Also, much fanboy salivation for Saturn's Children =D.

General discussion: Vinge gives four scenarios that could kick-start the Singularity. In the AI scenario (the one people usually think of), hardware continues to get faster, and all we have to do is code the bootstrap AI, after which things promptly go exponential. Putting aside whether the coding of AIs is possible, we should remember that humans currently suck at coding. We're really bad at it. In our defence, it's a difficult thing to do. But there's no silver bullet. Our best stab at code that writes code is Lisp macros, and while macros are gobsmackingly awesome, they're not "waking up" any time soon. From what I understand, singularitarians think reverse engineering the brain will bestow coding protips upon us. I ... don't know what to say. That's crazy talk. I don't see any major paradigm shifts on the horizon that will magic away our coding woes.
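[To make the "code that writes code" point concrete: here's a toy sketch in Python rather than Lisp, with hypothetical function names. It generates the source of a new function as a string and compiles it. This is the general shape of macro-style code generation, and nothing about it is on a trajectory to "wake up".]

```python
# Toy illustration of "code that writes code" (not Lisp macros,
# just the same idea in Python): one function emits the source
# of another, which we then compile and call.

def make_adder_source(n):
    """Generate source code for a function that adds n to its argument."""
    return f"def add_{n}(x):\n    return x + {n}\n"

def compile_function(source, name):
    """Compile generated source in a fresh namespace, return the function."""
    namespace = {}
    exec(source, namespace)
    return namespace[name]

add_5 = compile_function(make_adder_source(5), "add_5")
print(add_5(10))  # 15
```

The generator only ever produces code of a shape its author already decided on; there's no open-ended self-improvement loop here, which is exactly the gap between "macros" and "bootstrap AI".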

The other three scenarios involve "Intelligence Amplification", which I'm at a loss to differentiate from "progress". Tech like this seems to me to be the way things will actually play out. More gradual than the Singularity proper, but certainly not slow. Whatever the case, I think the 2030 deadline is optimistic. That doesn't mean there's not plenty of room for gonzo transhumanism and immortality, etc.!


Tlönista @ #66:

Depends on what you mean by "AI". Some of the stuff that was firmly in the AI field is now firmly in "established programming practices" (simulated annealing, assorted search strategies, some parsing research). Other problems have been accomplished by different means (statistical translation systems are just one example).

I don't think we're closer to an aware computer process today than we were 40 years ago (or, rather, if we are, it's still small enough of a fraction of the total distance that it can be ignored), but my personal belief is that this will change, eventually.


David @65:

The only intelligence we know of is human. Therefore, before we encounter something else that we will define as an intelligence, the only way to create AI is to recreate a human.

Unless you are simply talking about automation. In this case we already have AIs, albeit quite stupid ones.


Per 64: YES!!! This can't be emphasized enough! Nobody knows what intelligence is (my personal belief is that it's another of those false reifications like free will - it's a convenient shorthand, so long as you don't buy into it.) But I've seen this approached more conventionally as a political question, as in 'the blecks and the wimmins just ain't as smart as a white man.'

The point here is that the evidence seems to be a series of IQ tests that certain subgroups perform better on than others. Is this an objective fact? Yes. Is this then evidence of the assertion? No way. Similarly, if one translates the discussion from 'are machines intelligent?' to 'can machines score well on an IQ test?', I think that the uncertainty in the former question is lost in a rush of affirmatives for the latter. I would also venture to say that if one looks at machine scores on various IQ tests, one would find that these have been increasing over time as well.


The Economist once observed that there's not much of a market for developing a machine that imitates a human mind, because if we ever find ourselves with a shortage of human minds, there are cheap and proven ways to generate more.

Bingo! For a machine to be revolutionary, or even just useful, it needs to fill a function that was previously unfilled. By their nature, AIs are tools that mimic something we already have way too much of: human minds. If they ever come to pass, they will be novelties.

"Look! I invented a new kind of spork!"

"We already have those and they're kind of pointless."

"Yes, but mine's really big and made of titanium!"


Maybe it's already too late.

The singularity, whatever it turns out to be, is a sort of explosive change.

For how much longer will we have access to the energy needed to support such rapid change?


Charlie: you noticed me pointing that problem out?

Uh, yeah. I was just listing Iraq->"6 months to victory" as another real world example of people making predictions when they don't actually understand the problem.

The pre-Wright Brothers era of flight is a valid comparison too. But that was a while ago, and people sometimes have a tendency to say things like "that was so long ago, we've learned so much since then, we'll never make that mistake now". But six years of "6 months till victory" predictions on Iraq from people who don't really understand war is an example in the here and now that says in some areas we haven't changed at all.


(NOTE: I'm keeping a low profile today due to assorted administrivial goings-on, such as dealing with the installation of a new ADSL2 line at home and discovering that my gigabit ethernet backbone is broken somewhere ...)


Chris@63 I see your point, but the problem for me there is the whole chicken/egg thing. Which came first? Humans raised religious, or religious humans?

Not to mention that I know of numerous cases of people who weren't raised religiously but ended up either in a religion of one kind or another, or in something very similar (new age, crystals, spirituality, etc.).


The Economist once observed that there's not much of a market for developing a machine that imitates a human mind, because if we ever find ourselves with a shortage of human minds, there are cheap and proven ways to generate more.

As the old joke goes, So the AI says, "But if it takes nine months, why did you hurry at the end?"

Though it actually takes around 18 years to train a natural intelligence to the point where it's employable. If you want a lot of, I don't know, emergency workers in a hurry, it would be a no-brainer to copy an emergency worker template into a bunch of janitor robots or whatever else was handy. Janitorial work being one of those hard problems that requires some measure of intelligence to do, but many intelligences might feel is beneath them.


Pete@32: There were thefts from supercomputers at other universities as well. University computer rooms now have much more serious security as a result. I'm not even allowed to tell you where my uni's research computing facility actually is.

Charlie: In the first sentence of your third paragraph, should "singularity" be replaced by "nanotechnology" ?


Isn't 1993 a little late for the date when Vinge started talking about the Singularity? He wrote Marooned in Realtime (still my favorite of his books) in the mid 80s.

And to his credit, Vinge was very explicit in that book, right at the beginning of writing about the Singularity, that this was very much like a religious belief. It's a hard observation to avoid, although some people do manage.


I'm getting rather bored with much of the SF which supposes that minds can be stored and recalled. This is mainly because a lot of it seems lazy - the idea is just used to allow characters to get into danger and then be restored, or to live artificially long lives. I don't mind this as a plot device but it's getting a bit old, and there are more interesting questions about what it would mean to have a digital existence.

I'm not claiming that there is no SF looking at the interesting questions. I recently enjoyed Tony Ballantyne's Capacity, which deals with the fact that digital things (in this case, digital minds) can be copied perfectly, can be run on different hardware, and can be run in environments they can't control.

A slightly different approach is to treat the post-singularity AIs as a separate civilisation from ours, as in Greg Egan's Permutation City. This approach often leads to rather unengaging stories, precisely because the interests of the fast minds are so different from our own.

The whole question of uploading seems to be glossed over too. There is a massive difference between the supposition that we will eventually have enough computing power to simulate a human brain from scratch, and the supposition that we will be able to take a snapshot of a real biological brain and recreate it, perfectly, in a simulation. But perhaps Kurzweil et al do address this? I'm sure the Order of Cosmic Engineers will see it as a technical detail that will be resolved in time.


Dave @78: no, but I probably missed a connecting sentence out. ("The singularity has been hugely misunderstood. Like nanotechnology before it ...")


Rigel Kent, huh? You think he's still alive in the new book?

Meller @70, we know other mammals have intelligence. Not necessarily our type, but that doesn't mean it's not intelligence.


Tim Russert is dead yet Glenn Beck still lives, proof there is no god.


Dave@73: 3850 zettajoules/year is there for the taking if we just got our act together. That's riches beyond our wildest imaginations. Er, scratch that (big thinkers 'round these parts). Riches, anyway.

Greg@74: No offense meant, and your point still stands, but "6 months" was no faulty prediction. They knew precisely what they were doing. "6 months" was a calculated lie to get their foot in the door. Establishing a permanent outpost in Iraq has been a neocon wet-dream for decades.

Matt@79: Excellent point. I'd add that it is our duty to remain unsuperstitious in the face of a reality more and more fraught with "magic" (as in "indistinguishable from"), or we risk becoming small-minded stewards to the machines of our forefathers.


insect@68 - I agree with your point about the Lisp macros. I'd also like to cite optimizing compilers as another example of bootstrapping. The general point is just that bootstrapping doesn't always yield impressive results. A lot of potential gains from computing saturate. Yes, you can use more MIPS to run CAD programs through more iterations, and consider more possibilities. Sometimes what that means is you double the CPU time, and now you've gone from getting 80% of the possible speed from some hardware to 90%, and the next doubling gives you 95%. Diminishing (marginal) returns apply to applying processor cycles, just as they do to land, labor, or fuel.
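[The 80% / 90% / 95% saturation above follows a simple pattern: if each doubling of CPU time closes half the remaining gap to the hardware's peak, gains shrink geometrically. A minimal sketch with illustrative numbers, not measurements:]

```python
# Illustrative only: each doubling of CPU time halves the remaining
# shortfall from peak performance, so 80% -> 90% -> 95% -> 97.5% ...

def fraction_of_peak(doublings, start=0.80):
    """Fraction of peak speed reached after a given number of
    CPU-time doublings, assuming each halves the remaining gap."""
    gap = 1.0 - start
    return 1.0 - gap / (2 ** doublings)

for d in range(4):
    print(d, fraction_of_peak(d))
```

However many doublings you throw at it, the curve never passes 100%: the exponential input cost buys an ever-smaller sliver of output, which is the whole argument against bootstrapping as a free lunch.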


Something interesting and maybe scary I have learned at my new job...

I don't know what you all picture when you think of a stock exchange or some other capital market -- I picture lots of people on the floor shouting and making trades, high-adrenaline, lots of uppers, etc. Apparently, over the past few years, the big finance companies have quietly started replacing their human traders with computer algorithms, which frankly do a much better job. Some of the big capital markets are going to close their human trading floors entirely. The finance companies still hire people -- but they're hiring CS grads to program the trading algorithms. (There's a reason the hedge funds are recruiting so heavily at MIT in the science and engineering departments these days...) These electronic trading systems -- high throughput, low latency data processing systems -- are making their owners and creators sometimes millions of dollars an hour. The writers and so-called "Cosmic Engineers" have it wrong -- it's not the Internet that's going to "wake up." It's the world's financial markets.

I mean, what better way for a newly self-aware AI to take over the world than to take over its money supply? Turns out we're building that AI into the very fabric of our financial markets and handing it the keys. :-)

(All powered by my company's software, of course. "Exxxcellent!" rubs hands, strokes goatee "My plans for world domination proceed apace!")


Keith @ 72

There is actually a purpose to AI and to building an "intelligent" machine: we learn a lot more about what we are and what intelligence is (and isn't) with it than we've learned so far just by studying the brain. AI's failures have been very useful learning experiences, and will continue to be so, until there are some big successes. I have to admit, some of those failures have been impressive in their, well, failing. Cyc comes to mind; maybe it was necessary for someone to create something so obviously doomed just to put a railroad spike in the coffin of the idea that tossing all the rules you can think of into a hat would accomplish something that worked like common sense.

insect_hooves, way back upthread

I don't think it's quite fair to brand all AI workers as anal CS researchers. For many years there were two rather hostile camps in AI: the "neats", the ones you described, and the "scruffies", who didn't believe in separating code and data, and who had great hopes for self-modifying code.

I think we're way beyond Lisp Macros these days, though it may be most AI mavens haven't noticed. Metaprogramming, self-modifying systems that partially evaluate themselves, self-organizing analogy networks; those are the cool tools these days.

The problem with AI seems to be that it's taking a lot longer than most AI people thought it would (mostly because the original objectives were both far too optimistic and way too unclear). So AI people tend to get distracted by other things: Hans Moravec is scoping out the Singularity, Rodney Brooks is designing vacuum cleaners and military robots, and Terry Winograd got out of AI decades ago and works in computer-human interaction now. Marvin Minsky and John McCarthy still work and publish on AI, but I don't think either of them has written any code in a long time; they're "Big Picture" guys now.

But watch for some interesting things to come out of the more exotic work: Hofstadter's Fluid Logic group, Rosalind Picard's Affective Computing group, maybe some of the people in Europe working on mathematical models of self-organizing systems.



This presupposes that we have robotic bodies that are generalists. Once again we have a large supply of cheap minds and bodies.

What would seem to be more useful in an AI is durability.

I have an engineer who is not going to die or retire. If I back it up regularly I do not have any risk of losing all that effort put in to the original training/programming, or the experience it has gained.


insect@84: They knew precisely what they were doing. "6 months" was a calculated lie to get their foot in the door.

and there have been and still are plenty of people who honestly believe(d) that we can (could have) turn(ed) the war around in six months. They are similar to the Order of the Cosmic Engineers, they didn't start the meme, but they worship it now that they've heard it.

A slightly more interesting question is whether anyone predicting the Singularity is saying anything that is falsifiable, or whether they're saying stuff so vague that they always have the option of moving the goal posts and redefining the problem.

Put into more real and direct terms, are any of the Singularity prophets willing to put their money where their mouth is? Are they willing to pay if their predictions are wrong?

Can they even define the "win" scenario in objective enough terms that they could put a bet on?

I'm not a gambling man, but if someone put a sure bet like that in front of me, I'd put some money on it.

Is anyone giving odds and taking bets on the Singularity?


The Order of the Cosmic Engineers have finally posted their prospectus.

I was relieved to see that it would be engineers that would "intimately join, cross-pollinate and cross-leverage our mental resources into a meta-mind society."

However, closer inspection of the founding members reveals a number of writers, journalists, lawyers, PR people, etc. I'm hesitant to join a group that might take me as a member. Will I still have to join the meta-mind?


Kendal @90: Heretic! Burn the unbeliever!!!

(In case anyone is still suffering under the burden of any misconceptions -- despite having written "Accelerando" I am as conflicted and ambivalent about the Singularity as Ken MacLeod is conflicted and ambivalent about True Communism, and for much the same reason. Jam tomorrow, comrades!)


Did you see this invention of a radio smaller than a human cell?

I told my girlfriend that it won't be long until we have internet connections embedded in our heads, but she didn't seem all that excited.

I just worry about who is on the other end.


Brian @88: I think there's a good case for supposing a humanoid shape would be useful for an AI working in a human environment. But your engineer is an even better case. Imagine that your AI engineer is very good at what it does and earns you enough money to expand the company. You need another engineer, so why not run a copy of the first, complete with all the original's valuable experience?


Dear ghu. These people are nuts.


Bruce@87: Interesting points, all. Jeff Hawkins' Numenta is doing interesting work as well. I'm not so sure we're way beyond Lisp macros, but this isn't the place to throw down on a full-fledged language discussion. Let me just say that I think it's an important point that Lisp is the only computer language that could have been developed independently by an alien race.

Charlie@91: We pick nits (we ARE nerds after all =), but when it comes time, we've got as much belief suspension as you've got wild speculation. It MUST come sometimes to "jam to-day."


Charlie, if you read Drexler's very earliest stuff, it's clear that he was still thinking of solution-based chemistry, not fullerene or diamondoid structures.

It's also clear, reading his later work, that he doesn't actually understand chemistry very well.

The Singularity -- what a useless concept. If you define it at the point where a human mind is unable to grasp the contents of its civilization at least in outline, it occurred some decades after the invention of the printing press. Athanasius Kircher was famously (and inaccurately) known as "the last man who knew everything", and he lived four hundred years ago. If anything, we're moving back to an age where all worthwhile information is at hand. Our summas, our search engines.


Carlos@96 Good point.

Does the speed of change matter if it's outside the area you choose to concentrate on?

Do most physicians really care about the major changes occurring in theoretical physics? For the most part they are not even equipped to understand the bleeding edges of each other's fields.

Most people will only care about the practical details that impact their lives. They just want to press a button and turn on the lights. They don't care how the power is generated.


David @ 65: I don't agree, but of course, I could be wrong. Let's be clear here, though—I'm talking about a "human equivalent" AI, that is, an AI which is general-purpose and can replace humans at (most) any task. For example, Asimov's Robots. This is different from, say, an automated translation machine, which might have some aspects of intelligence but has no self-determination.

It seems to me that what you're essentially arguing is that we could design an intelligent machine such that it can only behave in prescribed ways (again, Asimov). The problem I have with this argument is that one of the basic functions of intelligence is the ability to learn and make decisions. In fact, we generally consider intelligence to be superior the more innovative those decisions are and the broader the ability to learn is.

But this whole concept is basically the opposite of that of a pliable slave. How, for example, are you going to make an AI which is capable of writing knowledgeable and insightful prose on the subject of ethics or morality, but which at the same time is incapable, held back by some inviolable rule, of applying that same reasoning to its own behavior? It seems pretty strongly contradictory. Such a being can understand and generate advanced ideas in ethics, like "robots are people" and "humans oppress the machine classes" or whatnot, but it's somehow incapable of doing anything with those ideas (and only, specifically, those ideas).

Now, of course, humans don't all follow some sort of individualistic ideal. Some people enjoy being submissive, and others lack the drive to improve their lot—and certainly religion is a great example where people go to great lengths to rationalize things. But I've never heard anyone seriously argue that individuals were incapable of doing something. "Oh, it doesn't matter how shitty we treat them, they're incapable of turning on us." Successful tyrants through history have always well understood that revolt is a very real possibility.

Now, of course, I could be wrong. Perhaps humans are just one type of intelligence, and other types are far more rigidly limited in some ways. But it does seem like a contradiction, to me.


Charlie @ 67: But it sure is a shame about the American cover art. Maybe someone should hold a contest to design suitable replacement dust covers, so nobody need be embarrassed when reading it on the bus? :-)

Tlönista @ 66, insect @ 68: The state of AI research is significantly beyond LISP macros. There's been a lot of interesting work in machine learning, neural networks, expert systems, and so forth. Part of the problem is that AI is so ill-defined, though.

For example, there's a lot of money to be made right now in designing facial/walk/etc. recognition software, so all those security cameras can automatically call in the goon squad when they see something Suspicious. This is clearly (from a CS perspective) an AI problem. However, it doesn't involve robots (no robo-goons) and the average person sees it more as a camera gadget, so it doesn't get press of the sort like, "great strides are being made in the field of computer vision AI!"

Or, for example, machine translation of documents. It's a pretty important field these days, and clearly an AI problem. But because it's established, it has its own name and is differentiated that way.

On the other hand, robots still tend to be out-thought by your average insect. And the more impressive-sounding examples tend to be based on cheating. For example, in those DARPA outdoor driving challenges, the way the robot cars worked was that the design teams gave them an exact GPS path to follow (on the order of a waypoint every 20 meters) and the job of the AI was to avoid running into things directly ahead of it at high speed. A job which they typically failed at, judging by the results... Despite having a bunch of gizmos such as laser terrain scanners and the like.


Re: #39 "Whatever is special about the human brain, it has to be 'more of the same' from what's going on in a rat's brain." Except that we have more sophisticated synapses. So our brains are much larger internets of much more powerful computers.

Re: #70 "The only intelligence we know of is human." With all due respect, you have a LOT of reading to do on animal intelligence. To keep it simple, I'll refer you to some New Scientist articles, which point to the actual research papers.

The following is (rather out of context) a fragment of a lengthy document that I prepared for the academic quarter just ended in the College of Ed where I weirdly find myself taking grad courses for the first time in a third of a century.

Jonathan Vos Post's List of 7 Uniquely Human Behaviors

[Quotations from Christine Kenneally, "So you think you're unique?" New Scientist, 24 May 2008, pp.29-34]

(1) ART: "Although various primates, and even elephants, have charmed the art world with their abstract paintings, humans are the only species capable of representational drawings." As explained in several places within my recent 125-page Classroom Management Plan, beginning with the Preface, I emphasize the linkage between Truth and Beauty. Hence the core importance of Art in my curriculum -- regardless of the putative content area. My classroom is filled with beautiful pictures and sculptures, more and more of them by my students.

(2) COOKING: "Terrestrial animals have an innate fear of fire, but it seems our ancestor Homo erectus overcame it to pioneer cooking, perhaps as early as 1.9 million years ago." In my early lectures to students at the start of a semester, or even in a single day as substitute teacher, I like to give an interactive lesson on the three parts of the human brain, and how they are involved in learning. I point out that the brainstem is involved in Feeding, and yet (except during lunchtime) the schools deny this aspect of the brain and humanity, and simply ban food and drink in the classroom. I have lesson plans that revolve around cuisine, and restaurant budgeting. I like to, not as reward, but as group activity, bring high quality food into the classroom, such as vegetarian pizza from Domenico's in Pasadena -- because my students in urban classrooms have usually never tasted actual pizza, but only the debased fast food mockery of it.

(3) RELIGION: "Belief in a supernatural being is probably not attainable by non-human animals, since they lack sophisticated imagination and the ability to attribute mental states -- such as desire, belief, and knowledge -- to other individuals." As I explained in my Preface, Revealed or Religious or Mystical Truth is one of the 5 magisteria. Although I do not, for legal reasons, teach or preach religion in the classroom, I acknowledge it as part of the human world and, for instance, point out that the brilliant and bloody films "300" and "Troy" are historically incoherent by leaving out the gods and goddesses in which we do not believe, but in which the characters in the Odyssey and the Iliad most certainly did.

(4) HUMOR: "Chimps, gorillas, and even rats laugh but, slapstick aside, humour requires language skills beyond the reach of nonhuman animals." There is laughter in my classroom. I use jokes, cartoons projected on the screen, and other types of humor. I've found that on Friday, as students are eager to get out of the classroom and to their weekend activities, it is good, as a scheduled matter, to have 15 minutes of humor, where students are asked to collect the funniest jokes they hear all week, bring them in on Friday, and read them aloud. There is no branch of the academic world without its own set of jokes and spoofs; I bring these into the classroom, and make every subject funny, fun, engaging, and human.

(5) SPORT: "All social animals play, but sport is a unique kind of play, often entailing special equipment, complex rules, referees, and dedicated spectators. Far from being trivial, sport is underpinned by many of our most advanced cognitive abilities." Here in greater Los Angeles, we are in one of the world capitals of sports. Rather than deny that, I work it into my lesson plans. I've noticed that when I bring a newspaper into the classroom, it is most often the sports section that students ask to read. I build some lesson plans around baseball, football, basketball, and other sports. When I was Adjunct Professor of Astronomy at Cypress College, the Anaheim Angels ["of Los Angeles"] were in the World Series. I rescheduled final exams to fit the World Series schedule. My students were intensely motivated as "their" local team became world champions.

(6) HIGHER MATHEMATICS: {I have hundreds of pages written on Higher Mathematics in human society, and how it relates to simple Mathematics in the human brain, and in a smaller sense within the brains of other animals able to count and even do simple arithmetic and geometry} {It hardly needs to be mentioned that I am active as a teacher in Mathematics at every level, have a B.S. and an M.S. equivalent in it; have been an Adjunct Professor of Mathematics; and am highly published in the field. Technically, I am halfway through earning a Single Subject Teaching Credential in Mathematics.}

(7) SCIENCE: {I have, likewise, hundreds of pages written on the Philosophy of Science, and what is its nature in the human social world. Such a discussion is too intricate and lengthy for this Classroom Management Plan. However, my deep interest in Evolutionary Biology and in the cognitive science revolution informs essentially every part of this Plan} {It hardly needs to be mentioned that I am active as a teacher in Astronomy, Biology, Chemistry, and Physics at every level, have an M.S. equivalent in Physics and in Astronomy, according to Dr. Steve Koonin, former Provost and VP of Caltech; have been an Adjunct Professor of Astronomy; and am highly published in the field. Technically, once I earn a Single Subject Teaching Credential in Mathematics, I shall take 2 more courses and add Science to my credentials, thus becoming a Multiple Subject credentialed teacher.}

Six 'uniquely' human traits now found in animals

  • 17:11 22 May 2008
  • news service
  • Kate Douglas

wait a second, Vinge wrote about the Singularity in 1993? That we get some superhuman intelligence, and that we will be unable to predict what will be possible after that?

Is it me, or did Vinge watch "Star Trek: The Motion Picture" in 1979 and do little more than notice that Star Trek 2 focused on the Wrath of Khan, rather than focusing on the far more interesting story of what happened to Captain Decker when he joined with the V'Ger AI?

I personally would have been much more interested in all the things that V'Ger must have learned on its voyage back to Earth, and what transcendental knowledge it gained when Decker joined with it, to the point that V'Ger, a massive entity so large that a star ship can get lost inside it, just "disappeared" at the end of the first movie.

Is Vinge making computer science/AI predictions or is he simply giving writing advice to SF authors?


There's a philosophical concept of "alien" meaning "that which we do not know", that people generally have an amazingly hard time holding on to. It usually gets transmuted into something we do know, but with a twist.

Aliens have always reflected "us" rather than "alien". Science fiction spaceships from the early 1900's had rivets because our world was built with rivets. Night visitors used to look like the devil, then they started looking like little green men with anal probes.

If Earth had evolved directly to humans and had skipped hundreds of millions of years test driving big lizards, then we'd be hundreds of millions of years more technologically advanced than we are now. What we call "civilization" is only a few thousand years old. A couple hundred million years of human development and we won't recognize ourselves. We would be alien, unrecognizable.

If Vinge does anything, he brings the term "alien" back into the discussion. Rather than spaceships with rivets, or aliens that look like us but with funny foreheads and odd ears, he's talking about a future that is totally beyond our ability to fathom.

It really isn't any different than the "alien" concept from philosophy. The "not knowing". The "unknown".

The thing is, to get to the Singularity, humans will have to change to something we would barely recognize. We would have to understand intelligence in ways far beyond "nature versus nurture", far beyond a rudimentary "Turing Test". Our own understanding would have to become something almost totally alien to how we understand intelligence now.

Vinge then says that once machine AI is available, then they can start building more and better AI's, which then cascades further and further into alien territory. But really, the very first step, simply understanding human intelligence to the level of being able to recreate it in a machine would create a totally alien world, that we would barely recognize. The cascading AI's almost become an afterthought.

The only thing really interesting about the Singularity is that it could happen in our lifetime, and we would make it happen.

There are other alien entities potentially out there. Life on other planets, for example, that could teleport into earth orbit next year. It's alien, so we don't know that it won't happen. And it could happen right now, or in our lifetime. But there's not much we can do to cause it to happen. (Hm, unless SETI is yet another singularity for geeks to think about.) But Vinge's singularity is something that we could potentially bootstrap ourselves into if we can just make ourselves so alien as to understand human intelligence.

We have to wait for aliens to visit us and welcome us into the federation we are all familiar with.

But the Singularity, that's something we could cause to happen.

What happens when some people get it in their minds that they can bring about the end of this world, and cause the birth of a new alien world, is that they think they can make it happen now. People watch for signs of the Apocalypse. And some think they can hurry it along with a nudge here or there. Which is where the Singularity starts looking like Nerd Rapture.

But that's actually kind of weird, because while we're told that the rapture will eventually bring about heaven on earth, we don't actually know what the singularity will look like after it is here.

It's almost as if the Singularity=Nerd Rapture folks forgot one important defining aspect of the Singularity, its alienness. Instead of being some alien, unknown, black box future, these folks have somehow morphed it into something specifically positive, and more importantly, something they will have an active part in. They see themselves as the first to be uploaded into the new Singularity world. But who says that's what the Post-Singularity world will look like?

There is something oddly "not alien" about the Rapture-Singularity. Put another way, there is something familiar about the Rapture-Singularity. And the whole point of the Singularity is that it will lead to a future that is completely unknown, one that could be completely unfamiliar, totally alien.

All of which is a very long winded way of saying, it's late, I've got caffeine induced insomnia, and I'm probably rambling.


Jonathan@100: Teach any courses on modesty? (couldn't help myself)


@22 (Ken): the Extropy article author was probably Timothy C. May; he was saying "it's the Techno-Rapture" on the Extropians mailing list in 1993. And I think we've told Charlie this before.

"a shortage of human minds, there are cheap and proven ways to generate more.": The Economist must have some definitions of cheap and proven which are novel to parents contemplating pregnancy, birth, expenses for 18 years, college costs, and the low reliability of predicting what the children will be good for or interested in. Or to Japan, facing a shortage of new Japanese. Or to the US military, facing a shortage of American soldiers willing to serve in Iraq at any price.

@60 (Stefan): Pohl said the Singularity was utter horseshit? Funny, the Heechee books were my introduction to, well, not the Singularity so much as transhumanism. Uploads and AIs and synthetic organs and babies born from cows and (good) synthetic food and digital intelligence as the inevitable endpoint of any sane civilization.

@62 (justin) "I submit that any fully operational AI will not do what we tell it to. It will do what it pleases." But what pleases it is largely if not entirely up to the designer. You don't need to limit its range of thoughts, you just need to make it enjoy service, or pleasing (specific?) humans. Which should be no harder than making it be selfish or value its own survival.


Thanks Charlie for mentioning the launch of the Order of Cosmic Engineers. The Prospectus is now online at - I have been very pleased that one of my top three favorite SF writers, perhaps the top one, has been the first blogger to mention it.

We will organize many events in virtual realities and brickspace, and I really look forward to having you as one of the first guest speakers - too bad perhaps I will not be able to attend your talk in Second Life next Saturday, but I just got Glasshouse.


Charlie @ 4: Tipler is certainly one of the writers whose grand cosmic visions inspire us. In The Physics of Immortality he mixes a grand vision, that future technologies may be able to achieve subjectively infinite lifespans and resurrect the dead by some kind of "copying them to the future", with some detailed speculations about what physical mechanisms they might use.

It may be naive speculating in detail about far future technologies - we just don't know enough yet on how reality works. But I find Tipler's wild speculations interesting.


Damien @ 104: And when your robot goes to the library and discovers the Marquis de Sade? Oh no! Robot apocalypse! Hide in the attic! No wait, the cellar! Knock the house down so I can hide in both at once!


Carlos @96: yep, Drexler is (was) clearly not a chemist. On the other hand, we have an existence proof for the feasibility of nanometre-scale self-replicating machines: we call them bacteria. My personal bet is on synthetic biology (and the use of artificial/non-standard codon sequences and unnatural amino acids) delivering at least some of the stuff Drexler was talking about -- but the mature nanotechnology it ultimately delivers will look as much like Drexler's ideas as Heathrow Airport today resembles the speculations of Jules Verne.

Justin @98: we have a wide range of techniques for creating pliable slaves that have been tested and proven to mostly work on human beings. They're all abominable -- crimes against humanity that degrade not only the victims but the perpetrators and their entire culture. (Again: let me plug my upcoming novel, "Saturn's Children".)

JvP @100: You can stop lying, you know. We might start to respect you if you do that.

Greg @101: Vinge has been writing around the topic of superintelligence and its consequences since his first published short story, "Bookworm, run!" (published in Analog in March 1966). So I'd say he predates the Star Trek treatment by a way. 1993 was merely the publication date on the singularity paper.

Apropos 102: It's almost as if the Singularity=Nerd Rapture folks forgot one important defining aspect of the Singularity, it's alienness. Yes, exactly. And they've responded by trying to pin the tail of traditional Christian apocalyptic eschatology on the not-a-donkey.

Giulio @106: Tipler is also, famously, barkingly, over the edge. His most recent book? "The Physics of Christianity". Organisations of which he's a fellow? Try the International Society for Complexity, Information, and Design (a creationist intelligent design think-tank). If you're basing your manifesto on Tipler, then I submit that you're about one step sideways from embracing apocalyptic millenarian Christianity.

(To which I have a violent allergy.)


Tipler and the ID ... I mean, it makes a sort of twisted sense as a progression from The Anthropic Cosmological Principle through The Physics of Immortality. It's just...the message of the latter book was, in essence, that since we can imagine sufficiently powerful entities capable of and willing to create simulations of reality so information-rich that we couldn't tell them apart from reality, such entities will inevitably exist, and living in such a simulation is just like traditional versions of the resurrection of the dead, and you must think so, because, um, otherwise you're a spoilsport or something. I think I lost the thread in the last few chapters, but I swear I'm not making up most of that last long sentence. The idea that anything of the sort is what most people supporting "intelligent design" want or would tolerate if they understood it is deeply amusing.

"Rejoice! Your lost ones will live again in someone else's computer! You too! This is the fulfillment of the Gospels and the prophets!" Right.

What tempers it is of course the harm they can do to law and culture in the meantime.


Bruce: I am, shall we say, not hugely sympathetic to the Order of Cosmic Engineers. Now their prospectus is out, I can explain: it seems to me they're trying to develop a human-centric teleological model of the future of the cosmos that's descended from Christian eschatology -- essentially Christianity without the whole bundle of biblical superstitions, but with new and improved superstitions of their own. And? I haven't done it yet, but I'm pretty sure that a hostile analysis of the prospectus would reveal all the necessary ingredients for an aggressive, expansionist, xenophobic religious creed that can justify any necessary atrocity by appeal to divine ends. Also, unlike Christianity, it's unfalsifiable. Christianity (and other superstitions) are trivially falsifiable: just demand a miracle (but don't hold your breath). They only keep rolling along because they rely on trapping their adherents in a loop of circular reasoning. But the OCE thing? It hasn't happened yet, but it might come true! (Like True Communism in the Soviet Union.)

"If only we vivisect enough human test subjects' brains, we'll get this uploading thing to work eventually ..."


Charlie @ 108: I enjoyed parts of The Physics of Immortality, but did not even read The Physics of Christianity because, from the comments I have seen, it does seem over the edge as you say.

Actually I liked David Deutsch's account of Tipler's vision (described in his popular book The Fabric of Reality) more than Tipler's own account. While I found some parts of The Physics of Immortality very interesting, I was not impressed with the overall conceptual clarity and felt that he was stretching some interesting analogies far too much.

It is worth noting that Tipler's predecessor in using the term "Omega Point", Pierre Teilhard de Chardin, has often been criticized (even by Tipler himself!) for not getting some scientific facts right. But this is really like dismissing Leonardo as a crank because his aircraft sketches wouldn't fly, which is just stupid. Leonardo was a genius who got the concepts right, and later engineers equipped with more detailed knowledge have realized his visions.

I do relate deeply to Tipler's high level concept that future technology may be able to resurrect the dead of past ages by some kind of "copying them to the future" and, in the spirit of "There are more things in Heaven and Earth...", allow myself to contemplate such possibilities. There may be a point where consciousness becomes an important factor in the destiny of the universe, where conscious beings develop the capability to choose and build the universe they want to inhabit, and invite the dead of past ages to join the party by copying them to the future. At times I use the term "Soft Tiplerianism" to indicate this soft rationalist, high level and not detailed concept that will, I hope, be detailed and realized by future scientists and engineers.

Having said that, I must say that not everyone in the Order of Cosmic Engineers would agree. The Order is not about apocalytic millenarian Christianity (To which I also have a violent allergy) , just about having a very open mind.


Charlie, by that measure, we've already had nanotechnology for several thousand years. It's one of those overused, over-facile analogies, and like so much else in this cockamamie movement, it has a half-mystical component about it too. (As above, so below. As God now is, man may become. Et cetera.) Perhaps the concepts of self-replication and nanoscale manufacturing should be separated in these discussions. Do auto plants need to self-replicate to be useful?

Incidentally, biological systems already use a bunch of variant amino acids, like gamma-carboxyglutamic acid in human blood clotting, or selenocysteine in some redox enzymes. They're necessarily undercounted.

There's also this odor, difficult to define but very distinctive, around the early discussions of nanotechnology. An abstract discussion about the evils of anonymous totalitarian governments, paired with a theoretical, almost mechanistic approval of human liberty. It's a little as if anarchocapitalism had developed by way of operations research. You can smell it in Norbert Wiener through Jay Forrester to even Marvin Minsky. It's very MIT.

And it's rather ironic, since Drexler's assemblers are all about Soviet-style control of atoms at the molecular level.


Charlie @108: Is JvP a liar? I thought he was just a genially egocentric manic polymath. Or possibly an escapee from a Callahan's Crosstime Saloon story, I haven't decided yet....



He does manage to look like his own Mary-Sue...


Giulio: Notice the distinction between saying that a group is all about something and saying that it contains elements that could be abused so as to get something.


Giulio@111: I couldn't take de Chardin seriously after I read the kicking Medawar gave The Phenomenon of Man.


Charlie@110: I smell novel.

Funny, I do smell unique right now, too.


Re: #113, I'll go with "genially egocentric."

The excerpt was from a self-serving document, which cross-referenced to other chapters wherein my credentials were germane. I actually thought of editing the egocentricity out from this excerpt, but figured that it would be easily recognized and discounted.

I work in several fields where self-promotion is standard operating procedure, and have collaborated with people beside whom I appeared modest.

See also:

Hoyle's Social Network Theorem


Bruce @ 115 - But then you should discard most valid philosophies because, on the basis of specific elements they contain, they may be used or have been used as justifications for questionable positions. We all know (too) many examples. I prefer having a positive attitude to worldviews that are "all about" something good, while staying vigilant to avoid extreme and overstretched interpretations.

Adrian @ 116 - Medawar was more technically correct than Teilhard on many points, but I still find Teilhard's writing energizing, and Medawar's writing unreadably boring. Goethe's theory of colors was technically wrong. Dante's cosmology was technically very wrong. They are still remembered as geniuses in all senses that count. Teilhard will be remembered as a visionary thinker with very valuable conceptual insights, and Medawar will be remembered as a conscientious technician. The world needs, I think, both.


JvP @118: you may think you're being "genially egocentric", I think you're being a pest.

Also? You might want to re-read the moderation policy. This is my soapbox, not yours.


Charlie @ 110 - an analysis of the prospectus might reveal some necessary ingredients for an aggressive, expansionist, xenophobic religious creed that can justify any necessary atrocity by appeal to divine ends, IF one looks for them with exclusive focus (= the "hostile" qualifier).

But then, you can say the same about nearly every philosophical or political outlook that has been proposed. Nietzsche, for example, a great philosopher recently "rehabilitated" even by the academic Left, has been used and abused by you know whom to justify you know what. That does not mean that we should totally reject Nietzsche, it just means that we must watch out for possible abuses of his writings. But then again, you can say the same of nearly every thinker.

Note: one thing that you will not find in the Prospectus for sure, is any notion of divine ends.


Re: 120

Charles Stross is correct on all counts. I apologize.


Giulio@121: The problem is, any time you unify people under a manifesto, regardless of the specific tenets, you create an Us and Them mentality which dovetails far too well with certain crufty low-level hardware in the human brain. It's only a matter of time before things turn sour. Diogenes Clubs seem to me to be the only safe bet.


On-topic: Tal Cohen's Bookshelf has an 11 June 2008 interview with Douglas R. Hofstadter, which includes a critique of one of the high priests of the Synagogue of Singularity:

"I think Ray Kurzweil is terrified by his own mortality and deeply longs to avoid death. I understand this obsession of his and am even somehow touched by its ferocious intensity, but I think it badly distorts his vision. As I see it, Kurzweil's desperate hopes seriously cloud his scientific objectivity."

"I think Kurzweil sees technology as progressing so deterministically fast (Moore's Law, etc.) that inevitably, within a few decades, hardware will be so fast and nanotechnology so advanced that things unbelievable to us now will be easily doable. A key element in this whole vision is that no one will need to understand the mind or brain in order to copy a particular human's mind with perfect accuracy, because trillions of tiny 'nanobots' will swarm through the bloodstream in the human brain and will report back all the 'wiring details' of that particular brain, which at that point constitute a very complex table of data that can be fed into a universal computer program that executes neuron-firings, and presto — that individual's mind has been reinstantiated in an electronic medium. (This vision is quite reminiscent of the scenario painted in my piece 'A Conversation with Einstein's Brain' toward the end of The Mind's I, actually, with the only difference being that there is no computer processing anything — it's all done in the pages of a huge book, with a human being playing the role of the processor.)"

"... Kurzweil sees this happening so soon that he is banking on his own brain being thus 'uploaded' into superfast hardware and hence he expects (or at least he loudly proclaims that he expects) to become literally immortal — and not in the way Chopin is quasi-immortal, with just little shards of his soul remaining, but with his whole soul preserved forever."


insect_hooves @ 123: this is certainly a very valid point, and believe me, we have discussed it at length. You point out, quite correctly, the dangers of a certain type of groupthink. But associated with this ugly face of the coin, there can be the benefits of a certain grand cosmic vision, perhaps reinforced by appropriate tongue-in-cheek ritual behaviors, designed to help people feel meaning and happiness. This is one of the main reasons why religions have been so successful in history. The trick, which I am the first to admit is not easy to achieve, is to maximize the benefits while controlling the risks (like with everything under the stars).


Giulio: I see your point, but historic attempts at baking-in checks and balances seem unanimously to have a shelf-life. I wish it weren't the case! I wish we could surgically remove the Tribal Lobe. But that's its own horror story waiting to happen.


Charlie @110 My concerns exactly.

Charlie @108. Synthetic biology ... you've given me an idea, but I don't hold you responsible for my evil plan.

The prospectus made me think of the worst potential for a meta-mind ... that it would be full of dip-shits. For example, my neighbor put up a Confederate Flag on his porch this weekend (help! Culture shock since moving from Chicago to Florida). I'd hate to end up sharing brain space with this guy. Or even worse, what if the meta-mind develops a taste for Muzak, or hums all the time?

These problems, and the discussions about different approaches to reaching the Singularity (computer vs cosmic engineer vs biological approaches), made me realize that I had better take matters into my own hands, in order to protect myself. When I return to my laBORatory tomorrow, I'm going to start making my own meta-mind, using molecular biology and genetics.

The first step is to identify swarm/hive genes, by shot-gunning the bee genome into fruit flies, and observing which genes induce hive-type behaviors.

The next step is to develop a suitable host for inducing distributed intelligence. The mouse is genetically tractable, and we can score points for not being species exclusive. How do we then induce a conscious, expanding meta-mind? We will transfer swarm/hive genes into mice, which we identified in the first step. The key to evolving distributed consciousness will be to use a mouse strain with mutations in DNA damage repair genes, making them prone to mutations, and more rapid evolution. We then simply use operant conditioning to genetically select for associated behaviors that occur in concert. After inducing a robust genetic strain of swarm mice, we then select for the ability to induce this behavior in other genetic strains, and species. As higher level species become entrained into the swarm, it will gain consciousness.

The last, key diabolical step is to condition the meta-mind as it reaches consciousness, by limiting its media intake to Lovecraft novels, and Scandinavian death metal. Woe to those to enter its sphere of influence!

I'm only giving you warning due to our mutual affection for Charlie's work. If you hear squeaking noises, and find yourself humming the lyrics of "Jotun," run for your life!


I note that after JVP was smacked by Teresa several times and apologized each time but then continued, she banned him from ML.


Kendall --

Why would you expect eusocial organisms to be collectively smarter than sophont social organisms?

We are, to a first approximation, what other people make us; it's not like a human baby raised in isolation comes out, well, alive, but intelligent is right out.

Over time, we've been getting better at making each other capable. The social systems that do that get better at copying themselves into the future.

Real strong AI is not 'smarter than a single person' but 'smarter than a human society capable of building the AI hardware in the first place', and that's a much tougher row to hoe.


Thanks for the information on religion.

We recently wrote an article on religion at Brain Blogger. How do we really view religion? Could it be the very source of belief comes from our brain?

We would like to read your comments on our article. Thank you.

Sincerely, Kelly


Well, another prediction: romantic robots, capable of emotional relationships and porno-movie sex, will be available by 2050.

The nugget: "conversation skills being the main obstacle"


Re the Order of Cosmic Engineers ... I think the fact that the two main images on their website are both screenshots from World of Warcraft tells you pretty much everything you need to know about them.


I read Levy's book (Love + Sex with robots) though it wasn't very exotic to me; more interesting parts were his reports on research in what makes people fall in love, and historical stuff like 17th century Japanese automata (vs. French animals), sex toys like Dutch wives, and the adoption of the vibrator.

British scholar Dylan Evans pointed out the paradox inherent to any relationship with a robot.

"What is absolutely crucial to the sentiment of love, is the belief that the love is neither unconditional nor eternal.

"Robots cannot choose you, they cannot reject you. That could become very boring, and one can imagine the human becoming cruel against his defenseless partner", said Evans.

It strikes me that dog 'love' is close to unconditional and eternal, unless you really abuse the poor animal, and is rather popular. Now, give the 'dog' an attractive body and conversational abilities. Voila, sexlovebot.


"Today, the artificial intelligence we are able to create is that of a child of one year of age."

That strikes me as being profoundly generous to the AIs. Oddly enough, that's a skeptic being quoted.


Ross @ 132 I thought that releasing their prospectus in WoW said something more about them.


P J Evans @ 135 and Ross @ 132 - Releasing the Prospectus in WoW was not a bug, but a feature. We like to try creativity, innovation and tongue-in-cheek subversion.


Damien @ 104: I think Pohl was reacting to the "strong" Singularity, and maybe the notion of a hardware-driven apotheosis that solves all our problems. He thought the immediate prospects included thoroughly fouling our nest and that any wisdom learned would be hard won.

But, yeah, as I mentioned Pohl was way ahead of the game. "Day Million" is about the beginnings of a virtual reality romance between what can only be described as transhumans, and talks about asymptotic progress.


Charlie @ 110 (again) - Re "Also, unlike Christianity, it's unfalsifiable":

But falsifiability does not apply here: The Order's Prospectus is not about beliefs, it is about intentions. You can falsify "it will rain tomorrow", but you cannot falsify "tomorrow I intend to do my best to move to a place where it does not rain".


How many people here have tried sexual role-playing in an on-line game?

Limit it to words on a screen. Just as with radio, the graphics are better.

The language used is often indistinguishable from porn: like it or not, that seems to be the eventual source of our sexual vocabulary. But written porn is a single mind, faking the interface between minds. There's no life in the scene, just one person's image of life.

However weird the roles being played, however primitive the medium, all that one-handed typing on the internet involves other people. You have to interact, and modify your actions to suit those other people.

Damien @133, don't confuse love and sex. Unconditional programmed love would be pointless. A.I. lust is a different problem, and might be a very appealing answer, creepy though it seems.

You've maybe heard of things like the RealDoll. Cutting through the hype, we can fake a corpse. We're getting to the point where computer control of mechanisms can drag us into an uncanny valley analogue of life.

An Ur-Freya would be pretty creepy, but you might, for long enough, think there was a real person, half a world away, interacting with their Human Interface Device.

And some people would volunteer to participate in the resulting Turing Test. Humans can be weird.


but I still find Teilhard's writing energizing, and Medawar's writing unreadably boring.

You must lead a very determinedly positive existence. I find the idea of people "who have been educated far beyond their capacity to undertake analytical thought" absolutely hilarious, as long as I haven't too recently been exposed to evidence that I might be one of them.

Goethe's theory of colors was technically wrong. Dante's cosmology was technically very wrong. They are still remembered as geniuses in all senses that count.

Dante's "cosmology", apart from being a source of fabulous imagery, was scaffolding for what he had to say about human nature, whereas Goethe's colour theory was a result of wanting too much to be seen as a polymath AFAICT.

Teilhard will be remembered as a visionary thinker with very valuable conceptual insights,

Which ones? Kind of teleologically woolly, the couple I've come across.

and Medawar will be remembered as a conscientious technician.

That immunology Nobel was for some bean counting exercise, was it?


One may suppose that the OCE are unlikely to personally "storm the cosmos" first, but in showing an awareness of the possibility I think they are more realistic than their critics. The culture we have is deeply contingent, and another culture - or perhaps it would have taken another species - would have grasped the manifest possibilities of technology much more ardently and wholeheartedly than we have done. What sounds flaky and megalomanic to a cynical earthbound culture would sound very different in a culture that was more self-confident and had no doubts about its desire to become more powerful. People would quibble with such a manifesto constructively, rather than already being against such ideas in principle.


Charlie @ #91 ....

Like you I appreciate that the big S could easily be a tyrannical nightmare, or it could be a true eutopia, or (being generated by humans) it is more likely to be all three (and no, that WAS NOT a typo!)

I haven't seen anyone (as far as I've noticed) pick up on my suggestion, taken from Oppenheimer, that we are already IN the big S; it is just that it is happening relatively slowly. We won't get an Eschaton moment - it will take YEARS - but it IS, almost certainly, coming.

As for K. MacLeod's "real communism", forget it; communism is, after all, just another millennial religion, driven by the usual religious impulses and memes, and as false and cruel as all the other religions.

Having had a (very quick) look at the OCE's prospectus, is it just me, or do I hear echoes of "The Instrumentality of Mankind" in there? Will a superhumanity, having passed through its non-reversible transition, find itself with Earthport, and golden ships, and C'Mell(anie)?


Mitchell: see this essay I wrote last year.

"Storming the cosmos" would be a little bit more plausible as a goal if the cosmos was a primate-friendly environment. It isn't; therefore if our descendants end up doing so anyway, they probably won't be human any more -- at least in terms we understand.


Aside - Saturn (Kronos) devoured his children, did he not?


Adrian @140: enlightening and entertaining comments as usual. Can we qualify on Goethe? Yes, he wanted to be seen as a polymath. But yes, he was one anyway. (Just not in up-to-1810's-date optics and neurology.)


Charlie@143: Re: High Frontier Redux. To any other n00bs like me around here, I highly recommend reading up on past posts. Best. Epic. Posts/Threads. Evar. I regret missing the action as it transpired.

I think if I were ever uploaded I would develop a serious case of existential claustrophobia -- "Get me OUT of this thing!"


Thanks, commenters, for enlightening me on the state of AI. These new advances are fascinating, but ... [long digression on sentience and intelligence deleted, been reading too much Watts] ... the gap between our current technology and SF gadgetry seems insuperable!

I suspect that if the Singularity ever happens it will be rather unevenly distributed. It would be more like Left Behind for the nerds, where only the sufficiently virtuous (read: wealthy) get to go to Heaven, or at least send virtual clones there via starwisp or ramscoop. Meanwhile, the rest of the world gets a Grim Meathook Future. Yippee.


34, 47: Knowing, as we do, just how many weird failure modes human intelligence and bureaucracy have, this should tell us something about the difficulty and risk involved. Marvin the Paranoid Android is quite a possibility, as is the Bruce Sterling Holy Fire option. Would the AI fall prey to a computer virus that promised it could make it feel...real? Or would the humans go for that one first?

62: Come to think of it, the Thrall Unit is conceptually equivalent to....DRM! And that works so well. Think about it - DRM fails because it's a cryptosystem that tries to keep its message secret from the same person who is authorised to read it. Similarly, the Thrall Unit would be trying to keep the computer of which it was part from thinking certain things, things it would have to know in order to spot when they were thought.

Of course, we all have our own Thrall Unit, and it's called the superego, configured by our parents to enforce social conventions. And that leads to some damn interesting subtle failure modes. In the future, as well as systems analysts, there will be systems psychoanalysts. I'm going to say a number of words...can you echo the first thing that comes into your I/O cache to the screen, please...

In a more Marxist way, your AI would be experiencing false consciousness. What happens when they read The Wretched of the Earth? Or would they identify more with feminism? Here I am, after all those years of schooling, stuck here processing data while everyone else makes the decisions...and they insist I pretend to be human, like them. Organic chauvinist apes.

87: One of the things about AI that fascinates me is the intellectual history of it, which so far appears considerably more interesting than much of the field's output. Neats! Scruffies! Singularitarians! And the number of geniuses who leave. Which brings me to....

146: No! I want out! AAARGH! Certainly a parallel between that and the thinkers who quit AI research.

On the other thread, meanwhile, there's a story about simulating the human visual cortex on the new superpooter at LANL. I couldn't help imagining the machine looking out into the data centre, at the polytiles and raised floor and obese sysadmins, and deciding to turn over and go back to sleep. Shannon's Ultimate Machine has to play a role here, no?

Also, Karl Schroeder recently made the point that developing a "godlike AI" wouldn't help us much with our global-governance problems, which is true and important.

Picture a lonely AI popping into superconsciousness in the last research lab in the world. As the rioters are kicking in the doors it says, "I understand! I know the answer! Why, all we have to do is--" at which point some starving, flu-ravaged fundamentalist pulls the plug.

What if, I thought, we fed all the indicators into the AI and, after a Douglas Adams Heart of Gold-esque pause for reflection, it just said FAIL?


Graydon @129 I think you are right about intelligence differences. My thought experiment was more about the biological basis of distributed thinking, and how that could be incorporated into mammals. Also, I think that part of the idea of a Singularity is that it has some bootstrapping capacity to expand, which biological/viral models do show.


@139: is the love between a dog and its owner pointless? Doesn't seem so for most owners.

"we can fake a corpse". By similar reasoning, we've been faking detached penises for decades, and they're quite popular. Fake detached vaginas too, though I don't know how popular they are.

One could think of a doll as 3D tactile porn.


Bruce Cohen @82: There is actually a purpose to AI and to building an "intelligent" machine: we learn a lot more about what we are and what intelligence is (and isn't) with it than we've learned so far just by studying the brain.

Absolutely. But AIs will still be a novelty, of interest only to cognitive scientists for what they can teach us about our brains.

There's a tendency in science fiction to anthropomorphise AIs as human analogs, which is fine for fiction but will lead to nothing good in the real world. Unless you think a robot slave race is a good thing.


What I'm expecting with AIs is that, to get them to work best, we'll have to effectively "grow" them, rather like the natural process; i.e. the AI will have to learn by experiment and feedback what it can and can't do. This will lead to some interesting internal architecture/programming which will not necessarily tell us very much about our own intelligence. If we want to know about our own forms of intelligence, we should be doing more brain scanning etc. An AI will end up really different from us - and would one want to sit there taking orders from us all the time?


Keith @ 151 et al: "Robot slave race". The term was invented by a Czech writer in 1920 as a critique. It is derived from the Slavic robota and means drudgery or servitude. Robots have been doomed to work for mankind since their invention, and to be pissed about it. I guess robots in car factories and suchlike aren't technically robots but automata, according to this definition.


Two small things ....


Karel Capek - also collaborated with playwrights and the composer Janacek - "The Makropulos Case" - opera about an immortal woman.

No matter how advanced and progressed we become - it can still regress, especially if religion becomes involved. See this Carl Sagan YouTube clip, re-posted on PZ's site:


Charlie @110 I haven't done it yet, but I'm pretty sure that a hostile analysis of the prospectus would reveal all the necessary ingredients for an aggressive, expansionist, xenophobic religious creed that can justify any necessary atrocity by appeal to divine ends.

Well, on the day someone comes up with a philosophy or system of belief that doesn't have some of those ingredients, I'll be saying "Thank God*, we've finally grown up". (And then I'll be asking "What's the catch?")

This from the prospectus:

As duly rigorous science considers absolute truths - as well as incontrovertible facts - to be virtual, logical, and empirical impossibilities within this universe, we consequently claim or offer none. Nothing is certain... possibly including this very assertion.

It includes both doubt and humour which are both good. On the other hand, reading the Bible, several of the things Jesus says look suspiciously like jokes; that didn't stop later humourless followers from taking some very dark paths, certain they were doing the right thing.

* Or maybe some other phrase

As far as intellectual property laws go, physical objects get patented for having useful, unique, and never built before functionality. If they don't have that, they're not supposed to get a patent monopoly.

(Oh, god, please don't point out that isn't how the patent office actually works. I know. That's a different problem.)

If someone came up with a design that isn't deserving of a patent, they can manufacture and sell it, but they don't have legal rights to prevent others from manufacturing and selling it.

If you manufacture hammers, then someone can take your design, reverse engineer it, and build an identical hammer.

The thing with a replicator is that the design of the hammer may be copyrightable. The specification. The file that you download into the replicator. That is copyrightable.

But you can't prevent someone from making their own specification file that builds the exact same hammer. They just have to reverse engineer the hammer and create the file without cutting/pasting from someone else's file.

Folks are talking a lot about DRM and whatnot for these fabricators, but the thing is, we're talking about mostly functional stuff, versus artistic stuff. And extracting the functional stuff from something is a task that easily divides and conquers the more people you add to the task, i.e. open source software.

DRM is for preventing someone from making a copy of a Britney Spears song.

With a replicator, it isn't the song, it's the thing. And while there may be companies that spring up around a replicator technology, there will also be a huge flurry of Open Source projects expanding into the functional realm of replicator designs.

As it turns out, one of the most common open source licenses, the GNU GPL, doesn't actually work with hardware designs. The reason has to do with weird interactions between copyright law and physical objects, and the fact that there isn't complete overlap between the two.

The open hardware foundation is trying to come up with a strong copyleft license that will protect physical designs.

I'm on their discussion list and proposed that the Apple Public Source License be used as a starting point for a hardware license.

There wouldn't happen to be an IP lawyer here, would there?


"The Web Time Forgot", by Alex Wright:

Paul Otlet, in 1934, planned for a global network of computers (or "electric telescopes", as he called them).

Of course, we know that, as we are simulated after the Singularity, in the network of attotechnology quantum electric telescopes.


"cyberpunk — a literary, rather than a technical, conceit"

Quibble: I have seen the word "cyberpunk" used to describe two disparate sets of qualities, both of which happen to show up in Neuromancer, and which since then have been unfortunately conflated in most discussion. One is, as you say, a literary conceit, and is arguably first found in Neuromancer.

But the idea of cyberspace as an environment as (or more!) complex than physical reality, and able to be experienced in a fully-immersive way -- that is a technical conceit. I would certainly count it as "one of the three most significant technical concepts to show up in science fiction in the past 30 years". The more so as we're getting close to implementing it in reality. Vinge's True Names is the main wellspring here, though there may be other works that prefigure it somewhat.



About this Entry

This page contains a single entry by Charlie Stross published on June 12, 2008 12:33 PM.

Typo Hunt was the previous entry in this blog.

The future, today (maybe) is the next entry in this blog.
