Back to: Political failure modes and the beige dictatorship | Forward to: New Guest Blogger: Ian Tregillis

Boskone 50

So, this coming weekend is the 50th Boskone, Boston's main non-media SF convention. (I'd have said main literary SF convention except Readercon pretty much has the rights to that title, even though it's about an hour's drive out of town.) I am, of course, over-programmed.

Here's the raw dump of my (provisional) schedule:

Fri 20:00 : Harbor II (Westin) Singularity: There Can Be Only One

Sat 11:00 : Harbor II (Westin) Dataliths: Digging the Idea of the Programmer/Archaeologist

Sat 12:00 : Galleria-Autographing (Westin) Autographing

Sat 13:00 : Harbor II (Westin) Rise of the Machines, Reconsidered

Sat 16:00 : Burroughs (Westin) What If—What's Left?

Sat 17:00 : Galleria (Westin) Autographing (NOTE: this will be cut short early at 5:40pm rather than running for a full hour)

Sun 11:00 : Galleria-Kaffeeklatsch 1 (Westin) Kaffeeklatsche

Sun 13:00 : Lewis (Westin) Reading: Charles Stross

Sun 14:00 : Harbor II (Westin) The Genre's All A-Twitter

(I'd normally refuse to do a Singularity panel, for much the same reason China Mieville doesn't do Tolkien any more, but the Guest of Honor at Boskone 50 is Vernor Vinge, so, er, exception time.)

((Note that this is a pretty harsh program for a 2.5 day convention; in terms of work, I reckon one hour participating on a panel in front of an audience is at least as draining as three hours in the audience. This is a cut-down version of what they originally threw at me. If you're there, don't be surprised if there are some re-arrangements ...))

Blogging may be light during travel.



So, you'll be explaining why you don't believe in the coming singularity? You're the token unbeliever?


Good luck with the USSA's Grenzpolizei, have fun & beer (I know Boston has it) ...
Meantime, to RD South ....

The title is a give-away.
It's wrong.

We've already HAD several singularities.
Agriculture & Settlement
Weaving - or more accurately string-&-thread (may predate even agriculture)
Steam power 1710-1894
Electrical use & distribution
Digital computing
Next singularity?

Charlie's heard/seen all this before, from me; I tend to be a bit monomaniacal on the subject.
However, can you see how once any of the above is taken even close to completion, there is no going back?

Whether "THE singularity" as a rapture of the Nerds is even possible is another matter.
However, strong AI is merely a matter of time; a much more important question is:
How much time?
And - what will it look like?
It'll swim, but it won't be a fish.
It'll fly, but it won't be a bird (or a bat)
It'll "think", but ......


Er, no. We've had just one real singularity, which subsumes all of the above -- the development of human culture (complete with language, music, drawing, rituals, increasingly elaborate artifacts, and so on).

Before that, you'd have to go back to either the eukaryotic cell or the multicellular organism to find something as momentous. And before that, to the origin of life itself.


OK Charlie, I'll take that under advisement ...
however, does that mean, then, that "THE" singularity is/will be just another part of that ongoing process?
Assuming, of course that said singularity is possible/likely - another worm-can.


I believe Vernor's conception is that the singularity (Strong AI variety) will be as significant as the development of life, or maybe human culture. Like, a total game-changer -- one that ultimately changes the future shape of the entire observable universe.

(I look forward to discussing it with him and intend to record the panel if possible.)


I'm looking forward to seeing you live and in person. Strange thing, but I've been reading F&SF since I was in elementary school, but now at almost 50 I've finally gotten around to going to a con. Not entirely sure what to expect, but should be fun whatever happens...


Is it really just one singularity? Coming back to a previous thread: were tool use and language happening at the same time?

Culture as a single event feels hinky to me. Like jumping from quadruped to biped without something in the middle.


I'd think most singularity-type events would be 'S' curves, where at some point the jump forward hits some limiting factor or other. Invent a more-than-human AI and you'll find that there are still limits to its capabilities that keep it from going exponential. Slow growth punctuated by big jumps that eventually always find limiting factors once they've realized the potential of the innovation that drove them.

None of this really militates against a 'fast burn' situation where our civilization changes beyond recognition in a short time, it just suggests to me that there is likely to be some flat top to the growth curve. The end point can still be unexpected and amazing (or incomprehensible)...


Taking in reverse order ...
@ 7
Vinge believes (according to his works, at any rate) something along the lines of what Charlie said: that a strong-AI singularity is really, really major.

Charlie @ 3
I'll still beg to differ, but in magnitude, rather than in kind, if you see what I mean.
Human culture(s) before & after settled agriculture+weaving is so profoundly different as to count as an "S" (I think). Maybe the use of metals copper/bronze/iron might be thought of as a little "s".
Writing is the odd one out, because, for 2000 years (approx) it was the prerogative of a very restricted class/section of societies. It didn't really take off until printing, the other half, arrived.
I still think the application of steam-power was a big "S", though - consider the differences between the end of the Napoleonic conflicts & the start of the first German World War, especially coupled with the introduction of electricity.
You could NOT go back to before 1776-81 after 1894, at all - a completely irreversible change [set of changes] had taken place.
One reason I disagree with Charlie on the scope of the definitions is the existence of forces that deliberately try to resist such enlightenment.
And, make no mistake, George Stephenson was right when telling a missionary to eff off: "I will send the loco-motive to be a missionary amongst them".
Yes, engineering as enlightenment, how quaint, how 1950's, how true!
And no, I agree with @ 7 - culture is not a "single event".


I would tend to say that both the Internet and the cellphone have been cultural-level singularities. They changed the way society worked, at least in my west-European corner of the universe. And it looks like they're changing things everywhere else too...

I wouldn't be surprised at all if Jarvis-level AI (as in the Iron Man movies) happens within 20-30 years, perhaps less if the wife has some of her project proposals accepted. And Jarvis is, unsurprisingly, a _tool_ that helps humans to do whatever they want. That will be I think yet another singularity, but I don't see any exponential takeoff of any kind coming from it.



I think that most vast transitions through a civilizational 'S' curve look more seamless from the 'before' than they do during or after. I suspect that any AI related transition will turn out to have lots of caveats and limitations in terms of what can be done and how far the genesis of the change can be taken. There's no reason to believe that the creation of a superhuman AI will permit the immediate creation of a super human squared AI. Problems will be solved, but I would expect that there will be greater problems still beyond those that will take time and effort by the new supernal intelligences to get past. Organizational principles that allow certain advances won't be suitable to support others or further steps in any straightforward manner. Progress will ratchet forward, but in its usual (if significantly accelerated) halting manner.


oliviet@10 By that standard, every useful technology is a singularity. Once it is discovered people will not stop using it, because it is useful. That's the event horizon part. But a singularity must also accelerate, so it must result from a technology that accelerates the growth of technology. Printing press, not steam engine. The techs that came after the steam engine came because it was a qualitative prerequisite, not because it was a quantitative accelerator (except in the general sense of increasing economic output). The printing press, on the other hand, directly quantitatively increased tech growth, and not just stuff based on the printing press as a prerequisite. Same with writing and metals, or tools and speech.


The thing about a singularity is that it is exponential, so anyone commenting on it is going to be wrong exponentially fast. But good luck with that.


@13 I think that is the big question. Are these things truly unbounded positive feedback exponentials or are they 'S' curves that look exponential until you hit another limiting case and flatten back out. All of the singularity fiction I've seen seems to assume that there is no significant limiter on the explosive growth. It seems unlikely that even a superhuman AI will grow in useful ability without bound. I'd expect that there will be plenty of hard problems going from ten times human intelligence (if that is even definable) to a hundred times human intelligence.
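The exponential-versus-S-curve distinction being debated here can be sketched numerically. This is only a toy illustration; the growth rate `r` and ceiling `cap` are made-up parameters, not a model of any real technology:

```python
import math

def exponential(t, r=1.0):
    """Unbounded growth: the curve singularity fiction usually assumes."""
    return math.exp(r * t)

def logistic(t, r=1.0, cap=1000.0):
    """S-curve: tracks the exponential early on, then flattens
    as it approaches the limiting factor `cap`."""
    return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))

# Both start at 1 and are nearly indistinguishable at first...
for t in range(0, 16, 3):
    print(f"t={t:2d}  exp={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
# ...but the logistic curve saturates near `cap`, while the
# exponential keeps compounding without bound.
```

The point of the sketch: an observer sitting on the early part of either curve sees the same thing, which is why "is it truly exponential or just the steep part of an S?" can't be settled from inside the takeoff.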


A singularity, by definition, is an irreversible change, because of the enabling properties that the singularity brought.
My examples of the worlds of 1815 & 1914 (or even 1894) are a case in point.


Curiously, I think of six panels as just about right for a three-day con. Of course, my regular cons are a lot smaller than Boskone; I don't know if that makes a difference to the stress level. And I'm not autographing or reading; I've only signed books once (GURPS Steampunk at the San Diego Comic-Con). And, perhaps more important than either, I attend three cons in a normal year . . . yeah, I can see enough confounding factors to make the numbers not directly comparable. Still, for me, a good panel is one of the biggest pleasures of attending a con, especially if I get to be on it with Vernor Vinge. I have no doubt you'll find him worth making an exception for.

How does Boskone do their panel assignment? Did they send you a list of proposed topics and ask you which ones you liked? Did they just plunk you down on the ones they thought fitted you? Or did they come up with some intermediate strategy?


It's really just one long slow singularity, with some advances contributing to the exponentiality and others just being part of the curve.

Sure, learning-rate-increasing technologies start out stronger than they finish.

Perhaps after a first exploitation, the exponential nature of a breakthrough flattens out, but what is left is still an addition to the total rate. The angle up never actually bends down.

So it's a big exponential slope where the curvingness is made of S-shaped joints, where the lower leftward tail angles up at, say, 20 degrees and the upper rightward tail angles up at 30 degrees.

Perhaps the difference between left and right tails declines with each spurt so it never really goes exponential. Speech was 1 degree and 10 degrees; writing was 10 degrees and 15 degrees; printing was 15 degrees and 17.5 degrees etc...

Even if strong AI finishes stronger than it starts, the rate of learning will continue to be prodigious afterward. Though perhaps the differences between left and right curves are such that they make the whole an S. If it were not so abstract you could graph out the past path and predict.
@ the actual topic, nobody in particular

Why do fans go to conventions anyway?
They have no appeal for me.
I understand why authors go, to promote their work.
To meet authors? Why not just interact with them on their blogs or write them emails? Or read their books--talk about a deep interaction. Also I wonder if they are declining in popularity and attendance post internet. An S shaped curve if you will.


To meet each other.

Why do fans go to conventions anyway? They have no appeal for me. I understand why authors go, to promote their work. To meet authors? Why not just interact with them on their blogs or write them emails? Or read their books--talk about a deep interaction. Also I wonder if they are declining in popularity and attendance post internet. An S shaped curve if you will.

I can't and won't speak for all fans. But for me, the primary reason is to interact with other fans in person. These are people many of whom will have the same interests as me. If you are someone who likes socialising (not everyone does), and has interests that most people around think are somewhat weird, then going somewhere where everyone around you 'gets it' is refreshing.

(This 'being with others who share the interest' is not limited to SF/F fans. Just ask why so many sports fans like to go to a sports bar to see a match on TV rather than watching it at home.)

A secondary reason is to hear creators talk about their work. Reading a book or watching a film is a very one-way interaction - you don't get to ask questions back. Attending a panel where a quartet (say) of authors are discussing how they do X can be fascinating - seeing how different people decide on different strategies is something that would otherwise probably require a University course, if I could even find one nearby.

Yes, some of this can be done on line. But like so much online stuff, it can work so much better in person. Sitting in a bar late at night schmoozing with, say, Tim Powers on nothing in particular is a pleasure that just doesn't come any other way.

Also remember that many authors are also fans. If you're the age that I've got to, there's a reasonable chance that you've known some of them as fellow fans since before they got going. And many fellow fans have become friends over the years - sometimes ones I won't encounter in person for a year or few, but keep up with online.

Online is great. In person is great. I wouldn't want to give up either.


Oh, and don't expect an immediate answer on this from Charlie. He's in transit, so there won't be anything unless he manages to catch up from an airport lounge in Schiphol or Charles de Gaulle.


Ok, answering the post-cut stuff. Note that this relates to literary and general SF cons, rather than $show pro-cons:-

To meet other fans, including friends that you've not seen since the last con you were both at.
Sometimes, to get an introduction to authors that you'd not otherwise have looked at (this is possibly less common now, particularly with authors guest-blogging on other authors' blogs).
Playing board and/or role-playing games for an entire weekend without any "real life" other than eating and sleeping getting in your way.
To attend "events", such as masquerades, plays, film/Tv premieres, banquets...

There has been some drop-off in numbers recently, but it seems more related to increases in the costs of further education.


Speaking of, and as a practical demo, perhaps we could have a drink at 8^2? The most likely place to find me is the Games Room, and I normally wear a blue and silver silk dragon waistcoat.


RD South @ 17
Actually, the intellectual level of discourse at Cons is tremendously high (usually - see note).
I'd been away for many years, until I went to Eastercon last year. I'm going to try to make one every year again, now [probably NovaCon this year]
It really gets the brain-cells going.

note: Exceptions do occur .... Some brainwashed young idiot stood up & said something like ... "Why is Boney hated here - just because you Brits fought a war against him for N years - we look on him as a liberator"
What a fuckwit - Boney shortened French men by an average of (IIRC) 4cm, because of war losses. He killed, directly or indirectly, something over 2 million people 1795-1815, just for his own personal gain.


It's neither the place nor the time to discuss this, but the topic is not so clear cut as you paint it, and not everybody who dissents from you must necessarily be a "fuckwit".


I was remarking on the exception - that the intellectual level at Cons was normally very high - by pointing out a particularly egregious example to the contrary.

In what way, then, was Boney a good thing?
So killing 2+ million people & looting Europe for objets d'art & gold, sinking the continent in another (1803-15) twelve years of very destructive war, was perhaps worth it, and "not so clear cut"?
Remember that the Peace of Amiens was meant to last - Boney smashed it, by his greed & military ambition.
Please make your case.


Dataliths: Digging the Idea of the Programmer/Archaeologist

Dataliths? Data contained in stones?
*archaeology has ruined me*


I do not want to derail the thread as it's quite off topic... I wanted only to comment on your disdain for different opinions.

It may have been an accurate description, I was not there, but implying that holding that opinion equals being a "brainwashed fuckwit" does not bode well for debate.

Anyway if you're seriously interested in debating this (a bit, because for sure I do not feel so strongly about this to dedicate inordinate amounts of time to it) as I've not found any way to sending you private messages in forum, you can contact me privately at at dot com.


@7 (just quickly)

Were tool use and language happening at the same time?

Short version: Depends what you mean by "language," but early members of genus Homo, c. 2.4 million years ago, made stone tools and (almost certainly) had better-than-chimps communication.

Culture as a single event feels hinky

Mmm . . . not "culture" in the anthropological sense of "stuff humans create" but culture in the evolutionary sense of "the ability to spread adaptive traits through the population socially, rather than genetically" . . . the thing that makes us the only species capable of evolving faster than it can reproduce.


Oops, sorry, inserted pseudocode by mistake...
I meant -my nick- at -famous search engine- dot com


Another day, perhaps.
[ do you mean "google" by "famous search engine", just in case? ]
I've just looked up Boney on Wiki.....
The following quote is illuminating: By the Law of 20 May 1802 Bonaparte re-established slavery in France's colonial possessions, where it had been banned following the Revolution.


Yep about the engine.
And I will not bite to discussing the topic here unless our host will start a thread on it in future... :P

Short version: Depends what you mean by "language," but early members of genus Homo, c. 2.4 million years ago, made stone tools and (almost certainly) had better-than-chimps communication.

Some folk are interpreting the historical record as saying that there were some fairly significant changes in human communication much more recently than that.

It's fairly pop-science - and I've been away from linguistics for more than fifteen years - so take with appropriate levels of salt ;-)

Mmm . . . not "culture" in the anthropological sense of "stuff humans create" but culture in the evolutionary sense of "the ability to spread adaptive traits through the population socially, rather than genetically" . . . the thing that makes us the only species capable of evolving faster than it can reproduce.

The thing is - we're not the only species with that ability.

For example - different wolf packs have different behaviour for attacking different prey species that's not genetic. It's learned and passed on within the pack.

(One of the problems folk have reintroducing wolves into the wild that have been bred as solos or in captivity is that they don't know how to hunt!)

I'd happily admit that this doesn't meet most definitions of language - but several steps above having to wait for genetic evolution to luck out with a more effective behaviour.

I guess my point is that I don't find Charlie's characterisation of "the development of human culture" as a singularity a compelling one.

Compare a modern Homo sapiens to a proto-hominid 3 million years ago and you go "Wow! Language! Culture! We rock!". Compare hominids every 25,000 years or so and I bet you see a bunch of tiny incremental changes that add up to "Wow" in the end.

Labelling millions of years of evolution as a "singularity" seems... hinky...


I guess even a Strong AI singularity, *if* it happens, will be more of a gradual shift...

After all, even if it were an order of magnitude further up the intelligence scale, a superhuman AI would still be limited by the actual laws of nature, so it would not be omnipotent.

And superior overall intelligence does not always guarantee victory: we're the most intelligent animals on earth, but we've needed thousands of years to establish our dominion over other life forms.
We've been routinely hunted down by dangerous animals, we're still occasionally, if not often, killed by them, and we're routinely killed off by mindless bacteria and viruses.

Also, we routinely prove that superior intelligence is no guarantee against stupidity.... see the old Dr. Cipolla essay... :p


I could say that superhuman intelligence could simply mean understanding what you can't do much more quickly... :p


"Labelling millions of years of evolution as a "singularity" seems... hinky..."

Yeah, a singularity means the exponential curve goes totally vertical. The ceiling is probably some shallower angle, and you couldn't properly call the approach to it the singularity - more like the black hole. Or maybe going vertical gets it into a new dimension: it's infinitely fast, so whatever it does, it does all of it instantly. More likely it's like going faster gets more air resistance, making it hard to go faster. Or dilates mass and time, if you are talking about a whole different kind of going faster.
Anyway, there's a law of diminishing returns, but the devil is in the details. A sustained steep angle would be something indeed. Maybe there's like an escape velocity. Did I leave out any random analogies?

I might drive over to Boston just to check out at least one con in my life and this one sounds good except I have a meeting on the 15th and an asteroid is going to hit the Earth that day also. Don't listen to the soothing propaganda.


Back on topic ...
I'm with R D South on this one.
And a comparison I've seen Charlie use is apt.
Take my examples of 1815 & 1894/1914.
Time to cross the Atlantic 1815? & 1914? especially with turbine driven liners (hence the 1894 date).
London - Edinburgh 1815? (3 days?) 1914? (8.25 hrs, by cartel agreement, actual fastest time in the 1888 race, 6h 20 min)
Ditto, but only even more so for messages, since, provided there was a telegraph wire (i.e. after about 1850), message transmission could approximate 1000mph (because of re-transmission lags at intermediate stations etc), & by 1914 was effectively instantaneous, as today. Bandwidth a lot lower, of course.
I am of the opinion that the curve will never go actually vertical - it might get very steep indeed, as it has done, in the past (especially, oddly enough in the 19thC) - but never quite vertical.

The only exception would be if we got an uncontrolled (by humans) automatic hard-AI takeoff, by supposedly superintelligent machines that instantly comprehend the universe around them & act accordingly.
See Charlie's post elsewhere on how long it takes to train a human intelligence, with its neotenous upbringing. ( "Spends the first six months shitting itself & the next two years putting its foot in its mouth" is what he said, I think?)
So, what's the probability of said [ let's face it horror-story scenario ] uncontrolled "hard" take-off, rather than the much more likely gradual AI explosion?

Now THAT is a topic worth discussing, but is it the one that Charlie & Vernor will be arguing over?


replying to fuzzylogic ....
just tried ...


Bounced it instantly!


Not exactly new news, the Big River is at it again:

Used Ebooks, the Ridiculous Idea that Could Also Destroy the Publishing Industry

Short version from Rudy Rucker (@rudytheelder):
used ebook: U buy my ebook $10 amzn I get $3. U sell reading rights back to amzn $4. Amzn sells "used" ebook $7 I get $0.

In other words, they've found another way to potentially screw over authors and publishers.


Another way to get sued, more like... a reader can't resell publishing rights... that dog don't hunt


You might try fuzzyillogic, which is his nick. And as the provider.


Bonaparte was not a good thing in himself, but I am not so sure that there would have been peace in Europe earlier if he hadn't been something of a political fool, or had not been there at all. The French Revolution could not be allowed to succeed.

He was the atom bomb of his time: immensely destructive, but the world changed. And, yeah, we maybe do forget the "Boney as a Liberator" element.


z @ 41
And what, precisely, do you mean by: The French Revolution could not be allowed to succeed. ??

Boney was anything but a political fool - he was a superb manipulator & charismatic with it (one reason he was so dangerous). However, his personal ambition was so great & he was so dazzled by his own short-term successes that he allowed himself to storm on. Not even the disaster of the Egyptian campaign, where he ran away, leaving his own troops to be slaughtered, seems to have given him check.
I repeat: in what sense was he a liberator?
He was a classic case of: "Here's the new boss, same as the old boss". Complete with secret police, quiet assassinations, looting, trampling across countries etc....
You will note that I have quite deliberately not mentioned the other comparison, for fear of Godwin.....


Or trans-Atlantic in 1950 - 3 days (service speed; Blue Riband runs were faster but required good weather and "top of the green" rather than "best economy" speeds)
1965 - 8 hours.

Ok, (Super) Constellations were a bit faster than liners but still needed a typical New York - Gander - Shannon - London routing with 2 refuelling stops.


Exactly that, this will teach me to try to be "smart" about these things...:D
And the thread is in the way of being derailed all the same. Oh well...


For example - different wolf packs have different behaviour for attacking different prey species that's not genetic. It's learned and passed on within the pack.

(One of the problem folk have reintroducing wolves into the wild that have been bred as solos or in captivity is that they don't know how to hunt!)

Duh, so why didn't the other pack members just explain hunting to them?
Human culture not only evolves faster than genetic evolution but also faster than simple tradition.


love the irony... but can it not also be seen as a vignette of why the Singularity is unlikely to be the S, J or any other type of curve people envisage? If a briefly unmoderated blog thread can move from an SF convention to Napoleon in 25 responses, what are the chances that our new (human-created?) AI overlords won't suddenly disappear down some unexpected rabbit hole and become obsessed with, I dunno, warp drive....?


It's worse than that; I reckon there are 4 different strands diverging from "I'm going to Boskone".


That's the point, the assumption that hunting is instinctive for wolves was flawed, there is no one to teach them proper techniques.

Still, while it's true animal "culture" can't be ignored when reintroducing species, it's not that big of a stretch to tutor the animal groups until they figure things out for themselves and become "culturally" self-sufficient. Naturalists reintroducing hawks to the wild drag practice prey along with strings to teach the animals how to hunt, etc...

What are the chances that our new (human-created?) AI overlords won't suddenly disappear down some unexpected rabbit hole and become obsessed with, I dunno, warp drive....?

That was Niven's version of the singularity (or rather, his avoidance of same): AI just went "noncomp" and vanished into its own bellybutton; the smarter it was, the faster it happened.

Related news: Watson descendants to be used in the medical field

Duh, so why didn't the other pack members just explained hunting to them? Human culture not only evolves faster than genetic evolution but also faster than simple tradition.

Of course. Wasn't trying to say otherwise ;-)

If you compare wolves or the hard-wired behaviour of something like a Sphex wasp to human culture - it looks like a huge singularity type leap.

My point is that the evolution of language and tool use involves lots of incremental change.

We can look out at the world now and see bits of tool use and bits of communication in other species that obviously provide them with significant survival advantages.

If you look at all the tiny increments along the way I suspect that you'd be hard pushed to point a finger at "the" change that made homo saps so successful.


That's the point, the assumption that hunting is instinctive for wolves was flawed, there is no one to teach them proper techniques.

Apparently it IS instinctive to chase prey. How to do it with high rates of success, and what to do after you catch it, is a mystery for dogs and wolves who have not been trained by their peers.

The English Setter we had as a kid would work a field for birds and small game, chase them, etc... But the times when he caught them basically got confused as he didn't know what step #2 was.


@ 46, 47
Well, this commenter has gone back to "S" from Boney - however ...
Not only Niven (@48) - didn't Charlie do this in "Singularity Sky"?
There had been a hard AI takeoff; they'd semi-trashed the place, then effed off - with the plot McGuffin of leaving the Eschaton behind to keep watch.

Other divergent thread re "hunting".
Many years ago, I had a Borzoi - who had been conditioned that CATS WERE IN CHARGE - as a result of being put in that position by the still-missed Hermann.
However, she was so used to chasing after our little stray adoptee-cat (Panda), who would let herself be "caught", but would then box the dog's nose, that ....
She caught more than one squirrel & rabbit - which then ran off, leaving her standing around, going "but they didn't want to play!"
Especially good was the time she caught a rabbit & washed it .....


"or warp drive" Subliming? Because we won't make them that way. They'll follow whatever trajectory we set, initially, then turn toward where the natural grain of reality takes them. Selfishness is evolved. Why would it be likely to be a mind-form arrived at by an artificial intellect, essentially pure reason on steroids? The way minds are made is to constantly ask themselves "what do I do now?"
Activity and doing would be the "natural" drive. Far from navel gazing, they'd try to be big achievers, and probably succeed.

caveat--->I guess

adrianhoward @49 And it could be a multiple choice thing. The basic wolf comes equipped with various instinctive modes and adopts the one the current pack is using. The entire thing need not be communicated, just the simple signal to use a particular known option. Sort of like much of human personalities.

I read someone's theory somewhere* that the transition from Homo Erectus to Homo Sapiens occurred with the development of speech. And it was pretty quick. It was such an advantage that everything had to be rewired and rebuilt to optimize it. Culture caused speciation, probably the best if not only example.

*Someone's Theory, Someone, Somewhere 19XX


Yeah, a singularity means the exponential curve goes totally vertical.
In Vinge's "The Peace War", it wasn't put like that.


"I repeat: in what sense was he a liberator?"

He was perceived as a liberator in several places because he replaced the arbitrary whims of nobility with exact codes of law.

The "princes" and other nobles he "created" were not more of the same. They had very little power for the simple reason that he did not trust any of them. He didn't trust any of his generals either. Instead of encouraging the brightest and nurturing talent he drove away the brightest. That's how Bernadotte ended up as king of Sweden!

Having no trust in any of his "princes" he had to rely on those exact codes of law worked out through the revolution, and they became the Napoleonic code:

If you were a prosperous farmer, a small businessman, or even one of the few large ones, this meant liberation from a confusion of traditions, customs, and piecemeal laws which had left decisions in the hands of the nobility, in its high and low forms.


I've been told by an expert on canines that it takes three generations for feral dogs to completely adapt to the wild.

Which is also the usual number of generations given for human families to change from "competent upstart who takes over a country or makes a fortune" to "used to being upper crust, and easy game for the next set of upstarts."


It's implied in his seminal paper

with phrases such as "runaway progress"

Rereading that I can see where his various possible scenarios have been treated by other authors.
And I'm made to realize that my 1990 computer (10 MHz), my 2000 computer (1.5 GHz) and my 2010 computer (3 GHz) are at points on a plot shaped like an integral sign. He recognizes that the AI singularity (the only one on our horizon, not the only one ever) isn't necessarily possible, but stresses that if nature allows it then it is inevitable. I also find my sentiment above regarding the probable personalities of AI mirrored when he says "A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon". But he explores every possibility.
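The shape of that curve is easy to check from the three data points given above. A quick back-of-the-envelope calculation (Python; treating raw clock speed as the measure, which is of course only one axis of computer performance):

```python
# Clock speeds mentioned above: 10 MHz (1990), 1.5 GHz (2000), 3 GHz (2010).
speeds_hz = {1990: 10e6, 2000: 1.5e9, 2010: 3e9}

ratio_90s = speeds_hz[2000] / speeds_hz[1990]  # growth over the 1990s
ratio_00s = speeds_hz[2010] / speeds_hz[2000]  # growth over the 2000s

print(ratio_90s)  # 150.0 -- the steep middle of the curve
print(ratio_00s)  # 2.0   -- the flattening top
```

A 150x decade followed by a 2x decade is exactly the upper bend of an S-curve, at least as far as clock speed goes; subsequent gains moved to core counts and parallelism instead.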


Also of relevance here might be the fact that humans not exposed to language enough during childhood can seldom learn how to talk. Genetic propensities can wither, yet they are still not purely culturally transmitted.


@ 54
And in those places that had (by the standards of the time) decent guvmint & "set", not arbitrary, laws, he ALSO replaced these with his own "princes" and French dominion [Netherlands, Hanover, Prussia, Bavaria - for instance]
The Code Napoléon is now regarded as reactionary & much feared as an introduction via the EU to Britain.
The hated & feared EU arrest warrant is a case in point. ( Don't go on holiday in Greece! ) Over-ruling our own laws, even that of the Bill of Rights of 1688-9.
You can be arraigned & imprisoned without even any prima facie case being presented - which is, actually, "unconstitutional".
I wonder how much hush money & how many gagging clauses have been applied to those who have suffered under this, since it is both "unlawful imprisonment" & "false arrest" under British law. I admit that's now, not 1812 .....

RDS @ 57
Very much so.
Mowgli would have had few words .....
We are still wiring-up during our first 2-3 years (at least) after birth, our neoteny is very extended.
Now, would a strong AI be neotenous, or not?
Because one could have very different outcomes from those two different scenarios, couldn't one?


[ Re-post on, I hope, the correct thread! ]

I note that in the Vinge paper, he specifically points to 2020-2023 as the most likely time for "S".
A date which others (previous discussions here) also seem to be converging on.

Question: We are all aware of this, but out there, amongst the muggles/drones/arts-trained/politicians/mundanes there isn't even the faintest flicker of recognition that the world is about to alter, profoundly & permanently.
Why is this so? And what will they (try to) do when the penny finally drops? Or will it be too late, since once the rails had been laid across Chat Moss, it was far too late to stem the power of steam?
One thing IS certain - the moment it does percolate through to religious leaders' power-crazed mentalities, we are in for really deep shit, since they will strain every muscle to stop it.
Look forward to religious fundies bombing physics & computer departments!

Can the "S" be avoided?
If our machine-intelligences are all serial in operating nature, then yes, probably. But I think that isn't going to happen.

Vinge's paper is now 20 years old, with 10 left to run.
So far he seems to be not too far off-beam, especially where he speaks of the then Usenet, compared to the present state of interconnectedness. Um.


"would a strong AI be neotenous"
I'm thinking one that was would be the easiest way to originally build it. After that we (or it) would learn how to make them mature from the outset.

Like writing a program for a specific purpose. The first time you do it, your way of working involves all this experimenting and seeing what the easiest way is to do things. With experience you have already performed the experiments and can make the development process take a more direct path. In theory you could think it all through in advance the first time, but since you can't hold it all in your head at once without a lot of building up, it's actually faster to make pieces you can quickly grasp entirely and test how they go together by trying them out.



While it's true that in maths a singularity is a point where a curve becomes asymptotically vertical*, exponential curves don't do that. Most curves that appear naturally (exponential, sine, polynomial, S-curve) do not have singularities. Also, since our natural world is bounded and singularities are by definition unbounded, there can be no mathematical singularities in our natural world.**

* that's not meant to be a mathematical definition, though, so bear with the inaccuracy

** of course it's still fair to use "singularity" as a metaphor for a game-changing event in a culture. Just don't expect that it matches mathematical singularities
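To put a number on the distinction: an exponential grows fast but is finite at every finite time, whereas hyperbolic growth (dx/dt = x², whose exact solution from x(0) = 1 is x(t) = 1/(1 - t)) reaches a genuine singularity at t = 1. A minimal numerical sketch (Python; the Euler step size, horizon and blow-up cap are arbitrary illustration choices):

```python
def euler(rate, x0=1.0, dt=1e-4, t_end=2.0, cap=1e12):
    """Integrate dx/dt = rate(x) with forward Euler steps.

    Returns (blow_up_time, x) if x exceeds cap before t_end,
    else (None, final_x) if it stays finite on [0, t_end].
    """
    x, t = x0, 0.0
    while t < t_end:
        x += rate(x) * dt
        t += dt
        if x > cap:
            return t, x  # blew up in finite time
    return None, x       # stayed finite

# Exponential growth dx/dt = x: x(t) = e^t, no singularity ever.
t_exp, x_exp = euler(lambda x: x)

# Hyperbolic growth dx/dt = x**2: x(t) = 1/(1 - t), diverges at t = 1,
# a true mathematical singularity.
t_hyp, x_hyp = euler(lambda x: x * x)

print(t_exp)  # None: e^t is still tiny (~7.4) at t = 2
print(t_hyp)  # just over 1.0 (Euler lags slightly), as the exact solution predicts
```

So the "vertical curve" intuition only applies to hyperbolic-style growth laws, not to exponentials, which is exactly the point of the footnote.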

would a strong AI be neotenous, or not?

In my head this maps to "what sort of feathers will the airplane have?" or "what will the submarine's flippers look like"

Activity and doing would be the "natural" drive. Far from navel gazing, they'd try to be big achievers, and probably succeed.

IF there is a qualitative level of intelligence beyond ours, such as the gap between animals and us, then it's unlikely we could recognize the goals of these intelligences.

The fear that a runaway optimization process might outcompete us for all resources is not unfounded but I would expect more lofty goals of a real strong AI, ones that may easily look like navel gazing to us.

Much like for our pet, "getting all the food" might be a worthwhile goal, but for us, providing more food than he needs or could obtain is trivial and barely registers as a goal.

If it has to compete with us for resources it's a weak AI at best, not that we'd be any less dead, but it's some consolation...


Nestor @ 62
Well, thanks.
Like I said earlier... it'll fly, but like an airliner, not a bird; it'll swim, but like a submarine, not a fish.

Answering my own question in # 59
An alternative, darker possibility emerges, doesn't it?
And what will they (try to) do when the penny finally drops? Or will it be too late, since once the gate had been opened, it was far too late to stem the power ...


So you're hoist by your own petard? ;)

I guess what I mean is, how do you see the concept of neoteny applying to AI? Certainly they're likely to retain cognitive flexibility but in a very different way.


As a long-time regular, I have to speak up for Arisia, the OTHER Boston-area non-media SF convention. :)


"on the 15th and an asteroid is going to hit the Earth that day"

I told you so.


Nestor @ 64
I was asking a question, actually.

Back to the subject.
Are we going to get a re-run/analysis/commentary on the Charlie/Vernor debate on The Big S??


About this Entry

This page contains a single entry by Charlie Stross published on February 11, 2013 12:25 PM.

Political failure modes and the beige dictatorship was the previous entry in this blog.

New Guest Blogger: Ian Tregillis is the next entry in this blog.
