August 2011 Archives

I spent a good chunk of July and August in the United States; while I was there, I did a number of readings, with Q & A sessions after the gig, some of which were recorded on video.

One of the readings took place in Cupertino, in a training room at 1 Infinite Loop. The Q & A ran on for quite some time, so it's split into three segments (for YouTube's upload pleasure):

Part One

Part Two

Part Three

(Many thanks to Feorag for recording and editing this! Apologies for the poor sound quality on the questions — we're looking into a better mike solution for future gigs.)

If you look to the right of this blog entry you'll see a link titled "Talk to me". It takes you to an email form that'll send your words to my inbox. On that form it says, perhaps a little bluntly, "I hate link exchanges, refuse to publish press releases and will not publicize your project if you ask nicely. And I am not your search engine optimization bitch." That's because any number of greedy ass-hats think that my blog, with its high google pagerank, exists solely to promote their get-rich-quick scheme via search engine optimization.

Despite this, I still get come-ons from idiots who can't read.

Usually I bin them, but I'd like to deliver a special call out to "Andrew Shen", who wrote the following:

Hi Antipope,

My name is Andrew Shen, I am a regular reader of As you know, Google finally announced Google+, its next effort in social. However, because Google+ is still in beta version, Google doesn't add a functional search feature to its new social network.

I just wrote a free web app: Google Plus Search. it can search the Google Plus contents and profiles online, show the hot trends of G+. It also supports for Chrome Extenstion and Android App. The URL:


If you think it'll be of benefit to your readers, could you like to tell them about my site?

It should be fairly obvious Andrew Shen is a spamming liar ("I am a regular reader of" — yeah, right) who, moreover, doesn't read before spamming. And if you agree with me, you might want to copy this link into your own blog: Andrew Shen is a spamming liar. After all, he asked me to publicize his "free" app (which I suspect rapes your address book, sells the contents to spammers, and buys XXX-rated porn using your Google Checkout account) — shame he forgot to specify how.

(Postscript: maybe there'd be a bit less of this sort of spam on the internet if those of us with high pageranks systematically set about crucifying the grifters? Discuss.)

Okay, we're over 280 comments on the "what do you think is the most important novel of the past 10-and-a-bit years (published since January 1st 2000)?" thread.

A couple of observations have leapt out and bit me on the nose, but I'm not going to state them explicitly yet. However, here's a follow-on question suggested by my #1 observation:

What do you think is the most important novel of the past 10-and-a-bit years (published since January 1st 2000)? All male authors are disqualified. (No men. I'll also accept suggestions for books by transgendered/intersex authors. Moderation note: misogynist trolling and attempts at topic derailing in the comments will be nuked, ruthlessly.)

What do you think is the most important novel of the past 10-and-a-bit years (published since January 1st 2000)?

Explain your reasoning. (Novels by "Charles Stross" are disqualified.)

(Actually, there are several reasons I'm not on Google Plus, nor on LinkedIn or Twitter or a bunch of other social networks, starting with "attractive nuisance" and moving on through "waste of time" and "I dislike the amount of spam you're sending me" and ending in "thank you but I don't want you to monetize my personal information": but this is the stuff specific to Google Plus ...)

The designers of Google Plus seem to get that we have multiple overlapping circles of acquaintances — family, friends, schoolmates, drinking buddies, chess club members, whatever — and that we want to keep them distinct. This is good, and a big plus relative to Facebook. They also have a hair in their ass about trolling and sociopathic online behaviour, and want to stamp on it before it gets started. This is also good.

But unfortunately they have misapprehended the cause of bad online behaviour. They think that pseudonymity is an enabler and that by banishing pseudonyms they can make people behave themselves.

So Google Plus has a "true names" policy. This is broken by design.

Let me explain the many reasons why Google Plus's names policy doesn't work.

Two weeks ago, at USENIX Security, I banged on a whole lot about the implications of cheap bandwidth and cheap data storage, by way of lifelogging using devices descended from today's smartphones.

But I am currently thinking that I over-narrowed my focus.

Here's the thing: let us postulate that by 2021, we will have hit the buffers using current microlithography techniques on CMOS -- say at a resolution of 5nm (compared to today's 22nm process). (Below 10nm our integrated circuits experience interesting quantum effects, not necessarily in a good way, due to electron tunnelling.) At this point we're well into the realm of nanolithography. Today's Intel Westmere Xeon server CPU has on the order of 5 million transistors per square millimetre (on a 512mm2 die) using a 32nm production process; my BOTE calculation suggests 80 million transistors per mm2 is likely by the time we get to 5-6nm resolution, giving full-sized chips with up to 40 billion transistors.

What applications are going to hit mass consumer adoption in the wake of us reaching a point where a first-rank CPU of some 40 billion transistors (equal to, say, 16 ten-core i7s) costs US $250, and low-power CPUs (an nth-generation ARM descendant with, say, 2.6 billion transistors -- a thousand times the component count of today's Cortex-A9 ARM architecture) can deliver the clout of a ten-core i7 on a TDP of around 10mW for a component cost of around $1-2?
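For the curious, the back-of-the-envelope arithmetic above can be sketched in a few lines. The 5M/mm2 starting density and the 80M/mm2 derated figure are the post's own assumptions; the inverse-square scaling rule is the usual idealization, which overshoots, hence the derating:

```python
# Back-of-the-envelope check of the transistor-count extrapolation above.
# All input figures are the post's own assumptions, not vendor specs.

die_area_mm2 = 512    # Westmere-class server die area
density_32nm = 5e6    # transistors per mm^2 at 32nm (post's figure)

def ideal_density(density, old_nm, new_nm):
    """Idealized scaling: density grows with the inverse square of feature size."""
    return density * (old_nm / new_nm) ** 2

# Naive inverse-square scaling from 32nm down to 5-6nm:
for node in (6, 5):
    print(f"{node}nm ideal: {ideal_density(density_32nm, 32, node) / 1e6:.0f} M/mm^2")

# The post derates this to ~80 M/mm^2 (wiring overhead, yield, and
# non-ideal scaling eat much of the theoretical gain):
derated_density = 80e6
full_die = derated_density * die_area_mm2
print(f"full-size die: {full_die / 1e9:.0f} billion transistors")  # ~41 billion
```

The idealized rule gives 140-200 M/mm2, so the 80 M/mm2 figure in the post is the conservative end of the range; it still yields roughly 40 billion transistors on a full-sized die.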

Years ago, a couple of eminent computer scientists (if I remember the story correctly one of them was Danny Hillis; I forget who the other was) were discussing trends in chip production around 1980, and one of them objected to the other's extrapolation with, "but there's no market for such cheap chips! What are you going to do, embed them in door handles?" And five years later, checking into a hotel, he suddenly realized that he was using a magstripe card to open his hotel room door because there was indeed a microprocessor in the door handle.

But it doesn't take much in the way of embedded logic to operate a magstripe reader and a deadbolt. So what are the door-handle applications that become practical when low-cost embedded devices are as powerful as today's high end servers?

One trivial possibility is widespread adoption of biometric authentication based on mixed parameters that take quite a lot of processing: for example, that hypothetical hotel room door might open for you by recognizing your facial bone structure and gait pattern as you approach. Again, your car won't have a key; it will "simply" recognize you, by your face, your voice, and more subtle cues such as your pressure distribution as you sit in the driver's seat.
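The "mixed parameters" idea boils down to score fusion: each recognizer (face, gait, voice) produces a confidence, and the door combines them. A toy sketch, with entirely invented weights and threshold:

```python
# Toy sketch of multi-parameter biometric fusion, as imagined for the
# hypothetical door handle above. Weights and threshold are invented
# for illustration, not taken from any real system.

def fused_match(scores, weights, threshold=0.8):
    """Combine per-modality match scores (each 0..1) into accept/reject."""
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights) >= threshold

# Hypothetical face, gait, and voice match scores for an approaching guest:
print(fused_match([0.95, 0.85, 0.90], weights=[0.5, 0.2, 0.3]))  # accepts
print(fused_match([0.30, 0.20, 0.10], weights=[0.5, 0.2, 0.3]))  # rejects
```

The expensive part isn't the fusion, of course — it's producing those per-modality scores in real time, which is exactly where a server-class embedded CPU earns its keep.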

But that's a gimmick. By which I mean yes, it's convenient, but it's not a game-changer: we already have ways of achieving these objectives (hotel room keys — or magstripe cards — and car keys with immobilizer chips). It doesn't fundamentally change the way we live the way that, say, mobile phones or lifeloggers would bring about basic behavioural changes.

What are the consequences of powerful microprocessors getting really ridiculously cheap — applications that just aren't practical today? Things like the library digitizer from Vernor Vinge's "Rainbows End" (which I shall not describe, because it's both a spoiler for the book and a thing of horror to bibliophiles), or infinite focal depth cameras, or giving your lifelogger real-time ubiquitous text recognition (as in, everything textual in your field of vision is scanned, digitized, and indexed immediately). What am I missing that isn't possible today and doesn't substitute for an existing process or technique? Alternative formulation: if spimes are artefacts which are the physical instantiation of an entity with a trackable history on the internet, what happens when spimes acquire enough on-board processing power to act as the container for their own virtual existence?

I just hit 'send' and mailed the final draft of THE APOCALYPSE CODEX (the fourth Laundry Files novel) to my agent and editor(s) a couple of minutes ago.

It's due out in hardcover next July.

From Ace's marketing/flap copy:

For outstanding heroism in the field (despite himself), computational demonologist Bob Howard is on the fast-track for promotion to management within The Laundry, the super-secret British government agency tasked with defending the realm from occult threats. Assigned to "External Assets," Bob discovers the company—unofficially—employs freelance agents to deal with sensitive situations that may embarrass Queen and Country.

So when Ray Schiller—an American televangelist with the uncanny ability to miraculously heal the ill—becomes uncomfortably close to the Prime Minister, External Assets dispatches the brilliant, beautiful, and entirely unpredictable Persephone Hazard to infiltrate the Golden Promise Ministry and discover why the preacher is so interested in British politics. And it's Bob's job to make sure Persephone doesn't cause an international incident.

But it's a supernatural incident that Bob needs to worry about—a global threat even The Laundry may be unable to clean up ...

Good afternoon, and thank you for inviting me to speak at USENIX Security.

Unlike you, I am not a security professional. However, we probably share a common human trait, namely that none of us enjoy looking like a fool in front of a large audience. I therefore chose the title of my talk to minimize the risk of ridicule: if we should meet up in 2061, much less in the 26th century, you're welcome to rib me about this talk. Because I'll be happy to still be alive to be ribbed.

So what follows should be seen as a farrago of speculation by a guy who earns his living telling entertaining lies for money.

I am back home but stunningly jetlagged, so I won't be posting anything substantial for the next day or so. Many thanks to Karl Schroeder and John Meaney for covering for me while I've been on the road!

If you want to read more of their writing, they have blogs: Karl's is here and here's John's.

Last Wednesday I did a keynote speech at USENIX Security; I'll blog the original text in a day or two. My wife and I needed to be in Portland by Friday evening, and due to a scheduling cock-up I left it too late to book a sleeper berth on the train: so we hired a car and drove. Note: this was the first time in 18 years that I'd attempted to drive on the wrong side of the road.

Things I learned:

What he said, below...

So, thanks Karl, it was a huge pleasure being in the same tag team. (If we do it again, maybe we should have, like, costumes and a heroic name.) Thanks to Charlie for inviting us and trusting us, and I hope we haven't left too much mess around your virtual house. (Must've been Karl who left those beer cans...)

Seriously, it's an honour, and thanks to everyone who continued to visit, read and post while two Non-Charlies were playing in the sandbox. Sayonara!

Well, Charlie should be shambling away from the airport sometime around now, ready to take up the yoke of piloting his unruly ship of ideas once again, so it's time for me to say goodbye. Thanks to everybody who contributed to the fractious and interesting discussions around my postings, and for everybody who just read 'em.

John, it's been a pleasure sharing the stage with you, and Charlie, thanks again. I now turn back to finishing my thesis, working on three new novels, and kayaking around Lake Herridge, where I have been sunning myself between bouts of blogging these past several weeks. Wish me luck. I wish you all the same.

So, I heard a multitude of heads falling off when I posted Chris Priest's admonition to print off the first draft and delete all electronic copies...

I know what you're thinking. "Did he fire six shots or only...?" Er, I mean, "Is that the way you write, John?"

All right, I told my friend it was a good idea, but... No. I do not delete my first draft.

However, my second or my third draft is physically a rewrite: I don't amend much of the existing text; I write new stuff at the top of the screen and delete the older version as I work. No sentence from the first draft survives unless it works so well that I retype it during the rewrite. But I type very fast - usually I can reproduce an entire sentence faster than moving the cursor to the correct position to, say, delete a word - and the main thing is that the very first words you wrote need to be re-examined.

I suggested in an earlier post that foresight is not so much about prediction as it's about designing against surprise. Key to this is the exploration of multiple futures, which is why scenario-based foresight is so commonly practiced. Scenarios are rarely developed in isolation, but are usually created in decks (generally of four, when one uses the common 2x2 matrix method of generating them). These are then intended as snapshots taken at different points in a complex space of possibilities.
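The 2x2 matrix method is mechanical enough to sketch: pick two critical uncertainties as axes, and each quadrant of their combinations becomes one scenario in the deck. The axes below are invented for illustration, not drawn from any particular foresight exercise:

```python
# Minimal sketch of the 2x2 scenario-matrix method described above:
# two critical uncertainties, four quadrants, four scenarios.
from itertools import product

# Illustrative axes (hypothetical choices, not from the post):
axes = {
    "energy": ("cheap", "scarce"),
    "governance": ("open", "authoritarian"),
}

# Each combination of poles is one scenario in the deck:
deck = [dict(zip(axes, combo)) for combo in product(*axes.values())]
for i, scenario in enumerate(deck, 1):
    print(f"Scenario {i}: {scenario}")
```

The value isn't in the enumeration itself, of course, but in forcing you to flesh out all four quadrants instead of just the one you expect — which is exactly the discipline that guards against the "default future" discussed next.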

The opposite of scenarios is the default future, which is what everybody assumes is going to happen. If life is what happens to you while you're making other plans, the real future is what happens to you after you've planned for the default future. A classic example of what you get when you plan for the default future is the Maginot Line.

In a 1998 article in the journal Futures, "Futures Beyond Dystopia," Richard Slaughter critiques science fiction's default futures. He accuses SF of oscillating between naive techno-optimism and equally naive apocalypticism. Late 20th century SF lacks the necessary spectrum of intermediate scenarios, according to Slaughter, which may explain its decreasing hold on the public imagination. What we are left with is two default futures, and no societal capacity to plan for a third. This is an idea worth serious contemplation by those of us who write the stuff.

Sometimes, too, our scenarios grow so elaborate that they become more than scenarios--they're complete paradigms. They become default modes of thinking, and come with associated cultures, champions and institutions. At this point, presenting alternatives becomes increasingly difficult; one must present, not just new scenarios, but an entirely new paradigm to complement the reigning one.

Many people, particularly in the foresight community, believe that a shift from scenario to paradigm is what's happened to the idea of the Technological Singularity. It's become the new default future--no longer the shocking, thought-provoking alternative to an orthodoxy, but the very orthodoxy itself. Against this, it's no longer sufficient to simply present different scenarios. We need an alternative paradigm (or two, or six).

I've been working on some.

...which is normally translated as austere discipline.

No one pays you for a first novel that doesn't exist yet. Likewise short stories for paying markets. (Being independently wealthy might help, but I'm not even sure about that - it doesn't totally preclude driving ambition, but it surely can't encourage it. In any case, it doesn't apply to most of us.) Therefore you're forced to squeeze it into your normal working day/week, somehow.

Roger Zelazny (back to him) once prefaced advice to new writers with what he called extra-literary considerations before getting to the actual art and practice of writing. He justified the preface by writing: "...if it causes even one beginning writer to think ahead to what it may be like to sell a book first and deliver it by a certain date, to wonder what it will be like to write on days when one doesn't feel like writing... then I am vindicated in prefacing my more general remarks with reference to the non-writing side of writing."

Imagine a future where the most revolutionary changes in our world have not come from nanotech, genetic engineering, artificial intelligence or even space development--but from cognitive science and a deepening understanding of how humans function (or not) in groups. What would such a future look like?

We're all familiar--maybe too familiar--with one model of such a future; it's exemplified by stories like Brave New World and 1984. Those books were direct reactions to the last great cycle of research into human nature. That was the era when Freud seemed to have a true model of human nature, Marx a true model of economics (or not) and when eugenics still seemed like a good idea. (If you want to read an excellent horror/slipstream novel about eugenics run amok, try David Nickle's Eutopia, which is available from Chizine Press). These and related theories were used to justify the great 20th century human engineering efforts such as The Great Leap Forward, Soviet collectivization, and so on. The problem wasn't just that they ended up being harnessed for evil purposes, but that they were wrong or incomplete. But what would a correct theory of human nature look like, combined with the principles of self-organization and collective intelligence that are emerging right now? What would a cogsci singularity look like?

I think it would look like good manners.

My thesis is that I have no thesis. I'm writing this in something like the manner that Stephen King employs with his novels: I'll whip up a starting-point and then just go with it.

I have to say, that works better with books. His beginning is a mental representation of one or more characters in a setting that interests him, and then he lets rip. (This is one of the things that varies most among writers, the degree of knowledge of the ending when still writing the beginning. That in turn is one factor in determining which readers will enjoy the books.)

The first C.J. Cherryh novel I read was Heavy Time. For some reason, the title struck me so much that I had a vivid image in my mind before reading the book - I envisioned space opera with weird time-related physics. I guess I was thinking of time dilation in a gravitational field. [If you'll allow me to call it a field rather than distortion. That suggests another side point that might be worth picking up later, to do with metaphors in science. Wow. Must be the cappuccino.]

What do we do about wicked problems? --That is, problems that we can't all even agree exist, much less define well; problems that have no metric for determining their extent, or even whether our interventions mitigate them? I don't have answers, but will venture to suggest a direction for us to look.

The internet has exposed a flaw in our grand plan to unite humanity: it turns out that increasing people's ability to exchange messages does not, by itself, increase their ability to communicate. The Net has developed a centripetal power: for every community it brings together, it seems to drive others apart. Eli Pariser's idea of the Filter Bubble is an expression of this phenomenon. This problem arises because it is easier to communicate with people who share the same understanding of the meaning of a given set of terms and phrases than with people who have a different understanding of these meanings. Automatic translation is not an answer to our diverging worldviews, because each person and social group has their own private grammar. It takes work to learn it and that work can't be offloaded to an automated system. At least, not entirely.

Just checking in from San Francisco to let you know that I'm still alive.

I spent last week instructing at Clarion West, a workshop which can probably best be described as a boot camp for the next generation of SF and fantasy writers. It was fun, but monstrously exhausting — which, on top of eight hours' worth of jet lag, should explain why I've been scarce around these parts. I'm now On Vacation for a few days, but will be appearing as previously announced ...

* Cupertino: Mystery event, Big Fruit Company employees only, Friday August 5th.

* San Francisco: I'm reading and signing at Borderland Books, Saturday August 6th at 3pm.

* San Francisco: I'm giving the keynote at Usenix Security '11 on Wednesday August 10th at 9am.

* Portland: I'm reading and signing at Powells City of Books, Friday August 12th at 7:30pm.

Normal blogging should resume shortly after I get home — hopefully by August 15th.

In an earlier post I talked about prediction vs. preparation as different ways of approaching the future. Also foresight, which is the systematic study of trends and possibilities for the near future. When you do foresight, you quickly begin to realize that our ideas about the future are highly distorted, both by optimism and pessimism, as well as propaganda, ideology, and all the various things that various people and groups are trying to sell us. How do you cut through all of that to get some sense--any sense--of where we're really going?

One annual effort to do just that is the Millennium Project's State of the Future. This annual study of trends and drivers is grounded in research by hundreds of people in dozens of countries around the world. The full report comes with a CD or DVD containing 7000 pages of data, analysis, and background on the 15 years' worth of methodological refinement and legwork that have gone into the project. The pdf version of the executive summary is free to download here, and if you do look at it you may be shocked to discover something:

The 2011 State of the Future report is optimistic.

Near-future extrapolation, as Karl has recently written, is not for the faint-hearted. Vague though I was about the exact dates of my two near-future thrillers (10-30 years from now), I hope to be alive during that time period... So how predictable is the world? A world filled with Wicked Problems (cf. Karl's previous post).

Snapshot of my imagined near-future Britain: corrupt government, even more surveillance, local services beginning to break down, climate change and one socio-political twist. Before I get on to the twist, two things: 1) my technological (and other) extrapolation was deliberately conservative, 2) this is not a dystopia. In the London of these books, people get on with their lives, as they do now and as they've always done.



About this Archive

This page is an archive of entries from August 2011 listed from newest to oldest.

July 2011 is the previous archive.

September 2011 is the next archive.

Find recent content on the main index or look in the archives to find all content.
