December 2011 Archives

This will be, I think, the last of my guest posts here on Charlie's Diary. It's been fun to have such a big audience, and some of your comments have been quite valuable.

I'm pleased to have stood in for a man who's one of the greatest SF writers to come along in years. Charlie Stross, Cory Doctorow, and Lauren Beukes are my faves among the SF generations after mine. Not to mention all the wonderful weirdos I've been publishing in my free online zine Flurb over the last five years. Click the cover image to see issue #12. The preceding issues are online as well, and there's an index by authors. If you root around, you'll even find an old piece by Charlie.

Sometimes I get a little tired of being cast as a science fiction writer. In my mind, I see my novels as surreal, postmodern literature. I just so happen to couch my works in the vernacular genre form of SF because the field's tropes appeal to me. The downside is that, since my books have that SF label on them, many people don't realize that I'm writing literature.

The word "transreal" that I started applying to my novels in the early 1980s was inspired by a blurb on the back of my copy of A Scanner Darkly, saying that Philip K. Dick had written "a transcendental autobiography."

I got my copy of A Scanner Darkly at the first-ever SF convention that I attended, at Brighton, England, 1979--Phil's book was just out, and some friendly British stoners whom I'd befriended at the con were talking about it, complaining a little that it was "too anti-drug." They didn't seem excited about the fact that the book was probably drawn from a chapter of Phil's own life--and that it was deeply funny, at least for those who have a taste for Phil's dark humor.


[Painting for my transreal novel, Saucer Wisdom, showing, left to right, my friend Gregory Gibson, me, and two aliens.]

After the Brighton convention, waiting on the platform for my train back to London, I was reading Scanner as I stood there. And I was laughing so hard that when I boarded I left my suitcase on the platform--which I suddenly realized as the train started to move. I jumped back out in the nick of time.

Up until Scanner, I hadn't fully grasped how close Phil Dick's novels were to the kinds of books that I wanted to write. I particularly liked the language-with-a-flat-tire way that his characters talked in Scanner, and over the years I'd come to emulate his peculiarly Californian tone. And even more, I liked the sense that Phil was writing about real people.

Internet sales have already eaten about 20% of the retail market by value, and around 10% of shop units in the UK are now standing vacant. Some large retail chains went bust early in the current recession (Woolworths, notably); others are teetering on the brink (Blacks, La Senza).

Where are we going in another decade? What is the high street environment going to look like? (This isn't an exercise in retail management forecasting but in Gibsonian futurism ...)

As I mentioned in an earlier post, I've always been repelled by the notion of the multiversal world of branching time--a cosmos in which no decision matters, as you also do the opposite in some other branch of time.

Rather than feeling that the other paths are real, we in fact have an emotional, experiential sense that the bad, unchosen paths are shriveling away to the left and the right. If we didn't feel this way, why would we sweat our big choices?

I'd like to see a story in which the unchosen paths really are withering away. Or, if not withering, being somehow backed away from.

So for the purposes of an SF story I'm thinking of, I'll propose that there really is only one truly existing path through the branching thicket of possible worlds. The others are juiceless abstractions. But I do want a sense of someone feeling out the best paths as in Phil Dick's vintage precog story, "The Golden Man."

My gimmick might be to suppose that our path is not a straight line. It has kinks in it, stubs. Our cosmic world line very commonly grows a stub extending a few seconds (or longer) past a given branch point. But then it backs up and goes into the main branch. There's a continuous line of time, but it sometimes reverses its direction a bit and then starts forward on a new tack.

Literature at large has its own tropes or standard scenarios: the unwed mother, the cruel father, the buried treasure, the midnight phone call, the stranger in town and so forth.

When I speak of power chords in the context of SF, I'm talking about certain classic tropes that have the visceral punch of heavy musical riffs: blaster guns, spaceships, time machines, aliens, telepathy, flying saucers, warped space, faster-than-light travel, immersive virtual reality, clones, robots, teleportation, alien-controlled pod people, endless shrinking, the shattering of planet Earth, intelligent goo, antigravity, starships, ecodisaster, pleasure-center zappers, alternate universes, nanomachines, mind viruses, higher dimensions, a cosmic computation that generates our reality, and, of course, the attack of the giant ants.


["Welcome to Mars" by Rudy Rucker. More info on my paintings page.]

When I use an SF power chord, I try to do something fresh with it, perhaps placing it into an unfamiliar context, perhaps describing it more intensely than usual, or perhaps using it for some new kind of thought experiment.

As I mentioned in my "Rudy #3" post a few days ago, I'm working towards a certain conception of a mind amplification process that I call the Big Aha. Remember that I'm not looking for something that's inarguably true, I'm looking for something that I'd find artistically congenial for use in my next science-fiction novel. And I'm planning to relate my notion to quantum computation.


[Transreal painting of my wife in analog mode and me in digital mode, dancing in (I wish) the Riviera.]

The fundamental distinction I want to make is that my mind, or any person's mind, functions in two distinct modes: (a) the continuous, somewhat analog, wave-function mode, and (b) the discontinuous, somewhat digital, collapsing mode. Mode (a) is when you gaze idly at a menu; mode (b) is when you decide what to order.

A few commenters wanted to argue that this is a false distinction, and that wave functions never collapse. Introspection tells me otherwise: I do feel both the continuous and the discrete modes of thought within my mind.

I do know about Hugh Everett's many-worlds interpretation of quantum mechanics, under which there are no collapses because the timeline is continually branching, or, more accurately, because there is a continuum of parallel worlds--the multiverse.

But I have an aesthetic revulsion towards multiverse stories. In a nutshell, my problem is this: If everything happens, then nothing matters. I prefer to think that we live in a single and unique universe that is somehow in an optimal form--one might think of an external godlike crafter or one might equally well think of something like a bent wire that holds a soap film that has settled into a surface of minimal area.

So today I'll say a little more about my still-evolving notions, and I'll be doing this in the light of the numerous interesting comments that I got on my earlier post. When I quote a comment, I'll put it in italics, preceded by the name of the commenter.

Many years ago (we're talking about the late 1980s) I spent a year and a half as a shop manager. Well, that and a retail pharmacist running a pharmacy: but in addition to dispensing prescriptions, a chunk of retail management came into the picture. (The 24-year-old Charlie really sucked as a retail manager. I would not hire him. Luckily both stores were parts of small local chains with competent management backup—even if one of them was owned outright by a very happy junkie—and in any event made up most of their turnover via prescriptions. At which I merely sucked somewhat.)

Walking around various British cities over the past couple of years I've noticed an increasing number of vacant shop fronts (some in prime retail situations). I've also noticed a disturbing loss of diversity in our high streets, as quirky local shops give way to cookie-cutter national chains. I have, like most people, had the frustrating experience of trying to work out whether my mobile phone contract or the airline flight I've been booking is actually the cheapest one that meets my needs, or whether I'm being gouged by a computer somewhere. And so I'm trying to put the pieces of the jigsaw together because I'm interested in guessing what our retail experience is going to look like in 10 years' time—the traditional "if this goes on ..." exercise beloved of science fiction writers.

One more before Christmas...

IA, AI, and the Big Aha

What I've been leading up to with my talk about the lifebox is a discussion about how a certain kind of advance in AI could occur in concert with a discontinuous jump in ordinary human intelligence via IA, that is, Intelligence Amplification.

I'm calling this advance The Big Aha, and it will probably play a role in my next SF novel--which might even be entitled The Big Aha.


[Visions of the cosmic fractal in the sky.]

Let me make it clear that I'm going to be talking about science fictional ideas in this post. Not about a priori academic arguments regarding possibilities of AI.

As I mentioned in the last post, AI workers have a tantalizing dream that there may yet be some conceptual trick we can use to make our machines really smart. The only path towards AI at present seems to be beating problems to death with evolving neural nets working on huge databases. We get incremental progress by making the computers faster, the neural nets more complex, and the databases larger.

The SF dream is that there's some new and exciting angle, a different tech, a clear and simple insight, a big aha.

And--the kicker for my planned SF novel--the aha would work for human brains as well as for machines. I'm in fact thinking of us finding the big aha for human brains first, and only then transferring it down to the computers. Intelligence augmentation, then artificial intelligence. Not that the AI will even matter that much anymore if we can kick our own minds into a higher gear.

So what's the big aha that I have in mind?

Nice cat photo, Charlie! I like the green eyes.

In my previous post, I was talking about the idea of creating an online simulation of oneself--what I call a lifebox. For now, the lifebox is simply a largish database of your writings, spoken words, and/or images, with links among the components, and a front-end that's an interactive search engine.

[At this point I should explain that I'm prone to illustrating my blog posts with images that aren't always quite rigorously related to the topic. But here, clearly, we see a painting by me in which a person is inputting their life's information into a keyboard. Or vice-versa.]

Today I have a few more remarks on the lifebox concept, although at this point much of this material has already been anticipated in the impressively vigorous and high-level discussions in the comments. But I'll go ahead and post this anyway, and move on to something else in a day or two.

As I've been saying, my expectation is that in not too many years, great numbers of people will be able to preserve their mental software by means of the lifebox. In a rudimentary kind of way, the lifebox concept is already being implemented as blogs. People post journal notes and snapshots of themselves, and if you follow a blog closely enough you can indeed get a feeling of identification with the blogger. And many blogs already come with search engines that automatically provide some links.
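Just to make the idea concrete, here's a toy sketch of the crudest possible lifebox: nothing but dated journal entries plus a keyword front-end. The names are invented purely for illustration; a real lifebox would also need links among the components and a much smarter search than this.

```python
# A toy "lifebox": dated text entries with a naive keyword-search front-end.
# Purely illustrative -- no linking between entries, no ranking, no images or audio.

from dataclasses import dataclass

@dataclass
class Entry:
    date: str
    text: str

class Lifebox:
    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def add(self, date: str, text: str) -> None:
        self.entries.append(Entry(date, text))

    def ask(self, query: str) -> list[Entry]:
        """Return every entry that mentions any word of the query."""
        wanted = {w.lower() for w in query.split()}
        hits = []
        for entry in self.entries:
            words = {w.lower().strip(".,!?") for w in entry.text.split()}
            if wanted & words:
                hits.append(entry)
        return hits

# Usage: pour in years of journal notes, then interrogate the simulated person.
box = Lifebox()
box.add("1979-09-02", "Bought A Scanner Darkly at the Brighton convention.")
box.add("2011-12-27", "Guest-blogging on Charlie's Diary about the lifebox.")
for hit in box.ask("Brighton"):
    print(hit.date, hit.text)
```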

If you're wondering why Rudy is around here right now and I'm thin on the ground, it's because (a) he happened to be free-ish right now, and (b) I'm arm-wrestling an octopus of a novel called "Neptune's Brood", and coming off worse: I need some offline time in which to build up momentum. So if you wonder what I'm doing over Newtonmass and Hogmanay, the answer is "working".

Also, I have a new camera. (Evidence below the cut-line.)

I'm guest-blogging on Charlie's Diary for a week or two, putting up about six posts. I'm doing this for fun, and to drum up interest in my autobiography, Nested Scrolls, which is just out from Tor Books in the US, and has been out from PS Publishing in the UK since June.

I'll put up my first post today, and then come back with my second post on December 27. And I probably won't be delving into the comment threads until Dec 28.

Preliminary Pleasantries.

I'm happy to be on Charlie's blog as he's one of my favorite writers. For me, Accelerando was a huge breakthrough. Before Accelerando, SF writers were kind of worried about how to write about the aftermath of the Singularity. And then Charlie showed us how. Pile on the miracles and keep a straight face. And Accelerando was literate and funny. Once I'd read it, I was ready to write my own postsingular novel--in fact I called it Postsingular.

Something I especially liked in Accelerando was Charlie's way of saving fuel on interstellar flights. Send the people in the form of simulations running inside a computer/spaceship the size of a Coke can. And the people on his tiny ship are well aware of their nature--they jokingly refer to themselves as "pigs in cyberspace." Lovely.

Origins of the Digital Immortality Trope

Stepping back from the postsingular future, I'm going to spend my first couple of posts talking about digital immortality, both as an SF trope, and as a near-future real-world tech product. And then I'll do some posts about writing science-fiction, and about some of my recent ideas.

My early cyberpunk SF novel Software (1982) is, I like to argue, one of the first books in which we see humans uploading their personalities as software for android bodies. Feel free to comment if you think I'm wrong about this. I'm ready for you. But, as I mentioned, I won't be on the comments until about December 28.

For now, let me say a little more about the mind-uploading trope, and suggest a way of faking digital immortality in the near future...

Surprise! Over the festive season, we have a new guest blogger; I'm excited to introduce Rudy Rucker, SF author, mathematician, and computer scientist.

Rudy has published over thirty books, winning the Philip K. Dick Award for both his cyberpunk novels Software and Wetware (which are available as part of the Ware Tetralogy). He has a Ph.D. in mathematics and has worked as a computer science professor at San Jose State: he took up painting in 1999, and he's had three shows of his pop-surreal works in San Francisco.

Rucker's fantastic, transreal novel of the afterlife, Jim and the Flims, appeared this year, as did his autobiography, Nested Scrolls. Nested Scrolls received the Emperor Norton Award for "extraordinary invention and creativity unhindered by the constraints of paltry reason."

For the last five years Rucker has also been editing a speculative fiction webzine called Flurb, attracting contributions from across the field. For more links and ongoing updates about his activities, see Rudy's Blog.

And he's going to be keeping us company over the next two weeks!

I'm off to do a reading in a few hours, and it's chilly outside, so I feel like turning up the heat. Therefore:

My view of contemporary US politics, which is that of an outsider and obviously incomplete (and possibly faulty, and subject to change), is as follows:

We are all, like it or not, consumers — short of going off to live in a hut in the wilderness, it's hard to cut yourself off completely from using services or goods made by other people. And for most of us, the majority of our purchases come from complex supply chains operated by and for large corporate entities that specialize in supplying what most of us are willing to buy, most of the time. (This isn't automatically the same as what we want, but neither is it automatically undesirable rubbish ...)

But some purchases are different because they're unique.

I have an iPad. (I think I already mentioned that a while ago ...)

I think the multitouch tablet interface style exemplified by iOS is the future of computing, in much the same way that the Mac interface circa 1985 was obviously the future of computing back then. Decried as a toy and handicapped by a closed architecture and a lack of third party applications though the Mac was, it nevertheless pointed towards a vastly more transparent and accessible way of working with computers, which in turn made computers useful to many more people. (I discount earlier GUI platforms such as the Xerox Alto because at $75,000 a seat in 1980 money, accessible isn't exactly a suitable word to describe it.) You're reading this blog entry online (and, knowing my audience, you are more technologically literate than average), so you may be over-accustomed to using computers, which makes it hard to see why it might be desirable to make them even more accessible; but if you watch an 80-year-old try to double-click the left button of a mouse within a particular window on a screen, it becomes glaringly obvious why we need a better, truly intuitive, interface paradigm.

(Moreover, by this time in 2013 we will, for the first time, have a networked general purpose computer in every adult's possession. Smartphones are real computers, and they're finally crawling out of offices and nerd bedrooms and into everyone's pocket. This means computers are making a great leap forward in social penetration, from being embraced by 10% who are truly proficient (and accepted reluctantly by 25-35% who can be taught to click on an icon to run a program), to being used by literally everyone.)

Anyway: I have a criticism ...

Tanenbaum's Law (attributed to Professor Andrew S. Tanenbaum) is flippantly expressed as, "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway". It's a profound insight into the state of networking technology: our ability to move bits from a to b is very tightly constrained in comparison with our ability to move atoms, because we have lots of atoms and they take relatively little energy to set in motion.

Which leads me to ask the following question:

1. Postulate that we make contact in the near future with an extra-terrestrial intelligence ten light-years away.

2. We can communicate with them (and want to communicate with them) by two means: we can beam data at them via laser, or we can send a physical data package (a "tape" travelling cross-country).

3. Our "tape" package will be made of something approximating the properties of memory diamond, i.e. on the order of 1022 bits/gram.

4. We will assume that we can use a laser-pumped light sail (with laser efficiency of 10%) to transfer momentum directly to a hunk of memory diamond. We're going to ignore the sail mass, to keep things simple. And we're going to assume there's another laser at the other end to allow our alien friends to decelerate it (so if you need x GJ/gram to reach a specific speed, we can allow for 2x GJ/gram for the trip).

5. Our reference interstellar comms laser, for an energy input of 1 GW, will be able to deliver 2.6 billion photons/second to a suitable receiver 10 light years away, while switching at 1 Hz. If we increase the bit rate we decrease the number of photons per bit, so this channel probably limits out at significantly less than 1 Gbit/sec (probably by several orders of magnitude). I'm going to arbitrarily declare that for starting purposes our hyper-sensitive detectors need 1000 photons to be sure of a bit (including error correction), so we can shift 2.6 Mbit/sec using a 1 GW laser.

6. Ignoring latency (it will be one year per light year for lasers, higher for physical payloads), which is the most energetically efficient way to transfer data, and for a given energy input, how much data can we transfer per channel?

Here's my initial stab at it, which is probably wrong (because it's a Saturday night, I've been working for the past nine hours or so, and I'm fried):

Let's pick a 10 year time-frame first. 10 years = 315,576,000 seconds.

Laser:

Running a laser for 10 years will emit 3.16 x 10^17 joules in that time; at 10% efficiency, that means roughly 3.16 x 10^18 joules of energy is consumed. It will deliver 0.82 x 10^15 bits of data. So, roughly 4000 joules/bit.

"Tape":

A packet of memory diamond with a capacity of 1 x 10^14 bits has a mass of roughly 10^-8 grams.

Kinetic energy of 10^-8 g travelling at 10% of c (30,000 km/sec, 30,000,000 m/s) = (10^-8 * 30,000,000^2)/2 = 0.9 * 10^6 J. Double the energy for deceleration and we still have 2 x 10^6 joules, to move 10^14 bits. So, roughly 10^8 bits/joule.
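If you want to check these numbers or play with the assumptions yourself, here's a quick back-of-the-envelope script. It follows the assumptions in points 3-5 above (and, like my initial stab, ignores the sail mass and the launch laser's efficiency), but it does the unit bookkeeping in SI, so its output won't exactly match my rough figures.

```python
# Rough energy-per-bit comparison of the two interstellar channels sketched above.
# A sanity-check sketch following points 3-5, not a definitive model.

SECONDS = 315_576_000            # ten years, as above

# --- Laser channel ---
laser_power_w = 1e9              # 1 GW radiated
wall_plug_efficiency = 0.10      # 10% efficient laser
photons_per_second = 2.6e9       # photons arriving at the receiver 10 ly away
photons_per_bit = 1_000          # detector threshold, including error correction

bits_delivered = photons_per_second / photons_per_bit * SECONDS
joules_consumed = laser_power_w * SECONDS / wall_plug_efficiency
print(f"laser: {joules_consumed / bits_delivered:,.0f} joules/bit")

# --- "Tape" channel (memory diamond starwisp) ---
bits_per_gram = 1e22
payload_bits = 1e14
payload_mass_kg = payload_bits / bits_per_gram / 1_000   # grams -> kilograms
speed = 0.1 * 3e8                                        # 10% of c, in m/s

kinetic_energy = 0.5 * payload_mass_kg * speed ** 2      # joules
trip_energy = 2 * kinetic_energy                         # accelerate, then decelerate
print(f"tape:  {payload_bits / trip_energy:.2e} bits/joule")
```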

Eh?

Let's dink with the variables a bit. Even if we allow individual photons to count as bits at 10 light years' range, our laser still maxes out at around 4 joules/bit. And even if we allow for a 10,000:1 mass ratio for our data-carrying starwisp, and impose the same 10% efficiency on its launch laser's energy conversion as on our communication laser, we get 1,000 bits/joule out of it.

As long as we ignore latency/speed issues, it looks to me as if Tanenbaum's Law implies a huge win for physical interstellar comms over signalling. Which in turn might imply an answer to the SETI silence aspect of the Fermi paradox.

Of course, this is just an idle back-of-the-envelope amusement and I've probably borked my calculations somewhere. Haven't I?

No, I haven't turned to astrology; but it's handy to have a term for those periods of life that are dominated by Murphy's Law, and the past week has been one of them. Hence the paucity of blogging.

Let's leave aside — for now — the decision to ditch the first 26,000 words (or around 80 pages) of a new novel and re-do from start; this stuff happens. From time to time you dive into a project only to realize you'd started in the deep end and/or the pool was drained for maintenance. You learn to suck it up: part of being a pro is being able to recognize your mistakes and learn from them, rather than blindly pushing on.

Let's also set aside the short-notice turnaround I'm meant to be giving the copy edits on the manuscript of "The Apocalypse Codex" — shockingly, my US publisher is ahead of schedule and so I am expected to return the checked CEM before they close for the last two weeks of December. (This means I can't blame them for my tendency to work over December 25th, which I do every year on a point of principle.)

No. The real pain in the neck has been the Revolt of the Machines ...

Karl Schroeder links to and discusses a fascinating-looking paper by Keith B. Wiley on The Fermi Paradox, Self-Replicating Probes, and the Interstellar Transportation Bandwidth. (The latter is a new reference point for discussion of the Drake Equation, defined as the number of people capable of moving from one solar system to another per unit time.)

I need to chew on this paper some more before I emit any thoughts. But in the meantime, you might want to go over to Karl's discussion for his take on its implications. (I'm not going to spoiler it here.)
