
15 minutes of fame

I've got an article on the BBC's technology website today: a short polemical piece on the future of history:

We've had agriculture for about 12,000 years, towns for 8,000 to 10,000 years, and writing for about 5,000 years. But we're still living in the dark ages leading up to the dawn of history.

Don't we have history already, you ask? Well actually, we don't. We know much less about our ancestors than our descendants will know about us.

Indeed, we've acquired bad behavioural habits - because we're used to forgetting things over time. In fact, collectively we're on the edge of losing the ability to forget.

(Update: I'm being interviewed live about that piece by BBC Radio Wales tonight at 7:35pm. Updated again: And it went well.)

And in other news, here's the first review of HALTING STATE (my next SF novel) to hit the web.

163 Comments

1:

I'm looking forward to reading it. Keep up the good work!

2:

Well, I was going to report a silly editing error (if you're going to build your data storage out of carbon atoms, it helps if one of them isn't radioactive) but such is the power of the intertubes that it was fixed by the time that I finished typing this comment.

Mind you, you do say `And some time after our demise, this information will be available to historians.' which makes me wonder how on earth we will prevent it being available to the police before that, and how we'll prevent society's implosion into a total-surveillance police state.

Searching that mass of almost completely unstructured data will be really hard, too. `Play back that conversation I had in a pub two months ago in which someone used the word "fishhooks" in a strong Glaswegian accent with intense background noise'. A lot of humans have trouble discriminating sound from the background in that situation!

... and the visual problem is vastly harder, and collecting the data and analyzing it when AI improves might not help, because cameras are fundamentally much crappier than the human eye. (Perhaps you think there'll be cameras with the same field of vision and range of intensity response as the human eye anytime soon, but I doubt it because you're not that starry-eyed. The human eye is rather crappy by biological eye standards and is not a very good precision optical instrument but it's far better than anything we can fit on a phone.)

3:

And it's currently the #2 most emailed article on their site :) Only baby mammoths beat you out!

4:

I, for one, welcome our new baby mammoth overlords!

5:

Saw a piece about your ideas on the BBC website today. Imagine pattern (see URL) to be complete, like a DVD game, imagine "the NOW" to be a shockwave on the pattern that allows you and me and any thinking being to move to preferred parts of pattern easily and you have imagined reality. Without the shockwave we would be "stuck", fall from the shockwave and you are dead but not gone just frozen again in pattern. Create a small set of shockwaves from here in the NOW and you can pick people up and put them on a new surfboard here in the NOW again. Now that isn't novel science fiction but it is certainly novel science fact. And it rather destroys the idea that we don't have access to perfect memories, we all do, we just didn't realise it until now. We will be able to devise things for simply looking back in pattern, liars will no longer have the advantage, we can call these wrist watches because they will be worn on your wrist and you can watch things from the past, anyone's past, with them. Rubbish? Of course it is. Go to bodgeitandscarper.org for the basics. You will need some imagination to free yourself from a box in a few years so perhaps it is worth a read now to begin to see how it might be done. The clues are all there but do you have the imagination? Best Regards John. PS If nothing else you may find a few ideas you can nick. All I want is for the ideas to take on a life of their own so I can retire to the beach with a beer and a dolly bird.

6:

Interesting article (though is it similar to a blog you posted not long back?).

Are BBC news articles going to be a new staple of your publishing Mr S? Or was this a one off?

7:

John @5: any chance you could maybe repost that in paragraphs? I'm finding it somewhat difficult to understand ...

8:

We all know how (in)accurate Amazon can be, but I'd like to point out to UK-based fans that the £7.99 mass market paperback is not due until March 2009, not simultaneously with the £10.99 trade PB in January next year. Given the current exchange rate we might as well buy the US edition anyway - a snip at $24.99 in October.

9:

I, for one, welcome our new baby mammoth overlords!

Mammoth baby overlords would be a far better story...

10:

gmilton: you'll find the US hardcover somewhat hard to order through amazon.co.uk, and by the time you add air mail on top of the price of getting it from amazon.com you'll be saving precious little money.

(Also, I don't want to encourage Brits to be nasty to my UK publisher by buying gray imports. It doesn't do my book advances any good ...)

11:

Alex: Mammoth babies? I can smell the toxic waste from here! And you wouldn't want to be in earshot when they start crying because they've just tried teething on the Walter Scott Monument ...

12:

Dr. Stross is of the opinion that advances in electronic memory will produce a happy state in which 'if you're a student, it means you can concentrate on understanding your lecturer, and worry about making notes later.'

If only it were that simple.

Many students in my experience are far more likely to zone out in the lecture theatre (snog their bloke/bird, play videogames, etc.) while recording the lecture - a recording which will never subsequently be looked at or examined ever again.

Here's another point. If all human life becomes recordable, and if future generations are able to access all human historical events in their entirety, and accurately, what will human communities do for myth? Malinowski defined myth as the charter a human society relies upon for its foundation, and myth is often the product of misremembered history ...

13:

D. O'Kane: (a) I'm not a doctor, and (b) I didn't have room to examine second-order consequences, or even legal and ethical considerations, while simply working over the issue of unlimited data storage.

Yes, some students doze off during lectures. But many don't. And some who doze off in some, don't doze off in others -- and vice versa.

The question of myth is an interesting one. It reminds me of the question Karl Schroeder posed recently: what if there is an end to the scientific enterprise? What if everything that can be discovered will be discovered, and any remaining questions are obviously beyond our ability to probe? At that point, the mechanics of doing science become ... not useless, exactly, but no longer relevant. And knowledge is a dead thing, the domain of scholars and librarians, not researchers.

The future: a place without mythology, science, or secrets. Very strange.

14:

Hi. Long time reader, first time commenter.

D. O'Kane @ 12: What's the difference between students in the future who never examine their electronic memory and students today who never read their notes?

Charlie Stross @ 13: Well, if (and yes, it's a big if) we ever finish Science, we'll spend the rest of the time making deliberately subversive readings of the data, like those people who argue that Sauron's actually the good guy in "The Lord of the Rings"...

15:

Wherever there are humans there will be mythology--we make this stuff up. I might find the lack of secrets to be the curse of an All-Knowing being. No surprises ever? Sounds very boring.

Jeff

16:

Fair enough Charlie - I won't be buying through Amazon anyway: I have my own, perfectly legal, way of getting it. Have you no way of getting simultaneous publication in the UK & US?

17:

New book sounds interesting. Scots vernacular? Banks's book (Feersum Endjinn) has passages in heavily accented Anglish. I look forward to being entertained.

Jeff

18:

Re: Grey imports - isn't it swings and roundabouts, or is the hit on your UK advances much bigger than what you make as your rabid fans buy the US editions? Speaking as someone who bought the US hardcover of Glasshouse as soon as I saw it in Forbidden Planet, telling me to wait n-months to a year to get my grubby hands on the crunchy plot goodness seems a little mean...

19:

"a recording which will never subsequently be looked at or examined ever again." That's because it's not searchable, which for lengthy, heavily linear media means its effectively dead weight as far as research goes. There's no way to retrieve a crucial nugget of information without sitting through the whole thing all over again -- complete with lecturer's rambling about his summer holiday -- in real-time, or maybe 2x if your player does Chipmunk Voice Fast-Forward(tm). On the other hand, if, come revision time, certain key words could be plugged into my.life.google.com ...

20:

Re: Scots vernacular. This is notoriously difficult to do well & done badly it's just plain irritating. Irvine Welsh is the best exponent I've read & I don't think even he got it quite right. Nivirthiless ah goat ivry coanfidence in ye Charlie (see?)

21:

An end to the scientific enterprise? Vinge did this in A Deepness in the Sky, didn't he? (Or, at least, he explored the far future of a metaculture in which scientific discovery, as opposed to rediscovery, ground very nearly to a halt a very long time ago.)

I can't see any way in which scientific discovery would ever stop, but it might become a process of filling in, very occasionally, increasingly small gaps in a nearly-complete world model.

I suspect some sciences, notably mathematics, are intrinsically endless.

22:

This is an interesting contrast to the fictional future in Glasshouse, where our descendants know less about the past than we do. I hope the future described in the BBC article is closer to the truth.

As for Halting State, a police procedural with characters talking in Scottish vernacular sounds a bit like an Ian Rankin novel. Well, if Ian Rankin were to start writing in the second person present tense, anyway.

23:

gmilton @16: simultaneous publication in the US and UK relies on one of two things happening: (a) a multinational publisher buys world English language rights to a book and publishes it simultaneously on both sides of the Great Undrinkable, or (b) two different publishers buy local rights at the same time and synchronize their publication schedule. I have two different publishers in each of the US and the UK -- long story, boring business-related stuff -- so (a) is ruled out. (b) has happened in the past -- it worked for ACCELERANDO -- and might happen again in the future, but basically it ain't happening for HALTING STATE because one publisher was a bit doubtful about the original book proposal and didn't make a satisfactory offer for the rights until they had the final thing in front of them, which left them running behind the other publisher's schedule.

Jakob: it's not so much individual punters, but if, say, a big name book chain begins importing the US hardcover in large quantities, they will correspondingly decrease the size of their order for the UK hardcover or trade paperback when it finally emerges, blinking, looking at its pocket watch, and shouting "I'm late!" If that happens too often, the British publisher is screwed, and shortly thereafter (as of the next book) I'm screwed too. (The comments re rip-off Britain cut both ways, and I make more money per UK book sold than per US book sold.)

On the topic of Scots dialect, you need to bear in mind that the reviewer is American, and just throwing in the odd "crivens!" is enough to give it that quaint ethnic flavour. Alternatively, I am no Irvine Welsh (or Iain Banks) and I'm not pretending to be.

(On the other hand, I'd love to aspire to the status of a messy collision between Neal Stephenson and Christopher Brookmyre. There ya go ...)

Nix: mathematics almost certainly is endless, but whether endless mathematics remains interesting in and of itself is another question. (Is the end of "Diaspora" by Greg Egan heaven or hell for the protagonist? My guess is that it's meant to be heaven ... but the protagonist is very, very unlike most of us Version 1.0 Humans.)

24:

Ah, I see. Thanks for the info - I will feel no guilt in asking one of my mates to mule me a copy when they're next over the pond.

The thing that amused me in that review was the description of DI Dalziel in the telly version of Dalziel and Pascoe speaking 'thick Scots English'... somewhere a white rose is weeping.

25:

Could you please post a mirror of your article on your own site? The BBC is blocked pretty thoroughly by the Great Firewall of China; none of the proxies I've tried work.

26:

Albert @25: I'll go ask, but as it was a paid commission, they've got first serial rights for sure -- I may be able to repost it later. (In the meantime, it's basically a much shorter, low-brow version of this.)

27:

Good article; my main problem is a pedantic one: the prefixes and conversions of bytes are all wrong. The IEC international standard requires binary prefixes for binary quantities, not the SI decimal prefixes -- which are defined incorrectly in the article anyway.
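To make the distinction concrete, here's the gap between the two prefix systems in a few lines of Python (an illustration only; the snippet is not from the article):

    # SI decimal prefixes vs IEC binary prefixes for the "same" unit names.
    KB, MB, GB = 10**3, 10**6, 10**9        # SI: kilobyte, megabyte, gigabyte
    KiB, MiB, GiB = 2**10, 2**20, 2**30     # IEC: kibibyte, mebibyte, gibibyte

    print(f"1 GB  = {GB:,} bytes")
    print(f"1 GiB = {GiB:,} bytes ({(GiB - GB) / GB:.1%} larger)")
    # The gap grows with each step: ~2.4% at kilo, ~4.9% at mega, ~7.4% at giga.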

28:

21: There are many generative sciences -- ie, sciences with unbounded objects of study. Right off the top of my head there is chemistry, biology, and computer science. These sciences are by their very nature inexhaustible.

Concretely, take chemistry. You can always synthesize new chemicals, and they will have new properties. And no, we will never become able to predict the properties of any possible chemical in advance -- computational complexity considerations rule that out. Even the transcendent AIs on the other side of the Singularity will have to get their robotic manipulators dirty with labwork.

29:

You might want to talk to your reviewer about this:

"Detective Chief Inspector Andy Dalziel speaks rather thick Scots English"

Andy Dalziel speaks (On TV at least) pure Yorkshire. (Or "pure bloody Yorkshire" as he would put it.)

30:

Look forward to reading the new book. As a US based reader, I must say that your books are not always available in the major Silicon Valley bookstores - Barnes and Noble and Borders. I saw Marc Andresen put you near the head of his favorite authors on his blog, which I think should give you a small boost here, but for whatever reason they don't stock many, leaving Amazon as the best route to buy (not that I am complaining about this, but showcasing and spot purchases are important to gain readership). I know you are not unique in this regard, and British authors do seem to get poor exposure here. Maybe that is the inevitable price we have paid for the big box bookstores.

31:

Alex @30, I'm published in the US first -- my agent's in New York and sells to Ace and Tor before the UK get my stuff. In fact, the British publishing biz basically treats me as a US import.

This suggests a problem with your local B&N and Borders, not my US publishers.

32:

Congratulations, Charlie. Being compared to John Brunner is high praise indeed (and well-deserved to my mind, after Glasshouse). I'm putting in a pre-order for Halting State later today.

Nix: sorry, it's not true that we can't build a phone camera that's as good as the human eye. We just can't do it the same way we build normal cameras, and we can't match all the capabilities in one camera cheaply (now).

Our cameras are uniform-geometry sensors; they have the same resolution over the entire image frame. The human eye has high resolution (but definitely not higher than a very expensive professional digital camera back, some of which run up to 40 megapixels) at the center of vision and is almost blind at the edges (motion-detection only).

As for sensitivity, the human eye is capable of very fine discrimination of shades in the green, but very poor discrimination in the red and violet (and image resolution varies in the same way by color). So it's possible to do as well, but matching the color response is difficult with solid state sensors. It's simpler, though very expensive, to just have the same discrimination across the board and modify the output image with a filter to match the eye. And so on, for light response and dynamic range; I don't want to write a textbook here and bore you all to tears.

As for the end of science, there's another way it can go: we don't know everything, but everything yet to be discovered is too complex for human brains and only the AIs can do science. Not a new idea, I know, but I think it still bears thinking about. Would that engender a racial inferiority complex, as some have suggested? Maybe not; not that many people really care about science.

33:

Charlie: I'll look into it and find out how the local purchasing decisions are made. We should be in SciFi heaven here, so I would expect a good selection of your works on the shelves, especially when they hit pb.

I note that Amazon gives the same publication date, Oct 2 2007, for "Halting State" in the US and UK, but it looks like both listings are for the same US edition.

Paradoxically, Greg Bear's "Quantico" was published in the UK nearly 6 months before the US release. Go figure.

34:

@32:

Would that engender a racial inferiority complex, as some have suggested?

There was a very good book on the subject by the Soviet writers Arkady and Boris Strugatsky ( http://en.wikipedia.org/wiki/Arkady_and_Boris_Strugatsky ), published in the US as "The Time Wanderers" ( http://www.amazon.com/Time-Wanderers-Arkady-Strugatsky/dp/0312910207/ref=sr_1_1/102-1901546-7993702?ie=UTF8&s=books&qid=1184093498&sr=1-1 ).

(pls disregard the "Editorial review". They have no clue.)

I have no idea how good the translation is.

I am waiting impatiently for "Halting State".

35:

Alex: Amazon.co.uk shouldn't be listing the US edition for import into the UK. And the UK edition is a trade paperback due out in March 2008.

36:

Charlie @11: Alex: Mammoth babies? I can smell the toxic waste from here! And you wouldn't want to be in earshot when they start crying because they've just tried teething on the Walter Scott Monument ...

Isn't that the mammoth from "Trunk and Disorderly"? I bet it would make a great sequel if you also included NINJAS IN SPAAAACCEE!!!

37:

No, no, no, it's mammoth as an adjective, not a noun!

38:

Kitchen sink approach. How about: Mammoth babies AND baby mammoths AND NINJAS IN SPAAAACCEE!!!

39:

Regarding the storage mechanism: on the way to your hypothetical C12/C13 carbon lattice, we could store data at the molecular level very soon, using DNA. The A/T = 0, C/G = 1 base pairs could be used. What is interesting is that MIT is already working on DNA sequencers that detect the size of the nucleotide as it is forced through a pore. Controlling the molecular machinery to construct the DNA sequence is quite possible, providing nice linear "tapes".

Putting that into perspective: a human genome contains 3*10^9 bases ~= 3*10^8 bytes. A human has ~50*10^12 cells. Using your 2003 data storage values, I calculate that this could be stored in 1/100 of a gram of cellular material. Nowhere near a grain of sand, but pretty small, and it might even be stored as part of yourself.
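As a toy illustration of that A/T = 0, C/G = 1 scheme (a sketch of one possible reading; which base of each pair you synthesize is an arbitrary choice):

    def encode(bits):
        # Map a bit string to one strand's bases: A (pairs with T) for 0,
        # C (pairs with G) for 1. The complementary strand is implied.
        return "".join("A" if b == "0" else "C" for b in bits)

    def decode(bases):
        return "".join("0" if b in "AT" else "1" for b in bases)

    assert decode(encode("1011001")) == "1011001"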

40:

Alex T: yes, DNA is a feasible storage medium. On the minus side: (a) it's held together with hydrogen bonds (read: it's not terribly stable if you heat it up), (b) nucleotide bases mass about 325 Daltons (compared to 12.5 Daltons for a C12/C13 mix) so it's about 25 times as massive per unit data storage, and (c) it's prone to transcription errors (can't give you an exact error frequency, but as it's one of the mechanisms by which mutation occurs ...).

The flip side is, we might be able to improve on nature's transcription and error checking mechanisms, and we might be able to improve on the standard base pairs in DNA by using some other nucleotide-like system.
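A quick back-of-the-envelope check of those mass figures, using only the numbers quoted in this thread (~325 Da per nucleotide holding one bit under the A/T = 0, C/G = 1 scheme, ~12.5 Da per atom for a 50/50 C12/C13 lattice holding one bit):

    DALTON_G = 1.66054e-24                  # one dalton in grams

    dna_da_per_bit = 325.0                  # one nucleotide base = one bit
    lattice_da_per_bit = 12.5               # one carbon atom = one bit
    print(f"DNA is ~{dna_da_per_bit / lattice_da_per_bit:.0f}x as massive per bit")

    # Mass of a human genome's worth of data (3*10^9 bits) in each medium:
    bits = 3e9
    print(f"as DNA:     {bits * dna_da_per_bit * DALTON_G:.1e} g")
    print(f"as lattice: {bits * lattice_da_per_bit * DALTON_G:.1e} g")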

41:

charlie: I was trained in biology and worked for many years in biotech. All your points are absolutely correct. I just wanted to offer a solution that might be quicker in coming than trying to read isotopes in a lattice, plus a possible read mechanism as well - although I well understood that yours was a concretized thought experiment to anchor the possibilities of storage at this dense level. I had read your "Shaping the Future" address and recognized the basis of the Accelerando computronium theme.

Unlike starships, or even human planetary exploration, this increasing density of storage and computation is coming up on us extremely fast. (I'm so old, I can hardly believe that 1GB flashRAM SDcards are old hat - I started on punch cards on an ICL 1905 at the University of London). How does one even start to plan how to use this stuff? As a programmer, I'm already starting to shift my sights to functional programming in order to use the capabilities of hugely more processing power and storage. Won't be long before I need "future shock" therapy.

42:

Charlie, I'd say you can scrap the "might", at least as long as you consider the mechanisms employed in humans. Fault-free transcription would be quickly eradicated by selection for flexibility. However, what we are looking for might be found in old organisms that haven't changed much in the last hundred million years or so. (Their innate flexibility and resistance might have provided them with enough time to bend the rules of selection a bit.) Cockroaches are likely to be a good starting point.

One thing to worry about might be genetic modifications of human beings in order to prevent mutations and cancer in that way. Those might be a boon for all individuals, but might be a curse for humanity as a whole, should the lack of genetic flexibility backfire. But then again, once human genetic modification is out of the box, the lost flexibility would probably be made up by human ingenuity rather than natural trial and error. The genetic make-up of humanity might however change the way that names change today, following trends and fads, booms and busts.

43:

OK, vertebrate biologist in the house :)

Bruce @32: what on earth are you smoking? And why, like most geeks, are you taking a digital camera as some sort of standard for image quality? Even a 40 megapixel camera is only on a par with ordinary film for resolution, and its dynamic range is much worse than film (and likely to remain so in the foreseeable future), which in turn has far less dynamic range than a retina. Almost blind at the edge of our visual field? Rubbish. The edge of our visual field is primarily formed by rod cells, which only see grayscale, but are way more sensitive to low light and movement than the cone cells at the centre of our visual field. The lack of rod cells at the centre of our retina is part of the reason we have crap night vision (and I hasten to point out that our low-light vision is still better than most cameras, even though we've got nothing on a seal).

What's more, the nerves in our retina perform all sorts of on-the-spot tricks to enhance contrast and accentuate edges (and it took the visual psychologists forever to realise that most of that stuff happens in the eye, rather than in the brain). If you want to see something really sensitive to motion, check out what insect eyes can do. I'm sure insect eyes would be a better model for machine vision, actually, but that's a digression :)

Charlie @40: The error rate for DNA transcription is very low, and there's a whole lot of error-correction stuff hanging around chromosomes. Environmental things are a far larger source of mutations (free radicals, UV light, even cosmic rays). Although, as someone else pointed out, we're not descended from the organisms that did the best job of keeping their genome stable. People have calculated the lifespan of DNA, based purely on entropy and such; it's in the hundreds of thousands of years, in ideal conditions. I suspect that DNA information storage implies a rather different future to the one you're thinking of, though. Have you read Starhammer, by the way?

44:

Using nanoscale diamond as data storage, six hundred grams (about one and a quarter pounds, if you're my generation) can store a lifelog, a video and audio channel, with running transcript and search index, for six billion human beings for one year.

I saw this movie. It was called Zardoz.

If we end up with a big flying head that pukes guns and James Bond running around in a yellow diaper I'm blaming it all on you, Stross.

45:

Chris @ 43

I used to work in a physiology graduate research department at a university, so I'm not just a silicon geek. And I was talking about digital cameras, not because they're the best thing around (I used a 35 mm film SLR for 37 years before I bought a digital) but because we were talking about surveillance cameras feeding shape and motion recognition systems.

Almost blind at the edge of our visual field?

Yes, the edges of the eyes have very little spatial resolution. In fact they have about none, because they detect motion, not shape. They add almost nothing to the image you perceive of the visual field. If you detect motion at the edges, your brain will move your eyes and head to put the source of the motion nearer to the center of the field so you can get an image.

Ok, if you want to include the entire visual cortex as part of the eye, then sure, it's much better than any camera. But then I get to add as much visual processing hardware and software as I want. As I said, we can do as well, we just can't do it cheaply.

Even a 40 megapixel camera is only on a par with ordinary film for resolution, and its dynamic range is much worse than film (and likely to remain so in the foreseeable future),

And the eye is not as good at resolution as film. There are what, about 100 million receptors in the retina counting both rods and cones? And as I said, the receptors at the edge don't add much to the image. I've seen 100 megapixel sensors; they're not common because they're expensive at the moment, but that won't last. Remember Moore's Law before you talk about the foreseeable future.

As for dynamic range, one of the things that makes the eye so good is the iris; put an iris on a camera and you've got a similar mechanism. And I know at least 3 ways to increase the dynamic range, again by pumping money into the sensor:

  • Make the image array with several different sizes (and therefore different sensitivities) of pickup cell. That can multiply the dynamic range by the ratio of the areas of the largest and smallest cells. My digital camera allows me to do something similar by shooting 2 frames at different exposures and melding them together with software (a toy sketch of that melding follows this list).

  • Don't use CMOS sensors. They're standard in large image sensors because they're relatively cheap to make. But they have a high black noise current, which puts a floor on the lowest light level you can reliably detect. CCDs are less noisy, and it's possible to tune them for dynamic range by adjusting the geometry of the cell and of the holding cells that store the image.

  • Cool the sensor. This also reduces noise. Stick a Peltier-effect cooler on the back of the sensor chip and cool it down to 150 Kelvin or so. Noise current is exponential with temperature, so you get a lot of help there.
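Here's that two-exposure melding as a toy sketch (a naive illustration, not any particular camera's algorithm; the saturation threshold and gain are invented for the demo):

    import numpy as np

    def merge_exposures(short_exp, long_exp, gain, saturation=0.95):
        # Naive two-frame HDR merge. Both frames are floats in [0, 1];
        # gain is the exposure ratio (long = gain * short where unclipped).
        # Output is linear radiance in [0, gain].
        short_radiance = short_exp * gain     # rescale to common units
        clipped = long_exp >= saturation      # long frame blew out here
        return np.where(clipped, short_radiance, long_exp)

    # Demo: a scene whose radiance spans a 1000:1 range.
    scene = np.array([0.01, 0.1, 1.0, 10.0])
    long_frame = np.clip(scene, 0, 1)         # saturates on the bright pixel
    short_frame = np.clip(scene / 16, 0, 1)   # 16x less light, no clipping
    print(merge_exposures(short_frame, long_frame, gain=16.0))  # recovers scene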

(and I hasten to point out that our low-light vision is still better than most cameras, even though we've got nothing on a seal)

Right, "most". There are cameras that can detect single photons. You can't do better than that.

and it took the visual psychologists forever to realise that most of that stuff happens in the eye, rather than in the brain

And it doesn't need to be done in the camera if we've got sufficient cpu cycles or bespoke parallel processing to massage the images once we've got 'em (in real time if we need to). It's better to do it in the eye than the brain because of the trade-offs resulting from the design of the neurons in the central nervous system and the hard realtime requirements of the sensory system. Silicon has different trade-offs.

    46:

    If you're interested in the end result of that comment of Charlie's about students and note-taking, watch the movie "Real Genius". There's a sequence in the movie depicting an arms race between students and teachers: the students carry audio recorders so they won't have to take written notes, then the teachers record their lectures and play them back to the students during class, and then the students just drop their recorders off in class to get the lecture, and go off somewhere else. The total automation of education?

    47:

    Pwned :)

    I'm not sure that the deal with edges of our visual field is as clear as you're making it out, though. There's nothing stopping rod cells from forming an image, but the eye just isn't focussed that way (for good evolutionary reasons, probably; 'tis better to just move if you see something coming towards you from the side, rather than try to figure out exactly what it is). Cameras do have an iris, too.

    I can't remember how few photons a retina cell needs to fire, but it's down to fairly small numbers of discrete quanta. Things have been hunting each other in the dark on this planet for a very long time, after all.

    No bite on the compound eye? Some insects have as many visual elements as a cheap digital camera has pixels.

    48:

    Alex@33: Quantico was released first in the UK because Bear's US publishers refused to publish - too controversial a subject matter at the time in the US. It was good for the company I work for because we sold loads of copies of it back to the US.

    49:

    Wait wait now...sorry to go all thick (Wed' mornings affect me like Mondays).

So is there a UK hardback? I'm all luddite with books and like to have a nice leathery-looking dead tree in my bookcase. I'll post you a cheque for £5 if it helps, just to make up the shortfall for buying a US edition (or I'll buy a friend the paperback ;))

    50:

Serraphin: there's no UK hardback of HALTING STATE. Just a trade paperback. (There is going to be a leatherbound, gilt-trimmed Easton Press special hardback collector's edition in the US, just to annoy everyone -- I don't enjoy signing a stack of frontispieces as high as my knee -- but that's not a retail edition. So if you specifically want a hardcover, you're stuck with the Ace import via amazon.com.)

    51:

    One of each it is then (Support your author and all that).

I expect the same treatment when I 'make it big'...

    Thanks Charlie

    52:

Eukaryotic DNA has an error probability of approximately 1 in 10^8 per base per replication, of which circa 99% are fixed by DNA repair enzymes, leading to a `final' error rate of about 1 in 10^10 per base per replication.

    This seems small until you realise that it means that each of us contains a few hundred unique mutations (mostly thanks to our dads, and varying depending on our father's age at our conception: spermatogonia replicate about once every 14 days...)

It's amazingly reliable for a solute-chemistry system running at the temperatures it does, but it's not exactly RAM-chip reliable: if DNA wasn't as tolerant of coding errors as it is we'd have no chance. (Of course it can be made more reliable, viz D. radiodurans, but that has enormous metabolic costs: poor old radiodurans gets outcompeted by just about everything except in the nastiest of desiccating or radioactive environments because it spends so much effort on DNA repair and RAID-array-like error checking.)

    Larry Moran just wrote a good article on this, from which I snarfed most of the figures above: http://sandwalk.blogspot.com/2007/07/mutation-rates.html
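Those figures hang together arithmetically; a rough check (the ~400 lifetime germline divisions for an older father is an assumption extrapolated from the 14-day spermatogonium cycle above, not a measured number):

    bases = 6e9                     # diploid human genome, base pairs
    raw_error_rate = 1e-8           # replication errors per base
    repaired = 0.99                 # fraction fixed by repair enzymes
    final_rate = raw_error_rate * (1 - repaired)    # ~1e-10 per base

    germline_divisions = 400        # assumed, for an older father
    print(f"~{bases * final_rate * germline_divisions:.0f} de novo mutations")
    # -> ~240, i.e. "a few hundred", as the comment above says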

    53:

    Oh, and what makes you think cockroaches haven't changed much? Arthropods may not change much morphologically (not true for all arthropods), but they experience far higher rates of genetic change than, say, vertebrates. The molecular data are pretty unambiguous here, especially now that we have some non-vertebrate, non-arthropod metazoan outgroups to compare to.

    We don't have explicit data on the cockroach, but we do have explicit data on several other arthropods, and they've all discarded and changed a lot more ancestral genes than have vertebrates. If you want a `genetic living fossil', you'd do better looking at us, but better yet looking at extremophile bacteria (like, well, D. radiodurans).

    54:

    C.S. Regarding Greg Bear's Quantico: Would his US publisher really be that sensitive? I heard he didn't get a hard-back US deal until the numbers were in from a British publisher. Seems Bear's last few hard-backs didn't do so well. I like Bear, but didn't read Quantico, a place near and not-so-dear to my heart.

    Jeff

    55:

    Well I don't have any links to it but I did research this when the UK edition came out and got the impression the decision was political.

    56:

It won't be the vast amount of data that will be useful, but the ability to organize it and sift through it and make connections. The librarian's mind and the patternmaster mind will be hugely in demand. However, the patternmaster's mind gone wild will also find or make patterns in the data where there are none, and conspiracy theories will spring up like tumbleweeds. Welcome to another sort of Brave New World - make it into an all-time best-selling novel?

    57:

    Chris @ 47

    There's nothing stopping rod cells from forming an image, but the eye just isn't focussed that way (for good evolutionary reasons,

    Agreed, but I think some of the difference between the edge and the center of the visual field is just a good tradeoff to get high resolution at the center from a somewhat challenged lens and a sub-optimal retina design*. And the tradeoff is especially good because of the feedback mechanisms in the CNS that control focus of attention: highly-stereotyped and efficient motion-detection causes automatic shift of attention, bringing the high-resolution, high-processing power parts of the visual system to bear, and engaging the higher-level systems beyond visual processing to examine the results. That's the kind of tradeoff you can make with electronics too: correct for flaws and limitations in the sensor by adding lots of clever post-processing.

    I can't remember how few photons a retina cell needs to fire, but it's down to fairly small numbers of discrete quanta

    This paper claims a quantum efficiency for the human eye of about 10%, which certainly sounds reasonable.

* What was the design committee thinking when they put the retina in backwards, facing away from the light?

58:

    ghamiltion, Jeff Minor: Thanks for the info on Quantico. I met Greg Bear last summer and had a chance to ask him about Quantico's delay. He was somewhat vague about the explanation, and subsequent events showed that even his hoped for US release date was optimistic.

    Clearly his fan base is not wildly happy about his shifting direction, but I gather the SciFi market is not good and almost necessitates moving to a wider audience. He and Bob Sawyer talked about the awful "War Porn" sub-genre that is taking up SciFi shelf space. I can certainly detect the shrinking shelf-space B&N and Borders devote to SciFi (and science too) in favor of other categories.

    59:

    The librarian's mind and the patternmaster mind will be hugely in demand. However, the patternmaster's mind gone wild will also find or make patterns in the data where there are none, and conspiracy theories will spring up like tumbleweeds.

There's this thing called the Internet...

    60:

Yes, and even on the sci-fi shelves it's mostly TV spin-offs & huge amounts of crappy derivative swords & sorcery fantasy.

    61:

DNA, and other error-prone systems, for storage: compensation for random errors can be done using redundancy and polling for consensus answers. The total storage requirement just increases.
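A minimal sketch of that redundancy-and-polling idea (triple modular redundancy with a majority vote; store each value several times and out-vote single corruptions):

    from collections import Counter

    def majority_read(replicas):
        # Read one stored value from redundant copies by majority vote;
        # tolerates up to (len(replicas) - 1) // 2 corrupted copies.
        return Counter(replicas).most_common(1)[0][0]

    print(majority_read([1, 1, 0]))   # one flipped copy -> still reads 1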

Of course, DNA is highly redundant in both organisms and species, so random errors in the germ cells of individuals are needed for evolution, a requirement that is not part of the posited stable information storage device. Of course simple minerals have a much longer shelf life.

    In practice, data storage is not permanent. I was reading that Google captures the last updated web page and caches the previous version, the previous cached version being destroyed. Thus modifications to any data can be deliberately or accidentally introduced with each page update (analogous to each biological cell generation).

    62:

    @45

    Very nice summary, I'll save it for future perusal :-)

Here is another argument: how often, looking at a digital picture, do you notice something you had not seen when you took it? Obviously the camera picks up something that is not part of your experience of the scene. Even a lousy old 2Mpixel consumer camera.

Obviously, staring at a picture is not the same as being there. But I do not think "storing lifelong experience" is the same as "ability to recreate any experience with 100% fidelity".

Hey, we are talking about using one atom per bit for data storage and arguing about silly megapixels :-). I expect digital cameras to progress a bit. And compression as well.

This might be a relevant link (dealing with text rather than images):

    http://prize.hutter1.net/

    63:

    Re: US rights. I've just read an article in the trade rag 'The Bookseller': "Amazon.co.uk gives publishers leave to remove US editions".

    Apparently they have introduced a new system enabling publishers to remove infringing US editions from the website, following pressure from the PA (Publishers' Association I presume). This seems to be following a recent legal ruling about similar shenanigans with CDs.

    Seems like we'll have to wait until January after all.

    64:

gmilton @60: yes, there is a lot of dreck on the shelves. However I have to assume the big box bookstores are rational and maximizing $/floor area, just like any retailer. If I were a buyer, I would use Amazon's ranks as a primary guide to buying and then regionalize/localize to taste. So I assume the SciFi dreck sells well.

    Having said that, a couple of weeks after Andresen blogged his favorite SciFi authors ( http://blog.pmarca.com/books/index.html ) I wanted to buy a selection for my vacation. Amazingly my local B&N did not have even 50% of the key titles, and in one case, not even the author (Ken MacLeod) on the shelf. It's possible this was due to a run on the books, but I have seen this before when searching for a particular author or title.

I think Stross is making headway - Glasshouse was displayed in B&N's "new releases" section on publication (although I don't recall seeing The Jennifer Morgue), but the shelves are very sparse regarding his other titles (and TJM was not among them as of 3 weeks ago).

    The problem for authors is that bookstores are showcases to quickly find interesting books, even while Amazon may account for a significant fraction of sales if you can wait a week or two. Thus their inventory is like advertising, reaching out to the casual customer. I value that, so I do continue to buy some retail (for now). Amazon is improving the experience for finding interesting books, but it will never be able to supply the demand for "get it now", and it is spotty (not their fault) for viewing the content and samples.

Hint to Charlie: "The Jennifer Morgue" and "Toast" do not have "search inside". Neither do the upcoming "Missile Gap" and "Merchant's War: 4". That is the publisher's responsibility, no? I cannot stress enough how important this feature is to me when buying w/o recommendations on Amazon (although more so with expensive technical books).

    65:

    nix @53:

    What makes me think that cockroaches haven't changed much genetically?

    Well, my ignorance of molecular biology and the factoid that cockroaches have been around in pretty similar phenotypes for quite some time.

As I said, this was just a guess among complex organisms. Microbes would have been my next guess, but since I won't be involved in any such thing it would not have mattered much.

    66:

tp1024 @65: I'm afraid you'd be wrong on the microbes too. They change very quickly. What you have to realize is that some genes are highly conserved, especially in the DNA sequences that are critical to their functionality. Other genes less so. What is worse is that microbes can acquire genes from other microbes directly, bypassing the hereditary requirement for higher organisms. A good example is the acquired drug resistance of some bacteria, like Staphylococcus aureus, even though morphologically they don't look any different than when we first observed them.

    67:

    Alex T @64: "The Jennifer Morgue" is currently only available in hardcover from a small publisher, Golden Gryphon, who do not have high street distribution. A trade paperback edition will be forthcoming in the US from Ace around the end of the year, and a paperback is coming out in the UK from Orbit around September 1st -- both of which should be widely available in their respective countries.

    In the USA, "Glasshouse" came out in hardcover last July, and a mass market paperback is due imminently (within about four weeks), so B&N have probably pulled any remaining HC stock from their shelves and will have the paperback on order.

    "Missile Gap" was published by a smaller publisher (again: no chain bookstore distribution) and has sold out after two reprintings. You can now read it on the web.

    "The Merchants War" isn't due to be published until November 1st, so it'd be kind of surprising if there was an Amazon "search inside" at this point in time.

    Finally, about Marc Andreesen's list: it covers what are in his opinion the best authors of the new century. That's going back seven years at this point. Publishing is seasonal, and bookstores don't like to carry lots of old stock; some of the titles Marc recommended date to 2000 (or earlier -- some of Ken's date to the late 90s) and are simply unavailable right now in high street stores.

    Amazon, AbeBooks, or your local SF specialist bookstore are where to go if you have a specific shopping list.

    68:

    Alex Tolley @ 66

    What you have to realize is that some genes are highly conserved, especially in the DNA sequences that are critical to their functionality

    Which is why arthropod phenotypes seem so stable to the casual observer. Mostly people see morphological features, and those are stable (IIRC) due to the body plan strategy arthropods use: segments generated by patterned invocation of the homeobox genes, which are very strongly conserved.

    Vertebrate morphology is much less stable; the cetaceans had legs only 40 or 50 million years ago. That's why morphology isn't a good way to estimate either genetic relationships or genetic drift.

    69:

    Charlie: Thanks for the info. You'll be pleased to know that I have you on my "hardcover" list - authors whose works are worth buying that way for the library at home.

    I'm not quite sure what you mean by "seasonal". Asimov is always well represented, Heinlein is going through a mini-revival (although we may be talking about reprints) and Clarke, whilst declining very quickly now, still has some of his classics on the shelves. There are no end of Star Wars and Star Trek titles, some very old now. Surely what sells is important. My point about Andreessen (we both spelled it incorrectly) is that he is a geek and his list was well circulated, so I would have guessed that any titles on his list would have enjoyed a minor blip. Sure, Amazon and ABEbooks are great for new and used books respectively, but they are no help when shopping for titles a day or so before heading off for vacation. And amazingly, there isn't a good SF specialty store on the Peninsula AFAIK south of SF or Berkeley - certainly nothing like "Forbidden Planet" or the defunct, but better, "Dark They Were...".

    70:

    How many Asimovs, Heinleins, and Clarkes are there?

    Most authors in the genre field -- the vast majority, being about 98% of us -- are here today, gone tomorrow. The aforementioned three are superstars; citing them as evidence that book-buyers in the big chains don't churn their stock seasonally is like pointing to David Bowie and The Rolling Stones as evidence that rock musicians are all rich.

    In reality, 70% of the sales of a new hardcover or mass market paperback in the US market take place within 90 days of its launch; if it's still in print a year later, the author's doing well. And the bookshops gear their product turnover to this cycle.

    71:

Bruce @68 - you are preaching to the choir. Conversely, the convergence of shark morphology led many to believe sharks haven't evolved, whilst in fact they have widely speciated and gone through the usual extinction patterns since their emergence.

    72:

    Charlie@70 --

I recently had a lot of trouble finding stories by Henry Kuttner for my kid to read. I liked them a lot when I was younger, but would have had no luck buying a book (or even finding one in the Chicago Public Library) if not for the recent movie loosely based on one of his stories. He is definitely a very underappreciated author.

I wonder if this phenomenon is specifically American. I talked to many guys in the US who love SF but do not know about Kuttner or Robert Sheckley. Reaction from Canadians: "of course I know his books!". The same response from Mexicans or French. His books are still easy to buy in Russian translation...

    73:

    Charlie - Poor choice of authors on my part - although I suspect Clarke may almost completely disappear from the shelves soon over here.

    I was aware of the sales loading, and I have read that it is getting even more concentrated, much like movies. However, I do peruse titles frequently enough, and am sufficiently aware of title age, to note that older titles do seem to be on the shelves - which I assume is due to longevity of sales demand. We're not saying anything different. Most titles are ephemeral, a few have longevity and stores plan their inventory based on their expectations and actual sales.

    What I wasn't aware of, based on what I think you said, is how little flexibility chain stores have in adjusting to high frequency events. I guess that is Amazon's good fortune.

    74:

    72: The Science Fiction Book Club has a reasonably priced edition of Two-Handed Engine.

    75:

    Alex@58: Compare Michael Marshall Smith moving to "paranoia-porn" via his Michael Marshall titles.

    76:

Bruce @57: For retinas the right way around (and a whole bunch of other obvious anatomical improvements), let me offer you the cephalopods.

    77:

    James@74:

Thanks, I already bought another book, but the SFBC looks interesting. I did not know about it.

    78:

    Charlie

    I've seen Glasshouse (mass-market edition) on the shelf within the last two weeks at my friendly big-box book store in Los Angeles.

    79:

    PJE: it takes a while for them to filter out into the wild.

    80:

    Is there a link to the Radio Wales audio of the interview? I have looked for it, quite unsuccessfully, on the BBC web site.

    81:

    Bruce @46:

It's already happening. In a number of tertiary institutions, lectures (mainly PowerPoints) are put online (on the intranet) for students to access, so during lectures they can concentrate on learning rather than scribbling notes.

    Some may forgo turning up to lectures and depend on e-lectures alone, but the sensible ones still turn up to lectures, and get a greater learning experience. Nothing (yet) replaces face-time & RL interaction.

    82:

    My entire university course is online with the OU. I probably won't have to attend a lecture until the last stages. All my course work is posted or available online, there are audio transcripts and MP3 files sent to me along with a CD full of study aids for each course.

    I can communicate with a lecturer by e-mail or call him/her should I so need further assistance.

    So in reality - we're there.

    83:

    82 - Yes, but only a very small minority of OU programmes are entirely available online, and there's a lot of research to indicate that we (I work for the OU) get a lot of benefit out of face-to-face tuition. I'm still writing courses in boxes.

Charlie, have you read Tim Burke's response to yr article? It's online here: http://hnn.us/blogs/entries/40740.html -- it saved me the bother of writing a comment saying pretty much the same thing. To sum up: if I wanted to read every surviving record from England for any one year after about (waves hands) 1640 (and probably much earlier), I would die of old age before I could finish the job. Searchability is good, but to find out what was going on, you can't trust the index.

    84:

    Chris: that's where algorithms come in. Once this level of data is online you can do really funky stuff. For example: model social networks over time and see how the overlapping sparse networks we move in change. If you want to zoom in on a Man On A White Horse, you can backtrack through his timeline and see how his circle of friends changes and try to pinpoint just who it was who first turned him on to a specific ideology. Or if you're trying to monitor larger social-historical trends, you can track how much time people spend in different domestic activities over the years, and start looking for influences.

Of course you can't do the data reduction by hand. But once all the data is online, there's some fascinating stuff you can do with it that should reveal stuff about the past that's simply inaccessible today because we don't, for example, have exhaustive records indicating when, where, and with whom everybody in England sat down for lunch during the 18th century, so we can track trends in food preparation as they feed into issues of domestic service. And so on.
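A minimal sketch of that backtracking query, assuming the lifelogs have already been boiled down to timestamped contact edges (all names and data here are invented for illustration, and the networkx library is assumed available):

    import networkx as nx

    # Hypothetical mined data: (person_a, person_b, year) contact edges.
    contacts = [
        ("subject", "alice", 1998), ("subject", "bob", 1998),
        ("subject", "bob", 2001), ("subject", "carol", 2001),
        ("carol", "ideologue", 2001), ("subject", "ideologue", 2004),
    ]

    def circle(person, year):
        # Rebuild the contact graph for one year; return that person's circle.
        g = nx.Graph([(a, b) for a, b, t in contacts if t == year])
        return sorted(g.neighbors(person)) if person in g else []

    # Backtrack through the subject's timeline to watch the circle shift:
    for year in (1998, 2001, 2004):
        print(year, circle("subject", year))   # carol is the 2001 bridge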

    85:

    I agree about the data mining for social history. In fact, here's a nice (if I say so myself) example that I prepared earlier: http://headheeb.blogmosis.com/archives/031384.html

    It's the Man on the White Horse stuff that I'm rather more skeptical about. Traffic analysis can only get you so far: eventually you have to immerse yourself into his reality feed and see what he did. Then you need to repeat the process with his key associates, in order to find out who their key associates were, then...

    People who suspect this might happen to them might go to lengths to counter it. Think about the 'Randy codes in prison' sequence from Cryptonomicon.

    Ever tried to analyse oral history tapes, or cine/TV footage? I have and it takes bloody ages. Information overload.

    BTW, I'm not trying to argue that this will change nothing, merely that I think that you're overestimating the impact of the change. AI agents would change everything, of course. But they'd want a share of citation index credit...

    86:

I suggest you check out Google Video's techtalks/engedu stuff; Google puts on regular lecture events for their staff, where people from all over the place present some really interesting stuff (and they're filmed and put online for the public to watch too). One I saw recently addresses the specific problem of searching lengthy video for 'interesting' events.

The short form is that it tracks separately moving elements in a scene over time, and packs them all down, overlaid, into one short sequence. It's a bit like time-lapse footage, except multiple events happen simultaneously. The human brain can do quite a bit when it comes to prioritising multiple simultaneous inputs; once you've found something worthwhile, you can drill down to the original source video.

The time-overlay stuff is a bit tricky to explain, but one of the early examples takes footage of a water-cooler over a period of time. Various people come and go, getting drinks. The compressed version shows everyone arriving and leaving at once. You can quickly pick out if one of those people is a Person Of Interest(tm), and bring up the original, linear-time footage to see who else was actually there at the same time.

Audio is always more difficult for humans to 'scan' because you need to listen to more of it to figure out if you've got to the right/interesting material; you can't "glimpse" it like a still from video. But audio is also more amenable to voice-recognition, automatic transcription, and thus machine-searchability. Sure, the transcriptions are kinda flawed now, but that's just a factor of time. Even tricky issues like homonyms become easier in the face of vast amounts of data. Consider Google's translation software -- it learns languages automatically, based on statistical analysis of large bodies of work that are published in multiple languages (eg UN declarations). I think similar techniques will deal with homonyms over time.
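A drastically simplified, single-frame version of that overlay idea, assuming a static camera (the real system preserves short moving clips rather than one composite still; the motion threshold here is arbitrary):

    import numpy as np

    def synopsis(frames, threshold=25.0):
        # Estimate the background as the per-pixel median of the clip,
        # then paste every frame's moving pixels onto it, so all events
        # appear "at once" in a single composite image.
        stack = np.stack(frames).astype(float)
        background = np.median(stack, axis=0)
        composite = background.copy()
        for frame in stack:
            moving = np.abs(frame - background) > threshold
            composite[moving] = frame[moving]
        return composite

    # Toy demo: a bright blob crossing a dark 1-D "scene" over four frames.
    frames = [np.full(8, 10.0) for _ in range(4)]
    for t, frame in enumerate(frames):
        frame[t * 2] = 200.0                  # the moving element
    print(synopsis(frames))                   # all four positions lit at once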

    87:

    It's the Man on the White Horse stuff that I'm rather more skeptical about. Traffic analysis can only get you so far: eventually you have to immerse yourself into his reality feed and see what he did. Then you need to repeat the process with his key associates, in order to find out who their key associates were, then...

    Hang onto that thought, there's an entire novel plot in it. Subtype: SF, historical. (Protagonist is a historian, trying to understand a great upheaval from centuries before. Immerses self in reality stream of the pivotal character. Disturbing things begin to happen in the present: is history in danger repeating itself ...?) Damn, I could write that up as a pitch and sell it tomorrow.

    88:

    Ta for the heads-up, Canis - I'll check it out. Remember though that face recognition (manual or auto) from video images is a difficult task, say my forensic psychology mates.

    89:

    "Hang onto that thought, there's an entire novel plot in it. Subtype: SF, historical. (Protagonist is a historian, trying to understand a great upheaval from centuries before. Immerses self in reality stream of the pivotal character. Disturbing things begin to happen in the present: is history in danger repeating itself ...?)"

Part of this plot was used in Walter Jon Williams's "Green Leopard Plague", which is of course rather good.

    90:

Here's another: cross Wikipedia with Second Life to make an immersive VR virtual history of everything we know. What a useful way to communicate the corpus of academic historical knowledge (a topic close to my heart: http://www.open2.net/historyandthearts/history/bridging.html ) and to update it. Of course, it's going to be vulnerable to all sorts of serdarargic. Ghosts in the machine...

    91:

Which brings us back rather neatly (I think) to the history/myth problem. In the early days of the Northern Ireland conflict, a press photographer is reputed to have been hit with an umbrella by an old woman who told him, 'you're photographing things that aren't happening'.

    Or as Richard Pryor put it, 'who are you going to trust? Me, or your own lying eyes?'

    92:

    A historian friend of mine would be seriously delighted to have such a detailed access to the past, because so much that is relevant is hidden, lost or simply not recorded. It's Brin's "Transparent Society" in spades. One can get a hint of what this will be like when watching very old movies - the details of life and the environment are almost alien to the modern viewer, details that are not generally recorded in novels or other "historical" documents.

As for navigating in this ocean of data - we are already seeing rudimentary signs of some aspects of this, in geo and time mashups, and it is only going to get much better as we get more data to play with. I don't doubt the ingenuity of people to develop the tools and make them accessible. Sterling has written about SPIMES, which is conceptually similar - every artifact will create a wealth of data about itself over a lifetime. He uses the term "data wrangling" to handle it.

    The technology is almost less interesting than the implications - who will record, how and where will records be kept, tampering, deliberate obfuscation, privacy and of course stability of storage and format changes. Some of these Charlie has written about, others have written about them too, even if the technology is different - eg "The Light of Other Days" by Clarke and Baxter. Will handling the data become a general skill like Googling, or will it be specialized? Will our lives require us to learn how to navigate it, or not?

    My sense is that the eyeballs/attention problem will likely overwhelm humans pretty quickly and that tools and AIs will handle much of the grunt work. Which means that the storage volume will need CPU cycles to keep pace with it. So expect processing power everywhere too. Can those same high-density substrates - like the carbon lattices - also be used as processors, or will we need something very different, but also at similar scales? I think we've been here too - "computronium". :-)

    93:

    Charlie @87:

    Sort of like here : http://qntm.org/responsibility ?

    Alex @92: "who will record, how and where will records be kept, tampering, deliberate obfuscation, privacy and of course stability of storage and format changes"

    And at some point someone will decide enough is enough and release Curious Yellow :-)

    94:

    I think the "Curious Yellow" topic is very interesting. How does it actually find the location of the information that it edits? I can see how a person's body could be digitized/mapped, but how does the CY find that info in the brain? Would those memories all have the same chemical signatures, or can the virus "play" all the memories in that digitized brain and selectively edit? It's a great idea, seeing how it gets into gate software. Would those gates just have to use microscopic mapping? Actual quantum states would have to be treated statistically, wouldn't they? I mean, you can't map a quantum state (of all the particles compossing the matter) unless it's frozen at absolute zero. Maybe.

    Jeff

    95:

    I totally agree that there will be new things we can do with the data that's flowing in now, and network analysis is prominently at the top of the list. But that data won't resolve any of the old epistemological problems with historical knowledge, and may aggravate some of them.

    96:

    Alexey@92: yes, "Curious Yellow" would be an issue. In a similar vein, Vernor Vinge has asked, are silicon chip processors a potential single point of failure (e.g. EMP vulnerability)? Now in the world of store-everything-in-crystals, maybe the redundancy and resilience of the substrate might protect us from data loss at least. But perhaps not from deliberate attack and tampering.

    One of the biggest pivotal data losses in history is the deliberate destruction of the library at Alexandria, rather like Google's server farms going down today. Historians now think more escaped than was once believed, but even so, what a loss and setback it was.

    To mix this thread with the star flight thread: what if this technology could summarize much of human existence and be used to seed the universe with our knowledge and experience? A very different play on the SETI approach.
    Rather than assume contemporary civilizations exist and attempt halting communication (limited by time and bandwidth), just send out tiny spores containing all the data and a reader to all points in the galaxy and beyond. The receiver could view the lives of humans, and easily pore over all the records stored on the net today - stored on crystals the size of grains of sand. Most SF has used the cliche of a few large, enigmatic ruins, much like those of our own history. But what if the knowledge of a million civilizations was scattered about us, ubiquitously?

    So here is my SF story theme-thought for the day. All around us, there are patterned crystals with massive amounts of data from civilizations, some of it in geologic layers. The "readers" only exist when we have sophisticated enough computers that can be hijacked by the S/W that comes with the data via RFID-like technology. In the near future, RFID scanners start detecting, at their sensitivity threshold, some examples of this data, which soon turns into a flood...

    97:

    Congrats on "Halting State" buzz. Can't wait!

    Scotland: my son and I consider it our adopted country, via his Scotland-native mom (my wife). I see Charles Stross as Sir William Wallace in Cyberspace.

    When I described, in a refereed paper of 1979, storage of all human data in diamonds, I did not yet have the data on pure Carbon-12 and pure Carbon-13 diamonds, which is now available. I did coin the term "Shannon" for a mole of bits.

    "I suspect some sciences, notably mathematics, are intrinsically endless."

    Hermann Weyl said that "Mathematics is the science of infinity."

    G.H. Hardy's "A Mathematician's Apology" says that 2 + 2 = 4 is absolutely true, but anything in the real world is not as definite, so he thinks that the world of mathematics is more real than our world. This is a view that goes back to Plato.

    Gregory Chaitin writes: "I have information-theoretic results on the limits of reasoning, and that leads me to think that to prove more you have to assume more, and this is a little bit more like the way that physicists work. Mathematicians think that you can start with a few self-evident principles and get to all of mathematical truth. Physicists don't think that. Physicists know that when you go to a new kind of phenomenon you need new physical laws to understand it. My own work says that mathematical truth has an infinite amount of information and any finite set of axioms only has a finite amount of information, therefore you have to add new axioms. Well, where are you going to get them? You have to work intuitively in a quasi-empirical way, it seems to me. In a pragmatic, empirical way, like a scientist does. At least that's my feeling.... I think that computers are changing the way we do science completely, and mathematics too. The computer can provide an enormous amplification of our own mental abilities, and it's really changing the way everything is done. George Johnson just had an essay on simulation in the New York Times in the Week in Review where he points out that now it doesn't matter what field of science you work in, the computer is fundamental in the work you do." [NYT 3/25/01, "In Silica Fertilization; All Science Is Computer Science"]

    98:

    Chris @76; I, for one, welcome our new cephalopodic overlords.

    99:

    Charlie, are you aware of HT Buckle?

    Social network analysis is exciting - I thought this over 15 years ago (bloody hell, I'm getting old) and checked it out. The conclusion that I quickly reached was that I'd have to write my MA using something else. The problem is that it's impossible to sample a network: 10% of the nodes yields only 1% of the connections. This makes it very hard to apply to historical sources. I don't see Strossian 'ubiquitous' data capture producing the kind of total information that you need to do it properly, merely a rather better assemblage of historical source material.

    100:

    PS - Ken M talks about Buckle here: http://kenmacleod.blogspot.com/2004_04_01_kenmacleod_archive.html

    PPS - do you remember the days when we thought that 100 was a lot of comments on an antipope thread? I recall them like, well, last week.

    101:

    D.O'Kane @ 12: "If all human life becomes recordable, and if future generations are able to access all human, historical events in their entirety, and accurately, what will human communities do for myth? Malinowski defined myth as the charter a human society relies upon for its foundation, and myth is often the product of misremembered history. . ."

    I don't think that's so hard to answer. Look at the way different people can look at the same well-documented events and interpret them in completely different ways, depending on ideology, worldview, religion, etc. We understand the world in terms of stories, not data, and we usually see the stories we want to see. More data won't change that; it might just provide more raw material for stories. Especially so since society is complex, and the only honest analysis of an event may be to admit we don't know why or how something happened, leaving a vacuum for stories to fill.

    But is this myth? I think it is. I think modern history might play the same role for us as stories about heroes and gods did in earlier times. World War 2 is one example. People like Hitler, Churchill, Chamberlain are more than historic characters, they have come to represent eternal concepts like evil, heroism and cowardice. When a pundit today brings up something like the Munich agreement in a column, he's not presenting a theory in historical science, he's drawing on well-known mythical characters and events to provide us with a moral lesson for our own time. (I.e. myth not as in "a myth", a lie, but as in an emotionally powerful story about our origin and place in the universe.) You can test this: Make up some alternate interpretation of World War 2, for instance that Hitler was unwillingly pushed to war by the UK, and tell it to someone. If they calmly disagree, the event is purely historical to them. If they also get angry, the event is more than history to them, it is myth. (Again, not saying myth is false, just that the truth value of a myth is much less important than the story value, and that it resonates emotionally in a way mere fact doesn't.)

    Or think of the role of the Vietnam war in American politics. I was amazed when I first discovered that there were two incompatible interpretations of the Vietnam war, living side by side: One (the one I was familiar with, raised in Norway) that it was a futile, evil war, another that it was just and necessary, and that the US might have won if it hadn't been stabbed in the back at home. Now, this is recent history. But to the people who believe strongly in one of these interpretations, it is also a kind of founding myth for their political movement. And now there's a similar split forming over Iraq, with die-hard supporters of the war laying the ground for a new backstab-myth, and opponents for a new evil hawk-myth. It's not because of lack of data, it's because we're so desperate for powerful and relevant stories.

    102:

    The largest need, then, is a shared collaborative index.

    103:

    Alex@96: "So here is my SF story themethought for the day. All around us, there are patterned crystals with massive amounts of data from civilizations..."

    In the movie AI there is an alien race that has access to some sort of quantum/substrate reality that allows them to see everything that ever was. So perhaps the patterned crystals you spoke of might be there, but the crystals might be composed of something very mystical, like dark matter. I wonder if this universe requires that all information be saved/stored. Maybe that is the real function of time.

    Jeff

    104:

    jeff@103 - perhaps the closest analogy I have read is a set of related short stories by Bob Shaw based on his idea of "slow glass" - glass that hugely delays the propagation of light. Eventually the government drops gazillions of tiny slow glass beads over the US to spy on everyone. I think there was also an idea by James Blish about a "sub-etheric" device whose every transmission is an instantaneous blip holding all communications ever made with such devices - but this is a universal "storage", rather than lots of discrete ones with different content. I certainly wouldn't claim my idea is absolutely unique, but I don't think I have read of anything quite like it, and it does offer an intriguing way for civilizations to cheaply communicate huge amounts of information to cultures separated in time, with relatively high resistance to degradation. I also like the idea of the deep time aspects - fragments would arrive and get deposited in sedimentary layers which could be teased apart to determine the times/distances of the transmitting cultures and how they changed. A veritable galactic history buried throughout the earth.

    The stories would then get away from the usual high-cost-radio-telescope + government-secrecy-vs-public-knowledge scenarios, or expeditions-to-planet-X-or-moon-Y + find-ruins + discover-ancient-knowledge-and-artifacts plots that so mirror contemporary archeology. Because the data would be so vast, instead of groups studying and making sense of just a few items and writings, every person and AI on earth could likely spend time with something unique. And of course it is another nice way to handle the Fermi paradox that fits in with the energy costs of interstellar flight, even with the costs of maintaining radio transmitters for long periods. Unlike the scenario in the movie AI, the technology could be quite plausible, based on a number of different substrates and methods to find the sparse data grains amidst the un-patterned natural geology.

    105:

    88:

    "Ta for the heads-up, Canis - I'll check it out. Remember though that face recognition (manual or auto) from video images is a difficult task, say my forensic psychology mates."

    Posted by: Chris Williams

    Last I heard, it's also a task in which a lot of money is being dumped. Military, police, corporate, and all combinations thereof.

    106:

    "Or think of the role of the Vietnam war in American politics. I was amazed when I first discovered that there were two incompatible interpretations of the Vietnam war, living side by side: One (the one I was familiar with, raised in Norway) that it was a futile, evil war, another that it was just and necessary, and that the US might have won if it hadn't been stabbed in the back at home. Now, this is recent history. But to the people who believe strongly in one of these interpretations, it is also a kind of founding myth for their political movement. And now there's a similar split forming over Iraq, with die-hard supporters of the war laying the ground for a new backstab-myth, and opponents for a new evil hawk-myth. It's not because of lack of data, it's because we're so desperate for powerful and relevant stories."

    Posted by: Bjørn Stærk

    We'll see it happen again in the US, both because a lot of Americans are uncomfortable with what happened, and because the military-industrial complex needs to sweep a lot of dirt under the emotional rug.

    107:

    And then there's people like me, to whom Vietnam is just history. I have no more emotional attachment to it than I do to the Korean War, or WWI, etc. I now understand why other people feel the way they do, after some conversations with my parents, who protested the war, and with a rather hawkish conservative who still hates the Democrats for betraying the US and South Vietnam. But I don't feel anything myself.

    As for Iraq, I suppose we'll have to wait until it's over to see what narrative history adopts.

    I see it as an important lesson, that whatever it is I feel strongly about right now there's a good chance that in 20 years young adults won't care or know much about it at all...

    108:

    Retrospoiler: I hope it's okay now to explain the Computer Science and Physics behind the bootstrapping of the Eschaton.

    Quantum Computational Complexity in the Presence of Closed Timelike Curves, by Dave Bacon. arXiv:quant-ph/0309189 (submitted 25 Sep 2003; last revised 28 Oct 2003, v3).

    Abstract: Quantum computation with quantum data that can traverse closed timelike curves represents a new physical model of computation. We argue that a model of quantum computation in the presence of closed timelike curves can be formulated which represents a valid quantification of resources given the ability to construct compact regions of closed timelike curves. The notion of self-consistent evolution for quantum computers whose components follow closed timelike curves, as pointed out by Deutsch [Phys. Rev. D 44, 3197 (1991)], implies that the evolution of the chronology respecting components which interact with the closed timelike curve components is nonlinear. We demonstrate that this nonlinearity can be used to efficiently solve computational problems which are generally thought to be intractable. In particular we demonstrate that a quantum computer which has access to closed timelike curve qubits can solve NP-complete problems with only a polynomial number of quantum gates.

    109:

    Charlie@84 and 87: very neat idea, sir, and I mean that in the honestly jealous sense. The question, however, if you want to write real social science fiction, is: what's the theory?

    How do you know that food preparation is relevant, and not just a bad proxy for the real variable of ambient temperature plus the leg length of women in visual range?

    Might even get me interested in SF again.

    Andrew@107: Dude, you really really seem younger than your age with that comment. I'm hoping against hope that your omission of WW2--or the civil rights movement--among other historical events was deliberate. But I'm not all that much older than you, and my fiancee is younger than you. So please do me a favor, Ali G, and don't tarnish American young people by presenting your own (somewhat incomprehensible for an educated person) emotional detachment from the great conflicts of astoundingly recent times as representative of educated American young people in general.

    110:

    There's some interesting e-social-science research going on into developing methods for recording and annotating meetings. Even with clever use of speech recognition tools and the like, it's still very labour-intensive: about 100 person-hours to annotate 1 hour of meeting. At that ratio, it's not something you would do routinely, though if the labour time comes down by an order of magnitude or so you might see fully annotated, fully searchable lecture courses and the like available on the web.

    Even then, though, if you start trying to do total information recording you have to index the process of indexing: obviously the ratio of time spent indexing to the length of the activity being indexed has to drop below 1:1, or the indexing-upon-indexing never catches up.

    Automatically indexing and annotating human speech and movement is a hard, hard problem, much harder than figuring out how to store large amounts of data. It'll take some significant advances in machine interpretation of natural language and body language before a useful collection of global life-history data becomes possible.

    111:

    Barry @105: just because a lot of money gets dumped into a task, it does not follow that the task is achievable. (Especially when the task is minimally -- but not adequately -- useful for lab demos, the cost of entry is low, and there are a lot of companies running such labs in search of a government-funded feeding trough.)

    112:

    A scientific task, maybe Charlie.

    When an engineering task is the focus of a lot of government funds, it tends to happen. Like the Apollo project.

    113:

    Noel @ 109: Yes, I did deliberately omit WWII and things like the Civil Rights Movement. They've managed to stay current and relevant in our culture over the years. Part of that's due to media (look at all the WWII movies and documentaries) and part of it's due to education (go to a school during Black History Month and see).

    But for the other items, as far as I can tell they have much less emotional impact and relevance for the current batch of college age and younger children. Keep in mind, for many kids in elementary school right now, Vietnam was something their grandparents worried about.

    I have a hard time explaining the Cold War to people even a little younger than myself as well. They can't grasp the idea that I grew up knowing that nuclear weapons were aimed at me, and that one slip could destroy the world. That it wasn't an abstract thought, but that there was a real fear that WWIII could happen.

    But I have yet to meet anyone under 30 with any great emotional attachment to Vietnam, though they may parrot what they've heard from their parents.

    (And yes, I am somewhat detached -- I think it's a result of having studied history and learned about the other horrors mankind has inflicted on itself. Put in context, Vietnam was hardly the worst war there's been. Especially not in the 20th century.)

    114:

    Alex@104, I think the only thing I've read regarding access to some sort of default memory storage mechanism might be Herbert's genetic memory that the Bene Gesserit use. That only gets you so much information, but the idea could be used in combination with technology. Say, a device that could tap into this fictional DNA memory and "play" it. You could, in theory, based on today's population, access the entire history of humanity. It's sort of the Six Degrees of Separation idea, but going back in time. I do feel at times like I can remember stuff that isn't part of my known past. Maybe we do imprint our DNA somehow, or our "quantum DNA" (whatever that is).

    Jeff

    115:

    Human memory isn't a recording in the way that computer memory is. The human brain is very good at extrapolating and simulating based on incomplete information. We remember a few key points, and make the rest up. So it's entirely possible to have memories that you couldn't possibly have -- your brain has created them from what information you do have.

    116:

    Andrew, see: this for one example of a face recognition system deployment test in public (as opposed to in a lab, in police mug shot conditions). Or this on iris scans. Or this for an explanation of why looking for more biometric identifiers actually makes the problem of false positives or false negatives worse.

    Basically, the issue with identity authentication tools is that you really need better than 99% accuracy before you can roll them out in public places to do things like perform dragnet searches for wanted individuals, or screen airliner passengers. Even 99% accuracy isn't terribly good; it means one or two people being hauled out for questioning needlessly on every airline flight. In reality, accuracy figures in the range 20-40% (i.e. utterly hopeless) are what's being achieved, except under special circumstances -- like this (a database of known fraudsters is compared to photographs in passport applications -- given that such photographs have to conform to fairly stringent requirements before they're accepted, giving a much better chance of identifying someone than with a real time video feed where the subject may be at an odd angle and distance from the camera, or in shadow or too brightly lit, or smiling, or blinking, or wearing glasses or a false nose).

    Face recognition isn't, in point of fact, an engineering problem, because the science underlying face recognition isn't a done deal yet. We've got heuristics (rules of thumb) and a bunch of techniques that collectively have some hope of doing it, but it's by no means a deterministic process whereby a set of given inputs always give a corresponding set of outputs that are a function of the inputs. And there's no guarantee that throwing money at it will ever make it work. How many times have you spotted a stranger and mistaken them for someone you know? Or not recognized a friend? Especially at a distance, in a crowd? We're running a neural network that's been optimizing for face recognition for tens of thousands of generations, all the way back to the African savannah, and we still get it wrong surprisingly frequently. It is not immediately obvious that machines are ever going to be better at this job than human beings -- and the only reason we're bothering these days is because the mania for automation and control means that ministers and civil servants are more willing to throw money at software companies than hiring more police officers.

    117:

    "It is not immediately obvious that machines are ever going to be better at this job than human beings -- and the only reason we're bothering these days is because the mania for automation and control means that ministers and civil servants are more willing to throw money at software companies than hiring more police officers."

    Machines don't have to be better, just 'good enough'. That standard has to be pretty high, granted, so let's set the bar at near-human capacity. A machine may also be better than an individual human, because humans fail for plenty of reasons: machines don't fatigue, have no unfamiliarity with 'foreign' faces, and can zoom in on distant faces or move instantaneously between available viewpoints for the best full-face view. All the usual reasons for using machines rather than people.

    I do not see why the science has to be done before the engineering. It would be nice if you wanted machines to recognize faces as humans do. But that would be like not making chess-playing machines until you understood how humans play chess. As you well know, machines play chess completely differently than humans do, as they have energy-profligate, brute-force methods available. Why assume face recognition approaches will be any different?

    It is true that we are at relatively early stages of the technology, but that doesn't mean that it cannot be developed for specific applications before being deployed for general surveillance, or as an adjunct to humans to help reduce the workload.

    118:

    As an addendum to the face recognition issues, a slightly tangential subject is "face blindness" in humans (although it is relevant to the science).

    A woman has written a really nice article on this subject using images of stones to convey her points.

    http://www.prosopagnosia.com/main/stones/index.asp

    119:

    Biometrics has always been a problem, especially fingerprints. My father worked on the first Scotland Yard fingerprint system and developed a huge distrust of them, which cost him a pretty penny in later years when DeLaRue and others wanted him to endorse their products.

    Basically he could see that computers would get faster, but he couldn't see an easy way of increasing the accuracy in a way which he would be prepared to sign up to and say "yeap, that's the person, no doubt."

    Even in the 70s, they could get a match on a partial from the machine which would narrow the search down, but the standing rules he operated when he ran the fingerprint branch were that a human had to do the final analysis.

    I don't believe it's got significantly better.

    120:

    Dave, are you exaggerating a tad about your father's involvement in the first NSY fingerprint system? Fathers can be pretty far back, but 1900 is a long time ago. If you're not . . . got any memorabilia I can copy for an archive?

    Alex@117 'no unfamiliarity with 'foreign' faces' - My face recognition mates report that this factor is over-rated: in practice it's hard to make out in ID-parade conditions. They weren't expecting that result, but observed it nevertheless.

    121:

    I suppose this actually isn't something new -- if you lived in a small village a thousand years ago, or even a modest-sized town a century ago, chances are good that everyone knew what you did all the time. Perhaps the one big difference is that while you may have had a compatible agenda with the town (through lack of exposure to anything else), you may well not have one with the people running your country. It's a problem based on your type of government more than surveillance per se.

    122:

    Alex, machines would have to be perfect. Because unless you can develop a self-modifying and semi-sentient AI to take over the reviewing of the face recognition, you're stuffed.

    The current technology is only about 99% accurate. That's one in a hundred wrong... And as pointed out that's if someone hasn't used a wig, grown a 'tache or attached a fake nose.

    The systems are very inaccurate - and I'm sure there was a Beeb or Guardian report that revealed that the system also had difficulty in picking up the key 'points' on the over-60s and those of African/Afro-Caribbean descent.

    Which must be nice for them.

    123:

    (I just realised the last comment may have sounded somewhat racist - what I meant to say was "Must be nice for them as they'll be getting pulled out of lines for flagging up the 'unknown' tag on the biometrics all the time".

    And won't be nice at all)

    124:

    serraphin@122 That is wrong. Humans are not perfect and make mistakes too. To be useful, machines only have to be correct enough to reduce the workload on humans. Say a machine can recognize a face in a crowd instantaneously, but with only 50% accuracy (all true and false positives, no false negatives): that still reduces the human effort by 50%. A machine - or more likely many different ones - can run many different tests extremely quickly. Suppose the quality of the recognition depends on viewpoint: for a human it is costly to switch to better viewpoints, but this is really not so for a machine that could track an individual through many viewpoints to get the best chance of recognition in a crowd. Machines potentially have far more senses than we do, so face recognition might be just one of many things at their disposal for recognizing an individual.

    125:

    Alex T: Thus say to recognize a face in a crowd, but a machine can do it instantaneously, but with only 50% accuracy (all true and false positives, no false negatives), that reduces the human effort by 50%.

    ... Er, no. If it makes false positives 50% of the time, that means a fairly significant number of innocent folks are going to get tagged as terrorists or criminals. It doesn't take much of that sort of thing before you get a journalist, politician, or lawyer by mistake -- and then the shit hits the fan. One of our endemic problems this century is that there's a mystique attached to computers; ordinary folks don't understand that they're frequently buggy or produce false positives or whatever. Thus, it's only a matter of time before a false positive leads to J. Random Innocent Folks spending serious time in custody (or worse, if there's a black let's-bypass-due-process rendition system in use).

    False negatives are paradoxically less harmful, because if you're running a dragnet one of the other recognizers will pick up the target sooner or later -- without dragging in innocent third parties by mistake.

    126:

    Charlie: The point of having the bias to false +ves is to over-flag the set of targets and use humans with their better abilities to reduce the set to the human error rate. If there are very few false -ves, then the human operator is saved from looking at the individuals in the discarded set. In fact the efficiency gain would be very large. Assume 1 in 1000 people are the targets. Assume a face recognition system that correctly flags every target but tends to offer 5 false +ves for every true +ve. Then the pool of faces to be viewed by the human operator is ~6 in every 1000 screened, rather than all 1000, saving a huge amount of manpower.

    Obviously how that is used is a procedural issue, not a recognition issue. If you "round up" all the machine flagged targets and haul them off to the station to be fingerprinted, questioned and ID'd that is a very different civil rights proposition than just sending the flagged images to human operators for visual inspection and final ID'ing before action is taken.

    It is all about applications, not the technology. The bit that is totally bogus in my argument is the issue of false -ves. In reality, that is almost certainly going to be a large number and thus the system as a whole probably would be better with just human operators for the near future.

    The current upsets over false +ves in the media today are almost all due to dumb computer systems using data pattern recognition with no human oversight to correct the errors (i.e. 2-year-olds and little old Jewish ladies from Cincinnati are not likely to be jihadis). That is an indictment of the way we set up the anti-terrorist systems, not of the pattern recognition algorithms in themselves.

    127:

    Charlie: I reread your comment and see that you are assuming the stupid system of today is a given and therefore want automation kept out to reduce the inevitable foul-ups. I was making the assumption that the 2 were independent and that political will (haha) will ultimately ensure that the system uses the technology correctly. I guess I just am more optimistic than you that we will get past this [in]ternational paranoia in due course.

    128:

    Chris - I meant the first computer one that the Yard put in during the 60s/70s. A little unclear on my part. He did start in the Fingerprint branch in 1953, so he was there very early and we do have some fantastic stuff from the clear out they did when they left Scotland Yard and moved to the new building.

    129:

    Dave, can I have a chat with your dad? My day job involves being heavily into the history of police use of technology. I happen to have the UK's second biggest archive of police history material in my office, and we're always on the look-out for new stuff. Especially if he ever feels like throwing it away.

    130:

    Alex @ 124: Sorry to bang this one back. But I didn't say humans were perfect, I said machines weren't. Especially when they're not able to review and make flexible decisions on the data they are receiving.

    The system you're talking about is pretty unmanageable for the requirements it is to be used for. Assuming you put a 50% hit system into, say, Heathrow airport (a place where 'terrorist threats' are assumed to be high) - and let's make a conservative limit of 150000 passengers a day.

    That's 75000 'positives' sent to a controller. Even assuming Heathrow can afford 10 controllers, we're looking at each of them reviewing 7500 positives a day. That's 313 flags per person per hour.

    These all need to be reviewed, checked against Interpol, or watched for suspicious actions...

    Current technology and programming just won't do it. And by the way - do note that the 99% accuracy number is in photobooth conditions, i.e. looking straight at the camera, with no facial expressions, good lighting and a solid colour background.

    The system you suggest - picking a face from a crowd and identifying it - is much harder. If not nearly impossible with any form of accuracy until far more powerful software/systems are generated to allow for such.

    Far easier to force through a parliamentary bill that requires all UK subjects to get RFID tags surgically inserted into their spines...

    131:

    Alex @127: you think they're going to throw away the current system and replace it with something better? That's ... optimistic.

    Like it or not, since 11/9/2001 we've seen the construction of a huge -- $100Bn per year turnover -- global security industry. Its lobbying power is enormous (more gravy on the train, please!) and it will fight to keep its position. Seeding scare stories in the press is part and parcel of this industry, just like seeding anti-environmental-change studies is for the petrochemical industry. (Doesn't matter if some or most of the competing security vendors are honest; some of their competitors will be pragmatic and ruthless and take advantage of our gullibility.)

    Things that do not maximize profits include: having competent, highly trained, staff on the ground. (Trained staff aren't trivially replaceable and therefore tend to cost more in wages than semi-competent rentacops.) Having more highly trained staff monitoring cameras in watch centres, along the lines you describe. Things that do contribute to the bottom line include: selling lots of shiny technological fixes (which due to the mystique of technology you can charge more for than the well-trained security staff -- even though they cost significantly less to run), keeping wages in the screening and rentacop sector low, and keeping the public scared (so they buy this rubbish).

    The point I'm getting at is that the "stupid" system of today is only stupid if you take its alleged goals (achieving security) at face value. If instead you analyse it as a system for concentrating wealth at minimum cost, it works beautifully. After all, if there's another huge terrorist outrage, what happens? A few baggage screeners get sacked, then yet more money gets thrown at "fixing" the system.

    It's broken by design.

    Put it another way: I know a technique whereby, with about the same number of volunteers for a suicide mission as Mohammed Atta had, and no tools and technologies that the Jihadis haven't already proven themselves proficient with, I could reproduce the death toll of 9/11 today. (And I'm very tempted to post it on this blog just to see what kind of shit-storm I stir up. Ten to one, there'll be a chorus line screaming at me for "giving the terrorists ideas". The bad news is, it's already been used in Russia by Chechen separatists ... and you know what? I've only ever seen one airport where it wouldn't work properly.)

    132:

    This kind of thing always brings to my mind that 'Matrix' phrase: never send a human to do a machine's job. Well, it also works in the opposite sense!

    Face recognition must be one of the very hardest tasks we could try to accomplish using machines, while humans are quite good at it after millions of years of practice and evolution. Why do we want to use machines for this? Are we so deeply convinced that machines are so superior to humans that they will do a better job in any conceivable field? Even the usual 'to save money' explanation seems quite inadequate in this case.

    This must be the parallel to voice recognition: even when the human cooperates 100%, the failures are frequent enough to make the whole experience immensely frustrating. If we tried our best to get our voice not recognized, the listening machine wouldn't have even one chance in a billion of succeeding.

    133:

    Re: Halting State; nature imitates art... http://www.theregister.co.uk/2007/07/18/second_life_copyright_suit/

    134:

    Re: Charlie's point on screening companies in #131, I'd like to present this article by a former security screener: http://www.kuro5hin.org/story/2006/7/26/1497/94515 I can't vouch for its accuracy, or how much has changed since the writer worked there, but the gist is that, the company doesn't care whether you spot weapons or not. They care whether you spot standardised FAA test items or not. FTA: "FAA Test items are supposed to have the same pattern on the X-Ray machine as the items you are really after. They included a starter pistol with a plugged barrel, a "bomb" that consists of an old style alarm clock, three pieces of PVC pipe and two lead wires, a knife encased in three inch thick plastic, and a couple more I have forgotten. The FAA test items must set off any metal detector. They must also, more importantly, be detected by screeners." [...] "Any gun? Any gun? I don't give a fuck about any gun, dipshit. I care about this gun. The FAA will not test you with another gun. The FAA will never put any gun but this one in the machine. I don't care if you are a fucking gun nut who can tell the caliber by sniffing the barrel, you look for this gun. THIS ONE."

    135:

    I don't understand why we're even throwing numbers out for accuracy of a positive/negative test of any kind (i.e. the facial recognition systems) without separating the false positive number from the false negative number. I'm not even sure what "99% accurate" means without giving both a false positive and a false negative number.

    If a system has no false negatives, but 50% false positives, then it does reduce human workload by 50%. You only have to have a human look at the 50% of matches it claims, and see if they are correct.

    Basically, you have to decide ahead of time:
    a) what is the acceptable false positive rate
    b) what is the acceptable false negative rate

    and then calculate whether the computer system is useful based on its actual false positive and false negative rates, using something like the equations in the type I error/type II error Wikipedia article.

    My gut feeling is that you could construct a recognition system with a much lower false negative rate than its false positive rate. Such a system would likely be useful even if a human had to double-check the results. Even if the system's false negative rate is higher than the acceptable false negative rate, you could have a human double-check some portion of its negative results to "shore up" your results to an acceptable level.

    136:

    This debate seems to be pre-supposing that cos computers can't do faces, humans can. We can't either - this is especially the case when 'we' are McJobbers who have to do it several hundred times a day. This failure rate is boosted massively by such hi-tech trrist innovations as a new haircut. Or a hat.

    137:

    Chris: Sadly Dad died in 1997, which is a shame because I never got to see him whinge about CSI. If you want to email me (daveon AT gmail DOT com) I can give you some bio-details; there's probably some stuff in police archives about him. He was the last Police Officer to run forensics at the Yard before it became a civilian operation (Commander Martin "Paddy" O'Neill), and he also ran a chunk of training operations at Hendon when he was a Chief Superintendent.

    We've got a weird selection of stuff in boxes.

    138:

    Charlie, Serraphim et al: This sub-thread on face recognition has diverted from the issue of technology, where it started, to very different issues of security, politics and 9/11. I was responding to the earliest part of the thread, which started with the idea of face recognition, and a further point made about the money that could be thrown at the problem. Charlie's comment 116 was the one I was responding to. However, since then the comments have devolved into the evils of the "security-industrial complex".

    The issue of whether, or to what degree, face recognition can be made to work is quite separate from who funds or deploys it. Arguments presented as to its feasibility all start with "well, it isn't working today". But that doesn't mean it cannot be made to work better tomorrow and better still the day after that. Voice recognition is a great example: despite the early hype and complexity, strides have been made and the latest systems are getting quite remarkable. There is no a priori reason to believe that face recognition will not develop similarly, or that face recognition is some endemically human activity that cannot be mimicked by a machine. As an aside on this point, I raised the issue of 'foreign faces', which I believe is valid. We do not recognize animal faces nearly as well, and I would argue that it is not the faces per se that are the issue, but the discriminators used. (Hence the reference to the face blindness article and the stones metaphor.) Bottom line on this is that I think over the next few decades, face recognition from sensors will be made to work with some reasonable (however you care to define this) level of performance. As long as a system can be made to work economically, it will be deployed. The economics will be affected by what the political and legal goals and constraints are.

    Turning to what has become a rant on security politics. I couldn't agree more that many companies have capitalized on the current demand for security, although I do not believe that "rentacop" pay scales are being kept deliberately low - that makes no sense to me. As to using shiny toys - well, they do work: metal detectors, bomb sniffers, x-ray machines. I also agree that security as practiced in the US (I haven't been back to the UK since 9/11) is a sham - and security experts like Bruce Schneier have basically said so loud and clear. The problem is not the shiny toys so much as a determination not to do layered security seriously, for whatever reason (I'm not so cynical as to believe that it is solely to line the pockets of "shiny toy" suppliers). If I was a terrorist, I would be targeting areas that are not very secure today - chemical plants, water supplies, tunnels etc. The US anthrax scare back in the 90s worked quite well - imagine the fear if it was announced that a major metropolitan reservoir had been contaminated with a toxin weeks after the fact. The UK has gone down the track of universal surveillance far more quickly than other places - we are still resistant in the US, although I agree with David Brin (Transparent Society) in suggesting this desire for privacy might not be the way to go. People knowing what you do all the time feels creepy - I know, I experienced it while living in Bermuda in the 80s. For me, because I don't like US airport security, I just travel very much less and use other technologies instead where possible.

    139:

    Regarding the feasibility of face and/or voice recognition system, I'm an IT professional with more than 20 years of experience, and in my experience there are very few things we can't do... if we are willing to spend enough time, resources and money working on the problem.

    But the real question shouldn't be 'Could we do it?' but 'Would it be an improvement?'

    This question can be answered from several points of view, of course. Voice recognition systems are definitely not an improvement from the user's point of view, as anyone that has had to fight with them can say, but they are economically very convenient.

    A working face recognition system? I really can't see how such a system could be feasible today, or for quite a few years to come, not even spending obscene amounts of money, but a body recognition system would probably be a lot more feasible. I.e. I don't think we could build a machine able to scan a face in the crowd and say "He's the famous actor Brad Pitt with a wig and a false moustache" - Brad could alter his appearance too easily. But he would find it a lot more difficult to alter his height, the length of his arms or the size and shape of his face... In my humble opinion we could easily build such a system. Not with 100% accuracy, of course, but accurate enough to be a great help.

    140:

    Gait analysis is just one of the things that automatic systems can't do yet. Identity is one of those problems - like 'solve crime' - which has too many human factors in the mix to make it solvable merely by throwing more money at the software.

    141:

    I have one word for anyone who thinks gait analysis is a panacea: "wheelchair". (Or another word: "crutches", preferably with a fake plaster cast.)

    I suspect these gambits wouldn't easily fool a human guard who's actually awake and on the ball, but unless you're automatically going to red-flag every wheelchair or walking-stick user (including, for example, my elderly and mobility-impaired parents in the dragnet) then it ain't going to help much.

    We also tend to forget that there are styles of clothing that tend to obscure or modify the physical proportions of the wearer -- they're not currently hugely popular in the developed world, but they have been, and given the way fashion runs in cycles I'd be unsurprised to see them reappear.

    142:

    Alex:

    Re: success rate of weapons / bomb detection: there are relatively few 'double blind' weapons tests done, and those I've seen don't give me much confidence that they do work.

    Practical experience: I've several relatives who were in the RUC (the police in Northern Ireland). They were typically armed for personal protection, though for practical purposes a handy inconspicuous place to keep the handgun was in the wife's handbag. Life being what it was, this was often forgotten about, resulting in the handgun being carried through weapons searches and X-ray points without the carrier being aware (and hence nervous). Typically the weapon remained undetected 50% of the time.

    I've seen handbags of relatives searched and the searcher fail to notice or react to the handgun inside.

    If the person carrying it was an undercover agent who forgot to take it out, what do you do? create an incident and blow their cover, risk a scene where you get shot, or just 'fail' to notice?

    It helps, as in 134, if you can tell this is not a performance test ...

    143:

    We also tend to forget that there are styles of clothing that tend to obscure or modify the physical proportions of the wearer -- they're not currently hugely popular in the developed world, but they have been, and given the way fashion runs in cycles I'd be unsurprised to see them reappear.

    "Dynamic patterning clattered across the knoats in her crinoline as she staggered like a big seal up the companionway of the giant wind turbine platform, hoping tae fuck the guardbots didn't suspect the thermal lance she hud lashed to the things beneath all that vintage whalebone.."

    144:

    "Things", of course, s/b "thighs".

    Inspiration here.

    145:

    alastair@142: Practical experience: I've several relatives who were in the RUC (the police in Northern Ireland). They were typically armed for personal protection, though for practical purposes a handy inconspicuous place to keep the handgun was in the wife's handbag. Life being what it was, this was often forgotten about, resulting in the handgun being carried through weapons searches and X-ray points without the carrier being aware (and hence nervous). Typically the weapon remained undetected 50% of the time.

    I think this says more about human fatigue than machine failure. X-ray machines would certainly have shown a dense mass in any handbag; it was just not noted by the screener. When items are concealed, e.g. bombs in laptops, search requires these extra "senses", otherwise it would be hopelessly slow. Even in the good old days of smuggling, only a few people could be singled out and their baggage searched. Someone above noted 150,000 people per day go through Heathrow - that is a huge number of almost 100% true -ves to have to screen.

    The classic approaches - multiple viewers and audit by known events - only partially work, as should be obvious from this anecdote plus the numerous postings on the web of the type "TSA finds water bottle but fails to note bomb in same case". If humans were that good at detecting anything, there wouldn't be spies, and spy novels would be a joke. Typically in the past (and the present), people have resorted to extreme authoritarianism and fear: a police state with multiple checkpoints and severe penalties for capture (why else are spies excluded from the Geneva Conventions?). The costs have often proven economically crippling. Technology just extends what you can do before those costs get too high. Airport/aircraft security, despite its laxness, is a significant economic problem. It costs money to implement, reduces travel efficiency and makes travel very inconvenient. This reduces demand and shifts it to alternatives.

    The only choices are - use more technology that works (or bad guys believe works), change the way security is done, forget about security for the most part and use other approaches instead or restructure society to be less vulnerable.

    Personally, I think that a determined attack of some nature would be eventually successful whatever we do. That we haven't seen anything on the scale of 9/11 is more likely to be the lack of capability of terrorists than the capabilities of our defenses.

    146:

    I have one word for anyone who thinks gait analysis is a panacea: "wheelchair". (Or another word: "crutches", preferably with a fake plaster cast.)

    One or two of those little strap-on exercise weights at some point along one or both legs would be easier IMO.

    147:

    Alex: El-Al.

    It's a financial and motivation issue.

    148:

    Can't let today go by without the reminder of what happened 38 years ago today.

    I mixed my memory with an online interpolation of what Armstrong and others were thinking during a rather tense, busy time, on PZ Myers' scienceblog:

    http://scienceblogs.com/pharyngula/2007/07/ever_upwards.php

    149:

    Andrew@147: Not clear what the airline El-Al has to do with this other than a very successful history of deterrence since the skyjackings of the 1970s. Palestinians just hit other targets - e.g. shopping malls, buses. It just isn't really possible to be 100% secure, any more than it is possible to be 100% crime free - in whatever crime category you name.

    150:

    Alex: El-Al take airline security seriously. Their implementation isn't about Security Theatre, it's about keeping bad guys from blowing up airliners. Separate boarding and baggage handling facilities at each airport they fly from that they run themselves, police-trained security specialists using behavioural (not racial) profiling to look for signs that something's not quite right, every piece of luggage hand-searched carefully, and so on. They take security seriously enough that it adds significantly to the cost of flying with them; on the other hand, nobody's managed to hit them since the early seventies (despite them being a top priority target for just about every Palestinian and jihadi group out there for the past forty years).

    El-Al doesn't have an army and they don't do deterrence; they just treat security as an integral core part of their business, like ensuring their aircraft are properly maintained, rather than an annoying imposition that they're just going through the motions with.

    151:

    Charlie: I understand Alastair's idea that El-Al takes security seriously as opposed to theater. The real issue is - can that really be replicated throughout the whole economy without incurring huge economic costs?

    If you have read Paul Kennedy's "The Rise and Fall of the Great Powers" (if you haven't, get a copy and read it), the theme is that economics drives the life cycle of powers. Usually, imperialists can project force very cheaply at first (gunboats on rivers), but eventually the outsiders gain the same force, the costs of maintaining territory become overwhelming, and the imperialist economy goes into decline (cf. Vietnam and Iraq for the US). Terrorism is just another example of power asymmetry - it costs very little for terrorists to actually succeed, but it costs a huge amount to protect everyone from random acts of terror. This cost has to be borne somehow - at the moment the US is doing some minimal soft-target protection at home and spending its treasure pretty quickly in Iraq. If El-Al-like security had to be practiced against all targets - public & private infrastructure, transport, energy etc. - the costs and frictions to the economy would be so great that it would no longer be competitive and probably not very efficient either. Would we really travel by air, train and bus (even cars entering tunnels?) if we had to undergo El-Al-level security constantly? No - which is why Palestinians switched to easier targets in Israel. Imagine the same at shopping malls - it would be too intrusive and would reduce commerce. I well recall the security in London in the 1990s during the last IRA bombing campaign - it was extremely intrusive to have to go through security checkpoints just to go into a store, and that was just a relatively cursory handbag and coat check.

    152:

    The real issue is - can that really be replicated throughout the whole economy without incurring huge economic costs?

    Of course not.

    And yes, your point is very valid. It might be of interest to dig up the recent finding that UK-US air travel has dropped by something like 15% since 9/11 -- including business travel, and some US firms are having difficulty winning contracts in Europe simply because travel-related hassle makes it cheaper for European businesses to work with locals.

    153:

    Although counter to this, it appears that international air travel to and from the US had recovered to its 2000 peak by 2005:

    http://tinet.ita.doc.gov/outreachpages/inbound.total_intl_travel_volume_1995-2005.html

    However there is evidence that this is due to fare price cuts. Certainly airlines are quite unprofitable due to price pressures and higher fuel costs. I've also noted the increased airport charges that I assume are used to recoup some of the security costs. In the US, the federal TSA is funded through taxes, so if that was fully added to ticket prices...

    154:

    And the other way round, Charlie. It's been quite noticeable in the games industry, for example.

    Alex, I was talking specifically about airline security; should have been clearer.

    155:

    "And the other way round, Charlie. It's been quite noticeable in the games industry, for example."

    That's not about travel hassle, it's about exchange rates. (Oh, and that most of the UK games industry collapsed about 3 years ago, and almost all of what's left is owned by either EA or Sony, or develops mobile-phone minigames)

    156:

@155: What's this about exchange rates?

There's quite a lot of traffic between, e.g., here in Ireland and the US because of exchange rates: between roughly 2000 and 2007 the dollar fell against the euro (one euro went from buying about $1.00 to about $1.40), increasing the volume of consumers travelling to the US for cheap luxury goods.

This should have increased the number of US businesses winning contracts in Europe, but it hasn't, because of the travel hassle, as per @152.

    157:

    Exchange rates? Don't talk to me about exchange rates ... (I get paid about 60-70% of my income in dollars: it's not funny.)

    If anything, the weakening dollar should have lots of EU businesses buying stuff up in droves from the US, and lots of tourists heading for Disneyworld. But it ain't happening. Which is particularly ominous.

158:

"What's this about exchange rates?" Well, as mentioned above, the UK games industry is largely funded by foreign companies, primarily American ones, so as the value of the dollar has tumbled, we have become "more expensive" to an American publisher. We haven't changed anything, but their weak dollars just don't go as far. Throw in spiralling complexity demands and the rate at which housing costs rise each year (7% national average, 15% in London, IIRC) and the whole business rapidly becomes unviable.

    (There are ways around this, but they generally involve taking very non-traditional routes to getting funded, published and distributed...)
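A minimal sketch of that arithmetic, with made-up figures (the studio's monthly burn and both exchange rates are assumptions, though the direction of the move matches the period):

```python
# Why a falling dollar makes a UK studio "more expensive" to a US publisher
# even though its sterling costs are unchanged. All inputs are illustrative.

monthly_burn_gbp = 250_000   # assumed studio payroll and overheads, in pounds

usd_per_gbp_then = 1.50      # assumed rate circa 2000
usd_per_gbp_now = 2.00       # assumed rate circa 2007

cost_then = monthly_burn_gbp * usd_per_gbp_then   # $375,000/month
cost_now = monthly_burn_gbp * usd_per_gbp_now     # $500,000/month
print(f"Increase in the publisher's dollar bill: {cost_now / cost_then - 1:.0%}")  # 33%
```

Same studio, same payroll; the publisher's dollar bill is a third higher before complexity or housing costs even enter into it.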

    159:

    Charlie: I'll bet JK Rowling has her $ income hedged. You could too, even if the sums are somewhat more modest.

There is some anecdotal evidence that tourists are coming back to the US because of the exchange rates. However, there is also the confounding problem that the US is becoming increasingly unfriendly to tourists (not to mention to its own citizens, once they're past immigration). I noted a huge number of Brits in Cancun, where the peso is tied more closely to the US dollar than to the euro, so maybe tourists are flocking to places that are cheap AND welcome them?

    160:

    Canis, I work for Rebellion. We're pretty big. Also, you overstate the effect of the "collapse" quite badly. The names change, the same faces are behind them.

    161:

Rebellion? Well, say hi from me to Mike Burnham if he's still there. :) But this doesn't change the facts of the industry. I don't think I overstate at all. Somewhere I have a list of dead UK studios; it has dozens of entries. Are some of the same people still around? Sure, but generally as employees of large, foreign-owned corporations. Rebellion is kind of an edge case -- what with Jason & Chris making the unusual (& interesting) purchase of 2000AD along with all the IP that entails -- and doing a weekly comics print run is definitely covered under the 'non-traditional' options I noted above. And Rebellion's big because it bought out three other struggling studios in the past year or so. Which is almost my point. There's you guys, and the inauspiciously-named Climax, and I think that's it for general, independent developers. There are some small-but-successful outfits in niches, who've generally survived by escaping from the regular publisher-funded model (like Sports Interactive, who also sell magazines, pub quiz machines and such, although they've been bought out by Sega; or Introversion, who self-financed and self-published and are only about 8 people besides), and Splash Damage have id Software as a sugar-daddy. But everyone else has gone, been absorbed by EA or Microsoft or Sony, or is circling the drain (cough Kuju cough).

    (Wow, this is offtopic.)

    162:

Frontier has 150-odd people working in Cambridge on PC & console games ... quite successful over the past ten years ...

    -- Andrew

    163:

Then there's Realtime Worlds in Scotland, and a lot of other devs. As I said, it's overrated as a collapse.

    And yes, Mike's still here. Runs the Oxford office these days :)
