Recently in Computers Category

Yes. Yes we can. The last year has brought with it the revelations of massive government-run domestic spying machineries in the US and UK. On the horizon is more technology that will make it even easier for governments to monitor and track everything that citizens do. Yet I'm convinced that, if we're sufficiently motivated and sufficiently clever, the future can be one of more freedom rather than less.

In my science fiction novels, Nexus and Crux, I write about technology ('Nexus') that makes it possible to send information in and out of human brains, making it possible for humans to share what they're seeing, hearing, feeling, and even thinking with one another; and also for human minds to exchange data with computers.

The early versions of that sort of technology are real. We've sent video signals into the brains of blind people, audio into the brains of the deaf, touch into the brains of the paralyzed. We've pulled what people are seeing, their desired movements, and more out of the brains of others. In animals we've gone farther, boosting memory and pattern matching skills, and linking the minds of two animals even thousands of miles apart.

I recently gave a TEDx talk on linking human brains, covering the science in this area and where I see it going. You can watch the video below.

My friend and fellow science fiction author William Hertling disagrees with my view that the Singularity is further than it appears.

Will has spent some time thinking about this, since he's written three fantastic near-future novels about a world going through an AI Singularity.

He's written a rebuttal to my The Singularity is Further Than it Appears post.

Here's his rebuttal: The Singularity is Still Closer Than It Appears.

If you've seen other thoughtful rebuttals or responses out there, please leave links to them in the comments.

Ramez Naam is the author of Nexus and Crux. You can follow him at @ramez.

Are we headed for a Singularity? Is it imminent?

I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.

I think it's more complex than that, however, and depends in part on one's definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it's not on the immediate horizon.

As you probably know if you read this weblog regularly, I'm working on a novel set about 12 years in the future, at a time when existing technological trends (pervasive wireless networking, ubiquitous location services, and the uptake of virtual reality technologies derived from today's gaming scene) coalesce into a new medium. "Halting State" attempts to tackle the significance of the impact of widespread augmented reality tools (and of interaction in ludic environments) on society.

Looking for a handle on this topic is difficult; how do you get your head around something nobody's yet experienced? One approach is to use a metaphor -- a close historical parallel that you can track.

The metaphor I picked (for reasons which I hope will be obvious) was the development of the world wide web, and the way (pace Lawrence Lessig) in which coders are the unacknowledged legislators of the technosphere. And it led me into some interesting speculative territory. So let me start with some history ...

In the beginning of the world wide web, there was Tim Berners-Lee. And he invented the web as a collaboration and information-sharing medium for academia. Interestingly, it wasn't the first such tool by a long way -- gopherspace, Hyper-G, WAIS (aka Z39.50) and a whole host of other technologies got on the internet first. But for a variety of reasons that are only obvious with 20/20 hindsight, the web was the one that took off.

A couple of years after Sir Tim began working on the web protocols at CERN, a handful of programmers at NCSA got funding to write (a) a web server, and (b) a web browser. The server became NCSA httpd, which rapidly became the main UNIX-hosted server tool, then gave rise indirectly to the Apache project circa 1995 (when a group of folks who were impatient with the lack of progress on NCSA's httpd got together to write an improved version).

Meanwhile, the NCSA team wrote a web browser called Mosaic. (Actually there were three web browsers called Mosaic -- a Windows version, the Mac version, and the X11/UNIX version -- and they were developed by different people; but from the outside it looked like a single project.) I first began using it in, um, early 1993? Late 1992? It was back when you could still reasonably subscribe to the NCSA "what's new on the web" digest and visit every single new website on the planet, every day. (Which I did, a couple of times, before I got bored.) Mosaic was something of a revelation, because in addition to the original HTML specifications TBL had drafted, they threw some extra hacks in -- notably the IMG SRC tag, which suddenly turned it from a hypertext engine into a graphical display platform. Originally the CERN web team had actually wanted to avoid putting graphics in web pages -- it defeated the idea of semantic markup! -- but the Mosaic developers were pragmatists; graphics were what they wanted, and graphics were what they would have.

Hit the fast-forward button. Mosaic caught on like wildfire. NCSA realized they had a startling runaway hit on their hands and started selling commercial usage licenses for the source code. The developers left, acquired funding, and formed Netscape Communications Corporation. They didn't take the original browser code -- NCSA owned it -- but they set out to write a better one. Meanwhile, NCSA licensed the Mosaic code base to Spry and a few other companies.

Some time in late 1994/early 1995, Microsoft woke up, smelled the smoke in the air, and panicked. For some years Bill Gates had been ignoring or belittling the annoyingly non-Microsoft-owned TCP/IP network protocols that were creeping out of academia and the UNIX industry. If people wanted network services, Microsoft could provide everything they wanted, surely? Well, no. And by the time Microsoft did a 180-degree policy turn, they were badly behind the curve and in danger of losing a huge new market. They went to one of the Mosaic licensees and said, "sub-license Mosaic to us, and we'll pay you a royalty on every copy we sell." Then they rebadged it as Internet Explorer and gave it away for free, thus firing the opening salvo in the browser wars. (More or less.)

The browser wars were under way in earnest by mid-1995, with Netscape making most of the running for the first year or so, but succumbing eventually to Microsoft's marketing advantage. The weapon of choice was browser functionality: "our browser does more neat stuff than theirs". And in fighting this way, both sides introduced lots of spurious crap that nobody could understand or use (and which later proved to be full of hideous security holes fit to gladden the hearts of spammers and advertising executives everywhere).

Now, here's a key point: the developers of Mosaic at NCSA didn't know it, but the introduction of the IMG SRC tag in 1992 was going to do more to change the shape of 21st century politics than anyone could imagine. Because it opened up the possibility of using the web for graphical content, including magazine publications and porn. As we all know -- it's a cliche -- new communication technologies catch on for the first time when they are used for the distribution of pornography. This in turn demands mechanisms for payment and for billing users, which ends up creating spin-off industries like the payment service providers, and in turn brings in mechanisms that can be used by other types of business. That one variant tag catalysed, ultimately, the commercialization of the web.

(OK, I exaggerate for effect. Bear with me.)

Those initial design decisions, presumably made by a bunch of postgrads with a shiny new piece of code, sitting around in a university common room, shaped a medium which today mediates a good proportion of all retail sales in the developed world, and which governments propose to use to mediate their interactions with their citizens. And the headaches arising from the browser-war induced featuritis (example: studies show that something like 25% of users pay no attention to their web browser's security and location indicators, and many can't actually read a URL -- they do everything by typing in a google search bar or following links -- this is why phishing scams work and part of why we're seeing such a huge rise in identity theft on the net) define the shape of our current social landscape.

Now. About virtual reality.

Sad to say, the political landscape of the early to mid 21st century has already been designed -- by Gary Gygax, inventor of Dungeons and Dragons.

Gary didn't realize it (D&D predates personal computing) but his somewhat addictive game transferred onto computers quite early (see also: Nethack). And then gamers demanded -- and got, as graphics horsepower arrived -- graphical versions of same. And then multi-user graphical versions of same. And then the likes of World of Warcraft, with over a million users, auction houses, the whole spectrum of social interaction, and so on.

Which leads me to the key insight that: our first commercially viable multi-user virtual reality environments have been designed (and implicitly legislated) to emulate pencil-and-paper high fantasy role playing games.

Sure, Second Life shows a pattern for Ludic environments that is non-RPG based, more user-directed -- after the pattern of LambdaMOO and similar -- but again, the LambdaMOO experiment fell out of dissatisfaction with the fantasy RPG limits that the earlier MUDs imposed on social interaction, and the MUDs were basically networked multiuser implementations of the Colossal Cave Adventure and friends, which all came back to Gary Gygax.

There's no bloody escaping it. The gamers have given rise to a monster that is ultimately going to embrace and extend the web, to the same extent that TV subsumed and replaced motion pictures. (The web will still be there -- some things are intrinsically easier to do using a two dimensional user interface and a page-based metaphor -- but the VR/AR systems will be more visible.)

I'm not sure we've reached the equivalent of Netscape's 1.0 release. New interaction mechanisms are going to come along, especially once the VR experience moves away from the desktop computer paradigm and goes mobile, using PDAs or smartphones, head-up displays, and ubiquitous location services (and speaking of the latter, it is reported that the Galileo PRN codes have been cracked). But VR will be the first medium where the primary path to commercialization will be through game-play.

We're already immersed in a neotenous society, where social adolescence is artificially extended and a lot of people never "grow up" -- that is, never accept the designated "adult" roles and social behaviours. Gaming is a pervasive recreational behaviour; the games industry is probably close to surpassing the traditional motion picture industry in turnover. Play -- historically associated more with childhood behaviour than with adulthood -- is a behaviour that is increasingly continued into adulthood. And it has long-term psychological implications. Play is a learning tool; young mammals play in order to explore their environment and develop strategies for coping.

An environment developed implicitly for gaming/playing, then re-purposed for acting/doing in real life, offers all sorts of interesting possibilities for behavioural traps equivalent to not understanding that location bar at the top of the browser window. The two general failure modes will be: (a) thinking that something is a game, when in actual fact it isn't, and (b) thinking something is real when it's just a simulation. These will also interact with a population who take longer to reach "traditional" adulthood (if they ever do so), and who therefore may engage in game-play or learning oriented behaviour inappropriately.

The biggest problem facing panopticon surveillance societies will be telling game-play from actual subversion. (How does a game of Killer played in a hotel go down if nobody's told the security guards watching the CCTV cameras?) Whoops, lots of ugly new social failure modes here (especially as our society's rules tend to assume that people are going to slot into traditional adult roles as soon as their age passes 6,574 days and twelve hours).

"It isn't virtual reality until you can mount a coup d'etat in it."

This is the information age, so our definition of a coup consequently varies from the traditional one (involving guns and colonels in sunglasses). But, for the sake of argument, let us posit that it's a de-facto coup if you can fool all of the people all of the time — and controlling their perception of reality is a good start. But how much reality do you need to control?

We can approach the problem by estimating the total afferent sensory bandwidth of the target. If you can control someone's senses completely, you can present them with stimuli and watch them respond — voluntary cooperation is optional. (Want them to jump to the left? Make them see a train approaching from the right.) How stimuli are generated is left as an exercise for the world-domination obsessed AI; the question I'm asking is, what is the maximum bandwidth that may have to be controlled and filtered?

I'm picking on Scotland as an example because it's big enough to be meaningful, small enough to be unthreatening, and generally innocuous. And we can either consider Scotland as a body politic, or as a collection of approximately five million human beings.

First, the body politic. The UK currently has approximately one public CCTV camera per ten people. In addition, the population in general have cameraphones — perhaps one per person (optimistically) with an average resolution of 1Mpixel. Let's wave a magic wand and make all of them video cameraphones, and always on. (Yes, this would swamp the phone network. I'm looking for an upper bound, not a lower.) At 25 frames/sec on every camera and 8-bit colour, that gives us 5.5M cameras each generating 25MB/sec, for 137.5 × 10^12 bytes/sec. Reality is liable to be an order of magnitude (or more) lower ...

Let's also give every household a home broadband internet connection — this dwarfs the bandwidth of their POTS connection — running at, say, 40Mbps (an order of magnitude above the current average for broadband). Households: say they average 2.5 people — that means we have 2 million of them. So we have another 10 × 10^12 bytes/sec.

Thus, the combined internet traffic, phone traffic, and video surveillance that's going on in Scotland is somewhere below 10^14 bytes/sec.
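The upper bounds above are just multiplication, and can be sanity-checked in a few lines of Python. This is a sketch plugging in the post's own assumed figures (one public CCTV camera per ten people, one always-on 1Mpixel cameraphone per person, 2.5 people per household, 40Mbps per home), not measured data:

```python
# Upper-bound surveillance bandwidth for Scotland, using the
# assumptions stated above (not real measurements).

population = 5_000_000                  # roughly five million people
cctv = population // 10                 # one public CCTV camera per ten people
phones = population                     # one always-on 1Mpixel cameraphone each
cameras = cctv + phones                 # 5.5 million cameras

bytes_per_camera = 1_000_000 * 1 * 25   # 1 Mpixel x 1 byte of colour x 25 fps
camera_total = cameras * bytes_per_camera

households = population * 2 // 5        # 2.5 people per household -> 2 million
bytes_per_household = 40_000_000 // 8   # 40 Mbps broadband -> 5 MB/sec
broadband_total = households * bytes_per_household

print(f"cameras:   {camera_total:.4g} bytes/sec")     # 1.375e+14
print(f"broadband: {broadband_total:.4g} bytes/sec")  # 1e+13
```

As the text notes, these are deliberate over-estimates; real traffic is an order of magnitude or more lower.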

Now for the second half: the people. Eyeballs first: going by Hans Moravec's estimate, the human eyeball processes about 10 one-million-point images per second. I think that's low — a rough estimate of the retina gives me about 40M pixels at 17 images/second, and the pixels take more than 32 bits to encode colour and hue information fully (let's say 64 bits). So, the ten million eyeballs in Scotland would take approximately 2,720 × 10^13 bits/sec — call it roughly 10^15 bytes/sec to fool.

We've got more senses than just eyeballs, of course. But human skin isn't a brilliantly discriminative sensory organ in comparison — we can only distinguish between stimuli that are more than a centimetre apart over most of our bodies. (Hands, lips, and a few other places are exceptions.) Assuming 2 square metres of skin per person, that gives us 20K sensors. Given a firing rate of 10 per second, and 32 bits for encoding the inputs (heat and pressure, not just touch), that still approximates to less than 1Mb/sec per person, or 5 × 10^12 bits per second per Scotland, which is basically lost in the noise compared to the optics, or even the spam'n'web surfing.

Sound ... hell, let's just throw in CD-quality audio times five million and have done with it. That's another 10Mb/sec per ear, or 2 × 10^13 bits/sec/Scotland.

Now. Let's suppose you can plug everyone in Scotland into a Matrix-style tank and feed them real-time hallucinations. What's the infrastructure like?

We can see that the ceiling is 10^15 bytes/sec. (It'd only be 10^17 bytes/sec if you wanted to do this to the USA or India — don't get cocky over there!) A single high quality optical fibre can, with wavelength division multiplexing, carry about 2 × 10^12 bits/sec. So we'd need to run one fibre to every couple of hundred people. The combined trunk to carry the sensory bandwidth of Scotland would need on the order of ten thousand fibres. It's going to be a bit fatter than my thigh; not terribly impressive.
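The trunk sizing reduces to a single division. Here is a quick sketch using the round figures above, the 10^15 bytes/sec ceiling and a 2 × 10^12 bits/sec fibre; since the post rounds everything to orders of magnitude, treat the output as a scale, not a spec:

```python
# Order-of-magnitude sizing of the trunk needed to carry
# Scotland's total afferent sensory bandwidth.

ceiling_bytes_per_sec = 10**15                 # sensory ceiling estimated above
fibre_bits_per_sec = 2 * 10**12                # one fibre with WDM
fibre_bytes_per_sec = fibre_bits_per_sec // 8  # 2.5e11 bytes/sec

fibres = ceiling_bytes_per_sec // fibre_bytes_per_sec
print(f"fibres in the trunk: {fibres}")        # a few thousand fibres
```

A few thousand fibres falls out of the division, consistent with the post's "on the order of ten thousand" once you allow some headroom for redundancy and control traffic.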

Of course, for this weird thought-experiment to be relevant you'd have to be cramming content into that pipe and monitoring the subject's responses. But encoding human efferent output — gestures and speech — is cheap and easy compared to their sensory inputs — we are net informational sinks, outputting far less than we take in.

I've also completely ignored the issue of redundancy, in assuming that everyone in Scotland has a unique and separate experience of reality. (Call it virtual qualia.) In practice, a lot of folks will see exactly the same thing (or as near as makes no difference) at the same time. If a million people are watching a football match on TV, they will see the same image, subject to a fairly simple pixel-based transformation to modify the angle and distance they're sitting at from the TV screen (and possibly its brightness and contrast). Again, we all spend an average of 30% of our time sleeping, during which we're not doing a hell of a lot with our external visual field. So, conceivably, I'm over-estimating by a couple of orders of magnitude.

Finally: we've got a modern telecoms infrastructure that provides fibre to the kerb. Obviously, our current infrastructure isn't providing terabits per second on every fibre — but the glass is in the ground. Of more interest is the question of whether or not the available wireless bandwidth would support this sort of large-scale subversion. Right now it wouldn't, but with UWB estimated to top off around the terabit/second mark across distances of under ten metres, I wouldn't bet against it in the future.

Now if you'll excuse me, I'm going to go drink my morning cup of tea and start making myself a nice tinfoil hat ...

In the week since I switched this new blogging system on, I've had more than fifty trackback spams ... and no genuine trackbacks at all. So I'm switching trackbacks off completely. (I'll review this decision in a few weeks time, but right now I don't see why I should enable a feature which seems to only be used by spammers. Even though the built-in spam filtering caught all of them, I can't be arsed riding herd on it.)

Interestingly, there have been no attempts at comment spamming — or none that have got through the various spam detector plugins I've installed.

Meanwhile, my email continues to attract 300-500 spams per day, and a total of 2-3000 per day for all ten or so users on my server.

About this Archive

This page is an archive of recent entries in the Computers category.
