Computers: July 2006 Archives

As you probably know if you read this weblog regularly, I'm working on a novel set about 12 years in the future, at a time when existing technological trends (pervasive wireless networking, ubiquitous location services, and the uptake of virtual reality technologies derived from today's gaming scene) coalesce into a new medium. "Halting State" attempts to tackle the impact of widespread augmented reality tools (and of interaction in ludic environments) on society.

Looking for a handle on this topic is difficult; how do you get your head around something nobody's yet experienced? One approach is to use a metaphor -- a close historical parallel that you can track.

The metaphor I picked (for reasons which I hope will be obvious) was the development of the world wide web, and the way (pace Lawrence Lessig) in which coders are the unacknowledged legislators of the technosphere. And it led me into some interesting speculative territory. So let me start with some history ...

In the beginning of the world wide web, there was Tim Berners-Lee. And he invented the web as a collaboration and information-sharing medium for academia. Interestingly, it wasn't the first such tool by a long way -- gopherspace, Hyper-G, WAIS (aka Z39.50) and a whole host of other technologies got onto the internet first. But for a variety of reasons that are only obvious with 20/20 hindsight, the web was the one that took off.

A couple of years after Sir Tim began working on the web protocols at CERN, a handful of programmers at NCSA got funding to write (a) a web server, and (b) a web browser. The server became NCSA httpd, which rapidly became the main UNIX-hosted server tool, then gave rise indirectly to the Apache project circa 1995 (when a group of folks who were impatient with the lack of progress on NCSA's httpd got together to write an improved version).

Meanwhile, the NCSA team wrote a web browser called Mosaic. (Actually there were three web browsers called Mosaic -- a Windows version, the Mac version, and the X11/UNIX version -- and they were developed by different people; but from the outside it looked like a single project.) I first began using it in, um, early 1993? Late 1992? It was back when you could still reasonably subscribe to the NCSA "what's new on the web" digest and visit every single new website on the planet, every day. (Which I did, a couple of times, before I got bored.) Mosaic was something of a revelation, because in addition to the original HTML specifications TBL had drafted, they threw some extra hacks in -- notably the IMG SRC tag, which suddenly turned it from a hypertext engine into a graphical display platform. Originally the CERN web team had actually wanted to avoid putting graphics in web pages -- it defeated the idea of semantic markup! -- but the Mosaic developers were pragmatists; graphics were what they wanted, and graphics were what they would have.

Hit the fast-forward button. Mosaic caught on like wildfire. NCSA realized they had a startling runaway hit on their hands and started selling commercial usage licenses for the source code. The developers left, acquired funding, and formed Netscape Communications Corporation. They didn't take the original browser code -- NCSA owned it -- but they set out to write a better one. Meanwhile, NCSA licensed the Mosaic code base to Spry and a few other companies.

Some time in late 1994 or early 1995, Microsoft woke up, smelled the smoke in the air, and panicked. For some years Bill Gates had been ignoring or belittling the annoyingly non-Microsoft-owned TCP/IP network protocols that were creeping out of academia and the UNIX industry. If people wanted network services, Microsoft could provide everything they wanted, surely? Well, no. And by the time Microsoft did a 180-degree policy turn, they were badly behind the curve and in danger of losing a huge new market. They went to one of the Mosaic licensees and said, "sub-license Mosaic to us, and we'll pay you a royalty on every copy we sell." Then they rebadged it as Internet Explorer and gave it away for free, thus firing the opening salvo in the browser wars. (More or less.)

The browser wars were under way in earnest by mid-1995, with Netscape making most of the running for the first year or so, but succumbing eventually to Microsoft's marketing advantage. The weapon of choice was browser functionality: "our browser does more neat stuff than theirs". And in fighting this way, both sides introduced lots of spurious crap that nobody could understand or use (and which later proved to be full of hideous security holes fit to gladden the hearts of spammers and advertising executives everywhere).

Now, here's a key point: the developers of Mosaic at NCSA didn't know it, but the introduction of the IMG SRC tag in 1992 was going to do more to change the shape of 21st century politics than anyone could imagine. Because it opened up the possibility of using the web for graphical content, including magazine publications and porn. As we all know -- it's a cliche -- new communication technologies first catch on when they're used to distribute pornography. That in turn demands mechanisms for payment and billing, which end up creating spin-off industries like the payment service providers, which in turn bring in machinery that other types of business can use. That one variant tag catalysed, ultimately, the commercialization of the web.

(OK, I exaggerate for effect. Bear with me.)

Those initial design decisions, presumably made by a bunch of postgrads with a shiny new piece of code, sitting around in a university common room, shaped a medium which today mediates a good proportion of all retail sales in the developed world, and which governments propose to use to mediate their interactions with their citizens. And the headaches arising from the browser-war-induced featuritis define the shape of our current social landscape. (Example: studies show that something like 25% of users pay no attention to their web browser's security and location indicators, and many can't actually read a URL -- they do everything by typing into a Google search box or following links. This is why phishing scams work, and part of why we're seeing such a huge rise in identity theft on the net.)

Now. About virtual reality.

Sad to say, the political landscape of the early to mid 21st century has already been designed -- by Gary Gygax, inventor of Dungeons and Dragons.

Gary didn't realize it (D&D predates personal computing) but his somewhat addictive game transferred onto computers quite early (see also: Nethack). And then gamers demanded -- and got, as graphics horsepower arrived -- graphical versions of same. And then multi-user graphical versions of same. And then the likes of World of Warcraft, with over a million users, auction houses, the whole spectrum of social interaction, and so on.

Which leads me to the key insight: our first commercially viable multi-user virtual reality environments have been designed (and implicitly legislated) to emulate pencil-and-paper high fantasy role-playing games.

Sure, Second Life shows a pattern for ludic environments that is non-RPG-based and more user-directed -- after the pattern of LambdaMOO and similar -- but again, the LambdaMOO experiment grew out of dissatisfaction with the fantasy-RPG limits that the earlier MUDs imposed on social interaction, and the MUDs were basically networked multiuser implementations of the Colossal Cave Adventure and friends, which all come back to Gary Gygax.

There's no bloody escaping it. The gamers have given rise to a monster that is ultimately going to embrace and extend the web, to the same extent that TV subsumed and replaced motion pictures. (The web will still be there -- some things are intrinsically easier to do using a two dimensional user interface and a page-based metaphor -- but the VR/AR systems will be more visible.)

I'm not sure we've reached the equivalent of Netscape's 1.0 release. New interaction mechanisms are going to come along, especially once the VR experience moves away from the desktop computer paradigm and goes mobile, using PDAs or smartphones, head-up displays, and ubiquitous location services (and speaking of the latter, it is reported that the Galileo PRN codes have been cracked). But VR will be the first medium where the primary path to commercialization will be through game-play.

We're already immersed in a neotenous society, where social adolescence is artificially extended and a lot of people never "grow up" -- that is, never accept the designated "adult" roles and social behaviours. Gaming is a pervasive recreational behaviour; the games industry is probably close to surpassing the traditional motion picture industry in turnover. Play -- historically associated more with childhood than with adulthood -- is increasingly carried on into adult life. And it has long-term psychological implications. Play is a learning tool; young mammals play in order to explore their environment and develop strategies for coping.

An environment developed implicitly for gaming/playing, then re-purposed for acting/doing in real life, offers all sorts of interesting possibilities for behavioural traps equivalent to not understanding the location bar at the top of the browser window. The two general failure modes will be: (a) thinking that something is a game when in actual fact it isn't, and (b) thinking something is real when it's just a simulation. These will interact with a population who take longer to reach "traditional" adulthood (if they ever do so), and who may therefore engage in game-play or learning-oriented behaviour inappropriately.

The biggest problem facing panopticon surveillance societies will be telling game-play from actual subversion. (How does a game of Killer played in a hotel go down if nobody's told the security guards watching the CCTV cameras?) Whoops, lots of ugly new social failure modes here (especially as our society's rules tend to assume that people are going to slot into traditional adult roles as soon as their age passes 6,574 days and twelve hours).

"It isn't virtual reality until you can mount a coup d'etat in it."

This is the information age, so our definition of a coup differs from the traditional one (involving guns and colonels in sunglasses). But, for the sake of argument, let us posit that it's a de facto coup if you can fool all of the people all of the time — and controlling their perception of reality is a good start. But how much reality do you need to control?

We can approach the problem by estimating the total afferent sensory bandwidth of the target. If you can control someone's senses completely, you can present them with stimuli and watch them respond — voluntary cooperation is optional. (Want them to jump to the left? Make them see a train approaching from the right.) How stimuli are generated is left as an exercise for the world-domination obsessed AI; the question I'm asking is, what is the maximum bandwidth that may have to be controlled and filtered?

I'm picking on Scotland as an example because it's big enough to be meaningful, small enough to be unthreatening, and generally innocuous. And we can either consider Scotland as a body politic, or as a collection of approximately five million human beings.

First, the body politic. The UK currently has approximately one public CCTV camera per ten people. In addition, the population in general have cameraphones — perhaps one per person (optimistically) with an average resolution of 1 Mpixel. Let's wave a magic wand and make all of them video cameraphones, and always on. (Yes, this would swamp the phone network. I'm looking for an upper bound, not a lower one.) At 25 frames/sec and 8-bit colour, each camera generates 25 Mbytes/sec, so 5.5 million cameras give us 137.5 x 10^12 bytes/sec. Reality is liable to be an order of magnitude (or more) lower ...

Let's also give every household a home broadband internet connection — this dwarfs the bandwidth of their POTS connection — running at, say, 40 Mb/s (an order of magnitude above the current average for broadband). Households: say they average 2.5 people — that means we have 2 million of them. So we have another 10 x 10^12 bytes/sec.

Thus, the combined internet traffic, phone traffic, and video surveillance that's going on in Scotland is somewhere below 10^14 bytes/sec.
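If you want to poke at those assumptions, the whole upper bound fits in a few lines of Python. This is just a sketch of the figures above, nothing more:

```python
# Deliberately generous upper bound on the "body politic" bandwidth of Scotland:
# every camera streaming flat out, plus every household's broadband link
# saturated. Real traffic will be an order of magnitude (or more) lower.

POPULATION = 5_000_000

cctv_cameras = POPULATION // 10              # one public CCTV camera per ten people
cameraphones = POPULATION                    # one always-on 1 Mpixel video phone each
cameras = cctv_cameras + cameraphones        # 5.5 million cameras in total

bytes_per_camera = 1_000_000 * 1 * 25        # 1 Mpixel x 1 byte (8-bit colour) x 25 frames/sec
camera_total = cameras * bytes_per_camera    # ~1.4 x 10^14 bytes/sec

households = int(POPULATION / 2.5)           # 2 million households
broadband_total = households * 40_000_000 // 8   # 40 Mb/s each -> 10^13 bytes/sec

print(f"Cameras:   {camera_total:.3e} bytes/sec")
print(f"Broadband: {broadband_total:.3e} bytes/sec")
print(f"Ceiling:   {camera_total + broadband_total:.3e} bytes/sec")
```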

Now for the second half: the people. Eyeballs first: going by Hans Moravec's estimate, the human eyeball processes about ten one-million-point images per second. I think that's low — a rough estimate of the retina gives me about 40M pixels at 17 images/second, and the pixels take more than 32 bits to encode colour information fully (let's say 64 bits). So, the ten million eyeballs in Scotland would take approximately 2,720 x 10^13 bits/sec — call it roughly 10^15 bytes/sec — to fool.

We've got more senses than just eyeballs, of course. But human skin isn't a brilliantly discriminative sensory organ in comparison — over most of our bodies we can only distinguish between stimuli that are more than a centimetre apart. (Hands, lips, and a few other places are exceptions.) Assuming two square metres of skin per person, that gives us 20,000 sensors. Allowing a firing rate of 10 per second and 32 bits for encoding the inputs (heat and pressure, not just touch), that still comes to less than 1 Mbyte/sec per person, or a few times 10^12 bytes/sec across the whole of Scotland — basically lost in the noise compared to the optics, or even the spam'n'web surfing.

Sound ... hell, let's just throw in CD-quality audio times five million and have done with it. That's about 1.4 Mbits/sec per person (16-bit stereo at 44.1 kHz), or somewhere under 10^13 bits/sec across Scotland.
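Here are those sums as another rough Python sketch. I've used Moravec's conservative figure for vision; my retina-based guess comes out higher, but either way you land at around 10^15 bytes/sec, dominated utterly by the eyeballs:

```python
# Afferent sensory bandwidth of five million people, using the deliberately
# generous assumptions above. Treat every output as an order-of-magnitude
# upper bound, not a measurement.

POPULATION = 5_000_000

# Vision: ~10 one-million-point images/sec per eye (Moravec's figure),
# at 64 bits per point (the generous encoding assumed above).
vision_bits = POPULATION * 2 * 10 * 1_000_000 * 64    # ~6.4 x 10^15 bits/sec

# Touch: ~2 m^2 of skin at one sensor per cm^2, firing at 10 Hz, 32 bits each.
sensors_per_person = 2 * 100 * 100                    # 20,000 sensors
touch_bits = POPULATION * sensors_per_person * 10 * 32

# Hearing: CD-quality stereo, 44.1 kHz x 16 bits x 2 channels per person.
hearing_bits = POPULATION * 44_100 * 16 * 2

total_bytes = (vision_bits + touch_bits + hearing_bits) / 8
print(f"Vision:  {vision_bits / 8:.2e} bytes/sec")
print(f"Touch:   {touch_bits / 8:.2e} bytes/sec")
print(f"Hearing: {hearing_bits / 8:.2e} bytes/sec")
print(f"Total:   {total_bytes:.2e} bytes/sec")        # ~8 x 10^14: call it 10^15
```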

Now. Let's suppose you can plug everyone in Scotland into a Matrix-style tank and feed them real-time hallucinations. What's the infrastructure like?

We can see that the ceiling is 10^15 bytes/sec. (It'd only be 10^17 bytes/sec if you wanted to do this to the USA or India — don't get cocky over there!) A single high-quality optical fibre can, with wavelength division multiplexing, carry about 2 x 10^12 bits/sec. So we'd need to run one fibre to every few hundred people. The combined trunk to carry the sensory bandwidth of Scotland would need on the order of ten thousand fibres. It's going to be a bit fatter than my thigh; not terribly impressive.
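And the fibre count, taking the figures above at face value (a sketch again, not a network plan):

```python
# How many fibres would it take to carry Scotland's total sensory bandwidth?
# Pure order-of-magnitude arithmetic, for both of the vision estimates used above.

FIBRE_BITS_PER_SEC = 2e12    # one fibre with wavelength division multiplexing
POPULATION = 5_000_000

estimates = {
    "Moravec-based": 6.4e15,     # bits/sec, from the script above
    "Retina-based": 2.72e16,     # the 2,720 x 10^13 bits/sec figure in the text
}

for label, total_bits in estimates.items():
    fibres = total_bits / FIBRE_BITS_PER_SEC
    print(f"{label}: ~{fibres:,.0f} fibres, one per ~{POPULATION / fibres:,.0f} people")
```

The retina-based estimate is where the ten-thousand-fibre, one-per-few-hundred-people figure comes from; the conservative one gets you most of the way there with a few thousand fibres.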

Of course, for this weird thought-experiment to be relevant you'd have to be cramming content into that pipe and monitoring the subjects' responses. But encoding human efferent output — gestures and speech — is cheap and easy compared to their sensory inputs; we are net informational sinks, outputting far less than we take in.

I've also completely ignored the issue of redundancy, in assuming that everyone in Scotland has a unique and separate experience of reality. (Call it virtual qualia.) In practice, a lot of folks will see exactly the same thing (or as near as makes no difference) at the same time. If a million people are watching a football match on TV, they will see the same image, subject to a fairly simple pixel-based transformation for the angle and distance they're sitting from the TV screen (and possibly its brightness and contrast). And we all spend an average of 30% of our time sleeping, during which we're not doing a hell of a lot with our external visual field. So, conceivably, I'm over-estimating by a couple of orders of magnitude.

Finally: we've got a modern telecoms infrastructure that provides fibre to the kerb. Obviously, our current infrastructure isn't providing terabits per second on every fibre — but the glass is in the ground. Of more interest is the question of whether or not the available wireless bandwidth would support this sort of large-scale subversion. Right now it wouldn't, but with UWB estimated to top out around the terabit/second mark across distances of under ten metres, I wouldn't bet against it in the future.

Now if you'll excuse me, I'm going to go drink my morning cup of tea and start making myself a nice tinfoil hat ...
