As you probably know if you read this weblog regularly, I'm working on a novel set about 12 years in the future, at a time when existing technological trends (pervasive wireless networking, ubiquitous location services, and the uptake of virtual reality technologies derived from today's gaming scene) coalesce into a new medium. "Halting State" attempts to tackle the impact of widespread augmented reality tools (and of interaction in ludic environments) on society.
Looking for a handle on this topic is difficult; how do you get your head around something nobody's yet experienced? One approach is to use a metaphor -- a close historical parallel that you can track.
The metaphor I picked (for reasons which I hope will be obvious) was the development of the world wide web, and the way (pace Lawrence Lessig) in which coders are the unacknowledged legislators of the technosphere. And it led me into some interesting speculative territory. So let me start with some history ...
In the beginning of the world wide web, there was Tim Berners-Lee. And he invented the web as a collaboration and information-sharing medium for academia. Interestingly, it wasn't the first such tool by a long way -- gopherspace, Hyper-G, WAIS (aka Z39.50) and a whole host of other technologies got on the internet first. But for a variety of reasons that are only obvious with 20/20 hindsight, the web was the one that took off.
A couple of years after Sir Tim began working on the web protocols at CERN, a handful of programmers at NCSA got funding to write (a) a web server, and (b) a web browser. The server became NCSA httpd, which rapidly became the main UNIX-hosted server tool, then gave rise indirectly to the Apache project circa 1995 (when a group of folks who were impatient with the lack of progress on NCSA's httpd got together to write an improved version).
Meanwhile, the NCSA team wrote a web browser called Mosaic. (Actually there were three web browsers called Mosaic -- a Windows version, the Mac version, and the X11/UNIX version -- and they were developed by different people; but from the outside it looked like a single project.) I first began using it in, um, early 1993? Late 1992? It was back when you could still reasonably subscribe to the NCSA "what's new on the web" digest and visit every single new website on the planet, every day. (Which I did, a couple of times, before I got bored.) Mosaic was something of a revelation, because in addition to the original HTML specifications TBL had drafted, they threw some extra hacks in -- notably the IMG SRC tag, which suddenly turned it from a hypertext engine into a graphical display platform. Originally the CERN web team had actually wanted to avoid putting graphics in web pages -- it defeated the idea of semantic markup! -- but the Mosaic developers were pragmatists; graphics were what they wanted, and graphics were what they would have.
Hit the fast-forward button. Mosaic caught on like wildfire. NCSA realized they had a startling runaway hit on their hands and started selling commercial usage licenses for the source code. The developers left, acquired funding, and formed Netscape Communications Corporation. They didn't take the original browser code -- NCSA owned it -- but they set out to write a better one. Meanwhile, NCSA licensed the Mosaic code base to Spry and a few other companies.
Some time in late 1994/early 1995, Microsoft woke up, smelled the smoke in the air, and panicked. For some years Bill Gates had been ignoring or belittling the annoyingly non-Microsoft-owned TCP/IP network protocols that were creeping out of academia and the UNIX industry. If people wanted network services, Microsoft could provide everything they wanted, surely? Well, no. And by the time Microsoft did a 180-degree policy turn, they were badly behind the curve and in danger of losing a huge new market. They went to one of the Mosaic licensees and said, "sub-license Mosaic to us, and we'll pay you a royalty on every copy we sell." Then they rebadged it as Internet Explorer and gave it away for free, thus firing the opening salvo in the browser wars. (More or less.)
The browser wars were under way in earnest by mid-1995, with Netscape making most of the running for the first year or so, but succumbing eventually to Microsoft's marketing advantage. The weapon of choice was browser functionality: "our browser does more neat stuff than theirs". And in fighting this way, both sides introduced lots of spurious crap that nobody could understand or use (and which later proved to be full of hideous security holes fit to gladden the hearts of spammers and advertising executives everywhere).
Now, here's a key point: the developers of Mosaic at NCSA didn't know it, but the introduction of the IMG SRC tag in 1992 was going to do more to change the shape of 21st century politics than anyone could imagine. Because it opened up the possibility of using the web for graphical content, including magazine publications and porn. As we all know -- it's a cliche -- new communication technologies catch on for the first time when they are used for the distribution of pornography. This in turn demands mechanisms for payment and for billing users, which ends up creating spin-off industries like the payment service providers, and in turn brings in mechanisms that can be used by other types of business. That one variant tag catalysed, ultimately, the commercialization of the web.
(OK, I exaggerate for effect. Bear with me.)
Those initial design decisions, presumably made by a bunch of postgrads with a shiny new piece of code, sitting around in a university common room, shaped a medium which today mediates a good proportion of all retail sales in the developed world, and which governments propose to use to mediate their interactions with their citizens. And the headaches arising from the browser-war-induced featuritis (example: studies show that something like 25% of users pay no attention to their web browser's security and location indicators, and many can't actually read a URL -- they do everything by typing in a Google search bar or following links -- this is why phishing scams work and part of why we're seeing such a huge rise in identity theft on the net) define the shape of our current social landscape.
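That URL-reading failure is easy to demonstrate: in a hostname, the domain that actually identifies the site is read from the right, so a familiar brand name at the left-hand end proves nothing. A minimal sketch in Python (the phishing-style address here is invented for illustration):

```python
from urllib.parse import urlparse

# A hypothetical phishing-style URL: the trusted brand name appears
# first, but it is merely a subdomain label under the attacker's domain.
url = "http://www.paypal.com.example-attacker.net/account/login"

host = urlparse(url).hostname
print(host)  # www.paypal.com.example-attacker.net

# The part that determines who you are actually talking to is the
# rightmost registrable portion of the hostname, not the left-hand labels.
print(".".join(host.split(".")[-2:]))  # example-attacker.net
```

A user who scans the location bar left to right sees "www.paypal.com" and stops reading -- which is exactly the trap the parenthetical above describes.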
Now. About virtual reality.
Sad to say, the political landscape of the early to mid 21st century has already been designed -- by Gary Gygax, co-creator (with Dave Arneson) of Dungeons and Dragons.
Gary didn't realize it (D&D predates personal computing) but his somewhat addictive game transferred onto computers quite early (see also: Nethack). And then gamers demanded -- and got, as graphics horsepower arrived -- graphical versions of same. And then multi-user graphical versions of same. And then the likes of World of Warcraft, with over a million users, auction houses, the whole spectrum of social interaction, and so on.
Which leads me to the key insight: our first commercially viable multi-user virtual reality environments have been designed (and implicitly legislated) to emulate pencil-and-paper high fantasy role playing games.
Sure, Second Life shows a pattern for ludic environments that is non-RPG based, more user-directed -- after the pattern of LambdaMOO and similar -- but again, the LambdaMOO experiment grew out of dissatisfaction with the fantasy RPG limits that the earlier MUDs imposed on social interaction, and the MUDs were basically networked multiuser implementations of the Colossal Cave Adventure and friends, which all came back to Gary Gygax.
There's no bloody escaping it. The gamers have given rise to a monster that is ultimately going to embrace and extend the web, to the same extent that TV subsumed and replaced motion pictures. (The web will still be there -- some things are intrinsically easier to do using a two dimensional user interface and a page-based metaphor -- but the VR/AR systems will be more visible.)
I'm not sure we've reached the equivalent of Netscape's 1.0 release. New interaction mechanisms are going to come along, especially once the VR experience moves away from the desktop computer paradigm and goes mobile, using PDAs or smartphones, head-up displays, and ubiquitous location services (and speaking of the latter, it is reported that the Galileo PRN codes have been cracked). But VR will be the first medium where the primary path to commercialization will be through game-play.
We're already immersed in a neotenous society, where social adolescence is artificially extended and a lot of people never "grow up" -- that is, never accept the designated "adult" roles and social behaviours. Gaming is a pervasive recreational behaviour; the games industry is probably close to surpassing the traditional motion picture industry in turnover. Play -- historically associated more with childhood than with adulthood -- is increasingly continued into adult life. And it has long-term psychological implications. Play is a learning tool; young mammals play in order to explore their environment and develop strategies for coping.
An environment developed implicitly for gaming/playing, then re-purposed for acting/doing in real life, offers all sorts of interesting possibilities for behavioural traps equivalent to not understanding that location bar at the top of the browser window. The two general failure modes will be: (a) thinking that something is a game, when in actual fact it isn't, and (b) thinking something is real when it's just a simulation. These will also interact with a population who take longer to reach "traditional" adulthood (if they ever do so), and who therefore may engage in game-play or learning oriented behaviour inappropriately.
The biggest problem facing panopticon surveillance societies will be telling game-play from actual subversion. (How does a game of Killer played in a hotel go down if nobody's told the security guards watching the CCTV cameras?) Whoops, lots of ugly new social failure modes here (especially as our society's rules tend to assume that people are going to slot into traditional adult roles as soon as their age passes 6,574 days and twelve hours).