(The following essay was originally written for the program book of HAL-Con 2010, held in Omiya, Japan. Consider it a speculative polemic intended to amuse and provoke, rather than a serious prediction of the future.)
The internet is made out of meat
I realize that this may be a rather hard proposition to swallow, but let us consider the historical evidence:
In 1968, after approximately five years of deliberation, committee meetings, and reports, self-propelled lumps of meat from the US government's Advanced Research Projects Agency awarded another group of meat-lumps a contract to produce a piece of machinery called an Interface Message Processor. The objective of building these machines was to permit the exchange of messages between computers, to allow better use of time-sharing facilities on these expensive, primitive calculating engines ... but even at the outset, it was seen as a useful goal to use computer messaging to allow lumps of meat to send each other email.
See? No lumps of meat, no internet. It's as simple as that!
Less flippantly, we lumps of meat are big on communication ...
I'll drop the meat thing now; people are big on communication. But watch out for anthropomorphism: the tendency to adopt what cognitive philosophers call "the intentional stance" and ascribe deliberate intent to things in order to come up with plausible explanations of their behaviour. (Not all of our actions are conscious, rational, or planned, and I'm going to talk here about some stuff that we are usually unaware of.)
Why are we big on communication?
Sometimes it seems as if asking this question is like asking why water is wet: to a first approximation, we can be defined as that species of meat (sorry, primate) that communicates in languages with complex semantics. Human culture is almost entirely about communication, often in manners that aren't superficially obvious: clothing, for example, above and beyond its basic function as an ersatz layer of fur, is almost always used to communicate information about social status and identity.
We're communicators. It's hard to know how long we've been doing it for; recorded communication is a relatively recent development, going back a few thousand years. But it's a fair bet that the evidence of human culture which dates back up to 70,000 years — including organized burials, jewellery and other artwork — required language on the part of its creators.
What is language for?
One hypothesis (which I'm partial to) is that language is a substitute for the physical grooming that maintains social hierarchy in primate groups. If a group is small, the members can groom everyone else and still have time for other activities such as foraging for food. But if a troop of hominids gets too big — well, you try picking the fleas out of fifty alpha males' pelts before breakfast. Using speech, you can communicate with multiple other primates simultaneously. The upper size limit on a group of social hominids with language is presumably a lot higher than that of apes with no linguistic facility.
But if it's just a social tool, why has it become so important to us?
I don't think we have a definite answer to that question yet. However, I have a gut feeling that the reason we're so communicative is that we are, at a very fundamental level, a communication phenomenon: that is, our actual sense of conscious identity emerges from the internal use of our language faculty to bind together our stream of cognition and create an internal narrative. Internally, language allows us to codify our memories and provides us with a toolkit for symbolic manipulation — it's a very important component of the "theory of mind" which allows us to anticipate the behaviour and internal thoughts of others. And it also extends our awareness beyond the reach of our own sensory organs by allowing us to use others as proxies.
Language is a multi-function tool: it's not just a dessert topping, it's a floor wax too.
I'm writing these notes sitting in the middle of an SF convention in Boston. (This is my fault for being behind schedule.) I've just come from an interesting panel discussion on the subject of the Singularity, with co-panelists Alastair Reynolds, Karl Schroeder, and Vernor Vinge. (Oddly enough we're all sometime hard-SF writers who've dealt with the subject.)
To give a quick recap: over the past 200 years, many of our technologies — themselves, a collection of techniques transmitted horizontally between lumps of animate meat by means of language — have followed a sigmoid development curve.
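(A quick gloss, mine rather than the panel's: the canonical sigmoid is the logistic curve,

$$ f(t) = \frac{K}{1 + e^{-r(t - t_0)}} $$

which grows like the exponential $e^{rt}$ while $t$ is well below $t_0$, then flattens as it approaches the ceiling $K$. From inside the steep part, a sigmoid and a genuine exponential are indistinguishable; the question is whether the ceiling is there at all.)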
About 20 years ago, Vinge asked, "what if there exist new technologies where the curve never flattens, but looks exponential?" The obvious example — to him — was Artificial Intelligence. It's still thirty years away today, just as it was in the 1950s, but the idea of building machines that think has been around for centuries, and more recently, the idea of understanding how the human brain processes information and coding some kind of procedural system in software for doing the same sort of thing has soaked up a lot of research.
Vernor came up with two postulates. Firstly, if we can design a true artificial intelligence, something that's cognitively our equal, then we can make it run faster by throwing more computing resources at it. Which means problems get solved fast. This is your basic weakly superhuman AI: the one you deploy if you want it to spend an afternoon cracking a problem that's basically tractable by human intelligence, if human intelligence could work on it for a few centuries.
He also noted something else: individually, on average, we humans are not terribly smart. Our general intelligence, which relies on symbol manipulation, gives us immense power to use other hominids' ideas — but individually we're not terribly good at solving new problems. What if there exist other forms of intelligence which are fundamentally more powerful than ours at doing whatever it is that consciousness does? Just as a quicksort algorithm that sorts in O(n log n) comparisons on average is fundamentally better (except on very small sets) than a bubble sort that typically takes O(n²) comparisons.
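(To make the analogy concrete, here's a toy sketch in Python — my illustration, not anything from Vernor's talk — that counts the comparisons each algorithm actually performs on the same random list. The counting in the quicksort is approximate: it charges one comparison per non-pivot element per partition.)

```python
# Toy comparison-counting demo: bubble sort, O(n^2) comparisons on typical
# input, versus a simple quicksort, O(n log n) on average.
import random

def bubble_sort_comparisons(data):
    a, comparisons = list(data), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

def quicksort_comparisons(data):
    comparisons = 0
    def qsort(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot = a[len(a) // 2]
        comparisons += len(a) - 1  # approximate: one comparison per non-pivot element
        return (qsort([x for x in a if x < pivot])
                + [x for x in a if x == pivot]
                + qsort([x for x in a if x > pivot]))
    qsort(list(data))
    return comparisons

data = [random.random() for _ in range(2000)]
print("bubble sort:", bubble_sort_comparisons(data))  # ~2,000,000 comparisons
print("quicksort:  ", quicksort_comparisons(data))    # a few tens of thousands
```

The point isn't the code; it's that no amount of tuning rescues the bubble sort once n gets large, and the same may be true of meat-based cognition.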
If such higher types of intelligence can exist, and if a human-equivalent intelligence can build an AI that runs one of them — which is an open question — then it's going to appear very rapidly after the first weakly superhuman AI. And we're not going to be able to second-guess it, because it'll be as much smarter than us as we are smarter than a frog.
Vernor's singularity is usually presented as an artificial intelligence induced leap into the unknown: we can't predict where things are going on the other side of that event because it's unprecedented since the development of language. It's as if the steadily steepening rate of improvement in transportation technologies that gave us the Apollo flights by the late 1960s kept on going, with a Jupiter mission in 1982, a fast relativistic flight to Alpha Centauri by 1990, a faster than light drive by 2000, and then a time machine so we could arrive before we set off. It makes a mockery of attempts to extrapolate our situation from prior, historical conditions.
Of course, aside from making it possible to write very interesting science fiction stories, the Singularity is a very controversial idea — largely because it is built on top of another controversial idea: that of artificial intelligence.
For one thing, there's the whole question of whether a machine can think — as the late, eminent professor Edsger Dijkstra said, "the question of whether machines can think is no more interesting than the question of whether submarines can swim".
For another thing, there's the whole question of what thinking is. We tend to think (see, I'm doing it!) that it's something to do with language processing, and as computers are machines for performing general-purpose symbolic manipulation, we assume that they ought to be able to think. But thinking, and consciousness, didn't emerge out of nowhere: they showed up as an evolutionary upgrade to what was already a complex survival machine, the early hominid. Animals may not have language, but we don't deduce from this absence a lack of the ability to reason, or to respond to their environment. Is intelligence a symbol-manipulation problem, or is it something else? And is it something that requires a brain and only a brain, or could it be an emergent phenomenon, a feedback loop arising when an embodied nervous system interacts with its environment?
We may be barking up the wrong tree in thinking of intelligence as something we can construct mechanistically. But there are other routes to a Vingean Singularity. Augmented intelligence, as opposed to artificial intelligence, is one such route: we may not need machines that think, if we can come up with tools that enable us to think faster and more efficiently. The world wide web seems to be one example. Lifelogging and memory prostheses may be another.
But. Let us for a moment suppose that the classical formulation of the singularity is plausible, and that furthermore classical computational artificial intelligence is possible. Where is it likely to emerge?
Genome researchers were flabbergasted in the late 1990s and early 00's when the Human Genome Project delivered its preliminary results. This project, an attempt to conduct the first exhaustive sequencing of a human genome, revealed that humans run on a total of around 24,000 genes — about the same number as a mouse, and scarcely more than a roundworm. (Previously, estimates in the range of 40,000-50,000 genes were common, with some predicting as many as 2 million genes.) Large portions of the genome are very similar to those of other vertebrates; meanwhile, a huge quantity of human DNA consists of stuff other than genes — some of it is concerned with modulating gene expression (and there's a whole epigenetic apparatus of short interfering RNAs that were only discovered in the mid-00's), but a startling quantity of our genetic payload consists of viruses and other forms of what is currently believed to be functionless junk that is along for the replication ride. Indeed, human endogenous retroviruses account for around 8% of our genome. They may actually provide some benefit to the host — they're immunosuppressive, and it's theorized that placental mammals were only able to evolve in the wake of ERV infection, which allows a fetus to suppress the maternal immune system — but there's a lot we still don't know about how junk DNA and endogenous viruses modulate our genome.
I don't like to stretch a metaphor too far, but it's tempting to observe that DNA is an information processing system; proteins are expressed, allow the cell that expresses them to interact with the extracellular environment, and deliver feedback to the cell's genetic apparatus, which in turn can express more proteins (or siRNAs and other effector molecules).
And having noted this in passing, it's time to go for the throat ...
Spam is everywhere.
About 92-95% of all email traffic is spam. Every new communications medium that opens up on the internet succumbs rapidly to spam, unless it is designed with such heavy filtering in place that it's almost impossible to send a message to someone else without prior approval. But new communications media don't get adopted unless they're useful — and one of the key uses of a communications medium is to allow strangers with useful information to get in touch. Spam, almost by definition, isn't useful, but it tries to masquerade as meaningful communication.
In the bad old days of email, just about everything anybody sent would eventually get delivered to a mailbox, if it was correctly addressed. When the "anybody" using the internet expanded sufficiently to include unscrupulous advertisers and scam artists, the utility of email began to drop. The solution that eventually turned up was the widespread adoption of filters — software that attempts to determine whether an inbound message is unsolicited rubbish, or something potentially of interest to a human recipient.
There are a vast number of ways of filtering. One of the most effective is to look for patterns in the mail stream; an identical message sent to a million people is almost certainly spam unless it emanates from a well-known mailing list system. Unique messages are less likely to be spam. So looking for huge deluges of identikit mail worked for a while — until the spammers took to appending random snippets of text to each individual message, to make them look different.
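(Here's roughly what that deluge detector looks like in practice — a toy Python sketch of mine, with an illustrative threshold rather than any real filter's numbers:)

```python
# Toy bulk-mail detector: fingerprint each message body and flag anything
# whose normalized text has been seen too many times.
import hashlib
import re
from collections import Counter

BULK_THRESHOLD = 1000   # copies before we call it spam (illustrative guess)
seen = Counter()        # fingerprint -> copies observed so far

def fingerprint(body: str) -> str:
    # Crude normalization: lowercase, collapse whitespace and digits, so
    # trivially personalized copies ("Dear Bob ...") still collide.
    canonical = re.sub(r"[\s\d]+", " ", body.lower()).strip()
    return hashlib.sha256(canonical.encode()).hexdigest()

def looks_like_bulk(body: str) -> bool:
    fp = fingerprint(body)
    seen[fp] += 1
    return seen[fp] > BULK_THRESHOLD
```

Appending random snippets to each copy changes the hash every time, which is exactly why this approach stopped working and filters had to move to fuzzier, similarity-based fingerprints.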
Another filtering technique is to look at the word or letter frequency of the message; purely on a statistical level, spam doesn't look like part of a conversation (unless your correspondents regularly interrupt the flow of discourse to shout BUY CHEAP DESIGNER HAND-BAGS or similar). But again: spam is big business — it's a very effective form of mass advertising — and the spammers are ingenious.
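(The statistical approach can also be sketched in a few lines — this is a stripped-down naive Bayes filter in the spirit of Paul Graham's "A Plan for Spam"; the two-message training corpus and the crude smoothing are placeholders, since a real filter needs a large corpus:)

```python
# Minimal word-frequency (naive Bayes) spam scorer.
import math
from collections import Counter

spam_words, ham_words = Counter(), Counter()

def train(text: str, is_spam: bool):
    (spam_words if is_spam else ham_words).update(text.lower().split())

def spam_probability(text: str) -> float:
    # Sum log-odds per word, assuming words are independent (the "naive"
    # part) and equal priors; +1 smoothing keeps unseen words harmless.
    log_odds = 0.0
    for word in text.lower().split():
        p_spam = (spam_words[word] + 1) / (sum(spam_words.values()) + 2)
        p_ham = (ham_words[word] + 1) / (sum(ham_words.values()) + 2)
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))

train("buy cheap designer handbags now", is_spam=True)
train("see you at the convention panel tomorrow", is_spam=False)
print(spam_probability("cheap handbags"))  # high: spammy vocabulary
print(spam_probability("panel tomorrow"))  # low: conversational vocabulary
```

A filter like this is just comparing two word-frequency models — one of conversation, one of pitch — which is why the spammers' counter-move is to make the pitch statistically resemble conversation.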
As filters get more sophisticated, the spammers are abandoning old-style broadcast advertisements and are moving to much more tightly targeted ads, addressing the recipient by name and attempting to pitch selectively. The most tightly targeted spam is created for spear phishing attacks (in which specific personal information is used to target selected individuals — usually for identity theft or corporate espionage). Today, this is labour intensive: but it's a fair bet that as more of us place more information about ourselves online, spear phishing techniques will gradually become automated, and targeted junk internet advertising will rise to levels of sophistication we can barely guess at. There's lots of money in spam (these days it's a branch of organized crime), and where there's money, talent can be hired.
We are currently in the early days of an arms race, between the spammers and the authors of spam filters. The spammers are writing software to generate personalized, individualized wrappers for their advertising payloads that masquerade as legitimate communications. The spam cops are writing filters that automate the process of distinguishing a genuinely interesting human communication from the random effusions of a 'bot. And with each iteration, the spam gets more subtly targeted, and the spam filters get better at distinguishing human beings from software, in a bizarre parody of the imitation game popularized by Alan Turing (in which a human being tries to distinguish between another human being and a piece of conversational software via textual communication) — an early ad hoc attempt to invent a pragmatic test for artificial intelligence.
We have one faction that is attempting to write software that can generate messages that can pass a Turing test, and another faction that is attempting to write software that can administer an ad-hoc Turing test. Each faction has a strong incentive to beat the other. This is the classic pattern of an evolutionary predator/prey arms race: and so I deduce that if symbol-handling, linguistic artificial intelligence is possible at all, we are on course for a very odd destination indeed — the Spamularity, in which those curious lumps of communicating meat give rise to a meta-sphere of discourse dominated by parasitic viral payloads pretending to be meat ...