Recently in Computers Category

(With contributors Raq, Charlie, & Malka - see below)

With Verizon's purchase of Yahoo!, young developers' thoughts turn to Flickr -- one of the long-lived and popular core acquisitions -- and we think longingly of the early days, back when photo apps were fleet and well-supported. When everyone knew the value of good code. When we felt fairly confident that our stuff would stay our stuff.

Yeah. Okay. Done laughing?

We're so resigned to not truly owning the digital property we've paid for that iTunes' arrogant deletion of personal music files barely registers, except with the musicians losing their own work and a number of voices crying out on Twitter, especially during updates. We're used to tech companies redesigning interfaces in the middle of the night, taking away much-loved tools, and replacing them with advertising and easier ways to share more, faster. The interface is where the profit is, and who cares about the content?

Worse, when looking at the Yahoo! buyout in particular, we worry that Verizonhoo! has no clue how Flickr works, much less how to support it. We loved the tech that Ludicorp built and sold to Yahoo! in the early days. But you'll remember that Yahoo!, after "losing" both original leaders -- Caterina Fake and Stewart Butterfield -- also proceeded to lose its grip on how Flickr functioned, and what client needs it served. We fear for what remains of Flickr, once again, and are busy downloading years of images before saying the word "flickr" becomes a paid premium service.

In the bigger (ahem) picture, we're rapidly approaching that point in tech where historical knowledge of base code is long gone and corporate ability to pivot based on user needs is lost due to mergers, firings, and general MBA-nization of tech innovation. Yes, that's been going on since the 1980s and earlier with IBM and friends, but nearly every tech startup founded since then to take on the big corps has either been eaten or become a big corp in its own right. Tech doesn't want to be free (with apologies to Stewart Brand). It wants to be bought out.

With more mergers inevitable as the large corporations hunger for more and more user data, we worry that any suitably evolved tech will seem more like magic to its holding company -- and that will impact not only how we use that technology going forward but also how technology continues to evolve in its use of us. Our data is already a value point, our time already part of the business plan. What's next?

Our intrepid correspondents have amused themselves by ginning up predictions* for future mergers and their outcomes. Feel free to play along / roll your own.
*no actual predictions were harmed in this game.


  1. Facebook/Oracle : They pretty much already own the MySQL database, actually. Trading as: FACILE.

  2. Uber/CNN : Advertised as "Uber for News" and facing questions like "how is this different than Periscope?" This merger results in bystander reporting for micropayments, the end of the traditional tv studio, and on-call hair & makeup vans. Trading as: NEWSR

  3. Kaspersky/Tindr : Giving up on pretending to be anything but Russian cyberwar. Trading as: N/A privately held

  4. Amazon/BAE Systems : Amazon needs delivery drones; BAE Systems needs someone to buy their drones. Trading as: AMZN

  5. Pfizer/Blue Apron : For faster distribution of agribusiness output. Trading as: PFOD

  6. Monsanto/Plated : Competition is healthy; you may be less so. Trading as: MOPL

  7. Microsoft/Reddit : Sorry but you know it's true. Trading as: MSFT

  8. Google/Slack : And a hundred thousand voices cried out before their data became part of the hive. Trading as: GAAK

  9. T-Mobile/StubHub : T-Mobile Thursdays via bot army. Trading as: TUBHUB

  10. AOL/4chan : Trading as: your worst nightmare

  11. Flickr/iTunes : Which means Verizon/iTunes...but it's OK because while you have to pay to upload, pay to create playlists, and pay to tag music, the UI is much better. Trading as: iTUNSR

  12. Disney/WOTC : So yes, Nissa Revane and Chandra Nalaar are now Disney Princesses. Trading as: WSNY

  13. Evernote/PayPal : Because monthly subscriptions are not enough. Trading as: EVERPAL

  14. Google/Monsanto : Verily. Trading as: MOOGLE

  15. Tesla/Spotify : The next stage in the rolling computer, app launcher, and vehicle. Trading as: TSTIFY

  16. IBM/BuzzFeed : They just bought it to feed to Watson. Trading as: LOLWTF

  17. Apple/SpaceX : Having conquered the terrestrial computer market, Cupertino turns its vast, cool intellect towards the Red Planet. Or maybe they just want to download their backup of Steve Jobs into Elon Musk's brain and regain some visionary leadership. Trading as: SKYNT

  18. 7-11/Bitcoin : How many slushies can you mine today? International calling cards and remittance terminals. Trading as: HOTDOG

  19. Spotify/YouTube : Video killed the radio star. Trading as: YOUSPOT

  20. Microsoft/Alibaba : Mutually assured expansion. Trading as: ALOFT

  Contributor Bios:

    Fran Wilde writes science fiction and fantasy and occasionally consults on tech. She used to program games and websites, and maintain youthfully naive buy-in, for companies like Macromedia and Flickr before Adobe and Yahoo! ate those and many others. Her next book, Cloudbound, comes out 9/27/2016 from Tor.

    Raq Winchester is a futurist and startup mentor who has been employed by one or more of the organizations mentioned in this article. She is using those experiences to fuel her first book, on being an innovator in a bureaucracy, how a government job is like a LARP, and unicorns.

    Charles Stross escaped from a dot com, wrote for computer magazines for a bit, then dived full-time into writing SF novels for a living -- honest work, unlike the other aforementioned jobs. His next book, Empire Games, comes out 17/1/2017 from Tor.

    Malka Older is a writer, humanitarian worker, and PhD candidate studying the sociology of disasters. Her science fiction political thriller Infomocracy is out now, and the sequel Null States will be published in 2017.

Me again! M Harold Page, but you can call me "Martin" (I use my very fine middle name to differentiate myself from the folk singer and the French YA writer).

I've just published Swords Versus Tanks 1: "Armoured heroes clash across the centuries". It even has a cover quote from Charlie ("Holy ####!").  So now I'm here to shamelessly plug my new book (click through and take a look at the cover... Go on! You know you want to!).

However, you're a sophisticated lot, so call the above "A word from our sponsor" and let me tell you why I think tank stories make great tech myths.

First some examples...

Ramez Naam is the author of 5 books, including the award-winning Nexus trilogy of sci-fi novels. Follow him on twitter: @ramez. A shorter version of this article first appeared at TechCrunch.

The final frontier of digital technology is integrating into your own brain. DARPA wants to go there. Scientists want to go there. Entrepreneurs want to go there. And increasingly, it looks like it's possible.

You've probably read bits and pieces about brain implants and prostheses. Let me give you the big picture.

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy -- sharing what we see, hear, touch, and even perhaps what we think and feel with others.

Arkady flicked the virtual layer back on. Lightning sparkled around the dancers on stage again, electricity flashed from the DJ booth, silver waves crashed onto the beach. A wind that wasn't real blew against his neck. And up there, he could see the dragon flapping its wings, turning, coming around for another pass. He could feel the air move, just like he'd felt the heat of the dragon's breath before.

- Adapted from Crux, book 2 of the Nexus Trilogy.

Sound crazy? It is... and it's not.

Start with motion. In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. DARPA has now used the same technology to put a paralyzed woman in direct mental control of an F-35 simulator. And in animals, the technology has been used in the opposite direction, directly inputting touch into the brain.

Or consider vision. For more than a year now, we've had FDA-approved bionic eyes that restore vision via a chip implanted on the retina. More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we're looking at. (They'd do even better with implants in the brain.)

Sound, we've been dealing with for decades, sending it into the nervous system through cochlear implants. Recently, children born deaf and without an auditory nerve have had sound sent electronically straight into their brains.


In rats, we've restored damaged memories via a 'hippocampus chip' implanted in the brain. Human trials are starting this year. Now, you say your memory is just fine? Well, in rats, this chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on. Sounds useful.

In monkeys, we've done better, using a brain implant to "boost monkey IQ" in pattern matching tests.

We've even emailed verbal thoughts back and forth from person to person.

Now, let me be clear. All of these systems, for lack of a better word, suck. They're crude. They're clunky. They're low resolution. That is, most fundamentally, because they have such low-bandwidth connections to the human brain. Your brain has roughly 100 billion neurons and 100 trillion neural connections, or synapses. An iPhone 6's A8 chip has 2 billion transistors. (Though, let's be clear, a transistor is not anywhere near the complexity of a single synapse in the brain.)

The highest bandwidth neural interface ever placed into a human brain, on the other hand, had just 256 electrodes. Most don't even have that.
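
To put that gap in perspective, here's a quick back-of-the-envelope in Python using the round numbers above (a loose comparison only; an electrode is not a synapse, and "connections per channel" is at best a proxy for resolution):

    # Rough scale comparison using the round figures quoted above.
    neurons        = 100e9   # ~100 billion neurons in a human brain
    synapses       = 100e12  # ~100 trillion synaptic connections
    a8_transistors = 2e9     # transistors in an iPhone 6's A8 chip
    electrodes     = 256     # the highest-channel-count implant to date

    print(f"neurons per electrode:  {neurons / electrodes:.1e}")            # ~3.9e8
    print(f"synapses per electrode: {synapses / electrodes:.1e}")           # ~3.9e11
    print(f"synapses per A8 transistor: {synapses / a8_transistors:,.0f}")  # 50,000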

The second barrier to brain interfaces is that getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That's a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who've been paralyzed or suffered brain damage.

This is not yet the iPhone era of brain implants. We're in the DOS era, if not even further back.

But what if? What if, at some point, technology gives us high-bandwidth neural interfaces that can be easily implanted? Imagine the scope of software that could interface directly with your senses and all the functions of your mind:

They gave Rangan a pointer to their catalog of thousands of brain-loaded Nexus apps. Network games, augmented reality systems, photo and video and audio tools that tweaked data acquired from your eyes and ears, face recognizers, memory supplementers that gave you little bits of extra info when you looked at something or someone, sex apps (a huge library of those alone), virtual drugs that simulated just about everything he'd ever tried, sober-up apps, focus apps, multi-tasking apps, sleep apps, stim apps, even digital currencies that people had adapted to run exclusively inside the brain.

- An excerpt from Apex, book 3 of the Nexus Trilogy.

The implications of mature neurotechnology are sweeping. Neural interfaces could help tremendously with mental health and neurological disease. Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what's happening.

We've already seen that deep brain stimulators can do amazing things for patients with Parkinson's. The same technology is on trial for untreatable depression, OCD, and anorexia. And we know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quick as the flip of a switch. Or, if you're running code, on a schedule. (Siri: Put me to sleep until 7:30, high priority interruptions only. And let's get hungry for lunch around noon. Turn down the sugar cravings, though.)
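
To make the "running code, on a schedule" aside concrete, here is a purely hypothetical sketch in Python. No such device or API exists; every name in it (NeuralStimulator, set_level, the region labels) is invented for illustration only:

    from datetime import time

    # Hypothetical illustration only -- no such implant or API exists today.
    class NeuralStimulator:
        def __init__(self):
            self.schedule = []  # queued (when, brain_region, level) changes

        def set_level(self, when, region, level):
            """Queue a stimulation level change for a brain region at a given time."""
            self.schedule.append((when, region, level))

    implant = NeuralStimulator()
    implant.set_level(time(23, 0), "sleep_centres", 1.0)   # put me to sleep
    implant.set_level(time(7, 30), "sleep_centres", 0.0)   # wake at 7:30
    implant.set_level(time(12, 0), "hunger_centres", 0.8)  # get hungry around noon
    implant.set_level(time(12, 0), "sugar_craving", 0.2)   # ...but turn down the sugar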

Implants that help repair brain damage are also a gateway to devices that improve brain function. Think about the "hippocampus chip" that repairs the ability of rats to learn. Building such a chip for humans is going to teach us an incredible amount about how human memory functions. And in doing so, we're likely to gain the ability to improve human memory, to speed the rate at which people can learn things, even to save memories offline and relive them -- just as we have for the rat.

That has huge societal implications. Boosting how fast people can learn would accelerate innovation and economic growth around the world. It'd also give humans a new tool to keep up with the job-destroying features of ever-smarter algorithms.

The impact goes deeper than the personal, though. Computing technology started out as number crunching. These days the biggest impact it has on society is through communication. If neural interfaces mature, we may well see the same. What if you could directly beam an image in your thoughts onto a computer screen? What if you could directly beam that to another human being? Or, across the internet, to any of the billions of human beings who might choose to tune into your mind-stream online? What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts? All of that is likely to eventually be possible, given a high enough bandwidth connection to the brain.

That type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. And it's just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and Twitter have successively changed public discourse.

Digitizing our thoughts may have some negative consequences, of course.

With our brains online, every concern about privacy, about hacking, about surveillance from the NSA or others, would all be magnified. If thoughts are truly digital, could the right hacker spy on your thoughts? Could law enforcement get a warrant to read your thoughts? Heck, in the current environment, would law enforcement (or the NSA) even need a warrant? Could the right malicious actor even change your thoughts?

"Focus," Ilya snapped. "Can you erase her memories of tonight? Fuzz them out?"

"Nothing subtle," he replied. "Probably nothing very effective. And it might do some other damage along the way."

- An excerpt from Nexus, book 1 of the Nexus Trilogy.

The ultimate interface would bring the ultimate new set of vulnerabilities. (Even if those scary scenarios don't come true, could you imagine what spammers and advertisers would do with an interface to your neurons, if it were the least bit non-secure?)

Everything good and bad about technology would be magnified by implanting it deep in brains. In Nexus I crash the good and bad views against each other, in a violent argument about whether such a technology should be legal. Is the risk of brain-hacking outweighed by the societal benefits of faster, deeper communication, and the ability to augment our own intelligence?

For now, we're a long way from facing such a choice. In fiction, I can turn the neural implant into a silvery vial of nano-particles that you swallow, and which then self-assemble into circuits in your brain. In the real world, clunky electrodes implanted by brain surgery dominate, for now.

That's changing, though. Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They've shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They're working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain (which sounds rather close to the technology I describe in Nexus). And the former editor of the journal Neuron has pointed out that carbon nanotubes are so slender that a bundle of a million of them could be inserted into the blood stream and steered into the brain, giving us a nearly 10,000-fold increase in neural bandwidth, without any brain surgery at all.
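
As a rough check on that "nearly 10,000-fold" figure, here's a small arithmetic sketch in Python; the ~100-channel baseline is my assumption (a plausible present-day array, given that the record mentioned earlier is 256 channels and most implants have fewer), not a number from the article:

    bundle_nanotubes = 1_000_000  # a million carbon nanotubes in one bundle, per the proposal
    typical_channels = 100        # assumed baseline: a typical present-day electrode array
    record_channels  = 256        # the highest-channel-count implant mentioned above

    print(bundle_nanotubes / typical_channels)  # 10000.0 -- the ~10,000-fold figure
    print(bundle_nanotubes / record_channels)   # ~3906 -- vs. the 256-channel record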

Even so, we're a long way from having such a device. We don't actually know how long it'll take to make the breakthroughs in the hardware to boost precision and remove the need for highly invasive surgery. Maybe it'll take decades. Maybe it'll take more than a century, and in that time, direct neural implants will be something that only those with a handicap or brain damage find worth the risk. Or maybe the breakthroughs will come in the next ten or twenty years, and the world will change faster. DARPA is certainly pushing fast and hard.

Will we be ready? I, for one, am enthusiastic. There'll be problems. Lots of them. There'll be policy and privacy and security and civil rights challenges. But just as we see today's digital technology of Twitter and Facebook and camera-equipped mobile phones boosting freedom around the world, and boosting the ability of people to connect to one another, I think we'll see much more positive than negative if we ever get to direct neural interfaces.

In the meantime, I'll keep writing novels about them. Just to get us ready.

Yes. Yes we can. The last year has brought with it the revelations of massive government-run domestic spying machinery in the US and UK. On the horizon is more technology that will make it even easier for governments to monitor and track everything that citizens do. Yet I'm convinced that, if we're sufficiently motivated and sufficiently clever, the future can be one of more freedom rather than less.

In my science fiction novels, Nexus and Crux, I write about a technology ('Nexus') that makes it possible to send information in and out of human brains, letting humans share what they're seeing, hearing, feeling, and even thinking with one another, and letting human minds exchange data with computers.

The early versions of that sort of technology are real. We've sent video signals into the brains of blind people, audio into the brains of the deaf, touch into the brains of the paralyzed. We've pulled what people are seeing, their desired movements, and more out of the brains of others. In animals we've gone farther, boosting memory and pattern matching skills, and linking the minds of two animals even thousands of miles apart.

I recently gave a TEDx talk on linking human brains, covering the science in this area and where I see it going. You can watch the video below.

My friend and fellow science fiction author William Hertling disagrees with my view that the Singularity is further than it appears.

Will has spent some time thinking about this, since he's written three fantastic near-future novels about a world going through an AI Singularity.

He's written a rebuttal to my The Singularity is Further Than it Appears post.

Here's his rebuttal: The Singularity is Still Closer Than It Appears.

If you've seen other thoughtful rebuttals or responses out there, please leave links to them in the comments.

Ramez Naam is the author of Nexus and Crux. You can follow him at @ramez.

Are we headed for a Singularity? Is it imminent?

I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.

I think it's more complex than that, however, and depends in part on one's definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it's not on the immediate horizon.

As you probably know if you read this weblog regularly, I'm working on a novel set about 12 years in the future, at a time when existing technological trends (pervasive wireless networking, ubiquitous location services, and the uptake of virtual reality technologies derived from today's gaming scene) coalesce into a new medium. "Halting State" attempts to tackle the impact of widespread augmented reality tools (and of interaction in ludic environments) on society.

Looking for a handle on this topic is difficult; how do you get your head around something nobody's yet experienced? One approach is to use a metaphor -- a close historical parallel that you can track.

The metaphor I picked (for reasons which I hope will be obvious) was the development of the world wide web, and the way (pace Lawrence Lessig) in which coders are the unacknowledged legislators of the technosphere. And it led me into some interesting speculative territory. So let me start with some history ...

In the beginning of the world wide web, there was Tim Berners-Lee. And he invented the web as a collaboration and information-sharing medium for academia. Interestingly, it wasn't the first such tool by a long way -- gopherspace, Hyper-G, WAIS (aka Z39.50) and a whole host of other technologies got on the internet first. But for a variety of reasons that are only obvious with 20/20 hindsight, the web was the one that took off.

A couple of years after Sir Tim began working on the web protocols at CERN, a handful of programmers at NCSA got funding to write (a) a web server, and (b) a web browser. The server became NCSA httpd, which rapidly became the main UNIX-hosted server tool, then gave rise indirectly to the Apache project circa 1995 (when a group of folks who were impatient with the lack of progress on NCSA's httpd got together to write an improved version).

Meanwhile, the NCSA team wrote a web browser called Mosaic. (Actually there were three web browsers called Mosaic -- a Windows version, the Mac version, and the X11/UNIX version -- and they were developed by different people; but from the outside it looked like a single project.) I first began using it in, um, early 1993? Late 1992? It was back when you could still reasonably subscribe to the NCSA "what's new on the web" digest and visit every single new website on the planet, every day. (Which I did, a couple of times, before I got bored.) Mosaic was something of a revelation, because in addition to the original HTML specifications TBL had drafted, they threw some extra hacks in -- notably the IMG SRC tag, which suddenly turned it from a hypertext engine into a graphical display platform. Originally the CERN web team had actually wanted to avoid putting graphics in web pages -- it defeated the idea of semantic markup! -- but the Mosaic developers were pragmatists; graphics were what they wanted, and graphics were what they would have.

Hit the fast-forward button. Mosaic caught on like wildfire. NCSA realized they had a startling runaway hit on their hands and started selling commercial usage licenses for the source code. The developers left, acquired funding, and formed Netscape Communications Corporation. They didn't take the original browser code -- NCSA owned it -- but they set out to write a better one. Meanwhile, NCSA licensed the Mosaic code base to Spry and a few other companies.

Some time in late 1994/early 1995, Microsoft woke up, smelled the smoke in the air, and panicked. For some years Bill Gates had been ignoring or belittling the annoyingly non-Microsoft-owned TCP/IP network protocols that were creeping out of academia and the UNIX industry. If people wanted network services, Microsoft could provide everything they wanted, surely? Well, no. And by the time Microsoft did a 180-degree policy turn, they were badly behind the curve and in danger of losing a huge new market. They went to one of the Mosaic licensees and said, "sub-license Mosaic to us, and we'll pay you a royalty on every copy we sell." Then they rebadged it as Internet Explorer and gave it away for free, thus firing the opening salvo in the browser wars. (More or less.)

The browser wars were under way in earnest by mid-1995, with Netscape making most of the running for the first year or so, but succumbing eventually to Microsoft's marketing advantage. The weapon of choice was browser functionality: "our browser does more neat stuff than theirs". And in fighting this way, both sides introduced lots of spurious crap that nobody could understand or use (and which later proved to be full of hideous security holes fit to gladden the hearts of spammers and advertising executives everywhere).

Now, here's a key point: the developers of Mosaic at NCSA didn't know it, but the introduction of the IMG SRC tag in 1992 was going to do more to change the shape of 21st century politics than anyone could imagine. Because it opened up the possibility of using the web for graphical content, including magazine publications and porn. As we all know -- it's a cliche -- new communication technologies catch on for the first time when they are used for the distribution of pornography. This in turn demands mechanisms for payment and for billing users, which ends up creating spin-off industries like the payment service providers, and in turn brings in mechanisms that can be used by other types of business. That one variant tag catalysed, ultimately, the commercialization of the web.

(OK, I exaggerate for effect. Bear with me.)

Those initial design decisions, presumably made by a bunch of postgrads with a shiny new piece of code, sitting around in a university common room, shaped a medium which today mediates a good proportion of all retail sales in the developed world, and which governments propose to use to mediate their interactions with their citizens. And the headaches arising from the browser-war-induced featuritis (example: studies show that something like 25% of users pay no attention to their web browser's security and location indicators, and many can't actually read a URL -- they do everything by typing in a Google search bar or following links -- this is why phishing scams work, and part of why we're seeing such a huge rise in identity theft on the net) define the shape of our current social landscape.

Now. About virtual reality.

Sad to say, the political landscape of the early to mid 21st century has already been designed -- by Gary Gygax, inventor of Dungeons and Dragons.

Gary didn't realize it (D&D predates personal computing) but his somewhat addictive game transferred onto computers quite early (see also: Nethack). And then gamers demanded -- and got, as graphics horsepower arrived -- graphical versions of same. And then multi-user graphical versions of same. And then the likes of World of Warcraft, with over a million users, auction houses, the whole spectrum of social interaction, and so on.

Which leads me to the key insight: our first commercially viable multi-user virtual reality environments have been designed (and implicitly legislated) to emulate pencil-and-paper high fantasy role playing games.

Sure, Second Life shows a pattern for ludic environments that is non-RPG based, more user-directed -- after the pattern of LambdaMOO and similar -- but again, the LambdaMOO experiment fell out of dissatisfaction with the fantasy RPG limits that the earlier MUDs imposed on social interaction, and the MUDs were basically networked multiuser implementations of the Colossal Cave Adventure and friends, which all came back to Gary Gygax.

There's no bloody escaping it. The gamers have given rise to a monster that is ultimately going to embrace and extend the web, to the same extent that TV subsumed and replaced motion pictures. (The web will still be there -- some things are intrinsically easier to do using a two dimensional user interface and a page-based metaphor -- but the VR/AR systems will be more visible.)

I'm not sure we've reached the equivalent of Netscape's 1.0 release. New interaction mechanisms are going to come along, especially once the VR experience moves away from the desktop computer paradigm and goes mobile, using PDAs or smartphones, head-up displays, and ubiquitous location services (and speaking of the latter, it is reported that the Galileo PRN codes have been cracked). But VR will be the first medium where the primary path to commercialization will be through game-play.

We're already immersed in a neotenous society, where social adolescence is artificially extended and a lot of people never "grow up" -- that is, never accept the designated "adult" roles and social behaviours. Gaming is a pervasive recreational behaviour; the games industry is probably close to surpassing the traditional motion picture industry in turnover. Play -- historically associated more with childhood behaviour than with adulthood -- is a behaviour that is increasingly continued into adulthood. And it has long-term psychological implications. Play is a learning tool; young mammals play in order to explore their environment and develop strategies for coping.

An environment developed implicitly for gaming/playing, then re-purposed for acting/doing in real life, offers all sorts of interesting possibilities for behavioural traps equivalent to not understanding that location bar at the top of the browser window. The two general failure modes will be: (a) thinking that something is a game, when in actual fact it isn't, and (b) thinking something is real when it's just a simulation. These will also interact with a population who take longer to reach "traditional" adulthood (if they ever do so), and who therefore may engage in game-play or learning oriented behaviour inappropriately.

The biggest problem facing panopticon surveillance societies will be telling game-play from actual subversion. (How does a game of Killer played in a hotel go down if nobody's told the security guards watching the CCTV cameras?) Whoops, lots of ugly new social failure modes here (especially as our society's rules tend to assume that people are going to slot into traditional adult roles as soon as their age passes 6,574 days and twelve hours).

"It isn't virtual reality until you can mount a coup d'etat in it."

This is the information age, so our definition of a coup consequently varies from the traditional one (involving guns and colonels in sunglasses). But, for the sake of argument, let us posit that it's a de-facto coup if you can fool all of the people all of the time — and controlling their perception of reality is a good start. But how much reality do you need to control?

We can approach the problem by estimating the total afferent sensory bandwidth of the target. If you can control someone's senses completely, you can present them with stimuli and watch them respond — voluntary cooperation is optional. (Want them to jump to the left? Make them see a train approaching from the right.) How stimuli are generated is left as an exercise for the world-domination obsessed AI; the question I'm asking is, what is the maximum bandwidth that may have to be controlled and filtered?

I'm picking on Scotland as an example because it's big enough to be meaningful, small enough to be unthreatening, and generally innocuous. And we can either consider Scotland as a body politic, or as a collection of approximately five million human beings.

First, the body politic. The UK currently has approximately one public CCTV camera per ten people. In addition, the population in general have cameraphones — perhaps one per person (optimistically) with an average resolution of 1Mpixel. Let's wave a magic wand and make all of them video cameraphones, and always on. (Yes, this would swamp the phone network. I'm looking for an upper bound, not a lower.) At 25 frames/sec on every camera and 8-bit colour, that gives us 5.5M cameras generating 25MB/sec, for 137.5 x 10^12 bytes/sec. Reality is liable to be an order of magnitude (or more) lower ...

Let's also give every household a home broadband internet connection — this dwarfs the bandwidth of their POTS connection — running at, say, 40Mbps (an order of magnitude above the current average for broadband). Households: say they average 2.5 people — that means we have 2 million of them. So we have another 10 x 10^12 bytes/sec.

Thus, the combined internet traffic, phone traffic, and video surveillance that's going on in Scotland is somewhere below 10^14 bytes/sec.
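
For anyone who wants to check the arithmetic, here's the same upper-bound estimate as a short Python sketch (same magic-wand assumptions as above; the raw total lands around 1.5 x 10^14 bytes/sec, and the order-of-magnitude discount mentioned earlier is what brings the realistic figure below 10^14):

    # Upper-bound estimate for Scotland, reproducing the figures above.
    population = 5_000_000
    cctv       = population // 10           # ~1 public CCTV camera per ten people
    phones     = population                 # one always-on 1 Mpixel cameraphone each
    cameras    = cctv + phones              # 5.5 million cameras

    bytes_per_camera = 1_000_000 * 25 * 1   # 1 Mpixel x 25 frames/sec x 1 byte (8-bit colour)
    camera_total     = cameras * bytes_per_camera   # 1.375e14 bytes/sec

    households      = int(population / 2.5)         # 2 million households
    broadband_total = households * 40e6 / 8         # 40 Mbps each -> 1e13 bytes/sec

    print(f"cameras:   {camera_total:.3e} bytes/sec")
    print(f"broadband: {broadband_total:.3e} bytes/sec")
    print(f"combined:  {camera_total + broadband_total:.3e} bytes/sec")  # ~1.5e14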

Now for the second half: the people. Eyeballs first: going by Hans Moravec's estimate, the human eyeball processes about ten one-million-point images per second. I think that's low — a rough estimate of the retina gives me about 40M pixels at 17 images/second, and the pixels take more than 32 bits to encode colour and hue information fully (let's say 64 bits). So, the ten million eyeballs in Scotland would take approximately 2,720 x 10^13 bits — call it roughly 10^15 bytes/sec to fool.

We've got more senses than just eyeballs, of course. But human skin isn't a brilliantly discriminative sensory organ in comparison — we can only distinguish between stimuli that are more than a centimetre apart over most of our bodies. (Hands, lips, and a few other places are exceptions.) Assuming 2 square metres of skin per person, that gives us 20K sensors. Given a firing rate of 10x per second, and 32 bits for encoding the inputs (heat and pressure, not just touch), that still approximates to less than 1Mb/sec per person, or 5 x 10^12 bits per second per Scotland, which is basically lost in the noise compared to the optics, or even the spam'n'web surfing.

Sound ... hell, let's just throw in CD-quality audio times five million and have done with it. That's another 10Mb/sec per ear, or 2 x 10^13 bits/sec/Scotland.

Now. Let's suppose you can plug everyone in Scotland into a Matrix-style tank and feed them real-time hallucinations. What's the infrastructure like?

We can see that the ceiling is 10^15 bytes/sec. (It'd only be 10^17 bytes/sec if you wanted to do this to the USA or India — don't get cocky over there!) A single high-quality optical fibre can, with wavelength division multiplexing, carry about 2 x 10^12 bits/sec. So we'd need to run one fibre to every couple of hundred people. The combined trunk to carry the sensory bandwidth of Scotland would need on the order of ten thousand fibres. It's going to be a bit fatter than my thigh; not terribly impressive.
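
And the trunk sizing, reduced to a two-line check in Python (figures from the text; note the byte-to-bit conversion):

    ceiling_bytes_per_sec = 1e15  # sensory ceiling for Scotland, from the estimate above
    fibre_bits_per_sec    = 2e12  # one fibre with wavelength division multiplexing

    fibres = ceiling_bytes_per_sec * 8 / fibre_bits_per_sec
    print(f"{fibres:,.0f} fibres")  # ~4,000 -- a few thousand fibres, within shouting
                                    # distance of the 'order of ten thousand' figure above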

Of course, for this weird thought-experiment to be relevant you'd have to be cramming content into that pipe and monitoring the subject's responses. But encoding human efferent output — gestures and speech — is cheap and easy compared to their sensory inputs — we are net informational sinks, outputting far less than we take in.

I've also completely ignored the issue of redundancy, in assuming that everyone in Scotland has a unique and separate experience of reality. (Call it virtual qualia.) In practice, a lot of folks will see exactly the same thing (or as near as makes no difference) at the same time. If a million people are watching a football match on TV, they will see the same image, subject to a fairly simple pixel-based transformation to modify the angle and distance they're sitting at from the TV screen (and possibly its brightness and contrast). Again, we all spend an average of 30% of our time sleeping, during which we're not doing a hell of a lot with our external visual field. So, conceivably, I'm over-estimating by a couple of orders of magnitude.

Finally: we've got a modern telecoms infrastructure that provides fibre to the kerb. Obviously, our current infrastructure isn't providing terabits per second on every fibre — but the glass is in the ground. Of more interest is the question of whether or not the available wireless bandwidth would support this sort of large-scale subversion. Right now it wouldn't, but with UWB estimated to top off around the terabit/second mark across distances of under ten metres, I wouldn't bet against it in the future.

Now if you'll excuse me, I'm going to go drink my morning cup of tea and start making myself a nice tinfoil hat ...

In the week since I switched this new blogging system on, I've had more than fifty trackback spams ... and no genuine trackbacks at all. So I'm switching trackbacks off completely. (I'll review this decision in a few weeks' time, but right now I don't see why I should enable a feature which seems to only be used by spammers. Even though the built-in spam filtering caught all of them, I can't be arsed riding herd on it.)

Interestingly, there have been no attempts at comment spamming — or none that have got through the various spam detector plugins I've installed.

Meanwhile, my email continues to attract 300-500 spams per day, and a total of 2,000-3,000 per day for all ten or so users on my server.
