« In Seattle | Main | Retrograde »

LOGIN 2009 keynote: gaming in the world of 2030

I've just given one of the keynote speeches at the LOGIN 2009 conference here in Seattle. Here's more or less what I said ... Imagine you're sitting among a well-fed audience of MMO developers and gaming startup managers (no, nobody video'd the talk):

Good morning. I'm Charlie Stross; I write science fiction, and for some reason people think that this means I can predict the future. If only I could: the English national lottery had a record roll-over last week, and if I could predict the future I guess I'd have flown here on my new bizjet rather than economy on Air France.

So that's just a gentle reminder to take what I'm going to say with a pinch of salt.

For the past few years I've been trying to write science fiction about the near future, and in particular about the future of information technology. I've got a degree in computer science from 1990, which makes me a bit like an aerospace engineer from the class of '37, but I'm not going to let that stop me.

The near future is a particularly dangerous time to write about, if you're an SF writer: if you get it wrong, people will mock you mercilessly when you get there. Prophecy is a lot easier when you're dealing with spans of time long enough that you'll be comfortably dead before people start saying "hey, wait a minute ..."

So: what do we know about the next thirty years?

Quite a lot, as it turns out — at least, in terms of the future of gaming. Matters like the outcome of next year's Super Bowl, or the upcoming election in Germany, are opaque: they're highly sensitive to a slew of inputs that we can't easily quantify. But gaming is highly dependent on three things: technological progress, social change, and you.

Let's look at the near-future of the building blocks of computing hardware first.

On a purely technological level, we've got a pretty clear road-map of the next five years. You know all about road maps; the development cycle of a new MMO is something like 5 years, and it may spend another half decade as a cash cow thereafter. The next five years is a nice comfortable time scale to look at, so I'm going to mostly ignore it.

In the next five years we can expect semiconductor development to proceed much as it has in the previous five years: there's at least one more generation of miniaturization to go in chip fabrication, and that's going to feed our expectations of diminishing power consumption and increasing performance for a few years. There may well be signs of a next-generation console war. And so on. This isn't news.

One factor that's going to come into play is the increasing cost of semiconductor fab lines. As the resolution of a lithography process gets finer, the cost of setting up a fab line increases — and it's not a linear relationship. A 22nm line is going to cost a lot more than a 32nm line, or a 45nm one. It's the dark shadow of Moore's Law: the cost per transistor on a chip may be falling exponentially, but the fabs that spit them out are growing pricier by a similar ratio.
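
To make the squeeze concrete, here's a toy Python sketch of the two curves pulling in opposite directions: cost per transistor falling while fab cost rises. The starting figures and the per-generation scaling ratios are illustrative assumptions, not industry data.

    # Illustrative back-of-envelope only: every number below is an assumption.
    nodes = ["65nm", "45nm", "32nm", "22nm"]
    transistors_per_chip = 300e6      # assumed starting transistor count
    cost_per_chip = 100.0             # assumed die cost, held roughly constant (USD)
    fab_cost = 3e9                    # assumed cost of a leading-edge fab (USD)

    for node in nodes:
        print(f"{node}: ~{cost_per_chip / transistors_per_chip:.2e} $/transistor, "
              f"fab ~${fab_cost / 1e9:.1f}bn")
        transistors_per_chip *= 2     # roughly one density doubling per node shrink
        fab_cost *= 1.5               # fabs get pricier each generation (assumed ratio)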

Something like this happened, historically, in the development of the aerospace industry. Over the past thirty years, we've grown used to thinking of the civil aerospace industry as a mature and predictable field, dominated by two huge multinationals and protected by prohibitive costs of entry. But it wasn't always so.

Back in the nineteen-teens, it cost very little to get in on the game and start building aircraft; when a timber magnate called Bill went plane-crazy he and one of his buddies took a furniture shop, bought a couple of off-the-shelf engines, and built some birds to sell to the US navy. But today, it takes the company he founded close to a decade and ten billion dollars to roll out an incremental improvement to an existing product — to go from the Boeing 747-100 to the 747-400.

It turns out that the power-to-weight ratio of a modern high-bypass turbofan engine is vastly higher than that of an early four-stroke piston engine, modern construction materials are an order of magnitude stronger, and we're just a hell of a lot better at aerodynamics and design and knowing how to put the components together to make a working airliner.

However, the civil airliner business hit an odd brick wall in the late 1960s. The barrier was a combination of increasing costs due to mushrooming complexity, and the fact that aerodynamic drag goes up nonlinearly once you try to go supersonic. Concorde and the Tupolev 144 — both supersonic airliners — turned out to be dead ends, uneconomical and too expensive to turn into mass consumer vehicles. And today, our airliners are actually slower than they were thirty years ago.

In the medium term (by which I mean 5-15 years) we're going to reach the end of the exponential curve of increasing processing power that Gordon Moore noticed back in the late 1960s. Individual atoms are a few tenths of a nanometre across; it's hard to see how we can miniaturize our integrated circuits much below the 10nm scale. And at that point, there's going to be a big shake-up in the semiconductor business. In particular, Intel, AMD and the usual players won't be able to compete on the basis of increasing circuit density any more; just as the megahertz wars ended around 2005 due to heat dissipation, the megaflop wars will end some time between 2015 and 2020 due to the limits of miniaturization.
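
A rough Python sketch of that arithmetic: how much density headroom is left between a 45nm process and a roughly 10nm floor, assuming density scales with the inverse square of the feature size (a simplification).

    import math

    # Rough arithmetic only; assumes transistor density ~ 1 / (feature size)^2.
    current_node_nm = 45.0
    floor_nm = 10.0

    density_gain = (current_node_nm / floor_nm) ** 2
    doublings_left = math.log2(density_gain)
    print(f"density headroom ~{density_gain:.0f}x, i.e. ~{doublings_left:.1f} doublings")
    # At roughly two years per doubling, that headroom runs out around 2015-2020.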

There's still going to be room for progress in other directions. It's possible to stack circuits vertically by depositing more layers on each die; but this brings in new challenges — heat dissipation and interconnection between layers, if nothing worse. There's room for linear scaling here, but not for the exponential improvements we've come to expect. Stacking a hundred layered chips atop each other isn't going to buy us the kind of improvement we got between the 8080 and the i7 core — not even close.

This is going to force some interesting economies of scale. Over the past couple of decades we've seen an initially wide-open playing field for processors diminish as bit players were squeezed out: we had SPARC and PA-RISC and IBM's Power architecture and SGI's MIPS and ARM and the 68000 series and, and, and. But today we're nearly down to two architectures in the consumer space: Intel on the PCs and Macs — which are basically just a PC with a different user interface, these days — and ARM on handhelds. Actually, ARM is about 95% of everything, consumer and embedded both — as long as you remember that the vast majority of consumer-owned computers are phones or embedded gizmos. The other architectures hang on in niches in the server and embedded space but get no love or attention outside them.

I expect to see a similar trend towards convergence of GPUs, too. It's expensive to develop them, and graphics processors aren't made of sparkly unicorn turds; it's semiconductors all the way down, constrained by the same physics as the other components — memory, CPU, whatever. So I expect we'll see a market in the next decade where we're down to a couple of processor architectures and a handful of GPU families — and everything is extremely boring. New components will be either the result of heroic efforts towards optimization, or built-in obsolescence, or both.

I don't want to predict what we end up with in 2020 in terms of raw processing power; I'm chicken, and besides, I'm not a semiconductor designer. I'd be surprised if we didn't get an order of magnitude more performance out of our CPUs between now and then — maybe two — and an order of magnitude lower power consumption; but I don't expect to see the performance improvements of the 1990s or early 2000s ever again. The steep part of the sigmoid growth curve is already behind us.
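
For illustration only, here's a toy logistic ("sigmoid") growth curve in Python. Every parameter is an assumption chosen to make the shape visible; it's a picture of the argument, not a forecast.

    import math

    def relative_performance(year, midpoint=2002, steepness=0.35, ceiling=1000.0):
        """Toy logistic curve: steep growth around the midpoint, flattening later."""
        return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

    for year in range(1990, 2031, 5):
        print(year, round(relative_performance(year), 1))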

Now that I've depressed you, let's look away from the hardware for a minute.

After processor performance (and by extension, memory density), the next factor we need to look at is bandwidth. Here, the physical limits are imposed by the electromagnetic spectrum. I don't think we're likely to get much more than a terabit per second of bandwidth out of any channel, be it wireless or a fibre-optic cable, because once you get into soft X-rays your network card becomes indistinguishable from a death ray. But between fixed points we can bundle lots of fibres, and use ultrawideband for the last ten or a hundred metres from the access point to the user.
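
A back-of-envelope Python sketch of why a terabit per second is awkward over the air: at an assumed spectral efficiency of one bit per hertz, a single 1Tb/s channel wants more bandwidth than the whole radio spectrum offers, which is what pushes you towards optical frequencies and, eventually, the death-ray end of the dial.

    # Back-of-envelope only; the spectral-efficiency figure is an assumption.
    target_bit_rate = 1e12          # 1 Tb/s
    bits_per_hz = 1.0               # assumed bits per Hz of channel bandwidth
    usable_radio_hz = 3e11          # ~300 GHz, roughly the top of the radio bands

    needed_bandwidth = target_bit_rate / bits_per_hz
    print(f"needed channel bandwidth: {needed_bandwidth / 1e9:.0f} GHz")
    print(f"usable radio spectrum:    {usable_radio_hz / 1e9:.0f} GHz")
    # The channel already needs more spectrum than the radio bands hold at this
    # efficiency, so the carrier has to climb towards optical frequencies and up.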

So: let's consider the consequences of ubiquitous terabit per second wireless data.

The quiet game-changing process underneath the radar is going to be the collision between the development of new user interfaces and the build-out of wireless technologies. Ubiquitous UMTS and follow-on developments of WCDMA are giving phones download speeds of 7.2Mbps as standard; WiMax and the embryonic 4G standards promise 50-100Mbps on the horizon; wifi is everywhere.

We're still driving up the steep shoulder of the growth curve of mobile bandwidth; we're nowhere near that terabit-per-second plateau at the top. Wireless LANs are now ubiquitous, and adoption is heading towards 70Mbps this year and 200Mbps in the next couple of years. On the WWAN front, the mobile phone operators have already been forced to give up their walled gardens of proprietary services and start to compete purely on supply of raw bandwidth: not willingly, but the threat of wifi has them running scared. Their original vision of making money by selling access to proprietary content — TV over mobile phone — has failed; plan B is the ubiquitous 3G dongle or wireless-broadband-enabled laptop.

Telephony itself is turning weird this decade. If your phone is an always-on data terminal with 100Mbps coming into it, why would you want to make voice calls rather than use Skype or some other VoIP client? Computers are converging with television, and also with telephones. Or rather, both TV and phones are shrinking to become niche applications of computers (and the latter, telephony, is already a core function of the mobile computers we call mobile phones), and computers in turn are becoming useful to most of us primarily as networked devices.

The iPhone has garnered a lot of attention. I've got one: how about you? As futurist, SF writer and design guru Bruce Sterling observed, the iPhone is a Swiss army knife of gadgets — it's eating other devices alive. It's eaten my digital camera, phone, MP3 player, personal video player, web browser, ebook reader, street map, and light saber. But the iPhone is only the beginning.

Add in picoprojectors, universal location and orientation services, and you get the prerequisites for an explosion in augmented reality technologies.

The class of gadgets that the iPhone leads — I want you to imagine the gadget class that is the PC today, in relation to the original Macintosh 128K back in 1984 — is something we don't really have a name for yet. Calling it a "smart phone" seems somehow inadequate. For one thing, we're used to our mobile phones being switched on, or off (at least, in standby mode). This gadget is never off — it is in constant communication with the internet. It knows where it is, and it knows which way up it is (it's orientation sensitive). It can see things you point it at, and it can show you pictures. (Oh, and it does the smartphone thing as well, when you want it to.)

Let me give you a handle on this device, the gadget, circa 2020, which has replaced our mobile phones. It's handheld, but about as powerful as a fully loaded workstation today. At its heart is a multicore CPU delivering probably about the same performance as a quad-core Nehalem, but on under one percent of the power. It'll have several gigabytes of RAM and somewhere between 256GB and 2TB of Flash SSD storage. It'll be coupled to a very smart radio chipset: probably a true software-directed radio stack, where encoding and decoding is basically done in real time by a very fast digital signal processor, and it can switch radio protocols entirely in software. It'll be a GPS and digital terrestrial radio receiver and digital TV receiver as well as doing 802.whatever and whatever 4G standard emerges as victor in the upcoming war for WWAN preeminence.
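
A minimal Python sketch of the software-directed radio idea: the "protocol" is just a function applied to raw samples, so switching protocols is a software change, not a hardware one. The two demodulators are toy stand-ins, not real 4G or broadcast decoders.

    import numpy as np

    def demod_am(samples):
        """Toy AM demodulation: take the envelope of the complex baseband signal."""
        return np.abs(samples)

    def demod_fm(samples):
        """Toy FM demodulation: phase difference between successive samples."""
        return np.angle(samples[1:] * np.conj(samples[:-1]))

    PROTOCOLS = {"am": demod_am, "fm": demod_fm}

    def receive(samples, protocol):
        # Switching protocols is a dictionary lookup, not a new chip.
        return PROTOCOLS[protocol](samples)

    iq = np.exp(1j * np.cumsum(np.random.randn(1000) * 0.1))  # fake I/Q capture
    print(receive(iq, "fm")[:5])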

One of the weaknesses of today's smartphones is that they're poor input/output devices: tiny screens, useless numeric keypads or chiclet QWERTY thumb boards. The 2020 device will be somewhat better; in addition to the ubiquitous multitouch screen, it'll have a couple of cameras, accelerometers to tell it which way it's moving, and a picoprojector.

The picoprojector is really cool right now: it's the next solid-state gizmo that your phone is about to swallow. Everyone from Texas Instruments to Samsung is working on them. The enabling technologies are: compact red, blue, and green solid-state lasers, and a micro-electromechanical mirror system to scan them across a target — such as a sheet of paper held a foot in front of your phone. Or a tabletop. Picoprojectors will enable a smartphone to display a laptop-screen-sized image on any convenient surface.
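
Here's a toy Python model of the scanning geometry (a fast resonant horizontal mirror and a slow vertical mirror tracing out a raster while the lasers are modulated per pixel). The resolution, frame rate and mirror angles are assumptions picked purely for illustration.

    import math

    WIDTH, HEIGHT, FPS = 848, 480, 60     # assumed projected resolution and frame rate

    def mirror_angles(t, h_max_deg=12.0, v_max_deg=7.0):
        """Return (horizontal, vertical) mirror deflection in degrees at time t."""
        line_rate = HEIGHT * FPS                                 # horizontal sweeps per second
        h = h_max_deg * math.sin(2 * math.pi * line_rate * t)    # resonant fast axis
        v = v_max_deg * ((t * FPS) % 1.0 * 2 - 1)                # linear slow axis, one frame per 1/FPS
        return h, v

    print(mirror_angles(0.0001))
    print("pixel clock ~", WIDTH * HEIGHT * FPS / 1e6, "MHz")    # how fast the lasers must modulate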

The other promising display technology is, of course, those hoary old virtual reality goggles. They've come a long way since 1990; picoprojectors in the frames, reflecting images into your eyes, and cameras (also in the frames), along with UWB for hooking the thing up to the smartphone gizmo, may finally make them a must-have peripheral: the 2020 equivalent of the bluetooth hands-free headset.

Now, an interesting point I'd like to make is that this isn't a mobile phone any more; this device is more than the sum of its parts. Rather, it's a platform for augmented reality applications.

Because it's equipped with an always-on high bandwidth connection and sensors, the device will be able to send real-time video from its cameras to cloud-hosted servers, along with orientation information and its GPS location as metadata. The cloud apps can then map its location into some equivalent information space — maybe a game, maybe a geographically-tagged database — where it will be convolved with objects in that information space, and the results dumped back to your screen.
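
A minimal Python sketch of that round trip, with invented field names and an invented database of tagged objects: the handset uploads frame metadata, the server returns nearby overlays to draw.

    import json
    import math

    # Invented data: a couple of objects tagged at Seattle-ish coordinates.
    TAGGED_OBJECTS = [
        {"name": "Pike Brewery (virtual shopfront)", "lat": 47.6085, "lon": -122.3401},
        {"name": "quest giver avatar",               "lat": 47.6090, "lon": -122.3350},
    ]

    def nearby_overlays(payload_json, radius_m=150.0):
        p = json.loads(payload_json)
        hits = []
        for obj in TAGGED_OBJECTS:
            # Crude flat-earth distance; fine at city scale, illustration only.
            dx = (obj["lon"] - p["lon"]) * 111320 * math.cos(math.radians(p["lat"]))
            dy = (obj["lat"] - p["lat"]) * 110540
            if math.hypot(dx, dy) <= radius_m:
                hits.append(obj["name"])
        return hits

    payload = json.dumps({"lat": 47.6088, "lon": -122.3390,
                          "heading_deg": 210, "pitch_deg": -5, "ts": 1241900000})
    print(nearby_overlays(payload))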

For example: if you point your phone at a shop front tagged with an equivalent location in the information space, you can squint at it through the phone's screen and see ... whatever the cyberspace equivalent of the shop is. If the person you're pointing it at is another player in a live-action game you're in (that is: if their phone is logged in at the same time, so the game server knows you're both in proximity), you'll see their avatar. And so on.

Using these gizmos, we won't need to spend all our time pounding keys and clicking mice inside our web browsers. Instead, we're going to end up with the internet smearing itself all over the world around us, visible at first in glimpses through enchanted windows, and then possibly through glasses, or contact lenses, with embedded projection displays.

There are many non-game applications for phones with better output, of course. For starters, it'll address all our current personal computing needs: stick a camera chip next to the microprojector to do video motion capture on the user's fingers, and you've got a virtual keyboard for grappling with those thorny spreadsheet and presentation problems. But then it'll do new stuff as well. For example, rather than just storing your shopping list, this gadget will throw the list, and your meatspace location, at the store's floor map and inventory database and guide you on a handy path to each item on the list.
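
A small Python sketch of the shopping-list guidance: greedy nearest-neighbour routing over an invented store floor plan (a real store would want proper pathfinding around the shelves).

    import math

    ITEM_LOCATIONS = {          # (x, y) in metres from the entrance; invented layout
        "coffee":  (4, 12),
        "milk":    (18, 3),
        "noodles": (7, 9),
        "soap":    (15, 14),
    }

    def route(shopping_list, start=(0, 0)):
        """Visit each item in greedy nearest-first order from the current position."""
        remaining = set(shopping_list)
        here, path = start, []
        while remaining:
            nxt = min(remaining, key=lambda item: math.dist(here, ITEM_LOCATIONS[item]))
            path.append(nxt)
            here = ITEM_LOCATIONS[nxt]
            remaining.remove(nxt)
        return path

    print(route(["milk", "soap", "coffee", "noodles"]))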

And then there's the other stuff. Storage is basically so cheap it's nearly free. Why not record a constant compressed video stream of everything you look at with those glasses? Tag it by location and vocalization — do speech-to-text on your conversation — and by proximity to other people. Let your smartphone remember things and jog your memory: you'll be able to query it with things like, "who was that person sitting at the other side of the table from me in the Pike Brewery last Tuesday evening with the fancy jacket I commented on?" Or maybe "what did Professor Jones say fifteen minutes into their Data Structures lecture on Friday while I was asleep?" I don't know about you, but I could really do with a prosthetic memory like that — and as our populations age, as more people have to live with dementia, there'll be huge demand for it. In Japan today, the life expectancy of a girl baby is 102 years. Which sounds great, until you learn that in Japan today, 20% of over-85s have Alzheimer's.
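
A toy Python sketch of such a prosthetic memory: lifelog events indexed by time, place, people and transcript, queried much like the examples above. The events, names and transcripts are invented; a real system would sit on top of speech-to-text, proximity sensing and a great deal of storage.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Event:
        when: datetime
        place: str
        people: list          # names or device IDs seen nearby
        transcript: str       # speech-to-text of the conversation

    LIFELOG = [
        Event(datetime(2009, 5, 12, 20, 30), "Pike Brewery",
              ["Alice", "Bob"], "nice jacket, where did you get it"),
        Event(datetime(2009, 5, 15, 10, 15), "Data Structures lecture",
              ["Prof. Jones"], "and that is why red-black trees stay balanced"),
    ]

    def who_was_there(place, keyword):
        """Return people present at events matching a place and a spoken keyword."""
        return [person for e in LIFELOG
                if place.lower() in e.place.lower() and keyword in e.transcript
                for person in e.people]

    print(who_was_there("Pike Brewery", "jacket"))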

Bouncing back to the present day, one of the weird side-effects of dropping GPS into a communications terminal is that traditional paper maps are rapidly becoming as obsolescent as log tables were in the age of the pocket calculator. When we have these gizmos and add access to a geolocation-tagged internet, not only are we going to know where we are all the time, we're going to know where we want to be (which is subtly different). And with RFID chips infiltrating everything, we're probably also going to know where everything we need to find is. No more getting lost: no more being unable to find things.

There are many other uses for the output devices we'll be using with these gizmos, too. Consider the spectacles I'm wearing. They're made of glass, and their design has fundamentally not changed much since the fifteenth century — they're made of better materials and to much better specifications, but they're still basically lenses. They refract light, and their focus is fixed. This is kind of annoying; I'm beginning to suffer from presbyopia and I need new lenses, but spectacle fashions this year are just plain boring.

I've already mentioned using picoprojectors to provide a head-up display via spectacles. I'd like you to imagine a pair of such video glasses — but with an opaque screen, rather than an overlay. Between the camera on the outside of each "lens" and the eye behind it, we can perform any necessary image convolution or distortion needed to correct my visual problems. We can also give our glasses digital zoom, wider viewing angles, and low light sensitivity! Not to mention overlaying our surroundings with a moving map display if we're driving. All great stuff, except for the little problem of such glasses blocking eye contact, which means they're not going to catch on in social environments — except possibly among folks who habitually wear mirrorshades.
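
A minimal Python sketch of that per-frame processing: a sharpening convolution plus a crude digital zoom sitting between the outward-facing camera and the in-glasses display. The kernel and zoom factor are arbitrary choices, not a prescription for fixing anyone's eyesight.

    import numpy as np

    SHARPEN = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=float)   # arbitrary sharpening kernel

    def convolve2d(frame, kernel):
        """Naive 'valid' 2-D convolution, enough for illustration."""
        kh, kw = kernel.shape
        h, w = frame.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(frame[y:y + kh, x:x + kw] * kernel)
        return out

    def digital_zoom(frame, factor=2):
        """Crop the centre and repeat pixels: crude, but it shows the idea."""
        h, w = frame.shape
        crop = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

    frame = np.random.rand(120, 160)          # stand-in for a camera frame
    print(digital_zoom(convolve2d(frame, SHARPEN)).shape)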

So let's put this all together, and take a look at where the tech side is going in the next 25 years.

For starters, once you get more than a decade out (around 2020 or thereabouts) things turn weird on the hardware front. We can expect to get another generation of fab lines out of our current technology, but it's not obvious that we'll see chip fabrication processes push down to a resolution of less than 20nm. By 2030 it's almost inevitable that Moore's Law (in its classic formulation) will have hit a brick wall, and the semiconductor industry will go the way of the civil aerospace industry.

There'll be a lot of redundancies, and consolidation, and commodification of the product lines. Today we don't buy airliners on the basis of their ability to fly higher and faster; we buy them because they're more economical to operate, depreciate less, or fill specialized niches. Airliners today are slower than they were thirty years ago; but they're also cheaper, safer, and more efficient.

In the same time frame, our wireless spectrum will max out. Our wireless receivers are going to have to get smarter to make optimal use of that bandwidth; it'll be software-directed radio all round, dynamically switching between protocols depending on whether they need to maximize range or bit rate in the horribly noisy environment. But we're going to hit the wireless buffers one way or the other in the same period we hit the Moore's Law buffers.
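
A Python sketch of what "smarter" means here: estimate the channel's capacity from its bandwidth and signal-to-noise ratio using Shannon's formula, then pick the densest modulation that fits. The schemes and thresholds are illustrative assumptions, not any particular standard.

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        """Shannon capacity C = B * log2(1 + S/N)."""
        snr = 10 ** (snr_db / 10.0)
        return bandwidth_hz * math.log2(1 + snr)

    def pick_scheme(bandwidth_hz, snr_db):
        capacity = shannon_capacity_bps(bandwidth_hz, snr_db)
        # (scheme, raw bit rate it needs) - illustrative figures only
        for scheme, needed in [("64-QAM", 6 * bandwidth_hz),
                               ("16-QAM", 4 * bandwidth_hz),
                               ("QPSK",   2 * bandwidth_hz)]:
            if capacity >= needed:
                return scheme
        return "BPSK"   # fall back to long range, low bit rate

    print(pick_scheme(20e6, snr_db=25))   # clean channel
    print(pick_scheme(20e6, snr_db=5))    # noisy channel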

There may, of course, be wildcard technologies that will save us. Quantum computing (if anyone knows how to make it work). Massively parallel processing (ditto). We may see more efficient operating systems — Microsoft's Windows 7 seems set to roll back the bloat relative to Vista, which was designed against a backdrop of the megahertz wars for the 5GHz desktop processors that turned out not to be viable. On a similar note, Linux derivatives like Android and Moblin, and that BSD/Mach hybrid, OS X, are being pared down to do useful work on the sort of low-end processors we can run off the kind of batteries that don't require fire extinguishers and safety goggles. If we can work out how to reduce the operating system overheads by an order of magnitude without sacrificing their utility, that's going to have interesting implications.

But ultimately, the microcomputer revolution is doomed. The end is nigh!

By 2030 we're going to be looking at a radically different world: one with hard limits to available processing power and bandwidth. The hard limits will be generous — there's room for one or two orders of magnitude more processing power, and maybe five orders of magnitude more bandwidth — but they'll be undeniable.

The next thing I'd like to look at is the human factor.

Let's start with the current day. Today, gamers are pretty evenly split by gender — the days when it was possible to assume that there were many more males than females are over — and the average age is north of thirty and rising. I don't know anyone much over fifty who's a serious gamer; if you didn't have consoles or personal computers in your world by the time you hit thirty, you probably didn't catch the habit. This is rather unlike the uptake pattern for film or TV, probably because those are passive media — the consumer doesn't actually have to do anything other than stare at a screen. The learning curve of even a console controller is rather off-putting for folks who've become set in their ways. I speak from experience: my first console was a Wii, and I don't use it much. (PCs are more my thing.) At a guess, most gamers were born after 1950 — the oldest today would have been in their mid-20s in the mid-seventies, when things like the Atari 2600 roamed the Earth and the Apple II was the dizzy pinnacle of home electronics — and the median-age cohort was born around 1975 and had an NES.

We talk about the casual/hardcore split, but that's a bit of a chimera. We've always had hardcore gamers; it's just that before they had consoles or PCs, they played with large lumps of dead tree. I lost a good chunk of the 1970s and early 1980s to Dungeons and Dragons, and I'm not afraid to admit it. You had to be hardcore to play in those days because you had the steep learning curve associated with memorizing several hundred pages of rule books. It's a somewhat different kind of grind from levelling up to 80 in World of Warcraft, but similarly tedious. These days, the age profile of tabletop RPGers is rising just like that of computer-assisted gamers — and there are now casual gamers there, too, using a class of games designed to be playable without exotic feats of memorization.

So, let's look ahead to 2030.

We can confidently predict that by then, computer games will have been around for nearly sixty years; anyone under eighty will have grown up with them. The median age of players may well be the same as the median age of the general population. And this will bring its own challenges to game designers. Sixty year olds have different needs and interests from twitchy-fingered adolescents. For one thing, their eyesight and hand-eye coordination isn't what it used to be. For another, their socialization is better, and they're a lot more experienced.

Oh, and they have lots more money.

If I was speccing out a business plan for a new MMO in 2025, I'd want to make it appeal to these folks — call them codgergamers. They may be initially attracted by cute intro movies, but jerky camera angles are going to hurt their aging eyes. Their hand/eye coordination isn't what it used to be. And like sixty-somethings in any cohort, they have a low tolerance for being expected to jump through arbitrary hoops for no reward. When you can feel grandfather time breathing down your neck, you tend to focus on the important stuff.

But the sixty-something gamers of 2030 are not the same as the sixty-somethings you know today. They're you, only twenty years older. By then, you'll have a forty year history of gaming; you won't take kindly to being patronised, or given in-game tasks calibrated for today's sixty-somethings. The codgergamers of 2030 will be comfortable with the narrative flow of games. They're much more likely to be bored by trite plotting and clichéd dialog than today's gamers. They're going to need less twitchy user interfaces — ones compatible with aging reflexes and presbyopic eyes — but better plot, character, and narrative development. And they're going to be playing on these exotic gizmos descended from the iPhone and its clones: gadgets that don't so much provide access to the internet as smear the internet all over the meatspace world around their owners.

If this sounds like a tall order, and if you're wondering why you might want to go for the sixty-something hardcore gamer demographic, just remember: you're aiming to grab the share of the empty-nester recreational budget that currently goes in the direction of Winnebago and friends. Once gas regularly starts to hit ten bucks a gallon (which it did last year where I come from) they'll be looking to do different things with their retirement — the games industry is perfectly positioned to clean up.

And then there are the younger generation. Let's take a look at generation Z:

The folks who are turning 28 in 2030 were born in 2002. 9/11 happened before they were born. The first President of the United States they remember is Barack Obama. The space shuttle stopped flying when they were eight. Mobile phones, wifi, broadband internet, and computers with gigabytes of memory have been around forever. They have probably never seen a VHS video recorder or an LP record player (unless they hang out in museums). Oh, and they're looking forward to seeing the first man on the moon. (It's deja vu, all over again.)

I'm not going to even dare to guess at their economic conditions. They might be good, or they might be terrible — insert your worst case prognostications about global climate change, rising sea levels, peak oil, and civil disorder here.

Moreover, I don't think I'm sticking my neck too far above the parapet if I say that by 2030, I think the American market will be something of a backwater in the world of online gaming. China is already a $4Bn/year market; but that's as nothing compared to the 2030 picture. The Chinese government is currently aiming to make an economic transition which, if successful, will turn that country into a first world nation. Think of Japan, only with ten times the population. And then there's India, also experiencing stupefying growth, albeit from a poverty-stricken starting point. Each of these markets is potentially larger than the United States, European Union, and Japan, combined.


The world of 2030: what have I missed?

I said earlier that I'm not a very accurate prophet. Our hosts have only given me an hour to stand up here and drone at you; that limits my scope somewhat, but let me try and give a whistle-stop tour of what I've missed out.


  • I am assuming that we are not all going to die of mutant swine flu, or run out of energy, or collectively agree that computer games are sinful and must be destroyed. This assumption — call it the "business as usual" assumption — is a dubious one, but necessary if we're going to contemplate the possibility of online games still existing in 2030.

  • I have short-sightedly ignored the possibility that we're going to come up with a true human-equivalent artificial intelligence, or some other enabling mechanism that constitutes a breakthrough on the software or content creation side and lets us offload all the hard work. No HAL-9000s here, in other words: no singularity (beyond which our current baseline for predictions breaks down). Which means, in the absence of such an AI, that the most interesting thing in the games of 2030 will be, as they are today, the other human players.

  • I am assuming that nothing better comes along. This is the most questionable assumption of all. Here in the world of human beings — call it monkeyspace — we are all primates who respond well to certain types of psychological stimulus. We're always dreaming up new ways to push our in-built reward buttons, and new media to deliver the message. Television came along within fifty years of cinema and grabbed a large chunk of that particular field's lunch. Cinema had previously picked theatre's pocket. And so on. Today, MMO gaming is the new kid on the block, growing ferociously and attracting media consumers from older fields. I can't speculate on what might eat the computer games field's lunch — most likely it'll be some new kind of game that we don't have a name for yet. But one thing's for sure: by 2030, MMOs will be seen as being as cutting edge as 2D platform games are in 2009.


In fact, I'm making a bunch of really conservative assumptions that are almost certainly laughable. For all I know, the kids of 2030 won't be playing with computers any more — as such — rather they'll be playing with their nanotechnology labs and biotech in a box startups, growing pocket-sized dragons and triffids and suchlike. Nothing is going to look quite the way we expect, and in a world where the computing and IT revolution has run its course, some new and revolutionary technology sector is probably going to replace it as the focus of public attention.

Nevertheless ...

Welcome to a world where the internet has turned inside-out; instead of being something you visit inside a box with a coloured screen, it's draped all over the landscape around you, invisible until you put on a pair of glasses or pick up your always-on mobile phone. A phone which is to today's iPhone as a modern laptop is to an original Apple II; a device which always knows where you are, where your possessions are, and without which you are — literally — lost and forgetful.

Welcome to a world where everyone is a gamer — casual or hardcore, it makes little difference — and two entire generational cohorts have been added to your market: one of them unencumbered by mortgage payments and the headaches of raising a family.

This is your future; most of you in this audience today will be alive and working when it gets here. Now is probably a bit early to start planning your development project for 2025; but these trends are going to show up in embryonic form well before then.

And if they don't? What do I know? I've got an aerospace engineering degree from 1937 ....




92 Comments

1:

And the response was stunned silence, or carefully prepared questions?

2:

Maggie: somewhere in-between ...

3:

You've forgotten Moore's *second* law: The price of a fabrication facility doubles every four years.

Combined with the 18-month doubling of transistors/cm^2 of the first law, prices should keep going down, but those fabs are getting into the billions.

4:

That is a lovely wrapping up of some interesting research and extrapolation.

Nicely done.

5:

The outcome of the copyfight is a big variable, if you're right about everything else.

If strong IP laws stay in place, things could get scary. It wouldn't be economically viable to redesign a processor from scratch and build the fabs necessary to produce it. Intel could more or less continue its business model by setting a max run time on their chips.

If they don't, things are more interesting. Fabs are still horrendously expensive to build, but time in them is just a commodity, just as seats in 747s are today. As processor designs drift into the public domain, their price settles at slightly above production cost.

6:

Hey, Charlie, I wanna read that book (you know, the one lurking in your lecture).

Or is it Halting State Deux (This time it's personal-er!)

7:

Interesting thinking, especially RE: the 2030 60-somethings.

Were the disclaimers only for caveat emptor; or designed to make the talk more appealing? (for all definitions of appealing that result from being self deprecating)

8:

A bare-ground 30 nm "node" fab is on the current close order of 4.5 GUSD.

Processor design is so hard that it will never go significantly public domain because the minimum team size is well into the hundreds; the basic primate social mechanisms won't let you do this, it takes forethought and planning.

(Intel's, almost certainly successful in the time frame Charlie is talking about, business plan is simply to be the last guy standing by virtue of capital reserves. It might be Intel and a combined everybody else; it might be a combined everybody else if the various East Asian governments get sufficiently peeved at Intel and the American hegemon isn't in a position to insist they play nice with what is increasingly nominally an American corporation.)

None of this really matters, though; processor capability is not the bottleneck.

There's symmetric multi-processing, a big research area in the early 80s and now ubiquitous, where you have a small bunch of identical processors sharing tasks under the control of a scheduler, which is how everything multi-core and multi-socket now works; there's distributed processing, where you chop your problem into a great many small pieces and assemble the answer from what a diversity of processing nodes hand you back ("SETI/Folding/etc. at Home", the archetypal Beowulf Cluster); and there's (typically massively) parallel computing, where you have a large number of processing components gnawing on pieces of the problem but no scheduler as such; this is how GPUs work, and the great pity of it is that no-one has figured out how to write good algorithms for the hardware. (Which is why you sometimes see 40% performance improvements due to "driver upgrade"; it's a completely new set of code for drawing some class of objects on that hardware.)

If someone does figure out how to write good algorithms for the parallel architecture, expectations of computational performance are going to change.

9:

That's quite nice, and I'm sure it was better delivered as a speech.

Michael@5 has a point about the "copyfight". I'm sure we don't know how it's going to come out, but it could change "everything".

One weird usage I noticed -- "hit the buffers". I know that as a rail usage, which means it'll be unknown to many Americans these days.

10:

Great speech Charlie...

that future reads a little like Vinge's Rainbows End

(ie, sounds awesome!)

Thanks!

- 40 yo gaming codger

11:

I concur, that was a very good speech, full of interesting ideas that painted a very believable future. So believable that it was almost too conservative (*gasp*). We've read so much SF that is like that, that it almost feels passe, like cyberpunk and space colonies.

I must say I would like to be in that future now, it is richer and far more interesting that the sterile visions put out by the mainstream hardware and software companies. It offers so many more possibilities to enrich our lives.

Again, well done!

12:

BTW, I am a 54 year old codger who flirted with, but never really GOT video games. But I watch my 12 year old son and see what you say fitting him like a glove. I just hope that the smeared internet will enhance what I am interested in, books and movies.

13:

@Graydon:

I wouldn't totally count out open source processor design. There are plenty of open source projects with team sizes well into the hundreds. That said, it doesn't matter. If feature size reduction hits a wall as posited, existing designs will eventually fall into the public domain.

14:

Charlie's dislike of social networking sites and apps has caused him to miss a major aspect of any future internets and probably future games.

Discuss.

15:

As a young and relatively inexperienced game designer from South East Asia, I don't know whether to be excited or terrified by this near future. (I think I felt something similar after reading Halting State.) It's difficult enough trying to figure out what _this_ generation of adult casual gamers wants. And I still am not attracted to the iPhone. Maybe resistance is futile.

Great speech!

16:

Luna: played with an iPhone yet? I'd have one if they were cheap.

17:

Immediately after reading this speech, I followed a link to this strangely relevant cartoon: http://www.slowpokecomics.com/strips/pixelpast.html

18:

My mirrorshades will simply draw your eyes where your mirrorshades are. They can have those 2D barcode things on them to make it easier to get the correct identity and orientation and scale. Easy! Assuming we are friended. If you are a close friend you get my heartrate and eeg and facebook feed and all that, too.

I just attached radio loc8tor tags to my cats, because it is now time to do this.

19:

So the Eschaton's not coming to take us away after all?

20:

Interesting.
I'll switch to iPhone lookalikes when it gets BETTER.
For the meantime (next 2-5 years ??) I'll stick with single phone, single camera, etc.
Integration only works when it REALLY works - but then I am (was) an engineer.

Computing speeds... erm, diamond?
Or is that handwaving?

21:

Here's something you missed. The rise in physical interaction with games, and its consequences. Plug your camera-based capture of the world into the Wii and you get MMOs that depend/work on real physical actions. Two things come out of that.

1) Those codgers aren't going to be spry, so how do you deal with fantasy worlds and real life movements?

2) There's only one physical world. Are you going to see people capering down the street, fighting orcs as you try to get to the bank? Not likely. Vinge was right, the first place for augmented reality is the theme park - and that's where the MMO developer has to go.

22:

Graydon@8: "Processor design is so hard that it will never go significantly public domain because the minimum team size is well into the hundreds"

You might want to have a read about the Parallax Propeller, it's an 8-core RISC CPU that can push 160 MIPs. It was designed by one man.

He also wrote the assembly language and the high-level interpreter. He's a pretty smart guy, but let's face it, there's ever-increasing numbers of smart people with access to handy kit.

http://en.wikipedia.org/wiki/Parallax_Propellor

It's found a niche as a hardcore microcontroller, running home built games, video processors and the like. Yes, it's niche, but it's an example that with the correct tools, one person can build something that would have taken hundreds, just a few years back. That factor alone is pretty world changing.

23:

Charlie,

Interesting as always. And, of course, inspiring thought. Hopefully my wordiness doesn't toe too close to your soapbox.

I can think of a few angles that might expand on some of what you were referring to, off the top of my head. Given that this was a talk directed to the specific audience at hand, I can see how much of this would be digression. But!...


1. In regards to the China and India references - we have to remember that gaming is NOT everywhere there yet, not by a long stretch. The penetration of gaming and computing, or even 'smart' phones, from the folks I have met who were either from or visited these areas, is far from the level you see in the Japanese, European, or United States markets. Additionally, cultural differences will have a heavy impact. The Indian take on social networking, and the Chinese take on what level of interactive gaming is acceptable in Real Life, etc, will continue to evolve further away from the existing model.

2. The ubiquitous, effectively limitless data storage and broadband access leads to other issues - mainly SORTING data, accessing data valuable to the situation at hand, and properly handling said data. Data mining techniques, which have advanced greatly, still struggle with photographs to some extent and video to a much greater extent. Voice and image recognition helps and we will see a lot of energy poured into optimization of it. Which leads to...

3. Software. Oh, software. This is going to take sub-structure.
A. Software, taken in general, to include firmware and bios-level functionalities. The direction I see is of hardware-and-software from the manufacturer, combined with the continuing difficulty of homebrew fabrication of said hard/soft-wares. Hopefully, 'user level' abilities will be further augmented, such as home nanoscale assembly would promise. Though trusting the (l)users I do tech support for with anything self-replicating would be a very bad idea.

B. Vendor lock-in: The wave of the future. Without some effort into universalizing or open-source formats which can be migrated from system to system, vendor lock-in is going to be more and more of a problem. As in, "I would marry you - but you're a PC and I'm a Mac, our life-logs aren't in compatible formats." One can only hope that Unix/Linux derivative systems rule the bandwidth/storage roosts, with front-end systems delivering what we currently rely on operating systems for.

C. How much do you trust your software company? As applications grow necessarily beyond the scope of comprehension of any user, the ability to plant backdoors or triggered crash-codes into these massive interfaces grows, especially if we remain with the 'nation' as a primary paradigm. Linked to vendor lock-in, if the code is not open for review and it provides what many would consider vital or essential functions in your life, you're at the mercy of not the coder but their managers, their manager's managers, and the legal/political/military structure in the country of origin. BIOS locks, OEM operating systems, and the constant concern in security and military circles in regard to 'foreign' national software code is certainly based on very viable situations.

D. Social and cultural impacts: The definitions of privacy, appropriate conduct, acceptable behavior, will all mutate. Ubiquitous tele-presence will shatter some otherwise firmly entrenched biases, we can hope, unless national boundary level firewalls become much more common - and I think they will, and not all to the good, not by far.

4. Intellectual Property, Copyright and Content Ownership: Already mentioned above. Definitely a risk-carrier however, much as the vendor lock-in or software trust issues. My lifelog, and my user-generated content (be it social or gaming media), needs to be transferable as any data I create. Once transmitted through a certain provider or system, will they still claim rights over it? Given the draconian nature of EULAs, clickwraps, and other current legal devices I don't see that this will get much better soon. In the US, the DMCA or its successors are going to be ugly. Your picopicture transmitter is a public presentation in many situations. My telepresence ringtone, am I paying by-the-use, or for an unlimited license to private use only? The legal complexities will be astounding, and I have NO clue how Indian, Chinese, or European law will deal with it.

5. Inter-game agencies allowing movement of characters between worlds, by a barter-or-pay equivalent, will be important if we come into the 'age of the multiplayer online'. I would personally look forward to smaller, rather than multiplayer-massive, invite based or reputation-based servers where communities can grow, piggybacked off the general world-spaces. And imagine the horror...and pain....when a true Massive shuts its doors. The loss not only of levels, but friendships! Though with a blurring line between real-life and internet communications, you may know more than just someone's handle, and be able to keep touch.

6. Software agents: Gonna be needed, and soon. However, with the scaling problems of Moore's law in processing spaces as they currently exist, they have to be written for distributed systems of some type or another, unless they become the 'operating system' for your personal phone (comm unit) and are capable of effectively dominating that space, hiring out processing cycles or offloading to distributed systems as needed, and then down-cycling when a specific process is completed (such as re-tagging every video, email, letter, and other action in your lifelog for the last 2 or 5 years when you break up, to make her your "ex-girlfriend Sandy" or "that jerk I dumped").

7. Security....

I'm not going to type all of it. The sugar-rush of a slice of cake has worn off and I worry I might be stepping near the soap box line. Scraping the thousand word mark. G'night!

24:

Regarding the idea of the Internet as augmented reality, "draped all over the landscape around you, invisible until you put on a pair of glasses": you might be interested in this lovely, though slightly uneven, anime about exactly that: http://en.wikipedia.org/wiki/Denno_Coil

Most of the ideas in the show had already shown up in SF novels, academic papers, and so forth, but watching them spring to animated life is fascinating.

25:

The microprocessor revolution is as doomed as the jet revolution; doomed to success. In terms of its social implications on a big scale, civil aviation only mattered post-747, when people stopped going to films about glamorous airports and started actually flying. The mark of really successful revolutions is that they become normality.

Lyubov Popova said her greatest artistic achievement was seeing a peasant woman buying a bolt of one of her patterns to make a dress.

"whatever 4G standard emerges as victor in the upcoming war for WWAN preeminence."

A few months back Verizon Wireless announced they were going LTE, which meant that the CDMA/Qualcomm crowd no longer had their staple US national market, and that therefore the GSM world had triumphed. It was already clear that if you already had a GSM/UMTS network, you'd do LTE, so that's Europe and most of Asia and Africa and Latin America, and in the US, AT&T and T-Mobile. With VZW going that way, and Sprint-Nextel gone WiMAX, there's nothing left.

The standards wars are over and we won. Relatedly, and interestingly, everyone was quite surprised to see that VZW gave the first lot of contracts for the LTE build to...Alcatel and Ericsson. You know the motto: Europe. We're Still Here.


26:

The rising costs of the fabrication line do seem likely to produce a "natural monopoly" where eventually only one factory gets built and then everyone uses its products for the next 20 years until the global economy has grown enough that the capital for an even more eyebleedingly expensive fab line can be raised.

This primarily has consequences for coding, because it means the base assumptions of software and hardware design go completely topsy-turvy. Efficient use of processing cycles becomes the primary way to get more performance, and the money currently directed at building faster hardware will in large measure flow to whoever can write the cleanest, most efficient code.

27:

@23

4. Intellectual Property, Copyright and Content Ownership

You can pretty much guess what's going to bend and break. Most of IP today revolves around Copyright. It made sense when the act of making the copy was an expensive one, best left to the professionals of copy making. In a world where the act of making a copy of content costs effectively close to 0 (less than a milliwatt-hour of electricity, a few seconds) and can be done with tools that everyone has - sometimes in multiples, the concept of paying by the copy of something is going to take a beating. Market forces will bear relentlessly on that until the model breaks.

It might survive beyond 2030, who knows. But it's going to show major prosthetic appendages to support it at that point.

28:

Anssi Vanjöki of Nokia, meanwhile, said at our conference that technology is easy to forecast because they make it, but customers are really hard. Also, only 12% of user time on an N-series device is telephony/SMS; the rest is Web browsing, e-mail, camera, media playback, or applications. I.e. a PC use case.

29:

Great speech Charlie. I think I saw a Japanese anime, where the kids wore glasses that let them see stuff that was virtual and so forth (can't think of the name right now.) Pretty interesting concept. I feel, I feel an idea coming on...

30:

Very nice and informative. But please don't go the Sterling design-prof-and-prophet way and stop churning out very cool books.

31:

You would be visiting Seattle just as I head off for more jet lag :(.

On the demise of Moore's Law: there has to be another order of magnitude or so available in compiler and interpreter slop. In the past, software has gotten faster for free, so there hasn't been much incentive to optimize a lot of it (games may be an exception).

Compilers like ocamlc or Steel Bank Common Lisp show that it is possible in principle to have a reasonably civilized language run fast; you don't have to write in C (or, even worse, C++). The Java JIT systems show that even fairly small amounts of empirical data on code paths is useful in optimization. Once architectures and speeds become more stable, it should be worth spending a lot more effort on compilation and on integrating profiling data with optimization. This doesn't require someone to have a brilliant idea and invent a massively-parallel programming environment that doesn't suck, it just requires a lot of engineering work that currently isn't cost-effective.


32:

Has anyone come across any research or articles done on predictions from times just before breakthroughs have occurred? How long after the first powered flight, the first valve, the first antibiotic, laser, genetic modification was it that people started looking up and saying "Bugger me, that's going to change everything".

I ask because I'm rather more fond of N.N. Taleb's "things change in huge, unexpected leaps" view of progress and have been wondering for a while now whether there's a timeframe after which you can confidently dismiss possible world changing developments as, well, just another increment of the same old.

33:

Read the TOS:
The augmented memory (the "Memory") of the customer ("you") consists of a compilation of data, which may include, but is not limited to, images, graphics, audio, text and other sensory output (collectively "Content"). The Memory is the proprietary intellectual property of CerebroCorp (the "Service Provider"). The Service Provider hereby grants to you a revocable license to the Memory.

You acknowledge that the Service Provider and other providers of Content have rights in their respective Content under copyright and other applicable laws and treaty provisions, and that except as described in this TOS, such rights are not licensed or otherwise transferred by mere use of the Service. You accept full responsibility and liability for your use of any Content in violation of any such rights.

You acknowledge that providers of Content may, at any time, revoke any license or other rights that you may have in Content, in which case the applicable Content will be deleted from your Memory

34:

Great talk Charlie. You wonder what a name for this marvelous handheld gadget could be (if it needs one). In Germany, people refer to "mein Handy" when they talk about their cellphone/mobile. Suitably function-agnostic? I've read SF which talks about a "pad" as well.

35:

Re names for the gadget: "mobile" works even without the "-phone".

36:

This is the best thing I've read all day. Thank you.

Sidebar: This weekend someone asked me what kind of SF I write, and when I described to him the plots of the stories, he said: "Oh, so the near future. You know, that's dangerous. Very dangerous. There's a hundred-year sweet spot you should aim for, really, just because it's so dangerous."

I said: "You're right, it can be. Charlie Stross refers to that as the Black Swan Problem."

To which he replied: "...Who?"

...I was only too happy to explain.

37:

Jez @22 --

Didn't get it to be a physical object as eats electrons himself, did he?

Some guys I went to high school with managed to make money by coming up with a new RAM logic layout in the early 80s. That's not "chip design" or "processor design" as a whole.

Processor design starts with "where and how do we get sufficiently pure materials?", because what you can get constrains utterly what the circuitry guys can actually do, wanders through some ferocious process engineering -- Charlie's comment about soft xrays making your NIC a death ray applies to the lithography; current cutting-edge processes happen under water-equivalent because of optical limits of air -- and into the solid-state physics of the folks who take the abstract logical layout and turn it into a physical component layout; that involves management of *at least* power, thermal, and materials diffusion under current load to get right, and the automated systems aren't entirely up to that (and on the basis of the NP-completeness of the bin-packing problem, it could be a good long while), plus actually managing the process of making the thing, adding the conductive metal top layer (another horrible specialty cluster all by itself; complex tiny wires...), and sticking the thing in a durable package that allows it to be connected to something useful. (Note that Nvidia of all people managed to mangle that last step recently.)

So, sure; there are (good, widely-used) open-source design tools for the logic, and one really smart guy obviously can produce RISC processor logic. One smart guy can't begin to do the whole physics-on-up design to hand to a fab company like TSMC, and the fab company is doing about half the work of getting the chip to be practice instead of theory in that case. (This is one of Intel's real advantages; they can get everybody involved from the silicon refinery on up in a large room and make them talk to each other.)

Once you've got the whole stack, it really is a couple hundred people minimum.

38:

I'm a 50 yo female codger (a word that's very negatively biased towards aging, btw) who already resents that there are no longer any games that I really enjoy. The gaming community has pretty much already lost me as a customer -- and hubby is an IT manager for Playstation, so hey, we live off these games. ;^)

The Wii people think we all want to bowl and play tennis with idiotic avatars, the Playstation people think everyone either wants FPS games or to be cute little avatars, and the Xbox people never seemed to think about women at all.

An entire 50% of the population ignored by the gaming market. Sad, really.

39:

Brad DeLong just quoted you on the 1 Tb/s per channel limit. I'd point out that the normal visible light going through an optical fiber has a frequency of 400 THz so if we're dreaming about 20 years from now, we ought to be able to modulate that 400 THz signal at some rate over 100 THz and get 100 Tb/s without increasing the frequency.
I'm not saying it will be easy. Inducing that modulation is quite challenging. However, in a fundamental physics sense, we're a long ways from death rays at 100 THz bandwidths in fiber. (Microwave transmission is a different story though. There, you're really coming up against bandwidth limits.)

40:

Graydon @37: "Every polygon of the Propeller's mask artwork was made here at Parallax. We designed our own logic, RAMs, ROMs, PLLs, band-gap references, oscillators, and even ESD-hardened I/O pads." - from "About design quality"

I'd call that processor design.

Yes, there's a lot more to producing a chip and yes, if you want to be competing in the bleeding edge market of what's inside everyone's laptops, then you need to be pushing the performance boundaries and expanding what's possible. That takes an Intel-sized organisation to afford a new fab with new capabilities.

If you want to make a niche chip that can compete in a fairly big niche, then you can do it in your garage. It won't need to be physically cutting edge to be useful or profitable. Yes, it'll run through someone else's fab, but so what? Your market size might be small (3 million Basic STAMPs) or huge - there's six billion PIC microcontrollers out there, and they're hardly cutting edge technology.

41:

Donna, what would you like to be doing? Or will answering in public give the game away?
I've never got further than versions of 'staring into the fire'. But I always wondered what something like DNA's 'SS Titanic' would have been like: intelligent, witty but not CUTE.
(wish that he was still around and part of the conversation)

I don't think I would have gone for Charlie's 'Spooks' TM, but then I'm not a great joiner.

42:

Jez @39 --

It's an element of processor design, sure. It's not whole-processor processor design.

Division of labour is a real thing.

Leaving that aside, you were originally talking about open source design, and open source is completely dependent on three things to be useful: the activity has to be constrained by the number of effectively smart and interested people (certainly true in the case of chip design!); you have to be able to share solutions to common problems in a net-positive way (somewhat true in the case of chip design; you won't have the same layout or overall design but can certainly share logic blocks); and you have to be able to present complete build instructions, so someone else can build what you've got (true in software, not at all true in any kind of manufacturing, most especially including chip fabrication).

I'd be deeply, deeply surprised if the information needed to produce the mask artwork in the Propeller's case is all outside of people's heads. (Produce, not draw; the Gerber or whichever file will obviously allow it to be drawn.) It's this problem of transmitting the actual mechanism that makes open source hardware really tough until you have generalized fabricators at least as predictable as compilers. If you don't have that, you've got a social tradition around how to get stuff to actually work, and it's very tough to do the necessary solution sharing in that case.

43:

I just don't believe in VR glasses.

I mean obviously they are in some sense possible. They exist today. But, look:

For augmented reality glasses, you need at least one camera, two projector screens or whatever (output devices), a wireless network connection, and batteries to power all of these. And the result needs to be light enough to not cause massive headaches (which is to say: extremely light), and look good enough that people are willing to be seen in them.

(Incidentally, Charlie, do you really wear glass glasses? The overwhelming majority of spectacles these days are made of plastic, and are much lighter than their glass forebears, which were apparently quite uncomfortable by comparison.)

Small cameras suck. They have bad resolution, bad fields of view, poor dynamic range, etc. Current output devices (besides stuff like digital ink, which obviously has its own problems) have real issues with displaying in outdoor light conditions. Batteries remain heavy in comparison to their size. High throughput wireless network connections use significant amounts of power.

These problems are going to strongly limit the viability of early generations of VR glasses. Low uptake is going to mean low financial rewards for people trying to develop content aimed at VR glasses. It's a vicious cycle, and I just can't see them taking off.

I'd be willing to bet fairly significant amounts of money that 10 years from now, VR glasses won't be significantly more common than they are now. 20 years from now, I could believe that they'll be catching on if and only if there are good strides in the above problems based on other uses of those technologies (which, to be fair, is likely: there are lots of good uses for small, non-sucky cameras).

44:

Ian @21: Actually, those codgers probably will be spry. Far more so than people of the same age alive today. For a start, far fewer of them will have been smokers (my grandfather, in his eighties, can run rings around his few surviving peers; he was one of the tiny proportion of his generation who didn't smoke, even when the Army rations included cigarettes).

45:

Chris L@44: "Tiny" percentage of his generation who didn't smoke? This page suggests that about 35% of men, and 55% of women, in 1948, did not smoke, in the UK.

http://info.cancerresearchuk.org/cancerstats/types/lung/smoking/

46:

Ian Smith @21 >

Contains two errors: First off, you won't be going to the bank. In 2030, banks as physical entities will be going or have gone the way of secondhand book stores. After all, all they are is information transfer institutions.

Secondly, you obviously don't live in California. In 1980, when you saw someone walking down the street, talking loudly to themselves and gesticulating in a manner that was dangerous to passers-by, you knew you were dealing with a crazy homeless person. These days, it's a businessman taking a phone call on his handsfree phone.

All this technology is having a big effect on people's social behaviour.

Charlie -- a good, solid piece of extrapolative work.

47:

@ Michael B et al.
Isn't there a problem, not with the VR hardware, but with the nausea that comes with an argumentative inner ear?
But then I've only been in a 'cave', and that smelled of bad air con and feet (which didn't help). Goggles wouldn't smell, but would still give your sense of balance a run for its money, unless you stick to AR.

(Being a VR researcher who couldn't hack VR would be a bummer.)

48:

Great speech. Lots of interesting ideas in there.

One thing I think you've missed is the possibilities in robotics, especially home robotics (aka home computers). As I told a friend the other day, I want a little robot that works tirelessly to catch the slugs in my veg patch.

Then there is always the game changer, the one we didn't see coming.

user-pic
49:

Maggie @47: The concept of using VR glasses for augmented reality (as opposed to immersive, wholly virtual worlds) would, I think, make an end-run around the nausea problem. Visual movement would be congruent with physical movement, so no eye/inner-ear disagreement.

50:

Charlie,

Great speech. I like that your point was not dependent on exactly when Moore's Law gives out, or exactly what technologies are used for high-band wireless. What's important to note is that the augmented world is enabled by the user interface; again it's not technology that's the driver, but the way humans are designed, which isn't going to change before 2030.

But just because I can't let it go, I'd like to point out that there's one missing piece in your hardware scenario: end-user cost. The first generation of the personal gadget to be marketed after the device shrink hits the wall will be expensive for a given cpu power. But the semiconductor manufacturers will still be able to decrease cost for several more generations; if they can also decrease power consumption they can push cheaper silicon into cases with more components, since less space will be needed for thermal management and batteries. That means that even if you can't cram more transistors into a cpu chip, you can put more cpu chips in the gadget. That could translate into more kinds of application-specific circuitry per gadget at the same cost.

And there's still one variable that hasn't been tweaked much in cutting cost: the amount of fab lifetime used to process each chip. That can be decreased drastically by using new processing technologies like highly parallel atomic placement.

Graydon,

A great deal of the complexity of modern cpu chips comes from the extreme lengths designers have gone to in optimizing cpu usage for single execution threads (execution queues, parallel arithmetic processors, speculative execution engines, etc.). If we ever do get highly parallel symmetric multiprocessing to work well for consumer applications, most of that will be wasted real estate; more cores with less functionality on a chip will be more cost-effective. That makes the chips much easier to design, and much more regular too, so the tools can be more powerful. The real competitive advantage in cpu design will be in creating the basic circuits, which get replicated millions of times in each chip; small improvements get major leverage there.

51:

Where did the use of technology to aid development of technology disappear to? Where are the computers that can design their own successors?

Where are the improvements from software optimisation once Moore's Law plateaus?

Where's the skepticism about the continuation of Chinese and Indian growth? Where's the allowance for the rise of new powers, just as the ascendance of China and India would've been unthinkable 30 years ago?

Heck, where's the allowance for entirely new sectors to arise, just as the contemporary ICT sector would've been unthinkable 30 years ago? 30 years ago we couldn't have imagined Google Earth or the iPhone.

52:

When I see a person walking through downtown while wearing earbuds, my first impression of them is crime/accident victim waiting to happen. Muggers go after people wearing earbuds, not just because they know the person has sellable electronics but also because the person isn't paying attention. And people using portable media get into more accidents, because they aren't involved in what they're actually doing right now.

'Augmented reality' is marketing-speak for covering your perceptions with spam and spyware.

53:

I did research into AR a few years ago. All well and good; it has technical and interaction issues, so any attention that draws programmers to its problems is a good thing.

There is plenty of room in optimisation: if Croquet can already do realtime collaborative 3D environments on a PC over a 56K link, with a 10 meg executable interpreting Smalltalk code in real time, then I am sure we can do far, far better.

Deep parallel processing is a real issue; solving it is hard. Start with Danny Hillis' Connection Machine and its flaws.

54:

hiya Charles/room

just a fan hoping to get insight from like-minded folks on

a) what's the recommended reading order of the Stross library (or doesn't it matter)? I'm partial to future-shock firehose stuff, and by 'future-shock firehose' I mean dense, visionary, mind-shifting, fast-paced, hard-to-wrap-your-gray-meat-around tech/spec fiction, i.e. Accelerando or Diamond Age or similar stuff from Sterling/Stephenson/Gibson.

b) is there anyone else in the future-shock firehose business outside of the 3 St's (Stross/Sterling/Stephenson)? Nice bit of convenience there - all on the same shelf at the soon-to-be-obsolete bookstore/library.


Clearly, compelling characters/stories written by bright and talented folks with the unique combination of hard-science knowledge, classic sci-fi awareness, technology fascination/addiction, and originality are going to be rare. I'm just wondering if there's anybody else in the game who belongs at the same table.

I've pretty much digested the collective output of Sterling/Stephenson/Gibson, recently discovered Stross, and have also been studying the 'classics', aka Asimov/Clarke/Heinlein/Bradbury/Wells.

I was excited about finding Mirrorshades but it didn't live up to my expectations. I was hoping to find ten Burning Chrome-type thought-provokers, but found that the anthology focused more on 'style' and was sort of limited due to genre-definition.

Basically the above is an inefficient way to ask the room and/or the man himself 'what have you read recently that blew your mind?' and/or 'what/who should I add to my pure-text playlist?'

55:

Michael @45: OK, try "of his peers". The guys who got smokes in their pay packet. The guys he used to sell his to.

Even that 35% would have had secondhand smoke all day at work, too.

56:

Fantastic reading, Charlie; I too am a computer engineer and have over time paralleled much of this discourse on convergence on my own blog, humbly residing at http://thefowle.livejournal.com for the moment.

Your ubiquitous prognostications were pretty spot on, with one omission. The one crucial aspect I felt was missing is the ability to connect to an environment. Lugging around your own mini AR environment is growing technically possible, but it has limitations; if your host has a nice array of displays, a killer sound system, and a l33t Gibson supercomputer, being able to easily jack in to that environment is really a defining characteristic.

We've recently made headway on the "instant connection" front with DisplayPort/HDMI having onboard audio and soon USB/Ethernet respectively, but a) this eventually needs to be replaced with wireless, and b) socially we haven't really broken down the notion that a portable system is capable enough and something we'd want to use to interface to a network of home/home computer peripherals. As mobile power grows more capable of driving fixed-infrastructure IO, I am certain we'll start to see demand for better system composability creep upwards. The upcoming cell phone chips can drive 1080p, and that's really the first checkpoint for heavily leveraged pluggable IO becoming interesting.

Eventually I think it starts trending towards the really, really interesting questions of how you share IO: if you have a room full of people at an office presentation and each has a portable supercomputer, how do they contribute and collaborate in some kind of shared, cohesive computing environment? Some superb harbingers of the future have come to visit recently in this realm, Multi-Pointer X being by far the most obvious signpost on the road.

rektide@voodoowarez.com

57:

B @ 54:a. It mostly doesn't matter. You probably want to read Singularity Sky before Iron Sunrise, Atrocity Archives before Jennifer Morgue, and of course the Family Trade series is something unto itself; but you probably won't have many problems if you don't read them in order.

Adrian Forest @ 51: He does specifically say he's assuming that nothing really game-changing like that will happen. After all, most of those events are by definition impossible to foresee or predict beyond.

Andy @ 32: The first antibiotics had a very rapid uptake, IIRC (since the utility would have been very obvious to anyone even mildly knowledgeable about disease at that time). I believe that the laser, the genetically engineered organism, and the airplane took rather longer to see practical use, though.

58:

Webpages are more than just content; how about making it easier to read? A little CSS goes a long way.

59:

Adam P @ 53

To the best of my knowledge, the Croquet developers blew off solving the general synchronization problem in order to build first-generation working systems. But massively distributed systems won't scale very far without solving it, and it is very hard. Mostly because there is no general solution; we'll need to use a bag of partial solutions and very clever tricks, along with some heuristics about when to abandon them and what to do then.

The Connection Machine has a lot to teach us about parallel computation, but it won't solve the urgent software problem we're facing: how do we get existing applications, especially end-user applications like client-side user interfaces and local application computation (the stuff we need to deal with on the personal gadget even if we have an ideal cloud computing environment), to play nicely on multi-core platforms? Oh, and that includes the OS too.

I have my own pet solution: real object-oriented programming, where each object instance is capable of running single-threaded or multi-threaded, with the external message protocol providing the synchronized entry points. Then, if the applications are actually OO programs (a lot of existing ones aren't), they should automatically be provisioned onto whatever set of processors is available, and take advantage of as much parallelism as there are objects. Of course, C and C++ code will mostly be useless for this. And it does require that no object is allowed to directly read or modify another object's slots, which almost all OO languages allow, nay, insist be violated.
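
To make that concrete, here's a toy Python sketch of the shape I have in mind -- not a real framework, just an illustration: each instance owns a lock, the only way in is a message entry point that takes that lock, and no object touches another object's slots directly, so instances can be spread across however many threads or cores happen to be available.

import threading
from concurrent.futures import ThreadPoolExecutor

class SynchronizedObject:
    """Base class: each instance serializes its own message handling."""
    def __init__(self):
        self._lock = threading.Lock()

    def send(self, message, *args):
        """The synchronized external entry point: look up a handler and run it
        while holding this instance's lock."""
        handler = getattr(self, "handle_" + message)
        with self._lock:
            return handler(*args)

class Counter(SynchronizedObject):
    def __init__(self):
        super().__init__()
        self._count = 0  # a private slot, reachable only via messages

    def handle_increment(self, n=1):
        self._count += n
        return self._count

    def handle_value(self):
        return self._count

if __name__ == "__main__":
    counters = [Counter() for _ in range(4)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        # Instances share no slots, so they can be driven from many threads;
        # the available parallelism scales with the number of objects.
        for _ in range(1000):
            for c in counters:
                pool.submit(c.send, "increment")
    print([c.send("value") for c in counters])  # [1000, 1000, 1000, 1000]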

60:

Dude, please get some frames or space out this web page; it's very hard to read if you have a wide screen.

61:

shadow: are you aware that applications don't have to be maximised ALL THE TIME?

62:

Charlie

Nice summary, and it makes me realise how much background research you must have put into books like Accelerando.

I have to agree, especially with the 'interface' issues, as I already said here: http://blog.david.bailey.net/blog/_archives/2008/9/25/3900352.html

The processor curve is reasonable, but we might just see the processor core technology jump the rails from 'silicon substrates' before 2020, and that would give us some more options.

Modulated light (short ranges) as well as multiply connected wireless streams above 5GHz (longer ranges) can still give us more bandwidth (if we can route it, source it and store the data somewhere reasonably cheaply).

What we also need is a major uplift in battery technology. Power savings appear to be eaten up by processor and screen demands, as well as the 'hard physics' of handling all that wireless data. We really need a power generation and storage revolution as well as steady improvements to power consumption.

Finally, I have personal concerns about the 'single convergent point of damn failure' for my digital world. My preference would be for a 'personal smart network' of several highly optimised devices that are embedded in my clothing, skin, spectacles, watch, badges, etc. and which communicate locally. That way a failure in one (say the camera in my badge button) does not cripple everything (especially my location-aware augmented memory device!)

63:

Everything great, plausible and probably correct, except one thing: add in the human stupidity factor, which delays everything.

Just double all your dates and milestones from when you would expect them, and they'll happen eventually.

Until then be happy people and enjoy the ride.

64:

I can't help noticing that what we consider paralyzingly expensive for a fab plant is chump change for the financial bail out.
Changes in how we allocate capital (in other words, what long-term projects society chooses to fund) could change things.
So could reformation of education, which at this point is basically upgraded Industrial Revolution at the starting levels and upgraded Middle Ages at the top.
I also wonder about the combination of the ubiquitous net + transportation. All sorts of interesting gaming possibilities when your "car" can drive itself.

65:

I highly doubt that AR goggles will take off because people aren't good at having layers of stuff placed in front of their view when they're trying to walk across a busy street! One 'killer' app will result in the whole system being legislated into the stone ages.

However I'm sure that a '3D' console will hit sometime soon. I've seen some very clever videos of a guy who shows a really good 3D view on an ordinary sized TV using a current console and a tracking camera.

If you extrapolate that out to a wall sized screen then most people's living rooms will rapidly become VR/AR gaming areas.

I'd predict a rise in home workers who work in virtual offices. And fully immersive MMOs too.

66:

"Actually, those codgers probably will be spry. Far more so than people of the same age alive today. For a start, far less of them will have been smokers (my grandfather, in his eighties, can run rings around his few surviving peers; he was one of the tiny proportion of his generation who didn't smoke, even when the Army rations included cigarettes)."

I dunno -- on current trends their average weight is going to be about 30 stone...


"very hard to read if you have a wide screen."

Er. Make the window narrower perhaps?

67:

#21 - This has been handled in many different ways by meatspace RPGs already. People have 'abilities' that can be applied. Examples: An overweight player fighting someone fitter may call a 'fat break' in some systems to catch their breath. In others, there may be a "shield breaker" (or 'weapon breaker' or what-have-you) move that older characters (often played by older, perhaps less physically fit players) can use to help even the field.

Heck, golf has handicaps - why not IMMORPGs? (I for 'immersive')

68:

Scott @57, Shadow @59: my blog does not exist for your convenience. If you want warm furry buttons or frames, roll your own.

Everyone else: I'm travelling today, so unable to comment at length. May have more time tomorrow ...

69:

#63 - Walking across the street gets easier when your AR goggles pick up the chip attached to every vehicle/bike/pedestrian/lamppost/mailbox and highlight dangers in your field of view (or provide audio cues for those outside it). For extra fun, skin the world with a mod: steampunk street scene? Spaceport loading dock?

In the future the truly hardcore will be those who walk around *without* their built-in map/alert/info system. Option: view world source.

70:

Scott @57: You'll find your browser will allow you to set your own choice of font and size. Just go to the equivalent of Firefox -> Preferences -> Content -> Fonts & Colours and pick something you are comfortable with reading on screen. Mine's 13pt Palatino. You'll probably also have to stop sites from deciding for you. For a more complicated approach, Firefox (and probably some others too) allows you to roll your own CSS and use that stylesheet for viewing the web. I think it can be enabled on a per-site basis.

There is CSS on this site - I did it. It's optimised for accessibility, and for those people who know best what their eyes can read. After all, if I'd forced you to look at it in 13pt Palatino, you'd probably complain because your eyes are not my eyes.

71:

Should have made a mention of the possibility of graphene chip technologies, which would immediately revive the megahertz (possibly terahertz) wars.

72:

Great speech, the future is going to be weird.

One thing that could come in from left field is computers based on biological systems. Knowledge in neuroscience is increasing at an astounding pace and people are going to increasingly look to make technology out of that knowledge. But brains are very different from our current silicon chips, and sandboxing everything is not an efficient solution, so I'd expect a lot of future computational technology to not necessarily be faster or smaller than what we have now, just extremely different. This would also affect the economics: we have the best computers in the world in our heads right now, and they are made with nanotech and a few pounds of carbon and salty water.

Not that I think the singularity is approaching soon. The problem with human-style computers is that it takes 10-15 years to make them end-user compatible, let alone smart enough to design themselves. The first conscious computers are going to be very strange individuals who will require constant attention and patience merely to be communicative, let alone do anything useful.

73:

Shadow @59, in addition to Feòrag @68: even easier - make your browser window narrower and the text fits itself.

Haven't had a chance to read the speech yet, so I've nothing to say (not that I necessarily would).

74:

I'm keeping this short, because I have too much to say about too many people. Summary: don't knock the "aerospace engineer[s] from the class of '37."

They did not have your newfangled digital computers, and so had to do things the serious way, with analytical solutions to gnarly partial differential equations. Reality check: the wind tunnel at Caltech. There's a reason that the main auditorium at JPL is named after von Karman.

This weekend I expect to be talking to some of the "aerospace engineer[s] from the class of '37" at Caltech's annual Alumni Reunion extended weekend. These guys kicked ass, geek-wise. And read (and wrote) science fiction about spaceships in their so-called spare time.

75:

I was born in 1960 but I never played console games or had anything to do with computers until I went to college in 1996. Yeah, I know, a very late start. The point being, I'm not a compu geek, like the rest of you seem to be, so my take on this is somewhat different. A few points to consider:

I find these predictions to be very, very scary and very, very sad. Things are already bad enough now, with young people walking around plugged in instead of interacting with the physical (what you all know as meatspace) world. Imagine how much worse that will be twenty years from now. People will no longer care about the actual community they live in, only the online ones. It seems like the perfect setup for any would-be Caesar. Provide the people with their bread and (virtual) circuses and rule the real world with an iron fist.

You think obesity is bad now? When this brave new world comes into being it will skyrocket. When you can have a simulation of exercise why would you bother with the real thing?

I read once that more knowledge has been lost in the last fifty years, since the dawn of the computer revolution, than was lost in the five hundred years before. This is due to changing and evolving hardware and programs. (An aside: just today, in the parking lot at work, I found an old computer punch card that someone had written their shopping list on!) This problem seems destined to get worse. And what happens when all the world's knowledge is online? I guarantee that right now there's some seventeen-year-old in his parents' basement somewhere figuring out the next great malicious worm. He presses a button, and we're back to the stone age.

These technological advances seem destined to increase the gap between the haves and have-nots. Believe it or not, there are still a heck of a lot of people who are out in the cold because they can't afford the hardware or the access fees.

Lastly, it really bugs and bothers me that I, along with everyone else, will apparently be forced to give up the pleasure of reading an actual book.

Like I said, I'm not a compu geek, so I probably don't know what I'm talking about. I would be interested to hear any comments though.

berek

(Oh, and you darn kids stay off the grass.)

76:

@75 Berek, be careful, not all those pesky kids are kids ;)

and some of us are very interested in the stone age (whenever that finished and whatever exactly it comprised), and just use this newfangled telecommunications to keep in touch around the world. (Saves the walk.)

Alex @32: I think you'll find that electricity had a very long lead-in.

77:

JamesPadraicR@73: Worried now - the style sheet is supposed to do everything proportionally, so the various elements should take up a certain proportion of the window regardless of how wide the user has it. In the case of this page, there's only one element - the text - anyway, and it should automatically take up 100% of the available width.

78:

Closing comments due to a storm of spam. (May reopen them later.)

79:

Unlocked again. Let's see if the spammers have fucked off ...

80:

Closing comments due to a storm of spam.

Being linked on Slashdot usually has this effect.
81:

With the release of the mind-reading controller for computers, and the assumption that this can/will be a two-way thing, surely Macx-style specs are a bit dated already? Direct interface and response using a non-invasive miniaturized MRI scan (or however it works) is surely the way forward (with all of the terrifying consequences thereof).

82:

That was very good M. Stross. So good I'm nicking some of your observations so I can shine on in work meetings. Especially:

"Here in the world of human beings — call it monkeyspace — we are all primates who respond well to certain types of psychological stimulus."

That'll certainly render some otherwise interminable discussions down to the core point.

"Ahem, I think we've finally reached the monkeyspace moment here."

Have you considered writing for a living? If so, I reckon you could really give up your day job.

83:

Hoo boy, lots of to-think-abouts in there.

It occurs to me that the kind of high-bandwidth-available world you discuss is not going to be everywhere. We're going to see large chunks of the world where bandwidth availability will be severely constrained. In some places (e.g. Antarctica) this probably won't matter much as there won't be much demand, but it's going to be yet another hurdle for slum dwellers in São Paulo to need to get over. And it may turn out that, just as language abilities tend to stagnate as we get older, so may the ability to learn to use the broadband world. I'm not sure if this is a good or a bad thing, but it does suggest that we'll be in a world where some people are at 1980s levels of familiarity with games and the internet while others are at 1990s, 2000s etc. levels for some time to come.

PS Note to Feòrag: the stylesheet for this page has body text defined as "font-family: sans-serif;". This makes it vaguely tricky to read in a serif font such as Palatino Linotype, which I think is the font you mention you prefer (and which I prefer too).

84:

In some places (e.g. antarctica) this probably won't matter much as there won't be much demand, but its going to be yet another hurdle for slum dwellers in São Paulo to need to get over.

What makes you think America Moviles won't be rolling out LTE? (You can already get 100/100 to the bog in Brazil...for a price of course, but the economics of mobile is that a) it gets cheaper the thicker the urbanisation and b) once the nodes B are in place, much of the cost is fixed, so c) you price the service to go.) MTNL just turned up UMTS/xPA in Mumbai...

It's farm-dwellers in Amazonas, or for that matter the Yorkshire Dales, who are likely to be mobile connectivity-constrained, because of the backhaul issues - more trench mileage, not much demand to share the cost, and only one fat-arsed national telco supplier if you're lucky. The Aussies are so right about this.

85:

I'm an ASIC designer, and I've seen first-hand what happens when product lines hit their performance limits. All the competitors build chips with the exact same performance capabilities, the only distinguishing factor becomes whoever is cheapest, and the other side gets laid off. Way back when, we worked on CD burners, and our chips were always a little faster than the competition. But then we hit a limit: above 60x or so, the discs started self-destructing in the drives. When the competition started offering chips that had all the same performance numbers that we did, the company sold the CD division.

The consumer stuff I've worked on was 70nm or so. We didn't operate in big enough quantities to justify 40nm. But when the fabs all start cranking out 10nm stuff, and everyone's doing it, I'm guessing there's going to be a lot of consolidation.

The current job market for EEs like me is 1 ASIC design job for every 10 FPGA design jobs. I think what's going to happen is that when we've landed on the 10nm limit, established an outpost and built it into a colony, then 10nm FPGAs are going to start to proliferate. The FPGA can be built in massive numbers to justify the tapeout cost, and then small design runs can be done in the bitstream.

Either that, or someone will build a 10nm processor, a 10nm network box, a 10nm wireless box, a 10nm memory, and really, what else do you need? After that, it's the same processor and different software.

If anyone's hiring EEs, lemme know.

86:
In some places (e.g. antarctica) this probably won't matter much as there won't be much demand, but its going to be yet another hurdle for slum dwellers in São Paulo to need to get over.

What makes you think America Moviles won't be rolling out LTE? (You can already get 100/100 to the bog in Brazil...for a price of course, but the economics of mobile is that a) it gets cheaper the thicker the urbanisation and b) once the nodes B are in place, much of the cost is fixed, so c) you price the service to go.) MTNL just turned up UMTS/xPA in Mumbai...

Perhaps I picked my example wrongly. But I do believe that quite a bit of the benefit of the new high-bandwidth culture will require a lot more bandwidth than LTE or, IMO, any other non-directional wireless technology can offer. LTE, like all wireless technologies, is a shared medium, like ye olde CSMA/CD Ethernet. OK, so there are optimizations so that collisions are rare (think Token Ring vs Ethernet), but you still have a finite, relatively limited chunk of bandwidth that has to be shared among many. In a densely populated slum, 'many' will potentially be hundreds if not thousands. In a leafy residential zone it'll be more like dozens (and quite possibly it will be no one, because you'll have fiber to the residence and a wireless link inside that is used by you alone).

Wireless also hits the issue that distance ruins signal strength and hence data rates. Take UMTS or CDMA: if you happen to be near the transmitter you get multi-megabits of available bandwidth; go 50-100m away with a wall or two in the way and you're lucky to get 100kb/s. The same is bound to apply to future technologies even if you manage to come up with directional radio, because attenuation is just one of those things - the only way to get out of it is lasers in a vacuum.

Unless someone rolls out FTTH to the favelas, I think they are still going to be bandwidth-starved, and that is going to impact what they can do in the new almost-ubiquitous broadband world.
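
To put rough, purely illustrative numbers on both effects -- the shared-medium division and the distance attenuation -- here's a quick Python sketch using a naive fair-share split of cell capacity and the textbook free-space path loss formula (real deployments are messier, but the shape of the problem is the same):

import math

def per_user_throughput_mbps(cell_capacity_mbps, active_users):
    """Crude fair-share estimate for a shared radio cell."""
    return cell_capacity_mbps / max(active_users, 1)

def free_space_path_loss_db(distance_m, frequency_hz):
    """Free-space path loss, 20*log10(4*pi*d*f/c), in dB; walls make it worse."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * frequency_hz / c)

if __name__ == "__main__":
    # One sector offering ~100 Mb/s, shared by a leafy suburb vs. a dense slum:
    for users in (20, 500):
        print(f"{users:4d} active users -> ~{per_user_throughput_mbps(100, users):.1f} Mb/s each")
    # Path loss at 2.1 GHz (UMTS-ish), near the mast vs. 1 km out:
    for d in (50, 1000):
        print(f"{d:5d} m -> {free_space_path_loss_db(d, 2.1e9):.0f} dB free-space loss")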

87:

An interesting followup to this would be a look at the security and privacy implications of all this. When you have a 'mobile' that knows where you are, where you're going, where you've been, what you're looking at and hearing, what you've been buying, what you own and where it is.... Who else is going to want access to that data? Governments, police and thieves.

Does your shopping list contain ingredients that could make a bomb? Have you been meeting with 'suspect' people? Do you buy the 'wrong' kind of books, download the wrong kinds of movie? I can see governments getting very interested in this data.

What happens if your mobile is stolen? Or silently wirelessly hacked? Do you own a ferrari? Where is it parked and where are the keys? What's in your house and what's the layout, where are the valuables? You might never know why your stuff has vanished. Though perhaps the real targets might be your credit and bank accounts...

Scary!

88:

1984 @87: one point to note is that most criminals are extremely stupid -- poorly educated and with poor impulse control. Yes, there'll be master criminals doing fiendish and sophisticated identity theft scams, but for the most part your stolen mobile will be sold down the pub for a pack of cigarettes, wiped, and sold on to a dumb 12 year old who dropped theirs in the bath and doesn't have insurance.

And that's assuming that there isn't a remote IMEI-based kill switch coupled to biometric authentication before anyone can use the thing, which is a questionable assumption.

We have all the tools, already, to make mobile phone theft unattractive.

Government/police mission creep is a much more plausible threat, and that's one that has already emerged -- on the more restricted scale required by today's more restricted devices.

89:


Video glasses are kind of trite.

How about a display eyeshade just above your ordinary line of sight. It matches people's habit of looking upward when they're trying to recall something. When looking straight ahead, information would seem to float over people's heads, for ready access at a glance. It also provides a good place to mount a camera and microphone, without blocking eye contact.

You could still pull it down where someone has installed full localization for AR - say to look at a store display - and pop it back up so no one can hack your vision as you're walking around.

Codgers: I'm kind of hoping that in ~20 years we'll have solved at least a FEW problems of aging - Alzheimer's and frailty would be nice to be rid of.

90:

The VR aspect could be in for a step change: google 'wetware'. You won't need VR glasses when you're plumbed in!

91:

I think you are missing the boat on the aging gamer population.
The real treat is going to be remotely piloting submersible drones, and mining asteroids for fun 'n' profit.
That 1lb iRobot is going to be the start of this, and when we are too old to hike much, we will just take out our remotes.

Cool, and we get to check off stuff on our bucket list.

92:

Okay, the spammers are still bombarding this posting like crazy -- it was slashdotted; go figure.

Comments closed, folks. If you want to continue, yell at me on the next thread and I'll give you an open discussion thread.