
YAPC::NA 2014 keynote: Programming Perl in 2034

This is the keynote talk I just gave at YAPC::NA 2014 in Orlando, FL.

YouTube video below: click the link below to read the full text instead.

Keynote talk for YAPC::NA 2034

No, that's not right ...

This should be titled Keynote talk for YAPC::NA 2014. What's up with the title?

Obviously I must have had some success with the experiment on applied algorithmic causality violation — that's time travel as applied to computing — that I was thinking about starting some time in the next twenty years, in my next career, as a card-carrying Mad Scientist.

Or maybe that was some other me, in some other parallel universe.

But this isn't the file I remember writing, it's some other talk about a conference that hasn't happened yet and probably won't happen now if I read you their keynote. It's probably not a good idea to read it to you — we wouldn't want to cause any temporal paradoxes, would we? So I'm not going to go there — at least, not yet. Before we take a look at the state of Perl in 2034, we need to know where we stand today, in 2014. So please allow me to start again:

The world in 2014

Back in the 1990s I used to argue with Perl for a living. These days I'm no longer a programmer by profession: instead, I tell lies for money. I'm a science fiction writer. As my friend and fellow-writer Ken MacLeod observes, the secret weapon in science fiction's armory is history. So I'd like to approach the subject of this keynote sideways, by way of a trip down memory lane, from the year 2014 — late in the English summer afternoon of the computer revolution, just before the sun set — all the way back to 1914.

To the extent that the computing and information technology revolution is a late 20th and early 21st century revolution, we can draw some lessons about where it may be going by observing the trajectory of one of the other major technological revolutions that came before it — the mass transportation revolution.

Like all technological revolutions, the development of computers followed a sigmoidal curve of increasing performance over time. Each new generation of technology contributed to the next by providing the tools and machines needed to bootstrap their successors.

The computer revolution started slowly enough, but the development of the transistor galvanized it, and the integrated circuit, and its offspring, the monolithic processor-on-a-chip, up-ended the entire board game. Over a fifty year period, from roughly 1970 to 2020, we grew so accustomed to Moore's Law — the law that the transistor count of a dense integrated circuit doubles roughly once every two years — that we unconsciously came to mistake it for a law of nature. But in reality, it was nothing of the kind: it merely represented our ability to iteratively improve a process governed by physics until it converged on a hard limit.

In the case of Moore's law, the primary governing constraint was electrical resistivity. As you shrink the length of a circuit, the resistance decreases: you can use lower voltages, or lower current flows, and run at a higher frequency. Physically smaller circuits can be made to switch faster. We build smaller integrated circuits by increasing the resolution of the lithographic process by which we print or etch surface features. But we are doomed to run into the limits of physics. First, we lose energy as heat if we try to switch too fast. Second, current leakage becomes a problem as our circuits become smaller. And third, at the end of the day, we're making stuff out of atoms, not magic pixie dust: it's not obvious how to build circuits with tracks less than one atom wide.

Similarly, if we look back to an earlier century we can see that the speed and cost of mass transportation followed a similar sigmoid development curve between roughly 1830 and 1970.

And for me, one of the most interesting things about this sort of technological revolution is what happens after we hit the end of the curve ...

Addressing YAPC::NA in 2014 I feel a lot like a fat, self-satisfied locomotive boiler designer addressing a convention of railway design engineers in 1914. We've come a long way in a relatively short period of time. From the first steam locomotive — Richard Trevithick's 1804 Merthyr Tydfil Tramroad engine — to 1914, steam locomotives surged out of the mines and ironworks and onto permanent roads linking cities all over the world, crossing the American continent from east to west, reaching the dizzy speed of almost a hundred miles per hour and hauling hundreds of passengers or hundreds of tons of freight.

Speaking from 1914's perspective, it is apparent that if the current rate of improvement in the technology can be maintained, then steam locomotion has a bright future ahead of it! We can reasonably expect that, by 2014, with improvements in signaling and boiler technology our 200 mile per hour passenger trains will constitute the bedrock of human transport, and we, as boiler engineers, will be the princes of industry.

Pay no attention to those gasoline-burning automobiles! We can safely ignore them. They're inefficient and break down all the time; away from the race track they're no faster than a horse-drawn carriage — cobblestones and dirt trails hammer their suspensions, quite unlike our steel rails lying on carefully leveled sleepers — and the carnage that results when you entrust motorized transport to the hands of the general public is so frightful that it's bound to be banned.

As for the so-called aeroplane, it's a marginal case. To make it work at all requires an engine that can produce one horsepower per pound of weight — a preposterous power to weight ratio — and its ability to carry freight is marginal. We might eventually see an aeroplane that can fly for a hundred miles, at a hundred miles per hour, carrying up to a ton of mail or a dozen passengers: but it will never displace the mature, steadily improving technology of the steam locomotive from its position at the apex of mass transportation.

So, that's the view from 1914. What actually happened?

Well, as it happens, our locomotive boiler-maker was absolutely right: 200 mph steam-powered trains are the backbone of passenger transportation.

Admittedly the steam is heated in Électricité de France's nuclear reactors and the motive power conveyed to the trains by overhead electrical wires — the French aren't stupid: nothing makes a steam boiler explosion worse than adding fifty tons of reactor-grade uranium to the problem — but it's not too much of a stretch to say that the European and Chinese high speed rail networks are so efficient that they're taking passengers away from low cost airlines on routes of less than 500 miles.

But in places where we don't have a determined government building out the infrastructure to support shiny 200mph atomic-powered trains, or where we have to travel more than about 500 miles, airliners ate the railways' lunch. The steam engines of 1914 and their lineal descendants were nowhere near the theoretical limits of a Carnot heat-cycle engine, nor were they optimized for maximum efficiency in either power output or weight. Gas turbines offered a higher power density and lower weight and made long-haul air travel feasible. At the same time, the amount of infrastructure you need to build at ground level to support a given air route — namely two airports — is pretty much constant however far apart the airports are, whereas the cost of railroad tracks scales linearly with the distance. A 2000 mile railroad route costs at least ten times as much as a 200 mile railroad route, and takes ten times as long to traverse. Whereas a 2000 mile plane journey — given jet airliners traveling at 500 mph — costs no more to build and little more to operate than a 200 mile route. Furthermore, a big chunk of the duration of any airline flight is a fixed overhead, the latency imposed by pre-flight boarding and post-flight unloading. Assuming two hours at the start and one hour at the end of the journey, a 2000 mile flight may take seven hours, only twice the duration of a 200 mile flight. So air wipes the floor with rail once we cross a critical time threshold of about three hours.
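For the spreadsheet-inclined, the arithmetic above can be sketched in a few lines. The 500 mph cruise, 200 mph trains, and three-hour air overhead come from the paragraph above; the half-hour rail overhead is my own stand-in figure, so treat this as an illustration rather than a timetable:

```python
# Total journey time = fixed terminal overhead + distance / cruise speed.
# Air and rail numbers are illustrative, taken from the text above
# (the 0.5 h rail overhead is an assumed figure).

def air_hours(miles, cruise_mph=500, overhead_h=3.0):
    """Flight time plus the fixed boarding/unloading overhead."""
    return overhead_h + miles / cruise_mph

def rail_hours(miles, cruise_mph=200, overhead_h=0.5):
    """High-speed rail: small station overhead, slower cruise."""
    return overhead_h + miles / cruise_mph

for miles in (200, 500, 2000):
    print(f"{miles:4d} mi: rail {rail_hours(miles):4.1f} h, "
          f"air {air_hours(miles):4.1f} h")
```

Run it and the crossover falls out: rail wins comfortably at 200 miles, the two are close at 500 miles, and at 2000 miles the seven-hour flight wipes the floor with a ten-and-a-half-hour train ride.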

As for automobiles, our railroad engineer of 1914 overlooked their key advantage: flexibility. It turns out that many people find personal transport to be more valuable than fast or efficient transport. So much so, that they were willing to pay for an unprecedented build-out of roads and a wholesale reconstruction of cities and communities around the premise of mass automobile ownership. At which point the cobblestones and dirt trails were replaced by concrete and tarmac, driver and vehicle licensing laws were enacted, and cars got a whole lot faster and safer.

Mind you, even as the steam locomotive fell into eclipse, it wasn't all plain sailing for the aircraft and automobiles. Today's airliners actually fly more slowly than the fastest passenger airliners of 1994. It turns out that physical limits apply: we are constrained by the energy density of our fuels and the ability of our airframes to deal with the thermal stress of high speed flight through air.

Concorde, the type specimen of the supersonic airliner, was a gorgeous, technologically sophisticated, white elephant that, in the end, couldn't compete economically with aircraft that flew at half the speed but consumed a fifth as much fuel per seat. Concorde, in service, crossed the Atlantic in three hours, with a hundred passengers, while burning a hundred tons of jet fuel. A Boeing 747 would take twice as long, but could fly twice as far with nearly five times as many passengers on the same fuel load.

Automobiles have more subtle limitations, imposed largely by our requirements for safety. They operate in close proximity to other people and vehicles, not to mention large animals: they have to be able to protect their precious cargo of passengers from the forces of impact if something goes wrong, while not imposing undue safety externalities on innocent bystanders. Furthermore, they have to be manually controlled by poorly-trained and absent-minded ordinary people. We have speed limits on our highways not because we can't build 200 mph cars — we can — but because we can't reliably train all our drivers to be as safe as Michael Schumacher at 200 mph.

Now, the fact that we don't have 200 mph automobiles in every garage, or Mach 4 SSTs at every airline terminal, or 200 mph nuclear-powered express trains on Amtrak, shouldn't blind us to the fact that the mass transportation industry is still making progress. But the progress it's making is much less visible than it used to be. It's incremental progress.

For example, the first-generation Boeing 747 jumbo jet, the 747-100, carried roughly 400 passengers and had a maximum range of just over 6000 miles. Today's 747-8 can fly 50% further on 30% more fuel, thanks to its more efficient engines, with 460 passengers in equivalent seating. Other airliners have become even more efficient. With Pratt & Whitney and Rolls-Royce now moving towards commercialization of geared turbofan engines, we can expect to see up to 30% greater efficiency in the jet engines of airliners in service in the next 30 years. But 30 years is also the span of time that once separated the Wright Flyer from the Douglas DC-3, or the Spitfire from the SR-71.
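Those figures hide how large the per-seat improvement really is. Here's a rough seat-miles-per-fuel comparison built only from the numbers quoted above, with the 747-100's fuel load normalized to 1.0 — illustrative arithmetic, not official Boeing data:

```python
# Seat-miles delivered per unit of fuel, from the figures in the text:
# 747-8 = +50% range, +30% fuel, 460 vs 400 passengers.
# Fuel is normalized so the 747-100's load is 1.0 (an assumption).

def seat_miles_per_fuel(passengers, range_miles, fuel_units):
    return passengers * range_miles / fuel_units

b747_100 = seat_miles_per_fuel(400, 6000, 1.0)
b747_8   = seat_miles_per_fuel(460, 6000 * 1.5, 1.3)

print(f"747-8 delivers {b747_8 / b747_100:.2f}x the seat-miles "
      f"per unit of fuel of the 747-100")
```

That works out to roughly a third more seat-miles per unit of fuel — exactly the kind of quiet, unglamorous gain that incremental engineering delivers.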

(Incidentally, I'm going to exclude from this discussion of incremental change the implications of the Tesla Model S for the automobile industry — an electric car that people actually aspire to drive — or Google's self-driving car project, or Volvo's equivalent. These are properly understood as developments emerging from the next technological revolution, the computing and information field, which is still undergoing revolutionary change and disrupting earlier industries.)

The point I'd like to emphasize is that, over time, a series of incremental improvements to a mature technological field — be it in engine efficiency, or safety design, or guidance technology — can add up to more than the sum of its parts. But it's nothing like as flashy or obvious as a doubling of performance every two years while a new technology is exploding towards the limits physics imposes on what is possible. Linear improvements look flat when they follow an exponential curve, even if they quietly revolutionize the industry they apply to.

And that, I think, is what the future of the computing industry looks like in 2014.

2014: the view forward

As of 2014, we're inching closer to the end of Moore's Law. It seems inevitable that within the next decade the biennial doubling of performance we've come to expect as our birthright will be a thing of the past.

We had a brief taste of the end of the exponential revolution in the early noughties, when the clock frequency wars that had taken us from 33MHz 80386s to 3GHz Pentium 4s in just one decade ended, killed by spiraling power consumption and RF interference. There will come a point some time between 2020 and 2030 when we will no longer be able to draw ever finer features on our atomically perfect semiconductor crystals because to do so we'd need to create features less than one atom wide. For a while progress will appear to continue. We will see stacked architectures with more and more layers plastered atop one another. And we'll see more and more parallelism. But the writing is on the wall: barring paradigm shifts such as the development of a mass-producible room temperature quantum computing architecture, we're going to hit the buffers within the next few years.

An interesting side-effect of Moore's Law, seldom commented on, is that each successive generation of chip fab — necessary in order to produce circuit features at ever finer resolutions — also doubles in price. Once the manufacturers of the highly specialized equipment that goes into fab lines can no longer up-sell Intel and the other foundries on new hardware, there are going to be interesting repercussions. We may see a vast shake-out in the hardware side of the manufacturing business. For example, in aerospace, between 1965 and 1975 roughly half the US aerospace engineering workforce found themselves thrown out of work. Or we may see a short-lived commodification of semiconductor manufacturing plant, as the suppliers desperately compete to stay in business and the cost of a new factory drops by an order of magnitude. Either way, once the manufacturing costs of the factories are amortized we can look forward to the commodification of the chips themselves. There seems to be no market-imposed lower floor to the price of computing machinery: that is, the cheaper we can make chips, the more uses we can find for them.

At the same time, improvements in the efficiency of microprocessors at any given lithographic scale may continue for some time. Power consumption can be cut. Incremental design improvements can be applied. A 64-bit ARM core from 2034, made using a 7-nm process, will undoubtedly out-perform a 7-nm 64-bit ARM core from 2020, both on energy efficiency and manufacturing cost — both factors in the all-important total cost of ownership per MIP.

But by 2034 the kind of progress we see in hardware will resemble the slow, incremental improvements in the transportation industry of today rather than the wildly surging sigmoid curve we experienced during the hey-day of the semiconductor revolution.

And we're going to be dealing with a world full of ubiquitous dirt-cheap low-powered microprocessors with on-die sensors and wireless networking, which remain in production for decades because there is no prospect of a faster, cheaper, better product coming along any time soon.

2034: The view backward

Okay, so I'm eventually going to give you a digest of what I found in the YAPC keynote that my time-travelling future self sent me from 2034.

But first, having taken a look at the world of 1914, I'd now like you to bear with me as I describe the experiences of an earlier me, visiting the world of today by time machine from 1994. Then we're going to borrow his time machine and visit the world of 2034 together.

The world of 2014 actually looks a lot like 1994. And this shouldn't surprise us. Change is gradual. Most of the buildings around us today were already here 20 years ago. Most of the people around us were already alive back then, too. The world of 2014 is a wrapper around the smaller, less complicated world of 1994, adding depth and texture and novelties. And so my earlier self, visiting from 1994, would have found lots of things about the future unsurprisingly familiar.

My 1994 self would have been utterly underwhelmed by the automobiles and airliners and architecture and fashion changes visible in 2014. After all, these are ephemera that follow constant — if unpredictable — trajectories. The appearance of URLs in adverts everywhere might have made 1994-me raise an eyebrow — the world wide web was new and shiny and kinda slow and clunky in 1994 — but it was at least a thing and I was aware of it, so predicting that it would have spread like weeds would have been an easy target. Nor would the laptops everyone here is carrying have been particularly remarkable. They're slimmer, shinier, cheaper, and much more powerful than the laptop I owned in 1994, but they're not a fundamentally different type of object.

What would have weirded 1994-me out about the 2014-present would have been the way everyone walks around staring at these little glowing slabs of glass as if they're windows onto the sum total of human knowledge. Which, after all, they are. Yes, the idea of ubiquitous wireless networking and pocket computers with touchscreens that integrate cellular phone services with data is the kind of thing that trips off the tongue and any vaguely tech-savvy science fiction writer from 1994 could be expected to understand. But that such devices are in every hand, from eight years old to eighty, would be a bit of a reach. We tend to forget that in the early 1990s, the internet was an elite experience, a rare and recondite tool that most people had little use for. 1994 was still the age of CompuServe and AOL — remember AOL, that's kind of like a pre-internet version of Facebook? Computers were twenty years newer than they are today: older folks didn't know how to type, or use a mouse, and this was normal.

But the mere existence of smartphones would only be the start of it. The uses people made of their smartphones — that would be endlessly surprising. Cat macros. Online dating websites. Geocaching. Wikipedia. Twitter. 4chan.

If 1994-me had gotten onto 2014 Twitter, that would have been an eye-opener. The cultural shifts of the past two decades, facilitated by the internet, have been more subtle and far-reaching than 1994-me would have imagined. Briefly: the internet disintermediates people and things. Formerly isolated individuals with shared interests can form communities and find a voice. And once groups of people find a voice they will not be silenced easily. Half the shouting and social upheaval on the internet today comes from entrenched groups who are outraged to learn that their opinions and views are not universally agreed upon; the other half comes from those whose silence was previously mistaken for assent.

Once technologies get into the hands of ordinary people, nobody can even begin to guess where they're going to end up, or what kind of social changes they're going to catalyze. The internet has become a tool for revolutions, from Egypt to Yemen by way of Ukraine; it's also a tool for political repression.

(And I'm straying off-topic.)

Now, let's go and borrow that time machine and take a look at 2034.

2034 superficially looks a lot like 2014, only not. After all, most of 2034 is already here, for real, in 2014.

The one stunningly big difference is that today we're still living through exponential change: by 2034, the semiconductor revolution will have slowed down to the steady state of gradual incremental changes I described earlier. Change won't have stopped — but the armature of technological revolution will have moved elsewhere.

Now for a whistle-stop tour of 2034:

Of the people alive in 2014, about 75% of us will still be alive. (I feel safe in making this prediction because if I'm wildly wrong — if we've undergone a species extinction-level event — you won't be around to call me on my mistake. That's the great thing about futurology: when you get it really wrong, nobody cares.)

About two-thirds of the buildings standing in 2034 are already there in 2014. Except in low-lying areas where the well-known liberal bias of climatological science has taken its toll.

Automobiles look pretty much the same, although a lot more of them are electric or diesel-electric hybrids, and they exhibit a mysterious reluctance to run over pedestrians, shoot stop lights, or exceed the speed limit. In fact, the main force opposing the universal adoption of self-driving automobiles will probably be the Police unions: and it's only a matter of time before the insurance companies arm-wrestle the traffic cops into submission.

Airliners in 2034 look even more similar to those of 2014 than the automobiles. That's because airliners have a design life of 30 years; about a third of those flying in 2034 are already in service in 2014. And another third are new-build specimens of models already flying — Boeing 787s, Airbus A350s.

Not everything progresses linearly. Every decade brings a WTF moment or two to the history books: 9/11, Edward Snowden, the collapse of the USSR. And there are some obvious technology-driven radical changes. By 2034 Elon Musk has either declared bankruptcy or taken his fluffy white cat and retired to his billionaire's lair on Mars. China has a moon base. One of Apple, Ford, Disney, or Boeing has gone bust or fallen upon hard times, their niche usurped by someone utterly unpredictable. And I'm pretty sure that there will be some utterly bizarre, Rumsfeldian unknown-unknowns to disturb us all. A cure for old age, a global collapse of the financial institutions, a devastating epidemic of Martian hyper-scabies. But most of the changes, however radical, are not in fact very visible at first glance.

Most change is gradual, and it's only when we stack enough iterative changes atop one another that we get something that's immediately striking from a distance. The structures we inhabit in 2034 are going to look much the same: I think it's fairly safe to say that we will still live in buildings and wear clothes, even if the buildings are assembled by robots and the clothes emerge fully-formed from 3D printers that bond fibres suspended in a liquid matrix, and the particular fashions change. The ways we use buildings and clothes seem to be pretty much immutable across deep historical time.

So let me repeat that: buildings and clothing are examples of artifacts that may be manufactured using a variety of different techniques, some of which are not widespread today, but where the use-case is unlikely to change.

But then, there's a correspondingly different class of artifact that may be built or assembled using familiar techniques but put to utterly different uses.

Take the concrete paving slabs that sidewalks are made from, for example. Our concrete paving slab of 2034 is likely to be almost identical to the paving slab of 2014 — except for the trivial addition of a dirt-cheap microcontroller powered by an on-die photovoltaic cell, with a handful of MEMS sensors and a low power transceiver. Manufactured in bulk, the chip in the paving slab adds about a dollar to its price — it makes about as much of a difference to the logistics of building a pavement as adding a barcoded label does to the manufacture and distribution of t-shirts. But the effect of the change, of adding an embedded sensor and control processor to a paving stone, is revolutionary: suddenly the sidewalk is part of the internet of things.

What sort of things does our internet-ified paving slab do?

For one thing, it can monitor its ambient temperature and warn its neighbors to tell incoming vehicle traffic if there's a danger of ice, or if a pot-hole is developing. Maybe it can also monitor atmospheric pressure and humidity, providing the city with a micro-level weather map. Genome sequencing is rapidly becoming the domain of micro-electromechanical systems, MEMS, which as semiconductor devices are amenable to Moore's law: we could do ambient genome sequencing, looking for the tell-tale signs of pathogens in the environment. Does that puddle harbor mosquito larvae infected with malaria parasites?

With low-power transceivers our networked sidewalk slab can ping any RFID transponders that cross it, thereby providing a slew of rich metadata about its users. If you can read the unique product identifier labels in a random pedestrian's clothing you can build up a database that identifies citizens uniquely — unless they habitually borrow each other's underwear. You can probably tell from their gait pattern if they're unwell, or depressed, or about to impulsively step out into the road. In which case your internet-of-things enabled sidewalk can notify any automobiles in the vicinity to steer wide of the self-propelled traffic obstacle.

It's not just automobiles and paving slabs that have internet-connected processors in them in 2034, of course. Your domestic washing machine is going to have a much simpler user interface, for one thing: you shove clothing items inside it and it asks them how they want to be washed, then moans at you until you remove the crimson-dyed tee shirt from the batch of whites that will otherwise come out pink.

And meanwhile your cheap Indonesian toaster oven has a concealed processor embedded in its power cable that is being rented out by the hour to spammers or bitcoin miners or whatever the equivalent theft-of-service nuisance threat happens to be in 2034.

In fact, by 2034, thanks to the fallout left behind by the end of Moore's law and its corollary Koomey's law (that power consumption per MIP decreases by 50% every 18 months), we can reasonably assume that any object more durable than a bar of soap and with a retail value of over $5 probably has as much computing power as your laptop today — and if you can't think of a use for it, the advertising industry will be happy to do so for you (because we have, for better or worse, chosen advertising as the underlying business model for monetizing the internet: and the internet of things is, after all, an out-growth of the internet).
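To get a feel for the compounding involved, here's what two more decades of Koomey's law would imply if the trend held that long (which, per the argument above, it won't, quite) — illustrative arithmetic only:

```python
# Koomey's law as paraphrased above: energy per computation
# halves every 18 months (1.5 years). Illustrative projection only.

def koomey_factor(years, halving_years=1.5):
    """Fraction of today's energy a computation needs after `years`."""
    return 0.5 ** (years / halving_years)

# The two decades from 2014 to 2034, if the trend continued unbroken:
factor = koomey_factor(20)
print(f"energy per computation falls to {factor:.1e} of today's figure")
```

That's roughly a ten-thousandfold reduction — which is why "a solar cell on the die and a processor in every paving slab" stops sounding like science fiction even if the curve flattens well short of the full projection.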

The world of 2034 is going to superficially, outwardly, resemble the world of 2014, subject to some obvious minor differences — more extreme weather, more expensive gas — but there are going to be some really creepy differences under the surface. In particular, with the build-out of the internet of things and the stabilization of standards once the semiconductor revolution has run its course, the world of 2034 is going to be dominated by metadata.

Today in 2014 we can reasonably expect to be tracked by CCTV whenever we show our faces in public, and expect any photograph of us to be uploaded to Facebook and tagged by location, time, and identity using face recognition software. We know our phones are tracking us from picocell to picocell and, at the behest of the NSA, can be turned into bugging devices without our knowledge or consent (as long as we're locked out of our own baseband processors).

By 2034 the monitoring is going to be even more pervasive. The NETMIT group at MIT's Computer Science and Artificial Intelligence Lab are currently using WiFi signals to detect the breathing and heart rate of individuals in a room: wireless transmitters with steerable phased-array antennae that can beam bandwidth through a house are by definition excellent wall-penetrating radar devices, and just as the NSA has rooted many domestic routers to inspect our packets, so we can expect the next generation of spies to attempt to use our routers to examine our bodies.

The internet of things needs to be able to rapidly create dynamic routing tables so that objects can communicate with each other, and a corollary of that requirement is that everything knows where it is, who it belongs to, and who has permission to use it. This has good consequences and bad consequences.

Shoplifting and theft are going to be difficult to get away with in a world where unsold goods know when they're being abducted and call for help. That's good. Collapsing and dying of a stroke in your own home may also become a rare event, if our environment is smart enough to monitor us for anomalous behavior indicative of a medical emergency.

On the other hand, do you really want your exact pattern of eye movements across the screen of your smartphone to be monitored and analyzed, the better to beam tailored advertisements into your peripheral field of vision while you check your email? Or every conversation you have in any public space within range of a microphone to be converted via speech-to-text, indexed, and analyzed by the NSA's server farms for the Bayesian spoor of conspiracy? Or for your implanted cardiac defibrillator to be rooted and held to ransom by a piece of malware that doesn't know it's running on a life-critical medical device?

Arguably, these are the paranoid worries of a poopy-head from 2014, not a savvy native of 2034 who's had two decades to get used to the emergence of these new phenomena. To an actual denizen of 2034, one who's been sitting in the steadily warming saucepan of water for two decades, the concerns will be different.

The worst thing about the internet of things is that it's built atop the seventy-year-old bones of ARPAnet. It's insecure by design, horribly flawed, and susceptible to subversion. Back in the early days, national security bureaucrats deliberately weakened the protocols for computer-to-computer communications so that they could monitor at will, never quite anticipating that the network would become so fundamental to our entire civilization that, by doing so, they were preparing the field for entire criminal industries and rendering what should have been secure infrastructure vulnerable to what is unironically termed cyber-attack. Vetoing endpoint encryption in TCP might have seemed like a good idea in the early 1980s, when only a few hundred thousand people — mostly industry professionals and scientists — were expected to use the internet, but it's a disaster when your automobile needs a reliable, secure stream of trusted environment data to tell it that it's safe to turn the next corner.


We hit the buffers at the end of the railroad track of exponentially accelerating semiconductor tech. The industry downsized, and aged. There's no money to develop and roll out new standards, nor the willpower to do so: trying to secure the internet of things is like trying to switch the USA to driving on the left, or using the metric system. Pre-existing infrastructure has tremendous cultural inertia: to change it you first have to flatten it, and nobody much wants to destroy western civilization in order to clear the ground for rolling out IPv8.

So here's my takeaway list of bullet-points for 2034:

  • It's going to superficially resemble 2014.

  • However, every object in the real world is going to be providing a constant stream of metadata about its environment — and I mean every object.

  • The frameworks used for channeling this firehose of environment data are going to be insecure and ramshackle, with foundations built on decades-old design errors.

  • The commercial internet funding model of 1994 — advertising — is still influential, and its blind-spots underpin the attitude of the internet of things to our privacy and security.

  • How physical products are manufactured and distributed may be quite different from 2014. In particular, expect more 3D printing at end-points and less long-range shipment of centrally manufactured products. But in many cases, how we use the products may be the same.

  • The continuing trend towards fewer people being employed in manufacturing, and greater automation of service jobs, will continue: our current societal model, whereby we work to earn money with which to buy the goods and services we need, may not be sustainable in the face of a continuing squeeze on employment. But since when has consistency or coherency or even humanity been a prerequisite of any human civilization in history? We'll muddle on, even when an objective observer might look at us and shake her head in despair.

And now, the state of Perl in 2034

(I'm reading from the keynote talk for YAPC::NA 2034 by Charles Stross, recovering Perl hacker, science fiction writer, and card-carrying Mad Scientist — Paratemporal Meddling Management Group, speciality: screwing up history).

Frankly I'm kind of astonished to be standing here, talking to you about a programming language that first escaped into the wild forty-seven years ago. And not just because my continued existence is a tribute to medical science: it's because the half life of a programming language, back when people were still inventing new programming languages, was typically about ten years.

Programming languages come and go, and mostly they go.

Back in the dim and distant past, programming languages were rare. We rode out the 1950s on just FORTRAN, LISP, and the embryonic product of the CODASYL Conference on Data Systems Languages, COBOL. Then the 1960s saw a small pre-Cambrian explosion, bequeathing us ALGOL, GOTO considered harmful, BASIC (as supporting evidence for the prosecution), and a bunch of hopeful monsters like SNOBOL4, BCPL, and Pascal, some of which went on the rampage and did enormous damage to our ideas of what computers are good for.

Then, between about 1970 and 1990, compiler design wormed its way into the syllabus of undergraduate CS degree courses, and the number of languages mushroomed. Even though most sane CS students stick to re-implementing Lisp in Haskell and similar five-finger exercises, there are enough fools out there who suffer from the delusion that their ideas are not only new but useful to other people to keep the linguistic taxonomists in business.

Student projects seldom have the opportunity to do much harm — for a language to do real damage it needs a flag and an army — but if by some mischance a frustrated language designer later finds themselves in a managerial role at a company that ships code, they can inflict their personal demons on everyone unlucky enough to be caught within the blast radius of a proprietary platform and a supercritical mass of extremely bad ideas.

Much more rarely, a language designer actually has something useful to say — not just an urge to scratch a personal itch, but an urge to scratch an itch that lots of other programmers share. The degree of success with which their ideas are met often depends as much on the timing — when they go public — as on the content. Which brings me to the matter at hand ...

Even twenty years ago, in 2014, Perl was no longer a sexy paradigm-busting newcomer but a staid middle-aged citizen, living in a sprawling but somehow cluttered mansion full of fussily decorated modules of questionable utility. That people are still gathering to talk about new developments in Perl after 47 years is ... well, it's no crazier than the notion of people drafting new standards for COBOL in the 21st century would have seemed if you'd put it to Grace Hopper in the early 1960s. Much less Object-Oriented COBOL. Or the 2018 standard for Functional COBOL with immutable objects.

So why is Perl still going in 2034, and why is there any prospect whatsoever of it still being a thing in 2134?

By rights, Perl in 2034 ought to have been a dead language. The law of averages is against it: the half-life of a programming language in the latter half of the 20th century was around a decade, and as a hold-over from 1987 it should be well past its sell-by date.

Perl, like other scripting languages of the late 20th century, was susceptible to a decade-long cycle of fashion trends. In the 1990s it was all about the web, and in particular the web 1.0 transactional model — now dying, if not dead, replaced by more sophisticated client/server or distributed processing frameworks. While Perl was always far more than just a scripting language for writing quick and dirty server-side CGI scripts, that's the context in which many programmers first encountered it. And indeed, many people approached Perl as if they thought it was a slightly less web-specific version of PHP.

But Perl isn't PHP — any more than it's Python or Ruby. Perl 5 is a powerful, expressive general-purpose high level programming language with a huge archive of modules for processing data and interfacing to databases. Perl 6 — if and when we get there — is almost a different beast, essentially a toolkit for creating application domain-specific sub-languages. And while Perl and its modules were once a bit of a beast (as anyone who ever had to build perl 5 from source on a workstation powered by a 33MHz 68030 will recall), by today's standards it's svelte and fast.

If what you're juggling is a city-wide street network with an average of one processor per paving slab, generating metadata at a rate of megabytes per minute per square metre of sidewalk, it pays to distill down your data as close to source as possible. And if those paving slabs are all running some descendant of Linux and talking to each other over IP, then some kind of data reduction and data mangling language is probably the ideal duct tape to hold the whole thing together.
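To make that concrete, here's a minimal sketch of the kind of at-source distillation such a slab might do (in Python rather than Perl, purely for illustration; the numbers, field names, and threshold are all invented): boil a minute's worth of raw samples down to a tiny summary, forwarding raw values only when they look anomalous.

```python
from statistics import median

def summarize(readings, alarm_threshold=3.0):
    """Distill a minute of raw sensor samples into a small summary dict,
    forwarding raw values only when they sit far from the local median."""
    m = median(readings)
    anomalies = [r for r in readings if abs(r - m) > alarm_threshold]
    return {"median": round(m, 2), "n": len(readings), "anomalies": anomalies}

# Kilobytes of raw samples reduce to a few dozen bytes of metadata:
print(summarize([20.1, 19.9, 20.0, 20.2, 35.7]))
# → {'median': 20.1, 'n': 5, 'anomalies': [35.7]}
```

The design choice is the point: the mesh only ever carries the summary, and the raw firehose dies at the kerb.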

But Perl also has a secret weapon in the language longevity wars. And that secret weapon is: you.

Back when I went to my first YAPC in London in the late 1990s, I had no idea that I'd return to one in Orlando in 2014 and see several familiar faces in the audience. And I'm pretty sure that in 2034 my hypothetical future self will recognize some of those faces again in the audience at YAPC::NA 2034.

Perl has a culture — curated since the early days via the perl5-porters mailing list and the comp.lang.perl Usenet group, and elsewhere. I don't know whether it was intentional or not, but for better or worse Perl tends to attract die-hard loyalists and has a culture not only of language use but of contribution to the pool of extensions and modules known as CPAN.

And Perl was invented just late enough in the semiconductor revolution that it stands a chance of still being in use by a die-hard core of loyalists when the progression dictated by Moore's law comes to an end, and everything slows down.

If a technology is invented and discarded during a technological revolution before the revolution matures and reaches the limits dictated by physical law, then it will probably remain forgotten or a niche player at best. In aerospace, perhaps the classic examples are the biplane and the rigid airship or Zeppelin. They worked, but they were inefficient compared to alternative designs and so they are unlikely to be resurrected. But if a technology was still in use when the revolution ended and the dust settled, then it will probably remain in use for a very long time. The variable-pitch propeller, the turbofan, and the aileron: it's hard to see any of them vanishing from the skies any time soon.

Perl is, in 2014, a mature language — but it's not a dead language. The community of Perl loyalists is aging and greying, but we're still here and still relevant. And the revolution is due to end some time in the next ten years. If Perl is still relevant in 2024, then it will certainly still be relevant in 2034 because the world of operating systems (research into which, as Rob Pike lamented, stagnated after 1990) and the world of programming languages are intimately dependent on the pace of change of the underlying hardware, and once the hardware freezes (or switches to incremental change over a period of decades) the drive to develop new tools and languages will evaporate.

Just keep going, folks. Focus on modules. Focus on unit testing. Focus on big data, on data mining and transformation, on large-scale distributed low-energy processing. Focus on staying alive. Perl is 27 years old this year, in 2014. If Perl is still in use in 2024, then the odds are good that it will make it to 2034 and then to 2114.

Let's hope we get that cure for old age: people are going to need you to still be around for a long time to come!

Thank you and good night.



I strongly suspect that actually, one of the major occupations of programmers in 2034 — Perl and otherwise — is going to be finding and cleaning up the remaining 32-bit time_t's.

Hopefully they'll be out of system APIs by that point, but they're all over file formats and protocols...
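For illustration, that wraparound is easy to demonstrate; a quick Python sketch (the rollover instant for a signed 32-bit time_t is the well-known 2038-01-19 03:14:07 UTC):

```python
import struct
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold:
limit = 2**31 - 1
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# → 2038-01-19 03:14:07+00:00

# One second later the stored value wraps negative, which a naive
# decoder reads as a date back in December 1901:
wrapped = struct.unpack("<i", struct.pack("<I", (limit + 1) & 0xFFFFFFFF))[0]
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# → 1901-12-13 20:45:52+00:00
```

Which is exactly why a 32-bit timestamp field buried in a file format or wire protocol is a time bomb, even after the system APIs have long since gone 64-bit.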


Transistors & The End of Moore's Law:

My take-home is that today the gaps in transistors are about 50 silicon atoms across; by about 2025 that will have shrunk to 3-4 atoms, and quantum mechanics, and quantum tunnelling in particular, will be a major problem. The size of atoms is what will end Moore's Law, and we're not too far away.


I'm less qualified to speak about the second half than the first half (last third, second third?). But in 2034, I predict that much of the world will be virtually unchanged, technology-wise. Millions (even a billion or so) of people will continue to "make a living" via the time-honoured process of "scratching dirt" (subsistence farming). Tens to hundreds of cities will continue to have really good roads, from the presidential palace to the airport, and really shit roads almost everywhere else.

And once you get out of the cities, the villages will continue to have insufficient exit roads that are impassable for one or two months a year, and just shit the rest of the year. And many of those villages will continue to not have running water, and certainly electricity will not be hooked up to each house.

And the slums in the major cities will have electricity, but it will look similar to now, a man has hooked up ("illegally") to the grid, and is selling connections. The money doesn't go to the actual electricity generators.

And life will continue as it has been.

But in the rich world, life is wonderful. For those who can afford it.


I'd posit a few guesses:

Oil prices at least quadruple. Cars are on the way out. Electric cars never get cheap enough for the masses.

China falls apart. Between pollution and poor economic growth, the Chinese Communist Party can't keep a lid on riots.

Scotland is sole remaining member of the EU.


Back in the 80s (I'm guessing 1984 or 1985) in the school playground (I must have been 14 or so at the time) I made the prophetic statement "The future will be just like now, only more so". I meant "more intense". So "more TV", "more radio", "more cars", "more pollution", "more noise"... and so on. I still feel I was right, here.

That kid would not have been surprised by today's mobile internet... he'd seen computers shrink down from (in books) room sized megaliths down to the size of a ZX81; he'd played with hand held calculators; wireless TV, wireless music, two way communication ("walkie talkie")... all in his experience. Combining "small computer" with "walkie talkie" wouldn't have been a stretch. Mostly because that kid just didn't understand the problems necessary to make that a reality.

In 1993 I made another prophetic statement "why would people want IP connections when they can get their email with UUCP or off-line readers or similar; BT phone bills would work against it". (yay, spuddy!) This time I was totally wrong.

So "glowing slabs of glass" wouldn't really have surprised my teenage self. The data available, however... that's where I got it wrong!


Hmmm, I'm utterly unconcerned about the icecaps disappearing, since we seem to have a divergence problem between the predicted dooms and the observed realities.

On the other hand, having processors embedded in everything means everything is generating heat which needs to be shed. With two computers and a TV in a fairly small room it is noticeably different when everything is on and when it is not.

Though I do love the idea of linux running on the sidewalks and roads!


A Network of Things, depending on photo-voltaic power.

Winter, Edinburgh, 2034, the early hours: the Snoopernet goes down as the on-chip power storage in the paving stones drains.

It's not an existential threat, but it happens every night in winter.


With all due respect, puddles in the road that report mosquito nests are funny. On the other hand, drone spraying of biodegradable sequencing sensors (or, more likely, other types of sensor, WiFi-ed of course) over the fields and woods is more plausible.

Otherwise, spot on. Except that genetic engineering, an interesting and potentially crucial player, is left out. That beast is in its exponential developmental phase and, even more important, the effects of changing one species blast all the way through the biosphere (tough to predict). It's regulated in some countries, but China doesn't care much when it's a matter of getting ahead. Plus, regulation is and will be a process of many debates. A radical change in the food industry (for the better) would be sensible, but people are even more resistant to changes in food than to changes in clothes or even houses (or furniture).


the secret weapon in science fiction's armory is history.

LURVE your transportation analogy (One of mine, too of course) ... However, I think automobiles have now (2014) shot their bolt - even electric ones have to be parked, somewhere, & are still slower than HS rail. Aviation? Lifting bodies, like the one at Cardington? [ Google for HAV 304 ] In 1914, aviation was actually more promising than motor transport (I think) Air wipes the floor with rail at over FOUR hours, actually. { The King's Cross - Newcastle trains are well-loaded. But London - Glasgow / Edinburgh? }

"Hit the buffers" ?? Even with optical computing in diamond substrates? I don't have enough techinal nous on that one....

Automobile 2034 - &, of course the few diehards, still using old ones - I will be 88 in 2034 & if my eyesight is up to it & my reactions haven't slowed too much, I will still be piloting the Great Green Beast ( Come on, my aunt, who learnt on an Army 3-tonner in 1942, only gave up driving at age 91, because her eyes were not good enough! )

The internet of things is horribly vulnerable to "terrorist" or even angry-kids attack: EMP - ridiculously easy to do, actually. I nearly built one, in the days of Walkmans, but dissuaded myself, when I remembered: "Pacemakers".

Jay @ 4 THAT made me laugh - very good!

Ah, but WHICH history. Is Syria/ISIS the 30 years war, or the start of a new Lebensraum ... err ... Decisions can be difficult.

Also note comparisons might influence our course of action; another one that comes to mind concerning ISIS is the rise of the Khmer Rouge with the Vietnam war. Which would predispose towards activism (maybe toward unwise ends) somewhat more than the Thirty Years War comparison...


Hmm. Where does the communications bandwidth come from for the internet of things? The electromagnetic spectrum and compression are limited, so there's no bandwidth for everything to communicate directly back to base. Presumably it would have to be short-range, so things only communicate directly with nearby things. But that means it would take a lot of hops to get to where the information is used. More likely the processing would be distributed, and only highly filtered information would be relayed back. The street would have to decide for itself whether that's a terrorist walking along it, not send the raw information back to the security services (or to "Terrorists 'R' us" for targeted advertising).


As far as bandwidth is concerned, you can do a lot using picocells plus back-haul fibre. I certainly see that as the way that mobile telephony is going in high-density areas.

On the other hand, your suggestion that most of the data would never leave the locality is almost certainly right. I recall hearing that the Large Hadron Collider discards the vast majority of its data almost instantly, to avoid overwhelming its data storage abilities. I'd expect similar in this case.


Agreed, though first I expect compression in time: the real world is so slow compared to these processors.


I think there's a parallel with other resources that are going to run out. Peak Oil will mean there is less cheap energy to go around, and a likely outcome of that is a contraction of the world economy. Peak Moore will mean that you won't be able to rely on next-generation hardware to scale in certain directions. Where and how this is going to hit, I don't know, but it's important to keep in mind.


Air wipes the floor with rail at over FOUR hours, actually.

Don't agree, depends on circumstances. From where I live (Ulm, Germany), I can go to Paris by TGV (no changeovers) in 4:50. Still much better than flying to Paris.


I think there is at least one conclusion I'd have to disagree with - that when hardware improvements hit the wall, the software will atrophy too.

There is at least an order of magnitude of improvement in speed/efficiency/size/etc. possible by optimising and improving the software side of things. Too much of the kruft relies on faster hardware to hide programmer inefficiency.

So I'd assume that automated refactoring and reengineering of software, away from human readable, will be the order of the day for at least another 5-10 years after hardware has spluttered.

Of course, that ignores that oil will have gotten impossible to source, and that civilisation as we know it will have curled up and died first anyway.


"...optimising and improving the software side of things."

Isn't that the 'incremental improvement' part of the sigmoid though?

Regards Luke


Hmm. Where does the communications bandwidth come from for the internet of things? The electromagnetic spectrum and compression are limited, so there's no bandwidth for everything to communicate directly back to base.

Yup. That's why I'm assuming a lot of digestion of data gets done on the distributed nodes themselves. Then it's peer-to-peer, and relatively slow, for low priority traffic: only high-priority stuff ("6 year old following boy into road in path of 18-wheeler!") that gets the long-range high-power transmission.


You need to go do some reading up on Koomey's law. Then compare the thermal output of a modern 37" LCD TV to a 40-year-old tube-driven 20" colour TV.


I ran the numbers on PV-powered chips in paving slabs a few years ago. We're talking microwatts, spiking to 1-2mW at peak drain: you use the PV cell to charge a capacitor and run off it overnight. Or you point a couple of the laser diodes in your smart next-gen LED/laser street lights at each chip to keep them powered directly through the night.
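As a back-of-the-envelope check (every figure below is assumed for illustration, not measured), the overnight energy budget for such a slab chip really is tiny:

```python
# Hypothetical figures for one paving-slab node:
avg_draw_w = 10e-6            # assume 10 µW average overnight draw
night_s = 16 * 3600           # a 16-hour midwinter Edinburgh night
energy_needed_j = avg_draw_w * night_s
print(energy_needed_j)        # joules required to ride out the night

# A small supercapacitor, usable between 3.3 V (full) and a 1.8 V cutoff;
# usable energy is the difference of the two 1/2·C·V² terms:
cap_f, v_full, v_min = 0.2, 3.3, 1.8
energy_stored_j = 0.5 * cap_f * (v_full**2 - v_min**2)
print(round(energy_stored_j, 3))
```

On these assumed numbers the node needs about 0.58 J and the capacitor holds about 0.77 J of usable energy, so it lasts the night with some margin; halve the capacitance and it doesn't, which is roughly the failure mode the Edinburgh winter comment above describes.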


I'd have to disagree on the oil front; liquid hydrocarbons are just too useful in terms of ease of storage, transportability and energy density, plus we have a huge sunk investment in hydrocarbon processing and distribution. I do think we'll see a fundamental change away from extracting oil from under the ground to manufacturing it from bioengineered feedstocks and from existing biomass/waste.

The profit potential for an economically viable means of "growing" oil, particularly one that doesn't impact food sources like ethanol from corn, is so great that the oil and transport corporations will fund the research. Prices will most likely be higher than today, but I doubt they'll go more than double the current point of about $100/bbl. Oil from algae seems a likely path:

This, of course, has negative implications for CO2 emissions, but I think the continuing drive for mileage efficiency and cost of fuel will result in a moderate decline in emissions in the West; I'm more concerned about the ecological impact of emerging economies using hydrocarbon fuels (i.e., killer smog in Beijing).


Plus, if global warming goes well enough, you won't have to worry about snow. You might want to invest in a canoe to get about, though.


This, of course, has negative implications for CO2 emissions,

No it doesn't.

The "carbon emissions" we're worried about are fossil carbon emissions, i.e. adding carbon to the atmospheric/biosphere CO2 cycle that was previously sequestrated deep in the earth's crust.

If we're using biosynthetic fuels or fuels synthesized via the Fischer-Tropsch pathway, the carbon comes out of the circulating atmospheric CO2 pool, and is released back into it when it's burned. It's net carbon neutral -- the equation balances out -- so doesn't contribute additional greenhouse gas.

(Personally I'd rather see Fischer-Tropsch synthesis driven by nuclear or PV cells in agriculturally-non-viable locations than biofuels that divert useful edible crops into an inefficient fuel cycle for the rich. But that's just me.)


Ah yes, the giggle factor.

Because I read an article on metals recycling right before I read this, I have a different take on it.

Here's the problem: a lot of rare earths are recycled at <1%. Indium is the worst culprit in this, because it's used in a lot of semiconductors and almost never recycled from them. (I won't even get into the whole peak phosphorus mess, but if the Peak P crowd is right, which they likely aren't, we'll run way short of P around 2034).

Anyway, the problem with paving the streets with chips is two-fold. One is durability and lifespan, because they're almost certainly going to have to sit in the ground for at least 10 years, due to fiscal constraints on municipal street construction. The second is recycling the chip once the street is replaced because, trust me, if we spread rare earths around that much, our streets will have higher concentrations of rare earths than many mines.

That's the third problem, actually. A dude with a sledgehammer and some cheap electronics can swipe the processors and recycle them for some money. That goes for almost any part of the internet of things. As with the copper wiring in homes, if rare earths and other materials become sufficiently valuable, they will be stolen. It's not just rare earths, either. To pick another example, your computer has a higher concentration of gold in it than do South African mines, and it's a lot easier to get to and extract as well.

I'm with Mike Davis (Planet of Slums) on this one. In 2034, there will be an internet of things, but it will be confined mostly to gated communities. These, erm, lovely places will look increasingly like the White House and hold the dwindling remnants of the middle and upper middle classes. Oh yes, it will also be found in the homes of the 1%. The White House is, of course, one of the world's poshest prisons, due to the need to create a massive security cordon around the President. We may see the collapse of the prison-industrial complex (speed the day!), but the people who created that security will increasingly set it up around the well-heeled, using paranoia as the excuse to up-sell them into an increasingly limited life inside a high-security fence.

We're already seeing such bubble-worlds on the edges of cities around the world, and they invert the prison technology to keep their residents safe, or at least, feeling safe.

The other 95% of civilization will live in a dumb and increasingly brutal world, with megaslums being the fastest-growing type of development. Cheap electronics will also be the norm here, as it is in 2014 (more people have cell phones than have access to clean drinking water or toilets), but smart infrastructure will not exist in the slums, because infrastructure for the most part will not exist in the slums.

This is, of course, if we get the recycling issue under control and actually start recycling the rare earth and lithium basis for electronics and solar panels. If we do not, only the rich will have them, and the rest of us will be storming their walls.


Bret Victor's "The Future of Programming" talk at DBX last year is worth a visit for it's similarity in style and themes:


Well, that's good news.

On a completely different note, I was looking at Edinburgh on Google Maps after my canoe comment, and my eye lit on the Scottish Parliament Building. WT Holy F?!?!? Who foisted that garish, ridiculous heap of post-modern "architecture" on your grey, rectangular city?


I'm certainly not advocating oil production from edible crops; I was talking more about production from non-food plants, bio waste, and algae.


Non-food plants for biofuel implies either a decrease in food grown or more land under the plough - both deeply unrecommended. Incidentally, some algae are an edible crop.


Potentially but not necessarily true. The biomass could come from sewage or discarded parts of food crops (hint: you don't eat most of a corn stalk), and algae doesn't have to be grown on arable land. I came from El Paso and believe me, where they're working, you don't grow food.


Fertilizers are made from natural gas; modern agricultural productivity depends heavily on fossil fuels.


The farmers need that biomass to keep the land fertile.

A patch of land isn't magic; anything that comes off of it has to be replaced if the land is going to stay productive. The less you take, the less you have to fertilize. If you take "excess" biomass off to make fuels, you have to replace the carbon in the soil somehow. For practical purposes that means fossil fuel based fertilizers.


Maybe I am missing something: the efficiency of photosynthesis is on the order of a percent, so what is the rationale for converting solar into plants, plants into oil, and oil into energy, instead of solar directly into energy? I agree that oil would matter more for the chemical industry.


As for CO2 emissions, we can't stop the world from burning cheap, plentiful, useful and dirty coal. Even the Germans, with their famous commitment to solar energy, are burning more coal than ever before. Whether it's burning coal or smoking grass, if something is useful and popular no amount of law or regulation will stop it from happening.

To believe otherwise is just wishful thinking.

So where does that leave us in regards to global warming? Our only remaining choice is sequestration by large-scale geo-engineering. Fortunately, initial efforts at CO2 sequestration via iron fertilization of the oceans are looking very promising, and replenish fish stocks:

An international research team has published the results of an ocean iron fertilization experiment (EIFEX) carried out in 2004 in the current issue of the scientific journal Nature. Unlike the LOHAFEX experiment carried out in 2009, EIFEX has shown that a substantial proportion of carbon from the induced algal bloom sank to the deep sea floor. These results, which were thoroughly analysed before being published now, provide a valuable contribution to our better understanding of the global carbon cycle.

Over 50 per cent of the plankton bloom sank below 1000 metre depth indicating that their carbon content can be stored in the deep ocean and in the underlying seafloor sediments for time scales of well over a century.

Iron Fertilization helps restore fish populations

In 2012, 120 tons of iron sulfate were distributed into the northeast Pacific to stimulate a phytoplankton bloom, which in turn would provide ample food for baby salmon.

The verdict is now in on this highly controversial experiment: It worked.

In fact it has been a stunningly over-the-top success. This year, the number of salmon caught in the northeast Pacific more than quadrupled, going from 50 million to 226 million. In the Fraser River, which only once before in history had a salmon run greater than 25 million fish (about 45 million in 2010), the number of salmon increased to 72 million.

The cost for iron fertilization would be “ridiculously low” as compared with any other possible method of carbon sequestration. Quite seriously, all you need to do is throw rubbish over the side of the ship to make it happen.

No, really: ferrous sulphate is a waste product of a number of different industrial processes (if I’m recalling correctly, one source would be the production of titanium dioxide for making white paint, a large industry) and it really is a waste. It gets thrown into holes in the ground.

(Could it really be this simple?)


I'd also point out that biomass is problematic for a couple of reasons.

It's about the math. We're currently blowing something like 8.5 gigatonnes of carbon (GtC) per year into the air from fossil fuels, and blowing another 2 GtC into the air through deforestation. Plants are taking up about 2 GtC, mostly in the northern boreal forest (read: Siberia), not in the tropics where forest is being lost. That's about 0.25% turnover in the plants, if I have the numbers right.

If we get serious about biofuels, I don't think we're going to get carbon uptake at the same rate as we'd need to blow it into the atmosphere to keep our society fueled.

This gets more serious when you see proposals for biomass-fueled power plants that would run off ground-up forests and shrublands. This is a favorite in chaparral country in California, because they want to scrape the hillsides bare and use the resulting slash to run some small power plants. To the crack-brained theorists, this will free everyone from the terror of chaparral fires and provide energy. What could possibly go wrong? (Aside from more ignitable weeds replacing the shrubs and increasing the fire danger, plus landslides where the shrubs no longer hold the slopes together, and numerous extinctions, that is.)

Well, it takes the shrubs about 30 years to regrow, so that's how long it takes to get the carbon out of the air. That's what's wrong. It's not carbon neutral on an annual basis, it's running a carbon debt ("please let me burn down this shrub. I promise that, in 100 years, the next shrub will have recaptured the carbon I will put into the air tomorrow.")


Brilliant essay on the coming technological changes. But won't our world be shaped more by demographic changes such as urbanization:

By 1990, less than 40% of the global population lived in a city, but as of 2010, more than half of all people live in an urban area. By 2030, 6 out of every 10 people will live in a city, and by 2050, this proportion will increase to 7 out of 10 people.

... and collapsing birth rates resulting in aging (and eventually declining) populations, with Japan as the canary in the coal mine:

Japan's aging society is rapidly shrinking the workforce. Aging is forcing Japan to attempt to relax old prejudices against women and foreigners. Plus there is an attempt to achieve a robotic revolution. Japan is pulling out all the stops to hit the 2 percent GDP growth target the government says is needed to reduce its mammoth public debt.

Japan’s government has also urged the nation’s business leaders to do more to boost the role of working women. That is seen as vital due to the shrinking workforce in one of the world’s most rapidly ageing societies. Japan is taking steps to increase the number of highly skilled foreign workers, and to expand a controversial foreign trainee programme, which has been accused of exploiting participants. Another aspect of the growth strategy is the boosting of productivity through a “robotic revolution”, but experts have warned that even with automated aid it will take years for Japan to achieve the necessary growth.


This really sounds great! Thanks!


In response to Big Brother (and a host of Little Brothers) watching our every move and recording every word, won't technology allow the Common Man to turn the tables and watch the Watchers? This is fellow SF writer David Brin's concept of Sousveillance:

But watching and recording the cops would only be the first step. Technology could allow anyone to become a Snowden:

So what would a perfectly transparent society (both up and down the power structure) be like?


Something that I was recently made aware of: concrete production has a massive carbon footprint, and any carbon tax makes it substantially more expensive. Assuming a carbon tax is in place, I'm not sure how quick cities will be to replace dumb pavement with smart pavement.


Mr. Stross - Your comments on the slowing down of Moore's Law and technology in general seems to dovetail with Horgan's "End of Science". Do you agree with him on the slowing or even halting of significant scientific advances?


I hope it works as advertised, but it's hard to draw final conclusions from one test.

Yet it does make intuitive sense that it is easier to alter the industrial carbon cycle than to try to stop it completely.


Looks simple enough, though extra biomass on a large scale will alter the water temperature and thus ocean currents. Did they check how that would change weather patterns (precipitation)?

Otherwise it makes sense to use mostly underutilized space in oceans.


Charlie, I agree with much of what you say, and admire your writing style. You've inspired me enough to go to Amazon and look up your titles. However: I slightly disagree with your conclusions on semiconductors, but only in the sense that you identify silicon as the substrate of the future. I would speculate that it will not be so, and that future materials developments will make the silicon-based technology revolution of the last 40 years look like a hiccup compared to what's coming.

Think of a chunk of silicon carbide that has a single molecular sheet of graphene on top. Put down a layer of glass and punch some vias through it (or maybe find a guy with a teeny-tiny drill and very small hands to do it for you.) ;-) Drop in nanotubes for the vias and lay down another molecular sheet of graphene. The nanotubes should (theoretically) have an easy time connecting the two sheets as a quasi-perfect conductor. The sheets could use quantum dots for circuit nodes, interconnected by "wrinkles" in the sheet. Keep laying down glass, tubes and sheets until you're bored with it.

Such a microchip technology (should all the current research pan out) would make a world in 2034 that, compared to 2014, would be like comparing the space shuttle to a horse and buggy. Who knows, maybe I'm a delusional idiot. If you have the time to spare, come by and let me know what you think; I'd very much enjoy hearing a second opinion.


What is the consequence of Moore's law coming to an end? First off, there's another big sigma curve that lies on the other side of Moore's law, in terms of software and hardware: efficiency and optimization.

Right now there are a lot of inefficiencies in how computers are designed at the hardware level and how the programs running on them are coded. We don't worry about it too much because development costs are important and we can always count on next year's computers to run the bloated applications faster than they run now. Once Moore's law goes away, we can't count on more CPU and more memory for improved performance. At that point, the only way to improve performance is to start doing more with less. And for that matter, the only way to improve functionality is to put more features in the same memory footprint and CPU usage, which again means a focus on efficiency and optimization.

I have no idea of what software coding is going to be like in a world where efficiency is everything. We're probably going to see some new programming languages and development environments that focus heavily on efficiencies. I'm not sure that our current programming languages and development environments are up to the task.

Another big point is that at some point, if you want new features, you may have to toss old features out. Currently, we have a lot of applications that have a 'just toss it all in' approach. We're going to have to take a new approach to functionality in the future when you have a fixed amount of CPU and memory to count on. There may be an optimization cycle of functionality in programs as well, with less used features being tossed out or some system set up to ensure that less used functionality has minimum resource impacts.

On the complementary hardware side, I see the same thing happening. Transistor counts (at least transistor counts per unit of currency) are going to flatten in computer hardware, and so at that point if you want to improve performance, you need to start shifting computational demands away from general-purpose CPUs to specialized chips like GPUs. There's going to be a messy period where everyone tries to figure out where it's best to shift to specialized hardware and where it's more efficient to just leave it to the CPU or programmable GPU, but I see a sigma curve there.

The downside is that computers are likely to have more complicated internal architectures. That's the price of efficiency. It has been observed that the internals of automobiles have gotten increasingly complex over the years. That complexity is not just there to jack up your repair bills, but a lot of it is the price of trying to make a car run as efficiently as possible. This is going to put an extra burden on programmers who have to start dealing with more complicated hardware architectures, and have to design their programs carefully to take full advantage of the new specialized hardware.


"Optimising compiler" is the concept you have missed.

That is something which has been happening for a long time. Back in the days of interpreted BASIC the human could do a lot, and comments were a liability. People would use crude tools to strip out comments from the source.

As soon as the BASIC compilers appeared, GWBASIC and Turbo BASIC were competing to produce better machine code, and computer science teachers were trying to get their students to realise that verbose comments were useful.

I think there is less slack in the compiled code than you think.


Liquid fuel from biomass has a lot less impact on CO2 levels than the same fuel from an oil well. And there's been some research on using solar energy on the processing.

We might find ways of bypassing chlorophyll.

Also, it looks as though China is already trying to do something about their pollution problem. They seem to have a long-term plan to bring down the cost of solar energy. The report I saw glossed over some of the pollution problems from producing solar cells, and the problems of how the energy can be used, but I have an electric storage radiator here. It would have to be engineered differently, but I can't see any fundamental reason why it couldn't run from daylight-only solar energy rather than cheap-rate electricity after midnight.


In addition to advertising being the underlying funding model for the internet, I believe that insurance underwriting (and other complicated financial instruments) will have a lot of traction by 2034.

Yes, your IoT car can (and will) display adverts, but the bandwidth and telemetry systems are most likely to be paid for by your insurer. I already work on vehicle tracking systems that are used for insurance purposes (not just driving behaviour, but detecting crashes and getting the contracted tow truck to the scene first). The bandwidth and the IoT device is paid for by the insurer.

IoT telemetry has the opportunity to reduce risk, and some city boy will figure out how to underwrite that risk (and repackage it to sell to someone else). The paving slab and home automation system can tell your healthcare insurer whether or not you sit on the couch too much and adjust premiums accordingly. Impacts on personal risk are easier to imagine, and will be mainstream by 2034. By 2034 everyone will have devices that monitor health, driving, and home risks. Only people who don't want insurance or are prepared to pay stupidly high premiums will go without.

It gets interesting when you look at broader risks, and how those are underwritten. Even government services can have their cost of funding infrastructure development (hospitals, training of doctors, emergency services, flood defences) influenced by data from their citizens. A 'bad' neighbourhood can be classified by IoT fridges moaning about not getting your 5-a-day, step counters reporting that people use lifts (elevators) instead of stairs, smoke detectors picking up smoking habits, and paving slabs and cars comparing notes that people drive when they should walk. 'Bad' neighbourhoods may find it more costly to get finance. Insurers may insist that stormwater drain maintenance sensors are installed city-wide and deny claims accordingly (thereby placing the unacceptable risk on the city).

The advent of big-data storage and processing makes things interesting. Patterns can be detected in order to assess risk on a minute-by-minute basis, or to profile policy-holders against established risk criteria. By 2034 we may be less worried about eHealth releasing personal data (to advertisers and other people we don't want to have it) and more concerned about the 'valid' use of the data under an agreed contract (terms of cover). Your office chair may be reporting data and your health premiums calculated on a minute-by-minute basis. If you slouch for too long, or don't take a break when told to, you can get charged a penalty (to cover the added risk) or have your claims denied.

While you can opt-out of advertising (sort of - by pretending to ignore the advert), your opt-out of insurance telemetry monitoring is to be automatically placed in the highest-risk category with eye-watering premiums.


Corn stalks, whether maize, wheat, or barley, remove nutrients from the soil. Ploughing these residues into the soil minimises that effect. You can recycle sewage to recover phosphates, but when that was being pushed in the Nineties the supermarkets just didn't want to know. Processing as a feedstock for fuels might allow nutrient recovery, but the farmer will get the short end of the stick on paying for it.

Oh, and plant fibre makes a difference to soil structure.

There's a crop of biomass being grown locally for a power station. There's a lot of fossil energy went into the fertiliser.


Pessimism about insurance changes driven by ubiquitous monitoring isn't necessarily fully justified. The monitoring could be done entirely privately and owned by the individuals, and they could adjust their behavior to reduce estimated risk to a much lower level, and might decide to e.g. self-insure. In this world, your "bad neighborhoods" issue dominates, with problems obtaining insurance against the risks not under a person's direct control, e.g. combined high neighborhood rates of gun possession and drunkenness.


"Hmmm, I'm utterly unconcerned about the icecaps disappearing, since we seem to have a divergence problem between the predicted dooms and the observed realities."

The ice melting is way ahead of schedule; surface temps are sorta stagnant (for the past several years)[1]; ocean temps, and especially deeper temps, are going up a lot.

[1] If anybody uses 1998 as a starting point, be aware that there was a massive El Nino then, spiking surface temps. They're cherry-picking.


"A Network of Things, depending on photo-voltaic power.

Winter, Edinburgh, 2034, the early hours: the Snoopernet goes down as the on-chip power storage in the paving stones drains.

It's not an existential threat, but it happens every night in winter."

I guess it must work differently where you come from, but where I live we have things which magically store electricity :)


From an internet of things, the next step may be making the inanimate into the animate capable of self-monitoring and self-repair. Not nanotechnology, but biology.


My point about highlighting insurance-based financing wasn't intended to come across as pessimistic. In the bit that I have been directly involved with, based in the UK, the device makes a call to an emergency call centre when an impact is detected: 'Are you okay, can I help?' For the insured, there are safety benefits, and for the insurer there is the benefit of people being more honest or clear at the time of the incident. Asking how many people are in the car in front (and whether they are okay) reduces the likelihood of false whiplash claims which, at least in the UK, are a major factor overloading premiums. As someone who pays premiums, I welcome an overall reduction in insurance fraud, and also welcome not being lumped into a higher-premium risk category to which I, as a unique snowflake, do not belong.

The two big problems are a) there is a lot of very personal data being recorded, and accessibility to the data is not clear. Currently, if you borrow your wife's car to go and visit your girlfriend, 'her' car won't report it. It gets more complicated when your wife asks for access to 'her' data as the owner/policyholder. b) actuaries, while liking the concept of more data to determine risk, don't actually have mature enough models. They may do by 2034, but they're going to make a few mistakes along the way. For example 'driving behaviour' by measuring g-forces (acceleration, braking, cornering) is subjective and depends on having a performance model for every car (some cars are safer going fast around corners than others) mapped against road condition, mapped against driving conditions (sunlight, rain), mapped against positions of other vehicles on the road at the time, mapped against driver skill, mapped against alertness and situational awareness, and so on. Personally, it will be a long time before I trust an actuarial model to determine how 'well' I was driving based on sensor input.

More future-shock than pessimism.


"The Internet of Things" - where everything becomes alive and inanimate objects are given mind and personality, just like in a fantasy novel. Trees, rocks, weapons, etc. will come alive and speak, as in any sword-and-sorcery novel. Like the One Ring, your wedding band will have a will of its own. Your toaster will come alive like the "Brave Little Toaster".

For real.

Cool idea for a novel: a fantasy adventure set in a Narnia- or Middle-Earth-like world created not by magic but by the "internet of things". Dwarves, elves, orcs, centaurs, talking lions, etc. created by genetic engineering. Mages and warriors given superhuman abilities by implants.


Optimizing compilers are just a small part of software bloat. Currently we design programs for easy readability, easy maintainability, and easy modification. At least in theory. In practice it is often "get something out the door in a hurry that isn't too buggy."

It's rare to see a program designed from the ground up to be run as fast as possible in as small a memory footprint as possible. Right now, we code in a very lazy way and rely on the compiler to pick up the slack.


I wonder how much all that information is really worth to insurers. People tend to suffer from attribution bias; most smokers think cigarettes will kill them when only 1/3 of them will die from smoking related causes. It may be that your insurer just doesn't care that much whether you have a cheeseburger or go 5 mph over the speed limit; the additional risk may not be significant compared to the routine risks you run every day. Insurers price for actuarial risk, not moral guardianship.


That is a good point: the mountains of data may not be useful to insurers. But we are talking about 2034, and in 2014 it is already possible for supermarket loyalty cards to determine whether or not you are pregnant, so correlation between datasets with the amount of processing power available is conceivable.

My initial point about insurers is simpler: insurance will contribute greatly to the funding model of IoT. Charlie's example of unsold goods calling for help when they are being abducted is likely to be insurance- rather than advertising-funded. I have already seen low-power product tags for high-end watches. They look like normal product tags but have a battery, sensors and other bits to cry for help. Because of the power requirements they cannot use 3G and GPS, and rely on simpler radio transmissions to 'cry out' and be triangulated. It is conceivable that, to eliminate theft, a shopping mall or district will collectively pay for the 'cry for help' infrastructure in order to get lower premiums. So infrastructure for non-GPS location awareness (the oft-told advertising story of stores knowing where you are and pushing offers to you as you walk past) may be funded by risk reduction rather than advertising.


Is it just me who thinks "Perl 6 will still be 'if and when' in 2034" is the really important thing here? :-)


I'm not saying it's a huge amount of heat per item, just that with a huge enough amount of items we're looking at an impressive amount of heat. Obviously you are well aware of this, having written Accelerando and whatnot, love your books btw! Sadly my favorite Scottish writers are down to just you after a GCU snatched Banks up last year.


To a person of the 1980s, I wonder if by 2034 we're going to look a bit like the Borg, with everything "smart": clothes, glasses, watches, bags, our houses, vehicles, and public spaces. Everything spewing context for each other to munch on, better access to instant communication with absolutely every person and all our shared info and data. At what point will we tip over into hive mind lite? Also, people's privacy continues to erode; we notice it, and fight in certain areas, but trends like Facebook also show we're willing to give up a lot. So yeah, a continual human meshing process for another 20 years might be... interesting. You will be assimilated?


At what point will we tip over into hive mind lite?

You know about Wikipedia, right?


Continuing from my first post, what happens after we're through the sigma curve of rebuilding our software from scratch to minimize CPU and memory usage, and rebuilding our hardware to take advantage of specialized hardware chips that can perform certain functions much better than a flexible if inefficient general purpose CPU? Then we go through the sigma curve of bug fixes and security fixes. Once the software has stabilized on a stabilized hardware platform and there's no more performance improvements to be had, what the folks want is reliability and security.

The analogy here is much like American auto makers in the seventies who for years focused primarily on performance and size, and then discovered that what people really want is reliability, safety and fuel efficiency. Of course people would like power consumption to go down in computers, but in many ways that's tied to performance issues and I tend to see reduced electricity consumption being loaded in with hardware performance improvements.

The bad news for commercial software developers, those folks who sell software to consumers for money, is that with the exception of video game developers, I don't see their business model being sustainable once performance improvements are gone and whatever room for new functionality they got from optimizing their software and eliminating bloat is gone. It's hard to sell an upgrade for money that provides no new functionality, just performance improvements, and it's going to be impossible to do it for bug fixes and security patches. In the long run, I see open source software dominating in the operating system and basic applications area. Note there will be plenty of programmers needed for web services, embedded software and the like, it's just Microsoft and Adobe and the like who will be in serious trouble.

On the hardware side, things are also going to get increasingly grim. TV makers now are having to deal with the problem of flat panel technologies producing longer lasting televisions than CRTs, and as the life cycle for computers skyrockets after the hardware platforms stabilize, there's going to be a collapse of computer manufacturers. There's still money to be made in maintenance contracts and the like, but the market created by everyone replacing their smartphone or laptop every two to five years is going to be going away.


Actually, optimising compilers are not something I've missed.

You are thinking of the low level of things, akin to taking prose in English and translating it to Mandarin word by word. It might occasionally get to the stage of replacing one idiomatic phrase with an equivalent.

I'm talking about rewriting the entire book, moving major plot structures around and turning "War and Peace" into a "Janet and John" version.

Lots of scope for orders of magnitude improvement there - even if hardware scales have hit maturity. Lots of scope for more complex hardware design at the same scale that still gets revolutionary levels of change out of things.

And the key point is that in getting those improvements most of the existing programming languages go away. No good being a Perl programmer if we hit that level of code refactoring: you'll be explaining what you want in English (or Mandarin) and the software will be tut-tutting, refactoring your ideas, and writing the code, because you won't be able to understand what it does, or why.

If you want the equivalent today, can you successfully beat an optimising compiler at generating assembler for a superscalar, out-of-order, multi-core CPU?


The bit about power consumption is what Intel does with the tick-tock cycle. I just got a Chromebook (C720) for the missus to replace the old hand-me-down netbook she was using.

She went from a 1.6 GHz single-core Atom from several generations back, which would wheeze and sit over 50 C full time if she tried to have YouTube in one tab and any other site in a second tab. Battery life was like 6 hours with a big multicell battery, the fans ran constantly, PLUS I had rigged up a CPU case fan plugged into USB and bolted onto a cooling stand/lapdesk thing.

Now she's got what looks like a slower chip, "only" 1.4 GHz, but it's a Haswell Celeron, so it's dual core, better optimized, and it sips power. The fan rarely comes on, and she can get 7 or 8 hours of Hangouts before she needs to grab the plug.

Continuing the trend even a few more steps has some nice payoffs, but as the original thread noted, eventually we need to face the facts that we don't know how to make circuits with components less than 1 atom wide.


I am up to my elbows in compiler optimization issues at the moment, so I have my doubts regarding the all-seeing power of The Mighty Compiler. Look at the shipwreck of the Itanium if you want an example of "The Compiler Will Just Sort It Out" hubris.

Another example, the current state of the art is fairly terrible at using SIMD vectorization for anything more than the simplest cases. For a specific use case just the other week, I got about 2x speedup when I was able to coax the compiler into automatically vectorizing my compute-intensive loop. But when I hand-implemented the loop using SIMD intrinsics, I got 4.5x. The compiler has a long way to improve there.


A compiler smart enough to convert English (or any other natural language) descriptions to optimized code would not be a compiler, it would be an AI. And since "Bootstrap yourself into the Singularity" would be a valid natural language description to give to such a compiler, it would have to be a godlike singularity AI.

If, on the other hand, the programmer still has to carefully think about the problem, and couch his phrasing to squeeze his design into something the compiler can understand, then we already have that, it's called "every programming language and compiler ever".

The compiler doesn't know what you want. You have to tell it what you want. Which means you have to figure out what you want. No compiler will ever eliminate that "figure out what you want" step, or if it does, you will be on the other side of the Singularity, have fun with that.


Personally I'd rather see Fischer-Tropsch synthesis driven by nuclear or PV cells in agriculturally-non-viable locations than biofuels that divert useful edible crops into an inefficient fuel cycle for the rich. But that's just me.

There are a lot of us (at least in the US) who think that converting corn (and other plants) to gasoline is a stupid idea. People of all political stripes. But those corn farmers have political clout in the halls of Congress so that's what we do.


My last point on the post-Moore's law future is that we're going to see the cloud as an increasing way to get around Moore's law, especially for portable devices. The fact is that in the future, the amount of hardware you can pack into a smartphone or tablet is going to grind to a halt. So the question is, how do you satisfy people's demand for more performance and functionality? Simple. You turn your smartphone into a dumb terminal for some server on the cloud, and stream your apps from a server using some hybrid bitmap/SVG protocol.

To be honest, I see that also eliminating most of the desktop computer market as well, replacing them with smartTVs that just have a mouse, keyboard and network cable plugged into them. Most people don't want to be their own sysadmins, and using a shared cloud account for smartphone, tablet and laptop/desktop setups makes file synchronization a lot easier. With biometrics, RFID security jewelry/implants and the like, it also makes stealing smartphones a lot less appealing as well, much as there was not much point to stealing a dumb terminal. And you don't worry about losing data when you drop your smartphone or tablet and it breaks.

Of course now you've pushed all the problems onto the server farms, that are going to hit walls, and in the long run, people will simply come to learn to live with diminished expectations, just as we live with the fact that cars and planes aren't appreciably faster than they were decades ago.


I think there is less slack in the compiled code than you think.

I don't think so. But I'll have to ask a friend. He specializes in patching object code and works with the output of multiple compilers. He should have a good idea of how optimized current compiler code really is.


Corn stalks are also used as animal bedding, then fertilizer. Or am I the only one who's forked straw and shovelled manure? :-)

Part of the problem with using sewage as fertilizer is that the Victorians only built one combined sewage system for household and industrial waste, and there tends to be quite a lot of heavy metals and such in the treatment plant's finished product.


I suggest you take a look at Karl Schroeder's "Ventus".


There are some objections (Shava Nerad's second comment; Google Plus is apparently too braindead to allow direct links to comments) to Brin's frankly Panglossian attitude.


The monitoring could be done entirely privately and owned by the individuals, and they could adjust their behavior to reduce estimated risk to a much lower level, and might decide to e.g. self-insure.

You may not understand how most people think. A friend got an e coli infection a couple of years ago. Almost lost his kidneys. Did lose much of their function. So he got to try a special diet to see if he could stay off dialysis. So far he has. But what he found was that most people in his situation would rather eat what they want and do the dialysis than change their diet.

Personally, I can't imagine not avoiding dialysis several times a week if I could, but I guess I'm not typical.


For example 'driving behaviour' by measuring g-forces (acceleration, braking, cornering) is subjective and depends on having a performance model for every car (some cars are safer going fast around corners than others) mapped against road condition, mapped against driving conditions (sunlight, rain), mapped against positions of other vehicles on the road at the time, mapped against driver skill, mapped against alertness and situational awareness, and so on.

Yes. In the US, State Farm Insurance has a driving app that will tell you how SF thinks you are driving. You tell it when to start and stop, and it then gives you a report on how you did. I tried it, and it told me I was not safe at 3 points on the route. All three were places where a PERSON would have to see the road to realize I was fine in how I drove the curves it didn't like. Coming to a stop where the speed limit is 45 MPH at a somewhat sharp curve, but with very wide lanes and flat land all around, is more dangerous than driving the curve with no chance of spinning out. And similar.


"Peak Oil" isn't actually going to matter ... What with PV costs falling steadily, people like "Fuel Air Solutions" manufacturing hydrocarbons, & advances in artificial photosynthesis (this one is lagging compared to the others, but progress is being made), then no, we are not going to run out of fuel, especially as more & more transport becomes electrically powered. See also Charlie's point about modern steam-powered trains ....


That is almost what the previously mentioned Fuel Air people are doing, Charlie!

See also Dave P @ # 29 AND ... DD @ 33 I really hope you (& the experimenters) are correct. If so, that solves several problems at one go. Does anyone know of any snags or vested interests against such a useful move?


ATT Which is why, over most of the UK now ... local councils (or groups of them) collect biodegradable waste ("brown bins" usually; "green bins" are for recycling), which is then "composted" in purpose-built bunkers at up to approx 80 C ... it's then given to allotments & sold to farmers. It has amazing regenerative properties for soil & fertility, not just via the recycling of nutrients, but also in the physical structure of the soil (particularly if that base soil is a clay). This model just needs further spreading around, if you'll pardon the pun ......


LIKE THIS do you mean? [ Implant(s) allowing a partially paralysed man to move extremities by "thought control" .... ]


Sigh - I think we've been through this one before. Key words are "scale" and "growth".

We've been on a crude plateau for quite a while now, so let's assume that the next move is down, and soon. Furthermore, let's assume we lose just 1.2% of our current production level per year. That's 1Mbpd of new oil production needed, per year, just to stand still. Are people like "Fuel Air Solutions" going to deliver a million barrels of oil, every day, straight off the bat? Are they going to be able to double that by next year?

Practically, we've been exchanging growth and dumb uses of oil (power stations) to keep supply and demand balanced and the price of oil at $100-110. But that's limited, and the base decline rate of the major fields is 6.5% pa.

Any solution has to scale at a rate that delivers all the useful liquids from fracking in the US, each and every year, at a minimum, just to be in the game. It has to have growth rates that look exponential to get there, at a time when spiking oil prices collapse economies, and probably banks (the next time we get a GFC).

Your solution might have been a solution if implementation had started in the 1990s, but as it is we'd need an all-out war-type effort to avoid a crunch. What we'll probably get instead is just plain war.


Where I am, they take garden waste but not food waste. They say there are special problems.


North Herts takes food and garden waste together. Initially they didn't (I think the biggie was the problem with rats), but the new generation of composters are reportedly better sealed and, should a rat get in anyway, it probably rots down quite nicely.

I can't speak as to the quality of the output as we've never used it. The provenance wouldn't support Soil Association rules, since the input wouldn't be properly 'organic'.

(Granted, SA rules can be daft. I mean, guys, the seeds have to be 'organic'? That's bordering on superstition.)


I think you're analysing the situation differently to the insewerer. I'm using data for my car for rate of loss of speed on a closed throttle.

The insewerer will see closing the throttle 200 yards out and coasting down on engine braking alone as safer behaviour than braking at 0.7g from 40 yards out. As it happens, the firm braking is probably safer in my car because the weight transfer forwards gives better and more stable turn-in.

Are people like "Fuel Air Solutions" going to deliver a million barrels of oil, every day, straight off the bat? Are they going to be able to double that by next year?


You're talking about a $30bn/year market demand, with growth potential. If the process is economical at all, capitalisation won't be the problem.

Interference from incumbents, that could be an issue; anything from market manipulation to high explosives, really... There's a lot of money at stake, and the livelihoods of more than a few sovereign states.

Getting it built quickly may be a logistical issue, but we've built big things before. It's just a question of money.


I strongly suspect that actually, one of the major occupations of programmers in 2034 — Perl and otherwise — is going to be finding and cleaning up the remaining 32-bit time_t's.

Dumb question, but is 64-bit architecture more or less locked-in? When would, say, 128-bit machines be a better investment?
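On the time_t point, a quick sketch (Python rather than Perl, purely for convenience) of why 2038 is the magic year for signed 32-bit timestamps:

```python
import struct
from datetime import datetime, timezone

# A signed 32-bit time_t runs out at 2**31 - 1 seconds past the Unix epoch.
T_MAX = 2**31 - 1
print(datetime.fromtimestamp(T_MAX, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One tick later the value wraps to the most negative 32-bit integer,
# which decodes as a date in December 1901.
wrapped, = struct.unpack('<i', struct.pack('<I', (T_MAX + 1) & 0xFFFFFFFF))
print(wrapped, datetime.fromtimestamp(wrapped, tz=timezone.utc).year)
# -2147483648 1901
```

Every 32-bit time_t still lurking in a struct, a file format, or a wire protocol in 2034 will be four years from doing exactly this.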


What other languages would be locked in besides Perl? C? Fortran? Cobol? Conversely, how much scope is there for new languages to emerge? I can't think of anything that hasn't already been done somewhere (though I cheerfully admit I don't know bupkas about the subject) in some language. Naturally parallel (massively parallel) programming? If that nut hasn't been cracked yet, I don't think it's going to happen by 2034.


@80 (Granted, SA rules can be daft. I mean, guys, the seeds have to be 'organic'? That's bordering on superstition.)

Well, they work much better than the inorganic seeds (see what happens when you mix "green" with science?).



There is not actually any real incentive to go 128-bit for general purpose computing.

On the memory level, the important metrics these days are 1) the cache line size, which is already 512 bits (64 bytes) on Intel and 1024 bits (128 bytes) on Power architectures; and 2) the memory bandwidth at the various cache levels, which isn't really number-of-bits-wide oriented. People don't care how many bits wide the bus might physically be, they only care about the latency and bandwidth that the bus delivers.

On the memory addressing level, 32 bits addressed 4GB, which is a lot but it's certainly feasible to scan the entire range in a reasonable amount of time. 48 bits (the amount of the 64 bits that current architectures max out at) gets you 256 terabytes. Just loading that much data from disk at 1000Mbit/sec would take almost a month. An actual full 64 bit address space would be 65536 times bigger than that. So it's hard to imagine what benefits would accrue from going beyond 64 bits. Any problems that big which need solving are big enough to also be partitioned onto separate computers- many, many separate computers, in all likelihood.

There are some applications which might benefit from 128 bit floating point. But if you're only talking floating point, you don't need the entire architecture to be 128 bit, you just need to provide some floating point support and some extended registers.

There aren't a whole lot of applications which would need 128 bit integer math, at least not that would be happy with ONLY 128 bit integer math rather than arbitrary precision math.
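The address-space arithmetic above is easy to sanity-check; a few lines of Python (illustrative only) reproduce the quoted figures:

```python
# Sanity-checking the address-space figures quoted above.
TB = 2**40
addr_48 = 2**48                 # 48-bit virtual addressing
print(addr_48 // TB)            # 256 terabytes

# Reading 256 TB from disk at 1000 Mbit/s (= 125 MB/s):
days = addr_48 / 125e6 / 86400
print(round(days))              # ~26 days -- "almost a month"

# And a full 64-bit space dwarfs even that:
print(2**64 // addr_48)         # 65536 times bigger
```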


I'm reliably informed Erlang is what you're asking about, though perhaps not massively parallel (that's what Hadoop is for).


What other languages would be locked in besides Perl? C? Fortran? Cobol? Conversely, how much scope is there for new languages to emerge? Probably C++.

There was recently an exercise in the Meteorological community to compare OOP languages, specifically OO Fortran vs. C++. C++ won, but on grounds of compiler advancements rather than language preference. While OOP is a bit of a shock to the community, who are fairly happy with Fortran90 thank you, C++ has compiler developer support; less effort is going into optimising Fortran even where the scope exists.

OTOH, an aversion to C++ is pushing the same people towards Python for more stuff: Python as a wrapper to existing libraries. So there is scope for new languages as long as they interface with and wrap existing code well.


One wrapper to bind them all and in the darkness find them . . .


Taking from your 2034 bullet points, I get:

  • complex, tightly coupled systems, built on
  • rickety, kluge-ridden infrastructure, with
  • little or no regulation, and what exists is either inept or cronyist, with possibly
  • near-biological replication of material instrumentalities, using
  • bug-embedded, human-fallible automation.

Not exactly a Three Mile Island or good ol' American-style patriotic spree killing every week, but something to look forward to. Perhaps Perl should be made less efficient, to provide for some circuit breakers.


I'd be curious to hear the audience's take on Next Big Future. I find it a little on the gee-whiz, tech-uber-alles side of things, with a huge helping of classic libertarian optimism. That's differentiated from the Randian right-wing objectivist libertarianism that's far more common these days.

I saw this article and am deeply, deeply skeptical, though I really have no qualifications to ask anything but "what does a real marine ecologist have to say about this?" It seems like the iron rule of ecology is "If you catch yourself saying 'Well, this seems simple enough,' you probably don't understand it at all." Start tampering with things and you'll always learn why you shouldn't, usually the hard way.

That being said, I'd be pleased as punch if it really was this simple, and we could tuck into our lox and bagels without a care in the world.


"...the Bayesian spoor of conspiracy..."

By far my favorite phrase I've read and/or heard this week.


So your next, totally new book will have this as part of the background? I would look forward to reading a story where the world is much more aware and even sentient at the IoT level.


Languages aren't developing as fast as the hardware they run on (relatively speaking, we're only just over the idea of having to fit the running implementation in a meg or less!). One piece of evidence for that'd be the sheer extent to which they've been informed by developments in formal logic (itself younger than one might expect, but older than computing), to the point where it's now understood that the difference between a formal logic and a programming language is one of point of view. So I reckon language development'll continue for a while yet - but then I would say that, I'm a proper PL wonk.


Given the ad-driven mechanism of funding, I'm seeing the IoT as being a lot more Phil-Dickian than Culture. How'd you like to wake up every morning with your toaster asking if you'd like Aunt Jemima syrup with your waffles and Porky's sausage always goes good with waffles and there's a sale on both at your local Aldi's? Just press synch your phone to get a coupon for 10% off either item or 15% for both.


scentofviolets asked "Dumb question, but is 64-bit architecture more or less locked-in? When would, say, 128-bit machines be a better investment?"

Not a dumb question at all, it was and is very interesting to CPU designers and the firms that employ them.

John Mashey, CPU architecture guy at Silicon Graphics, wrote back in 1995 that memory demand did grow at 2/3rds of a bit per year, in line with Moore's Law. (Programmers can always find a way to fill available RAM.) The first 32 bit microprocessors came out around 1980 and were standard in PCs by the 1990s. The first 64 bit microprocessors came out around 1995, were standard for PCs by the mid 2000s, and mobile phones are just going 64 bit now.

flanagan.ffs wrote that there isn't any real incentive for 128 bit general purpose computing. I disagree, because historical predictions that X bits are enough have never worked out. We're just starting to move to solid state disks, so maybe we'll start thinking of memory storage as permanent rather than transient. Big problems, even if they're distributed over many CPUs, are easier to organise with a global address space. Heck, at 128 bits we could use IPv6 addresses in some way I can't even imagine.


WRT the IoT, there's one very important ingredient needed to make this work (or at least, work passably well), and that is -- standards. They haven't been hammered out yet and possibly that's because hardware capabilities are changing so fast. But in 2014? Would it be too much to ask that HTML 5 be finally committed to paper? The same for any of a number of different forms.


Er, that should have been 2034, not 2014. I don't think it too much to ask that standards for html 5 be committed to by 2034.


John Mashey, CPU architecture guy at Silicon Graphics, wrote back in 1995 that memory demand did grow at 2/3rds of a bit per year, in line with Moore's Law. (Programmers can always find a way to fill available RAM.) The first 32 bit microprocessors came out around 1980 and were standard in PCs by the 1990s. The first 64 bit microprocessors came out around 1995, were standard for PCs by the mid 2000s, and mobile phones are just going 64 bit now.

But the big point is that Moore's Law is running out of steam. With the way things are going you will run into other hardware or budget limits before you bump up against 64 bit pointer size trying to address RAM, even in 2034.

We can double check John Mashey's prediction against historical data to see that the pace of change is already slowing. SGI's Indigo2 was their first workstation with a 64 bit processor. When it was released in 1994, it could hold up to an astonishing 384 MB of RAM:

If the 2/3 of a bit per year rule held true over 20 years, we should expect today's high end workstations to hold up to...

2 ^ ((2/3) * 20) * 384 ~= 3963368 MB of RAM. That is, 3870 GB of RAM. There's nothing even close today. I can't find a current workstation that holds more than 512 GB of RAM, though counterexamples would be welcome.
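The extrapolation is easy to replay (Python, purely for illustration; the 512 GB ceiling is the figure quoted above):

```python
# Mashey's rule of thumb: memory demand grows 2/3 of a bit per year.
# Starting from the Indigo2's 384 MB ceiling in 1994, 20 years on:
projected_mb = 2 ** ((2 / 3) * 20) * 384
print(round(projected_mb))          # ~3963369 MB, i.e. the ~3870 GB above

# Against an actual 2014 high-end workstation ceiling of 512 GB,
# the prediction overshoots by roughly a factor of 7.5:
print(round(projected_mb / 1024 / 512, 1))
```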


There are very good reasons that most computer systems can't support that much memory: physical space and heat.

Having large amounts of memory means you have to have space for them. If the lines are too long, there's delay (even with a perfect conductor, speed of light does come into play -- a nanosecond is about six inches, remember). You have to be able to situate the memory, and do so in such a way that it doesn't introduce extra latency.

The other problem is heat: 1 TByte of RAM is hot. Even 512 GBytes produces quite a bit of heat, even when it's idle. (DRAM requires constant refreshes, remember, so it generates heat when it's just sitting there. And SRAM is prohibitively expensive at those quantities.)

Another problem is that OSes turn out to be ill-equipped to deal with such staggering amounts of memory: 4GBytes of RAM uses quite a bit for the page tables. Get up to 512GBytes, and you'll be using large page sizes, and multiple levels of page table entries, the latter causing your available memory to be reduced.
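As a rough illustration of that page-table cost, assuming the usual x86-64 numbers of 4 KiB pages and 8-byte leaf entries (a sketch, not any particular OS's accounting; `pte_bytes` is just a made-up helper name):

```python
PAGE = 4 * 1024   # 4 KiB pages
PTE = 8           # 8 bytes per leaf page-table entry

def pte_bytes(ram_bytes):
    """Bytes of leaf page-table entries to map all of RAM exactly once."""
    return (ram_bytes // PAGE) * PTE

GB = 2**30
print(pte_bytes(4 * GB) // 2**20)    # 8 MiB of entries for 4 GB
print(pte_bytes(512 * GB) // 2**20)  # 1024 MiB for 512 GB
```

And that's before counting the upper levels of the tree, or the same pages being mapped into many processes at once, which is where the overhead really multiplies.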

In response to the original question about a 128-bit computer, I have to ask: what does 128 bits mean? Address space? That's probably not going to be required for physical memory, but there are some nice things you can do with it if you have it.

Do you mean 128 bits for registers? We're already there for floating point in some cases, and vector registers would be happier with 256 bits or more.

Hm. I've already ranted about C in another discussion, but another failing C has is that all pointers are the same. That causes problems on a machine with, say, 24 bit address registers and 64 bit data registers.


At this point in history, we've had hundreds of companies over at least three decades promising a scalable biofuel replacement for petroleum. Over and over again, they just didn't perform economically and/or they just couldn't scale because the input resource wasn't cheaply available in sufficient quantity.

Maybe this one is different. Who knows? But my rule of thumb is that if many smart people have tried to do something over 20+ years and failed, it's probably pretty damn hard to do, if not actually impossible, and news of imminent success is almost certainly wrong.


Ok, this isn't my field, but is there scope for cost/MB efficiencies in SRAM?


Is there any chance that you could see programming go through the equivalent of what the Industrial Revolution did to textile manufacturing, with lots of it broken down into highly specialized tasks that can be done by someone with the equivalent of a two-year college degree and no special aptitudes for math? Someone mentioned way up-thread that you might see tons more specialization in processors and computers, like with the innards of automobiles. And that would really go hand-in-hand with what economist Tyler Cowen likes to talk about in terms of constantly monitoring and evaluating employee performance in the future.

It's always near-impossible to guess where the jobs are going to be, but my money's on most of us in the US being "managers" - of robots, automated systems, patients, police targets, etc. Companies will love this at first, since "managers" can be paid on salary and can't be unionized. And the increasingly complex world is going to require more and more work just to manage that complexity, deal with the incredibly tangled legal/liability issues, and then make sure all these systems aren't screwing up with preventative maintenance (like how advanced military aircraft and both commercial and military ships have crews that mostly do maintenance all the time because the consequences for failures are huge).

Of course, I tend to think the truly weird stuff is going to come from Augmented Reality devices (such as Google Glass's descendants) and the Medical field. We're already printing hollow organs and skin now - imagine if we could print just about every kind of tissue in the human body, and then do excellent surgical reconstruction with it. It would be a plastic surgeon's fever dream fantasy. And the "Augmented Reality" stuff means that our world could seem weirdly magical, with us constantly interacting with generated content and the avatars for various programs (as well as each other over long distances).


At this point in history, we've had hundreds of companies over at least three decades promising a scalable biofuel replacement for petroleum. Over and over again, they just didn't perform economically and/or they just couldn't scale because the input resource wasn't cheaply available in sufficient quantity.

But the incentives of it just haven't been strong for most of that period. Even now, $3/gallon gasoline is cheap enough that most people just eat the costs of it, especially with automobiles getting better gas mileage.


The problem with 32 bit time_t is the year 2038 problem, not the choice of 16/32/64/128 bit architecture. Current processors have no problem working with different bit sizes and with 64 bit I doubt any problems with address range will appear (64 bits is enough to address 18 million TB of RAM)

Is there any chance that you could see programming go through the equivalent of what the Industrial Revolution did to textile manufacturing, with lots of it broken down into highly specialized tasks that can be done by someone with the equivalent of a two-year college degree and no special aptitudes for math?


To elaborate: either it'll never happen, or it already has. There's a chance lots of it may be broken down to highly specialized tasks and automated, because that's been happening since at least the invention of assemblers. The base job description of programmers is "take an idea and decompose it into machine-readable form"; what that entails is a constantly moving target. And there's lots of programmers with a two-year degree and no special aptitude for maths.


Thank you for another thought provoking article - that has provoked some thoughts!

I do have a professional interest in Moore's Law(s), since a proportion of my job is about achieving it. So I hope you're wrong about the 2030 end limit, since my new mortgage will run for 25 years starting next week!

But what's important is that Moore talks about density of minimum priced transistors. That's actually quite a subtle thing once you start digging into the details. It's all too easy to think of it as minimum feature length and talk about atomic limits. But transistor gate thickness is NOT the limit of performance per $, it's just the thing the process engineers and semi-equipment guys have been working on to do their part. I think your 2020-2030 is a good range for the end of this scaling.

The gate end-stop will be hit by the memory and fpga guys first, they use extremely optimized designs repeated a lot (you could argue that memory bitcells are the only feature where process engineering actually achieves progress alone on silicon, for everything else they move the goalposts at each node and pass extra work up the chain to the Digital Physical Designers/Library teams/EDA companies).

Predicting exactly where in 2020-2030 this will happen is another matter. Progress is currently stalled on light source issues: the feature size is now similar to the wavelength of the light used. This is being worked around for current nodes by doubling up the masks (for each layer you do half the exposure with one mask and half with another, which doubles the mask cost and process steps; newer nodes will go to quadruple). While this remains a problem, Moore's law is on hold. 2020 is pretty much the "light source issue doesn't get fixed" date, as the majority of the industry catches up onto the most advanced nodes. The general consensus is that it will be fixed either with Extreme UV or with something fancy like electron beam direct write (which has massive implications itself if it can be made to work!).

I'd say 2030 is the 'light source fixed' and process nodes progress smoothly to plan date. So it may stretch out. Note that this is for highly optimized stuff like memory/fpga. For digital logic there are one or two generations worth of inefficiency on the table.

But then what happens? You mentioned Moore's second law - that for every generation fab cost goes up by a factor of two. I think you dismiss the effects of the 'commoditization' of semi equipment a little bit too lightly. This is an industry that has been doubling the cost of its product every two years for a long time. That is a lot of cost to strip back out. Remember, just because you're not reducing feature size doesn't mean you can't stick to Moore's law - you can do it by halving manufacturing costs too.

And then there is the 3rd dimension... I guess some people are thinking, well ok, but where do we fit all these cheap chips? Stacked die is in its infancy. One of the big challenges for the semi equipment R+D teams as the process shrink tails off will be to reduce chip thickness (and still get the heat out). The thinner the chips, the easier it is to stack them without burning huge area driving the through silicon vias. Then imagine when we start building them with multiple layers of graphene. I've no idea how close you can get graphene sheets before you have graphite, but I'd guess it's not very big. Interconnect is an ever increasingly dominant proportion of the performance limit. If I can cluster stuff together in 3d instead of the 2d I have to do today, the performance implications are mind blowing, as are the savings in silicon area (because logic area isn't dominated purely by function, it's dominated by the need to drive a distance and therefore a capacitance) and power (for the same reason). NB if you can just give me a way to stack one chip flipped on top of another and wire them together, I halve the transistor area - that's worth 2 years of Moore's law by itself.

So in summary, I think you're correct: there is a limit to scaling, but I also think the mortgage payments will be fine. Does this mean Perl WILL die? Do I need to learn another language to hack chips around?


Python akhbar! BOOM :)


Give me credit for knowing that much! I'm just curious about what features would be considered 'universal' in the sense of what other civilizations are using. The pink starfish men of Patrick V probably don't use 50/60 Hz A.C. It's possible that their machines don't use a von Neumann architecture. But almost certainly -- given the universal physical constraints on feature size -- those machines are 64-bit at most, not 128-bit or anything else.

Standards like HTML5 are more elastic, but I can see path dependency locking them in by 2034, 2050 at the latest. Any bets as to whether Microsoft Office ever becomes a lean, mean processing machine? Whatever the date?


The pink starfish men of Patrick V probably don't use 50/60 Hz A.C. It's possible that their machines don't use a von Neumann architecture. But almost certainly -- given the universal physical constraints on feature size -- those machines are 64-bit at most, not 128-bit or anything else.

The pink starfish use a trinary* logic and their current top processors use a 96 bit architecture, but they are looking to go to 192 bit soon.

*Think +1v, 0v and -1v states for each bit


Even worse: the cognitive workload of programming requires that the programmer, in order to be anything other than a cut-and-paste monkey with a text editor, has to understand at the least a handful of key abstractions: variables, looping, and indirection operators (pointers). These form a conceptual hierarchy (not unlike Maslow's pyramid of needs in appearance) and it turns out that as many as 50% of trainee programmers can't even grasp the abstract idea of variables properly. They can limp along doing cut-and-paste of existing code to tailor it to specific tasks, but that's about the limit. And there's no well-established way of determining, short of an introductory programming course, which category the student will fall into. Arts graduates with no technical background at all are just as likely to make excellent programmers as folks with degrees in pure mathematics or engineering. (They may lack a few of the advanced conceptual bells and whistles, but they've got the cognitive equipment to grasp it when they meet it and need it. Unlike the Ctrl-C monkeys, who never will.)


I regularly write code without two of these, and with only immutable variables, but I must be weird.

Which isn't to say that general recursion is the way to anything but circular logic, but there's a lot of room for improvement.

One lecturer has observed a test that does appear to help re the 50% btw: regardless of whether the student uses the right rules for =, do they use consistent ones? Despite my irritation with it as a kid, I think there's a lot to be said for := for assignment/setting rather than binding.


I'm very surprised that the general level of comprehension is so low, or is that because of the techie types I always seem to mix with, I wonder? Code using pointers came in very late in my personal experience ... I could just about get my head around the idea, slowly, but I never got a chance to learn it properly ... this seems to be a classic case of learning by doing, IMHO, very like properly learning a foreign language (what a surprise!) So, you are saying that the level of abstraction required for really understanding pointer-operating logic is actually "harder" (or "different") than or from the mental process required to comprehend & use the other "normal" operations involved in coding instructions? F'rinstance, are pointers now used in Perl? Or other modern UNIX variants? For other readers, I was "educated" in pre-visual BASIC & cut my teeth on FORTRAN IV, learnt a little UNIX & haven't written ANY code since approx 1989.


The pink starfish use a trinary* logic and their current top processors use a 96 bit architecture, but they are looking to go to 192 bit soon.

Three-valued logic has some nice enough properties, computationally speaking, that people have been trying to implement it for a long time. I say try, because no one has been able to do trinary despite being very clever, very determined, and having access to a lot of resources. I conclude that binary machines are by far the norm in the universe, not trinary or anything else.

I'm also going to guess that programming languages are going to be the same way -- everybody out there is going to have something that incorporates assignment, branching, looping/recursion, etc. Contrariwise, if some new programming feature isn't here by 2034 (2050 at the latest), I'm going to say you're not going to see it in any extraterrestrial language either.


Greg Tingey wrote "I'm very surprised that the general level of comprehension is so low, or is that because of the techie types I always seem to mix with, I wonder?"

I'd guess you mix with techie types. A couple of British academics in 2006 wrote a paper about the failure rates for new students in computing. They'd developed a test to evaluate how well the students were learning and, in the best tradition of serendipity, accidentally gave it to the students before they'd done any classes. It turned out to be an excellent (and somewhat depressing) predictor! The draft paper is a fun read and worth skimming:

Greg also asked "So, you are saying that the level of abstraction required for really understanding pointer-operating logic is actually "harder" (or "different") than or from the mental process required to comprehend & use the other "Normal" operations involved in coding instructions?"

Yeah pointers are hard but necessary. Joel Spolsky expressed it better than I can:


Bruce Sterling in his novel Heavy Weather had a similar sounding future for computing. Thanks to open source and plain theft, the expensive corporate systems of today can be bought dirt cheap by anybody. One of the characters is the best sysadmin for their security system because she is careful about the order of operations and which system talks to which, not from hacking the code.

Anecdotally a lot of "programming" already means understanding which programs will successfully talk to others, how to interpret the real meaning of the protocol messages and other outputs, and where buffering is necessary to prevent two systems coming into direct conflict. Programming as 18th century etiquette ...

(Nope, pointers aren't important at all. Contradicting yourself is fun!)


I have been unable to comment for a bit due to a security foo. Catching up...

There are x86 systems out there which can do more than 4 TB of RAM, in the server space at least; 6 TB in the Dell R920 and SuperMicro 4048-TRFT, probably others. These are both small workgroup servers (5 U of rack space, etc). For non-mainstream-x86 they go to at least 32 TB.

Regarding Moore's law and the end of transistors and such; we have a number of strange changes coming. Memristors, phase-change RAM, quantum computing, truly massive parallelism reaching down out of computer science R&D into practical problems. We could already "do away with disks" by putting FLASH on the RAM bus - but it wears out quickly enough that that's probably a mistake. Memristors and PCRAM don't. A lot of system stuff is designed around there being Register-Cache-Cache-Cache-RAM-(bus)-Disk-Network speed hierarchies of access to data. It may well flatten to Register-Cache-Cache-Cache-RAM/Persistent Fast Storage-(bus)-Network. Everything has been a file, but may not be shortly.


If a 5U system is in the running in 2014, I think that IBM's POWER2+ Model R20 should be in the running for 1994. It could hold up to 2 GB of RAM. And applying the 2/3 of a bit per year rule... we should now have a 6U system that holds a bit over 20 TB of RAM. Do we? Find one and surprise me!

Of course x86-64 can go to big RAM in a single system image, especially with custom system glue. There's nothing weird about x86 there. SGI's UV line goes to 64 TB and 256 sockets and it's built on Xeons. The 64 TB limit is in fact an Intel limit -- currently only 46 bits physical address on Xeon. AMD offers 48 bits physical address but is behind on all other fronts, so I don't know if anyone makes bigger-memory systems with AMD chips.

If memristors eventually deliver it'll be a big change. We'll see. HP has so far slipped and failed to deliver on prior announcements of memristor products. Phase change memory at least has some products shipping but who knows if it will be the next big thing or if it'll fizzle like bubble memory.


They'd developed a test to evaluate how well the students were learning and, in the best tradition of serendipity, accidentally gave it to the students before they'd done any classes. It turned out to be an excellent (and somewhat depressing) predictor! The draft paper is a fun read and worth skimming

"Rarely is the question asked: Is our children learning?" - G.W. Bush

I've been in situations where I found I was not so much teaching something to someone as emitting words and teaching something at someone. That's a human layer problem, though; it's probably not subject to a hardware patch.


Three-valued logic has some nice enough properties, computationally speaking, that people have been trying to implement this one for a long time. I say try, because no one has been able to do trinary despite being very clever, very determined, and having access to a lot of resources.

Not so. In fact, R & D work way back in the 1950s came up with completely successful ternary circuitry based on parametrons (non-linear oscillators pumped by three phase AC). It was just that work on conventional approaches had moved on by then and overtaken the state of the art requirements that the parametron project had originally been given, plus there was already a material installed base of conventional approach stuff, so it missed the boat just like selectron memory did with ferrite cores. It was a case of being overtaken by events, a bit like the Rolls-Royce Crecy engine being ready just in time for gas turbines (though there it was the disruptive technology that got there first).

So the pink starfish might well be using ternary logic, if their own path dependences led them that way.


Even worse: the cognitive workload of programming requires that the programmer, in order to be anything other than a cut-and-paste monkey with a text editor, has to understand at the least a handful of key abstractions: variables, looping, and indirection operators (pointers).

Some functional languages don't need no steenking variables, looping, or indirection operators (I even worked up one myself, Furphy, using the Forth virtual machine as a base - and why hasn't anyone else brought Forth up yet?). Of course, they need other stuff that programmers have to wrap their heads around in order to achieve anything non-trivial, e.g. recursion, indirect function calls provided via parameters, lazy versus eager evaluation, ...


This is all starting to get interesting, but I have a headache now.


Not so. In fact, R & D work way back in the 1950s came up with completely successful ternary circuitry based on parametrons (non-linear oscillators pumped by three phase AC). It was just that work on conventional approaches had moved on by then

You forgot 'successful' tristate hardware developed in the 60's, 70's, etc. All of which could make the same claim about being outpaced by conventional approaches. IOW your claim is unfalsifiable, or at the very least needs a lot more work before it becomes tenable.

Same thing for stuff like the memristor, which also has been coming Real Soon Now for decades. Note that I'm not flatly rejecting claims of this sort; it's just that after spans approaching half a century of relentless hyping which never panned out, I've been burned enough times to be somewhat skeptical of them. Note also that I would be very happy to be proven wrong (I'd love to be wrong about aneutronic table-top fusion, for example), but there's a rather short distance separating realistic expectations and delusions. So until proven otherwise, I'm going to go with binary as the dominant computing paradigm in the universe. (And why not? These are after all just differing implementations of the universal Turing machine.)


Obligatory Joel Spolsky reference:


You forgot 'successful' tristate hardware developed in the 60's, 70's, etc. All of which could make the same claim about being outpaced by conventional approaches.

Well, yes.

It's not as if someone playing with electronics - for pure research or just as a hobby - couldn't make a trinary logic circuit now. But there hasn't been a commercial application for a long time. Imagine a trinary-based computer system with twenty years of development behind it, from hand-wired components to a fully integrated circuit chip; such a thing might be about as capable as an 8086. And the famous Intel 8086, which had decades of expensive R&D between it and ENIAC, is itself long obsolete now.

To be sure, someone trying to build a Turing-complete computational device with trinary logic in the WWII era would have had to get clever with the memory storage system. For example, a Williams Tube only worked in binary, though it could store over a thousand bits for as long as the power lasted. On the other hand, drum memory was patented back in 1932 and could in theory be coded with +, 0, and - charges.

The window in which an N-state logical base could be a reasonable contender was very small. It took a bit longer for designers to stop fiddling with word length and adopt the 8-bit byte. You can no doubt supply your own examples of technology lock-in.


To be honest I don't think your comparison is valid. A 9 or 10 bit byte wouldn't change the way computers work or how we program them all that much. Tri-state logic would require a rethinking of how we write code and organize the logic systems.

Fred Brooks said one of the things he was most proud of was winning the debate on the IBM 360 project to use 8 bits instead of 6 for byte length. I think he said Gene Amdahl was on the other side of the debate.


The window in which an N-state logical base could be a reasonable contender was very small. It took a bit longer for designers to stop fiddling with word length and adopt the 8-bit byte. You can no doubt supply your own examples of technology lock-in.

Waitaminute, you're sneaking in the assumption that trinary implementations are essentially no better than binary. Yeah, I'll grant you that the development of 50 Hz vs 60 Hz networks demonstrates strong path dependencies -- because at the end of the day there are no strong reasons to prefer one over the other. And that's most definitely not what was claimed by the advocates for trinary logic. Now if you're right, the 'successful' development in the 50's should have been a one-off (as I've already pointed out.) Instead, people have kept trying to do full implementations not just in the 50's, but also the 60's, 70's, and 80's (anyone remember fluidics?)

Why is this the case if -- per your claim -- there really isn't much of an advantage to trinary logic? Were they all just plain wrong?


"gets you 256 terabytes. Just loading that much data from disk at 1000Mbit/sec would take almost a month. "

I did that last night, so no, it doesn't. Why would you load it at 1000 Mbit/sec? Of course my system is a distributed cluster, but HP just announced this


"All pointers are the same" is a faulty assumption of many C programmers, not of C, just as "bytes are octets" is. (I've used a machine where it wasn't true, with char* being larger than int*; many programmers have had to deal with near and far pointers, though that's not standard C.) cf. "everything is a VAX", "everything is a Sun", "everything is an x86", "everything is a 32 bit machine" etc..

C++ may be reducing the prevalence of this, as C++ programmers are generally aware that "pointer to function" and "pointer to member function" are not interchangeable.


One serious problem with trinary is that it doesn't scale downwards very well. The difference between a plus, a zero, and a minus voltage signal is easy to determine when the voltages are significant, but as voltage levels drop to reduce power dissipation and increase switching speeds, the guard bands around the various levels get eaten away. The time required to swing a signal line from + through 0 to - or vice versa is another problem to be dealt with; in classical binary circuitry the time to switch between the only two legal states remains almost constant in either direction (usually).


One thing about decreasing costs and power consumption - if such a thing happens - is that the personal computers and such that cost the same will be able to pack more processing power than they do now, exactly the same as if Moore's law continued.

What I suspect is going to happen, though, is that once Moore's law - presently the cheapest way to add more computing power - dies out, the decrease in the costs of computation (both the capital costs and the power costs) is going to hit rapidly diminishing returns. There's a certain feedback here: while people are constantly replacing their hardware with newer hardware, there's a lot of money in hardware manufacturing, but once people stop doing that, there's less money, there are fewer smart people working on advancements, and the improvements slowly grind to a halt.



About this Entry

This page contains a single entry by Charlie Stross published on June 25, 2014 12:56 AM.

