
Blindsided by the future

Trying to second-guess the near future is increasingly impossible, and a recent article in The Economist raised an interesting possible explanation in my mind. In discussing the uptake of technologies in developing countries, they mentioned a World Bank study. To quote:

The World Bank looked at how much time elapsed between the invention of something and its widespread adoption (defined as when 80% of countries that use a technology first report it; see chart). For 19th-century technologies the gap was long: 120 years for trains and open-hearth steel furnaces, 100 years for the telephone. For aviation and radio, invented in the early 20th century, the lag was 60 years. But for the PC and CAT scans the gap was around 20 years and for mobile phones just 16. In most countries, most technologies are available in some degree.

But the degree varies widely. In almost all industrialised countries, once a technology is adopted it goes on to achieve mass-market scale, reaching 25% of the market for that particular device. Usually it hits 50%. In the World Bank's (admittedly incomplete) database, there are 28 examples of a new technology reaching 5% of the market in a rich country; of those, 23 went on to achieve over 50%. In other words, if something gets a foothold in a rich country, it usually spreads widely.

This is, as they say, a very interesting graph, outwith the context of technology uptake in developing countries. Here, in a nutshell, is why writing near-future SF has become so difficult. Say you want to set a story 30 years out, and as part of your world-building exercise you want to work out what technologies will be in widespread use by the time of the story. Back in 1900 to 1950 you could do so with a fair degree of accuracy; pick a couple of embryonic technologies and assume they'll be widespread (automobiles, aircraft, television): maybe throw in a couple of wildcards for good measure (wrist-watch telephones), and you're there. But today, that 30-year window is inaccessible. Even a 15-year horizon is pushing it. Something new could come along tomorrow and overrun the entire developed world before 2023.

Speed up this uptake curve a little bit by pushing it 20 years out, and you begin to see the outline of an onrushing singularity ... from the pages of The Economist.

(This post was prompted by the discovery that what I thought was a new and imaginative candidate for a not-here-today everywhere-by-2023 technology to stick in my next SF novel is, in fact, already here in concept form and will doubtless be around by 2013 and as unremarkable as wallpaper by 2023 ...)

103 Comments

1:

What technology was it? (If you feel you can share :)

Indeed a very interesting graph. I'm only 29 and I'm still shocked at the speed that mobile phones have spread throughout even 3rd world countries. What does this mean for poor countries if the singularity approaches or hits?

2:

Mobile phones have spread so widely in developing countries because it's an easy technology to deploy; far easier than wireline telephony. It does look like a singularity in history is on us (well, we knew that), but I don't think it's the Singularity, on-rushing.

3:

This graph is biased in a key way: It only focuses on hardware rather than software. I'd be fascinated to see graphs on software technologies such as email, USENET, the web and so on.

4:

It seems like scientific and technological advances always accelerate immediately after a new form of mass communication is adopted. For example, the invention of movable type preceded the industrial revolution; the computer revolution immediately followed the adoption of telephones and broadcast media. Now amazing new technologies are about to be unleashed upon us as a result of the introduction and widespread adoption of the latest advance in communication technology, the Internet. The Internet, more than any other form of communication before it, is facilitating the cross-pollination of ideas on a scale heretofore unimaginable.

It's exciting and scary at the same time. Get ready to be amazed by what's ahead. It won't be long now.

5:

Interesting. More interesting is that the graph is asymptotically approaching some number that looks like a couple of years. I estimated the numbers on the Economist graph, put them in an Excel spreadsheet, looked up the invention dates via Google and Wikipedia (YMMV) and did a trendline. Hard to tell which kind of trendline fit best... an exponential one seemed to work best. I have the spreadsheet if you want to play with it. If you want it, let me know where I can e-mail it.
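
If you'd rather skip the spreadsheet, here's a minimal sketch of the same fit in Python with scipy. The invention dates and lags below are my rough reads off the chart (guesses, not real data), so the fitted floor is illustrative only:

import numpy as np
from scipy.optimize import curve_fit

# Rough (invention year, years-to-widespread-adoption) pairs, eyeballed
# from the World Bank figures quoted in the post -- estimates, not data.
years = np.array([1825.0, 1876.0, 1903.0, 1906.0, 1973.0, 1977.0, 1983.0])
lags = np.array([120.0, 100.0, 60.0, 60.0, 20.0, 20.0, 16.0])

# Exponential decay towards a floor: lag(t) = a * exp(-k * (t - 1800)) + c
def model(t, a, k, c):
    return a * np.exp(-k * (t - 1800.0)) + c

(a, k, c), _ = curve_fit(model, years, lags, p0=(150.0, 0.01, 5.0))
print("fitted floor on adoption lag: %.1f years" % c)

The number to watch is c, the asymptotic lag: with these guessed inputs it comes out at a handful of years, which squares with the "couple of years" eyeball estimate above.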

By the way...been home sick with the flu. Read Accelerando in about two days. Very awesome. I had been sorta "stuck" lately and that was a great kick in the butt.

6:

Where the hell are usable videophones? 50 years since the idea, and what we have still sucks.

7:

This assumes, of course, that there continue to be new technologies to invent and deploy. But if we have run out of possibilities, or cease to be able to exploit or deploy them (eg because of lack of cheap energy), you can imagine the whole thing halting or going backwards. Then you'd have to predict which existing technologies remain in 30 years.

Take Concorde as an example - something which was technologically possible 10 years ago now isn't.

David

9:

Anatoly @ no 6:

We already have the technology and bandwidth for videophones but, judging from the failure of the videocall function integrated in the 3G mobile phone systems in Sweden, we don't want to have to look at the person we talk to. Since people often do other things while talking on the phone (writing, driving, cooking etc), why would we want to be distracted by a person's face? Sure, videophones might have their use for the hearing-impaired, but others don't want the hassle. Give us better hands-free instead.

10:

Infraljud @ 9:

Mobile phones are a bad example. The screen is too small, and you have to hold it. It is just not comfortable.

I think the problem is in the price of big screens.

11:

Anatoly: But are big screens so expensive? In areas where high speed internet is common, many use IP-based communication (Skype, internet phone, streaming video and so on) where the only difference from a videophone is the name. A webcam during instant messaging is a primitive version of it. TV quality might not be available right now but it could be implemented quite easily by borrowing encoding technologies from HDTV broadcasting. It's all a matter of bandwidth, in my opinion.

12:

Anatoly @ 6, 10:

Jetpacks, flying cars, and video phones are the tomorrow of yesterday. Basically, dumb ideas once all other technicalities are considered.

13:

We already have the technology and bandwidth for videophones but, judging from the failure of the videocall function integrated in the 3G mobile phone systems in Sweden, we don't want to have to look at the person we talk to. Since people often do other things while talking on the phone (writing, driving, cooking etc), why would we want to be distracted by a person's face? Sure, videophones might have their use for the hearing-impaired, but others don't want the hassle. Give us better hands-free instead.

Consider that communication has gone the other way - to short txt messages. People did not want rich communication - they wanted quick and universal communication, and when they got it, they (and I'm thinking of teenagers here) adopted it enthusiastically and promiscuously.

So to add to Charlie's woes, not only do you have to think about the technology, you have to think about what people really want and how those needs will be met. I bet when DARPA was putting together the Internet, they never considered that the industry driving the best commercial and technological progress on the thing would be dirty fetish pictures.

14:

"Where the hell are usable videophones?"

We have them--many laptops can act as videophones. Two pieces are missing: first, people really, really, really want peripheral vision, which means that a certain amount of design for installation is required; second, ubiquitous high-speed networking, which, at least in the USA, does not exist.

15:

I'm sometimes jarred when looking backwards as well, when I realize that some key technology I take for granted wasn't common back then. Even 15 years ago, things were different but I tend to overlook that in my memories of the time.

There's a clever video I came across, a spoof of the US show "24" as it would have been if produced in 1994 instead of today: http://www.youtube.com/watch?v=JMLH_QyPTYM

16:

Anatoly @ 6, 10:

Videoconferencing is here today. It's been working in business for a decade. Cisco calls its next-gen VC 'Telepresence', and by all accounts it adds in that peripheral sense of being there without which a standard VC is just an audio conf with TV. But it's gosh-darned expensive right now.

Want a videophone at your desk? Skype works.

It's just none of this is a "videophone". Same way 25GBP flights to the continent aren't personal jetcars. It's a 1950s future that's as irrelevant today as the 1980s trodesets of Neuromancer are. We have our global interweb. The consensual hallucination is of connectedness, not VR...

17:

The graph is also immediately reminiscent of 'The future is here. It's just not evenly distributed yet.' I wonder who's already trying to determine coefficient[s] of diffusion....

18:

Videophones: If you really want to show your face to the person you are talking to (and they really want to see you), then you need a camera pointing towards your face. This is not consonant with how present day portable phones are designed to be used, held against the side of the head. Likewise, once the call has been placed/answered and the phone is in its normal place, you cannot see the screen even if it is showing the other party. Anything that would put the camera and screen where they are usable for the purpose would make the phone awkward to hold and operate. No thank you.

You could do it with a desk phone, with the camera and screen on the base unit, but as mobiles become more the norm (and most people will value mobility more than video facilities), there is not much point.

Also, as Infraljud points out, most people don't want videophones, not enough to pay for them.

JHomes.

19:

Spare a thought for the late Isaac Asimov. "I, Robot" stops being readable the moment you find yourself on an interplanetary spacecraft with two intrepid astronauts debugging a faulty AI with slide-rules and graph paper.

I've read an interview with Asimov in which he expresses his exasperation that the most common question fans asked him was how to build a positronic brain.

So uh, when are you going to post the source code to SCORPION STARE, Charles?

20:

Charlie, the Cessna Citation X was at least transonic 15 years ago.

Anyway, Andrew, that would be even more irresponsible than developing it in the first place :-)

21:

The "leapfrogging" idea is traceable mostly to Alvin Toffler (Future Shock, Third Wave etc), academically to Alexander Gerschenkron. It has merit, but can lead to confusion if intrinsically "massive" technologies are lumped with data-centric ones.

E.g., 20 years ago India had maybe 5M telephones; today it's adding more cellphones than that per month, which would be flatly impossible with landlines, telephone poles etc. A perfect example of leapfrogging -- but there's no sensible analogy when it comes to water, sanitation, grid power, all-weather roads, etc. Sometimes, I'm afraid you just have to haul around and manipulate a bunch of dumb old matter.

Think of all the times we've heard (seriously or jokingly) "if cars [or airliners or whatever] had improved as fast as computers, we'd have..." Yeah, well... if passengers and cargo could be instantiated at ever-smaller scales with ever-better functionality, as bit patterns can, the comparison might mean something. They can't, so it doesn't.

Or my own bete noire, the persistent "when will we get Moore's Law for SPAAAAAACE?" Hey, gang, we've got it: in comsats, navsats, and remote sensing sats that provide far more function and reliability per kg than their early counterparts did, and for which the cost is amortized over millions of users. But expecting Moore's Law to trump the rocket equation for big, non-data-driven payloads (including people) is just silly.
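
For reference, the rocket equation in question is Tsiolkovsky's:

\Delta v = v_e \ln\left(\frac{m_0}{m_f}\right)

where m_0 is the fully fuelled mass, m_f the dry mass, and v_e the exhaust velocity. Achievable velocity change grows only with the logarithm of the mass ratio, so doubling delta-v means squaring the mass ratio -- and no amount of transistor shrinkage touches that.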

FWIW, this seems to me the logical foundation for the whole starwisp + upload meme.

22:

I think the cost of implementation is the important factor to consider. Cellphones and computers can be produced in one country and then sold all over the world for just the cost of production + transport. Cell towers are easier to set up than thousands of miles of copper wire that have to be attached to every home, etc.

Transport is another example. Trains are efficient in that they can carry a lot of people, but it's expensive to build railways. Lots of steel, wood, modifications to landscape, etc. Automobiles are a bit less efficient, but can work on dirt and gravel. Even upgrading to a real road is cheaper than a railroad. Aircraft are even easier -- you just need to build runways at each end of your route. And easy to expand too, just add more runways and air traffic control.

And refinements in the technology can cause jumps in adoption of it as well. Take radio -- the first ones were large, broke easily, required a constant source of power. After transistors were invented you could make them small enough to fit in your pocket for a fraction of the cost and power them off of batteries.

23:

One should also consider institutional barriers. Some technologies are simply hard to disseminate due to dominant vested interests. A few examples: In the US, power generation is a centralized activity, so it is hard for local systems to make much headway against these players and the layers of regulation that protect them. Europe has been particularly resistant to the dissemination of GMOs, in this case through cultural scare stories about "FrankenFoods". New antibiotics are no longer seriously researched by Big Pharma because only drugs for chronic diseases can become blockbusters. And of course, the previously mentioned Concorde was initially banned by the US from overland supersonic flights, possibly to buy time for the ultimately abandoned Boeing SST, crippling its operational and business use, especially for transcontinental routes.

24:

I'm not so interested in mobile videophones, but I'd love one on my wall. I can sit in front of it in a comfortable position so that complaint's out.

And it'd save a heck of a lot of time and money traveling for initial interviews!

Monte@21: No, the answer is "Fucking NASA". Yes, there are limits, certainly. But having the manned space presence be ancient Russian tech (and no, they're really not improving it much) and a tin can with a taxi designed to go to the tin can, forcing a huge canister to fall back to Earth each time it goes up? Fucking NASA.

Private enterprise is catching up, but it's taken a hell of a long time for NASA to realise some basic things, like "hey we can have a small lift rocket for people and a big one for payloads!"

25:

One possible reason why key technologies were not picked up as quickly in developing countries until about fifty years ago: most of what is now the developing world consisted of colonies until that time. India didn't get proper steel furnaces because that would compete with Northern England... These countries could for the most part not control their own development, and their "owners" saw little profit in doing much more than was necessary.

26:

Andrew@24: I was no NASA fanboy when I covered them for science magazines in the 1970s and early 1980s, nor am I today. But I do think they get targeted for a lot of frustration and impatience that ought properly to be directed at the physics and economics of space access by anyone, public or private.

Could NASA have made a number of better choices? Sure. Would any sequence of choices have yielded Kubrick's 2001 by 2001? I really, really doubt it. And I expect a lot of the heavy-breathing enthusiasm for alt. space or New Space in recent years will be tempered as experience makes that clear.

27:

I think I can still see more progress than using 70's tech (the shuttle) to go to a tin can in space, though, Monte.

28:

Isn't the difficulty of projecting near-term futures analogous to the difficulty of picking a stock for a quick gain versus choosing a long-term investment? Near-term micro-trends seem more "random", longer-term trends are much more predictable, and far futures are simultaneously less open to examination.

29:

Andrew, the Shuttle isn't 70s tech -- it's late 1960s. It was conceived as a prototype, then pushed into service and patched up to perform a mission that differs radically from its original design goal. It's designed the way it is, with a number of lethal flaws, because those flaws were necessary in order to get USAF support that was needed to get the budget for it through Congress. It is, in short, the classic example of a mouse designed by a committee (and then mis-sold as suitable for the roles of horse, elephant, and performing bear).

The Shuttle is where NASA went wrong. If they'd stuck with the Saturn series rockets, keeping production going and working on improving them and adding features like a reusable first stage, they'd have had a Mir-grade-but-larger space station by the mid 1980s and the ability to fly heavy outer-system probes and manned NEO missions as well as messing around in LEO.

30:

Buckminster Fuller took a look at adoption rates of disruptive technologies in his Critical Path. While we've come to understand a lot of factors other than the ones he focused on (he was largely interested in technologies specific to life support: architecture, power generation, and such), it's still really worth a read.

31:

Charlie@29 "have had a Mir-grade-but-larger space station by the mid 1980s"

Arguably we already had that with Skylab in the 1970's. It could have easily been extended to be quite a large object with plenty of space for living and working. There is something quite dumb about launching space station modules as payloads rather than making a piece of the launch vehicle itself into the payload.

As for the shuttle, the USAF "Dynasoar", which was canceled for political reasons, was a shuttle precursor that would have been launched on conventional rockets and been operational by the early 1970's rather than the 1980's for the shuttle. Of course it is pure speculation as to how useful or reliable it would have been.

32:

I mind back in national school* in the very early 80's we had an ancient comic book (well, from the 1950s) which depicted the US space programme as it then was.

In addition to whitewashing Von Braun's record in proper Paperclip style, it also had a picture of the Dynasoar which made it clear that this was a military project - I remember it had USAF markings rather than NASA insignia.

So I assume the political reasons which led to the Dynasoar's cancellation (brilliant name, btw) were to do with the Outer Space Treaty of 1967, which (IIRC) banned the militarisation of space?

*Irish primary school - equivalent to 'grade school' in US?

33:

DJPO'K@32: Dynasoar was canceled in 1963, a month after the Kennedy assassination. Ostensibly it was canceled because of costs and changing mission profiles for the USAF manned space program. As history has unfolded, we now know that the military never really developed a manned space capability, and that the civilian agency NASA was the primary vehicle for manned spaceflight development. Each branch of the military wanted a program and this would have resulted in a costly duplication of effort. However it is clear in the post Apollo era that manned spaceflight has pretty much gone nowhere for nearly 40 years. Maybe the private space entrepreneurs will change that.

34:

Thanks for the response, Alex. Private space entrepreneurs may be preferable to heavily armed imperialists, but I'm afraid I can't work up enthusiasm for what promises to be little more than a series of expensive jollies for the jaded rich (sub-orbital jollies at that).

35: 19: "Isaac Asimov.... 'I, Robot' stops being readable the moment you find yourself on an interplanetary spacecraft with two intrepid astronauts debugging a faulty AI with slide-rules and graph paper."

Sorry, Andrew Smith. Asimov was right. Someone you may have seen on TV, Buzz Aldrin, was known as "Mr. Rendezvous" within NASA for figuring out how to get a Gemini to orbit-change and dock with an Agena, using graph paper, printed charts, hand-held sextant, and a kind of transparent nomogram stuck in the "window" and lined up with the Agena and the astronaut's eyeball.

As wikipedia summarizes: "A nomogram, nomograph, or abac is a graphical calculating device, a two-dimensional diagram designed to allow the approximate graphical computation of a function. Like a slide rule, it is a graphical analog computation device; and, like the slide rule, its accuracy is limited by the precision with which physical markings can be drawn, reproduced, viewed, and aligned...."

As wikipedia says of Gemini 12: "[On] Gemini XII, new, improved restraints were added to the outside of the capsule, and a new technique—underwater training—was introduced, which would become a staple of all future space-walk simulation. Aldrin's two-hour, 20-minute tethered space-walk, during which he photographed star fields, retrieved a micrometeorite collector and did other chores, at last demonstrated the feasibility of extravehicular activity. Two more stand-up EVAs also went smoothly, as did the by-now routine rendezvous and docking with an Agena, which was done 'manually' using the onboard computer and charts when a rendezvous radar failed.... documentaries afterward largely credit the spacewalk innovations, including the underwater training, to Aldrin himself." Maybe you also recall Armstrong and Aldrin cutting out a faulty computer and doing the first manned Moon landing manually. I say this 3 times. Asimov was right. And Heinlein had people use sliderules in space. Bill Gates take heed: sliderules don't crash.

29: I substantially agree with Mr. Stross, based on the years I spent working at Rockwell International on the Space Shuttle (I worked algorithms; equations; safety; software; hardware; systems; simulation; propulsion; guidance, navigation & control; user interfaces; databases; documentation; R&D; proposals; and much more).

One problem with the Shuttle was not engineering (many brilliant engineers could kludge their way around almost any hardware limitations) but management. Specifically, blatant and widespread criminal corruption. Another problem -- especially sad to say in this Science Fictional venue -- was demented Science Fiction fans such as a Mr. Paul C. Turner, Big Name Fan of LASFS, staunch defender of and co-conspirator with compulsive liar Ronald M. Jones' management-encouraged plagiarism. Fan Turner, famous for literally breaking his neck by sawing off a branch of a tree he was in, boasted that he was an engineer (but didn't know calculus) and a supervisor (he basically carried a clipboard around to collect signatures from the people who were stealing money on each mission's extensive re-wiring of the payload bay). But Paul (an abusive common-law spouse and, like George W. Bush, a "dry drunk") had once put some little lightbulbs in the model Enterprise of a Star Trek film, and a vast ignorant majority prefer what Benford called "Sci-Fi Lite" to the real Space Program.

I think that Mr. Stross has already shown, in an earlier thread, how the laws of Physics almost completely trump the tropes of Sci-Fi, when it comes to manned space flight extrapolated to colonization and galactic migration.

36:

The single and only thing you need to say to summarize and finalize the "shuttle" argument is the following sentence:

The Shuttle is a 100-tonnes-to-LEO launch platform that BRINGS 90 TONNES BACK TO EARTH.

Strip the fracking Winni-space-go off the stack and you have an insta-Saturn-class LEO delivery system. 100 tonnes to LEO = 40 tonnes to Mars. 40 tonnes to Mars = Mars Direct. Google it.
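
A back-of-envelope check on that 100-tonnes-to-LEO, 40-tonnes-to-Mars ratio, in Python. The delta-v and specific impulse here are round textbook numbers I'm assuming, not mission data:

import math

m_leo = 100.0    # tonnes parked in low Earth orbit
delta_v = 3.6e3  # m/s, rough trans-Mars injection burn from LEO (assumed)
isp = 450.0      # s, hydrogen/oxygen upper stage (assumed)
g0 = 9.81        # m/s^2, standard gravity

# Tsiolkovsky: mass ratio the burn requires
mass_ratio = math.exp(delta_v / (isp * g0))
print("%.0f tonnes through trans-Mars injection" % (m_leo / mass_ratio))

That prints about 44 tonnes; knock off the dry mass of the injection stage itself and the 40-tonne figure is in the right ballpark.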

We could have a solar-system-class civilization if we wanted one. But no-one (except us weird sci-fi geeks, and not even all of us) wants one.

37:

Sigh...

38:

Look, people, space exploration/expansion is extremely unprofitable right now. It is the equivalent of fundamental science, funded out of taxes. So you can't possibly expect efficiency from it. Be grateful it exists at all at this point.

I propose to be patient. Technology to make it real is here already, just wait for other stuff to catch up.

39:

Oh, and do invest your money in biotechnology, if you actually want to live to see the "solar-system civilization".

wink

40:

It seems to me that the big thing which stalls innovation in the USA is a large unpopular war. It soaks up resources, and people are unwilling to pay for it.

The total cost of Iraq is getting close to 500 billion dollars. Comparisons are tricky because of inflation, but it looks as though 50 years of NASA cost the USA less than one small land war in Asia.

Of course, whether you're making ammunition or Saturns, the money doesn't vanish downrange. People have to be paid to make the stuff. But I remember reading that, when the B-2 bomber was being built, the USAF bought all the CRT/TV tubes they expected to need for the hardware, because there wasn't a company in the USA that could make them.

41:

Dave: the $500Bn price tag is the direct cost to the US military. If you factor in the indirect costs to the global economy, the price (c.f. Stiglitz) is an order of magnitude higher. We've trashed the economy and infrastructure of a medium-sized middle eastern state, killing hundreds of thousands and driving millions into exile. The price of oil has skyrocketed, and the externalities -- damage to the global economy, a slowdown of manufacturing due to the raised cost of energy -- are vastly greater than the cost of the guns and soldiers.

42:

35: #19: "Isaac Asimov.... 'I, Robot' stops being readable the moment you find yourself on an interplanetary spacecraft with two intrepid astronauts debugging a faulty AI with slide-rules and graph paper."

Sorry, Andrew Smith. Asimov was right. Someone you may have seen on TV, Buzz Aldrin, was known as "Mr. Rendezvous" within NASA for figuring out how to get a Gemini to orbit-change and dock with an Agena, using graph paper, printed charts, hand-held sextant, and a kind of transparent nomogram stuck in the "window" and lined up with the Agena and the astronaut's eyeball.

Remember Michael Foale on Mir; his first thought was to fetch his ThinkPad and fire up Mathematica to work out how to restabilise the ship after the collision and power loss, but then he realised he'd left the lappy in the part of the ship that had lost pressure. He eventually used an actual physical model of Mir to model Mir. You'll have to read the rest for the punchline.

43:

The graph looks like a graph Ray Kurzweil could have made.

44:

@ 40-41:

facepalm

Right, the war in Iraq is what's stopping us from achieving the Singularity.

Seriously, LOL.

45:

alex @ 42: It's amusing that he had so much trouble with the Mathematica password protection. Not really what you want to deal with on a troubled space station.

46:

In 33, Alex comments:

Each branch of the military wanted a program and this would have resulted in a costly duplication of effort. However it is clear in the post Apollo era that manned spaceflight has pretty much gone nowhere for nearly 40 years.

That makes sense in the context of a space race taking place as part of the Cold War. You don't need domestic competition to keep a program honest when there is foreign competition. Things change once the USSR drops out.

There is an intriguing sense in which large organisations, such as the government of the USA, are 'too big'. The people at the very top have fought their way there out of a love of power, but when they pull on the levers of power there are too many stretchy cables and worn links with play. Not much happens.

The people at the top are prisoners of their advisors, who may in turn be prisoners of their advisors. The President says do that, and when progress is too slow and the expense too great, he has little choice but to accept what his advisors say about how hard the technical aspects are.

Things change if the President can preside over a wasteful triplication of effort. He can play off NASA against the Federal Space Effort and the Strategic Space Command. Provided those at the top can pay for creating and maintaining a Pepsi versus Coke dynamic (PVCD), they can actually be in charge. How cool is that?

Maybe this opens up new territory for near-future science fiction. Set the story in an England in which the Prison Service has been split into three: the Penitence and Punishment Program, Nation Jail, and Reform and Rehabilitation Regimes.

Our hero, a graduate of the 3rd National Mechanism Design College, reports directly to the Minister for Asymmetrical Information. His mission is to infiltrate an evil plot, cloakatively to save money by sharing costs between prison services, but actually to deprive elected politicians of their power over punishment.

Local/future colour is provided by the three-way-split motif. There is no longer a National Health Service. Instead Longlife, Health Care for All, and the Disease Control Commissioners are all free at the point of use and funded out of general taxation.

One source of dramatic tension is the rivalry between the hero and his nemesis, who graduated from the 1st National Mechanism Design College. Both of them know that the Minister for Asymmetrical Information will also have put a graduate of the 2nd NMDC on the case; where is he?

Previously science fiction lived off radical technical change. That is now coming too quickly, but Game Theory/Mechanism Design is evolving into an agent for technologically based, radical social change. Such change is slow. The author of "Three Way Split" gets to write a book set in a future that differs from the present in interesting and controversial ways that will not soon be overtaken by events.

47:

Charlie, I'm quite struck by your "unremarkable as wallpaper" figure of speech. In the western United States, finding a room with wallpaper in it is actually pretty remarkable -- probably almost as rare as finding a house that still has a hard-wired black dial telephone with no push buttons. Painted gypsum wall board ("sheetrock") is the norm, unless you're quite rich, or live in a bizarrely old house.

I'm guessing that housing stocks in Britain tend to be a fair bit older, on average? So maybe wallpaper's still pretty unremarkable there.

I'd argue that wallpaper is another thing that's been blindsided by the future, probably because of the difficulty of gluing the stuff up, as compared to just slapping on a coat of water-based interior latex paint.

48:

Alan@46: "That makes sense in the context of a space race taking place as part of the Cold War. You don't need domestic competition to keep a program honest when there is foreign competition. Things change once USSR drops out."

Which may be why we are seeing some small signs of life again now that the Chinese are threatening to land on the moon.

In some ways, I'm more hopeful of the results from the private space guys. We've seen many different and interesting designs built and tested for what is essentially a pittance. The Rutan design is now the basis for Virgin Galactic's sub-orbital shots. It still amazes me that Spaceship One had a performance envelope comparable to the X-15.

Shan@36: Freeman Dyson pretty much made the same point in one of his books. He felt that if the life-support were ripped out, the shuttle would make a much better vehicle; plus much of the refurbishment could be eliminated, reducing costs substantially with faster turnaround times. Why are humans piloting space trucks anyway in the C21st? Something wrong with computers?

49:

Anatoly: space exploration/expansion is extremely unprofitable right now.

This is precisely the kind of problem that a huge advance in communication technology like the Internet is bound to solve. The internet is changing humanity into a huge hyperactive superbrain. Indeed, since the inception and widespread adoption of the internet, we have witnessed the introduction of several awesome technologies, e.g., wi-fi, social networks, e-voting, genome decoding, genetic engineering, cloning, advanced robotics, etc...

I propose to be patient.

It won't be long now.

50:

There's an old saw that if a Congress of Apes had been formed a million years ago to predict the next stage in evolution, they probably wouldn't have predicted Homo sapiens. The future is not only evolutionary, it's revolutionary; something new comes along all the time that no one expects. The basis of Singularity modeling is that these "revolutionary" changes are coming faster and faster, closer and closer together, so that after the "Singularity" in 2012 (or 2040, or whenever it happens) we have no idea whatsoever what the world will be like. Rocket cars, personal jet packs, videophones, etc., are projections of current technology into the future. But by the time we get there something so revolutionary will emerge that these will be discarded like yesterday's toys.

52:

@51: "Here, it is argued that technology is in a lull, as most estimates for this decade, made in the 1990s, have been found to be over-optimistic. Most of Kurzweil's 1999 predictions for 2009 are going to be found to be too ambitious."

Short term predictions are usually optimistic regarding trends, whilst long term ones are unduly pessimistic. The same problem is stated in another way by Orion@50.

53:

The graph looks like they combined cell phones with mobile phones. I'd like to see a chart with only cell phones--that would be much quicker than 16 years. (Remember, the original mobile phones were radiotelephones that didn't have the capability to switch reception automatically from one tower to another.)

54:

In the western United States, finding a room with wallpaper in it is actually pretty remarkable -- probably almost as rare as finding a house that still has a hard-wired black dial telephone with no push buttons. Painted gypsum wall board ("sheetrock") is the norm, unless you're quite rich, or live in a bizarrely old house.

What are you smoking? Lowe's has an aisle and an end cap devoted to wallpaper and a second end cap devoted to textured wallpaper. That's about half the space devoted to indoor paint.

The amount of space is a pretty good indicator of profit. I doubt that wallpaper margins are so much higher than paint margins that the sales ratio isn't roughly the same as the space ratio.

Wallpaper is rare in brand new inexpensive houses, but those are rare throughout much of the west, at least where people live.

55:

@41: To the extent your list of consequences is accurate, one sees from it how dangerous it is for the world's rulers to fail to destroy men who threaten the Americans' wellbeing or dignity. If the Americans' self-defense is as disruptive as you say, then it seems good for foresighted men to move swiftly to destroy the Americans' enemies as soon as they raise themselves, and before the Americans take matters into their own hands.

56:

Poor people have mobile phones, and got them more quickly than they got railroads. Ergo, life in the universe is about to be transformed into something fundamentally different we can't possibly imagine.

"The Singularity is the Rapture for nerds," indeed.

Here's the dirty little secret about the Singularity: Ray Kurzweil is going to die in bed in thirty to fifty years. The moment he kicks off he'll have a computer that's ridiculously fast by today's standards, he'll film his death with the integrated ultra-high-definition camera on his phone, and he'll probably be getting a wicked hogsmoke from the Fellatiobot 9000, but he'll still be as dead and disappointed as every prophet of the end times who came before him.

57:

Hey, wallpaper is interesting. Ya gotta have walls that are straight, adhesives that stick over a narrow range of humidity and temperature, HVAC systems to maintain that environment, and a wife to make you put that stuff up, then replace it after five or so years.

But you're the writer, so create wallpaper that changes colors and patterns -- bright in the AM in your interior hallways, maybe with characteristics of a widescreen TV, and so forth. This afternoon I want an Alpine scene: motion video, the sounds of crunching snow, maybe an avalanche; later, give me a cascading brook down the hallway into my living room. Tomorrow morning get me psyched up by simulating a stadium hallway as I, the champ, swagger off to battle.

Wallpaper has possibilities. Just be careful about the glue; I like pre-pasted, but tastes vary.

58:

Waiting for practicable voice recognition. Other than answering prompts on phone menus, the technology has not lived up to the promise.

59:

I had to laugh, Daniel @ 47, as I was sitting next to my hard-wired black rotary dial phone as I read your post. You can enjoy the past while living a "future" life. And what a ring!

60:

Warren Ellis said much the same thing a few years ago, regarding his "Global Frequency" property. When he wrote the comic book, mobile phones were still (relatively) big, clunky things that could barely make a phone call--cameras in your phone was, like, whoa. GF had these superphones that could take video, tie into GPS, patch in remote camera views, access the internet, do all sorts of things.

A few years later he tried to make it into a TV series, and he found that not only were off-the-shelf commodity phones doing everything the GF superphones did--they were actually doing it BETTER.

61:

Waiting for practicable voice recognition.

Orwell thought we'd have it by 1984.

62:

29 "The Shuttle is where NASA went wrong. If they'd stuck with the Saturn series rockets...."

But of course this was simply not an option. The general feeling at the time was that "Apollo is over," and thus it was politically necessary for NASA to come up with something entirely different for a follow-up manned program. Continuing Apollo or Saturn simply wasn't in the cards, and if NASA's leaders had attempted to do so they would have been sacked.

You may think the space shuttle program sucked rocks. In the light of history, perhaps it did. But it was the most ambitious manned space project American politicians were willing to sanction in the early 1970's, and about the only one they were willing to accept until 2004.

From a science fictional standpoint, this business of being stuck in LEO has been very sad. On the other hand, that was a policy decision made by democratically elected presidents and congressmen, exercising their Constitutional powers in a legal and proper manner. Shouldn't we rejoice that the forms have been so well followed, instead of kvetching that NASA's civil service bureaucracy hasn't run amuck and mutinied?

63:

I'll tell you why that graph is so striking, Charlie: because it's rubbish. Their choice of "new technologies" is a classic example of what evolutionary biologists call the retrospective fallacy: classifying significant aspects of past objects based on what's important today.

If we look back 100 or 200 years, there are really only few technologies that stand out. Looking back 20 years, we see more things, but it's an illusion based on our lack of perspective. "Personal computers" and "Internet use" aren't comparable to "Aviation" as a category. If you wanted to apply the same standards across the board, you'd call mobile phones part of "Radio" or "Telephony", in which case they were invented over 100 years ago.

I've seen quite a few of those graphs, and they all make that mistake, including the ones used to predict the Singularity. Anyone who thinks that the discovery of fire and the first mobile phone carry equal weight as significant cultural events probably needs to get out more...

I'd put one new technology in the last 50 years: consumer electronics.

64:

Idea and graphs stolen from Kurzweil's book on the Singularity.

65:

We have quite good voice recognition. It's been in wide use for many years by those who have the genuine need.

What we don't have is good voice comprehension by computers. This limits the value of what can be done with voice recognition to those lacking the ability to use more practical interfaces.

AI is one of those technologies that have been just around the corner my entire lifetime. When I was a toddler guys like Minsky and Papert were making outlandish promises that remain just as far fetched today. Moore's Law is no help because we simply haven't any idea how to do it no matter how many transistors are in the box. Modern software may seem more clever but that has far more to do with immensely expanded resources than any major advancement in the underlying structure.

I'm not holding my breath on AI.

66: 62 --

If NASA hadn't, for example, screwed up the DC-XA/X-33 development program so badly, blaming the politicians might be plausible. As it is, I'm going to have to blame the last three and a half decades and fourteen dead astronauts on NASA incompetence, not political constraints.

67:

You could put the roots of the mobile phone at about 1942, with the US Army's SCR-536 system from Motorola. Known at the time as a handie-talkie, with the walkie-talkie being a backpack radio.

"Fox One Niner, over."

My camera is older than I am, I can't figure out how to send a text with my mobile phone, and on a bad day I give my postcode as "Dog, Nan, three, eight,..."

The singularity is here, and for my father it's the TV remote.

68:

Videophones aren't ubiquitous because nobody has made a screen that can integrate a camera in the middle of the display without impeding it.

If you can't look at the picture of your conversational partner and see him looking back at you, there is little point in a videophone.

When videophones let people actually look at each other, they might end up being more popular.

-Ray

69:

epobirs @ 65:

AI is not around the corner. No one even knows what intelligence is. So how can we create an artificial one?

70:

This post suffers from the same basic mistake that all writing about the Singularity does: it confuses inputs with outputs.

If you want to assess the impact that technology has on the world, it doesn't matter how fast your computer's processor is, or how cheap mobile phones become, or how long it takes for wireless broadband networks to cover the globe. Those are inputs. What matters are the outputs: productivity and, ultimately, human happiness.

My mobile phone's processor is thousands of times more powerful than that of my first computer. I wouldn't say that my life is thousands of times better now. The video games I play on my xbox 360 are millions of times more complex than the Atari games I played in 1983. I wouldn't say that I'm having a million times more fun. My monitor is a high resolution lcd displaying millions of colors, and yet my work hasn't improved much since I was banging it out on a monochrome monitor.

My life is unquestionably better today thanks to technology. It's great that I can use my phone to post this on a boring morning before work starts. But when it comes to what's really important - love, family, friendship, joy, fulfillment - that stuff is the same as it has always been, and always will be, even when I'm using a computer a million times more powerful than the one on my desk.

71:

Anatoly@69: And even if we did, why would we make an AI? We already have billions of intelligent beings around. Enhancing a picked group of the ones we've got is more sensible.

72:

Andrew @71: Train the AI to a task, copy it a million times: instant workforce. Or army.

73:

ChrisL@63: bingo on the "other end of the telescope" effect (obElvisCostello). It's trivial to come up with another list, just as tendentious in the other direction, to show how many technologies have reached plateaus -- or "stagnated," for extra tendentiousness points. Variations are abundant at Paleo-Future, Tales of Future Past... the whole "dude, where's my jet pack?" or "fusion is always 30 years away" meme.

I claim an early data point: Arno Penzias, then leading AT&T research, pointing out in an OMNI interview I did in 1979 or 1980 that the biggest obstacles to wide use of videophones were more psycho-cultural than technical.

74:

NelC@72:

Slavery is abolished. 8-)

75:

(Been away from home, having a life: Siouxsie was playing in Glasgow, so I've been away from the computer for >24 hours ...)

Mike @57: Hey, wallpaper is interesting. Ya gotta have walls that are straight, adhesives that stick over a narrow range of humidity and temperature, HVAC systems to maintain that environment, and a wife to make you put that stuff up, then replace it after five or so years.

Are you nuts?

Sorry. Wallpaper is cheap, glues that stick the stuff to plaster or wallboard (aka sheetrock) have been around for a couple of centuries, and they work fine in any temperature I'd consider habitable, from sub-zero up to thirty-plus. HVAC is vanishingly rare in the UK, but I'd say wallpaper accounts for about the same proportion of domestic indoor wall covering as paint does. Finally, if it's done right, you don't replace it after five years. There are stately homes in this country, open to the public, where the wallpaper is part of the exhibit, and it predates the US Declaration of Independence.

As for straight walls, tell that to whoever wallpapered our living room. Which is elliptical, with a semi-major axis 18 feet long, and dates to 1829, before the plumb-bob and set-square were invented (or so it would seem if you examine the weird non-euclidean geometry and concave flooring -- ahem). (NB: the wallpaper in question is only a decade old; it's the flat that's been around for close to two centuries ...)

You're describing a regional culturally-determined variable and assuming it's a global constant. Please don't do that.

Earl @70, et al: those of you who think I was posting about the Singularity Rapture of the Nerds are engaging in projection: the primary point I was making (throwaway aside) was that this sort of rapid uptake of new technologies makes life very difficult for writers of near-future SF. And, by extension, why it makes it so tempting for them to rush around in circles screaming "oh noes! The Singularity is coming!" rather than actually engage with the near future. See also DensityDuck @60, who gets it. (For a grip on my current perspectives, throw away that copy of ACCELERANDO and read HALTING STATE.)

(Where are all you strange folks coming from, anyway? I see a lot of new handles around here today ...)

76:

Andrew@71: And even if we did, why would we make an AI? We already have billions of intelligent beings around.

You're kidding me? Imagine this: all ground vehicles are self-driving. As a result, we prevent over 40,000 traffic-related deaths every year in the US alone. And this is just the tip of the iceberg of the benefits of AI. I personally would not mind having a synthetic housekeeper that can change almost instantly into a sushi chef or a masseuse by downloading the needed expertise from the net.

77:

Earl@70:

"Love, family, friendship, joy, fulfillment" have very little to do with technology. You could have them all in a cave at 15000 BC.

78:

@Charlie:

Here is an unexpected suggestion: write a book set 10 years in the future. You'll avoid breaking the horizon.

I'm not sure it will qualify as SF, though... 8-)

79:

Mapou@76:

What do self-driving vehicles have to do with AI? Based on the DARPA challenges, I dare say self-driving vehicles are possible right now, with enough money. It is not AI, just clever programming.

80:

Good thread!

(1) It is very cool to see Monte Davis posting here. I was also having cover articles published in OMNI in 1979 and 1980. Paid damned well, and had many wonderful editors and writers and (millions of) readers.

(2) Re: #62, #66: "... blame the last three and a half decades and fourteen dead astronauts on NASA incompetence..." I've detailed this on Mr. Stross's blog and elsewhere, so won't sing the same song again. But the name of the song is: "it was not accident or bad luck or incompetence that killed 14 Space Shuttle astronauts; it was criminal negligence, by criminals who are still lining their pockets and failing to prevent the next fatalities."

(3) Getting back to Solipsism and the Singularity, let me first quote Russell Letson from Locus:

"Probably the best reason for the existence of this volume is to juxtapose three of Heinlein's most striking nightmare visions. 'They' and 'All You Zombies' are variations on the solipsism theme that shows up throughout Heinlein''s career (notably anything featuring Lazarus Long). 'The Unpleasant Profession of Johnathan Hoag' runs some metaphysical games with links to both of the solipsism stories and finishes with an image (the loving couple sleeping handcuffed to each other) that complements the curtain line of 'Zombies'. Maybe these explorations of the extremes of the human condition are important because they are so unlike what we think of as 'Heinlein SF.' Fantasy, after all, is the back door to the mind, and tracing the connections that link these stories to each other and to the rest of his work reveals a good bit of what made Heinlein tick."

Of course, we'll understand more when Mr. Stross releases his New "Late Heinlein" fiction.

Wikipedia, in the Heinlein article, mentions:

"In To Sail Beyond the Sunset, Heinlein has the main character, Maureen, state that the purpose of metaphysics is to ask questions: Why are we here? Where are we going after we die? (and so on), and that 'you are not allowed to answer the questions.' Asking the questions is the point for metaphysics, but answering them is not, because once you answer them, you cross the line into religion. Maureen does not state a reason for this; she simply remarks that such questions are 'beautiful' but lack answers. Maureen's son/lover Lazarus Long makes a related remark in Time Enough For Love. In order for us to answer the 'big questions' about the universe, Lazarus states at one point, it would be necessary to stand outside the universe."

(4) The Singularity happened over a Googol years ago. You are a simulation made of a dilute ambiplasma (electron-positron gas). Of course you don't remember this. But you should remember that I've posted comments on this blog as to my edited published article on the subject, as used in fiction by Greg Benford.

"Human Destiny and the End of Time" by Jonathan V. Post [Quantum, No.39, Winter 1991/1992, Thrust Publications, 8217 Langport Terrace, Gaithersburg, MD 20877] ISSN 0198-6686

This was rediscovered a decade later with great plagiaristic fanfare in "Are You Living in a Computer Simulation?" Nick Bostrom, Department of Philosophy, Oxford University. Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243-255, and many press releases, interviews, New York Times articles, New Scientist references, and defamations by the Cult of Bostrom. Last warning, Nick baby: next time we leave you out of the simulation. But let the Heinlein "Jonathan Hoag" simulation [1942] have the Taj Mahal next time; he really likes it.

81:

54, actually, wallpaper is rare now because it has gone out of style. Same with ceiling fans. It still sells a little, mostly to older people who don't know better. But most (expensive) remodels I do now involve removing large quantities of old wallpaper, not to be replaced.

Bel

82:

Anatoly@79:

I'm sure that a viable automated traffic system can be implemented in a rigid and constrained environment without AI but it can never be deployed on a massive scale because of complexity, liability and cost issues.

We need an intelligent system that is as good as (or better than) humans at telling the difference between animate and inanimate objects. We don't have that yet and I'm afraid that tweaking millions of lines of code manually will not do it. There are a zillion other hard problems that a self-driving vehicle must solve (e.g., recognizing gestures from road work crews or spoken commands from passengers). The DARPA Urban Challenge experiment merely scratched the surface. We need AI, IMO.

83:

This is, as they say, a very interesting graph

It's much less interesting than it could have been, due to both the Economist and the World Bank engaging in some lazy and unnecessary data agglomeration. Anybody know the best way for me to go about wheedling a copy of the CHAT data set (the original source of this stuff) from its compilers, the economists Diego Comin and Bart Hobijn? I'd like to have a go at doing something a little better.
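
In the meantime, here's the kind of disaggregation I have in mind, sketched in Python; the file name and column names are placeholders I'm assuming, since I don't know CHAT's real schema:

import pandas as pd

# Hypothetical long-format export: one row per (country, technology, year).
# Assumed columns: country, technology, year, adopted -- not CHAT's real schema.
df = pd.read_csv("chat_export.csv")

# Year each country first reports using each technology.
first_use = (df[df["adopted"] > 0]
             .groupby(["technology", "country"])["year"]
             .min()
             .reset_index(name="first_year"))

# The full per-technology spread of adoption dates, rather than the single
# agglomerated 80%-of-countries lag the World Bank reported.
print(first_use.groupby("technology")["first_year"].describe())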

84:

Bel@81: The ceiling fan is still going strong in urban Australia. Cultural norms strike again (do people really have ceiling fans in the UK? What on earth for?)

Although "urban Australia" is something of pointless phrase, it being the most urbanised country in the world...

85:

Anatoly @78: you haven't read HALTING STATE, have you? Sigh ...

86:

Daniel Boone @47, modern wallpaper isn't that hard to put up. Haven't you seen wallpaper borders at the top of some rooms? I'm not that fussy, but people who want wallpaper and are moderately competent with household work can easily put it up themselves. And the modern wallpaper comes down easily, too.

Monte @73, I see you also continue to capitalize OMNI properly! I did the day-to-day management of the GMI forums on AOL, which included OMNI.

Mapou @76, there are already self-driving cars. They're not ready for the streets yet, but the yearly competitions are bringing better and better vehicles.

Bel @81, in my area, most houses have ceiling fans. Sometimes you have to go with the economics instead of fashion.

87:

For me, the eye-opener about AI was a story in which the AIs could easily make off-site backups of themselves. So they couldn't "die". And when the war with the big bad started, the humans were shocked to learn that every missile carried an AI, who would squirt themselves into backup as they started a suicide run on an enemy ship. And since each AI was smart enough to run the whole war, they didn't moke masstakes...

The handwaving was GP standard, but I'm not sure if it was "Doc" Smith or not. One of the standalones, if it was.

88:

The original graphs are wrong - in one important point, at least. The first steam-powered railways in Britain (& therefore the world) were about 1815 - Middleton Colliery, Leeds, followed by the S&D in 1825, and the first "proper" railway, the Liverpool & Manchester, in 1830. By 1850 it was easily possible to go from London to Glasgow or Edinburgh or Swansea or Plymouth by train. The first US transcontinental was (?) 1869 (?). So, from introduction to worldwide usage in 50 - 75 years, not 125.

As for the big S - how long, really? We need (at least) TWO sets of technologies to merge - IT/IA and wetware, and probably improved transportation of physical objects. I suspect that the passage is going to be very rough, and a lot of people will be permanently killed, especially as the religious nutters, of all stripes, will fanatically resist the change.

89:

Charlie, I think that you could do with reading David Edgerton's The Shock of the Old. Review and interview here:

http://www.hughpearman.com/2007/01.html

90:

One example I noticed today.

Watching the 1988 anime, Appleseed, inspired by the Masamune Shirow manga, the world depicted is a little odd when seen from twenty years later.

They have all sorts of hi-tech, most obviously forms of power armour and industrial exoskeletons, cyborgs, and constructed people. But people use what amounts to a smart typewriter, and put printed letters in the mail. They have videophones, and desktop terminals which can record the message, but they don't have mobile phones. Somebody has a huge wall-screen, but is manually checking every record in a huge database.

There's a few common threads running through Shirow's works. A lot of the tech is the same in Ghost In The Shell, but that does have something like an internet, manipulated by hackers. Unfortunately, it's set in 2030, and I don't think there's really enough time left to rebuild Motoko Kusanagi.

91:

Dave, it's what Charlie covered in a previous post - predicting which technologies would enter a period of rapid growth, and which would grow linearly. The example he used was the post-WWII period, where transportation had been going up a steep curve, only to level out. What did happen was that transportation became more widespread - more people had cars, more people flew. Meanwhile, the computing and communications revolutions were not anticipated.

Of course, there are also visuals to think of - in 'Halting State', for example, the police (and everybody else) are seriously cyberinfoenhanced. No powered armor, though.

92:

It's not just SF novels that suffer from this problem. Ordinary movies, with their very long development and lead times, are starting to suffer from it.

Consider 2003's Phone Booth, whose central plot point involves a phone booth in downtown Manhattan, or the just-released Be Kind Rewind, about a video store full of VHS tapes. You can just picture studio executives having a now-or-never moment as they dust off old scripts and push them into production before technological change makes them worthless.

93:

If videophones ever catch on, they will be a separate category from ordinary phones, in much the same way that television and radio are different categories.

You can listen to the radio while stuck in rush hour traffic in your car, but you can't watch TV. Similarly, you can talk on the phone in your underwear while folding laundry, or on the go while walking down the street, where video would be superfluous or undesirable.

Videophones do not lend themselves to spontaneous, unplanned calls. One company reportedly implemented videophones over its LAN for calls between employees, and was dismayed to find that people would let phones ring much longer before answering, because they would primp in a mirror and fix their hair before taking the call.

Instead, videophones will likely find their niche in prearranged "appointment" calling: the Christmas morning call when the children say hi to Grandma, business conference calls, or inventive variations on phone sex. It's unlikely that people will buy dedicated videophones; rather, when your television becomes a giant flat-screen display panel occupying much of your wall, it will have videophony capability as an add-on. We'll need much cheaper flat panels and ultra-high-bandwidth Internet for this to catch on, and then it will be like talking to someone in the next room.

However, the bread-and-butter thirty-second "I'm here, where are you?" phone calls will remain voice-only.

94:

That list doesn't compare like to like. Specifically, it compares producer inventions which are very "lumpy", very capital intensive and which have high economies of scale to those which have far fewer of those characteristics.

E.g., an open-hearth steel furnace (usually producing bulk steel from liquid blast-furnace iron) vs. an electric-arc one (usually producing from scrap in smaller batches).

Of course the latter spreads more rapidly -- it's cheaper and more flexible, and doesn't require as much in the way of backward linkages (iron-ore mines, railways) and huge markets.

The list also doesn't take into account the long-term investment in basic infrastructure and education which allows technological diffusion to occur more rapidly.

When the open-hearth steel furnace was developed in the 1860-1880 period, only a few countries were above the ox-cart level: some areas of western Europe, North America, and (newly) Japan.

Open-hearth furnaces spread rapidly to -those- countries. Even India had a modern steel mill by 1910.

Proportionately more of the world is above the abysmal-primitive-gourd-sheath-on-its-dick level now.

To compare like to like, look at the speed with which magazine rifles (1880s-90s) were adopted vis-à-vis personal computers, both being consumer durables.

The rifles were all over the world in less than a decade. By 1900, artisans in Waziristan and Kabul were making copies with hand tools.

Or look at radio receivers; those spread very rapidly too. Or movies -- film projectors could be found in places as remote as Afghanistan and Tibet by the 1920's.

When you compare apples to apples rather than apples to oranges, the speed of diffusion hasn't increased much, if at all.

95:

And that, readers, is the first time I've agreed with an entire Steve Stirling post since about 1997.

96:

Interestingly enough, I was at a William Gibson reading yesterday, and he took quite a different approach to the technology problem: of course yesterday's science fiction novels are quaint, but that's really not what it's about. SF isn't prophecy; it's a statement about the state of the world as of the time of writing.

He also said that Bruce Sterling, back when Bruce had published only one book and Gibson none, came up to him, patted him on the back, and said: "We're going to have the greatest job ever: we can be charlatans."

[Also, I had to ramble a bit about human stupidity and the future, but I'll spare you the details; you can look it up on my page if you're terribly interested.]

97:

I strongly disrecommend 'The Shock of the Old', mentioned above: a very long and tedious debunking of a ludicrous straw-man model of how technological progress works, one that no-one who thought about it for more than about 30 seconds would entertain for a moment.

98:

Mapou@76: Precisely the sort of apps where you want a very fast expert system and not a true AI.

anonymous@93: Actually, it's less of an issue in terms of bandwidth than you think. With a fast processor and an image map with a limited number of plotting points, you can make a pretty good construct of a face.
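
To put rough numbers on that point, here's a back-of-the-envelope Python sketch comparing raw video frames against shipping only facial landmark coordinates and letting the far end animate a stored face model. The frame size, frame rate, and 68-point landmark count are illustrative assumptions on my part, not measured figures:

    # Bandwidth of raw video vs. a landmark-driven face construct.
    # All constants below are assumptions for illustration only.

    FRAME_W, FRAME_H = 320, 240   # modest video frame
    BYTES_PER_PIXEL = 3           # uncompressed 24-bit colour
    FPS = 15                      # frames (or landmark updates) per second

    LANDMARKS = 68                # points in the assumed face "image map"
    BYTES_PER_POINT = 4           # two 16-bit (x, y) coordinates

    raw_bps = FRAME_W * FRAME_H * BYTES_PER_PIXEL * FPS * 8
    landmark_bps = LANDMARKS * BYTES_PER_POINT * FPS * 8

    print(f"raw video:      {raw_bps / 1e6:.1f} Mbit/s")    # 27.6 Mbit/s
    print(f"landmarks only: {landmark_bps / 1e3:.1f} kbit/s")  # 32.6 kbit/s
    print(f"reduction:      {raw_bps / landmark_bps:,.0f}x")   # ~847x

Even before any video compression, the landmark stream is nearly three orders of magnitude smaller, which is why the receiving end's processor, not the pipe, becomes the bottleneck.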

99:

Jonathan@35: Asimov, by his own admission (which is part of the point - woosh!), was not trying to be right; he was indulging himself and his readers with stories about intellects that are utterly constrained by certain arbitrary rules (the "three laws of robotics").

Of course you can do Newtonian physics with graph paper and a slide rule. Newton managed it with little more than a quill pen.

Debugging a MIND is a problem of vastly greater difficulty, one which is not tractable with simple calculating aids and two-dimensional representations, however nicely ruled. If it were, we would already have strong AI.

100:

Andrew @99 > Debugging a MIND is a problem of vastly greater difficulty, one which is not tractable with simple calculating aids and two-dimensional representations, however nicely ruled.

You should meet my neurologist.

101:

An aside on ceiling fans (surely an entirely pedestrian tangent, but still): In the McMansions still being built in the US that have ridiculously high 'cathedral' ceilings, a fan is a very efficient way to circulate the air within a space and prevent the warm air from pooling near the ceiling, and the cold air from settling near the floor.

As such, they are a very reasonable adjunct to an HVAC system and can easily reduce heating and cooling costs by 25% or more.
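
As a toy illustration of what that figure means in practice (the baseline bills below are assumptions for the sake of arithmetic, not data from any study):

    # Rough annual saving from ceiling-fan destratification,
    # applying the 25% lower bound claimed above to an assumed bill.
    heating_cost = 1500.0   # assumed annual heating spend (USD)
    cooling_cost = 900.0    # assumed annual cooling spend (USD)
    saving_fraction = 0.25  # lower bound of the claim

    saving = (heating_cost + cooling_cost) * saving_fraction
    print(f"estimated saving: ${saving:.0f}/year")  # -> $600/year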

102:

Michael @101, in condos, too. A lot of my heat goes upstairs to my neighbor, and he has a living room ceiling fan to bring it back from his ceiling.

103:

Anatoly @79 wrote, of self-driving vehicles: "It is not AI, just clever programming."

Which I consider a fascinating observation, as well as perhaps the primary reason we'll never have AI. Because, once we have it, we'll say "it's not AI, just clever programming."
