
Can We Merge Minds and Machines?

In my science fiction novels, Nexus and Crux, I write about a technology ('Nexus') that can send information into and out of human brains, letting people share what they're seeing, hearing, feeling, and even thinking with one another, and letting human minds exchange data with computers.

The early versions of that sort of technology are real. We've sent video signals into the brains of blind people, audio into the brains of the deaf, touch into the brains of the paralyzed. We've pulled what people are seeing, their desired movements, and more out of the brains of others. In animals we've gone farther, boosting memory and pattern matching skills, and linking the minds of two animals even thousands of miles apart.

I recently gave a TEDx talk on linking human brains, covering the science in this area and where I see it going. You can watch the video below.

Now, just as with AI, I don't foresee this leading to a Singularity. If anything, the feedback loop here is slower than in AI. We're extremely loath to get things wrong when tinkering with humans, and the safety bar for surgically implanting something in the human body (let alone the human brain) is extremely high.

Indeed, brain surgery itself is the biggest barrier to progress here. We're going to need new, less invasive ways to interface brains and electronics if we ever want this to take off. In Nexus I proposed doing this with self-assembling nano-structures, each component of which is small enough to cross the blood-brain barrier. It's a barely-plausible hand wave. Real neuroscientists, however, have somewhat similar ideas. Rodolfo Llinás, who was Editor-in-Chief of the journal Neuroscience for 20 years, has proposed inserting somewhere between tens of thousands and millions of nanowires into the brain by sliding them in through an artery somewhere else in the body. That approach needs no brain surgery at all.
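As a rough sanity check on the scale involved, here's a back-of-envelope sketch. The wire diameter, wire count, and artery size are round numbers assumed for illustration, not figures from Llinás's proposal:

```python
# Back-of-envelope: how much of an artery's cross-section would a bundle
# of a million nanowires occupy? All numbers are illustrative assumptions.
import math

wire_diameter_m = 1e-6       # assume ~1 micron per wire
n_wires = 1_000_000          # the upper end of "tens of thousands to millions"
artery_diameter_m = 4e-3     # a medium artery, roughly 4 mm across

wire_area = math.pi * (wire_diameter_m / 2) ** 2
bundle_area = n_wires * wire_area
artery_area = math.pi * (artery_diameter_m / 2) ** 2

print(f"bundle cross-section: {bundle_area * 1e6:.2f} mm^2")   # ~0.79 mm^2
print(f"artery cross-section: {artery_area * 1e6:.2f} mm^2")   # ~12.57 mm^2
print(f"fraction of lumen occupied: {bundle_area / artery_area:.1%}")  # ~6%
```

On those assumptions the full bundle occupies only a few percent of the vessel's lumen, which is why the idea is at least dimensionally plausible.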

If this field does move quickly enough, it presents an alternative model to the 'Race Against the Machine' where humans see themselves eclipsed over time by digital technologies. A recent post on big data used the term 'Race With the Machine' and I think that's exactly what's possible here, wherein advances in AI and all other fields of computing essentially enhance human abilities.

Or you can think of this the way some commenters on 'The Singularity is Further Than It Appears' thread did. This is IA - Intelligence Augmentation (or perhaps more accurately 'CA' - Capability Augmentation) - which allows humans to latch onto the advances being made in digital technology and draft off them, gaining capabilities as they go.

There's an excellent case that this is what's happening today. My phone (which possesses more computing power than all of NATO did at its formation) doesn't do much thinking on its own. But it does work as a cognitive prosthesis for me, boosting my capabilities, whether it's directly plugged into my brain or just interfacing through the old-fashioned I/O of my eyes and fingers.
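That parenthetical survives even very loose arithmetic. A sketch, with orders of magnitude assumed on both sides:

```python
# Loose order-of-magnitude comparison; every figure here is an assumption.
# NATO formed in 1949, when ENIAC-class machines managed thousands of
# simple operations per second and only a handful of them existed.
ops_per_machine_1949 = 5e3      # generous for a 1949 computer
machines_worldwide_1949 = 20    # also generous
total_1949 = ops_per_machine_1949 * machines_worldwide_1949

phone_ops = 1e9                 # conservative for a 2014 smartphone

print(f"all of 1949: ~{total_1949:.0e} ops/sec")
print(f"one phone:   ~{phone_ops:.0e} ops/sec")
print(f"ratio:       ~{phone_ops / total_1949:.0e}x")   # ~10,000x
```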

In any case, I encourage you to watch the video.

And if you want to learn more, I discuss the science behind neural interfaces in the afterwords to Nexus and Crux, and even more so in a pair of chapters in my first non-fiction book, More Than Human.

No technology, of course, lives in a vacuum. You'll note that in the TEDx talk I'm rather optimistic about the impact that greater communication abilities can have on society. But there are certainly darker scenarios for the future of information technology that have been put to the page, with 1984 and riffs on that theme high among them. By analogy, in Nexus and Crux much of the action is driven by a War-on-Drugs / War-on-Terror style crackdown on human enhancement technologies. I'll be back later this week to talk about those political aspects of technology, and how much our world does or doesn't resemble 1984.

And thanks again to Charlie for having me here.

--

Ramez Naam is the author of Nexus and Crux. You can follow him at @ramez.

66 Comments

1:

"The Street finds its own uses for technology." I can't imagine that won't apply to the ramp up to full artificial intelligence, MMI, and the like.

Your novels do hit the War on Terror metaphor, hard, but I see where that comes from. It is A possible future and reaction to this sort of tech.

2:

Hmmm, I'm sitting here in the middle of my external shared brain 1.0 (my book collection), staring at my external shared mind 2.0 (my computer, hooked to the internet, hooked to your eyeballs), and wondering why people keep insisting that external shared brain version 3.0 isn't vaporware, that it will be released "in a few years," and that it will be so much better...

3:

To get some mucky biology into the technology: if you think the antibiotic age is about to end, doesn't that mean that the feedback loop is going to get even slower?

Certainly the chance that any persistent implant might develop an infection - with a chance of death - makes the benefit required before I'd have an implant much higher!

The cybertech future might have been called off by people not finishing their courses of medication...

4:

I think augmentation will be slow to spread outside of medical purposes. Not because it won't work, but because it has to compete with the second-best alternative that doesn't require invasive surgery (unless the stuff you mention works out, at which point that doesn't apply).

I brought this up way back, but think of the choice between wearing some future version of Google Glass, and having implants put directly into your retina. If the Glass 10.0 does a pretty good job without requiring surgery to install it and repair it, then you'd have to really need the benefits of the retinal implant for it to get chosen in most cases.

. . . Of course, as you mentioned, there may be less invasive ways to do this. I hope that's the case.

Certainly the chance that any persistent implant might develop an infection - with a chance of death - makes the benefit required before I'd have an implant much higher!

There are some risks with it right now, even with functioning antibiotics*. Most hip and knee replacements, for example, go off pretty well, but about 1-2% of them need follow-up surgery because a biofilm of bacteria formed on the implant (and biofilms are very difficult to kill).

* For the record, I don't think we'll see the End of Antibiotics. Good hospital procedure and restrictions on their usage could get the rates of resistant strains down a lot, like how Israel got CRE infections down after some outbreaks.

5:

One thing to consider about the public reaction to these technologies: in the background of your Sci-Fi books, the anti-augmentation attitudes were fueled in part by several major incidents where the technology was used to terrifying effect. It seems that the overall public reaction will depend very much on how the miraculous effects of these developing fields weigh against the terrifying ones.

Also, something that's been bugging me since I finished reading Nexus: was the character of Su-Yong Shu inspired in any way by Magneto?

6:

Can we merge minds and machines?

Kind of a funny question.

We already HAVE. It began with the telegraph, and the ability to communicate essentially instantly with other minds that were at a significant distance from our own.

Then we got radio. TV. Interwebs. Always-connectedness via smartphones. I'm pretty convinced that the next couple of years will finally see "tangible VR." Who knows where we'll be ten years from now.

Why people obsess over physical implants is beyond me. We already ARE cyborgs; we just pulled it off in a fashion that's much more elegant than cutting into our brains for no good reason. But if you think for an instant that we're not already augmented beyond the recognition of someone from twenty years ago, you are WAY out of touch. We've ALREADY externalized "intelligence," "memory," and many other things via advanced technologies. Every time I whip out my iPhone to do a Google or Wikipedia search and get the answer to some random question within a couple of seconds, I am demonstrating this.

Take a vacation, away from all technology, for a week or two. See if you can pull it off without going nuts, for one. Even if you can, try claiming that your mind isn't functioning in a RADICALLY different fashion after a few days.

Ah, the idea of physical implants is such a... red herring? Meanwhile, the rest of us augmented cyborgs are sitting back and laughing at those of the augmented cyborgs who haven't realized what they already are...

7:

Hmmm, I'm sitting here in the middle of my external shared brain 1.0 (my book collection), staring at my external shared mind 2.0 (my computer, hooked to the internet, hooked to your eyeballs), and wondering why people keep insisting that external shared brain version 3.0 isn't vaporware, that it will be released "in a few years," and that it will be so much better...

*grin* It's certainly true that we're cyborgs already, and we already use cognitive prostheses. That doesn't mean even better ones aren't coming.

8:

To get some mucky biology into the technology: if you think the antibiotic age is about to end, doesn't that mean that the feedback loop is going to get even slower?

Certainly the chance that any persistent implant might develop an infection - with a chance of death - makes the benefit required before I'd have an implant much higher!

I don't see the end of the antibiotic age as being as likely as some. Certainly I don't see the end of effective control of bacterial infection being as likely as some. But yes, sticking things in your body is risky! That's both due to infection risk and other risks, like excess bleeding (not a good thing in the brain) and miscellaneous risks of surgery.

All of that together is the largest inhibitor of progress in this area.

9:

Also, something that's been bugging me since I finished reading Nexus: was the character of Su-Yong Shu inspired in any way by Magneto?

I don't want to give too much away for readers who haven't read the books. But I'll say that while Su-Yong wasn't directly inspired by Magneto, I've always loved Magneto as a character.

And in general, I'm a big fan of compelling motivations for characters. Whether they're the hero, the villain, or something else, they probably think of themselves as the hero, and there's probably a reason for what they're doing. I like Magneto quite a bit in that regard.

10:

Can we merge minds and machines?

Kind of a funny question.

We already HAVE. It began with the telegraph, and the ability to communicate essentially instantly with other minds that were at a significant distance from our own.

Indeed, I agree, as you'll note in the post.

Perhaps a better way to frame it is: Can we integrate our minds, our technology, and other minds even more closely than today? (And do we want to, with all the associated risks and so on.)

11:

As someone who intuitively leans toward belief in both the "simulated reality" hypothesis (although with a different perspective than most, I'd wager) and panpsychism, I am not inherently troubled by the idea of mind becoming less fleshy and more circuit-y, because I believe that consciousness is perhaps the fundamental aspect of reality, and that even "inanimate" matter may possess consciousness, in a sense. Or be a manifestation of consciousness, which is maybe a better way of putting it. I also think there's a difference between my personality and my mind -- that is to say, the "I" who perceives. I think of these two things as symbiotic organisms, if you will. I'm not, as such, very attached to the idea that our mind is this "pure" thing generated by the brain and the brain alone, which we must not "corrupt" with more and more deeply-integrated technology. I fully acknowledge it's a bit of an oddball perspective, but hey, to each their own, right? :) I look forward to reading your books btw, Mr. Naam.

12:

Isn't there a very well-funded drive against any sort of augmentation & life-extension going on in the USA? Backed (what a surprise/not) by just about every form of god-botherer & the ultra-right. I find the last peculiar - I'd have thought they would be in favour of restricting the technology - probably by price - so that they can benefit & everyone else gets left out in the cold. Please tell me if I have the wrong end of the stick here, btw.

13:

This is going to read like I'm anti-technology: I'm not, but I don't believe in shipping new features and extensions to software until the bugs in the existing version are fixed; and this is a general principle you would do well to apply - or at least aspire to - with any technology that has serious bugs in the current implementation.

Software and computing devices are a world of 'bugs': not just software errors - societal and commercial and legal failures that reduce or corrupt the technology's utility. And, as the reach of existing devices extends, and as new capabilities are rolled out, the unfixed bugs become more and more damaging.

The first big problem with a wired-in connection is the risk of malicious use. Same as any device, but worse: this time it's hardwired in your head and the off-switch is in the software.

What backdoors will be installed in the proprietary software, known or unknown to the owner and available to criminals in 3-5 years?

Remember, also, that the owner will be the manufacturer, not the licensee who wears it; and the terms & conditions of a 'cloud' storage agreement will be Google at its worst in the NymWars. Or iTunes. But that's just fine - when your data is corrupted or sold on, or deleted because your bank dropped a monthly payment, you have the legal right to sue for restitution. If, that is, you think there's any prospect of success against a corporation with a billion-dollar legal budget and a habit of winning cases by the financial exhaustion of the smaller party.

Unequal terms of contract and a government unwilling or unable to weight the scales with strong consumer legislation are a societal 'bug' that keeps on getting worse as technology extends in reach and the technologies we sign up to get more powerful.

I look forward to seeing the advertising streaming into those wires in your brain: you'll love every minute of it, and those commercial partnerships will be worth a lot more money to the technology's owner than any individual user is. And, just like advertising injected into your secure connection by the cable company, your legal 'right' to opt out of it is a fiction. It's in the small print and you signed it of your own free will.

Worse, a technology this powerful induces dependence: can you remember any of the numbers in your mobile phone? How will you deal with brainwired data deletion and cutoff? ...When the supplier is a monopoly and the price doubles every year? Or you're accused of file-sharing for the third and final time? Or denounced as a private critic of the manufacturer, or suspected of owning a copy of the Koran?

Will you have a thousand dollars handy for the upgrade when your Windows XPBrain software runs out of support and security patches?

What if it's running your insulin pump, or an electric stimulation that replaces chemical antidepressants? Will you definitely have that thousand dollars when your health insurance runs out?

This technology isn't going to happen in the USA: it can only happen in the Free World under the protection of a government accountable to the citizen and with the equitable rule of law.

Assuming, of course, that the economic model is that citizens will purchase the technology and have it installed of their own free will.

If USians are lucky, liability lawsuits and patent battles will strangle the technology in the fifty states.

And nobody from a free country, no matter how white, rich, and lawyered-up they may be, is going to willingly risk passing a wired-up brain through the US border and the warrantless searches imposed by Homeland Security.

14:

Well, I'm on my 3rd system box and second monitor in the last decade (before that I didn't have a personal work machine). That does make "built-in" silicon hardware somewhat less than tempting to me. A plug-in interface that lets me control the computer at thought speed rather than the speed that Fingers_V1.00 can manage might be rather more tempting, but I'm going to want to know a bit more about the driver software first.

15:

Isn't there a very well-funded drive against any sort of augmentation & life-extension going on in the USA? Backed (what a surprise/not) by just about every form of god-botherer & the ultra-right.

I don't think so, at least not any more well funded than drives against asteroid colonies, fusion power, or other staples of SF. In certain circles people who are hopeful for life extension spend a lot of time decrying "deathists" as holding back progress, since that's a nice distraction from the fundamental difficulty of life extension and the inability of most life extension boosters to actually do biological research. Sometimes one or two real "deathists" will show up to oblige them with crazy reasons that it's actually great that everyone should die before living ~100 years: it's natural, Population Bomb panic, life isn't meaningful if you aren't at risk of unwanted death, and so on. Basically two camps are praying to different lifespan gods, both resorting to prayer because not one in ten of either camp can make any material contributions toward their desired outcome.

A related trope among life extension fans, often overlapping with libertarianism: "the FDA is holding back life extension because they won't recognize aging as a disease!" Of course the FDA does recognize Alzheimer's, macular degeneration, osteoporosis, cataracts, and a dozen other conditions that any healthspan extension worth its salt should be able to prevent or reverse. Once you have FDA-approved prophylaxis for Alzheimer's that also happens to prevent wrinkles and gray hair as side effects, doctors can prescribe it off-label for anything. Or everyone can take it in the name of preventing Alzheimer's, either way.

16:

I am rather more pessimistic. We are making great strides on the "I" bit of the I/O problem, but the "O" is far more difficult. The only major advances in recent years have been the artificial hippocampus (which may be computer linkable) and photosensitive and/or fluorescent neurons.

17:

Personally I don't want to be the first human to have his brain hacked (a la Ghost in the Shell.) I am looking forward to the direct stimulation of my pleasure centers, however.

18:

I know there's lots of downsides to this, but that's what late adopters like the Amish are for.

The first rule of improv (and rank speculation) is: Always Say Yes. My takeaway from this is: yes, and therefore, once it happens, open everything. The reason we've had any kind of progress is not what people call the Enlightenment, not Capitalism, not Science, but the underlying cause, which is the dreaded p-word (play). Play, meaning: more sharing, more shared knowledge, more experiences, more skills, more fun. Isn't that what makes those strange folks known as scientists go through all that tedium and shitty pay? So, if this leads to more play for everyone, I'll accept the occasional exploding head from electronically mediated telepathic feedback.

19:

Yep. Eyes and fingers are already very good interfaces. Hard to know how to improve on them.

20:

Larry Niven has a name for people who do this - wireheads. And he describes the horrible results: people who starve to death while plugged in. No thanks.

21:
A plug-in interface that lets me control the computer at thought speed rather than the speed that Fingers_V1.00 can manage might be rather more tempting, . . .

What makes you think that 'thought speed' would be any quicker than fingers? What makes you think that such an interface would be any more precise than the equipment you already have? Remember that your fingers are already 'thought controlled'!

23:

I was joking. I also read Niven in my youth. The topic also leads one to the 'Mood Organ' of Dick's "Do Androids Dream of Electric Sheep?"

24:

Isn't there a very well-funded drive against any sort of augmentation & life-extension going on in the USA? Backed (what a surprise/not) by just about every form of god-botherer & the ultra-right.

I wonder where you heard that, because to the best of my knowledge, that is 100% false. And I am pretty sure that if such a well-funded drive existed, I would know about it.

God-botherers simply have not woken up to the potential of extended lifespan yet. The biggest "opposition" to life-extension consists of ignoring it. The few people who do actively oppose augmentation & life-extension are on the extreme left/green end, such as Jeremy Rifkin and Bill McKibben, not the extreme right.

25:
It's certainly true that we're cyborgs already, and we already use cognitive prostheses. That doesn't mean even better ones aren't coming.

There will be better cognitive prostheses, but I suspect there will be a limit due to the processing speed of our brains. It is one thing to have a device that could instantly retrieve a memory, compute a result, or overlay augmented reality over our visual cortex, but we are still limited by our ability to process meaning and reflect. Without that, it all becomes a reflex action, which we can understand only after a fraction of a second at best. Chained analyses will still be limited by the speed of our thinking, which is essentially the slowest link in the chain.
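To put toy numbers on that slowest-link point (every latency below is an assumption, purely illustrative):

```python
# Toy latency budget for a human-in-the-loop query chain. The point:
# whatever silicon contributes, conscious appraisal dominates the total.
stage_latencies_ms = {
    "implant lookup":       0.1,    # assume near-instant silicon retrieval
    "neural transmission": 10.0,    # nerve conduction, order of 10 ms
    "conscious appraisal": 300.0,   # deliberate recognition, ~300 ms
}
total = sum(stage_latencies_ms.values())
slowest = max(stage_latencies_ms, key=stage_latencies_ms.get)
share = stage_latencies_ms[slowest] / total
print(f"total {total:.1f} ms; bottleneck '{slowest}' is {share:.0%} of it")
```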

An AI would not have that problem. Therefore, to my way of thinking, the drive will be to eliminate as much of the wetware involvement as possible. But this will be like having a self-driving car rather than a manually driven one. We will simply be along for the ride, rather than doing the driving. And as with a self-driving car, the interface between driver and car systems will be reduced, not increased, so there is less need for a sophisticated, low-level neural interface.

26:

I can think (and speak) faster than I can type. I'd be most surprised if you can't do so too.

27:

Antibiotic resistance is bad, but it's not an insuperable problem.

We've got cheap DNA sequencers. Sequencing and assembling a human genome is expensive, but bacteria and viruses -- once we know what we're looking for -- are pretty simple in comparison. And Russia in particular has a lot of expertise in the use of bacteriophages -- viruses that parasitize and kill infectious-disease-causing bacteria (among others). It used to take days to grow a bacterial sample on a culture medium in order to identify a pathogen, but we should in principle be able to get to the point where, instead of using antibiotics on serious infections, we're sequencing a sample in hours and then using the result to select an appropriate phage from a hospital's virus library.
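Just to make the shape of that lookup concrete, here's a toy sketch -- every sequence, signature, and phage name below is an invented placeholder, and real pathogen identification uses vastly more sophisticated methods:

```python
# Toy sketch of "sequence, then pick a phage": match a read against known
# pathogen signatures by shared k-mers, then look up a matching phage.

def kmers(seq, k=8):
    """Return the set of all k-length substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

PATHOGEN_SIGNATURES = {          # hypothetical reference k-mer sets
    "S. aureus (strain X)": kmers("ATGGCTAGCTAGGCTTACGATCGGATCCA"),
    "E. coli (strain Y)":   kmers("ATGAAACGCATTAGCACCACCATTACCAC"),
}
PHAGE_LIBRARY = {                # hypothetical hospital phage library
    "S. aureus (strain X)": "phage vB_SauM-K1",
    "E. coli (strain Y)":   "phage T4-like #7",
}

def identify(sample_read):
    """Return the reference whose signature shares the most k-mers."""
    sample = kmers(sample_read)
    return max(PATHOGEN_SIGNATURES,
               key=lambda name: len(sample & PATHOGEN_SIGNATURES[name]))

read = "ATGGCTAGCTAGGCTTACGATCGG"    # pretend sequencer output
bug = identify(read)
print(f"identified {bug}; dispense {PHAGE_LIBRARY[bug]}")
```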

And then again, antibiotic resistance imposes a fitness cost on bacteria that aren't constantly challenged by that environmental threat. We're seeing this with malaria (not a bacterium, but the same principle applies): Chloroquine stopped working in the mid-1980s due to widespread resistance and has subsequently been replaced by artemisinin. Artemisinin resistance is now appearing ... but after 30 years of disuse, many strains of malaria are now vulnerable to chloroquine again. We may not be moving into the post-antibiotic era so much as into an era where antibiotics are used carefully and individually subjected to decades-long periods of enforced disuse, but collectively remain available -- supplemented by highly specific phages.

(But I've wandered off-topic. And in any case: the key objection I have to brain implants is that they're a lot harder to upgrade than an external device like a smartphone. So while the state of the art is advancing, who wants to undergo the considerable expense and risk of installing a 2014 model when the 2016 version will be so much more powerful? Answer: only people for whom there's no alternative -- which mostly means people with severe medical conditions.)

28:

Really? How do you know that? You may think you think faster than you can type, but isn't that just like a so-called speed reader claiming to read 800 or 1,000 wpm with 'perfect comprehension' when numerous tests have shown people max out in the 400 to 600 wpm range for perfectly explicable and completely biological reasons? [1]

My own experience of 'thinking fast' tends to be with mental imagery, not words. When I try to put those thoughts down on paper they're, well, imprecise is being kind.

[1] I've found this one to be one of the last bastions of the fans-are-slans type. It's also what finally put me off sf newsgroups for good -- sorry and unremarkable wights claiming they could read 20,000 wpm and that they didn't need no steenking tests for comprehension to tell them that they understood everything 'just fine'.

29:

You are behind the curve - all you need is some saline sponges, a 9V battery and some wires. Go for it!

http://jnewbio.edublogs.org/2013/01/05/researchers-harness-natural-painkillers-with-electric-stimulation/

30:

Given the risks and limitations of an implant versus every other form of external interface, I have a hard time seeing why they would be used in anything other than a commercial setting. Even supposing a brain implant could interpret a user's internal monologue and translate that into a meaningful output, or create sensory input, how is that any more desirable than wearable tech with natural language recognition?

The only way I really see implants taking off is in sophisticated intelligence amplification. Sure, external tech can make an individual more capable, but an implant that could interface the natural connectome with simulated ones designed to add or enhance mental faculties would be much better. However, that seems like such a massive development that it's squarely in the realm of science fiction. We don't even have a map of the human connectome, let alone the knowledge to weave it together with artificial ones for beneficial purposes.

31:

The War On Drugs Used By Those People is getting more and more unpopular. The Inner Party members who run it are going to be looking for another war pretty soon; bio augmentation might make a nice target for them.

On a more pleasant note, I think many of the initial obstacles to Brain-Computer Interface, like invasiveness, difficulty of updating, and complexity of connection*, might be solved by detecting electric or magnetic fields near the skull for output, and transcranial magnetic stimulation for input. There's lots of room outside the skull for the computing power needed to interpret and synthesize the signals involved.

* Connecting those millions of nanowires correctly is an interesting problem!
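As a toy illustration of that outside-the-skull output path (a synthetic signal and a made-up threshold -- nothing like a real decoder):

```python
# Toy sketch of reading a "command" from fields outside the skull: estimate
# power in one frequency band of a synthetic scalp signal and emit a binary
# output when it crosses a threshold. All numbers are assumptions.
import numpy as np

fs = 256                                    # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)               # two seconds of "signal"
rng = np.random.default_rng(0)
# background noise plus a deliberate 10 Hz ("alpha-band") oscillation
signal = rng.normal(0, 1.0, t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)

def band_power(x, lo, hi):
    """Total spectral power of x between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum() / x.size

alpha = band_power(signal, 8, 12)
baseline = band_power(signal, 20, 40)       # a comparison band
command = 1 if alpha > 3 * baseline else 0  # crude threshold decision
print(f"alpha power {alpha:.1f}, baseline {baseline:.1f} -> command {command}")
```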

32:

I have a hard time seeing why they would be used in anything other than a commercial setting

Oops typo. That's meant to read medical setting.

33:

Overheard, Jean Harlow to Albert Einstein: "Our baby would be both beautiful and brilliant".

Chances are that the merging of tech and neuron data sets is at least as complicated as the merging of sperm and ovum data sets.

34:

The War On Drugs Used By Those People is getting more and more unpopular. The Inner Party members who run it are going to be looking for another war pretty soon; bio augmentation might make a nice target for them

Sorry, but bio augmentation is not suitable for this kind of "war". What makes drugs a perfect target to perpetually sink resources into is this:

Drugs are cheap and easy to obtain. Drugs are really popular. Drugs are easy enough to conceal that one can use them for quite a long time without getting caught (which makes Those People so much scarier).

None of which will be true of bio augmentation any time soon -- certainly not soon enough to substitute for War on Drugs. Not much point in War on Whatever if it can be actually won, and with little effort.

35:

Overheard, Jean Harlow to Albert Einstein

I like the retort Anatole France is supposed to have delivered better. France to Isadora Duncan: "Yes, but imagine a child with my beauty and your brains."

36:

Isn't there a very well-funded drive against any sort of augmentation & life-extension going on in the USA? Backed (what a surprise/not) by just about every form of god-botherer & the ultra-right.

I don't think so, at least not any more well funded than drives against asteroid colonies, fusion power, or other staples of SF. In certain circles people who are hopeful for life extension spend a lot of time decrying "deathists" as holding back progress, since that's a nice distraction from the fundamental difficulty of life extension and the inability of most life extension boosters to actually do biological research.

My own take is that there are "deathists" but they are not a "well-funded drive" or anything like that organized. Instead, prospective life extension runs afoul of a psychological coping mechanism we have developed to allow us to live life knowing that it will end all too shortly. If we convince ourselves that "death is a good thing, a necessary thing" then the prospect of it happening to us doesn't seem quite as terrifying.

The problem with anti-aging research is that it undermines this coping mechanism without (yet) having delivered the goods that would render such a mechanism unnecessary. My suspicion is that, faced with actual anti-aging treatments, most people would take an approach similar to the one St. Augustine took to chastity: "God grant me mortality, but not just yet."

37:

ilya187: Thanks for the correction. As I said - I wasn't sure - looks like I was misinformed.

38:

Charlie, two points:

1: "We may not be moving into the post-antibiotic era so much as into an era where antibiotics are used carefully..." So how soon before someone finally manages to STOP the terminally stupid, industrial-scale dispensation of antibiotics to cattle in the USA?

2: There is also the long-neglected first generation of antibiotics, the "sulfa" drugs, which can have weird side-effects but are pretty good at nuking infections. If these were (carefully) re-introduced, I suspect the bacteria might not know what had hit them. [Case example: my late mother had a persistent blood infection (this was the late 1930s) resulting in continual boils. A very early sulfa drug caused her a week of vertigo; the boils flared up for two days, shrivelled - and were never seen again. Whatever variety of Staph aureus was in her blood just keeled over, permanently.]

39:

I think the first wave of implants will be about I/O bandwidth, instead of processing power. The processing will be done on hardware outside the brain, all we need is some high bandwidth connection to the neurons. This will also make the implant less likely to become obsolete, since the bandwidth will be inherently limited by the neuron firing frequency.
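Putting rough numbers on that ceiling (channel count, firing rate, and bits-per-spike below are all assumptions):

```python
# Back-of-envelope upper bound on a neural tap's bandwidth; the limit
# comes from biology (firing rate), not from the external hardware.
n_channels = 1_000_000       # an optimistic million-neuron interface
max_rate_hz = 200            # neurons rarely sustain more than ~200 spikes/s
bits_per_spike = 2           # generous allowance for spike-timing information

ceiling_bps = n_channels * max_rate_hz * bits_per_spike
print(f"~{ceiling_bps / 8 / 1e6:.0f} MB/s upper bound")   # ~50 MB/s
```

On those assumptions, even a million-channel tap tops out around tens of megabytes per second -- easy for external hardware to keep up with, which is the point about obsolescence.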

40:

You're the one who's insisting on hanging numbers on this. I can type at maybe 35 mistakespm, which isn't that atypical of people not trained as touch typists. Human speech with equal coherence runs more like 60 wpm.

41:

Yep - that's the punchline!

42:

Huh? This makes no sense. I think my point went about 50,000 feet over your head. Now, I type much faster than you apparently, but let's stipulate speech instead of typing (even though there is a tremendous amount of processing -- and error correction -- involved in getting the message out of the signal). I'll repeat my question and please answer it this time: How do you know you think faster (all other things being equal) than you can talk? How do you know this isn't an illusion? You see, it's been my experience that no matter how carefully or how precisely I think I think, it turns out 99% of the time that when I go to put those musings down on paper I haven't thought the thing through well enough or clearly enough at all. Usually not by a long shot.

Yes. It's tempting to think that if you had this magic 'neural interface' you'd be five times as quick and ten times as accurate, but really, your nervous system has already been optimized to work at one speed, or rather, at one set of speeds. I'm not saying you're wrong in pushing the notion of the 'neural interface' as the Magic Productivity Enhancer per se; I'm just a bit skeptical of automatically assuming this is the case. Remember my example of all those speed readers who read with 'perfect comprehension' -- and were so sure of this they didn't need no steenkin' test to give them some objective numbers ;-)

I see this problem writ small when I see students doing integration with Mathematica, btw; they think the hard part is solving the integral when of course the hard part is setting it up.

43:

Does anyone else see an issue with putting more stuff into a rigid, non-expandable cranium? If you add tiny machines/nanobots, what do you remove to make room for said 'bots? Grey matter? White matter? Cerebrospinal fluid? Or do you replace the bony cranium with some sort of (inert) expandable plastic to make room? But you'd still need to check that adding 'bots doesn't change any of the key environmental characteristics of the brain and its constituents, e.g., specific gravity, flow rates, ionization, pH, distance between key structures, etc.

If our guest host has addressed this in his books, I look forward to finding out as soon as I can locate said books at my local bookstore.
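For what it's worth, a rough volume check (device size and count below are pure assumptions) suggests the space cost might be surprisingly small:

```python
# Quick volume check on the "rigid cranium" worry; all sizes are assumed.
device_side_m = 10e-6            # assume each implant is a 10-micron cube
n_devices = 1_000_000
device_volume_m3 = device_side_m ** 3 * n_devices

csf_volume_m3 = 150e-6           # ~150 mL of cerebrospinal fluid

print(f"implant total: {device_volume_m3 * 1e9:.2f} mm^3")
print(f"as fraction of CSF volume: {device_volume_m3 / csf_volume_m3:.6%}")
# -> about 1 mm^3, well under a thousandth of a percent of the CSF alone
```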

44:

"Yes. It's tempting to think that if you had this magic 'neural interface' you'd be five times as quick and ten times as accurate, but really, your nervous system has already been optimized to work at one speed, or rather, at one set of speeds."

Agree, and there was an article on this a few years back looking at the differences in processing speed between various parts of the brain.

The punchline was that the 'executive' function of the brain was the slowest, hence, last to know. Also, IMO, these results basically confirm that the prefrontal cortex's executive role is not so much to contribute the 'rational/abstract thinking' part of the in-tandem processing as to rationalize (explain) what the rest of the brain/body has just done.

"I don't sing because I'm happy; I'm happy because I sing." William James

45:

IIRC, back in the 80s, Rebecca Ore in Being Alien suggested replacing the temporal bone with the computer, although she left the connection between computer and brain undefined.

As for future humans, while it won't work for us, one solution might be to use optogenetics on the human victim, erm, side of the interface, and hook the person up to an optical computer that transmits light via very tiny fibers on the other side. This would take more than a little tinkering, but you could possibly set it up to run optical signals between neurons and a computer, and that might be safer biologically than running electrodes.

46:

...optogenetics...hook the person up to an optical computer that transmits light via very tiny fibers on the other side.

100bn fiber optic cables, one for each neuron, is going to take up a lot of space in the cranium. Even worse if glial cells have to be included. I think we need to be talking molecular scale wires. My thought is that you would want a multiplexed wiring system with addressed sensor/effectors to reduce wiring volume, which could be electrical using conductive polymers.
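The arithmetic behind that multiplexing idea, as a sketch (the frame layout is invented; a real design would involve much more):

```python
# Wiring cost of dedicated lines vs. an addressed, multiplexed bus.
import math

n_neurons = 100_000_000_000          # ~100 billion, as above
address_bits = math.ceil(math.log2(n_neurons))   # bits to name any neuron

value_bits = 8                       # assume 8 bits per sample/command
frame_bits = address_bits + value_bits

print(f"address width: {address_bits} bits")      # 37
print(f"frame size:    {frame_bits} bits per event")
print(f"wires: {n_neurons:,} dedicated vs {frame_bits} parallel (or 1 serial)")
```

The trade, of course, is that the shared bus now has to carry every event, so its clock rate has to make up for all the wires you saved.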

47:

Contrariwise - you're the one who keeps moving the goalposts in a bid to prove that my claim that I can think faster than I can type is incorrect.

48:

What the hell are you on about? I'm getting a hostile vibe here rather than the words of someone who really wants to talk. Let's rewind the tape here to see what you said:

A plug-in interface that lets me control the computer at thought speed rather than the speed that Fingers_V1.00 can manage might be rather more tempting, but I'm going to want to know a bit more about the driver software first.

All I've done -- repeatedly -- is ask how you know that 'thought speed' is really any faster than a conventional interface? You're free to quote me on where I moved the goal posts, of course (And frankly, I don't see where I have, your bald assertions to the contrary).

But wouldn't it be better to just answer the original questions: "What makes you think that 'thought speed' would be any quicker than fingers? What makes you think that such an interface would be any more precise than the equipment you already have?" I've already explained why I'm skeptical. You're free to claim that being skeptical is really just taking the opposing viewpoint, of course. But if you do I see no reason to continue this conversation.

49:

If you find yourself posting in anger, please take a few deep breaths before doing so.

50:

Eh? Why do you think I'm posting in anger? I'm not. Could you please do me the courtesy of pointing out the relevant passages and why you think they indicate anger? Because personally, I think I've been very good about delineating my position.

For that matter, why do you think paws4thot isn't being hostile?

51:

"What the hell".

So, more blunt now: stop it. This is the last time I'll say that.

52:

What?!?! I can't even get a clarification of the guidelines without said clarification being deleted?!?! Well, it's your ball.

53:

Err... SoV appears to have asked a straightforward, legitimate, technical question. As in: "What makes you think that 'thought speed' would be any quicker than fingers?" So, what evidence is available to show that our "thoughts" are significantly faster than our actions, (& added by me) especially where we are well-trained in those actions?

Some objective experimental empirical evidence on this subject, either way or neither would be welcome. Wouldn't it?

P.S. Examples: [A] How fast is a well-practised musician's brain operating when playing, & is there a significant difference between a very familiar piece, even a difficult one, & an unfamiliar piece, even an easy one? [B] Similarly, how fast is my brain processing when I'm doing a complicated dance step, or set of steps, particularly in a "formal" dance with a supposedly set order of moves & figures? And is my brain processing faster if I'm learning a new dance? Both these examples, of course, demand a physical output.

Answers double-spaced, on one side of the paper, only, please.

54:

"So, what evidence is available to show that our "thoughts" are significantly faster than our actions"

Martial arts. I can think through a series of kicks far faster than I can execute them.

55:

And that's a "muscle memory" activity, whereas even something as allegedly trivial as this blog posting is intellectual, in that I need to think what to type and then type it.

For the record, and for the scoffers: the only changes I made to what I'd thought were typo corrections.

56:

Sorry, Dirk, but you can't. In fact, that was probably just about the worst example you could have chosen. Bear in mind, contra what paws4thot posted (I'm guessing he doesn't have any proof for his earlier statements, but doesn't want to say so), that there is a lot of processing going on, albeit at an unconscious level. What Dirk is probably doing is naming the moves, perhaps with a visual tag; but that's just calling a subroutine, to use a hoary metaphor. And in fact, most people will tell you that in something like boxing, wrestling, karate, etc. (some of which I used to do, so I speak from personal experience) the athletes react far faster than they can think.

Note - again (and shades of that speed reading thing) - that I am not flatly claiming pure mentation isn't faster than going through your fingers and eyes; just that a lot of people seem to assume it without actually checking whether this is really so. And any demonstrations to that effect are still lacking on this thread, alas. Also note - again - that vague mental imagery and interior dialogue certainly seem faster than what you can put on paper. But as I have found personally, what I think I'm thinking and what's actually up there when you commit it to the outside world are two very different things. No doubt my interior 'circle' is an exterior sloppy curve that can't even charitably be called closed, let alone oval-shaped :-)

57:

Some idle googling gives me literally tons and tons of anecdotes about people who 'think faster than they talk' but then:

This is true, as I often have huge explanations to drop on people, and it's usually when I'm under pressure, that I have a lump of information all jammed up in my mouth and I don't know yet how to organize it into information processable to others. Hence, I literally try to say everything at once. I spend a ton of time thinking up huge processes or ideas, and when I want to tell people, the only time I can explain effectively is when I'm talking to someone who doesn't care, or when I have time to organize it in my head. Basically when i'm under minimal pressure. It's really a matter of creating heaps of information, only organizing it enough that you understand it, and when it comes down to explaining, you have to re organize it on the spot, Sometimes when I wanna say something I start with just a random element, build onto the idea from other information, until I've literally explained an unorganized heap to someone. I'll then ask them if they were able to connect the dots, sometimes they say yes, which is nice, but usually they say no, and I go ahead and connect the dots there. It's not that you aren't thinking. I always have a hard time finding words to say, and it's because I have a complicated concept I want to present to someone, and I'm jumping through my head trying to find a word that describes it, because I don't think in words. I think in abstract images that can be made out into literally any idea. So the problem is turning the images into words. We don't speak images.

Yeah, this sort of thing happens to me too. A lot. But it's because organizing what's in my head into a suitable output takes time. Conscious time. If I was attached to one of these hypothetical neural interfaces, it wouldn't magically rearrange that idiosyncratic jumble; it would just print out ADD gibberish. Not that what goes on in my head in producing the raw material before it undergoes I/O formatting isn't important; it is. But is anyone suggesting that a 'neural interface' is going to change that?

58:

"... the athletes react far faster than they can think."

Sometimes, in that one can blink faster than reasoning out that a fly is about to hit your eyeball. However, as a martial arts teacher with more than 30 years experience I can definitely tell you that after laying off training for some time, and not being on top form, my mind has often completed the sequence of moves before my body finishes them.

59:

Moreover, an examination of actual human output demonstrates that many people speak, or post to the internet, much faster than they think. In some cases, thinking may not happen until several days later, or at all...

60:

Shrug. Between all the cites I've found online and your personal (and entirely subjective) impressions, I'll take the cites. You're free to do otherwise, of course, but that's what I'm going with. I'll say it again: it sounds like you're making a call to a subroutine and saying your subjective experience of that call is the subroutine itself. That's metaphorically speaking, and it may be the wrong metaphor, but I'm loath to privilege your experience over everybody else's.

61:

Even if they aren't the radical performers some posters here think they would be, 'neural interface' technology is still worth developing, btw. Think about people who suffer some degree of paralysis, for example, or are deaf, or blind.

62:

Well, let's take your position to its absurd conclusion. If I have an injured arm that limits its speed to a snail's pace, I cannot think through a punch any faster than the ten to twenty seconds it would take me to execute it.

63:

Please read my comment number 61 immediately above and adjacent to your comment 62.

64:

My reply is that "injury" is a somewhat relative term. If I lose half my physical speed through old age, but do not lose half my mental speed...

65:

Reflexes - faster than thought. E.g. pain reflex.

Trained responses, emotional responses - faster than thinking. "Fast thinking", cf. Kahneman. This is probably what Dirk was alluding to with his martial arts example. It is also the almost reflexive response of sports players, amongst others.

Deliberative thinking ("slow thinking"). Very much slower. You can write using your trained fingers far faster than you can create deliberated thought output.

The problem with neural interfaces is that they will need to be able to determine the desired output from among the multiplicity of circulating thoughts that surround it. It will be very tedious if you need to focus on every thought in order to get a good output. For example, as I write this, I am also thinking about other things, but allowing my executive brain to "time slice" so that my fingers can type these words.

66:

I'm not sure how I'm meant to "prove this" to your satisfaction, but I assure you that I'd worked out what post #55 was going to say completely before I'd typed the first 6 words.

Since you can't prove that statement to be untrue I suggest that you either accept it as my accurate recounting of my perceptions, or bear in mind that you appear to be accusing me of being economical with the truth.
