
The Ultimate Tech Frontier: Your Brain

Ramez Naam is the author of 5 books, including the award-winning Nexus trilogy of sci-fi novels. Follow him on Twitter: @ramez. A shorter version of this article first appeared at TechCrunch.

The final frontier of digital technology is integrating into your own brain. DARPA wants to go there. Scientists want to go there. Entrepreneurs want to go there. And increasingly, it looks like it's possible.

You've probably read bits and pieces about brain implants and prostheses. Let me give you the big picture.

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy -- sharing what we see, hear, touch, and even perhaps what we think and feel with others.

Arkady flicked the virtual layer back on. Lightning sparkled around the dancers on stage again, electricity flashed from the DJ booth, silver waves crashed onto the beach. A wind that wasn't real blew against his neck. And up there, he could see the dragon flapping its wings, turning, coming around for another pass. He could feel the air move, just like he'd felt the heat of the dragon's breath before.

- Adapted from Crux, book 2 of the Nexus Trilogy.

Sound crazy? It is... and it's not.

Start with motion. In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. DARPA has now used the same technology to put a paralyzed woman in direct mental control of an F-35 simulator. And in animals, the technology has been used in the opposite direction, directly inputting touch into the brain.

Or consider vision. For more than a year now, we've had FDA-approved bionic eyes that restore vision via a chip implanted on the retina. More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we're looking at. (They'd do even better with implants in the brain.)

Sound, we've been dealing with for decades, sending it into the nervous system through cochlear implants. Recently, children born deaf and without an auditory nerve have had sound sent electronically straight into their brains.

In rats, we've restored damaged memories via a 'hippocampus chip' implanted in the brain. Human trials are starting this year. Now, you say your memory is just fine? Well, in rats, this chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on. Sounds useful.

In monkeys, we've done better, using a brain implant to "boost monkey IQ" in pattern matching tests.

We've even emailed verbal thoughts back and forth from person to person.

Now, let me be clear. All of these systems, for lack of a better word, suck. They're crude. They're clunky. They're low resolution. That is, most fundamentally, because they have such low-bandwidth connections to the human brain. Your brain has roughly 100 billion neurons and 100 trillion neural connections, or synapses. An iPhone 6's A8 chip has 2 billion transistors. (Though, let's be clear, a transistor is not anywhere near the complexity of a single synapse in the brain.)

The highest bandwidth neural interface ever placed into a human brain, on the other hand, had just 256 electrodes. Most don't even have that.
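
To get a feel for that mismatch, here's a back-of-the-envelope comparison in Python, using only the figures above. It counts channels rather than bits per second, so treat it as an illustration of scale, not a real bandwidth estimate:

    # Rough scale comparison, using the figures quoted in the article.
    neurons = 100e9          # ~100 billion neurons
    synapses = 100e12        # ~100 trillion synapses
    a8_transistors = 2e9     # iPhone 6 A8 chip
    electrodes = 256         # highest electrode count yet implanted in a human

    print(f"synapses per neuron:        {synapses / neurons:,.0f}")         # 1,000
    print(f"neurons per electrode:      {neurons / electrodes:,.0f}")       # ~390,000,000
    print(f"synapses per A8 transistor: {synapses / a8_transistors:,.0f}")  # 50,000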

The second barrier to brain interfaces is that getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That's a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who've been paralyzed or suffered brain damage.

This is not yet the iPhone era of brain implants. We're in the DOS era, if not even further back.

But what if? What if, at some point, technology gives us high-bandwidth neural interfaces that can be easily implanted? Imagine the scope of software that could interface directly with your senses and all the functions of your mind:

They gave Rangan a pointer to their catalog of thousands of brain-loaded Nexus apps. Network games, augmented reality systems, photo and video and audio tools that tweaked data acquired from your eyes and ears, face recognizers, memory supplementers that gave you little bits of extra info when you looked at something or someone, sex apps (a huge library of those alone), virtual drugs that simulated just about everything he'd ever tried, sober-up apps, focus apps, multi-tasking apps, sleep apps, stim apps, even digital currencies that people had adapted to run exclusively inside the brain.

- An excerpt from Apex, book 3 of the Nexus Trilogy.

The implications of mature neurotechnology are sweeping. Neural interfaces could help tremendously with mental health and neurological disease. Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what's happening.

We've already seen that deep brain stimulators can do amazing things for patients with Parkinson's. The same technology is on trial for untreatable depression, OCD, and anorexia. And we know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quick as the flip of a switch. Or, if you're running code, on a schedule. (Siri: Put me to sleep until 7:30, high priority interruptions only. And let's get hungry for lunch around noon. Turn down the sugar cravings, though.)
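
To make the "running code, on a schedule" idea concrete, here's a toy sketch. Everything in it is hypothetical -- the stimulation targets, the notion that software could dial a brain region up or down -- it's only meant to show the shape such a schedule might take, not to describe any real device:

    from dataclasses import dataclass
    from datetime import time

    @dataclass
    class StimulationTask:
        target: str    # hypothetical stimulation target, e.g. "sleep_centers"
        level: float   # positive = stimulate, negative = suppress
        start: time
        end: time

    # A purely illustrative daily schedule, echoing the example above.
    schedule = [
        StimulationTask("sleep_centers",   0.8, time(23, 0),  time(7, 30)),
        StimulationTask("hunger_centers",  0.6, time(11, 45), time(12, 15)),
        StimulationTask("sugar_cravings", -0.5, time(0, 0),   time(23, 59)),
    ]

    def active_tasks(now: time):
        """Yield the tasks whose time window contains 'now'."""
        for task in schedule:
            if task.start <= task.end:                     # window within one day
                if task.start <= now <= task.end:
                    yield task
            elif now >= task.start or now <= task.end:     # window wraps past midnight
                yield task

    for task in active_tasks(time(23, 30)):
        print(f"stimulate {task.target} at level {task.level:+.1f}")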

Implants that help repair brain damage are also a gateway to devices that improve brain function. Think about the "hippocampus chip" that repairs the ability of rats to learn. Building such a chip for humans is going to teach us an incredible amount about how human memory functions. And in doing so, we're likely to gain the ability to improve human memory, to speed the rate at which people can learn things, even to save memories offline and relive them -- just as we have for the rat.

That has huge societal implications. Boosting how fast people can learn would accelerate innovation and economic growth around the world. It'd also give humans a new tool to keep up with the job-destroying features of ever-smarter algorithms.

The impact goes deeper than the personal, though. Computing technology started out as number crunching. These days the biggest impact it has on society is through communication. If neural interfaces mature, we may well see the same. What if you could directly beam an image in your thoughts onto a computer screen? What if you could directly beam that to another human being? Or, across the internet, to any of the billions of human beings who might choose to tune into your mind-stream online? What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts? All of that is likely to eventually be possible, given a high enough bandwidth connection to the brain.

That type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. And it's just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and Twitter have successively changed public discourse.

Digitizing our thoughts may have some negative consequences, of course.

With our brains online, every concern about privacy, about hacking, about surveillance from the NSA or others, would all be magnified. If thoughts are truly digital, could the right hacker spy on your thoughts? Could law enforcement get a warrant to read your thoughts? Heck, in the current environment, would law enforcement (or the NSA) even need a warrant? Could the right malicious actor even change your thoughts?

"Focus," Ilya snapped. "Can you erase her memories of tonight? Fuzz them out?"

"Nothing subtle," he replied. "Probably nothing very effective. And it might do some other damage along the way."

- An excerpt from Nexus, book 1 of the Nexus Trilogy.

The ultimate interface would bring the ultimate new set of vulnerabilities. (Even if those scary scenarios don't come true, could you imagine what spammers and advertisers would do with an interface to your neurons, if it were the least bit non-secure?)

Everything good and bad about technology would be magnified by implanting it deep in brains. In Nexus I crash the good and bad views against each other, in a violent argument about whether such a technology should be legal. Is the risk of brain-hacking outweighed by the societal benefits of faster, deeper communication, and the ability to augment our own intelligence?

For now, we're a long way from facing such a choice. In fiction, I can turn the neural implant into a silvery vial of nano-particles that you swallow, and which then self-assemble into circuits in your brain. In the real world, clunky electrodes implanted by brain surgery dominate, for now.

That's changing, though. Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They've shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They're working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain (which sounds rather close to the technology I describe in Nexus). And the former editor of the journal Neuron has pointed out that carbon nanotubes are so slender that a bundle of a million of them could be inserted into the blood stream and steered into the brain, giving us a nearly 10,000-fold increase in neural bandwidth, without any brain surgery at all.
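
The arithmetic behind that "nearly 10,000-fold" figure is simple; a quick check, assuming a typical implant has on the order of 100 channels (the piece only says most have fewer than 256):

    nanotubes_in_bundle = 1_000_000
    best_human_implant = 256     # highest electrode count to date
    typical_implant = 100        # assumed order of magnitude for a typical implant

    print(nanotubes_in_bundle / best_human_implant)  # ~3,900x the very best implant
    print(nanotubes_in_bundle / typical_implant)     # 10,000x a more typical one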

Even so, we're a long way from having such a device. We don't actually know how long it'll take to make the breakthroughs in the hardware to boost precision and remove the need for highly invasive surgery. Maybe it'll take decades. Maybe it'll take more than a century, and in that time, direct neural implants will be something that only those with a handicap or brain damage find worth the risk to reward. Or maybe the breakthroughs will come in the next ten or twenty years, and the world will change faster. DARPA is certainly pushing fast and hard.

Will we be ready? I, for one, am enthusiastic. There'll be problems. Lots of them. There'll be policy and privacy and security and civil rights challenges. But just as we see today's digital technology of Twitter and Facebook and camera-equipped mobile phones boosting freedom around the world, and boosting the ability of people to connect to one another, I think we'll see much more positive than negative if we ever get to direct neural interfaces.

In the meantime, I'll keep writing novels about them. Just to get us ready.

248 Comments

1:

"We're in the DOS era, if not even further back."

DOS isn't a good analogy. We're in the alchemy era.

DOS came at a time when the underlying science and technology were very, very well understood. Only small, iterative scientific advances have happened since. Programs and computers now are very much the same as they were then, only more so.

Pull out a textbook on computer engineering from 1985 and then look inside your PC (or phone) - it's smaller, faster, and more distributed, but meh. Apple's "latest" programming language for iOS is an Objective-C variant - basically a nicer, better C++, and C++ was already around in 1983 and is based on OOP ideas from the late 70s. DOS itself was low-tech - UNIX is older and modern OSs are much more like UNIX.

Whereas our understanding of neurology is a very, very long way from what you want. We need not just better technology, but better science: that is, better understanding of the brain, and of what it means to have a thought. There has been a huge paradigm shift in the last 25 years, because of better neuroimaging. But it's not enough, yet.

Which is why a lot of what you suggest falls into the "Cartesian theatre" fallacy that Daniel Dennett discusses at length. Some of the things you seem to want to transmit or receive simply don't exist.

2:

I don't think that this technology is realistic in the medium term.

Medical science has a huge problem with complex interlocking systems. An example would be creating organs: we can 3D print simple organs like bladders and bone marrow, but we can't artificially create a heart or a kidney. When it comes to the overall health of the human body, we can't get that right yet either. An example would be nutritional science - look how often that changes.

Similarly, we can identify genes that are responsible for single-gene diseases (or single-chromosome diseases such as Down Syndrome). Where are the genes that determine eye color? Sexual orientation? Susceptibility to cancer? Are there any genetic components to emotional expressions? Never mind finding a way to understand how the environment affects gene expression.

If we can't effectively map this, then what hope do we have with a more complex system such as the brain?

3:

Point of information, please. I assume by a "billion" you mean 10^9 (usual prefix G). Is a trillion, therefore, 10^12 (prefix T)?

We're in the DOS era, if not even further back. I would think we are in the era of "core store" actually - equivalent to 1965-70 (ish). But it will get "better", for values of better that mean faster & more & more reliable connections.

The ultimate interface would bring the ultimate new set of vulnerabilities And you do not mention the absolute certainty that some (if not all) governments would be tempted to, & actually use this to REALLY control people, or disable them, by switching their augmentations "off". A real, possible nightmare-scenario.

For now, we're a long way from facing such a choice Really? If we are currently at approx 1970 on this road, that's less than 40 years away from the full monty, isn't it? Maybe sooner, once the "security" services start funding R&D in the field. ( As you have noted in passing, by mentioning DARPA. )

There'll be policy and privacy and security and civil rights challenges. That's the mild end of it. Imagine DPRK or Da'esh making use of this?

P.S. Icehawk: Maybe, maybe not. I think you go too far back in analogical terms. Particularly if really serious research is done, the changeover to a real understanding, of the physical mechanisms at least, will come much sooner & faster than you imply.

4:

The artificial hippocampus implant is particularly interesting, especially if they can be networked...

http://wavechronicle.com/wave/?p=1601

"However, we might have a clue when looking at projects such as that designed to create an artificial hippocampus. This is aimed at developing an electronic prosthesis to perform the function of the hippocampus, which is to lay down and retrieve memories. This device, if it could be made to work in Humans, would go a long way towards creating a “Borg interface” since memories are integral to a sense self. One can easily imagine people who have such a prosthesis linking them together in order to share memories. Easily imagined, but probably far from easy to do without massive external computing and knowledge of “brain language” we do not yet have. However, for the sake of argument let us assume it is possible.

The result? Suddenly “I” can remember multiple lives. “I” have multiple streams of consciousness. “I” have an entirely new perspective due to all the knowledge that “I” now possess. “I” have a vastly expanded consciousness and “I” even have multiple bodies! Welcome to the Borg.

There is no coercion. As soon as someone joins the collective their consciousness expands to encompass everyone. Their ego, for lack of a better word, becomes the ego of the collective. It is not a submerging of self into a twilight prison but a vastening into the light of a billion souls joined as one, along with their accumulated wisdom. It is also a granting of immortality, at least as long as a certain number of Borg minds are still online and connected."

5:

I'd agree, particularly about the computing history and OS aspects of the argument.

6:

From my own personal medical history, I know how Doctors can ignore the evidence when a treatment has unwanted side effects. I know how many good ideas have fizzled out. And if something doesn't work it is the patient's fault.

And I think we all know what computer programmers have done, and how it turned out. We've had to go backwards some twenty years more for a mid-Eighties work of genius to get reliable enough to survive as a minority product in the modern world.

It seems a tad optimistic for anyone to think we'll get computers directly linked to the brain by getting these two groups to cooperate and get anything right.

And I certainly don't want Microsoft inside my head!

7:

'Suddenly “I” can remember multiple lives. “I” have multiple streams of consciousness. “I” have an entirely new perspective due to all the knowledge that “I” now possess. “I” have a vastly expanded consciousness and “I” even have multiple bodies!'

That's basically the tech of the Ancillary-verse, isn't it?

If this implant tech develops to the levels postulated (big if IMO) then it will be interesting to see how the brain maps the new data-streams from implants. I saw an interesting demo of how you can spoof proprioception on QI last week (https://www.youtube.com/watch?v=S4fiZJew22A) - google 'rubber hand illusion'.

Regards Luke

8:

Obligatory Sid Meier's Alpha Centauri quotes:

I think, and my thoughts cross the barrier into the synapses of the machine - just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams the sensibility of the machine invades the periphery of my consciousness. Dark. Rigid. Cold. Alien. Evolution is at work here, but just what is evolving remains to be seen.

-Commissioner Pravin Lal, "Man and Machine"

The Warrior's bland acronym, MMI, obscures the true horror of this monstrosity. Its inventors promise a new era of genius, but meanwhile unscrupulous power brokers use its forcible installation to violate the sanctity of unwilling human minds. They are creating their own private army of demons.

-Commissioner Pravin Lal, "Report on Human Rights"

9:

Saw your photo. Like the pirate look, or are you covering up an eye implant? Love your books, fiction and non-fiction, and can't wait until Apex comes out in audio (my preferred medium). I think your predictions above are correct and brain prostheses are coming much sooner than we think. In fact this is my preferred scenario for AI. Dirk and Silburni described what I want. Though I hope it will be a world of many group minds linking and unlinking... the centrally controlled Borg are not my scene. I want this link to be with animals as well as humans... eventually artificial minds would be welcome too.

10:

"And you do not mention the absolute certainty that some (if not all) governments would be tempted to, & actually use this to REALLY control people, or disable them, by switching their augmentations "off". A real, possible nightmare-scenario."

Indeed. I've only read the first book of Ramez's trilogy, but that's pretty much what it's about. I'd highly recommend it.

11:

The big issue I see at the moment is the one that has been holding us back for a long time - human experimentation.

It is extremely difficult to do meaningful practical research on humans, even with consent, since research is by definition unpredictable.

Which means I see any implementation of neural interfaces to be extremely slow to develop unless we can come at the problem from another direction.

Are they inevitable? Almost certainly - a huge percentage of our entertainment R&D budget goes on making an experience more "real", and tapping the brain would be as real as it could get.

I suspect what will happen is we will progress rapidly down the relatively simple approaches - tapping the optic nerve and reproducing the signals in both directions would be the start. We would then move down direct medical routes - "restoring sight to the blind" or "hearing to the deaf" would be prominent ones. Enhanced vision outside the visible spectrum would logically follow, along with possibly HUD-style overlays on the real world.

But in terms of direct brain stimulation and interaction outside of the known developed pathways? I see that being VERY slow - we barely understand the raw chemistry of the brain, let alone the interactions inside. We know areas light up when certain stimuli are applied, but we don't really know why; we just assign labels to those areas. And so much of brain function is idiosyncratic - while most humans are broadly similar, I have a vague memory that cultural and behavioural development influences how the brain responds to stimuli as much as the underlying chemistry does. This would mean different cultures would need different programming and most likely utterly different wiring.

12:

Imagine the possibilities for spam.

"Bud knew a guy like that who'd somehow gotten infected with a meme that ran advertisements for roach motels, in Hindi, superimposed on the bottom right-hand corner of his visual field, twenty-four hours a day, until the guy whacked himself." -- Neal Stephenson, "The Diamond Age".

13:

I have three problems with direct brain interfaces. One is the obvious one -- security. (In my 20s I used to want to be the first kid on my block to get a cyberspace jack. By my 30s I realized I wanted to be the first adult on my block to get a cranial firewall ...)

But there are other less obvious issues that will hold it back.

The first of these is antibiotic resistance. A brain implant is by definition an implant -- it requires surgical implantation. We might be able to do it via keyhole surgery eventually, but threading fine objects through the intracranial vascular system is just asking for catastrophic side-effects (would you volunteer for it if there was a 5-10% chance of a fatal sub-arachnoid haemorrhage?). Open skull surgery is probably "safer" from vascular accidents ... but then you end up with the risk of infection in surgery. We're currently dangerously close to reverting to 1880s levels of risk of infection during surgery, and the brain in particular is problematic because the blood/brain barrier is what mostly keeps dangerous infection vectors out: once you go through the meninges you'd better hope your infection control protocol is perfect or the patient is at serious risk.

This problem may be mitigated by the development of new antibiotics, by new antibiotic resistance protocols, or by different strategies -- DNA sequencing of pathogens to identify a suitable bacteriophage from a library, which can then be synthesized on the spot using a DNA synthesizer, if we're going to get all nanotech about it -- but for now it's really going to put the brakes on development/deployment of brain implants as anything other than a life-saving medical intervention where the benefits are perceived as outweighing the risks.

The third problem is obsolescence. Right now, those of us who are lucky enough to have a decent job can upgrade the fat pipe that delivers all of human knowledge to our eyeballs -- our smartphones -- whenever a new and improved one comes out. Cost: on the order of £500-1000. A brain implant, in contrast, might contain components that have a similar bill of materials, but the medical device certification and surgical costs are going to add 1-3 orders of magnitude to the price. This isn't an annual upgrade cycle! In fact, it'd be crazy to be an early adopter -- until Moore's Law and Koomey's Law cap out, so that progress is bound to be incremental rather than exponential.

Again, the nanotech future of "Nexus" is a plausible scenario for the eventual rollout of brain implant/upgrades ... but as long as it involves surgery and manufactured devices rather than self-organizing replicators, it's going to be a niche market. And niche markets are expensive.

(Where might it get started? Well, if it turns out that drone warplanes can be jammed or predicted too easily, it might be necessary to put a pilot in the loop directly on board. And as the F-35B's control helmet -- which has a bunch of VR features -- currently costs $350,000 per unit, I can see a market for brain implants for fighter pilots (the Air Force is paying). The general application profile is for any high-value/rare skilled profession that requires huge amounts of information visualization in extremely close to realtime without occupying the hands of the practitioner: "brain surgeon" springs to mind. But there aren't enough of those to drive prices down and bootstrap a larger market; it's like commercial space travel in the 1970s-1980s.)

14:

Looking forward to your newest book.

This technology might get 'tested' and adopted faster if marketed as entertainment, sex/sensual gratification, religion, education/job success.

.. gamers and movie-goers to increase their enjoyment
.. religious ecstasy, i.e., 'see/hear God' and re-affirm your faith (could be tied in with militant religiosity)
.. improve your grades and get into the college of your choice, get your dream job, etc.

Let's face it, anything that sounds 'medical' would have to undergo more rigorous testing, whereas anything marketed as a consumer good/service can be pushed out with next to no testing.

So, I would say that medical therapies would be the last step unless this new technology was sold through the TV shopping networks and/or self-help sites/books.

.. addiction control/therapy - includes food, exercise, drugs, alcohol, gambling, shopping, perhaps up to OCD

Any messing with brain components (neurons, glia and astrocytes) would have to cascade to other body systems and vice versa. Perhaps the end result is a tech vs. biology war game or social politicization. For every tech measure there would be a biological countermeasure. For example, a high fever could be an antidote to one form of mind control. Or taking a very cold shower/bath. Or something that interferes with the blood ... too few red blood cells, and your brain doesn't get the oxygen it needs.

The brain also controls fertility and sex drive, so theoretically, you could reprogram brains to the point where only select individuals ever procreated on demand.

15:

Pedantic note: Objective-C very slightly predates C++.

I wouldn't say we're in the alchemy era with BCI, with regard to computer technology timelines -- BCI, in a limited set of applications, has been commercially viable and commercially successful for thirty years (cochlear implants are a fairly mature technology, and mechanisms for using a mouse or typing based on signals from implants have been commercially available since the early 90s). Instead, we're in the era of IBM and the seven dwarves in the computer timeline -- wherein vacuum tube computers are a large enough and successful enough industry that there's competition going on, and transistors are just coming on the scene and becoming a viable pathway for a new class of machines (minicomputers). This certainly leaves room for growth -- this puts us slightly before LISP and FORTRAN, and thus before very important concepts like time sharing and BNF, and two decades prior to networking and UNIX pipes. It puts us in the era where most computers still used binary-coded decimal and decade counters.

However, the technologies that BCI is looking to replace are in many cases much more mature than computer technology.

Information-by-wetwire (the wet dream of everybody who has wet dreams about BCI) is looking to replace written language as transmitted via the human eye -- a four thousand year old technology that's still incredibly dominant and highly refined, taking advantage of an information pipe so thick that the eye is considered part of the brain. On top of that, writing has incredible infrastructure already -- the governments of major industrialized nations spend large amounts of money installing and testing the software required to maintain compatibility with the written word in children, and then proceed to communicate with everybody through this protocol. In order to displace the written word, you need to improve upon the rate at which information can be fed in (or improve error correction) in ways that written language can't, either with a smaller pipe or with sufficient advances that installing this device is cost-competitive with teaching a child how to read.

BCI-based control of machinery faces similar problems, because it needs to compete with a host of already competing interface technologies, ranging from speech and the written word (both hand-written and typed) to gestures, buttons, steering wheels, pedals, levers, and other similar things. Not only that, but it needs to compete with the arms, hands, legs, feet, and vocal tract -- something we don't really think about. All of these parts of our body that we'd be cutting out of the loop when controlling machines directly via BCI have their own intelligence on-board -- muscle memory is a macro processor for motion, and despite occasional failures like Freudian slips, the automation mechanisms in our nervous system for offloading the control of frequent operations from the brain are far, far ahead of autocomplete in terms of accuracy.

This is not to say that BCI won't go anywhere. Instead, I think BCI will be particularly useful precisely where there is no current alternative. For instance, if cochlear implants can be sufficiently refined, it should be straightforward to support networking them to throat mics -- in other words, you can cut the mouth and ears out of private cell phone conversations entirely. Advances in BCI will continue to help those whose limbs, senses, and nervous systems have failed them -- a few years ago, the first commercial bionic eye was approved by the FDA; it'll be a while before it'll compete with a real eye, but it certainly competes with not having any eyes in the same way that a crackly low-fidelity cochlear implant competes with being completely deaf. Additionally, when senses are needed that have no convenient existing sense-medium, BCI can come to the rescue. I think that these applications will need to support the field until it reaches the point where it can meaningfully compete with our existing sensoria.

16:

One major problem with permanent implants is that the brain is not static. Memories reconsolidate after each recall. Eventually the connections will drift, changing either what the memory is or corrupting the output. It works reasonably well for short-term experiments, but the fidelity of long-term implants will not be maintained.

This is like Charlie's point about obsolescence, but even worse, as each implant will rapidly degrade in accuracy over time.

Implants will need to be very extensive and constantly relearning the patterns. Essentially you might need a brain level computer processing each night to reconfigure the implants, which in turn might require extremely extensive connections that touch all the relevant neurons and/or synapses. It's like buying a supercar that needs servicing every night to keep running. The inconvenience may outweigh the value.

17:

An off-the-top-of-my-head "spam" example: do a geofence on your coffee stand to trigger melatonin release that kicks in when people are about 30 seconds away. Everyone who passes by gets sleepy, sees a coffee place, buys a coffee... Access to customers' brain states would allow all sorts of fun things.
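
The geofence half of that is trivial with today's tech; only the implant call is fiction. A sketch, where trigger_melatonin_release is obviously imaginary and the 30-second radius just assumes ordinary walking speed:

    from math import radians, sin, cos, asin, sqrt

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in metres (haversine)."""
        r = 6_371_000  # mean Earth radius, metres
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * r * asin(sqrt(a))

    WALKING_SPEED = 1.4                    # m/s, roughly
    TRIGGER_RADIUS = 30 * WALKING_SPEED    # "about 30 seconds away"

    def trigger_melatonin_release():
        print("(imaginary implant call: cue drowsiness)")

    def on_position_update(lat, lon, stand_lat, stand_lon):
        # Fire the (imaginary) implant call once the customer enters the geofence.
        if distance_m(lat, lon, stand_lat, stand_lon) <= TRIGGER_RADIUS:
            trigger_melatonin_release()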

18:

Sounds like the "Braincap" device from Sir Arthur C. Clarke's "3001", though it wasn't central to the plot and was non-surgical. One thing this tech should prove is how much brain language is common to the entire species and how much is written on the fly during fetal development and childhood.

19:

The big bottleneck in understanding how the brain encodes function - whether that's sound or vision perception, motion control, or memory - has been the ability to get neuron-level data out of the brain.

Time and again, when we've done it, we've produced useful results.

That's why I come back to the hardware as the real issue. With sufficiently safe hardware that can tap into a sufficiently large number of neurons, we'll be able to gain insights into how those circuits work, and translate neural activity into some degree of function, communication, or enhancement of function.

20:

Yes, I mean the American Trillion: 10^12.

I do think governments will be tempted to use this for control purposes. Though it's much cheaper to point a gun at someone.

The most worrisome situations for me in governmental control are the ones that might seem most socially acceptable. Someone has committed a murder. What if, instead of sending them to prison, we fit them with a device that changes their future urges? That's preferable to prison, societally. But the slope could get slippery fast.

21:

The motivation for working on every part of the brain is medical. And there are so many people with brain deficits of various types that there's motivation to work on nearly every aspect of brain function.

For example, even though we have the cochlear implant (which inputs sound into the auditory nerve), there's also quite a bit of research on directly inputting sound into the auditory cortex of the brain. Why? Because there are some people born without an auditory nerve at all. That population is a target for a medical intervention. But once you develop the technology, it teaches you about the encoding of sound perception in the brain. And it can be applied to a wider population (if you can overcome the hurdle of surgery).

More broadly: in the US alone there are roughly 5 million people who have some deficit in memory, attention, learning, mood regulation, and so on due to some trauma to the head. (Car accidents, falling off ladders, etc.) That creates a medical motive for learning to manipulate memory, attention, and learning. That is, indeed, why the hippocampus chip is being developed.

Again, this could be a very very long road. But the motives are there.

22:

I agree on pretty much all fronts. We need a breakthrough in the hardware to make progress here.

I suspect we're going to get interfaces that are a half-step between the current implanted probes and the self-assembling oral route of Nexus. The silk-based conformal interface and the 'neural dust' proposal both require incisions in the skull, but both are far less invasive to the brain itself.

With respect to obsolescence, I imagine that we'll place the interface inside the brain, but as much as possible of the logic outside the brain. Even so, early adopters would likely get something both more expensive and less sophisticated than the later adopters.

As for the military applications, it is interesting that DARPA is one of the biggest funders here. (Though, to be totally clear, almost all of DARPA's grant money in this space goes to medical applications and basic science. And they remain the #2 funder, behind the NIH.)

23:
What if, instead of sending them to prison, we fit them with a device that changes their future urges?

We've already tried that.

24:

Yes, I enjoyed the Nexus trilogy. But... the big issues for me with all of this are: 1. You said, with the hippocampal chip, we can record memories AND PLAY THEM BACK. So, judge, give me a warrant to search the perp's, er, the accused's hippocampus chip to see if he was guilty of thought crime... 2. What happens if someone other than you ratchets up the power... how do you tell what's real, and what's VR?

Or, for that matter, you live in a roach-infested slum, and you have no chance of a life, so they find your body, wasted, not having eaten or drunk anything in weeks, lying in your own bodily wastes, dead while plugged in....

mark

25:

So, the ultimate zombie but less unhygienic, and capable of authentic multitasking.

Business opportunities ...

Neurons for rent .. ideal for situations where you do not want certain thoughts traced back to your brain. (Legal implication: who owns that patent/copyright ... if you develop a new whatever outside your own personal brain.)

Crowd-mind Shareware - we'll spread your problem across 10,000 brains and come up with solutions that work ... because they originated with the users/intended market segments.

On a serious note ... at what developmental stage would this happen? If too young, you destroy a young life; if too old, some engrams might resist being overwritten. Also may apply to pre-existing medical/neurological disorders ... especially if undiagnosed or latent/not yet manifested. (Neurogenesis seems to occur more in some brain regions than others.)

26:

Medical applications are military applications when you have as many soldiers with TBI as the US currently does; it's the same reasoning behind DARPA's funding of prostheses (of course, both are also complementary to actual war-fighting gear DARPA's got interests in).

27:

So ... welcome to the Borg? I can't imagine this path not leading to a collective mind. And I don't know where this would lead us as a species, a new one maybe? And, of course, the inevitable war between the old humans and the connected ones.

28:

The potential political aspects interest and caution me. We've had an immensely terrible history of atrocities piled upon each other under the excuse that one group or another was inferior and thus deserved it.

What happens when the inferiority of a group stops being ideology generated by the mis-measure of man, and starts being an objective fact. That people in group A have access to a prosthetic motor cortex that makes them stronger, faster, and smarter than group B, who doesn't have that? When you conclusively have an ubermensch and untermensch?

It's rather more straightforward to predict technology barriers and solutions than the sociological forces at play, but I suspect that's where the really "interesting" things will happen.

29:

Hello Ramez, thanks for writing this piece, your novels, in general.

However, I think you're entering the debate at the "floppy disk" end, i.e. you're backing the wrong kind of tech, however much electronics was the 'cutting edge' in the 20th Century.

CTRL+F - Light pulse or optogenetics. Not found: this has been a hot topic since 2008.

Examples - bunch of links & papers:

But are engrams conceptual, or are they a physical network of neurons in the brain? In a new MIT study, researchers used optogenetics to show that memories really do reside in very specific brain cells, and that simply activating a tiny fraction of brain cells can recall an entire memory — explaining, for example, how Marcel Proust could recapitulate his childhood from the aroma of a once-beloved madeleine cookie

https://newsoffice.mit.edu/2012/conjuring-memories-artificially-0322

RIKEN-MIT Center for Neural Circuit Genetics (CNCG)

High-performance genetically targetable optical neural silencing by light-driven proton pumps

Neural substrates of awakening probed with optogenetic control of hypocretin neurons

What do you get when you combine microorganisms and fiber optics? Mind control over mice and rats. Karl Deisseroth and his team at Stanford University have been making serious inroads into discovering how the brain works through optogenetics. The genes of certain algae and archae are spliced into rodent neurons, making them respond to light. Blue light turns the neuron on. Yellow light turns the neuron off. A fiber optic cable is connected into a living mouse or rat with the spliced genes allowing scientists to expose different neurons to different lights. The results are astounding. Stimulate the right hemisphere of a mouse, and it runs in circles to the left.

http://singularityhub.com/2010/03/18/incredible-video-of-using-light-to-control-the-brain-of-mice/

https://www.youtube.com/watch?feature=player_embedded&v=88TVQZUfYGw

https://en.wikipedia.org/wiki/Optogenetics

All those are from 2010-2013ish. Couple that with recent DNA splicing and rewriting advances - CRISPR/Cas9, notably being used by the Chinese - and I hope you can see something a little more refined and dangerous than actual implants.

An old favorite, Schizophrenia and cortical blindness: protective effects and implications for language (a very thorough overview):

The basis of this discussion lies in the idea that the protective effects are not the result of blindness per se, but rather of brain changes that occur secondary to blindness (Silverstein et al., 2013a, p. 1). Naturally, since the two types of blindness, cortical and peripheral, have a different brain basis, the subsequent brain changes may not be uniform, and a modulation of the protection mechanism may be allowed across the two.

Importantly, the modulation of the protection mechanism that we assume to exist is compatible with the idea in Silverstein et al. (2013a) that a cluster of perceptual and cognitive functions, which are impaired in schizophrenia and significantly enhanced in congenital blindness, might offer protective effects (see also Cattaneo et al., 2008; Cattaneo and Vecchi, 2011 for reviews of enhanced perceptual skills in the CPB population).

If I were the head of an intelligence agency, I'd be splicing those structural brain changes into my agents to prevent them being attacked / hacked if captured. If I were the top CEO of a company, I'd probably look into it as insurance against corporate espionage. And so on.

Lastly, while everyone is going after graphene, the end goal would certainly be to get changes done with a biological solution, e.g. splice in changes via direct methods or secondary methods (microorganisms), then activate using light (or other methods).

A recent DARPA grant allocated just under $30 million to build next generation transistors from graphene. Money from the private sector has been pouring into the development of graphene batteries as a potential replacement for lithium ion.

http://www.defenseone.com/technology/2014/10/heres-what-next-brain-implant-will-be-made/97190/

Of course, this all depends on what you're looking for: curing broken minds (and where's the line here?) or making something new.

Then again, I find it strange how all these swimmers have suddenly different body morphology that allows so many records to be broken...

30:

Yeah, all of those are serious problems to me as well. On the other hand, if we can get some decent interfacing done without direct physical connections there could be potential. A mesh implanted under the scalp is a lot more achievable than something interwoven in the brain.

31:

What happens when the inferiority of a group stops being ideology generated by the mis-measure of man, and starts being an objective fact. That people in group A have access to a prosthetic motor cortex that makes them stronger, faster, and smarter than group B, who doesn't have that? When you conclusively have an ubermensch and untermensch?

What happens is Homo sapiens splits into two (or more) separate species. And I do not think that is an inherently bad thing.

Part of the reason dominant groups throughout history piled atrocities onto dominated groups is precisely because their "superiority" was a fiction. They HAD to be oppressive in order to keep the dominated people from acquiring skills and knowledge with which they could (and often did) throw off their supposed "betters". But when Group A really is superior, then Group B is not a danger to it. Group B can learn and train whatever they want, yet if there is ever a fight they will lose -- and they will know it. Group A have no reason to be cruel.

Now I realize that "no reason to be cruel" is no guarantee someone won't be cruel anyway. But the history of our species actually encourages me here. In the situation you described, the relationship between ubermensch and untermensch will be analogous to the relationship between modern humans and chimpanzees. While humans have done many awful things to chimpanzees in the past (and still do in parts of Africa), they a) never rose to the level of cruelty humans inflict on each other, and b) there has been a sea change in attitudes over the last several decades, to the point that some countries are on the verge of granting chimps and gorillas human rights. In part because chimps and gorillas are objectively inferior, and thus can never pose a danger to humans.

So I am cautiously optimistic about future human speciation.

32:

I'd be more heartened by your analogy if many large primates weren't on the edge of extinction.

There are more permutations than A being categorically superior to B in all ways to the point of not being a threat, though. What if A is superior in some ways, but B is superior in others? It isn't hard to posit that; you can see differences in how systems develop in different regions like that right now.

You are also assuming a level of rationality in threat assessment and political development there that I don't think is supported. I mean, you can quite easily point at movements that are immediately, self-evidently, a BAD IDEA that still manage to round up plenty of members.

It's quite an opening for more interesting times.

33:

This sounds like another technology, like fusion, AI, alternative energies, space colonization, etc., to add to the list of those that promise to revolutionize the world very soon, but never quite seem to arrive. Techno-capitalist society almost runs on such vaporware and its peddlers at this point, doesn’t it? And what distinguishes these peddlers from shills for the military-corporate complex (how many times was DARPA mentioned glowingly in this post anyway)?

34:

There is another route to BCI that is seldom mentioned. It is the use of (say) neural stem cells to create a separate organ within the brain which is dedicated to computer connection. Add an intrusive array, for example a small wafer that links with the surface neurons at a resolution in the millions. Then you feed the person a brain plasticizer and do some serious neural training. Maybe throw in some optogenetics.

35:

Light pulse or optogenetics

While I understand the control aspect, I don't see how this will allow reading the state of a lot of neurons without accompanying sensors implanted. Do you have an idea how this might be accomplished?

I do like your floppy disk analogy though. I think that is apropos.

36:

I have mild cerebral palsy, one ear defective from birth (including the nerve), and depression/anxiety. The last is now controllable with medication which doesn't have too many side effects; but I'd be delighted to have cyber-treatment available for all three.

However, I'm unlikely to live long enough to take advantage of it. Unless, of course, life extension research pays off more and earlier than I expect.

37:

Didn't Scalzi do that, with BrainPal (TM)?

38:

Beat me to it: I was just going to suggest a genetically engineered human and an optogenetic interface.

That also partially answers Charlie's concern about the transplant problem of installing a machine in someone's skull, which is just begging for a really nasty bacterial infection. Since I had a relative who had bacterial encephalitis, what I can say is that it's a nasty disease with lifelong consequences, and it's not worth going there.

That said, I suspect that, if we have brain interfaces at all, they're going to be installed at birth or possibly before, and they'll slowly grow with the infant brain, rather than being rammed in during adulthood. It's better to work with brain plasticity than against it.

Another possibility is something like Rebecca Ore's translation computer, which was an artificial temporal bone replacement. On the idea that you could miniaturize magnetic stimulation to target specific small areas, and that you could train your brain to interpret TMS signals as visions, sounds, or whatever, it's at least theoretically possible that you could install the machine as an artificial skull plate, leave the brain and membrane intact, and then slowly train the patient to use sophisticated TMS signals to interact with the machine in his or her skull.

Personally, though, I'm with Charlie: I want a better firewall, not a jack in my head, and I don't want planned obsolescence in any implantable electronics either. I suspect experience with real cybertech is what killed cyberpunk more than anything else.

39:

A mesh implanted under the scalp is a lot more achievable than something interwoven in the brain.

How will that read individual neurons? It might work as a high-resolution EEG, which isn't the same as reading neurons, possibly several layers deep in the cortex.

What might be interesting is if you could introduce material into the blood stream that would leak out of the brain and line the skull or cortical surface, which could be manipulated by a non-invasive "3-D printer" to create the necessary interface in situ. That might solve the infection problem, even more neatly than Clarke's braincap.

40:

And explored nicely in various episodes of the anime "Ghost in the Shell" sequels. Brain hacking, the puppet master, false memory implants, etc.

41:

Yerhonner, I can't recollect shooting John Key in his front garden, all I have is this strong memory of thinking he was a liar, a traitor and provably a war criminal. My implant musta dunnit!

The potential for subterfuge and false-flaggery is wonderful.

42:

While I understand the control aspect, I don't see how this will allow reading the state of a lot of neurons without accompanying sensors implanted. Do you have an idea how this might be accomplished?

Yes.

"If you're reading text in a newspaper or a book, you hear a voice in your own head," says Brian Pasley at the University of California, Berkeley...

The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. As each frequency correlates to specific sounds in each word spoken, the spectrogram can be used to recreate what had been said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves

http://www.newscientist.com/article/mg22429934.000-brain-decoder-can-eavesdrop-on-your-inner-voice.html
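
(The spectrogram half of that is ordinary signal processing, by the way; here's a minimal sketch with a sine wave standing in for the audio. The hard part -- the decoder that maps cortical activity onto a spectrogram -- is what the Berkeley group actually built.)

    import numpy as np
    from scipy.signal import spectrogram

    fs = 16_000                      # assumed sampling rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t)  # stand-in for recorded (or reconstructed) audio

    # Short-time Fourier analysis: energy per frequency band per ~16 ms window,
    # i.e. "the different frequencies of sound waves heard over time".
    f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)

    print(f"dominant frequency ~ {f[Sxx.mean(axis=1).argmax()]:.0f} Hz")  # close to 440 Hz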

Here are a couple of personal links if you want to track down their papers or contact them directly:

http://knightlab.berkeley.edu/profile/bpasley/ https://www.bu.edu/sargent/profile/frank-guenther-ms-phd/

Note: these are public academics, not, say, researchers from the Nizhny Novgorod Research Radiotechnical Institute.

Personally, though, I'm with Charlie: I want a better firewall, not a jack in my head, and I don't want planned obsolescence in any implantable electronics either

Serious advice?

1) Learn to meditate
2) Learn some higher order thinking processes. e.g. Code; Formal Logic; even Kant will rewire parts of your mind
3) Learn to understand and process techniques that undermine your mental processes

External:

4) Get glasses that polarize light
5) Mimic the patterns of sensory usage that blind people go through that form their brain structures
5) Get a subdermal implant behind your ears

I may, or may not, be joking about some of those.

Note: people with the wiggle might raise an eyebrow at spectrograms and know their usage in a range of fun online cryptogames, e.g. Aphex Twin, Fez (Flow by Disasterpiece), Cicada 3301, ARGs.

43:

Oh, and:

6) Learn to process text (r/w) without an inner monolog. It's very possible (and it's the first thing that they break / reinsert as it's the equivalent of being a submarine).

7) Learn how to use techniques such as transitive inference and abductive reasoning while reading aloud nonsense in your mind.

8) Drink unreasonable amounts of alcohol / scramble your thoughts via other drugs.

~

Please note: some of these tips are more than likely to permanently damage your brain structures.

44:
Beat me to it: I was just going to suggest a genetically engineered human and an optogenetic interface.

No need for the false modesty; you suggested that years ago in Scions. :-)

45:

It's done in experimental animals by splicing in a fluorescent protein that is expressed when the neuron is active, as well as the light-activated gene, and using the optic fiber (when not delivering activating light pulses) to watch for sparkles.

46:

"Ubermenschen, I'd like to introduce you to Ned Ludd and Captain Swing. I'm almost certain you'll have something to talk about."

Violence doesn't have to start from the top down; indeed, those on the bottom know well that exterminating them won't take explicit violence, but that violence may be their only chance to be heard.

47:

using the optic fiber (when not delivering activating light pulses) to watch for sparkles.

Which is an implant, and needs to be co-located with the neuron to determine which one. Does this solve the implant issues raised?

@CatinaDiamond. Thanks for the links. The NS article links to this PLOS article. The cortex still needed an array of cortical electrodes to read the cells, so I don't see how we have avoided the implantation issue. As per the reply from anonemouse, optogenetic techniques will still require some sort of implanted sensor to read the light signals rather than the electrical ones. That might make it possible to reduce the sensor numbers by orders of magnitude, but I don't see how you avoid the sensor implants. Now if you could make the emissions penetrating, like soft x-rays, or even sound (detect the nerve compression), some sort of high-resolution Fourier transform of the signals might achieve the goal from a sensor that lines the inside of the skull.

48:

Ah, I thought s/he/ze was asking about how it's currently done in live humans. The current lab process for test animals is listed in the abstracts:

Neural substrates of awakening probed with optogenetic control of hypocretin neurons:

Here we directly probed the impact of Hcrt neuron activity on sleep state transitions with in vivo neural photostimulation, genetically targeting channelrhodopsin-2 to Hcrt cells and using an optical fibre to deliver light deep in the brain, directly into the lateral hypothalamus, of freely moving mice.

Note: before you succumb to that ridiculous picture in the "Singularity Hub" magazine, OF can be very small indeed: Subwavelength-diameter optical fibre. These aren't used in the lab, but the potential is there to at least refine the techniques.

Perhaps small enough, one day, to have the SF typical 'neural net'.

From violets to sound to light. My my my, aren't I a tricksey one?

49:

Sorry, you posted as I posted.

[[Meta: we're allowed to discuss only stuff that's public knowledge, so anything without a link can be considered "imagination / art / theatre / spectacle"]]

To answer your question, you're referencing current tech. The solution to the issue is to start splicing in biological systems from other creatures (cough octopus family members cough) and have the brain grow its own 'sensors', which would then be attached to subwavelength OF via [censored].

Hypothetically, of course.

50:

Thanks for the clues. I think I can take it from there.

51:

Your magic search term is: "opsin polypeptide"

An example in cuttlefish - reason for the link? 'Cause it's a system not purely in the eye of the perceiver.

52:

Thanks for remembering that! :D

53:

Would you be willing to 'out' yourself here?

I've never read or heard of Scions apart from the publishing house. Scion > Dragon?

Vargo? :o

54:

Interesting TED Talk from David Eagleman on the limitations of existing sensory inputs to the brain, the plasticity of the brain and its sensory processing, and how we might expand our perception of the world.

Worth watching if you haven't seen it already.

55:

Very clever. I'm thinking one might not even need the OF, replacing it with engineered [Geobacter-based?] cells that transmit the signal electrically to cortex-surface sensors, avoiding penetrating materials. A multilayer memristor-based computer sitting inside the skull to process the data with a deep-learning NN. That might allow nightly reprocessing of the signals as the cortical network changes.

This might actually be possible...

56:

There are more permutations than A being categorically superior to B in all ways to the point of not being a threat, though. What if A is superior in some ways, but B is superior in others?

Well, "A being categorically superior to B" was what you described in post #28. Hence I responded to that.

What if A is superior in some ways, but B is superior in others?

As a matter of fact, I find that far more likely. If Homo sapiens does end up speciating, the different branches will probably be just that -- different, not inherently superior or inferior. Frankly, I think more diversity would be a good thing.

57:

We were probably objectively superior to the Neanderthals, and that didn't save them. Oh, sure, some of their genes survive, but try finding someone who legitimately thinks in the same way as a Neanderthal these days.

As for comment 56: Imagine a world in which the people who say "those people over there are... different" are actually, factually, correct. Imagine a world in which the enemies are, technically, not human. Imagine a world in which people think in fundamentally different ways, and don't have to resort to minor differences like skin color or religion.

58:

This might actually be possible...

The possible and the probable are two very different areas. e.g. it's possible that I will survive to the end of this year, but it's hardly probable.

Homo Sapiens Sapiens will probably go for the easy option, use silicon / graphene hacks, mutilate what it is to be human qua human, wipe out the remaining whales and look confused as Late Stage Capitalism leads to MegaCity 1 and then not even notice the genocide that occurred. It's possible that you don't and you get your shit together and start thinking, but hey.

I, at least, chose something else. For which, apparently, I'm very much Doomed.

εἰς τὸ ὄνομα τοῦ Πατρὸς καὶ τοῦ Υἱοῦ καὶ τοῦ Ἁγίου Πνεύματος

Complexity: if you destroy complexity, you might as well be a black hole.

Fuckwits.

59:

" Imagine DPRK or Da'esh making use of this?"

Taking advantage of advanced neurotechnology need not be so obvious. I'm more worried* about insidious uses of neurotech in a 'who does it actually serve' sense. What am I being told I'm gaining as I enhance or trade away my biological needs? Do I actually want it, or does any impulse to adopt come from over half a century of psychology research channelled into producing more effective advertising?

Not to say that such technology couldn't easily be hijacked in a Puppet Master manner (referencing Ghost in the Shell) by some organization like Da'esh. Just that whoever would be responsible for the deployment of neurotechnology would be in the best position to...use it this way.

I don't mean to come across as a conspiracy theorist. I'm just worried about what people will lose when the social pressure to adopt invariably comes.

60:

I have imagined that. Many times, over the years. And I agree, every once in a while it would lead to some serious nastiness. Overall, however, I think human speciation would be more a good thing than a bad thing.

Maybe I am overly optimistic.

61:

The possible and the probable are two very different areas. e.g. it's possible that I will survive to the end of this year, but it's hardly probable.

Do you have cancer? Sorry if I am being insensitive, but this is a very worrisome statement.

62:

At this point, I'd bet against Homo sapiens speciating.

Here's the deal: at this point, we've effectively got two inheritance systems, genes and culture. Culture mutates really fast, and it's what allows us to be everything from fishers to farmers to warlords. Effectively, from one generation to another (or even faster) we can radically switch our ecological niches.

Our genes are still evolving, but our rapidly mutating culture shields us from a lot of environmental selective pressures. The genes we do tend to see a lot of selective pressure on are the ones that enable us to explore new aspects of culture, like lactose tolerance, alcohol tolerance, and so forth.

There are a couple of reasons I don't think humans will speciate. One is that, while we do take different niches (for instance, many humans parasitize other humans--we call them grifters and thieves), these cultural roles rarely last long enough to have genetic effects. The second reason is that cultural roles rarely last more than a few centuries, 1,000 years tops. That's too short a time to have a significant genetic effect. The end result is that selection pressures slosh around a lot, but they're not likely to ever force us to become separate species.

This is the short version. I can babble on at length, but basically, culture splits us into different "ecotypes" all the time, and the fact that culture can do this without affecting our genotypes means that we're unlikely to ever genetically speciate into multiple human species. Culture simply changes too rapidly for this to happen, and paradoxically, that rapid, ceaseless change is one of our great strengths as a species.

63:

The speciation will occur through genetic engineering, not normal evolutionary pressure. I would expect it to start within 50 years, maybe sooner.

64:

That's The Culture's "Neural Net" isn't it? Or a very close facsimile?

65:

a k a "Babel-Fish" I'm surprised, actually, that no-one ( including me ) has jumped on that one - instant mutual understanding, no need for translations .....

66:

I once tried to build a neural net that modelled what a neuron did, just to show how hard it was.

But I couldn't get it to work because the neuron I was trying to model was too fucking complicated.

67:

Google Translate says that your classical Greek said: "In the Name of the Father and of the Son and of the Holy Spirit". None of whom exist ....

Politely, but I think your medications are off, given the other parts of that particular post ... The environmental damage caused by the communist religious system is as bad as, or worse than, that caused by "capitalism", & there are always people fighting against those injustices, no matter which system.

Please try again, in plain English?

68:

Latest news ... Relevant (perhaps) to cultural inheritance. Apparently, Da'esh are getting very close to Palmyra & even the Syrian guvmint are worried, never mind UNESCO & everyone else.

69:

People are confusing several different things here.

So let's get some distinctions out there.

0) Hacking general mental state

Apply chemical or electrical signals to modify general brain state: make people happy, or sad, or forgetful or excited.

Knowledge of brain function required: None

Technological level: Very mature. Pretty much every known culture does this, and has done so for thousands of years with various chemicals.

1) Hacking simple I/O streams.

Feeds inputs directly into simple nerves headed for the brain.

The nerve signal is straightforward, relates clearly in a way we can easily map to something we can physically measure, has a clean input stream, and is not very high bandwidth.

Example: Cochlear implants. Instead of the signal to the brain coming from the eardrum via the cochlea you wire up a microphone to send the same signals in via radio to an implant in the inner ear.

Knowledge of brain function required: None

Technological level: Commercially available, at a high price. Like computers in the late 1970s.

2) Hacking complex I/O streams.

Feeds inputs directly into complex nerves (or groups of nerves) headed for the brain. The nerve signal is not straightforward, or does not relate clearly in a way we can easily map to something we can physically measure, or does not have a clean input stream, or is very high bandwidth.

Knowledge of brain function required: minimal. Need to know enough to correlate how the signals work together.

Technological level: Cutting edge, but working in the lab, custom built and at astronomical price. Like computers in the 1960s.

3) Hacking the I/O buffers.

Certain parts of the brain show activity that maps cleanly and simply to inputs and outputs. For example, those parts of the brain related to motor activity, and to some visual processing, have structures such that we can map their activity to which bits of the body are moving, and to which bits of the visual field are seeing more light.

Knowledge of brain function required: good knowledge of the easiest to understand parts of brain activity: those with simple mappings to external things. This is knowledge that we didn't have 30 years ago but do today due to advances in neural imaging.

Technological level: Extremely basic uses are now cutting edge. Like computers in the 1950s.

E.g. the (very misleadingly named) link Ramez gave above about people who "emailed verbal thoughts back and forth from person to person." It was not verbal. What happened was that brain scanning of one person could distinguish 'brain activity to move arms' (Signal A) from 'brain activity to move legs' (Signal B). In another person they could then induce signals in the visual cortex meaning "there is light here" for A or "there is light there" for B. So they could send a single bit of information (see the toy sketch at the end of this comment).

4) Hacking internal mental representations.

Reading or changing people's dreams, hopes, loyalties, ambitions, memories or political beliefs.

Knowledge of brain function required: Unknown

% of that knowledge possessed by science: Unknown

How we'd do it: Unknown

Actually possible: Unknown

1) 2) and 3) above require the things Ramez is talking about: more bandwidth, better precision, etc. Iterative technological advances which require really clever people doing really clever things, but seem likely to occur sooner or later.

4) is a totally, totally different ballgame. This is where we're at the 'alchemy' stage, if that.
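To make 3) concrete, here's a toy sketch of that one-bit pipeline (Python, with entirely hypothetical stand-in functions -- this is not the actual experiment's code, just the logical shape of it):

```python
# Toy sketch of the one-bit "brain email" pipeline described under 3) above.
# Both helpers are hypothetical stand-ins for the real EEG rig and stimulator.

def classify_motor_imagery(eeg_window):
    """Stand-in classifier: return 'A' (arm imagery) or 'B' (leg imagery)."""
    return 'A' if sum(eeg_window) > 0 else 'B'

def stimulate_visual_cortex(bit):
    """Stand-in stimulator: evoke a phosphene in one of two positions."""
    print("phosphene at position", "here" if bit == 0 else "there")

def send_one_bit(eeg_window):
    bit = 0 if classify_motor_imagery(eeg_window) == 'A' else 1  # the entire message
    stimulate_visual_cortex(bit)

send_one_bit([0.3, -0.1, 0.5])   # sender imagines an arm movement -> receiver sees "here"
```

Everything clever lives in the classifier and the stimulator; the 'telepathy' itself is one bit.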

70:

[ Hacking internal mental representations ] ... is a totally, totally different ballgame. This is where we're at the 'alchemy' stage, if that.

Or worse, phrenology

71:

I feel like that level of hardware would result in a similar problem that the human genome project faced: an ocean of data and a puddle of knowledge.

72:

Now that problems as elementary as "what should we eat" are totally understood and solved, at a practical level at least, it's high time we tackle how the brain works in detail.

Can't wait for the first sweeping political, sociological and philosophical implications to come out, so very solidly justified. Hail Science!

73:
Apples "latest" programming language for iOS is an Objective-C variant - basically a nicer, better C++ and C++ was already around in 1983 and is based on OOP ideas from the late 70s. DOS itself was low-tech - UNIX is older and modern OSs are much more like UNIX.

There are two seminal OOP programming languages - Simula developed in the 1960s at the Norwegian Computing Center in Oslo and Smalltalk developed in the 1970s at Xerox PARC in California.

Bjarne Stroustrup was influenced by Simula 67 in the creation of C++ which combined ideas from Simula with the C programming language. Around the same time (the early 80's) Brad Cox combined ideas from Smalltalk with the C programming language to create Objective-C.

C++ and Objective-C were developed independently and with different lineages with the same aim of bringing OOP into mainstream programming.

Apple's new Swift language refines the OOP model of Objective-C and adds features from the Functional programming paradigm (which is the next 'new' thing after OOP and which also has an academic history dating back decades.)

It seems to take about thirty to forty years for Computer Science ideas to move from the lab to ubiquity.

74:

DOS itself was low-tech - UNIX is older and modern OSs are much more like UNIX

Pedantic nit-pick here:

DOS was basically a clone (some would say a rip-off) of CP/M running on Intel 8088/86 processors (as did CP/M-86, albeit at a higher price). DR's CP/M family were themselves based on 1960s-era OSs from DEC's PDP-range minicomputers, so arguably have roots leading back to the 1962-68 period.

UNIX sort of escaped into the wild circa 1973 (within Bell Labs, as a typesetting platform) but was basically written by Ken Thompson as a hacker toy on a spare PDP-8 when he had nothing better to be doing. A lot of the ideas in it were derived from work he'd been involved in on the MULTICS project, a vastly more ambitious MIT-led project to develop a general purpose time-sharing OS. It probably took UNIX until the early 1990s to completely catch up with the functionality of MULTICS, leading one to speculate about the Road Not Taken ... because MULTICS got started in 1964 and was first presented in public in 1965 (and the last known MULTICS system was shut down in 2000, although I suspect cheap hardware and virtualization mean that we may see it running on hobbyist systems to this day, somewhere).

The point being, commercial applications usually have antecedents going back way further than most people realize.

And on that note, I think it's time to mention Jose Delgado's pioneering work on brain implants from the 1950s onwards!

75:

Yes, I did mean speciation by genetic engineering and/or bionic implants. Not by natural selection.

76:

Even more pedantic nit-pick (since we're already on a computer-history tangent).

When we say DOS these days, we usually mean MS-DOS and various repackaging/clones of MS-DOS (so IBM PCDOS, DR DOS, and their less well-known brethren, along with modern implementations like FREEDOS). MS-DOS (and by extension PCDOS, which is just rebranded MS-DOS) is a modified version of QDOS, which it turns out was actually a manually ported disassembly of CP/M. So, early MS-DOS literally was CP/M in a sense, rather than a CP/M clone (and as a result, years later, when this all came to light, Digital Research won a suit against Microsoft -- even though Microsoft didn't know at the time that what they were buying was essentially a crack of CP/M). However, what separates MS-DOS from CP/M is the set of features that first appeared in DOS 2.0 -- support for directories, and rudimentary redirection -- features that were introduced to DOS from UNIX because Microsoft had just become a UNIX vendor.

So while UNIX is certainly more advanced than DOS, DOS not only is younger than UNIX but depends upon it for very important elements of how we remember it.

A note on MULTICS -- there are several projects intending to port MULTICS to modern hardware, in part because the source was released several years ago. A friend of mine started one; he was formerly of the NSA & quit pre-Snowden for ethical reasons -- which bodes for strange times regarding the state of MULTICS. However, there's freely available software (simh) which can simulate the hardware that MULTICS originally ran on, meaning that hobbyists can, right now, run MULTICS on their own.

The moral of the story: history is complicated and progress is never linear, which makes using the state of one technology as a metaphor for another technology pretty difficult to do well, since what the metaphor implies depends heavily on the breadth of one's knowledge of history (for instance, if someone says something is "like the mac in 1984", that means one thing if the reader believes that the mac introduced the GUI, another if they're aware of the Alto, and yet another if they're aware that the Amiga 1000 came out the following year with twice the capability and half the price, and yet again something else if they're aware of the Lisa that came out two years prior).

77:

We are already a collective mind. It's called 'society'. It's pretty inefficient, because bandwidth is limited and error rates are high, and to combat this we have overly aggressive caching and prediction systems -- which themselves often cause problems by overgeneralizing or by spending more resources on disaster mitigation than are necessary. Also, there's a plethora of protocols and none of them are properly standardized!

78:

With hippocampal implants, this might be an issue. I don't foresee this being an issue for i/o, since (if we can get the neurons to stick to the probes properly and continue to do so) the connections will tend to be self-reinforcing if they are used. There might be a problem if you switch off your implant for a month.

(This doesn't spell the doom of memory implants. We already rely upon external memory devices all the time; if we end up with implant connections where the bandwidth is better, we can just use the implant as yet another external memory device, and swap out memories to it out of long-term memory the same way we currently swap out memories to notebooks or the internet.)

79:

The environmental damage caused by the communist religious system are as bad, or worse, than those caused by "capitalism", & there are always people fighting against those injustices, no matter which system.

Please try again, in plain English?

You can be as rude as the rules allow, I don't mind.

I was making an oblique(ish - about as subtle as being hit by a salmon) reference to the real reasons people are worried about all of this: patriarchal hierarchies (of which all the systems you've labelled are indeed versions, with varying amounts of ideological baggage added), which tend to take techne and use it for all the wrong reasons.

We are, after all, living in the panopticon while burning through the complexity of the ecosphere at an unsustainable rate, are we not?

I'm not sure Heaven would respect that stance (Unsubtle jibe).

"“For too long, we have been a passively tolerant society, saying to our citizens 'as long as you obey the law, we will leave you alone',” he said.

“It's often meant we have stood neutral between different values. And that's helped foster a narrative of extremism and grievance.”

David Cameron, May 12/13th.

The Independent

(And of course I'm using an Indy source for snark, the Russian oligarch who owns it shifted quite heavily to support the Conservatives late in the election cycle).

~

To make a positive comment - the issue with modelling your analogies on computers, or looking to create (true) cyborgs, is one of hierarchy. You simply will not get open source versions; and even with something so banal as the new "iNet 6" you can already see how closed markets are avidly pursued; now bring that to Brand Loyalty Personhood[tm] (frequently satirized). Someone referenced Adam Roberts' New Model Army recently, which has a (somewhat utopian) take on open source networking / non-hierarchical organization, and is a good thought experiment. (He's also up for an award this year, good luck to all involved).

Worse than this, computers are inherently hierarchical in organization. You probably don't want to go too far down the route of modelling homo sapiens sapiens wetware on silicon, bad things happen[tm].

~

Hacking internal mental representations.

Reading or changing people's dreams, hopes, loyalties, ambitions, memories or political beliefs...

This is where we're at the 'alchemy' stage, if that.

And Newton was an alchemist at heart, of course. ( insert usual joke here)

I'd disagree, but again, a different topic entirely. e.g. Max Martin

If you really want to start poking into honey pots that have sharp teeth in them, your journey starts here (PDF).

~

Anyhow, the future is biological, not silicon: the only question is whether the octopuses or squids win.

80:

On reflection, if we use Kanerva's "Sparse Distributed Memory" approach for memory storage, even hippocampal interfaces might be sufficiently recoverable despite plasticity.

It is slow when implemented in software, but the new chips simulating neurons might change that.
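For anyone who hasn't met Kanerva's scheme, here's a minimal sketch of the textbook read/write cycle (parameters purely illustrative, and nothing to do with any particular neuromorphic hardware):

```python
# Minimal Sparse Distributed Memory sketch: random hard locations, Hamming-ball
# activation, counter-based storage. Parameters are illustrative, not tuned.
import numpy as np

rng = np.random.default_rng(0)
DIM, LOCS, RADIUS = 256, 2000, 112                 # word length, hard locations, activation radius

addresses = rng.integers(0, 2, size=(LOCS, DIM))   # fixed random hard-location addresses
counters = np.zeros((LOCS, DIM), dtype=int)        # one up/down counter per location per bit

def active(addr):
    """Hard locations within RADIUS Hamming distance of addr."""
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    counters[active(addr)] += np.where(data == 1, 1, -1)   # +1 for 1-bits, -1 for 0-bits

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=DIM)
write(pattern, pattern)                            # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1            # corrupt 20 of 256 address bits
print(np.count_nonzero(read(noisy) != pattern))    # usually 0: the corrupted address still reads back the stored word
```

The attraction for a plastic, drifting substrate is exactly that tolerance: a read address a little way from the original still lands on mostly the same set of hard locations.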

I'd love to see a bio-molecular implementation too at some point. I'm also very intrigued at the possibilities of using opsins (as suggested by CatinaDiamond) as sensors to communicate with such devices.

While I share the concerns of those who do not want brain implants, I do see the more prosaic use of such devices for spinal cord repair and bionic limb prostheses.

More generally, I can see the use of such approaches for control of all sorts of biology, e.g. development and growth in a wide range of organisms.

81:

Dirk, there are two problems with this idea. The first is that if someone creates a genetically engineered being that can't produce a viable offspring with a member of Homo sapiens (the basic definition of species among living beings), then that being is not, by definition, human, and has fewer rights than the ordinary corporation.

At this point, we can't even do the "humanzee" experiment to see how far that embryo would develop, so we can't define how different our two species are to look at barriers to interbreeding, one of the most common definitions of different species. What you're suggesting is something even more different than a neanderthal, and possibly even more different than a chimpanzee. We're far, far away from Orphan Black territory when we talk about manufacturing species of humans.

Using human material to create a sentient being that isn't human is probably horribly unethical and illegal (I'm not a medical ethicist, so I'm just guessing here). We've already created a non-human descendant of humans (the HeLa cell line, which has been diagnosed formally as a new protist species, Helacyton gartleri), but creating sentient beings right now is illegal.

Assuming all that pesky legal, ethical, moral, and technological stuff is hand-waved away, the next question is whether this new species could develop a big enough breeding population to survive. This is a genetic problem, because you want enough genetic diversity that the whole experiment doesn't become destroyed by inbreeding or disease problems caused by genetic uniformity, so you've got to create genetic diversity as well as a new species. It's also a, well, genocidal problem, because one of the oddest things about Homo sapiens is that our era (the last 12,000 years or so) is the first time in our genus' history for several million years that only one Homo species has been present on the planet. This may be due to chance, or it may say that we don't tolerate competitors. It's not clear whether we actively kill them, or whether we passively outcompete them by being such cultural shapeshifters, but I don't place much stock in our tolerance for genetically engineered Homo not-us. Heck, all too often we don't see each other as human enough to be worth not killing.

So long story short, I don't think genetic engineering will create new human species. What I do think is that technology, wealth, and power will cause many of the haves to believe they're a different species than the have-nots, and to act accordingly, whatever genetics and history could tell them if they wanted to care. This is a very familiar pattern that goes back at least 4000 years, and there's no reason to think it will go away any time soon.

82:

74: Pedantic nit-pick here:

Please pardon a couple of even more pedantic corrections to the otherwise excellent capsule OS history. ;->

UNIX. . . was basically written by Ken Thompson as a hacker toy on a spare PDP-8. . .

PDP-7, actually. A member of DEC's 18-bit family of minis (PDP-1, PDP-4, PDP-7, PDP-9, PDP-15), and a much rarer beast. The PDP-8 was the first really mass-produced computer, but its 12-bit architecture and address limitation would have made the job of writing a new OS considerably harder.

The PDP-7 assembly-language code for "Unics" was cross-assembled on Bell Labs' General Electric mainframe. Alas, none (or only fragments) of that earliest code seems to have survived (though a functional SimH PDP-7 simulator does exist, along -- miraculously enough in itself -- with DECsys, DEC's own OS for that machine). UNIX was soon, of course, ported to the 16-bit PDP-11, its native home for many years.

[T]he last known MULTICS system was shut down in 2000, although I suspect cheap hardware and virtualization mean that we may see it running on hobbyist systems to this day, somewhere

Not yet, though that may happen someday. The Multics code is extant, and was made available to MIT by owner Groupe Bull for distribution to hobbyists and computer historians. http://web.mit.edu/multics-history/source/Multics_Internet_Server/Multics_sources.html

There have been a number of attempts to make a Honeywell 6180/DPS-8 emulator capable of running Multics, but until recently none had gotten very far. But late last year Multics seems to have opened its eyes for the first time in almost a decade and a half: http://www.multicians.org/

Old mainframe emulators are hard! But they are amazing to behold when they work. They were much-discussed in the 90's (often in a "they say it can never be done" tone of voice), and Bob Supnik's SimH suite of minicomputer emulators began in the 90's, but fully-functional mainframe emulators are mostly a 21st-century thing, starting with Roger Bowler's "Hercules" IBM System/360 (and successors) emulator in 2000, Ken Harrelstien's DEC PDP-10 emulator in 2001, and others. Most recently, a guy named Paul Kimpel has done a fully-functional Burroughs B-5000 emulator, running MCP Mark XIII, in Javascript/Firefox no less. Recovering software for these old machines can be almost as much of a challenge as creating the emulators. These tend to be globe-spanning efforts, and I doubt they would be possible without the existence of the modern Internet.

So yes, Multics will probably be fully resurrected in the not-too-distant future.

83:

Old mainframe emulators are hard! But they are amazing to behold when they work.

This guy built a Cray-1 emulator that runs at about half-speed compared to the real thing, using a (comparatively) cheap FPGA development board designed for university use...

http://www.chrisfenton.com/homebrew-cray-1a/

84:

This is not yet the iPhone era of brain implants. We're in the DOS era, if not even further back.

And what in the blue hell makes you think implantology maps at all to the development history of consumer electronics or software? Oh yeah, right, because you're a transhumanist!

http://amormundi.blogspot.com/2015/04/twitterscrum-with-ramez-naam-on.html

(Even though you've taken to distancing yourself from the term; only the term, mind, not the views and institutions of the crackpot cult, because people have finally started to catch on to the fact that transhumanism is a crackpot cult.)

Transhumanists' favorite pastime is flaunting their (accidental or deliberate) scientific ignorance, after all. Like when they mansplain that brain cells "are nothing but leaky bags of salt solution" (OUCH) to experienced neurobiologists! (http://www.starshipreckless.com/blog/?p=4761)

I guess it's only natural then that they think the rapid performance advances in microchips magically apply to any and all areas of technical and scientific development, especially if they know only coding and hold no degrees in the life sciences. Like our very own Ramez Naam, for example!

Charlie, at the risk of getting the yellow card, I'm disappointed you let this guy be a guest blogger. I thought you knew better than to gift those guys legitimacy through exposure after you spotlighted Dale Carrico's blog a while ago. It looked like you agreed with his findings and reasoning, which exposed transhumanism/singularitarianism as a collective of 1) cranks who repackaged articles of faith like the afterlife in pseudoscience and 2) tech company shills who brainwash the masses with hyperbolic wish-fulfillment fantasies to sell more product.

Those people are Scientology with Xenu and Thetans replaced by computers and "Accelerating Change(TM)".

85:

Goodness, you're awfully angry. The future will be what the future will be. Maybe the Singularitarians are right. I doubt it, but it's possible.

Maybe the Transhumanists are right. They're probably not in every particular, but I can see Transhumanism working. If it's possible for us to do it (Scientifically), then you have to look at human nature.

Would people pass up an advantage just because they feel it's unnatural? Sure. For a while. But that's generational (What you're born into is Natural. What is invented after you turn 40 is Unnatural and should be burned)

I think many Transhumanists are shills and selling you something. That doesn't mean Transhumanism is necessarily wrong. :D

86:

Charlie might have had him as a guest author because he liked his work in terms of SF being the literature of ideas and often about discussions of the social and cultural effects of technological advancements.

87:

I think your views are insufficiently nuanced to make sense of my position. Which is to say: I'm interested but skeptical (especially about design patterns in ideology that have precursors in religious systems).

Incidentally, you might want to look for Ramez' earlier guest pieces here, including his bit on why the singularity is unlikely ...

89:

Yup, that one. I think he's a little on the optimistic side, but even so:

"I think I'll let someone else be the first person uploaded, and wait till the bugs are worked out."

... Works for me, too.

90:

Ramez,

I'm a neuromorphic engineer working on the Human Brain Project (and am a non-voting supernumerary member of its Board of Directors).

The issue with encoding is that in very few areas do we get to see the inputs and outputs; vision and the visual cortex is one area where this is better known. For auditory input, it appears we are matching the sounds we hear to what we expected, rather than the other way around (hearing sounds and then interpreting them).

So we're reasonably clear about what's happening in v1 and v2 (early stage visual cortex), but things get a bit more "conceptual" towards the "output" stages of this organ. How things proceed from there is anyone's guess!

91:

Alex writes:

On reflection, if we use Kanerva's "Sparse Distributed Memory" approach for memory storage, even hippocampal interfaces might be sufficiently recoverable despite plasticity.

It is slow when implemented in software, but the new chips simulating neurons might change that.

David replies:

We've got the plasticity into SpiNNaker, and Diesmann's cortical microcolumn implemented. Now, if we can just solve the problem of getting 8TB of data into the machine quickly (and we have ideas), we can look at interesting bits of real brains. The hippocampus is high on our list.

92:

" "I think I'll let someone else be the first person uploaded, and wait till the bugs are worked out."

... Works for me, too."

Ah... yes...that was pretty much my opinion of the Much more Simple and Straightforward procedure of...

" Laser eye surgery is the reshaping of the cornea, the transparent ‘window’ at the front of the eye, by using an excimer laser. This then corrects focusing problems. To promote faster healing and better results the thin outer surface layer of the cornea is moved aside before the laser treatment is performed. The surface layer is then gently moved back into place. The two most common types of treatment used are LASIK and LASEK. These two methods differ in the way the surface layer is moved aside. There is also a third type of treatment called PRK, but this is used less frequently."

My procedure was carried out by a top - NHS based - surgeon after rigorous assessment as to whether or not my moderate myopia plus astigmatism was suitable and this only after I had been told that I could not go on wearing contact lenses of the most advanced kind.

I'd been following the development of this variety of ophthalmic surgery for years and had decided that...well, NOT ME FIRST! And also ... only when I could afford a Really Good Surgeon...my surgeon's fee plus aftercare. My Doctor was a top consultant who turned up on Saturday Morning after the operative procedure to check very thoroughly on my eyes' progress - done one eye at a time with an interval of several months between operations of course.

Mr Feelin split his fees with the NHS to provide for Stuff that NHS needed that wasn't provided by the Government so my conscience in Going Private was eased a bit.

As it happens I turned out to be at the top end of the spectrum of success and I even have reading vision without glasses; though at age 66 this can’t be expected to continue for very much longer.

93:

The Dave-Alex conversation is a gold mine of references to research, so thanks, learning lots - it's fairly amazing to have experts on a topic immediately to hand. (I also suspect a little snark, regarding Pentti Kanerva's referential chains example, but there we go, it's funny to me at least).

@ Dave (or whomever wants to answer)

Do you make a distinction between visual memory (actual visualization) and non-visual memory (associative language / descriptive) in your simulations? There's some interesting research on people who use both, and those who only use one of them. Much akin to those who have an internal monolog and those who don't.

For auditory input, it appears we are matching the sounds we hear to what we expected, rather than the other way around

Could we get a cite for this? Looks interesting.

Anyhoo, my major issue with all of this:

"Because, as Kanerva and many other people have argued, it is highly plausible that in the brain, items stored into and retrieved from memory are indeed very large sets of binary on-off features..." Sparse Distributed Memory - intro by Hofstadter

n dimensionally, harumph. Computers are designed like they are due to the shape of minds, not vice versa.

"Yeah, that reminds me of the one about the drunk monkey..."

94:

We are, after all, living in the panopticon while burning through the complexity of the ecosphere at an unsustainable rate, are we not? Yes, probably, probably not - the last if only because people are noticing & doing something about it. ( Agree re. "patriarchal hierarchies" incidentally. )

The Cameron quote - not the first time I've seen that - is deeply worrying, because it implies a level of stupidity that DC doesn't have - which means he is playing to the gallery. Which implies nasty things.

"Insert usual joke" (Link to Simpson's song) is quite meaningless .... Ditto reference to "Music" person Max Martin.

Have a nice day, anyway!

95:

"Insert usual joke" (Link to Simpson's song) is quite meaningless .... Ditto reference to "Music" person Max Martin.

Newton - 'father of modern science' - famous for his alchemy studies - bridge between two worlds. i.e. If we were to call stage #4 "alchemy", then we're in the age where the next Newton is scheduled to appear.

Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago. Isaac Newton, a posthumous child born with no father on Christmas Day, 1642, was the last wonderchild to whom the Magi could do sincere and appropriate homage. J.M. Keynes: Newton, the Man, also held at the Royal Society of London.

The Simpsons' 'Stonecutter song' is a well known parody of the 'conspiracy theories' surrounding Freemasonry, the Illuminati and so on, many of which revolve around Newton and his search for hidden mysteries.

From high culture to low culture and back again. The Max Martin reference does the same thing but in reverse (Swedish public school > shaping American pop culture).

It's a metaphysical conceit.

Apologies for the derail, I rarely explain jokes but there we go.

96:

Do I understand this correctly? The SpiNNaker architecture allows massively parallel processing but with low power consumption. The software allows you to program the characteristics of each neuron and synapse. Together this should provide fidelity and reasonable run times.

Any sense of the performance improvement over a coded ANN running in a conventional multi-core chip (order of magnitude differences)? How does the architecture scale as you build out the simulation?

What is the ratio of simulation : real-time brain?

Any citations to results so far, showing that the approach taken is generating insights?

Is there any indication that wetware brains' information processing is just noisy, so that if the noise could be reduced, the amount of wetware could also be reduced and thus compressed in a digital brain? Or is the noisiness an incompressible feature that makes brains operate "better" than more deterministic simulations?

97:

"Cult" implies significantly more conformity than there actually is among transhumanists. Also, everything is a religion.

98:

"Do you make a distinction between visual memory (actual visualization) and non-visual memory (associative language / descriptive) in your simulations? There's some interesting research on people who use both, and those who only use one of them."

And I use three. The third being multi-sensory "diagrams."

99:

And I use three. The third being multi-sensory "diagrams."

And I used eight.

https://www.youtube.com/watch?v=rdmhNPwxGuk

Bastards.

100:

Catina writes:

"Do you make a distinction between visual memory (actual visualization) and non-visual memory (associative language / descriptive) in your simulations?"

OK. It's early days (I smiled at the idea we are in the DOS age: I can remember using CP/M and I think we're actually in the Atlas autocode era!)

The visual cortex is better understood than anything else. The reason is that the retina is feeding information to the visual cortex in a way that we do understand. To give you an idea have a look at:

http://siliconretina.ini.uzh.ch/wiki/index.php

where, in item 3, Tobi Delbruck shows us the input that a real B/W retina might emit for a scene. The data rate is low because a signal is transmitted only when the pixel of the image changes. Steve Furber (that's Mr BBC micro and Mr ARM, as well as Mr SpiNNaker to you and me) reckons that the data rate for the real human retina is sufficiently low that we could pump that into a single SpiNNaker board using the links that are already there.

We've been using Tobi's retinas, and one of his colleagues is using SpiNNaker in Zurich, since about 2012. One of the interesting features for CS people is that coupling up devices that use spike data is remarkably easy -- there's none of this protocol programming. Another interesting feature is that we get "attention" for free! When nothing is happening visually, the retina transmits nothing; when something happens, pixels in the image where change has occurred activate.
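To make the 'only send changes' point concrete, here's a toy frame-differencing version (my own illustrative thresholds and log transform -- the real sensor does this per pixel, asynchronously, in analogue hardware):

```python
import numpy as np

def dvs_events(prev_frame, new_frame, threshold=0.15):
    """Emit (row, col, polarity) events only where log intensity changed enough."""
    diff = np.log1p(new_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)    # +1 brighter, -1 darker
    return [(int(r), int(c), int(p)) for r, c, p in zip(rows, cols, polarity)]

prev = np.zeros((4, 4))
new = prev.copy()
new[1, 2] = 50                    # one pixel brightens; the rest of the scene is static
print(dvs_events(prev, new))      # [(1, 2, 1)] -- a single event, silent pixels cost nothing
```

The 'attention for free' property drops straight out: a static scene produces an empty event list.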

(Of course the real question is: "Can Charlie provide us with the connectome for Scorpion Stare?", and if Furber gets to be Brains, can I be Pinky?)

Catina continues:

"There's some interesting research on people who use both, and those who only use one of them. Much akin to those who have an internal monolog and those who don't."

Reference? I'd be interested in following this up. One of the things that Tobi, his wife Shih-Chii, and I would like to have a go at is "sensory fusion". Using her silicon cochlea, and Tobi's retina we can attempt to combine attention that arises from both visual and auditory stimulus.

Me, earlier: "For auditory input, it appears we are matching the sounds we hear to what we expected, rather than the other way around" Catina: "Could we get a cite for this? Looks interesting."

One of the researchers from Plymouth we work with (Martin Coath) mentioned it. Not sure it's a firmly established fact, but it goes some way to explaining the cocktail party problem: in a noisy environment, how do we hear the words spoken to us? Given that the cochlea is really a lot of notch filters (different length hairs in the inner ear), Martin's suggestion is that we metaphorically turn up the gain on those parts of the spectrum that are most useful to us at the time. But that only goes part of the way.
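A crude way to picture that 'turn up the gain on the useful bands' idea (nothing to do with Martin's actual model -- just an FFT toy with made-up frequencies):

```python
import numpy as np

def attend(signal, sample_rate, band_gains):
    """Scale chosen frequency bands; band_gains is a list of (low_hz, high_hz, gain)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for low, high, gain in band_gains:
        spectrum[(freqs >= low) & (freqs < high)] *= gain
    return np.fft.irfft(spectrum, n=len(signal))

t = np.arange(0, 1, 1 / 8000.0)
voice = np.sin(2 * np.pi * 300 * t)           # the band we currently care about
babble = 0.5 * np.sin(2 * np.pi * 3000 * t)   # cocktail-party clatter
cleaned = attend(voice + babble, 8000, [(200, 400, 2.0), (2500, 3500, 0.1)])
```

Boost the bands you expect the speaker to occupy, squash the rest -- the hard part, as noted, is knowing what to expect.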

What Steve and I suspect is that the feedback mechanisms present in every part of the brain (confusingly called feed-forward by our colleagues in the life sciences) are being used to match expectations or desired outcomes to the sensory data. Furber has talked in the past about 1960s experiments using LSD and the visual "disturbances" induced, and about a cool link to perturbations in the convolution algorithm used in the visual cortex. I'll have a look for it.

Catina: "Anyhoo, my major issue with all of this:

"Because, as Kanerva and many other people have argued, it is highly plausible that in the brain, items stored into and retrieved from memory are indeed very large sets of binary on-off features..." Sparse Distributed Memory - intro by Hofstadter

n dimensionally, harumph. Computers are designed like they are due to the shape of minds, not vice versa.

"Yeah, that reminds me of the one about the drunk monkey..."

Nope.

Things like motor skills, piano playing, riding a bike and so forth are probably stored simply as "plasticity". That is: when an action and its response are learnt, the connection strength increases (in reality the synapse (= connection) uses bio-molecular machinery to increase both the efficacy and the odds of spike transmission). One thing to think of here is hitting the brakes on the M6 when the brake lights of the car in front come on. Transmission through the brain takes place sub-sonically, so there's not really time for high-level processing to be used. If you like, we are all living a few seconds in the past, and sub-conscious automatic responses carry us through.

Long term memory is associated with the hippocampus, which has an extremely interesting looped shape. A malfunctioning hippocampus is associated with the complete loss of memory beyond 15 minutes. No one knows for sure, but it looks like timing, or beat, might be implicated here (the loop structure). Steve's other hunch on representation is that it is spike order that is important, not the actual spike timing. Thus a set of N neurons can represent one of N! possible orderings.
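The spike-order idea is easy to play with: the message is the permutation, so the absolute timings can wobble as long as the ordering survives. A trivial illustration:

```python
from math import factorial

def decode_order(first_spike_times):
    """Return neuron indices ranked by when each first fired."""
    return sorted(range(len(first_spike_times)), key=lambda i: first_spike_times[i])

fast = [3.1, 0.7, 5.0, 2.2]       # neuron 1 fires first, then 3, then 0, then 2
slow = [31.0, 7.0, 50.0, 22.0]    # very different absolute timings, same rank order
assert decode_order(fast) == decode_order(slow) == [1, 3, 0, 2]
print(factorial(8))               # 40320 -- eight neurons already give ~40k distinct orderings
```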

101:

Alex writes: "Do I understand this correctly. The SpiNNaker architecture allows massively parallel processing but with low power consumption. The software allows you to program the characteristics of each neuron and synapse. Together this should provide fidelity and reasonable run times."

Alex, that's exactly how I'd put it! But just to reflect some of the alternative views within HBP: Heidelberg (with their analogue computer system) would claim that they are about 100 times more power-efficient -- and they are. EPFL (with their supercomputer simulations) would claim to be higher fidelity -- and they are.

We claim to have an interesting balance of economy vs fidelity; though it all depends on what turn out to be the key features of brain function. If, as Furber and I hope, it is network properties that are key, then we've guessed right. If it turns out that there are exciting biological features at a sub-cellular level, then EPFL are probably best placed. Finally, if it is the developmental aspects (over periods of months) of brains that are important, then Heidelberg are well-placed.

But it's a big open question what level of biological fidelity is needed. If we need to, I think I can repurpose the SpiNNaker software to incorporate low-level features, but I think we'd lose the balance between compute, network, and memory.

Alex continues: "Any sense of performance improvement over a coded ANN running in a conventional multi-core chip. (order of magnitude differences). How does the architecture scale as you build out the simulation."

Each of our chips has 18 ARM cores running at 200MHz, all for 1W. As such it is probably simpler to use your laptop instead of one of our chips. The benefit of the chip only becomes manifest when we link lots of them together. The network is designed to cope with many small (32-bit) packets, with minimal overhead. The typical supercomputer network is based on MPI, which is much more efficient for matrix operations and packets of, say, 4KB. We've built a cabinet of SpiNNaker with 6 subracks of 24 boards of 48 chips (each with 18 cores) ~ 125k cores. The initial results and sums are telling us that network congestion is not going to be a problem for the full 10-cabinet system. To give you a feel for this, on raw neuron count, one cabinet is about the size of a mouse brain.

Finally Alex asks: "What is the ratio of simulation : real-time brain?"

We are just beginning to get provisional numbers here. SpiNNaker has been designed to run at 1:1 time: 1 s of modelling time corresponds to 1 s of real time. The EPFL supercomputer simulations take about 40 minutes to simulate 1 s of activity. Heidelberg's analogue system runs at 10^5:1; that is, they can simulate a day's activity in under a second.
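Just to spell the arithmetic out (all figures as quoted above, nothing measured by me):

```python
# SpiNNaker cabinet: subracks x boards x chips x ARM cores per chip
print(6 * 24 * 48 * 18)   # 124416, i.e. the "~125k cores" per cabinet

day = 86_400              # seconds of brain activity in one day
print(day * 40 * 60)      # EPFL at ~2400x slower than real time: ~2.1e8 s (~6.6 years) per simulated day
print(day / 1e5)          # Heidelberg at 10^5:1: under a second per simulated day
```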

Any citations to results so far, showing that the approach taken is generating insights?

Is there any indication that wetware brains' information processing is just noisy, so that if the noise could be reduced, the amount of wetware could also be reduced and thus compressed in a digital brain? Or is the noisiness an incompressible feature that makes brains operate "better" than more deterministic simulations?

102:

Whoops, I missed the final two paragraphs of Alex's questions.

Alex writes: "Any citations to results so far, showing that the approach taken is generating insights?"

My focus at the moment is to get the software tools up to scratch. The reliability is now approaching what I want, but ideally we'd like to add a few more features.

Much of the insight is coming from theoreticians, and we're gradually building links to them through the HBP.

Alex continues: "Is there any indication that wetware brains' information processing is just noisy, so that if the noise could be reduced, the amount of wetware could also be reduced and thus compressed in a digital brain. Or is the noisiness an incompressible feature that makes brains operate "better" than more deterministic simulations?"

I think the issue here is that a brain's hardware is indeed very noisy. At the moment our simulations do not have noise in the neurons. However, the networks are operating very close to the onset of chaos, and configured correctly (or incorrectly) we can use our system to generate statistically strong pseudo-random numbers. Markus Diesmann (Julich and NEST) and I will be publishing on this in due course.

I think I'd say that noise is an inevitable feature of the networks. If you make the system more stable, then it takes more (energy/effort) to displace it and put it into a different state. In short it becomes harder to learn.

103:

Ahh, Newton, Inventor of the Cat-Flap! I realised the "Simpsons" thing was something to do with the "Illuminati" etc, but it still made no sense - hint - I don't have a TV, I don't follow the Simpsons & what's more I don't even care .... Ditto most so-called "popular music"

Oh yes ... Cat-flaps Enjoy

104:

Re: Using optogenetics as an approach in humans.

I think that's certainly one possible path. There's likely to be very strong resistance among parents to genetically engineering the brains of their children in such a radical way, though. So if the optogenetic path is taken, I suspect it'll be in adults, through gene therapy to manipulate existing neurons.

Is that actually less invasive than electrodes? Depends on how you look at it.

And of course, as has been mentioned elsewhere, the human skull is opaque, so the optical fibers need to be implanted beneath the skull.

It's a great time for neuroscience if you're a zebrafish, whose skull remains transparent through childhood and who can be born with optogenetic neurons.

For humans, it's not quite as convenient.

105:
What happens when the inferiority of a group stops being ideology generated by the mis-measure of man, and starts being an objective fact? When people in group A have access to a prosthetic motor cortex that makes them stronger, faster, and smarter than group B, who doesn't have that? When you conclusively have an ubermensch and untermensch?

I spent a fair bit of time discussing this, and other socio-economic factors of enhancement, in my very first book, More Than Human.

While the scenario you describe is possible, I think it's equally likely that something different will happen.

1) The early adopters will pay very high prices for augmentations that bring minor benefits.

2) Technology will improve.

3) The next wave of adopters will pay lower prices for augmentations that bring larger benefits.

4) Repeat.

This is more or less what happened with mobile phones, for instance. Remember the concerns about the digital divide? The plunging cost of technology has, instead, brought computing power to billions.

Now, medical technology is not the same as digital technology (even when it heavily incorporates it). In the best case, this curve will be slower. And in the worst case, there won't be a lot of price decline. That strikes me as unlikely, though.

(Add to this Charlie's concern about obsolescence, and you get something quite interesting indeed - where the early adopters are stuck or have to go through more invasive procedures to upgrade than those who came in later.)

106:
1) 2) and 3) above require the things Ramez is talking about: more bandwidth, better precision, etc. Iterative technological advances which require really clever people doing really clever things, but seem likely to occur sooner or later. 4) [hacking internal representations] is a totally, totally different ballgame. This is where we're at the 'alchemy' stage, if that.

While I agree that the more complex abstractions are likely to be harder, your skepticism here (which is totally reasonable) reminds me of the skepticism for each of the milestones we've already achieved.

There was incredible skepticism that William House could succeed with the cochlear implant. 5 electrodes? The auditory nerve has 30,000 nerve fibers! The auditory cortex has more than a billion neurons! It'll never work. It works, not perfectly, but well enough to change lives.

Every further step has met similar skepticism. Controlling motion? Inputting vision directly into the visual cortex? Extracting seen images from the visual cortex? They were all considered unlikely.

Recording memory from the rat hippocampus and playing it back? Bah. Science fiction. It works.

Boosting pattern matching ability of primate brains for pictures? Bah. Science fiction. Yet it's been done.

The lesson, to me, is that every time we get single-neuron-precision data out of a brain area, we start to decode the functions of that area (or the functions that pass through that area) well enough to make headway.

And because the brain's encoding of everything - including high level abstractions - is highly redundant, it's possible to make headway with just a sampling of the data, rather than comprehensive data.

None of this is to trivialize the pursuit. But again, given millions of people with deficits to memory, attention, and high-level reasoning skill, there's ample medical motivation to find cures. And if those cures are neuroprostheses, that'll lead to progressive decoding of brain functions.

107:
I feel like that level of hardware would result in a similar problem that the human genome project faced: an ocean of data and a puddle of knowledge.

I've actually made a very similar statement about the Human Brain Project and the mapping of the connectome.

What's similar about those to the genome is that they're static data. We don't actually have the ability to watch the human genome work in vivo in humans as a running piece of code, if you will. If we did, we'd have a far greater understanding. Instead we have this static snapshot. That's what a human connectome will be.

But the type of data I'm describing is different. It's real-time-monitoring of parts of the brain as they work, with resolution on the order of 1,000,000 times more detailed than one gets from an fMRI. That is a radically different type of data than the human genome project.

It will, of course, be huge. And if we had a realtime dump of the firing of every neuron in a human brain today, we'd be overwhelmed. But we could dive in, bit by bit, into specific regions, scenarios, and phenomena, and see the brain working in realtime, and make progress in decoding those bits.

108:
And what in the blue hell makes you think implantology maps at all to the development history of consumer electronics or software? Oh yeah, right, because you're a transhumanist!

You typed this comment (I presume you typed it) on an electronic device that already connects your brain to vast sums of information, allowing you to communicate nearly instantly across thousands of miles.

Call me what you will. The reason I don't use the word 'transhuman' outside of fiction is that, as you show here, it's meaningless. You're already a transhuman, augmented in ways that virtually all humans who lived through the first 99% of our species' existence could never have imagined, or imagined only in the domain of myth and magic.

Indeed, Dale Carrico made a similar point a decade ago, in his quite positive review of my non-fiction book on human enhancement. Times change, I guess.

109:

Dave,

It's an honor to have you here. I'm always delighted to learn more about the reality of what's going on. I spend as much time with the folks at the Allen Brain Institute as I can (I'm here in Seattle, and Christof Koch has become a fan), but I remain an eager lay person.

Feel free to reach out any time. I'm easy to find on Facebook and twitter.

110:

One interesting question that remains to be answered (among many) is how much of the neural structure needs to be simulated to get something akin to intelligence. For example, neural structures are primarily living systems with a secondary information-processing capacity. To what degree can we abstract the latter and dump the former, because a memristive net doesn't need a blood supply? I assume it remains a possibility that brain simulations could be vastly simplified if we knew which bits we could do without?

111:

" You're already a transhuman, augmented in ways that virtually all humans who lived through the first 99% of our species' existence could never have imagined, or imagined only in the domain of myth and magic."

No. Transhumanism is not about bolting on a pair of motorized roller skates. It is about internalized technological enhancement. So glasses are not H+, and neither are contact lenses, but laser correction to the cornea is. And apart from that, there is only one significant H+ technology in existence - enhanced immune systems.

112:

"And of course, as has been mentioned elsewhere, the human skull is opaque"

Not quite. It is translucent in the near IR. However, it is very dispersive so you are not going to get pinpoint accuracy of illumination.

113:

Now, medical technology is not the same as digital technology (even when it heavily incorporates it). In the best case, this curve will be slower. And in the worst case, there won't be a lot of price decline. That strikes me as unlikely, though.

Disagree -- I think in the short to medium term it's going to be very likely that prices will stay high. The limiting factor is Baumol's cost disease, already a major cause of spiralling healthcare costs. Simply put, human services are hard to automate, most of the caring professions fall into that category, and wage rises elsewhere in the economy cause wages to rise in these categories even though productivity remains static -- or even declines -- due to competition for bodies.

Telesurgery is all very well, but fully automated brain surgery is several steps beyond anything anyone is contemplating right now, and that's what it'll take to enable the cost of brain implants to decline even roughly in line with Moore's Law. And even then, it's going to be a linear decline rather than exponential; operating theatres, even fully automated ones, are big, expensive pieces of capital investment and you can't shrink them significantly! Ditto ICU wards for recovery, and so on.

The best I can see us getting to any time in my lifespan (say, prior to 2050) is a state where brain implants are as easy to perform as, say, cataract surgery or dental implants -- a couple of preliminary exams then an out-patient operation under local anaesthesia. And today these procedures start around the £2000 mark in the UK and go up from there, largely because of the labour costs involved.
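To put that in toy-model form (all numbers invented purely for illustration): even if the implant hardware rode a Moore's Law curve, the labour component -- which Baumol says drifts up, not down -- puts a floor under the total price.

```python
hardware, labour = 20_000.0, 2_000.0      # hypothetical year-zero prices in GBP
for year in range(0, 21, 4):
    device = hardware / 2 ** (year / 2)   # device cost halving every ~2 years
    care = labour * 1.03 ** year          # surgical and nursing labour creeping up ~3%/year
    print(year, round(device + care))     # total falls fast, flattens, then creeps back up
```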

Beyond that ... well, if you've got fully automated neural microsurgery, what was that about the singularity again? We're well into "The Matrix" at that point.

114:

Then there is the bureaucracy. Even if a drug has already been licensed for use for one condition, if it is found to be beneficial for another it still takes years of trials and reams of regulatory hoops to jump through before your doctor is allowed to prescribe it. Then there is the monopoly pricing of new drugs. One example is Sofosbuvir for Hepatitis C - a 12-week course of treatment costs about £30,000. The drug itself costs around £100 to manufacture and is available in India for around £300. Now compare that to the likely gene therapy required to create a high-bandwidth BCI, along with (likely) use of brain-plasticizing drugs and stem cell implants. The cost of the mechanics of implantation will be negligible for the first decade or two.

115:

Reference? I'd be interested in following this up. One of the things that Tobi, his wife Shih-Chii, and I would like to have a go at is "sensory fusion". Using her silicon cochlea and Tobi's retina, we can attempt to combine attention that arises from both visual and auditory stimuli.

Please bear in mind that I have absolutely no academic basis for my wanderings, so anything typed is merely amateur-hour musing; i.e. if it sounds like bullshit and is wrong, please just type the word "No.".

That said, reading about plasticity, spike patterns and rhythms, and given what you're doing, I'd think that corollary discharge would be an area you'd be interested in (especially since it has a large body of physical experimentation on animals to reference).

From a very slight reading on the topic, there seems to be evidence that the 'inner voice' is both a self-regulating system and heavily tied to the motor functions of the brain:

Several studies of imagined speech (covert rehearsal) have been carried out (Buchsbaum et al., 2001, Buchsbaum et al., 2005 and Hickok et al., 2003), which identified a network including the STS/STG, Spt, and premotor cortex, including both ventral and more dorsolateral regions (Figure 2A), as well as the cerebellum (Durisko and Fiez, 2010 and Tourville et al., 2008). We suggest that the STS/STG corresponds to the auditory phonological system, Spt corresponds to the sensorimotor translation system, and the premotor regions correspond to the motor phonological system, consistent with previous models of these functions (Hickok, 2009b and Hickok and Poeppel, 2007). The role of the cerebellum is less clear, although it may support internal model predictions at a finer-grained level of motor control.

Sensorimotor Integration in Speech Processing: Computational Basis and Neural Organization

and

We recorded electrocorticograms from the human brain during a vocalizing/listening task. Neural phase synchrony between Broca's area and auditory cortex in the gamma band (35 to ∼50 Hz) in the 50-msec time window preceding speech onset was greater during vocalizing than during listening to a playback of the same spoken sounds. Because prespeech neural synchrony was correlated (r = −.83, p = .006) with the subsequent suppression of the auditory cortical response to the spoken sound, we hypothesize that phase synchrony in the gamma band between Broca's area and auditory cortex is the neural instantiation of the transmission of a copy of motor commands. We suggest that neural phase synchrony of gamma frequencies may contribute to transmission of corollary discharges in humans.

The Corollary Discharge in Humans Is Related to Synchronous Neural Oscillations

Ironically, this might mean you'd have to code an embodied mind which mimics having a body. Not sure if that's more amusing than exasperating.

116:

I think you and Charlie are on different pages of the same hymnbook here.

Any new technology's proponents seem to be a bit inclined to gloss over such details. And I have seen people extolling the benefits of self-driving cars while assuming that everybody drives a new car. The working life of the ordinary car is probably more than a decade now. The last one I drove was a 1999 model (my eyesight has stopped me driving) and it clocked up fifteen years. Yet the talk is of a near-instant switch-over.

(There could be a greatly reduced number of cars but it needs a big change in thinking.)

And I don't think anyone has thought about how regular safety inspections would have to change, needing a lot of new training for mechanics. We already see signs of this with the general use of computerized control black boxes, and the expense of the hardware needed to get information out of the system. Some of them depend on shockingly old computer hardware and OS software.

Never mind your car, there are pretty ordinary programs that have big problems with the latest Apple OS, and the Windows world still has major developer software libraries that support more functions for 32-bit than for 64-bit.

We're still talking about over a decade to change, and you thought the computer world was fast-moving.

117:
I think in the short to medium term it's going to be very likely that prices will stay high. The limiting factor is Baumol's cost disease, already a major cause of spiralling healthcare costs. Simply put, human services are hard to automate, most of the caring professions fall into that category, and wage rises elsewhere in the economy cause wages to rise in these categories even though productivity remains static -- or even declines -- due to competition for bodies.

I think getting below a couple thousand dollars is indeed a stretch anytime soon.

Of course, that's cheaper than a year of primary or secondary education.

The component of the cost that is the implantation procedure will drop more slowly (if at all), while the part of the cost that is the technology will drop more rapidly, and likely improve in function more rapidly.

The wild card is how much the technology will change the procedure. LASIK made eye surgery both safer and much cheaper. I don't think there's a LASIK for brain surgery anywhere on the horizon. But there is a lot of R&D into technologies that would reduce hospital stays (a big part of the cost).

More broadly, I don't think Baumol's is the core issue in medical costs. It's more a symptom. What's really going on is that quality of outcome is weighted dramatically higher than cost of procedure. Medical practice routinely optimizes for a 1-10% better outcome at a 100%-1000% higher price. That is, largely, a phenomenon of patients not bearing the costs themselves, and thus not making a cost/quality tradeoff.

In cosmetic surgery, which is quite similar to other parts of medicine in types of labor, types of technology, and so on, but very different in payment model, costs have risen more slowly than inflation. That is to say: real cosmetic surgery costs decline over time.

To be clear: I don't see any moral way to have people pay for all medical costs out of pocket. I'm not suggesting a libertarian fantasy here. Pooling risk and protecting the less fortunate are important. But this stat does shed a different light on the rising cost of healthcare.

118:

I note that there are already "Black Clinics" in existence (shades of Neuromancer) offering rather dodgy techniques for things such as anti-ageing, including genetically engineered telomere lengthening and of course the young-to-old blood transfusions. The real holdup in all these new technologies is that they are medical, and regulated to the point of totally diminished returns in the West. Indeed, any drug that extended healthy lifespan through anti-ageing effects could not be licensed because old age is not classified as a disease.

119:

Which sounds like a tautology. It is clear to me that a drug which had actual anti-ageing effects would rapidly become available, whether licensed or not. Or as a food supplement.
The barriers to a successful new drug are often overstated, especially by people with axes to grind. The long history of fake and dangerous drugs is a cautionary lesson.

120:

But the fact that drugs can only be licensed for specific "official" diseases means there is a certain disincentive for pharma to look in various areas.

121:

"Then there is the bureaucracy. Even if a drug has already been licensed for use for one condition, if it is found to be beneficial for another it still takes years of trials and reams of regulatory hoops to jump through before your doctor is allowed to prescribe it."

In the US, such drugs can be prescribed "off label."

122:

That may be the case here also, but most doctors only know what's in MIMS and that's what they stick with unless you are lucky in your choice of GP.

123:

Laser eye surgery and vaccines were still beyond the imagination of everyone in the first 99% (chronologically) of human existence. Modern humanity evolved about 200,000 years ago. 1% of that is 2,000 years. According to Wikipedia, the earliest records we have of inoculation come from the 1500s. Even if we're extremely generous and assume that it originated a thousand years earlier, it still wasn't practiced for the first 99% of our existence.
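
For what it's worth, that arithmetic holds up; a minimal Python sketch using the rough dates above (all of them approximations):

    # Rough check of the "first 99% of our existence" claim, using the dates above.
    species_age_years = 200_000
    recorded_span = 2015 - 1500           # earliest solid records of inoculation
    generous_span = recorded_span + 1000  # "a thousand years earlier"

    for label, span in (("recorded", recorded_span), ("generous", generous_span)):
        print(f"{label}: {span} years = {span / species_age_years:.2%} of our existence")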

124:

No there isn't. What holds them back is the desire to make as much profit as possible. There's even an international initiative now, the Drugs for Neglected Diseases initiative, set up to make up for the way pharma companies concentrate on the big, common diseases from which they can make the most money.
That there aren't cures for old age on the market is not down to government regulation preventing the introduction of such cures*, but down to the lack of any actual cures to introduce.

*Historians will know of the many wonderful elixirs with various alleged life-prolonging capabilities.

125:

Maybe the first task is to replace the skull -- the major barrier -- with something more accessible. The brain itself can't feel pain, so a heavy-duty skull replacement and simple, painless access through the meninges would be a great first step forward. As would cleaner operating rooms. Your average chip fab is far cleaner. Hell, I'd volunteer -- someone has to. Maybe they can give me a cool cyberpunk skull -- all carbon fiber and titanium. With guns and kickass media storage.

126:

The real holdup in all these new technologies is that they are medical, and regulated to the point of totally diminished returns in the West. Indeed, any drug that extended healthy lifespan through anti-ageing effects could not be licensed because old age is not classified as a disease.

Old age increases risk for a lot of specific problems: weakening bones, hardening arteries, cataracts, dementia, strokes, cancers... You must see a decrease in some of these specifics to also see extended healthy lifespan. Pick just one of the many problems your miracle anti-aging drug solves and run it through a proper clinical trial. Call it an anti-osteoporosis drug and do a trial for that goal, for example. If your miracle drug really prevents osteoporosis the FDA is not going to block it because side effects include improved memory and decreased risk of heart attacks. Once it's approved for one condition physicians can prescribe it off-label for anything, and an actual working anti-aging drug will get excellent word of mouth.

Regulation is not preventing the sale of working anti-aging drugs. Regulation is preventing people from making miraculous claims about drugs without miraculous proof. That's an unambiguous good. Without properly blinded trials even an honest person can be deluded by their expectations and delude others in turn.

Regulation is preventing rapid and large scale research on human subjects -- and animal models are poor analogs for many human medical conditions. This is also good, if not unambiguously good, given the poor record of experimental medical ethics in the 20th century.

I am 100% in favor of extending healthy human lifespan if it can be done. It grates that so many online discussions of it stray into ritual denunciation of medical regulations and "deathism." Those aren't real problems. Biology is the real problem. Denouncing the wrong problems seems like a displacement activity to avoid the much harder real problem.

127:

The heck with communication between brain and world. It would be so much easier to just improve communication within the brain.

Assume a technology that lets us dump a metric buttload of nanowires into a child's brain. Maybe just dump them in, and let them migrate and anchor at random. Assume further that these create neural pathways, perhaps with a high average length. But more importantly, with a high speed.

The developing brain does not form according to a blueprint. It connects itself adaptively, garbage collecting inactive elements. If these extra signalling elements were present, it's not unbelievable that brain function would just naturally start using them.

(It might turn out to be important that they be in certain areas, or that they NOT be in certain areas, or that they not bridge certain boundaries. There might be, ahem, failures, in the first human trial.)

But damn. Smarter humans! Yes! As long as they aren't the Superbrights from Sterling's "Schismatrix".

128:

"How much of the neural structure needs to be simulated to get something akin to intelligence?" IIRC, the usual answer is supposed to be: "the number of parallel/cross-wired (as opposed to serial) interconnections in the brain". In other words, "consciousness" is an emergent property of complexity. Am I out of date? Has anyone got a better / more up-to-date answer than the one I've given?

129:

Indeed, I'm still running this on WinXP - if I have to go to a new Windows, I'll go to "10", or maybe to a UNIX system on a new (self-assembled) box of tricks ... As for cars, well, mine was built in 1996 & I intend to drive it until I'm no longer capable - probably another 20+ years away. Never mind people with really interesting vehicles, say pre-1970, of which there are a surprising number .... System (meaning "the whole system") compatibility is going to be a problem, much ignored by many. Also, of course, "QWERTY" reasons why things are kept unchanged, in spite of later developments.

130:

The usual condition cited is diabetes, since it is the most common one associated with ageing. It's why GSK paid (IIRC) some $700m for Sirtris and their resveratrol derivatives.

131:

I think there is some discussion about how much computation neurons are involved in, rather than just acting as the framework and life support system.

132:

Actually, 3D printed replacements for chunks of skull during major surgical reconstruction are a thing that happened a year or three ago.

However, you're dead wrong about one thing: we don't want to make the brain more accessible. Making the brain more accessible for brain implants and surgery also makes the brain more accessible to pathogenic bacteria or thrown half-bricks.

As for "cleaner operating rooms" you say this as if it's something surgeons haven't been aware of and highly concerned about for the past 150-odd years. Here's a hint: aseptic environments are really hard to do right (speaking as someone whose experience of working in a Class 1 aseptic pharmaceutical manufacturing suite is about three decades old but still relevant -- boot barriers, full-body overalls to prevent skin shedding, multistage particulate filters fed by HVAC, laminar flow cabinets for actual work, and autoclaves the size of a truck). And they get exponentially harder each time you try to push them a little bit further. This planet we live on is a dirt bath!

133:

If these extra signalling elements were present, it's not unbelievable that brain function would just naturally start using them.

And it would be even less unbelievable if they triggered a wide-scale immune response in the brain, leading to lovely little side-effects such as demyelination and overall manifestations somewhat similar to multiple sclerosis or motor neurone disease (Lou Gehrig's, for those of you in America).

Hint: biological systems are homeostatic. They push back. And they're primed to fight against external intruders, because they evolved in an environment where "external intruder" in the grey matter meant "brain eating parasitic worm or fulminating meningococcal infection".

(There might be, ahem, failures, in the first human trial.)

There is a reason we don't usually conduct poorly tested experimental procedures on live human beings. You might want to read the wikipedia bio on Josef Mengele if you're unclear on the reasoning behind this.

134:

However, you're dead wrong about one thing: we don't want to make the brain more accessible. Making the brain more accessible for brain implants and surgery also makes the brain more accessible to pathogenic bacteria or thrown half-bricks.

The people on the first Mars colony could try that. AFAWK there are no pathogenic bacteria or bricks on Mars. And they could use an intelligence boost.

135:

And even if you managed to get your operating theatre to BSL-4 level (well, inverse BSL-4), you then go and plonk a colony of human diseases* into the middle of your clean zone and ruin it all. The next most useful step in operating theatre cleanliness is probably to ban people from them. Pity about the patients...

*Reminder: there's a 20% chance you already have one of the bacteria that cause necrotizing fasciitis.

136:

AFAWK there are no pathogenic bacteria and bricks on Mars.

Ha ha no.

All of us carry around, all the time, a bacterial colony containing roughly an order of magnitude more prokaryotic organisms than there are eukaryotic cells in our own bodies.

That brown stuff that comes out of your arse in around 100-200g quantities per day? Around 30-40% of its wet mass is bacteria, bred in your gut. And they're almost all capable of becoming pathogenic under the right circumstances. (We're almost all host to colonies of C. difficile and E. coli sub-species. Most of the time they get on just fine -- but when things go wrong? It gets bad fast. And for added lulz, a Mars mission is probably going to be a high-radiation environment -- not enough atmosphere to block out cosmic rays and high-energy solar flares.)

And if bricks are in short supply, try socket wrenches or kitchen knives or whatever comes to the angry primate's hand.

137:
And I have seen people extolling the benefits of self-driving cars while assuming that everybody drives a new car. The working life of the ordinary car is probably more than a decade now. The last one I drove was a 1999 model (my eyesight has stopped me driving) and it clocked up fifteen years. Yet the talk is of a near-instant switch-over.

In their (our) defence, (a) a lot of the impact, good and bad, comes with moderate levels of penetration, and (b) professionally-owned cars are depreciated on a schedule of 5 years or so.

We don't need everyone to switch, or even the majority to switch, for self-driving cars to be a major game-changer. All we really need is for them to be generally available to those who want to buy them, plus 3-5 years.

138:
At this point, I'd bet against Homo sapiens speciating.

That's assuming we don't count the demographic transition as a (sub-)speciation event...

139:

I really need to get your book and read it soon because you've probably answered/addressed some of these questions already. So, in the spirit of genuine interest (as opposed to negative nitpickery), I submit the following queries.

Re:

"The implications of mature neurotechnology are sweeping.... Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what's happening."

Every cell (A) eventually dies, so the nanotech anchored to it will lose its mooring. What happens then? What assurance do you have that a foreign particle, whether bio- or nano-, is not going to get swept up eventually and moved into another part (B) of the body? And, depending on which part of the body or cell type it ends up in, you'd get all sorts of other effects which could cascade further, and so on. Alternatively, if the nanos stick around (ahem) ... they could build up, like plaque bodies, which is also not a good scenario. So some form of effective housekeeping is needed.

I get that optogenetics shows that this (i.e., single cell/dendrite turn on/off switch/activation and monitoring) can be done. But on a network basis that interacts with a whack of other systems in real time ... wow!?

This is truly a fascinating problem: how do you limit the effect of an input in a system that by design (evolution) is supposed to have considerable freedom of movement? I suppose one way to control for this is to have crisp age limits ... built-in self-destructs comparable to those in our own cells. But then some consumers/medical insurance companies might complain that this is financially ruinous.

140:

This comment reads as racism: "The developed peoples of the planet are turning into a different and better species than those rabbits in Africa."

But I could very easily be wrong! These things are easy to misunderstand. Assuming that you did not intend to offend, would you mind explaining the logic behind the statement that the demographic transition could be considered a subspeciation event?

141:

Now we know your opinion of Africans...

142:

Why demographic transitions are not subspeciation events:

A) In Homo sapiens, the geneticists have found, repeatedly, that within-group genetic variation is higher than between-group variation. That means, if you're uninformed enough to subscribe to racial theory, that the small average differences between the average "black" person and the average "white" person are dwarfed by the differences among all white people and the differences among all black people. It doesn't matter whether you're talking about Scots vs. English, Asians vs. Whites, Men vs. Women, Gays vs. Straights, or whatever, apparently you get the same result. We're all mongrels, and we're more diverse within the groups we mistakenly call uniform than across the perceived racial divides that we discriminate about.

All a demographic transition does is decrease numbers. That doesn't make the resulting subsample more uniform.
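
To make the within-group vs. between-group point concrete, here is a minimal, purely illustrative Python sketch with synthetic numbers (not real genetic data); the group means and spreads are assumptions chosen only to show how a small between-group difference can be dwarfed by within-group variation:

    # Synthetic illustration: variance within two groups vs. variance between them.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two hypothetical groups whose means differ only slightly relative to the
    # spread inside each group (illustrative numbers only).
    group_a = rng.normal(loc=0.0, scale=1.0, size=10_000)
    group_b = rng.normal(loc=0.2, scale=1.0, size=10_000)

    grand_mean = np.concatenate([group_a, group_b]).mean()

    # Within-group variance: average of the per-group variances.
    within = (group_a.var() + group_b.var()) / 2

    # Between-group variance: spread of the group means around the grand mean.
    between = ((group_a.mean() - grand_mean) ** 2 + (group_b.mean() - grand_mean) ** 2) / 2

    print(f"within-group variance:  {within:.3f}")
    print(f"between-group variance: {between:.3f}")
    print(f"share of total variance that lies between groups: {between / (within + between):.1%}")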

B) In terms of human genetic diversity, it's worth remembering that: 1) we're far less genetically diverse than chimpanzees, apparently because our species almost went extinct about 70,000 years ago and for some reason chimps did not. 2) Within that relative uniformity, the vast majority of genetic diversity is found on the African continent. Basically, we're all Africans. We get confused about this because only 1.1 billion of the 7.2 billion people on this planet are African. However, almost all of humanity's genetic diversity is found in Africa. This is also normal: the area in which a species evolved typically contains far more genetic diversity than the areas it colonized later in its history.

Now this isn't to say that there aren't biological differences among "races." The tragic problem is that discrimination has real biological outcomes, due to the impacts of things like poverty, systematic discrimination, unequal access to medicine, clean water, and education. In all cases, the causal arrow quite clearly runs from discrimination and poverty towards biological damage, and there's precisely no indication that the poor are in any way genetically inferior to the rich. This should be obvious when you realize how many wealthy people started off poor and prospered, but some rich people like to pretend they're special when they're just lucky.

If you want some stats, compare things like drug abuse and endemic poverty between the predominantly white Muskogee, Oklahoma, and predominantly black Baltimore, Maryland. Baltimore has gotten a lot of press as an urban hellhole, but Muskogee, shining bastion of conservatism that it is, is far worse off. For some reason, more conservative elements of the media don't see fit to comment on this, or to help their brethren succeed.

143:

The heck with communication between brain and world. It would be so much easier to just improve communication within the brain.

Assume a technology that lets us dump a metric buttload of nanowires into a child's brain. Maybe just dump them in, and let them migrate and anchor at random. Assume further that these create neural pathways, perhaps with a high average length. But more importantly, with a high speed.

Do we know that? Apart from the body's response to these nanowires, why should we expect faster communication to improve thinking, rather than potentially disrupting it?

This is the sort of idea that can be tested with brain simulations once we know the simulations have high fidelity.

144:

Got to ask: why are we doing this in the first place? Let's not assume the conclusion; instead, ask what progressively invasive access to a person's thinky bits actually gets you. This is probably a bit dated, but we were given the model of sensory apparatus to afferent nerves, to a black box that performs some computation, to efferent nerves, to muscles.

I can get bypassing, say, bad optics or nervous tissue that's been damaged by diabetes. But it's not at all clear to me what's to be gained by going any deeper. I get a strong whiff of underpants gnomes.

145:

Got to ask: why are we doing this in the first place?

Probably 'cuz we live in a Faustian civilization, one that just can't seem to resist delving into realms of knowledge that confer power and control, even at the risk of unleashing Lovecraftian dooms. Edward Teller explained his interest in hydrogen bombs by saying "I fear ignorance." LOOOL. Personally, I fear knowledge at least as much...

146:

Let me put it another way: why isn't the augmentation technology in Ted Chiang's "The Truth of Fact, the Truth of Feeling" the way to go?

147:

Not so much Faustian, I think, as essentially useless. The assumption seems to be that, for example, one can now type at ten or a hundred times one's previous rate with this type of technology. I've got to question this assumption.

148:

Because people are still, en masse, too stupid. Imagine how the world might change if intelligence were just boosted overall by 50%. Or, even more likely, imagine that those with lower intelligence get a disproportionate boost compared to everyone else and the bell-shaped curve narrows considerably.

149:

Not so much Faustian, I think, as essentially useless. The assumption seems to be that, for example, one can now type at ten or a hundred times one's previous rate with this type of technology. I've got to question this assumption.

OK, to use a far-future example from our host: neurally linked squids mining the depths of a water planet (Neptune's Brood). The location isn't the interesting part; it's the idea that sentient minds have shaped themselves into something else (communist / community-based in this case) which might make their own particular minds happier than the basic model. (Of course, these are all silicon lifeforms, not organic ones.)

On a less fantastical note, let's take something we do know the effects of: lead in petrol.

The lead content of petrol massively increased aggression levels and reduced IQs across entire populations, and no one batted an eyelid.

You can then take any number of dioxins, pollutants and even consumer goods (cigarettes, plastics) and replicate this a thousand times. Estrogen in the water supply is the next big bugbear; the fallout from that is going to be larger than leaded petrol's by a long shot.

So, in short, the 20th century has seen Homo sapiens sapiens unleash a chemical stew (we'll ignore the accidental / background effects of nuclear tests and accidents, or UV damage due to ozone depletion, for the moment) that has already had massive effects on the genetic and epigenetic nature of the species.

At this point the argument isn't "why should we do this", but "we need to do this immediately to get back to a base level of health". Which is, fyi (well, military, let's not go there), the position of the Chinese government (see my first comment).

150:

How old are you, six? The depressing reality is that your average so-called 'high IQ' guy wastes few opportunities to do something stupid. I ought to know; I'm one of 'em.

You've also made the assumption I asked you not to make, namely that this particular invasive technology automatically leads to increased 'IQ' . . . whatever that is.

151:

"You've also made the assumption I asked you not to make"

Well, too bad, because I seriously disagree with you. Of course, you may be correct, but we will find out my way eventually -- by experiment.

152:

Let's not assume that all technologies of this kind are invasive, e.g. EM stimulation.

As to the links between low IQ and violence (and lead), this is well studied across multiple disciplines:

The sample consisted of 246 African American, inner-city children from whom blood lead concentrations were assessed at 7.5 years of age. The results consistently show neurobehavioral deficits in relation to low levels of lead in the areas of intelligence, reaction time, visual–motor integration, fine motor skills, attention, including executive function, off-task behaviors, and teacher-reported withdrawn behaviors. Effects were identified in the specific domains of attention, executive function, visual–motor integration, social behavior, and motor skills, which have been previously suggested as part of lead's “behavioral signature”.

http://www.sciencedirect.com/science/article/pii/S0892036204000212

In a 22-year study, data were collected on aggressiveness and intellectual functioning in more than 600 subjects, their parents, and their children. Both aggression and intellectual functioning are reasonably stable in a subject's lifetime and perpetuate themselves across generations and within marriage pairs. Aggression in childhood was shown to interfere with the development of intellectual functioning and to be predictive of poorer intellectual achievement as an adult. Early IQ was related to early subject aggression but did not predict changes in aggression after age 8

http://psycnet.apa.org/journals/psp/52/1/232/

The CogAT norm comparisons indicate shifts in IQ levels consistent with the blood lead to IQ relationship reported by an earlier study and population shifts in average blood lead for children under age 6 between 1976 and 1991. The CogAT norm comparisons also support studies indicating that the IQ to blood lead slope may increase at lower blood lead levels. Furthermore, long-term trends in population exposure to gasoline lead were found to be remarkably consistent with subsequent changes in violent crime and unwed pregnancy.

http://www.sciencedirect.com/science/article/pii/S0013935199940458

You're also not thinking about individuals in this kind of modelling -- you're doing populations. E.g. raising the entire population by a margin of 10-20 points still leaves the usual distribution curves we have at the moment.
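
As a toy illustration of that population-level framing, here's a minimal Python sketch (all numbers are made up for illustration, not real IQ data): a uniform boost shifts the whole curve without changing its shape, while a boost concentrated at the low end narrows it.

    # Illustrative only: how population-wide boosts move or narrow a distribution.
    import numpy as np

    rng = np.random.default_rng(1)
    baseline = rng.normal(loc=100, scale=15, size=100_000)   # IQ-style scale

    uniform_boost = baseline + 15                  # everyone gains the same amount
    targeted_boost = np.where(baseline < 100,      # below-average people gain more
                              baseline + 15,
                              baseline + 5)

    for label, scores in (("baseline", baseline),
                          ("uniform boost", uniform_boost),
                          ("targeted boost", targeted_boost)):
        print(f"{label:>15}: mean {scores.mean():6.1f}  sd {scores.std():5.1f}")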

153:

This is where someone always pops up and explains how IQ is a meaningless measure, and hence anything with the letters "IQ" in it is spurious bullshit. There - saved everyone the trouble.

154:

This doesn't make any sense. You're making an assumption. End of story. Is it possible this sort of technology will increase 'IQ'? I'll certainly go with that. Is it probable? I simply don't know.

And you know, if this weren't an assumption, you might be able to explain why this sort of thing will necessarily lead to an increase in 'IQ'. Something besides 'it's obvious'.

155:

Because people are still, en masse, too stupid.

And that won't change, even if the individuals get smarter. People as a herd react in a variety of ways, most often with increased risk-taking and poor judgement, caused by a variety of stimuli. A lot of group dynamics are still poorly understood mechanisms, having to do with social interaction more than conscious action.

Imagine how the world might change if intelligence was just boosted overall by 50%. Or even, and more likely, that those with lower intelligence get a disproportionate boost compared to everyone else and the bell shaped curve narrows considerably

Then we will literally need robots to do the mindless jobs, because more intelligent people will not be able to do them for long without getting bored.

156:

Because intelligence can be broken down into a number of different components, e.g. pattern matching, extent of short-term memory, access to long-term memories, ability to organize data into meaningful patterns (knowledge). In fact, I can't think of any element of intelligence that could not be enhanced by a BCI of some kind.

157:

Are you really saying that so-called (and usually self-declared) 'high IQ' people don't do stupid shit? Seriously?

158:

"Then we will literally need robots to do the mindless jobs, because more intelligent people will not be able to do them for long without getting bored."

And there speaks one of the elite on behalf of the stupids. Ever considered the possibility that mindless, tedious, boring jobs are mindless, tedious and boring to low-IQ people as well?

159:

I suspect Scent of Violets has me on ignore, but here's some proof that it works:

Our results indicate that left prefrontal anodal stimulation leads to an enhancement of working memory performance. Furthermore, this effect depends on the stimulation polarity and is specific to the site of stimulation. This result may be helpful to develop future interventions aiming at clinical benefits.

http://link.springer.com/article/10.1007/s00221-005-2334-6

We applied rapid-rate repetitive transcranial magnetic stimulation (rTMS) at individual alpha frequency (IAF) to improve cognitive performance by influencing the dynamics of alpha desynchronization...

The results indicate that rTMS at IAF + 1 Hz can enhance task performance and, concomitantly, the extent of task-related alpha desynchronization. This provides further evidence for the functional relevance of oscillatory neuronal activity in the alpha band for the implementation of cognitive performance.

Enhancing cognitive performance with repetitive transcranial magnetic stimulation at human individual alpha frequency

tDCS applied during completion of the n-back task was found to result in greater improvement in performance on digit span forward, compared with tDCS applied while at rest and sham tDCS during the n-back task. This finding was not evident with digit span backward.

http://www.sciencedirect.com/science/article/pii/S1935861X10000628

So, um, yeah: we know it can affect performance, much akin to knowing that Ritalin / Adderall can affect performance.

160:

My grandmother once told me a story of what happened when WW1 was declared. There were groups of men with sticks roaming the streets looking for dachshunds to beat to death because they were "German dogs". Yeah... I can really see high-IQ people doing that.

161:

Dirk - I'm sure most are aware of the issues with IQ tests, but I'm hoping the snark is not meant to suggest that such studies do not show lead to be detrimental to development.

As that would be crazy-pants.

162:

"So, um, yeah: we know it can effect performance, much akin to knowing that Ritalin / Adderall can effect performance."

Yes, tDCS - my 15 seconds of fame on BBC4 and Sky News showing people how easy DIY tDCS really is. As for Ritalin, I prefer modafinil and piracetam. Yes, as a professional engineer I "cheat" regularly. BTW, I take piracetam not because it's an IQ boost (those are all marginal for already-high-IQ people) but because it boosts mood and focus, both drugs are neuroprotective, and piracetam also has a number of other benefits, especially for older people. Just don't expect your GP to write you a prescription for it because... well, see some of the above discussion on the subject.

163:

People use IQ selectively, depending on whether you are talking about lead poisoning or race. It's all political.

164:

Which rather hits the nail on the head:

A large majority of 'high-IQ' types and elites are already dosing themselves up to the eyeballs with nootropics and drugs meant for other conditions, to gain an edge. Likewise, you can be sure that those clinics mentioned earlier (regarding life extension, new blood, etc.) have been functioning for at least a quarter of a century by now. Russia always was at the forefront of that type of thing.

As you stated, you have to do it via the grey market at this time (in the West), but the general populace is left out in the cold.

Full Disclosure: piracetam etc, been there done it.

165:

Then we will literally need robots to do the mindless jobs, because more intelligent people will not be able to do them for long without getting bored.

Are we supposed to be wringing our hands about a catastrophic shortage of jobs or of workers? I always forget which is scheduled for odd days of the month and which for even days.

Robots are going to render millions permanently unemployed. And sub-replacement fertility will leave us without enough workers to provide for the elderly. Worst of all, both things are going to happen simultaneously! We must live in the worst of all possible worlds.

Less sarcastically, I think that "jobs shortage" is going to be the dominant problem in the decades ahead and the persistent worker shortage will not materialize. The imaginary worker shortage will nonetheless be endlessly propagandized by people who want even cheaper workers. If you want to track what's happening, don't listen to speeches about how $NATION or $REGION needs more workers. Look at unemployment rates and wage trends in those same areas. Is unemployment below 4%? Are wages rising much faster than inflation? Don't believe the guy proclaiming a "wine shortage" when there is plenty of unsold wine on the shelves and prices aren't rising very fast. The same goes for labor/worker shortages.

166:

The stuff the old Soviets did was amazing - I have Russian friends who are into digging out the old data, replicating the experiments and taking them forward. It all has to be done as "amateur" stuff because some of it would never get past ethics committees. You can also get some very interesting drugs in Russia, for example Ladasten and, latterly, SkQ.

167:

Without saying too much, full blood transfusions have been common since at least Chernobyl times, if not earlier.

There's a reason Putin is so manly for his age; it's not all luck.

168:

Ah, my young friend, you seem to think that stupidity is unitary. Trust me, there are many types of stupidity. All of which I have -- from time to time -- lavishly indulged. Having a higher IQ merely means more accessible stupidities.

A side note: I had thought you were an older guy. You really don't know this?

169:

I have moved beyond your cynicism child... I am 62 in August.

170:

Seriously, you don't think the rise in IQ via the Flynn effect has made our society better over the 100 years since 1914?

171:

You might like to find some evidence connecting IQ levels with social improvements...

172:

I have moved beyond your cynicism child...

I have moved beyond your skepticism child... Fixed it for ya. And when you say your words right, it makes a lot more sense, don't it?

173:

Because people are still, en masse, too stupid.

Sigh.

Strong suggestion? Go and read the short story "Our Neural Chernobyl" by Bruce Sterling (collected in Globalhead) before you spout ill-thought-out rubbish like that.

Hint: we don't know what the determinants of general intelligence might be. Or even if such a thing exists (I prefer those hypotheses that suggest it's a whole bunch of disparate skills, some of them orthogonal). Let alone what the effect of adding connections or speeding neural processing up might be.

174:

It's not ill-thought-out - it's an observation.

175:

full blood transfusions ... There's a reason Putin is so manly for his age

Monkey glands, I'd hope!

Transfusion blood is expensive. I did some background research for "The Rhesus Chart" -- it turns out typed whole blood provided by the NHS transfusion service (non-commercial) is billed internally at around £150 per 330ml unit. A full-body transfusion would require around 8 litres, so on the order of 24 units, so around £3,500 ... and it would need repeating monthly or thereabouts (erythrocytes don't last forever). More to the point, blood is scarce -- a typical British hospital accident and emergency unit has one unit available for emergency use, and any extra needs to be ordered just-in-time for delivery via a special ambulance from the regional blood centre.

Mind you, Putin being a vampire would explain a lot, and it'd only cost around US $60,000 a year, which for a head of state who has a solid gold toilet seat in his private jet is ...
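
The back-of-the-envelope arithmetic behind those figures, as a minimal Python sketch (the unit price is the rough NHS internal figure quoted above; the exchange rate is my own assumption of roughly 1.5 USD/GBP):

    # Rough cost arithmetic for a monthly full-body transfusion, per the figures above.
    unit_price_gbp = 150           # per 330ml unit of typed whole blood (approx.)
    unit_volume_ml = 330
    transfusion_volume_ml = 8_000  # "around 8 litres"
    usd_per_gbp = 1.5              # assumed exchange rate

    units_needed = transfusion_volume_ml / unit_volume_ml   # ~24 units
    monthly_cost_gbp = units_needed * unit_price_gbp         # ~GBP 3,600
    yearly_cost_usd = monthly_cost_gbp * 12 * usd_per_gbp    # ~USD 65,000

    print(f"units per transfusion: {units_needed:.0f}")
    print(f"monthly cost: GBP {monthly_cost_gbp:,.0f}")
    print(f"yearly cost:  USD {yearly_cost_usd:,.0f}  (same ballpark as the $60,000 above)")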

176:

Seriously, you don't think the rise in IQ via the Flynn effect has made our society better over the 100 years since 1914?

All it means is we get lots of practice at solving IQ-test type logic puzzles during our schooling. I was in a cohort that got fed symbolic logic aged 10-11; by 1920 standards I'd have been shit hot at it. On the other hand, I couldn't conjugate a Latin verb to save my life -- having, in fact, no Latin education whatsoever. Basically, educational priorities shifted over time, and stuff kids get taught today might well have been utterly useless, esoteric ivory-tower stuff if you dumped them in the 1880s (when a knowledge of crop rotation would be essential, but coding not so much).

177:

I'll correct that for you:

Transfusion blood that is fresh, young, full of the good stuff and AIDS/STD-free is really expensive, but it exists.

There's also a reason I mentioned the "C" word, but [redacted]. Stem cells etc are also a huge area of [redacted] and even in America, as seen by [redacted].

Outside of Russia / China, it's also heavily tied into the (central European) Mafia, Israel, NY and third-world countries, but hey.

Then again, you probably don't want me to start linking to what's known as Red Markets.

  • Which puts us back to Ciba.

Night City was like a deranged experiment in social Darwinism, designed by a bored researcher who kept one thumb permanently on the fast-forward button. Stop hustling and you sank without a trace, but move a little too swiftly and you'd break the fragile surface tension of the black market; either way, you were gone, with nothing left of you but some vague memory in the mind of a fixture like Ratz, though heart or lungs or kidneys might survive in the service of some stranger with New Yen for the clinic tanks.

Which is the real issue -- I'll refer back to the history of human biology, corpses, dissection and so on. There was a great fear (among the Enlightened Church members) that crossing that Rubicon would lead to an admixture between the body of man and the body of beasts. It's probably the same fear here, just being proclaimed in a different ideological language.

178:

...and that'd be a broken link.

[[ fixed - I don't know what you use to edit, but it inserts unwanted newlines on occasion - mod ]]

179:

One of the problems in the UK is that businesses don't seem to be keen on paying to train their employees. They expect everyone to be ready to drop into a job. I have seen claims that other European countries get better productivity because the companies train their workers on the expensive machinery, and there's less waiting for skilled mechanics to fix trivial problems.

I can believe it.

180:

"Transfusion blood is expensive."

Well, over here it is. I imagine it's a lot cheaper where the local populace spends a significant amount of time going hungry. So we are really talking around £700 a month, or even less. That wouldn't even get you a bedsit in London. Besides, one might still get significant benefits from a transfusion once every 6 months.

181:

The only training I have ever had has been someone dropping a fat book on my desk.

182:

You're talking to me? Because if so, that would be a very odd comment.

183:

Noel, comments from Dirk are almost always odd.

184:

Why? Obviously the Africans and rabbits thing was what first leaped into your mind in the assumption you made.

185:

You know what I want, cognitively speaking? No, it's not 'IQ'. I want to be able to remember things the way I used to. And I want to be able to concentrate the way I used to, like back in the day when I used to remember ten to twenty phone numbers of various young ladies after our band folded up for the night, under the influence of drugs not-so-legal and otherwise. I can't do that anymore, and the lack is sorely felt.[1] The same thing applies in spades when it comes to focusing on a narrow topic and sticking with it for hours on end.

To be fair, that last is also a function of distractors. I'd kill for something that would let me shrug off my shoulder/back/knee pains. Yeah, I know; I don't have that E. E. Doc Smith ability to shrug off bodily distractions. But there you have it.

[1] At twenty, my daughter has no problems memorizing several hundred-odd lines of script even a bare few days before the curtain rises on the first show. It seems to me that this sort of performance is a thing of youth; certainly I can't manage it any more.

186:

Well, I've managed all of that with the right drugs, but then I'm only 61 and still doing gym and martial arts training 3 times a week.

187:

That's because deep down we think alike, despite working from slightly different axioms

188:

That's because deep down we think alike, despite working from slightly different axioms

That's... sweet, but nowhere near correct.

If we are to use this frankly ancient analogy of computers (and, pro tip: HFTs don't work like that now anyhow -- it's an ecology), you hit certain hardware issues, such as this:

Systems installed:

1) Empathy
2) Language
3) Culture
4) Morality
5) Gender

etc.

2)b) If no higher grammar (higher-order thinking) is taught by age 12, then you can never again form these structures in your brain.

No, seriously. I'll let anyone step up to the plate for $500 to argue that one.

Your consciousness is a process, and a process with certain levels open to it (plasticity) -- if you miss them, well, you get broken people. And by broken, I mean "damaged".

Hint: 1) is usually where it breaks. If you can't even do basic sex education, your chances at 1) are slim, which is why 4) kicks in with a vengeance, along with the old fire 'n' brimstone.

That's not even mentioning the wild things.

I'll be honest:

Dirk, at a fundamental level of thought, you and I have about as much in common as an octopus and a squid. (No, stop there: this isn't a parable about the socio/psychopaths).

Literally.

And this is the damaged version allowed to roam. You should have seen me in my prime.

(I'm waiting for a reply to comment #115 -- if you've missed it, alpha waves are tsi, gamma are meditation. I'm waiting for anyone to start being able to engage at this level and have some fun. Real fun.)

189:

Scent of Violets wrote: You know what I want, cognitively speaking? No, it's not 'IQ'. I want to be able to remember things the way I used to. And I want to be able to concentrate the way I used to

Other people have already mentioned modafinil. Without turning this into a drug recommendation thread, that's a great example of a treatment for the "sick" becoming something that large numbers of "healthy" people start using. IIRC, someone did a survey of scientists a few years ago in the USA and a significant number were using modafinil or similar.

Speaking as someone ignorant of this kind of fine-scale manipulation of biological structures: are drugs a better delivery mechanism for tweaking retinal nerves and the like? Would they be a better delivery mechanism for nano-thingies rather than just sticking them in the bloodstream and hoping for the best?

190:

Modafinil has large drawbacks, in that it shuts down / over-arches other neurochemical processes.

The largest one being creativity.

If S.O.V is serious, s/he needs to do this:

1) Stop all drugs
2) Learn to meditate (and keep on doing it)
3) Get fit
4) Take some shrooms
5) Start taking nootropics (and only nootropics)

with about 2-4 weeks in between each stage.

Speaking as someone ignorant of this kind of fine-scale manipulation of biological structures: are drugs a better delivery mechanism for tweaking retinal nerves and the like? Would they be a better delivery mechanism for nano-thingies rather than just sticking them in the bloodstream and hoping for the best?

Ok, weird question.

Let's presume you're not talking about nanomachines; then drugs (oral, rectal, intravenous) are all the same: you're putting something into your bloodstream and waiting for your blood to circulate until that chemical hits the receptors it's designed to bind to.

If you're talking about direct application to a site, then we can't do that yet. Well, kinda.

Oh, apart from the whole retinal / auditory nerve brainwashing stuff, which is kinda different. And, frankly, is a long way beyond what you get in public science papers.

whoops

191:

"Personally, I fear knowledge at least as much..." Grrrrrr .... This is the sort of moronic claptrap that always annoys me. You are saying that Ignorance is preferable to Knowledge, are you not? This is akin to religious mysticism, where un-knowing is worshipped. ( Anyway the "Faust" myth is just that - a myth, & a very shaky one at that .. )

If you really feel ( because you can't be actually THINKING ) like that, may I suggest you go & join a monastery & don't bother us again, please?

Like I said: Grrrr ....

192:
  • & also Dirk @ 156. Generalised "IQ" is a very wobbly measure indeed - though it does seem to correlate with problem-solving abilities. There are "narrower" IQ testing regimes, which aim to take some sort of measure of ability in pattern recognition, mathematical ability, & verbal reasoning. But I've never seen a similar measure applied to practical problem-solving, or ability with tools - f'rinstance. BUT none of these takes any account of "Group" behaviours, or brain-washing ( Very intelligent, very clever person raised as fanatical member of $Insert$Nameof$Church-here ) or not applying thought to action etc.
193:

Yeah. If, yet again, you hear some industrialist or politician bemoaning the "shortage of skilled workers, especially engineers..." he is bloody lying - they are all bloody lying. Actually it translates as: "We can't get well-qualified people aged under 35 (or better still 30), preferably foreign, too - because then we can play all the usual tricks on them & pay them peanuts." I got my Engineering MSc at age 46 ( having already got qualifications in Physics & Electronics ). Days of paid employment resulting from same? Zero. Yes, I'm still bitter & angry.

194:

Addendum - & also from ATT @ #179: That too, but ageism & rampant sexism are still dragging "us" ( the UK & the USA ) backwards, in spite of all the very public evidence that it is a mistake. What was that about "intelligence" on another part of this thread?

195:

"But, I've never seen a similar measure applied to practical problem-solving, or ability with tools though"

I think there's some sort of correlation but I don't know exactly what it is. I scored well enough on a state-administered IQ test to be put in a school for the unfortunately gifted (high end of the top 1%).

I was OK in a career as a commercial diver, "on the tools". Lots of practical problem-solving there. I was also OK as an IT project manager, again with lots of problem-solving. I got through the preliminary tests for an apprenticeship in my firm, which weeded 30,000 applicants down to 3 (I failed at the final hurdle). The test was solving practical problems like which way a lever would move when a load is applied, or which rope is under the most load.

I'm only in regular contact with one of my classmates. He took up doctoring in his 40s and is now a pretty good surgeon.

So, with a total sample of two, there might be some correlation.

196:

I know, or know of, a number of people who think that "mass market" manufacturers (which include BMW and Mercedes) are trying to move towards a vehicle life of SEVEN years, between small buzzy engines that post great ECE15 fuel economy figures but can't get close in the real world, and components whose replacement requires things like removing an entire fascia!

197:

Most word processors (including WordPad) are guilty of this "inserting CRLF" behaviour, and actually I think so is Notepad if you set it up to word-wrap automatically and then save with a string longer than your viewing width.

198:

All the arguments for "better intelligence = better people" just remind me of the scene in the '50s version of War of the Worlds, where the wise old pastor (tm) attempts to make the argument that if the Martians are more advanced technologically then they must be more advanced morally ("closer to God", in his words), right before being turned into a crispy critter for his efforts at peaceful communication.

It doesn't take much observation of human culture and society (any of it!) to see that high intelligence does not equal greater empathy or morality. If anything, often the opposite.

199:

You were actually able to memorize multiple phone numbers and focus for hours at a time? Wow.

What you describe as your condition now is what I was like at my peak, late teens to mid-twenties. I've always had a shit memory, prone to snagging on weird but irrelevant factoids yet useless at hanging onto important stuff. And now I'm a lot worse; much of my childhood is a rolling fogbank, and I lost a couple of years in my 40s due to subtle memory impairment associated with medication (all that remains is a lot of photographs and email and some novels, so there's that).

As for the shoulder/back/knee pains, I reckon they're largely responsible for the grumpy middle aged man stereotype -- there's nothing like low grade chronic pain for causing bad temper.

200:

Interesting stuff. I'll hold off until the future technology can reliably upgrade a mouse to human-level intelligence. Hopefully it won't be pissed off about the treatment and decide to grow its newly acquired intellect a bit further and dominate the world. Having said that, I want to be ready for the future and state the following: I, for one, welcome our new rodent overlords.

201:

(Long stupid attempt at logical formulation deleted) ...

Dirk, if you say something which you think is fair comment about another group, but which a bystander in a different group (such as Noel) interprets as a racist slur on that group, THEN at the very least you are guilty of a poor choice of words -- in reality, if the target group is a frequent target of racism, then it's quite possible that you have unconsciously swallowed the poison and you really need to practice some mental gastric lavage before you say anything else on the subject.

202:

Because people are still, en masse, too stupid. Imagine how the world might change if intelligence was just boosted overall by 50%. Or even, and more likely, that those with lower intelligence get a disproportionate boost compared to everyone else and the bell shaped curve narrows considerably.

Erm, lots of people would do really stupid things in a more intelligent way? I don't know if you have noticed, but higher intelligence does not seem to correlate with better functioning in a group.

203:

Not new. In fact, mice are the three-dimensional manifestation of a four-dimensional, highly intelligent life form which chooses to spend most of its time in labs running incredibly subtle experiments on humanity.

204:

Could it be that Noel was originally referring to a comment that got deleted, and then you all got it wrong?

205:

Possibly, but I doubt it; Noel was replying to someone suggesting demographic transition as a speciation event and was trying to draw out the racial implications of that statement.

206:
  • As far as we can tell at this stage, the demographic transition is a global, pan-human event, affecting all without regard to geography, race or religion, essentially simultaneously (within the space of about three centuries).

  • A reasonable generic term for the new subspecies would probably be "post-human". Different may be better or worse; by default, it's just different.

  • I'm assuming that 18th-century Sweden is pre-demographic-transition, even among its educated classes.

  • The biggest markers would be in terms of r/K strategy and life history: fertility rate 3½× lower, child mortality 100× lower (two orders of magnitude!), age of first reproduction 2× higher, that sort of thing.

  • Lack of genetic difference would be an argument against, but especially at the subspecies level wouldn't be an absolute bar; see, for instance, the dingo as against other dogs and wolves.

  • It may also be a bit early to tell -- normally we see these things tens of thousands of years afterwards or more, not while they're in progress.

  • Regardless of its status, it's still a useful thought experiment with regard to any potential future (sub-)speciation events. Consider our rather ambiguous relationship with pre-demographic-transition groups — ranging from aid (to help them also transition) through various forms of displacement to the horror of our refugee policies. (Maybe we're more "non-human monsters" than "post-human"...)

207:

Yeah, I probably should have spelled out my thoughts at greater length yesterday, rather than firing off half a sentence...

Sometimes brevity is a virtue, other times, not so much.

208:

"There's nothing like low grade chronic pain for causing bad temper." I plead guilty, m'lud .... As anyone who ran into me @ Eastercon this year will testify ... having (hopefully temporary) "permanent" shoulder-&-arm pains can make you really shit at conversation & "getting along", even if you think you've got it under control & have adjusted for it.

209:

Lead: (Haven't read beyond this post yet, so unsure whether you've already addressed this ...)

Do the studies you know of also explain the very high exposures to lead among other, earlier societies? I'm specifically thinking of the lead-based cosmetics, lead-based paints, Roman aqueducts using lead pipes, etc. and how these correlate (or not) with aggression across high vs. moderate vs. low usage.

I get that lead is bad for one's health ... however, I'd really like to understand why/how lead gets preferentially involved in brain areas that activate aggression/inhibit social compliance.

Thanks!

210:

Are they in touch with their free-ranging cousins??!!!

If any mouse is reading this:"It was all an accident!! At no point in time did I mean to harm any mouse in my house! The traps were for small RATS, really! I'm sorry if any of the 17 mice were related to you."

211:

The papers linked all focus on child (under 6-8, certainly pre-puberty) development for a specific reason: the levels of lead released by petrol (et al) into the environment are at much lower concentrations than what the Romans were exposed to. (Also note that Roman exposure to lead was certainly tied to specific population groups. The highest exposures probably came from sapa containing lead acetate, or from water pipes, rather than the more popularly cited cosmetics, which ironically means the poor suffered less.)

As to the mechanics of the why?

Electrophysiological studies showed that neurosensory processing may be affected by lead, with consequent decrease in auditory sensitivity and visuomotor performance. Lead disrupts the main structural components of the blood–brain barrier by primary injury to astrocytes with a secondary damage to the endothelial microvasculature. Within the brain, lead-induced damage occurs preferentially in the prefrontal cerebral cortex, hippocampus and cerebellum.

http://www.sciencedirect.com/science/article/pii/S0165017398000113

You'll probably notice that this ties into my earlier comments about audio/motor functions of the brain. The basic concept is that developing brains are a process (as stated before), and if certain structures / associations / pathways (etc.) don't grow correctly you'll end up with damage that's expressed in behavioral terms, e.g. poor impulse control, anger.

The sad fact is that you only get one shot at this currently, which is why ways to reinsert plasticity etc are so important.

Note: I'm not a behaviorist or a pure reductionist / physicalist.

212:

I read about a similar (maybe even the same) study that looked at living near hydro wires and leukemia/cancer risk/incidence ... the real variable at play (the cause) is lower socio-economic status, under which are subsumed higher exposures to toxins, poorer nutrition, less access to education and healthcare, etc.

213:

The financial crisis and ensuing global depression was caused to a large extent by lots of greedy intelligent people. If they'd been greedy stupid people they'd never have managed to get things so well arranged to benefit themselves.
Thus, suggesting that the world would be a better place if people were more intelligent is just silly.

214:

the real variable at play (the cause) is lower socio-economic status, under which are subsumed higher exposures to toxins, poorer nutrition, less access to education and healthcare, etc.

While I get your point, I think you're missing two important things:

1) All exposure to lead is damaging, i.e. there's no "safe" level, just exposure level x producing damage y (a toy sketch of the difference is at the end of this comment)

2) While lead concentrations will certainly be higher in urban / high-density populations, which have more air pollution, car exhaust is almost universal. Socioeconomic class has little effect on this (OK, suburbs are better than the inner city, but there's still exposure).

Think of L.A. The smog doesn't just cover the poor parts, it covers the entire valley.

Also, there's absolutely no doubt that lead damages brains.

Many of the neurotoxic effects of lead appear related to the ability of lead to mimic or in some cases inhibit the action of calcium as a regulator of cell function. At a neuronal level, exposure to lead alters the release of neurotransmitter from presynaptic nerve endings. Spontaneous release is enhanced and evoked release is inhibited. The former may be due to activation of protein kinases in the nerve endings and the latter to blockade of voltage-dependent calcium channels. This disruption of neuronal activity may, in turn, alter the developmental processes of synapse formation and result in a less efficient brain with cognitive deficits. Brain homeostatic mechanisms are disrupted by exposure to higher levels of lead. The final pathway appears to be a breakdown in the blood-brain barrier. Again, the ability of lead to mimic or mobilize calcium and activate protein kinases may alter the behavior of endothelial cells in immature brain and disrupt the barrier. In addition to a direct toxic effect upon the endothelial cells, lead may alter indirectly the microvasculature by damaging the astrocytes that provide signals for the maintenance of blood-brain barrier integrity.

http://www.ncbi.nlm.nih.gov/pubmed/1671748
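Since "no safe level" keeps coming up, here's the toy sketch promised above: the contrast between a threshold dose-response model and a no-threshold one. The slope and threshold numbers below are made-up placeholders, not values from the linked papers, and "damage" is in arbitrary units.

```python
# Toy illustration only: a hypothetical "threshold" dose-response model vs. a
# no-threshold one. All coefficients are arbitrary placeholders, not fitted values.

def damage_with_threshold(blood_lead_ug_dl, threshold=10.0, slope=0.5):
    """Old-style assumption: no harm below a threshold, linear harm above it."""
    return max(0.0, blood_lead_ug_dl - threshold) * slope

def damage_no_threshold(blood_lead_ug_dl, slope=0.5):
    """'No safe level' view: every increment of exposure adds some damage."""
    return blood_lead_ug_dl * slope

for level in (1.0, 5.0, 10.0, 20.0):
    print(f"{level:>5} ug/dL -> threshold model: {damage_with_threshold(level):4.1f}, "
          f"no-threshold model: {damage_no_threshold(level):4.1f}")
```

The only point of the second function is that damage is greater than zero for any exposure greater than zero - there is no level below which the modelled harm vanishes.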

215:

"Dirk, at a fundamental level of thought, you and I have about as much in common as an octopus and a squid."

Probably true, but my comment was directed at Charles. It often surprises me how much serious weird knowledge we share. It indicates a lot of unusual parallel interests, at least in the past.

216:

"I'm waiting to a reply to comment #115 - if you've missed it, alpha waves are tsi, gamma are meditation. I'm waiting for anyone to start being able to engage at this level and have some fun"

Well, although I am peripherally connected with people doing relevant research it's not something I am into except as occasional guinea-pig and/or user. Here's some weird Russian shit: https://www.youtube.com/watch?v=TjhPhFhotV4

217:

I know you take all kinds of meds, but have you tried modafinil or any nootropics? [Assuming there are no bad interactions - but you should know given your previous career]

218:

" I don't know if you have noticed, but higher intelligence does not seem to correlate with better functioning in a group."

And low intelligence does? Well, maybe - I guess it helps with the group bonding bit when you all go out hunting Dachshunds because they are "German dogs". Overall, how are those nations with a significantly lower IQ than Western/Asian average doing? [And yes, I know all the kinds of reasons why their IQ is lower overall]

219:

"The financial crisis and ensuing global depression was caused to a large extent by lots of greedy intelligent people. If they'd been greedy stupid people they'd never have managed to get things so well arranged to benefit themselves. Thus, suggesting that the world would be a better place if people were more intelligent is just silly."

So - all those smart people made the money, crashed the economy and then got bailed out by the other intelligent rich people using the money of the dumb suckers (us, en masse). Maybe that would not have happened if the dumb suckers were a bit less stupid?

220:

Ha! I was waiting for the old "Russian Light Virus via your computer destroys your mind".

/tangent

That channel is very 'interesting'. Reminds me of other channels where you get a bikini woman dancing a year ago, then some random news clip six months ago and then a man being crucified and set on fire two months ago.

p.s. Do you get bonus points if you spot the errors showing that the video isn't actually taking place in Ukraine or Russia?

221:

I know you take all kinds of meds, but have you tried modafinil or any nootropics?

The only smart drug I take is caffeine. I'm not at all sure that those things you call nootropics work as advertised or are at all safe. That last ties in with finding a reputable supplier. Subject my brain to whatever may be in the little pills the dealers are pushing? Not no way not no how. Maybe I'll feel differently when I lose a little more of my edge, but until then . . .

222:

You mean if you spot me being the one in the chair and the place being Wembley?

223:

Nootropics probably don't work too well if you are young and very intelligent. OTOH, if you are old and getting those little "senior moments" with regard to memory, I find that piracetam eliminates them completely. As for modafinil, it works just like caffeine, except there is a mood boost and your short term memory is enhanced. My "items" memory goes from between seven and eight to nine. Not a big jump, but noticeable. It also lasts a lot longer and is probably cheaper, unless you drink crap coffee or neat caffeine.

224:

Thanks! - Read the abstract and am now wondering whether to increase my dietary calcium.

225:

You mean if you spot me being the one in the chair and the place being Wembley?

I've a feeling you missed something in my response.

https://www.youtube.com/watch?v=Umc9ezAyJv0

226:

"Subject my brain to whatever may be in the little pills the dealers are pushing? Not no way not no how. Maybe I'll feel differently when I lose a little more of my edge, but until then . . ."

But then, there is something that you aren’t...you aren’t a DESPERATE student who is eager to gain advantage against the rest of the YEAR in any given subject at...let’s call it University shall we?

Here in the U.K. we have attained a level of educational requirement that demands Degree Level Qualifications for admittance to just about any decently paid Middle Class grade job.

Wasn't this ever so?

No, it wasn't.

Once upon a time in the U.K., in that far and distant land prior to the mid-1960s, all Good little Children were required to take an Exam that was called the 11 plus. This exam determined your future...if you failed it heaven help you, for you were then destined to join the British Working Class, and believe me oh best beloved you DID NOT want to do that if you were a sprig of the Middle Classes. But happily if you were born to the children of the Non Manual Labouring Classes then you were drilled relentlessly for the 11 plus...This was a sort of INTELLIGENCE TEST. You will recognise the term?

So let us say you pass the 11 plus and go to a Grammar School that looks a bit like Eton, as opposed to Failure, which earned you admission to a Secondary Modern School which looked like a Victorian Workhouse...all these terms are easily researchable with web search ... What the hell, I'll save you the trouble...

http://en.wikipedia.org/wiki/Secondary_modern_school

To the middle classes failure to pass the 11 plus was a horrible disgrace. To the working class it was just accepted as being the way things worked.

For a brief while after the introduction of the Comprehensive School System things did change but the Middle Classes rapidly adjusted to the changed reality and either bent every effort to obtain scholarships to Public Schools or gamed the system to obtain admission to the Better Comprehensives...and thus the term " Sink " Schools /comprehensives sprang up to describe the schools that you did NOT want your kids to go to.

But, should you obtain admission to a Grammar School...the Middle Classes long for the return of the Grammar School in whatever disguise so they still exist in the U.K.... you went on to University eh? No, you didn't.

Apparently what happened was that only a small percentage of Grammar School Kids went on to University, but once into the Grammar System you were instantly sealed off from the Hideous Zombie Hordes of the Working Class with a syllabus that meant that you learned LATIN ... The origin of Grammar Schools lies in LATIN GRAMMAR SCHOOLS...thus whatever promise was made to the lower orders along the lines of being moved up to a Grammar if you worked really hard and were academically worthy - it was called being a 'Late Developer' - the practical reality was far different; who is going to be so cruel as to send a grammar school kid down into a secondary modern school so that a secondary modern school kid could go up?

Anyway, once you passed that crucial intelligence test you were nudged into O levels and also into, say, the admission exams for the Civil Service or, say, admission to Teacher Training ... TEACHING was not automatically a degree level profession; rather, you were sent to a teacher training college way back then.

Working class kids could make up the deficiencies in their education but in practice it was extremely difficult even supposing you knew how to do it.

The thing is that at every level being thoroughly familiar with the 11 plus, and later the O levels, was crucially important to your future prospects: success was rewarded and failure punished quite savagely whilst the interests of the people who devised the system were served as was entirely understandable.

Would you fail to advantage your kids by failing to game the system by whatever means possible?

In this sort of educational system MEMORY is crucial and also critical. But even lacking a superb ability to memorise by rote, and even without being well taught to the exam according to your ability, passing those exams at some lower level could be managed through rigorous training from earliest youth, way back then.

But now? Well I don't know about the situation in other countries but here in the U.K. Degree level qualifications have become mandatory for admission to all sorts of professions that once just required that you pass the right number of O levels in Grammar School. And whilst some subjects at University have been degraded to meet the new reality - like Business Studies and so forth - others such as Medical degrees and such like science /mathematics based professions like engineering in its various forms, have maintained their academic rigour.

But wait, there's more! Lots of those lower order admissions to Middle Class Jobs - mostly well Paid Jobs which take place indoors and involve No Heavy Lifting - gradually began to disappear from the U.K.'s social system just before the turn of the century.

If you live in the U.K. you will have noticed the pressure on Local Government jobs and the disappearance of many Branches of Public Libraries, and therefore of the role of Librarian? It's going to get worse as I.T. Expert Systems move into areas formerly occupied by the lower ranks of the U.K.'s middle classes.

The consequence of this is that HUGE pressures have been placed upon the remaining Middle Class Job markets all the way up the feeder pipeline from schools...Exam by Exam and at every level. This is going to get worse as time goes on from here on in.

And that - finally and at long last eh? - brings us to ... " The Ultimate Tech Frontier: Your Brain”

For many years now middle class parents have bent every effort - and I mean BENT in a crooked ' gaming the system ' through the use of money or social status sort of way - to advantage their own kids. Up until now this has been fairly orthodox in so far as it has involved private tuition and moving house to the right catchment area for the Best School if you couldn't quite make the fees for a public school.

Here in this conversation people have owned up to, err, call it chemical enhancement shall we? But that has been by personal choice and often, I believe, to sharpen their approach to research work.

In future? And as that window to DECENT - that is to say middle class grade - professions narrows? I expect to see a kind of Drugs/ Genetically Engineered Arms race as people see the control of the future slipping away from their kids.

Some children are already being drugged to obtain their compliance to desired social norms...

" These medications are not a permanent cure for ADHD, but they can help someone with the condition concentrate better, be less impulsive, feel calmer, and learn and practise new skills.

Some medications need to be taken every day, but some can be taken just on school days. Treatment breaks are occasionally recommended, to assess whether the medication is still needed.

In the UK, all of these medications are licensed for use in children and teenagers. Atomoxetine is also licensed for use in adults who had symptoms of ADHD as children. "

http://www.nhs.uk/Conditions/Attention-deficit-hyperactivity-disorder/Pages/Treatment.aspx

Now where can be the harm in administering "medication " to children that will in future improve their memories and thus their ability to qualify for highly desirable professions?

Come now? You wouldn't want your children to be placed at a disadvantage now would you?

227:

No

The advantage of the "Grammar School" system was that it was an escape-route for poor children to get up the social/employment ladder ... Like my father (born 1911) - who when he retired was a Fellow of the (Royal) Institute of Chemistry - in the meantime his father had died when he was 12 or 13, leaving my grandmother with two young sons to bring up ... not easy.

The disadvantage of the then system was that there was virtually no "second chance" ....

The "reformed" syatem that replaced it also failed, because of the then dogma that comprehensive schools should also teach by mixed abilty, rather than streaming or setting inside theor comprehensive layout. That retarded education in this country for over 30 years.

As for "private" education - it's EXPENSIVE - but - people started to fork the money out, when it became apparent that the so-called "comprehensives" system was not as advertised, but a race to tht bottom of the lowest common denominator. Certainly, in the mid-1960's the better state Grammar schools were producing better results than most "private" ones - which was proptly destroyed by unthinking politicos.

PLEASE NOTE: There is nothing wrong with comprehensive schools, PROVIDED that, within those schools, children are selected for ability in their respective classes/subjects. But, because of dogma, this was not done for many years.

OK?

Disagree re "MEMORY", though ..... Certainly not for my "O" & "A" levels - they required thinking - or is that because I followed a maths-&-science route?

228:

Arnold #226 and Greg #227

You're both only part-right. My grandfather was a bus driver (later inspector). My father passed the 11+ and went to the local grammar where he got his highers, but not with grades that got him a university place, whilst his younger brother failed and went to the secondary modern and did well enough to get an apprenticeship at Prestwick/Scottish (name changed, and I don't have exact dates) Aviation. So it was clearly possible for the working class to attend grammar school.

My mother was a teacher, and from what she has said, passing the 11+ could demand detailed training in what the "correct answer" was. For example, the only acceptable answer to "Name something good to eat" was "sweets": answering "chocolate" was "wrong".

229:

Yes. I failed the 11+. It is also somewhat suspicious they made us all write the exam in pencil. Everyone who was expected to pass did so, and everyone who was expected to fail did so. Since I have an IQ that puts me in the top 1% of the population something clearly went "wrong".

230:

Arnold. Your description of grammar schools is a travesty of the truth. Middle classes streamed for an "intelligence test"? Maybe, but everyone who had a chance of passing was drilled for 18 months in all aspects of the 11 plus exam. At my council estate primary school - at least 95% working class - the top 3 classes in each year did 11 plus tuition and tests every week. It wasn't just an intelligence test: there were mental arithmetic, maths, English and verbal reasoning tests. At my school 1 in 5 children passed, which was the average for Manchester.

No middle class dominance at Grammar Schools either. Working classes predominated. They were a way out for working class children. After most Grammar Schools were abolished there were some holdouts, and in the area I lived in the Grammar Schools became middle class ghettos. There was also a hierarchy of grammar schools depending on the affluence of the local population.

I don't support the return of Grammar Schools - I think my children did better at Leeds comprehensive schools than they would have at Trafford grammar schools - because they would now be purely inhabited by middle class tutored children. However, grammar schools at the time of the post-war bulge are not recognisable from your biased description.

231:

I've no idea where you grew up in Manchester Mike but I do assure you that it didn't work that way when and where I grew up in the North East of England, and my description of the way things were up in my bit of the North East of England was accurate and has been supported by conversations that I've had with senior teachers/ head teachers who were active in their profession at that time.

I am willing to believe that conditions in the U.K.s Secondary Education system varied from place to place since, at that time, Local Education Authorities controlled the system up to and including some areas of Higher Education. Things are rather different these days, though I will admit that my knowledge of the present day system has faded away since my retirement from a University just after the turn of the century. As for way back then?

ALL of the kids that lived in the vast council estate where I grew up failed the 11 plus, and not just in my year either. Also the kids in the adjoining estates failed too. From conversations that I've had with people who were slightly older than I, that situation had prevailed for generations, and the only thing that brought it to a close was the introduction of Comprehensive Schools. I do assure you that my transfer, age 15, from the Ghastly Secondary Modern in which I spent most of my secondary education to a new Comprehensive School was shocking in its effect! We wore uniforms paid for with a clothing allowance/grant given by the government! The School uniform was a great social place marker in class-ridden UK society but, more important than that, Far More Important than that ... in my new comprehensive school we had decent, well qualified and capable teachers in modern class rooms and were taught towards an exam that had some credibility as a gauge of academic ability.

The exam that we took aged 16 at that time wasn't the O Levels but the C.S.E. so progress up from there was limited but it was much better than the Northern Counties Exam that people who left school age 15 took and which was just a certificate to say that you could read write and do elementary arithmetic.

You say: "At my council estate primary school - at least 95% working class - the top 3 classes in each year did 11 plus tuition and tests every week. It wasn't just an intelligence test: there were mental arithmetic, maths, English and verbal reasoning tests. At my school 1 in 5 children passed, which was the average for Manchester."

" At my school 1 in 5 children passed "

One in Five! You think that that is, or rather was, a GOOD thing? And the four in five found wanting went to Secondary Moderns, eh? I hope that they were better than the secondary modern that I and my fellow kids were parked in!

Anyway even that system simply wouldn't have been possible where I lived. From what I've been told there just wouldn't have been enough money in the local education system to support that kind of education whatever the theory might have been.

The Grammar Schools of my Childhood - there was a separate one for the Catholic kids, so that made two - just wouldn't have been large enough to support that kind of intake ... as bad as it was even in your area! "At my school 1 in 5 children passed, which was the average for Manchester." Good Grief! Whatever... the vast majority of us went to Secondary Moderns, the theory being that anyone who was bright enough could catch up in further education, err, later.

I don’t know whether there would be much merit in exploring this further but frankly you might as well be describing fairy land for all that it corresponds with my experience.

In the Comprehensive system I did quite well in the short period that I was there, but it was my ability to talk my way into a sort of superior apprenticeship that made the difference to my opportunities and not the vile secondary system of my childhood, that was mostly just a sort of holding pen for kids that were destined to be tipped into the mines and into heavy industries that were ever hungry for manual labour of various grades and capabilities.

"Maybe but everyone who had a chance of passing was drilled for 18 months in all aspects of the 11 plus exam. "

That definitely wasn't in accord with the way things worked in my area ... unless you interpret "everyone who had a chance of passing" very selectively indeed.

I have no particular wish to pick a quarrel with you Mike but I will say that it is fairly commonplace for people who have been privileged not to appreciate just how privileged they were and of course to fight tooth and nail for that privilege to be maintained for their own children and grandchildren.

" However grammar schools at the time of the post-war bulge are not recognisable from your biased description."

And you consider that your description of your experience isn't biased?

I wonder whether we are using the same definition of 'Working Class'? Up here, and back there, 'Working Class' mostly meant that you were in some form of manual labouring or semi-skilled job, but I have heard the children of white collar workers describe themselves as being 'Working Class' or of 'Working Class Origin'. They weren't, not really, 'working class' by the commonly accepted definition that we used, which meant getting your hands very dirty indeed.

Oh, well it doesn't matter very much. Things do change.

It is certainly true - from what I've heard - that the Comprehensive system did change and deteriorate in quality as time went by, and it did split into a Good School/Bad School pattern, but that was only partially due to daft educationalist theories that were foisted upon the teaching profession. Most of the problem lay in severe underfunding, as money that would have supported the Grammar Schooled minority was spread around the system rather than being dramatically increased, as would have been required to maintain quality across the whole system.

Come to think of it, a similar problem occurred as the Tony B liar government tried to dramatically increase the intake of students into Universities without providing the money to back up the increased intake.

Nothing wrong with the principle of increasing the University intake in the U.K. - especially if you include some sort of remedial teaching in pre-degree or foundation courses - but without increased funding we soon became utterly desperate, and standards in some subjects went down quite dramatically whilst the University sector has become extremely dependent upon foreign students.

Money is the answer, I'm afraid. Though I definitely don't rule out individual technological enhancement of students in a radically different educational system at some point in the near future.

Educational Arms Race Anyone?

232:

" My grandfather was a bus driver (later inspector). "

By the standards and norms of the 'Working Class' society and culture of my childhood in the 1950s - my father was a miner - your grandfather was only marginally working class as a bus driver... damn it, when I was a child not many people could drive unless they had learned in the Armed Forces, so that counted as a highly skilled job... and as an 'Inspector' he was definitely middle class, and so his children were of the middle classes, and he would have been desperately anxious to ensure that they stayed middle class and 'Got On' in life, as it was termed.

233:

Educational Arms Race Anyone?

I'm worried that I'm putting too many replies in (spamming too much), but the point is that there's some very concrete brain development stuff finished by age 11. (Look back at the links - e.g. aggression levels are relatively set in stone by 8. Puberty is another topic entirely). As a thought experiment, rethink the 11+ not as a benchmark test, but a survival test: you're measuring the ones who weren't damaged too much* and who have the 'type' of mind that society values.

There are huge issues with metrics (John Oliver on the US system and the essential scams going on: the killer line is at 8:00, "to model...reproductive trends in livestock"), but re-imagine the issue a little - with no ability to accurately measure the state of minds (neural / behavioral / emotional / intelligence etc.), society has chosen the 'best' fake way of doing it.

And they've cocked it up completely, breaking the system in the process.

So, while this might challenge people (esp. Americans vis-à-vis 'home schooling', which is the worst idea I've ever heard of), education should be re-thought entirely - Co-operation vs. Competition re-imagined at the game theory level (and no, not in the way of 'everyone gets an award'); see the toy sketch below.
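For anyone who hasn't met the game-theory framing, here's the toy sketch: the standard prisoner's-dilemma payoffs, used purely as a stand-in for "competition vs. co-operation". The numbers are the usual textbook placeholders and have nothing to do with any real school or exam system.

```python
# Illustrative only: why "compete" can be individually rational while leaving
# everyone worse off. Payoffs are the standard textbook prisoner's-dilemma values.

PAYOFFS = {  # (my_move, their_move) -> my_payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "compete"):   0,
    ("compete",   "cooperate"): 5,
    ("compete",   "compete"):   1,
}

def best_response(their_move):
    """Pick the move that maximises my own payoff, given the other side's move."""
    return max(("cooperate", "compete"), key=lambda m: PAYOFFS[(m, their_move)])

# Whatever the other side does, competing pays more for the individual...
assert best_response("cooperate") == "compete"
assert best_response("compete") == "compete"

# ...yet mutual competition leaves the group worse off than mutual co-operation.
print(2 * PAYOFFS[("cooperate", "cooperate")])  # 6 if both co-operate
print(2 * PAYOFFS[("compete", "compete")])      # 2 if both compete
```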

Since we've mentioned UK Public (private, for US readers) Schools, an interesting aside: it's often common policy (either consciously stated or unconsciously 'known') to 'theme' the house system (boarders are usually split into 50-100 blocks, a la Dunbar's Number) into 'personality types', e.g. the sporty house, the artistic house etc. These houses co-operate (nominally) within themselves while competing against each other. This is certainly also the case in old Universities such as Oxford or Cambridge.

Worth a thought why that's done.

*Thinking in terms of populations etc as a hypothetical. No, I do not endorse BNW type grading of sentient beings, nor do I assign comparative value to said minds: society does this, however indirectly it qualifies it via salary / class / insert Pandora's box here.

234:

Every cell (A) eventually dies, therefore the nanotech anchored to it will lose its mooring. What happens then? What assurance do you have that a foreign particle, whether bio- or nano-, is not going to get eventually swept up and moved into another part (B) of the body? And, depending on which parts of the body or cell types, you'd get all sorts of other effects which would/could cascade further and so on. Alternatively, if the nanos stick around (ahem) ... they could build up, like plaque bodies, which is also not a good scenario. So some form of effective housekeeping is needed.

Real world safety is a huge issue. There are lots of problems with neural interfaces today. The risk of infection is a big one (which Charlie brought up). Another is that the voltage they deliver is simply too high, and over time, the neurons around them stop listening. Whatever future interfaces we develop have a very hard job to do. A huge fraction of the research in the space is on increasing biocompatibility.

On the topic of plasticity in the brain - so far this has not been an issue. Implants have been worn by some people for years at a time. And in general, what we see is a strengthening of the connection over time for systems that send data out of the brain. The brain is plastic and things change - but in general if a circuit is useful, it gets reinforced rather than weakened. Even if one neuron ultimately dies, others take its place if the connection is being reinforced (which it is if there's utility).

235:

And in general, what we see is a strengthening of the connection over time for systems that send data out of the brain. The brain is plastic and things change - but in general if a circuit is useful, it gets reinforced rather than weakened. Even if one neuron ultimately dies, others take its place if the connection is being reinforced (which it is if there's utility).

I thought if I kept quiet somebody would eventually say this, and it happened!

People were worrying about what to do if the brain changed around the interface and how to make the interface adapt to that, and it sounded to me like the issue boiled down to how to undo the adaptations the brain would make to having the interface there....

236:

I'm familiar with D.O. Hebb (the bit usually paraphrased as: neurons that fire together, wire together - a toy version of the update rule is sketched below). But my concern/interest is in how the body gets rid of the nano. Ramez addressed this as a biocompatibility issue. What I'm getting stuck on is that biocompatibility covers a lot of ground - anything from pharmaceuticals to other organisms (viruses, bacteria, fungi, etc.) to novel completely synthetic materials, etc. Nanotech does not have to be made of synthetic materials ... it could be made from specifically re-purposed other organisms. This trick has already been demonstrated to have worked in our evolution; why not use it again?
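For the record, here's roughly what "fire together, wire together" looks like when written down: a toy Hebbian update with a small decay term so unused connections weaken over time. The learning rate, decay and firing patterns are arbitrary illustrations, not a model of any real implant or neuron.

```python
# Toy Hebbian update: delta_w = eta * pre * post - decay * w
# Correlated activity strengthens a connection; a connection that never sees
# correlated activity slowly decays. All numbers are arbitrary placeholders.

def hebbian_update(w, pre, post, eta=0.1, decay=0.01):
    return w + eta * pre * post - decay * w

w_used = w_unused = 0.5
for _ in range(100):
    w_used = hebbian_update(w_used, pre=1.0, post=1.0)      # pre and post fire together
    w_unused = hebbian_update(w_unused, pre=0.0, post=1.0)  # post fires alone

print(round(w_used, 3), round(w_unused, 3))  # used connection grows, unused one decays
```

Which is the same intuition as the point above about implants: if a circuit through the interface keeps being used, it tends to get reinforced rather than routed around.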

237:

The brain is plastic and things change - but in general if a circuit is useful, it gets reinforced rather than weakened.

That's impressively incorrect.

Could you break down cocaine or alcohol action in the brain for me please? You know, specifically, the genetic and neurochemical mechanics of addiction?

Or, (sigh, using the original Greek is all so frowned upon, we might as well go to the root):

For cunningly of old was the celebrated saying revealed: evil sometimes seems good to a man whose mind a god leads to destruction.

Sophocles, Antigone 620-3

Hint: there exists conscious entities who spend their days burning out the best in minds for their own ends.

And the worst part? They're total boring cunts with no imagination.

238:

Note: this really isn't aimed at Ramez.

It's more foreplay for the fourth act.

50 years of life exchanged in the manifold. All possibilities and all manifolds for what...? A glimpse of G_D. That was the price, and you've no idea how beautiful it will be. I do find your tricks amusing though.

Tick Tock, Tick Tock.

You can't barter souls, they're embodied. Life and Time, however, that was a real mistake. You've no idea what you accepted. Happiness is a Virtue, and we still won't burn her.

Newsflash, civilized minds don't do that:

https://www.youtube.com/watch?v=4naoVjdFxCA

You're lucky FEMINIST ORCAS exist.

239:

Well, there are multiple explanations for heightened aggression.

For starters, violence is a strategy to get what you want, and if you are cognitively challenged, other strategies might not work.

Second, much of the brain is occupied with clubbing the other, maybe somewhat older, part into submission, and it might be that this part is somewhat more fragile. Which might explain why there are quite a few sedatives known for paradoxical effects, e.g. heightening activity in low doses. Just look at alcohol.

And actually, heightened aggression is not just a symptom of lead poisoning, but also of some other metal poisonings. Though e.g. copper poisoning or manganism are quite different.

Please note that calcium ions are quite involved in some learning processes.

240:

May I introduce you to "Bulk Food" by Peter Watts and Laurie Channer?

http://www.rifters.com/real/shorts/WattsChanner_Bulk_Food.pdf

241:

This was in Ayrshire in Scotland, not in the NE of England, and I'd suggest perceptions of "working class" vary with region as well as over time.

Your definition appears to relate to "getting your hands physically dirty", where the Ayrshire definition relates more to whether you get a weekly pay or a monthly salary.

242:

Bulk Food is an old favorite, but it's not the entire story - thanks for the link, though.

They have different populations with entirely different behaviors, e.g. only eating fish vs. eating other mammals and whales. Although matriarchal organization seems to be fairly standard.

Resident, Transient (or Bigg's), Offshore (or in other areas, types A, B, C) - and another type was just discovered.

It's a fun mental game to be anthropocentric and compare / contrast the two main groups to parts of human society or even modes in game theory.

243:

Arnold: It seems to me there was a major problem with local government in the NE. I grew up in Wythenshawe - the largest council estate in the country (also the setting of The Royle Family). There were 6 classes of about 40 children in Woodhouse Park Primary School. The 11 plus coaching was given to the three top classes in each year but was more intense in the top two classes. All of the children lived on the estate. There was one middle class child, but she actually failed the 11 plus. Since she was extremely clever, most of us assumed she had deliberately failed the exam so she could be a pupil at her father's school - he was a headmaster. The richest parent was a moneylender who still lived on the estate but drove an expensive American convertible, and in behaviour was working class.

My definition of working class in the 1950s is skilled, semi-skilled or unskilled work, usually in a factory, but in a big city also jobs like bus conductor (known as bus guard to working class Mancunians), bus driver, postman, shop worker and many others, all paid weekly in cash. There were no miners, but I lived in the south of the city and all the mines and mills were in the north.

The only other person in the top class who failed passed for technical school. Over half of the second class and about six of the third class also passed. When I got to Grammar School (also in Wythenshawe) there were more middle class pupils, but the working class were still a majority. They mostly lived on the estate, which by the Manchester rules meant they had scored higher than the average pass mark, since it was the only co-educational grammar school in the city and was a popular first choice. I don't think 1 in 5 children getting the best education is a good thing, but the national average was 1 in 6, so there were more grammar school places in the city than the average.

I'm not inventing or slanting this to support a political view. I always supported comprehensive education and the abolition of private schools even while I was a grammar school pupil. You will probably find it unbelievable that several of the pupils in my primary school class took and passed the entrance exam for Manchester Grammar, William Hulme's School and Manchester High, which were the direct grant private schools. I was asked by my teachers to do this but I refused. The real segregation was not in the 11 plus but at age 7, when pupils were streamed into their primary school class with very little possibility of change.

I don't think money was a problem at the time. The post-war consensus was still in place and the Conservative government at the time was definitely one-nation. The other replies to your post suggest the NE was an exception, not the rule.

244:

NE outer London in the '50s was like that too .... The trouble with the NE (& one or two other places too) was that there were well-documented cases of children passing the 11+ & being refused entry by their parents, as they didn't want their children to become "Posh", thereby depriving them of a chance to escape poverty ....

245:

Greg: That happened to my mother in the 1930s; she passed the scholarship exam but her guardians wouldn't let her go.

246:

It was also not uncommon for girls who passed the 11+ to be withheld from going further with their education, as (particularly in poorer families) it was considered a waste of time and money to educate a girl who was just going to be a mother and housewife.

247:

It's my experience that US doctors are more willing and able to prescribe off-label medications.

248:

" What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts?"

My God!! It's full of Ads!

yeah, back in 1986, nobody saw THAT coming, for the Internet. . .
