
Today's research grazing

Anders Sandberg tackles mind uploading as a way of achieving sustainable environmental living.

Gary Shaffer on using carbon emissions to stave off the next ice age.

New Scientist asks, What is the most remote place on Earth? Surprisingly, roughly 90% of the Earth's surface area lies within 48 hours' ground travel of a major city.

(I am now going to go and lie down while my brain tries to recover from indigestion. Nothing to see here, carry on ...)




Rudy Rucker has suggested that simulations of the environment (for those uploaded minds) wouldn't make sense. I seriously doubt a full-fledged simulation of our minds and planet is going to be more energy-efficient than the real thing.

Now while this musing is part of the oeuvre for answering the Fermi Paradox, I think a sizable fraction of humanity would rather inhabit a real environment than a virtual one. If I had a chance to have my mind uploaded, I would want it to interchangeably reside in various casings, including human simulacra and spacecraft to explore the universe.

Mind you, the approach would solve the New Scientist's question. Live alone in a simulation without being able to be interrupted and you would be in a remote place indeed.

Controlling the ice ages: a question for everyone else. Climate scientists have debunked the idea that "in the 1970s scientists thought the earth was cooling, not warming". But I have a memory of quite a few SF novels and short stories set in a near-future ice age, yet rather fewer in a hot world. Is there any data to support the idea that fiction was biased towards a cooler rather than a hotter world?


Seems to me that estimates of processing power to simulate a human mind must be extreme over-estimates, if running a human mind simulation is supposed to require exawatts or reversible computing or other extreme methods.

The modeling method being assumed - calculating the state of every neuron/synapse at some update rate, I presume - must simply be too flawed to validly base any projections on. Either the proposed brute-force simulation method won't work at all, or, with a modestly clever implementation to avoid unneeded computations, there must be many orders of magnitude of reduction in processing requirements available.

Either way, projections of power consumption based on that model are simply too uncertain to bother with. One would be better off simply using the power consumption of a human brain as a starting point, and assuming a couple of orders of magnitude higher and lower energy consumption.
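That bracketing exercise is easy to make concrete. A minimal sketch, assuming the ~20 W figure commonly quoted for a human brain's metabolic power (the baseline and helper function here are illustrative, not from the comment):

```python
# Back-of-envelope bracket for upload power budgets: start from the
# ~20 W a human brain draws, then spread a couple of orders of
# magnitude either side, as the comment suggests.

BRAIN_WATTS = 20.0  # commonly quoted resting power of a human brain (assumption)

def power_bracket(baseline_w: float, orders: int = 2) -> tuple[float, float]:
    """Return (low, high) estimates spanning +/- `orders` of magnitude."""
    factor = 10.0 ** orders
    return baseline_w / factor, baseline_w * factor

low, high = power_bracket(BRAIN_WATTS)
print(f"Upload power budget: {low:g} W to {high:g} W")  # 0.2 W to 2000 W
```

Even the pessimistic end of that bracket is a few kilowatts per mind, nowhere near the exawatt figures the brute-force projections produce.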


We have an existence proof for human-equivalent cognitive modules that run on an energy budget of roughly 100 watts: they is us.

If we eschew Cartesian dualism/supernatural explanations, and see no mechanism for quantum effects to be involved (unlikely, in my view, because the scale on which quantum effects prevail is several orders of magnitude smaller than that of a synapse), then some kind of classical mechanism is in play.

Now, I don't want to open the whole is-the-brain-a-computer can of worms (here's a whistle-stop tour), but if the simulation argument holds any water at all we're looking at a process that can be mediated by a system weighing 1-2 kg with a volume of 1-2 litres and a power input of 100 watts or less. And a well-designed platform for this process probably wouldn't need to incorporate a full suite of Von Neumann replicators massively ramified into each of on the order of a trillion sub-components, to say the least. Seagulls fly; so do Boeing 737s, but the latter don't lay eggs.


In 1994 Roger Penrose, among other things a physicist, and Stuart Hameroff, an MD, postulated that consciousness may be a quantum phenomenon, with structures in the brain called protein microtubules as the seat of this activity. They are more numerous than neurons and are affected by anesthesia. The principal criticism is by Max Tegmark, who says that quantum decoherence at brain temperatures would happen far too fast to support any calculation or consciousness. Disagreement abounds. The point is that if consciousness is quantum, then getting a state copy to upload is going to be "quite difficult". Green arguments aside, it would be very cool to upload yourself to the nearest node... oh wait!


So, what about the poles, New Scientist? The least accessible place's gotta be the less popular parts of Antarctica. And, they seem to've missed the news that there are now regular trains to their favorite unreachable spot. The opening of those trains got lots of geek press because they were seriously hard and expensive to build.

There's no shortage of simulated neurons on chips with much smaller-than-human feature size, and thus lower energy consumption as well. Of course, you do have to put an asterisk there until we see more supporting simulation results.

And, I notice a certain lack of good supporting *evidence* behind the quantum brain theory, despite a quarter of a century to supply the lack.


Whoops - 15 years, not 25. But still a long time.


Jon, that would be the railway that terminates in Lhasa? About 300 miles east of the spot New Scientist says is a three week trip to Lhasa? Dunno, but that sounds reasonable to me, even a little optimistic, considering the terrain.


Bill B @4:

I remember reading Penrose's book - up to a point. The little demonstration of how human mathematicians cannot be using a "knowably sound algorithm" to comprehend Gödel's incompleteness theorem was neat, but why do the meatbrain algorithms we're running have to be "knowably sound"? Was there any discussion of this?

But just to clarify: "protein microtubules" are present in every cell in the body, not just neurons. They are fundamental components of the eukaryotic cytoskeleton, involved in a whole range of processes. What makes microtubules in neurons any different? Did Penrose et al offer any suggestions?


I've not read anything about 'quantum brain theory' but to me it sounds like a roundabout way to try to prove the existence of the soul and from there...

I'm an atheist so I could be biased but I don't believe in god _or_ the soul. Brains seem to be fairly crude mechanisms that produce complex results. There are numerous examples of this, simple rules that produce complex results.

Trying to find a complex rule that explains a complex result is usually the wrong way to go; see Occam's razor.

What people perceive of as 'I' seems to be mostly an illusion; if you start to ask yourself what 'I' is, you invariably end up breaking it down into numerous bits and bobs, none of which is 'I'. 'I' is the whole, not the parts.

Wish I could write better so that I could explain this properly.


If I recall correctly - from my underqualified experience - it's not simulating the human brain that sucks all the energy, it's simulating a shared environment to a degree of accuracy similar to the real world.


I'm with serraphin @10 on this one. The power consumption lies in simulating the mass/energy/momentum variations around the simulated being. Theoretically one could get that down to synaptic stimulus simulation, but it's going to be awfully tough to design the computer programs that maintain coherence of synaptic stimuli because to my knowledge our longitudinal studies on healthy people run to hours and suffer from being either high level or highly targeted. Depending on how much credence you give David Hume and JS Mill, that might not matter to uploaded infants, but anyone with actual experience could be in real trouble. Especially if the response synapses are buggy, too.

But thanks for taking me away from the drudgery of diffusion models.

Also, on the ice age stuff: geoengineering is powerful magic bordering on hubris. As the iron-seeding experiments showed, the world is more complicated than our models. Newtonian gravity models are good enough for bridges, but not for Mercury's orbit. I've seen nothing to indicate that our climate models are good enough to undertake intentionally trying to change the world.


Bill B @4: have you read Penrose? (Great writer, but I had a throw-book-at-wall moment part way through the second book, "Shadows of the Mind", when he airily decreed that one need not define or understand "consciousness" in order to prove that A Machine Can't Do It.)

Oh, and I don't think much of the quantum gravity/microtubule woo-woo either. (Penrose is rather better informed at mathematics and physics than he is at cytology and neurobiology.) Microtubules are indeed ubiquitous, and their disruption by anaesthesia agents should be no surprise -- they're also IIRC essential to neurotransmitter vesicle transport within synapses, so anything which disrupts their normal activity is going to have a marked effect on neural transmission.


Charlie: your memory serves you well. Neurotransmitters are loaded into synaptic vesicles which are transported to the synapse-proximal membrane in a kinesin-dependent manner - kinesin being the motor which scuttles along microtubules to drag stuff around the cell. Microtubules are implicated in various other vesicular transport processes around the cell, but not all such transport is microtubule-dependent.


I wonder what inspired Gary Shaffer's considerations on fossil fuel emissions as an offset for glaciation; so many parties are concerned with and interested in the problem. Changing the intrinsic value of the reserves from energy to climate regulation is interesting, though. But whatever the scale, any problem is just a matter of energy management, and it is far easier to add energy to a system than to extract it. It is questionable whether further manipulation of a system so vast and with such inertia is wise, especially since other developments in energy production are close to emergence.


Anyone read "Why Minds are Not Like Computers" by Ari N. Schulman in New Atlantis? (see http://www.thenewatlantis.com/publications/why-minds-are-not-like-computers) He has a good argument against the assumption that the mind is anything analogous to computer software (and by implication, against the idea that the mind can somehow be "downloaded"):

People who believe that the mind can be replicated on a computer tend to explain the mind in terms of a computer. When theorizing about the mind, especially to outsiders but also to one another, defenders of artificial intelligence (AI) often rely on computational concepts. They regularly describe the mind and brain as the “software and hardware” of thinking, the mind as a “pattern” and the brain as a “substrate,” senses as “inputs” and behaviors as “outputs,” neurons as “processing units” and synapses as “circuitry,” to give just a few common examples.

Those who employ this analogy tend to do so with casual presumption. They rarely justify it by reference to the actual workings of computers, and they misuse and abuse terms that have clear and established definitions in computer science—established not merely because they are well understood, but because they in fact are products of human engineering. An examination of what this usage means and whether it is correct reveals a great deal about the history and present state of artificial intelligence research. And it highlights the aspirations of some of the luminaries of AI—researchers, writers, and advocates for whom the metaphor of mind-as-machine is dogma rather than discipline.

When all is said and done, the only intellectually honest position on mind and consciousness is that of the New Mysterians who believe that the Mind cannot be explained (even in principle) in materialistic terms. This is not to say that the Mind is immaterial (an expression of the Soul), only that we will never have the tools or the ability to explain the Self materially.



Seagulls and 747s both fly, but I have a suspicion that an autonomous drone that flies like a seagull would probably be more expensive than a 747 and significantly more complex. I'll bet that the energy/resource consumption of the autonomous drone (required for both fabrication and function) will be greater than that of the seagull. And if I'm going to upload, I want the simulated brain to be at least as good as the meat brain, not like some clunky ornithopter that real seagulls can fly rings around.

If classical mechanism explains brain function, then we know for sure that it is possible to build a human-equivalent brain from the same stuff that our brains are made of, but that's all we know. I don't think we yet know enough about subtle variations in neuronal function to begin to guess how much processing power would be required to simulate a brain in silico, or how much energy a brain simulation on some platform other than meat would consume. Simulating the details of an actually existing mind, rather than just some generic human-equivalent brain, would be a whole 'nother kettle of fish.


Apparently we have been altering the climate since long before the industrial age with its CO2 emissions. In fact, we've been keeping the next ice age at bay since the start of agriculture (http://www.cbsnews.com/stories/2003/09/23/tech/main574644.shtml):

Measurements of ancient air bubbles trapped in Antarctic ice suggest humans have been changing the global climate thousands of years before the industrial revolution. Beginning 8,000 years ago, atmospheric levels of carbon dioxide began to rise as humans started clearing forests, planting crops and raising livestock, a scientist said Tuesday. Methane levels started increasing 3,000 years later. The combined increases of the two greenhouse gases implicated in global warming were slow but steady and staved off what should have been a period of significant natural cooling, said Bill Ruddiman, emeritus professor at the University of Virginia.

If given a choice between man-made global warming and naturally occurring global cooling, I'd rather see the Earth heat up. Compared to the spread of tropical forests and longer growing seasons for higher latitudes, the spread of glaciers (as opposed to their melting) can be pretty nasty. Civilization can survive warmer summers and even rising sea levels, but it can't survive ice sheets miles thick covering North America and Eurasia. And all that ice locks up the Earth's fresh water in an unusable solid state, turning the rest of the planet into deserts. The Amazon becomes a small forest under this scenario. Biodiversity collapses when half the planet gets covered in an ecology hostile to complicated food chains and diverse life forms.


Bernie @ 8:
The little demonstration of how human mathematicians cannot be using a "knowably sound algorithm" to comprehend Gödel's incompleteness theorem was neat, but why do the meatbrain algorithms we're running have to be "knowably sound"? Was there any discussion of this?

Daniel Dennett has a good discussion of just this point in Darwin's Dangerous Idea: it's quite possible to argue that human mathematicians are running imperfect and inconsistent -- but nonetheless generally useful -- algorithms when they do math.

(The fact that most if not all mathematicians occasionally make mistakes is consistent with this. If mathematicians really never made mistakes, then Penrose might have more of an argument.)

Dennett offers an analogy. Unlike the case for mathematics, there is a perfect and complete algorithm for playing chess: simply evaluate every legal move, and every legal countermove to each move, and so forth, until you reach a state of win, lose, or draw for each possible branch. Unfortunately, this is one of those algorithms that takes longer than the age of the universe to run on a realistic computer. So real chess-playing computers have to use imperfect, inconsistent, cobbled-together algorithms to play chess. And yet they can often play chess better than humans.
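The "perfect" algorithm Dennett describes is easy to sketch: exhaustively evaluate every legal move down to a terminal win/lose/draw. Chess is intractable this way, so this toy applies the same idea to a trivially small game (take 1 or 2 objects from a pile; whoever takes the last object wins) -- the game is a stand-in of mine, not anything from Dennett:

```python
# Exhaustive game-tree evaluation: try every legal move, recursing until
# a terminal position, exactly as described above -- feasible only because
# this toy game's tree is tiny.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(pile: int) -> int:
    """Return +1 if the player to move wins with perfect play, -1 otherwise."""
    if pile == 0:
        return -1  # no moves left: the previous player took the last object
    # A move is winning if it leaves the opponent in a losing position.
    return max(-best_outcome(pile - take) for take in (1, 2) if take <= pile)

print(best_outcome(3))  # -1: piles that are multiples of 3 are lost
print(best_outcome(4))  # +1: any other pile is won by the player to move
```

The cache makes the toy fast, but the tree still grows exponentially with the game's branching factor, which is exactly why real chess engines fall back on the imperfect heuristics Peter mentions.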


How could you get everyone to upload themselves when Dualism is so ingrained in our culture!? You'd have to hold a gun to my head to get me into a transporter! Besides, if we can't trust the suits with our money what about our "SOULS"?


Bernie @8 and Charlie @12, my apologies, I have not read the two source books by Penrose and Hameroff. Having read A LOT of science fiction I am pretty sure that is where I first learned of the theory. I brought it up to generate conversation. The main objection to the quantum consciousness theory is that the environment of the brain, primarily heat, is antithetical to quantum processes. There is some back and forth on that, with Hameroff and others claiming mistakes by Tegmark and gel protection of the microtubules extending quantum coherence (info retrieval) to the needed time frame. I don't know if there are any differences between the microtubules in the various macro structures. One reference I saw pointed out that paramecia have neither brains nor neurons, but they do have microtubules, and they can learn, and they can change and retain behavior patterns. The quantum consciousness theory is not proved or disproved and there are a lot of people working on it and publishing papers. I do admit that a lot of this info is from Hameroff's web site www.quantumconsciousness.org. I don't think that quantum consciousness invalidates uploading to computers.


doowop@15, I don't suppose you'd care to prove your assertion "When all is said and done, the only intellectually honest position on mind and consciousness is that of the New Mysterians who believe that the Mind cannot be explained (even in principle) in materialistic terms."?

Do please note that any proof that consists of something more than "because I say so" handwaving is likely to consist of materialistic terminology.


I'm not a dualist, but I currently don't care much for the upload idea. I am a programmer though, and I know how fragile software is. I currently live in a physical body with a number of failure modes, most of which are avoidable to an extent.

I can easily be killed by things I'm not expecting. I go climbing regularly, and accept a degree of risk there too. Fortunately for you lot most of the things that can kill me are unlikely to wipe out the rest of the human race at the same time. The same can't be said of common mode failures in brain simulators unfortunately.

This is all just a set of excuses though. The fact is that I quite enjoy life, and the fact that it might be snuffed out for no reason as I go to work tomorrow is something that motivates me to get things done. I don't want it to happen, but at least I might be lucky enough to see it coming.

The idea of spending eternity with hordes of uploaded geriatrics fills me with horror too.


dave b@22: So, what's your alternative? Physical immortality in meat bodies? (In which case you just get to spend eternity with hordes of physically fit geriatrics instead.) Death? (Count me out, thanks.)


I'd settle for an extra 20 or more years of 'youth'; immortality is overrated - but a great idea for fiction!


Peter (18): Of course! *slaps head*. I'd read Dennett's book and completely forgotten about that discussion, despite going around telling people (in my capacity as a working biochemist) that "Darwin's Dangerous Idea" is a brilliant introduction to biology that beats the pants off a lot of "actual" biology introductory texts...

Bill B (20): It's a cute idea, certainly. But as for paramecium: it's clear that microorganisms can show adaptive behaviours as a result of "simple" gene-regulatory switches ("genetic circuits", as it were) - there's no need to invoke spooky quantum weirdness. I think the same is likely to go for brain function but that ain't my area of expertise.


Alex @1: Certain mountain ranges accumulated a fairly prodigious load of snow in the 20-odd years leading up to the early 70s. People in a position to study such things did begin to talk of ice ages (one of them was Stephen Schneider, who now refuses to deal in trends of less than 25 years; if you ever get the chance to see him talk, go along, he's a seriously clever guy and speaks well). Said mountain ranges then proceeded to lose snow even faster...

About the iron enrichment thing: it's rubbish. I saw a talk by a guy who took part in the "Southern Ocean Iron Enrichment Project", which is exactly what it sounds like. They were able to produce a local planktonic bloom, but he made the point pretty firmly that it could only be local. Once you enrich algae with iron, the next most limiting factor is things like silica, which they need in much greater quantities. Bear in mind that the surface water that flows up from Antarctica then ends up in the tropics, so sucking all the silica and other plant macronutrients out of it en route would probably be a Bad Idea with Unknown Consequences.

Global climate braindump brought to you by the republic of time-for-bed...


Jon R@22:

I'm happy being mortal, although I wouldn't mind good health until the end. Choosing when I go would be nice, but unrealistic.

I just don't see anything that simulated immortality can offer me that I want.


If given a choice between man-made global warming and naturally occurring global cooling, I'd rather see the Earth heat up. Compared to the spread of tropical forests and longer growing seasons for higher latitudes, the spread of glaciers (as opposed to their melting) can be pretty nasty.

If it was going to be balmy and lush I don't think we'd have a problem with global warming. Try extensive desertification (droughts) and extreme weather events. Civilisation will adapt, but not at present numbers.


Alex Tolley @ 1:
But I have a memory of quite a few SF novels and short stories set in a near-future ice age, yet rather fewer in a hot world. Is there any data to support the idea that fiction was biased towards a cooler rather than a hotter world?

I can remember at least one story from Analog in the late 1970s or early 1980s that took place in a future where the ice caps had melted due to global warming...

I'd hypothesize that if there was a tendency for fiction in the 1970s to emphasize ice ages (I don't know if there actually was), then it might be because the simplistic public-science understanding at the time seemed to suggest that another one was coming soon. The crude picture was that you had glaciations ("ice ages") lasting ~ 100,000 years, followed by 10-15,000 years of interglacial warmth. Since the last glaciation ended about 12,000 years ago, the simple inference was that the next one should start soon.

The reality is that this model is much too simplistic. For one thing, the data now show that this ~100,000-year cycling has only been in place for the last 400,000 years or so; earlier glacial- and inter-glacial periods were different. More importantly, understanding of the driving mechanisms has improved, to the point that one can understand why the periodicity changed, and thus predict whether it will continue in the future or not. The best estimates now seem to be that -- in the absence of anthropogenic changes -- the next glaciation isn't due for another 50,000 years.

(I do have a memory that North America experienced some rather cold winters in the mid-70s, and this may have led the US press to focus on "could we be heading into an ice age?" stories. Not that scientists had any sort of consensus on this; more a matter of newspaper and magazine editors saying, "Gosh, what a cold winter! Go write some stories on this topic!")


The NS thing on connectedness has other faults.
Train speeds (even in Britain) are faster than 1.5 minutes for 1 km = 40 km/h = 25 mph, which is about right for an all-stations suburban service or a tube train. The fastest "normal" trains in this country can get from London to Peterborough, 76 miles, in 46 minutes: 99+ mph = 159 km/h.
It's out-of-date, too: not only does it not show the Lhasa railway, it doesn't show the Alice-Darwin extension in Aus....


Oops, you didn't realise it Charlie, but you've jumped the gun!
Here is a report, dated Wednesday, saying that a simulated brain is closer to thought and could be done within 10 years.
IF the money is available......


Imagine the politics of a world where everyone's uploaded-- when you can multiply your own software to the extent that your resources allow, democracy starts looking a bit nonfunctional.

"The "Me Party" is now a majority. And we vote no."


C: we have a term for people who multiply like that -- we call them "spammers".

And I think we know what to do with them.


Genocide, in this case?

Considering that the self-spammer'd surely put just enough variation between copies for each to qualify as a separate person under whatever laws are there...


Charlie @ 33: or a more biologically-based term: "cancer".

Re "Quantum Consciousness": This is a theory looking for a hypothesis (sound backwards?). I read the original Penrose book, and was underwhelmed. Penrose presented absolutely no reason to suppose that the physical mechanism he postulated existed: he clearly created the theory to have some sort of "scientific" basis for non-physical consciousness (a soul by any other name ...).

And in the years since then, there has still not been a single bit of experimental evidence, or even well-grounded computer simulation, to support the existence of such a mechanism. And even if it existed, there's nothing to indicate that it would support some different form of consciousness than could be supported by classical theory.


Since this seems to have turned into the open thread, Jacqui "Not a Stalinist" Smith has announced her plans to monitor every Internet contact between everybody and give all sorts of UK government departments and entities access to it. The Tories claim victory because she's reluctantly said the government will not keep all of it in a database forever.

That's no obstacle, of course, once it's in a database - you might just keep forgetting to run the database purge, and anyway a sound backup policy surely dictates keeping backups indefinitely....


@36 ... and it is CHEAPER and SIMPLER than ID Cards, which look as if the cost is finally going to sink the project (we hope)