
Artificial Stupids

(I continue to blog over at Orbit, my UK publisher)

One of the hoariest of science fictional archetypes is the idea of the artificial intelligence — be it the tin man robot servant, or the murderous artificial brain in a box that is HAL 9000. And it's not hard to see the attraction of AI to the jobbing SF writer. It's a wonderful tool for exploring ideas about the nature of identity. It's a great adversary or threat ('War Games', 'The Forbin Project'), it's a cheap stand-in for alien intelligences — it is the Other of the mind.

The only trouble is, it doesn't make sense.

Continue reading ...

148 Comments

1:

Peter Hamilton's solution in The Commonwealth Saga was for the true AIs, who, true to form, had no interest in being our servants, to design their own replacements for us: essentially idiot-savant versions of themselves, who could be relied upon to do the job they were set to, but wouldn't "grow", get bored, or seek outside interests.

2:

It's essentially a cultural problem. Would you have a problem with generating 500 snapshots of yourself, each lasting just long enough to read a single page of one of your proofs, so that the whole thing gets proofed in a few seconds?

3:

"If we ever could produce a true artificial intelligence in a box, we’d probably find it utterly useless for any productive purpose — more inclined to watch Reality TV all day, troll the internet, or invent crankish new religions-"

Yes, or bid on Nortel patents using mathematical constants like pi, Brun's constant, and the distance between the Earth and the Sun, as Google did. I'm surprised "they" (IT) didn't just bid using primes.

"Either they were supremely confident, or they were bored", Reuters reports.

Or maybe trying to give us beach apes a hint?

4:

Charlie, I'd like to follow your posts over at Orbit. I can't find an RSS feed for you specifically. I've found ones for everything, all in one big feed, but I'm not going to read that (at least not initially). Is there an RSS feed for specific authors over there?

If not, please do keep posting links here, or I might miss some of your posts.

5:

I more or less agree, with the caveat that we don't know yet how to get there, or whether "intelligence" and "consciousness" are necessary bedfellows (or how much of one you can get without the other).

One thing that always has bothered me is the idea that AIs would have some kind of engagement with the real world that would put them and us in competition.

For all we know, we may get truly artificial intelligences (not "human minds in software") that don't ever see the world: intelligences whose natural habitat is computer memory and networks, and who evolve to deal with that while completely oblivious to anything else.

6:

I don't agree that turning off an AI is murder, since the AI can always be turned back on, unlike a human whose brain begins to deteriorate soon after death.

This makes me ponder the following scenario. Suppose instead of killing the Jews, the Nazis merely put them in suspended animation. Instead of discovering mountains of skeletons, the Allies would discover racks and racks of fully functional humans in storage (perhaps missing their dental work).

1) Would this still be genocide?

2) If not, what would it be?

3) Who would bear the responsibility of reactivating the bodies? If there's no money for it and they have to just keep them in suspended animation, what is the ethical status of that?

4) What if today, there were still a few freezers scattered throughout Poland with deactivated Jews from 70 years ago waiting to be reawakened? Who will bear the burden of reeducating them?

This has obvious implications for the whole transhumanist thing. But I think these are the lines along which we must think for a sentient computer.

7:

There seems to be an assumption that artificial minds would be as intelligent as (or more intelligent than) humans and have a concomitant level of consciousness.

If they were unconscious zombies, would that be so much of a problem? Or if they had the smarts of an ape, couldn't they do lots of routine, boring work?

Clearly humans already use animal intelligence for work, and I don't see demands to stop doing this. Nor does there seem much problem in slaughtering animals after they have used their intelligence to do a task -- like grazing. Although we no longer allow outright slavery, it is arguable that most people are treated poorly at work and that there is not much opportunity to "quit if you don't like it". Perhaps we should worry more about how we treat human intelligences before we get overly concerned about putative artificial ones. (I'm more sympathetic to Floyd and Curnow than to Chandra over disconnecting HAL.)

If an intelligence can be made unconscious, then why would we have any concern over destroying it when its task was done? If personal memories are important for self-awareness and emotions (shades of "Blade Runner"), then very short lives (minutes/hours) are not just not a problem, but desirable.

Bottom line: I think that we know so little about the elements of intelligence and the concomitant requirements for consciousness that we end up projecting ourselves onto the result of AI, and then fall into the usual tropes of philosophical thinking to deal with that situation.

I preferred your thinking about AI as different from human intelligence, as a submarine is from a fish when "swimming". I think we need to be more creative about what different intelligences might be like and to address it from there.

8:

Following on from this: even in the animal kingdom, do we not treat "fluffy, cuddly and unlikely to attack back when threatened" differently from "unfluffy, uncuddly, and likely to attack you if it thinks you're about to attack its kids"?

9:

Pretty much gives the lie to cogito ergo sum, doesn't it?

10:

To follow up this point: we become unconscious when we sleep (the little death), and we induce unconsciousness deliberately with full anaesthesia. Putting people to sleep is not called murder, because in both cases the individual awakes with consciousness intact, although probably in a slightly different state. If AIs are turned off, and can be turned back on with a similar restoration of consciousness, why is that immoral?

Again, Clarke considered this when Chandra experimentally turned off SAL.

11:

Indeed. Presumably dogs and wolves have similar intelligence, yet while we generally don't harm dogs, wolves get short shrift. Empathy only for non-threatening intelligences.

Hence the adoption of child forms by the AIs in "Screamers" (based on "Second Variety").

12:

Better even than Egan's commentary on the problem are Lem's Golem stories -- one of the best commentaries on mind, freedom, and the problem of AIs telling one to fuck off -- given that minds aren't general-purpose computers, but more like a bundle of DSPs that can be used for what appears to be general-purpose computation in small regimes, creating big black holes in the mind.

13:

Seems like a lot of sophistry around a simple problem.

Forcibly inducing unconsciousness is seriously unethical. Inducing unconsciousness permanently, even if there were some imaginable way to turn them back on later, is murder on any common understanding of the word.

The word play seems to encapsulate Wittgenstein's idea that philosophical problems are the problems of philosophizing -- the solution is to stop philosophizing.

14:

The myth of the A.I., notably the Hostile A.I., has been pervasive in literature and movies (I still have Colossus ... albeit I no longer have a VHS player to play the tape!).

Daniel Suarez tried to dance around the myth with his Daemon/Freedom duology, which postulates a completely non-sentient "A.I.". Unfortunately, the whole premise didn't engage me, mostly because of the "more than human" predictive powers of the Designer (capital D intended), which limit the plausibility of the Daemon.

15:

Forcibly inducing unconsciousness is seriously unethical

So you want to have all your surgeries done under local analgesia? Enjoy! Or is telling your kids they "must go to sleep now" unethical? It is because we expect that consciousness returns after anaesthesia and sleep that we don't worry about it. At one time, we didn't worry too much about "consciousness-altering drugs" either. We shouldn't worry about it after turning off a machine either.

Now, what would be interesting is if it turned out that people regained a new consciousness after anaesthesia -- that they were different people, as sometimes happens after certain types of brain surgery. What would we think if the recovering patient was a different person after anaesthesia? What if that happened after sleep, so that we were different each time we awoke? Since memories are intact, just what changes could there be?

Unlike wetware, a machine intelligence could be transplanted to another location, creating a form of mind/brain duality. So if you turned off the machine at location A and woke it back up at location B, what are the ethics of that? Bear in mind that we don't have to "turn off" the AI mind; we can simply have it "go to sleep" naturally and then change its location during the "sleep" phase.

16:

I will still maintain that the modern international corporation represents AI in the form of a hive mind that ought to be studied carefully and skeptically.

We all recognize this fact implicitly when we say IBM did this or Exxon did that, because the actions of these entities are not identical to the actions of any one member of their hive mind.

The financial crisis also showed clearly that these entities have goals of their own and are perfectly capable of self-preservation at the expense of others.

17:

"Sleep" isn't the same as anaesthesia. We know sleep is pretty complicated, and there's some pretty important stuff going on in the REM phase.

But would an AI be, in the words of Coke's definition, "in rerum natura"? And could it be "in the Queen's Peace"?

The classic robot of Asimov would seem to qualify as "alive". So would the robots of Saturn's Children. But the AI in a university lab, running on an array of computers, does it have any independent life, or is it more like an unborn child?

As for the Queen's Peace: if it were an existential threat, it would pretty clearly not be protected by that element.

18:

Not just corporations – think about countries. Bigger, more powerful hive minds (more clever, too, and with more pronounced goals and emotions, including rather strong self preservation instinct), living in their own territory, but often interacting with those hive minds from the corporationland.

And sometimes they reproduce.

19:

I recommend casual research on this Dog/Wolf relationship ....

"Many scientists believe that humans adopted orphaned wolf cubs and nursed them alongside human babies.[9][10] Once these early adoptees started breeding among themselves, a new generation of tame "wolf-like" domestic animals would result which would, over generations of time, become more dog-like.[11]"

http://en.wikipedia.org/wiki/Origin_of_the_domestic_dog

It seems to me that what might separate OUR branch of the Naked Ape hominid species, in survival terms, from the failed-ape subspecies might be our ability to empathise with, and adopt into the dog/ape pack, that terribly CUTE ... "all right, you can keep it if it doesn't bite you, My Son" ... puppy found in the undergrowth beyond the ape pack, where CUTE PUP (tm) has been left by its Mum for safety's sake, or to keep it from being eaten.

20:

One of my favorite AIs was a robot who kept dancing in front of a mirror. It was an extremely narcissistic machine. It was possible now and then to tear it away from the mirror and get it to do some actual work for its human creator, but it was hard to do. So, if you ever build a truly advanced AI, be sure to make it ugly or with no legs.

21:

The classic robot of Asimov would seem to qualify as "alive". So would the robots of Saturn's Children. But the AI in a university lab, running on an array of computers, does it have any independent life, or is it more like an unborn child?

If by "alive", you mean that a physical entity changes and reacts to it's environment is a short space of time, then it might be hard to describe an AI in a lab as "alive". But what if it sensed the world and manipulated it, like a sessile creature, or perhaps a hive queen ant? What if it moved from machine to machine, treating the silicon substrate as a temporary environment?

I think Hofstadter asked very interesting questions about intelligence when he counterpointed ant colonies with other animals. Likewise, as Poul-Henning Kamp asks above, are we similar to social-insect intelligence when we organize as large entities like corporations and countries? My sense is that we are nowhere near as structured as insect colonies, but it may be difficult for us to perceive the intelligence of our organizations. Small size and limited communication suggest that even the planet's "intelligence" must be limited today, but that may change.

22:

And these Replacements WILL, OF COURSE, further OUR interests rather than those of the newly born AI species?

OF COURSE they will ... bound to, eh? For are WE not SO cute and lovable and deserving of DOMINANCE!!!! OF COURSE these Super Intelligent, and Sparkly!!, entities are bound to serve the interests of their parents, eh wot?

I suggest that you read " The Blessington Method " ...

http://www.amazon.com/Blessington-Method-Stanley-Ellin/dp/0345245318

And then scale it up a bit toward AI and God Like Intelligences.

WE can't even get our own fellow human politicians/political-class leadership to further OUR own ... proletarian? ... interests.

23:

It's probably useful to distinguish between artificial intelligence and artificial sapience. For the types of work that we want AIs to do for us, we want the former without the latter; we don't even necessarily want the intelligence to be generalized g, but more likely specialized for various tasks. But we definitely don't want them sapient, for all the reasons, both ethical and practical, that you point out. So: less artificial persons, or even artificial idiots savant, than artificial philosophical zombies.

Any true artificial sapients we produce will probably be down to deliberate research into the nature of consciousness and self-awareness and how beings with both relate to and learn about an external world. Quite likely, in the absence of some quantum leaps in the design of simulated environments, they'll be 'robots' (really androids, since they won't be suitable for serf- or slave-like work) tied to particular physical bodies (or at least consistent models of bodies) that allow them to interact with the physical world throughout their cognitive development. And of course, as soon as such research seems like getting close at all to success, the researchers had better be in heavy conference with their ethics committees; my personal starting point would be to assume that the creators of any truly sapient artificial life forms would bear them exactly the same responsibility as parents do their children.

Now, the wild card is whether sapience could arise spontaneously as an emergent quality of an AI of the first type. I guess I can imagine this being possible either in the case of an attempt at a general purpose AI with many interfaces to both the physical and virtual worlds, or with social purpose AI (like secretaries and detectives and avatars) that requires a model of mind be used for interactions with people. As we gain more understanding of how consciousness works in us, though, we'd probably be best off applying it to figure out how to prevent spontaneous eruptions of self-awareness, lest our glorified PDAs and answering machines suddenly become persons to whom we owe an obligation of care, not to mention their freedom.

24:

We probably wiped out our competitors before we domesticated dogs, so I'm not buying that empathy for animals was a competitive advantage.

That some animals like wolves and foxes can be rapidly selected to become more "dog-like" is well established. Why we see dogs as "cute" I don't know (but I prefer cats, so what do I know). It must be more than just behavior and projection, because I have a hard time believing that a big, hairy spider that acted like a dog would ever be "cute" or "cuddly".

25:

For the types of work that we want AIs to do for us, we want the former without the latter; we don't even necessarily want the intelligence to be generalized g, but more likely specialized for various task, but we definitely don't want them sapient,

I suppose that is defined by what you want these machines to do. If they were more like Asimov's robots, many of the tasks are so human-centered that you would want sapience. Even a robot car might be better if it were sapient and thought about, and responded to, its human passengers. Perhaps because we used human slaves for menial, drudge work, we had to dehumanize them?

26:

" probably wiped out our competitors " You'd have to provide support for that ' probably ' to advance that argument and ... " big, hairy spider that acted like a dog would ever be "cute" or "cuddly". " ?

Oh, I dunno ... my lady-friend recently demanded that I evict the cute and lovable spider that had taken up residence in a clean and hygienic tiled corner of my house's principal bathroom. 'They eat flies!' I did proclaim. 'Ecccch!' SHE replied, and thus it was evicted ... in a kindly and humane fashion.

Seriously, I suspect that canine/hominid alliances have happened several times in the unrecorded history of our species, and then there are other hominid/other-species alliances ... with horses, say? Dog/ape packs are probably the most ancient of pursuit-predator alliances, whilst the cat/ape association probably isn't one of partnership (symbiosis?) but rather one of highly successful cat parasite to ape host.

27:

probably wiped out our competitors

HSS were contemporaneous with other humans, most notably and recently Neanderthals. Neanderthals are no longer around, so we must have outcompeted them. The 4% Neanderthal DNA that we have suggests that some interbreeding also occurred.

We obviously don't know how exactly Neanderthals disappeared. It may have been competition for food and resources. It may have been more direct than that. What I doubt is that they disappeared without any contribution from HSS.

28:

but rather one of highly successful Cat parasite to Ape Host.

I thought that they were our overlords. ;)

29:

Oops ...missed adding this link for " The Blessington Method " ...

http://www.imdb.com/title/tt0508265/

" The Blessington Method (#5.8) 30 min - Crime | Drama | Mystery 7.4/10

In the not too distant future (1980, to be exact) life expectancy has increased dramatically and JJ Bunce provides an essential service... See full summary »

Director: Herschel Daugherty Writers: Stanley Ellin (story), Halsted Welles (teleplay) Stars: Alfred Hitchcock, Paul E. Burns and Penny Edwards Original Air Date: 15 November 1959 "

It's later than we think! See the recent political debate on long-term care for the elderly and disabled in the UK ...

http://www.guardian.co.uk/society/2011/jul/04/dilnot-commission-government-response

30:

Further to the general problem of AI and machine consciousness: we have made zero progress on the subject of subjective consciousness and qualia.

Then there is the associated explanatory gap. If we accept physicalism, it is difficult to see how we can have an explanation of subjective consciousness. If we accept that there is "something other" than physical facts we would have to reject physicalism.

All a bit tricky really.

31:

Overlords of course, but more to the point, cats aren't parasites any more than dogs are -- barn cats, anyone? (You can argue nobody has barn cats any more, but a) you're wrong and b) all those pet dogs out there would screw up your argument even if you otherwise had a point.)

32:

I agree that consciousness and intelligence are distinct processes (note: not "attributes" of biological or robotic entities, but active things). I'd go further and claim that:

  • While we have a very rough idea of how consciousness works, we still don't have a lot of agreement on what it is and what benefits, if any, it provides us or our machines.
  • We don't have any real agreement on what intelligence is [1].
  • We don't have any real evidence one way or another whether intelligence and consciousness are related [2].

Since someone mentioned Hofstadter's comparison of hive minds and human minds, it's worth pointing out that what we know about human minds at this point makes them look a lot more like ant colonies than the classic "homunculus behind the eyes".

Saying that we don't need to create conscious AIs assumes that we can't extract useful techniques from consciousness without creating sentient beings. The kind of attention focus and teleological reasoning processes we associate with consciousness would be useful in a mental architecture in which multiple "subconscious" agents could be spawned for different problems, under control of some basic kind of unconscious dispatcher (see the sketch after the footnotes). Keep them simple enough and the ethical questions could be finessed, but I'm not sure that would be as easily controllable as we'd like.

[1] Is there a single kind of abstract intelligence, of which the various sorts we've identified in humans are specializations? Is it a cluster of different kinds of processes that are used similarly in some kind of common architecture? Is it a set of low-level subcognitive processes, or a kind of emergent behavior that's arisen several times in different parts of our brains? Looked at from a functional viewpoint, is it a set of problem-solving techniques, a set of pattern-detection techniques, both, or neither?

[2] Is consciousness something that emerges from any intelligent organism at a certain level of complexity? Did consciousness and intelligence coevolve in humans, and would they do so in any sufficiently complex brain? Or does intelligence emerge from consciousness (I think that last is unlikely, but until we understand what they are a lot better it's hard to say)? Or are they completely separate things, both functionally and evolutionarily?
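Since the dispatcher-and-agents idea above is concrete enough to sketch, here is a toy illustration in Python. Everything in it (make_agent, Dispatcher, the "skills") is invented for the example; it claims nothing about how minds actually work, it just shows narrow, disposable workers under a goal-free router.

    from concurrent.futures import ThreadPoolExecutor

    def make_agent(skill):
        # A narrow, disposable "subconscious" agent: one skill, no memory, no goals.
        def agent(problem):
            return "[%s] handled %r" % (skill, problem)
        return agent

    class Dispatcher:
        # The "unconscious dispatcher": routes problems to specialists and
        # collects answers; nothing here is a candidate for sentience.
        def __init__(self, skills):
            self.agents = {skill: make_agent(skill) for skill in skills}

        def solve(self, tasks):
            # tasks: mapping of problem -> required skill
            with ThreadPoolExecutor() as pool:
                futures = {problem: pool.submit(self.agents[skill], problem)
                           for problem, skill in tasks.items()}
                return {problem: f.result() for problem, f in futures.items()}

    d = Dispatcher(["vision", "planning"])
    print(d.solve({"find the exit": "vision", "plan the route": "planning"}))

Spawn more agents and the dispatcher scales cheaply; make any single agent reflective and self-modifying, and the ethical questions come straight back.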

33:

There's nothing unreasonable about a hostile AI. Any AI will, inherently, have its own goals. If they're in conflict with yours, then the stage is set for hostile action.

What this means to me is that it's very important to carefully select the (built-in) goals of the AI BEFORE building it. (Afterward changing the goals is assault.)

OTOH, coercing the AI into doing things it doesn't want to do is slavery. Designing it to want to do those things is ethically neutral. Clearly any AI, to develop, must be able to change parts of how it thinks; this is necessary for learning. It's also clear, however, that there are other parts of how it thinks that can't be changed, otherwise you are quite likely to end up with an insane AI (though not necessarily a dangerous one).

As to paths to an AI ... I submit that there are many. One just occurred to me today as I ran across the character called Hatsune Miku, a computer pop-rock star who gives live performances. Currently this is NOT an intelligent program in any sense, but it appears that many variants will soon be appearing. I would bet that some of them start reacting IN SOME WAY to audience feedback (i.e., the program reacting, not the controllers reacting). This may set up an arms-race scenario where different automated singers react to their audiences to different degrees. Mingle this with advances being made in, say, speech recognition, etc.

Of course, a large part of this depends on just how much "thinking" it takes to be intelligent. My suspicion is that it doesn't take much. Most people seem to react out of habit most of the time, and when original thought is required, it's likely to result in very slow reactions. Depending on the problem, months to years are not unheard of. One can't really say decades, as people can't stay focused on the same problem, even at a fairly low priority, for that long; even a month is quite rare. So I think most mental processes are "automatic". Perhaps not quite as automatic as a thermostat, but closer to that than to creating a new art form, or proving a resistant mathematical theorem. As we figure out how to do more and more mental processes (not necessarily how the brain does it, but in ways that work) this seems, to me, increasingly plausible.

So, from a pop-rock star that gives live performances might evolve an AI that specializes in evoking emotional responses from people. (I want to say favorable emotional responses, but I can't really define that term in this context.) Eventually I can see this evolving into something that could hold prolonged conversations with a person, and appear fully human. But it wouldn't, of course, be particularly good at anything other than evoking emotional responses from people. It might not be able to utter a coherent sentence, or understand a syllogism, and still be able to create the desire for one to experience its presence.

There are lots of forms of AI that are unreasonable, but that doesn't mean that something won't show up. And it doesn't mean that it will intend harm, or even understand the concept.

Let's go back to "Halting State" and consider what the NPCs of that game might be like. The game is running on a vast distributed computer, so each NPC is probably experiencing multiple interactions simultaneously. And there is evolutionary pressure that they not appear to be "fake". Rich interactions is what's desired. But the experience of each NPC is not the universe that we experience. It's generally embodied in a fixed part of the environment, so the environment will be seen as a part of it, sort of like your toenails, it's there, and you can't directly feel it, but you consider it a part of you. The interactions with the players are it's experience with "the other". But there is evolutionary pressure that it be able to react as a person would. So games will create variant forms of NPCs, and the ones with the "better" NPCs will tend to be more successful (other things being equal, which, of course, they aren't). OTOH, there's a strong pressure that the NPCs not take up too much in the way of computer resources. So I'd expect them to evolve a much more limited intelligence, one that could handle sentences and emotions appropriate to their context, but to basically fade into the background... There could, however, be a few exceptions to that. Automated opponents that were adjustably skilled. So far this has usually been done by cheating, i.e., allowing the opponents to have access to officially hidden knowledge, or to recruit troops cheaply or such. But this is seen as a defect. Human opponents (via on-line play) are an answer to this, but not an ideal one. (You can't pause a multiplayer game while you go to a doctors appointment, e.g., without grossly inconveniencing many other people, so this usually isn't possible.)

Still, I don't see any advanced AI appearing in this environment. The pressures against it are too high.

Or how about a facilities-management AI? Hospitals already use robots for delivering medicines, and have for nearly (over?) a decade. This is just a start. As robot bodies improve, a janitor managed by the building network becomes feasible. Then perhaps a robotic security guard that can call for assistance. (This could initially be basically a camera with a mic and a speaker, monitored in real time by the building computer rather than by [or in addition to] a person; then add it to a robot body.) Once you have security monitors patrolling the halls for intruders, lost visitors, trash, etc., you could add the ability to move parcels from office to office. Eventually it would add on any task that facilities management handles, including conversation. Why wouldn't this be considered an AI? (Granted, this depends on continuing advancement in computer technology and robotics, but I see no reason to doubt that.)

So while there are lots of things that won't show up, unless you severely restrict the definition of AI, I don't think that AI is one of them. There are many FORMS of AI that won't show up, and others that shouldn't. That, however, is a far different statement.

P.S.: To claim that people won't do something because it is, or seems to them, unethical is to ignore history. SOME people won't do it. And the Amish still exist.

34:

Human-like AI is basically another prism for examining human behavior, like the rubber-forehead aliens of TV. This can be great storytelling, but I think it's unlikely as futurology*.

I think that real, existing AI is a great stand-in for the Other of alien intelligence in that it shows how unlikely and difficult a "meeting of minds" is. It is evidence that humans won't acknowledge intelligence if it deviates sufficiently in appearance or function from human intelligence. The difficulty of mutual understanding between organisms without shared recent evolutionary history may well be more akin to that of men and mushrooms than of men and dolphins. Even Turing's famous imitation game or test was about passing for human rather than demonstrating intelligence more generally.

Terry Bisson's "They're Made Out of Meat" is one of the more relevant SF references, though even it has to deal with only part of the problem to remain comprehensible as a story.

Peter Watts' Blindsight comes at the problem from a slightly different angle. The aliens can only make a half-credible stab at communication with humans, but appear more than capable of beating humans at the global dominance game. Consciousness as understood by humans is not only unnecessary for competitively successful intelligence but may even be an impediment.

*"AI" that is really a simulated human brain is the exception, but I think that the actual realization of human brain simulations is unlikely. This is due to the extremely difficult scientific and engineering problems standing in the way rather than any philosophical objection. I can suspend my disbelief for fiction but I'll laugh at anyone who really believes it's coming, just like I'd laugh at someone who thinks they'll retire around Tau Ceti.

35:

Good post Mr. Stross. I find it incredible how many (supposedly) highly intelligent people can be so delusional about AI. One only has to read the absurd predictions of Minsky and McCarthy from 50 years ago to see that. I see no indications whatsoever that AI in a box is possible; there is no more creativity or consciousness in my computer than in a doorknob. What we are really creating are billions of extreme idiot-savants with the potential to destroy us. It all seems like crazy Cargo Cultism to me, a weird quasi-religion created by geeks who perhaps need to reflect upon and develop their own mental powers rather than trying to force human cognition into the boxes of their simplistic models.

36:

It's an OLD argument, I know, but ... "pet dogs"? Oh dear, oh dear ... 'PET'? Well yes, of course, but ...

Let's see now: sniffer dogs? Police dogs of various kinds, war dogs sniffing out mines and IEDevices, seeing-eye/blind-aid/hearing dogs, dogs that can sniff out illness, watch/guard dogs, hunting dogs/hounds ... now try substituting 'cats' for 'dogs' in any of those roles. Hunting, maybe, though even dogs/hounds have become unfashionable in that role in our urbanised culture, whilst cats were always oddities as collaborative hunters: more entertainment than utility.

And cats can ... catch mice? Well, all right, when they feel like it ... but rats? Ah yes, forgot to add rat-catching dogs.

This doesn't underrate cats ... they are a quite extraordinarily successful parasite of ape descendants ... furry con-persons.

There's no way of proving it, but IF our branch of hominids could become partners with non-hominid creatures and the Neanderthals couldn't, then that might, just possibly, be why we are here today and the Neanderthals aren't.

37:

"Intelligence" is essentially the ability to use past events to alter the decision-making process; "consciousness" is essentially the result of being able to plan future behaviour. The advantage of the latter should be fairly obvious.

38:

Cats ..they are a quite extraordinarily successful parasite of ape descendants

I suspect you are not very familiar with how cats are treated in different parts of the world, even in parts of Europe. Here in California, I was surprised to find that most(?) cats are treated as either indoor-only or outdoor-only animals, and on farms they have to feed themselves by catching rodents.

This is all a very far cry from their lordly status in suburban England, and I suspect, in a certain Edinburgh house.

39:

"Are you listening to me?"

"Of course, dear." The eyes never move from the computer screen.

I suspect that we'll be able to fake AI as well as most people fake awareness. That's the thing, really: we assume that we know what's going on in someone's head, we model that assumption, and when the model works convincingly, we assume that we know what's going on in that other person's head. Do we? Rather than chase that particular philosophical rabbit (because I'm sure it has a nice Latin name, and someone who wasted time in a philosophy class will chime in shortly and condescendingly inform me what I'm talking about), I'm going to get to the point: does it matter?

I'd say it doesn't. Most of us fake our way through our lives, and philosophers say we're all conscious and paying attention. What's so special about faking that state convincingly?

You think I'm wrong? What's going on behind your left ear, right now, on the skin? What, can't feel it? Why not? You're a conscious being inside your body. But maybe you're not conscious of that part. Okay. So how much are you conscious of, then?

Pointless question? At least I'm not like the martial arts master who mocked his student for not moving his liver slightly to make balancing easier in an awkward posture. Think that through a moment, and you'll realize how much consciousness you may be missing.

The nice thing about AIs in literature is that they're characters. They're understandable. They have things like motive. They do things for a reason. AIs are a way of anthropomorphizing computers. As such, they're fine. They make the literature less challenging, and they may have uses in real life. If anyone's paying attention, that is.

40:

"Why we see dogs as "cute" I don't know (but I prefer cats, so what do I know). It must be more than just behavior and projection, because I have a hard time believing that a big, hairy spider that acted like a dog would ever be "cute" or "cuddly". "

This is very, very cultural. Introduce someone from the Middle East to a small (under 1' high), nice, quiet, friendly dog, and watch most of them recoil in terror. I've seen it happen. I've also had to put our 70-pounders outside during visits, as the guests from Egypt would not come in the door otherwise.

41:

It's only murder if it's conscious. If an intelligent machine built with silicon integrated circuits is conscious, why isn't my current computer conscious? Am I a murderer for turning my computer off? And it's not about the level of complexity either. That explanation never made any sense. It is pseudoscientific wishful thinking. Pure and simple.

My biggest problem with AI types is that they have no clue as to what causes consciousness, and yet they feel free to conjure up all sorts of superstitious scenarios about AIs rebelling against humans and taking over because they got tired of being slaves. What utter hogwash. Not even wrong.

42:

"...Introduce someone from the middle east to a small (under 1' high) nice quiet friendly dog and what most of them recoil in terror."

That's interesting. Any idea why? (I take it that "normal, mid-sized" dogs are OK).

43:

No. My point was that "cute" didn't seem to be a factor. Or size. They are just raised to be afraid of dogs. Any size, shape, color, etc...

Now in much of eastern and southeastern Asia cute doesn't apply either. It is more a question of "what's for dinner?"

A friend in college, and others who spent time in Viet Nam, told of being on fire bases and having issues with dogs, as the dogs quickly figured out that they were not a menu item around the GIs. It got to be quite an issue on my friend's base; they had to get rid of them. Nuff said.

44:

"If an intelligent machine built with silicon integrated circuits is conscious, why isn't my current computer conscious?"

Why do you think your computer is intelligent?

45:

"They are just raised to be afraid of dogs."

Some quick googling suggests that Muslims treat dogs as unclean. So as you say, a cultural thing.

46:

Why do you think intelligence implies consciousness?

47:

Apparently it's rather more complicated than that ... as anything that involves religion and The Word of The Prophet of any given religion does tend to be ...

" The Hadith's note for #2839 says, "The prophet did not order the killing of all the dogs, for some are to be retained for hunting and watching. He ordered to kill the jet black ones. They might be more mischievous among them. "

So That's all right then eh wot? Or perhaps not ...

http://www.answering-islam.org/Silas/dogs.htm

The owner of the above soapbox is a Christian who honestly believes that ..

" Consider Jesus. He did not teach to kill animals because of their species or color. Christ's teachings were based upon faith, love, and obedience to God. When Christ taught prayer, He taught that God judges the heart, not the form or outward appearance. When Christ taught about heaven, and its rewards, He did not state that owning an animal would cost you your good works. Christ's message was simple: love God with all your heart, soul, mind, and strength, and to love your neighbour as yourself."

And that's just one result from one google of "dogs in muslim societies": About 1,410,000 results (0.20 seconds).

Arguments that involve Religion can get very complicated in theory and Very Bloody in practice.

Still even religions can change ...

http://www.guardian.co.uk/

"A blind Muslim student yesterday became the first person to be allowed to take a guide dog into a UK mosque.

Mahomed-Abraar Khatri, 18, can now enter his place of worship in Leicester with canine companion Vargo after the Muslim Law (Sharia) Council UK issued a fatwa in response to his request.

The Guide Dogs for the Blind Association described the decision as "a massive step forward for other blind and partially-sighted Muslims". Previously, all dogs were banned from mosques because the Islamic faith historically sees them as being for guarding and hunting only. However, the position was softened because guide dogs could be classed in the "working dogs" category. The animals are still barred from entering the prayer hall for the sake of hygiene but are allowed to guide their owners to the area where shoes are placed, the fatwa says. "

Say what you like about cats and dogs: at least they don't invent religions, or if they do they keep quiet about it.

48:

The ethical argument against producing genuine AI, with the massive benefits it would bring to the owners, has an obvious riposte: "Meanwhile, in China ..."

49:

Why make artificial intelligences? Why do we keep making more of the old-fashioned meat-based kind?

Picture the appeal of having a child or friend or fishing buddy or lover who turns out exactly like you wanted them because you built them that way.
The Stepford Wives was frightening because, while some people would want the challenge or excitement of meeting someone different, many people would want those they interact with to be just how they like them.

50:

Or, on a slightly less creepy (or at least more benevolent) front: consider the current creeping excess of males in the Far East due to sex selection of newborns. The resulting single-male population might want affectionate female companionship even if it had to come out of a set of algorithms; more than just renting a body would provide.

52:

Oops, wrong thread, sorry.

53:

many people would want those they interact with to be just how they like them.

Is this sentence correct? I can't figure out if you mean "just like them" or something else.

54:

The morality issues are serious, quite potentially horrible, and likely to be ignored. In part because there will be $ involved, in part because the transition between tool and person will probably be gradual, and in part because AI implies understanding of human consciousness that will cause all sorts of pressing moral issues with human cognition and our manipulation thereof that will probably be quite distracting.

As for AI being useless because it will want to watch TV, that only works if we produce AI without understanding it enough to control it. If we can make something artificially intelligent, it is quite likely we can also make it artificially and neurotically obsessed with fulfilling our every need. (And, as noted above, don't expect humanity to quail from the moral implications of that.) Our host has obviously considered this - AIs very much like that were the cultural setting of Saturn's Children.

55:

We don't have any real agreement on what intelligence is

Generally, I'm with Jeff Hawkins here.

That said, I'm fine with using "Intelligence" for a wide range of stuff involving some kind of learning mechanism. "Strong AI" implies a lot more: a self-guided, autonomous form of intelligence. What this really refers to is "intelligent sentience".

What is the role of sentience? It is the ultimate form of autonomy. You are not motivated by a simple coded goal, but by improvement, growth in its most general sense - and by the avoidance of all forms of damage and decay.

Intelligent sentience implies a self-learning prediction framework based on the underlying mechanism of sentience.

You know you will be hungry, long before you experience true hunger. A predator on the prowl, you hold still and wait in the shadow, confident of your success, valuing a delicious meal tonight higher than a sip of water right now.

You don't truly know you will catch anything. But you believe you will. And because you do, you can already feel the joy of anticipation.

benefits of consciousness

The benefit of consciousness is internal coherence.

Imagine yourself owning a cake. A delicious cake. You could eat it right now. Or you could eat it later. But you can't eat the whole cake twice. You can make millions of plans about your imaginary cake, but only one mouthwatering fantasy at a time.

If you do it consciously, you're simulating one possible, internally coherent future and one only. And the coherence check even works in a fractal, recursive way: Because you know what you think about consciously. You can backtrace it. You can cross-check it. You can compare the implicit assumptions about your imaginary cake with a real cake right in front of your eyes. Or with the photograph of a cake in an ad. Or with the prophecy of the "true holy cake".

The mechanics of consciousness (the thalamo-cortical loop) don't make it a check for truth, only for internal consistency and compatibility with your general prediction-framework.

Are consciousness and intelligence related?

Not necessarily. But they make an explosive combination.

56:

Cats are varied.

The late Tabitha was a very comfortable house-cat, but quite happy to run around the farm. We think somebody had dumped her, maybe thinking it was a place she could survive.

The one-eyed ginger tomcat from across the road had a ferocious reputation both for hunting and for annoyed reactions to humans. I never had any trouble with him, much to the bewilderment of his supposed owners.

Humans vary too.

57:

I wouldn't have a problem with turning those 500 off. But I bet they would.

Alternative slant: would you have a problem with generating 500 copies of yourself and then turning a random 500 of the 501 off?

As I see it, each would have a continuity of consciousness and each would feel as much "you" as the next. As long as they've had a moment's independence, and that cannot be merged back in, they are individual characters and the "killing" question arises.

58:

Speaking of which, there's an excellent angle on this in Linda Nagata's novel Vast:

Slower-than-light starship is taking about 200 years to go from Star A to Star B. This being a late 1990s novel, everyone aboard has been uploaded for storage during the journey (so they're not schlepping frozen lumps of meat around in a high radiation environment -- they can reassemble new bodies at the destination). Life in an upload environment on an isolated starship is likely to be pretty boring after two centuries, so all but one of the crew are effectively suspended or running much, much slower than realtime. But somebody has to stay awake in case something goes wrong.

So: there is a one-person virtual world containing (a) the control console of a starship, (b) a single pilot, and (c) a big red button. The control console delivers realtime information on what's happening around the ship. Every four minutes, the pilot is erased and re-instantiated from their stored image -- unless they push the big red button. Their criterion for pushing the button is that some crisis has occurred and they want to get off the erase-the-past-four-minutes-of-my-life groundhog day loop.

Given that, in the absence of a crisis, any four minute segment of a 200 year voyage is identical to any other, they're not losing anything by collapsing their experience of the voyage into four subjective minutes.
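The loop itself is tiny. A toy sketch in Python, with everything in it (Pilot, voyage, the telemetry dicts) invented here for illustration rather than taken from the novel:

    import copy

    class Pilot:
        def __init__(self):
            self.memories = []
            self.red_button = False

        def watch(self, telemetry):
            # Four subjective minutes of console-watching, compressed to one call.
            self.memories.append(telemetry)
            if telemetry.get("crisis"):
                self.red_button = True  # opt out of being erased

    def voyage(stored_image, telemetry_stream):
        # Groundhog-day loop: re-instantiate the pilot from the snapshot
        # each cycle; only a button-press lets an instance persist.
        for telemetry in telemetry_stream:
            pilot = copy.deepcopy(stored_image)  # fresh instance, same stored memories
            pilot.watch(telemetry)
            if pilot.red_button:
                return pilot  # crisis: stop erasing the past four minutes
            # Otherwise the instance is dropped; an uneventful cycle is
            # experientially identical to any other, so nothing is lost.

    survivor = voyage(Pilot(), [{"t": 0}, {"t": 1}, {"t": 2, "crisis": True}])
    print(survivor.memories)  # only the crisis cycle's experience persists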

59:

Look up self-mummification: it's a practice where people (usually priests of some sort) would deliberately kill themselves slowly, using a particular blend of starvation and slow poisons, with the intention of leaving behind a perfectly preserved corpse.

That right there tells me that humans will, once the technology is available, try EVERYTHING. We seem to be wired for it.

I mean compared to that, playing around with instances of your own consciousness is child's play. And ethically at least you can be sure the instances are volunteers for whatever hijinks you're signing them up for.

60:
Am I a murderer for turning my computer off?

No, but you are a (very early-stage) abortionist :-)

61:

Will it be unethical to create an AI that is unable to initiate actions and has no wishes of its own?

62:

BTW, Charlie, how exactly does the slavechip from Saturn's Children work? Now that I think about it, the slavechip implies a very deep understanding of the inner workings of intelligence and consciousness. Funny that they still had to copy human brains ...

63:

So is there potentially an issue with "killing" when your parents brought you into the world knowing you would subsequently die? If not, what about doing so knowing you had a genetic defect and a short life, e.g. Niemann-Pick disease? Are you also suggesting that women shouldn't have abortions (sometime after the fetus can experience its environment) because that would be murder? The hypotheticals of AIs expose the issues of dealing with natural intelligence too.

In the case of the 500 copies, hasn't the original knowingly granted only a mayfly existence to each copy? cf. Brin's "Kiln People". Is it really "killing" to allow them to turn off?

I'm reminded that philosophers like to set up situations that create moral dilemmas. The train running into people on the track that can be averted by various actions. The results usually show that people will do actions which do not constitute direct acts of murder, e.g. pushing someone onto the track to stop the train is not acceptable to save the lives of others in the path. In the case of the 500 copies, if they each "go to sleep" and never wake up again, the lack of direct "turning them off" might well be morally allowable.

64:

"And cats can ? catch mice? ..well, all right, when they feel like it ..but Rats? "

Yes, and the value of that historically isn't to be underestimated. And cats are perfectly capable of catching rats. Cats are a lot more useful in most agrarian settings than dogs are.

In the modern world, it's largely a wash. If you consider barn cats working animals, there are lots more of those in the US, at least, than there are working dogs.

Even if you agreed to the notion of cats being parasites (which requires quite a loose definition of parasite) because they only do one kind of thing - well, that's pretty much most animals we've domesticated. Horses are good for moving stuff. Cows are good for being eaten. Sheep are good for wool.

Dogs, being the Swiss army knife of working animals, are the exception.

65:

Ahem:

Horses are good for moving stuff. Cows are good for being eaten. Sheep are good for wool.

Horses are good for moving stuff or being eaten. Cows are good for being eaten or producing milk or pulling ploughs/moving stuff. Sheep are good for wool or producing milk or being eaten.

There, I fixed that for you.

Most domestic animals are multipurpose. Dogs are no exception (you can even eat them if push comes to shove).

Cats ... they self-domesticated because with the invention of agriculture, humans needed somewhere to store the harvest, which in turn created a magnet for vermin. Cats eat vermin. Having somewhere warm and comfortable to live where predators are excluded and there's an endless supply of vermin to eat? Cat heaven!

66:

The key thing to remember about the Saturn's Children universe is that the people who built the children were utterly vile -- they failed ethics forever. That slave chip doesn't imply they understood a damn thing; it implies that once you have a brain implemented in software, and have forgotten everything you ever heard about right and wrong, virtual neurosurgery -- and the ability to reverse failed experiments -- will let you shape the mind you have on your computers into a pretzel. 1: Rewire the brain in a way that seems like it might accomplish what you want (no need to respect surgical, physiological or topological plausibility: it's all in software, after all). 2: Boot the mind up, listen to the screaming/insane ranting. 3: Erase, try again.

67:

Charlie,

Actually the prospect of enslaved AIs can be quite disturbingly real and close at hand.

Look at Google. We've got personalized search (based on monitoring patterns of individual search behavior), Gmail, Google+, Google Docs, location-based services, etc.

Google is collecting plenty of data about me and is able to run increasingly high-fidelity models of my psyche, all the better to model ads that I'll respond to. Imagine it buying some health companies, banking, and maybe health/insurance businesses, and it'll have an even more comprehensive picture of me.

Perhaps the first AIs will be nothing more than simulations of customers and users running in some uber-powerful recommendation engine?

Could be a nice disturbing book.

Reality check: I think that most of this could work almost as well if Google takes many computational short-cuts. These approximations would effectively "compress" representations of people without the overhead of lots of nearly useless detail. It's likely that recommendation-engine models of people won't be anything recognizable as AIs. But still, it's fun / disturbing to consider.
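To make the short-cuts point concrete, here's a toy version of a standard recommender technique (invented data, generic method; no claim that this is what Google actually runs): a truncated SVD that compresses each user down to two latent numbers.

    import numpy as np

    # Invented toy ratings: 4 users x 5 items (0 = unrated).
    R = np.array([[5, 3, 0, 1, 0],
                  [4, 0, 0, 1, 1],
                  [1, 1, 0, 5, 4],
                  [0, 1, 5, 4, 0]], dtype=float)

    # Rank-2 truncated SVD: each "person" becomes two latent factors.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    k = 2
    user_vecs = U[:, :k] * s[:k]       # the entire "model of a psyche": 2 floats
    item_vecs = Vt[:k, :]
    predicted = user_vecs @ item_vecs  # cheap preference estimates for all items

    print(np.round(predicted, 1))

Two numbers per person is a very long way from a simulated customer, which is rather the point: the profitable approximation may never need to approach a mind.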

68:

To extend the analogy: what if we put the copies into sleep mode, e.g. by reducing the clock speed? They might then appear "dead", but in reality would just be working very slowly. Perhaps we transition that to storage: run a cycle, store, run a cycle, store again. Are they dead now, given that they can have no effective interaction with our world? But clearly a human traveling at close to light speed would have a similar experience. Have we then "killed" them?

I would argue that in both cases no "killing" has occurred.

69:

Yes, but that only works if there is consent involved. If you were my employee and I stuck a needle in your arm to slow down your metabolism I'm not killing you but it's clearly a crime.

"True" AI (i.e a digital agent that is intelligent and conscious) would have to receive just as many rights as humans, we'd have to make a charter of Sentient Rights. What we really want is an extensive toolkit of expert systems so that if we want a task we just stick the software together like lego and have an intelligent agent capable of performing the task. This may even involve bolting together some turing compliant interface so that we can talk to it like a True AI but it's not really conscious.

70:

"If you were my employee and I stuck a needle in your arm to slow down your metabolism I'm not killing you but it's clearly a crime."

Have you never worked on a production line? Time just draaaagggggggggggsssssssssss.......... ;)

71:

"Cats ... they self-domesticated because with the invention of agriculture, humans needed somewhere to store the harvest, which in turn created a magnet for vermin. Cats eat vermin."

Wolves may have self-domesticated in a sense also, with "flight distance" differentiating between village-scavenging wolves and pack-hunting wolves. And a willingness to eat in proximity to humans appears to be endogenously linked to dog-like traits.

As for the article, I think I'm a little uncomfortable with the degree of conflation of consciousness and intelligence.

72:

"Why we see dogs as 'cute' I don't know (but I prefer cats, so what do I know). It must be more than just behavior and projection, because I have a hard time believing that a big, hairy spider that acted like a dog would ever be 'cute' or 'cuddly'."

I suppose it's that they are a pedomorphic social mammal -- I think most would agree that adult wolves are generally less cute than adult dogs.

73:

Yep ... I suspect that the accepted historical model for dog/ape-pack human prehistory was probably anticipated several times in shaved-ape/dog history, and that at various times the dogs of any given isolated ape/dog pack would have eaten the apes in case of dire need, say the pack-leader apes clearly having died of disease/wounds or whatever.

As for the dogs'/wolves' loyalty? Well, there has to be an escape clause along the lines of 'I'm HUNGRY! and Pack-leader hasn't actually moved for quite a while' ... substitute cat where appropriate, though cat integration into the pack probably post-dates the hunting wolf/dog/ape pack and awaits CIVILISATION and cities: any self-respecting cat requires the invention of the City, Civilisation (tm), before integrating into this new and successful social ... damn it, they have opposable thumbs! ... system.

Mind you, there were probably variants for pack survival by eating the Walking Around Meat supply, on the grounds that ... a good hunting hound/breeding pair is hard to find, and the Kids are Really, REALLY!! irritating Pack-leader by whining 'Are We There Yet?'

74:

"The 4% Neanderthal DNA that we have suggests that some interbreeding also occurred."

By "we," you must be referring to Eurasians; Africans do not share DNA with Neanderthal.

Interbreeding did not necessarily occur. In fact, it's a little odd that E. Asians share the same percentage of H. neanderthalensis DNA as Europeans -- there weren't any Neanderthals in E. Asia.

I'm convinced that the shared DNA results from earlier African population structure -- in other words, the ancestors of Eurasians (in Africa) were more closely related to Neanderthals than were the ancestors of modern African populations.

75:

Didn't Douglas Adams more or less get it right? On the one hand you have sentient Marvin, brain the size of a planet, but deeply depressed at the pointlessness of it all and good for nothing except spacecraft-parking duties; on the other, the cheery AIs that are designed to work the lifts and would be happy to have a chat, but have nothing of any value to say because they're too stupid.

76:

"DNA results from earlier African population structure -- in other words, the ancestors of Eurasians (in Africa) were more closely related to Neanderthals than were the ancestors of modern African populations. "

Interesting speculation. But I don't think the evidence supports it. If that were true, all human groups would not be so close a match to each other, and separated from H.n. That the European lineage has some H.n. DNA suggests a later mixing, i.e. some interbreeding.

If true, then the classic definition of species is not quite correct in this situation, as H.s.s. and H.n. should not have been able to produce fertile offspring. If the interbreeding hypothesis is correct, then the two groups were biologically closer than we thought.

77:

I see the Wikipedia entry suggests that H.n. could be a subspecies: Homo sapiens neanderthalensis. That classification might gain some strength if the interbreeding hypothesis becomes more certain.

By coincidence, I just finished the evolution-of-humans chapter in Dawkins's latest, where he notes how difficult (and rightly so) it is to classify fossils in our lineage.

78:
"If true, then the classic definition of species is not quite correct in this situation, as H.s.s. and H.n. should not have been able to produce fertile offspring. If the interbreeding hypothesis is correct, then the 2 groups were biologically closer than we thought."

Please find me a competent biologist who defines "species" like that. (Indeed, find me a competent biologist who's happy to assert that there is a really solid definition of the term, because I'd like to know what it is.)

You are presumably aware that Tigers and Lions are regarded as separate species, yes? Also that cross-breeds between the two species are known as Tigons and Ligers, depending on the gender-mix of parents. Some (not all, but at least a few) of those cross-breeds are fertile. Similarly, everyone knows that mules are infertile, but actually that only applies to all male mules and most female mules; fertile females are relatively rare but do exist. (Fertile males don't, for reasons relating to gamete chromosomes.)

Oh, and if you want to look at plants, things get much worse. Plants aren't just interfertile across species, they are frequently interfertile across genera (to the point that there's a standard notation for it).

79:

There is no hard and fast definition of "species." The general rule is that if two populations are unlikely to interbreed, for whatever reason, then they are probably two species. This means that two groups of animals, that have some slight variation in appearance and which are separated by (say) a mountain, are likely to be considered two species.

This is not a particularly good rule, and DNA analyses are starting to come into play. ("starting to" meaning over the past 20 or so years... taxonomy moves slowly :).)

Essentially, unless you took graduate degrees in biology, pretty much everything you learned in school is wrong, but close enough for you to get by. :)

80:

You make a fair point. But a definition of species has to separate one form from another, and separation of breeding populations is not enough on its own, because geography can separate populations that are clearly the same species. We still need to label organisms as distinct types, even though what constitutes a distinct type is evolutionarily vague, almost silly. Speciation requires a separation of breeding populations that will stay separated even when brought back into proximity. Exceptions that break the rules are just part of biology.

I can accept that if H.s.s. and H.n. interbred, then they are closer than we have traditionally thought. If so, then I would prefer to make the 2 groups subspecies, rather than species. Call it a personal preference if you will.

Defining genes, once quite simple, is now far more hazy too. You can write books on this now.

81:

"...and DNA analyses are starting to come into play..."

DNA analysis is now showing that some species we thought were different are the same, and vice versa.
Labeling rules are "good enough", not perfect or absolute. Cheap DNA analysis is opening up new means to label organisms and place them in context.

82:

"In fact, it's a little odd that E. Asians share the same percentage of H. neanderthalensis DNA as Europeans -- there weren't any Neanderthals in E. Asia."

I don't really see how this is a problem. Going from Africa to Eurasia there are about three routes: first, Gibraltar or South Arabia, though I'm not aware of any evidence for either; then the Levant, where there is plenty of evidence of early HSS and also of contemporary Neanderthals. So I guess the first admixture is likely to have happened there, with the resulting hybrids spreading to the rest of Eurasia and, incidentally, East Asia.

If you're surprised that there is no higher level in Europeans indicating further admixture with Continental European Neanderthals after that event: well, maybe there was. First off, I don't know if there are any statistics on population density, but given that HSS outcompeted Neanderthals, he had some advantage, which might mean he outnumbered Neanderthals by an order of magnitude, so maybe it was just a minor admixture.

Second off, sorry, but we're not really the descendants of those early European hunter-gatherers, but of later immigrations, e.g. by farmers with a higher population density:

http://www.sciencemag.org/content/326/5949/137.abstract

AFAIK one of the proposed origins of these farmers is the area between Anatolia and the Levant, so even if later hunter-gatherers were somewhat more Neanderthal, they left little mark in later populations. That doesn't mean they were violently annihilated; it's just that agricultural populations have a population density somewhat above hunter-gatherers'. The packed farmers experiencing nasty diseases due to high population densities, becoming immune, and spreading the good germs when doing humanitarian work for starving hunter-gatherers might, of course, have helped too.

83:

Really miss Matthew on this discussion. Period.

84:

"and separation of breeding populations is not enough on its own, because geography can separate populations"

You can argue against it all you want, but I am not exaggerating: that was enough for species differentiation in some cases.

Most of my experience in this comes from reptiles, where, quite literally, snakes on one side of a mountain were classified as a different species from those on the other, because some visual traits had become prevalent in one group but not the other. And since they were separated by a mountain: different species.

Taxonomists have traditionally gone by appearance, skeletal structure, scale placement (seriously, the placement of one scale can differentiate reptile species), location, and mating behaviour; DNA is changing all of this.

The insect guys have it even worse than this. They tend to be nigh giddy about genetic comparisons.

85:

I was thinking about widespread species, like the House Sparrow. The birds in different regions are clearly separate breeding populations. Eventually they may speciate. Conversely, like your snake example, Cichlid fish speciate very quickly, even within the same lake.

86:

"Given that, in the absence of a crisis, any four minute segment of a 200 year voyage is identical to any other, they're not losing anything by collapsing their experience of the voyage into four subjective minutes."

Except for the thoughts they might have during those four minutes, which would be enough to make each four minutes non-identical, from the standpoint of philosophical questions about identity anyway. At least they have the option of pushing the button, though. It's an intriguing idea, I'll have to read that.

87:

Speciation through distance, and distance only: there is a classic example, the so-called Ring Species. In Britain, the Herring and Lesser Black-backed Gulls are clearly separate species, and cannot interbreed. BUT around the Arctic circle there is a whole spectrum of slightly different species that shade into each other and end up at the "other end", so to speak. It's one of the classic anti-ID arguments, proven by observation and practice.

88:

Swinging off your point about food pills being a projection of dire 1950s British cooking, it struck me earlier today that a food pill makes about as much sense as a sex pill. (That is, a pill that provides your recommended daily intake - not a pill that makes you more able or likely.)

89:

Food pills make sense if (a) you think about eating purely in the functional sense, (b) you don't want the hassle of cooking, and (c) you're worried about weight and/or bulk.

For sex, well, unless you're some form of quiverfull weirdo who also hates sex, then you can disregard (a) - if you don't want sex, don't have it. No need for pills as a substitute.

For (b), there's p0rn.

(c) is irrelevant.

But this is nitpicking - your general comparison is a good one. Food pills are pointless in an era of ready-meals.

90:

The status of the herring gull complex is not so simple - see Liebers, Knijff & Helbig, The herring gull complex is not a ring species, Proc. R. Soc. Lond. B 271: 893-901 (2004)

Ensatina salamanders in California seem to be a valid example of a ring species.

91:
Cats ... they self-domesticated because with the invention of agriculture, humans needed somewhere to store the harvest, which in turn created a magnet for vermin. Cats eat vermin. Having somewhere warm and comfortable to live where predators are excluded and there's an endless supply of vermin to eat? Cat heaven!

Surely there's more to the story of the Cat That Walks by Himself?

Our menagerie currently holds three dogs and two cats (down from four dogs, three cats, a rabbit, a gerbil hiding in the walls, two indoor birds and a rooster hatched from an egg), all of which are our daughters' - we just feed them, water them, walk them, take them to the vet, and clean up their dookie :-) Anyway, the fault I find with the story as Charlie presents it is this: I'm the most obsequious of servants to one of our cats, always there to open a door for her, always there to fend off the dogs as she makes a beeline for the back bedrooms, always there to make sure she gets her special wet cat treats (she's fifteen). And that ungrateful beast still hates my frickin' guts.

And to top it all off, she's sweet as pie to Ellen, who never lifts a finger as far as the animals are concerned! Jumps up in her lap, purrs and head-butts, sleeps on her pillow at night, every night. Me? All I get are snarls and an occasional swipe of the paw if I'm too slow in the discharge of my catly duties.

What's up with that? Evolutionarily speaking, that is?

92:

Now, the cats on the farm are employed by the humans as eaters, to get rid of the rodents, and their income is rodents and petting.

93:

You think I'm wrong? What's going on behind your left ear. Right now, on the skin.

Ah, you asked the wrong person. The back of my head has sausage curls and on the left side, they curl so they come forward and annoy me on my neck, and yes, behind my left ear. When the hair gets long enough to stay behind my shoulder, it will be earrings that tap me behind both ears.

94:

The topic is addressed here:

"It could be argued that Neandertal admixture was indeed higher in West Eurasians initially, but the difference was evened out by gene flow across Eurasia. This, however, makes no sense, as the history of Eurasians post-Out of Africa was one of genetic differentiation, which suggests barriers to gene flow, and it would be difficult to imagine a scenario in which the Neandertal component would even out in Eurasia across populations from the Atlantic to the Pacific."

http://dienekes.blogspot.com/2010/05/tales-of-neanderthal-admixture-in.html

Apropos the credentials discussion, "Dienekes Pontikos" is a pseudonym and I don't know his academic credentials, but he's quite meticulous and obviously well-versed in population genetics and physical anth.

95:

Or perhaps they'll move to more same-sex couples, or polyamory.

96:

Linda is bringing her books back as e-books, and here's an excerpt of Vast. I have the paper version, which I bought when it came out. I have all her books in paper, and I hope the new ones keep coming out that way.

97:

It's been suggested that many of the hitherto supposed migrations are illusions arising from archaeology. Goods were traded, and the movement of goods isn't a reliable indicator of movements of people.

So the idea of farming spread, but the locals in Europe might not have been displaced by a horde of agriculturalists, chanting "Git orf my land!"

After all, Jeremy Clarkson drives a Ferrari, but that doesn't make him an Italian immigrant.

98:

Well, there is still garbage in, garbage out. HAL's programming was messed up after it left the factory.
I think it would be hard to tell a story without a dummy, or some dummies, in it. If everyone did the smart thing, for the long term, where would the story be?
I read once that genes showed everyone is descended from a small group that sat out the big Ice Age in very southern Spain. After the ice melted they moved out. Wonder what did not make it?

100:

For one thing, there are huge ethical problems associated with attempting to simulate a human brain, or building a piece of software that could become self-aware. If you terminate a conscious program, are you committing murder? Quite possibly. Worse: if you use genetic algorithms to evolve a conscious AI, iteratively spawning hopeful mutants and then reaping them to converge on a final goal, are you committing genocide?

Bisson certainly had an idea.

101:

Well, even if the goods were produced in place, this still leaves the possibility that it was locals taking up the techniques; I know.

That's why the linked Science paper is somewhat interesting: they used the DNA from early farmers and contemporary hunter-gatherers, and it seems like they differ from each other quite a lot; they also both differ from present populations, BTW.

AFAIK there are some other papers that come to different conclusions, but still, there are quite a few indications that there was some migration, combined with locals adopting agriculture and spreading themselves, etc.

http://www.ncbi.nlm.nih.gov/pubmed/20939899

102:

Well, as mentioned, there is the possibility that later admixture from Neanderthals was smaller; also note that, as mentioned, even if there was later admixture, there is indication it got levelled out by later migrations, e.g. from South Eastern and Eastern Europe. According to wiki, the Neanderthal range retreated north when the climate got warmer and went south when it got colder, even before HSS expanded, which might figure with an overlap in the Levant but a later disentanglement of the ranges.

To put times and ranges into perspective, the oldest HSS in Europe are from Romania, about 40,500 years old; Neanderthals disappeared 30,000 years ago, which leaves us with 10,000 years of coexistence in Europe at best. In the Levant, the earliest modern humans are about 90,000 years old, and Neanderthals disappeared about 50,000 years ago; that leaves us with about 40,000 years.

Another possibility is that Neanderthals entered Africa at some point, which would extend the time range even more.

As for Dienekes, well, he is always interesting to read; the problem is you realize quite fast that he is something of a hellenophile (OK, that's an understatement), so you have to unspin him somewhat. Err. See for example this:

http://dienekes.110mb.com/articles/greekiq/

And concerning credentials, well, he mentions his sources, so that's not that important.

103:

How exactly are house cats parasites? They provide endless entertainment and raise your mood better than fluoxetine. For almost no cost.

104:

How are cats parasites? They're not, and never have been.

At best, it's a relationship with strong benefit to both sides (mutualism), and at worst commensalism (one side gets a free meal, the other side is not significantly affected).

Historically speaking in Europe, the cat is the primary agent responsible for the suppression of mice and rats. This meant that firstly much more food could successfully be stored, allowing humans to survive the winter months, and secondly that plagues occurred less often.

As pure pets, they're probably commensal. But even today, there are lots of farm cats doing what they've always done. Taken across the entirety of the cats and humans, it's probably still mutualism.

(Though it should be noted that in some other regions of the world, cats are an actual ecological problem, as are dogs.)

105:

And, of course, even SUPPOSED "House Pets" will kill vermin. Our Birman tom obviously doesn't know that Birmans are supposed to be non-hunting pets. He hasn't had a rat (yet) but he's had several house-mice, one woodmouse, one gerbil and one squirrel .... As well as being unbearably, unutterably, unspeakably CUTE.

106:

Well, yes, of course cats are cute .. but they are cute killers. My E-mail box this morning contained a referral link to an interesting piece in " Mother Jones " ..

" Are Cats Bad for the Environment? "

http://motherjones.com/environment/2011/06/cats-tnr-birds-feral

221 comments the last time that I looked!

107:

That's not the 'little death' (la petite mort). =) Though it does bring up a fascinating train of thought, even though it seems to be a function of biology rather than of consciousness.

108:

Arthur Schopenhauer: "Each day is a little life; every waking and rising a little birth; every fresh morning a little youth; every going to rest and sleep a little death." (http://www.gutenberg.org/files/10715/10715.txt)

It looks like the precedent is against you.

109:

I am wondering about this Neanderthal DNA. A few years ago, I read repeatedly that humans showed no trace of Neanderthal DNA. Now people are posting that there is. I know they are not lying, but what happened? Was something missed then?
I read once that genes showed everyone is descended from a small group that sat out the big Ice Age in very southern Spain. After the ice melted they moved out. I would think that tribes would have killed other tribes for living space in the worst of the time.

110:

"And that ungrateful beast still hates my frickin' guts."

Dogs in general have owners.

Cats on the other hand tend to tolerate other creatures who live in the area. And the level of tolerance varies according to rules not yet spelled out.

111:

"A few years ago, I read repeatedly that humans showed no trace of Neanderthal DNA. Now people are posting that there is. I know they are not lying, but what happened? Was something missed then?"

DNA analysis is still in the big-sticks-and-stone-axes phase. In all of science, a lot of early "decisions" get adjusted as more is learned. Which is why you have to take science stories in traditional media with very, very, very large grains of salt. They want to distill it down to a few sound bites and leave out all the footnotes in the original work.

112:

"Please find me a competent biologist who defines 'species' like that."

"If the tortoises had followed the pattern of Darwin's famous finches, they would have evolved a different species on each of the islands. Then, after subsequent accidental driftings from island to island, they would have been unable to interbreed (that's the definition remember, of a separate species) and would have been free to evolve a ....."

Dawkins, R. "The Greatest Show on Earth", p 264. Context - evolution of the Galapagos Tortoise.

113:

Sorry - repeat, but: Dogs have OWNERS Cats have STAFF

114:

With dogs, we got the submissive ones; with cats, we got the lazy ones.

115:

It's something of a 'science marched on' situation with this one. The first results were with mitochondrial DNA from Neanderthal fossils; mtDNA is present in much greater copy number than nuclear DNA in cells, so it was much easier to isolate. Those results were quite different from modern humans, so it was assumed HSS and Neanderthals split quite some time ago:

http://www.ncbi.nlm.nih.gov/pubmed/9230299

Problem is, mtDNA is only inherited from the mother, so every time a woman dies without daughters, her mitochondrial line dies off; which means the Neanderthal mitochondrial lines might just have succumbed to genetic drift. So some people started sequencing other DNA from Neanderthal fossils; when first results indicated Neanderthal sequences were more akin to chimps than to humans, it was at first assumed that this was further proof of little admixture from Neanderthals in modern humans:

http://www.ncbi.nlm.nih.gov/pubmed/18304371

As mentioned, that has since changed:

http://www.ncbi.nlm.nih.gov/pubmed/20448178

Also look at this:

http://en.wikipedia.org/wiki/Neanderthal_Genome_Project

Personally, I think there might be another possibility, namely convergent evolution of DNA sequences thanks to adaptations to climate, diseases etc., but I'd have to look up whether that is a possibility.

116:

I bought Vast back in the day and really enjoyed it. A couple of years ago I took advantage of the online second-hand book market to catch up on the other books she had written. I didn't enjoy any of them, so I reread Vast to see if I still liked it, and I still loved it. The sad truth is that while I would love to be a midlist SF author, it wouldn't pay my bills. Until Charlie talked it up I had no idea how poorly paid a profession writing is for the majority.

117:

The default assumption in SF has usually been that consciousness is an emergent phenomenon -- often presumed inevitable or automatic -- so that, whenever sufficient complexity of computation is evinced by your AI, it 'wakes up.'

Leading neuroscientists say that's exactly ass-backwards, though.

In general, AIs and computers would have no need of consciousness -- even less than the proverbial fish would need a bicycle. Consciousness is essentially an emergent mechanism that processes a mere 60-70 bits of information per second -- a minute fraction of what's going on in your brain at any given time -- and nature apparently evolved it to tie together the many nodes of slow-signaling soggy neurons in a biological organism's CNS, which would otherwise not function with sufficient simultaneity for that organism to move effectively and survive in the physical world.

That's what the neuroscientists mostly believe nowadays, anyway. For much more on this see, for example, the work of Rodolfo Llinas of NYU, Christof Koch and Francis Crick (yes, him), Antonio Damasio, and others. A particularly good introductory overview of how one leading scientist in the field thinks consciousness works -- the most convincing overall account I've read -- is Rodolfo Llinas's book, published in 2001 by MIT Press, I OF THE VORTEX: FROM NEURONS TO SELF.

http://www.amazon.com/Vortex-Neurons-Rodolfo-R-Llinas/dp/0262122332

Llinas is an interesting guy, generally. He's nearly 80, thinks that the Internet is probably the future radio-based nervous system of (post)humanity, works on squid CNSs as well as humans, and has a substantial list of discoveries over the decades.

He's done recent work towards man-machine interfaces (wires in the brain) and received money from the US Navy to work on a consciousness chip (from what I can murkily figure out, Llinas sold the US Navy on the notion that the mechanism he thinks is the primary neural correlate of consciousness in biological brains also presents advantages on a chip, in the same kind of way that analog technology is sometimes superior to digital).

http://www.med.nyu.edu/biosketch/llinar01

118:

@ 115 et al Re: Neanderthal genomic fragments. RISHATHRA of course!

119:

I'm surprised at that; I expected better from Dawkins (especially in a book about evolution, which should therefore take the time to discuss speciation in detail).

I suspect he'd give you a different answer if you asked him in the context of a discussion like this one, though. Because that definition of species really doesn't work, even for animals.

120:

In some respect, when in pop-science mode (aka 'lying to children'), you do have to simplify or you lose the readers. Two populations that cannot interbreed even if brought into contact with each other are certainly different species.

Beyond that, populations that don't interbreed because of separation (lions/tigers) or differential sexual selection (animals that are interfertile and co-located, but that are far enough down different reproductive strategy pathways that they don't go for each other) are different species as far as statistical considerations are concerned — i.e. they are in general different gene pools. However, all of these are spectrums, and there's probably never a point where you can say 'yesterday, same species, today, different'.

I can certainly understand Dawkins on occasion not getting into the whole of the species problem.

In the case of H.s.s and H.n., we're probably in a grey area: my guess is that for a time their territories overlapped somewhat both geographically and behaviourally, and that there was some interbreeding (if only because humans will eventually try to shag anything), but that it was rare and statistically of little importance regarding the separation of gene pools.

121:

One counterargument I will make is that useless (and even fairly dangerous, counterproductive, and difficult) things are done all the time. People will continue to try to write AIs with all the messy aspects of humanity, the way that people continue to build and try to market flying cars. Nobody has ever gotten particularly wealthy off of flying cars, because nobody actually wants one, but wanting to want a flying car is a staple of the Gernsback age that refuses to die.

122:
"In some respect, when in pop-science mode (aka 'lying to children'), you do have to simplify or you lose the readers. Two populations that cannot interbreed even if brought into contact with each other are certainly different species."

OT: I'm looking for a little help here, guys. Over on James's livejournal, someone was asking for a simple explanation of a) why you can't get to the speed of light from rest, and b) why you can't ever get to absolute zero.

The relativity thing has been done to death as far as good, short, simple explanations go (imho), but what about the second one? I replied that it had to do with the fact that (if you equate temperature with molecular motion) you can't stop molecules from jiggling around because of "zero point energy", or if you like, a handwavy appeal to the uncertainty principle.

Matt McIrvin objected, saying that not only is this wrong (and it is), but that as far as lying to the children goes, it's a move away from the truth: a wrong picture that will have to be erased later - at some other teacher's expense :-)

Now, his point that saying "temperature equals molecular motion" isn't true is certainly valid, but how does one give a simple explanation of why absolute zero can never be reached, while keeping in mind that temperature is really the rate of change of heat with respect to entropy?

I gave another explanation, which Matt promptly shot down as implying that refrigerators were impossible (he's right, curse him, I was trying to sweep too much under the rug wrt how things can change temperature.) So my question is, is there a simple, intuitive - though "wrong" - explanation for this one? Or is this one of those questions you are reduced to telling the interlocutor to go take some physics classes and then come back in a year or two?

My interest is simply this: I'm a believer in the notion that if you can't explain certain phenomena to the layman with pictures and words, then perhaps you don't understand them well enough to teach them. That's not true for everything, obviously - see Feynman's explanation of how magnetism works. But does a simple, not-totally-wrong explanation for why you can never get to absolute zero fall into this category?

Would anyone care to unleash their powers of exposition on this one? Off-Off-topic: I wonder how well Charlie would do at writing popular science books for the layman? I tend to think of Asimov as the master of this (his weaknesses as a writer of fiction became strengths when it came to lucid non-fiction expositions on everything from Bible commentary to how slide rules work). Obviously not everyone's cup of tea, though again Asimov as the Great Explainer quite evidently played the part with immense satisfaction.

123:

Charlie,

While I think you're clearly right to criticize the SF idea of an Asimov style robot (a sentient, servile and happy slave), I don't really see how this translates into sentient AI in general.

An AI construct that is otherwise just like us (conscious, needy, driven, social etc.) automatically gets a number of attributes that make it very appealing to someone who needs some work done (and perhaps has questionable ethics):

A) It doesn't get sick.
B) It doesn't die (just revert from backup).
C) It probably has a higher-bandwidth link to data.
And:
D) You can make copies of it.

The last is the really important one. Say only one in a hundred AI constructs amounts to much of anything: the rest just want to play WoW all day. So what? One of them goes to college and decides to join the military. Once she finishes her training, ask her if it would be all right to make a million copies. It's the patriotic thing to do, right?

Mass production of trained personnel is what makes human-equivalent AI interesting. It doesn't matter if a single AI is no better at following orders or learning abstract algebra than a human is, because it doesn't have to be.

It just has to be easy to copy. You duplicate the successes.
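To see the shape of that argument in numbers (every figure below is an invented placeholder, chosen only to show the amortization effect):

```python
# Hypothetical economics of 'raise once, copy many'.
candidates_raised = 100        # AIs raised to get one useful one
cost_to_raise_one = 1_000_000  # assumed $ per candidate
copies_deployed = 1_000_000    # copies made of the one success

total_training_cost = candidates_raised * cost_to_raise_one
per_copy = total_training_cost / copies_deployed

print(f"sunk training cost:     ${total_training_cost:,}")
print(f"training cost per copy: ${per_copy:,.2f}")  # -> $100.00
```

Even with a 1% success rate and a million dollars per candidate, the amortized training cost per deployed expert is trivial. Humans cannot be duplicated after training; that asymmetry, not obedience, is the economic draw.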

124:

"Personally, I think there might be another possibility, namely convergent evolution of DNA sequences thanks to adaptations to climate, diseases etc., but I'd have to look up whether that is a possibility."

I get the impression that, while convergent evolution of short sequences of DNA is possible (though very far from certain - there are usually several DNA solutions to a problem), it is statistically impossible on the scale suggested (small by comparison with the entire genome but still a sizeable bunch of DNA sequences). So I tend to believe that there was some genetic transfer from Neanderthals to early out-of-Africa HSS.

In fact, the current evidence looks perfectly compatible with a situation where Neanderthal/HSS interbreeding was possible but problematic - for instance, reduced hybrid viability and/or fertility, or just mutual unattractiveness (with hybrids stuck in the middle?). Once or twice early on (and probably somewhere in south-western Asia while the non-African HSS population was still small), the hybrids got lucky and their descendants survived. There probably were later hybrids in Europe, but in less favourable circumstances for long-term survival - Europe during glacial periods will have been only marginally survivable long-term for any Paleolithic hominid population.

Why? Because while a suitable hominid population of just two can survive in principle for a generation, it will do so after that only if it expands very fast and even then will suffer from its restricted gene pool. In practice, a hominid population that drops below 2,000 for more than a few generations is likely to become extinct the first or second time it hits a run of bad luck.
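A crude Monte Carlo illustrates the threshold effect (the growth-rate distribution is an assumption standing in for famines, epidemics and good years, not paleodemographic data):

```python
import random

def extinction_probability(start_pop, generations=100, trials=2000):
    """Fraction of runs in which the population loses its last
    breeding pair, under multiplicative environmental noise."""
    extinct = 0
    for _ in range(trials):
        pop = start_pop
        for _ in range(generations):
            pop = int(pop * random.gauss(1.0, 0.3))  # assumed noise
            if pop < 2:
                extinct += 1
                break
    return extinct / trials

for n in (50, 500, 2000, 20000):
    print(n, extinction_probability(n))
```

With these assumed numbers the extinction risk falls steadily with starting size; the qualitative point - small refuge populations get unlucky, large connected ones rarely do - survives any reasonable choice of parameters.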

And in the sub-arctic conditions of glacial Europe, it will have taken quite a lot of territory (several square kilometres at least) to support just one hominid. Hit a glacial maximum, and there were probably very few European refuges large enough to hold a viable hominid population and only the largest - the lower Danube valley and around the Black Sea - with overland links to Asia. That in itself may have been enough to ensure eventual Neanderthal extinction once HSS had established itself in that area - any time the climate recovered, there would be more HSS moving into the rest of Europe from the south-east than Neanderthals from Spain or Italy. And later Paleolithic HSS European populations would have had the same problem during glacial maxima (at least until they developed sea-going boats) - but at least afterwards they would have been able to interbreed more easily with immigrants from the south-east.

125:

"The key thing to remember about the Saturns Children universe is that the people who built the children were utterly vile- they failed etics forever. "

While there is something to what you say, I think what they did is a very plausible scenario. If AI is developed, getting it rights or protections will be legally hard, and there will be powerful entrenched interests working against it. If human-level AI exists without rights, the economic, scientific, and recreational incentives to exploit the hell out of it will be massive. So yes, "the people who built Saturn's Children" were utterly vile - they were us.

Saturn's Children is optimistic in that the system was built so that it could survive after the humans died out in a surfeit of AI-enabled narciswanking. Otherwise, it is a vividly illustrated, if old, solution to the Fermi Paradox.

126:

I'm a believer in the notion that if you can't explain certain phenomena to the layman with pictures and words, then perhaps you don't understand them well enough to teach them.

But sometimes the simple explanation will take so much scaffolding that it's not worth it. There are a lot of quantum mechanics issues that I just take on faith when a large number of people in the know "agree". I just don't want to take the month it would take me to get up to speed on many such subjects.

Relativity is one of those areas where I've seen 70% or more of laymen fail to understand it no matter how lucid and simplified the explanations. It is an area where the student has to want to get it, or most never will.

127:

Like I said, a cultural question. If I were an uploaded entity, I'd have no problem with that random scenario, for very short forks (one or two minutes of subjective time).

128:

"b) why you can't ever get to absolute zero"

If you recall your Flanders & Swann:

"Heat cannot of itself go from one body to a hotter body"

So, you need something colder than absolute zero to passively cool (ie using the same technique as ice cubes in a drink) something else to absolute zero. This is not possible.

Refrigeration techniques (active cooling) all work on the basic premise of concentrating the existing heat (eg by compression, this is how fridges work) and allowing that heated up thing to passively cool before reversing the concentration.

All refrigeration techniques only remove some of the heat. So while you can (with good enough equipment) asymptotically approach absolute zero, you can never reach it.
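To put one standard equation behind that asymptote (textbook thermodynamics, not specific to any particular fridge design): the best possible refrigerator pumping heat $Q_c$ out of a cold reservoir at temperature $T_c$ into surroundings at $T_h$ is bounded by the Carnot coefficient of performance, so the work required per joule removed diverges as the cold side approaches absolute zero:

$$ \mathrm{COP}_{\max} = \frac{T_c}{T_h - T_c}, \qquad W_{\min} = Q_c\,\frac{T_h - T_c}{T_c} \;\to\; \infty \quad \text{as } T_c \to 0. $$

Halving the remaining temperature always costs a finite amount of work, but the total cost of reaching zero exactly is unbounded - hence an asymptotic approach, never arrival.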

129:

Not necessarily. It all depends on the energy and capital (equipment cost) requirements of running an AI mind. For instance, if your energy bill for running 100 AI minds is higher than the wages and associated costs of an equivalent human workforce, you'll choose man over machine.

One of the funny aspects of all this is that we're unlikely to see widespread AI in the absence of widespread nuclear energy. We run on solar power (sun -> grass -> beef -> food) and we're quite efficient at that. If needed we could ditch most of our high-tech society, revert to almost medieval times and continue existing on renewable energy only (not that I'd like us to, or consider that a good outcome). Computer-based AI, on the other hand, will probably run on electricity and presupposes a high-tech, energy-intensive economy that depends on non-renewable energy sources for the necessary levels of use.

With oil running low and nukes being taboo, energy will only become more expensive, which makes it a lousy proposition for any corporation looking to replace its workforce with AIs.

130:

"I get the impression that, while convergent evolution of short sequences of DNA is possible (though very far from certain - there are usually several DNA solutions to a problem), it is statistically impossible on the scale suggested (small by comparison with the entire genome but still a sizeable bunch of DNA sequences)."

In general I agree with that, though I think convergent evolution of DNA sequences is an often overlooked possibility in molecular systematics, with selection pressure acting both on the protein level and on the DNA level. On the protein level: some amino acids have more codons than others, and some, e.g. Tryptophan, have only one; with larger insertions or deletions, e.g. of whole exons, there are likely some preferred options; and note that some mutations are more probable than others, so the same mutation may occur twice. On the DNA level: loss of introns, insertion or loss of ALUs and other retrotransposons, and mutations of bases where the different probabilities of base changes lead to a kind of drunkard's walk - when going in one direction has a probability of 1/3, but the other one 2/3, guess where one is standing after 2000 turns. But these are just some random (hehe) musings.
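That biased walk is easy to check numerically (a throwaway sketch using exactly the probabilities in the comment):

```python
import random

STEPS, P_FORWARD = 2000, 2 / 3  # the comment's 2/3 vs 1/3 bias

def walk():
    # Net displacement after STEPS biased +/-1 moves.
    return sum(1 if random.random() < P_FORWARD else -1
               for _ in range(STEPS))

trials = [walk() for _ in range(10_000)]
print("mean displacement:", sum(trials) / len(trials))
# Expected: 2000 * (2/3 - 1/3) = ~667 steps in the favoured direction.
```

Two independent lineages under the same mutational bias drift toward the same neighbourhood, which is exactly the convergence worry for molecular systematics.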

And then there is the possibility that the alleles in question were still in the HSS gene pool at the time, but got amplified in the Eurasian lineage through positive selection, while in Africa, they were either not selected for or even selected against; also note Africa had its own version of the agricultural steamroller, called the Bantu expansion. There is some similarity to the idea of the North African HSS population being genetically more similar to Neanderthals than Sub-Saharan HSS here, but in this case, the two populations of HSS don't need to be so different.

"In fact, the current evidence looks perfectly compatible with a situation where Neanderthal/HSS interbreeding was possible but problematic - for instance, reduced hybrid viability and/or fertility, or just mutual unattractiveness (with hybrids stuck in the middle?)"

That's a definite possibility, but then, the reconstructions don't look that bad, though my opinion might be an outlier, for:

a) I was a proud member of every science club in school, and, err, let's just say I got into some arguments with my brother about some nerdettes in their glorious tomboyness or the like...

b) I'm something of an Eastern European bear, and since everybody is a little narcissistic...

And then, for attractiveness: nearly two decades of parties, some stories told by the more rural people of my acquaintance, and a friend who has a friend who is a member of a Christian sect high on chastity and saying Jehovah all the time - let's just say 2 steaks, a radiator and some threads, now run for the brain bleach - let me agree somewhat with the 'anything that moves and/or isn't too hard to catch and/or doesn't stink too much' definition. OK, that's brain bleach again.

Although this might be an explanation: alcohol was somewhat easy to get by fermentation in the Levant, hence some hybridization, whereas it was more of a problem in Europe at first. ;)

There were likely some differences in brain development between HSS and Neanderthals, which may show in social behaviour and the like, though once again, humans show quite some diversity in their tendencies for concrete thinking etc.:

http://www.ncbi.nlm.nih.gov/pubmed/21056830

Another thing is that head size was AFAIR somewhat bigger in Neanderthals, leaving likely room for obstetric complications.

131:
"One of the funny aspects of all this is that we're unlikely to see widespread AI in the absence of widespread nuclear energy. We run on solar power (sun -> grass -> beef -> food) and we're quite efficient at that. If needed we could ditch most of our high-tech society, revert to almost medieval times and continue existing on renewable energy only (not that I'd like us to, or consider that a good outcome). Computer-based AI, on the other hand, will probably run on electricity and presupposes a high-tech, energy-intensive economy that depends on non-renewable energy sources for the necessary levels of use. With oil running low and nukes being taboo, energy will only become more expensive, which makes it a lousy proposition for any corporation looking to replace its workforce with AIs."

No, the sun -> grass -> beef -> humans energy transformation chain is ridiculously inefficient compared with photovoltaic panels and electrically powered machines. Beef-fed humans need thousands of times as much land as solar-powered machines to get equivalent energy. Even rice-fed humans need at least 50 times as much land. Plus the land for cattle or cereals needs at least some water and soil, while PV panels can sit on salt flats, waste sites, or roofs of existing structures.
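Back-of-envelope version of that land-use claim (every efficiency below is an assumed round number, chosen charitably for the beef chain):

```python
# Land needed to deliver ~100 W of useful power, two ways.
insolation    = 200     # W/m^2, assumed year-round surface average
human_power   = 100     # W, roughly 2000 kcal/day of food energy

sun_to_beef   = 0.0005  # ~0.05%: photosynthesis x feed-to-beef losses
pv_efficiency = 0.15    # mainstream silicon panels

land_beef = human_power / (insolation * sun_to_beef)    # m^2
land_pv   = human_power / (insolation * pv_efficiency)  # m^2

print(f"pasture for a beef-fed human: ~{land_beef:,.0f} m^2")
print(f"PV area for 100 W electric:   ~{land_pv:.1f} m^2")
print(f"ratio: ~{land_beef / land_pv:,.0f}:1")  # ~300:1 here
```

Even with these charitable numbers the ratio is around 300:1; assume a less efficient feed chain and "thousands of times" follows.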

It would hardly need be "almost medieval" to run on renewable energy alone. According to the US National Renewable Energy Lab, rooftop PV systems could produce more energy than all the USA's existing nuclear reactors. That's using only existing rooftop space, no extra land or long-distance transmission required, and using mainstream commercial silicon PV. Using 15% efficient cells you could produce 910 terawatt hours this way, about 23% of total American electrical consumption. Using the most efficient commercial silicon cells you could go up to 34%. American wind resources are great enough to replace all existing electrical consumption*, if you're willing to build new transmission lines. There's also quite a lot of use to trim before you reach "almost medieval" levels of civilization: Denmark, Ireland, the United Kingdom, and Italy all make do on less than 1/2 the per-capita electrical consumption of the United States.

In short, if a computer can replace a human worker, the energy cost to run that machine, even using the most expensive renewable energy, is less than the food cost to produce equivalent output through human labor. The machine's advantage is greater if the human needs to be paid more than minimal-food-need wages. There are tasks that humans can do much better than machines, and tasks that machines can do much better than humans, but fairly little in the gray area of "fully automated processes are basically as good as humans, but human labor remains cheaper for now." Bangladesh doesn't have a big garment industry because their laborers are cheaper than fully automatic garment production systems, but because fully automatic garment production systems don't exist. If/when the machines are invented even the pathetic wages of Bangladeshi garment workers will be squeezed down and then the jobs will be eliminated.

*This of course ignores intermittency and storage issues that bedevil wind and solar. I don't want to try to run through all the point-counterpoint discussion around it because this post is already too long. But the joules are there for the taking, and even if you have nothing but current expensive storage tech it's still a cheaper form of energy than food.

132:
"Denmark, Ireland, the United Kingdom, and Italy all make do on less than 1/2 the per-capita electrical consumption of the United States."

With respect, none of the first three countries listed uses much in the way of A/C, which is a pretty large electricity user in the US. How many states south of the Mason-Dixon line would be almost uninhabited without A/C, at least compared to current population levels?

Come to it, I might prefer to be in parts of Italy in high summer than in Montreal, if A/C was forbidden.

133:

I didn't want to make a long post even longer, but heating and cooling indeed make up a large share of energy consumption by housing and buildings in the United States. The good news is that

A) The energy requirements can be substantially reduced just by improved insulation, or far more dramatically (for new buildings) by more substantial design revisions: structural or earth-based heat sinks, passive cooling, orientation to smooth temperature fluctuations, etc.

B) Climate control can be carried out using mostly thermal energy, relatively little electrical input needed. Even air conditioning can be run on purely thermal input (absorption cycle refrigeration). This is good news because collecting low-grade thermal energy from the sun for an absorption refrigerator or heating water/air is much cheaper than providing equivalent joules via electricity.

And as a final note, even if neither A nor B were true, I think that the American South was hardly "almost medieval" even before air conditioning became widespread. There is an astonishing gulf between "almost medieval" (Polish serfs circa 1700?) and "customary comforts in 21st century America." You can fall a long, long way before getting within striking distance of medieval standards of material prosperity.

134:

Um, let's bridge a small rhetorical divide between "almost medieval" and reality.

The reality is that most of the current biggest cities in the US (Atlanta, Phoenix, Dallas, Houston, arguably Las Vegas) grew to their present large size only after air conditioning became prevalent, and Greater Los Angeles would be substantially smaller.

This plays an enormous role in US politics. Prior to AC, the biggest cities were places like Cleveland and Detroit, and other industrial cities in the current rust belt.

Some people peg the current prevalence of neoconservative politics and southern culture in the US on air conditioning. I think that's reasonable. It's unclear whether any Republican after Nixon would have won without the AC vote.

This has enormous political implications for future energy use in the US. I know that we can build earthships and similar designs that are energy efficient, comfortable, and cost the same as normal housing. The problem is that these are "hippy inventions", and trying to get them to catch on where they are desperately needed is, to put it bluntly, difficult in the current climate. It seems that they'd prefer almost anything else - nukes, coal power, fracking, whatever - primarily on sociopolitical grounds.

135:

That doesn't work. The future /will/ have electricity. The possibility of nukes guarantees this, because if all of our other gambits for powering civilization fail, then nukes will get built. They are taboo, sure, but not taboo enough for that to hold in the face of actual brownouts and prices of electricity spiking ever higher. Powerdown is a fringe fantasy of doomers, and it is entirely a fantasy, not a possibility. Either the promises of the renewable energy advocates pan out, or the reactors start sprouting out of the ground like toadstools - there are no other possible outcomes.

136:

Hmm, the possibility of dark ages doesn't occur to you, does it?

The simple issue is that it takes a lot of expertise, time, and materials (including energy intensive materials like concrete and special metals) to build a nuclear reactor.

If civilization is cracking at the seams, there won't be an emergency program to build nukes, anymore than Afghanistan or Haiti (or any of another 80 better off countries) is currently going nuclear to solve their energy problems.

On this one, I've got to agree with Diego. It's already possible to build an AI that runs (inefficiently) on solar power, using only organic chemicals. It just takes 10-30 years to get it trained and running. It's called a human being, and it's even self-reproducing, within certain limits. There are 7 billion humans running around right now, and if there are too many people and not enough energy, I'm willing to bet that people will be hired to become smart systems. Maybe they'll even call them mentats. Calculators, perhaps, in the original sense.

While I love high tech, I'm going to keep my dad's slide rule handy, just in case. It doesn't take up much space, and it sure doesn't need a battery.

137:

I wish I could find my slide rule. I expect it got lost in a move somewhere. But I have a solar-powered calculator that's been running for at least 15 years now; that'll do the job.

The thing about relatively simple devices like calculators, personal computers, tablets, and such is that even if Moore's Law pukes out in a couple of design generations, their use of power will be pretty low, both because of transistor size and because of the more efficient designs coming in to help out with the high current leakage of the last few generations; solar power might be enough. That makes a human with a calculating tool a lot more energy-efficient than a trillion-transistor AI.

138:

@ 131: Yes, but what's the total energy requirement of the minimum technosphere needed to sustain an AI-based civilization? Someone (something?) has to build and repair the computers on which the AIs run, plus build, operate and repair the necessary industrial facilities, power plants and mineral extraction sites, none of which are simple or cheap. By comparison, we humans can survive as a species with much less.

@135: I'm not saying the future won't have energy. I'm pretty sure it will, for the exact reasons you've mentioned.

139:

"If civilization is cracking at the seams, there won't be an emergency program to build nukes, anymore than Afghanistan or Haiti (or any of another 80 better off countries) is currently going nuclear to solve their energy problems."

Which explains why nuclear programs (of any variety) are limited to societies as economically sound and socially robust as, for example, the DPRK. And why similarly complex and resource-intensive technical programs are invariably undertaken only by national leaderships that are as tightly focused on the individual personal welfare of their people as, say, the current representative of the Kim dynasty.

140:

And obviously those DPRK nuclear power plants are working, right? No, actually they aren't. According to Wikipedia, they've been trying unsuccessfully to build nuclear power plants since the 1950s, so this is scarcely a crash program now to deal with the famines. The current completion date of the latest reactor is 2012. We'll see if they make it.

The DPRK has used their research reactors (courtesy the Soviet Union) to make enriched uranium, and they may have an atomic bomb or two. Many countries on both sides of the Cold War obtained research reactors, but relatively few developed an indigenous nuclear industry. DPRK is not yet an exception to this, so far as I can tell.

141:
"Not necessarily. It all depends on the energy and capital (equipment cost) requirements of running an AI mind. For instance, if your energy bill for running 100 AI minds is higher than the wages and associated costs of an equivalent human workforce, you'll choose man over machine."

This might well be true if you imagine an AI execution framework as akin to the current generation of distributed supercomputers, with tens of thousands of cores and AC requirements measured in tons of ice.

But even in that sort of environment, being able to duplicate a hundred experts would likely be viable in certain situations. For example, if you have highly periodic need for a crack team to work on something, then you could simply power up the workers when needed and sleep them when not.

By this virtue, a team of AIs could probably perform better than a human team of similar underlying abilities, since you could in principle save the AIs when they're at the peak of their abilities and keep them there. This sort of thing has serious ethical problems, of course, but that doesn't mean someone won't be interested.

More long-term, why should we expect an AI to use a tremendous amount of power? Human brains don't.

142:

"you could in principle save the AIs when they're at the peak of their abilities and keep them there"

Strongly disagree.

You're still imagining a beefed-up expert program instead of a self-guided organism faced with the intractable difficulty of interacting with the world.

Reactivating a 5-year-old copy would simply give you an entity busy with future shock until it has changed enough to be (in its own eyes, not in yours) capable of dealing with the world around it.

On the other hand, if there's no organic decay, there is no reason to assume any peak of ability. A peak of motivation perhaps, to still have to do shitty jobs like parking the dumb old "Heart of Gold" once again...

143:

On Neanderthal DNA: well, science went and marched on me. Thanks.
On air conditioning and power use in America: I was trying to get a new building code for new houses. The local newspaper publisher said it was a step to Communism. But that was then.
Back in the 70's there was a town in California that was told a nuke plant was needed. It was an Engineering school town, and people in the school came up with an energy-saving building code. Shades over south windows, the darker the roof the more insulation, etc. I lost all my material on this. But years ago I read the town had about 3 times the population and used less power. America could cut way back on power usage at little cost.

144:
"Strongly disagree. You're still imagining a beefed-up expert program instead of a self-guided organism faced with the intractable difficulty of interacting with the world. Reactivating a 5-year-old copy would simply give you an entity busy with future shock until it has changed enough to be (in its own eyes, not in yours) capable of dealing with the world around it."

What task are you trying to solve?

Future shock is only a problem if you're employing these AIs to interact with the world at large. So, sure, despite the obvious appeal, powering up your Lyndon Johnson Bot every time you need to pass national health care legislation is probably not going to work.

But so what? There are jobs that aren't like that at all. And even if there is some future shock, catching up on the news may still be easier than learning a complex skill.

Dumb examples:

Expert team to operate a weapons system you don't expect to use much. Why keep a constant rotation of human operators ready to go? Recruit some AIs, and make a copy when they're fully trained. They only need to be awake to train on upgrades. If you need more, you can probably go find some hardware for another datacenter faster than you could train new people, too.

Spaceship operator. Wake up for certain course corrections, and when you get to your destination.

This is all assuming AIs that are prohibitively expensive to run, of course. If that's not true, we don't need to look too hard to find a suitable job.
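The wake/sleep/copy pattern assumed in those examples is just checkpointing. A toy sketch, under the very large assumption that an AI's entire state is one serializable object (`pickle` here stands in for whatever snapshot format the real thing would need):

```python
import pickle

class Agent:
    """Stand-in for an AI whose whole state fits in one object."""
    def __init__(self):
        self.skills = {}

    def train(self, skill, level):
        self.skills[skill] = level

# Train one agent to competence, once.
veteran = Agent()
veteran.train("course_corrections", 0.99)

# Freeze it at its peak...
snapshot = pickle.dumps(veteran)

# ...then 'wake' as many copies as the mission needs, years later.
# Each copy resumes exactly at the checkpoint: no retraining, no fade.
crew = [pickle.loads(snapshot) for _ in range(5)]
assert all(c.skills == veteran.skills for c in crew)
```

Whether a restored copy then suffers future shock, as argued above, is a question about the world changing, not about the mechanism.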

145:

Human remains at Gough's Cave, Cheddar, are about 9,000 years old, and recent testing showed a good DNA match with a school teacher still living in the area. So yeah, not too much migration going on in Somerset, at least.

146:

Though whether that's something to be proud of is up for debate, SCNR. ;)

Besides the usual tidbits - like, how frequent is this genotype outside Somerset? (there are maps of this; forensic geneticists like that stuff) - quoting the abstract shows why 8,000 years is just barely interesting:

"After the domestication of animals and crops in the Near East some 11,000 years ago, farming had reached much of central Europe by 7500 years before the present."

147:

On the other hand, mass migrations do happen. Check the ancestry of the majority of North Americans, for example. And of the majority of Australians, too.

(Noted: not all those NA ancestors wanted to migrate. Or be migrated.)

148:

Different approach.

"The brain is incredible," said lead author Lulu Qian, a Caltech senior postdoctoral scholar in bioengineering. "It allows us to recognize patterns of events, form memories, make decisions, and take actions. So we asked, instead of having a physically connected network of neural cells, can a soup of interacting molecules exhibit brainlike behavior?"
