If you asked me, "How does this character feel?" with regard to a story, and I said, "Well, they're sad, obviously," could you tell if I said that because I imagined myself in the character's position and felt sad, or because I worked through a sequence such as: "This character is a human. At this point, the character's father died. Humans are sad when parents die, unless there are extenuating circumstances. No keywords or patterns prior to this point indicate such circumstances. Therefore, the character is sad." Because, ultimately, that's all an "interior life" is -- us running precisely that kind of algorithm, just doing it so quickly and subconsciously that we don't realize we're running it.
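That explicit rule-following route can be sketched in a few lines of code. This is purely illustrative (the fact strings and rule are made up for this example, not any real inference system): a chain of stated facts and a default rule, checked in order.

```python
def infer_feeling(facts):
    """Apply a default rule to a set of stated story facts.

    facts: a set of plain-text facts established so far in the story.
    Returns the inferred emotion, or "unknown" if no rule fires.
    """
    if "character is human" not in facts:
        return "unknown"
    # Default rule: humans are sad when a parent dies, unless some
    # earlier fact in the story cancels that default.
    if "father died" in facts and "extenuating circumstances" not in facts:
        return "sad"
    return "unknown"

print(infer_feeling({"character is human", "father died"}))  # sad
```

The interesting question, of course, is whether the imaginative route and this mechanical route are distinguishable from the outside at all.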
This is a theme I keep coming back to over and over in my writing... the boundary between a mechanism and a person. (Which has nothing to do with their physical form.) There's a line in the current story I'm working on: "Any sufficiently complex system is indistinguishable from a sapient mind". At some point, there is the emergent property of self-awareness. I would go out on a limb and say that a perfect stimulus/response replica of a human must, to be perfect, include that self-awareness (or so perfect a simulation of it that it makes no difference), because self-awareness is part of what determines our responses to stimuli. We consider multiple alternatives and imagine their future consequences. A simulation that included all of the effects of self-awareness would be, de facto, self-aware. It would, left to itself, ask questions about its own existence, because that is our response to the stimulus of existing. If a perfect stimulus/response mechanism failed to act in such a way -- to seek stimulus after an interval, to create when bored, to make up a story to explain a phenomenon it couldn't explain -- it would not be perfect. Again, for all I can prove, only I have actual self-awareness. Thus, the argument seems somewhat moot: a planet of zombies would not be distinguishable from a planet of non-zombies, right down to the zombies claiming consciousness to each other, philosophizing about it, writing books and treatises on the topic, etc.
Did you ever read Richard Bach's Illusions?
...as a twist on Descartes's cogito, sort of.
Basically, a zombie is a hypothetical being that acts as if it has mirror neurons, without actually having mirror neurons. Why do I doubt that such a being exists?
The point here is that neurologists have actually documented physical structures (mirror neurons) that show how an interior life is sensed. Philosophy is no longer needed to explain how this phenomenon occurs.
All three are likely to become extinct for obvious reasons.
It's a little bit of an overstatement, but based on the behaviour of all three groups there's a germ of truth buried in it.
There's a weird notion that evolution will perfectly adapt things to their environment. This is not actually the case.
In the case of sociopathy, assuming that there's an evolutionary explanation for why it persists assumes quite a lot.
Off the top of my head:
That it is heritable.
That it provides a breeding advantage.
That it provides a survival advantage.
But, you know, reproduction does toss out a fair few imperfect creations. People usually don't assume that there's an evolutionary reason people with subnormal intelligence are born fairly frequently, but they seem not to apply the same logic to people with abnormal emotions.
Or as tribal warlords.
People are trying to find an evolutionary advantage to sociopathy because it happens to be... well... evolutionarily advantageous. Sociopaths, or at least male sociopaths, do have a demonstrable breeding advantage in both modern and pre-modern societies.
Whether it is heritable is an open question.