
Rudy #2. The Nu Yu Lifebox.

Nice cat photo, Charlie! I like the green eyes.

In my previous post, I was talking about the idea of creating an online simulation of oneself--what I call a lifebox. For now, the lifebox is simply a largish database of your writings, spoken words, and/or images, with links among the components, and a front-end that's an interactive search engine.

[At this point I should explain that I'm prone to illustrating my blog posts with images that aren't always quite rigorously related to the topic. But here, clearly, we see a painting by me in which a person is inputting their life's information into a keyboard. Or vice-versa.]

Today I have a few more remarks on the lifebox concept, although at this point much of this material has already been anticipated in the impressively vigorous and high-level discussions in the comments. But I'll go ahead and post this anyway, and move on to something else in a day or two.

As I've been saying, my expectation is that in not too many years, great numbers of people will be able to preserve their mental software by means of the lifebox. In a rudimentary kind of way, the lifebox concept is already being implemented as blogs. People post journal notes and snapshots of themselves, and if you follow a blog closely enough you can indeed get a feeling of identification with the blogger. And many blogs already come with search engines that automatically provide some links.

As I mentioned before, I recently published an old-school version of a lifebox, that is, a written autobiography, Nested Scrolls.

But today I'm talking about something more interactive. If you're prone to placing large amounts of material on your website, you're already partway to having a lifebox. You can base it on a fairly cheap trick--make a so-called lifebox page with a search box that uses, say, the Google search engine to comb through all the material you've placed on your website. As I already mentioned in the comments, I myself set up a primitive Rudy's Lifebox page in 2010.
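As a minimal sketch of that cheap trick: the Python below builds a Google query restricted to a single site with the site: operator. The domain is a placeholder, and this is only one way such a page might be wired up, not necessarily how the actual Rudy's Lifebox page works.

```python
# A lifebox front end as pure site-restricted search.
# The domain below is a placeholder for wherever your material lives.
from urllib.parse import urlencode

def lifebox_search_url(question: str, site: str = "www.example.com") -> str:
    """Return a Google search URL scoped to one website via site:."""
    return "https://www.google.com/search?" + urlencode(
        {"q": f"{question} site:{site}"}
    )

print(lifebox_search_url("what do you think about cellular automata"))
```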

At this point, the Rudy's Lifebox page has a totally feeble front end. But it should be feasible to endow this kind of website-search-app with interactive abilities; people could ask it questions and have it answer with appropriate links and words.

For a fully effective user experience, I'd want my lifebox to remember the people who talked to it. This is standard technology--a user signs onto a site, and the site remembers the interactions that the user has. In effect, the lifebox creates mini-lifebox models of the people it talks to, remembering their interests, perhaps interviewing them a bit, and never accidentally telling the same story twice--unless prompted to.
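A toy sketch of that per-visitor memory, with every name invented for illustration: the lifebox keeps a mini-profile for each visitor and declines to retell a story unless prompted.

```python
# Per-visitor memory for a lifebox: remember interests and which
# stories have already been told. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Visitor:
    name: str
    interests: set = field(default_factory=set)
    stories_heard: set = field(default_factory=set)

class Lifebox:
    def __init__(self, stories: dict):
        self.stories = stories      # story_id -> text
        self.visitors = {}          # visitor name -> Visitor

    def tell(self, who: str, story_id: str, prompted: bool = False) -> str:
        visitor = self.visitors.setdefault(who, Visitor(who))
        if story_id in visitor.stories_heard and not prompted:
            return "Stop me if I've told you this one... ah, I have."
        visitor.stories_heard.add(story_id)
        return self.stories[story_id]

box = Lifebox({"beatnik_days": "Back in my beatnik days..."})
print(box.tell("alice", "beatnik_days"))  # tells the story
print(box.tell("alice", "beatnik_days"))  # remembers; doesn't retell
```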

Suppose you developed a lifebox version of yourself that worked quite well. Then what? You might start letting your lifebox carry out those online-interview gigs that you don't have the time or energy to fulfill.

Your lifebox could become a sophisticated spam-bot. It might actively go out and post things on social networking sites, raising your profile on the web and perhaps garnering some in-person speaking invitations. This could of course go too far--what if your lifebox became so good at emulating you that people preferred its outputs to those of your own cantankerous self?

So is a lifebox a full personality upload? Well--no. As yet, there's no ghost in the machine. On their own, your memories and links aren't enough to generate an emulation of you.

This said, another person who studies your memories and links can get into your customary frame of mind, at least for a short period of time. We humans are universal computers and we're exquisitely tuned for absorbing inputs in the form of anecdotes and memories. One's memories and links can act as a special kind of software that needs to be run on a very specialized kind of hardware: another human being. Putting it a bit differently, your memories and links, if properly presented, become an emulation code that runs on human beings. A subtle point.

Looking further ahead, how would one go about creating a human-like intelligence that would emulate you on plain dumb computers? That is, how would we animate a lifebox so as to have an artificial person?

A short answer is that, given that our brains have acquired their inherent structures by the process of evolution, the likeliest method for creating intelligent software is via a simulated process of evolution within the virtual world of a computer. There is, however, a difficulty with simulated evolution -- even with the best computers imaginable, it may take an exceedingly long time to bear fruit. The 1990s craze for artificial life has pretty well petered out.

An alternate hope is that there may yet be some fairly simple model of the working of human consciousness which we can implement in the coming decades. The best idea for a model that I've seen is in a book by Jeff Hawkins and Sandra Blakeslee, On Intelligence. Their model describes a directed evolution based upon a rich database that develops by continually moving to higher-level symbol systems. One of their bits of evidence is that, looked at simply as wetware, the parts of the brain that do sound or touch or vision are all about the same as each other: stacked layers of neurons.

And we can still dream that there's some really simple AI trick that we haven't yet thought of. Something as slick as simplifying arithmetic by using zeroes and positional notation. An exponentially better approach.

But do keep in mind that, even without an intelligent spark, a lifebox can be exceedingly lifelike. If you have a big enough data set, a search engine can do quite a good job of simulating thought. Indeed, when I'm in conversation with strangers, well over ninety percent of my utterances are being produced via simple search and display methods.
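A bare-bones illustration of how far search-and-display alone can go: rank stored remarks by word overlap with the question and return the best hit. The remarks are placeholders; a real lifebox would sit on a proper search engine.

```python
# "Thought" as retrieval: score canned remarks by word overlap
# with the question and display the best one. A toy, nothing more.
from collections import Counter

REMARKS = [
    "A lifebox is a big database of your words with a search front end.",
    "Simulated evolution may take an exceedingly long time to bear fruit.",
    "Even a stone computes, with its octillion particles jiggling around.",
]

def respond(question: str) -> str:
    q = Counter(question.lower().split())
    return max(REMARKS,
               key=lambda r: sum((q & Counter(r.lower().split())).values()))

print(respond("how long would simulated evolution take"))
```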

But I live for the happy moments when I stumble, get something wrong, and actually say something inappropriate. Like mentioning that, in the light of quantum computation, even a stone is performing insanely fast computations all the time--with its octillion or so particles jiggling around like balls on springs. So the stone is potentially as intelligent as a person. All we need is the i/o...

54 Comments

1:

Having only just caught up with this and the other post I think you've skimmed over something fairly important, at least to me.

I don't want a chat bot hooked up as the front end to my (to steal a phrase from our host) lifeblog. Until it has a moderately reasonable facsimile of my personality I'm not interested.

For example, I'm friends with the local coffee shop owner. We share a love of rugby and some practical experience of computer programming. I also know his wife. She hates rugby, but with her I share some experience in 3D design work - although with her being an architect and me a builder in virtual environments we have a lot of differences in skills and attitudes too. But my life box should be able to recognise him and chat about how badly his teams (national and provincial) are doing, how well mine are doing, latest news and gossip and so on. With her, not a mention of rugby unless she raises it first, but happily discussing 3d modelling, architecture and the perversities of the local planning system.

I could carry on and on about other social groups and conversational topics and gambits - but hopefully the point is clear. Certainly when interacting with someone that knows (or knew) me (or their personality imbued life bot of course) access to those other parts of my conversational range should be allowed but only very rarely if ever volunteered. If I do lifeblog fully, I wouldn't want a client to be able to access my rude comments about their stupid request, bizarre ideas and gross stupidity. Not that they're all subject to this, but I was contacted for "a short, urgent job" in June. 23rd Dec and I still haven't had the information required - and acknowledged as required by the client - to do the job sent to me. Questioning his parentage and sense of urgency might not be fair, but vents frustration - but I'd still rather he didn't know about it!

And that, I guess, is the problem.

At what point, if it starts displaying behaviour similar to my personality, does it become a genuine alternate to me? Is this an extension of the Turing test - not only does it appear human, it appears to be a specific human?

2:

It would be interesting to find a way to integrate all your lifebox data with something like a CyberTwin front end. You'd probably be pretty much there.

3:

Perhaps we can frame it this way.

Intelligence = knowledge + personality, with personality being the (gradually evolving) patterns of preferences that determine, given our current state of mind and set of immediately external stimuli, the next 'step' in the pattern to take.

Might be relevant for AI too – instead of trying to model adult brain functionality, we should be trying to model baby brain functionality (which has to be easier to do – at the very least, I'm sure we have passed the ability to beat a baby Turing test a long time ago).
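Read programmatically, this framing casts personality as a slowly-updated policy function from (current state of mind, external stimuli) to the next step. A sketch of that reading, with every name invented for illustration:

```python
# Personality as an evolving policy over (state, stimuli),
# following the comment's framing. Everything here is illustrative.
class Personality:
    def __init__(self, preferences: dict):
        self.preferences = preferences          # topic -> weight

    def next_step(self, state: str, stimuli: list) -> str:
        # Given the state of mind and the stimuli on offer, take the
        # step the current pattern of preferences ranks highest.
        choice = max(stimuli, key=lambda s: self.preferences.get(s, 0.0))
        return f"[{state}] talk about {choice}"

    def evolve(self, topic: str, delta: float = 0.05) -> None:
        # The gradually evolving part: preferences drift over time.
        self.preferences[topic] = self.preferences.get(topic, 0.0) + delta

p = Personality({"rugby": 0.9, "architecture": 0.4})
print(p.next_step("relaxed", ["rugby", "architecture", "weather"]))
```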

4:

One idea, which would only be practical with massive AGI already a fact, is using lifebox data along with DNA, medical records, photos etc to do an actual reconstruction of a person. If, for example, the reconstruction of Dirk Bruere was so accurate it had me typing these exact words at this exact time on this blog, I would say that it is me. The difficult bit, requiring the AGI, would be interpolating missing information. It would have to "guess" the info, run the prog to see if these words matched the original, and if not try again.
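The loop described here is ordinary generate-and-test, just at an absurd scale. A toy rendering, in which one hidden number stands in for all the missing information and a string comparison stands in for "do the words match the record":

```python
# Generate-and-test reconstruction in miniature: guess the missing
# info, run the simulation, compare with the known record, retry.
# The "simulation" is a deliberately trivial stand-in.
import random

KNOWN_RECORD = "typing these exact words"
HIDDEN_INFO = 0.731     # the unknown the AGI must interpolate

def run_simulation(guess: float) -> str:
    return KNOWN_RECORD if abs(guess - HIDDEN_INFO) < 0.001 else "other words"

random.seed(1)
for attempt in range(1, 1_000_000):
    if run_simulation(random.random()) == KNOWN_RECORD:
        print(f"record matched after {attempt} guesses")
        break
```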

5:

The one thing that a baby does best is open-ended learning along with creation of ever-higher levels of inference and abstraction by observing and imitating the world. That's something we still can't do in AI. It may turn out to be relatively easy to do: the way babies seem to do it involves massive Darwinian selection of synapses and the neurons that connect them. There's a good deal of evidence that a human baby starts out with more than 1 1/2 times as many neurons as it will have after a year or so of development, and Edelman's "Neural Darwinism" theory that neurons and synapses are created and die off in an evolutionary process is gaining traction [1]. If this is the way we work, then we might be able to duplicate that process in a machine. The disadvantage is that we'd have to let the machine intelligence start at infancy or before, and grow up at approximately a human rate.

  [1] When Edelman first discussed the theory in public, a lot of scientists were highly skeptical, some downright insulting about it, but as supporting evidence has come in over the last 30 years or so, some (generally younger) scientists have started to accept the idea. Yet another example of science advancing one grave at a time.

    6:

    "The disadvantage is that we'd have to let the machine intelligence start at infancy or before, and grow up at approximately a human rate."

    Why? Just run it in a simulated environment at faster than realtime

    7:

    Eloise: Perhaps the job isn't as urgent as initially advertised? In my experience of similar client behavior, the cause is either that, or that the client is unwilling to admit he's found himself short of funds with which to pay for the work to be done.

    In general: I find this conversation provides yet another reason to keep up the tobacco habit, in spite of the politically dominant progressive urge to stamp out every form of vice. That is: even if it turns out I'm wrong to regard anyone who'd seriously advance such an idea as even more wildly hubristic than most futurists -- which, in a field including Vannevar Bush, is really saying something -- smoking two packs a day will do a lot to ensure I don't find myself forced to share my declining years with the babbling ghosts of dead rich people. (Or, worse still, their robot zombies.)

    I have no problem with the concept of artificial people, and I suspect I'd find a world with them in it richer than one without. I unrepentantly admit myself horrified by the thought of a world which contains such imperfect, mindless facsimiles of actual people as are described and seriously considered here -- to think of a human being, a person, any person, but especially anyone I've ever actually respected, reduced to little more than a MegaHAL bot and a batch of old quotes and blog posts, leaves me regretting my lack of a language with a stronger word for 'travesty'.

    If this is transhumanist "immortality", you can have it. I'll stick with futures imagined by people who, like me, aren't so terrified to acknowledge the inevitability of death that they prefer to imagine replacing themselves with a database and fifteen lines of Perl -- futures which aren't so unrecognizably alien that I can only imagine them appealing to people who despise their own humanity in the first place. Certainly I've known enough such people; I've been one of them myself, and thank God I'm not one any longer.

    And, finally, a direct challenge to our interlocutor: ...another person who studies your memories and links can get into your customary frame of mind, at least for a short period of time.

    Prove it.

    I'll grant that reading enough of a given person's writings can produce a change in one's own customary frame of mind, and that the result of that change may briefly offer a semblance of what it might be like to think like the person who did the writing -- I say 'may', of course, because such a proposition seems impossible of anything resembling proof. But any serious reader has encountered the phenomenon.

    But to assume that, by reading the entire non-fiction oeuvre, blogs &c. included, of a Heinlein or a Sturgeon or a Watts or a Stross -- or, for that matter and God forfend, a Rucker -- one may actually begin to understand the minute-to-minute experience of being that person? I find this unutterably presumptuous, not to mention even more impossible of proof than the last proposition I described in those terms. Perhaps I'm wrong; if so, I eagerly anticipate a no doubt trivial demonstration of precisely where my error lies -- and if not, then given the "if you can't hit it with a stick, it doesn't exist" sort of materialist essentialism with which the last comment thread was so amply shot through, I have to say I don't feel too bad about describing the whole idea as arrant nonsense.

    8:

    Seems to me that the answer's probably in having a blog hooked up to a social networking account. I'm picturing the Lifebox as most successful when applied to somebody who uses Facebook for 8 hours a day.

    It would still be fallible, of course, but it would have a much easier time homing in on individual users' interests and what their relationship to you is.

    9:

    The deprecation of chat-bots here is a little sad. They've advanced a lot - look at their results in the Loebner Prize, how many humans are fooled every year.

    Now, the interesting question is, would it be possible for Rucker to feed his novels and all his emails and posts into a chatbot as its corpus? Are the chatbots constructed in such a way as to make this useful?

    (I'd like to know just out of curiosity how well it'd appear to mimic me; I have tons of emails, IRC logs, writings, and online comments I could feed into it...)

    10:

    If, for example, the reconstruction of Dirk Bruere was so accurate it had me typing these exact words at this exact time on this blog, I would say that it is me.

    But if he's that close a copy, does he have free will?

    11:

    But to assume that, by reading the entire non-fiction oeuvre, blogs &c. included, of a Heinlein or a Sturgeon or a Watts or a Stross -- or, for that matter and God forfend, a Rucker -- one may actually begin to understand the minute-to-minute experience of being that person? I find this unutterably presumptuous, not to mention even more impossible of proof than the last proposition I described in those terms.

    I don't disagree with you at all. In fact, I'd say you completely stuck that one and while I could play harmony with your main theme on the subject of knowing Heinlein's soul through his novels - or whatever - I could go on about the foolishness of this for pages and pages, not to mention what Heinlein would say about it...

    No.

    I'm going to go a step further and say this: There is stuff about me which will never go into writing. It will never go into a database. It will not live in the cloud. It will certainly not be backed up. There are a host of things about which I feel guilty, foolish, ashamed, angry, hurt, embarrassed, etc. I will leave to everyone's imagination just how the things I feel guilty and embarrassed about have formed my personality, and affect my responses to others and to myself, but I'd say the effect of these things is pretty profound.

    Let's consider also just how many links exist in my own wetware database that connect to these awful things. What can remind me of them? What can stir the feelings of rage, hurt, shame?

    Now ask me if I want to live a thousand years still feeling guilty about something I did when I was twelve? Ask me how I feel about being forgiven by someone's computer ghost five hundred years from now? Ask me how much that would mean to me regardless of whether my wares were hard or soft?

    12:

    Eventually that may be possible, though there was considerable discussion about that in a previous thread here, and I don't think I was convinced that it is. But since we don't know what the baby brain needs in terms of an external environment to develop properly, it's going to take several generations of trial and error to find out, and I do mean nearly the equivalent of human generations, by having AIs develop in real environments and seeing what they need.

    And as Charlie pointed out in a previous thread, building AIs of that sort is heavily fraught with legal, ethical and moral issues that might be best not to mess with. We might be better off just building AIs without consciousness, or with a notion of self that identifies it with some human (like the Toymaker in Rule 34).

    13:

    "Just run it in a simulated environment at faster than realtime"

    This may be possible after we've got a few other AIs who could join in the speeded-up simulation. But for the first one, won't we want to have as much genuine human interaction as possible? You'd have to literally baby-sit it for years. Quite a project.

    14:

    Looking further ahead, how would one go about creating a human-like intelligence that would emulate you on plain dumb computers?

    It depends on which side of the interface the human intelligence sits. We know that humans project onto non-intelligent systems, from their pets to dumb objects. They even quite like interacting with machines, e.g. even the very non-smart Eliza.

    Conversely, even we would get a bit creeped out if another human responded identically every time we interrogated it in the same way (unless it is that social lubricant, like a greeting).

    I would propose that what makes a Lifebox interesting is novelty, offering slightly different responses. This may be due to responses having path dependence, or just plain randomness in the response.

    To this end, I really like the extremely simple models of brains based on CAs as explained in the Lifebox, the Seashell, and the Soul... Why not have the elements of the Lifebox connected not just statically, but also to the patterns in the CA simulating cortex? The different unfolding patterns would change the connection strengths and hence the sets of memories being returned, as well as leading to ever-widening memory sets that are constrained by the interrogator's responses to the Lifebox output.

    A cheap trick perhaps, but something that would appear to make the Lifebox more "animate" than a search engine.
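A minimal sketch of the CA-modulated retrieval idea from this comment, assuming nothing beyond "a CA runs in the background and wobbles the connection strengths"; the memories, weights, and rule choice are all invented for illustration.

```python
# A rule-110 cellular automaton ticks along and perturbs retrieval
# scores, so identical queries can surface different memories over
# time. Purely illustrative, not the model from Rucker's book.
RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    n = len(cells)
    return [RULE_110[(cells[i-1], cells[i], cells[(i+1) % n])]
            for i in range(n)]

MEMORIES = ["the rugby anecdote", "the planning-system rant",
            "the 3D-modelling story"]
STATIC_RELEVANCE = [0.50, 0.48, 0.49]   # what plain search would score

cells = [1, 0, 1, 1, 0, 0, 1, 0]        # arbitrary seed pattern
for tick in range(5):
    cells = step(cells)
    # Each live cell adds a small, shifting bonus to one memory.
    scores = [STATIC_RELEVANCE[i] + 0.05 * cells[i]
              for i in range(len(MEMORIES))]
    print(tick, max(zip(scores, MEMORIES))[1])
```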

    15:
    But my life box should be able to recognise him and chat about how badly his teams (national and provincial) are doing, how well mine are doing, latest news and gossip and so on. With her, not a mention of rugby unless she raises it first, but happily discussing 3d modelling, architecture and the perversities of the local planning system.

    Wasn't this already covered:

    For a fully effective user experience, I'd want my lifebox to remember the people who talked to it. This is standard technology--a user signs onto a site, and the site remembers the interactions that the user has.

    Did you mean that you wanted the box to have some type of pattern recognition bolted on, audio or video? That strikes me as being a nice feature, but hardly required. I react to email all the time in ways that are different from person to person solely on the strength of the naming code, after all: an email from Mom about my plans for Christmas will get a much different reply than one from one of my colleagues.

    16:

    "But if he's that close a copy, does he have free will?"

    No, not until the copy is deemed accurate and then "released" as complete.

    17:

    "And as Charlie pointed out in a previous thread, building AIs of that sort is heavily fraught with legal, ethical and moral issues that might be best not to mess with."

    My standard riposte to that argument is: "Meanwhile, in China...". If it is deemed useful, it will be done

    18:

    I think we need to stop and ask ourselves what the purpose of a Lifebox is. Is it really something you would consider a version of you, fit to carry on your life? Is it syntactic sugar on top of a corpus of writings and thoughts that make it easy for others (especially your friends, relations, and descendants?) to find out about you after you die? Is it an animatronic image of you that can be installed in a theme park?

    And we need to recognize that there's a pretty vast gulf between a Lifebox, however sophisticated, and a true copy of a human brain including a coherent set of brain states. If, and it's a big if, we actually can make a faithful copy of a brain, such that it can act as a conscious person, a true copy of the human, that's a lot more than a database with a fancy expert system front end, and it raises a huge set of legal and moral issues that a Lifebox doesn't. On the other hand, Eliza proves it wouldn't take a very sophisticated Lifebox to convince many, perhaps most, people that in fact it is a copy of a person, and the resultant debate about legality and morality would have its own set of complications.

    19:

    The same argument, with the same kind of legal and moral consequences, can be made about the traffic of humans for sex tourism in Thailand (or anywhere else, for that matter, but that's about the most visible example). Are you saying that we shouldn't object because someone else is doing it, and we might get some benefit?

    20:

    Give me a definition of "free will" that's grounded in the material world (i.e., not dependent on a dualistic explanation of human consciousness) and I might be able to answer that question. As it is, most definitions I've heard are pretty incoherent when actually applied to the way people (and for that matter, animals) act and react to events in their environments.

    21:
    If, for example, the reconstruction of Dirk Bruere was so accurate it had me typing these exact words at this exact time on this blog, I would say that it *is* me.

    Well, you might. But if I knew that this was because a look up table was matching input to a prerecorded response, I definitely would not. Note that this wouldn't be any different in principle than using branching storyline books to play at aerial dogfights (I had a couple of these back in the 80's; does anybody remember the game? I think it was called Aces but I can't track it down on Google.)

    22:
    Perhaps I'm wrong; if so, I eagerly anticipate a no doubt trivial demonstration of precisely where my error lies -- and if not, then given the "if you can't hit it with a stick, it doesn't exist" sort of materialist essentialism with which the last comment thread was so amply shot through, I have to say I don't feel too bad about describing the whole idea as arrant nonsense.

    Going with this standard, you can't even do this with people you've known all your life. Which seems pretty sad to me; but I'm one of those people who treat pets as family and don't think twice about spending hundreds (actually, thousands) of dollars for any medical problems they may have, so you can write me off as too sappily sentimental to have a worthwhile opinion if you like :-)

    23:

    ... what the purpose of a Lifebox is. ...Is it syntactic sugar on top of a corpus of writings and thoughts that make it easy for others... to find out about you after you die? Is it an animatronic image of you that can be installed in a theme park?

    I think a very compelling commercial case would be for adding to an animatronic person at a theme park or museum. Mme. Tussauds for example.

    I also think that it could make a very nice, simple front end for life-logged data. It wouldn't be bad as a "helper" when your own memories start failing, as it would respond to your own questions to yourself as you would. Yes, definitely getting creepy.

    24:
    And as Charlie pointed out in a previous thread, building AIs of that sort is heavily fraught with legal, ethical and moral issues that might be best not to mess with. We might be better off just building AIs without consciousness, or with a notion of self that identifies it with some human (like the Toymaker in Rule 34).

    To what extent could one of these pruned-down neural networks be deemed an expert system? ISTR a story from way back about humans being replaced with almost exact copies when they died. The copies were so good, in fact, that even people who knew them quite well could be fooled into thinking that no substitution had occurred (that was the point of the copy - for non-malevolent reasons). The only way to catch the copies out was to have them try to create something genuinely new, an act of which they were constitutionally incapable.

    Obviously, these copies could pass the traditional Turing test with ease. But fifty years on from the original story, it's easy to think of these android duplicates as "really" being non-conscious rule-based expert systems. In which case, were they "really" thinking?

    25:

    I postulate we already have "a largish database of your writings, spoken words, and/or images, with links among the components, and a front-end that's an interactive search engine," and it is called "Google".

    For the rest of it, AI and anything that can pass the Turing test in a meaningful way is decades off from where I am sitting. I don't understand why people think it is around the corner?

    26:
    I think we need to stop and ask ourselves what the purpose of a Lifebox is. Is it really something you would consider a version of you, fit to carry on your life? Is it syntactic sugar on top of a corpus of writings and thoughts that make it easy for others (especially your friends, relations, and descendants?) to find out about you after you die?

    Exactly so. And for the latter purpose, "life boxes" in one form or another have been with us for a long, long time. In fact, when I see family over Christmas, I'll be looking through my Mom's version: her cache of thirty or so photo albums. The earliest are black-and-white with the pictures secured by those sticky black corners and chronicle the days when my parents were young and Dad was a Devil-may-care hot-rodder that turned her head.

    So yeah, for this purpose I find the life-box very plausible--in fact, I see it as a refinement of an online photo-album where each picture and video can be indexed with suitable tags and also has the option for an accompanying text/speech commentary.

    Which brings up a related question: how comfortable is anyone with complete strangers flipping through those old family albums right now? For myself and for the most part, I don't particularly mind. But there are one or two (or three or four) pics of me that I really rather wish hadn't been taken :-)

    27:

    You know a lot more than I do. A really expert program, maybe. More than that? And I would still be dead, dead, dead. Oh well, I'm not likely to be proven wrong before I die.

    28:

    SoV you just described Facebook no?

    29:

    Honestly, the most likely point of origin for an AI-like human simulation would be the ad and content optimization engines of Google and Facebook. That is the place where there is a ton of money to be made in mathematically modeling what makes a person tick, to the point where he can be simulated.

    Likely to be something that happens accidentally rather than on purpose

    30:

    "The best minds of my generation are thinking about how to make people click ads," "That sucks." Jeff Hammerbacher

    31:
    SoV you just described Facebook no?

    I don't know. I don't have a Facebook account and have zero interest in ever getting one. But if it's some sort of annotative scrapbook with additional functionalities, I suppose so.

    32:

    I'm saying that once it is done nobody is going to turn down the benefit. Afterwards there will be plenty of law passing and moralizing and speeches about "it must never happen again". Same as we still use medical data the Nazis accumulated on involuntary test subjects in the concentration camps.

    34:

    Hey there, Alex R.! I have to admit I wasn't expecting the first response to any of my points to express any agreement at all; I was honestly expecting either to be ignored as unworthy or to receive a thundering denunciation in reply, so thanks very much for the pleasant surprise. That said, though, I have to admit I don't find your argument terribly consonant.

    On the one hand, of course I agree that -- as with anyone -- certain of the events, which went into making me-now the man he is, are the private property of me and the other people involved, and generally speaking are nobody else's goddamned business. It seems to me that this assertion could be controversial only to a few people whom even other historians would find extremely weird, but maybe I'm wrong about that; if so, and as with everything else I've said, I invite anyone who disagrees to take his best shot.

    On the other hand, I don't know that anyone is suggesting a lifebox's contents should be otherwise; our interlocutor states in his initial post on the matter that it'd be created with the participation of the subject, and I assume 'participation', in that usage, is intended to imply both consent and discernment as to what ends up in the lifebox and what doesn't.

    (I could be wrong about that, in which case I'd appreciate a brief explanation of what that 'participation' is intended to mean -- and a rather longer explanation of what moral or ethical justification there could possibly be for making an interactive copy, however faithful, of someone's personality or mind, without that person's consent and without giving that person control over at least what goes in to the thing, if not over the final disposition of the result.)

    The 'lifebox' concept presented here, the idea of a simulacrum built to serve as an imperfect mimic of someone who is now dead, doesn't bother me because it presents a risk of having my secrets stolen from me; as is generally the case with just about everyone, my secrets mean a lot to me and nothing to anyone else, so even if they all became available on the Internet five seconds after my heart ceased to beat, there's no reason to imagine anyone would care.

    Besides, as with the meanderings, no matter how seriously taken, of every futurist ever, we're talking about things which will not happen soon if they ever happen at all. I'm confident that a brief period of decomposition, followed by destruction in a blast furnace and dispersal of the resulting detritus, will sufficiently disorganize my physical remains to preclude the possibility of any information being recovered from them short of the kind of far-future super-SF capabilities that'd need an entire new cosmology, as yet unsuspected throughout the entire span of human history, to become even remotely plausible.

    (In short: per family tradition, once I'm no longer in need of my battered old carcass I intend that it be cremated and the ashes scattered over my ancestral home, and I have no cause to doubt that my wishes in that regard will be obeyed. Any time-traveling future historians with brain-copying machines are invited to make the most they can of the result -- and I wish them the very best of good luck in finding it.)

    And as far as living a thousand years feeling guilty over something you did when you were twelve, I'd argue you have no need in any case to worry about that, because (unless I'm missing something very fundamental) neither continuity of experience, nor even continuity of consciousness, is implied at any point in the discussion. Even if we were talking about a perfect copy, after all -- even were such a thing possible -- to describe it as 'immortality' would still be a sick joke, because it wouldn't be you; it would be as far as it was concerned, of course, but you would know better.

    [I had intended to continue further along the same line of argument, but the result ended up being something of a thundering denunciation in its own right on the subject of transhumanism in general, and thus not something which seemed terribly topical (or particularly polite) to deposit upon Mr. Stross's blog.

    I therefore invite any interested parties to join me here, where the only ironclad rule of discussion is that I'll delete anything the hosting of which would be criminal in the jurisdiction where I live; unlike those lucky enough to be dissatisfied by the prevailing hegemony only where it fails of completion, I've been forced to accustom myself to the existence of arguments whose premises, operations, and conclusions I find entirely abhorrent, so you can come over to my place and say whatever the hell you please.]

    35:

    "For the rest of it, AI and anything that can pass the Turing test in a meaningful way is decades off from where I am sitting. I don't understand why people think it is around the corner?"

    That's actually two statements. The first is about the original Turing Test. I expect a machine that passes that test, by fooling most ordinary people most of the time, to be here within a decade. However, the goalposts of the test have been subtly moved - now it's about fooling experts in computer science and psychology all the time.

    This latter test is probably decades away.

    36:

    I have no idea what you're getting at with this comment, let alone whether it's accurate. And uh, Dirk? I think you can take it as pretty much given that I already know all the basic stuff. I haven't said so before, but you come across as rather . . . young. Now, I don't think that being young - or at least younger than me - is a particularly damning circumstance. In fact, it really doesn't have a place in this conversation, so I've refrained more than once from making that observation. Until now. Enough said, eh?

    37:

    "And we can still dream that there's some really simple AI trick that we haven't yet thought of. Something as slick as simplifying arithmetic by using zeroes and positional notation. An exponentially better approach."

    Note that once those "simple" things are discovered there still remain a lot of complicated society thingies to work out in those machines before any AI can be useful in any way:

    http://www.questionablecontent.net/

    That's where the fun really starts.

    38:

    I've force-fed various instances of MegaHal with the complete works of a few authors from Project Gutenberg, song lyrics etc. It works pretty well at producing Wasteland-type mixes and postings that resemble the more rabid fans of particular bands.

    I've also fed them with just the dialogue, and with scripts for various movies. It's not hard to do once you have a method for separating your input text into chunks with double linefeeds between the sections.

    A.I. it ain't, but what interests me is that the best output resembles the writing of a psychotic - who is also trying their best to simulate being a human.

    It's also very handy for getting a verse for a new song.
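For the curious, the machinery under a MegaHAL-style babbler is a Markov model trained on the corpus; MegaHAL proper uses higher-order forward and backward models plus a keyword step, but the first-order toy below shows the basic feed-text-in, get-remix-out flavor.

```python
# First-order word Markov chain: train on a corpus, then random-walk
# the follower table. A toy cousin of MegaHAL, not its algorithm.
import random
from collections import defaultdict

def train(text: str) -> dict:
    followers = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        followers[a].append(b)
    return followers

def babble(followers: dict, start: str, max_words: int = 15) -> str:
    out = [start]
    while len(out) < max_words and followers.get(out[-1]):
        out.append(random.choice(followers[out[-1]]))
    return " ".join(out)

corpus = ("the lifebox remembers the stories and the lifebox retells "
          "the stories to the visitors who ask the lifebox questions")
print(babble(train(corpus), "the"))
```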

    39:

    " I haven't said so before, but you come across as rather . . . young. Now, I don't think that being young - or at least younger than me - is a particularly damning circumstance. In fact, it really doesn't have a place in this conversation, so I've refrained more than once from making that observation. Until now. Enough said, eh?"

    Almost. I'll provide the missing snippet of data - I'm 58 http://www.neopax.com/dcbatlantis1.jpg

    40:

    What I was getting at is your comment: "a look up table was matching input to a prerecorded response", is a variation on the Chinese Room, with all the objections that go with it, which you assure me you are aware of.

    41:

    "that it'd be created with the participation of the subject"

    Rudy, there are two sides to everything and if at least one of those sides discloses an event, picture, conversation, etc, then it is out there

    Get a big enough social graph and the effect will be that most everything will be inferrable even without someone's consent

    Also, Rudy, if you are not familiar with modern-day social networking sites (Facebook, Twitter, etc)--and I'm not saying you have to like them, or approve of them, or be an active user of them, but you should at least understand what they do and the features they offer--it might be a good idea to study up.

    42:

    An interesting question is that of how much data is needed to fully reconstruct a personality given that some interpolation will be needed for missing events or internal thoughts.

    43:

    @Dirk I should clarify "meaningful": I was using it in the "fool someone who actually knew the person" sense, like Rudy is suggesting, which is a ton harder than a conversation about the weather and how the Dodgers are doing.

    Agree we are pretty close to the pure Turing test criteria being met

    44:
    What I was getting at is your comment: "a look up table was matching input to a prerecorded response", is a variation on the Chinese Room, with all the objections that go with it, which you assure me you are aware of.

    And can you think of why this particular "variation" might be significant? I'll give you a hint: The Library of Babel and the second law of thermodynamics.

    We appear to be roughly the same age, btw, and apparently go to the same barber.

    45:

    First: I am not Rucker.

    Second: Congratulations! You've reinvented Wikipedia. But I don't get the impression that was the purpose of the exercise.

    46:

    (Well, more like reinvented Facebook, I suppose.)

    47:

    Erm ..

    The Map is not the country
    The finger is not the Moon
    The Box is not the being

    What then, is the true Thing?
    Is it in itself?
    Itself resides in where?

    48:

    The Platonic Realm, along with its mathematical friends.

    49:

    If the object of the exercise is to try to emulate the experience of conversing with a person now gone, how does one properly present an individual's ignorance and cussedness?

    Instances of cussedness: Outside of examples like this, I never use the terms "proactive" or even "sci fi." I loathe both Coke AND Pepsi, having started out simply not liking them and then experiencing a half century of their ad campaigns.

    I'd regale you with instances of my particular ignorance, but being ignorant of those, I'll leave them to your imagination. (Please be merciful.)

    My point is that these are as much factors in a personality as knowledge and experience are.

    50:

    For that matter, how much of personality is controlled by things which we have no conscious knowledge of, and no way for others to see directly? Attitudes and emotions resulting from causes of which we no longer have any memory for instance. The environment in the womb for the last three or four months of gestation must have a large effect on psychological development, but even if we recorded a fetus' ultrasound scans continuously for that time I doubt we'd be able to interpret the effects on the mind in any definitive way.

    51:

    Rudy seeded: "This said, another person who studies your memories and links can get into your customary frame of mind, at least for a short period of time. We humans are universal computers and we're exquisitely tuned for absorbing inputs in the form of anecdotes and memories. One's memories and links can act as a special kind of software that needs to be run on a very specialized kind of hardware: another human being."

    Aaron countered: "...I'll grant that reading enough of a given person's writings can produce a change in one's own customary frame of mind, and that the result of that change may briefly offer a semblance of what it might be like to think like the person who did the writing..."

    Here's a review of a very successful "method editor"--Bev Kimes did her best to inhabit the authors that she edited (toward the end of the book review): http://speedreaders.info/253-equations_of_motion_by_willam_f_milliken_1

    52:

    brucecohen pdx @ 50: Or life-changing (in my case, very nearly life-ending) experiences? Resulting in differing outputs?

    53:

    I like to use the word "twinking" for the process of getting so deeply into a work of art (book, painting, movie, scientific oeuvre) that you have the feeling of emulating the writer/artist/scientist.

    I talk about twinking in my nonfiction book THE LIFEBOX, THE SEASHELL, AND THE SOUL, and in my recent autobio NESTED SCROLLS.

    54:

    I really like the lifebox idea, but I wouldn't like to be limited to one social-media platform (dependency and surveillance issues, mostly). I also would like to be able to restrict and guide anyone who interacts with the bot to the stuff they find most interesting, and even restrict some stuff to a close circle of friends.

    A tool that could do this and integrate all my info from Gmail, Google Reader, WordPress, Google+, Facebook and Twitter would be a great start, and not too complicated to do, I think. (Just let everyone who interacts either start an account that I then personally put in a category according to access level, or interact anonymously, which will automatically put them at the lowest access level.)
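A sketch of that access-level scheme in a few lines; the tier names, accounts, and content items are all made up, and anonymous visitors default to the lowest level as the comment suggests.

```python
# Tiered lifebox access: each item carries a minimum level, known
# accounts carry owner-assigned levels, anonymous users get level 0.
PUBLIC, ACQUAINTANCE, FRIEND = 0, 1, 2

CONTENT = [
    (PUBLIC, "published posts and essays"),
    (ACQUAINTANCE, "travel photos and family albums"),
    (FRIEND, "unvarnished opinions reserved for close friends"),
]

ACCOUNTS = {"coffee_shop_owner": FRIEND, "new_reader": PUBLIC}

def visible(user=None):
    level = ACCOUNTS.get(user, PUBLIC)   # unknown or anonymous -> lowest
    return [text for min_level, text in CONTENT if min_level <= level]

print(visible("coffee_shop_owner"))  # everything
print(visible())                     # public items only
```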
