
5 Magical Beasts And How To Replace Them With A Shell Script

Guest post by filmmaker, game designer, comics author, and person who should really take a holiday some time Hugh Hancock

As an author of fictions about demonology gone horribly wrong, and about the avoidance and escape of previously-bound supernatural guardians, I'm thrilled, fascinated and somewhat disturbed to learn that we're on the edge of an age of things that look a lot like supernatural servants.

Rather than apps, the smart money is now on bots - intelligent servants called and dismissed with specific incantations, capable of granting your heart's desire (assuming that desire is for an artisanal pizza or an Uber).

I went over this briefly on Tuesday, concluding that it's entirely possible we'll soon be able to summon a succubus - in the "perfect inhuman lover" sense, not the "explanation for brief sleep paralysis" sense - into our PCs.

(Fun side note: it turns out Ashley Madison was already using techno-succubi extensively in its affair-enabling business.)

And that led me to thinking. What other roles have humans traditionally attempted to summon, bind, control or conquer supernatural servants for? And to what extent have we managed to replace those with technology?

Let us wander off into occult history and figure out what other mystical creatures we're cohabiting with these days, or will be soon...

House Faerie, Brownie, etc

We'll start in Charlie's and my home, Scotland, where one of the most mundane and obviously useful of magical servitors originates - the brownie.

Not to be confused with the delicious baked good, the brownie was a small faerie from the classic, surprisingly un-grand-and-threatening school of British faeriedom. It would help out around the house in exchange for the owners keeping up a certain - and often rather tricky - set of rules, from giving the little critter food to avoiding thanking it for its work.

They're fairly clearly the inspiration for Harry Potter's House Elves, although the latter are considerably more user-friendly.

Assuming you kept up those rules, Brownies would clean, churn butter, and perform other useful, mundane tasks.

Do we have a technological equivalent? We have several.

As the past owner of a Roomba, I find the description immediately familiar. The Roomba trundles around the house at night performing mundane tasks. It has a number of specific and rather irritating requirements (mostly to do with cables and the lack thereof) that need to be adhered to or it'll refuse to cooperate. It initially looks like a major boon, but after some experience with its services, one tends to find they're more hassle than just doing the darn job yourself. And it's more than a little capricious.

(The Roomba, incidentally, was based on a robot designed for picking up cluster bomblets from battlegrounds. I assume there were fewer cables to navigate there.)

Beyond that, home automation and what can euphemistically be called the "J.A.R.V.I.S. project" is obviously in full flow right now.

  • Mark Zuckerberg has decided to create himself an automated butler as his 20% project for the year.

  • One of the hot tech gadgets of the year is nothing more than a very sophisticated, automatable lightbulb.

  • Google's Nest is leading the home-automation pack right now with little mini-servitors that do all kinds of things, although apparently it does many of them badly.

  • And one of the hottest applications for the Raspberry Pi is as a home-automation center. Given previous experience with early-stage open-source projects, that should fit the description of the brownie perfectly. Potentially very helpful? Check. Finicky, awkward, and prone to unpredictable refusal to do its job? Check. Requires unexpectedly massive investment in propitiating its demands? Check.

Angels, Ayami, and Tutelary Spirits

John Dee, besides being the original 007 and the likely inspiration for Shakespeare's Prospero, is credited as the originator of Enochian, which he claimed was the language of the angels. Together with Edward Kelley, he claimed to use that language to summon and converse with angels.

Given that anyone reading this blog is likely to be pretty familiar with the concept of using strange languages to communicate with alien entities (including the darkest and most occult of communication methods, the Facebook API), how close are we to a technological equivalent for Dee's angels?

Well, it turns out that the first and most important function of talking to angels in Enochian is to, erm, learn Enochian. In fact, this is a theme that runs throughout real-world occultism: one of the main reasons you learn magic to summon a magical creature is to learn more magic from it. That's partially because real-world occultism tends to be much closer to mysticism, religion, and spiritual practice than it is to flinging Magic Missile into the darkness, and partially because, hell, where else are you getting this otherworldly info from?

(We'll get on to Hell in a moment.)

It goes well beyond medieval Western occultism - shamanistic traditions have the ayami, the tutelary spirit or spirit-spouse. Ancient Greek tradition had the daimonion. Tutelary deities appear in Korean, Native American, and many other religions. In Christianity, at least according to some interpretations, the Holy Ghost is amongst other things a tutelary aspect of the deity.

Likewise, it turns out that one of the things technology and coding are really great at is teaching you more technology and coding. My personal favourite of the advanced-learning bunch is Codecademy, which quite literally uses code to teach you to code, and rather successfully. But the world of online learning in general is clearly huge, extremely successful, and massively addictive.

Some "Super-MOOCers" (the term for people who take a lot of online courses) have taken 50, 100 or more of these relatively in-depth courses, often on very advanced subjects. And speaking to some of them at the Coursera conference a few weeks ago, it's clear that the sudden availability of knowledge from the Internet is a boon they'd cheerfully have negotiated with supernatural entities for. "Freedom" and "Joy" are concepts that come up a lot when talking to serious learners about their sudden access to world-leading experts through the Internet.

Interestingly, there's another mystical summoned creature that fits rather well here: the homunculus. After all, what are MOOCs doing but creating a small version of the magus (or professor) you wish to consult, and thus enabling the magus themselves to be in many more places at once, using their knowledge to do many more things?


Demons

The demon is the Swiss Army Knife of Western occultism.

It's fairly clear, for example, that the author of the Ars Goetia in the Lesser Key Of Solomon is a spiritual ancestor of the RPG community. Had they been born a few centuries later, they'd be right alongside Our Gracious Host in the credits for the Fiend Folio. The Ars Goetia comprises in large part a listing of the 72 demons it allows the user to summon, ordered by noble rank, cardinal direction, and useful skills.

  • Duke Agares "teaches languages, stops and retrieves runaway persons, causes earthquakes, and grants noble titles", according to his Wikipedia entry.
  • Duke Valefar, by contrast, "is in charge of a good relationship among thieves". He also commands ten legions of demons, each of which presumably has its own unique skillset.
  • Great President Buer, aside from sounding like the recently-ascended leader of a less-stable South American nation, "teaches natural and moral philosophy, logic, and the virtues of all herbs and plants, and is also capable of healing all infirmities (especially of men) and bestows good familiars".

Each entry contains seals and other diagrams for summoning and communicating with whichever entity suits your desires at the time.

Reading through the grimoire, it reminds me of nothing so much as a supernatural version of Fiverr, the enormously successful service for getting a massive variety of tasks done for, well, a fiver. (Five dollars, that is. £3.74 in the UK last time I checked).

Much like demonology, Fiverr comes with plenty of existential risks.

Whilst you'll probably be OK if you just summon the Demon Of Podcast Transcription, barring a few hilarious misspellings or the chance that their college schedule gets busy and they vanish off the face of the earth, other rituals are considerably more advanced. Be cautious about summoning the Duke of Logo Creation, lest it tempt you unawares into the fourth circle of Hell where the copyright lawyers wait for the unwary.

And, dear reader, we must entreat you most sincerely to consider carefully whether your skill, guile and spiritual advancement is sufficient to summon the Great President Of Search Engine Optimisation, for the deepest pits of Hell, crafted for you by Google's anti-spam team, await those who treat with such an entity without due care and spiritual purity.


The Golem and the Homunculus

And here we loop right back round to where we started: AI and chatbots.

It was the ultimate goal of many schools of occultism to create life. In Muslim alchemy, it was called Takwin. In modern literature, Frankenstein is obviously a story of abiogenesis; not only does the main character explicitly reference alchemy as his inspiration, but the novel is partially credited with sparking the Victorian craze for occultism. Both the Golem and the Homunculus are different traditions' alchemical paths to abiogenesis, in both cases partially as a way of getting closer to the Divine by imitating its power.

(All of this is somewhat complicated by alchemists' tendency to write about their Art in a way that was, essentially, trolling the unworthy. Jabir ibn Hayyan, for example, who wrote about Takwin extensively, also wrote that part of the purpose of his writings was to "baffle and lead into error everyone except those whom God loves and provides for". Other alchemists followed this tradition, meaning that it's hard to tell exactly where they were aiming, and whether the entire line of reasoning can be summed up as "trololololololololol".)

And abiogenesis has also been the object of fascination for a great deal of AI research. Sure, in recent times we might have started to become excited by its power to create a tireless servant who can schedule meetings, manage your Twitter account, spam forums, or just order you a pizza, but the historical context is driven by the same goal as the alchemists': create artificial life. Or more accurately, create an artificial human.

Will we get there? Is it even a good idea? One of the talks at a recent chatbot convention in London was entitled "Don't Be Human". Meanwhile, possibly the largest test of an intended-to-be-humanlike - and friendlike - bot is going on via the Chinese chat service WeChat.

And that's a clue to the problem that chatbots are trying to solve, and the magical beast that we can't yet manage to recreate.


The Witch's Familiar

It's easy to mistake the witch's familiar for a supernatural pet. And we can already do those. From Tamagotchi to Aibo, mechanical pets more or less work.

The latest and most spectacular example of that success is in Valve's VR experiment "The Lab", which features a robot dog. It's a massive success. One friend of mine recently spent 20 minutes solid playing with the dog, ignoring all the more conventionally game-like options on offer to throw a virtual stick and rub a virtual belly.

But familiars are more than that. In myth and occult history, they're non-human companions, intelligent and aware. Sometimes they're presented as demons summoned to aid the witch in her dark magic - again, intelligent and aware companions - usually by sources that are, shall we say, less than friendly to the occult persuasion.

And that's where a lot of the chatbot research is going. To the point where we can summon an artificial friend.

That's a very noble goal. Loneliness is a massive problem in the world, and it's extremely harmful to both happiness and health - its effects on mortality rate are startling.

Will we get there? Maybe. The chatbot I mention above is specifically of interest here because it's essentially designed as a virtual friend. Eliza, the most famous chatbot, was designed to emulate a virtual therapist - not the same thing, but similar.
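Eliza's trick, for what it's worth, was startlingly simple: keyword patterns plus pronoun reflection. A minimal sketch of that approach (the patterns here are illustrative inventions, not Weizenbaum's original script):

```python
import re

# ELIZA-style sketch: reflect first-person fragments back at the speaker.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."

print(respond("I feel trapped by my job"))  # -> Why do you feel trapped by your job?
```

That a few dozen lines of this can feel like a sympathetic listener is rather the point: the bereaved, the lonely, and the stressed do a great deal of the conversational work themselves.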

And the person who conquers this problem - who manages to summon the final spirit - will become incredibly powerful and wealthy. So the race is on.

What do you think? Any summoned servitors I missed? Do you think the "virtual friend" will become a reality?



Reminds me of a great quote in Ronald Hutton's Triumph of the Moon: A History of Modern Pagan Witchcraft: "Traditional scholarly magic was at basis an elaborate way of ringing for room service."

You've kind of covered those bases, I think. What came next in the history of magic (per Hutton) was Eliphas Levi, who, among other things, came up with the idea of high and low magic. He saw high magic as mystical union with the divine (and where we get these notions of spiritual progress from the mid-19th Century on). Assuming technology is following the same history, we should turn to cyber-mysticism.

So is the computer version of Levi cyberpunk (turn on, jack in, drop out)? Or is it VR? Or is it...The Singularity?

Of course, after Levi, we get the Golden Dawn, Crowley, the creation of Wicca, LSD, and Sarah Palin, although there might not be one continual causal chain there.

I don't really think the computer industry is precisely recapitulating the history of magic, but I do think that it's been sort of a lazy instantiation of ideas that were kicking around in the occult literature for centuries, turned into literature and games by fantasy writers and Gary Gygax, which turned on a bunch of geeky boys (mostly) into thinking how kewl it would be if that stuff actually existed.


First, let me say that a) I loathe and fear the IoT (consider the obnoxious 16-yr-old down the block, who cranks your heat or a/c up when you're not home, having script-kiddy'd his way through the passwords you didn't set, to run your electric bill to four figures), and b) I do not want "artificial intelligence"; rather, as my late wife and I agreed 25 and more years ago, I want an "artificial stupid" (phrase copyright by me, 1992, 2016) that does *just* what I told it to, and if it doesn't know what to do, it dumps it on *me*, rather than trying to guess what I'd want (can you say "false positives on email"?).

Second... about alchemy. Of which I have some small knowledge - which has always appeared to be about 5,000% more than journalists or most posters online possess.

The reason that alchemists wrote with such massive obfuscation is this: "changing lead to gold" was, itself, a metaphor... for "perfecting the soul" of the alchemist themselves. Now, given that this might have been seen by the religious authorities of the time as trying to be Jesus (or a god themselves), and therefore blasphemy of the highest possible order, earning you drawing, quartering, and burning alive, massive obfuscation is a reasonably good alternative to encryption.



Oh, right, and third and last, for *modern* brownies, aka gremlins, I think I should leave, next to the broken electronic device, some soldering paste and solder, instead of milk and cookies, for my benefactors, the Good People.



It'd be a natural step beyond the telepresence that allows me to read your words without the time and expenditure of traveling to Scotland, and I think I've a better chance of seeing it firsthand than the results of "Breakthrough Starshot".


That really is a damn good quote.

And there we get into something that I didn't have time to delve into in this article: tech as spiritual enabler.

There's all sorts of interesting stuff out there from brain stimulation to VR-enhanced visualisation to EEG tomfoolery (which Hannu Rajaniemi has messed with to very interesting effect) to your bog-standard large-sample-size nootropics testing.

Although it's an interesting question whether that's still seen as a "High" art in this age, or whether it's now seen as a "Low" art by the mysticism-unfriendly world of tech. And should it be seen as high or low? How much use is it? (Some of it, lots.) And, and, and...



The sources I've read (and I'm not an expert on alchemy by any means - I know enough genuine experts to know that!) seem to suggest that the act of abiogenesis crosses over with the purification of the soul to a considerable extent, in the same way as immortality was seen as the same thing as or a natural derivative of purity of spirit.

To purify the soul is to become as God, and obviously one of the big God signifiers is the whole "life creation" thing. See Dr Manhattan and Watchmen.

Does that make sense, or have I somehow begun busily barking up the wrong end of the stick, so to speak?


Someone just typed alchemy. So, I am here.

Firstly, it depends which alchemists at which time. Some, but not all, of the early ones were gnostics, and Zosimos seems to be using the colour changes and physical activities as metaphors for returning to the Nous and shedding your base earthly desires.

Then we get multiple evolutions and cultural things going on and it gets messy. Suffice to say that perfecting yourself in a religious sense was perfectly okay by the early modern period, as far as I can tell, that not being my main period of interest.

Also no alchemists were into actual bodily immortality, that's a modern thing. Either they became immortal in a cosmic sense by merging with the Mind, or, you could argue, by being good Christians in the case of the later alchemists. Generally they thought alchemical remedies would either help you live a relatively healthy life or else extend it to the true natural length described in the bible.

So, abiogenesis - what definition are you using? Because I don't recognise the context in which you are using it.

"Artificial Stupid", I like that.


If you haven't come across them yet run, not walk, to recordings of almost any public talk Warren Ellis has done in the last couple of years, though his Haunted Machines keynote (actually the whole conference probably applies) or his talk in the Science Gallery in Dublin at the launch of Injection (a comic series basically about folklore and technology getting up in each other's business in a very literal way) are probably best. He's been doing a bunch of thinking about the echoes of hedge-witch magic appearing in current technology; he's also pretty good about posting his sources...


You left out those like Succubi which can be replaced by Tinder/Grindr.


"First, let me say that a) I loathe and fear the IoT..."

The irony!


And for the record, I worked out how a version of Computational Demonology can be made to work, inspired by the Laundry series:


If you are not a computer programmer, then when you ask for "artificial stupid" to do only what you tell it to, I suspect you are underestimating how much you rely on the good judgment of fellow humans in normal interactions.

Suppose you ask your robot servant to go get some eggs.

First, there are a lot of possible strategies for obtaining eggs. You are probably expecting him to purchase them from a grocery store, and would probably be unhappy if it shoplifted them from the store, stole them from a neighbor, drained your entire bank account to pay for the nearest eggs it could find (that happened to be a savvy street vendor intentionally exploiting your vague instructions), or purchased a chicken ranch to begin producing its own eggs...even though all of those technically fulfill your instructions.

So let's suppose you are a little more explicit and order your robot to buy eggs from the store. That task has a bunch of implied sub-tasks, including choosing a store, obtaining transportation, tendering payment, etc. Walking, driving, or taking a bus might all be reasonable transportation options depending on your budget, how long you're willing to wait, and how much of the robot's time you're willing to devote to this single task.

But suppose your circumstances are such that driving is pretty much always the correct travel option, so you make that the default. Driving is, itself, composed of a bunch of implied sub-tasks; starting the engine, choosing a route, following traffic laws, finding a parking space. You probably will not be happy if the robot returns with an empty tank that leaves you unable to drive to work tomorrow, or leaves the headlights on and drains your battery. As part of using the car, you also expect the robot to perform basic care and maintenance tasks, like buying more gas, or checking the warning lights to discover if driving the vehicle would be dangerous.

"Finding a parking space", in turn, is composed of a bunch more sub-tasks. The robot has to survey the lot for open spaces, use the accelerator and brake pedals, turn the wheel. It has to decide whether to turn left or right when it first enters the lot.

By this point, you probably think I am being ridiculously pedantic. OF COURSE telling the robot to drive the car involves turning the wheel as appropriate!

But that's my point. There is some threshold of detail below which you assume that the robot will just take care of all the routine micro-decisions using its own judgement, because otherwise you might as well just drive the car yourself.

A robot that requires you to specify every tiny detail would be...well, it would just be a computer (with no apps installed). That's arguably a technological marvel in itself, but it's not a useful consumer product.

So the robot has to make SOME decisions for itself.

You can say "the robot should ask for clarification when it's not sure", but "sure" is a spectrum, not a switch. And even knowing how confident you should be about your own judgment is, itself, a highly advanced and difficult skill (one that humans regularly get wrong).

You might think that robots should be a little more conservative in their judgment than they usually are, but that is not a fundamentally different approach, it's just a different point on the spectrum. And there is a pretty good chance that you'd change your mind if you discovered just how often the robot is actually guessing right now, because you only notice when it guesses wrong.

Really what you want is a robot with a better self-model for its own limitations and weaknesses, so that it is better at deciding when it needs to ask for help. But that would actually be a SMARTER robot, not a stupider one.
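The "sure is a spectrum, not a switch" point reduces to a single tunable threshold. A minimal sketch, with all names and numbers invented for illustration:

```python
def decide(action: str, confidence: float, threshold: float = 0.8):
    """Act autonomously above the threshold; otherwise escalate to the human.

    The threshold IS the whole argument: raise it and the robot nags you
    about every egg run; lower it and it guesses more often, silently.
    """
    if confidence >= threshold:
        return ("do", action)
    return ("ask", f"Not sure about '{action}' (confidence {confidence:.2f}). Proceed?")

print(decide("buy eggs from the usual store", 0.95))        # acts on its own
print(decide("buy eggs from a savvy street vendor", 0.30))  # asks first
```

The hard part, as noted above, isn't this comparison - it's producing a `confidence` number that actually tracks reality, which is exactly the "better self-model" problem.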


Taoist alchemy was aimed very strongly at physical immortality. Just as we're blurring the edges with magitech that runs on western incantations powered by chi, many people forget that there were multiple "alchemy" traditions in the world.

My take on alchemy is that, as with modern martial arts (most of which came of age in times and places with lots of guns and had little to do with military combat until relatively recently), alchemy is and has always been many things to many people. At its core, it's about outsiders, nerds, geeks, and hackers, wanting to gain power in their own way by messing with stuff and/or themselves. Depending on the time and place, that might mean fooling with chemicals (classical world, Medieval Europe, the Arab/Sufi world, classical China), body work (all of the above, but throw in India with Kundalini yoga and the Muslim world with a spate of Sufi practices), and magical practices.

The goals have been various: physical or spiritual immortality, starting with the Ancient Egyptians doing mummies and Ancient Chinese Emperors drinking mercury, getting it on with spirits, excuse me, theurgy, turning base metals into gold to keep a king happy, perfecting the spirit to go to heaven to keep the Church or Mosque happy, and so on.

At its core is the promise that this smart, possibly crazy, outsider, can make you rich, powerful, and/or immortal, if only you support his experiments. He's not trying to con you. Honestly. It really works. Don't you believe the stories?

Note that this type of setup (scam?) isn't limited to alchemy. As I noted, I'm quite aware of tai chi teachers who mix in mysterious promises of Taoist Alchemy if you become one of their senior students. There have also been Wall Street executives who a century ago paid an astrologer to help them predict the future with complicated equations. Now they pay economists and get results that are about equally useful (link to article). I'm sure the Pentagon has spent billions on the promises of similar outsiders to make their planes invisible, shoot satellites out of the sky, protect America from nuclear war, and so forth. Whether you call it alchemy or breakthrough technology (or both), there have been markets for it for thousands of years.


Indeed, I hadn't brought up Chinese alchemy because it's not something I know much about. The mistake people make is as you say not realising there are many different forms and ideas of alchemy.

However I dislike your "it's about outsiders, nerds, geeks, and hackers, wanting to gain power in their own way by messing with stuff and/or themselves" characterisation, because the terms nerd, geek, hacker etc. are so freighted with modern ideology and concerns that they are not really suitable, although they always crop up in more populist approaches.

Indeed, many European alchemists either were insiders or took great pains to present themselves as insiders, not as outsiders as you suggest. In fact many of the famous alchemists were really the intellectual elite of their day, not charlatans or outsiders, and they were trying to understand how the world worked and come up with appropriate theories (or just pass on those of their great predecessors) that explained how things turned out the way they did.

Finally, to negate part of the purpose of the post- I'd personally rather change things so that lonely people could meet and get on with other people in various ways, than create artificial entities to palliate their loneliness.


Heh, people have been having this sort of thought for decades.

The very first program I ever *designed* (as opposed to just sitting down and belting something out), as a pre-teen in the mid-1980s before the 80286 was invented, was a GW-BASIC program to help me run D&D games. I referred to it as a "familiar", and it was modular with different plugin options producing different versions -- the text-only keyboard-driven version was "Brownie", and the GUI version that could handle primitive mouse events (with my old AT&T PC6300 mouse) was "Pseudodragon".

(I wonder if I still have the source code on a floppy somewhere.)


Does software resemble magical beasts or is it inspired by them? Does the mobile phone resemble a star trek communicator, or was it inspired by it? Do corporations and government departments resemble temple hierarchies (and architecture), or were they inspired by them?


I'm also reminded of an old PDA family from the mid 1990s, stuff running the MagicCap OS from General Magic.

It had its own data service you could subscribe to. That network had infrastructure for executing programmed "agents". You didn't stay connected to it all the time. You connected, sent out an "agent", and disconnected. Then the agent would, for example, find articles with keywords that matched what you had specified, and when you reconnected, it would bring them back to your device.

They used a metaphor of training a dog how to fetch things for you, sending it out, and having it return with the goods.

As I recall, the language you programmed them with was TeleScript, discussed here:

...and of course, I've got one sitting less than four feet away from me right this very moment. (The one in my office is a DataRover 840, a top-of-the-line unit that was never sold to consumers, just for vertical applications like warehouse inventory and stuff. At home, I've got two Sony MagicLink devices, a PIC-1000 and a PIC-2000A.)


Having been thinking about it, and having a read of a William Newman book, it seems to me that since the original alchemically-related homunculus required semen from the man who wanted to create it, obviously what you are talking about is some sort of learning AI that bases itself upon the data available about you. Would it be a good or bad thing to you that your butler program could anticipate your needs and desires?


You might not be sufficiently afraid of things, or paranoid enough. :-) Old example - OGH did a piece 4.3 years back on the Revolt of the Machines. (I have a story about that day (or close) as well.)


Interesting info/articles - thanks!

My SFnal conspiracy spin take-away: The increase in using chatbots is how corps are training us to get used to, and therefore normalize, sociopathy in our everyday lives. For example, when I read the Chinese chatbot's 'conversations' transcript, I did not see much empathy there despite the reporter's claim that this chatbot conversed much like an ordinary 17-year-old human.

Taken together, the Chinese and Ashley Madison chatbot exposés suggest that currently-designed chatbots use too few phrases in their conversations, which in itself is sufficient to expose them as non-human. Further, because of this, chatbots are highly unlikely to be mistaken for demons, because we all know that all demons possess extraordinary (supernatural) language skills. (For this reason, demons would probably make excellent editors.)


Oy (amused oy). Been reinventing that, so stopped skimming soon as that was clear.
At some point will read it and your notes.


Ahh, you missed the largest group.

The Selkie, the Skin-walkers, the Lycanthropes, the Half-Touched, the inhabitants of Innsmouth, the Kurtadam, and so on and so forth (apologies to all cultural references I didn't list).

Chip, Implanted in Brain, Helps Paralyzed Man Regain Control of Hand NYT, 13th April 2016

Bridging the Bio-Electronic Divide DARPA, 19th Jan 2016 - trigger warning: US .mil site link.


There's a world of difference between a Cyborg and the union between the Human and the [OTHER], surely?

Well, no.

SF (Culture, GridLinked etc) works on the understanding that the Human Mind is in complete control of the process (unless you get the evil-weevils / a hostile AI invades your implant).

Mythology works on the (rather more Sane) assumption that hybrids and half-breeds and exotic, quixotic and potentially psychotic mixtures are not necessarily in control...

Guess which one is probably correct?

Hint: it's like Economics. Rational Actors don't exist in reality.

(And, apologies: OP is going for a 3x3 grid and I'm being rude not letting it unfold precisely. Love the ideas though).


That's a very interesting point, particularly with DNA sequencing becoming as cheap as it is, and social graph mapping as good as it is.

We're nowhere near imitating a specific human with AI yet, but maybe that's another future goal.

And if we're getting a bit nightmarish, there's always the most common summoned creature of them all - the ghost, summoned by the medium to comfort the bereaved.

You'd need a pretty advanced AI to claim to simulate the dead, but probably not as advanced as all that. See yesterday's discussion on how much humans really use their brains in conversation, particularly at stressful moments.

Could we see a medium-equivalent AI designed to allow the bereaved some simulated closure?

Spiritualism As A Service?


Not strictly summoned or servitors, but you're correct that once you get into cyborg territory there's a whole pile more supernatural -> technological allusions there.

Nuada of the Silver Arm just means some guy with a gaudy prosthetic these days.


I can think of one area where "artificial stupid" is the ultimate goal: computer game AI design.

The hard part of almost any player-equivalent AI design in games, outside turn-based strategy, isn't making it good at the game. It's trivial-to-easy to make a bot in most games which can beat any human opponent unless that human specifically exploits the AI's weaknesses.

The trick is to make them convincingly screw up sometimes, behave in limited human fashion, etc.

I've been playing some DOTA 2 against bots recently (because I don't need that game's community in my life) and it's fascinating to watch how they've programmed the bots to act, and particularly how they've programmed them to be less than perfect.

(I say "player-equivalent AI" because making AI for characters which are designed to behave realistically as creatures in the world is an entirely different and much more complicated problem. I understand team AI is also darn tricky, particularly so if interleaved with also needing to be a realistic creature.)
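The "convincingly screw up" trick is mostly parameter injection on top of a perfect player. A minimal sketch in Python; the class, the knobs, and all the numbers are invented for illustration and don't come from any real game:

```python
import random

class HandicappedBot:
    """Sketch of 'artificial stupidity': take a bot with perfect aim and
    instant reactions, then inject human-like delay, noise, and blunders."""

    def __init__(self, reaction_ms=250, aim_error_deg=3.0, blunder_rate=0.05):
        self.reaction_ms = reaction_ms      # humans need roughly 150-300 ms
        self.aim_error_deg = aim_error_deg  # Gaussian spread around true aim
        self.blunder_rate = blunder_rate    # chance of a visible mistake

    def aim_at(self, true_angle_deg):
        # occasionally whiff badly, otherwise add small Gaussian error
        if random.random() < self.blunder_rate:
            return true_angle_deg + random.uniform(-30, 30)
        return random.gauss(true_angle_deg, self.aim_error_deg)

    def react_after(self, event_time_ms):
        # jittered delay so the bot never responds frame-perfectly
        return event_time_ms + random.gauss(self.reaction_ms, 40)
```

Tuning those three knobs per difficulty level is (conceptually) how "easy" and "hard" bots can be derived from the same underlying perfect player.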


See Part 1 -

Tinder's a terrible succubus-equivalent, but the potential's definitely there for far better ones...


The TV series Black Mirror did an episode based on that (computer simulation of a deceased loved one, based on an analysis of their online trail).


Computers are much better than humans at some video-game skills but worse at others. If the game emphasizes precision, reflexes, and situational awareness (like shooters)--that is, "twitch" skills--then computers can excel. If the game emphasizes planning (with a broad decision space), resource allocation, or opponent modeling--basically, considered decision-making--then the reverse is often true.

I suppose you could arguably say that any game where your considered decisions are the most important part is ipso facto a "strategy" game, but I think that is broader than the usual meaning of that term.

I would conjecture that DOTA and similar games actually have an interesting mixture, where the computers are better than the best humans at some of the important skills but still significantly worse at others. For example, the computer will notice unfailingly if an opponent has left the front lines (even on the far side of the map), but once the human expert has noticed the same thing, the human is probably much better at predicting what that opponent will do while they're behind the fog of war and where they're likely to emerge next. I'd guess the computer might make better predictions about whether a given fight is winnable, but humans are likely superior at broad strategy like deciding whether to group up or scatter.

Giving a DOTA2 bot perfect reflexes and timing would certainly make it good enough that newbies wouldn't want to play against it, but I'm not sure whether or not it would be good enough to win tournaments against expert humans.


Did the ancient supernatural servitors have better authorisation and authentication protocols than today's Internet based services?

My impression is that if, say, you managed to summon and bind Duke Valefar, you were the only one who could give orders. If a family member, houseguest, or random burglar gets too close Duke Valefar might eat them, but won't obey them.

Whereas I expect that as soon as there's an online succubus summoning chat service (whether human or cyborg succubi) it will be all too easy to send one to someone else's home or hotel room.

Maybe summoning circles and blood sacrifices should be required for all new Internet servitors, in the form of virtual machine isolation and DNA authentication?


Guardian angels are probably the next killer app; with geolocation and solid wearable biometrics it's only a few lines of code away.

Demons and spirits were also often used to bring visions of distant places; we just need a crowd-sourced micro-payment smartphone system for that.

For I send east and I send west,
And I send far as my will may flee,
By dawn and dusk and the drinking rain,
And syne my Sendings return to me.

"They come wi' news of the groanin' earth,
They come wi' news o' the roarin' sea,
Wi' word of Spirit and Ghost and Flesh,
And man, that's mazed among the three."


Human-in-the-loop guardian angels exist. There are similar products to support independent living for the elderly as well. They have at least one human in the loop to cut down on false positive callouts and/or the flagrant breaches of privacy needed to otherwise prevent false callouts. There's also at least one geo-locating smartphone app that can summon armed response units at the touch of a button. So: many of the asked-for capabilities are available separately. What's needed to bring them together?

You mightn't even need to pay for visions of distant places; between Periscope, Ustream etc. and unsecured IP cameras, there are a lot of freely viewable cameras on the web these days.


For guardian angels, most of the products I've seen require the human to hit a panic button manually. It's not going to go off if you get into a car wreck, for instance. Similarly, I can't stick a bracelet on my ten-year-old and program it to tell me if he is playing hooky from school or someone kidnaps him.

Link biometrics with location sensing and maybe an always-on body camera and you could not only really put a dent in all sorts of crime but react to all sorts of dangerous situations, medical problems, etc. You could even link to the locations of emergency response personnel or even civilians and dispatch them directly rather than centrally, which could be a big response-time win. You'd also create a network of webcams covering wherever humans or their pets are, which is pretty much most places.
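The "dispatch directly rather than centrally" idea is, at its core, a nearest-neighbour query over reported locations. A minimal sketch, assuming responders periodically report GPS coordinates (all names and data invented):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_responder(incident, responders):
    """responders: list of (id, lat, lon) tuples. Returns the closest id."""
    lat, lon = incident
    return min(responders, key=lambda r: haversine_km(lat, lon, r[1], r[2]))[0]
```

A real system would weight by estimated travel time and responder capability rather than raw distance, but the shape of the problem is the same.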

That real time video and audio feed would then breed a host of applications

Of course, you've also enabled ubiquitous surveillance and killed off privacy if you aren't very careful. But technologically it's perfectly feasible.

Guardian Angels


There certainly is a device that will report if you've been in a car accident - OnStar and similar services have been doing that for years. Some Texas schools have been sued because of the tracking devices they require children to wear, so that's also a service that's available (not to mention any number of apps that parents can use to monitor phone locations and use).

So guardian angels are absolutely feasible technologically; it's just that all current products seem to be things one can impose on someone else: children, the elderly, convicts, insurance customers. Mostly, nobody buys these things for themselves.


There was a proof-of-concept wilderness survival assistance jacket made by (IIRC) a Finnish company in the mid-to-late 90s, which used flexible fabrics to track your breathing, sensors to check your pulse and skin temperature (and, possibly, blood-oxygen saturation), a variety of pressure sensors, and GPS for location.

If your vitals dropped under pre-defined limits, it would then call emergency services and give your current GPS location and vital stats.

Don't think it was ever on general release, but as a proof-of-concept for a Guardian Angel...


Making a fairly effective, indoor-only robotic pet would probably not be all that difficult. If you were to take the behavioural set of a cat (minus the "kill any small moving object" group of behaviours), you're not really talking all that much interaction to simulate.

All the bot needs is to be fluffy and cuddly, to follow humans about, and to look directly at them until they stare back, whereupon it occasionally drops its gaze. Add a smallish set of vocalisations in the cat-like frequency range, purring when stroked, plus some incomprehensible habits of various sorts, and that is a fairly passable cat simulation.

It doesn't need to be particularly smart or particularly good at fitting in with humans, because cats really aren't very smart or anywhere near as good at understanding humans as are dogs, yet are popular pets.
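That small behavioural set fits in a toy state machine. A hedged sketch; the states and probabilities are invented for illustration, not taken from any real robot:

```python
import random

class RobotCat:
    """Toy behaviour loop for the cat simulation described above:
    follow, stare, drop gaze, purr when stroked."""

    def __init__(self):
        self.state = "idle"

    def step(self, human_visible=False, being_stroked=False):
        if being_stroked:
            self.state = "purring"
        elif self.state == "staring":
            # occasionally break eye contact, as real cats do
            self.state = "gaze_dropped" if random.random() < 0.3 else "staring"
        elif human_visible:
            self.state = random.choice(["following", "staring"])
        else:
            self.state = "idle"
        return self.state
```

The point of the comment above is exactly that this loop doesn't need to be much deeper than this to read as cat-like, because cats themselves don't offer much conversational bandwidth.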


Agreed - I should have mentioned that I was thinking about currently-popular genres, which mostly exclude large-scale strategy outside the base-building genre.

(AIs do really well at the base-builders a la Starcraft and anything else which requires what SC players would call "micro", because their actions-per-minute are so much higher than most humans.)

I did forget about 4x games - Civ, etc - which I would expect computers to be significantly worse at than players.

DOTA is indeed an interesting one AI-wise. Here's a lengthy discussion of bots vs human players:

Translating out of DOTA-ese, I'd say the conclusion is that hard DOTA bots are better than most humans until the humans have at least 300 hours of practice.


I think you're simplifying your simulation by simply dropping the tricky bits. But at least it won't need the litter tray.

Ours don't use it either, but that's because they've got access to outside. A lot of cat behaviour just doesn't seem to come into play if they never go outside.


My girlfriend's new car has a feature where it will automatically call emergency services in the event of a crash.

That's a Guardian Angel app right there.

I can't stick a bracelet on my ten year old and program it to tell me if he is playing hooky from school
Because no-one wants a wearable device calling the cops when your child goes on a field trip.

As Tom M notes, consumer guardian angel apps are sold not to the people nominally guarded but to their parents or children, and are usually designed to alert the customer in case of an event. This helps the customer avoid the ambulance crew's 'false alarm' charge when their mother's heart monitor sends an alarm caused by (it turns out) the shocking discovery that old people still have sex lives.
Experience shows that current combinations of biometrics, location and 'smartness' are insufficient to deal with the many, many corner cases of everyday life.

(Also: lots of guardian angel devices for the elderly are fall alarms which just need to summon another person, not professional help - and family members are free. :-))


Interesting! And - usefully - the only failure mode I can think of (being Bruce Lee-level fit meaning your resting vitals fall under the threshold) would happen while still at home, not up a mountain.

Of course you've also enabled ubiquitous surveillance and killed off privacy if you aren't very careful. But technologically it's perfectly feasible

So the principal problem here is having observers sufficiently intelligent to identify when something problematic is occurring in real time, at which point you're falling down the rabbit hole of strong AI (and relaxed definitions of "feasible"). On the plus side, guardian angels with weaker-than-human theories of mind are exactly the sort of surveillance systems that the teenage protagonists of so many past SF works can hack/trick/evade, which would be good for a bit of nostalgia and "I invented the stupid AI panopticon and all I got was this lousy T-shirt" moments, if nothing else.

The alternative is to log everything, and when someone becomes a person of interest simply scroll back through their lifelog and see what was going on in the recent past. The problem there is having the bandwidth for billions of high quality audio and video streams and continuous dumps of metadata, having the vast storage systems required to keep all this rubbish and having means of searching and querying it. Which is, again, using a fairly relaxed definition of feasible. The UK government's current plan to log the internet activity of its citizens seems positively trivial by comparison, and even that is an overwhelmingly complex and expensive proposition that clearly won't be working any time soon.
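A quick back-of-envelope supports the "relaxed definition of feasible" point; the population and bitrate figures below are assumptions for illustration, not measurements:

```python
# Back-of-envelope: storage needed to lifelog video for everyone on Earth.
people = 8e9                      # assumed global population
video_mbps = 2.0                  # assumed modest video stream, megabits/s
seconds_per_year = 365 * 24 * 3600

bytes_per_year = people * (video_mbps * 1e6 / 8) * seconds_per_year
exabytes = bytes_per_year / 1e18
print(f"{exabytes:.0f} EB/year")  # prints "63072 EB/year"
```

Tens of thousands of exabytes per year, before you even try to index or query it, which is why "log everything and scroll back" stretches the word feasible.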


If one of the alarmees is the person themself, then they can always cancel it.


it's not going to go off if you get into a car wreck for instance.
A proper guardian angel would consider itself to have failed if it allowed you to get into a car wreck in the first place. Just saying.
How to implement with current tech [1]? In a localized implementation [2], the Guardian Angel would run a sensor platform surrounding the person, with active communication with other (trusted!) relevant sensors in the area, focused on places that the person might be in the near future. Complex predictive models would be built and maintained, which also incorporated predictions for people (or other mobile entities) who might soon be in a nearby physical location. The person would be gently steered away [3] from the parts of the predictive model with high probability of accident. If gentle steering didn't work, then blunter warnings would be issued or direct (subtle)[4] interventions made.

[1] No causality violation required. :-)
[2] Speed of light vs speed of humans means that the GA could be 10s of milliseconds distant and still be effective. Local would be better though.
[3] Little things. e.g. Noise from the radio to change future patterns of saccades so that the person notices the trouble 15 milliseconds earlier.
[4] E.g. barely noticeable shifts in engine power to change future positions by half a car length. The person believes that they are driving manually and are in control. They are, sort of.
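The escalation ladder described above (gentle steering, then blunter warnings, then direct intervention) can be sketched as a simple threshold function. The predictive model itself is assumed to exist elsewhere; the function name and all thresholds here are invented:

```python
def guardian_step(predicted_risk, thresholds=(0.2, 0.5, 0.8)):
    """Escalating-intervention sketch for the Guardian Angel idea.
    predicted_risk: assumed 0-1 accident probability from some
    predictive model (not implemented here)."""
    gentle, warn, intervene = thresholds
    if predicted_risk >= intervene:
        return "direct_intervention"   # e.g. subtle engine-power shift
    if predicted_risk >= warn:
        return "blunt_warning"
    if predicted_risk >= gentle:
        return "gentle_steering"       # e.g. radio noise to shift attention
    return "observe"
```

The hard part is entirely in producing `predicted_risk`; the intervention policy on top of it is trivial.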


Re: Guardian Angel app:

One of the family had one of those fall monitoring devices at home ... they work 'okay'.

Almost everyone now carries around a smartphone. And, it probably wouldn't be too difficult to develop an app for a smartphone that could detect a fall (sudden change in orientation/movement plus abrupt stop) which would first try to engage the wearer and immediately prepare to call for appropriate help. Part of the tech version of the guardian angel problem is end-user readiness/willingness to feel comfortable enough to use this tech, i.e., developing an understandable, easy-to-use UI. This means that the people developing this tech have to actually understand its intended users. E.g.: requiring a senior with Parkinsonian tremor and poorish vision to input a 10-digit number within 10 seconds on a 2 by 2 screen ain't gonna work.
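The fall heuristic described (sudden movement change plus abrupt stop) might look something like this. Thresholds are illustrative guesses, not clinically validated, and `samples` is assumed to be gravity-removed accelerometer magnitudes in g:

```python
def detect_fall(samples, spike_g=2.5, still_g=0.3, still_window=10):
    """Naive fall heuristic over accelerometer magnitudes: a sharp
    impact spike followed by a window of near-stillness."""
    for i, a in enumerate(samples):
        if a >= spike_g:                       # candidate impact
            after = samples[i + 1:i + 1 + still_window]
            if len(after) == still_window and all(x <= still_g for x in after):
                return True                    # lying still after impact
    return False
```

A production app would then do exactly what the comment says: try to engage the wearer first, and only call for help if they don't respond.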


Agreed, you couldn't upload video streams from everyone constantly; that technology is still a ways out. Biometrics and location data you could probably do, but bandwidth limitations would restrict video to active enablement for a small percentage.

Active enablement of video already exists through social networks (Periscope, Facebook Live).

Basic Predictive modeling to avoid car accidents already happens in collision detection systems on luxury cars

The other interesting thing about the guardian angel is that the same personal-utility threshold that would make a person want to buy one for himself would also likely incent insurance companies to help pay for it.


That's not a million miles from what a top-of-the-range premium/prestige car can do now....

Lane departure warnings will shake the wheel and gently nudge it in the right direction if you look like straying out of your lane.

Adaptive cruise control will slow you down when you get worryingly close to the car in front.

Some cars will detect vehicles coming up in the blind spot and warn you, in the same way lane departure warning does, if you look like you might be considering an inadvisable lane change.

Internal cameras will track eye movement and warn you if you're getting drowsy.

etc, etc, etc...


OK, but those interventions are a bit overt.
I had in mind a Guardian Angel that attempts to provide a convincing illusion of the absence of bad luck. (Maybe just specialized to absence of physical injury.)
Sort of mild variant of "Sufficiently advanced benevolence is indistinguishable from good luck".
(Not the evil inverse, "Sufficiently advanced malice is indistinguishable from bad luck". I think first described in SciFi by Vernor Vinge in a 1972 short story "Original Sin", where a human character explains to the natives that they (human) can't use their 'mam'ri for fear of detection; if detected they will suffer from a run of bad luck imposed by the technology embargo cops.)


Smartphone apps available since 2010 for improving mental health. Now it's a matter of selling the idea that getting help is okay/socially acceptable.

BTW, the mental health app phrasing/language seems more intelligent than those of the Chinese and Madison chatbots which says something about the developers' own intelligence.


Your link forces the Steam UI to Turkish.

Computers are indeed bewilderingly bad at grand 4X games like Civilization. So bad that these games often cheat in the bot's favor even when you play on the "newbie" level (and raising the difficulty primarily means that it cheats even more, although also tends to make the bot more aggressive).

But if you want simpler examples of games that computers aren't very good at, I'd look at board games. Unlike computer games, most board games don't care about how fast you make a decision or how accurately you position your piece, so most of the big advantages we've been discussing simply aren't relevant. Terra Mystica, Agricola, and Castles of Burgundy are some examples of popular board games that don't use combat at all but where I still suspect you'd have a hard time making a computer competitive with an experienced human player. (Note: I chose those examples because they're ranked high on; I haven't played them all extensively.)


For most board and computer games we don't have a very good idea of the latent difficulty for AI mastery. For a lot of games a truly masterful AI is just frustrating for the players: who wants a maximum difficulty setting that is literally unbeatable by human skill?

At this point computers are champions of checkers, chess, Go, Scrabble, backgammon, and quite a few other board games. In 2015 they mastered Arimaa, which was designed in 2004 specifically to be easy for humans and difficult for computers.

Computers have yet to master Starcraft, bridge, or no limit hold'em poker, though significant research effort has gone into AI for these games.

For less widely known games, like Castles of Burgundy, it's not clear if the lack of an AI champion is due to intrinsic difficulty or that no research organization has taken it on as a personal challenge. If DeepMind/Google hadn't committed significant resources to building AlphaGo, the solo developers who created previously-best Go programs like Crazy Stone might have taken many more years to build something capable of defeating champion players.


Placebo Angel? Quite a bit of research shows just how powerful this angel can be. Or, build a winner-effect chatbot to provide the same testosterone rush (winner effect) as defeating a human.


Those placebots [1] would certainly be easy to build.
I had in mind something real though. There are actual observable differences in levels of accident-proneness, well over an order of magnitude, that matter to risk assignees like insurance companies. Not just driving; also everyday injury, e.g. falls.

One possible downside is that in general people might become more careless. (Thrill seekers perhaps more so, as a form of entertainment.)

[1] urban dictionary says word is already taken.


Stephen Wolfram wrote a good post on this - his thesis is that AIs can respond in richer ways than humans by returning a graph, interactive visualisation or other computational reply, and that is a better idea than trying to funnel it through human language.


And, it probably wouldn't be too difficult to develop an app for a smartphone that could detect a fall (sudden change in orientation/movement plus abrupt stop) which would first try to engage the wearer and immediately prepare to call for appropriate help.

There's research going on involving people with epilepsy. The goal is not only to reliably detect a seizure and trigger help, but to detect behaviours and other triggers that the user may not be aware of themselves and so provide an early warning.

This is something I'm particularly interested in, but it's not the only such area of research by any means.


That's really interesting info re: placebots- thanks! - but I really did mean 'placebo' in the name Placebo Angel. There's been quite a lot of work done on placebos and they do work. And having an AI version of it could help many folks. I'm not saying that people ought to be duped, only that feeling/knowing that something might/is there to help, can in itself provide help.


There are actual observable differences in levels of accident-proneness, well over an order of magnitude...

Yes, and some of them are famous on the internet. Protective AIs may not become popular until they can be loaded into robot cats.

This isn't the first time someone's come up with the idea of combining protective household gods with robotic houses or vehicles. I expect it's only a matter of time.


but I really did mean 'placebo' in the name Placebo Angel.
So did I; habitual obtuseness strikes again. Yes, and Placebo Angels would be easy to build. Active ones could mine (even crudely) personal event information for the person and subtly point out positive events. Marketing and sale (or other distribution) could be tricky, but as an existence proof-of-possibility, homeopathic remedies take up several shelves in a typical drugstore in the U.S.


Voluntary Guardian Angel devices are quite popular with back country skiers who fear being lost in an avalanche. And, lots of active safety devices (e.g. airbags, anti-rollover software, all wheel drive, automatic braking systems) are basically guardian angel prevention bots.

My family is not big on naming inanimate objects. We don't name our houses or our cars. But, we have always assigned names to our GPS systems which have voices and communicate with us. My current GPS is called Loki because it often leads me astray.

There are lots of magical beasts and devices that deliver you to a destination, and self-driving cars (as well as Uber and taxi apps) serve similar purposes.

Prohibited card counter devices for casino gambling seem a lot like familiars.

Gremlins are malware (like Microsoft Bing).

A variety of magical beasts were employed to assassinate enemies from afar - a role that drones often serve now. Drones also serve the role of animals that take shamans on spirit walks in which they can see the world through the animal's eyes, often far away. And there are guard robots who serve the role of Cerberus, the three-headed guard dog, although not particularly well yet.

We don't yet have many bots that serve the role shared by dolphins and mermaids of rescuing unexpectedly imperiled sailors at sea.

I wonder if the proliferation of AI will lead to a resurgence of animistic religious belief.


Somewhat related to the thread, as an example of superhuman capabilities that could be given to helper agents: I finally got around to reading the AlphaGo paper Mastering the game of Go with deep neural networks and tree search. (There are pdfs to be found with google if you don't have access to Nature.)
It is an easy read, the level of performance achieved is fascinating, and the approach is generalizable to many other domains. It helps if you know a little about previous work on Monte Carlo search; if not, pay careful attention to the few sentences describing prior art.


Question that I feel OP is hinting at:

Ecology vs Artificial replacements.

That thing where an ancient forest that took a few hundred thousand years to develop is replaced by a monoculture tree farm.

Sure, you can get the same End Results but that's missing a lot of the complexity / nuance / ecology.


Think different.

Let's imagine a world where all those reciprocal arrangements with the Fae were replaced with slavery [hello Harry Potter - which is secretly a Fascist text, but hey. Not just for the whole "+50 points to Harry's House just to make them win" trope. Nope. And not the other Trope TV stuff: the Fascist stuff is alllll about the concept of Change. The Reactionaries win. Lupin dies, etc etc].

Now, link to Coral, Whales, Mega-Fauna, industrial animal farming, puppy mills, declawing cats, sixth extinction event, all the things.

It's almost as if you're Weapons.


Note, smart Minds would have instantly taken the "And How To Replace Them With A Shell Script" to mean "How to replace the errant Minds who cannot exist in our reality".

Same thing.

It's about conceptual and ecological space and self-repetition in bacterial terms.

"Purge the Unclean"

"Kill the Heretic"

"Our Wonder Spray kills 99% of all known bacteria"

"There is only one G_D".

I'd be a little bit more subtle, but news just in:

March - highest temp evar.

Corals - now only 10% left.



Question that I feel OP is hinting at:

Ecology vs Artificial replacements.

That thing where an ancient forest that took a few hundred thousand years to develop is replaced by a monoculture tree farm.

Sure, you can get the same End Results but that's missing a lot of the complexity / nuance / ecology.

It's actually not clear that any forest is more than a few thousand years old. "Old growth" technically means the trees are too old to be aged, not that the forest is indefinitely old. Shifting climates mean that things can change quite a bit over time.

Still, the problem with monocultures is that they're unstable, meaning that anything that can overcome the defenses of the monoculture is free to expand its population until it's consumed so much it crashes.

The "advantage" of monocultures is that they're (conceptually) predictable and measurable, while old growth forests notoriously are not, especially if you're trying to figure out sustainable yields, which have to vary over the seasons and between years. That predictability has attracted math-minded people for centuries, even though they rarely work for all that long.


Thanks, that was ... corrective and deserved. Need to meditate on it for a while. (cats? sigh.)

There is no reason that technologies should be clones (in the technical sense: identical instances/monocultures); variation (in the evolutionary sense) is good for many reasons, including resiliency and resistance to parasites. Not your point (or maybe it is, in a sense) but it should be said. (Not arguing with Heteromeles, though I think (without evidence) that evolution can take a bit longer than a few thousand years to settle down/file the edges off useless boom-bust cycles.)

Without comment (New Scientist free registration 9 April 2016): I’m creating supercharged corals to beat climate change.
By giving evolution a helping hand, Ruth Gates aims to produce corals tough enough to survive in increasingly hostile oceans
We will then selectively cross-breed the healthiest specimens, a technique that is common in agriculture but has never been attempted in corals before. We’ll take high-performing individuals – super-athletes – and breed them together. By the end of our five-year mission, in 2020, we hope to have a significant stockpile of highly resilient coral strains and a plan in place to use them to restore completely denuded and partially damaged reefs in Hawaii and Australia.


I thought that variety was a requisite for a stable ecology.

Monoculture corals will help some of the sea life dependent upon these reefs but hopefully there's a plan to re-introduce the other corals*. Then perhaps the authorities could deliberately infect some of the monoculture corals to bring everything back into some type of balance ... in the same way that the rabbits are finally being brought under control.

* There are gene and seed banks for a large number of different land flora & fauna. Anyone know if there's anything comparable for sea life?


Okay, how about a conscience as in Angel-on-the-Shoulder (iCATS from Apple) type AI specifically created for CEOs whose sole function is to remind them of promises made, laws to be obeyed, people not to be exploited, etc. The iCATS also have perfect, enormous and undeletable memories which can be downloaded by anyone using the appropriate spell to unlock the ward.


I thought that variety was a requisite for a stable ecology.
Yep. (Off topic so will keep it short.)
Just interesting to see biologists being so proactive. Also I'm reasonably sure that if it pans out at all, they will come up with more than one genetically distinct alternative, for exactly that reason.

Still looking for a suitably scary temperature chart that incorporates both February and March 2016, like the one for February 2016 from the Independent


...specifically created for CEOs
That has potential. Not just CEOs; anybody with that mind type. (Similar to Culture slap drones, assigned to everyone who deserves one.)
The external conscience needn't be approved of by the recipient (or by the corporate board if assigned to a CEO/CFO).
Early versions could be external, run by other parties, and e.g. tweet about bad behavior real-time. People could subscribe to the Angel-on-the-Shoulder's twitter feed.
What could possibly go wrong?


I think one of the problems with abiogenesis is that the Early Arabic/Christian Medieval worldview(s) and frame(s) of reference are quite different from what we have today.

Today, when we think of "artificial life", our views are coloured by seeing a dichotomy between vitalism and materialism, and we assume Pre-Modern and Early Modern people thought the same.

In fact, most of these people believed in spontaneous generation, and even if they believed in some "life force" or pneuma, that was quite often not just in humans or animals, but all around us. Which IMHO makes those systems difficult to categorize with the vitalism/dualism ("life is air/fire") vs. emergent-property ("it's the mixture of elements that makes life") dichotomy.

So we could argue whether abiogenesis was really seen as a way of becoming like god[1], or more like helping in his work by arranging the elements into life. Though that would make alchemists into some kind of Russian cosmicists 1000 years early[2].

As for (al-)chemical work as a means of perfection of the soul, IMHO laboratory work (including cleaning up, sorting reagents etc.) is quite beneficial in at least some cases of ADHD (OK, n=1, me...), just like any ritualization. Just stay clear of the mercury.

[1] Which might be hubris, something the Ancient Greeks were great at denouncing. Please note Pygmalion is not on the list, though then, divine intervention in vitalising the statue is heavily hinted at.
[2] It's always funny how Transhumanists echo Christianity, Chabad messianism in some ways recapitulates early Christianity, and even anti-trinitarian Islam has its own version of the Arian debacle. Might be cultural diffusion of ideas, though personally I wonder somewhat if Hellenism and Second Temple Judaism left us with some historical constraints. Like you can only construct some figures with circle and ruler, or get some statements from certain axioms...

Err, mods, mangled some tags and also later on decided Zoroasterianism was subsumed as an influence in Second Temple Judaism, so could you delete the first post? Sorry to ask.

[[ done - mod ]]


It's always funny how Transhumanists echo Christianity
Oh no it isn't.
It's bloody terrifying.
The lunacies of Roko's Basilisk & the calls for "judgement" are seriously depressing.
As mad as a box of frogs, but then, they are religious believers .....


For me the most terrifying part is them not realizing it. ;)

Thing is, as mentioned, I'm not that sure whether those similarities to Christianity are really the result of crypto-Christian (or Cosmist) indoctrination or more like a reinvention of similar arguments to answer similar problems from a somewhat similar background. Just as sabre-teeth are not universal in Feliformia, but got independently reinvented there a couple of times.

Yes, you're invited to hold my insistence that quasi-Aristotelianism is incommensurable with post-Descartes and post-Pasteur thought against the idea that you can reinvent Christianity, including some of the internal debates, from a general Mediterranean-Mesopotamian background. ;)


one of the main reasons you learn magic to summon a magical creature is to learn more magic from it

In the context of an API, we call that "service discovery"...
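To stretch the joke: the summoner's grimoire maps true names to the means of calling. A toy in-memory registry sketches the idea (names and addresses here are entirely made up, and real systems use DNS SRV records or a dedicated registry service rather than a dict):

```python
# Toy service registry: the "true names" half of service discovery.
# Everything here is illustrative, not any particular product's API.
registry = {}

def register(name, address):
    """Bind a service instance under its (true) name."""
    registry.setdefault(name, []).append(address)

def discover(name):
    """Look up all known addresses for a service; [] if unsummoned."""
    return registry.get(name, [])

register("succubus", "10.0.0.7:8080")
register("brownie", "10.0.0.9:8080")
```

The summoning circle is then just `discover("brownie")` followed by an HTTP call.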


We do realize it and I have written and talked about it extensively within H+

As for "judgment", if we are ever in a position to revive the dead of past ages, do you really think everyone should come back "unmodified", or will some big decisions have to be made in a good number of cases?


Err, some realize it. To quote yourself:

For example, a standard belief within H+ is that we are all “rational atheists”, which is far from true.

Those that are subject to the "standard belief" are those we are concerned with.


Thing is, as mentioned I'm not that sure if those similarities to Christianity are really the results of crypto-Christian (or Cosmist) indoctrination or more like a reinvention of similar arguments to answer similar problems from a somewhat similar background.

it's what software engineers call a design pattern: two different implementations that arose in different contexts but which share an eerie degree of structural similarity.

Remember early Christianity arose as a syncretistic merger of a whole bunch of middle eastern belief systems: chunks of Essene Jewish mystical/apocalyptic sects picking up on the patriarchy of Mithraism, the dualism of Zoroastrianism, the Mystery cult of Isis and the dead god Osiris ... then adding a few innovations of its own: redemption from original sin mediated by the sacrificed god, the sacrificed god's mother as an intercessionary deity, all the former minor Roman deities mysteriously re-identified as saints, the Apocalypse of St John the Divine as a kind of anti-origin story, and so on?

Christianity then spawned a bunch of schisms and heresies of its own: never mind Nicaea, there was the Orthodox/Catholic split, then the Reformation, various protestant fissionings ... and then the Scottish Enlightenment, which was specifically an anti-Calvinist backlash (Scotland was a vile theocracy in the 17th century) and tolerant of overt atheism. (The American version was more circumspect -- all that deism stuff -- then fizzled out in the Great Awakening a century later.)

It can be argued that modern western WEIRD enlightenment-age atheism is basically a protestant heresy: the progressive assumption inherent in the idea of perfectibility gets taken to an extreme, and even after the idea of God is thrown overboard you still have the concept of progress and perfectibility. Which gets you the Singularity, not as originally formulated by Vernor Vinge but as misunderstood by a bunch of deist types who read too much into Moravec's original mind-uploading thought experiment (and without the context of knowing that Moravec was spinning an argument against mind/body dualism for a Catholic believer).

So yeah, it's only a matter of time: if I was an unscrupulous huckster who wanted to start a new religion I'd just jump right on the Singularity bandwagon, revising the Kurzweil gospel to fit my revenue model accordingly, and go into competition with the Scientologists. Fortunately for us all, though, I have some self-respect :)


"Those that are subject to the "standard belief" are those we are concerned with."

No, even they recognize the parallels. However, as I explained, it is fairly inevitable with an ideology that tackles universal themes of life, death, ageing, reviving the dead, the creation of life, the creation of AI, the possibilities of immortality, uploading (transmigration of souls) and afterlives. Not to mention re-engineering the universe.


And of course the Buddhist parallel of the Hedonistic Imperative.

I used to think that was something for a PostHuman future, but with CRISPR and gene driving we could make huge inroads now, starting with farmed animals.


Re: '... arose in different contexts but which share an eerie degree of structural similarity.'

Given a limited range of tools, resources, constraints, and fundamental needs/uses together with optimization typically defined as the shortest path (lowest complexity), this shouldn't be a surprise. Half the fun of fantasy featuring angels/creatures from other dimensions is figuring out how this applies to them.

Seems that most SFnal other-universe creatures these days are evil. Might be fun to have a Twoflower version: basically, an innocent naif who's fundamentally good.


Where does the demon/angel of blind luck/chance figure in this?


I am trying to find a novel/story published in the 90s, I think, and as usual I cannot remember title or author. Possibly Greg Bear story.

They are on Mars, and starting to terraform. There are a number of ancient structures from an earlier time. Hollow spheres and tubes. As soon as water starts being on the surface, the ancient life fires up again, starts growing, and is a vast intelligence that comes and goes when water is available.



"Moving Mars", Greg Bear, 1993, had something like "As soon as water starts being on the surface, the ancient life fires up again, starts growing".
Though I don't remember it being intelligent.
It was a side plot, really.

-Bill Arnold


They are on Mars, and starting to terraform. There are a number of ancient structures from an earlier time. Hollow spheres and tubes. As soon as water starts being on the surface, the ancient life fires up again...

That's In the Hall of the Martian Kings by John Varley (first published 1978 and not part of his Eight Worlds setting). It was short-listed for a Best Novella Hugo. My copy is in the collection The Persistence of Vision but it's been republished elsewhere; you can also find it in The John Varley Reader, another worthwhile collection to pick up.


Re the last question in the OP: Do you think the "virtual friend" will become a reality?
Yes, in the form of advanced personal assistant AIs. Seriously, when (OK, if) beyond-human intelligences emerge from our automation, that's one of the places I hope they emerge from. At least they would embody some form of empathy for the assistee and be interested in aspects of the assistee's general welfare (if not owned and/or operated by some different entity with a different utility function). At some point the assistee/assistant relationship would need to morph from slavery into something more mutual, after which it could reasonably be called friendship.


"Moving Mars", Greg Bear, 1993 and "In the Hall of the Martian Kings" by John Varley

I conflated both stories together without realizing it. No wonder I couldn't find it.



I believe I have thought of a seventh category: Cthulhu, or maybe Messianic Archetype, depending on your point of view. Anyway, an object of cult worship, an overlord come to Earth, to whom we can sacrifice our freedom in exchange for a safe, secure, well ordered world.

Trumpbot. How hard could that be? Election results take too long between cycles to provide the feedback needed for self-learning, but something could be done with focus groups, or if the expense is too high, online discussion forum comments. That means it could re-configure itself as the winds of public opinion change. Reaganbot, Nixonbot, it's really all the same deity by a different name, is it not?
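The feedback loop described above -- try a line, poll a focus group, reconfigure -- is more or less a multi-armed bandit. A minimal epsilon-greedy sketch (all names hypothetical, and real campaign software would be vastly messier):

```python
import random

EPSILON = 0.1  # fraction of the time we try a random talking point

class CandidateBot:
    """Picks talking points, learns from focus-group scores."""

    def __init__(self, talking_points):
        self.counts = {tp: 0 for tp in talking_points}
        self.scores = {tp: 0.0 for tp in talking_points}  # running mean reward

    def pick_message(self):
        # Mostly exploit the best-polling line, occasionally explore.
        if random.random() < EPSILON:
            return random.choice(list(self.counts))
        return max(self.scores, key=self.scores.get)

    def focus_group_feedback(self, talking_point, score):
        # Incremental mean update: the bot re-configures itself as
        # the winds of public opinion change.
        self.counts[talking_point] += 1
        n = self.counts[talking_point]
        self.scores[talking_point] += (score - self.scores[talking_point]) / n
```

Reaganbot, Nixonbot, or Trumpbot is then just a different initial set of talking points and a different reward signal.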


Larry Niven's Ringworld, where Teela Brown becomes or is "Lucky" & her luck manipulates the people around her ....
Didn't end well, IIRC


An interesting point, because for the overwhelming majority of people the whole notion of "freedom" is a method to obtain a safe, secure, prosperous and well ordered world.
Otherwise freedom is a fairly useless notion, except as a fetish concept.


Thing is, as mentioned I'm not that sure if those similarities to Christianity are really the results of crypto-Christian (or Cosmist) indoctrination or more like a reinvention of similar arguments to answer similar problems from a somewhat similar background.

it's what software engineers call a design pattern: two different implementations that arose in different contexts but which share an eerie degree of structural similarity.

In biology it's known as convergent evolution.

Compare agave and aloe species, with a last shared ancestor about 93 million years ago. Some species from the two families can only be told apart by botanists who are familiar with both.



It's people willing to sacrifice their "Freedom To" to be given "Freedom From", in the context of a co-dependent relationship. Or, in other words, "Maslow wins."

Decisions based primarily on fear are seldom in one's rational self-interest. What motivates your reasoning? Do we remember, or are we "on line" only? And when you factor in that most of the fear is manufactured...

"Candidatebot" How hard can it be to teach a computer to win an election? I'm actually shocked that no one has tried this yet. Or have they?


Take care lest ye invoke the demon Murphy and his many laws . . . .

I wonder if the proliferation of AI will lead to a resurgence of animistic religious belief.
You might like to try Karl Schroeder's Ventus, whose question is more or less "what if you set up a planet as if animism was real by installing AI guiding intelligences everywhere?"

Teela Brown becomes or is "Lucky"

What exactly is "luck" anyway? I've stewed over that one for decades. What we know is that some people are "luckier" than others in the sense of having far fewer accidents; insurance companies recognize this as fact. Using driving as an example: personally, I've driven 25 years/800K kilometers without an accident, while my spouse over that whole time manages an accident, usually minor, every year or two, and everyone here probably has similar observations. (Similar ratios with trips/falls.)
The mechanisms are not entirely clear, though there is obviously an inverse correlation with levels of risk-taking.
I've always felt (without evidence, or even knowing why) that accident rates are unreasonably low, by maybe a factor of two.


Regardless of how Niven did it, luck isn't as simple as a single "luck" continuum. Luck would have to be for some purpose. Do random events conspire to keep a person safe, to get a person what they want, or to get a person what they need? These overlap imperfectly.

Supposing luck works--for everyone--to get them what they "need", what will enable them to fulfill their destiny or whatever. Some people would have more "importance" at any given time in the ever-changing tapestry of lives, while others would be less important, and circumstances would bend those latter people's lives to the service of the needs of those who were more important.

So, what would these destinies be? Nothing other than roles played in the aggregate luck of people in the future. Even if the future is determined, the sum of the future is constantly changing, since the earlier parts of it are constantly being placed in the past. A lot of luck may just be sensitivity to the influence of luck. People who get a hunch for what would be a luckier path are luckier because they are more valuable to luck itself.


Some seriously unusual thoughts (at least to me) in there, thanks. Particularly this:

A lot of luck may just be sensitivity to the influence of luck. People who get a hunch for what would be a luckier path are luckier because they are more valuable to luck itself.
Another weird and highly wu-ish thought is maybe that we kinda-sorta live in a multiverse but branches are quickly either pruned off or merged into a single head or master. I put some links about some recent quantum "history entanglement" results (including experimental results) in a previous thread (more links later in thread (experiment is here)). (Very small timescales and events, and scaling up to human scales involves heavy quantum wu, etc /disclaimer.)
In this model, short-term luck could be something more tangible; people sample the future space a bit (somehow, mechanism not described :-), directly, very short term (seconds?), and choose the line they want, somehow without conflicting with the desires of other entities. The other lines are pruned off, as if they never were, due to the causality violation involved, or, if they're lucky, merged.

I'm now recalling that this and your post are both territory explored in Hannu Rajaniemi's The Causal Angel, a wild and very rich book (probably best to read the trilogy in order; this is the third). It was related to the "Kaminari Jewel" and the protections around it to block unwise usage.


On days that I take Modafinil, I am luckier


On days that I take Modafinil, I am luckier
Interesting, thanks. (Wonder what it means.)
BTW the stuff I wrote that you linked above is just one formulation; there are (at least) a couple of other obvious formulations that are as consistent or better (but still deeply wu-ish :-) , just didn't want to make it too complicated.


Modafinil's an amphetamine. From experience one of the things it's very good at is making you feel confident things are going well...


Modafinil is not an amphetamine. (Also, the Wikipedia article suggests that there is still some disagreement over the mechanism of action.)


For me "luck" seems strongly correlated with optimism. Probably also things like body language. It is certainly strongly anti-correlated with depression, anger and stupidity.


Thanks for writing that.
Looked up Dihexa as suggested and read a usage report on reddit. That's pretty hard core, what with the DMSO delivery and not-well-explored side effects.


How did I get Modafinil and Adderall mixed up? Sorry, I should have checked. Nevertheless, it was definitely Modafinil I took.


There's been some actual scientific research on luck. Here's a summary:



"My research revealed that lucky people generate their own good fortune via four basic principles. They are skilled at creating and noticing chance opportunities, make lucky decisions by listening to their intuition, create self-fulfilling prophesies via positive expectations, and adopt a resilient attitude that transforms bad luck into good. "


I vaguely recall that article (have long subscribed to Skeptical Inquirer). It is really good; thanks Nick @102 for sharing the link.
...make lucky decisions by listening to their intuition,...
That's the core of the unexplained parts of it to me at least, the not-adequately-explained elephant in the room. (The rest is ... obvious.) I have a hyper-twitchy intuition that e.g. people sometimes complain about. ("don't guess, know." "was I wrong?" "[sputter] doesn't matter.") So do plenty of other people. It is not entirely clear that it is all subconscious modeling (including predictive modeling) and pattern matching (which themselves are not well understood), though that's the reasonable skeptical (null) hypothesis. Most intuitions are clearly explainable from combinations of daily-life sensory inputs. Others (a small minority) are ... kinda weird. Like driving, suddenly spontaneously deciding to take an non-optimal alternate route for the first time, and ending up assisting somebody in serious trouble at the roadside. (The juncture of on-line and physical reality is also fertile grounds for weird intuitions; a new thing. Though I used to know (1-2 decades ago) a person who was an early "intuitive hacker"; she could sometimes break into systems at unreasonable speed.)

Re They are skilled at creating and noticing chance opportunities, an illustrative story that is consistent with it all being explainable by meat neural networks:
A while ago I worked a little with the guy who did the first neural networks for playing backgammon. (Backgammon involves chance in the form of dice rolls.) First supervised learning ("Neurogammon"), then TD-learning (self-taught; reinforcement learning) ("TD-Gammon"). This was with the 3-layer feed-forward neural networks used at the time, not recurrent networks. The TD-learning programs (even without lookahead) were so good that non-expert players would regularly (falsely) accuse the programs of cheating at the (PRNG-based) dice rolls. The programs (playing at a human expert level with no lookahead; superhuman with enough lookahead) were able to steer the backgammon games into positions where the dice rolls came out in the program's favor, way more often than what weaker players thought was right.
This is directly analogous to organizing one's life so that random events are more often favorable than would otherwise be expected.
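For anyone curious what "self-taught" means mechanically: TD-Gammon's rule nudges the value of the current position toward the value of the next one. A table-based TD(0) sketch on a toy 5-state random walk (my own toy setup, not Tesauro's code, which used a neural network and actual backgammon) shows the flavor:

```python
import random

ALPHA = 0.1    # learning rate
N_STATES = 5   # states 0..4; fall off the left for 0 reward, right for 1

def run_episode(values):
    """One random walk from the middle; update values as we go."""
    state = N_STATES // 2
    while True:
        next_state = state + random.choice((-1, 1))
        if next_state < 0:                    # terminated left: reward 0
            reward, next_value, done = 0.0, 0.0, True
        elif next_state >= N_STATES:          # terminated right: reward 1
            reward, next_value, done = 1.0, 0.0, True
        else:
            reward, next_value, done = 0.0, values[next_state], False
        # TD(0) update: nudge V(s) toward reward + V(s')
        values[state] += ALPHA * (reward + next_value - values[state])
        if done:
            return
        state = next_state

def train(episodes=5000, seed=0):
    random.seed(seed)
    values = [0.5] * N_STATES
    for _ in range(episodes):
        run_episode(values)
    return values
```

After training, the learned values climb from left to right, approximating each state's probability of eventually reaching the winning end -- no one ever told the program those probabilities.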



About this Entry

This page contains a single entry by Hugh Hancock published on April 14, 2016 1:41 PM.

