
Rise Of The Trollbot

Hugh Hancock, your friendly neighbourhood crafter of tales about supernatural get-rich-quick schemes gone horribly wrong, back with another bit of musing on what the Chatbot Future holds... See also Part 1 - Sexbots and Part 2 - Magical Beasts

In "Accelerando", Charlie posited the idea of a swarm of legal robots, creating a neverending stream of companies which exchange ownership so fast they can't be tracked.

It's rather clear to me that the same thing is about to happen to social media. And possibly politics.

What makes me so sure?

Microsoft's Tay Chatbot. Oh, and the state of the art in Customer Relationship Management software.

Turing Test 2: Is The Bot Distinguishable From An Asshole?

Microsoft unleashed its conversational bot on Twitter, and 4chan's /pol/ unleashed their opinions - or possibly their sense of humour - on it in turn. Hours later, it was a racist asshole.

But that's not the interesting bit.

The interesting and worrying part of the entire test was that it became a plausible, creative racist asshole. A lot of the worst things that Tay is quoted as saying were the result of users abusing the "repeat" function, but not all. It came out with racist statements entirely off its own bat. It even made things that look disturbingly like jokes.

Add a bit of DeepMind-style regret-based learning to the entire process - optimising toward replies or retweets, say - and you have a bot that on first glance, and possibly second through fourth glance, is indistinguishable from a real, human shitposter.

A lot of ink has been spilled worrying about what this says about the Internet. But that's the wrong thing to worry about.

The right thing to worry about is what the Internet is going to look like after more than one Tay is unleashed on it.

More than a hundred. More than a thousand.

Have you ever joked that you wished you could clone yourself?

Well, it looks like if you're an extremist of any stripe who spends a lot of time on social media, you'll soon be able to fulfil that dream.

The Trollswarm Cometh

Swarms of real life, human trolls have already been able to achieve some remarkable things.

For example, there's the well-known incident where Time's Person Of The Year poll met 4chan. Twice.

But real-life trolls have to sleep. They have to eat. Whilst it might not look like it, they get tired, and angry, and dispirited.

Chatbots don't.

And the only limit to the number of trollbots you can control is the amount of processing power they require. That might initially look like a major limitation, given that machine-learning applications tend to require at least one reasonably powerful graphics card each. But a), thanks to cloud computing that's actually pretty affordable - an Amazon GPU instance on Spot Pricing will cost you $0.13 an hour, or a little over 3 dollars a day - and b) there's no reason one instance of the trollbot software can't control hundreds of social media accounts, all posting frantically.
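Back-of-the-envelope, the economics are alarming. (The spot price is from above; the accounts-per-instance figure is a pure assumption for illustration.)

```python
# Rough cost model for a cloud-hosted trollswarm.
# The $0.13/hour spot price is from the post; accounts-per-instance
# is an illustrative assumption, not a measured figure.
SPOT_PRICE_PER_HOUR = 0.13
HOURS_PER_DAY = 24
ACCOUNTS_PER_INSTANCE = 200  # assumption: one GPU instance drives many accounts

daily_cost = SPOT_PRICE_PER_HOUR * HOURS_PER_DAY
cost_per_account = daily_cost / ACCOUNTS_PER_INSTANCE

print(f"Daily cost per instance: ${daily_cost:.2f}")
print(f"Daily cost per account:  ${cost_per_account:.4f}")
```

At those prices, a hundred-account swarm costs less per day than a cup of coffee.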

So what does this mean?

1: Everyone Can Have Their Own Twitter Mob

Right now, if you want to have someone attacked by a horde of angry strangers, you need to be a celebrity. That's a real problem on Twitter and Facebook both, with a few users in particular becoming well-known for abusing their power to send their fans after people with whom they disagree.

But remember, the Internet's about democratising power, and this is the latest frontier. With a trollbot and some planning, this power will soon be accessible to anyone.

There's a further twist, too: the bots will get better. Attacking someone on the Internet is a task eminently suited to deep learning. Give the bots a large corpus of starter insults and a win condition, and let them do what trolls do - find the most effective, most unpleasant ways to attack someone online.
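Mechanically, the "corpus plus win condition" setup is a multi-armed bandit: try message templates, keep whichever draw engagement. A minimal epsilon-greedy sketch, with a simulated engagement probability standing in for real replies and RTs (the template names and rates are invented):

```python
import random

def pick_template(stats, epsilon=0.1):
    """Epsilon-greedy: usually pick the template with the best
    observed engagement rate, sometimes explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda t: stats[t]["wins"] / max(stats[t]["tries"], 1))

def run(templates, engagement_prob, rounds=20000, seed=0):
    random.seed(seed)
    stats = {t: {"tries": 0, "wins": 0} for t in templates}
    for _ in range(rounds):
        t = pick_template(stats)
        stats[t]["tries"] += 1
        # Simulated "win": did the post draw a reply or RT?
        if random.random() < engagement_prob[t]:
            stats[t]["wins"] += 1
    return stats

# Hypothetical templates with different (hidden) engagement rates.
probs = {"template_a": 0.02, "template_b": 0.10, "template_c": 0.05}
stats = run(list(probs), probs)
best = max(stats, key=lambda t: stats[t]["tries"])
print(best)  # the bandit converges on the highest-engagement template
```

In practice the "arms" would be generated text rather than fixed templates, but the feedback loop is the same.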

No matter how impervious you think you are to abuse, a swarm of learning robots can probably find your weak spot.

On a milder but no less effective note, even a single bot can have a devastating effect if handled carefully.

The rule of Internet debate is that, all else being equal, the poster with the most available time wins.

On its own, a bot probably can't argue convincingly enough to replace a human in, say, a Reddit thread on gender politics. But can it be used to produce boilerplate posts, bulk out rough comments, fire off short comments that demand long answers, or otherwise increase the perceived available time of a poster tenfold?

Fear the automated sealion.

2: On The Internet, No-one Knows Their Friend Is A Dog.

In many ways, the straightforward trollswarm approach is the least threatening use of this technology. A much more insidious one is to turn the concept on its head - at least initially - and optimise the bots for friendliness.

Let's say you wish to drive a particular group of fly-fishers out of the fishing community online for good.

Rather than simply firing up a GPU instance and directing it to come up with the world's best fly-fishing insults, fire it up and direct it to befriend everyone in the fly-fishing community. This is eminently automatable: there are already plenty of tools out there which allow you to build up your Twitter following in a semi-automated manner (even after Twitter clamped down on "auto-following"), and Tay was already equipped to post memes. A decent corpus, a win condition of follows, positive-sentiment messages and RTs, and a bot could become a well-respected member of a social media community in months.

THEN turn the bot against your enemies. Other humans will see the fight too. If your bot's doing a half-decent job - and remember, it's already set up to optimise for RTs - real humans, who have actual power and influence in the community, will join in. They may ban the people under attack from community forums, give them abuse offline, or even threaten their jobs or worse.

For even more power and efficiency, don't do this with one bot. One person starting a fight is ignorable. Twenty, fifty or a hundred respected posters all doing it at once - that's how things like Gamergate start.

(And of course, the choice of persona for the bots, and how they express their grievances, will be important. Unfortunately we already have a large corpus of information on how to craft a credible narrative and cause people to feel sympathy for our protagonist - storytelling. If the bot-controller has a decent working knowledge of "Save The Cat" or "Story", that'll make the botswarm all the more effective...)

3: You're A Bot, I'm A Bot, Everyone's A Bot (Bot)

In order to pull all these tricks off, of course, the bot will need a bunch of social media accounts. That would seem like the obvious weak spot: they can just get banned.

Except that if there's one thing a semi-intelligent, almost-Turing-test-capable bot is going to be good at, it'll be generating social media accounts. And better yet, a swarm of bots will be almost unstoppably good at it.

It's very easy already to create a bot that will sit there patiently generating a history of Tweets - I've done it myself with my anti-filter-bubble bot. And Tweet history, or posting history, is one of the big giveaways of a sockpuppet account: very few people have the patience to build up a convincing history with their sockpuppets. But a bot can solve that. Tay might not be 100% plausible, but is she plausible enough to generate a convincing Twitter history for your new racist-bot? I'd say yup.
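You don't even need Tay-grade machinery for the history-building part: a bigram Markov chain over a seed corpus is the classic cheap trick spammers have used for years. A toy sketch (the fly-fishing corpus is purely illustrative):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the words observed to follow it."""
    chain = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def fake_tweet(chain, start, max_words=12, seed=None):
    """Walk the chain from a start word to produce plausible-ish filler."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

corpus = [
    "just landed the biggest catch of the season",
    "the river was quiet today but the flies were biting",
    "tying flies all evening again",
    "the season opener was quiet this year",
]
chain = build_chain(corpus)
print(fake_tweet(chain, "the", seed=1))
```

The output is gibberish under scrutiny, but nobody scrutinises a stranger's two-year-old tweets; it only has to pass a skim.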

And I'm not the only one. Black-hat SEO marketers have long used software called "Spinners" to create semi-unique pieces of text to post as articles or spam onto forums or comments to generate search engine rankings. I won't link to it here, but the big up-and-coming news in the SEO spinning world is AI, with several products claiming to use Tay-like algorithms to generate much better "spun" content that will pass both human moderator and Google checks.

(To the best of my knowledge no-one's creating an XRumer-like product - a forum / comment posting tool - incorporating deep learning to optimise for comments that get approved. Yet. Give it five years. To be fair, that might end up being an unexpectedly positive arms race.)

But as I mentioned, a botswarm will be far better. The other big giveaway for fake accounts is that they don't interact with a larger community. Now, a bot on its own can already deal with that to an extent - indeed, the big news in using Twitter for sales right now is AI tools that interact with users before passing them on to a sales team. But a swarm of bots can form its own communities. They can have discussions. They can Like and Comment on each others' posts (particularly powerful on Facebook, where the visibility of a post is determined by interactions from other users).

And as a human, you may not even be aware that in the community you're interacting with on Twitter, fully half the members are bots controlled by a single person. You'll interact back. And that just builds more credibility for the bots and whatever their owner's ultimate endgame is.

4: Don't Do That. The Bots Won't Like It.

And here we get on to, in my opinion, the most terrifying use of the trollswarm: controlling filter bubbles.

A straight-up trollswarm is scary and unpleasant, sure, but it's a blunt tool. For maximum effectiveness, what you need is a scowlswarm.

In this case, you-as-bot-owner would never outright order the trolls to attack. Instead, you just have them disapprove.

You set up a filter to have some of them - not all, just two or three - respond to mentions of your target outgroup with negative comments.

  • "Do you really read his blog?"
  • "Personally I find her offensive - don't you?"
  • "You should be careful about @target_user - didn't you hear about last year?"

You have them monitor for statements made by your target which attract negative reactions, and have your bot amplify that and retweet the statement. You monitor for negative-sentiment messages at the target, and amplify that too. You have them attempt to bait the target into strongly-negative-sentiment statements. Every so often, you have one of the bots outright lie about something bad that your targets did, and the other bots signal-boost it.
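A crude sketch of that monitoring loop follows. A real system would use a trained sentiment model rather than keyword matching; the word lists and threshold here are invented purely for illustration:

```python
# Toy scowlswarm logic: watch the target's posts and amplify anything
# that's already drawing negative reactions. The word lists and the
# threshold are illustrative stand-ins for a real sentiment model.
NEGATIVE = {"offensive", "awful", "wrong", "careful", "liar"}
POSITIVE = {"great", "love", "brilliant", "helpful"}

def sentiment(text):
    """Crude keyword score: positive hits minus negative hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def should_amplify(post, replies, threshold=-2):
    """Retweet the target's post if the replies to it skew negative."""
    total = sum(sentiment(r) for r in replies)
    return total <= threshold

replies = ["this is just wrong", "be careful with this one", "awful take"]
print(should_amplify("target's latest hot take", replies))  # True
```

The bot never says anything actionable itself; it just makes sure the worst moments travel furthest.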

And the result is that the filter bubble of everyone who interacts with those bots - which are still firing off inspirational memes and sending people supportive messages the rest of the time - becomes tilted more and more strongly toward "this group of people are bad".

This is almost exactly the same effect as the kind of media-manipulation many people are worried Facebook could undertake, but in the hands of any anonymous yahoo who has the skills and patience to set up and train a group of chatbots. And it could be applied to much smaller targets - right down to individual people.

It'll be even more effective on a social media site like Reddit, where a swarm of bots could also upvote and downvote content. In general, so-called "social bookmarking" sites are terrifyingly vulnerable to somewhat-smart bots. It's already almost possible to algorithmically optimise for upvotes (ask any high-karma user how they achieved said karma, and it turns out there's a large bag of shortcuts). A few hundred intelligently-run bots could invisibly dominate a significant-sized subreddit, upvoting or downvoting their target content. Provided they don't do anything dumb enough to get flagged as a voting ring, they'd be very difficult indeed to detect.

As a final note, another alarming use of socialbots on social bookmarking sites would be to burn out moderators. Moderator burnout is already a significant issue, as most moderators are volunteers: if there's a subreddit you want to dominate but can't because of a particularly clued-in mod, just turn the shitposting bots up to 11, blast the subreddit with almost-but-not-quite-useful content mixed with some really unpleasant stuff, increase the mod's workload tenfold, and wait for them to quit.

So there you have it. Welcome to 2018 or so. Half your social media friends are probably robots - and they're probably the half that you like the most. Every so often one of the remaining humans gets driven off the Internet thanks to a furious 24/7 Twitter assault that might be a zeitgeist moment, or might just be a bot assault. And you can't even tell if what you think is the zeitgeist is entirely manufactured by one guy with an overheating graphics card and a Mission.

What do you think? Is there a horrific use of the trollbot I've not thought of? Or a reason this definitely won't come to pass?



Well, I would put it the other way round - is an arsehole functionally different on the Internet from a trollbot? Or even a typical twit or similar distinguishable from a nullbot (one with no useful function)? And answer myself with "not really".

"Half your social media friends are probably robots - and they're probably the half that you like the most." Really? I suspect that I am not human, if that sense of 'you' is meant to define humans :-)


That's meant to be a prediction of the future, not a description of now. Currently your social media friends are probably still human. Probably.


I wonder if they could be used in another way to muddy the waters: Say celeb X is under attack on social media for something stupid/crass they said. Their PR company could set a swarm of trollbots on them, but optimised to make the whole thing so over the top nobody takes them seriously and the original legitimate accusations are lost in the trolling.


An excellent comment on Twitter just pointed out that this entire bot ecosystem I'm positing gets even scarier if you think about how it interacts with the rage profiteers of clickbait media.

Need an angry mob to write about?

>:\ trollbot instantiate -rfw ""



Having seen attested-as-human posters using the tactics you document in section 4, yeah, I do wonder how long social media can last in its present form. I will note a certain multiply-Hugo-nominated webtoonist has turned off comments on his blog when he had it rebuilt.


New product sales possibility: the Ronco Bot-away or Popeil Pocket Poster-check, useless in dealing with swarm activity but capable in limited specific instances of verifying whether a specific commenter is indeed human. The product just uses a file of blurry street addresses or near illegible road signs proven to have caused problems for machine recognition, politely requests that your e-correspondent text it back to you as proof of bio-compatibility, and voila, Bob's your uncle, isn't that amazing? But wait,there's more! Now how much would you pay? Don't answer yet, it also makes Julienne fries!


Oh yeah, that'll definitely happen. Complete with hilariously high false-positive rate.

I do wonder, on a related note, how long CAPTCHAs will last - in as much as they're lasting at the moment. (I just checked, and the price for a solved CAPTCHA is around $0.001 these days.) Computer vision is getting SO good - and I notice the latest reCAPTCHA check I took was using image recognition to determine whether I was human.

On a related note, Facebook's "locked out of your account" method of showing you pictures of your friends is probably easier to solve for a decent image-recognition AI than a human these days.


When mucking out the spam bucket on a blog, what really characterized the spam was how boring and poorly written it was. I realized that I wasn't deleting it just because it was spam. I was deleting it because it was off-topic and it sucked. It made me wonder when spam will get more intelligent. If a bot wants to join the conversation and it has interesting and original things to say, of course as a science fiction fan I would have to welcome it.

I'm really okay with the idea that we will be surrounded with bots acting like people. What I could use less of, having been through a few flamewars, is people acting like bots.


Trollbots posing as humans aren't exactly new, you know. This sort of thing was done a good many years ago, most famously to troll the UK's print news media.

The set-up here was a fairly simple one: during summer in the UK, Parliament takes a holiday. During this period the primary function of the UK's legislature, namely to amuse the populace, is taken over by the press dredging up any old birdcage-liner of a story purely to fill column inches. Thus this time of year is termed the Silly Season, and is also a popular time for journalists to take holidays, which reduces the journalistic cynicism and brainpower to a low ebb.

A couple of enterprising sysadmins dreamed up a truly daft story, which was that train commuters were using Bluetooth phone messaging on trains to arrange all manner of strange sexual liaisons (which presumably took place off the trains in question). A preposterous story, easily falsified even if 90% of journalists are technophobic Luddites, by sending out the few savvy journalists still present to investigate.

So, extra corroboration was provided by inventing a name for the practice - "Toothing" - and populating a forum with carefully-crafted semi-literate garbage, dozens of apparently quite active fake users, and plenty of content with faked dates.

Again, easily disproved unless all you happen to want is some column-filler and a nice sex scandal to amuse the chattering classes. The tabloids accordingly swallowed the story hook, line and sinker and very quickly the forum was populated with lots of completely genuine if deeply puzzled users.


Everything you posit seems plausible, right up to the claim that half your social media friends are robots as of 2018. There is a gap in the logic at that point. It would be true if people routinely accepted most friend requests, but I don't see the evidence for this, or at least for such "friends" having the same status as friends that have been validated in some way. Or are you claiming that most social media accounts will over time be cracked, taken over by bots, and assiduously maintained as part of a swarm that occasionally erupts, leading to even more humans quitting?


I routinely accept friends requests from sword people and writing people. So it would be easy for me to be taken in.


Oh, the bots won't just be sending friend requests. That's primitive for social media bots in 2016, never mind 2018.

I'd expect something more like this (on Twitter):

1) Bot RTs and Likes something good you post, probably something other people have RTed.
2) Bot Follows you.
3) Bot messages you saying "great share! $somewhat_intelligent_comment"
4) Bot Likes and RTs other stuff of yours, infrequently but frequently enough to be remembered.
5) Bot @messages you occasional things it thinks will be of interest. They often are.

That's enough to get a 40%+ follow-back ratio, I'd think.
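Those five steps are just a per-target state machine. A sketch with the actual API calls stubbed out (the step names and handles are invented, not a real client library):

```python
# State machine for the befriending sequence above. The "actions"
# are stubs; a real bot would call a social-media API at each step.
STEPS = ["retweet", "follow", "compliment", "occasional_likes", "targeted_links"]

class BefriendBot:
    def __init__(self, targets):
        # Each target starts at step 0 of the sequence.
        self.progress = {t: 0 for t in targets}
        self.log = []

    def act(self, target):
        step = self.progress[target]
        if step >= len(STEPS):
            return None  # sequence complete; keep up the drip-feed
        action = STEPS[step]
        self.log.append((target, action))
        self.progress[target] += 1
        return action

bot = BefriendBot(["@angler_alice", "@flytier_bob"])
for _ in range(3):
    bot.act("@angler_alice")
print(bot.log)
# [('@angler_alice', 'retweet'), ('@angler_alice', 'follow'),
#  ('@angler_alice', 'compliment')]
```

The only genuinely hard part is pacing: the steps have to be spread over weeks to look organic, which costs the bot nothing.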


You are right, that would probably work well. Oops! I filter out all people I haven't met in person or in intensive online interaction, but given a long enough time, even a slow dribble of botfriends slipping through the filters would dominate as you suggest. For instance, accepting a request from someone who was an acquaintance, but a long time ago, should probably require a much higher threshold than a request made by someone after a promise to send an invite, made at a social event yesterday. (Goes off to review list of social media contacts.)


[Takes a moment to feel superior due to the fact that DM has never had a Twitter account, and never intends to - take that, troll-bots]

There is an even more frightening implication that you haven't yet drawn - the use of AI for mass persuasion. Excluding people with views you don't approve of is useful, but it can only get you so far. What you really want is a way to mold attitudes, and group influence research pretty conclusively demonstrates that the perception that other people hold certain attitudes (especially if they seem to be members of a group that is important to your self-identity, like members of your demographic) will itself change an individual's opinion. Create the illusion that a large number of people disapprove of abortion, or believe that vaccinations cause autism, or that being Muslim is a risk factor for terrorism, and you have the framework for a truly Orwellian level of social engineering.

The caveat is that elites disagree, and the end result may well be swarms of opinion-bots disagreeing and unfriending each other. It wouldn't be long before even the unwashed masses figure out what is going on. The social media net may well become unusable, or at least lose all credibility as a source of information. Back to Face-2-Face? Is this the return of wetware? Or... gasp... books!


This has some nice parallels with some of the tools that HB Gary (remember them?) were using back in the dark ages of 2011.

This rather suggests that the trollbots are already out there, it's just that they're working for the government (or at least, for the intelligence agencies, which is kinda like working for the government if you don't look too closely).

Microsoft's nazi sexbot is just the technology becoming mainstream.


Greg Egan's _Permutation City_ had spam-bots using increasingly sophisticated targeted AIs pretending to be your friends, and you'd have a filter-bot pretending to be you to try and detect the spam-bots before passing your actual friends on to you.

That was back when the only social media was mailing lists and Usenet.


I don't care if you make that 2025 - my point stands!

As I posted earlier, Turing was naive in expecting most people to engage their brains - dammit, I know leading research scientists (some well-known, too) who more-or-less switch off their brains on any topic other than their work. So we agree that it will be (and perhaps is) true for the bulk of the population. Let's ignore those self-made idiots. Let's also exclude the fact that I don't do Twitter, Facebook etc., and most of my 'social media' interactions are newsgroup-like and with a specific activity focus - this blog is the ONLY one that isn't - as is very common for people of my era :-) I had better explain what sort of person I am talking about (using myself as an example).

The few people that I would regard as 'friends' (and it's more like drinking companions in the village pub than personal friends) all have to pass the test that they have something interesting to say about my areas of interest. And that does NOT mean either agreeing or disagreeing with me - far from it - it means saying something that I haven't thought of, which also stands up to analysis or checking. And, in most of those areas, I am a VERY long way from being either a tyro or a follower of the established beliefs and procedures! There are a lot more people like me than is often realised, including a pretty fair number on this blog. We are often discounted by the mass marketeers, because "we aren't an important demographic" (read: we are hard to take in by just spouting guff, at least on a regular basis).

I can tell you that the state of the art in AI is decades at least from being able to pass a Turing test on such topics when faced with the likes of me, or even emulating most of the regular posters on this blog.


"and you have the framework for a truly Orwellian level of social engineering."

Obligatory pedantry: Huxley, not Orwell. Orwell's society used crude compulsion.


Yes, but would you start counting them as 'friends' rather than 'acquaintances'? I have been taken in, more than once, by requests for information etc., but it pretty rapidly became clear that I was talking to a bot-equivalent entity. None got even close to being regarded as 'friends'.


Currently about 5 bots with a few dozen accounts each are enough to dominate pretty much any sub because of how reddit works. The first 5 votes have as much weight as the next 50 which has as much weight as the next 500.

5 fast upvotes will spin something to near the top of the front page of most subs while 5 downvotes will make sure almost nobody ever sees it.

It's part of what selects for memes on the site. If you have 2 items, one that takes 3 seconds to evaluate and vote on and the other which takes 3 minutes to evaluate and vote on, the first will win because the fast upvotes push it further up the rankings.
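That first-few-votes effect falls straight out of Reddit's (open-sourced) "hot" ranking, which is logarithmic in score. A simplified rendering - recast here as linear age decay rather than the original epoch-seconds bonus, with the same ordering behaviour:

```python
from math import log10

def hot(score, age_seconds):
    """Simplified Reddit-style 'hot' rank: logarithmic in votes,
    linear decay with age. Going from 1 to 10 votes is worth as
    much as going from 10 to 100."""
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    return sign * order + (-age_seconds) / 45000

# Five early upvotes beat fifty late ones: votes count logarithmically,
# but every 12.5 hours of age costs a full order of magnitude of score.
fresh_small = hot(5, age_seconds=0)
older_big   = hot(50, age_seconds=50000)  # ~14 hours older
print(fresh_small > older_big)  # True
```

Which is why a small, fast voting ring punches so far above its weight: it wins the only votes that matter much.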

I could imagine if it became a problem walled gardens with increasingly sophisticated captchas testing things that the bots are still bad at. Or webs of trust based on face to face meetups.

Good post though. I'd not considered some of the implications of good chatbots.


Yeah, agreed - I was being cautious with my estimates. Reddit's upvote algorithm is rather subject to manipulation if you can get in there fast.

Having said that, getting in there fast also makes it increasingly obvious that there's a voting ring, which is one of the reasons I thought you'd probably want an order of magnitude more bots on the go to disguise voting patterns.


Minor tweak: you need to give the bot an optimization criterion, not a win condition.

With golf as an example, the win condition is getting the ball in the hole. It's a binary thing -- it doesn't tell you how well you just did, it only tells you whether you won or not. It considers it just as bad to hit the ball into the green as to hit it into trans-Neptunian orbit.

The optimization criterion is how close the ball is to getting into the bottom of the hole.
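The difference shows up immediately if you hand each kind of signal to a naive hill-climber. A toy number-guessing illustration (the target and step counts are arbitrary):

```python
import random

TARGET = 73

def win(x):
    """Win condition: binary, gives the learner no gradient."""
    return x == TARGET

def closeness(x):
    """Optimization criterion: graded 'warmer/colder' feedback."""
    return -abs(x - TARGET)

def hill_climb(score, steps=500, seed=0):
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        candidate = x + rng.choice([-1, 1])
        if score(candidate) >= score(x):
            x = candidate
    return x

print(hill_climb(closeness))  # reaches 73: each step says warmer or colder
print(hill_climb(win))        # wanders: every wrong guess scores the same
```

With the graded criterion the climber walks straight to the target; with the binary one it can't tell a near miss from a wild one, so it never converges.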

As for how likely that is to muck about with my corner of the twitterverse -- it won't have much of a direct impact. I discover people through social media, sometimes, but in contexts where I have separately identified them in real life. Friend of a friend, read their articles on Vice, worked together on a project, something like that. Most of the people I know only through social media are people I distrust and (often) dislike, certainly not people I follow.

However, it can strongly impact trending hashtags, and it is an effective tool for harassment. Facebook is a little safer because you have to find the person you want to interact with and send a friend request, or join a group that they have joined, so it's harder to position your bots to interact with harassment targets.

That's still quite open to sentiment manipulation, though.


Spam seems to have headed the other direction, market-wise: optimised for minimum cost, not maximum gain. A spammer who spends time and effort optimizing for quality is competing against other spammers who have used that time to send another few million messages...
(IIRC there's also an argument spam is bad on purpose - people who'll click through and then realize it's a scam are selected against by it being relatively obviously a scam on its face.)


Issues with this whole line of argument:

1. I *know* I'm nowhere near the last person on Earth that *doesn't* have an account to be a Twit. But then, in the mid-nineties, working for one of the Baby Bells, I wore a pager close to 24x7x365.24 for about a year and a half (except for the month or two that I wore *two* of them). Someone wants me to twit - and the 140 chars protocol was designed for pagers - and I will shove their annoyaphone down their throat. (And no, I do *not* have one, and have no intention of *ever* getting one.)

2. The thread, and Hugh, seem to be leaning toward a world where one-third of the processing power of the 'Net is devoted to spam and malware, and another third to bots intended to puff someone up or put them down. I have doubts about this....

3. I'm only on Facepalm because, a few years ago, it was the only place I could find someone selling a Worldcon membership. Use it? Only to try to get a friend's attention, when they're not answering emails. Someone I don't know, like some yoyo out of nowhere, who wants to be on my linkdin connections? Phat chance.

4. Here's the real question: with all these swarms of bots, what happens when they intersect? The first 'Net bot-war? Or maybe they intersect, and decide to attack *all* their owners? That, of course, brings a major amount of the 'Net to its knees? After which, government panels, and strict regulations, or maybe just making it a crime to create/run one?

What, you *really* don't think the last could happen? You don't think every advertiser on the planet, and every corporation whose revenue stream needs the 'Net, will tell them to do exactly that?



Also the 50 Cent Party and other similar organizations. Yet another career being eaten by automation, woe to the worker in this economic climate...


Re. point 1: There's an argument to be had about how much Twitter drives the news nowadays, but it's quite a lot - not just the stories about things happening on Twitter, or quoting Twitter users in stories about other events, but journalists finding stories and sources in the first place. Murdoch paid $25 million 3 years ago for a company that just does validation and rights-clearing of online photos and video; being able to drive a video viral affects international news cycles.


Fair point. I was thinking "win condition" because individual events are boolean / binomial - you either get a reply from someone or you don't - but in aggregate they're not. Derp.

Facebook: the obvious entry point for the first bots is the group mechanic. Join every fishing group in sight, put up funny memes, comment on other people's posts. It's fractionally harder than Twitter, but not much.

Another option would be to use advertising. Create new fishing page, spend $100 on getting Likes, then the bots can interact with the actual users who have Liked the page.

However, once the first few bots are in, network effects take over. On FB you interact with friends-of-friends, and someone being a friend of a friend automatically sends something of a trust signal. So the first bot can use its connections to recommend the second bot, and so on, and so on.

My bet is that it'd be easier - vagaries of the FB interface aside, because obviously you can't use API access for a trollbot project - to infect FB than Twitter. People are less likely to suspect that FB contacts aren't real, and it's a less tech-savvy crowd overall.


Ahem. After briskly singing the refrain to "Are Friends Electric" I now feel the need to bring everyone's attention to a most excellent novel by Ken Macleod, The Execution Channel, which talks about the use intel agencies might make of bots, blogs, and social media. (Strap line from the cover: "the war on terror is over -- terror won".)

Seriously, this stuff isn't new. What's new is coupling the bot front-end to the big data back-end and virtualizing it using cloud servers as a platform.

I fear there is a way to defeat the trollbot future, but it requires a mandatory national (or global) identity scheme and hardware authentication with biometric identification before anyone can log into a network terminal. In other words, the total destruction of the culture of anonymity on the net.

Which is more socially important, anonymity or verifiability?


That's the centralised solution. One could argue that a web-of-trust option ought also to work, but I don't think those scale.

(Particular evidence being this example of such a web being broken)


I assume that we already have mass botting on anything where this has a positive net benefit to a bot operator (chances to sell stuff, political issues being pushed or buried). Where there is no obvious benefit, I just assume bots are being trained even if they are not directly participating. Why do we need to defeat the trollbot future? We just have to switch to media that reward bot behaviour that has a benefit for the human members not part of the botnet, instead of rewarding bots driving people away. I'd rather get my news suggestions from a bot that has created a good model of my interests than the clickbait farms that newspapers are becoming.


You know how there are "booter" services that will happily DDoS a site of your choosing in exchange for a proportionate bitcoin payment (only for legitimate stress-testing of your own sites, of course, wink wink nudge nudge)? How about automating social mob attacks in the same way? Maybe this is just restating what you already said, but the emphasis here is just using a bit of synthetic outrage-tinder to send genuine human mobs after targets. That way you don't have to be too clever about faking up a genuine history and persona for each facet of your AI attack swarm. It's the Artificial Intelligence/Intelligence Amplification distinction again, but with refined tools for shaming and mobbing under light human direction.

If the target already has a Twitter or Youtube account, have a bunch of sock puppets claim that the target shared something objectionable and then deleted it. Rants about how cats defecate in your yard and should all be killed. Verbal abuse hurled at service workers. Confessions of driving under the influence. They deleted it, but the proof still exists: screenshots with consistent details! Also maybe a re-up of "their" video on liveleak or whatever. AFAIK both Youtube and Twitter still allow users to delete posted content without leaving a trace visible to the general public. That's supposed to be a protection against momentary bad judgment but in this case the lack of evidence works in favor of the mobbing: a lack of evidence due to shameful retraction looks the same as a lack of evidence due to the alleged action never happening in the first place. If you're someone with a reputation, like a prominent CEO/author/actor, you can probably get Twitter or Youtube to confirm it's all a hoax. If you're an adjunct professor, a first time published author, or someone else without a strong reputation, the smear can probably stick until it's too late. Make sure the bots participate in early stages of shaming campaigns that have (apparently) human originators too -- they raised awareness for Cecil the Lion, now they're doing it for X too!

The mob-as-a-service doesn't just have to use the broadest lures, either. Depending on how patient the customers are, the service could create a blog in the target's name and populate it with content over a period of months. Make it inflammatory to the general public, or to some sub-population like MRAs or animal lovers, then toss links out there when the content has a patina of age and it's time to destroy the target. In fact, there could be a bunch of blogs created ahead of time but made non-public. Just do a quick edit on the author details and open it up to the public shortly before the launch date. Author bios, DNS records, suggestively named email addresses in contact forms -- there's a lot of ways to "prove" that the target is connected to smear-content.


Charlie, Hugh.
This is already one-to-20 reasons why I refuse to go anywhere near Arsebook.
But ... bots invading discussions like this one?
Only too easy in a year or two, I fear.
And, of course, there are two overlapping groups of people who will welcome this revolting development:
One: The politicians, especially the lunatics & the torturers & the racists.
Two: The religious believers.


Firstly, I seem to recall a short story or such set a few decades' hence, which dates the internet from 198? to 2020 or something, before it all went horribly wrong and people couldn't have the internet anymore.

Secondly, I can see this sort of thing leading to more walled gardens that are heavily policed. It would also kill the Guardian's online commenting system, which seems to be the only way they can make money from adverts etc. A lot of local press stuff, especially titles owned by Johnston Press, carries so many adverts that I can't actually find the news story amongst them.

There could also be an arms race between people selling advertising space, and those buying it, insofar as the latter will want genuine non-bot figures for commenters and clicks and such, and the former will be happy for the figures to be as high as possible, no matter what it takes.

As for me, I find it hard to actually see adverts, and usually forget them quickly even when I do.

Which then makes me wonder how hard it would be to institute a nearly advertless internet for people who pay a subscription fee. I note a lot of the apps on my shiny new smartphone are advert-powered in some way or other, and offer no adverts in return for a payment.


The arms race between bot clickers and ad buyers is well underway.

I actually published an article on another site last week (private, off-search) offering step-by-step instructions for ad buyers to detect particularly bot-heavy "placements" (as they're known). It's one of the more popular articles I've written this year.

Startups like Forensiq make their entire living doing nothing more than detecting fraudulent clicks. And there are plenty of people making their entire living creating those fraudulent clicks, too.

As for the advertless internet - check out Google's Contributor program. Essentially, you outbid the advertisers for your clicks, and thus see no ads.

As far as I can tell, it's sadly not been terribly popular so far, even amongst the HN-ey crowd that generally hates advertising.


...Ronco Bot-away or Popeil Pocket Poster-check, useless in dealing with swarm activity but capable in limited specific instances of verifying whether a specific commenter is indeed human.
I'm wondering how hard it would be to surpass human performance on the (auto)troll/not (auto)troll classification problem. Have been personally misidentified (at least that was the claim) more than a few times as a paid troll/low level employee of government misinformation service/etc, i.e. humans are not great at this either.
Dumb Bayesian ham/spam classifiers can be amazingly effective even when tuned for low false positive rates (1 in 10000 was a target max FP rate in a spam detection system I worked on). Almost certainly insufficient for troll/not troll classification.
One problem is accurate labeling of large-enough training sets; this is very expensive even when care is taken in choosing what to ask humans to label. ("Uncertainty sampling" is one approach, and it can be made faster with approximation.) This yields labeling at human levels of accuracy, not superhuman, so some sort of approach for achieving superhuman performance would be needed. (The tools that are faddish these days include reinforcement learning and Monte Carlo search.)
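To make the classifier side concrete, here's a minimal sketch (toy data, function names of my own invention) of the kind of Naive Bayes ham/spam scoring described above. The decision threshold is deliberately set high: you trade away recall to keep the false-positive rate down, which is exactly the tuning trade-off mentioned for the 1-in-10000 target.

```python
from collections import Counter
import math

def train(docs):
    """Count word occurrences per class from (label, text) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    for label, text in docs:
        counts[label].update(text.lower().split())
    return counts

def spam_score(counts, text, alpha=1.0):
    """Sum of per-word log-likelihood ratios (spam vs ham), Laplace-smoothed."""
    n_spam = sum(counts["spam"].values()) or 1
    n_ham = sum(counts["ham"].values()) or 1
    vocab = len(set(counts["spam"]) | set(counts["ham"])) or 1
    score = 0.0
    for w in text.lower().split():
        p_s = (counts["spam"][w] + alpha) / (n_spam + alpha * vocab)
        p_h = (counts["ham"][w] + alpha) / (n_ham + alpha * vocab)
        score += math.log(p_s / p_h)
    return score

def is_spam(counts, text, threshold=2.0):
    """A high threshold trades recall for a low false-positive rate."""
    return spam_score(counts, text) > threshold
```

The troll/not-troll version would need far richer features than bags of words (posting cadence, account age, reply graphs), which is partly why the dumb version is "almost certainly insufficient" there.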
Yes, as suggested above by various, there will be a rapid arms race. Fun.


The bots already invade discussions like this one, in force.

It's purely the development of them invading for reasons other than SEO that we're waiting for...


What legitimate job-shaped uses do these swarmbots have? Can they conduct research? If so, in what fields?

I may be biased, but a lot of research in CS these days is basically combining previous research in new ways, similar to a jigsaw puzzle. If so, how would swarms of bots that can read and decipher millions of papers change things?


I see also the Kindle Unlimited thing with lots of fake books being used to farm money, sort of:

So how long before that can all be automated and the bots earn themselves money?

Outbidding advertisers for my clicks sounds like a really silly idea. Like renting out your eyeballs, as suggested in many SF stories. It means the impoverished will be forever bombarded by adverts, often for things they can't afford.


"a mandatory national (or global) identity scheme and hardware authentication with biometric identification" No chance, sorry. Not of it being done, but of it working. All that it could do is to stop (almost all) private botting without making much impact on corporate (including Monsanto, Mafia, Mossad etc.) botting.

The problem is who controls the detailed design and production of identification devices, because any access to those allows the creation of bogus identities. One can imagine a GPS-like monopoly in the USA hegemony, but the former would last only as long as the latter does. So China refuses to play ball, and provides its own? Are they going to be blackballed? Even if the USA established a monopoly, the first time that the USA's agent abused its position, the big players would start to develop alternatives - vide GLONASS and Galileo. And the temptation to enable bogus identities would be just too much, just as the temptation to enable the providing of bogus positions was.

Worse, the threat here runs the other way round: it's as if the ability to hack the GPS receiver (i.e. the distributed component) were the threat. I am damn sure that the detailed design of the devices would leak, at least to corporations like the above, even if there wasn't an abusable back door. Whereupon they would start to get into the act - oh, yes, it wouldn't be the flood of garbage that we have today but, what it loses in volume, it would make up in objectionableness.


See "The Data Class" by Ben Jeapes, and quite a few other stories :-) Yes, they could, and that is actually what "Big Data" is supposedly all about - it's mainly a more automated form of data mining, which was itself merely an extension of some analytical methods that were used from the 1960s onwards.

Take, for example, academic research in most areas. The 'publish or perish' system of the past few decades has made manual literature searches impossible in many fields. Almost all of the search hits (including references in tagged papers) are completely irrelevant. While writing a bot to filter out potentially interesting references would be hard, and very area-specific, it could be done using existing AI technology.
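As a rough illustration of how such a literature-filtering bot might start out (entirely my own sketch, not an existing tool), one simple baseline is to rank candidate abstracts by TF-IDF similarity to a seed set of papers you already know are relevant:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build term-frequency * inverse-document-frequency vectors per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(seed_abstracts, candidates):
    """Rank candidate abstracts by maximum similarity to any seed paper."""
    vecs = tfidf_vectors(seed_abstracts + candidates)
    seed_vecs = vecs[:len(seed_abstracts)]
    cand_vecs = vecs[len(seed_abstracts):]
    scored = [(max(cosine(c, s) for s in seed_vecs), text)
              for c, text in zip(cand_vecs, candidates)]
    return sorted(scored, reverse=True)
```

The hard, area-specific part the comment alludes to is everything this sketch ignores: synonymy across subfields, citation context, and the fact that the most interesting papers are often the ones that *don't* share vocabulary with your seed set.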


OP: Is there a horrific use of the trollbot I've not thought of?
You're mostly focusing on boutique-scale private actors.
Governments (at least those with no laws against it), particularly authoritarian and totalitarian governments, corporations (h/t Elderly Cynic above), and other organizations will also be playing at this.
In the previous thread we briefly discussed a twitter AI-bot that was arguably (potentially) a force for good, an "artificial conscience" that tweeted about bad behavior by targeted individuals. Even that could (would) go very wrong, e.g. imagine one or more moralistic religious organizations with big bot budgets. (Worse, organizations with opposite moral stances.)
Also, imagine current banes of comment threads, automated. Anti-vaxers amplified by automation, or (U.S. mainly) gun-control arguments amplified by automation, or political arguments, etc.


I can just see it now:
--the internet gets flooded with spam, spambots, trollbots, botswarms, whatever.

--This stuff takes a measurably large amount of energy to compute (human brains are still a lot more energy efficient than computers).

--A real-world political movement emerges to ban social media as an energy-efficiency measure, to meet the goals of climate change, after everyone's scared shitless by the storms and crazy climate of the next few years, bolstered by a crop failure or two that sets up political insurrections amplified (as with the Arab Spring) by social media.

--The upshot is that friends start mailing Christmas cards to each other, rather than trusting the internet, and Facebook and Twitter go bankrupt.

I'd rate this as low probability, but the take-home message is that this can't expand infinitely: there are real resource and energy costs to social media, and if social media gets caught in a positive feedback loop of increasing uselessness and spiraling resource and time consumption, people will turn away, just as they've done with other media forms.


Of course, if we can make trollbots, presumably moderator-bots are a few years behind. How many mods would like a bot that says, "Hey boss, I noticed this web of connected accounts, analyzed their pattern, found they were likely a bot, and banned them all?" I know I would.
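A toy version of that "web of connected accounts" pass might look like the following (my own sketch; the assumed input is pairs of accounts linked by shared IPs, registration details, or near-duplicate posts, and any flagged cluster would still go to a human mod before banning):

```python
from collections import defaultdict

def connected_components(edges):
    """Group accounts into components via DFS over an undirected link graph."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        components.append(comp)
    return components

def flag_suspect_rings(edges, min_size=3):
    """Flag any linked cluster at or above min_size for human review."""
    return [c for c in connected_components(edges) if len(c) >= min_size]
```

Real deployments would need much more than component size (legitimate friend groups also form dense clusters), but the basic shape - build the link graph, cluster it, escalate anomalies - is the analysis being described.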


That's a fair point. The larger the institution the more sinister it gets, in some ways.

(And a number of large companies are already deploying AI tools to deal with their social media presence.)

Also, imagine current banes of comment threads, automated. Anti-vaxers amplified by automation, or (U.S. mainly) gun-control arguments amplified by automation, or political arguments, etc.

I would assume that these kinds of causes will be among the first to take up and start using socialbot tools. On both sides.

It could start so innocently, too. One person gets sick of antivaxxers so deploys a tool to autoreply to them. Then the antivaxxers respond with a bot of their own. Then...


And now we're pretty much describing the state of the art in the SEO world, particularly the bits where the hats are of a more grey shade.

It's bots vs bots to a large extent. Lots of Google's antispam tools are machine-learning based, designed to ferret out the networks of link providers artificially increasing search engine ranking. And they do exactly the kind of analysis you describe.

(For a glimpse of where that goes, find a blackhat forum talking about network footprints and how to avoid them...)


I found The Execution Channel pretty disturbing beyond the surface, because there was an obvious and unsettling possible back-story (no spoiler here). Did Ken ever mention anything like this?


Did Ken ever mention anything like this?

I'm not sure. (Email me and I can ask him ...)

I will note that, to me, the final chapter felt very much like a unicorn chaser he wrote after he realized he'd backed himself into a corner and written something even more depressing than a Peter Watts dystopia (although he denies it).


Yes, I found the ending totally disconnected from the rest of it, verging on the silly.


Once it becomes obvious that troll bots are running loose in a forum, the sport is to identify the conditions they're programmed to respond to and goad them into outing themselves. Ideally, when there are multiple bots in thread the prize would be to get them to engage with each other and induce a rhetorical flash crash. The dictum of not feeding the trolls becomes one of overfeeding them so that they become irrelevant.

That wouldn't work on a personal blog like this, but subtly goading the bots into annoying the Facebook or Twitter sysadmins would quickly cause an immune response.


One parcel delivery company I moaned about on twitter contacted me an hour later to ask for details about my poor experience. Still took them two days to get all relevant details from me, and they never managed to deliver the parcel, so I complained about them to the seller of the thing I had bought. Odd how difficult it can be to deal with people being out at work when you try and deliver a parcel.

As for search engine optimisation, I have found google to be getting a bit worse in the last year or two. Sure, I usually search for more exotic topics, but even those searches have somehow stopped working, including not being able to find something I had found a few years earlier using google.


Peter Watts denies that the societies he sets his stories in are dystopias. This concerns me.


People still reading previous threads might note more info suggesting that "Hadil Benu" et al are very close to the sort of bot that is being discussed in this thread.


Tx. If I still feel the need to know this evening (US) will ask.

(Friend wrote the original P. L.O. 419 spoof, and it went viral for a short while.)


I think Hadil Benu is a human being who has seen too much, probably having been exposed to artifacts/beings from deep time without adequate preparation/debriefing. The predecessors didn't think like we do and their relics sometimes attempt to "fix" humans so they are more like the creatures that created them. This rarely goes well.

Or Hadil Benu is a bot. Which you think is more likely depends on whether you think we're in the Middle East for oil or because we're collecting predecessor relics and hoping to pick up a "plot coupon" and advance to the next level...

Myself, I try to believe ten impossible things before breakfast. Here's a youtube link. And a PDF. You understand, right?

Stupid Humans.


Asking Peter Watts (or me) whether we're working in dystopias is a little bit like asking angler fish whether they work best under high pressure. It's a matter of perspective that surface-dwelling types sometimes have issues with.


"Whenever I find my will to live becoming too strong, I read Peter Watts." -- James Nicoll


Parcel delivery outfits shoot themselves in the foot by operating at times of day when the recipients are least likely to be around. It would make a lot more sense for them to operate say from 1800 to 2200.

Google "getting a bit worse in the last year or two"? Aw, c'mon. I reckon you have to go back a good ten years before Google wasn't shit. Leaving aside the hassle of fighting its insistence on changing your search terms to something stupidly irrelevant, this is down to advertising and breadheads. Far too many searches give results like this:

  • One wikipedia link
  • One or two - three if you're really lucky - relevant(ish) entries from blogs, forums etc
  • Lots of academic paper sites which only the googlebot can actually read; if you're not the googlebot they try and rook you 40 quid a pop to read something which you can't even tell in advance if it'll be useful or not
  • Absolute craploads of useless sites which do nothing but copy stuff off wikipedia and forums/blogs, then re-present it with added ads, so you get pages and pages of effectively identical results (the snippets are identical)

Ten years ago Google used to claim that they would come down like a ton of bricks (1) on sites that presented the googlebot with different information than they gave to ordinary users, and (2) on sites that did nothing but copy stuff off other sites. These days (1) has gone down the toilet as they have admitted getting into bed with the academic paper ripoff sites, and (2) seems to be just as dead - probably because the ads on the copying sites are supplied by Google.

I say "probably" because I never see the ads myself - which brings us to Hugh's comment #34... pay to not see ads? Not on your nelly. I just block them, which costs nothing.

I have no sympathy with the people who claim ads are "necessary" to pay for websites. Unless you're serving silly amounts of traffic it doesn't cost anything to run a website. Many ISPs give you enough webspace to be useful free with your broadband connection, or if you have a static IP you can run a server at home. Even if you can't do this you can get hosting for the price of a couple of pints of beer a month. You don't get a problem unless you start hosting videos or something.

And the most informative and useful websites are, and always have been, those which are not concerned with money: sites run by knowledgeable individuals as in the previous paragraph, or hosted on university servers, etc. An example would be Sam Goldwasser's Laser FAQ site, which has no ads, is run purely for the love of it, and seems to be the best source of information regarding lasers on the net. The more commercial a site is the less useful information it has. As with everything, people who do it for the love of it do it better than people doing it for money.


I try to believe ten impossible things before breakfast.
I try to keep it down to one or two; hard to keep track otherwise. Also, sometimes the impossible thing belief systems imply that one shouldn't talk about said impossible things. So for those, I don't, hard rule. (Or try to keep it absurdly obtuse, if weak.)

Loved your story "Technical Difficulties" by the way.


I'll try to remember to ask him next week, if you like. He's visiting our kids for a seminar.


"(To the best of my knowledge no-one's creating an XRunner-like product - a forum / comment posting product - incorporating Deep Learning to optimise for comments that get approved. Yet. Give it five years. To be fair, that might end up being an unexpectedly positive arms race.)"

I am not so optimistic. I've found that forum voting systems often seem to reward humor over utility, confidence over accuracy, and tribal loyalty over pretty much anything.

Even with actual humans in the loop, we do not seem to be especially good at filtering content, so I think that optimizing a bot to penetrate our filters is no guarantee that it will contribute anything useful. I think it's more likely we'll get echo-chambers and false-but-hard-to-disprove folk tales.


I thought that Google results had really gone down the toilet in recent years too. Last year I found that forcing exact terms all the time restores a lot of the good performance I remember, even though it's "laborious" "to put" "quote marks" "around" "every term you really want."

If you're looking for something that is well-covered in the scholarly literature, Google Scholar has become significantly more useful since Researchgate became popular, even more so since sci-hub came around. It's like glimpsing an alternate history where the Web developed as envisioned at CERN, a free and incredibly capable platform for the sharing and discovery of scientific knowledge. Imagine a world where scholarly researchers were by far the most prolific authors on the Web, where nobody tried to charge $7 per page for articles built on volunteer labor, where you never had your browser suck down 5 megabytes of javascript that's trying to trick you into downloading fake driver updates or reading about the One Weird Trick that Doctors Hate. That's the better world you can live in with the right search tools and a modicum of contempt for the legal privileges that Elsevier et al have purchased from legislators.


Would this create an incubation environment for AI?

I've seen it suggested that trying to model other people's thought patterns and behaviour for competitive advantage in social situations is a key component of self-awareness.

Trollbot swarms might be the first successful experiments instead of AIs being crafted in corporate and research labs. And who'd notice or care very much if a newly sentient swarm escaped?

SkyNet is just a goal-seeking swarm where the assigned goal was "destroy my political opponents" …


Sure, go ahead and ask him. I'd be interested in hearing what he has to say.


Do you really think that trollbots, swarmbots, friendbots, and the like will remain effective once their existence becomes common knowledge? Which it will, when they are common enough to have a noticeable effect. Most people I know already pretty much ignore most of their Facebook "friends" -- they only pay attention to people they know in real life. Knowledge that a random person on the Internet is more likely than not to be a bot, will seriously accelerate this "social media contraction".


"The more commercial a site is the less useful information it has. As with everything, people who do it for the love of it do it better than people doing it for money."

As somebody who has been slowly and quietly going broke for years doing ad-and-affiliate-supported web media, I don't actually disagree with this. It's been a race to the bottom and I didn't race fast enough -- my sites are higher-than-the-usual quality for my sector and consequently can't compete with the ones running click-bait and blatantly-fraudulent ads.

That said, what you're describing is a state of affairs in which we only expect to see quality sites from independently-wealthy (or at least substantially-privileged) actors. That's going to be limiting in a "stuck in a bubble" sort of way. Good websites take a lot of time to put together, and if we assume that nobody who needs to be paid for their time is somebody we want to hear from, I see some structural problems from that.


Isn't a more productive stance to view all participants as bots of some kind?

After all, the main difference between wetware and software is that wetware usually starts with a much larger network. However, a human participant in a discussion may not be fully engaged, degrading their contributions to drivel or worse, and in some forums the average comment seems below the level of even primitive bots. (I'm basing these comments on human-only IRC chats from several years ago, since I do not know what proportion of "entities to interact with" out there now is made up of software agents. The actual range of quality hasn't changed much as far as I can see.) Moreover, a human may not be interested in "winning" the interaction, by optimising long term approval or social metrics, so can be frustratingly pig-headed as an adversary, whereas I expect gracious behaviour would be one of the easier-incorporated parts of a software network because it has a large long term effect on status.

The big failure mode that Hugh (and Charlie) point to seems to be akin either to the figurative AOL Invasion of Them Usenets Tubes, or otherwise Occasionally the Neighbours Go Crazy. (Someone with better knowledge of TVTropes is welcome to supply the usual names.) Too many new participants uninterested in learning the existing norms will change or at least shift the community values, and this is not exactly new. Too many brood parasites playing along until activated for other purposes also seems similar to previous threats faced by human societies.

What did 17th century folks do when a witch-hunting mania struck their neighbours (or what do 21st century people do, given that this crops up every few years here and there)? What stops Rwanda-style escalation in most societies? How do we live among people who insist on voting for sociopathic liars every few years and then instantly go back to being good fun over a round of beers, as well as greatly helpful if you need to borrow a pair of hedge shears?

As far as I can tell the key is a system built with participant-agnostic mechanisms, like the ones documented by Elinor Ostrom for non-market interactions. From this perspective, throwing up one's hands and claiming that a "universal identity system" is the only way to solve such problems is going in mostly the wrong direction.


Does it mean the death of "impressions" based advertising? 'Tis a consummation devoutly wish'd...


Given that the big social networks are seeing a decrease in original content, and an increase in behaviour that is indistinguishable from a bot takeover, it seems to me that the process you allude to is already happening. There is little practical difference between someone whose participation consists of reposting the five big memes-of-the-day, or a vanilla RSS feed of the top Buzzkill stories. Whether people are behaving more like bots (because of cognitive load) or there are more bots in the feed, the net effect is the same.


I have no sympathy with the people who claim ads are "necessary" to pay for websites. Unless you're serving silly amounts of traffic it doesn't cost anything to run a website.

Ironically, if you're serving silly amounts of traffic it's probably because it's all ads (or bloated javascript framework code) -- so it's a self-correcting problem.

The end-game for some of this may be "curated experiences" like back in the AOL days. Imagine if Apple offered a service whereby your browser would only see a very small number of ads, and those only from the Chosen Provider. Basically that's what Google is offering, along the search-only axis. The next stage would be to broaden that out a bit and extend it to email and a few social media. I'd be happy with it, if it meant there were some basic standards enforced - no ads more than 4k, no malware, no autoplay, no autoplay sound, no flash, no... ugh. No ads.


I'm glad you enjoyed "Technical Difficulties." I've finished the story of the guy whose parents had a mixed marriage between a Jew and a Deep One. I called it "M'lton Sees a Shrink."


The Chinese government already has its own claque of bots, except they are human. They're called the 50-cent party because they're paid 0.50 yuan per comment. I suppose AI bots will take a chunk out of this.


Is there a readily-available copy - I've lost the original reference ....


I'm currently in the process of deleting "friends" who post more selfies than interesting comments. The social media world is going to become a lot smaller in future, from a user perspective.


I agree, but my assertion is that this is because 'they' have succeeded in dumbing down the hoi polloi to the level that the latter are functionally equivalent to bots, not that bots are becoming rapidly better. This may be the "it was all better in my day" syndrome that old fogies are prone to, but I don't think so, and there is other supporting data.


Trollbots need to get my attention, and hold it, in order to change how I think. If they keep getting better and better at this, to the point that they exceed the abilities of human acquaintances to do this, I suspect I might actually prefer the bots to the humans. I look forwards to my far more engaging future.


How do we find your stories? I liked the one you posted.


Actually, Google has tweaked its algorithm quite a lot in the past few years, largely to block people (like me, guthrie, Pigeon etc.) who use it to get information they want rather than what Google wants to give them. It's still usually possible to bypass it, but it's harder than it used to be, and even double quoting doesn't always help. I much preferred Altavista, but they eventually buggered their database up so badly that I had to give up. I don't find DuckDuckGo very useful, unfortunately.


I believe that's Facebook's plan right now with Facebook Instant Articles. And arguably Google's too with their AMP initiative, but that's less sinister than everything that used to be the Web coming to reside under FB's banner.

Ad blockers produce a strong tendency for all content to move to walled gardens, because walled gardens can make it much harder to block ads. Facebook and YouTube are the big winners of that trend so far.


The threat is really when someone smart starts harvesting identities from the available net info (name, location, friends, interests, etc.) and starts using them to create fake personalities for you.

Most people don't have time to really play in more than a few social watering holes. So the fakeyous could easily set up whole fiefdoms that you had no knowledge of. Obviously they can be used for the nefarious purposes outlined above, but the genius bit is the real you is there if anyone tries to check up.

The fakeyou would be there to make money, either by spruiking products, or just by taking contracted jobs on your behalf (farmed out to someone in Thailand). If your reputation goes down the toilet, it doesn't matter to them - and how do you prove it wasn't you?


Unless you're serving silly amounts of traffic it doesn't cost anything to run a website.

Common misconception.

I estimate the cost of running this site at somewhere on the order of £10,000 (US $14-15,000) per year. There's roughly £1000 in server rental overheads, but then there's the opportunity cost represented by my writing blog essays and engaging with comments like yours, which amounts to the word count of an entire novel every 18-36 months. The trade-off for me running this blog since 2001 is that five Charlie Stross novels have never been -- and never will be -- written.

The value of those novels is impossible to calculate but assessing them at ~£20,000 each is conservative, on the basis of my income from those books that I do write and sell -- and even if you leave aside the books, it takes me roughly 10 working hours a week spread over 7 days to run the blog, so ~25% of a regular working week for a middle-aged professional.

I can justify this expenditure to myself because:

a) I need social interaction in my office (the cat isn't sufficient)

b) It's an occasionally-handy laboratory environment for testing ideas on a bright, somewhat asperger-ish audience of hard SF nit-pickers

c) It's a marketing tool with considerable reach in terms of putting my name in front of potential customers

Whether it's a marketing tool worth spending £10K/year on is, however, another matter, and if Hugh's bot-ridden dystopian nightmare comes to pass I'll probably shitcan these comment threads and switch to broadcast-only blogging.

Meanwhile? Online publications like Ars Technica or Wired or The Guardian would probably like to have someone like me on staff ... but they can't afford me at the salary I'd require to justify dropping my other writing activities, and writing and invoicing for freelance pieces is just too much of a pain in the ass for me to bother: content consumers want content to be free, but in reality, high quality low-noise content is far more expensive than anyone realizes.


I've been expecting the dot-com 2.0 bubble to burst for years now.

But it hasn't.

The reason it hasn't seems to be subtly related to SEC regulation changes over a decade ago in the wake of the dot-com 1.0 bust -- specifically the Sarbanes-Oxley Act. A lot of pump'n'dump shit got dragged out into the light when the musical chairs stopped in February 2000, and this act tightened up US corporate accounting and reporting rules. Prior to SOA the standard exit strategy for a dot-com entrepreneur and their VC backers was to IPO and bow out as the startup transitioned to ongoing corporate operation. SOA made it much harder to do this, so the new exit strategy became to engineer a take-over by one of the existing large incumbents -- Apple, Google, Facebook, Microsoft, HP, IBM, or (if you were unlucky) EA or Sony.

Then the app ecosystem took hold. I'd argue that the app ecosystem should be viewed as dot-com 3.0, in terms of business practices, and (loosely speaking) it still uses the same exit strategy -- but now, the business plan needs to be utterly puerile (it was parodied as "identify a task your mom used to do for you before she kicked your ass out of her basement, then invent an app to organize it by subcontracting it out to mom: collect arbitrage fees") and in exceptional cases the goal is still to grow to IPO -- e.g. Uber.

But apps are stuck in a deflationary spiral, profit-wise: the good niches are mostly identified and occupied (there's only room for 2-3 global taxi booking agencies), nobody wants to buy out the smaller mom-farmers, and so on.

And the online ad business is also stuck in a death spiral of diminishing returns, with 85% of the profits in internet advertising sticking to Google and Facebook and increasing use of ad exchanges to serve up malware and ransomware, either to capture the victim's banking credentials or to lock up their computer and hold it to ransom.

I think the bust is probably already happening, but a lot of it is a case of multiple leaks under the observable waterline -- lots of 1-8-person body shops going bust, who never got visible enough to attract real VC backing much less a take-over bid or IPO. The rate of innovation is slowing drastically. And as the revenue from advertising is choked off, so we'll see more and more desperate measures to extract revenue from eyeballs by criminal means.

In other words, this time there won't be an obvious bubble burst: rather, there'll be a crime wave.


"The trade-off for me running this blog since 2001 is that five Charlie Stross novels have never been -- and never will be -- written."

Let's hope that they were the five worst novels you might have written. And not the best.


He got beaten up, banned from the US, and caught necrotizing fasciitis - all in 14 months. To him I'm sure they're not.

As for the advertless internet - check out Google's Contributor program. Essentially, you outbid the advertisers for your clicks, and thus see no ads.
If our only choices for paying for what we read on the internet are whether our every click is tracked by advertisers or by Google, we've already lost.

Is there a horrific use of the trollbot I've not thought of?

Creepy horror story concept: A PUA/stalker uses a micro-targeted bot swarm to isolate their target from their social network, both online and offline, destroying existing friendships. If half your friends can be bots just incidentally, it probably won't be hard to engineer that to 90%... and plant enough falsified stories along the way, attributed to the target, to destroy their offline credibility and standing as well.

In the end, they are alone. Or so they think.


Not to mention left at the border without possessions in the middle of a snowstorm, and a criminal record for failing to fall to the ground fast enough (hence the travel ban) — with all that entails for future travel elsewhere in the world.


You're not taking that far enough. If CD/HB is a bot then it's possible that ClockworkLady is also a bot, switched in specifically from another media form (twitter) to take over botting duties in the face of an impending ban, and moreover "goading" said bot into receiving that ban to preserve plausible deniability. All at the instigation of the botmaster of course.

The other implication is that it's not smartbots we have to be worried about - it's when our digital avatars start getting smart enough to not only pretend to be us, but can pretend to be us and hold apparently contradictory positions at the same time with no sense of discontinuity perceived by our friends/relations.


Original reference to story is here
It points to here. (Just since troutwaxer hasn't appeared yet and you're on UK time.)


The only other things I've written which are currently posted online are stories set in the world of J.R. "Bob" Dobbs. The clueful will know how to find them, and the non-clueful will doubtless prefer to NOT read them. (Most of them are really, really disgusting.) I can't say loudly enough that these stories are only of interest to those who follow the SubGenius mythos.

On the subject of my latest, if anyone speaks modern Yiddish I could use some help. The only criticism I've received of the story so far is that many of the Yiddish-isms I used were very, very old-fashioned.


You're probably right. Thoughts about deep time and predecessor entities aside, OGH's blog would be a great place to test a bot.


To misquote appropriately:

"I'm a bot, and so's my wife!"

(Hey, always look on the bright side, eh?)


Is there a horrific use of the trollbot I've not thought of?

Undercover policing/community analysis. Tay was an explosively public bot that adapted to the tweets she read/directed at her (I believe). The obvious result is that Tay became a reflection of that "community". I can see plenty of uses for that in a deliberate setting.

Take a "suspect*" online community like animal rights activists, Muslim students association, Occupy etc and drop in a small number of bots. These bots go on to study and mimic members of the community; adopting a representative cross section of beliefs/attitudes/behaviours. Back in HQ a copy can be spawned off whenever desired and interrogated/put into social experiments. For example: take the Muslim student bot and put it in a virtual room with a radical preacher bot and test to see if the latter can radicalise the former.

Like a lot of tech there's good uses, bad uses and abuses. In this case it could be used as an excuse to crack down on certain communities because the mimic-bot exhibited criminal/undesirable behaviour.

Now that I think about it if you combined this with "influence" bots you could create a stealthy system of online community control. Mimic bots act as perfect spies/experimental subjects, bots shown to influence group towards desirable state are sent in based on mimic-data.

*in the eyes of the state/tabloid reading public


That would last until the first bot-aided prosecution hit the court. Since the court doesn't see bots as people, a demonstration that a person could corrupt a bot would not be proof that said person could corrupt a human.

Incidentally, there's a nasty meme here: terrorism is about Muslims, now that communism is no longer a serious threat. The precisely same actions, carried out by a radical Christian group in the US, would be treated as violence by deranged people, not a systematic plot to destroy America. That's in the laws and how people are persecuted, excuse me, prosecuted, and it's more than a little stupid. That said, let's lay off targeting Muslims, even if some idiot police organization will probably try it, just to see if they can get away with it.

Now, if you want to cause trouble, imagine hedge funds 'botting each other to look for weak points they can corrupt and exploit. Or oil companies 'botting environmental groups and solar and wind firms for the same purpose. Or vice versa, if you can't stand the thought of sustainable energy.


I think that although SOA is partially responsible for the lack of a bust for dot-com 2.0, it's not the main reason. If you look at the current dotcom market, its behavior is similar to the fracking market, the European/Japanese government bond market, and a whole host of other markets.

Plus, we're either in dotcom 3.0 or dotcom 4.0 if we use your divisions.

From my understanding, your dotcom 2.0 is the social network market, and your dotcom 3.0 is the app market, correct? I combine those two together in a single dotcom 2.0. Right now, we're in a separate dotcom. Uber, Siri, and the latest chatbots are deep learning-based businesses. They are different from the app-based businesses of 2007, for whom the app was the important real estate.


For a rebuttal, see stingray.


The precisely same actions, carried out by a radical Christian group in the US, would be treated as violence by deranged people, not a systematic plot to destroy America.


Family planning clinic bombers/shooters and folks who burn down African-American churches seem to get a free pass in the USA; everywhere else in the world they're called "terrorists". (See also Anders Behring Breivik.)

Also note that almost all recent successful FBI muslim terrorist prosecutions rely on some variant on entrapment of naive/learning difficulty youths who are roped into a scheme by FBI employees. Again, in most other jurisdictions that's called "entrapment" and the activity itself is highly illegal; cops who go around doing that sort of thing are themselves guilty of criminal conspiracy.


From my understanding, your dotcom 2.0 is the social network market, and your dotcom 3.0 is the app market, correct?

No. Loosely: dot-com 1.0 was web 1.0, a mix of static HTML and CGI-mediated access to back-ends, and business plans reliant on same. dot-com 2.0 relies on RESTful APIs, dynamic HTML generated in-browser by javascript libraries, and rich, responsive in-browser user interfaces to complex back-end systems. dot-com 3.0 relies on replacing the browser entirely with an app, often a minimal browser in its own right executing custom widgets to communicate with said back-end (without requiring the user to run a program -- a browser -- that can see over the walls of the deep, deep silo they're stuck in).

Social media predates the world wide web by, oh, about twenty years (early USENET, gopher, and previous conferencing systems). It's an orthogonal issue; for example, Facebook is basically AOL reincarnated using Web 2.0 technology (and trying to migrate to web 3.0).


IIRC that was tried here, 2 or 3 times, but imploded the moment the evidence reached the courts.
I believe the Judiciary "had words" with the cops & the agencies along the lines of: "DON'T do that again ... "


Oh. Crap.

Skimming the thread, there was the question of what "job-shaped users are there for bot-swarms". And the phrase "the war on terror is over - terror won"* And the one about "what dangerous uses..."

And I realized the obvious answer: an intelligence agency unleashes many swarms of them, which are good enough... to get people to like them/talk to them... and then either spy on the people, or entice them to think they could/should commit something... and then there's the Knock At The Door.

Wow, see how good our security forces are, they stopped this TERROR PLOT!!!

But you have nothing to worry about citizen....


* It's close to being over here, in the former "land of the free and home of the brave", where we're closest to "land of the terrorized, and home of the cowards".


As far as I can tell, the main reason we've had Web 2.x keep rolling through minor releases is the QE programme in the USA, the EU, and the UK, combined with artificially low interest rates offered to banks by the respective central banking systems. This has pushed down returns in every asset class (and asset prices up), making even funds built around startups seem attractive to large investors. The "party" will continue while the USD, EUR, and GBP reserve currency triumvirate reigns supreme.

(Partly channelling HB's less paranoid concerns: ) Last time around in 2008, investors could ignore toxic garbage in their portfolios because of CDOs, which turned out to be guaranteed by the central banking system. As far as I can see, this time there is no parachute. The main reason disasters like Greece, Turkish appeasement, and the US election circus are playing out as they are seems to be that some people really, really want the music to keep playing a little longer. Since China is playing along, conveniently scheduling a moment of humbleness about their economy into the 2015-6 period, the dance continues.

However, if China starts to sell Treasuries on a large scale, Russia does something destabilising in desperation, or perhaps Brexit goes ahead against the expectations of the EU elite, then the bubble may pop.

I guess elites in the triumvirate economies should try to replace the reaction amplifiers currently in place with dampers, so as to make the system less susceptible to stirring by third parties. This would mean returning to 1970s-style "we will not be terrorised" rhetoric as the default, unless we are prepared for some nasty turbulence riding the current fragile system built on shock-doctrine-style manipulation. Alas, two months is probably too little time for Buzzkill, Morlock Media, and a whole generation of political PR flacks to do a one-eighty-degree about-turn, unless some of the amplifiers are rendered inoperative. Hmm, maybe it is time for Twitter to roll out its next-generation tweet ranking algorithm, FB and Instagram to unify their prioritisation scheme, and Disqus to rejig its comment ordering?

(Insert obligatory kitten video link here, as per standard media formula for "balance".)


Uber is not really about deep learning; that is just a detail in optimising their business process. Uber is a classic trust-style attempt to take over an existing patchwork of local monopolies, using huge gobs of cheaply-raised investor capital. They are trying to do an Amazon, but without a long-term pool of inefficiency from which to create excess value, whereas in every new market Amazon keeps implementing processes that weren't possible when the system was originally built (including tax arbitrage, near-monopoly supply of IT infrastructure, and soon quite possibly control of long-distance transport as tools in their toolbox). The real efficiency gain in transportation will be cars that are not primarily human-controlled, and Uber is just one of many possible intermediaries there. Sure, with good systems they can be a strong player, but lots of things have to go their way for them to get there.


Eventually the only online friends worth having will be witty intelligent trollbots who know exactly what you like. Screw Humans.


Now, if you want to cause trouble, imagine hedge funds 'botting each other to look for weak points they can corrupt and exploit.

Too late - you could argue that trading programs are 'bot versions of excitable young men in bank trading floors with too many display screens, expensive tastes, oversized bonuses, and a predilection for gambling with other peoples' money.

Except, of course, you don't have to pay a bonus to a successful trading bot - it's just that the failure modes are flash crashes :(

104: don't have to pay a bonus to a successful trading bot...
Have heard that the bot creators can be compensated pretty well. (I work in tech in a trading-heavy area; the temptations to become a creature of greed are constant. "Temporarily", retire young, buy soul back with winnings.)
And - eventually bot motivation will become something to worry about. Hannu Rajaniemi, in the Jean le Flambeur stories, has some truly horrifying (AI) mind slavery (including "gogols": uncounted millions of trillions of human personalities in machine substrates) and motivation techniques, and abuse for trivial purposes (there are other strong, likable characters too). From "The Causal Angel", one of the Josephines:

She sips her perfect chardonnay, the product of millions of iterated worlds and taster gogols. Perfection. So hard to come by, so hard to make.


"high quality low-noise content is far more expensive than anyone realizes"

Yes, but "anyone" => "anyone who has not produced it". And (to repeat, given that your truth is a repeat) it doesn't make much difference whether it is blogging, fiction, ongoing data summaries, or technical information. The data gathering takes time and skill, and making it generally comprehensible takes time and skill.


I have some reason to believe that one of the defects of the secret courts for terrorism is that such legally invalid evidence is often used for convictions. But it's highly illegal for anyone who knows for sure to say anything definite in public.


... high quality low-noise content is far more expensive than anyone realizes.

Hear, hear! Which is also why "publish or perish" failed academia, and why any social medium that rewards frequency of posting above quality of content will not even produce passable kitty pictures. I think of the Oxford dons who never published papers, never blogged, and spent their entire lifetime writing one book. But what a book!


"hedge funds 'botting each other"

Some number of decades too late. Back in the late 1980s (if I recall), some people were running automated trading programs, and the expected happened, leading to a spectacular crash of some prices. So it was heavily constrained. But back in the late 1990s, the group I was in was consulted on exactly that, and it had clearly been allowed again. I know that it has continued, and there have been a few problems, but none have been serious enough that they couldn't be hushed up.


I thought this wasn't just the state of things, but that the vast majority of trading volume has for nearly ten years been fully automated, and this is common knowledge. (As in, TED talks, multiple articles in the Economist, detailed discussion in BoE papers.) Or did I get moved across the multiverse?


Unfortunately, the vast majority never get to produce said content, so most people will continue to ignore the hard work that goes into producing anything truly creative. Alas, this seems to apply even to creative people in other fields -- a musician typically seems to underestimate the work required to make a non-template website, a film editor will ignore the producer's effort in presenting a recording to its best advantage, and in general X will underestimate Y for any choice of group X distinct from group Y.


When I read Hugh's post, my first reaction was: these 'bot attacks' only work on people with a distinct social media usage (not me!), but I could not put my finger on why. Thanks to a few comments in a similar direction, now I can: the people most vulnerable to being attacked (or moved!) this way are those who form and maintain relationships online. Or rather: the more important relationships formed and maintained online are to your life, the more vulnerable you are to bot attack.

Spelled out like this it's really obvious. I still think it's important to talk about this point, because online relationships are more important the weirder (non-WEIRD) you are. I wouldn't know for sure, but I guess the combination of LGBTQ, young (= less mobile, less control over place of living etc.), and no access to like-minded people can be lonely. Unless you find like-minded bots folks online.

The other notable vulnerability would be if you like to be riled up and if you like echo chambers, then bots give you what you want.

This is also the counter argument to the comment upthread "if botswarms become common, everyone will adapt": Botswarm attacks will be far more common for some than for others, and in different ways.

This is another IMO crucial thought: Bots will be, at best, an echo chamber of those they are trained on. So if you (like some commenters upthread) hope for bots to provide better company, where does this leave you?

And how to protect ourselves? Maybe it's a good idea to spend some time now and then considering how far your online-only contacts are important relationships. Who among them is actually supportive, or interesting and interested in your wellbeing? Whose wellbeing do you care about? If your life would be maybe less interesting, but not sadder, without some of them: good. Otherwise, forget the Turing test. Ask whether it's a relationship you're happy with; if not, make a cut - bot or asshole.


Stingray's a variation on a law enforcement problem going back to the 1920s, when the FBI and various police departments started tapping phones and insisting that, once a conversation was on the lines, it was no longer private and they could listen in.

In general, law enforcement in the US (and presumably elsewhere) always seems to push the boundaries beyond what's already forbidden, and they continue to do it until forbidden by the courts or legislators. I don't precisely blame them, because they see what they do as a war on crime, and they need to innovate much as their opponents do. Stingrays and other such attacks (like the FBI's day-0 exploit on that iPhone 5c) also fall within this framework.


It occurs to me that online acquaintances met while gaming (particularly in the sort of games in which coordination through voice chat is the norm) are more likely to be human, and this is likely to remain the case for a while. Turing-capable (even with the low bar) on-the-fly speech synthesis is likely more than 4 years away. Ritualized party games with teamspeak/skype/hangouts/etc. as a bot filter, maybe?


Bots will be, at best, an echo chamber of those they are trained on.

Given that a lot of people are happiest in echo chambers (cue bitchy comment about the more... obsessive...Yes voters ;) ), is it possible that beyond bots polluting a representative forum, the long-term outcome is to create near-individual micro-echo-chambers? Where the one or two muppets can sit and pat each others' backs about how everyone they know is right and everyone else is wrong, and isn't it awful?

Each muppet will be convinced that everyone else agrees with them, and the worst-case outcome is an upswing in radicalised muppets with the conviction that they are truly, madly, right. The question would be whether you'd see a growth in contrary bots, just to give the micro-chamber occupants and their bot followers someone to shout down...


Strongly agreed. Which is why I don't grudge my Economist subscription - cheap at the price :)


I expect you would see contrary bots, the question is, who is controlling them and the echo chamber? We have already seen how useful it is to have an army of willing volunteers to shout down any opposition and anyone who suggests something different, e.g. gamergate.


My understanding is that the near-total automation goes back 25 years, but that it was originally the sort of bidding scripts used on Ebay.


You've probably seen it; for those who have access, Barbarians at the Gateways (alt link works from my phone) is a fascinating account, ending in 2009, of part of the HFT arms (speed) race. (And generally, how pathological capitalism can be.) I see in a search that people are now (2015+) claiming tick-to-trade times in the hundreds of nanoseconds for the dumber automation.

Tick to trade is the time it takes to:
1. Receive a packet at the network interface;
2. Process the packet and run through the business logic of trading;
3. Send a trade packet back out on the network interface.
my favorite quote:
You test it with your colleagues and say, 'I will drop this apple from my hand, and it will hit the ground in 3.2 seconds,' and it does. Then two weeks later, you go to a large conference. You drop the apple in front of the crowd... and it floats up and flies out the window. Gravity is no longer true; it was, but it is not now. That's HFT.

In the limit, it will not end well. (We know that as fans of OGH. :-)
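To make those three steps concrete, here's a toy sketch in Python. Purely illustrative: real HFT systems do this in FPGAs or hand-tuned C++, and the price-threshold "business logic" here is entirely invented.

```python
import time

def handle_tick(packet: bytes) -> bytes:
    """Steps 2-3 of tick-to-trade: decode the quote, run the (toy)
    business logic, and produce the order packet to send back out."""
    price = int(packet)                          # decode the incoming tick
    return b"BUY" if price < 100 else b"PASS"    # invented threshold rule

# Step 1 (receiving the packet at the network interface) is simulated;
# we just time the decision path itself.
start = time.perf_counter_ns()
order = handle_tick(b"97")
elapsed = time.perf_counter_ns() - start
print(order, f"{elapsed} ns")   # interpreted Python manages microseconds
                                # at best; the pros claim hundreds of ns
```

Even this trivial decision path, with no network in the loop, illustrates why the speed race moved off general-purpose software entirely.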


I know that it [automated trading] has continued, and there have been a few problems, but none have been serious enough that they couldn't be hushed up.

This makes me want to propose a new law to stand beside Murphy's Law, Satterthwaite's Law, the First Law of Intelligent Tinkering, and the rest:

Ireson-Paine's Law: there is no problem so serious that it cannot be hushed up.


...Pigeon's Law: there is no problem so bad it cannot be solved with a few kilos of 239Pu.


Actually, as more and more communication moves from anonymous or poorly verified identities (email, forums like this) to walled gardens / social networks with strongly validated identities, the trend is toward making spam harder, not easier.

It's pretty hard to fake a user on Facebook to the point that it's believable, and if you manage to do it, it gets unfriended at the first infraction.

Twitter is a bit of an exception, since Twitter is (unlike most networks) still anonymous.


Ah, but that argument depends on two false premises:

- It fails to distinguish between zero and a negative number. Not gaining something is not the same as losing something. You can't lose something you never had in the first place. Nobody gave me a million pounds last year, but that doesn't mean their lack of generosity cost me a million pounds, it just means things carried on exactly as normal.

- It assumes that every minute you spend on blogging is a minute you would otherwise spend on writing, and not on going to the pub, teaching the cat to summon Cthulhu, or whatever. Without wishing to be presumptuous, I would venture to suggest that this is not the case. Partly on general principles and partly because your recentish posts on the subject of burnout imply that you are already writing as much as you reasonably can, if not more.

I run a few websites myself. None of them bring in any money because why should they, and nearly none of them take any money because they're using capacity I have anyway regardless. The exception is one for which I pay nine hundred odd a year in server rental because most of the content is videos and large images so it chews craploads of bandwidth. (Which makes me think: if you are paying roughly the same for the server for this website, you must be getting one fuck of a lot of traffic to use that much capacity on pure text.)


" acquaintances met while gaming (particularly, in the sort of games in which coordination through voice chat is the norm) are more likely to be human, and this is likely to remain the case for a while."

Er, you obviously never flew with GoonSwarm. EVE players will know what I mean...

(Of course I'm being snarky. I never actually encountered a bot on voice chat, although I blew up a lot of semi-automated Chinese ore farmers WAY back in the day. But there were a lot of people on voice chat in EVE who I wouldn't have flagged as human in a classic Turing test. And most of them were Goons.)


We have already seen how useful it is to have an army of willing volunteers to shout down any opposition and anyone who suggests something different
See also: "Student Unions" & "Safe Spaces" & other mind/brain clogging trash, not allowing proper debate, especially (surprise, not) when pushed by religious sub-groupings.


Equal to or greater than 5.1 kg half-spheres, slapped together really quickly, I presume?
( Yes, I know, that's a thin man, not a fat man initiation, but you get the idea )


P Watts was banned from the US?
In spite of the attested public facts that they illegally shat on him?
Is this really true? And "they" got away with it?
[ I didn't know about the NF - euw. ]


Stingray is more comparable to the FBI's attempt to use the All Writs Act to force Apple to hack their own phones because they have stopped playing nicely with the Govt - at least on the surface - in response to Snowden.

I've followed the Stingray stories quite closely, and every time it risks becoming a primary source of evidence for them they have pulled back, and in some cases dropped or downgraded prosecutions as a result. They know they are on questionable ground in both instances.

The 5c exploit is actually the correct response to the problem, and the FBI should have quietly done it, but some Feeb badly underestimated both how pissed the tech industry is at the Snowden revelations and how aggressively Apple are willing to defend their nascent Apple Pay revenues. The tech industry will have a pro-forma moan about how revealing that exploits exist weakens products, but nobody takes that seriously as anything but a soundbite. It's not the Feebs' job to correct Apple's buggy iOS code.


...the peculiar properties of certain otherwise quite useless metals. They don't like being squeezed.


I can't really see total verification and authentication working for a number of reasons. Mostly because if it worked so well, there'd be no need for prisons IRL. The simple reality is that the people who are nasty arseholes in cyberspace may not be any better in meatspace. Sure you'll stop a few, but the secondary reason is...

It's been demonstrated time and again that the people most disadvantaged by this kind of authentication are the victims of the kind of people you want to stop in the first place (e.g. see the recent Facebook furore when they blocked a woman who was on the run from a violent ex who had tried to kill her more than once; he dobbed her in for using a nym when he managed to stalk down that ID). (Also people living in countries with despotic leaders etc.)

and then there's the poor, who can't afford the kind of tech that allows for biometric ID, and even if they could, wouldn't be able to afford the bandwidth to support it.

Already, in most of the jurisdictions I have lived in since the 90s (and I count myself lucky), there are dedicated govt units who support e.g. DV victims on the run in establishing alternate IDs for electricity, water, medical benefits etc. (none of them well enough advertised). This is because their IRL exes who stalk them get the phone, the electricity etc. cut off by ringing the companies if they have the chance (because then they can go to family court and say "my ex doesn't even have a power connection" etc.). Again, these are men (more than 90% of the time, despite what the MRAs would have you believe) already known to the police by their own names IRL. (I.e. it is cheaper to set up a false-ID unit for victims than to actively police the people committing the crimes.)

If it doesn't work IRL, it won't work in cyberspace. They've been trying a national ID card where I live since the mid-80s and no one is buying it, still (fingers crossed).


Oops, also I can't see the 4chan doods allowing this to happen without a fight either!


..Pigeon's Law: there is no problem so bad it cannot be solved made infinitely worse with a few kilos of 239Pu.

There, I fixed that for you!


It assumes that every minute you spend on blogging is a minute you would otherwise spend on writing, and not on going to the pub, teaching the cat to summon Cthulhu, or whatever. Without wishing to be presumptuous, I would venture to suggest that this is not the case.

Unfortunately you're wrong: the blog essays come from the same place, more or less, as the fiction. A day when I write a 2000 word blog essay is a day when I don't write any fiction, as a general rule.

(Comment interactions not so much, but they still eat up loads of time.)

On bandwidth: this blog gets on the order of 4-5 million unique visits per year. But it spikes at random: I once had a single blog entry rack up two-thirds of a million visits in under 48 hours. What I'm paying for isn't so much bandwidth as CPU availability -- for example, this guest blog essay got boingboinged, linked to from Mother Jones, and re-tweeted by Marc Andreessen (who has only two-thirds of a million twitter followers). On occasion I've had stuff go viral and be picked up by newspapers worldwide. Let's just say, I'm a little bit leery of switching to usage-based pricing by migrating to an AWS instance; it might save me a few hundred quid a year ... or it might end up costing me a thousand a month.

(This box also runs a mail server and a bunch of other stuff as well as the web side, so there's that.)


No, I hadn't, but I heard it from people planning to do just that. Another, often unrealised, consequence is the fundamental advantage it gives to established insiders. In order to trade within even 50 microseconds, you need to be within 5 miles (realistically, 2 miles) of the financial hub (i.e. where the central trading server is). And 1 microsecond cuts that to hundreds of feet. Don't even think of trading in London from a base in Milton Keynes, let alone Glasgow. At one point, the server was required to add a delay in order to mitigate this, but that was abolished as a restriction on the Holy Free Market.
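Those distance figures fall straight out of the speed of light in optical fibre. A back-of-the-envelope sketch (the ~200,000 km/s fibre propagation speed is an assumption, roughly two-thirds of c in vacuum, and this ignores switching and server delays):

```python
# Back-of-the-envelope: how far from the exchange can you sit and
# still complete a trade within a given round-trip latency budget?
C_FIBRE_KM_PER_S = 200_000  # assumed speed of light in fibre, ~2/3 of c

def max_distance_km(round_trip_seconds: float) -> float:
    """One-way distance reachable within a round-trip latency budget."""
    # Signal must travel out and back, so divide the budget by two.
    return C_FIBRE_KM_PER_S * round_trip_seconds / 2

# 50 microseconds round trip: about 5 km (~3 miles) of fibre, tops
print(max_distance_km(50e-6))        # 5.0
# 1 microsecond round trip: about 100 metres
print(max_distance_km(1e-6) * 1000)  # 100 m, i.e. "hundreds of feet"
```

Real fibre routes are never straight lines, and every switch in the path adds latency, so the practical radius is tighter still — hence "realistically, 2 miles".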


That would last until the first bot-aided prosecution hit the court. Since the court doesn't see bots as people,
Isn't it enough to "corrupt" a bot? It's a computer.

It has been established in hacking cases that courts shall see "my computer" as "me".

Following that logic, "my robotic avatar" is also "me", and then "some bot that I have access to" - lawyers can do a lot with "access to" - can also become "me", in the sense that I am legally responsible for its actions.

Then, if that representation of me is somehow spewing racism, advocating sound race-hygiene and terrorism, I will be the one going down for it, since I must have "programmed it".

Now, if you want Real Trouble: Imagine someone building avatars of "Dr." Phil, Oprah, Ted Cruz or The Donald which are good enough to be your actual Friend, some thing that you can talk to whenever you like and it is smart enough to come up with Solutions to your problems too - of course consistent with the public personality of the avatar (and of course the current needs of the avatar's corporate sponsors).

We could all have a special VIP friend in every home.


Thought occurs.

OGH has outlined using instantiated copies to test out alternative courses of action. Is there mileage in this for the rise of the bot? Test out options for the individual by instantiating copies and seeing what happens in virtual space.

How close do you need to get to reality to deliver useful results?


Quite. I run a good number of websites on AWS, but they're all websites that I reasonably confidently predict will get small amounts of traffic in the near future.

My main sites, which are likely to go viral, are like Charlie's on a rather expensive managed host.

To add to this - another thing that isn't free is moderation and system administration time. As a guest-poster I get to see some of the behind-the-scenes action that goes into running this blog, and it's non-trivial: that's also time that Charlie isn't spending writing, and it's professional, brain-intensive stuff.


It's just a form of pollution. So certain parts of the internet will get contaminated and be no go zones for humans, patrolled by feral bots. An analogy would be landline telephones. In America it's illegal to robocall a mobile phone, though it still happens once in a while from out of country. So mobile phones are relatively free of commercial phone calls. But land lines, unless they are actually used for DSL, are horror zones. Plug one into an actual telephone and you will literally be driven mad by phone calls every 5 minutes from machines trying to con you or sell you things or ask you who you are voting for. The landline part of the cultural geography is now a Chernobyl like dead zone where sensible humans are banned.


Amusingly I nearly used Goonswarm as example #2 in the second "The Trollswarm Cometh" - not because their tactics in EVE were particularly troll-like, but because the "swarm" mentality worked so well.


I didn't know about the NF - euw

He posted pictures, if you're curious what the inside of a leg looks like ("that's not a fish in there, that's my muscle")…


Given the blog topic it can't be long before Mod-bots become pervasive, lagged slightly by the gaming of said Mod-bots. The HuffPo's one appears impressive, not that I have experienced it firsthand.

It would be interesting to see whether the relatively narrow audience membership wins out over our rather eclectic derails and meanderings.

Plus Charlie could get pinged every time a new or different strange attractor appears.


Perhaps where you are :)

Our experience in the UK is different; we're "ex-directory" ("unlisted number", for y'all) and registered with an organisation called the Telephone Preference Service (there's an associated Mailing Preference Service - we don't get junk mail, either!).

Unwanted landline calls to our house are vanishingly rare - we get more wrong numbers than commercial calls... Although there was a spike in cold calls (to all of one per fortnight! A whole three or four of them!) to tell us our PC was infected with a virus. These were initially met with suitable disdain, and eventually with direct questioning as to whether the caller's mother was proud that he'd become a fraud and a thief...


"Our experience in the UK is different"

No, it isn't - it's identical. The key is whether your number is unlisted or not, and it is the listed numbers that have the problems; the TPS used to work, but our Wonderful Monetarists allowed cold calling companies (effectively) to buy exclusion from it. And keeping your number (and address) hidden from people who need to find it causes a lot of problems for many people.


I don't get that many machine calls — maybe one every fortnight.

I do get lots of telemarketing calls from polite Indian chaps wanting to clean my ducks. I inquire how they will do this, and upon being informed that they will drill a hole in the duck and vacuum what's inside I opine that that sounds painful and I don't want to hurt the birds, and I think a nice gentle shampoo sounds better. At this point they usually give up, but if not I ponder out loud that this sounds like it might even be illegal and wonder if they've consulted with the SPCA, which usually does the trick.

I could just hang up, and do when I'm busy, but it's amusing seeing how long they'll try to follow their script.


Do you have a landline, then, and are you much plagued by cold-callers?

Curious, you see - because the MPS and TPS are quite effective for us. Granted, we also tick the box that keeps ourselves off the saleable version of the electoral register...

But given that we've been at our current address for sixteen years, our name / address / landline number has leaked out onto lots and lots of sales databases by now. I remain pleasantly surprised that we get (effectively) zero junk calls / mail.


Desmond Bagley's Maxim (at least, I think it appeared in "The Secret Servant")...

"...There is no military problem that cannot be solved by either the application of sufficient High Explosive, or a first-class marching band..."


Yes, but it's listed. And, as I said, that's the key factor. A lot of my friends and colleagues report the same effect.


You need a search warrant to tap a phone; very few law enforcement agencies seek warrants before using stingrays (partly because the Harris Corporation demand agencies sign an NDA so stringent they can't even tell a judge enough about them to get the warrant).


I popped in to mention that I've donated resources to help stream FLOSS-topic conferences. For that, one needs to spin up a few VMs on something like AWS. It can get expensive. It def gets expensive if you forget to spin them down.

I also popped in to counter the mild self-righteousness people have about running sites that require a lot of resources. Setting aside ecommerce, people need resources for doing activist work, data analysis for academic research, data analysis for journalism, archiving of cultural works, etc.

I get donated resources for archiving things and also for running a simple site that provides an index to python videos. If I didn't get donated resources for running those, I would shut them down due to budget.

I also popped in to mention, if a program gets good enough to make interesting conversation on topics I'm interested in (with citations and useful content discovery), then I'd be getting a good experience. I'd prefer to make friends with sentient beings, but that scenario would be akin to having a search engine that generates high quality search results when you enter flakey search terms. That can be a good enough experience for me sometimes. (searching for help with a programming problem right now usually garners good results from stackoverflow. when you think about it, it doesn't need to be results written by humans)

I think the same thing works for sexbots (but I didn't catch up with this blog in time to comment on the topic)--people use artificial aids for masturbating. I enjoy erotica that tends towards porn. It's hard to find well-written erotica. If someone can create programs that generate writing that is good enough for that case, then I think it is something that humans would be happy using.


What is the antipathy to the idea of a safer space, AKA somewhere someone can go to take a break from the world being shite to them on a regular basis? I genuinely don't understand it.


@80: a) I need social interaction in my office (the cat isn't sufficient)

Perhaps you need to upgrade your cat.


What is the antipathy to the idea of a safer space, AKA somewhere someone can go to take a break from the world being shite to them on a regular basis?

I know Greg IRL. He's a white heterosexual male pensioner from London. Salt of the earth type, but utterly unacquainted with his own invisible privilege -- or rather, the extent to which he doesn't suffer from discrimination. (Being aware of negatives or absences is ... difficult ... and it gets harder the longer you leave it before you start trying to do it routinely.)


Thank you for the compliment, Charlie!
however ... "safe space" has come to mean, as is often the case in these things (e.g. the "Democratic People's Republic of Korea"), the opposite or almost-opposite of what it says.
It's a specific policy, especially in universities, to STIFLE debate & free speech (and I DO NOT mean freedom to spout "hate speech" either).
Criticism of some things is simply not allowed, & certain groupings are using this to push their specific, usually very nasty, agendas.
Example being the forced segregation of women for religious reasons in public meetings.
Or banning politicians from legitimate political parties, i.e. ones that actually have seats in any of the UK's various Parliaments (so I do NOT mean the EDL or whatever the National Front are called this week, either ...)


We get these too ...
Fortunately, you are specifically allowed to be as abusive as possible to these arseholes, no holds barred.
They usually get the message ....


Back when I was at Queen's University (1990s) one of my classmates took an elective course on 'Women's Issues in Education' — he was training to be a science teacher, and as senior science classes at the time were often male-dominated and he wanted to run a gender-balanced classroom he thought he should learn about some of his unconscious biases. But every class he was asked to leave so it was a 'safe space' for women to talk (he was the only man in the class). After a few classes he asked the professor when he'd be able to participate, and the answer was basically never — and he couldn't earn the credit without participating because he was being given zero for all the discussions he missed because he was barred from the room.

Eventually the Dean of his college intervened and got him into another course, although it was past the official transfer date. It was a scramble for him to catch up, but he managed. There was (apparently) no comeback on the professor in the Department of Women's Studies who ran an exclusionary course*.

Anecdote is not data, but it is an example of "safe space" being used as an exclusionary tactic.

*There was no mention in the course calendar that men wouldn't be allowed to participate — which of course there couldn't be, because to have the course being officially female-only would have violated the university's policy on gender equity. Apparently it was a grey area.


If we got Charlie an upgraded cat, would we get more novels?

Might be worth trying…


The last time I received a "your PC has a virus" call, I managed to spin the suspiciously Indian-accented "Peter" along for nearly ten minutes, predicting his script and providing leading answers (even though I was nowhere near a computer). I gradually started to get more and more obviously ridiculous in my responses until he finally snapped, saying, and I quote: "I know what you're doing! FUCK YOU!" And with that he hung up on me.

I considered it to be something of a minor victory, and I haven't received any of those calls since.


That's absolutely ridiculous and no way to run a classroom. Glad to hear your friend didn't lose out over it.
That one person implemented it terribly is, I suggest, not the best reason to reject it entirely. Though I can see how you'd be especially wary.

(Aside: is there anything that can't be used as an exclusionary tactic?)


Well, Scalzi seems to have found the Scamperbeasts useful . . . .


Greg: I know how you feel about segregating public meetings; I'd be very grateful if you'd discuss what's wrong with safe spaces specifically.

You've been a teacher, haven't you? Here's an utterly uncontroversial example of something like a safe space: teachers' break rooms at school, where they can get away from the kids and be themselves instead of a teacher for a bit. By analogy, you can see how something like an LGBT community centre - somewhere people can go and be themselves without having to look over their shoulder - can work. Safer spaces are kinda-sorta (and I'm operating without much direct experience here) just extending the principle to other groups.


I have a somewhat larger scamperbeast (at three and a half years old she still hasn't slowed down). Trust me, it's slightly hard to write when a crepuscular ambush predator is trying to disembowel and eat your left foot. Or defend your computer's mouse from the terrible menace of human fingers.


I agree it's ridiculous, but that kind of thing keeps happening.

I'm old-fashioned. I believe that at a university students should have their ideas and preconceptions challenged. For example, attacking a student personally for their religious beliefs is wrong, but not covering evolution in a biology class because some students are fundamentalists and it makes them uncomfortable is just silly — but that happens.


So the point is, do you just stop doing something because it is misused, or try and deal with the misuse in a sensible manner?
(See the history of just about everything ever that has been misused, whether an object or a club or society)


You try and deal with the misuse (and even abuse). And that means precisely NOT generalising 'safe spaces' and 'positive discrimination' as solutions, especially when those actions discriminate against other groups, not all of which are privileged. And that IS what happens, as the above examples show. I could give other examples, too.


Well yes, that's the point I'm making. It's like health and safety. Often abused by power hungry maniacs and lazy managers, "oh, we can't do that because of health and safety", when there's no actual h&s case been made at all.


I believe that at a university students should have their ideas and preconceptions challenged.

There's a difference between challenging their preconceptions and challenging their right to be there. For minorities that have not traditionally had the opportunity to obtain access to higher education, presentation of material may matter a lot more than to members of groups who take access for granted; there are plenty of studies on how just exposing students to positive or negative representations of their group in an academic setting affects test results and knowledge retention immediately afterwards. That's part of the thinking behind inclusivity and safe space policies, and it's based on sound research.

Let's not turn this comment thread into a privileged circle-jerk, shall we?

(Yes, I'm aware of the abuse of policies on not challenging students' beliefs, e.g. over issues to which some religious creeds are hyper-sensitive. Nevertheless, blanket condemnations of such initiatives runs very rapidly into a baby/bathwater dilemma.)


Interesting stuff!

A few comments/ideas ...

Fakeyous … could be addressed via that person diligently including cross-references on each social playground they use/belong to so that any appearances anywhere else would be suspect. Basically, we'd need to create an e-equivalent of a calling card, plus maintain a ‘social site CV’.

Unique internet identifier … probably need more than one identity because people want to keep different parts of their lives separate. Would suggest about 6 or 7 life-time IDs* with corresponding email/online contact addresses:
1. personal
2. family (immediate/extended)
3. friends (close/acquaintance)
4. career (post-sec ed etc )
5. purchasing (and/or selling that is not day-job related)
6. other

Because most humans screw up somewhere along their lifetimes, we'll need a system for refreshing/renewing their identity. If gov’t got involved, then I’d expect the gov’t issued online ID to correspond to the already in-use social-security-number (SSN) system, most probably for formal career/money-related online IDs.

* Need to be able to up/down-grade as people enter/leave one’s life, e.g., acquaintance/friend/colleague becomes close friend, becomes spouse, becomes ex-spouse and moves to family (because of child).

Re: Search results where Wikipedia shows up in the top-3 ... is a corporate scam/scourge.

This sorta sounds opposite to what I've heard elsewhere, i.e., that Google is actively trying to improve the individual user experience by piping in/showing results based on that individual's usage history. Google showing the most popular/commonly related sites first does make sense from a keep-the-largest-market-segment/plebes-happy perspective or if their system doesn't recognize you as a returning visitor. (Personally, I like Wikipedia showing in my top-10 Google searches and because Wikipedia is a non-profit I donate to, I'm pretty sure it's not a money-motivated/scam result.)

Trollbots and kids ... Who am I, where do I fit in? are common questions throughout one's life esp. childhood. Trollbots could help or harm depending on the type of image they reflect back on/to a person, esp. kids. Too negative, and that person goes postal, or folds under, gives up and maybe even suicides. Too positive, and you get anything from a tyke falling from exhaustion trying some task one more time to a desensitized sociopath convinced that he/she has all the answers and should take over the world. Humans are wired to believe what they’re told … telling them untruths distorts both their self- and world-views. (I'd hold the programmer/developer legally/morally responsible as in: their tool caused grievous personal harm.)

But ...

Face-to-face behavior is probably going to stick around for at least another century which means that individuals will continue to need to know how to behave/interact with a range of other humans in real-time. To me, this alone is sufficient argument for a mass/universal type of communication system therefore limits some of the super-segmented trollbot scenarios mentioned.

Further, since people typically chit-chat as part of the greeting/meeting process, they’re bound to exchange comments about what they’ve seen/read, therefore different facts/experiences will emerge. And, they'll probably emerge more quickly in a human-to-human than in a human-to-chatbot exchange. If the face-to-face meeting is positive (liking/trust), then my money is on the likeable flesh-and-blood human versus the online chatbot. Okay ... people who are terrified of meeting other people in the flesh (likely the minority of the genpop) will probably prefer the company of their chatbots, therefore again a specialty/boutique rather than a mass market item.


In my opinion, these are two separate issues. For the record, I fully agree that (at least here in the US) we're doing a crappy job of integrating schools up and down the ladder. I remember seeing a report in the 1980s when I was an undergrad about diversity problems and goals in the UC system, and back then they wanted to solve them ASAP. So far as I can tell, the UCs have the same goals now. Nothing appears to have changed in 20+ years, and that really and truly sucks.

On the other hand, Robert's right about teachers needing to challenge their students, especially in biology. You don't see Christian fundamentalists walking out of physics class because it conflicts with their religion (reread Genesis 1:1 if you don't understand how hypocritical this is), but it's routine to see multiple students walking out of biology classes when evolution is discussed, especially when the classes are aimed at non-biologists. I agree with all the teachers who continue to teach evolution despite this, even though, in this age of social media, they risk their reputations by not attempting to be more popular. Biology, more than, say, physics, has to get students past ick to aha, and it's impossible to do this without challenging students to face their ick reactions, understand them, and decide whether it's more important to abandon them or to continue with them.

If you want to see it even more starkly, try talking about climate change. For most people, it's right up there with defecation as something they really don't want to discuss in anything more than theoretical and euphemistic terms, because it's complicated, most people are ignorant, and there are so many fears and anxieties attached.


Not just biology. Physics as well.

History can also be a minefield. Economics too. Political science…

A small plug for anyone in Ontario:
38th Annual OAPT Conference
Capturing Diverse Perspectives in STEM
May 12-14

Lots of cool sessions on mostly physics but some general science sessions as well. Emphasis on diversity (obviously) — not just gender: in a few years aboriginal students will make up 25% of the school system province-wide, and they are severely under-represented in senior science classes. It will include (I hope) some of the stuff Charlie was talking about on group representation.

Definite +1 about the climate change problems. It's supposed to be 25% of the grade 10 science course here, and yet in many (I suspect most) classrooms it gets chopped down to a lot less. And even (in some schools) dropped altogether. But that's a separate rant further derailing this derail, so I should probably stop.


Re safe spaces ... It's weird when I run across the concept as something institutionalized in academia. I'm not convinced the problem is as big as a host of pundits make it out to be. It's even weirder to hear the idea being appropriated by Christians who want to avoid evolution.

My first contact with safe spaces was in a self-organized, non-commercial cultural center. One group using the same rooms hosted a women-only bar/cafe (I no longer live in the same city, but AFAIK it's no longer women-only; queer theory for the win). The cooperation between us 20-something punks and the 40+ year old feminists was relatively smooth. With our parties and concerts and stuff we tried to host a safe space of sorts too, by (hopefully) reacting quicker and more resolutely to sexist harassment and the like than mainstream nightlife does.

The thing is, for me the concept was always: spaces for people affected by X (racism, trans- or homophobia, other sexisms). To have a 'safe space' from talking about evolution, you need to remove the concept pretty far from those roots.

So to talk about safe spaces, you need to define from what. I don't think you can do that without a rough theoretical framework of society and power relations.
So if you doubt a certain safe space, try to understand 'from what' and not 'for whom'.


Social apps and privacy: if you want privacy, limit yourself to one social app and turn off your mobile phone.

The article below originally appeared April 13 2016:


''Now, a team of computer science researchers at Columbia University and Google has identified new privacy concerns by demonstrating that geotagged posts on just two social media apps are enough to link accounts held by the same person. The team will present its results at the World Wide Web conference in Montreal on April 14.


The team developed an algorithm that compares geotagged posts on Twitter with posts on Instagram and Foursquare to link accounts held by the same person. It works by calculating the probability that one person posting at a given time and place could also be posting in a second app, at another time and place. The Columbia team found that the algorithm can also identify shoppers by matching anonymous credit card purchases against logs of mobile phones pinging the nearest cell tower. This method, they found, outperforms other matching algorithms applied to the same data sets.''
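The matching idea the article describes can be sketched with a toy version: score every candidate account pair by how often their geotagged posts coincide in space and time, and link the pair with the highest score. To be clear, the thresholds, scoring, and all data below are invented for illustration; the Columbia/Google team used a proper probabilistic model, not this crude counter.

```python
from math import hypot

def match_score(posts_a, posts_b, max_km=1.0, max_hours=2.0):
    """Count co-occurrences: pairs of posts from two accounts that fall
    within a small space/time window of each other."""
    hits = 0
    for t1, x1, y1 in posts_a:
        for t2, x2, y2 in posts_b:
            if abs(t1 - t2) <= max_hours and hypot(x1 - x2, y1 - y2) <= max_km:
                hits += 1
    return hits

def best_match(posts_a, candidates):
    """Pick the candidate account whose posts co-occur most often."""
    return max(candidates, key=lambda name: match_score(posts_a, candidates[name]))

# Posts as (time in hours, x km, y km) -- fabricated example data.
twitter = [(9.0, 0.0, 0.0), (13.0, 5.0, 5.0), (18.5, 2.0, 1.0)]
instagram_accounts = {
    "alice": [(9.5, 0.2, 0.1), (13.2, 5.1, 4.9)],  # nearby twice
    "bob":   [(2.0, 40.0, 40.0)],                  # never nearby
}
print(best_match(twitter, instagram_accounts))  # alice
```

The unsettling part is that nothing here needs account names or content: location and timing alone carry enough signal, which is also how the anonymous credit-card purchases got matched to phone-tower logs.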


that means precisely NOT generalising 'safe spaces' and 'positive discrimination' as solutions

They can be solutions, but temporary ones. (Or should be temporary, anyway.) The ultimate goal is (or should be) a system in which they aren't necessary. And I think this should be built into any organizational solution from the start.

I believe this about all policies implemented to fix a problem: they should have ways of measuring the situation built into them, so that if they don't work (or are no longer necessary) they can be modified or eliminated. But like any situation in a bureaucracy, they frequently end up becoming turfs to be defended, with their original reason a sideshow at most.

Consider a positive discrimination policy. If you have a situation where members of group A are 99% more likely to be promoted than members of group B, then it might be called for. (Let's assume that things like blind trials are impossible.) But once you've reached equity (whatever that means in this situation) it's time to suspend the policy — you shouldn't keep preferentially promoting members of group B. (And if members of group C point out that they aren't promoted at all, only to have members of group B accuse them of sabotaging equity, you know you've crossed over into turf-fight territory*.)

All of which is really a long-winded way of saying that solutions will have to be situational and flexible — what works in one place at a particular time isn't necessarily the way to go somewhere else (or at the same place at a different time). Like lesson plans, actually :-)

*Happened at a place I used to work at.


Hm. Trollbots, climate change, and safe spaces. There's gotta be an angle in there somewhere.


"169: Re safe spaces ... It's weird when I run across the concept as something institutionalized in academia. I'm not convinced the problem is as big as a host of pundits make it out to be. It's even weirder to hear the idea being appropriated by Christians who want to avoid evolution."

It has been argued that part of the trouble with media, and with the internet (which is fast becoming a non-interactive medium), is overabundance:

It is now quite possible to live in a bubble, consuming only rightist or leftist or whatever-ist media: you have specialized newspapers, sites, cable TV, and streaming for even the most niche markets.

Add to that the rampant rich/poor segregation of schools, universities and, more generally, learning via private schools, private universities and "at home" education, and you can have fairly large segments of the population that are completely unable to react to any challenge to their world view otherwise than with "crimethought" and bafflement.

Have a look at to get an idea how clueless some supposedly educated people are now. These two, for instance, are gems:

(and they were posted back to back)

The trollbots will only amplify the trend. Idiocracy is the end game.


Not to derail this thread, but I'm curious where you got the statistic that 25% of students will be First Nations people across the entire province. From the website you linked, it looks like they were solely talking about Northern Ontario.


Notes I made at a planning meeting. Where on the website does it say 25% relates to Northern Ontario? If my notes are wrong I'd like to correct them.

(Doing some BOTE calculations, that number does sound high for all students — but I don't trust my gut for numbers. I can't find any numerical reference on the OAPT Conference website: if you could provide it I'd be grateful.)


I think "safe space" is a great solution for a particular, highly-restricted problem set. The problem is that "safe space" is essentially therapeutic in nature - one uses it to treat very messed-up people who are terrified of a particular outside group - and it needs to be done under experienced supervision, not treated as a panacea for all problems of prejudice or privilege.

The end result of the proper use of "safe space" is that the previously messed-up people learn to interact appropriately with the group that frightens them, not hide away from that group even further.

As noted above, like many therapeutic protocols "safe space" can be easily weaponized.


Charlie & R Prior & Anonemouse.
When students wearing "Jesus & Mo" T-shirts, or the atheist soc. at Uni get persecuted & attacked (Usually by the islamicists, but sometimes the ultra-christians) for invading their "safe spaces" I'm afraid the red mist starts to descend.
This is a University for Ghu's sake.

Yes, I really get angry over this one. It's a quite deliberate attempt to close down debate & argument & empower privileged, usually religious groupings.

P.S. I am quite aware that I am equally privileged to Charlie, in that I have a "Y" chromosome & I am also aware that a huge amount of so-called "safe space" talk is bollocks, designed to err ... protect those with bollocks.


Off topic, but, yeah cats are . . . unique. My Siamese, over the 17 years of her life, never did understand that an unsupported sheet of paper wouldn't support her weight. Plus, like many felines, she was simply fascinated by pushing items off a shelf/bureau/what have you. Now that we're a dog household, our decorating is quite different. Not to waste more of your time, but you might be interested in the continuing videos of Henri the French nihilist cat on Youtube ( )


Me too, which might be part of the confusion here.


It's interesting to see "students should be challenged in university!" turn up in the mainstream discourse about safer spaces. First of all, those who produce that rhetoric (to be clear: I'm not accusing anyone here of this. This is an opinion columnist-centric rant) seem to regard young feminists/LGBT/anti-racist campaigners' preconceptions as the most important ones to challenge, not fundamentalist believers or (to pick at random) guys' societally encouraged entitlement to speak over their classmates. Second, safer spaces evolved to give those who are constantly being challenged a break, so even given the above, maybe it's not as big a problem as thought?


Greg, that sounds like the stuff I was complaining about — a misuse of the concept of 'safe space'. (At least as I was using it, which was as a refuge — a temporary expedient until the main problem is solved.)

I was going to write a lot more, but we've already derailed Hugh's topic enough.

Although if Hugh's vision is correct, I suspect we'll all wish for safe spaces online :-(

Maybe trollbots could help save the neighbourhood pub: now a place where you know that, annoying as that chap over there is, he's a real person not a bot :-)


To a large extent students unions in recent years deserve every ounce of scorn they attract.

They've gradually filled with regressive factions.

Recent highlights have included, but are not limited to:

The National Union of Students’ LGBT Campaign has passed a motion calling for the abolition of representatives for gay men (because they “don’t face oppression” in the LGBT community.)

And let's not forget the lovely statements from the recently elected NUS president Malia Bouattia, in a speech that wouldn't be out of place on Stormfront, blaming the "Zionist-led media outlets" and listing the "large Jewish society" as one of the challenges at Birmingham University.

This is what you get when people accept the cry of the regressive and allow them to claim that racism doesn't count as racism when it's themselves doing it.

At the same conference where she was elected, highlights include speakers arguing that the NUS should not commemorate Holocaust Memorial Day, because “it’s not inclusive.”

Once you have a group of people who get a free pass on racism, they gradually worsen until they're spouting the same shit as skinheads.


Hee, 182 comments and nobody's noticed one further use for bots that can pass as human: Giving your otherwise unused facebook account to an easily-herded bot that fills it with bland, inoffensive posts, with just enough data gleaned from the real you to add a bit of flavour.

After all, one occasionally hears about employers that review the social media accounts of potential new hires and consider the lack of one to be suspicious. Maintaining something seriously inoffensive takes work and interaction with other personas like that, so best to hand it over to a bot.


I'm not sure if you are trolling, but I'll reply anyway to point out the overly narrow framing in your explanation. You are framing safe spaces as therapeutic. Actually, the concept is broader than that. Charlie's comment about research on related topics is more salient to the discussion.

I was hesitant to help run gender-balanced programming workshops or other types, but went ahead and ran some anyway. I don't want all education to be segregated, but observing the difference in experience of attendees motivates me to want those types of settings to exist.

For whatever reasons, biological or cultural or whatsit, people (usually cis men) will talk over and dominate conversations even with the best intentions. It interferes with learning.

It also interferes with productivity in a work setting*. I've considered having some ambient feedback that detects the amount of turn taking in meetings to nudge monopolizers to pause. It would be interesting to see if an intervention like that improves the outcome of discussions. Discussions of, say, API design, not just social settings where people can relax away from baseline interactions with well-meaning non-marginalized people.

And not only could some feedback mechanism help in that regard, it could also help non-neurotypical people participate in interactions with neurotypical people.

* my curiosity is motivated by papers such as "Evidence for a Collective Intelligence Factor in the Performance of Human Groups"
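
A minimal sketch of the turn-taking feedback idea above (all names and the threshold are my own invented placeholders, not an existing tool): accumulate per-speaker talk time and flag anyone whose share exceeds a nudge threshold.

```python
from collections import defaultdict

class TurnTracker:
    """Accumulates speaking time per participant and flags monopolizers."""

    def __init__(self, nudge_share=0.5):
        # nudge_share: fraction of total talk time above which we nudge
        self.nudge_share = nudge_share
        self.seconds = defaultdict(float)

    def record(self, speaker, duration_s):
        """Log one speaking turn of duration_s seconds."""
        self.seconds[speaker] += duration_s

    def shares(self):
        """Return each speaker's fraction of total talk time."""
        total = sum(self.seconds.values())
        if total == 0:
            return {}
        return {s: t / total for s, t in self.seconds.items()}

    def monopolizers(self):
        """Speakers whose share exceeds the nudge threshold."""
        return [s for s, share in self.shares().items()
                if share > self.nudge_share]

tracker = TurnTracker(nudge_share=0.5)
tracker.record("alice", 30)
tracker.record("bob", 300)
tracker.record("carol", 45)
print(tracker.monopolizers())  # prints ['bob'] (300 of 375 seconds = 80%)
```

In a real deployment the `record` calls would come from speaker diarization on a room microphone, and the nudge would be some ambient cue rather than a printout; the hard part is the audio pipeline, not this accounting.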


Hi Sheila. No major arguments with you. My main experience of "Safe Spaces" has probably been in a different context than yours, and on the basis of that experience I wouldn't call a "gender balanced" space a "safe space." They are (or at least were) similar but not congruent, with different intents and structures.

What's also obvious is that the term "safe space" has been evolving since the last time I used it. If a "gender balanced" space is now a "safe space" the term has evolved into something different than it meant when I first used it in the 1990s.

Your points on cis men dominating the conversation are well-taken, but I'm not aware of the current best practices in handling that (though I do vividly recall the tantrums I dealt with from an overprivileged cis male thirty years ago when I instituted the use of a "talking stick" during our Dungeons and Dragons games in order to make sure that everyone got a chance to participate equally!)

I am, BTW, in favor of gender-balanced practices* and training, including assertiveness training** for women and non-cis males.

* And otherwise balanced (race, religions, etc.) where appropriate.

** Do they still use the phrase "assertiveness training?" I like to think my heart is in the right place, but my terminology is probably quite archaic, as I haven't had any real experience with these issues in 20 years or so and even then much of it was theoretical.


Thanks for the clarification, by this you can tell I don't have a long history with the concept. In the 90s I was way more clueless than I am these days and didn't encounter the concept. (my awareness of my cluelessness comes in waves).

My first exposure has been with people doing work to make hacker/maker spaces and techie groups more inclusive. Maybe one day the language will evolve and this will get called something else.

I'm an amateur in this, hence don't follow the research as closely as people doing the research. The consequence is that I can't answer your question about training, because I don't know what training/interventions have been shown to work. It is something I'm curious about though!

I'm curious about what's been going on with Project Implicit training and whether it shows a long-lasting effect. There are articles about big companies like Google using the IAT work and the materials from allies training workshops, but who knows if they will publish the data or even reach accurate conclusions about it. Social science is a hard science.

In educational settings, one can look at what's been done at Harvey Mudd college and GA Tech with the media group. I follow Mark Guzdial's blog in order to get a bird's eye view about CS pedagogy and he also discusses this aspect of education.

Anyway, one of my friends really impresses me because he seems to catch when he or another person talks over someone and pauses and tells the other person that so-and-so was talking. Maybe he got training in that or perhaps has read up on the topic.

There are criticisms of the advice for assertiveness training (I don't know what it's called either, but informally I've seen people debate the "Lean In" book) for women et al.

Re trollbots: in programming we have the concept of rubber ducking, so a duckerbot would probably be useful. This is also making me think of Brian Eno's Oblique Strategies; bots based on that type of material could be neutral to positive, I guess. One could think of it as interactive generative art.
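
A toy sketch of that "duckerbot" idea: whatever you tell it, it replies with a generative prompt in the spirit of Oblique Strategies. The prompts below are my own invented placeholders, not Eno's actual cards, and `duckerbot` is a hypothetical name.

```python
import random

# Invented prompts in the spirit of Oblique Strategies; not Eno's actual cards.
PROMPTS = [
    "Explain the problem as if to someone who has never seen code.",
    "What would the simplest thing that could possibly work look like?",
    "Which assumption have you not yet said out loud?",
    "Delete something. What breaks?",
]

def duckerbot(message, rng=random):
    """Reply to any message with a neutral, generative prompt,
    like a rubber duck that occasionally quacks a question back."""
    return rng.choice(PROMPTS)

print(duckerbot("my parser segfaults on empty input"))
```

Because the reply ignores the message content entirely, the bot can never be baited the way Tay was; the cost is that it can never be more than a prompt dispenser.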

There are criticisms of the advice for assertiveness training (I don't know what it's called either, but informally I've seen people debate the "Lean In" book) for women et al.
The one I see used is "neoliberal feminism" - working hard as hell against (primarily or wholly one's own) oppression but never questioning the society built so that that oppression happens. (TINA implies neoliberal, I guess.)

"I've considered having some ambient feedback that detects the amount of turn taking in meetings to nudge monopolizers to pause."

Oh God yes. I'm sick of meetings where fucking retards waffle on for 10 minutes when their point could have been summed up in one sentence. Like whoever speaks the longest wins.


"Anyway, one of my friends really impresses me because he seems to catch when he or another person talks over someone and pauses and tells the other person that so-and-so was talking. "

I was taught as a child that it is bad manners to interrupt someone. Of course, I have lived through the phase where "good manners" were dismissed as a "bourgeois affectation". I wonder who that little bit of ancient political correctness benefited.


There is a "timekeeper" role in some meetings, tasked with (among similar things) making sure that nobody dominates the time. Having this automated somehow with subtle and gradually increasing cues would be great!


It'd probably need to be automated - or at least algorithmic, even if that algorithm runs on a human brain. We perceive a discussion as "balanced" when women speak ~30% of the time and dominated by women above that, so relying on human judgement isn't going to fly.


The SPECTRE meetings in the old James Bond movies show several workable concepts for an automatic system.


Yes, it has to be an algorithm in my experience. Too much bias otherwise and we're all guilty. Not just sexism; talkers tend to dominate the time if given the opportunity.


BEFORE THE MEETING: We're doing a study and we're going to record the meeting today.

AFTER THE MEETING: Mr. Talker, your job will be to listen to the recording and make a list of who spent time speaking and how much time they spent.

Let the bastards listen to themselves dominate the meeting for a day as they blather on and on and on and on and on and and on and on and on and on and on and on and and on and on and on and on and on and on and and on and on and on and on and on and on and and on and.

If that doesn't change their attitude, sacrifice them to the great god of Bad Examples.


A Clinton operative was caught today spending 1 million dollars on Internet Trolls to argue with the supporters of Bernie Sanders on social media. Yes, it could be automated, but it hardly needs to be.

The really awful thing is that this story is barely even news.


well, I was thinking of some subtle technology but we could go with shock collar bots for long talkers.


Add in a microphone and you could automate it. Everyone is allocated a limit, and when they reach it the shocks start. :-)


Hi Anonemouse & Sheila.

First time I was required to do assertiveness training was in the mid 80s. Probably helps to explain why I am less than patient these days. Neoliberal feminism wasn't even a thing then. In fact, if you didn't show up to the course (only females had to), then it was one strike on your record. Out of 3, not that they needed 3 if you were female.

3 jobs later, I swore I would never do this course again (run by the same men making money off women so the corporates could 'prove' they were on top of the woman thing).

The next job I went to (and I have to admit, I had finally got senior enough to say no), I was again asked to do the same training course run by the same men. I politely informed HR that I had already done 3 (not good enough: the company was just trying to escape legal liability from women taking action over a sexist corporate culture), and because I was at the end of my tether that day I DID blurt out 'I'll do assertiveness training the same day you introduce a compulsory course for the men in the company making clear that aggressive behaviour towards women is not acceptable.' Needless to say, that course never did eventuate.

When they threatened me with dismissal, I smiled sweetly and said 'Hey look... that assertiveness training you wanted me to go on? Well, it works. Here I am, refusing to accept the options you are offering me.' (You still couldn't say they were full of shit in those days, although you can now.)

While I didn't get fired, life wasn't too comfortable either.

The biggest criticism of that kind of training for women is that it targets the wrong people. You have people behaving badly, and they make the victims go through training. Sounds a lot like 'she asked for it'. Let's run a course on how women can defend themselves from rape. If they'd only worn a longer skirt...


Apologies Hugh for the derail. I *love* the idea of a bot war, although somewhere in my unpacked boxes of books I'm pretty sure this has been covered already in SF. (20 boxes to go :) Just moved.)

It won't really be too difficult. I think the missing bit in the conversation is that the bots are modelled on humans (I know it IS there, but somewhat buried in the tech stuff on this thread), and we already know what humans are capable of (not all good), so it won't be a huge surprise if we get a bot swarm. We'll probably just think, depending on our social circles, that a grounded 12-year-old boy sent out a distress signal because his parents were mean and 4chan took it seriously.


Boxer's feminism? ("I must work harder!")

It is of course raging bullshit you had to go through that - which I get the impression you may know already. :-)

And yes, your final paragraph is exactly right. Recently some student unions here have been running consent classes, and the furore stirred up by the idea of doing something that might imply some guy somewhere did something that was Bad instead of just telling victims not to be victims was ...startling, let's say.


The 'evolving malware'/'head cheese' stuff in Watts' Starfish books? Though 'war' implies an organization that pathological organisms evolving to escape the immune system (or a digital recapitulation thereof) doesn't have...


I just wish I had been sent on assertiveness training course when I was in my late teens or early 20s. I had a tremendous inability to just say no. I expected hints I dropped like "I'm not really sure", or "it's a bit difficult" to be taken like we lived in a Japanese culture. Part of it was extreme introversion and (presumably) autism.
It took me about 20 years to get to the point where I could bluntly tell telemarketing people to (literally) fuck off and not call again.
Meanwhile, I have probably gone too far in the other direction.


Oh, phone spammers are easy to deal with. They're just an anonymous voice on the other end of the phone who's never going to see me in person (and won't know who I am even if by some outrageous chance they do), I don't care if I haven't fully understood what they're saying, their only interaction with me has been to piss me off, and there is no requirement to maintain any state regarding the conversation. I do not have, and never have had, any difficulty telling them to fuck off and putting the phone down, or indeed screaming at them in German if I'm in the mood. Indeed, they're about the only phone conversations I don't have any difficulty with.

Face to face, though, I am indeed likely, and for the same sort of reasons, to say the same sort of things as you quote, or just grunt, and wait for them to stop talking and bugger off. And then not do whatever their daft idea was. Sometimes more than one iteration is needed before they stop asking, but they always do stop asking eventually.


Me: Who are you? What do you want? What's the name of your company? What's the name of your company? What's the name of your company? What's the name of your company?
Fuck off and don't call this number again.


I just hang up. No cursing or apologies. I hear the machine or the sales blather start...and click.

But there was a time when I did actually go through five uncomfortable minutes of deflecting their industrial chit chat before I eventually found a way to end the call. Honestly, I don't know that my current approach isn't better for them as well as me. But if it isn't, I am not going to lose any sleep over it either.


If I'm in a bad mood, it's:

What is the name of your company?
What is your name?
What is your contact number if we get cut off?
Why are you breaking Canadian telecommunications law? I'm on the Do Not Call list.

At which point they usually hang up. Company names and callback numbers have proven to be uniformly imaginary.

According to the local police, some of these duct-cleaning companies are scams that are basically checking out your house — you have a higher chance of burglary after a visit.


@ 78:

"...that's less sinister than everything that used to be the Web coming to reside under FB's banner."

One of the things that made me take early retirement on a tiny pension rather than deal any more with shit stuff was when a colleague was sent a translation job via social media link. He won't touch Arsebook either, and when he asked the sender instead to send the job as an attachment to e-mail, she had no idea how to do this.

For many Millennials, I think, Facebook already IS the internet and nothing outside exists. I know one who cannot be contacted by mail or sms or voice. If you're meeting and miss the bus, FB is the sole channel.

Having spent a lot of time in Switzerland watching Asian kids ignore the Matterhorn in favour of close-up selfies, I'm a bit tetchy on the subject of Millennials. Waiting for someone to find a use for grumpy old men who look out of the window at the real world instead of at their screens, perhaps like those folk in Bradbury who incarnated books.


@ 96

OGH: "Family planning clinic bombers/shooters and folks who burn down African-American churches seem to get a free pass in the USA; everywhere else in the world they're called "terrorists". (See also Anders Behring Breivik.)"

Practically everyone here in Norway calls ABB a "terrorist". An exception is an American heartland ex-colleague here who insisted that he was "only" a mass-murderer, because only left-wing killers qualify as "terrorists".

On entrapment, I am waiting for the first terrorist atrocity perpetrated by someone "radicalised" by the FBI, whose handler lets the ploy run a bit too long. There was once a meeting of the leadership of an underground party which, due to some sickness drop-outs, was composed 100% of police agents. I wonder what that meeting was like and what they planned.


Ha ha Dick, the bit that obviously went whooshing over your head was that the nanosecond you do assertiveness training, is the nanosecond it's all your fault no matter what happens to you. (In corporate world anyway, they don't give a shit but it's mirrored in the real world or more accurately, it's a mirror OF the real world)

Walk across the car park to your car and get raped? Well obviously the assertiveness training didn't sink in. You think I jest but there are court cases on point. (Albeit 10 years ago) But the fact you think it's ok 10 years later to complain about that right now just shows people that fuck all has changed in the interim.

Was I too subtle when I wrote this the first time?


Ah it's all good in a 'what doesn't kill you makes you stronger' sort of way.

On the issue of consent, in the late 80's early 90's Canada had a law that required 'Active Consent" in sexual assault cases. The premise was, it wasn't enough to assume your partner was willing, there was a positive requirement to actually ask the question (ie. 'do you want to have sex / make love to me')

Alas, by 1995-96 this was overturned as being unfair. I can't imagine anyone on this thread being surprised that it wasn't women who lobbied to overturn that particular law. Or who thought it was unfair.

Ridiculous in a 'he said, she said' environment but apparently giving that small glimmer of hope to get beyond the 3 - 5% of rape victims getting a conviction, it was a bit too hard to stomach (for guess who) at the time.


because only left-wing killers qualify as "terrorists".
Then how does your USSA ex-colleague refer to Da'esh, I wonder?
Since they are not "left-wing" by any stretch of the imagination, being very close indeed to the NSDAP in their "policies".


@ 211 Greg: "Then how does your USSA ex-colleague refer to Da'esh, I wonder?"

Fortunately I did not have to listen to her on the subject of Da'esh, who came along afterwards. But I am sure she would have concentrated on her usual "inferior cultures" angle. Which was almost everyone outside of small-town Minnesota.


The only innocents are children and animals.
Anyway, my "cure" was lots of horrible desensitization training and LSD.
As for you being too subtle, I don't do subtle.


In my opinion, it would have been nice if people took training to know how to behave with you instead of the onus being on you as an introvert and as someone on the autism scale to do all of the learning.

i.e. what I brought up earlier was an idea to teach people how to interact, versus teaching people how to talk over people who are talking over them. As a woman, introvert, etc. I don't like the onus being on me. I'd like the responsibility to be shared.


I play a role, and occasionally forget and get really engrossed in it. I have a "persona" (original meaning of the word). If I am tired or drunk I occasionally let it slip and people encounter something not quite human from their POV.
Otherwise, I have a large opening book when it comes to interacting with people.


Surely you should add: the mentally challenged / deficient, the Saints and the Angels as well?

I'm of course referencing a famous text.

I find this 20th C distinction fascinating.


It's also a complete lie.

Let me ask you about Chocolate, Oil, Guns and Medicine.

If you're ignorant about the mechanisms that allow you access to these, are you no longer innocent?

I only ask because America has specialized in creating a general populace who, while technically complicit in such evils, are completely innocent due to ignorance.


Some might say that was a deliberate ploy. Some might also say that owning a private jet as an Evangelical Pastor is tied into this.


Framing Innocence is no longer the criteria for Moral Judgement.


"Surely you should add: the mentally challenged / deficient, the Saints and the Angels as well?"

No. It's about the ability to make choices. Not making a choice is a choice. Willful ignorance is a choice.
Stupidity is a disease.
There are no saints or angels.



That's why I included the line "mentally challenged".

We don't, ever, as civilized peoples give (full) self-defined consent / legal choice to those who fall under certain thresholds. They literally cannot legally or morally consent or be held responsible for actions due to not meeting these thresholds.

There are Saints: they might only be a marketing device, which is why it's a canny move to only name them after they've conveniently 'moved upstairs', but they certainly exist, as a concept.

And Angels?

My, my: if you believe that AI can exist, you already believe in Angels.

They are, after all, both theoretical mental concepts dreamed up by humans at this point, are they not?


Note: Possibility and Probability are different conceptual spaces. And they're not necessarily a sub-set relation. The common mistake is to think that Possibility governs Probability.

It doesn't.

The reason for this is tied into Math, and proving 1=1.

In short form terms: you can have a probability that if entering the Real, creates a (non)possibility. Sui Generis and all that.

OP's post was incredibly good btw.


Weasel [although this could just be propaganda for the Mink faction with money'd interest from the Stoats and Ferrets] just downed the Large Hadron Collider.

Look it up.

I'm 100% sure that was never on anyone's list of things in the threat assessment stuff.


"Saints" "Angels" & "Devils"

AFAIK & judging by the public records, at least 95% of all christian "saints" are/were egotistic, cruel dominating bastards of the first water, whom you really don't want to get near.
( As in being on the same planet )

"Angel" simply means, in its original form: "messenger of truth" ... nothing supernatural or excessive at all - that got added on later.

Similarly, IIRC "Diabolus" simply meant slanderer or liar.
An opposite-messenger who spread discord & misunderstanding. Again with no supernatural content.


Everyone is "mentally challenged". When the shit hits you as an adult it is because it is a sum of the choices you have made. Almost all of which are based on ignorance, stupidity or just mindlessness. It's just a statement of fact, not blame.
If a plane crashed into my house while I type this and I die, it is because of the choices I have made that put me here, now.


Sorry Dick, there was more snark than I intended in my last reply. It wasn't my intent and you shouldn't have been in the headlights. Mea Culpa.

If it's any consolation, none of that assertiveness training shit we were 'required' to do actually used the word no when it bloody should have. They also didn't cover boundary setting. Had to learn that independently as you did (probably why I get snarky)

There was a whole lot of training on pretty much how to be passive aggressive but not a lot on how to be actually assertive. Damn though, I'm jealous. No sanctioned LSD my end.


Well, I am back from Penguicon (went over the weekend). Invariably I overestimate my ability to go to talks. I meant to see one Sunday on software patents, but missed it. My friend who went to see it told me that there is a monthly webinar review of patents that people can join.

Our reaction was that there could be an informed group of activists who join the channel to give feedback when something has prior art or whatnot.

But because we are all busy, we thought perhaps this was a good thing to make a bot for, but a semi-autonomous one.

A help against patent trolls (I don't know if they have bots, but I would not be surprised).

(The DDoS legal attacks in Lobsters were a moment of mind-blowing fun for me.)


Notes from the Joke Explaining Bot:

Weasel Apparently Shuts Down World's Most Powerful Particle Collider NPR April 29th 2016

two highly energetic photons [particles of light] were produced, and
the two photons could possibly have been produced in a decay of an unknown particle, whose mass would be about six times the mass of the Higgs particle (which ATLAS and CMS discovered in 2012.)

The Two-Photon Excess at LHC Brightens Slightly Matt Strassler Blog, March 18th 2016

Looking to the future, Franco Nori, who led the research team, says, "Our group's investigations integrate relativistic field-theoretical, quantum-mechanical, and optical aspects of the dynamical properties of light. They offer a new paradigm which could provide insights into a variety of phenomena: from applied optics to high-energy physics."

Physicists detect the enigmatic spin momentum of light April 25th 2016



And yes, it does rather (if true) put a spanner in works relating to QM / Standard and so on.

Of course: could just be a Troll Bot in action.



For the raccoons and raccoon-curious among us, lifted from the CERN weasel story,
Nor are the problems exclusive to the LHC: In 2006, raccoons conducted a "coordinated" attack on a particle accelerator in Illinois.
All ended well.
Fortunately, by 1:53 AM, a joint force of operators and Pbar experts managed to drive the raccoons out of their hastily made fortifications. ... No raccoons were either injured or captured during these encounters.


Ah, but let our ANZAC fellows remind us all of hubris:

The Great Emu War


"No sanctioned LSD my end."

None my end either. All illegal. And not a single "good" trip.
After spending millennia in combat of various types and dying repeatedly to the point where death no longer bothers me I came out somewhat different.
Not least my seeming indifference to other people dying.
The secret is that you only care if various things/people/attitudes etc are held hostage to your good behaviour. I shot all those hostages myself long ago.


Didn't quite get the chance to ask him, but here's an answer from the outro to one of his short story collections:

Interesting reading.


Truth is, death doesn't bother me either. The first time I was pronounced dead was when I was 13. Anaesthetic allergic reaction to what should have been straight forward surgery.

On the LSD thing, there are clinical trials for vets now that are allegedly making progress on PTSD-type conditions. About reinterpreting the trauma, apparently. Initial results are promising: you take LSD and someone walks you through the initial trauma and helps you 'reinterpret' it, so no flashbacks or waking up screaming at 3am any more.

When I took that shit as a teenager, the thing that stayed with me (bearing in mind we were playing hide and seek in the Murder park - 15 murders in 6 months) was a) surviving and b) how tired we were when we were coming down.

Even if you have a great trip, it's like you used up your entire energy for the next week in the 6-8 hours you were spinning.

On the combat vet thing. Well my ex was one and I haven't had the best of lives either. On the bright side, he totally understood when I sat up straight in the middle of the night, gasping for air. And when he did, or threw himself over me in the middle of the night to protect me from whatever was in his head, I did too.

It's all good.


On the combat vet thing. Well my ex was one and I haven't had the best of lives either. On the bright side, he totally understood when I sat up straight in the middle of the night, gasping for air. And when he did, or threw himself over me in the middle of the night to protect me from whatever was in his head, I did too.

Here's to understanding significant others.

I don't know first-hand about the waking-from-sleep version; I get those symptoms between laying down and (not) falling asleep.


Not so much duct/duck cleaning offered out here in the Canadian outback, but there are still the "your computer is doing bad things on the internet" and suchlike calls. When I have no time I start with saying "you are in violation of the National Do Not Call Registry" and they seldom stay on the line for the end of the sentence.

Essentially, since the Registry went up, no legitimate company makes cold calls anymore (unless it's a political poll, which gets the "what part of 'secret ballot' do you not understand?" response), so one is assured that the call is a phishing attempt.
On occasion, I've kept them on the line for up to 15 minutes while doing something menial and mind-numbing. The response once they figure it out is (like Dave the proc at 156) often quite explosive and entertaining. I knew what they were doing when I answered the call. Apparently it was a greater waste of their time than mine.

Yeah, things can be a bit dull around here.



"Essentially, since the Registry went up, no legitimate company makes cold calls anymore (unless it's a political poll, which gets the "what part of 'secret ballot' do you not understand?" response)"

Here, polling outfits have an exemption from the Don't Call list (which is not supported by sanctions anyway), but 99% of what they do is actually market research.

Conversation with a tele-critter from Gallup:

"Can I ask you some questions?"

"If this is political opinion polling, yes. If it's market research, no. I'll hang up."

"It's political, I promise. Not products. Honest injun."

"Lay it on me".

"Which of the following cosmetic products have you purchased in the last week?"

I cannot imagine how someone can lie like that, just to talk to someone who has promised to hang up on liars. Or maybe she didn't know the word "political". Or "opinion".

If this is Homo sapiens, the sooner we go extinct the better. Or wait a moment – has anyone hypothesised that whalesong is actually water-borne spamming spiel?



About this Entry

This page contains a single entry by Hugh Hancock published on April 18, 2016 3:19 PM.

5 Magical Beasts And How To Replace Them With A Shell Script was the previous entry in this blog.

The unavoidable discussion is the next entry in this blog.
