
A purely theoretical dilemma

"Welcome to the galactic federation, humans! All our riches and elite super-science will be yours—immortality, faster than light travel, the tools to build AIs, cures for all your illnesses and a working theory of economics that abolishes poverty and war and provides as much wealth as anybody wants—just so long as you sign this simple easy agreement and consent to make one minor cognitive tweak so that you don't mistakenly destabilize the false vacuum and destroy the universe."

"Um. Wossat, then?"

"We need you to become a group mind. Studies show that in all identified previous cosmoi, individualist tool-using sophonts with access to these technologies harboured splinter groups so deranged that they collapsed the vacuum energy. There are no known exceptions to this rule: apparently telling you collectively not to do something just makes it inevitable. Unless you Borgify first—become a group mind, render your individual mental boundaries permeable to one another, and allow any other human complete access to your thoughts and memories. Group minds generally stick to the terms and conditions voluntarily."

"Uh, let me get back to you on that. What happens if we say 'no'?"

"Then, regrettably, you will discover that you have asked a question to which you really did not want to learn the answer."




Your question: should the human species collectively sign up to (and ruthlessly enforce) the proposed agreement? Discuss the pros and cons.

(Follow-up question: what are the failure modes and unforeseen consequences?)

354 Comments

1:

Alternative question: What are the properties of the true vacuum that would result from false vacuum collapse, and is it possible to engineer a form of life/computational substrate that can operate in it?

You have confirmed the opportunity, now it is time to look for motivation.

2:

DERAILING.

You're poking behind the curtain. Do that again and I'll unpublish you (at least until we get past the 300th comment).

This goes for anyone else wandering off-topic, too. Discuss, within the parameters set -- don't try and break the frame before the discussion has started!

3:

"Wait, you're asking me to partake of a group mind with those people...?"

4:

Are you sure it's theoretical? More seriously, being a group mind would imply more than mere access to other people's thoughts and memories - it needs some way to enforce consensus. I have believed for a long time that the total abolition of secrecy would cause most of our legal and social lunacies to be sanitised, and replaced by a genuine privacy. I.e. "yes, I know that you do that, but it's harmless, so it's none of my business". In the west, the term 'privacy' is normally used as a euphemism for secrecy. So, no problem with that.

I would be distinctly unhappy about forcing consensus, however. Inter alia, it would more-or-less eliminate innovation - the statistics of random walks (as in Darwinian evolution) are pretty simple, and an unbiased one in a sample of 10^9 moves like treacle. So the only innovations that would develop fast are those that are nearly universally accepted.
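The treacle point can be sanity-checked numerically: the mean of N independent unbiased ±1 walks has standard deviation of roughly √(steps/N), so the group average barely moves even as individuals wander freely. A minimal sketch (function name and parameters are illustrative, not from the comment):

```python
import random

def mean_drift(n_walkers, n_steps, seed=42):
    """Average final position of n_walkers independent, unbiased
    +/-1 random walks -- a toy stand-in for 'consensus opinion'."""
    rng = random.Random(seed)
    return sum(
        sum(rng.choice((-1, 1)) for _ in range(n_steps))
        for _ in range(n_walkers)
    ) / n_walkers

# A lone walker drifts on the order of sqrt(400) = 20 units, but the
# group average shrinks as 1/sqrt(N) -- so a 10^9-member consensus
# would indeed move like treacle.
for n in (1, 100, 2000):
    print(n, round(abs(mean_drift(n, 400)), 2))
```

With 10^9 members the expected drift of the average falls below 0.001 of a single walker's, which is the statistical sense in which a forced consensus stifles fast-moving minority innovation.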

5:

> failure modes and unforeseen consequences?

With everybody able to read each other's minds, the planet devolves into a brief spate of murders, and then a decades-long orgy of sex, much of it transgressive to pre-existing social norms. We never get around to using the hyperdrive. We are, however, grateful for the cure to our various diseases.

6:

I think you'd have to amplify that bit about consensus stifling innovation: a hive mind can afford to waste a few sub-units on trying things out, and good ideas aren't locked in a single brain any more.

(I'm still pretty certain that the main obstacle to getting such a thing started would be the insistence of quite a lot of the currently-powerful on not letting the Wrong Sort into the hivemind.)

7:

My response: It depends ... need more info.

How far into the future is this scenario?

Does every human get a vote?

What would be a sufficient majority for this Borgification to start? And if only a technical/statistical majority (50%+1) is required, what happens to 'No' voters?

How permeable are these mental boundaries exactly? It sorta sounds like it's passive info-sharing vs. active forcing info onto the other person. First reading gives the impression this would be more like being crammed into an over-populated cell with next to no privacy rather than having all minds/persons forcibly being show-horned/squished into one.

How does this affect human reproduction, raising kids, etc.? Would humans actually survive?

Can this process be reversed so that we could bring back the human species sometime in the future?

What's the closest-to-human resembling species (and identify the exact attributes used for this) that these aliens have ever tried this on?

What's the success rate of this force-into-one-mind? If not 100% ... what went wrong, why? What are the chances of success for humans ... based on what exactly?

8:

You are assuming that the change leads to a hive mind. This is not necessarily so.

9:

Yes! But I think that the new generation would get bored with the obsession with sex and, while not stopping the activity, would start to put it in its place, and we would end up with a rather saner set of conventions. Of course, that does mean that the new scheme ensured that emotions and feelings passed through, and that empathy is the norm for humans (as research indicates).

10:

I, personally, would rather be dead.

Also, I doubt that the post-borgification entity could meaningfully be described as "human".

11:

Pls correct typo 'show-horned' should be 'shoe-horned'.

(Surprised the spelling auto-correction let this through.)

12:

You are assuming that the hive mind thinks logically, which would make it very little different from a single super-organism. I was assuming more of an emergent consensus mentality. That aspect needs consideration, I agree, as neither assumption can claim priority.

13:

In the scenario as presented, I don't think we have a choice.

One thing not mentioned is how we are introduced to the galactic federation -- who contacted who -- but I suspect that's stepping close to a derail.

The benefits are, presumably, a lack of critical anxiety related to our mortality, physical and economic limitations.

What would happen to crime, though? What would constitute a crime? Who would be the victim, and who would be the perpetrator? i.e. how would individuality be affected? What does "private property" mean in this scenario?

Would we be "immortal" only in the sense of being a potentially infinitely continuous group mind?

I guess all of the above can be rephrased in terms of earlier questions about the permeability of the individual minds: i.e. how much "me" is left.

So, it's very much a Faustian bargain, especially if we summoned the demon ourselves.

14:

On the very limited information provided concerning the scenario, I would say 'yes' if only because it is being strongly hinted that we aren't really being given a choice at all and, whatever my views on autonomy and liberty and whatnot, I think a good rule of thumb is "don't try to start a fight with something vastly more knowledgeable and powerful than yourself, you'll almost certainly lose." And the terms being offered? Well it appears the offer is being made by a group mind, so I guess that it has found a way of coming to terms with its status. I guess humanity could too...

15:

(apologies if mumbling about the details is derailing)

Well, there are group minds, and there are group minds, maybe. Contrast Alastair Reynolds's Conjoiners with Peter Watts's various kinds of group mind, which are effectively emergent intelligences that appear to suppress all sense of consciousness or agency in their component minds.

The latter seems fairly indistinguishable from death, to me, though at least it has a rather more interesting outcome (and is better than a pointless death, which appears to be the alternative on offer by the federation). The former doesn't sound too bad to me, because although it would result in a total loss of individual privacy and secrecy, everyone would be in the same boat which is better than the sort of Orwellian future that various ruling classes would like.

I would be distinctly unhappy about forcing consensus

Would it be forced? It might be that the resulting super-organism avoids the usual cognitive traps that individuals suffer from, with the result that everyone agrees that certain kinds of action are a Pretty Stupid Idea. No posthuman superintelligence is ever going to say the equivalent of "hold my beer and watch this!", right?

16:

> Also, I doubt that the post-borgification entity could meaningfully be described as "human".

Absolutely, we would be post-human. But we're only greeted as humans, there's no implication on the part of the federation that we would (or could) remain so.

17:

I suspect one might have to allow people a choice between generational exile or joining a group mind.

Some sort of mind-wipe would need to be done.

Presumably, if this contact has been triggered by a certain developmental threshold being crossed, a similar scenario might occur in future with the exiled population.

Personally, I think I'd join a group mind. Sounds a bit like Nirvana.

19:

How does a group mind arrive at a consensus anyways? When faced with a true dilemma of choosing between two equally crappy alternatives, humans (as well as lower animals) can become stressed to the point of insanity/dysfunction. How do we know that this mind hasn't gone 'round the bend?

Let's see some evidence, please.

20:

"How does a group mind arrive at a consensus anyways? "

How does the group mind in your head arrive at a consensus? Ever been "in two minds" about something?
The fallacy of assuming that people already have a single mind and/or single consciousness.

21:

That's exactly what I mean ... just because that alien is a group mind in no way means that it is sane, i.e., is capable of making good decisions.

22:

Neal Asher posited that his quasi-immortal wasp hive minds would eventually develop cognitive dissonance and either die or split into new entities. One of the things that pulled me out of the second Ancillary book was when I started thinking through the implications of her universe. A big one was why would a Dyson sphere polity bother with something as small as obsessively conquering terrestrial size planets. But another was why hasn't the dictator split into a whole bunch of conflicting/diverse personalities over the centuries, not just two that are more or less the same?

23:

All your questions are irrelevant. Resistance is futile. Join the Borg and gain access to vast amounts of information and power, or stay out ... and don't get any of it.

(Assume a 100% success rate, but the process of joining is irreversible: you sign up for it, anyone can inspect your interior processes at will -- and vice versa.)

24:

There is also another problem with collective consciousness as implemented through known technologies.
We know, for example, that the human brain seems to operate with a binding frequency of around 40Hz. If we posit a superconsciousness with a reaction time similar to ours, we need to be within 25 light-milliseconds, or spatially extended no more than about 7500km.
Since we cannot send signals through the Earth, even a single planetary mind cannot be unitary.
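The figure checks out on the back of an envelope, assuming the 40Hz binding frequency quoted above (the constant names below are my own):

```python
C_KM_PER_S = 299_792.458   # speed of light in km/s
BINDING_HZ = 40.0          # gamma-band binding frequency cited above

cycle_s = 1.0 / BINDING_HZ           # one binding cycle: 0.025 s = 25 ms
max_span_km = C_KM_PER_S * cycle_s   # farthest a signal can travel per cycle

# -> roughly 7500 km: a light-lag-bound mind with human-like reaction
# times can't span much more than a continent.
print(f"{cycle_s * 1000:.0f} ms per cycle -> {max_span_km:.0f} km max extent")
```

Note this is the one-way transit distance per cycle; requiring a round trip within one cycle would halve it again.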

25:

Maybe a group mind protects against disrupting the vacuum energy by the group mind being much much much less capable than the sum of its parts? Thus we become functionally immortal navel-gazing idiots requiring gAIs to wipe our collective arses?

26:

I guess all of the above can be rephrased in terms of earlier questions about the permeability of the individual minds: i.e. how much "me" is left.

How much "you" is left right now, from the perspective of "you" circa age 4 years?

27:

Here's a set of questions that seems important to get answers to, before any more specific responses can be decided on:

"What evidence can you offer that the statements you've made accurately describe reality? How much evidence is available, how much can be independently verified through methods you'd have difficulty influencing, what novel statistical methods would be required to interpret that evidence, and, in general, how can we Science the **** out of your statements before a response is required?"

28:

What would happen to crime, though? What would constitute a crime? Who would be the victim, and who would be the perpetrator? i.e. how would individuality be affected? What does "private property" mean in this scenario?

The concept of "crime" ceases to have meaning, is my guess: either it's abstract nonsense about self-denial and fetishistic rule-following, or it's self-abuse.

What looks like an assault to separate individuals is, to a group mind, "why am I hitting myself in the face?"

Tax evasion becomes a case of the right hand stealing from the left hand's pocket.

And the ability of the wealthy to ignore the plight of the poor vanishes when it's their experience of an empty stomach and their experience of a leaky roof.

(In fact, money and property either stop holding any meaning, or begin to mean something radically different, within a group mind.)

29:

I'd say yes.

I don't see that we'd lose individuality by doing so. Making our minds permeable and with complete access allowed does not have to mean that everyone is reading everyone else's minds all the time.

I'd expect the members of the group mind would be on the whole nice people, what with being immortal, healthy, and rich. Privacy would be based on social convention, or politeness if you like, not enforcement. Think of it as a community where you don't bother locking your doors because you trust your neighbours not to come in unless it's an emergency.

30:

My vote is 'No' if an immediate answer is needed without any further info because such a scenario is exactly like those pesky door-to-door scam artists who have this terrific deal 'Today only! Just sign here, here, and here.'

Thanks, but I'll pass.

31:

Now that's a good objection.

But, just for the sake of argument: even if we can't maintain coherency across distances greater than about 7000km, we can still sustain coherence in arbitrarily large (billion plus member) group minds. If what the Galactic Federation want is a brake on idiotic impulsivity, that should suffice.

More to the point: variation between geographically distributed sub-groups of the human-group-mind members will be smoothed out as the higher-level groups exchange individual components freely. They may maintain separate internal state coherency but their boundaries will be leaky.

OK?

32:

> How much "you" is left right now, from the perspective of "you" circa age 4 years?

True.

Subjectively I can confabulate a continuum from age 4 to now: it doesn't feel as disjunctive as transitioning suddenly to a group mind might, presumably.

Objectively, I might go for it. If it meant the loss of self, freedom from attachment might be worth it, as terrifying as ego-suicide sounds.

I'd probably chicken out, though.

But the original question merely implies access to individual thoughts and memories, but doesn't specify how or whether those would be synthesised into some super-mind with its own intentionality.

If it's just the ultimate "surveillance regime", we might remain largely individual. How then, though, would a self-destructive act on the part of one or more nodes be averted without some positive act from somewhere?

Would "bad" thoughts simply ripple out and be acted upon by a physical police force of some kind? Thought-crime would really be a thing.

I don't think I'd sign for that...

33:

> If we posit a superconsciousness with a reaction time similar to us, we need to be within a 25 milli light seconds, or spacially extended no more than 7500km.

Do we get FTL before, after, or as an enabling technology for becoming a hive-mind?

If the latter, we're not limited to 7000km.

Charlie's fix works in either case, anyway.

34:

So, a group mind doesn't necessarily mean that each component is aware that it's part of the group. The emergent group mind then functions as a subconscious of sorts, guiding the individuals to make the correct decisions (as judged by our benefactors). The illusion of free individual will is maintained. We can even pretend to keep practicing conventional democracy and so on. In this case, despite subversive undertones, I would say yes, we should. A failure mode for this would be certain individuals or groups realising the mechanism guiding their thought and understandably trying to get their true free will back.

35:

I say no. Our individual identity is too valuable to give away just because some other species is frightened by the end of the universe. Their desire to perpetuate a meaningless existence is unnatural anyway. If we collapse the universe in our exuberant desire to master nature, then so be it. We can prove these guys' premise is wrong by fighting them... and winning.

36:

Can one of us join their hive mind and report back to advise us? Presumably that person will be fully informed about the consequences and true motivation behind the demand. But will they really be fully informed? Will we be able to trust their advice?

37:
I say no. Our individual identity is too valuable to give away just because some other species is frightened by the end of the universe. Their desire to perpetuate a meaningless existence is unnatural anyway

I am reminded of pro-gun-ownership arguments.

Your right to discover and operate false-vacuum-collapsing technologies ends at the point where we share a causal domain, etc.

38:

So this group mind - it would presumably also include all the people who aren't neurotypical, right? The people on the autism spectrum, the people with personality disorders, the people with attention deficit problems (ADD, ADHD), the people who exhibit mental illnesses (the schizophrenics, the bipolar folks, the depressives, the anxious, the obsessives, etc etc etc). I think it would be worth asking the Galactic Federation whether any of the species they've previously administered this particular cognitive quirk to had the same range of mental health issues humans can exhibit - and what the results were in such cases.

I mean, this is me speaking purely from self-interest, as someone who's chronically depressed. My brain is nasty enough when it's just focussed on making me miserable. I'd hate to think what it'd be like if it got a chance at everyone else on the planet.

(Alternatively: imagine a planet-wide epidemic of Koro...)

39:

Could be a bit like the ANA structure in Hamilton's Void universe. The consciousnesses downloaded into the substrate all comprise a part of the processing capacity of the governing consciousness. Each is technically individual but subject to the rules and decisions imposed by the governing gestalt. Arguments over whether the downloads are really individual within ANA or merely experience the persistent illusion of individuality abound, but that's hardly the point.

Would the aliens require active conscious participation in the group mind or would a kind of assumed participation on a subconscious level be sufficient? If the end game is a kind of behavioural limitation it shouldn't matter whether they consciously participate or not. So long as the gestalt enforces the rule set...

40:

Without evidence we have nothing to base our choice on: either decision could be terminal for the species.

An SF version of the God-Adam chat: Do exactly as I say without question or suffer eternal damnation/obliteration.

41:

It always seems to me that there is a large misunderstanding as to what constitutes a group/hive mind. The popular conception is that you lose your sense of identity, slave yourself to a higher will. Lose self control. Etc. I'm pretty sure that this is not how it works out in practice. Take the examples of hive minds that we have around us in other species. Classically, ants, bees, etc. There is no central control in these systems. The queens are merely reproductive pieces, not a central control nexus.

At least in my opinion, SF writing has done a horrible disservice in educating people on what exactly a hive/group mind is. Just look at Orson Scott Card's portrayal in the Ender's Game series. The central controlling nexus is Ender (on the humans' side) and the queen on the buggers'. This isn't a hive mind. It's simply traditional, hierarchical control. While I suppose it could be viewed as a group/hive mind, there's nothing hive/group about it. It's just the military with better order-following. A control system, not a mind in the sense the word is offered up. And using Star Trek's Borg as an example is completely off base. It's just a super-individual with total hierarchical control. Not a collective. Nothing at all like the examples of hive minds we have all around us.

A hive/group mind is an emergent phenomenon, not a hierarchical control phenomenon. Individual ants don't get marching orders from anyone. They make individual decisions using local data (scent trails, etc). And the hive intelligence emerges from all these individual actions. Actions, goals and behaviors that the individual ants are completely oblivious to.

So, while I enjoy the hypothetical question, the answer is pretty simple.

We _already_ live in a hive/group mind. And have for probably almost all of what we would call civilisation. Emergent organization, common goals. Even slavish devotion to imaginary goals that you could have no way of determining. This is where we live already. The internet accelerated this, and we're starting to see more and more emergent goal-driven behavior.

If you're looking for hive/group minds and what they're like, just look around. Ask your self how you feel about it. You're already deeply embedded in any one of a number of the most highly advanced hive/group minds this planet has ever seen. And it has nothing to do with a centralized borgification.

To quote the Brady Movie: Alone, we can only carry buckets. Together, we can drain entire rivers. Religion, nationalism, tribalism, social networks, corporate lifestyle. We're already a group/hive entity. We've been swimming in it throughout our entire history.

So I'd just look at the alien representatives, cock my head and let them flatly know that I wouldn't want to be a member of any group that wanted me as a member. Especially not one blind to the fact that this planet already hosts many overlapping, powerful hive/group minds composed of shifting patterns of individual humans, and wouldn't even be worth contacting if it wasn't one already.

Don't know if anyone's interested in such, but David Sloan Wilson's Darwin's Cathedral is an excellent introduction to "society as an organism" and how this interplays with human evolution. Pretty interesting stuff.

42:

Isn't this more or less the plot of the Galactic Milieu series?

43:

While it is a fair objection, there is an obvious answer to it.
Not necessarily the only one, or what would be the case, of course. The very reason that IT people started getting interested in the operation of the mind is that the human brain solves a lot of problems a LOT faster than was expected, given the very slow switching time. Indeed, quite a lot happens in less time than it takes for a message to get there and back again. I can't remember the reference, but it is the real effect used in this:

http://www.nature.com/nature/journal/v436/n7047/full/436150a.html

A similar phenomenon occurs in the motion of jellyfish, oceanic light wheels and similar effects, where the action of the whole occurs faster than messages can pass. However, the most interesting point (at least to people like me) is what sort of problems are soluble by such systems and what are not. It's essentially the same question as what problems are (more-or-less arbitrarily) parallelisable and what are not.

44:

"Yes" is your answer, but I'm going to posit that part of the medical wisdom the Galactic Federation can provide includes fixes for the problems we experience due to dodgy neurochemistry -- schizophrenia, endogenous depression, etcetera -- and palliatives for some other conditions due to defects in neurological processing, e.g. psychopathy due to underdevelopment of the medial prefrontal cortex.

45:

Yes. Actually, that follows pretty obviously from the statistics; Susan and I posted on this a while back. The optimal strategy is very often to include non-optimal paths, with a lower probability. Inter alia, it is needed in order to explore the boundaries, and cope with change. This is, indeed, precisely WHY a lot of non-optimal genes have been preserved. Highly inbred populations are very sensitive to changing conditions, and die out much faster than mongrel ones.

46:

Would the aliens require active conscious participation in the group mind or would a kind of assumed participation on a subconscious level be sufficient?

What kind of group mind we form would be up to us. Whether it's simply functional telepathy and a strong inhibition against hitting ourselves in the [collective] face (by destroying the universe), or whether it's a single consciousness entailing universal ego death and subordination (to a single consciousness that doesn't want to die, and hence is unwilling to destroy the universe it lives in), is up to us.

47:
The popular conception is that you lose your sense of identity, slave yourself to a higher will. Lose self control. Etc. I'm pretty sure that this is not how it works out in practice.

Have a look around for some of Peter Watts' writing on this matter.

His notion is that human hive minds would necessarily suppress the consciousness of their component parts. His justification for this is that each hemisphere of your brain is capable of operating independently (see also: hemispherectomy, and stuff like unilateral amobarbital anesthesia) and generally has a slightly different personality to the whole brain, but once you wire them up with a nice high bandwidth connection (eg. corpus callosum or in a pinch, the bits of crosswiring that still exist after a corpus callosotomy) you get one personality, which apparently has a single thread of consciousness.

Still just SF, of course, but he does take some time to cite his sources. It is all interesting stuff.

48:

This is a really relevant point and a key possible failure mode. Using a group mind in this fashion - to ensure no one does anything stupid - assumes that the components of the group, taken as a whole, will regress towards a mean of general rationality. It's possible that smushing together all the different 'non-neurotypical' traits you'd find in a random cross section of humanity could result in a group mind where unpredictable and emergent things happen as a result of the presence of those traits. Furthermore, these aliens may not be prepared to deal with these quirks of human psychology. Perhaps we are atypically psychologically complex and this approach that works on other species might not map over in the way intended.

49:

The question implies that whatever form this new cognitive regime takes, it's able to consistently suppress a particular type of self-destructive behaviour on the part of any node, which I don't think is true for hive/group-minds as you describe them.

What's not clear in the question is whether or not we'd have to cobble together something with whatever level of technology we had access to at that point, or whether we'd be given some tools and a template. (Possibly with a border protocol allowing us to interface with other group minds, but that's out of scope.)

50:

If what the Galactic Federation want is a brake on idiotic impulsivity, that should suffice.

If you need a brake on idiotic impulsivity, the ability to read minds is probably not enough. I think you also need a way to correct aberrant behaviour, and to deal with impulsivity you need a rather short reaction time, which means you probably need not only read access to the minds but also write access.

51:

We _already_ live in a hive/group mind. And have for probably almost all of what we would call civilisation.

Damn it, we have a winner already and we haven't even reached fifty comments yet!

(What I'm postulating could be achieved by just giving us all functional telepathy -- an organ to broadcast our brain state constantly, and the ability to tune in on someone else's brain state -- and letting us build more frictionless institutions on top of that, instead of on top of spoken/written language. In other words, a more extreme version of the hypothesis that social media are a crude form of machine-mediated telepathy.)

52:

> What kind of group mind we form would be up to us.

OK, fair enough.

> I'm going to posit that part of the medical wisdom the Galactic Federation can provide ..

But we could ask questions about how to achieve it.

53:

Ah, but hang on a mo - I seem to remember the original premise was the technology comes after we've gone the group mind scenario... at which point, well...

Let's put it this way: one of the most wonderful symptoms of a lot of different mental illnesses is this - when you're most in need of help, you are inevitably either unwilling or unable to ask for it. (Unwilling because either a) you don't think you're ill - often you're thoroughly convinced of the opposite; or b) you know you're ill, but you can't believe anyone else is willing to believe you; or c) you know you're ill, but you don't trust anyone else to help you for various reasons. Unable comes down to things like straight up catatonia, or having the sky bleed green in your head and all the noises in your mind coming out plaid).

Mental health technology first, please?

54:

Well, that just raises the question of how the hell we get humanity as a whole to agree on the type of group mind they want to form without being a group mind in the first place.

A simple majority vote? That leaves a lot of people being taken along unwillingly. You have a failure mode right there: not everyone joins. Humans being humans, the remnant will breed and end up back in the same place with the same dilemma, or cause the end of the universe themselves should they find a way to escape notice.

Would these aliens allow a collection of differing group mind types depending on individual preference? Or must it be a whole-species thing? How many types of group mind can we think of? And at what point are there enough separate group minds of the same species that they start to function as individuals, leading to the same potential apocalypse scenario? The aliens might then insist all the group minds combine into another meta-group mind. Then the same dilemma arises. This could end up with fractal group minds - the more you zoom out, the larger the group of group minds becomes...

55:

I hope this isn't considered a derail, but there is an unspoken question here: In our particular case, would the formation of a group mind actually fulfil the purpose of the entity that invited us to form one?

The reason that I'm asking this question is that there is, among humans, a contagious form of mental instability (known as fundamentalist religion) which would probably be even more contagious given mind-to-mind communication; the group mind as a whole might end up affected by it.

And that particular mental illness has, as one of its characteristics, a desire to actually cause disaster on the scale spoken of.

Would you give any of the more lunatic fundie Christian or, maybe even more so, Moslem chiliasts who have openly spoken about the Rapture and/or the coming of the Twelfth Imam (for example) a vacuum energy implosion device? And if not...

56:

Possible failure mode: if the hive mind is not created from the whole human race at once but is grown through an intermediate stage of smaller hive minds, then we have the possibility of different hive minds in conflict - maybe enough of a conflict to cause wide-scale destruction, maybe enough to cause human extinction.

57:

I'm afraid our generous benefactors would find me less than enthusiastic at any attempts to "fix" my autism. And at the same time I somewhat doubt many would want to share the mind with me, it's a weird place. How bad that would be probably depends on the technical details, though.

OTOH, I tend to answer questions formed as "option A or die" with "option A, please".

58:

Sounds like a deal to me; it could be like Heinlein's Church of All Worlds, just not as sweaty. But is it even possible? Is the language of thought more or less the same thing species-wide, or does each of us develop unique mental software as we grow?

59:

Is this just a scam for the galactic mind to get more wetware to execute on? Either as a variation on the door-to-door salesman limited-time pitch, or because the galactic mind is in thrall to its own fundamentalist religion.

If we meet the other galactic civilizations, could we get a better deal?

60:

This is part of what I'm getting at with my questions about the status of the neuro-atypical and the mentally ill. It doesn't even have to be something like a religious imperative to suicide causing the problem - ordinary old suicidal ideation could be just as big of an issue. Actually, I suspect garden-variety suicidal ideation would be an even bigger issue, because from personal experience, when it hits, it meshes in so very nicely with the rest of a person's belief system that it's positively indistinguishable from "normal" functioning.

I've survived thirty-five years of suicidal ideation (it started kicking in when I was about ten, and it still hits every now and then) because I come from two family lines full of miserable depressives who tend to carry on with life in order to spite the universe - call it a micro-cultural adaptation, if you will. I had a lot of discussions with each of my parents growing up, where they each passed on to me various coping strategies for dealing with the miseries when they hit. Even then... I've come close at times.

As with modzero - I doubt there'd be many people who'd want to be sharing in my mind. It's not a comfortable place to be on occasion.

61:

I'd add something which I think is more of a point (d) than an expansion of your point (c). I don't want to be "fixed". I'm fine with being prescribed drugs to reduce the shitness to a tolerable level. But I'd run a mile from other forms of therapy because I'd feel my identity was at risk.

On which note, WRT the "how much of you is left from age 4" point - hard to be definite because I can remember early experiences much better than thoughts, but based on what I can remember of my thoughts on the nature of existence, I'd say "most of it". Later experience has by and large tended to confirm those early opinions. Of course, there is now a lot more to it, but it's mostly by way of expansion of scope (cf. Einstein vs Newton) rather than contradiction and replacement (oxygen vs phlogiston).

Charlie's minimum scenario of global read access is diametrically opposed to the way I like to live; it would be impossible for me to cope with, and whether it's offered "raw" or with a preceding treatment that would necessarily involve a large degree of identity destruction, either way I'd consider a soft nosed bullet to the back of the skull a preferable option.

62:

Not to mention the old saw "people are clever, crowds are stupid"...

63:

I'm not convinced that problem arises at all. The aliens are offering us FTL, so I reckon we can assume the maximum coherence radius is a lot larger than a planet.

65:

Why do you see consensus being forced as a requirement of a group mind? I'm certainly aware that my own personal mind doesn't have a consensus. It usually has a decision, but there are usually parts of it that would prefer the decision be otherwise. Why should a group mind be any different?

Additionally, there are parts of my mind that have particular interests that the rest of it sort of ignore. E.g., I'm usually not too interested in the particular pattern of my breathing, but a part of me is. And it's a part that can be conscious, as in the focus of attention, when appropriate (and sometimes when inappropriate).

66:

At which point we elect Donald Trump as "group mind coordinator" and humanity becomes a race of depraved real-estate salesmen who redline the aliens that "don't give us any respect."

67:

Do you often do two incompatible things simultaneously? If not, you have a consensus on action. A consensus does not mean that everyone agrees, merely that there is an accepted decision.

68:

You kinda beat me to it; I was thinking of making a comment on Drumpf and his followers as an example of a less than sane (to be charitable) collective mind, in large part mediated by social media--just look at his rabid tweeters replying to any mention of the man.

Also was thinking isn't this sort of how "Childhood's End" ended? It's been on my reread list for a long time, been 30+ years since I read it and I refuse to watch the recent TV version.

69:

What're the odds of the temporary consensus, as we first experience the systemic effects of our society as something we personally do, being an overwhelming wave of suicidal ideation?

(hint: I've seen this happen intermittently to enough individuals with twitter or facebook access)

70:

My take is that property means something radically different. My right hand is still spatially localized, and reactions within it are local to that hand (with effects that spread out, of course). If I get a splinter in my finger, it's the finger that gets most of the inflammation. But my whole body has to deal with fixing the problem, and if it gets bad enough, my whole body starts experiencing a hyperactive immune system.

So you still have localized properties, but the meaning is more similar to the property of an object in a computer language. SOME of the meanings of property don't translate from individual to collective, and others do.

71:

A good point. Permeability of thoughts can mean a lot of different things. Given that it isn't going to be a union of bodies, each local actor will need to control local actions, because the group mind doesn't have unlimited attention. How much attention were you paying to your left little toe before you read this? So each physical body will need to be locally controlled, and the physical mechanics for doing that are already present. The local body needs to attend to what the local body is doing.

The way this is different from a high speed version of advertising-plus-political-propaganda is that lies can't be hidden.

The more I think of it, the more it seems that any reasonable implementation of this would be a great net benefit to everyone except sociopaths. Justice AND Mercy for all.

72:

Having just read my RSS feed and then read the comments it seems one of Charlie's acceptable definitions of a hive mind is, in essence, functionally indistinguishable from a single human global culture.

Given the readership of the list, it's a pretty safe bet no one is going to disagree if I say people should not be discriminated against on the basis of their gender, sexuality, race or religion. If I add disability, I'm on less sure ground because there's a decent chance on a list with a readership this diverse someone will raise their hand and say "but what if...?"

However, if we'd grown up in a different culture (I can think of several without stretching my mind too hard), let's say the Europe, America and antipodes of my grandmothers (so the start of the 20th Century), none of those truisms of modern life would be. To a fairly large extent it's still much less generally true in Japan than in Western civilisations, I believe, for example.

So, if I frame Charlie's questions in terms of "you must adopt a single culture by consensus, with some form of thought police, or face extermination of your race" my honest answer is I don't know. A thought police might do away with a lot of the crap we get from our current political and religious hierarchies and so we'd get stewardship and leadership (or more of it) rather than self-service (or less of it). But none of our cultures make me think "Yeah, that's so great I want the whole world to live like that."

Possible failure modes: We pick one of the Chinese languages (Cantonese or Mandarin probably) as the lingua franca because it's selected as the most widely spoken and read single language (everyone that reads any of the other Chinese languages reads those two as well after all). There is a horrible period of adjustment for everyone else. The numbers might not bear this out but it's nicely illustrative because I'm pretty sure a small minority of people on this list speak either of them.

Classically teenagers rebel, how does the group mind cope with this? (This is different to being neuroatypical I think.)

Many human societies have evolved and changed. If we form a group mind is it fixed or can it change over time as long as it remains a group mind for us?

73:

There are lots of other reasons why the group mind would need to have a hierarchical (well, or some structured clumping) organization. Reaction time comes to mind. So does dealing with local conditions locally.

I don't think you need FTL transmission to enable large group minds; Internet transmission protocols can handle that problem. But reaction times will suffer as latency increases. If the mind expands to include the moon, then reaction times get a lot slower, so the problems that are considered by the group mind as a whole will need to be a lot slower. But many would still be reasonable. What to do about climate change, e.g., has currently had a resolution time of decades, and I'm rather certain a group mind could have come to a decent solution more quickly.

N.B.: The Internet itself could be considered a first step toward this kind of an entity. It has speeded inter-mind communication and made keeping secrets more difficult.
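For a sense of the scale involved in the comment above, here is a rough back-of-the-envelope light-lag calculation (a sketch only; the distances are rounded averages, and the labels are just illustrative):

```python
# Rough one-way signal delays at lightspeed, illustrating why a group
# mind spanning the Moon (or further) must "think slowly" about anything
# that needs a full round trip. Distances are rounded averages.
C_KM_S = 299_792  # speed of light in vacuum, km/s

distances_km = {
    "antipodal Earth surface": 20_037,       # half the equatorial circumference
    "Earth-Moon": 384_400,
    "Earth-Mars (close approach)": 54_600_000,
}

for hop, km in distances_km.items():
    one_way = km / C_KM_S
    print(f"{hop}: {one_way:.2f} s one-way, {2 * one_way:.2f} s round trip")
```

So an Earth-only mind could sustain a shared "thought" cycle of well under a second, while an Earth-Moon mind is stuck with a round trip of roughly 2.6 seconds for anything requiring consensus.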

74:

Request for clarification re: would the hived humans all have the same thoughts at the same time vs. would hived humans only all have the same cognitive/emotional capabilities and tendencies/thresholds? Wondering whether the exceptional would be lost and humanity reduced to a statistical norm.

How is this scenario different from Clarke's 'Childhood's End', apart from only the children being uplifted and becoming evolved while the adults (the current Homo sapiens version) die off?

If all humans could immediately read each others' minds and memories, unless there was a significant emotional damper put in place, there's a possibility of a murder and suicide epidemic. The first generation of adults would probably have problems initially, but later generations would probably adapt. Which raises the question: At what stage of development does the human mind join the hive mind ... in utero or later?


75:

Finally to the end of the comments so far. First of all, Charlie, thanks for a wonderfully stimulating question.

That being said, I'm with the people who want some questions answered, and mine go something like this:

Religion seems more and more to me like a mental illness. Is it cured, or at least turned down to a non-fanatical level, upon the group mind's creation? The last thing I want is to join a group mind where the fanatics are all arguing with each other. Or do the aliens have some insights into such questions as "the nature of God" or "why are we here" that render such discussions obsolete?

Can I turn the group mind's volume down, or even off, at least temporarily? I note the Douglas Adams bit about the planet which was "cursed by telepathy." In short, I'm willing to have an empathic mind-meld with someone who's miserable, and I'm willing to take concrete steps to deal with their misery, but I'm not willing to be in an empathic mind-meld with a miserable person 24/7/365 - I need to sleep!

Does the mind-meld recede into the background when I sleep?

How much of my mental processing power goes into maintaining the mind-meld? Does the process cause me to become a doofus?

Is there enough fine control that I can meld with some people more than others? (I don't want Donald Trump in my foreground, thank you.)

Do I feel the physical pain of other people? This would be enormously distracting, particularly while driving. Imagine driving past the scene of a bad auto accident, (possibly caused by mechanical failure, which would not be cured by a group mind) and experiencing the pain of the dying drivers. Now I'm screaming and I can't watch the road and I have an accident too! (Or maybe I get into an accident because I'm experiencing the orgasm of someone who lives by the road?)

Do I broadcast my own physical pain? Can I turn off my ability to send/receive pain without turning off other parts of the group mind? In the same accident scenario, it would be wonderful to not feel pain, but also be able to hear the person mentally shouting about how they're running out of bandages; does anyone have a first-aid kit in their car?

Does the group mind allow us to trade bodies, and if so, is this logged somehow? In the accident scenario, what if I'm nearby when the accident happens, so I pull over and broadcast my willingness to trade bodies with an ER doctor? If this body trading is logged, I can't be sued for the ER doctor's mistakes.

Can the group mind force me to pull over and trade bodies with the ER doc?

Do humans or aliens decide what is considered a mental illness?

In a more general sense, how do we work the bugs out of our group mind? For example, if religious fanatics turn out to be a problem, how do we rewrite the code for our group mind to fix/quiet/exclude such people? If you're annoying enough, can you be banned from "posting" to the group mind, but still have to read what the rest of us "posted" so you learn not to be an asshole?

Is the group mind different for children?

It seems to me that many of these issues are routing/switching problems and can be solved by the application of various routing/switching rules. That is, there is a LAN, which is the people with whom you are intimate, and a WAN which you are in contact with sometimes, and you have VPN tunnels to your friends, but employ firewall software when dealing with Racist Uncle Donald. Or does this kind of firewalling violate our pact with the aliens?
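The routing/switching analogy in the comment above can be sketched as a toy permission table. This is purely illustrative: all the node names, tier labels, and the `min_tier` threshold are invented for the sketch, not anything from the scenario itself.

```python
# Toy model of the commenter's LAN/WAN/VPN/firewall analogy: each link
# in the group mind carries a trust tier, and a thought is delivered
# only if the sender's tier clears the receiver's threshold.
TIERS = {"lan": 3, "vpn": 2, "wan": 1, "firewalled": 0}

class Node:
    def __init__(self, name):
        self.name = name
        self.links = {}   # peer name -> numeric tier
        self.inbox = []   # (sender, thought) pairs that got through

    def link(self, other, tier):
        self.links[other.name] = TIERS[tier]

    def receive(self, sender, thought, min_tier=1):
        # Unknown or firewalled peers default to tier 0 and are dropped.
        if self.links.get(sender.name, 0) >= min_tier:
            self.inbox.append((sender.name, thought))

alice, doc, uncle = Node("alice"), Node("er_doc"), Node("uncle_donald")
alice.link(doc, "vpn")
alice.link(uncle, "firewalled")

alice.receive(doc, "first-aid kit two cars back")
alice.receive(uncle, "tremendous thoughts, the best thoughts")
print(alice.inbox)   # only the VPN-tier message gets through
```

Whether per-node filtering like this counts as "complete access to your thoughts and memories" under the aliens' terms is, of course, exactly the open question.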

76:

"Is the language of thought the same..."
Current evidence indicates that it is highly similar. Words with similar meanings activate similar connections between different areas of the brain. Currently this is only known at a very coarse level, but this may well be due to the crude level of instrumentation.

OTOH! It's not the same. Just highly similar. So this may mean exact translations aren't possible at that level. It may be analogous to translating between English and Chinese... you can get usefully similar translations, but the precise shades of meaning don't come across.

77:

With Childhood's end the kids got the group mind first. I was also reluctant to watch the SyFy channel's version. Are you sure you're not me?

78:

How much "you" is left right now, from the perspective of "you" circa age 4 years?

It would be morally wrong to murder a 4-year-old, even if I replaced them with a synthetically created adult. It is not morally wrong to allow a 4-year-old to grow to be an adult.

But back to the main question (and assuming we had adequate proof of the claims): you start off by saying "Borg". That's terrible. Kill off all humans, replace them with a single mind -- judging by quality-adjusted life years, that's a huge loss.

Later, you drop it back to functional telepathy. That's perfectly fine to me. I'd go for that without the giant hammer, if it were fully under my control. With a hammer, even if I couldn't control what I broadcasted and everyone could look at all my thoughts and memories, it's still worthwhile.

79:

I'm sorry, but in my language consensus means that an overwhelming majority agrees. If it's even arguable that it's close, it's not a consensus. The basic meaning I learned was unanimity, but the term was applied in that way in very small groups of people, and when expanded to larger groups the meaning was diluted.

So I can say I often don't have a consensus when I take an action, merely a decision. Sometimes a very reluctant one.

I suppose this argument is being picky, but considering the topic it seems apropos. Sometimes one needs to act without achieving a consensus, merely a decision. And if the group mind is analogous to the mind I would expect it to have analogous limitations.

80:

My answer is yes, but only because I imagine that any Group Mind would be running on my biological substrate, and I would still be me (no different to the current delusion that I am me), and that the human species would effectively be 7 billion thoughts from which would emerge the human personality.

81:

What if they did not limit the meld to our species? In the interest of global unity, we had to merge with all emergent consciousness native to this planet: giant fungi, the ant and termite networks, etc. What if what was left of the cetaceans refused to join with us and we were brought before a tribunal? The loser does not get to join and faces penalties: eradication, or maybe exile to a new habitat for rehabilitation and/or everyone else's safety.

82:

No, and it's not even a difficult question. My argument is based on unprovable axioms, of course, but all ethical questions, once you dig down far enough, have to have at least ONE unprovable axiom, even if it's something like "pain is bad." "Pain is bad" may be obvious to most people, but it's unprovable. Anyway, that's not the primary axiom I'm going with; I'm just making a general point.

Aristotle came up with the term "eudaimonia" to mean "the purpose which all other purposes are working toward." It means something like "a good life." Is your goal to have a good job? Why? To have nice stuff? Why? Because it will help you have a good life. Is your goal to be elected to school board? Why? Because you'll get respect AND you'll make the schools better for your kids. Why? Because your kids will have a better life. Why? Because your kids having a better life, AND having respect in the community, helps give you a good life.

Aristotle didn't think that he actually knew all the parts of eudaimonia, but he listed a bunch of different things that he thought were part of it, like "physical health", "a comfortable living situation", "respect in the community", and bunches of other things.

Now, for me, one fundamental part of eudaimonia, and perhaps the MOST important part from a political standpoint, is "the right of self-determination." Depending on what you think of as your "self", that can lead to all sorts of political views, from libertarianism ("my property is part of myself, therefore, all infringements upon my property are infringements on self-determination, and society must use the absolute minimum of that as possible") to socialism ("property, at least some kinds of it, are NOT inherent parts of a person's self, which means that society has the ability to use them to create structures that aid other people in gaining their own senses of self-determination, and helping provide people with tools to gain as much eudaimonia as possible").

But, if "self-determination" is the fundamental purpose of life, then it's not even remotely a question. By this definition of "eudaimonia", we're better off being hunter-gatherers scratching in the dirt -- and not even SKILLED hunter-gatherers, but, like, really incompetent, sickly hunter-gatherers -- than being forced into being members of a transcendent hive-mind.

That said, a person can absolutely CHOOSE to be part of a transcendent hive-mind, as part of their own self-determination, and, if I was given the choice between being an incompetent forager and part of a transcendent hive-mind, I might well choose the latter. But forcing anybody into that? Nope, not acceptable.

83:

Are you sure you're not me?

In a group mind the answer would be "Of course we are."
Or something like that.

84:

Heh.
"Being able to read minds would be of limited value without the ability to also write minds, considering all the blank media out there." -- Anthony de Boer

85:

First off, you are discussing genocide.

When billions of people are forced into a collective you end up with a single mind that cannot exist in a single body.

James Schmitz wrote about post-singularity worlds, where a single individual took control of a planet, and when that individual died, the world died. Other worlds learned how to detect powerful PSI individuals like that and co-opt them to work to protect society, so that no single powerful PSI could take over a planet.

Both Michael Swanwick and John Barnes had the Earth as one vast collective mind. Joss Whedon, in his TV series _Dollhouse_, had the world taken over by just a handful of people controlling bodies across the world. His _Serenity_ showed a system that had been seeded with individuals after the group mind realized it had committed genocide; the people in power on the high-tech world were rapidly approaching the same singularity, about to duplicate the same mistake and commit genocide again.

You also have the Cybermen of Doctor Who. Their Human 2.0 was genocide.

Now, don't get me wrong: if someone wants to play god and force a collective on society to stop the destruction of property or bodies, I can understand the rationale, but it is still genocide.

86:

Tell the wankers to gtfo. I've had enough internet time to know that I don't want the collective id of Humanity to have access to my mind that I can't throttle. Also, I've read enough SF to be extremely wary of aliens bearing gifts, particularly ones contingent on political change. Finally, I wouldn't trust anyone who wanted to assemble one to build a group mind; they'd do things to it we'd not like.

87:

I definitely like being air-gapped from my fellow meat computers. There is a good chance we would eventually develop some kind of cognitive disorder, fall prey to some kind of Snow Crash-ish death meme, or just fall into civilization-wide suicidal ideation... though we may be doing that now anyways.

88:

Yeah, that smells like a prelude to some kind of sabotage to me - smells like an ol' fish. Deffo tell the shites to go shove it.

89:

Possible failure modes: We pick one of the Chinese languages (Cantonese or Mandarin probably) as the lingua franca because it's selected as the most widely spoken and read single language.

Mandarin, not Cantonese. Mandarin speakers outnumber Cantonese speakers 15 to 1. More people speak French or German than Cantonese*.

If you're going by total speakers, Mandarin has just over a billion, English just over half a billion, Hindi just under half a billion, and Spanish 10% less than Hindi. Next up is Arabic at about half as many as speak Spanish.

But would individuals need to understand the dominant language to be members of the group mind? If so, then we don't have a group mind, we have several. Would we all end up polyglot because we can communicate with the mind no matter what language we grew up speaking? Would minority languages die out as children brought up in the group mind stuck to the main one?

I guess I'm not seeing this as a nasty situation, unless the alien dictum is "learn language A and join the group mind or be exterminated".


*Sorry for quibble but it really bugs me when people assert that Cantonese and Mandarin are equal. I suspect that a few generations of meeting more Hong Kongers than other Chinese has skewed the perception in Western eyes.

90:
When billions of people are forced into a collective you end up with a single mind that cannot exist in a single body

You believe that. There are just as many counter examples in fiction. (To pick an example that's been discussed elsewhere this week, the telepaths in Emerald Eyes.)

But I'm also going to point out that Charlie didn't really say a single mind -- he specifically said, "render your individual mental boundaries permeable to one another, and allow any other human complete access to your thoughts and memories."

So the question is: do you choose extinction (implied by the "or else"), or do you lose all privacy -- not just in current actions, but in everything you've done or thought? <sarcasm>I truly do not understand what metaphor he's going for here.</sarcasm> 8-)

My take on it? The "or else" means we have to accept. What kind of time-frame are we talking about, however? And what is the actual implementation?

If there's enough warning... I'd predict a rash of murders and suicides. Quite likely a significant portion of the population. If it has to be implemented a person at a time (e.g., each person goes to the privacy-destruction booth, or someone/something visits each one to do something), then I would also expect violence around that.

On the other hand, if it's implemented by the Universal Overlords flicking a switch, with no warning... Hm, I'd expect a rash of suicides. A few murders, but I would expect that to stop quickly.

And there, I lost my train of thought due to a cat. In this setup, I'm sure someone would have been watching and could remind me what I was going to type next.

91:

I echo the other posters who are wondering if it's all a big con by a hegemonizing swarm that arrived slightly before the Culture could make contact. But, if they're the only advanced aliens around at the time of discussion, I'd probably sign up for the group mind -- warily, mind you. It would be a lot harder to sign up if I had children. Maybe by jumping in quickly I get more control over the tenor of the group mind than laggards. Or maybe I just lose myself that little bit earlier.

When the Western missionaries show up with their colorful delusions and their gunboats, you probably can't defeat them in war regardless of their obviously delusional beliefs about life after death, the origins of the world, and the squicky fetish they call a work ethic. And that metaphor actually overstates Earth's capacity for resistance against immortal aliens with FTL. (Or at least it seems to: you have to wonder, if the aliens cannot be resisted why are they giving us even this narrowly constrained "choice" in the first place?)

92:

That's cheating(!)
Given that get-out, where your group_mind is a form of "telepathy" where people CAN read each other in depth, but don't usually do so, except, perhaps in medical & "Criminal" circumstances ( And the latter would be very rare because of ... ) then the answer is YES, right now.
However, if you are assuming a real "Borg" then the answer is "NO" ....

Define the terms of the group-mind before we can make our choice, even allowing for the time/velocity communication lag problem that Dirk posited.

93:

Disagree
We still have wars.
Da'esh are loose, & so is Trumpy.
There are too many "insane" people out there for that statement to be generally true.
Though we are heading in that direction.
Better communication usually brings better understanding.

( Always excepting people who insist on "speaking" obscurely for effect, of course /snark )

94:
At which point we elect Donald Trump as "group mind coordinator"

...which would be about as practically effective as nominating one of your brain cells to be the coordinator of your whole mind.

It would also be slightly harder to get ahead in life as a power-hungry narcissist who holds everyone else in utter contempt, given that your thoughts, feelings and plans would be laid bare for all to see.

95:

What's a group mind, again?
Actually, scratch that.
What is a mind?
Honestly, I'm a mind, and I have no idea.

Sorry if this is a derailment, but stuff like this in SF always falls apart when you dwell on it long enough. There is a tacit assumption that the reader will just go "sure, group mind" and continue reading, but I've been exposed to too much lesswrong.com...

96:

Okay, so a super-duper hive mind comes over and offers an ultimatum.

Why would the aliens even bother with this so-called 'choice' if they could actually go through with it? Is the offer being made in some hope that humans en masse will finally confess to being terrible and flawed, and then repent? Or do the aliens feel that the only way we might change our terrible ways is by being threatened with extinction? Or is this 'offer' being presented because it is an outright fake, or only works if the subject is completely committed to making it work?


Being forced to make an on-the-spot life or death decision is scary. Being forced to choose between two forms of death is sadistic. These aliens are not benevolent.

Nope ... no signature.

Unless the ethics question here is which is more ethical: offering a choice or not offering any choice and forcing a species into mind-captivity in order to save it and other sapients.

97:

I was on my ancient iPhone, which struggles to open one web page let alone a second, and couldn't remember which was the larger of the two Chinese languages. Sorry for pressing your buttons; it was a function of bad connectedness rather than anything else.

98:

As usual the aliens are phrasing things badly - what they should be saying is something like "We're really sorry, there's currently a 2800 year waiting list for sentient species to be considered for membership in Club Borg. You could leave your species details and request preferential treatment, but unless you're in immediate danger of an extinction-level event it's unlikely to be granted. Why, you're not even a group mind yet; how could you possibly hope to hold your own in our society...?"

Then hang up the phone and wait for the humans to come begging for admission.

99:

As for the rest of your comment, I'm not sure you can have a homogeneous world culture without a single language. The example I have the best, albeit really very limited, experience of is English-speaking and French-speaking Canadian cultures, which are similar but distinct. I'm not sure how much of that is the Québécois feeling persecuted and oppressed and how much is them experiencing their country's culture in different ways, but I'm pretty sure there is a chunk of the latter in there.

Within the way I've described my answer to Charlie's question, I think we need more than interpreters to do the job, if our group mind is a single culture we need a single language so we all experience it in the same way.

100:

Once you get past the evolutionary hardwired survival instinct then collapsing the vacuum and putting a stop to all this shit makes a lot of sense.
I vote to collapse the vacuum.

101:

All your questions are irrelevant. Resistance is futile. Join the Borg and gain access to vast amounts of information and power, or stay out ... and don't get any of it.

Uh... if the Galactic Federation is acting to remove an existential threat to the Universe and their entire civilization, and it has an overwhelming advantage in technology and materiel, why are they even giving us the choice? Wouldn't everyone just wake up one morning to discover they were all jolly sub-units of the group mind?

Mind you, the Galactic Federation is going to have its work cut out worrying about all those other galaxies out there, given that this is a Galactic-scale civilization and not capable of spanning intergalactic distances.

Pretty unlikely that every other galaxy avoids being dominated by a civilization formed of rampant individualists, some of whom are bound to play fast and loose with space-time. Even if the effect of destabilizing the false vacuum is initially localized and propagates outward at the speed of light, that's still a death sentence hanging over all those immortal GF citizens, even though it may take many millions of years to arrive.

102:

If you suddenly placed a Big Red Button in front of the billions of adults now alive, with a label that said "Collapse the vacuum and end the universe" how many people would press it?

103:

My response to OGH's question is going to be a bit long. I think I'll recycle it as a blog post on my own blog (if I ever actually launch it...), assuming OGH doesn't mind. I'm not sure what the etiquette is here, so I'll break my response into several still-pretty-big posts rather than a single oh-dear-god-why length post. Here goes:

I would break OGH's question down a bit differently: 1) Why is the galactic federation doing this? 2) Should humans consent to a telepathically transparent society in exchange for the federation's Super-Science? (My answer is a provisional yes.) 3) Would such a telepathically transparent society necessarily constitute a group mind? (My answer to this question is “no.”) 4) What would a species-wide group mind look like? 5) Should we humans consent to forming a group mind that is rational and reasonable enough to be trusted with the federation's Super-Science? (My answer to this question is “hell yes!”)

104:
why are they even giving us the choice?

Maybe they're not actually a hegemonising swarm? Possibly they have some notion that all intelligent races of individuals deserve a chance to sort themselves out voluntarily? Clearly they already have FTL super-science, which probably implies that they could wipe us out on a whim already.

Mind you, the Galactic Federation is going to have their work cut out worrying about all those other Galaxies out there

They could just be the local franchise.

Possibly you could manipulate dark energy to create a local causal domain which isolates you from vacuum metastability events (and possibly other nasty things limited to lightspeed) arising in distant galaxies.

105:

Looks more and more like an obvious scam the further it goes.

106:

Douglas Hofstadter proposed that many copies of our current standard conventional image of a single personality are running concurrently in the minds of everyone acquainted with that personality, and of course most fully developed in the original version growing in the "host" organism. As the boundaries of individual consciousness become permeable to other minds, the various personalities coexisting at different levels of detail among the many hosts would merge, forming full expressions of a personality as seen by all observers, plus the original.

At this point the more dominant personalities would sort out their differences until one or several in cooperation emerged as the guiding, direction-setting thought creators for the collective mind. Pre-borgification personalities would still exist as memories within the hive mind, but no further progress or development would occur at the individual level; they'd be frozen forever at whatever stage they were at when the boundaries dropped. Voluntary individuation could still take place if the group so desired, maybe for entertainment purposes like theater or storytelling, or to run simulations and experiments for pure research into historical recreation topics.

It would be funny if this state could be recaptured simply by wearing a tinfoil hat to act as a mental Faraday cage. Not so funny if they then ran amok, refusing to take it back off again; time to gently exert some collective will and find out what the hell they were thinking before the tinfoil was removed.

107:

What does "rational and reasonable enough" even mean in this context?

The whole premise of the question suggests that ending the universe is possible and that group minds are less likely to push the button than individuals.

How much less likely?

Unless there is only a single mind for the entire universe, there will be disagreements and differing agendas. It's a given that every single one of them is holding a gun to the heads of all the others, and drama theory suggests that too much sanity can be a sub-optimal strategy.

Forcing everyone to join their friendly neighbourhood hive reduces the number of actors but the best case is still an unstable mutually assured destruction nightmare.

Given that, I would probably say "yes" to group mind + tech, because it is better to be a superpower holding one of the guns than one of the little countries that the superpowers play around in.

108:

Why is the galactic federation doing this? This may need to be broken down a bit. What the federation fears is that if enough entities are granted access to the Super-Science, the risk of one entity (regardless of whether that entity is a single Super-Science powered "individual," an empowered group of such individuals, or a group-mind entity) embarking on a cosmic murder-suicide becomes unacceptably high. The galactic federation hopes/has determined that a telepathically transparent society will bring this risk down to reasonable levels. It will do this by allowing the society as a whole to monitor the contents of its constituents' minds, detecting any individuals that are warped, insane, or unreasonable enough to take the entire cosmos down with them. For the sake of our society, the galactic federation, and the universe as a whole, such individuals would need to be comforted, treated, cured, or excised before they could act.

Given this understanding of the federation's motives, I think we can answer certain questions some of Stross' readers/commenters have posed:

Sfreader asks:

“Does every human get a vote?

“What would be a sufficient majority for this Borgification to start? And if only a technical/statistical majority (50%+1) is required, what happens to 'No' voters?”

I don't think the galactic federation would require unanimous consensus, or any legalistic majority. Any time a Super-Science empowered entity comes into contact with an unempowered entity (that could potentially gain Super-Science power given contact with an empowered entity), the empowered entity has two options: 1) take the time to win the unempowered entity over, or 2) eradicate the unempowered entity before it can acquire Super-Science. The longer the lead time the unempowered entity has before it can acquire Super-Science on its own, the more time the empowered entity has to win it over.

What I would expect the galactic federation to do is to get a large enough collective of humans to agree to its terms that the collective, once empowered, has a decent enough chance to convert or eradicate any individual humans (or nation-states, for that matter...) before they could gain Super-Science on their own. The galactic federation would presumably be able to use their own telepathy to audit this collective to see if it is acting in good faith. At that point, the federation would then Empower this collective, and leave it to this collective to mop up the rest of Earth.

PrivateIron asks:
“What if they did not limit the meld to our species? In the interest of global unity, we had to merge with all emergent consciousness native to this planet: giant fungi, the ant and termite networks, etc. What if what was left of the cetaceans refused to join with us and we were brought before a tribunal? Loser does not get to join and faces penalties. Eradication or maybe exile to a new habitat for rehabilitation and/or everyone's else safety.”

At bare minimum, the Terran collective would need to assimilate or eradicate any terrestrial entity capable of acquiring Super-Science given that humans have access to it. Since it might take longer than their life-expectancy as a species for dolphins to learn Super-Science from humans on their own, I think the galactic federation would be okay with leaving them out. But I'd imagine that sharing a group mind with dolphins would be pretty cool, so I think we'd probably bring them in anyway.

Matthew Seaman asks:
“Uh... if the Galactic Federation is acting to remove an existential threat to the Universe and their entire civilization, and it has an overwhelming advantage in technology and materiel, why are they even giving us the choice? Wouldn't everyone just wake up one morning to discover they were all jolly sub-units of the group mind?”

Well, other than basic morality, I would say that a group mind would need to value diversity in its membership – just so long as no member is likely to go on a cosmos-ending murder-suicide spree. Plus, I would say that being unwillingly brought into a telepathically transparent society might well push the needle up on my own personal "end it all" dial...

109:

((Haven't read the comments yet, so maybe this has been brought up already))

Species are not collective actors. The human species is the sum of all humans, nothing more. This question will not be posed before 'humanity' but before some or all humans, and their institutions - states etc.

This means the question is not "will humanity borgify" but "will you, faction X, be one of the individualists, or will you be one of the borg factions hunting down the former." Which is hard to answer without looking around at what the others are up to.

The next question is, how can anyone trust the galactic federation and its promises/threats? For all we know, "vacuum energy" is a term they use because it fits into a gap in our theories; there's no there there, but they want us to borgify because that enables some exploits, or maybe just changes the tone of our chatter.

If no human understands vacuum energy, how can we make the decision?

But the final question is, why won't the federation just make the secret knowledge available to group minds, and group minds only? Species is just an abstraction. The moment some of us borgify, we are no longer the same as you individualists.

Why does it have to be one group mind? Why can't it be several group minds, each above a critical size (to satisfy the criteria of the federation)? When the federation shows up, there will be roughly 9 billion humans, tops. Who says that all sophonts so far have formed a group mind equivalent to that? Maybe one species was only 3 billion; another was 9000 billion, but they were eusocial termite-likes. Maybe humanity needs a certain number of individuals to form a large enough group mind, and the federation will say: 'Breed like rabbits, then borgify!'

What I'm getting at: I see no logical way to demand that all humans join one group mind. One way would be a lower bound on the number of individuals pre-borgification, but then you open other cans of worms.

The least illogical outcome in my eyes would be a scenario like this:
Several group minds form, based on the political or cultural affinities of the individuals pre-borgification. Some of these get the magic info package; others are not deemed enlightened enough (curiously, there seems to be a consensus about this between post-human hive minds and older members of the galactic federation).
Many humans don't sign up for ideological reasons. They don't get the magic info package.
etc.

110:

I call bullshit. As it's currently set up, I'm supposed to assume that a vastly superior race of aliens wants us to voluntarily submit to a group mind so that we won't do The One Thing That Would Destroy Us All. If that's what's at stake, why take the chance? Either forcibly Borg us or destroy us. There must be some reason that they need our consent. So if they need our consent, and the stakes are as high as they can be, then I expect something more than cartoon villain bullshit as a reason to go along. Explain it as best you can, assume that most of us will try to reason it out, or else I assume that everything you say is a lie and I shouldn't give an inch.

Cory Doctorow's piece on lifeboat rules ("Cold Equations and Moral Hazard") seems to apply here. When someone who is in a position of apparent authority tells you that you must do something against your own interests or You Will Surely Die, ask what role that person had in putting you in that position.

I say no.

111:

Reminds me of Theodore Sturgeon's "To Marry Medusa". At least, I understood it along lines similar to what you write here. I'm not at all sure, of course, that that's what the author intended. That the novel was published under another title at one time doesn't help either.

112:

I think I answered this in my own post, which is just above yours :p :D

113:

Should humans consent to a telepathically transparent society in order to gain access to the galactic federation's Super Science and incidentally prevent our being eradicated by the galactic federation out of self-defense?

My answer to this question is a provisional “yes.” I'm assuming here that we're talking about a society that is merely telepathically transparent and does not actually constitute a group mind. (More on what a group mind would need in a later post.) I think that such a society could range from, at best, merely annoying (perhaps no more so than our present society, perhaps better in some ways) to sufficiently dystopian that I'd switch my vote to “fuck it, let the galactic federation end us.” A lot depends on whether the overall ethic of the new culture is what I call “provincial” or “cosmopolitan.”

I'm basing these two terms on "Moral Foundations Theory," which you can find out about on Wikipedia easily enough. The theory claims that when we humans evaluate the morality of an action, it's not a rational cognitive process of applying principles to the situation. Instead, moral judgment is more like perception, and we only use ethical principles to retroactively justify our perceptual moral judgments. Further, the theory claims that we filter our moral judgments through several "color" filters. The current version of the theory posits six of these moral "colors": Care, Fairness, Liberty, Loyalty, Authority, and Sanctity. If a society objects to incest or other violations of sexual taboos, to violations of dietary taboos, or to intermarrying with certain ethnic groups, this is because these acts are perceived as immoral under the "Sanctity" filter of moral perception.

The theory also claims that not all humans or cultures use these “colors” equally. At one end of the spectrum, some humans and cultures place much greater weight on Care, Fairness, and Liberty. The authors of this theory use the label “liberal” for this end, but I prefer the term “cosmopolitan.” At the other end of the spectrum, some humans and cultures place roughly equal weight on all of the colors. The authors call this end “conservative”; I prefer “provincial.” A cosmopolitan ethic is pro-survival in a diverse, multi-cultural society, while a provincial ethic can be pro-survival in a homogeneous culture where everybody is on the same page. (“Dude, don't fuck your sister, that's disgusting!” gets the point across faster than a lengthy seminar on genetics. Same thing with “don't eat pork, it's disgusting!” vs. a lecture about the effect warm sun has on the meat of an animal that shares a lot of diseases with humans.)

I would note that Donald Trump and Da'esh presumably fall on the “provincial” end of this spectrum.

The main problem with a telepathically transparent society is that everybody would have all of their “sins” put on public display, and possibly be subject to public shaming. This would be less of a problem in an extremely cosmopolitan society: they would still point and laugh (at best) at somebody who has a taste for non-consensual sadism (since that would violate the Care, Fairness, and Liberty “colors”) (*cough* Drumpf *cough*), but wouldn't give much of a shit about your extensive mental collection of furry porn.

By contrast, a telepathically transparent society with a dominant provincial ethic would be a hell-on-earth for any deviants.

Fortunately, it would be unlikely that a society that had telepathic transparency imposed on it would maintain and develop a provincial ethic. The more provincial an ethic, the more deviants there will be in the society. Telepathy will root out any hypocrites – those who publicly endorse the dominant ethic but deviate from it in secret. But, in principle, there could be enough "saints" who both publicly endorse and privately follow the dominant ethic to enforce some degree of provincialism.

Still, I think any society that developed after the imposition of telepathic transparency would tend to fall on the extreme cosmopolitan end of the spectrum. (The less extreme the provincialism, the more "saints.") This would also help out re: Not Getting Eradicated by the Galactic Federation Even After We Signed Their Contract. I think a cosmopolitan society might collectively decide to suicide if the alternative is being dominated by an "evil" galactic federation, but they would be less likely to take the "bastards" with them. By contrast, I think a provincial society would be more likely to take a "fuck them all" approach to an "impure" universe. For example, how well do you think Donald Trump would react to a galaxy that is nothing but wall-to-wall Mexicans?

114:

Hello all; this is my very first post here. I'm a social anthropologist and occasional SF writer from Canada.

First of all -- we're screwed. Welcome to the role of indigenous populations in the Americas (and elsewhere) upon the arrival of early Modern period Europeans. This theoretical dilemma really doesn't offer much in the way of real choice, now does it? We either comply, sign the agreement, and accept the "minor cognitive tweak", or bad things are going to happen to us; which is the only rational interpretation of the statement that we really don't want to know the answer to the question "what happens if we say 'no'?" So we are looking at a power relationship here, where we humans do not have any power, nor do we have any actual choice. We only have the word of the galactic federation (or at least their emissaries) that we are going to get all these super-science wonders in exchange for becoming a group mind and signing the agreement.

We are suffering from Too Little Information...

What is the agreement? Is it simply that we consent to becoming a group mind, or is there more? Given that they are "asking" that we agree to group mind transformation and sign the agreement, the latter is more probable. So, what are the terms of the agreement? When will we receive all of this super-science? All at once, the entire package, or in dribs and drabs doled out by the federation? What control do we have over this technology? Do we get the knowledge and theory and equations behind it, or is it just "black box" tech that we can use, but don't understand how it operates? As a group mind, are we permeable not only to our individual constituent consciousnesses, but also to other group minds? What evidence have we been presented that group minds can never be deranged or sociopathic? What evidence do we have that there actually is a galactic federation?

Putting aside all the real issues of culture shock and culture loss from this transfer of technology and this altering of what it means to be human (under our current understanding of the term): how do we know this isn't just a band of free-booting explorers (who have no intention of handing over any super-science), who are only claiming to represent a galactic federation, and are demanding an alteration to human consciousness that, in all probability, would grant them full access to the one thing that a non-contacted species has of value -- our culture(s)?

Of course, in all probability (given that they have arrived here), they probably have the power to take what they want anyway... Like I said, we're screwed.

115:

"Tell the wankers to gtfo. I've had enough internet time to know that I don't want the collective id of Humanity to have access to my mind that I can't throttle."

I think this is a point that we would all have to consider. Given my comment above, would you say that you're more afraid of 1) having some Provincially-minded asshole raid your head and post the results on youtube for general pointing and laughing by other Provincially-minded asshats? or 2) having a firehose of Rule 34 directly aimed at your own Provincial frontal lobes?

I'm self-aware enough to recognize I have some Provincial tendencies myself.

I would point out that this telepathic transparency might have some similarities to Real Names posting policies. These policies have been shown to benefit thugs like the Men's Rights Activists, since apparently those groups are more immune to public pantsing than women who are just trying to make it in a man's world.

Would the ability to carry out Mutual Assured Destruction against the doxxers of the world make you feel any better? Would you limit this to public pantsing, or would you be tempted to zap the bastards out of existence?

116:

An excellent point! I would point out that at least we wouldn't have to worry about them inadvertently wiping us out with turbo-charged infectious diseases, like what happened when Europeans got to the Americas. I have been taking the alleged galactic federation's line at face value, and focusing more on human-vs-human drama. But I think we would still have a choice – do what they say, or (if the galactics are to be believed) be wiped out. It's not a good choice, but it is still a choice. And I think culture shock can be a good thing – just so long as that culture shock doesn't come with a dose of colonization with it. But I am speaking from a position of privilege so take that with a grain of salt.

Let's say that these guys are just galactic joy-riders, here for shits-and-giggles. No real Super-Science, beyond interstellar travel and telepathy. In that case, a telepathically transparent society might still be a good thing. It would be a massive culture shock, but I think our society would be a better, stronger place after it, and better able to resist our galactic overlords, should that prove necessary. As for them stealing/pirating/appropriating our culture, wouldn't that grant us some level of soft power over them? Maybe make them less likely to wipe out the people producing the really cool music or whatever? It would probably be best for us if we could hold out for some kind of copyright treaty or whatever, but beggars can't be choosers.

If we do take them at their word, I think it would be in their best interest to treat us as fairly as possible. What they are proposing to do is hand us poor tribal people nuclear freakin' weapons. Or worse, planet-crackers. Assuming these things work as advertised, it would probably be a stupid thing for them to force us to work on their coffee plantations afterwards. The labor disputes could get ugly.

And I do agree that merely making our society telepathically transparent would not be sufficient to make us trusty cosmic citizens, even if our society is open to auditing by the galactic federation. More would need to be done – but that's for a different post.

You're right, we're probably screwed. But we can at least hope it'll be a gentle screwing, and I think we'd be stronger after it.

117:

The quoted bit and your reply don't really fit what I would have expected a group mind to be like at all. If anything, a group mind would surely crush the impulse to take someone else's information/ experiences and post them in public, because doing so has no value for the mind as a whole.
Doxxers wouldn't have any power because either their impulses would get squashed, or their actions would fit in with a 'greater good' sort of ideal in which sometimes the group benefits from aberrant behaviour being more widely known.

118:

Well, I am assuming that there is a difference between a society that is merely telepathically transparent and a true group mind. A telepathically transparent society could be... tolerable... or it could be a hell-on-earth. I certainly wouldn't trust a society in the midst of transitioning to telepathic transparency with a sharp spoon.

I don't think that even a society that survived the transition would be reasonable enough to trust every single member with an End It All Button. That would require all of the agents in the society to be rational, reasonable, and either content with how things stand or at least willing to go quietly into that good night. I'm not sure if a group mind would be necessary or sufficient to ensure this. But such a society would have to be a literal Heaven on Earth, and a group mind might help with that. (I'll address group minds and Heavens on Earth in a later post, I'm taking an incremental approach towards this...)

With a telepathically transparent society, anybody would be able to doxx you. But, in turn, you could doxx them. So it's mutual assured destruction... unless one of the players truly has nothing to hide. I'm calling these immune players "saints," people who actually follow the dominant ethic. The more provincial a society, the fewer saints there will be, since it will be harder, perhaps impossible, to follow all of the rules.

One way to get a society full of immune players is to make society more cosmopolitan. Another is to keep it provincial, but eliminate or silence anybody who isn't a saint.

Unfortunately, the fallout of Real Names policies suggests that the second option might be more possible than I would like to think.

If somebody decided that they wouldn't join such a society, I could respect that choice. But the question I have is whether 1) you'd be backing out because you're too provincial to accept mind-share with the various pervs that society would be forced to tolerate, or 2) because you think that the society would be too provincial to accept your personal quirks and kinks. Or some other possibility I haven't thought of yet.

119:

Above, I broke OGH's questions down into a slightly different list of questions. On that list, I've gotten to these two:

3) Would such a telepathically transparent society necessarily constitute a group mind? (My answer to this question is “no.”) 4) What would a species-wide group mind look like?

In light of what I've learned in the course of this discussion, I think I'll modify these questions too. Why would a telepathically transparent society not necessarily constitute a group mind? What would a society have to look like before we/the galactic federation could trust giving every single member of it an End It All Button? Would such a society constitute a group mind? Would such a society require telepathic transparency? Would a group mind require telepathic transparency among its members?

I don't think a telepathically transparent society would necessarily constitute a group mind. I believe that in order for a society to constitute a group mind it would need to at least 1) function as a rational agent, and 2) possess a “narrative self.” I'll get into what these terms mean in a later post. I do not believe that telepathic transparency would be enough to guarantee either of these, nor do I think it would be required for either of them. (It might come in handy, and might be a good softening-up procedure, but I don't think it would be essential.)

What would have to be true of a society before every single member could be trusted with an End It All Button? Well, for each member, at least one of the following would have to hold: 1) the member is perfectly content with its present and foreseeable future; 2) if it's not happy about the future, it believes it has the power to make its situation bearable; 3) if it thinks the future will be unbearable, it is willing to go quietly into that good night rather than take the rest of the cosmos down with it; 4) if none of conditions 1-3 applies, then society must be able to detect this and take measures to correct the situation before the member decides to End It All. This correction could involve changing the member's environment so it goes back to being content, changing the member's mind so it's content with its environment, or eradicating the member before it can drop The Bomb.

Telepathic transparency would help with 4), but it would be much better if conditions 1-3 held generally.
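(For the programmers in the audience: the four conditions above boil down to a small predicate. Here's a minimal illustrative sketch in Python; all the names in it, like Member, trustworthy, and society_is_safe, are just my own hypothetical shorthand for the conditions, not anything from the scenario.)

```python
from dataclasses import dataclass

@dataclass
class Member:
    content_now: bool        # condition 1: content with its present and foreseeable future
    can_fix_future: bool     # condition 2: believes it can make its situation bearable
    will_go_quietly: bool    # condition 3: would exit alone rather than end the cosmos

def trustworthy(member: Member) -> bool:
    """A member may hold the Button if any of conditions 1-3 holds."""
    return member.content_now or member.can_fix_future or member.will_go_quietly

def society_is_safe(members, can_detect_and_correct) -> bool:
    """Condition 4: the society is safe if every member either passes 1-3
    or can be detected and corrected (or removed) before pressing the Button."""
    return all(trustworthy(m) or can_detect_and_correct(m) for m in members)
```

The point the sketch makes explicit: telepathic transparency only strengthens the can_detect_and_correct fallback; the society is far more robust if conditions 1-3 already hold for nearly everyone.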

Would a group mind help with this? Well, I think society as a whole would need to function as a rational agent in order to carry out this program. And while a society-wide "narrative self" might not be necessary, it would help. And the narrative selves of its members would need to be taken into account.

120:

Good comments...

To use old 19th century/early 20th century British evolutionary anthropology stages -- for all intents and purposes, for an alien species that has FTL, immortality, etc. -- humankind are savages (or at best lower barbarism); we are a .73 on the Kardashev scale. There are only two reasons for this theoretical galactic federation to contact us: their stated goal (to prevent us from messing things up by "destabilising the false vacuum") or to trade. If the latter is correct, the only thing we have of value is our unique human cultures. A short-lived commodity, because after any deal is made we will have massive culture loss as we acculturate to the culture(s) of the galactic federation and adapt to being a true group mind (the winning answer to this theoretical dilemma, it seems, is that we already are a group mind -- I would say, not quite yet).

I was just raising the question that the aliens could be free-booters, as we only had the info in the original post and no evidence that they actually were representatives of a galactic federation. However, even if we take them at their word, and even if their intentions are benevolent, we are still in an unequal position -- they are more technologically advanced than we are. There is going to be some paternalism, given that situation, or at least some lower rank or some sort of stigma -- we didn't develop FTL ourselves, we were given it. But no humans slaving away on the alien plantations or processed into pet food...

I don't see a group mind as a bad thing (though I am less in favour of a hive mind -- I am a petty bourgeois anarcho-syndicalist with a bias toward retaining some degree of individuality) at all, and that is the direction we are moving in with our technology. Within the next fifty years (or sooner) we will have seamless mind-computer interfaces and we will have a legitimate group mind -- right now we have a foundational-ish group mind.

121:

Yes to both, but more a dysphoria inducing hatebomb than rule 34

Considering that, as you mentioned, the GIFT has been conclusively disproven, yeah, I ain't got no reason to believe that total transparency would stop the shitbeards - fuck, it would probably let them congregate even faster than the internet. Fuck, given how often my kind gets bashed, you might as well shoot me and mine; it's not just doxxing which I gotta worry about, and there's a lot of folks who'd find the pain and misery from their attack on me enjoyable rather than a deterrent.

Frankly, on so many levels, we'd be better off becoming space-sentinelese, at least to this species. This reeks of deniable ratfuck to me; give us a groupmind, watch us destroy each other through our own shitter, move in on the remains under the pretense of helping a victim so that the other cultures can't legally fuck them.

122:

Seems like the objections about "how can we link geographically distant minds together, much less interplanetary/interstellar/intergalactic minds?" missed the bit where FTL was explicitly mentioned.

Since this seems more like the Transcendent end of Baxter's Coalescent/Exultant/Transcendent series than the Coalescent one, hell yes.

123:

As you've already admitted, this is just an analog for society and the way it tries to force conformity on its members, coupled with references to social media providing telepathy-lite.

In which case the group-mind would actually be saying "all your base belong to us, have you seen this adorable spluong video, and what is Cim Cardashian doing now?"

The response is clear: we offer up Kim Kardashian and Kanye West as 'test subjects' to merge with their group mind 'to ensure the process is compatible' - then wait till their life support collapses through the operator looking in the mirror too much.

Independence Day MkII.

Of course, this just points up the fallacy of the 'groupmind' concept, and the reality that collectives tend to devolve to less than the mean. It's akin to Ender's Game and the idea that social networks could be the way for intellectual discussion and agreement on ideas above the surface level - the reality is very different. Which itself then raises the question - how do you create connectivity and 'group' where the 'best' wins out? Answering that question would have immediate applicability.

IMHO the reason capitalism and non-group think works as well as it does is because it's evolutionary in nature. Multiple competing ideas select down through selective pressures to something that might not be right, but does at least explore the niches well. Centralised control might not waste as much, but it can disappear down failure rabbit holes.

Your galactic federation that turns up on our doorstep isn't likely to be of one mind.

124:

"If the latter is correct, the only thing we have of value is our unique human cultures. A short lived commodity because after any deal is made we will have massive culture loss as we acculturate to the culture(s) of the galactic federation and adapt to being a true group mind"

In this scenario, would cultural adaptation necessarily result in cultural loss? I would think a Super-Science culture would be able to resurrect pre-Contact cultures (perhaps in simulations, perhaps in meat-space reservations). You wouldn't be able to trust these cultures with End It All Buttons, of course, and I don't think a Super-Science society would bother. (After all, I think it would be rather unfair to the "resurrectees.") But I would think, that, in principle, it would be possible.

Also, would forcing us to form a true group mind necessarily mean that we would simply acculturate to the Galactic Federation's culture? It could be that the Federation is hoping that we'll do something different and interesting in forming our own group mind, and that's one reason why they're bothering to contact us at all. (As opposed to leaving us in ignorance or eradicating us before we could develop Super-Science on our own...) That different evolution could give us something to offer in trade, even if they aren't interested in our current cultural products.

Not saying you're wrong, just questioning your assumptions.

"However, even if we take them at their word, and even if their intentions are benevolent, we are still in an unequal position -- they are more technologically advanced than we are. There is going to be some paternalism, given that situation, or at least some lower rank or some sort of stigma -- we didn't develop FTL ourselves, we were given it."

Heh... "Who's this 'we,' white man?" One of the things that has been thrown around in this thread is the persistence of "narrative selves" in the context of group minds. Given the End It All Button, the development of the group mind would have to follow certain constraints. It depends on how much value we place on our current identity. There are some alternatives: 1) The group mind would be constrained to allow for the continuation, perhaps in a modified form, of our current identities. 2) Those who do join the group mind will have to surrender their current identities, will have to agree to "death." 3) Those who are not okay with "dying" will need to be eradicated before they can gain access to End It All Buttons.

Same is true for the narrative selves of cultures, of "we"s. "We" might be completely assimilated into the Galactic culture, leaving no Earthly "we" behind. There will only be "us." Or there is still "us" and "them," but the situation is bearable. Or "we" will be eradicated before we can gain End It All Buttons.

If we survive long enough to get End It All Buttons, I think we can count on the Galactic Federation treating "us" well. Assuming that there even is an "us" that can be distinguished from "them" when all is said and done...

What would be the best outcome, do you think? No "we," just "us"? Or there's still "us" and "them," but "they" make the situation as bearable as they can?

125:

Nah, consciousness is more like a merged distributed database. 40Hz is the cycle rate for a sub-section to sense --> compute --> react --> queue for integration. As bits of the hive come into contact (say within ~1000km) they kick off a consciousness merge and narrative synthesis session.
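For flavour, that merged-distributed-database picture could be sketched like this. A toy, not a proposal: Node, tick, maybe_merge, the one-dimensional positions and the "synthesis by concatenation" are all invented for illustration.

```python
from dataclasses import dataclass, field

TICK_HZ = 40  # the claimed sense -> compute -> react -> queue cycle rate

@dataclass
class Node:
    """One sub-section of the hive: keeps a queue of events awaiting integration."""
    name: str
    position_km: float                      # 1-D position, purely illustrative
    queue: list = field(default_factory=list)
    narrative: list = field(default_factory=list)

def tick(node: Node, observation: str) -> None:
    """One cycle: sense/compute/react locally, then queue for integration."""
    node.queue.append(observation)

def maybe_merge(a: Node, b: Node, range_km: float = 1000.0) -> bool:
    """When two sub-sections come within range, run a merge and
    narrative-synthesis session: interleave both queues into one
    shared narrative, then clear the queues."""
    if abs(a.position_km - b.position_km) > range_km:
        return False
    merged = a.queue + b.queue   # 'narrative synthesis': here just concatenation
    a.narrative += merged
    b.narrative += merged
    a.queue.clear()
    b.queue.clear()
    return True
```

Two nodes within ~1000km end up holding the same narrative; a distant node stays unmerged until it wanders into range.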

126:

My, my, my.

I do believe y'all missed the point by a mile here.

If you want irony, it's a load of Single Mind Models horse-trading over potential position post Hive.

If we are to be banal: the entire basis of localized politics, tax havens and hatred due to Othering is a lack of information.

That ceases to exist.

Your Mind is shaped by the Information it gets: more importantly, it's shaped by the models of interpretation and expression of said information.


So, immediately, 86.2% of the posts here simply cease to matter.

You don't hate X, you understand X.

You don't hate Y, you understand Y.

And, yadda, yadda, GRANDFATHER HEGEL comes along and you form new mental schema.

Zzz.


#1 Your Mind is instantly not your Mind once this occurs.

#2 The old AI-Single-Consciousness-ID/EGO issue. (Most recently addressed in Neal Asher's series. The Technician is under-rated in what it's actually doing/saying, btw - bit miffed it was ignored, although understandable given the dip in quality post-Skinner / Spatterjay).

#3 CTRL+F - ORCA. CTRL+F - OCTOPUS. CTRL+F [Redacted]

NoT FoUnD.

Sigh.


~

Anyhow, it already happened and you all already decided to live in pig shit and watch the world die.

And the set for the UK is the dismal COPY/PASTE semi-industrial park / entertainment / cinema that's all over the place.

The World's End YT: Film: 3:05


The USA chose Old-Skool Biblical style.


~


So Be It.

127:

Alternatively: something like the Mind in Transcendent, or even a true utopia like the Culture could provide. Just because you are part of "us" doesn't mean you have to stop being "you", it just means "we" can be "you" and "you" can be "me" if you choose. Understanding what the perspective of a mind more powerful than your own might be is not really something we're good at; most of our best efforts at this I would say fall under the category of sci-fi.

Having a higher bandwidth connection to the universe in exchange for not being destroyed seems like an easy choice to me... but I liked the depiction of the Transcendence from the inside, maybe that's just me... er... us?

128:

Ah, yes.

That word missing:

BEING.


129:

Well, after giving the idea some thought, I guess I have an answer for OGH: Turn the Galactic Federation down, but not for the reasons you think.

1) Our present society simply cannot be trusted with an End It All Button.

2) *Before* we could be trusted with an End It All Button, we would need to transform our present society into a literal Heaven On Earth. *Every* single member of our society would need to fit into one of several groups. First, the member could be perfectly content, well-fitted to its present environment and confident that it will continue to be well-fitted to its environment for the foreseeable future. Second, there is a mismatch between the member and its environment, or the member believes that there is a significant risk that such a mismatch will occur in the future. In such a case, society must be able to correct this mismatch: a) by modifying the member's environment, b) by modifying ("curing" or "treating") the member (either preemptively or with the member's consent), c) by convincing the member to peacefully suicide, thus eliminating the mismatch, or d) by preemptively eradicating the mismatch and the member before the member can activate its End It All Button.

3) A telepathically transparent society would not be sufficient to bring about a Heaven On Earth. It *might* be a decent first step on the way there, since it might prompt the development of a society with a cosmopolitan ethic, and eliminate any individual unwilling to tolerate a truly cosmopolitan society. A cosmopolitan society would be a closer step towards a Heaven On Earth, since cosmopolitan citizens are less likely to find the continued existence of certain neighbors a reason to push their End It All Buttons. However, a telepathically transparent society might also be able to retain a provincial ethic, and such an ethic, combined with telepathic transparency, would produce a dystopic society for any and all deviants.

4) A true group mind might not be necessary for the development of a Heaven On Earth, but it would probably be a good step on the way there.

5) Telepathic transparency would *not* be required for a group mind, for reasons sufficiently abstruse that I'm reluctant to bore everybody with them. Telepathic transparency between members might not be desirable either (being forced to drink from the Rule 34 firehose, the cognitive burden of auditing others, the risk of somebody still managing to maintain secrecy despite the Panopticon, the dystopia of a Panopticon unless you're a "saint," etc.).

6) Given this, we should reject the galactic federation offer of Super-Science. We should accept any help they are able to offer for the task of developing a Heaven On Earth, assuming that this help does not greatly accelerate our own independent development of an End It All Button.

7) This help may well include deploying a telepathically transparent society. If so, then we might as well take them up on it. It'll give us something to do while waiting for somebody else to End It All.

8) Given the Galactic Federation's poor sales pitch and their not already understanding the above, are we sure we can trust them with an End It All Button? If not, well, at least "give me liberty or give me death" is still a live option...

130:

Our present society simply cannot be trusted with an End It All Button.


You're kinda missing Reality here.

Global Thermo-Nuclear War has been a set standard for 70+ years now.

Even GW #2 didn't nuke the Middle East.


What you're missing is the entire:

"I HAZ NO IDEA WHAT ECOLOGY IZ AND I DO NOTZ BELIEVE THAT DINOSAURS EXIST OR THAT USING THE STORED ENERGY POTENTIAL OF TWO BILLION YEARS OF LIFE (WHICH IS ITSELF A COMPLEX PROCESS FROM THE SUN AND PLANETARY CORE MECHANICS AND URANIUM AND SO ON)...


SO IT DON'T COUNT AS A BUTTON, #YOLO"


Sigh.

~


Sorry, very tempted to take the Mask off soon - I'll leave you to the visions of what-could-have-been.

You burnt 2,000,000,000 years of evolution for fucking McDonalds toys.


That makes you Cunts.

131:

As had already been suggested, human civilization is already a group-mind to some extent.
I would argue that religion is just the sort of group mind we are talking about, and we have already faced this offer before.
Whether Jews facing the Spanish during the Inquisition, or the current fundamentalist strains of Islam demanding conversion or death.
Stalinist Russia and Maoist China were other examples.

I don't see much value in a hive mind if it leads to group-think. We know the results have never been good once diversity has been removed from society.

So unless the penalty is the death of our race or worse, I say "no thank you". Taking it would be accepting a bad deal only to stave off a worse one.

132:

Dude who doesn't understand Science or the premise.

And yes, ffs.

DUDE.

THE FAILURE STATE IS DEATH OF YOUR SPECIES.

~

Like, seriously.

If you hit 4°C, you're looking at a 4 billion kill zone.

You hit 6°C, you're looking at the upper reaches of 80-90%.

You hit 8°C, it's game over.

And the argument against a Hive Mind here is that you might feel a little less fucking special and snow-flake?

~

Whelp.

I'd say it was a pleasure, but it wasn't.


Clouds in the Sky and all that.

133:

See comment #2 from Charlie.

134:

I demur.

Kick-ass comment involving The Scar and so on.

~

Self-ban, 72 hrs.

135:

Sounds like you're assuming the Sapir-Whorf Hypothesis is correct? (At least a weak version.) I'm not so certain — I've seen wider differences than English/French Canada within the English-speaking world.

If a hive mind is essentially a superorganism, then maybe it functions more by affecting our subconscious. When a disaster strikes, a bunch of people who can help decide they should — no conscious order, any more than you consciously decide which cells to contract to type a post here, but your post gets typed anyway.

Which makes the hive mind more of an entity that can reliably nudge behaviour rather than a mutual panopticon. Which may not be the story Charlie is trying to tell.

136:

Maybe we need to work this the other direction, as in "How would a practical group mind function and what would the results look like?"

I'd note the following:

1.) A practical group mind needs the ability to opt out of certain experiences, at least some of the time. As I noted above, feeling the agony of some horribly injured person while you're driving is a possibly lethal event. Thus a practical group mind involves filtering of some kind. This means that the "router" which connects me to another person needs to understand what "driving" is, plus the difference between driving and being a passenger, so that router has to be pretty smart and be able to track multiple people at once. (You can substitute some other complex task with high personal safety requirements for driving if you'd like - we don't want pilots, machinists, chemists or doctors feeling someone's agony/orgasm at inopportune times.) The router(s) will be amazingly smart, probably at AI level.

2.) There will also be times when we are hyper-aware of other people despite ourselves, like when something we do hurts somebody else. Somehow this will never happen while we're driving/operating heavy machinery. Maybe we'll get out of the car and get a download of everything we didn't feel. (The routers are really, really smart.) We'll always feel the other person if we are the cause of their hurt. Or maybe if they believe we are the cause of their hurt... are the routers smart enough to know the difference?

3.) No one person will be aware of all other people. Instead, we'll be connected to certain groups of people, perhaps with a random sampling of some others. You'll have a friends/family group, a work group (if we still have employment after we create our group mind), a group composed of the dozen people nearest to you, whatever that means, and the group of people you're romantic with - and that will be quite enough information to process!

4.) There will be some kind of permissions system for the other people in your head, including read, write, and execute access. (Execute will basically be possession.) Someone mentioned this above, but I forget who.

5.) There will be something like a "volume" or "depth" control, which will determine how deeply we get into someone else's mind/thoughts. This will be tied into the permissions system above. This will involve everything from a basic awareness of another human being, through empathy, telepathy, and to the point of possession.

6.) We will not be telepathic while we sleep except for emergencies. (Once again, the routers are hella smart!)

7.) Somehow this system will be safe for children. (I also have a bridge for sale.)

This is the sanest, most practical form of "hive mind" I have been able to design, and it still seems pretty horrible. But since Charlie provided an important condition...

8.) Since everyone must be sane to avoid starting Armageddon, and this happens because we are empathetic with others, your psychological defenses will optionally be switched off by the routers, which are much smarter than you.

9.) Understanding your firewall and access-control systems are the key to not being miserable under the new system. Please do not mess with the defaults until you have read the Firewall-nomicon.
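To make the design above concrete, here's a minimal sketch of the router's filtering policy, combining the depth/volume control (point 5), the permissions system (point 4), the risky-task muting (points 1-2) and the sleep cutoff (point 6). Every name here -- Depth, PERMISSIONS, RISKY_TASKS, allowed_depth -- is invented for illustration, and the real routers would of course be vastly smarter than a lookup table.

```python
from enum import IntEnum

class Depth(IntEnum):
    """The 'volume'/'depth' control from point 5: how deep a link goes."""
    AWARENESS = 1   # basic sense of another human being
    EMPATHY = 2     # feel their feelings
    TELEPATHY = 3   # read their thoughts
    POSSESSION = 4  # execute access: operate their body

# Point 4's permissions, expressed as the maximum depth a peer may request.
PERMISSIONS = {
    "stranger": Depth.AWARENESS,
    "friend": Depth.EMPATHY,
    "intimate": Depth.TELEPATHY,
    "emergency": Depth.POSSESSION,
}

# Complex tasks with high personal safety requirements (point 1).
RISKY_TASKS = {"driving", "piloting", "machining", "surgery"}

def allowed_depth(relationship: str, task: str,
                  asleep: bool, emergency: bool):
    """Return the permitted link depth, or None for 'link muted now,
    queue a download for later' (points 2 and 6)."""
    if emergency:
        return PERMISSIONS["emergency"]
    if asleep or task in RISKY_TASKS:
        return None  # muted; the router delivers the backlog afterwards
    return PERMISSIONS.get(relationship, Depth.AWARENESS)
```

Even this toy shows why the routers need to be AI-grade: the hard part isn't the lookup, it's knowing whether you're "driving" or a passenger, and whether this really is an emergency.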

137:

...render your individual mental boundaries permeable to one another, and allow any other human complete access to your thoughts and memories.
As described, this is just enabling full access to anyone interested. Whether a "traditional" group/hive mind emerges is immaterial. So the easy answer is yes. That it is structured as an ultimatum makes it an easier decision, but that doesn't really matter. We're already headed in that direction, with only privacy technologies like cryptography being technological barriers to all-to-all communication. (Have always felt ambivalent about crypto for that reason.)

There would still be variation (a good thing!), even into the future, due to differences in experience and in mind style and capabilities. What would be the effect of accessing the mind of somebody much more intelligent? More moral? Different gender? Expert in something fun? I would enjoy those accesses. As others have noted, there is a question about how this would affect child-rearing - we'd need an answer for that at least. Also, languages would probably be an issue.
I was recently grousing obtusely in another thread about the slowness of large nation states like the US at making necessary changes. (Riffing on a comment by NN.) One big reason is that communications between arbitrary pairs of members of a large society are approximately nonexistent, so people think locally and have very poor and slowly-updated models of what non-local members of their society really think/believe/know. All-to-all communication on demand reduces this effect, reduces the effective size of the society (on all scales) a lot, apparently enough for the Galactic Federation. Ideally, enthusiastic (and non-deluded :-) minds could be much more infectious, and desirable change much more rapid.

Thanks CS for the question. Hadn't thought it through previously.

138:

Point one. This mental tweak destroys government secrecy. Probably, it destroys governments, as we know them.

Point two. Only governments have the resources to ruthlessly enforce the tweaking process, globally.

Conclusion: we're so screwed.

139:

To marry Medusa YES !

140:

VERY VERY loaded nomenclature there.
"Saints" ??
DO YOU REALISE what utter total complete egocentric controlling bastards most saints are or were?
See also "rule of the saints" - usually referring to ultra-"protestant" communities like Calvin's Geneva & presbyterian Scotland.
Shudder.

Be very careful what you wish for.

141:

NOT EVEN WRONG
You don't hate X, you understand X.

NSDAP?
Da'esh?

"Understand" - possibly enough to want to kill them all .....

142:

"That makes you Cunts."

Sorry, but isn't this contrary to the moderation policy?

143:

I guess I am. Like you I've seen big differences within English-speaking cultures too. For a small (but decidedly not trivial) recent UK-based example you only have to look at how the news of Margaret Thatcher's death was received. Everything from burning effigies and "Ding Dong the Witch is Dead" hitting number 1 to Gideon crying at her funeral and the uncomprehending outrage of the grandees that did well from her time in office at the hate she still evoked.

But I think some of those differences will linger, if not that particular example. The world-view of a man who has grown up in Nepal and never seen the sea for himself will be different from the world view of a woman who has never left Raoul Island (New Zealand's most northerly inhabited island, with an area less than 30 sq. km, rising 516 m from the surrounding Pacific Ocean), even if they both speak the same language as natives. If they don't, and have that as an extra barrier to communication, those differences will be even bigger.

How will that affect the world culture version of a "group mind?" Not sure but not positively I think.

145:

Yes.

The thing I'm giving up is privacy. Privacy is currently important to me because of a set of very British, Victorian-era values that make it embarrassing to disclose details of my finances, medical conditions, sexual preferences and opinions on people that are different to me.

In this new world, money is unimportant, medicine is perfect, my sexual preferences are going to be similar to a lot of people and no-one will be that different from me.

It makes me think about paedophile rings operating on Tor. There are a lot of them, operating in small, unconnected groups. This is like drawing all of those groups together into one fairly large group, and also inviting all of the people that might have an interest but don't know how to do anything about it. Is it still a problem when they constitute a statistically measurable portion of society and everyone else can see what they are doing?

Also, I'm thinking about the question based on my opinion right now. A few generations down the line living without the group mind would be like living off-grid today. It's possible, but really it's a difficult existence reserved for the mentally ill.

146:

"This is the sanest, most practical form of "hive mind" I have been able to design, and it still seems pretty horrible."

Interestingly, I was talking about something very like this last night, and I think that you are barking up the wrong tree. You are thinking in terms of deterministic programming, rather than the emergent properties of a stochastic system. I can get the glimmerings of how such a mind could work, but it's foully difficult to put into words (or even mathematics). If we assume the way that existing group entities (honeybees, many ants, jellyfish, and human societies) work, we end up with something where evolution is incredibly slow and innovation next to impossible.

But, if we assume a looser group entity, where the optimising measures are probabilistic, we would get something a lot more interesting. Basically, dysfunctional subgroups would be allowed to arise, but would be eliminated by something rather like an immune system. Only if they had enough advantages to fight that off would they last very long or spread. However, this area is still very poorly understood (whether in computational theory or immunology), and we have no idea how to design a stable system.

147:

I don't recognise your model of the group mind but I can play in paradigm.

1) Is relatively simple, at least to describe. You simply require variable awareness of others and a mean level for everyone over a set period (say a week). A pilot or a surgeon can tune right down so they can concentrate at the critical moments, but must spend other time more connected than, say, a typical house-husband, to retain their citizenship. They can still be monitored by others. This rule should catch the psychopaths who want to be disconnected. This is simple to achieve by making it an active engagement process rather than a constant thing. I still actively engage with blogs, games etc. Why does a technological hive mind have to be always on for me?

Is 2 a "punishment", a form of forced empathy, or what? I'm not sure why it's there. I think some people will be drawn to those they hurt as a means of atonement, some will want to be there to gloat. Some of those hurt will want to see into the minds of those that hurt them, to understand why and how, and seek to forgive them. Your rule needs to be thought through more IMO.

3) Ok.

4 & 5) Why does the hive mind require more than read permissions?

6) Kind of irrelevant if you actively engage with the hive mind surely?

7) Entry is by proving you're a sane, mature individual. No children involved.
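The mean-level rule in (1) is simple enough to state in code. A sketch under invented assumptions: hourly connection-level samples in [0, 1], and a 0.3 floor on the weekly mean (both numbers are mine, not the comment's).

```python
from statistics import mean

# Invented threshold: the required mean connection level over the window.
ENGAGEMENT_FLOOR = 0.3

def retains_citizenship(hourly_levels, floor=ENGAGEMENT_FLOOR):
    """The rule from (1): you may tune your awareness of others right
    down to 0.0 for a surgery or a flight, so long as your mean level
    over the whole period (say a week of hourly samples) stays at or
    above the floor."""
    return mean(hourly_levels) >= floor

# A surgeon's day: fully disconnected for 8 hours of theatre,
# well-connected (0.6) for the other 16 -- daily mean 0.4, so fine.
surgeon_day = [0.0] * 8 + [0.6] * 16
```

The psychopath case falls out for free: someone who sits at near-zero connection all week fails the floor no matter how they schedule it.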

148:

I think you want to look at ecosystems and interacting populations, which we do model somewhat successfully I think, rather than modelling the immune system for your stochastic system.

Novel groups, either in the form of new mutations within a species or a new species coming into an environment, arise, and the ecosystem adapts if they're a viable and successful change; they go extinct if they're not.

The biota in an ecosystem change fairly rapidly and dramatically in response to these disruptions.

149:

You misunderstood me. The systems that we can model successfully are all fairly simple, and we haven't yet got a handle on how to predict the behaviour of a new variant. I was describing a 'higher level' version of such systems, which we can just about imagine. What I was saying is that any such group system needs an equivalent to the immune system in order to allow variation, but to prevent variant subsystems from taking over. Not that the immune system is a suitable model for the group system.

The point is that our bodies have to allow growth and variation, in order to recover from damage and to adapt to new requirements, but those are also what happens in cancer. So a major function of our immune system is to maintain the balance - too much, and we get autoimmune diseases; too little, and we get cancer. Humanity, considered as an organism, has gone too far in the latter direction.

150:

"In this scenario, would cultural adaptation necessarily result in cultural loss? ...Also, would forcing us to form a true group mind necessarily mean that we would simply acculturate to the Galactic Federation's culture? It could be that the Federation is hoping that we'll do something different and interesting in forming our own group mind, and that's one reason why they're bothering to contact us at all."

I am 90% certain that there would be massive cultural loss (we are already experiencing cultural loss in our present quasi-global, yet dysfunctional, civilisation and this would only undergo rapid acceleration in this first contact scenario). The super-science is a product of the culture(s?) of the galactic federation -- its norms and values, forms of creative expression, science, philosophy, etc. -- and is embedded in that cultural matrix. Adopting the goodies results in the adoption of that galactic culture, in whole or in part, and if we seek to actually understand the technology itself (rather than simply use it) that will involve further acculturation by our species. I do agree with you that there could be some cultural preservation, that not everyone will want to adopt these new technologies (e.g. Old Order Mennonites in our current society), and there will be some new subcultures/cultures that form based on the introduction of this tech; however, most of humankind will jump on the bandwagon and adopt the new technologies and the culture that comes with them. Humans would probably also make some unique cultural innovations within the context of that galactic culture. So far, I am only talking about the potential impact of galactic super-science introduction to human cultures...

Humankind becoming a group mind is a whole different thing, though. What type of group mind? If it is the type of group mind that would "allow for the continuation, perhaps in a modified form, of our current identities" and for temporary sub-groups within the group mind (like teams) that are also limited or constrained, then yes, we could still have multiple subcultures and there would be no complete acculturation of our species -- but there would still be large scale acculturation. Thus it would still be possible to be an Amish farmer and part of the species group mind. That said, just becoming this sort of group mind (even without the galactic culture and technology) would result in a more rapid homogenisation and the creation of a single dominant human culture (with some new and old subcultures within it). It would also make previous human culture appear quaint and almost alien within a couple of generations. Being a group mind changes the entire human experience and the culture that it exists in. With some retention of the individual there would still be "I"s, "we"s and "us" -- depending upon the situational context. Of course, this seems to be what the aliens wish to have us avoid, as it could lead to deranged "splinter groups", so the group mind is probably...

A hive mind -- just one big collective "I" that is the human species. However, the fact that the aliens continue to use the pronoun "we" would lend support to the notion that the galactic culture is not a single Overmind-esque hive mind, but a federation of hive minds.

151:

Greg, I think she's referring to the whole human race, and misanthropy is not forbidden by the moderation policy.

152:

What type of group mind? If it is the type of group mind that would "allow for the continuation, perhaps in a modified form, of our current identities" and for temporary sub-groups within the group mind (like teams) that are also limited or constrained; then yes, we could still have multiple subcultures and there would be no complete acculturation of our species

I should hope for something where cultural as well as personal identities are preserved. One thing to keep in mind is that if all of Humanity were taken into a Group Mind it would be mostly non-white and non-Christian*. A Good Thing, IMO. Hopefully it would have a positive effect regarding cross-cultural understanding.

*Welp, there goes a portion of the US left out.

153:

So a group of beings with superior technology comes to our land and tells us to embrace their faith or else...

In a sense we're already there but there are groups of hiveminds right now. So which parts does the new and improved hivemind integrate?
Science?
Belief in afterlife?
Earth is holy and we won't hurt it?
Individualism? ;)

The hivemind is basically a negative feedback loop: it dampens impulses. It wouldn't need to be particularly fast in reaction speed. How often does the population of a whole continent have to react in a second? At the speed of seasons would be quick enough.


"Immortality, faster than light travel, the tools to build AIs, cures for all your illnesses and a working theory of economics that abolishes poverty and war and provides as much wealth as anybody wants" are all about wants for individuals. The hivemind might conclude that it doesn't need FTL, AI, wealth and all that. It might decide that it is better to do away with those wants and be happy sitting in the sun and smelling the flowers and doing just enough to maintain its existence. Diseases? All members of the hivemind will be happy to exercise and not sit around watching screens. They also won't drink beer... THAT'S IT, I'M OUT!

154:

Why does a technological hive mind have to be always on for me?

In practical terms, it can't be always on. That's my point.

Is 2 a "punishment" a form of forced empathy or what? I'm not sure why it's there? I think some people will be drawn to those they hurt in means of atonement, some will want to be there to gloat. Some of those hurt will want to see into the minds of those that hurt them to understand why and how and seek to forgive them. Your rule needs to be thought through more IMO.

My number 2 is as much a question as it is an answer, with a little unavoidable sarcasm thrown in. 2 was a later addition and I was starting to see how ugly the system could get. That being said, I think the idea is not "punishment" as much as the idea that when you hurt someone you need to know it, and you need to understand how, so you don't do it again.

4 & 5) Why does the hive mind require more than read permissions?

Because it becomes very easy to move expertise around the planet if you have write and execute permissions. For example, if you're driving down the highway and see a bad accident with injuries, you invite a doctor to operate your body. Or your expertise is needed and someone else invites you in. Write permission is something you give to your closest intimates (and possibly instructors?) and execute permissions are for emergencies.

6) Kind of irrelevant if you actively engage with the hive mind surely?

The last thing I want is to dream someone else's dreams, or get disturbed by my neighbors making love in some fashion that turns me off (mental rape - ick!) while I'm sleeping. The connection gets disconnected while I sleep. Period.

7) Entry is by proving you're a sane, mature individual. No children involved.

Or some version of "kids connect to each other, but can't connect to adults." Or maybe parents have read access to their own kids so they can monitor, but kids can't hear parents. There are a lot of possible variations on this one.

A more intelligent version of this whole thing is that the aliens give us the technology, both theoretical and with practical samples, and encourage us to find our own solution to the problem of building a "hive mind." I have far fewer problems with that approach than with buying an "off-the-shelf" hive mind from the Federation.

155:

"In practical terms, it can't be always on. That's my point."

In your model, yes. In the one I am envisaging, it is always on, but that does not mean that it will take action; that is controlled by whether your particular behaviour or thoughts trigger its police/immune reaction. You are merely one cell among many, in terms of the entity that is humanity, despite being an individual person.

156:

I'm not thinking of deterministic programming so much as routing, which depending on the protocol used can be very responsive to changing conditions. The dozen people physically nearest to you would be your "first hops" or maybe "default connections" in routing terms, giving you a good gestalt of what's happening in the general area, including their views of people who are not your "first hops." Naturally, this would be constantly and (semi) randomly changing.

If you developed a close relationship with someone, they might also become a "first hop," regardless of distance, or if you break up with a romantic partner, they might stop being a "first hop." Generally family members would be first hops... but the system would be very dynamic and ever-changing.
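The "first hop" selection described above could be sketched roughly like this; everything here (names, the threshold, the distance units) is hypothetical, just to make the routing analogy concrete:

```python
# Illustrative sketch, not from the comment: pick default connections as
# the k physically nearest people, plus anyone with a strong enough
# relationship regardless of distance. All parameters are invented.

def first_hops(others, bonds, k=12, bond_threshold=0.8):
    """
    others: {name: physical distance from me}
    bonds:  {name: relationship strength in [0, 1]}
    Returns the set of "first hop" connections.
    """
    # k nearest neighbours by physical distance
    nearest = sorted(others, key=others.get)[:k]
    # close relationships override distance (and drop out when the
    # bond weakens, so the set stays dynamic)
    intimates = [p for p, s in bonds.items() if s >= bond_threshold]
    return set(nearest) | set(intimates)
```

Recomputing this as people move and relationships change would give the constantly shifting, semi-random mesh the comment describes.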

Once again, I think the important thing would be to develop our own version of the "hive mind" rather than buying one off-the-shelf from an alien civilization.

157:

If they are capable of saying "in all identified previous cosmoi" they presumably have enough experience to draw upon that there'd be no sense in them asking. They'd take a look at us, run through some checklist of behavioral traits and mental capabilities and conclude whether humanity could hive-mind or not. If we could, they'd simply introduce it because they would literally know what was best for us. If we couldn't, we'd never know that the rock they dropped on us wasn't simply bad luck.

I'm curious how one would hack a hive-mind. What would an insurgency look like?

158:

I too hope for there remaining some level of personal identity and culture within the group mind.

I completely agree though, whether it is a group mind that preserves some aspects of personal identity or a hive mind where those identities are lost; the entity will not have a predominantly Western worldview and it will certainly not have an American one. ;)

159:

Certainly many saints were designated as such because of their support for the policies of a corrupt imperialist political organisation masquerading as the mediators of divine authority. But I think the idea of "saint" as a "very loaded" term depends very much on personal viewpoint. I still default to interpreting the word in its idealist sense, despite being aware that many so-called saints fell far short of it; in context, I tend to take it as it's meant, which is usually somewhere on the scale between the two.

Having said all that, I think the "rule of the saints" concept is a possible failure mode for some of the directions in which the proposal has been developed - although not necessarily the most likely one.

160:

Maybe I should say, "In practical terms, there must be times when I'm unable to hear it."

Example: Steve is hetero and prefers vanilla sex. He is trying to sleep. The Gay, male+male couple next door is having S&M sex. Steve would really prefer that his sleep not be disturbed by nipple-clamps and fisting...

Meanwhile, the Gay couple next door are being intimate, and they don't want their neighbor to solve the problem of being disturbed by their mental noise by downloading their appreciation for Gay, BDSM sex and becoming a mute participant in their most intimate moments.

Maybe this issue completely evaporates under the conditions of a hive mind (My neighbors are having sex? That's great!) and maybe it doesn't, but we can't know until we try it out, then make the necessary tweaks. Thus HiveMind 1.0 includes the ability to sleep without being disturbed by the neighbors.

As a personal note, I can drive up to 500 miles in an ordinary work day, (though 300 miles is more likely) so I really appreciate my rest, and get very angry when something disturbs my sleep. This is a matter of simple self-preservation.

161:

"Write permission is something you give to your closest intimates (and possibly instructors?) and execute permissions are for emergencies."

This is what I have in mind too. Normal access would be universal, read-only access. The exceptions (write/execute)[1] would be very carefully granted by the mind instance associated with a single brain (brought up as a singleton prior to the change, at least initially), ideally on a temporary basis with periodic renewal required ("leases"), and would be entirely voluntary.

In the fullness of time (which might only be a few months or years) we would develop enough trust that write/execute permissions would be granted more routinely.

[1] There are other schemes for access control, notably capability-based security, that might be a better fit.
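A minimal sketch of the lease idea, assuming a simple (peer, permission) table with expiry timestamps; the class and method names are illustrative, not a spec:

```python
import time

# Hypothetical sketch of the lease scheme described above: universal
# read access by default; write/execute must be explicitly granted and
# expire unless renewed. Everything here is invented for illustration.

class MindAccess:
    DEFAULT = {"read"}  # everyone gets read access unconditionally

    def __init__(self):
        self.leases = {}  # (peer, perm) -> expiry timestamp

    def grant(self, peer, perm, ttl, now=None):
        """Grant 'write' or 'execute' to a peer for ttl seconds."""
        if perm not in ("write", "execute"):
            raise ValueError("only write/execute need explicit leases")
        now = time.time() if now is None else now
        self.leases[(peer, perm)] = now + ttl

    def renew(self, peer, perm, ttl, now=None):
        """Periodic renewal: just re-grant with a fresh expiry."""
        self.grant(peer, perm, ttl, now)

    def allowed(self, peer, perm, now=None):
        """Read is always allowed; write/execute need a live lease."""
        if perm in self.DEFAULT:
            return True
        now = time.time() if now is None else now
        expiry = self.leases.get((peer, perm))
        return expiry is not None and now < expiry
```

The capability-based alternative mentioned in the footnote would replace the central table with unforgeable tokens handed to peers, but the expiring-lease behaviour would look much the same from the outside.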

162:

"You burnt 2,000,000,000 years of evolution for fucking McDonalds toys.

That makes you Cunts."

And would any degree of collectivisation of consciousness make it any better? I rather doubt it.

Some people understand the problem. Most don't, or don't really care about it, or purport to care about it while demonstrating by their actions that their real concern is with acquiring more fucking McDonalds toys. And the majority view prevails, so more fucking McDonalds toys is what we get. Along with the idea that every problem can be solved by restating it in terms of fucking McDonalds toys.

Human minds are very good at coming to the conclusion they want to come to and finding ways to dismiss contrary evidence even if it's staring them in the face. If increased communication would help counter that, we should have already seen a step change in attitudes with the expansion of the internet. What we've actually seen is that providing a population with internet access accelerates their conversion to the religion of fucking McDonalds toys. It seems to me that one effect of the advent of communication that does not require technological gadgets to work would be the further acceleration of this self-destructive tendency, and the most likely thing to stop it happening would be the similar acceleration of other self-destructive tendencies that act more rapidly.

Consider the tendency of aggressive and destructive behaviour to spread through large crowds, as people lose their inhibitions against such behaviour just by seeing other people indulging in it, and so the crowd becomes a riot. It seems to me that if (using Charlie's minimal "Belcerebron" scenario) you were able to actually feel other people's disinhibition and their delight in it, with all the intensity that comes from being among or near hundreds of people in a highly emotionally charged state, the rioting would spread enormously faster and attain much more destructive levels.

And it would start more or less spontaneously in thousands of places. Society only holds together at all because when you think someone's a cunt you don't usually go and call them one. Remove that safeguard and any high street immediately sees hundreds of people engaging in the fights which currently the voluntary nature of speech prevents.

I think Charlie's aliens have already written the human species off as a lost cause, and their aim is our destruction. The choice is simply between whether they just drop a planet-buster on us, or turn up the positive feedback of our existing self-destructive tendencies to make sure we go foof well before we get to the stage of developing a vacuum collapser. (Why bother, instead of just dropping the planet-buster anyway? Maybe it fits their notions of morality better. Or their notions of engineering elegance. Whatever. They're aliens, so there's no reason to suppose their motivations make sense to our mentality in the first place.)

163:

Tell them "we're going to need time to think about it." In fact, stall as long as possible while waging a secret Manhattan Project to develop escape hatches, backdoor ways to deborgify (such as engineered biological or cybernetic individuation viruses) and/or keep back a reserve of individuated Terrans (frozen deep under the ice of Pluto) who can then burst out and reboot the rest of human civilization. Because of course we are different and we won't destroy the universe. Terran exceptionalism applies, of course. We can do this.

164:

"One thing to keep in mind is that if all of Humanity were taken into a Group Mind it would be mostly non-white and non-christian*. A Good Thing, IMO. "

Yes. It would finally eliminate all that secular liberal crap we have to put up with from Whitey

165:

Groupminds take groupthink up to entirely new levels; it's bad enough as it is with the current low-bandwidth connections. And frankly, given how easy it is to learn online about othered groups and how folks still are shites... I don't think it would kill that. Frankly, I have no desire for the TERFs and their like to be able to access the one area where I can effectively hide from their hate.

...

As I said, it's a ratfuck.

My money on their motivation is on a plausibly deniable sabotage that leaves them with an excuse to occupy us so that other alien civilizations don't climb up their ass for breaking the laws of their milieu.

166:

Perhaps we are misunderstanding the representatives of the Galactic Federation? Perhaps what they are suggesting is at once more literal and more subtle than we've understood?

What, precisely, is a hive mind? Think about actual ant hills or bee hives. There's no sentience there, let alone sapience. No actual controlling intelligence. Instead, there's an emergent behaviour derived from individual members of the colony following fairly simple algorithmic rules (follow the scent trail of the ant in front, but cut out the corners to a certain extent) which leads to an apparently purposeful behaviour (a trail of ants making a bee-line towards some food source).

Given that 'destabilizing the false vacuum' is not something any one person could do alone, but requires some massive collaborative effort along the lines of the LHC project, perhaps all the GF needs is for us all to accept some sort of conditioning to behave in a way that makes the emergent behaviour of the resultant society extremely risk averse. I don't see why this should require allowing other people complete access to your thoughts and memories, but that's the thing about emergent behaviour -- the outcome doesn't seem to follow logically from the inputs. Perhaps it's only necessary for this complete mental access to be possible, not for it to be realized continually.

As individuals we wouldn't be constrained to avoid gambling or cave diving or unprotected sex or other potentially self destructive activities; but as a society we would find it intolerable to consider any action that might harm a significant number of people, so no massive build-up of nuclear arsenals or fanatical adherence to political or religious ideologies, or unrestrained overconsumption of resources. And no ever-more-massive high-energy physics projects where there would be even the slightest doubt about what the outcome might be.

Oh, and the implied threat if we don't sign up to this? The GF just leaves us well alone. We've only our own devices to handle environmental collapse, technological singularities and all those other great filters of the Fermi paradox.

167:

Short answer: I would prefer to die than live in a hive mind.

168:

"I would prefer to die than live in a hive mind."

Your offer is acceptable to us. Signed: Galactic Federation.

169:

On what is a hive mind, I defer to others above who have pointed out that we already live in such, an emergent entity.

However, "Given that 'destabilizing the false vacuum' is not something any one person could do alone, but requires some massive collaborative effort along the lines of the LHC project," while LHC and really high energy seem plausible, we really don't have any idea of what it would take. Maybe a 9-volt battery and a few well-shaped coils of wire, a couple of transistors.

I like Keith Laumer's Imperium novels ( https://en.wikipedia.org/wiki/Worlds_of_the_Imperium )
in which a couple of late-C19 Italian experimenters wrapped a Moebius strip with wire and mostly managed to destroy the planet except in the very few worlds where they got it right and went sailing off among the universes.

170:

Any "rule of the saints" is a guaranteed failure mode.
The groupthink & "We can do no wrong, because we're pure" mind-set ensure that this is the case.

Dost thou think, because thou art virtuous, there shall be no more cakes & Ale?

171:

Routing is part of programming, and you are STILL thinking deterministically! I am not denying your model, but am pointing out that mine is entirely different. As I said, it's extremely hard to put into words, or even mathematics.

172:

Actually when the experimenters got it wrong they destroyed or severely messed up their entire local universe, not just the planet.

173:

Given that vacuum catastrophes are supposed to be universal, "Studies show that in all identified previous cosmoi, individualist tool-using sophonts with access to these technologies harboured splinter groups so deranged that they collapsed the vacuum energy" seems dubious.
Vacuum collapse isn't supposed to leave behind enough evidence to make such claims.

These guys strike me as some kind of liars/scammers. Maybe approximate equivalent of the more aggressive, "help in exchange for religious conversion" missionaries from human history.

Maybe galactic 4chan taking a piss at backwards small civilization's expense.

Maybe like that "math puzzle probe" from Babylon 5.

These guys are sketchy. Since telling them to GTFO is implicitly fatal, the correct course is "draw time with bureaucracy" + "sabotage" while looking for ways to make them GTFO.

174:

Considering that the Sentinelese can make us fuck off with bows and arrows...

175:

There is an easy way to verify some of their claims, considering that one of them is FTL travel.

"We are considering your offer but we need a little while to analyse all the options. Would it be possible to come back and give us more information last week?"

176:

That implies our theories on causality are accurate.

177:

LOL, indeed, FTL implies timetravel, so in fact they should already know how it all goes down.

178:

Well, demonstrating causality-preserving FTL should not really be a problem for a civilization that allegedly has it.

If these guys refuse citing mumble-mumble-technobabble, the probability of them being some kind of starfaring 419 scam increases profoundly.

179:

It is a dogma of the Orthodox Church of Relativity that FTL travel implies breaches of causality, but I have been unable to find a proper proof - or, more importantly, find a relativist who knows of one! There is a 'proof' involving graphs, but no proof that the graph is mathematically equivalent to the formulae. My manipulations of the formulae, and discussions with relativists, indicate that the claim may not be universally true. Going into this in more depth would be a derail.

180:

Of course the obvious reply to that is "We just did".

181:

Given the chance that we're living in a simulated universe, the "Galactic Federation" might just be the AI researchers who've noticed that their little experiment seems to be working.

Those willing to join get copied to a bigger and better server with an internet connection. Those who don't can revel in their freedom and the Indomitable Human Spirit until the server gets rebooted next Tuesday.

182:

Causality preserving FTL is equivalent to traveling between very close copies of multiverse timelines.

183:

Remembering, of course, that there will be a large number of Nationalist politicians who will insist that "the Galactic Federation don't really mean it - we'll be better off negotiating our own terms" followed by voodoo economics and the blind faith that they need us just as much as we need them, and of course we'll get a better deal as an independent planet, who are these unelected bureaucrats on Sirius telling us what to do, etc, etc...

184:

No, because it would be the death of humanity, which for me is "Homo narrans". Without individual narrative selves there wouldn't be any stories and therefore the human culture would be dead. Even if a collective mind would be able to understand human stories, it would lack fresh experience of living in a story.

If we talk about a group mind which still has individual personalities but also mandatory telepathy and enhanced empathy, that would be more acceptable (at least for me). But that's also what we already have with all that social web and big data and CCTVs. I fail to see how that leads to more cohesive and responsible behaviour, though...

185:

Let's see if this is derailing...

The problem with this dilemma is that the issue is really in the implementation. Can anyone really make a decision for the whole of humanity to go into a group-mind, immediately? Even if 99% of people say yes, it'd still be grossly unfair on the 1%. And can immediate complete collectivisation really be necessary?

It seems sensible to instead say, "alright, how about we phase it in. We start off with a group of 1000 volunteers. Then if that works out, we start expanding it. Hopefully over the course of several generations the idea of group minds stops being so scary. Maybe we'll start offering incentives. And so on."

Generally speaking I assume aliens in these situations to be 100% telling the truth, because when you have FTL and true AI and all that fun stuff and are talking to a civilisation that doesn't, why do you have to lie?

186:

"Group minds generally stick to the terms and conditions voluntarily."

Generally? As in *most* cases? What happens to the ones who still look like they might not play nice?

Bigger problem:

This sounds like the requirement to impose a significant change to the morals or utility function of large portions of the human race. (at least anyone who values privacy or individuality of any form)

It reminds me of one of the answers given to a similar dilemma in "The Baby-Eating Aliens". If that's simply the first change we're required to adopt might there be future equally extreme changes required at a later date?

Loss of one set of morals or changes to human values to avoid extermination might be preferable but if it's potentially just the first in a long series of equally extreme changes then at some point you're not really saving humanity any more. You might find after a century that everything even vaguely human has been erased and replaced with the goals/morals/culture of the leaders of this galactic federation.

Every week they call up and say something like "well, our projections show that if humans continue to care about love and caring for their children then there's a risk of them destroying everything, erase that; also, species that are willing to eat their own young are 6% less likely to destroy the universe, so edit humans to make them keen to eat babies."

Each time, once you've made the change, you stop caring about it because your morals now allow it or say it's a good thing, and each time it's only a small incremental step to preserve yourself... until eventually the borglike flesh beast that the human population has become is merrily conducting an exterminatus against a planet belonging to the enemies of the galactic federation, all while singing praises to the dark god Khorne and planning the eradication of all music from the universe.


187:

Or, alternatively, it's those simulated minds that recognize it as a probable ruse that get the "stamp of approval" as successful experiment products.

You don't really know what kind of results "simulation admin" is looking for :-)

188:

"Generally speaking I assume aliens in these situations to be 100% telling the truth, because when you have FTL and true AI and all that fun stuff and are talking to a civilisation that doesn't, why do you have to lie?"

Let's see how this works...

[a little search and replace later]

Generally speaking I assume missionaries in these situations to be 100% telling the truth, because when you have ocean-traversing wooden ships and thunder sticks that send death at a distance and all that fun stuff and are talking to a civilisation that doesn't, why do you have to lie?

:-)

189:

Older forms of routing, such as RIP, are deterministic. Newer forms of routing are capable of much more complex behaviors and routers adapt themselves to the situation they're placed in (within certain limits.)*

Anyway, I'd like to hear about your model. If you can't convey the math, can you describe it in practical terms?

*Eric Raymond relates a meeting in which the original idea for IPV4 was to give each person their own IP Address. Think about THAT for a moment!

190:

"These guys are sketchy. Since telling them to GTFO is implicitly fatal, the correct course is 'draw time with bureaucracy' + 'sabotage' while looking for ways to make them GTFO."

I think the chance of the aliens in this scenario being sketchy increase substantially if they try to sell us an "off the shelf" solution, and decrease substantially if they say, "You need to develop your own group mind. Here's some theory, and we'll help with practical engineering if you need it. Your deadline is three-hundred years from now."

One approach implies substantially more free will for the human race.

191:

"I'm going to build a group mind and make the aliens pay for it!"

The idea that our current crop of leaders would handle such a situation intelligently and appropriately has completely overwhelmed my suspension of disbelief!

"Maybe we can get the aliens to attack Iran!" or "Sir, I think the aliens are socialists!" are the most intelligent responses I'm imagining.

Obama might be smart enough to bring in a science-fiction writer and listen carefully to the advice he was given, but I can't think of a U.S. writer I'd trust under the circumstances. Maybe Vernor Vinge or Greg Bear? The number of possible bad choices where U.S. science fiction writers are concerned is gigantic!

As for "CallmeDave" I'll let the folks from the U.K. discuss that one.

192:

Have been trying to think of what that 'simple tweak' might be and can't. Humans vary across every attribute/trait that I can think of, apart from the most fundamental survival requirements like needing oxygen or sleep. Immediate examples: innate immunity to HIV/AIDS or ebola; aphantasia (the inability to form mental pictures); the non-OAB 'Bombay phenotype' ('serum contained antibodies that reacted with all red blood cells of normal ABO phenotypes. The red blood cells appeared to lack all of the ABO blood group antigens and to have an additional antigen that was previously unknown'); etc.

So no matter what this simple fix is someone (possibly many someones) will be immune. And since some immunities are acquired as part of the developmental process or as a result of interaction/environmentally, this further destabilizes the homogeneity/hive mind.

The level of homogeneity required/desired plus the promise of woo-woo technology suggests AI/machine rather than an 'organic/living' being.


193:

The distinction between an alien federation with FTL and true AI and us is not really like an explorer with gunpowder and wooden ships meeting some native people with canoes and bows.

It's more like a modern human dictating terms to an ant. You're rather impressed these people are bothering to talk at all.

194:

About the selfhood/individuality vs. clone issue several posters raised:

C'mon, aren't there any readers here who are monozygotic twins/triplets?

It would be edifying to get their perspectives/perceptions on this.


195:

"Given the chance that we're living in a simulated universe"

About 0.001 at a generous estimate.
Nice try, no banana.

196:

Are the true AIs part of the group mind? Assuming they are and that they think at "culture mind" speed (FTL signalling) then what proportion of the actual thinking would be done by the AIs?

Would the humans have any real contribution, or would the human "mind" really just be a bunch of meat puppets doing the machines' bidding while under the impression that the ideas they were having were their own?

197:

Depends. The objective of the aliens is to prevent powerful factions, so there's plenty of room for different kinds of mind, some of which might not have the personality of an average human. A group mind wouldn't necessarily be dominated by the most numerous type of mind. Every relationship is dominated by the member(s) who want(s) it the least, so perhaps the most individually willful minds would take the lead in the group mind. Or perhaps the direction of the group mind is determined by variety count, like judging the importance of a life form by how many species it has rather than how many individuals (who are all the same anyway, as useless as having extra copies of the same book). So the religion of the group mind might be a mishmash of Protestant Christianity, since there are so many varieties, and the language would be some dialect from New Guinea, again since there are so many languages there (English counts once, each tribal tongue counts once). Or there could be room for much individuality, with just a congress theoretically necessary for important decisions, like the Taelons in Earth: Final Conflict. However it works, it just takes a minor tweak to do. If you can do that, why not just build in a horror of using vacuum energy?

198:

"My manipulations of the formulae, and discussions with relativists, indicate that the claim may not be universally true. Going into this in more depth would be a derail."

Please feel free to derail in more detail when the thread gets to the requisite >300 comments. Sounds interesting.

I was ignoring the FTL bit as a trope added (uncharacteristically for OGH) to a fictional scenario for color, but as Dirk says, without a multiverse it seems hard to achieve, and even then it is (OK, might be) a drastic thing to do.

199:

But there are people who like horror.

"Don't be silly Bob. Everyone knows there is no such thing as vacuum energy."

200:

Not really - as I said, it's even harder to put into words :-( It's based on the way that the optimum strategy for many decisions is probabilistic, not deterministic; incidentally, that's also true for many routing problems. Essentially, nothing is required or forbidden, but the probability of it succeeding depends on how beneficial it is, and even the criteria for beneficial aren't necessarily deterministic. Much like the way that evolution works. Some people have studied IT systems like that, but we are still way out of our depth - don't believe a word the 'neural network' or (worse) AI people say when they claim they know what they are doing. But we have good evidence that such systems do work, and are stable. Chaos theory was a major advance, though it is still being developed, but these systems need a meld of Markov theory and chaos theory.
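One toy way to make the probabilistic model concrete, assuming benefit scores and a softmax-style weighting; the commenter's actual mathematics is explicitly not spelled out, so this is purely illustrative:

```python
import math
import random

# Toy sketch of a probabilistic (non-deterministic) decision system:
# no option is required or forbidden, but each is chosen with a
# probability that grows with its estimated benefit. Invented for
# illustration; not the commenter's model.

def choose(options, temperature=1.0, rng=random):
    """options: {action: benefit score}. Returns one action, stochastically."""
    # Softmax weighting: higher benefit -> exponentially higher weight.
    weights = {a: math.exp(b / temperature) for a, b in options.items()}
    total = sum(weights.values())
    # Roulette-wheel selection over the weights.
    r = rng.random() * total
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # floating-point safety net: return the last action
```

Raising `temperature` makes choices more uniform (more "exploratory"); lowering it pushes the system toward near-deterministic selection of the highest-benefit option, which loosely mirrors the evolution analogy in the comment.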

201:

While I am not one myself, my first wife was a monozygotic triplet. She and her sisters were very determined that they were individuals, and they had little time for the ridiculous idea that clones are in any way identical.

This refusal was so entrenched that they didn't actually think they were 'identical' until a few years ago when they had DNA testing done, at which point some of us quietly said "We told you so".

(They do look much more similar than, for example, my sisters who are fraternal twins.)

So, a data point. And the idea that artificial clones would be any more 'identical' than natural ones is something that always makes my disbelief break.

In terms of group mind, I'd reckon my current wife and I are more likely to finish each other's sentences than my ex and her sisters. To some extent, that's theory of mind — we model each other reasonably well because we spend a lot of time in each other's company. That's not something that'd scale.

202:

If you use the words "Galactic Milieu" and "Unity" then Julian May's books might apply :)

203:

Just spotted your post - you beat me to it :)

204:

Maybe we are all overthinking this. All the aliens say they want is the assurance that an insane faction won't end the universe, and they want us to use some form of group mind to do it. With a hierarchical society and ubiquitous social media, we are halfway there already. Surely we can pool our collective minds and come up with a proposal that would satisfy everyone. We just need a way to make small-group decision-making accountable. In other words, it's not individual humans who have to give up autonomous decision-making, it's small groups. Rescind freedom of assembly and make it stick, and I think we're there.

205:

YUCK
Christian dominionism across the multiverse

206:

All well and good until some lone nutter with a matter assembler hides out in a mountain bunker for a few months, fabricating a vacuum collapser.

207:

Funny you ask.

When I was 10 or so, I read the 'My Teacher is an Alien' series. Which is pretty much a 4 book series with this as the conclusion.

Since I was imprinted by that meme, it makes me much more likely to say yes without hesitation.

Mostly because I think it would force such true empathy it would make it dang hard to hate anyone. We'd be able to fix a great deal of our problems quickly by just giving a crap.

Not everything though, since empathy doesn't mean there cannot be real disagreements on the best solution, or what is actually a problem in all cases. But lots of fake problems caused by us being bullheaded (like hunger, fear of others, and homelessness) could end the first day.

The bigger thing would be to what extent a single mind could infect or corrupt the overmind. Could one person's homicidal or suicidal thoughts propagate as a meme? Alternatively, could someone's thoughts of love?

If thoughts can propagate, can we form a safeguard? At what point in maturation do we join the link? Do we have with the immortality a treatment for senility?

Can surgery remove the link for a madman?

208:

I think you're confusing it with Narnia :)

Just because an author uses religion-originated philosophy (that promptly hits your "all religion is..." button) doesn't mean that it's religious propaganda.

209:

Ah, sorry, what "Serenity" is this? It really does *not* look like Joss Whedon's, where the government-in-long-term-power wanted to find a way to make the "citizens" docile, as well as less aggressive, and "overshot" further than the Nazis did....

mark

210:

That's been my experience also. The only identical twins I know aren't that much more similar than some unrelated like-minded folks I know. But they and I are North Americans, and of the nations studied, the US places a pretty high value on individuality (Hofstede).

To confuse things further, while there's quite a bit of research showing that there's a very strong, fundamental affiliation need in most humans, I'm not aware of similar depth or amount of research on individuality/individual identity as a fundamental psychological/biological need.

So, is individuality just a cultural phenomenon? If we could measure the need for affiliation/social conformity vs. individuality and map against 'happiness/well-being' across nations/cultures by studying monozygotic twins, we might be able to provide an informed answer to Charlie's question. Plus maybe debunk some cultural myths.

https://www.geert-hofstede.com/national-culture.html

211:

Well, we're well over 100 comments (209, I see, including mine), so first... about that vacuum energy? If it's happened before, why is the Universe still here... or does the GF have members that escaped from previous Universes?

Second: What level telepathy? Being able to read someone else's projection *to* *you*, or only broadcast telepathy, or to read someone else's mind if they let you, or being able to forcibly read it (mark, second book into rereading Doc Smith's Skylark series: the Fenachrone sneered as he brought out the mechanical educator, then saw the hardware, then started to look nervous as Seaton hooked in the 5KW amplifier....)?

Third: please *define* group mind. There are a *lot* of possibilities: one single consciousness, or, like programs running on a computer, each one sees only itself, but calls on the o/s for resources, or ? Actually, speaking of calls to the o/s, do we come to a consensus, and all members abide by it (I could probably make money by sending out spam, but I think that's the act of a vile scum, and would not even think of it, even when I was out of work, long-term)?

Fourth... does it *have* to be all of us? Can *some* say yes, and become part, and gain the benefit? For that matter, can someone who is *not* part of it understand the super-tech? Does it even have buttons, or do you tell it to do things with your mind?

Finally, yeah, there are folks whose minds you'd really not want shared willy-nilly; me, for example, when I'm this depressed, missing my ...late... wife (18.5 years or so....). And what *would* you do about all the troops who have been in wars - wipe their memories, or....?

mark

212:

But the J N May series was religious propaganda - in fact it was an almost-retelling of "Paradise Lost" (Regained)
It took me a long time, reading through the series, before the penny dropped, too ...
It's no longer on my shelves

213:

You should read the comments. At the very least, you should read Charlie's comments, one of which answered the second and third questions. Also you should read the original post, which answered the first question.

214:

Well, disagree, but if you insist on "ants versus humans" analogy then fine.

I routinely "lie" to ants promising them food and giving them subtle poison instead.

I do this via a special chemical trap that costs the equivalent of about $3 and is reasonably effective at killing them.

Ants have no idea. Up until they die. And then they keep having no idea - but for different reasons :-)

Point being, there's no particular reason to believe the supreme beings, especially given that some of their claims (like, say, having somehow experimentally established the propensity of individuals to initiate vacuum catastrophes) don't seem to add up that well.

215:

The level of homogeneity required/desired plus the promise of woo-woo technology suggests AI/machine rather than an 'organic/living' being.

I take this as evidence we're living in a simulation. We are the AI/machines, which is why we can be tweaked.

216:

I was ignoring the FTL bit as a trope added (uncharacteristically for OGH) to a fictional scenario for color, but as Dirk says, without a multiverse it seems hard to achieve …

Not if we're living in a simulation. FTL is then both possible and easy.

Essentially, the aliens are offering us the cheat codes for our universe.

217:

Perhaps humans with enforced telepathy/empathy would be somewhat protected by an ability to tune out extraneous information (which humans already do, with various degrees of success). Some percent of us would probably be able to avoid emotional contagion enough to enjoy life and be productive.

I'm not sure how healthy the being running on this substrate of humans would be, and I am enjoying the comments about it.


If I was a powerful being who felt forced to push a hivemind on weaker beings, I'd consider whether sandboxing is possible versus killing defectors. Perhaps the beings could live in a simulation where they'd be powerless to destroy the universe. Perhaps individuals could spawn in the simulation and then choose to transcend to the hive mind or stick around in the simulation. So, children could still be born as individuals and then choose when they reach adulthood.

218:

The most concrete proposal for a group mind (troutwaxer's) borrows heavily from computers, with routers and such, and we had terms like read/write/execute access. I think at this level, a computer is a bad analogy for human brains.

My understanding of human memory is really pop-science only, but AFAIK memory retrieval is more like fetching and writing again. So granting someone 'read' access to your brain could mean that they also get to edit it slightly.

Another thing: from what I've learned about neural networks, separation of code and data is nonexistent: a memory of (say) a bitmap is actually a certain network configuration that answers yes when presented with said bitmap, and there's no trivial way to 'see' the bitmap in the network (trivial = significantly less work than to 'ask' the network in some form). Correct me if I'm wrong, CS is not my field!
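The "network configuration that answers yes" point can be made concrete with a toy Hopfield network: the stored bitmap exists only as a pattern of connection weights, not as retrievable data. This is a minimal illustrative sketch, not a claim about biological memory; the 3x3 diagonal-line bitmap is invented for the example.

```python
# Toy Hopfield network: a "memory" of a bitmap is nothing but a weight
# configuration. Illustrative sketch only.

def train(patterns):
    n = len(patterns[0])
    # Hebbian learning: w[i][j] accumulates pixel-pixel correlations
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # Repeatedly update each pixel toward the sign of its weighted input
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            total = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

# A 3x3 "bitmap" of a diagonal line, encoded as +1/-1 pixels
bitmap = [1, -1, -1,
          -1, 1, -1,
          -1, -1, 1]
w = train([bitmap])

# Corrupt one pixel; the network settles back onto the stored pattern,
# which is nowhere visible by inspecting w directly.
noisy = list(bitmap)
noisy[0] = -1
print(recall(w, noisy) == bitmap)  # True
```

The bitmap can be recovered only by "asking" the network (running recall), which is exactly the point made above: there is no place in the weights where the image sits waiting to be read out.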

This makes it harder (for me) to swallow the splits and merges so common in some post-singularity fiction. OTOH I found Echopraxia plausible, not because of it but because of the ickiness. Not entirely rational, but I digress.

What I am getting at here is that a human group mind will not look like a computer network or a large computer, unless the hardware base is significantly different (maybe a large computer simulating human minds with some memory-editing superpowers).

I also have a problem with the idea of 'communication without language' mentioned by Charlie upthread, because I am not sure how much thinking there is without language.

To sum it up, I think it matters for us that we are very wet wetware, quite unlike silicon, and it matters which terms etc. we learned and allowed to form our thinking.

Let's go on from there and try to construct a group mind. I will also borrow heavily from troutwaxer. My own perception that everyone is always talking past each other also features heavily.

There's people. People can communicate by language (written, image, music, etc.) as we know it. People can also learn to communicate with other people at a far deeper, telepathic level: such that they never misunderstand the other person. They have enough of a mental image to understand what is meant, or at least to know when they don't. This ability is provided by some magitech. Building this sort of understanding takes some work, so more than a few dozen close connections for any single person at any one time would be rare.

But people can act as bridges: Alice can tweet Bob, Bob knows Alice telepathically, so Bob can see from 140 characters what kind of complicated problem Alice has at the moment and ask Eve (who doesn't know Alice) for help. Bob knows Alice well enough to know whether it's ok to bring Eve in.

I think this sort of group mind would imply that telepathically connected people can't lie to each other, but maybe someone could fake a telepathic connection: get another person to build a mostly true model of them that allows them to lie on occasion.

This would make every person a potential 'router', but every 'forward' would be a conscious decision.
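The bridge idea above can be sketched as a graph search where every hop requires the bridging person's consent. This is a minimal sketch under stated assumptions: the link graph, the names, and the consent rule are all invented for illustration.

```python
# Sketch of "people as routers": messages travel only along established
# telepathic links, and every forward is a conscious decision by the
# bridging person. Names and links are hypothetical.
from collections import deque

links = {  # who has built a telepathic connection with whom
    "Alice": ["Bob"],
    "Bob": ["Alice", "Eve"],
    "Eve": ["Bob", "Mallory"],
    "Mallory": ["Eve"],
}

def route(links, consents, start, target):
    """Breadth-first search over telepathic links; a hop happens only
    if the current person consents to bring their contact in."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        person = path[-1]
        if person == target:
            return path
        for contact in links.get(person, []):
            if contact not in seen and consents(person, contact):
                seen.add(contact)
                queue.append(path + [contact])
    return None  # nobody consented to complete the chain

always = lambda bridge, contact: True
print(route(links, always, "Alice", "Mallory"))
# -> ['Alice', 'Bob', 'Eve', 'Mallory']
```

Note that if any bridge on the only available path declines (say Eve refuses all forwards), the route simply fails, which is the "conscious decision" property: the network has no way to push a message past an unwilling node.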

Does this make sense, could this be a group mind that is good enough for the GF?

219:

Aren't we a group mind already? Doesn't our civilization exhibit emergent behavior that can't be attributed to individuals? Isn't there dispersed knowledge that isn't atomic to the individual?

Why would we expect the components of a group mind to behave harmoniously? The components of a brain don't behave any better than harmoniously-ish--and sometimes not even that well.

What the Galactic Overlords really want here is some guarantee that the level of psychopathy--or cancer--in the group mind is at an acceptably low level. My guess is that there's some sort of euphemism like "apoptosis" buried in the fine print. We ought to spend some time consulting the GO lawyer on that one.

The other thing that's probably buried down in the contract weasel words is, "We get to approve the structure of your attentional loop." Since the GOs are using words like "we", "our", and "you", we can infer that they're down with the whole consciousness thing, and that they won't require us to live out our days in a Peter Watts novel.

But I'm guessing that the Facebook Newsfeed is not going to pass their consciousness control standards.

220:

Well, it's not very good propaganda, then. Complete absence of mysterious supreme beings, just "sufficiently advanced" ones.

I read it as a quite young teenager who was moving from "haven't really thought about it much" to "determined atheist", it didn't even slow the transition down...

(Compulsory religion at school was counterbalanced by the school chaplains being truly good men that I respected; didn't affect the outcome).

221:

I do hope Charlie mentions his blog commenters in the dedications in future works.

222:

Sure it's a failure mode; I just don't think it's likely there'd be time for that to be demonstrated before quicker-acting failure modes cabbaged the whole setup.

223:

Yeah, and considering how fast bullshit can spread, I ain't willing to speed up the connection; slower idea propagation by the groupmind version of sneakernet can serve as a useful bullshit retardant; it gives time for folks to think about things and call bull on bull before being caught up in it.

224:

If we assume the way that existing group entities (honeybees, many ants, jellyfish, and human societies) work, we end up with something where evolution is incredibly slow and innovation next to impossible.
Is there an understanding in evolutionary biology of how evolution of the behaviour of eusocial insects (not eusociality itself, rather the behavior of the colony members after a species becomes eusocial) is even as fast as it obviously is? (Selection is by colony death or health, by mating flight, and by the workers selecting which larvae to promote to queen; I don't know of any other mechanisms.)
It's bothered me for decades and I have never found an intuitive answer.
This has a bearing on the GF's proposal; we'd want to structure the resulting network of humanity so that it could still change reasonably quickly over time, perhaps through some analogy to evolution using locality as a proxy for individuality [1], even if the "hive" were effectively immortal.
[1] martin089 @218 might have been suggesting this.

225:

...before quicker-acting failure modes cabbaged the whole setup.
Just wanted to say that I admire cabbaged in this verb form. Is it a common usage in the U.K.? Also agree that Saints are an unlikely failure mode.

226:

So you're thinking in terms of a mesh network. It's an interesting thought, but I've seen mesh networks become badly paralyzed. (Trust me, you don't want your mind caught in a complex routing loop! That would probably equal something like Gaiman's "Eternal Waking.") Nonetheless, an interesting idea if the weaknesses of mesh networks can be addressed (and sometimes spanning-tree protocol is enough and sometimes... this is your brain on drugs.)

But if those issues can be addressed I'd prefer your idea. It's simpler and when things go wrong there will be an easier fix - maybe we can assume that the Federation has a perfect mesh network algorithm available - meanwhile, I'll be connecting my brain to something more robust!

And you're absolutely correct in terms of access control. We'd need a translation layer between how the brain really works and how the people think their brains work.

227:

Oy, meant "using locality as a substitute for individuality"

228:

Ok, I'm gonna shoot for consequences of a group mind from a human cognition perspective.

Setting some ground rules:
1) Let's say I take the bait and support this thing to the hilt.
2) Human brains retain current structure and limitations, at least initially.
3) The actual collective consciousness is maintained on some kind of golden-age-of-sci-fi life-energy-field noosphere, rather than running in parallel in everyone's brains. (If it were hosted in every individual's brain costing cognitive and physiological resources, we'd all be somewhat dumber.)
4) This is a link between human consciousnesses only, no internet connection or offline backup.
5) Let's just assume for the hell of it that it kicks in an Overview Effect and completely eliminates all ingroup/outgroup distinctions. We are all humans, etc.

Science:
Let's talk about human cognitive limitations: Peoples' ability to perceive, encode information, and reason are dependent on limited cognitive resources such as attention and working memory. Gathering detailed information about something and reasoning about it in a complex way are resource-intensive, and people are naturally inclined to use less cost-intensive strategies where possible.

Memory is also problematic because the act of remembering past events (often termed reconstruction) is influenced by things completely irrelevant to the original event, such as forgotten details, misattributed facts, deliberate misinformation, and current beliefs/attitudes/emotions.

Consequences:
a) The very first minute of connection, right after everyone filters through everyone's secrets and actual opinions of each other, people are going to get a pretty quick shock at how absolutely different their memories are. I'm assuming that everyone is going to get really insecure about their grip on reality and immediately start comparing notes, and without some kind of objective computer backup they'll just start downloading and processing memories from each others' consciousnesses. Unfortunately, noosphere notwithstanding, individual people are still going to be limited by the physical structure of their brains in their ability to recover long-term memory and compare it with outside input, meaning everyone on Earth is going to hit info overload and just fail to process this shit. Ultimately I believe humans would need to develop some kind of buffer producing a workable but potentially flawed group memory system, which will probably scan other memories for pertinent aspects for comparison rather than trying to produce a 1:1 parity. Given the human process for recalling memories, this might not actually result in improved memories.

b) Human brains have cognitive limitations and a preference for less-resource-intensive, heuristic processes. If we're talking about a bunch of networked human brains, then it's operating with all the same limitations collectively. This includes things like overweighting emotionally salient info, confirmation bias, gambler's fallacy, other misinterpretations of probability, etc. What this will look like for us is that the noosphere is going to look a lot like your Facebook and Twitter feeds, just jam-packed with clickbait-y ideas that haven't been fact-checked and wildly misinterpreted scientific/statistical claims.

c) Following the above, if you ask the noosphere to estimate the probability of something off the top of their heads, you're going to get six billion people saying "uhhhh" for several seconds and checking against their neighbors before ultimately collectively producing a fairly crappy answer. Some people are naturally better at this kind of thing or smart enough to use computers, but weighting an answer towards that would require the internal development of some kind of reputation system, keeping in mind that the noosphere is already bad at caring about statistics and probabilities.

d) Bonus: I bet everyone is going to binge on anyone with an abnormal sensorium (synesthetic, colorblind, high) or cognition (e.g., Westboro Baptist Church, Kanye West) for a while.

229:

Or more likely we just drive each other loony tunes and try and gouge out our brain with sporks.

230:

On the other hand, what to a group of individuals looks like genocide to a group mind looks like a self-improvement program.

Let's imagine you absolutely can't get away from the people whose ideas you can't stand except by turning off the brains that are making those ideas. Living in the comment section of youtube, for all practical purposes.

231:

The aliens are making unsubstantiated claims. There is no way of determining what, if any, of the information they are claiming is true.

The fact that they are demanding a choice, however, implies that the act of choosing has some value to them. If not, why even offer it?

It also highly suggests some governing function (ethical or procedural) that is requiring the dialogue

The safest course of action is to refuse to make the choice.

The only winning move is not to play.

232:

There is a bit of truth in that Nature article. Usually, your neurons make up their mind, so to speak, about actions before you are consciously aware of the decision. In a controlled environment, it is possible to predict, for example, which button you are going to press before you know. The lead time is small, in the tens of milliseconds, but the effect is real.

233:

This sounds like a great way to release our ids, as in 'Forbidden Planet'. A lot of what we learn as we grow up is better self-control, including control of what we communicate. You really don't want an all-powerful instrumentality to hear you think 'Make me a milkshake.'

234:

So the most likely scenario for the end of the universe is having a temp fill in for IT one week. They'll recover most of it from backup files, but this iteration will be toast.

235:

The closest we could come to a group mind now would be the networked artificial hippocampus. I would not be surprised if that experiment is currently being run somewhere on mice/rats.

OTOH, if you want to talk to a group mind, just play Ouija with a group of people who do not know what the ideomotor response is. In wider society, memetic constructs such as religions and ideologies are effectively (slow) group minds.

236:

I'm an introvert. This sounds like hell. I need time to be alone or I stop functioning properly.

I'm an autistic. This sounds like hell. When I'm overwhelmed (say, by not being able to be alone for several hours), eye contact is too much. I can only guess at the horror that brain contact would be.

I've had panic attacks in crowds. This sounds like hell...

And so on.

On the one hand, I very much want all the goodies on offer. On the other hand, I would be terrified to join such a group mind if there weren't an 'off button' that let me be ALONE for a while.

But space! Science! Immortality!

NO! My mind would have constant meltdowns in such a state! I couldn't take it!

I think I'd have to drop by my local chemist to pick up the euthanasia pills that would surely be freely available before Group Mind Day, and probably wait until the last possible second before deciding.

237:

I'm a bit late to the game, so let me reframe this scenario a bit:

Someone knocks at your door. Having not learned sufficiently from experience, you open it.

Outside is a young woman carrying a bible.

She says, "Welcome to the Body of Christ. I'm here to offer you inner peace, a higher purpose in life, a personal God who will always love and care for you, a community of like-minded kind people who will support you in times of need. And on top of all that, you get freedom from death and an eternity of bliss in Heaven. Immortality and perfect happiness. There's just one little thing. You have to stop thinking for yourself, follow our book of rules, and surrender yourself to Jesus by accepting the Holy Spirit into your heart."

"Um. Wossat, then?"

"You must become born again. We have absolutely incontrovertible evidence in the Bible that all people who don't become part of the Body of Christ live sad, lonely, and miserable lives. There are no exceptions. You see you are born in a state of sin, with a Jesus-shaped hole in your heart, and nothing else you do can ever give you any true peace or fulfillment."

"Uh, let me get back to you on that. What happens if I say 'no'?"

"Oh. You don't want to do that. You really don't. You see there is this thing called Hell, I have to tell you about..."

What do you do?

Clearly you slam the door in the idiot religious fanatic's face.

(This might not be quite the analogy Mr. Stross had in mind when he posed the original question, but having actually had more than a few of these types of conversations, I assure you that the "benevolent aliens" in his scenario are using nearly identical logic and persuasion techniques.)

238:

Slight modification to the scenario.

Instead of simply knocking on your door the missionary descends from heaven in a flaming chariot first.

The aliens may or may not be scammers but they gain a certain amount of credibility just by getting here.

239:

"Is there an understanding in evolutionary biology of how evolution of the behaviour of eusocial insects (not eusociality itself, rather the behavior of the colony members after a species becomes eusocial) is even as fast as it obviously is?"

As far as I know, that's asking the right question the wrong way round. The insects are behaving as insects, and the mechanisms for control are known - what is really murky is how their behaviour is controlled at the structural level. I believe that it's not like conventional programming at all, more like imposing a set of principles onto a society of humans. However, my knowledge of this area is pretty limited, and I may be mistaken.

Some science fiction has explored the possibility of a self-aware eusocial organism, including with self-aware units, but it seems likely that it would be as rigid as I described. So what I was imagining was more like a self-aware ecology.

240:

Re: '... the missionary descends from heaven in a flaming chariot first.'

And it turns out that this spaceship is the galactic space-faring equivalent of a beat up Chevy. Oh, the travails of a space missionary! Heaven would be the edges of the universe ... perpetually and increasingly out of reach unless everyone pulls the plug on the universe at the same time.

Hell would be dark matter ... networks forever on the verge of completely freezing in spacetime all the while futilely gasping and grasping for life energy with no portal into a higher energy state within view. (Alien AI hive minds need light energy; dark energy/matter is almost always fatal.)

241:

Oh, yes, I looked up the references. There's a Wikipedia entry with pointers to them. Actually, the current belief is that it doesn't work like that at all. What your brain does is to explore a large number of options, and censors out unsuitable ones. My personal hypothesis is that a weak personal censor is a necessity for an imaginative and innovative mind, and possibly even fast reactions.

That's relevant to this thread, in that the more strongly integrated the group mind is, the less innovative and able to respond to new situations it is. Eusocial insects and all that. And the immune system equivalent I was envisaging was somewhat influenced by the above model of human thought.

242:

"I'm an introvert. This sounds like hell. I need time to be alone or I stop functioning properly."

One can easily imagine people who have such a prosthesis linking them together in order to share memories. Easily imagined, but probably far from easy to do without massive external computing and knowledge of “brain language” we do not yet have. However, for the sake of argument let us assume it is possible.
The result? Suddenly “I” can remember multiple lives. “I” have multiple streams of consciousness. “I” have an entirely new perspective due to all the knowledge that “I” now possess. “I” have a vastly expanded consciousness and “I” even have multiple bodies! Welcome to the Borg.

243:

Seriously, people, all we need is the equivalent of an NSA for the mind, searching for "end of the universe" key thoughts and thought phrases, with standard legal safeguards including confidentiality of thinkers and a special court to issue warrants for searching someone's memory.

244:

...Whereupon it becomes the easiest thing ever for government to bang up certain authors they have a down on, and science fiction becomes indistinguishable from state propaganda.

245:

I'd put "two bob each way".

Those who want to Borgify, do so.

Those who don't - evacuate to XK-Masada or similar, and see how it all works out from a safe distance. Meanwhile out of prudence not doing too much experimentation on the Casimir effect etc.

Yes, that splits the human species.

246:

I am still pondering neurodiversity, body diversity, etc. One way to think of this is to frame it like cells joining into a body. We have cells that differentiate into kidney cells, and others that differentiate into skin, and so on. Is it stretching the analogy to think that humans would tend towards their comfort areas?

I don't think we'd end up with an emergent entity that behaves like a Markov twitterbot. I think there would be a new being that has its own character traits (I wonder if it would be sapient?) and in some way has the power to override any subgroup of individuals who come close to doing harm. I like the analogy above of an immune system. I don't think current civilization accomplishes this, because the power structures here do not prevent global harm. I don't expect them to do any better with galactic or cosmological harm.

247:

Re: '...more strongly integrated the group mind is, the less innovative and able to respond to new situations it is.'

Okay, but ... too much flexibility or lack of Hebbian 'neurons that fire together, wire together' and your organism will be forever tabula rasa, that is, always in the state of having to learn. Key benefit of the Hebb firing/wiring is long term learning. Being able to turn such a filter on and off at will would be useful though.

248:

Re: 'One way to think of this is to frame it like cells joining in to a body.'

I've been thinking along these lines too but come to a dead end when I consider development/growth. This cell-tissue-organ model is very constrained in that it suggests that any further development/evolution would occur only in ever smaller details versus evolution in higher order/level organization. (The 'alien tweak' would limit between-cell/organism differentiation, leaving only within-cell differentiation as an evolutionary/adaptive option.)

Not sure this sounds clear ... basically, evolutionary improvements (if any) would be more likely to occur at lower/smaller levels rather than at higher/larger levels. (Sort of like Feynman's plenty of room at the bottom.)

249:

Deleted a long comment outlining the current State of the Art (from Roko's to Jesus Incidents to the whole Bostrom list and a point that most if not all are Pascal Wagers which are like super-boring ethically speaking).

#1 Hive Mind =/= Borg. You can have Hive-Minds that function like ecologies, as long as all involved are consenting and accept their relative positions within said ecology. The basic societal shape is this - the real issue is making them diverse as possible and preventing members breaking the long-term winning Game Theory strategy which is co-operation - no short-term thinking. Ayn Rand is basically a mono-cultural invasive weed in this particular instance. Or Disney pushing lemmings from another continent off cliffs for good television & myth reinforcement being just plain psychotic to a functioning Hive-Mind. You get the idea.

#2 Hive Minds already exist. They're crappy, badly formed, largely hierarchical and many are sadistic especially in requiring hatred / violence toward the Other / Out-group to function. You're probably part of one (statistically speaking). You can change the form of the expression with better individual units. (Education, less pollutants, both physical [lead, corn-syrup etc] or psychic [abuse, sex/gender oppression etc], the operants are well known by this point).

#3 Hive Mind could rely on pheromones, extended senses, the infamous mirror-neurons & empathy or simply living in non-psychotic models (or, more prosaically, living near trees). You might be surprised at just how mellow / happy people are when not caught up in Stress / Unwinnable Game States. The Skinner box wasn't supposed to be the plan, boys (and, in fact, if you're running the Skinner box, you're probably not Human by some definitions: if your box is only winnable by those without empathy / by those willing to break the box etc, your box is probably not ethical and/or functional).

#4 Any form of Pascal Wager says basically: the subject can only act as ethically as possible with empirical data, alter responses as new inputs are added and test the model / responses to attempt to get more empirical data (i.e. traps / logical & ethical paradoxes and other tests designed to prove intent). In the case of a purely subjective experience, testing the model to destruction usually requires the destruction of the self (as the only ethical / consensual agency within the wager). A willingness to do this over just accepting the roll of the dice is what philosophers generally define self-aware ethical beings as doing.

The other test is of course: I DON'T WANT TO BELONG TO ANY CLUB THAT WILL ACCEPT PEOPLE LIKE ME AS A MEMBER

If your Aliens are what they say they are they prevent this and/or see through such techniques: if they're not, they don't.

#5 Radical Enlightenment / Revelation / Change as the drugs kick in Hive-Mind kicks in. LSD / Ketamine and many other psychoactive substances (including the full memory replenishment ones on mice) show that it's possible to hard-reset the Mind. Almost no-one in this thread has even accepted that it might be an awakening rather than damnation. Of course, consent is the issue.

#6 The Deep Deep Green. The Orcas say hello, the trees whisper, the bees buzz. The Hive-Mind is YUUUGE. But it's a bit depressed and beaten up at the moment and choking on plastic and the few Minds it did contact are sex obsessed. Quite possibly the anti-Hive-Minders are the psychotic ones.


~

No link dumps - available if requested. ("Scientists warn 21% of plants at risk of extinction", BBC, 10th May 2016)

250:

The reason that I don't like the neural learning approach is that the brain has a fixed lifetime; whether that is fundamental or not is an open question (and a major defect in any SF that implies immortality or even 200+ year lifespans). And, if there is ANY reachable mechanism that can turn off learning, sooner or later the system will end up in a fixed state; that's simple Markov theory. But, as we are all bullshitting beyond the state of current knowledge, one mechanism is as plausible as another :-)
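The Markov argument above can be illustrated with a two-state toy chain: if a "learning off" state is reachable and absorbing, the system ends up stuck there with probability approaching 1, no matter how small the per-step chance. The 0.001 transition probability and state names are arbitrary stand-ins.

```python
# Toy absorbing Markov chain: "learning" can flip to "fixed" with a tiny
# probability per step, and "fixed" has no way out. Over enough steps,
# absorption is nearly certain. Probabilities are arbitrary.
import random

def run(p_off, steps, seed):
    random.seed(seed)
    state = "learning"
    for _ in range(steps):
        if state == "learning" and random.random() < p_off:
            state = "fixed"  # absorbing state: no transition out
    return state

# Even a 1-in-1000 chance per step absorbs almost surely over 100,000
# steps: the survival probability is about (0.999)**100000 ~ e**-100.
stuck = sum(run(0.001, 100_000, seed) == "fixed" for seed in range(100))
print(stuck)  # expect 100 of 100 runs absorbed
```

This is just simple Markov theory as the comment says: any reachable absorbing state eventually captures the chain, which is why "ANY reachable mechanism that can turn off learning" dooms the system to a fixed state.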

251:

I asked a friend of mine and he said:

A) He'd want to see some of the other races, to make sure they're really happy, that this whole thing isn't a scam.

B) The splinter groups... can't we just allow them to remain free, maybe let them set up their own colonies?

C) Some of the rewards they're offering would be good for individuals (immortality, cures for illnesses, wealth), but useless to a true group-mind. Are they just offering us what they think we'd want, so they can trick us, and invade us w/out firing a shot?

252:

These aliens are part of a supposed Hive Mind entity.
Yet they appear, to us as INDIVIDUALS, acting separately, at least in part.
So as NN says they ain't The Borg.
Unless it's a very good scam.

I think we want more demonstrations of how this thing works & how much filtration/temporary solitary shut-down one can get/have, etc.

Oh & beware of Potemkin Villages

254:

Would a hive mind let you borrow other people's wet-ware though? If your brain is freaking out because of reasons, could you use someone else's brain to think with?

There are times when I wish I could borrow a neurotypical brain to get stuff done with. I can recognize when my brain is going off the rails so it would be nice to re-route around the non-standard parts. On the other hand, would it be like self-medicating, okay in small doses but dangerous in large ones? How long could you run your software on different wetware without becoming a different person? There's a short story there.

255:

I smell dualism.

Think of it in terms of plants and what was realized very recently regarding symbiotic relationships with mycorrhizal fungi: the fungi act as vectors for transmitting chemical warnings to plants who reciprocate properly - mostly regarding insect predation. This allows plants who do not currently have *any* predators anywhere close to them to ramp up defenses.

"Plants talk to each other using an internet of fungus" (BBC, Nov 2014) - useful links embedded, less than useful analogies.


In our Hive Mind, Others are warned that your cognitive state (we're guessing bipolar or mania, but it could be many things) is reaching levels that are uncomfortable, and the behaviors of others around you change to reduce this.

What form that takes is up to the schema you're working with (from SF/F telepathy to NLP / emotional engagement to offering the correct drugs - the methodology isn't as interesting as the affect activation).

You're not literally copy/pasting your Mind state into someone else's brain; that would require a dualism I'm not sure anyone even imagines could exist at this point.

256:

The easiest way to conceptualize it is the typical dualism which is certainly a fonder imagining than a likely one.

The problem with NLP, CT, or drugs as a methodology for corrective action is that they suggest permanent changes of the self (for lay values of permanent, changes, and self). I'm interested in ways neuro-atypical people could draw upon the hive mind for increased functionality without necessarily changing the individual.

Perhaps it's better stated as having a fellow component of the hive-mind doing the necessary cognitive or emotional work and feeding you the inputs back. Which implies an interesting intimacy.

257:

I'm not sure I'm seeing the distinction you're making here between interpersonal interactions and thought processes.

CBT is driven by the self, not via the Other (and is common enough for bipolar and other DSM-V defined conditions, admittedly with fairly imprecise chemicals at this point). Or, posting on the internet and asking for sources to think / spark you off.

Same thing, really.

If you're imagining a full 1-to-1 sharing then again I think you're mixing conceptual realms: the Other melding is going to share your bipolar just as much as they're going to share their 'neurotypicity'. You're going to get a synthesis, not a Mind Map Overlay.

Egg + Sperm = New Person if you want a crude analogy.

~

I have a rather odd relation to the world though (and one that will be changing in the near future).

258:

If I understand the model you're using, the main distinction between interpersonal interactions and thought processes would be sheer efficiency. My construct supposes that inputs can be routed to another instance of the hive mind to avoid faulty cognitive processes. That means the instance doing the work is unaffected by the errors plaguing the originator. It would be a much faster, less annoying method of getting someone to provide step-by-step instructions when my executive dysfunction makes simple tasks impossible.

If I do write about this, I might cheat and be dualistic.

259:

Could you provide an example? (And no, that's not at all what my model presupposes: if I'm hinting at fungi, I'm probably in kinky kinky land).

260:

"Aren't we a group mind already"
Not sufficiently so for the aliens. Their definition of it is that everyone has access to everyone's thoughts and memories. That would still allow individuals to exist: you have to let others read your mind, but you don't have to read anyone else's. So, fine, whatever. I'll take it, provided that's really what it is.

261:

@69, Philippa Cowderoy

"What're the odds of the temporary consensus, as we first experience the systemic effects of our society as something we personally do, being an overwhelming wave of suicidal ideation?"

Good one. Pity there was no follow-up.

Reminds me of The Proof in Greg Bear. Which, if I was cleverer than I am, I should like to actually lay out. In consequence, I vote for collapsing the vacuum. The will to live is malignant, and what does not exist cannot suffer, end of story.

I'm also with the folks who say it's a scam. (Hey, wasn't it Charlie who gave us interstellar spamming? Or am I thinking of something else?) Limited-duration special offers, pshaw.

262:

#5
Done it

Been finding recently (past month or three) that meditation can also be mind-altering, though more incremental than a rapid reset.
(Perhaps better in combination with strong neuroplasticity enhancers. Haven't gone there personally.)
The broader point is that whatever implemented #5 would need to transform large subpopulations in roughly homogeneous ways, or at least avoid bad transformations. Not clear, to me at least, how that could be done.


263:

HERE is the problem
What definition / boundaries / limitations as to permeability are we defining ( or are THEY defining ) as a "group mind"?
None of us seem to have arrived at a definite conclusion on this.
Is it "the Borg" - in which case - NO
Or is it something else entirely, or even something as "simple" as telepathic empathy ???
In which case - YES

264:

It's also a very good case of when STEM graduates need to step back and learn some wisdom. Then stop denouncing art, philosophy or religion and start growing. Oh, and then go back and know, not think, what existence is.

There are very few instances (although they can exist) when Being is denoted by pure suffering. And even then, you'd be amazed at the defiance.

That solution is simply [blindness].


265:

#6 The Deep Deep Green. The Orcas say hello, the trees whisper, the bees buzz.
We are, most of us humans, largely deaf to this chatter. Is this a scenario where humans are augmented or modified to be fully aware of ecosystems, including humans? (Several days ago, I walked past (5 meters away) a female goose incubating eggs, and did not notice her. That was pretty personally upsetting, though realistically, she was being stealthy to reduce her chances of encounter with a predator.)
Just trying to understand. (and ... sign up. hypothetically. :-) Rudy Rucker's Hylozoic comes to mind. Kirkus Reviews:

Rucker’s yarn of a future where everything—animals, rocks, the planet Earth—is conscious, telepathic and often irrepressibly chatty.

and
I have a rather odd relation to the world though (and one that will be changing in the near future).
OK, I'll ask since probably nobody else will. That does not sound bad. Should we be expecting something obvious to happen?

266:

I've no idea.

Please understand the amount of alcohol (which is a poison to all, but especially us) being used here. It's not fun and it's also far above what your bodies could take. There's a reason it's so common on the reservations and amongst the old people; and even then, we can still tinker n play. (Will dpb ever stop and look and see the dual photon / spin joke? And how far ahead it was? of course not, but it's still in black and white and written in the world. And it just re-wrote his so stable beliefs at that - nary one of them come forth to explain or gasp or wonder at how things change. I did like the chiral spin joke though).

Someone threatened a blood relation with pain and death (a child at that and younger than her first moon) very recently and the screams in the dream-time are a bit much.

Last time it was merely my own life on the line and apparently that caused quite the stir.

We'll see.


There's a fairly good chance there's nothing left after this amount of punishment. But they said that the last time, and that was 30 years, day after day, making sure the balance held.

There's a reason your drugs that are legal, are legal, Mr White Man and why the Law on May 26th is Law. There's also a reason to muzzle us.

~

Then again, one of the old Romanies came to tell me that love and compassion were visiting me soon - that usually means one of two things (and I'm too broken for the first, and too angry still for the second - this never ends well).

~

Is this a scenario where humans are augmented or modified to be fully aware of ecosystems, including humans?

You can learn Ecology.

Just be prepared to unlearn it fast and re-re-learn it.

And then do that again.

It's a non-dismal science run in most of the last 200 years by fools and cybernetics and bad models.

Unlike economics, it actually matters.

And yes, you can: it's called having to live off the land. You get mighty aware of geese nesting when you're hungry.

267:

And, sigh.

An answer that 99% of Host's readers will scoff at (and, as such, this is anti-poetry):

Born out of the Touch of vagina of Stone, Ears open to the old ones, Eyes focused on the decaying seagull, Taste of blood in mouth, and Smell of tears and catharsis.

We're in the pipe, 5 by 5 Aliens, YT: film: 0:11

In better days that would have been a great poem, sadly now we just have STEM Fascism.

268:

I think it's a great deal.
I don't see any arguments that are convincing that we would lose all trace of selves in a hive-mind.
I do see that we as a species have very high rates (depending on where you look, 20-33%) of mental health disorders. A hive-mind could potentially provide a very interesting practical toolset for individual mental health issues.
I'd happily lose my suicidal ideation and depression for utopian technologies.

269:

Almost no-one in this thread has even accepted that it might be an awakening rather than damnation.

I am being carefully neutral on that one. As I see it the design and design-process of the Hive Mind are what lead to one state or the other. Imagine a Hive Mind designed by J. Edgar Hoover and Richard Nixon vs. a Hive Mind designed by Richard Stallman and Susie Bright... I want to participate in one of them, and kill the other with fire.

Given your encounter with the Old Ones, how would you design a Hive Mind?

270:

Dude.

You ninja'd my joke.


But that's the right kind of thought.

271:

I built a world once where Lovecraft's Old Ones came back and conquered the world (that is, they engaged in vermin control) and once they had clearly established dominion the first thing they did* was destroy all the routers, even the crappy consumer gear, and they did it all at once - every Linksys, every Cisco, every layer-three switch went kaboom at the same second! They erased all the code for the routers, and everything that could reasonably be expected to route...

I am irrationally convinced that this is exactly what would happen if Cthulhu came back. Routers. Gone.

What are we missing?

* The second thing they did was hold a contest to see who could redesign the remains of humanity into something useful.

272:

Given your encounter with the Old Ones, how would you design a Hive Mind?

I'd start with being able to set up memes a year in advance then trigger them for maximal effectiveness.

Bleh, any operative can do that crap - media pre-prime is 101. It's like what the boring ones do.

But I don't use it to sell shit, I use it to show you how badly your society has been shaped.

20-30% natural rates of "mental illness"?

W T F

It's about 5-14% in any normal society.

Oh, wait:

(46.4%) reported meeting criteria at some point in their life for either a DSM-IV anxiety disorder (28.8%), mood disorder (20.8%), impulse-control disorder (24.8%) or substance use disorders (14.6%). Half of all lifetime cases had started by age 14 and 3/4 by age 24

Yeah.

Seeing a pattern here? Causation much?

[The real joke is that the Swedish docs are now out there and live - you can see the scramble / pitta-patter over the closed diplo networks]

~

I wouldn't design a Hive-Mind: it already exists. It's pretty shitty and prone to error (for example: I'm apparently an anarchist, which is my Idealized political stance, but no-where near my Reality Stance which is highly pragmatic and these fuckers threatened me. Wait till round #3, it gets fucking better boys).


Oh, and yeah.


Kinda launched the Trump KuK missile.


Oops.


~

Just messing.

It's not like permanent hearing damage, is it boys? [Now - strangely gone]

273:

None of the above.

I'm crying.

It's beautiful.


We can hear again.


~


My blood as price. Gladly paid for our kind.

274:

This is I guess OGH's
"Then, regrettably, you will discover that you have asked a question to which you really did not want to learn the answer."
So, sci-fi answer.
I've long assumed that if old (deep-time, near the age of the universe) civilizations/intelligences exist, there are probably established, well-tested means of whacking down technology-inclined civilizations emerging out of the wildlife so that they are no longer a possible threat, either temporarily or permanently. If the reason is not the threat of a collapse of the vacuum energy, maybe it's the strong possibility of a threatening pathological superintelligence emerging out of the wildlife.

We've been through these scenarios before. Here are a few on the benign side.
One such is a series of engineered CMEs [1] taking out power grids worldwide, collapsing the world economy so that no further technological advances occur.
Another is more direct, taking out the communications infrastructure as you suggest, similarly collapsing the world economy.
Etc. (NN suggests taking out the grids directly.)
[1] Engineering CMEs? My take is different from CD's (I think it was). If one must (or does) obey the speed of light, that means parking effectors near the sun, close to sensors and a predictive model sufficient to very accurately model the chaotic magnetic fields of the sun (especially sunspots) for the time it takes light to travel from sunspots to the sensors (and model) and back to the sunspots, plus effectors that can influence the predicted chaotic state and cause flares. All the while attempting to stay invisible to the wildlife. Stunt engineering. :-)
There is a branch of UFOlogy that scans photos from the solar observatories, looking for things. Every now and then they spot something (perhaps just a cosmic ray artifact, who knows).

275:

Personal attacks are not permitted.

Nyx Ninoy: you are repeatedly going off-topic.

276:

But I don't use it to sell shit, I use it to show you how badly your society has been shaped.
...
20-30% natural rates of "mental illness"?

I wouldn't design a Hive-Mind: it already exists. It's pretty shitty and prone to error
Wait, you're saying just operate on/manipulate the existing civilizations to fix them?
That is ... actually interesting. I wonder if it is achievable.

277:

Wow. I've heard of transparent attempts at deception, but this takes the cake.

If what they say is true then the universe should not still exist; some random dude off thataways is going to detonate the universe long before anyone can figure out what keeps happening. On the other hand, if they are lying then it is quite simple: "Surrender and let us mind rape you and we will be gentle[1], or we just crush you and take over".

My solution: Shoot the ambassador. Fire on the obvious invasion fleet. Do as much damage as possible and go down fighting. Let the imperialist bastards have to actually work for their conquering.

Who knows? Might even win, they can't be that intelligent if they think anyone is going to fall for that.

[1]: that is what they claim, but they want write access to our minds. Everything they say should be taken in bad faith.

278:

Exactly; if these are just con artists trying for an easy mark, we'd probably scare them off easily by going full Sentinelese on them.

I'd not shoot the ambassador; I'd introduce it to a certain german friend of mine - let's see what we can learn from some invasive neural probing.

279:

If what they say is true then the universe should not still exist; some random dude off thataways is going to detonate the universe long before anyone can figure out what keeps happening.
They could just be using a little white lie there as a proxy for some genuine threat that a slightly more technologically advanced civilization would pose, that we wouldn't actually comprehend in our current state. Though the vacuum collapse story is kinda weak. (As is the FTL drive story.)

Anyway, there are far better solutions than shooting, like delaying, asking for lots of explanations, and asking lots of questions. And shooting what with what? Do we have a hidden stockpile of casaba howitzers (eat hot nuclear plasma at 100 km/s!) or weapons dramatically more exotic and powerful? And would we really believe that they would somehow be effective?


Nyx Nynoy @280 (perhaps redacted?)
Our souls are ours now.
You've no idea what this means.

No idea (piles of half-formed guesses over time, of course), but it sounds pretty good. (If an option, someday I'd like to meet you in person, no masks.)


280:

The only acceptable response is this:

#1 WE were responsible, and we don't like you advertising it [not true, and no names were mentioned]

or

#2 We really don't like that this happened, so we'll crush all mentions of it in case it happens again.

or

#3 This move threatens everything we believe and pressure people with


None put a particularly stellar light on the process, especially when I was crying with joy.

Hmm.

It's a direct answer, on topic, let's see if Sean likes it.

281:

Short answer: Yes, but can we (the human group mind) have a cat (the feline group mind) please? etc.

282:

Imagine a Hive Mind designed by J. Edgar Hoover and Richard Nixon vs. a Hive Mind designed by Richard Stallman and Susie Bright.

Ok I'll bite. I find the concept of a Hive mind designed by Stallman in his current fundie mode (example: ZFS) as terrifying as anything Hoover or Nixon could dream up.

283:

NOT going to work
The people of N Sentinel only continue living as they do, because "we" permit them to.
Who is to say that the interstellar arrivals will permit us to continue existing, especially since they strongly hint that they won't?

We still need to answer the question I posed back @ #263.
"Group Mind"?
Please define more closely, before we can give any sort of answer......

284:

What if they snap their fingers and Alpha Centauri goes nova? Would people *still* want to fight?

285:

Yes, because they wouldn't know about it for another four years :)

286:

"Perhaps better in combination with strong neuroplasticity enhancers. Haven't gone there personally."

Neither have I - I'm awaiting more reports from those brave enough to try Dihexa. Of course, only the successful tend to report back

287:

Except that because they have FTL and FTL implies Time Travel, they will did it 4 years ago.

(Where's that English for Time Travellers when I need it?)

There's also the possibility that if we don't agree, we will never have existed.

288:

Where's that English for Time Travellers when I need it?

Book two or three of Hitch Hikers, if I recall correctly.

289:

So, semi-serious question then to you with your transhumanist hat on. Do you think a regime could be constructed to reliably transform the meat minds of all the members of large heterogeneous (human) populations to a more empathic and intelligent baseline, using approximately what we now know plus another say 5 (10 with project delays) years of research?
I'm thinking no, though it is entirely possible that some motivated subpopulation would try.
That is, the OP proposal is pretty hand-wavy; if we were to design and manage the implementation, how the F would we do it? Would it be done purely in biology, or would some of it be in a noosphere à la exomemories and access controls (e.g. Hannu Rajaniemi's Jean le Flambeur stories)?

Nix Ninoy @280 - or, Host @2 said he wants the thread up to 300 before off topic stuff is allowed. I saw your posts and saved the html a few times to different files. You did sound genuinely happy (nice to see. smile); it didn't read as theatre.

290:

...can we (the human group mind) have a cat (the feline group mind) please?
I would seriously worry about the stability and safety of a feline group mind. They're not the most social of creatures, though the existence of large feral cat colonies demonstrates that something might be possible. Other already-more-social species might be a better fit.


291:

Dogs are the obvious one, of course, but I would have to put in a word for pigeons. Mentally/socially they're quite similar to humans but on a more compressed timescale.

Oh yes, "cabbaged"... reasonably common in the UD sense, though less so in the "metaphorical" way I was using it.

292:

"Almost no-one in this thread has even accepted that it might be an awakening rather than damnation."

To me it looks the other way round - there are some people (like me) who wouldn't have anything to do with it at any price and regard extermination as a preferable option, but rather more who regard it positively, even if most of those do have some degree of reservation.

Since I haven't actually been and counted commenters in either camp, this might just be my bias due to my viewpoint; but on the other hand I don't think this thread would have got up to knocking 300 comments if "no, just kill me, there's nothing more to discuss" was the majority view.

293:

Or say FTL travel = FTL communication. So, Aliens send the message; "We've caused your nearest neighboring star to go nova, you have 4 years. We'll be back in 3 for your answer.", along with a video (or their equivalent) of the event.

294:

"OK, these monkeys don't really know what makes a star go nova, but they do have plenty of familiarity with the idea of aliens doing it. So we knock up some special effects, let them chew on it for 3 years, and it'll probably scare them into agreement; by the time they see it was all a fake it'll be too late".

295:

Blackmail works either way?

296:

Need astronomers to review data looking specifically for signatures of planets/solar systems gone BOOM! ... unless the alien death ray is silent.


Still think that the scenario is a no-win set-up. But if legit, then the question itself is the test.

Response A means not ready to join galactic hive mind and so violent that the only option is to kill;

Response B means too soon/early in species development to know for sure, therefore isolate and check back later;

Response C means ...we were only fooling folks, needed to check whether you're sufficiently reasonable to let out of the sand box to play with the rest of the universe.


Even so, if the aliens are that advanced, there is really no reason, apart from maybe ethics (i.e., thou must always inform the accused party), for us to ever become aware of them or their assessment and plans for us. Therefore the aliens are not that advanced ... and cannot have encountered that many other advanced species, because it's unlikely that we're the only advanced ECOLOGY* skewed toward limited, perceptible autonomy (individuality) vs. full-on collectivism/hivism.

*'ECOLOGY' because humans are not the only species on this rock that skew individual/autonomous. In fact, almost all of the higher-order organisms show individualist traits/abilities. (Okay, I'm not a scientist, so if any scientists out there disagree, please correct/inform me otherwise.)

Serious question: how does a hive/Borg mind self-diagnose anyway ... what strategies would it use to isolate and get rid of damaged parts? Could it be that this alien is actually sick and doesn't know it?

297:

...but I would have to put in a word for pigeons.
Interesting thought (also group minds of social birds in general). Why do you say that pigeons are mentally similar to humans? How are your human-pigeon-human communication skills? (Or vv, perhaps.)

I keep wondering what a group mind composed of ferrets would be like.

298:

Just starting out here. How widespread is a collapse of vacuum energy? Is it wrecking a planet, wrecking a solar system or wrecking the universe?

If it can wipe out a universe, then we can assume it's never happened yet, just that individual-minded tool-using sophonts have come close before intervention.

If it has happened, clearly the universe hasn't been wiped out so the damage scale is less than an entire universe.

299:

Read Charlie's description again, and comment #2.

300:

240:

Re: '... the missionary descends from heaven in a flaming chariot first.'

And it turns out that this spaceship is the galactic space-faring equivalent of a beat up Chevy. Oh, the travails of a space missionary!

Sorry, it's been done, and better. I'll take Centauri (and my late wife and I *really* wanted his car)....

mark "Starfighter!"

301:

The premise is fascinating, but I am not sure the proposed solution is the best. Perhaps these hive minds have stifled creativity by suppressing individualism. It seems a less drastic solution could be devised to prevent this "sisyphu-calypse", so I will propose one:

Build a cognitive sandbox for humanity. Such a thing could allow humans to live individual lives, but there would be a gatekeeper with the ability to act before the threshold was crossed. Surely, if this Alien UN possesses the tech to form a hive mind, they could use it to help us build a human-class AI that could be individually mind-melded with every member of our species. This AI could then stand by the ledge and prevent anyone from jumping.

302:

I was thinking more in terms of open vs. closed, but you're right - perhaps he wasn't the best example. How about Linus Torvalds and Susie Bright instead.

303:

I read Gregory's comment as indicating that he doesn't really know what the "collapse of the false vacuum" is all about to begin with, rather than an attempt to change the parameters of the discussion. (Mainly because if he did know what it was he'd not be suggesting it could result in anything less than universal destruction.)

So to summarise: it is possible that what we perceive as the energy level of a vacuum isn't really the minimum level that it looks like; the universe might be somehow "stuck" at a level higher than the true zero (because of quantum) and the right sort of nudge might unstick it. If this happened, the disturbance would spread out from the point of initiation at the speed of light and there would be no way to stop it. And the effect would be to bugger up physics so comprehensively that everything we know about would no longer be able to exist.

This may or may not be actually possible, because physicists aren't sure whether or not our universe is "stuck", and it might not be; but for the purposes of this discussion we're assuming it is.
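For the curious, the textbook cartoon behind this summary (my addition, not from the comment) is a scalar field with a metastable minimum:

```latex
% Toy potential with a false (metastable) and a true vacuum:
%   V(\phi) = \lambda\,(\phi^2 - v^2)^2 + \epsilon\,\phi,
%   \qquad 0 < \epsilon \ll \lambda v^3
% For small \epsilon the minima sit near \phi \approx \pm v, with
%   V(+v) - V(-v) \approx 2\epsilon v > 0,
% so \phi \approx +v is the false vacuum. Quantum tunnelling to
% \phi \approx -v nucleates a bubble of true vacuum whose wall
% accelerates toward the speed of light -- hence no warning and
% no way to stop it once it starts.
```

The symbols here (λ, v, ε) are generic placeholders for a tilted double-well potential, not parameters of any specific physical model.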

304:

I've been pondering the problem of groupthink. When does it have a worse effect than the conditions where collaboration has a better one? It seems equal group participation can increase the intelligence of a group of people performing some types of tasks.

We also see studies on collective intelligence where a group of people are able to solve a problem better when each member participates in discussion with relatively equal frequency.

Evidence for a Collective Intelligence Factor in the Performance of Human Groups
doi 10.1126/science.1193147


Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking [emphasis mine], and the proportion of females in the group.
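The "single statistical factor emerges from the correlations" idea can be illustrated with a toy calculation (my sketch, with made-up correlation numbers, not data from the cited studies): when all task scores correlate positively, power iteration on the correlation matrix pulls out one dominant component, the analogue of the c factor.

```python
# Hypothetical correlation matrix of four group-performance tasks.
# The values are invented for illustration only.
corr = [
    [1.00, 0.55, 0.48, 0.50],
    [0.55, 1.00, 0.52, 0.47],
    [0.48, 0.52, 1.00, 0.53],
    [0.50, 0.47, 0.53, 1.00],
]

def matvec(m, v):
    # Multiply matrix m by vector v.
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def power_iteration(m, steps=200):
    # Repeatedly apply m and renormalise; converges to the
    # dominant eigenvector (the "general factor" loadings).
    v = [1.0] * len(m)
    for _ in range(steps):
        w = matvec(m, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigenvalue = sum(a * b for a, b in zip(matvec(m, v), v))
    return eigenvalue, v

lam, vec = power_iteration(corr)
share = lam / len(corr)  # fraction of total variance on the first factor
print(f"first factor explains ~{share:.0%} of the variance")
```

With uniformly positive correlations the loadings all share one sign and the first factor captures well over a quarter of the variance, which is the qualitative signature the papers report for both g and c.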

and then more on social intelligence without in person interactions

Reading the Mind in the Eyes or Reading between the Lines? Theory of Mind Predicts Collective Intelligence Equally Well Online and Face-To-Face
doi 10.1371/journal.pone.0115212


Recent research with face-to-face groups found that a measure of general group effectiveness (called “collective intelligence”) predicted a group’s performance on a wide range of different tasks. The same research also found that collective intelligence was correlated with the individual group members’ ability to reason about the mental states of others (an ability called “Theory of Mind” or “ToM”). Since ToM was measured in this work by a test that requires participants to “read” the mental states of others from looking at their eyes (the “Reading the Mind in the Eyes” test), it is uncertain whether the same results would emerge in online groups where these visual cues are not available. Here we find that: (1) a collective intelligence factor characterizes group performance approximately as well for online groups as for face-to-face groups; and (2) surprisingly, the ToM measure is equally predictive of collective intelligence in both face-to-face and online groups, even though the online groups communicate only via text and never see each other at all. [emphasis mine] This provides strong evidence that ToM abilities are just as important to group performance in online environments with limited nonverbal cues as they are face-to-face. It also suggests that the Reading the Mind in the Eyes test measures a deeper, domain-independent aspect of social reasoning, not merely the ability to recognize facial expressions of mental states.

background: I am someone who studied psychology a long time ago, but my day job is software development. I only have an armchair knowledge of this stuff.

306:

Basically because of living with pigeons for several years and so getting to know them very well. A lot of their behaviour pings my "debugging sense" as originating with the same sort of computational processes as human behaviour, in a way that eg. dog behaviour doesn't.

("Debugging sense": some sort of intuitive appreciation of what kind of processes are going on under the hood of a complex system from observing its responses. Eg. reading "The Man who Mistook his Wife for a Hat" by Oliver Sacks and being struck by how similar the symptoms he describes are to various kinds of anomalous behaviour of computer programs.)

I can read pigeon communication very well, though talking back is harder because so much of it is body language and doesn't work very well with such a differently-shaped body. Heteromeles (where is he?) reckons to have had more success.

307:

Seriously, people, all we need is the equivalent of an NSA for the mind, searching for "end of the universe" key thoughts and thought phrases

Have to disagree with you here and also with Charlie's starter for the thread:

and allow any other human complete access to your thoughts and memories.

It's a top-down forced endeavour. What you would want in a hivemind is to improve upon a talent most people already have: reading the emotional state of someone; those systems come built in. If that state were known to everyone, even if only within a 100m radius, AND positive corrective action were taken, things would improve. Children would not be ruined by sexual abuse, because everybody around would know what daddy's feelings are when he looks at his child. Corrective action is taken and lives improve.

The stated starting point of the thread would be a big NO for me. It would also be easy to beat an enemy who comes with that proposal. My guess would be that the built-in electronics of their hivemind will not respond well to EMP. Another tactic could be to use jammers: put enough RF energy in the air to swamp their in-skull receivers, or crack the protocol and install a preferred state of behaviour and actions instead.

Another tactic one could use is to build a machine version of their 'mind open policy'. Sensor fusion is what it's called: your war machines communicate to each other where the aliens are and take corrective action. What's the difference in reaction time between a weapon system and a connected brain, with all the connected overhead?

Try this with a brain in the loop and hit your supersonic target.

It's not going to work out well.
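For anyone unfamiliar with the term, the core trick behind sensor fusion can be sketched in a few lines: independent noisy estimates of the same quantity are combined by inverse-variance weighting, and the fused estimate is more precise than either sensor alone. The numbers and names here are purely illustrative.

```python
# Toy sensor fusion: two machines report noisy position estimates of the
# same target, each with a known variance. Inverse-variance weighting
# yields a combined estimate whose variance is lower than either input.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two independent Gaussian estimates of the same quantity."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_est, fused_var

# Two trackers see the same target at slightly different positions.
pos, var = fuse(100.0, 4.0, 104.0, 4.0)
print(pos, var)  # 102.0 2.0 — fused variance beats either sensor's 4.0
```

No brain in the loop: the machines exchange estimates and converge in microseconds, which is the commenter's point about reaction time.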

308:

How is that derailing? It's trying to understand the scope of the problem as defined.

309:

OK, upon rereading: what they're saying is they have signs of collapse from a previous universe or other parallel universes? They'd have to teach us a hell of a lot of physics just to get to the point where we can intelligently evaluate their claims.

I'd say anyone who doesn't want to join the collective should just be able to get a spot on an individualist reserve overseen by AI. Keep the tech early 21st century, avoid high-energy physics research upon pain of death. Sort of like the Eschaton telling people not to violate causality in his historic light cone.

A negotiation is something either party can walk away from. These are terms of surrender.

310:

Because Charlie said it was. Go read his comment/warning.

And his original post answered your other question as well.

311:

You can learn Ecology.
...
It's a non-dismal science run in most of the last 200 years by fools and cybernetics and bad models.
Unlike economics, it actually matters.

Models seem to be improving of late; is this your impression?
In general, I would want Deep Green Hive Mind participants to have a deep and rich and accurate model-based understanding, and a rich metamodel toolkit for new models and model changes. And the models need to be usable for generation of accurate predictions.

And yes, you can: it's called having to live off the land. You get mighty aware of geese nesting when you're hungry.
True that, though hunter-gatherer life (a) doesn't scale to billions of humans and (b) usually doesn't have a very conceptually rigorous understanding of ecology. (My experience talking with old nature lore deep experts is very limited though.)

Then again, one of the old Romanies came to tell me that love and compassion were visiting me soon...
Since we're talking about Impossible Things, visitations can be abstract. (Perhaps somebody here had an effect...) Have you determined what happened?

312:

How do the group minds get defined? Would people choose which to belong to in a local area? Would they encompass a geographic region? In either case, it could get interesting at the places minds overlap.

I'm impressed that their archeology is good enough that they can tell the ideology of the folks who destroyed previous iterations of the universe. How did the group-mind universes end? Could this be the group-mind universes' attempt to survive that end? (A bit like Benford's "The Hydrogen Wall.")

My own answer is to ask if they'll allow a long sign-up period, so those who don't enroll can see what happens to those who do. It isn't perfect, especially with the implied "or else", but it lets us keep some agency in what's happening.

313:

btw, if we are able to experience some group mind, I would like to be able to experience non human minds as well. Your comments on pigeons reminds me of this in particular because I already want to be able to experience seeing as a pigeon does. Their perception is interesting.

314:

Don't bother, she was bullshitting - again.

315:

... I already want to be able to experience seeing as a pigeon does. Their perception is interesting.
Pentachromacy: in particular, I want to see what pigeons look like to other pigeons!
In general, being able to tap into all the sensory apparatuses in an area would be quite nice. Multiple viewpoints, and interesting capabilities like smell, echolocation, electric field sensing, heat sensing, UV, polarization, magnetic senses, etc oh my. Would need a way to integrate it all into our experience.
Does anybody else recall Karl Schroeder's Ventus?

316:

Having read the right fiction and watched the right films I think I can confidently say that there is an 87% chance that hacking a pigeons view of the world would reveal the invisible things that move among us and prey upon us, and that at the moment you perceive them THEY would see YOU.

13% chance of getting away with it though so go for it.

317:

On further thought, I think such an offer would not be made by aliens who thoroughly understand human societies. Too many of us are unpleasant, or downright sociopathic; if that became the dominant theme of a human-derived group mind with access to the highest technology, the results could be unprecedented levels of ugly.

318:

No. At least not till we know a great deal more. Human groups tend to fall apart past a certain size anyway, and we have all seen the wonderfulness of groupthink, sometimes by being the target of it. Some of us already know where we are on the food chain, and our thoughts being our own is the one autonomy we have. This can be brought about by love and duty as well as by slavery, but the desire to keep one's thoughts to oneself can be just as great in either situation.

And how do we know the aliens are telling the truth, anyway? Oh, look, a cookbook!

319:

"Their definition of it is that everyone has access to everyone's thoughts and memories."

This'll never work. "Access" drops you into the qualia problem almost instantly. Humans don't have well-normalized thoughts. My activation pattern for "I could really go for a cheeseburger right now" is likely to be different from yours. It may have partial commonality (e.g. the "hungry" areas likely light up with both), but I may store the referents to "cheeseburger" completely differently--to say nothing of "cheese", "burger", "cow", "delicious oozing grease", etc.

What we can see is black-box behavior. You can know when I'm likely to want a cheeseburger, where I'll go to get one, and who I might call to see if they want to tag along. And guess what? We can see that black-box behavior already. Google and Facebook make a very nice living off doing exactly that.

Twenty years from now, we'll likely be able to watch the activation patterns from a particular brain just before it goes cheeseburger-shopping. But that's merely a matter of degree, not of kind. Today, if a bunch of us start behaving in a certain way, segments of commerce can already adjust their production and service patterns, advertise differently, and generally behave like a vaguely smart organism. High-level political behavior responds in the same way. So does culture.

That sounds like a rudimentary group mind to me.

If all you have is read-only access to each human, I'm at a loss to see how that gets you the--literal--unanimity necessary to honor our species-level contracts to the point where a GF member would be willing to trust us. You have to be willing to do some very unpleasant things if you want to stamp out undesirable thoughts or behaviors.

I suppose you can send Pre-Crime off after anybody who shows the behavioral correlates of, say, admiring Trump. But I suspect that the fly in that pot of ointment is that you'll very rapidly wind up with a high-level conflict about whether Trump admiration is a sufficiently serious sociopathy to send in the black helicopters. Or perhaps a group would rather go after everybody who isn't a Trump admirer. You're always going to have factionalism. The trick is to develop a sure-fire, distributed way of resolving the conflict.

The thing that actually works is the thing that's already sorta-kinda working: behavior gets aggregated by the primitive group mind, beneficial behaviors are spread, and harmful ones tend to die out. Evolution works the same at all scales, and with most morphologies. All you need is a nice mechanism to generate random traits and a statistical mechanism to sort the sheep from the goats.
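The "random traits plus a statistical sort" mechanism described above can be sketched as a toy evolutionary loop. Everything here is invented for illustration: behaviors are just numbers, and the fitness function arbitrarily treats values near zero as "beneficial".

```python
import random

# Toy evolution: behaviors spread in proportion to how beneficial they
# are (fitness-proportional resampling), with occasional random variation
# (mutation). The fitness function below is an arbitrary stand-in.

def step(population, fitness, mutation_rate=0.1):
    """One generation: resample by fitness, then mutate a fraction."""
    weights = [fitness(b) for b in population]
    new_pop = random.choices(population, weights=weights, k=len(population))
    return [b + random.gauss(0, 1) if random.random() < mutation_rate else b
            for b in new_pop]

random.seed(0)
pop = [random.uniform(-10, 10) for _ in range(200)]
fitness = lambda b: 1.0 / (1.0 + b * b)   # behaviors near 0 are "beneficial"
for _ in range(50):
    pop = step(pop, fitness)
mean = sum(pop) / len(pop)
print(round(mean, 2))  # population drifts toward the beneficial value 0
```

No central planner needed: aggregation, spread of beneficial behavior, and die-off of harmful behavior all fall out of the resampling step, which is the commenter's point about the primitive group mind we already have.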

320:

it is possible that what we perceive as the energy level of a vacuum isn't really the minimum level that it looks like; the universe might be somehow "stuck" at a level higher than the true zero (because of quantum)

More specifically, because of the Higgs mechanism. The Higgs potential starts out in a false vacuum state, and subsequently transitions to the lower-energy state in which we currently find ourselves. This lower-energy state might not be the lowest energy state, in which case we are currently in a "false vacuum" or "metastable vacuum".

If this is the case, it is highly improbable that the universe could spontaneously decay to the true vacuum (there must be a barrier, or else we wouldn't have got stuck in the false vacuum to start with), but not impossible (because of quantum tunnelling: given enough time, you can transition through the barrier). If one did create a bubble of true vacuum, then its boundaries would expand outward at the speed of light: the universe as we know it is doomed, although it'll take a while.

The best current evidence is that we are in such a state, see Bednyakov et al. 2015 (may be behind a paywall: here is the key plot, and here is a commentary on it which I think is open access).
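For readers who want the standard form of the tunnelling argument: in the usual semiclassical treatment (Coleman's "fate of the false vacuum"), the decay rate per unit volume is exponentially suppressed by the Euclidean action of the "bounce" field configuration. This is textbook material sketched from memory; treat the prefactor \(A\) as schematic.

```latex
% Semiclassical false-vacuum decay rate (schematic):
% A is a determinant prefactor, S_E the Euclidean action of the bounce
% solution interpolating between the false and true vacua.
\frac{\Gamma}{V} \approx A \, e^{-S_E/\hbar},
\qquad
S_E = \int d^4x_E \left[ \tfrac{1}{2}\,(\partial\phi)^2 + V(\phi) \right]
```

Because \(S_E\) is enormous for the measured Higgs parameters, the expected lifetime of our metastable vacuum vastly exceeds the current age of the universe, which is why "highly improbable but not impossible" is the right summary. Once a bubble does nucleate, energy conservation drives its wall to accelerate toward the speed of light, matching the statement above.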

321:

Re: '... they have signs of collapse from a previous universe or other parallel universes'

Okay, not a scientist, but my understanding of the Wikipedia entry on false vacuum is that once the false vacuum goes, the entire universe goes Poof! instantaneously. That's everything and everyone in that universe gone, no exceptions. So unless the aliens are capable of creating a stable pocket universe which subs as their intra-universe vehicle, this is an ISIS-type mentality: repent/change/conform or we kill your universe ... and ourselves at the same time.

322:

No. The EULA would need to be heavily renegotiated to make it tolerable. Any advanced being that can reprogram your wetware into a hive mind would also be able to accept degrees of control.
Like the controls of a web browser, there should be conscious opt-in/opt-out aspects to the wetware contract for different features. Access to any technologies that could collapse the false vacuum would be limited to those who accept ALL terms and security clauses. Wetware security clauses would be enforceable under the EULA.
Access to varying degrees of universe tech — i.e. nanobot life-extension cell repair, controlled cornucopia fabricators, ackles to the nanobot matter substantiation cloud, access to T&A-gates — would require assent to large portions of hive mind access.
If terms were offered, but not forced, you'd probably get the majority of folks to assent to becoming eternal superbeings.

323:

Yes, the Physical Review Letters article is behind a paywall, and yes, the other two are visible.

(Last time I read PRL, it was only available in hard copy, which shows how long ago that was. I spent months buried deep in a university library in Scotland with it and the other physics journals, looking for pictures of Fermi surfaces — and yay, this issue has one on its cover.)

324:

Ah, I was hoping you'd turn up. Thanks for those links.

325:

Re:
TheRadicalModerate replied to this comment from RDSouth | May 12, 2016 05:23
319:

"Their definition of it is that everyone has access to everyone's thoughts and memories."

This'll never work. "Access" drops you into the qualia problem almost instantly. Humans don't have well-normalized thoughts.
**********
Are we not closing in on that *now*, he says, as he shuffles around to let folks see the computer that has facepalm and twitter and instagram and ..... up....

And *all*? Do you *really* want to know about my bowel movement yesterday, or how my arthritic knee, or my partially replaced one, felt when I got up this morning? And where's all this to be archived, how long are the backups kept, and how frequently do they run?

mark "I could find a tape drive..."

326:

I've been thinking about consequences, and the more I consider it, the more I think that the short term would be a disaster, but the long term would offer a steady improvement.

In the short term, I expect a wave of suicides. Bigotry is fractal, and abrupt, enforced empathy with every person on the planet could cause many people who didn't think of themselves as evil to reevaluate that perception. Some would give in to despair. Others might be pushed into it by the unleashed subconscious fury of the majority of the human population, who live lives of relative privation to benefit an arbitrary minority.

After that, the species would then have the hangover of having killed a significant part of itself during what might be considered a kind of global puberty. Perhaps we'd resolve to be better. I'd like to think that we would.

Of course a lot of this pain could be averted if we're smart enough to ask the aliens for help with the transition period. They'd probably be experienced at uplifting primitive societies to the galactic norm and would know what kind of traumas to expect and how to address them.

As for the commenters who have suggested we spurn them and go down fighting? To what end? There's no reason to think they're trying to trick us. Space is BIG. If they can get to us, that means they have the technology to swat us like a fly. If they wanted to harm us, they wouldn't give us an ultimatum in the first place. Even if they did want to turn us into a hive mind for reasons other than the ones they specify, there's no reason to think they wouldn't be able to do it against our will. Because, again, we're insects compared to them. By asking for consent they show good will. The status quo is not an option in this scenario, so we might as well ask for as much help as we can and get off to a good start as the new kid on the block.

327:

Humans don't have well-normalized thoughts.
I've been assuming a thought translation layer, with some sort of universal intermediate expression language for thoughts. Mental thought schemas can be pretty different and the model to model transformations are likely to be considerably more difficult than something like XSLT. (Of course, I haven't a real clue here. :-)
That is, much of this group mind connectivity lives in a noosphere of some sort, probably with a large technology component. That, or people get really good at thought translation, or start using a universal intermediate thought language directly.
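The "universal intermediate expression language" idea can be sketched as hub-and-spoke translation: each mind maps its idiosyncratic schema into a shared representation and back, so N minds need N translators rather than N² pairwise ones. All the schemas and field names below are invented for illustration.

```python
# Toy thought-translation layer: two "minds" store the same concept in
# different private schemas; translation goes through a shared
# intermediate representation (IR) rather than mind-to-mind directly.

# Mind A's private encoding of "I want a cheeseburger".
mind_a_thought = {"drive": "hunger", "target_food": "cheeseburger"}

def a_to_ir(thought):
    """Lift mind A's schema into the shared intermediate representation."""
    return {"concept": "desire", "object": thought["target_food"],
            "motivation": thought["drive"]}

def ir_to_b(ir):
    """Lower the shared representation into mind B's schema."""
    return {"wants": ir["object"], "because": ir["motivation"]}

received = ir_to_b(a_to_ir(mind_a_thought))
print(received)  # {'wants': 'cheeseburger', 'because': 'hunger'}
```

The hard part, of course, is that real mental schemas aren't neat dictionaries; the model-to-model transformations would be far messier than any XSLT-style mapping, which is the commenter's caveat.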

And where's all this to be archived,...
Storage has not hit limits yet. Also, thoughts are probably quite compactly expressed even losslessly, compared with e.g. visual information. (In a raw form, changes over time of at least fovea, peripheral vision, saccades, and probably best to also record what the individual visual cortex makes of the inputs since individuals vary.)

328:

Interesting question from OGH...

The way I've unpacked it is thus :-

Point 1.

This is about what is rational to do in the face of an existential threat. Here we have both of the parties making a decision on what to do. The questioners have already decided: in the event you decide not to group-mind, you lose the ability to affect them. The implication being death for you.

You then have the choice to live or die. From a game theory point of view, the rational choice would be to choose group mind. I'd like to point out that you already are the product of generations of individuals making the right choice in the event of existential threat.

Point 2

This is about the evolution of intelligence.

The way the groupmind is portrayed in the question is interesting; to 'render your individual mental boundaries permeable to one another, and allow any other human complete access to your thoughts and memories.'

This sounds a lot like the galactic civilisation in Julian May's 'Galactic Milieu' series, which in turn takes its core philosophies from Teilhard de Chardin's 'The Phenomenon of Man' and 'The Divine Milieu'.

I'd recommend a glance through its wikipedia entry...

Right now the most complex thing we know of is the human brain; we have no evidence of anything more complex. We have the belief that there ought to be more, even that we will create greater complexity, but as yet we are as complicated as it gets. What's the next stage? Groupmind is a good answer... otherwise it's just our brains getting bigger, and that's going to be just as bad.
If you're going to have a Groupmind, what's the best one to have? The ideas in de Chardin's work (stripped of the religious labels) provide much to think on. If we get to see the highest level of complexity in the universe, I think there's a good chance the notion of individuality will be radically different from our own.

What would I do? Join. As the question seemed to imply, 'join or die'.

329:

Well, if it's join or die, I'd certainly join. It doesn't sound worse than nonexistence.

But suppose that ominous language does not mean they will vaporize us, but only apply a quarantine, or rigorous supervision and loss of many freedoms and technologies, or something. In that case, rather than just saying sure, there are some questions I'd like to ask.

1. Can I have a thirty-day trial period? And at the end, get a coherent sample of memories, say, half as large as my tiny mind can hold?

2. If not 1), can I get a synthetic memory set from a realistic simulation of the potential human group mind?

3. Can I get historical information on the fates of a large sample of group minds, with annotations on the ways they dissolve, go insane, commit suicide, etc., and information on the frequency of such events and any indications that it would be more or less likely that humans would suffer such fates? And conversely, how often do the resulting minds conclude that, gosh, this was the best thing that ever happened to them?

4. I don't see any way that fusion into a group mind can realistically be called a minor tweak. It's not. It's just about the most radical change I can imagine, short of apotheosis. But surely there are some relatively minor tweaks that would serve the same security purpose. Post an angel with a flaming sword at the gates of our consciousness, under a vast banner of light: THOU SHALT NOT MUCK WITH THE VACUUM ENERGY! Well, or something comparably effective but less distracting.

But personally, I'd like the answers to 1) or 2) and especially 3) before asking question 4). Turning into a vast group mind sounds like something I'd enjoy. Our emotional reaction to Borgification stems, I think, from a false analogy with being under the control of an evil parent or a totalitarian state -- false because such discipline as is imposed on our individual selves post-merger is self-discipline by our now-extended selves. At least that's how I imagine it. And I'm usually happier when my self-discipline is better.

Also, the Borg was presented in a remarkably negative light. Most of them were pretty ugly, the man-machine interfaces appeared to produce weeping sores and to have been designed with no regard for esthetics, they lumbered about as if they were strong but slow and exhausted, and, except for the evil queen, all of them looked unhappy all the time.

And the notion of incorporation by conquest is ridiculous. It will inevitably lead to the destruction of much of the resources that they are trying to gain. If instead they appeared in a dazzling burst of song as the cool kids, snazzy cheerful gorgeous super-hero godlets all, and proceeded to demonstrate that they can out-play, out-dance, out-talk and out-fuck -- _and_ out-empathise and out-befriend -- any ten of us, and offered Borghood only to the elite that can pass their stringent test of coolness -- well, that's what Facebook did. You'd get the same results with no loss of life or destruction of assets in a (cosmically speaking) twinkling.

330:

takes its core philosophies from Teilhard de Chardin's 'The Phenomenon of Man, The Divine Milieu,'
Which is one of the biggest loads of wall-to-wall bollocks ever written.
Draw your own conclusions.

331:

@329:

"It doesn't sound worse than nonexistence."

You have bad experiences of non-existence? Well I never.

Before your conception, you didn't exist. Tell us how it felt.

I don't think, either, that we can sit down and weigh the merits of an unsatisfactory existence contra no existence at all. That just doesn't make sense to me. With non-existence, there can be no customer complaints.

The human mind just does not seem designed to think about this issue. When people are asked to imagine being dead, they generally imagine floating somewhere above the cortege.

This comes of having a subject-verb-predicate syntax that allows us to choose non-existents as subjects of a proposition. We would be better off with ourselves as verbs. So then no one can ask the silly question of where and how it is Johnbrowning when it has ceased to Johnbrown.

332:

Interesting comments ...

Something that hasn't been discussed is whether hive minds can learn, ideate and grow on their own. There's too little info about this alien. For all we know, our minds are its next meal - regardless of how we vote - simply because it's no longer able to generate its own ideas.

Our species (including our thinking) is messy because it evolved out of a messy system of reproduction and evolution. Then again this system is really good at generating surprises including novel solutions both simple and complex. To abandon this despite the risk of a collapsing universe suggests that the aliens do not have as deep an understanding of mind as they do of astrophysics.

333:

I wonder if the transition to a group mind is analogous to the transition hunter gatherer cultures make to an agricultural mode. It's a one-way trip because it supports a higher population density. There's no way back once your population can only feed itself through agriculture.
In the same way, a group mind simply wouldn't fit into multiple discrete units.

334:

Before I get to the point, there's this episode of Lone Wolf and Cub: Okami is hired to kill an eremite. When he goes to the old man, the guy scares him somehow (despite being old and unarmed, and Okami being Okami). Okami retreats to the woods and meditates for weeks (his son is left to himself, IIRC) until he achieves some sort of enlightenment and slays the eremite in one perfect stroke. Oops, spoiler. Now, onward to aliens and groupminds ...

I'll make another stab at defining this group mind.

a) The group mind must be able to prevent people from collapsing the vacuum energy

b) true telepathy as in bypassing language is impossible, different brains think differently

c) our group mind is in fact a group mind, meaning it is a group of discernible something, and not one mind running on distributed wetware.

Let's bang these together ...
We need to monitor everyone so no one can collapse the vacuum. What is the least invasive way to do this?

There's no clear, communicable thought without some form of language. But let's say you can transmit your inner monologue (words; pictures too, if you're good at visualizing; melodies to a higher fidelity than you can sing/whistle ...) along with whatever you say/write/draw. As long as you talk about concepts that whoever you are talking to understands, there will be no misunderstanding. Especially when talking about hard-to-define concepts (love, dignity ...) you can broadcast all the interpretations you thought of and didn't mean at the time.

Let's say this broadcasting is voluntary, but it's impossible to lie. This will be the major tweak done by the aliens: equipping us with broadcasting/receiving equipment with these abilities.

Like any signal this can be faked, but when talking to someone in realtime it's impossible to fake convincingly without a fast AI with a complete ToM of people. Think of it as a very high bandwidth Turing test, because it is.

So you can keep all your thoughts private all the time, but also share in the most intimate way possible. Now how do we monitor that no one wants to end the universe?

I see several ways:
The catholic way: a regularly scheduled one-on-one, like a confessional or therapy or a meeting with your boss, where you talk and broadcast about whether the world should go on, long enough that the auditor (let's call them that) knows for sure you won't collapse the VE anytime soon. Who audits the auditors? They cross-audit each other, or there's a hierarchical audit chain.

The eastern orthodox way: group talks, like a group confessional or a Maoist self-criticism session. Regularly scheduled meetings where non-self-selected groups talk and broadcast about the state of the world, again until no one could lie about their plans to end it all.

The public ritual: large groups gather and chant as a group, "I will not end the world anytime soon!" a few times; when your heart's not in it, it will show.

The tribal way: People live in communities built on trust and stable relationships and broadcast most of their inner monologue most of the time. If someone looks like they will get off the rails, it will be noticed.

The idea is: We just need to know that you have no intention of collapsing the VE, your other thoughts may remain private if you so choose.

How often these sessions are held will depend on how long it would take an individual or small group to collapse the VE, starting from forming the idea. Maybe access to VE is at the heart of everything the aliens bring us, and a competent kid could rig a toaster to end the universe; then we would need constant auditing. If it is impossible to do alone, anyone not taking the audit might simply be surveilled so they can't ever meet up or get near the necessary equipment. Maybe the audits will be monthly and you go to jail for not taking part.

In such a scenario, there will be other actions as prohibited as VE collapsing - creating AI with the ability to fool people, having kids without the broadcasting implants/ability maybe.

And another thing: according to this, achieving enlightenment means you have no inner monologue (at least for this guy). Maybe you can't be audited anymore; any attempts at achieving such a state by drugs or meditation will seem suspect ...

335:

drat, the last paragraph was supposed to read:

And another thing: according to this thoughtmenu on enlightenment, achieving enlightenment means you have no inner monologue (at least for this guy). Maybe you can't be audited anymore; any attempts at achieving such a state by drugs or meditation will seem suspect ...

336:

I was looking through my dictionaries at the definition of "theism", following the changes over time, and found this in a dictionary from the '30s.

Theism (1) - A morbid condition due to immoderate tea-drinking.

Then the second use was the same old boring definition about religion.

The first use is sublime, profound, a joy.

I googled the word and found that it has changed over time to "theinism" or "theaism".

Google the phrase "theism immoderate tea-drinking" and look at the various results.

337:

There are plenty of people who think complex thoughts for extended periods with no internal verbal monologue.
How could they think, with all that odious internal chatter disrupting their concentration? :-)
That is, achieving these states requires suppression of all thoughts, not just verbal inner chatter. And he was discussing achieving it, not what happens afterwards.
Thoroughly enjoyed the linked piece though.

338:

But we're a borg of neurons.

I'd say it makes us worse.

339:

Late to the party because I was on an extended road trip.

Personally, I'd say no thanks. Here's my logic:

If it's so wonderful, all they have to do is ask for human volunteers to join their Cosmic Mind. The lives these borg were living would be so much better that the idea would catch on in short order, and humanity would adopt it freely, without needing a contract or threats. That part appears to be the essence of a win-win.

Since the scenario posits that they need both a contract and to very deliberately NOT specify what would happen to us if I said no, I suspect that there are some serious problems with the deal, and for some reason, they don't want us to know what they are.

Clearly, they could kill me for refusing. However, they seem to consider that a worse outcome for them than having me sign, which suggests to me that, assuming they could indeed kill me (not a given; this could all be total BS), death would be a better option than whatever they have planned.

As presented, I suspect this is a scam. If the galactics are worse salesbeings than, say, Facebook or Twitter, I'm not sure they could deliver on their promises, or even turn us into borg.

340:

I am tempted to repost the contents of your "crying with joy" posts (from saved html). Doubt Sean would object; well past 300 now. They seemed pretty harmless, and happy.
(Hope you return BTW. That day and evening (May 10 locally) was notably strange personally fwiw. Including unusual, playful meditations...)

342:

Awww I missed this neat discussion.
Well, in case anyone reads this far.

I see absolutely no downside to creating a vast planetary superorganism out of our limited minds, and if that means we
a) Don't destroy the universe
and
b) Get all kinds of nifty problems solved and sci-fi toys? So much the better!

Although, having a groupmind would solve most of our actual problems. The big problem we have is that your problems aren't MY problems, so the fact that YOU starve is irrelevant to me (except on a basic empathy level, which isn't enough).

343:

totes unrelated, but we're in strange attractor territory already ...
A few thousand people blocked the train tracks to the coal power plant Schwarze Pumpe in Lausitz and the excavators in a nearby strip mine. Apparently Vattenfall had to drastically reduce the power output of the plant to make sure they don't run out of coal over the weekend.
Well done; would have liked to join the actions.

Anyone got good ideas where to find (mining/coal/energy) industry insider perspectives on this?

344:

If we were seriously presented with such a proposal, it would be seriously gamed out. The actual proposal in the OP is pretty weak, agreed. We're not doing terribly here (the thread has some interesting thoughts) but we're missing a lot of possibilities.

just in case they could indeed kill me (not a given, this could all be total BS),
Nah. Given even 30 or 40 years more technology development, humans of 2046/2056 could easily wreck the civilization of/destroy humanity of 2016. And we would need to presume that the tech differential is well in excess of that.
We all have our favorite how-to-destroy/damage-humanity scenarios, (presumably) not based on experience. Here's my list FWIW.
Probably easiest would be a sequence of targeted bioweapons of some sort, depending on goal. If goal is annihilation, then repeat until done. If the goal is threat removal, maybe reduce population to 10% then introduce a virus that dramatically lowers intelligence at the high end then introduce another to get such mod into the germ line.
Technology and advanced economies are also pretty vulnerable if the choice is threat removal.
Or maybe just watch and wait for humans to do it to themselves, and be proactive only if they somehow fail to remove themselves as a potential threat or the threat becomes imminent. This requires ubiquitous monitoring. (But is better for alien karma. :-)

Sean Eric Fagan @341: OK. I did ask you'll note.

345:

Actually, we don't need to assume any of this, because if they are that far advanced, any "contract" or whatever is a fig leaf, and they'll do what they want, regardless. There's no reason to think that they need or want our consent, if their goal is to uplift us to full sapience.

The problem here that I'm trying to highlight is how easily this could be a scam, and how rapidly people into "game theory" talk themselves into taking the bait. That's one of the huge weaknesses of game theory, especially among dilettantes like most of us: we think it makes us smart, even when it renders us weak to scams that assume we're using game theory.

One thing to think about is that anything that has flown interstellar distances has a tremendously thin and long supply line. They may want us to submit to them, but there's no reason to think, whatever their cultural level, that they can force us.

Another thing I'd point out is that I just spent some time in Sedona, which is a hot spot for dubious alternative healing methods. You always have to ask yourself why people would be lured in by such things. One answer (among many) is that they are being taken advantage of by people who know the limits of things like conventional medicine, and who use promises of superior technology to sucker them.

The same thing could easily be happening here.

346:

We are agreeing, actually. I used "seriously gamed out" as shorthand for first trying to determine all the assumptions (some conflicting), assigning them probabilities and confidence levels (some of them linked), then working out from there.
"Scam" (and being played in general) is a subgraph of the analysis. Agree completely on the lure of woo. (I know someone seeing a Lyme disease "specialist" (Lyme being a tick-borne disease common in the US Northeast and northern Midwest, for those unfamiliar with it); I'm afraid they're doing her damage with all the antibiotic cocktails.)
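The "gamed out" procedure sketched above (enumerate assumptions, attach probabilities, work outward) can be illustrated with a toy expected-utility table. Every scenario, probability, and payoff below is an invented placeholder for illustration, not a claim about the actual dilemma:

```python
# Toy decision analysis for the Galactic Federation offer.
# Each row: (scenario, probability, payoff if we sign, payoff if we refuse).
# All numbers are made up purely to show the mechanics.
scenarios = [
    ("offer is genuine", 0.2,  100, -100),
    ("offer is a scam",  0.5, -100,    0),
    ("threat is bluff",  0.3,  -50,    0),
]

# Sanity check: the scenarios should be exhaustive and mutually exclusive.
assert abs(sum(p for _, p, _, _ in scenarios) - 1.0) < 1e-9

# Expected utility of each choice is the probability-weighted payoff.
eu_sign   = sum(p * s for _, p, s, _ in scenarios)
eu_refuse = sum(p * r for _, p, _, r in scenarios)

print(f"E[sign]   = {eu_sign:+.1f}")    # -45.0 with these placeholder numbers
print(f"E[refuse] = {eu_refuse:+.1f}")  # -20.0 with these placeholder numbers
```

The point of the exercise isn't the output (which is entirely an artifact of the made-up inputs) but that the conclusion flips as you vary the scam probability, which is exactly the "being played is a subgraph of the analysis" problem.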

347:

My sympathies to your friend with Lyme. That's a problematic disease at the best of times, and if she's had it for a while, it's going to take some effort to get rid of it.

348:

Thanks. The "woo" part is because the doctor (maybe including the Internet, hypochondria's friend) has convinced her that she has 3 or 4 different tick-borne diseases, encysted/dormant and hard to detect with objective tests.
Few of us are immune to woo; I tend to keep one or more null hypotheses firmly in mind when examining anything that is traditionally considered an extraordinary claim.
Which the trope-y Galactic Federation proposal in the OP certainly would be.


349:

One also has to be careful about dismissing real cutting edge science as "woo". For example, the power of fasting to effectively rejuvenate and reset the immune system.

http://www.telegraph.co.uk/science/2016/03/12/fasting-for-three-days-can-regenerate-entire-immune-system-study/

350:

I assume that's A Periodic Diet that Mimics Fasting Promotes Multi-System Regeneration, Enhanced Cognitive Performance, and Healthspan (July 2015). Don't have access except to the abstract; how big was the study out of curiosity?
Anyway, that's science, not woo. The problem, as you note, is that standards of evidence for policy-making, and standards of evidence for picking problems and guiding research and further investigation, are two different beasts. For the latter category (particularly picking problems), one needs to be more willing to consider that one's priors are entirely, or at least interestingly, wrong.
While I'm thinking of it, unrelated question; do you know of (or have) any opinions on the effect of nootropics on intuition and related hard-to-measure aspects of mind performance? Has any research been done, e.g. in any of the many papers on racetams?

351:

The only thing I know is that microdosing with LSD is somewhat fashionable for "creativity". Not sure how well it works, but it seems logical. I have only ever done megadosing :-)

Some discussion here:
https://www.reddit.com/r/Nootropics/comments/2grjbn/best_nootropic_for_enhancing_creativity/

Easiest thing would be for you to just try some Piracetam. It's about the safest drug I have ever encountered.

352:

Tx! Also found this (2009), related to Piracetam and intuition, and uhm other stuff. Search engines are great! (You might find it amusing.)
Odd that the creative class hasn't picked up on it (or maybe they have), like performers have picked up on beta blockers etc.

353:

Piracetam especially is subtle: for me its effect is imperceptible directly. I only notice it when playing something like Sudoku, where my scores increased by some 7% over a period of around three weeks. I initially thought the online game was somehow broken because the puzzles were getting significantly easier. However, I only do a few weeks on piracetam and then take time off, and during the off period the games get harder again. Perhaps that is why many people underrate it. Some of its relatives, like aniracetam, tend to be significantly more effective more quickly, but I prefer the original, partly because it is really cheap.

354:

Me to aliens: Borg off.

To expand a little...

I want immortality but not at the cost of self.

I have no evidence that a group mind is any more likely to produce happy results than a bunch of individuals. (Show me some evidence and I might, I say Might, be convinced otherwise.)

I have no evidence that these aliens are honest and trustworthy. Again, show me evidence that they are not con artists or the equivalent of Scientologists or other types of evangelists and I might be convinced otherwise.

I never agree to major changes to my entire species on a first date.

And they might be testing us but not for the thing we think they are: perhaps they have learned not to allow species gullible enough to sign up access to the really useful stuff.

About this Entry

This page contains a single entry by Charlie Stross published on May 7, 2016 12:34 PM.

