Shame it couldn't go out in time for the mandatory winter-solstice shopping bonanza: it would have made the "what do you want?" question much easier to answer.
Did you manage to fit in any last-minute rewrites to account for 2016 weirdness, just out of interest?
Forgive me, I was assuming a panopticon-style setup - I should have been explicit.
I agree severity of punishment is a poor tool - human history is replete with examples to back that up (relatively recent British and American history, not to mention classical civilisations). I guess my point was human beings would be more likely to revert to type and re-use old social arrangements, even deeply flawed ones, rather than attempt to invent new ones for novel scenarios.
The fact is, when the failure states flowing from individualism/libertarian ethics are unsurvivable for entire communities, severe restrictions on personal liberty and/or privacy will be the first thing people try, not the last. The only alternatives I can think of would involve total surveillance (which is ultimately what you're talking about with group-minds/telepathy - no privacy even in your head), along with some sort of "Nanny State" variation on the AI concept I mentioned above - preemptively smacking your wrists/"nudging" you when your behaviour or thoughts move out of bounds.
That, or the internalisation of group ethics through some engineered form of organised religion designed to increase survival chances for groups. Or perhaps a combination of those with authoritarian social structures.
I'm thinking that the Age of Colonisation Pt.II is probably going to be deeply unpleasant for a great many people. Of course, if we invent Banks-style benevolent AI to manage the process we might fare better, but I'm sceptical we'd survive the attempt intact.
Depending on the level of AI available, you might expect some form of "arbiter AI" apparatus to be set up. This wouldn't necessarily need to be of the Strong AI variety - perhaps just a fairly comprehensive collection of algorithms for risk and resource management.
The precise weighting of value within those algorithms - well, you can imagine there's scope for conflict there. Picture, for instance, a corporate-biased machine that strongly favours saving infrastructure and key workers over serving the needs of the general population. Nation-states, NGOs and corporations might have distinct arbiters, with distinct rulesets.
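To make the "distinct rulesets" point concrete, here's a minimal Python sketch (every name and number in it is hypothetical, purely for illustration) of how an arbiter needn't be Strong AI at all - merely a weighted scoring function whose entire "ethics" lives in the weights it's configured with:

    from dataclasses import dataclass

    @dataclass
    class Claim:
        """A competing demand on scarce resources during a colony crisis."""
        name: str
        infrastructure_value: float  # contribution to keeping the colony running
        lives_served: int            # colonists directly served

    def arbitrate(claims, weights):
        """Rank claims by weighted score; the 'ethics' live entirely in the weights."""
        def score(c):
            return (weights["infrastructure"] * c.infrastructure_value
                    + weights["population"] * c.lives_served)
        return sorted(claims, key=score, reverse=True)

    claims = [
        Claim("reactor maintenance crew", infrastructure_value=0.9, lives_served=12),
        Claim("general habitat dome", infrastructure_value=0.2, lives_served=400),
    ]

    # Hypothetical rulesets: the code never changes, only the configuration.
    corporate_ruleset = {"infrastructure": 1000.0, "population": 1.0}
    civic_ruleset = {"infrastructure": 10.0, "population": 1.0}

    print([c.name for c in arbitrate(claims, corporate_ruleset)])  # crew first
    print([c.name for c in arbitrate(claims, civic_ruleset)])      # dome first

Fed the corporate ruleset, the same machine saves the reactor crew first; fed the civic one, it prioritises the habitat dome - exactly the sort of conflict you'd expect between a corporation's arbiter and a nation-state's.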
In terms of social relationships, I'd imagine that, given the fragility of this sort of environment, some form of military-style hierarchical structure would be the safest for the collective population - justice would be summary, subject to AI approval, and enforced with severe brutality.
I'm not entirely sure I'd want to live there, now I think on it; I do think that any early space colony would necessarily want to highly prioritise group security, with everything (nasty) that implies.
An interesting question for a sci-fi writer might be the point at which terraforming starts to make the strict rules governing the colony less necessary - at what point in the process of making the colony more resilient do you decide it's time for liberal democracy?