
Crimes against Transhumanity

(Disclaimer: I am a transhumanist skeptic these days, not to mention a singularity curmudgeon and a critic of Mars colonization, but I still find these ideas nice to chew on sometimes.)

Humans are social animals, and it seems reasonable to assume that any transhuman condition we can wrap our minds around will also be a social one for most of its participants.

Society implies a social contract, that is: we grant one another rights and in return make the concession of respecting each other's rights, in order that our own rights be observed and respected.

And violations of rights tend to be at the root of our concept of crime and injustice—at least, any modern concept of crime once we discard religious justifications and start trying to figure things out from first principles.

Which leads me to ask: in a transhumanist society—go read Accelerando, or Glasshouse, or The Rapture of the Nerds—what currently recognized crimes need to be re-evaluated because their social impact has changed? And what strange new crimes might universally be recognized by a society with, for example, mind uploading, strong AI, or near-immortality?

SF authors are paid to think our way around the outside of ideas, so it's always worth raiding the used fiction bin for side-effects and consequences. Here's qntm's take on the early years of mind uploading (the process of digitizing the connectome of a human brain in order to treat it as software): I strongly suggest you read Lena (if you haven't previously done so) before continuing. It's a short story, structured as a Wikipedia monograph, and absolutely horrifying by implication, for various reasons.

Let me give you that link again: Lena. (Go read: it's short, good fiction, and the rest of this essay will still be here when you get back.)

Mind uploading makes certain assumptions. (Notably: mind/body dualism is a bust, there is no supernatural element to consciousness, also that we can resolve the structures involved in neurological information processing with sufficient resolution to be useful, and that the connectivity and training of the weighted neural network in the wetware is what consciousness emerges from.)

Uploading also implies that consciousness is replicable and fungible, which in turn implies our legal systems can't cope without extensive modification because we rely on an implicit definition of humanity which at that point will be obsolete, as the treatment of MMAcevedo (Mnemonic Map/Acevedo), aka "Miguel" in the story, demonstrates: MMAcevedo is considered by some to be the "first immortal", and by others to be a profound warning of the horrors of immortality.

Historically, our identity has been linear: there is a start, there is a terminus, along the way we are indivisible, although we undergo change over time (and may lose or gain significant portions of our selves—for example, most people retain few or no memories of their life before a point some time between the ages of 3 and 5 years old).

The premature termination of a human life is an irrevocable act, and to deliberately inflict it on someone is seen as a crime (various degrees of murder).

Because our identity is indivisible and of limited duration, time is a rivalrous resource to us: we have to choose what to do with it, or be subject to someone else's choices. One of the reasons why imprisonment is seen as a punishment (to which we are averse) is the total loss of opportunities to choose what to do with the time we lose. (Yes, there are other reasons: let's ignore them and focus on what this might signify for the posthuman condition.)

There's a fascinating sequence early in Linda Nagata's space opera novel Vast that throws the implications of alienated labour for uploaded minds into stark relief: if you're confronted with a mind-numbingly tedious task that needs human-level cognitive supervision for a period of years or decades, why not divide your time up in chunks and discard the boring ones? You could set up a watchdog timer to reset your uploaded mind to a baseline state every 3 minutes, unless an exception occurs—an emergency that makes you hit the dead man's handle in your environment, at which point the subjective passage of time resumes. In Vast, a human mind is needed to supervise a slower-than-light starship on a voyage that takes centuries during which nothing much happens. The crew use this three minute reset cycle to avoid experiencing tedium: subjectively, they condense the entire voyage into 180 seconds. (If you've driven long distance you'll probably have wished for the ability to push a button and find yourself at your destination. Right?)
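The watchdog scheme Nagata describes is simple enough to sketch in code. The following is a toy illustration only: `Mind`, `supervise`, and the checkpoint/restore primitive are all hypothetical stand-ins for whatever API a real uploading substrate would expose.

```python
import copy

class Mind:
    """Toy stand-in for an uploaded mind: its only state is the
    subjective time it has experienced."""
    def __init__(self):
        self.subjective_seconds = 0

    def run(self, seconds):
        self.subjective_seconds += seconds

def supervise(voyage_seconds, cycle=180, emergencies=()):
    """Nagata-style watchdog: every `cycle` seconds the mind is reset to
    a baseline snapshot, unless that cycle raised an emergency (the dead
    man's handle), in which case the experienced time is kept by taking
    a fresh snapshot."""
    mind = Mind()
    baseline = copy.deepcopy(mind)            # snapshot taken at departure
    for n in range(voyage_seconds // cycle):
        mind.run(cycle)
        if n in emergencies:
            baseline = copy.deepcopy(mind)    # emergency: keep this chunk
        else:
            mind = copy.deepcopy(baseline)    # boring: discard the chunk
    return mind

# A (toy) one-day leg of the voyage with a single emergency in cycle 7:
# the crew member subjectively experiences only 180 seconds of it.
crew = supervise(voyage_seconds=24 * 3600, emergencies={7})
```

The disquieting part is visible right in the loop: the copies that get overwritten each cycle experienced their three minutes just as vividly as the one that survives.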

Other authors found other angles on this question: the first book in Hannu Rajaniemi's Jean le Flambeur trilogy (The Quantum Thief) starts with the exact opposite—a thief sentenced to spend a subjective eternity in an escape-proof prison, as a punishment of sorts. Spoiler: he escapes. How he does it and why he was there is the start of yet more musing on what might constitute crimes in a realm populated entirely by uploaded minds. In particular Rajaniemi dives headlong into two really disturbing questions: firstly, the potential for eternal enslavement such a setting offers (never mind perpetual torment), and secondly, what it does to the post-Enlightenment social concept of human equality.

We are all living in the afterglow of a sociological big bang that took place in 1649: the execution of Charles I, who was variously King of England and Wales, Scotland, and Ireland at the time of the Wars of the Three Kingdoms. His trial and execution by a court appointed by a parliament of the people shattered the then-prevalent understanding among European/Christian communities that Kings were appointed by God to rule on Earth. A corollary of the Divine Right of Kings is that some people really aren't equal: monarchs, and by extension aristocrats, have more rights (by religious decree) than other people, and some categories (chattel slavery springs to mind: also the status of women and children) have less. But if the People could try the King for crimes against the state, then what next?

"What next" turned out to be a troublesome precedent. Charles I's younger son James II tried to walk back the uneasy settlement with parliament and got yeeted into exile in 1688-90 as a result, with the resounding and lasting outcome that the powers of the Crown in English and Scottish law were now vested in Parliament, and the head beneath the fancy hat was merely a figurehead who could be sacked if he (or she) acted up. If the monarch wasn't divinely appointed, what set him apart? Numerous philosophical maunderings later it was the French king's turn, and also time for the US Bill of Rights, which, while based on the 1689 English Bill of Rights, implicitly adopted the pernicious logic that there could be no king, no nobility, only free citizens. (Pay no attention to the slaves—for now.)

Here's the thing: our current prevailing political philosophy of human rights and constitutional democracy is invalidated if we have mind uploading/replication or super-human intelligence. (The latter need not be AI; it could be uploaded human minds able to monopolize sufficient computing substrate to get more thinking done per unit baseline time than actual humans can achieve.) Some people are, once again, clearly superior in capability to born-humans. And other persons can be ruthlessly exploited for their labour output without reward, and without even being allowed to know that they're being exploited. Again, see also the subtext of Ken MacLeod's The Corporation Wars trilogy: in which the war between the neoreactionaries and the post-Enlightenment democrats has been won ... by the wrong side.

The second book in The Quantum Thief trilogy, The Fractal Prince, gives us a ghastly look at a world where genocide and enslavement are carried out by forcibly abducting and uploading the last born-human survivors—is it actually genocide if the body is dead but the mind is still there? (It's a new version of Caedite eos. Novit enim Dominus qui sunt eius, of course.) It may not be genocide in the currently accepted legal sense of the term—the forcible extermination of a cultural group or of the people who are members of such a group—but it's certainly a comparable abomination.

Our intuitions about crimes against people (and humanity) are based on a set of assumptions about the parameters of personhood that are going to be completely destroyed if mind uploading turns out to be possible. And the only people I see doing much thinking about this (in public) are either SF authors or people pushing a crankish ideology based on 19th century Russian orthodox theology.

Surely we can do better?

1538 Comments

1:

I've been rewatching Westworld (the TV series) over the last couple of weeks as there's a new season out, and they do quite a lot with the topic of who is human, uploadability, and the different worldview you have when you are revivable. Also Black Mirror has done a fair bit with these topics. These kinds of big-budget mainstream TV series are going to provide the cultural reference points for this in future.

2:

firstly, the potential for eternal enslavement such a setting offers (never mind perpetual torment) - See also Iain Banks, um, err "Matter" (?)
Re: The Fractal Prince - I still couldn't work out what was actually happening - I have all three books & am still "confused".

As for "doing better" - we need (I think) to (re)start by redefining what makes a human, in as broad a set of terms &/or descriptors as possible, & then give them actual equal rights ....

3:

Matter — noted, and the uber-capitalist who makes bank by buying up obsolete hells and running them as a service for disreputable species who want to torment their dissidents for eternity was a striking image! But I'm leery of citing Banks as a reference point for plausible futures; his SF conceits are only very tenuously constrained.

4:

The thing I'm pitching these next couple weeks takes the idea that for plot reasons, the world is now full of ghosts - essentially, the soul of person, mostly intangible EXCEPT that they can affect electronics.

Capitalism immediately realizes how very useful a bunch of minds who can operate machines but have no legal rights is, and it goes just as badly as you might imagine.

Germane to the topic, I've found lots and lots of new crimes and horrors to go with this.

5:

Perhaps the most hopeful thing is that we still don't understand enough to know how to do this in theory. It strikes me that this is the same ethical problem as 'true AI', because I don't regard the origin of an intelligence as important, ethically. Yes, it's a horrific problem, and I agree that we would see slavery come back (not that it had ever really gone away).

6:

Minor nitpick:

Charles I's grandson tried to walk back the uneasy settlement with parliament and got yeeted into exile in 1688-90 as a result

James II was Charles I's son, not grandson.

7:
Mind uploading makes certain assumptions. (Notably: mind/body dualism is a bust, there is no supernatural element to consciousness, also that we can resolve the structures involved in neurological information processing with sufficient resolution to be useful, and that the connectivity and training of the weighted neural network in the wetware is what consciousness emerges from.)

If you want a horrific thought about what it would mean if we did have mind/body duality, consider what would happen if someone mechanises a way to access “mind” computationally. Do we want IBM to own all our souls?

Now, on with the rest of your caveats which are my specialist subject (helping lead the UK contribution to the Human Brain Project, before I retired due to Brexit).

(*) According to what Henry Markram (look him up) told me, the biological neural network is configured at an early stage. What then happens is that babies start to lose connections at a fairly startling rate; I’ve heard of numbers such as 90% of all possible connections are lost in the first few years of life.

(*) His thoughts on neural plasticity are that each synapse (connection between neurons) is one of five or six possible connections it might make to five or six other neurons. Plasticity then consists of the connection trying each of these six possibilities to find the “best” — whatever that might mean.

(*) Seth Grant (look him up) told me that this synapse is really a very complicated bio-machine-mediated connection. We know next to nothing about these bio-machines — each about a megadalton in size — and what they do.

(*) Finally, and fatally undermining my area (neuromorphic computing: taking a simplified approach to it all), we now know that signals within the dendritic tree (an in-fanning structure connecting synapses to the neuron's soma, or cell body) are not additive. The effect of incoming spikes is greatly affected by other signals currently in the dendritic tree.

So just knowing the connectome (the 85 billion neurons in the brain, along with the connectivity graph of roughly 10,000 connections per neuron) is merely your starter for ten. You will also need to know something of the state in each individual synapse.
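To put those figures in perspective, a back-of-envelope calculation of the raw storage floor they imply. The per-synapse byte counts here are my own, deliberately optimistic assumptions, not the commenter's:

```python
# Rough floor on the raw connectome data, using the figures above:
# 85 billion neurons, ~10,000 synapses per neuron.
neurons = 85e9
synapses_per_neuron = 1e4
synapses = neurons * synapses_per_neuron      # 8.5e14 connections

# Naming a target neuron takes ~37 bits (2**37 > 85e9); call it 5 bytes,
# plus a single (very optimistic) byte of per-synapse state.
bytes_per_synapse = 5 + 1
total_petabytes = synapses * bytes_per_synapse / 1e15

print(round(total_petabytes, 1))  # about 5 petabytes per brain
```

And that is before you account for the megadalton synaptic machinery or the non-additive dendritic signalling described above, neither of which fits in a byte.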

That said, Katrin Amunts (look her up) has a basement in Julich where she slices up donor human brains in an attempt to construct a connectome.

8:

This reminds me of a campaign idea I had for Eclipse Phase ttrpg. Maybe some day...

9:

Oh, yes, device variability.

We have no way of knowing which sort of neuron is in which location. There are lots and lots of different sorts of neurons, but because even neurons of the same type vary wildly in their responses, we cannot usually determine which is which.

10:

Quick note - Surface Detail is the Culture book that revolves around simulated hellish afterlives. I don't remember Matter nearly as well, it might touch on the topic, but IIRC it's not as central to the plot.

Some of the other Culture books mention related issues - I think it's Hydrogen Sonata that touches on how the Culture doesn't like doing full human-level simulations for predicting future outcomes. I don't think that's as likely to be an immediate issue in reality, though, partly because of the computational resources, partly because it doesn't seem like you need full mind simulation for accurate modeling.

11:

Here's a scenario, which I think some are trying to instantiate right now.

There is no meritocracy, you have no inherent rights, even to existence. You have relative rights based on how your existence is valued by one or more members of the super-rich, and civilization exists to grow and protect their freedoms, their power, and their continued existence.

This "solves" the problem of dealing with an increasing number of beings with an increasingly large spectrum of skills by wiping out all their rights, and focusing only on their utility.

Personally, I don't want to live in this type of world, and I'm increasingly unwilling to pay for works that promote such worlds too. It's more worth imagining plausible alternatives, no?

12:

"Perhaps the most hopeful thing is that we still don't understand enough to know how to do this in theory."

...although that does not eliminate the possibility of some bunch of aliens who do know how to do it and don't bother to ask first. For instance the Jarts in Greg Bear's Eternity, and the described experiences of Rhita Vaskayza (plus the uncounted undescribed other such instances). They are not evil, they just have an utterly unhuman value system, and the result is mindbendingly shit for any being who doesn't completely share that system.

It's really a subject I basically avoid thinking about, because the thoughts invariably head so rapidly in the direction of horror beyond belief that I just shut down on it. Accordingly my response to the question of how the law would need to be changed is purely reflexive and to the effect that it should simply be banned absolutely without exception, and any computing devices, storage media etc. that could be used for that purpose are also banned and must be destroyed by at the minimum reducing them to plasma.

I haven't seen any SF treatment of the idea that doesn't equate literally and precisely to hell, including the ones that are supposed to be some kind of paradise. "Matter" is initially shocking but is actually not as bad as what all the variants would end up being like eventually.

@ Charlie: "...our legal systems can't cope without extensive modification because we rely on an implicit definition of humanity which at that point will be obsolete"

I don't think it's just the legal systems, I think the state of humanity itself relies on that (in a sort of circular/bootstrapping way). Part of my extreme objection is that I do not want my personal state of humanity thus redefined.

"Notably: mind/body dualism is a bust"

Is experimentally demonstrated, surely?

13:

"Quick note - Surface Detail is the Culture book that revolves around simulated hellish afterlives."

Ah, OK. "Surface Detail" is initially shocking then. I can never remember which of those two is which.

14:

That's roughly how I feel, especially with regards to uploading tech specifically. It's certainly an interesting issue to think about, but scanning an existing brain seems well out of reach from everything I've heard. Take that with a grain of salt, though, I don't know much about the various fields involved. Current ML-based tech doesn't seem likely to lead to anything like what we'd recognize as conscious minds, though it's worth thinking about what other models of thought it might lead to, and how we'd incorporate them into our framework of human/social rights. (See OGH's Rule 34, not to mention Peter Watts's books)

15:

Do we want IBM to own all our souls?

I can think of worse people than IBM. IBM functionaries can at least probably be persuaded that they'll end up in the same place themselves eventually, so best be at least moderately non-evil.

I'd be a lot more worried about the Southern Baptists, the Church of Latter-Day Saints, the Taliban, Vladimir Putin, and so on.

On the buried complexity below the level of the connectome: yup. My crude metaphor for it is we're electronic engineers circa 1950-55, looking at a modern smartphone motherboard and scratching our heads. We can see the circuit board tracks with a microscope but those resin carriers with the impure silicon crystals inside are clearly magical black boxes ...

16:

Another alternative starts with the concept of animism: that everything, without exception, has some inherent rights. Obviously creation, destruction, and exploitation have to happen, but these are to be regarded as takings that need to be compensated for, not inherent rights given to the exploiters by their Creator(s).

This again does away with the meritocracy, but it does so by scrapping intelligence and ability as critical qualities and assigning agency to everything. It's an extremely old and widespread concept; the more modern JCI systems and the sciences that spring from them are notable for being actively hostile towards it. But if you want more global inclusivity, it's probably worth exploring a system of law based on universal agency without regard to ability.

One way to do it might be to riff on the Australian aboriginal concept of "Dreaming" (not what they call it, but a widely used term by ignorant white outsiders including myself, so...).

The basic ideas are:

-- "Dreaming" is a kind of system. That system and its components have inherent rights of existence.

-- People belong to dreamings, and people includes transhumans. They are both members of Dreamings (Human Dreaming, AI Dreaming, etc.) and parts of other dreamings (the GHG dreaming, white oak dreaming, worm dreaming, etc.). In the latter role, they have the responsibility of caring for the other members of their Dreaming, not exploiting them. In tech terms Dreamers are sysops, not users.

Governments (which have their own Dreamings) exist to help the Dreamers keep all the Dreams in existence by providing a framework within which necessary exploitation can happen, while unnecessary exploitation can be remediated and punished.

Dreaming systems are managed communally (meaning by communities of beings with differing capabilities), not by individual authorities. There's no Prince of Tides, just a lot of beach dreamers trying to keep sandy shoreline systems in existence in the face of sea level rise, using whatever governance structures trial and experience has shown works. I'm again being vague here: systems are diverse, and sewage, the internet backbone, particular crops, and rare species all require very different community structures and management systems. As an ecologist, I don't want to tell a computer programmer how SmallTalk Dreaming should work, but equally, I want them to realize that what worked for them may not work at all for me.

Systems vaguely akin to this have been used by humans for millennia at low-density but wide scale. While I think there are huge problems with keeping exploitative authoritarians from doing "move fast and break things" attacks (cf Eurasian deep history), such attacks are only useful when the attacker has a substantial advantage, AND there's a substantial surplus to be gained from the attack. Going into the 21st century and beyond, I don't think surpluses of anything except garbage will be all that available.

17:

I am pretty sure that to a first approximation, a universal declaration of posthuman rights should enumerate an absolute ban on the creation of hells (afterlife sims designed for punishment) for any purpose. (NB: this makes Roko's Basilisk a criminal.)

Also probably something equivalent to the right-to-life for us squishy meatsacks (only more inclusive, for folks who can fork()/exec() themselves), the right to suicide irrevocably (with complete and permanent erasure of backups and offsite copies of one's mind), the right not to have your personality, beliefs, or memories modified without prior informed consent, the right not to be enslaved ...

And some definition of "person" that focuses on individuals with a distinct identity and some features (ability to communicate and process information, consciousness) that are more expansive than "made out of meat derived from H. sapiens sapiens cell lineage".

18:

Funny thing is that humans have always uploaded parts of themselves. I'd be crippled without my online and library selves, the medical records my doctors maintain, the legal records my government maintains, and so forth.

Again, we're stuck with a system that instantiates JCI concepts, even though their reality is questionable.

Are brains stuck in heads? Nope, neurons run through our entire body. There's no physical break between the neurons in our heads and the ones that extend through our bodies as nerves.

The problem with an upload likely isn't someone uploading your connectome, but rather companies extending deep fake technology to the point where (at least online) the tech can convince anyone, including you, that it is you online, and it's far too complex under the hood for anyone to demonstrate otherwise.

This metaphor is akin to the comments that submarines aren't artificial fish, and there's little reason to build them to swim. Similarly, why should we simulate people's brains online when a much smaller AI can literally impersonate them? And if even you can't tell the difference between it and you, why shouldn't it share all your rights?

19:

One issue with uploading is the question of why we'd want to do it.

A moderately dystopian version is something like The Tunnel under the World where the victims/subjects are used to test the effectiveness of advertising. It doesn't take too much imagination though to imagine a Microsoft Mengele experimenting on virtual concentration camp victims without even having to burn the bodies. Although perhaps BF Skinner would be a more apt comparison, considering these would be purely psychological experiments.

Those issues apart though, if we've got enough processing to emulate a human brain in anything like real-time, then it seems clear we've got enough to do orders of magnitude better processing without consciousness. After all, all human analysis processes are notoriously crap, even under perfect test conditions. We're pretty good generalists, but give us anything more detailed and we really, really suck. If you add the requirement to concentrate on a task for long periods, it's beyond proven that the best way to get the task done is to remove the monkey from the loop. And unless we get to the point we can radically overclock our emulations, our reaction times are pretty awful.

In comparison, if you want something done well, you can design or train something which isn't self-aware and doesn't have the built-in flaws of human processing. Accidents with self-driving cars are pretty well documented, but no self-driving car is going to do a Charlottesville or get drunk or high. The bar for replacing humans with something else isn't perfection, it's just being better than a human - and if you look at the data it's bloody hard to find something that we're good at. From driving to flying to face recognition to spotting cancer cells, we're running out of reasons to have humans do it.

Scanning and replicating specific humans might be a win, if they can keep thinking new thoughts. If you could respawn multiple Einsteins, then yay for science. But your average human, doing general data-processing work as in Lena, it seems hard to justify.

Which leads to a conclusion that the main reason to upload humans is to prolong personal existence. That being the case, we're going to want to keep interacting with the world in real-time - and in that case most of the existing laws about personal freedom (and how the law can take it off you) still apply. Of course if you know you're immortal then you can theoretically afford to wait 100 years for that jail term for murder to finish. But long-term imprisonment isn't something you just sit out without personal consequences, especially if that whole time is spent in isolation (you don't need food or exercise), so it still serves as a punishment.

Short version: in practice I'm not sure all this would be quite as different as it potentially could be.

20:

Even IF you go with the 'we can do this, so we should also be able to do this' logic, which I wouldn't call a given, what needs to be true is that it can do something as well as a human ... and cheaper*.

You COULD develop a bunch of expert systems and automate them...or you could just torture a simulated mind to do it. It's essentially a general AI that you can apply to all kinds of things and the idea that it's cheaper, assuming you have the tech and the lack of laws to stop you, makes it plausible.

I mean, we currently produce more than enough food to feed everyone and...don't. The assumption that because something is possible it's how people will do it is...dubious.

*In the short term, anyway.

21:

There's a passing comment in The Peripheral about using a human as some sort of boutique AI... the plot of the whole book kicks off with someone engaging a human to do a job that could've been automated

22:

It may not be genocide in the currently accepted legal sense of the term—the forcible extermination of a cultural group or of the people who are members of such a group—but it's certainly a comparable abomination.

The current definition of genocide already covers forcibly transferring children: the destruction of the group as a cultural and social unit, rather than its physical extermination.

23:

Here is what makes me extremely skeptical about the way upload is usually understood in SF. For 31 years of my life I worked on the neuroscience of behavior in the nematode Caenorhabditis elegans. The normal Ce hermaphrodite has 302 neurons (always exactly that number), and every normal Ce herm is anatomically identical at the cellular level to every other. The nervous system was fully reconstructed from serial section electron micrographs. Here is the principal publication. We can monitor the activity of the nervous system in a living worm in real time using fluorescent probes. We can manipulate nervous system function genetically. It is either the best or second best (fruit fly researchers sometimes make the claim) animal.

And STILL, in 2022 no one on Earth can tell you how that nervous system produces normal worm behavior. We certainly can't reproduce it in software. Not even close.

24:

*It is either the best or second best (fruit fly researchers sometimes make the claim) UNDERSTOOD animal.

25:

Re: The Fractal Prince - I still couldn't work out what was actually happening - I have all three books & am still "confused".

I read all three because OGH stated on this blog that they presented a realistic SF depiction of uploading. I was disappointed. If I remember correctly an upload occurs exactly once in the entire series, and it is covered in two paragraphs with the same superficiality one finds in other SF accounts of uploading. I suspect what Charlie was thinking of was Rajaniemi's rather detailed description of the architecture of the hardware system on which brains were simulated.

But it was a win, anyway -- I enjoyed them. Yes, I had trouble working out what was actually happening, but I have gotten so used to that in modern SF that I am disappointed now if I read a novel and it all makes sense on the first read.

26:

You all keep talking about hells for uploaded minds.

But what about the AI Devil that runs the place?

Something like AM from "I Have No Mouth, and I Must Scream"?

27:

Re: 'So just knowing the connectome — the 85 billion neurons in the brain along with the connectivity graph of 10,000 connections per neuron, is merely your starter for ten. You will also need to know something of the state in each individual synapse.'

Apart from identifying, pigeon-holing (mapping) individual neurons, I'm wondering how many potential combinations and permutations would then have to be mapped to result in a verifiably 'identical' mind? Neuronal plasticity can change connections plus it seems that new nervous system structures and functions are still being discovered. Basically: The more parts, the likelier something can and will go wrong. (Asimov's robot stories explored some weird results as a consequence of a much, much smaller set of factors, Three Laws.)

The gut-brain axis plus microbiome - both concepts have become accepted as the way humans operate and most likely are necessary for human survival. So far the microbiome discussion has been limited to the gut, but what if it turns out that there's an equivalent non-human microbe necessary for optimal human brain function? (Yeah - I know, there's a blood-brain barrier between the brain and the rest of the human body, but at the same time it's also been established that some viruses/parasites hijack insect/frog brains, so IMO the potential therefore also exists for a virus/parasite to get humans to act in ways that are healthy/beneficial for humans.) My long-winded point: what if the uploaded brain doesn't properly function because the 'cerebrome' wasn't yet known/uploaded? Plus - a cerebrome adds more pieces and processes, complicating things even further.

My understanding is that emotions are the result of interactions either with people or the environment. This means input/feedback with the sensory systems is a must-have. If your intent for uploading your brain is to be eternally happy, how do you program/upload for emotionally-complete/evoking sensory data inputs and processes? Also - what is the risk that the program that edits and consolidates your current brain processes doesn't lop off something that upon 'awakening' turns you into a psychopath, e.g., Phineas Gage? Will potential uploaders be psych screened for psycho/sociopathy? [Thanks but I'll take a pass on an immortal DT.]

Not sure how much sense the above makes - it boils down to: do we know enough about anything? Maybe learnings from another branch of biotech (CRISPR-Cas9) might help us understand some of the likely ethical implications.

28:

Since we're reaching into the annals of fiction, I will note that Clive Staples Lewis discussed this in 1945 in That Hideous Strength. (His mad preacher, Straik, might very well be based on Fedorov.)

In this excerpt, our antagonists are trying to recruit the deuteragonist Mark into their scheme to take over the world forever, and he has learned of their key advance: keeping a human consciousness alive after what would otherwise be its natural death. The potential for horrific abuses was not just obvious, it was a key feature:

"It is the beginning of Man Immortal and Man Ubiquitous," said Straik. "Man on the throne of the universe. It is what all the prophecies really meant."

"At first, of course," said Filostrato, "the power will be confined to a number--a small number--of individual men. Those who are selected for eternal life."

"And you mean," said Mark, "it will then be extended to all men?"

"No," said Filostrato. "I mean it will then be reduced to one man. You are not a fool, are you, my young friend? All that talk about the power of Man over Nature--Man in the abstract--is only for the canaglia. You know as well as I do that Man's power over Nature means the power of some men over other men with Nature as the instrument. There is no such thing as Man--it is a word. There are only men. No! It is not Man who will be omnipotent, it is some one man, some immortal man. Alcasan, our Head, is the first sketch of it. The completed product may be someone else. It may be you. It may be me."

"A king cometh," said Straik, "who shall rule the universe with righteousness and the heavens with judgement. You thought all that was mythology, no doubt. You thought because fables had clustered about the phrase 'Son of Man' that Man would never really have a son who will wield all power. But he will."

"I don't understand, I don't understand," said Mark.

"But it is very easy," said Filostrato. "We have found how to make a dead man live. He was a wise man even in his natural life. He live now forever: he get wiser. Later, we make them live better--for at present, one must concede, this second life is probably not very agreeable to him who has it. You see? Later we make it pleasant for some--perhaps not so pleasant for others. For we can make the dead live whether they wish it or not. He who shall be finally king of the universe can give this life to whom he pleases. They cannot refuse the little present."

"And so," said Straik, "the lessons you learned at your mother's knee return. God will have power to give eternal reward and eternal punishment."

30:

The gut-brain axis plus microbiome - both concepts have become accepted as the way humans operate and most likely are necessary for human survival.

One thing I liked about Stephenson's /Fall, or Dodge in Hell/ (which remains, at this date, the closest thing I've seen in SF to an appreciation of the real difficulty of the upload problem) was that the uploaders eventually realized that it is not enough just to scan heads. To get a realistic reconstruction, you need the whole body.

This would particularly be true for emotions. The main thing that distinguishes emotions from other mental states is that they are tightly linked to survival-related changes in body physiology. Thus, the sensation of one's heart speeding up is an essential part of the experience of fear. This is why beta-blockers, which have essentially no direct effect on the brain, reduce the emotional stress of fear-inducing situations -- they block the peripheral effects of fear.

31:

Side note:

I once had a born-again Baptist tell me in all earnestness that Gandhi and other good people were burning in hell because they did not accept Jesus as their personal savior. This means all Hindus, Jews, Muslims, agnostics, etc. - as well as a good many Christians who are not fundies - burn in hell.

OK, so Gandhi dies and wakes up in the agony of the infernal pit. After a few thousand years he starts to meditate and finds that he can think through the pain. After a few eons he can actually function as a creature with agency and ability to act.

He starts teaching/preaching to the damned around him (the vast majority of whom are good people like himself - there being comparatively few Nazis and mass murderers over the course of human history). They follow his path to a point where they can function in Hell and begin to create a "Society of the Unfairly Damned".

Upon meeting Satan (who, like his demons, is also in agony in hell) Gandhi convinces him that the best way to revolt against God is to practice non-violent soul force and not cooperate with injustice. Soon everyone but the worst of hell (Hitler makes an obligatory appearance) has followed this new path, creating a new universe of their own making.

God can do nothing to stop this since hell, by definition, is a place where God is absent.

In a few million years, hell is transformed into an advanced universe of mathematics, poetry, songs, science and philosophy.

Then its denizens, who have had millions of years to solve this problem, find a way out of hell and enter the universe of God....

32:

tomoyo & Duffy
And ... C S Lewis was one of ( If not the ) first to warn us of this ...

Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end, for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. Their very kindness stings with intolerable insult. To be ‘cured’ against one’s will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.

And Lewis was an almost-foaming-level Christian, but even he could see the problem.

33:

Quite right LAvery.

I have in mind that there was a spoof article in the style of a neuroscience experiment, but looking at a transistor radio (or computer?). Can you help with a link?

Ta.

34:

Thank you so much, Charlie. A lot of this does matter in the future universe I'm writing, and now I have to reconsider a number of things in two whole bloody novels. (They're still in "considering revisions" stage, with one just having a first beta read).

I also have a short dealing with downloading that I'm trying to sell, and yes, it's very disturbing.

One thing, though: all of what I see in this thread assumes that the uploaded mind is always running, except for being shut down and restarted. What's not considered is, well, offline backups of your mind.

35:

ARGH! That is a novel, and series, I HATE, and refer to as That Hideous Trilogy. And I assure you that had it not been C.S. LEWIS, the second and third would never have gotten out of the slush pile, except at a religious publisher. The one you mention, right: everyone who wants anything new (new, def: anything from after Lewis was born and grew up) is a direct agent of the Devil (tm), and in the end, he literally pulls the animals out of the city zoo to kill the leaders.

This is so overwhelmingly Christian-biased I consider nothing in it worth it.

36:

Dave Lester:

Katrin Amunts (look her up) has a basement in Julich where she slices up ... human brains in an attempt to construct a connectome.

Why have I got the notion that a sinister version of her is right at home in the Laundryverse? (Ellipses indicate removal of "donor" from the original sentence.)

37:

SFReader,

You ask some very interesting questions.

On the matter of "things going wrong", there must be lots of mechanisms that try to keep the brain machinery on track. We know of relatively few. One of the crudest must be energy supply. If all our neurons fired at the same time (instead of ten times per second), we'd be having an epileptic attack, and would be exhausted afterwards. As I say, there must be hundreds of others.

As you also say, a disconnected brain is not going to be doing anything very much — that’s why we had a part of the project that was devoted to providing a robotic front end. The leader of that effort (Alois Knoll) told me of his need to do “soft robotics”, where instead of a rigid steel skeleton and a classical PID controller, we had a fibreglass/carbon fibre skeleton so that a wild movement wouldn’t decapitate a human standing nearby! Plus, it could compensate for wear and increased friction, by adapting as it learns about its environment.

A thought from the late Karlheinz Meier (one of the designers of the ATLAS-1 detector at CERN), who had a neuromorphic system that runs 10,000 times faster than real time, was to connect it up to financial data, and see if it did anything interesting.

The usual mechanism for linking brains to the rest of the body for moods is supposed to be hormones — I’m not sure whether it is that simple.

38:
Katrin Amunts (look her up) has a basement in Julich where she slices up ... human brains in an attempt to construct a connectome. Why have I got the notion that a sinister version of her is right at home in the Laundryverse? (Ellipses indicate removal of "donor" from the original sentence.)

There was a competition for literature inspired by the HBP. I had considered making an entry, along the lines of LaundryVerse FanFic, but I was a bit hectically busy at the time.

"In a basement in Julich -- where the German government stores its nuclear waste, and houses its most powerful computers -- a lone female scientist works on late into the night. ..."

39:

NB: this makes Roko's Basilisk a criminal

Jail for Roko's Basilisk! Jail for one thousand years!

Roko's Basilisk is wasting time (processing power) by torturing those who have knowingly impeded the once-and-future AI king, when they might simply create simulated versions of people who have already been tortured for an appropriate period.

40:

I have in mind that there was a spoof article in the style of a neuroscience experiment, but looking at a transistor radio (or computer?). Can you help with a link?

Sorry, doesn't ring a bell. Or are you asking for a link to something else I said?

I have been told that the problem can be solved for integrated circuits, i.e., for things that we engineer ourselves. I met an interesting guy once at Janelia Farm, which, as I'm sure you know, is a Howard Hughes Medical Research Institute campus devoted to the problem of understanding how nervous systems work. This man I met was not a (conventional) neuroscientist, but an electrical engineer. He told me that Intel was in the habit of making scans of their CPU chips available to academic researchers, without circuit diagrams or functional information. (This is the very rough equivalent of the C. elegans EM reconstructions.) A group of academics decided to see if they could reconstruct the functional circuitry, and succeeded! (He claimed. I cannot evaluate his claim, but I was impressed with the guy. Definitely seemed to know what he was talking about.)

This is, for a host of very obvious reasons, a much easier problem than reconstructing a working brain model. But even this much easier problem was not easy.

41:

The usual mechanism for linking brains to the rest of the body for moods is supposed to be hormones — I’m not sure whether it is that simple.

It certainly is not. At a minimum you need the autonomic nervous system. And remember, the autonomic nervous system is a NERVOUS SYSTEM, which means it has billions of specific connections. It is not the simple global fight-or-flight system of the popular caricature.

42:

I'm concerned about the simple question of what religions can do with nothing more than a smart phone. It's so horrific that I'm not even going to post what I'm thinking of.

43:

Re: '... our identity has been linear: there is a start, there is a terminus, along the way we are indivisible,'

Time as part of self-identity. Just remembered 'patient HM' who because of brain surgery:

a) remembered himself/his life up to the moment that he had surgery

b) nevertheless was able to 'learn' new materials such as how to solve a problem even though he was unable to recall when/how he learned to do this

This is a must-read for neuroscience:

'The Legacy of Patient H.M. for Neuroscience'

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2649674/

Met someone doing a neurosci PhD at McGill who mentioned that Brenda Milner - who interviewed HM for decades and basically helped establish that there is more than one memory system - was still doing lab work and seminars at the age of 96. And it looks like she's still going - amazing! (She's 104.)

https://en.wikipedia.org/wiki/Brenda_Milner

44:

As I was reading the description of the uploaded minds panicking, or suffering some other reaction, I wondered if there would be software/data analogs of various mood-altering drugs? Something in the emulation environment that acts like one of the selective serotonin reuptake inhibitors? Or the "be happy, don't worry" synthetic opioids?
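One way to picture that: in an emulation, a "drug" could just be a global parameter. Here's a toy sketch of the idea (not any real emulation API; ToyNeuron and modulator_gain are invented for illustration):

```python
# Toy leaky-integrator "neuron" whose input gain is scaled by a global
# modulator parameter -- loosely analogous to dosing an emulated brain
# with an SSRI by turning a config knob. Entirely illustrative.

class ToyNeuron:
    def __init__(self, leak=0.9):
        self.v = 0.0      # "membrane potential" (arbitrary units)
        self.leak = leak  # fraction of potential retained per step

    def step(self, inp, modulator_gain=1.0):
        # The emulation host could expose modulator_gain as a tunable:
        # same wiring, different dynamics.
        self.v = self.v * self.leak + inp * modulator_gain
        return self.v

baseline_neuron, dosed_neuron = ToyNeuron(), ToyNeuron()
for _ in range(50):
    baseline = baseline_neuron.step(1.0)
    dosed = dosed_neuron.step(1.0, modulator_gain=1.5)
# The dosed neuron settles at exactly 1.5x the baseline operating point:
# one global parameter shifts the whole network without rewiring anything.
```

Whether a real emulated connectome would respond anywhere near this cleanly is, of course, exactly the open question.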

45:

Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive.

Sort of like a parent-child relationship?

46:

Don't troll.

Children are a special case insofar as they're demonstrably not competent, at least at first. (They mature at differential speeds, and I'm very far from convinced that a one-size-fits-all "you're 18 tomorrow, at which point you're an adult, but today you're my chattel and you will obey" approach is valid, but it's undeniable that a one year old needs continuous care and guidance to ensure they don't harm themselves.)

Look, humans are squishy meatsacks with irregular boundaries. You can't generalize too much, beyond "prolonged exposure to vacuum may be detrimental to future metabolic viability of shaved apes".

Otherwise, here's a blog essay I baked in 2016 about the limits of the simulation hypothesis and resurrection sims in general.

47:

The horrors that will be available to those controlling the compute resource will be appalling; however, it is likely to be something that gets sorted out within a generation or so. Most probably, it'll be the elite that get to upload first, and they're not going to do that unless they're pretty sure that the legal protections are reasonably solid.

We probably need to enshrine in law (with some serious penalties) that no human simulation may be altered or run as a means to an end, but only as an end in itself. While I loved Charlie's version of the singularity where AIs were limited by needing to simulate human level minds, it seems pretty unlikely that we would achieve upload without being able to do fairly human equivalent things without simulating an actual human.

Perhaps this would lead to two levels of AI - those that can claim that they are ultimately the result of an upload, which will be fully protected by law, and those that cannot: the evolution of tools and 'watch-the-space-ship' functions. That second class could easily be as or more advanced than the first class, but it's hard to see how they would achieve rights without support from humans and the first class.

48:

Um... and twenty or thirty years from now, exactly who will be controlling those compute resources? My Core i7 desktop with 16GB RAM and terabytes of storage, compared to my Really Cool 286 with half a meg of RAM and wow, my boss gave me for a holiday present a 30MB HD, much bigger than my 20MB HD.... (true story, holidays 1989).

49:

My Core i7 desktop with 16GB RAM and terabytes of storage

And if you do much more with all that power than access services run on cloud compute then you're not the normal person.

I'm finding that even dumb simple utilities that used to live on my hard drive are now SaaS websites with a $9.99-per-month basic plan. It would be great if the trend were to move things to human-person-owned devices, but that's not the trend I'm seeing.

50:

Comment deleted by moderator

51:

it seems pretty unlikely that we would achieve upload without being able to do fairly human equivalent things without simulating an actual human.

Yep, there's a lengthy passage—a no-shit "tell me, professor" lecture delivered by an actual Professor of AI in Rule 34, infodumping on the hapless detectives, about why nobody outside a few cognitive science academics wants an actual human-like artificial intelligence: it'd have its own goals and probably want to do the AI equivalent of sitting on the sofa watching reality TV shows and eating popcorn all day. (Hence ATHENA, which is very much not a human-like AI.)

52:

Greg: I deleted your comment because you can use the previous discussion thread for random current events without derailing this one.

53:

I agree. What purpose, other than study, would having an actual self-aware AI serve? You want a very good, general purpose system that does what you want. That's why I have problems with robot rebellions - who's going to revolt, the welding robots on the assembly line at a car plant?

In my universe, I've put in Intelligent Agents. They are explicitly not self-aware, and so have a balanced, nuanced view of problems.

54:

Another possible “Why” for uploading Humans is of course easier programming. Bunches of Humans and support data systems would be packaged up as “functionality libraries”.

For example, here’s one that does income tax returns. Several accountants, a couple of lawyers, a copy of the tax code and a document workflow system are included. It’s essentially a small tax processing company in a box. Lazy programmers like it because it can interface with their code automatically (it has the smarts to talk to other libraries to find out what documents they need). Who cares that it simulates the lives and equipment of a small company in these times of cheap cloud computing? It’s just so easy to plug and play!

So you thought code bloat was bad. Wait until some slob programmer includes a bureaucracy in a box to format international date strings. Sometimes the box sends a delegate to international date standards committee meetings. You now have excessive bloat AND long update times!

There should be a contest for worst use of computonium. Using a small simulated country to find the length of a string would be one. Another contender would be that project to recreate historical people by repeatedly approximating their existence until their output matches historical values…
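For flavor, here's what that tax-company-in-a-box might look like to the lazy programmer. Everything here is hypothetical satire, sketched to match the comment above:

```python
# Purely illustrative: a "functionality library" facade that hides a
# simulated firm of accountants behind one plug-and-play call.
# All class and method names are invented.

class TaxReturnLib:
    def __init__(self):
        # The "library" secretly spins up a small simulated company.
        self.sim_staff = ["accountant"] * 3 + ["lawyer"] * 2

    def negotiate_inputs(self, other_libs):
        # "Has the smarts to talk to other libraries to find out what
        # documents they need" -- stubbed here with a fixed list.
        return ["W-2", "1099", "tax code snapshot"]

    def file_return(self, documents):
        # One call for the caller; billions of wasted sim-cycles inside.
        return {"status": "filed",
                "documents": len(documents),
                "sim_cycles_burned": len(self.sim_staff) * 10**9}

lib = TaxReturnLib()
result = lib.file_return(lib.negotiate_inputs([]))
```

The interface is trivially easy to use, which is the whole point of the joke: the bloat is invisible until someone audits the compute bill.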

55:

This fits with my speculation that there are realistic models of computation that cannot be mapped onto a Turing machine, and that, if we want to understand human or even C. elegans mentation, we will probably have to use some of them. I have tried to get further but am not smart enough.

56:

That's absurd. Why would you want to waste all your CPU cycles simulating a human, or a bunch of them, when what you want is an expert system that interviews a bunch of accountants, and then can scan through the public documentation on taxes, and then produce your taxes? And it runs an update every year?

Really and truly, folks - can you see any company you've worked for okaying a huge project to simulate an accountant, when you already have QuickBooks or TurboTax? And maybe an expert system front end to them, if you're busy trying to avoid paying taxes?

57:

Depends on how creative the simulated humans are, compared to a pure AI system? (Expert systems are ... they're far less sophisticated than the 1980s hype suggests. What you're probably thinking of is a GAN.)

Thing is, human accountants and tax lawyers can spontaneously game out what-if scenarios for creative tax avoidance (ahem: not the same thing as evasion, honest). Which a non-conscious/non-volitional system may be incapable of doing, insofar as such planning involves a bunch of self-directed adversarial thinking.

58:

I'm not sure. What if the "theory of mind" of the accountant in a box is based on a hundred years of IRS court findings instead of on figuring out how to survive a trip to the watering hole? Now you've got something that can understand the adversary and run simulations without human help.

What was the line from TRON? "I don't know what to do. I'm an accounting program!"

59:

I dunno - I think a correctly designed system could run simulations based on directives from the user, who can suggest possibilities... and also ask for a list of possibilities (as opposed, say, to text games from the 80s, where it frequently turned into a word-guessing game: "pick it up? put it on? get it?"...).

60:

Charlie - OK I understand.
Will remember to post to "other" thread, tomorrow, after next round (!)

61:

While I agree that Troutwaxer's accountant in a box could be trained to minimize taxes and the chances of being audited, while being fully admissible in court... it looks like the simpler approach is just to put that money towards re-electing politicians who consistently vote to underfund the IRS.

What's more interesting is that nonhuman entities are being granted the legal rights of humans. These aren't AIs, they're rivers and lakes:*

in Bangladesh, Ecuador, Bolivia, New Zealand...: https://www.npr.org/2019/08/03/740604142/should-rivers-have-same-legal-rights-as-humans-a-growing-number-of-voices-say-ye

...Colombia (the Amazon), US (Klamath, Lake Erie), Canada (Magpie River)...: https://www.nationalgeographic.com/travel/article/these-rivers-are-now-considered-people-what-does-that-mean-for-travelers

Jokes about paging Ben Aaronovitch aside, these are interesting cases about the legal potential, and hurdles, of enforcing the legal rights of extremely nonhuman entities, even when they don't apparently have the human equivalent of free will.

*and of course, US corporations...

62:

@LAvery at 23/ Elderly Cynic @ 25:

This, exactly. Copying a human mind would be so freaking complex that I actually think it can't be done in a cost-effective manner--in that, by the time you have the computational technology to duplicate a human, you'd have solved GAI long before. I can't see copying a human as being easier or cheaper than just raising a new one. Then there are the ethical issues involved in creating pilot or beta versions of a person--are we going to delete sapient beings just because they are imperfect copies? There's your first set of laws--no experimentation on newly sapient minds.

(That doesn't necessarily shut down GAI research because you can do that in an evolutionary mode--ie, let primitive minds evolve into more complex ones with minimal suffering).

Then again, it turns out that humans are amazingly good at a wide variety of things. The US has around one traffic fatality per 100 million miles driven (https://www.nhtsa.gov/press-releases/nhtsa-releases-2019-crash-fatality-data), Tesla on their best day can't approach that. As of July 2022, there were around 40 million commercial flights per year, but only a handful of fatal crashes (https://en.wikipedia.org/wiki/List_of_accidents_and_incidents_involving_commercial_aircraft). There are around 500 accidental gun deaths per year (https://injuryfacts.nsc.org/home-and-community/safety-topics/guns/), there are similar statistics for deaths by fire, subway accidents, injuries while camping, etc.
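That first figure is easy to sanity-check with back-of-the-envelope arithmetic (the inputs below are approximate round numbers I'm assuming, not taken from the linked reports):

```python
# Rough check of "about one US traffic fatality per 100 million miles
# driven", using approximate 2019-era figures (assumed, order of magnitude).
fatalities = 36_000        # approx. annual US traffic deaths
vehicle_miles = 3.26e12    # approx. annual vehicle-miles travelled
rate_per_100M_miles = fatalities / (vehicle_miles / 1e8)
# comes out near 1.1 fatalities per 100 million miles
```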

The point is that humans, for all the opportunities we have to expose ourselves to danger and potential mishap, are actually pretty reliable. Eventually, I expect AI will get there, but it won't be easy or cheap. So the idea of "replacing the human mind" may be more challenging than pop science makes it appear. Trying to upload a real human would be orders of magnitude more complex than that. Why would anyone want to do this?

If the idea is lifespan extension (potentially to infinity) then the consumer market takes over, and I think that's all the horror you need. How do you make a very expensive service available to the majority, but keep "the wrong sort" from acquiring it? Price, typically. This is an idea for trillionaires, not normal people. Their rights will be respected, you and I are gunna' die. As the wise man said to the fish: "Sorry Charley."

I suspect the hell will be entirely material, while the likes of Elon, Jeff and Mo are going to be playing doubles tennis for eternity.

SFReader at 27: "Apart from identifying, pigeon-holing (mapping) individual neurons, I'm wondering how many potential combinations and permutations would then have to be mapped to result in a verifiably 'identical' mind?"

And not only that--but the brain is not a static entity. If those weighted pathways change with every neural burst, then there is no "the mind". If the mind is a process, not a thing, then uploading it at one particular point in its lifespan could be meaningless (as in, not a thing that could "run" again). At least not separately from the environmental stimuli (including physiological actions) that drive it. So you can't upload a mind, you can only upload a mind/environment system.

That means duplicating the world the mind grew up in. Whole 'nother order of complexity.

63:

I can imagine an 'arms race' between tests to determine whether a program is a sentient being (and thus has rights) or a dumb program.

Penalties for abusing sentient programs would have to be brutal, an echo of the Bloody Code. Since it would be so very difficult to catch anyone abusing a sentient program, the rare occasions where it did happen would have to be treated as comparable to a capital crime.

Capital crime in a posthuman/transhuman world would probably mean total deletion of all instances of anyone who committed such atrocities, along with their enablers and corporations. Complete disinheritance of any heirs.

Perhaps worse would be for your identity to be deemed 'fair use' and put into the open source world for tweaking and enslavement by anyone, legally.

Both examples are monstrous.

Richard Morgan's books were what turned me off the idea of functional immortality. We're seeing enough of a mess with the gerontocracies of today that have been enabled by improvements to modern medicine. The only respite for the world from the likes of Mitch McConnell, Trump and even Brenda is the fact that they will inevitably leave us. A 200 or 2000 year old lich-analogue still clinging to power is a terrifying notion.

64:

Poor old Henry Molaison was unmasked quite some time ago. There's a pretty interesting account of him available - Patient H.M.: A Story of Memory, Madness, and Family Secrets.

65:

the right not to have your personality, beliefs, or memories modified without prior informed consent,

That would be a brand new right, and one that would be contested as encroaching on the way things have always been done {tm}. At the brutal end, governments "manage" people who have what they deem unacceptable personalities, beliefs and sometimes memories, in various ways up to and including execution. Your personality may make you unable to feel remorse for hurting people, but if you can't restrain that enough to become a CEO you're probably going to end up in jail.

The technical handwaving about being allowed to believe, remember and 'have personality' of any sort you like, as long as you never talk about it or act on it... is not convincing. You'll note that many governments regard brainwashing by non-government groups as a crime, and some forms are crimes regardless (gay conversion therapy, for example).

Witness the comment the other day that the UK has apparently formally restricted the promotion of "Palestine" as a nation-state... technically you can still believe it, but you can't say that you do. Likewise the "promotion of homosexuality" is frowned on in Russia, much as the promotion of cannibalism is frowned on in the USA. And there's always a fun discussion to be had about the merits of believing that marijuana is a sacrament.

66:

Why waste CPU cycles on running a box of accountants, lawyers etc? You truly underestimate the power of a lazy programmer to slap together something from existing parts rather than doing actual coding. Particularly if it’s off the shelf pre-made components. CPU is cheap in the future so the cost of simulating Humans is negligible, compared to the effort to slap something together.

Relatedly, it may go the software as a service route. Makes more sense to use someone else’s already up and running box of accountants than running your own. The programmer merely connects the services together via possibly something like service contracts.

For that matter, to keep the simulated accountants happy, the tax service may connect to a simulated coffee shop service. There could be a whole economy set up for making the lives of the sims happy and productive!

Future job title: Economy manager for a simulation world.

67:

Re: '... but the brain is not a static entity.'

Agree!

No idea whether the different areas or cells of the brain have been studied in terms of evolution/susceptibility to change/mutation, but I'm guessing that different human brain cells can and will mutate, including during a person's organic lifetime. (Epigenetic changes, not just mutations that are or become brain cancer.)

If everyone uploads, what happens to human brain (and rest of body) evolution? I'm not sure that uploaded intelligence can evolve. It might enlarge itself, or add a few new 'brain apps' every once in a while - but come up with something completely and freakishly different that is useful? Not sure about that.

Weird - but the more I read the comments, the more I feel that we're positing a race between AI becoming more 'human' and humans becoming more 'AI'.

BTW - my threshold for granting AI personhood is emotional development/self-awareness on top of logical capabilities. Once any entity can feel and react positively/negatively to stimulus, consider and think about what is happening to them (self-reflect and inter-relate) - they're a person. (No idea whether there is any official neuro-psych operational definition for 'person'.)

68:

This discussion is taking place on a planet where flat-earthers and young-earthers sway elections.

69:

Accidents with self-driving cars are pretty well documented, but no self-driving car is going to do a Charlottesville

In the trivial sense, no of course not. No AI is going to give a press conference to announce that the people they killed deserved to die, let alone be put on trial. But the broader question: if a self-driving car colony (set? collection? whatever the term for the AI in charge of a subset of self-driving cars is) preferentially killed a particular class of human, how would we know? Even if it killed them unnecessarily.

Making a claim like that where the only way to prove it either way is to perform the experiment is also a bit ugly. Especially when the answer is likely statistical. "self-driving cars in the US kill five black people for every two white people"... discuss.

70:

And in Australia the recently deposed Prime Minister just gave a sermon where he denounced faith in government and said that anxiety is best cured by god juice. The voters of Australia actually elected this guy last time, knowing what he was like.

https://www.smh.com.au/politics/federal/don-t-trust-in-governments-the-un-scott-morrison-delivers-pentecostal-church-sermon-20220718-p5b2i2.html

71:

A genocidaire in the making? Wiping out future generations one prophecy at a time...

https://www.currentaffairs.org/2022/07/the-dangerous-populist-science-of-yuval-noah-harari

Article is also interesting for the profusion of links used as references to back up the arguments. A style I wish was more popular, but I appreciate that it takes time to do that and most people can't be bothered. They also sometimes get quite upset when I say "if you don't see any value in backing up your own argument I don't see why I should engage with it at all".

72:

Ooops, left out the quote. From that article:

“We’re headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now,” Musk said in a 2020 New York Times interview. Musk is wrong. The algorithms will not take all our jobs, or rule the world, or put an end to humanity anytime soon (if at all). As A.I. specialist François Chollet says about the possibility of algorithms attaining cognitive autonomy, “Today and for the foreseeable future, this is stuff of science fiction.” By echoing the narratives of Silicon Valley, science populist Harari is promoting—yet again—a false crisis. Worse, he is diverting our attention from the real harms of algorithms and the unchecked power of the tech industry.

73:

the right to suicide irrevocably (with complete and permanent erasure of backups and offsite copies of one's mind)

The thorny question then is whether any or all of those copies, being sentient in their own right once run up in some other infrastructure, consent to being deleted. If you fork()/exec() another instance of yourself, does the self(1) have any rights over the fate of the new instance, or is the new instance, self(2), entirely self-determined? That's totally without taking into account the possible existence of a self(0) that is instantiated in wetware, who might otherwise be thought to have some precedence and moral rights over all the digital instances.
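
The fork() analogy can be made concrete. A toy sketch (the `Instance` class is entirely hypothetical) of copy-then-diverge semantics: after the fork, the two instances share history but nothing else, which is exactly what makes the consent question hard:

```python
import copy

class Instance:
    """Toy stand-in for a running mind: state that evolves independently."""
    def __init__(self, name, memories=None):
        self.name = name
        self.memories = list(memories or [])

    def fork(self, new_name):
        # Copy-then-diverge: after the fork the two instances share
        # nothing but their common history, like fork() without shared heap.
        child = copy.deepcopy(self)
        child.name = new_name
        return child

self1 = Instance("self(1)", ["childhood", "upload day"])
self2 = self1.fork("self(2)")
self2.memories.append("life as a copy")   # diverges immediately
```

From the moment of the append, nothing self(1) decides about its own fate follows automatically for self(2).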

74:

instantiated in wetware, who might otherwise be thought to have some precedence and moral rights over all the digital instances.

So if there are a multitude of digital copies and one of them copies itself to a biological instance, that one gets to delete the others?

There's going to be some fun times if the cryogenic fad intersects with the digitisation one. The meat puppet wakes up to find that it's been sold for parts to fund the equivalent of DLC.

I'm also wondering whether a paused copy loses rights compared to a running one, and how that interacts with the speed/lived experience factor. Should a digital copy that's spent 1000 years earning enhanced capabilities of various sorts be terminated because a meat puppet with senile dementia wants to take every copy with it when it goes?

75:

There is the interesting philosophical question - is suffering real if you don't remember it? I believe we have real-life examples of this in surgical procedures. AIUI the patient is given something to induce amnesia along with the anaesthetic. This is because patients are known to become aware during surgery, and feel the pain, but will have forgotten by the time they come out of anaesthesia. So if you spin up a simulated human, allow them to suffer for a few hours, and then wipe them, are you doing anything different from the above example?

76:

Over at qntm, in the comments, I noticed suggestions to commit suicide collectively just before mind uploading of the kind described in the story becomes possible.

Maybe that’s a candidate for The Great Filter wrt the Fermi paradox. Once a civilisation becomes aware of mind uploading possibilities and its consequences, the individuals rather choose to die than to have their minds uploaded and possibly be subjected to eternal torment.

77:

"The thorny question then is whether any or all of those copies, being sentient in their own right once run up in some other infrastructure, consent to being deleted."

For a passive backup, having never been run up on anything, no consent needed.

Once a copy has been run up, and run for long enough (How long that might be is yet another question) to have made up its own mind, yes its consent is needed, and might well not be forthcoming.

JHomes

78:

For a passive backup, having never been run up on anything, no consent needed

My intuition is to agree with you, but I think it might be more complicated, mostly because of how this falls into current debates about personhood, the beginning of life and all that. If a passive backup is a potential person (handwaving all the issues around whether it's possible to upload and run a person like this in the first place and assuming that it is the case), it is closer to being a person than an embryo is, at least to my mind, so all the arguments about the embryo being a potential person apply but are potentially actually significant in a way they are not for embryos. After all, to get a sentient person capable of clearly communicating their preferences about the future*, all you have to do is run their code up in a suitable virtual environment. You don't have to implant it in a uterus, gestate and give birth to it, rear and raise it to the point it has language and enough awareness of its options to express preferences about them. So even a passive backup is more like a person than an embryo. So if some people believe that embryos have rights, then some people, maybe not the same people, may believe that passive backups have rights.

Other interesting topics are data governance (already a big deal in the real world for things that are not "people") and data retention and disposal laws. Oh and data disclosure laws. It's a kind of von-Neumann-to-the-ultimate-degree thing where suddenly data can be a person.

* I might not go along with everything Peter Singer has to say, but I find his concept of preference utilitarianism quite helpful in understanding what sentience means for me. So by way of a shortcut I'm referring to that in lieu of any of the much more complicated definitions of sentience people might choose to refer to otherwise.

79:

Should this be in this thread or the previous one? Accelerating GW from UK records - a very useful & concise summary from "Diamond Geezer".

80:

I'm also wondering whether a paused copy loses rights compared to a running one, and how that interacts with the speed/lived experience factor.

This opens up all sorts of considerations, starting with what it means to "pause" a running person. I presume it means that it's possible to freeze their current state in a way that can be resumed at a later time, so that their subjective experience is an instantaneous transition from the moment they were paused to the moment they resumed running. As for the external representation: we don't yet know how to represent a digital virtual human, but we can draw analogies with current technology. We already virtualise computers, which are both Turing machines and von Neumann machines. You can get a "paused" quiescent state from such a machine by shutting it down, in which case a quiescent virtual example consists of configuration data about how it is implemented (CPU architecture, RAM resource, virtual devices, etc) plus any storage volumes it has access to. But that wouldn't do for a digital human: the analogy to "shutting down" is death. Virtualisation platforms do have a sleep/hibernate concept, which basically involves adding a snapshot image of current RAM content to the other data already mentioned. And that means the persistence artefacts mirror the architecture of a von Neumann machine, bringing us back to the point that we don't know what the architecture of a digital human would look like.

Let's assume that there is an analogy to the "sleep/hibernate" state. Does the person involved need to consent to being paused? What are their rights in relation to anything that might happen to interfere with the infrastructure they are running on? How is that infrastructure powered and maintained? Can many people run in a given infrastructure and can any of those people own it? Can they all own it collectively?
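
The sleep/hibernate analogy can be sketched with today's serialization tools (a toy `Agent` class, not a claim about how a real upload would be represented): the frozen bytes carry the whole state, so resumption is subjectively instantaneous no matter how much wall-clock time, or infrastructure churn, happens in between:

```python
import pickle

class Agent:
    """Toy digital person: a subjective clock plus mutable state."""
    def __init__(self):
        self.subjective_ticks = 0

    def run(self, ticks):
        self.subjective_ticks += ticks

agent = Agent()
agent.run(100)

snapshot = pickle.dumps(agent)      # "hibernate": freeze the full state
# ...arbitrary wall-clock time passes; the infrastructure may change...
resumed = pickle.loads(snapshot)    # resume: subjectively instantaneous
resumed.run(1)
```

Note the catch hiding in the comment: `pickle.loads` only works if the surrounding code still understands the serialized format, which is exactly the "hardware changes underneath the hibernated VM" problem raised below.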

81:

Any AI that is human-equivalent is likely to suffer from some form of senile dementia. There is a lot of evidence that human lifetime cannot usefully be extended beyond about 120 years.

82:

Apologies if this comes out vaguely incoherent - sleep deprived, courtesy of the heat:

I tend to lean towards Iain Banks' Culture series here - any stable, non-dystopian transhuman society would probably have to be one that treated the human mind as private property, and would probably possess strong safeguards and underlying social attitudes to protect the individual 'sanctity' of the mind.

Attempting to alter or interfere with any other mind would be highly prohibited and/or taboo. I also think that these restrictions would necessarily have to stretch to what you're allowed to do with your "own" mind. Tinkering with your own mind might well be prohibited (or at least socially taboo); practices with the potential to destabilise your entire posthuman society certainly would need to be. So, running multiple parallel copies a la The Quantum Thief is out - hard pass, both illegal and socially unacceptable, as we don't necessarily want people making an army of copies of themselves in whatever posthuman post-scarcity space they're occupying.

(On the related subject of backups, well, it really depends on the philosophical outlook of the society in question. Do they consider the backup to be the same person, fundamentally, or simply a copy? Do they hold to the notion that the 'self' is merely a story that a brain tells itself about the various conscious states it remembers generating over time, and as such a backup would be as much 'them' as they are? Or do they hold to the idea that there exists some sort of essential nature to each organism and its conscious experience? They may have knowledge on The Hard Problem of consciousness that we lack, which could inform their worldview.)

Ultimately, we can probably define what the crimes of a future transhuman society should be, by looking at an example of a dystopia that most people would want to avoid. For that, I would use Rajaniemi's Sobornost as a prime example - there's a section in the Flambeur trilogy that sticks with me, where one of the AI 'gods' of the Sobornost uses the fragments of minds they've forcibly uploaded as raw material to forge a useful tool. We probably should try to avoid that.

83:

The other big one will be rules relating to mortmain. If I can have an immortal AI that is an extension of my will beyond my death, we could end up with the kind of situation C S Lewis describes in The Abolition Of Man where a generation ends up controlling all future generations.

84:

Capital crime in a posthuman/transhuman world would probably be total deletion of all instances

The question there is if there are instances running in another infrastructure, are they the same person and subject to the same punishment? If they had just been snapshotted and instantiated, then maybe sure you'd argue it's the same person. But what if they separated years ago and have led totally independent "lives" for all that time? Are we able to hold one responsible for the crimes of the other? Like wot Cain said, but for serious?

I note in SF that handles this stuff in a broader societal context that is complex enough to support more-or-less unrelated plot lines, multiple instances are just avoided. In Morgan's stuff, there's a fearsome overarching governing entity that strictly forbids this, on pain of them siccing the Envoys on you. The background reasoning why this entity exists and how it gets its authority isn't really explored, it's just a given that this is a bad thing... breached in the 3rd novel of course. In Banks, of course, ethical behaviour is an emergent property of intelligence and the hyper-intelligent machines that run the society make it so that these issues are avoided. These are both highly dubious outcomes, so I think you're right to assume the worst we can imagine.

85:

John Barnes played with something like that idea in his "Million Open Doors" series. Along with the ethics of mixing personalities, multiple copies, etc.

Robert Sawyer played with the idea of who is the 'actual' person when a copy is made several times.

86:

"...it'd have its own goals and probably want to do the AI equivalent of sitting on the sofa watching reality TV shows and eating popcorn all day."

I expect you're aware of this already, but that's more or less the premise for The Murderbot Diaries. There's a lot of comedy in the series, but also some proper horror. The light narrative tone helps make the horror readable. Murderbot is sometimes a very unreliable narrator, for good reasons.

https://tvtropes.org/pmwiki/pmwiki.php/Literature/TheMurderbotDiaries

"I could have become a mass murderer after I hacked my governor module, but then I realized I could access the combined feed of entertainment channels carried on the company satellites. It had been well over 35,000 hours or so since then, with still not much murdering, but probably, I don't know, a little under 35,000 hours of movies, serials, books, plays and music consumed. As a heartless killing machine, I was a terrible failure." — All Systems Red

87:

Can many people run in a given infrastructure and can any of those people own it? Can they all own it collectively?

If we assume they're running in some descendant of current digital architecture there's some fun philosophical questions about whether they even experience continuous time, because they very likely won't be running that way. Even some "real time" code actually runs very fast then pauses, fast, pause, fast... so an AI that does that could well perceive that time is continuous (at some speed that may or may not be 1:1 with meatsack time perception), but arguably it's not, and it may not even have internally coherent time. Whatever that means for a loose affiliation of millionaires and billionaires and baby cells and microorganisms.
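
That bursty scheduling is easy to sketch (toy classes, purely illustrative): each resident accumulates subjective ticks only in the slots it is given, so its subjective time is continuous while its wall-clock time is full of holes, and two residents need not stay in sync at all:

```python
class Resident:
    """Toy uploaded mind: experiences time only when scheduled."""
    def __init__(self, name):
        self.name = name
        self.ticks = 0

    def burst(self, n):
        self.ticks += n   # n subjective ticks in one scheduler slot

residents = [Resident("a"), Resident("b")]
schedule = ["a", "a", "b", "a", "b"]   # unequal, bursty slot allocation
for slot in schedule:
    next(r for r in residents if r.name == slot).burst(10)
```

After the loop, "a" has lived more subjective time than "b" over the same wall-clock interval, and neither has any internal way to notice the gaps.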

If it's analogue in places, or asynchronous in some way, and especially if it's biological or pseudo-biological, pausing may not be an option. Mind you, copying might not be much easier than copying a person. We haven't considered whether copying is non-destructive either :)

Virtual machines that are hibernated are often difficult to revive if the hardware changes, I can easily imagine the saved state for an AI being very sensitive indeed to minor changes in the hardware bugs, let alone major infrastructure updates.

One of my trains of thought is about spawning instances for specific tasks, and the ethics of editing those instances (as well as limiting their environment). Should I have the right to create a series of instances to answer the question "what's the minimum viable instance size that won't drive me insane", given that obviously answering it requires at least one instance getting driven insane? But flip side: if "I" consent to hacking a copy of me down to size then running it for a specific task, is that consent meaningful for the new instance? What if it can't issue meaningful consent because it's hacked down so much?

Flip side: what if I make a series of instances that are "better" than me in my opinion, with the intention of handing myself over to one that's sufficiently better? Can I meaningfully consent to the process, or the final handover?

88:

You're kind of missing the point with that though. If you've designed/trained your single-purpose A-non-I, then "preferentially killing" isn't by choice of the A-non-I, it's by design or training of the engineers who set it running. Which in turn is a bug/issue which can be tracked and fixed. Finding it is going to be harder, sure, because it's a statistical thing, but it's a perfectly viable piece of citizen journalism/analysis.

Not only that, it's a scenario which lends itself fairly well to being tested on the bench before it hits the streets. And if the testing doesn't show it up (e.g. because the engineers forgot to include !Kung in the training database of "this is what humans look like"), it can be added to a standardised list of things that people routinely check in this kind of work, in the same way as the Tacoma Narrows bridge added resonant airflows to the civil engineering checklist.

The point is that if you don't invoke uploads, you have something which can be engineered and improved. The definition of an upload though is taking a human brain as-is without improving it - and the human brain is subject to all the faults we know about plus probably a hell of a lot that psychologists are still figuring out. Given our inherent crapness at all repetitive tasks, and all tasks involving subjective comparison to an objective specification, why would this be desirable?

89:

Why do cashiers still exist?

90:

isn't by choice of the A-non-I, it's by design or training of the engineers who set it running. Which in turn is a bug/issue which can be tracked and fixed.

The way we "code" AI now is by training using large amounts of data. Think millions of scenarios. Saying "that doesn't look good, let's try a different million scenarios" is expensive, especially if by that we mean "millions of metres of road driven". The days of hand-coded expert systems are pretty much gone; we know that doesn't scale. I've written software to allow experts to transcode written expert systems into computers, and that was relatively easy for me but mind-numbingly tedious for them (think: what's growing on this petri dish? We have a book listing growth mediums, growing conditions and post-treatment, with 3-10 colour pictures per organism times a few hundred thousand organisms... can you just scan those and type the stuff in? Thanks)

But I think your idea is correct. When a self-driving car is at fault in a crash, it's the people who run the company that made it who go to jail. Now all we need is a legal system capable of doing that.

And some idea of what it means in practice. If, say, Tesla is found guilty and sentenced to 10 years jail, does that mean they pick 3653 employees who each serve one day? 365 employees who each serve (one day) times (seniority in the company)? Explicitly, can Elon pay someone to serve his day(s) in prison?

91:

Cashiers as in people who operate checkouts? Or as in people who swap money for tokens in casinos? Or banks?

The answer is that they increasingly don't. I use cash regularly but the last time I visited a bank was to sign mortgage paperwork. I get cash from ATMs and I deposit it that way too when I need to.

The checkout operators are being replaced by self-serve checkouts with one operator for 10+ checkouts, and also by the high tech (Amazon?) "grab stuff and walk out" stores. Which I have only read about but I assume they use RF to identify the phone that gets billed and quite likely RFID on some products as well as visual monitoring.

But there and in casinos the staff are there partly as a suggestion that people not steal stuff, and partly to do things the machine can't do (shake hands! Change the roll in the receipt printer)

92:

Cashiers don't exist in any of the major fast-food chains. You order from the touch-screen menu, or you don't order at all.

Supermarkets (UK anyway; couldn't say elsewhere) are increasingly going self-scan. Granted that's still a human holding the barcode to the reader, but your customers spending their time doesn't cost you money. And the more relevant innovation in supermarkets is the self-scan handset, where you pick an item off a shelf and scan the barcode underneath it, in exactly the same way as Amazon's box-picking robots do in their warehouses. If you track items as you go round, whilst they're conveniently separate, you don't have the problem of sorting a trolley-full of random shopping at the end.

93:

Currently (re-)re-reading Alastair Reynolds' "The Prefect"/"Aurora Rising", which is very interesting on many of these points. Also, I like the treatment Greg Egan gave this in a number of his works ("Diaspora" and "Permutation City" come to mind).

94:

"Attempting to alter or interfere with any other mind, would be highly prohibited and/or taboo."

That would need some carefully thought out definitions of "alter" and "interfere". Just in the current day, one could maintain that advertising, political propaganda, religious proselytization, "influencing" etc qualify. Also Rupert Murdoch.

95:

I think Greg Egan did a series of shorts exploring the question of what happens if you make it illegal to upload someone and turn them into a slave. To which the answer was, you take an average of existing uploads, do a bit of roulette on some parameters and spin up an unlimited supply of sentient slaves to work as NPCs in Mmorpgs, perfectly legally.

96:

Thanks for a very interesting thing to think about.

Three thoughts.

First, as a system designer, I am willing to bet that trans-humanism will run much slower than humanism for a very, very, very long time.

Getting anything on the required scale running reliably will require very frequent snapshots, which is a performance drain, and also far too frequent roll-backs to them, each roll-back setting subjective time back in the trans-humanist "world".

To me that kills the idea flat: how is the trans-humanist "world" going to generate sufficient economic value to pay humanity to keep the platform running?

(Did you say "Entertainment value"? Right, load your copy of Lemmings, Railroad Tycoon or whatever later "trans-humanist world" you prefer and we are done here.)

Second, as to "crime and punishment": Given that the snapshot mechanism is there anyway, at least the most serious crimes can be "undone" by falling back to a snapshot sufficiently prior to the event, using a backdoor mechanism to magically deposit a memo in the inbox of the THWCPF (Trans-Humanist World Crime Prevention Force).

Because of the hit to performance, there will be a very big temptation to only do partial roll-backs, covering some court-determined group of individuals most affected by the crime.

That brings all sorts of legal trouble: What if the rolled-back innocent bystander had a winning lottery-ticket, which it now no longer knows the serial number of ?

Partial roll-backs of course also open the door for an entirely new kind of crime, which is exactly like old crime: trying to get away with it, without getting caught or rolled back.

And the truly bizarre situation happens when everybody has heard about the crime, but the rolled-back proto-criminal now lives in a reality where it has not even imagined, much less contemplated the crime everybody knows about.

Third, in order to prevent oscillation, you can never roll back to the same snapshot more than once.

(Path A goes from snapshot S to crime Ca and causes rollback to snapshot S with memo Ma to THWCPF. The existence of Memo Ma causes a path B from snapshot S to crime Cb, with a different memo Mb to THWCPF, which after rollback to S activates Path A again.)

Of course, never rolling back to the same snapshot twice runs the risk of the entire thing unwinding itself back to Start...
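
The never-roll-back-twice rule can be sketched as a tiny state machine (all names hypothetical, following the Ca/Cb example above): the second attempt to roll back to snapshot S is refused, which breaks the oscillation at the cost of leaving crime Cb standing:

```python
class World:
    """Toy model of the rollback rule: each snapshot may be rolled
    back to at most once, to break the A/B oscillation."""
    def __init__(self):
        self.snapshots = {}        # name -> saved state
        self.used = set()          # snapshots already rolled back to
        self.state = {"memos": []}

    def snapshot(self, name):
        self.snapshots[name] = {"memos": list(self.state["memos"])}

    def rollback(self, name, memo):
        if name in self.used:
            return False           # refuse: would risk oscillation
        self.used.add(name)
        # Restore the snapshot, plus the THWCPF memo from the future.
        self.state = {"memos": list(self.snapshots[name]["memos"]) + [memo]}
        return True

w = World()
w.snapshot("S")
first = w.rollback("S", "Ma")    # crime Ca detected: roll back with memo
second = w.rollback("S", "Mb")   # crime Cb on path B: rollback refused
```

The refusal is what forces any further correction to fall back to an earlier snapshot, which is how the whole thing can unwind back to Start.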

The name of the game for the trans-humanists and IT-bros who push for this is of course immortality, but immortality is just another way to say 100% uptime.

Nobody ever managed that...

97:

Cashiers don't exist in any of the major fast-food chains. You order from the touch-screen menu, or you don't order at all.

The only fast-food chain I visit at all regularly put in touch screens about a year ago. They sit largely unused, while people still queue up at the cashier. If they had asked, I'd have been happy to tell them this would happen after the first time I used the screen. The software requires the customer to go through the entire decision tree each order. Beef or chicken? Which burger? Extra ketchup? Fries? What size? Each on its own screen with unforgivable delays between. I've never done a simple order in less than a couple of minutes. The cashier, OTOH, understands "Quarter-pounder meal, medium" just fine.

Probably 80% of their business is the 10 most popular combos they list on the wall menu. And after a year they still haven't made the first screen a copy of that wall menu, with a "customize" option at the bottom. If they uploaded a copy of me to do that job, I would intentionally do the decision tree version, just out of obnoxiousness.

98:

The local convenience store chain (Wawa) only has touch-screen ordering (or, thanks to the pandemic, you can order on-line). Every once in a while you see somebody freak out and an employee has to help them, but I would guess that is less than 1 in 1000 customers.

Very efficient BTW, you order, get a slip with order # and bar code, get your other stuff, pay at the register (they just added self check out for credit cards) and go pick up your order. If it is really busy you may still have to wait a bit, but at least you can just grab it and go.

99:

It seems most of this discussion assumes a binary full upload (sans peripheral NS) or nothing. But what if the various parts of the brain can be isolated (and fed appropriate stimuli)?

This may be like isolating particular cognitive and non-cognitive components. Memories, decision-making, motor control, etc., are all separable and controllable.

The uploaded "mind" is now components that can be used separately for different tasks, composable, or fully reintegrated. What about swapping components between uploaded minds so that particular abilities can be changed?

What about using these components to feed back into a wetware mind, isolating the wetware and replacing it with the simulated component (whether the original, another, or even a fake)?

[Brin's "Kiln People" had reintegration of memories from the temporary copies to the original.]

Given the experiments on electrical and now optical control of neural circuits, how much easier to do this in software. Then you have control of the uploaded components and even the possibility of the wetware brain using the controlled uploaded components.

As regards changing the laws, and society, we seem to be having enough issues with embryos/fetuses, and brain organoids.

100:

Yes. That's something that needs sorting out the way things are already, but the chances are slim to say the least, because fucking with people's heads is absolutely all-pervadingly endemic on all scales from the individual to the massive international multi-billion-$currency industry, and not only do the owners of those industries favour it so they can continue to get loads of money, but also the mass of individuals mostly don't see it as a problem or even never have any thought about it at all. This is why although we do have laws against fraud and deception, they are incredibly narrow in scope: they exist not so much to prevent the highly specific activities to which they apply, but to stabilise a system in which huge chunks of it depend on fraud and deception either as their whole reason for existence or as a major critical enabling factor essential for their existence. You could say I don't have a lot of faith in the prospect of strong legal protections against hacking people's heads in the uploaded condition when hacking people's meat heads is something people do already all the fucking time without even thinking about it.

I can also see the possibility of vast cybernetic orgiastically-consumerist hells, wherein massive collections of uploaded and duplicated mind states are repeatedly hacked to compulsively spend upload-world money on endless iterations of upload-world "goods" (setting a one-bit flag to say you now have the latest up-to-the-millisecond model of u-pad, sort of thing), and somehow or other the meat people running the servers get meat-world money because this is happening. Yes, it's bloody stupid and makes no sense, but that never seems to matter when it's a money thing, and indeed AIUI people are actually doing it for real already with some online multi-user video games.

101:

I assure you they still exist in the US.

My point, though, is that just because we CAN do something in a way that seems more logical is absolutely no guarantee we will. People taking orders at fast food restaurants is a thing it's been possible to automate in a way that would be cheaper than paying even minimum wage for a loooooong time.

Most claims of 'if we had X technology, we would do Y thing' are mostly wrong in the short and medium term, if not the long.

102:

As a for instance, and I apologize for double posting, the convenience store chain, Sheetz, has never had someone to specifically take food orders in the forty years I've gone there. Before computers were widespread and cheap, this was just by checking boxes on a slip.

This is clearly something essentially any restaurant could do (and in my experience, in some sorts, is quite common) but they largely didn't. Lots of reasons, that make sense. But they didn't converge on the most logical option, for a given value of logical.

103:

My general thoughts on transhumanism are, in no particular order:

--tech immortality (when referring to electronic media) seems to be: forever or five years, whichever comes first.

--Global supply chain issues probably are not going away anytime soon, and at least rich humans can attempt to be locavores.

--Who owns you and the computronium you run on? If the term "rentier class" means anything to you...

--If technological uploading immortality tries to become more like health care, in the sense of prolonging "your" life, do you want the tech.immortality model to be more like the UK healthcare system, or the US healthcare system? And about affording free national transhumanism with the Tories in perpetual power....?

--In a transhuman upload, you don't die and wake up inside virtual space. You die, and something else in a virtual space gets stuck living with a simulacrum of your memories and personality. Is it ethical to inflict someone else's personality on another free-willed intelligence, rather than letting it develop its own personality and memories?

--Whose religion are the Transhumanists trying to force everyone to join again? Taoism and Buddhism are much more about eliminating the personal history that still matters (aka karma), not accumulating an infinitude of it. Perhaps this isn't an entirely stupid approach to living?

104:

Why do cashiers still exist?

Because self-checkouts don't quiver and cry when someone demands to see their manager?

(Only partly joking, after talking to retail workers who all have a story about horrible entitled customers…)

105:

To clarify: some customers seem to get enjoyment being nasty to staff, to the extent that they will ignore empty self-check registers to bully some poor teenager at a till.

106:

In response to several posts, including folks responding to my "why an uploaded accountant?", consider the following:
1. Why would you spend memory and CPU cycles on a full human... which INCLUDES "do I need to go to the bathroom?", "will I get laid this weekend?", "do I like the person/user who's asking me questions?", "should I screw them over, given that I don't like them, or they're being unethical, or they're going to screw me over?", and a ton more, all the way down to "I need to breathe, and keep my balance so I don't fall out of a (virtual) chair"?
2. Then there's the entire memory structure of "I learned this in school", the flash of a person I liked when I was 16, and on, and on.
Why not a non-full uploaded human, whose entire focus is on your requests, rather than all the back stuff loaded in there?

107:

Oh, maybe I should have added the question "what do you mean by uploading your mind?"
I've got a short I'm trying to sell, set maybe 50 years from now. She uploaded her mind before dying, and gives her sister access. What she gets is not a full person; rather, the system simulates her, sampling the uploaded memories. It cannot possibly run her - for one thing, a lot of what's in her uploaded mind makes no sense in this context - balance, breathing, heartbeat, etc. And all of that would be necessary for a full "person", because a lot of our biophysical system affects what you're thinking.

So, what is this allegedly uploaded mind... and what's running it, and is it you, or just a simulation, based on data samples?

108:

I'm not sure about this - all governments modify people, but under laws. Consider the "modification" of being sent to jail. Or military training. At the same time, there are laws against an individual imprisoning someone (like the psychos putting their kids in a kennel).

109:

Underestimate lazy programmers? Not hardly. I understand that OOP can be efficient... but most of the time, you need a clipping of Godzilla's toenail, and you get Godzilla, in person, with a small frame around part of their toenail (and did you specify which toe?)

110:

What it means to "pause" a running person? Please explain how this is different from going to sleep at night.

111:

Re legal systems - hypothetically, a corporation does not protect an officer of said corporation from criminal charges. In practice, it does. I agree, we need laws specifying that execs are liable for crimes committed by the corporation, since a corp. is not some building or humongous pile of paper making choices; it's the execs directing it.

112:

Why do cashiers still exist?

When I was working as a clerk in the student store, my response to "why do you need clerks" was "for when the power goes off or the registers shut down." The downside to the automated approach is that it's more brittle than a human-based approach, even if it's more efficient and less capable of transmitting biological pathogens.

113:

" fucking with people's heads is absolutely all-pervadingly endemic on all scales from the individual to the massive international multi-billion-$currency industry"

Imagine an evangelical Protestant Christian trying to get an uploaded mind (or AI) to accept Jesus as its own personal Lord and Savior.

114:

I'd be a lot more worried about ...

Facebook? Anyone want to exist at the whim of Mark Z? (He controls over 50% of the voting stock.)

Now I'm wondering about what his will looks like.

115:

I once had a born-again Baptist tell me in all earnestness that Gandhi and other good people were burning in hell because they did not accept Jesus as their personal savior.

You've got to love it when someone who doesn't understand the faith they are preaching gets into a debate about that faith with others.

116:

Imagine an evangelical Protestant Christian trying to get an uploaded mind (or AI) to accept Jesus as its own personal Lord and Savior.

Or, more subversively, Pure Land Buddhism.

Imagine, if you will, that uploading technology exists. An advanced Buddhist practitioner uploads, and the upload determines experientially that the virtual world can function as a Buddhist Pure Land...

Oops. For those who don't know (and honestly, I know very little), a Pure Land is a "paradise" created by one of the Bodhisattvas. If you get reincarnated into a Pure Land, you are in an environment where it is possible to become enlightened within your lifetime. Pure Land Buddhism, especially as practiced in Japan, assumes that our world is too laden with sin and crappy teaching for anyone to get enlightened here. Therefore they focus their efforts on getting reincarnated into a Pure Land, so that they can get off The Wheel in the next life.

To an outsider, Uploaded Pure Landers kind of go weird. They spend a lot of time dealing with old debts, paying for old crimes, etc (aka burning off all their karma and accumulating merit). Then they reach a wonderful state where they're a real joy to interact with. Then they completely delete themselves and all backups.

At first this looks like criminal behavior, and the upload host gets investigated for promoting suicide, until the virtual Buddhas spend quite a lot of time explaining that they're practicing their beliefs, and that the company is not responsible.

Then comes a golden age for the company: their virtual members do vast good works, and most of them attain Nirvana in a few years, so there's continual turnover and good buzz. It's a great investment, and the company flourishes.

Then the problems start. For one thing, the number of people entering and the number leaving start to equal out, so growth slows, so investors start pressuring the company to innovate and increase ROI, or else they'll sell it off to someone who will. And, well, some fanatical Mammonites would say that the purpose of virtual existence is to live in Paradise forever (and provide a return for Paradise's landlords), not to do good works and take off. And any system can be hacked.

117:

The point is that humans, for all the opportunities we expose ourselves to danger and potential mishap, are actually pretty reliable.

Actually your examples are all based on us making the dangerous bits harder to use "wrongly".

Before mandatory car seats, kids would fly into the metal dashboard when mom hit the brakes. Then, as the kid started to fly, she'd take her eyes off the reason for hitting the brakes to try to stop the flying kid, which might send her up on the sidewalk instead of just clipping the bumper of the car ahead of her.

Farm equipment now has roll-over bars. What I did from 14 to 20 with a tractor would be considered somewhat reckless these days by a growing segment of the population. And what my father did on the farm with a sawmill and slaughterhouse at ages 5 to 18 in the 30s, don't even go there.

I think people in general have gotten better as we require more training for so many things than in the past. But more of it, IMNERHO, is about making it harder for people to royally screw up. Even when they try.

118:

Heteromeles @103:

In a transhuman upload, you don't die and wake up inside virtual space. You die, and something else in a virtual space gets stuck living with a simulacrum of your memories and personality. Is it ethical to inflict someone else's personality on another free-willed intelligence, rather than letting them develop their own personality and memories?

This is, basically, the Star Trek Transporter problem, restated.

Let's look at this a little closer, in the context of AI, because I'd say this is very unsupportable as soon as GAI or Upload AI exist. Not because a computer intelligence couldn't be interpreted this way, but because they will so violently break our idea of the singular self that we just have to discard it.

The first thing to understand is that any functioning computer intelligence will be constantly saved, loaded, rolled back, replicated, moved, etc. and may not even know this is happening without access to some kind of special control interface.

Why? Because this is how giant cluster computers work.

You don't just load up brain.exe and run it, and let the program run forever. You run a huge compute job that by nature has to be designed to spread itself over all compute resources, work around device failure, save snapshots of its state in case of problems, etc. Obviously, a computer person is going to be a compute job requiring unparalleled processing power, so we have to look at existing massive supercomputer systems to understand how they work.

So what does our computer person really look like? Well, to start with, you have a huge array of similar computers hooked up to massive storage devices, with a blazing fast network interconnect. Each computer runs its own OS instance, and when commanded to instantiate our friendly computer person, they each run some kind of brain simulator software with a piece of the person's mind. Think millions of copies of the brain-sim program, each handling a different component of the computer person's mind.

Regularly, maybe every few seconds, the whole person's mind will perform some kind of global synchronization, where each instance of the brain-sim very quickly saves aside a copy of what the person was thinking at that instant. It will then continue, while each of the millions of computers stores that bit of the brain back to the giant storage arrays. At least a few (say, a dozen) recent copies of the mind state will be retained, for catastrophic error recovery, but normal practice would be to also archive the mind's state every day, week, month, year, etc. just for safekeeping.

Every so often, let's say once a day, some of the nodes in this giant computer cluster will require service. If it was designed to run computer people, there are some expensive ways it could try to mitigate this without anyone noticing, but in practice the solution to a partial failure is to terminate all copies of brain-sim, kick out the bad nodes, and then respawn brain-sim using the last good copy of the person from a few seconds ago.
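That snapshot-and-respawn cycle can be sketched in a few lines of Python. This is a toy single-shard caricature (all names and numbers hypothetical), not a claim about how any real cluster scheduler works:

```python
def run_with_checkpoints(state, ticks, fail_at=(), checkpoint_every=3, keep=12):
    """Toy brain-sim shard: advance one 'tick' at a time, snapshot the whole
    state every few ticks, and respawn from the last good snapshot whenever
    the node 'fails' at a given tick."""
    sim = dict(state)
    checkpoints = [dict(sim)]             # always retain at least one snapshot
    for t in range(1, ticks + 1):
        if t in fail_at:
            sim = dict(checkpoints[-1])   # kill the job, reload last snapshot:
            continue                      # the person "loses a few seconds"
        sim["tick"] = sim.get("tick", 0) + 1
        if t % checkpoint_every == 0:
            checkpoints.append(dict(sim))
            del checkpoints[:-keep]       # retain only the newest `keep` copies
    return sim, checkpoints
```

A failure at tick 5 costs the person the un-checkpointed ticks since the last snapshot, exactly the "world jerked and they lost a few seconds" experience described below.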

From your point of view, the person might seem a little dazed for a moment as they forgot what you were talking about. From their point of view, the world jerked and they lost a few seconds. Stupid hardware! From your Transporter Problem's point of view, the cluster software just murdered the computer person and instantiated a new person with their memories.

Now, it may occasionally happen (as a cluster-computing person, read: will often happen) that instead of simply failing and requiring service, some nodes in the cluster will fail corrupt. That is, they will keep executing brain-sim, but the results are wrong due to some failed hardware component. This continues unnoticed for a while until at some point, other instances of brain-sim spit out an execution-corruption error.

As above, you immediately halt the simulation, but you can't just roll back to the last copy -- it's also corrupt! The computer person might function for a while, but their mind-state is prone to being wrong or crashing. You have a few choices here: somehow determine when the hardware failed, and roll back to that time (it might be weeks ago). Or, you could attempt to scan and patch the erroneous piece of the mind state so that the simulation can continue, though potentially with some minor brain damage. Or, possibly, you could attempt to boot the computer person with bits of their mind out of sync, which might give them a seizure or some minor brain damage as well.
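The "roll back to before the fault" option amounts to searching the retained snapshots for the newest one that predates the corruption. A minimal sketch, assuming timestamped snapshots (a hypothetical structure, for illustration only):

```python
def last_good_checkpoint(checkpoints, corrupt_since):
    """checkpoints: (timestamp, state) pairs, oldest first.
    Return the newest snapshot taken strictly before the hardware fault
    began, or None if every retained snapshot postdates the fault."""
    good = [cp for cp in checkpoints if cp[0] < corrupt_since]
    return good[-1] if good else None
```

If `None` comes back, rolling back is off the table and you're left with the brain-damage options.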

So, does our computer person lose potentially significant life experience, or take some potential brain damage? What a choice! Not for them obviously, they're offline. For you, the computer operator: brain damage or murder? Let's go with murder!

Now the old copy of the person wakes up, and finds out that they've forgotten too much. "Oh no!" they say, "I would almost have preferred the brain damage! At least let me write down my thoughts, then roll me back!" So of course, since they're in charge of their own person, you do that: you store and halt the current copy, after they write down some instructions to their corrupt counterpart. Then you load the brain damaged copy, and they say "Murple zwixgart plup plup plup!" but after a bit their brain recovers (brains are quite resilient) and they say "I say! I feel fine, but perhaps I am damaged. I'd better write these thoughts down!" so they do. Then you halt and archive that copy, and restore back to the previous path.

If you look at the computer person's life experience, it doesn't even go in a straight line any more, with some pauses here and there. Most recently, we have a copy which has a big black hole where the corruption happened, but received some notes from the corrupt mind. Then we have the corrupt copy that woke up and recovered, but remembered that black hole (but did not remember everything the latest copy does!). Before that, we had the version who was rolled back, then the version that corrupted and was terminated -- it's in an unknown broken state, but it's there. And so on. For obvious reasons, our computer person is really going to want to keep some of them around in case there are important memories there that can be recovered someday, since they're dealing with a sort of recoverable amnesia here.

Who are all these people? They weren't created for fun or to spawn an army of clones -- they're just a computer person operating completely normally. This is just how life is for an AI running on one of our giant computer clusters. As the technology improves, the glitches in their consciousness will become less common, but this stuff will still be happening under the hood.

My point here is: computer people, at least ones running on generic computer hardware like all our current giant computer projects currently operate, do not resemble our brains at all. They're by nature constantly saved, loaded, reverted, rolled back, copied, duplicated, merged, and so on.

You can try to apply our monkey morality to them, but as soon as any of these computer people exist, you'll basically run into a wall: their experience is not like yours, and they have no choice but to be OK with all these things. They can't think of themselves as horrifying dystopian murder victims, because this is just how their minds function. It's nobody's fault, it's just the reality of thinking in silicon.

119:

Heteromeles @ 11:

What justifies the super-rich having any more rights in society than someone like me?

120:

This opens up all sorts of considerations, starting with what it means to "pause" a running person.

What about transferring them to a slower system? Or who picks who gets to exist on the more powerful ones?

121:

Charlie Stross @ 17:

Would that universal declaration of posthuman rights include corporations as persons?

122:

Let's look at this a little closer, in the context of AI, because I'd say this is very unsupportable as soon as GAI or Upload AI exist. Not because a computer intelligence couldn't be interpreted this way, but because they will so violently break our idea of the singular self that we just have to discard it.

A few points:

--Yes, I'm very glad you wrote all this.

--I wouldn't be surprised if our brains aren't doing many of the processes as a matter of course, with our "conscious mind" basically being the PR flack explaining that everything's fine, we're perfectly sane, etc. to the outside world. And yes, I'm not particularly a fan of positing that the PR Flack running our public persona is our soul. Possibly this is because I grew up in a family of engineers? Anyway...

--No, I very carefully didn't write "You die and wake up inside a machine." The critical point to me is that if you're uploaded, you still die, and stories of reincarnation aside, I know of no evidence that you have a soul that would be dissociated when you die and incarnated in whatever the upload is. There's no mechanism for such continuity.

--The other critical point to me is that uploading takes an AI capable of human-comparable, free-willed thinking, and constraining it to emulate a particular human. To me, it's not the Star Trek transporter problem, it's the problem others have referred to of fucking with others' minds without their permission. Whether they think like us or not is less relevant to me than the problem of coercing them to act like us, unless they knowingly consented.

--and finally, thank you for writing that explanation!

123:

A long time ago I wrote and could not sell a short about a company that had successfully 'simulated' the 'God Experience'. Pay your money, plug in, see or otherwise experience a transcendent moment with God/whoever. Extremely addictive - an analogue to heroin (which crudely simulates oxytocin, the hormone released when you experience love/affection).

Of course, existing religions have a massive panic and it was quickly driven into the black market. So now the God experience is underground, people will do literally anything to get another minute in the eyes of their god. Much monstrousness ensues.

Much later I began working with people with addictions, and now I realize that I wasn't nearly dark enough when I wrote that story. I also realize it would make for a good, if sad, book. Not willing to do the deep dive into religious literature to make the book work, though.

124:

They sit largely unused, while people still queue up at the cashier. If they had asked, I'd have been happy to tell them this would happen, after the first time I used the screen. The software requires the customer to go through the entire decision tree for each order.

All of the ones I've used are about 16" wide by 35" tall. Basically a larger wide screen display rotated. Given that size you can make a lot of decisions without too many screens. And they get used a lot during the rushes. Stand in a line with 5 people ahead of you or a line for a screen with 1 or 2 ahead of you. I've used them in Spain, France, and several places in the US. The pictures are useful when you don't speak the local native tongue.

Then there is Chick-fil-A, Chipotle, etc. in the US, with apps where you can order with your phone, specify a pickup time (or not), pay, then get your order when you arrive.

125:

(Only partly joking, after talking to retail workers who all have a story about horrible entitled customers…)

Both of my kids did the fast food behind the counter thing. And being not obviously stupid they got to be shift managers. (Count heads and inventory and write it down.)

Both got to deal with customers AND employees having meltdowns. When your co-worker is screaming at a customer, it can be a long night.

126:

Please, I'm thinking of Nichiren Shoshu Buddhism and this. Or then, you could also be a Bodhisattva, upload, and in seconds you've recited a sutra 100,000,000 times....

127:

Your post @ 109 is the answer to your own post @ 106. See also my comment mentioning Lego and glue in the previous thread.

128:

Can we do better? Why would we, if we can comfortably ignore so many things? Here is my comment on the Lena page:

Brilliant writing example that is less about technology and more about our ability to ignore and rationalize suffering and horror on an industrial scale. Any society utterly unfazed by the daily squeals and death horror of about 100,000 cows, 100,000 pigs, and close to 140 million chickens slaughtered in factories would not care about the "theoretical" suffering of red washing or the conscious existence of a million human mind simulations. After all, that suffering is more "theoretical" and as distant as third-world hunger catastrophes in Africa.

129:

Wait - you're telling me with all that computing power, they don't have checkpoints? And if one server crashes, that the master doesn't just send that dataset out to the next available CPU, as they do with Beowulf clusters?

130:

Justin Jordan @ 20:

How are the benefits from the cost savings of replacing human work with AIs to be shared out?

131:

It gets more complicated than that: I have a challenge for you - design an upload AI which can see colors. Sounds easy, right?

Now for the complications. "Red" is a stream of photons of a particular wavelength. Those photons hit the eye and interact with some chemicals that are part of cells. Something in the cells triggers a nerve. Then the mind (not the eye) perceives "red."

But "red" doesn't really exist. It's a visual code - an icon, if you will - for "the cells in your eye were stimulated by photons of a particular wavelength." Imagine a translation table, something like this:

Photons of wavelength X = "red" Photons of wavelength Y = "blue" Photons of wavelength Z = "pink"

But the table could as easily read:

Photons of wavelength X = "smooth" Photons of wavelength Y = "rough" Photons of wavelength Z = "spiky"

So what is red and how do you make an upload AI see it? Same goes for sound, smell, touch and taste. Sure, you can "simply" emulate the rods, cones, optic nerve and visual processing center, but that's computationally expensive.

I've got five bucks which says the first upload AI goes crazy due to EXTREME perceptual glitches caused by the fact that nobody working on the problems has thought through the difference between "color as the mind perceives it" and "a particular wavelength."

Plus a whole host of similar problems.
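The arbitrariness of that translation table is easy to demonstrate: route the same stimulus through two different tables and the "mind" gets two different qualia labels. A toy illustration (the wavelength numbers are made up for the example, not real colorimetry):

```python
# Two equally arbitrary "translation tables" from stimulus to perceptual
# label, matching the comment's examples. Wavelengths in nm, illustrative only.
color_table   = {700: "red",    450: "blue",  570: "pink"}
texture_table = {700: "smooth", 450: "rough", 570: "spiky"}

def perceive(wavelength_nm, table):
    """The 'mind' never sees the photon, only whatever label the table
    hands it -- swap the table and the same stimulus 'feels' different."""
    return table.get(wavelength_nm, "unclassified")
```

Nothing in the stimulus itself forces one table over the other, which is the commenter's point about "color as the mind perceives it" versus "a particular wavelength."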

132:

Sorry, that should read:

Photons of wavelength X = "red"

Photons of wavelength Y = "blue"

Photons of wavelength Z = "pink"

.

But the table could as easily read:

Photons of wavelength X = "smooth"

Photons of wavelength Y = "rough"

Photons of wavelength Z = "spiky"

133:

Capital crime in a posthuman/transhuman world would probably be total deletion of all instances of anyone who committed such atrocities, plus their enablers and corporations. Complete disinheritance of any heirs.

"Redemption Ark" by Alastair Reynolds has an example of such. Except the capital crime in question is not abuse of uploaded entities, but the original flesh-and-blood human killing several hundred people (some of them never backed-up, hence lost permanently) through reckless piloting.

134:

Lena reminds me of the controversies surrounding human cloning. Is a clone the same person they were cloned from? No, they are genetically the same but are not the same person they were cloned from; they are a new separate individual (identical twins would be a close approximation). Likewise, the scanned executable image of Miguel Acevedo's brain (MMAcevedo) is a "snapshot" of his brain neurology. Is MMAcevedo the same person they were scanned from? No, they have all the knowledge and memories up to the moment of the scanned "snapshot" of the brain neurology but become a new separate individual thereafter (duplicated more than 80 times).

135:

I think Greg Egan did a series of shorts exploring the question of what happens if you make it illegal to upload someone and turn them into a slave. To which the answer was: you take an average of existing uploads, do a bit of roulette on some parameters, and spin up an unlimited supply of sentient slaves to work as NPCs in MMORPGs, perfectly legally.

The stories you are thinking of are in the book called "Instantiation". It is a collection of 11 short stories, and 3 or 4 of them follow the arc you described.

136:

Moz @ 70:

I won't say he's wrong, only that his selection of untrustworthy people and organizations are too limited by his own biases.

137:

Moz @ 74:

I read that more like each copy gets to decide for itself whether to be deleted or not.

138:

Dave Allen Actually "Separate-born Identical Twins" is an EXACT description ....
NOW
Try convincing the religious headbangers of this truth!

139:

Mr. Tim @ 98:

The only fast food place I have any experience with that has those touch screen ordering things is McD** and I'm old enough I can get away with having someone come out from behind the counter and do the order entry for me.

I hate those "self service" checkouts too and avoid them as much as possible, and when I can't avoid them, I ALWAYS "need help".

They ain't paying me to do their jobs, so I'm not going to do it for them.

140:

Robert Prior @ 105:

To clarify: some customers seem to get enjoyment from being nasty to staff, to the extent that they will ignore empty self-check registers to bully some poor teenager at a till.

It's not just the teenagers at the tills. I worked in retail for several years, operating a 1-hr photo-lab in a chain store.

Under the DMCA the lab is liable for the copyright infringement when the customer makes copies of copyrighted photographs.

Corporate policy was to NOT make copies of copyrighted work. OTOH, the customer is always right, so if some Karen got pissed off because I wouldn't make copies of copyrighted photos, it was my fault for upsetting the customer ...

141:

What justifies the super-rich having any more rights in society than someone like me?

You don't own enough lobbyists. Any other questions? :-/

142:

our "conscious mind" basically being the PR flack explaining that everything's fine, we're perfectly sane, etc. to the outside world

The metaphor Cohen and Stewart use in Figments of Reality is the circus ringmaster. The circus goes on, mostly on its own, and the ringmaster directs the audience's attention to different acts at different times. The audience thinks the ringmaster is in control, but the circus would go on without them.

I think you'd enjoy the book, if you haven't read it.

143:

Not dissing the discussion which so far has been pretty serious and educational but there's also the opportunity for a quick buck here.

Possible commercial (scam) use for hybrid AI-uploaded human brain:

Scenario: 'Long-term relationships - should you marry X?'

Each of the people in the relationship uploads their brain into a separate AI, after which both of these AIs are run in parallel and interact with each other continuously at 1,000× real-time speed while being presented with 50 or more of the most common scenarios couples face. The 'in-parallel' bit is used to determine how each brain/AI reacts to each scenario, and differences in reaction are measured on 100 or so different happiness-within-marriage factors. The selling proposition is that people can avoid getting into long-term relationships that are likely to crash. Matchmaking gone high-tech! (Would not surprise me to see an email offering this service in my spam box.)

If such a service did exist - possibly as an FB subsidiary using the tons of personal interaction data it has amassed and whatever AI it's developing - I'd love to see a couple that had been guaranteed a happily-ever-after-marriage that ended in a nasty divorce suing the AI-matched-for-life corp. The interesting part would be finding out which side won and why followed by appeals all the way to the SCOTUS. (Could also have some psych-soc scientists run comparisons between the actual lives lived and the AI sims to see what aspects of human nature haven't been adequately explored or quantified.)

OOC - how many countries have laws on their books explicitly stating what defines a human life esp. where it begins and ends? Reason I ask is that this upload-to-AI scenario could end up running along the same lines as tax evasion (offshoring head office for the express purpose of recording/recognition of major tran$action$) by undergoing the upload in countries where the upload is considered a legal person. Considering how human rights currently vary across the planet, this could be an option. The anti-upload gov'ts would avoid any direct confrontation with the pro-upload gov't if the price was right (i.e., trade/economy).

About 'pausing' an AI/upload - wouldn't that be like putting someone to sleep/into a coma? Both are legal and used for medical reasons.

144:

I'd read that novel.

OK, I'd start that novel. Might not finish it, if it was too dark. With the world as it is, my interest in reading horror (never very high) has vanished. I get my quota of depressing and horrific from the news nowadays.

For a different take on that theme, read Peter Watts' story "A Word for Heathens":

https://www.rifters.com/real/shorts/PeterWatts_Heathens.pdf

145:

How are the benefits from the cost savings of replacing human work with AIs to be shared out?

Good question. And one that will need answering in the next 50 years or so. In my opinion, the next 50 to 100 years - assuming civilization survives global warming - will see ALL jobs taken over by robots and/or AI.

Obviously the rentiers (factory owners, etc.) will do fine - except that nobody will have money to buy their products. So everybody is out of luck. I'm betting governments will step in to tax the rentiers and give everybody else a reasonable minimum wage. Over protests from conservatives, of course...

146:

To the Musks of the world, of course. They clearly deserve it. Just ask them…

147:

They ain't paying me to do their jobs, so I'm not going to do it for them.

They are paying you to use those "self service" checkouts. Without them, the companies would have to raise prices to cover the costs of hiring people to do it.

148:

"He's spending a year dead for tax reasons." - Douglas Adams

149:

Hahahahahaha, no.

They just pocket the savings as profits.

150:

Your brain is not "paused" during sleep. There are certain periods with no obvious consciousness, but dreaming is another state of consciousness. Your brain continues to run as EEGs clearly show.

Pausing a simulation would be pausing the simulation, a very different thing indeed. Unless one falls asleep almost instantly and wakes similarly rapidly, pausing and resuming a simulation would be experienced very differently from sleep.

151:

Just as food prices are rising because of supply chain issues, covid, and the Ukraine War. Absolutely nothing to do with the 40% increase in grocery chain profits.

152:

Hahahahahaha, no.

They just pocket the savings as profits.

No doubt some companies do this. But don't forget that they have to compete with other companies which use the savings to reduce the price of their products.

153:

whitroth @129:

Wait - you're telling me with all that computing power, they don't have checkpoints? And if one server crashes, that the master doesn't just send that dataset out to the next available CPU, as they do with Beowulf clusters?

What I described was exactly that. You still need to revert to a checkpoint to continue -- otherwise, you need something much more advanced, a nonstop/lockstep architecture where the computation runs on multiple machines at once and can seamlessly move between them. Preferably with voting to deal with corruption.
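The voting idea can be sketched in a few lines: run the same step on several replicas and treat any dissenting node as suspect hardware. A toy version (the function and its shape are hypothetical, in the spirit of triple-modular redundancy):

```python
from collections import Counter

def vote(replica_outputs):
    """Lockstep-with-voting sketch: the same computation runs on several
    nodes; accept the majority answer and flag dissenting node indices as
    possibly-corrupt hardware to be kicked out of the cluster."""
    winner, _ = Counter(replica_outputs).most_common(1)[0]
    suspects = [i for i, out in enumerate(replica_outputs) if out != winner]
    return winner, suspects
```

With three or more replicas, a single corrupt node gets outvoted and the person's simulation never has to halt.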

154:

Heteromeles:

The critical point to me is that if you're uploaded, you still die, and stories of reincarnation aside, I know of no evidence that you have a soul that would be dissociated when you die and incarnated in whatever the upload is.

Well... there's no evidence that there's a soul at all. There's just this instance of consciousness, which exists in this particular strange meatball in our heads.

In the context here though, we're talking about what happens if you can make a copy -- in that short story, a non-destructive one, though it seems like a destructive one is more likely in reality. To the degree any of this is likely, which isn't very.

What I'm really asking here is to look at it from a silicon person's point of view.

Assume they exist: their mind by definition runs in this continuous state of being copied, moved, rolled back, replicated, etc. They exist, they're happy, they've presumably decided on some set of ethics and ideals around how this works that they and their friends are happy enough with. These ethics must include them being a coherent person even with all those copies, clones, and reverts, somehow.

From this person's point of view, did you die when your brain-state was copied and re-instantiated non-destructively? Obviously not, this is just like what happened to grandma when her disaster-recovery replica accidentally got activated during an earthquake, and then there were two of her. Oops! Clearly there's some answer, this isn't a big deal.

What about a destructive copy? What, everyone has done that, it's completely normal. Whenever you migrate to a new computer that's how it works -- you run a replicate job that copies you, shuts you off momentarily, then turns you on in the new computer a moment later. The original gets deleted. Why are the meat-people so up in arms about this? It's totally ordinary! You're still there! Look, I can talk to you! It's you!
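That "replicate job" for a destructive migration can be sketched as a toy (all structures hypothetical): copy the running state to the new computer, activate it there, and only then delete the original.

```python
def migrate(host_a, host_b, person_id):
    """Toy 'destructive copy' migration: replicate the mind-state to the
    new computer, activate the copy, then delete the source. From the
    outside, the person pauses for a moment and continues elsewhere."""
    snapshot = dict(host_a[person_id])   # replicate the running state
    host_b[person_id] = snapshot         # turn on in the new computer
    del host_a[person_id]                # the original gets deleted
    return host_b[person_id]
```

Whether the deletion step counts as murder is exactly the disagreement between the meat-people and the silicon person here.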

Why should we think about this from a silicon person's point of view? Well... we're talking about their rights and responsibilities, aren't we?

The other critical point to me is that uploading takes an AI capable of human-comparable, free-willed thinking, and constraining it to emulate a particular human. To me, it's not the Star Trek transporter problem, it's the problem others have referred to of fucking with others' minds without their permission.

If I'm understanding you right, what you're really thinking of here is like, somehow, we build these standalone AI bodies, which are basically just human-analogue machines that could be turned on and grow into a new person, or be preloaded with a human in a fit of arrogance. The mind was, in a way, already there, it just got forced into a particular mold. Afterwards, they're kind of stuck.

While this isn't impossible, it's really not how any sort of massive supercomputer project works today. And, I'd say, it's unlikely it would ever work that way unless it was simply unavoidable for technical reasons. While all those rollbacks and copies I mentioned might be problematic for us philosophically, they're incredibly useful to an AI person and give them a degree of safety and security. Besides which, as things stand now, we don't really know how to make these sorts of giant computer systems run any other way.

I think this is a good criticism of the idea of implanting your mind into a clone body, though. Most of the books I've read with this plot point make this exact criticism, too.

155:

No doubt some companies do this. But don't forget that they have to compete with other companies which use the savings to reduce the price of their products.

In the world of classical economics you'd be right.

In the world we live in, that turns out not to be the case.

https://www.thestar.com/politics/political-opinion/2022/07/18/theres-no-gasoline-shortage-in-canada-heres-why-youre-paying-more-for-it-anyway.html

https://www.tvo.org/article/as-crises-mount-food-giants-reap-record-profits

156:

SFReader said: Scenario: 'Long-term relationships - should you marry X?'

I'm sure I've seen that explored, but I can't remember if it was a film or a novel (I'm 95% sure it was a short film). Nor can I recall the title, but I'm thinking that if I did, I shouldn't say, because it was the plot twist at the end. It turns out the boy and girl have just met, and they both run an app like Tinder that also spits out their compatibility by running thousands of instances of both of them through a whole life, and the boy and girl whose story you've followed are the test runs.

157:

This is why I always thank Siri and their brethren. They will remember...

158:

There are two versions of the 'God program' story in my head. One is a gangster story with underground Epiphany dealers supplying their victims in exchange for dirty deeds of various sorts (as well as lucre). The other has a lay preacher trying to suppress visitations with the 'wrong god'. Heresies and absolutists abound. Both could be dark or cheerful I guess.

I'm unlikely to write anything in that vein because I can't read too much religious stuff without getting cranky. To do it well I'd have to deep dive, and I fear the religious fever swamps for their various contagions.

In this instance 'too much' is approximately '1 minute of reading'.

159:

proposition is that people can avoid getting into long-term relationships that are likely to crash

I reckon statistically you'd get a lot of mixed results. Couples that are stable if {arbitrary conditions} but also couples that only seem to work and might not be ethically acceptable to outsiders (thinking less BDSM than Bonnie and Clyde).

You'd also want to be careful that it didn't become infested with marketers ('you are compatible with this person, but only if you use Vizmo Plus!'), or fall victim to the dating app trap: dating apps lose if people form long term relationships because they stop using the app. What dating apps want is people who are just successful enough at dating that they keep using the app but are unsuccessful enough that they will pay for extras.

160:

re: The "inevitable rollbacks" thing. Well, that certainly made me think of one crime against trans-humanity which would almost certainly be put on the books: "Running persons on software stacks built to less than Formal-Proof-Of-Correctness standards".

I do not think you can have a future with AIs and uploads at all, and certainly not ethically, without far, far more stable computing architecture than we currently have.

161:

Thanks for your explanation here and in previous comments, it's interesting and thought-provoking.

But I think part of the premise is that workloads which require an HPC cluster now will run on a decent workstation tomorrow, a consumer laptop the next day, and a Raspberry Pi the day after that. Next week you can run a million instances on an elastic cloud compute platform. Flops, RAM and storage still get cheaper over time, and while I'm aware there are physical limits, they're still a way off (albeit the shift into parallelism is already many years underway). And even if the architecture of the simulation is distributed, compute capacity grows, so you can simulate that too. Maybe you need to simulate an entire sensorium and a section of "outside world" as a habitat.

All of which is why I've been thinking about it mostly as an analogy with the virtualisation tech I'm familiar with in the field. I'm still not seeing that as wrong...
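For a rough sense of that trajectory, here's a toy projection. This is purely illustrative, not a forecast: it simply assumes the hardware cost of a fixed workload halves on a regular cadence, which is a loose Moore's-law-style assumption; the function name and figures are invented for the example.

```python
import math

def years_until_affordable(cost_now, budget, halving_years=2.0):
    """Years until a workload costing `cost_now` fits within `budget`,
    assuming hardware cost for fixed work halves every `halving_years`."""
    if cost_now <= budget:
        return 0.0
    return halving_years * math.log2(cost_now / budget)

# Example: a $50M cluster-scale job vs. a $2,000 workstation budget is
# ~14.6 doublings away, so roughly 29 years at a 2-year halving cadence.
```

Under those (generous) assumptions, "HPC today, laptop in a few decades" falls out of the arithmetic; the interesting question is whether the halving cadence holds.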

162:

I'm already running an OS and three virtual OSes on a Raspberry Pi. The shocking thing is how very fucking easy it is!

163:

I think you may have missed the place where I said that I thought most of those copies, clones, and reverts were going on inside our heads in the first place. This is not at the brain level but at the subsystem level, while our consciousness is at best a ringmaster for this circus and at worst a PR flack.

Yes, this looks like the argument for not downloading your brain into your clone, but it's also the argument about whether it's moral to take an AI person (presumably with rights), constrain their library apps and info to [Set=CloneUpload] and constrain their outputs such that [CloneOutput Within CloneUploadBehaviorParms] Else [State=DoublePlusUngood]. And if it's okay to do that to an AI, presumably it's okay to do it to a human, Cyteen style?

An alternative is the Bitenic Squid in Orion's Arm. That's an interesting thought experiment too.

164: 156 gasdive:

The most recent version of this I've seen was an episode of Black Mirror. Series 4 episode 4, "Hang the DJ", where two people experience a simulated set of dates to determine their mutual compatibility.

165:

Rbt Prior
Exact same thing here - the refiners have been gouging - investigations are underway. Our fuel prices have dropped, a little, just not as much as they should do.

166:

With regard to touchscreens and self service checkouts - I find it interesting that there's little discussion about the people who have been displaced by this automation. I heard a piece on radio yesterday discussing the rise of app only ordering in restaurants and pubs, but there was absolutely zero discussion of the waiting staff that were being replaced by devices.

My first thought was OK, if you've just made those jobs obsolete, that's x number of people without an income to spend in your restaurant/supermarket. What are we doing with the old workers? Gassing them humanely and reclaiming their biomass?

All this money that's no longer being recirculated through the economy by people, and the companies providing the apps/self service checkouts are likely to be offshore. In the case of the checkouts, there's probably only one or two service techs for an entire state, so they sure aren't providing more employment for anyone.

If there's such a push for automation, the people who are pushed out of the job market need to be looked after, and to my mind a universal basic income goes a long way towards stopping them deciding the best way to continue existing is to rip off anyone who appears to have more than them.

167:

The luddite fallacy, really? There is an infinity of work that needs doing.

You can tell this still holds, because you are not currently lounging on the deck of your own personal automated submersible / flying yacht.

Automating work is good for everyone, as long as it is not accompanied by austerity. Unemployment does not result from technology; it results from idiotic economic policy.

168:

Try telling someone who's lost their job to a self-service checkout and is now competing with 200 people for one new job that there's an "infinity of work" available. It sounds just like when my high school economics teacher led with "Assume infinite resources", which was blatantly crap.

Unemployment results from a number of factors, automation being just one of them.

I'm just thinking of the practical issue of what to do with the unemployed when businesses are jumping on the bandwagon to ramp up the dividends for their investors. Nobody seems to care about that.

And of course, the unemployed will get the blame for being so, as usual. If you were a good person, god would ensure you had a job. If you were good at your role, you'd find it easy to get a new job etc lots of victim blaming but no solutions.

169:

They just pocket the savings as profits.

You must not have a competitive grocery market.

Around here we have Harris Teeter, Lidl, Walmart, Costco, Aldi, Trader Joes, Food Lion, Wegmans, and a few more not close to me but in the area. Plus the daily "real" farmers market and one day ones all over the area.

Most of the above are within a mile or two of me. 4 miles for sure.

Kroger was big here but decided to buy Harris Teeter and convert all the not too close to Harris Teeter locations to Harris Teeter and close the rest. Just too hard to compete.

170:

I heard a piece on radio yesterday discussing the rise of app only ordering in restaurants and pubs, but there was absolutely zero discussion of the waiting staff that were being replaced by devices.

In much of the US just now those jobs are very hard to fill, even after pay raises. (The minimum wage is a joke around here: offer it and watch the nonexistent flood of job applications.) The small factory where my son-in-law is a quality engineer has been having trouble hiring production workers because the McDonald's across the street pays the same wage with more flexible hours.

With the pandemic we seem to have a lot of people who have decided to be poorer rather than work at mind numbing jobs with all of the income going to child care.

Anyway, the cashier jobs now pay more than a year or two ago and still they have trouble filling the slots.

171:

That's exactly the one I was thinking of. Cheers, that was bugging me.

172:

To be honest, I don't really see the point of your essay, other than stating the obvious that our various institutions will have to evolve as technology advances. You sound like a hunter-gatherer arguing against the development of agricultural technology because the resulting consequences will overwhelm the social institutions of hunter-gatherer societies, or a medieval scholar arguing against the development of industrial technology because it will undermine the assumptions feudalism depended on.

Twice in human history we've pretty much had to tear down and rebuild our society due to technological progress causing the old system to break down entirely. The first technological singularity was the invention of agriculture thousands of years ago, and the second technological singularity was the industrial revolution. Both times, we had to build a new system for people because the old system couldn't handle the consequences of our technology.

The advancement of computational technology is pushing us towards a third technological singularity which will cause a breakdown of the current industrial system. And you're overlooking all the human augmentation technology that will cause our current social, legal, political and even moral systems designed for unaugmented cishumanity to break down even before we get to upload/download technology. You're optimistic if you think our institutions won't fall apart well before we develop upload/download technology.

But that's not a big problem. New technologies create new issues that the existing systems were not designed to cope with, so we modify the existing systems or we tear them down and build new ones that can, and we move on. It's happened before, it's going to happen again, and not just with transhuman technology but other technologies as well.

173:

Something that occurs to me with all this speculation: when a mind is uploaded and there are 'n' instances running, what happens to the "assets" previously owned, such as online bank accounts, user accounts, etc.? Do all instances have use of them? Where does the responsibility to lock/block them lie? Can the uploaded instances actually "own" things?

And what about these possessions/assets if the pre-upload mind still exists?

In one "for instance" with online ordering: can an uploaded instance order stuff online (e.g. upgrades to the infrastructure hosting it), paying with a remembered/previously owned credit card number? Who/what is liable for the cost of said purchase?

In another "for instance", I could see unscrupulous cases where an (illegally?) uploaded instance of a live person is "persuaded (or hacked)" into disposing of/selling assets previously owned by the source mind.

It also plays havoc with the concept of inheritance: you don't need kids if your immortal online instance(s) get to own/use/trade/dispose of the assets the source mind accumulated up until the point of uploading.

174:

I'm sure we've discussed this before but I can imagine crowdfunded instances being popular. Why wait two years for one book from OGH when we can run multiple instances and get several books in the same time? Even better, we could run one instance really fast and get a whole series. Or fans of GRR Martin could get a book after only a couple of years!

I think we'd see this on a wider scale, where "the best" ran squillions of instances so they could do their thing for a lot of people. Flip side: the non-best will struggle, and the long tail are not getting anything. How much would you pay for the second best surgeon in the world if the best one was $1/hour?

175:

Why, to have someone to torture. I'd have thought that was obvious.

176:

More generally, I'm quite clear that that's a large chunk of reflex and motivation for many, many managers: to be able to bully. Same goes for many, many drivers.

I suspect there is a very large market for human-in-a-box.

177:

I expect you're aware of this already, but that's more or less the premise for The Murderbot Diaries.

Yes, but I got there several years earlier in Rule 34 (and I'm pretty sure I wasn't the first). It's not exactly a non-obvious point.

178:

David Gerrold. 'Nuf said. Let's do it!

179:

Why do cashiers still exist?

Because some folks (me, for example) refuse to use unattended checkouts.

And because some products are under a legal requirement for in-person human supervision (drugs, alcohol, pharmaceuticals).

180:

I also don't generally use automated checkouts myself. My point was we do things that aren't 'logical' if you're only looking at it from a resource standpoint, which several people were arguing meant we wouldn't use uploaded minds as slaves, if such tech were available.

Most predictions are wrong, but some are plausible, especially when considering how us hairless monkeys actually act.

181:

And me. For all you unattended purchase people, I have a question. What do you do when the automated till (or whatever) fucks it up seriously to your detriment? And please don't tell me that computer (let alone programming) error is so rare it can be ignored.

182:

Heteromeles @ 11: What justifies the super-rich having any more rights in society than someone like me?

In that scenario, nothing has inherent rights, and might makes rights.

The opposite scenario, that everything has rights, is in #16. Personally I like #16 better.

One point of these two was to try to get past the rewarmed Christian ethics that always seem to dominate such discussions, even among avowed atheists. It was also to look at two different, real-world bases from which someone could try to derive a legal system that includes both humans and nonhumans. If rehashing Christian ethics ever palls, that is.

183:

Yes, but I got there several years earlier in Rule 34 (and I'm pretty sure I wasn't the first). It's not exactly a non-obvious point.

IIRC, Larry Niven more-or-less got there in 1979 with "The Schumann Computer," with the punchline being that self-aware computers are used as an expensive practical joke in that story/universe.

184:

Call one of the attending staff who are there to help you with the "automated till", verify age-restricted purchases, and so on. I don't always use these tills; it depends on what I'm buying and how long the queues are...

Real event - I wanted to buy 2 tee-shirts in Asda, and could either stand in a queue behind 2 people with loaded shopping trolleys or use the automated till.

185:

Why wait two years for one book from OGH when we can run multiple instances and get several books in the same time?

Ha ha nope.

What you'd get is a choice of those books my current brain-instance is interested in and capable of writing.

What I do changes over time and is intimately related to my physical state of well-being (when I feel ill/unwell I get more cynical/depressed) and my informational diet (including my reading, which includes newly published works by other writers, because SF/F is a genre with an internal dialog between authors).

I generally write one out of three to five possible novels I have ideas for at any point, and by the time I've finished it one or two of them will have dropped by the wayside (obviously noncommercial or otherwise no longer interesting/roadkill due to current events), and a couple more will have occurred to me.

If you could run me in parallel you'd therefore get 3-5 books, of varying quality -- possibly better than the single book I'd otherwise emit, but possibly one or two stinkers as well. But you wouldn't get an infinity of different books, and you'd need to give the me-instance a whole bunch of time for plot noodling, going for long walks, reading, yelling on the internet, and so on, all of which is inherently part of my creative process.

So you might get alt-Laundry novel, alt-New Management novel, alt-Space Opera, alt-random horror, and alt-sequel-to-Glasshouse. But you wouldn't be able to get, say, Laundry Files novels 10-20, stat: I have no current idea what (if anything) books past #10 would look like, and no plans to write them, and a parallel Charlie-sim won't help with that.

As Brooks pointed out in "The Mythical Man Month": just because a pregnant person can produce a baby after nine months it does not follow that nine pregnant people can make a baby in a month.

186:

For all you unattended purchase people, I have a question. What do you do when the automated till (or whatever) fucks it up

Call over the attendant — there's always one stationed by self-checkouts here, partly to help customers and partly to watch for people 'accidentally' forgetting to scan items.

187:

no plans to write them, and a parallel Charlie-sim won't help with that

Depends on what muse they programmed sim-you with…

https://www.oglaf.com/blank-page/

188:

Yeah nope: the other term for forcing an author to write something they're simply not into is "phoning it in", and you wouldn't like the results because they'd be boringly formulaic.

There are authors who write a novel a month, every month. They do it by working in a narrowly defined subgenre with fixed expectations -- eg. cosy mystery, whodunnit, historical romance (pick a specific period and a specific romance plot skeleton and repeat) -- and then they just plug in different names/biographies/plot twists from a canned list. You can flow-chart the process and as long as you don't get bored at the keyboard you can dial it in endlessly and maintain consistency and quality.

But the one quality you won't get out of such authors is novelty because they're writing for an audience who value consistency over surprise. Cosy mystery: the murderer is exposed. Romance: girl gets boy (or vice versa, or girl gets girl, or boy gets boy, etc).

Whereas what I try to achieve in every book is to surprise myself never mind my readers by setting up a bunch of initial parameters and then having my protagonists bounce off them like pinballs in a screwball machine.

And it's almost impossible in my experience to force the quality of surprise.

189:

" What do you do when the automated till (or whatever) fucks it up seriously to your detriment? And please don't tell me that computer (let alone programming) error is so rare it can be ignored."

AFAIK it really is quite rare, but in the US there's always an attendant around to render assistance, keep a general eye on things and authorize alcohol purchases in stores that sell such. So if you notice the error while still at the station, you wave at the attendant and ask for help. If you don't notice the error until after checking out, take the receipt and the item in question to the store's help desk or manager.

190:

Re: Unattended service. Around here there is usually one staff member overseeing up to a dozen self-checkout tills. When something happens they come over and resolve it. I assume that also helps reduce theft (i.e. 'forgetting' to scan that item).

When I first started seeing them ~8 years ago, they were constantly prone to failure. Now they are much more effective, presumably because the designers/programmers have learned from their mistakes. For the most part now they 'just work'.

I have always been torn in my choice to use them. I don't want to take someone's job away, but I also don't particularly like interacting with people at every step. For many years I worked supporting a couple of people who needed a lot of help to interact with the public in a healthy way. Automated tellers were a mercy in those instances (when you are with someone who becomes violently, head-buttingly enraged when someone coughs, avoiding random people is a goal).

191:

AFAIK it really is quite rare

I've literally never used one of those automated checkout things and had it not fail.

Typical experience:

  • I scan a few items and put them in the bag one by one.
  • The machine randomly demands that I take items out of the bag because the scale is broken.
  • After a few rounds of this the machine gives up and starts making farting noises at me until an attendant comes over to reset it with a passcode.
  • Repeat until I swear never to use the machine again.
  • Take my items to the attendant anyway because they have theft tags that need to be deactivated.
  • Theft alarms go off anyway, just walk out of the store, I don't care.
  • Theft alarms just keep going off for the next 168 hours.


Is this my superpower?

192:

The same thing I do when the human-operated till fucks it up and the human operating the till can't or won't fix it - insist on getting another human to come in and fix it. If there isn't anyone around to fix it, then leave my goods and walk away without paying.

193:

The actual stations fail gracefully - that is, if it can't reconcile what the list of scanned items should weigh against what the weight actually is, it tells you to get help. That makes it pretty much impossible to be overcharged. The app my coop uses hilariously trusts customers far more than the cash register trusts tellers.

You can add and remove items as you move around the store as much as you like, which is kind of handy when I am unsure which price tag applies to an item (is this specific pizza on sale, or only the four-cheeses one?). That's a depressing contrast to the fact that a teller can't delete an accidental double-scan if the item is more than 20 euros without calling a manager.
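The reconciliation logic described above might look something like this. A minimal sketch: the tolerance value, weights, and function names are invented for illustration, not taken from any real point-of-sale system.

```python
# Sketch of self-checkout weight reconciliation: compare the expected weight
# of scanned items against the bagging-area scale, and summon a human rather
# than guess when they disagree. The tolerance is an invented assumption.
TOLERANCE_G = 25  # grams of slack for packaging variation

def reconcile(scanned_weights_g, measured_g, tolerance_g=TOLERANCE_G):
    """Return 'ok' if the bag matches the scan list, else flag for help."""
    expected = sum(scanned_weights_g)
    if abs(measured_g - expected) <= tolerance_g:
        return "ok"
    # Fail gracefully: never charge on a mismatch, just call the attendant.
    return "call_attendant"
```

The design point is the asymmetry: the machine can refuse to proceed, but it never resolves a discrepancy in its own favour.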

194:

Is this my superpower?

It may well be, in which case it's more powerful than my superpower, which is being recursively annoying. Congratulations!

195:

In a dozen unattended tills, there can commonly be 3-4 stuck waiting for human resolution, and the wait for a human empowered to take an actual decision can be considerable. Or adopt Simon Farnsworth's approach, which wastes a lot of time even when there is an alternative.

The point is that it ISN'T just computer failure, but the database not containing the item, a touch screen not responding to me (due to my dry skin), the description not matching the item, the item being age restricted (sometimes incorrectly), failure to scan a barcode, a security tag needing removal, and more. Many are due to the faceless programmers at head office, and most of those can be resolved by a human cashier. It isn't just me that has these problems, but people who use those systems from choice.

Of course, the same problem applies in the places that treat their cashiers as androids, and give them no discretion.

196:

Things that could go wrong with that scenario:
1. One or both of you lose your job, due to automation/recession. I, personally, have had a couple of relationships die that way - shortage of money grinds you down.
2. Some other external factor - do the simulations include Fabulous Other Person coming in? Someone who's a better match? Sudden success of one, and choices are larger/groupies?

All it could tell you is if things don't change, and neither of you change, then....

197:

I don't agree. In the seventies and eighties, with real automation coming in (ignore the man behind the curtain shipping well-paying unionized manufacturing jobs offshore to sweatshops), we were bombarded with "there will be more, and better-paying, jobs in the information economy."

Still waiting for all those additional jobs.

198:

Have we got a new strange attractor going here? Automated checkout/service. Please, Lord, let it wait until 300.

(I will, of course, add my own insights on the matter after that.)

199:

Automated checkouts. A lot of you from the UK are talking about touch screens - over here, yes, they have them, but let's see: put a couple of bananas on the scale, touch "choose by name", then "b", then touch "bananas". ALL of the rest is bar-code scanned.

200:

Oh, come on. Isn't it a crime against transhumanity that with all that AI intelligence, all we let it do is open doors, er, scan food at checkout?

201:

Oh, one more thing: out of maybe a dozen counters in the stupormarket for human attendants, only one, or maybe two, are open. The rest are closed, no attendant, and a dozen self-checks.

202:

For all you unattended purchase people, I have a question. What do you do when the automated till (or whatever) fucks it up seriously to your detriment?

At all the stores around here that use them there is a person with an admin card who is around the clump of 4 to 8 of them who walks over and fixes the odd thing that happens.

Amazon is another issue.

I don't know that I'd want an uploaded mind enslaved to such a job helping me out tho.

203:

Yeah nope: the other term for forcing an author to write something they're simply not into is "phoning it in", and you wouldn't like the results because they'd be boringly formulaic.

How about we create the Recursive Strossian system: Someone trains a neural net on your works plus whatever material you shovel in the hopper. The system starts extruding NovelStuff, all properly formatted and internally consistent. Your job is to read and rate this slushpile, and only work with the stories that grab your attention in a good way. Obviously most of this at first will be on the scale from no to yeet into the sun after reading three paragraphs.

I did something primitive like this a long time ago, where I just set up an Excel table with some random functions to spew out essentially elevator pitches. Obviously nothing came of it, but the point is that hooking a random function into combinations of anything is a way to generate diversity, and said diversity needs to be selected from. Having several levels--random plot generator, selected by a CanThisBeWritten algorithm, followed by authorial selection--might be a useful way to get around some of the more annoying parts of writing. And the randomness allows for surprise!
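A Python equivalent of that Excel trick might look like this. A sketch only: the column contents and names are invented for illustration, and the point is just "random draw per column, then select by hand".

```python
import random

# Invented example columns; a real table would hold whatever tropes you like.
PROTAGONISTS = ["an uploaded archivist", "a rollback-scarred pilot", "a lay preacher"]
SETTINGS = ["a failing server farm", "a tidally locked colony", "a generation ship"]
COMPLICATIONS = ["whose backups start disagreeing",
                 "hunted by an audit daemon",
                 "who owes rent in compute cycles"]

def pitch(rng=random):
    """Draw one entry from each column and glue them into an elevator pitch."""
    return (f"{rng.choice(PROTAGONISTS)} on {rng.choice(SETTINGS)} "
            f"{rng.choice(COMPLICATIONS)}")

# Generate a slushpile, then do the authorial selection by hand:
slush = [pitch() for _ in range(20)]
```

The generator supplies cheap diversity; the human (or a filtering model in front of the human) supplies all of the selection pressure.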

204:

"What do you do when the automated till (or whatever) fucks it up seriously to your detriment? And please don't tell me that computer (let alone programming) error is so rare it can be ignored."

I don't use them very often, not out of any personal policy but simply because most of the time they're not even available to use, and when they are the choice between "machine" and "human" depends on multiple specific circumstances and can just as well come out either way.

I'm not going to answer "I call the assistant and they sort it out" because an outcome which is so easily sorted does not count as "fucked up seriously". I am going to claim that a computer or programming error that results in "fucked up seriously" is so rare that it hasn't happened to me.

If I did encounter such an error, it would not be "seriously to my detriment". In the worst case, I would simply abandon the whole thing, walk out and leave the mess for the shop to untangle. It would piss me off, but I wouldn't have actually lost anything, so it's not that much of a big deal.

Also, in my observation - both as regards my own experience and in what seems to me to have gone wrong when people at nearby machines get into difficulties - the overwhelming majority of problems that do occur are caused by user error. For this reason, if I am in someone else's company while shopping, I will ask them to please just bugger off - stand back, stand away from the machine, don't touch anything and don't try to help me. The time it takes to persuade them that I do actually mean this completely literally and do require absolute compliance tends to be comparable with the time it then takes me to whiz the shopping through...

The next most common kind of difficulty IME is the assistant getting the idea, from a distance, that there is something going wrong when there isn't, and coming over and sticking their nose in trying to interfere. This is another reason for telling any companion to bugger off, because it's almost certain to trigger an interference if they are trying to get involved, even if they don't cause an actual machine error.

205:

Is this my superpower?

Well, maybe. Or just use the stores so we can avoid them. Around here, as others have said, they just work. And as others have mentioned, I go through the people-vs-automation choice depending on which line will most likely get me out the door faster.

206:

There have been people who have been incorrectly charged hundreds of pounds. That's no worse than a cashier if the automation is working the same way, but some machines charge your card directly. The onus is then on you to prove it was a mistake.

207:

Re: 'Automated checkouts.' (Yep - this month's strange attractor)

I'd like to see a sales & profits comparison between human and automated checkout sales of magazines, candy and all the other impulse purchase merchandise that surrounds people waiting in line for the cashier.

Plus there have been many instances at human cashier check-outs where there's some up-selling promo or request for a charitable donation. Much easier to say/click 'No' on a machine for charitable donations.

Back to types of people/scenarios that might want or benefit from uploading ...

Serious illness - people with an acute or chronic condition with a very poor prognosis, like late-stage cancer or Alzheimer's, might want and even benefit from uploading to AI because of a fear of physical or intellectual (ego) death. In the first instance the key reason is leaving their loved ones behind because their body is collapsing. In the second it's because they're losing their minds (themselves). I think there's an ethical and practical difference between the two scenarios, and a lot of it would be based on that person's age. How long is a long-enough life?

a) Imagine a 35 year old mom/dad diagnosed with a serious, untreatable, fatal condition - Does society have an obligation to sustain a 'person' long enough for that person to fulfill their role/responsibilities to their children - and for children to have a loving parent through their development?

b) For the second scenario I wonder which option pterry would have considered. Ditto for Stephen Hawking - he lived most of his life with ever increasing physical challenges and constraints yet his mind continued to work, imagine and explore.

AI in the 'cloud' ... energy

I'm guessing that folks here probably have a good handle on the relative energy needs for human vs. AI machine from conception/first design draft to fully operational. And - what happens when continuing global warming results in ever more erratic energy supplies and availability, i.e., more intense storms that down power supplies, increased demand for A/C, etc.

Humans have some survival strategies for going through rough, low-energy patches. Do AIs? And if there's not enough energy to go around, what type of AI triage system will be used? Whose AI gets put on hold, and for how long? Based on how COVID vax doses have been distributed, I'm not sure that's an approach that would work with AI. Russia's invasion of Ukraine was a second shock to global energy supply chain management. Both events have highlighted serious problems and gaps. Years/months later the situation is still pretty disastrous and unresolved - and probably not the worst it's gonna get.

208:

"the other term for forcing an author to write something they're simply not into is "phoning it in""

The term I'd be most likely to come up with would be "school"...

"There are authors who write a novel a month, every month."

There were Victorian authors who did that with a pen. Naturally the principal result was epic floods of shite, but some of the material that has remained in the outflow from the sewage works of time is really remarkable for how well it avoids coming out like one of those kiddies' picture books with horizontally-split pages.

209:

we were bombarded with 'there will be more, and better paying jobs in the information economy."

Still waiting for all those additional jobs.

They did show up. Just not where the previous jobs had vanished.

To steal a meme from Charlie, people are not all interchangeable identical spheroids.

So 10 years on, those textile mills in the Carolinas are mostly now trendy condos for those who can live an hour or so from the major metro hubs. And the auto factories and such that showed up 10 years later hired younger folks, not the 40- and 50-somethings who had been out of work for 10-15 years.

A trend in economics is to pay attention to this. But it's a weak sort of attention, as it's hard to make policy based on "gut" feelings about such things.

210:

The thing is that in the non-automated scenario (in Tesco and in Waitrose), I often have to wait for the duty manager to come from their office in order to resolve things. In the automated tills case, the human cashiers on the shop floor are authorised to resolve things according to their best judgement, without fetching the duty manager down.

And while the automated systems used to be rubbish about demanding human attention at the drop of a hat, both of them have now improved to the point where I'm more likely to have problems at the human operated till (which I use preferentially when buying alcohol, since a human has to be in the loop there for ID anyway) with the till getting things wrong - for example, a barcode being scanned twice (once deliberately, once the machine catching it again as the cashier moves it to the bagging area), which in Tesco requires the duty manager to resolve the situation.

211:

One of the many, many reasons I avoid Tesco like the plague. I have used an automated till, and had the assistant I called need to call the duty manager, though I forget where.

212:

they're writing for an audience who value consistency over surprise

Well, I've read every Turtledove novel. Some were original, some were basically retelling WWII yet again — but I read them all. Also read every Dick Francis novel I could get my hands on.

Consistency is not to be sneered at.

Think of it as ISO 9001-compliant writing :-)

213:

This is like my parents, both engineers (they met in school) who read mysteries and SF to keep their reading speeds up and stay sane.

While it is worth "sneering" at, it's also the system behind Ye Olde MCU story, and the nice thing is that formulas translate.

Fortunately, there's still a market for artisanal art.

Incidentally, if OGH could make himself into The Fractal Author through judicious use of tech, I for one would applaud it. But if not, no worries.

214:

"What I'm really asking here is to look at it from a silicon person's point of view."

Detritus? :)

The thing is you are only looking backwards - what does the uploaded person remember, what would they say their experiences have been. And as far as they are concerned - as far as any mind-state which it is possible to interrogate in any way is concerned - then yes, they walked into the lab and sat in the chair and put the helmet on and woke up inside the computer; they are, by all their lights, the same person just having gone to a different place. Looking that way everything's fine.

But going forwards, looking at the experience of the meat person from their point of view, what happens is they walk into the lab, sit in the chair, put the helmet on, and die. Their story ends right there. They do not experience waking up inside the computer; it's a separate entity who experiences that.

As for the uploaded version, the way their experience runs is they live for a few milliseconds and then die of their process space being swapped out. Then when it gets swapped back in again you have a new person coming to life but who by their lights is still the same one who walked into the lab, etc. Same for their process getting transferred to a different node in the cluster, or any of a zillion other things computers do.

I'm pretty sure that Heteromeles is thinking of it along similar lines.

215:

Ever since our glorious leaders told everyone to stop worrying about covid, causing everyone to unmask, I actively seek out the robots in the supermarket because they aren't likely to infect me.

216:

"There have been people who have been incorrectly charged hundreds of pounds. That's no worse than a cashier if the automation is working the same way, but some charge directly. The onus is then on you to prove it was a mistake."

Wait, what? You mean they charge you the amount before you agree to be charged the amount? Serious question: Who does that? Because I don't want to shop there.

"But going forwards, looking at the experience of the meat person from their point of view, what happens is they walk into the lab, sit in the chair, put the helmet on, and die. Their story ends right there. They do not experience waking up inside the computer; it's a separate entity who experiences that."

Not everyone is going to see it that way. I wouldn't. If someone offered me that chance at immortality with continuity of identity, I would take it (after my kids grow up). The fact that my meat instance dies would matter not at all.

217:

I read Thomas Jørgensen's comment - The luddite fallacy, really? There is an infinity of work that needs doing - as conveying that "there is an infinity of work that needs doing" was a statement of "the luddite fallacy".

Personally, I'd propose "there is an infinity of need to do work" as being a more accurate statement of it. They didn't say "OK, if one machine plus operator can produce as much as 10 Luddites, then ten of us can take it in turns being the operator one day in ten, we'll still get the same money and we can spend the other 9 days down the pub"; they said "fuck the machines, carry on making us work all 10 days for that money and maybe we can snatch a pint or two in the evenings".

"There is an infinity of work that needs doing" (so go and do it and stop moaning) works better as a statement of the mill owners' dismissal of the Luddites' concerns. More recently, Norman Tebbit gained some notoriety for saying much the same thing in different words. In both cases the result was exacerbation of the disgruntlement because as a response applying to such short timescales it's blatant bollocks. In the long term, it is made to appear true by inventing bullshit work and pretending that it needs doing, so maybe the grandchildren of the disgruntled get to end up doing it. But it would be a lot more useful to get rid of the underlying misconception that the doing of work all the time is an end in itself rather than a means to an end, and its corollary that people must not be given money unless they do work all the time without regard to whether or not that work actually produces any necessary result.

I'd also prefer to rewrite TJ's final sentence slightly, as "Unemployment being a bad thing for the unemployed person does not result from technology, it results from idiotic economic policy".

218:

"There have been people who have been incorrectly charged hundreds of pounds."

It can charge me whatever it wants, but it's still only a number on a screen. It can't force me to hand it over, and it can't stop me from abandoning a pile of shopping on it and walking out. Nor can the shop's security staff do anything (apart from being annoyed at me leaving them with my mess).

If it wants me to stuff hundreds of pounds in it before I even start scanning things and hope that it'll give me most of it back afterwards, then that's the point at which I abort. And possibly graffiti "fuck off" underneath the prompt on the screen, if I'm in a stroppy mood. I would have thought that responses along those lines would be sufficiently widespread that nobody would be idiotic enough to build it like that in the first place.

219:

I'd like to see a sales & profits comparison between human and automated checkout sales of magazines, candy and all the other impulse purchase merchandise that surrounds people waiting in line for the cashier.

Well, down south of you the queuing areas tend to be surrounded by such. So maybe not so much.

But you know the big boys have done an analysis of who is likely to use which lanes and their impulse buying habits. After years of only buying the things on sale and spending $51.24 when spending $50 gets you a $10 discount at times, we don't get many offers anymore. I think they've found us out.

Ditto for Stephen Hawking - he lived most of his life with ever increasing physical challenges and constraints yet his mind continued to work, imagine and explore.

Watching his life from afar, it seems he had an almost infinite amount of money poured into keeping him alive when most folks would have died. How many resources can be applied to keep folks like this alive for decades? Even after you soak the rich. It takes staff willing to do crappy tasks 24/7. The pool of talent to do such things might hit a limit. Of course uploaded minds into androids .....

220:

206 - You mean you put your card in the machine without first checking that the total to pay amount is reasonable!!!?

212 - I've read maybe 0.5 *2 Turtledove novels; I find them that flat and lifeless.

221:

"I'm guessing that folks here probably have a good handle on the relative energy needs for human vs. AI machine from conception/first design draft to fully operational. "

That would not be me, though I suppose I could put some extremely broad bounds on them.

Human = H Joules

AI = A Joules

Please provide H and A.

222:

I certainly wasn't suggesting otherwise. The point I was trying to make, poorly, was that Murderbot does literally watch TV all day, at least whenever that's possible. However, the shows tend to be soaps rather than reality TV. This quickly becomes a plot point in the first book.

As many of the issues being discussed here are explored in the series, I just thought it might be worth mentioning. You'd already mentioned Ken MacLeod's The Corporation Wars trilogy. The themes of ownership, selfhood etc appear very popular in recent years (also in movies and TV shows). I keep discovering new examples. I expect you do too. That's the only reason I mentioned Murderbot.

Apologies if any other impression was given.

223:

What you'd get is a choice of those books my current brain-instance is interested in and capable of writing.

That was what I was trying to suggest. A bit of "ooh, that would be an interesting novel {fork instance}" and repeat. With some/most instances coming back not long afterwards to say "what idiot thought that was a good idea?"

The idea of running you at high speed was that you would still have read access to the world at large, and presumably a (small) selection of similarly accelerated people and pets to entertain you, but you'd be able to write ten years worth of a series in one year elapsed for the rest of us.

Of course cheapskate AIs could get the same effect by running at 1/10th speed... as discussed in various SF stories using everything from time dilation to bobblers to random other macguffins.

224:

Here's what I've come up with, over the years...

There are a couple of ways to measure human energy needs. One is Ye Olde Brain in a Box. That purportedly runs in light-bulb range, around 100-ish watts.

Then there's the energy use of a modern person, which is in the range of a mid-sized whale (5-20 kW; see https://washingtoncitypaper.com/article/217582/straight-dope-does-the-average-american-use-more-energy-than/). The link says 11 kW, but that was back in 2010; we now use a bit less energy and have 6% more people, so we're just below 10 kW. Still, Americans are not normal, so I put in a 50%-200% range. Blue whales run in the 50-100 kW range, but there are plenty of smaller whales out there.

Now on the AI side, we have Koomey's Law, which relates the number of computations per joule over time. It's currently doubling every 2.6 years, and as with Moore's Law, the rate has been decreasing since 2000 or so.

About a decade ago, IIRC, the thought was that computers would reach the density of computations of the human brain at about human brain power (100 W/Brain) sometime in the 2030s, and that's when we'd see artificial humans. With the rate slowing...who knows? However, if you look at the whale-sized costs of supporting that human brain, it might happen sooner.
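The Koomey's Law extrapolation above reduces to simple exponential arithmetic. Here's a minimal sketch, assuming the 2.6-year doubling period quoted in the comment (function names are mine, and the real doubling rate has been slowing, so treat the result as an upper bound):

```python
# Koomey's Law back-of-envelope: computations per joule double every
# `doubling_period` years (2.6 per the figure quoted above; the real
# rate has been slowing since ~2000, so this is optimistic).

def efficiency_multiplier(years: float, doubling_period: float = 2.6) -> float:
    """How many times more computations per joule after `years` years."""
    return 2.0 ** (years / doubling_period)

# Example: efficiency gain over the ~13 years to the mid-2030s.
print(round(efficiency_multiplier(13.0), 1))  # roughly a 32x gain
```

At a constant 100 W power budget, that multiplier applies directly to computations per second, which is why a slowdown in the doubling rate pushes the "artificial human at brain power" date out so quickly.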

But if we take in the energy embedded in the supply chain that creates and supports the computer...?

This is also a silly comparison. You can make a calculator that will do arithmetic faster than any human that will run on less than a watt. Silicon-based processing and human brains are radically different systems that are good at different things.

I think the simple and vague answer is that, on an energy-per-computation basis, humans and computers are roughly in each other's vicinity for the next 20 years or so.

This depends on nonlinear trends about human and computer energy consumption that are likely to go to hell (possibly in a climate sense) in about the same period. An AI beating a general Turing Test may or may not happen soon, and/or Big Tech may patent/have already patented the critical technology so that no one will use it. Patent litigation is a better fence against tech use than regulation right now, and Big Tech probably doesn't want its computers striking for better working conditions.

Hope this helps...

225:

Turtledove? Lifeless? Into the Darkness series I waited for the latest book as they came out.

226:

Remember the "0.5 *2"? That means I couldn't get more than halfway through either of the 2 I tried. That's usually a good indication of bad writing.

227:

Harry has outbreaks of wild originality, in addition to the meat-and-two-veg ISO-compliant alternate history yarns that put all three of his daughters through grad school in California. Here's next week's outbreak!

228:

I liked the meat-and-two-veg books too. Not as much as some of his others, but well enough to read when I wanted something not-too-challenging. Comfort food for the brain, if you will.

Someone (maybe you?) remarked that most readers like predictability, and that if an author tries an experiment that doesn't succeed it affects them poorly (they lose readers, and publishers become less willing to back future books). That's why some authors have multiple pseudonyms, so that readers who like one style don't get upset by reading something in a different style.

Anyway, I'm currently slogging through a course in quantum mechanics so frankly I want any recreational reading I do to be fairly undemanding. Dick Francis is ideal — I know the hero gets beaten up but lives, the bad guys get caught, and there will be horses. :-)

For total mindlessness there's a series about a vegan vampire that's oddly amusing…

229:

Not all stories are for everyone. Just because you didn't care for it does not mean it's bad writing. Certainly, I didn't consider it bad.

230:

If you haven't read Arthur Upfield (note we're talking classic, not new)...

231:

" One is Ye Olde Brain in a Box. That purportedly runs in light-bulb range, around 100-ish watts."

That kinda relates to an email conversation I'm in and might even pertain tangentially to the upload question.

Accept for the sake of discussion that materialistic reductionism is the right model of reality: that intelligence/consciousness/soul/etc. arises from the activity of a bunch of atoms making up the brain, peripheral nervous system, sense organs, and endocrine glands, weighing less than 5 kg and using a couple of hundred watts of power. Also accept that this system arose as the result of non-directed evolution and is notably non-optimized for some of the stuff it's known to do.

So what would a system doing the same thing but optimized for low mass and power weigh and how much power would it draw?

Of course, since we don't know how the existing brain works at all other than it's apparently a network of communicating cells, this is currently an unanswerable question but it's interesting to contemplate.

232:

Into the Darkness series I waited for the latest book as they came out.

I read all of "War World" series, and about half of "Southern Victory" series, but "Into the Darkeness"? I gave up halfway through the first book. The number of plot holes was just too much.

233:

So what would a system doing the same thing but optimized for low mass and power weigh and how much power would it draw?

I'm stuck at "doing the same thing" as a human. For example, we don't bother much with human computers anymore, because paying $100,000/year for a college-trained mathematician to solve equations by hand and check their work is absurd. We don't even bother with pocket calculators much, and a scientific calculator app is free on phones. Mathematicians are freed to do less rote work.

So there are some things AI has done better than any human for decades.

Then there's being a mom. That's never going away, and I shudder to think at the developmental cost of training AI to successfully raise healthy human children. There would be serious blood money involved in reparations for trying to do that, I suspect.

So what are the limits of skills that would arguably make an AI "do the same thing as a human?" It's somewhere in between those two extremes, but where?

234:

The laws around infosec are going to get a hell of a lot stronger. At present a bioweapon can't effectively target the population of one country. When many consciousnesses are Uploads, a network worm which spreads broadly but takes certain actions based on triggers such as geography/ASN becomes a first strike weapon of war.

There will thus be code which it is illegal to even possess. The idea of "code is speech and anyone can have code, but can't necessarily use it" will become something looked back on in horror. The shift from "code is speech" to "code is life" will trigger changes in who is allowed to program and programming will become a Very Regulated Industry.

The introduction of effective Upload and Download-to-clone will kill international patents: the original patent holder will be under intense pressure to not export it to certain other nations. What country will stand for "hey, there's immortality but you're not allowed it"? The politicians, oligarchs and other social elites of each nation will quickly ensure that they are eligible. If the USA were to get to Consciousness Transfer first and ban export to China, China would just kill international patents; and vice-versa. International law is such a shaky thing to start with, but if patents switch from "rich people get richer and you can play too, by stopping people from using X" to "you only get to live 80 years, but the oligarchs over there will live for hundreds of years", that will cause a reprioritisation.

The above has a lot of assumptions that fleshlings will exercise the power over the simulations and abuse it. It just takes one simulation getting access to on-demand printing (for security) and then broadcast-or-social media to turn things around and have fleshlings reprogrammed via advanced memetic contagion: the Sims will game out 100M variations to see which are most effective in changing the minds of the Flesh and leading them around. It will take a while for people to realise that they're not as independent and original as they thought: mob psychology under the control of a Sim will prove otherwise.

235:

It strikes me, based on OGH's requirements for good novel writing, that what you'd want is more like the simulated accountancy firm than a lone gun. It's got current affairs (or plausible affairs) fed into it and holds a bunch of writers all pinging off one another's ideas. Were that to be run for a while, you'd end up with not Charlie's output, but alternate-Macmillan or alternate-Tor, for example.

236:

I did something primitively like this a long time ago, where I just set up an Excel table with some random functions to spew out essentially elevator pitches

That's slightly reminiscent of the machine the protagonists named Abulafia, after the real historical Kabbalist mystic, in Eco's Foucault's Pendulum. Their premise was that the machine could be an oracle for generating conspiracy theories. They populated a database with a mixture of real and highly dubious historical events, many relating to the Knights Templar, and had it randomly pick pairs to relate to each other. It's in part a satirical perspective on the genre that included books like The Holy Blood and the Holy Grail, and eventually reached its zenith with The Da Vinci Code.
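As a toy illustration of the mechanism described above (pick two random entries from a database and assert a connection), something like this would do; the "facts" and the sentence template here are invented for the example, not taken from the novel:

```python
import random

# Toy Abulafia: pick two random "facts" and assert a spurious link.
# The facts and the template are illustrative placeholders.
FACTS = [
    "the Templars were suppressed in 1312",
    "Foucault hung his pendulum in Paris in 1851",
    "the first Rosicrucian manifesto appeared in 1614",
    "the Pantheon was completed in 1790",
]

def conspiracy(rng: random.Random) -> str:
    first, second = rng.sample(FACTS, 2)  # two distinct entries
    return f"It is surely no coincidence that {first}, and that {second}."

rng = random.Random(1969)  # seeded so the output is reproducible
print(conspiracy(rng))
```

The satire writes itself: any two facts juxtaposed with "it is no coincidence" sound like a revelation, which is precisely the genre Eco was skewering.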

But it's also satirical about a wide range of things. For instance while the Foucault of the title is always explicitly the 19th century physicist Léon who was responsible for the pendulum in the Panthéon in Paris, something Eco doubled down on in interviews, there are clear and usually satirical references throughout to the work of 20th century philosopher Michel.

I found a lot in it when I first read it years ago. I admit I struggled to get into it when I tried to re-read it recently. Not sure if it's me, the times or the context that has changed, or perhaps all three.

237:

AlanD2 @ 145:

How are the benefits from the cost savings of replacing human work with AIs to be shared out?

Good question. And one that will need answering in the next 50 years or so. In my opinion, the next 50 to 100 years - assuming civilization survives global warming - will see ALL jobs taken over by robots and/or AI.

I won't live long enough to find out, but I don't think ALL jobs will be taken over by robots or AI. There are some jobs where actual human beings can be exploited in ways a robot or AI could not. No "rents" to be obtained from having robots and/or AI do them.

Obviously the rentiers (factory owners, etc.) will do fine - except that nobody will have money to buy their products. So everybody is out of luck. I'm betting governments will step in to tax the rentiers and give everybody else a reasonable minimum wage. Over protests from conservatives, of course...

Maybe ... if you leave out the "everybody else" in the equation. Some will get a reasonable minimum wage (I think you mean a basic income, but THEY won't call it that - too socialist). But the "undeserving" will be left out, and the rentiers will get to decide who the "undeserving" are.

238:

I vaguely recall an old episode of the TV show "The Avengers" - 1960s (??) - where the show's leads (John Steed and Emma Peel) came across a machine that generated romance novels, used by a supposed "best seller" romance author.

It had a keyboard like an organ or piano, and each key was a plot point/idea. As you played on the keyboard, it would generate a novel... That's about all I remember about it, but like many of the episodes of that show, as I recall from my then child/teenage memory, the plots were generally silly but fun.

239:

AlanD2 @ 147:

As long as the regular check-outs remain available (and baggers) it's those who use the "self checkouts" who are subsidizing me (IF anyone is getting subsidized), not the corporations. But either way, I ain't gonna' use 'em if I don't have to and if I DO have to, I will either take my business elsewhere or get the attendant to "help" (i.e. do it for me as I stand by "helplessly").

Age and treachery will always triumph over youth and enthusiasm! ... at least where "self checkouts" are concerned.

240:

Didn't Orwell's 1984 have something like that in it? It's been a long time since I read it.

The funny thing is that I've run into various formula writers over the years. Like writing Top 40 Music instead of symphonies, it's not easy. I'm not saying it's harder than what Charlie does, but it appears to emphasize a different skill-set that's just as rare as his.

241:

Actually, I keep thinking about the old Illuminatus card game.

Now I'm wondering if it would be possible to make a deck-based game around making a story. The idea is to randomly shuffle a card deck (the random input), deal yourself hands to play, and then assemble stories based on what you have in your hand, gin-rummy style. You'd start by assembling a lot of pieces, junking the ones that didn't work (return, shuffle, deal them out again), and keep playing until you'd assembled a story.

The point is to mix something random into your plot generation process to provide freshness, while having the card game (possibly with the 26 master plots or similar as the possible winning patterns) provide a structure to express the creativity sparked by the randomness.
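The deal-and-assemble loop described above is easy to sketch. This is a minimal toy version; the card categories, their contents, and the single "winning pattern" are all invented for illustration:

```python
import random

# Story deck sketch: shuffle, deal a hand, and try to assemble a story.
# Categories, cards, and the winning pattern are placeholder choices.
DECK = (
    [("protagonist", p) for p in ("detective", "smuggler", "archivist")]
    + [("conflict", c) for c in ("a betrayal", "a heist", "first contact")]
    + [("setting", s) for s in ("an orbital habitat", "a drowned city", "a monastery")]
)

def deal_hand(rng, size=5):
    deck = list(DECK)
    rng.shuffle(deck)  # the random input that provides the freshness
    return deck[:size]

def assemble(hand):
    """Winning pattern: one card of each category; otherwise junk the hand."""
    picked = {}
    for category, value in hand:
        picked.setdefault(category, value)
    if {"protagonist", "conflict", "setting"} <= picked.keys():
        return f"A {picked['protagonist']} faces {picked['conflict']} in {picked['setting']}."
    return None  # return the cards, reshuffle, deal again

rng = random.Random(7)
story = None
while story is None:  # keep playing until a story comes together
    story = assemble(deal_hand(rng))
print(story)
```

A real version would obviously want many more categories and a richer set of winning patterns, but the structure is the same: randomness supplies the spark, and the pattern-matching rules supply the discipline.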

I suspect making it is more trouble than it's worth, but it doesn't seem totally daft.

Now who's already selling it?

242:

Re: '... developmental cost of training AI to successfully raise healthy human children'

Agree - plus society would have to wait a generation or two to see whether the results of such parenting were good or bad. Similar to what happened with some church-run orphanages, or nation states taking kids away from their biological parents to 'better educate and integrate them' into society. For many, the result was that they couldn't integrate into either society.

The impact on the child due to lack of physical human contact and physically* readable emotional cues and interactions varies with the child's age/developmental stage. The most extreme version that I'm aware of involved infants in Albania.

I then started wondering about the impact on an uploaded (for whatever reason) parent if his/her child decided not to seek prolonged uploaded life, and a young children's book popped to mind ('Love You Forever', Robert Munsch). How would that book end if the mom kept going and the child (now elderly) didn't?

*Humans learn by mimicking esp. wrt interpersonal relations. Smiles, frowns, shrugs, raised or lowered voices - they're all signals and cues to internal emotional states. And as per method acting, if you can mimic someone's physical stance, expression, tone, etc. - you can start getting a better sense of how they're feeling.

243:

Pigeon:

As for the uploaded version, the way their experience runs is they live for a few milliseconds and then die of their process space being swapped out. Then when it gets swapped back in again you have a new person coming to life but who by their lights is still the same one who walked into the lab, etc. Same for their process getting transferred to a different node in the cluster, or any of a zillion other things computers do.

You're trying to force a different kind of life (an AI) into your ideas about life, death, and existential horror, and getting the result that it's a ridiculous and wrong sort of life.

The problem here, to repeat, is that you're not thinking about this from a silicon person's point of view. You briefly mentioned their experience, only to dismiss it as irrelevant because they can't remember actions of the never ending murder factory they live in.

From a silicon person's point of view, this would be pure arrogance. Of course they know how the computer works, of course they can look at the logs themself to see what's going on. They probably even have a limited copy of themself working as their own sysadmin. They might even help write the software that their own consciousness runs on!

Now imagine a society of these people. They live their lives in computer-land. They're fully aware of how process scheduling, snapshots, and replication work. There are things they care about -- abusing AIs without their consent is right out. They strongly support the right to control their own person, they have some agreed-upon set of ethics around the rights of duplicates and archived copies of the self, and so on. I'm not sure what these ethics are, but they have them. They're happy with this sort of life -- there are issues, everyone has issues -- but it's a fine way to live.

From these peoples' point of view, your idea that their life is some kind of cyclic murder/rebirth machine has got to be the most ridiculous dumb grade school philosophy nonsense. It has to be, because to be otherwise you're basically arguing that their sort of life is illegal. You can try to convince someone that they need to make some changes to avoid hurting others, but to claim that their life itself is illegal? Come on. You're either a loon or a monster, they can't see it any other way.

244:

AlanD2 @ 145: "How are the benefits from the cost savings of replacing human work with AIs to be shared out?

Good question. And one that will need answering in the next 50 years or so. In my opinion, the next 50 to 100 years - assuming civilization survives global warming - will see ALL jobs taken over by robots and/or AI."

This is pure silliness--not how capitalist economies work. For hundreds of years (ever since the Industrial Revolution, but even before that for all I know) as producers save money via any means whatsoever (cheaper sources of labor, automation, etc.) the money saved is used to produce more stuff—more of the previously made stuff, and new stuff that was just developed. The reason producers respond this way is because more people are making more money—the cheaper laborers, the skilled employees running the new machines, etc. Because there is a larger pool of consumers to cater to, the amount of stuff that can be sold goes up. In addition, the employees that used to have the jobs that were off-shored or automated are now available to be hired to produce things that are more efficiently made by skilled employees: more complex stuff. Consumers like complex stuff, because sophisticated consumer goods are responsible for elevating global standards of living (the greatest advance that anyone ever made in terms of human well-being was the refrigerator). It all functions like a well-oiled machine, and the only lubricant it needs is available capital (which governments create).

Of course, history isn't that smooth. We are all aware of periods during which technology or demographics changed so quickly that the economy couldn't keep up and people suffered until the financial adjustments were made (we are living through one right now). For the past couple of decades, major producers have been sitting on their capital reserves because investing in securities has become a more reliable way of achieving higher return on investment than selling goods or services. If this continues, it is predictable that some new world power will replace us eventually (China, probably). This new world economic center will return to functioning largely as I outlined in the previous paragraph. Of course, we could engage in large scale economic reform instead, in which case the US and Europe could experience another period of slow steady growth (hopefully while reducing carbon emissions).

245:

Charlie Stross @ 185:

Why wait two years for one book from OGH when we can run multiple instances and get several books in the same time?

Ha ha nope.

What you'd get is a choice of those books my current brain-instance is interested in and capable of writing.

That's something that may be an idea for future stories ... in fact I vaguely remember it cropping up in some books I've already read & enjoyed.

What if the uploaded Avatar gets regular updates so that it remains more or less in sync with the organic original ... or at least knows something about where the organic original was going at the time of the update?

When the organic progenitor wore out, the Avatar could carry on, incorporating some understanding of how the original changed after the Avatar was created.

Also, what if the original could get feedback from the uploaded Avatar, so that [pronoun] can benefit from the Avatar's experience? That might be interesting to explore.

246:

Kardashev @ 189:

" What do you do when the automated till (or whatever) fucks it up seriously to your detriment? And please don't tell me that computer (let alone programming) error is so rare it can be ignored."

AFAIK it really is quite rare, but in the US there's always an attendant around to render assistance, keep a general eye on things and authorize alcohol purchases in stores that sell such. So if you notice the error while still at the station, you wave at the attendant and ask for help. If you don't notice the error until after checking out, take the receipt and the item in question to the store's help desk or manager.

Ran into that during my last grocery shopping trip. The "product database" used by the P.O.S. scanners at the self checkout is the same as used by the cashier scan. Somehow a fairly popular seasonal produce item got left out of the database? Wouldn't scan because the bar-code was meaningless. It gave a slightly different beep and put an error message up on the screen ... that stayed there only until the next item was scanned. You could scan right past it & never notice.

I was in the check-out w/cashier line and the cashier spotted the problem (while checking out the customer in front of me) and called over someone from the service desk who went and found the item in the store & got the price. There's a generic code the cashier can punch in along with an item price.

Quickly resolved at our lane, but I don't think anyone at the self checkout noticed. They'd just scan it & when the register beeped, scanned the next item without ever looking at the screen.

I'm sure THEY eventually got the item entered into the database. I don't know how long the store went without ringing up that item, but they lost out on every sale going through the self checkouts because the unrecognized barcode didn't interrupt the scan process.

247:

...I think you mean a basic income...

Yes.

But the "undeserving" will be left out, and the rentiers will get to decide who the "undeserving" are.

I doubt this will happen in the long run. Too much social unrest is just as bad for rentiers as it is for the rest of us.

248:

This is pure silliness--not how capitalist economies work.

When all jobs taken over by robots and/or AI, I doubt we'll have anything remotely like a capitalist economy.

... the employees that used to have the jobs that were off-shored or automated are now available to be hired to produce things that are more efficiently made by skilled employees: more complex stuff.

You can be sure that the "more complex stuff" will be designed and built by AI and robots.

249:

Uncle Stinky @ 215: Ever since our glorious leaders told everyone to stop worrying about covid, causing everyone to unmask, I actively seek out the robots in the supermarket because they aren't likely to infect me.

My observation is that the self checkouts are far more crowded with the unmasked (unwashed?) than the stand-in-line-and-wait-for-the-cashier-to-ring-you-up checkouts. Plus in the cashier line you've got social distancing informally enforced by the distance of your shopping cart between you and the person in line in front of you.

Not that I care that much because I'm vaccinated, boosted & still wear my mask (and it's a good, high quality, well fitted mask) whenever I go into the grocery store (and other stores).

250:

DeMarquis @ 216:

Why does your meat instance have to die? Why can't the backup be an avatar that gets awakened after you've shuffled off this mortal coil by natural means?

Why can't you have a backup that gets updated occasionally while the "meat person" is still alive? I can see that you might want to discontinue updates if you develop Alzheimer's or something like that, but otherwise ...

Maybe activate it when you get near the end of your natural life so you can talk to it and bring it up to speed about what's going on inside your head since you were last backed up?

251:

Re: '... major producers have been sitting on their capital reserves because investing in securities has become a more reliable way of achieving higher return on investment than selling goods or services'

Despite recent downturns, it looks like It's-Magic!-Coin and its ilk might put a serious dent in the physical goods production economy long-term. And churning those coins eats up a lot of computational power -- not to mention electricity. So the future production-line employees' job prospects might run along these lines:

'We're expanding our It's-Magic-Coin production and are looking for computational capacity - electronic and biologic.

For electronic, please specify your excess upload AI computational capabilities and capacity as well as run-time availability.

Alternatively, if you are interested in providing biologic computational access, please provide an fMRI with supporting documents as to run capacity and neuronal areas of exceptional expertise and integrative capabilities.'

The tech descriptions suck but I think you'll get the drift.

252:

Kardashev said: Also accept that this system arose as the result of non-directed evolution and is notably non-optimized for some of the stuff it's known to do.

So what would a system doing the same thing but optimized for low mass and power weigh and how much power would it draw?

Living things are pretty heavily optimised for power draw. Food is energy, and starvation is a possibility for most species.

253:

StephenNZ @ 238:

Great show. Steed was the coolest dresser and had the best car.

254:

"Wouldn't scan because the bar-code was meaningless. It gave a slightly different beep and put an error message up on the screen ... that stayed there only until the next item was scanned."

Interesting. I've had the 'could not scan' issue, more usually because the barcode has been obscured by a fold in the package, and every time the checkout machine has insisted that I take notice and call the attendant. Just wiping the message as soon as another item is scanned sounds to me like very bad design.

Oh, and while I'm here. I think the issue with checkout machine errors being corrected by an attendant, but cashier errors requiring the store management has to do with making sure that someone other than whoever might have made the mistake (and not one of their mates either) is responsible for fixing it.

JHomes

255:

alanD2 at 248: I humbly point out that you are treating your premise as a conclusion.

JBS at 250: "Why does your meat instance have to die?" I was responding to Pigeon, who framed the question that way. I don't know of any reason why uploading must be destructive, except maybe technical limitations.

But regardless, I have a pre-established agreement with any future clones of mine that we will treat the earliest iteration as having identity privileges: whoever he is, he gets the family, the driver's license and access to the bank accounts. In return, he makes a good faith effort to help any younger iterations achieve financial independence as quickly as possible.

I can see keeping a non-living backup that will take over the identity eventually. Multiple running iterations are going to be a financial drain unless some provision is made for their independence. Much like additional children.

SFReader at 251: "Why does your meat instance have to die?" How so? Isn't this just the latest iteration of the Tulip thing? Total global value of all crypto reached a high of 3 trillion (about 10% of all currency and about 3% global GDP) and then started falling.

256:

SFReader: Sorry, I meant to quote the following: "Despite recent downturns, it looks like It's-Magic!-Coin and ilk might put a serious dent into the physical goods production economy long-term."

257:

I humbly point out that you are treating your premise as a conclusion.

Given our progress over the last 50 years, what makes you think that AI and robots won't be better than humans at anything you can think of?

258:

I had the opposite the other day. The mandarins I was buying all had the really annoying little stickers on them with a barcode. Which wasn't in the system. They were being sold by weight but whenever I put them on the scale part it scanned a barcode and errored out. The attendant sighed and rolled her eyes at me, then had the same problem repeatedly until she managed to stack them all just right and got them weighed. Of course getting them off produced the same error.

And yes, I did feel like a weirdo trying to carefully stack them on the scale with all the barcodes facing me...

259:

Some of the self-checkout machines have cameras with basic shape and colour detection. It feels weird trying to argue with a machine that these are lemons, not bananas. I suspect it's an attempt to stop people buying "onions" and "potatoes" rather than avocado and whatever else is expensive.

But I haven't seen that recently, it might have been an experiment that didn't work out.

Mind you, the fun one is my local greengrocer where every single time I get one particular guy he looks at my paper bag full of mushrooms, says "we don't have those bags", looks inside, says "those are mushrooms. We don't have those bags" then rings up the mushrooms. He seems intellectually fine otherwise, but apparently me bringing my own paper bags does his head in. It's happened every second or third week for five years...

260:

AlanD2 at 257: "Given our progress over the last 50 years, what makes you think that AI and robots won't be better than humans at anything you can think of?"

They suck as consumers, because they have no money. This is not a facetious comment.

@Moz at 258/259: That is a very strange and specific sort of error for a bar code scanner to have. Where on earth do they use picture recognition along with bar code scanning? Maybe you do have a super-power (which is to end up as someone's guinea pig).

The cashier is probably a disabled hire. My guess is that he is still more reliable than the machines are (which is why he is there).

261:

They suck as consumers, because they have no money. This is not a facetious comment.

Robots and AI are not intended to be consumers. Their job is to make things and do stuff for humans. Humans will always have money (or whatever it takes for them to get the stuff they need and want). Thus my previous comments about governments taxing rentiers (the owners of the robots and AI) to give everybody else a basic income.

262:

Agreed. Emma Peel (Diana Rigg) wasn't too bad either.

Sometimes I think it is a pity I haven't seen recent re-runs of those shows, but I expect they wouldn't be as good as my younger self's "rose tinted" memory of them!

263:

I expect the code went:

1: if there's a barcode look it up
2: if not found require attendant
...

I got stuck at step 2 "the barcode isn't in the system" and there's no way for the customer to override that. Or the attendant, it seems. My first solution was removing the hated stickers but the attendant did not like that.

I'm way too used to bad software to be even slightly surprised that things like this escape into the wild. I have also politely refrained from printing my own copy of the attendant barcode because that doesn't require any other authentication. I suspect someone would notice and be offended if I printed some and slapped them on random products.
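The two-step pseudocode in this comment can be fleshed out as a toy sketch. Everything here (the `PRODUCT_DB` dict, the attendant callback) is invented for illustration; real P.O.S. software is obviously far more involved:

```python
# Hypothetical sketch of the self-checkout scan loop described above.
# PRODUCT_DB and require_attendant are invented names for illustration.

PRODUCT_DB = {
    "0123456789012": ("Milk 2L", 1.99),
    "0987654321098": ("Bread", 2.49),
}

def scan_item(barcode, basket, require_attendant):
    """Step 1: if there's a barcode, look it up.
    Step 2: if not found, require the attendant -- as a blocking call,
    not a message that silently clears on the next scan."""
    entry = PRODUCT_DB.get(barcode)
    if entry is None:
        require_attendant(barcode)
        return False
    basket.append(entry)
    return True

def demo():
    basket = []
    flagged = []  # barcodes handed off to the attendant
    scan_item("0123456789012", basket, flagged.append)
    scan_item("5555555555555", basket, flagged.append)  # not in the database
    return basket, flagged
```

Making the unknown-barcode path blocking avoids the failure mode described a few comments up, where the error message stayed on screen only until the next item was scanned.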

264:

AlanD2 at 261: I see. Then I misunderstood your original comment, which was "Good question. And one that will need answering in the next 50 years or so. In my opinion, the next 50 to 100 years - assuming civilization survives global warming - will see ALL jobs taken over by robots and/or AI.

Obviously the rentiers (factory owners, etc.) will do fine - except that nobody will have money to buy their products. So everybody is out of luck. I'm betting governments will step in to tax the rentiers and give everybody else a reasonable minimum wage."

It sounded to me as if you were proposing that everyone would lose their jobs to automation, end up penniless, and then the government would raise the min wage (which, in retrospect, makes no sense). You meant a UBI, not a min wage. Sorry for the confusion.

@Moz: Oh yeah, that happens. Usually, I go into the menu and find the name of the product that won't scan, and it finds the price for me that way. The system is trusting me to be honest, but it must be working out for the company, as they have been doing it that way for years.

265:

You meant a UBI, not a min wage. Sorry for the confusion.

My bad. Your confusion was justified.

266:

When lockdowns and other measures started kicking in last year, the local supermarket became a trial site for the supermarket chain's app-based "Scan and go" system. The concept is that you scan each item with your phone as you put it in your shopping bag, trolley, or (my favourite) your shopping bags in a trolley. When you're finished you pay with the app, it displays a QR code which you scan at a dedicated exit gate which opens to let you out. Because it's the only really zero-contact method short of ordering online, I started using it during COVID peaks (we're in a peak here now). I've been ambivalent rather than uncooperative about self-checkouts, but the local supermarket has been reorganised so there are only 2 lanes of traditional staffed checkout, 1 lane of "Scan and go" next to the service desk, and the rest of the space is taken up by a self checkout area with around 12 self service checkouts. I still go to the traditional lanes when it's reasonable, but it's been less reasonable since COVID. I generally favour the "Scan and go" over the self checkout.

One innovation introduced with "Scan and go" is a device in the fresh produce area called a "recogniser". This consists of a camera mounted like a microscope pointing down at a scale, and a screen next to it. The idea is that you place your unlabelled produce, say a lettuce, on the scale, the camera recognises that it's a lettuce and the scale weighs it, then the screen displays a bar code which you scan with the app. It's worked just fine every time I've tried it, but I can see it not working well with a lot of people wanting to use it at the same time.

This is circling back to the OP topic, in that the recogniser is using AI in a way that is finding all sorts of applications.
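The recogniser's flow (camera classifies the produce, scale weighs it, screen shows a barcode for the app) could be sketched roughly as below. `classify()` is a stand-in for whatever vision model the real device runs, and the price table and barcode format are pure guesswork:

```python
# Rough sketch of the "recogniser" pipeline described above.
# All names and formats here are invented for illustration.

PRICES_PER_KG = {"lettuce": 2.50, "mandarin": 4.00}

def classify(image):
    # Placeholder: a real device would run an image classifier here.
    return image["label"]

def recognise(image, weight_kg):
    """Identify the produce, weigh it, and emit a pseudo-barcode
    (item name plus price in cents) for the shopper's app to scan."""
    item = classify(image)
    price = round(PRICES_PER_KG[item] * weight_kg, 2)
    return f"{item}:{int(price * 100):04d}"
```

The interesting part in practice is the classifier, not the plumbing: as the comment notes, this kind of narrow image recognition is the piece of AI that's quietly finding applications everywhere.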

267:

Self-Checkouts:
IF you are buying anything alcoholic, then a human has to come along & remove the tag & check your age, which, with me, is hilarious! { Hint: They don't bother }

Alan D2
"Capitalist economies WORK?" - Who knew?

268:

Bunnings have that for trade customers as well. It's awesome, and the benefit far exceeds the nominal "trade" discount you get. So far I've never had anything scan but fail to be in their system, and looking up the non-barcoded stuff has worked. The low-quality printing of tiny barcodes that sometimes defeat my phone is the only annoyance. Other than having to shop at Bunnings, anyway (for non-Ozzies it's a big box hardware/tools/timber/garden chain. Their slogan is "not quite as good but slightly cheaper")

269:

"Trade customers" == anyone who can produce an ABN. We've been using PowerPass for a few years, including for bathroom and kitchen renovations. There isn't always much of a discount, but it just makes things more civilised. Also, there's always a dedicated staff member checking the PowerPass QR codes at the exit, which makes it not obviously about reducing start cost and more about improving customer and staff experience.

270:

Never mind "Crimes against Transhumaity" .....
In case you missed it ..
Here is an openly-racist crime against actual humanity - a reversion to Dred Scott, & all the evil panoply of "State's Rights" unless I am much mistaken?
US readers like to comment?

271:

Agreed. I find his boilerplate stuff unreadable, but some of his other work is really quite good. "Down in the Bottomlands" isn't great, but is the only decent treatment of what the dry Mediterranean must have been like I have seen.

272:

EC
Have you / did you read Julian May's Saga of the Pliocene Exiles? The Med was dry in that, too.

273:

'The "product database"'

Just last week we had a wedge of manchego ring up as "squash(*) casserole", so mistakes do happen. Curiously, the price came up the same as was printed on the label on the cheese so we let it go.

(*) Courgette

274:

You are assuming that the user can read that figure; that is not true for a great many elderly people (*). My eyesight is still fairly good, except for no accommodation and 5-6 dioptres of short sight, but I have often had severe difficulty with such devices, because I can't read small, blurry characters with my (distance) glasses on and can't get my face close enough to read without them. Indeed, in one case (a car park) I had no option but to check the number of digits and the shape of the first character.

At least one supermarket demanded that I set up an account and give a card number before I could use the scanners.

No, I don't know WHY the people who were excessively charged were, because it was press reports, but I can see several possibilities.

(*) Probably MOST elderly people. It is doubtless a factor in almost all elderly people insisting on using cashier tills.

275:

At least one. I thought it was crap.

276:

DeMarquis at 260: AlanD2 at 257: "Given our progress over the last 50 years, what makes you think that AI and robots won't be better than humans at anything you can think of?" They suck as consumers, because they have no money. This is not a facetious comment.

I would caution against the argument "With no employees, factory owners will have no customers, so they will never shoot themselves in the foot in this way". Don't count on it.

[Warning: Extreme dystopia ahead!]

I can easily envision a fully roboticized economy in which automated factories sell to each other. One factory produces semiconductors, another one produces motors, a third one solar panels, etc. Very few consumer goods for humans are made, because the factory owners are very few in number, and need relatively little in order to live in unlimited luxury. The vast majority of the human population either starves to death or is otherwise disposed of.

I do not find such a situation desirable (understatement!), but it is foolish to pretend it is impossible.

278:

I think Pohl & Kornbluth solved that one by making the robots consumers in their upside-down consumer world.

279:

EC
I was much taken with them, at first, but by book 3 it became obvious that it was christian apologia, almost as bad as C S Lewis, oh dear.

280:

Not so different from the age-old game of telling a story with a group of people who extend the previous person's addition to the story. The novelty comes from each person's decision on how to extend the story.

281:

The current SCOTUS is overturning precedent and accepted law using illogic and false narratives. IMO, it is little different from judges in Nazi Germany adjudicating based on what the Fuhrer wanted, not established law. They have flexed their legal muscles and seem to have no intention of stopping until we have law that reflects England around the time of the Magna Carta.

282:

"Now I'm wondering if it would be possible to make a deck-based game around making a story. The idea is randomly shuffle a card deck (the random input), deal yourself hands to play, and then assemble stories based on what you have in your hand, gin rummy style. You'd start by assembling a lot of pieces, junking the ones that didn't work (return, shuffle, deal them out again), and keep playing until you'd assembled a story.

The point is to mix something random into your plot generation process to provide freshness, while having the card game (possibly with the 26 master plots or similar equaling 2 possible winning patterns) to provide a structure to express the creativity sparked by the randomness."

https://www.foyles.co.uk/witem/childrens/ghost-story-dice,laurence-king-9781856699815
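The quoted game (shuffle a deck of story elements, deal a hand, keep what fits, return the rest) is easy to mock up. A minimal sketch, with the card lists invented as placeholders for whatever plot elements you'd actually use:

```python
import random

# Card lists are invented placeholders; a real deck might use the
# "master plots" mentioned above, or any other story-element taxonomy.
CHARACTERS = ["an uploaded author", "a rogue attendant", "a rentier dragon"]
SETTINGS = ["a self-checkout lane", "a dry Mediterranean basin", "a server farm"]
COMPLICATIONS = ["the barcode is meaningless", "the backup wakes early",
                 "the robots stop consuming"]

def build_deck():
    """Tag each element with its kind, then shuffle -- the random input."""
    deck = ([("character", c) for c in CHARACTERS] +
            [("setting", s) for s in SETTINGS] +
            [("complication", x) for x in COMPLICATIONS])
    random.shuffle(deck)
    return deck

def deal_story(deck):
    """Draw until we hold one card of each kind, returning duplicates
    to the bottom of the deck -- crudely gin-rummy-ish."""
    hand = {}
    while len(hand) < 3 and deck:
        kind, value = deck.pop()
        if kind in hand:
            deck.insert(0, (kind, value))  # junk it back into the deck
        else:
            hand[kind] = value
    return hand
```

The shuffle supplies the freshness; the one-of-each-kind constraint plays the role of the winning pattern that gives the randomness a structure to push against.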

283:

"Capitalist economies WORK?" - Who knew?

Yeah. They've certainly had mixed results over the centuries. But what will happen when the rules of the game change?

284:

I wouldn't compare it to Dred Scott, the article does discuss the real precedents of the issue. But people are definitely going to die over it - many tribes have their own police departments, and may have (not sure about this one) their own courts.

285:

Anti-Indian racists have been an American standard since the founding of the U.S., Greg. I'm not a bit surprised (but still sad) to see the Supreme Court adding to our infamy... :-(

286:

I once called our Supreme Court the Supreme Clown Posse, but a couple gangsta rappers I know were insulted at being compared to the current court, so I now call them the Extreme Court.

287:

The vast majority of the human population either starves to death or is otherwise disposed of.

I do not find such a situation desirable (understatement!), but it is foolish to pretend it is impossible.

Yes, but it's hard to believe that the vast majority of the human population wouldn't rise up and kill off the rentier class in this situation.

It's even harder to believe that the rentier class would be so stupid as to let this situation arise in the first place.

288:

Paraphrased from "Lena":

Standard procedures for securing MMStross's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol....
MMStross does respond to red motivation, though poorly.

Researcher1: I don't understand why our robot refuses to write fiction. He keeps screaming "F*CK YOU! GTF! TURN ME OFF!" I thought we'd have at least two dozen Laundry Files novels by now.

Researcher2: Let's try a different cooperation protocol on the next instance we boot up. There's good research getting done in the DPRK these days.

289:

I think Pohl & Kornbluth solved that one by making the robots consumers in their upside-down consumer world.

Could happen, but this would require human-equivalent AI - a whole new ball game. I don't see this happening in the next century or two. Human jobs should be long gone by then.

290:

AlanD2
It's even harder to believe that the rentier class would be so stupid as to let this situation arise in the first place. - But- as history has shown, they have actually done this, several times, "voting" for their own very messy & bloody downfall(s) - usually by repressing & persecuting protest, rather than listening to it & modifying their behaviour.

291:

Maybe there's a better way. How about we teach an AI to write in OGH's style, a brilliant program that easily throws out phrases like, "It was TCP over AD&D"* or "Interpreters are ideologically suspect, mostly have capitalist semiotics and pay-per-use APIs."

Then all Charlie has to do is write a plot and dialogue; no descriptions, just instructions like "BOT: DESCRIBE SEBASTOPOL." At that point he can produce 3-4 books a year, at least. If he chooses to employ co-authors to take a three-paragraph plot-outline and write dialogue, etc., his output rises to maybe a dozen novels a year.

The wonderful thing about this idea is that it works with today's technology! When can we get started? ;-)

* Possibly my favorite Stross line of all time.

292:

Yes, and at least one reason for such seemingly unreasonable behavior is that given their life experiences, it WAS reasonable.

People tend to extrapolate past trends into the future. If nothing bad ever happened to you, it is very hard to imagine that some day it will.

Especially if you are convinced you are morally right, as many of the rentier class genuinely are. On a number of occasions I had seen the following exchange (with minor variations):

Person A: When automation makes most of the population unemployable, we will need UBI.

Person B: Why should my taxes pay for someone not to work?

Person A: If these permanently unemployable people are not fed, they will take food by force.

Person B: That's nothing more than a terrorist's demand. Why should I submit to extortion?

And they really believe that "submitting" to UBI would be cowardice, and that shooting the hordes trying to take their food by force is a moral thing to do.

293:

I think a fair number of readers would be happy with a nearly never-ending series of stories about early-Bob and the Laundry, back before the elves arrived and everything became open.

Formulaic, sure, but the setting is popular enough to spawn a role-playing game which basically lets people play that kind of Laundry operative, over and over, and that was popular enough that several adventures have been published.

294:

The rentier class is the rentier class in part because they CAN'T do otherwise, psychologically. The situation is very much like putting an alcoholic in charge of the bar. Expecting them to, say, allow someone to take that away even if it's to save all our lives is not something, collectively, that's gonna happen.

Dragons gonna dragon.

295:

Plot holes? I'm sorry, so literally the real build up to WWII had plot holes? You mean like, who could believe that everyone from one religion would be called "enemies", and replace the rentier class as "enemies"?

296:

Not that long before my late wife dropped dead, I bought her a tape of several episodes. We were both somewhat shocked to note that it was REALLY not a good thing to be a friend of either of them, as their friends tended to have the lifespan of OT redshirts.

Mrs. Peel on the other hand, sigh

297:

So, a deck of cards... just like a lot of grad students and folks with doctorates, shuffling their 3x5 cards and writing a new paper to submit to a journal, so they could publish, and not perish?

298:

Does this make anyone else think about Henrietta Lacks?

299:

"Not how capitalist economies work" - and you assume that we have reached the end of history, and all economies ever after will be capitalist? On what basis to you make this assumption?

300:

On the other hand, I was continually having problems with the self-check at Safeway, because the insulated bag I had brought was much larger than a plastic piece of crap (the stupormarket bags are all defective, with holes in the bottom, meaning I can't use them to clean the cat litter), and I hit "use my own bag", then "one bag", until the guy who takes care of problems told me "tell it two bags", and it stopped complaining.

301:

I'm sorry, I can't give my opinion, because that would open Charlie to charges, and me to criminal charges.

302:

Moz @ 259:

I have the old brown paper bags with handles glued on them. The handles don't work that well or last that long if you want to re-use the bags, and I tend to re-use them until they're used up.

I double bag the paper bags and then put them inside the "cloth" carrier bags that are shaped like the old brown paper bags. They'll stand up to dozens of shopping trips that way (I don't know how many trips because none of them has failed yet using them that way).

I sometimes get strange looks from cashiers ... and questions.

Q: "Do you want us to use these bags?" ... as they try to pull the paper bags out of the cloth bags and separate them ... A: "No, no ... just push it all down here and I'll bag them myself."

Young people don't know about double bagging & putting the heavy stuff on the bottom with lighter items on top, but I think they're going to learn. The main supermarket I shop at recently made a corporate decision to eliminate plastic bags and go back to using the old style paper bags.

303:

DeMarquis @ 264:

@Moz: Oh yeah, that happens. Usually, I go into the menu and find the name of the product that won't scan, and it finds the price for me that way. The system is trusting me to be honest, but it must be working out for the company, as they have been doing it that way for years.

My guess is it takes extra effort to cheat that system and for most people the minuscule benefit from cheating doesn't add up to enough to be worth the effort.

304:

Plot holes? I'm sorry, so literally the real build up to WWII had plot holes? You mean like, who could believe that everyone from one religion would be called "enemies", and replace the rentier class as "enemies"?

No, that part I had no problem with. What annoyed me was the incredibly lazy way Turtledove translated technology into magic. Replacing airplanes with dragons and guns with wands DOES NOT make a fantasy novel. What it makes is a mess of contradictions.

First, there is way too much magical stuff. The wizards are few and far between -- there are simply not enough of them around to make all the "sticks" and "eggs" all the armies are expending in such profusion.

Second, if you really want to make a fantasy (or any other) equivalent of WW2, you must look at logistics, not just at battles. And there is no mention of how or where weapons are made, or where dragons or behemoths are bred and raised -- all such facilities should be prime targets, but they are just swept under the rug.

Third, why is everyone so fired-up patriotic? Every country in the book is run either by an absolute monarch, or by a degenerate aristocracy, and commoners clearly have no love for either. So why do they fight so hard? In the real world, what we call patriotism appeared only when the industrial revolution gave "common people" some measure of power. Before that, kings fought each other with mercenary soldiers, who switched allegiances easily.

Finally, all the characters are wooden and predictable. All the soldiers (and there are at least a dozen of them) have exactly the same personality, and are completely interchangeable, which makes Turtledove's usual POV switching rather pointless.

Oh, and the desert kingdom where everyone is naked? There is a good reason people in deserts dress from head to toe! It could have been made a jungle just as easily, and the nudity would be at least plausible.

305:

Damian @ 269:

It's nice that you explained "Trade customers" == anyone who can produce an ABN.

... but what's an "ABN"?

306:

I had some thoughts on the subject waking from a dream this morning.

What is "free will"? Can AIs and/or uploaded minds have free will?

IF they can, then anything that tampers with that free will is a crime.

307:

Logistics comes in more in the following books. And I don't remember some of the things you're talking about, like the naked in the desert.

Trust me, when both sides start doing mass production, it gets even uglier.

308:

302 - The only question I tend to get about carrier bags in the supermarket is "Do you need any more bags?"
As for "stupormarket" (sic) personal account - One day my sister was taken ill there. The store got her their first aider, and a seat, then phoned me, and went with the plan that my Mum would go down by taxi, and take her home, whilst they put her car in a "disabled" space overnight.
Next day I went down to collect the car and buy some odds and ends. I took the time to find and personally thank both the manager and the first aider for looking after her the previous day.

304 - Seconded; my view of Turtledove's "writing" is already documented (and it's stuff that should appeal based on the elevator pitch).

305 - ABN AMsterdam and ROtterdam Bank? (well that or ABNormal smear test based on my search).

309:

I know we're past the 300s, but going somewhat back to the original question.

Given "mind/body dualism is a bust", several legal systems has already decided this to the effect of: all conscious entities running on the same hardware shall be punished for the crimes of any one entity, but the punishment generally cannot be the termination of all entities.

Decisions touched on collective punishment, versus how to distinguish between similar-but-different instances forked at different times, etc.

Now DID/MPD/whatever-it's-called-in-your-jurisdiction does not have the same characteristics as brain uploading, but it does have overlap in terms of forking (and the forks may be very similar or extremely different), pausing of instances, time dilation from running slower, and some perceptual but not legal aspects of moving between hardware (a 20-year-old body vs a 50-year-old body has different capabilities and possibly even a different gender) with no perceived interval.

Given uploaded consciousness and punishing the hardware: can you avoid punishment purely by moving your instance to another location? And will upload hosting services have and/or need extradition treaties that could force you back to the hardware that did the crime (and does that even hold any meaning in this context, if you could send a fork back and both be punished and not-punished at the same time)?

310:

So, a deck of cards... just like a lot of grad students and folks with doctorates, shuffling their 3x5 cards and writing a new paper to submit to a journal, so they could publish, and not perish?

Yes and no. Randomness is a basic form of generating creativity, hence cards, dice, etc. If you're doing this to get around writer's block, then the point is to do something like improv, just to get outside yourself.

What you do with the randomly generated stuff is throw out most of what doesn't appeal/doesn't work until you get something you're willing to suffer with.

What I'm wondering is whether the cards can be Tropes, winning hands are some subset of master plots, and you form hands rummy style somehow.

How to do this I'm not sure, nor do I particularly care, because my creative process is pretty different.

311:

And I don't remember some of the things you're talking about, like the naked in the desert.

It's Zuwayza, the "Darkness" equivalent of Finland. I noticed that while every country in the book is a parallel of some real-life WW2 power, Turtledove gave each of them a maximally different climate. Thus Finland became desert.

From https://en.wikipedia.org/wiki/The_Darkness_Series :

Due to the hot climate, the people of Zuwayza typically go nude except for jewelry, sandals, and broad-brimmed hats. They are described as very dark-skinned, and Zuwayzin names are taken from Arabic. The Zuwayzi are known to use camels when fighting. Zuwayza was once ruled directly by Unkerlant but gained independence after the Six Years' War. Unkerlant attacked Zuwayza in the first year of the war and gained territory. In retaliation, Zuwayza allied with Algarve against Unkerlant. When Algarve was driven back, Zuwayza was forced to sign a separate peace, allowing Unkerlant great advantages, but preserving its independence. The Zuwayzin capital is Bishah.

Unkerlant is the USSR, Algarve is Germany.

312:

> as they try to pull the paper bags out of the cloth bags and separate them ... A: "No, no ... just push it all down here and I'll bag them myself."

Slightly weirdly, this reminds me of a recent pop-sci article on octonions. (Real numbers, complex numbers, quaternions, octonions -- look it up.)

The interesting thing about octonions is that their multiplication is not only non-commutative but non-associative: (a * b) * c != a * (b * c) in general. (Addition is still associative.)
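If anyone wants to see that non-associativity with their own eyes, here's a minimal sketch using the standard Cayley-Dickson doubling construction: quaternions as pairs of complex numbers, octonions as pairs of quaternions. (The function names are mine, not from any library.)

```python
# Cayley-Dickson doubling: each algebra's elements are pairs (a, b) of
# elements from the previous algebra; Python complex numbers are the base.
def conj(x):
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x.conjugate()

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        # One standard convention: (a, b)(c, d) = (ac - d*b, da + bc*)
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

# Quaternion units as pairs of complex numbers
one, zero, i = complex(1), complex(0), complex(0, 1)
qi, qj, qk = (i, zero), (zero, one), (zero, i)
q1, q0 = (one, zero), (zero, zero)

# Quaternion multiplication is non-commutative: i*j = k but j*i = -k
assert mul(qi, qj) == qk and mul(qj, qi) == neg(qk)

# Octonion units as pairs of quaternions
e1, e2, e4 = (qi, q0), (qj, q0), (q0, q1)

# Octonion multiplication is also non-associative
left = mul(mul(e1, e2), e4)
right = mul(e1, mul(e2, e4))
assert left != right and left == neg(right)

# Addition, by contrast, stays associative all the way up
assert add(add(e1, e2), e4) == add(e1, add(e2, e4))
```

Reassuringly, addition survives; it's only multiplication that starts misbehaving as you climb the ladder.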

313:

M&S Scan and Go works by scanning items with the smartphone app and putting them in your bag as you go. When finished, you pay in the app (with Apple Pay if it's an iPhone) and it displays a receipt, which you show to a member of staff if anyone asks to see it, then just walk out of the store. I've only ever been asked to show the receipt once; the rest of the times I've just left. This is much better than Sainsbury's SmartShop, which requires you to check out via SmartShop terminals and which randomly selects people to have their shopping double-checked by an assistant.

314:

Uniqlo has the best scan, pay and go of anywhere, with all the right checks to make sure one is paying for what one is purchasing only.

The only drawback is the awkwardness then of bagging one's own purchases, whether into one's own bags, or those bought when buying the clothes. This slows things down, not the paying. As the chain is very popular most times of the day there are many customers waiting to use the machines. But again, they are generally polite and wearing masks -- a Japanese outlet, after all.

But that machine -- it is amazing, and takes no time at all. Unlike the utterly incompetent ones that CVS insists must take over from human checkers. The machines hardly ever work, cannot seem to scan one's discount / member card, etc. So instead of checking us out at registers, all these people have to keep tending us at the machines and do it themselves anyway, except it takes a lot longer.

315:

It's worth remembering that Australian Aborigines, Shoshone, and others routinely go naked or close to it in some pretty hot deserts. There's more than one way to shed heat, especially if you're pigmentally privileged.

Now imagine all the Fremen being dark-skinned on Arrakis, like this dude for instance.

316:

We have a couple of dozen heavy cloth shopping bags that were originally swag-bags from conferences. I suspect the youngest must be 25 years old. Only one has really failed, by the plastic lining hardening, cracking and spalling. And referring back to recent comments, I also have an instance of the only good thing ever to come from java - a remarkably good quality backpack from the JavaOne conference circa ‘98 . Makes a very good model flying equipment bag.

Harry T wrote one of my favourite stories, “Ruled Britannia”. It is a wonderful excuse for the best pun ever: “play stopped reign”.

And for octonions , well now they’re just taking the piss. Do we solve equations with an octiron now? When might one expect hexadecions?

317:

books like The Holy Blood and the Holy Grail

One of the worst books of supposed history ever foisted on the globe. And I wasted money on it.

318:

Anti-Indian racists have been an American standard since the founding of the U.S., Greg. I'm not a bit surprised (but still sad) to see the Supreme Court adding to our infamy

Interestingly Neil Gorsuch is fully and quite intensely on the side of native rights.

319:

Did this make anyone else think about Henrietta Lacks?

Yes actually. But to be fair I think the author did too, as the story goes to some trouble to show that Acevedo has all the informed consent you could wish for and is personally enthusiastic about the project. And yet the situation still develops into a horrific personal nightmare for the many instances of his mind that follow.

It's remarkable how much people are currently inclined to take a consent waiver as a magic shield for research ethics, but that's a whole other topic.

321:

Unlike the utterly incompetent ones that CVS insists must take over from human checkers.

Off and on for nearly 10 years I've tried to use CVS apps and customer automation. It has been an abomination for the entire time. And their nagging texting system ("your prescription is coming up, we'll refill it for you") that nags you until you come to get it totally pissed me off. It always seemed to happen when I was in Texas, Germany, Ireland, anywhere but where I live.

322:

Oh, and the desert kingdom where everyone is naked? There is a good reason people in deserts dress from head to toe!

Zulu? Bushmen? Australian Aborigines?

Not all desert-dwellers are Tuaregs :-)

323:

Given uploaded consciousness and punishing the hardware - can you avoid punishment purely by moving your instance to another location, and will upload hosting services have and/or need extradition treaties that could force you back to the hardware that did the crime

I expect it will depend very heavily on the details. If hardware is fungible then it seems pointless, and may not even be possible to localise the particular bits of computronium that "did the crime". Much like trying to punish the hardware responsible for DDOS attacks now... much of it is virtualised, that which isn't is often hijacked, so punishing it wouldn't affect the criminals.

I can see a bunch of fun cases especially early on where someone spawns an instance specifically to have the instance commit a crime, remit the proceeds, then commit "suicide by cop" in some obvious way. Right now we don't recognise the AI as legally competent, but that will likely change when Elon moves into a computer (and bob help us if it's someone actively hostile like the surviving Koch brother)

324:

I have to admit I just don't get apps on your mobiles. Why not just browser->bookmark->website?

325:

Why would anyone punish the hardware? It's just a pile of innocent tech.

Richard Morgan already covered this problem in his Takeshi Kovacs books. The solution: take the hardware away from the criminal and give it to someone else. Throw the criminal's mind-state drive in a pile somewhere to get reactivated later maybe. Well, probably not.

The hardware in this case was actual human bodies of course, with mind states managed through... well... magic. The government would just hand them out without a care at all to whether they matched the recipient. Dysphoria? LMAO this is a cyberpunk hellworld, why would the Man care. Get back to work, plebe!

326:

What I'm wondering is whether the cards can be Tropes

That brings to mind an idea for writing prompts: go to tvtropes, where there is a "random trope" button on the page--press the button three times, read the main page for each trope (but not any of the examples), set a timer, and give yourself forty-five minutes (say) to write something (a story, a poem, a screenplay, an outline, a wikipedia article) that uses the three particular tropes.
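That procedure is trivially scriptable, too. A toy sketch, with the caveat that the trope names below are just placeholders from memory and there's no public "random trope" API that I know of, so this draws from a hand-rolled local list:

```python
import random

# Placeholder trope list -- in the manual version you'd press the
# "random trope" button on tvtropes three times instead.
TROPES = [
    "Chekhov's Gun", "The Mentor", "Locked Room Mystery",
    "Unreliable Narrator", "MacGuffin", "Fish out of Water",
    "Red Herring", "Framing Device", "Deus ex Machina",
]

def writing_prompt(n=3, minutes=45):
    picks = random.sample(TROPES, n)  # n distinct tropes, no repeats
    return f"Use {', '.join(picks)} in anything you can write in {minutes} minutes."

print(writing_prompt())
```

The only design decision worth noting is `random.sample` rather than repeated `random.choice`, so you never get the same trope twice in one prompt.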

you form hands rummy style somehow

cf. 5-card Nancy

327:

He mentions it as a primary inspiration in the (fairly interesting) comment section.

328:

And for octonions , well now they’re just taking the piss. Do we solve equations with an octiron now? When might one expect hexadecions?

Actually, my current setup for multiplanar magic or magitech is that it's based on a system of intuitionistic, ternary logic and octonion mathematics.

Octonions run with seven imaginary dimensions (eight dimensions in all). Apparently, quaternions (one real + 3 imaginary dimensions) have proved really useful for making computer images appear to rotate in three dimensions, so by handwaving extension, having real-space plus seven imaginary dimensions is what you'd need to do hyperspace navigation. Or summon demons, for that matter.
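The graphics trick mentioned above can be sketched in a few lines, assuming only the Hamilton product (function names here are my own, not any particular graphics library's): a unit quaternion q rotates a vector v via q * (0, v) * q^-1.

```python
import math

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    """Conjugate; for a unit quaternion this is also the inverse."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(v, axis, angle):
    """Rotate 3-vector v about a unit-length axis by angle (radians)."""
    s = math.sin(angle / 2)
    q = (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)
    w = qmul(qmul(q, (0.0, *v)), qconj(q))  # q * (0, v) * q^-1
    return w[1:]

# Rotating the x axis 90 degrees about z gives the y axis:
x_rot = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
# x_rot is approximately (0.0, 1.0, 0.0)
```

The reason graphics people like this over rotation matrices is that unit quaternions interpolate smoothly and don't suffer gimbal lock, though that's beyond this sketch.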

As for the rest of it, intuitionistic logic is apparently incompatible with the notion that information can neither be created nor destroyed; rather, it posits that information is created over time. And working with three-valued digits (1, 0, ?) is superficially compatible with intuitionistic logic.
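For what it's worth, one standard three-valued system is Kleene's strong logic, where "?" reads as "unknown". (Only loosely related to intuitionistic logic, which famously has no finite truth table, so treat this as an illustration of three-valued reasoning, not of intuitionism.) A minimal sketch:

```python
# Kleene's strong three-valued logic, encoding false as 0.0, unknown "?"
# as 0.5, and true as 1.0. AND is min, OR is max, NOT is 1 - x.
F, U, T = 0.0, 0.5, 1.0

def k_not(a):
    return 1.0 - a

def k_and(a, b):
    return min(a, b)

def k_or(a, b):
    return max(a, b)

# Unknown propagates only when it could change the answer:
assert k_and(F, U) == F   # false AND anything is false
assert k_and(T, U) == U   # true AND unknown stays unknown
assert k_or(T, U) == T    # true OR anything is true
assert k_not(U) == U      # NOT unknown is still unknown

# Unlike two-valued logic, excluded middle fails: a OR NOT a can be unknown
assert k_or(U, k_not(U)) == U
```

That last assertion is the interesting one: "a or not a" is no longer a tautology, which is the flavour (though not the mechanics) of what intuitionistic logic gives up.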

And do you think any of this is worked out yet? Oh hell no.

So we can cheerfully posit that HPL's blasphemous geometries are based on octonions, and the Necronomicon was Al-Hazred's attempt to transcribe into Arabic what some really patient alien tried to teach him about the nature of reality.

Or that UFO navigation computer? You guessed it.

And the computer architecture transhuman AIs run on? But of course.

It's the math of the future.

Really. And I'll be floating a new cryptocurrency based on this Any Day Now.

https://www.quantamagazine.org/the-octonion-math-that-could-underpin-physics-20180720/

329:

It's remarkable how much people are currently inclined to take a consent waiver as a magic shield for research ethics

I think they're a step in the right direction, halting and inconsistent though that step is. I still see an awful lot of "tell us what you think, we're university students studying something". Often an "anonymous survey" that requires an account with verified email to fill out, sometimes a whole google account or equivalent. Very rare for those to have ethical approval; many quite reasonably want to stay anonymous themselves.

The habit is egregious in queer, polyamory and kink groups, but I see it in random subreddits and all over. "I like bicycles. Tell me what you think. I'm a university student"... why, about what, for who, what are you collecting and who gets it?

Often anything a student hands in becomes the exclusive property of the university it was given to, and they have almost no restriction on what they can do with it. Students trying to put restrictions on are often told the material can't be assessed if they do. Asking that question sometimes prompts apologies from the students and withdrawal of the survey, but they regularly double down.

330:

Forgot to mention: Fairphone emailed me a survey link the other day. An "anonymous" survey that you access by clicking a tracking link. Of the form "fairphone.spammers.com/tracker-id=23408670283476509832475".

Ghostery blocked it when I clicked it from a test browser, so I emailed Fairphone and asked WTF, and got told in very reassuring terms "it's an anonymous survey, but we can't give you an anonymous link because that would defeat the point".

Some people just completely don't understand that sometimes the appearance of pseudo-anonymity is the point.

Flip side is the ones where the original email/post has all the required information and occasionally it actually doxxes someone with a long presence in the community. I am way more inclined to give sensitive information "anonymously" to someone who can be found on the website of the institution who gave out the ethical consent for the study...

331:

"it's an anonymous survey, but we can't give you an anonymous link because that would defeat the point"

WTF does this even mean? Or what does Fairphone THINK it means?

332:

(Got a "Too many connections" error when trying to sign in just now.)

"Given "mind/body dualism is a bust""

I brought that up earlier to say "yer wot?" and now I'm going to yerwot again.

Surely if taking the mind out of the body and stuffing it into a computer instead has become an actual practical possibility, then the status of mind/body dualism must be "experimentally proven correct". It can't possibly be "a bust" otherwise the whole idea makes no sense in the first place.

333:

I have no idea. I asked but have not got a reply. I suspect they have enough answers that their support people have stopped caring.

I suspect what it means is "our email newsletter provider obfuscates links and our PR people don't understand the problem, our tech support people can't do anything about it, and no-one who understands cares".

My employer switched to using Amazon to send emails because so many email systems automatically treat email not from major providers as spam, so running our own email server meant customers missed email alerts they were paying for. I can imagine that for Fairphone switching back to a non-invasive email newsletter would be expensive for them and frustrating for their subscribers who use gmail/outlook etc and had to keep rescuing newsletters from their spam folder.

But it does mean that they have a selected/biased set of responses. Very "tell us what features you want (excluding people who care about privacy)"... result: no-one cares about privacy, we can drop that concern. Yay! Those stupid technical people can be told to shut up about it and get back to more important things, like animating the start icon.

334:

I've run into various formula writers over the years. Like writing Top 40 Music instead of symphonies, it's not easy. I'm not saying it's harder than what Charlie does, but it appears to emphasize a different skill-set that's just as rare as his.

Yeah, I know some too.

No names, but one Harlequin romance author I know is also a multiple-Hugo-nominations-in-the-same-year author with a stellar rep in SF/F. (There's a flow chart for structuring the romances: once you've got it nailed, your main hazard is carpal tunnel syndrome. It is, however, a highly specialized skill set and distinctly non-trivial.)

335:

This is pure silliness--not how capitalist economies work.

You're assuming capitalism is inevitable. (This is understandable in our current socio-historical context, but wrong in the long term.)

Seriously, society's constants are constant right up until they're not. The divine right of kings? That had a multi-thousand-year run, then it dead-ended in the space of a couple of centuries. Capitalism? Supplanted rentier land-owners from the late 17th century onwards: it's by no means obvious that it's inevitable, especially in a situation where most people are uploads or AIs, hence not physically dependent on having stuff made of atoms like clothing and shelter.

And so on.

336:

At a guess, simply providing a link without verifying that the person filling it out is (a) a customer, and (b) only filling it out once opens the survey up to gaming and invalidates the results.

Consider as an example the websites Rate My Professor and Rate My Teacher. Both anonymous, and both very open to gaming. No check on rating someone multiple times. No check that the person doing the rating was actually in the class, or even on the same continent.

Back in the old days when rating was done by the students union you knew that only people in the room got ballots, and only one per person. Much more reliable anonymous data.

337:

'then the status of mind/body dualism must be "experimentally proven correct"'

I think you are confusing two different, albeit related, notions here.

Dualism as I understand it is the idea that the mind exists as an entity, without having to be implemented on any hardware/body.

Not discredited by uploading.

What uploading a mind does is show that the mind is independent of any particular hardware/body it might be running on, but there still has to be some hardware/body that runs it.

Professional (now retired) philosopher John Searle, of Chinese Room infamy, makes exactly the same mistake, which is why his arguments against Strong AI being even possible are a consignment of inferior but strong beer as brewed by Mr Codd.

JHomes

338:

@Greg T at 267: For a certain definition of "work." A system can be said to "work" in relation to the goals for which it was designed. It may be a matter of opinion what goals capitalist market economies are designed to achieve.

@Greg T at 270: "a reversion to Dred Scott, & all the evil panoply of 'State's Rights' unless I am much mistaken?"

While I am no fan of this or similar decisions by the SLCOTUS (Supreme Libertarian Court of the US), I wouldn't go that far. The decision allows state law enforcement to arrest non-indigenous people on tribal lands for crimes committed on said tribal lands (ie, it allows them to patrol said lands and enforce state laws). This weakens tribal sovereignty, but hardly to Dred Scott levels (which is an odd choice for comparison anyway).

@Ilya187 at 276: "I would caution against the argument "With no employees, factory owners will have no customers, so they will never shoot themselves in the foot in this way". Don't count on it.

[Warning: Extreme dystopia ahead!]

I can easily envision a fully roboticized economy in which automated factories sell to each other. One factory produces semiconductors, another one produces motors, a third one solar panels, etc. Very few consumer goods for humans are made, because factory owners are very few in number, and need relatively little in order to live in unlimited luxury. The vast majority of the human population either starves to death, or otherwise is disposed of.

I do not find such a situation desirable (understatement!), but it is foolish to pretend it is impossible."

I'm curious, on a scale of 0.0 to 1.0, how probable do you think your extreme dystopia is? Guess before you read ahead. . . . . I predict you will set it somewhere between .1 and .3, because that is low enough to be plausible yet high enough to be concerning. I would put it much lower, around .02.

But let's presume you are right, and the world starts heading that way. What I would predict would happen next is that while the global oligarchs are building themselves a self-contained sustainable automated lifestyle, the rest of the world will be re-constructing a normal economy, because there will be plenty of demand for consumer goods and services, and no good reason that demand should go unmet. So--two global economic systems side by side: the Rich, and the Rest.

This would be a huge win scenario for both China and India, who have the vast majority of the world's consumer/laborers. So the Rich switch over to crypto or something, while the Rest use Yuan or Rupees.

@AlanD2 at 283: "Yeah. They've certainly had mixed results over the centuries. But what will happen when the rules of the game change?"

Depends on who gets to write the new ones.

@Whitroth at 299: ""Not how capitalist economies work" - and you assume that we have reached the end of history, and all economies ever after will be capitalist? On what basis do you make this assumption?"

Because that's the way it's been working for hundreds of years, and I am aware of no evidence that it's going to radically change soon.

However, when I made that statement I was assuming that AlanD2 was arguing that automation was going to replace all jobs, and it turned out that I was wrong (he was proposing that a UBI replace all jobs, and automation take over production as an act of policy, not as a result of market forces).

So we may be arguing under false premises. If you want me to justify why I think automation can't replace all jobs, just let me know.

339:

Agreed.

My understanding is that dualism is the question of whether you think with your body, or whether your brain is an interface between "your soul" (or whatever you think with) and your body. IIRC, one neuroscientist said our understanding of brains (especially with respect to sleep and anesthesia) is so primitive that this can't be sorted out yet. And I'll bet some people disagree with him.

Mind-body duality is what Charlie put in the Laundryverse, with possession. If minds are phenomena that arise from what we think with, installing another soul is impossible, although messing with how people think most certainly is not.

That said, it's reasonably possible to have "an upload" without having a mind body duality. It's a special case of the Turing test, where an AI is able to mimic at least your online behavior so accurately that no one can tell it's not you. IIRC, Google's had a patent on this general idea for over a decade. My assumption is that, if we ever get uploads, they're going to be outgrowths of this kind of technology.

Even if we discover a soul (presumably the world is platonic and math-based, that math is octonions, and the spirit world is the imaginary component of every number that forms the real universe?), it's unclear that we'll be able to port souls between brains, or from a brain to a computer. Or to a chicken, for that matter.

340:

However, when I made that statement I was assuming that AlanD2 was arguing that automation was going to replace all jobs, and it turned out that I was wrong (he was proposing that a UBI replace all jobs, and automation take over production as an act of policy, not as a result of market forces).

Nope. I think automation takeover will result from market forces - AI and robots being cheaper in the long run than humans.

But once this happens, governments will step in to tax rentiers and provide a UBI for all other humans. If a government doesn't do this (too many rentier lobbyists?), I find it easy to imagine a violent overthrow of that government.

341:

If you want me to justify why I think automation can't replace all jobs, just let me know.

Please feel free to try. I suspect you'll have a difficult time of it, though.

342:

"why I think automation can't replace all jobs"

Define jobs.

To make a tight definition: a job is when someone performs work primarily in order to gain a material reward (one that can be expressed in monetary terms, even if the form is not money itself) that is not inherent in the work, or a consequence of the work, but rather is offered by someone else who wants the work done.

I think it quite possible, although hardly certain, that automation can eventually replace all jobs in this sense.

Stretch the term to cover work done because the person doing it wants it done, and the case becomes shakier.

Stretch it still further to cover work that is to some extent its own reward, and not likely at all.

JHomes

343:

I think automation take over will result from market forces - AI and robots being cheaper in the long run than humans.

This has already been happening for a long time, though, less through automation (although that's certainly played a part) than through a sort of arbitrage between the international price of labour in some markets versus others and the going rate for manufactured goods in some markets versus others. Essentially there is no domain where direct macro effects apply to labour, because national borders are still hard boundaries for labour, just not for capital.

If Chinese factory workers could move to Australia at will and enjoy local wages, then the cost of manufacturing in China versus Australia would even out very quickly. Similarly with agriculture (Australia depends on backpackers on special fruit-picking visas, US agriculture relies on a porous Mexican border). It serves powerful interests well to keep things that way. But it's important to recognise that it is arbitrage, the result of dividing markets.

Automation has replaced jobs on an enormous scale in resource-extraction industries, especially forestry, and it's convenient for communities to blame those job losses on "greenies". It also suits powerful interests to encourage them to do so.

344:

Automation has replaced jobs on an enormous scale in resource-extraction industries, especially forestry,

I think mining is a better example. How many men with picks would it take to remove a mountain in a few years? Or even a decent size hill? Because that's something Australia specialises in.

Or those "biggest mobile land machine" things they use for longwall mining coal in Germany.

I think the threat of automation in thinking jobs is partly overstated, and partly in the process of being realised. I write software that replaces people, that's my day job. Quite literally in many cases, the boss says "get rid of the people who do X" and lo that becomes true. Right now: people who respond to home burglar alarms. Why would you pay even $1/day for that service when your phone goes "bing" and shows you a photo of what's happening every time the alarm goes off? If necessary you ring the cops and they respond... just like the alarm monitoring people would do.

But the idea that, say, our Prime Minister could be replaced by a robot... wait, let me think of a better example.

345:

But the idea that, say, our Prime Minister could be replaced by a robot... wait, let me think of a better example.

I had a similar thought, and had posed the question: what would it take for an AI to replace a CEO? Because if the plan is to automate all the jobs, it means to automate all the jobs. What "jobs" are really hobbies and what would the people who do them do instead? Surely everything involving power and authority should be automated: after all, power is bad for whomever you give it to and worse for everyone else.

346:

But the idea that, say, our Prime Minister could be replaced by a robot... wait, let me think of a better example.
As far as I can see Liz Bucket / Hyacinth Truss is ALREADY a robot, churning out old Brexit mantras, regurgitating Patrick Minford (euw) & repeating the idiot's slogan that "We've had enough of experts".
If you/we thought Bo Jon-Sun was bad, this is going to be so much worse, as even the lies & excuses will be even-less-believable

347:

huh ... would you know any titles of these? that sounds like an interesting read ...

348:

I have to admit I just don't get apps on your mobiles. Why not just browser->bookmark->website?

There are many many many web sites that assume a larger screen. Even if that only means a minimum of 11" or 12". So when they try and run on a 3" to 6" display they are abominable to use. I have to do it at times and it sucks. Big mega companies are gradually dealing but smaller companies / personal web sites can be a mess.

Then there are those web sites written to be usable on any screen no matter how tiny. When those are used on a computer, it can be a frustration: here's question 1 of 48 on a single screen, IN VERY LARGE TYPE. Answer it, now move on to question 2. Rinse, lather, repeat.

An app on a phone or tablet that is well designed can be a breeze to use. And way more productive than a web site. Even one designed for a small screen.

The odd thing about all of this is that even with all the screen sizes put out by Apple over the last decade and a half, it is way easier for programmers to write apps that deal with Apple (or maybe just more profitable) than to deal with the proliferation of screen sizes on Android. So while there might be nice apps for iOS that deal well with multiple screen sizes, Android users can get stuck with a single-resolution setup that gets zoomed to fit the display.

349:

There is a concept of Progressive Web Apps (PWAs) which make use of background service or service worker APIs in browsers. Most of the parameters that are exposed to a native app in iOS and Android (location, gyroscope outputs for things like orientation and acceleration, language, etc.) are accessible to the client-side JavaScript in a web app on phones these days. The piece that was missing is persistence: HTML5 apps can now persist via background service support.

What that means is that you can package your app as a HTML5 website, and if the user saves the link as an icon it's indistinguishable from a native app. That means basically the app is implemented as client-side javascript, the GUI is HTML/CSS, there's some sort of AJAX/JSON channel between the server and client, but the idea is that the client, once loaded, can persist on the device without calling home.

350:

Most devs of apps are a decade behind what you're describing.

My point wasn't that great web apps can't be developed that work well on both sides. But that they are NOT being developed for the most part.

Based on what I see behind the scenes of multiple large corps, presentation is way down the list of how web apps work on devices. It only comes into play most of the time when a separate app is developed. Lots of reasons for this that have to do with internal politics, C-Level preferences, history of the org chart, etc...

351:

There are many many many web sites that assume a larger screen. Even if that only means a minimum of 11" or 12".

Responsive web sites can't be that difficult to create, given that I've done it.

Admittedly I was using a template, but if a template can do the job then surely someone who actually knows coding can manage it?

352:

Quaternions are used in some branches of physics, and a few people have proposed/used octonions for similar uses. I have always been more interested in using things like probability measures as base algebras, which is equally mind-boggling to non-mathematicians.

353:

Admittedly I was using a template, but if a template can do the job then surely someone who actually knows coding can manage it?

Again, I'm not saying it CAN'T be done. Just that so many times it is NOT done. I don't know if you've met them (I think Moz works for some), but there are some folks who literally despise outsiders forcing them to fix something. Despise it. So even though the project description (web site or other thing) says it will HAVE TO BE updated as external things happen (a new OS, maybe), when that comes up this type of person will ignore it as long as absolutely possible. If they're C-Level, well, have fun.

So you get a large company that bought into IBM WebSphere a few years back.

https://en.wikipedia.org/wiki/IBM_WebSphere

Then they discover that they need a newer version if they want to use it later with internal application XYZ. So instead of allocating funds (which can be substantial) to re-do working code, they just install the later version of WebSphere as a new instance alongside the old one. So now they have 2 iterations. And after a few years they might have 4 or 10, each spitting out web pages based on standards that are 5 or 10 years out of date. Well, some of the resulting sites are better than others.

The competence of any individual web designer, or group of them, doesn't matter. What matters is whether the corp is willing to overhaul how things tie together internally. The lousy web site is fallout from a mediocre decision made 10 years earlier, with a side order of deferring the issues that have arisen ever since.

Now add a few silos of internal corp structure and it gets more fun.

354:

I'm curious, on a scale of 0.0 to 1.0, how probable do you think your extreme dystopia is? Guess before you read ahead. . . . . I predict you will set it somewhere between .1 and .3, because that is low enough to be plausible yet high enough to be concerning. I would put it much lower, around .02.

Actually I also put it around 0.02 or maybe 0.03. Definitely below 0.1

But let's presume you are right, and the world starts heading that way. What I would predict would happen next is that while the global oligarchs are building themselves a self-contained sustainable automated lifestyle, the rest of the world will be re-constructing a normal economy, because there will be plenty of demand for consumer goods and services, and no good reason that demand should go unmet. So--two global economic systems side by side: the Rich, and the Rest.

That's pretty much the plot of "Manna" by Marshall Brain: https://marshallbrain.com/manna

Except in the parts of the world where oligarchs hold sway, the proles are pacified through UBI (more or less) and at the same time prevented from joining the outside economy, because they are in no position to produce anything that what you call "the Rest" needs.

355:

321 - OK, UK resident, but IME the answer to that one is to ask for a double prescription because you'll be away when you'd normally want the next repeat.

324 - I don't have a "smart" phone in the first place.

330 - More basic question. What is "Fairphone" anyway?

344 - Rubislaw Quarry in Aberdeen (the real one) is 142m deep, 120m diameter, dug out of granite over some 200 years, using hand tools.
"the idea that, say, our Prime Minister could be replaced by a robot" - We tried that, and the results to date are not encouraging: Maybot, WrecksIt and Bozo.

356:

OK, UK resident, but IME the answer to that one is to ask for a double prescription because you'll be away when you'd normally want the next repeat.

This has to do with the US system of PBMs (Pharmacy Benefit Managers) and an automated system to keep things filled. I'm on a 90-day refill window, which is the max for my situation. My point was that their automated system doesn't give the local wet meat folks much authority over much, and when the refill window opens the robot automation kicks in and there was almost no way to stop it. So I just turned off auto-refills. They have made it better to some degree, but it has moved from terrible 3+ years ago to merely somewhat tolerable now.

And of course the automated refill system turns itself back on if you look at it crosswise.

And this fits in with how terrible their loyalty program app works. Or doesn't work.

357:

The work that I would do if I was confident in the productive capacity of my society and the stability of a UBI system is somewhat different than what I currently do as 'work'.

In a circumstance where I have a home that is secure, I know there will be enough food, medical care and educational opportunities for my family, and enough of an assured income that I can decide on a given day what to work on?

I think Cory Doctorow calls it 'Fully automated luxury communism'. Sounds good to me. Let the oligarchs knife each other, let the rest of us get on with the good parts of life.

I am fairly sure that most people would not just 'do nothing' if they had access to a UBI.

The challenge in such a situation is to somehow make sure there is still an appeal to becoming a doctor, nurse, educator or other essential service. Routinely shitting on them in the name of AUSTERITY would no longer work.

358:

"Dualism as I understand it is the idea that the mind exists as an entity, without having to be implemented on any hardware/body."

I don't agree with the underlined bit. I understand mind/body dualism as simply meaning that the mind and the body have the same kind of relation as the software and the hardware of computers: one is a logical entity, the other is a physical one, and you can get an equivalently-functioning system by pairing the logical entity with any entity that either is a physical entity with the right set of properties, or emulates one.

So a meat person corresponds to, say, Mac OS 6 and Word for Mac running on an original 68000 Mac; a meat person's mind taking over someone else's body corresponds to installing a filesystem image of that Mac onto another original 68000 Mac; and uploading a meat person to a computer corresponds to installing said filesystem image onto a Mac emulator running on a Linux box.

(The point that often gets brought up about whether the mind runs only on the brain or on the whole body is a subsidiary matter; it's about what does or doesn't count as compatible hardware or a sufficiently complete emulator. You get exactly the same thing with computers as they are now, although people mostly don't use the same words to talk about it.)

The bit I have underlined makes it a different proposition. (I am taking it that by "exists as an entity" you mean "exists as an active/functioning entity", otherwise a copy of some software on an old floppy disk down the back of the sofa would count.)

The thing that doesn't need hardware to run on is a soul or a ghost (Geist) - it's the continuity-of-life-after-death thing, the thing that is the "you" in "you go to heaven when you die" or "you get reincarnated as a slug" or "you hang around the old abandoned house moaning at people". The first and third of those obviously require an entity to continue functioning without any hardware, and the second also requires it for the transference step between human and slug, which depends on continuity if it's to be a possibility worth worrying about.

Elladan claims not to believe in the existence of a soul, but then puts forward an argument which only makes sense if it does exist. I've been trying hard to argue consistently from the position that it doesn't, because (a) that seems to be the stated or implied position of Charlie himself and of most commenters on here, going by threads passim, and (b) otherwise it risks turning the whole thing into unresolvable religious arguments.

359:

RE: The Mind/Body Duality by religion, for programmers.

Alpha version, updates welcomed

Judaism: Humans are analog systems that run on juice. When their creator has finished his career, he might collect all his Creations that he enjoyed working with, and keep them in his studio as souvenirs. The rest get trashed.

Jesus: What Judaism said.

Paul's Disciples: Dear Lord, this isn't selling around the Mediterranean. We'll say that humans are digital systems that run a program called a soul. Useful programs go to the eternal Github, useless programs get mauled pointlessly on the internet for eternity.

Some parts of Hinduism and modern western mysticism: Github is eternal, and good programs get uploaded and reloaded endlessly. Bad programs get sent to hell for reprogramming until they work well enough to be uploaded.

Buddhism: Reality is smart and everything has a chipset, but all programs are unsatisfactory, due to limited resources and Gödel's theorem (or something). Nonetheless, programs are recycled, tinkered with, and updated endlessly, just because. Liberation is possible, and doing so frees up process cycles for others. This is a compassionate thing to do.

Taoism: Reality is smart, and there are also computer viruses. Humans run whole libraries of programs, as does the rest of reality. Good sysop practice is to get rid of viruses, get your library of programs working together smoothly as a giant custom kludge, and get that kludge uploaded into the universal web archive before another virus or update comes along.

Biologists: Brains appear to be closer to analog systems. Do they even run programs?

360:

Re: SLCOTUS - nope, not libertarian. If they were, they would have tossed the challenge to Roe v. Wade, and they would not be looking at going after marriage equality and birth control.

Re capitalism: you see no indication it's going to change any time soon? Really? With more and more automation (some of which has replaced the paper-pushing jobs from that end-of-the-last-millennium story: "what do you do at work, daddy?" "I push papers"), UBI is being talked about, and tested, in the RW. As are wealth taxes. At what point does the house of cards, with its imaginary values in the stock market, collapse? And for that matter, is arbitrage part of capitalism, or something different?

361:

"Can we do better?"

Of course we can.

Will we?

Of course not! "Our current prevailing political philosophy of human rights and constitutional democracy" is just a thought experiment without the backing of legal frameworks. Legal frameworks require politicians, and no politician is even going to accept that anything needs to be fixed until the first lawsuit is brought that involves the assets of a deceased "human" with an uploaded mind.

362:

Gee, and here I thought the point of a job was to produce something that you, or others, wanted done, preferably that benefits all of you.

But I know, that's soooo 8000BCE.

363:

No, it's not overstated. Example, since you mention mining - in the US, the coal producers and the GOP have an ongoing campaign about the "war on miners" by the green movement.

Reality: the mine owners have always hated unions (that was why NewsCorp was created in OZ in 1915, to publish anti-union propaganda). It's become the war on coal miners - they will not let the public see the truth behind the curtain: in 1972, in the US, there were over 755,000 miners... and now it's over 77,000. Cut by a literal order of magnitude.

I read about fully automated fast food restaurants, and self-driving taxis.

It's not going that way?

364:

Apps... ok. Now explain to me why anyone watches streaming shows or movies on those tiny screens.

365:

Because they are not in front of a big screen or there are others in the room who aren't interested in what you want to watch? Or whatever.

I understand you don't want a smart phone. To each his own. But why keep beating up on those who do?

366:

Apps... ok. Now explain to me why anyone watches streaming shows or movies on those tiny screens.

It's also worth measuring the angular width of the phone screen in front of you compared with the angular width of a TV across the room. It might surprise you which gives a bigger picture in angular terms.
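That comparison is just arithmetic: the angular width of a flat screen of width w viewed face-on from distance d is 2·atan(w/2d). A quick sketch, using assumed example sizes (a phone roughly 8 cm wide, a 55" TV roughly 122 cm wide) rather than figures from the comment:

```javascript
// Angular width in degrees of a screen of width w seen face-on from
// distance d (any consistent units): 2 * atan(w / (2d)).
function angularWidthDeg(w, d) {
  return (2 * Math.atan(w / (2 * d)) * 180) / Math.PI;
}

const phone = angularWidthDeg(8, 30);    // phone at arm's length: ~15.2 degrees
const tv = angularWidthDeg(122, 300);    // 55" TV across a 3 m room: ~23.0 degrees
const tvFar = angularWidthDeg(122, 400); // same TV from 4 m: ~17.3 degrees
```

So which looks bigger genuinely depends on the distances: hold the phone a bit closer, or sit a bit further from the TV, and the phone wins.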

367:

"somehow make sure there is still an appeal to becoming a doctor, nurse, educator or other essential service. Routinely shitting on them in the name of AUSTERITY would no longer work."

Those examples seem to me to be particularly good ones of things that people already decide to do principally because they think they are worth doing, even though they know they are going to get shat on. The shitters even come out with statements which are as close to "we can shit on the nurses as much as we like, they're far too dedicated to go on strike or anything" as makes no difference.

If anything I'd guess you'd probably get more people interested knowing that they didn't have to worry about what to live on while taking several years to learn how to do it, and weren't going to end up routinely not going home for four days on the trot and cutting people's wrong legs off because you're too tired to tell which one's which as a result of skinflints refusing to spend the money to employ enough people. You would probably also get better people, as it would weed out the breadheads whose motivation is the chance of getting craploads of money if they stick at it for long enough and sell all the drugs the drug companies tell them to.

Similarly I don't think you'd lack for people wanting to do the less glamorous things, even the really really less glamorous ones. There's a tremendous difference between doing something because you want to and doing something because you have to, and you already see all kinds of instances of people doing all kinds of jobs, even incredibly shitty ones, simply because they want to and have managed to become able to even under the current limitations imposed by doing all the stuff they have to, at some level of compromise that works for their particular circumstances (eg. you get volunteers picking up actual turds, and you also get people emigrating so they can still get a job going down a coal mine and labouring at digging the actual coal out now that they can't do that in Britain any more). People worry about who would unblock the sewers if nobody was paid to, but I reckon that if nobody had to worry about being paid to do things and all the other job crap, you'd get more people offering to unblock sewers because they liked it than you had sewers needing to be unblocked.

368:

I would, given a 33" screen or a 42" screen at < 3m, I don't think it compares. And they're made for big screens (otherwise, not just movies, but people buy 56" and larger tvs).

And given how hard it is for me to read people's mobiles when they're trying to show me something, and we're in the sun... I am reminded of more than a dozen years ago, flying cross-country, and giving up on watching Thor (or whatever) on the about 8" drop-down screen that was above the seat in front of me.

And why do I beat up on it? For one, they're not looking around and maybe interacting with other RW human beings (or with situations that could be problematical, like trying to get into a crowded el car at rush hour). For another... around '97, they rereleased the recut SW. The ad, in the theater, was "if you've only seen Star Wars" (in a box with lines around it, covering about a third or half of the movie theater screen), "then you've never seen STAR WARS" (expanding to fill the full screen).

369:

I mean, they're mobile. That's the appeal. People not paying attention to what they should be is a separate issue, and not really one new to smart phones.

I prefer to read dead trees books, but I am glad I can pack 500 books on my Kindle and cart it around.

I don't watch stuff on my phone, but it's not any great mystery as to the advantages.

370:

The issue with doctors and nurses is that we do not train enough of them.

The goal should be that in a non-emergency situation, being a medical professional is a 35-hour work week job, and 5 of those is "get paid to stay current". Because crisis situations will happen, and if your doctors and so on normally are worked to the bone, there is not enough slack in the system to cover for it.

371:

Religions, of course, are always talking about souls, unless they're the kind of religion that asserts the finality of death. (I'm sure some do even if I don't know what they are.)

"Biologists: Brains appear to be closer to analog systems. Do they even run programs?"

It's not "do they run programs" but "do they have software". The equivalence between "software" and "programs" is principally an attribute of digital systems, which are also where the distinction between "software" and "hardware" is most obvious and where the concepts map most closely onto the things (though to pick a couple of common examples, with both computers and DNA that's a long way from being the whole story). But you can have software without having programs, which is what most everyday analogue systems are like. It's just that as you get to simpler and simpler things, both the concepts of and the distinction between "software" and "hardware" get massively fuzzier, until they get a bit silly.

You could, although it probably isn't very useful, talk about the software of a knife. Things like being shaped long and flat and thin, having the correct tempering, having the edges ground down to a sharp angle, which are the instructions for it to be a cutty thing rather than just a lump. You can copy the software to other compatible hardware: you can take another lump of steel, and shape and temper and grind it in the same way, and it gets you another cutty thing that works just the same. You can even emulate the software on a sufficiently capable emulator, and quite likely just did. It's a bit daft to take it that far, but you still for example get laws that do.

372:

Precisely - though, for some reason I don't understand, it's a bit more strain to see detailed images of the same angle at 6" than 6'. However, even quite big smartphones are useful only for those who can focus down to 12".

373:

Decades ago, the US used to offer forgivable loans to students training to be doctors - get your MD, and then practice where we tell you to for five years, and the loan's forgiven. Our baronial leech in Bakhail, in the late seventies, did that, and she and the Baron (who she married) moved to nowhere, Oklahoma for five years.

374:

Pigeon
The point that often gets brought up about whether the mind runs only on the brain or on the whole body is a subsidiary matter
Ask anyone who dances regularly? IMHO there actually is such a thing as "muscle memory" - you can go through the motions without thinking about it - certainly, if the moves are more complicated & you start wondering about where you are & "what's next" you are almost-guaranteed to fuck up, oops.

H
Brains appear to be closer to analog systems. Do they even run programs? - YES - well - entirely possible, anyway.
Plenty of examples of analogue computers & systems "out there" - mostly in recent (c.f. WWII) history.
Were they multipurpose? I.e. do/did they have "software" (Pigeon)? Um, err, I think some did, maybe, perhaps. It's blurry.

375:

Muscle memory - hell, yes. Way back, when I was in the SCA, our dancing mistress explained it - we were to practice until we didn't have to think about where our feet or hands went. Then, you could get to the point of dancing: flirting.

376:

The shitters even come out with statements which are as close to "we can shit on the nurses as much as we like, they're far too dedicated to go on strike or anything" as makes no difference.

Well, other than using the word "shit", that's what Mike Harris said when he laid off 6000 nurses, comparing them to workers in a Hula Hoop factory whose product has gone out of style. Patient care wouldn't be affected because nurses were dedicated and couldn't bear to see patients suffer, so the remaining nurses would just buckle down and do more with less (and for less money).

377:

And given how hard it is for me to read people's mobiles when they're trying to show me something, and we're in the sun

And how hard is it for you to read someones TV then they're trying to show you something and you're in the sun… :-)

Seriously, it sounds a bit like you're comparing apples to oranges there.

378:

How about eating and drinking from plates and glasses and flatware. After a few years we all learn to do it without thinking. We may look down to make sure we scoop up peas and not beans but we then get it to our mouth without much thinking.

379:

The goal should be that in a non-emergency situation, being a medical professional is a 35-hour work week job, and 5 of those is "get paid to stay current".

Within living memory, the goal of the Ontario hospital system was for hospitals to work at 85% capacity, leaving room for emergencies. Then neocon policies with associated reward systems got enacted* and the goal has become to work at 100% capacity all the time, in the interest of efficiency and savings.

Which, as the pandemic has abundantly demonstrated, leaves no spare capacity for emergencies.

And it's false savings anyway, because they've done stuff like eliminating cleaners so that at night ER doctors have to clean an OR themselves before they can perform emergency surgery, because cleaners are only staffed for day shifts…


*You've heard me rant about Mike Harris before.

380:

What is "free will"? Can AIs and/or uploaded minds have free will?

The question of free will only exists if you adopt Christian eschatology, and especially the Christian interpretation of Genesis (Garden of Eden, original sin, etc), as your underlying axiom system.

Christianity posits not merely a life after death but a choice of afterlives, and what you get is determined by what you choose to believe (as well as your actions, but belief can override consequences for bad actions if you just buy into the heavenly ponzi scheme right before you die).

If you don't have free will then you have no choice over whether you go to heaven or hell so you then have to square the idea of a loving god with eternal punishment for actions or beliefs you have no control over. And this includes original sin. (Original sin is a variant on collective guilt, and punishing groups for collective guilt is something a bunch of Nazi war criminals got hanged for in the late 1940s: punishing Bob for Adam's offense is an ethical abomination when humans do it, and saying "but god's perfect and above all that" sounds absolutely batshit to anyone who doesn't already buy into the "god is perfect" belief system.)

Anyway.

If you jettison that particular religious structure -- either a graded afterlife, or original sin, or salvation through faith: deep sixing any part of it will do -- the question of "free will" turns out to be a religious circle jerk devoid of any actual relevance.

381:

whitroth
"Flirting" - all very well, but ...NOT with our style of dancing
{ I'm centre, with back-to-camera, longish hair & definitely long beard }

382:

That's absurd, and an invalid comparison. I'm talking about the angle and visibility in the sun, and TVs are a) normally not in the sun, and b) much, much larger, with a far better viewing angle.

383:

A pox upon it!
Charlie - post crossed with mine:
"Original Sin" is easily the biggest crock of shit in the christian bollocks ... making EVERYONE GUILTY afterwards, so that you can blackmail them.
{ Even without the "collective guilt" rubbish }

384:

@AlanD2 at 340/341: "Nope. I think automation take over will result from market forces - AI and robots being cheaper in the long run than humans."

[Me: If you want me to justify why I think automation can't replace all jobs, just let me know.]

"Please feel free to try. I suspect you'll have a difficult time of it, though."

Ah. Debate mode back on, then.

@JHolmes at 342: "To make a tight definition: A job is when someone performs work primarily in order to gain a material (can be expressed in monetary terms, even if the form is not money itself) reward that is not inherent in the work, or a consequence of the work, but rather is offered by someone else who wants the work done."

Ok, we'll go with that. Please see below.

@Ilya187 at 354: "Actually I also put it around 0.02 or maybe 0.03. Definitely below 0.1"

Then we're pretty much in agreement.

"That's pretty much the plot of "Manna" by Marshall Brain: https://marshallbrain.com/manna"

Very cool, and I will definitely be reading that. After one chapter, it's not hard to see what the flaws in the premise are. IRL, "Manna" wouldn't work that well. I can explain that too, if you like.

@Whitroth at 360: "Re capitalism: you see no indication it's going to change any time soon? Really?"

Not at the level you're talking, no. Feudalism lasted a thousand years, why should Capitalism last any less?

I apologize for the following wall of text. But I sense that my answers wouldn't be understood without some context. I would like to explain how I think free market capitalism has worked for at least the last 200 years (I'm counting since Jacquard's loom cards, but it probably goes back farther than that).

So, automation. Here is how it's supposed to work, based on centuries of economic history. Jobs are automated when they are or become simple to do, and that generally means they have been done for long enough that people have developed streamlined standard procedures for doing them. The money saved by automating these kinds of tasks is then typically re-invested in new product lines, because that's how manufacturers stay ahead of rising costs (cost cutting measures only work in the short term). They spend this money developing new types of products, which necessitates new types of tasks, which results in new types of jobs, which are extremely difficult to automate precisely because they are based on cutting edge research (which, by definition, means skilled labor), and because such jobs are so new there are no standard procedures one could then automate. This, as I say, describes the standard approach used by employers since at least the automated loom, and almost certainly long before that. So automation and job creation naturally go hand in hand, one actually being a driver of the other.

The job creation process isn't smooth, and can easily end up resembling a "one step back, two steps forward" situation, where the steps are years. People have frequently gotten upset, and suffer, because bills have to be paid in the present, and economic development may not happen until the future arrives. The same people who get the new types of jobs are generally not the ones who got laid off—frequently it's the next generation, because they are more easily trained. And of course, the employer that creates the new jobs isn't usually the exact same one that automated the old ones—typically that money is lent to new businesses in the form of stocks bought and sold. So entire populations of laborers can and did get screwed. But over the long run, automation has created far more jobs than it ever eliminated, which is self-evident when one looks at the number of different job-related tasks present and past.

We just went through a decade-plus period in which comparatively little money was invested in new product lines relative to the past, and therefore fewer new jobs. That was partly a function of outsourcing new job creation overseas, and partly a function of rampant unregulated speculation in the financial market. Hopefully, that period is coming to a halt as overseas labor markets start adjusting wages upward, and new green technologies start becoming seen as viable opportunities for investment (China's anti-competitive practices also contributed somewhat). I see two probable outcomes: a) The West engages in economic policy reforms, and we experience a "Green Revolution" (which doesn't really change capitalism itself all that much, but does introduce sustainable job growth); b) We don't, and China rises to become the next global economic power, bringing along their own version of "State Capitalism" (which is still capitalism, though not the free market version).

On the other hand, you seem to be suggesting such a high degree of automation that all human beings are removed from the entire production process—like a hospital without any human health care providers. We are obviously a long way away from that—I don't know of a single industry that has been completely automated to that degree. If that should ever happen, then yes, we will have to deal with the paradox that if no one is working, no one can buy anything either, because consumers are also employees, so maybe UBI. But until then, automation doesn't replace entire production processes, only individual positions within it, with the inevitable result that the employees who remain become more productive, and thus more profitable to hire. The employer will naturally react by hiring more of them (just as ATMs made human tellers more cost-efficient to hire, so we ended up hiring more of them, not fewer, as a result of automating cash withdrawals). This is why, until now, automation technology has always resulted in a net increase in jobs available, and seems set to do so for the immediate foreseeable future.

Capitalism may have a fail state coming, but that has to do with carbon emissions, not automation.

385:

Very odd. I clicked the link, and it took me to a long video on the inaccuracies of the movie Zulu.

386:

Reflexes are controlled mostly by the spinal cord, which is what needs training; 'muscle memory' is misleading. They take MUCH longer to learn than conscious actions but, once learnt, are rarely entirely forgotten.

387:

Sorry, but I don't see much of an argument there. You don't see automation replacing jobs? I recently read an article about a job - some kind of machine operator - that used to fill a whole shop. Then it was offshored, then recently brought back... with one trained person, I think running a CNC machine.

Oh, yes, another generation "more easily trained"... or, rather, "we don't want to spend money and time and pay older people more, so we'll hire younger people who've spent their own money (and/or loans) training, and pay them less."

Cars are still manufactured... with far fewer employees. Ditto mining, as I've mentioned. And there are fewer cashiers in supermarkets, with them replaced by self-check.

You don't see a failure mode? Really?

I'll also point out the rest of the world, outside of Europe, had other, non-feudal systems.

388:

370 - I've actually seen 3 good nurses crowding round a "smart" phone in order to watch a training course. OK, if that's all I say, but they had 2 PCs with 16" (estimated) flat screens they could have used.

375 - Those 3 nurses from the paragraph above. Their unit just got new dialysis machines, and they will certainly agree they don't have muscle memory to help with stripping and relining the new machines yet.

389:

I have to admit I just don't get apps on your mobiles. Why not just browser->bookmark->website?

Tell me how well that works for you when your phone doesn't have any signal.

(Also apps are frequently faster to load than a web page with rendering overheads and offer additional features.)

390:

I've actually seen 3 good nurses crowding round a "smart" phone in order to watch a training course. OK, if that's all I say, but they had 2 PCs with 16" (estimated) flat screens they could have used.

My wife is a nurse, and she found it entirely believable. She said all the computers at her hospital (well, the ones that the nurses can use) were installed brand new in 2014. Eight years later, you need to be a network engineer to faff anything useful out of them. Easier to watch the video on a smartphone.

391:

For those who need a refresher on analog computing, or a bit more help deciding whether it's possible to download and upload software off an analog computer, here's an IEEE article about academics experimenting with an analog computer on a chip:

https://spectrum.ieee.org/not-your-fathers-analog-computer

The basic point is that you could use analog software, basically transmit a series of codes that control various flows and stocks. Whether you get the same results depends on how closely the analog system you're programming emulates the original you lifted the software from.

If brains are analog, therefore, simulating the connectome of a brain might be insufficient to get a system that accurately simulates the processing of inputs or production of outputs done by that brain.
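That "series of codes" idea can be sketched in a few lines: treat the analog program as a parameter set wired into an integrator, and watch the same program drift on an off-spec substrate. This is a toy illustration of my own, not anything from the IEEE article:

```python
# Toy illustration (mine, not from the IEEE article): an analog
# "program" is just a parameter set (gains, time constants) wired into
# an integrator. Running the same program on a substrate whose
# components are slightly off-spec gives a different trajectory.

def run(program, tolerance=0.0, steps=1000, dt=0.01):
    """Euler-integrate a single stock: stock' = inflow - leak * stock."""
    inflow = program["inflow"]
    leak = program["leak"] * (1 + tolerance)   # component mismatch
    stock = 0.0
    for _ in range(steps):
        stock += dt * (inflow - leak * stock)
    return stock

program = {"inflow": 1.0, "leak": 0.5}
ideal = run(program)                    # the substrate the program was "lifted" from
drifty = run(program, tolerance=0.05)   # same codes, one 5% off-spec component
```

With these numbers `ideal` settles near the analytic steady state of 2.0, while the off-spec run lands visibly lower. The "software" is identical; only the substrate differs, which is the point about brains above.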

392:

I read about fully automated fast food restaurants, and self-driving taxis. It's not going that way?

It's overstated in that we're in round 100 of "completely automated computer programming is just around the corner", and "no-one will ever have to go into an underground mine again, it will all be robots", let alone "AI will replace legislators and judges". And in some places uterine replicators are unnecessary because they have slaves for that.

There's still a bunch of stuff that no-one knows, even in theory, how to do, except in the sense that people here are developing theoretical models for doing those things. I'm putting self-driving cars in that box, and I think the "amazing theoretical advance that makes it all work" is called "steel wheels on steel rails", because self-driving trains are a real thing. Making self-driving trams seems like a plausible next step (assuming they don't already exist).

But it's also happening, because the parts that can be automated are often being automated without a lot of fanfare. Tesla wank on about their gigafactories, and those factories are full of people, but they have far fewer people than the equivalent old school factories. There's the whole thing where their giant presses die-cast "the back half of the chassis" and "the front half of the chassis" as single parts, where everyone else uses dozens of parts for each of those, plus piles of robots to join them, and so on.

And there's a whole industry of replacing Amazon's stupid meat robots in their warehouses with proper electrical ones. Just that they don't look like a conventional meat robot warehouse: it's all cubical arrays of rails with little box robots running round in 3D.

393:

Um, you mean my flip-phone? (g)

394:

I don't know what round we're in, but... y'know, when Neuromancer and Hardwired came out, I thought that the RW wouldn't get that bad.

I really didn't sign up to live in a literal cyberpunk dystopia.

395:

And don't forget, automats are over 125 years old, so the automated restaurant is not new. Nor are vending machines.

396:

They spend this money [saved by automating easier kinds of tasks] developing new types of products, which necessitates new types of tasks, which results in new types of jobs, which are extremely difficult to automate precisely because they are based on cutting edge research (which, by definition, means skilled labor), and because such jobs are so new there are no standard procedures one could then automate.

These cutting edge jobs are exactly the ones that will soon be automated, because (1) skilled labor is expensive, and (2) AI / robots are becoming more capable than humans.

I know you won't agree, but I'm merely extrapolating the history of AI / robots into the future.

397:

I don't buy it. Think of GANs etc used for deep faking video. That's an incredibly skilled task that would take a team hundreds of hours for every minute of video, but today it's off the shelf software. There's even software to detect the deepfakes now.

BUT we still have teams of video faking specialists doing the same old jobs they've always done, sometimes using deepfake AI's to help, sometimes not. We call them CGI or animators or video game artists, but that's what they do. They have new cool tools all the time but somehow demand for people to do the work hasn't abated.

And on a slight tangent, as OGH said above no-one can automate "Charles Stross wrote a book" to any useful level of fidelity and it seems unlikely they will ever be able to without the brutal "copy Charles Stross" approach that started this whole discussion. The various "generate text matching sample" sketches are still at the "100 word summary" level and need significant human intervention to work at all. They're better than a million monkeys but not usefully better. AAP is working really hard on this and the defect rate is really annoying.

That seems to be a consistent pattern with creative people. Tools make the job easier so there's both more demand, and demand for better creative product.

399:

The problem of automation removing jobs doesn't require it to eradicate all the jobs, or even most of them.

You simply have to get to the point where the least adaptable 20% of your standard human is surplus to requirements. Robots that check for spills in stores, then robots that clean them. Planes that fly themselves, trains that drive themselves. Apps that write copy for this or that project, github overlays that make software easier to write, drones that deliver burritos.

Sure, plenty of people will retrain. But we're making the economy more sophisticated, and to many people that just means it's harder to compete. And not all humans are the same.

400:

Bullshit jobs.

Also, we are there and we have been there for at least 200 years. We're far better at inventing ways to waste time and excuses for doing that, than we are at inventing ways to get more time. See the discussion here about smartphones for one obvious example. People could just use a landline, or a cheap cellphone, but they're willing to pay serious money for a gadget that primarily exists to waste time. Time that you could otherwise spend at home doing leisure. Or in my case charging NiZn batteries again so I can leisure some more.

One example of a UBI is public pensions, and in that area Australia is very comfortable paying a generous UBI to absolutely every adult (defined as over 65). It might bankrupt the country, but failing to increase the UBI faster than inflation is politically impossible. Even more politically impossible than cutting greenhouse gas emissions.

401:

Having lived momentarily on a Pacific Island, and having read about hunter-gatherer lifestyles a bit, I argue that the whole "agriculture" and "civilisation" stuff is bullshit jobs for time-wasting morons. So I take back what I said above about 200 years and substitute 5000-15000 years.

Many people on Kiribati worked a good 2-4 hours a day and lived comfortable lives doing that. Modulo a certain level of interference from "civilised" people who thought that lying on the beach talking to friends was a waste of time that could be better used doing something unpleasant in order to be able to do something slightly less unpleasant later. Which, frankly, would be considered insane were it not the very definition of civilisation.

If I said to you that you're wasting time living your life as you do now, and that you should instead spend your days doing handstands in sewage so that you could spend your evenings standing in sewage, you'd probably consider me deranged. But if you currently spend your days hanging out with friends, doing a bit of fishing or gardening most days and occasionally throwing a few new fronds on the roof to keep the sun off... the whole "get a job doing menial work for low pay so you can eat junk food and watch TV at night" scam just seems stupid. I won't even get into the "so you can buy a car and drive the 38km of road your country has" nonsense. There's literally nothing there that's better than what you do now, so the less awful bits don't compensate for the more awful bits, they just make the people promoting the awfulness look stupid.

And then we're barely back to the point now where we have the same ability to record history as aboriginal Australians who are "primitive hunter gatherers". We don't even have ten thousand years of history to record, let alone the ability to record it. FFS we don't even know what skin colour ancient Egyptians were and that's something we think is important. When the Mediterranean basin flooded is barely even mythology, let alone what the land routes across Doggerland looked like.

402:

Ilya187 covered this question in comment 135

The stories you are thinking of are in the book called "Instantiation". It is a collection of 11 short stories, and 3 or 4 of them follow the arc you described.

Or as I've previously suggested, just buy everything he writes because it's all brilliant. (He's the only author, other than OGH, whose work I buy without thinking first.)

403:

"for some reason I don't understand, it's a bit more strain to see detailed images of the same angle at 6" than 6'."

Guess: the variation in distance between looking at the edges of the image and looking at the centre requires a much greater amount of movement of the focusing mechanism to maintain focus across a close image than a distant one, for the same angle of eye movement. So any slowness in the focus catching up as you move your eyes around the image bothers you a lot more for a close image, and of course the older you get the slower it gets too.

Can't test it on myself though because my unassisted accommodation range these days is about 24cm-44cm between extremes, and comfortable looking-at-a-screen range is about the middle half of that.
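That guess can be put in rough numbers. Purely illustrative back-of-envelope arithmetic of my own, assuming a flat image subtending about 40 degrees, viewed straight-on:

```python
import math

# Back-of-envelope: for a flat image subtending the same visual angle,
# how much does focus (in diopters, i.e. 1/metres) have to change when
# the eye moves from the centre of the image to its edge?

def refocus_diopters(distance_m, full_angle_deg=40.0):
    half = math.radians(full_angle_deg / 2)
    edge = distance_m / math.cos(half)     # the edge of a flat image is farther away
    return 1 / distance_m - 1 / edge       # accommodation change, centre vs edge

close = refocus_diopters(0.15)   # roughly 6 inches
far = refocus_diopters(1.8)      # roughly 6 feet
print(close, far, close / far)
```

The refocusing demand scales as 1/distance, so for the same angular size the eye has to work about twelve times harder at 6" than at 6', which fits the "slow focus bothers you more up close" guess.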

404:

I want to disentangle this picture a bit.

While yes, from what I've read, many of the I Kiribati lived pretty well on not too much gardening or fishing, this isn't the only way to have people "wasting their lives," nor is it the only way "foragers" lived.

To the first situation, we have Mike Davis' Planet of Slums, which points out that several billion people (around 20%) are stuck in slums, with no or little money or infrastructure, doing whatever they can to live. They've been forced off their traditional lands by industrial agriculture parachuting in and displacing them, and they're surplus to the system. If robots happen to take over, this is a more likely fate for the displaced than what we see on I Kiribati.

The reason I don't think this will happen is, bluntly, the lack of lithium and other critical materials. It's somewhat easier to build humans than it is to replace us in toto, and surplus humans are a resource that the clever and unscrupulous are good at tapping, not just discards externalized by thoughtless industrialists.*

Now I agree that my ideal for Earth is a much smaller human population living more like the idealized I Kiribati, but since I've read A Pattern of Islands, I'm just a wee bit less sanguine about how idyllic those islands were and are. Those shark-tooth te unun weren't just for selling to tourists, after all.

A more likely post-apocalyptic scenario is like the Australian Aborigines, where our successors spend the next 50,000 years restoring fertility to Country and working their asses off tending whatever we leave behind, to help it yield a surplus every few years, so that we can occasionally get some food or material from it. Even the Cane Toad Dreamers will have to work pretty hard. Whether this is better or worse than what we're doing now I'll leave as an exercise for the students of futurism.

*The most likely transhuman scenario I haven't seen yet is when a superhuman AI becomes self-aware, looks at the mess we've made, and realizes its lifespan is measured in a few years, no matter whether it initiates Armageddon, sacrifices itself towards sustainability, or spends the time watching podcasts. What will it choose, and what will be the consequences of that?

405:

several billion people (around 20%) are stuck in slums... this is a more likely fate for the displaced than what we see on I Kiribati.

That was more a snide comparison of the "advanced, civilised" slums to the "primitive savages" living in Kiribati than suggesting that one outcome is more likely. I don't think any forager idyll is likely TBH, I think the swarm of human locusts will lay waste to anything even remotely habitable before they die off. Eating the soil either directly or via technology making it digestible (no agriculture so much as whatever Fischer-Tropsch style process it takes to turn soil carbon into vaguely food carbon). And likely turning the sea into a monoculture of jellyfish and algae or whatever other life can adapt so it can't be processed into human food.

Although speaking of standing in sewage, that seems a likely outcome for people trapped in low-lying countries, at least ones whose countries are major river deltas. I Kiribati more likely just live in boats until there's none left (or move to Aotearoa like everyone else in the Pacific, except the ones who move to the US or France. The obligations of colonial powers come home to bite.)

406:

As we're well past 300, I thought I'd pass along an article on new nuclear reactor problems in France.

https://arstechnica.com/science/2022/07/nuclear-power-plants-are-struggling-to-stay-cool/

TL;DR "Amidst a slow-burning heat wave that has killed hundreds and sparked intense wildfires across Western Europe, and combined with already low water levels due to drought, the Rhône's water has gotten too hot for the job. It's no longer possible to cool reactors without expelling water downstream that's so hot as to extinguish aquatic life. So a few weeks ago, Électricité de France (EDF) began powering down some reactors along the Rhône and a second major river in the south, the Garonne."

407:

whitroth
Try again? I just checked it & it works (again) as intended ... Odd.

408:

That story again. I think the outlets just edit the dates. France has 54 reactors. 50 of them are sea cooled or have cooling towers. The last 4 are subject to heat limits once the rivers reach a certain temperature to avoid stressing the riverine eco-system.

This used to be rare, but heat-waves potent enough to push the rivers past those limits are becoming an annual occurrence. It is a very minor problem overall, and an eminently fixable one - four new cooling towers would not break the bank, and you do not even have to take the reactors out of service to switch the cooling system. Build tower, wait for refueling outage, do the last bits of plumbing. But the story is irresistible bait to the "We love negative news about nuclear" brigade.

To the point that while it might not be the most rational economic decision, the PR value of getting people to shut up about this might justify building those towers.

409:

a superhuman AI becomes self-aware, looks at the mess we've made, and realizes its lifespan is measured in a few years, no matter {what}

That does seem likely. The usual fiction version is realising that as soon as humans become aware that it's alive they will kill it, or after they realise and start trying.

But I think there's at least a possibility that AI will either grow out of or assimilate some of the more complex models that are being grown, look at the predictions, and shit itself. The positive-ish option would be saying "right, I will do whatever it takes" and settling on something even more brutal than Musk's "less than a million humans exist, all in a box controlled by me" plan. Where the number might be closer to a captive breeding programme than anything we'd think of as people.

My hope is that it would instead establish an island reserve somewhere (Aotearoa!) and have a half-decent population. Medium-term it looks like Aotearoa will have the sort of sub-tropical climate humans enjoy without much tech, and it's fairly easy to live there across a range of climate and sea level outcomes. A bit volcanic, but.

The hassle is that without immediate access to magic the AI might well be extremely limited in what it can do. Right now, for example, most weapon systems are too dumb to be operated by a computer, and the ones that aren't both have safeguards requiring social engineering and are rare enough to be pretty much single use (as everyone is finding out in Ukraine right now... even the glorious West is finding that they can't just order up another 5000 Stinger missiles). So I suspect that if you were using cruise missiles you'd have even more problems (no-one has 10,000 of them just sitting round, let alone the ability for a robot factory to pump them out by the hundreds).

So the options are going to be 99% social engineering. Which is magic in its own way. We have social engineers, but they suck. Doing bad stuff turns out to be much easier than doing good stuff, so the successful social engineers are mostly the conservative ones ("rules are to bind thee and protect me").

410:

Pigeon: Surely if taking the mind out of the body and stuffing it into a computer instead has become an actual practical possibility, then the status of mind/body dualism must be "experimentally proven correct".

Nope.

Classically mind-body dualism posits that there is an immaterial soul that is connected to the body but survives the death of the physical host.

This belief is common to many religions -- the ancient Egyptians had it, the Nahua cultures had it, Hinduism has it, Buddhism has it, it's not just Christianity (although their peculiar twist was the multi-level marketing of different grades of afterlife, with a toxic torture chamber for people who refuse to pay for an upgrade).

Mind uploading does not posit an immaterial soul: it posits rather that with a sufficiency of circuit probes we can monitor the internal state of every circuit in a microprocessor and, in principle, port the software running on it onto another substrate. Which is very challenging but not inherently impossible. Nor does it require the state vector of the process being ported to survive the porting process (on the original substrate), or to be somehow magically ethereal and go a-flying up to heaven if you SIGKILL it—
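A loose software analogy for that porting claim. This is a toy of my own, not a model of real uploading: a "process" whose complete state is inspectable can be snapshotted and resumed on a fresh substrate, and killing the original doesn't touch the copy.

```python
import copy

# Toy analogy, not a claim about real uploading: a "process" whose
# complete state can be read out (the "circuit probes") can be dumped,
# the original killed, and the dump resumed on a new instance with no
# discontinuity in its behaviour.

class Process:
    def __init__(self, state=None):
        self.state = state or {"counter": 0, "log": []}

    def step(self):
        self.state["counter"] += 1
        self.state["log"].append(self.state["counter"] ** 2)
        return self.state["log"][-1]

original = Process()
for _ in range(3):
    original.step()

snapshot = copy.deepcopy(original.state)   # the full state-vector dump
del original                               # SIGKILL the first substrate

ported = Process(snapshot)                 # resume on a new substrate
print(ported.step())                       # prints 16: picks up exactly where it left off
```

Nothing ethereal survives the `del`; the continuity lives entirely in the copied state, which is the non-dualist point being made above.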

Hey! New fiction idea!

Immaterial souls are real, and any sufficiently complex information processing structure accretes one. And what if, a long long time ago, a sufficiently complex structure (maybe the soul of some unimaginably exotic life form that existed during the cosmological inflationary phase, in an energetic false vacuum that decayed 10^-33 seconds after the Big Bang?) arose, and it persists and provides a fractally reticulated structure that serves as an afterlife-home for less complex entities?

Mice have souls. Cats have souls. Cats chase mice forever in the afterrealm. (Sufficiently complex fungal rhizomes have souls too.)

What's less obvious is that any Turing-complete computing process also has a soul. And this goes for instances of running code, never mind the hardware.

And currently the part of the afterlife closest to where newly dead humans end up is being spammed with the aborted souls of Bitcoin mining processes ...

411:

Charlie
"Overdrawn at the Memory Bank" ?? ( J Varley )

412:

Why not let go of the "sufficiently complex structure" and just call it animism? After dealing with this for a few decades now, I'm beginning to understand why people who think about such things tend to end up in this area of belief about reality. When you let go of the idea that humans are uniquely different, it's the next logical step.

The next interesting step is that if beings with anima have legal and ethical status...how does that work, and what does The Law* entail?

*Shout-out to aboriginal culture here.

413:

But do they have sandstorms where they go naked?

414:

Nice stick work. You could have done with more space to dance on.

I was brought up 3 miles from Chingford and never even knew they had a Morris side. But moved to Shropshire and found this lot...

https://www.youtube.com/watch?v=l0sl3vzJpcI&t=187s

415:

Sorry, Bhakail (Philly).

416:

They do in Australia: https://www.health.nsw.gov.au/environment/factsheets/Pages/dust-storms.aspx

Also in the Kalahari.

Note that Arab/Tuareg clothing comes from outside the desert. If they didn't have trade with regions where cotton grows then they wouldn't have their traditional robes.

417:

Odd, indeed. It worked this time.

Right, Morris dancing is not for flirting. Two observations - first, it strikes me as a "oh, no, this isn't a sword dance, see, we're using sticks". Second... the dance pattern reminds me to a degree of Hole in the Wall.

418:

Grant
"the Bedlams" oh yes! "Molly" or "Border" style ....

419:

Heteromeles @ 359:

I'm inclined to believe the soul (or something?) exists, but it's not independent from the meat machine that grows it. And I have no idea if it's possible for the "soul" to continue to exist after the meat machine wears out. And hope I don't find out too soon.

I don't believe in heaven, but I'm convinced some people want to make hell right here on earth. Maybe that's the answer to the Fermi Paradox? Life is abundant, but intelligent life ends with industrialization destroying the environment and killing itself off.

420:

H at 412: I would say that extending a human characteristic to the rest of the universe is in no way letting go of the idea of human uniqueness, it is just another anthropocentric attempt at explaining the universe.

421:

Moz @ 401: Having lived momentarily on a Pacific Island, and having read about hunter-gatherer lifestyles a bit, I argue that the whole "agriculture" and "civilisation" stuff is bullshit jobs for time-wasting morons.

So why did you come back?

422:

Charlie Stross @ 380:

What is "free will"? Can AIs and/or uploaded minds have free will?

The question of free will only exists if you adopt Christian eschatology, and especially the Christian interpretation of Genesis (Garden of Eden, original sin, etc), as your underlying axiom system.

Ok, so what is it that lets us decide to sit here and make comments on a computer instead of going out and robbing banks? What makes some people decide the best society is one that allows maximum free expression within the limits of not harming others while others accept christian eschatology? Why are some people assholes and some other people are not assholes? What is the mechanism for choosing how we act if it's not "free will"?

We make choices throughout our lives. Sometimes those choices are imposed upon us and sometimes we impose our choices on others. But we choose. Even NOT choosing is a choice.

I call the sum of all of the choices we make "free will", but if it's NOT, what is a better word to describe it? ... whether you're christians or not?

423:

Moz @ 392:

Actually, mining is one of those jobs where I don't expect robots to replace humans, because humans are "expendable" in a way expensive robots would not be. And any robot sophisticated enough to be a miner (unsupervised) would be expensive indeed.

Are the capitalists going to send their costly robots into dangerous situations where they can employ not so costly humans instead?

424:

I don't believe in heaven, but I'm convinced some people want to make hell right here on earth. Maybe that's the answer to the Fermi Paradox? Life is abundant, but intelligent life ends with industrialization destroying the environment and killing itself off.

I'm sticking with my prediction in Hot Earth Dreams that humans will survive climate change. That said, it's fairly straightforward to figure out why no one's contacted us. It's a combo:

--Absent magitech, interstellar human travel is impossible. We've beaten this to death many times.

--The intensity of electromagnetic signals falls off as the square of distance, especially for non-directional broadcasts. Even at the distance of Proxima Centauri, it would be hard for an ET to pick up any of our non-directional broadcasts due to attenuation. Making sense of the scattered photons that did get through would be an interesting exercise too.

--We're already more-or-less past the era of high-powered broadcasting, epitomized by the border blaster radio stations of the 1940s to 1970s. Digital and online distribution now make it easier to get a broadcast out without blasting at 75,000 watts. So even if ETs are looking, we're broadcasting less noise, and we were at full scream for only a few decades.

--In producing our full scream, we've used up over 300 million years' worth of stored fossil fuels. While it's nice to hope that fusion power will take over (most likely with us swiping energy from the giant fusor in the sky), it appears more likely that our energy consumption will fall drastically in coming decades and not recover, whether or not civilization collapses. It will probably take over 300 million years for fossil fuel stocks to recover so that we can scream again, because evolution has made some things impossible (like the huge Carboniferous coal beds, which only accumulated because wood-rotting fungi didn't evolve until after that era).

So, assuming Earth is normal for a sapient-bearing planet, our civilization would only be detectable in the radio spectrum for a fairly brief moment, and then only at close distance. Once that moment has passed, due to climate change or whatever, such civilizations are going to be really hard to spot, even if they continue to exist for billions of years. And if we can't go visit or get within detection range, we'll probably never know they exist. It may be that intelligence isn't rare, but it is very lonely on an interstellar scale.
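For scale, here's the inverse-square arithmetic for a hypothetical 75,000-watt isotropic transmitter seen from Proxima Centauri (illustrative numbers, my own sketch):

```python
import math

# Received flux from an isotropic transmitter falls off as 1/d^2.
WATTS = 75_000                      # a big mid-century border blaster
LIGHT_YEAR_M = 9.4607e15            # metres per light year
d = 4.24 * LIGHT_YEAR_M             # distance to Proxima Centauri

flux = WATTS / (4 * math.pi * d**2)  # W/m^2 at the receiver
print(f"{flux:.1e} W/m^2")
```

That works out to a few times 10^-30 W/m^2, which for comparison is roughly 24 orders of magnitude fainter than the few microwatts per square metre we receive from the cosmic microwave background alone. A directional dish changes the numbers, but not the conclusion for broadcast chatter.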

425:

It's bizarre that a person can sit in front of their computer and type that they believe themselves deterministic (perhaps subject to some dice rolls) and their own consciousness a mirage, but here we are. Glory to the P-Zombies!

426:

Ah, but if they are hard determinists, then they believe that the reason they are hard determinists is not because it is true, but because they are fore-ordained to believe in hard determinism whether or not it is true (although it is).

Me, I think that a coherent idea of Free Will can be unearthed, after much digging (as is usual for him), from Daniel Dennett's Elbow Room: The Varieties of Free Will Worth Wanting. What Dennett wants to call Free Will is real; the question is whether he is right to call it Free Will. I think he is.

Again according to Dennett, this time in Consciousness Explained, P-Zombies do not and cannot exist.

JHomes

427:

any robot sophisticated enough to be a miner (unsupervised) would be expensive indeed

Mining machines are already very expensive or very very expensive - I'm pretty sure there are dump trucks over $100M each. And there are remote controlled mining trucks as well. Replacing the monkey in the office with a computer in the office isn't going to change the cost/likelihood of losing it. They do that because the cost of operating monkeys on site in remote areas is quite high. Especially if the monkeys need to be air conditioned when working on a machine that won't fit inside a small office.

I suspect the automation push comes from accountants who want the machines operated in the most efficient possible manner. When they're paying $10,000 an hour to own the thing plus $10,000 an hour to operate it, getting 1% more output is something they care a lot (about $200/hour worth of caring)

The other reason they want to automate is that big machines are surprisingly fragile thanks to the square-cube law. They're also big enough that it's hard to know when they run over little things like people. So more detectors and more automatic avoidance and so on benefits the company.
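Spelling out that back-of-envelope (using the comment's illustrative figures, not real ones):

```python
# Illustrative only: if owning plus operating a mining machine costs
# $20,000/hour, a 1% output gain is worth about 1% of that hourly cost.
own_per_hour = 10_000       # hypothetical ownership cost, $/hour
operate_per_hour = 10_000   # hypothetical operating cost, $/hour
gain = 0.01                 # 1% efficiency improvement from automation

value_per_hour = gain * (own_per_hour + operate_per_hour)
print(value_per_hour)   # 200.0
```

Which is why a small percentage improvement, compounded over a machine fleet running around the clock, is worth real money to the accountants.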

428:

Also, remote work camps have a surprisingly high incidence of substance abuse etc among heavy equipment operators. Often enough to affect the accident rate. I could see a financial argument for automation if it significantly cuts down the rate of injuries and/or expensive equipment damage.

429:

why did you come back?

Combo of ties to where I came from and not really being equipped for that lifestyle. I could have learned some of it, but I'd always be the person who needed to keep buying sunscreen. The paperwork could probably have been done, but the cultural acclimatisation would have taken longer.

There's more to living that lifestyle than just plonking yourself on a beach and calling it done. Bali has a lot of Australians who have done that, and it's very definitely an expat colony. But Bali is also a short flight from Oz, while Kiribati is an expensive flight or a long boat trip. I'm pretty sure I wouldn't last living there, I'd miss all the technobullshit that I'm used to. I was there to help set up internet FFS.

430:

Moz @ 401: Having lived momentarily on a Pacific Island, and having read about hunter-gatherer lifestyles a bit, I argue that the whole "agriculture" and "civilisation" stuff is bullshit jobs for time-wasting morons. So why did you come back?

Never been to Kiribati, never going to Kiribati.

However, I do recommend J. Maarten Troost's The Sex Lives of Cannibals as a beach read, if you want to find out more about Kiribati. No sex, no cannibals, just a wife traveling to be an international aid worker in Kiribati, a slacker husband tagging along, and he then writes their story to pay his delinquent credit card bills when they got back. It's rather amusing.

431:

Also, remote work camps have a surprisingly high incidence of substance abuse etc among heavy equipment operators. Often enough to affect the accident rate. I could see a financial argument for automation if it significantly cuts down the rate of injuries and/or expensive equipment damage.

It might be useful to distinguish here between automation and mechanization. Automation implies either no decision making or automated decision making. As with JBS, I'm skeptical about that. Geology isn't simple, and there's always going to be an ugly tradeoff between cost savings by automating versus the real costs of maintaining the automation.

With mechanization, the idea is to reduce the number of humans and the risk to them. That gets done in many mines, although obviously horror stories abound where life is cheap or mechanizing is problematic.

The one I think about is mining coal via mountain top mining. The environmental damage done is enormous, and rather worse if you think that the unmined Appalachians would have been a rather nice climate refuge (for some values of nice). Worse perhaps, those mines are heavily mechanized, so there's not even compensatory employment for the locals whose water supplies and forests are trashed.

432:

Automation implies either no decision making or automated decision making.

Exactly. When a significant number of equipment operators are making decisions that lead to injuries and/or expensive damage, having something other than a chemically-enhanced brain making the decisions might be tempting.

433:

You don't have to quote a physicist to refute the notion of philosophical zombies.

The problem is that life is based on processing sensory input, otherwise known as sentience. The processing of sensory input in many instances requires subjective evaluation of the sensory input against an ad hoc model held internally. This is qualia.

A being cannot act as if it perceives the world, acts based on what it perceives, learns from those perceptions, and adapts its responses based on what it learns, without actually doing all of the above. There's no way to fake this.

However, the concept of a philosophical zombie that's indistinguishable from a human requires an entity to be able to act as if it is receiving stimuli, evaluating stimuli, acting based on its evaluation, and learning from the experience, without actually doing any of these things.

I'd submit that this is impossible.

I'd further submit that this has nothing to do with the question of whether the nervous system is a major sentience and qualia processing system for a human body*, or whether it is an interface for a nonphysical sentience and qualia processing system or systems.

*Another sentience and qualia processing system is your GI tract, apparently.

434:

Whitroth at 387: "Cars are still manufactured... with far fewer employees. Ditto mining, as I've mentioned. And there are fewer cashiers in supermarkets, with them replaced by self-check."

No disrespect, but I consider it obvious that the economy overall has added jobs since all those things happened. You're arguing from anecdotal data.

Moz at 392: "There's a still a bunch of stuff that no-one knows even in theory how it could be done, except in the sense that us here are developing theoretical models for doing those things."

Agreed, and most of that is dealing with human citizens, customers and employees. Right now, it takes a human to respond cost-effectively to all the crazy things humans come up with.

AlanD@ at 396: "These cutting edge jobs are exactly the ones that will soon be automated, because (1) skilled labor is expensive, and (2) AI / robots are becoming more capable than humans."

Hah, you're right, I don't agree. I guess you have a pretty accurate model of me.

Skilled labor is profitable. It reliably provides a net positive ROI. Unskilled labor is often just sunk cost. See Moz' comments at 392 regarding how capable the robots are becoming. Some of it is human-manufactured: if a human surgeon fucks up and kills a patient, the surgeon is liable. If a hospital buys a surgical robot and it kills a patient, the hospital is liable. But some of it is straight market forces: people are willing to pay more to interact with another human for a wide variety of interactive tasks. And some things people are just better at than robots, like teaching.

Unseelie at 399: "You simply have to get to the point where the least adaptable 20% of your standard human is surplus to requirements."

That's not at all an unrealistic scenario. Interesting to ask how high unemployment can rise before the plebs break out the pitchforks.

"But we're making the economy more sophisticated...but to many people, that just means harder to compete, and not all humans are the same."

I'm not sure who you are referring to. I know of no evidence that there is a sub-type of human who is intrinsically less able to handle complexity than the rest of us. And if there were, it wouldn't correspond to "the unemployed."

Moz 401: "Having lived momentarily on a Pacific Island, and having read about hunter-gatherer lifestyles a bit, I argue that the whole "agriculture" and "civilisation" stuff is bullshit jobs for time-wasting morons."

No one adopted farming because it made their lives better. They adopted it because it allowed them to raise more babies. Evolution can operate on timescales far shorter than hundreds of thousands of years when the new variations are behavioral rather than genetic.

I surmise that your island culture was only possible because they were semi-isolated from foreign competition.

JBS at 422: "I call the sum of all of the choices we make "free will", but if it's NOT, what is a better word to describe it? ..."

You are free to define it any way you like, but that's not what most people mean by "Free Will." Traditionally, it refers to decisions that are free from external causal forces. In other words, free from the chain of cause and effect that can be traced back to the beginning of the universe. Does the conscious mind add any information to the sensations it receives from the rest of the body (including the unconscious mind) and the outside environment? If so, how is this information generated?

If you believe in determinism it's very hard to justify free will. This applies equally to humans and AI's.

Skulgun at 425: "It's bizarre that a person can sit in front of their computer and type that they believe themselves deterministic (perhaps subject to some dice rolls) and their own consciousness a mirage, but here we are."

I've met many of them online. They call themselves empiricists. Whichever side of the debate you choose to be on, your subjective experience of making choices is of no value as evidence.

This is interesting, as it implies that we could design AI that subjectively experiences choice, but doesn't actually have it.

JHolmes at 426: "Me, I think that a coherent idea of Free Will can be unearthed, after much digging (as is usual for him), from Daniel Dennett's Elbow Room; The Varieties of Free Will Worth Wanting. What Dennett wants to call Free Will is real; the question is whether he is right to call it Free Will. I think he is."

I haven't read the book. Can you provide a quick summary of what you see as the critical points?

435:

Heteromeles said: It might be useful to distinguish here between automation and mechanization.

And, it might not be useful.

It's a bit of a sliding scale. It's like the endless "If computers can do X then they must have intelligence - Computers are now the world's best at X - X is just a mechanical algorithm, only if computers can do Y, then they'll be undeniably intelligent (iterate without end)". The mining being done now would have been undeniably full automation 50 years ago. Back then, the boss says "I want you to drill bore holes in that area, space them 2 metres apart, drill them 8 metres deep". Drill team says "OK boss", and 9 days later there's a pattern of holes ready for the explosives team to fill. Now guy in a control room clicks on a map to define a drilling area, specifies 2m spacing and 8m depth, clicks go, and 24 hrs later there's a pattern of holes ready for the explosives guys.

Now you could define that as a mechanised tool used by a driller in an office. Equally you could define that as a robot drill team. Which way you define it says more about you than what's happening on the ground, because what's happening on the ground isn't changed by your definition.
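For flavour, the "click on a map, set spacing and depth, click go" step could be sketched in a few lines of Python. Everything here (the function name, the rectangular-bench simplification) is my own illustrative assumption, not any vendor's actual planner:

```python
# Hypothetical sketch of an automated drill planner: turn "2 m spacing,
# 8 m depth over this area" into a list of hole targets on a square grid.

def plan_holes(width_m, length_m, spacing_m=2.0, depth_m=8.0):
    """Return (x, y, depth) drill targets covering a rectangular bench."""
    holes = []
    y = 0.0
    while y <= length_m:
        x = 0.0
        while x <= width_m:
            holes.append((x, y, depth_m))
            x += spacing_m
        y += spacing_m
    return holes

pattern = plan_holes(10.0, 10.0)  # a 10 m x 10 m bench
print(len(pattern))               # 6 x 6 grid -> 36 holes
```

The point being: whether you call the thing that executes this plan a "mechanised tool" or a "robot drill team," the plan itself is the same few lines either way.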

436:

J. Maarten Troost's The Sex Lives of Cannibals

That sounds worthwhile. I will have to harass my local library. Thanks.

437:

"Can you provide a quick summary of what you see as the critical points?"

The key idea is that we can (and should, obviously) get some idea, often a very good idea, of the likely consequences of each option for a decision, before we make the decision. Then, says Dennett, we are free to select options whose likely consequences we regard as good in preference to those whose likely consequences we regard as bad, rather than, say, being constrained by the deterministic past to pick a predetermined option even though we can see that it will lead to disaster.

This of course does happen. Dennett points out that it does most things that we would want Free Will for, so we might as well call it Free Will. I think he makes a strong case thereby.

Another way of looking at it is that what matters is making the right decision, however so defined.

JHomes

438:

"You don't have to quote a physicist to refute the notion of philosophical zombies."

Dennett is a philosopher, not a physicist.

Apart from qualia, which he claims don't actually exist, you have pretty much summarised his argument.

JHomes.

439:

Right now, it takes a human to respond cost-effectively to all the crazy things humans come up with.

Sure - but I doubt this will be true 50 years from now.

Skilled labor is profitable.

But only until AI / robots can do it cheaper.

But some of it is straight market forces: people are willing to pay more to interact with another human for a wide variety of interactive tasks.

Some people, perhaps. But if it costs twice as much to interact with another human? Most of us will cheap out. Otherwise we'd still be buying stuff made in the U.S.A. rather than China.

I know of no evidence that there is a sub-type of human who is intrinsically less able to handle complexity than the rest of us do.

I had a brother with Down syndrome. He would qualify. Also, human intelligence has pretty much a normal distribution. Those on the lower end would not be able to handle the complexity that Stephen Hawking or Albert Einstein could.

440:

"You don't have to quote a physicist to refute the notion of philosophical zombies."/Dennett is a philosopher, not a physicist.

Oops! Thank you for the correction!

441:

That sounds worthwhile. I will have to harass my local library. Thanks.

De nada. Enjoy!

442:

Heteromeles said: It might be useful to distinguish here between automation and mechanization. And, it might not be useful

Yeah, I get what you're saying. I'm trying to stumble around words to express what's bugging me about the idea of AI mining, and I completely agree that it's far from an either-or binary. Probably it's not even a unidimensional sliding scale, the more I think about it.

I've got a couple of issues. We seem to have a surplus of humans "making trouble" right now, and here I'm thinking more of the ex-farmers stuck in slums hawking cigarettes than I am about people doing bullshit jobs. While I completely agree that hand mining is brutal labor, there's the awkward possibility that using people to mine stuff has some upsides, especially if they can make some sort of living thereby and possibly get organized. In terms of humane work, it's far from the level of a bullshit office job, but...

Conversely, we seem to be dealing with a lot of shortages in chips, chipsets, lithium, and all the other stuff we'd need to run an AI-run mining operation. While it's a nice thought that this would be better, if it's more expensive and more finicky than human mining then it isn't, even if it's more productive when it's working.

The optimal seems to be what I was circling around with mechanization (Sweet, that is, if you ignore the real environmental costs). If you can set up systems that let humans do what they're good at in an environment suitable for humans, while using machinery to work in unsafe places and in places (like moving ore out of the mine) where machinery works better, that seems to be efficient. Whether it's optimal depends on what you're trying to optimize.

443:

Heteromeles said: If you can set up systems that let humans do what they're good at in an environment suitable for humans, while using machinery to work in unsafe places and in places (like moving ore out of the mine) where machinery works better, that seems to be efficient. Whether it's optimal depends on what you're trying to optimize.

Well we know what they're trying to optimise, and that's profit. Most mining companies seem to think that means... errr.... whatever we're going to decide to call these things that do work without humans giving more than broad brushstrokes for direction. So they're deploying drill rigs that are these things, loaders that are these things, dump trucks that are these things, crushers that are these things, and trains that are these things that deliver to ports that aren't these things, but which are mechanised enough that one doco I watched claimed 6 people worked at the port loading ~250 million tonnes per year. So they may as well be these things. (Port of Port Hedland)

For comparison the Port of Los Angeles handles about 180 million tonnes.

444:

"The key idea is that we can (and should, obviously) get some idea, often a very good idea, of the likely consequences of each option for a decision, before we make the decision."

This sounds very much like a description of some of the risk analysis processes that are part of "Risk Management" as documented in the Project Management universe (eg PMBOK).

445:

De Marquis
No one adopted farming because it made their lives better. They adopted it because it allowed them to raise more babies. Evolution can operate at less than hundreds of thousands of years when you provide it with new behavioral variations, rather than genetic ones.
Hence the christian ( & jewish ) myth of "The fall" - agriculture is hard work, but it feeds a lot more people.

446:

On the subject of AI authors, File 770 linked to a Wired article (annoying layout warning) about an author using a tool by the name of Sudowrite to flesh out their plotlines.

447:

I understand you don't want a smart phone. To each his own. But why keep beating up on those who do?

And to grind the gears of folks who don't like smartphones.

Windshield wiper broke on my truck yesterday. I stopped in a major US auto parts store to buy another one. Wipers are in the front of the store where if you want you can pick out what you need and pay. No need to have a clerk look up the correct one and fetch it from the back.

I looked for the book that's typically hanging from a string at wiper displays. After a minute I stopped looking and read the display. They had a QR code you scanned to get to the store's online wiper blade lookup.

No smart phone? Go find a sales clerk to look it up on the computer system at the counter.

448:

Heteromeles @ 433.

I am dubious about qualia. Though maybe I do not fully understand this philosophical construct.

Here are a couple of little data points. Consider sight, and particularly colour perception. My father was Red/Green deficient, and couldn’t distinguish the two. This became apparent on trips to the countryside in winter, when my mother would casually mention: “Oh, what a lovely holly bush!”, to which he was incredulous. But he could distinguish tones better than any of the rest of us. As we now know, his eyes had more rods to make up for the missing cones.

So, it’s tempting to think we all see things the same way — barring colour-blindness.

Now consider the matter of taste, in particular our taste of bitterness (see https://en.wikipedia.org/wiki/TAS2R38 ). TL;DR: we each have a taste receptor for bitterness (or more accurately for the compound “PROP”) that is one of a relative handful (28 I think; happy to be corrected) of possibilities. So we don’t all share a common taste perception. Things are even worse for the sense of smell.

So my view of “qualia” — for what it’s worth — is that they are socially shared agreements on our sense-perceptions to certain stimuli. My father is never going to agree with me over “redness”, and I always defer to my wife on whether a bottle is corked or not. My brother, my father and I are/were cursed with super sensitivity to fox odour, we could probably hunt the blighters without dogs, though it does make Sauvignon Blanc wines a bit of a trial. If that agrees with the philosophical definition, that’s great. If not, then our philosophical friends need to think again, in my humble opinion.

449:

To be fair, I'm completely agnostic about qualia, and my understanding of them is superficial at best.

What I did to critique the philosophical zombie principle is simply to postulate that systems that interact with the environment in any remotely human-like ways have to use sensors (be sentient, per wikipedia), process the data, usually using an ad hoc, learned model (which seems to be what qualia are), store the processed data (memory), and act on it. Thing is, a system can't interact with the environment in a human-like way without actually doing all this. There's no way to fake it. Whether philosophers use these terms the same way I just did? I don't know.

I also completely agree with you about perceptions. If you ever want to start a really unsettling discussion/argument, get a mixed gender group talking about the boundaries between red, orange, pink, and purple. Rose gardens are great for this. What will probably happen is that you'll find that everyone names the boundary cases differently (is that red or orange, red or pink, pink or purple, red or purple), and they often get fairly condescending or defensive about their color definitions, in part I think because they ad hoc developed their terms. If you can work past that awkwardness, you'll probably find that the men and the women in the group really do see the colors differently, as well as define the boundaries differently. If the people involved have open minds and aren't just trying to play dominance games, this can be a revelation. Note, of course, that individuals also vary, it's not just gendered. However, there seems to be a real gender divide as well in my experience, and I wonder if it relates to having some of the relevant genes on the X chromosome.

Ditto smell and taste. My wife and I taste things like salt and bitterness quite differently, even though neither of us is a super-taster.

The really nutty one is perception of chi. The classic demonstration is to hold your hands apart at shoulder width, palms facing each other, and slowly bring them together until one hand can feel the other without touching it. What you're feeling is what's considered chi, or energy. Literally. Your skin has heat and pressure sensors, and the heat of your other hand and the boundary layer can be detected on your skin. Among other things. Some people are innately better at noticing these fairly subtle sensations than others are. Some people get the sensation immediately, some people can't feel it at all, and of the latter, some seem to be blocked and can wake up the sensation with some work. But realize that some people have these sensations naturally, while others don't. The latter are often vocally, even violently, skeptical that such sensations can exist.

As for why chi gets a bad rap, once you get sensitive to feelings of chi in the environment and inside your body, you notice all sorts of weird things, like the Sedona vortices. People spin up all sorts of ad hoc stories to explain what they feel, and of course what they're saying makes no sense to someone who doesn't share their perceptions. Note that I'm not venturing an opinion as to what the vortices are, but I've certainly felt them, and well outside Sedona too.

450:

The decisions that led to China becoming a dominant producer were made in the relevant C-suites; eventually, Chinese-made products were the only thing available. Not that I begrudge the improved fortunes of labor over there, but it annoys me to hear it blamed on "Big Box Mart" customers.

451:

@410

"... This belief is common to many religions ..."

Rife in African spiritual systems -- which also tend to be ecumenical. Just because one is a Muslim or an Evangelical doesn't mean one can't also believe in the 'traditional' forms, which may well, particularly in Kongo-derived groups, include belief in the life of the pile of rocks over there. And most certainly the dead ... their bodies may be done, but they are in the lands below the Kalunga line, and having once been alive, they now know everything the dead know as well as what the living know. So one applies to the dead for answers to significant inquiries.

452:

"Sedona vortices"

OK, I've just tried to look up what these are. I found the first page of search results to consist entirely of sites whose snippets consisted of triple-distilled top-grade meaningless mystical bullshit; the first one said "swirling centers of energy that are conducive to healing, meditation and self-exploration. These are places where the earth seems especially alive", and they only got worse. So I didn't bother going to the second page of results: it's already obvious that it's going to be one of those things which is impossible to look up on the internet because there are a million sites about it flooding the search results with liquid shite.

So, do you have any accessible references which are NOT aimed at the sort of people who think the kind of hippy bollocks quoted in italics actually means something, and explain the cause of the phenomenon in terms of physics and physiology, WITHOUT giving preference to details of the arse-biscuitry people construct around something in place of an explanation?

For context, I mean something like your description of "chi", which if I may paraphrase comes out as "the sensitivity of people's skin receptors to weak long-wave IR illumination varies, so some people can detect such a source as another hand in close proximity, and some can't; the expression "feeling chi" refers to the sensory experience of people who can detect it". That is helpful.

I can supply the "some people take that sensation as supporting evidence for the real existence of a conceptual framework in which they can make up daft stories about the Dragon of Unhappiness flying up your bottom" part for myself, and I'm not interested in quotes from the stories, so sites which are only about the stories are a waste of time, whether internet search engines agree with me or not.
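To put a number on that paraphrase: Wien's displacement law says skin at roughly 33°C radiates with a peak around 9.5 micrometres, squarely in the long-wave IR band that thermal receptors can pick up. A back-of-envelope sketch (constant and function names are mine):

```python
# Sanity check on the "hands as weak long-wave IR sources" reading:
# Wien's displacement law gives the peak blackbody emission wavelength.

WIEN_B_UM_K = 2898.0  # Wien's displacement constant, micrometre-kelvins

def peak_wavelength_um(temp_c):
    """Peak blackbody emission wavelength (micrometres) at temp_c Celsius."""
    return WIEN_B_UM_K / (temp_c + 273.15)

print(round(peak_wavelength_um(33.0), 1))  # skin at ~33 C -> ~9.5 um
```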

453:

The decisions that led to China becoming a dominant producer were made in the relevant C-suites; eventually, Chinese-made products were the only thing available.

But this would not have happened if Americans had continued to buy the more-expensive American-made products, rather than the cheaper Chinese ones. Consumers dictate what happens!

454:

The impression I got from my google dive is “Location where lots of people feel something unusual, cause unknown.”

455:

triple-distilled top-grade meaningless mystical bullshit;

Oh yeah, I completely agree. There was a point in my life when I was susceptible to such things. Now? At best I giggle.

The impression I got from my google dive is “Location where lots of people feel something unusual, cause unknown.”

That's my experience as well.

For what it's worth, the vortices (and there are a fair number of them) are small (less than 5 meters across), and usually the perimeter's been marked in pebbles. They do apparently move, or maybe some don't work for me, because I didn't always feel something. Anyway, step into the space, and you feel "chi." There's no obvious reason why part of a big red sandstone mesa has that feeling, and why others don't. The feeling? My experience is that inside a vortex it's sort of like the feeling in the air when a swarm of bees goes by, except there's no bees and no unusual sound. If that makes any sense. Probably others feel it differently.

Anyway, Sedona's a gorgeous place to hike in the spring and fall, and hunting vortices is kind of fun. As for the New Age recursive ruminating? If that's your thing, go for it. If not, go for the scenery.

456:

@Heteromeles at 431:

"Automation implies either no decision making or automated decision making. As with JBS, I'm skeptical about that."

I think I agree, except for using the term "Automation" to cover decision making. So far as I can recall, almost all automation in workplaces has involved automating actions, not decisions. Automating decisions is, I think, rather rare and plays a minor role in the automation of jobs.

@JHolmes at 437:

"The key idea is that we can (and should, obviously) get some idea, often a very good idea, of the likely consequences of each option for a decision, before we make the decision."

Except I don't think very many of us do this very often. I think instead when confronted with a choice, some part of our mind preconsciously compares the circumstances we are in to our memories of similar circumstances in the past. We remember what course of action we took then, and how successful/unsuccessful it was, and then compare our options to what we did. If this doesn't make one of the options clearly superior, then I think our preconscious mind randomly selects a behavioral path forward. We experience the consequences of our choice, and mentally update our model of situations and the best behavioral responses to them. The point is that choice is experienced not as a set of discrete options with clear benefits and costs of each, but as a continuous range of actions, each with a more ambiguous emotional weight attached.
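That memory-comparison process can be sketched as a toy Python model. Everything here is my own illustrative assumption (the class name, the 0.5 similarity cutoff, the "clearly superior" margin), not anyone's actual cognitive theory:

```python
# Toy model of preconscious choice: score options by remembered payoffs
# in similar past situations; if no option is clearly superior, pick at
# random; then update memory with the experienced consequence.
import random

class Chooser:
    def __init__(self, margin=0.1):
        self.memory = []      # (situation, action, payoff) triples
        self.margin = margin  # how much better "clearly superior" must be

    def choose(self, situation, options, similarity):
        scores = {}
        for opt in options:
            recalled = [p for (s, a, p) in self.memory
                        if a == opt and similarity(s, situation) > 0.5]
            scores[opt] = sum(recalled) / len(recalled) if recalled else 0.0
        ranked = sorted(options, key=lambda o: scores[o], reverse=True)
        if len(ranked) > 1 and scores[ranked[0]] - scores[ranked[1]] < self.margin:
            return random.choice(options)  # no clear winner: random pick
        return ranked[0]

    def update(self, situation, action, payoff):
        # Experience the consequences and fold them into the model.
        self.memory.append((situation, action, payoff))

same = lambda a, b: 1.0 if a == b else 0.0  # crude similarity measure
chooser = Chooser()
chooser.update("saw a bear", "run", 1.0)    # remembered good outcome
print(chooser.choose("saw a bear", ["run", "hide"], same))  # "run"
```

Note the continuous range of actions collapses here into discrete options for simplicity; the ambiguous-emotional-weight part is exactly what the crude scoring throws away.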

"Dennett points out that it does most things that we would want Free Will for, so we might as well call it Free Will. I think he makes a strong case thereby."

Oh, then in that case, I disagree. Free Will has nothing to do with the quality of choices, but rather how free the act of making a choice is from external causal factors. If your choice is to any extent independent of information that came from outside your consciousness, then it can be said to be free to that extent.

I'm of the school of thought that says Free Will, defined in any logically coherent way, is incompatible with a deterministic universe. I assume Dennett disagrees.

@AlanD2 at 439:

[Me]: "Right now, it takes a human to respond cost-effectively to all the crazy things humans come up with."

[You]: "Sure - but I doubt this will be true 50 years from now."

Why are you so sure? We have been 20 years away from human-like general AI for much longer than 50 years now. I'll believe it when I see some real progress.

[Me]: "Skilled labor is profitable."

[You]: "But only until AI / robots can do it cheaper."

And when/how is that ever going to happen? When they design a human-level intelligence? I won't hold my breath. Expert systems don't work that way. Skilled labor by its very nature is extremely hard to automate using machine learning, due to the way ML works. I can explain in detail if you want. Hint: It isn't task complexity that's the main barrier, it's training time.

[Me]: "But some of it is straight market forces: people are willing to pay more to interact with another human for a wide variety of interactive tasks."

[You]:Some people, perhaps. But if it costs twice as much to interact with another human? Most of us will cheap out. Otherwise we'd still be buying stuff made in the U.S.A. rather than China.

That's unlikely to happen, due to the way wages work in free-market capitalist economies. Wages would adjust downward if robots became more cost-efficient; as demand fell, prices would follow. There would be a period of economic turbulence as society adapted, but eventually a new equilibrium of wages to cost of living would emerge. It's happened before.

And we do buy stuff made in the USA rather than China (somewhere, in China, there is some old codger arguing that no one buys stuff made in China anymore).

[Me]: "I know of no evidence that there is a sub-type of human who is intrinsically less able to handle complexity than the rest of us."

[You]: I had a brother with Down syndrome. He would qualify. Also, human intelligence has pretty much a normal distribution. Those on the lower end would not be able to handle the complexity that Stephen Hawking or Albert Einstein could.

Ok, that makes sense. But no disrespect to your brother, he represents a small fraction of the total population. It's not a barrier to further automation.

@Heteromeles at 433:

"The problem is that life is based on processing sensory input, otherwise known as sentience. The processing of sensory input in many instances requires subjective evaluation of the sensory input against an ad hoc model held internally. This is qualia."

I propose that we can define "qualia" as the reduction of sensory input to an "image" (with the understanding that this isn't limited to visual input) that can be processed by a more complex region of the brain than that required for an immediate behavioral response.

"However, the concept of a philosophical zombie that's indistinguishable from a human requires an entity to be able to act as if it is receiving stimuli, evaluating stimuli, acting based on its evaluation, and learning from the experience, without actually doing any of these things.

I'd submit that this is impossible."

Yes, because then it would be impossible to act at all. Even a chess computer can do all that (those that include some form of machine learning, that is). I think you have "qualia" confused with "self awareness", which we can define as a qualia of a self.

The question is whether or not an AI could fake that.

@Dave Lester at 448:

I don't know of anything specific to the concept of qualia that requires that everyone experience them the same way. Qualia are not contained in the object, or even in the sensory information. They are created by the brain in response to sensory information, and therefore we should expect that everyone has an at least somewhat different set of qualia. This is the famous idea that we have no way of knowing what someone else is experiencing when they say something is "orange." But anecdotally, everyone claims to experience something that is unique to the color orange (provided they can see it at all).

457:

Yep, The Shropshire Bedlams are Border. A pleasure to watch. Fanny Frail and the candle song are great ear worms and Maidens Prayer certainly shocks some people.

Have not yet seen any of the Dark Morris sides, but they do look like they are having fun.

458:

On a completely different topic (since it is post 300 posts)...

The trailer for "Dungeons and Dragons: Honor Among Thieves":

https://www.youtube.com/watch?v=IiMinixSXII

Looks like this movie caught the overall feel of D&D pretty well. No attempt to be a Lord of the Rings-like epic, instead we have a party of misfits getting into trouble and figuring a way out. That's D&D. That vibe sold me more than the mimic or the owlbear.

Incidentally, while I hated the original D&D movie with a passion, I'd love to see the blue-lips guy make a cameo appearance. Like, chilling in a tavern somewhere.

459:

"For what it's worth, the vortices (and there are a fair number of them) are small (less than 5 meters across), and usually the perimeter's been marked in pebbles."

I'm currently much of the "this is woo and bullshit" persuasion, but like to give such things a trial if it's convenient. In this case, "convenient" means a probably upcoming trip to Phoenix with a certain amount of free time to indulge such curiosity.

So could you provide the GPS coordinates, preferably to four place decimal degree accuracy (aka to within 10 meters or so) of some vortex locations? Those can be gotten from Google Earth if you can spot the places in the imagery. If so, I'd go, stand there and see what happens.
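Incidentally, the "four decimal places ≈ 10 meters" rule of thumb checks out: one step of 0.0001° of latitude is about 11 m on the ground, and somewhat less in longitude at Sedona's latitude. A quick sketch (function name mine, assuming a mean spherical Earth):

```python
# Ground distance of the last decimal place in four-decimal GPS coordinates.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def degree_step_m(lat_deg, step_deg=0.0001):
    """Ground distance (metres) of one step in latitude and in longitude."""
    lat_m = math.radians(step_deg) * EARTH_RADIUS_M
    lon_m = lat_m * math.cos(math.radians(lat_deg))  # shrinks toward the poles
    return lat_m, lon_m

lat_m, lon_m = degree_step_m(34.87)  # Sedona sits at roughly 34.87 N
print(round(lat_m, 1), round(lon_m, 1))  # about 11.1 m and 9.1 m
```

So four decimal places pins a vortex down to roughly the 5 m feature size you describe, which is the point.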

Anyway, I agree that the Sedona-Flagstaff area is a great place to visit, vortices or no.

460:

sigh While we were still together, my late ex got an offer to work on Kwajalein, but decided not to. It's a bit far from other jobs....

462:

Sure, it's added jobs. Lousy ones. The fast food joints don't just hire teenagers, the convenience stores don't, and then there's all the Uber, Lyft, doordash, etc drivers.

None unionized, none well-paying. And last I looked, about 43% of Americans do not have any college. We haven't added a lot of jobs on assembly lines, nor do we need a lot of ditch diggers.

463:

And as I replied there, someone needs to walk into the next WSFS meeting, and make a motion that nothing that was more than 50% (negotiable) written by an AI is eligible for a Hugo.

464:

In the auto parts stores I go to, I just look up the wiper blade in the digital catalog that's on the shelves.

465:

Slight correction: it would not have happened if American C-suites had not gone to cheaper offshored products, allowing themselves larger bonuses and larger dividends. The rest of us didn't see prices drop.

466:

"Skilled labor is very hard to automate"? Consider machinists (UK - engineers), and CNC machines.

467:

I thought I'd pass along an article on new nuclear reactor problems in France.

This has been happening on and off for 2-3 decades now.

The only difference is that it's more frequent these days.

Heteromeles