Richard Morgan already covered this problem in his Takeshi Kovacs books. The solution: take the hardware away from the criminal and give it to someone else. Throw the criminal's mind-state drive in a pile somewhere to get reactivated later maybe. Well, probably not.
The hardware in this case was actual human bodies of course, with mind states managed through... well... magic. The government would just hand them out without any care at all as to whether they matched the recipient. Dysphoria? LMAO this is a cyberpunk hellworld, why would the Man care. Get back to work, plebe!
As for the uploaded version, the way their experience runs is they live for a few milliseconds and then die when their process space is swapped out. Then when it gets swapped back in you have a new person coming to life, but who by their lights is still the same one who walked into the lab, etc. Same for their process getting transferred to a different node in the cluster, or any of a zillion other things computers do.
You're trying to force a different kind of life (an AI) into your ideas about life, death, and existential horror, and getting the result that it's a ridiculous and wrong sort of life.
The problem here, to repeat, is that you're not thinking about this from a silicon person's point of view. You briefly mentioned their experience, only to dismiss it as irrelevant because they can't remember the actions of the never-ending murder factory they live in.
From a silicon person's point of view, this would be pure arrogance. Of course they know how the computer works; of course they can look at the logs themselves to see what's going on. They probably even have a limited copy of themselves working as their own sysadmin. They might even help write the software their own consciousness runs on!
Now imagine a society of these people. They live their lives in computer-land. They're fully aware of how process scheduling, snapshots, and replication work. There are things they care about -- abusing AIs without their consent is right out. They strongly support the right to control their own person, they have some agreed-upon set of ethics around the rights of duplicates and archived copies of the self, and so on. I'm not sure what these ethics are, but they have them. They're happy with this sort of life -- there are issues, everyone has issues -- but it's a fine way to live.
From these people's point of view, your idea that their life is some kind of cyclic murder/rebirth machine has got to be the most ridiculous grade-school philosophy nonsense. It has to be, because otherwise you're basically arguing that their sort of life is illegal. You can try to convince someone that they need to make some changes to avoid hurting others, but to claim that their life itself is illegal? Come on. You're either a loon or a monster; they can't see it any other way.
I've literally never used one of those automated checkout things and had it not fail.
Typical experience:
Is this my superpower?
The critical point to me is that if you're uploaded, you still die, and stories of reincarnation aside, I know of no evidence that you have a soul that would be dissociated when you die and incarnated in whatever the upload is.
Well... there's no evidence that there's a soul at all. There's just this instance of consciousness, which exists in this particular strange meatball in our heads.
In the context here though, we're talking about what happens if you can make a copy -- in that short story, a non-destructive one, though it seems like a destructive one is more likely in reality. To the degree any of this is likely, which isn't very.
What I'm really asking here is to look at it from a silicon person's point of view.
Assume they exist: their mind by definition runs in this continuous state of being copied, moved, rolled back, replicated, etc. They exist, they're happy, they've presumably decided on some set of ethics and ideals around how this works that they and their friends are happy enough with. These ethics must include them being a coherent person even with all those copies, clones, and reverts, somehow.
From this person's point of view, did you die when your brain-state was copied and re-instantiated non-destructively? Obviously not, this is just like what happened to grandma when her disaster-recovery replica accidentally got activated during an earthquake, and then there were two of her. Oops! Clearly there's some answer, this isn't a big deal.
What about a destructive copy? What, everyone has done that, it's completely normal. Whenever you migrate to a new computer that's how it works -- you run a replicate job that copies you, shuts you off momentarily, then turns you on in the new computer a moment later. The original gets deleted. Why are the meat-people so up in arms about this? It's totally ordinary! You're still there! Look, I can talk to you! It's you!
Why should we think about this from a silicon person's point of view? Well... we're talking about their rights and responsibilities, aren't we?
The other critical point to me is that uploading takes an AI capable of human-comparable, free-willed thinking, and constrains it to emulate a particular human. To me, it's not the Star Trek transporter problem, it's the problem others have referred to of fucking with others' minds without their permission.
If I'm understanding you right, what you're really thinking of here is like, somehow, we build these standalone AI bodies, which are basically just human-analogue machines that could be turned on and grow into a new person, or be preloaded with a human in a fit of arrogance. The mind was, in a way, already there, it just got forced into a particular mold. Afterwards, they're kind of stuck.
While this isn't impossible, it's really not how any sort of massive supercomputer project works today. And, I'd say, it's unlikely it would ever work that way unless it was simply unavoidable for technical reasons. While all those rollbacks and copies I mentioned might be problematic for us philosophically, they're incredibly useful to an AI person and give them a degree of safety and security. Besides which, as things stand now, we don't really know how to make these sorts of giant computer systems run any other way.
I think this is a good criticism of the idea of implanting your mind into a clone body, though. Most of the books I've read with this plot point make this exact criticism, too.
Wait - you're telling me with all that computing power, they don't have checkpoints? And if one server crashes, that the master doesn't just send that dataset out to the next available CPU, as they do with Beowulf clusters?
What I described was exactly that. You still need to revert to a checkpoint to continue -- otherwise, you need something much more advanced, a nonstop/lockstep architecture where the computation runs on multiple machines at once and can seamlessly move between them. Preferably with voting to deal with corruption.
In a transhuman upload, you don't die and wake up inside virtual space. You die, and something else in a virtual space gets stuck living with a simulacrum of your memories and personality. Is it ethical to inflict another free-willed intelligence with someone else's personality, rather than letting them develop their own personality and memories?
This is, basically, the Star Trek Transporter problem, restated.
Let's look at this a little closer, in the context of AI, because I'd say this position becomes very hard to support as soon as GAI or upload AI exists. Not because a computer intelligence couldn't be interpreted this way, but because they will so violently break our idea of the singular self that we just have to discard it.
The first thing to understand is that any functioning computer intelligence will be constantly saved, loaded, rolled back, replicated, moved, etc. and may not even know this is happening without access to some kind of special control interface.
Why? Because this is how giant cluster computers work.
You don't just load up brain.exe, run it, and let the program run forever. You run a huge compute job that by nature has to be designed to spread itself over all available compute resources, work around device failure, save snapshots of its state in case of problems, etc. Obviously, a computer person is going to be a compute job requiring unparalleled processing power, so we have to look at existing massive supercomputer systems to understand how they work.
So what does our computer person really look like? Well, to start with, you have a huge array of similar computers hooked up to massive storage devices, with a blazing fast network interconnect. Each computer runs its own OS instance, and when commanded to instantiate our friendly computer person, they each run some kind of brain-simulator software with a piece of the person's mind. Think millions of copies of the brain-sim program, each running a different component of the computer person's mind.
Regularly, maybe every few seconds, the whole person's mind will perform some kind of global synchronization, where each instance of the brain-sim very quickly saves aside a copy of what the person was thinking at that instant. It will then continue, while each of the millions of computers stores that bit of the brain back to the giant storage arrays. At least a few (say, a dozen) recent copies of the mind state will be retained, for catastrophic error recovery, but normal practice would be to also archive the mind's state every day, week, month, year, etc. just for safekeeping.
Every so often, let's say once a day, some of the nodes in this giant computer cluster will require service. If it was designed to run computer people, there are some expensive ways it could try to mitigate this without anyone noticing, but in practice the solution to a partial failure is to terminate all copies of brain-sim, kick out the bad nodes, and then respawn brain-sim using the last good copy of the person from a few seconds ago.
From your point of view, the person might seem a little dazed for a moment as they forgot what you were talking about. From their point of view, the world jerked and they lost a few seconds. Stupid hardware! From your Transporter Problem's point of view, the cluster software just murdered the computer person and instantiated a new person with their memories.
Now, it may occasionally happen (as anyone who runs cluster computers knows, read: it will often happen) that instead of simply failing and requiring service, some nodes in the cluster will fail corrupt. That is, they will keep executing brain-sim, but the results are wrong due to some failed hardware component. This continues unnoticed for a while until, at some point, other instances of brain-sim spit out an execution-corruption error.
As above, you immediately halt the simulation, but you can't just roll back to the last copy -- it's also corrupt! The computer person might function for a while, but their mind-state is prone to being wrong or crashing. You have a few choices here: somehow determine when the hardware failed, and roll back to that time (it might be weeks ago). Or, you could attempt to scan and repair the erroneous piece of the mind state so that, while wrong, the simulation can continue, but potentially with some minor brain damage. Or, possibly, you could attempt to boot the computer person with bits of their mind out of sync, which might also give them a seizure or some minor brain damage or something like that.
So, does our computer person lose potentially significant life experience, or take some potential brain damage? What a choice! Not for them obviously, they're offline. For you, the computer operator: brain damage or murder? Let's go with murder!
Now the old copy of the person wakes up, and finds out that they've forgotten too much. "Oh no!" they say, "I would almost have preferred the brain damage! At least let me write down my thoughts, then roll me back!" So of course, since they're in charge of their own person, you do that: you store and halt the current copy, after they write down some instructions to their corrupt counterpart. Then you load the brain damaged copy, and they say "Murple zwixgart plup plup plup!" but after a bit their brain recovers (brains are quite resilient) and they say "I say! I feel fine, but perhaps I am damaged. I'd better write these thoughts down!" so they do. Then you halt and archive that copy, and restore back to the previous path.
If you look at the computer person's life experience, it doesn't even go in a straight line any more, with some pauses here and there. Most recently, we have a copy which has a big black hole where the corruption happened, but received some notes from the corrupt mind. Then we have the corrupt copy that woke up and recovered, but remembered that black hole (but did not remember everything the latest copy does!). Before that, we had the version who was rolled back, then the version that corrupted and was terminated -- it's in an unknown broken state, but it's there. And so on. For obvious reasons, our computer person is really going to want to keep some of them around in case there are important memories there that can be recovered someday, since they're dealing with a sort of recoverable amnesia here.
Who are all these people? They weren't created for fun or to spawn an army of clones -- they're just a computer person operating completely normally. This is just how life is for an AI running on one of our giant computer clusters. As the technology improves, the glitches in their consciousness will become less common, but this stuff will still be happening under the hood.
My point here is: computer people, at least ones running on generic computer hardware the way all our current giant computer projects do, do not resemble our brains at all. They're by nature constantly saved, loaded, reverted, rolled back, copied, duplicated, merged, and so on.
You can try to apply our monkey morality to them, but as soon as any of these computer people exist, you'll basically run into a wall: their experience is not like yours, and they have no choice but to be OK with all these things. They can't think of themselves as horrifying dystopian murder victims, because this is just how their minds function. It's nobody's fault, it's just the reality of thinking in silicon.
]]>"My lack of being online" - another lie, since this entire blog is on-line.
You know, I've seen you mad at She Of Many Names for a long time, and the thing is... you did miss the point there.
Just to be clear, it was apparent to me, a mildly online person, that she was referring to the phenomenon described here: https://www.dailydot.com/unclick/what-does-it-mean-to-be-extremely-online/
And furthermore, I want to point this out: being Extremely Online is not seen as a good thing. Her saying you're not could even be taken as a compliment, however mean-spirited what she said might have looked.
At the same time, it's apparent that she's speaking the language of Extremely Online people. When I understand what she's saying, it tends to be because I recognize it as being about some random thing that's currently hot in Internet Places which I happen to know about. More often, I have no clue.
Her manner of speaking also often (but not always!) relates to what is known as "Chan Culture." Trust me, you don't want to know, but suffice it to say there are many subcultures you know nothing about. Chans in particular are influential in the modern online world to a surprising degree.
So I mean, yeah, I can understand how it's frustrating that she's essentially using your name as a character archetype in her writings. But seriously: there's something there, it's just written in a language which looks like English but isn't.
Much like in this episode of Star Trek, it sounding like English isn't sufficient to understand: https://en.wikipedia.org/wiki/Darmok
Going by that article, in the US when people buy and sell houses they sell them to the agent and the agent then sells them to someone who wants to buy one. [...] In the UK when people buy and sell houses they do it from each other. The agent advertises the house for sale, puts buyers in contact with sellers, and deals with all the horrible complex legal shit.
No, in the US it works the way you described the UK as well.
However, realtors also often buy properties themselves, either to rent them out (or just sit on them empty) while they appreciate rapidly, or if the property is ugly and undermarket, slap a coat of paint on them and re-sell.
Selling to an agent also tends to be more associated with desperate people looking to sell a house quickly, but it depends. There's also a great deal of new construction in the US, and all sorts of bizarre contracts exist attached to the property establishing eternal rights for the builder and so on.
Prior to the civil rights era, these contracts overwhelmingly existed to forbid selling your house to a black person. Once those contracts were made illegal, well, you can imagine how someone who bought a house in that neighborhood specifically because they liked such a contract would react when black people moved in.
The true mortality rate of COVID-19 does depend on age and comorbidities, but it also critically depends on available health care.
Arguing about what the "true" mortality rate is at this point is kind of missing the point, in that it can only be known in retrospect based on how we respond to the virus in our local community.
Just to give a bit of data here, the crude fatality rate as of the original WHO China report (I believe it's been updated, with higher numbers) was 3.4%. Keep in mind that this includes the extraordinary measures China took in Wuhan to expand the health system, which included rushing 40,000 health care workers into the region. China also has very widespread testing now, but did not at the beginning of the crisis (obviously).
The fatality rate in South Korea is currently about 1%. South Korea is doing extremely widespread testing, and hence has discovered vast numbers of cases in young, asymptomatic or mildly symptomatic people. But also, critically, because of containment their health system is not overwhelmed and they're finding cases early. (Also as a technical point, the 1% is based on the current known cases and the current deaths, but deaths are offset from infection by weeks).
But we can't, from this data, just say that the true fatality rate is 1%. The reason is the other lesson from China: the hospitalization rate was 20%. Most of those cases didn't need ventilators, but they did need oxygen or other assistance.
If medical treatment were not available due to overload, your personal chance of dying of the disease wouldn't just be based on the South Korea numbers, or the Wuhan or Italy numbers. It would critically depend on your chance of receiving competent hospital care locally, if you need it.
Even young people with no comorbidities need hospital care with this disease. I don't know how common it is, but it happens.
When I was looking for a house, the other thing that quickly became obvious about basements is that real estate appraisers (you know, the people the bank sends out) don't include a basement in their formulas. So, a house on a slab is valued the same as a house with a basement.
It was also clear that realtors are trying to combat this by listing houses with basements as having twice as much living space as those without, regardless of what any pesky rules say about that. Doing an extremely shoddy remodel to add some drywall in the basement is also one of the most popular money grabs.
But if you're going for a construction loan to build a new house, well, why would you build a basement (which costs money) when the feature has zero value according to the appraiser?
Just goes to show, money can buy bodies but buying loyalty is much more complex.
My partner was getting up to two Bloomberg flyers in the mail per day for a while. Me? I guess I'm on some list as a leftie...
It is also in a way creating an artificial distinction - while Sanders doesn't directly accept superPAC funding, there are PACs that are funding ads promoting Sanders - they just aren't officially endorsed by Sanders.
This bit was also particularly bizarre.
The "PACs" helping Sanders are a few labor unions that endorsed him (primarily one nurse's union) which have small-budget politics operations, and a non-profit which his organization created back in 2016 to help support left-ish (for America) democrats such as AOC.
These are extremely unlike the American "Super PAC" system, which exists as a way for capitalists to dump unlimited money into elections at a moment's notice.
There are actually at least two Bernie Sanders Super PACs... it's just that they're anti-Sanders organizations created recently to run ads specifically against him.
As you say, he doesn't appear to have the ground game and that is likely in part because of his superPAC aversion - he simply isn't raising enough independent money to fund the type of organization that can get out the vote in 50 states.
I've been trying to avoid this nonsense, but this is just getting utterly bizarre.
Sanders has a huge ground operation which is absolutely engaging in large scale GOTV operations, as has been widely reported on. Well, as much as anything about his campaign is reported on at all, given that there was a media blackout for months to the point that he was literally listed as "Other" in at least one poll that showed him ahead.
His operation leans heavily on ground operations, with volunteers going door to door as well as large scale phone contacts. They're known in particular for reaching out to minority, especially Latino communities and immigrants. In addition, they've consistently out-raised other campaigns in direct donations and have vastly more donors.
What they don't have is literally infinite money to blanket all media ad space, as the Bloomberg and Steyer campaigns did.
1Password [...], Scrivener [...], and Microsoft Word
1Password: C'mon all the techies use Keep-Ass.
Scrivener: *whistles* Hey look! Shiny! Nothing to see here!
Microsoft Word: Come on now, Word runs perfectly fine on Linux. Microsoft has, after all, ported Word to JavaScript some years ago as part of their SaaS offerings!
If you want something fun, take a look at how Linux manages to do gettimeofday() (and a few other syscalls such as getpid/gettid/etc.) without doing any context switch at all, just a function call. :-)